"accessible" math, grouping, & IQ
from a friend:
Education Next has made the Jacob Vigdor article (released online in October 2012) the lead story in the current Winter 2013 issue.
He argues that the achievement gap and generally dwindling math performance of US students has been addressed by making the math curriculum "more accessible" (i.e., it has been dumbed down). He
then argues that it need not be dumbed down if the curriculum were differentiated between low and high performing students.
In fact, this is pretty much how it was in the 50's and 60's. Students did not need the 3 or 4 years of math in high school to get admitted into colleges. What he leaves out, however, is the
quality of math education in the lower grades and how this has affected the number of students who might otherwise be high performing students.
There’s no disagreement that some kids are smarter than others. Most people know that you can’t just set a standard (like algebra in 8th grade) and do nothing else. But Vigdor overlooks that issue and then claims that the failed initiative defines some IQ/algebra correlation. There are many other variables to consider–which he doesn’t.
The “Math Wars” are about curriculum and teaching methods, but this article skips over that analysis. Most schools separate kids starting in 7th grade. In affluent areas, since “enough” students
get onto the top math track in high school, (often due to tutors, learning centers, or help from parents), educators will not look for any fundamental issues in K-6. They only assume that it’s a
relative problem.
Why not interview parents to see what is done (or not) at home and try to find out how the best students got there? There may be an IQ connection, but it’s not that simple. There are things one can do to separate the variables. But too many authors of the recent spate of articles about math, algebra and its need, either can’t or won’t.
In his report, he pooh poohs the idea of introducing Singapore Math into classrooms, citing the usual cultural differences argument which is specious. (Teachers in Singapore have better math
background; students go to school all year round, so there’s no forgetting concepts during the summer; the culture promotes education and hard work, etc). He neglects the fact that Singapore’s
texts present the material clearly and succinctly and that there have been successes in schools in the US that have used it.
I remember one day back in middle school, when C. had done well on one of his death-march-to-algebra math tests, we were taking a walk & discussing his triumph. At some point we got to talking about
where he would now rank in Singapore terms. We figured probably on par with Singapore kids who have developmental disabilities.
I'm (half) serious.
Remember the Singapore Math pilot project in New Milford, Connecticut?
The SPED kids were ahead of the general ed kids
9 comments:
More anecdata with n=1, but Saxon math (standard math around here in many elementary schools) made my math-y kiddo feel like she had forgotten how to do math.
Singapore Math (at home), on the other hand, really challenged her and helped her get her confidence back.
There are tons of other reasons why she's good at math and was extremely subject accelerated, but I do think that it all started with us playing with math manipulatives and then doing Singapore
Primary Math.
I believe it!
It's hard to believe that we keep struggling to get past the same simple arguments and strawmen, even with people in power. We keep hearing the same reasons and justifications over and over. We
hear the problems of traditional "rote" math even though it's been gone from K-6 for at least 20 years and rigorous, traditional high school math has won the battle for the most able students.
K-8 educators talk about critical thinking and problem solving, but they never examine their rigor and expectations or whether it works or not.
This makes me think of top-down versus bottom up analysis. There are many problems in education and some of them interact. If you approach the problem from the top, you are bound to think that
there are only one or two problems. It's also easy to filter the problems through your own philosophy. Educators see rote learning and assume that the solution is to drive skills and mastery from
the top using real world problems. If this doesn't fix the problem, then something else must be going on.
But what is the error? What number or numbers cause people to think that there is a problem? Is it NAEP, PISA, or TIMSS? Where does this number come from? What is the formula and what are the
assumptions? What is the test and what, exactly are the questions that are missed? Who are these kids and what, exactly, goes on in the classroom? That is a bottom up analysis.
I tried to do that once with our state math test and there was no way that I could trace the formula they used. On top of that, the test questions used real world, thinking problems. This
required someone to figure out how to separate skill or number problems from problem solving problems. I was on a teacher/parent analysis committee once where data came back saying that our score
in problem solving went down. What was the solution? More problem solving even though they had no idea what that meant.
It's tough when you believe that there is no linkage between skills and understanding. If you believe that success in math is driven by some magical (Professor Hill) Think System, then you might
as well give up on designing tests that give you corrective feedback information.
However, if you work backwards from key "errors", even though they might be anecdotes of n=1, your analysis will be on clear grounds, AND, you will probably find that the problem/solution applies to a very large class of students. You don't fix education with a top down analysis. You fix it by finding the anecdotes at the bottom and fixing them one-by-one.
The SPED results don't surprise me one bit. Back in the day when I taught a class for "Learning Disabled" students (Grades 3-6) I had the same experience.
Most of the students were severely reading disabled (that is, they were either complete non-readers or they were several years behind their age expectations). I had just discovered Engelmann's
Direct Instruction programs and was using Corrective Reading. My students -- who were not all that "disabled" clearly -- made spectacular gains. They went up from 2-6 YEARS on norm-referenced
measures in one school year. One boy who couldn't write his name in September was reading Lord of the Rings in June.
Naively, I thought the powers that be would be pleased with the results. Wrong!! The Supt. of special ed had a meltdown at the IEP meetings, screaming and pounding on the table, "We didn't put
them in this program to get ahead!"
My principal took me aside later and said, "Keep it up -- we just won't tell Special Ed how well they are doing after this. We can be selective in what we choose to report."
I think this is called "tall poppy syndrome," but I've kept it in mind ever since. The bureaucracy is resistant to any suggestion that something else -- no matter what it is -- works
significantly better. Even worse if it was "not invented here."
Where the math issues are concerned, I've seen a shift in thinking over the past few years, locally at least. This is not only what I observe from colleagues (in several different schools
recently) but also from math curriculum people and district-level meetings.
The idea that students will develop solid math skills through a discovery approach seems to have slowly died a natural death, quietly but surely. The emphasis now is on teaching them skills,
including math facts and algorithms (yes, the traditional ones) along with activities that promote problem solving and require students to use those math facts and algorithms.
We have time allotted every day for basic math skills -- yes, rote learning, including mental math, number facts, times tables. Kids also engage in problem-solving activities that are quite "teacher directed," at least at first, and remind me of the Morningside Academy "talk aloud problem solving" strategy they use, which has a lot of supporting data re effectiveness.
We have meetings of our school math improvement committee where we look at specifics from the standardized tests -- what items are missed, by what students, what patterns suggest things we should
target instructionally in a focused way? We attempt to develop plans that address identified weaknesses in a comprehensive, schoolwide manner so that we're all on the same page, not just "lone
rangers" in our individual classrooms, doing our own thing.
"Differentiation" remains a problematic area. In K-2, there is so much developmental difference among children that teachers routinely devise lessons and activities that are multi-level or can be
applied at whatever level the child is at. This becomes harder and harder as you move up through the grades, so we try to address this by grouping students, not by "ability" (which we can't
reliably assess), but by instructional level, which we CAN reliably assess. Timetabling can be an obstacle here, because teachers at the same grade level may not be able to schedule math at the same time, which is what enables cross-class instructional groups. However, we can do it to some extent, and this benefits both higher and lower achievers.
One problem to which we have no real solution is that of providing enough practice and drill for the children who need it most. In a low-SES community, you can't outsource to parents and tutors.
We can and do provide extra tutoring before and after school, but then students who take the bus are excluded. Some of the neediest kids simply don't follow through with extra practice at home,
and we can't force them to do it -- besides which, practice without feedback is not very effective.
We have some students who are perfectly capable of learning, but who need much more instructional time and directed practice than we can provide during the school day and I am frankly stumped as
to how to solve THAT problem. It applies to areas other than math of course, but is easy to identify in math.
Technology could be part of the solution here -- I've had students do very well with Timez Attack and other CAI, but most kids don't have internet access at home, and we don't have much
technology in the classrooms either.
Engelmann's early research showed that some children needed many thousands of repetitions to mastery, and these were not necessarily the low-IQ students (my genius-level-IQ student who couldn't learn the alphabet comes to mind), but the challenge remains: how to provide that kind of practice in school?
Now that I've taught grades 1-7 of Singapore Math, I think the real issue with getting it into American schools is that most of the elementary school teachers here in the U.S. don't have a good
enough understanding of math to teach it without a LOT of self-remediation.
My mother-in-law is a retired 3rd grade teacher, and when we were visiting her one time, she offered to teach my DD her 4th grade Singapore Math lesson. She wound up not being able to do it because she didn't have a solid enough understanding of the concept being taught. My MIL is a bright woman, but it did not inspire much confidence to see her struggle with 4th grade math.
Crimson Wife - That's the crux of adopting Singapore's Primary Mathematics...Most teachers lack the deep number sense to teach it.
I had a (very young) second grade teacher confess to me this year that the first year she taught Primary Math at her school, she just taught the way she had learned. And the kids struggled.
The next year, she studied the teacher's manual and tried very hard to do things the "Singapore" way. And her students soared. Good for her. Bummer for her first 24 kids, though. They then went
to a third grade group of teachers who told me this year that they weren't going to teach any long division because fourth grade taught it and it was too hard for their third graders.
(Yes-Allison, my jaw dropped to the ground when they told me this!)
If you know anything about Singapore Math, you know there's no way you can take a skill/concept like long division out of the curriculum - it is practiced continually afterward. And they don't
teach long division in fourth grade, they practice and review. It's part of the sequence that students get a year and 3/4 to practice with single digit divisors before 5th grade introduces
two-digit divisors.
The school has used the U.S. Edition of Primary Mathematics for 6+ years and for the last 3-4 years, fourth grade has been complaining that the kids don't know long division when they come in.
Now we know why.
BTW, this is probably as good a post as any for me to say thanks Catherine and all the KTM readers that took the time to download, and especially review, our app going into Christmas.
We had a huge jump in Amazon downloads Christmas Day (over 500 of the free demo) and kept the momentum up through the week following. By Monday we had moved into the #1 slot for "Hot New Apps" in
the Educational segment and have maintained #1 so far this week.
The hardest part is getting one's app noticed among the hundreds that are out there, so this has been a great help, both to us and, I'm sure, to the kids and parents that are using it.
Again, thanks everyone so much.
At the risk of hijacking the thread, we have a few polishing touches we want to add to Blackboard Math, then will be prioritizing development for our next app.
I'm trying to decide between fractions and story problems. If anyone has an opinion they'd like to share, please email me (afolz@edisongauss.com), since I don't want to hijack the thread.
Then again, Catherine is on vacation right? What's the saying... "forgiveness is easier to get than permission." :-)
3.2.4.2.2 Definition of Similarity
Two objects S (in source code) and C (in compiled code) are defined to be similar if and only if they are both of one of the types listed here (or defined by the implementation) and they both satisfy
all additional requirements of similarity indicated for that type.
The following X3J13 cleanup issues, not part of the specification, apply to this section:
Copyright 1996-2005, LispWorks Ltd. All rights reserved.
Solving Coplanar Power-Limited Orbit Transfer Problem by Primer Vector Approximation Method
International Journal of Aerospace Engineering
Volume 2012 (2012), Article ID 480320, 9 pages
Research Article
Department of Mechanical and Aerospace Engineering, University of Missouri, Columbia, MO 65211, USA
Received 1 November 2011; Revised 12 January 2012; Accepted 16 January 2012
Academic Editor: Alessandro A. Quarta
Copyright © 2012 Weijun Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
The coplanar orbit transfer problem has been an important topic in astrodynamics since the beginning of the space era. Though many approximate solutions for the power-limited orbit transfer problem have been developed, most of them rely on simplifications of the dynamics of the problem. This paper proposes a new approximation method, called the primer vector approximation method, to solve the classic power-limited orbit transfer problem. This method makes no simplification of the dynamics, but instead approximates the optimal primer-vector function. With this method, this paper derives two approximate solutions for the power-limited orbit transfer problem. Numerical tests show the robustness and accuracy of the approximations.
1. Introduction
Most trajectory optimization problems are nonlinear problems with no analytic solutions. However, to the coplanar power-limited orbit transfer in the classical inverse-square gravity field, many
researchers have proposed approximate solutions, for example, Edelbaum [1–4], Zee [5], Marinescu [6], Marec and Vinh [7], Haissig et al. [8], Kechichian [9], and Casalino and Colasurdo [10].
These proposed solutions are built on assumptions about the transfer scenarios. For example, the solution of [6] assumes the transfer is in a close range, the solutions of [2, 4–6] assume the
transfer happens in a long duration, and the solutions of [1, 3, 7–11] assume the admissible control to be within a rather limited set.
An approximate solution to an optimal control problem implies both an approximation of the control policy and an approximation of the dynamics; when we approximate the dynamics, we automatically approximate the optimal control policy as well. The references mentioned above all approximate the dynamics to some degree. One potential problem with approximating the dynamics is that, once the assumptions are violated, the obtained control will be infeasible. From a software point of view, an infeasible control produces unexpected results and might cause the software to crash. Therefore, a more robust approximation method for an optimal control problem should make approximations purely on the optimal control, not the dynamics.
A popular method for generating feasible transfer trajectories is the shape-based method [12–15]. In its essence, this method generates feasible control without any compromise on the dynamics. Though there are different variants of the shape-based method, as far as the author knows, there is no theoretical research addressing the connection between the true optimal solution and the trajectory generated by the shape-based method.
This paper proposes an innovative approximation method, the primer vector approximation method, which combines the advantages of the approximation method and the shape-based method and uses feasible control to approximate the optimal control. The method reformulates the classic transfer problem with a nonlinear transformation from Carter and Humi [16]. The purpose of this reformulation is to move all the nonlinear terms into the coefficients of the control variables. Thus the optimal control vector, called the primer vector, of the new formulation can be analyzed and approximated without affecting the dynamics. To demonstrate this method, this paper derives two approximate solutions, for both low-thrust close-range transfer and low-thrust long-duration transfer.
Four transfer scenarios are designed to test the two approximate solutions numerically. The tests show that the approximations are close to the optimal solutions when the scenarios are within the assumptions of the approximations. More importantly, the solutions remain feasible even when the scenarios are far from those assumptions. Theoretically, both approximations can generate feasible transfer control between any two kinds of orbits, including hyperbolic and parabolic orbits.
The remaining part of this paper is organized in the following way. The second section is about the formulations of the coplanar power-limited orbit transfer problem. The third section introduces the
primer vector approximation method and derives two approximate solutions. The fourth section numerically tests the two solutions. A conclusion is made in the fifth section.
2. Formulations of Coplanar Power-Limited Orbit Transfer Problem
2.1. Polar Coordinate Formulation
The polar coordinate system used in this paper is presented in Figure 1. The meanings of the symbols are given in Nomenclature.
Mathematically, the dynamics of a controlled satellite in an inverse-square gravity field can be described as
Set and . The two second-order ordinary differential equations (ODEs), (1), are equivalent to the following four first-order ODEs:
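The displayed equations were lost in extraction, but the dynamics being described here are the standard thrust-perturbed two-body equations in polar coordinates. As a hedged sketch (assuming normalized gravitational parameter μ = 1, as in the paper's numerical tests; the function and variable names are illustrative, not from the paper), the first-order form can be integrated directly:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # normalized gravitational parameter, as in the numerical tests

def polar_dynamics(t, x, a_r, a_theta):
    """First-order form of the standard thrust-perturbed two-body equations
    in polar coordinates: x = [r, theta, r_dot, theta_dot].
    a_r(t), a_theta(t) are radial and transverse thrust accelerations."""
    r, theta, r_dot, theta_dot = x
    r_ddot = r * theta_dot**2 - MU / r**2 + a_r(t)
    theta_ddot = (-2.0 * r_dot * theta_dot + a_theta(t)) / r
    return [r_dot, theta_dot, r_ddot, theta_ddot]

# Sanity check: coast (zero thrust) from a circular orbit of radius 1.
zero = lambda t: 0.0
sol = solve_ivp(polar_dynamics, (0.0, 2 * np.pi), [1.0, 0.0, 0.0, 1.0],
                args=(zero, zero), rtol=1e-10, atol=1e-12)
```

With zero thrust and circular initial conditions, the radius should stay constant over one period, which is a quick way to validate the sign conventions before adding control.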
Set . If a transfer starts from time to time , the objective function of the coplanar power-limited orbit transfer problem is The initial orbit at is defined by a vector and the final orbit at is defined by a vector . A free-time rendezvous problem is to find the optimal control function , which minimizes the objective function, (3), and transfers a spacecraft from the original position at time to the final position at the optimal arrival time . This problem is the simplest setting in which to introduce the primer vector approximation method. A more common power-limited transfer problem with unbounded thrust is the fixed-time rendezvous problem, the solution to which can be found by adding a time constraint to the approximate solution of the free-time rendezvous problem. We can adopt the method in Novak’s paper [15] to use a solution to the free-time rendezvous problem as a basis for the solution to the related fixed-time rendezvous problem. This paper, however, focuses only on the
free-time rendezvous problem.
To numerically solve the free-time rendezvous problem, we can use the indirect shooting method, which first transforms the problem into a two-point boundary value problem (TPBVP) and then solves the
TPBVP with the shooting technique. To setup the TPBVP, we write down the augmented Hamiltonian of the optimal problem as follows:
The adjoint ODEs of the problem are
The optimal control is , where and is called primer vector. A common shooting method will shoot the optimal arrival time and the values of the four variables , , , and until the shooting functions
and transversality condition are satisfied. At each iteration of the shooting process, there are eight ODEs, (2) and (5), to solve. The shooting process will be time-consuming if the initial guess is
poor or the number of the transfer revolutions is large.
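The indirect shooting structure described above (integrate state plus costate, and adjust the unknown initial costates until the terminal boundary conditions hold) can be illustrated on a toy power-limited problem whose answer is known analytically: a double integrator with cost ½∫u² dt, transferred from rest at 0 to rest at 1 in unit time. This is a hedged stand-in, not the paper's eight-ODE orbit problem; the orbit case replaces these four ODEs with Equations (2) and (5):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def augmented_dynamics(t, z):
    """State + costate ODEs for min 1/2*integral(u^2) on x_ddot = u.
    z = [x, v, lam_x, lam_v]; Pontryagin gives u = -lam_v."""
    x, v, lam_x, lam_v = z
    u = -lam_v                   # optimality condition dH/du = u + lam_v = 0
    return [v, u, 0.0, -lam_x]   # lam_x_dot = 0, lam_v_dot = -lam_x

def shooting_residual(lam0):
    """Shoot on the two unknown initial costates; residual is the
    terminal boundary error, just as in the orbit-transfer TPBVP."""
    z0 = [0.0, 0.0, lam0[0], lam0[1]]
    out = solve_ivp(augmented_dynamics, (0.0, 1.0), z0,
                    rtol=1e-10, atol=1e-12)
    x1, v1 = out.y[0, -1], out.y[1, -1]
    return [x1 - 1.0, v1 - 0.0]  # target: x(1) = 1, v(1) = 0

lam_opt = fsolve(shooting_residual, [1.0, 1.0])
```

For this toy problem the analytic optimum is u(t) = 6 − 12t, i.e. initial costates (λ_x, λ_v) = (−12, −6), so the shooting result can be checked directly; in the orbit problem no such closed form exists, which is why a good initial guess matters.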
In this paper, the indirect shooting method with the polar coordinate formulation will be used to generate optimal solutions for the four tests in the fourth section.
2.2. A New Formulation
The polar formulation of the power-limited orbit transfer problem has nonlinear terms in the dynamic equations and adjoint equations. A new formulation of the power-limited orbit transfer problem
will be derived here with a nonlinear transformation from Carter and Humi [16]. This transformation will transfer the state variables with respect to time to new state variables with respect to polar
angle .
Use and to represent the first and second derivative operations with respect to the polar angle . Set . Thus and . The new state variables are defined as , , and . The mapping from the new
variables back to the variables in the polar coordinate is
From the chain rule, we have
Therefore, with (7) and (1), the dynamic equations of the new variables are
For further simplification, write (8) in matrix form, set and define
Thus, state space representation (SSR) of the ODEs in (8) is
The new boundaries of a transfer in the new dynamic equations will be defined by the initial polar angle , the final polar angle and two constant vectors of the new states
Under the new state vector and new control vector , the objective function, (3), becomes
The objective function, (12), dynamic equations, (10) and the boundary conditions, (11), compose the new formulation of the coplanar power-limited orbit transfer problem.
Mathematically, a coplanar power-limited orbit transfer problem can be described as
To solve this problem, set to be the new augmented Hamiltonian and to be the new adjoint vector corresponding to the new state variable
Apply the theory of optimal control and the optimal control of the new formulation becomes , in which is the primer vector of the new formulation and
The mapping from the new primer vector to the original primer vector is
Then the adjoint equations are
In conclusion, the transfer problem given by (13) is an alternative formulation of the coplanar power-limited orbit transfer problem. The optimal solution for this new formulation needs to satisfy a
TPBVP defined by the dynamic equation, (10), adjoint equation, (17), the boundary conditions, (11).
2.3. Properties of the New Formulation
The new formulation has unique properties that facilitate analysis. Firstly, the system can be solved explicitly when there is no control () on the state equation, (10). The solutions are
Secondly, the nonlinear term in the adjoint vector equation (17) has the following interesting property.
Theorem 1. For an optimal trajectory, the solution for the adjoint vector has the following form: where is the initial polar angle of the trajectory, is a 3 by 3 unit matrix, and is a diagonal matrix
Proof. Since (17) is a linear nonhomogeneous equation with constant coefficients, its solution is the sum of the general solution for the related homogeneous equation and the particular integral. The
solution can be written down as Set and (use index notation for ) With (20), (21), and (22), we obtain (19) of Theorem 1.
Up to this point, no assumption has been made about the new formulation of the transfer problem. It is, however, easy to see that, if the thrust level is very small, will become very small.
Therefore, for a low-thrust transfer, the adjoint vector can be approximated by setting .
3. Primer Vector Approximation Method
3.1. Equivalent Optimal Control Problem
The primer vector approximation method aims to find an approximation for the optimal control (primer vector function), (15), without affecting the accuracy of the state dynamics. Thus, it is
important to ensure that the approximate control will lead to a precise integration of the state equation, (10). This is the core of the primer vector approximation method.
To introduce the strategy for primer vector approximation, first define a family of simpler optimal control problems where is a set of matrix functions and all of its members are continuous while .
The following theorem gives the relationship between the family and the optimal control problem, defined in (13).
Theorem 2. Given a transfer problem , defined by (13), among the family of optimal control problem , defined by (23), there is at least one member problem with that has the same solution, that is, .
Proof. Suppose that the optimal solution for the given problem , defined by (13), is known and set . With Theorem 1, the optimal primer vector function can be written as an explicit function of Set
and plug it into the state equation of (13). We obtain the following relationships for an optimal solution to problem : Meanwhile, any member problem in can be solved by finding the solution for the
following TPBVP: This solution is Select a problem with Put (28) into (27). After simplification, we obtain Therefore, the optimal control and optimal trajectory of problems and are the same, that
If and a transfer problem has the same solution, we call the equivalent optimal control problem of the original problem in (13).
Theorem 3. If is an equivalent optimal control problem of a transfer problem , defined by (13), then the optimal control of another member problem is a feasible control for the transfer problem .
Proof. From the general solution equation (27), we know that no matter what functional form of we pick, the obtained control will always generate a trajectory that satisfies the two end point
boundaries. Thus, is a feasible control for the power-limited transfer problem.
Theorem 2 gives us a new way to solve any optimal power-limited transfer problem . That is, we first find its equivalent problem in , and then the optimal solution is given by , , and (27). In fact,
from the proof of Theorem 2, we can construct such a problem with . Though we do not really know until we solve the original problem , we can always “shape” the matrix function based on our
assumptions and knowledge of the transfer scenario. Suppose the “shaped” matrix function is and it corresponds to a problem . Theorem 3 indicates that, even though is not exactly , it gives a
feasible solution for the original problem as long as . Thus, if we can find a proper satisfying , then the solution for , (27), is a feasible approximate solution for , essentially . In other words,
once we find (an approximation of ), the approximate solution to is given by (27) with . Since any corresponds to a unique primer vector function in (27), this process of finding an approximation of
is named as primer vector approximation (PVA) in this paper. The following section uses this PVA to derive two explicit approximate solutions to the coplanar power-limited orbit transfer problem.
3.2. Primer Vector Approximation under the Low-Thrust Assumption
In this section, we assume the thrusting magnitude is very low (i.e., is very small), such that , becomes a symmetric matrix function and
The thrusting magnitude is usually very low in two kinds of transfers. One is the close range transfer and the other the long duration multirevolution transfer.
3.2.1. Approximate Solution for Close Range Transfer
Because the transfer happens in the vicinity of the initial orbit, a reasonable approximation of is where represents the initial orbit. Put (33) into (27). We obtain the approximate optimal solution
for a close range transfer.
Though (27) generally requires numerical integration to get , the approximate optimal trajectory and control are still explicit functions of . Moreover, is independent of the target location . Therefore, for all possible close-range transfers from the initial orbit , we only need to compute once over one period of the initial orbit. This property is especially useful when we schedule transfers for a satellite formation around an elliptic reference orbit specified by .
When the initial orbit is circular, the computation of can be done analytically. Using index notation and setting , and , we have With (34), the approximate optimal solution, (27), becomes analytic.
3.2.2. Approximate Solution for Long Duration Multirevolution Transfer
A long duration multirevolution transfer tends to build up its orbital energy and angular momentum monotonically. Thus, it is reasonable to approximate the characteristic matrix function with a
linear matrix function
With (35), the matrix function in (27) can be expanded analytically. With the analytic , the approximate optimal solution given by (27) is analytic. (The complete formula for is too lengthy to show here.)
4. Numerical Test
4.1. Cases of Free-Time Rendezvous Problem
Without loss of generality, we can set the gravitation constant and use normalized distance in the test cases.
In Table 1, four coplanar rendezvous cases are chosen to test the accuracy of the two analytic approximations. Cases A and B test the approximate solutions for close range transfers, with case A
featuring a circular reference while case B featuring a high elliptic reference. Cases C and D test the approximate solutions for long duration transfers. Case C is a circular-to-elliptic transfer,
while case D is a circular-to-circular transfer. We use “Approx. C.R.” to identify the approximate solution for close range transfers, (27) and (33), and use “Approx. L.D.” to identify the approximate solution for long duration transfers, (27) and (35).
4.2. Results and Discussions
The results of cases A and B show that “Approx. C.R.” captures the close-range-transfer primer vector dynamics. When the circular reference is used (case A), the percentage errors of the cost and the time of flight (TOF) are about 0.5% and 0.1%, respectively. As the eccentricity of the reference orbit increases, the primer vector dynamics becomes more complicated. However, even when the eccentricity of the reference orbit is as high as 0.8 (case B), “Approx. C.R.” still captures the primer vector dynamics well. The percentage errors of the cost and the TOF are about 1.5% and 0.16%, respectively.
The results of cases C and D show that “Approx. L.D” captures the long-duration-transfer primer vector dynamics. The percentage error of the cost and the time of flight (TOF) are around 1% and 2.3%,
respectively, for case C, and around 0.26% and 1.4%, respectively, for case D.
It is interesting to see that “Approx. C.R.”, intended for close-range transfers, works very well for long-duration transfers too. It is, however, worth pointing out that, since “Approx. C.R.” requires a numerical integration when the initial orbit is elliptic, it is then computationally slower than “Approx. L.D.” When the initial orbit is circular, however, the analytic “Approx. C.R.” actually becomes computationally faster than “Approx. L.D.”
A significant advantage of the primer vector approximation method is that it precisely follows the dynamic equations of motion and gives a feasible solution. Table 2 shows the missed target errors of
case D, which are obtained by simulating Equations (2) with the generated control profiles of the two approximations. The results verify the advantage of the proposed method.
Figures 2 and 3 show the thrust angle and thrust magnitude histories of case B, while Figures 5 and 6 show those of case D. Figures 4 and 7 present the trajectories for case B and case D, respectively. In the figures, the black solid line represents the optimal solution, the black dashed line is from “Approx. L.D.”, and the gray solid line is from “Approx. C.R.” Only the “Approx. C.R.” solution (gray line) and the true optimal solution are shown in Figure 3, because the “Approx. L.D.” solution is too far from the optimum to display effectively.
5. Conclusions
This paper uses a new method to approximate solutions for a nonlinear optimal control problem. The method begins with a transformation that pushes all the nonlinearity into the coefficients of the control terms. It then analyzes the adjoint equations and embeds the process of finding solutions into the process of approximating the primer vector curve of the equivalent linear quadratic optimal control problem. The method is powerful and leads to an extremely simple and accurate explicit approximate solution. In the cases of circular close-range transfers and long-duration transfers, analytic approximate solutions exist.
This paper focuses on presenting the primer vector approximation method itself and includes no numerical comparison of this new method with other approximation methods. However, as far as the author knows, the proposed method is radically different from other approximation methods: it integrates the state dynamics precisely, while other approximation methods more or less approximate the state dynamics. As a result, even when the given transfer scenario largely violates the assumptions of the approximation, the obtained solutions are still feasible. This is a major advantage of using the primer vector approximation method.
: Standard gravitational parameter of the Earth
: Distance from the attraction center to the spacecraft
: Polar angle of the spacecraft in the polar coordinate
: Radial direction component of the velocity of the spacecraft
: Normal direction component of the velocity of the spacecraft
: Thrust angle of the spacecraft
: Adjoint variable corresponding to
: Adjoint variable corresponding to
: Adjoint variable corresponding to
: Adjoint variable corresponding to
: Control acceleration in the radial direction of the spacecraft
: Control acceleration in the normal direction of the spacecraft
: Augmented Hamiltonian of the polar coordinate formulation of trajectory optimization
: Augmented Hamiltonian of the new formulation of trajectory optimization
: Objective function of the power-limited transfer problem
: Objective function of the equivalent problem
: State vector of a space vehicle in the new formulation
: Adjoint vector of the new formulation
: Control variable in the new formulation
: Primer vector in the new formulation
: State vector of the equivalent problem
: Adjoint vector of the equivalent problem
: Control variable of the equivalent problem
: Primer vector of the equivalent problem.
0: Initial value
: Final value.
: Transpose of a matrix
*: Optimal value
: First derivative with respect to the polar angle
: Second derivative with respect to the polar angle.
converting multiple-valued result to a single-valued list
Major Section: PROGRAMMING
Example Forms:
; Returns the list (3 4):
(mv-list 2 (mv 3 4))
; Returns a list containing the three values returned by var-fn-count:
(mv-list 3 (var-fn-count '(cons (binary-+ x y) z) nil))
General form:
(mv-list n term)
Logically, (mv-list n term) is just term; that is, in the logic mv-list simply returns its second argument. However, the evaluation of a call of mv-list on explicit values always results in a single
value, which is a (null-terminated) list. For evaluation, the term n above (the first argument to an mv-list call) must ``essentially'' (see below) be an integer not less than 2, where that integer
is the number of values returned by the evaluation of term (the second argument to that mv-list call).
We say ``essentially'' above because it suffices that the translation of n to a term (see trans) be of the form (quote k), where k is an integer greater than 1. So for example, if term above returns
three values, then n can be the expression 3, or (quote 3), or even (mac 3) if mac is a macro defined by (defmacro mac (x) x). But n cannot be (+ 1 2), because even though that expression evaluates
to 3, nevertheless it translates to (binary-+ '1 '2), not to (quote 3).
Mv-list is the ACL2 analogue of the Common Lisp construct multiple-value-list.
Why ode solution tends to a constant?
August 7th 2011, 04:13 AM #1
Junior Member
Jun 2011
Why ode solution tends to a constant?
Problem 2 Consider an autonomous system of differential equations
1. Let
for all x, z in
2. Let
show that there exists
Would someone help me in the second question? I have no idea about it.
PS: This is Berkeley math problem Fall 81, Problem 2.
August 8th 2011, 01:32 AM #2
Re: Why ode solution tends to a constant?
I don't know about others, but all your pictures are broken. Please fix.
NAG Library Routine Document
1 Purpose
G11CAF returns parameter estimates for the conditional logistic analysis of stratified data, for example, data from case-control studies and survival analyses.
2 Specification
SUBROUTINE G11CAF ( N, M, NS, Z, LDZ, ISZ, IP, IC, ISI, DEV, B, SE, SC, COV, NCA, NCT, TOL, MAXIT, IPRINT, WK, LWK, IFAIL)
INTEGER N, M, NS, LDZ, ISZ(M), IP, IC(N), ISI(N), NCA(NS), NCT(NS), MAXIT, IPRINT, LWK, IFAIL
REAL (KIND=nag_wp) Z(LDZ,M), DEV, B(IP), SE(IP), SC(IP), COV(IP*(IP+1)/2), TOL, WK(LWK)
3 Description
In the analysis of binary data, the logistic model is commonly used. This relates the probability of one of the outcomes, say $y=1$, to $p$ explanatory variates or covariates by
$\mathrm{Prob}\left(y=1\right)=\frac{\exp\left(\alpha +{z}^{\mathrm{T}}\beta \right)}{1+\exp\left(\alpha +{z}^{\mathrm{T}}\beta \right)},$
where $\beta$ is a vector of unknown coefficients for the covariates $z$ and $\alpha$ is a constant term. If the observations come from different strata or groups, $\alpha$ would vary from stratum to stratum. If the observed outcomes are independent then the $y$s follow a Bernoulli distribution, i.e., a binomial distribution with sample size one, and the model can be fitted as a generalized linear model with binomial errors.
In some situations the observations for which $y=1$ may not be independent. For example, in epidemiological research, case-control studies are widely used in which one or more observed cases are matched with one or more controls. The matching is based on fixed characteristics such as age and sex, and is designed to eliminate the effect of such characteristics in order to more accurately determine the effect of other variables. Each case-control group can be considered as a stratum. In this type of study the binomial model is not appropriate, except if the strata are large, and a conditional logistic model is used. This considers the probability of the cases having the observed vectors of covariates given the set of vectors of covariates in the strata. In the situation of one case per stratum, the conditional likelihood for ${n}_{s}$ strata can be written as
$L=\prod_{i=1}^{{n}_{s}}\frac{\exp\left({z}_{i}^{\mathrm{T}}\beta \right)}{\sum_{l\in {S}_{i}}\exp\left({z}_{l}^{\mathrm{T}}\beta \right)},$ (1)
where ${S}_{i}$ is the set of observations in the $i$th stratum, with associated vectors of covariates ${z}_{l}$, $l\in {S}_{i}$, and ${z}_{i}$ is the vector of covariates of the case in the $i$th stratum. In the general case of ${c}_{i}$ cases per stratum the full conditional likelihood is
$L=\prod_{i=1}^{{n}_{s}}\frac{\exp\left({s}_{i}^{\mathrm{T}}\beta \right)}{\sum_{l\in {C}_{i}}\exp\left({s}_{l}^{\mathrm{T}}\beta \right)},$ (2)
where ${s}_{i}$ is the sum of the vectors of covariates for the cases in the $i$th stratum and ${s}_{l}$, $l\in {C}_{i}$, refer to the sums of vectors of covariates for all distinct sets of ${c}_{i}$ observations drawn from the $i$th stratum.
The conditional likelihood can be maximized by a Newton–Raphson procedure. The covariances of the parameter estimates can be estimated from the inverse of the matrix of second derivatives of the logarithm of the conditional likelihood, while the first derivatives provide the score functions, ${U}_{\mathit{j}}\left(\beta \right)$, for $\mathit{j}=1,2,\dots ,p$, which can be used for testing the significance of parameters.
If the strata are not small, ${C}_{i}$ can be large, so to improve the speed of computation the algorithm in Howard (1972), as described by Krailo and Pike (1984), is used.
A second situation in which the above conditional likelihood arises is in fitting Cox's proportional hazard model (see Cox (1972)) in which the strata refer to the risk sets for each failure time and where the failures are cases. When ties are present in the data an approximation is used; for an exact estimate, the data can be expanded to create the risk sets/strata and G11CAF used.
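For intuition, the logarithm of likelihood (1) for the one-case-per-stratum situation can be sketched in a few lines of code. This is only an illustration of the formula, not the NAG implementation; the function name and data layout are my own:

```python
import math

def cond_loglik_one_case(beta, strata):
    """Log of likelihood (1): one case per stratum.

    beta   : list of p coefficients.
    strata : list of (Z, case_idx) pairs, where Z is a list of covariate
             vectors (one per observation l in S_i) and case_idx indexes
             the case within the stratum.
    """
    def dot(z, b):
        return sum(zj * bj for zj, bj in zip(z, b))

    ll = 0.0
    for Z, case_idx in strata:
        eta = [dot(z, beta) for z in Z]           # z_l^T beta for l in S_i
        m = max(eta)                              # stabilise the log-sum-exp
        ll += eta[case_idx] - (m + math.log(sum(math.exp(e - m) for e in eta)))
    return ll
```

With beta = 0 every observation in a stratum is equally likely to be the case, so a stratum of size m contributes log(1/m); maximizing this function over beta (e.g., by Newton–Raphson) is the estimation task the routine carries out far more carefully.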
4 References
Cox D R (1972) Regression models in life tables (with discussion) J. Roy. Statist. Soc. Ser. B 34 187–220
Cox D R and Hinkley D V (1974) Theoretical Statistics Chapman and Hall
Howard S (1972) Remark on the paper by Cox, D R (1972): Regression models and life tables J. R. Statist. Soc. B 34 187–220
Krailo M D and Pike M C (1984) Algorithm AS 196. Conditional multivariate logistic analysis of stratified case-control studies Appl. Statist. 33 95–103
Smith P G, Pike M C, Hill P, Breslow N E and Day N E (1981) Algorithm AS 162. Multivariate conditional logistic analysis of stratum-matched case-control studies Appl. Statist. 30 190–197
5 Parameters
1: N – INTEGERInput
On entry: $n$, the number of observations.
Constraint: ${\mathbf{N}}\ge 2$.
2: M – INTEGERInput
On entry: the number of covariates in array Z.
Constraint: ${\mathbf{M}}\ge 1$.
3: NS – INTEGERInput
On entry: the number of strata, ${n}_{s}$.
Constraint: ${\mathbf{NS}}\ge 1$.
4: Z(LDZ,M) – REAL (KIND=nag_wp) arrayInput
On entry: the $i$th row must contain the covariates which are associated with the $i$th observation.
5: LDZ – INTEGERInput
On entry: the first dimension of the array Z as declared in the (sub)program from which G11CAF is called.
Constraint: ${\mathbf{LDZ}}\ge {\mathbf{N}}$.
6: ISZ(M) – INTEGER arrayInput
On entry: indicates which subset of covariates is to be included in the model.
If ${\mathbf{ISZ}}\left(j\right)\ge 1$, the $j$th covariate is included in the model.
If ${\mathbf{ISZ}}\left(j\right)=0$, the $j$th covariate is excluded from the model and not referenced.
Constraint: ${\mathbf{ISZ}}\left(j\right)\ge 0$ and at least one value must be nonzero.
7: IP – INTEGERInput
On entry: $p$, the number of covariates included in the model as indicated by ISZ.
Constraint: ${\mathbf{IP}}\ge 1$ and ${\mathbf{IP}}=\text{the number of nonzero values of }{\mathbf{ISZ}}$.
8: IC(N) – INTEGER arrayInput
On entry: indicates whether the $i$th observation is a case or a control.
If ${\mathbf{IC}}\left(i\right)=0$, indicates that the $i$th observation is a case.
If ${\mathbf{IC}}\left(i\right)=1$, indicates that the $i$th observation is a control.
Constraint: ${\mathbf{IC}}\left(\mathit{i}\right)=0$ or $1$, for $\mathit{i}=1,2,\dots ,{\mathbf{N}}$.
9: ISI(N) – INTEGER arrayInput
On entry: stratum indicators which also allow data points to be excluded from the analysis.
If ${\mathbf{ISI}}\left(i\right)=k$, indicates that the $i$th observation is from the $k$th stratum, where $k=1,2,\dots ,{\mathbf{NS}}$.
If ${\mathbf{ISI}}\left(i\right)=0$, indicates that the $i$th observation is to be omitted from the analysis.
Constraint: $0\le {\mathbf{ISI}}\left(\mathit{i}\right)\le {\mathbf{NS}}$, for $\mathit{i}=1,2,\dots ,{\mathbf{N}}$, and more than IP values of ${\mathbf{ISI}}\left(\mathit{i}\right)$ must be nonzero.
10: DEV – REAL (KIND=nag_wp)Output
On exit: the deviance, that is, $-2×\text{}$, (maximized log marginal likelihood).
11: B(IP) – REAL (KIND=nag_wp) arrayInput/Output
On entry: initial estimates of the covariate coefficient parameters $\beta$. ${\mathbf{B}}\left(j\right)$ must contain the initial estimate of the coefficient of the covariate in Z corresponding to the $j$th nonzero value of ISZ.
Suggested value: in many cases an initial value of zero for ${\mathbf{B}}\left(j\right)$ may be used. For another suggestion see Section 8.
On exit: ${\mathbf{B}}\left(i\right)$ contains the estimate ${\stackrel{^}{\beta }}_{i}$ of the coefficient of the covariate stored in the column of Z corresponding to the $i$th nonzero value in the array ISZ.
12: SE(IP) – REAL (KIND=nag_wp) arrayOutput
On exit: ${\mathbf{SE}}\left(\mathit{j}\right)$ is the asymptotic standard error of the estimate contained in ${\mathbf{B}}\left(\mathit{j}\right)$ and score function in ${\mathbf{SC}}\left(\
mathit{j}\right)$, for $\mathit{j}=1,2,\dots ,{\mathbf{IP}}$.
13: SC(IP) – REAL (KIND=nag_wp) arrayOutput
On exit: ${\mathbf{SC}}\left(j\right)$ is the value of the score function ${U}_{j}\left(\beta \right)$ for the estimate contained in ${\mathbf{B}}\left(j\right)$.
14: COV(${\mathbf{IP}}×\left({\mathbf{IP}}+1\right)/2$) – REAL (KIND=nag_wp) arrayOutput
On exit: the variance-covariance matrix of the parameter estimates in B, stored in packed form by column, i.e., the covariance between the parameter estimates given in ${\mathbf{B}}\left(i\right)$ and ${\mathbf{B}}\left(j\right)$, $j\ge i$, is given in ${\mathbf{COV}}\left(j\left(j-1\right)/2+i\right)$.
15: NCA(NS) – INTEGER arrayOutput
On exit: ${\mathbf{NCA}}\left(\mathit{i}\right)$ contains the number of cases in the $\mathit{i}$th stratum, for $\mathit{i}=1,2,\dots ,{\mathbf{NS}}$.
16: NCT(NS) – INTEGER arrayOutput
On exit: ${\mathbf{NCT}}\left(\mathit{i}\right)$ contains the number of controls in the $\mathit{i}$th stratum, for $\mathit{i}=1,2,\dots ,{\mathbf{NS}}$.
17: TOL – REAL (KIND=nag_wp)Input
On entry: indicates the accuracy required for the estimation. Convergence is assumed when the decrease in deviance is less than ${\mathbf{TOL}}×\left(1.0+\mathrm{CurrentDeviance}\right)$. This
corresponds approximately to an absolute accuracy if the deviance is small and a relative accuracy if the deviance is large.
Constraint: ${\mathbf{TOL}}\ge 10×\mathbit{machine precision}$.
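In other words, iteration stops once the drop in deviance is small relative to one plus the current deviance. A one-line sketch of the test (my own helper, not part of the library):

```python
def converged(prev_dev, cur_dev, tol):
    # TOL test: behaves like an absolute tolerance for small deviances
    # and like a relative tolerance for large ones.
    return (prev_dev - cur_dev) < tol * (1.0 + cur_dev)
```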
18: MAXIT – INTEGERInput
On entry: the maximum number of iterations required for computing the estimates. If MAXIT is set to $0$ then the standard errors, the score functions and the variance-covariance matrix are computed for the input value of B, but B is not updated.
Constraint: ${\mathbf{MAXIT}}\ge 0$.
19: IPRINT – INTEGERInput
On entry: indicates if printing of information on the iterations is required.
${\mathbf{IPRINT}}\le 0$
No printing.
${\mathbf{IPRINT}}\ge 1$
The deviance and the current estimates are printed every IPRINT iterations. When printing occurs the output is directed to the current advisory message unit (see X04ABF).
Suggested value: ${\mathbf{IPRINT}}=0$.
20: WK(LWK) – REAL (KIND=nag_wp) arrayWorkspace
21: LWK – INTEGERInput
On entry: the dimension of the array WK as declared in the (sub)program from which G11CAF is called.
Constraint: ${\mathbf{LWK}}\ge p{n}_{0}+\left({c}_{\mathrm{m}}+1\right)\left(p+1\right)\left(p+2\right)/2+{c}_{\mathrm{m}}$, where ${n}_{0}$ is the number of observations included in the model, i.e., the number of observations for which ${\mathbf{ISI}}\left(i\right)\ne 0$, and ${c}_{\mathrm{m}}$ is the maximum number of observations in any stratum.
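A small helper makes the workspace bound concrete; this is hypothetical code written only to illustrate the formula above:

```python
def min_lwk(p, n0, cm):
    """Smallest LWK meeting LWK >= p*n0 + (cm+1)(p+1)(p+2)/2 + cm.

    p  : number of covariates included in the model (IP)
    n0 : number of observations with ISI(i) != 0
    cm : maximum number of observations in any stratum
    """
    # (p+1)*(p+2) is always even, so integer division here is exact.
    return p * n0 + (cm + 1) * (p + 1) * (p + 2) // 2 + cm
```

For example, p = 2 covariates, n0 = 10 observations and a largest stratum of cm = 5 give a minimum LWK of 61.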
22: IFAIL – INTEGERInput/Output
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$.
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}=0$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
On entry, ${\mathbf{M}}<1$,
or ${\mathbf{N}}<2$,
or ${\mathbf{NS}}<1$,
or ${\mathbf{IP}}<1$,
or ${\mathbf{LDZ}}<{\mathbf{N}}$,
or ${\mathbf{TOL}}<10×\mathbit{machine precision}$,
or ${\mathbf{MAXIT}}<0$.
On entry, ${\mathbf{ISZ}}\left(i\right)<0$, for some $i$,
or the value of IP is incompatible with ISZ,
or ${\mathbf{IC}}\left(i\right)\ne 1$ or $0$,
or ${\mathbf{ISI}}\left(i\right)<0$ or ${\mathbf{ISI}}\left(i\right)>{\mathbf{NS}}$,
or the number of values of ${\mathbf{ISZ}}\left(i\right)>0$ is greater than or equal to ${n}_{0}$, the number of observations excluding any with ${\mathbf{ISI}}\left(i\right)=0$.
The value of LWK is too small.
Overflow has been detected. Try using different starting values.
The matrix of second partial derivatives is singular. Try different starting values or include fewer covariates.
Convergence has not been achieved in MAXIT iterations. The progress towards convergence can be examined by using a nonzero value of IPRINT. Any non-convergence may be due to a linear combination of covariates being monotonic with time.
Full results are returned.
7 Accuracy
The accuracy is specified by TOL.
8 Further Comments
The other models described in Section 3 can be fitted using the generalized linear modelling routines. The case with one case per stratum can be analysed by having a dummy response variable $y$ such that $y=1$ for a case and $y=0$ for a control, and fitting a Poisson generalized linear model with a log link and including a factor with a level for each stratum. These models can be fitted using the generalized linear modelling routines.
G11CAF uses mean centering, which involves subtracting the means from the covariables prior to computation of any statistics. This helps to minimize the effect of outlying observations and
accelerates convergence. In order to reduce the risk of the sums computed by Howard's algorithm becoming too large, the scaling factor described in
Krailo and Pike (1984)
is used.
If the initial estimates are poor then there may be a problem with overflow in calculating $\mathrm{exp}\left({\beta }^{\mathrm{T}}{z}_{i}\right)$ or there may be non-convergence. Reasonable
estimates can often be obtained by fitting an unconditional model.
9 Example
The data was used for illustrative purposes by Smith et al. (1981) and consists of two strata and two covariates. The data is input, the model is fitted and the results are printed.
9.1 Program Text
9.2 Program Data
9.3 Program Results
Joint Binomial problem
April 14th 2007, 07:54 AM #1
Grand Panjandrum
Nov 2005
Joint Binomial problem
Captain, I need your help with a probability problem.
Problem 1
2 random variables are independent and each has a binomial distribution with success probability of 0.3 and 2 trials.
1- find the joint probability distribution.
Here I tried to compute the binomial but I get 0!
2- find the probability that the second is greater than the first. ??
RVs X and Y are independent and identically distributed with distribution B(0.3, 2).
Hence their joint distribution is the product of their individual distributions:
p(X=x, Y=y) = 2!/[(2-x)! x!] 0.3^x (1-0.3)^(2-x) 2!/[(2-y)! y!] 0.3^y (1-0.3)^(2-y)
x=0, 1 or 2 y=0, 1 or 2.
p(Y>X) = p(X=0, Y=1 or 2) + p(X=1, Y=2) = p(X=0,Y=1) + p(X=0,Y=2) + p(X=1,Y=2)
which can be evaluated from the joint distribution given above.
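Those sums are easy to check by brute force. A short script (mine, not from the thread) enumerates the joint distribution and evaluates the probability that the second variable exceeds the first:

```python
from math import comb

p, n = 0.3, 2

def pmf(k):
    # Binomial(n, p) point probability, same as 2!/[(2-k)! k!] 0.3^k 0.7^(2-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Joint distribution of the independent pair (X, Y)
joint = {(x, y): pmf(x) * pmf(y) for x in range(n + 1) for y in range(n + 1)}

p_second_greater = sum(pr for (x, y), pr in joint.items() if y > x)
print(round(p_second_greater, 4))  # 0.2877
```

Since pmf(0) = 0.49, pmf(1) = 0.42 and pmf(2) = 0.09, the answer is 0.49(0.42 + 0.09) + 0.42(0.09) = 0.2877.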
8 Theorem Prover
Disclaimer: The theorem proving component of Twelf is in an even more experimental stage and currently under active development. There are two main restrictions which limit its utility: (1) it only supports reasoning about closed objects, and (2) it cannot apply lemmas automatically.
Nonetheless, it can prove a number of interesting examples automatically which illustrate our approach to meta-theorem proving, described in Schuermann and Pfenning 1998, CADE. These examples include type preservation for Mini-ML, one direction of compiler correctness for different abstract machines, soundness and completeness for logic programming interpreters, and the deduction theorem for Hilbert's formulation of propositional logic. These and other examples can be found in the example directories of the Twelf distribution (see section 13 Examples).
A theorem in Twelf is, properly speaking, a meta-theorem: it expresses a property of objects constructed over a fixed LF signature. Theorems are stated in the meta-logic M2 whose quantifiers range
over LF objects. In the simplest case, we may just be asserting the existence of an LF object of a given type. This only requires direct search for a proof term, using methods inspired by logic
programming. More generally, we may claim and prove forall/exists statements which allow us to express meta-theorems which require structural induction, such as type preservation under evaluation in
a simple functional language (see section 5.6 Sample Program).
There are two forms of declarations related to the proving of theorems and meta-theorems. The first, %theorem, states a theorem as a meta-formula (mform) in the meta-logic M2 defined below. The
second, %prove, gives a resource bound, a theorem, and an induction ordering and asks Twelf to search for a proof.
Note that a well-typed %theorem declaration always succeeds, while the %prove declaration only succeeds if Twelf can find a proof.
dec ::= {id:term} % x:A
| {id} % x
decs ::= dec
| dec decs
mform ::= forall* decs mform % implicit universal
| forall decs mform % universal
| exists decs mform % existential
| true % truth
thdecl ::= id : mform % theorem name a, spec
pdecl ::= nat order callpats % bound, induction order, theorems
decl ::= ...
| %theorem thdecl. % theorem declaration
| %prove pdecl. % prove declaration
The prover only accepts quantifier alternations of the form forall* decs forall decs exists decs true. Note that the implicit quantifiers (which will be suppressed when printing the proof terms) must
all be collected in front.
The syntax and meaning of order and callpats are explained in section 7 Termination, since they are also critical notions in the simpler termination checker.
As a first example, we use the theorem prover to establish a simple theorem in first-order logic (namely that A implies A for any proposition A), using the signature for natural deduction (see
section 3.6 Sample Signature).
trivI : exists {D:{A:o} nd (A imp A)}
%prove 2 {} (trivI D).
The empty termination ordering {} instructs Twelf not to use induction to prove the theorem. The declarations above succeed, and with the default setting of 3 for Twelf.chatter we see
%theorem trivI : ({A:o} nd (A imp A)) -> type.
%prove 2 {} (trivI D).
%mode -{D:{A:o} nd (A imp A)} trivI D.
% ------------------
/trivI/: trivI ([A:o] impi ([D1:nd A] D1)).
% ------------------
The line starting with %theorem shows the way the theorem will be realized as a logic program predicate, the line starting with /trivI/ gives the implementation, which, in this case, consists of just
one line.
The second example is the type preservation theorem for evaluation in the lambda-calculus. This is a continuation of the example in section 5.6 Sample Program in the file `examples/guide/lam.elf'. Type preservation states that if an expression E has type T and E evaluates to V, then V also has type T. This is expressed as the following %theorem declaration.
tps : forall* {E:exp} {V:exp} {T:tp}
forall {D:eval E V} {P:of E T}
exists {Q:of V T}
The proof proceeds by structural induction on D, the evaluation from E to V. Therefore we can search for the proof with the following declaration (where the size bound of 5 on proof term size is
somewhat arbitrary).
%prove 5 D (tps D P Q).
Twelf finds and displays the proof easily. The resulting program is installed in the global signature and can then be used to apply type preservation (see section 8.5 Proof Realizations).
We expect the proof search component of Twelf to undergo major changes in the near future, so we only briefly review the current state.
Proving proceeds using three main kinds of steps:
Using iterative deepening, Twelf searches directly for objects to fill the existential quantifiers, given all the constants in the signature and the universally quantified variables in the
theorem. The number of constructors in the answer substitution for each existential quantifier is bounded by the size which is given as part of the %prove declaration, thus guaranteeing
termination (in principle).
Based on the termination ordering, Twelf appeals to the induction hypothesis on smaller arguments. If there are several ways to use the induction hypothesis, Twelf non-deterministically picks one
which has not yet been used. Since there may be infinitely many different ways to apply the induction hypothesis, the parameter Twelf.Prover.maxRecurse bounds the number of recursion steps in
each case of the proof.
Based on the types of the universally quantified variables, Twelf distinguishes all possible cases by considering all constructors in the signatures. It never splits a variable which appears as
an index in an input argument, and if there are several possibilities it picks the one with fewest resulting cases. Splitting can go on indefinitely, so the parameter Twelf.Prover.maxSplit bounds
the number of times a variable may be split.
The basic proof steps of filling, recursion, and splitting are sequentialized in a simple strategy which never backtracks. First we attempt to fill all existential variables simultaneously. If that
fails we recurse by trying to find new ways to appeal to the induction hypothesis. If this is not possible, we pick a variable to distinguish cases and then prove each subgoal in turn. If none of the
steps are possible we fail.
This behavior can be changed with the parameter Twelf.Prover.strategy which defaults to Twelf.Prover.FRS (which means Filling-Recursion-Splitting). When set to Twelf.Prover.RFS Twelf will first try
recursion, then filling, followed by splitting. This is often faster, but fails in some cases where the default strategy succeeds.
Proofs of meta-theorems are realized as logic programs. Such a logic program is a relational representation of the constructive proof and can be executed to generate witness terms for the
existentials from given instances of the universal quantifiers. As an example, we consider once more type preservation (see section 8.2 Sample Theorems).
After the declarations,
%theorem tps : forall* {E:exp} {V:exp} {T:tp}
               forall {D:eval E V} {P:of E T}
               exists {Q:of V T}
               true.
%prove 5 D (tps D P Q).
Twelf answers
tps ev_lam (tp_lam ([x:exp] [P2:of x T1] P1 x P2))
(tp_lam ([x:exp] [P3:of x T1] P1 x P3)).
tps (ev_app D1 D2 D3) (tp_app P1 P2) P6
<- tps D3 P2 (tp_lam ([x:exp] [P4:of x T2] P3 x P4))
<- tps D2 P1 P5
<- tps D1 (P3 E5 P5) P6.
which is the proof of type preservation expressed as a logic program with two clauses: one for evaluation of a lambda-abstraction, and one for application. Using the %solve declaration (see section
5.2 Solve Declaration) we can, for example, evaluate and type-check the identity applied to itself and then use type preservation to obtain a typing derivation for the resulting value.
e0 = (app (lam [x] x) (lam [y] y)).
%solve p0 : of e0 T.
%solve d0 : eval e0 V.
%solve tps0 : tps d0 p0 Q.
Recall that %solve c : V executes the query V and defines the constant c to abbreviate the resulting proof term.
| {"url":"http://www.cs.cmu.edu/~twelf/guide-1-2/twelf_8.html","timestamp":"2014-04-20T16:30:46Z","content_type":null,"content_length":"11854","record_id":"<urn:uuid:e19f5dfc-e311-4624-948b-ffa28a1313d9>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Binomial proof?
October 15th 2009, 11:20 AM #1
Oct 2009
Binomial proof?
I am confused over how I should even start this problem
Let n be an integer greater than 1. Show that (1+x)^n > 1+nx for all x in the interval (-1, 0).
Guessing here, but if n has to be an integer greater than 1, then the least it can be is 2. And since x can't be less than -1 but is less than 0, we have 0 < 1 + x < 1, and since n is positive, (1 + x) < (1+x)^n
all the time. Now for 1 + nx, we know n has to be greater than 1 and an integer, therefore it always increases the value of x (which is negative), which makes 1 + nx < 1, and we know that (1 + x) < 1;
therefore (1+x)^n must be at least equal if not greater than 1 + nx, therefore (1+x)^n > 1+nx. I know it is a really bad explanation and might be wrong, but it is something.
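For reference, the standard argument (not the poster's approach) is induction on n; this is the strict form of Bernoulli's inequality:

```latex
\begin{align*}
&\text{Base case } (n=2):\quad (1+x)^2 = 1 + 2x + x^2 > 1 + 2x
  \quad\text{since } x^2 > 0 \text{ for } x \neq 0.\\
&\text{Step: assume } (1+x)^n > 1 + nx.\ \text{For } x \in (-1,0)
  \text{ we have } 1 + x > 0, \text{ so multiplying preserves the inequality:}\\
&\quad (1+x)^{n+1} > (1+nx)(1+x) = 1 + (n+1)x + nx^2 > 1 + (n+1)x
  \quad\text{since } nx^2 > 0.
\end{align*}
```

The only place the hypothesis x in (-1, 0) is needed is the positivity of 1 + x, which justifies multiplying both sides, and x not equal to 0, which makes the inequalities strict.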
October 15th 2009, 12:26 PM #2
Oct 2009 | {"url":"http://mathhelpforum.com/algebra/108253-binomial-proof.html","timestamp":"2014-04-20T07:51:17Z","content_type":null,"content_length":"31291","record_id":"<urn:uuid:a3912e89-bf30-4fcd-9f6f-5a8f3cae1574>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: re: predicting y, with other variables
Re: st: re: predicting y, with other variables
From "alessia matano" <alexis.rtd@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: re: predicting y, with other variables
Date Tue, 22 May 2007 16:53:11 +0200
Thank you Kit for your answer,
but it is not exactly what I want to do.
I will try to explain it better then. I did my regression and kept my b. Now
there is an x variable in which I did a variation (I took one of its
observations and modified it, say by a 1% variation). I want then
to know how the predicted values are going to change.
Because I am here I'd like to ask you three other stupid things.
1. With xtabond2, how is it possible to get the R^2?
2. Is it possible that, using difference GMM, I get coefficient
estimates much higher in magnitude than applying, say, OLS?
3. What is the difference between one-step and two-step GMM? Is it
possible with xtabond2 to get the first-stage estimates?
really thanks a lot for your help
2007/5/22, Kit Baum <baum@bc.edu>:
sacrificial cabbage
Alessia said
I hope some of you can help me...
I estimate a regression and I saved the coefficients e(b) in the
corresponding matrix. Now I would like to apply these coefficients
estimates to the same set of x's variables, but for one that I want to
substitute with another and calculate the predicted values (to see
which is the variation in the predicted values using this other
variable). Does anyone of you know how to do this?
I tried creating a matrix with the new x's variables, but my n=2400
and so the matsize set does not allow me to create such a matrix.
Just create additional observations in your x matrix for the
observations for which you want to predict 'out of sample'. When you
run the regression, those observations will be ignored as their value
of y is missing. Then
predict double oos if ~e(sample)
will generate predictions (using the default , xb) for the
observations that are not in the regression sample.
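Alessia's perturbation question also has a direct answer for a linear model: the change in the fitted value is just the coefficient times the change in the regressor. A hedged NumPy sketch with synthetic data (an illustration, not Stata):

```python
import numpy as np

# Fit by least squares, then perturb one regressor value by 1% and
# compare fitted values before and after (synthetic data).
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # the saved coefficients, like e(b)

x_row = X[0].copy()
yhat_before = x_row @ beta
x_row[1] *= 1.01                               # the 1% variation
yhat_after = x_row @ beta
# For a linear model this difference equals beta_1 * (0.01 * X[0, 1]):
print(yhat_after - yhat_before, beta[1] * 0.01 * X[0, 1])  # equal up to float error
```

The same logic applies after any linear estimation in Stata: with the stored coefficients, changing one x shifts the linear prediction by coefficient times change.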
Kit Baum, Boston College Economics and DIW Berlin
An Introduction to Modern Econometrics Using Stata:
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2007-05/msg00766.html","timestamp":"2014-04-17T19:08:09Z","content_type":null,"content_length":"8240","record_id":"<urn:uuid:c9e1698b-b0b9-47a2-9376-029bd5ec50b1>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with a problem from Atkins Molecular Quantum Chemistry
August 3rd 2007, 07:24 AM #1
Aug 2007
Coventry, UK
Hi all,
Show that if $(\Omega f)^* = -\Omega f^*$, then $\langle \Omega \rangle = 0$ for any real function $f$
This is probably disgustingly simple, but it has me stumped
Sorry for the presentation, can anyone show me how to use TeX equations in my post? I am new to the forums.
Hi all,
Show that if $(\Omega f)^* = -\Omega f^*$, then $\langle \Omega \rangle = 0$ for any real function $f$
This is probably disgustingly simple, but it has me stumped
Sorry for the presentation, can anyone show me how to use TeX equations in my post? I am new to the forums.
Could you define your terms? I am assuming the < > are the standard expectation value brackets, but what are $\Omega$ and f?
See there first few posts in this thread for LaTeX.
Could you define your terms? I am assuming the < > are the standard expectation value brackets, but what are $\Omega$ and $f$?
See there first few posts in this thread for LaTeX.
Omega is an operator, (I think it is Hermitian, but I am unsure.) f is any real function, $<\Omega>$ Is the expectation value for the operator Omega.
In words - if I have understood correctly - if the complex conjugate of the result of an (hermitian) operator acting on a function is equal to minus the effect of the same (hermitian) operator
acting on the complex conjugate of the same real function, then the expectation value of the (hermitian) operator is zero for any real function f.
Omega is an operator, (I think it is Hermitian, but I am unsure.) f is any real function, $<\Omega>$ Is the expectation value for the operator Omega.
In words - if I have understood correctly - if the complex conjugate of the result of an (hermitian) operator acting on a function is equal to minus the effect of the same (hermitian) operator
acting on the complex conjugate of the same real function, then the expectation value of the (hermitian) operator is zero for any real function f.
Okay, so the complex conjugate of $\Omega f$ is equal to $-\Omega(f^*)$?
I'm still confused on one point. According to the definitions I know (Quantum Mechanics in general, not specifically Molecular Quantum Chemistry)
$\left < \Omega \right > = \int_V \psi ^* \Omega \psi \ d^3x$
where $\psi (x, y, z)$ is your wavefunction.
Where is f coming into this?
Okay, let me try this and you can correct me if I'm wrong on any point. (This would be easier if we were to use "bra-ket" notation. If you'd like me to rewrite it, just let me know.)
$\Omega$ is a Hermitian operator, so
$\left < \Omega \right > = \int_V \psi ^* \Omega \psi \ d^3x$ is a real number, where $\psi$ is a wavefunction.
So we know that
$\left < \Omega \right >^* = \left < \Omega \right >$
From here on I am taking $\psi = f$ as a real valued function. And we know that $( \Omega f)^* = - \Omega (f ^*) = - \Omega f$. (Given.)
$\left < \Omega \right > ^* = \left ( \int_V f ^* \Omega f \ d^3x \right ) ^* = \left ( \int_V f \Omega f \ d^3x \right ) ^*$<-- Since f is a real valued function
$= \int_V f^* ( \Omega f )^* \ d^3x = \int_V f^* (- \Omega f ) \ d^3x = - \int_V f^* \Omega f \ d^3x = - \left < \Omega \right >$
But $\left < \Omega \right >^* = \left < \Omega \right >$ since the expectation value is real.
$\left < \Omega \right > = - \left < \Omega \right >$
$\left < \Omega \right > = 0$
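A quick numerical sanity check of this result (my own illustration, not from the thread): take $\Omega = i\,d/dx$, which satisfies $(\Omega f)^* = -\Omega f^*$, and a real normalized Gaussian $f$; the expectation value should vanish.

```python
import numpy as np

# Omega = i d/dx on a grid; for real f, <Omega> = ∫ f (i f') dx should be 0.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.pi**0.25        # real, normalized Gaussian
Of = 1j * np.gradient(f, x)                # apply the operator numerically
expectation = np.sum(f * Of) * dx          # f* = f since f is real
print(abs(expectation))                    # ~ 0
```

The integrand $f \cdot i f'$ is purely imaginary and odd for this even $f$, so the integral cancels, in agreement with the algebraic proof.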
Yes, I think it would be easier in bra-ket notation. However, thanks very much, I follow your answer.
I think the main reason I was stumped is that I failed to realise that $f = f^*$ for a real function. Some sort of mental block, I don't quite know.
Thanks again for your help.
No problem. It was odd for me as well. I don't recall if I have ever done this assuming a real wavefunction. (Which may mean that I did the proof in a longer way than necessary. I tend to do
August 4th 2007, 06:06 AM #7 | {"url":"http://mathhelpforum.com/advanced-applied-math/17472-help-problem-atkins-molecular-quantum-chemistry.html","timestamp":"2014-04-17T16:32:59Z","content_type":null,"content_length":"62130","record_id":"<urn:uuid:c23a7885-a8ac-47a5-a2b9-6f19ef5376a4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiple choice text question
September 24th 2013, 08:33 AM #1
Sep 2011
Multiple choice text question
On March 2, a Treasury bill expiring on April 20 had a bid discount of 5.80 and an ask discount of 5.86. What is the best estimate of the risk-free rate as given in the text?
A) 5.86%
B) 5.83%
C) 6.11%
D) 6.14%
E) None of the above
My answer is C.
What would be your answer?
Answer is not provided by the author.
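One way the text could arrive at choice C, shown as a quick computation. The 49-day count (March 2 to April 20) and the bid-ask midpoint convention on a 360-day discount basis with 365-day annualization are my assumptions, not stated in the post:

```python
# Assumed convention: midpoint discount, 360-day discount basis,
# 365-day annualization of the true (compound) yield.
d = (5.80 + 5.86) / 2 / 100          # midpoint discount rate
days = 49                            # March 2 to April 20
price = 100 * (1 - d * days / 360)   # discount pricing of the bill
r = (100 / price) ** (365 / days) - 1
print(round(r * 100, 2))             # 6.11 -> consistent with choice C
```

Under these assumptions the computation supports the poster's answer of C) 6.11%.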
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/business-math/222244-multiple-choice-text-question.html","timestamp":"2014-04-18T03:55:51Z","content_type":null,"content_length":"29009","record_id":"<urn:uuid:70b21fbb-154f-4087-a81d-b78b4c04d7e5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by Kim
Total # Posts: 1,974
Explain why a suspension is considered a heterogeneous mixture?
Trig Proofs
Please solve this proof Tan(pi/4 +x)+ Tan(pi/4+x)= 2Sec2x
Trig Proofs
Thanks so much for the help=)
Trig Proofs
Please Help: please solve this proof: cosx- cosy = -2sin(x+y/2)sin(x-y/2)
-2x7= -3 +t+24 -14= =3+t+24 -11= t+24 -35= t
Can someone please check my work for me. Q: Plot the graph of the equations 2x - 3y = 5 and 4x - 6y = 21 and interpret the result. A: 6(2x - 3y = 5) gives 12x - 18y = 30; 3(4x - 6y = 21) gives 12x - 18y =
63. But 12x - 18y = 30 and 12x - 18y = 63 together give 30 = 63, a contradiction: 0 ≠ 33
would it be correct to say that the equations are not independent therefore they have an infinite solution set.
so can you tell me if this is correct.... 4x - 5y = 14; -12x + 15y = -42. 15(4x - 5y = 14) = 60x - 75y = 210; 5(-12x + 15y = -42) = -60x + 75y = -210. 60x - 75y = 210 + (-60x + 75y = -210) = 0. The equations
are not independent, therefore they have an infinite solution set.
with the first one was i suppose to multiply it by -3? i multiplied the first equation by 15 and the second by -5. does it matter?
what about this one? I'm having a tough time with this one as well... Solve by substitution or elimination method: -2x + 6y = 19; 10x - 30y = -15
Solve by substitution or elimination method: 4x - 5y = 14; -12x + 15y = -42. when i try this solution i end up cancelling out the whole problem. can someone please help me!
A painter needs to cover a triangular region 62 meters by 68 meters by 70 meters. A can of paint covers 70 square meters. How many cans will be needed?
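A worked sketch for the painter problem using Heron's formula, rounding the cans up to a whole can (my own solution, not from the page):

```python
import math

# Heron's formula for the triangular region, then round cans up.
a, b, c = 62, 68, 70
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # square meters
cans = math.ceil(area / 70)                         # 70 m^2 per can
print(round(area, 1), cans)                         # ~1910.0 m^2, 28 cans
```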
math (ruler)
I need help finding the measurement on a ruler, the little bitty marks. Could you pull up a ruler for me with all the marks?
would it be 30
round to the nearest ten dollars, then estimate the answer: $73.88 ≈ ___ dollars; + $29.04 ≈ ___ dollars; Total ≈ ___ dollars
how would you determine a formula for cot(2x) using only cot(x)? Thenhow would you determine a formula for sec(2x) using only sec(x)?
when you are determining a formula for cot(2x) using only tan(x) would the formula be...(1-tan^2(x))/(2tan(x))?
when 2 lines intersect and form an acute angle A and both lines have a positive x-intercepts but the y-intercepts are on opposite sides of the origin. How would you write an equation showing the
relationship between A and the angles that the normals make with the positive x-axis?
If the density of ice is 0.92 g/cm^3, what percent of an ice cube s (or iceberg s) mass is above water.
physical science
An object in the shape of a cube has a mass of 375 grams and a side that measures 5.0 cm. Will this object float in water? Prove your answer by comparing the density of the cube to that of water.
if you have the triangle ABC with the points A(47,100) B(-1273,203) and C(971,-732) how would you find the center point and the radius of the incircle of triangle ABC?
if you have the triangle ABC with the points A(47,100) B(-1273,203) and C(971,-732) how would you find the center and the radius of the incircle of triangle ABC?
the gradual transition from monarchy topolitical democracy during the last half of the second millennium stimulated a corresponding transition from imperial rule by the power of the sword to imperial
rule by the pwoer of money.
how would you factor (5(x+2)^4)+(3x(x+2)^3)-(2x(x+2)^2). I have so far gotten it down to 2(3x^4+25x^3+74x^2+92x+40) but i'm not sure that i am doing this right.
Ella is twice Bella's age, and Stella is three times Bella's age. The difference between Stella's and Bella's ages is 8 yrs. How old is each girl?
So no 1 goes above that? as in 1 ------- 2^2*7^8?
Simplify (2^-1∙7^-4)^-2 Is it 1/2^2*7^8 OR simply just 2^2*7^8
Thanks for your help :)
Is it the same thing? I don't know, but my first choice was A. I still do not understand how (p-4+m p2m)-4 / (p-4)-m = (p-4+3m)-4 / p4m = p^16-12m / p^4m where does the 12 fit in? and how does the
answer end up as A? = p16-16m
Ok but the answer is typed incorectly choice A is p^-16+16 not = p^16-16m
Where did you get 3m from? The problem reads Simplify (p^-4+m p^2m)^-4 ---------------- (p^-4)^-m Theses are the choices A) p-16m + 16 B) p-8m 4 C) p-11m 4 D) p-3m + 16 I say its A or B
Thank you for all your help! :)
What about breaking the fractions a part like this? 5x -2 - 3y -1 = 5 - 3 ----------------- x -1 + y -1 ------- x^2 y --------------------- 1 + 1 ------------ x y = 5y+3x^2 * x^2 y -------- x+y x^2 y
(5y+3x) = 5xy^2 + 3x^2 y
how would you find an equation to a line that is parallel and 3 units away from the line x-5y+10=0?
what are three fresh water resources and what are three ocean water resources?
Environmental Science
What are three fresh water resources and three ocean water resources
Why would it be this because a*a is 2a? I thought at first it is a^2? Can you please explain, thanks!
a*a = a^2? yes? No? Or is it (m + 4)(2a - 2)
Factor (m + 4)(a - 5) + (m + 4)(a + 3). I am pretty sure its this but want to double check! Thanks! (m + 4)(a^2 - 2)
There is no division necessary here? So the final answer is what is @ the bottom? (5x^-1)-3 (-2+1=-1, and -1+1=0)
medical information management and office practice
Explain the difference between qualitative and quantitative medical record analysis.
how would you prove that (cos^2x-sin^2x)(1-cos^2xsin^2x)= (cos^6-sin^6)?
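One way to verify this identity (a sketch of my own, not from the thread) is to factor the difference of cubes:

```latex
\cos^6 x - \sin^6 x
= (\cos^2 x - \sin^2 x)\bigl(\cos^4 x + \cos^2 x \sin^2 x + \sin^4 x\bigr)
```

Since $\cos^4 x + \sin^4 x = (\cos^2 x + \sin^2 x)^2 - 2\cos^2 x\sin^2 x = 1 - 2\cos^2 x\sin^2 x$, the second factor equals $1 - \cos^2 x\sin^2 x$, which is exactly the claimed identity read from right to left.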
math/slope-intercept form
Perform the operation and write the result in standard form. (2-3i)(5i) over 2+3i Please help!! I do not understand this.
Perform the addition or subtraction and write the result in standard form. (8+sqrt -18) - (4+3sqrt2i) I know that you have to subtract 8 from 4 but I do not know what to do after this.
7th grade reading
I know this was already answered,but the child would be possessive if she does not share her toys. It is the synomem for "overprotective"
3rd grade reading/spelling
What does it mean to adjust and monitor reading speed?
from a base elevation of 8500 ft, a mountain peak in colorado rises to a summit elevation of 14595 ft over a horizontal distance of 15842 ft. find the grade of the peak.
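For the grade problem, grade is rise divided by horizontal run (a worked sketch of my own):

```python
rise = 14595 - 8500            # summit minus base elevation (ft)
run = 15842                    # horizontal distance (ft)
grade = rise / run
print(round(grade * 100, 1))   # about 38.5 (percent)
```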
As the distance between an object and the center of the Earth is increased 1)the object's mass decreases and its weight remains the same. 2)both the object's mass and its weight remain the same 3)
both the object's mass and its weight decreases. 4)the object's m...
Suboridiante clauses: What is the subordinate clause in the sentence: May the dreams you dream today be tomorrow's dreams come true.
Marine Science
What happens to the majority of water at the poles and at the equator? My answer was that at the poles the water gets cold and at the equator the water gets hot but my teacher said this was wrong.
Please help!
that president of mix race will save your dumb butt you probably voted for bush didnt you that why america is in the shape it is in now because of people like you get a clue
i dont understand what they mean when they said my i dont understand i got everything except the ast question.
I need help i just dont understand it can someone PLEASE help me!!!!!!!!!
Write a 200-300 word summary that answers the following questions: What information about race and ethnicity in the United States has helped you better understand or relate to specific minority
groups? Have you learned something new about your own cultural history? Trends in...
16. The Roper Organization conducted identical surveys in 1990 and 2000. One question asked women was, "Are most men basically kind, gentle, and thoughtful?" The 1990 survey revealed that, of the
3,000 women surveyed, 2,010 said that they were. In 2000, 1,530 of the ...
Jessica's weight is 540 newtons. According to the Third Law, if the Earth pulls on Jessica with a force of 540 N, then Jessica pulls on the Earth with a force of 540 newtons that acts in the opposite
direction. However, if Jessica stumbles while rollerblading, she will fal...
discrete math
what is the link between matricies and graph theory
Parametric curves and area
1.The curve of x=49-t^2 and y=t^3-1*t makes a loop which lies along the x-axis. What is the total area inside the loop? 2.Find the area of the region enclosed by the parametric equation: x=t^3-7*t
and y=4*t^2.
Calculus-Modeling Equations
An unknown radioactive element decays into non-radioactive substances. In 420 days the radioactivity of a sample decreases by 39 percent. 1.What is the half-life of the element? 2.How long will it
take for a sample of 100 mg to decay to 46 mg?
How many moles of KMnO4 are required to react with the moles of Na2C2O4 in the last problem? Also, if the KMnO4 of the last problem is in 30.30 mL of solution, what is the molarity (M) of the KMnO4?
how many moles of Na2C2O4 are there in 25.00 mL of a .7654 M solution of sodium oxalate?
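A worked sketch for the two titration posts above. The balanced redox equation is my assumption, since the posts do not state it, but the acidic permanganate-oxalate reaction (2 MnO4- : 5 C2O4^2-) is the standard one:

```python
# Assumed equation: 2 MnO4- + 5 C2O4^2- + 16 H+ -> 2 Mn2+ + 10 CO2 + 8 H2O
mol_oxalate = 0.02500 * 0.7654      # 25.00 mL x 0.7654 mol/L
mol_kmno4 = mol_oxalate * 2 / 5     # 2:5 mole ratio from the equation
molarity = mol_kmno4 / 0.03030      # KMnO4 dissolved in 30.30 mL
print(round(molarity, 4))           # ~0.2526 M
```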
thank you. so what did you get as a final answer for this problem then? and what did u do after doing 2.2=2PHN3 and after subsituting the partial pressure?? Thanks
How do you solve this: Consider the decomposition of ammonium chloride at a certain temperature: NH4Cl(s) ⇌ NH3(g) + HCl(g). Calculate the equilibrium constant Kp if the total pressure is 2.2 atm at
that temperature.
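Following the hint in the reply above (the total pressure splits equally between the two product gases, and the pure solid drops out of the expression), a quick check of Kp:

```python
# NH4Cl(s) produces NH3 and HCl in equal amounts, so each partial
# pressure is half of the 2.2 atm total; the pure solid is omitted.
p_nh3 = p_hcl = 2.2 / 2
Kp = p_nh3 * p_hcl
print(round(Kp, 2))   # 1.21 (in atm^2)
```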
how do you do linear measure and precision
4th grade english
Earlier this year, our class ______ a greenhouse. (visit) Would the correct form be has visited
Hello, I am taking an accounting class and the question states "post to the accounts and keep a running balance for each account". The question gives a page number to refer to but it does not provide
any guidance. What does this mean?? Thanks
Determine whether the function is one-to-one. If it is, find its inverse. f(x) = (3x+4)/5
Use a graphing utility to approximate (to two-decimal-place accuracy) any relative minimum or maximum values of the function. y=x^3 - 6x^2 + 15 My answer was (4,12) for min and(0,-12) for max.
5x-6(-5x + 2) = 16
An object has a mass of 120 kg on the moon. What is the force of gravity acting on the object on the moon?
input =9 output = 2 what is the function. input = 27 output = 8 what is the function. input = 42 output = 13 what is the function input = 3 output = 0 what is the function.
input function output 9
Can someone help me with this one? Find all numbers for which the rational expression is undefined. 6/4w+5 Thanks
6th grade_Science/Math
Thank you Francis! I didn't get it. :)
Is this imagery?
Is this a metaphor or imagery? "The ground is wet, and smells of smoke; it feels soft and spongy to his touch."
Is this imagery?
He hooks an arm around his son's neck and is at once caught in a fierce embrace. He smooths the dark head wedged against his shoulder, brushes the hair aside at the back of his neck to touch bare
Thank you for your help; it is width. So I take the 2 and times it by the 27. With that answer, figure out the difference between 2 times 27 and 100, then divide by 2. Is this the way to
figure out the problem?
Can anyone help me on these two problems also? I am getting so confused and tired. The length of a rectangle is fixed at 27 cm. What widths will make the perimeter greater than 100 cm? Also this
problem: Trains A and B are traveling in the same direction on parallel tracks. Tria...
I have a couple of problems that I am stuck on. They are word problems. 1. Soybean meal is 12% protein; cornmeal is 6% protein. How many pounds of each should be mixed together in order to get 280
pounds of mixture that is 11% protein? How many pounds of the cornmeal should be ...
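A worked sketch of the mixture problem as a two-equation linear system (my own solution, not from the page):

```python
import numpy as np

# s + c = 280 (total pounds) and 0.12 s + 0.06 c = 0.11 * 280 (protein).
A = np.array([[1.0, 1.0], [0.12, 0.06]])
b = np.array([280.0, 0.11 * 280])
s, c = np.linalg.solve(A, b)
print(round(s, 2), round(c, 2))   # ~233.33 lb soybean meal, ~46.67 lb cornmeal
```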
Mr. Cline needs 36 ice cubes for the science experiment. Eack ice tray makes 12 ice cubes. If he makes 4 ice trays, does he have enough ice cubes? Can you use an estimate to solve? Explain.
College math
A plane has 7 hrs. to reach a target and come back to base. It flies out to the target at 480 miles per hour and returns on the same route at 640 miles per hour. How many miles from the base is the target?
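For the plane problem, the outbound and return distances are equal, so the two travel times add to 7 hours (a worked sketch of my own):

```python
# Same distance d out and back: d/480 + d/640 = 7 hours.
d = 7 / (1 / 480 + 1 / 640)
print(d)   # ~1920 miles from the base
```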
8th Grade Algebra: Answer Check #2
can u hlep me
8th Grade Algebra: Answer Check #2
One day a store sold 35 sweatshirts. White ones cost $9.95 and yellow cost $12.50. In all, $389.05 worth of sweatshirts were sold. How many of each color were sold? How many white sweatshirt were
sold? How many yellow sweatshirt were sold? I can not figure out how to do this p...
The base of a triangle is 8cm greater than the height. The area is 42cm^2. Find the height and length of the base.
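For the triangle post, the area condition gives a quadratic in the height (a worked sketch of my own):

```python
import math

# h(h + 8)/2 = 42  =>  h^2 + 8h - 84 = 0; take the positive root.
h = (-8 + math.sqrt(8**2 + 4 * 84)) / 2
print(h, h + 8)   # height 6.0 cm, base 14.0 cm
```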
4th grade
what volume of helium would be in a balloon the size of a 2-liter soft drink?
josh that is the right answer by kimberly and I think you go to my school write back josh
math is kinded of hard for me but I try my best in math and please solve my math problem and see if its right sinccer kim and 1 think can you spell sinccer please thank you very much anybody and sue
5*5*5*5*5<40 than you do 5*14<70+4<74 5^2< 7
livly tempo kimberly
what ethical issues might arise by relying more and more on technology in Health Care or Human Services? As you answer, consider ethical issues from a consumer's perspective and from a Health Care or
Human Services worker's perspective.
public health
Review this article: Hutt, J. & Wallis, LA. (2004). Injuries caused by airbags: a review. Trauma, 6, 271-278, and answer the following: How did the researchers use the literature to explain a public health
phenomenon? How was the literature used to show the necessity for the research...
adult education
Review this article: Hutt, J. & Wallis, LA. (2004). Injuries caused by airbags: a review. Trauma, 6, 271-278, and answer: How did the researchers use the literature to explain a public health
phenomenon? How was the literature used to show the necessity for the resea...
let f(x) = (x+3)/(x-11) and let the domain of the function be the set of all integers except 11. Find the function value f(9) =
Write the first five terms of the sequence whose nth term is -7n-4.
Would you please explain further I am lost?
| {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Kim&page=16","timestamp":"2014-04-19T10:44:38Z","content_type":null,"content_length":"27562","record_id":"<urn:uuid:a1953ef1-473d-40b1-8295-1e4ec0aec40e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix inversion lemma with pseudoinverses
The utility of the Matrix Inversion Lemma has been well-exploited for several questions on MO. Thus, with some positive hope, I'd like to field a question of my own.
Suppose we pick $n$ values $x_1,\ldots,x_n$, independently sampled from $N(0,1)$ (mean 0, unit variance gaussian). Then, we form the (rank 3 at best) positive semidefinite matrix: $$A = \alpha ee^T +
[\cos(x_i-x_j)],$$ where $e$ denotes the vector of all ones, and $\alpha > 0$ is a fixed scalar.
For $n \ge 3$, simple experiments lead one to conjecture that: $$e^TA^\dagger e = \alpha^{-1},$$ where $A^\dagger$ is the Moore-Penrose pseudoinverse of $A$ (obtained in Matlab using the 'pinv' function).
This should be fairly easy to prove with the right tools, such as a Matrix inversion lemma that allows rank deficient matrices or pseudoinverses. So my question is:
How to prove the above conjecture (without too much labor, if possible)?
random-matrices linear-algebra matrices
1 If you just let $x_1,\dots,x_n$ be distinct scalars instead of specifying a particular probability distribution, shouldn't you get the same result? – Michael Hardy Aug 4 '11 at 4:13
Actually, I suspected it to be true as long as all the $x$'s were such that their contribution remains independent of $ee^T$ (Mikael makes this explicit in the answer below) – Suvrit Aug 4 '11 at
Why can the rank never exceed 3? – Michael Hardy Aug 5 '11 at 21:22
1 @Michael: expand $\cos(x-y)=\cos x\cos y + \sin x \sin y$ to notice that $A$ is a sum of three rank-1 matrices. – Suvrit Aug 5 '11 at 23:13
1 Answer
In fact more generally, for any positive semidefinite matrix $A = \sum_{i=1}^k e_i e_i^T$ with the $e_i$'s linearly independent, we have that $e_i^T B e_i = 1$, where $B$ is the
Moore-Penrose pseudoinverse of $A$. This applies here since almost surely your matrix $A$ is of this form with $k=3$ and $e_1 = \sqrt \alpha e$.
Proof: Let $E$ be the linear span of the $e_i$'s. If I understood correctly the notion of Moore-Penrose pseudoinverse, $B$ is described in the following way: as a linear map, $B$ is
zero on the orthogonal of $E$, and on $E$ it is the inverse of the restriction of $A$ to $E$. Let $\beta_{i,j}$ be defined by $B e_i = \sum_j \beta_{i,j} e_j$, so that $e_i^T B e_i = \sum_j\beta_{i,j} e_i^T e_j$. Expressing that $A B e_i = e_i$, we get in particular that $\sum_j\beta_{i,j} e_i^T e_j = 1$, QED.
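The identity is easy to check numerically (my own illustration, using NumPy's pinv in place of Matlab's):

```python
import numpy as np

# Numerical check of e^T A+ e = 1/alpha for A = alpha*ee^T + [cos(xi - xj)]
rng = np.random.default_rng(1)
n, alpha = 6, 2.5
x = rng.standard_normal(n)
e = np.ones(n)
A = alpha * np.outer(e, e) + np.cos(x[:, None] - x[None, :])
val = e @ np.linalg.pinv(A) @ e
print(val, 1 / alpha)    # both ~ 0.4
```

The cosine matrix expands as $uu^T + vv^T$ with $u = \cos x$, $v = \sin x$, so $A$ has exactly the rank-3 form used in the answer.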
Nice clean answer Mikael. It seems so easy once somebody proves it :-) thanks! – Suvrit Aug 4 '11 at 16:09
| {"url":"http://mathoverflow.net/questions/72059/matrix-inversion-lemma-with-pseudoinverses","timestamp":"2014-04-18T01:14:29Z","content_type":null,"content_length":"57159","record_id":"<urn:uuid:5aa5dc2f-b83e-4a61-9bc6-0364f1d5256d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial signatures
From HaskellWiki
"The regular (full) signature of a function specifies the type of the function and -- if the type includes constrained type variables -- enumerates all of the typeclass constraints. The list of the
constraints may be quite large. Partial signatures help when:
• we wish to add an extra constraint to the type of the function but we do not wish to explicitly write the type of the function and enumerate all of the typeclass constraints,
• we wish to specify the type of the function and perhaps some of the constraints -- and let the typechecker figure out the rest of them.
Contrary to a popular belief, both of the above are easily possible, in Haskell98." | {"url":"http://www.haskell.org/haskellwiki/Partial_signatures","timestamp":"2014-04-19T03:53:33Z","content_type":null,"content_length":"13253","record_id":"<urn:uuid:ef0aa645-2687-49e4-884d-0b83b2aafb23>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symmetry Point Groups
Symmetry is very important: a molecule's complete set of symmetry operations defines its point group, and the sections below classify the common point groups.
C[1] Group
C[1] has only one symmetry operation, {E}. The order of the C[1] group is 1. Molecules in this group have no symmetry beyond the identity, which means we cannot perform any rotation, reflection through a mirror plane, etc. The only symmetry operation is the identity, E.
Figure 2.1 HCFBrCl. Point group is C[1]. This picture is drawn by ACD Labs 11.0.
C[i] Group
C[i] has 2 symmetry operations, {E, i}. The order of C[i] group is 2. Molecules in this group have low symmetry, an inversion center. For example, C[2]H[2]F[2]Cl[2] has an inversion center.
Figure 2.2 C[2]H[2]F[2]Cl[2]. Point group is C[i]. This picture is drawn by MacMolPlt.
C[s] Group
C[s] has 2 symmetry operations, {E, σ}. The order of C[s] group is 2. Molecules in this group have low symmetry, a mirror plane. For example, CH[2]BrCl has a mirror plane.
Figure 2.3 CH[2]BrCl. Point group is C[s]. This picture is drawn by MacMolPlt.
Cyclic symmetries
This class includes C[n], C[nh], C[nv], and S[n], which have only one proper or improper rotation axis.
Cyclic group: C[n] group
C[n] (n ≥ 2)
symmetry elements, E and C[n].
And n symmetry operations, {E, C[n]^1, C[n]^2, … , C[n]^n-1}
The order of C[n] group is n.
Figure 2.4 C[2]H[4]Cl[2.] Point group is C[2]. This picture is drawn by MacMolPlt.
Pyramidal group: C[nv] group
For C[nv] group, symmetry elements are E, C[n], and nσ[v].
And symmetry operations are {E, C[n]^k(k=1, … ,n-1), nσ[v] }
The order of C[nv] group is 2n. For example, NH[3] has a C[3] axis and three mirror planes σ[v].
Therefore, the point group of NH[3] is C[3v].
Figure 2.5 NH[3]. Point group is C[3v]. This picture is drawn by MacMolPlt.
Now we can generate a group multiplication table for NH[3]:
Table 2.2 Group multiplication table of symmetry operation of NH[3] molecule
This C[3v] group, as mentioned before, has all the properties of a group in mathematics. And all the molecules that have one C[3] axis and 3 mirror planes, such as the NH[3] molecule, can be
assigned to this C[3v] group. In the same way, the operations in the following groups also have all the properties of a mathematical group and can generate a multiplication table.
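The multiplication table can also be generated mechanically. Here is a sketch (my own construction, not from the text) that represents the C[3v] operations as 2x2 matrices and multiplies them:

```python
import numpy as np

# C3v as 2x2 matrices: rotations by 0/120/240 degrees plus three
# mirrors whose lines are 60 degrees apart.
def rot(k):
    a = 2 * np.pi * k / 3
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def refl(t):
    return np.array([[np.cos(2*t), np.sin(2*t)],
                     [np.sin(2*t), -np.cos(2*t)]])

ops = {"E": rot(0), "C3": rot(1), "C3^2": rot(2),
       "s1": refl(0), "s2": refl(np.pi/3), "s3": refl(2*np.pi/3)}

def find(m):
    # name the unique group element equal to matrix m (closure check)
    for name, o in ops.items():
        if np.allclose(m, o):
            return name
    raise ValueError("product left the group")

table = {(a, b): find(ops[a] @ ops[b]) for a in ops for b in ops}
print(table[("s1", "s2")])   # C3^2: two mirrors compose to a rotation
```

Every row and column of the resulting 6x6 table is a permutation of the six operations, which is the rearrangement property that makes C[3v] a group.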
Reflection group: C[nh ]group
For C[nh] group, symmetry elements are E, C[n], σ[h], and S[n].
And symmetry operations are {E, C[n]^k(k=1, … ,n-1), σ[h], σ[h] C[n]^m(m=1, … ,n-1)}
The order of C[nh] group is 2n.
For example, point group of C[2]H[2]F[2] is C[2h].
Figure 2.6 C[2]H[2]F[2]. Point group is C[2h]. This picture is drawn by MacMolPlt.
Improper rotation group: S[n ]group
If n=1, S[1]=C[s]
If n=2, S[2]=C[i]
If n=odd number, S[n] (n=3, 5, 7 …) = C[nh]
For example, operations in S[3] are the same as in C[3h], e.g. B(OH)[3].
S[3]={E, S[3], S[3]^2, S[3]^3, S[3]^4, S[3]^5} ={E, S[3], C[3]^2, σ[h], C[3], S[3]^5}= C[3h]
Figure 2.7 B(OH)[3]. Point group C[3h]. This picture is drawn by MacMolPlt.
Therefore, for the S[n] group, n can only be 4, 6, 8, ….
The symmetry elements are E and S[n]. And symmetry operations are {E, S[n]^k(k=1, … ,n-1)}. The order of S[n] group is n.
For example, the point group of 1,3,5,7-tetrafluorocyclooctatetraene is S[4].
Figure 2.8 1,3,5,7-tetrafluorocyclooctatetraene. Point group is S[4]. The left
picture is drawn by MacMolPlt. The animated figure is drawn by ACD Labs 11.0.
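The reduction rules above (S[1]=C[s], S[2]=C[i], odd S[n]=C[nh]) can be captured in a small helper; the function name is my own, and the string labels are just the conventional Schoenflies symbols:

```python
def reduce_Sn(n):
    """Name of the point group actually generated by an S_n axis alone."""
    if n == 1:
        return "Cs"          # S1 is just a mirror plane
    if n == 2:
        return "Ci"          # S2 is the inversion
    if n % 2 == 1:
        return f"C{n}h"      # odd n: S_n generates the full Cnh group
    return f"S{n}"           # even n >= 4: a genuine S_n group of order n

for n in (1, 2, 3, 4, 6):
    print(n, "->", reduce_Sn(n))
# 1 -> Cs, 2 -> Ci, 3 -> C3h, 4 -> S4, 6 -> S6
```

This is why only even n ≥ 4 give distinct S[n] groups.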
Dihedral symmetries
This class includes D[n], D[nh], and D[nd], which have one proper rotation axis C[n] and n C[2] axes perpendicular to the C[n] axis.
Dihedral group: D[n] group
For D[n] group, symmetry elements are E, C[n], and nC[2] (⊥ C[n]).
And symmetry operations are {E, C[n]^k(k=1, … ,n-1), nC[2]}
The order of D[n] group is 2n.
For example, the point group of [Co(en)[3]]^3+ is D[3].
Figure 2.9 [Co(en)[3]]^3+. Point group is D[3]. The figure is drawn by ACD Labs 11.0.
Prismatic group: D[nh] group
For D[nh] group, symmetry elements are E, C[n], σ[h], and nC[2] (⊥ C[n]).
And symmetry operations are {E, C[n]^k(k=1, … ,n-1), σ[h], S[n]^m(m=1, … ,n-1), nC[2], nσ[v]}
The order of D[nh] group is 4n.
For example, the point group of benzene is D[6h].
Figure 2.10 Benzene. Point group D[6h]. This picture is drawn by MacMolPlt.
Antiprismatic group: D[nd] group
For D[nd] group, symmetry elements are E, C[n], S[2n], nσ[d], and nC[2] (⊥ C[n]).
And symmetry operations are {E, C[n]^k(k=1, … ,n-1), S[2n]^m(m=1, 3, … ,2n-1), nC[2], nσ[d]}
The order of D[nd] group is 4n.
For example, the point group of C[2]H[6] is D[3d].
Figure 2.11 C[2]H[6]. Point group is D[3d]. This picture is drawn by MacMolPlt.
Polyhedral symmetries
This class includes T, T[h], T[d], O, O[h], I and I[h], which have more than one high-order rotation axis.
Cubic groups: T, T[h], T[d], O, O[h]
These groups do not have a C[5] proper rotation axis.
T group
For T group, symmetry elements are E, 4C[3], and 3C[2].
And symmetry operations are {E, 4C[3], 4C[3]^2, 3C[2]}
The order of T group is 12.
T[h] group
For T[h] group, symmetry elements are E, 3C[2], 4C[3], i, 4S[6] and 3σ[h].
And symmetry operations are {E, 4C[3], 4C[3]^2, 3C[2], i, 4S[6], 4S[6]^5, 3σ[h]}
The order of T[h] group is 24.
T[d] group
For T[d] group, symmetry elements are E, 3C[2], 4C[3], 3S[4 ]and 6σ[d].
And symmetry operations are {E, 8C[3], 3C[2], 6S[4], 6σ[d]}
The order of T[d] group is 24.
For example, the point group of CCl[4] is T[d].
Figure 2.12 CCl[4]. Point group is T[d]. The figure is drawn by ACD Labs 11.0.
O group
For O group, symmetry elements are E, 3C[4], 4C[3], and 6C[2].
And symmetry operations are {E, 8C[3], 3C[2], 6C[4], 6C[2]}
The order of O group is 24.
O[h] group
For O[h] group, symmetry elements are E, 3S[4], 3C[4], 6C[2], 4S[6], 4C[3], 3σ[h], 6σ[d], and i.
And symmetry operations are {E, 8C[3], 6C[2], 6C[4], 3C[2], i, 6S[4], 8S[6], 3σ[h], 6σ[d]}
The order of O[h] group is 48.
For example, the point group of SF[6] is O[h].
Figure 2.13 SF[6]. Point group is O[h]. The figure is drawn by ACD Labs 11.0.
Icosahedral groups: I, I[h]
These groups have a C[5] proper rotation axis.
I group
For I group, symmetry elements are E, 6C[5], 10C[3], and 15C[2].
And symmetry operations are {E, 12C[5], 12C[5]^2, 20C[3], 15C[2]}
The order of I group is 60.
I[h] group
For I[h] group, symmetry elements are E, 6S[10], 10S[6], 6C[5], 10C[3], 15C[2] and 15σ.
And symmetry operations are {E, 12C[5], 12C[5]^2, 20C[3], 15C[2], i, 12S[10], 12S[10]^3, 20S[6], 15σ}
The order of I[h] group is 120.
For example, the point group of C[60] is I[h].
Figure 2.14 C[60]. Point group is I[h]. The figure is drawn by ACD Labs 11.0.
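As a quick consistency check on the polyhedral groups, the stated orders should equal the sum of the operation counts in each listing. This sketch transcribes the class sizes from this section (lumping powers together, e.g. 4C[3] + 4C[3]^2 → 8C[3]):

```python
# Operation counts by class, transcribed from the listings above.
CLASSES = {
    "T":  [1, 8, 3],
    "Th": [1, 8, 3, 1, 8, 3],
    "Td": [1, 8, 3, 6, 6],
    "O":  [1, 8, 3, 6, 6],
    "Oh": [1, 8, 6, 6, 3, 1, 6, 8, 3, 6],
    "I":  [1, 12, 12, 20, 15],
    "Ih": [1, 12, 12, 20, 15, 1, 12, 12, 20, 15],
}

# The group order is simply the total number of operations.
ORDERS = {name: sum(sizes) for name, sizes in CLASSES.items()}
print(ORDERS)
# {'T': 12, 'Th': 24, 'Td': 24, 'O': 24, 'Oh': 48, 'I': 60, 'Ih': 120}
```

Note that the icosahedral counts only total 60 and 120 with 12C[5] (not 15C[5]) rotations, which is why that figure is corrected above.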
Linear groups
This class includes C[∞v] and D[∞h], which are the symmetries of linear molecules.
C[∞v] group
For C[∞v] group, symmetry elements are E, C[∞] and ∞σ[v].
such as CO, HCN, NO, HCl.
Figure 2.15 HCl. Point group is C[∞v]. This picture is drawn by MacMolPlt.
D[∞h] group
For D[∞h] group, symmetry elements are E, C[∞], ∞σ[v], σ[h], i, and ∞C[2].
such as CO[2], O[2], N[2].
Figure 2.16 O[2]. Point group is D[∞h]. This picture is drawn by MacMolPlt.
Colloquium-Govind Menon (Brown University)-Building polyhedra by self-assembly: theory and experiment
Mathematics - Colloquium
Friday, March 21, 2014
11:00 AM-12:00 PM
ABSTRACT: An important goal in materials chemistry since the mid-1990s has been to build materials and devices by mimicking physical and biological mechanisms of self-assembly. The area of
self-assembly is now a vast, interdisciplinary enterprise, but mathematical engagement in the area has been quite modest to date.
I will discuss how an important biological example -- the self-assembly of icosahedral viruses -- can inspire and guide the development of technology. In particular, I will discuss the utility of a
common discrete geometric framework to model:
(a) the self-assembly of a "simple" virus (the bacteriophage MS2) (this is mainly work of Reidun Twarock and her co-workers at the University of York);
(b) self-folding polyhedra (joint work with David Gracias' lab at Johns Hopkins).
There is very little advanced mathematics in this talk, and the ideas are accessible to a broad audience.
Suggested Audiences: Adult, College
E-mail: ma-chair@wpi.edu
Last Modified: March 10, 2014 at 3:24 PM
Five Common GMAT Math Mistakes
A good deal of success in math is simply about careful detail management. If you are, by nature, someone already comfortable with math, highly organized and detail oriented, probably none of these
mistakes will plague your work. This is a post for folks who may be a little rusty in math, and need to be a little more careful with the basic work of detail management.
Dropping the negative sign
Suppose you are solving the equation
5 – 2x = 13
We want to isolate x. One tactic would be to begin by subtracting 5 from both sides. On the right, 13 – 5 = 8. On the left, the 5′s cancel, but with what are we left? It would be a mistake to
subtract 5 and wind up with:
2x = 8 (wrong!)
Of course, the mistake is: when we subtract the 5 and get rid of it, the 2x term does not magically change from negative to
positive. It still has a negative sign in front of it. Therefore, the next steps are:
-2x = 8
x = -4
Actually, if you notice any tendency toward making this mistake, I highly recommend: make your first step to add any subtracted variable to other side, to make it positive. If your first step,
automatically, is to make the variable positive, then you will be considerably less likely to make this mistake.
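The bookkeeping in this example can be double-checked by substituting the answer back in. A minimal sketch (the helper name `solve_linear` is my own):

```python
def solve_linear(a, b, c):
    """Solve a + b*x = c for x, keeping the sign of b intact."""
    # Subtract a from both sides: b*x = c - a  (b stays negative if it was!)
    rhs = c - a
    # Divide both sides by b:
    return rhs / b

x = solve_linear(5, -2, 13)   # the equation 5 - 2x = 13
print(x)                      # -4.0
assert 5 - 2 * x == 13        # substitute back to verify
```

Substituting x = −4 back into 5 − 2x gives 5 + 8 = 13, confirming the sign was handled correctly.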
Dividing by the numerator
Suppose you have this equation to solve:
Both sides are clearly divisible by 5, so one possible first step would be to divide both sides by 5. On the right,
You can multiply by x, and divide by 4 to solve —- or, you simply could take the reciprocal of both sides (always a completely legitimate move when you have fraction = number or fraction =
fraction). Either way, the answer is
Distributing a square
These next three mistakes are part of a broad category. First of all, one of the most fundamental laws underlying all arithmetic and algebra is a law called the Distributive Law. Symbolically, it says:
A(B + C) = AB + AC
When read left-to-right, it is called distributing: we distribute A over (B + C). When this same equation is read right-to-left, it is called “factoring out.” See this post for a more extended
panegyric to the Distributive Law, in a more advanced context. That equation is 100% true, 100% of the time. In words, we can say: multiplication distributes over addition and subtraction. It’s
one of the most fundamental laws in all of mathematics.
That pattern is very important, and has a wide variety of applications in elementary and advanced mathematics. For some reason, though, this pregnant pattern is ripe for vast over-generalization.
The mind seems almost magnetically drawn to distributing all kind of things other than multiplication over addition and subtraction.
One example is: an exponent. Suppose we are asked to expand algebraically the expression:
(a + b)^2
Be careful here, because unless you are a pro at math, your mind is going to be magnetically attracted to the wrong thing to do. Here’s the wrong thing to do:
(a + b)^2 = a^2 + b^2 (wrong!)
If you notice, this mistake involves following the Distributive Law pattern, but with an exponent rather than with multiplication. That’s illegal. What’s the correct procedure? Well, squaring
anything means multiplying it by itself, so the first step would be:
(a + b)^2 = (a + b)(a + b)
From there, you would FOIL out the expression. That’s the step-by-step way to get to the answer. It can be a very handy shortcut to have the following two patterns memorized.
The Square of a Sum: (a + b)^2 = a^2 + 2ab + b^2
The Square of a Difference: (a – b)^2 = a^2 – 2ab + b^2
Those formulas take into account the proper FOILing. Memorizing these can be a time-saving shortcut and also might help you to remember to avoid Mistake #3 here.
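A quick numerical spot-check makes the cross term visible. This sketch verifies the correct FOIL expansions at random points and shows how far off the "distributed" version is:

```python
import random

random.seed(0)
for _ in range(100):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    # Correct expansions (FOIL), to within floating-point rounding:
    assert abs((a + b) ** 2 - (a * a + 2 * a * b + b * b)) < 1e-9
    assert abs((a - b) ** 2 - (a * a - 2 * a * b + b * b)) < 1e-9

# The "distributed" version is wrong whenever the cross term 2ab != 0:
a, b = 1.0, 2.0
print((a + b) ** 2, "vs", a ** 2 + b ** 2)   # 9.0 vs 5.0
```

The gap between 9 and 5 is exactly the missing 2ab = 4.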
Distributing a fraction
This is another mistake of the “over-extend the Distributive Law to regions that are not valid” variety. Here is the succinct way to express this mistake:
a/(b + c) ≠ a/b + a/c
In other words, you can neither combine nor separate fractions by addition in the denominator. This one has far-reaching ramifications. For example, in the following fraction …
12/(3x + 8)
… what can you cancel? NOTHING! If the 12 were over just the 3x, or if the 12 were just over the 8, then you would be able to cancel, but because you can’t separate the fraction, you can do
absolutely no canceling. BTW, in the fraction ….
… even though some cancellation is possible, you can’t do any while the fraction is still like this. You have to separate it, by the addition in the numerator (a 100% legitimate move) and then you
can cancel:
Another related mistake. Suppose we have to solve the following equation.
While it’s true in general that you can take the reciprocal of both sides, unfortunately, you can only take the reciprocal of a single number or a single fraction, NOT of a sum or difference of
fractions. The reciprocal of a sum is not the sum of the reciprocals. How do you find the reciprocal of a sum? You would have to add the two fractions, using a common denominator, combining them into a single
fraction. Here, by far the easiest solution would be to begin by subtracting 1/48 from both sides, and performing the fraction subtraction on the left side, so that you have a single fraction equals
1/x. Then, you would legitimately be allowed to take the reciprocal of both sides to solve.
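The reciprocal-of-a-sum pitfall is easy to demonstrate with exact fractions. This sketch uses two illustrative values of my own choosing, 1/3 and 1/6:

```python
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(1, 6)

wrong = 1 / a + 1 / b     # "sum of the reciprocals": 3 + 6 = 9
right = 1 / (a + b)       # combine first: a + b = 1/2, whose reciprocal is 2
print(wrong, right)       # 9 2

# Correct procedure: common denominator first, THEN take the reciprocal.
combined = a + b          # Fraction(1, 2)
assert 1 / combined == 2
```

Nine versus two: distributing the reciprocal does not merely give a small error, it gives an unrelated number.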
Distributing a root
The final mistake, yet another example of illegitimately over-extending the pattern of the Distributive Law, is distributing root signs. Succinctly, this mistake says:
√(a + b) ≠ √a + √b
You cannot separate a square-root by addition or subtraction. You can separate a root by multiplication or division: see this post for more on that. You can see more about roots in general here.
If you have the equation…
a^2 + b^2 = c^2
… it is illegal to try to simplify that by taking a square-root of each term:
a + b = c (wrong!)
Think about it. Mr. Pythagoras was a very intelligent individual. If it were possible to simplify a^2 + b^2 = c^2 down to a + b = c, he surely would have said so.
In fact, whereas the former is true for the three sides of every right triangle, the latter is not true for the three sides of any triangle. In fact, it constitutes a blatant violation of the
Triangle Inequality, a law that is true for every possible triangle.
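The classic 3-4-5 right triangle makes the point concretely:

```python
import math

a, b = 3.0, 4.0
c = math.sqrt(a ** 2 + b ** 2)      # hypotenuse of the 3-4-5 right triangle
print(c)                            # 5.0

# "Distributing" the root would claim c = a + b = 7, but a "triangle" with
# sides 3, 4, and 7 is degenerate: it violates the strict triangle inequality.
assert c == 5.0
assert a + b != c
```

The hypotenuse is 5, not 7; the square root simply does not distribute over the plus sign.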
These are very common mistakes, and GMAT questions are often designed to elicit falling into one of these mistakes. If you can simply learn and avoid these five mistakes, then you will avoid the
common traps to which so many GMAT takers will readily succumb.
MathGroup Archive: April 2006 [00464]
Re: Re: unable to FullSimplify
• To: mathgroup at smc.vnet.net
• Subject: [mg65892] Re: [mg65872] Re: unable to FullSimplify
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Fri, 21 Apr 2006 01:33:30 -0400 (EDT)
• References: <200604160545.BAA07958@smc.vnet.net> <200604181056.GAA14321@smc.vnet.net> <e24uhn$5s7$1@smc.vnet.net> <200604200914.FAA05331@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
On 20 Apr 2006, at 20:31, Andrzej Kozlowski wrote:
> On 20 Apr 2006, at 18:14, Vladimir wrote:
>> Andrzej Kozlowski wrote:
>>> Well, it seems to me that you are expecting too much.
>> Well, yes, considering the internal simplification and
>> related code is supposedly thousands of pages long.
>>> x + x^4 + 4*x^5 + 6*x^6 + 4*x^7 + x^8
>>> there are just too many different groupings and rearrangements that
>>> would have to be tried to get to a simpler form.
>> According to documentation, "FullSimplify effectively has to try
>> combining every part of an expression with every other".
> Yes, you are right. I wrote that without giving it much thought; my
> only really significant point was the one about the need to use
> only complexity decreasing transformations and this still stands.
>>> Sometimes the only
>>> way to transform an expression to a simpler form is by first
>>> transforming it to a more complex one
>> I'm sure FullSimplify can be improved without the need
>> for such complexification steps. For example:
>> In[]:= subdivide[a_ + b_] := FullSimplify[a] + FullSimplify[b];
>> FullSimplify[Expand[x + (x + x^2)^4],
>> TransformationFunctions -> {Automatic, subdivide}]
>> Out[]= x + x^4*(1 + x)^4
> Well, yes, it works nicely here but the question is, if you make
> this a default transformation transformation for FullSimplify how
> will it effect complexity? If you have a sum of n terms you will
> have to break it up into all possible pairs, then apply
> FullSimplify to each, and then keep doing this recursively. In fact
> it even seems hard to see how you would avoid numerous unnecessary
> attempts at simplifying the same expression... Obviously complexity
> is a very important consideration i choosing default functions for
> FullSimplify. Functions like subdivide should be added by users
> when they see the need for it.
> Andrzej Kozlowski
Some more thoughts: if your subdivide were really a default
transformation in FullSimplify it may well lead to an infinite loop,
since it would be calling on itself. Of course one could avoid this
problem, but even without recursive calls to itself, subdivide would
greatly increase the complexity of FullSimplify, which probably would
become unusable in many situations where it works fine now. And even
once you have done that, there would still be cases where a
simplification exists but there is no route that leads to it by
complexity-reducing transformations (I have posted such examples at
least two times to this list already). Note, however, also the following
interesting feature:
u = Expand[x + (x + x^2)^4];
FullSimplify[f[u] == f[x + (x + x^2)^4]]
Note that even though FullSimplify does not reduce u to x + (x + x^2)^4,
it is actually able to determine that f[u] is equal to f[x + (x + x^2)^4],
for an arbitrary f. To tell the truth, I do not know how it does it;
perhaps when used in this form it will actually expand the argument to f
(expanding is generally a form of "complexification")?
I hope to hear an answer to this, which is the real reason why I am
writing about this topic again. ;-)
In fact, it is in general better to use the approach:
FullSimplify[f[u] - f[x + (x + x^2)^4]]==0
I have posted in the past examples where the former method fails
while the latter succeeds.
The ability to show that two expressions are equal is, in my opinion,
the principal function of FullSimplify, which performs it very well.
Adding high complexity transformations would not likely improve in
this respect but may well harm it by making it slower.
photometry FAQ
Dr. James M. Palmer passed away on Thursday, January 4, 2007 after a courageous battle with cancer. His book, The Art of Radiometry, by James M. Palmer and Barbara G. Grant, is available from SPIE Press: http://spie.org/x648.html?product_id=798237 ISBN: 9780819472458. Vol: PM 184. 393 pages. Hardcover.
Contact Information: Ms. Cindy Gardner, Administrative Associate. Telephone: 520-621-3035. E-mail: cindy@optics.arizona.edu
Dr. Palmer's Faculty Web site: /faculty/Resumes/Palmer.htm
Radiometry and photometry FAQ
James M. Palmer
Research Professor
Optical Sciences Center
University of Arizona
Tucson, AZ; 85721
"When I use a word, it means just what I choose it to mean - neither more nor less."
    Lewis Carroll (Charles Lutwidge Dodgson)
NOTE: Because of the limitations of HTML, a "clean" version of this document is available as a PDF file. Click here to download the PDF file.
Effective technical communication demands a system of symbols, units and nomenclature (SUN) that is reasonably consistent and that has widespread acceptance. Such a system is the International System
of Units (SI). There is no area where words are more important than radiometry and photometry. Unfortunately, the Lewis Carroll quote seems to be the way things often work. This document is an
attempt to provide the necessary and correct terminology to become conversant in this arena.
1. What is the motivation for this FAQ?
2. What is radiometry? What is photometry? How do they differ?
3. What is projected area? What is solid angle?
4. What are the quantities and units used in radiometry?
5. How do I account for spectral quantities?
6. What are the quantities and units used in photometry?
7. What is the difference between lambertian and isotropic?
8. When do the properties of the eye get involved?
9. How do I convert between radiometric and photometric units?
10. Where can I learn more about this stuff?
1. What is the motivation for this FAQ?
There is so much misinformation and conceptual confusion regarding photometry and radiometry, particularly on the WWW by a host of "authorities", it is high time someone got it straight. So here it
is, with links to the responsible agencies.
Background: It all started over a century ago. An organization called the General Conference on Weights and Measures (CGPM) was formed by a diplomatic treaty called the Metre Convention. This treaty
was signed in 1875 in Paris by representatives from 17 nations (including the USA). There are now 48 member nations. Also formed were the International Committee for Weights and Measures (CIPM) and
the International Bureau of Weights and Measures (BIPM). The CIPM, along with a number of sub-committees, suggests modifications to the CGPM. In our arena, the subcommittee is the CCPR, Consultative
Committee on Photometry and Radiometry. The BIPM is the physical facility responsible for dissemination of standards, the international metrology institute.
The SI was adopted by the CGPM in 1960. It currently consists of seven base units and a larger number of derived units. The base units are a choice of seven well-defined units which by convention are
regarded as independent. The seven are: metre, kilogram, second, ampere, kelvin, mole and candela. The derived units are those formed by various combinations of the base units.
International organizations involved in the promulgation of SUN include the International Commission on Illumination (CIE), the International Union of Pure and Applied Physics (IUPAP), and the
International Standards Organization (ISO). In the USA, the American National Standards Institute (ANSI) is the primary documentary (protocol) standards organization. Many other scientific and
technical organizations publish recommendations concerning the use of SUN for their learned publications. Examples are the International Astronomical Union (IAU) and the American Institute of Physics
Read all about the SI, its history and application, at physics.nist.gov/cuu/ or at www.bipm.fr.
This topic is currently of great importance to me inasmuch as I have a commission to prepare an authoritative chapter on these issues for the forthcoming "Handbook of Optics III."
2. What is radiometry? What is photometry? How do they differ?
Radiometry is the measurement of optical radiation, which is electromagnetic radiation within the frequency range between 3×10^11 and 3×10^16 Hz. This range corresponds to wavelengths between 0.01
and 1000 micrometres (µm), and includes the regions commonly called the ultraviolet, the visible and the infrared. Two out of many typical units encountered are watts/m^2 and photons/sec-steradian.
Photometry is the measurement of light, which is defined as electromagnetic radiation which is detectable by the human eye. It is thus restricted to the wavelength range from about 360 to 830
nanometers (nm; 1000 nm = 1 µm). Photometry is just like radiometry except that everything is weighted by the spectral response of the eye. Visual photometry uses the eye as a comparison detector,
while physical photometry uses either optical radiation detectors constructed to mimic the spectral response of the eye, or spectroradiometry coupled with appropriate calculations to do the eye
response weighting. Typical photometric units include lumens, lux, candelas, and a host of other bizarre ones.
The only real difference between radiometry and photometry is that radiometry includes the entire optical radiation spectrum, while photometry is limited to the visible spectrum as defined by the
response of the eye. In my forty years of experience, photometry is more difficult to understand, primarily because of the arcane terminology, but is fairly easy to do, because of the limited
wavelength range. Radiometry, on the other hand, is conceptually somewhat simpler, but is far more difficult to actually do.
3. What is projected area? What is solid angle?
Projected area is defined as the rectilinear projection of a surface of any shape onto a plane normal to the line of sight. The differential form is dA[proj] = cos(β) dA, where β is the angle between
the local surface normal and the line of sight. We can integrate over the (perceptible) surface area to get A[proj] = ∫ cos(β) dA.
Some common examples are shown in the table below:
│ SHAPE           │ AREA                    │ PROJECTED AREA                            │
│ Flat rectangle  │ A = L×W                 │ A[proj] = L×W cos β                       │
│ Circular disc   │ A = π r^2 = π d^2 / 4   │ A[proj] = π r^2 cos β = π d^2 cos β / 4   │
│ Sphere          │ A = 4 π r^2 = π d^2     │ A[proj] = A/4 = π r^2                     │
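The projected-area formulas in the table can be sketched as small helper functions (the function names are my own):

```python
import math

def proj_rectangle(L, W, beta):
    """Projected area of an L x W flat rectangle viewed at angle beta (radians)."""
    return L * W * math.cos(beta)

def proj_disc(r, beta):
    """Projected area of a circular disc of radius r viewed at angle beta."""
    return math.pi * r ** 2 * math.cos(beta)

def proj_sphere(r):
    """A sphere projects to the same disc from every direction: A/4 = pi*r^2."""
    return math.pi * r ** 2

# Face-on (beta = 0) the rectangle projects at full area; at 60 degrees, half.
print(proj_rectangle(2.0, 3.0, 0.0))            # 6.0
print(proj_rectangle(2.0, 3.0, math.pi / 3))    # ~3.0
print(proj_sphere(1.0) / (4 * math.pi))         # 0.25  (A_proj = A/4)
```

The last line is the sphere entry of the table: the projected area is one quarter of the total surface area.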
Plane angle and solid angle are two derived units in the SI system. The following definitions are taken from NIST SP811.
"The radian is the plane angle between two radii of a circle that cuts off on the circumference an arc equal in length to the radius."
The abbreviation for the radian is rad. Since there are 2π radians in a circle, the conversion between degrees and radians is 1 rad = (180/π) degrees.
A solid angle extends the concept to three dimensions.
"One steradian (sr) is the solid angle that, having its vertex in the center of a sphere, cuts off an area on the surface of the sphere equal to that of a square with sides of length equal to the
radius of the sphere."
The solid angle is thus ratio of the spherical area to the square of the radius. The spherical area is a projection of the object of interest onto a unit sphere, and the solid angle is the surface
area of that projection. If we divide the surface area of a sphere by the square of its radius, we find that there are 4π steradians of solid angle in a sphere. One hemisphere has 2π steradians.
The symbol for solid angle is either ω, the lowercase Greek letter omega, or Ω, the uppercase omega. I use ω exclusively for solid angle, reserving Ω for the advanced concept of projected solid
angle (ω cos θ).
Both plane angles and solid angles are dimensionless quantities, and they can lead to confusion when attempting dimensional analysis.
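To make the steradian concrete, this sketch computes the solid angle of a right circular cone from the standard spherical-cap result ω = 2π(1 − cos θ), a formula not derived in the text above:

```python
import math

def cone_solid_angle(half_angle):
    """Solid angle (sr) subtended by a cone of the given half-angle (radians),
    from the spherical-cap area: omega = 2*pi*(1 - cos(theta))."""
    return 2 * math.pi * (1 - math.cos(half_angle))

print(cone_solid_angle(math.pi / 2) / math.pi)   # ~2.0 -> a hemisphere is 2*pi sr
print(cone_solid_angle(math.pi) / math.pi)       # 4.0  -> the full sphere is 4*pi sr

# Small-angle check: a narrow cone subtends ~ pi*theta^2 sr, i.e. the area of
# the flat disc it cuts out, divided by the square of the distance.
theta = 1e-3
assert abs(cone_solid_angle(theta) - math.pi * theta ** 2) < 1e-9
```

The hemisphere and full-sphere values match the 2π and 4π steradian figures quoted above.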
4. What are the quantities and units used in radiometry?
Radiometric units can be divided into two conceptual areas: those having to do with power or energy, and those that are geometric in nature. The first two are:
Energy is an SI derived unit, measured in joules (J). The recommended symbol for energy is Q. An acceptable alternate is W.
Power (a.k.a. radiant flux) is another SI derived unit. It is the derivative of energy with respect to time, dQ/dt, and the unit is the watt (W). The recommended symbol for power is Φ (the uppercase
Greek letter phi). An acceptable alternate is P.
Energy is the integral over time of power, and is used for integrating detectors and pulsed sources. Power is used for non-integrating detectors and continuous sources. Even though we patronize the
power utility, what we are actually buying is energy in watt-hours.
Now we become more specific and incorporate power with the geometric quantities area and solid angle.
Irradiance (a.k.a. flux density) is another SI derived unit and is measured in W/m^2. Irradiance is power per unit area incident from all directions in a hemisphere onto a surface that coincides with
the base of that hemisphere. A similar quantity is radiant exitance, which is power per unit area leaving a surface into a hemisphere whose base is that surface. The symbol for irradiance is E and
the symbol for radiant exitance is M. Irradiance (or radiant exitance) is the derivative of power with respect to area, dΦ/dA. The integral of irradiance or radiant exitance over area is power.
Radiant intensity is another SI derived unit and is measured in W/sr. Intensity is power per unit solid angle. The symbol is I. Intensity is the derivative of power with respect to solid angle,
dΦ/dω. The integral of radiant intensity over solid angle is power.
Radiance is the last SI derived unit we need and is measured in W/m^2-sr. Radiance is power per unit projected area per unit solid angle. The symbol is L. Radiance is the derivative of power with
respect to solid angle and projected area, dΦ/(dω dA cos θ), where θ is the angle between the surface normal and the specified direction. The integral of radiance over area and solid angle is power.
A great deal of confusion concerns the use and misuse of the term intensity. Some folks use it for W/sr, some use it for W/m^2 and others use it for W/m^2-sr. It is quite clearly defined in the SI
system, in the definition of the base unit of luminous intensity, the candela. Some attempt to justify alternate uses by adding adjectives like field or optical (used for W/m^2) or specific (used for
W/m^2-sr), but this practice only adds to the confusion. The underlying concept is (quantity per unit solid angle). For an extended discussion, I wrote a paper entitled "Getting Intense on Intensity"
for Metrologia (official journal of the BIPM) and a letter to OSA's "Optics and Photonics News". A modified version is available on the web..
Photon quantities are also common. They are related to the radiometric quantities by the relationship Q[p] = hc/λ, where Q[p] is the energy of a photon at wavelength λ, h is Planck's constant and c
is the velocity of light. At a wavelength of 1 µm, there are approximately 5×10^18 photons per second in a watt. Conversely, also at 1 µm, 1 photon has an energy of 2×10^−19 joules (watt-sec). Common
units include s^−1-m^−2-sr^−1 for photon radiance.
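The photon-per-watt figures above are easy to reproduce from Q[p] = hc/λ. A minimal sketch using the exact SI values of h and c:

```python
h = 6.62607015e-34      # Planck constant, J*s (exact in the 2019 SI)
c = 2.99792458e8        # speed of light, m/s (exact)

def photon_energy(wavelength_m):
    """Energy of one photon, Q_p = h*c / lambda, in joules."""
    return h * c / wavelength_m

def photons_per_watt(wavelength_m):
    """Photon rate (s^-1) carried by 1 W of monochromatic radiation."""
    return 1.0 / photon_energy(wavelength_m)

E = photon_energy(1e-6)                            # at 1 micrometre
print(f"{E:.3g} J")                                # 1.99e-19 J
print(f"{photons_per_watt(1e-6):.3g} photons/s")   # 5.03e+18 photons/s
```

These match the ~2×10^−19 J and ~5×10^18 photons/s figures quoted in the text.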
5. How do I represent spectral quantities?
Most sources of optical radiation are spectrally dependent, and just radiance, intensity, etc. give no information about the distribution of these quantities over wavelength. Spectral quantities,
like spectral radiance, spectral power, etc. are defined as the quotient of the quantity in an infinitesimal range of wavelength divided by that wavelength range. In other words, spectral quantities
are derivative quantities, per unit wavelength, and have an additional (λ^−1) in their units. When integrated over wavelength they yield the total quantity. These spectral quantities are denoted by
using a subscript λ, e.g., L[λ], E[λ], Φ[λ], and I[λ].
Some other quantities (examples include spectral transmittance, spectral reflectance, spectral responsivity, etc.) vary with wavelength but are not used as derivative quantities. These quantities
should not be integrated over wavelength; they are only weighting functions, to be included with the above derivative quantities. To distinguish them from the derivative quantities, they are denoted
by a parenthetical wavelength, i.e., R(λ) or τ(λ).
6. What are the quantities and units used in photometry?
They are basically the same as the radiometric units except that they are weighted for the spectral response of the human eye and have funny names. A few additional units have been introduced to deal
with the amount of light reflected from diffuse (matte) surfaces. The symbols used are identical to the radiometric ones, except that a subscript "v" is added to denote "visual." The following
chart compares them.
│ QUANTITY │ RADIOMETRIC │ PHOTOMETRIC │
│ power │ watt (W) │ lumen (lm) │
│ power per unit area │ W/m^2 │ lm/m^2 = lux (lx) │
│ power per unit solid angle │ W/sr │ lm/sr = candela (cd) │
│ power per area per solid angle │ W/m^2-sr │ lm/m^2-sr = cd/m^2 = nit │
Now we can get more specific about the details.
The candela is one of the seven base units of the SI system. It is defined as follows:
The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540×10^12 hertz and that has a radiant intensity in that direction of 1/683
watt per steradian.
The candela is abbreviated as cd and its symbol is I[v]. The above definition was adopted by the 16th CGPM in 1979.
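Two small numbers fall directly out of this definition: the wavelength corresponding to 540×10^12 Hz, and the 683 lm/W conversion at that frequency. A sketch (the 0.010 W/sr source is a hypothetical example of my own):

```python
c = 2.99792458e8            # speed of light, m/s

f_cd = 540e12               # frequency in the candela definition, Hz
wavelength = c / f_cd       # ~555 nm, near the peak of photopic vision
print(f"{wavelength * 1e9:.1f} nm")       # 555.2 nm

# At this frequency the definition fixes the conversion exactly:
K_cd = 683.0                # lm/W, maximum luminous efficacy
radiant_intensity = 0.010   # W/sr, a hypothetical monochromatic 540 THz source
luminous_intensity = K_cd * radiant_intensity
print(f"{luminous_intensity:.2f} cd")     # 6.83 cd
```

In other words, a 540 THz source radiating 1/683 W/sr has, by definition, a luminous intensity of exactly 1 cd.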
The candela was formerly defined as the luminous intensity, in the perpendicular direction, of a surface of 1/600 000 square metre of a black body at the temperature of freezing platinum under a
pressure of 101 325 newtons per square metre. This earlier definition was initially adopted in 1946 and later modified by the 13th CGPM (1967). It was abrogated in 1979 and replaced by the current
The current definition was adopted because of several reasons. First, the freezing point of platinum (≈ 2042 K) was tied to another base unit, the kelvin. If the best estimate of this point were
changed, it would then impact the candela. The uncertainty of the thermodynamic temperature of this fixed point created an unacceptable uncertainty in the value of the candela. Second, the
realization of the Pt blackbody was extraordinarily difficult; only a few were ever built. Third, if the temperature were slightly off, possibly because of temperature gradients or contamination, the
freezing point might change or the temperature of the cavity might differ. The sensitivity of the candela to a slight change in temperature is significant. At a wavelength of 555 nm, a change in temperature of only 1 K results in a luminance change approaching 1%. Fourth, the relative spectral radiance of blackbody radiation changes drastically (some three orders of magnitude) over the
visible range. Finally, recent advances in radiometry offered a host of new possibilities for the realization of the candela.
The value 683 lm/W was selected based upon the best measurements with existing platinum freezing point blackbodies. It has varied over time from 620 to nearly 700 lm/W, depending largely upon the
assigned value of the freezing point of platinum. The value of 1/600 000 square metre was chosen to maintain consistency with prior standards. Note that neither the old nor the new definition says anything about the spectral response of the human eye. There are additional definitions that include the characteristics of the eye, but the base unit (candela) and those SI units derived from it are defined independently of it.
Also note that in the definition there is no specification for the spatial distribution of intensity. Luminous intensity, while often associated with an isotropic point source, is a valid
specification for characterizing highly directional light sources such as spotlights and LEDs.
One other issue before we press on. Since the candela is now defined in terms of other SI derived quantities, there is really no need to retain it as an SI base quantity. It remains so for reasons of
history and continuity.
The lumen is an SI derived unit for luminous flux. The abbreviation is lm and the symbol is F[v]. The lumen is derived from the candela and is the luminous flux emitted into unit solid angle (1 sr)
by an isotropic point source having a luminous intensity of 1 candela. The lumen is the product of luminous intensity and solid angle, cd-sr. It is analogous to the unit of radiant flux (watt),
differing only in the eye response weighting. If a light source is isotropic, the relationship between lumens and candelas is 1 cd = 4π lm. In other words, an isotropic source having a luminous intensity of 1 candela emits 4π lumens into space, which just happens to be 4π steradians. We can also state that 1 cd = 1 lm/sr, analogous to the equivalent radiometric definition.
If a source is not isotropic, the relationship between candelas and lumens is empirical. A fundamental method used to determine the total flux (lumens) is to measure the luminous intensity (candelas)
in many directions using a goniophotometer, and then numerically integrate over the entire sphere. Later on, we can use this "calibrated" lamp as a reference in an integrating sphere for routine
measurements of luminous flux.
Lumens are what we get from the hardware store when we purchase a light bulb. We want a high number of lumens with a minimum of power consumption and a reasonable lifetime. Projection devices are
also characterized by lumens to indicate how much luminous flux they can deliver to a screen.
Illuminance is another SI derived quantity which denotes luminous flux density. It has a special name, lux, and is lumens per square metre, or lm/m^2. The symbol is E[v]. Most light meters measure
this quantity, as it is of great importance in illuminating engineering. The IESNA Lighting Handbook has some sixteen pages of recommended illuminances for various activities and locales, ranging
from morgues to museums. Typical values range from 100 000 lx for direct sunlight to 20-50 lx for hospital corridors at night.
Luminance should probably be included on the official list of derived SI quantities, but is not. It is analogous to radiance, differentiating the lumen with respect to both area and direction. It
also has a special name, nit, and is cd/m^2 or lm/m^2-sr if you prefer. The symbol is L[v]. It is most often used to characterize the "brightness" of flat emitting or reflecting surfaces. A typical use would be the luminance of your laptop computer screen. They have between 100 and 250 nits, and the sunlight-readable ones have more than 1000 nits. Typical CRT monitors have between 50 and 125 nits.
Other photometric units
We have other photometric units (boy, do we have some strange ones). Photometric quantities should be reported in SI units as given above. However, the literature is filled with now obsolete
terminology and we must be able to interpret it. So here are a few terms that have been used in the past.
Illuminance:
1 metre-candle = 1 lux
1 phot = 1 lm/cm^2 = 10^4 lux
1 foot-candle = 1 lumen/ft^2 = 10.76 lux
1 milliphot = 10 lux
Luminance: Here we have two classes of units. The first is conventional, easily related to the SI unit, the cd/m^2 (nit).
1 stilb = 1 cd/cm^2 = 10^4 cd/m^2 = 10^4 nit
1 cd/ft^2 = 10.76 cd/m^2 = 10.76 nit
The second class was designed to "simplify" characterization of light reflected from diffuse surfaces by including in the definitions the concept of a perfect diffuse reflector (lambertian, reflectance ρ = 1). If one unit of illuminance falls upon this hypothetical reflector, then 1 unit of luminance is reflected. The perfect diffuse reflector emits 1/π units of luminance per unit illuminance. If the reflectance is ρ, then the luminance is ρ times the illuminance. Consequently, these units all have a factor of (1/π) built in.
1 lambert = (1/π) cd/cm^2 = (10^4/π) cd/m^2
1 apostilb = (1/π) cd/m^2
1 foot-lambert = (1/π) cd/ft^2 = 3.426 cd/m^2
1 millilambert = (10/π) cd/m^2
1 skot = 1 milliblondel = (10^-3/π) cd/m^2
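Since every obsolete unit above is just a constant multiple of the SI unit, the whole list collapses into a lookup table. A minimal sketch (names and structure are mine; factors are taken from the lists above):

```python
import math

TO_LUX = {            # obsolete illuminance units -> lux
    "metre-candle": 1.0,
    "phot": 1e4,
    "milliphot": 10.0,
    "foot-candle": 10.76,
}

TO_NIT = {            # obsolete luminance units -> cd/m^2 (nit)
    "stilb": 1e4,
    "cd/ft^2": 10.76,
    "lambert": 1e4 / math.pi,
    "apostilb": 1.0 / math.pi,
    "foot-lambert": 10.76 / math.pi,
    "millilambert": 10.0 / math.pi,
    "skot": 1e-3 / math.pi,
}

def convert(value: float, unit: str) -> float:
    """Convert a value in an obsolete unit to the corresponding SI unit."""
    table = TO_LUX if unit in TO_LUX else TO_NIT
    return value * table[unit]
```

For example, convert(1, "foot-lambert") gives about 3.426 cd/m^2, matching the table.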
Photometric quantities are already the result of an integration over wavelength. It therefore makes no sense to speak of spectral luminance or the like.
7. What is the difference between lambertian and isotropic?
Both terms mean "the same in all directions" and are unfortunately sometimes used interchangeably.
Isotropic implies a spherical source that radiates the same in all directions, i.e., the intensity (W/sr) is the same in all directions. We often hear about an "isotropic point source." There can be no such thing, because the energy density would have to be infinite. But a small, uniform sphere comes very close. The best example is a globular tungsten lamp with a milky white diffuse envelope, as
used in dressing room lighting. From our vantage point, a distant star can be considered an isotropic point source.
Lambertian refers to a flat radiating surface. It can be an active surface or a passive, reflective surface. Here the intensity falls off as the cosine of the observation angle with respect to the
surface normal (Lambert's law). The radiance (W/m^2-sr) is independent of direction. A good example is a surface painted with a good "matte" or "flat" white paint. If it is uniformly illuminated,
like from the sun, it appears equally bright from whatever direction you view it. Note that the flat radiating surface can be an elemental area of a curved surface.
The ratio of the radiant exitance (W/m^2) to the radiance (W/m^2-sr) of a lambertian surface is a factor of π and not 2π. We integrate radiance over a hemisphere, and find that the presence of the factor of cos(θ) in the definition of radiance gives us this interesting result. It is not intuitive, as we know that there are 2π steradians in a hemisphere.
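The factor of π can be confirmed numerically: integrating the cos(θ) projection factor over the 2π steradians of a hemisphere yields π, not 2π. A short check (my own illustration):

```python
import math

def hemisphere_integral(n: int = 2000) -> float:
    """Numerically integrate cos(theta)*sin(theta) dtheta dphi over a hemisphere.

    For constant radiance L, exitance M = L * (this integral); the cos(theta)
    projection factor is what reduces the 2*pi steradians to a factor of pi.
    """
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta          # midpoint rule in theta
        total += math.cos(theta) * math.sin(theta) * dtheta
    return 2 * math.pi * total              # phi integration contributes 2*pi
```

The result converges to π ≈ 3.1416, confirming M = πL for a lambertian surface.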
A lambertian sphere illuminated by a distant point source will display a radiance which is maximum at the surface where the local normal coincides with the incoming beam. The radiance will fall off
with a cosine dependence to zero at the terminator. If the intensity (integrated radiance over area) is unity when viewing from the source, then the intensity when viewing from the side is 1/π.
Think about this and consider whether or not our Moon is lambertian. I'll have more to say about this at a later date in another place!
8. Where do the properties of the eye get involved?
We know that the eye does not see all wavelengths equally. The eye has two general classes of photosensors, cones and rods.
Cones: The cones are responsible for light-adapted vision; they respond to color and have high resolution in the central foveal region. The light-adapted relative spectral response of the eye is called the spectral luminous efficiency function for photopic vision, V(λ). This empirical curve, first adopted by the International Commission on Illumination (CIE) in 1924, has a peak of unity at 555 nm, and decreases to levels below 10^-5 at about 370 and 785 nm. The 50% points are near 510 nm and 610 nm, indicating that the curve is slightly skewed. The V(λ) curve looks very much like a Gaussian function; in fact a Gaussian curve can easily be fit and is a good representation under some circumstances. I used a non-linear regression technique to obtain such a fit.
More recent measurements have shown that the 1924 curve may not best represent typical human vision. It appears to underestimate the response at wavelengths shorter than 460 nm. Judd (1951), Vos
(1978) and Stockman and Sharpe (1999) have made incremental advances in our knowledge of the photopic response.
Rods: The rods are responsible for dark-adapted vision, with no color information and poor resolution when compared to the foveal cones. The dark-adapted relative spectral response of the eye is called the spectral luminous efficiency function for scotopic vision, V'(λ). This is another empirical curve, adopted by the CIE in 1951. It is defined between 380 nm and 780 nm. The V'(λ) curve has a peak of unity at 507 nm, and decreases to levels below 10^-3 at about 380 and 645 nm. The 50% points are near 455 nm and 550 nm. This scotopic curve can also be fit with a Gaussian, although the fit is not quite as good as the photopic curve.
Photopic (light adapted cone) vision is active for luminances greater than 3 cd/m^2. Scotopic (dark-adapted rod) vision is active for luminances lower than 0.01 cd/m^2. In between, both rods and
cones contribute in varying amounts, and in this range the vision is called mesopic. There are currently efforts under way to characterize the composite spectral response in the mesopic range for
vision research at intermediate luminance levels.
The Color Vision Lab at UCSD has an impressive collection of the data files, including V(λ), V'(λ), and some of the newer ones that you need to do this kind of work.
9. How do I convert between radiometric and photometric units?
We know from the definition of the candela that there are 683 lumens per watt at a frequency of 540 THz, which is 555 nm (in vacuum or air). This is the wavelength that corresponds to the maximum spectral responsivity of the human eye. The conversion from watts to lumens at any other wavelength involves the product of the power (watts) and the V(λ) value at the wavelength of interest. As an example, we can compare laser pointers at 670 nm and 635 nm. At 670 nm, V(λ) is 0.032 and a 5 mW laser has 0.005 W × 0.032 × 683 lm/W = 0.11 lumens. At 635 nm, V(λ) is 0.217 and a 5 mW laser has 0.005 W × 0.217 × 683 lm/W = 0.74 lumens. The shorter wavelength (635 nm) laser pointer will create a spot that is almost 7 times as bright as the longer wavelength (670 nm) laser (assuming the same beam diameter).
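The laser-pointer comparison is simply power × V(λ) × 683 lm/W; as a sketch (V(λ) values as quoted in the text, helper names mine):

```python
K_M = 683.0  # lm/W, maximum spectral luminous efficacy for photopic vision

def monochromatic_lumens(watts: float, v_lambda: float) -> float:
    """Luminous flux of a monochromatic source given its V(lambda) value."""
    return watts * v_lambda * K_M

red_670 = monochromatic_lumens(0.005, 0.032)   # 5 mW at 670 nm, ~0.11 lm
red_635 = monochromatic_lumens(0.005, 0.217)   # 5 mW at 635 nm, ~0.74 lm
brightness_ratio = red_635 / red_670           # ~6.8, "almost 7 times"
```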
In order to convert a source with non-monochromatic spectral distribution to a luminous quantity, the situation is decidedly more complex. We must know the spectral nature of the source, because it is used in an equation of the form:
X[v] = K[m] ∫ X[λ] V(λ) dλ
where X[v] is a luminous term, X[λ] is the corresponding spectral radiant term, and V(λ) is the photopic spectral luminous efficiency function. For X, we can pair luminous flux (lm) and spectral power (W/nm), luminous intensity (cd) and spectral radiant intensity (W/sr-nm), illuminance (lx) and spectral irradiance (W/m^2-nm), or luminance (cd/m^2) and spectral radiance (W/m^2-sr-nm). This equation represents a weighting, wavelength by wavelength, of the radiant spectral term by the visual response at that wavelength. The constant K[m] is a scaling factor, the maximum spectral luminous efficacy for photopic vision, 683 lm/W. The wavelength limits can be set to restrict the integration to only those wavelengths where the product of the spectral term X[λ] and V(λ) is non-zero.
Practically, this means we only need integrate from 360 to 830 nm, limits specified by the CIE V(λ) table. Since this V(λ) function is defined by a table of empirical values, it is best to do the integration numerically. Use of the Gaussian equation given above is only an approximation. I compared the Gaussian equation with the tabulated data using blackbody curves and found the differences to be less than 1% for temperatures between 1500 K and >20000 K. This result is acceptable for smooth curves, but don't try it for narrow wavelength sources, like LEDs.
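The numerical integration described above can be sketched as follows. Note that the V(λ) Gaussian here uses commonly quoted fit coefficients (peak ≈ 1.02 near 559 nm); treat them, and the trapezoidal step size, as illustrative assumptions rather than the CIE table the text recommends:

```python
import math

def v_photopic(lam_nm: float) -> float:
    """Gaussian approximation to the photopic V(lambda) curve.

    Coefficients are a commonly quoted fit; good only for smooth
    spectra, per the text -- do not use for LEDs or lasers.
    """
    um = lam_nm / 1000.0
    return 1.019 * math.exp(-285.4 * (um - 0.559) ** 2)

def luminous_flux(spectrum, lam_start=360.0, lam_end=830.0, steps=4700):
    """K_m * integral of X_lambda(lam) * V(lam) dlam, trapezoidal rule.

    `spectrum` gives spectral power in W/nm as a function of wavelength in nm.
    """
    k_m = 683.0
    h = (lam_end - lam_start) / steps
    total = 0.0
    for i in range(steps + 1):
        lam = lam_start + i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoidal end weights
        total += w * spectrum(lam) * v_photopic(lam)
    return k_m * total * h

# Example: 1 W spread uniformly over a 10 nm band centred on 555 nm.
flux = luminous_flux(lambda l: 0.1 if 550.0 <= l <= 560.0 else 0.0)
```

The example band lands near the peak of V(λ), so the result is close to the 683 lm/W maximum.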
There is nothing in the SI definitions of the base or derived units concerning the eye response, so we have some flexibility in the choice of the weighting function. We can use a different spectral luminous efficiency curve, perhaps one of the newer ones. We can also make use of the equivalent curve for scotopic (dark-adapted) vision for studies at lower light levels. This V'(λ) curve has its own constant K'[m], the maximum spectral luminous efficacy for scotopic vision. K'[m] is 1700 lm/W at the peak wavelength for scotopic vision (507 nm), and this value was deliberately chosen such that the absolute value of the scotopic curve at 555 nm coincides with the photopic curve, at the value 683 lm/W. Some workers refer to "scotopic lumens", a term which should be discouraged because of the potential for misunderstanding. In the future, we can also expect to see spectral weighting to represent the mesopic region.
The General Conference on Weights and Measures (CGPM) has approved the use of the CIE V(λ) and V'(λ) curves for determination of the value of photometric quantities of luminous sources.
Now about converting from lumens to watts. The conversion from watts to lumens that we saw just above required that the spectral function X[λ] of the radiation be known over the spectral range from 360 to 830 nm, where V(λ) is non-zero. Attempts to go in the other direction, from lumens to watts, are far more difficult. Since we are trying to back out a quantity that was weighted and placed inside of an integral, we must know the spectral function X[λ] of the radiation over the entire spectral range where the source emits, not just the visible. There are a few tricks which will have to wait for
my forthcoming book chapter.
10. Where can I learn more about this stuff?
Books, significant journal articles:
DeCusatis, C., "Handbook of Applied Photometry." AIP Press (1997). Authoritative, with pertinent chapters written by technical experts at BIPM, CIE and NIST. Skip chapter 4!
Rea, M., ed. "Lighting Handbook: Reference and Application," 8th edition, Illuminating Engineering Society of North America (1993).
"The Basis of Physical Photometry" CIE Technical Report 18.2 (1983).
"Symbols, Units and Nomenclature in Physics" International Union of Pure and Applied Physics (1987).
"American National Standard Nomenclature and Definitions for Illuminating Engineering" ANSI Standard ANSI/IESNA RP-16 96 (1996).
Publications available on the World Wide Web
All you ever wanted to know about the SI is contained at BIPM and at NIST. Available publications (highly recommended) include:
"The International System of Units (SI)." 8th edition (2006), direct from BIPM. (The official document is in French; this is the English translation.) Download it now in PDF format.
NIST Special Publication SP330 "The International System of Units (SI)." The US edition of the above BIPM publication. Download it now in PDF format.
NIST Special Publication SP811 "Guide for the Use of the International System of Units (SI)." Download it now in PDF format.
Papers published in recent issues of the NIST Journal of Research are available on the web in PDF format. Of particular interest is "The NIST Detector-Based Luminous Intensity Scale," Vol. 101, page
109 (1996), which you can download now in PDF format.
Useful Web sites
│BIPM Int. Bureau of Weights & Measures │NIST Nat'l Inst. of Standards & Technology│
│ISO International Standards Organization │ANSI American Nat'l Standards Institute │
│CIE International Commission on Illumination │IESNA Illum. Eng. Society of N. America │
│IUPAP Int. Union of Pure & Applied Physics │Color Vision Lab at UCSD │
│AIP American Institute of Physics │SPIE - International Society for Opt. Eng.│
│OSA Optical Society of America │ │
Version 1.1; 08 July 1999
Send corrections and comments to the author.
email: jmpalmer@u.arizona.edu or jpalmer@azstarnet.com
WWW: fp.optics.arizona.edu/Palmer/
Tentative table of contents for my forthcoming book "The Art of Radiometry"
An integer (pronounced IN-tuh-jer) is a whole number (not a fractional number) that can be positive, negative, or zero.
Examples of integers are: -5, 1, 5, 8, 97, and 3,043.
Examples of numbers that are not integers are: -1.43, 1 3/4, 3.14, .09, and 5,643.1.
The set of integers, denoted Z, is formally defined as follows:
Z = {..., -3, -2, -1, 0, 1, 2, 3, ...}
In mathematical equations, unknown or unspecified integers are represented by lowercase, italicized letters from the "late middle" of the alphabet. The most common are p, q, r, and s.
The set Z is a denumerable set. Denumerability refers to the fact that, even though there might be an infinite number of elements in a set, those elements can be denoted by a list that implies the
identity of every element in the set. For example, it is intuitive from the list {..., -3, -2, -1, 0, 1, 2, 3, ...} that 356,804,251 and -67,332 are integers, but 356,804,251.5, -67,332.89, -4/3, and
0.232323 ... are not.
The elements of Z can be paired off one-to-one with the elements of N, the set of natural numbers, with no elements being left out of either set. Let N = {1, 2, 3, ...}. Then the pairing can proceed
in this way:
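The pairing itself (omitted in this extraction) is the usual zig-zag 1↦0, 2↦1, 3↦−1, 4↦2, 5↦−2, and so on. A sketch of the bijection:

```python
def nat_to_int(n: int) -> int:
    """Map the n-th natural number (n >= 1) to an integer, one-to-one and onto.

    Even n go to the positive integers, odd n to zero and the negatives.
    """
    return n // 2 if n % 2 == 0 else -(n // 2)

# 1, 2, 3, 4, 5, 6, 7  ->  0, 1, -1, 2, -2, 3, -3
first_seven = [nat_to_int(n) for n in range(1, 8)]
```

Every integer appears exactly once in this list, which is what makes Z denumerable.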
In infinite sets, the existence of a one-to-one correspondence is the litmus test for determining cardinality, or size. The set of natural numbers and the set of rational numbers have the same cardinality as Z. However, the sets of real numbers, imaginary numbers, and complex numbers have cardinality larger than that of Z.
This was last updated in September 2005
Bitopological spaces and algebraic topology
Is it possible to introduce the concept of bitopological spaces such as $(X,T_1,T_2)$ (introduced by J.C.Kelly see Proc. London Math. Soc. (3) 13 (1963) 71–89 MR0143169, J.C.
Kelly) in the homotopy theory or homology theory?
In some sense, the (relatively new) field of directed algebraic topology represents an attempt to do this. This article by Marco Grandis includes a discussion (on page 8) of why bitopological spaces are too general to admit a good (directed) homotopy theory.
Homework Help
Posted by COFFEE on Friday, September 14, 2007 at 8:09pm.
Fiber Linear Density = 1 denier = 1 g/9000m
Fiber Density = 1.14 g/cm^3
Fiber surface area in cm^2/g
Assume that the fiber strand is a uniform cylinder
Surface area = 3,150 cm^2/g
How do I get to the answer? My professor gives us these notes:
Pi*D is the width of the rectangle formed when the cylinder surface is cut vertically and unrolled, and L is the height.
((Pi*D^2)/(4)) * L = volume of the cylinder
((Pi*D^2)/(4)) * L * Density = weight
..please help!
• Surface Area - bobpursley, Friday, September 14, 2007 at 8:30pm
If it is a cylinder of length L:
volume = PI*r^2*L
surface area = 2*PI*r*L
density = mass/volume = 1.14 g/cm^3
fiber area per gram = surface area/mass
= 2*PI*r*L/(1.14 g/cm^3 * PI*r^2*L)
= 2/(1.14*r) cm^2/g
Now for r:
linear density = 1 g/9000 m = mass/L = (volume density)*(volume/L) = 1.14*PI*r^2
so r^2 = 1/(1.14*900000*PI) cm^2 (note 9000 m = 900000 cm)
then take the square root of that to get r.
Final answer = 2/(1.14*r), and it will give you your answer.
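The arithmetic above can be checked directly (a sketch; variable names are mine):

```python
import math

rho = 1.14                          # fiber density, g/cm^3
linear_density = 1.0 / 900000.0     # 1 denier = 1 g per 9000 m = 900000 cm, in g/cm

# mass per length = rho * pi * r^2  =>  r = sqrt(linear_density / (rho * pi))
r = math.sqrt(linear_density / (rho * math.pi))      # fiber radius in cm

# surface area per gram = (2*pi*r*L) / (rho*pi*r^2*L) = 2 / (rho * r)
surface_area_per_gram = 2.0 / (rho * r)              # cm^2/g
```

This reproduces the stated answer of about 3,150 cm^2/g.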
□ Surface Area - goh han liang, Friday, September 5, 2008 at 7:59am
how to do surface area
Russell Bertrand Arthur William
<history of philosophy, biography> orphaned at the age of four, Bertrand Russell (1872-1970) studied (and later taught) both mathematics and philosophy at Cambridge. As the grandson of a British
prime minister, Russell devoted much of his public effort to matters of general social concern. Jailed as a pacifist during the First World War, he later supported the battle against Fascism but
deplored the development of weapons of mass destruction, as is evident in "The Bomb and Civilization" (1945), New Hopes for a Changing World (1951), and his untitled last essay. Throughout his life,
Russell was an outspoken critic of organized religion, detailing its harmful social consequences in "Why I Am Not a Christian" (1927) and defending an agnostic alternative in "A Free Man's Worship"
(1903). His Marriage and Morals (1929) is an attack upon the repressive character of conventional sexual morality. Russell's Autobiography (1967-69) is an excellent source of information, analysis,
and self-congratulation regarding his interesting life. Its pages include his eloquent statements of "What I Have Lived For" and "A Liberal Decalogue". Russell was awarded the Nobel Prize for
literature in 1950. Through an early appreciation of the philosophical work of Leibniz, published in A Critical Exposition of the Philosophy of Leibniz (1900), Russell came to regard logical analysis
as the crucial method for philosophy. In Principia Mathematica (1910-13), written jointly with Alfred North Whitehead, he showed that all of arithmetic could be deduced from a restricted set of
logical axioms, a thesis defended in less technical terms in Russell's Introduction to Mathematical Philosophy (1919). Applying similar analytical methods to philosophical problems, Russell
believed, could resolve disputes and provide an adequate account of human experience. Indeed, his A History of Western Philosophy (1946) tried to show that the philosophical tradition had moved
slowly but steadily toward just such a culmination. The attempt to account clearly for every constituent of ordinary assertions soon proved problematic, however. Russell proposed a ramified theory of
types in order to avoid the self-referential paradoxes that might otherwise emerge from such abstract notions as "the barber who shaves all but only those who do not shave themselves" or "the class
of all classes that are not members of themselves". In the theory of descriptions put forward in On Denoting (1905), Russell argued that proper analysis of denoting phrases enables us to represent
all thought symbolically while avoiding philosophical difficulties about non-existent objects. As his essay on "Vagueness" (1923) shows, Russell long persisted in the belief that adequate
explanations could provide a sound basis for human speech and thought. In similar fashion, the analysis of statements attributing a common predicate to different subjects in "On the Relations of
Universals and Particulars" (1911) convinced Russell that both particulars and universals must really exist. He developed this realistic view further in The Problems of Philosophy (1912). Our
Knowledge of the External World (1914) continues this project by showing how Russell's philosophy of logical atomism can construct a world of public physical objects using private individual
experiences as the atomic facts from which one could develop a complete description of the world. Although Russell's philosophical positions were soon eclipsed by those of Wittgenstein and the
logical positivists, his model of the possibilities for analytic thought remains influential. Recommended Reading: Primary sources: Bertrand Russell, A Critical Exposition of the Philosophy of
Leibniz: With an Appendix of Leading Passages (Routledge, 1993); Alfred North Whitehead and Bertrand Arthur Russell, Principia Mathematica (Cambridge, 1997); Bertrand Russell, The Principles of
Mathematics (Norton, 1996); Bertrand Russell, Introduction to Mathematical Philosophy (Dover, 1993); Bertrand Russell, The Philosophy of Logical Atomism, ed. by David Pears (Open Court, 1985);
Bertrand Russell, The Problems of Philosophy (Oxford, 1998); Bertrand Russell, Why I Am Not a Christian, and Other Essays on Religion and Related Subjects (Simon & Schuster, 1977); Bertrand Russell,
A History of Western Philosophy and Its Connection With Political and Social Circumstances from the Earliest Times to the Present Day (Simon & Schuster, 1975); The Autobiography of Bertrand Russell (Routledge,
2000). Secondary sources: Ray Monk, Russell (Routledge, 1999); Essays on Bertrand Russell, ed. by E. D. Klemke (Illinois, 1971); John G. Slater, Bertrand Russell (St. Augustine, 1994); Peter Hylton,
Russell, Idealism, and the Emergence of Analytic Philosophy (Oxford, 1992); Jan Dejnozka, Bertrand Russell on Modality and Logical Relevance (Ashgate, 1999). Additional on-line information about
Russell includes: McMaster University's The Bertrand Russell Archives. The Bertrand Russell Society Home Page, hosted by John Lenz. A.D. Irvine's article in The Stanford Encyclopedia of Philosophy.
Mark Sainsbury's article in The Oxford Companion to Philosophy. Also see: acquaintance and description, analysis, analytic philosophy, logical atomism, Cambridge philosophy, descriptions, logical
empiricism, English philosophy, impredicative definition, logic, logically proper names, logicism, philosophy of mathematics, mnemic causation, names, the persecution of philosophers, the axiom of
reducibility, referential opacity, the nature of relations, skepticism about religion, Russell's paradox, set theory, 'to be', the verb, the theory of types, and vicious circles. The article in the
Columbia Encyclopedia at Bartleby.com. The thorough collection of resources at EpistemeLinks.com. Eric Weisstein's discussion at Treasure Trove of Scientific Biography. Snippets from Russell in The
Oxford Dictionary of Quotations. Bjoern Christensson's brief guide to Internet material on Russell. A short article in Oxford's Who's Who in the Twentieth Century. An entry in The Oxford Dictionary
of Scientists. Discussion of Russell's logical treatment of mathematics from Mathematical MacTutor. A brief entry in The Macmillan Encyclopedia 2001.
[A Dictionary of Philosophical Terms and Names]
Russell's Attic
<mathematics> An imaginary room containing countably many pairs of shoes (i.e. a pair for each natural number), and countably many pairs of socks. How many shoes are there? Answer: countably many
(map the left shoes to even numbers and the right shoes to odd numbers, say). How many socks are there? Also countably many, we want to say, but we can't prove it without the Axiom of Choice, because
in each pair, the socks are indistinguishable (there's no such thing as a left sock). Although for any single pair it is easy to select one, we cannot specify a general method for doing this.
Russell's Paradox
<mathematics> A logical contradiction in set theory discovered by the British mathematician Bertrand Russell (1872-1970). If R is the set of all sets which don't contain themselves, does R contain
itself? If it does then it doesn't and vice versa. This contradiction infects set theory when it is permissible to speak of "all sets" or set complements without qualification, or when a set is
defined loosely as any collection of any elements, or when every predicate (intension) determines a set (extension). See complement. The paradox stems from the acceptance of the following axiom: If P(x) is a property then
{x : P(x)}
is a set. This is the Axiom of Comprehension (actually an axiom schema). By applying it in the case where P is the property "x is not an element of x", we generate the paradox, i.e. something clearly
false. Thus any theory built on this axiom must be inconsistent.
In lambda-calculus Russell's Paradox can be formulated by representing each set by its characteristic function - the property which is true for members and false for non-members. The set R becomes a
function r which is the negation of its argument applied to itself:
r = \ x . not (x x)
If we now apply r to itself,
r r = (\ x . not (x x)) (\ x . not (x x))
= not ((\ x . not (x x))(\ x . not (x x)))
= not (r r)
So if (r r) is true then it is false and vice versa.
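Viewed classically, the conclusion r r = not (r r) demands a truth value equal to its own negation. A two-line check (my own illustration, not part of the original entry) confirms that no such value exists:

```python
# Search for a boolean b with b == not b -- the value (r r) would need to take.
candidates = [b for b in (True, False) if b == (not b)]
# candidates is empty: no classical truth value satisfies the paradoxical equation.
```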
An alternative formulation is: "if the barber of Seville is a man who shaves all men in Seville who don't shave themselves, and only those men, who shaves the barber?" This can be taken simply as a
proof that no such barber can exist whereas seemingly obvious axioms of set theory suggest the existence of the paradoxical set R.
Zermelo-Fraenkel set theory is one "solution" to this paradox. Another, type theory, restricts sets to contain only elements of a single type (e.g. integers or sets of integers), and no type is allowed to refer to itself, so no set can contain itself.
A message from Russell induced Frege to put a note in his life's work, just before it went to press, to the effect that he now knew it was inconsistent but he hoped it would be useful anyway.
[FOLDOC] and [Glossary of First-Order Logic]
Russell's theory of descriptions
<logic, philosophy of language> Roughly, the view that sentences in which phrases of the form the-so-and-so appear can be reduced to more revealing logical forms in which "the" disappears and in
which there is no longer any temptation to think that such phrases are like proper names. E.g. "The present king of France is bald" becomes "There exists something which is presently king of France,
there is no other such individual, and that individual is bald". Russell's theory has been called a paradigm of philosophy.
Lilburn Calculus Tutor
Find a Lilburn Calculus Tutor
...As an engineer and as a college level tutor, I have worked with algebra 2 type functions and problems for many years. I have been working with spreadsheets for more than 20 years. Over the
years, and still today, I have developed many spreadsheets with multiple complex formulas for business and personal use.
22 Subjects: including calculus, geometry, GRE, ASVAB
...I've taken graduate and undergraduate courses on graph theory, combinatorial theory, and linear programming among others. I've co-taught a course at the Center for Talented Youth at Johns
Hopkins University on applied discrete mathematics. I took Differential Equations in high school as well as in college.
41 Subjects: including calculus, reading, physics, writing
...Student scores have improved upon their re-exam. As a former Teacher of the Year (2008-2009), I believe in teaching students in the manner in which they learn best. Since not everyone is an auditory
learner, I use visual and hands-on learning to help the lesson stick.
21 Subjects: including calculus, geometry, algebra 1, algebra 2
...I majored in electrical engineering and currently work in the power industry. My love for math has grown since grade school which prompted me to take all of the math courses that I could in
college. Before I transferred to Clemson, I attended Newberry College where I maintained a GPA above 3.0 and majored in Math and Computer Science.
14 Subjects: including calculus, geometry, algebra 1, algebra 2
...I'm ready to put my knowledge to work. Let's get started! Elementary science is built off a healthy curiosity for the world around us.
15 Subjects: including calculus, reading, chemistry, biology
Here's the question you clicked on:
Calculate the storage delay in a linearly graded p-n junction with the plot of the doping profile shown below
Math Forum Discussions
Date Subject Author
4/20/04 cube root of a given number vsvasan
4/20/04 Re: cube root of a given number A N Niel
4/20/04 Re: cube root of a given number Richard Mathar
7/14/07 Re: cube root of a given number Sheila
7/14/07 Re: cube root of a given number amzoti
7/14/07 Re: cube root of a given number quasi
7/14/07 Re: cube root of a given number arithmeticae
7/16/07 Re: cube root of a given number Gottfried Helms
7/16/07 Re: cube root of a given number Iain Davidson
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number Iain Davidson
7/21/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number Iain Davidson
7/22/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number Iain Davidson
7/23/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number Iain Davidson
7/24/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number gwh
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/25/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
8/6/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number semiopen
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number semiopen
7/26/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number semiopen
7/26/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number Iain Davidson
7/28/07 Re: cube root of a given number arithmetic
7/28/07 Re: cube root of a given number Iain Davidson
8/5/07 Re: cube root of a given number arithmeticae
8/5/07 Re: cube root of a given number Iain Davidson
8/6/07 Re: cube root of a given number arithmetic
8/6/07 Re: cube root of a given number Iain Davidson
8/6/07 Re: cube root of a given number arithmeticae
8/7/07 Re: cube root of a given number Iain Davidson
8/7/07 Re: cube root of a given number mike3
8/10/07 Re: cube root of a given number arithmetic
8/10/07 Re: cube root of a given number Iain Davidson
8/11/07 Re: cube root of a given number r3769@aol.com
8/11/07 Re: cube root of a given number Iain Davidson
8/11/07 Re: cube root of a given number r3769@aol.com
8/11/07 Re: cube root of a given number Iain Davidson
8/11/07 Re: cube root of a given number r3769@aol.com
8/12/07 Re: cube root of a given number Iain Davidson
8/17/07 Re: cube root of a given number r3769@aol.com
8/12/07 Re: cube root of a given number arithmetic
8/13/07 Re: cube root of a given number Iain Davidson
8/24/07 Re: cube root of a given number arithmetic
8/28/07 Re: cube root of a given number narasimham
1/10/13 Re: cube root of a given number ... Milo Gardner
8/28/07 Re: cube root of a given number arithmetic
8/28/07 Re: cube root of a given number Iain Davidson
8/7/07 Re: cube root of a given number mike3
8/7/07 Re: cube root of a given number Iain Davidson
8/10/07 Re: cube root of a given number arithmetic
8/10/07 Re: cube root of a given number arithmetic
7/28/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/27/07 Re: cube root of a given number arithmetic
7/26/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
7/25/07 Re: cube root of a given number Iain Davidson
7/26/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/16/07 Re: cube root of a given number Proginoskes
7/21/07 Re: cube root of a given number arithmetic
7/22/07 Re: cube root of a given number Proginoskes
7/22/07 Re: cube root of a given number Virgil
7/22/07 Re: cube root of a given number Proginoskes
7/23/07 Re: cube root of a given number arithmetic
7/23/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number Proginoskes
7/16/07 Re: cube root of a given number gwh
7/17/07 Re: cube root of a given number Iain Davidson
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/21/07 Re: cube root of a given number arithmetic
7/24/07 Re: cube root of a given number pomerado@hotmail.com
7/25/07 Re: cube root of a given number orangatang1@googlemail.com
High-precision computation: Mathematical physics and dynamics
- Mathematics of Computation, 2013
Cited by 5 (4 self)
Abstract. We consider some fundamental generalized Mordell–Tornheim–Witten (MTW) zeta-function values along with their derivatives, and explore connections with multiple-zeta values (MZVs). To achieve
this, we make use of symbolic integration, high precision numerical integration, and some interesting combinatorics and special-function theory. Our original motivation was to represent unresolved
constructs such as Eulerian loggamma integrals. We are able to resolve all such integrals in terms of a MTW basis. We also present, for a substantial subset of MTW values, explicit closed-form
expressions. In the process, we significantly extend methods for high-precision numerical computation of polylogarithms and their derivatives with respect to order.
- in the proceedings of PASCO 2010
Cited by 3 (3 self)
Homotopy continuation methods to solve polynomial systems scale very well on parallel machines. In this paper we examine its parallel implementation on multiprocessor multicore workstations using
threads. With more cores we can speed up pleasingly parallel path tracking jobs. In addition, we can compute solutions more accurately in the same amount of time with threads, and thus achieve
quality up. Focusing on polynomial evaluation and linear system solving (the key ingredients of Newton’s method) we can double the accuracy of the results with the quad doubles of QD-2.3.9 in less
than double the time, if we use all available eight cores on our workstation. 1
"... computing system for ..."
, 2013
The digits of π have intrigued both the public and research mathematicians from the beginning of time. This article briefly reviews the history of this venerable constant, and then describes some
recent research on the question of whether π is normal, or, in other words, whether its digits are statistically random in a specific sense. 1 Pi and its day in modern popular culture The number π,
unique among the pantheon of mathematical constants, captures the fascination both of the public and of professional mathematicians. Algebraic constants such as √ 2 are easier to explain and to
calculate to high accuracy (e.g., using a simple Newton iteration scheme). The constant e is pervasive in physics and chemistry, and even appears in financial mathematics. Logarithms are ubiquitous
in the social sciences. But none of these other constants has ever gained much traction in the popular culture. In contrast, we see π at every turn. In an early scene of Ang Lee’s 2012 movie
adaptation of Yann Martel’s award-winning book The Life of Pi, the title character Piscine (“Pi”) Molitor writes hundreds of digits of the decimal expansion of π on a blackboard to impress his
teachers and schoolmates, who chant along with every digit. 1 This has even led to humorous take-offs such as a 2013 Scott Hilburn cartoon entitled “Wife of Pi, ” which depicts a 4 figure seated next
to a π figure, telling their marriage counselor “He’s irrational and he goes on and on. ” [21].
, 2012
We consider some fundamental generalized Mordell–Tornheim–Witten (MTW) zeta-function values along with their derivatives, and explore connections with multiple-zeta values (MZVs). To achieve these
results, we make use of symbolic integration, high precision numerical integration, and some interesting combinatorics and special-function theory. Our original motivation was to represent previously
unresolved constructs such as Eulerian log-gamma integrals. Indeed, we are able to show that all such integrals belong to a vector space over an MTW basis, and we also present, for a substantial
subset of this class, explicit closed-form expressions. In the process, we significantly extend methods for high-precision numerical computation of polylogarithms and their derivatives with respect
to order.
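The simplest MTW values can be explored numerically without any special machinery. A quick double-precision sketch assuming the classical Tornheim-sum identity T(1,1,1) = 2ζ(3); the truncation bound N and the tolerance are illustrative choices, not taken from the paper:

```python
# Double-precision check of the simplest Tornheim (Mordell-Tornheim-Witten)
# value, using the classical identity
#     T(1,1,1) = sum_{m,n >= 1} 1/(m*n*(m+n)) = 2*zeta(3).
# This is a plain truncation, not the high-precision machinery the paper
# develops; all terms are positive, so the truncated sum underestimates.
N = 1000
t = sum(1.0 / (m * n * (m + n))
        for m in range(1, N + 1)
        for n in range(1, N + 1))

two_zeta3 = 2 * 1.2020569031595943      # 2 * zeta(3)
print(f"truncated sum: {t:.5f}  (target 2*zeta(3) = {two_zeta3:.5f})")
```

The tail of the double sum decays only like (log N)/N, which is one reason serious work on these constants needs the high-precision and acceleration techniques the paper describes.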
, 2013
The present article concerns itself with the description of real numbers converter into basic positional notations (binary, denary, hexadecimal) with the controlled accuracy of fractional part of
converted number formation. Here the converter functionality and the peculiarities of implementation of the used algorithms of converting long numbers from one numerical notation into the other
without making use of the processor input/output are specified. Moreover, the analysis of the program action period while converting the numbers of different exponents has been carried out.
"... Abstract—Double precision summation is at the core of numerous important algorithms such as Newton-Krylov methods and other operations involving inner products, but the effectiveness of
summation is limited by the accumulation of rounding errors, which are an increasing problem with the scaling of m ..."
Abstract—Double precision summation is at the core of numerous important algorithms such as Newton-Krylov methods and other operations involving inner products, but the effectiveness of summation is
limited by the accumulation of rounding errors, which are an increasing problem with the scaling of modern HPC systems and data sets. To reduce the impact of precision loss, researchers have proposed
increased- and arbitrary-precision libraries that provide reproducible error or even bounded error accumulation for large sums, but do not guarantee an exact result. Such libraries can also increase
computation time significantly. We propose big integer (BigInt) expansions of double precision variables that enable arbitrarily large summations without error and provide exact and reproducible
results. This is feasible with performance comparable to that of double-precision floating point summation, by the inclusion of simple and inexpensive logic into modern NICs to accelerate performance
on large-scale systems. I.
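The precision-loss problem this abstract targets is easy to demonstrate. The sketch below uses Kahan (compensated) summation as one classic software-only mitigation; it illustrates the problem, not the paper's BigInt/NIC proposal, and the test data are invented:

```python
def kahan_sum(xs):
    """Compensated summation: track the low-order bits lost at each step."""
    total = 0.0
    c = 0.0                     # running compensation
    for x in xs:
        y = x - c               # corrected next term
        t = total + y
        c = (t - total) - y     # recover what was rounded away
        total = t
    return total

# Many small terms after one huge term: naive left-to-right summation
# loses almost all of them; the compensated sum keeps them.
data = [1e16] + [1.0] * 1000
naive = 0.0
for x in data:
    naive += x
print("naive:", naive)
print("kahan:", kahan_sum(data))
```

Note that Kahan summation reduces error but, unlike the exact BigInt expansions proposed in the paper, it still does not guarantee a bit-for-bit reproducible result under reordering.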
, 2013
Abstract. Questions whether numerical simulation is reproducible or not have been reported in several sensitive applications. Numerical reproducibility failure mainly comes from the finite precision
of computer arithmetic. Results of floating-point computation depends on the computer arithmetic precision and on the order of arithmetic operations. Massive parallel HPC which merges, for instance,
many-core CPU and GPU, clearly modifies these two parameters even from run to run on a given computing platform. How to trust such computed results? This paper presents how three classic approaches
in computer arithmetic may provide some first steps towards more numerical reproducibility. 1. Numerical reproducibility: context and motivations As computing power increases towards exascale, more
complex and larger scale numerical simulations are performed in various domains. Questions whether such simulated results are reproducible or not have been reported more or less recently, e.g. in
energy science [1], dynamic weather forecasting [2], atomic or molecular dynamic [3,4], fluid dynamic [5]. This paper focuses on numerical non-reproducibility due to the finite precision of computer
arithmetic – see [6] for other issues about “reproducible research ” in computational mathematics. The following example illustrates a typical failure of numerical reproducibility. In the energetic
field, power system state simulation aims to compute at “real time” a reliable estimate of the bus voltages for a given
A Hill-Climbing Landmarker Generation Algorithm Based on Efficiency and Correlativity Criteria
Daren Ler, Irena Koprinska, and Sanjay Chawla, University of Sydney
For a given classification task, there are typically several learning algorithms available. The question then arises: which is the most appropriate algorithm to apply. Recently, we proposed a new
algorithm for making such a selection based on landmarking - a meta-learning strategy that utilises meta-features that are measurements based on efficient learning algorithms. This algorithm, which
creates a set of landmarkers that each utilise subsets of the algorithms being landmarked, was shown to be able to estimate accuracy well, even when employing a small fraction of the given
algorithms. However, that version of the algorithm has exponential computational complexity for training. In this paper, we propose a hill-climbing version of the landmarker generation algorithm,
which requires only polynomial training time complexity. Our experiments show that the landmarkers formed have similar results to the more complex version of the algorithm.
I recently posted the following information in the talk page of the Wikipedia article on functions, where they were arguing about whether "function" means a set of ordered pairs with the functional
property or a structure with a domain $D$, a codomain $C$, and a graph $G$ which is a subset of $D\times C$ with the functional property.
I collected data from some math books published since 2000 that contain a definition of function; they are listed below. In this list, "typed" means function was defined as going from a set A to a
set B, A was called the domain, and B was not given a name. If "typed" is followed by a word (codomain, range or target) that was the name given the codomain. One book defined a function essentially
as a partial function. Some that did not name the codomain defined "range" in the sense of image. Some of them emphasized that the range/image need not be the same as the codomain.
As far as I know, none of these books said that if two functions had the same domain and the same graph but different codomains they had to be different functions. But I didn't read any of them closely.
My impression is that modern mathematical writing at least at college level does distinguish the domain, codomain, and image/range of a function, not always providing a word to refer to the codomain.
If the page number has a question mark after it, that means I got the biblio data for the book from Amazon and the page number from Google Books, which doesn't give the edition number, so it might be wrong.
I did not look for books by logicians or computing scientists. My experience is that logicians tend to use partial functions and modern computing scientists generally require the codomain to be specified.
Opinion: If you don't distinguish functions as different if they have different codomains, you lose some basic intuition (a function is a map) and you mess up common terminology. For example, the only
function from {1} to {1} is the identity function, and it is surjective. The function from {1} to the set of real numbers (whose image is a single point on the real line) is not the identity function and is not surjective.
Mathematics for Secondary School Teachers
By Elizabeth G. Bremigan, Ralph J. Bremigan, John D. Lorch, MAA 2011
p. 6 (typed)
Oxford Concise Dictionary of Mathematics, ed. Christopher Clapham and James Nicholson, Oxford University Press, 4th ed., 2009.
p. 184, (typed, codomain)
Math and Math-in-school: Changes in the Treatment of the Function Concept in …
By Kyle M. Cochran, Proquest, 2011
p74 (partial function)
Discrete Mathematics: An Introduction to Mathematical Reasoning
By Susanna S. Epp, 4th edition, Cengage Learning, 2010
p. 294? (typed, co-domain)
Teaching Mathematics in Grades 6 – 12: Developing Research-Based …
By Randall E. Groth, SAGE, 2011
p236 (typed, codomain)
Essentials of Mathematics, by Margie Hale, MAA, 2003.
p. 38 (typed, target).
Elements of Advanced Mathematics
By Steven G. Krantz, 3rd ed., Chapman and Hall, 2012
p79? (typed, range)
Bridge to Abstract Mathematics
By Ralph W. Oberste-Vorth, Aristides Mouzakitis, Bonita A. Lawrence, MAA 2012
p76 (typed, codomain)
The Road to Reality by Roger Penrose, Knopf, 2005.
p. 104 (typed, target)
Precalculus: Mathematics for Calculus
By James Stewart, Lothar Redlin, Saleem Watson, Cengage, 2011
p. 143. (typed)
The Mathematics that Every Secondary School Math Teacher Needs to Know
By Alan Sultan, Alice F. Artzt , Routledge, 2010.
p.400 (typed)
More about the definition of function
Maya Incaand commented on my post Definition of "function":
Why did you decide against "two inequivalent descriptions in common use"? Is it no longer true?
This question concerns [1], which is a draft article. I have not promoted it to the standard article in abstractmath because I am not satisfied with some things in it.
More specifically, there really are two inequivalent descriptions in common use. This is stated by the article, buried in the text, but if you read the beginning, you get the impression that there is
only one specification. I waffled, in other words, and I expect to rewrite the beginning to make things clearer.
Below are the two main definitions you see in university courses taken by math majors and grad students. A functional relation has the property that no two distinct ordered pairs have the same first element.
Strict definition: A function consists of a functional relation with specified codomain (the domain is then defined to be the set of first elements of pairs in the relation). Thus if $A$ and $B$ are
sets and $A\subseteq B$, then the identity function $1_A:A\to A$ and the inclusion function $i:A\to B$ are two different functions.
Relational definition: A function is a functional relation. Then the identity and inclusion functions are the same function. This means that a function and its graph are the same thing (discussed in
the draft article).
These definitions are subject to variations:
Variations in the strict definition: Some authors use "range" for "codomain" in the definition, and some don't make it clear that two functions with the same functional relation but different
codomains are different functions.
Variations in the relational definition: Most such definitions state explicitly that the domain and range are determined by the relation (the set of first coordinates and the set of second coordinates, respectively).
There are many other variations in the formalism used in the definition. For example, the strict definition can be formalized (as in Wikipedia) as an ordered triple $(A, B, f)$ where $A$ and $B$ are
sets and $f$ is a functional relation with the property that every element of $A$ is the first element of an ordered pair in the relation.
You could of course talk about an ordered triple $(A,f,B)$ blah blah. Such definitions introduce arbitrary constructions that have properties irrelevant to the concept of function. Would you ever say
that the second element of the function $f(x)=x+1$ on the reals is the set of real numbers? (Of course, if you used the formalism $(A,f,B)$ you would have to say the second element of the function is
its graph!)
It is that kind of thing that led me to use a specification instead of a definition. If you pay attention to such irrelevant formalism, there seem to be many definitions of function. In fact, at the
university level there are only two, the strict definition and the relational definition. The usage varies by discipline and age. Younger mathematicians are more likely to use the strict definition.
Topologists use the strict definition more often than analysts (I think).
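The difference between the two definitions is easy to make concrete in code. In this sketch (the `StrictFunction` class and the finite stand-in for the reals are my own illustrative constructions, not from any of the books surveyed), the identity on {1} and the inclusion of {1} into a larger set have the same graph but come out unequal under the strict definition, and only the former is surjective:

```python
from dataclasses import dataclass

# Strict definition: the codomain is part of the data of the function.
# Relational definition: a function simply *is* its graph.
@dataclass(frozen=True)
class StrictFunction:
    domain: frozenset
    codomain: frozenset
    graph: frozenset            # set of (input, output) pairs

    def is_surjective(self):
        return {b for (_, b) in self.graph} == set(self.codomain)

one = frozenset({1})
reals_stub = frozenset({1, 2, 3})   # finite stand-in for the set of reals

identity = StrictFunction(one, one, frozenset({(1, 1)}))
inclusion = StrictFunction(one, reals_stub, frozenset({(1, 1)}))

print("same function (strict)?  ", identity == inclusion)
print("same graph (relational)? ", identity.graph == inclusion.graph)
print("surjective?              ", identity.is_surjective(), inclusion.is_surjective())
```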
There is also variation in usage.
• Most authors don't tell you which definition they use, and it often doesn't matter anyway.
• If an author defines a function using a formula, there is commonly an implicit assumption that the domain includes everything for which the formula is well-defined. (The "everything" may be
modified by referring to it as an integer, real, or complex function.)
Definitions of function on the web
Below are some definitions of function that appear on the web. I have excluded most definitions aimed at calculus students or below; they often assume you are talking about numbers and formulas. I
have not surveyed textbooks and research papers. That would have to be done for a proper scholarly article about mathematical usage of "function". But most younger people get their knowledge from the
web anyway.
The meaning of the word “superposition”
This is from the Wikipedia article on Hilbert's 13th Problem as it was on 31 March 2012:
[Hilbert's 13th Problem suggests this] question: can every continuous function of three variables be expressed as a composition of finitely many continuous functions of two variables? The
affirmative answer to this general question was given in 1957 by Vladimir Arnold, then only nineteen years old and a student of Andrey Kolmogorov. Kolmogorov had shown in the previous year that
any continuous function of several variables can be constructed with a finite number of three-variable functions. Arnold then expanded on this work to show that only two-variable functions were in fact
required, thus answering Hilbert's question.
In their paper A relation between multidimensional data compression and Hilbert’s 13th problem, Masahiro Yamada and Shigeo Akashi describe an example of Arnold's theorem this way:
Let $f(\cdot,\cdot,\cdot)$ be the function of three variables defined as \(f(x, y, z)=xy+yz+zx\), $x, y, z\in \mathbb{C}$. Then, we can easily prove that there do not exist functions of
two variables $g(\cdot,\cdot)$, $u(\cdot,\cdot)$ and $v(\cdot,\cdot)$ satisfying the following equality: $f(x, y, z)=g(u(x, y),v(x, z))$, $x, y, z\in \mathbb{C}$. This result shows us
that $f$ cannot be represented as any 1-time nested superposition constructed from three complex-valued functions of two variables. But it is clear that the following equality holds:
$f(x, y, z)=x(y+z)+(yz)$, $x, y, z\in \mathbb{C}$. This result shows us that $f$ can be represented as a 2-time nested superposition.
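The 2-time nested superposition quoted above can be sanity-checked numerically at a few sample points; the functions `s` and `p` below are the obvious two-variable building blocks (my own naming):

```python
from itertools import product

# The three-variable f(x,y,z) = xy + yz + zx is assembled from only the
# two-variable functions s(a,b) = a + b and p(a,b) = a * b, nested twice.
s = lambda a, b: a + b
p = lambda a, b: a * b

def f(x, y, z):
    return x * y + y * z + z * x

def nested(x, y, z):
    # inner level: p(x, s(y, z)) and p(y, z); outer level: one more s
    return s(p(x, s(y, z)), p(y, z))

for x, y, z in product([-2, 0, 1.5, 3], repeat=3):
    assert f(x, y, z) == nested(x, y, z)
print("f(x,y,z) = x(y+z) + yz holds on all sampled points")
```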
The article about superposition in All about circuits says:
The strategy used in the Superposition Theorem is to eliminate all but one source of power within a network at a time, using series/parallel analysis to determine voltage drops (and/or currents)
within the modified network for each power source separately. Then, once voltage drops and/or currents have been determined for each power source working separately, the values are all
“superimposed” on top of each other (added algebraically) to find the actual voltage drops/currents with all sources active.
Superposition Theorem in Wikipedia:
The superposition theorem for electrical circuits states that for a linear system the response (Voltage or Current) in any branch of a bilateral linear circuit having more than one independent
source equals the algebraic sum of the responses caused by each independent source acting alone, while all other independent sources are replaced by their internal impedances.
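The theorem is a direct consequence of linearity. The sketch below (with made-up conductance values, not tied to any particular circuit) demonstrates it on a nodal-analysis system A·v = b, where each independent source contributes its own right-hand side:

```python
import numpy as np

# Illustrative nodal-analysis system A @ v = b for a linear resistive network
# (values are made up for demonstration; A plays the role of a conductance matrix).
A = np.array([[ 0.5, -0.2],
              [-0.2,  0.7]])
b_source1 = np.array([1.0, 0.0])   # injection from source 1 acting alone
b_source2 = np.array([0.0, 0.5])   # injection from source 2 acting alone

v1 = np.linalg.solve(A, b_source1)                   # response, source 1 only
v2 = np.linalg.solve(A, b_source2)                   # response, source 2 only
v_both = np.linalg.solve(A, b_source1 + b_source2)   # both sources active

# Superposition: the combined response is the algebraic sum of the two.
assert np.allclose(v_both, v1 + v2)
```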
Quantum superposition in Wikipedia:
Quantum superposition is a fundamental principle of quantum mechanics. It holds that a physical system, such as an electron, exists partly in all its particular, theoretically possible states
(or configurations of its properties) simultaneously; but, when measured, it gives a result corresponding to only one of the possible configurations (as described in the interpretation of quantum
mechanics).
Mathematically, it refers to a property of solutions to the Schrödinger equation; since the Schrödinger equation is linear, any linear combination of solutions to a particular equation will also
be a solution of it. Such solutions are often made to be orthogonal (i.e., the vectors are at right angles to each other), such as the energy levels of an electron. By doing so the overlap energy
of the states is nullified, and the expectation value of an operator (any superposition state) is the expectation value of the operator in the individual states, multiplied by the fraction of the
superposition state that is "in" that state.
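That last sentence can be illustrated with a toy two-level system; the operator and amplitudes below are made up purely for demonstration:

```python
import numpy as np

# Toy Hermitian "operator" with orthonormal eigenstates (illustrative values).
H = np.diag([1.0, 2.0])            # eigenvalues E0 = 1, E1 = 2
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

# Normalized superposition: 25% "in" state e0, 75% "in" state e1.
psi = np.sqrt(0.25) * e0 + np.sqrt(0.75) * e1

expectation = psi @ H @ psi
# Expectation value = sum of (fraction in each state) * (eigenvalue of that state)
assert np.isclose(expectation, 0.25 * 1.0 + 0.75 * 2.0)
```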
The CIO midmarket site says much the same thing as the first paragraph of the Wikipedia Quantum Superposition entry but does not mention the stuff in the second paragraph.
In particular, the Yamada & Akashi article describes the way the functions of two variables are put together as "superposition", whereas the Wikipedia article on Hilbert's 13th calls it composition.
Of course, superposition in the sense of the Superposition Principle is a composition of multivariable functions with the top function being addition. Both of Yamada & Akashi's examples have addition
at the top. But the Arnold theorem allows any continuous function at the top (and anywhere else in the composite).
So one question is: is the word "superposition" ever used for general composition of multivariable functions? This requires the kind of research I proposed in the introduction of The Handbook of
Mathematical Discourse, which I am not about to do myself.
The first Wikipedia article above uses "composition" where I would use "composite". This is part of a general phenomenon of using the operation name for the result of the operation; for example,
students, even college students, sometimes refer to the "plus of 2 and 3" instead of the "sum of 2 and 3". (See "name and value" in abstractmath.org.) Using "composition" for "composite" is analogous
to this, although the analogy is not perfect. This may be a change in progress in the language which simplifies things without doing much harm. Even so, I am irritated when "composition" is used for
the composite.
Quantum superposition seems to be a separate idea. The second paragraph of the Wikipedia article on quantum superposition probably explains the use of the word in quantum mechanics.
Advances in Fuzzy Systems
Volume 2013 (2013), Article ID 164853, 16 pages
Research Article
Analysis of Adaptive Fuzzy Technique for Multiple Crack Diagnosis of Faulty Beam Using Vibration Signatures
Department of Mechanical Engineering, SiKsha ‘O’ Anusandhan University, Bhubaneswar, Orissa 751030, India
Received 22 April 2012; Accepted 7 December 2012
Academic Editor: Ashu M. G. Solo
Copyright © 2013 Amiya Kumar Dash. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
This paper discusses the multicrack detection of structure using fuzzy Gaussian technique. The vibration parameters derived from the numerical methods of the cracked cantilever beam are used to set
several fuzzy rules for designing the fuzzy controller used to predict the crack location and depth. Relative crack locations and relative crack depths are the output parameters from the fuzzy
inference system. The method proposed in the current analysis is used to evaluate the dynamic response of cracked cantilever beam. The results of the proposed method are in good agreement with the
results obtained from the developed experimental setup.
1. Introduction
Beams are one of the most commonly used structural elements in numerous engineering applications and experience a wide variety of static and dynamic loads. Cracks may develop in beam-like structures
due to such loads. Considering the crack as a significant form of such damage, its modeling is an important step in studying the behavior of damaged structures. As stated, beam type structures are
being commonly used in steel construction and machinery industries. Studies based on structural health monitoring for crack detection deal with changes in natural frequencies and mode shapes of the structure.
An analytical study has been performed by Yang et al. [1] on the free and forced vibration of inhomogeneous Euler-Bernoulli beams containing open edge cracks. Analytical solutions are obtained for
cantilever beams with different end conditions to evaluate the dynamic response of the beam due to the edge crack. Orhan [2] has performed a free and forced vibration analysis of a cracked beam in order
to identify the cracks in a cantilever beam. Their study reveals that free vibration analysis provides more suitable information for the detection of cracks than the forced vibration analysis. Damage
in a cracked structure has been analyzed using genetic algorithm technique by Vakil-Baghmisheh et al. [3]. For modeling the cracked-beam structure, an analytical model of a cracked cantilever beam
has been utilized, and natural frequencies are obtained through numerical methods. A genetic algorithm is utilized to monitor the possible changes in the natural frequencies of the structure.
Theoretical and experimental dynamic behaviors of different multibeams systems containing a transverse crack have been performed by Saavedra and Cuitio [4]. A new cracked stiffness matrix is deduced
based on flexibility, and this can be used subsequently in the FEM analysis of crack systems. Bakhary et al. [5] used Artificial Neural Network (ANN) for damage detection. In his analysis, an ANN
model is created by applying Rosenblueth’s point estimate method verified by Monte Carlo simulation. The results have demonstrated that the statistical ANN approach gives more reliable identification
of structural damage. Friswell et al. [6] have applied genetic algorithm to the problem of damage detection using vibration data. The objective is to identify the position of one or more damage sites
in a structure and to estimate the extent of the damage. A comprehensive analysis of the stability of a cracked beam subjected to a follower compressive load is presented by Wang [7]. The vibration
analysis on such cracked beam has been conducted to identify the critical compression load for instability based on the variation of the first two resonant frequencies of the beam. Chondros et al. [8
] have developed a continuous cracked beam vibration theory for the lateral vibration of cracked Euler-Bernoulli beams with single-edge or double-edge open cracks using the Hu-Washizu-Barr
variational formulation.
A new method for natural frequency analysis of beam with an arbitrary number of cracks has been developed by Khiem and Lien [9] on the basis of the transfer matrix method and rotational spring model
of crack. Cam et al. [10] have performed a study to obtain information about the location and depth of the cracks in cracked beam. Experimental and simulations results obtained are in good agreement.
Zheng and Kessissoglou [11] have presented a method based on finite element method for detection of crack in faulty structural member. The results obtained from the proposed method are validated
using experimental analysis. Tada et al. [12] have provided the basis for computation of compliance matrix for damage detection following fracture mechanics theory. Sekhar and Prabhu [13] have
derived a method for crack detection in a cracked shaft using finite element analysis using correct expression for strain energy release rate function. Hossain et al. [14] have presented an
investigation for comparative performance of intelligent system-like genetic algorithms (GAs) and adaptive neurofuzzy inference system (ANFIS) algorithms for identification of fault in an active
vibration control (AVC) system. A comparative performance of the proposed method is presented and discussed through a set of experiments. Ranjbaran et al. [15] in their paper have formulated a method
for vibration analysis of a beam assuming the beam to be nonuniform. The vibration characteristics of the beam are computed and the results are compared with other methods. Zhou and Biegalski [16]
have proposed a method to analyze the vibration signatures of a deck truss bridge with cracks at the gusset plate connecting the lower lateral bracing. In-service monitoring has been performed to
measure the vibration properties of the truss and the lateral bracing members to avoid resonance with the excitation frequency. Wada et al. [18] have proposed a fuzzy control method with triangular
type membership functions using an image processing unit to control the level of granules inside a hopper. They have stated that the image processing technique can be used as a detecting element, and
with the use of fuzzy reasoning methods, good process responses are obtained. Pawar et al. [17] have used a genetic fuzzy system to identify the crack depth location in a composite matrix cracking
model. As described by them, the genetic fuzzy system combines the uncertainty characteristics of fuzzy logic with the learning ability of genetic algorithm. Parhi [19] has designed a mobile robot
navigation control system using fuzzy logic. Fuzzy rules embedded in the controller of a mobile robot enable it to avoid obstacles in a cluttered environment that includes other mobile robots.
In the present study, a finite element model for a cracked beam element is developed, and the results from theoretical and finite element analyses have been used to set the fuzzy rules. The fuzzy
rules are used for designing the fuzzy inference system based on Gaussian membership function which is subsequently applied for prediction of damage in a faulty structure. The theoretical, finite
element and fuzzy analysis are done to study the response of a cantilever beam in the presence of cracks. Theoretical results are compared with the experimental, fuzzy, and FEM results. A close
agreement between the results has been observed.
2. Theoretical Analysis
2.1. Local Flexibility of a Cracked Beam under Bending and Axial Loading
The presence of a transverse surface crack of depth “” and “” on beam of width “” and height “” introduces a local flexibility, which can be defined in matrix form, the dimension of which depends on
the degrees of freedom. Here, a matrix is considered. A cantilever beam is subjected to axial force and bending moment , shown in Figure 1(a), which gives coupling with the longitudinal and
transverse motion. The cross-sectional view and the front view of the beam are shown in Figures 1(b) and 3, respectively.
The strain energy release rate at the fractured section can be written as follows [12]: where and are the stress intensity factors of mode I (opening of the crack) for loads and , respectively. The
values of stress intensity factors from earlier studies [12] are where expressions for and are as follows: Let be the strain energy due to the crack. Then, from Castigliano’s theorem, the additional
displacement along the force is The strain energy will have the form where is the strain energy density function.
From (1) and (4), thus we have The flexibility influence coefficient will be, by definition, and can be written as From (9), calculating (=), and , we get The local stiffness matrix can be obtained
by taking the inversion of compliance matrix. That is, Figure 2 shows the variation of dimensionless compliances to that of relative crack depth.
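The matrix-inversion step can be sketched numerically; the compliance values below are placeholders, not those computed in the paper:

```python
import numpy as np

# Placeholder 2x2 compliance matrix for the cracked section (coupling the
# axial force and bending moment); the entries are illustrative only.
C = np.array([[2.0e-6, 0.5e-6],
              [0.5e-6, 4.0e-6]])

K_local = np.linalg.inv(C)  # local stiffness matrix of the cracked section

# Sanity check: K_local @ C should recover the identity matrix.
assert np.allclose(K_local @ C, np.eye(2))
```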
2.2. Analysis of Vibration Characteristics of the Cracked Beam
A cantilever beam of length “”, width “”, and depth “”, with a crack of depth “” and “” at a distance “” and “”, respectively, from the fixed end, is considered (shown in Figure 1). Taking , , and as
the amplitudes of longitudinal vibration for the sections before, in-between, and after the crack, , , and are the amplitudes of bending vibration for the same sections.
The normal function for the system can be defined as where, , , , , , , , , and , . Constants are to be determined from boundary conditions. The boundary conditions of the cantilever beam in
consideration are At the cracked section Also, at the cracked section , we have Multiplying both sides of the previous equation by , we get Similarly, Multiplying both sides of the previous equation
by , we get where Similarly, at the crack section , we can have the expression The normal functions in (13) along with the boundary conditions as mentioned earlier yield the characteristic equation
of the system as This determinant is a function of natural circular frequency , the relative locations of the crack , and the local stiffness matrix which in turn is a function of the relative crack
depth .
The results of the theoretical analysis for the first three mode shapes for uncracked and cracked beams are shown in Figure 4.
3. Analysis of Cracked Beam Using Finite Element Method (FEM)
In the following section, FEM is applied to the vibration analysis of a cracked cantilever beam (Figure 5). The relationship between the displacement and the forces can be expressed as where overall
flexibility matrix can be expressed as The displacement vector in (23) is due to the crack.
The forces acting on the beam element for FEM analysis are shown in Figure 5.
Under this system, the flexibility matrix of the intact beam element can be expressed as where .
The displacement vector in (25) is for the intact beam.
The total flexibility matrix of the cracked beam element can now be obtained by Through the equilibrium conditions, the stiffness matrix of a cracked beam element can be obtained as follows [13]:
where is the transformation matrix and is expressed as The results of the finite element analysis for the first three mode shapes of the cracked beam are compared with that of the numerical analysis
of the cracked beam and are presented in Figure 6.
4. Analysis of the Fuzzy Controller
The fuzzy controller developed has got six input parameters and two output parameters.
The linguistic terms used for the inputs are as follows:
relative first natural frequency = "fnf";
relative second natural frequency = "snf";
relative third natural frequency = "tnf";
average relative first mode shape difference = "fmd";
average relative second mode shape difference = "smd";
average relative third mode shape difference = "tmd".
The linguistic terms used for the outputs are as follows:
first relative crack location = "rcl1" and first relative crack depth = "rcd1";
second relative crack location = "rcl2" and second relative crack depth = "rcd2".
The fuzzy controller used in the present text is shown in Figure 7(a). The Gaussian membership functions are shown pictorially in Figure 7(b). The linguistic terms for the Gaussian membership
functions, used in the fuzzy controller, are described in Table 1.
4.1. Fuzzy Mechanism for Crack Detection
Based on the previous fuzzy subsets, the fuzzy control rules are defined in a general form in which each antecedent index runs from 1 to 10, because "fnf," "snf," "tnf," "fmd," "smd," and "tmd" have ten membership functions each.
From expression (29), two sets of rules can be written: According to the usual fuzzy logic control method [19], a factor is defined for the rules as follows: where , freq[j], and are the first,
second, and third relative natural frequencies of the cantilever beam with crack, respectively; , , and are the average first, second, and third relative mode shape differences of the cantilever beam
with crack, respectively. By applying the composition rule of inference [19], the membership values of the relative crack location and relative crack depth, and , can be computed as The overall
conclusion by combining the outputs of all the fuzzy rules can be written as follows: The crisp values of relative crack location and relative crack depth are computed using the centre of gravity
method [19] as
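The Gaussian fuzzification and centre-of-gravity defuzzification described above can be sketched as follows; the membership centers, widths, and firing strengths are illustrative values, not the paper's tuned parameters:

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Gaussian membership function used to fuzzify a value over a universe."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Illustrative output universe for relative crack location (0 to 1).
universe = np.linspace(0.0, 1.0, 501)

# Suppose two fired rules clip two output membership functions (made-up values).
mu_rule1 = np.minimum(0.7, gaussian_mf(universe, center=0.3, sigma=0.05))
mu_rule2 = np.minimum(0.4, gaussian_mf(universe, center=0.6, sigma=0.05))
aggregated = np.maximum(mu_rule1, mu_rule2)   # combine rule outputs (max)

# Centre-of-gravity defuzzification: membership-weighted mean of the universe.
crisp = np.sum(universe * aggregated) / np.sum(aggregated)
```

The crisp output lands between the two rule centers, pulled toward the more strongly fired rule.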
4.2. Fuzzy Controller for Finding out Crack Depth and Crack Location
The inputs to the fuzzy controller are relative first natural frequency, relative second natural frequency, relative third natural frequency, average relative first mode shape difference, average
relative second mode shape difference, and average relative third mode shape difference. The outputs from the fuzzy controller are relative crack depth and relative crack location. Twenty of the several hundred fuzzy rules are listed in Table 2. The fuzzy controller results when rule 6 and rule 19 are activated from Table 2 are shown in Figure 8.
5. Experimental Setup
Experiments are performed to determine the natural frequencies and mode shapes for different crack depths on an Aluminum beam specimen. The experimental setup is shown in Figure 9. The amplitude of
transverse vibration at different locations along the length of the Aluminum beam is recorded by positioning the vibration pickup and tuning the vibration generator at the corresponding resonant
frequencies. These results for first three modes are plotted in Figure 10. Corresponding numerical results are also presented in the same graph for comparison.
6. Discussion
The results from fuzzy controller and the information obtained from theoretical, finite element and experimental analysis of the cracked cantilever beam are depicted later.
Fuzzy logic systems promise an efficient way for damage assessment in a cracked structure. They are able to treat uncertain and imprecise information; they make use of knowledge in the form of
linguistic rules. At first, the theoretical expressions for the cracked and uncracked beams are developed for calculation of natural frequencies, mode shapes following correct expression of strain
energy release rate. From Figure 2, it is observed that the compliances increase with the increase in relative crack depth. Finite element analysis is being performed on the cracked beam element
(Figure 5) to find out the vibration characteristics. The comparison between results obtained from theoretical analysis for cracked and uncracked beams with the magnified view at the vicinity of
crack location is presented in Figure 4. Results from finite element analysis (FEA) and numerical analysis are compared for both uncracked and cracked beams and are shown in Figure 6. For validation
of the information obtained from various methodologies for analysis of the cracked cantilever beam, an experimental setup is developed as shown in Figure 9. Experiments are performed on the Aluminum
beam specimen to estimate the first three mode shapes which are compared with the mode shapes obtained from the analysis as mentioned earlier. The vibration signatures (natural frequencies, mode
shapes) are used to establish the fuzzy rules for designing fuzzy controller based on Gaussian membership functions that are depicted in Figures 7(a) and 7(b). The linguistic terms of fuzzy rules in
the present fuzzy controller are given in Table 1. Some of the actual rules made for the fuzzy controller of the present investigation are listed in Table 2. The outputs of the Gaussian fuzzy
controller obtained by activating rule 6 and rule 19 from Table 2 are presented in Figure 8. The mode shapes obtained from experimental and numerical, finite element, fuzzy analysis for cracked and
uncracked beams are compared graphically in Figure 10. Some of the predicted outputs of the developed fuzzy controller and corresponding numerical finite element, and experimental results are
presented in Table 3. It is observed that the results of all analyses are in good agreement.
7. Conclusions
In the present section, the results obtained from the different analyses lead to the following conclusions.
A clear-cut deviation of the mode shapes and natural frequencies can be detected between the cracked and uncracked beams in the vicinity of the crack location. The comparison of results derived from
theoretical and finite element method (FEM) analyses for the cracked structure shows a good agreement. The fuzzy controller developed with Gaussian membership functions is designed with the help of the
vibration signatures obtained from numerical and finite element analyses.
The first three relative natural frequencies and the first three mode shapes in dimensionless forms are the input parameters to the fuzzy inference system. Relative crack location and relative crack
depth are the outputs from the system. The comparison of results between experimental and fuzzy analyses shows a close agreement. From the comparison of results, it is observed that the developed
fuzzy inference system can predict the relative crack location and relative crack depth in a faster and accurate way, thereby decreasing a considerable amount of computational time. The proposed
method can be used as an online condition monitoring tool, and in the future, hybrid technique can be developed for a faster and more efficient way for fault detection in the domain of dynamic
vibrating structure.
References
1. J. Yang, Y. Chen, Y. Xiang, and X. L. Jia, "Free and forced vibration of cracked inhomogeneous beams under an axial force and a moving load," Journal of Sound and Vibration, vol. 312, no. 1-2, pp. 166–181, 2008.
2. S. Orhan, "Analysis of free and forced vibration of a cracked cantilever beam," NDT and E International, vol. 40, no. 6, pp. 443–450, 2007.
3. M. T. Vakil-Baghmisheh, M. Peimani, M. H. Sadeghi, and M. M. Ettefagh, "Crack detection in beam-like structures using genetic algorithms," Applied Soft Computing Journal, vol. 8, no. 2, pp. 1150–1160, 2008.
4. P. N. Saavedra and L. A. Cuitio, "Crack detection and vibration behavior of cracked beams," Computers and Structures, vol. 79, no. 16, pp. 1451–1459, 2001.
5. N. Bakhary, H. Hao, and A. J. Deeks, "Damage detection using artificial neural network with consideration of uncertainties," Engineering Structures, vol. 29, no. 11, pp. 2806–2815, 2007.
6. M. I. Friswell, J. E. T. Penny, and S. D. Garvey, "A combined genetic and eigensensitivity algorithm for the location of damage in structures," Computers and Structures, vol. 69, no. 5, pp. 547–556, 1998.
7. Q. Wang, "A comprehensive stability analysis of a cracked beam subjected to follower compression," International Journal of Solids and Structures, vol. 41, no. 18-19, pp. 4875–4888, 2004.
8. T. G. Chondros, A. D. Dimarogonas, and J. Yao, "A continuous cracked beam vibration theory," Journal of Sound and Vibration, vol. 215, no. 1, pp. 17–34, 1998.
9. N. T. Khiem and T. V. Lien, "A simplified method for natural frequency analysis of a multiple cracked beam," Journal of Sound and Vibration, vol. 245, no. 4, pp. 737–751, 2001.
10. E. Cam, S. Orhan, and M. Lüy, "An analysis of cracked beam structure using impact echo method," NDT and E International, vol. 38, no. 5, pp. 368–373, 2005.
11. D. Y. Zheng and N. J. Kessissoglou, "Free vibration analysis of a cracked beam by finite element method," Journal of Sound and Vibration, vol. 273, no. 3, pp. 457–475, 2004.
12. H. Tada, P. C. Paris, and G. R. Irwin, The Stress Analysis of Cracks Handbook, Del Research Corporation, Hellertown, Pa, USA, 1973.
13. A. S. Sekhar and B. S. Prabhu, "Crack detection and vibration characteristics of cracked shafts," Journal of Sound and Vibration, vol. 157, no. 2, pp. 375–381, 1992.
14. M. A. Hossain, A. A. M. Madkour, K. P. Dahal, and H. Yu, "Comparative performance of intelligent algorithms for system identification and control," Journal of Intelligent Systems, vol. 17, no. 4, pp. 313–329, 2008.
15. A. Ranjbaran, S. Hashemi, and A. R. Ghaffarian, "A new approach for buckling and vibration analysis of cracked column," International Journal of Engineering A, vol. 21, no. 3, pp. 225–230, 2008.
16. Y. E. Zhou and A. E. Biegalski, "Problem diagnosis and retrofit of lateral bracing system of a truss bridge," in Proceedings of the 2008 Structures Congress: Crossing the Borders, vol. 314, April 2008.
17. P. M. Pawar, K. Venkatesulu Reddy, and R. Ganguli, "Damage detection in beams using spatial Fourier analysis and neural networks," Journal of Intelligent Material Systems and Structures, vol. 18, no. 4, pp. 347–359, 2007.
18. K. Wada, N. Hayano, and H. Oka, "Application of the fuzzy control method for level control of a hopper," Advanced Powder Technology, vol. 2, no. 3, pp. 163–172, 1991.
19. D. R. Parhi, "Navigation of mobile robots using a fuzzy logic controller," Journal of Intelligent and Robotic Systems, vol. 42, no. 3, pp. 253–273, 2005.
Math Forum Discussions
Topic: irrational numbers and pi
Replies: 5 Last Post: Apr 27, 1998 5:10 AM
Re: irrational numbers and pi
Posted: Apr 23, 1998 5:06 PM
>A student wanted to know if the digits of an irrational number such as
>sqr(2) can be found as well. What a great question. I'm not sure I know
>the answer. Any thoughts?
I don't think so. If all of the digits of root(2) could be found together in
pi, then you could write pi = a + root(2)/b, where a is the number up to
the start of the root(2) digits and b is some power of 10. That would make
pi irrational but no longer transcendental, since pi would be a root of the
equation (b(X-a))^2 - 2 = 0.
Michael Thwaites <Michael.Thwaites@ucop.edu>
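The algebra in this post can be checked with a computer algebra system. The prefix a and power of 10 b below are purely hypothetical stand-ins, and the polynomial is written as (b(X-a))^2 - 2 = 0 so that it matches the decomposition pi = a + root(2)/b:

```python
from sympy import sqrt, Rational, symbols, expand

# Hypothetical values: suppose sqrt(2)'s digits began after a 6-digit prefix.
a = Rational(3141592, 1000000)   # made-up prefix of pi's digits (illustrative)
b = 10**6                        # the corresponding power of 10
X = symbols('X')

candidate = a + sqrt(2) / b      # what pi would equal under the claim
poly = (b * (X - a))**2 - 2      # a polynomial with rational coefficients

# The candidate satisfies the polynomial, so it is algebraic -- but pi is
# transcendental, so pi cannot have this form.
assert expand(poly.subs(X, candidate)) == 0
```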
Date Subject Author
4/23/98 irrational numbers and pi John W. Threlkeld
4/23/98 Re: irrational numbers and pi Michael Thwaites
4/23/98 Re: irrational numbers and pi Guy F. Brandenburg
4/24/98 Re: irrational numbers and pi John Conway
4/24/98 Re: irrational numbers and pi Timothy Poston
4/27/98 Re: irrational numbers and pi John Conway
Mead, CO Science Tutor
Find a Mead, CO Science Tutor
...Please feel free to contact me with any questions you may have. I look forward to helping you achieve success in your studies!I have taught organic chemistry at the high school level for the
past seven years. It was part of the curriculum for a survey of chemistry class.
7 Subjects: including biology, chemistry, anatomy, physical science
...My name is Ethan, and I am looking for students of all competencies who want to learn more about the sciences, hone their critical thinking skills, and/or further develop their literary and
writing skill-sets. When I am learning something new or challenging, I place great value in establishing a...
19 Subjects: including physics, biology, chemistry, reading
...I have extensive course work in chemistry, biology, politics, economics, and physics with experience in many other fields. I have two years of experience tutoring general chemistry for majors
and non-majors at DTCC and have worked as a general biology teaching assistant at CU Boulder that includ...
39 Subjects: including microbiology, genetics, ecology, physical science
...I have home schooled both children (ages 8 and 11) and they excel in elementary math; both test above grade level. Recently, I also tutored a high school senior in AP Calculus. I have spent
years tutoring math and science, as well as volunteering with children of all ages.
7 Subjects: including physical science, chemistry, algebra 1, elementary math
...I have a BA Anthropology/Human Biology. A MA Human Biology and field activity from Oxford. I have 25 years teaching / tutoring Biology.
3 Subjects: including biology, physical science, geology
Finding Numbers Using Their Product
Date: 3/24/96 at 13:29:49
From: RDIANNE GREEN
Subject: math help
Dear Doctor Math,
I am home schooled and I have been unable to answer the following
algebra questions.
I am hoping you can help give me insight into these problems:
1. The product of two consecutive integers is 240. What are the
integers? (Note that the product of two positive integers will
be positive, and that the product of two negative integers will
also be positive.) "Reject roots that do not fulfill the
conditions of the problem."
2. A lot has an area of 4,500 square feet. Its length is 40 feet
longer than its width. What are the dimensions of the lot?
For the first question I am supposed to give the answer in the
four step process.
Thanks a lot in advance for your help.
Hope to get your response soon,
Date: 3/24/96 at 14:11:55
From: Doctor Steven
Subject: Re: math help
Well, so as I don't actually do the work for you I'll give you
some VERY similar problems and work through them.
#1. Say the product of two consecutive integers is 182.
Step 1) Call the first integer x, then the second integer is x+1.
Step 2) Set up the equation given by the information, namely
x*(x+1) = 182.
Step 3) Simplify to get:
x^2 + x - 182 = 0.
Step 4) Use the quadratic formula to get:
x = - 14, or x = 13.
So the numbers are -14, -13, or 13, 14.
#2. Say a lot has 6000 square feet, and its length is 40 feet
longer than its width.
Step 1) Call its width x, then its length is x+40.
Step 2) Set up the equation to get:
x*(x+40) = 6000.
Step 3) Simplify to get:
x^2 + 40x - 6000 = 0.
Step 4) Use the quadratic formula to get
x = 60, or x = -100.
Since negative lengths don't make sense, discard the -100 value,
and you get the dimensions of this lot as 60 ft * 100 ft.
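If you ever want to double-check answers like these, a few lines of Python will run step 4 for you (this is just a sketch I'm adding; the helper name solve_quadratic is made up):

```python
import math

def solve_quadratic(a, b, c):
    """Return the sorted real roots of a*x^2 + b*x + c = 0 (step 4 of the process)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    r = math.sqrt(disc)
    return sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})

# #1: x*(x+1) = 182  ->  x^2 + x - 182 = 0
print(solve_quadratic(1, 1, -182))    # [-14.0, 13.0] -> pairs (-14, -13) and (13, 14)

# #2: x*(x+40) = 6000  ->  x^2 + 40x - 6000 = 0; reject the negative width
print(solve_quadratic(1, 40, -6000))  # [-100.0, 60.0] -> lot is 60 ft by 100 ft
```

Remember the last part of the four-step process: reject any root that does not fit the conditions of the problem (like the negative width above).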
-Doctor Steven, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/58630.html","timestamp":"2014-04-16T11:37:35Z","content_type":null,"content_length":"6808","record_id":"<urn:uuid:0ba83d84-5f65-4aeb-a83f-222a6c167e84>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Noncausal Gauss Markov random fields: Parameter structure and estimation
Results 1 - 10 of 21
- IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 1997
Cited by 554 (14 self)
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are
supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training
data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models
and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters
that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word
- IEEE Trans. Image Processing , 1997
Cited by 28 (10 self)
Discontinuity-preserving Bayesian image restoration typically involves two Markov random fields: one representing the image intensities/gray levels to be recovered and another one signaling
discontinuities/edges to be preserved. The usual strategy is to perform joint maximum a posteriori (MAP) estimation of the image and its edges, which requires the specification of priors for both
fields. In this paper, instead of taking an edge prior, we interpret discontinuities (in fact their locations) as deterministic unknown parameters of the compound Gauss--Markov random field (CGMRF),
which is assumed to model the intensities. This strategy should allow inferring the discontinuity locations directly from the image with no further assumptions. However, an additional problem
emerges: The number of parameters (edges) is unknown. To deal with it, we invoke the minimum description length (MDL) principle; according to MDL, the best edge configuration is the one that allows
the shortest description of the image and its edges. Taking the other model parameters (noise and CGMRF variances) also as unknown, we propose a new unsupervised discontinuity-preserving image
restoration criterion. Implementation is carried out by a continuation-type iterative algorithm which provides estimates of the number of discontinuities, their locations, the noise variance, the
original image variance, and the original image itself (restored image). Experimental results with real and synthetic images are reported.
- IEEE Trans. Inform. Theory , 2000
Cited by 19 (1 self)
Abstract—Hyperspectral sensors are passive sensors that simultaneously record images for hundreds of contiguous and narrowly spaced regions of the electromagnetic spectrum. Each image corresponds to
the same ground scene, thus creating a cube of images that contain both spatial and spectral information about the objects and backgrounds in the scene. In this paper, we present an adaptive anomaly
detector designed assuming that the background clutter in the hyperspectral imagery is a three-dimensional Gauss–Markov random field. This model leads to an efficient and effective algorithm for
discriminating man-made objects (the anomalies) in real hyperspectral imagery. The major focus of the paper is on the adaptive stage of the detector, i.e., the estimation of the Gauss–Markov random
field parameters. We develop three methods: maximum-likelihood; least squares; and approximate maximum-likelihood. We study these approaches along three directions: estimation error performance,
computational cost, and detection performance. In terms of estimation error, we derive the Cramér–Rao bounds and carry out Monte Carlo simulation studies that show that the three estimation
procedures have similar performance when the fields are highly correlated, as is often the case with real hyperspectral imagery. The approximate maximum-likelihood method has a clear advantage from
the computational point of view. Finally, we test extensively with real hyperspectral imagery the adaptive anomaly detector incorporating either the least squares or the approximate
maximum-likelihood estimators. Its performance compares very favorably with that of the RX algorithm, an alternative detector commonly used with multispectral data, while reducing by up to an order
of magnitude the associated computational cost. Index Terms—Anomaly detection, Cramér–Rao bounds, Gauss– Markov random field, hyperspectral imagery, least squares, maximum
- IEEE Transactions on image processing , 2001
Cited by 17 (1 self)
Abstract—Hyperspectral sensors collect hundreds of narrow and contiguously spaced spectral bands of data. Such sensors provide fully registered high resolution spatial and spectral images that are
invaluable in discriminating between man-made objects and natural clutter backgrounds. The price paid for this high resolution data is extremely large data sets, several hundred of Mbytes for a
single scene, that make storage and transmission difficult, thus requiring fast onboard processing techniques to reduce the data being transmitted. Attempts to apply traditional maximum likelihood
detection techniques for in-flight processing of these massive amounts of hyperspectral data suffer from two limitations: first, they neglect the spatial correlation of the clutter by treating it as
spatially white noise; second, their computational cost renders them prohibitive without significant data reduction like by grouping the spectral bands into clusters, with a consequent loss of
spectral resolution. This paper presents a maximum likelihood detector that successfully confronts both problems: rather than ignoring the spatial and spectral correlations, our detector exploits
them to its advantage; and it is computationally expedient, its complexity increasing only linearly with the number of spectral bands available. Our approach is based on a Gauss–Markov random field
(GMRF) modeling of the clutter, which has the advantage of providing a direct parameterization of the inverse of the clutter covariance, the quantity of interest in the test statistic. We discuss in
detail two alternative GMRF detectors: one based on a binary hypothesis approach, and the other on a ‘single ’ hypothesis formulation. We analyze extensively with real hyperspectral imagery data
(HYDICE and SEBASS) the performance of the detectors, comparing them to a benchmark detector, the RX-algorithm. Our results show that the GMRF ‘single ’ hypothesis detector outperforms significantly
in computational cost the RX-algorithm, while delivering noticeable detection performance improvement. Index Terms—Gauss–Markov random field, hyperspectral sensor imagery, maximum-likelihood
detection, ‘single ’ hypothesis test. I.
- IEEE Trans. on Signal Processing, http://arxiv.org/pdf/0708.0242
Cited by 16 (6 self)
Abstract—This paper presents a distributed Kalman filter to estimate the state of a sparsely connected, large-scale,-dimensional, dynamical system monitored by a network of sensors. Local Kalman
filters are implemented on-dimensional subsystems,, obtained by spatially decomposing the large-scale system. The distributed Kalman filter is optimal under an th order Gauss–Markov approximation to
the centralized filter. We quantify the information loss due to this th-order approximation by the divergence, which decreases as increases. The order of the approximation leads to a bound on the
dimension of the subsystems, hence, providing a criterion for subsystem selection. The (approximated) centralized Riccati and Lyapunov equations are computed iteratively with only local communication
and low-order computation by a distributed iterate collapse inversion (DICI) algorithm. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and
consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter. Nowhere in the network, storage, communication, or computation of-dimensional vectors and
matrices is required; only dimensional vectors and matrices are communicated or used in the local computations at the sensors. In other words, knowledge of the state is itself distributed. Index
Terms—Distributed algorithms, distributed estimation, information filters, iterative methods, Kalman filtering, large-scale systems, matrix inversion, sparse matrices. I.
- IEEE Trans. Image Process , 1999
Cited by 13 (7 self)
Abstract — In the physical sciences, e.g., meteorology and oceanography, combining measurements with the dynamics of the underlying models is usually referred to as data assimilation. Data
assimilation improves the reconstruction of the image fields of interest. Assimilating data with algorithms like the Kalman–Bucy filter (KBf) is challenging due to their computational cost which for
two-dimensional (2-D) fields is of where is the linear dimension of the domain. In this paper, we combine the block structure of the underlying dynamical models and the sparseness of the measurements
(e.g., satellite scans) to develop four efficient implementations of the KBf that reduce its computational cost to in the case of the block KBf and the scalar KBf, and to in the case of the local
block KBf (lbKBf) and the local scalar KBf (lsKBf). We illustrate the application of the lbKBf to assimilate altimetry satellite data in a Pacific equatorial basin. Index Terms—Computed imaging, data
assimilation, Kalman– Bucy filter, Gauss–Markov fields, physical oceanography, satellite altimetry. I.
- Advances in Neural Information Processing Systems 19 , 2007
Cited by 9 (0 self)
This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the
class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a
(generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We
show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g.,
wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes. 1
- Journal of Statistical Computing and Simulation
Cited by 8 (2 self)
The problem of simulating from distributions with intractable normalizing constants has received much attention in the recent literature. In this paper, we propose an asymptotic algorithm, the
so-called double Metropolis-Hastings (MH) sampler, for tackling this problem. Unlike other auxiliary variable algorithms, the double MH sampler removes the need for exact sampling, the auxiliary
variables being generated using MH kernels, and thus can be applied to a wide range of problems for which exact sampling is not available. While for the problems for which exact sampling is
available, it can typically produce the same accurate results as the exchange algorithm, but using much less CPU time. The new method is illustrated by various spatial models.
, 1996
Cited by 5 (1 self)
Multiple views of a scene, obtained from cameras positioned at distinct viewpoints, can provide a viewer with the benefits of added realism, selective viewing, and improved scene understanding. The
importance of these signals is evidenced by the recently proposed Multi-View Profile (MVP) extension to the MPEG-2 video compression standard, and their explicit incorporation into the future MPEG-4
standard. However, multi-view compression implementations typically rely on single-view image sequence model assumptions. We hypothesize (and demonstrate) that impressive system bandwidth reduction
can be achieved by utilizing displacement vector field and image intensity models tuned to the special characteristics of multi-view video signals. This thesis focuses on the predictive coding of
non-periodic, i.e., arbitrary, multi-view video signals for the applications of simulated motion parallax and viewer-specified degree of stereoscopy. To facilitate their practical use, we desire
algorithms tha...
- IEEE TRANS. INFO. THEORY , 1997
Cited by 4 (3 self)
This paper considers the achievable accuracy in jointly estimating the parameters of a real valued two-dimensional homogeneous random field with mixed spectral distribution, from a single observed
realization of it. On the basis of a 2-D Wold-like decomposition, the field is represented as a sum of mutually orthogonal components of three types: purelyindeterministic, harmonic, and evanescent.
An exact form of the Cramer-Rao lower bound on the error variance in jointly estimating the parameters of the different components is derived. It is shown that the estimation of the harmonic
component is decoupled from that of the purely-indeterministic and evanescent components. Moreover, the bound on the parameters of the purely-indeterministic and evanescent components is independent
of the harmonic component. Numerical evaluation of the bounds provides some insight into the effects of various parameters on the achievable estimation accuracy. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1346541","timestamp":"2014-04-18T14:38:15Z","content_type":null,"content_length":"44149","record_id":"<urn:uuid:b6d81bf9-6df6-419d-bebd-1e8c0bc5f660>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carmichael’s lambda function
Results 1 - 10 of 18
- ANN. OF MATH , 1982
- Math.Comp.,70
Cited by 18 (11 self)
Abstract. Consider the pseudorandom number generator u_n ≡ u_{n−1}^e (mod m), 0 ≤ u_n ≤ m − 1, n = 1, 2, ..., where we are given the modulus m, the initial value u_0 = ϑ and the exponent e. One case of particular interest is when the modulus m is of the form pl, where p, l are different primes of the same magnitude. It is known from work of the first and third authors that for moduli m = pl, if the period of the sequence (u_n) exceeds m^{3/4+ε}, then the sequence is uniformly distributed. We show rigorously that for almost all choices of p, l it is the case that for almost all choices of ϑ, e, the period of the power generator exceeds (pl)^{1−ε}. And so, in this case, the power generator is uniformly distributed. We also give some other cryptographic applications, namely, to ruling out the cycling attack on the RSA cryptosystem and to so-called time-release crypto. The principal tool is an estimate related to the Carmichael function λ(m), the size of the largest cyclic subgroup of the multiplicative group of residues modulo m. In particular, we show that for any Δ ≥ (log log N)³, we have λ(m) ≥ N exp(−Δ) for all integers m with 1 ≤ m ≤ N, apart from at most N exp(−0.69 (Δ log Δ)^{1/3}) exceptions.
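The power generator described in this abstract is easy to experiment with directly; here is a toy sketch (the function name and the small sample parameters are my own, not from the paper):

```python
def power_generator_period(theta, e, m):
    """Iterate u_n = u_{n-1}^e mod m from u_0 = theta; return the eventual cycle length."""
    seen = {}
    u, n = theta % m, 0
    while u not in seen:
        seen[u] = n
        u = pow(u, e, m)  # modular exponentiation
        n += 1
    return n - seen[u]   # distance between the two visits of the repeated value

# e = 2 is the BBS-style squaring generator mentioned elsewhere in this listing
print(power_generator_period(2, 2, 11))  # 4
```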
- Acta Arith
Cited by 7 (2 self)
We consider two standard pseudorandom number generators from number theory: the linear congruential generator and the power generator. For the former, we are given integers e, b, n (with e, n> 1) and
a seed u0, and we compute the sequence
, 2005
Cited by 6 (2 self)
A common pseudorandom number generator is the power generator: x ↦ x^ℓ (mod n). Here, ℓ, n are fixed integers at least 2, and one constructs a pseudorandom sequence by starting at some residue mod n and iterating this ℓth power map. (Because it is the easiest to compute, one often takes ℓ = 2; this case is known as the BBS generator, for Blum,
- J. NUM. THEORY , 2003
Cited by 5 (3 self)
We obtain an asymptotic formula for the number of square-free values among p − 1, for primes p ≤ x, and we apply it to derive the following asymptotic formula for L(x), the number of square-free values of the Carmichael function λ(n) for 1 ≤ n ≤ x: L(x) = (κ + o(1)) x / ln^{1−a} x, where a ≈ 0.37395... is the Artin constant, and κ ≈ 0.80328... is another absolute constant.
- Acta Arith
Cited by 4 (1 self)
We study the average multiplicative order of elements modulo n and show that its behaviour is very close to the behaviour of the largest possible multiplicative order of elements modulo n given by the Carmichael function λ(n). 2000 Mathematics Subject Classification: Primary 11N37, 11N64; Secondary 20K01
, 2002
Cited by 4 (1 self)
Assuming the Generalized Riemann Hypothesis, we prove the following: If b is an integer greater than one, then the multiplicative order of b modulo N is larger than N 1−ǫ for all N in a density one
subset of the integers. If A is a hyperbolic unimodular matrix with integer coefficients, then the order of A modulo p is greater than p 1−ǫ for all p in a density one subset of the primes. Moreover,
the order of A modulo N is greater than N 1−ǫ for all N in a density one subset of the integers.
Cited by 2 (2 self)
We outline some cryptographic applications of the recent results of the authors about small values of the Carmichael function and the period of the power generator of pseudorandom numbers. Namely, we show rigorously that almost all randomly selected RSA moduli are safe against the so-called cycling attack and we also provide some arguments in support of the reliability of the timed-release crypto scheme, which has recently been proposed by R. L. Rivest, A. Shamir and D. A. Wagner.
1. Introduction. For an integer n ≥ 1 we define the Carmichael function λ(n) as the largest possible order of elements of the unit group in the residue ring modulo n. More explicitly, for a prime power p^k we write λ(p^k) = p^{k−1}(p − 1) if p ≥ 3 or k ≤ 2, and λ(2^k) = 2^{k−2} if k ≥ 3; and finally, λ(n) = lcm(λ(p_1^{k_1}), ..., λ(p_ν^{k_ν})), where n = p_1^{k_1} ··· p_ν^{k_ν} is the prime number factorization of n. Various upper and lower bounds for λ(n) have been...
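The explicit prime-power formula for λ(n) in this abstract can be checked directly in code; a minimal sketch using trial-division factorization (small n only; the function name lam is my own):

```python
from math import gcd

def lam(n):
    """Carmichael function λ(n), following the prime-power formula quoted above."""
    # Trial-division factorization of n
    factors = {}
    m, p = n, 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1

    # λ(p^k) = p^(k-1)(p-1) if p >= 3 or k <= 2; λ(2^k) = 2^(k-2) if k >= 3
    def lam_pk(p, k):
        if p == 2 and k >= 3:
            return 2 ** (k - 2)
        return p ** (k - 1) * (p - 1)

    # λ(n) = lcm over the prime-power parts
    result = 1
    for p, k in factors.items():
        v = lam_pk(p, k)
        result = result * v // gcd(result, v)
    return result

print(lam(8), lam(15), lam(561))  # 2 4 80
```

Since λ(n) is the exponent of the unit group, a^λ(n) ≡ 1 (mod n) holds for every a coprime to n; with the Carmichael number 561, for instance, pow(2, lam(561), 561) gives 1.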
, 1995
Cited by 2 (0 self)
We extend the method due originally to Loh and Niebuhr for the generation of Carmichael numbers with a large number of prime factors to other classes of pseudoprimes, such as Williams's pseudoprimes
and elliptic pseudoprimes. We exhibit also some new Dickson pseudoprimes as well as superstrong Dickson pseudoprimes.
Cited by 2 (2 self)
Abstract. We study asymptotic properties of periods and transient phases associated with modular power sequences. The latter are simple; the former are vaguely related to the reciprocal sum of square-free integer kernels. Let Z_n denote the ring of integers modulo n. Define S(x) to be the sequence {x^k}_{k=0}^∞ for each x ∈ Z_n. We wish to understand the periodicity properties of S(x), that is, the statistics of
σ(x) = the period of S(x) = the least m ≥ 1 for which x^{k+m} = x^k for all sufficiently large k,
τ(x) = the transient phase of S(x) = the least ℓ ≥ 0 for which x^{k+σ(x)} = x^k for all k ≥ ℓ.
For example, the unique x with (σ, τ) = (1, 0) is x = 1. If (σ, τ) = (2, 0), then x is a square root of unity; if (σ, τ) = (3, 0), then x is a cube root of unity [1]. If τ = 0 (with no condition placed on σ), then x is relatively prime to n. Hence the number of such x is #{x ∈ Z_n : x^k = 1 for some k ≥ 1} = ϕ(n), where ϕ is the Euler totient function and, asymptotically [1, 2], Σ_{n≤N} ϕ(n) ∼ (3/π²)N² = (0.303963550927...)N² as N → ∞. As another example, if (σ, τ) = (1, 1), then x is an idempotent. The number of such x, including 0 and 1, is #{x ∈ Z_n : x² = x} = 2^{ω(n)}, where ω(n) denotes the number of distinct prime factors of n and [1, 3] Σ_{n≤N} 2^{ω(n)} ∼ (6/π²)N · ln N as N → ∞. More difficult examples appear in the following sections. As in [1], we make no claim of originality: our purpose is only to gather relevant formulas in one place. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2025009","timestamp":"2014-04-19T23:41:35Z","content_type":null,"content_length":"35318","record_id":"<urn:uuid:02d8a682-bd14-48aa-bc1b-a74cdc03374d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
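The period σ(x) and transient phase τ(x) described in this abstract can be computed by direct iteration for small n; a brute-force sketch (the function name is my own):

```python
def period_transient(x, n):
    """Return (σ, τ) for S(x) = {x^k mod n}: cycle length and transient phase."""
    seen = {}            # value -> first exponent k at which it appeared
    value, k = 1, 0      # x^0 = 1
    while value not in seen:
        seen[value] = k
        value = (value * x) % n
        k += 1
    tau = seen[value]    # exponent where the cycle re-enters
    sigma = k - tau      # cycle length
    return sigma, tau

print(period_transient(1, 10))  # (1, 0): the unique x with this signature
print(period_transient(5, 10))  # (1, 1): 5 is an idempotent mod 10
print(period_transient(2, 12))  # (2, 2): 2^k mod 12 = 1, 2, 4, 8, 4, 8, ...
```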
Help with finding the derivative of inverse functions
January 23rd 2008, 04:41 PM
Help with finding the derivative of inverse functions
Hey I need some help with the following problems:
Find (f^-1)'(a):
1. f(x)= 2x^3 + 3x^2 + 7x + 4, a=4
2. f(x)= x^3 + 3sinx + 2cosx, a=2
January 23rd 2008, 05:43 PM
If $\mathrm{f}^{-1}(a)=b$, then $a=\mathrm{f}(b)$. Find b from the given a.
Now use this rule: If $y=\mathrm{f}(x)$, $(\mathrm{f}^{-1})'(y)=\frac{1}{\mathrm{f}'(x)}$. Hence $(\mathrm{f}^{-1})'(a)=\frac{1}{\mathrm{f}'(b)}$.
For example, for (1): $\mathrm{f}(x)=4\;\Rightarrow\;2x^3 + 3x^2 + 7x + 4=4\;\Rightarrow\;x(2x^2+3x+7)=0\;\Rightarrow\;x=0$ (the quadratic has no real roots). So $(\mathrm{f}^{-1})'(4)=\frac{1}{\mathrm{f}'(0)}=\frac{1}{7}$, since $\mathrm{f}'(x)=6x^2+6x+7$.
Similarly for (2): $\mathrm{f}(x)=2\;\Rightarrow\;x^3 + 3\sin{x} + 2\cos{x}=2$. By inspection, $x=0$ is a solution. Hence $(\mathrm{f}^{-1})'(2)=\frac{1}{\mathrm{f}'(0)}=\frac{1}{3}$, since $\mathrm{f}'(x)=3x^2+3\cos{x}-2\sin{x}$. | {"url":"http://mathhelpforum.com/calculus/26698-help-finding-derivative-inverse-functions-print.html","timestamp":"2014-04-20T23:49:26Z","content_type":null,"content_length":"7521","record_id":"<urn:uuid:76e3cdfd-2f86-4746-b246-5b531407ac82>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
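You can also sanity-check the rule $(\mathrm{f}^{-1})'(a)=\frac{1}{\mathrm{f}'(b)}$ numerically by estimating $\mathrm{f}'$ with a central difference (a quick sketch; the function names are mine):

```python
import math

def f1(x):
    return 2 * x**3 + 3 * x**2 + 7 * x + 4           # problem 1: f(0) = 4

def f2(x):
    return x**3 + 3 * math.sin(x) + 2 * math.cos(x)  # problem 2: f(0) = 2

def inverse_derivative(f, b, h=1e-6):
    """(f^-1)'(a) = 1/f'(b) where a = f(b); f'(b) estimated by a central difference."""
    fprime = (f(b + h) - f(b - h)) / (2 * h)
    return 1.0 / fprime

print(inverse_derivative(f1, 0.0))  # ≈ 0.142857... = 1/7
print(inverse_derivative(f2, 0.0))  # ≈ 0.333333... = 1/3
```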
A few more log questions
March 31st 2010, 09:34 PM #1
Super Member
Dec 2008
A few more log questions
Need help on the following:
1)Make the independent variable the subject of the equation:
$u(t) = \frac{ae^t + b}{ce^t + d}$
This is what i have done, however i don't understand how the book's answer got $ln(\frac{du-b}{-cu+a})$
$u(ce^t + d) = ae^t +b$
$uce^t + ud - ae^t - b =0$
$e^t(uc - a) = -ud+b$
$e^t = \frac{-(ud - b)}{uc - a}$
$t=ln{-(ud - b)}{(uc - a)}$
2)Simply: $\frac{(e^x)^2}{e^{2x-1}}$
Not sure how to start this.
3)Solve for x given that: $ln(x) = ln(a) +n * ln(t)$
This is what i have done:
$ln(x) = 1 + n * ln(at)$
$e^{ln(x)} = e^{ln(at^{1+n}}$
book's answer is $at^n$
4)Solve for x given: $ln(x) = ln(a) + kt$
This is what i have done:
$e^{ln(x)} = e^{ln(a)} + e^{kt}$
$x = a + e^{kt}$
book's answer says its $ae^{kt}$
Hello Paymemoney
Need help on the following:
1)Make the independent variable the subject of the equation:
$u(t) = \frac{ae^t + b}{ce^t + d}$
This is what i have done, however i don't understand how the book's answer got $ln(\frac{du-b}{-cu+a})$
$u(ce^t + d) = ae^t +b$
$uce^t + ud - ae^t - b =0$
$e^t(uc - a) = -ud+b$
$e^t = \frac{-(ud - b)}{uc - a}$
You are correct down to here. But what's this?
$t=ln{-(ud - b)}{(uc - a)}$
Surely you mean:
$t=\ln\left(\frac{-(ud - b)}{(uc - a)}\right)$
which is equivalent to the answer in the book. (Just multiply top-and-bottom by $-1$.)
2)Simply: $\frac{(e^x)^2}{e^{2x-1}}$
Not sure how to start this.
Just use the rules of indices:
$(e^x)^2 = e^{2x}$ and $\frac{e^m}{e^n} = e^{m-n}$
to write:
$\frac{(e^x)^2}{e^{2x-1}} = \frac{e^{2x}}{e^{2x-1}} = e^{2x-(2x-1)} = e^1 = e$
3)Solve for x given that: $ln(x) = ln(a) +n * ln(t)$
I'm not sure how you got this:
$ln(x) = 1 + n * ln(at)$
On the RHS, use the rule:
$n\ln (y) = \ln(y^n)$
to get:
$\ln(x) = \ln(a) +n\ln(t)$
$\Rightarrow \ln(x) = \ln(a) + \ln(t^n)$
And then use:
$\ln(a) + \ln(b) = \ln(ab)$
to get:
$\Rightarrow \ln(x) = \ln(at^n)$
$\Rightarrow x = at^n$
4)Solve for x given: $ln(x) = ln(a) + kt$
This is what i have done:
$e^{ln(x)} = e^{ln(a)} + e^{kt}$
No. You must combine the RHS into a single logarithm:
$\ln(x) = \ln(a) + kt$
$\Rightarrow \ln(x) = \ln(a) + kt\ln(e)$, using the fact that $\ln(e) = 1$.
$=\ln(a) +\ln(e^{kt})$
$\Rightarrow x = ae^{kt}$
March 31st 2010, 10:45 PM #2 | {"url":"http://mathhelpforum.com/algebra/136805-few-more-log-questions.html","timestamp":"2014-04-17T05:33:38Z","content_type":null,"content_length":"45256","record_id":"<urn:uuid:a2339f56-0c7d-4df2-9661-8f14fc42ec04>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
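All four answers above can be sanity-checked numerically; a quick sketch with arbitrary sample values (my choice, not from the thread):

```python
import math

# Sample values chosen just to exercise the algebra
a, b, c, d, t = 2.0, 3.0, 5.0, 7.0, 0.4

# 1) u(t) = (a e^t + b)/(c e^t + d)  inverts to  t = ln((du - b)/(-cu + a))
u = (a * math.exp(t) + b) / (c * math.exp(t) + d)
t_back = math.log((d * u - b) / (-c * u + a))
print(abs(t_back - t) < 1e-9)                                             # True

# 2) (e^x)^2 / e^(2x-1) = e^(2x-(2x-1)) = e, for any x
x = 1.7
print(math.isclose(math.exp(x) ** 2 / math.exp(2 * x - 1), math.e))       # True

# 3) ln(x) = ln(a) + n*ln(t)  =>  x = a * t^n
n = 3
print(math.isclose(math.exp(math.log(a) + n * math.log(t)), a * t ** n))  # True

# 4) ln(x) = ln(a) + k*t  =>  x = a * e^(kt)
k = 0.25
print(math.isclose(math.exp(math.log(a) + k * t), a * math.exp(k * t)))   # True
```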
Cartoon: Read this week's Pawprint
108 Comment
The following is the News-Review’s annual listing of holiday services hosted by local churches. Riverhead Sunday, April 20: Sunrise service, Indian Island Park, 7 a.m. Baiting Hollow Congregational
Friday, April 18: Good Friday service, 7 p.m. Sunday, April 20: Dawn service on church’s back lawn, 6:30 a.m.; Easter worship, 10 a.m. Calvary Baptist Church Saturday, […]
Photos posted to Instagram from around the North Fork this week. Share your photo with us directly by using #northforker. See the photos on northforker.com
Write each arithmetic series as the sum of terms, find each sum.
• one year ago
this one's different
how do i find a_1
is it -5?
[drawing] put k=10 and get answer
does a_1 =-5 ?
\(a_1\) is the result of the equation when you plug your first term into it. Your equation is \(100-5k\). The series starts when k = 1, so plug 1 in for k and solve. That's \(a_1\).
100-5k is general term for k series so first term=95
what about a_n, [drawing]
its wrong i know...
its wrong refer to my solution above in figure
@Life: You need to distinguish between the formula for the sum of the series and the equation you're given (in this case, 100-5k). You need to find the value for \(a_{10}\), and then plug that
into your series summation formula.
\(a_1 = 95\) (you solved that already). \(a_{n} = a_{10} = 100-5(10)\) Plug \(a_{1}\), \(a_{10}\) (which is the *value* you get—*not* 10), and \(n\) into your \(S_{n}\) formula and solve.
i don't understand how "I" got 95, i never did, but i understand everything after that, i thought a1=-5
Your equation is \(100-5k\), so \(a_{1}\) will be the result you get when you plug 1 in for \(k\).
\(a_{2}\) will be the result you get when you plug in 2 for \(k\). \(a_{3}\) will be the result you get when you plug in 3 for \(k\). ... \(a_{10}\) will be the result you get when you plug in 10
for \(k\).
@Life: Have you ever done computer programming? If so, do you know what a for loop is?
That's all a series is. For each value from what's below sigma to what's above it (in this case, between k=1 and k=10), perform the equation. Once you know each one, add them all together.
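Written out as the loop just described, plus the standard closed form S_n = n(a_1 + a_n)/2, the computation is a few lines of Python (a sketch, not anyone's posted code):

```python
# Sum of the series  sum_{k=1}^{10} (100 - 5k), as a literal for loop.
total = 0
for k in range(1, 11):        # k runs 1, 2, ..., 10
    total += 100 - 5 * k      # terms: 95, 90, ..., 50

# Closed form for an arithmetic series: S_n = n (a_1 + a_n) / 2
a1, a10, n = 95, 50, 10
closed_form = n * (a1 + a10) // 2  # 725, matching the answer in the thread
```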
my final answer, s10=725, AM I RIGHT? :)
sorry for being dumb lol, i understand it now though, so it was all worth it
You're not being dumb. You are showing effort (not just "what's the answer?"), which is appreciated. :)
Good luck with the rest of them. I have faith that you'll get them all. Good night. :)
Good night :D
I really do appreciate you sticking with me until i understood it. It was probably a good half an hour before I finally understood it, so thanks a bunch
Rhododendron Park, WA Math Tutor
Find a Rhododendron Park, WA Math Tutor
...I am detail oriented and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience. I have worked as a laboratory chemist and as an
instructor at Tacoma Community College for several years. I have also taught high school level sciences and mathematics.
12 Subjects: including geometry, ASVAB, algebra 1, algebra 2
...From learning shapes, colors, letters and numbers to learning simple addition and subtraction, I have some experience with montessori methods using manipulatives and other objects to help
teach the younger elementary school age all that they need to build on in the upper elementary ages. I also ...
46 Subjects: including ACT Math, trigonometry, SAT math, algebra 1
...My primary programming language is currently Java. Regardless of the subject, I would say I am effective at recognizing patterns. I love sharing any shortcuts or tips that I discover.I have
taken 2 quarters of Discrete Structures (Mathematics) at University of Washington, Tacoma.
16 Subjects: including algebra 1, algebra 2, calculus, chemistry
...My past teaching experience includes four years instructing beginning and intermediate college astronomy laboratories, as well as individual student tutoring for those courses. I have found
that the best method for teaching is determined by paying attention to the learning styles and abilities o...
5 Subjects: including prealgebra, precalculus, algebra 1, geometry
With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a
quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including discrete math, Mathematica, algebra 1, algebra 2
Related Rhododendron Park, WA Tutors
Rhododendron Park, WA Accounting Tutors
Rhododendron Park, WA ACT Tutors
Rhododendron Park, WA Algebra Tutors
Rhododendron Park, WA Algebra 2 Tutors
Rhododendron Park, WA Calculus Tutors
Rhododendron Park, WA Geometry Tutors
Rhododendron Park, WA Math Tutors
Rhododendron Park, WA Prealgebra Tutors
Rhododendron Park, WA Precalculus Tutors
Rhododendron Park, WA SAT Tutors
Rhododendron Park, WA SAT Math Tutors
Rhododendron Park, WA Science Tutors
Rhododendron Park, WA Statistics Tutors
Rhododendron Park, WA Trigonometry Tutors
Nearby Cities With Math Tutor
Alderton, WA Math Tutors
Burnett, WA Math Tutors
Cedarview, WA Math Tutors
Crocker, WA Math Tutors
Cumberland, WA Math Tutors
Electron, WA Math Tutors
Kanaskat, WA Math Tutors
Krain, WA Math Tutors
Lake Tapps, WA Math Tutors
Meeker, WA Math Tutors
Morganville, WA Math Tutors
Osceola, WA Math Tutors
Ponderosa Estates, WA Math Tutors
Prairie Ridge, WA Math Tutors
Wabash, WA Math Tutors
Posts by
Total # Posts: 1,315
Snow skiing is a very exciting sport. what type of sentence pattern is this? S-v, s-v-io, s-v-io-do, e-v-s, s-lv-pn, s-lv-pa, s-v-do-oc, question pattern
Solve the following problem. Round answer to two decimal places when necessary. I=240 W=1/3 A=1*W A=
Critical thinking is very important in making decisions that impact an organization s growth and survival. Which of the following traits of a critical thinker is essential in this process?
what do you call a river or stream that runs or flows really fast?
what mass of acetylene, C2H2, will be produced from the reaction of 90 g of calcium carbide, CaC2, with water in the following reaction? CaC2+2H2O-->C2H2+Ca(OH)2
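This is a 1:1 mole-ratio problem. A short Python sketch of the arithmetic (the molar masses are rounded textbook values, so the last digit is approximate):

```python
# CaC2 + 2 H2O -> C2H2 + Ca(OH)2: one mole of C2H2 per mole of CaC2.
M_CaC2 = 40.08 + 2 * 12.01      # g/mol, rounded atomic masses
M_C2H2 = 2 * 12.01 + 2 * 1.008  # g/mol

moles = 90.0 / M_CaC2        # moles of CaC2 reacting
mass_C2H2 = moles * M_C2H2   # about 36.6 g of acetylene
```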
An ethereal solution contains benzoic acid, 9-fluorenone, and ethyl 4-aminobenzoate. Why would benzoic acid and 9-fluorenone not be extracted from this ethereal solution by aqueous HCl?
algebra 2
What did the baby porcupine say when it backed into a cactus?
Imagine that you have been hired as an IT consultant and have been asked to write a recommendation for an OS. This start-up company primarily develops games and has approached you to recommend an OS
for its 25 employees who use desktop PCs. The primary purpose of the desktops ...
algebra 2
Solve the inequality (x-3)/(2x+1)>0
There are 40 tags numbered one through 40 in a bag. What is the probability that Glen will randomly pick a multiple of 5 and then a multiple of 9 without replacing the first tag?
Algebra 1
1. 5x + 9 = 3x + 1
2. 14 + 7n = 14n + 28
3. 22(g - 1) = 2g + 8
4. d + 12 - 3d = 5d - 6
5. 4(m - 2) = 2(3m + 3)
6. (4y - 8) = 2(y + 4)
7. 5a - 2(4a + 5) = 7a
8. 11w + 2(3w - 1) = 15w
9. 4(...
Changing the current changes the ___ of an electromagnet. Choices are: A. strength B. direction C. charge D. both a and b
I need help trying to write out these binomial expansions: they all need to be raised to the 8th power. Thanks 1. (x + y) 2. (w + z) 3. (x - y) 4. (2a + 3b) - Now explain how your answer for #1 could
be used as a formula to help you answer each of the other items. In each case...
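One way to see how #1 acts as a template for the others is to generate all the coefficients from the binomial theorem. A small Python sketch (the helper function and its name are mine):

```python
from math import comb

def coeffs_power8(fa, fb):
    # Coefficients of (fa*X + fb*Y)^8: term j is C(8, j) * fa^(8-j) * fb^j.
    return [comb(8, j) * fa ** (8 - j) * fb ** j for j in range(9)]

xy = coeffs_power8(1, 1)        # (x + y)^8 -> 1, 8, 28, 56, 70, 56, 28, 8, 1
xminusy = coeffs_power8(1, -1)  # (x - y)^8: same magnitudes, alternating signs
ab = coeffs_power8(2, 3)        # (2a + 3b)^8: substitute 2a and 3b into #1
```

Item 2, (w + z)^8, has exactly the same coefficient list as (x + y)^8, which is the point of using #1 as a formula.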
I need help with this in order to correctly complete my other related problems. Thank you so much in advance =) Arrange the following 0.1 M solutions in order of increasing pH and state why you
placed each solution in that position: NaCH3COO, HCl, HCN, NaOH, NH3, NaCN, KNO3, H...
Q: Which governmental position was NOT held by James F. Byrnes? Choices are: A. U.S. Secretary of State B. U.S. Supreme Court justice C. governor of South Carolina D. vice president of the United States
Q: What action did the United States NOT take against Japan before the attack on Pearl Harbor? Choices are: A. cut off the sale of oil and metal to Japan B. sent the Japanese ambassador back to Japan
C. froze all the Japanese assets in the United States D. discontinued negotia...
What is the sum of (x^2-3x+2) and (5x^2-3x-8)? show your work help me!!!
Two prisms are similar. The surface area of one is four times the surface area of the other. What is the ratio of the corresponding dimensions?
A snail is trying to climb up a cup left by some students that littered. Each hour the snail slithers up 2`` but when he is tired he sleeps and slides 1``. How many hours until he reaches the top of the cup with a height of 10``? Is this an integer question?
math patterning
Look over the following sequence of numbers. What are the next 3 numbers in the pattern?what is the pattern rule? -1,0,1,0,1,2,3,6,11,20,37
I need to find the approximate surface area of a cone with the R=5 and L=6. I thought I figured it out by 3.14(5)(6) which will give me 94.2 I then would take 94.2 +3.14(5)squared and I came up with
172.7, but my answer should be either 20m squared, 98m squared, 118m squared o...
Let L1 be the line with the vector equation r= (-2,3) +s(-1,3), seR. Let L2 be with the line with parametric equation x=3t+2, y=t-7, teR. Are L1 and L2 perpendicular? How do you know?
Western experience
Oops I meant c.
Western experience
I was thinking D by the way because calculating technology doesn't have much to do with communication
Western experience
All of the following are late-twentieth century technological developments that have revolutionized communication except A) development of the world wide web B) development of the personal computer
C) development of calculating machinery D) development of the microchip E) deve...
I have the fetal pig test on muscles in AP Biology class coming up soon. And I was wondering if anyone knows any good online sites where they give pictures and I can practice labeling the muscles on the pig. Thanks
PLEASE HELP!!!
actually that is the kind of way i did it. I almost did 2 experiments on a total of 150 people. Thanks anyway
PLEASE HELP!!!
thank you
PLEASE HELP!!!
Hi, I'm doing a science fair and they're asking us to put pictures of us doing experiments or to bring in a model. In my project I experimented on people so was I supposed to take pictures of people
performing the tasks? I already did the project and experiments and d...
Why was South Carolina placed under military control? I think it is because it refused to ratify the 13th Amendment, but I am not sure.
If a football field's length is 120 yds. and its width is 53 1/3 yards and a soccer field's length is 130 yds and its width is 100 yards how many times the area of the small field is the area of the larger field? I multiplied the length x width and got 6,400 for area o...
physical science
absolute zero is the same as
normally warms up faster when heat is applied
fun facts
what are some fun facts on multitasking? if not that what are some websites to find good fun facts
science fair
in my science fair report school layout they want us to write a paragraph on testing my hypothesis and before that they asked us to write procedure. i wrote the procedure now do i rewrite it for
testing my hypothesis?
1. Simplify, and write in base 3: (9^x * 3^(4x))^2. Answer: 3^(12x)
2. Differentiate and simplify:
a. y = 3x^4 - 4e^x + 8. Answer: 4(3x^3 - e^x)
b. y = (3 + 4e^x)^5. Answer: 20e^x (3 + 4e^x)^4
c. y = (e^x - 1) / x^2. Answer: (xe^x - 2e^x + 2) / x^3
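These answers can be spot-checked numerically, comparing each claimed derivative against a central-difference estimate. A Python sketch; the test point x = 0.7 and the tolerances are arbitrary:

```python
import math

def dnum(f, x, h=1e-6):
    # Central-difference estimate of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7  # arbitrary test point

# 1. (9^x * 3^(4x))^2 should equal 3^(12x)
lhs = (9 ** x * 3 ** (4 * x)) ** 2
rhs = 3 ** (12 * x)

# 2b. d/dx (3 + 4e^x)^5 should equal 20 e^x (3 + 4e^x)^4
b_claimed = 20 * math.exp(x) * (3 + 4 * math.exp(x)) ** 4
b_numeric = dnum(lambda t: (3 + 4 * math.exp(t)) ** 5, x)

# 2c. d/dx (e^x - 1)/x^2 should equal (x e^x - 2 e^x + 2)/x^3
c_claimed = (x * math.exp(x) - 2 * math.exp(x) + 2) / x ** 3
c_numeric = dnum(lambda t: (math.exp(t) - 1) / t ** 2, x)
```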
triangular prism with base perimeter 24 cm, base area 24 cm^2, and height 15 cm. Choices are: A. 606 cm^2 B. 384 cm^2 C. 408 cm^2 D. 87 cm^2
rectangular prism with base perimeter 30 cm, base area 50 cm^2, and height 150 cm. Choices are: A. 7,560 cm^2 B. 4,600 cm^2 C. 1,800 cm^2 D. 4,550 cm^2
Spam Corp. is financed entirely by common stock and has a beta of 1.0. The firm is expected to generate a level, perpetual stream of earnings and dividends. The stock has a price-earnings ratio of 8
and a cost of equity of 12.5%. The company's stock is selling for $50. Now...
Calculus- please help
Q. Find the minimum value of Q = x^2 y subject to the constraint 2x^2 + 4xy = 294. It's the derivative method.
A brick of mass 0.5 kg possesses 100 J of gravitational potential energy when held at a certain height above the ground. How long will it take to reach the ground when it is released?
assume that adults have IQ scores that are normally distributed with a mean of 100 and a standard deviation of 20. find the probability that a randomly selected adult has an IQ less than 20.
a statistics professor plans classes so carefully that the lengths of her classes are uniformly distributed between 46.0 and 56.0 minutes. find the probability that a given class period runs less than 50.5 minutes
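Both of the probability questions above reduce to single CDF evaluations. A stdlib-only Python sketch, building the normal CDF from math.erf (variable names are mine):

```python
import math

def normal_cdf(x, mu, sigma):
    # P(X <= x) for X ~ N(mu, sigma^2), via the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# IQ ~ N(100, 20^2): P(IQ < 20) is the area below z = (20 - 100)/20 = -4.
p_iq = normal_cdf(20, 100, 20)  # roughly 3.2e-5, essentially zero

# Class length ~ Uniform(46, 56): P(X < 50.5) = (50.5 - 46) / (56 - 46)
p_class = (50.5 - 46.0) / (56.0 - 46.0)  # 0.45
```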
whats c?
A fish pond at the local park is a regular hexagon. a. Write a formula for the perimeter of the pond in terms of the length of a side. Explain your formula. b. Each side has a length of 7.5 feet.
Find the perimeter of the pond. c. Suppose the designer of the pond wants to make...
Which of the following materials will burn the fastest in open air? A. a log, two feet in diameter B. two logs, each one foot in diameter C. a pile of small splinters made from a two-foot diameter
log D. Both logs and the splinters will burn at the same rate.
English Literature
Can anyone explain how expatriate Americans and native Europeans viewed America after WWI.
Which of the following conditions would likely cause the activation energy to be high? A. atoms are close together B. the temperature is hot C. atoms are not close together D. a catalyst is present.
How far does a rider travel in one turn of each Ferris wheel?
First chart (Ferris Wheel): Diameter (ft) 250; Height, including base 264; # of passenger cars 36; # of people per car 60.
Second chart (Cosmo Clock): Diameter (ft) 328; Height, including...
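One turn of a wheel covers its circumference, C = pi * d. A quick Python sketch using the two diameters from the charts (variable names are mine):

```python
import math

d_first, d_cosmo = 250, 328   # diameters in feet, from the two charts

c_first = math.pi * d_first   # distance per revolution, about 785.4 ft
c_cosmo = math.pi * d_cosmo   # about 1030.4 ft
```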
What are the main differences and similarities of the superior appendicular skeleton and inferior
Which of the following sentences is punctuated correctly? A. Melissa said that she would never go on a cruise B. Justin asked, "Will tickets be available online"? C. "Of course," Mr. Howard replied
"You can buy tickets and rent a car online." I th...
If Tom, an accountant, agrees to provide accounting services to Carl, a friend, in exchange for Carl fixing Tom's office floor, then: (Points : 1) Tom must report income on his tax return. Carl must
report income on his tax return. Neither Tom nor Carl must report income ...
Hamad is an employee of Mountain Company. He properly completed his Form 1040EZ tax return and was required to pay the IRS $1,244 at the time of filing. He had income tax withholding during the year
of $4,782. His tax liability for the year was: A. $1,244. B. $3,538. C. $4,782...
sorry my friend posted the same question u can reply to either one
Hamad is an employee of Mountain Company. He properly completed his Form 1040EZ tax return and was required to pay the IRS $1,244 at the time of filing. He had income tax withholding during the year
of $4,782. His tax liability for the year was: A. $1,244. B. $3,538. C. $4,782...
Science help!
adding magenta and yellow dyes makes what color? At first i thought it was red. but when I realized we are adding dye's not light im thinking its black.. but im not sure! please help!
If the sides of a triangle are equal, and it was made by placing beads touching each other, how many beads would you use if there are 3 beads on each side?
math, geometry
What if it has two equally sized bases of 10 and a height of 1 is it a trapazoid or quadaralladiral. ..Sorry for spelling mistakes.
math, geometry
Im not sure but i think it can if it has four sides...am i correct?
math, geometry
can a trapezoid have equal sized bases?
Proof! Help!
If k vertices has kC2 edges, show that (k+1) vertices has (k+1) C 2 edges
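Assuming the question is about complete graphs, the inductive step is a short computation: adding one vertex to K_k joins it to all k existing vertices, contributing k new edges:

```latex
\binom{k}{2} + k
  = \frac{k(k-1)}{2} + \frac{2k}{2}
  = \frac{k^2 + k}{2}
  = \frac{(k+1)k}{2}
  = \binom{k+1}{2}.
```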
It only says that he argued against it for five hours in the state house. It doesn't explain why he opposed it.
Why did James Otis oppose writs of assistance?
Social Studies
Why did President Andrew Johnson veto a bill to continue the Freedmen's Bureau? Choices are: A. He believed it was not profitable B. He believed it was unconstitutional C. He believed it was not cost
effective D. He believed it did not do what it was promised. I chose B. H...
Social Studies
Which Reconstruction plan divided the South into military districts? Choices are: A. the 100% plan B. Johnson's plan C. Lincoln's plan D. the Radical Republican plan Please help. Cant find answer.
Social Studies
Which U.S. Constitutional amendment has been the basis for Supreme Court decisions regarding public school segregation and forced reapportionment of voter districts? Choices are: A. 13th B. 14th C. 15th D. 16th I said B. 14th amendment
Social Studies
Which U.S. Constitutional amendment gave former slaves citizenship? Choices are: A. 13th B. 14th C. 15th D. 16th I said A. The 13th amendment
why do a rectangle and a parallelogram share the same formula? why does the triangle have a formula of b*w /2
thats all ..it seems so simple thank you
why do the parallelogram and the rectangle have the same formula? b*h
Social Studies
Thanks John
Social Studies
Why was Lincoln assassinated? Choices are: A. Booth had a mental illness B. Booth blamed Lincoln for the war C. Booth shot the president by accident D. Booth was paid by southern sympathizers to kill
the president
Social Studies
What group believed President Lincoln's Reconstruction plan was too easy? Choices are: A. Democrats B. Freedmen C. U.S. Senate D. Radical Republicans
Social Studies
thanks for your help Ms. Sue !
Social Studies
thanks for your help Ms. Sue !
Social Studies
thanks for your help Ms. Sue !
Social Studies
Why did President Abraham Lincoln want Reconstruction to be simple and easy? Choices are: A. He wanted to win the votes in the South in his reelection campaign. B. He wanted to keep a good working
relationship with former Confederates who would probably return to Congress. C. ...
Which of the following is punctuated correctly? Choices are: A. The new shoe store is located at 11347 Arthur Drive, Baltimore, Maryland 21201. B. The new shoe store is located at 11347 Arthur Drive,
Baltimore, Maryland, 21201. C. The new shoe store is located at 11347 Arthur ...
what do you think it is?
Which of the following is punctuated correctly? Choices are: A. President Roosevelt described December 7, 1941 as a date which will live in infamy. B. President Roosevelt described December 7, 1941
as a date, which will live in infamy. C. President Roosevelt described December...
Today, decisions in planning, selecting, and/or creating artwork for public display are made in the following way. Choices are: A. They are the decision of the artist B. by collaboration among the
community, artist, and local government. C. They are the decision of the mayor o...
One gallon of liquid occupies 231 cubic inches. Write a rule that expresses the number of gallons g(c) as a function of the number of cubic inches c. Choices are: g(c)=c/231 g(c)=132/c g(c)=231c g(c)
During beta-particle emission, a neutron splits into _______? a proton and an electron two protons two electrons two neutrons
How many solutions do the linear equations x-y=2 -2x+2y=-1 have?
algebra 1
Amanda invested a total of $3,600 into three separate accounts that pay 5%, 7%, and 9% annual interest. Amanda has four times as much invested in the account that pays 9% as she does in the account
that pays 5%. If the total interest for the year is $282, how much did Amanda i...
elements of design
1. Study the following sentence, then select the answer that best describes it. THE COW JUMPS OVER THE MOON! A. Italic serifed type B. Bold roman sans serif type C. Bold italic sans serif type(my
answer) D. Bold roman serifed type 2. Which one of the following statements about...
desktop publishing: an introduction
please help, i have this class and elements of design left and then i have my high school diploma from Penn Foster, i just need to know if my answers are right
desktop publishing: an introduction
1. What significant contribution did Gutenberg make to the printing process? A. Rotary press B. Woodcuts C. Movable types D.Monotype My answer A 2. The process of using a computer to write, design,
and assemble documents is called A. optical storage C. consulting B. desktop pu...
Alecia deposited $500 in a savings account at 5% compounded semiannually. What is her balance after 5 years? Can you please show me step by step how to do this problem. The choices are $650.00
$640.04 $670.05 $897.93
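Step by step: semiannual compounding means A = P(1 + r/m)^(m t) with m = 2 periods per year. A short Python sketch of the computation (variable names are mine):

```python
P = 500.0   # principal
r = 0.05    # 5% annual rate
m = 2       # compounding periods per year (semiannually)
t = 5       # years

A = P * (1 + r / m) ** (m * t)  # 500 * 1.025**10
balance = round(A, 2)           # 640.04, matching one of the listed choices
```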
Solve the equations.
78 = -2(m + 3) + m
-1/3 m - 7 = 5
1/4 y + 9 = 1/2
x - 9 = -6x + 5
x + 9 = 5(4x - 2)
Can you please show me how you got your answer.
A particular form of electromagnetic radiation has a frequency of 5.42x10^15 Hz. What is the wavelength in nanometers? In meters? Reply in scientific notation.
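Since wavelength λ = c/f, a short Python check (taking c ≈ 2.998 × 10^8 m/s):

```python
c = 2.998e8            # speed of light, m/s
f = 5.42e15            # frequency, Hz
wavelength_m = c / f
wavelength_nm = wavelength_m * 1e9
print(f"{wavelength_m:.3e} m, about {wavelength_nm:.1f} nm")
```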
s=g+h and I am supposed to solve for h
I have to solve the equation s=g+h and am totally clueless as to how to even start. Please help explain. Thank you.
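Solving s = g + h for h just means subtracting g from both sides, giving h = s - g. A tiny sanity check in Python (the numbers are made up):

```python
def h_from(s, g):
    # s = g + h  =>  h = s - g
    return s - g

# if s = 10 and g = 4, then h must be 6, since 4 + 6 = 10
print(h_from(10, 4))
```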
Wolfram Demonstrations Project
Integer Torus Maps
Each point {x,y} in an integer grid of size n is joined to the point with coordinates obtained by iteratively applying the transformation Mod[{{a,b},{c,d}}.{x,y},n] the specified number of times.
When the matrix {{a,b},{c,d}} is invertible modulo n (i.e., its determinant ad-bc is coprime to n), every point in the grid maps to a distinct point.
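The Demonstration's source code isn't reproduced here, but the transformation itself is easy to sketch in Python (function and variable names are my own):

```python
def torus_map(point, matrix, n, steps=1):
    # iteratively apply Mod[{{a,b},{c,d}}.{x,y}, n]
    x, y = point
    (a, b), (c, d) = matrix
    for _ in range(steps):
        x, y = (a * x + b * y) % n, (c * x + d * y) % n
    return x, y

# a determinant-1 matrix is invertible mod any n, so the map permutes the grid
n = 6
image = {torus_map((x, y), ((1, 1), (1, 2)), n) for x in range(n) for y in range(n)}
print(len(image))  # all 36 grid points map to distinct points
```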
Markov chains..
If we have calculated bus headway distributions, passenger arrival distributions, and dwell time distributions for three sequential bus stops, is it possible to represent a bus passing from one stop to the next as a Markov chain?
Which are the assumptions for this?
Can you please help with any similar model?
Quantum Statistical Methods in Quantum Optics 1 : Master Equations and Fokker-Planck Equations
, by
Carmichael, Howard J.
• ISBN: 9783540548829 | 3540548823
• Cover: Hardcover
• Copyright: 2/1/1999
The book provides an introduction to the methods of quantum statistical mechanics used in quantum optics and their application to the quantum theories of the single-mode laser and optical
bistability. The generalized representations of Drummond and Gardiner are discussed together with the more standard methods for deriving Fokker-Planck equations. Particular attention is given to the
theory of optical bistability formulated in terms of the positive P-representation, and the theory of small bistable systems. This is a textbook at an advanced graduate level. It is intended as a
bridge between an introductory discussion of the master equation method and problems of current research.
From YPPedia
Puzzle Codename: Carp
Username: tanonev
Additional contact info: Sage: Tanonev; Email: tanonev (AT) stanford (DOT) edu
Project forum thread: discussion
A prototype is available for this proposal.
Check it out and contribute to the design!
Game concept
Arrange colored cubes into required shapes and patterns. Inspirations: Popcap Chuzzle, Y!PP Shipwrightery, Y!PP Duty Navigation, GCPP Haddock
Create all the patterns by rolling the cubes in rows and columns to match shape and color.
The board consists of a 6x6 array of colored squares, representing the tops of the cubes. One square is highlighted as the current cube. At the top right is an "unfolded" representation of the
current cube. A single cube's faces may be different colors.
Use the arrow keys to change the current cube. The board "wraps around" at all edges.
Use the WASD pad to "roll" the cubes in the respective direction. For example, W will move the current cube up one position in the array, along with all of the other cubes in its column (wrapping
around to the bottom of the board). In addition, all of those cubes rotate upward once, so their "south" faces are now their "top" (visible) faces and their old top faces are now their north faces.
Holding the Shift key allows you to roll the cubes without losing points or matching patterns. That way you can preview how a series of rolls will affect the board. Releasing the Shift key reverts
the board to the state it was in when you first pressed the Shift key.
(NEW) Alternatively, you can use the mouse to select cubes and roll rows. Move your mouse to a cube to select it as the current cube, and click and drag it in a cardinal direction to roll it.
At the bottom is a queue of required patterns. The queue shows you the next three patterns, but only the leading (leftmost) pattern can actually be matched, at which point the queue will advance and
show you more patterns. When a pattern is matched, the cubes involved in the match are cleared and new cubes enter in the same manner as a line clear.
Patterns never specify actual colors; instead, they use shades of gray to indicate the pattern of colors required. For example, one pattern may indicate a solid 2x2 block; another may indicate a
single block of one color surrounded on all 4 sides by blocks of a second color.
DISCLAIMER: The faces are colored for ease of prototyping. The different colors are supposed to represent different types of materials used in a furnisher.
Dave, an able furnisher, decides to play the furnisher puzzle again. When he starts, he gets this:
Dave now quickly makes a clear, and the pieces fall down.
Dave decides to make a reflected combo, so he sets up the pieces needed.
He realizes then that he only needs one move to make the next two pieces at the same time, so he then decides to make a double combo, and shifts the cubes to the spaces he wants.
He thinks: Hmm... I could probably do a Double Reflected, but I'm not sure if I can do it. He realizes that only the current piece will set off the combo, so he builds another one of the second pattern.
Satisfied with his combo, Dave finishes his game and soon scores an Incredible.
Each pattern has its own base value. More complex patterns have higher base values. Each cube also has its own base value. Solid-color cubes, for example, are worth much less than six-color cubes.
It is possible to simultaneously create multiple copies of the required pattern with one move. If this occurs, a "Mirror bonus" (Reflected, Triplicate, Fractalized) is awarded, a multiplier equal to
the square of the number of copies of the pattern.
It is possible for a new match to be made after a clear occurs. If this occurs, a Chain bonus (Double, Triple, Bingo, Donkey, Vegas) is awarded, a multiplier equal to the number of clears in the chain.
NEW: If both the Chain bonus and the Mirror bonus apply to a clear, the Mirror bonus is only equal to the number of copies, not the square of the number.
1 point is deducted for every roll made.
Scoring formula:
3^f * d * p * m * c (* m if c=1)
• f = # of frozen cubes
• d = sum of cube values
• p = pattern value
• m = # of copies of pattern
• c = chain position
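Read literally, the scoring formula above can be sketched as follows (a hypothetical helper, not the actual game code):

```python
def clear_score(f, d, p, m, c):
    # 3^f * d * p * m * c, times an extra factor of m (the squared Mirror
    # bonus) only when the clear is not part of a chain (c == 1)
    score = 3 ** f * d * p * m * c
    if c == 1:
        score *= m
    return score

# a Reflected clear (2 copies) outside a chain: the copy count is squared
print(clear_score(0, 6, 5, 2, 1))
# the same clear as the second step of a chain: no squaring, but c = 2
print(clear_score(1, 6, 5, 2, 2))
```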
Current pattern values:
Current cube values:
• solid: 1 point
• 2-color: 2 points
• 3-color: 3 points
• 6-color: 6 points
At higher levels, a random cube may become "frozen" immediately after a clear. It glazes over and can no longer be rolled (which affects other cubes in its row and column as well). However, when it
is cleared, it triples the value of the clear. This Frozen bonus is cumulative, so a pattern clear with 3 frozen cubes is worth 27 times its usual amount.
End criteria
The queue contains 15 patterns. The game ends when the queue is emptied. The game also ends if all rows and columns are frozen.
Difficulty scaling
Higher levels get harder dice to work with. At the very beginning, all cubes will be solid-color. At the highest level, there will be an abundance of 6-color cubes. This makes it harder to create
clears, but at the same time, the cubes are worth more, resulting in more scoring opportunities.
Higher levels also get more complicated patterns, which are naturally worth more.
Higher levels get frozen cubes with greater frequencies, which complicates movement in exchange for greater bonuses.
Crafting type
Known problems
Scoring not yet balanced.
Annuity Question
January 18th 2009, 10:20 AM #1
Jan 2009
Annuity Question
Q10 - You want to buy an annuity that will pay you £1000 per year for the next 6 years. The first payment will be made to you in one year's time. What is the maximum you should pay, if you
estimate that you could achieve a return of 12% pa on the money?
We have been given the answer - it is £3604.78, but we have to show how to get this answer.
I have tried lots of different formulas but cant get this answer, the closest I've been is £2692 by using this formula =6000*((1-(12.5/100))^6). I'm not sure which is the correct formula as I
have tried working out present value and future value, but neither seem to work!
If anyone could just point me in the right direction that would be a great help.
If an annuity will pay you £1000 per year for the next 6 years, then
$A = \sum_{n=1}^{6} 1000\,(1.12)^{-n} = 1000\,\frac{1-(1.12)^{-6}}{0.12} \approx 4111.41$
If an annuity will pay you £1000 per year for the next 5 years, then
$A = \sum_{n=1}^{5} 1000\,(1.12)^{-n} = 1000\,\frac{1-(1.12)^{-5}}{0.12} \approx 3604.78$
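Both results come from the standard present value of an ordinary annuity, PV = P(1 - (1+i)^-n)/i, with the first payment one period out. A quick check in Python:

```python
def annuity_pv(payment, rate, n):
    # present value of an ordinary annuity: payment * (1 - (1+rate)^-n) / rate
    return payment * (1 - (1 + rate) ** -n) / rate

print(round(annuity_pv(1000, 0.12, 6), 2))   # six payments
print(round(annuity_pv(1000, 0.12, 5), 2))   # five payments give the quoted 3604.78
```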
Applied Linear Equations: Distance Problem - Problem 1
So, here is an example of another distance, rate, time problem that you may come across, the good old train problems that everybody loves to hate. A train leaves San Francisco and travels North, at
40 miles per hour. Another train leaves at the same time, and travels South at 80 miles per hour, how long until they are 540 miles apart.
So, this is a classic example of two rates going in opposite directions, where we're trying to figure out when the objects reach a given distance apart. There are two ways of doing this. I will show you the way I think about it, which is a very logical approach, and then we can also look at it from a more mathematical angle as well.
A train travels North at 40 miles per hour; after an hour, how far has it gone? 40 miles. A train travels South at 80; after an hour, it has gone 80 miles. So in hour one, how far apart are they?
One goes 40 and one goes 80 in opposite directions, so together they are 120 miles apart.
So, I want to sort of see a visual. One goes North, one goes South, here is 40 and here is 80, together they've gone 120 miles. After two hours, this one has gone another 40, this one has gone
another 80, so basically, they're going to double this distance to 240. So every hour, they move another 120 miles apart.
Going to the equation that relates everything, distance is equal to rate times time, the rate that they're moving apart is the sum of these two things which is just 120 and we're asking when they are
540 miles apart, equal to distance.
From here it's just a very straightforward relationship: divide by 120. Let's go to our calculator: 540 divided by 120 gives t = 4.5 hours. So that's from a logic perspective. If you want to take a little bit more of a mathematical approach, you can look at this as: the distance of train one plus the distance of train two is equal to the total distance, and we somehow solve this.
We know that the total distance needs to be 540, we know that distance is equal to rate times time, so this is rate times time train one, rate times time train two. The rate of the first train is 40,
the rate of the second train is 80 and we know that they leave at the same time so this is the same t, t and t equal to 540. Combine like terms 40t plus 80t will end up giving us 120t and we end up
solving it out the same exact way.
So one way a little bit more logical, one way a little bit more mathematical, either way we'll get the same answer.
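The whole calculation in the transcript boils down to a couple of lines:

```python
rate_north, rate_south = 40, 80            # mph, in opposite directions
separation_rate = rate_north + rate_south  # they move 120 miles apart per hour
target_distance = 540
t = target_distance / separation_rate
print(t)  # 4.5 hours
```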
Reimbursable Projects
Reimbursement Examples
In general, reimbursement for school construction projects is based on the capacity of a building, which can be justified by present or projected student enrollment. Classroom capacity is normally
calculated on the basis of 25 students per regular classroom (other values are assigned to laboratories, gymnasiums, art rooms, music rooms, etc.). For example, if a district has a twenty classroom
elementary building, we would normally consider the building to have a full-time equivalent capacity of 500 (20 x 25). The capacity in this example would have to be supported by current or projected
This capacity is then converted to rated pupil capacity. The term "rated pupil capacity" has no significance other than this is a method for calculating reimbursement. An elementary building with a
full-time equivalent capacity of 500 is deemed to have a "rated pupil capacity" of 700 (see Attachment B in the PlanCon Part A instructions for the conversion charts). (PDF - Requires Acrobat Reader)
Click on a new elementary school to see an example of the reimbursement calculations for new construction. Click on additions and alterations to a secondary school to see an example of the
reimbursement calculations for additions and alterations to an existing school.
CALCULATION OF REIMBURSEMENT FOR A NEW ELEMENTARY BUILDING WITH A FULL-TIME EQUIVALENT CAPACITY OF 500, WHICH IS CONVERTED TO A RATED PUPIL CAPACITY OF 700. ACTUAL PROJECT COSTS ARE $4,000,000. THE
SCHOOL DISTRICT HAS A MARKET VALUE AID RATIO OF .6500.
(1) Maximum Reimbursable Formula Amount
(a) Full time equivalent capacity 500
(b) Conversion of full-time equivalent
capacity to rated pupil capacity 500 x 1.4 = 700
(c) Rated pupil capacity multiplied
by $4,700 (legislated per pupil amount
for elementary) 700 x $4,700 = $3,290,000
(d) MAXIMUM REIMBURSABLE FORMULA AMOUNT $3,290,000
(2) Actual Structure Costs Based on Bids
(a) Structure Costs $3,000,000
(b) Architect's Fee (6% limit) $ 180,000
(c) Movable Fixtures & Equipment $ 50,000
(d) TOTAL ACTUAL STRUCTURE COSTS $3,230,000
LESSER OF (1d) FORMULA OR (2d) ACTUAL COSTS $3,230,000
(3) Additional Funding for LEED Silver, Gold
or Platinum Certification 700 X $470 = $329,000
(4) Specified Ancillary Costs
(a) Rough Grading to Receive the Building $ 30,000
(b) Sanitary Sewage Disposal $ 10,000
(c) Architect's Fee (6% limit) $ 2,400
(d) Cost of Acquiring Site $ 20,600
(e) TOTAL $ 63,000
ELIGIBLE ANCILLARY COSTS $ 63,000
TOTAL REIMBURSABLE PROJECT AMOUNT $3,622,000
(5) Other Project Costs
Contingency, Supervision, Printing,
Financing Costs, Other $ 707,000
(6) Total Project Costs (2d plus 4e and 5) $4,000,000
The reimbursable project amount is then divided by the total project costs to determine a reimbursable percentage. A one-half percentage point reduction in the reimbursable percentage is made until
Plancon Part J, Project Accounting Based on Final Costs, for the project is reviewed and approved by the Department.
$3,622,000 / $4,000,000 = 90.55%
This percent is multiplied by the school district's bond issue (principal and interest payments) to determine the level of Commonwealth participation in the cost of the project. The Commonwealth's
share is then multiplied by a measure of a district's wealth, i.e., Market Value Aid Ratio (MVAR) or Capital Account Reimbursement Fraction (CARF), (or in some cases, a "Density Factor" of 50
percent) whichever is greater to determine the net state subsidy.
$200,000 X .9005 X .6500 = $117,065
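Putting the two steps together (reimbursable percentage, then the wealth-adjusted subsidy), here is a sketch of the arithmetic; the function name and the Part J flag are my own framing of the rules described above:

```python
def net_subsidy(reimbursable_amount, total_project_cost,
                annual_debt_payment, aid_ratio, part_j_approved=False):
    # reimbursable percentage of total project costs
    pct = reimbursable_amount / total_project_cost
    # half-percentage-point reduction until PlanCon Part J is approved
    if not part_j_approved:
        pct -= 0.005
    return annual_debt_payment * pct * aid_ratio

# the elementary-school example: $3,622,000 of $4,000,000, MVAR .6500,
# applied to a $200,000 annual bond payment
print(round(net_subsidy(3_622_000, 4_000_000, 200_000, 0.65)))  # 117065
```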
For projects financed by cash , i.e. without the issuance of debt, the reimbursable percent is multiplied by the total project costs for the school construction project to determine the level of
Commonwealth participation in the cost of the project. The Commonwealth's share is then multiplied by a measure of a district's wealth, i.e., the greater of Market Value Aid Ratio (MVAR), Capital
Account Reimbursement Fraction (CARF) or Density Factor, to determine the net state subsidy.
If a project is financed by cash, i.e., without the issuance of debt, no reimbursement will be paid until PlanCon Part J, Project Accounting Based on Final Costs, is submitted and approved by the
Department. At PlanCon Part J, a certification must be provided indicating that, in accordance with Section 2575.1 of the Public School Code of 1949, as amended, the school district/AVTS is providing
full payment on account of the approved building construction cost without incurring debt or without incurring a lease. For purposes of calculating reimbursement, bond proceeds that are transferred
to the general fund and then used for a reimbursable construction project are still considered bond proceeds.
CALCULATION OF REIMBURSEMENT FOR A SECONDARY BUILDING WITH AN ADDITION AND ALTERATIONS AND A FULL-TIME EQUIVALENT CAPACITY OF 901, WHICH IS CONVERTED TO A RATED PUPIL CAPACITY OF 1,000. ACTUAL
PROJECT COSTS ARE $9,500,000. THE GROSS AREA (OR ARCHITECTURAL AREA) OF THE ADDITION IS 30,000 SQUARE FEET AND THE GROSS AREA (OR ARCHITECTURAL AREA) OF THE EXISTING BUILDING IS 130,000 SQUARE FEET
FOR A BUILDING TOTAL OF 160,000 SQUARE FEET. THE SCHOOL DISTRICT'S MARKET VALUE AID RATIO IS .6500 .
(1) Maximum Reimbursable Formula Amount
(a) Total Full-Time Equivalent Capacity 901
(b) Conversion of Full-Time Equivalent
Capacity to Rated Pupil Capacity 901 x 1.1100 = 1,000
(c) Rated Pupil Capacity multiplied
by $6,200 (legislated per pupil
amount for secondary) 1,000 x 6,200 = $6,200,000
(d) Architectural area of addition
as percent of total building area 18.75%
(e) Architectural area of existing
as percent of total building area 81.25%
(f) MAXIMUM REIMBURSABLE FORMULA AMOUNT – ADDITION ((c) times (d)) $1,162,500
(g) MAXIMUM REIMBURSABLE FORMULA AMOUNT – EXISTING ((c) times (e)) $5,037,500
(2) Actual Structure Costs Based on Bids Addition Existing
(a) Structure Costs $2,500,000 $ 5,700,000
(b) Architect's Fee (6% limit) $ 150,000 $ 222,000
(c) Movable Fixtures & Equipment $ 80,000 $ 20,000
(d) TOTAL ACTUAL STRUCTURE COSTS $2,730,000 $5,942,000
(e) LESSER OF (1f) FORMULA OR (2d) ACTUAL COSTS - ADDITION $1,162,500
(f) LESSER OF (1g) FORMULA OR (2d) ACTUAL COSTS – EXISTING $5,037,500
(g) TOTAL $6,200,000
(3) Additional Funding for Project with Additions
and/or Alterations to Existing Building
(Appraisal Value=0) 1,000 X $620 = $620,000
(4) Specified Ancillary Costs
(a) Rough Grading to Receive Building $ 50,000
(b) Sanitary Sewage Disposal $ 65,000
(c) Architect's Fee (6% Limit) $ 6,900
(d) TOTAL $121,900
ELIGIBLE ANCILLARY COSTS $ 121,900
(2g plus 3 and 4d) $6,941,900
(5) Other Project Costs
Contingency, Supervision, Printing,
Financing Costs, Other $706,100
(6) Total Project Costs (2d, Addition and Existing,
plus 4d plus 5) $9,500,000
The reimbursable project amount is then divided by the total project costs to determine a reimbursable percentage. A one-half percentage point reduction in the reimbursable percentage is made until
Plancon Part J, Project Accounting Based on Final Costs, for the project is reviewed and approved by the Department.
$6,941,900 / $9,500,000 = 73.07%
This percent is multiplied by the school district's bond issue (principal and interest payments) to determine the level of Commonwealth participation in the cost of the project. The Commonwealth's
share is then multiplied by a measure of a district's wealth, i.e., Market Value Aid Ratio (MVAR) or Capital Account Reimbursement Fraction (CARF), (or Density Factor, if applicable) whichever is
greater to determine the net state subsidy.
$500,000 X .7257 X .6500 = $235,853
For projects financed by cash , i.e. without the issuance of debt, the reimbursable percent is multiplied by the total project costs for the school construction project to determine the level of
Commonwealth participation in the cost of the project. The Commonwealth's share is then multiplied by a measure of a district's wealth, i.e., the greater of Market Value Aid Ratio (MVAR), Capital
Account Reimbursement Fraction (CARF) or Density Factor, to determine the net state subsidy.
If a project is financed by cash, i.e., without the issuance of debt, no reimbursement will be paid until PlanCon Part J, Project Accounting Based on Final Costs, is submitted and approved by the
Department. At PlanCon Part J, a certification must be provided indicating that, in accordance with Section 2575.1 of the Public School Code of 1949, as amended, the school district/AVTS is providing
full payment on account of the approved building construction cost without incurring debt or without incurring a lease. For purposes of calculating reimbursement, bond proceeds that are transferred
to the general fund and then used for a reimbursable construction project are still considered bond proceeds.
San Clemente Prealgebra Tutor
Find a San Clemente Prealgebra Tutor
Hello, Currently I am a graduate student at California State University, Fullerton pursuing a M.A. in applied linguistics along with a TESOL certificate. I received my B.A. in linguistics from CSU
Fullerton in May 2013, making the Dean's Honors list with a 3.75 GPA my final semester. Taking gradu...
12 Subjects: including prealgebra, English, reading, writing
...At TAC we have no textbooks, only the Great Books. We read the works of the original authors as opposed to someone else's thoughts on that author. The individual class meetings are based upon
the Socratic method of discussion, the Professor (we call the Tutors) will open the class with a question regarding the text prepared beforehand.
8 Subjects: including prealgebra, reading, writing, algebra 1
...I also have a Master's Degree in Organizational Management from University of Phoenix. I used to assist elementary school teachers as well as High School teachers with subjects such as English
Grammar, History of Arts, Social Studies and Math, including Algebra I & II. I am currently assisting my son, who is a Freshman in High School with his school work and projects.
31 Subjects: including prealgebra, English, ESL/ESOL, grammar
...We got the ACT score today and he got at 35!!!!! The breakdown is English 34, Math 33, Reading 36, Science 35. His score before tutoring was Composite 31...Thanks so much for all of your help.
I will let you know what the writing score is in a few weeks when we get it back." - Drew G. (Parent)"Hi Gil.
23 Subjects: including prealgebra, chemistry, calculus, geometry
...I am married with 3 children and live in Mission Viejo. I graduated from the University of Michigan in 1997 with my B.A and am currently enrolled in a Master's degree program in Education. I
worked in Finance and Sales for 12 years but will begin student teaching in Elementary school in the Fall.
36 Subjects: including prealgebra, reading, writing, statistics
Indexing Animated Objects Using Spatiotemporal Access Methods
Results 1 - 10 of 41
- In VLDB, 2003
"... A predictive spatio-temporal query retrieves the set of moving objects that will intersect a query window during a future time interval. Currently, the only access method for processing such
queries in practice is the TPR-tree. In this paper we first perform an analysis to determine the factor ..."
Cited by 145 (10 self)
A predictive spatio-temporal query retrieves the set of moving objects that will intersect a query window during a future time interval. Currently, the only access method for processing such queries
in practice is the TPR-tree. In this paper we first perform an analysis to determine the factors that affect the performance of predictive queries and show that several of these factors are not
considered by the TPR-tree, which uses the insertion/deletion algorithms of the R*-tree designed for static data. Motivated by this, we propose a new index structure called the TPR*- tree, which
takes into account the unique features of dynamic objects through a set of improved construction algorithms. In addition, we provide cost models that determine the optimal performance achievable by
any data-partition spatio-temporal access method. Using experimental comparison, we illustrate that the TPR*-tree is nearly-optimal and significantly outperforms the TPR-tree under all conditions.
, 2002
"... Spatiotemporal objects, i.e., objects which change their position and/or extent over time appear in many applications. In this paper we examine the problem of indexing large volumes of such
data. Important in this environment is how the spatiotemporal objects move and/or change. We consider a rath ..."
Cited by 59 (11 self)
Spatiotemporal objects, i.e., objects which change their position and/or extent over time appear in many applications. In this paper we examine the problem of indexing large volumes of such data.
Important in this environment is how the spatiotemporal objects move and/or change. We consider a rather general case where object movements/changes are defined by combinations of polynomial
functions. We further concentrate on "snapshot" as well as small "interval" queries as these are quite common when examining the history of the gathered data. The obvious approach that approximates
each spatiotemporal object by an MBR and uses a traditional multidimensional access method to index them is inefficient. Objects that "live" for long time intervals have large MBRs which introduce a
lot of empty space. Clustering long intervals has been dealt in temporal databases by the use of partially persistent indices. What differentiates this problem from traditional temporal indexing, is
that objects are allowed to move/change during their lifetime. Better ways are thus needed to approximate general spatiotemporal objects. One obvious solution is to introduce artificial splits: the
lifetime of a long-lived object is split into smaller consecutive pieces. This decreases the empty space but increases the number of indexed MBRs. We first give an optimal algorithm and a heuristic
for splitting a given spatiotemporal object in a predefined number of pieces. Then, given an upper bound on the total number of possible splits, we present three algorithms that decide how the splits
are distributed among all the objects so that the total empty space is minimized. The number of splits cannot be increased indefinitely since the extra objects will eventually affect query
performance. Usi...
- Proc. 2004 SIGMOD, to appear
"... In this thesis, we investigate the subject of indexing large collections of spatiotemporal trajectories for similarity matching. Our proposed technique is to first mitigate the dimensionality
curse problem by approximating each trajectory with a low order polynomial-like curve, and then incorporate ..."
Cited by 49 (0 self)
In this thesis, we investigate the subject of indexing large collections of spatiotemporal trajectories for similarity matching. Our proposed technique is to first mitigate the dimensionality curse
problem by approximating each trajectory with a low order polynomial-like curve, and then incorporate a multidimensional index into the reduced space of polynomial coefficients. There are many
possible ways to choose the polynomial, including Fourier transforms, splines, non-linear regressions, etc. Some of these possibilities have indeed been studied before. We hypothesize that one of the
best approaches is the polynomial that minimizes the maximum deviation from the true value, which is called the minimax polynomial. Minimax approximation is particularly meaningful for indexing
because in a branch-and-bound search (i.e., for finding nearest neighbours), the smaller the maximum deviation, the more pruning opportunities there exist. In general, among all the polynomials of
the same degree, the optimal minimax polynomial is very hard to compute. However, it has been shown that the Chebyshev approximation is almost identical to the optimal minimax polynomial, and is easy
to compute [32]. Thus, we shall explore how to use
- IEEE Data Engineering Bulletin, 2003
"... The rapid increase in spatio-temporal applications calls for new auxiliary indexing structures. A typical spatio-temporal application is one that tracks the behavior of moving objects through
location-aware devices (e.g., GPS). Through the last decade, many spatio-temporal access methods are develop ..."
Cited by 43 (6 self)
The rapid increase in spatio-temporal applications calls for new auxiliary indexing structures. A typical spatio-temporal application is one that tracks the behavior of moving objects through
location-aware devices (e.g., GPS). Through the last decade, many spatio-temporal access methods are developed. Spatio-temporal access methods focus on two orthogonal directions: (1) Indexing the
past, (2) Indexing the current and predicted future positions. In this short survey, we classify spatio-temporal access methods for each direction based on their underlying structure with a brief
discussion of future research directions.
- TKDE, 2004
Cited by 28 (2 self)
Abstract—A range aggregate query returns summarized information about the points falling in a hyper-rectangle (e.g., the total number of these points instead of their concrete ids). This paper
studies spatial indexes that solve such queries efficiently and proposes the aggregate Point-tree (aP-tree), which achieves logarithmic cost to the data set cardinality (independently of the query
size) for two-dimensional data. The aP-tree requires only small modifications to the popular multiversion structural framework and, thus, can be implemented and applied easily in practice. We also
present models that accurately predict the space consumption and query cost of the aP-tree and are therefore suitable for query optimization. Extensive experiments confirm that the proposed methods
are efficient and practical. Index Terms—Database, spatial database, range queries, aggregation.
- In Proc. of the 11th Intl. Symp. on Advances in Geographic Information Systems (ACM-GIS), 2003
Cited by 27 (2 self)
With the proliferation of mobile computing, the ability to index efficiently the movements of mobile objects becomes important. Objects are typically seen as moving in two-dimensional (x,y) space,
which means that their movements across time may be embedded in the three-dimensional (x,y,t) space. Further, the movements are typically represented as trajectories, sequences of connected line
segments. In certain cases, movement is restricted, and specifically in this paper, we aim at exploiting that movements occur in transportation networks to reduce the dimensionality of the data.
Briefly, the idea is to reduce movements to occur in one spatial dimension. As a consequence, the movement data becomes two-dimensional (x,t). The advantages of considering such lower-dimensional trajectories are the reduced overall size of the data and the lower-dimensional indexing challenge. Since off-the-shelf database management systems typically do not offer higher-dimensional indexing,
this reduction in dimensionality allows us to use such DBMSes to store and index trajectories. Moreover, we argue that, given the right circumstances, indexing these dimensionality-reduced
trajectories can be more efficient than using a three-dimensional index. This hypothesis is verified by an experimental study that incorporates trajectories stemming from real and synthetic road
- THE VLDB JOURNAL
Cited by 26 (3 self)
Spatio-temporal objects — that is, objects that evolve over time — appear in many applications. Due to the nature of such applications, storing the evolution of objects through time in order to
answer historical queries (queries that refer to past states of the evolution) requires a very large specialized database, what is termed in this article as a spatio-temporal archive. Efficient
processing of historical queries on spatio-temporal archives requires equally sophisticated indexing schemes. Typical spatio-temporal indexing techniques represent the objects using minimum bounding
regions (MBR) extended with a temporal dimension, which are then indexed using traditional multi-dimensional index structures. However, rough MBR approximations introduce excessive overlap between
index nodes which deteriorates query performance. This article introduces a robust indexing scheme for answering spatio-temporal queries more efficiently. A number of algorithms and heuristics are
elaborated, which can be used to preprocess a spatiotemporal archive in order to produce finer object approximations which, in combination with a multi-version index structure, will greatly improve
query performance in comparison to the straightforward approaches. The proposed techniques introduce a query-efficiency vs. space tradeoff, that can help tune a structure according to available
resources. Empirical observations for estimating the necessary amount of additional storage space required for improving query performance by a given factor are also provided. Moreover, heuristics
for applying the proposed ideas in an online setting are discussed. Finally, a thorough experimental evaluation is conducted to show the merits of the proposed techniques.
, 2004
Cited by 25 (1 self)
With the proliferation of wireless communications and geo-positioning, e-services are envisioned that exploit the positions of a set of continuously moving users to provide context-aware
functionality to each individual user. Because advances in disk capacities continue to outperform Moore's Law, it becomes increasingly feasible to store on-line all the position information obtained
from the moving e-service users. With the much slower advances in I/O speeds and many concurrent users, indexing techniques are of essence in this scenario. Past
- Proceedings of MDM, 2003
Cited by 20 (2 self)
Abstract. In this paper, we describe an efficient indexing method for a shape-based similarity search of the trajectory of dynamically changing locations of people and mobile objects. In order to
manage trajectories in database systems, we define a data model of trajectories as directed lines in a space, and the similarity between trajectories is defined as the Euclidean distance between
directed discrete lines. Our proposed similarity query can be used to find interesting patterns embedded in the trajectories; for example, the trajectories of mobile cars in a city may include patterns useful for anticipating traffic jams. Furthermore, we propose an efficient indexing method to retrieve similar trajectories for a query by combining a spatial indexing technique (R+-tree) and a
dimension reduction technique, which is called PAA (Piecewise Approximate Aggregate). The indexing method can efficiently retrieve trajectories whose shape in a space is similar to the shape of a
candidate trajectory from the database.
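The PAA reduction mentioned above (more commonly spelled out as Piecewise Aggregate Approximation) is simple enough to sketch. This is an illustrative implementation of the general technique, not the authors' code:

```python
import numpy as np

def paa(series, segments):
    """Piecewise Aggregate Approximation: replace each of `segments`
    equal-length chunks of the series by its mean, reducing dimensionality."""
    series = np.asarray(series, dtype=float)
    chunks = np.array_split(series, segments)
    return np.array([c.mean() for c in chunks])

# A 1-D trajectory coordinate sampled at 12 time steps, reduced to 4 values.
x = [0, 1, 2, 3, 4, 5, 6, 6, 6, 6, 5, 4]
print(paa(x, 4))  # one mean per 3-step segment
```

Because each output value is the mean of a segment, Euclidean distance on the reduced vectors lower-bounds (up to a scaling factor) the distance on the originals, which is what makes PAA usable inside a spatial index such as the R+-tree.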
- IEEE Data Engineering Bulletin, 2002
Cited by 19 (4 self)
The domain of spatiotemporal applications is a treasure trove of new types of data and queries. In this work, the focus is on a spatiotemporal sub-domain, namely the trajectories of moving point
objects. We examine the issues posed by this type of data with respect to indexing and point out existing approaches and research directions. An important aspect of movement is the scenario in which
it occurs. Three different scenarios, namely unconstrained movement, constrained movement, and movement in networks, are used to categorize various indexing approaches. Each of these scenarios gives us
different means to either simplify indexing, or to improve the overall query processing performance.
Free variables and bound variables

In mathematics, and in a number of disciplines involving formal languages, including mathematical logic and computer science, a free variable is a notation that stands for a place or places in an expression, into which some definite substitution may take place, or with respect to which some operation (summation or quantification, to give two examples) may take place. The idea is related to, but somewhat deeper and more complex than, that of a placeholder (a symbol that will later be replaced by some literal string), or a wildcard character that stands for an unspecified symbol.

In elementary algebra, for example, the symbol x is used to construct formulae, under the assumptions that (i) later we may assign x a value such as 2, and (ii) each occurrence of x stands for the same unknown value. The variable x becomes a bound variable, for example, when we write

'for all x, (x + 1)^2 = x^2 + 2x + 1.'

In this proposition it no longer much matters whether we use x or some other letter; but it would be confusing notationally to use the same letter again elsewhere in some compound proposition. That is, free variables become bound, and then in a sense retire from further work supporting the formation of formulae.
Before stating a precise definition of free variable and bound variable (or dummy variable), we present some examples that perhaps make these two concepts clearer than the definition would
(unfortunately the term dummy variable is used by many statisticians to mean an indicator variable or some variant thereof; the name is really not apt for that purpose, but magnificently conveys the
intuition behind the definition of this concept):
In an expression such as
$\sum_{x=1}^{10} f(x,y),$
y is a free variable and x is a bound variable (or dummy variable); consequently the value of this expression depends on the value of y, but there is nothing called x on which it could depend.

In an expression such as
$\int_0^x y^2 \, dy,$
x is a free variable and y is a bound variable; consequently the value of this expression depends on the value of x, but there is nothing called y on which it could depend.

In an expression such as
$\int_0^1 f(x,y) \, dx,$
y is a free variable and x is a bound variable; consequently the value of this expression depends on the value of y, but there is nothing called x on which it could depend.

In an expression such as
$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$
x is a free variable and h is a bound variable; consequently the value of this expression depends on the value of x, but there is nothing called h on which it could depend.

In an expression such as
$\forall x\ \exists y\ \varphi(x, y, z),$
z is a free variable and x and y are bound variables; consequently the truth-value of this expression depends on the value of z, but there is nothing called x or y on which it could depend.
Variable-binding operators
Operators such as $\sum_{x=1}^{10}$, $\int_0^1 \cdots \, dx$, $\lim_{h \to 0}$, and $\forall x$ are variable-binding operators. The variables that they bind are x (in the first, second, and fourth examples) and h (in the third example).
Formal explanation
Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science but in all cases they are purely syntactic properties of expressions and variables in them. For this
section we can summarize syntax by identifying expressions with trees whose leaf nodes are variables, function constants or predicate constants and whose nodes are logical operators. Variable-binding
operators are logical operators that occur in almost every formal language. Indeed languages which do not have them are either extremely inexpressive or extremely difficult to use. A binding operator
Q takes two arguments: a variable v and an expression P, and when applied to its arguments produces a new expression Q(v, P). The meaning of binding operators is supplied by the semantics of the
language and does not concern us here.
Variable binding relates three things: a variable v, a location a for that variable in an expression, and a node n of the form Q(v, P). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding occurs when that location is below the node n.
To give an example from mathematics, consider an expression which defines a function f by f(x[1], ..., x[n]) = t, where t is an expression. t may contain some, all or none of the x[1], ..., x[n] and it may contain other variables. In this case we say that the function definition binds the variables x[1], ..., x[n].
In the lambda calculus, x is a bound variable in the term M = λ x . T, and a free variable of T. We say x is bound in M and free in T. If T contains a subterm λ x . U then x is rebound in this term.
This nested, inner binding of x is said to "shadow" the outer binding. Occurrences of x in U are free occurrences of the new x.
Variables bound at the top level of a program are technically free variables within the terms to which they are bound but are often treated specially because they can be compiled as fixed addresses.
Similarly, an identifier bound to a recursive function is also technically a free variable within its own body but is treated specially.
A closed term is one containing no free variables.
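The free variables of a lambda term can be computed mechanically by the recursion implicit in the definitions above. The following small sketch (the term encoding is ours, not from the article) represents terms as nested tuples:

```python
# Term encoding:
#   ('var', x)       a variable
#   ('app', M, N)    application
#   ('lam', x, M)    abstraction lambda x. M, which binds x in M

def free_vars(term):
    tag = term[0]
    if tag == 'var':
        return {term[1]}
    if tag == 'app':
        return free_vars(term[1]) | free_vars(term[2])
    if tag == 'lam':
        return free_vars(term[2]) - {term[1]}   # binding removes x
    raise ValueError(tag)

# M = lambda x. x y : x is bound in M, y is free.
M = ('lam', 'x', ('app', ('var', 'x'), ('var', 'y')))
print(free_vars(M))  # {'y'}

# A closed term has no free variables: lambda x. lambda y. x y
K = ('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y'))))
print(free_vars(K))  # set()
```

The `lam` case is exactly the statement that x is bound in M = λx. T and free in T.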
See also closure (mathematics), lambda lifting, scope, combinatory logic
Some of this article is based on an entry in FOLDOC, used by permission.
Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES)
Hansen N., Mueller SD., Koumoutsakos P., Evolutionary Computation, 11, 1-18, 2003
This paper presents a novel evolutionary optimization strategy based on the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). This new approach is intended to reduce the
number of generations required for convergence to the optimum. Reducing the number of generations, i.e., the time complexity of the algorithm, is important if a large population size is desired: (1)
to reduce the effect of noise; (2) to improve global search properties; and (3) to implement the algorithm on (highly) parallel machines. Our method results in a highly parallel algorithm which
scales favorably with large numbers of processors. This is accomplished by efficiently incorporating the available information from a large population, thus significantly reducing the number of
generations needed to adapt the covariance matrix. The original version of the CMA-ES was designed to reliably adapt the covariance matrix in small populations but it cannot exploit large populations
efficiently. Our modifications scale up the efficiency to population sizes of up to 10n, where n is the problem dimension. This method has been applied to a large number of test problems,
demonstrating that in many cases the CMA-ES can be advanced from quadratic to linear time complexity.
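The key idea alluded to above — extracting covariance information from a single large population — can be illustrated with a hedged sketch of a rank-mu-style estimate. This is a toy illustration of the principle, not the actual CMA-ES; the objective and parameter values are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, mu = 2, 400, 100                      # dimension, offspring, parents

# Sample one large generation of mutation steps from N(0, I).
steps = rng.standard_normal((lam, n))

# Hypothetical objective: a badly scaled quadratic (x1 is penalized 100x).
fitness = steps[:, 0] ** 2 + 100.0 * steps[:, 1] ** 2

# Keep the mu best offspring and form the weighted sum of outer products
# of their steps -- the quantity at the heart of the rank-mu update.
sel = steps[np.argsort(fitness)[:mu]]
w = np.full(mu, 1.0 / mu)                     # equal recombination weights
C_mu = sum(wi * np.outer(s, s) for wi, s in zip(w, sel))

# The estimate is stretched along the insensitive direction x0 and
# squeezed along the heavily penalized direction x1.
print(np.round(C_mu, 3))
```

In the actual CMA-ES the update blends a term like this with the previous covariance matrix and a rank-one term; the one-generation estimate above only illustrates why a large population carries enough information to adapt the covariance matrix in few generations.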
How can we know whether an equation is a function or not?
A function gives you one and only one 'y' value for a specific 'x' value. f(x)=x+1 is a function. If we take the equation of a circle \[x^{2}+y^2=r^2\], it's not a function because we can have two values of 'y' for one 'x': P1(x1, y1) and P2(x1, y2).
If we have the graph of a relation we can do a simple geometric test called the vertical line test. It says that if the graph is the graph of a function, then any vertical line you draw will intersect that curve at only one point. This is a graphical representation of the idea that a function has only one output for every input, or, to say it another way, that there is only one y for every x. In this case (the circle) it is obviously not a function by this definition.
Hello Akshayb. It's easy to tell the difference between equations and functions. Equation: in an equation there's a unique value for each x, y... For example: \[x + 3 = 2y\] and \[y - 1 = x\] In this given system, the values for x and y are unique (x = 1 and y = 2). You cannot change the values, because you won't get an equality... (trying with x=0 and y=5) \[0 + 3 \neq 2\times5\] Function: in a function, there are infinitely many values for x, y... For example (s = speed, l = length in meters, t = time in seconds): what is my speed if I reach the goal of 100m in 15 seconds? \[s = \frac{ l }{ t }\] So... \[s = \frac{ 100 }{ 15 } = 6.7\] 6.7 meters per second is your speed (given the values 100 for length and 15 for time). And what if you reach it in 10 seconds, or 7 seconds, or... 'x' seconds? As you can see, the value of 's' depends on the values of distance and time, and you can change the values all the time. That's a function. I hope this post was useful ;)
Greens Farms Math Tutor
...Prior to teaching I was a financial analyst with a major corporation. I have an MS in Education and an MBA in Accounting and Finance. As a tutor, I take a personal approach when working with my
8 Subjects: including algebra 2, SAT math, algebra 1, prealgebra
...My experience includes being a substitute teacher in Long Island and teaching Math 8 during summer school. I also welcome Freshmen and Sophomore college students as well. I can do private
tutoring or small groups.
9 Subjects: including geometry, algebra 1, calculus, French
...Then, I went back to school for my PhD study. I was an instructor for the Computer Literacy class, and a teaching assistant for the computer Algorithm class for several semesters in the
Department of Computer Science at the University of Iowa. I have successfully finished my PhD study in the Department of Computer Science at the University of Iowa in December 2007.
35 Subjects: including calculus, general computer, Microsoft Word, Microsoft PowerPoint
...This skill has served me well and can be passed on to others. My civilian background is mostly sales. Sales has taught me how to identify the real needs of an individual so that I may meet
those needs.
11 Subjects: including geometry, algebra 1, reading, public speaking
...My first teaching job was as an English teacher, for two years in Brazil, where I grew up and learned Portuguese. I moved to the USA when I was 19 years old. After September 11th, 2001, teaching English there was a natural way to help myself deal with everything that was happening here.
5 Subjects: including SAT math, English, TOEFL, SAT reading
Finite Simple Groups
Date: 3/15/96 at 0:23:57
From: Anonymous
Subject: What is the Monster?
What is the Monster - the large finite simple group - and why is it interesting?
Date: 3/24/96 at 15:51:23
From: Doctor Steven
Subject: Re: What is the Monster?
The classification of the finite simple groups has been basically
what has driven the study of abstract algebra in the 20th century.
The finite simple groups are:
Z_p where p is prime - the integers modulo a prime.
A_n where n >= 5 - the alternating groups
Groups of Lie type - I don't understand these
Sporadic Groups - I don't understand these either,
but the Monster is in here
This classification was finished in the 1980's and covered over
15,000 pages of proofs. A serious effort, led by Daniel
Gorenstein, is under way to condense this into a concise 3000-5000
page proof.
The reason why the Monster is interesting is because it's the
largest of the sporadic groups (hence the name), and several other
sporadic groups can be determined by studying its structure. Also
interesting is that this group was investigated by hand, while
most other sporadic groups were found using very large computers
running for very long times.
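As a concrete illustration of the first two families (our addition, not part of the original answer): a short script can verify that A_5, the smallest simple alternating group, really is simple, using the fact that a normal subgroup must be a union of conjugacy classes that contains the identity and whose total size divides the group order.

```python
from itertools import permutations, combinations

# Build A5 as the even permutations of {0,...,4}, represented as tuples.
def parity(p):
    return sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]

def compose(p, q):                    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Partition A5 into conjugacy classes.
classes, seen = [], set()
for g in A5:
    if g not in seen:
        cls = {compose(compose(h, g), inverse(h)) for h in A5}
        classes.append(cls)
        seen |= cls

sizes = sorted(len(c) for c in classes)
print(sizes)                          # the class sizes of A5

# A normal subgroup is a union of conjugacy classes containing the
# identity, with total size dividing |A5| = 60.  List every size that
# such a union could have.
identity = tuple(range(5))
others = [c for c in classes if identity not in c]
possible = set()
for r in range(len(others) + 1):
    for combo in combinations(others, r):
        order = 1 + sum(len(c) for c in combo)
        if 60 % order == 0:
            possible.add(order)
print(possible)                       # only the trivial orders survive
```

Since the only admissible orders are 1 and 60, A_5 has no proper nontrivial normal subgroup, i.e., it is simple.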
Some more information can be gotten on the Monster, and finite
simple groups in general, by reading this book:
Mathematical Surveys and Monographs: The Classification of the
Finite Simple Groups, by Daniel Gorenstein, Richard Lyons, and
Ronald Solomon, American Mathematical Society, 1994.
Hope this helps.
-Doctor Steven, The Math Forum
College Point ACT Tutor
...TEACHING PHILOSOPHY I am a big believer in “hands-on” learning, in which the instructor regularly elicits responses from the student to ensure s/he understands the concepts being discussed and
is actively involved in absorbing the material. I think mistakes are great as I see them as learning o...
18 Subjects: including ACT Math, geometry, GRE, algebra 1
...I love helping students achieve their goals. With 15 years of experience, I understand that everyone learns differently and I try to find the best way with each individual student to make that
breakthrough. I have worked for several tutoring companies over the years, in both private and classro...
12 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...That’s where I come in. My tutoring approach sets me and my students apart. My tutoring and coaching services are completely personalized.
52 Subjects: including ACT Math, reading, English, writing
...This achievement led to my acceptance to Teach for America, a highly selective national teaching corps dedicated to eliminating the achievement gap between high- and low-income students by
teaching in high-need schools with results-oriented, data-driven teaching. In my current position as Social...
30 Subjects: including ACT Math, reading, English, writing
...Let me tell you a little bit about myself. I'm a New York City teacher, currently teaching integrated algebra, geometry, and algebra 2/trigonometry. I've been teaching in the city for the past
two years.
7 Subjects: including ACT Math, geometry, algebra 1, algebra 2
Ham sandwich theorem
In measure theory, a branch of mathematics, the ham sandwich theorem, also called the Stone–Tukey theorem after Arthur H. Stone and John Tukey, states that given n measurable "objects" in n-dimensional space, it is possible to divide all of them in half (according to volume) with a single (n − 1)-dimensional hyperplane. Here the "objects" should be sets of finite measure (or, in fact, just of finite outer measure) for the notion of "dividing the volume in half" to make sense.
The ham sandwich theorem takes its name from the case when n = 3 and the three objects of any shape are a chunk of ham and two chunks of bread — notionally, a sandwich — which can then each be bisected with a single cut (i.e., a plane). In two dimensions, the theorem is known as the pancake theorem: two infinitesimally thin pancakes on a plate can each be cut in half with a single cut (i.e., a straight line).
The ham sandwich theorem is also sometimes referred to as the "ham and cheese sandwich theorem", again referring to the special case when n = 3 and the three objects are
1. a chunk of ham,
2. a slice of cheese, and
3. two slices of bread (treated as a single disconnected object).
The theorem then states that it is possible to slice the ham and cheese sandwich in half such that each half contains the same amount of bread, cheese, and ham. It is possible to treat the two slices
of bread as a single object, because the theorem only requires that the portion on each side of the plane vary continuously as the plane moves through 3-space.
The ham sandwich theorem has no relationship to the "squeeze theorem" (sometimes called the "sandwich theorem").
According to Beyer and Zardecki (2004), the earliest known paper about the ham sandwich theorem, specifically the d = 3 case of bisecting three solids with a plane, is by Steinhaus and others (1938).
Beyer and Zardecki's paper includes a translation of the 1938 paper. It attributes the posing of the problem to Hugo Steinhaus, and credits Stefan Banach as the first to solve the problem, by a
reduction to the Borsuk–Ulam theorem. The paper poses the problem in two ways: first, formally, as "Is it always possible to bisect three solids, arbitrarily located, with the aid of an appropriate
plane?" and second, informally, as "Can we place a piece of ham under a meat cutter so that meat, bone, and fat are cut in halves?" Later, the paper offers a proof of the theorem.
A more modern reference is Stone and Tukey (1942), which is the basis of the name "Stone–Tukey theorem". This paper proves the n-dimensional version of the theorem in a more general setting involving
measures. The paper attributes the n = 3 case to Stanisław Marcin Ulam, based on information from a referee; but Beyer & Zardecki (2004) claim that this is incorrect, given Steinhaus's paper,
although "Ulam did make a fundamental contribution in proposing" the Borsuk–Ulam theorem.
Reduction to the Borsuk–Ulam theorem
The ham sandwich theorem can be proved as follows using the Borsuk–Ulam theorem. This proof follows the one described by Steinhaus and others (1938), attributed there to Stefan Banach, for the n = 3
Let A[1], A[2], …, A[n] denote the n objects that we wish to simultaneously bisect. Let S be the unit (n − 1)-sphere embedded in n-dimensional Euclidean space $\mathbb{R}^n$, centered at
the origin. For each point p on the surface of the sphere S, we can define a continuum of oriented hyperplanes perpendicular to the (normal) vector from the origin to p, with the "positive side" of
each hyperplane defined as the side pointed to by that vector. By the intermediate value theorem, every family of such hyperplanes contains at least one hyperplane that bisects the bounded object A[n
]: at one extreme translation, no volume of A[n] is on the positive side, and at the other extreme translation, all of A[n]'s volume is on the positive side, so in between there must be a translation
that has half of A[n]'s volume on the positive side. If there is more than one such hyperplane in the family, we can pick one canonically by choosing the midpoint of the interval of translations for
which A[n] is bisected. Thus we obtain, for each point p on the sphere S, a hyperplane π(p) that is perpendicular to the vector from the origin to p and that bisects A[n].
Now we define a function ƒ from the (n − 1)-sphere S to (n − 1)-dimensional Euclidean space $\mathbb{R}^{n-1}$ as follows:

ƒ(p) = (volume of A[1] on the positive side of π(p), volume of A[2] on the positive side of π(p), …, volume of A[n−1] on the positive side of π(p)).
This function ƒ is continuous. By the Borsuk–Ulam theorem, there are antipodal points p and −p on the sphere S such that ƒ(p) = ƒ(−p). The antipodal points p and −p correspond to hyperplanes π(p) and π(−p) that are equal except that they have opposite positive sides. Thus, ƒ(p) = ƒ(−p) means that the volume of A[i] is the same on the positive and negative side of π(p) (or π(−p)), for i = 1, 2, ..., n − 1. Thus, π(p) (or π(−p)) is the desired ham sandwich cut that simultaneously bisects the volumes of A[1], A[2], …, A[n].
Measure theoretic versions
In measure theory, Stone and Tukey (1942) proved two more general forms of the ham sandwich theorem. Both versions concern the bisection of n subsets X[1], X[2], …, X[n] of a common set X, where X
has a Carathéodory outer measure and each X[i] has finite outer measure.
Their first general formulation is as follows: for any suitably restricted real function $f : S^n \times X \to \mathbb{R}$, there is a point p of the n-sphere $S^n$ such that the surface $f(s,x) = 0$, dividing X into $f(s,x) < 0$ and $f(s,x) > 0$, simultaneously bisects the outer measure of X[1], X[2], …, X[n]. The proof is again a reduction to the Borsuk–Ulam theorem. This theorem generalizes the standard ham sandwich theorem by letting $f(s,x) = s_0 + s_1 x_1 + \cdots + s_n x_n$.

Their second formulation is as follows: for any n+1 measurable functions f[0], f[1], …, f[n] over X that are linearly independent over any subset of X of positive measure, there is a linear combination $f = a_0 f_0 + a_1 f_1 + \cdots + a_n f_n$ such that the surface $f(x) = 0$, dividing X into $f(x) < 0$ and $f(x) > 0$, simultaneously bisects the outer measure of X[1], X[2], …, X[n]. This theorem generalizes the standard ham sandwich theorem by letting $f_0(x) = 1$ and letting $f_i(x)$, for $i > 0$, be the ith coordinate of x.
Discrete and computational geometry versions
In discrete geometry and computational geometry, the ham sandwich theorem usually refers to the special case in which each of the sets being divided is a finite set of points. Here the relevant
measure is the counting measure, which simply counts the number of points on either side of the hyperplane. In two dimensions, the theorem can be stated as follows:
For a finite set of points in the plane, each colored "red" or "blue", there is a line that simultaneously bisects the red points and bisects the blue points, that is, the number of red points on
either side of the line is equal and the number of blue points on either side of the line is equal.
There is an exceptional case when a point lies on the line. In this situation, we count the point as being on either both sides of the line or on neither side of the line. This exceptional case is
actually required for the theorem to hold, in case the number of red points or the number of blue is odd, and still each set must be bisected.
In computational geometry, this ham sandwich theorem leads to a computational problem, the ham sandwich problem. In two dimensions, the problem is this: given a finite set of n points in the plane,
each colored "red" or "blue", find a ham sandwich cut for them. First, Megiddo (1985) described an algorithm for the special, separated case. Here all red points are on one side of some line and all
blue points are on the other side, a situation where there is a unique ham sandwich cut, which Megiddo could find in linear time. Later, Edelsbrunner and Waupotitsch (1986) gave an algorithm for the
general two-dimensional case; the running time of their algorithm is O(n log n). Finally, Lo and Steiger (1990) found an optimal O(n)-time algorithm. This algorithm was extended to higher dimensions
by Lo, Matousek, and Steiger (1994). Given d sets of points in general position in d-dimensional space, the algorithm computes a (d−1)-dimensional hyperplane that has equal numbers of points of each
of the sets in each of its half-spaces, i.e., a ham-sandwich cut for the given points.
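A brute-force sketch of the two-dimensional problem is easy to write. It relies on the fact that, for point sets of odd size in general position, a ham sandwich cut can be chosen through one point of each set, so trying every line through two input points suffices. This is an O(n^3) illustration, not the optimal O(n)-time algorithm of Lo and Steiger; the function names are invented for illustration.

```python
from itertools import combinations

def side(p, q, r):
    """Sign of the cross product (q - p) x (r - p): which side of line pq holds r."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def bisects(line, pts):
    """A line bisects pts if each open half-plane contains at most len(pts)//2 of them."""
    p, q = line
    left = sum(1 for r in pts if side(p, q, r) > 0)
    right = sum(1 for r in pts if side(p, q, r) < 0)
    return left <= len(pts) // 2 and right <= len(pts) // 2

def ham_sandwich_cut(red, blue):
    """Brute-force search over all lines through two of the input points."""
    for p, q in combinations(red + blue, 2):
        if bisects((p, q), red) and bisects((p, q), blue):
            return p, q
    return None
```

On odd-sized inputs in general position this always finds a cut; for even sizes the cut may avoid all points, in which case this particular search can miss it.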
Byrnes, Cairns, and Jessup (2001) showed that it is not always possible to position the hyperplane correctly just by cutting through the objects' centers of gravity.
• Beyer, W. A. & Zardecki, Andrew (Jan. 2004). "The early history of the ham sandwich theorem". American Mathematical Monthly 111 (1), 58–61.
• Byrnes, G. B.; Cairns, G.; & Jessup, B. (2001). "Left-overs from the Ham-Sandwich Theorem". American Mathematical Monthly 108, 246–249.
• Edelsbrunner, H. & Waupotitsch, R. (1986). "Computing a ham sandwich cut in two dimensions". J. Symbolic Comput. 2, 171–178.
• Lo, Chi-Yuan & Steiger, W. L. (1990). "An optimal time algorithm for ham-sandwich cuts in the plane". In Proceedings of the Second Canadian Conference on Computational Geometry, pp. 5–9.
• Lo, Chi-Yuan; Matoušek, Jiří; & Steiger, William L. (1994). "Algorithms for Ham-Sandwich Cuts". Discrete and Computational Geometry 11, 433–452.
• Megiddo, Nimrod (1985). "Partitioning with two lines in the plane". J. Algorithms 6, 430–433.
• Steinhaus, Hugo & others (1938). "A note on the ham sandwich theorem". Mathesis Polska (Latin for "Polish Mathematics") 9, 26–28.
• Stone, A. H. & Tukey, J. W. (1942). "Generalized "sandwich" theorems". Duke Mathematical Journal 9, 356–359.
Item response theory
Original model formulation
Item response theory (IRT) refers to statistical models for data from questionnaires and tests as a basis for measuring abilities, attitudes, or other variables in psychometrics (http://en.wikipedia.org/wiki/Item_response_theory). Doran et al. (2007) fitted a multilevel Rasch model, which is a special instance of an IRT model, using the R function "lmer". This example shows how the model from Doran et al. (2007) can simply be implemented using random effects in ADMB. It also shows how the model can easily be expanded in ways not possible in software packages other than ADMB.
Data and Model
2042 soldiers responded to a total of 19 items, each with a dichotomous outcome (0 or 1). The 19 items were grouped into 3 categories, which were modelled as fixed effects (variable "itcoff" in irt_doran.tpl). Further, the soldiers were grouped into 49 companies, which were taken to be random effects (v). Similarly, there was a random effect associated with each individual soldier (u) and one with each item (w).
A logistic regression with
Prob(x=0) = 1/(1+exp(bx)), Prob(x=1) = 1 - Prob(x=0)
(the Rasch model) and the linear predictor
bx = itcoff + sigma1*u + sigma2*v + sigma3*w;
was used. The sigmas are standard deviations of the random effects.
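The response probability above can be sketched directly (a minimal illustration; the random-effect values passed in are hypothetical, since the real ones are estimated by ADMB):

```python
import math

def prob_x0(itcoff, u, v, w, sigma1, sigma2, sigma3):
    """Rasch-type probability of a 0 response, given the linear predictor bx."""
    bx = itcoff + sigma1 * u + sigma2 * v + sigma3 * w
    return 1.0 / (1.0 + math.exp(bx))

# with a zero linear predictor the two outcomes are equally likely
p0 = prob_x0(0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0)
```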
Running the model
All files needed to run the model are available in a zip file in the box to the left. In order for the model to run efficiently, use the command line
irt_doran -shess -ndi 45000
However, for a data set of this size some of the default settings of the software can be modified to obtain maximum speed of execution:
irt_doran -iprint 1 -mno 50000 -ams 3000000 -gbs 50000000 -cbs 25000000 -noinit -shess -ndi 45000
This yields the following result (.par) file, which matches the results from "lmer" exactly (but differs somewhat from what was reported in Doran et al, 2007).
# Number of parameters = 6 Objective function value = 20354.9 Maximum gradient component = 6.12735e-005
# itcoff:
-1.67737 0.492791 0.135192
# sigma1:
# sigma2:
# sigma3:
The ADMB program runs much slower than "lmer" for this model, because the structure of the model is hard-coded into lmer. ADMB, on the other hand, targets a much larger class of models and is thus not as efficient. However, the benefit of using ADMB is that you can change the model in any way you like, as is exemplified next. This is not currently possible in any other modeling package.
Extension: changing the asymptotes
Given that we think that Pr(x=0) can never be exactly 0 or 1, the following extension of the logistic regression is useful:
Prob(x=0) =a + (b-a)/(1+exp(bx)),
where a and b are parameters to be estimated. Two special cases are considered
1. irt1_doran: only a is estimated (b=0).
2. irt2_doran: both a and b are estimated.
Files for both these models are provided in boxes to the left.
Comparison of results
A comparison of (log) likelihood values is given in the following table
Name log-like Comment # pars AIC
irt_doran -20354.9 Doran et al. 6 40721.8
irt1_doran -20326.0 a estimated 9 40670
irt2_doran -20320.0 a and b estimated 12 40664
Note that there are 3 parameters associated with each of a and b: one for each level of the fixed effect "itcoff". According to the AIC criterion, both a and b are significant parameters for these data.
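The AIC column follows from the log-likelihoods and parameter counts via AIC = 2k − 2·log-likelihood, which is easy to verify:

```python
def aic(log_like, n_params):
    """Akaike information criterion: AIC = 2k - 2 * log-likelihood."""
    return 2 * n_params - 2 * log_like

# (log-likelihood, number of parameters) from the table above
models = {
    "irt_doran": (-20354.9, 6),
    "irt1_doran": (-20326.0, 9),
    "irt2_doran": (-20320.0, 12),
}
best = min(models, key=lambda m: aic(*models[m]))
print(best)  # irt2_doran has the lowest AIC
```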
Doran, H., Bates, D., Bliese, P., Dowling, M. Estimating the Multilevel Rasch Model: With the lme4 Package. Journal of Statistical Software, 20, 2007. (http://www.jstatsoft.org/v20/i02/paper)
Probability in Algebra.
March 22nd 2013, 06:26 AM
Probability in Algebra.
k is uniformly chosen from the interval (-5,5). Let p be the probability that the quartic f(x) = kx^4 + (k^2+1)x^2 + k has 4 distinct real roots such that one of the roots is less than -4 and the other three are greater than -1. Find the value of 1000p.
I posted this thread 3 days ago too, but nobody took the initiative to solve it completely. I need help, as I'm no pro in probability.
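(Not from the thread.) One way to sanity-check the answer: the quartic factors as (kx^2 + 1)(x^2 + k), so its real roots come from x^2 = -k and x^2 = -1/k, and a quick Monte Carlo over k confirms the resulting probability. Variable names are arbitrary.

```python
import random

def real_roots(k):
    """Real roots of k x^4 + (k^2 + 1) x^2 + k = (k x^2 + 1)(x^2 + k)."""
    if k >= 0:  # no four real roots unless k < 0
        return []
    a, b = (-k) ** 0.5, (-1.0 / k) ** 0.5  # from x^2 = -k and x^2 = -1/k
    return sorted({-a, a, -b, b})

def satisfies(k):
    r = real_roots(k)
    return len(r) == 4 and r[0] < -4 and all(x > -1 for x in r[1:])

random.seed(0)
trials = 200_000
hits = sum(satisfies(random.uniform(-5, 5)) for _ in range(trials))
print(1000 * hits / trials)  # near 6.25: the condition holds exactly for k in (-1/16, 0)
```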
March 22nd 2013, 07:02 AM
Re: Probability in Algebra.
So you are complaining that nobody did your homework for you? Perhaps you are at the wrong board! Have you solved that equation yet? That would be the obvious first step.
March 22nd 2013, 08:15 AM
Re: Probability in Algebra.
Yup, I have solved it and I have got the roots, so what? I tried a little bit of wiggling but nothing worked out.
March 23rd 2013, 03:15 AM
Re: Probability in Algebra.
Maybe you need to be a bit more polite if you expect someone to offer you help!
factoring polynomials [Archive] - Free Math Help Forum
I'm having trouble with this problem. I'm supposed to factor 5y^5-5y^4-20y^3+20y^2. I come up with 5y^2(y-1)(y+2)(y-2). The book gives an answer of (y-1)(y+1)(y-2). How did they come up with this
answer? I have tried everything. Can someone help?
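(Not from the thread.) The poster's factorization can be checked numerically: two degree-5 polynomials that agree at 21 integer points are identical, while the book's degree-3 answer cannot even match the leading term.

```python
def original(y):
    return 5 * y**5 - 5 * y**4 - 20 * y**3 + 20 * y**2

def factored(y):
    return 5 * y**2 * (y - 1) * (y + 2) * (y - 2)

def book_answer(y):
    return (y - 1) * (y + 1) * (y - 2)

# the poster's factorization agrees everywhere; the book's answer does not
agrees = all(original(y) == factored(y) for y in range(-10, 11))
book_agrees = all(original(y) == book_answer(y) for y in range(-10, 11))
print(agrees, book_agrees)  # True False
```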
Thanks for the help guys. I had a sneaking suspicion that it was wrong, but how often does that happen? I knew you guys would be able to help. Usually when the book has an answer that I think is
impossible, after I look at it for a while it turns out to be right. Thanks for the info!
The Alpha-Beta-Zero to dq0 block performs a transformation of αβ0 Clarke components in a fixed reference frame to dq0 Park components in a rotating reference frame.
The dq0 to Alpha-Beta-Zero block performs a transformation of dq0 Park components in a rotating reference frame to αβ0 Clarke components in a fixed reference frame.
The block supports the two conventions used in the literature for Park transformation:
● Rotating frame aligned with A axis at t = 0. This type of Park transformation is also known as the cosine-based Park transformation.
● Rotating frame aligned 90 degrees behind A axis. This type of Park transformation is also known as the sine-based Park transformation. Use it in SimPowerSystems models of three-phase synchronous and asynchronous machines.
Knowing that the position of the rotating frame is given by ω·t (where ω represents the frame rotation speed), the αβ0 to dq0 transformation performs a −(ω·t) rotation on the space vector U_s = u_α + j·u_β. The homopolar or zero-sequence component remains unchanged.
Depending on the frame alignment at t = 0, the dq0 components are deduced from αβ0 components as follows:
When the rotating frame is aligned with the A axis, the following relations are obtained:

u_d = u_α·cos(ωt) + u_β·sin(ωt)
u_q = −u_α·sin(ωt) + u_β·cos(ωt)

The inverse transformation is given by

u_α = u_d·cos(ωt) − u_q·sin(ωt)
u_β = u_d·sin(ωt) + u_q·cos(ωt)
When the rotating frame is aligned 90 degrees behind the A axis, the following relations are obtained:

u_d = u_α·sin(ωt) − u_β·cos(ωt)
u_q = u_α·cos(ωt) + u_β·sin(ωt)

The inverse transformation is given by

u_α = u_d·sin(ωt) + u_q·cos(ωt)
u_β = −u_d·cos(ωt) + u_q·sin(ωt)
The abc-to-Alpha-Beta-Zero transformation applied to a set of balanced three-phase sinusoidal quantities u_a, u_b, u_c produces a space vector U_s whose u_α and u_β coordinates in a fixed reference frame vary sinusoidally with time. In contrast, the abc-to-dq0 transformation (Park transformation) applied to a set of balanced three-phase sinusoidal quantities u_a, u_b, u_c produces a space vector U_s whose u_d and u_q coordinates in a dq rotating reference frame stay constant.
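Both conventions can be sketched as plane rotations (a minimal illustration; the zero-sequence component, which passes through unchanged, is omitted, and the function names are invented):

```python
import math

def alphabeta_to_dq(u_alpha, u_beta, wt, alignment="aligned"):
    """Rotate fixed-frame alpha-beta components into the dq frame at angle wt.

    alignment="aligned": d axis along the A axis at t = 0 (cosine-based).
    alignment="behind":  d axis 90 degrees behind the A axis (sine-based).
    """
    theta = wt if alignment == "aligned" else wt - math.pi / 2
    u_d = u_alpha * math.cos(theta) + u_beta * math.sin(theta)
    u_q = -u_alpha * math.sin(theta) + u_beta * math.cos(theta)
    return u_d, u_q

def dq_to_alphabeta(u_d, u_q, wt, alignment="aligned"):
    """Inverse rotation back to the fixed alpha-beta frame."""
    theta = wt if alignment == "aligned" else wt - math.pi / 2
    u_alpha = u_d * math.cos(theta) - u_q * math.sin(theta)
    u_beta = u_d * math.sin(theta) + u_q * math.cos(theta)
    return u_alpha, u_beta

# a balanced sinusoid maps to constant dq components in the aligned convention
u_d, u_q = alphabeta_to_dq(math.cos(0.7), math.sin(0.7), 0.7)
```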
Advantage and Disadvantage in D&D Next: The Math
Everyone will be sharing opinions about D&D Next today and for the foreseeable future. I wanted to do something a little different and focus on just one thing: the math behind the Advantage and
Disadvantage mechanic.
For those who haven’t read the playtest material yet, if you have Advantage for a die roll, you get to roll twice and take the better result (kind of like the Avenger in 4th Edition). If you have
Disadvantage, you have to roll twice and take the worse result.
In reading through the rules, I noticed that being blinded gives you disadvantage for your attacks, while being prone gives you the same -2 to your attack that you would get in 4th Edition. So what’s
the impact of disadvantage? Is it similar to a -2?
My first thought was, what’s the average of 2d20 keeping the highest (advantage), and what’s the average of 2d20 keeping the lowest (disadvantage)? I know that the average of a single d20 roll is
10.5, so knowing the average of advantage or disadvantage should tell me whether it’s equivalent to +/-2, +/-3 or what, right?
I decided to simulate this by having Excel roll a whole bunch of dice (over a million pairs of d20 rolls) and then taking some averages. For those fellow Excel geeks out there, my d20 roll formula
is: =ROUNDUP(20*RAND(), 0). I generated two columns of these, then a column that was the maximum of the two results (=MAX(A2, B2)) for advantage and one that was the minimum (=MIN(A2, B2)) for disadvantage.
I got a result of about 13.83 for a roll with advantage and 7.18 with disadvantage. I later learned that the precise values are 13.825 and 7.175. Comparing this to the 10.5 average you get for a
single d20, advantage adds 3.325 to the average roll and disadvantage subtracts 3.325.
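The same experiment takes only a few lines in Python (mirroring the Excel setup; the seed is arbitrary):

```python
import random

random.seed(1)
N = 1_000_000
adv = dis = 0
for _ in range(N):
    a, b = random.randint(1, 20), random.randint(1, 20)
    adv += max(a, b)   # keep the higher die: advantage
    dis += min(a, b)   # keep the lower die: disadvantage

print(adv / N, dis / N)  # close to the exact values 13.825 and 7.175
```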
Done! Right?
It’s not all about the averages
However, as my fellow EN Worlders soon pointed out, this isn’t the most useful way to look at things. In D&D, what you care about is your chance of success or failure on a die roll. And when you
change the distribution of results from a uniform d20 roll (equal 5% probability of every number from 1 to 20) to the maximum or minimum of 2d20, the impact is not the same as a straight plus or
minus to a d20 roll.
The most useful way I’ve found to look at this is with the following table. The first column shows you the target number you need to roll on the die in order to succeed. (Note that if you need an 18
to hit but you have +6 to hit, then the target number on the die is a 12.) The second column shows the percentage of time you’ll get that result or better on a single d20 roll. The third column shows
how often you’ll get your number with advantage, and the fourth shows the same for disadvantage.
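The table can be generated directly: for a target t on the die, a single roll succeeds with probability (21 − t)/20, advantage fails only if both dice fail, and disadvantage succeeds only if both dice succeed. A short sketch:

```python
def chances(target, sides=20):
    """Success chances for a single roll, with advantage, and with disadvantage."""
    p = (sides + 1 - target) / sides       # single die meets the target
    return p, 1 - (1 - p) ** 2, p ** 2     # advantage, disadvantage

print("target   d20     advantage  disadvantage")
for t in range(1, 21):
    p, adv, dis = chances(t)
    print(f"{t:4d}   {p:7.2%}  {adv:8.2%}  {dis:11.2%}")
```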
What does it all mean?
Let’s take an example from the table. Assume you need to roll an 11 to succeed. With a straight d20, you have a 50% chance of success. With advantage, this goes up to 75%. That’s the equivalent of a
+5 bonus to the roll, since you would also have a 75% chance of success if you only needed a 6 or better on a single d20. Pretty impressive!
On the flip side for the target of 11, disadvantage means you only have a 25% chance of success, equivalent to a -5 penalty to the roll (when you need a 16 or better on a d20, you also have a 25%
chance of success).
So does that mean advantage/disadvantage is equivalent to +/- 5? Not all the time. In fact, it’s only that big when you need exactly an 11 on the die.
Let’s say you need a 15 on the die to succeed. With a single d20, you’ll only get this 30% of the time. With advantage, you’ll get it 51% of the time – about the same as you would get an 11 or better
on a single d20. So advantage in this case is worth about a +4. Disadvantage, similarly, is about a -4: You only succeed 9% of the time with disadvantage, which is about the same as a single d20 with
a target of 19.
At the extremes, advantage makes the least difference. If you need a natural 20 to hit, that’s only going to happen 5% of the time normally. Advantage ups your chance to 9.75% – equivalent to getting
a +1. Disadvantage takes your chance down to 0.25%, or 1 in 400. That’s the chance of rolling back-to-back crits – not a common occurrence. But in terms of a modifier, it’s not much different from
giving you a -1 to your roll when you need a 20 – it’s just about impossible.
In reality
Most of the time, D&D tends to set things up so that you need somewhere between a 7 and a 14 to succeed on a task unless it’s trivially easy or ridiculously hard. If you look at the percent success
in the d20 column for those rows, then find the equivalent percent success in the Advantage column, you’ll see that this is usually similar to getting a +4 to +5 bonus to the roll. Disadvantage is
exactly the same in the opposite direction.
So there you have it. For target die rolls that are reasonably close to the middle of the range, advantage or disadvantage is about the same as having a plus or minus 4 or 5 to your die roll. It’s
pretty powerful – much more powerful than the +2 for combat advantage that you get in 4th Edition.
Note that I haven’t factored in the additional chance of a critical hit with advantage, since I don’t really care about damage per round or anything like that. Suffice it to say that your chance of
critting with advantage is 9.75% instead of 5%, and you can do the rest from there.
- Michael the OnlineDM
31 thoughts on “Advantage and Disadvantage in D&D Next: The Math”
1. Nice analysis. The new 2D20 advantage/disadvantage mechanic sounds like an interesting experiment. I can’t wait to try it out.
2. Well then… Guess I don’t need to put up my article on the analysis. Agree with your conclusions… Excellent article :).
□ I got the same results on my analysis as well, and I also am glad that someone posted theirs so I don’t have to.
3. There’s an analytic formula for the probabilities of the max or min of pairs of dice of a given side, no need for simulation. If the die type has D sides and X is the number you want to roll,
Pr(X) = (X^2 – (X-1)^2)/D^2
which simplifies to
Pr(X) = (2X-1)/D^2
If you want a justification, draw a square with D rows and columns and label rows and columns with the die roll on a given die. There will be D^2 boxes. Each box has probability 1/D^2 of
occurring. Inside each box write the max of the two rows. If you count them up you’ll see that they follow the rule I just gave. Another way to think about it is that the number of boxes that
have a value of X or smaller is X^2, so by taking X^2 – (X-1)^2 you are counting the number of boxes that have an X in them. Dividing by D^2 once again normalizes for the total number of boxes.
(It is worth your time to do this for, say D = 6 to get a feel for the process.)
Beyond that, max or min has the benefit of preserving the original range of the D20, so you can’t exceed the numbers given, while giving the effective bonuses listed in the article. That is, it’s
about +4 or so for the way D&D tends to be calibrated, but you can’t actually exceed the performance you could expect without advantage.
The formula for mins works the same basic way but the probabilities are reversed, as can be seen from the table above. (This formula generalizes to trios, quads, etc., by using cubes, quartics, and so on.)
□ Thanks for the math! I did come to understand this as I worked on it; I only shared the results of my simulation because that was my own starting point and I thought it was an interesting exercise.
☆ Yes, I didn’t mean to knock simulations. They’re often useful ways to explore things, but if you have an analytic solution it’s even better.
○ Okay, I’m clearly not getting it with regard to Pr(X) = (2X-1)/D^2 …. Does this mean that, if I want to roll 2 d6 and get a 6, then I calculate it like so: Pr(6) = (2*6-1)/6*2? This
gives me 11/36, which seems to be right … I would expect to get at least one 6, 11 times out of 36 throws.
Here’s where I am missing something (it may be I don’t understand the use of the carrot – D^2).
If I want to know my chance of getting at least a 5 on 2 throws of a d6, my 6*6 chart shows I can expect that at least 20 times out of 36, but the formula above looks like:
Pr(X) = (2X-1)/D^2
Pr(5) = (2*5-1)/36, or 9/36 so clearly I’m not getting something.
This isn’t an attempt to sharpshoot your math; I’m certain there’s something I don’t get here. Can you help me understand?
■ The formula is not the probability of getting at least a number, it’s the probability of getting exactly that number.
D^2 is D squared. So if D = 6 then you have 6^2 = 36 possibilities.
I’d also not want to write the max of 2D6 as 2 D6 because that’s much too close to the standard 2D6, which is for the sum of 2D6.
How about max(2,D6) or something like that?
■ Oh one meta-point:
When working with probabilities it’s frequently much easier to determine
Pr(X <= x)
that is the cumulative probability than the probability of a particular value x directly. So if you want to know Pr(X = x) just use the fact that
Pr(X = x) = Pr(X <= x) – Pr(X <= x-1)
(This is only true for discrete variables that are scored as integers, such as dice.)
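The commenter's closed-form probabilities are easy to verify numerically; they sum to 1 and reproduce the 13.825 average quoted in the article:

```python
def pr_max(x, d=20):
    """Probability that the larger of two d-sided dice equals x: (2x - 1) / d^2."""
    return (2 * x - 1) / d**2

total = sum(pr_max(x) for x in range(1, 21))
mean = sum(x * pr_max(x) for x in range(1, 21))
print(total, mean)  # total is (approximately) 1, mean is (approximately) 13.825
```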
4. Pingback: Links of the Week: May 28, 2012 | Keith Davies — In My Campaign - Keith's thoughts on RPG design and play.
5. You analysis made for extremely interesting reading, however you failed to take into account one of the most important aspects of this mechanic: rolling more dice = more fun!
□ Heck yeah! Rolling more dice is awesome, I agree.
□ If you’re rolling max(D20,D20) for 20 giant rats and thus have to roll 40 dice in 2-die chunks, I’d really have to say the answer is… no, not so much. Lots of die rolling takes time and
grind time is most definitely not fun for everyone sitting there and waiting.
6. While I like the system, I wish there was some way to have “minor (dis)advantage”: the equivalent to the +2. I guess I should just give +2.
□ Yep. If you look at the rules for being prone, you’ll see that the -2 to attack still exists.
7. While it may be new to D&D, this mechanic is hardly new at all. Savage Worlds uses this for all rolls by the PCs, for instance. But, at least WotC is developing a 21st century game rather and
20th century one now.
□ I don’t recall anyone particularly playing up the novelty of this mechanic in this article. But I don’t see any need to belittle the company here.
☆ It is not the article, it is the folks who only play d20/D& X that need to understand there are other games doing very modern methodologies. I was actually being sincere that wotc is
thinking in both the past and the future.
○ Roll D20 and pick the better of the two is already in 4E. Lots of powers do that and an entire class, the Avenger, is built around it.
You’re right, that a max mechanic has been done for a while. Savage Worlds did it, but as I recall Deadlands (Savage World’s ancestor) did it before. Another game with a max mechanic
is Blue Planet.
8. A friend sent me a link to your article, but I thought the graphed data set was also really interesting. Not only is Advantage better more often, but it is SO much better than +2. Anyway, the
graph is also good for those who like pictures (like me!). This is a fun problem!
(Crossposted to both articles on this analysis! I am SO happy to see math in the world!)
9. Pingback: Combat Speed in D&D Next | The Id DM
10. Pingback: The (Dis)Advantaged « Jack's Toolbox
11. Great article! Thank you for writing it.
12. For some reason, you didn’t seem to get a pingback when I wrote an analysis/extension of this article even though I followed the link within my article back to this page – something that I have
only just noticed. It's probably not especially relevant anymore, but here's a link to my article: http://www.campaignmastery.com/blog/implications-of-ddnext-advantage/
Fairview, TX Calculus Tutor
Find a Fairview, TX Calculus Tutor
...I tutored at the University of Texas at Dallas, where I was also a Teaching Assistant. I taught courses at Richland College and Collin County Community College. My specialties are Physics I
and Physics II, both algebra and calculus based.
8 Subjects: including calculus, physics, geometry, algebra 1
...Chemistry, Gen. Physics, Algebra, Calculus I and II, I could be a resource for you. Good luck with your study, and I will look forward to working with you!
24 Subjects: including calculus, chemistry, reading, English
...Currently I teach Math classes at a private college in Bedford as an adjunct professor. I also work in Software industry building state of the art computer algorithms for IT companies. I have
a passion for teaching and make anyone understand Mathematical concepts in a clear, effective and simple manner.
23 Subjects: including calculus, geometry, GRE, statistics
...I took a sample one in the Marine Corp office years ago and missed only two questions at the time. Admittedly I never enlisted, but I know I can help you learn and study for this Test. Since
it is comprehensive, we will need to cover each section bit by bit.
26 Subjects: including calculus, chemistry, reading, writing
...I believe learning can actually be fun as well as rewarding. I myself am strong in English and Math, and was a Technical Writer, Editor and Proofreader for 10 years in the business world. I
quit my job to stay home with my kids and raise them, which was enormously fulfilling.
10 Subjects: including calculus, English, writing, algebra 2
opposite rays geometry example
Best Results From Wikipedia Yahoo Answers Youtube
From Wikipedia
Cross section (geometry)
In geometry, a cross-section is the intersection of a figure in 2-dimensional space with a line, or of a body in 3-dimensional space with a plane, etc. More plainly, when cutting an object into
slices one gets many parallel cross-sections.
Cavalieri's principle states that solids with corresponding cross-sections of equal areas have equal volumes.
The cross-sectional area (A') of an object when viewed from a particular angle is the total area of the orthographic projection of the object from that angle. For example, a cylinder of height h and
radius r has A' = \pi r^2 when viewed along its central axis, and A' = 2 \pi rh when viewed from an orthogonal direction. A sphere of radius r has A' = \pi r^2 when viewed from any angle. More
generically, A' can be calculated by evaluating the following surface integral:
\,\! A' = \iint \limits_\mathrm{top} d\mathbf{A} \cdot \mathbf{\hat{r}},
where \mathbf{\hat{r}} is a unit vector pointing along the viewing direction toward the viewer, d\mathbf{A} is a surface element with outward-pointing normal, and the integral is taken only over the
top-most surface, that part of the surface that is "visible" from the perspective of the viewer. For a convex body, each ray through the object from the viewer's perspective crosses just two
surfaces. For such objects, the integral may be taken over the entire surface (A) by taking the absolute value of the integrand (so that the "top" and "bottom" of the object do not subtract away, as
would be required by the Divergence Theorem applied to the constant vector field \mathbf{\hat{r}}) and dividing by two:
\,\! A' = \frac{1}{2} \iint \limits_A | d\mathbf{A} \cdot \mathbf{\hat{r}}|
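As a sanity check of the surface-integral formula, a numerical evaluation for the sphere viewed along the z axis should recover A' = πr² (the function name and quadrature scheme are an invented illustration):

```python
import math

def projected_area_sphere(r, n=2000):
    """Evaluate (1/2) * surface integral of |dA . r_hat| over a sphere of radius r,
    viewing along the z axis, by the midpoint rule in the polar angle theta."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        # |n_hat . z_hat| = |cos(theta)|; band area element = 2*pi*r^2*sin(theta)*dtheta
        total += abs(math.cos(theta)) * 2 * math.pi * r**2 * math.sin(theta) * (math.pi / n)
    return total / 2

print(projected_area_sphere(1.0))  # close to pi, i.e. pi * r**2
```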
From Yahoo Answers
Question:I need to know this for my geometry class. I take a regular geometry course and I need your help! Thanks
Answers:I guess if they are joined, then a straight line
Question:It was a quetion that came up to me and a couple of friends.
Answers:You mean TWO OPPOSITE RAYS? like <------------> Hmm. That's pretty interesting. As we know Geometry, we can never state a statement if it has no proof, or there is no theorem/postulate that
supports it. I can't recall a theorem in Euclidean Geometry that says: Two opposite rays determine a LINE. But yeah technically speaking, 2 opp. rays determine a line. But I'm no mathematician to
claim it. :)
Question:Could you give me some ideas on how these examples of geometry terms are represented in real life? It is supposed to be objects. Here are the terms. Point Line Segment Ray Opposite Rays
Perpendicular Lines Parallel Lines Acute Angle Obtuse Angle Right Angle Vertical Angles (Acute only) Adjacent Angles (must be less than 180 Degrees) Linear Pair Thank you so much!
Answers:You have to put down tile or some type of flooring, you need to be able to use these to figure out how much tile to order and then how to lay this tile so that you have to cut the least
amount (I mean really who wants to cut tiles for forever).
Question:A pair of adjacent angles whose noncommon sides are opposite rays
Answers:A linear pair
From Youtube
Rays - YourTeacher.com - Geometry Help :For a complete lesson on geometry rays, go to www.yourteacher.com - 1000+ online math lessons featuring a personal math teacher inside every lesson! In this
lesson, students learn the definitions of a segment, a ray, and length, as well as the symbols that are used in Geometry to represent each figure. Students also learn the definitions of an endpoint,
opposite rays, a coordinate, and a number line. Students are then given geometric figures that are composed of segments and rays, and are asked true false questions about the given figures. Students
are also given number lines, and are asked short answer questions about the given number lines. Students are also given the coordinates of the endpoints of segments, and are asked to find the segment lengths.
Euclidean & Non-Euclidean Geometries Part 5: Axioms (Cont.) :Continued from Part 4. I knock a glass candleholder off the shelf during the video, and the sound, while not very loud, might surprise you
or your cat. I also knock something else off the ledge, but I don't remember what it was. EUCLID'S POSTULATE III. For every point O and every point A not equal to O there exists a circle with center
O and radius OA. DEFINITION. The ray AB is the following set of points lying on the line AB: those points that belong to the segment AB and all points C such that B is between A and C. The ray AB is
said to emanate from A and to be part of line AB. DEFINITION. Rays AB and AC are opposite if they are distinct, if they emanate from the same point A, and if they are part of the same line AB = AC.
DEFINITION. An "angle with vertex A" is a point A together with two nonopposite rays AB and AC (called the sides of the angle) emanating from A. DEFINITION. If two angles BAD and CAD have a common
side AD and the other two sides AB and AC form opposite rays, the angles are supplements of each other, or supplementary angles. DEFINITION. An angle BAD is a right angle if it has a supplementary
angle to which it is congruent. EUCLID'S POSTULATE IV. All right angles are congruent to each other. DEFINITION. Two lines m and n are parallel if they do not intersect, i.e., if no point lies on both of them. EUCLID'S POSTULATE V. (THE PARALLEL POSTULATE) For every line l (el) and for every point P that does not lie on l (el) there exists a unique line m through P ...
Basic concepts - 8th grade math - Gene Expression | DiscoverMagazine.com
Many fellow ScienceBloggers are doing a “Basic Concepts” series. Here are some of them:
Mean, Median, and Mode
Normal Distribution
Central Dogma of Molecular Biology
Instead of thinking up something new I’ve decided to repost a an older post where I cover the “basic” equations and models which I pretty much assume in many of my posts. The post below….
Begin repost
Occasionally I appeal to formalizations or equations on this weblog to illustrate a general verbal principle. I don’t do it to obscure or needlessly technicalize a topic of interest, but rather, it
is often a neat and dense way to package the information and express precisely the various relations that I am attempting to communicate. Most of the formulas are extremely light on mathematical
subtlety, and there is little need for a level of fluency beyond what one should have attained in 8th grade algebra. The difficulty is almost always an unfamiliarity with the notation. But, the
formulas are condensations of some of the general concepts that I want to communicate to readers of this blog. I know that a substantial proportion of the core 300 readers of this weblog come from
backgrounds in the mathematical sciences, and the main hurdle for this subset is simply to map over knowledge from their own fields into biology (though I haven’t thrown out diffusion equations here
in relation to changes in allele frequency, so I’m not sure how much mapping even needs to be done, seeing as how it is almost always 8th grade algebra on display, with a little basic statistics and
probability). But many do not come to this weblog from the mathematical sciences, so I am here to reassure you that you need not skip any equation or formalization, because they are mathematically
quite trivial and easy to grasp. For me, formalization and mathematicization in the context of this weblog helps everyone trade in a common currency. It facilitates communication and cuts down on
needless verbal confusions. Over the past 3 years I myself have slowly become more and more prone toward formalizing my genetical thinking in a “Wright-Fisher” model, so to speak. In the rough
outlines little has changed, but I have gained a great deal in precision and predictivity. As I note above, this gain in precision and predictivity is attained via the most minimal acquisitions of
mathematical tools. Many of the models can be easily illustrated with difference equations, which can be confirmed computationally (in MS Excel even!) because of their discrete nature (i.e., don’t
sweat not taking any courses in differential equations, linear algebra, probability and statistics, or, yes, even calculus! Though I think if you don’t have calculus you probably will miss some of
the logic and implications).
Godless & I never have gotten around to a “GNXP FAQ.” The reasons are rooted in human psychology, this weblog is a hobby, a pastime, and writing an FAQ requires forethought and effort we simply never
felt we could spare! Nevertheless, I do link to technical webpages whenever I can when I use an equation, and at this point, I feel I should go over a few formulas that I feel are particularly useful,
and good currency to have “under your belt” so that one can get beyond vague impressions and intuitions. And I want to emphasize: there really isn’t much beyond 8th grade algebra here!
For example, consider the Hardy-Weinberg Equilibrium (HWE)….
p^2 + 2pq + q^2 = 1
p = frequency allele (“A”) at a locus (def. 4) in the population
q = frequency allele (“a”) at a locus in the population
p^2 = frequency of homozygote A genotype in a population (that is, frequency of p squared, p X p, because you have two copies, AA, at a locus)
q^2 = frequency of homozygote a genotype in a population
2pq = frequency of heterozygote genotype in a population
G.H. Hardy (most well known to the public because of his collaboration with Ramanujin) thought little of this formula, and didn’t understand why it wasn’t obvious for biologists! As most of you know
from high school biology, if you have two parents who are heterozygotes for a “dominant” and “recessive” allele, that is, they are Aa and Aa, on a locus, and they have offspring, 1/4 of the progeny
will exhibit the recessive trait and 3/4 will not (because only the 1/4 who are homozygotes, aa, express the trait). Biologists in the early 20th century actually debated why populations did not
exhibit the 3:1 ratio that emerged out of the Punnet Squares. Of course, the key is that the expression of the recessive trait is a function of the frequency of the recessive allele within the
population, the 3:1 ratio on a populational level is only valid where p and q are both at frequencies of about 50%.
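For readers who want to poke at HWE numerically, here is a minimal Python sketch (the function name is my own, not standard notation):

```python
def hwe_genotype_freqs(p):
    """Hardy-Weinberg genotype frequencies (AA, Aa, aa) given the
    frequency p of allele A, with q = 1 - p for allele a."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

# At p = q = 0.5 the recessive homozygotes (aa) are 25% of the
# population, the familiar 3:1 phenotypic ratio from the Punnett
# square; at other allele frequencies the ratio changes.
AA, Aa, aa = hwe_genotype_freqs(0.5)
```

Plugging in other values of p shows how quickly the 3:1 intuition breaks down away from p = 0.5.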
This model has many assumptions (like all models, otherwise, they wouldn’t be models!), but, it works as a good null hypothesis. You might think that it isn’t worthwhile to even know this equation,
but I disagree, for two reasons….
1) It gives more concreteness to your ability to think about genetic questions, and it is the basis for extrapolation of that thought. Believe it or not, HWE is the starting point for a lot of
mathematical population genetic models, and for a reason.
2) A lot of the genetic issues that come up in everyday discourse are amenable to in-your-head-HWE calculations. For example, cystic fibrosis, which is a recessive disease. Or, Rh- status (again,
recessive). The payoff in terms of utility is not that great, but the ease of doing the calculation in your head and being able to brush aside verbal qualifications is I think useful. Remember 3
years ago when stories popped up that ‘blondes were going extinct’ (it seems like it was a hoax)? As our own David Burbridge noted, this seemed a peculiar assertion to make sans massive selection
(some selection was implied of course, though little evidence given as to the magnitude of differential fitness), the alleles which caused blondism likely wouldn’t disappear from the population,
assuming that one models it as a ‘recessive’ trait (I actually think the recessive-dominant talk confuses more than helps, but that’s another post).
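As a concrete illustration of the in-your-head calculation for a recessive condition, here is a sketch; the helper name and the 1-in-2,500 incidence are illustrative assumptions for the example, not a claim about any particular disease in any particular population:

```python
import math

def recessive_allele_and_carrier_freq(incidence):
    """Given the incidence of a recessive condition (i.e., q^2 under
    HWE), back out the allele frequency q = sqrt(incidence) and the
    heterozygous carrier frequency 2pq."""
    q = math.sqrt(incidence)
    p = 1.0 - q
    return q, 2.0 * p * q

# Illustrative: an incidence of 1 in 2,500 gives q = 0.02, so roughly
# 1 person in 25 is a heterozygous carrier.
q, carrier_freq = recessive_allele_and_carrier_freq(1 / 2500)
```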
So, with the preliminaries over, on to a few trivial issues which I hope to bring to your attention. I want to first excuse myself of any pretense at a high standard of rigor and formality, the
threshold for success is low if we get beyond vague verbal platitudes. There are lots of different notations, and I’m mildly constrained by the formatting of HTML (no, I don’t want to use MathML,
this is not a math weblog, and I don’t want to give people bizarre javascript warnings). So bear with me….
V[p] = V[a] + V[d] + V[e] + V[i]
V[p] = Variation of the phenotype (trait)
V[a] = Additive genetic variation
V[d] = Dominance genetic variation
V[e] = Environmental variation
V[i] = Interactional (epistatic) variation
Normally you should see some σ^2 (where I put the V’s) to make it clear that you are talking in terms of statistical parameters of variance. Roughly speaking, all that matters is that the variation
of a given phenotype attributable to a subcomponent is being illustrated here. If you want to get into the nitty-gritty of this equation, the last chapter of Principles of Population Genetics is
nice, while Introduction to Quantitative Genetics really does more than “introduce” you to the nuances of this formalization. But, on this weblog we have talked about V[a] a lot, because that is what
is to a large extent responsible for the normal distribution you see on continuous quantitative traits. When we are delving into psychometrics, this really matters. But, to get less contentious, let
me use height as an illustrative trait. Clearly there is a median and modal height, an “average,” and variation around this average. Some of this is no doubt due to genes, we see rough relationships
between parents and their offspring. But the key is that the relationship is rough, there are other components of the variation. As Francis Galton first noted, if you plot offspring height against
parental height one notes a correlation, but an imperfect one. V[a], the additive genetic variance, is roughly proportional to the regression coefficient of the line of best fit. In plain English,
the similarity in height between a population of parents and their offspring (correct for sex) is due to V[a]. V[a] on the genetic level can be thought of as constituted by genes of small effect
across the loci. Going back to the notation I used for the Punnet Square, instead of just AA, Aa and aa on a locus, imagine multiple loci, each with its own cluster of alleles, and each contributing
a small increment independently to the phenotype. A large number of random variables of small effect will tend to result in a modal, that is, most frequent, value that is also intermediate on the
range of the distribution (see central limit theorem, I don’t know how to put this in plain English well). But here is the key: the V[a], which is basically the “narrow sense heritability” is the
populational proportion of variation, it doesn’t mean much on the individual level. This is important because a recurrent problem on this weblog is that people have conflated the point and taken a
stand on the “Nature vs. Nurture” issue by stating that “I believe that trait X is mostly genetic/environmental.” Though the gist of the comment is well taken, the way the phrase is put suggests
a misunderstanding of the details of what is being communicated, that is, these values explain populations, but are not necessarily relevant when speaking, or conceiving of, individuals.
Additionally, the calculation of these values is often problematic. I have also left off the important confounding parameters of gene-environment association and gene-environment interaction. The
former is basically an increase of the dispersion because of the correlation of particular genotypes to particular environments (the classic example is the increase in feed to extremely productive
dairy cows, this obviously increases the difference between the animals in terms of their milk productivity since the environmental variable is different across them). Gene-environment interaction is
even more difficult, as it involves the unpredictable reactions of genotypes to varied environments. This is the classic norm of reaction problem. The epistatic gene interactions are also important
confounders of the idealized genetic world, as gene-gene interactions break up the calm of the additive independent loci universe. In sum, this idealized model is a utopia that is never attained in
the real world. Does that mean we should give up on this formalization?
Developmental psychologist Alison Gopnik noted earlier this year:
Because humans create and change their own environments we don’t and can’t, in principle, know what the range of possible environments will turn out be. And we don’t know how those possible
environments might interact with genetics over the course of development to cause a particular distribution of adult traits. This means that we simply don’t and can’t know how much genes
contribute to complex human traits in general — the question is incoherent. In a particular case, with a particular specified environment, and a complete developmental history of the causal
interactions between the organism and the environment, we might be able to give a causal account of the path from gene to trait. But there is no general answer about how gene and trait are
related across all environments.
Frankly, I wonder why Alison Gopnik is not a solid state physicist or a novelist, because these types of assertions make me wonder as to the validity of any attempt at a rigorous and predictive
intellectuality any softer than molecular biology! Ultimately, such stringent standards for controlling of variables seems to render only the purest of humanities and the hardest of sciences immune
from the taint of noise, the former because of its unalloyed reliance on intuition, and the latter because of its recourse toward deterministic models of extremely precise caliber. Why is Alison
Gopnik a developmental psychologist? Does her “Theory Theory” stand up to the scrutiny of this standard? I doubt it.
The fact is anyone who is doing work in a noisy and statistical science is very well aware of confounding factors. It is part of the business. But even in the midst of the murkiness of the complex
and messy world around us, we attempt to model it as best as we can. This is what Gopnik is doing in her corner of cognitive science. This is what public policy analysts do when they decompose and
examine topics of human importance in a rational manner. If the critiques of the likes of Gopnik are taken to their logical conclusion, outside of the physical, and to some extent biophysical,
sciences, all we will have recourse to are customs, traditions and a priori values. Is this what Gopnik would want? If you read Curious Minds, you will get a feel for the cultural milieu that Gopnik
grew up in, and her personal biases in terms of politics and culture. I doubt that she would be excited about mapping her views expressed above to other fields. I have pointed out this hypocrisy
before, that the norms and biases we hold tend to result in wildly different standards of rational integrity and model building depending on the domain we are addressing. As a cognitive
scientist, Gopnik should know better!
There are two key problems here which are the real root of Gopnik’s issue (at least on the objective big picture scale as opposed to tactical sniping within-field). Verbal expositions about variation
tend to underemphasize the various subcomponents of the variation, amongst those “in the know,” the problems are known, accounted for, and we keep moving on operationally with our mental process.
But, amongst those not in the know, it seems we are ignoring the other components of variation. Obviously this is a function of clarity and reiteration. But, there is another problem: the tendency
toward cognitive models to canalize into typologies. Two independent variables contribute to this specifically, human bias toward slotting statistical-populational concepts and trends into idealized
types, boxes, bins and categories, usually in relation to “theories” in a Gopnikesque manner, and, frankly, stupidity. A large proportion of the human race, perhaps the vast majority with sub-115
IQs, simply can not understand basic statistics, conditional probabilities, and so on. But even for those whose intelligence and motivation is high, it can be difficult to break out of preconceived
boxes. The “Nature vs. Nurture” box is a very powerful one, and I believe it coopts a natural tendency to think in terms of black and white dichotomies. We have to climb up the hill of our biases
with the rope and ladder of logical-abstraction, and allow ourselves to be guided by the system and the mathematical logic. Simply keep the faith alive!
OK, enough polemics, I want to leave you with something positive.
R = h^2 X S
R = response, S = the selection differential, and h^2 is the narrow sense heritability I mentioned above (additive genetic variance). You often hear talk of evolution, but what about quantifying how fast it occurs?
This “prediction equation” comes out of animal breeding, but the basic principle is obvious, the response to selection is a function of the amount of selection you are engaging in (i.e., selecting a
subset of the overall population via some truncation for minimum phenotype value) and the narrow sense heritability. Over time if the selection is strong enough the additive genetic variance should
be exhausted, and “evolution” should stop. This sort of start-and-stop gradualism is common in microevolutionary processes (though even in breeding it takes a while, especially in a large
population). When someone wonders if black Americans are somehow more robust because of slavery, remember, you need to have high heritability and strong selection for “robustness” to be shifted in
just a few generations. You too can “do the math”!
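The breeder’s equation above is trivially mechanized; the numbers below are illustrative, not drawn from any real breeding program:

```python
def selection_response(h2, S):
    """Breeder's equation R = h^2 * S: h2 is the narrow-sense
    heritability, S is the selection differential (mean of the
    selected parents minus the population mean)."""
    return h2 * S

# Illustrative: heritability 0.4 and selected parents averaging 5
# units above the population mean shift the offspring mean by 2 units
# in one generation.
R = selection_response(0.4, 5.0)
```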
Molecules and phylogenies
Back in the mid-1980s we started hearing a lot about “African Eve.” As we note on this weblog, there were two parts of this:
1) A caution not to overread the results into imagining a single foremother.
2) The popular press basically ignoring this caution.
But here are a few “fun facts.”
1) The probability of fixation of a new mutation is 1/2N, where N is the population size.
2) The time until fixation is usually 4N, where N is the population size.
3) 2Nμ X 1/2N = μ, where μ is the mutation rate: the number of new mutations entering the population each generation (2Nμ) times each one's fixation probability (1/2N) gives a substitution rate
(the turnover of alleles on loci) that is a function of the mutation rate (μ) alone.
4) Time until extinction of an allele is usually 2N[e]/N X ln(2N).
5) The long term effective population size is the harmonic mean of the per-generation sizes: 1/N[e] = 1/t(1/N[0] + … + 1/N[t – 1]).
You can do some “plug & chug” in Excel if you want to check out the long term effective population, but trust me when I tell you that it is far closer to the low bound as a function of time than the
high bound as far as what your intuition would tell you. That is, population bottlenecks are extremely salient demographic events, and tend to be important even after the population bounces back.
This also should make one cautious about assuming that larger populations have more genetic variation than smaller ones, that is only true if you correct for the long term effective population, as
often large populations are simply “blow ups” of small populations. Probability of fixation, extinction and time until fixation should give you a clue as to an important fact: lineages come and go,
the only thing that is inevitable in a world without selection is extinction. If you go back 100,000 years, about 5,000 generations, it stands to reason that most lineages will have gone extinct.
Think of it like surnames that are passed through the male line, how likely is it going to be that a surname will be passed for 5,000 generations through an uninterrupted line of males? (remember,
mtDNA is passed through females, flip the logic!) Using some of the equations above, and considering the wild fluctuations in population size that were likely in the past, the caution seems a lot
more solid I think.
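The harmonic-mean behavior of the long-term effective population size is easy to confirm with a few lines of Python (the function name and the toy numbers are mine):

```python
def long_term_effective_size(sizes):
    """Long-term effective population size as the harmonic mean of the
    per-generation sizes: 1/Ne = (1/t) * sum(1/N_i)."""
    t = len(sizes)
    return t / sum(1.0 / n for n in sizes)

# Toy history: nine generations at N = 10,000 plus one bottleneck
# generation at N = 100. The arithmetic mean is 9,010, but the
# long-term effective size is only about 917; the bottleneck dominates.
Ne = long_term_effective_size([10_000] * 9 + [100])
```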
r X B > C
The logic of kin selection. r = coefficient of relatedness, roughly, the chance that an allele picked from a locus will be identical by descent to an allele selected from another individual at the
same locus (i.e., 1/2 between siblings, 1/2 between parent and offspring, 1/4 between grandparents & grandchildren, etc.). B = the fitness benefit and C = the cost. It’s a simple way of thinking about the world,
kin clearly matters. As J.B.S. Haldane was reputed to have said he would “give up his life for 2 brothers or eight cousins” (1/2 r to a sibling, 1/8 to a cousin, makes sense).
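Hamilton’s rule can be sketched as a one-liner; the “lives” accounting below is just Haldane’s quip made literal:

```python
def hamilton_favors(r, B, C):
    """Hamilton's rule: an altruistic act is favored when r * B > C."""
    return r * B > C

# Haldane's quip in units of lives lost/saved (C = 1 own life):
# two brothers (r = 1/2, B = 2) is exactly break-even, not a strict
# gain; three brothers or nine cousins tips the inequality.
two_brothers = hamilton_favors(0.5, 2, 1)    # 1.0 > 1 is False
nine_cousins = hamilton_favors(1 / 8, 9, 1)  # 9/8 > 1 is True
```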
But here is one thing to consider: recent literature reviews I have seen in Molecular Markers, Natural History and Evolution as well as Cooperation among Animals suggest that kin selection can not
explain the full range of eusociality amongst haplodiploid hymenoptera across that taxon. They were William Hamilton’s original models because sisters are more closely related to each other, r =
0.75, than they would be to their own offspring (r = 0.5) since males are haploid. In many species the coefficients of relatedness are empirically far lower than the idealized kin selection models predict, due to
multiple queens and multiple inseminations of each queen (there weren’t genetic tests when Hamilton published his original paper in The Journal of Theoretical Biology). And yet eusociality exists
amongst them, just as in other species who do fit the ideal kin selective parameters eusociality also flourishes. The explanatory bubble seems to have burst…or has it? One solution is that kin
selection might have been the necessary condition for eusociality to evolve, that is, the ancestral state, but once eusociality was a feature of this class of organisms other mechanisms also appeared
to reinforce it, allowing the relaxation of the necessity for a high coefficient of relation.
This is not to deny Hamilton’s point or his logic, but, like Trivers’s reciprocal altruism, or the older ideas of group selection, I think it should make us cautious of “one formalization to rule
them all!” Gopnik was right in that, generalization is a difficult enterprise in higher order biology, and even more difficult in the human sciences (hard to do controlled lab tests!). But that does
not mean that formalization is all for naught. In the sciences we should attempt to achieve the maximum level of predictive power with the minimum number of parameters. If a given species requires a
synthesis of various models in a layered palimpsest, that’s life. Model building must continue, because even if each model is only a brick, the house needs those bricks to ultimately be completed.
OK, I was going to hit a lot more equations and formulas…but I’ve already gone WAY too long because of my inability to shut up with the words. Perhaps I’ll revisit this topic. But, here are some
books you might find of some interest:
Principles of Population Genetics
Genetics of Populations
Introduction to Quantitative Genetics
Genetics and Analysis of Quantitative Traits
Narrow Roads of Gene Land
Evolutionary Quantitative Genetics
Eeep! You all are writing these faster than I can digest them!
I note there’s a special place set aside for “The Week of Just Science.” Was that easy to set up? I’d love to have these “Basic Concepts” gathered together and RSS-ified somehow..
Also, I was going to ask you to recommend a Population Genetics textbook, and it seems you provided me links to two!
• http://www.scienceblogs.com/gnxp
I note there’s a special place set aside for “The Week of Just Science.” Was that easy to set up? I’d love to have these “Basic Concepts” gathered together and RSS-ified somehow..
there are plenty of ways to do an rss aggregation. for ‘just science’ i used feedpress 0.98 & wordpress. but don’t worry about basic concepts, scienceblogs.com is working on a new category from
what i hear….
“The Week of Just Science” is a good idea. Thanks for your post.
Just wanted to point out that you misspelt the name of one of the best mathematicians the world has ever seen.
It’s not Ramanujin but Ramanujan.
• http://matt-at-berkeley.blogspot.com/
For what it’s worth- the Hartl and Clark (Principles of Pop Gen) is light years ahead of any of the other pop gen textbooks…
• http://akinokure.blogspot.com
Bob, you should also read the chapter on maintenance of genetic variation in Roff’s _Evolutionary Quantitative Genetics_. The book doesn’t have lots on pop gen, but this issue is a pretty “hot
topic” in evolutionary genetics, though it doesn’t get much treatment in the Hartl & Clark book.
Wow, Matt and agnostic, thanks for the info.
I think razib gets a “cut” if I buy these directly from the link. How about if I put them on an amazon ‘wish list,’ then later purchase them. Anyone know?
• http://www.scienceblogs.com/gnxp
I think razib gets a “cut” if I buy these directly from the link. How about if I put them on an amazon ‘wish list,’ then later purchase them. Anyone know?
if you click through i think i get a cut. but no biggie either way, i don’t depend on the $$$ i get via these links. its a legacy when i was paying webhosting costs and i used it to defray them.
but now i pay no webhosting costs on either blog.
Cool. I’ll just buy them directly from the link. My price is the same either way, and I’m pleased that you might benefit a bit.
GED® Math | Math Lesson 3B
Home | Writing | Reading | Social Studies | Math | Science
Lesson 3B: Geometry and Graphing
Lesson Outline:
1. Triangles, Angles, and Lines
2. Graphing
a. The Graph
b. Line Plotting – Linear Equation revisited
c. Distance between points
d. Midpoint between points
TRIANGLES, ANGLES AND LINES
Let’s say we have two parallel lines with another line passing through both of them. The corresponding angles will have the same degree measure.
If we add another line we create two triangles. Two triangles with the same angles.
These are known as similar triangles with corresponding angles having the same degree measure, as indicated by the matching colors.
What’s more, each pair of corresponding sides form the same ratio.
Thus, you can calculate the length of an unknown side if you have the lengths of:
- A side corresponding to the unknown side
- Or two other corresponding sides
Note also that if you have two triangles and know that at least two sets of angles correspond (have the same measure), then the triangles are similar triangles. The third pair of angles must be the
same, because the three angles in all triangles add up to the same total (180º).
What is x?
A) 12 km
B) 10 km
C) 8 km
D) 6.5 km
E) There is not enough information
We know we have two similar triangles, because they have opposite angles and right angles in common.
• So, we can set up a ratio with the lengths of corresponding sides to solve for x:
x / 20 = 13 / 26
x = (13 / 26) * 20
= (1/2) * 20
= 10 km
The answer is B
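The same ratio setup can be checked in a few lines of Python (a hypothetical helper written for this lesson, not part of the GED materials):

```python
def missing_side(known, corr_known, corr_missing):
    """Solve x / corr_missing = known / corr_known for x, where the
    sides of two similar triangles correspond in pairs."""
    return corr_missing * known / corr_known

# x / 20 = 13 / 26  ->  x = 10
x = missing_side(13, 26, 20)
```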
A right triangle is any triangle that has a vertex that is a right angle (90º).
The lengths of the sides of a right triangle always have a relationship expressed by:
a² + b² = c², where a and b are the “legs” and c is the hypotenuse.
This is called the Pythagorean relationship (named for Pythagoras, who came up with it long ago in Greece).
Helpful tip:
If you have the lengths of any two sides of a right triangle, you can calculate the unknown third side.
What is x in yards?
A) 2 yards
B) 6 yards
C) 18 yards
D) 21 yards
E) 36 yards
It is a right triangle, so you can use the Pythagorean relationship:
a² + b² = c², where the hypotenuse c = 10, the leg a = 8, and the missing leg b = x
8² + x² = 10²
64 + x² = 100
x² = 100 − 64
x² = 36
x = 6 feet
• The answer asked for is in yards. Remember 1 yd. = 3 ft.,
so 1 ft. = 1/3 yd. (1/3 yard for every foot), and therefore
x = 6 feet * 1/3 yds. per ft. = 2 yards
The answer is A.
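Here is the same Pythagorean calculation, including the feet-to-yards conversion, as a short Python sketch (helper names are hypothetical):

```python
import math

def missing_leg(hypotenuse, leg):
    """Solve a^2 + b^2 = c^2 for the unknown leg of a right triangle."""
    return math.sqrt(hypotenuse ** 2 - leg ** 2)

FEET_PER_YARD = 3

x_feet = missing_leg(10, 8)        # 6.0 feet
x_yards = x_feet / FEET_PER_YARD   # 2.0 yards
```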
In Algebra, we looked at the linear equation as a function with a set of ordered pairs. Here we look at how the linear equation and its ordered pairs create a line on a coordinate plane or graph.
a. The Graph
The graph, or coordinate plane, has a horizontal x-axis and vertical y-axis, both of which are numbered.
Coordinates are a pair of numbers of the form (x, y) that can be graphed, or plotted, onto the graph at those x and y values.
Plot the point (1, 2)
The x-coordinate (horizontal axis) is 1 and the y-coordinate (vertical axis) is 2.
Start at the center (0, 0), which is also called the origin. First follow the x-axis to the right (positive), one unit. Then move up 2 units (positive) and you arrive at (1, 2) on the graph. Mark the
point with a dot.
Note: If you are asked to plot a point on the test, you will be provided with a coordinate grid. You simply bubble in the correct spot on the grid.
b. Line plotting - Linear Equation revisited
Recall that a linear equation represents a line on a graph.
The solutions for the equation create a line when plotted on a graph.
y = 0.5x + 1
We can plug in 2 for x to get a value for y.
y = 0.5 (2) + 1
= 1 + 1
= 2
so one point on the line for this linear equation is (2,2).
You can plug in other values to create a set of ordered pairs.
x y
3 2.5
1 1.5
-1 0.5
-2 0
-3 -0.5
This gives the coordinates for the line. You can then plot the points given by the coordinates and draw a line that passes through them.
Note: On the test you will not be asked to plot an entire line as an answer. However you may be asked about parts of this process or information that is revealed in a line plot.
Now that you understand what a line plot is, there is a simpler approach of getting this information out of a linear equation.
Recall that slope-intercept form gets its name because there is a slope, m, and y-intercept, b.
y = mx + b
The y-intercept is where the line crosses the y axis. This is where x = 0. So the simplest point to plot is (0, b)
The slope m is a number that indicates the steepness of the line. The bigger the number, the steeper the line is. When it is positive, the line rises to the right (the positive direction on the
x-axis). When it is negative, the line rises toward the left.
In the previous example,
y = 0.5x + 1
The slope is 0.5 and the y-intercept is 1.
We can use this information to plot a line.
1. Start at (0,b). In this case the y-intercept b = 1 so start at (0,1).
2. Use the slope m to extend a line from the point.
This is done by treating the slope as a ratio.
slope = rise/run
- “rise” being the number of units up and “run” being the number of units over
(to the left or right, depending on the sign)
In this case m = 0.5 or ½
rise = 1
run = 2
Up 1, over (to the right) 2.
If you try this, you will get the same line shown in the previous graph.
Continue to extend the line as you see fit, following the method using the ratio of the slope.
If you have a slope of m = −3, the ratio would be −3/1: up 3 and over (to the left) 1.
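Generating the table of points from slope-intercept form takes only a line of Python (the helper name is hypothetical):

```python
def line_points(m, b, xs):
    """Return the points (x, y) on the line y = m*x + b for the given xs."""
    return [(x, m * x + b) for x in xs]

# Reproduces the table for y = 0.5x + 1 from earlier in the lesson.
pts = line_points(0.5, 1, [3, 1, -1, -2, -3])
```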
c. Distance between points
The distance formula is included on the list of formulas under "Coordinate Geometry."
It is used to find the number of units between two points, given their coordinates.
This comes from the Pythagorean relationship. Any line segment on a graph can be considered the hypotenuse of a right triangle. We find the lengths of the legs by subtracting the coordinates along
each dimension:
In this case the distance between the two points is:
distance = √((3 − 1)² + (3 − 1)²)
= √(2² + 2²)
= √(4 + 4)
= √8 ≈ 2.83 units
d. Midpoint between points
The midpoint is a point halfway between two points.
When you have two points (x1, y1) and (x2, y2), the midpoint is:
midpoint = ((x1 + x2)/2, (y1 + y2)/2)
The midpoint formula simply takes an “average,” or middle value, of the x and y dimensions. We will further discuss the concept of an “average” in the following and final math section on simple
statistics concepts.
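Both formulas from this section can be sketched directly in Python (function names are hypothetical, written for this lesson):

```python
import math

def distance(p1, p2):
    """Distance between two points, from the Pythagorean relationship."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def midpoint(p1, p2):
    """Midpoint: the average of the x-coordinates and of the y-coordinates."""
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 + x2) / 2, (y1 + y2) / 2)

d = distance((1, 1), (3, 3))    # sqrt(8), about 2.83
mid = midpoint((1, 1), (3, 3))  # (2.0, 2.0)
```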
Click on the link below to move on to lesson 4.
Back: Math Lesson 3A | Next: Math Lesson 4
Potrero Math Tutor
Find a Potrero Math Tutor
...I look forward to working with you! I have formally taught 2 years of 10th grade math. I lived in France for 5 years, minored in French in
college and lived in West Africa for 2 years, teaching math in French at a local high school.
14 Subjects: including trigonometry, probability, algebra 1, algebra 2
...Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps
from past courses tend to reappear as roadblocks down the line. By identifying and correcting these problems, I help students become effective independent learners for both current and future
14 Subjects: including geometry, linear algebra, probability, algebra 1
...I completed my first undergraduate studies majoring in physics in 1972 from the College of Science, the University of Mosul in Iraq. I became a professional in teaching meteorology and
climatology at Iraqi Air Force and Defense College and the departments affiliated with the college for the peri...
3 Subjects: including algebra 1, Arabic, drawing
...I also taught all subjects at a primary school in Papua New Guinea for 3 months after my freshman year at Stanford, and I am currently a substitute at a Child Development Center. I have been
playing and writing music since I was ten years old. I am currently a professional musician in an Indie/Folk/Island/Jazz band called Ed Ghost Tucker, and I am the predominant songwriter for the
38 Subjects: including algebra 1, biology, vocabulary, grammar
...I have the patience and knowledge of the language to work with students of all levels on their pronunciation, listening and comprehension of the language and well as vocabulary, grammar and
even common idioms. I welcome the opportunity to work with you. Te prometo que vas a dominar el Español!!...
21 Subjects: including algebra 1, Microsoft PowerPoint, track & field, football | {"url":"http://www.purplemath.com/potrero_math_tutors.php","timestamp":"2014-04-18T03:54:12Z","content_type":null,"content_length":"23782","record_id":"<urn:uuid:006d0a0d-e774-47d7-b8d2-30ae31f53178>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about brauer group on A Mind for Madness
I’m giving up on the p-divisible group posts for awhile. It would be too technical and tedious to write anything interesting about enlarging the base. It is pretty fascinating stuff, but not blog material at the moment.
I’ve been playing around with counting fibration structures on K3 surfaces, and I just noticed something I probably should have been aware of for a long time. This is totally well-known, but I’ll
give a slightly anachronistic presentation so that we can use results from 2013 to prove the Birch and Swinnerton-Dyer conjecture!! … Well, only in a case that has been known since 1973 when it was
published by Artin and Swinnerton-Dyer.
Let’s recall the Tate conjecture for surfaces. Let ${k}$ be a finite field and ${X/k}$ a smooth, projective surface. We’ve written this down many times now, but the long exact sequence associated to the Kummer sequence
$\displaystyle 0\rightarrow \mu_{\ell}\rightarrow \mathbb{G}_m\rightarrow \mathbb{G}_m\rightarrow 0$
(for ${\ell \neq \text{char}(k)}$) gives us a cycle class map
$\displaystyle c_1: Pic(X_{\overline{k}})\otimes \mathbb{Q}_{\ell}\rightarrow H^2_{et}(X_{\overline{k}}, \mathbb{Q}_\ell(1))$
In fact, we could take Galois invariants to get our standard
$\displaystyle 0\rightarrow Pic(X)\otimes \mathbb{Q}_{\ell}\rightarrow H^2_{et}(X_{\overline{k}}, \mathbb{Q}_\ell(1))^G\rightarrow Br(X)[\ell^\infty]\rightarrow 0$
The Tate conjecture is in some sense the positive characteristic version of the Hodge conjecture. It conjectures that the first map is surjective. In other words, whenever an ${\ell}$-adic class
“looks like” it could come from an honest geometric thing, then it does. But if the Tate conjecture is true, then this implies the ${\ell}$-primary part of ${Br(X)}$ is finite. We could spend some
time worrying about independence of ${\ell}$, but it works, and hence the Tate conjecture is actually equivalent to finiteness of ${Br(X)}$.
Suppose now that ${X}$ is an elliptic K3 surface. This just means that there is a flat map ${X\rightarrow \mathbb{P}^1}$ where the fibers are elliptic curves (there are some degenerate fibers, but
after some heavy machinery we could always put this into some nice form, we’re sketching an argument here so we won’t worry about the technical details of what we want “fibration” to mean). The
generic fiber ${X_\eta}$ is a genus ${1}$ curve that does not necessarily have a rational point and hence is not necessarily an elliptic curve.
But we can just use a relative version of the Jacobian construction to produce a new fibration ${J\rightarrow \mathbb{P}^1}$ where ${J}$ is a K3 surface fiberwise isomorphic to ${X}$, but now ${J_\eta=Jac(X_\eta)}$ and hence is an elliptic curve. Suppose we want to classify elliptic fibrations that have ${J}$ as the relative Jacobian. We have two natural ideas to do this.
The first is that étale locally such a fibration is trivial, so you could consider all glueing data to piece such a thing together. The obstruction will be some Čech class that actually lives in ${H^2(X, \mathbb{G}_m)=Br(X)}$. In fancy language, you make these things as ${\mathbb{G}_m}$-gerbes which are just twisted relative moduli of sheaves. The class in ${Br(X)}$ is giving you the obstruction to the existence of a universal sheaf.
A more number theoretic way to think about this is that rather than think about surfaces over ${k}$, we work with the generic fiber ${X_\eta/k(t)}$. It is well-known that the Weil–Châtelet group ${H^1(Gal(k(t)^{sep}/k(t)), J_\eta)}$ gives you the possible genus ${1}$ curves that could occur as generic fibers of such fibrations. This group is way too big though, because we only want ones that are locally trivial everywhere (otherwise it won’t be a fibration).
So it shouldn’t be surprising that the classification of such things is given by the Tate-Shafarevich group:
Ш $\displaystyle (J_\eta /k(t))=ker ( H^1(G, J_\eta)\rightarrow \prod H^1(G_v, (J_\eta)_v))$
Very roughly, I’ve now given a heuristic argument (namely that they both classify the same set of things) that ${Br(X)\simeq}$ Ш ${(J_\eta)}$, and it turns out that Grothendieck proved the natural map ${Br(X)\rightarrow}$ Ш${(J_\eta)}$ that comes from the Leray spectral sequence is an isomorphism (this rigorous argument might actually have been easier than the heuristic one because we’ve computed everything involved in previous posts, but it doesn’t give you any idea why one might think they are the same).
Theorem: If ${E/\mathbb{F}_q(t)}$ is an elliptic curve of height ${2}$ (occurring as the generic fiber of an elliptic K3 surface), then ${E}$ satisfies the Birch and Swinnerton-Dyer conjecture.
Idea: Using the machinery alluded to before, we spread out ${E}$ to an elliptic K3 surface ${X\rightarrow \mathbb{P}^1}$ over a finite field. As of this year, it seems the Tate conjecture is true for K3 surfaces (the proofs are all there, I’m not sure if they have been double checked and published yet). Thus ${Br(X)}$ is finite, so Ш${(E)}$ is finite. But it is well-known that Ш${(E)}$ being finite is equivalent to the Birch and Swinnerton-Dyer conjecture.
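Putting the steps of this sketch in one line (just a restatement of the argument above):

```latex
\text{Tate conjecture for } X
\;\Longleftrightarrow\;
\# Br(X) < \infty
\;\Longleftrightarrow\;
\# \text{Ш}(E/\mathbb{F}_q(t)) < \infty
\;\Longleftrightarrow\;
\text{BSD holds for } E,
```

where the middle equivalence is Grothendieck’s isomorphism ${Br(X)\simeq}$ Ш${(E)}$ coming from the Leray spectral sequence.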
by hilbertthm90
What’s up with the fppf site?
I’ve been thinking a lot about something called Serre-Tate theory lately. I want to do some posts on the “classical” case of elliptic curves. Before starting, we’ll go through some preliminaries on why one would ever want to use the fppf site and how to compute with it. The content of today’s post seems to be extremely well known, but it isn’t really spelled out anywhere.
Let’s say you’ve been reading stuff having to do with arithmetic geometry for awhile. Then without a doubt you’ve encountered étale cohomology. In fact, I’ve used it tons on this blog already. Here’s
a standard way in which it comes up. Suppose you have some (smooth, projective) variety ${X/k}$. You want to understand the ${\ell^n}$-torsion in the Picard group or the (cohomological) Brauer group
where ${\ell}$ is a prime not equal to the characteristic of the field.
What you do is take the Kummer sequence:
$\displaystyle 0\rightarrow \mu_{\ell^n}\rightarrow \mathbb{G}_m\stackrel{\ell^n}{\rightarrow} \mathbb{G}_m\rightarrow 0.$
This is an exact sequence of sheaves in the étale topology, so it gives you a long exact sequence of cohomology. But ${H^1_{et}(X, \mathbb{G}_m)=Pic(X)}$ and ${H^2_{et}(X, \mathbb{G}_m)=Br(X)}$, so just writing down the long exact sequence you get that the image of ${H^1_{et}(X, \mu_{\ell^n})\rightarrow Pic(X)}$ is exactly ${Pic(X)[\ell^n]}$, and similarly for the Brauer group. In fact, people usually work with the truncated short exact sequence:
$\displaystyle 0\rightarrow Pic(X)/\ell^n Pic(X) \rightarrow H^2_{et}(X, \mu_{\ell^n})\rightarrow Br(X)[\ell^n]\rightarrow 0$
Fiddling around with other related things can help you figure out what is happening with the ${\ell^n}$-torsion. That isn’t the point of this post though. The point is what do you do when you want to
figure out the ${p^n}$-torsion where ${p}$ is the characteristic of the ground field? It looks like you’re in big trouble, because the above Kummer sequence is not exact in the étale topology.
It turns out that you can switch to a finer topology called the fppf topology (or site). This is similar to the étale site, except instead of making your covering families using étale maps you make them with faithfully flat, locally of finite presentation maps (fppf abbreviates the French “fidèlement plat de présentation finie”). When using this finer topology the sequence of sheaves actually becomes exact again.
A proof is here, and a quick read through will show you exactly why you can’t use the étale site. You need to extract ${p}$-th roots for the ${p}$-th power map to be surjective which will give you
some sort of infinitesimal cover (for example if ${X=Spec(k)}$) that looks like ${Spec(k[t]/(t-a)^p)\rightarrow Spec(k)}$.
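The ${p}$-th power map really does become surjective only after allowing such infinitesimal covers: over ${\mathbb{F}_p}$ itself Frobenius fixes every element, so ${t^p - a = (t-a)^p}$ in ${\mathbb{F}_p[t]}$, and the cover extracting a ${p}$-th root of ${a}$ is exactly the thickening above. A quick sanity check of that polynomial identity (my own illustration, not from the post):

```python
from math import comb

p, a = 5, 2  # an odd prime and a unit in F_p

# Coefficient of t^k in (t - a)^p, reduced mod p.
coeffs = [comb(p, k) * pow(-a, p - k) % p for k in range(p + 1)]

# C(p, k) = 0 mod p for 0 < k < p ("freshman's dream") and
# (-a)^p = -a mod p (Fermat's little theorem), so (t - a)^p = t^p - a.
target = [(-a) % p] + [0] * (p - 1) + [1]
assert coeffs == target
```

This is the algebraic reason ${\mu_p}$ is invisible to the étale site in characteristic ${p}$: the would-be Kummer cover is purely infinitesimal.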
Thus you can try to figure out the ${p^n}$-torsion again now using “flat cohomology” which will be denoted ${H^i_{fl}(X, -)}$. We get the same long exact sequences to try to fiddle with:
$\displaystyle 0\rightarrow Pic(X)/p^n Pic(X) \rightarrow H^2_{fl}(X, \mu_{p^n})\rightarrow Br(X)[p^n]\rightarrow 0$
But what the heck is ${H^2_{fl}(X, \mu_{p^n})}$? I mean, how do you compute this? We have tons of books and things to compute with the étale topology. But this fppf thing is weird. So secretly we
really want to translate this flat cohomology back to some étale cohomology. I saw the following claimed in several places without really explaining it, so we’ll prove it here:
$\displaystyle H^2_{fl}(X, \mu_p)=H^1_{et}(X, \mathbb{G}_m/\mathbb{G}_m^p).$
Actually, let’s just prove something much more general. We actually get that
$\displaystyle H^i_{fl}(X, \mu_p)=H^{i-1}_{et}(X, \mathbb{G}_m/\mathbb{G}_m^p).$
The proof is really just a silly “trick” once you see it. Since the Kummer sequence is exact on the fppf site, by definition this just means that the complex ${\mu_p}$ thought of as concentrated in
degree ${0}$ is quasi-isomorphic to the complex ${\mathbb{G}_m\stackrel{p}{\rightarrow} \mathbb{G}_m}$. It looks like this is a useless and more complicated thing to say, but this means that the
hypercohomology (still fppf) is isomorphic:
$\displaystyle \mathbf{H}^i_{fl}(X, \mu_p)=\mathbf{H}^i_{fl}(X, \mathbb{G}_m\stackrel{p}{\rightarrow} \mathbb{G}_m).$
Now here’s the trick. The left side is the group we want to compute. The right hand side only involves smooth group schemes, so a theorem of Grothendieck tells us that we can compute this hypercohomology on the fppf site or the étale site; it doesn’t matter, we’ll get the same answer. Thus we can switch to the étale site. But of course, just by definition we now extend the ${p}$-th power map (injective on the étale site) to an exact sequence
$\displaystyle 0\rightarrow \mathbb{G}_m \rightarrow \mathbb{G}_m\rightarrow \mathbb{G}_m/\mathbb{G}_m^p\rightarrow 0.$
Thus we get another quasi-isomorphism of complexes, this time to ${\mathbb{G}_m/\mathbb{G}_m^p[-1]}$. This is a complex concentrated in a single degree, so the hypercohomology is just the étale cohomology. The shift by ${-1}$ decreases the cohomological degree by one and we get the desired isomorphism ${H^i_{fl}(X, \mu_p)=H^{i-1}_{et}(X, \mathbb{G}_m/\mathbb{G}_m^p)}$. In particular, we were curious about ${H^2_{fl}(X, \mu_p)}$, so we want to figure out ${H^1_{et}(X, \mathbb{G}_m/\mathbb{G}_m^p)}$.
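In one line, the chain of identifications we just used (restating the steps above):

```latex
H^i_{fl}(X,\mu_p)
\simeq \mathbf{H}^i_{fl}\bigl(X, [\mathbb{G}_m \xrightarrow{p} \mathbb{G}_m]\bigr)
\simeq \mathbf{H}^i_{et}\bigl(X, [\mathbb{G}_m \xrightarrow{p} \mathbb{G}_m]\bigr)
\simeq \mathbf{H}^i_{et}\bigl(X, (\mathbb{G}_m/\mathbb{G}_m^p)[-1]\bigr)
\simeq H^{i-1}_{et}(X, \mathbb{G}_m/\mathbb{G}_m^p),
```

where the middle step is Grothendieck’s theorem that cohomology of smooth group schemes agrees on the fppf and étale sites.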
Alright. You’re now probably wondering what in the world do I do with the étale cohomology of ${\mathbb{G}_m/\mathbb{G}_m^p}$? It might be on the étale site, but it is a weird sheaf. Ah. But here’s something great, and not used all that much to my knowledge. There is something called the multiplicative de Rham complex. On the étale site we actually have an exact sequence of sheaves via the “dlog” map:
$\displaystyle 0\rightarrow \mathbb{G}_m/\mathbb{G}_m^p\stackrel{d\log}{\rightarrow} Z^1\stackrel{C-i}{\rightarrow} \Omega^1\rightarrow 0.$
This now gives us something nice, because if we understand the Cartier operator (which is Serre dual to the Frobenius!) and know things like how many global ${1}$-forms there are on the variety (maybe none?), we have a hope of computing our original flat cohomology!
More Complicated Brauer Computations
Let’s wrap up some of our Brauer group loose ends today. We can push through the calculation of the Brauer groups of curves over some other fields using the same methods as the last post, but just a
little more effort.
First, note that with absolutely no extra effort we can run the same argument as yesterday in the following situation. Suppose ${X}$ is a regular, integral, quasi-compact scheme of dimension ${1}$
with the property that all closed points ${v\in X}$ have perfect residue fields ${k(v)}$. Let ${g: \text{Spec} K \hookrightarrow X}$ be the inclusion of the generic point.
Running the Leray spectral sequence a little further than last time still gives us an inclusion, but we will usually want more information because ${Br(K)}$ may not be ${0}$. The low degree terms
(plus the argument from last time) gives us a sequence:
$\displaystyle 0\rightarrow Br'(X)\rightarrow Br(K)\rightarrow \bigoplus_v Hom_{cont}(G_{k(v)}, \mathbb{Q}/\mathbb{Z})\rightarrow H^3(X, \mathbb{G}_m)\rightarrow \cdots$
This allows us to recover a result we already proved. In the special case that ${X=\text{Spec} A}$ where ${A}$ is a Henselian DVR with perfect residue field ${k}$, then the uniformizing parameter
defines a splitting to get a split exact sequence
$\displaystyle 0\rightarrow Br(A)\rightarrow Br(K)\rightarrow Hom_{cont}(G_k, \mathbb{Q}/\mathbb{Z})\rightarrow 0$
Thus when ${A}$ has finite residue field (e.g. ${A=\mathbb{Z}_p}$) we get an isomorphism ${Br(K)\simeq \mathbb{Q}/\mathbb{Z}}$, since ${Br(A)\simeq Br(k)=0}$ (because ${k}$ is ${C_1}$) and ${G_k\simeq \widehat{\mathbb{Z}}}$. In fact, going back to Brauer groups of fields, we had a lot of trouble trying to figure anything out about number fields. Now we may have a tool (although without class field theory it isn’t very useful, so we’ll skip this for now).
The last computation we’ll do today is to consider a smooth (projective) curve ${C/k}$ over a finite field. Fix a separable closure ${k^s}$ and let ${K}$ be the function field. First, we could attempt to use Leray on the generic point, since we can use that ${H^3(K, \mathbb{G}_m)=0}$ to get some more information. Unfortunately, without something else this isn’t enough to recover ${Br(C)}$.
Instead, consider the base change map ${f: C^s=C\otimes_k k^s\rightarrow C}$. We use the Hochschild-Serre spectral sequence ${H^p(G_k, H^q(C^s, \mathbb{G}_m))\Rightarrow H^{p+q}(C, \mathbb{G}_m)}$.
The low degree terms give us
$\displaystyle 0\rightarrow Br(k)\rightarrow \ker (Br(C)\rightarrow Br(C^s))\rightarrow H^1(G_k, Pic(C^s))\rightarrow \cdots$
First, ${\ker( Br(C)\rightarrow Br(C^s))=Br(C)}$ by the last post. Next, ${H^1(G_k, Pic^0(C^s))=0}$ by Lang’s theorem as stated in Mumford’s Abelian Varieties, and since ${H^1(G_k, \mathbb{Z})=0}$ the degree sequence gives ${H^1(G_k, Pic(C^s))=0}$ as well.
That tells us that ${Br(C)\simeq Br(k)=0}$ since ${k}$ is ${C_1}$. So even over finite fields (finite was really used and not just ${C_1}$ for Lang’s theorem) we get that smooth, projective curves
have trivial Brauer group.
Brauer Groups of Curves
Let ${C/k}$ be a smooth projective curve over an algebraically closed field. The main goal of today is to show that ${Br(C)=0}$. Both smoothness and working over an algebraically closed field are crucial for this computation. The computation will run very similarly to the last post with basically one extra step.
We haven’t actually talked about the Brauer group for varieties, but there are again two definitions. One has to do with Azumaya algebras over ${\mathcal{O}_C}$ modulo Morita equivalence. The other
is the cohomological Brauer group, ${Br'(C):=H^2(C, \mathbb{G}_m)}$. As already stated, it is a big open problem to determine when these are the same. We’ll continue to only consider situations where
they are known to be the same and hence won’t cause any problems (or even require us to define rigorously the Azumaya algebra version).
First, note that if we look at the Leray spectral sequence with the inclusion of the generic point ${g:Spec(K)\hookrightarrow C}$ we get that ${R^1g_*\mathbb{G}_m=0}$ by Hilbert 90 again which tells
us that ${0\rightarrow H^2(C, g_*\mathbb{G}_m)\hookrightarrow Br(K)}$. Now ${K}$ has transcendence degree ${1}$ over an algebraically closed field, so by Tsen’s theorem this is ${C_1}$. Thus the last
post tells us that ${H^2(C, g_*\mathbb{G}_m)=0}$.
The new step is that we need to relate ${H^2(C, g_*\mathbb{G}_m)}$ to ${Br(C)}$. On the étale site of ${C}$ we have an exact sequence of sheaves
$\displaystyle 0\rightarrow \mathbb{G}_m\rightarrow g_*\mathbb{G}_m\rightarrow Div_C\rightarrow 0$
where ${\displaystyle Div_C=\bigoplus_{v \ \text{closed}}(i_v)_*\mathbb{Z}}$.
Taking the long exact sequence on cohomology we get
$\displaystyle \cdots \rightarrow H^1(C, Div_C)\rightarrow Br(C)\rightarrow H^2(C, g_*\mathbb{G}_m)\rightarrow \cdots .$
Thus it will complete the proof to show that ${H^1(C, Div_C)=0}$, since then ${Br(C)}$ will inject into ${0}$. Writing ${\displaystyle Div_C=\bigoplus_{v \ \text{closed}}(i_v)_*\mathbb{Z}}$ and using
that cohomology commutes with direct sums we need only show that for some fixed closed point ${(i_v): Spec(k(v))\hookrightarrow C}$ that ${H^1(C, (i_v)_*\mathbb{Z})=0}$.
We use Leray again, but this time on ${i_v}$. For notational convenience, we’ll abuse notation and call both the map and the point ${v\in C}$. The low degree terms give us ${H^1(C, v_*\mathbb{Z})\hookrightarrow H^1(v, \mathbb{Z})}$. Using the Galois cohomology interpretation of étale cohomology of a point, ${H^1(v,\mathbb{Z})\simeq Hom_{cont}(G_{k(v)}, \mathbb{Z})}$ (the homomorphisms are not twisted since the Galois action is trivial). Since ${G_{k(v)}}$ is profinite, the continuous image is compact and hence a finite subgroup of ${\mathbb{Z}}$, which must be trivial. Thus ${H^1(C, v_*\mathbb{Z})=0}$, which implies ${H^1(C, Div_C)=0}$, which gives the result that ${Br(C)=0}$.
So again we see that even for a full curve being over an algebraically closed field is just too strong a condition to give anything interesting. This suggests that the Brauer group really is
measuring some arithmetic properties of the curve. For example, we could ask whether or not good/bad reduction of the curve is related to the Brauer group, but this would require us to move into
Brauer groups of surfaces (since the model will be a relative curve over a one-dimensional base).
Already for local fields or ${C_1}$ fields the question of determining ${Br(C)}$ is really interesting. The above argument merely tells us that ${Br(C)\hookrightarrow Br(K)}$ where ${K}$ is the
function field, but this is true of all smooth, proper varieties and often doesn’t help much if the group is non-zero.
Brauer Groups of Fields
Today we’ll talk about the basic theory of Brauer groups for certain types of fields. If the last post was too poorly written to comprehend, the only thing that will be used from it is that for
fields we can refer to “the” Brauer group without any ambiguity because the cohomological definition and the Azumaya (central, simple) algebra definition are canonically isomorphic in this case.
Let’s just work our way from algebraically closed to furthest away from being algebraically closed. Thus, suppose ${K}$ is an algebraically closed field. The two ways to think about ${Br(K)}$ both
tell us quickly that this is ${0}$. Cohomologically this is because ${G_K=1}$, so there are no non-trivial Galois cohomology classes. The slightly more interesting approach is that any central,
simple algebra over ${K}$ is already split, i.e. a matrix algebra, so it is the zero class modulo the relation we defined last time.
I’m pretty sure I’ve blogged about this before, but there is a nice set of definitions that measures how “far away” from being algebraically closed you are. A field is called ${C_r}$ if for any $
{d,n}$ such that ${n>d^r}$ any homogeneous polynomial (with ${K}$ coefficients) of degree ${d}$ in ${n}$ variables has a non-trivial solution.
Thus the condition ${C_0}$ just says that all polynomials have non-trivial solutions, i.e. ${K}$ is algebraically closed. The condition ${C_1}$ is usually called being quasi-algebraically closed.
Examples include, but are not limited to finite fields and function fields of curves over algebraically closed fields. A more complicated example that may come up later is that the maximal,
unramified extension of a complete, discretely valued field with perfect residue field is ${C_1}$.
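Finite fields are ${C_1}$ by the Chevalley–Warning theorem, and in small cases the defining condition can be checked by brute force. The sketch below (my own illustration, not from the post) verifies the case ${d=2<n=3}$ over ${\mathbb{F}_5}$: every diagonal quadratic form in three variables with unit coefficients has a non-trivial zero.

```python
# C_1 condition for k = F_p with d = 2 < n = 3: every form
# a*x^2 + b*y^2 + c*z^2 (a, b, c units) has a zero (x, y, z) != (0, 0, 0).
p = 5
points = [(x, y, z) for x in range(p) for y in range(p) for z in range(p)
          if (x, y, z) != (0, 0, 0)]

for a in range(1, p):
    for b in range(1, p):
        for c in range(1, p):
            assert any((a*x*x + b*y*y + c*z*z) % p == 0 for x, y, z in points), \
                f"no nontrivial zero for {a}x^2 + {b}y^2 + {c}z^2 over F_{p}"
```

Restricting to diagonal forms loses nothing here, since every quadratic form over a finite field of odd characteristic can be diagonalized.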
A beautiful result is that if ${K}$ is ${C_1}$, then we still get that ${Br(K)=0}$. One could consider this result “classical” if done properly. First, by Artin-Wedderburn any finite dimensional, central, simple algebra has the form ${M_n(D)}$ where ${D}$ is a finite dimensional division algebra with center ${K}$. If you play around with norms (I swear I did this in a previous post somewhere that I can’t find!) you produce a homogeneous polynomial of the right degree and use the ${C_1}$ condition to conclude that ${D=K}$. Thus any central, simple algebra is already split, giving ${Br(K)=0}$.
We might give up and think the Brauer group of any field is ${0}$, but this is not the case (exercise to test understanding: think of ${\mathbb{R}}$). Let’s move on to the easiest example we can
think of for a non-${C_1}$ field: ${\mathbb{Q}_p}$ for some prime ${p}$. The computation we do will be totally general and will actually work to show what ${Br(K)}$ is for any ${K}$ that is complete
with respect to some non-archimedean discrete valuation, and hence for ${K}$ a local field.
The trick is to use the valuation ring, ${R=\mathbb{Z}_p}$ to interpolate between the Brauer group of ${K}$ and the Brauer group of ${R/m=\mathbb{F}_p}$, a ${C_1}$ field! Since ${K}$ is the fraction
field of ${R}$, the first thing we should check is the Leray spectral sequence at the generic point ${i:Spec(K)\hookrightarrow Spec(R)}$. This is given by ${E_2^{p,q}=H^p(Spec(R), R^qi_*\mathbb{G}_m)
\Rightarrow H^{p+q}(G_K, (K^s)^\times)}$.
By Hilbert’s Theorem 90, we have ${R^1i_*\mathbb{G}_m=0}$. Recall that last time we said there is a canonical isomorphism ${Br(R)\rightarrow Br(\mathbb{F}_p)}$ given by specialization. This gives us
a short exact sequence from the long exact sequence of low degree terms:
$\displaystyle 0\rightarrow Br(\mathbb{F}_p)\rightarrow Br(\mathbb{Q}_p)\rightarrow Hom(G_{\mathbb{F}_p}, \mathbb{Q}/\mathbb{Z})\rightarrow 0$
Now we use that ${Br(\mathbb{F}_p)=0}$ and ${G_{\mathbb{F}_p}\simeq \widehat{\mathbb{Z}}}$ to get that ${Br(\mathbb{Q}_p)\simeq \mathbb{Q}/\mathbb{Z}}$. As already mentioned, nothing in the above
argument was specific to ${\mathbb{Q}_p}$. The same argument shows that any (strict) non-archimedean local field also has Brauer group ${\mathbb{Q}/\mathbb{Z}}$.
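One can actually see part of this ${\mathbb{Q}/\mathbb{Z}}$ by hand: the ${2}$-torsion of ${Br(\mathbb{Q}_p)}$ is generated by quaternion algebras ${(a,b)}$, whose class is computed by the Hilbert symbol ${(a,b)_p\in\{\pm 1\}}$. The sketch below (my own illustration, not from the post) implements the standard closed formula for odd ${p}$, as in Serre’s A Course in Arithmetic: write ${a=p^\alpha u}$, ${b=p^\beta v}$ with ${u,v}$ units; then ${(a,b)_p=(-1)^{\alpha\beta(p-1)/2}\left(\frac{u}{p}\right)^\beta\left(\frac{v}{p}\right)^\alpha}$.

```python
def legendre(u, p):
    """Legendre symbol (u/p) for an odd prime p and u prime to p."""
    s = pow(u % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s  # s is 1 or p-1 by Euler's criterion

def hilbert_symbol(a, b, p):
    """Hilbert symbol (a,b)_p for an odd prime p: +1 iff the quaternion
    algebra (a,b) splits over Q_p, i.e. is trivial in Br(Q_p)[2]."""
    alpha = 0
    while a % p == 0:
        a //= p
        alpha += 1
    beta = 0
    while b % p == 0:
        b //= p
        beta += 1
    sign = (-1) ** (alpha * beta * (p - 1) // 2)
    return sign * legendre(a, p) ** beta * legendre(b, p) ** alpha

# (p,p)_p = (-1/p): nontrivial for p = 3, trivial for p = 5.
assert hilbert_symbol(3, 3, 3) == -1  # a ramified quaternion algebra
assert hilbert_symbol(5, 5, 5) == 1   # a split one
```

The symbol is bimultiplicative, matching the fact that these classes form a group under tensor product.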
To get away from local fields, I’ll just end by pointing out that if you start with some global field ${K}$ you can try to use a local-to-global idea to get information about the global field. From
class field theory we get an exact sequence
$\displaystyle 0\rightarrow Br(K)\rightarrow \bigoplus_v Br(K_v)\rightarrow \mathbb{Q}/\mathbb{Z}\rightarrow 0,$
which eventually we may talk about. We know what all the maps are already from this and the previous post. The first is specialization (or corestriction from a few posts ago, or most usually this is
called taking invariants). Then the second map is just summing since each term of the direct sum is a ${\mathbb{Q}/\mathbb{Z}}$.
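A standard concrete instance of this sequence (my own example, not from the post): the Hamilton quaternions ${(-1,-1)}$ over ${\mathbb{Q}}$ are ramified exactly at ${v=2}$ and ${v=\infty}$, and the local invariants sum to zero in ${\mathbb{Q}/\mathbb{Z}}$ just as exactness demands:

```latex
\operatorname{inv}_v\bigl((-1,-1)_{\mathbb{Q}}\bigr) =
\begin{cases}
\tfrac{1}{2} & v = 2,\\
\tfrac{1}{2} & v = \infty,\\
0 & \text{otherwise},
\end{cases}
\qquad
\sum_v \operatorname{inv}_v = \tfrac{1}{2} + \tfrac{1}{2} = 0 \in \mathbb{Q}/\mathbb{Z}.
```

Exactness on the left also recovers the classical Albert–Brauer–Hasse–Noether theorem: a central simple algebra over a global field that splits at every place is already split.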
Next time we’ll move on to Brauer groups of curves even though so much more can still be said about fields.
Intro to Brauer Groups
I want to do a series on the basics of Brauer groups since they came up in the past few posts. Since I haven’t really talked about Galois cohomology anywhere, we’ll take a slightly nonstandard
approach and view everything “geometrically” in terms of étale cohomology. Everything should be equivalent to the Galois cohomology approach, but this way will allow us to use the theory that is
already developed elsewhere on the blog.
I apologize in advance for the sporadic nature of this post. I just need to get a few random things out there before really starting the series. There will be one or two posts on the Brauer group of
a “point” which will just mean the usual Brauer group of a field (to be defined shortly). Then we’ll move on to the Brauer group of a curve, and maybe, if I still feel like continuing the series, of a surface.
Let ${K}$ be a field and ${K^s}$ a fixed separable closure. We will define ${Br(K)=H^2_{et}(Spec(K), \mathbb{G}_m)=H^2(Gal(K^s/K), (K^s)^\times)}$. This isn’t the usual definition and is often called the cohomological Brauer group. The usual definition is as follows. Let ${R}$ be a commutative, local, (unital) ring. An algebra ${A}$ over ${R}$ is called an Azumaya algebra if it is a free ${R}$-module of finite rank and the map ${A\otimes_R A^{op}\rightarrow End_{R-mod}(A)}$ sending ${a\otimes a'}$ to ${(x\mapsto axa')}$ is an isomorphism.
Define an equivalence relation on the collection of Azumaya algebras over ${R}$ by saying ${A}$ and ${A'}$ are similar if ${A\otimes_R M_n(R)\simeq A'\otimes_R M_{n'}(R)}$ for some ${n}$ and ${n'}$.
The set of Azumaya algebras over ${R}$ modulo similarity form a group with multiplication given by tensor product. This is called the Brauer group of ${R}$ denoted ${Br(R)}$. Often times, when an
author is being careful to distinguish, the cohomological Brauer group will be denoted with a prime: ${Br'(R)}$. It turns out that there is always an injection ${Br(R)\hookrightarrow Br'(R)}$.
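As a sanity check of the Azumaya condition, the sketch below (my own illustration, not from the post) verifies it for the Hamilton quaternions ${(-1,-1)}$ over ${\mathbb{Q}}$: the ${16}$ endomorphisms ${x\mapsto e_a x e_b}$ coming from basis elements are linearly independent, so ${A\otimes A^{op}\rightarrow End(A)}$ is an isomorphism of ${16}$-dimensional spaces.

```python
from fractions import Fraction

# Structure constants for the quaternions (-1,-1): basis 1, i, j, k.
# MUL[(a, b)] = (sign, c) means e_a * e_b = sign * e_c.
MUL = {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
    (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
    (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0),
}

def mult(x, y):
    """Multiply two quaternions given as length-4 coefficient lists."""
    out = [0, 0, 0, 0]
    for a in range(4):
        for b in range(4):
            sign, c = MUL[(a, b)]
            out[c] += sign * x[a] * y[b]
    return out

def rank(M):
    """Rank of an integer matrix over Q, by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = M[r][c]
        M[r] = [v / inv for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [M[i][j] - f * M[r][j] for j in range(len(M[0]))]
        r += 1
    return r

basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# Row (a, b): the endomorphism x -> e_a * x * e_b, flattened to 16 numbers.
rows = [sum((mult(mult(basis[a], basis[m]), basis[b]) for m in range(4)), [])
        for a in range(4) for b in range(4)]
assert rank(rows) == 16  # A (x) A^op -> End(A) is an isomorphism
```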
One way to see this is that on the étale site of ${Spec(R)}$, the sequence of sheaves ${1\rightarrow \mathbb{G}_m\rightarrow GL_n\rightarrow PGL_n\rightarrow 1}$ is exact. It is a little tedious to
check, but using a Čech cocycle argument (caution: a priori the cohomology “groups” are merely pointed sets) one can check that the injection from the associated long exact sequence ${H^1(Spec(R),
PGL_n)/H^1(Spec(R), GL_n)\hookrightarrow Br'(R)}$ is the desired injection.
If we make the extra assumption that ${R}$ has dimension ${0}$ or ${1}$, then the natural map ${Br(R)\rightarrow Br'(R)}$ is an isomorphism. I’ll probably regret this later, but I’ll only prove the
case of dimension ${0}$, since the point is to get to facts about Brauer groups of fields. If ${R}$ has dimension ${0}$, then it is a local Artin ring and hence Henselian.
One standard lemma to prove is that for local rings a cohomological Brauer class ${\gamma\in Br'(R)}$ comes from an Azumaya algebra if and only if there is a finite étale surjective map ${Y\rightarrow Spec(R)}$ such that ${\gamma}$ pulls back to ${0}$ in ${Br'(Y)}$. The easy direction is that if it comes from an Azumaya algebra, then any maximal étale subalgebra splits it (becomes the zero class after tensoring), so that is our finite étale surjective map. The other direction is harder.
Going back to the proof, since ${R}$ is Henselian, given any class ${\gamma\in H^2(Spec(R), \mathbb{G}_m)}$ a standard Čech cocycle argument shows that there is an étale covering ${(U_i\rightarrow
Spec(R))}$ such that ${\gamma|_{U_i}=0}$. Choosing any ${U_i\rightarrow Spec(R)}$ we have a finite étale surjection that kills the class and hence it lifts by the previous lemma.
It is a major open question to find conditions under which ${Br(X)\rightarrow Br'(X)}$ is surjective, so don’t jump to the conclusion that, just because we only did the easy case, the map is always an isomorphism. Now that we have that the Brauer group is the cohomological Brauer group, we can convert the computation of ${Br(R)}$ for a Henselian local ring into a cohomological computation using the specialization map (pulling back to the closed point) ${Br(R)\rightarrow Br(k)}$ where ${k=R/m}$.
Finiteness of X(k)/B for Rational Surfaces
Recall our setup. We start with a projective surface ${X/k}$ that becomes rational after some finite extension of scalars ${k'/k}$. Let ${C_0(X)}$ be the group of ${0}$-cycles of degree ${0}$. Last
time we defined the Manin pairing ${(-,-): C_0(X)\times (Br(X)/Br(k))\rightarrow Br(k)}$ using the corestriction map ${(\sum n_ix_i, a)=\prod_i cor_{k(x_i)/k}(a(x_i))^{n_i}}$. Two rational points are
called Brauer equivalent if ${(x-y, a)=1}$ for all ${a\in Br(X)}$, and denote the set of rational points up to Brauer equivalence by ${X(k)/B}$.
Now let ${N=NS(X_{k'})}$ be the Néron-Severi group of ${X_{k'}}$. It turns out we can factor the Manin pairing as follows:
$\displaystyle \begin{matrix} C_0(X)\times (Br(X)/Br(k)) & \longrightarrow & Br(k) \\ \downarrow & & \uparrow \\ A_0(X)\times H^1(G, N) & \longrightarrow & H^1(G, N\otimes \overline{k}^\times)\times
H^1(G, N)\end{matrix}$
The goal of today is to say something about this factoring. Last time we wrote down the Hochschild-Serre spectral sequence and said the map ${Br(k)\rightarrow Br(X)}$ was just the quotient map ${E_2^{2,0}\rightarrow E_\infty^{2,0}}$ followed by the inclusion. Note that since all differentials into and out of ${E_n^{1,1}}$ are ${0}$, we get that ${E_\infty^{1,1}}$ equals ${H^1(G, N)}$ and sits inside ${Br(X)}$. Thus we have a sequence ${Br(k)\rightarrow Br(X)\rightarrow H^1(G,N)}$ whose composition is ${0}$ and hence gives a map
$\displaystyle Br(X)/Br(k)\rightarrow H^1(G, N).$
This defines for us the left vertical map, since the left factor is just projection from all ${0}$-cycles of degree ${0}$ to ${0}$-cycles modulo rational equivalence of degree ${0}$. The right
vertical map is just the one induced on group cohomology via the standard intersection pairing on the surface ${N\otimes \overline{k}^\times \times N\rightarrow \overline{k}^\times}$.
This leaves us with the bottom map. Call it ${\Phi \times id}$ where ${\Phi:A_0(X)\rightarrow H^1(G, N\otimes \overline{k}^\times)}$. It turns out that the majority of Bloch’s paper is devoted to defining this map and checking that the above diagram commutes; it involves a lot of K-theory, so we won’t get into it here.
Supposing the above, the main theorem of the paper is that ${Im \Phi}$ is finite in the case of our hypotheses. We can check the nice corollary that ${X(k)/B}$ is finite. If ${X(k)}$ is empty we’re
done, so fix some ${x_0\in X(k)}$. The proof is that we can make ${\Psi: X(k)\rightarrow H^1(G, N\otimes \overline{k}^\times)}$ by ${\Psi(x)=\Phi([x]-[x_0])}$. Since ${Im \Psi\subset Im \Phi}$, it
must have finite cardinality. We need only check that distinct Brauer classes stay distinct to get the result, but this follows from commutativity of the diagram and the fact that Brauer classes are
by definition distinguished under the Manin pairing.
It turns out that Manin had already proved that ${X(k)/B}$ is finite for cubic surfaces, so Bloch’s result extends this to any rational surface. As a consequence of the construction of ${\Phi}$, Bloch also gets the strange result that if ${X}$ is a conic bundle, i.e. ${X\rightarrow \mathbf{P}^1}$ has generic fiber a conic, ${k}$ is local, and ${X}$ has good reduction, then ${|Im\Phi|=1}$. Thus at places of good reduction ${A_0(X)}$ is trivial.
Note how useful this is. For example, take some conic bundle over ${\mathbf{Q}_p}$. Good reduction means that there exists some proper, regular model ${\frak{X}/\mathbf{Z}_p}$ whose generic fiber is ${X}$ and whose special fiber is non-singular. It is hard to tell whether or not ${X}$ has good reduction, because you might accidentally be picking the wrong model. With this condition of Bloch, one can sometimes explicitly calculate some non-trivial element of ${A_0(X)}$ (Manin actually does this using the defining equation of a class of Châtelet surfaces!) which tells you ${X}$ has bad reduction.
To phrase this a different way, to test for honest bad reduction without some criterion requires a choice of model over ${\mathbf{Z}_p}$. There could be infinitely many distinct choices here, so it could be hard to tell if you’ve exhausted all possibilities. This criterion of Bloch says that no choice needs to be made: bad reduction can be detected inherently from the variety over ${\mathbf{Q}_p}$.
Brauer Equivalence of Rational Points
Today I’m going to start a series on the arithmetic of rational surfaces. I feel like this theory is fairly unknown, but it provides such a wonderful source of examples that don’t exist in the curve
case. Everything is so explicit with equations and concrete calculations.
The theorem I want to get to is in Bloch’s paper “On the Chow Groups of Certain Rational Surfaces.” It says that for a certain class of rational surfaces, good/bad reduction can be detected on the
Chow group of ${0}$-cycles of degree ${0}$. We will also pull a lot from Manin’s book Cubic Forms and possibly from some papers of Colliot-Thélène.
Rather than prove this amazing result, I want to describe some of the constructions and ideas that go into it. Let’s get some terminology out of the way. For our purposes, a projective surface ${X/k}$ will be called rational if there exists some ${k'/k}$ for which the extension of scalars ${X_{k'}}$ is birational to ${\mathbf{P}^2_{k'}}$.
Fix an algebraic closure ${\overline{k}}$ and let ${G=Gal(\overline{k}/k)}$. Nothing in the immediate theory should require this, but since the goal is a theorem about reduction type, we assume without loss of generality that ${k}$ is either global or local (complete, arising as the fraction field of some DVR of mixed characteristic). If you want to push the theory through in positive characteristic, you will need to at least change all the ${\overline{k}}$ to ${k^{sep}}$.
Bloch’s proof involves playing around with Brauer groups and in particular realizing a certain pairing of Manin in a new way. Our goal for today will just be to work out some of the basic Brauer
group theory. Recall that ${Br(X)=H^2(X, \mathbf{G}_m)}$. Unless otherwise stated, all cohomology will be étale or Galois (the distinction should be obvious from context and so subscripts will be
omitted). This means that ${Br(k)=H^2(Spec(k), \mathbb{G}_m)=H^2(G, \overline{k}^\times)}$.
First, we’ll check that there is a map ${Br(k)\rightarrow Br(X)}$ in a sort of ridiculous way, because we’ll need to use the map involved at a later point. For étale cohomology, we have the
convergent first-quadrant Hochschild-Serre spectral sequence ${E_2^{p,q}=H^p(G, H^q(X_{\overline{k}}, \mathbf{G}_m))\Rightarrow H^{p+q}(X, \mathbf{G}_m)}$. Thus writing ${H^2(X, \mathbf{G}_m)=Br(X)\simeq E_{\infty}^{0,2}\oplus E_{\infty}^{1,1}\oplus E_{\infty}^{2,0}}$ and using that ${E_{\infty}^{2,0}}$ is a quotient of ${E^{2,0}_2=Br(k)}$, we get a map via inclusion then projection.
Given any point ${x_i: Spec(k(x_i))\rightarrow X}$, we have a finite index subgroup ${G'=Gal(\overline{k}/k(x_i))}$ of ${G}$. By functoriality of Galois cohomology, this gives us two group
homomorphisms. The restriction, ${res: Br(k)\rightarrow Br(k(x_i))}$ and corestriction (aka the transfer), ${cor_{k(x_i)/k}: Br(k(x_i))\rightarrow Br(k)}$. A well-known property of these two maps is
that ${(cor)\circ (res)=[G: G']}$ is multiplication by the index (or by Galois theory in this case, ${[k(x_i):k]}$).
The last map we need before we can piece together Manin’s pairing is that we can pullback on cohomology given a point ${x_i^*: Br(X)\rightarrow Br(k(x_i))}$. We denote this by ${a(x_i)}$ (thought of
as the Brauer class restricted to ${x_i}$). Define ${C_0(X)}$ to be the free abelian group generated by the points of ${X}$ (i.e. ${0}$-cycles) of degree ${0}$. Caution: this is not the Chow group or anything, because we aren’t modding out by any sort of rational or algebraic equivalence yet. We only require degree ${0}$.
For those not used to the degree map when over non-algebraically closed fields, recall that ${deg(\sum n_ix_i)=\sum [k(x_i): k]n_i}$, so it is possible that ${2x_1-3x_2}$ is a degree ${0}$ cycle if
the residue field extensions are ${3}$ and ${2}$ respectively.
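Spelling out the arithmetic in that example:

```latex
\deg(2x_1 - 3x_2) = 2\,[k(x_1):k] - 3\,[k(x_2):k] = 2\cdot 3 - 3\cdot 2 = 0.
```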
The Manin pairing is ${(-,-): C_0(X)\times (Br(X)/Br(k))\rightarrow Br(k)}$ given by ${\displaystyle \left(\sum n_ix_i, a\right)=\prod_i cor_{k(x_i)/k}(a(x_i))^{n_i}}$. Certainly at the level of ${C_0(X)\times Br(X)}$ everything is well-defined by the above maps. There is some work in checking that we can pass to the quotient.
Here is why it works. If we take a class ${a\in Br(X)}$ that is already in ${Br(k)}$, then ${\displaystyle Br(k)\rightarrow Br(X)\stackrel{x_i^*}{\rightarrow} Br(k(x_i))}$ is just ${res(a)}$ (this
is somewhat non-obvious from our definition). Now given any element ${\sum n_ix_i\in C_0(X)}$ we need to check that it pairs with ${a}$ to ${1}$. We crucially use our arithmetic definition of degree
${0}$ here.
$\displaystyle \begin{matrix} (\sum n_ix_i, a) & = & \displaystyle \prod_i cor_{k(x_i)/k}(a(x_i))^{n_i} \\ & = & \displaystyle\prod_i (cor_{k(x_i)/k}\circ res)(a^{n_i}) \\ & = & \displaystyle\prod_i a^{[k(x_i):k]n_i} \\ & = & \displaystyle a^{\sum [k(x_i):k]n_i} \\ & = & a^0 =1 \end{matrix}$
This gives us a well-defined pairing, which I’ll refer to as the Manin pairing (non-standard terminology to my knowledge). We say that two rational points ${x,y\in X(k)}$ are Brauer equivalent if ${(x-y, a)=1}$ for all ${a\in Br(X)}$. The key idea of the next post will be to rewrite this pairing in a way that allows us to check that for a smooth rational surface over a global field the set of rational points up to Brauer equivalence, ${X(k)/B}$, is finite. But more importantly, the set has only one element (i.e. all points are Brauer equivalent) at places of good reduction.
by hilbertthm90
Heights of Varieties
Now that we’ve defined the height of a ${p}$-divisible group we’ll define the height of a variety in positive characteristic. There are a few ways we can motivate this definition, but really it just
works and turns out to be a very useful concept. We’ll mostly follow the paper of Artin and Mazur.
We could do this in more generality, but to keep things as simple as possible we’ll assume that we have a proper variety ${X}$ over a perfect field ${k}$ of characteristic ${p}$. The first motivation
is that we can think about ${\mathrm{Pic}(X)}$. One way to get information about this group is to use deformation theory and look at the formal completion ${\widehat{\mathrm{Pic}}(X)}$.
The way to define this is to define the ${S}$-valued points (${S}$ an Artin local ${k}$-algebra with residue field ${k}$) to be the group fitting into the sequence ${0\rightarrow \widehat{\mathrm{Pic}}(X)(S)\rightarrow H^1(X\times S, \mathbb{G}_m)\rightarrow H^1(X, \mathbb{G}_m)}$.
So ${\widehat{\mathrm{Pic}}(X)}$ is a functor which by Schlessinger’s criterion is prorepresentable by a formal group over ${k}$. Notice that ${\widehat{\mathrm{Pic}}(X)(S)=\mathrm{ker}(\mathrm{Pic}(X\times S)\rightarrow \mathrm{Pic}(X))}$, so there is a pretty concrete way to think about what is going on. We take our scheme and consider some nilpotent thickening. The line bundles on this
thickening that are just extensions from the trivial line bundle are what is in this formal Picard group.
There is no reason to stop with just ${H^1}$. We could define ${\Phi^r: Art_k\rightarrow Ab}$ by: ${\Phi^r(S)}$ is the kernel of the restriction map ${H^r(X\times S, \mathbb{G}_m)\rightarrow H^r(X, \mathbb{G}_m)}$. In the cases we care about, modulo some technical details, we can apply Schlessinger type arguments to this to get that if the dimension of ${X}$ is ${n}$, then ${\Phi^n}$ is not only pro-representable, but pro-representable by a formal Lie group of dimension ${1}$. We’ll call this ${\Phi_X}$.
When ${n=2}$ this is just the well-known formal Brauer group, and so for instance the height of a K3 surface is the height of its formal Brauer group. We also have that if ${\Phi_X}$ is not ${\widehat{\mathbb{G}}_a}$ then it is a ${p}$-divisible group, and amazingly the Dieudonné module of ${\Phi_X}$ is related to the Witt sheaf cohomology via ${D(\Phi_X)=H^n(X, \mathcal{W})}$. Recall that ${D(\Phi_X)}$ is a free ${W(k)}$-module of rank the height of ${\Phi_X}$, so in particular ${H^n(X, \mathcal{W})}$ is a finite ${W(k)}$-module!
Remember that we computed an example where this wasn’t finitely generated. So non-finite generation of ${H^n(X, \mathcal{W})}$ actually is related to the height: if the variety is of finite height, then ${H^n(X, \mathcal{W})}$ is finitely generated. Since we call a variety of infinite height supersingular, we can rephrase this as saying that ${H^n(X, \mathcal{W})}$ is not finitely generated if and only if ${X}$ is supersingular.
Just as an example of what heights can be: an elliptic curve must have height ${1}$ or ${2}$, and a K3 surface can have height between ${1}$ and ${10}$ (inclusive). As of right now it seems that the higher dimensional analogue, whether the finite heights of Calabi-Yau threefolds are bounded, is still open. People have proved certain bounds in terms of Hodge numbers. For instance ${h(\Phi_X)\leq h^{1, 2}+1}$. For a general CY ${n}$-fold we have ${h\leq h^{1, n-1}+1}$.
This is pretty fascinating because my interpretation of this (which could be completely wrong) is that since for K3 surfaces the moduli space is ${20}$ dimensional, we get that (for
non-supersingular) ${h^{1,1}=20}$ since this is just the dimension of the tangent space of the deformations, which for a smooth moduli should match the dimension of the moduli space. Thus we get a
uniform bound (not the one I mentioned earlier).
But for CY threefolds the moduli space is much less uniform. They aren’t all deformation equivalent. They lie on different components that have different dimensions (this is a guess, I haven’t actually seen this written anywhere). So this doesn’t allow us to say ${h^{1,2}}$ is some fixed number. It depends on the dimension of the component of the moduli space that it is on (since ${h^{1,2}=\dim H^2(X, \Omega)=\dim H^1(X, \mathcal{T})}$ using the CY conditions and Serre duality). So I think it is still an open problem how big that can be. If it can get unreasonably large, then maybe we can get arbitrarily large heights of CY threefolds.
Next time maybe we’ll prove some equivalent ways of computing heights for CY varieties and talk about how height has been used by Van der Geer and Katsura and others in a useful way for K3 surfaces. | {"url":"http://hilbertthm90.wordpress.com/tag/brauer-group/","timestamp":"2014-04-17T18:30:37Z","content_type":null,"content_length":"178730","record_id":"<urn:uuid:deef15fd-7f45-40b8-ab73-42b37e0ecad0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
PERTURBATIONS IN DARK MATTER AND RADIATION
We shall now move on to the more realistic case of a multi-component universe consisting of radiation and collisionless dark matter. (For the moment we are ignoring the baryons, which we will study
in Sec. 6). It is convenient to use y = a / a[eq] as the independent variable rather than the time coordinate. The background expansion of the universe described by the function a(t) can be equivalently expressed (in terms of the conformal time τ) as y = x^2 + 2x, with x = (1/2) k[c] τ.
It is also useful to define a critical wave number k[c] by:
which essentially sets the comoving scale corresponding to matter-radiation equality. Note that 2x = k[c] τ, so y ≈ k[c] τ in the radiation dominated phase while y ≈ (1/4)(k[c] τ)^2 in the matter dominated phase.
We now manipulate Eqs. (52), (55), (56), (57) governing the growth of perturbations by essentially eliminating the velocity. This leads to the three equations
for the three unknowns Φ, δ[m], δ[R]. Given suitable initial conditions we can solve these equations to determine the growth of perturbations. The initial conditions need to be imposed very early on, when the modes are much bigger than the Hubble radius, which corresponds to the limit y << 1, kτ << 1.
We will take Φ(y[i], k) = Φ[i](k) as the given value, to be determined by the processes that generate the initial perturbations. The first equation in Eq. (75) shows that we can take δ[R] = -2Φ[i] for y[i] << 1; adiabaticity then fixes δ[m] = (3/4) δ[R] = -(3/2) Φ[i]. The exact equation Eq. (72) determines Φ' once (Φ, δ[m], δ[R]) are given. Finally we use the last two equations to set δ[m]' = 3Φ' and δ[R]' = 4Φ' at y = y[i] << 1, so the initial conditions are:
with δ[m]'(y[i], k) = 3Φ'(y[i], k); δ[R]'(y[i], k) = 4Φ'(y[i], k).
Given these initial conditions, it is fairly easy to integrate the equations forward in time and the numerical results are shown in Figs 2, 3, 4, 5. (In the figures k[eq] is taken to be a[eq]H[eq].)
To understand the nature of the evolution, it is, however, useful to try out a few analytic approximations to Eqs. (72) – (74) which is what we will do now.
Figure 2. Left Panel: The evolution of the gravitational potential.
4.1. Evolution for λ >> d[H]
Let us begin by considering very large wavelength modes corresponding to the kτ → 0 limit, in which adiabaticity is maintained and δ[R] = (4/3) δ[m]. Then Eqs. (72), (73) become
Differentiating the first equation and using the second to eliminate δ[m], we get a second order equation for Φ:
[There is a simple way of determining such an exact solution, which we will describe in Sec. 4.4.] The initial condition on Φ is chosen such that δ[R] goes to -2Φ[i] initially. The solution shows that, as long as the mode is bigger than the Hubble radius, the potential changes very little; it is constant initially as well as in the final matter dominated phase. At late times (y >> 1) we see that Φ → (9/10) Φ[i].
4.2. Evolution for λ << d[H] in the radiation dominated phase
When the mode enters the Hubble radius in the radiation dominated phase, we can no longer ignore the pressure terms. The pressure makes the radiation density contrast oscillate and the gravitational potential, driven by this, also oscillates with a decay in the overall amplitude. An approximate procedure to describe this phase is to solve the coupled Φ and δ[R] system ignoring δ[m], which is sub-dominant, and then determine δ[m] using the form of Φ.
When δ[m] is ignored, the problem reduces to the one solved earlier in Eqs. (64), (65) with w = 1/3. Since J[3/2] can be expressed in terms of trigonometric functions, the solution given by Eq. (64) becomes:
Note that for ly >> 1 the potential decays, with Φ ∝ Φ[i] (ly)^-2 cos(ly). In the same limit, we get from Eq. (65) that
(This is analogous to Eq. (68) for the radiation dominated case.) This oscillation is seen clearly in Fig. 3 and Fig. 4 (left panel). The amplitude of oscillations is accurately captured by Eq. (80) for the k = 100 k[eq] mode but not for k = k[eq]; this is to be expected since the latter mode is not entering the Hubble radius in the radiation dominated phase.
Figure 3. Evolution of δ[R] for a mode with k = 100 k[eq]. The mode remains frozen outside the Hubble radius, where the plotted quantity (k / k[eq])^{3/2}(-δ[R]) stays constant (cf. Fig. 2), and oscillates when it enters the Hubble radius. The oscillations are well described by Eq. (80) with an amplitude of 6.
Figure 4. Evolution of δ[R] for two modes k = k[eq] and k = 0.01 k[eq]. The modes remain frozen outside the Hubble radius at constant (-δ[R]).
Figure 5. Evolution of |[m]| for 3 different modes. The modes are labelled by their wave numbers and the epochs at which they enter the Hubble radius are shown by small arrows. All the modes remain
frozen when they are outside the Hubble radius and grow linearly in the matter dominated phase once they are inside the Hubble radius. The mode that enters the Hubble radius in the radiation
dominated phase grows logarithmically until y = y[eq]. These features are well approximated by Eqs. (83), (85).
Let us next consider matter perturbations during this phase. They grow, driven by the gravitational potential determined above. When y << 1, Eq. (73) becomes:
The general solution to the homogeneous part of Eq. (82) (obtained by ignoring the right hand side) is δ[m] = c[1] + c[2] ln y; hence the general solution to this equation is
For y << 1 the growing mode varies as ln y and dominates over the rest; hence we conclude that matter, driven by Φ, grows logarithmically during the radiation dominated phase.
4.3. Evolution in the matter dominated phase
Finally let us consider the matter dominated phase, in which we can ignore the radiation and concentrate on Eq. (72) and Eq. (73). When y >> 1 these equations become:
These have a simple solution which we found earlier (see Eq. (69)):
In this limit, the matter perturbations grow linearly with expansion: δ[m] ∝ y ∝ a. In fact this is the most dominant growth mode in the linear perturbation theory.
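The linear growth in the matter era can be checked numerically. The sketch below does not use Eqs. (72)-(74) above (their symbols are not legible here); instead it integrates the standard Mészáros equation for pressureless matter in a matter-plus-radiation background, an assumption standing in for the full system, written in the same variable y = a/a[eq]:

```python
# Numerical check of linear growth in the matter era, using the standard
# Meszaros equation (an assumed stand-in for the garbled Eqs. (72)-(74)):
#   D'' + (2 + 3y) / (2y(1+y)) * D' - 3 / (2y(1+y)) * D = 0,
# whose growing solution is D(y) = 1 + 3y/2, i.e. linear in y = a/a_eq.

def second_derivative(y, d, dp):
    """D'' from the Meszaros equation."""
    return (-(2 + 3 * y) * dp + 3 * d) / (2 * y * (1 + y))

def integrate(y0, y1, n=20000):
    """RK4-integrate starting on the growing mode D = 1 + 3y/2."""
    h = (y1 - y0) / n
    y, d, dp = y0, 1 + 1.5 * y0, 1.5
    for _ in range(n):
        k1d, k1p = dp, second_derivative(y, d, dp)
        k2d = dp + 0.5 * h * k1p
        k2p = second_derivative(y + 0.5 * h, d + 0.5 * h * k1d, k2d)
        k3d = dp + 0.5 * h * k2p
        k3p = second_derivative(y + 0.5 * h, d + 0.5 * h * k2d, k3d)
        k4d = dp + h * k3p
        k4p = second_derivative(y + h, d + h * k3d, k4d)
        d += (h / 6) * (k1d + 2 * k2d + 2 * k3d + k4d)
        dp += (h / 6) * (k1p + 2 * k2p + 2 * k3p + k4p)
        y += h
    return d

# Deep in the matter era (y = 10) the solution tracks 1 + 3y/2 = 16.
print(abs(integrate(0.01, 10.0) / 16.0 - 1) < 1e-6)  # True
```

The integrated curve grows linearly in y for y >> 1, matching the δ[m] ∝ a behaviour quoted above.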
4.4. An alternative description of matter-radiation system
Before proceeding further, we will describe an alternative procedure for discussing the perturbations in dark matter and radiation, which has some advantages. In the formalism we used above, we used
perturbations in the energy density of radiation (δ[R]) and matter (δ[m]) as the dependent variables. Instead, we now use perturbations in the total energy density and the relative entropy perturbation; in terms of δ[R] and δ[m], these variables are defined as:
Given the equations for δ[R] and δ[m], one can obtain the corresponding equations for the new variables:
where we have defined
These equations show that the entropy perturbations and gravitational potential (which is directly related to total energy density perturbations) act as sources for each other. The coupling between
the two arises through the right hand sides of Eq. (88) and Eq. (89). We also see that if we set k^4) and - for long wave length modes - the Fig. (2) right panel. The entropy perturbation [R] or [m]
whichever is the dominant energy density perturbation. To illustrate the behaviour of k
which has the two independent solutions:
both of which diverge as y y
Multiplying by Φ[i] we get the solution that was found earlier (see Eq. (78)). Given the form of Φ, we can also work out the corresponding velocity field.
The corresponding velocity field, which we quote for future reference, is given by:
We conclude this section by mentioning another useful result related to Eq. (88). When
where we have defined:
(The i factor arises because of converting a gradient to the k space; of course, when everything is done correctly, all physical quantities will be real.) Other equivalent alternative forms for
For modes which are bigger than the Hubble radius, Eq. (96) shows that ζ is conserved.
This is the easiest way to obtain the solution in Eq. (78).
The conservation law for ζ allows us to relate the potential at y << 1 to that at y >> 1. Let us compare the values of ζ in the two limits: early in the radiation dominated phase, ζ[i] = (3/2) Φ[i]; late in the matter dominated phase, ζ[f] = (5/3) Φ[f]. Hence the conservation of ζ gives Φ[f] = (3/5)(3/2) Φ[i] = (9/10) Φ[i], which was the result obtained earlier. The expression in Eq. (99) also works at late times in the matter dominated phase.
One key feature which should be noted in the study of linear perturbation theory is the different amounts of growth for δ[R] and δ[m]. The amplitude of δ[R] grows only by a factor of a few. The physical reason, of course, is that the amplitude is frozen at super-Hubble scales and the pressure prevents growth at sub-Hubble scales. In contrast, δ[m], which is pressureless, grows logarithmically in the radiation dominated era and linearly during the matter dominated era. Since the latter phase lasts for a factor of 10^4 in expansion, we get a fair amount of growth in δ[m]. | {"url":"http://ned.ipac.caltech.edu/level5/March06/Padmanabhan/Nabhan4.html","timestamp":"2014-04-21T03:07:33Z","content_type":null,"content_length":"28268","record_id":"<urn:uuid:e3f2ce5f-0406-436b-83a1-d2e1c67451dc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00267-ip-10-147-4-33.ec2.internal.warc.gz"} |
The POWERMUTT Project
Introduction to Research Methods in Political Science
(for use with SPSS)
*Politically-Oriented Web-Enhanced Research Methods for Undergraduates
Resources for introductory research methods courses in political science and related disciplines
From the menu bar, click on “Analyze,” then on “Regression,” and on “Linear….” In the left window, select the dependent variable you wish to analyze, and click on the top right arrow. Then select
your independent variables, and click on the second right arrow.
You can use “Selection Variable” in much the same way as “Select Cases.” If you wish to include only some cases in the analysis, select the variable you will use as a filter, click on the third right
arrow, click on “Rule…,” define the rule you wish to use to select cases for analysis, and click on “Continue.”
REGRESSION provides three alternatives for handling missing data. Listwise deletion means that if a case has missing data for any of the variables in the correlation matrix, it will be deleted from
all calculations. This ensures that all coefficients will be based on the same cases, but will eliminate a case from all calculations even if it is missing data for only one or two variables in the correlation matrix. Another option is pairwise deletion. Pairwise deletion means that each correlation will be based on all cases with non-missing values for the two variables in question. This has
the advantage of using as much information as possible for the calculation of each coefficient. The disadvantage is that the coefficients may not be based on the same subset of cases, since different
cases may be missing data for different variables. A final alternative is to substitute mean values for any missing data, which may or may not make sense depending on how your data is structured. If
you only have a small proportion of missing data, it will not make much difference which option you choose. If you have a lot of missing data, no option is very satisfactory. The default option for
REGRESSION is listwise deletion. If you wish to use either of the alternatives just described, click on “Options” and select the option you prefer.
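The difference between the listwise and pairwise options is easy to see outside SPSS. The following Python sketch (toy data and made-up variable names, not SPSS output) counts the cases each option would use:

```python
# Toy illustration (not SPSS) of listwise vs. pairwise deletion.
# None marks missing data; variable names are made up for the example.
rows = [
    {"income": 10, "educ": 12, "age": 30},
    {"income": 20, "educ": None, "age": 40},   # missing educ
    {"income": None, "educ": 16, "age": 50},   # missing income
    {"income": 40, "educ": 18, "age": 60},
]

variables = ["income", "educ", "age"]

# Listwise deletion: keep only rows complete on ALL variables.
listwise = [r for r in rows if all(r[v] is not None for v in variables)]

# Pairwise deletion: each pair of variables keeps every row complete on BOTH.
def pairwise_cases(x, y):
    return [r for r in rows if r[x] is not None and r[y] is not None]

print(len(listwise))                         # 2 rows survive listwise deletion
print(len(pairwise_cases("income", "age")))  # 3 rows for this pair
print(len(pairwise_cases("educ", "age")))    # 3 rows for this pair
```

With listwise deletion only the 2 complete rows enter every calculation; with pairwise deletion each correlation keeps 3 of the 4 rows here, but different pairs keep different rows.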
“Options” can also be used to save residual scores as a new variable.
Click on “Continue” (if you have not already done so) and on “OK.”
Output will include the statistical significance (“Sig.”) of the overall equation, and of each term on the right side of the equation. If the significance level is given as “.000” this does not
really mean that there is a zero probability of the relationship occurring by chance. Rather, it means that the probability is less than .0005. Note also that the significance levels given are for
“two-tailed tests,” that is, for hypotheses that predict a relationship, but do not specify whether the relationship is positive or negative. When an hypothesis correctly predicts the direction of
the relationship, a “one-tailed” test is appropriate. The significance level (the risk that the relationship is due to chance) for a one-tailed test is half that of a two-tailed test. For example, if
the two-tailed probability that the relationship is due to chance is .04, the one-tailed probability is only .02.
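The halving rule can be stated in one line; here it is as a tiny Python sketch of the .04/.02 example from the text:

```python
# The conversion described in the text: a correctly-directed one-tailed
# test halves the two-tailed significance level.
p_two_tailed = 0.04
p_one_tailed = p_two_tailed / 2
print(p_one_tailed)  # prints 0.02
```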
The “Constant” described in the output is the “a” coefficient (the Y intercept).
Last updated April 28, 2013.
© 2003---2013 John L. Korey. Licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. | {"url":"http://www.csupomona.edu/~jlkorey/POWERMUTT/Tools/regression.html","timestamp":"2014-04-19T14:32:05Z","content_type":null,"content_length":"12375","record_id":"<urn:uuid:f0c8ba27-3cb7-4d4d-80bc-a39560ae5731>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
numpy.binary_repr(num, width=None)
Return the binary representation of the input number as a string.
For negative numbers, if width is not given, a minus sign is added to the front. If width is given, the two’s complement of the number is returned, with respect to that width.
In a two’s-complement system negative numbers are represented by the two’s complement of the absolute value. This is the most common method of representing signed integers on computers [R16]. An N-bit two’s-complement system can represent every integer in the range -2^{N-1} to +2^{N-1}-1.
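The width-limited behavior follows directly from this definition. The sketch below reproduces the semantics in plain Python; it is an illustration of the two’s-complement rule, not NumPy’s actual implementation:

```python
# Pure-Python sketch of binary_repr's semantics (illustration only).
def binary_repr_sketch(num, width=None):
    if width is None:
        # No width: sign-and-magnitude, with a minus sign for negatives.
        return ("-" if num < 0 else "") + bin(abs(num))[2:]
    # With width: the two's complement is num mod 2**width,
    # formatted as a width-bit binary string.
    return format(num % (1 << width), "0{}b".format(width))

print(binary_repr_sketch(3))            # prints 11
print(binary_repr_sketch(-3))           # prints -11
print(binary_repr_sketch(-3, width=4))  # prints 1101
```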
Parameters :
num : int
    Only an integer decimal number can be used.
width : int, optional
    The length of the returned string if num is positive, the length of the two’s complement if num is negative.

Returns :
bin : str
    Binary representation of num or two’s complement of num.
See also
base_repr : Return a string representation of a number in the given base system.

Notes
binary_repr is equivalent to using base_repr with base 2, but about 25x faster.
[R16] Wikipedia, “Two’s complement”, http://en.wikipedia.org/wiki/Two’s_complement
Examples

>>> np.binary_repr(3)
'11'
>>> np.binary_repr(-3)
'-11'
>>> np.binary_repr(3, width=4)
'0011'

The two’s complement is returned when the input number is negative and width is specified:
>>> np.binary_repr(-3, width=4)
'1101' | {"url":"http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.binary_repr.html","timestamp":"2014-04-18T23:25:20Z","content_type":null,"content_length":"9801","record_id":"<urn:uuid:bbc62394-df99-4a3f-acbb-84d2c76680bd>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: KEY DERIVATION FUNCTIONS TO ENHANCE SECURITY
Key derivation algorithms are disclosed. In one key derivation application, a segment of the master key is hashed. Two numbers are derived from another segment of the master key. A universal hash function, using the two numbers, is applied to the result of the hash, from which bits are selected as the derived key. In another embodiment, an encoded counter is combined with segments of the master key. The result is then hashed, from which bits are selected as the derived key.
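The first scheme can be sketched as follows. The abstract fixes only the structure (hash one segment, apply a universal hash built from two numbers taken from the other segment, then select bits); SHA-256, the Mersenne prime modulus, the 4-byte counter encoding, and the 64-bit output are assumptions made for this illustration:

```python
import hashlib

# Sketch of the first derivation scheme described in the abstract.
# SHA-256, the modulus P, the counter encoding, and the output length are
# all assumptions for illustration; the patent does not fix them.
P = (1 << 127) - 1  # prime modulus for the (a*x + b) mod P universal hash

def derive_key(master_key: bytes, counter: int, out_bits: int = 64) -> bytes:
    half = len(master_key) // 2
    seg1, seg2 = master_key[:half], master_key[half:]

    # Secure-hash step: hash the first segment together with the counter.
    digest = hashlib.sha256(seg1 + counter.to_bytes(4, "big")).digest()
    x = int.from_bytes(digest, "big")

    # Determine the two universal-hash numbers from the second segment,
    # reduced modulo P (cf. the "modulo said modulus" claim below).
    mid = len(seg2) // 2
    a = int.from_bytes(seg2[:mid], "big") % P
    b = int.from_bytes(seg2[mid:], "big") % P

    # Universal-hash step, then bit selection: keep the low out_bits bits.
    result = (a * x + b) % P
    return (result & ((1 << out_bits) - 1)).to_bytes(out_bits // 8, "big")
```

Calling derive_key(master, 0), derive_key(master, 1), and so on then yields a family of subkeys from one master key.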
1. An apparatus comprising: an input port to receive a master key; an implementation of a universal hash algorithm; an implementation of a secure hash algorithm; means for generating a derivative key from said master key using the implementation of said universal hash algorithm and said secure hash algorithm; and an output port to output said derivative key.
2. An apparatus according to claim 1, wherein the implementation of said universal hash algorithm includes: a divider to divide said master key into a first segment and a second segment; a repeater to repeat said counter to form an encoded counter as a longer bit pattern; an implementation of a first bitwise binary function operative on said first segment and said encoded counter to produce a first result; an implementation of a second bitwise binary function operative on said second segment and said encoded counter to produce a second result; and a combiner to combine said first result, said second result, and said encoded counter to produce said result.
3. An apparatus according to claim 1, wherein: the apparatus further comprises a divider to divide said master key into a first segment and a second segment; the implementation of said secure hash algorithm includes: a combiner to combine said first segment with said counter to produce a modified first segment; and means for using the implementation of said secure hash algorithm to securely hash said modified first segment into a hash value; and the implementation of said universal hash algorithm includes: a determiner to determine a first number and a second number from said second segment; a calculator including an implementation of an arithmetic formula to compute a result using said hash value, said first number, and said second number; and a bit selector to select a set of bits from said result as a derivative key.
4. An apparatus according to claim 1, wherein the means for generating includes: means for using the implementation of said universal hash algorithm with said master key to produce a result; and means for using the implementation of said secure hash algorithm with said result and a counter to produce said derivative key.
5. An apparatus according to claim 4, wherein the implementation of said universal hash algorithm includes: a divider to divide said master key into a first segment and a second segment; a repeater to repeat said counter to form an encoded counter as a longer bit pattern; an implementation of a first bitwise binary function operative on said first segment and said encoded counter to produce a first result; an implementation of a second bitwise binary function operative on said second segment and said encoded counter to produce a second result; and a combiner to combine said first result, said second result, and said encoded counter to produce said result.
6. An apparatus according to claim 1, wherein the means for generating includes: means for using the implementation of said secure hash algorithm with said master key and a counter to produce a result; and means for using the implementation of said universal hash algorithm with said result to produce said derivative key.
7. An apparatus according to claim 6, wherein: the apparatus further comprises a divider to divide said master key into a first segment and a second segment; the means for using the implementation of said secure hash algorithm includes: a combiner to combine said first segment with said counter to produce a modified first segment; and means for using the implementation of said secure hash algorithm to securely hash said modified first segment into a hash value; and the means for using the implementation of said universal hash algorithm includes: a determiner to determine a first number and a second number from said second segment; a calculator including an implementation of an arithmetic formula to compute a result using said hash value, said first number, and said second number; and a bit selector to select a set of bits from said result as a derivative key.
An apparatus comprising:an input port to receive a master key;a first calculator to implement a universal hash algorithm;a second calculator to implement a secure hash algorithm;a key deriver to
generate a derivative key from said master key using the first calculator and the second calculator; andan output port to output said derivative key.
An apparatus according to claim 8, wherein the key deriver includes:a divider to divide said master key into a first segment and a second segment;a repeater to repeat said counter to form an encoded
counter as a longer bit pattern;a third calculator to implement a first bitwise binary function operative on said first segment and said encoded counter to produce a first result;a fourth calculator
to implement a second bitwise binary function operative on said second segment and said encoded counter to produce a second result; anda combiner to combine said first result, said second result, and
said encoded counter to produce said result.
An apparatus according to claim 8, wherein:the apparatus further comprises a divider to divide said master key into a first segment and a second segment;the second calculator includes:a combiner to
combine said first segment with said counter to produce a modified first segment; anda fifth calculator to implement said secure hash algorithm to securely hash said modified first segment into a
hash value; andthe first calculator includes:a determiner to determine a first number and a second number from said second segment;a sixth calculator to implement an arithmetic formula to compute a
result using said hash value, said first number, and said second number; anda bit selector to select a set of bits from said result as a derivative key.
An apparatus according to claim 8, wherein the key deriver includes:a third calculator to implement said universal hash algorithm with said master key to produce a result; anda fourth calculator to
implement said secure hash algorithm with said result and a counter to produce said derivative key.
An apparatus according to claim 11, wherein the first calculator includes:a divider to divide said master key into a first segment and a second segment;a repeater to repeat said counter to form an
encoded counter as a longer bit pattern;a fifth calculator to implement a first bitwise binary function operative on said first segment and said encoded counter to produce a first result;a sixth
calculator to implement a second bitwise binary function operative on said second segment and said encoded counter to produce a second result; anda combiner to combine said first result, said second
result, and said encoded counter to produce said result.
An apparatus according to claim 8, wherein the key deriver includes:a third calculator to implement said secure hash algorithm with said master key and a counter to produce a result; anda fourth
calculator to implement said universal hash algorithm with said result to produce said derivative key.
An apparatus according to claim 13, wherein: the apparatus further comprises a divider to divide said master key into a first segment and a second segment; the third calculator includes: a combiner to combine said first segment with said counter to produce a modified first segment; and a third calculator to implement said secure hash algorithm to securely hash said modified first segment into a hash value; and the fourth calculator includes: a determiner to determine a first number and a second number from said second segment; a fifth calculator including an implementation of an arithmetic formula to compute a result using said hash value, said first number, and said second number; and a bit selector to select a set of bits from said result as a derivative key.
An apparatus, comprising: an input port to receive a master key; a divider to divide said master key into a first segment and a second segment; a concatenator to concatenate said first segment and a counter to produce a modified first segment; a hasher to securely hash said modified first segment into a hash value; a determiner to determine a first number and a second number from said second segment; a calculator including an implementation of an arithmetic formula to compute a result using said hash value, said first number, and said second number; and a bit selector to select a set of bits from said result as a derivative key.
An apparatus according to claim 15, further comprising an output port to output said derivative key.
An apparatus according to claim 15, wherein the calculator includes: an implementation of a first function to compute a product of said hash value and said first number; an implementation of a second function to compute a sum of said product and said second number; and an implementation of a third function to compute said result of said sum modulo a modulus.
An apparatus according to claim 17, wherein the determiner is operative to determine said first number and said second number modulo said modulus.
An apparatus according to claim 17, wherein the implementation of said third function includes an implementation of said third function to compute said result of said sum modulo a prime modulus.
An apparatus according to claim 15, wherein the bit selector is operative to select as said derivative key a set of least significant bits from said result.
A data security device, comprising: a key deriver, including: an input port to receive a master key; a divider to divide said master key into a first segment and a second segment; a concatenator to concatenate said first segment and a counter to produce a modified first segment; a hasher to securely hash said modified first segment into a hash value; a determiner to determine a first number and a second number from said second segment modulo a modulus; a calculator including an implementation of an arithmetic formula to compute a result using said hash value, said first number, and said second number; and a bit selector to select a set of bits from said result as a derivative key; and an encrypter to encrypt data using said derivative key.
A data security device according to claim 21, further comprising a data transformer.
A data security device according to claim 22, wherein the data transformer is operative to transform an original master key into said master key.
A data security device according to claim 22, wherein the data transformer is operative to transform said derivative key into a transformed derivative key.
A data security device according to claim 21, wherein the calculator includes: an implementation of a first function to compute a product of said hash value and said first number; an implementation of a second function to compute a sum of said product and said second number; and an implementation of a third function to compute said result from said sum modulo said modulus.
A data security device according to claim 21, wherein the implementation of said third function includes an implementation of said third function to compute said result from said sum modulo a prime modulus.
A method for performing key derivation, comprising: securely hashing a master key to produce a hash value; determining a first number and a second number from the master key; computing a universal hash function of the hash value, the first number, and the second number to produce a result; and selecting a derivative key from bits in the result.
A method according to claim 27, wherein: the method further comprises dividing the master key into a first segment and a second segment; securely hashing a master key includes securely hashing the first segment to produce the hash value; and determining a first number and a second number includes determining the first number and the second number from the second segment.
A method according to claim 28, wherein: the method further comprises determining a counter; and securely hashing the first segment includes combining the first segment and the counter.
A method according to claim 27, wherein determining a first number and a second number includes: deriving a third number and a fourth number from the master key; computing the first number as the third number modulo a modulus; and computing the second number as the fourth number modulo the modulus.
A method according to claim 27, wherein computing a universal hash function includes: computing a product of the first number and the hash value; computing a sum of the product and the second number; and computing the result as the sum modulo a modulus.
A method according to claim 31, wherein computing the result includes computing the result as the sum modulo a prime divisor.
A method according to claim 31, wherein selecting a derivative key includes selecting the derivative key from a set of least significant bits in the result.
A method for encrypting data using a derivative key, comprising: generating the derivative key, including: dividing the master key into a first segment and a second segment; securely hashing the first segment to produce a hash value; determining a first number and a second number from the second segment; computing a product of the first number and the hash value; computing a sum of the product and the second number; computing a result as the sum modulo a modulus; and selecting the derivative key from bits in the result; and encrypting data using the derivative key.
A method according to claim 34, further comprising applying a data transformation to the master key before generating the derivative key.
A method according to claim 35, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;organizing the bits in the fourth segment into a number of groups, the number of groups equal to a number of bits in the third segment; each group having a same number of
bits;associating each of the groups with a bit in the third segment;applying a permutation function to at least one of the groups according to the associated bit in the third segment; andconstructing
the transformed master key from the third segment and the permuted groups.
A method according to claim 35, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;computing a power as a function of the third segment, the power being relatively prime to a function of a second modulus;computing a result of raising a function of the
fourth segment to the power;computing an exponential permutation as the result modulo the second modulus; andconstructing the transformed master key from the third segment and the computed
exponential permutation.
A method according to claim 34, further comprising applying a data transformation to the derivative key.
A method according to claim 38, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;organizing the bits in the fourth segment into a number of groups, the number of groups equal to a number of bits in the third segment; each group having a same number of
bits;associating each of the groups with a bit in the third segment;applying a permutation function to at least one of the groups according to the associated bit in the third segment; andconstructing
the transformed master key from the third segment and the permuted groups.
A method according to claim 38, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;computing a power as a function of the third segment, the power being relatively prime to a function of a second modulus;computing a result of raising a function of the
fourth segment to the power;computing an exponential permutation as the result modulo the second modulus; andconstructing the transformed master key from the third segment and the computed
exponential permutation.
A method according to claim 34, further comprising encrypting the derivative key.
A method according to claim 41, further comprising transmitting the encrypted derivative key.
A method according to claim 34, further comprising transmitting the encrypted data.
A method according to claim 34, wherein: the method further comprises determining a counter; and securely hashing the first segment includes combining the first segment and the counter.
A method according to claim 34, wherein determining a first number and a second number includes: deriving a third number and a fourth number from the second segment; computing the first number as the third number modulo a modulus; and computing the second number as the fourth number modulo the modulus.
A method according to claim 34, wherein selecting a derivative key includes selecting the derivative key from a set of least significant bits in the result.
An apparatus, comprising: an input port to receive a master key; a combiner to combine said master key and a value to produce a modified master key; a hasher to hash said modified master key into a hash value; and a bit selector to select a set of bits from said hash value as a derivative key.
An apparatus according to claim 47, wherein: the apparatus further comprises a repeater to repeat said value to form an encoded value as a longer bit pattern; and the combiner is operative to combine said master key and said encoded value to produce a modified master key.
An apparatus according to claim 47, wherein: the combiner includes a divider to divide said master key into a first segment and a second segment; and the combiner is operative to combine said encoded value, said first segment, and said second segment to produce said modified master key.
An apparatus according to claim 49, wherein the combiner further includes: an implementation of a first bitwise binary function operative on said first segment and said encoded value to produce a first result; an implementation of a second bitwise binary function operative on said second segment and said encoded value to produce a second result; and a combiner to combine said first result and said second result to produce said modified master key.
An apparatus according to claim 50, wherein the combiner includes a concatenator to concatenate said first result and said second result to produce said modified master key.
An apparatus according to claim 47, wherein the bit selector is operative to select a set of least significant bits from said hash value as said derivative key.
A data security device, comprising: a key deriver, including: an input port to receive a master key; a divider to divide said master key into a first segment and a second segment; a repeater to repeat a value to form an encoded value as a longer bit pattern; an implementation of a first bitwise binary function operative on said first segment and said encoded value to produce a first result; an implementation of a second bitwise binary function operative on said second segment and said encoded value to produce a second result; a combiner to combine said first result, said second result, and said encoded value to produce said modified master key; a hasher to hash said modified master key into a hash value; and a bit selector to select a set of bits from said result as a derivative key; and an encrypter to encrypt data using said derivative key.
A data security device according to claim 53, further comprising a data transformer.
A data security device according to claim 54, wherein the data transformer is operative to transform an original master key into said master key.
A data security device according to claim 54, wherein the data transformer is operative to transform said derivative key into a transformed derivative key.
A method for performing key derivation, comprising: combining a master key with a value to produce a modified master key; hashing the modified master key to produce a hash value; and selecting a derivative key from bits in the hash value.
A method according to claim 57, further comprising repeating a bit pattern in the value to form a longer bit pattern.
A method according to claim 57, wherein combining a master key with a value includes computing a bitwise binary function using the master key and the value.
A method according to claim 57, wherein combining a master key with a value includes: dividing the master key into a first segment and a second segment; combining the first segment with the value to produce a first result; combining the second segment with the value to produce a second result; and combining the first result and the second result to produce the modified master key.
A method according to claim 60, wherein combining the first result and the second result includes concatenating the first result and the second result to produce the modified master key.
A method according to claim 57, wherein selecting a derivative key includes selecting the derivative key from a set of least significant bits from the hash value.
A method for encrypting a derivative key, comprising: combining a master key with a value to produce a modified master key; hashing the modified master key to produce a hash value; selecting a derivative key from bits in the hash value; and encrypting data using the derivative key.
A method according to claim 63, further comprising applying a data transformation to the master key before generating the derivative key.
A method according to claim 64, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;organizing the bits in the fourth segment into a number of groups, the number of groups equal to a number of bits in the third segment; each group having a same number of
bits;associating each of the groups with a bit in the third segment;applying a permutation function to at least one of the groups according to the associated bit in the third segment; andconstructing
the transformed master key from the third segment and the permuted groups.
A method according to claim 64, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;computing a power as a function of the third segment, the power being relatively prime to a function of a predefined modulus;computing a result of raising a function of the
fourth segment to the power;computing an exponential permutation as the result modulo the predefined modulus; andconstructing the transformed master key from the third segment and the computed
exponential permutation.
A method according to claim 63, further comprising applying a data transformation to the derivative key.
A method according to claim 67, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;organizing the bits in the fourth segment into a number of groups, the number of groups equal to a number of bits in the third segment; each group having a same number of
bits;associating each of the groups with a bit in the third segment;applying a permutation function to at least one of the groups according to the associated bit in the third segment; andconstructing
the transformed master key from the third segment and the permuted groups.
A method according to claim 67, wherein applying a data transformation includes:dividing the master key into a third segment and a fourth segment, each of the third segment and the fourth segment
including at least one bit;computing a power as a function of the third segment, the power being relatively prime to a function of a predefined modulus;computing a result of raising a function of the
fourth segment to the power;computing an exponential permutation as the result modulo the predefined modulus; andconstructing the transformed master key from the third segment and the computed
exponential permutation.
A method according to claim 63, further comprising encrypting the derivative key.
A method according to claim 70, further comprising transmitting the encrypted derivative key.
A method according to claim 63, further comprising transmitting the encrypted data.
A method according to claim 63, wherein combining a master key with a value includes:dividing the master key into a first segment and a second segment;combining the first segment with the value to
produce a first result;combining the second segment with the value to produce a second result; andcombining the first result and the second result to produce the modified master key.
A method according to claim 73, wherein combining the first result and the second result includes concatenating the first result and the second result to produce the modified master key.
A method according to claim 63, wherein combining a master key with a value includes:dividing the master key into a first segment and a second segment;combining the first segment with the value to
produce a first result; andcombining the first result with the second segment to produce the modified master key.
An apparatus according to claim 47, wherein the combiner is operative to combine said master key and a counter to produce said modified master key.
A data security device according to claim 53, wherein the repeater is operative to repeat a counter to form said encoded value as a longer bit pattern.
A method according to claim 57, wherein combining a master key with a value to produce a modified master key includes combining said master key with a counter to produce said modified master key.
A method according to claim 63, wherein combining a master key with a value to produce a modified master key includes combining said master key with a counter to produce a modified master key.
RELATED APPLICATION DATA [0001]
This application is a continuation of U.S. patent application Ser. No. 10/918,718, entitled "KEY DERIVATION FUNCTIONS TO ENHANCE SECURITY," filed Aug. 12, 2004, now allowed, the contents of which are
hereby incorporated by reference in their entirety.
This application is related to co-pending U.S. patent application Ser. No. 10/817,717, entitled "PERMUTATION DATA TRANSFORM TO ENHANCE SECURITY", filed Aug. 12, 2004, and to co-pending U.S. patent
application Ser. No. 10/913,103, entitled "EXPONENTIAL DATA TRANSFORM TO ENHANCE SECURITY", filed Aug. 12, 2004, all commonly assigned.
FIELD [0003]
This invention pertains to data security, and more particularly to new key derivation functions to enhance security.
BACKGROUND [0004]
For thousands of years, man has found it necessary to keep secrets. But for most of history, the art of keeping secrets developed slowly. The Caesar shift cipher, supposedly used by Julius Caesar
himself, involved taking a letter and shifting it forward through the alphabet, to hide the message. Thus, "A" became "D", "B" became "E", and so on. Although generally considered a very weak
encryption, there were few better encryption algorithms developed until centuries later.
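The shift described above is simple to state in code. This small sketch (the function name and signature are ours, not the document's) reproduces the classical three-letter shift:

```python
def caesar_shift(text: str, shift: int = 3) -> str:
    # shift each letter forward through the alphabet, wrapping around:
    # with shift=3, "A" becomes "D", "B" becomes "E", and so on
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)
```

Decryption is just a shift of 26 minus the original shift, which is one reason the cipher is so weak: there are only 25 keys to try.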
Encryption became a focus of intense research during the two World Wars. Much effort was expended, both in developing codes that the enemy could not break, and in learning how to read the enemy's
encrypted mail. Mechanical devices were designed to aid in encryption. One of the most famous of these machines is the German Enigma machine, although Enigma was by no means the only mechanical
encryption machine of the era.
The advent of the computer has greatly altered the landscape for the use of encryption. No longer requiring complex machines or hours of manual labor, computers can encrypt and decrypt messages at
high speed and for trivial cost. The understanding of the mathematics underlying computers has also introduced new encryption algorithms. The work of Diffie and Hellman led to a way to exchange private keys using exponential arithmetic modulo primes, which relies on the fact that calculating the shared key from the public information is computationally infeasible. And the popular RSA algorithm (named after its inventors: R. Rivest, A. Shamir, and L. Adleman) relies on the fact that factoring large numbers is computationally infeasible: decrypting the data without the private key would require solving the factoring problem. The work of Diffie and Hellman, and the RSA algorithm, can theoretically be cracked, but cracking these algorithms would depend on solving mathematical problems that have yet to be solved. (As an aside, the RSA algorithm was also one of the first public-key cryptosystems, using a different key to decrypt than the key used to encrypt. This made it possible to publicly distribute one key without losing security.)
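The Diffie-Hellman exchange can be illustrated briefly. The parameters below are toy values chosen for demonstration (real exchanges use primes of 2048 bits or more):

```python
import secrets

# toy public parameters: prime modulus p and generator g
p = 4294967291  # the largest 32-bit prime; illustrative only
g = 5

# each party keeps a private exponent and publishes g^x mod p
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# both sides compute the same shared key from the other's public value;
# recovering it from the public values alone is believed infeasible
shared_a = pow(bob_public, alice_secret, p)
shared_b = pow(alice_public, bob_secret, p)
```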
But no encryption algorithm has an infinite life span. For example, DES (the Data Encryption Standard) was originally released in 1976. The government originally estimated its useful life at 10
years. DES has lasted much longer than the original estimated life span, but because of its relatively short key, DES is considered less than ideal. DES has since been replaced by AES (the Advanced
Encryption Standard) as the government standard, but DES remains in widespread use. Various improvements to DES exist, but these improvements cannot make DES secure forever. Eventually, DES will
generally be considered insecure.
A need remains for a way to enhance the security of existing encryption algorithms.
SUMMARY [0009]
The invention is a method and apparatus for performing key derivation from a master key. In one embodiment, a portion of the master key is hashed. Two numbers are derived from another portion of the
master key. A universal hash function, using the two numbers, is applied to the result of the hash, from which bits are selected as the derived key.
In another embodiment, a universal hash function, using an encoded counter, is applied to portions of the master key, and the results combined. The combined result is then hashed, from which bits are
selected as the derived key.
The foregoing and other features, objects, and advantages of the invention will become more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS [0012]
FIG. 1 shows a general implementation of a secure hash algorithm to generate derivative keys from a master key.
FIG. 2 shows the typical operation of the secure hash algorithm of FIG. 1.
FIG. 3 shows the typical operation of a universal hash algorithm.
FIG. 4 shows different ways to combine the secure hash algorithm and the universal hash algorithm of FIG. 1 to generate more secure derivative keys, according to an embodiment of the invention.
FIG. 5 shows a server and device capable of performing data transformations, key generation, key wrapping, and data encryption, according to an embodiment of the invention.
FIG. 6 shows a data security device operable to enhance security by using a data transformer in combination with a key wrapper, key deriver, or an encryption function, according to an embodiment of the invention.
FIGS. 7A-7B show a flowchart for using the data security device of FIG. 6, according to an embodiment of the invention.
FIG. 8 shows details of the data transformer of FIGS. 5 and 6, according to an embodiment of the invention.
FIG. 9 shows details of the data transformer of FIGS. 5 and 6, according to another embodiment of the invention.
FIGS. 10A-10C show a flowchart for using the data transformer of FIG. 8, according to an embodiment of the invention.
FIG. 11 shows a flowchart for using the data transformer of FIG. 9, according to an embodiment of the invention.
FIG. 12 shows details of the key derivation function of FIGS. 5 and 6, according to an embodiment of the invention.
FIG. 13 shows details of the key derivation function of FIGS. 5 and 6, according to another embodiment of the invention.
FIG. 14 shows a flowchart for using the key derivation function of FIG. 12, according to an embodiment of the invention.
FIG. 15 shows a flowchart for using the key derivation function of FIG. 13, according to an embodiment of the invention.
FIG. 16 shows a flowchart for using a key derivation function in the data security device of FIG. 5, according to an embodiment of the invention.
DETAILED DESCRIPTION [0028]
FIG. 1 shows a general implementation of a secure hash algorithm to generate derivative keys from a master key. The general concept is that master key 105 is input to secure hash algorithm 110. An
example of a secure hash algorithm is SHA-1 (Secure Hash Algorithm 1). The result is derived key 115-1. Secure hash algorithm 110 can be used multiple times. Depending on the implementation of secure
hash algorithm 110, master key 105 can be used repeatedly as input to secure hash algorithm 110 with or without modification. For example, if secure hash algorithm 110 uses a clock to control its
output, then master key 105 can be used without modification to generate derived keys 115-2 and 115-3. Otherwise, master key 105 can be combined with a counter in some way to modify master key 105 sufficiently to differentiate derived keys 115-2 and 115-3 from derived key 115-1. If secure hash algorithm 110 is properly implemented, then changing as little as a single bit in master key 105 can
result in derived keys 115-2 and 115-3 being completely unrelated to derived key 115-1.
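A minimal sketch of the derivation in FIG. 1, combining the master key with a counter before hashing. SHA-1 is used because the text names it as an example of a secure hash algorithm; the function name, counter width, and output length are illustrative choices, not taken from the patent:

```python
import hashlib

def derive_key(master_key: bytes, counter: int, key_len: int = 16) -> bytes:
    # combine the master key with a counter so that each derivation
    # differs, then apply the secure hash and keep key_len bytes
    data = master_key + counter.to_bytes(4, "big")
    return hashlib.sha1(data).digest()[:key_len]
```

Because the secure hash is unpredictable, the keys derived for counters 1, 2, 3 are unrelated to one another even though they all come from the same master key.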
FIG. 2 shows the typical operation of the secure hash algorithm of FIG. 1. As shown, a hash algorithm maps inputs to hash values. In FIG. 2, the hash values vary between 0 and n for some value of n. The output of a hash algorithm can be referred to as baskets; FIG. 2 shows baskets 205, 210, 215, and so on to basket 220.
Unlike a general hash algorithm, which can use any desired mapping to map inputs to baskets, a secure hash algorithm is unpredictable (sometimes also called collision-free): knowing that one input
produces a particular output does not give any information about how to find another input that would produce the same output. For example, knowing that an input of "5" maps to basket 215 does not
aid someone in finding any other input value that would also map to basket 215. In fact, there may be no other inputs that map to basket 215, for some particular hash algorithms. This is what makes
secure hash algorithm 110 "secure": that there is no easy way to find another input that maps to a desired output. The only way to find another input that maps to a particular output is by
experimenting with different inputs, in the hope of finding another value that maps to the desired output.
The weakness of a secure hash algorithm is that the baskets might not all be mapped to equally. In other words, there might be only one input that is mapped to basket 215, but 100 inputs that map to
basket 205. And as mentioned above, some baskets might have no inputs that map to them.
A universal hash algorithm provides the distribution feature that is missing from a secure hash algorithm. As shown in FIG. 3, universal hash algorithm 305 also maps inputs to baskets 310, 315, 320, up to 325. But unlike the secure hash algorithm of FIG. 2, universal hash algorithm 305 distributes its input evenly across the baskets. Thus, basket 310 is mapped to just as often as baskets 315, 320, 325, and so on.
The weakness of a universal hash algorithm is that it is typically easy to find other inputs that map to the same basket. For example, consider the universal hash algorithm that maps to 10 baskets,
numbered 0 through 9, by selecting the basket that corresponds to the last digit of the input. It is easy to see that this hash algorithm distributes its output evenly across all baskets. But it is
also easy to see how to find another input that maps to the same basket as a given input. For example, 1, 11, 21, 31, etc. all map to basket 315.
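Beyond the last-digit example, a standard universal family is the affine map h(x) = (a·x + b) mod p with p prime, which matches the product-sum-modulus arithmetic used by the embodiments. This sketch and its parameter values are illustrative, not the patent's:

```python
P = 101  # a small prime number of baskets, for illustration only

def universal_hash(x: int, a: int, b: int) -> int:
    # the affine family h(x) = (a*x + b) mod P: for a fixed nonzero a,
    # the map permutes 0..P-1, so every basket is hit equally often
    return (a * x + b) % P
```

As with the last-digit example, collisions remain easy to construct: x and x + P always land in the same basket, which is exactly the weakness described above.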
Thus, it should be apparent that both secure hash algorithms and universal hash algorithms have advantages and disadvantages. The best solution from the point of view of security would be to somehow
combine the advantages of both secure hash algorithms and universal hash algorithms.
FIG. 4 shows how the secure hash algorithm of FIGS. 1-2 and the universal hash algorithm of FIG. 3 can be combined to generate more secure derivative keys, according to an embodiment of the invention. In sequence 405, master key 105 is first passed to secure hash algorithm 110. The result of secure hash algorithm 110 is then used as input to universal hash algorithm 305, and from the result derived key 115-1 can be generated.
Whereas sequence 405 shows secure hash algorithm 110 being used before universal hash algorithm 305, sequence 410 reverses this ordering. Thus, master key 105 is used as input to universal hash
algorithm 305. The result of universal hash algorithm 305 is then used as input to secure hash algorithm 110, from which result derived key 115-1 can be generated.
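Sequence 405 might be sketched as follows, following the first embodiment of the summary: securely hash one portion of the master key, take the universal-hash parameters a and b from the other portion, and keep the low bits of (a·h + b) mod P. The even split, the Mersenne-prime modulus, and the output width are illustrative assumptions, not choices made by the patent:

```python
import hashlib

P = (1 << 61) - 1  # an illustrative prime modulus (2**61 - 1 is prime)

def derive_key_405(master_key: bytes, key_bits: int = 32) -> int:
    # step 1: securely hash the first portion of the master key
    half = len(master_key) // 2
    h = int.from_bytes(hashlib.sha1(master_key[:half]).digest(), "big")
    # step 2: derive the two universal-hash parameters from the rest
    quarter = half + (len(master_key) - half) // 2
    a = int.from_bytes(master_key[half:quarter], "big") % P
    b = int.from_bytes(master_key[quarter:], "big") % P
    # step 3: universal hash (a*h + b) mod P of the secure-hash result
    result = (a * h + b) % P
    # step 4: select the least significant bits as the derived key
    return result & ((1 << key_bits) - 1)
```

Sequence 410 would simply swap steps 1 and 3, applying the universal hash to the master key first and the secure hash second.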
Secure hash algorithm 110 and universal hash algorithm 305 can be implemented in any desired form. For example, secure hash algorithm 110 and universal hash algorithm 305 can be implemented in Read Only Memory (ROM), in firmware, or as software stored in a memory, to name a few examples where the implementations of secure hash algorithm 110 and universal hash algorithm 305
are executed by general purpose processors. Implementations can also include dedicated devices: for example, a processor can be specifically designed to implement secure hash algorithm 110 and
universal hash algorithm 305. Thus, as another example, a calculator can be designed to implement either secure hash algorithm 110 or universal hash algorithm 305. A person skilled in the art will
recognize other ways in which secure hash algorithm 110 and universal hash algorithm 305 can be implemented.
FIG. 5 shows a server and device capable of performing data transformations, key generation, key wrapping, and data encryption, according to an embodiment of the invention. In FIG. 5, server 505 is shown. Server 505 includes data transformer 510, key derivation function 515, key wrapping function 520, and encryption function 525. Data transformer 510 is responsible for
performing a data transformation. As will be discussed below with reference to FIGS. 8-9, 10A-10C, and 11, data transformations, while intrinsically not secure, increase the complexity of encoded
data by scrambling the data, thereby making cryptanalysis more difficult. For example, data transformation can mask patterns that exist in the encoded, but not transformed, data.
Key derivation function 515 is responsible for deriving keys for use in encrypting data. Although it is true that any key can be used to encrypt data, the more a particular key is used, the more
likely it is that the key can be determined with cryptanalysis. Thus, some systems rely on a master key to generate derived keys, which are then used to encrypt the data. As often as desired, a new
derived key can be generated; any data encrypted using only older derived keys will then provide no value in breaking messages encrypted with the new derived key. Some key derivation functions already exist; three new key derivation functions are described below with reference to FIGS. 12-13 and 14-16.
Key wrapping function 520 is responsible for wrapping a key for transmission. Key wrapping is typically accomplished by encrypting the key for transmission. As an example, RSA can be used to encrypt
(that is, wrap) the key. The key, now sufficiently secured, can be transmitted, even over insecure connections, to other machines, where the key can be unwrapped (decrypted) and used for data encryption.
Often, the wrapped key is a key for use with a private key, or symmetric, cryptosystem, which is wrapped using a public key, or asymmetric, cryptosystem. A private key cryptosystem is one where the
same key is used to encrypt and decrypt, as opposed to a public key cryptosystem, which uses different keys to encrypt and decrypt. For example, DES and AES are private key cryptosystems; RSA is a
public key cryptosystem. While public key cryptosystems make it possible to safely distribute a key (there is no worry that the key can be intercepted and used by a third party to decrypt private
messages), public key cryptosystems often are slower to implement and result in longer messages than private key cryptosystems. Obviously, to wrap a key using a public key cryptosystem, server 505
needs to know the public key of the device to which the wrapped key is to be communicated. But a person skilled in the art will recognize that any encryption algorithm can be used to wrap the key,
and that the key to be wrapped can be for any kind of cryptosystem.
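The key-wrapping idea described above (a symmetric key encrypted under a public key) can be illustrated with a minimal sketch. This uses textbook RSA with toy parameters chosen purely for illustration; real systems use much larger keys and padded RSA or a dedicated key-wrap mode:

```python
import math

# Toy RSA parameters (illustrative only, far too small for real use).
p, q = 61, 53
n = p * q                                           # public modulus
e = 17                                              # public exponent
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # Carmichael lambda(n)
d = pow(e, -1, lam)                                 # private exponent

symmetric_key = 42                  # the key to be wrapped, as a small integer

wrapped = pow(symmetric_key, e, n)  # wrap: encrypt under the public key (e, n)
unwrapped = pow(wrapped, d, n)      # unwrap: decrypt with the private key (d, n)

assert unwrapped == symmetric_key   # round trip restores the key
```

The wrapped value can cross an insecure channel; only the holder of the private exponent can recover the symmetric key.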
Encryption function 525 is used to encrypt data. Typically, the data is encrypted using the key that is wrapped using key wrapping function 520, although a person skilled in the art will recognize
that any key can be used to encrypt the data, that the data can be any data that is desired to be encrypted, and that any desired encryption function can be used.
[0042] FIG. 5 also shows device 530, capable of performing data transformations, key wrapping, and data encryption, according to an embodiment of the invention. Although device 530 is drawn to resemble a personal digital assistant (PDA), a person skilled in the art will recognize that device 530, as well as server 505, can be any device using security algorithms. Thus, for example, device 530 might be a computer (e.g., a desktop or notebook computer), exchanging files with server 505 (which might be an ordinary computer, and not a server per se). Or, device 530 might be a digital media device:
e.g., to present digital content to a user, with server 505 providing the content to device 530. Alternatively, device 530 might receive the content from any legitimate source, with server 505
specifying the rights granted to device 530 with respect to the content. Or, device 530 might be software to implement some functionality stored on some medium used with a general-purpose machine,
such as a computer. In this variation, what makes device 530 part of the system shown in FIG. 5 is less dependent on the hardware of device 530, and more dependent on the software being executed by device 530. A person skilled in the art will recognize that the software can implement any
desired functionality, and that the software can be stored on any appropriate medium, such as a floppy disk, any variety of compact disc (CD) or digital video disc (DVD, sometimes also called a
digital versatile disc), a tape medium, or a Universal Serial Bus (USB) key, to name a few of the more popular possibilities. Or, device 530 might be a cellular telephone and server 505 a base
station, where the cellular telephone and the base station are communicating in an encrypted manner. A person skilled in the art will recognize other variations for device 530 and server 505, and
will also recognize that the manner in which server 505 and device 530 communicate can be any manner of communications channel: e.g., wireline, wireless, or any other form of communication.
Device 530 is similar to server 505 of FIG. 5, in that it includes data transformer 510, key wrapping function 520, and encryption function 525. Note that unlike server 505 of FIG. 5, device 530 does not include key derivation function 515. This is because key derivation is generally only needed on server 505. Provided there is a way to communicate with the other device, only
one device needs to generate the derivative key. Of course, if there is no way to securely communicate the derivative key but both devices can accurately generate the same derivative key, then device
530 can include key derivation function 515 (although then device 530 might not need key wrapping function 520).
FIG. 6 shows a data security device operable to enhance security by using a data transformer in combination with a key wrapper, key deriver, or an encryption function, according to an embodiment of
the invention. Data security device 605 can be part of either server 505 or device 530 of FIG. 5, with modification as needed to add or remove components. In data security device 605, input port 610 is responsible for receiving data. The data can be a master key from which to generate a
derivative key, a key to be wrapped, or data to be encrypted, among other possibilities. Divider 615 is responsible for dividing the data into blocks. As discussed below with reference to FIGS. 12-13
and 14-16, sometimes the functions apply data transformations to multiple portions of the data; divider 615 breaks the data up into blocks of the desired sizes so that data transformer 510 can be
applied to each block. Data transformer 510 is responsible for performing the data transformation, which is discussed further below with reference to FIGS. 12-13 and 14-16. Combiner 620 is
responsible for combining the blocks, after their data transformation, back together for application of the appropriate security function. Various security functions that can be used include key
derivation function 515, key wrapping function 520, or encryption function 525. Finally, output port 625 outputs the data, after transformation and/or application of the security function.
It is worth noting that, although typically divider 615 breaks the data into blocks that conform to the size of the data transformation algorithm, this is not required. Thus, divider 615 might break
the data up into blocks that are smaller or larger than the expected input to data transformer 510. If divider 615 breaks the data up into blocks that are smaller than expected by data transformer
510, the data can be padded to make them large enough; if divider 615 breaks the data up into blocks larger than expected by data transformer 510, data transformer 510 can apply the data
transformation to only as many bits of the data as it needs. For example, if data transformer 510 is implemented as described in the embodiment of FIG. 10, data transformer 510 operates on 8 byte
inputs. If data transformer 510 receives more than 8 bytes, data transformer 510 can apply to only 8 bytes of the input. These can be any 8 bytes within the data: e.g., the first 8 bytes, the last 8
bytes, or any other desired combination.
It is also worth noting that any data can be transformed. Thus, the data to be transformed can be a master key, where the transformed master key is to be used to generate derivative keys. Or, the
data can be a derivative key that is to be wrapped before transmission. Or, the data can be data that is to be encrypted using an implementation of an encryption algorithm. A person skilled in the
art will recognize other types of data that can be transformed.
FIGS. 7A-7B show a flowchart for using the data security device of FIG. 6, according to an embodiment of the invention. In FIG. 7A, at block 705, the data is divided into blocks. At block 710, each
of the blocks can be transformed using a data transformation. Each of the blocks can be independently data transformed or not, as desired; in other words, some blocks might be transformed, and others
not. At block 715, the blocks can be reassembled. As shown by dashed line 720, blocks 705-715 are optional, and can be skipped if not needed.
In FIG. 7B, the data security device can be used in different ways. At block 725, a key wrapping algorithm can be applied to the data. At block 730, a key derivation algorithm can be applied to the
data. And at block 735, a data encryption algorithm can be applied to the data.
[0049] FIG. 8 shows details of the data transformer of FIGS. 5 and 6, according to an embodiment of the invention. In the embodiment of data transformer 510 shown in FIG. 8, data transformer 510 operates by permuting bit groups using permutation functions. Data transformer 510 includes input port 805 to receive data to be transformed, divider 810, padder 815, permuter
820, and output port 825 to output the transformed data. Divider 810 is responsible for dividing the input data into the bit groups for application of the permutation functions. In fact, divider 810
starts by dividing the data into two segments. The first segment includes bits that are used to control the application of the permutation functions on the bit groups, which are partitioned from the
second segment. In one embodiment, the data includes 64 bits; the first segment includes 8 bits, and the second segment includes 8 7-bit groups. But a person skilled in the art will recognize that
the data can be of any length, and the data can be divided into groups of any desired lengths, even with different groups being of different lengths. Finally, the first segment, which includes the bits that control the application of the permutation functions, can be omitted, if the individual groups are always permuted.
If data transformer 510 supports receiving data of unpredictable sizes (instead of assuming that the data is always of a fixed size), then divider 810 might not be able to divide the data into bit
groups properly. Padder 815 can be used to pad the data with additional bits, so that the data is of appropriate length to be properly divided.
In one embodiment, the application of the permutation functions is controlled by the bits of the first segment: a bit group is permuted using a particular permutation function if a corresponding bit
in the first segment is set. For example, if the corresponding bit has the value of 1, then the corresponding group is permuted using the appropriate permutation function; if the corresponding bit
has the value 0, then the corresponding group is not permuted. Alternatively, if the corresponding bit has the value 0, the corresponding bit group can be viewed as having been permuted using the
identity permutation function. The permutation functions can be indexed as well; if the number of permutation functions matches the number of bit groups in the second segment (and therefore also
matches the number of bits in the first segment), then a single index can identify three corresponding elements: a bit in the first segment, a bit group in the second segment, and a permutation
function to apply to the bit group.
Permuter 820 is responsible for controlling the permutation of the bit groups of the second segment. In one embodiment, permuter 820 implements permutations according to the functions shown in Table
1 below, although a person skilled in the art will recognize that any permutation functions can be used.
TABLE 1
Function    Permutation (of a b c d e f g)
P.sub.1     f a e b d g c
P.sub.2     g f d a b c e
P.sub.3     c g b f a e d
P.sub.4     e c a g f d b
P.sub.5     d e f c g b a
P.sub.6     b d g e c a f
P.sub.7     e c a g f d b
P.sub.8     c g b f a e d
There are some interesting features of the permutations shown in Table 1. First, each of the permutation functions is a power of permutation function P.sub.1. Thus, P.sub.2=P.sub.1 ∘ P.sub.1, P.sub.3=P.sub.1 ∘ P.sub.1 ∘ P.sub.1 (=P.sub.2 ∘ P.sub.1), etc. Because P.sub.1.sup.7 is the identity permutation (so composing it with P.sub.1 would result in P.sub.1 again), P.sub.7 and P.sub.8 are chosen to repeat earlier powers of P.sub.1. This means that data transformer 510 only needs to know the implementation of one permutation function; the rest of the permutation functions can be derived from the base permutation function.
Second, the permutations of Table 1 do not introduce any structures in the data that are similar to those found in encryption functions such as RSA, DES, AES, SHA-1, etc.
Because permutation functions are invertible, the data transformation that results from applying the permutation functions of Table 1 is easily reversible. Table 2 shows the permutation functions
that are the inverses of the permutation functions of Table 1.
TABLE 2
Function        Permutation (of a b c d e f g)
P.sub.1.sup.-1  b d g e c a f
P.sub.2.sup.-1  d e f c g b a
P.sub.3.sup.-1  e c a g f d b
P.sub.4.sup.-1  c g b f a e d
P.sub.5.sup.-1  g f d a b c e
P.sub.6.sup.-1  f a e b d g c
P.sub.7.sup.-1  c g b f a e d
P.sub.8.sup.-1  e c a g f d b
Thus, to reverse the data transformation applying the permutation functions of Table 1, all that is needed is to apply a second data transformation, using the permutation functions of Table 2. To make
this reverse transformation possible, output port 825 outputs the bits of the first segment directly, along with the permuted groups; otherwise, a receiver of the transformed data would not know
which bit groups have been permuted.
As with the permutation functions of Table 1, all of the permutation functions in Table 2 can be derived from a single base function: in this case, P.sub.1.sup.-1. Thus, P.sub.2.sup.-1=P.sub.1.sup.-1 ∘ P.sub.1.sup.-1, P.sub.3.sup.-1=P.sub.1.sup.-1 ∘ P.sub.1.sup.-1 ∘ P.sub.1.sup.-1 (=P.sub.2.sup.-1 ∘ P.sub.1.sup.-1), etc.
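The structural claims above can be confirmed with a short sketch: the functions of Table 1 are successive powers of the base permutation P.sub.1 = (f a e b d g c), P.sub.1 has order 7 (which is why P.sub.7 and P.sub.8 repeat earlier powers), and composing each function with its inverse from Table 2 yields the identity. Representing permutations as rearrangements of the string "abcdefg" is just a convenience here:

```python
def apply(perm, data):
    # perm is given as the image of "abcdefg"; apply it to an arbitrary string.
    return "".join(data["abcdefg".index(ch)] for ch in perm)

P1 = "faebdgc"                       # base permutation from Table 1

powers = [P1]                        # P1, P1^2, P1^3, ...
for _ in range(6):
    powers.append(apply(P1, powers[-1]))

# P1 has order 7: its 7th power is the identity, which is why Table 1's
# P7 and P8 must repeat earlier powers (P4 and P3).
assert powers[6] == "abcdefg"

def inverse(perm):
    # Build the permutation that undoes perm.
    out = [""] * 7
    for i, ch in enumerate(perm):
        out["abcdefg".index(ch)] = "abcdefg"[i]
    return "".join(out)

# Each inverse composed with its original is the identity (Table 2 vs. Table 1).
for perm in powers:
    assert apply(inverse(perm), perm) == "abcdefg"
```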
[0056] FIG. 9 shows details of the data transformer of FIGS. 5 and 6, according to another embodiment of the invention. In FIG. 9, input port 905 and output port 910 operate similarly as in data transformer 510 of FIG. 8. But rather than permuting the data using permutation functions, data transformer 510 of FIG. 9 operates by computing an exponential permutation on the data: this calculation is done by calculator 915. In one embodiment, data transformer 510 operates on data input that is 3 bytes long. The
first segment is used to calculate a power, to which the last two bytes are raised. The result is then taken modulo a modulus. For example, one embodiment computes the data transformation as Y=
((B+1).sup.(2A+1) mod 65537)-1, where A is the first byte of the data input and B is the last two bytes of the data input. The transformed data then includes A and Y, and is 3 bytes long. But a
person skilled in the art will recognize that the input can be of different lengths, and that different exponential permutation functions can be applied.
The above-shown exponential permutation function has some advantages. First, abstract algebra shows that where the exponent and the modulus (minus one) are relatively prime, the function cycles through all possible values between 1 and the modulus, which means that the exponential permutation function is a permutation. Selecting 65537 as the prime modulus means that one less than the modulus is 65536, which is a power of 2. Thus, regardless of the value of A, (2A+1) is odd, and is therefore relatively prime to 65536. Second, if A is 0, then the data output is unchanged. Finally, as with the permutation data transformer of FIG. 8, data transformer 510 of FIG. 9 uses a structure not found in cryptographic algorithms such as RSA, DES, AES, SHA-1, etc.
If data transformer 510 supports receiving data of unpredictable sizes (instead of assuming that the data is always of a fixed size), then divider 920 might not be able to divide the data into segments of appropriate size. Padder 925, as with padder 815 in the data transformer of FIG. 8, can be used to pad the data with additional bits, so that the data is of appropriate length to be properly divided.
As with the permutation data transformer of FIG. 8, data transformer 510 of FIG. 9 is reversible. To make it possible to reverse the data transformation, output port 910 outputs A unchanged along with Y. Then, to reverse the exponential permutation, calculator 915 computes the inverse of 2A+1 modulo 65536 (that is, 65537-1). If this inverse is called e, then the reverse exponential permutation is ((Y+1).sup.e mod 65537)-1. The result of this calculation restores the original bytes B. Thus, the exponential permutation can be reversed simply by applying a second data transformation, changing the exponent of the data transformer.
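The forward and reverse exponential permutations are easy to state in code. A minimal sketch, with A the first byte and B the last two bytes of the 3-byte input, as described above:

```python
MOD = 65537        # prime modulus; MOD - 1 = 65536 is a power of two

def transform(A, B):
    # Forward: Y = ((B+1)^(2A+1) mod 65537) - 1
    return pow(B + 1, 2 * A + 1, MOD) - 1

def untransform(A, Y):
    # Reverse: raise to e, the inverse of 2A+1 modulo 65536.
    # 2A+1 is odd, hence coprime to 65536, so the inverse always exists.
    e = pow(2 * A + 1, -1, MOD - 1)
    return pow(Y + 1, e, MOD) - 1

A, B = 0x5C, 0xBEEF
assert untransform(A, transform(A, B)) == B   # round trip restores B
assert transform(0, 12345) == 12345           # A = 0 leaves the data unchanged
```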
Now that the apparatuses of FIGS. 8 and 9 have been presented, the methods of their use can be understood. FIGS. 10A-10C show a flowchart for using the data transformer of FIG. 8, according to an embodiment of the invention. In FIG. 10A, at block 1005, the data is received. At block 1010, the data is divided into two segments (assuming that the permutation of bit groups is controlled by bits in the first segment). At block 1015, the data transformer checks to see if the second data segment can be divided evenly into groups. If not, then at block 1020 the data is padded
controlled by bits in the first segment). At block 1015, the data transformer checks to see if the second data segment can be divided evenly into groups. If not, then at block 1020 the data is padded
to support dividing the second segment into evenly-sized groups. (This assumes that the data transformer attempts to divide the data input into evenly-sized groups; if the data transformer does not
need to divide the input data into evenly-sized groups, then blocks 1015 and 1020 can be omitted.)
At block 1025 (FIG. 10B), the second segment is divided into bit groups. Although block 1025 describes the second segment as being divided into groups of equal size, as described above, the second segment can be divided into groups of unequal size, if the data transformer supports this. At block 1030, each group is associated with a bit in the first segment. At block 1035, a base permutation function
is defined. At block 1040, other permutation functions are defined as powers of the base permutation function. (Again, there is no requirement that the permutations be powers of a base permutation
function; each of the permutation functions can be unrelated to the others, in which case blocks 1035 and 1040 can be modified/omitted.) At block 1045, the permutation functions are indexed.
At block 1050 (FIG. 10C), the data transformer checks to see if any bits in the first segment (which controls the application of the permutation functions to the bit groups in the second segment)
have yet to be examined. If there are unexamined bits, then at block 1055 the data transformer examines the bit to see if it is set. If the bit is set, then at block 1060 the permutation function
indexed by the bit is identified, and at block 1065 the identified permutation is applied to the associated bit group. Control then returns to block 1050 to see if there are any further
unexamined bits in the first segment. After all bits in the first segment have been examined, then at block 1070 the data transformer constructs the data transformation from the first segment and the
permuted bit groups.
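Putting FIGS. 10A-10C together, the whole transformer fits in a few lines. In this sketch the 64-bit input is split into an 8-bit control segment and 8 seven-bit groups; the exact bit positions and the pairing of control bits with permutation indices are illustrative choices, since the patent leaves them open:

```python
BASE = [5, 0, 4, 1, 3, 6, 2]   # P1 = (f a e b d g c) as source positions

def power(perm, n):
    # n-fold composition of perm with itself.
    out = list(range(7))
    for _ in range(n):
        out = [out[i] for i in perm]
    return out

# Permutations indexed 1..8; per Table 1, P7 and P8 repeat P4 and P3.
PERMS = [power(BASE, n) for n in (1, 2, 3, 4, 5, 6, 4, 3)]

def transform(x):
    ctrl = x >> 56                                      # 8 control bits
    groups = [(x >> (7 * i)) & 0x7F for i in range(8)]  # 8 seven-bit groups
    for i in range(8):
        if (ctrl >> i) & 1:                             # permute if bit i set
            bits = [(groups[i] >> b) & 1 for b in range(7)]
            groups[i] = sum(bits[PERMS[i][b]] << b for b in range(7))
    y = ctrl << 56                   # control bits pass through unchanged,
    for i, g in enumerate(groups):   # so the transformation is reversible
        y |= g << (7 * i)
    return y

assert transform(0x00123456789ABCDE) == 0x00123456789ABCDE  # no bits set
```

Because the control segment is output unchanged, a receiver can invert the transformation by applying the inverse permutations (Table 2) to the same groups.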
FIG. 11 shows a flowchart for using the data transformer of FIG. 9, according to an embodiment of the invention. At block 1105, the data transformer receives the data. At block 1110, the data transformer divides the data into two segments. At block 1115, the first
segment is used to construct a power that is relatively prime to the selected modulus. At block 1120, the second segment is raised to the computed power. At block 1125, the remainder is computed by
taking the result modulo the modulus. Finally, at block 1130, the data transform is constructed from the first segment and the remainder.
As discussed above with reference to FIG. 5, key derivation functions exist in the prior art. But the existing key derivation functions do not provide the advantages of both the secure hash function and the universal hash function, as described above with reference to FIG. 4. FIG. 12 shows details of one key derivation function that combines the advantages of a secure hash function and a universal hash function. In FIG. 12, key derivation function 515 includes input port
1205 and output port 1210, which are used to provide the inputs to the key derivation function and the output derived key, respectively. Key derivation function 515 also includes divider 1215,
combiner 1220, hash 1225, determiner 1230, calculator 1235, and bit selector 1240.
Divider 1215 divides the master key into two parts. Combiner 1220 combines the first part of the master key with a counter, which can be part of the input data. One way to combine the master key with
the counter is by concatenating the first part of the master key with the counter, which can be of any size (e.g., 4 bytes). This concatenation can be performed in either order: that is, either the
first part of the master key or the counter can be the front of the combination. The result of this combination is then hashed using hash function 1225, which can be a secure hash function. (In this
embodiment, hash function 1225 takes the place of secure hash algorithm 110 in sequence 405 of FIG. 4.)
Determiner 1230 is used to determine two numbers from the second part of the master key. In one embodiment, these two numbers, a and b, are determined as the first and last 32 bytes of the second
part of the master key, modulo a prime number p. Selecting a and b in this manner calls for the master key to be of sufficient length for the second part of the master key to be 64 bytes long. But a
person skilled in the art will recognize that the master key does not necessarily have to be this long. For example, if computing a and b modulo p sufficiently alters the bits of a and b, a and b
might be selected in such a way that their original bits overlap from within the second part of the master key.
A particular choice for the prime number can be p
, although a person skilled in the art will recognize that other primes can be selected instead. Calculator 1235 can then implement the universal hash function of ax+b mod p, where x is the result of
hash 1225. (This universal hash function takes the place of universal hash algorithm 305 in sequence 405 of
FIG. 4
.) Finally, bit selector 1240 selects the bits from the result of the universal hash function for the derived key, which can then be output. For example, bit selector 1240 can select the least
significant bits of the result of the universal hash function as the derived key.
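A minimal sketch of this derivation follows, with illustrative sizes (64-byte master key, 4-byte counter, 128-bit derived key), SHA-256 as the secure hash, and an illustrative prime, since the patent leaves these choices open:

```python
import hashlib

P = 2**127 - 1                    # illustrative prime modulus

def derive(master_key: bytes, counter: int, key_len: int = 16) -> bytes:
    half = len(master_key) // 2
    first, second = master_key[:half], master_key[half:]

    # Secure-hash stage: hash the first part concatenated with a 4-byte counter.
    x = int.from_bytes(
        hashlib.sha256(first + counter.to_bytes(4, "big")).digest(), "big")

    # Universal-hash stage: a and b come from the ends of the second part,
    # reduced modulo the prime; then compute (a*x + b) mod p.
    a = int.from_bytes(second[: half // 2], "big") % P
    b = int.from_bytes(second[-(half // 2):], "big") % P
    y = (a * x + b) % P

    # Bit selection: keep the least-significant key_len bytes.
    return (y & ((1 << (8 * key_len)) - 1)).to_bytes(key_len, "big")

assert derive(b"\x07" * 64, 1) != derive(b"\x07" * 64, 2)
```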
[0068] FIG. 13 shows details of the key derivation function of FIGS. 5 and 6, according to another embodiment of the invention. In contrast to the embodiment of the invention shown in FIG. 12, which implements a key derivation function according to sequence 405 of FIG. 4, key derivation function 515 of FIG. 13 does not apply the universal hash algorithm after the secure hash algorithm. Instead, the embodiment of the invention shown in FIG. 13 applies a linear mapping to the input to the secure hash algorithm.
As with key derivation function 515 of FIG. 12, key derivation function 515 of FIG. 13 includes input port 1305 and output port 1310, which receive the master key as input and output the derived key, respectively. Key derivation function 515 of FIG. 13 also includes divider 1315, encoder 1320, combiner 1325, hash 1330, and bit selector 1335.
Divider 1315, as with divider 1215 of FIG. 12, divides the master key into two parts. Encoder 1320 then encodes a counter. Encoder 1320 can operate in any manner desired. For example, encoder 1320
can operate by repeating the counter to extend it to the length of the first part of the master key. So, for example, if the first part of the master key is 64 bytes long and the counter is
represented using 4 bytes, encoder 1320 can repeat those 4 bytes 16 times, to extend the counter to a 64 byte length. Combiner 1325 can then combine the encoded counter with each part of the master
key separately. For example, combiner 1325 can combine the parts of the master key and the encoded counter at the bit level. One embodiment uses an XOR binary function to combine the parts of the
master key and the encoded counter. But a person skilled in the art will recognize that combiner 1325 can use any bitwise binary function, or indeed any function, to combine the parts of the master
key and the encoded counter. Combiner 1325 can then recombine the two parts of the master key (after the combination with the encoded counter) back together: for example, the two parts can be
concatenated together (but a person skilled in the art will recognize that combiner 1325 can recombine the two parts of the master key in other ways). Combiner 1325 can also concatenate the
recombined parts of the master key with the encoded counter one more time.
Hash 1330 takes the output of combiner 1325 and hashes it. Hash 1330 can be a secure hash function. Bit selector 1335, as with bit selector 1240 in FIG. 12, can then select bits from the result of
hash 1330 as the derived key.
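The FIG. 13 flow can be sketched similarly. Again the sizes, the use of SHA-256, and the choice of XOR as the bitwise combining function are illustrative choices consistent with the embodiment described above:

```python
import hashlib

def derive(master_key: bytes, counter: int, key_len: int = 16) -> bytes:
    half = len(master_key) // 2
    parts = master_key[:half], master_key[half:]

    # Encode the 4-byte counter by repeating it to the length of each part.
    enc = (counter.to_bytes(4, "big") * (half // 4 + 1))[:half]

    # XOR the encoded counter into each part separately, then recombine
    # the parts by concatenation.
    mixed = b"".join(bytes(a ^ b for a, b in zip(part, enc)) for part in parts)

    # Concatenate the encoded counter once more, hash, and select bits.
    digest = hashlib.sha256(mixed + enc).digest()
    return digest[:key_len]

assert derive(b"\x42" * 64, 1) != derive(b"\x42" * 64, 2)
```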
Now that the apparatuses of FIGS. 12 and 13 have been presented, the methods of their use can be understood. FIG. 14 shows a flowchart for using the key derivation function of FIG. 12, according to an embodiment of the invention. At block 1405, the master key is divided into segments. At block 1410, the first
segment is combined with an encoded counter. As described above with reference to FIG. 12, this combination can be the concatenation of the first segment with the encoded counter. At block 1415, the
combined first segment is hashed.
At block 1420, two numbers are determined from the second segment. As discussed above with reference to FIG. 12, these two numbers can be determined relative to a modulus. At block 1425, a universal
hash function is defined using the two determined numbers and the modulus. At block 1430, the result of the hash is applied to the universal hash function. At block 1435, bits are selected from the
result of the universal hash as the derivative key.
FIG. 15 shows a flowchart for using the key derivation function of FIG. 13, according to an embodiment of the invention. At block 1505, the master key is divided into segments. At block 1510, each of the segments is combined with an encoded counter. As described above with reference to FIG. 13, this can be done by applying an XOR bit function to each of the segments individually with the encoded counter. At block 1515, the combined blocks are then recombined, and (as discussed above with reference to FIG. 13) can also be combined again with the encoded counter. At block 1520, this modified master key is then hashed, and at block 1525, bits are selected from the result of the hash as the derivative key.
The key derivation functions shown in FIGS. 12-15 are only two examples. Other key derivation functions can also be used that combine the advantages of a secure hash algorithm and a universal hash algorithm. FIG. 16 shows a flowchart for yet another key derivation function in the data security device of FIG. 5, according to an embodiment of the invention. At block 1605, the master key is divided into segments. At block 1610, the segments are transformed using data transformation. Because the segments will
typically be larger than the data transformer can use, only a subset of each segment is used: e.g., only the first bytes needed by the data transformation. At block 1615, the transformed segments
are combined, and combined with the encoded counter: e.g., the segments and the encoded counter can be concatenated together. At block 1620, the result is hashed, and at block 1625, bits are selected
from the result of the hash as the derivative key.
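As a sketch of this flow, with the 3-byte exponential permutation of FIG. 9 standing in as the data transformation, applied (illustratively) to only the first 3 bytes of each segment:

```python
import hashlib

MOD = 65537

def exp_transform(seg3: bytes) -> bytes:
    # The 3-byte exponential permutation: Y = ((B+1)^(2A+1) mod 65537) - 1.
    A, B = seg3[0], int.from_bytes(seg3[1:3], "big")
    Y = pow(B + 1, 2 * A + 1, MOD) - 1
    return bytes([A]) + Y.to_bytes(2, "big")

def derive(master_key: bytes, counter: int, key_len: int = 16) -> bytes:
    half = len(master_key) // 2
    segments = master_key[:half], master_key[half:]

    # Transform only the first 3 bytes of each segment (the transformer's
    # input size), leaving the remainder of the segment unchanged.
    transformed = b"".join(exp_transform(s[:3]) + s[3:] for s in segments)

    # Concatenate with the encoded counter, hash, and select bits.
    digest = hashlib.sha256(transformed + counter.to_bytes(4, "big")).digest()
    return digest[:key_len]

assert derive(b"\x11" * 64, 1) != derive(b"\x11" * 64, 2)
```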
While the apparatuses of FIGS. 12-13, and the flowcharts of FIGS. 14-16 show the generation of a single derivative key from a master key, it is worth noting that embodiments of the invention can
easily be adapted to generate repeated derivative keys. These additional derivative keys can be generated in numerous ways. For example, the flowcharts of FIGS. 14-16 all include counters. For each additional derivative key desired, the counter can be incremented. Thus, to derive the first key, the counter can use the value 1; to derive the second key, the counter can use the value 2; and so on.
In another variation, rather than using bit selector 1240 of FIG. 12 or bit selector 1335 of FIG. 13
to select bits for the derivative key, enough results can be generated at one time to select bits from the combined results for all the derivative keys. For example, assume that u keys are desired,
each k bits long, and further assume that the results of the apparatuses of FIGS. 12-13 and/or the flowcharts of FIGS. 14-16 produce l bits before bit selection. If the key derivation function is
applied m times, so that m*l ≧ u*k, then the u derivative keys can all be selected at the same time from the m*l resulting bits. For example, the m*l resulting bits might all be concatenated
together; the first key might then be selected as the first k bits, the second key might be selected as the second k bits, and so on until all u keys have been selected.
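The pooled selection can be sketched as follows. The underlying derivation here is a generic hash(master || counter) stand-in rather than one of the functions above, since any of them could supply the l-bit results:

```python
import hashlib

def pooled_keys(master_key: bytes, u: int, k_bits: int) -> list:
    # Run a stand-in KDF m times so that m*l >= u*k, pool the output bits,
    # and slice the pool into u keys of k bits each.
    l_bits = 256                          # one SHA-256 output per run
    m = -(-(u * k_bits) // l_bits)        # ceiling of u*k / l

    pool = b"".join(
        hashlib.sha256(master_key + c.to_bytes(4, "big")).digest()
        for c in range(1, m + 1))

    bits = int.from_bytes(pool, "big")
    total = m * l_bits
    # Key i is bits [i*k, (i+1)*k) counted from the front of the pool.
    return [(bits >> (total - (i + 1) * k_bits)) & ((1 << k_bits) - 1)
            for i in range(u)]

keys = pooled_keys(b"master", u=5, k_bits=100)
assert len(keys) == 5
```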
The following discussion is intended to provide a brief, general description of a suitable machine in which certain aspects of the invention may be implemented. Typically, the machine includes a
system bus to which is attached processors, memory, e.g., random access memory (RAM), read-only memory (ROM), or other state preserving medium, storage devices, a video interface, and input/output
interface ports. The machine may be controlled, at least in part, by input from conventional input devices, such as keyboards, mice, etc., as well as by directives received from another machine,
interaction with a virtual reality (VR) environment, biometric feedback, or other input signal. As used herein, the term "machine" is intended to broadly encompass a single machine, or a system of
communicatively coupled machines or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices,
telephones, tablets, etc., as well as transportation devices, such as private or public transportation, e.g., automobiles, trains, cabs, etc.
The machine may include embedded controllers, such as programmable or non-programmable logic devices or arrays, Application Specific Integrated Circuits, embedded computers, smart cards, and the
like. The machine may utilize one or more connections to one or more remote machines, such as through a network interface, modem, or other communicative coupling. Machines may be interconnected by
way of a physical and/or logical network, such as an intranet, the Internet, local area networks, wide area networks, etc. One skilled in the art will appreciate that network communication may
utilize various wired and/or wireless short range or long range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE)
802.11, Bluetooth, optical, infrared, cable, laser, etc.
The invention may be described by reference to or in conjunction with associated data including functions, procedures, data structures, application programs, etc. which when accessed by a machine
results in the machine performing tasks or defining abstract data types or low-level hardware contexts. Associated data may be stored in, for example, the volatile and/or non-volatile memory, e.g.,
RAM, ROM, etc., or in other storage devices and their associated storage media, including hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks,
biological storage, etc. Associated data may be delivered over transmission environments, including the physical and/or logical network, in the form of packets, serial data, parallel data, propagated
signals, etc., and may be used in a compressed or encrypted format. Associated data may be used in a distributed environment, and stored locally and/or remotely for machine access.
Having described and illustrated the principles of the invention with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and
detail without departing from such principles. And, though the foregoing discussion has focused on particular embodiments, other configurations are contemplated. In particular, even though
expressions such as "in one embodiment" or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular
embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments.
Consequently, in view of the wide variety of permutations to the embodiments described herein, this detailed description and accompanying material is intended to be illustrative only, and should not
be taken as limiting the scope of the invention. What is claimed as the invention, therefore, is all such modifications as may come within the scope and spirit of the following claims and equivalents.
Patent applications by Ivan Bjerre Damgaard, Aabyhoej DK
Patent applications by Torben Pryds Pedersen, Aabyhoej DK
Patent applications by Vincent Rijmen, Graz AT
Patent applications in class Using master key (e.g., key-encrypting-key)
Patent applications in all subclasses Using master key (e.g., key-encrypting-key)
North Carolina Standard Course of Study: Grade 3
North Carolina Standard Course of Study
Grade 3
Number and Operations, Measurement, Geometry, Data Analysis and Probability, Algebra
• COMPETENCY GOAL 1: The learner will model, identify, and compute with whole numbers through 9,999.
• COMPETENCY GOAL 2: The learner will recognize and use standard units of metric and customary measurement.
• COMPETENCY GOAL 3: The learner will recognize and use basic geometric properties of two- and three-dimensional figures.
• COMPETENCY GOAL 4: The learner will understand and use data and simple probability concepts.
• COMPETENCY GOAL 5: The learner will recognize, determine, and represent patterns and simple mathematical relationships.
©1994-2014 Shodor
Potrero Math Tutor
Find a Potrero Math Tutor
...I look forward to working with you! I have formally taught 2 years of 10th grade math. I lived in France for 5 years, minored in French in college, and lived in West Africa for 2 years, teaching math in French at a local high school.
14 Subjects: including trigonometry, probability, algebra 1, algebra 2
...Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps
from past courses tend to reappear as roadblocks down the line. By identifying and correcting these problems, I help students become effective independent learners for both current and future
14 Subjects: including geometry, linear algebra, probability, algebra 1
...I completed my first undergraduate studies majoring in physics in 1972 from the College of Science, the University of Mosul in Iraq. I became a professional in teaching meteorology and
climatology at Iraqi Air Force and Defense College and the departments affiliated with the college for the peri...
3 Subjects: including algebra 1, Arabic, drawing
...I also taught all subjects at a primary school in Papua New Guinea for 3 months after my freshman year at Stanford, and I am currently a substitute at a Child Development Center. I have been
playing and writing music since I was ten years old. I am currently a professional musician in an Indie/Folk/Island/Jazz band called Ed Ghost Tucker, and I am the predominant songwriter for the
38 Subjects: including algebra 1, biology, vocabulary, grammar
...I have the patience and knowledge of the language to work with students of all levels on their pronunciation, listening and comprehension of the language and well as vocabulary, grammar and
even common idioms. I welcome the opportunity to work with you. Te prometo que vas a dominar el Español!!...
21 Subjects: including algebra 1, Microsoft PowerPoint, track & field, football
1. A stock is not expected to pay a dividend over the next four years. Five years from now, the company anticipates that it will establish a dividend of $1.00 per share (i.e., D[5] = $1.00). Once the
dividend is established, the market expects that the dividend will grow at a constant rate of 5 percent per year forever. The risk-free rate is 5 percent, the company’s beta is 1.2, and the market
risk premium is 5 percent. The required rate of return on the company’s stock is expected to remain constant. What is the current stock price?
A: $ 7.36
B: $ 8.62
C: $ 9.89
D: $10.98
2. Mack Industries just paid a dividend of $1.00 per share (i.e., D[0] = $1.00). Analysts expect the company’s dividend to grow 20 percent this year (i.e., D[1] = $1.20), and 15 percent next
year. After two years the dividend is expected to grow at a constant rate of 5 percent. The required rate of return on the company’s stock is 12 percent. What should be the current price of the
company’s stock?
A: $12.33
B: $16.65
C: $16.91
D: $18.67
3. R. E. Lee recently took his company public through an initial public offering. He is expanding the business quickly to take advantage of an otherwise unexploited market. Growth for his
company is expected to be 40 percent for the first three years and then he expects it to slow down to a constant 15 percent. The most recent dividend (D[0]) was $0.75. Based on the most recent
returns, the beta for his company is approximately 1.5. The risk-free rate is 8 percent and the market risk premium is 6 percent. What is the current price of Lee’s stock?
A: $77.14
B: $75.17
C: $67.51
D: $73.88
4. A stock is expected to pay no dividends for the first three years, i.e., D[1] = $0, D[2] = $0, and D[3] = $0. The dividend for Year 4 is expected to be $5.00 (i.e., D[4] = $5.00), and it
is anticipated that the dividend will grow at a constant rate of 8 percent a year thereafter. The risk-free rate is 4 percent, the market risk premium is 6 percent, and the stock’s beta is 1.5.
Assuming the stock is fairly priced, what is the current price of the stock?
A: $ 69.31
B: $ 72.96
C: $ 79.38
D: $ 86.38
5. Stewart Industries expects to pay a $3.00 per share dividend on its common stock at the end of the year (D[1] = $3.00). The dividend is expected to grow 25 percent a year until t = 3,
after which time the dividend is expected to grow at a constant rate of 5 percent a year (i.e., D[3] = $4.6875 and D[4] = $4.9219). The stock’s beta is 1.2, the risk-free rate of interest is 6
percent, and the rate of return on the market is 11 percent. What is the company’s current stock price?
A: $29.89
B: $30.64
C: $37.29
D: $53.69
6. McPherson Enterprises is planning to pay a dividend of $2.25 per share at the end of the year (i.e., D[1] = $2.25). The company is planning to pay the same dividend each of the following
2 years and will then increase the dividend to $3.00 for the subsequent 2 years (i.e., D[4] and D[5]). After that time the dividends will grow at a constant rate of 5 percent per year. If the
required return on the company’s common stock is 11 percent per year, what is the current stock price?
A: $52.50
B: $40.41
C: $37.50
D: $50.00
7. Hadlock Healthcare expects to pay a $3.00 dividend at the end of the year (D[1] = $3.00). The stock’s dividend is expected to grow at a rate of 10 percent a year until three years from
now (t = 3). After this time, the stock’s dividend is expected to grow at a constant rate of 5 percent a year. The stock’s required rate of return is 11 percent. What is the price of the stock today?
A: 49
B: 54
C: 64
D: 52
8. Rogers Robotics currently (1998) does not pay a dividend. However, the company is expected to pay a $1.00 dividend two years from today (2000). The dividend is then expected to grow at a rate
of 20 percent a year for the following three years. After the dividend is paid in 2003, it is expected to grow forever at a constant rate of 7 percent. Currently, the risk-free rate is 6 percent,
market risk premium (k[M] – k[RF]) is 5 percent, and the stock’s beta is 1.4. What should be the price of the stock today?
A: 22.91
B: 21.20
C: 20.16
D: 28.80
9. Whitesell Technology has just paid a dividend (D[0]) and is expected to pay a $2.00 per-share dividend at the end of the year (D[1]). The dividend is expected to grow 25 percent a year for
the following four years, (i.e., D[5] = $2.00 * (1.25)^4 = $4.8828). After this time period, the dividend will grow forever at a constant rate of 7 percent a year. The stock has a required rate of
return of 13 percent, (i.e., k[s] = 0.13). What is the expected price of the stock two years from today? (Calculate the price assuming that D[2] has already been paid.)
A: 83.97
B: 95.87
C: 69.56
D: 67.63
10. An analyst estimates that Cheyenne Co. will pay the following dividends: D[1] = $3.0000, D[2] = $3.7500, and D[3] = $4.3125. The analyst also estimates that the required rate of return on
Cheyenne’s stock is 12.2 percent. After the third dividend, the dividend is expected to grow by 8 percent per year forever. What is the price of the stock today?
A: $81.40
B: $84.16
C: $85.27
D: $87.22
11. Lewisburg Company’s stock is expected to pay a dividend of $1.00 per share at the end of the year. The dividend is expected to grow 20 percent per year each of the following three years
(i.e., D[4] = $1.7280), after which time the dividend is expected to grow at a constant rate of 7 percent per year. The stock’s beta is 1.2, the market risk premium is 4 percent, and the risk-free
rate is 5 percent. What is the price of the stock today?
A: $49.61
B: $45.56
C: $48.43
D: $46.64
12. Namath Corporation’s stock is expected to pay a dividend of $1.25 per share at the end of the year. The dividend is expected to increase by 20 percent per year for each of the following
two years. After that, the dividend is expected to increase at a constant rate of 8 percent per year. The stock has a required return of 10 percent. What should be the price of the stock today?
A:
B: $59.38
C: $70.11
D: $76.76
13. Faulkner Corporation expects to pay an end-of-year dividend, D[1], of $1.50 per share. For the next two years the dividend is expected to grow by 25 percent per year, after which time
the dividend is expected to grow at a constant rate of 7 percent per year. The stock has a required rate of return of 12 percent. Assuming that the stock is fairly valued, what is the price of the
stock today?
A: $45.03
B: $40.20
C: $37.97
D: $36.38
14. The Textbook Production Company has been hit hard due to increased competition. The company’s analysts predict that earnings (and dividends) will decline at a rate of 5 percent annually
forever. Assume that k[s] = 11 percent and D[0] = $2.00. What will be the price of the company’s stock three years from now?
A: $27.17
B: $ 6.23
C: $28.50
D: $10.18
15. Newburn Entertainment’s stock is expected to pay a year-end dividend of $3.00 a share. (D[1] = $3.00, the dividend at time 0, D[0], has already been paid.) The stock’s dividend is
expected to grow at a constant rate of 5 percent a year. The risk-free rate, k[RF], is 6 percent and the market risk premium, (k[M] – k[RF]), is 5 percent. The stock has a beta of 0.8. What is the
stock’s expected price five years from now?
A: $60.00
B: $76.58
C: $96.63
D: $72.11
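Most of these problems follow the same two-stage pattern: discount the explicitly forecast dividends, then add a Gordon-growth terminal value. A minimal sketch (the helper name is mine, not part of the quiz), checked against problem 2:

```python
def ddm_price(dividends, g, r):
    """Two-stage dividend discount model: `dividends` are the explicit
    forecasts for years 1..n, after which dividends grow at rate g
    forever. All cash flows are discounted at the required return r."""
    n = len(dividends)
    pv_dividends = sum(d / (1 + r) ** (t + 1) for t, d in enumerate(dividends))
    terminal = dividends[-1] * (1 + g) / (r - g)   # value at the end of year n
    return pv_dividends + terminal / (1 + r) ** n

# Problem 2: D1 = $1.20, D2 = $1.38, then 5% growth forever, r = 12%
price = ddm_price([1.20, 1.38], g=0.05, r=0.12)   # ≈ 18.67, answer D
```

The same function collapses to the plain Gordon formula for a constant-growth stock: a single forecast dividend of $3.00 with g = 5% and r = 11% returns D1/(r − g) = $50.00.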
Optimum angle
Hi everyone, I'm hoping that someone may be able to advise me on an issue I currently have.
I am trying to implement a basic physics computer simulation designed for children learning about forces. One of the sub-games I have created is to fire a character from a cannon and find the optimum angle, which ends up at 45 degrees.
However, the second sub-game models drag force on the projectile, taking into account its mass, area, drag coefficient, and the air density. The equations are solved using an RK4 method. Excuse my lack of knowledge on this subject; I am not that hot on physics, I just have an interest, which is why I decided to model this simulation as my final university project as a computing student.
Now, my question: I've always understood that 45 degrees is the optimum angle to travel the farthest, but when simulating the projectile with drag, the optimum angle is now 35 degrees. Is this correct, in that 45 degrees would no longer be optimum, or am I going wrong somewhere?
Many thanks
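For what it's worth: yes, with aerodynamic drag the optimum launch angle really is below 45 degrees, and something like 35 is plausible for a draggy, low-speed projectile. A quick independent RK4 sketch to compare against (all parameter values below are made up for illustration, not taken from the post):

```python
import math

def simulate_range(angle_deg, v0=30.0, mass=70.0, area=0.5, cd=1.0,
                   rho=1.225, g=9.81, dt=0.01):
    """Integrate 2-D projectile motion with quadratic drag,
    a_drag = -(0.5 * rho * cd * area / m) * |v| * v, using classic RK4.
    Returns the horizontal distance where the projectile returns to y = 0."""
    k = 0.5 * rho * cd * area / mass          # drag factor per unit mass
    th = math.radians(angle_deg)
    state = [0.0, 0.0, v0 * math.cos(th), v0 * math.sin(th)]  # x, y, vx, vy

    def deriv(s):
        _, _, vx, vy = s
        speed = math.hypot(vx, vy)
        return [vx, vy, -k * speed * vx, -g - k * speed * vy]

    prev = state
    while state[1] >= 0.0:                    # step until we pass the ground
        prev = state
        k1 = deriv(state)
        k2 = deriv([s + 0.5 * dt * d for s, d in zip(state, k1)])
        k3 = deriv([s + 0.5 * dt * d for s, d in zip(state, k2)])
        k4 = deriv([s + dt * d for s, d in zip(state, k3)])
        state = [s + dt * (a + 2 * b + 2 * c + d) / 6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

    # interpolate between the last point above ground and the first below it
    (x0, y0), (x1, y1) = prev[:2], state[:2]
    return x0 + (x1 - x0) * y0 / (y0 - y1)

best_angle = max(range(10, 61), key=simulate_range)
```

With these parameters the scan returns an optimum noticeably below 45 degrees; increasing the drag coefficient or area drives it lower still, while letting them shrink toward zero recovers the textbook 45.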
Norridge, IL Calculus Tutor
Find a Norridge, IL Calculus Tutor
I have ten years experience tutoring high school and college students in chemistry (organic, inorganic, physical or analytical), physics and mathematics (geometry, calculus and algebra) on both a
one on one level as well as in big groups. My undergraduate was a major in chemistry with a minor in ma...
20 Subjects: including calculus, chemistry, physics, geometry
...I use both analytical as well as graphical methods or a combination of the two as needed to cater to each student. Having both an Engineering and Architecture background, I am able to explain
difficult concepts to either a left or right-brained student, verbally or with visual representations. ...
34 Subjects: including calculus, reading, writing, statistics
...My friends ask me for help in math when they don't understand something. My tutoring methods would be the following: I would first ask the child if he or she believes he or she learns better
by seeing, hearing or touching. I would then have the child fill out a questionnaire to see if what he or she believes is true.
19 Subjects: including calculus, reading, geometry, algebra 1
...Willing to travel by CTA after work during the week to meet potential students so long as my commute is reasonable. During weekends I am willing to travel by car. Best fit students for me
include high school through undergraduate level students seeking help in math, physics, inorganic chemistry, mechanical engineering, or related technical fields.
22 Subjects: including calculus, chemistry, physics, geometry
...My grades in my first two semesters of the IU theory sequence were both A+; I earned A's in subsequent honors theory courses. I'd be happy to tutor through the first year of undergraduate
music theory. I'm willing to tutor groups.
13 Subjects: including calculus, statistics, geometry, algebra 1
Related Norridge, IL Tutors
Norridge, IL Accounting Tutors
Norridge, IL ACT Tutors
Norridge, IL Algebra Tutors
Norridge, IL Algebra 2 Tutors
Norridge, IL Calculus Tutors
Norridge, IL Geometry Tutors
Norridge, IL Math Tutors
Norridge, IL Prealgebra Tutors
Norridge, IL Precalculus Tutors
Norridge, IL SAT Tutors
Norridge, IL SAT Math Tutors
Norridge, IL Science Tutors
Norridge, IL Statistics Tutors
Norridge, IL Trigonometry Tutors
[SciPy-user] Triple integral with Gaussian quadrature
Robert Kern rkern at ucsd.edu
Thu Oct 6 04:46:34 CDT 2005
Nils Wagner wrote:
> Hi all,
> Is there a way to compute a triple integral via Gaussian quadrature in
> scipy ?
> A small example would be appreciated.
> What integration method is used in
> tplquad(func, a, b, gfun, hfun, qfun, rfun, args=(),
> epsabs=1.4899999999999999e-08, epsrel=1.4899999999999999e-08) ?
It's just quad() run over the function defined by doing the double
integral over the remaining dimensions (and the double integral is also
implemented with quad() by integrating over the function defined by
doing the integral using quad() to finally integrate over the last
dimension). quad() comes from QUADPACK. Exactly which QUADPACK function
is called depends on the inputs; infinite bounds require one function,
weighted integrals require others, explicit breakpoints require yet
another. You can figure which by looking at the source code in
Lib/integrate/quadpack.py, and then you can read the documentation in
Lib/integrate/quadpack/readme to learn about the algorithms used by
those functions.
In order to do triple integrals with Gaussian quadrature, you can
probably do a similar recursive scheme like tplquad() does. You can read
the source for details.
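To make the recursive scheme concrete, here is one way to nest one-dimensional Gauss-Legendre rules into a triple integral. This is a hand-rolled sketch, not a SciPy API; it mimics tplquad's func(z, y, x) argument order and limit functions:

```python
import numpy as np

def gauss_tplquad(func, a, b, gfun, hfun, qfun, rfun, n=20):
    """Triple integral of func(z, y, x) over a <= x <= b,
    gfun(x) <= y <= hfun(x), qfun(x, y) <= z <= rfun(x, y),
    nesting an n-point Gauss-Legendre rule the same way that
    tplquad() nests calls to quad()."""
    nodes, weights = np.polynomial.legendre.leggauss(n)

    def gauss1d(f, lo, hi):
        # map the rule from [-1, 1] onto [lo, hi]
        mid, half = 0.5 * (hi + lo), 0.5 * (hi - lo)
        return half * sum(w * f(mid + half * t) for t, w in zip(nodes, weights))

    return gauss1d(
        lambda x: gauss1d(
            lambda y: gauss1d(lambda z: func(z, y, x), qfun(x, y), rfun(x, y)),
            gfun(x), hfun(x)),
        a, b)

# Example: integral of x*y*z over the unit cube is 1/8
val = gauss_tplquad(lambda z, y, x: x * y * z, 0, 1,
                    lambda x: 0, lambda x: 1,
                    lambda x, y: 0, lambda x, y: 1)
```

scipy.integrate.fixed_quad wraps the same fixed-order Gauss-Legendre rule, so each gauss1d call could be swapped for it when the integrand is vectorized.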
Robert Kern
rkern at ucsd.edu
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
More information about the SciPy-user mailing list
Re: probability
Hi Sam;
A question for you: you have posted several other questions.
I don't know whether or not you needed help or you were just posing a problem.
I answered them, can you check them and tell me what you think?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Jeffersonville, PA Statistics Tutor
Find a Jeffersonville, PA Statistics Tutor
I've had many math professors throughout undergrad and graduate school and I've found that, although they all know what they're talking about and are all very intelligent, what makes a math
teacher great is understanding that not everyone understands things the same way he/she does. Throughout my y...
19 Subjects: including statistics, calculus, geometry, algebra 1
...Therefore, I became well versed in teaching all of the subjects that are required to pass the TEAS exam. In addition, I have studied the test format including how the questions are developed,
selected and revised and I am familiar with the types of questions used, the timing format and nationwid...
51 Subjects: including statistics, English, reading, geometry
...Additionally, I have completed 4 courses at the SAS Institute in Philadelphia and a Teradata SQL course at LearnQuest. Finally, I also have a firm grasp of the Microsoft Office Suite. I
prefer a hands-on approach to teaching and tutoring, an approach developed and polished during office hours as a TA and adjunct mathematics faculty.
26 Subjects: including statistics, geometry, GRE, algebra 1
...I understand DC and AC and RF circuits but am not a circuit designer. I have a BS Mechanical Engineering from Penn State 1967. I worked in this capacity at Xerox from 1967 to 1984 when I left
Xerox to work at Lockheed Martin 1984 to 2012 in Pennsylvania.
17 Subjects: including statistics, calculus, physics, geometry
...I have taught middle school math for 6 years. On a daily basis, I helped students with study skills such as time management, organization, and reading carefully. I have tutored and
homeschooled many students as well.
21 Subjects: including statistics, reading, algebra 1, SAT math
Related Jeffersonville, PA Tutors
Jeffersonville, PA Accounting Tutors
Jeffersonville, PA ACT Tutors
Jeffersonville, PA Algebra Tutors
Jeffersonville, PA Algebra 2 Tutors
Jeffersonville, PA Calculus Tutors
Jeffersonville, PA Geometry Tutors
Jeffersonville, PA Math Tutors
Jeffersonville, PA Prealgebra Tutors
Jeffersonville, PA Precalculus Tutors
Jeffersonville, PA SAT Tutors
Jeffersonville, PA SAT Math Tutors
Jeffersonville, PA Science Tutors
Jeffersonville, PA Statistics Tutors
Jeffersonville, PA Trigonometry Tutors
Nearby Cities With statistics Tutor
Center Square, PA statistics Tutors
Chesterbrook, PA statistics Tutors
Eagleville, PA statistics Tutors
East Norriton, PA statistics Tutors
Lawncrest, PA statistics Tutors
Lawndale, PA statistics Tutors
Limerick, PA statistics Tutors
Lower Gwynedd, PA statistics Tutors
Lower Merion, PA statistics Tutors
Norristown, PA statistics Tutors
Plymouth Valley, PA statistics Tutors
Radnor, PA statistics Tutors
Tredyffrin, PA statistics Tutors
Trooper, PA statistics Tutors
Upper Gwynedd, PA statistics Tutors
Incorporating uncertainty in distance-matrix phylogenetic reconstruction
Seminar Room 1, Newton Institute
One approach to estimating phylogenetic trees supposes that a matrix of estimated evolutionary distances between taxa is available. Agglomerative methods have been proposed in which closely related
taxon-pairs are successively combined to form ancestral taxa. Several of these computationally efficient agglomerative algorithms involve steps to reduce the variance in estimated distances. However,
formal statistical models and methods for agglomerative distance-based phylogenetic reconstruction have not previously been published.
We propose a statistical agglomerative phylogenetic method which formally considers uncertainty in distance estimates and how it propagates during the agglomerative process. It simultaneously
produces two topologically identical rooted trees, one tree having branch lengths proportional to elapsed time, and the other having branch lengths proportional to underlying evolutionary divergence.
The method models two major sources of variation which have been separately discussed in the literature: noise, reflecting inaccuracies in measuring divergences, and distortion, reflecting randomness
in the amounts of divergence in different parts of the tree. The methodology is based on successive hierarchical generalised least-squares regressions. It involves only means, variances and
covariances of distance estimates, thereby avoiding full distributional assumptions. Exploitation of the algebraic structure of the estimation leads to an algorithm with computational complexity
comparable to the leading published agglomerative methods. A parametric bootstrap procedure allows full uncertainty in the phylogenetic reconstruction to be assessed.
Posts by
Total # Posts: 1,485
thank you :)
If 0.48 g of sulfur are added to 200 g of carbon tetracholoride, and the freezing point of the carbon tetrachloride (Kf=30 degrees C/m) is depressed by 0.28 degrees C, what is the molar mass and
molecular formula of the sulfur?
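A sketch of the usual colligative-property route, using the numbers from the question (ΔT = Kf·m, with Kf = 30 °C/m as given):

```python
def molar_mass_from_fp(mass_solute_g, mass_solvent_kg, kf, delta_t):
    """Freezing-point depression: delta_t = kf * molality, so the solute's
    molar mass is its mass divided by (molality * kg of solvent)."""
    molality = delta_t / kf                  # mol solute per kg solvent
    moles = molality * mass_solvent_kg
    return mass_solute_g / moles

m = molar_mass_from_fp(0.48, 0.200, 30.0, 0.28)   # ≈ 257 g/mol
atoms = round(m / 32.07)                          # ≈ 8 sulfur atoms
```

About 257 g/mol is roughly eight times the atomic mass of sulfur (32.07 g/mol), pointing to the S8 molecule.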
Assume that you randomly select 8 cards from a deck of 52. What is the probability that all of the cards selected are hearts?
A group of students who are selecting 2 of their group at random to give a report, but assume that there are 5 males and 3 females. What is the probability that 2 females are selected? 2 males?
Assume that you select 2 coins at random from 5 coins: 3 dimes and 2 quarters. What is the probability that all of the coins selected are dimes?
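All three questions are the same computation: the probability that every unordered draw comes from a single favoured subgroup, C(group, draws) / C(total, draws). A quick sketch (the helper name is mine):

```python
from math import comb

def p_all_from_group(group, total, draws):
    """Probability that `draws` unordered selections out of `total` items
    all come from a favoured subgroup of size `group` (hypergeometric)."""
    return comb(group, draws) / comb(total, draws)

p_hearts  = p_all_from_group(13, 52, 8)   # all 8 cards are hearts
p_females = p_all_from_group(3, 8, 2)     # 2 females: 3/28
p_males   = p_all_from_group(5, 8, 2)     # 2 males: 10/28
p_dimes   = p_all_from_group(3, 5, 2)     # 2 dimes: 3/10
```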
Social capital refers to the collective value of all "social networks" [who people know] and the inclinations that arise from these networks to do things for each other ["norms of reciprocity"].
Cultural capital is defined as forms of knowledge, both tangible and intangible, that have value in a given society in relation to status and power. Cultural capital defines how people (human) engage
each other (social) and their resources (economic).
what are some examples of social and cultural capital that young children experience
A 1.7 cm thick bar of soap is floating in water, with 1.1 cm of the bar underwater. Bath oil with a density of 890.0 kg/m3 is added and floats on top of the water. How high on the side of the bar
will the oil reach when the soap is floating in only the oil.
45 1/5 x 3/4 =
the importance of these organic compounds ( carbohydrates, proteinds, lipids) in the cells and tissues of living things
A ball is thrown upward from the top of a 50.0 m tall building. The ball's initial speed is 12.0 m/s. At the same instant, a person is running on the ground at a distance of 44.0 m from the building.
What must be the average speed of the person if he is to catch the ball a...
Yes, I read it but I don't understand it.
language arts
I need to unscramble these words and make new ones. Help: eioubdhrs
2. Find a media piece (article, video, presentation, song, or other) that recognizes the fundamental concepts of chemistry in biology. Include the link or reference citation for the piece and describe
how it helped you better understand how fundamental concepts of chem...
Joe and Bob manufacture bikes. Joe can make 12 bikes in 8 hours while Bob can make 12 bikes in 6 hours. Working together how many hours will it take them to make 12 bikes? Please explain showing
steps so I can learn it. I don't just need the answer. Thanks! PS I originally...
Joe and Bob manufacture bikes. Joe can make 12 bikes in 8 hours while Bob can make 6 bikes in 12 hours. Working together how many hours will it take them to make 12 bikes? Please explain showing
steps so I can learn it. I don't just need the answer. Thanks!
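Using the numbers in the first version of the question (12 bikes in 8 hours and 12 bikes in 6 hours), the standard step is to add rates, not times:

```python
joe_rate = 12 / 8              # 1.5 bikes per hour
bob_rate = 12 / 6              # 2.0 bikes per hour
together = joe_rate + bob_rate # 3.5 bikes per hour combined
hours = 12 / together          # 24/7 ≈ 3.43 hours for 12 bikes
```

With the second version's slower Bob (6 bikes in 12 hours, i.e. 0.5 bikes per hour), the same rate-addition gives 12 / 2 = 6 hours.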
A soccer player kicks the ball toward a goal that is 27.8 m in front of him. The ball leaves his foot at a speed of 18.5 m/s and an angle of 30.4° above the ground. Find the speed of the ball when
the goalie catches it in front of the net. (Note: The answer is not 18.5 m/s...
Here's the rest of the info: there is an angle on one triangle measured 15.5 ft, and on another triangle 5 ft 2 in and 7 ft 9 in.
Jenny is 5 ft 2 in tall. To find the height of a light pole, she measured her shadow and the pole's shadow. What is the height of the pole?
Hydrochloric acid
determine whether 17 is a solution of the equation 5x+7 = 92
Can someone please explain how to do this problem: How many cubic centimeters of olive oil have the same mass as 1.00 L of gasoline? The density of gasoline is 0.66 g/mL and olive oil is 0.92 g/mL. Thank you for your help.
Sorry, it's an 8.0 oz box, so there are 16 servings in the box. Thanks
How many grams of sodium are used to prepare 50 boxes of crackers? Sodium per serving is 140 mg; serving size is 0.50 oz for 6 crackers. I got 112 g and the answer is supposed to be 110 g, and I can't figure out how to get 110. Thank you
why are the atoms in metallic bonds positively charged?
1. Calculate the radius of orbit for a hydrogen electron in the n=3 level. 2.The half life of an unknown substance is one minutes. If the initial number of particles is 20 how many particles will
remain after two minutes have elapsed?
what are some modern day myths?
Math, Quadratics
Farmer John is building a pen to keep his pig in. John has 40m of fencing. He will only need to make 3 sides to the pen because the 4th side will be along a barn. If John wants the maximum area what
should the side lengths of the pen be?
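For the pen question, with the barn as the fourth side the area as a function of the two equal sides x is A(x) = x(40 − 2x), a downward-opening parabola. A quick numeric check (illustrative only):

```python
def pen_area(x):
    """Area of a three-sided pen against the barn: 40 m of fencing gives
    two sides of length x and one side of length 40 - 2x."""
    return x * (40 - 2 * x)

# A(x) = -2x^2 + 40x peaks at the vertex x = -b / (2a) = 40 / 4 = 10
best_x = max((i / 100 for i in range(1, 2000)), key=pen_area)
```

The vertex at x = 10 gives a pen with sides 10 m, 10 m and 20 m, and the maximum area of 200 m².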
Math, Quadratics
Hi Damon, I got the same answer as you up to the equation x^2 + 30x - 200 = 0, but I don't understand what happened after that. Where does the 900 and 800 come from? And why do you divide by 2? Thanks
Math, Quadratics
Adam has a hockey rink in his backyard. The current dimensions are 10m by 20m. Adam wants to have a hockey tournament and needs to double the area of his hockey rink. How much must Adam increase each
dimension if he wants to increase them the same amount?
A rocket is fired at a speed of 75.0 m/s from ground level, at an angle of 59.7° above the horizontal. The rocket is fired toward an 11.0 m high wall, which is located 28.5 m away. The rocket attains
its launch speed in a negligibly short period of time, after which its en...
What mass of sodium bromide (NaBr) is required to make 250 cm3 of a 0.35 mol.dm-3 solution?
320g of sulfur dioxide react with 32.0g of oxygen and excess water to form sulfuric acid (H2SO4) a. What is the limiting reagent? b. What is the theoretical yield of product? c. How many grams of
excess reagent will remain?
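A sketch of the standard bookkeeping, assuming the overall reaction 2 SO2 + O2 + 2 H2O → 2 H2SO4 (that is, 2 SO2 + O2 → 2 SO3 followed by SO3 + H2O → H2SO4, with water stated to be in excess):

```python
M_SO2, M_O2, M_H2SO4 = 64.07, 32.00, 98.08   # molar masses, g/mol

n_so2 = 320.0 / M_SO2        # ≈ 5.0 mol SO2
n_o2 = 32.0 / M_O2           # 1.0 mol O2

# Each mol of O2 consumes 2 mol of SO2, so compare available vs required:
limiting = "O2" if n_so2 / 2 > n_o2 else "SO2"
n_h2so4 = 2 * min(n_o2, n_so2 / 2)                       # 2.0 mol product
yield_g = n_h2so4 * M_H2SO4                              # ≈ 196 g H2SO4
excess_g = (n_so2 - 2 * min(n_o2, n_so2 / 2)) * M_SO2    # ≈ 192 g SO2 left
```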
Ethane (C2H6) reacts with oxygen to produce carbon dioxide and water. Assuming there is an excess of oxygen, calculate the mass of CO2 and H2O produced from 1.25 g of ethane
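From the balanced equation 2 C2H6 + 7 O2 → 4 CO2 + 6 H2O, each mole of ethane yields 2 mol CO2 and 3 mol H2O. A quick check of the arithmetic:

```python
M_C2H6, M_CO2, M_H2O = 30.07, 44.01, 18.02   # molar masses, g/mol

n_ethane = 1.25 / M_C2H6             # ≈ 0.0416 mol C2H6
mass_co2 = 2 * n_ethane * M_CO2      # 2 CO2 per C2H6 -> ≈ 3.66 g
mass_h2o = 3 * n_ethane * M_H2O      # 3 H2O per C2H6 -> ≈ 2.25 g
```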
7th grade Math
Sorry Eric, yes I did mean 43 (typo sorry)
7th grade Math
Assume that there are 100 marbles in each bag. Firstly divide 100 by 7 (ratio 4:3) = 14.28. Multiply 14.28 by 4 = 57.12 *57 marbles. Multiply 14.28 by 3 = 42.84 *43 marbles. Secondly do the same for ratio 3:2. Divide 100 by 5 = 20. Multiply 20 by 3 = 60 marbles. Multiply 20 by 2 = 40...
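The method above can be sketched quickly (a hypothetical check, assuming 100 marbles per bag as stated; note that 42.84 rounds to 43 marbles, matching the earlier correction in the thread):

```python
# Split an assumed total of 100 marbles per bag in a given ratio.
def split_in_ratio(total, a, b):
    unit = total / (a + b)        # size of one ratio "part"
    return round(unit * a), round(unit * b)

print(split_in_ratio(100, 4, 3))  # ratio 4:3 -> (57, 43)
print(split_in_ratio(100, 3, 2))  # ratio 3:2 -> (60, 40)
```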
fifth grade math
Daughter has homework problem. P(3,6) is only thing listed. ?
Correction...it is supposed to be P(6,3)
We don't understand
Scores on a certain nationwide college entrance examination follow a normal distribution with a mean of 400 and a standard deviation of 100. Find the probability that a student will score over 500.
which measure will remain the same if Jarod next three quiz score are 96,93,and 93 a.mode b.range c.mean d.median
It's an if-then question. It says: If I get another chance, then I will do better. And 'I get another chance' is underlined. What would it be? Would it still be a hypothesis without the 'if' underlined? Would it be the argument?
A roof rises 1.7 feet vertically and 13.3 feet horizontally. What is the grade of the roof in percentage
NO its (Ella es americana.)
what is the sq root property of 2x^2=26
In 35.0 s, a pump delivers 0.574 m3 of oil into barrels on a platform 25.5 m above the intake pipe. The oil's density is 0.820 g/cm3.
math word problem
A rectangle garden is 30 ft by 40 ft. Part of the garden is removed in order to install a walkway of uniform width around it. The area of the new garden is one-half the area of the old garden. How
wide is the walkway?
The number 33 bus arrives at the corner of Coldspring and Hillen every 15 minutes. The number 40 bus arrives at the same corner every 46 minutes. If both buses leave the corner at 9:08 am, when is
the next time the buses arrive at the corner at the same time?
Thank you.
Divide: (20b^(3)+17b^(2)+18b+43)÷(4b+5)
Thank you.
3y-24/14 divided by y-8/4y
Thank you.
Solve for x. 4x(x-1)-5x(x)=3
What is the word ball in this sentence? The young boy played ball in the yard. adverb?
I need help with this and have no clue as to what I should do. I have trouble with paraphrasing. Review the following passage and paraphrase the long passage in the following box. Use the reference
to create an appropriate APA-formatted, in-text citation. Additionally, include...
Thank you very much!!
-4x^2+44x-160=0 If you graphed the above equation, would the graph open up or down? How can you tell without graphing it?
A farmer decides to enclose a rectangular garden, using the side of a barn as one side of the rectangle. What is the maximum area that the farmer can enclose with 100 ft of fence? What should the
dimensions of the garden be to this given area? The maximum area that can be encl...
The length of a rectangle is twice the width. The area is 578 yd.^2. Find the length and width.
Thank you.
In the community you live, you have been asked to serve on the planning committee for the Fourth of July festival. Your committee has decided to begin meeting early to avoid some of the problems
experienced in previous years. Unfortunately, those problems stemmed from poor bud...
You have just graduated and one of your favorite courses was Financial Management. While you were in school, your grandfather died and left you $1 million. You have decided to invest the funds in a
fast-food franchise and have two choices Franchise L and Franchise S. You ...
Rati and Rates Problems
1.Write two equivalent ratios for the given ratio. a. 4/12 b. 11/33 SHOW WORK
Assume that you are the assistant to the CFO of XYZ Company. Your task is to estimate XYZ's WACC using the following data: 1. The firm's tax rate is 40%. 2. The current price of the 12% coupon,
semiannual payment, non-callable bonds with 15 years to maturity is $1,153....
Suppose the heights of women aged 20 to 29 follow approximately the N(66.7, 2.4) distribution. Also suppose men the same age have heights distributed as N(70.5, 2.2). What percent (± 0.1) of young
men are shorter than average young women?
Algebra - Please HELP
During the first part of a trip a canoeist travels 49 miles at a certain speed. The canoeist travels 25 miles on the second part of the trip at a speed of 5mph slower. The total time for the trip is
5hrs. What was the speed on each part of the trip?
The flower garden has the shape of a right triangle. 5ft of a perennial border forms the hypotenuse of the triangle, and one leg is 1ft longer than the other leg.Find the lengths of the legs.
Thank you!
You deposit $1000 in an account that pays 8% interest compounded semiannually. After 2 years, the interest rate is increased to 8.40% compounded quarterly. What will be the value of the account after
4 years?
science virus
Influenza is an active virus. I know this because it spreads very quickly.
physics-question 7
A baseball is thrown at an angle of 26 degrees relative to the ground at a speed of 22.1 m/s. The ball is caught 39.2327 m from the thrower. The acceleration due to gravity is 9.81 m/s2 . How high is
the tallest spot in the ball s path? Answer in units of m
A spy in a speed boat is being chased down a river by government officials in a faster craft. Just as the officials boat pulls up next to the spy s boat, both boats reach the edge of a 5.2 m
waterfall. The spy s speed is 15 m/s and the officials speed is ...
4th of 2 and 24th of 2 both equal 2. so i'm thinking this is false, I just need confirmation.
I did not express a question involving power. Power is expressed as follows: 4^2. I am asking a question involving radical expressions, roots, and radicands.
radical expressions=the root of a radicand. I can't type the radical symbol. 4th power to the number 2 and 24th power to the number 2.
Is this radical expression true or false? 28 minus 4th root of 2 equals 24th root of 2
Can a radical be negative when the index is even?
Solve. .25(250)+250/250
If 250 specialty coffee drinks were sold at the bake sale, what would the average cost per drink be given the equation .25x+250/x?
I still have it wrong. I have pKa = 9.25. Solved for acid and got 2.39. Then I converted to grams and 62.25. Here's my work ka = 1.0e-14/1.8e-5 = 5.56e-10. solved for pKa -log(5.56e-10) = 9.25. H-H =
9.40=9.25 + log(1.70/acid). acid = 2.39. then, 2.39 x 0.490L = 1.17 M. th...
How many grams of NH4Cl must be added to 0.490 L of 1.70 M NH3 solution to yield a buffer solution with a pH of 9.40? Assume no volume change. Kb for NH3 = 1.8 10-5.
The average cost per drink can be found by dividing the total cost (.25x+250=total cost) by the number of drinks sold (x). Write an equation that shows the average cost per drink.
Add. Simplify if possible. 3/v+8/v^2
Add. Simplify if possible. 4/vy^2+9/yv^2
Find all the numbers for which the rational expression is undefined: -24/25v
Algebra-Rational Equation
The officejet printer can copy Lisa's dissertation in 22 min. The laserjet printer can copy the same document in 18 min. If the two machines work together, how long would they take to copy the dissertation?
Maria bicycles 4km/h faster than Javier. In the same time that it takes Javier to bike 60km, Maria bikes 72km. How fast does each bicyclist travel?
A group of parents is going to sell specialty coffee drinks including espresso, lattes, cappuccinos and mochas. It cost $250 to rent the equipment, which is a fixed cost. Additionally, the
ingredients for each drink (x) cost $0.25. Write an equation that shows the total cost t...
Talk Tennis - View Single Post - Combining weight, balance and swingweight
Originally Posted by
No. ω is angular acceleration (as said in the document). You can also see that I have used M=ω*I which indicates that it is angular acceleration. So the formula is correct.
However, ω is often used to denote angular velocity so you are right that it is confusing. I will change in the document. Thanks for being a careful reader.
According to every textbook on angular motion, ω is angular velocity. So formula 4 was wrong.
Right now you start using ω as angular acceleration without proper corrections, and all your formulas become wrong.
Millbrae SAT Math Tutor
Find a Millbrae SAT Math Tutor
...Over half of all words in English derive from Latin, one of the most logical and widely-spoken languages ever created. Study of Latin will help English-speaking students understand their own
language much better and provide them with a window into French, Spanish, Italian, and other languages. ...
49 Subjects: including SAT math, English, reading, writing
...Some of them were scoring in mid-500, when I started working with them. I bring my full attention and dedication to the students that I work with. Since every person is unique, I personalize my
approach to each student.
14 Subjects: including SAT math, calculus, statistics, geometry
...I took introductory linear algebra at DVC several years ago and received an A. I spent the next two years tutoring intro linear algebra on a daily basis at the DVC math lab. I then took an
advanced linear algebra class at UC Santa Cruz and received a C (horribly difficult class for math/compute...
15 Subjects: including SAT math, reading, writing, calculus
...Unfortunately for many people with test anxiety, test scores are very important in college and other school admissions and can therefore have a huge impact on your life. If you approach
test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve. Working with a kind and patient tutor can be instrumental in this process.
48 Subjects: including SAT math, reading, Spanish, English
...Most students with serious learning disabilities are able to obtain a high school diploma. Goals set by students or their parents are usually accomplished. I tutor in math, science,
pre-algebra, algebra, algebra II, geometry, trigonometry, calculus, and physics.
29 Subjects: including SAT math, reading, physics, calculus
voltage controlled oscillators
Authored by Ian C. Purdie VK2TIP
What are voltage controlled oscillators?
A voltage controlled oscillator, more commonly known as a VCO, is an oscillator whose principal variable or tuning element is a varactor diode. The voltage controlled oscillator is tuned across its band by a "clean" DC voltage applied to the varactor diode to vary the net capacitance applied to the tuned circuit.
A practical example of voltage controlled oscillators
Here I'm going to use a very practical example where one of my readers has a requirement for a voltage controlled oscillator operating at 1.8 - 2.0 Mhz (amateur radio band 160M). This is to be part
of a frequency synthesiser, although a vco isn't always associated with a frequency synthesiser.
The very high costs and difficulties encountered when buying quality variable capacitors today often make vco's an extremely attractive alternative. As an alternative all you need is an extremely
stable BUT very clean source of dc power, a varactor diode and a high quality potentiometer - usually a 10 turn type. Please note that circuit Q tends to be somewhat degraded by using varactor diodes
instead of variable capacitors.
For people who are confused at this point allow me to explain. Diodes when they have a reverse voltage applied exhibit the characteristics of a capacitor. Altering the voltage alters the capacitance.
Common diodes such as 1N914 and 1N4004 can be used but more commonly we use varactor diodes specifically manufactured for vco use e.g. Motorola's MVAM115, Philips BB112 and BB212. They are also
sometimes called tuner diodes.
The design requirements asked for were:-
(a) frequency coverage 1.8 - 2.0 Mhz
(b) voltage controlled by a frequency synthesiser with an output level sufficient to drive the input of a Phase Locked Loop (PLL)
(c) a further buffered output for a digital frequency readout.
(d) another buffered out put to drive succeeding amplifier stages.
Because in this example the ultimate frequency stability is determined by the reference crystal in the frequency synthesiser there can be some relaxation of stability standards. The buffered outputs
will be covered under buffer amplifiers.
Figure 1 - schematic of a hartley oscillator
Varactor Diode Tuning
Here Cv, the variable capacitor, could be replaced by a suitable varactor diode as a tuning diode, and in actual fact our reader has on hand a Motorola MVAM115 varactor. This, I think, is broadly similar to my Philips BB112 diode. So we will rehash the above figure 1 to accommodate varactor diode tuning instead of using a conventional variable capacitor.
Figure 2 - schematic of a varactor tuned hartley oscillator
Now I've left Ct and C1 a/b all in the circuit. In this application of a frequency synthesiser they are unlikely to be necessary. To tune from 1.8 to 2.0 MHz is a frequency swing of 2 / 1.8 = 1.111, which when squared means we need a capacitance ratio of 1.234 to 1.
That means the ratio of maximum combined capacitance in the circuit to minimum combined capacitance in the circuit must be 1.234 to 1.
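As a quick sanity check on that ratio (a sketch, not from the article):

```python
# Resonant frequency f = 1 / (2*pi*sqrt(L*C)), so for a fixed L the
# capacitance must scale with 1/f^2 to cover a frequency range.
f_low, f_high = 1.8e6, 2.0e6      # Hz
freq_ratio = f_high / f_low       # ~1.111
cap_ratio = freq_ratio ** 2       # ~1.235 (the article truncates to 1.234)
print(round(freq_ratio, 3), round(cap_ratio, 3))
```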
Looking back at the tutorial on oscillators I said the inductor should have a reactance of about 180 ohms. So around the frequency of interest I expect an inductor of about 15 uH to be used for L1 in
Fig 2.
You should be used to calculating LC numbers by now, but L × C at 1.8 MHz = 7818 (µH·pF) and at 2 MHz it works out to about 6332. Dividing both by our 15 µH inductor we get a Cmin of 422 pF and a Cmax of 520 pF. Incidentally, if you check, 520 / 422 = 1.232:1, so the variation of C is 520 - 422 = 98 pF swing.
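Those LC numbers can be reproduced from the resonance formula; here is a short sketch (the article's 520 pF is a rounded form of roughly 521 pF):

```python
import math

def lc_product_uH_pF(f_hz):
    """LC product in uH * pF for resonance at f_hz: L*C = 1/(2*pi*f)^2."""
    return 1e18 / (2 * math.pi * f_hz) ** 2

L1 = 15.0                             # uH, for ~180 ohm reactance near 1.9 MHz
c_max = lc_product_uH_pF(1.8e6) / L1  # ~521 pF at the low end of the band
c_min = lc_product_uH_pF(2.0e6) / L1  # ~422 pF at the high end
print(round(c_max), round(c_min))
```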
For synthesisers, or any voltage tuning, you should have the largest voltage swing possible. This minimises the effect of noise voltage on the tuning voltage. My BB112 diode can be operated ideally from 1 V to 8 V. That means we can tune the 200 kHz (2.0 - 1.8 MHz) with a variation of 8 - 1 = 7 volts. It follows that 7 V / 200 kHz = 35 µV/Hz. If our noise level is below this then the tuning can't be varied or FM'ed by noise.
At 8 V my diode exhibits a capacitance of around 28 pF, while at 1 V the capacitance is about 500 pF.
Diodes Back-To-Back
You will note I have two diodes back-to-back in series in Fig 2. Although this in effect divides total varactor diode capacitance by two it eliminates the nasty effect of the rf present in the tank
circuit driving a single diode into conduction on peaks which will increase the bias voltage, this also gives rise to harmonics.
It follows that my varactor diode capacitance now swings a net approximate 14 pF up to 250 pF when the bias voltage is varied from 8 volts down to 1 volt. You can of course go below 1V for higher
capacitance but I tend to be conservative and generally do not go below 1V very much.
Now we have a net swing of 250 - 14 or 236 pF. You will recall above I said "the variation of C is 520 - 422 = 98pF swing" so how do I reduce a 236 pF swing down to a 98 pF swing? Look at capacitor
C2 which is in series with both varactor diodes, does this not reduce the net capacitance?
Calculating Net Capacitance
This is a simple mathematical problem (Oh God - not again <G>). In this case we can use the formula C2 = [(Ca * Cb) / (Ca - Cb)], where Ca = existing C, in this case 236 pF, and Cb = desired C, or 98 pF. Now this isn't terribly accurate but you finish up in the ball park. Plugging those numbers into our sums we get C2 = [(236 * 98) / (236 - 98)] or 23128 / 138, which is about 168 pF.
Bearing in mind with a vco and the voltage swings involved, you can get a fair bit of leeway and that each varactor diode varies greatly from predicted data of capacitance versus voltage. That means
a lot of this is guesswork or suck-and-see. Technically it means it's all determined "empirically". All of that just says we will use a 180 pF capacitor for C2.
Using a 180 pF capacitor for C2 and putting it in series with D1 and D2 we get, at 1 volt, D1 = 500 pF, D2 = 500 pF and C2 = 180 pF. Net result = 1 / [(1 / 500) + (1 / 500) + (1 / 180)], which is about 105 pF.
Similarly at 8 volts we get D1 = 28 pF, D2 = 28 pF and C2 = 180 pF. Net result = 1 / [(1 / 28) + (1 / 28) + (1 / 180)], which is about 13 pF.
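The series combinations can be checked directly; this is a sketch using the approximate diode capacitances quoted in the text:

```python
def series(*caps_pF):
    """Net capacitance (pF) of capacitors connected in series."""
    return 1.0 / sum(1.0 / c for c in caps_pF)

C2 = 180.0                        # pF fixed series capacitor
net_1V = series(500, 500, C2)     # both varactors ~500 pF at 1 V bias
net_8V = series(28, 28, C2)       # both varactors ~28 pF at 8 V bias
print(round(net_1V), round(net_8V), round(net_1V - net_8V))  # 105 13 92
```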
It follows the swing now becomes 13 pf to 105 pF or a net 92 pF which is near enough for this exercise. I had said very much earlier "by using our 15 uH inductor we get a Cmin of 422 pF and Cmax of
520 pf. Which incidentally if you check 520 / 422 = 1.232:1 So the variation of C is 520 - 422 = 98pF swing". How do we get near this requirement?
If we need Cmax of 520 pF and our series connection gives us 105 pF we need an extra 520 - 105 = 415 pF. On the other hand Cmin required is 422 pF and the series connection provides 13 pf we need 422
- 13 = 409 pF. It can be seen if we allow a trimmer of say 25 pF for Ct, which is the suggested trimmer in figure 2 - (that is Ct can varied from say 5 to 25 pF) - and we allow the combination C1 a/b
to be a total of around 390 pF we have obviously achieved our goal. Is this not cool?
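The padding arithmetic above can be laid out in a short sketch:

```python
# Fixed padding needed alongside the series varactor branch to hit the
# target tank capacitance at each band edge.
c_target_max, c_target_min = 520.0, 422.0   # pF, from the LC calculation
branch_1V, branch_8V = 105.0, 13.0          # pF, series varactor branch

pad_low_end = c_target_max - branch_1V      # 415 pF needed at 1.8 MHz
pad_high_end = c_target_min - branch_8V     # 409 pF needed at 2.0 MHz
print(pad_low_end, pad_high_end)
# A fixed ~390 pF for C1 a/b plus a 5-25 pF trimmer for Ct spans roughly
# 395-415 pF, bracketing both values.
```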
For our inductor L1 I would use a toroid although if you have access to a variable inductor you could use it. An air cored inductor most likely would be too large to consider. Suitable toroids of the
Amidon / Micrometals type would at 2 Mhz be the T50-2 type which would require about 55 turns of #26 wire or even the T68-2 type requiring about 51 turns of #24 wire. Both gauges mentioned are those
which will conveniently fit around the core.
No matter your frequency range of interest the basic principles outlined above will more or less still apply.
Tuning Diode Voltage
For a frequency synthesiser the tuning voltage is derived from the low pass filter of the PLL and you don't need to worry about it. On the other hand when you have an application of replacing a
variable capacitor and manually tuning with say a ten turn potentiometer you need to be very careful about the "quality" of the voltage. It MUST be clean!
Below in Figure 3 is a suggested schematic for deriving suitable tuning voltages.
Figure 3 - schematic of deriving varactor diode tuning voltage
The 10K pot is your 5 or 10 turn "quality" potentiometer for tuning, the upper and lower trim pots (set and forget) allow you to adjust the voltage range of your choice that your tuning potentiometer
will see. Again use "quality" trimpots. The 100K resistor and the two 0.1 uF capacitors are further filtering. Obviously there is considerable interaction between the trimpots and potentiometer so
expect a lot of juggling back and forth.
If you wished, in some applications, both trimpots could be replaced by fixed resistors. It is simply a matter of using ohms law.
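As an illustration of that Ohm's-law juggling, here is a hypothetical divider calculation; the supply voltage and trimpot settings below are assumptions made for the sketch, not values from the article:

```python
# Hypothetical Fig 3 divider: trimpots (set-and-forget) define the voltage
# window that the 10-turn tuning pot sweeps across the varactors.
V_supply = 12.0                                # V, assumed regulated supply
R_top, R_pot, R_bottom = 4.7e3, 10e3, 1.2e3    # ohms, assumed settings

R_total = R_top + R_pot + R_bottom
v_low = V_supply * R_bottom / R_total              # wiper at bottom of pot
v_high = V_supply * (R_bottom + R_pot) / R_total   # wiper at top of pot
print(round(v_low, 2), round(v_high, 2))  # roughly the 1 V - 8 V window
```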
MC12149 Low power voltage controlled oscillator buffer
For higher frequencies consider the MC12149. It is intended for applications requiring high frequency signal generation up to 1300 MHz. An external tank circuit is used to determine the desired
frequency of operation. The VCO is realized using an emitter–coupled pair topology. The MC12149 can be used with an integrated PLL IC such as the MC12202 1.1 GHz Frequency Synthesizer to realize a
complete PLL sub–system. The device is specified to operate over a voltage supply range of 2.7 to 5.5 V. It has a typical current consumption of 15 mA at 3.0 V which makes it attractive for battery
operated handheld systems.
Data Sheet Page: MC12149 data sheet
NOTE: - The Motorola site does NOT support resume downloads.
74HC4046 phase-locked-loop
The 74HC4046 phase-locked-loop which is an integrated circuit contains a voltage controlled oscillator and will work as high as 17 Mhz.
With the 74HC4046 VCO, its tuning range is determined by one external capacitor C1 (between C1A and C1B) and one external resistor R1 (between R1 and GND) or two external resistors R1 and R2 (between
R1 and GND, and R2 and GND). Resistor R1 and capacitor C1 determine the frequency range of the VCO. Resistor R2 enables the VCO to have a frequency offset if required. Look at the 74HC4046 page.
VCO modelling and simulation in WinSPICE3
I'm grateful to a fellow Australian, Simon Harpham, for supplying me with a link to this site, Silicon Devices (UK) Limited.
"This note shows how a simple VCO model maybe created, then simulated to show its functionality to quite a complex level using a relatively simple test-bench implemented in a readily available SPICE
simulator by using a few simple mathematical formulae...."
Link: http://www.silicondevices.com/Resources/AppsNotes/ModellingVCOs.html
the author Ian C. Purdie, VK2TIP of www.electronics-tutorials.com asserts the moral right to be identified as the author of this web site and all contents herein. Copyright © 2000, all rights
reserved. See copying and links. These electronic tutorials are provided for individual private use and the author assumes no liability whatsoever for the application, use, misuse, of any of these
projects or electronics tutorials that may result in the direct or indirect damage or loss that comes from these projects or tutorials. All materials are provided for free private and public use.
Commercial use prohibited without prior written permission from www.electronics-tutorials.com.
Copyright © 2000 - 2005, all rights reserved. URL - http://www.electronics-tutorials.com/oscillators/voltage-controlled-oscillators.htm
Updated 8th May, 2005
Martingale part of the discontinuous put payoff
I need the martingale part of the put payoff (not $C^2$..). Where $S_t=exp(\sigma W_t -\frac{\sigma^2t}{2})$
$d[(S_t -K)^+ ]$ ??
I guess I need to use local times, but how?
Thank you all!
Proof, for $\phi(S_t)=(K-S_t)^+$:
Step 1, smoothing: $\phi_\epsilon(x)=1_{x\leq K-\epsilon}\cdot\phi(x)+1_{x\in]K-\epsilon,K]}\cdot\psi(x)$, where $\psi(x)=-\frac{1}{\epsilon^2}(K-x)^2(K-x-2\epsilon)$.
This function is $C^1$, and also $C^2$ except on a countable set.
Step 2, Itô on $\phi_\epsilon$: $\phi_\epsilon(S_t)= \phi_\epsilon(S_0)+\int^t_0\phi_\epsilon^{'}(S_u)\,dS_u+\frac{1}{2}\int^t_0 1_{S_u\in[K-\epsilon,K]}\phi_\epsilon^{''}(S_u)\,d\langle S\rangle_u$,
because $\phi_\epsilon^{''}=0$ outside of $[K-\epsilon,K]$.
Let's denote $L_t=\lim_{\epsilon \to 0}\frac{1}{2}\int^t_0 1_{S_u\in[K-\epsilon,K]}\phi_\epsilon^{''}(S_u)\,d\langle S\rangle_u$; it is a finite variation process since it is increasing.
Step 3: We have that $\phi_\epsilon(S_t)-\phi_\epsilon(S_0)-\int^t_0\phi_\epsilon^{'}(S_u)\,dS_u \xrightarrow{L^2} \phi(S_t)-\phi(S_0)-\int^t_0\phi^{'}(S_u)\,dS_u$
(because $\int^t_0\phi_\epsilon^{'}(S_u)1_{S_u\in[0,K-\epsilon]}\,dS_u \xrightarrow{L^2}\int_0^t\phi^{'}(S_u)\,dS_u$, by the Itô isometry).
Finally we get the Tanaka-type formula, namely
$(K-S_t)^+=(K-S_0)^+-\int_0^t 1_{S_u\leq K}\,dS_u+L_t$,
and the martingale part is
$(K-S_0)^+-\int_0^t 1_{S_u\leq K}\,\sigma S_u\,dB_u$.
Simply use the Itô-Tanaka formula. I guess this should give something like $df(S_t)=D_-f(S_t)\,dS_t+\frac{1}{2}\int dL^a_t\,f''(da)$,
with $f(S)=(S-K)^+$, so $D_-f(S)=1_{]K,+\infty[}(S)$ and $f''(ds)=\delta_K(ds)$.
This gives, if I am not mistaken:
$d(S_t-K)^+=1_{]K,+\infty[}(S_t)\,dS_t+\frac{1}{2}\,dL^K_t$,
with $L^K_t$ being the local time of your geometric Brownian motion $S$ around level $K$ at time $t$.
Regards
Edit NB:
-$D_-$ stands for the left derivative of $f$
-$f''(ds)$ stands for second derivatives in the distribution-sense.
-The use of the Itô-Tanaka formula allows one to avoid the mollifier-type argument needed for the direct proof of the result (which is quite cumbersome in my opinion). I should add that the Itô-Tanaka formula is applicable to every $f$ that is the difference of two convex functions, if I remember well, which is the case here with $f(x)=(x-K)^+$.
Whitesburg, GA Algebra 2 Tutor
Find a Whitesburg, GA Algebra 2 Tutor
...I love helping students overcome their stumbling blocks and I look forward to helping you overcome yours in the coming months!Tutored on Algebra 1 topics during high school, college, and as a
GMAT instructor for three years. Scored in the 99th percentile on the GMAT. Tutored on Algebra 2 topics during high school, college, and as a GMAT instructor for three years.
28 Subjects: including algebra 2, physics, calculus, economics
...I receive a 1320 on my SAT, a 710 in math and a 610 in verbal. The essay section was not part of the SAT when I graduated. In High School I made all A's in all my Math Classes, and in my
Computer classes.
11 Subjects: including algebra 2, algebra 1, Microsoft Excel, precalculus
...During my Junior and Senior year, I was a Recitation Leader for the Freshman College Algebra courses. Every Tuesday and Thursday of the semester I would lead the classroom in previous homework
discussions and answer any questions students had in preparation for tests. This enjoyment of helping others inspired me to continue tutoring in my spare time.
9 Subjects: including algebra 2, algebra 1, precalculus, trigonometry
...I have successfully tutored students in all concept areas of Geometry. These include classifying angles and triangles, perimeter, area, volume, solving for missing sides and angles of
triangles, diameter and chord of circles, and coordinate geometry. You will increase and improve computation of mean, mode, median, reciprocals, proportions, and factorials.
17 Subjects: including algebra 2, reading, English, writing
...This will never change, and unless the Skype call is a lesson, there will never be a charge. My standard rate is $35 per hour. However, payment options are negotiable to an extent(please
message me for more details). Lessons are booked in two hour blocks, with a 24 hour cancellation policy.
14 Subjects: including algebra 2, reading, biology, chemistry
Richardson Math Tutor
Find a Richardson Math Tutor
...I am also very familiar with both the Zumdahl and Brown, LeMay and Bursten textbooks that are used in many school districts' AP Chemistry courses. Even more, I have the talent to completely and
simplistically explain any topic so that the student will easily grasp it and then use it to successfu...
17 Subjects: including algebra 1, algebra 2, chemistry, geometry
...In addition, the framework of physics, as a subject, is built on three basic units, namely length (meters), mass (kg), and time (seconds). A student who does not have this background will have flaws in topics such as mechanics, light/optics, heat, current electricity, etc. While in Africa, I was ...
16 Subjects: including calculus, statistics, biochemistry, physiology
...I have taught several courses on networking including computer networks, TCP/IP and internet protocols, network security and cryptography, computer and network security, etc. I hold a PhD
degree in electrical engineering. I have taken many courses on linear algebra and advanced topics that depe...
13 Subjects: including algebra 1, algebra 2, Microsoft Excel, general computer
...Whether you are in elementary, secondary or college, my knowledge and skill set will be a valuable asset in preparing you for a successful and productive academic career! Outside of classes I
took, I have a lot of experience with genetics in the practical setting of the laboratory. My research at SMU focused on genetic pathways.
30 Subjects: including geometry, precalculus, trigonometry, SAT math
...I can teach others the ins and outs of training for and running long distances from my own experience. My father taught me how to play chess at a very early age. I even placed second in a chess
tournament once.
48 Subjects: including algebra 1, algebra 2, calculus, chemistry | {"url":"http://www.purplemath.com/Richardson_Math_tutors.php","timestamp":"2014-04-20T01:49:11Z","content_type":null,"content_length":"23729","record_id":"<urn:uuid:c95ddf9d-c42c-4a85-ad08-f8e181ccd046>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
return row and column indices given 2 values which define a range
Problem 2091. return row and column indices given 2 values which define a range
Inspired by problem http://www.mathworks.co.kr/matlabcentral/cody/problems/856-getting-the-indices-from-a-matrice Inputs: - matrix A, lower limit, upper limit Outputs: - indices of matrix elements which are bigger than or equal to the lower limit and smaller than the upper limit
A little complication: let your function be able to deal with a random order of the input arguments.
If all input arguments have the same size, assume that the first argument is the "matrix" with value(s).
Don't use "find" and don't use "regexp".
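The core of the problem survives a change of language: scan the matrix without any find() analog and convert each linear index to a row/column pair by hand. A rough C sketch of that logic (the Cody problem itself is MATLAB; the function name is illustrative, and the argument-reordering twist is ignored). MATLAB matrices are column-major and 1-based, so linear index k maps to row k % nrows + 1, column k / nrows + 1:

```c
#include <stddef.h>

/* Report the 1-based (row, col) positions of elements satisfying
 * lower <= A[k] < upper, scanning a column-major matrix.
 * Returns the number of hits; pairs are written to rows[]/cols[]. */
size_t range_indices(const double *A, size_t nrows, size_t ncols,
                     double lower, double upper,
                     size_t *rows, size_t *cols) {
    size_t hits = 0;
    for (size_t k = 0; k < nrows * ncols; k++) {   /* column-major scan */
        if (A[k] >= lower && A[k] < upper) {
            rows[hits] = k % nrows + 1;            /* 1-based, like MATLAB */
            cols[hits] = k / nrows + 1;
            hits++;
        }
    }
    return hits;
}
```

With A = [1 2 3; 4 5 6] stored column-major, lower = 2, and upper = 5, this reports three hits at (2,1), (1,2), and (1,3); note the upper limit is exclusive, per the problem statement.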
Solution Comments
1 Comment
In my own solution I did not use ind2sub.
2 Comments
Nice solution, I overlooked the potential usage of ind2sub.
José Ramón Menzinger
on 9 Jan 2014
You should use 'varargin' here ;-) take a look: http://www.mathworks.de/matlabcentral/cody/problems/2091-return-row-and-column-indices-given-2-values-which-define-a-range/solutions/382339
1 Comment
using varargin is also a good suggestion. Thanks. | {"url":"http://www.mathworks.com/matlabcentral/cody/problems/2091-return-row-and-column-indices-given-2-values-which-define-a-range","timestamp":"2014-04-24T09:05:46Z","content_type":null,"content_length":"33188","record_id":"<urn:uuid:7fee664a-8ca1-46fd-b082-fc5e2bf76c78>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
higher impedance easier on electric system?
Re: higher impedance easier on electric system?
A lot of people have been having issues with the Yellow Tops lately, I have a brand new one that rests at 12.1, I need to bring it in, this is the second one in a year.
I plan on upgrading to something better upfront with a 120ah+ for the rear.
Re: higher impedance easier on electric system?
Re: higher impedance easier on electric system?
o ok yea u should my duralast gold has been fine sitting at 14.3.
Re: higher impedance easier on electric system?
okay, easy way to settle this: apparently, no one has actually seen the efficiency graphs of different amplifiers. look them up, and i hope you find them, and not just take my word. every amp has a max efficiency point. some are at higher impedances than others, but the point is that max efficiency is at/near the amplifier's full clean potential. the only real way to tell is to actually measure rail sag and watch where the voltage drops beyond a typical operating slope, or even drops some on particular amps with a good rail regulation. a good indicator i look at is
rms doubling with ohm load. take, for instance, your 600rms@1. if it does 300rms@2, and 150rms@4, then i would have to say that power supply is in the lower operating range at 2 and 4, which
would be typically 40-60%, sometimes less. now, your other example of a 1500 that does 600@4, then it would be doing 375@4 if 1ohm was the most efficient. likely, that would be an amp that likes
2ohm, where it is efficient, but not overly strained and heated. old school kicker amps were a good one to look at. they gave the 1,2,2.66,4, and 8 ohm ratings. 2.66 is where they reached most
efficiency. usually looked like this: 37.5@8, 75@4, 125@2.66, 125@2, 100@1. so basically, in your example, the 600@1 would be more efficient than the 600@4.
as for the optimas, i've been saying that for a while. last year, or 1.5-2years ago, i threw out 13 optimas. last winter, my best friend pulled in the driveway with his newer yellow-top swollen
like i've never seen before, and i have 2 more yellow tops, and 1 red top i have to go drop-off for disposal. then, there is my last red top, which i have to try and save, but it might just be
toast like the rest. i run an x2power in the daily.
Re: higher impedance easier on electric system?
How can this statement be valid if we do not even know which 2 amps are being compared? As you say, it depends on design, which also depends on make and model.
And what about dynamic headroom on a 600w @ 4ohm/1ohm stable amp compared to 600w @ 1ohm, assuming the 1ohm amp may not be .5 ohm stable? And then again, how about the possibility of .1% THD at 4ohm rated output vs the possibility of 10% THD at 1ohm rated output?
OP, which 2 amps are you trying to compare, exactly?
Re: higher impedance easier on electric system?
what i am looking at is how the power is passed through the torroid(s), where the greatest majority of efficiency is determined. at lower power levels, a lot of the field enters in and around the
core and a lower amount is sent through the secondaries, as they only need so much. the use in the secondaries lowers resistance effect the secondaries have on the total inductance. dynamic
headroom is a big part to do with the rail storage caps and impedance drop below the mean topology. however, i was just answering the simple question of efficiency. iirc, the 2 amps were
hypothetical amps, so i went with what is most likely and common traits, which is not usually .5 stable, and even if it is .5, it would probably still reach the most efficiency at 1ohm, or 2ohm
load. at lower imp. there is more strain and loss in the whole system.
Re: higher impedance easier on electric system?
Is this a contradiction to your earlier post? Or, are you referring to .5 ohm operation vs 1ohm operation? Are you suggesting an amplifier can be 86% efficient at 4 ohms and 98% at 1 ohm?...and
Dynamic Headroom.. this would be where some amps' max power rating might be handy, right? For instance, the Orion HCCA25001 is rated 625 @ 4, 1250 @ 2, 2500 @ 1 and 5000 max. does this mean there
is enough reserve capacitance to possibly achieve 5000 watts at .5 or .25 and at this point, there is no dynamic headroom available and the efficiency is at rock bottom?
it is late..er, early and I may not be making much sense, but thank you for trying.
Re: higher impedance easier on electric system?
efficiency is determined by the amplifier topology, which determines how efficiency reacts to changes in impedance load.
Amplifier efficiency climbs as you increase power output - until the limits of the power supply are met and thermal losses are linear.
Efficiency drops as impedance load is reduced (same output power level) due to increases in current: current = heat = losses. The drop is worse for Class A/B than modern Class D or other
Audio - ClassD: October 2010
AVR-2308CI Measurements & Analysis — Reviews and News from Audioholics
the age old question: do you draw less current with 600W@1 ohm or 600W@4 ohm?
answer: depends on the two amplifiers in question.
in the examples given above: if the amplifier is near its rating, its efficiency is near peak. highest efficiency would be Class D and operating near rating - whatever impedance that may be.
of course, headroom/crest factor/etc. certainly come into play from a musical standpoint. we prefer our amplifiers to have more output capability for musical peaks, meaning most of the time they
are operating at lower efficiency points, waiting for a musical peak.
for the OP: you have a 454 block - you have flexibility as to what size (or how many) alternators you can put on that beast.
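The "do you draw less current" question above reduces to simple arithmetic once an efficiency is fixed: input power is output power divided by efficiency, and supply current is input power divided by supply voltage. A minimal C sketch, where the 13.8 V supply and the 75%/85% efficiency figures used below are illustrative assumptions, not measurements of any particular amp:

```c
/* Rough supply-current estimate for an amplifier.
 * input power  = output power / efficiency
 * supply current = input power / supply voltage */
double supply_current(double output_watts, double efficiency, double supply_volts) {
    return output_watts / efficiency / supply_volts;
}
```

For example, 600 W delivered at 75% efficiency from a 13.8 V supply draws about 58 A, while the same 600 W at 85% draws about 51 A - whichever amp is more efficient at its load draws less current, regardless of whether the load is 1 ohm or 4 ohms.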
Re: higher impedance easier on electric system?
Resurrecting an old thread.. from this last graph, would the a/b class amp show a much higher efficiency at say 400 or 800 watts? or is this amp dependent, i.e., power supply dependent?
---------- Post added at 08:17 PM ---------- Previous post was at 08:16 PM ----------
Maybe its just me but this graph is so misleading to me..
Re: higher impedance easier on electric system?
So, is this graph showing that at the optimal impedance level of the hypothetical amps above (class D and class A/B), Class D reaches peak efficiency much faster than the a/b class?!
Re: higher impedance easier on electric system?
@neo_styles ; @pro-rabbit ; @ciaonzo ; @keep_hope_alive ; @Spooney ; @av83 ;
Just listing people I think could explain past common dogmatic beliefs.. Thanks for any replies!
Re: higher impedance easier on electric system?
Nobody can answer this better than KHA, man. I can't remember enough of my BEE knowledge to be able to cover it adequately, but it has to do with the P/S switching required to output a higher
level of power IIRC and how class D topology allows for faster switching than A/B.
Re: higher impedance easier on electric system?
How does that correspond to impedance levels and efficiency on electrical systems? Thanks Neo!
Why are higher impedances usually more efficient, with lower THD ratings?
Is this always a general law for any class/amp, or is it just a particular case of a/b class?
btw, what does your acronym BEE mean?
Re: higher impedance easier on electric system?
Basic Electrical Engineering. One of the first classes we take for my line of work.
As for impedance levels and efficiency, it's not overly complicated. Just think of what "impedance" means. Just like "resistance" is the ability to resist voltage, impedance is the ability to
resist the flow of current. So at lower impedances, the system is more easily able to accept changes in current.
Re: higher impedance easier on electric system?
Lol, yet another engineer on here!
Makes sense.. but by lower impedance you mean lower impedance being 4 ohm compared to 2 ohm, right? | {"url":"http://www.caraudio.com/forums/amplifiers/554891-higher-impedance-easier-electric-system-2-print.html","timestamp":"2014-04-20T04:25:16Z","content_type":null,"content_length":"35013","record_id":"<urn:uuid:13d75ea9-df4c-4fd6-9db3-4d9bccabb48c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Theory on loop transformations
Wei Li <wei@cs.cornell.EDU>
Sat, 3 Apr 1993 18:19:48 GMT
From comp.compilers
Newsgroups: comp.compilers
From: Wei Li <wei@cs.cornell.EDU>
Keywords: theory, optimize, bibliography, FTP
Organization: Cornell University, CS Dept., Ithaca, NY
References: 93-04-006
Date: Sat, 3 Apr 1993 18:19:48 GMT
assmann@karlsruhe.gmd.de (Uwe Assmann) writes:
|> In general, I am interested in a general theory on loop transformations.
We have a matrix-oriented approach to loop transformations that uses
non-singular matrices to represent loop transformations. Non-singular
matrices generalize the unimodular approach (unimodular matrices are a
special case of non-singular matrices in which the determinant is 1 or
-1). Some important transformations such as loop tiling can only be
modeled by non-singular matrices. Furthermore, we provide a completion
algorithm that makes the theory easier to use in practice. In
transformations for parallelism and data locality, it is very useful to
have such a completion algorithm. Our work was presented at the 5th
Compiler Workshop at Yale last year. A journal version is to appear soon
in IJPP.
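The unimodular-vs-non-singular distinction above comes down to a determinant test on the integer matrix that maps iteration vectors. A toy illustration in C for the 2-deep-loop case (this is a sketch for intuition, not code from the papers):

```c
/* A 2x2 integer matrix T applied to an iteration vector (i, j).
 * T is unimodular iff det(T) is +1 or -1; it is merely
 * non-singular iff det(T) != 0. */
int det2(const int T[2][2]) {
    return T[0][0] * T[1][1] - T[0][1] * T[1][0];
}

/* Map an iteration vector v through the transformation T. */
void apply2(const int T[2][2], const int v[2], int out[2]) {
    out[0] = T[0][0] * v[0] + T[0][1] * v[1];
    out[1] = T[1][0] * v[0] + T[1][1] * v[1];
}
```

Loop interchange, {{0,1},{1,0}}, has determinant -1 and so is unimodular: it maps iteration (i, j) to (j, i). A scaling such as {{2,0},{0,1}} has determinant 2 - non-singular but not unimodular, i.e., the kind of transformation the unimodular approach cannot express.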
We have used the non-singular matrix framework to develop optimizations
for data locality in our compiler for NUMA parallel machines. You can
find how the transformation matrix is constructed automatically. The
algorithms are in the paper that appeared in ASPLOS V (ACM SIGPLAN
Notices, Vol 27, Number 9, Sep. 1992).
The papers can also be found via ftp from Cornell (ftp.cs.cornell.edu):
file: framework.ps
"A Singular Loop Transformation Framework
Based on Non-singular Matrices"
by Wei Li and Keshav Pingali
file: asplos92.ps
"Access Normalization: Loop Restructuring for NUMA Compilers"
by Wei Li and Keshav Pingali
file: pnuma.ps
"Loop Transformations for NUMA Machines"
by Wei Li and Keshav Pingali
SIGPLAN Notices, January 1993
-- Wei Li
Department of Computer Science
Cornell University
Ithaca, NY 14853
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/93-04-016","timestamp":"2014-04-18T05:41:30Z","content_type":null,"content_length":"7507","record_id":"<urn:uuid:1055ee01-dca0-40ba-9e01-12030f41e2d5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gambrills Algebra 1 Tutor
...For guitar, I prefer the student bring in whatever they want to learn, be it in standard notation or tablature. I don't have a music degree. I began studying violin in elementary school in 1979
and began private lessons with a certified Suzuki teacher 2 years later.
49 Subjects: including algebra 1, reading, English, writing
...It is formula heavy and requires more memorization than the previous year of Algebra 2. Performing well on exams like the SAT is partially about the mathematics and partially about the
strategies needed to do well. The SAT includes math equations that range all the way to Algebra 2.
24 Subjects: including algebra 1, reading, calculus, geometry
...I am a licensed teacher and have taught grades 4-6 in public and private schools. Strong reading, writing, and math skills are the foundation for future academic success. I have taught every
aspect of reading from sound and word recognition through the skills necssary to read great literature.
32 Subjects: including algebra 1, reading, English, chemistry
...I am able to work at home or in the pupil's home and can travel up to about 8 miles. I currently work in the Prince George's County School District. I have taught grades 1-12, so I have
experience in all age groups.
4 Subjects: including algebra 1, algebra 2, elementary math, prealgebra
John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information
Technology. He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for t...
18 Subjects: including algebra 1, statistics, geometry, algebra 2 | {"url":"http://www.purplemath.com/gambrills_md_algebra_1_tutors.php","timestamp":"2014-04-21T02:39:27Z","content_type":null,"content_length":"23928","record_id":"<urn:uuid:21d19df3-5513-4d09-a5c8-0eaec556145b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
7.14 Securely Signing and Encrypting with RSA
7.14.1 Problem
You need to both sign and encrypt data using RSA.
7.14.2 Solution
Sign the concatenation of the public key of the message recipient and the data you actually wish to sign. Then concatenate the signature to the plaintext, and encrypt everything, in multiple messages
if necessary.
7.14.3 Discussion
Naïve implementations where a message is both signed and encrypted with public key cryptography tend to be insecure. Simply signing data with a private key and then encrypting the data with a public
key isn't secure, even if the signature is part of the data you encrypt. Such a scheme is susceptible to an attack called surreptitious forwarding. For example, suppose that there are two servers, S1
and S2. The client C signs a message and encrypts it with S1's public key. Once S1 decrypts the message, it can reencrypt it with S2's public key and make it look as if the message came from C.
In a connection-oriented protocol, it could allow a compromised S1 to replay a key transport between C and S1 to a second server S2. That is, if an attacker compromises S1, he may be able to imitate
C to S2. In a document-based environment such as an electronic mail system, if Alice sends email to Bob, Bob can forward it to Charlie, making it look as if it came from Alice instead of Bob. For
example, if Alice sends important corporate secrets to Bob, who also works for the company, Bob can send the secrets to the competition and make it look as if it came from Alice. When the CEO finds
out, it will appear that Alice, not Bob, is responsible.
There are several strategies for fixing this problem. However, encrypting and then signing does not fix the problem. In fact, it makes the system far less secure. A secure solution to this problem is
to concatenate the recipient's public key with the message, and sign that. The recipient can then easily determine that he or she was indeed the intended recipient.
One issue with this solution is how to represent the public key. The important thing is to be consistent. If your public keys are stored as X.509 certificates (see Chapter 10 for more on these), you
can include the entire certificate when you sign. Otherwise, you can simply represent the public modulus and exponent as a single binary string (the DER-encoding of the X.509 certificate) and include
that string when you sign.
The other issue is that RSA operations such as encryption tend to work on small messages. A digital signature of a message will often be too large to encrypt using public key encryption. Plus, you
will need to encrypt your actual message as well! One way to solve this problem is to perform multiple public key encryptions. For example, let's say you have a 2,048-bit modulus, and the recipient
has a 1,024-bit modulus. You will be encrypting a 16-byte secret and your signature, where that signature will be 256 bytes, for a total of 272 bytes. The output of encryption to the 1,024-bit
modulus is 128 bytes, but the input can only be 86 bytes, because of the need for padding. Therefore, we'd need four encryption operations to encrypt the entire 272 bytes.
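The block count in this example is just a ceiling division, and it can be checked directly. A small sketch mirroring the *olen calculation in the code below (assuming OAEP with SHA-1, whose padding overhead per block is 2 * 20 + 2 bytes):

```c
/* Number of RSA-OAEP (SHA-1) encryptions needed to carry a payload.
 * Each encryption to an n-byte modulus accepts at most
 * n - (2 * 20 + 2) bytes of plaintext; round up the quotient. */
unsigned int rsa_oaep_blocks(unsigned int payload_bytes, unsigned int modulus_bytes) {
    unsigned int per_block = modulus_bytes - (2 * 20 + 2);
    return (payload_bytes + per_block - 1) / per_block;
}
```

Here rsa_oaep_blocks(272, 128) is 4: the 16-byte secret plus 256-byte signature, sent to a 1,024-bit (128-byte) modulus at 86 bytes per block, takes exactly the four encryption operations described above.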
In many client-server architectures where the client initiates a connection, the client won't have the server's public key in advance. In such a case, the server will often send a copy of its
public key at its first opportunity (or a digital certificate containing the public key). In this case, the client can't assume that public key is valid; there's nothing to distinguish it from an
attacker's public key! Therefore, the key needs to be validated using a trusted third party before the client trusts that the party on the other end is really the intended server. See Recipe 7.1.
Here is an example of generating, signing, and encrypting a 16-byte secret in a secure manner using OpenSSL, given a private key for signing and a public key for the recipient. The secret is placed
in the buffer pointed to by the final argument, which must be 16 bytes. The encrypted result is placed in the third argument, which must be big enough to hold the modulus for the public key.
Note that we represent the public key of the recipient as the binary representation of the modulus concatenated with the binary representation of the exponent. If you are using any sort of high-level
key storage format such as an X.509 certificate, it makes sense to use the canonical representation of that format instead. See Recipe 7.16 and Recipe 7.17 for information on converting common
formats to a binary string.
#include <openssl/sha.h>
#include <openssl/rsa.h>
#include <openssl/objects.h>
#include <openssl/rand.h>
#include <string.h>
#define MIN(x,y) ((x) > (y) ? (y) : (x))
unsigned char *generate_and_package_128_bit_secret(RSA *recip_pub_key,
RSA *signers_key, unsigned char *sec, unsigned int *olen) {
unsigned char *tmp = 0, *to_encrypt = 0, *sig = 0, *out = 0, *p, *ptr;
unsigned int len, ignored, b_per_ct;
int bytes_remaining; /* MUST NOT BE UNSIGNED. */
unsigned char hash[20];
/* Generate the secret. */
if (!RAND_bytes(sec, 16)) return 0;
  /* Now we need to sign the public key and the secret both.
   * Copy the secret into tmp, then the public key and the exponent.
   */
  len = 16 + RSA_size(recip_pub_key) + BN_num_bytes(recip_pub_key->e);
if (!(tmp = (unsigned char *)malloc(len))) return 0;
memcpy(tmp, sec, 16);
if (!BN_bn2bin(recip_pub_key->n, tmp + 16)) goto err;
if (!BN_bn2bin(recip_pub_key->e, tmp + 16 + RSA_size(recip_pub_key))) goto err;
/* Now sign tmp (the hash of it), again mallocing space for the signature. */
if (!(sig = (unsigned char *)malloc(BN_num_bytes(signers_key->n)))) goto err;
if (!SHA1(tmp, len, hash)) goto err;
if (!RSA_sign(NID_sha1, hash, 20, sig, &ignored, signers_key)) goto err;
  /* How many bytes we can encrypt each time, limited by the modulus size
   * and the padding requirements.
   */
  b_per_ct = RSA_size(recip_pub_key) - (2 * 20 + 2);
if (!(to_encrypt = (unsigned char *)malloc(16 + RSA_size(signers_key))))
goto err;
  /* The calculation before the mul is the number of encryptions we're
   * going to make.  After the mul is the output length of each
   * encryption.
   */
  *olen = ((16 + RSA_size(signers_key) + b_per_ct - 1) / b_per_ct) *
          RSA_size(recip_pub_key);
  if (!(out = (unsigned char *)malloc(*olen))) goto err;
/* Copy the data to encrypt into a single buffer. */
ptr = to_encrypt;
bytes_remaining = 16 + RSA_size(signers_key);
memcpy(to_encrypt, sec, 16);
memcpy(to_encrypt + 16, sig, RSA_size(signers_key));
p = out;
while (bytes_remaining > 0) {
/* encrypt b_per_ct bytes up until the last loop, where it may be fewer. */
    if (!RSA_public_encrypt(MIN(bytes_remaining, b_per_ct), ptr, p,
                            recip_pub_key, RSA_PKCS1_OAEP_PADDING)) {
      free(out);
      out = 0;
      goto err;
    }
    bytes_remaining -= b_per_ct;
ptr += b_per_ct;
/* Remember, output is larger than the input. */
    p += RSA_size(recip_pub_key);
  }
 err:
  if (sig) free(sig);
  if (tmp) free(tmp);
  if (to_encrypt) free(to_encrypt);
  return out;
}
Once the message generated by this function is received on the server side, the following code will validate the signature on the message and retrieve the secret:
#include <openssl/sha.h>
#include <openssl/rsa.h>
#include <openssl/objects.h>
#include <openssl/rand.h>
#include <string.h>
#define MIN(x,y) ((x) > (y) ? (y) : (x))
/* recip_key must contain both the public and private key. */
int validate_and_retreive_secret(RSA *recip_key, RSA *signers_pub_key,
unsigned char *encr, unsigned int inlen,
unsigned char *secret) {
int result = 0;
BN_CTX *tctx;
unsigned int ctlen, stlen, i, l;
  unsigned char *decrypt, *signedtext = 0, *p, hash[20];
if (inlen % RSA_size(recip_key)) return 0;
if (!(p = decrypt = (unsigned char *)malloc(inlen))) return 0;
  if (!(tctx = BN_CTX_new())) {
    free(decrypt);
    return 0;
  }
  RSA_blinding_on(recip_key, tctx);
for (ctlen = i = 0; i < inlen / RSA_size(recip_key); i++) {
if (!(l = RSA_private_decrypt(RSA_size(recip_key), encr, p, recip_key,
RSA_PKCS1_OAEP_PADDING))) goto err;
encr += RSA_size(recip_key);
p += l;
    ctlen += l;
  }
  if (ctlen != 16 + RSA_size(signers_pub_key)) goto err;
stlen = 16 + BN_num_bytes(recip_key->n) + BN_num_bytes(recip_key->e);
if (!(signedtext = (unsigned char *)malloc(stlen))) goto err;
memcpy(signedtext, decrypt, 16);
if (!BN_bn2bin(recip_key->n, signedtext + 16)) goto err;
if (!BN_bn2bin(recip_key->e, signedtext + 16 + RSA_size(recip_key))) goto err;
if (!SHA1(signedtext, stlen, hash)) goto err;
if (!RSA_verify(NID_sha1, hash, 20, decrypt + 16, RSA_size(signers_pub_key),
signers_pub_key)) goto err;
memcpy(secret, decrypt, 16);
  result = 1;
 err:
  if (signedtext) free(signedtext);
  free(decrypt);
  return result;
}
7.14.4 See Also
Recipe 7.1, Recipe 7.16, Recipe 7.17 | {"url":"http://etutorials.org/Programming/secure+programming/Chapter+7.+Public+Key+Cryptography/7.14+Securely+Signing+and+Encrypting+with+RSA/","timestamp":"2014-04-19T19:37:44Z","content_type":null,"content_length":"127911","record_id":"<urn:uuid:a8de91b3-17ae-478c-835d-2f6666ee823d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector Tutorial
This tutorial will teach you the basics of vector math. Vectors are useful in 2D and 3D graphics, collision detection, physics, and many other areas of game programming.
What's a vector?
struct Vector {
    float x;
    float y;
    float z;
};
typedef struct Vector Vector;
Expressing positions with vectors
The origin in a coordinate system is the position represented by the vector {0, 0, 0}. World coordinates are measured relative to the origin. You will sometimes also work in local coordinates, which are measured relative to a position that could be different from the origin.
In the coordinate system used throughout this tutorial, the X axis goes from left to right, the Y axis goes from bottom to top, and the Z axis goes from far to near. This corresponds to the
coordinate system most commonly used for three-dimensional OpenGL projections.
Expressing directions with vectors
The magnitude of a vector, also known as its length, is its distance from the origin. Vectors of equal magnitude can be visualized as points on the outside of a sphere (3D) or a circle (2D) with a radius equal to the vector's magnitude. You can normalize a vector, making its magnitude equal to 1. This is useful for a number of things. Example:
This problem can be easily solved by using a normalized direction vector. The vector points in the direction the spaceship is facing; when you move the spaceship, you use the vector to calculate the
change in position. For example:
#define SPACESHIP_MOVE_SPEED 0.2f /* Arbitrary value; distance the spaceship moves per frame */

spaceship.position.x += (SPACESHIP_MOVE_SPEED * spaceship.direction.x);
spaceship.position.y += (SPACESHIP_MOVE_SPEED * spaceship.direction.y);
spaceship.position.z += (SPACESHIP_MOVE_SPEED * spaceship.direction.z);
Using a normalized vector, the spaceship moves the same distance regardless of which way it's facing. This certainly isn't the only way; you could accomplish the same thing by storing the ship's
direction as an angle, and using the trigonometric functions sin() and cos(). Vectors are just a convenience in this case. More on this later.
Constructing a vector
The trigonometric functions sin() and cos(), given an angle, will return numbers you can use to construct a normalized direction vector. This only works for two out of three axes, though; the third must be set to zero.
float angle = (M_PI / 4.0); /* 45 degrees */

vector.x = cos(angle);
vector.y = sin(angle);
vector.z = 0.0;
If you want to construct a vector in local coordinates (from point A, how far and in what direction is point B?), you can do it by subtracting each component of point A from the corresponding component of point B. (If you only want the direction, and not the distance, you'll need to normalize the vector afterward):
vector.x = (pointB.x - pointA.x);
vector.y = (pointB.y - pointA.y);
vector.z = (pointB.z - pointA.z);
Normalizing a vector
A normalized vector has a magnitude of 1. Normalized vectors pointing in any direction are of equal distance from the origin, as though constrained to an imaginary sphere or circle. Vector
normalization is done like this:
void Vector_normalize(Vector * vector) {
    float magnitude;

    magnitude = sqrt((vector->x * vector->x) +
                     (vector->y * vector->y) +
                     (vector->z * vector->z));
    vector->x /= magnitude;
    vector->y /= magnitude;
    vector->z /= magnitude;
}
What are we doing here? First, we use the distance formula to calculate the length, or magnitude, of the vector. For a vector that's already normalized, this will be approximately equal to 1. (I say
"approximately" because floating point numbers have limited precision, and rounding errors can cause things not to add up exactly to the number you would expect.) Once we have the magnitude, we
divide each component of the vector by it.
Note that if you attempt to normalize a vector with a magnitude of zero, you'll end up with a vector full of NaNs. (NaN stands for "Not a Number"; it's a special value returned from illegal
operations such as a divide by zero. Languages other than C may behave differently.)
Rotating a vector
Rotating a vector can be accomplished a few different ways. For two-dimensional rotation, one way to do it is by using atan2() to compute the vector's current angle, adjusting that angle, and using sin() and cos() to compute a new vector from it. For three-dimensional rotation, you'll need to multiply the vector by a quaternion or a rotation matrix; see the Quaternion tutorial and the Matrix tutorial for more details. For two-dimensional rotation, you might do something like this:
#define SHIP_ROTATION_SPEED (M_PI / 100.0)

void rotateShip(Spaceship * ship, float rotationAngle) {
    float angle;

    angle = atan2(ship->direction.y, ship->direction.x);

    angle += rotationAngle;

    ship->direction.x = cos(angle);
    ship->direction.y = sin(angle);
}

void leftArrowKey(Spaceship * ship) {
    rotateShip(ship, -SHIP_ROTATION_SPEED);
}

void rightArrowKey(Spaceship * ship) {
    rotateShip(ship, SHIP_ROTATION_SPEED);
}
First, atan2() recovers the ship's current angle from its direction vector. Next, we add in the rotationAngle that was passed to the function. A new direction vector is then computed from the angle using sin() and cos(). The ship is now facing a new direction.
Dot product
The dot product is a vector operation that can be used to compute the angle between two vectors; for two normalized vectors, the dot product equals the cosine of the angle between them:
float Vector_dot(Vector vector1, Vector vector2) {
    return ((vector1.x * vector2.x) +
            (vector1.y * vector2.y) +
            (vector1.z * vector2.z));
}
Cross product
Cross product is a vector operation that can be used to compute a vector perpendicular to the plane on which two vectors lie:
Vector Vector_cross(Vector vector1, Vector vector2) {
    Vector result;

    result.x = ((vector1.y * vector2.z) - (vector1.z * vector2.y));
    result.y = ((vector1.z * vector2.x) - (vector1.x * vector2.z));
    result.z = ((vector1.x * vector2.y) - (vector1.y * vector2.x));

    return result;
}
• Vector_cross(right, up) = front
• Vector_cross(front, right) = up
• Vector_cross(up, front) = right
• Vector_cross(up, right) = back
• Vector_cross(right, front) = down
• Vector_cross(front, up) = left
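One common application, shown here as an illustrative sketch rather than part of the original tutorial, is computing a surface normal from two edges of a triangle:

```c
#include <math.h>

typedef struct { float x, y, z; } Vector;  /* mirrors the tutorial's struct */

static Vector Vector_cross(Vector a, Vector b) {
    Vector r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}

/* Normal of the triangle (p0, p1, p2). Not normalized; its direction
   follows the winding order of the vertices. */
Vector Triangle_normal(Vector p0, Vector p1, Vector p2) {
    Vector edge1 = {p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    Vector edge2 = {p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    return Vector_cross(edge1, edge2);
}
```

If you need a unit-length normal (e.g. for lighting), normalize the result afterward.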
That's it! You should now be well on your way to using vectors effectively. If you have any questions, feel free to contact me directly, or ask on the message board. Happy coding!
Code Implementation | {"url":"http://www.idevgames.com/articles/vector-tutorial","timestamp":"2014-04-17T00:55:53Z","content_type":null,"content_length":"29171","record_id":"<urn:uuid:fde01059-b427-49a0-b91c-afd31ca1ff56>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Young's Double Slit Experiment - Slit Separation Calculation
1. The problem statement, all variables and given/known data
Calculate the slit separation (d) given that:
Wavelength = 650 nm (Plugged in 6.5*10^-7 m)
m = 1 (plugged in 1)
Distance to screen (D) = 37.5 cm (plugged in 0.375m)
Distance from centre to side order (y) = 0.7 cm (plugged in 0.007 m)
2. Relevant equations
We were only given one equation in our lab manual (the same equation they gave us for the single-slit, slit-width problem, except instead of d they had a there to represent slit width):
d = (m*Wavelength*D)/y
where d is the slit separation
3. The attempt at a solution
I plugged in the numbers and I produced a solution equal to 0.0348 mm. (I made sure to convert to metres before plugging into the equation and then converted back to millimetres by multiplying by 1000.)
What ails me is that the theoretical, or given, slit separation is 0.25 mm. This makes my relative error approximately 88%, and I am positive I did not do the experiment that poorly. Surprisingly though,
the answer produced is VERY similar to the given SLIT WIDTH (0.04mm).
Now I checked this a million times and I think I may be stuck in a rut of not seeing something that is supremely obvious but is making me get the wrong answer. Or the person who designed my lab did
not supply me with a proper equation to solve this problem.
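As a quick numeric check of the arithmetic above (a sketch using the lab manual's formula; the function name is illustrative):

```c
/* Quick numeric check of the lab-manual formula d = (m * wavelength * D) / y.
   All quantities in metres. */
double slit_separation(double m, double wavelength, double D, double y) {
    return (m * wavelength * D) / y;
}
/* slit_separation(1.0, 6.5e-7, 0.375, 0.007) is about 3.48e-5 m, i.e.
   0.0348 mm -- matching the result computed in the post. */
```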
Any help is appreciated. | {"url":"http://www.physicsforums.com/showthread.php?t=475933","timestamp":"2014-04-17T07:24:06Z","content_type":null,"content_length":"26574","record_id":"<urn:uuid:a4248772-a19c-4770-8a09-2677df53bc95>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Journal of the Optical Society of America B
In this paper, we introduce the Hermite polynomial's coherent state (HPCS) $|\alpha\rangle_H$, which is defined as $H_n(X)|\alpha\rangle$ up to a normalization constant, where $H_n(X)$ is the coordinate operator's Hermite polynomial of order $n$ and $|\alpha\rangle = \exp(-\tfrac{1}{2}|\alpha|^2 + \alpha a^\dagger)|0\rangle$. This state may be produced by the superposition of several different photon-added coherent states when $n=2$. The mathematical and physical properties of the HPCS are also studied. It is shown that the HPCS has remarkable nonclassical features such as sub-Poissonian statistics and squeezing.
© 2012 Optical Society of America
OCIS Codes
(270.0270) Quantum optics : Quantum optics
(270.6570) Quantum optics : Squeezed states
ToC Category:
Quantum Optics
Original Manuscript: September 6, 2012
Revised Manuscript: October 5, 2012
Manuscript Accepted: October 8, 2012
Published: November 28, 2012
Gang Ren, Jian-ming Du, Hai-jun Yu, and Ye-jun Xu, "Nonclassical properties of Hermite polynomial’s coherent state," J. Opt. Soc. Am. B 29, 3412-3418 (2012)
| {"url":"http://www.opticsinfobase.org/josab/abstract.cfm?uri=josab-29-12-3412","timestamp":"2014-04-20T18:14:39Z","content_type":null,"content_length":"192884","record_id":"<urn:uuid:ca84aa37-2278-4b25-ada0-d186922e1f12>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Multiple Regression (1 of 3)
In multiple regression, more than one variable is used to predict the criterion. For example, a college admissions officer wishing to predict the future grades of college applicants might use three
variables (High School GPA, SAT, and Quality of letters of recommendation) to predict college GPA. The applicants with the highest predicted college GPA would be admitted. The prediction method would
be developed based on students already attending college and then used on subsequent classes. Predicted scores from multiple regression are
linear combinations of the predictor variables. Therefore, the general form of a prediction equation from multiple regression is:

Y' = b1X1 + b2X2 + ... + bkXk + A

where Y' is the predicted score, X1 is the score on the first predictor variable, X2 is the score on the second, etc. The Y intercept is A. The regression coefficients (b1, b2, etc.) are analogous to the slope of the line in simple regression. | {"url":"http://davidmlane.com/hyperstat/B123219.html","timestamp":"2014-04-20T21:21:39Z","content_type":null,"content_length":"3887","record_id":"<urn:uuid:420c2ef3-80a6-44a6-990a-c2a4bb678d89>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
January 13th 2010, 09:13 AM #1
Jan 2010
I need to show that if every subsequence of X = (xn) has a subsequence that converges to 0, then lim X = 0.
Any help would be greatly appreciated!
It's saying that if the subsequence converges to 0, then it must mean that the sequence also converges to 0.
I would prove it by showing that a sequence $\{x_n\}$ which does not converge to $0$ has a subsequence which has no subsequence converging to $0$. This is easy. The sequence $\{x_n\}$ must be
frequently outside of some $\epsilon$-neighbourhood of $0$; take the subsequence which consists of those points.
So you're saying to prove by contradiction?
If so then wouldn't you have to prove that if there exists a subsequence of X that has no subsequence that converges to 0, then X does not converge to 0?
No, what BrunoJ suggested was a proof by contrapositive. He pretty much gave you the proof; you can formalize it if you want.
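A formalization of that contrapositive sketch, in the thread's notation (an illustrative write-up, not a quote from any poster):

```latex
\textbf{Claim.} If every subsequence of $(x_n)$ has a subsequence converging
to $0$, then $x_n \to 0$.

\textbf{Proof sketch (contrapositive).} Suppose $x_n \not\to 0$. Then there is
an $\epsilon > 0$ such that $|x_n| \ge \epsilon$ for infinitely many $n$; list
those indices as $n_1 < n_2 < \cdots$. The subsequence $(x_{n_k})$ satisfies
$|x_{n_k}| \ge \epsilon$ for all $k$, so every subsequence of $(x_{n_k})$ also
stays outside $(-\epsilon, \epsilon)$ and cannot converge to $0$. Hence
$(x_n)$ has a subsequence with no subsequence converging to $0$. $\square$
```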
| {"url":"http://mathhelpforum.com/differential-geometry/123598-sequences-subsequences.html","timestamp":"2014-04-17T08:08:10Z","content_type":null,"content_length":"60852","record_id":"<urn:uuid:98c5b4b9-5032-4176-b420-7b721ff943b7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00137-ip-10-147-4-33.ec2.internal.warc.gz"}