Find a Haddon Township, NJ Algebra 2 Tutor

...I have used my extensive proofreading skills in both professional and educational settings. I can teach you how to proofread your own writing, which is critical to achieving competent writing skills. I tutored elementary math on a daily basis for eight years.
23 Subjects: including algebra 2, reading, writing, geometry

...I studied reading and writing under strict, old-school English teachers and professors, which is one reason my verbal skills exceed those of most English teachers today. I have been helping students raise their scores on all three SAT sections for many years. I can do the same for you.
23 Subjects: including algebra 2, English, calculus, geometry

...I hold an MS in engineering, an MBA, and a PhD (all but thesis completed). I have worked as an adjunct professor at Temple University and Widener University. I am a retired director at AT&T and Bell Labs. My MS in engineering and MBA are from the University of Cincinnati and Temple University.
11 Subjects: including algebra 2, physics, geometry, calculus

...These various opportunities have taught me how to communicate and teach at these different levels. When it comes to chemistry, it's like learning a new language. At first, learning a new language is difficult, but once you get the concepts and basics, it becomes much easier.
6 Subjects: including algebra 2, chemistry, algebra 1, prealgebra

...My name is Lawrence and I am currently a senior at Temple University majoring in Sociology/Pre-Medical Studies. I truly enjoy helping others with topics in math and science, as I have had a great deal of experience learning about them. Helping my peers with their studies is very gratifying to me, and I want to spread the help I can provide.
11 Subjects: including algebra 2, geometry, algebra 1, SAT math
{"url":"http://www.purplemath.com/haddon_township_nj_algebra_2_tutors.php","timestamp":"2014-04-20T11:21:15Z","content_type":null,"content_length":"24602","record_id":"<urn:uuid:7088caf6-d799-4982-be10-5096c8553f9e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector Space: Linear Algebra for Quantum Mechanics
Michael Fowler, 10/14/08

We've seen that in quantum mechanics, the state of an electron in some potential is given by a wave function ψ(x, t), and physical variables are represented by operators on this wave function, such as the momentum in the x-direction, p̂ₓ = −iℏ∂/∂x. The Schrödinger wave equation is a linear equation, which means that if ψ₁ and ψ₂ are solutions, then so is c₁ψ₁ + c₂ψ₂, where c₁, c₂ are arbitrary complex numbers.

This linearity of the sets of possible solutions is true generally in quantum mechanics, as is the representation of physical variables by operators on the wave functions. The mathematical structure this describes, the linear set of possible states and sets of operators on those states, is in fact a linear algebra of operators acting on a vector space. From now on, this is the language we'll be using most of the time. To clarify, we'll give some definitions.

What is a Vector Space?

The prototypical vector space is of course the set of real vectors in ordinary three-dimensional space; these vectors can be represented by trios of real numbers (v₁, v₂, v₃) measuring the components in the x, y and z directions respectively. The basic properties of these vectors are:

• any vector multiplied by a number is another vector in the space, v → av;
• the sum of two vectors is another vector in the space, that given by just adding the corresponding components together: v + w = (v₁ + w₁, v₂ + w₂, v₃ + w₃).

These two properties together are referred to as "closure": adding vectors and multiplying them by numbers cannot get you out of the space.

• A further property is that there is a unique null vector 0 and each vector v has an additive inverse −v which, added to the original vector, gives the null vector.
Mathematicians have generalized the definition of a vector space: a general vector space has the properties we've listed above for three-dimensional real vectors, but the operations of addition and multiplication by a number are generalized to more abstract operations between more general entities. The operations are, however, required to be commutative and associative. Notice that the list of necessary properties for a general vector space does not include the requirement that the vectors have a magnitude—that would be an additional requirement, giving what is called a normed vector space. More about that later.

To go from the familiar three-dimensional vector space to the vector spaces relevant to quantum mechanics, first the real numbers (components of the vector and possible multiplying factors) are generalized to complex numbers, and second the three-component vector becomes an n-component vector. The consequent n-dimensional complex space is sufficient to describe the quantum mechanics of angular momentum, an important subject. But to describe the wave function of a particle in a box requires an infinite-dimensional space, one dimension for each Fourier component, and to describe the wave function for a particle on an infinite line requires the set of all normalizable continuous differentiable functions on that line. Fortunately, all these generalizations are to finite or infinite sets of complex numbers, so the mathematicians' vector space requirements of commutativity and associativity are always trivially satisfied.

We use Dirac's notation for vectors, |1⟩, |2⟩, and call them "kets"; so, in his language, if |1⟩, |2⟩ belong to the space, so does c₁|1⟩ + c₂|2⟩ for arbitrary complex constants c₁, c₂. Since our vectors are made up of complex numbers, multiplying any vector by zero gives the null vector, and the additive inverse is given by reversing the signs of all the numbers in the vector.
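Since the quantum-mechanical vector spaces are just n-tuples of complex numbers, closure, the null vector, and the additive inverse are easy to see concretely. A minimal numpy sketch (the kets and coefficients are arbitrary examples, not from the text):

```python
import numpy as np

# Two kets in a 3-dimensional complex vector space.
ket1 = np.array([1 + 1j, 0, 2])
ket2 = np.array([0, 1j, 1 - 1j])

c1, c2 = 2 - 1j, 0.5j
combo = c1 * ket1 + c2 * ket2     # closure: still a vector in the space

null = 0 * ket1                   # multiplying by zero gives the null vector
inverse = -ket1                   # additive inverse: reverse all signs
assert np.allclose(ket1 + inverse, null)
```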
Clearly, the set of solutions of Schrödinger's equation for an electron in a potential satisfies the requirements for a vector space: ψ(x) is just a complex number at each point in space, so only complex numbers are involved in forming c₁ψ₁ + c₂ψ₂, and commutativity, associativity, etc., follow at once.

Vector Space Dimensionality

The vectors |1⟩, |2⟩, …, |n⟩ are linearly independent if

c₁|1⟩ + c₂|2⟩ + ⋯ + cₙ|n⟩ = 0

implies c₁ = c₂ = ⋯ = cₙ = 0. A vector space is n-dimensional if the maximum number of linearly independent vectors in the space is n. Such a space is often called Vⁿ(C), or Vⁿ(R) if only real numbers are used.

Now, vector spaces with finite dimension n are clearly insufficient for describing functions of a continuous variable x. But they are well worth reviewing here: as we've mentioned, they are fine for describing quantized angular momentum, and they serve as a natural introduction to the infinite-dimensional spaces needed to describe spatial wavefunctions.

A set of n linearly independent vectors in n-dimensional space is a basis—any vector |V⟩ can be written in a unique way as a sum over a basis:

|V⟩ = Σᵢ vᵢ|i⟩.

You can check the uniqueness by taking the difference between two supposedly distinct sums: it will be a linear relation between independent vectors, a contradiction. Since all vectors in the space can be written as linear sums over the elements of the basis, the sum of multiples of any two vectors has the form:

a|V⟩ + b|W⟩ = Σᵢ (avᵢ + bwᵢ)|i⟩.

Inner Product Spaces

The vector spaces of relevance in quantum mechanics also have an operation associating a number with a pair of vectors, a generalization of the dot product of two ordinary three-dimensional vectors, a⃗·b⃗ = Σᵢ aᵢbᵢ. Following Dirac, we write the inner product of two ket vectors |V⟩, |W⟩ as ⟨W|V⟩. Dirac refers to this ⟨ | ⟩ form as a "bracket" made up of a "bra" ⟨ | and a "ket" | ⟩. This means that each ket vector |V⟩ has an associated bra ⟨V|. For the case of a real n-dimensional vector, |V⟩ and ⟨V| are identical—but we require for the more general case that

⟨W|V⟩ = ⟨V|W⟩*,

where * denotes complex conjugate. This implies that the bra corresponding to a ket with components vᵢ has components vᵢ*.
(Actually, bras are usually written as rows, kets as columns, so that the inner product follows the standard rules for matrix multiplication.) Evidently for the n-dimensional complex vector

⟨V|V⟩ = Σᵢ |vᵢ|²

is real and positive except for the null vector. For the more general inner product spaces considered later we also require ⟨V|V⟩ to be positive, except for the null vector. (These requirements do restrict the classes of vector spaces we are considering—no Lorentz metric, for example—but they are all satisfied by the spaces relevant to nonrelativistic quantum mechanics.) The norm of |V⟩ is then defined by

|V| = √⟨V|V⟩.

If |V⟩ is a member of Vⁿ(C), so is a|V⟩, for any complex number a. We require the inner product operation to commute with multiplication by a number, so

⟨W|(a|V⟩) = a⟨W|V⟩.

The complex conjugate of the right-hand side is a*⟨V|W⟩. For consistency, the bra corresponding to the ket a|V⟩ must therefore be a*⟨V|—in any case obvious from the definition of the bra in n complex dimensions given above. It follows that if |V⟩ = c₁|1⟩ + c₂|2⟩, then ⟨V| = c₁*⟨1| + c₂*⟨2|.

Constructing an Orthonormal Basis: the Gram-Schmidt Process

To have something better resembling the standard dot product of ordinary three-dimensional vectors, we need ⟨i|j⟩ = δᵢⱼ; that is, we need to construct an orthonormal basis in the space. There is a straightforward procedure for doing this called the Gram-Schmidt process. We begin with a linearly independent set of basis vectors, |1⟩, |2⟩, |3⟩, …. We first normalize |1⟩ by dividing it by its norm; call the normalized vector |I⟩. Now |2⟩ cannot be parallel to |I⟩, because the original basis was of linearly independent vectors, but |2⟩ in general has a nonzero component parallel to |I⟩, equal to |I⟩⟨I|2⟩, since |I⟩ is normalized. Therefore, the vector

|2⟩ − |I⟩⟨I|2⟩

is perpendicular to |I⟩, as is easily verified. It is also easy to compute the norm of this vector, and divide by it to get |II⟩, the second member of the orthonormal basis. Next, we take |3⟩ and subtract off its components in the directions |I⟩ and |II⟩, normalize the remainder, and so on.
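The procedure just described translates almost line for line into code. A minimal sketch, assuming numpy and using `np.vdot` for the bracket (it conjugates its first argument, matching ⟨e|w⟩):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent complex vectors,
    following the text: normalize the first vector, then for each later
    vector subtract its components along the basis found so far and
    normalize the remainder."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, w) * e   # subtract the <e|w> component
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
# The results satisfy <i|j> = delta_ij to machine precision.
```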
In an n-dimensional space, having constructed an orthonormal basis with members |i⟩, any vector |V⟩ = Σᵢ vᵢ|i⟩ can be written as a column vector,

|V⟩ = (v₁, v₂, …, vₙ)ᵀ.

The corresponding bra is ⟨V|, which we write as a row vector with the elements complex conjugated, ⟨V| = (v₁*, v₂*, …, vₙ*). This operation, going from columns to rows and taking the complex conjugate, is called taking the adjoint, and can also be applied to matrices, as we shall see shortly. The reason for representing the bra as a row is that the inner product of two vectors is then given by standard matrix multiplication:

⟨W|V⟩ = (w₁*, w₂*, …, wₙ*)(v₁, v₂, …, vₙ)ᵀ = Σᵢ wᵢ*vᵢ.

(Of course, this only works with an orthonormal basis.)

The Schwarz Inequality

The Schwarz inequality is the generalization to any inner product space of the result |a⃗·b⃗| ≤ |a⃗||b⃗| (or (a⃗·b⃗)² ≤ a⃗²b⃗²) for ordinary three-dimensional vectors. The equality sign in that result only holds when the vectors are parallel. To generalize to higher dimensions, one might just note that any two vectors lie in a two-dimensional subspace, but an illuminating way of understanding the inequality is to write the vector a⃗ as a sum of two components, one parallel to b⃗ and one perpendicular to b⃗. The component parallel to b⃗ is b⃗(a⃗·b⃗)/b⃗², so the component perpendicular to b⃗ is the vector a⃗⊥ = a⃗ − b⃗(a⃗·b⃗)/b⃗². Substituting this expression into a⃗⊥² ≥ 0, the inequality follows.

This same point can be made in a general inner product space: if |V⟩, |W⟩ are two vectors, then

|V⊥⟩ = |V⟩ − |W⟩⟨W|V⟩/⟨W|W⟩

is the component of |V⟩ perpendicular to |W⟩, as is easily checked by taking its inner product with |W⟩. The Schwarz inequality then reads |⟨V|W⟩|² ≤ ⟨V|V⟩⟨W|W⟩.

Linear Operators

A linear operator A takes any vector in a linear vector space to a vector in that space, A|V⟩ = |V′⟩, and satisfies

A(c₁|V₁⟩ + c₂|V₂⟩) = c₁A|V₁⟩ + c₂A|V₂⟩,

with c₁, c₂ arbitrary complex constants. The identity operator I is (obviously!) defined by I|V⟩ = |V⟩ for all |V⟩. For an n-dimensional vector space with an orthonormal basis |1⟩, …, |n⟩, since any vector in the space can be expressed as a sum |V⟩ = Σᵢ vᵢ|i⟩, the linear operator is completely determined by its action on the basis vectors—this is all we need to know. It's easy to find an expression for the identity operator in terms of bras and kets.
Taking the inner product of both sides of the equation |V⟩ = Σᵢ vᵢ|i⟩ with the bra ⟨i| gives vᵢ = ⟨i|V⟩, so

|V⟩ = Σᵢ |i⟩⟨i|V⟩.

Since this is true for any vector in the space, it follows that the identity operator is just

I = Σᵢ |i⟩⟨i|.

This is an important result: it will reappear in many disguises.

To analyze the action of a general linear operator A, we just need to know how it acts on each basis vector. Beginning with A|1⟩, this must be some sum over the basis vectors, and since they are orthonormal, the component in the |i⟩ direction must be just ⟨i|A|1⟩. That is, writing Aᵢⱼ = ⟨i|A|j⟩,

A|j⟩ = Σᵢ |i⟩⟨i|A|j⟩ = Σᵢ Aᵢⱼ|i⟩.

So if the linear operator A acting on |V⟩ = Σᵢ vᵢ|i⟩ gives |W⟩ = Σᵢ wᵢ|i⟩, that is, A|V⟩ = |W⟩, the linearity tells us that

Σᵢ wᵢ|i⟩ = |W⟩ = A|V⟩ = Σⱼ vⱼA|j⟩ = Σⱼ Σᵢ vⱼ|i⟩⟨i|A|j⟩ = Σᵢ Σⱼ Aᵢⱼvⱼ|i⟩,

where in the fourth step we just inserted the identity operator. Since the |i⟩'s are all orthogonal, the coefficient of a particular |i⟩ on the left-hand side of the equation must be identical with the coefficient of the same |i⟩ on the right-hand side. That is, wᵢ = Σⱼ Aᵢⱼvⱼ. Therefore the operator A is simply equivalent to matrix multiplication. Evidently, then, applying two linear operators one after the other is equivalent to successive matrix multiplication—and, therefore, since matrices do not in general commute, nor do linear operators. (Of course, if we hope to represent quantum variables as linear operators on a vector space, this has to be true—the momentum operator −iℏd/dx certainly doesn't commute with x!)

Projection Operators

It is important to note that a linear operator applied successively to the members of an orthonormal basis might give a new set of vectors which no longer span the entire space. To give an example, the linear operator |1⟩⟨1| applied to any vector in the space picks out the vector's component in the |1⟩ direction. It's called a projection operator. The operator |1⟩⟨1| + |2⟩⟨2| projects a vector into its components in the subspace spanned by the vectors |1⟩ and |2⟩, and so on—if we extend the sum to be over the whole basis, we recover the identity operator.
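The claim that operators act by matrix multiplication, and that they need not commute, is easy to check with a pair of 2×2 examples (these happen to be the Pauli x and z matrices, used here purely as an illustration):

```python
import numpy as np

# Matrix elements A_ij = <i|A|j>: acting on a ket is matrix-vector
# multiplication, and composing operators is matrix-matrix multiplication.
A = np.array([[0, 1], [1, 0]])
B = np.array([[1, 0], [0, -1]])
v = np.array([2, 3])

w = A @ v                          # w_i = sum_j A_ij v_j  ->  [3, 2]
assert np.array_equal(w, np.array([3, 2]))

# Matrices, and hence linear operators, do not in general commute.
assert not np.array_equal(A @ B, B @ A)
```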
Exercise: prove that the n×n matrix representation of the projection operator |1⟩⟨1| + |2⟩⟨2| has all elements zero except the first two diagonal elements, which are equal to one.

There can be no inverse operator to a nontrivial projection operator, since the information about components of the vector perpendicular to the projected subspace is lost.

The Adjoint Operator and Hermitian Matrices

As we've discussed, if a ket |V⟩ in the n-dimensional space is written as a column vector with n (complex) components, the corresponding bra is a row vector having as elements the complex conjugates of the ket elements. ⟨W|V⟩ = ⟨V|W⟩* then follows automatically from standard matrix multiplication rules, and on multiplying |V⟩ by a complex number a to get a|V⟩ (meaning that each element in the column of numbers is multiplied by a) the corresponding bra goes to a*⟨V|.

But suppose that instead of multiplying a ket by a number, we operate on it with a linear operator. What generates the parallel transformation among the bras? In other words, if A|V⟩ = |V′⟩, what operator sends the bra ⟨V| to ⟨V′|? It must be a linear operator, because A is linear: that is, if under A, |V₁⟩ → |V₁′⟩ and |V₂⟩ → |V₂′⟩, then under A, c₁|V₁⟩ + c₂|V₂⟩ → c₁|V₁′⟩ + c₂|V₂′⟩. Consequently, under the parallel bra transformation we must have c₁*⟨V₁| + c₂*⟨V₂| → c₁*⟨V₁′| + c₂*⟨V₂′|—the bra transformation is necessarily also linear. Recalling that the bra is an n-element row vector, the most general linear transformation sending it to another bra is an n×n matrix operating on the bra from the right.

This bra operator is called the adjoint of A, written A†. That is, the ket A|V⟩ has corresponding bra ⟨V|A†. In an orthonormal basis, the matrix elements are

(A†)ᵢⱼ = ⟨i|A†|j⟩ = (⟨j|A|i⟩)* = Aⱼᵢ*.

So the adjoint operator is the transpose complex conjugate.

Important: for a product of two operators (prove this!),

(AB)† = B†A†.

An operator equal to its adjoint, A† = A, is called Hermitian. As we shall find in the next lecture, Hermitian operators are of central importance in quantum mechanics.
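The exercise's claim about the projector's matrix, and the reversed-order rule for the adjoint of a product, can both be checked numerically (a 4-dimensional example; the random matrices are arbitrary):

```python
import numpy as np

# The projector P = |1><1| + |2><2| onto the first two basis directions
# of a 4-dimensional space, built from outer products of basis vectors.
e = np.eye(4)
P = np.outer(e[0], e[0]) + np.outer(e[1], e[1])
# As in the exercise: ones in the first two diagonal slots, zeros
# elsewhere; and projecting twice changes nothing (P is idempotent).
assert np.allclose(P, np.diag([1, 1, 0, 0]))
assert np.allclose(P @ P, P)

# The adjoint (conjugate transpose) of a product reverses the order.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```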
An operator equal to minus its adjoint, A† = −A, is anti-Hermitian (sometimes termed skew-Hermitian). These two operator types are essentially generalizations of real and imaginary numbers: any operator can be expressed as a sum of a Hermitian operator and an anti-Hermitian operator,

A = (A + A†)/2 + (A − A†)/2.

The definition of adjoint naturally extends to vectors and numbers: the adjoint of a ket is the corresponding bra, the adjoint of a number is its complex conjugate. This is useful to bear in mind when taking the adjoint of an operator which may be partially constructed of vectors and numbers, such as projection-type operators. The adjoint of a product of matrices, vectors and numbers is the product of the adjoints in reverse order. (Of course, for numbers the order doesn't matter.)

Unitary Operators

An operator is unitary if

U†U = I.

This implies first that U operating on any vector gives a vector having the same norm, since the new norm is ⟨V|U†U|V⟩ = ⟨V|V⟩. Furthermore, inner products are preserved, ⟨W|U†U|V⟩ = ⟨W|V⟩. Therefore, under a unitary transformation the original orthonormal basis in the space must go to another orthonormal basis.

Conversely, any transformation that takes one orthonormal basis into another one is a unitary transformation. To see this, suppose that a linear transformation A sends the members of the orthonormal basis |1⟩, |2⟩, …, |n⟩ to the different orthonormal set |1′⟩, |2′⟩, …, |n′⟩, so A|1⟩ = |1′⟩, etc. Then the vector |V⟩ = Σᵢ vᵢ|i⟩ will go to |V′⟩ = Σᵢ vᵢ|i′⟩, having the same norm, ⟨V′|V′⟩ = ⟨V|V⟩ = Σᵢ |vᵢ|². A matrix element ⟨W′|V′⟩ = ⟨W|V⟩ = Σᵢ wᵢ*vᵢ, but also ⟨W′|V′⟩ = ⟨W|A†A|V⟩. That is, ⟨W|V⟩ = ⟨W|A†A|V⟩ for arbitrary kets |V⟩, |W⟩. This is only possible if A†A = I, so A is unitary.

A unitary operation amounts to a rotation (possibly combined with a reflection) in the space. Evidently, since U†U = I, the adjoint U† rotates the basis back—it is the inverse operation, and so UU† = I also; that is, U and U† commute.

Determinants

We review in this section the determinant of a matrix, a function closely related to the operator properties of the matrix.
Let's start with 2×2 matrices, A = (a₁₁ a₁₂; a₂₁ a₂₂). The determinant of this matrix is defined by:

det A = a₁₁a₂₂ − a₁₂a₂₁.

Writing the two rows of the matrix as vectors (R denotes row),

R⃗₁ = (a₁₁, a₁₂), R⃗₂ = (a₂₁, a₂₂),

det A is just the area (with appropriate sign) of the parallelogram having the two row vectors as adjacent sides. This is zero if the two vectors are parallel (linearly dependent) and is not changed by adding any multiple of R⃗₂ to R⃗₁ (because the new parallelogram has the same base and the same height as the original—check this by drawing).

Let's go on to the more interesting case of 3×3 matrices. The determinant of A is defined as

det A = εᵢⱼₖ a₁ᵢ a₂ⱼ a₃ₖ,

where εᵢⱼₖ = 0 if any two suffixes are equal, +1 if ijk = 123, 231 or 312 (that is to say, an even permutation of 123) and −1 if ijk is an odd permutation of 123. Repeated suffixes, of course, imply summation. Writing this out explicitly,

det A = a₁₁a₂₂a₃₃ − a₁₁a₂₃a₃₂ + a₁₂a₂₃a₃₁ − a₁₂a₂₁a₃₃ + a₁₃a₂₁a₃₂ − a₁₃a₂₂a₃₁.

Just as in two dimensions, it's worth looking at this expression in terms of vectors representing the rows of the matrix:

det A = R⃗₁ · (R⃗₂ × R⃗₃).

This is the volume of the parallelepiped formed by the three vectors being adjacent sides (meeting at one corner, the origin). This parallelepiped volume will of course be zero if the three vectors lie in a plane, and it is not changed if a multiple of one of the vectors is added to another of the vectors. That is to say, the determinant of a matrix is not changed if a multiple of one row is added to another row. This is because the determinant is linear in the elements of a single row,

det(R⃗₁ + λR⃗₂, R⃗₂, R⃗₃) = det(R⃗₁, R⃗₂, R⃗₃) + λ det(R⃗₂, R⃗₂, R⃗₃),

and the last term is zero because two rows are identical—so the triple vector product vanishes. A more general way of stating this, applicable to larger determinants, is that for a determinant with two identical rows, the symmetry of the two rows, together with the antisymmetry of εᵢⱼₖ, ensures that the terms in the sum all cancel in pairs. Since the determinant is not altered by adding some multiple of one row to another, if the rows are linearly dependent, one row could be made identically zero by adding the right multiples of the other rows.
Since every term in the expression for the determinant has one element from each row, the determinant would then be identically zero. For the three-dimensional case, the linear dependence of the rows means the corresponding vectors lie in a plane, and the parallelepiped is flat. The algebraic argument generalizes easily to n×n determinants: they are identically zero if the rows are linearly dependent.

The generalization from 3×3 to n×n determinants is that det A = εᵢⱼₖ a₁ᵢ a₂ⱼ a₃ₖ becomes:

det A = εᵢⱼₖ…ₚ a₁ᵢ a₂ⱼ a₃ₖ ⋯ aₙₚ,

where ijk…p is summed over all permutations of 123…n, and the ε symbol is zero if any two of its suffixes are equal, +1 for an even permutation and −1 for an odd permutation. (Note: any permutation can be written as a product of swaps of neighbors. Such a representation is in general not unique, but for a given permutation, all such representations will have either an odd number of swaps or an even number.)

An important theorem is that for a product of two matrices A, B the determinant of the product is the product of the determinants,

det(AB) = det A · det B.

This can be verified by brute force for 2×2 matrices, and a proof in the general case can be found in any book on mathematical physics (for example, Byron and Fuller).

It can also be proved that if the rows are linearly independent, the determinant cannot be zero. (Here's a proof: take an n×n matrix with the n row vectors linearly independent. Now consider the components of those vectors in the (n − 1)-dimensional subspace perpendicular to (1, 0, …, 0). These n vectors, each with only n − 1 components, must be linearly dependent, since there are more of them than the dimension of the space. So we can take some combination of the rows below the first row and subtract it from the first row to leave the first row (a, 0, 0, …, 0), and a cannot be zero since we have a matrix with n linearly independent rows. We can then subtract multiples of this first row from the other rows to get a determinant having zeros in the first column below the first row.
Now look at the (n − 1)×(n − 1) determinant to be multiplied by a. Its rows must be linearly independent, since those of the original matrix were. Now proceed by induction.)

To return to three dimensions, it is clear from the form of det A = εᵢⱼₖ a₁ᵢ a₂ⱼ a₃ₖ that we could equally have taken the columns of A as three vectors, A = (C⃗₁, C⃗₂, C⃗₃) in an obvious notation, with det A = εᵢⱼₖ aᵢ₁ aⱼ₂ aₖ₃, and linear dependence among the columns will also ensure the vanishing of the determinant—so, in fact, linear dependence of the columns ensures linear dependence of the rows. This, too, generalizes to n×n: in the definition of the determinant, the row suffix is fixed and the column suffix goes over all permissible permutations, with the appropriate sign—but the same terms would be generated by having the column suffixes kept in numerical order and allowing the row suffix to undergo the permutations.

An Aside: Reciprocal Lattice Vectors

It is perhaps worth mentioning how the inverse of a 3×3 matrix operator can be understood in terms of vectors. For a set of linearly independent vectors a⃗₁, a⃗₂, a⃗₃, a reciprocal set b⃗₁, b⃗₂, b⃗₃ can be defined by

b⃗₁ = (a⃗₂ × a⃗₃) / (a⃗₁ · (a⃗₂ × a⃗₃)),

and the obvious cyclic definitions for the other two reciprocal vectors. We see immediately that

a⃗ᵢ · b⃗ⱼ = δᵢⱼ,

from which it follows that the matrix with the b⃗ᵢ as its columns is the inverse of the matrix with the a⃗ᵢ as its rows. (These reciprocal vectors are important in x-ray crystallography, for example. If a crystalline lattice has certain atoms at positions n₁a⃗₁ + n₂a⃗₂ + n₃a⃗₃, where n₁, n₂, n₃ are integers, the reciprocal vectors are the set of normals to possible planes of the atoms, and these planes of atoms are the important elements in the diffractive x-ray scattering.)

Eigenkets and Eigenvalues

If an operator A operating on a ket |V⟩ gives a multiple of the same ket,

A|V⟩ = λ|V⟩,

then |V⟩ is said to be an eigenket (or, just as often, eigenvector, or eigenstate!) of A with eigenvalue λ. Eigenkets and eigenvalues are of central importance in quantum mechanics: dynamical variables are operators, a physical measurement of a dynamical variable yields an eigenvalue of the operator, and forces the system into an eigenket.
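Before moving on, the determinant facts above and the reciprocal-vector construction can be verified numerically (the random matrices and the aᵢ are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) = det(A) det(B); adding a multiple of one row to another
# leaves the determinant unchanged; dependent rows give det = 0.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
A2 = A.copy()
A2[0] += 2.5 * A2[1]
assert np.isclose(np.linalg.det(A2), np.linalg.det(A))
A3 = A.copy()
A3[2] = A3[0] + A3[1]
assert np.isclose(np.linalg.det(A3), 0.0)

# Reciprocal vectors: b1 = (a2 x a3) / (a1 . a2 x a3), and cyclically.
a1, a2, a3 = np.array([1.0, 0, 0]), np.array([0.5, 1, 0]), np.array([0, 0.3, 2])
vol = a1 @ np.cross(a2, a3)
b1 = np.cross(a2, a3) / vol
b2 = np.cross(a3, a1) / vol
b3 = np.cross(a1, a2) / vol

# The matrix with columns b_i inverts the matrix with rows a_i.
M = np.vstack([a1, a2, a3])
assert np.allclose(M @ np.column_stack([b1, b2, b3]), np.eye(3))
```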
In this section, we shall show how to find the eigenvalues and corresponding eigenkets for an operator A. We'll use the notation A|aᵢ⟩ = aᵢ|aᵢ⟩ for the set of eigenkets |aᵢ⟩ with corresponding eigenvalues aᵢ. (Obviously, in the eigenvalue equation here the suffix i is not summed over.)

The first step in solving A|V⟩ = λ|V⟩ is to find the allowed eigenvalues λ. Writing the equation in matrix form:

(A − λI)|V⟩ = 0.

This equation is actually telling us that the columns of the matrix A − λI are linearly dependent! To see this, write the matrix as a row vector each element of which is one of its columns, and the equation becomes

(C⃗₁, C⃗₂, …, C⃗ₙ)(v₁, v₂, …, vₙ)ᵀ = Σᵢ vᵢC⃗ᵢ = 0,

which is to say the columns of the matrix are indeed a linearly dependent set. We know that means the determinant of the matrix A − λI is zero,

det(A − λI) = 0.

Evaluating the determinant using the ε-symbol expansion gives an nth-order polynomial in λ, sometimes called the characteristic polynomial. Any polynomial can be written in terms of its roots:

C(λ − a₁)(λ − a₂) ⋯ (λ − aₙ) = 0,

where the aᵢ's are the roots of the polynomial, and C is an overall constant, which from inspection of the determinant we can see to be (−1)ⁿ (it's the coefficient of λⁿ). The polynomial roots (which we don't yet know) are in fact the eigenvalues. For example, putting λ = a₁ in the matrix, det(A − a₁I) = 0, which means that (A − a₁I)|V⟩ = 0 has a nontrivial solution |V⟩, and this is our eigenvector |a₁⟩.

Notice that the diagonal term in the determinant, (a₁₁ − λ)(a₂₂ − λ) ⋯ (aₙₙ − λ), generates the leading two orders in the polynomial, (−1)ⁿ(λⁿ − (a₁₁ + a₂₂ + ⋯ + aₙₙ)λⁿ⁻¹) (and some lower-order terms too). Equating the coefficient of λⁿ⁻¹ here with that in C(λ − a₁) ⋯ (λ − aₙ),

Tr A = a₁₁ + a₂₂ + ⋯ + aₙₙ = a₁ + a₂ + ⋯ + aₙ.

Putting λ = 0 in both the determinantal and the polynomial representations (in other words, equating the λ-independent terms),

det A = a₁a₂ ⋯ aₙ.

So we can find both the sum and the product of the eigenvalues directly from the determinant, and for a 2×2 matrix this is enough to solve the problem. For anything bigger, the method is to solve the polynomial equation det(A − λI) = 0 to find the set of eigenvalues, then use them to calculate the corresponding eigenvectors. This is done one at a time.
Labeling the first eigenvalue found as a₁, the corresponding equation for the components vᵢ of the eigenvector |a₁⟩ is

Σⱼ (Aᵢⱼ − a₁δᵢⱼ)vⱼ = 0.

This looks like n equations for the n numbers vᵢ, but it isn't: remember the rows are linearly dependent, so there are only n − 1 independent equations. However, that's enough to determine the ratios of the vector components v₁, …, vₙ; then finally the eigenvector is normalized. The process is then repeated for each eigenvalue. (Extra care is needed if the polynomial has coincident roots—we'll discuss that case later.)

Eigenvalues and Eigenstates of Hermitian Matrices

For a Hermitian matrix, it is easy to establish that the eigenvalues are always real. (Note: a basic postulate of quantum mechanics, discussed in the next lecture, is that physical observables are represented by Hermitian operators.) Taking (in this section) A to be Hermitian, A† = A, and labeling the eigenkets by the eigenvalue, that is,

A|a₁⟩ = a₁|a₁⟩,

the inner product with the bra ⟨a₁| gives ⟨a₁|A|a₁⟩ = a₁⟨a₁|a₁⟩. But the inner product of the adjoint equation, ⟨a₁|A† = ⟨a₁|A = a₁*⟨a₁| (remembering A† = A), with |a₁⟩ gives ⟨a₁|A|a₁⟩ = a₁*⟨a₁|a₁⟩, so a₁ = a₁*, and all the eigenvalues must be real.

They certainly don't have to all be different—for example, the unit matrix I is Hermitian, and all its eigenvalues are of course 1. But let's first consider the case where they are all different. It's easy to show that the eigenkets belonging to different eigenvalues are orthogonal: take the adjoint of the first of the equations

A|a₁⟩ = a₁|a₁⟩, A|a₂⟩ = a₂|a₂⟩,

and then the inner product with |a₂⟩, and compare it with the inner product of the second equation with ⟨a₁|:

a₁⟨a₁|a₂⟩ = ⟨a₁|A|a₂⟩ = a₂⟨a₁|a₂⟩,

so ⟨a₁|a₂⟩ = 0 unless the eigenvalues are equal. (If they are equal, they are referred to as degenerate eigenvalues.)

Let's first consider the nondegenerate case: A has all eigenvalues distinct. The eigenkets of A, appropriately normalized, form an orthonormal basis in the space.
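These facts about Hermitian matrices, real eigenvalues, orthonormal eigenkets, and the trace and determinant relations from the previous section, can all be checked numerically for a random Hermitian matrix (numpy's `eigh` is its eigensolver specialized to Hermitian input):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2          # a random Hermitian matrix

w, V = np.linalg.eigh(A)          # w: real eigenvalues; V: eigenvectors

# The eigenvectors form an orthonormal basis (V is unitary), and
# V† A V is diagonal with the eigenvalues along the diagonal.
assert np.allclose(V.conj().T @ V, np.eye(4))
assert np.allclose(V.conj().T @ A @ V, np.diag(w))

# Sum and product of the eigenvalues equal the trace and determinant.
assert np.isclose(w.sum(), np.trace(A).real)
assert np.isclose(np.prod(w), np.linalg.det(A).real)
```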
Write the normalized eigenkets of A as the columns of a matrix V. Note also that, obviously, V is unitary:

(V†V)ᵢⱼ = ⟨aᵢ|aⱼ⟩ = δᵢⱼ.

We have established, then, that for a Hermitian matrix with distinct eigenvalues (the nondegenerate case), the unitary matrix V having columns identical to the normalized eigenkets of A diagonalizes A; that is, V†AV is diagonal. Furthermore, its (diagonal) elements equal the corresponding eigenvalues of A. Another way of saying this is that the unitary matrix V is the transformation from the original orthonormal basis in the space to the basis formed of the normalized eigenkets of A.

Proof that the Eigenvectors of a Hermitian Matrix Span the Space

We'll now move on to the general case: what if some of the eigenvalues of A are the same? In this case, any linear combination of them is also an eigenvector with the same eigenvalue. Assuming they form a basis in the degenerate subspace, the Gram-Schmidt procedure can be used to make it orthonormal, and so part of an orthonormal basis of the whole space. However, we have not actually established that the eigenvectors do form a basis in a degenerate subspace. Could it be that (to take the simplest case) the two eigenvectors for the single eigenvalue turn out to be parallel? This is actually the case for some 2×2 matrices—for example, (1 1; 0 1), which has the single (doubly degenerate) eigenvalue 1 but only the single eigenvector (1, 0)ᵀ. We need to prove this cannot happen for Hermitian matrices, and nor can the analogous statements for higher-dimensional degenerate subspaces. A clear presentation is given in Byron and Fuller, section 4.7; we follow it here. The procedure is by induction from the 2×2 case. The general 2×2 Hermitian matrix has the form

(a b; b* c),

where a, c are real. It is easy to check that if the eigenvalues are degenerate, this matrix becomes a real multiple of the identity, and so trivially has two orthonormal eigenvectors. Since we already know that if the eigenvalues of a 2×2 Hermitian matrix are distinct it can be diagonalized by the unitary transformation formed from its orthonormal eigenvectors, we have established that any 2×2 Hermitian matrix can be so diagonalized.
To carry out the induction process, we now assume any (n − 1)×(n − 1) Hermitian matrix can be diagonalized by a unitary transformation. We need to prove this means it's also true for an n×n Hermitian matrix A. (Recall a unitary transformation takes one complete orthonormal basis to another. If it diagonalizes a Hermitian matrix, the new basis is necessarily the set of orthonormalized eigenvectors. Hence, if the matrix can be diagonalized, the eigenvectors do span the n-dimensional space.)

Choose an eigenvalue a₁ of A, with normalized eigenvector |a₁⟩ = (v₁, v₂, …, vₙ)ᵀ. (We put in T for transpose, to save the awkwardness of filling the page with a few column vectors.) We construct a unitary operator V by making this the first column, then filling in with n − 1 other normalized vectors to construct, with |a₁⟩, an n-dimensional orthonormal basis.

Now, since A|a₁⟩ = a₁|a₁⟩, the first column of the matrix AV will just be a₁|a₁⟩, and the rows of the matrix V† will be ⟨a₁| followed by n − 1 normalized vectors orthogonal to it, so the first column of the matrix V†AV will be a₁ followed by zeros. It is easy to check that V†AV is Hermitian, since A is, so its first row is also zero beyond the first diagonal term.

This establishes that for an n×n Hermitian matrix, a unitary transformation exists to put it in the form of a₁ in the top left corner, zeros along the rest of the first row and first column, and an (n − 1)×(n − 1) Hermitian matrix in the remaining block. But we can now perform a second unitary transformation in the (n − 1)×(n − 1) subspace orthogonal to |a₁⟩ (this of course leaves |a₁⟩ invariant), to complete the full diagonalization—that is to say, the existence of the (n − 1)×(n − 1) diagonalization, plus the argument above, guarantees the existence of the n×n diagonalization: the induction is complete.

Diagonalizing a Hermitian Matrix

As discussed above, a Hermitian matrix is diagonal in the orthonormal basis of its set of eigenvectors |a₁⟩, |a₂⟩, …, |aₙ⟩, since

⟨aᵢ|A|aⱼ⟩ = aⱼ⟨aᵢ|aⱼ⟩ = aⱼδᵢⱼ.

If we are given the matrix elements of A in some other orthonormal basis, to diagonalize it we need to rotate from the initial orthonormal basis to one made up of the eigenkets of A.
Denoting the initial orthonormal basis in the standard fashion |1⟩, |2⟩, …, |n⟩, the elements of the matrix are Aᵢⱼ = ⟨i|A|j⟩. A transformation from one orthonormal basis to another is a unitary transformation, as discussed above, so we write it

|V⟩ → |V′⟩ = U|V⟩.

Under this transformation, the matrix element becomes A′ = U†AU. In fact, just as we discussed for the nondegenerate (distinct eigenvalues) case, the unitary matrix U we need is just composed of the normalized eigenkets of the operator A,

U = (|a₁⟩, |a₂⟩, …, |aₙ⟩).

And it follows as before that

(U†AU)ᵢᵢ = aᵢ.

(The repeated suffixes here are of course not summed over.) If some of the eigenvalues are the same, the Gram-Schmidt procedure may be needed to generate an orthogonal set, as mentioned earlier.

Functions of Matrices

The same unitary operator U that diagonalizes a Hermitian matrix A will also diagonalize A², because

U†A²U = U†AAU = U†AU U†AU = (U†AU)².

Commuting Hermitian Matrices

The key result of this section, used below, is that two Hermitian matrices can be simultaneously diagonalized by the same unitary transformation if and only if they commute.

Diagonalizing a Unitary Matrix

Any unitary matrix can be diagonalized by a unitary transformation. To see this, recall that any matrix M can be written as a sum of a Hermitian matrix and an anti-Hermitian matrix,

M = (M + M†)/2 + (M − M†)/2 = A + iB,

where both A = (M + M†)/2 and B = (M − M†)/2i are Hermitian. This is the matrix analogue of writing an arbitrary complex number as a sum of real and imaginary parts. If A, B commute, they can be simultaneously diagonalized (see the previous section), and therefore M can be diagonalized. Now, if a unitary matrix is expressed in this form U = A + iB with A, B Hermitian, it easily follows from UU† = U†U = I that A, B commute, so any unitary matrix U can be diagonalized by a unitary transformation. More generally, if a matrix M commutes with its adjoint M†, it can be diagonalized by a unitary transformation.

(Note: it is not possible to diagonalize M unless both A, B are simultaneously diagonalized. This follows from U†AU and iU†BU being Hermitian and anti-Hermitian respectively for any unitary operator U, so their off-diagonal elements cannot cancel each other; they must all be zero if M has been diagonalized by U, in which case the two transformed matrices U†AU and U†BU are diagonal, therefore commute, and so do the original matrices A, B.)
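A numerical illustration of this section's claims, for a random unitary matrix built from a QR factorization (a convenient way to manufacture one; any unitary matrix would do):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(X)            # Q is unitary

# Split Q = A + iB with A, B Hermitian; for a unitary matrix the
# two parts commute, which is what makes Q unitarily diagonalizable.
A = (Q + Q.conj().T) / 2
B = (Q - Q.conj().T) / 2j
assert np.allclose(A.conj().T, A)
assert np.allclose(B.conj().T, B)
assert np.allclose(A @ B, B @ A)

# Q commutes with its adjoint, and its eigenvalues have unit modulus.
assert np.allclose(Q @ Q.conj().T, Q.conj().T @ Q)
w = np.linalg.eigvals(Q)
assert np.allclose(np.abs(w), 1.0)
```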
It is worthwhile looking at a specific example, a simple rotation of one orthonormal basis into another in three dimensions. Obviously, the axis through the origin about which the basis is rotated is an eigenvector of the transformation. It's less clear what the other two eigenvectors might be; or, equivalently, what are the eigenvectors corresponding to a two-dimensional rotation of basis in a plane? The way to find out is to write down the matrix and diagonalize it. The matrix

$$U(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$

Note that the determinant is equal to unity. The eigenvalues are given by solving

$$\det\bigl(U(\theta) - \lambda I\bigr) = (\cos\theta - \lambda)^2 + \sin^2\theta = 0, \quad \text{giving } \lambda_\pm = e^{\pm i\theta}.$$

The corresponding eigenvectors satisfy $U(\theta)|\lambda_\pm\rangle = \lambda_\pm|\lambda_\pm\rangle$. The eigenvectors, normalized, are:

$$|\lambda_\pm\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \mp i \end{pmatrix}.$$

Note that, in contrast to a Hermitian matrix, the eigenvalues of a unitary matrix do not have to be real. In fact, from $U^\dagger U = I$, sandwiched between the bra and ket of an eigenvector, we see that any eigenvalue of a unitary matrix must have unit modulus: it's a complex number on the unit circle. With hindsight, we should have realized that one eigenvalue of a two-dimensional rotation had to be $e^{i\theta}$: the product of two two-dimensional rotations is given by adding the angles of rotation, and a rotation through $\pi$ changes all signs, so has eigenvalue −1. Note that the eigenvectors themselves are independent of the angle of rotation: the rotations all commute, so they must have common eigenvectors. Successive rotation operators applied to the plus eigenvector add their angles; when applied to the minus eigenvector, all angles are subtracted.
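The two-dimensional rotation example can be checked the same way. A short sketch (the angle 0.7 is arbitrary) confirming that the eigenvalues are exp(±iθ), hence of unit modulus, and that the eigenvector components have the magnitudes of (1, ∓i)/√2:

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues, eigenvectors = np.linalg.eig(R)

# Every eigenvalue of a unitary (here real orthogonal) matrix lies
# on the unit circle ...
assert np.allclose(np.abs(eigenvalues), 1.0)

# ... and for a 2D rotation the eigenvalues are exp(+i*theta), exp(-i*theta).
assert np.allclose(sorted(np.angle(eigenvalues)), [-theta, theta])

# The eigenvectors are (1, -i)/sqrt(2) and (1, i)/sqrt(2) up to phase,
# independent of theta: every component has magnitude 1/sqrt(2).
assert np.allclose(np.abs(eigenvectors), 1 / np.sqrt(2))
```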
2D arrangements Results 1 - 10 of 22 - SIAM J. COMPUT , 1999 "... Given n points in three dimensions, we show how to answer halfspace range reporting queries in O(log n + k) expected time for an output size k. Our data structure can be preprocessed in optimal O(n log n) expected time. We apply this result to obtain the first optimal randomized algorithm for the co ..." Cited by 33 (7 self) Add to MetaCart Given n points in three dimensions, we show how to answer halfspace range reporting queries in O(log n + k) expected time for an output size k. Our data structure can be preprocessed in optimal O(n log n) expected time. We apply this result to obtain the first optimal randomized algorithm for the construction of the (≤ k)-level in an arrangement of n planes in three dimensions. The algorithm runs in O(n log n + nk²) expected time. Our techniques are based on random sampling. Applications in two dimensions include an improved data structure for "k nearest neighbors" queries, and an algorithm that constructs the order-k Voronoi diagram in O(n log n + nk log k) expected time. - Proc. 41st IEEE , 2002 "... Analyzing the worst-case complexity of the k-level in a planar arrangement of n curves is a fundamental problem in combinatorial geometry. We give the first subquadratic upper bound (roughly O(nk 9 2 s 3 )) for curves that are graphs of polynomial functions of an arbitrary fixed degree s. Previously ..." Cited by 23 (3 self) Add to MetaCart Analyzing the worst-case complexity of the k-level in a planar arrangement of n curves is a fundamental problem in combinatorial geometry. We give the first subquadratic upper bound (roughly O(nk 9 2 s 3 )) for curves that are graphs of polynomial functions of an arbitrary fixed degree s. Previously, nontrivial results were known only for the case s = 1 and s = 2. We also improve the earlier bound for pseudo-parabolas (curves that pairwise intersect at most twice) to O(nk k).
The proofs are simple and rely on a theorem of Tamaki and Tokuyama on cutting pseudo-parabolas into pseudo-segments, as well as a new observation for cutting pseudo-segments into pieces that can be extended to pseudo-lines. We mention applications to parametric and kinetic minimum spanning trees. , 2007 "... A top-k query retrieves the k highest scoring tuples from a data set with respect to a scoring function defined on the attributes of a tuple. The efficient evaluation of top-k queries has been an active research topic and many different instantiations of the problem, in a variety of settings, have b ..." Cited by 21 (1 self) Add to MetaCart A top-k query retrieves the k highest scoring tuples from a data set with respect to a scoring function defined on the attributes of a tuple. The efficient evaluation of top-k queries has been an active research topic and many different instantiations of the problem, in a variety of settings, have been studied. However, techniques developed for conventional, centralized or distributed databases are not directly applicable to highly dynamic environments and on-line applications, like data streams. Recently, techniques supporting top-k queries on data streams have been introduced. Such techniques are restrictive however, as they can only efficiently report top-k answers with respect to a pre-specified (as opposed to ad-hoc) set of queries. In this paper we introduce a novel geometric representation for the top-k query problem that allows us to raise this restriction. Utilizing notions of geometric arrangements, we design and analyze algorithms for incrementally maintaining a data set organized in an arrangement representation under streaming updates. We introduce query evaluation strategies that operate on top of an arrangement data structure that are able to guarantee efficient evaluation for ad-hoc queries. 
The performance of our core technique is augmented by incorporating tuple pruning strategies, minimizing the number of tuples that need to be stored and manipulated. This results in a main memory indexing technique supporting both efficient incremental updates and the evaluation of ad-hoc top-k queries. A thorough experimental study evaluates the efficiency of the proposed technique. "... We show how to compute and maintain the two-dimensional arrangement on a quadric that is induced by intersection curves with other quadrics. The key idea is to parameterize the quadric by two variables, which then allows us to implicitly compute the arrangement in a modified parameter space. We give ..." Cited by 14 (8 self) Add to MetaCart We show how to compute and maintain the two-dimensional arrangement on a quadric that is induced by intersection curves with other quadrics. The key idea is to parameterize the quadric by two variables, which then allows us to implicitly compute the arrangement in a modified parameter space. We give details of a possible parameterization and explain how to implement the needed geometric and topological predicates. , 1999 "... In light of recent developments, this paper re-examines the fundamental geometric problem of how to construct the k-level in an arrangement of n lines in the plane. • The author's recent dynamic data structure for planar convex hulls improves a decade-old sweep-line algorithm by Edelsbrunner and ..." Cited by 12 (6 self) Add to MetaCart In light of recent developments, this paper re-examines the fundamental geometric problem of how to construct the k-level in an arrangement of n lines in the plane. • The author's recent dynamic data structure for planar convex hulls improves a decade-old sweep-line algorithm by Edelsbrunner and Welzl, which now runs in O(n log m + m log^{1+ε} n) deterministic time and O(n) space, where m is the output size and ε is any positive constant.
We discuss simplification of the data structure in this particular application, by viewing the problem kinetically. • Har-Peled recently announced a randomized algorithm with an expected running time of O((n + m)α(n) log n). We observe that a version of an earlier randomized incremental algorithm by Agarwal, de Berg, Matousek, and Schwarzkopf yields almost the same result. • The current combinatorial bound by Dey shows that m = O(nk^{1/3}) in the worst case. We give an algorithm that guarantees O(n log n + nk^{1/3}) expected time. - In Proc. 44th IEEE Sympos. Found. Comput. Sci , 2003 "... We give a surprisingly short proof that in any planar arrangement of n curves where each pair intersects at most a fixed number (s) of times, the k-level has subquadratic (O(n 2s )) complexity. This answers one of the main open problems from the author's previous paper (FOCS'00), which provided a ..." Cited by 9 (2 self) Add to MetaCart We give a surprisingly short proof that in any planar arrangement of n curves where each pair intersects at most a fixed number (s) of times, the k-level has subquadratic (O(n 2s )) complexity. This answers one of the main open problems from the author's previous paper (FOCS'00), which provided a weaker bound for a restricted class of curves (graphs of degree-s polynomials) only. When combined with existing tools (cutting curves, sampling, etc.), the new idea generates a slew of improved k-level results for most of the curve families studied earlier, including a near-O(n ) bound for - In SPM ’06: Proceedings of the 2006 ACM symposium on Solid and physical modeling , 2006 "... Penetration depth (PD) is a distance metric that is used to describe the extent of overlap between two intersecting objects. Most of the prior work in PD computation has been restricted to translational PD, which is defined as the minimal translational motion that one of the overlapping objects must ..."
Cited by 9 (2 self) Add to MetaCart Penetration depth (PD) is a distance metric that is used to describe the extent of overlap between two intersecting objects. Most of the prior work in PD computation has been restricted to translational PD, which is defined as the minimal translational motion that one of the overlapping objects must undergo in order to make the two objects disjoint. In this paper, we extend the notion of PD to take into account both translational and rotational motion to separate the intersecting objects, namely generalized PD. When an object undergoes rigid transformation, some point on the object traces the longest trajectory. The generalized PD between two overlapping objects is defined as the minimum of the longest trajectories of one object under all possible rigid transformations to separate the overlapping objects. We present three new results to compute generalized PD between polyhedral models. First, we show that for two overlapping convex polytopes, the generalized PD is same as the translational PD. Second, when the complement of one of the objects is convex, we pose the generalized PD computation as a variant of the convex containment problem and compute an upper bound using optimization techniques. Finally, when both the objects are non-convex, we treat them as a combination of the above two cases, and present an algorithm that computes a lower and an upper bound on generalized PD. We highlight the performance of our algorithms on different models that undergo rigid motion in the 6-dimensional configuration space. Moreover, we utilize our algorithm for complete motion planning of polygonal robots undergoing translational and rotational motion in a plane. In particular, we use generalized PD computation for checking path non-existence. - CAGD , 2008 "... We describe a new subdivision method to efficiently compute the topology and the arrangement of implicit planar curves. 
We emphasize that the output topology and arrangement are guaranteed to be correct. Although we focus on the implicit case, the algorithm can also treat parametric or piecewise lin ..." Cited by 8 (2 self) Add to MetaCart We describe a new subdivision method to efficiently compute the topology and the arrangement of implicit planar curves. We emphasize that the output topology and arrangement are guaranteed to be correct. Although we focus on the implicit case, the algorithm can also treat parametric or piecewise linear curves without much additional work and no theoretical difficulties. The method isolates singular points from regular parts and deals with them independently. The topology near singular points is guaranteed through topological degree computation. In either case the topology inside regions is recovered from information on the boundary of a cell of the subdivision. Obtained regions are segmented to provide an efficient insertion operation while dynamically maintaining an arrangement structure. We use enveloping techniques of the polynomial represented in the Bernstein basis to achieve both efficiency and certification. It is finally shown on examples that this algorithm is able to handle curves defined by high degree polynomials with large coefficients, to identify regions of interest and use the resulting structure for either efficient rendering of implicit curves, point localization or boolean operation computation. , 2010 "... Abstract. We introduce a framework for the construction, maintenance, and manipulation of arrangements of curves embedded on certain two-dimensional orientable parametric surfaces in three-dimensional space. The framework applies to planes, cylinders, spheres, tori, and surfaces homeomorphic to them ..." Cited by 7 (6 self) Add to MetaCart Abstract. 
We introduce a framework for the construction, maintenance, and manipulation of arrangements of curves embedded on certain two-dimensional orientable parametric surfaces in three-dimensional space. The framework applies to planes, cylinders, spheres, tori, and surfaces homeomorphic to them. We reduce the effort needed to generalize existing algorithms, such as the sweep line and zone traversal algorithms, originally designed for arrangements of bounded curves in the plane, by extensive reuse of code. We have realized our approach as the Cgal package Arrangement on surface 2. We define a compact interface for our framework; only the operations in the interface need to be implemented for a specific application. The companion paper [6] describes concretizations for several types of surfaces and curves embedded on them, and applications. This is the first implementation of a generic algorithm that can handle arrangements on a large class of parametric - CRC Handbook of Algorithms and Theory of Computation , 1999 "... Introduction Robots are versatile mechanical devices equipped with actuators and sensors under the control of a computing system. They perform tasks by executing motions in the physical space. This space is populated by various objects and is subject to the laws of nature. A typical task consists o ..." Cited by 6 (3 self) Add to MetaCart Introduction Robots are versatile mechanical devices equipped with actuators and sensors under the control of a computing system. They perform tasks by executing motions in the physical space. This space is populated by various objects and is subject to the laws of nature. A typical task consists of achieving a goal spatial arrangement of objects from a given initial arrangement, for example, assembling a product. Robots are programmable, which means that they can perform a variety of tasks by simply changing the software commanding them. 
This software embeds robot algorithms, which are abstract descriptions of processes consisting of motions and sensing operations in the physical space. Robot algorithms differ in significant ways from traditional computer algorithms. The latter have full control over, and perfect access to the data they use, letting aside, for example, problems related to floating-point arithmetic. In contrast, robot algorithms eventually apply to physical objects i
Pythagorean triples? 3-4-5, and 5-12-13

Basically, a Pythagorean triple is just 3 positive integers which satisfy the Pythagorean theorem $a^2 + b^2 = c^2$. So $3^2 + 4^2 = 5^2 \text{ and } 5^2 + 12^2 = 13^2$.

In a nutshell, a Pythagorean triple $(a,b,c)$ has the property that $a^2+b^2=c^2$. Perhaps the best known method for generating such triples (attributed to Euclid) is to let:

$a=m^2-n^2$
$b=2mn$
$c=m^2+n^2$

where $n<m$ and $m,n\in\mathbb{N}$. A triple is said to be primitive if $a,b,c$ are co-prime, and this requires that $m,n$ be co-prime and that $m-n$ be odd.
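Euclid's formula above is easy to turn into code; a small sketch that generates triples for a few (m, n) pairs and checks the stated primitivity criterion:

```python
from math import gcd

def euclid_triple(m, n):
    """Euclid's formula: for integers m > n >= 1, return (a, b, c)
    with a^2 + b^2 = c^2."""
    return m*m - n*n, 2*m*n, m*m + n*n

for m in range(2, 5):
    for n in range(1, m):
        a, b, c = euclid_triple(m, n)
        assert a*a + b*b == c*c
        # The triple is primitive exactly when gcd(m, n) = 1 and m - n is odd.
        primitive = gcd(a, gcd(b, c)) == 1
        assert primitive == (gcd(m, n) == 1 and (m - n) % 2 == 1)
        print((a, b, c), "primitive" if primitive else "not primitive")
```

Note that (m, n) = (2, 1) and (3, 2) reproduce the 3-4-5 and 5-12-13 triples from the question.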
Distributive initial segments of the degrees of unsolvability, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 14 Results 1 - 10 of 17 - J. London Math. Soc , 1981 "... Degree theory, that is the study of the structure of the Turing degrees (or degrees of unsolvability), has been divided by Simpson [24; §5] into two parts, global and local. By the global theory he means the study of general structural properties of D, the degrees as a partially ordered set or uppers ..." Cited by 18 (6 self) Add to MetaCart Degree theory, that is the study of the structure of the Turing degrees (or degrees of unsolvability), has been divided by Simpson [24; §5] into two parts, global and local. By the global theory he means the study of general structural properties of D, the degrees as a partially ordered set or upper semilattice. The local theory concerns - Bulletin of Symbolic Logic "... §1. Introduction. The occasion of a retiring presidential address seems like a time to look back, take stock and perhaps look ahead. ..." "... We prove that every countable jump upper semilattice can be embedded in D, where a jump upper semilattice (jusl) is an upper semilattice endowed with a strictly increasing and monotone unary operator that we call jump, and D is the jusl of Turing degrees. As a corollary we get that the existential ..." Cited by 5 (0 self) Add to MetaCart We prove that every countable jump upper semilattice can be embedded in D, where a jump upper semilattice (jusl) is an upper semilattice endowed with a strictly increasing and monotone unary operator that we call jump, and D is the jusl of Turing degrees. As a corollary we get that the existential theory of 〈D, ≤_T, ∨, ′〉 is decidable. We also prove that this result is not true about jusls with 0, by proving that not every quantifier free 1-type of jusl with 0 is realized in D. On the other hand, we show that every quantifier free 1-type of jump partial ordering (jpo) with 0 is realized in D.
Moreover, we show that if every quantifier free type, p(x1,..., xn), of jpo with 0, which contains the formula x1 ≤ 0^(m) &... & xn ≤ 0^(m) for some m, is realized in D, then every quantifier free type of jpo with 0 is realized in D. We also study the question of whether every jusl with the c.p.p. and size κ ≤ 2^ℵ0 is embeddable in D. We show that for κ = 2^ℵ0 the answer is no, and that for κ = ℵ1 it is independent of ZFC. (It is true if MA(κ) holds.) , 2004 "... Decidability problems for (fragments of) the theory of the structure D of Turing degrees form a wide and interesting class, much of which is yet unsolved. Lachlan showed in 1968 that the first order theory of D with the Turing reducibility relation is undecidable. Later results concerned the decida ..." Cited by 4 (0 self) Add to MetaCart Decidability problems for (fragments of) the theory of the structure D of Turing degrees form a wide and interesting class, much of which is yet unsolved. Lachlan showed in 1968 that the first order theory of D with the Turing reducibility relation is undecidable. Later results concerned the decidability (or undecidability) of fragments of this theory, and of other theories obtained by extending the language (e.g. with 0 or with the Turing jump operator). Proofs of these results often hinge on the ability to embed certain classes of structures (lattices, jump-hierarchies, etc.) in certain ways, into the structure of Turing degrees. The first part of the dissertation presents two results which concern embeddings onto initial segments of D with known double jumps, in other words a double jump inversion of certain degree structures onto initial segments. These results may prove to be useful tools in uncovering decidability results for (fragments of) the theory of the Turing degrees in languages containing the double jump operator.
The second part of the dissertation relates to the problem of characterizing the Turing degrees which have a strong minimal cover, an issue first raised by Spector in 1956. Ishmukhametov solved the problem for the recursively enumerable degrees, by showing that those which have a strong minimal cover are exactly the r.e. weakly recursive degrees. Here we show that this characterization fails outside the r.e. degrees, and also construct a minimal degree below 0 ′ which is not weakly recursive, thereby answering a question from Ishmukhametov’s paper. - Archive for Mathematical Logic , 1993 "... We describe the important role that the conjectures and questions posed at the end of the two editions of Gerald Sacks's Degrees of Unsolvability have had in the development of recursion theory over the past thirty years. Gerald Sacks has had a major influence on the development of logic, particular ..." Cited by 4 (1 self) Add to MetaCart We describe the important role that the conjectures and questions posed at the end of the two editions of Gerald Sacks's Degrees of Unsolvability have had in the development of recursion theory over the past thirty years. Gerald Sacks has had a major influence on the development of logic, particularly recursion theory, over the past thirty years through his research, writing and teaching. Here, I would like to concentrate on just one instance of that influence that I feel has been of special significance to the study of the degrees of unsolvability in general and on my own work in particular--- the conjectures and questions posed at the end of the two editions of Sacks's first book, the classic monograph Degrees of Unsolvability (Annals "... We present a summary of the lectures delivered to the Institute for Mathematical Sciences, Singapore, during the 2005 Summer School in Mathematical Logic. The lectures covered topics on the global structure of the Turing degrees D, the countability of its automorphism group, and the definability of ..." 
Cited by 4 (0 self) Add to MetaCart We present a summary of the lectures delivered to the Institute for Mathematical Sciences, Singapore, during the 2005 Summer School in Mathematical Logic. The lectures covered topics on the global structure of the Turing degrees D, the countability of its automorphism group, and the definability of the Turing jump within D. - Theoret. Comput. Sci , 1997 "... A BSS machine is #-uniform if it does not use exact tests; such machines are equivalent (modulo parameters) to Type 2 Turing machines. We define a notion of closure related to Turing machines for archimedean fields, and show that such fields admit nontrivial #-uniformly decidable sets iff they are n ..." Cited by 3 (3 self) Add to MetaCart A BSS machine is #-uniform if it does not use exact tests; such machines are equivalent (modulo parameters) to Type 2 Turing machines. We define a notion of closure related to Turing machines for archimedean fields, and show that such fields admit nontrivial #-uniformly decidable sets iff they are not Turing closed. Then, the partially ordered set of Turing closed fields is proved isomorphic to the ideal completion of unsolvability degrees. 1 Introduction In a previous paper [2], the authors have introduced a version of the BSS model of computability [1] in which exact tests are not allowed. Essentially, a BSS machine is #-uniform iff its halting set and computed function do not change when the test for equality with 0 is replaced with a test for membership to an arbitrary ball around 0. A set is #-uniformly semi-decidable iff it is the halting set of a #-uniform BSS machine; as it turns out, such sets are always open. There is a strict relation between #-uniform computability and r... - In Proceedings of Logic Colloquium , 2003 "... We prove that the two quantifier theory of the Turing degrees with order, join and jump is undecidable. ..." - Trans. Am. Math. Soc "... 
Abstract The three quantifier theory of (R; ≤_T), the recursively enumerable degrees under Turing reducibility, was proven undecidable by Lempp, Nies and Slaman [1998]. The two quantifier theory includes the lattice embedding problem and its decidability is a long standing open question. A negative s ..." Cited by 2 (2 self) Add to MetaCart Abstract The three quantifier theory of (R; ≤_T), the recursively enumerable degrees under Turing reducibility, was proven undecidable by Lempp, Nies and Slaman [1998]. The two quantifier theory includes the lattice embedding problem and its decidability is a long standing open question. A negative solution to this problem seems out of reach of the standard methods of interpretation of theories because the language is relational. We prove the undecidability of a fragment of the theory of R that lies between the two and three quantifier theories with ≤_T but includes function - Logic Colloquium '90, Association of Symbolic Logic Summer Meeting in Helsinki, Berlin 1993 [Lecture Notes in Logic 2 "... Gödel's work [Gö34] on undecidable theories and the subsequent formalisations of the notion of a recursive function ([Tu36], [Kl36] etc.) have led to an ever deepening understanding of the nature of the non-computable universe (which as Gödel himself showed, includes sets and functions of everyday s ..." Cited by 2 (1 self) Add to MetaCart Gödel's work [Gö34] on undecidable theories and the subsequent formalisations of the notion of a recursive function ([Tu36], [Kl36] etc.) have led to an ever deepening understanding of the nature of the non-computable universe (which as Gödel himself showed, includes sets and functions of everyday significance).
The nontrivial aspect of Church's Thesis (any function not contained within one of the equivalent definitions of recursive/Turing computable, cannot be considered to be effectively computable) still provides a basis not only for classical and generalised recursion theory, but also for contemporary theoretical computer science. Recent years, in parallel with the massive increase in interest in the computable universe and the development of much subtler concepts of 'practically computable', have seen remarkable progress with some of the most basic and challenging questions concerning the non-computable universe, results both of philosophical significance and of potentially wider technical importance. Relativising Church's Thesis, Kleene and Post [KP54] proposed the now
Complete Response of a First Order Circuit

The complete response of a first order circuit to a constant input is described by an equation of the form

v(t) = A e^(−t/tau) + B

This equation shows the voltage v(t) to be a function of the time t. This function has three parameters, A, B, and tau. The figure shown below represents the same relationship between v(t) and t, graphically. This figure provides an opportunity to experiment with the values of these three parameters and observe the resulting changes to the shape of the plot.

1. Set B = 10 V and tau = 10 ms. Vary A.
2. Set A = 10 V and tau = 10 ms. Vary B.
3. Set A = 1 V and B = 19 V. Vary tau.
4. Set A = 19 V and B = 1 V. Vary tau.
5. Set A = 15 V and B = 15 V. Vary tau.
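The same experiments can be reproduced numerically. A short sketch, assuming the standard three-parameter form v(t) = A e^(−t/tau) + B, in which B is the steady-state value and A + B the initial value:

```python
import numpy as np

def v(t, A, B, tau):
    """First-order complete response, assuming the three-parameter form
    v(t) = A*exp(-t/tau) + B, with A and B in volts and tau in seconds.
    B is the steady-state value; A + B is the initial value v(0)."""
    return A * np.exp(-t / tau) + B

# Experiment 5 from the list above: A = 15 V, B = 15 V, varying tau.
for tau in (0.005, 0.010, 0.020):
    t = np.linspace(0.0, 0.05, 6)   # sample times from 0 to 50 ms
    print(f"tau = {tau*1000:.0f} ms:", np.round(v(t, 15.0, 15.0, tau), 2))
```

A larger tau stretches the transient out in time; after roughly five time constants the response has settled to B.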
Braingle: 'Class Photo' Brain Teaser

Class Photo

Logic puzzles require you to think. You will have to be logical in your reasoning.

Puzzle ID: #25168
Category: Logic
Submitted By: cnmne
Corrected By: cnmne

Jacob's class picture has 40 photos arranged in an 8 x 5 grid. The photos in the top row are numbered 1 through 8 from left to right, with the photos in the remaining rows similarly numbered (as shown below). Given the following clues (bordering includes horizontal, vertical, and diagonal), where is Jacob's picture?

X X X X X X X X (1 - 8)
X X X X X X X X (9 - 16)
X X X X X X X X (17 - 24)
X X X X X X X X (25 - 32)
X X X X X X X X (33 - 40)

1) There are 20 boys and 20 girls.
2) Each row and column has at least two girls, but no more than four girls.
3) Every girl borders at least one other girl.
4) Girls are located at positions that are prime numbers.
5) Boys are located at positions that are either squares or cubes.
6) Jacob is the only boy that borders a unique number of girls.

Hint: The number 1 is not a prime number.
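Clues 4 and 5 pin down part of the grid directly. A quick helper (an aid for working the puzzle, not the full solution) that lists the positions forced to be girls (primes, with 1 not prime) and boys (squares or cubes):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

positions = range(1, 41)  # the 8 x 5 class-photo grid, numbered 1..40

girls = [p for p in positions if is_prime(p)]                     # clue 4
boys = [p for p in positions
        if round(p**0.5)**2 == p or round(p**(1/3))**3 == p]      # clue 5

print("girls (primes):         ", girls)
print("boys (squares or cubes):", boys)
# That fixes 12 girls and 8 boys; the remaining 20 positions must be
# settled using clues 1-3 and 6.
```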
Reformulation in mathematical programming: an application to quantum chemistry
Summary. Mathematical programming is a language for describing optimization problems; it is based on parameters, decision variables, and objective function(s) subject to various types of constraints. The present treatment is concerned with the case when objective(s) and constraints are algebraic mathematical expressions of the parameters and decision variables, and therefore excludes optimization of black-box functions. A reformulation of a mathematical program P is a mathematical program Q obtained from P via symbolic transformations applied to the sets of variables, objectives and constraints. We present a survey of existing reformulations interpreted along these lines, describe some example applications, and discuss the implementation of a software framework for reformulation and optimization. Cited by 17 (13 self)

(untitled), 2008
A reformulation of a mathematical program is a formulation which shares some properties with, but is in some sense better than, the original program. Reformulations are important with respect to the choice and efficiency of the solution algorithms; furthermore, it is desirable that reformulations can be carried out automatically. Reformulation techniques are very common in mathematical programming but, interestingly, they have never been studied under a common framework. This paper attempts to move some steps in this direction. We define a framework for storing and manipulating mathematical programming formulations, and give several fundamental definitions categorizing reformulations into essentially four types (opt-reformulations, narrowings, relaxations and approximations). We establish some theoretical results and give reformulation examples for each type. Cited by 17 (13 self)

(untitled), 2009
The best known method to find exact or at least ε-approximate solutions to polynomial programming problems is the spatial Branch-and-Bound algorithm, which rests on computing lower bounds to the value of the objective function to be minimized on each region that it explores. These lower bounds are often computed by solving convex relaxations of the original program. Although convex envelopes are explicitly known (via linear inequalities) for bilinear and trilinear terms on arbitrary boxes, such a description is unknown, in general, for multilinear terms of higher order. In this paper, we study convex relaxations of quadrilinear terms. We exploit associativity to rewrite such terms as products of bilinear and trilinear terms. Using a general technique, we establish that any relaxation for k-linear terms that employs a successive use of relaxing bilinear terms (via the bilinear convex envelope) can be improved by employing instead a relaxation of a trilinear term (via the trilinear convex envelope). We present a computational analysis which helps establish which relaxations are strictly tighter, and we apply our findings to two well-studied applications: the Molecular Distance Geometry Problem and the Hartree-Fock Problem. Cited by 5 (3 self)

(untitled), 2007
Keywords: Reformulation-Linearization Technique, lift-and-project, tight relaxations, valid inequalities, model reformulation, convex hull, convex envelopes, mixed-integer 0-1 programs, polynomial programs, nonconvex programs, factorable programs, reduced relaxations. Discrete and continuous nonconvex programming problems arise in a host of practical applications in the context of production planning and control, location-allocation, distribution, economics and game theory, quantum chemistry, and process and engineering design situations. Several recent advances have been made in the development of branch-and-cut type algorithms for mixed-integer linear and nonlinear programming problems, as well as polyhedral outer-approximation methods for continuous nonconvex programming problems. At the heart of these approaches is a sequence of linear (or convex) programming relaxations that drive the solution process, and the success of such algorithms is strongly tied to the strength or tightness of these relaxations. The Reformulation-Linearization Technique (RLT) is a method that generates such tight linear programming relaxations, used not only for constructing exact solution algorithms but also for designing powerful heuristic procedures for large classes of discrete combinatorial and continuous nonconvex programming problems. Its development originated in [4, 5, 6], initially focusing on 0-1 and mixed 0-1 linear and ... Cited by 4 (1 self)

(untitled)
In this paper we compare four different ways to compute a convex linear relaxation of a quadrilinear monomial on a box, analyzing their relative tightness. We computationally compare the quality of the relaxations, and we provide a general theorem on pairwise comparison of relaxation strength, which applies to some of our pairs of relaxations for quadrilinear monomials. Our results can be used to configure a spatial Branch-and-Bound global optimization algorithm. We apply our results to the Molecular Distance Geometry Problem, demonstrating the usefulness of the present study. Keywords: quadrilinear; convex relaxation; reformulation; global optimization; spatial Branch-and-Bound. Cited by 2 (1 self)

(untitled)
A general optimization problem can be expressed in the form min{cx : x ∈ S}, (1) where x ∈ R^n is the vector of decision variables, c ∈ R^n is a linear objective function and S ⊂ R^n is the set of feasible solutions of (1). Because S is generally ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3780794","timestamp":"2014-04-17T23:54:48Z","content_type":null,"content_length":"26658","record_id":"<urn:uuid:376564f1-ea05-4912-8117-e11db5060d7d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Able to add to array beyond size? This is allowed in the sense that it will compile and may even appear to run. However, by writing into memory beyond what was allocated for the array, you are stepping into undefined behavior. The consequences of this are non-deterministic, but a common one is unreliable data. That is, you may be overwriting another variable that you declared earlier.
{"url":"http://www.cplusplus.com/forum/beginner/112550/","timestamp":"2014-04-20T09:06:46Z","content_type":null,"content_length":"10873","record_id":"<urn:uuid:81c3719e-6479-4173-85ce-618bb1b5318b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Gambrills Precalculus Tutor Find a Gambrills Precalculus Tutor ...Since 2007, tutoring has been a passion for me. My interest in learning is infectious, and my tutoring style is both friendly and engaging. I joined WyzAnt.com in the Spring of 2011, and since then have tutored nearly full time. 39 Subjects: including precalculus, English, calculus, physics ...Violin - the upper voice in the string family that carries the soaring melodies in cinematic music, and a favorite instrument among classical composers. I began studying the violin in 1979, in 4th grade strings class. Soon after, I began private lessons with a Suzuki method teacher. 49 Subjects: including precalculus, reading, chemistry, English ...I studied abroad for 4 months in Madrid, Spain. While there I volunteered with Helenski Espana, a human rights group. We taught lessons to school-aged children (in Spanish) on human rights and their basic rights as citizens of Spain. 17 Subjects: including precalculus, Spanish, writing, physics ...I have been a GED teacher for 2 years and a tutor on WyzAnt for 2+ years, and have successfully prepared students for the SAT and GED. Six of my students have earned their GEDs, and I have raised the SAT scores of 3 students. I have been playing chess for most of my life and have been able to teach people on the fly. 31 Subjects: including precalculus, reading, English, writing ...I initially learned to program in SAS during my senior-year SAS class, but eventually went on to do a few research projects with people from other colleges. Right now I am a full-time SAS programmer, and I have finished the first SAS programming course provided by SAS Institute. I was born and raised in China. 13 Subjects: including precalculus, calculus, geometry, Chinese
{"url":"http://www.purplemath.com/gambrills_md_precalculus_tutors.php","timestamp":"2014-04-20T06:28:08Z","content_type":null,"content_length":"24087","record_id":"<urn:uuid:9522a307-4c63-44ca-b3ae-7c293f94e0e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Annuity Payments Payments in Time Value of Money formulas are a series of equal, evenly-spaced cash flows of an annuity such as payments for a mortgage or monthly receipts from a retirement account. Payments must: • be the same amount each period • occur at evenly spaced intervals • occur exactly at the beginning or end of each period • be all inflows or all outflows (payments or receipts) • represent the payment during one compounding (or discount) period Calculate Payments When Present Value Is Known The Present Value is an amount that you have now, such as the price of property that you have just purchased or the value of equipment that you have leased. When you know the present value, interest rate, and number of periods of an ordinary annuity, you can solve for the payment with this formula: ┃ payment = PVoa / [(1- (1 / (1 + i)^n )) / i] ┃ PVoa = Present Value of an ordinary annuity (payments are made at the end of each period) i = interest per period n = number of periods Example: You can get a $150,000 home mortgage at 7% annual interest rate for 30 years. Payments are due at the end of each month and interest is compounded monthly. How much will your payments be? PVoa = 150,000, the loan amount i = .005833 interest per month (.07 / 12) n = 360 periods (12 payments per year for 30 years) payment = 150,000 / [(1 - ( 1 / (1.005833)^360)) / .005833] = 997.95 Calculate Payments When Future Value Is Known The Future Value is an amount that you wish to have after a number of periods have passed. For example, you may need to accumulate $20,000 in ten years to pay for college tuition. 
When you know the future value, interest rate, and number of periods of an ordinary annuity, you can solve for the payment with this formula: ┃ payment = FVoa / [((1 + i)^n - 1 ) / i] ┃ FVoa = Future Value of an ordinary annuity (payments are made at the end of each period) i = interest per period n = number of periods Example: In 10 years, you will need $50,000 to pay for college tuition. Your savings account pays 5% interest compounded monthly. How much should you save each month to reach your goal? FVoa = 50,000, the future savings goal i = .004167 interest per month (.05 / 12) n = 120 periods (12 payments per year for 10 years) payment = 50,000 / [(1.004167^120 - 1) / .004167] = 321.99
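Both payment formulas are easy to check in a few lines of Python (a sketch; the function names are mine, not from the original page):

```python
def pmt_from_pv(pv, i, n):
    """Payment of an ordinary annuity given its present value PV, per-period
    interest i, and number of periods n."""
    return pv / ((1 - (1 + i) ** -n) / i)

def pmt_from_fv(fv, i, n):
    """Payment of an ordinary annuity given its future value FV."""
    return fv / (((1 + i) ** n - 1) / i)

# Mortgage example: $150,000 at 7%/yr for 30 years, monthly payments.
mortgage = pmt_from_pv(150_000, 0.07 / 12, 360)   # about 997.95

# Savings example: $50,000 goal in 10 years at 5%/yr, monthly deposits.
savings = pmt_from_fv(50_000, 0.05 / 12, 120)     # about 321.99

print(round(mortgage, 2), round(savings, 2))
```

Using the unrounded per-period rates (0.07/12 rather than .005833) reproduces the same payments to the cent.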
{"url":"http://www.getobjects.com/Components/Finance/TVM/pmt.html","timestamp":"2014-04-18T03:40:23Z","content_type":null,"content_length":"10521","record_id":"<urn:uuid:5ed7f136-bfb3-44d7-9a9c-fb8bee732b61>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Mechanics
Kerson Huang

Unlike most other texts on the subject, this clear, concise introduction to the theory of microscopic bodies treats the modern theory of critical phenomena. It provides up-to-date coverage of recent major advances, including a self-contained description of thermodynamics and the classical kinetic theory of gases, interesting applications such as superfluids and the quantum Hall effect, and several current research applications. The last three chapters are devoted to the Landau-Wilson approach to critical phenomena. Many new problems and illustrations have been added to this edition.

Review of Statistical Mechanics, 2nd Edition (Lukasz Glinka, Goodreads): "Standard statistical mechanics."

Partial contents: Part A, Thermodynamics and Kinetic Theory; Some Applications of Thermodynamics; The Problem of Kinetic Theory (16 other sections not shown).
{"url":"http://books.google.com.au/books?id=M8PvAAAAMAAJ&q=excited&dq=related:ISBN0132494833&source=gbs_book_other_versions_r&cad=5","timestamp":"2014-04-21T09:45:37Z","content_type":null,"content_length":"118018","record_id":"<urn:uuid:7ed376b7-8077-41e8-b173-13b979e2c9b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Fourier Transform of Radially Symmetric Potential Functions

Radially symmetric potential functions that are positive and bounded (i.e., do not diverge at r = 0) are commonly used to describe the interaction energy between pairs of macromolecules. The centers of mass of two mixed polymers can superimpose when their macromolecules entangle. It turns out that the behavior of these macromolecules (melting, crystallization, and diffusion, for example) is very sensitive to the shape of the potential that describes their interaction. Indeed, the shape of the Fourier transform of this class of potential functions has been shown to be important in classifying the phase behavior of such macromolecules [1]. This Demonstration shows that potential functions that have similar shapes in real space can be very dissimilar in Fourier space. The functions included are the Hertz [2], overlap, and Gaussian potentials. The three-dimensional Fourier transform of a radially symmetric function f(r) is given by

F(k) = (4π/k) ∫_0^∞ f(r) r sin(kr) dr

After work by: J. C. Pàmies, A. Cacciuto, and D. Frenkel
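The standard reduction of the 3-D transform of a radially symmetric function to a single radial integral, F(k) = (4π/k) ∫_0^∞ f(r) r sin(kr) dr, can be checked numerically against the Gaussian potential, whose transform is known in closed form. A pure-Python sketch (the helper name and the integration parameters are mine, not from the Demonstration):

```python
import math

def radial_ft(f, k, r_max=12.0, steps=24_000):
    """3-D Fourier transform of a radially symmetric function f(r):
    F(k) = (4*pi/k) * integral_0^inf f(r) r sin(k r) dr,
    approximated with the trapezoidal rule on [0, r_max]."""
    h = r_max / steps
    total = 0.0
    for j in range(steps + 1):
        r = j * h
        w = 0.5 if j in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * f(r) * r * math.sin(k * r)
    return (4.0 * math.pi / k) * total * h

# Check against the Gaussian, f(r) = exp(-r^2), whose 3-D transform is
# the standard identity F(k) = pi^(3/2) * exp(-k^2 / 4).
k = 1.0
numeric = radial_ft(lambda r: math.exp(-r * r), k)
exact = math.pi ** 1.5 * math.exp(-k * k / 4)
print(numeric, exact)  # the two agree to many decimal places
```

Swapping in a truncated (bounded) potential for the lambda shows the same machinery working for the other potentials the Demonstration covers.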
{"url":"http://demonstrations.wolfram.com/FourierTransformOfRadiallySymmetricPotentialFunctions/","timestamp":"2014-04-19T17:03:40Z","content_type":null,"content_length":"43968","record_id":"<urn:uuid:52aa2442-f907-4eb1-964e-f9ffdd43878b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Basis of these vectors

May 23rd 2012, 10:02 AM  #1
I'm trying to figure out the following; can someone tell me the process and I will try to work it on my own? I am just not sure how to work it out. Let S be the subspace of R^4 with the following vectors as a basis: (1,1,0,-1), (1,3,0,1), (-1,0,1,8).
a) Show that (6,9,-9,-9)^T is in S.

Re: Basis of these vectors

May 23rd 2012, 02:41 PM  #2 (MHF Contributor)
Solve the equation a(1,1,0,-1) + b(1,3,0,1) + c(-1,0,1,8) = (6,9,-9,-9) for a, b, and c. This leads to (a+b-c, a+3b, c, -a+b+8c) = (6,9,-9,-9), or the 4 linear equations:
a+b-c = 6
a+3b = 9
c = -9
-a+b+8c = -9
or, if you prefer, the matrix equation:
$\begin{bmatrix}1&1&-1\\1&3&0\\0&0&1\\-1&1&8 \end{bmatrix} \begin{bmatrix}a\\b\\c \end{bmatrix} = \begin{bmatrix}6\\9\\-9\\-9 \end{bmatrix}$
I can tell you right now, however, that a better question to ask yourself is NOT "how do I prove (6,9,-9,-9) is in S?" but rather "how do I tell IF (6,9,-9,-9) is in S?" S is a SUBSPACE of R^4, of dimension 3 < 4, so there are going to be LOTS of vectors that AREN'T in S.
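The responder's "better question" can be automated: row-reduce the augmented system and check whether it is consistent. A small pure-Python sketch (the helper and its name are mine, not from the thread):

```python
def in_span(basis, target, tol=1e-9):
    """Return True if target is a linear combination of the basis vectors.
    Row-reduces the augmented matrix [A | target], where the basis vectors
    form the columns of A, and checks for an inconsistent row."""
    m, n = len(target), len(basis)
    A = [[basis[j][i] for j in range(n)] + [target[i]] for i in range(m)]
    row = 0
    for col in range(n):
        piv = max(range(row, m), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < tol:
            continue  # no pivot in this column
        A[row], A[piv] = A[piv], A[row]
        for r in range(m):
            if r != row:
                f = A[r][col] / A[row][col]
                for c in range(col, n + 1):
                    A[r][c] -= f * A[row][c]
        row += 1
    # Consistent iff every fully-zeroed row also has a zero right-hand side.
    return all(abs(A[r][n]) < tol for r in range(row, m))

basis = [(1, 1, 0, -1), (1, 3, 0, 1), (-1, 0, 1, 8)]
# For the numbers as transcribed in the thread, equations 2 and 3 force
# a = -9, b = 6, c = -9, but then -a+b+8c = -57, not -9, so this prints
# False; worth re-checking against the original problem statement.
print(in_span(basis, (6, 9, -9, -9)))
```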
{"url":"http://mathhelpforum.com/advanced-algebra/199143-basis-these-vectors.html","timestamp":"2014-04-20T01:47:11Z","content_type":null,"content_length":"33228","record_id":"<urn:uuid:fcd7c1c7-7781-45d9-900a-4dcff2c68440>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Question on input noise current - Page 2 - diyAudio Folks, let's not complicate the issue. Noise figure is mostly useless. I know, I know, you have been told that it is important, but for audio, with today's circuits, it is mostly useless. However, an IC op amp works just like an individual transistor, or fet, except you have to add the effect of the op amp topology, to get the answer to the noise produced. If you look at an IC op amp data sheet, you will all be able to find the VOLTAGE NOISE of the op amp. If the op amp is FET input, that is all you need to know. If the op amp has a bipolar input, then you HAVE to pay attention to the CURRENT NOISE, because the current noise will always be pretty high. This is BECAUSE the voltage noise is low, and that implies the base current is relatively high. Of course, extremely hi beta input devices will lower the base current, BUT there is always the trade-off with base resistivity, that would increase the VOLTAGE NOISE, if taken too far. This is what designers live for. The trade-offs in design for best performance. So you have to look at the data sheet for the current noise contribution and realize that any source over 1Kohm and even 100 ohms in many cases is going to see the current source noise contribution as important as the voltage noise. More later.
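The trade-off described above is easy to put numbers on: the total input-referred noise density combines the op amp's voltage noise, its current noise flowing through the source resistance, and the source resistor's own thermal noise, all in quadrature. A sketch with hypothetical device values (not taken from any particular data sheet):

```python
import math

K_BOLTZMANN = 1.38e-23  # J/K
T = 300.0               # K, room temperature

def input_noise_density(en, inoise, rs):
    """Total input-referred noise density in V/sqrt(Hz): voltage noise,
    current noise times Rs, and the Rs thermal noise, summed in quadrature."""
    thermal = math.sqrt(4 * K_BOLTZMANN * T * rs)
    return math.sqrt(en ** 2 + (inoise * rs) ** 2 + thermal ** 2)

# Hypothetical bipolar-input op amp: low voltage noise, relatively high
# current noise (the trade-off described in the post).
en = 3e-9       # 3 nV/sqrt(Hz)
inoise = 1e-12  # 1 pA/sqrt(Hz)

for rs in (100, 1_000, 10_000, 100_000):
    total_nv = input_noise_density(en, inoise, rs) * 1e9
    print(f"Rs = {rs:>7} ohm -> {total_nv:.2f} nV/sqrt(Hz)")
```

With these made-up numbers the current-noise term equals the voltage noise at Rs = en/in = 3 kohm and dominates above it, which is exactly the point about sources over about 1 kohm seeing the current noise contribution become as important as the voltage noise.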
{"url":"http://www.diyaudio.com/forums/solid-state/20567-question-input-noise-current-2.html","timestamp":"2014-04-24T22:32:24Z","content_type":null,"content_length":"78572","record_id":"<urn:uuid:ab3e9064-23b0-412b-80d8-2218b7e11083>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Time Series Analysis
Oceanography 540--Marine Geological Processes--Winter Quarter 2001

Time series of oceanic phenomena often contain periodic components related to forcing at a wide range of time scales: waves, tides and tidal currents, diurnal and annual cycles, ENSO, the Pacific Decadal Oscillation, and orbital geometry, with its influence on incoming solar radiation and Pleistocene climate. Time series analysis involves the extraction of information about these periodic components from time series data. A periodic signal can be represented by its intensity or amplitude as a function of time:

Figure 15-1

Using this figure we can introduce some definitions:
• The peak intensity of the signal (relative to the mean) is the amplitude, A.
• The waveform repeats with period T.
• Time can be expressed in either time units, t, or as an angle θ.
• The angular scale is related to the time scale via the relationship θ = 2πt/T.
• The period is related to the angular frequency ω by the relationship ω = 2π/T.
• The frequency f is the inverse of the period, f = 1/T.
• Thus ω = 2πf.
• With these definitions, the signal can be represented as h(t) = A sin(2πft) = A sin(ωt).

A physical process can generally be described by the values of some parameter h that varies with time, h = h(t). We refer to this representation as being in the time domain. An alternative and equivalent representation of the same signal would be to express it as its amplitude, H, as a function of frequency, f: H = H(f). We refer to this representation as being in the frequency domain.

Time Domain | Frequency Domain

Figure 15-2. The period of the waveform is 36 time units, so that the frequency is 1/36 = .0278. (The vertical axis on the right hand side is power, not amplitude; see below.)
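The frequency-domain peak in Figure 15-2 can be reproduced numerically: transform a sampled sinusoid of period 36 and locate the frequency bin with the most power. A sketch using a direct discrete Fourier transform (not part of the original notes):

```python
import cmath
import math

N = 360  # one sample per time unit
h = [math.sin(2 * math.pi * k / 36) for k in range(N)]  # period T = 36

# Direct discrete Fourier transform: H_n = sum_k h_k exp(-2*pi*i*k*n/N)
H = [sum(h[k] * cmath.exp(-2j * math.pi * k * n / N) for k in range(N))
     for n in range(N)]

power = [abs(Hn) ** 2 for Hn in H]
# Look only at positive frequencies below the Nyquist limit:
peak = max(range(1, N // 2), key=power.__getitem__)
print(peak, peak / N)  # bin 10, i.e. frequency 10/360 = 1/36, about 0.0278
```

All the power lands in a single bin here because the record holds a whole number of cycles; the windowing discussion later in the notes deals with what happens when it does not.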
These two representations of the same signal can be related by the Fourier transform:

Eq 15-1: H(f) = ∫_{-∞}^{∞} h(t) e^{-2πift} dt

(recall that e^{iθ} = cos θ + i sin θ). The original time series can be obtained from the inverse transform:

Eq 15-2: h(t) = ∫_{-∞}^{∞} H(f) e^{2πift} df

The transforms can be thought of as weighted averages of waveforms of a given frequency: the forward transform to the frequency domain is weighted by the signal amplitude h(t), and the inverse transform to the time domain by the frequency amplitude H(f). In other words, when the signal h has the same frequency as the waveform, the integral H is large. If h(t) is real (we are generally considering real signals) then H(f) is generally a complex number. However, under this condition H(-f) = H(f)* (the notation * denotes the complex conjugate). This symmetry means that for real signals there is no unique information in the negative frequencies. The power of the signal, P, is a measure of its squared amplitude. The power at a frequency f is defined as:

Eq 15-3: P(f) = |H(f)|^2

and so the total power in a signal is:

Eq 15-4: P = ∫_{-∞}^{∞} |H(f)|^2 df

Equations 15-1 through 15-4 apply to a continuous function spanning all times. In general though real signals will:
• be represented by discrete measurements
• span a finite interval
For such discrete time series, we consider measurements of h taken at discrete intervals Δ:

Eq 15-5: h_k = h(kΔ), k = 0, 1, 2, ...

The quantity

Eq 15-6: f_c = 1/(2Δ)

is called the critical (Nyquist) frequency. What is the significance of the critical frequency? There are two related issues.
• The first issue involves a concept called the sampling theorem, which says: if the spectrum of h(t) is zero outside of the frequency interval -f_c < f < f_c, then all spectral information is obtained if the signal is sampled at times separated by no more than 1/(2f_c).

Figure 15-3. The period of the signal is 36 time units and so the Nyquist frequency is 1/18 time units. If we sample time unit by time unit, we get a smooth curve.
Even if we sample by 18 time units, the Nyquist period, we get (provided we are not exactly in phase with the zero crossings) the highs and the lows, so that we still capture the period of the signal. If we sample by 36 time units though, we only get a single value--the signal appears constant through time. Ensuring that H(f) = 0 beyond the Nyquist frequency is achieved in practice by applying a low pass filter (a filter that lets low frequencies pass, but not high frequency components) to the signal before sampling (as we will see, nature applies a low pass filter through bioturbation in the surface-most sediments).
• The second issue is a phenomenon called aliasing. If there is power outside the frequency band -f_c < f < f_c, then discrete sampling folds that power into this frequency band. Schematically:

Figure 15-4. (The width of the true transform is 2f_c; the band from f_c to 3f_c maps to f_c to -f_c, the band from 3f_c to 5f_c maps to -f_c to f_c, and so on.)

To summarize:
1. Unless H(f) is zero for all f beyond the Nyquist frequency 1/(2Δ), discrete sampling will alias power into the Nyquist interval.
2. To avoid aliasing, either the sampling interval must be chosen to ensure that this condition is met, or the original signal must be filtered to remove high frequency components before sampling.

To understand the nature of the filtering effect of bioturbation, consider as an example a core taken in sediments accumulating at 3 cm/ky and sampled at 10 cm intervals. Thus, the sampling interval is about 3000 y and so the Nyquist frequency is about 1/6000 y. To model bioturbation, consider a well mixed box of height h at the sediment surface, with material delivered at rate F and removed as the product of sedimentation rate times concentration, uc.
A mass balance can be written:

Eq 15-7: h dc/dt = F - uc

or, rewriting in terms of a periodic variation in the delivery of material, F = F_0 (1 + a e^{2πift}):

Eq 15-8: h dc/dt = F_0 (1 + a e^{2πift}) - uc

This differential equation can be solved for some arbitrary initial condition to find that at large times, when initial conditions have decayed, there is a stationary solution:

Eq 15-9: c(t) = (F_0/u) [1 + a e^{2πift} / (1 + 2πifh/u)]

This equation can be simplified at the limits of high and low frequency. When f is large (the supply of material to the sediment varies at high frequency) it reduces to:

Eq 15-10: c(t) ≈ F_0/u

In this case mixing averages the fluctuations in the supply of rain to the sediment. When f is small (the supply to the sediment varies slowly) it reduces to:

Eq 15-11: c(t) ≈ (F_0/u)(1 + a e^{2πift})

In this case the periodic signal is preserved. In other words, low frequency signals are preserved while high frequency ones are not; the system acts as a low pass filter. The boundary between "low" and "high" frequency is effectively f ~ u/h. For a depth of bioturbation of 2-10 cm in our sample core, this frequency is somewhere between 1/600 y and 1/3000 y. By carefully choosing the sampling interval, little aliasing is possible. (And of course we can also apply a low pass filter to be sure.) Mathematica is a useful tool for solving Eq 15-8 | Download Notebook

The time series analysis of a record consists of a series of computational steps. Depending on the application these might include:
1. detrending and whitening
2. autocorrelation
3. windowing
4. spectral estimation
We will discuss these in the reverse order from which they are performed.

Spectral estimation

If we have N measurements of a parameter h sampled at an interval Δ, taken at times t_k = kΔ with k = 0, ..., N-1.
We introduce the notation h_k = h(t_k) for the values at these times. From these N measurements we can derive independent information at only N discrete frequencies--these are normally spaced evenly within the Nyquist limits:

Eq 15-12: f_n = n/(NΔ), n = -N/2, ..., N/2

With these conventions the discrete Fourier transform is written:

Eq 15-13: H_n = Σ_{k=0}^{N-1} h_k e^{-2πikn/N}

The power at each of these frequencies is:

Eq 15-14: P(f_n) = |H_n|^2

This information can be plotted in the form of a periodogram:

Figure 15-5

A common problem is that these estimates are not confined to the bin implied by this geometry; they actually have a spread several bins wide. A process termed data windowing minimizes this leakage between bins. It involves applying a windowing function w_k to the data before forming the transform:

Eq 15-15: H_n = Σ_{k=0}^{N-1} w_k h_k e^{-2πikn/N}

Then the power density is given by:

Eq 15-16: P(f_n) = |H_n|^2 / Σ_{k=0}^{N-1} w_k^2

The origin of the leakage between bins lies in the process of taking a portion out of a long time series. Consider a time series h and a window function w: extracting a discrete portion out of the time series h is equivalent to multiplying h by a window function w where w = 1 in the region of interest and 0 everywhere else. The convolution theorem says that for two functions h(t) and w(t), if p = wh, then P(f) is the convolution of W(f) and H(f). When the discrete transform is taken, the high frequency components associated with the steps in the function w are aliased into the Nyquist interval. The characteristic of a good window is that it turns "on and off" smoothly, thereby minimizing the introduction of high frequency components into the transform.

In performing time series analysis, one can work with autocorrelated records to produce a correlogram. If the Fourier transform of h(t) is H(f), and the Fourier transform of the autocorrelation function of h(t) is R(f), then R(f) = |H(f)|^2.
For N data points taken from two time series g and h, the correlation as a function of the lag j is:

Eq 15-17: r_j = Σ_k g_{k+j} h_k

Thus the autocorrelation of a time series with itself is given by:

Eq 15-18: r_j = Σ_k h_{k+j} h_k

Imagine a periodic function that is shifted progressively to the right:

Figure 15-7

As it is shifted the correlation will drop, then rise to the same value when the function has been shifted exactly one cycle:

Figure 15-8

Thus the autocorrelation produces a different time series, but with the same spectral components.

Detrending, whitening

There are generally long period and d.c. components that give rise to power in a spectrum being concentrated at low frequencies. High pass filters can be applied to remove the d.c. baseline (detrending) and long period, low frequency components (whitening). The effect is to emphasize peaks at intermediate frequency, which no longer sit on the shoulder of a larger low frequency peak.

Thus far we have emphasized frequency characteristics of a time series. Time series analysis also includes consideration of the frequency components within the time domain. These are separated by:
1. transforming to the frequency domain
2. multiplying the transform by a bandpass filter, so that the product contains only the information within this frequency band
3. applying the inverse transform back to the time domain

Thus far we have been considering time series that are stationary, i.e., whose frequency content is not changing with time. We may also be interested in phenomena that do not satisfy this constraint. A traditional approach is to apply Fourier analysis to overlapping subsets of the time series in order to examine the evolution of the frequency content. A more recently developed technique is wavelet analysis (for a good review see (42)).
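The behavior sketched in Figures 15-7 and 15-8 can be demonstrated directly: the discrete autocorrelation of Eq 15-18, normalized by the number of overlapping points, returns to its maximum when the lag equals one full period. A sketch, not part of the original notes:

```python
import math

N, T = 360, 36
h = [math.sin(2 * math.pi * k / T) for k in range(N)]  # signal with period 36

def autocorr(h, j):
    """r_j = sum_k h_{k+j} h_k, divided by the number of overlapping points
    so that different lags are comparable."""
    n = len(h) - j
    return sum(h[k + j] * h[k] for k in range(n)) / n

# Scan lags from half a period to a period and a half:
best = max(range(T // 2, 3 * T // 2), key=lambda j: autocorr(h, j))
print(best)  # 36: the correlation returns to its maximum after one full period
```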
Figure 15-9 is adapted from that reference and compares Fourier and wavelet analysis of a stationary time series with two superimposed sine waves and a time series containing signals of the same frequency content but in different segments of the observation period.

Figure 15-9

We can review these ideas with this Matlab demonstration. Some resources for more detail on these topics:
• W.H. Press, et al., Numerical Recipes in <C | Fortran | Pascal> (the chapter on Fourier and Spectral Applications develops background for some basic algorithms)
• J.S. Bendat and A.G. Piersol, Random Data: Analysis and Measurement Procedures (a survey of data analysis, including spectral methods)
• L.H. Koopmans, The Spectral Analysis of Time Series (a comprehensive treatment, used for Charlie Eriksen's Data Analysis course)

Pages Maintained by Russ McDuff (mcduff@ocean.washington.edu)
Copyright (©) 1994-2001 Russell E. McDuff and G. Ross Heath
Content Last Modified 1/19/2001
{"url":"http://www2.ocean.washington.edu/oc540/lec01-12/","timestamp":"2014-04-21T09:38:57Z","content_type":null,"content_length":"20068","record_id":"<urn:uuid:3c05bcd8-ed5b-409d-845e-44238111ffd7>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Rockrimmon, CO Algebra 2 Tutor Find a Rockrimmon, CO Algebra 2 Tutor ...My favorite part of the teaching experience is getting to watch comprehension dawn on a student's face as they grasp a new concept. I look forward to the opportunity to teach students and help them to realize their full potential and find what they want to do with their education and their life.... 15 Subjects: including algebra 2, calculus, statistics, trumpet ...In order to solve these equations, one needs an extensive knowledge of differential equations. Having a Bachelor and a Master of Science degree in physics means that I am very familiar with mathematics at many different levels. Linear Algebra can be used to manipulate vectors in many different ways. 16 Subjects: including algebra 2, calculus, physics, statistics I graduated with a degree in engineering and have used math in my job for the last twenty years, so I already know all the mistakes that can be made. I have tutored my daughter in math all throughout her schooling. The result is that she has had straight A's from the first grade through high school in math. 7 Subjects: including algebra 2, geometry, biology, algebra 1 ...I use effective practices, strategies and resources to inspire and enrich each of my students' learning. I usually use different learning styles such as workbooks, visual aids, computers, hands-on lab activities, student-centered discussions, one-on-one tutoring and small... 39 Subjects: including algebra 2, calculus, physics, algebra 1 ...I graduated from the University of Southern California in 1996, and while I was there, I tutored my peers in business calculus, accounting and general education courses. After I graduated, I tutored students while working as a full-time staff member at the University. I also have been an amateur writer for many years.
57 Subjects: including algebra 2, chemistry, reading, writing Related Rockrimmon, CO Tutors Rockrimmon, CO Accounting Tutors Rockrimmon, CO ACT Tutors Rockrimmon, CO Algebra Tutors Rockrimmon, CO Algebra 2 Tutors Rockrimmon, CO Calculus Tutors Rockrimmon, CO Geometry Tutors Rockrimmon, CO Math Tutors Rockrimmon, CO Prealgebra Tutors Rockrimmon, CO Precalculus Tutors Rockrimmon, CO SAT Tutors Rockrimmon, CO SAT Math Tutors Rockrimmon, CO Science Tutors Rockrimmon, CO Statistics Tutors Rockrimmon, CO Trigonometry Tutors Nearby Cities With algebra 2 Tutor Buckskin Joe, CO algebra 2 Tutors Crystal Hills, CO algebra 2 Tutors Deckers, CO algebra 2 Tutors Edison, CO algebra 2 Tutors Elkton, CO algebra 2 Tutors Ellicott, CO algebra 2 Tutors Fair View, CO algebra 2 Tutors Goldfield, CO algebra 2 Tutors Ilse, CO algebra 2 Tutors Parkdale, CO algebra 2 Tutors Penitentiary, CO algebra 2 Tutors Stratmoor Hills, CO algebra 2 Tutors Tarryall, CO algebra 2 Tutors Truckton, CO algebra 2 Tutors Twin Rock, CO algebra 2 Tutors
{"url":"http://www.purplemath.com/Rockrimmon_CO_algebra_2_tutors.php","timestamp":"2014-04-17T01:22:57Z","content_type":null,"content_length":"24454","record_id":"<urn:uuid:d0da1e5b-8e0a-4f54-a646-79e13543c78c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
Descriptive Statistics

The Descriptive Statistics course introduces many essential topics of statistics. The students learn how to produce data through surveys, experiments, and observational studies. They then analyze the data through graphical and numerical summaries. The students next learn to model data, using basic probability and sampling distributions. And, time permitting, the students discover how to draw conclusions from data using confidence intervals and significance tests. The students learn when and how to use technology to aid in solving statistical problems. Furthermore, they produce convincing oral and written statistical arguments, using appropriate terminology. Most importantly, they become critical consumers of published statistical results by heightening their awareness of ways in which statistics can be improperly used to mislead, confuse, and distort the truth. Assessments will include group and individual projects, daily homework assignments, quizzes, and tests. Resources used throughout the course include textbooks, graphing calculators, Fathom, and additional websites, including statistical applets and Gallup.

Unit 1: Analyzing Data
Essential Questions: How can we display data? How can we describe the center and spread of data? What is the Normal Model?
Content: Tables and charts; histograms, stem plots, and box plots; median and IQR; mean and standard deviation; the Normal Model
Skills and Processes: Analyzing and creating charts and tables; analyzing and creating histograms, stem plots, and box plots; recognizing when it is appropriate to use mean and standard deviation, or median and IQR, to describe the center and spread; using the 68-95-99.7 Rule; finding z-scores

Unit 2: Linear Regression
Essential Questions: What features do we look for in a scatterplot? When is it appropriate to find correlation? How do we find correlation? How do we find the equation for linear regression? Why doesn't correlation imply causation?
Content: Scatterplots; the correlation coefficient and r-squared; the linear regression equation; residuals
Skills and Processes: Recognizing from the scatterplot when it is appropriate to use linear regression; using the calculator to find the correlation coefficient, r-squared, and the linear regression equation; calculating residuals

Unit 3: Collecting Data
Essential Questions: What are various types of sample surveys? How do sample surveys, experiments, and observational studies differ? What are the essential components of a good experiment?
Content: Randomness; simple random samples, stratified samples, and cluster samples; experiments; observational studies
Skills and Processes: Generating random numbers; conducting SRS, stratified, and cluster sampling; understanding the problems associated with censuses; understanding the main components of experimental design, i.e., control, randomization, replication, and blocking; understanding that only an experiment, rather than an observational study, can help to establish causation

Unit 4: Probability
Essential Questions: What are the basic rules of probability? What is a random variable? What is expected value? What are the geometric and binomial models? What is a sampling distribution?
Content: Addition and multiplication rules of probability; conditional probability; random variables; expected value; geometric and binomial models; sampling distributions
Skills and Processes: Solving probability problems involving the use of addition, multiplication, and conditional probability rules; using tables and tree diagrams to find probabilities; understanding what a random variable is; finding the expected value of random variables; using the geometric and binomial models to calculate probabilities by hand and using the calculator; understanding the basic principles of sampling distributions

Unit 5: Inference
Essential Questions: What is a confidence interval? What is a margin of error? How does one conduct an inference test for a proportion or a mean? What are Type I and Type II errors?
Content: Confidence intervals for means and proportions; margin of error; inference tests for means and proportions; Type I and Type II errors
Skills and Processes: Finding a confidence interval for means and proportions; calculating margin of error; conducting inference tests for means and proportions; understanding Type I and Type II errors and how to prevent them
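Two of the Unit 1 skills, finding z-scores and using the 68-95-99.7 Rule, can be checked numerically. A small Python sketch (illustrative only; the functions are not part of the curriculum):

```python
import math

def z_score(x, mean, sd):
    # How many standard deviations the observation x lies from the mean.
    return (x - mean) / sd

def prob_within(k):
    # P(|Z| < k) for a standard normal Z, via the error function:
    # P(|Z| < k) = erf(k / sqrt(2)).
    return math.erf(k / math.sqrt(2))

# The 68-95-99.7 Rule is these probabilities rounded to three figures:
# prob_within(1) ~ 0.683, prob_within(2) ~ 0.954, prob_within(3) ~ 0.997.
```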
{"url":"http://www.catlin.edu/upper-school/curriculum/course/7-probability-statistics","timestamp":"2014-04-21T15:07:35Z","content_type":null,"content_length":"26531","record_id":"<urn:uuid:7e41d2b7-1e8f-4111-8693-e1b23a9130fb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Intuitionists and excluded-middle
Jeremy Clark jeremy.clark at wanadoo.fr
Tue Oct 25 03:26:48 EDT 2005

On Oct 24, 2005, at 11:01 pm, Jesse Alama wrote:

> There must be a reference on the constructive/intuitionistic
> development of the absolute value function and its basic properties.
> Where might I find it?

Bishop and Bridges' "Constructive Analysis", for example. The development is not at all axiomatic, so e.g. the abs function is defined for constructive reals, and then from the definition it is easy to prove abs(x) > 0 => x neq 0. A constructive real is basically a Cauchy sequence of rational numbers, so it is easy to show that if x neq 0 then x > 0 or x < 0.

> PS Although intuitively your "neq" is stronger than "not =", this may
> be only an appearance. Constructivist query: is the trichotomy
> law for real numbers admitted?

Absolument pas! (Absolutely not!) So yes, the constructive neq is stronger than "not =". (Though they are classically equivalent, of course.)

More information about the FOM mailing list
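The point that a Cauchy-sequence real apart from zero has a decidable sign can be prototyped directly. A rough Python sketch (not from the thread; the representation and names are my own simplification of the Bishop-style setup):

```python
from fractions import Fraction

# A constructive real is modelled (roughly, after Bishop) as a function
# approx(n) returning a rational within 1/n of the real, for n >= 1.

def const(q):
    # The constructive real corresponding to a fixed rational q.
    q = Fraction(q)
    return lambda n: q

def apart_from_zero(x, max_n=10**6):
    # Search for a witness that x is apart from 0: an n with
    # |x(n)| > 2/n, which forces |x| > 1/n > 0. Returns the witness,
    # or None if the search gives up -- constructively we can only
    # fail to find a witness, not decide that x = 0.
    n = 1
    while n <= max_n:
        if abs(x(n)) > Fraction(2, n):
            return n
        n *= 2
    return None

def sign(x, witness):
    # Given a witness n with |x(n)| > 2/n, the approximation x(n) is
    # farther from 0 than its error bound 1/n, so its sign decides
    # x > 0 or x < 0.
    return 1 if x(witness) > 0 else -1
```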
{"url":"http://www.cs.nyu.edu/pipermail/fom/2005-October/009246.html","timestamp":"2014-04-18T06:15:56Z","content_type":null,"content_length":"3462","record_id":"<urn:uuid:5086072f-17c9-4abe-a24e-2f6cbb48c878>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Interactive Mathematics Activities

Subject: RE: Interactive Mathematics Activities
Author: Eric Schlytter
Date: Jun 30 2009

Here are some excellent new interactive math activities that engage students in learning. Many of these exciting activities have English narrations:

Pythagorean Theorem:
Volume of Prisms:
Volume of Cylinders:
Volume of Pyramids:
Volume of Cones:
Volume of Spheres:
Surface Area of Prisms:
Surface Area of Cylinders:
Surface Area of Pyramids:
Surface Area of Cones:
Surface Area of Spheres:
Graphing Real World Situations:

You may find many more interactive math activities on my classroom web site.
{"url":"http://mathforum.org/mathtools/discuss.html?context=all&do=r&msg=82791","timestamp":"2014-04-17T13:15:00Z","content_type":null,"content_length":"21256","record_id":"<urn:uuid:78ad59e2-4c3a-4ba5-be79-d4a4d99fcf3b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Removing datetime support for 1.4.x series ? Fernando Perez fperez.net@gmail.... Thu Feb 4 13:21:31 CST 2010 On Thu, Feb 4, 2010 at 2:37 AM, Travis Oliphant <oliphant@enthought.com> wrote: > Perhaps one way to articulate my perspective is the following: > There are currently 2 groups of NumPy users: > 1) those who have re-compiled all of their code for 1.4.0 > 2) those who haven't It may be useful to keep in mind one important aspect of the cascading dependency effect we are dealing with here; I could recompile *my* codes easily for numpy from svn (I used to do it routinely). But with an ABI break, there are a *ton* of packages now on my system that would break if I put a new numpy in my python path. An easy way to see this is to see how many system packages I'd have to remove if I removed numpy: sudo apt-get remove python-numpy python-numpy-dbg python-numpy-doc The following packages will be REMOVED: impressive keyjnote mayavi2 music-applet python-gnuplot python-matplotlib python-mdp python-mvpa python-mvpa-lib python-netcdf python-numpy python-numpy-dbg python-numpy-doc python-pyepl python-pygame python-pywt python-rpy python-scientific python-scientific-doc python-scipy python-sparse python-sparse-examples python-tables python-visual pyxplot sagemath 0 upgraded, 0 newly installed, 26 to remove and 0 not upgraded. After this operation, 341MB disk space will be freed. Basically this means that if I want to update numpy on my ubuntu 9.10 laptop, all of a sudden not only do I have to recompile things like my codes or scipy/matplotlib (which I'd expect), but I also have to rebuild 23 other system-installed packages which would probably otherwise be fine. For this reason, I've had to back off completely from using post-abi-break numpy, I simply can't afford the time to break and rebuild so much of my system. 
I know this is a messy and difficult situation, but I wanted to illustrate this aspect of the dependency problem because I haven't seen it mentioned so far in the discussion, and it's a fairly nasty one.

More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-February/048275.html","timestamp":"2014-04-16T05:44:31Z","content_type":null,"content_length":"4862","record_id":"<urn:uuid:cdeb94fc-1f30-469b-b06c-5c9c944891f8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - User Profile

User Profile for: jsarm_@_athforum.org
UserID: 189180
Name: Johann W. Sarmiento
Registered: 1/25/05
Total Posts: 5
{"url":"http://mathforum.org/kb/profile.jspa?userID=189180","timestamp":"2014-04-21T15:26:34Z","content_type":null,"content_length":"11269","record_id":"<urn:uuid:1adeaa2a-7548-428d-ab3b-83b334d5d3a5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from February 23, 2011 on The Unapologetic Mathematician Let’s look back at yesterday’s example of a manifold. Not only did we cover the entire sphere by open neighborhoods with homeomorphisms to open regions of the plane, we did so in a “compatible” way, which I’ll explain soon. This notion of compatible coordinates is key to making a lot of differential topology and geometry work out right. For the moment, though, let’s introduce a useful term. A “coordinate patch” on a manifold $M$ is an open subset $U\subseteq M$ together with a map $\phi:U\to\mathbb{R}^n$ that is a homeomorphism between $U$ and its image $\phi(U)$. Armed with this definition, we might say that a manifold is a topological space where every point can be contained in some coordinate patch. The only subtle point here is that this definition would put too much emphasis on the patches rather than on the local topology of the manifold itself. The useful thing about a coordinate patch is that it lets us pull back coordinates from $\mathbb{R}^n$ to our manifold, or at least to the open set $U$. Let’s say $p\in U$ is sent to the point $\phi (p)\in\mathbb{R}^n$. We can now use the coordinate functions $x^i:\mathbb{R}^n\to\mathbb{R}$ to read off coordinates. In fact, when working in a particular coordinate patch, we will often abuse the notation and simply write $\displaystyle x^i(p)=x^i(\phi(p))$ Of course, when we write $x^i(p)$ the actual number we get out for the $i$th component depends immensely on our coordinate homeomorphism $\phi$, and yet we’ve made no mention of it in our notation! This is one of the most confusing things about doing differential geometry and topology calculations involving coordinates, and it’s important to keep it in mind. Let’s be a little more explicit about our example from last time. The two-dimensional sphere consists of all the points in $\mathbb{R}^3$ of unit length. 
If we pick an orthonormal basis for $\mathbb{R}^3$ and write the coordinates with respect to this basis as $x$, $y$, and $z$, then we're considering all triples $(x,y,z)$ with $x^2+y^2+z^2=1$. We want to show that this set is a manifold. We know that we can't hope to map the whole sphere into a plane, so we have to take some points out. Specifically, let's remove those points with $z\leq0$, just leaving one open hemisphere. We will map this hemisphere homeomorphically to an open region in $\mathbb{R}^2$. But this is easy: just forget the $z$-component! Sending the point $(x,y,z)$ down to the point $(x,y)$ is clearly a continuous map from the open hemisphere to the open disk with $x^2+y^2<1$. Further, for any point $(x,y)$ in the open disk, there is a unique $z>0$ with $x^2+y^2+z^2=1$. Indeed, we can write down $z=\sqrt{1-x^2-y^2}$. This inverse is also continuous, and so our map is indeed a homeomorphism. Similarly we can handle all the points in the lower hemisphere $z<0$. Again we send $(x,y,z)$ to $(x,y)$, but this time for any $(x,y)$ in the open unit disk (satisfying $x^2+y^2<1$) we can write $z=-\sqrt{1-x^2-y^2}$, which is also continuous, so this map is again a homeomorphism. Are we done? No, since we haven't taken care of the points with $z=0$. But in these cases we can treat the other coordinates similarly: if $y>0$ we have our inverse pair $(x,z)\mapsto\left(x,\sqrt{1-x^2-z^2},z\right)$, while if $y<0$ we have $(x,z)\mapsto\left(x,-\sqrt{1-x^2-z^2},z\right)$. Similarly if $x>0$ we have $(y,z)\mapsto\left(\sqrt{1-y^2-z^2},y,z\right)$, while if $x<0$ we have $(y,z)\mapsto\left(-\sqrt{1-y^2-z^2},y,z\right)$. Now are we done? Yes, since every point on the sphere must have at least one coordinate different from zero, every point must fall into one of these six cases. Thus every point has some neighborhood which is homeomorphic to an open region in $\mathbb{R}^2$. This same approach can be generalized to any number of dimensions. The $n$-dimensional sphere consists of those points in $\mathbb{R}^{n+1}$ with unit length. It can be covered by $2(n+1)$ open hemispheres, each with a projection just like the ones above.
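The hemisphere charts are concrete enough to sanity-check numerically. A small Python sketch (not from the post; the chart names are mine), verifying that the projection and its inverse round-trip on the upper hemisphere:

```python
import math
import random

def chart_z_pos(p):
    # The projection chart on the open hemisphere z > 0: forget z.
    x, y, z = p
    return (x, y)

def inverse_z_pos(q):
    # Its inverse on the open unit disk: z = sqrt(1 - x^2 - y^2).
    x, y = q
    return (x, y, math.sqrt(1 - x * x - y * y))

random.seed(1)
ok = True
for _ in range(1000):
    # Sample a point of the open upper hemisphere of the unit sphere.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y < 1:
            break
    p = (x, y, math.sqrt(1 - x * x - y * y))
    q = inverse_z_pos(chart_z_pos(p))
    ok = ok and all(abs(a - b) < 1e-12 for a, b in zip(p, q))
```

The other five charts work the same way, with the forgotten coordinate and the sign of the square root changed.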
{"url":"http://unapologetic.wordpress.com/2011/02/23/","timestamp":"2014-04-17T12:42:32Z","content_type":null,"content_length":"49532","record_id":"<urn:uuid:869c2afa-079b-4be1-b282-b604cdb44030>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

Hey all, amazing forum this is! My name is Mauli.

A number X and a sequence of numbers {Y_n : n in N} are i.i.d. draws from the uniform distribution on [0,1]. Let N = inf{n in N : Y_n > X}. The player conducting these draws receives a prize of (N - 1). Calculate the expected value of the prize. (Note that the N in "n in N" denotes the natural numbers.)
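Not part of the original post, but the conditional structure of the problem is easy to check by simulation: given X = x, each draw exceeds x with probability 1 - x, so N is geometric with mean 1/(1 - x). A quick Python sketch (names are mine):

```python
import random

def draw_N(x, rng=random):
    # N = the first index n (1-based) with Y_n > x, Y_n ~ Uniform[0, 1].
    # Conditional on X = x, N is geometric with success probability 1 - x.
    n = 1
    while rng.random() <= x:
        n += 1
    return n

random.seed(42)
# At x = 0.5 the conditional mean is E[N | X = 0.5] = 1/(1 - 0.5) = 2,
# so the conditional expected prize E[N - 1 | X = 0.5] is 1.
samples = [draw_N(0.5) for _ in range(200_000)]
mean_N = sum(samples) / len(samples)
# Averaging over X: E[N - 1] is the integral over [0, 1] of x/(1 - x) dx,
# which diverges, so the unconditional expected prize is infinite.
```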
{"url":"http://mathhelpforum.com/new-users/214289-hello.html","timestamp":"2014-04-18T23:54:20Z","content_type":null,"content_length":"25994","record_id":"<urn:uuid:76309446-28a8-40c7-8b74-628909e1b34c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this result of Spain correct?

Let us have a look at the proof of Theorem 2 in [P. G. Spain, Boolean algebras of projections, Proceedings of the Edinburgh Mathematical Society (Series 2) 19, 03, March 1975, 287-289].

The author claims in the proof of Theorem 2 that if $B$ is a (Bade) complete Boolean algebra of projections on a Banach space which is a relatively weakly compact subset of the algebra of operators, then the weak (WOT?) closure of the algebra generated by $B$ is a W*-algebra.

I don't think the proof is OK. Take $p\neq 2$, $p\in (1,\infty)$ and consider $X=\ell_p$ with its canonical basis, which is 1-unconditional. The family $B$ of basis projections forms a complete Boolean algebra (actually isomorphic to $\wp(\omega)$) and is relatively weakly compact because all the projections have norm one and the unit ball of the space of operators on a reflexive space is WOT-compact. Though, $\overline{\mbox{alg }B}^{WOT}=B(\ell_p)$, which is not a von Neumann algebra (it is not even Banach-space isomorphic to a von Neumann algebra).

My question is (before it's closed by the math police): Is Theorem 2 correct?

Tags: banach-spaces, oa.operator-algebras, von-neumann-algebras

Comments:
- Not sure this is a question - voting to close. – Daniel Moskovich, Feb 18 '13 at 13:49
- If this turns into a protracted tug-of-war then of course the discussion should be moved to meta. But, for the record, it seems to me that this is a real question. If anything, the OP suffers from having thought too productively about it, by producing a (putative) counterexample. Without that, Spain's claim could just be rephrased as a question. – HJRW, Feb 18 '13 at 15:19
- Why not email Philip Spain first? His email is available on his homepage: maths.gla.ac.uk/~pgs – Dmitri Pavlov, Feb 18 '13 at 15:42
- I find mention of the "math police" tiresome and unhelpful. – Yemon Choi, Feb 18 '13 at 19:45
- Also, having had a quick look at Theorem 2, I can't see where there might be a mistake. So I would respectfully venture that the answer to your question is YES. – Yemon Choi, Feb 18 '13 at 19:51

Answer:
I haven't looked at the paper but your counterexample is mistaken. The basis projections generate not $B(l^p)$ but the algebra of multiplication operators, which is isometrically isomorphic to $l^\infty$ and hence is a von Neumann algebra.
{"url":"http://mathoverflow.net/questions/122159/is-this-result-of-spain-correct","timestamp":"2014-04-18T15:53:07Z","content_type":null,"content_length":"57960","record_id":"<urn:uuid:aa555266-af25-46c0-b2cd-2525ac650e03>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Name

    SGIS_texture_filter4

Name Strings

    GL_SGIS_texture_filter4

Version

    $Date: 1997/03/24 18:56:21 $ $Revision: 1.9 $

Number

    7

Dependencies

    None

Overview

    This extension allows 1D and 2D textures to be filtered using an
    application-defined, four sample per dimension filter. (In addition to
    the NEAREST and LINEAR filters defined in the original GL
    Specification.) Such filtering results in higher image quality. It is
    defined only for non-mipmapped filters. The filter that is specified
    must be symmetric and separable (in the 2D case).

Issues

    * What should the default filter function be?

      - Implementation dependent.

    * Should this extension define 2-wide texture borders? Do we really
      want to aggrandize this border stuff :-(

      - No.

    * Should this extension define 2D filtering only (and not 1D)?

      - No.

    * A GLU function that accepts Mitchell parameters and a texture target
      should be defined.

    * This specification retains a separate filter function description
      with every texture. In conjunction with EXT_texture_object, this may
      result in a lot of filter functions. Implementations should optimize
      for the default filter function to save storage cost.

Reasoning

    * The name is changed from "cubic" to "filter4" because the table
      allows the specification of filters that are not at all cubic in
      nature. A true cubic filter extension would define the filter
      function as a cubic polynomial.

New Procedures and Functions

    void TexFilterFuncSGIS(enum target,
                           enum filter,
                           sizei n,
                           const float* weights);

    void GetTexFilterFuncSGIS(enum target,
                              enum filter,
                              float* weights);

New Tokens

    Accepted by the <param> parameter of TexParameteri and TexParameterf,
    and by the <params> parameter of TexParameteriv and TexParameterfv,
    when their <pname> parameter is TEXTURE_MIN_FILTER or
    TEXTURE_MAG_FILTER.
    Also accepted by the <filter> parameters of TexFilterFuncSGIS and
    GetTexFilterFuncSGIS:

        FILTER4_SGIS

    Accepted by the <pname> parameter of GetTexParameteriv and
    GetTexParameterfv, when the <target> parameter is TEXTURE_1D or
    TEXTURE_2D:

        TEXTURE_FILTER4_SIZE_SGIS

Additions to Chapter 2 of the GL Specification (OpenGL Operation)

    None

Additions to Chapter 3 of the GL Specification (Rasterization)

    The additional token value FILTER4_SGIS is accepted as an enumerated
    value for the texture minification and magnification filters, causing
    Table 3.7 to be replaced with the table below:

    Name                    Type        Legal Values
    ----                    ----        ------------
    TEXTURE_WRAP_S          integer     CLAMP, REPEAT
    TEXTURE_WRAP_T          integer     CLAMP, REPEAT
    TEXTURE_WRAP_R_EXT      integer     CLAMP, REPEAT
    TEXTURE_MIN_FILTER      integer     NEAREST, LINEAR,
                                        NEAREST_MIPMAP_NEAREST,
                                        NEAREST_MIPMAP_LINEAR,
                                        LINEAR_MIPMAP_NEAREST,
                                        LINEAR_MIPMAP_LINEAR,
                                        FILTER4_SGIS
    TEXTURE_MAG_FILTER      integer     NEAREST, LINEAR, FILTER4_SGIS
    TEXTURE_BORDER_COLOR    4 floats    any 4 values in [0,1]

    Table 3.7: Texture parameters and their values.

    Filter4 filtering is specified by calling TexParameteri, TexParameterf,
    TexParameteriv, or TexParameterfv with <pname> set to one of
    TEXTURE_MIN_FILTER or TEXTURE_MAG_FILTER, and <param> or <params> set
    to FILTER4_SGIS. Because filter4 filtering is defined only for
    non-mipmapped textures, there is no difference between its definition
    for minification and magnification.

    First consider the 1-dimensional case. Let T be a computed texture
    value (one of R_t, G_t, B_t, or A_t). Let T[i] be the component value
    of the texel at location i in a 1-dimensional texture image.
    Then, if the appropriate texture filter mode is FILTER4_SGIS, a 4-texel
    group is selected:

        i1 = / floor(u - 1/2) mod 2**n,   TEXTURE_WRAP_S is REPEAT
             \ floor(u - 1/2),            TEXTURE_WRAP_S is CLAMP

        i2 = / (i1 + 1) mod 2**n,         TEXTURE_WRAP_S is REPEAT
             \ i1 + 1,                    TEXTURE_WRAP_S is CLAMP

        i3 = / (i1 + 2) mod 2**n,         TEXTURE_WRAP_S is REPEAT
             \ i1 + 2,                    TEXTURE_WRAP_S is CLAMP

        i0 = / (i1 - 1) mod 2**n,         TEXTURE_WRAP_S is REPEAT
             \ i1 - 1,                    TEXTURE_WRAP_S is CLAMP

    Let f(x) be the filter weight function of positive distance x. Let
    A = frac(u - 1/2), where frac(x) denotes the fractional part of x, and
    u is the texture image coordinate in the s direction, as illustrated in
    Figure 3.10 of the GL Specification. Then the texture value T is found
    as

        T = f(1+A) * T[i0]
          + f(A)   * T[i1]
          + f(1-A) * T[i2]
          + f(2-A) * T[i3]

    If any of the selected T[i] in the above equation refer to a border
    texel with unspecified value, then the border color given by the
    current setting of TEXTURE_BORDER_COLOR is used instead of the
    unspecified value.

    For 2-dimensional textures the calculations for i0, i1, i2, i3, and A
    are identical to the 1-dimensional case. A 16-texel group is selected,
    requiring four j values computed as

        j1 = / floor(v - 1/2) mod 2**m,   TEXTURE_WRAP_T is REPEAT
             \ floor(v - 1/2),            TEXTURE_WRAP_T is CLAMP

        j2 = / (j1 + 1) mod 2**m,         TEXTURE_WRAP_T is REPEAT
             \ j1 + 1,                    TEXTURE_WRAP_T is CLAMP

        j3 = / (j1 + 2) mod 2**m,         TEXTURE_WRAP_T is REPEAT
             \ j1 + 2,                    TEXTURE_WRAP_T is CLAMP

        j0 = / (j1 - 1) mod 2**m,         TEXTURE_WRAP_T is REPEAT
             \ j1 - 1,                    TEXTURE_WRAP_T is CLAMP

    Let B = frac(v - 1/2), where v is the texture image coordinate in the t
    direction, as illustrated in Figure 3.10 of the GL Specification.
    Then the texture value T is found as

        T = f(1+A) * f(1+B) * T[i0,j0] + f(A) * f(1+B) * T[i1,j0]
          + f(1-A) * f(1+B) * T[i2,j0] + f(2-A) * f(1+B) * T[i3,j0]
          + f(1+A) * f(B)   * T[i0,j1] + f(A) * f(B)   * T[i1,j1]
          + f(1-A) * f(B)   * T[i2,j1] + f(2-A) * f(B)   * T[i3,j1]
          + f(1+A) * f(1-B) * T[i0,j2] + f(A) * f(1-B) * T[i1,j2]
          + f(1-A) * f(1-B) * T[i2,j2] + f(2-A) * f(1-B) * T[i3,j2]
          + f(1+A) * f(2-B) * T[i0,j3] + f(A) * f(2-B) * T[i1,j3]
          + f(1-A) * f(2-B) * T[i2,j3] + f(2-A) * f(2-B) * T[i3,j3]

    If any of the selected T[i,j] in the above equation refer to a border
    texel with unspecified value, then the border color given by the
    current setting of TEXTURE_BORDER_COLOR is used instead of the
    unspecified value.

    Filter4 texture filtering is currently undefined for 3-dimensional
    textures.

    Filter function
    ---------------

    The default filter function is implementation dependent. The filter
    function is specified in table format by calling TexFilterFuncSGIS with
    <target> set to TEXTURE_1D or TEXTURE_2D, <filter> set to FILTER4_SGIS,
    and <weights> pointing to an array of <n> floating point values. The
    value <n> must equal 2**m + 1 for some nonnegative integer value of m.
    The array contains samples of the filter function f(x), 0 <= x <= 2.
    Each element <weights>[i] is the value of f((2*i)/(<n>-1)),
    0 <= i <= <n>-1.

    The filter function is stored and used by GL as a set of samples
    f((2*i)/(Size-1)), 0 <= i <= Size-1, where Size is an implementation
    dependent constant. If <n> equals Size, the array <weights> is stored
    directly in GL state. Otherwise, an implementation dependent resampling
    method is used to compute the stored samples. Size must equal 2**m + 1
    for some integer value of m greater than or equal to 4. The value Size
    for texture <target> is returned when GetTexParameteriv or
    GetTexParameterfv is called with <pname> set to
    TEXTURE_FILTER4_SIZE_SGIS.

    Minification vs. Magnification
    ------------------------------

    If the magnification filter is given by FILTER4_SGIS, and the
    minification filter is given by NEAREST_MIPMAP_NEAREST or
    LINEAR_MIPMAP_NEAREST, then c = 0.5.
    The parameter c is used to determine whether minification or
    magnification filtering is done, as described in Section 3.8.2 of the
    GL Specification (Texture Magnification).

Additions to Chapter 4 of the GL Specification (Per-Fragment Operations
and the Framebuffer)

    None

Additions to Chapter 5 of the GL Specification (Special Functions)

    GetTexFilterFuncSGIS is not included in display lists.

Additions to Chapter 6 of the GL Specification (State and State Requests)

    The filter weights for the filter4 filter associated with a texture are
    queried by calling GetTexFilterFuncSGIS with <target> set to the
    texture target and <filter> set to FILTER4_SGIS. The Size weight values
    are returned in the array <weights>, which must have at least Size
    elements. The value Size is an implementation dependent constant that
    is queried by the application by calling GetTexParameteriv or
    GetTexParameterfv as described above.

Additions to the GLX Specification

    None

GLX Protocol

    Two new GL commands are added.

    The following rendering command is sent to the server as part of a
    glXRender request:

        TexFilterFuncSGIS
            2           16+4*n          rendering command length
            2           2064            rendering command opcode
            4           ENUM            target
            4           ENUM            filter
            4           INT32           n
            4*n         LISTofFLOAT32   weights

    The remaining command is a non-rendering command and as such, is sent
    separately (i.e., not as part of a glXRender or glXRenderLarge
    request), using the glXVendorPrivateWithReply request:

        GetTexFilterFuncSGIS
            1           CARD8           opcode (X assigned)
            1           17              GLX opcode (glXVendorPrivateWithReply)
            2           5               request length
            4           4101            vendor specific opcode
            4           GLX_CONTEXT_TAG context tag
            4           ENUM            target
            4           ENUM            filter
          =>
            1           1               reply
            1                           unused
            2           CARD16          sequence number
            4           m               reply length, m = (n==1 ? 0 : n)
            4                           unused
            4           CARD32          n

            if (n=1) this follows:

            4           FLOAT32         params
            12                          unused

            otherwise this follows:

            16                          unused
            n*4         LISTofFLOAT32   params

    Note that n may be zero, indicating that a GL error occurred.

Errors

    INVALID_ENUM is generated if TexFilterFuncSGIS or GetTexFilterFuncSGIS
    <target> parameter is not TEXTURE_1D or TEXTURE_2D.
    INVALID_ENUM is generated if TexFilterFuncSGIS or GetTexFilterFuncSGIS
    <filter> parameter is not FILTER4_SGIS.

    INVALID_VALUE is generated if TexFilterFuncSGIS <n> parameter does not
    equal 2**m + 1 for some nonnegative integer value of m.

    INVALID_ENUM is generated if GetTexParameteriv or GetTexParameterfv
    <pname> parameter is TEXTURE_FILTER4_SIZE_SGIS and <target> parameter
    is not TEXTURE_1D or TEXTURE_2D.

    INVALID_OPERATION is generated if TexFilterFuncSGIS or
    GetTexFilterFuncSGIS is executed between execution of Begin and the
    corresponding execution of End.

New State

    Get Value                   Get Command             Type           Value      Attrib
    ---------                   -----------             ----           -----      ------
    TEXTURE_FILTER4_FUNC_SGIS   GetTexFilterFuncSGIS    2 x Size x R   see text   texture

New Implementation Dependent State

                                                        Minimum
    Get Value                   Get Command       Type  Value
    ---------                   -----------       ----  -------
    TEXTURE_FILTER4_SIZE_SGIS   GetTexParameterfv R     17
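The 1-D FILTER4 lookup defined above is easy to prototype on the CPU. A rough Python sketch (not part of the spec; REPEAT wrapping only, with a nearest-sample table lookup standing in for the implementation dependent resampling of the weights table):

```python
import math

def f(weights, x):
    # Sample the tabulated filter function at distance x, 0 <= x <= 2.
    # weights plays the role of the table passed to TexFilterFuncSGIS:
    # weights[i] = f((2*i)/(n-1)).
    n = len(weights)
    i = min(n - 1, round(x * (n - 1) / 2))
    return weights[i]

def filter4_1d(texels, u, weights):
    # 1-D FILTER4 lookup with TEXTURE_WRAP_S = REPEAT.
    size = len(texels)                       # 2**n texels
    i1 = int(math.floor(u - 0.5)) % size
    i0, i2, i3 = (i1 - 1) % size, (i1 + 1) % size, (i1 + 2) % size
    a = (u - 0.5) - math.floor(u - 0.5)      # A = frac(u - 1/2)
    return (f(weights, 1 + a) * texels[i0] + f(weights, a) * texels[i1] +
            f(weights, 1 - a) * texels[i2] + f(weights, 2 - a) * texels[i3])
```

With weights tabulating the tent function f(x) = max(0, 1 - x), the outer two taps vanish and the result reduces to ordinary LINEAR interpolation, which is a convenient sanity check.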
{"url":"https://www.opengl.org/registry/specs/SGIS/texture_filter4.txt","timestamp":"2014-04-18T13:32:01Z","content_type":null,"content_length":"12357","record_id":"<urn:uuid:955f3ed3-ebc0-4563-b897-c6b5b3444361>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
The Magnitude Scale

Study the star field photograph shown below. It shows a region of the sky around the constellation Crux, commonly called the Southern Cross. Which star is brightest?

If you answered Alpha (α) Centauri, the star at the bottom left of the photograph, you are right. As you can see, a photographic image such as this shows many more stars than you can see with your unaided eye. Nonetheless, some stars are more prominent than others. In choosing α Centauri you made some assumptions. What were they?

A photograph such as this shows bright stars as larger disks than fainter stars. Does this mean that these stars are physically larger than the fainter stars in the photo? Remember, in the section on astrometry we learnt that all stars other than our Sun are so distant that they are effectively point sources. Why then do some appear brighter (to our eyes) or larger (in photographs) than others? What, in fact, is brightness and how can we measure it? The answers to these questions form the focus of this section.

Historical Background

(Image credit: Reproduced by permission of the Whipple Library, University of Cambridge.)

The concept of measuring and comparing the brightness of stars can be traced back to the Greek astronomer and mathematician Hipparchus (190-120 BC). One of the greatest astronomers of antiquity, he is credited with producing a catalogue of 850 stars with positions and comparative brightnesses. In his system, the brightest stars were assigned a magnitude of 1, the next brightest magnitude 2, and so on to the faintest stars, just visible to the unaided eye, which were magnitude 6. This six-point scale can be thought of as a ranking: first-rate stars, the brightest, were first magnitude, and dim low-rate stars were sixth magnitude. The discovery of fainter stars with telescopes in the early 1600s required the scale to be extended beyond magnitude 6.
The development of visual photometers, instruments to measure stellar intensities, in the nineteenth century by John Herschel and others prompted the need for astronomers to adopt an international standard. The fact that eyes detect differences in intensity logarithmically rather than linearly was discovered in the 1830s. In 1856 Norman Pogson proposed that a star of magnitude 1 was 100 × brighter than a star of magnitude 6. A difference of one magnitude was therefore equal to the fifth root of 100, that is, 2.512 times in brightness.

Apparent Magnitude, m

The apparent magnitude, m, of a star is the magnitude it has as seen by an observer on Earth. A very bright object, such as the Sun or the Moon, can have a negative apparent magnitude. Even though Hipparchus originally assigned the brightest stars a magnitude of 1, more careful comparison shows that the brightest star in the night sky, Sirius or α Canis Majoris (CMa), actually has an apparent magnitude of m = -1.44. With the recalibration of Hipparchus' original values, the bright star Vega is now defined to have an apparent magnitude of 0.0. Following the telescopic discovery of faint stars in the early 1600s, the magnitude scale has also had to be extended to objects fainter than magnitude 6. The table below shows the range of apparent magnitudes for celestial objects.

Object                                                  Apparent magnitude
Sun                                                     -26.5
Full Moon                                               -12.5
Venus                                                   -4.3
Mars or Jupiter                                         -2
Sirius (α CMa)                                          -1.44
Vega (α Lyr)                                            0.0
Alnair (α Gru)                                          1.73
Naked-eye limit                                         6.5
Binocular limit                                         10
Proxima Cen                                             11.09
Visual limit through 20 cm telescope                    14
QSO at redshift z = 2                                   ≈ 20
Cepheid in galaxy M100 observed with HST                26
Galaxy at z = 6 observed with Gemini 8.1 m telescope    28
Limit for James Webb Space Telescope                    ≥ 30

If a star of magnitude 1 is 2.512 × brighter than a star of magnitude 2 and 100 × brighter than a sixth-magnitude star, how much brighter is it than a star of magnitude 3? You need to be careful here. It is not simply 2 × 2.512 different.
You need to remember that a difference of one magnitude equals the fifth root of 100, that is, 2.512. A difference of 2 magnitudes is therefore 2.512^2 = 6.31 × difference in brightness.

Two objects of different magnitudes therefore vary in brightness by 2.512 raised to the power of the magnitude difference. If we write this as an equation, the ratio of brightness or intensity, I[A]/I[B], between two objects, A and B, with magnitudes m[A] and m[B], is given by the following equation:

I[A]/I[B] = 2.512^(m[B] - m[A]) (4.1)

Let us have a look at an example.

Example 1: Comparing two stars.

How much brighter is Alnair, apparent magnitude +1.73, than Proxima Cen with a magnitude of 11.09?

Using equation 4.1 we have:

I[Alnair]/I[Prox] = 2.512^(m[Prox] - m[Alnair])

so, substituting in:

I[Alnair]/I[Prox] = 2.512^(11.09 - 1.73)
I[Alnair]/I[Prox] = 2.512^9.36
I[Alnair]/I[Prox] = 5,549

∴ Alnair is about 5,550 × brighter than Proxima Cen.

Example 2: How much brighter is the Sun than the full Moon?

For this we recall from the table above that the Sun has an apparent magnitude of -26.5 and the full Moon, -12.5. So using equation 4.1 we get:

I[Sun]/I[Moon] = 2.512^(m[M] - m[S])

substituting in gives us:

I[Sun]/I[Moon] = 2.512^(-12.5 - (-26.5))
I[Sun]/I[Moon] = 2.512^14.0
I[Sun]/I[Moon] = 398,359

∴ the Sun is about 400,000 × brighter than the full Moon.

It is important to remember that magnitude is simply a number; it does not have any units. The symbol for apparent magnitude is a lower-case m; you must make this clear in any problem.

Absolute Magnitude, M

What does the fact that Sirius has an apparent magnitude of -1.44 and Betelgeuse an apparent magnitude of 0.45 tell us about these two stars? Another way of thinking about this is to ask: why is Sirius the brightest star in the night sky? A star may appear bright for two main reasons:

1. It may be intrinsically luminous, that is, it may be a powerful emitter of electromagnetic radiation, or
2. It may be very close to us,

or both.
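The ratio calculations in Examples 1 and 2 can be checked with a few lines of Python (a sketch, not part of the tutorial; the function name is mine):

```python
def brightness_ratio(m_a, m_b):
    # Equation 4.1: I_A / I_B = 2.512 ** (m_B - m_A).
    # Object A is the brighter one when m_A < m_B.
    return 2.512 ** (m_b - m_a)

# Example 1: Alnair (m = 1.73) vs Proxima Cen (m = 11.09) gives about 5,549.
# Example 2: Sun (m = -26.5) vs full Moon (m = -12.5) gives about 398,359.
```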
The apparent magnitude of a star therefore depends partly on its distance from us. In fact Sirius appears brighter than Betelgeuse precisely because Sirius is very close to us, only 2.6 pc away, whereas Betelgeuse is about 160 pc distant. The realisation that stars do not all have much the same luminosity meant that apparent magnitude alone was not sufficient to compare stars. A new system that would allow astronomers to directly compare stars was developed. This system is called the absolute magnitude, M.

The absolute magnitude, M, of a star is the magnitude that star would have if it were at a distance of 10 parsecs from us.

A distance of 10 pc is purely arbitrary but now internationally agreed upon by astronomers. The scale for absolute magnitude is the same as that for apparent magnitude, that is a difference of 1 magnitude = 2.512 times difference in brightness. This logarithmic scale is also open-ended and unitless. Again, the lower or more negative the value of M, the brighter the star is. Absolute magnitude is a convenient way of expressing the luminosity of a star. Once the absolute magnitude of a star is known you can also compare it to other stars. Betelgeuse, M = -5.14, is intrinsically more luminous than Sirius with an M = 1.41. Our Sun has an absolute visual magnitude of 4.8.

Finding the Distance to Stars - Distance Modulus

As you may recall from the section on astrometry, most stars are too distant to have their parallax measured directly. Nonetheless if you know both the apparent and absolute magnitudes for a star you can determine its distance. Let us look again at Sirius and Betelgeuse plus another star called GJ 75.

Star         Apparent magnitude (m)   Absolute magnitude (M)   Distance modulus (m - M)
Sirius       -1.44                    1.41                     -2.85
Betelgeuse   0.45                     -5.14                    5.59
GJ 75        5.63                     5.63                     0.00

How far away is GJ 75? It is an unusual star in that its apparent and absolute magnitudes are the same. Why?
The reason is that it is actually 10 parsecs distant from us, so by definition its two magnitudes must be the same. What about Sirius? Its apparent magnitude is lower (therefore brighter) than its absolute magnitude. This means that it is closer than 10 parsecs to us. Betelgeuse's apparent magnitude is higher (therefore dimmer) than its absolute magnitude so it would appear even brighter in the night sky if it were only 10 parsecs distant. Is there a quick way of checking whether a star is close or not? Looking at the above table we see that if a star is at a distance of 10 parsecs, then m = M or m - M = 0. For Sirius, m - M = (-1.44) - 1.41 = -2.85. This value is negative and Sirius is closer than 10 pc. For Betelgeuse, m - M = 0.45 - (-5.14) = 5.59. This value is positive and Betelgeuse is more than 10 pc distant. Astronomers use the difference between apparent and absolute magnitude, the distance modulus, as a way of determining the distance to a star.

• Distance Modulus = m - M.
• Distance modulus is negative for stars closer than 10 parsecs.
• Distance modulus is positive for stars further away than 10 parsecs.
• The size of the distance modulus determines the actual value of the distance, so that a star of distance modulus 1.5 is closer than one with a distance modulus of 8.7.

Magnitude/Distance Calculations

The distance modulus can be used to determine the distance to a star using the equation:

m - M = 5 log(d/10)    (4.2)

where d is in parsecs. Note that if d = 10 pc then m and M are the same. (A formal derivation of this equation is given in the next page on luminosity.) The NSW HSC Syllabus Formula Sheet provides the equation as:

M = m - 5 log(d/10)    (4.3)

but this is simply a reworking of equation 4.2. You should be comfortable in solving this equation given any two of the three variables. Let us now look at how you can solve some examples.

Example 3: Given m and d, need to find M.
β Crucis (or Mimosa) has an apparent magnitude of 1.25 and is 108 parsecs distant. What is its absolute magnitude? Using equation 4.3 we have:

M = m - 5 log(d/10)

so:

M = 1.25 - 5 log(108/10)
M = 1.25 - 5 log(10.8)
M = 1.25 - 5 × 1.0334
M = 1.25 - 5.1671

therefore:

M = -3.92

so β Crucis has an absolute magnitude of -3.92. Note this calculation has shown full working so that each step is explicit. (Remember in solving magnitude equations log refers to logarithms to base 10 and not natural logarithms or ln.)

Example 4: Given m and M, find d. Betelgeuse has an apparent magnitude of 0.45 and an absolute magnitude of -5.14. How far away is it? This problem requires us to rewrite equation 4.2 to give us d as the unknown. This is shown below:

M = m - 5 log(d/10)

so:

5 log(d/10) = m - M
log(d/10) = (m - M)/5
d/10 = 10^((m - M)/5)
d = 10 × 10^((m - M)/5)

which can be written as:

d = 10^((m - M + 5)/5)

now substituting in:

d = 10^((0.45 - (-5.14) + 5)/5)
d = 10^(10.59/5)
d = 10^2.118
d = 131 parsecs

so Betelgeuse is about 130 pc distant. Again, this example shows complete working whereas in reality you may not show every step. It is important, however, that you set out your working for such problems clearly so you can check your algebraic manipulation and your substitutions. Working with logs and indices can be tricky so ensure you know how to do these on your calculator.

Example 5: Given M and d, find m. In practice this type of problem is less realistic for actual objects as we can normally measure their apparent magnitudes directly; however, it may be that we wish to calculate what apparent magnitude a class or type of object may have given the other parameters. Again, starting with equation 4.3, let us determine how bright a supergiant such as Deneb with an absolute magnitude of -8.73 would appear if it were 230 parsecs away.
M = m - 5 log(d/10)
-m = -M - 5 log(d/10)
m = M + 5 log(d/10)

so substituting in:

m = -8.73 + 5 log(230/10)
m = -8.73 + 5 log(23)
m = -8.73 + 5 × 1.3617
m = -8.73 + 6.809
m = -1.92

so Deneb would have an apparent magnitude of -1.92. This would make it brighter in our night sky than Sirius (m = -1.44). In reality Deneb is about 990 pc distant, although this value has a large uncertainty.

Example 6: What if d is not given but parallax, p, is given? This is actually very straightforward. Recall from the section on astrometry that there is a direct relationship between distance and parallax:

d = 1/p

so you simply need to insert this into equation 4.2 or 4.3.

Naming & Identifying Stars

Let us now revisit that photo of Crux and the Pointers from the top of this page. The photo below shows the same region with the prominent stars labelled. They also have their apparent magnitudes shown. Crux is a constellation, one of 88 regions that the celestial sphere has been broken up into and agreed upon internationally by astronomers. Crux is actually the smallest of the constellations and is easily identified in the southern skies. The prominent nearby stars commonly called the Pointers are actually part of a large constellation called Centaurus. Credit: Adapted from a photo by M. Bessell

Now you may notice that the stars are named using letters from the Greek alphabet; α, β, γ, δ and ε (alpha, beta, gamma, delta, epsilon being the first five) followed by the standard three-letter abbreviation for each constellation (Cru for Crucis or Crux and Cen for Centaurus). If you look closely at the apparent magnitudes for the five named stars in Crux you will see that the brightest star is labelled α, the next β and so on. This system is called the Bayer system, after Johann Bayer who introduced it in 1603. The brightest star in a constellation is assigned the letter α, the next β and so on. An exception to this rule is α Orionis or Betelgeuse. It is in fact fainter than β Ori, Rigel, by a small amount.
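The magnitude–distance relations used in Examples 3 to 6 above are straightforward to check numerically. Here is a minimal Python sketch; the function names are our own choices, and math.log10 is the base-10 logarithm the text refers to:

```python
import math

def absolute_magnitude(m, d):
    """Equation 4.3: M = m - 5 log10(d / 10), with d in parsecs."""
    return m - 5 * math.log10(d / 10)

def distance_pc(m, M):
    """Equation 4.2 rearranged: d = 10^((m - M + 5) / 5), in parsecs."""
    return 10 ** ((m - M + 5) / 5)

def apparent_magnitude(M, d):
    """Equation 4.3 rearranged: m = M + 5 log10(d / 10)."""
    return M + 5 * math.log10(d / 10)

def distance_from_parallax(p):
    """Example 6: d = 1 / p, with p in arcseconds and d in parsecs."""
    return 1 / p

print(round(absolute_magnitude(1.25, 108), 2))   # Example 3: -3.92
print(round(distance_pc(0.45, -5.14)))           # Example 4: about 131 pc
print(round(apparent_magnitude(-8.73, 230), 2))  # Example 5: about -1.9
print(distance_from_parallax(0.1))               # 0.1 arcsec gives 10.0 pc
```

Being able to reproduce the worked examples this way is a useful check on calculator work, since the most common errors are misplaced parentheses in the exponent and using ln instead of log base 10.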
This is an interesting historical case, solved by the realisation that Betelgeuse has dimmed slightly in brightness since it was named under the Bayer system. One point to remember about constellations is that the stars within a constellation are not usually physically associated with each other, unlike stars in clusters. The fact that they appear close together is purely an alignment effect along our line of sight. In fact, the stars in the region above are widely separated in distance from us, as shown in the next image. Some of the bright stars also have their own specific name. Sirius (α CMa) is one such example, whilst α Cen is also called Rigel Kentaurus. Many star names are Arabic in origin, from the era when Greek records were preserved and developed by Islamic astronomers. The problem with the Bayer system of naming stars is that there are only 24 letters in the Greek alphabet but there are many more than 24 stars in each constellation. Most stars in fact do not have a specific name or Bayer classification. These days astronomers have compiled vast catalogs of stars, some with over 10 million objects, so most stars only have a catalog number. Stars may have more than one identification name or catalog number, depending on the number of catalogs they are in. The variable star δ Cep, for example, is also known as HIP 110991, SAO 34508, or any of more than 30 other identifiers! Many catalogs utilise celestial coordinates such as Right Ascension and Declination to identify objects. Thus δ Cep is known as IRAS 22273+5809, CCDM J22292+5825A and AAVSO 2225+57. The slight variations in RA and dec for the catalogs arise due to the proper motion of the stars and the precession of the reference frame.
United States Patent 7,218,691
Narasimhan May 15, 2007

Method and apparatus for estimation of orthogonal frequency division multiplexing symbol timing and carrier frequency offset

Abstract

A system and method estimates symbol timing of long training symbols and coarse carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network. The system and method estimates the symbol timing and coarse carrier frequency offset during short training symbols of a preamble of a data packet. Fine carrier frequency offset is estimated during long training symbols. Channel estimates during a data portion of the data packet are adapted.

Inventors: Narasimhan; Ravi (Los Altos, CA)
Assignee: Marvell International Ltd. (Hamilton, BM)
Appl. No.: 10/067,556
Filed: February 4, 2002

Related U.S. Patent Documents

Application Number   Filing Date   Patent Number   Issue Date
60273487             Mar., 2001

Current U.S. Class: 375/344; 370/203; 370/208; 375/149; 375/326
Current International Class: H04L 27/06 (20060101); H04B 1/707 (20060101)
Field of Search: 375/240.28,356,260,355,232,350,362,343,363,147,354,326,344,149 370/503,208,480,210,281,290,292,295,310,203

References Cited

U.S. Patent Documents

4550414 October 1985 Guinon et al.
5053982 October 1991 McCune, Jr.
5077753 December 1991 Grau, Jr. et al.
5111478 May 1992 McDonald
5231634 July 1993 Giles et al.
5282222 January 1994 Fattaouche et al.
5311544 May 1994 Park et al.
5555268 September 1996 Fattouche et al.
5602835 February 1997 Seki et al.
5608764 March 1997 Sugita et al.
5625573 April 1997 Kim
5694389 December 1997 Seki et al.
5726973 March 1998 Isaksson
5732113 March 1998 Schmidl et al.
5802117 September 1998 Ghosh
5809060 September 1998 Cafarella et al.
5889759 March 1999 McGibney
5909462 June 1999 Kamerman et al.
5953311 September 1999 Davies et al.
5991289 November 1999 Huang et al.
6021110 February 2000 McGibney
6035003 March 2000 Park et al.
6058101 May 2000 Huang et al.
6074086 June 2000 Yonge, III
6075812 June 2000 Cafarella et al.
6091702 July 2000 Saiki
6111919 August 2000 Yonge, III
6125124 September 2000 Junell et al.
6160821 December 2000 Dolle et al.
6169751 January 2001 Shirakata et al.
6172993 January 2001 Kim et al.
6414936 July 2002 Cho et al.
6658063 December 2003 Mizoguchi et al.
6711221 March 2004 Belotserkovsky et al.
6771591 August 2004 Belotserkovsky et al.
6862297 March 2005 Gardner et al.
6874096 March 2005 Norrell et al.
2002/0126618 September 2002 Kim
2003/0058966 March 2003 Gilbert et al.
2003/0112743 June 2003 You et al.
2003/0215022 November 2003 Li et al.
2004/0202234 October 2004 Wang

Foreign Patent Documents

Other References

Platbrood, F.; Rievers, S.; Farserotu, J., "Analysis of coarse frequency synchronisation for HIPERLAN type-2," Vehicular Technology Conference, 1999 (VTC 1999--Fall), IEEE VTS 50th, vol. 2, Sep. 19-22, 1999, pp. 688-692. cited by examiner.
Mochizuki, N. et al., "A high performance frequency and timing synchronization technique for OFDM," Global Telecommunications Conference, 1998 (GLOBECOM 98), The Bridge to Global Integration, IEEE, vol. 6, Nov. 8-12, 1998, pp. 3443-3448. cited by examiner.
Seunghyeon Nahm; Wanyang Sung, "A synchronization scheme for multi-carrier CDMA systems," Communications, 1998 (ICC 98), Conference Record, 1998 IEEE International Conference on, vol. 3, Jun. 7-11, 1998, pp. 1330-1334. cited by examiner.
Visser, M.A. et al., "A novel method for blind frequency offset correction in an OFDM system," Personal, Indoor and Mobile Radio Communications, 1998, The Ninth IEEE International Symposium on, vol. 2, Sep. 8-11, 1998, pp. 816-820. cited by examiner.
"Wireless LAN Standard," IEEE Std 802.11a-1999. cited by other.
Schmidl, Timothy M., "Robust Frequency and Timing Synchronization for OFDM," IEEE Transactions on Communications, vol. 45, No. 12, Dec. 1997. cited by other.

Primary Examiner: Ghebretinsae; Temesghen

Parent Case Text

This application claims the benefit of U.S. Provisional Application No. 60/273,487, filed Mar. 5, 2001, which is hereby incorporated by reference.

What is claimed is:

1. A method for estimating carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: receiving a preamble including at least two adjacent short training symbols; sampling said at least two adjacent short training symbols of said preamble at a first rate; and correlating said at least two adjacent short training symbols to generate a correlation signal. 2. The method of claim 1 further comprising normalizing said correlation signal to generate a normalized correlation signal. 3. The method of claim 2 wherein said normalizing step further comprises dividing said correlation signal by an energy of at least one of said adjacent short training symbols. 4. The method of claim 2 further comprising: repeating said sampling, correlating and normalizing steps for all of said short training symbols; and identifying a maximum value of said normalized correlation signal during said short training symbols. 5. The method of claim 4 further comprising multiplying said maximum value of said normalized correlation signal by a threshold value to identify left and right edges of a plateau defined by said normalized correlation signal. 6. The method of claim 5 wherein said threshold value is greater than zero and less than one. 7. The method of claim 5 further comprising identifying left and right time index values corresponding to said left and right edges. 8. The method of claim 7 further comprising identifying a center time index value using said left and right time index values. 9.
The method of claim 8 further comprising using said center time index value and a correlation value corresponding to said center time index value to calculate said carrier frequency offset. 10. The method of claim 9 further comprising: calculating a first value by dividing an imaginary component of said correlation value at said center time index value by a real component of said correlation value at said center time index value; and calculating an arctangent of said first value. 11. The method of claim 1 wherein said method is a software method. 12. The method of claim 10 further comprising calculating said carrier frequency offset by dividing said arctangent by 2πT_short wherein T_short is the period of said short training symbols. 13. The method of claim 1 wherein said preamble forms part of an orthogonal frequency division multiplex (OFDM) packet. 14. A method for estimating carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: receiving a preamble including a plurality of short training symbols; sampling said short training symbols of said preamble using a sampling window; and correlating samples from the short training symbols in a first half of said sampling window with samples from the short training symbols in a second half of said sampling window to generate a correlation signal. 15. The method of claim 14 wherein said sampling window has a period that is equal to a duration of two short training symbols. 16. The method of claim 14 further comprising normalizing said correlation signal to generate a normalized correlation signal. 17. The method of claim 14 wherein said method is a software method. 18. The method of claim 14 wherein said preamble forms part of an orthogonal frequency division multiplex (OFDM) packet. 19.
A method for estimating carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: receiving a preamble including a plurality of short training symbols; sampling said short training symbols of said preamble using a sampling window; correlating a first half of said sampling window with a second half of said sampling window to generate a correlation signal; normalizing said correlation signal to generate a normalized correlation signal; and wherein said normalizing step further comprises dividing said correlation signal by an energy of at least one of said first and second halves of said sampling window. 20. The method of claim 19 further comprising repeating said sampling, correlating and normalizing steps for all of said short training symbols. 21. The method of claim 20 further comprising identifying a maximum value of said normalized correlation signal during said short training symbols. 22. The method of claim 21 further comprising multiplying said maximum value of said normalized correlation signal by a threshold value to identify left and right edges of a plateau defined by said normalized correlation signal. 23. The method of claim 22 wherein said threshold value is greater than zero and less than one. 24. The method of claim 22 further comprising identifying left and right time index values corresponding to said left and right edges. 25. The method of claim 24 further comprising identifying a center time index value using said left and right time index values. 26. The method of claim 25 further comprising using said center time index value and a correlation value corresponding to said center time index value to calculate said carrier frequency offset. 27. 
The method of claim 26 further comprising calculating a first value by dividing an imaginary component of said correlation value at said center time index value by a real component of said correlation value at said center time index value and calculating an arctangent of said first value. 28. The method of claim 27 further comprising calculating said carrier frequency offset by dividing said arctangent by 2πT_short wherein T_short is the period of said short training symbols. 29. A method for estimating carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: sampling at least two adjacent short training symbols of a preamble of a data packet to generate a received signal; quantizing sign bits of real and imaginary components of said received signal; and correlating said quantized sign bits to generate a correlation signal. 30. The method of claim 29 further comprising generating a filtered sum of an absolute value of a real component of said correlation signal and an absolute value of an imaginary component of said correlation signal. 31. The method of claim 30 wherein said filtered sum is generated by a single pole filter. 32. The method of claim 30 further comprising identifying a local maximum value of said filtered sum during said short training symbols. 33. The method of claim 32 wherein said local maximum value is identified by updating and storing said filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. 34. The method of claim 29 wherein said method is a software method. 35. The method of claim 30 further comprising identifying a maximum value of said filtered sum during said short training symbols. 36. The method of claim 35 further comprising identifying said maximum value by updating and storing said filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. 37.
The method of claim 36 further comprising: identifying a time index value corresponding to said maximum value; and identifying a correlation signal value corresponding to said time index value. 38. The method of claim 37 further comprising: calculating an imaginary component of said correlation signal value corresponding to said time index value; calculating a real component of said correlation signal value corresponding to said time index value; dividing said imaginary component by said real component to generate a quotient; and calculating an arctangent of said quotient to generate a coarse carrier frequency offset estimate. 39. The method of claim 38 further comprising: multiplying said received signal by e^(-jnω_Δ) where ω_Δ is said coarse carrier frequency offset estimate and n is a time index. 40. A method for estimating carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: sampling short training symbols of a preamble of a data packet to generate a received signal; quantizing sign bits of real and imaginary components of said received signal; generating a filtered sum of an absolute value of a real component of said correlation signal and an absolute value of an imaginary component of said correlation signal; identifying a local maximum value of said filtered sum during said short training symbols; and multiplying said local maximum value of said filtered sum by a threshold value to identify a right edge of a plateau wherein said threshold value is greater than zero and less than one. 41. The method of claim 40 further comprising: identifying a right time index value corresponding to said right edge; and calculating symbol timing from said right time index value. 42.
A method for estimating carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: sampling short and long training symbols of a preamble of a data packet to generate a received signal; quantizing sign bits of real and imaginary components of said received signal; correlating said quantized sign bits of at least two adjacent short training symbols to generate a first correlation signal; generating a symbol timing estimate that is based on said first correlation signal and identifies a start time of first and second ones of said long training symbols; correlating said first and second long training symbols to generate a second correlation signal; and calculating a fine carrier frequency offset from said second correlation signal. 43. The method of claim 42 wherein said step of calculating further comprises: calculating an imaginary component of said correlation signal; calculating a real component of said correlation signal; dividing said imaginary component by said real component to generate a quotient; and calculating an arctangent of said quotient to generate said fine carrier frequency offset estimate. 44. The method of claim 43 further comprising updating a sampling clock with said fine carrier frequency offset estimate. 45.
A method for adapting a carrier frequency offset estimate in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: generating channel estimates for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values during a data portion of an orthogonal frequency division multiplexing packet; generating a complex number by summing a product of frequency domain signals and said channel estimates for each of said subcarrier index values and dividing said sum by a sum of a squared absolute value of said channel estimate for each of said subcarrier index values; calculating an imaginary component of said complex number; and adapting the carrier frequency offset estimate based on said imaginary component. 46. A method for adapting a carrier frequency offset estimate in an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: generating channel estimates for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values; generating a complex number by summing a product of frequency domain signals and said channel estimates for each of said subcarrier index values and dividing said sum by a sum of a squared absolute value of said channel estimate for each of said subcarrier index values; calculating an imaginary component of said complex number; multiplying said imaginary component by an adaptation parameter to generate a product; and adding said product to a prior carrier frequency offset estimate to produce an adapted carrier frequency offset estimate. 47. 
A carrier frequency offset estimator for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: sampling means for sampling at least two adjacent short training symbols of a preamble of a data packet to generate a received signal; quantizing means for quantizing sign bits of real and imaginary components of said received signal; and correlating means for correlating said quantized sign bits of at least two adjacent short training symbols to generate a correlation signal. 48. The carrier frequency offset estimator of claim 47 further comprising filtered sum generating means for generating a filtered sum of an absolute value of a real component of said correlation signal and an absolute value of an imaginary component of said correlation signal. 49. The carrier frequency offset estimator of claim 48 wherein said filtered sum is generated by a single pole filter. 50. The carrier frequency offset estimator of claim 48 further comprising first identifying means for identifying a local maximum value of said filtered sum during said short training symbols. 51. The carrier frequency offset estimator of claim 50 wherein said local maximum value is identified by updating and storing said filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. 52. The carrier frequency offset estimator of claim 50 further comprising first multiplying means for multiplying said local maximum value of said filtered sum by a threshold value to identify a right edge of a plateau. 53. The carrier frequency offset estimator of claim 52 wherein said threshold value is greater than zero and less than one. 54. The carrier frequency offset estimator of claim 52 further comprising: second identifying means for identifying a right time index value corresponding to said right edge; and first calculating means for calculating symbol timing from said right time index value. 55. 
The carrier frequency offset estimator of claim 48 further comprising third identifying means for identifying a maximum value of said filtered sum during said short training symbols. 56. The carrier frequency offset estimator of claim 55 further comprising fourth identifying means for identifying said maximum value by updating and storing said filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. 57. The carrier frequency offset estimator of claim 56 further comprising: fifth identifying means for identifying a time index value corresponding to said maximum value; and sixth identifying means for identifying a correlation signal value corresponding to said time index value. 58. The carrier frequency offset estimator of claim 57 further comprising: second calculating means for calculating an imaginary component of said correlation signal value corresponding to said time index value; third calculating means for calculating a real component of said correlation signal value corresponding to said time index value; dividing means for dividing said imaginary component by said real component to generate a quotient; and fourth calculating means for calculating an arctangent of said quotient to generate a coarse carrier frequency offset estimate. 59. The carrier frequency offset estimator of claim 58 further comprising: second multiplying means for multiplying said received signal by e^(-jnω_Δ) where ω_Δ is said coarse carrier frequency offset estimate and n is a time index. 60.
A carrier frequency offset estimator for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: sampling means for sampling short and long training symbols of a preamble of a data packet to generate a received signal; quantizing means for quantizing sign bits of real and imaginary components of said received signal; first correlating means for correlating said quantized sign bits of at least two adjacent short training symbols to generate a first correlation signal; symbol timing generating means for generating a symbol timing estimate that is based on said first correlation signal and identifies a start time of first and second ones of said long training symbols; correlating means for correlating said first and second long training symbols to generate a second correlation signal; and first calculating means for calculating a fine carrier frequency offset from said second correlation signal. 61. The carrier frequency offset estimator of claim 60 wherein said first calculating means calculates an imaginary component of said correlation signal, calculates a real component of said correlation signal, divides said imaginary component by said real component to generate a quotient, and calculates an arctangent of said quotient to generate said fine carrier frequency offset estimate. 62. The carrier frequency offset estimator of claim 61 further comprising updating means for updating a sampling clock with said fine carrier frequency offset estimate. 63.
A carrier frequency offset estimate adaptor for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: channel estimate generating means for generating channel estimates for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values during a data portion of an orthogonal frequency division multiplexing packet; complex number generating means for generating a complex number by summing a product of frequency domain signals and said channel estimates for each of said subcarrier index values and dividing said sum by a sum of a squared absolute value of said channel estimate for each of said subcarrier index values; first calculating means for calculating an imaginary component of said complex number; adapting means for adapting the carrier frequency offset estimate based on said imaginary component. 64. A carrier frequency offset estimate adaptor for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: channel estimate generating means for generating channel estimates for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values; complex number generating means for generating a complex number by summing a product of frequency domain signals and said channel estimates for each of said subcarrier index values and dividing said sum by a sum of a squared absolute value of said channel estimate for each of said subcarrier index values; first calculating means for calculating an imaginary component of said complex number; and first multiplying means for multiplying said imaginary component by an adaptation parameter to generate a product. 65. The carrier frequency offset estimate adaptor of claim 64 further comprising first adding means for adding said product to a prior carrier frequency offset estimate to produce an adapted carrier frequency offset estimate. 66. 
A carrier frequency offset estimator for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: a sampler that samples at least two adjacent short training symbols of a preamble of a data packet to generate a received signal; a quantizer that quantizes sign bits of real and imaginary components of said received signal; and a correlator that correlates said quantized sign bits to generate a correlation signal. 67. The carrier frequency offset estimator of claim 66 further comprising a filtered sum generator that generates a filtered sum of an absolute value of a real component of said correlation signal and an absolute value of an imaginary component of said correlation signal. 68. The carrier frequency offset estimator of claim 67 wherein said filtered sum is generated by a single pole filter. 69. The carrier frequency offset estimator of claim 67 further comprising a first identifier circuit that identifies a local maximum value of said filtered sum during said short training symbols. 70. The carrier frequency offset estimator of claim 69 wherein said local maximum value is identified by updating and storing said filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. 71. The carrier frequency offset estimator of claim 69 further comprising a first multiplier circuit that multiplies said local maximum value of said filtered sum by a threshold value to identify a right edge of a plateau. 72. The carrier frequency offset estimator of claim 71 wherein said threshold value is greater than zero and less than one. 73. The carrier frequency offset estimator of claim 71 further comprising: a second identifier circuit that identifies a right time index value corresponding to said right edge; and a first calculator that calculates symbol timing from said right time index value. 74.
The carrier frequency offset estimator of claim 67 further comprising a third identifier that identifies a maximum value of said filtered sum during said short training symbols. 75. The carrier frequency offset estimator of claim 74 further comprising a fourth identifier that identifies said maximum value by updating and storing said filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. 76. The carrier frequency offset estimator of claim 75 further comprising: a fifth identifier that identifies a time index value corresponding to said maximum value; and a sixth identifier that identifies a correlation signal value corresponding to said time index value. 77. The carrier frequency offset estimator of claim 76 further comprising: a second calculator that calculates an imaginary component of said correlation signal value corresponding to said time index value; a third calculator that calculates a real component of said correlation signal value corresponding to said time index value; a divider that divides said imaginary component by said real component to generate a quotient; and a fourth calculator that calculates an arctangent of said quotient to generate a coarse carrier frequency offset estimate. 78. The carrier frequency offset estimator of claim 77 further comprising: a second multiplier that multiplies said received signal by e.sup.-jn.omega..sub..DELTA. where .omega..sub..DELTA. is said coarse carrier frequency offset estimate and n is a time index. 79.
A carrier frequency offset estimator for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: a sampler that samples short and long training symbols of a preamble of a data packet and generates a received signal; a quantizer that quantizes sign bits of real and imaginary components of said received signal; a first correlator that correlates said quantized sign bits of at least two adjacent short training symbols to generate a first correlation signal; a symbol timing generator that produces a symbol timing estimate that is based on said first correlation signal and identifies a start time of first and second ones of said long training symbols; a second correlator that correlates said first and second long training symbols to generate a second correlation signal; and a first calculator that calculates a fine carrier frequency offset from said second correlation signal. 80. The carrier frequency offset estimator of claim 79 wherein said first calculator calculates an imaginary component of said correlation signal, calculates a real component of said correlation signal, divides said imaginary component by said real component to generate a quotient, and calculates an arctangent of said quotient to generate said fine carrier frequency offset estimate. 81. The carrier frequency offset estimator of claim 80 further comprising a sampling clock updater that updates a sampling clock with said fine carrier frequency offset estimate. 82. 
A carrier frequency offset estimate adaptor for an orthogonal frequency division multiplexing receiver of a wireless local area network, comprising: a channel estimator that generates channel estimates for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values during a data portion of an orthogonal frequency division multiplexing packet; a complex number generator that generates a complex number by summing a product of frequency domain signals and said channel estimates for each of said subcarrier index values and dividing said sum by a sum of a squared absolute value of said channel estimate for each of said subcarrier index values; a first calculator that calculates an imaginary component of said complex number; and an adaptor that adapts the carrier frequency offset estimate based on said imaginary component. 83. The carrier frequency offset estimate adaptor of claim 82 further comprising a first multiplier that multiplies said imaginary component by an adaptation parameter to generate a product. 84. The carrier frequency offset estimate adaptor of claim 83 further comprising a first adder that adds said product to a prior carrier frequency offset estimate to produce an adapted carrier frequency offset estimate. The present invention relates to receivers, and more particularly to receivers that measure carrier frequency offset, symbol timing and/or phase noise in an orthogonal frequency division multiplexing receiver. A wireless local area network (WLAN) uses radio frequency (RF) signals to transmit and receive data between electronic devices. WLANs provide all of the features and benefits of traditional hard-wired LANs without requiring cable connections between the devices. In WLANs, transmitters and receivers (often implemented as wireless network interface cards) provide a wireless interface between a client and a wireless access point to create a transparent connection between the client and the network.
Alternately, the WLAN provides a wireless interface directly between two devices. The access point is the wireless equivalent of a hub. The access point is typically connected to the WLAN backbone through a standard Ethernet cable and communicates with the wireless devices using an antenna. The wireless access point maintains the connections to clients that are located in a coverage area of the access point. The wireless access point also typically provides security by granting or denying access. IEEE section 802.11(a), which is hereby incorporated by reference, standardized WLANs that operate at approximately 5 GHz with data speeds up to 54 Mbps. A low band operates at frequencies from 5.15 to 5.25 GHz with a maximum power output of 50 mW. A middle band operates at frequencies from 5.25 to 5.35 GHz with a maximum power output of 250 mW. A high band operates at frequencies from 5.75 to 5.85 GHz with a maximum power output of 1000 mW. Because of the higher power output, wireless devices operating in the high band tend to be used for building-to-building and outdoor applications. The low and middle bands are more suitable for in-building applications. IEEE section 802.11(a) employs orthogonal frequency division multiplexing (OFDM) instead of the direct sequence spread spectrum (DSSS) that is employed by IEEE section 802.11(b). OFDM provides higher data rates and reduces transmission echo and distortion that are caused by multipath propagation and radio frequency interference (RFI). Referring now to FIG. 1, data packets include a preamble 10 that is specified by IEEE section 802.11(a). The preamble 10 includes a plurality of short training symbols 12 (S0, . . . , S9). The short training symbols 12 are followed by a guard interval 14 (Guard) and two long training symbols 16-1 and 16-2 (L0, L1).
The duration of the short training symbol 12 is T.sub.short, the duration of the guard interval 14 is T.sub.G12, the duration of the long training symbols 16 is T.sub.long, the duration of the guard interval 15 for data symbols is T.sub.G1, and the duration of data symbols 18 is T.sub.data. Guard intervals 15 and data symbols 18 alternate after the long training symbols 16. According to IEEE section 802.11(a), T.sub.short=0.8 .mu.s, T.sub.G1=0.8 .mu.s, T.sub.G12=1.6 .mu.s, T.sub.long=3.2 .mu.s, and T.sub.data=4 .mu.s. One important task of the OFDM receiver is the estimation of symbol timing and carrier frequency offset. Symbol timing is needed to determine the samples of each OFDM symbol that correspond to the guard interval and the samples that are used for fast Fourier transform (FFT) processing. Compensation of the carrier frequency offset is also needed to maximize signal amplitude and minimize inter-carrier interference (ICI). Conventional symbol timing circuits correlate two halves of a single OFDM training symbol whose duration is equal to the duration of the data symbols. For example, see the symbol timing circuit disclosed in T. Schmidl and D. C. Cox, "Robust Frequency and Timing Synchronization for OFDM", IEEE Trans. Commun., vol. 45, no. 12, pp. 1613-1621, December 1997, which is hereby incorporated by reference. The conventional symbol timing circuit exhibits a plateau when there is no intersymbol interference (ISI). The duration of the plateau is the duration of the guard interval that is not affected by ISI. The plateau in the conventional symbol timing circuit corresponds to the range of acceptable times for the start of the frame. For example, the center of the plateau is a desirable estimate of the symbol timing. Since only one training symbol is employed, the conventional symbol timing circuit does not allow time for possible switching of antennas and corresponding AGC settling during packet detection.
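As a quick consistency check, the IEEE 802.11(a) durations above translate directly into the sample counts used later in the description. A minimal sketch (the variable names are ours, not the standard's):

```python
# Durations from IEEE 802.11(a) as listed above, converted to sample counts
# at the 20 MHz sampling rate 1/T_s used later in the description.
T_s = 1 / 20e6     # 50 ns sampling period
T_short = 0.8e-6   # one short training symbol
T_G12 = 1.6e-6     # guard interval before the long training symbols
T_long = 3.2e-6    # one long training symbol
T_data = 4e-6      # one data symbol (0.8 us guard + 3.2 us useful part)

L = round(T_short / T_s)    # samples per short training symbol
N = round(T_long / T_s)     # samples per long training symbol
G12 = round(T_G12 / T_s)    # samples in the long guard interval
print(L, N, G12)            # 16 64 32
```

These are the L=16 and N=64 values the detailed description assumes for a 20 MHz sampling rate.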
A system and method according to the invention estimates carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network. Short training symbols of a preamble of a data packet are sampled to generate a received signal. Sign bits of real and imaginary components of the received signal are quantized. In other features, the sign bits of at least two adjacent short training symbols are used to generate a correlation signal. A filtered sum of an absolute value of a real component of the correlation signal and an absolute value of an imaginary component of the correlation signal is generated. In still other features, a local maximum value of the filtered sum is identified during the short training symbols. The local maximum value is identified by updating and storing the filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. In still other features, the local maximum value of the filtered sum is multiplied by a threshold value to identify a right edge of a plateau. A right time index value corresponding to the right edge is identified. Symbol timing of long training symbols is calculated from the right time index value. In still other features, a maximum value of the filtered sum is identified during the short training symbols. The maximum value is identified by updating and storing the filtered sums and by comparing at least one filtered sum to a prior filtered sum and to a subsequent filtered sum. A time index value corresponding to the maximum value is identified. A correlation signal value corresponding to the time index value is identified. An imaginary component of the correlation signal value corresponding to the time index value is calculated. A real component of the correlation signal value corresponding to the time index value is calculated. The imaginary component is divided by the real component to generate a quotient.
An arctangent of the quotient is calculated to generate a coarse carrier frequency offset estimate. In other features of the invention, a system and method estimates fine carrier frequency offset in an orthogonal frequency division multiplexing receiver of a wireless local area network. A symbol timing estimate is generated that identifies a start time of first and second long training symbols of a preamble of a data packet. The first and second long training symbols of the preamble are used to generate a received signal. The first and second long training symbols are correlated to generate a correlation signal. A fine carrier frequency offset is calculated from the correlation signal. In yet other features, the step of calculating includes calculating imaginary and real components of the correlation signal. The imaginary component is divided by the real component to generate a quotient. An arctangent of the quotient is calculated to generate the fine carrier frequency offset estimate. In other features of the invention, a system and method updates channel estimates in an orthogonal frequency division multiplexing receiver of a wireless local area network. The channel estimates are generated for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values. A complex number is generated by summing a product of frequency domain signals and the channel estimates for each of the subcarrier index values and dividing the sum by a sum of a squared absolute value of the channel estimate for each of the subcarrier index values. The complex number is multiplied by the channel estimates to generate said updated channel estimates. In still other features of the invention, a system and method adapt a carrier frequency offset estimate in an orthogonal frequency division multiplexing receiver of a wireless local area network. 
Channel estimates are generated for an orthogonal frequency division multiplexing subcarrier as a function of subcarrier index values. A complex number is generated by summing a product of frequency domain signals and the channel estimates for each of the subcarrier index values. The sum is divided by a sum of a squared absolute value of the channel estimate for each of the subcarrier index values. An imaginary component of the complex number is calculated. In yet other features, the imaginary component is multiplied by an adaptation parameter to generate a product. The product is added to a carrier frequency offset estimate to produce an adapted carrier frequency offset estimate. Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. The present invention will become more fully understood from the detailed description and the accompanying drawings, wherein: FIG. 1 illustrates a preamble of a packet transmitted by an orthogonal frequency division multiplexing receiver according to the prior art; FIG. 2 is a functional block diagram of an OFDM transmitter according to the present invention; FIG. 3 is a functional block diagram of an OFDM receiver according to the present invention; FIG. 4 is a simplified functional block diagram of the OFDM receiver of FIG. 3; FIG. 5 is a graph illustrating M.sub.n as a function of a time interval n; FIG. 6 is an exemplary functional block diagram for calculating M.sub.n and P.sub.n; FIG. 7 is a flowchart illustrating steps for calculating symbol timing, carrier frequency offset and phase noise; FIG. 
8 is an exemplary functional block diagram for calculating updated channel estimates and an adapted carrier frequency estimate; FIG. 9 is a flowchart illustrating steps for calculating the updated channel estimates; and FIG. 10 is a flowchart illustrating steps for calculating the adapted carrier frequency estimate. The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. Referring now to FIG. 2, an OFDM transmitter 30 is shown. The OFDM transmitter 30 includes a data scrambler 32 that receives input bits and scrambles the bits to prevent long strings of 1's and 0's. An output of the data scrambler 32 is input to a convolutional encoder 34 that adds redundant bits. For example, for each input bit the convolutional encoder 34 may generate two output bits in a rate 1/2 convolutional coder. Skilled artisans can appreciate that other code rates may be employed. An output of the convolutional encoder 34 is input to an interleaver and symbol mapper 36. An output of the interleaver and symbol mapper 36 is input to a serial to parallel (S/P) converter 38. Outputs of the S/P converter 38 are input to an inverse fast Fourier transform (IFFT) circuit 40. Outputs of the IFFT circuit 40 are input to a parallel to serial (P/S) converter 42. An output of the P/S converter 42 is input to a cyclic prefix adder 44 that adds guard interval bits. An output of the cyclic prefix adder 44 is input to a waveform shaper 46. An output of the waveform shaper 46 is input to a digital to analog (D/A) converter 48. An output of the D/A converter 48 is input to a radio frequency (RF) amplifier 50 that is connected to an antenna 52. In a preferred embodiment, the OFDM transmitter 30 complies with IEEE section 802.11(a). Referring now to FIG. 3, an OFDM receiver 60 receives the RF signals that are generated by the OFDM transmitter 30. The receiver 60 includes antennas 62-1 and 62-2.
A switch 64 selects one of the antennas 62 based upon the strength of the RF signal detected by the antenna 62. An amplifier 66 is connected to an output of the switch 64. An analog to digital (A/D) converter 68 is connected to an output of the amplifier 66. An automatic gain control (AGC), antenna diversity and packet detection circuit 70 is connected to an output of the A/D converter 68. When the gain of the AGC decreases, a packet is detected. A symbol timing and carrier frequency offset circuit 74 according to the present invention is connected to an output of the circuit 70. The symbol timing and carrier frequency offset circuit 74 identifies a carrier frequency offset .omega..sub..DELTA., a starting time n.sub.g of a guard interval, and phase noise as will be described more fully below. The circuit 74 typically multiplies the samples by e.sup.-j.omega..sub..DELTA.n where n is a sample time index. A cyclic prefix remover 76 is connected to an output of the symbol timing and carrier frequency offset circuit 74. A S/P converter 78 is connected to an output of the cyclic prefix remover 76. A FFT circuit 80 is connected to an output of the S/P converter 78. A P/S converter 82 is connected to an output of the FFT circuit 80. A demap and deinterleave circuit 84 is connected to an output of the P/S converter 82. A channel estimator 86 that estimates multipath is connected to an output of the symbol timing and carrier frequency offset circuit 74. A frequency equalizer (FEQ) 90 is connected to an output of the channel estimator 86. An output of the FEQ 90 is input to the demap and deinterleave circuit 84. An output of the demap and deinterleave circuit 84 is input to a sample recovery clock 94 and to a Viterbi decoder 96. An output of the sample recovery clock 94 is input to the A/D converter 68. An output of the Viterbi decoder 96 is input to a descrambler 98. Referring now to FIG. 4, a simplified functional block diagram of FIG.
3 is shown and includes a radio frequency (RF) amplifier 100 that amplifies the received RF signal. An output of the amplifier 100 is input to a multiplier 102 having another input connected to a local oscillator (LO) 104. An output of the multiplier 102 is filtered by a filter 108 and input to an analog to digital (A/D) converter 110 having a sampling rate of 1/T.sub.s. The A/D converter 110 generates samples r.sub.n. A typical value for 1/T.sub.s is 20 MHz, although other sampling frequencies may be used. During the initial periods of the short training symbol 12, the circuit 70 brings the signal within a dynamic range of the OFDM receiver 60. Antenna selection for receive diversity is also performed. After packet detection and AGC settling, the following quantities are computed for estimation of OFDM symbol timing: q.sub.n=sgn[R(r.sub.n)]+jsgn[I(r.sub.n)] P.sub.n=.SIGMA..sub.m=0.sup.L-1q.sub.n-mq*.sub.n-m-L M.sub.n=.alpha..sub.sM.sub.n-1+(1-.alpha..sub.s)(|R(P.sub.n)|+|I(P.sub.n)|) where L=T.sub.short/T.sub.s is the number of samples in one short training symbol, R is the real component of an argument, and I is the imaginary component of the argument. A typical value for L is L=16, although other values may be used. q.sub.n contains the sign bits of the real and imaginary components of the received signal r.sub.n. Quantization simplifies the hardware processing for symbol timing acquisition. P.sub.n represents a correlation between two adjacent short training symbols of q.sub.n. M.sub.n represents a filtered version of |R(P.sub.n)|+|I(P.sub.n)|. The filter is preferably a single pole filter with a pole .alpha..sub.s. A typical value of .alpha..sub.s is .alpha..sub.s=1-3/32, although other values may be used. Referring now to FIG. 5, a plot of M.sub.n for a multipath channel having a delay spread of 50 ns is shown. M.sub.n has a plateau at 120 that results from the periodicity of the channel output due to the repetition of the short training symbols.
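The sign-bit quantization and sliding correlation above can be sketched in a few lines. This is an illustrative model, not the hardware of FIG. 6; the function name and the explicit sliding-sum form of P.sub.n (a length-L correlation at lag L) are ours:

```python
import numpy as np

def sign_bit_metric(r, L=16, alpha_s=1 - 3/32):
    """Sign-bit correlation metric described above: q_n keeps only the sign
    bits of the real and imaginary parts of r_n; P_n correlates two adjacent
    L-sample short training symbols of q_n, taken here as
    sum_{m=0}^{L-1} q_{n-m} q*_{n-m-L}; M_n is the single-pole filtered
    version of |R(P_n)| + |I(P_n)| with pole alpha_s."""
    q = np.sign(r.real) + 1j * np.sign(r.imag)
    P = np.zeros(len(r), dtype=complex)
    M = np.zeros(len(r))
    for n in range(2 * L - 1, len(r)):
        P[n] = np.sum(q[n - L + 1:n + 1] * np.conj(q[n - 2 * L + 1:n - L + 1]))
        M[n] = alpha_s * M[n - 1] + (1 - alpha_s) * (abs(P[n].real) + abs(P[n].imag))
    return q, P, M

# Demo: ten identical short training symbols with no frequency offset.
# With perfect periodicity every product q_n q*_{n-L} equals |q_n|^2 = 2,
# so P_n = 2L = 32 once the correlation window is filled.
rng = np.random.default_rng(0)
s = rng.standard_normal(16) + 1j * rng.standard_normal(16)
q, P, M = sign_bit_metric(np.tile(s, 10))
```

A residual frequency offset rotates the later symbol relative to the earlier one, so the angle of P.sub.n (divided by L) tracks the per-sample offset, which is exactly how the coarse estimate below uses it.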
The duration of the plateau depends on the number of periods of the short training symbol that remain after antenna selection and AGC settling. Therefore, a center of the plateau is not the best symbol timing estimate. A falling edge of the plateau indicates that no more short training symbols are present and that M.sub.n includes samples from the guard interval 14 that precedes the long training symbols 16. Therefore, the falling edge of the plateau provides an estimate of the symbol timing. After AGC settling, P.sub.n and M.sub.n are calculated. A left edge n.sub.1 of the plateau 120 is defined by M.sub.n>.tau..sub.1A. Typical values for .tau..sub.1 and A are .tau..sub.1=0.7 and A=32 (for 1/T.sub.s=20 MHz). A maximum value of M.sub.n is updated and stored as M.sub.n,max as time progresses. The complex number P.sub.n corresponding to M.sub.n,max is denoted by P.sub.n,max, which is also updated and stored as time progresses. A local maximum value M.sub.n,localmax is set equal to M.sub.n-1 if the following conditions are met: M.sub.n-1.gtoreq.M.sub.n-2 and M.sub.n-1>M.sub.n. The local maximum value M.sub.n,localmax is updated and stored as time progresses. A time index n.sub.g is set to n-1 if the following conditions are met: M.sub.n<.tau..sub.2M.sub.n,localmax and M.sub.n-1.gtoreq..tau..sub.2M.sub.n,localmax. The index n.sub.g is used to determine the symbol timing. A typical value for .tau..sub.2 is .tau..sub.2=0.9. To determine a right edge n.sub.r of the plateau 120, M.sub.n must stay below .tau..sub.1M.sub.n,max for at least B consecutive samples. A typical value for B is B=8 (for 1/T.sub.s=20 MHz). Once n.sub.r is determined, the coarse frequency offset .omega..sub..DELTA. is determined by: .omega..sub..DELTA.=tan.sup.-1[I(P.sub.n,max)/R(P.sub.n,max)]/(L) A coarse frequency correction e.sup.-j.omega..sub..DELTA.n is applied to the received signal. The symbol timing is then estimated by n.sub.g'=n.sub.g-n.sub..DELTA..
A typical value for n.sub..DELTA. is n.sub..DELTA.=32. Referring now to FIGS. 6 and 7, an exemplary implementation of the coarse frequency and symbol timing circuit 74 is shown. Typical parameter values include L=32, .tau..sub.1=0.7, A=64, .tau..sub.2=0.7, B=15, n.sub..DELTA.=25, 1/T.sub.s=40 MHz, and .alpha..sub.s=1-3/32. A low pass filter (LPF) 150 is connected to a sign-bit quantizer 152. The sign-bit quantizer 152 is connected to a buffer 154 and a multiplier 156. An L-1 output of the buffer 154 is connected to a conjugator 158 and a multiplier 160. A 2L-1 output of the buffer 154 is connected to a conjugator 162, which has an output connected to the multiplier 160. An output of the multiplier 160 is connected to an inverting input of an adder 164. An output of the multiplier 156 is connected to a non-inverting input of the adder 164. An output of the adder 164 is input to an adder 170. An output of the adder 170 is equal to P.sub.n and is connected to a delay element 172 that is fed back to an input of the adder 170. The output of the adder 170 is also input to a metric calculator 174. An output of the metric calculator 174 is connected to a multiplier 176. Another input of the multiplier 176 is connected to a signal equal to 1-.alpha..sub.s. An output of the multiplier 176 is input to an adder 180. An output of the adder 180 is equal to M.sub.n and is connected to a delay element 182, which has an output that is connected to a multiplier 184. The multiplier 184 has another input connected to .alpha..sub.s. An output of the multiplier 184 is connected to an input of the adder 180. Referring now to FIG. 7, steps performed by the coarse frequency and symbol timing circuit 74 are shown generally at 200. Control begins in step 202. In step 204, M.sub.nmax, M.sub.nlocalmax, n.sub.1, n.sub.r, n.sub.s, n.sub.g, n.sub.max, and ctr are initialized. In step 206, control determines whether n.sub.1=0 and M.sub.n>.tau..sub.1A.
If true, control sets n.sub.1=n in step 208 and continues with step 210. If false, control determines whether M.sub.n>M.sub.nmax in step 210. If true, control continues with step 212 where control sets M.sub.nmax=M.sub.n and n.sub.max=n and then continues with step 214. If false, control continues with step 214 where control determines whether both M.sub.n-1.gtoreq.M.sub.n-2 and M.sub.n-1>M.sub.n. If true, control sets M.sub.nlocalmax=M.sub.n-1 in step 216 and then continues with step 218. If false, control determines whether M.sub.n<.tau..sub.2M.sub.nlocalmax and M.sub.n-1.gtoreq..tau..sub.2M.sub.nlocalmax in step 218. If true, control sets n.sub.g=n-1 in step 220 and continues with step 224. If false, control determines whether n.sub.1>0 and M.sub.n-1>.tau..sub.1M.sub.nmax in step 224. If true, control sets ctr=0 in step 226 and continues with step 230. If false, control determines whether n.sub.1>0 in step 232. If true, control sets ctr=ctr+1 in step 234 and continues with step 230. In step 230, control determines whether ctr=B or n=10L-1. If false, control sets n=n+1 in step 236 and returns to step 206. If true, control sets n.sub.r=n-B in step 238. In step 240, control calculates .omega..sub..DELTA.=tan.sup.-1[Im(P.sub.nmax)/Re(P.sub.nmax)]/(L) and .rho.=(1-.omega..sub..DELTA./.omega..sub.carrier). In step 242, control estimates a start of the long training symbols using n.sub.g'=n.sub.g-n.sub..DELTA.. IEEE section 802.11(a) specifies that the transmit carrier frequency and sampling clock frequency are derived from the same reference oscillator. The normalized carrier frequency offset and the sampling frequency offset are approximately equal. Since carrier frequency acquisition is usually easier than sampling period acquisition, sampling clock recovery is achieved using the estimate of the carrier frequency offset .omega..sub..DELTA.. The coarse frequency estimate .omega..sub..DELTA. is used to correct all subsequent received samples. The coarse frequency estimate .omega..sub..DELTA.
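The edge-tracking flow of FIG. 7 can be sketched as follows. This is a simplified illustration in which the names are ours and the n.sub.1 bookkeeping and the n=10L-1 timeout are omitted for brevity:

```python
import numpy as np

def coarse_cfo_and_timing(P, M, L=16, tau1=0.7, tau2=0.9, B=8, n_delta=32):
    """Track the running maximum of M_n (storing the matching P_n,max),
    detect the plateau's falling edge with threshold tau2, confirm the right
    edge once M_n stays below tau1*M_max for B consecutive samples, then
    estimate the coarse per-sample offset from the stored P_n,max."""
    M_max = P_max = 0.0
    M_localmax = 0.0
    n_g = None
    ctr = 0
    for n in range(2, len(M)):
        if M[n] > M_max:                                  # running maximum
            M_max, P_max = M[n], P[n]
        if M[n - 1] >= M[n - 2] and M[n - 1] > M[n]:      # local maximum
            M_localmax = M[n - 1]
        if M[n] < tau2 * M_localmax <= M[n - 1]:          # falling edge
            n_g = n - 1
        ctr = ctr + 1 if M[n] < tau1 * M_max else 0       # right-edge counter
        if ctr == B:
            break
    omega = np.arctan2(P_max.imag, P_max.real) / L        # coarse offset
    return omega, (None if n_g is None else n_g - n_delta)

# Demo on a synthetic metric: ramp up, plateau, ramp down, with a constant
# correlation value whose phase encodes an offset of 0.01 rad/sample.
M_demo = np.concatenate([np.linspace(0, 1, 20), np.ones(50), np.linspace(1, 0, 30)])
P_demo = np.full(100, 2 * np.exp(1j * 0.16))
omega, ng = coarse_cfo_and_timing(P_demo, M_demo)
```

The returned timing estimate already includes the n.sub.g-n.sub..DELTA. adjustment that the description applies before locating the long training symbols.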
is refined during the long training symbols specified in IEEE section 802.11(a). r.sub.0,n and r.sub.1,n (n=0, . . . , N-1) are the received samples that are associated with the long training symbols 16-1 and 16-2 (or L0 and L1), respectively. The value N is the number of samples contained within each long training symbol 16. A typical value for N is N=64 (for 1/T.sub.s=20 MHz) (where L=16 and n.sub..DELTA.=32). The estimate of fine frequency offset .omega..sub..DELTA.,fine is obtained by: .omega..sub..DELTA.,fine=tan.sup.-1[I(C.sub.L)/R(C.sub.L)]/N where C.sub.L=.SIGMA..sub.n=0.sup.N-1r.sub.1,nr*.sub.0,n. The sampling clock is also updated accordingly. The residual frequency offset and phase noise are tracked during the data portion of the OFDM packet. H.sub.k are channel estimates for the OFDM subcarriers as a function of the subcarrier index k. The channel estimates H.sub.k are multiplied by a complex number C.sub.ML to compensate for common amplitude and phase error due to the residual frequency offsets and phase noise. P.sub.k, k.epsilon.K, are received frequency domain signals on the pilot tones after the known BPSK modulation is removed, where K={-21, -7, 7, 21}. The pilot tones are used to derive a maximum likelihood estimate of C.sub.ML: C.sub.ML=[.SIGMA..sub.k.epsilon.KP.sub.kH*.sub.k]/[.SIGMA..sub.k.epsilon.K|H.sub.k|.sup.2] The new channel estimates are then {tilde over (H)}.sub.k=C.sub.MLH.sub.k. These updated channel estimates are used in the frequency equalizer (FEQ) for data detection. The carrier frequency estimate .omega..sub..DELTA. is adapted by: .omega..sub..DELTA..sup.l=.omega..sub..DELTA..sup.l-1+.beta.I(C.sub.ML) where .beta. is an adaptation parameter and the superscript l represents values during the l-th OFDM data symbol. A typical value of .beta. is .beta.=1/1024. The sampling clock frequency is also adapted accordingly.
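The pilot-tone tracking above can be sketched as follows (an illustrative model; the function and variable names are ours):

```python
import numpy as np

def pilot_tracking(P_pilots, H_pilots, omega_prev, beta=1/1024):
    """Pilot-tone tracking as described above: C_ML is the ratio of
    sum_k P_k H_k* to sum_k |H_k|^2 over the pilot set K, the channel
    estimates are rescaled by C_ML, and the carrier frequency offset
    estimate is nudged by beta * I(C_ML)."""
    C_ML = np.sum(P_pilots * np.conj(H_pilots)) / np.sum(np.abs(H_pilots) ** 2)
    H_new = C_ML * H_pilots                 # updated channel estimates
    omega_new = omega_prev + beta * C_ML.imag
    return C_ML, H_new, omega_new

# Demo: if every pilot is rotated by one common residual phase error,
# C_ML recovers exactly that complex factor.
H = np.array([1 + 1j, 2 + 0j, 0 + 1j, 0.5 + 0j])   # channel at K = {-21,-7,7,21}
P_rx = np.exp(1j * 0.02) * H                        # common phase error, 0.02 rad
C_ML, H_new, omega_new = pilot_tracking(P_rx, H, omega_prev=0.0)
```

For small residual offsets I(C.sub.ML) is approximately the common phase error per data symbol, so the .beta.-scaled update slowly steers .omega..sub..DELTA. toward the true offset.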
Since the guard interval 14 of an OFDM data symbol is longer than the channel impulse response, an additional tolerance factor is provided in the symbol timing estimate. In order to obtain a symbol timing estimate within an acceptable range, a modified symbol timing estimate n.sub.g' is generated. The modified symbol timing estimate n.sub.g' is equal to n.sub.g-n.sub..DELTA., where n.sub..DELTA. is an offset that provides the tolerance. A typical value for n.sub..DELTA. is n.sub..DELTA.=32 when L=16. Referring now to FIG. 8, an exemplary circuit 250 for calculating the updated channel estimates and the adapted carrier frequency estimate .omega..sub..DELTA. is shown. The circuit includes multipliers 256-1, 256-2, . . . , 256-n that multiply H*.sub.k and P.sub.k, for k.epsilon.K. Absolute value circuits 260-1, 260-2, . . . , 260-n calculate an absolute value of H.sub.k. Outputs of the absolute value circuits 260 are squared by multipliers 264-1, 264-2, . . . , 264-n. Outputs of the multipliers 256 are input to an adder 266. Outputs of the multipliers 264 are input to an adder 270. An output of the adder 266 is input to a numerator input of a divider 272. An output of the adder 270 is input to a denominator input of the divider 272. The output C.sub.ML of the divider 272 is input to a multiplier 274. Another input of the multiplier 274 is connected to H.sub.k. An output of the multiplier 274 generates {tilde over (H)}.sub.k. The output of the divider 272 is also input to an imaginary component circuit 280 that outputs an imaginary component of C.sub.ML. An output of the imaginary component circuit 280 is input to a multiplier 284. Another input of the multiplier 284 is connected to the adaptation parameter .beta.. An output of the multiplier 284 is input to an adder 286. Another input of the adder 286 is connected to .omega..sub..DELTA..sup.l-1. An output of the adder 286 generates .omega..sub..DELTA..sup.l, which is the adapted carrier frequency estimate. Referring now to FIG.
9, steps for calculating new channel estimates are shown generally at 300. In step 302, control begins. In step 304, channel estimates H_k are obtained. In step 306, frequency-domain signals P_k on the pilot tones are obtained after the BPSK modulation is removed. In step 308, the conjugates of the channel estimates H_k are multiplied by the frequency-domain signals P_k and summed over k ∈ K. In step 310, C_ML is computed by dividing the summed product generated in step 308 by the sum, over k ∈ K, of the squared absolute values of H_k. In step 312, the channel estimates H_k are multiplied by C_ML to obtain the new channel estimates H̃_k. Control ends in step 314. Referring now to FIG. 10, steps for generating the adapted carrier frequency estimate are shown generally at 320. Control begins in step 322. In step 324, the imaginary component of C_ML is generated. In step 326, the imaginary component of C_ML is multiplied by the adaptation parameter β. In step 328, the product of step 326 is added to ω_Δ^{l-1} (the (l-1)-th carrier frequency offset estimate) to generate ω_Δ^l. Control ends in step 330. In an alternate method for calculating the coarse frequency offset according to the present invention, after packet detection and AGC settling, the following quantities are computed for estimation of OFDM symbol timing: P_n = Σ_{m=0}^{L-1} r*_{n+m} r_{n+m+L}, R_n = Σ_{m=0}^{L-1} |r_{n+m+L}|², and M_n = |P_n|² / R_n², where L = T_short/T_s is the number of samples in one short training symbol. A typical value is L = 16, although other values may be used. P_n represents a correlation between two adjacent short training symbols. R_n represents the average received power in a short training symbol. M_n represents a normalized correlation between two adjacent short training symbols.
M_n exhibits the plateau at 120 due to the repetition of the short training symbol. In other words, M_n is at its maximum value as the sample window moves across the short training symbols 12 after packet detection and AGC settling. P_n correlates the received signals for two adjacent short training symbols. Preferably, the sampling window has a duration of 2L, although other durations may be used. The duration of the plateau 120 depends upon the number of periods of the short training symbol that remain after antenna selection and AGC settling are complete. Therefore, the center of the plateau 120 of M_n is not usually the best symbol timing estimate. The right edge of the plateau 120 indicates that no more short training symbols are present. Samples that occur after the right edge of the plateau include samples from the guard interval 14 that precedes the long training symbols 16. Therefore, the right edge of the plateau 120 provides a good estimate of the symbol timing. After packet detection and AGC settling, M_n is computed. M_max is the maximum of M_n, and n_max corresponds to the time index at which M_max occurs. Points n_l and n_r are the left and right edges of the plateau 120, respectively. The points n_l and n_r are identified such that M_{n_l} ≈ M_{n_r} ≈ τ₁·M_max and n_l < n_max < n_r. In other words, n_l and n_r are the points preceding and following the maximum of M_n that are equal to a threshold τ₁ multiplied by M_max. A typical value for τ₁ is 0.7. The center of the plateau 120 is estimated by the midpoint n_c = (n_r + n_l)/2. The carrier frequency offset Δf is estimated by α = tan⁻¹[Im(P_{n_c})/Re(P_{n_c})] and Δf = α/(2π·T_short), which is valid if |Δf| < 1/(2T_short). For example, |Δf| < 1/(2T_short) = 625 kHz for T_short = 0.8 μs.
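The timing metric and the coarse offset estimate above can be sketched in a few lines. This is a simplified model, not the patent's circuit: the unit-modulus training sequence, the loop structure, and the sample-rate values are illustrative assumptions.

```python
import numpy as np

def timing_metric(r, L=16):
    """P_n, R_n, M_n over a sliding window of duration 2L.
    P_n correlates adjacent short training symbols; R_n is the received
    power in the second half-window; M_n = |P_n|^2 / R_n^2."""
    N = len(r) - 2 * L
    P = np.array([np.sum(np.conj(r[n:n + L]) * r[n + L:n + 2 * L])
                  for n in range(N)])
    R = np.array([np.sum(np.abs(r[n + L:n + 2 * L]) ** 2)
                  for n in range(N)])
    return P, R, np.abs(P) ** 2 / R ** 2

def plateau_center_cfo(P, M, tau1=0.7, T_short=0.8e-6):
    """Find the plateau edges n_l, n_r where M_n falls to tau1 * M_max,
    take the midpoint n_c, and estimate the carrier frequency offset
    from the angle of P at n_c (valid for |df| < 1/(2*T_short))."""
    n_max = int(np.argmax(M))
    thresh = tau1 * M[n_max]
    n_l = n_max
    while n_l > 0 and M[n_l - 1] >= thresh:
        n_l -= 1
    n_r = n_max
    while n_r < len(M) - 1 and M[n_r + 1] >= thresh:
        n_r += 1
    n_c = (n_l + n_r) // 2
    df = np.angle(P[n_c]) / (2 * np.pi * T_short)
    return n_c, df

# Usage: a period-16 training sequence with a 100 kHz offset at 20 MHz
rng = np.random.default_rng(0)
s = np.exp(2j * np.pi * rng.random(16))   # unit-modulus short symbol
f, Ts = 1e5, 1 / 20e6
n = np.arange(160)
r = np.tile(s, 10) * np.exp(2j * np.pi * f * n * Ts)
P, R, M = timing_metric(r)
n_c, df = plateau_center_cfo(P, M)
```

On this noiseless periodic input the metric M_n is flat (the plateau covers everything), and `df` recovers the injected 100 kHz offset.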
The estimate of the carrier frequency offset Δf may be refined using a correlation of the two long training symbols after the sample timing is determined, as will be described below. In order to detect the falling edge of the plateau of M_n, the mean absolute difference of M_n near the center of the plateau is computed: D_K = (1/K) Σ_{k=1}^{K} |M_{n_c+k} - M_{n_c+k-1}|, where K is the number of terms in the estimate of the mean absolute difference. A typical value for K is (n_r - n_l)/2. The sample index n_g at the beginning of the guard interval 14 preceding the long training symbols 16 is estimated by detecting the right (falling) edge of the plateau of M_n. In other words, n_g satisfies the following conditions: n_g > n_c, M_{n_g} < M_{n_g-1}, and |M_{n_g} - M_{n_g-1}| > τ₂·D_K. A typical value for τ₂ is 10. Since the guard interval 14 of an OFDM data symbol is longer than the channel impulse response, an additional tolerance factor is provided in the symbol timing estimate. In order to obtain a symbol timing estimate within an acceptable range, a modified symbol timing estimate n_g' is generated. The modified symbol timing estimate n_g' is equal to n_g - n_Δ, where n_Δ is a small number that is less than the number of samples in the guard interval for a data symbol. For IEEE 802.11(a), the number of samples in the guard interval for a data symbol is L, which is the number of samples in a short training symbol. For example, a typical value for n_Δ is L/4. The identification of the precise time that M_n decreases from the plateau 120 (e.g., when the short training symbols 12 end) may vary somewhat. To accommodate the possible variation, the modified symbol timing estimate n_g' provides additional tolerance. With the modified symbol timing estimate n_g', the sampling window begins earlier in the guard interval 14.
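A sketch of the falling-edge detection follows. The exact windowing for the mean absolute difference D_K is my reading of "near the center of the plateau" and is an assumption, as are the synthetic metric values used in the usage example.

```python
import numpy as np

def detect_guard_start(M, n_l, n_r, tau2=10.0, L=16):
    """Find n_g, the first index past the plateau center where M_n
    drops sharply (M decreases and the jump exceeds tau2 * D_K), then
    back off by n_delta = L // 4 for the tolerant estimate n_g'."""
    n_c = (n_l + n_r) // 2
    K = max(1, (n_r - n_l) // 2)
    # mean absolute successive difference of M_n near the plateau center
    D_K = np.mean(np.abs(np.diff(M[n_c:n_c + K + 1])))
    for n_g in range(n_c + 1, len(M)):
        if M[n_g] < M[n_g - 1] and abs(M[n_g] - M[n_g - 1]) > tau2 * D_K:
            return n_g, n_g - L // 4
    return None, None

# Usage on a synthetic metric: rising edge, flat plateau, sharp drop
M = np.concatenate([np.linspace(0.0, 1.0, 10),   # rising edge
                    np.ones(40),                 # plateau (indices 10..49)
                    np.linspace(0.7, 0.0, 30)])  # falling edge
n_g, n_g_prime = detect_guard_start(M, n_l=10, n_r=49)
```

Here the plateau ends at index 49, so the first qualifying drop is at n_g = 50 and the tolerant estimate is n_g' = 50 - 16//4 = 46.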
IEEE section 802.11(a) specifies that the transmit frequency and sample clock frequency are derived from the same reference oscillator. Therefore, the normalized carrier frequency offset and sampling period offset are approximately equal. Since carrier frequency acquisition is simpler than sampling period acquisition, sampling clock recovery is achieved using the estimate of the carrier frequency offset. The initial carrier frequency offset estimate Δf₀ is obtained during the short training symbols 12 in the preamble 10 of each packet, as previously described above. Each complex output sample of the A/D converter 68 is adjusted using the current carrier frequency offset estimate Δf. If the original sampling period (before acquisition) is equal to T^orig, the first update of the sampling period is T₀ = T^orig(1 - Δf₀/f_nominal), where f_nominal is the nominal carrier frequency. The estimate of the carrier frequency offset during the long training symbols 16 is used to obtain Δf₁ = Δf₀ + ε₁. The corresponding update of the sampling period is T₁ = T₀(1 - ε₁/f_nominal). During the OFDM data symbols that occur after the long training symbols 16, four subcarriers are used for pilot tones. After removing the known binary phase shift keying (BPSK) modulation of the pilot tones, the mean phase of the four pilots is determined to estimate a residual carrier frequency offset ε_n, where n is the index of the OFDM symbol. For each OFDM symbol, the update of the carrier frequency offset and the sampling period is given by Δf_n = Δf_{n-1} + β·ε_n and T_n = T_{n-1}(1 - β·ε_n/f_nominal), where β is a loop parameter. This method is currently being used with a zero-order hold after the IFFT in the transmitter 30 (to model the D/A converter).
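The chained sampling-period updates can be written directly. The nominal carrier value below is an assumed 802.11a-band number, and the offset values are illustrative, not taken from the text:

```python
def update_period(T_prev, eps, f_nominal=5.25e9):
    """One sampling-period correction step: T_n = T_{n-1} * (1 - eps/f_nominal).
    Carrier and sampling clocks share one reference oscillator, so their
    fractional offsets are approximately equal, and each carrier-offset
    correction eps implies the same fractional correction to the period."""
    return T_prev * (1.0 - eps / f_nominal)

T_orig = 50e-9                        # 20 MHz nominal sampling period
T0 = update_period(T_orig, 100e3)     # initial estimate Δf_0 = 100 kHz
T1 = update_period(T0, 5e3)           # long-training refinement ε_1
beta = 1.0 / 1024
T2 = update_period(T1, beta * 2e3)    # per-symbol pilot residual β·ε_n
```

Each positive offset correction shortens the period slightly; the fractional change per step is eps/f_nominal, on the order of 10⁻⁵ or less here.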
Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the present invention can be implemented in a variety of forms. Therefore, while this invention has been described in connection with particular examples thereof, the true scope of the invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification and the following claims. * * * * *
Polyvector Super-Poincaré Algebras Posted by Urs Schreiber Just heard a talk on the work D. V. Alekseevsky, V. Cortés, C. Devchand, A. Van Proeyen, Polyvector Super-Poincaré Algebras, which is about the classification of extensions of the Poincaré Lie algebra of a vector space with scalar product of signature $(p,q)$, $\mathrm{iso}(V) \simeq \mathrm{iso}(p,q)$, to super Lie algebras $\underbrace{\mathrm{iso}(p,q) \oplus W_0}_{\mathrm{even}} \oplus \underbrace{S}_{\mathrm{odd}} \,.$ At least parts of this are ancient knowledge in physics, but I am told that quite a bit of work was required to get it into this coherent, comprehensive and rigorous form. One reason why these super Poincaré algebras are very interesting is the following: as is well known, it turns out that the part of the super extension of the Poincaré algebra called $W_0$ above consists of various copies of exterior powers $\wedge^p V$ (called “polyvector spaces”) of the underlying vector space $V$. Now, just as ordinary Einstein gravity may be conceived as a gauge theory for $\mathrm{iso}(3,1)$, theories of supergravity come from the respective super extensions of that. Just as flat Minkowski space is a special solution to Einstein’s equations, characterized by the fact that it exhibits globally the symmetry of $\mathrm{iso}(3,1)$, supergravity theories have special solutions which globally respect parts of the super Poincaré symmetry. Strikingly, for each power $\wedge^p V$ that appears in the super extension of the Poincaré algebra, these solutions may feature $(p+1)$-dimensional hypersurfaces that behave much like charged particles – only that instead of being 0-dimensional and coupling to a connection, they are $p$-dimensional and couple to a $(p+1)$-connection! A review of these structures – called (solitonic/BPS) $p$-branes – may be found for instance in K.S.
Stelle, BPS Branes in Supergravity. Now, this is especially interesting for us because, on the other hand, as $n$-Café regulars have heard us say before, at least some of these $p$-branes should really correspond to certain $(p+1)$-functors $(p+1)\mathrm{Cob} \to (p+1)\mathrm{Hilb} \,.$ To indicate the categorification step, I like to speak of $(n = p+1)$-particles. There is some tantalizing interaction between supersymmetrization and categorification – many of the details of which still escape me. The most direct hint so far, concerning what is really going on, is Castellani’s observation: Castellani remarks (not in these words, though, but I think these words are part of the clue) that with the super Lie 3-algebra $\mathrm{sugra}(10,1) \in 3\mathrm{sLie}$, which D’Auria and Fré once found to be the structure governing 11-dimensional supergravity (as discussed at length in SuGra 3-Connection Reloaded), comes a certain Lie 1-algebra of derivations of the Lie 3-algebra, and that this is the polyvector super extension of $\mathrm{iso}(10,1)$. So it seems that there is a close relation between a) super Lie $n$-algebras $g_{(n)}$ extending the Poincaré Lie 1-algebra and b) polyvector super Lie 1-algebras extending the Poincaré Lie 1-algebra, and apparently b) is part of the Lie $(n+1)$-algebra $\mathrm{DER}(g_{(n)})$. (I am being careful with saying “part of” etc., since the derivations considered in Derivation Lie 1-Algebras of Lie n-Algebras and What is a Lie Derivative, really? are closely related to, but not exactly, the derivations that Castellani considers.) Membranes and 5-Branes Using the results of the above paper by D. V. Alekseevsky, V. Cortés, C. Devchand, A. Van Proeyen, there is a quick way to see that 11-dimensional supergravity has a 2-brane (a 3-particle) and a 5-brane (a 6-particle), as follows: Proposition 1 on p.
391 says, essentially, that in odd dimensions there is (up to isomorphism, of course – these authors tend not to mention isomorphisms when they are obvious) a unique super extension of the Poincaré algebra which is of maximal size, i.e. where the polyvector part $W_0 = \sum_p \wedge^p V$ is as large as possible: namely, this is the case when $W_0 \simeq S \vee S \,,$ where $S$ is the irreducible spinor module and $S \vee S$ its symmetric second power. And the isomorphism here is precisely the super Lie bracket $[\cdot,\cdot] : S \vee S \stackrel{\simeq}{\to} W_0 \,.$ Then all that remains to be done is to decompose $S \vee S$. For the case where we work over the complex numbers, the result is given in theorem 2 on p. 396: $S \vee S \simeq \sum_{i=0}^{[m/4]} \wedge^{m-4i} V \oplus \sum_{i=0}^{[(m-3)/4]} \wedge^{m-3-4i} V \,,$ where $m$ is half the dimension of $V$: $\mathrm{dim}_\mathbb{C} V = p + q = 2 m +1 \,.$ So for $d = 11$ we find that $S \vee S \simeq V \oplus \wedge^2 V \oplus \wedge^5 V \,.$ This says that there are membranes (= 2-branes) and 5-branes in the game. Recall that, by D’Auria-Fré-Castellani, this is a direct consequence of the fact that in $(p+q) = (10+1)$ dimensions, the ordinary unextended super Poincaré Lie algebra happens to have a 4-cocycle $\bar\psi \wedge \Gamma^{a b} \psi \wedge e_a \wedge e_b \in H^4(\mathrm{siso}(10,1)) \,,$ which in turn implies that there is a super Lie 3-algebra extension of Baez-Crans type. (By the way, I have the impression that by integrating the 3-connection with values in the Lie 3-algebra over an $S^1$-fiber, thereby “contracting away one leg” of this 4-cocycle, we get something related to the $\mathrm{siso}(10,1)$ Chern-Simons 3-form. If anyone knows anything about this, please drop me a note!)
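A quick dimension check of this decomposition, using that the irreducible spinor module has $\mathrm{dim}\, S = 32$ in eleven dimensions:

```python
from math import comb

dim_spinor_sym = 32 * 33 // 2   # dim(S ∨ S), symmetric square of a 32-dim space
# dim(V ⊕ Λ²V ⊕ Λ⁵V) with dim V = 11:
dim_polyvectors = comb(11, 1) + comb(11, 2) + comb(11, 5)
assert dim_spinor_sym == dim_polyvectors == 528   # 528 = 11 + 55 + 462
```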
Repository of useful Formulas One of the generally nice things about this paper is that it coherently and comprehensively lists lots of useful data that helps one not get lost in the jungle of superstudies, especially when comparing the notation and terminology used in math and physics, respectively. For one, table 1 on p. 401 summarizes basic facts about Clifford algebras and their spinor modules and relates them to the physics terminology. Then, the entire appendix B reformulates the entire paper in physicists’ notation, listing all the matrices that appear there (charge conjugation etc.) and relating them to the abstract formulation. Very useful. I tend to always forget this stuff after a while. Posted at June 14, 2007 3:17 PM UTC Re: Polyvector Super-Poincaré Algebras Obviously, the symmetric tensor product of two so(n) spinors can be decomposed into so(n) irreps. That you get 2- and 5-branes follows directly from the dimensions: 32*33/2 = (11 choose 1) + (11 choose 2) + (11 choose 5), i.e. 528 = 11 + 55 + 462. Is there more to it than this? In d dimensions, you probably want a term (d choose 1) because otherwise you don’t have translations, although Bars’ two-time physics seems to get away with a decomposition starting with (d choose 2). Something else which has occurred to me is the following: is there a reason why the polyvector fields have to commute with the spinors? It is certainly possible to cook up nilpotent Lie superalgebras with a grading of depth > 2. This would mean that in addition to the relations {Q[α], Q[β]} = γ[αβ]^μ P[μ] + more, one would have e.g. [Q[α], P[μ]] != 0. IIRC, the analogous thing (with Q bosonic) happens if you consider the e[7] grading with g[0] = so(10) + su(2) + u(1). Posted by: Thomas Larsson on June 15, 2007 3:07 AM Re: Polyvector Super-Poincaré Algebras Yes, as I said, much of this is well known.
I am not expert enough to tell at which points the paper I mentioned genuinely fills gaps that have been left open before. Posted by: urs on June 15, 2007 9:27 AM Re: Polyvector Super-Poincaré Algebras All I am saying is that the symmetrized tensor product of two spinor irreps can be decomposed into irreps. That is pretty obvious. As for my second comment, I just realized that Nicolai’s e[10] stuff, and covariantly West’s e[11], does that for the bosonic subalgebra. You first embed iso(11) into igl(11) because you have gravity, and then embed igl(11) into e[11], whose level decomposition certainly does not stop at level 1. Then you need to fill in the fermionic operators at half-integer levels, but I don’t think there has been much progress on that. Posted by: Thomas Larsson on June 15, 2007 9:36 AM Re: Polyvector Super-Poincaré Algebras You first embed $\mathrm{iso}(11)$ into $\mathrm{igl}(11)$ because you have gravity Yeah, this is a point I was wondering about last time already (very end of Nicolai on E10 and Supergravity): is there a way to reconcile the $e_{10}$ description of supergravity with the others we are talking about here? In all cases we are dealing with gravity, so “because we have gravity” cannot quite be what underlies the transition from $\mathrm{iso}(11)$ to $\mathrm{igl}(11)$, I’d think. Something is going on here, which involves at least three different aspects of one underlying structure: $\array{ && \href{http://golem.ph.utexas.edu/category/2007/06/polyvector_superpoincar_algebr.html}{ \text{polyvector super Lie 1-algebra} } \\ & {}^?\swarrow && \searrow^?
\\ \href{http://golem.ph.utexas.edu/category/2006/08/sugra_3connection_reloaded.html}{\text{super Lie 3-algebra}} &&\stackrel{?}{\leftrightarrow}&& \href{http://golem.ph.utexas.edu/category/2006/11/nicolai_on_e10_and_supergravit.html}{\text{super} \;\;e_{10}} }$ Posted by: urs on June 15, 2007 10:42 AM Re: Polyvector Super-Poincaré Algebras In all cases we are dealing with gravity, so “because we have gravity” cannot quite be what underlies the transition from iso(11) to igl(11), I’d think. In gravity, you treat spinors with vielbeins, so the relevant Lie algebra is the semidirect product vect(d) ⋉ map(d, so(d)) (diffeomorphisms and local Lorentz transformations). gl(d) is the zero-degree subalgebra of vect(d), and there is a 1-1 correspondence between their irreps; to the gl(d) irrep R corresponds a tensor field of type R. Posted by: Thomas Larsson on June 15, 2007 12:17 PM
Which grade of concrete is the strongest: M20, M25, M30, M35 or M40? Question submitted by: Trasha Answers: 1. Manoj Sinha: M40. 2. Rajesh: It depends upon the mix design. 3. Vijayaraja: M40 is the highest-strength concrete of those listed, because it has a characteristic compressive strength of 40 N/mm²; the others are lower. 4. Vinit: It's M40. 5. Gunaseelan.s: M40, because it has the highest characteristic compressive strength attained in 28 days. 6. Lakshmanan.k: Please give the ratios for M25 to M40, because the ratios available in the IS code cover only M10-M20 (in most cases up to M25); mixes above M25 require our own mix design. So my suggestion is M20 or M25. 7. Faraheem: M40, because it has more cement content. It also depends on the mix design: if the cement content in M20 is higher than in M40, then M20 will give higher strength. 8. Jainendra: M40, because its cement:sand:aggregate ratio is richer than the others'. The ratio is the core factor for any mix; properties such as compressive strength and characteristic strength largely depend on it. 9. Ahalnath: M40. 10. Nagesh Raj: M-40 is the strongest. It depends upon the design mix; M-40 means its strength is 40 N/mm².
Why believe the Hodge Conjecture? Tom Graber recently asked me why people believe the Hodge conjecture, given the sparse evidence for its truth. I didn’t have time then to answer fully (I was giving a talk), but it’s a question that deserves a full answer. So I’ve sketched below what I feel are the reasons for believing the Hodge conjecture. The Hodge conjecture is perhaps the most famous problem in algebraic geometry. But progress on the Hodge conjecture is slow, and a lot of algebraic geometry goes in different directions from the Hodge conjecture. Why should we believe the Hodge conjecture? How important will it be to solve the problem? The Hodge conjecture is about the relation between topology and algebraic geometry. The cohomology with complex coefficients of a smooth complex projective variety splits as a direct sum of linear subspaces, the Hodge decomposition H^i(X,C) = Σ_{j=0}^{i} H^{j,i-j}(X). The cohomology class of a complex subvariety of codimension p lies in the middle piece H^{p,p}(X) of H^{2p}(X,C). The Hodge conjecture asserts that any element of H^{2p}(X,Q) which lies in the middle piece of the Hodge decomposition is the class of an algebraic cycle, meaning a Q‑linear combination of complex subvarieties. The main evidence for the Hodge conjecture is the Lefschetz (1,1)-theorem, which implies the Hodge conjecture for codimension-1 cycles. Together with the hard Lefschetz theorem, this also implies the Hodge conjecture for cycles of dimension 1. These results are part of algebraic geometers’ good understanding of line bundles and codimension-one subvarieties. Not much is known about the Hodge conjecture in other cases, starting with 2‑cycles on 4‑folds. For example, it holds for uniruled 4‑folds (Conte-Murre, 1978). That includes 4‑fold hypersurfaces of degree at most 5, but the Hodge conjecture remains unknown for smooth 4‑fold hypersurfaces of degree at least 6. Why should we believe the conjecture?
One reason to believe the Hodge conjecture is that it suggests a close relation between Hodge theory and algebraic cycles, and this hope has led to a long series of discoveries about algebraic cycles. For example, Griffiths used Hodge theory to show that homological and algebraic equivalence for algebraic cycles can be different. (That is, an algebraic cycle with rational coefficients can represent zero in cohomology without being connected to zero through a continuous family of algebraic cycles.) Mumford used Hodge theory to show that the Chow group of zero-cycles modulo rational equivalence can be infinite-dimensional. There are many more discoveries in the same spirit, many of them summarized in Voisin’s book Hodge Theory and Complex Algebraic Geometry. Another reason for hope about the Hodge conjecture is that it is part of a wide family of conjectures about algebraic cycles. These conjectures add conviction to each other, and some of them have been proved, or checked for satisfying families of examples. The closest analog is the Tate conjecture, which describes the image of algebraic cycles in etale cohomology for a smooth projective variety over a finitely generated field, as the space of cohomology classes fixed by the Galois group. The Tate conjecture is not known even for codimension-1 cycles. But Tate proved the Tate conjecture for codimension-1 cycles on abelian varieties over finite fields. Faltings proved the Tate conjecture for codimension-1 cycles on abelian varieties over number fields by a deep argument, part of his proof of the Mordell conjecture. An important piece of evidence for the Hodge conjecture is Deligne’s theorem that Hodge cycles on abelian varieties are “absolute Hodge”, meaning that they satisfy the arithmetic properties (Galois invariance) that algebraic cycles would satisfy. This means that the Hodge and Tate conjectures for abelian varieties are closely related. 
The Tate conjecture belongs to a broad family of conjectures about algebraic cycles in an arithmetic context. These include the Birch–Swinnerton-Dyer conjecture, on the arithmetic of elliptic curves, and a vast generalization, the Bloch–Kato conjecture on special values of zeta functions. One relation among these conjectures is that the Birch–Swinnerton-Dyer conjecture for elliptic curves over global fields of positive characteristic is equivalent to the Tate conjecture for elliptic surfaces, by Tate. Some of the main advances in number theory over the past 30 years, by Kolyvagin and others, have proved the Birch–Swinnerton-Dyer conjecture for elliptic curves over the rationals of analytic rank at most 1. The Hodge conjecture belongs to several other families of conjectures. There is Bloch’s conjecture that the Hodge theory of an algebraic surface should determine whether the Chow group of zero-cycles is finite-dimensional. There is the Beilinson–Lichtenbaum conjecture, recently proved by Voevodsky and Rost, which asserts that certain motivic cohomology groups with finite coefficients map isomorphically to etale cohomology. These conjectures mutually support each other. Mathematicians continually make progress on one or another of them. Trying to prove them has led to a vast amount of progress in number theory, algebra, and algebraic geometry. For me, this is the best reason to believe the Hodge conjecture. 13 responses to “Why believe the Hodge Conjecture?” 1. Thanks a lot for the post. I have tried several times to approach those beliefs and have always found it really vexing. Between the overloading of proper names (Tate, Hodge…) and the mixing of number theory and topology/geometry, it’s really tough to make oneself a clear picture of what is known, what is not, and what is believed (and why). So your post is very highly welcome, and leaves me yearning for more. Or actually maybe not so much.
It seems that it may be an illusion: if I had a short exhaustive roadmap to motivic conjectures, I think I would only be fooling myself that I understand things before actually looking at all the details. Still, a couple of questions: When you write “for elliptic surfaces, by Tate”, you mean Tate proved his conjecture for elliptic surfaces? Is it known for all surfaces now? Threefolds? Abelian varieties? I guess there is a good reference for this; what is it? It seems to me that abelian varieties are a crucial component of our understanding of the Hodge and Tate conjectures, but I cannot see further into the details of this. Do the Hodge and Tate conjectures (together perhaps) for abelian varieties (at least conjecturally) imply the general case? I would like to be able to comment more on this. I will think. Thanks again. □ Tate and Milne proved the equivalence of two problems, the Tate conjecture for elliptic surfaces over finite fields and the Birch-Swinnerton-Dyer conjecture for elliptic curves over global fields of positive characteristic. Both problems remain open. See for example Ulmer’s notes on elliptic curves over function fields. In particular, the Tate conjecture is not known for surfaces over finite fields. As I mentioned, Tate proved the Tate conjecture for codimension-1 cycles on abelian varieties over finite fields, but it is not known for cycles of arbitrary dimension on abelian varieties over finite fields. See for example Milne’s notes on the Tate conjecture. There is no reason to think that the Hodge or Tate conjecture for abelian varieties should imply the general case. Rather, these problems should be much easier for abelian varieties than for varieties in general. Among the good books in this area are Voisin’s Hodge Theory and Complex Algebraic Geometry and Andre’s Une introduction aux motifs. ☆ Regarding your remark on the Tate conjecture for abelian varieties not implying it for all varieties.
Here is a comment of Milne: Actually, it seems the Tate conjecture for abelian varieties does imply it in general. Kahn remarks in his Handbook of K-theory survey that this, together with Beilinson’s conjecture (that rational and numerical equivalence coincide over a finite field, taking cycles with Q-coefficients), implies that Voevodsky’s triangulated category of mixed motives over a finite field k (in the Nisnevich topology) with rational coefficients (and perhaps also over an arbitrary scheme of finite type over a field of positive characteristic) is the derived category of pure motives over k, with morphisms tensored with Q. So the canonical t-structure on D(PureMotives_Q) gives a motivic t-structure, and mixed motives with Q-coefficients over k are actually pure. He also remarks that taking finer topologies than the Nisnevich topology should give a t-structure directly for DM, but I am not sure he thinks this would also be equivalent to D(PureMotives) without tensoring with Q. Probably not, in fact: looking into Milne’s article, remark 2.50, it seems this is related to the Bloch-Kato or Quillen-Lichtenbaum conjectures; I’d have to think about it more. I think Milne proved all that in his “Motives over finite fields” article in the first volume of Seattle’s Motives conference. I looked at the article a little and it looks awesome, but I have not understood much so far, so if anybody has related references, that could help. I will try later, looking at Milne’s website too. ☆ No, the Tate conjecture for abelian varieties over a finite field F_q is not known to imply it for arbitrary smooth projective varieties over F_q. The point is that, by the Weil conjectures, the eigenvalues of Frobenius on the cohomology of any variety over F_q are equal to eigenvalues of Frobenius on some abelian variety over F_q. That suggests that every motive over F_q should be a summand of the motive of an abelian variety.
But in order to prove that, you would need the Tate conjecture on arbitrary varieties, not just on abelian varieties. (You need the Tate conjecture to go from an isomorphism between the cohomology of two varieties as Galois modules to an algebraic correspondence between them.) ☆ Thanks a lot, and I am very sorry for my mistakes. Your reply is really great and I think (hope) it is definitely worth it that you spend some time correcting such mistakes. ☆ Just in case others read this, to be clear: Milne in his motives article does assume the Tate conjecture for all smooth projective varieties over the base finite field. So does Kahn in his articles. 2. It’s a great post. Can you say more on this? Like why believe motives? □ Well, some versions of the category of motives can be defined now (Grothendieck motives, Chow motives, and so on). Many mathematicians find motives useful. (I have to admit that I haven’t used the language of motives much myself.) They provide a language for describing the relation between different algebraic varieties. I think the question about “believing motives” refers to whether a given category of motives has all the properties that we expect. For example, the category of Grothendieck motives would have especially good properties if Grothendieck’s standard conjectures hold. These conjectures would follow from the Hodge conjecture, so I certainly believe that they are true. But you can use the language of motives to analyze particular varieties without knowing these conjectures in general. One reference on these ideas is Yves Andre’s book Une introduction aux motifs (SMF, 2004). I also like the 2-volume conference proceedings Motives (AMS, 1994), which includes a lot of expository papers. 3. Can you say more specifically what compelling evidence there is for the Hodge conjecture? I am afraid that some mathematicians might not see a web of related conjectures as a reason to “believe” the Hodge conjecture. □ Fair enough, Jason.
Part of what I’m talking about is the psychology of mathematical research. It’s useful to believe the web of conjectures about algebraic cycles, as a way to encourage yourself to prove parts of these conjectures. If you take the opposite point of view, that “nobody knows anything about algebraic cycles”, then you may be discouraged from working in the area at all. And that would be a shame. There have been enough spectacular developments in the area that I think there is plenty of hope for further advances on the big conjectures. (Some of my favorites are Tate’s proof of the Tate conjecture for divisors on abelian varieties over finite fields, Faltings’s proof of the same statement over number fields, and the recent advances on the Tate conjecture for K3 surfaces over finite fields by Maulik and Madapusi Pera which I expect to yield a complete proof soon.) 4. Pingback: Fifteenth Linkfest 5. Here is one possible take on “why believe in the Hodge conjecture” and “why believe in motives”. Take the Tannakian viewpoint. The category of Grothendieck (pure) motives over a field k is conjectured to be Tannakian, yielding a motivic Galois group(oid). Specifically, take k = C. The standard conjecture necessary to have this is that homological and numerical equivalences agree on all smooth projective C-varieties (I’ll take homological equivalence with respect to Betti cohomology). This is an open question, but it is known for motives which are twisted direct summands of motives of abelian varieties, because Lieberman proved this standard conjecture for abelian varieties over C. So, if we restrict ourselves to this rigid subcategory, we do have a well-defined motivic Galois group Mot(ab), which is a proreductive algebraic group over Q. Now, if you take the Tannakian category of pure, polarisable Q-Hodge structures, you get another proreductive algebraic group MT; when you restrict to the Tannakian subcategory generated by pol. 
Hodge structures of type (1,0) + (0,1), you get a quotient MT(ab), and the functor which to a variety associates its Hodge cohomology gives you a morphism (*) MT(ab)\to Mot(ab). The Hodge conjecture, restricted to complex abelian varieties, is equivalent to saying that this morphism is an isomorphism. (In fact, the Hodge conjecture would imply that MT\to Mot would be epi or faithfully flat, granting the existence of the "big" motivic Galois group Mot, hence that (*) is also epi; but on the other hand, any pol. Hodge structure of type (1,0) + (0,1) comes from an abelian variety, and this provides the "isomorphism".) Now, one can examine (*) "case by case" and look at morphisms (*A) MT(A)\to Mot(A) for abelian varieties A, where MT(A) is the Mumford-Tate group of A and Mot(A) is the quotient of Mot(ab) corresponding to the Tannakian subcategory of motives generated by the motive of A. This is now a homomorphism of (finitely generated) reductive groups, which is mono because both groups sit in GL(H^1(A))\times G_m. Of course, (*) is equivalent to (*A) for all A. In many cases, one succeeds in proving the full Hodge conjecture for A and all its powers by just looking at the structure of MT(A), and eliminating cases. For example, products of elliptic curves, and a number of isolated examples of simple abelian varieties A. On the other hand, it is very hard to put these examples together, and there is no general argument to say that if (*A) and (*B) are true, then (*A\times B) is true. (The converse is OK.) So, imagine that the Hodge conjecture were true for certain A's, and false for certain others. This would create such a messy picture that I find it compelling enough empirically!

6. I agree. Some people prefer to look for counterexamples to the Hodge conjecture, which is fine. For me, the study of algebraic cycles gains fascination from the possibility of a grand unifying story.
There are plenty of other problems, in algebraic geometry or other subjects, where things just seem complicated, with no big pattern. For me, believing in the Hodge conjecture is like believing in progress.
Calculus Reform These are excerpts from the program for the Joint Mathematics Meetings, January 10-13, 1996, Orlando, Florida. From the MAA Session on Planning Reformed Calculus Programs - Experiences and Advice: □ Implementation of a new curriculum: How the paradigm shift affects all aspects of instruction. Barbara E. Reynolds, Cardinal Stritch College and Brown University □ A taxonomy for creating writing assignments in mathematics. Thomas G. Travison, Skidmore College □ Implementing C4L. William E. Fenton, Bellarmine College □ Establishing a departmental consensus for reform. James J. Reynolds, Clarion University □ Close contact calculus at the University of Maryland, College Park. Denny Gulick, University of Maryland, College Park □ Some errors to avoid in reforming a calculus program. John S. Meyer, Muhlenberg College □ Restructuring calculus at Sam Houston State University. David Karl Ruch, Sam Houston State University □ Students' retention in the light of calculus reform. Vesna Kilibarda, University of Alaska, Juneau □ Departmental change: Working with other disciplines. J. Curtis Chipman, Oakland University □ Maxima and minima in designing the AUGMENT curriculum. Lawrence E. Copes, Augsburg College □ Implementing a Mathematica-based calculus curriculum. Francisco Alarcon, Indiana University of Pennsylvania Charles H. Bertness, Indiana University of Pennsylvania Rebecca A. Stoudt, Indiana University of Pennsylvania □ Gaining college wide acceptance for reformed calculus. Robert P. Webber, Longwood College □ Seven years of Project CALC at Duke University---approaching a steady state? Jack Bookman, Duke University □ Calculus reform at the University of Northern Colorado. Dean E. Allison, University of Northern Colorado □ Process of calculus reform at UNCA: Students, faculty, administration, grants. Sherry L. Gale, University of North Carolina, Asheville □ A tale of two calculus reforms. Daniel J. 
Hrozencik, Westminster College □ Assessing calculus reform at a two-year college. Maria R. Brunett, Montgomery College □ A comparison of calculus teaching methodologies: The need for evaluation in calculus reform. Susan L. Ganter, Worcester Polytechnic Institute □ Evaluating calculus reform: Complex challenges of context and methodology. Joan Ferrini-Mundy, University of New Hampshire Darien Lauten, University of New Hampshire □ Two departments: A fictional tale of change and illusion. Martin E. Flashman, Humboldt State University Susan Tappero, Cabrillo College □ Where there is a strong will, there is a way. Claudia L. Pinter-Lucke, California State Polytech University, □ Planning science reform at a small college. Betty Mayfield, Hood College □ Attracting teachers to use computer algebra systems in the classroom. Lisa Townsley Kulich, Illinois Benedictine College □ Planning and implementation of calculus reform curricula: What worked and what we're still working on. Janet L. Beery, University of Redlands Richard N. Cornez, University of Redlands Alexander E. Koonce, University of Redlands Allen R. Killpatrick, University of Redlands Mary E. Scherer, University of Redlands □ Some formative and summative evaluations of a reform calculus program. Morton Brown, University of Michigan, Ann Arbor □ Cooperative teaching: Lessons from calculus reform at Occidental College. Donald Y. Goldberg, Occidental College Alan P. Knoerr, Occidental College AMS Special Session on Mathematics and Education Reform □ Current status and remaining agenda of the calculus reform Alan C. Tucker, State University of New York, Stony Brook □ Calculus instruction: Opportunities for bridge-building. Deborah Hughes Hallett, Harvard University □ Calculus reform and the advanced placement program. Raymond J. Cannon, Jr., Baylor University □ "The Irrelevance of Calculus Reform", and the aftermath. George E. 
Andrews, Pennsylvania State University, University Park □ A history of one calculus reform project. William J. Davis, Ohio State University, Columbus □ The workshop approach: Abandoning lectures. Nancy Baxter Hastings, Dickinson College □ Next steps. Ronald G. Douglas, State University of New York, Stony Brook □ Calculus reform, anthropology-zoology. Franklin A. Wattenberg, Weber State University □ A mathematics technology classroom: Evolution of a calculus reform implementation. Anita J. Salem, Rockhurst College □ Writing to learn calculus: Why, what, how. David A. Smith, Duke University Panel Discussions A modern course in calculus. MAA Panel Discussion A. Wayne Roberts, Macalester College Martin Flashman, Humboldt State University Sheldon Gordon, Suffolk Community College Margret Hoft, University of Michigan, Dearborn Sharon Ross, DeKalb University Future perspectives on calculus. MAA Panel Discussion Donald B. Small, U. S. Military Academy Does calculus reform really work? AMS Committee on Education Panel Discussion John H. Ewing, American Mathematical Society George E. Andrews, Pennsylvania State University Morton Brown, University of Michigan John C. Polking, Rice University Poster Sessions Innovations in freshman and sophomore mathematics instruction. MAA CUPM Subcommittee on Calculus Reform and the First Two Years Poster Session MAA Minicourse #9 Calculus for the 21st century. Lawrence C. Moore, Duke University David A. Smith, Duke University MAA Minicourse #16 Contemporary calculus through applications using the TI-82.
Tag Archive for 'A Glossary'

Average tax rate
Taxes as a fraction of income; total taxes divided by total taxable income.

Average rate of return (ARR)
The ratio of the average cash inflow to the amount invested.

Average maturity
The average time to maturity of securities held by a mutual fund. Changes in interest rates have a greater impact on funds with a longer average life.

Average life
Also referred to as the weighted-average life (WAL). The average number of years that each dollar of unpaid principal due on the mortgage remains outstanding. Average life is computed as the weighted-average time to the receipt of all future cash flows, using as the weights the dollar amounts of the principal paydowns.

Average cost of capital
A firm's required payout to the bondholders and to the stockholders, expressed as a percentage of capital contributed to the firm. Average cost of capital is computed by dividing the total required cost of capital by the total amount of contributed capital.

Average collection period, or days' receivables
The ratio of accounts receivable to sales, or the total amount of credit extended per dollar of daily sales (average AR divided by sales, times 365).

Average age of accounts receivable
The weighted-average age of all of the firm's outstanding invoices.

Average accounting return
The average project earnings after taxes and depreciation divided by the average book value of the investment during its life.

Average (across-day) measures
An estimation of price that uses the average or representative price of a large number of trades.

Average
An arithmetic mean of selected stocks intended to represent the behavior of the market or some component of it. One good example is the widely quoted Dow Jones Industrial Average, which adds the current prices of the 30 DJIA stocks and divides the result by a predetermined number, the divisor.
Department of Mathematics and Statistics Paul Duvall Clifford Smyth His current projects include the generalization of the arithmetic-geometric mean inequality to means based on other symmetric functions, the generalization of Reimer's inequality in combinatorial probability, and a study of the effects of vector-host affinity on disease. This last project is part of UNCG's Math-Biology Program. Past Graduate Students • Davorin Stajsic worked on the project "Combinatorial Game Theory" under the direction of Dr. Clifford Smyth.
5. Generalizing an example-tracing tutor with formulas In this section, we introduce the concept and use of "formulas", powerful expressions that define the way CTAT example-tracing tutors match student input. A formula is an expression you write for a step in an example-tracing tutor that enables the tutor to calculate a value (or set of values) it should use to test against the student's input. Formula matching is an extension to existing matching methods. An example-tracing tutor works by comparing student-entered input to the input recorded in the steps of a behavior graph, which you create by demonstration. With the Edit Student Input Matching window, you can change this "exact" match to be more flexible: you can specify a range match (e.g., "1-10", which matches "6"), an "any" match (i.e., any input value will match), a wildcard match (e.g., "car*", which matches "cars" and "car"), or a regular expression match (e.g., "[THK]im", which matches "Tim", "Him", and "Kim"). With a formula match, the comparison is between the student-entered input and the evaluation of a formula, such as "link1.authorInput+5" (this equates to the value the author entered on Link 1 of the graph, plus 5). A formula matcher is the most powerful type of matcher in CTAT because it can combine the values of inputs on other links, student-entered input, and the contents of interface widgets (such as text fields). Whenever you would like to evaluate correctness based on these characteristics, consider writing a formula. Formulas are particularly useful in a few cases. When you need to specify a test (against a student action) that can only be determined at the time the student uses the tutor, a formula allows you to do so. 
For example, in the domain of fraction addition, you may have a set of common denominators that are valid for the current problem; but once the student uses one of those denominators in a converted fraction, he or she should use that same denominator in the other converted fractions and the unsimplified sum fraction. With a formula, you can check that the student chose a common denominator; then you can require the student to use that denominator in the other fractions. Another benefit of formulas is that the tutor can be more general, with logic that works across similar problems (much like a cognitive tutor). Using the fraction addition example again, you might have a path in the behavior graph that represents finding and using a common denominator that is found by multiplying the two denominators of the given fractions. The formula for the links in that path of the graph would specify the product of multiplying the two denominators of the given fractions, which can be looked up by using variables that hold the value of widgets in the interface. Since the formula specifies a general logic, not hard-coded numbers, this graph is then on the way to becoming general enough to use with mass production.
Tightness and the Two-Piece Property Tightness and the Two-Piece Property: An immersion f of a surface M into space has the two-piece property if the pre-image (by f ) of every open half-space is connected in M. In other words, a surface has the two-piece property if every plane cuts it into at most two parts. At first glance, this may seem unrelated to tightness, which was originally defined in terms of the total absolute curvature integral, but it turns out that the two ideas are equivalent. To see this, we use the fact that an immersion is tight if and only if every Morse height function has exactly one local maximum. First, suppose that an immersion has the two-piece property, and consider a direction for which the associated height function is Morse. Suppose there are two local maxima for this function. Then a plane just below the smaller of the two maxima cuts a small neighborhood of the maximum away from the rest of the surface. In addition, it also cuts off a larger, but disjoint, patch that contains the other maximum. The remainder of the surface below the plane makes a third piece, contradicting the fact that the surface has the two-piece property. Thus there can be only one maximum for any Morse height function, and the surface is tight. Conversely, suppose a tight immersion is cut by a plane, and consider the height function in the direction of the normal to this plane. (We can assume this is a Morse height function, for if not, since almost all height functions are Morse, a slight tilting of the plane will yield such a function without changing the number of components into which the plane cuts the surface.) Now each component of the surface that lies above the plane must contain a local maximum for the height function; but since the immersion is tight, every Morse height function has a single local maximum, so there is only one component above the plane. 
Similarly, by considering the height function in the opposite direction, there can be only a single component below the plane, so the plane divides the surface into exactly two pieces. This holds for all the planes that divide the surface, so the immersion has the two-piece property. Thus we have the following theorem, originally due to Thomas Banchoff [B1].

Theorem: An immersion of a closed, compact, connected surface is tight if, and only if, it has the two-piece property.

This theorem gives a concrete geometric interpretation of tightness, and provides an easily visualized criterion for determining whether or not a surface is tightly immersed. Furthermore, this property makes sense for both smooth and polyhedral surfaces, with or without boundary, whereas the definition in terms of the total absolute curvature integral relied on smoothness and the fact that the surface is closed.

Related pages:
• Tightness and homology: the modern definition
• Tightness and polar height functions
• Tightness and its consequences
• Kuiper's initial question

8/8/94 dpvc@geom.umn.edu -- The Geometry Center
Rate Question, please help!

axl169 (07 Oct 2006):
Hi, got stumped on this one. Any help would be really appreciated.

Trains X and Y start simultaneously on opposite ends of a 100-mile route and travel toward each other on parallel tracks. Train X, traveling at a constant rate, completed the 100-mile trip in 5 hours. Train Y, also traveling at a constant rate, completed the 100-mile trip in 3 hours. How many miles had train X traveled when it met train Y?

A. 37.5
B. 40.0
C. 60.0
D. 62.5
E. 77.5

$uckafr33:
Rate x Time = Distance.
Train X: 5 x t = d
Train Y: 3 x t = 100 - d
The only variable they have in common is that both trains have been travelling for the same amount of time, so solve each equation for t in terms of d:
Train X: t = d/5
Train Y: t = (100 - d)/3
Set the two expressions equal: d/5 = (100 - d)/3. Cross-multiply and you get 3d = 500 - 5d. Solve for d and you get 62.5. Thus the answer is 62.5.

axl169:
The correct answer was A, 37.5. But I don't understand how that is.

Reply (Manager, joined 01 Oct 2006):
Yes, the answer will be 37.5.
Speed of train X = 100/5 mph; speed of train Y = 100/3 mph.
Let's say that when the two trains met, X had travelled m miles, so Y had travelled 100 - m miles. Since the trains started at the same time, they would have taken the same time to cover m and 100 - m miles respectively:
m/(100/5) = (100 - m)/(100/3)
5m = 300 - 3m
8m = 300
m = 37.5 miles

SimaQ:
Alternatively you can solve it this way.
Train X: rate = 100/5 = 20 miles/hour.
Train Y: rate = 100/3 = 33 1/3 miles/hour.
Together they cover the whole route, so from r*t = d:
20t + (33 1/3)t = 100
t = 300/160 = 1.875 hours
The question asks how many miles train X traveled, so go back to r*t = d and plug in the values for train X: r = 20 miles/hour and t = 1.875, giving d = 37.5.

cicerone:
Clearly the ratio of the speeds of X and Y is 3:5, so the distances covered by them when they meet are also in the ratio 3:5. So the distance covered by X is 3/8 x 100 = 37.5. Hence A. Keep it simple: get the concepts of ratios and save time in the exam.

$uckafr33:
Where do you get 3/8 from? I like your ratio method of doing this.

cicerone:
Since the distance is shared in the ratio 3:5, X should cover 3/8 of it, and in the same time Y will cover 5/8.

SimaQ:
Yes, it works in this example, but it is better to have a clear concept of solving such problems. This ratio approach won't work for other "rate" problems.

cicerone:
Hey SimaQ, please do not say that. For any time-and-distance question, if one of speed, distance and time is constant, I bet I can do it using ratios only. Why don't you try to test me?

londonluddite:
cicerone's method is neater, but this is how I did it. The trick to remember with this type of question, where two things travel towards each other, is that together they travel the total distance (whether it's lifts, trains, cars, water going down a pipe, frogs, etc.). With t = time:
(100/5)t + (100/3)t = 100
(300/15)t + (500/15)t = 100
t = 1500/800 = 15/8
(15/8) x (100/5) = 75/2 = 37.5

cheti:
Speed of X is 100/5 = 20.0 mph; speed of Y is 100/3 = 33.3 mph. When X and Y meet, the total distance travelled is 100 miles (since both start from opposite ends). Adding the speeds gives the combined speed, 20 + 33.3 = 53.3 miles/hour. Applying the distance-time formula again, the time taken is 100/53.3, which is approximately 2 hours; so X and Y met after about 2 hours of their journey. X travelled less than 40 miles in approximately 2 hours. Hence A.

Reply (joined 23 Jun):
Let the time after which both trains meet be t. At time t, the total distance traveled is 100 miles.
Distance traveled by X = (100/5)t; distance traveled by Y = (100/3)t.
Putting these together: (100/5)t + (100/3)t = 100. Simplifying, we get t = 15/8. In 15/8 hours, train X has traveled 20 x (15/8) = 37.5.
Brisbane Precalculus Tutor Find a Brisbane Precalculus Tutor ...I also teach people how to excel at standardized tests. Unfortunately for many people with test anxiety, test scores are very important in college and other school admissions and can therefore have a huge impact on your life. If you approach test-taking in a way that makes it fun, it takes a lot of the anxiety out of the process, and your scores will improve. 48 Subjects: including precalculus, reading, English, French ...The topic may seem hard but in many ways it is just using your common sense in a new and different manner. I have a background as an actuary and MBA training. Therefore, I have experience with many of the areas of discrete math typically encountered in introductory college coursework: set theory, combinatorics, probability theory, matrices and operations research. 11 Subjects: including precalculus, calculus, geometry, algebra 1 ...I could explain the main points precisely and concisely (Algebra is my PhD area). I also provide extra practice and drill problems. My students show clear improvement in understanding and technical Algebraic skills. I have 5+ recent Hon. 15 Subjects: including precalculus, calculus, GRE, algebra 1 ...That said, I’ve been through the education system, and have seen its flaws, and places where it could work better. I personally am able to grasp concepts much easier when I know why I am being taught something, and how it would be useful to me. Having a comfortable atmosphere while helping kids... 6 Subjects: including precalculus, physics, calculus, algebra 1 ...I will teach all levels from middle school mathematics up to calculus. I have extensive experience teaching international baccalaureate mathematics. I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of tutoring most likely to bring them success. 11 Subjects: including precalculus, chemistry, physics, calculus
Math 215 Lab Project 3
Gradients and Rates of Change

• To visualize the relationship between the graph and the level sets of a function f(x,y) using the pattern of its gradient.
• To explore the sensitivity of certain functions to change in their variables.

Packages in this lab
To be able to use the Maple commands described in this section, you need to load the plots package.

> with(plots):
Warning, the name changecoords has been redefined

Gradient and critical points

Gradient vector of a function
The Maple command gradplot calculates and plots gradient vectors at many different points. The resulting picture is called the picture of the gradient vector field. The picture of the gradient vector field contains a lot of information about the function.

Example 1 — Local Minimum:

> gp:=gradplot(f,x=-2..2,y=-2..2):

The picture above is the basic picture of the gradient. We can make its main features more visible by decreasing the number of sample points with the grid option (the default is grid=[20,20]), changing the arrow style to THICK (the default is THIN; the remaining two are LINE and SLIM, and you should try them as well), and by adding color. This is what we get:

> gp:=gradplot(f,x=-2..2,y=-2..2,arrows=THICK,grid=[15,15],color=blue):

You can see that the gradient vectors point away from the origin. This means that our function has a minimum at the origin (or somewhere near it; you can't obtain exact information from the graph, you need to use calculus for that). Also, note that the length of the gradient vectors increases as you move farther from the origin; this means that the rate of increase of the function is getting bigger. Let us now combine this picture with the picture of level curves. We also arrange for the scales on both axes to be the same.
> cp:=contourplot(f,x=-2..2,y=-2..2,contours=20):

Note that the large magnitude of the gradient vectors corresponds to the small distances between the level curves. Both of these features correspond to the higher rate of change of the function away from the local minimum. For a local maximum, the picture is the same, but the arrows point in the opposite direction (towards the maximum). Let us now look at this picture away from the local minimum.

> display({

Note that without the scaling=CONSTRAINED option, the angles become distorted and the level curves may not look perpendicular to the gradient vectors:

> display({

Let us now combine these two-dimensional plots with the three-dimensional graph of our function. The Maple code in this example is a bit complicated, but you do not need to use it in your lab project.

> surface:=plot3d(f,x=-2..2,y=-2..2,style=PATCHCONTOUR,contours=20):

Example 2 — Local Minimum and Saddle:

> f:=x^3-10*x+2*y^2;
> gp:=gradplot(f,x=-2.5..2.5,y=-3..3,arrows=SLIM,color=blue):

From the picture of the gradient vector field we can see that there are two critical points of f.

> cp:=contourplot(f,x=-2.5..2.5,y=-3..3,contours=20):

We now "zoom in" and study the plot near the critical points.

> display({
> display({

We can very clearly see that the critical points are indeed a local minimum and a saddle, but we can also see only approximately where they lie. We can ask Maple to find the exact location as follows.

> solve({diff(f,x)=0,diff(f,y)=0});

Since the solutions are not rational numbers, Maple displays the result in a somewhat funny form. You can ask Maple to solve this equation for you in radicals by changing the global _EnvExplicit option. This change will be in effect until you quit Maple.

> _EnvExplicit:=true:

We conclude the study of this example by combining the gradient and level-curve plots with the three-dimensional graph of our function. The Maple code in this example is a bit complicated, but you do not need to use it in your lab project.
> surface:=plot3d(f,x=-2.5..2.5,y=-3..3,style=PATCHCONTOUR,contours=20):

Example 3 — Monkey Saddle:

> f:=x*(x^2-y^2);
> gp:=gradplot(f,x=-1..1,y=-1..1,color=blue,grid=[30,30]):

From the picture of the gradient vector field it looks like the origin is a critical point (verify that!), and that it is not a local minimum or maximum (some gradient vectors point towards the origin, and some point away from it). Let us add the level curves. To make the plot look better near the origin we increased the number of sample points. Try decreasing it by a factor of 10 if the next command takes too long to execute.

> cp:=contourplot(f,x=-1..1,y=-1..1,contours=20,numpoints=10000):

Now the picture looks similar to the picture of the saddle, but it is divided into six sectors instead of four. For this reason it is called a monkey saddle: there is room for a tail! Here is a better view (use the mouse to turn the picture around). The Maple code in this example is a bit complicated, but you do not need to use it in your lab project.

> surface:=plot3d(f,x=-2..2,y=-2..2,style=PATCHCONTOUR,contours=20):

Example 4 — Parabolic Cylinder:

> f:=(2*y-3*x)^2;
> gp:=gradplot(f,x=-2..2,y=-2..2,color=blue):

In this example there is a whole line of critical points, given by the equation 2*y-3*x=0.

> cp:=contourplot(f,x=-2..2,y=-2..2,contours=20):
> display({gp,cp},scaling=CONSTRAINED);

The next plot shows why the graph of f is called a parabolic cylinder. The Maple code below is a bit complicated, but you do not need to use it in your lab project.

> surface:=plot3d(f,x=-2..2,y=-2..2,style=PATCHCONTOUR,contours=20,view=-5..20):
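The claims in Examples 3 and 4 can be verified in a few lines. For the monkey saddle, grad f = (3x² − y², −2xy) vanishes only at the origin, and on the x-axis the vectors point in the +x direction on both sides, so the origin is neither a minimum nor a maximum. For the parabolic cylinder, grad f = (−6(2y − 3x), 4(2y − 3x)) vanishes along the whole line 2y − 3x = 0. A quick Python check (an illustration, not part of the lab's Maple code):

```python
def grad_monkey(x, y):
    # f(x, y) = x*(x**2 - y**2) = x**3 - x*y**2
    return (3 * x**2 - y**2, -2 * x * y)

def grad_cylinder(x, y):
    # f(x, y) = (2*y - 3*x)**2; the chain rule produces the factor 2*y - 3*x
    t = 2 * y - 3 * x
    return (-6 * t, 4 * t)

# Monkey saddle: the origin is critical ...
assert grad_monkey(0.0, 0.0) == (0.0, 0.0)
# ... but on the x-axis the gradient (3x**2, 0) points in the +x direction
# for both x < 0 (toward the origin) and x > 0 (away from it), so the
# origin is neither a local minimum nor a local maximum.
assert grad_monkey(-0.5, 0.0)[0] > 0 and grad_monkey(0.5, 0.0)[0] > 0

# Parabolic cylinder: every point on the line 2y - 3x = 0 is critical.
for x in (-2.0, 0.0, 1.0, 2.0):
    assert grad_cylinder(x, 1.5 * x) == (0.0, 0.0)
print("all checks passed")
```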
{"url":"http://www.math.lsa.umich.edu/courses/215/12maple/10tutorial/03lab3.html","timestamp":"2014-04-20T01:19:34Z","content_type":null,"content_length":"26934","record_id":"<urn:uuid:a5c35981-41da-4262-b078-b8ed28739b6e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Sorting Random Numbers

kid kash:
Hi, 1st time poster. I'm currently working on a program that generates random numbers 1-49. I want to sort them from lowest to highest; how would I come about doing that? Here's what I've got so far:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>        /* for the seedrnd() function */

#define RANGE 49         /* number of numbers */
#define BALLS 7          /* number of balls to draw */
#define DELAY 1000000    /* delay interval between picks */

int rnd(int range);
void seedrnd(void);
int loop;                /* int, not char: scanf("%d") needs an int */

int main(void)
{
    do {
        int numbers[RANGE];      /* array that holds the balls */
        int i, b;
        unsigned long d;         /* delay variable */

        seedrnd();               /* seed the randomizer */
        for (i = 0; i < RANGE; i++)   /* initialize the array */
            numbers[i] = 0;
        printf("Press Enter to pick this week's numbers:");
        getchar();

        for (i = 0; i < BALLS; i++)   /* draw the numbers */
        {
            for (d = 0; d <= DELAY; d++)   /* pause here */
                ;
            /* pick a random number and check whether it's already been picked */
            do {
                b = rnd(RANGE);          /* draw number */
            } while (numbers[b]);        /* already drawn? */
            numbers[b] = 1;              /* mark it as drawn */
            printf(" %i ", b + 1);       /* add one for zero */
        }
        printf("\n Press 1 to return");
        scanf("%d", &loop);
    } while (loop == 1);
    return 0;
}

/* Generate a random value */
int rnd(int range)
{
    int r;
    r = rand() % range;      /* spit up random number */
    return r;
}

/* Seed the randomizer */
void seedrnd(void)
{
    srand((unsigned)time(NULL));
}

Thanks in advance!

Ok... First of all, welcome; anything I can do to help you on these boards, please let me know. Second of all: thank you very much for reading the stickies and announcements, heh. I really appreciate it. Now, just an FYI, I edited your code slightly; I just made it more readable. What it looks like you need now is to sort it. There are several different sorting algorithms you can use. I would use Google to take a look at some of them, or your textbook. The easiest one to use is called bubble sort or insertion sort. I hope this helps you start. Good luck, and feel free to ask if you have any more problems.
Lead Moderator:
Since you are only sorting a small amount of numbers, I'd suggest looking for information on bubble sort. It's the easiest of all the sorts out there to implement (although also probably the slowest, but that won't matter here). It basically goes like this:

for (int x = 0; x < arraysize; x++)
    for (int y = x + 1; y < arraysize; y++)
        if (array[x] > array[y])
            /* then swap the two here */

Here's an insertion sort:

#include <list>

template<class T>
void insertionSort(std::list<T>& listRef)
{
    typename std::list<T>::iterator iter2, iter3;
    for (iter2 = listRef.begin(); iter2 != listRef.end(); iter2 = listRef.erase(iter2))
    {
        // find the first element of the sorted prefix that *iter2 belongs before
        iter3 = listRef.begin();
        while (iter3 != iter2 && *iter2 > *iter3)
            ++iter3;
        listRef.insert(iter3, *iter2);   // re-insert in sorted position
    }
}

void main() bad. int main() good.
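The bubble-sort pseudocode above translates to working code almost line for line. This sketch is in Python rather than C/C++ purely for brevity; the lottery-style draw uses the standard random module and is only an illustration of the original poster's setup:

```python
import random

def bubble_sort(a):
    """In-place sort using the same double loop as the pseudocode above."""
    for x in range(len(a)):
        for y in range(x + 1, len(a)):
            if a[x] > a[y]:
                a[x], a[y] = a[y], a[x]   # swap the two here
    return a

# Draw 7 distinct numbers from 1..49 (like BALLS and RANGE), then sort.
random.seed(0)                  # fixed seed so the run is repeatable
balls = random.sample(range(1, 50), 7)
bubble_sort(balls)
print(balls)                    # lowest to highest
```

(Strictly speaking, this inner-loop variant is sometimes called an exchange or selection sort; classic bubble sort compares adjacent elements. For 7 numbers the distinction is irrelevant.)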
{"url":"http://cboard.cprogramming.com/cplusplus-programming/30194-sorting-random-numbers-printable-thread.html","timestamp":"2014-04-18T10:58:21Z","content_type":null,"content_length":"11345","record_id":"<urn:uuid:f77ddfbd-fd23-4ac5-bfb2-bba6dc8e10ac>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Transformer Losses

This article is excerpted from "Premium-Efficiency Motors and Transformers", a CD-ROM available from CDA by calling 888-480-4276, or through the Publications List.

Transformer losses are produced by the electrical current flowing in the coils and the magnetic field alternating in the core. The losses associated with the coils are called the load losses, while the losses produced in the core are called no-load losses.

What Are Load Losses?

Load losses vary according to the loading on the transformer. They include heat losses and eddy currents in the primary and secondary conductors of the transformer. Heat losses, or I²R losses, in the winding materials contribute the largest part of the load losses. They are created by resistance of the conductor to the flow of current, or electrons. The electron motion causes the conductor molecules to move and produce friction and heat. The energy generated by this motion can be calculated using the formula: Watts = (volts)(amperes), or VI. According to Ohm's law, V = RI: the voltage drop across a resistor equals the resistance, R, multiplied by the current, I, flowing in the resistor. Hence, heat losses equal (I)(RI), or I²R.

Transformer designers cannot change I, the current portion of the I²R losses, which is determined by the load requirements. They can only change the resistance, the R part of the I²R, by using a material that has a low resistance per cross-sectional area without adding significantly to the cost of the transformer. Most transformer designers have found copper the best conductor considering the weight, size, cost and resistance of the conductor. Designers can also reduce the resistance of the conductor by increasing its cross-sectional area.

What Are No-load Losses?

No-load losses are caused by the magnetizing current needed to energize the core of the transformer, and do not vary according to the loading on the transformer.
They are constant and occur 24 hours a day, 365 days a year, regardless of the load, hence the term no-load losses. They can be categorized into five components: hysteresis losses in the core laminations, eddy current losses in the core laminations, I²R losses due to no-load current, stray eddy current losses in core clamps, bolts and other core components, and dielectric losses. Hysteresis losses and eddy current losses contribute over 99% of the no-load losses, while stray eddy current losses, dielectric losses, and I²R losses due to no-load current are small and consequently often neglected. Thinner lamination of the core steel reduces eddy current losses.

The biggest contributor to no-load losses is hysteresis loss. Hysteresis losses come from the molecules in the core laminations resisting being magnetized and demagnetized by the alternating magnetic field. This resistance by the molecules causes friction that results in heat. The Greek word hysteresis means "to lag" and refers to the fact that the magnetic flux lags behind the magnetic force. Choice of size and type of core material reduces hysteresis losses.

Values of Transformer Losses (A and B Values)

The values of transformer losses are important to the purchaser of a transformer who wants to select the most cost-effective transformer for their application. The use of A and B factors is a method followed by most electric utilities and many large industrial customers to capitalize the future value of no-load losses (which relate to the cost to supply system capacity) and load losses (which relate to the cost of incremental energy). Put another way, A values provide an estimate of the equivalent present cost of future no-load losses, while B values provide an estimate of the equivalent present cost of future load losses. Most utilities regularly update their avoided cost of capacity and energy (typically on an annual basis), and use A and B values when specifying a transformer.
Most smaller end users typically use life-cycle-cost evaluation methods, discussed in another article on this web site. When evaluating various transformer designs, the assumed value of transformer losses (the A and B values) will help determine the efficiency of the transformer to be purchased. Assuming a high value for transformer losses will generally result in purchase of a more efficient unit; assuming a lower value of losses will result in purchase of a less efficient unit. What value of losses should be assumed?

The total owning cost (TOC) method provides an effective way to evaluate various transformer initial purchase prices and costs of losses. The goal is to choose a transformer that meets specifications and simultaneously has the lowest TOC. The A and B values include the cost of no-load and load losses in the TOC formula:

TOC = NLL x A + LL x B + C

TOC = capitalized total owning cost,
NLL = no-load loss in watts,
A = capitalized cost per rated watt of NLL (A value),
LL = load loss in watts at the transformer's rated load,
B = capitalized cost per rated watt of LL (B value),
C = the initial cost of the transformer, including transportation, sales tax, and other costs to prepare it for service.

What Is the A Value?

The A value is an estimate of the present value of future capital-cost (non-load-dependent) items at a given point in time. It can vary over time as utilities re-evaluate their costs on a periodic basis. (In other words, the A value is the answer to the question: what is a watt of no-load loss over the life of the transformer worth to me today?) Even if there is no load, there is capital devoted to fixed capacity to generate, transmit and distribute electricity, which contributes to the A value. The loading that may change daily on the transformer does not affect the no-load loss value.
It is calculated using the following formula:

A = [SC + (EC x 8760)] x 0.001 / FC = Cost of No-Load Loss in $/watt

SC = Annual Cost of System Capacity in $/kW-year (SC is the levelized annual cost of generation, transmission and primary distribution capacity required to supply one watt of load to the distribution transformer coincident with the peak load).
EC = Energy Cost (EC is the levelized annual cost per kWh of fuel, including inflation, escalation, and any other fuel-related components of operation or maintenance costs that are proportional to the energy output of the generating units).
8760 = hours per year.
FC = Fixed Charge on capital per year (FC is the levelized annual revenue required to carry and repay the transformer investment obligation and pay related taxes, all expressed as a per-unit quantity of the original investment).
0.001 = conversion from kilowatts to watts.

What Is the B Value?

Similar to the way the A value is determined, the B value is an estimate of the present value of future variable, or load-dependent, cost items at a given point in time. (In other words, the B value is the answer to the question: what is a watt of load loss over the life of the transformer worth to me today?) The B value can also change over time as utilities re-evaluate their costs on a periodic basis, but once determined, it is a constant value for a given transformer purchase. The cost of load losses, or B value, is calculated using the following formula:

B = [(SC x RF) + (EC x 8760 x LF)] x (PL)^2 x 0.001 / FC = Cost of Load Loss in $/watt

RF = Peak Loss Responsibility Factor (RF is the composite responsibility factor that reduces the system capacity requirements for load losses, since the peak transformer losses do not necessarily occur at peak time).
LF = Annual Loss Factor (LF is the ratio of the annual average load loss to the peak value of the load loss in the transformer).
PL = Uniform Equivalent Annual Peak Load (PL is the levelized peak load per year over the life of the transformer. The transformer life cycle is defined as the useful life of the asset and is usually assumed to be 30-35 years).

Specifying A and B Values

For custom-designed transformers, manufacturers optimize the design of the unit to the specified A and B values, resulting in a transformer designed for the lowest total owning cost rather than one designed for the cheapest first cost. In situations where A and B values have not been determined (or the end user does not utilize or specify them), as often occurs in commercial or small industrial applications, the suggested technique to maximize transformer efficiency is to obtain the no-load and full-load loss values of a specific transformer, in watts. This method is discussed in the article Transformer Life-Cycle Cost, elsewhere on this web site.
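To make the procedure concrete, the three formulas above (A value, B value, TOC) can be chained directly. The Python sketch below does so; every numeric input is a hypothetical placeholder chosen for illustration, not data from the article:

```python
def a_value(sc, ec, fc):
    """A = [SC + (EC x 8760)] x 0.001 / FC, in $/watt of no-load loss."""
    return (sc + ec * 8760) * 0.001 / fc

def b_value(sc, ec, fc, rf, lf, pl):
    """B = [(SC x RF) + (EC x 8760 x LF)] x PL**2 x 0.001 / FC, in $/watt of load loss."""
    return (sc * rf + ec * 8760 * lf) * pl**2 * 0.001 / fc

def total_owning_cost(nll_w, ll_w, a, b, initial_cost):
    """TOC = NLL x A + LL x B + C."""
    return nll_w * a + ll_w * b + initial_cost

# Hypothetical utility figures (placeholders, not from the article):
SC, EC, FC = 120.0, 0.05, 0.12     # $/kW-year, $/kWh, per-unit fixed charge
RF, LF, PL = 0.8, 0.3, 0.6

A = a_value(SC, EC, FC)              # about 4.65 $/watt
B = b_value(SC, EC, FC, RF, LF, PL)  # about 0.68 $/watt

# Two hypothetical designs: cheaper but lossier vs. dearer but efficient.
toc_cheap = total_owning_cost(nll_w=300, ll_w=1500, a=A, b=B, initial_cost=4000)
toc_eff = total_owning_cost(nll_w=200, ll_w=1100, a=A, b=B, initial_cost=4600)
print(round(toc_cheap, 2), round(toc_eff, 2))
```

With these placeholder inputs the dearer, lower-loss design has the lower TOC, which is exactly the trade-off the article describes. Note also that PL enters B squared, so the assumed peak loading strongly affects how load losses are valued.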
{"url":"http://www.copper.org/environment/sustainable-energy/transformers/education/trans_losses.html","timestamp":"2014-04-18T18:17:46Z","content_type":null,"content_length":"62893","record_id":"<urn:uuid:d55bb2ea-a339-4a69-967f-22b43afd8739>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairview, TX Science Tutor
Find a Fairview, TX Science Tutor

...As with other areas of science, astronomy is difficult to understand because of the relative sizes involved. The vastness and complexity of the universe are almost too much to comprehend. In order to understand astronomy standards, students need to understand the laws that govern the world in front of us, that we have concrete experience with, and then extend that knowledge to the
15 Subjects: including geology, algebra 1, chemistry, geometry

...I graduated with a Bachelor's degree in Biology and a minor in health care, and I also obtained my teaching certification in composite science (teaching chemistry, biology and physics) and in German. Currently I am teaching high school Physics at a public school district, as well as German 1, 2 ...
33 Subjects: including genetics, microbiology, biochemistry, biostatistics

...I'll be learning from them as much (or more) as they learn from me. Please give me a shot! In pursuing my certification to teach High School Physics, I have also tested (and passed) the criteria for Chemistry proficiency. My career experience as a civil engineer and my academic pursuit of teaching physics have given me an understanding of both real-world applications and academic
28 Subjects: including civil engineering, ACT Science, biology, chemistry

...I have been a teaching assistant for general chemistry and college physics courses for almost two years. In addition, I have been a course assistant for chemistry and physics courses and also have been tutoring math classes. The most rewarding part of my career as a teacher is to work with a st...
19 Subjects: including organic chemistry, physics, ACT Science, chemistry

...Prior to that I managed and tutored at an SAT ACT company in Plano. Currently, I tutor and work part time as an instructor in GRE GMAT Quantitative at UT Dallas while also exploring careers in educational technology.
I hold a B Sc in Electrical Engineering and have worked in the telecom and tec... 11 Subjects: including ACT Science, geometry, GRE, SAT math Related Fairview, TX Tutors Fairview, TX Accounting Tutors Fairview, TX ACT Tutors Fairview, TX Algebra Tutors Fairview, TX Algebra 2 Tutors Fairview, TX Calculus Tutors Fairview, TX Geometry Tutors Fairview, TX Math Tutors Fairview, TX Prealgebra Tutors Fairview, TX Precalculus Tutors Fairview, TX SAT Tutors Fairview, TX SAT Math Tutors Fairview, TX Science Tutors Fairview, TX Statistics Tutors Fairview, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/fairview_tx_science_tutors.php","timestamp":"2014-04-19T04:55:04Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:2886e763-c3c2-4d56-84aa-c67f96ce18e6>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Blaise Pascal

Blaise Pascal invented one of the first mechanical calculators: the Pascaline. He was among the contemporaries of Descartes, and none displayed greater natural genius. But his mathematical reputation rests more on what he might have done than on what he actually achieved, as during a considerable part of his life he deemed it his duty to devote his whole time to religious exercises.

Blaise Pascal was born at Clermont, France, on June 19, 1623, and died at Paris on August 19, 1662. His father, a local judge at Clermont, himself enjoyed some scientific reputation. They moved to Paris in 1631, partly so the father could pursue his own scientific studies, partly to carry on the education of his only son, who had already displayed exceptional abilities. Pascal was kept at home in order to ensure his not being overworked, and with the same purpose it was directed that his education should at first be confined to the study of languages and should not include any mathematics. This naturally excited the boy's curiosity, and one day, being then twelve years old, he asked of what geometry consisted. His tutor replied that it was the science of constructing exact figures and of determining the proportions between their different parts. Pascal, stimulated no doubt by the injunction against reading it, gave up his play-time to this new study, and in a few weeks had discovered for himself many properties of figures, and in particular the proposition that the sum of the angles of a triangle is equal to two right angles. His father, struck by this display of ability, gave him a copy of Euclid's Elements, a book which Pascal read with avidity and soon mastered. Before Pascal turned 13 he had proven the 32nd proposition of Euclid and discovered an error in René Descartes' geometry. At 16, Pascal began preparing to write a study of the entire field of mathematics, but his father required his time to hand-total long columns of numbers.
Pascal began designing a calculating machine, which he finally perfected when he was thirty: the Pascaline, a beautiful handcrafted box about fourteen by five by three inches. The first accurate mechanical calculator was born. The Pascaline was not a commercial success in Pascal's lifetime; although it could do the work of six persons, people feared it would create unemployment.

At the age of fourteen he was admitted to the weekly meetings of Roberval, Mersenne, Mydorge, and other French geometricians, from which, ultimately, the French Academy sprang. At sixteen Pascal wrote an essay on conic sections; and in 1641, at the age of eighteen, he constructed the first arithmetical machine, an instrument which, eight years later, he further improved. His correspondence with Fermat about this time shows that he was then turning his attention to analytical geometry and physics. He repeated Torricelli's experiments, by which the pressure of the atmosphere could be estimated as a weight, and he confirmed his theory of the cause of barometrical variations by obtaining at the same instant readings at different altitudes on the hill of Puy-de-Dôme.

In 1650, when in the midst of these researches, Pascal suddenly abandoned his favorite pursuits to study religion, or, as he says in his Pensées, "contemplate the greatness and the misery of man"; and about the same time he persuaded the younger of his two sisters to enter the Port Royal society. In 1653 he had to administer his father's estate. He now took up his old life again, and made several experiments on the pressure exerted by gases and liquids; it was also about this period that he invented the arithmetical triangle, and together with Fermat created the calculus of probabilities. He was meditating marriage when an accident again turned the current of his thoughts to a religious life.
He was driving a four-in-hand on November 23, 1654, when the horses ran away; the two leaders dashed over the parapet of the bridge at Neuilly, and Pascal was saved only by the traces breaking. Always somewhat of a mystic, he considered this a special summons to abandon the world. He wrote an account of the accident on a small piece of parchment, which for the rest of his life he wore next to his heart, to perpetually remind him of his covenant; and shortly moved to Port Royal, where he continued to live until his death in 1662. Constitutionally delicate, he had injured his health by his incessant study; from the age of seventeen or eighteen he suffered from insomnia and acute dyspepsia, and at the time of his death was physically worn out.

His famous Provincial Letters, directed against the Jesuits, and his Pensées were written towards the close of his life, and are the first example of that finished form which is characteristic of the best French literature. The only mathematical work that he produced after retiring to Port Royal was the essay on the cycloid in 1658. He was suffering from sleeplessness and toothache when the idea occurred to him, and to his surprise his teeth immediately ceased to ache. Regarding this as a divine intimation to proceed with the problem, he worked incessantly for eight days at it, and completed a tolerably full account of the geometry of the cycloid.

Pascal was dismayed and disgusted by society's reactions to his machine and completely renounced his interest in science and mathematics, devoting the rest of his life to God. He is best known for his collection of spiritual essays, Les Pensées. Even so, the basic design of the Pascaline lived on in mechanical calculators for over three hundred years. As a counting machine, the Pascaline was not superseded until the invention of the electronic calculating machine.
"The arithmetical machine produces effects which approach nearer to thought than all the actions of animals," wrote Pascal in the Pensées (a customarily lengthy piece), "but it does nothing which would enable us to attribute will to it, as to animals." Pascal, genius by any measure, died of a brain hemorrhage at the age of 39.

1623 Born in Clermont on June 19
1631 Moved to Paris
1636 Proved the 32nd proposition of Euclid
1641 Designed a calculating machine, the Pascaline, to be completed in 1653
1650 Suddenly abandoned his favorite pursuits to study religion
1653 Returned from his religious studies to administer his father's estate
1662 Died at Paris on August 19

Honors and awards
In 1968 a programming language (PASCAL) was named after Pascal.

1639 Geometry of Conics, published in 1779
1658 Cycloid
Provincial Letters
{"url":"http://www.thocp.net/biographies/pascal_blaise.html","timestamp":"2014-04-20T20:54:43Z","content_type":null,"content_length":"16537","record_id":"<urn:uuid:c24853fb-55f5-405f-b6ba-04969b8fbbc7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Aspen Hill, MD Math Tutor Find an Aspen Hill, MD Math Tutor ...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil... 15 Subjects: including prealgebra, probability, algebra 1, algebra 2 ...I have helped children as young as 6 with both reading comprehension and mathematical concepts. Young learners are more eager to learn new concepts but often have not learned logic or working with multiple steps to answer problems. I like to help this age group prepare for what they will study in high school. 31 Subjects: including algebra 1, algebra 2, biology, calculus ...In addition to teaching science and math, I helped students improve their reading comprehension and writing skills in preparation for a number of different standardized tests. The tests I taught for include SAT, ACT, GRE, GMAT, and to a more limited extent LSAT. Exposure to all these different ... 23 Subjects: including SAT math, ACT Math, algebra 1, algebra 2 ...I am a highly-qualified teacher, licensed to teach ESL/ESOL K-12 in the state of Maryland. I have successfully taught ESOL for five years in the public schools. I have taught newcomers, beginning, intermediate and advanced English Language Learners. 24 Subjects: including geometry, ACT Math, SAT math, prealgebra ...I use an eclectic style of tutoring, incorporating my knowledge and experience with a wide variety of test taking strategies. I base this knowing that every student is unique in their learning styles and no one approach works for all students. I help students identify weak areas in their core k... 
2 Subjects: including algebra 1, MCAT Related Aspen Hill, MD Tutors Aspen Hill, MD Accounting Tutors Aspen Hill, MD ACT Tutors Aspen Hill, MD Algebra Tutors Aspen Hill, MD Algebra 2 Tutors Aspen Hill, MD Calculus Tutors Aspen Hill, MD Geometry Tutors Aspen Hill, MD Math Tutors Aspen Hill, MD Prealgebra Tutors Aspen Hill, MD Precalculus Tutors Aspen Hill, MD SAT Tutors Aspen Hill, MD SAT Math Tutors Aspen Hill, MD Science Tutors Aspen Hill, MD Statistics Tutors Aspen Hill, MD Trigonometry Tutors Nearby Cities With Math Tutor Adelphi, MD Math Tutors Camp Springs, MD Math Tutors Cloverly, MD Math Tutors Colesville, MD Math Tutors Darnestown, MD Math Tutors Franconia, VA Math Tutors Glenmont, MD Math Tutors N Chevy Chase, MD Math Tutors Norbeck, MD Math Tutors North Bethesda, MD Math Tutors North Chevy Chase, MD Math Tutors North Potomac, MD Math Tutors Oak Hill, VA Math Tutors Wheaton, MD Math Tutors Woodlawn, MD Math Tutors
{"url":"http://www.purplemath.com/Aspen_Hill_MD_Math_tutors.php","timestamp":"2014-04-20T16:29:24Z","content_type":null,"content_length":"23943","record_id":"<urn:uuid:f6433b60-e382-4a81-b33b-61ec1103b0a6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Middle City West, PA Algebra 2 Tutor Find a Middle City West, PA Algebra 2 Tutor ...I start off by building a rapport with you, student to student. We will talk about what your learning style is like and what your experience has been in the class that we are discussing. The reason that I said, "student to student," is because I love to learn about what you are writing about! 37 Subjects: including algebra 2, reading, writing, English ...As a tutor with a primary focus in math and science, I not only tutor algebra frequently, but also encounter this fundamental math subject every day in my professional life. I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping. Students I tutor are mostly college-age, but range from middle school to adult. 9 Subjects: including algebra 2, calculus, physics, geometry ...Hard work pays off! I have been an SAT, ACT, PSAT tutor for over 10 years. My first career was in business as a Vice President, consultant, and trainer. 35 Subjects: including algebra 2, chemistry, English, reading I have been a part time college instructor for over 10 years at a local university. While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and pre-calculus as well as contemporary math. My background is in engineering and business, so I use an applied math approach to teaching. 13 Subjects: including algebra 2, calculus, geometry, statistics ...I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my success. 
21 Subjects: including algebra 2, reading, physics, writing Related Middle City West, PA Tutors Middle City West, PA Accounting Tutors Middle City West, PA ACT Tutors Middle City West, PA Algebra Tutors Middle City West, PA Algebra 2 Tutors Middle City West, PA Calculus Tutors Middle City West, PA Geometry Tutors Middle City West, PA Math Tutors Middle City West, PA Prealgebra Tutors Middle City West, PA Precalculus Tutors Middle City West, PA SAT Tutors Middle City West, PA SAT Math Tutors Middle City West, PA Science Tutors Middle City West, PA Statistics Tutors Middle City West, PA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Briarcliff, PA algebra 2 Tutors Bywood, PA algebra 2 Tutors Carroll Park, PA algebra 2 Tutors Center City, PA algebra 2 Tutors East Camden, NJ algebra 2 Tutors Eastwick, PA algebra 2 Tutors Fernwood, PA algebra 2 Tutors Middle City East, PA algebra 2 Tutors Passyunk, PA algebra 2 Tutors Penn Ctr, PA algebra 2 Tutors Penn Wynne, PA algebra 2 Tutors Philadelphia algebra 2 Tutors Philadelphia Ndc, PA algebra 2 Tutors South Camden, NJ algebra 2 Tutors West Collingswood, NJ algebra 2 Tutors
{"url":"http://www.purplemath.com/Middle_City_West_PA_algebra_2_tutors.php","timestamp":"2014-04-17T01:36:45Z","content_type":null,"content_length":"24634","record_id":"<urn:uuid:3af14ef2-1e46-4ed7-a134-090a5f59e288>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Solid of rev and Surface Area

April 9th 2013, 07:00 PM #1
Apr 2013, New Jersey

I think I posted it in the wrong forum; being a newbie here, I need some help on my math problem. Thanks for all the help, guys. I'm stuck with my homework on the solid of revolution and surface area of this figure: a trapezoid whose slanted side has slope -5/L, with R2 = 2L and R1 = 3L/2. Any help is appreciated.

(figure: a trapezoid with parallel sides of lengths 3L/2 and 2L and a slanted side of slope -5/L)

a. Find the equations in terms of L (volume and surface area)
b. Volume as a function of L
c. Value of L that minimizes the volume
d. Minimum volume

Re: Solid of rev and Surface Area
What have you been able to do with this?

Re: Solid of rev and Surface Area
What is the axis of rotation? Isn't part (b) repeating the first half of part (a)? And doesn't the volume decrease as L decreases, so that (c) and (d) have no solution, assuming L > 0? - Hollywood

Re: Solid of rev and Surface Area
This is what I've got so far.

Re: Solid of rev and Surface Area
The axis of revolution is the y-axis.

Re: Solid of rev and Surface Area
Okay, taking the origin at the center of the base, the side is a line with slope -5/L passing through (2L, 0). That has equation y = (-5/L)(x - 2L). When x = 3L/2, y = (-5/L)(3L/2 - 2L) = (-5/L)(-L/2) = 5/2. Rotating around the y-axis, you have disks with radius x and thickness dy, so area $\pi x^2 dy$. That's why you need to integrate $\pi\int_0^{5/2} x^2 dy$. If y = (-5/L)(x - 2L), what is x as a function of y? Now, when you say "surface area", do you mean the slant area only, or do you want to include the area of the top and bottom?

Re: Solid of rev and Surface Area
Only the slant area, excluding the top and bottom.

Re: Solid of rev and Surface Area
Okay, so you only need to do the integral I gave you.

Re: Solid of rev and Surface Area
I thought that formula is for volume? The surface area is 2 pi times all the others?
Then you can do this: a line with slope m is of the form y= mx+ b. If we increase x by dx then we increase (decrease for negative m) y by y+ dy= m(x+ dx)+ b subtracting y from the left side and mx+ b from the right we get dy= m dx (which is just saying, again, that for a straight line the slope is the same as the derivative). A right triangle with legs dx and dy has hypotenus of length, by the Pythagorean theorem, $\sqrt{dx^2+ dy^2}= \sqrt{1+ \left(\frac{dy}{dx}\right)^2} dx$. Since the circumference of a circle, of radius r, is $2\pi r$, the surface area the product of those two lengths, $2\pi x\sqrt{1+ \left(\frac{dy}{dx}\right)^2} dx$. That's the formula you had before, right? Re: Solid of rev and Surface Area Yep, but using all the equation and applying my limits, I was getting negative results. 2 Pi X ( 1 + (dy/dx)^2)^1/2 dx from 0 to 5/2 for Y = 5X + 10 April 10th 2013, 06:34 AM #2 April 10th 2013, 06:45 AM #3 Super Member Mar 2010 April 10th 2013, 07:27 AM #4 Apr 2013 new jersey April 10th 2013, 07:28 AM #5 Apr 2013 new jersey April 10th 2013, 07:52 AM #6 MHF Contributor Apr 2005 April 10th 2013, 08:19 AM #7 Apr 2013 new jersey April 10th 2013, 08:36 AM #8 MHF Contributor Apr 2005 April 10th 2013, 08:44 AM #9 Apr 2013 new jersey April 10th 2013, 12:32 PM #10 MHF Contributor Apr 2005 April 10th 2013, 12:46 PM #11 Apr 2013 new jersey
Sponsored by SIAM Activity Group on Geosciences NOTICE: The Regal Harvest House Hotel is now known as the Millennium Hotel Boulder From points of view ranging from science to public policy, there is burgeoning interest in modeling of geoscientific problems. Some examples include petroleum exploration and recovery, cleanup of hazardous waste, earthquake prediction, weather prediction, and global climate change. Such modeling is fundamentally interdisciplinary: physical and mathematical modeling at appropriate scales, physical experiments, mathematical theory, numerical approximations, and large-scale computational algorithms all have important roles to play. The conference facilitates communication between scientists of varying backgrounds and work environments who face similar issues in different fields, and provides a forum in which advances in parts of the larger modeling picture can become known to those working in other parts. These kinds of interactions are needed for meaningful progress in understanding and predicting complex physical phenomena in the geosciences.
gzl: er, doesn't actually say that that is the only inconsistent thing you've said
asphyxia: gzl: I'll tex that part. Done in 3 minutes
gzl: dude, isn't your question just about (1 p)(1 p-1)...(1 2)? there is no need to tex anything
gzl: if you think (1 2)(1 3) is an example of a product like that you're just blatantly getting the order backwards
Steve|Office: I've heard of books going in that order.
gzl: so have I, but I think he is making the mistake in thinking that (1 2)(1 3) is an example of a product of that form
Steve|Office: I guess I didn't read the first bit.
gzl: anyway, let him waste time texing it if he wants, I guess
DemisM: (1 2)(1 3) can be either of (1 3 2) or (1 2 3) right?
gzl: no, it's only one or the other depending on which order you want to go in. or is that what you meant
DemisM: yeah like it's one or the other not both
gzl: right
asphyxia: gzl: I am sorry if this bothers you, then dont have a look at it. but that is the best translation I was able to make. I think it will help clear up the misunderstandings
gzl: that statement was fine, the issue is that (1 2)(1 3) is not equal to (1 2 3) and has nothing to do with that theorem
gzl: (1 3)(1 2) would be an example. or (1 5)(1 4)(1 3)(1 2)
asphyxia: the product of the last expression is equal to (1 2 3 4 5), right?
gzl: by your theorem. but you brought up this (1 2)(1 3) example which has nothing to do with this
gzl: in fact if you apply that theorem it tells you that (1 2)(1 3) = (1 3 2)
asphyxia: ko
gzl: if there's something you're confused about you should point to what it is
asphyxia: well, I think it's kind of cleared up
gzl: ok
asphyxia: I will mutter about it later if there is some glitch... but thanks I think it helped me to translate that passage.
brick_: Ok, I understand how to find out what the error term is for a given number of terms in a taylor expansion
brick_: Yet given an error, how do I find the number of terms needed to get there? For instance, I have Sin(.45) as a power series.
brick_: How many terms do I need to get to .0000001 error?
me22: brick_: signs alternate, so find the first term smaller than the required error
thianpa: Anyone knows how i can swap two number only using the two variable ?
Jafet: http://en.wikipedia.org/wiki/Swap_(computer_science)
Jafet: I wrote that article (:
synthesizer-core-0.2: Audio signal processing coded in Haskell: Low level part

This module gives some introductory examples to signal processing with plain Haskell lists. For more complex examples see Synthesizer.Plain.Instrument and Synthesizer.Plain.Effect. The examples require a basic understanding of audio signal processing. In the Haddock documentation you will only see the API. In order to view the example code, please use the "Source code" links beside the function documentation. This requires, however, that Haddock was run with the hyperlink-source option. Using plain lists is not very fast, particularly not fast enough for serious real-time applications. It is however the most flexible data structure, which you can also use without knowledge of low level programming. For real-time applications see Synthesizer.Generic.Tutorial.

sine :: IO ExitCode

Play a simple sine tone at 44100 sample rate and 16 bit. These are the parameters used for compact disks. The period of the tone is 2*pi*10. Playing at sample rate 44100 Hz results in a tone of 44100 / (20*pi) Hz, that is about 702 Hz. This is simple enough to be performed in real-time, at least on my machine. For playback we use SoX.

sineStereo :: IO ExitCode

Now the same for a stereo signal. Both stereo channels are slightly detuned in order to achieve a stereophonic phasing effect. In principle there is no limit on the number of channels, but with more channels playback becomes difficult. Many signal processes in our package support any tuple and even nested tuples using the notion of an algebraic module (see C). A module is a vector space where the scalar numbers do not need to support division. A vector space is often also called a linear space, because all we require of vectors is that they can be added and scaled and these two operations fulfill some natural laws.

writeSine :: IO ExitCode

Of course we can also write a tone to disk using sox.
play :: T Double -> IO ExitCode

For the following examples we will stick to monophonic sounds played at 44100 Hz. Thus we define a function for convenience.

oscillator :: IO ExitCode

Now, let's repeat the sine example in a higher level style. We use the oscillator static that does not allow any modulation. We can however use any waveform. The waveform is essentially a function which maps from the phase to the displacement. Functional programming proves to be very useful here, since anonymous functions as waveforms are optimally supported by the language. We can also expect that in compiled form the oscillator does not have to call back the waveform function by an expensive explicit function call, but that the compiler will inline both oscillator and waveform such that the oscillator is turned into a simple loop which handles both oscillation and waveform computation. Using the oscillator with sine also has the advantage that we do not have to cope with pis any longer. The frequency is given as a ratio of the sample rate. That is, 0.01 at 44100 Hz sample rate means 441 Hz. This is the way all frequencies are given in the low-level signal processing. It is not optimal to handle frequencies this way, since all frequency values are bound to the sample rate. For overcoming this problem, see the high level routines using physical dimensions. For examples see Synthesizer.Dimensional.RateAmplitude.Demonstration. It is very simple to switch to another waveform like a saw tooth wave. Instead of a sharp saw tooth, we use an extremely asymmetric triangle. This is a poor man's band-limiting approach that is intended to reduce aliasing at high oscillation frequencies. We should really work on band-limited oscillators, but this is hard in the general case.

cubic :: IO ExitCode

When we apply a third power to each value of the saw tooth we get an oscillator with cubic polynomial functions as waveform.
The distortion function applied to a saw wave can be used to turn every function on the interval [-1,1] into a waveform.

sawMorph :: IO ExitCode

Now let's start with modulated tones. The first simple example is changing the degree of asymmetry according to a slow oscillator (LFO = low frequency oscillator).

laser :: IO ExitCode

It's also very common to modulate the frequency of a tone.

pingSig :: T Double
ping :: IO ExitCode

A simple sine wave with exponentially decaying amplitude.

fmPing :: IO ExitCode

The ping sound can also be used to modulate the phase of another oscillator. This is a well-known effect used excessively in FM synthesis, which was introduced by the Yamaha DX-7 synthesizer.

filterSaw :: IO ExitCode

One of the most impressive sound effects is certainly frequency filtering, especially when the filter parameters are modulated. In this example we use a resonant lowpass whose resonance frequency is controlled by a slow sine wave. The frequency filters usually use internal filter parameters that are not very intuitive to use directly. Thus we apply a function (here parameter) in order to turn the intuitive parameters "resonance frequency" and "resonance" (resonance frequency amplification while frequency zero is left unchanged) into internal filter parameters. We have not merged these two steps since the computation of internal filter parameters is more expensive than the filtering itself, and you may want to reduce the computation by computing the internal filter parameters at a low sample rate and interpolating them. However, in the list implementation this will not save you much time, if at all, since the list operations are too expensive. Now this is the example where my machine is no longer able to produce a constant audio stream in real-time. For tackling this problem, please continue with Synthesizer.Generic.Tutorial.

Produced by Haddock version 2.4.2
Float to ASCII (thread from 10-01-2011)

Can someone please share the logic for converting a float into a string of ASCII characters, without using standard functions? Just like what the function itoa() does to an integer, is there any float counterpart?

Have you looked in your C library documentation?

Any of these conversions is relatively easy to accomplish using sprintf()... look it up, you'll see.

itoa() is not standard C. If you want to convert a float to a string you could use sprintf:

float myFloat = 3.23;
char myString[30];
sprintf(myString, "%f", myFloat);

See more at sprintf - C++ Reference, and itoa - Wikipedia, the free encyclopedia. An itoa function (and a similar function, ftoa, that converted a float to a string) was listed in the first-edition Unix manual.

juice, converting float to string is very difficult in the general case. I looked into doing it for my custom template-sized floating point class and ended up still having not attempted that part myself. It usually involves using a bignum data type as a temporary.

My homepage Advice: Take only as directed - If symptoms persist, please see your debugger. Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

"An itoa function (and a similar function, ftoa, that converted a float to a string) was listed in the first-edition Unix manual."

That does not make it part of the C standard library, which is what "standard" means in this forum with respect to a function.

The original poster wanted the logic, not hints about non-standard functions. To answer the original question:
There are generally three fields of bits within a floating point variable: the sign (which determines whether the value is positive or negative), a mantissa, and an exponent.

If we assume the base is 10, and the floating point type is symmetric (i.e. all representable negative values can be converted to a representable positive value by multiplying by -1), then:

1) The floating point type has three fields: a sign (typically represented using a bit where on means positive and off means negative, or vice versa), a mantissa (which is in the range 0 to 1, not including the value 1), and an exponent (which is a signed integer field). The "value" of the floating point variable is determined by sign * mantissa * 10 ^ exponent (where I use * to represent multiplication and ^ to represent "to the power of").

2) The basic logic of printing out a floating point variable is to interpret those three fields separately. For example, using printf()'s %g format, you might see output in the form -0.20E+30. The negative is the sign, 0.20 is the mantissa, and +30 is the exponent.

Practically, the representation (layout of the fields, number of bits used to represent mantissa and exponent) is implementation defined (each floating point representation lays things out differently).

Keep in mind I've made vastly simplifying assumptions in the above, such as base 10 and symmetric values. Those assumptions are common in practice, but the C standard does not actually require them. Some floating point representations, such as those defined by IEEE, also support NaN (not-a-number), infinities, and other types of values. Those usages require the floating point type to reserve some bits for them (which reduces the number of bits used for mantissa and/or exponent). The C standard does not require such things, but does not prevent them either.

In short, printing out a floating point value requires interpreting the various bit-fields in it in order to print out the "value".
The actual bit-fields that need to be interpreted, and their layout within a floating point variable, vary between representations. In practice, the reason people will tell you to use standard methods is that floating point representations vary between systems, so the precise workings to print out a floating point value vary between systems. If you do it by hand, your code will probably break when you port it (or, possibly, even if you change compilation settings related to floating point).

Right 98% of the time, and don't care about the other 3%.

Thanks grumpy, I just needed the logic to implement the function ftoa(), which is the float counterpart of itoa(), as stahta informed above in the 6th post.

The easiest option is to get a bigint library and, assuming that the float holds a number rather than inf/nan, convert the float to a bigint and then convert the bigint into decimal digits. Then insert the decimal point in the correct place and prepend a minus sign if necessary. That's the basics, although proper implementations have all sorts of optimisations, plus tweaks that guarantee that converting the resulting string back into a float gives you the exact same binary value, whilst also guaranteeing a minimal number of digits to do so, and stuff like that. Unlike converting an int to a string, which is totally basic, float to string is extremely complex.

It's relatively easy if you think about it. For example:

123.456 = 123 + 0.456 => cout << 123 << '.';
0.456*10 = 4 + 0.56 => cout << 4;
0.56*10 = 5 + 0.6 => cout << 5;
0.6*10 = 6 + 0.0 => cout << 6;
0.0 => end

Printed: 123.456

It's really that simple!

Devoted my life to programming...
Reply to comment A recent study from Harvard reported that people who ate more red meat died at a greater rate. This provoked some wonderful media coverage: the Daily Express interpreted the study as saying that "if people cut down the amount of red meat they ate — say from steaks and beef burgers — to less than half a serving a day, 10 per cent of all deaths could be avoided". Well it would be nice to find something that would avoid 10% of all deaths, but sadly this is not what the study says. Their main conclusion is that an extra portion of red meat a day, where a portion is 85g or 3 oz — a lump of meat around the size of a pack of cards or slightly smaller than a standard quarter-pound burger — is associated with a hazard ratio of 1.13, that is a 13% increased risk of death. But what does this mean? Surely our risk of death is already 100%, and a risk of 113% does not seem very sensible? To really interpret this number we need to use some maths. Let's consider two friends — Mike and Sam, both aged 40, with the same average weight, alcohol consumption, amount of exercise, family history of disease, but not necessarily exactly the same income, education and standard of living. Meaty Mike eats a quarter-pound burger for lunch Monday to Friday, while Standard Sam does not eat meat for weekday lunches, but otherwise has a similar diet to Mike (we are not concerned here with their friend Veggie Vern, who doesn't eat meat at all.) Each one faces an annual risk of death, whose technical name is their hazard. A hazard ratio of 1.13 means that for two people like Mike and Sam, who are similar apart from the extra meat, the one with the risk factor — Mike — has a 13% increased annual risk of death over the follow-up period (around 20 years). However, this does not mean that he is going to live 13% less, although this is how some people interpreted this figure. So how does it affect how long they each might live? 
For this we have to go to the life tables provided by the Office for National Statistics. The figure in the original article shows a piece of the male life table for 2008-2010 for England and Wales: for each age it gives, among other things, the probability of dying before the next birthday and the remaining life expectancy. We assume that people die uniformly throughout the year, so on average a person who reaches a birthday but dies before the next one dies half-way through the year. The row of the life table corresponding to age 40 shows that an average 40-year-old man can expect to live another 40 years (last column). This is not, of course, how long he will live: it may be less, it may be more, 40 years is the average. It is also based on current hazards (the standard period life table) rather than projecting future improvements in mortality; according to the current cohort life table for England and Wales, a 40-year-old can expect to live another 46 years. Since we assume Sam is consuming an average amount of red meat, we shall take him as an average man. We can see the effect of a hazard ratio of 1.13 for Mike by multiplying all the annual risks of death in the life table by 1.13: Mike's life expectancy comes out around a year shorter than Sam's. Over 40 years this is a 1/40th difference, or roughly one week a year, or half an hour per day. So a life-long habit of eating burgers for lunch is associated with a loss of half an hour a day, considerably more than it takes to eat the burger. As we showed in our discussion of microlives, a half-hour a day off your life expectancy is also associated with two cigarettes a day and with each day of being 5 kg overweight. Of course we cannot say that precisely this time will be lost, and we cannot even be very confident that Mike will die first. An extremely elegant mathematical result says that if we assume a constant hazard ratio h between the two men, the probability that the higher-risk man dies first is h/(h+1), which for h = 1.13 is 1.13/2.13 = 0.53. So there is only a 53% chance that Mike dies first, rather than a 50:50 chance. Not a big effect. (The elegant result is proved here.) Finally, neither can we say that the meat is directly causing the loss in life expectancy, in the sense that if Mike changed his lunch habits and stopped stuffing his face with a burger, his life expectancy would definitely increase.
Maybe there's some other factor that both encourages Mike to eat more meat and leads to a shorter life. It is quite plausible that income could be such a factor — lower income in the US is associated with both eating more red meat and reduced life expectancy, even allowing for measurable risk factors. But the Harvard study does not adjust for income, arguing that the people in the study — health professionals and nurses — are broadly doing the same job. But, judging from the heated discussion that seems to accompany this topic, the argument will go on.

About the author

David Spiegelhalter is Winton Professor of the Public Understanding of Risk at the University of Cambridge. David and his team run the Understanding uncertainty website, which informs the public about issues involving risk and uncertainty.
Computational Fluid Dynamics: Incompressible Flows Monday July 25/8:00 Computational Fluid Dynamics: Incompressible Flows Chair: Christina I. Draghicescu, University of Houston • 8:00: A New Algorithm for Computing Flows with Concentrated Vorticity. John Steinhoff, University of Tennessee Space Institute • 8:15: A Fast Vortex Method for External Flows. Xuefeng Li, Loyola University • 8:30: A Vorticity-Velocity Scheme for 3D Incompressible Flows. Xiao-Hui Wu and Jain-Ming Wu, University of Tennessee Space Institute • 8:45: Numerical Evaluation of the Fractal Dimension for Vortex Sheets. Christina I. Draghicescu, University of Houston • 9:00: An Efficient Fully Implicit Discretization for the Navier-Stokes Equation with Immersed Boundaries. Yu-Chung Chang, California Institute of Technology • 9:15: An Adaptive Projection Method for the Incompressible Navier-Stokes Equations. Ann S. Almgren, John B. Bell, Louis H. Howell, Lawrence Livermore National Laboratory and Phillip Colella, University of California, Berkeley • 9:30: Least Squares Finite Elements for Incompressible Flow in Stress-Velocity-Pressure Version. Ching Lung Chang, Cleveland State University • 9:45: A New Finite Element Method for Viscous Flow Problems in Enclosures Containing Moving Parts. F. Bertrand, P.A. Tanguy and F. Thibault, Ecole Polytechnique, Canada
Good Triple Equation on Partial Words: x^2 ↑ y^m z

The program takes as input three partial words x, y, z such that z is a proper prefix of y. The program outputs an integer m such that x^2 ↑ y^m z (if such an m exists) and shows the decomposition of x, y, and z.

Acknowledgement: This material is based upon work supported by the National Science Foundation under Grant No. DMS-0452020.

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Case-Control Studies

Cohort studies have an intuitive logic to them, but they can be very problematic when:

1. The outcomes being investigated are rare;
2. There is a long time period between the exposure of interest and the development of the disease; or
3. It is expensive or very difficult to obtain exposure information from a cohort.

In the first case, the rarity of the disease requires enrollment of very large numbers of people. In the second case, the long period of follow-up requires efforts to keep contact with and collect outcome information from individuals. In all three situations, cost and feasibility become an important concern.

A case-control design offers an alternative that is much more efficient. The goal of a case-control study is the same as that of cohort studies, i.e. to estimate the magnitude of association between an exposure and an outcome. However, case-control studies employ a different sampling strategy that gives them greater efficiency. As with a cohort study, a case-control study attempts to identify all people who have developed the disease of interest in the defined population. This is not because they are inherently more important to estimating an association, but because they are almost always rarer than non-diseased individuals, and one of the requirements of accurate estimation of the association is that there are reasonable numbers of people in both the numerators (cases) and denominators (people or person-time) in the measures of disease frequency for both exposed and reference groups. However, because most of the denominator is made up of people who do not develop disease, the case-control design avoids the need to collect information on the entire population by selecting a sample of the underlying population.

"Case-control studies are best understood by considering as the starting point a source population, which represents a hypothetical study population in which a cohort study might have been conducted. The source population is the population that gives rise to the cases included in the study. If a cohort study were undertaken, we would define the exposed and unexposed cohorts (or several cohorts) and from these populations obtain denominators for the incidence rates or risks that would be calculated for each cohort. We would then identify the number of cases occurring in each cohort and calculate the risk or incidence rate for each. In a case-control study the same cases are identified and classified as to whether they belong to the exposed or unexposed cohort. Instead of obtaining the denominators for the rates or risks, however, a control group is sampled from the entire source population that gives rise to the cases. Individuals in the control group are then classified into exposed and unexposed categories. The purpose of the control group is to determine the relative size of the exposed and unexposed components of the source population."

To illustrate this, consider the following hypothetical scenario in which the source population is the state of Massachusetts. In the accompanying figure, diseased individuals are red, and non-diseased individuals are blue; exposed individuals are indicated by a whitish midsection. Note the following aspects of the depicted scenario:

1. The outcome being investigated is rare.
2. There is a fairly large number of exposed individuals in the state, but most of these are not diseased.
3. The proportion of exposed individuals among the disease cases (7/13) is higher than the proportion of exposure among the controls.
If I somehow had exposure and outcome information on all of the subjects in the source population and looked at the association using a cohort design, it might look like this:

│ │ Diseased │ Non-diseased │ Total │
│ Exposed │ 7 │ 1,000 │ 1,007 │
│ Non-exposed │ 6 │ 5,634 │ 5,640 │

Therefore, the incidence in the exposed individuals would be 7/1,007 = 0.70%, and the incidence in the non-exposed individuals would be 6/5,640 = 0.11%. Consequently, the risk ratio would be (7/1,007)/(6/5,640) = 6.53, suggesting that those who had the risk factor (exposure) had 6.5 times the risk of getting the disease compared to those without the risk factor. This is a strong association.

In this hypothetical example, I had data on all 6,647 people in the source population, and I could compute the probability of disease (i.e., the risk or incidence) in both the exposed group and the non-exposed group, because I had the denominators for both the exposed and non-exposed groups. The problem, of course, is that I usually don't have the resources to get the data on all subjects in the population. If I took a random sample of even 5-10% of the population, I might not have any diseased people in my sample. An alternative approach would be to use surveillance databases or administrative databases to find most or all 13 of the cases in the source population and determine their exposure status. However, instead of enrolling all of the other 6,634 non-diseased residents, suppose I were to just take a sample of the non-diseased population. In fact, suppose I only took a sample of 1% of the non-diseased people and I then determined their exposure status. The data might look something like this:

│ │ Diseased │ Non-diseased │ Total │
│ Exposed │ 7 │ 10 │ unknown │
│ Non-exposed │ 6 │ 56 │ unknown │
In other words, I don't know the exposure distribution for the entire source population. However, the small control sample of non-diseased subjects gives me a way to estimate the exposure distribution in the source population. So, I can't compute the probability of disease in each exposure group, but I can compute the odds of disease in each group. The Odds Ratio The odds of disease in the exposed group are 7/10, and the odds of disease in the non-exposed group are 6/56. If I compute the odds ratio, I get (7/10) / (5/56) = 6.56, very close to the risk ratio that I computed from data for the entire population. We will consider odds ratios and case-control studies in much greater depth in a later module. However, for the time being the key things to remember are that: 1. The sampling strategy for a case-control study is very different from that of cohort studies, despite the fact that both have the goal of estimating the magnitude of association between the exposure and the outcome. 2. In a case-control study there is no "follow-up" period. One starts by identifying diseased subjects and determines their exposure distribution; one then takes a sample of the source population that produced those cases in order to estimate the exposure distribution in the overall source population that produced the cases. [In cohort studies none of the subjects have the outcome at the beginning of the follow-up period.] 3. In a case-control study, you cannot measure incidence, because you start with diseased people and non-diseased people, so you cannot calculate relative risk. 4. The case-control design is very efficient. In the example above the case-control study of only 79 subjects produced an odds ratio (6.56) that was a very close approximation to the risk ratio (6.52) that was obtained from the data in the entire population. 5. Case-control studies are particularly useful when the outcome is rare is uncommon in both exposed and non-exposed people. 
The Difference Between "Probability" and "Odds"

• The probability that an event will occur is the fraction of times you expect to see that event in many trials.
• The odds are defined as the probability that the event will occur divided by the probability that the event will not occur.

If the probability of an event occurring is Y, then the probability of the event not occurring is 1-Y. (Example: if the probability of an event is 0.80, or 80%, then the probability that the event will not occur is 1-0.80 = 0.20, or 20%.)

The odds of an event are the ratio (probability that the event will occur) / (probability that the event will not occur). This can be expressed as follows:

Odds of event = Y / (1-Y)

So, in this example, if the probability of the event occurring = 0.80, then the odds are 0.80 / (1-0.80) = 0.80/0.20 = 4 (i.e., 4 to 1).

• If a race horse runs 100 races and wins 25 times and loses the other 75 times, the probability of winning is 25/100 = 0.25 or 25%, but the odds of the horse winning are 25/75 = 0.333, or 1 win to 3 losses.
• If the horse runs 100 races and wins 5 and loses the other 95 times, the probability of winning is 0.05 or 5%, and the odds of the horse winning are 5/95 = 0.0526.
• If the horse runs 100 races and wins 50, the probability of winning is 50/100 = 0.50 or 50%, and the odds of winning are 50/50 = 1 (even odds).
• If the horse runs 100 races and wins 80, the probability of winning is 80/100 = 0.80 or 80%, and the odds of winning are 80/20 = 4 to 1.

NOTE that when the probability is low, the odds and the probability are very similar.

Further down in the article the author quoted the economist who had been interviewed for the story. What the economist had actually said was, "Whether we reach the technical definition [of a double-dip recession] I think is probably close to 50-50."
Question: was the author correct in saying that the "odds" of a double-dip recession may have reached 50 percent?

Hepatitis Outbreak in Marshfield, MA

An outbreak of hepatitis A occurred on the South Shore of Massachusetts. Over a period of a few weeks there were 20 cases of hepatitis A reported to the Massachusetts Department of Public Health (MDPH), and most of the infected persons were residents of Marshfield, MA. Marshfield's health department requested help from MDPH in identifying the source.

The investigators quickly performed descriptive epidemiology. The epidemic curve indicated a point source epidemic, and most of the cases lived in the Marshfield area, although some lived as far away as Boston. They conducted hypothesis-generating interviews, and taken together, the descriptive epidemiology suggested that the source was one of five or six food establishments in the Marshfield area, but it wasn't clear which one. Consequently, the investigators wanted to conduct an analytic study to determine which restaurant was the source.

There were several limitations. First, there were only 20 known cases, and not all of these lived and worked in Marshfield; there was no distinct, well-defined cohort. Given the small number of cases, even if they took a large random sample of residents of Marshfield, they would likely have ended up with very few people who had had recent hepatitis or who had eaten at any of the suspected food establishments.

What kind of study should the MDPH investigators do? They invited all 20 cases of hepatitis A to answer questions from a questionnaire designed for this study, and 19 of the cases agreed to complete the survey.

Note that the lower three study designs (retrospective and prospective cohort studies and clinical trials) are similar in that an initially disease-free cohort is divided into groups based on their "exposure" status, i.e., whether or not they have a particular "risk factor," and for all three, the investigator measures and compares the incidence of disease.
In contrast, case-control studies identify diseased and non-diseased subjects and then measure and compare their likelihood of having had certain prior exposures.
storing really large numbers like 100!

rodrigorules (01-23-2010):
100! = 9.33262154 × 10^157. How would I store this number? I have thought about just putting it in a string. But let's say I want to compute the number myself by starting at 2, then multiplying by 3, followed by 4, all the way until I get to multiplying by 100. I would have to store every intermediate step in a string and somehow 'multiply' these strings? Anyone know of any solution to this? Should I be factoring the numbers? Thanks a lot! Oh, and standard library only, please.

Reply:
Yes, the only way to do it using only the standard library is, I believe, to use an "infinite"-length datatype, such as a string or a (dynamic) char[]. You can certainly use this to create a class that can store arbitrarily large numbers and perform basic operations on them. I've always wanted to do something like this myself, for fun, but have never got around to it. It certainly is interesting. If you aren't up to creating your own class, I would suggest using one of the well-known "large number" libraries. Again, I don't think there is anything in the standard library, but there are some very good and well-known ones. I don't have any off the top of my head, but a simple search will give you the answers. One thing to ask yourself is "do I have to limit myself to the standard library?" when you decide which method to use (implement your own vs. reuse).

rodrigorules:
I guess I will try to do some sort of multiplication-with-carry loop and see how that works out. Now thinking about it, it can't be too difficult (I hope). Edit: now thinking about it, I am also going to have to write a function for addition before I can handle multiplication.

Salem:
Well, so long as you know how to do it on paper, doing the same using strings is no more difficult. Not the highest performance to be sure, but it'll work well enough.
Salem:
Plus, if you make a decent class out of it, you can tinker with the innards later on.

"If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut."

rodrigorules:
Tried making my addition for strings... lots of struggle thus far, and Dev-C++ isn't really helping, with its debugger needing some debugging of its own. Any ideas? The program always returns 1 (from the second-to-last statement).

    string addition(string num1, string num2)
    {
        string total = "";
        int numSize;
        int carry = 0;

        // I want the larger number on the top row, just as I would on paper. SWAP.
        if (num1.length() < num2.length())
        {
            string temp = num1;
            num1 = num2;
            num2 = temp;
        }
        numSize = num1.length();

        // Add zeros so they now match in length... so I can simply go from
        // right to left, adding columns.
        while (numSize != num2.length())
            num2 = "0" + num2;

        for (int i = numSize - 1; i >= 0; --i)
        {
            int sum;
            // Here I attempt to add characters together and resolve an integer.
            if (sum = (int)num2[i] + (int)num1[i] + carry > 9)   // when adding, the carry is at most 1
            {
                carry = 1;
                total = convertInt(sum - 10) + total;
            }
            else
            {
                carry = 0;
                total = convertInt(sum) + total;
            }
        }
        (carry == 1) ? total = "1" + total : 0;
        return total;
    }

(convertInt accepts an int and returns it as a string.)

Salem:
Well, if you just input the strings, then you have '0' to '9', not 0 to 9. The actual numeric value of each character is (num[i] - '0'), and you do + '0' when you're done with the maths on each character.

> and dev c++ isn't really helping with its debugger needing some debugging of its own.
Salem:
Dev-C++ is an old and dead project. Code::Blocks is its successor.

rodrigorules:
I found out the error. In

    if (sum = (int)num2[i] + (int)num1[i] + carry > 9)

the > has precedence over the =, so sum was always 0 or 1. New line, with the + '0' handling:

    if ((sum = ((num2[i] - '0') + (num1[i] - '0')) + carry) > 9)

Thank you, Salem.

rodrigorules:
I wrote a multiplication now. Edit: never mind, it works — even with humongous numbers.

    string mult(string num1, string num2)
    {
        int carry = 0;

        // Make sure num1 is longer than or equal to num2.
        if (num1.length() < num2.length())
        {
            string temp = num1;
            num1 = num2;
            num2 = temp;
        }
        int num2Size = num2.length();
        int num1Size = num1.length();

        string* additionLines = new string[num2Size];

        for (int i = num2Size - 1; i >= 0; --i)
        {
            for (int j = num1Size - 1; j >= 0; --j)
            {
                int product;
                if ((product = ((num2[i] - '0') * (num1[j] - '0')) + carry) > 9)
                {
                    carry = product / 10;
                    additionLines[i] = convertInt(product % 10) + additionLines[i];
                }
                else
                {
                    carry = 0;
                    additionLines[i] = convertInt(product) + additionLines[i];
                }
            }
            if (carry != 0)
            {
                additionLines[i] = convertInt(carry) + additionLines[i];
                carry = 0;
            }
            // Shift this partial product left by appending zeros.
            if (i < num2Size - 1)
                for (int k = i; k < num2Size - 1; ++k)
                    additionLines[i] = additionLines[i] + "0";
        }

        string totalSum = "0";
        for (int i = 0; i < num2Size; ++i)
            totalSum = addition(totalSum, additionLines[i]);

        delete[] additionLines;
        return totalSum;
    }
Georg Ohm

Georg Ohm is the founder of the concept known as Ohm's law. Explore this biography to get details about his life and works.

Famous as: Physicist & Mathematician
Nationality: German
Born on: 16 March 1789
Zodiac sign: Pisces
Born in: Erlangen, Brandenburg-Bayreuth
Died on: 6 July 1854
Place of death: Munich, Kingdom of Bavaria
Father: Johann Wolfgang Ohm
Mother: Maria Elizabeth Beck
Siblings: Martin, Elizabeth Barbara
Married: No
Education: Friedrich-Alexander-University, Erlangen-Nuremberg
Works & achievements: Ohm's law, Ohm's phase law, Ohm's acoustic law
Awards: 1841 - Copley Medal

A German physicist and mathematician, Georg Simon Ohm is best remembered for his formulation of Ohm's Law, which defines the relationship between electrical resistance, electromotive force and electric current. This was an important discovery in the field of science, as it marked the true beginning of electrical circuit analysis. Interestingly, Ohm wasn't the only scientist trying to establish this relationship; several researchers before him had tried and failed. Ohm, with his philosophical arguments and carefully conducted experiments, proved his hypothesis. Like many new ideas, his was rejected at first, but Ohm was not one to be disheartened. His strong will backed his research, which was later not only accepted but enshrined as a law of physics. To know more about this ingenious scientist, browse through the following lines.

Childhood & Early Life

Coming from a family of Protestants, Georg Simon Ohm was born to Johann Wolfgang Ohm and Maria Elizabeth Beck. His father was a locksmith; his mother was the daughter of a tailor. Though his parents had no formal education, this did not stop his father from educating himself.
And not just educating himself: Johann educated his children through his own teachings. Ohm had two siblings, his younger brother Martin, who later became a well-known mathematician, and his sister Elizabeth Barbara. Georg, along with Martin, trained himself in mathematics, physics, chemistry and philosophy to such a high standard that his formal education seemed valueless and uninspiring by comparison. At the age of eleven, Georg enrolled in the Erlangen Gymnasium, where he continued his studies until he was fifteen. However, this phase of learning, as mentioned above, wasn't a very motivating one, for all the school stressed was rote learning and interpreting texts. Such was the brothers' ability that Karl Christian von Langsdorf, a professor at the University of Erlangen, compared the two to the Bernoulli family.

In 1805, Georg Ohm enrolled in the University of Erlangen. However, instead of concentrating on his studies, Ohm frittered away his time on extracurricular activities. Johann, seeing that his son was wasting his valuable years and missing out on the educational opportunity, sent Georg to Switzerland in 1806, where Georg took up a post as a mathematics teacher in a school in Gottstadt bei Nydau. In 1809, Karl Christian von Langsdorf left the University of Erlangen to take up a post at the University of Heidelberg. Ohm wanted to join him but, on the advice of Langsdorf, instead read the works of Euler, Laplace and Lacroix. For this, Ohm left his teaching post in Gottstadt bei Nydau in March 1809 to become a private tutor in Neuchâtel. In his free time, he continued his private study of mathematics. This continued for two years, after which, in April 1811, Ohm returned to the University of Erlangen.

Career in Teaching

Georg Ohm had excelled in his private studies so much that they alone prepared him for his doctorate degree.
Ohm received his PhD degree from the University of Erlangen on October 25, 1811. Immediately thereafter, he joined the department of mathematics as a lecturer. However, this did not continue for long as Ohm left his position three months later due to less growth opportunity. Since Ohm was poverty stricken, the meagre salary that he received from the university did not do much to uplift him from his pitiable state. Next, Ohm took up the job as a teacher of mathematics and physics in Bamberg offered to him by the Bavarian government in 1813. However, unsatisfied with this too, Georg began writing an elementary textbook on geometry as a way to give vent to his abilities. In 1816, the school in which Ohm was teaching was shut down and Ohm was posted to another overcrowded school in Bamberg as a teacher of mathematics. The following year, in September 1817, Ohm was offered a position of a teacher in mathematics and physics at the Jesuit Gymnasium of Cologne. The opportunity was an excellent one, as not only was the school better off than any other in which Ohm had taught, it also had a well-equipped laboratory. During his years as a teacher, Ohm, however, did not give up on his private studies and continued reading texts of the learned French mathematics, Lagrange, Legendre, Laplace, Biot and Poisson. Later, Ohm read the works of Fourier and Fresnel as well. Simultaneously, Ohm started his own experimental work in the school physics laboratory, after he learnt about Oersted's discovery of electromagnetism in 1820. These experiments that Ohm undertook were only as a measure to uplift his educational standard. Also, Ohm realised that if he wanted to attain a job that really inspired him, he had to work on research publications, for that was the only way he could prove himself to the world and have something solid on which he could petition for a position in a more stimulating environment. 
His Research

Ohm submitted a paper in 1825 dealing with the decrease in the electromagnetic force produced by a wire as the length of the wire increased. The paper was based purely on the experimental evidence Ohm had charted from his tests and trials. The next year, Ohm presented two more papers, in which he gave a mathematical description of conduction in circuits modeled on Fourier's study of heat conduction. The second paper was particularly important, for in it Ohm proposed laws which explained the results of others working on galvanic electricity. It is also deemed significant as the stepping stone toward what we today know as Ohm's law, which appeared in the book published the following year.

In 1827, Ohm published his famous book, Die galvanische Kette, mathematisch bearbeitet, in which he gave a detailed account of his theory of electricity. What is interesting to note is that Ohm, instead of jumping directly into the subject, first gave the mathematical background necessary for an understanding of the rest of the work. This was essential, for even the most learned German physicists required such an introduction, since the prevailing approach to physics at the time was a non-mathematical one. According to Ohm's theory, communication of electricity occurred between "contiguous particles." The book also illustrated the difference between Ohm's scientific approach and that of Fourier and Navier.

Later Years

Ohm, who had been given a year off at half pay by the Jesuit Gymnasium of Cologne in 1826 to concentrate on his research, had to resume work in September 1827. Throughout the year off, which he spent in Berlin, Ohm believed that his publication would earn him a better position at some reputed university, but this did not happen, and Ohm reluctantly resumed his post at the Jesuit Gymnasium of Cologne.
What was even worse was that, despite being strongly influential, his work was received with almost no enthusiasm. Deeply hurt by this, he decided to shift base to Berlin. As a result, Ohm formally resigned his position in March 1828 and took up temporary work teaching mathematics in schools in Berlin. In 1833, Ohm accepted the position of a professor at Nuremberg. Though this gave him the title he had so much desired all his life, he was still not satisfied. Ohm's hard work and perseverance were finally recognised in 1841, when he received the Copley Medal of the Royal Society. The following year, he was appointed a foreign member of the Royal Society. In 1845, Ohm became a full member of the Bavarian Academy. Four years later, he took up a post in Munich as curator of the Bavarian Academy's physics cabinet and lectured at the University of Munich. It was only in 1852 that Ohm was appointed to the chair of physics at the University of Munich, a position he had craved and striven for all his life.

Death & Legacy

Georg Ohm breathed his last in Munich in 1854 and was interred in the Alter Südfriedhof. Not much is known about the cause of his death. His name lives on in the terminology of electrical science through Ohm's Law, and the SI unit of resistance, the ohm (symbol Ω), is also named after this ace physicist.

Timeline:
1789: Georg Simon Ohm was born
1800: Enrolled in the Erlangen Gymnasium
1805: Took admission in the University of Erlangen
1806: Was sent to Switzerland; took up a post as a mathematics teacher in a school in Gottstadt bei Nydau
1809: Became a private tutor in Neuchâtel
1811: Returned to the University of Erlangen; received his PhD degree from the University of Erlangen
1813: Took up the position of a teacher of mathematics and physics in Bamberg
1817: Offered a position as a teacher of mathematics and physics at the Jesuit Gymnasium of Cologne
1825: Submitted his first paper
1826: Presented two more papers; was given a year off by the Jesuit Gymnasium of Cologne
1827: Published his famous book, Die galvanische Kette; resumed work at the Jesuit Gymnasium of Cologne
1828: Resigned his position
1833: Accepted the position of a professor at Nuremberg
1841: Received the Copley Medal of the Royal Society
1842: Appointed a foreign member of the Royal Society
1845: Became a full member of the Bavarian Academy
1849: Took up a post in Munich as curator of the Bavarian Academy's physics cabinet
1852: Appointed to the chair of physics at the University of Munich
1854: Georg Simon Ohm breathed his last
Algebra 1

Several of my current Geometry students have commented on this very contrast, which has prompted me to offer a few possible reasons. First, Geometry requires a heavy reliance on explanations and justifications (particularly of the formal two-column-proof variety) that involve stepwise, deductive reasoning. For many, this is their first exposure to this type of thought process, which is basically absent in Algebra 1. Second, a large part of Geometry involves 2-D and 3-D visualization abilities and the differences in appearance between shapes, even when they are not positioned upright. Further, for a number of students, distinguishing the characteristic properties of the different shapes becomes a new challenge. Third, in many cases Geometry entails the ability to form conjectures about observed properties of shapes, lines, line segments and angles even before the facts have been clearly established and stated.
Quicksort is a fast sorting algorithm, used not only for educational purposes but also widely applied in practice. On average it has O(n log n) complexity, making quicksort suitable for sorting big data volumes. The idea of the algorithm is quite simple, and once you realize it, you can write quicksort as fast as bubble sort.

The divide-and-conquer strategy is used in quicksort. Below the recursion step is described:

1. Choose a pivot value. We take the value of the middle element as the pivot value, but it can be any value which is in the range of the sorted values, even if it isn't present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than the pivot go to the left part of the array and all elements greater than the pivot go to the right part. Values equal to the pivot can stay in either part. Notice that the array may be divided into non-equal parts.
3. Sort both parts. Apply the quicksort algorithm recursively to the left and right parts.

Partition algorithm in detail

There are two indices, i and j. At the very beginning of the partition algorithm, i points to the first element in the array and j points to the last one. The algorithm moves i forward until an element with a value greater than or equal to the pivot is found. Index j is moved backward until an element with a value less than or equal to the pivot is found. If i ≤ j, the elements are swapped, then i steps to the next position (i + 1) and j steps to the previous one (j - 1). The algorithm stops when i becomes greater than j. After partition, all values before the i-th element are less than or equal to the pivot, and all values after the j-th element are greater than or equal to the pivot.

Example: sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort. Notice that we show here only the first recursion step, in order not to make the example too long; {1, 2, 5, 7, 3} and {14, 7, 26, 12} are then sorted recursively.

Why does it work?
On the partition step the algorithm divides the array into two parts, and every element a from the left part is less than or equal to every element b from the right part; in other words, the inequality a ≤ pivot ≤ b holds. After completion of the recursive calls, both parts become sorted and, taking into account the argument stated above, the whole array is sorted.

Complexity analysis

On average quicksort has O(n log n) complexity, but a rigorous proof of this fact is not trivial and is not presented here. Still, you can find the proof in [1]. In the worst case, quicksort runs in O(n^2) time, but on most "practical" data it works just fine and outperforms other O(n log n) sorting algorithms.

Code snippets

The partition algorithm is important per se, so it may be carried out as a separate function. The code for C++ contains a solid function for quicksort, but the Java code contains two separate functions for partition and sort, accordingly.

Two-function version (partition separated out):

    int partition(int arr[], int left, int right)
    {
        int i = left, j = right;
        int tmp;
        int pivot = arr[(left + right) / 2];

        while (i <= j) {
            while (arr[i] < pivot)
                i++;
            while (arr[j] > pivot)
                j--;
            if (i <= j) {
                tmp = arr[i];
                arr[i] = arr[j];
                arr[j] = tmp;
                i++;
                j--;
            }
        }
        return i;
    }

    void quickSort(int arr[], int left, int right)
    {
        int index = partition(arr, left, right);
        if (left < index - 1)
            quickSort(arr, left, index - 1);
        if (index < right)
            quickSort(arr, index, right);
    }

Solid version (partition inlined):

    void quickSort(int arr[], int left, int right)
    {
        int i = left, j = right;
        int tmp;
        int pivot = arr[(left + right) / 2];

        /* partition */
        while (i <= j) {
            while (arr[i] < pivot)
                i++;
            while (arr[j] > pivot)
                j--;
            if (i <= j) {
                tmp = arr[i];
                arr[i] = arr[j];
                arr[j] = tmp;
                i++;
                j--;
            }
        }

        /* recursion */
        if (left < j)
            quickSort(arr, left, j);
        if (i < right)
            quickSort(arr, i, right);
    }

Full quicksort package

The full quicksort package includes:

• Ready-to-print PDF version of the quicksort tutorial.
• Full, thoroughly commented quicksort source code (Java & C++).
• Generic quicksort source code in Java (Advanced).
• Generic quicksort source code using templates in C++ (Advanced).

Download link: full quicksort package

Recommended books

1. Cormen, Leiserson, Rivest. Introduction to Algorithms. (Theory)
2. Aho, Ullman, Hopcroft. Data Structures and Algorithms. (Theory)
3. Robert Lafore. Data Structures and Algorithms in Java. (Practice)
4. Mark Allen Weiss. Data Structures and Problem Solving Using C++. (Practice)

Links

1. Quicksort animation (with source code line-by-line visualization)
2. Quicksort in Java Applets Centre
3. Animated Sorting Algorithms: Quicksort

Eleven responses to "Quicksort tutorial"

1. on Oct 22, 2009: wow this is the BEST explanation i have found yet for quick sort. Thanks!
2. on June 19, 2009: very clear and informative. Thanks a lot, this was very helpful.
3. on April 20, 2009: very good algo for quick sort; this helps students so much
4. on Mar 23, 2009: thanks for the tip, it really helps — simple and brief! do you have an example flowchart of it?
   [Admin] No, we haven't at the moment, though flowcharts for algorithms are on our to-do list.
5. on Mar 16, 2009: thank you, your code is really simple to understand and use
6. on Mar 5, 2009: one of the best explanations of quicksort on the net. great work. keep it coming!
7. on Feb 27, 2009: Thanks for the great program. It is shorter and simpler than any other quicksort I have come across.
8. on Feb 12, 2009: it is really simple and much better than any of the examples I came across
9. on Jan 6, 2009: Thank you, I am really happy because the code is simple and can be understood
10. on Jan 3, 2009: Really showed exactly what I wanted to know. Now if you could also include something on tail-recursion elimination, it would indeed be very helpful.
    [Admin] We are going to develop a "Quicksort in-depth" article, which will examine advanced quicksort problems, such as choosing the pivot value, quicksort optimization on small data volumes, etc.
11.
on Nov 12, 2008: Thx You :)
9.1: Parts of Circles & Tangent Lines

Created by: CK-12

Learning Objectives

• Define circle, center, radius, diameter, chord, tangent, and secant of a circle.
• Explore the properties of tangent lines and circles.

Review Queue

1. Find the equation of the line with $m = -2$
2. Find the equation of the line that passes through (6, 2) and (-3, -1).
3. Find the equation of the line perpendicular to the line in #2 that passes through (-8, 11).

Know What? The clock to the right is an ancient astronomical clock in Prague. It has a large background circle that tells the local time and the "ancient time," and a smaller circle that rotates around on the orange line to show the current astrological sign. The yellow point is the center of the larger clock. How does the orange line relate to the small and large circles? How does the hand with the moon on it (the black hand with the circle) relate to both circles? Are the circles concentric or tangent? For more information on this clock, see: http://en.wikipedia.org/wiki/Prague_Astronomical_Clock

Defining Terms

Circle: The set of all points that are the same distance away from a specific point, called the center.

Radius: The distance from the center to the circle.

The center is typically labeled with a capital letter because it is a point. If the center is $A$, we call this "circle $A$," written $\bigodot A$.

Chord: A line segment whose endpoints are on a circle.

Diameter: A chord that passes through the center of the circle.

Secant: A line that intersects a circle in two points.

Tangent: A line that intersects a circle in exactly one point.

Point of Tangency: The point where the tangent line touches the circle.
Notice that the tangent ray $\overrightarrow{TP}$ and tangent segment $\overline{TP}$ are also called tangents. Example 1: Identify the parts of $\bigodot{A}$: a) A radius b) A chord c) A tangent line d) The point of tangency e) A diameter f) A secant Solution: a) $\overline{HA}$ or $\overline{AF}$ b) $\overline{CD}, \overline{HF}$, or $\overline{DG}$ c) $\overleftrightarrow{BJ}$ d) Point $H$ e) $\overline{HF}$ f) $\overleftrightarrow{BD}$ Coplanar Circles Two circles can intersect in two points, one point, or no points. If two circles intersect in one point, they are called tangent circles. Congruent Circles: Two circles with the same radius, but different centers. Concentric Circles: When two circles have the same center, but different radii. If two circles have different radii, they are similar. All circles are similar. Example 2: Determine if any of the following circles are congruent. Solution: From each center, count the units to the circle. It is easiest to count vertically or horizontally. Doing this, we have: $\text{Radius of} \ \bigodot A = 3 \ units\!\\\text{Radius of} \ \bigodot B = 4 \ units\!\\\text{Radius of} \ \bigodot C = 3 \ units$ From these measurements, we see that $\bigodot A \cong \bigodot C$. Notice that two circles can be congruent, just like two triangles or quadrilaterals; only the lengths of the radii need to be equal. Tangent Lines We just learned that two circles can be tangent to each other. Two circles can be tangent in two different ways, either internally tangent or externally tangent. If the circles are not tangent, they can share a tangent line, called a common tangent. Common tangents can be internal or external too. Notice that the common internal tangent passes through the space between the two circles. Common external tangents stay on the top or bottom of both circles. Tangents and Radii The tangent line and the radius drawn to the point of tangency have a unique relationship. Let’s investigate it here.
Investigation 9-1: Tangent Line and Radius Property Tools needed: compass, ruler, pencil, paper, protractor 1. Using your compass, draw a circle. Locate the center and draw a radius. Label the radius $\overline{AB}$, with $A$ as the center. 2. Draw a tangent line, $\overleftrightarrow{BC}$, where $B$ is the point of tangency. To draw a tangent line, line your ruler up with point $B$ so that $B$ is the only point of the circle the line passes through. 3. Using your protractor, find $m \angle ABC$. Tangent to a Circle Theorem: A line is tangent to a circle if and only if the line is perpendicular to the radius drawn to the point of tangency. To prove this theorem, the easiest way to do so is indirectly (proof by contradiction). Also, notice that this theorem uses the words “if and only if,” making it a biconditional statement. Therefore, the converse of this theorem is also true. Example 3: In $\bigodot A$, $\overline{CB}$ is tangent at point $B$. Find $AC$. Reduce any radicals. Solution: Because $\overline{CB}$ is tangent, $\overline{AB} \bot \overline{CB}$, making $\triangle ABC$ a right triangle with legs $AB = 5$ (the radius) and $BC = 8$. Use the Pythagorean Theorem to find $AC$. $5^2+8^2 &= AC^2\\25+64 &= AC^2\\89 &= AC^2\\AC &= \sqrt{89}$ Example 4: Find $DC$ in $\bigodot A$ above. Round your answer to the nearest hundredth. Solution: $DC &= AC - AD\\DC &= \sqrt{89}-5 \approx 4.43$ Example 5: Determine if the triangle below is a right triangle. Explain why or why not. Solution: To determine if the triangle is a right triangle, use the Pythagorean Theorem. The longest length, $4 \sqrt{10}$, would be the hypotenuse $c$. $8^2+10^2 & \ ? \ \left( 4 \sqrt{10} \right)^2\\64+100 &\neq 160$ $\triangle ABC$ is not a right triangle, so $\overline{CB}$ is not tangent to $\bigodot A$. Example 6: Find the distance between the centers of the two circles. Reduce all radicals. Solution: The distance between the two circles is $AB$. The circles are not tangent; however, $\overline{AD} \bot \overline{DC}$ and $\overline{DC} \bot \overline{CB}$. Draw in $\overline{BE}$ so that $EDCB$ is a rectangle, then use the Pythagorean Theorem to find $AB$. $5^2+55^2 &= AB^2\\25+3025 &= AB^2\\3050 &= AB^2\\AB &= \sqrt{3050}=5\sqrt{122}$ Tangent Segments Let’s look at two tangent segments, drawn from the same external point. If we were to measure these two segments, we would find that they are equal. Theorem 10-2: If two tangent segments are drawn from the same external point, then the segments are equal. The proof of Theorem 10-2 is in the review exercises.
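Examples 3-6 all reduce to the Tangent to a Circle Theorem plus one application of the Pythagorean Theorem; a quick numeric check of the arithmetic (lengths taken from the worked equations above):

```python
import math

# Example 3: radius AB = 5, tangent segment BC = 8; the Tangent to a
# Circle Theorem makes the angle at B right, so Pythagoras gives AC.
AC = math.hypot(5, 8)
assert math.isclose(AC, math.sqrt(89))

# Example 4: AD = 5 is a radius along AC, so DC = AC - AD.
DC = AC - 5
assert round(DC, 2) == 4.43

# Example 5: 8^2 + 10^2 = 164, not (4*sqrt(10))^2 = 160,
# so the triangle is not right and CB is not tangent.
assert not math.isclose(8**2 + 10**2, (4 * math.sqrt(10))**2)

# Example 6: legs 5 and 55 give the distance between the centers.
AB = math.hypot(5, 55)
assert math.isclose(AB, 5 * math.sqrt(122))
```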
Example 7: Find the perimeter of $\triangle ABC$. Solution: $AE = AD$, $EB = BF$, and $CF = CD$. Therefore, the perimeter of $\triangle ABC=6+6+4+4+7+7=34$. We say that $\bigodot G$ is inscribed in $\triangle ABC$. Example 8: If $D$ and $A$ are the centers and $AE$ is tangent to both circles, find $DC$. Solution: Because $AE$ is tangent to both circles, it is perpendicular to both radii, and $\triangle ABC$ and $\triangle DBE$ are similar. To find $DB$, use the Pythagorean Theorem. $10^2+24^2 &= DB^2\\100+576 &= 676\\DB &= \sqrt{676}=26$ To find $BC$, use similar triangles: $\frac{5}{10}=\frac{BC}{26} & \longrightarrow BC=13\\DC=DB+BC &= 26+13=39$ Example 9: Algebra Connection Find the value of $x$. Solution: Because $\overline{AB} \bot \overline{AD}$ and $\overline{DC} \bot \overline{CB}$, $\overline{AB}$ and $\overline{CB}$ are tangent segments drawn from the same external point, so $AB = CB$. Set the two expressions equal and solve for $x$. $4x-9 &= 15\\4x &= 24\\x &= 6$ Know What? Revisited Refer to the photograph in the “Know What?” section at the beginning of this chapter. The orange line (which is normally black, but outlined for the purpose of this exercise) is a diameter of the smaller circle. Since this line passes through the center of the larger circle (yellow point, also outlined), it is part of one of its diameters. The “moon” hand is a diameter of the larger circle, but a secant of the smaller circle. The circles are not concentric because they do not have the same center and are not tangent because the sides of the circles do not touch. Review Questions Determine which term best describes each of the following parts of $\bigodot P$. 1. $\overline{KG}$ 2. $\overleftrightarrow{FH}$ 3. $\overline{KH}$ 4. $E$ 5. $\overleftrightarrow{BK}$ 6. $\overleftrightarrow{CF}$ 7. $A$ 8. $\overline{JG}$ 9. What is the longest chord in any circle? Copy each pair of circles. Draw in all common tangents. Coordinate Geometry Use the graph below to answer the following questions. 13. Find the radius of each circle. 14. Are any circles congruent? How do you know? 15. Find all the common tangents for $\bigodot B$ and $\bigodot C$. 16. Find $CE$, the distance between the centers of $\bigodot C$ and $\bigodot E$. 17.
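A similar quick check of Examples 7-9, using the fact that tangent segments from a common external point are congruent:

```python
import math

# Example 7: tangent segments from one external point are congruent
# (Theorem 10-2), so each of the lengths 6, 4, 7 appears twice.
perimeter = 2 * (6 + 4 + 7)
assert perimeter == 34

# Example 8: legs 10 and 24 give DB; similar triangles with ratio
# 5/10 then scale the hypotenuse 26 down to BC, and DC = DB + BC.
DB = math.hypot(10, 24)
BC = (5 / 10) * DB
DC = DB + BC
assert math.isclose(DB, 26) and math.isclose(BC, 13) and math.isclose(DC, 39)

# Example 9: the two tangent segments are equal, so 4x - 9 = 15.
x = (15 + 9) / 4
assert x == 6.0
```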
Find the equation of $\overline{CE}$. Determine whether the given segment is tangent to $\bigodot K$. Algebra Connection Find the value of the indicated length(s) in $\bigodot C$. $A$ and $B$ are points of tangency. Simplify all radicals. 27. $A$ and $B$ are points of tangency for $\bigodot C$ and $\bigodot D$, respectively. 1. Is $\triangle AEC \sim \triangle BED$? Why? 2. Find $BC$. 3. Find $AD$. 4. Using the trigonometric ratios, find $m \angle C$. 28. Fill in the blanks in the proof of Theorem 10-2. Given: $\overline{AB}$ and $\overline{CB}$ with points of tangency at $A$ and $C$; $\overline{AD}$ and $\overline{DC}$ are radii. Prove: $\overline{AB} \cong \overline{CB}$ Statement Reason 2. $\overline{AD} \cong \overline{DC}$ 3. $\overline{DA} \bot \overline{AB}$ and $\overline{DC} \bot \overline{CB}$ 4. Definition of perpendicular lines 5. Connecting two existing points 6. $\triangle ADB$ and $\triangle DCB$ are right triangles 7. $\overline{DB} \cong \overline{DB}$ 8. $\triangle ABD \cong \triangle CBD$ 9. $\overline{AB} \cong \overline{CB}$ 29. From the above proof, we can also conclude (fill in the blanks): 1. $ABCD$ is a ___________. 2. The line that connects the ___________ and the external point $B$ ___________ $\angle ADC$ and $\angle ABC$. 30. Points $A, B, C$, and $D$ are all points of tangency. Explain why $\overline{AT} \cong \overline{BT} \cong \overline{CT} \cong \overline{DT}$. 31. Circles tangent at $T$ are centered at $M$ and $N$. $\overline{ST}$ is tangent to both circles at $T$. Find the radius of the smaller circle if $\overline{SN} \bot \overline{SM}$, $SM=22, TN=25$, and $m \angle SNT=40^\circ$. 32. Four circles are arranged inside an equilateral triangle as shown. If the triangle has sides equal to 16 cm, what is the radius of the bigger circle? What are the radii of the smaller circles? 33. Circles centered at $A$ and $B$ are tangent at $W$. Explain why $A, B$, and $W$ are collinear. $\overline{TU}$ is a common external tangent to the two circles, and $\overline{VW}$ is tangent to both. Justify the fact that $\overline{TV} \cong \overline{VU} \cong \overline{VW}$. Review Queue Answers 1. $y = -2x+3$ 2. $y = \frac{1}{3} x$ 3. $y = -3x - 13$
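The Review Queue answers for #2 and #3 can be verified with exact rational arithmetic; a short check using Python's fractions module:

```python
from fractions import Fraction as F

# Review Queue #2: line through (6, 2) and (-3, -1).
m = F(2 - (-1), 6 - (-3))        # slope = 3/9 = 1/3
b = 2 - m * 6                    # y-intercept 0, so y = (1/3)x
assert m == F(1, 3) and b == 0

# Review Queue #3: perpendicular slope is -1/m = -3, through (-8, 11).
m_perp = -1 / m
b_perp = 11 - m_perp * (-8)      # 11 - 24 = -13, so y = -3x - 13
assert m_perp == -3 and b_perp == -13
```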
{"url":"http://www.ck12.org/book/Geometry---Second-Edition/r1/section/9.1/anchor-content","timestamp":"2014-04-17T11:04:28Z","content_type":null,"content_length":"151412","record_id":"<urn:uuid:f493a96e-2a27-4804-9d81-b209b2897560>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Several questions regarding xtprobit and margins command

From: Nick Cox <njcoxstata@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Several questions regarding xtprobit and margins command
Date: Tue, 13 Nov 2012 13:27:15 +0000

Sounds good. But making -margins- work is a secondary issue. You told your model that -Outcome- was a factor variable, which is a different story from it being treated as a measured predictor.

On Tue, Nov 13, 2012 at 1:14 PM, Tobias Morville <tobiasmorville@gmail.com> wrote:
> Hi Nick, and again thanks. I think I figured it out.
> If I did i.Outcome, it did work, which shows that you were right.
> My other problem (with constant effects) was that the default for
> margins is a linear prediction. Adding predict(pu0) solved this.
> As I understand it, predict(pu0) says that there is some random effect
> from all subjects, then averages that and compares each subject
> to that random effect. It does not assume away the random effects... I
> think!
> But this solved my questions, and I hope that the above might help
> someone with the same problem at some point.
> T
> 2012/11/13 Nick Cox <njcoxstata@gmail.com>:
>> My guess is that, although the error message may be confusing you,
>> -Outcome- does not qualify as an acceptable argument for -margins-
>> because, as said, it does not qualify as a factor variable or interaction.
>> Nick
>> On Tue, Nov 13, 2012 at 8:29 AM, Tobias Morville
>> <tobiasmorville@gmail.com> wrote:
>>> Hey Nick, and thanks for the answer on Q#1, and the grammar correction too :-)
>>> I run
>>> -xtprobit stop_dummy Outcome outcome_lag1 seqEarn seqearn_outcome, re-
>>> where seqEarn goes from 0 to approx 800 (in a very few obs), only in
>>> integers. Outcome is the number of eyes the die shows, and
>>> seqearn_outcome is an interaction between seqEarn and Outcome.
>>> This adds another question:
>>> As seqEarn (the accumulated earnings they get every game round) is a
>>> positive monotonically increasing function of Outcome(t-z), is there
>>> anything wrong with NOT adding the interaction term between them?
>> <snip>
>>>>> ________________________________________
>>>>> From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] on behalf of Nick Cox [njcoxstata@gmail.com]
>>>>> Sent: 12 November 2012 18:20
>>>>> To: statalist@hsphsun2.harvard.edu
>>>>> Subject: Re: st: Several questions regarding xtprobit and margins command
>>>>> I'll comment on your problem #1.
>>>>> The help for -margins- starts
>>>>> "margins [marginlist] [if] [in] [weight] [, response_options options]
>>>>> where marginlist is a list of factor variables or interactions that appear
>>>>> in the current estimation results."
>>>>> When you give arguments immediately after the command, the crucial part is
>>>>> "margins marginlist ...
>>>>> where marginlist is a list of factor variables or interactions that appear
>>>>> in the current estimation results."
>>>>> So, it would help if you gave the exact and complete -xtprobit-
>>>>> command you used. I suspect that the error message will make sense
>>>>> when we see the exact model you fitted.
>>>>> Nick
>>>>> P.S. "dice" is a strange word even to those for whom English is a
>>>>> first language. "dice" is a plural: the singular is "die". One die,
>>>>> two dice.
>>>>> On Mon, Nov 12, 2012 at 2:58 PM, Tobias Morville
>>>>> <tobiasmorville@gmail.com> wrote:
>>>>>> I have a set of questions regarding the margins command, and marginal
>>>>>> effects in general.
>>>>>> I have an unbalanced panel dataset of 4124 observations, unevenly
>>>>>> distributed over 18 subjects.
>>>>>> My model is as follows: P(stop) = Outcome outcome_lag1 seqEarn, which
>>>>>> I'm estimating in an RE probit setting with the xtprobit command.
>>>>>> Outcome: Outcome of a dice in period t. Lies from 1 to 6.
>>>>>> Outcome_lag1: Outcome of the dice in period t-1.
>>>>>> seqEarn: Accumulated earnings over each game. Drops to 0 if the subject
>>>>>> chooses to stop, or the dice shows a one. Starts at zero, and can only
>>>>>> get more positive as people climb the reward ladder.
>>>>>> All of these regressors are significant.
>>>>>> Sooo, now the questions begin:
>>>>>> 1) If I use -margins Outcome- (I followed this guide:
>>>>>> http://www.stata.com/stata12/margins-plots/), then I get this
>>>>>> error message: "'Outcome' not found in list of covariates", and that
>>>>>> actually is the case for all margins commands, and is my number one
>>>>>> headache.
>>>>>> The only margins command that works is -margins,
>>>>>> dydx(Outcome outcome_lag1 seqEarn)-, which leads me to my next
>>>>>> problem:
>>>>>> 2) When I use -margins, dydx(Outcome outcome_lag1 seqEarn)- my
>>>>>> marginal effects are exactly the same as my regression coefficients.
>>>>>> If I change the code to -margins, dydx(Outcome outcome_lag1 seqEarn)
>>>>>> atmeans- they're the same again..?
>>>>>> (So APE = MEM?)
>>>>>> I'm really confused about this, and I've read an earlier post
>>>>>> (http://www.stata.com/statalist/archive/2009-11/msg01517.html), which
>>>>>> covers some of the questions I have, but doesn't really answer them.
>>>>>> If I use -mfx compute, predict(pu0)- they change, but they become very
>>>>>> small.
>>>>>> And I'm guessing that pu0 means that I'm setting the random effect
>>>>>> to 0, which is a bad idea for my data, as there is quite a lot of
>>>>>> random variability.
>>>>>> 3) If I choose to ignore the fact that my marginal effects are the
>>>>>> same as my probit regression coefficients, then I'm in my next pickle:
>>>>>> my marginal effects are constant. If I plot the predicted
>>>>>> probability of stopping the game over seqEarn, it's constant, which
>>>>>> suits my data very badly. And I'm afraid that I've misunderstood
>>>>>> something very basic.
>>>>>> If I try -margins, dydx(seqEarn) at(Outcome=(2 3 4 5 6)) vsquish-,
>>>>>> the marginal effects are the same over Outcome size...
>>>>>> .. and it's basically the same.
>>>>>> What I would ideally like to see is that the predicted probability changes
>>>>>> both over seqEarn and over Outcome and outcome_lag1 in some systematic
>>>>>> way, but right now my newbieness in Stata troubleshooting is driving
>>>>>> me up the wall.
>>>>>> ------
>>>>>> Background (just for the interested):
>>>>>> I'm currently working with a dataset of 18 subjects playing a virtual
>>>>>> dice game for 25 mins while in an fMRI scanner. The dice is random
>>>>>> (1-6), and if you roll one, you lose whatever you accumulated this
>>>>>> round. It's a balloon kind of thing: how far do people dare to go up
>>>>>> the exponential reward ladder before banking their earnings.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
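The resolution the thread arrives at can be illustrated without Stata: after a probit, dydx on the linear-index (xb) scale is just the coefficient itself, while on the probability scale the marginal effect of a regressor is phi(xb) times the coefficient, which is smaller and varies with x. A rough stdlib-Python sketch (the coefficients b0, b1 and the regressor values are invented for illustration, not taken from the poster's data):

```python
import math

def Phi(z):   # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):   # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# Invented probit coefficients and regressor values, for illustration only.
b0, b1 = -1.0, 0.25
xs = [i / 2.0 for i in range(21)]          # 0, 0.5, ..., 10

# On the linear-index scale the derivative is b1 everywhere, which is why
# dydx after the default (linear) prediction just reproduces the
# coefficient.  On the probability scale P = Phi(b0 + b1*x), the average
# marginal effect is mean(phi(xb)) * b1, no longer constant in x:
ape = b1 * sum(phi(b0 + b1 * x) for x in xs) / len(xs)

# Cross-check with a central finite difference of the probability itself.
h = 1e-6
ape_fd = sum((Phi(b0 + b1 * (x + h)) - Phi(b0 + b1 * (x - h))) / (2 * h)
             for x in xs) / len(xs)

assert math.isclose(ape, ape_fd, rel_tol=1e-5)
assert abs(ape) < abs(b1)   # damped by phi(.) <= 0.4, so effects shrink
```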
{"url":"http://www.stata.com/statalist/archive/2012-11/msg00542.html","timestamp":"2014-04-19T05:19:18Z","content_type":null,"content_length":"18625","record_id":"<urn:uuid:a29d24a7-eda8-41e9-8cd4-99b9eff15895>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Turin, GA Algebra 2 Tutor Find a Turin, GA Algebra 2 Tutor ...I have developed a unique approach to tutoring that allows my students to learn in a highly disciplined and organized manner. Since I am also extremely knowledgeable in effective writing, research papers, and designing lesson plans and study guides, I feel that this further enhances my ability to... 57 Subjects: including algebra 2, reading, chemistry, geometry I am a Georgia certified educator with 12+ years of experience teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for the EOCT, CRCT, SAT, and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery. 13 Subjects: including algebra 2, statistics, SAT math, GRE ...I have also taught high school in the subjects of Biology, Zoology, and Environmental Studies. I have taught Anatomy at a technical school and tutored that subject also. I currently homeschool my 3 children and love watching them experience the joy of learning. 9 Subjects: including algebra 2, chemistry, biology, algebra 1 ...I derive satisfaction from career counseling, imparting knowledge, and mentoring. Going by my personal experience, I had no interest in and often struggled with mathematics as a subject until I came in contact with a tutor, in my middle school, who ignited a liking for the subject in me until it b... 18 Subjects: including algebra 2, calculus, geometry, accounting ...I was valedictorian of my high school, got a 35 on the ACT, a 1550 on my SAT (when it was out of 1600), and went to college on a number of local and national scholarships. I have tutored more than 30 students one-on-one for various tests and subjects, and some of my most recent students saw more...
17 Subjects: including algebra 2, chemistry, physics, geometry
{"url":"http://www.purplemath.com/turin_ga_algebra_2_tutors.php","timestamp":"2014-04-17T10:57:15Z","content_type":null,"content_length":"24067","record_id":"<urn:uuid:2cdddb94-362e-4596-bd7c-aa73867dd829>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
Variational estimation of inhomogeneous specular reflectance and illumination from a single view

Estimating the illumination and the reflectance properties of an object surface from a few images is an important but challenging problem. The problem becomes even more challenging if we wish to deal with real-world objects that naturally have spatially inhomogeneous reflectance. In this paper, we derive a novel method for estimating the spatially varying specular reflectance properties of a surface of known geometry as well as the illumination distribution of a scene from a specular-only image, for instance, recovered from two images captured with a polarizer to separate reflection components. Unlike previous work, we do not assume the illumination to be a single point light source. We model specular reflection with a spherical statistical distribution and encode its spatial variation with a radial basis function (RBF) network of their parameter values, which allows us to formulate the simultaneous estimation of spatially varying specular reflectance and illumination as a constrained optimization based on the I-divergence measure. To solve it, we derive a variational algorithm based on the expectation maximization principle. At the same time, we estimate optimal encoding of the specular reflectance properties by learning the number, centers, and widths of the RBF hidden units. We demonstrate the effectiveness of the method on images of synthetic and real-world objects. © 2011 Optical Society of America

OCIS Codes: (100.2960) Image processing : Image analysis; (100.3190) Image processing : Inverse problems; (150.2950) Machine vision : Illumination
ToC Category: Image Processing
Original Manuscript: July 16, 2010
Revised Manuscript: November 24, 2010
Manuscript Accepted: November 25, 2010
Published: January 17, 2011
Citation: Kenji Hara and Ko Nishino, "Variational estimation of inhomogeneous specular reflectance and illumination from a single view," J. Opt. Soc. Am. A 28, 136-146 (2011)
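The "I-divergence measure" named in the abstract is Csiszár's I-divergence, which for nonnegative vectors is D(p||q) = sum_i (p_i log(p_i/q_i) - p_i + q_i); it reduces to the KL divergence when p and q are probability distributions. A minimal sketch of the measure itself (the vectors here are made-up stand-ins for observed and predicted intensities, not data from the paper):

```python
import math

def i_divergence(p, q):
    """Csiszar's I-divergence between nonnegative vectors:
    D(p||q) = sum(p * log(p/q) - p + q)."""
    return sum(pi * math.log(pi / qi) - pi + qi for pi, qi in zip(p, q))

p = [0.2, 0.5, 1.3]       # hypothetical nonnegative "observed" values
q = [0.4, 0.5, 1.0]       # hypothetical model prediction

d = i_divergence(p, q)
assert d > 0                       # nonnegative, zero only when p == q
assert i_divergence(p, p) == 0.0   # each term p*log(1) - p + p vanishes
```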
{"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-28-2-136","timestamp":"2014-04-17T17:12:27Z","content_type":null,"content_length":"199791","record_id":"<urn:uuid:4c51c1b4-6d5b-4ea3-ba6b-f9eb3d74fcfc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic Geometry - Definition of a Morphism

What's the best (or your favourite) definition of a morphism of a quasi-projective variety? I've seen a huge number of equivalent definitions, and I'd like to know which is the best to memorise! I'd prefer to have a definition that doesn't mention sheaves/schemes or rational maps (since I normally define a rational map as a morphism on an open set, and don't want my definitions to be circular). Many thanks in advance!

A natural transformation between the functors of points... – Qiaochu Yuan Mar 22 '12 at 20:19
Thanks Qiaochu, but I was looking for something perhaps a little more concrete. – Edward Hughes Mar 22 '12 at 20:27
@Edward: I don't see what's not concrete about Qiaochu's answer. Think of a variety as something like a system of equations. The "functor of points" just tells you what solutions that system has, in any given ring. A morphism is just a systematic way of turning solutions of one system into solutions of another. Don't be scared by the word "functor"! – Tom Leinster Mar 22 '12 at 22:21
I voted Yuan's comment up, but I was laughing at the time. – roy smith Mar 23 '12 at 2:15
I suggest you read Mumford's red book on varieties and schemes. You'll get intuition both for Ruadhai's definition and for the functor of points. – Leo Alonso Mar 23 '12 at 15:13

4 Answers

A regular map $\phi: X \to Y$ of quasi-projective varieties is a continuous map with respect to the Zariski topology such that for $V \subset Y$ an open set and $f$ a regular function on $V$, the composite $f\circ \phi$ is regular on $\phi^{-1}V$. This seems to me to be exactly what you would want: quite intuitive and understandable.

By your regular function f, do you mean a regular map from V to A^1? In which case this seems a little circular... Sorry if I've misunderstood!
– Edward Hughes Mar 22 '12 at 21:04
Ah, it would be circular if you didn't know what a regular function was. A function $f: X \to \mathbb{A}^1$ on a quasi-projective variety $X$ is called regular if $\forall x\in X$ there is a neighbourhood $U$ of $x$ such that $f|_U$ is the quotient of two homogeneous polynomials $f|_U = F/G$ of the same degree such that $G$ has no zeroes on $U$. A good reference for this level of algebraic geometry is I.R. Shafarevich - Basic algebraic geometry I. – Ruadhaí Dervan Mar 22 '12 at 21:12
Marvellous, that's a fantastic definition now, and easy to remember! Thanks a lot. – Edward Hughes Mar 22 '12 at 22:18
@Ruadhai - would it be equivalent to say that a map $\phi:V \rightarrow W$ is a morphism iff for all $p \in V$ there exist affine open sets $U$ of $p$ and $U'$ of $\phi(p)$ s.t. $\phi(U) \subset U'$ and $\phi|_U$ is a morphism of affine varieties? – Edward Hughes Mar 22 '12 at 22:41
This thread should be helpful, it's essentially the same question. mathoverflow.net/questions/1397/… – Ruadhaí Dervan Mar 22 '12 at 23:50

I'll leave an answer mainly because I don't want to discourage people from asking basic questions. Actually, I don't have a favourite definition, but I'll give you my least favourite. What I mean by that is that it is clumsy, and not obviously well defined. However, there should be no doubt that it captures the correct meaning.
1. A map of affine varieties is a morphism if it is expressible by polynomials.
2. A map $f:X\to Y$ of general (quasiprojective or not) varieties is a morphism if it is locally a morphism of affine varieties. To make this more precise, we should require that $f$ is continuous, $Y$ has an open cover $\lbrace U_i\rbrace$ by affine varieties, $X$ has an affine open cover $\lbrace V_{j_i}\rbrace$ refining $\lbrace f^{-1}U_i\rbrace$, and $f:V_{j_i}\to U_i$ is a morphism in the sense of 1.
You can treat this as a yardstick for measuring the correctness of other more elegant definitions (mentioned above and elsewhere). This process takes time, and, unfortunately, there really isn't any shortcut.
Hm, in some sense this is my favorite definition... :-) – Karl Schwede Mar 23 '12 at 13:21
This reveals the fact that different definitions are useful for different purposes. E.g. if one wants to display an actual morphism, this one is hard to avoid. For abstract proofs, others may appeal more. – roy smith Mar 24 '12 at 19:22
E.g. see which definition helps most to find a morphism from the intersection curve of the quadric surfaces Q : {x^2 – xz – yw =0} and Q' : {yz – xw – zw =0} in P^3, to the plane cubic C: {y^2.z = x^3 – xz^2} in P^2. (Hint: try the "obvious" one.) – roy smith Mar 24 '12 at 20:08
Donu, is this definition complete? I.e. the notion of a covering by "affine varieties" seems to require the prior definition of an isomorphism, hence of a morphism. What am I missing? – roy smith Apr 2 '12 at 16:14
Dear roy, for what it's worth, in my mind I was imagining the definition of a variety to be some affine varieties and gluing data (which can be expressed as isomorphisms of affine varieties). – Karl Schwede Apr 2 '12 at 17:56

I confess I really like this question, because it troubled me a lot when trying to teach (and understand) beginning algebraic geometry. There seem to be two questions here: 1) which is the more fundamental notion, morphism or rational map; and 2) what special definition is preferred in the case of quasi-projective varieties? I finessed these difficulties for years by working largely in the category of complex analytic varieties and appealing to Chow's theorems, so I am no expert, but I have thought about it. I will share my necessarily naive conclusions, so I can learn from the responses. As to 1), the notion of rational map seems more fundamental except in the one case of polynomial maps of affine varieties.
I.e. if one wants the concept of regularity to be local, one must define it using rational functions. This arises as soon as one defines a regular function on even a quasi affine variety. So it appears as if regular functions are naturally a subclass of rational ones. I.e. first one must understand rational functions, and the set on which such a function is regular. If the varieties are embedded in projective space as closed sets, one could use the standard affine open cover and stick to polynomials, but if they are abstract or even quasi projective, it seems to require a notion of isomorphism, or of the sheaf of regular functions, to define an affine open set. Note Ruadhai's definition above requires the use of rational functions to define that of regular ones, after which the definition of morphism appears to involve only the concept "regular".

In case 2) we are dealing with projective space, which has two representations: as a space with a natural affine open cover, but also as a space which is a group quotient of an open subset of affine space. Hence there are two natural definitions of regular map: locally regular in terms of the affine cover, but also as a regular map of the big affine space which is constant on orbits. The latter definition yields a map by homogeneous polynomials of the same degree. This definition is the one most convenient for writing down examples. Again, Ruadhai's definition of regular locally rational function uses the homogeneous representation, so that definition of morphism is an amalgam of abstract sheaf theoretic concepts and concrete projective ones. The fully concrete version of Ruadhai's definition of morphism is thus: a map locally defined by homogeneous polynomials of the same degree with non vanishing denominators. This is the definition actually used in practice to write down morphisms of projective and quasi projective varieties.
Several basic books give both definitions (locally rational and regular for the affine open cover, and local homogeneous representation) in the quasi projective case: Shafarevich, Harris, and Reid, I believe. This seems to me a helpful practice. Bill Fulton's lovely little book on curves, which is intended to prepare one for schemes, uses only the abstract open cover approach as definition, but then leaves as an exercise to the reader to check that such innocent maps as [x:y]-->[x:y:z] from P^1 to P^2 are regular.

To summarize, it seems one cannot avoid local definitions, and if one wants to avoid fractions altogether, the only definition I know of a morphism of quasi projective varieties is as a map given locally by homogeneous polynomials of the same degree without common zeroes. In my example above of a map from the intersection of two quadrics to a plane cubic, the map [x:y:z:w] to [x:y:z] is undefined at [0:0:0:1]. By using the locally rational regular map definition in affine coordinates, it seems that point goes to [-1:0:1], as given by the hopefully equivalent map [w(y-w) : (x-z)(y-w) : w^2].

Edit: This is my attempt to understand Donu's definition above. Now I think I understand it, as well as why it is a benchmark, and yet not his favorite. Of course I cannot speak for him. One of its beauties, as he said, is that it does mimic closely the familiar chart/coordinate system definition from analytic and differential geometry. Its shortcoming is perhaps its cumbersome nature. E.g. it requires an infinite family of coordinate charts even to discuss regular functions on (open subsets) of affine space. As to whether it enables one to dispense with rational as opposed to polynomial maps, well yes and no. It is true that all regular maps are locally polynomial in terms of the charts, but the trick is that the charts themselves are defined by rational functions. E.g. suppose I want to glue two copies of A^1, along the complement of their origins, to get P^1.
Can this gluing be done by polynomial functions? Well yes, as Karl said, but it depends somewhat on your point of view. I may not think the regular function 1/z is a polynomial function on the set {z≠0} in A^1. But to understand Karl's point, I must realize that z is not an affine coordinate system for that subset, so I should not expect all regular functions there to be polynomials in z. But the pair of functions (z,1/z) = (z,w) is an affine coordinate system there, and then 1/z is a polynomial in w, i.e. 1/z is a polynomial in the variable 1/z! In this way one can make any regular locally rational function look locally polynomial.

Here are some details as I understand them. Define an abstract variety as a topological space with a basis of open sets {Uj} such that each Uj is equipped with a homeomorphism fj:Uj-->Vj where Vj is a Zariski closed subset of some affine space. Then require that every inclusion map Ui into Uj becomes a polynomial map of the corresponding affine varieties (fj)o(fi)^(-1):Vi-->Vj. In this category Donu's definition of morphism makes perfect sense. I.e. a continuous map of abstract varieties is a morphism if it is locally polynomial in some collection of coordinate systems. In particular a continuous k-valued function is regular if and only if it is locally defined by polynomials in some coordinate cover. Nothing could be conceptually simpler or more natural.

What is the catch? With this definition it is not immediately obvious that any familiar (non finite) example at all is an abstract variety, not even k ≈ A^1. I.e. it takes an infinite number of affine coordinate charts even to give affine space itself the structure of an abstract variety. Moreover these charts are defined by rational functions.
If we define a coordinate system in U as a finite set of regular functions such that every regular function in U is a polynomial in terms of those functions, then it is sufficient but not necessary to be affine in order to possess a coordinate system. Fortunately every affine variety has a topological basis of affine open sets, thus we can put a structure of abstract variety on every quasi projective variety. The difference between the algebraic case and the analytic and differential cases is that the restriction of an affine coordinate system may not be an affine coordinate system. Thus we cannot use the same coordinate system on every open subset of affine space, as we would in differential geometry.

Lemma: The principal open subsets Uf = {f≠0} define a structure of abstract affine variety on affine space A^n.

Proof: It suffices to use polynomials f which have no repeated prime factors. Define the coordinate map Uf-->A^(n+1) by sending x-->(x,1/f(x)) = (x,T). Then the image Vf = {1-f.T = 0} is a closed affine set. The coordinate map itself is defined by regular rational functions on Uf, and is a homeomorphism. Moreover, Ug is contained in Uf if and only if g = fh for some polynomial h. If we map Ug-->Vg by x-->(x,1/g(x)) = (x,W), then Vg = {1-g.W=0}. Hence the inclusion map from Ug to Uf becomes, in affine coordinates, the map (x,W)-->(x,W.h(x)) from Vg to Vf, a polynomial map in the coordinates (x,W). QED.

With this lemma it seems one can use restrictions to define a structure of abstract variety on every quasi affine and quasi projective variety. Finally, as Donu remarked, it is not obvious that one can check regularity using any coordinate cover. I.e. there might be one cover by affine coordinate systems in which a given map is locally polynomial, and yet another cover by different affine coordinate systems in which it is not.
One must prove the usual theorem, via the Nullstellensatz, that a locally polynomial function on an affine variety is globally polynomial. This is a beautiful, conceptually natural point of view on what it means to be a morphism of varieties. I would advocate, after giving this definition, proving that a map of quasi projective varieties is a morphism in this sense if and only if it has the property in the accepted answer above, if and only if it can be defined locally by sequences of homogeneous polynomials with no common zeroes. I.e. it is hard to be prepared for all situations with just one characterization.

The last (homogeneous polynomial) point of view can then lead to the important idea that a morphism of a variety to projective space is also defined by a line bundle and a sequence of regular sections without common zeroes, an approach not yet mentioned. I.e. maps of a variety to projective space are much more special than maps to arbitrary varieties, and this special case is well worth understanding. If we want to discuss the meaning of the common zeroes of sections, as in the example of the intersection curve of two quadrics, note the restriction of a linear polynomial to this curve must vanish 4 times, so the image of the map defined by linear polynomials should have degree 4. Since the image curve has degree three there must be a point where the rational map is not defined. I.e. the line bundle on the intersection curve defining the morphism is O(1) restricted to the curve tensored with O(-p), where p is the point [0:0:0:1]. Since we extended the map using quadratic polynomials, each vanishing 8 times on the curve, presumably those polynomials have 5 common zeroes on our curve.

Roy, thanks for sharing your thoughts on this. – Donu Arapura Apr 4 '12 at 17:06

If I understand correctly you are looking for the most elementary definition possible.
I assume that you are working over an algebraically closed field and you think of quasi-projective varieties as subsets of the projective space. In this case I'll suggest the following definition: a morphism $f:X \to Y$ is a Zariski closed subset $\Gamma(f) \subset X \times Y$ s.t. the projection $\Gamma(f) \to X$ is a bijection.

You may ask what is a Zariski closed subset of $X \times Y$. Here are 2 definitions:

1. A subset $Z$ s.t. for any open (quasi) affine sets $U \subset X$, $V \subset Y$ the intersection $Z \cap (U \times V)$ is Zariski closed in $U \times V$. (I assume that you know what is a Zariski closed subset of a (quasi) affine variety.)

2. There is a well known embedding of a product of two projective spaces into a larger projective space. This shows that $X \times Y$ is quasi-projective (or gives a quasi-projective structure on $X \times Y$, depending on your framework). I assume that you know what is a Zariski closed subset of a quasi-projective variety.

I like this definition too, but somehow it seems a bit heavy to require us to consider product spaces to define a morphism. That's just personal preference, btw, so you may disagree! I guess I'm just looking for the most intuitive definition, and thus far I reckon Ruadhai has it. Any other suggestions? – Edward Hughes Mar 22 '12 at 22:24

Does this definition work when X is not normal? E.g. the graph of the normalization of a curve with a cusp seems to project bijectively in both directions, but presumably is not a morphism in one of them. In Mumford's yellow book, there is an attempt to define a morphism using the graph, but he explicitly requires not just that there be only one image of a given point of the domain, but that the graph be locally the graph of a regular map, defined in terms of locally rational, regular functions. – roy smith Apr 2 '12 at 16:25

Roy, I'm afraid that you are right (and I'm wrong). Let me think how to fix it.
Sorry – Rami Apr 2 '12 at 21:29

Rami, well it works for normal varieties, hence for open sets in affine space. So it should suffice to fill the detail I was worried about in applying Donu's definition to quasi affine sets, namely how to find a basis of open affines, by showing a principal open set in k^n is isomorphic to a hypersurface. I.e. this principal open is smooth, hence normal. For principal opens in arbitrary closed affine sets we can restrict this isomorphism. ??? – roy smith Apr 3 '12 at 0:18
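As a concrete sanity check of roy smith's claim above that [x:y:z:w] → [x:y:z] agrees with [w(y-w) : (x-z)(y-w) : w^2] on the curve Q ∩ Q', one can exhibit each cross-product f_i g_j - f_j g_i as an explicit combination of the two defining quadrics. The cofactors below are worked out by hand, not taken from the thread; since these are polynomial identities, testing them at random integer points verifies them:

```python
import random

def q1(x, y, z, w):  # Q : x^2 - xz - yw = 0
    return x * x - x * z - y * w

def q2(x, y, z, w):  # Q' : yz - xw - zw = 0
    return y * z - x * w - z * w

def cross_product_identities(x, y, z, w):
    f = (x, y, z)                                  # the naive projection
    g = (w * (y - w), (x - z) * (y - w), w * w)    # roy smith's alternative map
    a, b = q1(x, y, z, w), q2(x, y, z, w)
    # each f_i g_j - f_j g_i lies in the ideal (a, b), with explicit cofactors,
    # so the two maps agree wherever both are defined on the curve a = b = 0
    assert f[0] * g[1] - f[1] * g[0] == (y - w) * a
    assert f[0] * g[2] - f[2] * g[0] == -w * b
    assert f[1] * g[2] - f[2] * g[1] == -w * a - (x - z) * b

for _ in range(1000):
    cross_product_identities(*(random.randint(-50, 50) for _ in range(4)))

# At [0:0:0:1], where [x:y:z] is undefined, the alternative map gives:
print((1 * (0 - 1), (0 - 0) * (0 - 1), 1 * 1))   # (-1, 0, 1), i.e. [-1:0:1]
```

The last line reproduces the value [-1:0:1] claimed in the answer.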
Integral of the Inverse Gamma Distribution

June 19th 2012, 07:50 AM #1

The inverse gamma distribution is a pdf. If I were to integrate the inverse gamma distribution, would it give an answer of 1? Is it difficult to integrate with large values of alpha and beta?
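Yes: as a probability density it integrates to 1 for any α, β > 0, since the substitution u = β/x turns the integral into the normalized gamma integral (1/Γ(α)) ∫₀^∞ u^(α-1) e^(-u) du = 1. Large α and β only make naive evaluation numerically awkward. A stdlib-only sketch (not from the thread) that checks this, evaluating the density in log space to avoid overflow:

```python
import math

def inv_gamma_pdf(x, alpha, beta):
    # f(x) = beta^alpha / Gamma(alpha) * x^(-alpha-1) * exp(-beta/x), x > 0,
    # computed via logs so beta**alpha and Gamma(alpha) never overflow
    log_f = (alpha * math.log(beta) - math.lgamma(alpha)
             - (alpha + 1) * math.log(x) - beta / x)
    return math.exp(log_f)

def trapezoid(f, a, b, n=100_000):
    # simple composite trapezoid rule
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

for alpha, beta in [(2.0, 3.0), (80.0, 150.0)]:
    # integrate over a range wide enough to hold essentially all the mass
    area = trapezoid(lambda x: inv_gamma_pdf(x, alpha, beta), 1e-6, 500.0)
    print(alpha, beta, round(area, 3))
```

Both areas come out at 1.0 to three decimals; the only care needed for large parameters is the log-space evaluation and a sensible integration window.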
Homework Help

Posted by Tell on Saturday, April 18, 2009 at 10:07am.

The company produces specialized glass units and is concerned that, as the market leader, they should be able to make higher profits than they currently are. The firm's costs are as follows: overheads 'd' are £800,000, labour costs per unit 'e' are £100, and raw material costs per unit 'f' are £258. They also spend £40,000 'X' per period on advertising and are capable of producing up to 1,000 units in each period. The total cost function is expressed as TC = d + eQ + fQ + X. The demand function is P = a - bQ + c√X, where a = 4,000, b = 3 and c = 3.

1) Lowering the price by 10%. What is the break-even level of output?
2) Doubling the advertising budget. What is the break-even level of the output?

• Maths - Tell, Sunday, April 19, 2009 at 9:30am
(reposts the same problem and questions)

• Economics - economyst, Monday, April 20, 2009 at 9:17am

Interestingly, you are asking for the break-even level of output rather than the profit-maximizing level. Hummm. Anyway. This is simply an algebra problem. First collapse the two Q terms in TC. So, TC = 800,000 + 358Q + 40,000. Total Revenue is P*Q.
So TR = 4000Q - 3Q^2 + 600Q (the 600Q comes from c√X = 3√40,000 = 600). Set TC = TR and solve for Q (subject to the constraint that Q <= 1000).
1) Repeat, except multiply all terms in the TR equation by 0.90.
2) Repeat, except change advertising by 40K.
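economyst's recipe reduces to a quadratic: with TC = 840,000 + 358Q and TR = 4600Q - 3Q², break-even means 3Q² - 4242Q + 840,000 = 0. A small sketch (the function name and the capacity filter are mine; the numbers come from the thread):

```python
import math

def break_even(fixed=840_000, unit_cost=358, a=4600, b=3, capacity=1000):
    # TR = TC:  a*Q - b*Q^2 = fixed + unit_cost*Q
    # =>  b*Q^2 - (a - unit_cost)*Q + fixed = 0
    A, B, C = b, -(a - unit_cost), fixed
    disc = math.sqrt(B * B - 4 * A * C)
    roots = ((-B - disc) / (2 * A), (-B + disc) / (2 * A))
    # keep only outputs the plant can actually produce
    return [q for q in roots if 0 <= q <= capacity]

print(break_even())   # one feasible root, roughly Q ≈ 238 units
```

The larger root (about 1176) exceeds the 1,000-unit capacity, so it is filtered out. For part 1 one would call `break_even(a=0.9 * 4600, b=0.9 * 3)`; for part 2 the fixed cost rises to 880,000 and the demand intercept uses c√80,000 in place of 600 (my reading of the problem, not spelled out in the thread).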
Astronomy 201a (Fall 2012) Supplementary Material

Dr. Onno Pols from Utrecht wrote an excellent set of textbook-level notes on stellar interiors (i.e., stellar structure, thermodynamics, and nuclear fusion) and stellar evolution. Local copies of the notes are provided here, grouped into 5 PDF files (each containing more than one chapter):

1. Introduction
2. Mechanical and Thermal Equilibrium
3. Equation of State of Stellar Interiors
4. Polytropic Stellar Models
5. Energy Transport in Stellar Interiors
6. Nuclear Processes in Stars
7. Stellar Models and Stellar Stability
8. Schematic Stellar Evolution: Consequences of the Virial Theorem
9. Early Stages of Evolution and the Main Sequence Phase
10. Post Main Sequence Evolution Through Helium Burning
11. Late Evolution of Low and Intermediate Mass Stars
12. Pre-Supernova Evolution of Massive Stars
13. Stellar Explosions and Remnants of Massive Stars

Jorgen Christensen-Dalsgaard wrote a comprehensive set of lecture notes on stellar oscillations (i.e., pulsations and seismology). HERE are those notes in one (276-page) PDF file.

Rob Rutten created an extensive set of lecture notes on radiative transfer in stellar atmospheres (with a bit of an emphasis on the Non-LTE upper layers most relevant to the solar chromosphere). HERE is a link to Rutten's external web page, which contains the full set of PDF notes (275 pages) and many other useful files.

Upon retirement, Dr. J. B. Tatum put many of his lecture notes online. I've brought over his 11 chapters on stellar atmospheres and radiative transfer (each in a separate PDF file) and have archived them here:

1. Definitions of and Relations between Quantities used in Radiation Theory
2. Blackbody Radiation
3. The Exponential Integral Function
4. Flux, Specific Intensity and other Astrophysical Terms
5. Absorption, Scattering, Extinction and the Equation of Transfer
6. Limb Darkening
7. Atomic Spectroscopy
8. Boltzmann's and Saha's Equations
9. Oscillator Strengths and Related Topics
10. Line Profiles
11. Curve of Growth

Koskinen and Vainio wrote up their lecture notes on solar physics ("from the core to the heliopause"). These notes go a bit further into the 'practical' applications of helioseismology and magnetohydrodynamics (MHD) than other purely astronomical sources. HERE is a local copy of these notes in a single (188-page) PDF file.

BACK to main Astronomy 201a page.
arrangement in circle problem

April 21st 2010, 02:44 AM #1

A problem in my textbook asks:

1. In how many ways can four boys and four girls be arranged in a circle? In how many of these will boys and girls occur alternately?
2. In how many ways can n boys and n girls be arranged in a circle? In how many of these will the boys and girls occur alternately?

The first part of both questions is easy: 7! and (2n-1)!. But I'm really confused about the second parts. I worked it out as follows: the first boy to sit begins the circle, and everyone is placed relative to him. The next boy can sit to the left, to the right, or opposite the first boy. Then the third has two choices, and the fourth one. Then the girls sit down: the first has a choice of four places to sit, the second three, etc.

So I get the answer: (4-1)! * 4! = 144, and for n boys and n girls: (n-1)! n!

But my book says the answer should be 72, and for n girls and n boys it gives 0.5 n!(n-1)!. This doesn't make sense to me, especially as, if you only had one boy and one girl, their answer would be half an arrangement, which is nonsense?

April 21st 2010, 05:16 AM #2 (quoting the question above)

I get what you get. For example, if n = 2, it is clear to me that there are two arrangements, which is another counterexample to the answer given by the book. The only way I could imagine the book answer being right is if two arrangements are considered equivalent up to rotation and reflection, and n > 1, but those assumptions seem inappropriate to the situation.

April 24th 2010, 12:29 AM #3

Thank you, that is exactly what I thought.

April 24th 2010, 05:24 AM #4 (quoting the question above)

Hi lemons, this is offered as an explanation for the book solution, not as "the answer", as there will be disagreement.

A boy and girl can only form a line. We need at least 3 to make a circle. If we have 3, then we have ABC, ACB as arrangements, as we can keep A still and move the other two; but as far as making a circle is concerned it is always the circle ABC. This is the view your book is taking. Though they could change hands: if we draw a circle and label the 3 points A, B, C with A at the top, B to the left and C to the right, then reading it anticlockwise it is ABC, and if we interchange B and C, then clockwise it is ABC, so it's considered the same circle. As far as lining them up in a straight line is concerned, there are twice as many arrangements with A fixed.

For 4 girls and 4 boys... if we fix one girl in place at the beginning of the line, there are $3!4!$ straight line arrangements of the remaining girls and boys alternating. If they are arranged in a circle, that number is halved, from the perspective of the circle being the same forwards or backwards.

This would appear to be the view that your textbook is taking, but that is only one view, according to a particular interpretation. From another view it will be considered "incorrect", but only because the textbook hasn't fully clarified the question.

Last edited by Archie Meade; April 24th 2010 at 06:53 AM.

April 24th 2010, 06:23 AM #5 (quoting the reply above)

The above reply has several issues. This is such a well-known problem. First, we can certainly seat one boy and one girl at a circular table in one way. Given a circular table with four seats, we can seat the two girls opposite one another, and then there are only two ways to seat the two boys. Now consider the case of 4 girls, ABCD, and 4 boys, WXYZ. Seat A anywhere at the table. Then seat BCD in every other seat beginning at A's left. This can be done in $3!$ ways. Then the boys can take the remaining seats in $4!$ ways. Thus for the case $N=4$ the answer is $(3!)(4!)$. Dividing by two is incorrect.
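Plato's count is easy to confirm by brute force (this script is mine, not from the thread): label the girls 0..n-1 and the boys n..2n-1, fix girl 0 in one seat to account for rotations, and count the seatings in which genders alternate around the circle.

```python
from itertools import permutations
from math import factorial

def alternating_circular(n):
    # Girls are 0..n-1, boys are n..2n-1.  Fixing girl 0 in one seat
    # quotients out rotations (but NOT reflections); count orders of the
    # rest in which neighbours around the circle always differ in gender.
    people = range(1, 2 * n)
    count = 0
    for perm in permutations(people):
        seats = (0,) + perm
        if all((seats[i] < n) != (seats[(i + 1) % (2 * n)] < n)
               for i in range(2 * n)):
            count += 1
    return count

for n in (2, 3, 4):
    print(n, alternating_circular(n), factorial(n - 1) * factorial(n))
```

For n = 4 the count is 144 = 3!·4!, matching the original poster and Plato rather than the book's 72; the book's figure only appears if reflections are also identified.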
Here's the question you clicked on:

Choose the correct description of the graph of the inequality -x > 3.
A  Open circle on -3, shading to the left.
B  Closed circle on 3, shading to the left.
C  Open circle on 3, shading to the right.
D  Closed circle on -3, shading to the right.

@Brittni0605 so it would be A?

-x > 3. You want x but you have -x, so you need to multiply both sides by -1. You must be careful with an inequality because when you multiply or divide both sides by a negative number, the direction of the inequality sign changes.

@mathstudent55 is giving the explanation if you don't understand what I did.

thank you both! @Brittni0605 and @mathstudent55 any chance one of you could help me with another?

Sure, post a new post pls

@Brittni0605 Choose the correct description of the graph of the inequality x - 8 < 5x.
A  Open circle on -2, shading to the left.
B  Closed circle on -2, shading to the left.
C  Open circle on -2, shading to the right.
D  Closed circle on -2, shading to the right.
My answer is C, am I correct?

@Brittni0605 Choose the correct description of the graph of the inequality -5x greater than or equal to 25.
Open circle on -5, shading to the left.
Closed circle on -5, shading to the left.
Open circle on 5, shading to the right.
Closed circle on 5, shading to the right.

@Brittni0605 so it would be B?

@Brittni0605 thanks, here is another one: Solve the inequality 4x - 16 > 6x - 20.
x < 7
x < 2
x > 7
x > 2

@Brittni0605 would the answer be B?

B is correct

@mathstudent55 thank you :)

@mathstudent55, @Brittni0605 maybe you both could help me with this: Which of the following is not a way to represent the solution of the inequality 2(x + 1) greater than or equal to 20?
A  x ≥ 9
B  9 ≤ x
C  A number line with a closed circle on 9 and shading to the left.
D  A number line with a closed circle on 9 and shading to the right.

I apologize, I have to sign off. @mathstudent55 is doing a fine job of helping you.
Figure 9. Impact of constrained bases on the difficulty of secondary structure design using RNA-SSD. Correlation between the fraction of bases constrained in a particular structure (x-axis) and the median expected run time for designing the structure with RNA-SSD (y-axis). We report the fraction of constrained bases after propagation for constraints on randomly chosen base positions. This fraction, for both randomly chosen bases and stems, corresponds to the median fraction of bases constrained in a set of 50 constraints that were generated by fixing a given percentage of bases or stems. There are two curves in each graph, one for designing structures with base constraints located in random positions and the other for constraints located in stems. (a) VS ribozyme from Neurospora mitochondria; (b) Group II intron ribozyme D135 from Saccharomyces. Aguirre-Hernández et al. BMC Bioinformatics 2007 8:34 doi:10.1186/1471-2105-8-34
Given a branched cover with branch cycle description $(g_1,...,g_r)$, does $g_i$ generate some decomposition group?

Classically: Let $a_1,...,a_r$ be points in $\mathbb{P}^1_{\mathbb{C}}$, and let $\alpha_1,...,\alpha_r$ be simple loops around the $a_i$, all counterclockwise, and none touching (so $\alpha_1...\alpha_r=1$ in the fundamental group of the projective line minus those points). A (pointed, to be pedantic) unramified $G$-cover (meaning a normal covering space with deck transformations $=G$) of $\mathbb{P}^1_{\mathbb{C}}-a_1,...,a_r$ is given by a surjection $\pi_1(\mathbb{P}^1_{\mathbb{C}}-a_1,...,a_r) \rightarrow G$. Let $g_i$ be the image of $\alpha_i$. We say that this $G$-Galois branched cover has branch cycle description $(g_1,...,g_r)$ (note that this depends on our choice of the $\alpha_i$'s). This covering map of curves can be extended to a map of (smooth) projective curves. It can then be shown by a simple topological argument that $g_i$ generates the inertia group (=decomposition group in this case) of some point above $a_i$.

My question is whether (and if so, how?) this is also true for the $\overline{\mathbb{F}_p}$ case. Let me be precise. It is known via Grothendieck that $\pi_1^{(p)}(\mathbb{P}^1_{\overline{\mathbb{F}_p}}-a_1,...,a_r)=\widehat{\langle \alpha_1,...,\alpha_r|\prod \alpha_i =1 \rangle}^{(p)}$ (the $^{(p)}$ indicates that we're taking the inverse limit of all prime-to-$p$ finite quotients). Since these $\alpha_i$'s are given in SGA1 through a rather mysterious method, I wonder if the phenomenon described in the first paragraph is still true.

My question, therefore, is: let $G$ be a prime-to-$p$ group, and let $X\rightarrow \mathbb{P}^1_{\overline{\mathbb{F}_p}}$ be a (pointed, to be pedantic) branched $G$-cover with branch points $a_1,...,a_r$.
Let $\alpha_1,...,\alpha_r$ be such that $\pi_1^{(p)}(\mathbb{P}^1_{\overline{\mathbb{F}_p}}-a_1,...,a_r)=\widehat{\langle \alpha_1,...,\alpha_r \mid \prod \alpha_i =1 \rangle}^{(p)}$ (I'm almost positive that what I'm about to say is false if you're allowed to choose any such $\alpha_i$'s, so let's assume that we're taking the ones from Grothendieck's construction. If you see a better way of saying what the condition should be on the $\alpha_i$'s I would be very interested in that). Let the branch cycle description of this cover be $(g_1,...,g_r)$ (with respect to these $\alpha_i$'s). Is it true that $g_i$ generates the inertia group (=decomposition group in this case) of some point of $X$ above $a_i$? The topological argument that we were able to use for the $\mathbb{C}$ case seems to no longer apply...

Tags: ag.algebraic-geometry, galois-theory, characteristic-p

1 Answer:

I'm not sure I totally understand your question, but I think the answer is yes almost by definition. The procyclic group topologically generated by your $\alpha_i$ is the image of $\pi_1(\operatorname{Spec} O_i) \rightarrow \pi_1(\mathbb{P}^1 - a_1, ..., a_r)$, where $O_i$ is the completed local ring of $\mathbb{P}^1$ at the puncture $a_i$ (so it's isomorphic to $\mathbb{F}_q((t))$). I guess the content here is that Grothendieck's comparison is compatible with the passage between local and global. I was lazy and didn't write in base points; the choice of base point would induce the choice of WHICH decomposition group over $a_i$ you get.
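In symbols, with $I_{\mathfrak{p}}$ and $D_{\mathfrak{p}}$ denoting the inertia and decomposition groups at a point $\mathfrak{p}$ (this display is only a restatement of the classical fact the question wants to transport to characteristic $p$, written in the question's own notation):

$$\varphi \colon \pi_1(\mathbb{P}^1_{\mathbb{C}} - a_1,...,a_r) \twoheadrightarrow G, \qquad g_i := \varphi(\alpha_i), \qquad g_1 g_2 \cdots g_r = 1, \qquad \langle g_i \rangle = I_{\mathfrak{p}_i} = D_{\mathfrak{p}_i} \ \text{ for some point } \mathfrak{p}_i \text{ above } a_i,$$

where $I_{\mathfrak{p}_i} = D_{\mathfrak{p}_i}$ holds because the residue field is algebraically closed, as noted in the question.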
{"url":"http://mathoverflow.net/questions/63964/given-a-branched-cover-with-branch-cycle-description-g-1-g-r-does-g-i?sort=newest","timestamp":"2014-04-19T12:06:27Z","content_type":null,"content_length":"53853","record_id":"<urn:uuid:45ece707-2c7b-479b-be8c-0e53f816bff1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Advogato: Blog for gesslein

Previous behavior of Mathomatic:

1-> y=x
#1: y = x
1-> derivative z ; Fail!
Variable not found; the derivative would be zero.
Command usage: derivative ["nosimplify"] [variable or "all"] [order]
1-> y=2
#2: y = 2
2-> derivative x ; Fail!
Current expression contains no variables; the derivative would be zero.
Command usage: derivative ["nosimplify"] [variable or "all"] [order]

Differentiation failed, with a helpful error message and useless command usage info, whenever the result would be zero. That was wrong.

Current behavior of Mathomatic, available in the development version and the next release:

1-> y=x
#1: y = x
1-> derivative z ; Success!
Warning: Variable not found; the derivative will be zero.
Differentiating the RHS with respect to (z) and simplifying...
#2: y' = 0
2-> y=2
#3: y = 2
3-> derivative x ; Success!
Warning: Current expression contains no variables; the derivative will be zero.
Warning: Variable not found; the derivative will be zero.
Differentiating the RHS with respect to (x) and simplifying...
#4: y' = 0

Mathomatic succeeds now when the derivative is zero, giving a warning message (or two). Displaying command usage info whenever a command fails will not happen anymore; command usage info is displayed by the help command and when something has been entered incorrectly on the command line. More progress, I think. There have been many important improvements and fixes to Mathomatic this year. I am pleased, and thankful for the support I get. :-)
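The rule behind the new behavior (differentiating with respect to a variable that does not occur simply yields zero) is easy to sketch outside Mathomatic. The following toy differentiator is hypothetical illustration code, not Mathomatic's implementation; it returns 0 for an absent variable or a constant expression instead of failing:

```python
def diff(expr, var):
    """Differentiate a tiny expression tree: numbers, variable names (str),
    and ("+", a, b) / ("*", a, b) nodes. Absent variables give 0, not an error."""
    if isinstance(expr, (int, float)):
        return 0                        # derivative of a constant is zero
    if isinstance(expr, str):
        return 1 if expr == var else 0  # d(x)/dz = 0 when x != z
    op, a, b = expr
    if op == "+":
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                       # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError("unknown operator: %r" % op)

# The two sessions above, in miniature:
print(diff("x", "z"))   # y = x, derivative z  -> 0
print(diff(2, "x"))     # y = 2, derivative x  -> 0
```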
{"url":"http://www.advogato.org/person/gesslein/diary/5.html","timestamp":"2014-04-20T18:25:46Z","content_type":null,"content_length":"5316","record_id":"<urn:uuid:064435d4-b2ed-47bc-8ec2-8e8ac8393bec>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Stability: experimental
Maintainer: Patrick Perry <patperry@gmail.com>

Summary statistics for Bools.

The Summary data type

data Summary

A type for storing summary statistics for a data set of booleans. Specifically, this just keeps track of the number of True events and gives estimates for the success probability. True is interpreted as a one, and False is interpreted as a zero.

Summary properties

sampleCI :: Double -> Summary -> (Double, Double)

Get a Central Limit Theorem-based confidence interval for the mean with the specified coverage level. The level must be in the range (0,1).
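As a cross-check of what sampleCI computes, here is a rough Python equivalent. This is an assumption based on the description above, not derived from the package source: the usual normal-approximation interval p ± z·sqrt(p(1−p)/n) for a Bernoulli success probability.

```python
import math
from statistics import NormalDist

def sample_ci(level, successes, n):
    """CLT-based confidence interval for the mean of 0/1 data.
    level must lie in (0, 1), mirroring the documented precondition."""
    if not 0.0 < level < 1.0:
        raise ValueError("coverage level must be in the range (0, 1)")
    p = successes / n
    z = NormalDist().inv_cdf(0.5 + level / 2.0)  # two-sided normal quantile
    half = z * math.sqrt(p * (1.0 - p) / n)
    return (p - half, p + half)

lo, hi = sample_ci(0.95, 50, 100)  # 50 True out of 100 samples
```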
{"url":"http://hackage.haskell.org/package/monte-carlo-0.4.1/docs/Data-Summary-Bool.html","timestamp":"2014-04-16T09:09:16Z","content_type":null,"content_length":"8167","record_id":"<urn:uuid:93917af4-6d8d-4810-9202-4a17f5871e15>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Post-Processing 3D Array
Gael Varoquaux gael.varoquaux@normalesup....
Sun Mar 25 07:33:31 CDT 2007

On Sat, Mar 24, 2007 at 08:49:04PM +0000, Bryan Cole wrote:
> > (1) It would be very nice to be able to cut several cross-sections of
> > the tube along which I'd like to plot v_z. I can easily obtain the 2D
> > array v_z_cross[i,j], function of theta and r only, but then I am not
> > sure about how to plot it in 2D. Also, the geometry of the resulting
> > plot has to look like a circle and absolutely not a square.
> > (2) Would it also be possible to make a full 3D plot of v_z[i,j,k]? As in
> > (1), the geometry of the resulting plot has to look like a cylinder and
> > not a cube or a square duct.

> Sounds like a job for VTK (www.vtk.org). In VTK-speak, your dataset is
> best represented as a "structured grid". This means your data points can
> be indexed as i,j,k (i.e. a grid), but it's not a Cartesian grid, so the
> 3D location of each data point must be specified directly. At each data
> point, you can have an arbitrary number of scalar, vector or tensor
> attributes.

> I would say the easiest way to start is to write out your data as a VTK
> format file (this can be ASCII or binary) with format description given
> in http://www.vtk.org/pdf/file-formats.pdf , then load this up in either
> MayaVi (http://mayavi.sourceforge.net/) or Paraview
> (http://www.paraview.org/HTML/Index.html).

> There is a python module for writing VTK files called pyvtk
> (http://cens.ioc.ee/projects/pyvtk/ ) which may simplify the task
> further.

Indeed, I would say that VTK is very suited for this task. Tvtk and mayavi2 are Python wrappers for VTK that are very pythonic and nice for interactive work. I am currently working on a module to drive mayavi2 from "ipython -wthread" for scripting and interactive work. It still needs some work before I can even submit it for inclusion to mayavi2, but I am sending it along just in case it can be of some use.
It can be of much help, as it deals with the building of the vtk data objects from numpy arrays for most cases. Tvtk and mayavi2 are part of the enthought tools suite; if you want to use the attached module you will have to use the svn:

For more info on mayavi2 and tvtk see the scipy wiki at http://www.scipy.org (I cannot provide a direct link to the proper page, as the wiki is down right now).

More information about the SciPy-user mailing list
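To make the suggested route concrete, here is a minimal sketch of the legacy ASCII VTK file layout for a structured grid, written with plain file I/O rather than pyvtk. The grid-building example is made up for illustration; the keywords follow the file-formats PDF linked above.

```python
import math

def write_structured_grid(path, ni, nj, nk, points, scalars, name="v_z"):
    """Write a legacy-ASCII VTK STRUCTURED_GRID file: ni*nj*nk points,
    one scalar value per point (e.g. the axial velocity v_z)."""
    n = ni * nj * nk
    assert len(points) == len(scalars) == n
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 2.0\n")
        f.write("axial velocity on a cylindrical grid\n")
        f.write("ASCII\n")
        f.write("DATASET STRUCTURED_GRID\n")
        f.write("DIMENSIONS %d %d %d\n" % (ni, nj, nk))
        f.write("POINTS %d float\n" % n)
        for x, y, z in points:
            f.write("%g %g %g\n" % (x, y, z))
        f.write("POINT_DATA %d\n" % n)
        f.write("SCALARS %s float 1\n" % name)
        f.write("LOOKUP_TABLE default\n")
        for v in scalars:
            f.write("%g\n" % v)

# A toy 4 (theta) x 3 (r) x 2 (z) cylindrical grid; the point order varies
# i fastest, then j, then k, as the structured-grid format requires.
pts, vals = [], []
for k in range(2):
    for j in range(3):
        for i in range(4):
            r, th, z = 0.5 + 0.25 * j, 2 * math.pi * i / 4, float(k)
            pts.append((r * math.cos(th), r * math.sin(th), z))
            vals.append(1.0 - r * r)  # a made-up v_z profile
write_structured_grid("tube.vtk", 4, 3, 2, pts, vals)
```

Loading the resulting file in MayaVi or Paraview then gives the circular/cylindrical geometry the original poster asked about, since each point carries its true 3D location.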
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-March/011463.html","timestamp":"2014-04-17T21:41:06Z","content_type":null,"content_length":"5384","record_id":"<urn:uuid:9c7333d1-fbee-4dee-b538-c8e0b1116852>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/ashleylynn/asked","timestamp":"2014-04-21T15:56:10Z","content_type":null,"content_length":"114414","record_id":"<urn:uuid:84d7cf1f-f4f0-4c30-a402-c738676ec95e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Localized Hardy spaces $H^1$ related to admissible functions on RD-spaces and applications to Schrödinger operators

by Dachun Yang, Yuan Zhou

Abstract. Let $X$ be an RD-space, which means that $X$ is a space of homogeneous type in the sense of Coifman and Weiss with the additional property that a reverse doubling property holds in $X$. In this paper, the authors first introduce the notion of admissible functions $\rho$ and then develop a theory of localized Hardy spaces $H^1_\rho(X)$ associated with $\rho$, which includes several maximal function characterizations of $H^1_\rho(X)$, the relations between $H^1_\rho(X)$ and the classical Hardy space $H^1(X)$ via constructing a kernel function related to $\rho$, the atomic decomposition characterization of $H^1_\rho(X)$, and the boundedness of certain localized singular integrals on $H^1_\rho(X)$ via a finite atomic decomposition characterization of some dense subspace of $H^1_\rho(X)$. This theory has a wide range of applications. Even when this theory is applied, respectively, to the Schrödinger operator or the degenerate Schrödinger operator on $\mathbb{R}^n$, or the sub-Laplace Schrödinger operator on Heisenberg groups or connected and simply connected nilpotent Lie groups, some new results are also obtained. The Schrödinger operators considered here are associated with nonnegative potentials satisfying the reverse Hölder inequality.
Ger Sligte

Ger Sligte was an artist for the Arbeiders Jeugd Centrale (AJC) between 1918 and 1958. Born in Den Helder, Sligte got his artistic education in Amsterdam, and had his first job as a house painter. He got his apprenticeship as a painter from his friend Bob Buys. He then worked for an advertising agency, and eventually became an independent illustrator. For the AJC, he made illustrations for Het Jonge Volk, and drew the silent gag comic 'Mieke Meijer' in the monthly De Wiekslag in 1934. The strip ran until 1959 in several incarnations of this magazine. During the late 1930s, he also contributed 'Hilvert en Hille' to the children's supplement of the magazine Wij. In 1949, Sligte began a second silent gag comic, 'Bertje Branie', in De Snelwiek, also published by the AJC. In addition to his work for the AJC, Sligte drew 'Ali Kruuk' in De Spiegel between 1949 and 1953. He continued to work on the series in the 1970s. However, he had to cancel his activities in 1975 due to illness.
{"url":"http://www.lambiek.net/artists/s/sligte_ger.htm?lan=dutch","timestamp":"2014-04-17T20:14:27Z","content_type":null,"content_length":"27320","record_id":"<urn:uuid:44bceb8c-3aad-44da-8f14-733726ba5401>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the unit price (unit rate) for this item by completing the chart or using a proportion. Record your answer in the box provided.
{"url":"http://openstudy.com/updates/510711a7e4b0ad57a5643e8d","timestamp":"2014-04-19T12:55:54Z","content_type":null,"content_length":"52609","record_id":"<urn:uuid:0001b446-2a78-4f2f-b7d9-05b06d0653ae>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Measure Concrete to Be Poured

Edited by Permasofty, Teresa, Kate Whitcomb

Measure the area where concrete will be placed to calculate how many cubic yards, cubic feet (or cubic meters) will be needed.

1. Measure the Length (L). For example, let's say it's 12 feet 6 inches.
2. Measure the Width (W). For example, let's say it's 15 feet.
3. Measure the Height (H). For example, let's say it's 3 and 3/4 inches.
4. Use a construction calculator. Search online to find one.
5. If you're calculating the cubic feet or cubic yards of concrete needed, you will need to convert your inch measurements to engineer's scale. Engineer's scale basically takes a foot (12 inches) and converts it into 10ths. The engineer's scale also eliminates the fractions of an inch. (i.e. 3/4 of an inch is equal to 0.75 of an inch.)
6. For the L measurement of 12 feet 6 inches, take the 6 inches and divide it by 12. 6 inches / 12 = 0.50 feet. Put the .50 after the 12. Now we have 12.5 feet.
7. At 15 feet and 0 inches, the W does not need to be converted.
8. For the H measurement of 3 and 3/4 inches, we will first need to convert the fraction 3/4 of an inch to a decimal. Take the top number 3 and divide it by the bottom number 4.
□ 3 / 4 = 0.75 of an inch. Put the .75 after the 3. We now have 3.75 inches. Now take 3.75 and divide that by 12.
□ 3.75 / 12 = 0.31 feet (rounded from 0.3125).
☆ Now we have L = 12.5 feet, W = 15 feet and H = 0.31 feet.
9. Multiply them all together. 12.5 x 15 x 0.31 = 58.13 cubic feet.
10. To calculate the cubic yards, take the 58.13 cubic feet and divide it by 27. 58.13 / 27 = 2.15 cubic yards.

• If you are using a pump to place the concrete, add for some waste, since there may be 1/2 of a cubic yard in the hoses and the hopper.
• Typically, concrete will contain 5% air. That means once you tamp, vibrate, screed, float, etc., air bubbles will be released, thus condensing the concrete.
It's a good idea to add 5% to the calculation, especially if you are using transit mixers. It may take them 40 minutes to come out with your 1/2 cubic yard order, which they may charge you a short load fee on.
• On large pours, measure as the pour goes along. If the W and H are the same, you can measure and calculate that you have used x amount of cubic yards in x amount of feet. That way you know how many cubic yards you need to finish and can change your order accordingly. You don't want to be the one responsible for the crew standing around for an hour waiting for the last load. It's typically better to over-order a little and throw away a few cubic yards than pay the labor to stand around or, worse yet, finish concrete in the cold dark.
• If you are doing a large pour with multiple transit mixers, have them spaced out correctly. You don't want the crew standing around waiting for transit mixers, and you also don't want the transit mixers stacked up waiting, and probably wanting to charge for standby time.
• Check your measurements before calculating. If you're not sure, have someone else measure it and see that you come up with the same numbers.
• Check your calculations. If you're not sure, have someone else check them.
• If you are using transit mixers, ask the concrete plant if they charge a short load fee on the last load. Many charge a short load fee on anything under 2 cubic yards.
• Calculating cubic meters is much easier, since the metric system eliminates the use of fractions. For example, L (3.81m) x W (4.57m) x H (0.095m) = 1.65 cubic meters.
• Wear gloves, cover your skin, wear safety glasses, and wear rubber boots if you have to stand in it. Some people have very sensitive skin and react badly to cement. Concrete splashed in the eyes is no fun either. Concrete will also dry out leather boots and shorten their life span.
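The whole procedure can be sketched in a few lines of Python. This is a convenience sketch, not part of the article; note that it keeps H at the full 0.3125 ft rather than a two-decimal rounding, so its totals differ slightly from the article's worked example.

```python
def concrete_volume(l_ft, l_in, w_ft, w_in, h_in, waste=0.05):
    """Slab volume from feet-and-inches measurements.
    Returns (cubic feet, cubic yards, cubic yards with 5% air/waste added)."""
    L = l_ft + l_in / 12.0  # engineer's scale: inches -> decimal feet
    W = w_ft + w_in / 12.0
    H = h_in / 12.0         # height measured in inches only
    cf = L * W * H
    cy = cf / 27.0          # 27 cubic feet per cubic yard
    return cf, cy, cy * (1.0 + waste)

cf, cy, cy_order = concrete_volume(12, 6, 15, 0, 3.75)
# 12'6" x 15'0" x 3.75" -> about 58.6 cubic feet, about 2.17 cubic yards
```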
{"url":"http://www.wikihow.com/Measure-Concrete-to-Be-Poured","timestamp":"2014-04-19T14:36:37Z","content_type":null,"content_length":"61268","record_id":"<urn:uuid:bfcf5a17-1fe7-41c2-a2bf-d0c50e34f6d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairburn, GA Statistics Tutor Find a Fairburn, GA Statistics Tutor ...I love helping students identify their weaknesses and score higher through the proper use of key strategies. Scored in the 99th percentile on the exam. Helped two twins with the math portion of the PSAT. 28 Subjects: including statistics, calculus, GRE, physics ...DIF” because of my leadership, oversight, and hands-on development of IRS’ discriminant function (DIF) models for selecting all types of compliance workload. At IRS, I also accumulated extensive experience in: 1) other case selection and resource allocation modeling; 2) tax gap and tax complianc... 2 Subjects: including statistics, SPSS ...I am a long-time guitarist and a novice computer programmer. I know some Spanish, have learned some Mandarin and Malay from living in Singapore. I am very interested in massively open online courses (MOOCs) from platforms like Coursera.org. 37 Subjects: including statistics, English, reading, GRE ...I have developed multiple ways to teach any and every subject related to algebra in order to help the individual student learn. I received an A at Georgia State in this class. It was required in my math degree. 15 Subjects: including statistics, chemistry, calculus, geometry I am looking forward to helping you to uncover the mysteries of mathematics! In my experience, I have found that everyone does not absorb math in the same way, but that it can be absorbed and retained by anyone. I will work with you to very quickly find the way that works best for you and set you on a course for sustained success. 
10 Subjects: including statistics, calculus, algebra 2, SAT math Related Fairburn, GA Tutors Fairburn, GA Accounting Tutors Fairburn, GA ACT Tutors Fairburn, GA Algebra Tutors Fairburn, GA Algebra 2 Tutors Fairburn, GA Calculus Tutors Fairburn, GA Geometry Tutors Fairburn, GA Math Tutors Fairburn, GA Prealgebra Tutors Fairburn, GA Precalculus Tutors Fairburn, GA SAT Tutors Fairburn, GA SAT Math Tutors Fairburn, GA Science Tutors Fairburn, GA Statistics Tutors Fairburn, GA Trigonometry Tutors Nearby Cities With statistics Tutor Atlanta Ndc, GA statistics Tutors Avondale Estates statistics Tutors Chattahoochee Hills, GA statistics Tutors Conley statistics Tutors Hampton, GA statistics Tutors Hapeville, GA statistics Tutors Lake City, GA statistics Tutors Lovejoy, GA statistics Tutors Palmetto, GA statistics Tutors Red Oak, GA statistics Tutors Scottdale, GA statistics Tutors Tyrone, GA statistics Tutors Union City, GA statistics Tutors Villa Rica statistics Tutors Winston, GA statistics Tutors
{"url":"http://www.purplemath.com/Fairburn_GA_statistics_tutors.php","timestamp":"2014-04-16T05:03:32Z","content_type":null,"content_length":"23895","record_id":"<urn:uuid:0cdeb7e6-d966-4c99-99c7-5b66f08e6d4f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of gas constant gas constant (symbol R), fundamental physical constant arising in the formulation of the general gas law. For an ideal gas (approximated by most real gases that are not highly compressed or not near the point of liquefaction), the pressure p times the volume V of the gas divided by its absolute temperature T is a constant. When one of these three is altered for a given mass of gas, at least one of the other two undergoes a change so that the expression pV/T remains constant. The constant, further, is the same for all gases, provided the mass of gas being compared is one mole, or one molecular weight in grams. For one mole, therefore, pV/T = R. Learn more about gas constant with a free trial on Britannica.com.
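The definition can be sanity-checked numerically: for one mole of an ideal gas at standard temperature and pressure, pV/T should reproduce the accepted value R ≈ 8.314 J/(mol·K). The STP figures below are standard reference values, not taken from the entry above.

```python
p = 101_325.0  # pressure, Pa (1 atm)
V = 0.022414   # molar volume of an ideal gas at 0 degrees C and 1 atm, m^3
T = 273.15     # absolute temperature, K
R = p * V / T  # for one mole, pV/T = R
print(round(R, 3))  # -> 8.314
```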
{"url":"http://dictionary.reference.com/browse/gas%20constant","timestamp":"2014-04-18T12:26:17Z","content_type":null,"content_length":"91158","record_id":"<urn:uuid:8919b55c-ddf0-4748-9106-2020e0c49170>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 74 , 1987 "... Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case ..." Cited by 483 (44 self) Add to MetaCart Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case of data flow (either atomic or large grain) in which the number of data samples produced or consumed by each node on each invocation is specified a priori. Nodes can be scheduled statically (at compile time) onto single or parallel programmable processors so the run-time overhead usually associated with data flow evaporates. Multiple sample rates within the same system are easily and naturally handled. Conditions for correctness of SDF graph are explained and scheduling algorithms are described for homogeneous parallel processors sharing memory. A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described. Two new efficiency techniques are introduced, static buffering and an extension to SDF to efficiently implement conditionals. - IEEE Transactions on Computers , 1987 "... Abstract-hrge grain data flow (LGDF) programming is natural and convenient for describing digital signal processing (DSP) systems, but its runtime overhead is costly in real time or cost-sensitive applications. In some situations, designers are not willing to squander computing resources for the sak ..." 
Cited by 480 (35 self) Add to MetaCart Abstract-hrge grain data flow (LGDF) programming is natural and convenient for describing digital signal processing (DSP) systems, but its runtime overhead is costly in real time or cost-sensitive applications. In some situations, designers are not willing to squander computing resources for the sake of program-mer convenience. This is particularly true when the target machine is a programmable DSP chip. However, the runtime overhead inherent in most LGDF implementations is not required for most signal processing systems because such systems are mostly synchronous (in the DSP sense). Synchronous data flow (SDF) differs from traditional data flow in that the amount of data produced and consumed by a data flow node is specified a priori for each input and output. This is equivalent to specifying the relative sample rates in signal processing system. This means that the scheduling of SDF nodes need not be done at runtime, but can be done at compile time (statically), so the runtime overhead evaporates. The sample rates can all be different, which is not true of most current data-driven digital signal processing programming methodologies. Synchronous data flow is closely related to computation graphs, a special case of Petri nets. This self-contained paper develops the theory necessary to statically schedule SDF programs on single or multiple proces-sors. A class of static (compile time) scheduling algorithms is proven valid, and specific algorithms are given for scheduling SDF systems onto single or multiple processors. Index Terms-Block diagram, computation graphs, data flow digital signal processing, hard real-time systems, multiprocessing, , 1991 "... We present a method for analyzing the time performance of asynchronous circuits, in particular, those derived by program transformation from concurrent programs using the synthesis approach developed by the second author. 
The analysis method produces a performance metric (related to the time needed ..." Cited by 138 (7 self) Add to MetaCart We present a method for analyzing the time performance of asynchronous circuits, in particular, those derived by program transformation from concurrent programs using the synthesis approach developed by the second author. The analysis method produces a performance metric (related to the time needed to perform an operation) in terms of the primitive gate delays of the circuit. Such a metric provides a quantitative means by which to compare competing designs. Because the gate delays are functions of transistor sizes, the performance metric can be optimized with respect to these sizes. For a large class of asynchronous circuits---including those produced by using our synthesis method---these techniques produce the global optimum of the performance metric. A CAD tool has been implemented to perform this optimization. 1 Introduction Performance analysis of a synchronous computer system is simplified by an external clock that partitions the events in the system into discrete segments. In a... , 1995 "... The rapid advances in high-performance computer architecture and compilation techniques provide both challenges and opportunities to exploit the rich solution space of software pipelined loop schedules. In this paper, we develop a framework to construct a software pipelined loop schedule which runs ..." Cited by 75 (12 self) Add to MetaCart The rapid advances in high-performance computer architecture and compilation techniques provide both challenges and opportunities to exploit the rich solution space of software pipelined loop schedules. 
In this paper, we develop a framework to construct a software pipelined loop schedule which runs on the given architecture (with a fixed number of processor resources) at the maximum possible iteration rate (`a la rate-optimal) while minimizing the number of buffers --- a close approximation to minimizing the number of registers. The main contributions of this paper are: ffl First, we demonstrate that such problem can be described by a simple mathematical formulation with precise optimization objectives under a periodic linear scheduling framework. The mathematical formulation provides a clear picture which permits one to visualize the overall solution space (for rate-optimal schedules) under different sets of constraints. ffl Secondly, we show that a precise mathematical formulation... , 1993 "... ing with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM Inc., fax +1 (212) 869-0481, or (permissions@acm.org). Qi Ning Guang R. Gao School of Com ..." Cited by 61 (10 self) Add to MetaCart ing with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM Inc., fax +1 (212) 869-0481, or (permissions@acm.org). Qi Ning Guang R. Gao School of Computer Science McGill University Montreal, Quebec Canada H3A 2A7 email: ning@cs.mcgill.ca gao@cs.mcgill.ca Abstract Although software pipelining has been proposed as one of the most important loop scheduling methods, simultaneous scheduling and register allocation is less understood and remains an open problem [28]. The objective of this paper is to develop a unified algorithmic framework for concurrent scheduling and register allocation to support time-optimal software pipelining. 
A key intuition leading to this surprisingly simple formulation and its efficient solution is the association of the maximum computation rate of a program graph with its critical cycles, due to Reiter's pioneering work...

- In OOPSLA '04: Proceedings of the 19th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, 2004. Cited by 36 (0 self). Distributed enterprise applications today are increasingly being built from services available over the web. A unit of functionality in this framework is a web service, a software application that exposes a set of "typed" connections that can be accessed over the web using standard protocols. These units can then be composed into a composite web service. BPEL (Business Process Execution Language) is a high-level distributed programming language for creating composite web services. Although a BPEL program invokes services distributed over several servers, the orchestration of these services is typically under centralized control. Because performance and throughput are major concerns in enterprise applications, it is important to remove the inefficiencies introduced by the centralized control. In a distributed, or decentralized...

- In Proc. of the Conf. on Vector and Parallel Processing, CONPAR-92, number 634 in Lec. Notes in Comp. Sci., 1992.
Cited by 29 (5 self). Software pipelining is one of the most important loop scheduling methods used by parallelizing compilers. It determines a static parallel schedule -- a periodic pattern -- to overlap instructions of a loop body from different iterations. The main contributions of this paper are the following: First, we propose to express the fine-grain loop scheduling problem (in particular, software pipelining) on the basis of the mathematical formulation of r-periodic scheduling. This formulation overcomes some of the problems encountered by existing software pipelining methods. Second, we demonstrate the feasibility of the proposed method by (1) presenting a polynomial time algorithm to find an optimal schedule in this r-periodic form that maximizes the computation rate (in fact, we show that this schedule maximizes the computation rate theoretically possible), and by (2) establishing polynomial bounds for the optimal schedule, i.e. bounds on its period, its periodicity, the pattern size, and the c...

- In Proceedings of the 44th Annual Design Automation Conference, 2007. Cited by 28 (9 self). Abstract. A key step in the design of cyclo-static real-time systems is the determination of buffer capacities. In our multiprocessor system, we apply back-pressure, which means that tasks wait for space in output buffers. Consequently, buffer capacities affect the throughput. This requires the derivation of buffer capacities that both result in satisfaction of the throughput constraint and also satisfy the constraints on the maximum buffer capacities.
Existing exact solutions suffer from the computational complexity that is associated with the required conversion from a cyclo-static dataflow graph to a single-rate dataflow graph. In this paper we present an algorithm, with linear computational complexity, that does not require this conversion and that strives to obtain close to minimal buffer capacities. The algorithm is applied to an MP3 playback application that is mapped on our multi-processor system.

- IEEE Transactions on Parallel and Distributed Systems, 1996. Cited by 24 (9 self). The rapid advances in high-performance computer architecture and compilation techniques provide both challenges and opportunities to exploit the rich solution space of software pipelined loop schedules. In this paper, we develop a framework to construct a software pipelined loop schedule which runs on the given architecture (with a fixed number of processor resources) at the maximum possible iteration rate (à la rate-optimal) while minimizing the number of buffers --- a close approximation to minimizing the number of registers. The main contributions of this paper are: First, we demonstrate that such a problem can be described by a simple mathematical formulation with precise optimization objectives under a periodic linear scheduling framework. The mathematical formulation provides a clear picture which permits one to visualize the overall solution space (for rate-optimal schedules) under different sets of constraints. Secondly, we show that a precise mathematical formulation...

- In ACSD'06, Proc. (2006), IEEE, 2006.
Cited by 22 (12 self). Synchronous Data Flow Graphs (SDFGs) are a useful tool for modeling and analyzing embedded data flow applications, both in a single processor and a multiprocessing context, or for application mapping on platforms. Throughput analysis of these SDFGs is an important step for verifying throughput requirements of concurrent real-time applications, for instance within design-space exploration activities. Analysis of SDFGs can be hard, since the worst-case complexity of analysis algorithms is often high. This is also true for throughput analysis. In particular, many algorithms involve a conversion to another kind of data flow graph, the size of which can be exponentially larger than the size of the original graph. In this paper, we present a method for throughput analysis of SDFGs, based on explicit state-space exploration, and we show that the method, despite its worst-case complexity, works well in practice, while existing methods often fail. We demonstrate this by comparing the method with state-of-the-art cycle mean computation algorithms. Moreover, since the state-space exploration method is essentially the same as simulation of the graph, the results of this paper can be easily obtained as a byproduct in existing simulation tools.
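Several of the entries above reduce throughput questions to a cycle mean computation: for a single-rate dataflow graph, the maximum computation rate is set by the critical cycle, the cycle maximizing total weight per edge. As a concrete illustration (a generic sketch of Karp's maximum cycle mean algorithm, not code from any of the cited papers; the graph encoding is my own):

```python
def max_cycle_mean(n, edges):
    """Karp's algorithm for the maximum cycle mean of a directed graph.

    n: number of nodes (0..n-1); edges: list of (u, v, weight) arcs.
    Returns the maximum over all cycles of (total weight / number of edges).
    """
    NEG = float("-inf")
    # D[k][v] = maximum weight of a k-edge walk ending at v,
    # allowing the walk to start at any node.
    D = [[NEG] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG and D[k - 1][u] + w > D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue  # no n-edge walk ends at v, so v is on no cycle
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] > NEG))
    return best

# Two-node loop with weights 2 and 4: cycle mean (2 + 4) / 2 = 3
print(max_cycle_mean(2, [(0, 1, 2.0), (1, 0, 4.0)]))
```

For rate-optimal scheduling, this maximum cycle mean is the lower bound on the iteration interval that the cited schedulers aim to achieve.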
Phase Transformations in Metals and Alloys, Third Edition (Revised Reprint)

In the decade since the first edition of this popular text was published, the metallurgical field has undergone rapid developments in many sectors. Nonetheless, the underlying principles governing these developments remain the same. A textbook that presents these advances within the context of the fundamentals is greatly needed by instructors in the field. Phase Transformations in Metals and Alloys, Second Edition maintains the simplicity that undergraduate instructors and students have come to appreciate, while updating and expanding coverage of recently developed methods and materials. The book is effectively divided into two parts. The beginning chapters contain the background material necessary for understanding phase transformations: thermodynamics, kinetics, diffusion theory, and the structure and properties of interfaces. The following chapters deal with specific transformations: solidification, diffusional transformation in solids, and diffusionless transformation. Case studies of engineering alloys are incorporated to provide a link between theory and practice. New additions include an extended list of further reading at the end of each chapter and a section containing complete solutions to all exercises in the book. Designed for final year undergraduate and postgraduate students of metallurgy, materials science, or engineering materials, this is an ideal textbook for both students and instructors.
Contents: Diffusion, 60; Crystal Interfaces and Microstructure, 110; Solidification, 185; Diffusional Transformations in Solids, 263; Diffusionless Transformations, 382; Solutions to Exercises, 441; Index, 510.
Hypothesis Testing in Generalized Linear Models with Functional Coefficient Autoregressive Processes Mathematical Problems in Engineering Volume 2012 (2012), Article ID 862398, 19 pages Research Article Hypothesis Testing in Generalized Linear Models with Functional Coefficient Autoregressive Processes ^1School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China ^2Department of Mathematics, Huizhou University, Huizhou 516007, China Received 28 January 2012; Accepted 25 March 2012 Academic Editor: Ming Li Copyright © 2012 Lei Song et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The paper studies the hypothesis testing in generalized linear models with functional coefficient autoregressive (FCA) processes. The quasi-maximum likelihood (QML) estimators are given, which extend those estimators of Hu (2010) and Maller (2003). Asymptotic chi-squares distributions of pseudo likelihood ratio (LR) statistics are investigated. 1. Introduction Consider the following generalized linear model: where is -dimensional unknown parameter, are functional coefficient autoregressive processes given by where are independent and identically distributed random variable errors with zero mean and finite variance , is a one-dimensional unknown parameter, and is a real valued function defined on a compact set which contains the true value as an inner point and is a subset of . The values of and are unknown. is a known continuous differentiable function. 
Model (1.1) includes many special cases, such as an ordinary regression model (when; see [1–7]), an ordinary generalized regression model (when ; see [8–13]), a linear regression model with constant coefficient autoregressive processes (when , ; see [14–16]), time-dependent and function coefficient autoregressive processes (when ; see [17]), constant coefficient autoregressive processes (when , ; see [18–20]), time-dependent or time-varying autoregressive processes (when ; see [21–23]), and a linear regression model with functional coefficient autoregressive processes (when; see [24]). Many authors have discussed some special cases of models (1.1) and (1.2) (see [1–24]). However, few people investigate the model (1.1) with (1.2). This paper studies the model (1.1) with (1.2). The organization of this paper is as follows. In Section 2, some estimators are given by the quasi-maximum likelihood method. In Section 3, the main results are investigated. The proofs of the main results are presented in Section 4, with the conclusions and some open problems in Section 5. 2. The Quasi-Maximum Likelihood Estimate Write the “true” model as where . Define, and by (2.2), we have Thus is measurable with respect to the field generated by, and Assume at first that the are i.i.d. , we get the log-likelihood of conditional on given by At this stage we drop the normality assumption, but still maximize (2.5) to obtain QML estimators, denoted by. The estimating equations for unknown parameters in (2.5) may be written as Thus, satisfy the following estimation equations where Remark 2.1. If , then the above equations become the same as Hu’s (see [24]). If ,, then the above equations become the same as Maller’s (see [15]). Thus we extend those QML estimators of Hu [24] and Maller [15]. For ease of exposition, we will introduce the following notations, which will be used later in the paper. Let vector . 
Define By (2.7), we have where the * indicates that the elements are filled in by symmetry, Because and are mutually independent, we have where By (2.8) (2.7) and, we have 3. Statement of Main Results In the section pseudo likelihood ratio (LR) statistics for various hypothesis tests of interest are derived. We consider the following hypothesis: When the parameter space is restricted by a hypothesis , letbe the corresponding QML estimators of , and let be minus twice the log-likelihood, evaluated at the fitted parameters. Also let be the “deviance” statistic for testing against. From (2.5) and (2.8), and similarly In order to obtain our results, we give some sufficient conditions as follows.(A1) is positive definite for sufficiently large and where and denotes the maximum in absolute value of the eigenvalues of a symmetric matrix.(A2) There is a constant such that(A3)andexist and are bounded, andis twice continuously differentiable, , . Theorem 3.1. Assume (2.1), (2.2) and (A1)–(A3).(1)Suppose and is a continuous function, holds. Then (2) Suppose , holds. Then (3) Suppose , holds. Then 4. Proof of Theorem To prove Theorem 3.1, we first introduce the following lemmas. Lemma 4.1. Suppose that (A1)–(A3) hold. Then, for all , where Proof. Similar to proof of Lemma 4.1 in Hu [24], here we omit. Lemma 4.2. Suppose that (A1)–(A3) hold. Then , and where are on the line ofand. Proof. Similar to proof of Theorem 3.1 in Hu [24], we easily prove that, and . Since (4.4) is easily proved, here we omit the proof (4.4). Proof of Theorem 3.1. Note that and are nonsingular. By Taylor’s expansion, we have where for some . Since , also . By (4.1), we have Thus is a symmetric matrix with. By (4.5) and (4.6), we have Letdenoteand, respectively. 
By (4.7), we have Note that By (2.15), (4.2) and (4.8), we get Note that By (2.1), (2.11) and (4.12), we have By (4.13) and (2.10), we have By (4.13), we have By (4.15), we have By (4.14) and (4.16), we have By (4.15), we have Thus, by (4.17) and (4.18), we have Since , we have Thus, by (4.17), (4.20) and mean value theorem, we have where for some . It is easy to know that By Lemma 4.2 and (4.22), we have Hence, by (4.11), we have By (4.24), we have By Lemma 4.2, we have Now, we prove (3.8). By (4.12), we have Note that From (4.28), we have By ( 2.8) and (2.10), we have From (4.30), we obtain that By (4.29), (4.31) and Lemma 4.2, we have By (3.3)–(3.5), we have Under the , and by (4.26), (4.32) and (4.33), we have It is easily proven that Thus, by (4.33)–(4.35), we finish the proof of (3.8). Next we prove (3.9). Under, , and , we have Hence By (2.8), (2.10), we have From (4.38), we obtain, Thus, by (4.37), (4.39) and Lemma 4.2, we have By (3.3)–(3.5), we have Under the, by (4.26), (4.40 ), and (4.41), we obtain Thus, by (4.35), (4.42), (3.9) holds. Finally, we prove (3.10). Under, we have Thus By (2.8) and (2.10), we have From (4.45), we obtain By (4.44), (4.46) and Lemma 4.2, we have By (3.3)–(3.5), we know that Under the , by (4.26), (4.47) and (4.48), we have Thus, (3.10) follows from (4.48), (4.49), and (4.35). Therefore, we complete the proof of Theorem 3.1. 5. Conclusions and Open Problems In the paper, we consider the generalized linear mode with FCA processes, which includes many special cases, such as an ordinary regression model, an ordinary generalized regression model, a linear regression model with constant coefficient autoregressive processes, time-dependent and function coefficient autoregressive processes, constant coefficient autoregressive processes, time-dependent or time-varying autoregressive processes, and a linear regression model with functional coefficient autoregressive processes. 
We then obtain the QML estimators for some unknown parameters in the generalized linear model and extend some existing estimators. Finally, we use the pseudo LR method to investigate three hypothesis tests of interest and obtain the asymptotic chi-square distributions of the pseudo LR statistics. However, several lines of future work remain open. (1) It is well known that a conventional time series can be regarded as the solution to a differential equation of integer order with the excitation of white noise in mathematics, and a fractal time series can be regarded as the solution to a differential equation of fractional order with a white noise in the domain of stochastic processes (see [25]). In this paper, the error process is a conventional nonlinear time series. We may investigate some hypothesis tests by the pseudo LR method when the error process is a fractal time series (the idea was given by an anonymous reviewer). In particular, we may assume a model of the form (5.1), in which the coefficients form a strictly decreasing sequence of nonnegative numbers and the errors are generated through the Riemann-Liouville integral operator, defined via the Gamma function for functions that are piecewise continuous and integrable on any finite subinterval (see [25, 26]). Fractal time series may have a heavy-tailed probability distribution function and have been applied in various fields of science and technology (see [25, 27–32]). Thus it is very significant to investigate various regression models with fractal time series errors, including regression model (1.1) with (5.1). (2) We may also investigate other hypothesis tests beyond the three considered here. The authors would like to thank the anonymous referees for their valuable comments, which have led to this much improved version of the paper. The paper was supported by the Scientific Research Item of the Department of Education, Hubei (no. D20112503), the Scientific Research Item of the Ministry of Education, China (no. 209078), and the Natural Science Foundation of China (nos. 11071022 and 11101174).

1. Z. D. Bai and M.
Guo, “A paradox in least-squares estimation of linear regression models,” Statistics & Probability Letters, vol. 42, no. 2, pp. 167–174, 1999.
2. Y. Li and H. Yang, “A new stochastic mixed ridge estimator in linear regression model,” Statistical Papers, vol. 51, no. 2, pp. 315–323, 2010.
3. A. E. Hoerl and R. W. Kennard, “Ridge regression: Biased estimation for nonorthogonal problems,” Technometrics, vol. 12, pp. 55–67, 1970.
4. X. Chen, “Consistency of LS estimates of multiple regression under a lower order moment condition,” Science in China Series A, vol. 38, no. 12, pp. 1420–1431, 1995.
5. T. W. Anderson and J. B. Taylor, “Strong consistency of least squares estimates in normal linear regression,” The Annals of Statistics, vol. 4, no. 4, pp. 788–790, 1976.
6. G. González-Rodríguez, A. Blanco, N. Corral, and A. Colubi, “Least squares estimation of linear regression models for convex compact random sets,” Advances in Data Analysis and Classification, vol. 1, no. 1, pp. 67–81, 2007.
7. H. Cui, “On asymptotics of t-type regression estimation in multiple linear model,” Science in China Series A, vol. 47, no. 4, pp. 628–639, 2004.
8. M. Q. Wang, L. X. Song, and X. G. Wang, “Bridge estimation for generalized linear models with a diverging number of parameters,” Statistics & Probability Letters, vol. 80, no. 21-22, pp. 1584–1596, 2010.
9. L. C. Chien and T. S. Tsou, “Deletion diagnostics for generalized linear models using the adjusted Poisson likelihood function,” Journal of Statistical Planning and Inference, vol. 141, no. 6, pp. 2044–2054, 2011.
10. L. Fahrmeir and H.
Kaufmann, “Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models,” The Annals of Statistics, vol. 13, no. 1, pp. 342–368, 1985.
11. Z. H. Xiao and L. Q. Liu, “Laws of iterated logarithm for quasi-maximum likelihood estimator in generalized linear model,” Journal of Statistical Planning and Inference, vol. 138, no. 3, pp. 611–617, 2008.
12. Y. Bai, W. K. Fung, and Z. Zhu, “Weighted empirical likelihood for generalized linear models with longitudinal data,” Journal of Statistical Planning and Inference, vol. 140, no. 11, pp. 3446–3456, 2010.
13. S. G. Zhao and Y. Liao, “The weak consistency of maximum likelihood estimators in generalized linear models,” Science in China A, vol. 37, no. 11, pp. 1368–1376, 2007.
14. P. Pere, “Adjusted estimates and Wald statistics for the AR(1) model with constant,” Journal of Econometrics, vol. 98, no. 2, pp. 335–363, 2000.
15. R. A. Maller, “Asymptotics of regressions with stationary and nonstationary residuals,” Stochastic Processes and Their Applications, vol. 105, no. 1, pp. 33–67, 2003.
16. W. A. Fuller, Introduction to Statistical Time Series, John Wiley & Sons, New York, NY, USA, 2nd edition, 1996.
17. G. H. Kwoun and Y. Yajima, “On an autoregressive model with time-dependent coefficients,” Annals of the Institute of Statistical Mathematics, vol. 38, no. 2, pp. 297–309, 1986.
18. J. S. White, “The limiting distribution of the serial correlation coefficient in the explosive case,” Annals of Mathematical Statistics, vol. 29, pp. 1188–1197, 1958.
19. J. S. White, “The limiting distribution of the serial correlation coefficient in the explosive case—II,” Annals of Mathematical Statistics, vol. 30, pp. 831–834, 1959.
20. J. D.
Hamilton, Time Series Analysis, Princeton University Press, Princeton, NJ, USA, 1994.
21. F. Carsoule and P. H. Franses, “A note on monitoring time-varying parameters in an autoregression,” International Journal for Theoretical and Applied Statistics, vol. 57, no. 1, pp. 51–62, 2003.
22. R. Azrak and G. Mélard, “Asymptotic properties of quasi-maximum likelihood estimators for ARMA models with time-dependent coefficients,” Statistical Inference for Stochastic Processes, vol. 9, no. 3, pp. 279–330, 2006.
23. R. Dahlhaus, “Fitting time series models to nonstationary processes,” The Annals of Statistics, vol. 25, no. 1, pp. 1–37, 1997.
24. H. Hu, “QML estimators in linear regression models with functional coefficient autoregressive processes,” Mathematical Problems in Engineering, vol. 2010, Article ID 956907, 30 pages, 2010.
25. M. Li, “Fractal time series—a tutorial review,” Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010.
26. Y. S. Mishura, Stochastic Calculus for Fractional Brownian Motion and Related Processes, vol. 1929 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2008.
27. M. Li and W. Zhao, “Visiting power laws in cyber-physical networking systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012.
28. M. Li, C. Cattani, and S. Y. Chen, “Viewing sea level by a one-dimensional random function with long memory,” Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
29. C. Cattani, “Fractals and hidden symmetries in DNA,” Mathematical Problems in Engineering, vol.
2010, Article ID 507056, 31 pages, 2010.
30. J. Lévy-Véhel and E. Lutton, Fractals in Engineering, Springer, 1st edition, 2005.
31. V. Pisarenko and M. Rodkin, Heavy-Tailed Distributions in Disaster Analysis, Springer, 2010.
32. H. Sheng, Y.-Q. Chen, and T.-S. Qiu, Fractional Processes and Fractional Order Signal Processing, Springer, 2012.
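The pseudo likelihood ratio machinery used in the article above amounts, mechanically, to computing a deviance difference and comparing it with a chi-square quantile. A minimal illustrative sketch of that mechanism in the simplest Gaussian special case (this is not the paper's FCA setting; the data and the hard-coded 95% chi-square(1) critical value 3.841 are illustrative):

```python
import math

def lr_test_mean_zero(xs, crit=3.841):
    """Gaussian LR test of H0: mu = 0 vs H1: mu free (sigma unknown).

    -2 log Lambda = n * log(RSS0 / RSS1), asymptotically chi-square(1).
    """
    n = len(xs)
    mean = sum(xs) / n
    rss0 = sum(x * x for x in xs)            # residual sum of squares under H0
    rss1 = sum((x - mean) ** 2 for x in xs)  # residual sum of squares under H1
    lr = n * math.log(rss0 / rss1)
    return lr, lr > crit                     # (statistic, reject H0?)

data = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7, 1.1]
stat, reject = lr_test_mean_zero(data)
print(round(stat, 2), reject)
```

The paper's contribution is establishing that the analogous deviance statistics retain asymptotic chi-square distributions under the far more delicate FCA error structure.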
Particular solution of an initial value problem

March 4th 2008, 09:42 AM

I know this is more than likely a calculus problem, but I'm not sure if my algebra is right, hence the re-posting over here. If it's the wrong place to post it, sorry!

Starting with: 5(d^2y/dx^2) + 4(dy/dx) + y = 0, I have to find a particular solution to the initial value problem where y(0) = -2, y'(0) = 3.

I know a general solution is y = e^(-0.4x)(C cos 0.2x + D sin 0.2x) after finding the roots. So substituting the values into the derivative y' = -0.4e^(-0.4x)(-0.2C sin 0.2x + 0.2D cos 0.2x) ends with 0.4C + 0.2D = 3, so C = 3 and D = 9.

Doing the same with the general solution with x = -2: y = e^(-0.4x)(C cos 0.2x + D sin 0.2x). I end up with 1(C x 1 + D x 0), in other words C = -2.

My confusion comes from the fact that C should be the same in both instances??

March 4th 2008, 12:31 PM

Quote: I know a general solution is y = e^(-0.4x)(C cos 0.2x + D sin 0.2x) after finding the roots.

(I haven't checked that, so I'll assume it's correct.)

Quote: So substituting the values into the derivative y' = -0.4e^(-0.4x)(-0.2C sin 0.2x + 0.2D cos 0.2x) ends with 0.4C + 0.2D = 3.

No! What you are told is that y'(0) = 3. This means that you put x = 0 in the expression for y', which tells you that -0.4(0.2D) = 3 (since e^0 and cos 0 are both 1, and sin 0 is 0).

Quote: So C = 3 and D = 9. Doing the same with the general solution with x = -2...

That should be y = -2 when x = 0.

Quote: y = e^(-0.4x)(C cos 0.2x + D sin 0.2x). I end up with 1(C x 1 + D x 0), in other words C = -2.

This time you've got it right!

Quote: My confusion comes from the fact that C should be the same in both instances??
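Working the thread's problem through with the correct derivative (the characteristic equation 5r^2 + 4r + 1 = 0 gives r = -0.4 ± 0.2i; y(0) = -2 gives C = -2; the product-rule derivative gives y'(0) = -0.4C + 0.2D = 3, hence D = 11), the resulting solution can be checked numerically. A sketch in Python; the RK4 integrator and the constants C = -2, D = 11 are my own working, not from the thread:

```python
import math

def exact(x, C=-2.0, D=11.0):
    # candidate closed form: y = e^(-0.4x) (C cos 0.2x + D sin 0.2x)
    return math.exp(-0.4 * x) * (C * math.cos(0.2 * x) + D * math.sin(0.2 * x))

def rk4_solve(x_end, y0=-2.0, v0=3.0, n=2000):
    # integrate 5y'' + 4y' + y = 0, i.e. y'' = -(4y' + y)/5, from x = 0
    h = x_end / n
    y, v = y0, v0
    acc = lambda y, v: -(4.0 * v + y) / 5.0
    for _ in range(n):
        k1y, k1v = v, acc(y, v)
        k2y, k2v = v + 0.5 * h * k1v, acc(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = v + 0.5 * h * k2v, acc(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = v + h * k3v, acc(y + h * k3y, v + h * k3v)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return y

print(rk4_solve(2.0), exact(2.0))
```

If the constants are right, the two printed values agree to many decimal places; a wrong pair (such as the poster's C = 3, D = 9) diverges from the integrated solution immediately.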
Q. Coordinates of the centre of a circle whose radius is 2 units and which touches the line pair `x^2 - y^2 - 2x + 1 = 0`

a) `(4,0)` b) `(1 + 2sqrt2, 0)` c) `(4,1)` d) `(1, 2sqrt2)`

First consider the line pair with the equation `x^2 - y^2 - 2x + 1 = 0`. The left side of the equation contains the complete square trinomial `x^2 - 2x + 1`, which equals `(x-1)^2`. So this equation can be rewritten as `(x-1)^2 = y^2`, which corresponds to the equations of two lines: `y = x - 1` and `y = -x + 1`. These lines are graphed below:

If a circle touches these two lines, its center has to lie on a bisector of the angle formed by the lines (a 90 degree angle). This means the center has to lie either on the line with the equation x = 1 (and thus have x-coordinate x = 1) or on the x-axis (and thus have y-coordinate y = 0). Choices a, b and d all satisfy these requirements.

Choice d: the center is at the point `(1, 2sqrt2)`. Consider the perpendicular dropped from this point to either of the lines. This is the radius of the circle in question, and it has to equal 2. The right triangle formed by the point (1,0), the point `(1, 2sqrt2)` and this perpendicular has to be isosceles (the bisector and the line form a 45 degree angle), so the length of the hypotenuse is `sqrt2` times 2, which is correct. So choice d is one of the correct choices.

If the center lies on the x-axis, as in choices a and b, the x-coordinate of the center `(x_0)` has to be such that `x_0 - 1` is also `sqrt2` times the radius 2. So `x_0 = 1 + 2sqrt2`, as in choice b.

Answer: b and d
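The answer can be double-checked with the point-to-line distance formula: the distance from a candidate center to each of the lines x - y - 1 = 0 and x + y - 1 = 0 must equal the radius 2. A quick check in Python (the function and variable names are mine):

```python
import math

def distances_to_pair(x0, y0):
    # distances from (x0, y0) to the lines x - y - 1 = 0 and x + y - 1 = 0
    d1 = abs(x0 - y0 - 1) / math.sqrt(2)
    d2 = abs(x0 + y0 - 1) / math.sqrt(2)
    return d1, d2

r2 = 2 * math.sqrt(2)
for label, (x0, y0) in {"a": (4, 0), "b": (1 + r2, 0),
                        "c": (4, 1), "d": (1, r2)}.items():
    d1, d2 = distances_to_pair(x0, y0)
    touches = abs(d1 - 2) < 1e-9 and abs(d2 - 2) < 1e-9
    print(label, round(d1, 3), round(d2, 3), touches)
```

Only choices b and d give distance 2 to both lines, confirming the worked answer.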
IMPEDANCE BRIDGE by Harry Lythall - SM0VPO

This project is dedicated to Patrick. It was inspired by the e-mail question "How do you measure the impedance of an antenna?". The simple answer is that it does not really matter too much. If an antenna impedance is a little too high then the losses are still too small to worry about, but there are methods of measuring the impedance of an antenna, or any other device terminating a feeder.

Consider for a moment the basic Wheatstone Bridge circuit. This is given to all students when beginning a career in radio or electronics. It consists of two voltage dividers (R1/R2 and R3/R4), each dividing the same voltage. If all four resistors are identical then the junctions (A1 and A2) sit at the same voltage; the difference between them is zero. If any one of the resistors changes in value then the bridge balance is destroyed and there is a measurable voltage difference. This effect can be used to measure the impedance of antennas and even inductors and capacitors. All you need do is replace the DC source with an RF source and replace one of the resistors with the unknown device. We shall replace R4 with an antenna and feeder. We shall also connect an RF transformer to points A1 and A2. The secondary of the transformer gives us an output signal that can be measured with a variety of devices. The output can be measured with any RF indicating device. I have used a spectrum analyser, and my RF generator is the tracking generator built into the analyser. If you are not so wealthy then use a diode probe to indicate the balance and an ordinary signal generator to generate the RF signal. A small tip: if you are only interested in frequencies below about 20MHz then the VCO contained in the 74HCT4046 chip can be used. You will need a small low-pass filter to get rid of the third harmonic.
But looking at my analyser for the moment, we see that an unterminated bridge circuit gives an (almost) perfectly flat, constant-level signal from almost DC to well into the VHF band. In the second analyser picture we have a 50-ohm termination and the bridge is balanced, again well into the VHF spectrum. Now we substitute the 50-ohm termination with the feeder connected to my old HF multiband antenna hidden in the trees. There are three dips in the signal, showing that the antenna gives some sort of a match on three frequencies. Here we are only looking at 0 - 20MHz. Sweep the band and we see that the frequencies are 6.715MHz, 14.350MHz and 19.55MHz. My 7MHz antenna is a bit long (should be 7.050MHz), my 14MHz antenna is a bit short (should be 14.175MHz) but I can also see why my 18MHz antenna is such a load of shit. This just shows that when you add parallel dipole antennas to a feeder/balun they have a tendency to detune each other. Here we can see the other antenna on the mast. This antenna is 10 metres above the ground and so it is almost a perfect 50 ohms. The impedances are not at first sight perfect, but still quite good. In reality they are almost perfect (VSWR = 1:1.1). Let us zoom in a little and take a closer look. My spectrum analyser is digital, and a wide scan with a low-bandwidth display tends not to display narrow peaks or troughs. The first is the 7MHz antenna. The "mush" to the right of centre is a load of broadcast stations interfering with my readings; 7.1MHz broadcast stations are both many and strong. The 14MHz trace is nice and clean and also shows quite a wide bandwidth. This is almost certainly due to the interaction of the 7MHz dipole. In both traces we are looking at a bandwidth span of 1MHz. When I decided to make up the bridge I thought that the wires should be as short as possible, so a PCB would be of benefit.
Connectors cost money so I just terminated the spectrum analyser and tracking generator to coaxial cables and left a pair of copper wires hanging off the board for connecting to my feeder. The transformer is just a few turns of wire on a ferrite ring for both primary and secondary. I twisted the two wires together before winding (bifilar), but I believe that this reduced the upper frequency possible with the bridge. To get the highest possible frequency of operation you should try to reduce the capacitance between the windings. When using a 50-ohm dummy load you may have to add a small capacitor to either R2 or the external connections to balance the bridge at VHF. I didn't bother since I am only interested in the HF bands.

Ok, so I will get loads of e-mail for minute details of the coil. I used a total of 7 + 7 turns of 0.25mm dia. PTFE-insulated wire, wound on a Sunday, using my fingers, with a tension of 1.6 Newton-metres. The ambient temperature was 18 degrees C (291 Kelvin), with a relative humidity of 28% in the workshop. I had corn-flakes for breakfast.

The basic bridge can also be extended with a 50pF fixed capacitor and a 100pF variable cap to measure the reactance of the antenna/feeder system. If the best match occurs with the variable cap at 50% (50pF) then the antenna/feeder is resistive. If the variable cap is at more than 50% to get a balance then the antenna is inductive, and if it is at less than 50% then the antenna/feeder is capacitive. You can also replace R2 with a 150-ohm non-inductive variable resistor; at a perfect balance the resistor value equals the antenna/feeder impedance. The potentiometer can be calibrated if it is linear: 0 - 150 ohms equates to 0 - 300 degrees of rotation of the shaft.

Best regards from Harry - SM0VPO
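For readers who want to put numbers on the balance condition, here is a small sketch. The two-divider arithmetic follows the Wheatstone description above with R1 = R2 = R3 = 50 ohms; the reflection-coefficient and VSWR formulas are standard transmission-line results, not something taken from Harry's schematic, and a purely resistive unknown is assumed.

```python
# Wheatstone bridge with R1 = R2 = R3 = Z0 = 50 ohms and the unknown
# impedance Z in place of R4, driven by a source voltage vs.
def bridge_detector_voltage(z, z0=50.0, vs=1.0):
    a1 = vs * z0 / (z0 + z0)   # junction of the fixed divider: vs/2
    a2 = vs * z / (z0 + z)     # junction of the divider containing Z
    return a2 - a1             # voltage seen by the detector transformer

def vswr(z, z0=50.0):
    gamma = abs(z - z0) / (z + z0)  # reflection coefficient (real, positive Z)
    return (1 + gamma) / (1 - gamma)

print(bridge_detector_voltage(50.0))  # 0.0 -> bridge balanced
print(round(vswr(55.0), 2))           # 1.1, like the article's antennas
```

Note that the detector voltage works out to (vs/2) times the reflection coefficient, which is why the dip depth on the analyser tracks the quality of the match.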
Assume a Spherical FutureBaby…Assume a Spherical FutureBaby... Welcome to today’s exciting episode of “How Big a Dork Am I?” Today, we’ll be discussing the making of unnecessary models: In this graph, the blue points represent the average mass in grams of a fetus at a given week of gestation, while the red line is the mass predicted by a simple model treating the fetus as a sphere of uniform density with a linearly increasing radius. The “model” was set up by taking the 40-week length reported at BabyCenter, and dividing by two to get an approximate radius for the spherical baby. Then I assumed that the actual radius increased linearly from zero to the final value, calculated the volume of the sphere, and multiplied by a constant density to get reasonable agreement between the model and the data. If you take the numbers I put into this, and use them to estimate the mass of a cell in this model baby, you find that a cell with a volume of one cubic micron (10^-18 m^3) would have a mass of about 50 femtograms, which is kind of low, but remarkably good for such a silly model. Oh, the things I will do to amuse myself… 1. #1 Aaron Bergman May 20, 2008 That’s some serious procrastination you’ve got going there. Grading? Writing up a paper? 2. #2 Chad Orzel May 20, 2008 That’s some serious procrastination you’ve got going there. Grading? Writing up a paper? I was between a couple of meetings, and trying not to grade lab reports. 3. #3 John Novak May 20, 2008 (Also, there’s no error bars.) 4. #4 dr. dave May 20, 2008 Would this be a perfectly smooth, frictionless, spherical baby? 5. #5 Captain Button May 20, 2008 Inspired by the ultrasound pictures downblog: What is the mass of the Star Child at the end of 2001: A Space Odyssey? 6. #6 Romeo Vitelli May 20, 2008 When is this kid due again? You have way too much time on your hands. 7. #7 Daryl McCullough May 20, 2008 Here I am being an ambulance-chaser theorist. Why should the radius increase linearly with time? 
Here's my theory: because the rate at which nutrients can enter the fetus is proportional to the surface area (maybe). According to this theory, we would predict (where M = mass, A = surface area, and t = time):

dM/dt = k A

where k is an empirically determined constant. Assuming that M scales as the cube of some characteristic length R, and A scales as the square of R, then we have

k1 d/dt (R^3) = k k2 R^2

where k1 and k2 are two other constants. Since d/dt (R^3) = 3 R^2 dR/dt, this simplifies to

dR/dt = k k2/(3 k1)

We can absorb k1, k2 and the factor 3 into the unknown constant k to get

dR/dt = k

Do I win? Isn't that even dorkier than your original post?

#8 Anonymous, May 20, 2008
Should reach 9.2 metric tons in 10 years. Hope your floors are reinforced.

#9 Ian Durham, May 20, 2008
Congrats, by the way. It's blissfully enjoyable until they learn to talk. Anyway, I actually think that is pretty darn cool, by the way (the graph).

#11 Chad Orzel, May 20, 2008
John Novak: "Also, there's no error bars."
They just don't show up on the graph– the data are specified to +/- 1 gram. I'm sure that makes sense. Really.
dr. dave: "Would this be a perfectly smooth, frictionless, spherical baby?"
But of course. Can't you tell from the ultrasounds? (Actually, Kate would probably beg to differ regarding the "frictionless" part…)
Daryl: "Here I am being an ambulance-chaser theorist. Why should the radius increase linearly with time?"
It's not, really. You can tell from the graph– the slope of the model is too high early on, and too low later. A somewhat higher power would probably be a better fit– taking out the early part (which is fairly close to exponential, almost doubling every week) and the last three points, weeks 16-40 fit pretty well to t^4. Linear growth is the easiest thing to simulate, though.
Yes, I just cranked that into Excel and had it do a power-law fit. God, I'm a dork.

#12 miller, May 21, 2008
I would have tried to model it with the logistic equation.

#13 Luke, May 23, 2008
Since your baby is expanding linearly with time, Ω → 0. I'm sure you and your wife are glad that his/her self-gravity isn't significant.
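For anyone who wants to play along at home, the spherical-baby model is a few lines of code. The 51 cm term length and the 50 kg/m^3 effective density below are my guesses at the numbers behind the post, chosen so that the cubic-micron "cell" comes out at the 50 femtograms mentioned above; they are not from the original spreadsheet.

```python
from math import pi

TERM_WEEKS = 40
TERM_RADIUS_M = 0.51 / 2  # half of an assumed ~51 cm 40-week length
DENSITY_KG_M3 = 50.0      # effective density: 1 um^3 then weighs ~50 fg

def spherical_baby_mass_g(week):
    """Mass of a uniform-density sphere whose radius grows linearly to term."""
    r = TERM_RADIUS_M * week / TERM_WEEKS
    return (4.0 / 3.0) * pi * r**3 * DENSITY_KG_M3 * 1000.0  # kg -> g

print(round(spherical_baby_mass_g(40)))  # -> 3473, a plausible birth weight

# Sanity check on the cell-mass figure quoted in the post:
# one cubic micron (1e-18 m^3) at this density, converted kg -> femtograms.
cell_mass_fg = 1e-18 * DENSITY_KG_M3 * 1e18
```

With these assumed inputs the model lands near 3.5 kg at 40 weeks, consistent with the "reasonable agreement" the post describes.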
Lancaster, PA Math Tutor Find a Lancaster, PA Math Tutor I have a degree in business, and experience in business, accounting, finance, and math. I am an easy-going person who is very patient with students. My approach is to pinpoint a problem area and give the proper information and sources of information to help them when they are on their own. 22 Subjects: including statistics, ESL/ESOL, algebra 1, algebra 2 ...Accounting is a basic concept which is muddled by numbers, making it hard to see the larger picture. I can help you learn accounting as more than just numbers -- as concepts and processes. I currently work at Lancaster Bible College as a Staff Accountant, where I have been employed for 3 years; I also coach the Men's and Women's Tennis Teams at LBC, which is Division III. 4 Subjects: including algebra 1, prealgebra, accounting, tennis Teaching is my calling. I've taught elementary-aged children in public schools for 22+ years. I get to know a child's strengths and needs quickly, and have proven over and over that I motivate, encourage, and bring out the best in children. 10 Subjects: including prealgebra, reading, writing, grammar ...I have taught all classes from preschool through 12th grade and then adult classes in the evenings. Lessons were designed for everyone from those who spoke no English to those studying for the TOEFL. I have tutored students as requested by school districts, including homebound students, in all subjects at both the elementary and secondary levels. 22 Subjects: including algebra 2, ASVAB, ESL/ESOL, precalculus ...I have been "teacher helping" since 2001 at Pequea Elementary and since 2008 at Hambright Elementary. I have worked extensively with grades K-6, reinforcing math and reading skills individually and in groups. I try to make learning fun by turning rote reading drills into fun games. I have been sewing for the last 20 years.
8 Subjects: including algebra 2, elementary (k-6th), algebra 1, prealgebra
MathGroup Archive: July 2010

Display/manipulating numeric values inside a DynamicModule[]

• To: mathgroup at smc.vnet.net
• Subject: [mg110744] Display/manipulating numeric values inside a DynamicModule[]
• From: Leo Alekseyev <dnquark at gmail.com>
• Date: Sun, 4 Jul 2010 03:10:13 -0400 (EDT)

Recently, I've been using DynamicModule[] to interactively explore some plots that I make -- by introducing a locator whose coordinates allow me to trace the plotted data in some fashion (e.g. by displaying the function value for the x coordinate of the locator, or finding the closest plotted point to a locator in a ListPlot, etc.)

My problem is that I haven't figured out a good way to display dynamically updated values as part of the plot or, for that matter, perform manipulations with the dynamic values. The reason seems to be that once an expression has head Dynamic, the behavior of many familiar functions changes (e.g. NumericQ on a dynamic value returns False, which makes it impossible to numerically evaluate Re or Im, etc.) Below is a simple example of what I'm doing, and a workaround that I came up with. Here are some concrete questions:

(a) is there a better way to display the dynamic values "a" and "b" inside a formatted string?.. My workaround is to use something like Grid[{{"a:", a, "; b:", b}}], which is not entirely satisfactory

(b) is there a better way of doing arithmetic with the dynamic value "a"? I would like to be able to say something like b = Re[a], as opposed to b = Dynamic[Re[whatever long expression I might have used to compute a]]

(c) are there any good examples of using Dynamic to trace plots, as described above?..
I'm sure I'm not the only one who is doing this :) f[x_] := x^2 + I; DynamicModule[{p = {1, 1}, a, b}, a := Dynamic[f[p[[1]]]]; b := Dynamic[Re@f[p[[1]]]]; Show[{Plot[{Re[f[x]]}, {x, -2, 2}, Frame -> True], Graphics@Locator[Dynamic[p], Appearance -> {Large}], Graphics@Text[Grid[{{"a:", a, "; b:", b}}], {-1, 2}]}]]
The R programming language is fast gaining popularity among data scientists for performing statistical analyses. It is extensible and has a large community of users, many of whom contribute packages to extend its capabilities. However, it is single-threaded and limited by the amount of RAM on the machine it is running on, which makes it challenging to run R programs on big data. There are efforts under way to remedy this situation, which essentially fall into one of the following two categories:
• Integrate R into a parallel database, or
• Parallelize R so it can process big data
In this post, we look at Vertica's take on "integrating R into a parallel database" and the two major areas that allow for the performance improvement. A follow-on blog post will describe alternatives to this first approach.

1.) Running multiple instances of the R algorithm in parallel (query partitioned data)

The first major performance benefit of Vertica's R implementation comes from running multiple instances of the R algorithm in parallel on queries that chunk the data independently. In the recently launched Vertica 6.0, we added the ability to write sophisticated R programs and have them run in parallel on a cluster of machines. At a high level, Vertica threads communicate with R processes to compute results. The integration uses optimized data conversion from Vertica tables to R data frames, and all R processing is automatically parallelized between Vertica servers. The diagram below shows how the Vertica R integration has been implemented from a parallelization perspective. The parallelism comes from processing independent chunks of data simultaneously (referred to as data parallelism). SQL, being a declarative language, allows database query optimizers to figure out the order of operations, as well as which of them can be done in parallel, due to the well-defined semantics of the language.
For example, consider the following query that computes the average sales figures for each month:

SELECT month, avg(qty*price) FROM sales GROUP BY month;

The semantics of the GROUP BY operation are such that the average sales of a particular month are independent of the average sales of a different month, which allows the database to compute the averages for different months in parallel. Similarly, the SQL-99 standard defines analytic functions (also referred to as window functions) -- these functions operate on a sliding window of rows and can be used to compute moving averages, percentiles etc. For example, the following query assigns student test scores into quartiles for each grade:

SELECT name, grade, score, NTILE(4) OVER (PARTITION BY grade ORDER BY score DESC) FROM test_scores;

│ name      │ grade │ score │ ntile │
│ Tigger    │     1 │    98 │     1 │
│ Winnie    │     1 │    89 │     1 │
│ Rabbit    │     1 │    78 │     2 │
│ Roo       │     1 │    67 │     2 │
│ Piglet    │     1 │    56 │     3 │
│ Owl       │     1 │    54 │     3 │
│ Eeyore    │     1 │    45 │     4 │
│ Batman    │     2 │    98 │     1 │
│ Ironman   │     2 │    95 │     1 │
│ Spiderman │     2 │    75 │     2 │
│ Heman     │     2 │    56 │     2 │
│ Superman  │     2 │    54 │     3 │
│ Hulk      │     2 │    43 │     4 │

Again, the semantics of the OVER clause in window functions allow the database to compute the quartiles for each grade in parallel, since they are independent of one another. Unlike some of our competitors, instead of inventing yet another syntax to perform R computations inside the database, we decided to leverage the OVER clause, since it is a familiar and natural way to express data-parallel computations. A prior blog post shows how easy it is to create, deploy and use R functions on Vertica. Listed below is an example comparing R over ODBC vs Vertica's R implementation with the UDX framework.

Looking at the chart above, as your data volumes increase, Vertica's implementation using the UDX framework scales much better than an ODBC approach. Note: numbers indicated on the chart should only be used for relative comparisons since this is not a formal benchmark.

2.)
Leveraging column-store technology for optimized data exchange (query non-partitioned data)

It is important to note that even for non-data-parallel tasks (functions that operate on input that is basically one big chunk of non-partitioned data), Vertica's implementation provides better performance, since the computation runs on a server instead of the client and the data flow between the database and R is optimized (no need to parse the data again). The other major benefit of Vertica's R integration comes from the UDX framework's avoidance of ODBC and from the efficiencies of Vertica's column store. Here are some examples showing how much more efficient Vertica's integration with R is compared to a typical ODBC approach for a query over non-partitioned data.

As the chart above indicates, performance improvements are also achieved by optimizing the data transfers between Vertica and R. Since Vertica is a column store and R is vector based, it is very efficient to move data from a Vertica column to R vectors in very large blocks. Note: numbers indicated on the chart should only be used for relative comparisons since this is not a formal benchmark.

This blog focused on performance and R algorithms that are amenable to data-parallel solutions. A following post will talk about our approach to parallelizing R for problems that are not amenable to data-parallel solutions, such as building a single decision tree over all of the data. For more details on how to implement R in Vertica please see the following blog post: http://www.vertica.com/2012/10/02/how-to-implement-r-in-vertica/
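The data-parallel claim is easy to illustrate outside the database: because each PARTITION BY group is independent, the per-group work can run concurrently. Here is a plain-Python sketch of the NTILE(4) semantics over the test-score data from the post (this mimics the SQL behavior for illustration; it is not Vertica or R code):

```python
from itertools import groupby

def ntile(n_rows, k):
    """Tile sizes as in SQL NTILE: the first (n % k) tiles get one extra row."""
    base, extra = divmod(n_rows, k)
    sizes = [base + 1] * extra + [base] * (k - extra)
    tiles = []
    for i, size in enumerate(sizes, start=1):
        tiles.extend([i] * size)
    return tiles

scores = [("Tigger", 1, 98), ("Winnie", 1, 89), ("Rabbit", 1, 78),
          ("Roo", 1, 67), ("Piglet", 1, 56), ("Owl", 1, 54), ("Eeyore", 1, 45),
          ("Batman", 2, 98), ("Ironman", 2, 95), ("Spiderman", 2, 75),
          ("Heman", 2, 56), ("Superman", 2, 54), ("Hulk", 2, 43)]

result = {}
# Each grade is an independent chunk -- this loop is what the database
# is free to run in parallel across servers.
for grade, rows in groupby(sorted(scores, key=lambda r: (r[1], -r[2])),
                           key=lambda r: r[1]):
    rows = list(rows)
    for (name, _, _), tile in zip(rows, ntile(len(rows), 4)):
        result[name] = tile

print(result["Rabbit"], result["Hulk"])  # 2 4, matching the table above
```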
How can I compute the range of signed and unsigned types

James Cownie jcownie at etnus.com
Thu Apr 19 02:01:34 PDT 2001

For floating point you may also want to look at MACHAR, which appears to calculate most (all?) of the interesting properties of your floating point representation. (Of course it may get confused by x86s working in 80 bit intermediates unless you're very careful how you compile it, so, as ever, YMMV).

This subroutine is intended to determine the parameters of the floating-point arithmetic system specified below. The determination of the first three uses an extension of an algorithm due to M. Malcolm, CACM 15 (1972), pp. 949-951, incorporating some, but not all, of the improvements suggested by M. Gentleman and S. Marovich, CACM 17 (1974), pp. 276-277. An earlier version of this program was published in the book Software Manual for the Elementary Functions by W. J. Cody and W. Waite, Prentice-Hall, Englewood Cliffs, NJ, 1980. The present program is a translation of the Fortran 77 program in W. J. Cody, "MACHAR: A subroutine to dynamically determine machine parameters", TOMS (14), 1988.

Parameter values reported are as follows:

ibeta - the radix for the floating-point representation

it - the number of base ibeta digits in the floating-point significand

irnd - 0 if floating-point addition chops
       1 if floating-point addition rounds, but not in the IEEE style
       2 if floating-point addition rounds in the IEEE style
       3 if floating-point addition chops, and there is partial underflow
       4 if floating-point addition rounds, but not in the IEEE style, and there is partial underflow
       5 if floating-point addition rounds in the IEEE style, and there is partial underflow

ngrd - the number of guard digits for multiplication with truncating arithmetic. It is 0 if floating-point arithmetic rounds, or if it truncates and only it base ibeta digits participate in the post-normalization shift of the floating-point significand in multiplication; 1 if floating-point arithmetic truncates and more than it base ibeta digits participate in the post-normalization shift of the floating-point significand in multiplication.

machep - the largest negative integer such that 1.0+FLOAT(ibeta)**machep .NE. 1.0, except that machep is bounded below by -(it+3)

negeps - the largest negative integer such that 1.0-FLOAT(ibeta)**negeps .NE. 1.0, except that negeps is bounded below by -(it+3)

iexp - the number of bits (decimal places if ibeta = 10) reserved for the representation of the exponent (including the bias or sign) of a floating-point number

minexp - the largest in magnitude negative integer such that FLOAT(ibeta)**minexp is positive and normalized

maxexp - the smallest positive power of BETA that overflows

eps - the smallest positive floating-point number such that 1.0+eps .NE. 1.0. In particular, if either ibeta = 2 or irnd = 0, eps = FLOAT(ibeta)**machep. Otherwise, eps = (FLOAT(ibeta)**machep)/2

epsneg - a small positive floating-point number such that 1.0-epsneg .NE. 1.0. In particular, if ibeta = 2 or irnd = 0, epsneg = FLOAT(ibeta)**negeps. Otherwise, epsneg = (ibeta**negeps)/2. Because negeps is bounded below by -(it+3), epsneg may not be the smallest number that can alter 1.0 by subtraction

xmin - the smallest non-vanishing normalized floating-point power of the radix, i.e., xmin = FLOAT(ibeta)**minexp

xmax - the largest finite floating-point number. In particular xmax = (1.0-epsneg)*FLOAT(ibeta)**maxexp. Note - on some machines xmax will be only the second, or perhaps third, largest number, being too small by 1 or 2 units in the last digit of the significand.

-- Jim

James Cownie <jcownie at etnus.com>
Etnus, LLC. +44 117 9071438
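The first steps of the Malcolm/MACHAR procedure are short enough to sketch. Here is the radix and machine-epsilon part in Python, which always uses IEEE-754 doubles; this is only a sketch, and a full MACHAR port would also need the rounding-mode, guard-digit and underflow/overflow probes described above.

```python
import sys

def find_radix():
    """Malcolm's trick: grow a by doubling until a+1 is no longer exactly
    representable, then find the smallest b that still changes a. The gap
    (a+b)-a at that point is the radix of the floating-point system."""
    a = 1.0
    while ((a + 1.0) - a) - 1.0 == 0.0:
        a += a
    b = 1.0
    while (a + b) - a == 0.0:
        b += 1.0
    return int((a + b) - a)

def find_machine_epsilon(radix):
    """Smallest power of the radix that still alters 1.0 on addition
    (assuming IEEE-style round-to-nearest)."""
    eps = 1.0
    while 1.0 + eps / radix != 1.0:
        eps /= radix
    return eps

ibeta = find_radix()
eps = find_machine_epsilon(ibeta)
print(ibeta, eps == sys.float_info.epsilon)  # 2 True on IEEE-754 doubles
```

On hardware with 80-bit intermediates this naive C translation can misreport, which is exactly the x86 caveat mentioned above; CPython's float avoids the problem by always working in 64-bit doubles.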
Scattering of a transversely confined Neumann beam by a spherical particle

Various properties of an electromagnetic wave whose spherical multipole expansion contains only Riccati–Neumann functions are examined. In particular, the novel behavior of the beam phase during diffractive spreading is discussed. When a Neumann beam is scattered by a spherical particle, the diffraction and external reflection portions of the scattering amplitude constructively interfere for large partial waves. As a result, a set of rapidly decreasing beam shape coefficients is required to cut off the partial wave sum in the scattering amplitudes. Because of its strong singularity at the origin, a Neumann beam can be produced by a point source of radiation at the center of a spherical cavity in a high conductivity metal, and Neumann beam scattering by a spherical particle can occur for certain partial waves if the sphere is placed at the center of the cavity as well. © 2011 Optical Society of America

OCIS codes: (260.1960) Physical optics: Diffraction theory; (290.4020) Scattering: Mie theory; (290.5825) Scattering: Scattering theory

ToC category: Physical Optics
Original manuscript: August 17, 2011
Manuscript accepted: October 2, 2011
Published: November 17, 2011
Virtual issues: Vol. 7, Iss. 2, Virtual Journal for Biomedical Optics

James A. Lock, "Scattering of a transversely confined Neumann beam by a spherical particle," J. Opt. Soc. Am. A 28, 2577-2587 (2011)
{"url":"http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=josaa-28-12-2577","timestamp":"2014-04-21T16:11:50Z","content_type":null,"content_length":"167662","record_id":"<urn:uuid:f4797b80-7452-4bec-86f7-04f786681577>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Report of Test FR 4020
Home Smoke Alarm Project, Alarm Response Calibrations
December 2003
Thomas G. Cleary
Building and Fire Research Laboratory
National Institute of Standards and Technology

This Report of Test presents data from a series of alarm response calibration tests conducted as part of research into the performance of smoke alarms. The purpose of these tests was to provide a consistent calibration of all of the modified alarms used in the fire test series. The modified photoelectric and ionization alarms were exposed to smolder smoke and to soot from a propene flame. CO alarms were exposed to smolder smoke that contained carbon monoxide. Full details on the alarm calibration procedures can be found in NIST Technical Note 1455 [1]. The alarm naming convention is described in NIST TN 1455, along with uncertainty estimates for each measurement.

Test Data
Each data file worksheet is identified as "Smolder" for smoldering smoke exposure or "Flame" for flaming soot exposure, followed by a number incremented for repeated tests. A _s1, _s2, or _s3 appended to a Smolder worksheet indicates it was used to generate the calibration equation for test sequence 1 (SDC 1-15), test sequence 2 (SDC 20-28), or test sequence 3 (SDC 30-41), respectively. In general, alarms were re-calibrated if they were used in a test sequence, or after they were repaired.
CO Alarms
CO Alarm Calibration Spreadsheet Files
The worksheet columns for CO alarms include:
• Time (s) = The time in seconds from the start of the test
• T (°C) = The FE/DE duct air temperature
• UL Transmittance = The FE/DE Upper Laser transmittance (1.52 m pathlength)
• CO-#-## (V) = The modified alarm voltage
• CO (V) = The carbon monoxide NDIR analyzer output voltage
• CO2 (V) = The carbon dioxide NDIR analyzer output voltage
• Rel Humidity (%) = The FE/DE duct relative humidity
• BP (torr) = FE/DE Lab barometric pressure in torr
• Time (s) avg interval = This column contains blocks of time when nominally steady smoke production conditions existed. The next seven columns are the data points associated with these time blocks.
• CO-#-## (V) interval mean = Mean value of the CO alarm voltage over the steady smoke time interval
• CO-#-## (V) interval mean SD = The standard deviation of the CO alarm voltage mean
• CO (V) interval mean = Mean value of the NDIR analyzer voltage over the steady smoke time interval
• CO (V) interval SD = The standard deviation of the analyzer voltage mean
• Vol fraction (x10E-6) CO interval mean = Mean value of the NDIR analyzer CO concentration over the steady smoke time interval
• Vol fraction (x10E-6) CO interval SD = The standard deviation of the analyzer CO concentration

Photoelectric Alarms
Photoelectric Alarm Calibration Spreadsheet Files
The worksheet columns for Photoelectric alarms photo-1 and photo-3 include:
• Time (s) = The time in seconds from the start of the test
• T (°C) = The FE/DE duct air temperature
• UL Transmittance = The FE/DE Upper Laser transmittance (1.52 m pathlength)
• photo-#-## (V) = The modified alarm voltage
• Rel Humidity (%) = The FE/DE duct relative humidity
• MIC (pA) = Measuring ionization chamber (MIC) current in picoamps
• BP (torr) = FE/DE Lab barometric pressure in torr
• Time (s) avg interval = This column contains blocks of time when nominally steady smoke production conditions existed.
The next six columns are the data points associated with these time blocks.
• photo-#-## (V) interval mean = Mean value of the photo alarm voltage over the steady smoke time interval
• photo-#-## (V) interval mean SD = The standard deviation of the photo alarm voltage mean
• MIC (pA) interval mean = Mean value of the MIC current over the steady smoke time interval
• MIC (pA) interval SD = The standard deviation of the MIC current mean
• UL Trans. interval mean = Mean value of the upper laser transmittance over the steady smoke time interval
• UL Trans. interval SD = The standard deviation of the upper laser transmittance
• Extinction (1/m) interval mean = Mean value of the extinction coefficient over the steady smoke time interval
• Extinction (1/m) interval SD = The standard deviation of the extinction coefficient
• Obs (%/ft) interval mean = Mean value of obscuration over the steady smoke time interval
• Obs (%/ft) interval SD = The standard deviation of the smoke obscuration

Ionization Alarms
Ionization Alarm Calibration Spreadsheet Files
The worksheet columns for Ionization alarms include:
• Time (s) = The time in seconds from the start of the test
• T (°C) = The FE/DE duct air temperature
• UL Transmittance = The FE/DE Upper Laser transmittance (1.52 m pathlength)
• ion-#-## (V) = The modified alarm voltage
• Rel Humidity (%) = The FE/DE duct relative humidity
• MIC (pA) = Measuring ionization chamber (MIC) current in picoamps
• BP (torr) = FE/DE Lab barometric pressure in torr
• Time (s) avg interval = This column contains blocks of time when nominally steady smoke production conditions existed. The next six columns are the data points associated with these time blocks.
• ion-#-## (V) interval mean = Mean value of the ion alarm voltage over the steady smoke time interval
• ion-#-## (V) interval mean SD = The standard deviation of the ion alarm voltage mean
• MIC (pA) interval mean = Mean value of the MIC current over the steady smoke time interval
• MIC (pA) interval SD = The standard deviation of the MIC current mean
• UL Trans. interval mean = Mean value of the upper laser transmittance over the steady smoke time interval
• UL Trans. interval SD = The standard deviation of the upper laser transmittance
• Extinction (1/m) interval mean = Mean value of the extinction coefficient over the steady smoke time interval
• Extinction (1/m) interval SD = The standard deviation of the extinction coefficient
• Obs (%/ft) interval mean = Mean value of obscuration over the steady smoke time interval
• Obs (%/ft) interval SD = The standard deviation of the smoke obscuration
• Y interval mean = The ion alarm "Y" value computed from the alarm voltage signal over the steady smoke time interval
• Y interval SD = The ion alarm "Y" value standard deviation

Aspirated Alarms
Aspirated Alarm Calibration Spreadsheet Files
The worksheet columns for aspirated alarms include:
• Time (s) = The time in seconds from the start of the test
• T (°C) = The FE/DE duct air temperature
• UL Transmittance = The FE/DE Upper Laser transmittance (1.52 m pathlength)
• Asp-## (V) = The modified alarm voltage
• Rel Humidity (%) = The FE/DE duct relative humidity
• MIC (pA) = Measuring ionization chamber (MIC) current in picoamps
• BP (torr) = FE/DE Lab barometric pressure in torr
• Time (s) avg interval = This column indicates the time intervals of nominally steady smoke production that were specified to compute the mean values in the following columns.
• Asp-## (V) interval mean = Mean value of the alarm voltage over the steady smoke time interval
• Asp-## (V) interval mean SD = The standard deviation of the alarm voltage mean
• UL Trans. interval mean = Mean value of the upper laser transmittance over the steady smoke time interval
• UL Trans. interval SD = The standard deviation of the upper laser transmittance
• Extinction (1/m) interval mean = Mean value of the extinction coefficient over the steady smoke time interval
• Extinction (1/m) interval SD = The standard deviation of the extinction coefficient
• Obs (%/ft) interval mean = Mean value of obscuration over the steady smoke time interval
• Obs (%/ft) interval SD = The standard deviation of the smoke obscuration

The home smoke alarm project was sponsored by the U.S. Consumer Product Safety Commission, the Centers for Disease Control and Prevention, the U.S. Fire Administration, the U.S. Department of Housing and Urban Development, and Underwriters Laboratories, with in-kind contributions from the National Fire Protection Association and the National Research Council Canada.

[1] Bukowski, R.W., Peacock, R.D., Averill, J.D., Cleary, T.G., Bryner, T.G., Walton, W.D., Reneke, P.A., and Kuligowski, E.D., Performance of Home Smoke Alarms: Analysis of the Response of Several Available Technologies in Residential Fire Settings, Natl. Inst. Stand. Technol., Tech. Note 1455 (2003).
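The extinction and obscuration columns above can be reproduced from the laser transmittance. Below is a minimal sketch in Python, assuming the standard Beer-Lambert reduction over the 1.52 m pathlength; the report's exact data-reduction conventions may differ:

```python
import math

PATHLENGTH_M = 1.52   # FE/DE upper laser pathlength, from the column notes
FOOT_M = 0.3048       # one foot, in meters

def extinction_coefficient(transmittance, pathlength=PATHLENGTH_M):
    """Beer-Lambert extinction coefficient (1/m) from fractional transmittance."""
    return -math.log(transmittance) / pathlength

def obscuration_per_ft(transmittance, pathlength=PATHLENGTH_M):
    """Percent obscuration per foot implied by the same transmittance."""
    sigma = extinction_coefficient(transmittance, pathlength)
    return 100.0 * (1.0 - math.exp(-sigma * FOOT_M))

# Clean air (transmittance 1.0) gives zero extinction and zero obscuration.
sigma = extinction_coefficient(0.90)   # roughly 0.069 1/m at 90 % transmittance
obs = obscuration_per_ft(0.90)         # roughly 2.1 %/ft
```

The same transmittance value drives both derived columns, which is why their interval means and standard deviations appear together in each worksheet.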
And so it begins… :) Hello, all! I am Amanda Kriegl, a junior and Elementary Education Major here at Augustana, and I will be working with Mrs. Peterson's kindergarteners throughout these next two terms (Winter and Spring) for Number Sense! I am very excited and cannot wait to see the kids grow in their number sense skills during the next few months we will be spending with them. Over the first few weeks, I have really gotten to know the children in Mrs. Peterson's classroom. During the first week, Markaye and I spent a lot of time simply getting to know the students on both a classroom and one-to-one basis. This really helped us get a feel for the overall demeanor of the classroom, as well as the entire classroom atmosphere. Keeping these things in mind will definitely help us better prepare and execute lesson plans and activities in the future. Some key things I noticed were that Mrs. Peterson's classroom has a lot of energy; her students like to move around and do a lot of talking. Additionally, we get a lot of participation from the students, which will help with student involvement in the activities we present. During week two of Number Sense, Markaye, Courtney, Morgan, and I were able to get to know the students on an individual level by assessing their number sense skills. First, Morgan and Courtney assessed the students using the "Trailblazer's Assessment #1," recording each student's scores in a common notebook that we will keep in the classroom so that it is easily accessible to any one of us while we are at Longfellow. Courtney and Morgan found that the students displayed a great range of skill levels, and some really excelled at the tasks given. The students who did excel were then further assessed through a more advanced assessment, the "Kathy Richardson Assessment C1 and C2," by Markaye and me.
This assessment focused on many different tasks, such as the symbol-number relationship, counting on, more-and-less concepts, etc. All of these assessments went fairly well until we got to the last student, when we realized that the wording of one of our questions was not completely clear to the students. The question we had been asking dealt with one of the more-and-less concepts. Initially, we had been asking the students, "How many more chips do you have?" (comparing a given number that the student had to a given number we had), but after this student responded with, "How many more to what?", we realized that our wording had not been entirely clear to the student (and possibly to the other students as well). Once we reworded the question to ask, "How many more chips do you have (student's given # of chips) than I do (our given # of chips)?", the student was able to promptly respond with the correct answer. This leads us to believe that our assessments of the other "more advanced" students need to be re-conducted, because they had not been asked the question in this particular way. The question could have been confusing for those other students as well, but perhaps they did not quite understand why they found it confusing. At the end of the day, this only reiterates the fact that we are human and make mistakes, and that not every assessment can be done perfectly, nor can it always tell you exactly what a student can or cannot do; these factors rely upon both the facilitator and the student. Therefore, we must assess once again to see what we can determine about each student's current number sense abilities. Forward we march! Posted on November 26th, 2012 by amandakriegl10 Filed under: Uncategorized
Santee, CA Algebra Tutor Find a Santee, CA Algebra Tutor ...This is very important, because math is a series of building blocks. Once the basics are learned, a student can then develop the skills to tackle any math problem. I believe anyone can learn to understand math and science with just a little bit of time and help from a tutor that cares. 35 Subjects: including algebra 2, English, algebra 1, writing ...I was diagnosed with ADHD in 8th grade and was fortunate enough to have supportive parents, teachers, and tutors who helped me to build up my confidence and reach my potential. I am passionate about helping other students realize their strengths and successfully navigate around their own cogniti... 23 Subjects: including algebra 2, algebra 1, chemistry, reading ...Calculus is like a fun puzzle for me, and I would be happy to help you find patterns in the math that will make it more interesting for you too. My emphasis is more on how to apply the concepts and less on memorizing formulas (which you sometimes have to do anyway). I double majored in English i... 26 Subjects: including algebra 1, algebra 2, reading, English ...I have participated in the Financial Literacy Campaign as well. I have gotten results and students get AHA! moments quite often. I have always loved and excelled in the mathematics/algebra 11 Subjects: including algebra 1, algebra 2, calculus, public speaking ...I have over twenty hours of in-class tutoring experience at Montgomery Middle School. Along with my studies I have also learned guitar and how to cook as a way of keeping myself well rounded and regularly compose music with members within the San Diego, and more specifically UCSD, community. 
I used to hold events where I would cook healthy meals for up to fifty people every other 10 Subjects: including algebra 1, algebra 2, calculus, cooking Related Santee, CA Tutors Santee, CA Accounting Tutors Santee, CA ACT Tutors Santee, CA Algebra Tutors Santee, CA Algebra 2 Tutors Santee, CA Calculus Tutors Santee, CA Geometry Tutors Santee, CA Math Tutors Santee, CA Prealgebra Tutors Santee, CA Precalculus Tutors Santee, CA SAT Tutors Santee, CA SAT Math Tutors Santee, CA Science Tutors Santee, CA Statistics Tutors Santee, CA Trigonometry Tutors
One-dimensional representations of W-algebras
Seminar Room 1, Newton Institute
Premet conjectured that any (finite) W-algebra has a one-dimensional representation. The goal of this talk is to explain results of the speaker towards this conjecture. We will start by giving a sketch of the proof for the classical Lie algebras. Then we will explain a reduction to rigid nilpotent elements using a parabolic induction functor. Finally, we will explain how, using the Brundan-Goodwin-Kleshchev category O, one can try to describe one-dimensional representations of W-algebras associated to rigid elements in exceptional Lie algebras.
SPM/The DCM Equation. 2. Dynamical Systems From Wikibooks, open books for an open world What is a dynamic equation? To understand DCM, you'll need to know what a dynamic equation is. It's extremely simple. A dynamic equation describes how a process (a system) changes in time or space. Here are a couple of examples - one from the real world, and one from the world of maths. (The latter is considerably more exciting.) Example 1: Let's say the bank gives you 3% interest on your savings. We're now at the end of year zero, and your extremely successful business has made you £50. How much will you have next year? We can work out the answer with a dynamic equation: $x(1) = 1.03 * x(0)\,$ Or more generally: $x(t) = 1.03 * x(t-1)\,$ Where $t$ is time and $x$ is your bank balance. You can apply this equation over and over again to see how your bank balance will develop. In reality, you probably know that there's a one-off equation to calculate compound interest for any number of years, but the point of this example is that the state equation is a simple rule describing how the system (your bank account) changes over time. Example 2: A dynamic equation may also couple several variables together, describing how a point moves through space over time. Take these three equations, which describe the rates of change of three numbers: $\begin{array}{lcl} \dfrac{dx}{dt} & = & \sigma (y - x) \\ \\ \dfrac{dy}{dt} & = & x (\rho - z) - y \\ \\ \dfrac{dz}{dt} & = & xy - \beta z \end{array}\,$ These equations give you the rate of change of variables $x$, $y$ and $z$ over time, whilst $\sigma$, $\rho$ and $\beta$ are numbers selected in advance - they are the parameters of the system, which fine-tune it. Don't worry about what they mean.
Together these equations form the Lorenz Attractor, and if you plot them on a graph, you get something which is not only crucial to chaos theory, but also quite pretty. So we've seen that repeatedly applying a short dynamic equation to its own output can describe the change of a system over time or space. As we'll explore next, such an equation forms the basis of Dynamic Causal Modelling.
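Both examples can be run in a few lines of code. The sketch below is in Python; the simple Euler time-stepping, the step size, and the classic parameter values (sigma = 10, rho = 28, beta = 8/3) are illustrative choices, not part of the original text:

```python
# Example 1: iterate the interest rule x(t) = 1.03 * x(t-1).
balance = 50.0
for year in range(10):
    balance = 1.03 * balance   # one application of the dynamic equation
# After ten years: 50 * 1.03**10, roughly 67.20

# Example 2: step the Lorenz equations forward in time with Euler steps.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classic parameter choices
x, y, z = 1.0, 1.0, 1.0                    # arbitrary starting point
dt = 0.01                                  # step size (illustrative)
trajectory = []
for _ in range(5000):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    trajectory.append((x, y, z))
# Plotting `trajectory` in 3D traces out the butterfly-shaped attractor.
```

Each pass through either loop is exactly the "apply the rule to its own output" idea described above.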
Math Forum: NCTM 2008
Math Forum Online Workshops - Building an Understanding of Probability
Suzanne Alejandre, the Math Forum @ Drexel
Download presentation slides: PDF format [5.5 MB]

In the Math Forum's online workshop, Technology Tools for Thinking and Reasoning about Probability, we investigate some mathematics topics common to middle school curricula within the theme of probability. In this context we explore the Math Tools digital library and several software tools that contribute in some way to mathematical understanding, problem solving, and reflection.

We are currently running two sections of this workshop. They started on March 14, 2008, and the six weeks will conclude April 25. The workshop facilitators, participants in the 2007 Materials Development Institute, are:
Section 1: Craig Russell and Kathy Traylor
Section 2: Seth Leavitt, Michelle Morison, and Amirah Cutts

Let's Do Math!
Choose which spinner is most likely to have produced the given experimental results.

Using Technology
Technology Problems of the Week (tPoW) submissions
Students are invited to use the "Submit your answer" link to share their solutions, and then "self-mentor" using specially designed hints, checks, and suggestions for extensions.

Workshop Activities
Spinner's tPoW discussions
Where are my students? Developing students' conceptual knowledge of probability
Might online tools help?

Math Tools
Overview and Registration (free)
Browse, Rate, Review, Discuss, and use My MathTools
Other Tools Cataloged in Math Tools
Participants in the workshop have shared

Scheduled Upcoming Workshops
Workshop 2: May 12 - June 23, 2008 (applications open April 7 - May 5)
Workshop 3: June 30 - August 11, 2008 (applications open May 30 - June 23)

Other Math Forum Resources
Contact Information
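A spinner like those in the "Let's Do Math!" activity can also be explored by simulation. The sketch below is Python, and the sector probabilities are hypothetical; the workshop's actual spinners are not described here:

```python
import random

def spin_counts(sector_probs, trials, seed=0):
    """Spin a spinner `trials` times; return how often each sector came up.
    `sector_probs` gives the sector sizes as probabilities summing to 1."""
    rng = random.Random(seed)  # fixed seed so the experiment is repeatable
    counts = [0] * len(sector_probs)
    for _ in range(trials):
        r = rng.random()
        cumulative = 0.0
        for i, p in enumerate(sector_probs):
            cumulative += p
            if r < cumulative:
                counts[i] += 1
                break
    return counts

# A spinner with one half sector, one quarter sector, and two eighth sectors:
counts = spin_counts([0.5, 0.25, 0.125, 0.125], trials=1000)
```

Comparing simulated counts like these against a set of experimental results is one way students can reason about which spinner most likely produced the data.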
Remainder of Summation and Factorials

November 1st 2009, 08:20 AM #1
My problem is: What is the remainder when 1!+2!+...+100! is divided by 13? How do you simplify a factorial to find its remainder? Also, is the remainder of the summation of factorials the same as the remainder of the sum of the remainders of the factorials?

November 1st 2009, 08:41 AM #2
Yes, you can find the remainder of each, add them up, and reduce again mod 13 to get the remainder of the sum. Note that 13!, 14!, ..., 100! all leave a remainder of 0 when divided by 13. That should simplify your calculations a lot.

November 1st 2009, 09:45 AM #3
So any factorial greater than or equal to the divisor is 0, is this right? This leads into another question that I have: find the remainder when 40!/((2^20) * 20!) is divided by 8. Can I find the remainder of each separately, or do I have to simplify the factorials first? I'm guessing that I have to simplify the factorials first, because 40!/20! does not contain 8. So, 40!/20! = 40*39*38*...*22*21. Now since 40!/20! is a multiple of 32, and 8 divides 32, then 8 must divide 40!/20! with remainder 0. But here's where I'm unsure. I think that 2^4 == 0 (mod 8) implies 2^20 == 0 (mod 8). But now I'm left with 0/0. Does this mean the remainder is 0, indeterminate, or did I make a mistake?
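Both questions in the thread are small enough to check directly with exact integer arithmetic. The following is a quick sketch in Python, not part of the original thread:

```python
from math import factorial

# Remainder of 1! + 2! + ... + 100! divided by 13.  As noted in the
# replies, every term from 13! onward is divisible by 13, so only
# 1! through 12! actually contribute to the remainder.
r1 = sum(factorial(k) for k in range(1, 101)) % 13

# Remainder of 40! / (2**20 * 20!) divided by 8.  Pulling one factor of 2
# out of each even number in 40! gives 40! = 2**20 * 20! * (1*3*5*...*39),
# so the quotient is an exact odd integer; no "0/0" ever arises.
r2 = (factorial(40) // (2**20 * factorial(20))) % 8

print(r1, r2)  # -> 9 1
```

Forming the exact integer quotient first and reducing mod 8 afterwards sidesteps the 0/0 worry raised in the last post: the remainders of numerator and denominator cannot simply be divided.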