content
stringlengths
86
994k
meta
stringlengths
288
619
Gage R&R The Gage R&R is one of the main tools used in the measure phase of a Lean Six Sigma project. The Gage R&R is a very underrated tool. Many people seem to dismiss this step and sometimes skip the measure phase altogether as they do not feel their measurement system needs any work. This may be the case, but unless there is no possibility of error in the measurement system, it should always be verified. Before you go ahead and create experiments and analyze any data, you want to make sure that the data is measured properly and that you can actually trust the data. This is the tool you need to use to test the capability of your measurement system. It is important to note that this tool is used for variable measurements (time, distance, length, weight, temperature, etc.) and not attribute measurements (category, error type, ranking, etc.). To test the capability of an attribute measurement system, you need to perform a Kappa Study. The Gage R&R tests two main characteristics of a measurement system - its REPEATABILITY, and its REPRODUCIBILITY. Yes, that is what the R&R stands for. What is repeatability? When we are doing a measurement system analysis, we want to find out how accurately a measurer can repeat their measurement. The measurer is usually a person, but sometimes it could also be a machine or gage. Basically, the question we are asking here is - If I measure the height of a product today, and I come back and measure the same piece of product next week…will I get the same result? For most people, that is a strange question. Most people will say - "Of course you will get the same result!" This is not always the case; especially when you start getting down to millimeters and further. If I am using a measuring tape to measure the height of a product, and the reading I get is 100 mm. It is very possible that I may measure it again and get a reading of 101 mm due to measurement error. No, the product hasn't changed. It is most likely that you did something slightly different when measuring this time. Perhaps you are measuring under a different light. Or your eyes are measuring from a different height. There could be a number of factors. What is reproducibility? Reproducibility looks at how well a measurer can reproduce a measurement performed by another measurer. Again, the measurer here is usually a person, but could be a machine or gage. Basically, the question we are asking here is - If Tony measured the height of this product, will I get the same result as he did when I measure the same piece of product? Again, there could be multiple factors for why the measurements each person took may come out a little different. The goal is to have as little measurement error as possible and to be able to repeat and reproduce measurements accurately every time. The Gage R&R is set up like an experiment. Samples are randomly chosen for multiple operators to measure. Each operator will also measure each sample randomly multiple times. The results of each measurement are then run through a Gage R&R analysis (very easily done through statistical software like Minitab or Sigma XL). This then tells us how good our repeatability and reproducibility are. So what if my measurement is a millimeter off? What's the big deal? The Gage R&R will tell you whether it is a big deal or not. This is determined by comparing the measurement variation to two things. Firstly, it compares the measurement error to the tolerance in the specification. This is called the P/T ratio (or P to T ratio). It addresses what percent of the tolerance is taken up by measurement error. For example, if your tolerance is +/- 1 mm and your measurement variation takes up most of that tolerance, then you need to find ways to improve your measurement system. Otherwise, you may be rejecting good product or accepting bad product because of a bad measurement system. On the other hand if the tolerance on the product is +/- 30 mm and your measurement variation is just a small part of that, your measurement system should be fine. The second thing that the measurement variation is compared to is the variation in the product itself. This is called the %R&R. This addresses what percent of the total variation is taken up by measurement error. Similar to the P/T ratio, you want your measurement variation be small compared to the product variation. The lower the P/T ratio and the %R&R, the better. As a rule of thumb, if you have a P/T ratio lower than 30% and a %R&R lower than 28%, you can consider the measurement system to be usable. During the measure phase of your project, if you find that the measurement system used to measure your inputs or outputs is not up to the mark through the Gage R&R, you will have to investigate the causes for the measurement variation and eliminate them. You will then have to perform the Gage R&R again to see if the system has improved. Once your Gage R&R metrics look good, you can start using the measurement system to collect data for analysis. Leave "Gage R&R" and go back to "Lean Six Sigma Tools"
{"url":"http://www.miconleansixsigma.com/gage-rr.html","timestamp":"2014-04-19T22:06:14Z","content_type":null,"content_length":"11444","record_id":"<urn:uuid:1fc8e9d1-dbde-4b5b-bd03-cc251eb05321>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Indexicals - SICStus Prolog 10.35.10.1 Indexicals An indexical is a reactive functional rule of the form X in R, where R is a set valued range expression (see below). See Syntax of Indexicals for a grammar defining indexicals and range expressions. Indexicals can play one of two roles: propagating indexicals are used for constraint solving, and checking indexicals are used for entailment checking. When a propagating indexical fires, R is evaluated in the current store S, which is extended by adding the new domain constraint X::S(R) to the store, where S(R) denotes the value of R in S. When a checking indexical fires, it checks if D (X,S) is contained in S(R), and if so, the constraint corresponding to the indexical is detected as entailed. Send feedback on this subject.
{"url":"http://sicstus.sics.se/sicstus/docs/latest/html/sicstus.html/Indexicals.html","timestamp":"2014-04-19T06:10:01Z","content_type":null,"content_length":"3539","record_id":"<urn:uuid:1610e304-1aa4-42e3-a0ea-a94b87be8239>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Poset Pinball, Highest Forms, and $(n-2,2)$ Springer Varieties In this manuscript we study type $A$ nilpotent Hessenberg varieties equipped with a natural $S^1$-action using techniques introduced by Tymoczko, Harada-Tymoczko, and Bayegan-Harada, with a particular emphasis on a special class of nilpotent Springer varieties corresponding to the partition $\lambda= (n-2,2)$ for $n \geq 4$. First we define the adjacent-pair matrix corresponding to any filling of a Young diagram with $n$ boxes with the alphabet $\{1,2,\ldots,n\}$. Using the adjacent-pair matrix we make more explicit and also extend some statements concerning highest forms of linear operators in previous work of Tymoczko. Second, for a nilpotent operator $N$ and Hessenberg function $h$, we construct an explicit bijection between the $S^1$-fixed points of the nilpotent Hessenberg variety Hess$(N,h)$ and the set of $(h,\lambda_N)$-permissible fillings of the Young diagram $\lambda_N$. Third, we use poset pinball, the combinatorial game introduced by Harada and Tymoczko, to study the $S^1$-equivariant cohomology of type $A$ Springer varieties $\mathcal{S}_{(n-2,2)}$ associated to Young diagrams of shape $(n-2,2)$ for $n\geq 4$. Specifically, we use the dimension pair algorithm for Betti-acceptable pinball described by Bayegan and Harada to specify a subset of the equivariant Schubert classes in the $\mathbb{T}$-equivariant cohomology of the flag variety $\mathcal {F}\ell ags(\mathbb{C}^n) \cong GL(n,\mathbb{C})/B$ which maps to a module basis of $H^*_{S^1}(\mathcal{S}_{(n-2,2)})$ under the projection map $H^*_\mathbb{T}(\mathcal{F}\ell ags(\mathbb{C}^n)) \to H^*_{S^1}(\mathcal{S}_{(n-2,2)})$. Our poset pinball module basis is not poset-upper-triangular; this is the first concrete such example in the literature. A straightforward consequence of our proof is that there exists a simple and explicit change of basis which transforms our poset pinball basis to a poset-upper-triangular module basis for $H^*_{S^1}(\mathcal{S}_{(n-2,2)})$. We close with open questions for future work. Full Text:
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v19i1p56/0","timestamp":"2014-04-21T13:22:25Z","content_type":null,"content_length":"17729","record_id":"<urn:uuid:c71532e5-938b-44cd-8b5a-1473e8a028f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Comparisons You can use relational comparisons for the numeric values in your table cells. DT:Some Decision Table input output? 3 <5 5 >=3 8 3<_<9 You can use all the normal operators: <, >, <=, >=, !=. The ~= relational operator means approximately equal. It applies to floating point numbers. So if ~=3.0 then the 3.0 sets the precision so that 2.95 or 3.049 will both show equality. It is the number of decimals on the right side of the operator that determines the precision. So 2.5~=3 but 2.5 is not ~= 3.0. Regular Expression Comparisons You can match regular expressions by using the syntax =~/regex/. For example: check echo Bob =~/Bob/ check echo My name is Bob." =~/B.b/ The regular expression syntax is the Java standard.
{"url":"http://fitnesse.org/FitNesse.UserGuide.SliM.ValueComparisons","timestamp":"2014-04-19T07:45:50Z","content_type":null,"content_length":"4852","record_id":"<urn:uuid:62baa4ff-8e1f-4291-aea9-0cd922a326a0>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
The Equivalences of the Choi-Jamiolkowski Isomorphism (Part II) October 23rd, 2009 This is a continuation of this post. Please read that post to learn what the Choi-Jamiolkowski isomorphism is. In part 1, we learned about hermicity-preserving linear maps, positive maps, k-positive maps, and completely positive maps. Now let’s see what other types of linear maps have interesting equivalences through the Choi-Jamiolkowski isomorphism. Recall that the notation C[Φ] is used to represent the Choi matrix of the linear map Φ. 6. Entanglement Breaking Maps / Separable Quantum States An entanglement breaking map is defined as a completely positive map Φ with the property that (id[n] ⊗ Φ)(ρ) is a separable quantum state whenever ρ is a quantum state (i.e., a density operator). A separable quantum state σ is one that can be written in the form where {p[i]} forms a probability distribution (i.e., p[i] ≥ 0 for all i and the p[i]‘s sum to 1) and each σ[i] and τ[i] is a density operator. It turns out that the Choi-Jamiolkowski equivalence for entanglement-breaking maps is very natural — Φ is entanglement breaking if and only if C[Φ] is separable. Because it is known that determining whether or not a given state is separable is NP-HARD [1], it follows that determining whether or not a given linear map is entanglement breaking is also NP-HARD. Nonetheless, there are several nice characterizations of entanglement breaking maps. For example, Φ is entanglement breaking if and only if it can be written in the form where each operator A[i] has rank 1 (recall from Section 4 of the previous post that every completely positive map can be written in this form for some operators A[i] — the rank 1 condition is what makes the map entanglement breaking). For more properties of entanglement breaking maps, the interested reader is encouraged to read [2]. 7. k-Partially Entanglement Breaking Maps / Quantum States with Schmidt Number at Most k The natural generalization of entanglement breaking maps are k-partially entanglement breaking maps, which are completely positive maps Φ with the property that (id[n] ⊗ Φ)(ρ) always has Schmidt number [3] at most k for any density operator ρ. Recall that an operator has Schmidt number 1 if and only if it is separable, so the k = 1 case recovers exactly the entanglement breaking maps of Section 6. The set of operators associated with the k-partially entanglement breaking maps via the Choi-Jamiolkowski isomorphism are exactly what we would expect: the operators with Schmidt number no larger than k. In fact, pretty much all of the properties of entanglement breaking maps generalize in a completely natural way to this situation. For example, a map is k-partially entanglement breaking if and only if it can be written in the form where each operator A[i] has rank no greater than k. For more information about k-partially entanglement breaking maps, the interested reader is pointed to [4]. Additionally, there is an interesting geometric relationship between k-positive maps (see Section 5 of the previous post) and k-partially entanglement breaking maps that is explored in this note and in [5]. 8. Unital Maps / Operators with Left Partial Trace Equal to Identity A linear map Φ is said to be unital if it sends the identity operator to the identity operator — that is, if Φ(I[n]) = I[m]. It is a simple exercise in linear algebra to show that Φ is unital if and only if where Tr[1] denotes the partial trace over the first subsystem. In fact, it is not difficult to show that Tr[1](C[Φ]) always equals exactly Φ(I[n]). 9. Trace-Preserving Maps / Operators with Right Partial Trace Equal to Identity In quantum information theory, maps that are trace-preserving (i.e., maps Φ such that Tr(Φ(X)) = Tr(X) for every operator X ∈ M[n]) are of particular interest because quantum channels are modeled by completely positive trace-preserving maps (see Section 4 of the previous post to learn about completely positive maps). Well, some simple linear algebra shows that the map Φ is trace-preserving if and only if where Tr[2] denotes the partial trace over the second subsystem. The reason for the close relationship between this property and the property of Section 8 is that unital maps and trace-preserving maps are dual to each other in the Hilbert-Schmidt inner product. 10. Completely Co-Positive Maps / Positive Partial Transpose Operators A map Φ such that T○Φ is completely positive, where T represents the transpose map, is called a completely co-positive map. Thanks to Section 4 of the previous post, we know that Φ is completely co-positive if and only if the Choi matrix of T○Φ is positive semi-definite. Another way of saying this is that This condition says that the operator C[Φ] has positive partial transpose (or PPT), a property that is of great interest in quantum information theory because of its connection with the problem of determining whether or not a given quantum state is separable. In particular, any quantum state that is separable must have positive partial transpose (a condition that has become known as the Peres-Horodecki criterion). If n = 2 and m ≤ 3, then the converse is also true: any PPT state is necessarily separable [6]. It follows via our equivalences of Sections 4 and 6 that any entanglement breaking map is necessarily completely co-positive. Conversely, if n = 2 and m ≤ 3 then any map that is both completely positive and completely co-positive must be entanglement breaking. 11. Entanglement Binding Maps / Bound Entangled States A bound entangled state is a state that is entangled (i.e., not separable) yet can not be transformed via local operations and classical communication to a pure maximally entangled state. In other words, they are entangled but have zero distillable entanglement. Currently, the only states that are known to be bound entangled are states with positive partial transpose — it is an open question whether or not other such states exist. An entanglement binding map [7] is a completely positive map Φ such that (id[n] ⊗ Φ)(ρ) is bound entangled for any quantum state ρ. It turns out that a map is entanglement binding if and only if its Choi matrix C[Φ] is bound entangled. Thus, via the result of Section 10 we see that a map is entanglement binding if it is both completely positive and completely co-positive. It is currently unknown if there exist other entanglement binding maps. 1. L. Gurvits, Classical deterministic complexity of Edmonds’ Problem and quantum entanglement, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, 10-19 (2003). 2. M. Horodecki, P. W. Shor, M. B. Ruskai, General Entanglement Breaking Channels, Rev. Math. Phys 15, 629–641 (2003). arXiv:quant-ph/0302031v2 3. B. Terhal, P. Horodecki, A Schmidt number for density matrices, Phys. Rev. A Rapid Communications Vol. 61, 040301 (2000). arXiv:quant-ph/9911117v4 4. D. Chruscinski, A. Kossakowski, On partially entanglement breaking channels, Open Sys. Information Dyn. 13, 17–26 (2006). arXiv:quant-ph/0511244v1 5. L. Skowronek, E. Stormer, K. Zyczkowski, Cones of positive maps and their duality relations, J. Math. Phys. 50, 062106 (2009). arXiv:0902.4877v1 [quant-ph] 6. M. Horodecki, P. Horodecki, R. Horodecki, Separability of Mixed States: Necessary and Sufficient Conditions, Physics Letters A 223, 1–8 (1996). arXiv:quant-ph/9605038v2 7. P. Horodecki, M. Horodecki, R. Horodecki, Binding entanglement channels, J.Mod.Opt. 47, 347–354 (2000). arXiv:quant-ph/9904092v1 1. No comments yet. 1. October 23rd, 2009 at 09:24 | #1
{"url":"http://www.njohnston.ca/2009/10/the-equivalences-of-the-choi-jamiolkowski-isomorphism-part-ii/","timestamp":"2014-04-19T07:43:21Z","content_type":null,"content_length":"37154","record_id":"<urn:uuid:eaeed862-6659-4efc-900a-6f2695b3538f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics is the study of quantity, space, structure, and change. Mathematicians seek out patterns and formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proofs, which are arguments sufficient to convince other mathematicians of their validity... originated as the study of a ring generated by vector bundles over a topological space or scheme. In algebraic topology Algebraic topology is a branch of mathematics which uses tools from abstract algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence.Although algebraic topology... , it is an extraordinary cohomology theory known as topological K-theory In mathematics, topological K-theory is a branch of algebraic topology. It was founded to study vector bundles on general topological spaces, by means of ideas now recognised as K-theory that were introduced by Alexander Grothendieck... . In Algebra is the branch of mathematics concerning the study of the rules of operations and relations, and the constructions and concepts arising from them, including terms, polynomials, equations and algebraic structures... algebraic geometry Algebraic geometry is a branch of mathematics which combines techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry. It occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex... , it is referred to as algebraic K-theory In mathematics, algebraic K-theory is an important part of homological algebra concerned with defining and applying a sequenceof functors from rings to abelian groups, for all integers n.... . It also has some applications in operator algebra In functional analysis, an operator algebra is an algebra of continuous linear operators on a topological vector space with the multiplication given by the composition of mappings... s. It leads to the construction of families of In category theory, a branch of mathematics, a functor is a special type of mapping between categories. Functors can be thought of as homomorphisms between categories, or morphisms when in the category of small categories.... s, which contain useful but often hard-to-compute information. Physics is a natural science that involves the study of matter and its motion through spacetime, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves.Physics is one of the oldest academic... , K-theory and in particular twisted K-theory In mathematics, twisted K-theory is a variation on K-theory, a mathematical theory from the 1950s that spans algebraic topology, abstract algebra and operator theory.... have appeared in Type II string theory In theoretical physics, type II string theory is a unified term that includes both type IIA strings and type IIB strings. These account for two of the five consistent superstring theories in ten dimensions. Both theories have the maximal amount of supersymmetry — namely 32 supercharges... where it has been conjectured that they classify D-branes, Ramond–Ramond field strengths and also certain spinors on generalized complex manifolds. For details, see also K-theory (physics) In string theory, the K-theory classification refers to a conjectured application of K-theory to superstrings, to classify the allowed Ramond-Ramond field strengths as well as the charges of stable Early history The subject can be said to begin with Alexander Grothendieck Alexander Grothendieck is a mathematician and the central figure behind the creation of the modern theory of algebraic geometry. His research program vastly extended the scope of the field, incorporating major elements of commutative algebra, homological algebra, sheaf theory, and category theory... (1957), who used it to formulate his Grothendieck–Riemann–Roch theorem. It takes its name from the German "Klasse", meaning "class". Grothendieck needed to work with coherent In mathematics, a sheaf is a tool for systematically tracking locally defined data attached to the open sets of a topological space. The data can be restricted to smaller open sets, and the data assigned to an open set is equivalent to all collections of compatible data assigned to collections of... on an algebraic variety . Rather than working directly with the sheaves, he defined a group using (isomorphism classes of) sheaves as generators, subject to a relation that identifies any extension of two sheaves with their sum. The resulting group is called Grothendieck group In mathematics, the Grothendieck group construction in abstract algebra constructs an abelian group from a commutative monoid in the best possible way... In topology, by applying the same construction to vector bundle In mathematics, a vector bundle is a topological construction that makes precise the idea of a family of vector spaces parameterized by another space X : to every point x of the space X we associate a vector space V in such a way that these vector spaces fit together... Michael Atiyah Sir Michael Francis Atiyah, OM, FRS, FRSE is a British mathematician working in geometry.Atiyah grew up in Sudan and Egypt but spent most of his academic life in the United Kingdom at Oxford and Cambridge, and in the United States at the Institute for Advanced Study... Friedrich Hirzebruch Friedrich Ernst Peter Hirzebruch is a German mathematician, working in the fields of topology, complex manifolds and algebraic geometry, and a leading figure in his generation.-Life:He was born in Hamm, Westphalia... topological space Topological spaces are mathematical structures that allow the formal definition of concepts such as convergence, connectedness, and continuity. They appear in virtually every branch of modern mathematics and are a central unifying notion... Bott periodicity theorem In mathematics, the Bott periodicity theorem describes a periodicity in the homotopy groups of classical groups, discovered by , which proved to be of foundational significance for much further research, in particular in K-theory of stable complex vector bundles, as well as the stable homotopy... they made it the basis of an extraordinary cohomology theory. It played a major role in the second proof of the Index Theorem In differential geometry, the Atiyah–Singer index theorem, proved by , states that for an elliptic differential operator on a compact manifold, the analytical index is equal to the topological (circa 1962). Furthermore this approach led to a Noncommutative topology in mathematics is a term applied to the strictly C*-algebraic part of the noncommutative geometry program. The program has its origins in the Gel'fand duality between the topology of locally compact spaces and the algebraic structure of commutative C*-algebras.Several... Already in 1955, Jean-Pierre Serre Jean-Pierre Serre is a French mathematician. He has made contributions in the fields of algebraic geometry, number theory, and topology.-Early years:... had used the analogy of vector bundle In mathematics, a vector bundle is a topological construction that makes precise the idea of a family of vector spaces parameterized by another space X : to every point x of the space X we associate a vector space V in such a way that these vector spaces fit together... s with projective module In mathematics, particularly in abstract algebra and homological algebra, the concept of projective module over a ring R is a more flexible generalisation of the idea of a free module... s to formulate Serre's conjecture The Quillen–Suslin theorem, also known as Serre's problem or Serre's conjecture, is a theorem in commutative algebra about the relationship between free modules and projective modules over polynomial , which states that every finitely generated projective module over a polynomial ring In mathematics, especially in the field of abstract algebra, a polynomial ring is a ring formed from the set of polynomials in one or more variables with coefficients in another ring. Polynomial rings have influenced much of mathematics, from the Hilbert basis theorem, to the construction of... In mathematics, a free module is a free object in a category of modules. Given a set S, a free module on S is a free module with basis S.Every vector space is free, and the free vector space on a set is a special case of a free module on a set.-Definition:... ; this assertion is correct, but was not settled until 20 years later. ( Swan's theorem In the mathematical fields of topology and K-theory, the Serre–Swan theorem, also called Swan's theorem, relates the geometric notion of vector bundles to the algebraic concept of projective modules and gives rise to a common intuition throughout mathematics: "projective modules over commutative... is another aspect of this analogy.) In 1959, Serre formed the Grothendieck group In mathematics, the Grothendieck group construction in abstract algebra constructs an abelian group from a commutative monoid in the best possible way... construction for rings, and used it to prove a weak form of the conjecture. This application was one of the beginnings of algebraic K-theory In mathematics, algebraic K-theory is an important part of homological algebra concerned with defining and applying a sequenceof functors from rings to abelian groups, for all integers n.... The other historical origin of algebraic K-theory was the work of Whitehead and others on what later became known as Whitehead torsion In geometric topology, the obstruction to a homotopy equivalence f\colon X \to Y of finite CW-complexes being a simple homotopy equivalence is its Whitehead torsion \tau, which is an element in the Whitehead group Wh. These are named after the mathematician J. H. C... There followed a period in which there were various partial definitions of higher K-theory functors . Finally, two useful and equivalent definitions were given by Daniel Quillen using homotopy theory in 1969 and 1972. A variant was also given by Friedhelm Waldhausen Friedhelm Waldhausen is a German mathematician known for his work in algebraic topology.-Academic life:... in order to study the algebraic K-theory of spaces, which is related to the study of pseudo-isotopies. Much modern research on higher K-theory is related to algebraic geometry and the study of motivic cohomology Motivic cohomology is a cohomological theory in mathematics, the existence of which was first conjectured by Alexander Grothendieck during the 1960s. At that time, it was conceived as a theory constructed on the basis of the so-called standard conjectures on algebraic cycles, in algebraic geometry... The corresponding constructions involving an auxiliary quadratic form In mathematics, a quadratic form is a homogeneous polynomial of degree two in a number of variables. For example,4x^2 + 2xy - 3y^2\,\!is a quadratic form in the variables x and y.... received the general name Algebraic L-theory is the K-theory of quadratic forms; the term was coined by C. T. C. Wall,with L being used as the letter after K. Algebraic L-theory, also known as 'hermitian K-theory',is important in surgery theory.-Definition:... . It is a major tool of surgery theory In mathematics, specifically in geometric topology, surgery theory is a collection of techniques used to produce one manifold from another in a 'controlled' way, introduced by . Surgery refers to cutting out parts of the manifold and replacing it with a part of another manifold, matching up along... string theory String theory is an active research framework in particle physics that attempts to reconcile quantum mechanics and general relativity. It is a contender for a theory of everything , a manner of describing the known fundamental forces and matter in a mathematically complete system... the K-theory classification of Ramond–Ramond field strengths and the charges of stable D-branes was first proposed in 1997. See also • Algebraic K-theory In mathematics, algebraic K-theory is an important part of homological algebra concerned with defining and applying a sequenceof functors from rings to abelian groups, for all integers n.... • Topological K-theory In mathematics, topological K-theory is a branch of algebraic topology. It was founded to study vector bundles on general topological spaces, by means of ideas now recognised as K-theory that were introduced by Alexander Grothendieck... • K-theory (physics) In string theory, the K-theory classification refers to a conjectured application of K-theory to superstrings, to classify the allowed Ramond-Ramond field strengths as well as the charges of stable D-branes.... • Operator K-theory In mathematics, operator K-theory is a variant of K-theory on the category of Banach algebras .... • KK-theory In mathematics, KK-theory is a common generalization both of K-homology and K-theory , as an additive bivariant functor on separable C*-algebras... • L-theory Algebraic L-theory is the K-theory of quadratic forms; the term was coined by C. T. C. Wall,with L being used as the letter after K. Algebraic L-theory, also known as 'hermitian K-theory',is important in surgery theory.-Definition:... • Bott periodicity External links
{"url":"http://www.absoluteastronomy.com/topics/K-theory","timestamp":"2014-04-21T09:54:05Z","content_type":null,"content_length":"38094","record_id":"<urn:uuid:15ed4220-e6bf-4815-b4f2-4c0f5258055a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Help with inequality Replies: 1 Last Post: Dec 21, 2012 9:30 AM Messages: [ Previous | Next ] Help with inequality Posted: Dec 20, 2012 7:43 AM Can someone help me with the steps involved to solve the following inequality?: 2/(x-1) >= -1 The method I attempted was to solve it the same as though it were an equation but it doesn't seem to give the correct answer. Thanks.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2421721","timestamp":"2014-04-19T02:08:00Z","content_type":null,"content_length":"17261","record_id":"<urn:uuid:17987ee6-474f-4fff-9233-9f9f2cf4c66f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Logistic Difference Equation The logistic difference equation (or logistic map) , a nonlinear first-order recurrence relation, is a time-discrete analogue of the logistic differential equation, . Like its continuous counterpart, it can be used to model the growth or decay of a process, population, or financial instrument. Depending on the value of the constant , the solution of the difference equation can approach an equilibrium, move periodically through some cycle of values, or behave in a chaotic, unpredictable A visualization of solutions to the logistic difference equation can be obtained using what can be called a "stairstep diagram." A green line intersects back and forth between the graphs of and , beginning at the point . Every intersection of the green line and the red parabola represents a value of . It is easy to see if the solution converges to a single point, oscillates in "square-like" fashion, or is completely unpredictable. The equilibrium values for determine how or whether the long-term activity of a solution is predictable. If and , then , and the equilibrium solutions are or . Further investigation can be done to show that if , then is an asymptotically stable value. For , solutions converge instead to . For , solutions do not converge to a fixed point, except when exactly for some , in which case for all . Snapshot 1: the solution converges to a single value Snapshot 3: where , the solution oscillates with period 2 (a "two-cycle") For larger values of , the long-term activity is highly chaotic, though there may be certain values of with oscillations of period 4, 8, 16, 32, … . In this chaotic region (), there is a high sensitivity to the value of . Even varying a small amount changes most terms drastically; the solution becomes unpredictable. Snapshot 5: a solution that is chaotic and ultimately unpredictable; it can, however, be modeled as a simpler, three-cycle approximation Snapshots 2, 4, and 6: the stairstep diagrams of snapshots 1, 3, and 5, respectively
{"url":"http://demonstrations.wolfram.com/TheLogisticDifferenceEquation/","timestamp":"2014-04-20T05:46:49Z","content_type":null,"content_length":"48099","record_id":"<urn:uuid:0e52e671-57bd-49cd-9aad-0afe52bfcb5c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Q: Differentiation of a neural network function [Date Index] [Thread Index] [Author Index] Q: Differentiation of a neural network function • To: mathgroup@smc.vnet.net • Subject: [mg10865] Q: Differentiation of a neural network function • From: livantes@cs.city.ac.uk (Andreas Hadjiprocopis) • Date: Wed, 11 Feb 1998 18:32:34 -0500 • Organization: Posted via ULCC Internet Services Can anybody help me on this:? I would like to obtain an expression for the derivative of the function implemented by a fully connected feed forward neural network. a feed forward neural network is often viewed as a black box of n real inputs (X) and m real outputs (Y). *** FOR MY PROBLEM ASSUME m = 1 *** Hence, this black box represents a mapping from R^n to R^m. This mapping depends on a number of parameters, called the weights (like for example the coefficients of a polyonym) Training is the process of finding a particular value for each of these parameters - the weights - so that a specified mapping can be After training, the black box represents the specific mapping which is of the form: Phi : R^n -> R | y = W_(N+1) x f(W_(N) x f(W_(N-1) x f( ... x f(W_2 x f(W_1 x X)))))))) where: W1 ... W_(N+1) are matrices containing the parameters (weights) which are real numbers f(a) = 1 / (1 + exp(-a)) and f(A), A is a matrix of a_ij, is the new matrix AA whose each element aa_ij is equal to f(a_ij). and `x' is the cross product. If all the stuff regarding the neural network are a bit unclear please ignore them and just tell me how to obtain an expression for the derivative of a function of your choice, i will try and work from for example you might tell me how to obtain the derivative of f(x) = a / (b + c*exp(-x)) when a, b and c are general parameters (not instantiated to a specific value). Also, if mathematica can not do that, could you suggest some other method to do it, other than by hand? thank you very much, (please use email if possible) Andreas Hadjiprocopis livantes@soi.city.ac.uk Computer Science Department http://www.soi.city.ac.uk/~livantes/home.html Room A528, City University +44 71 477 8551 (telephone) London, UK, EC1V 0HB +44 71 477 8587 (fax)
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Feb/msg00144.html","timestamp":"2014-04-17T18:45:33Z","content_type":null,"content_length":"36193","record_id":"<urn:uuid:c5a5fffe-2365-4d49-bd08-94420c9e7058>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Date: 10/13/2001 at 21:02:46 From: Bruce Chiarelli Subject: Many dimensions I was reading a book on analytic geometry, and it said that mathematicians are currently (this book was old, so i don't know if that's true) working on a four-dimensional hypercube. I did some research before I came here for help, but all I got was that the fourth dimension was time. Have there been any recent developments in this topic? -Bruce Chiarelli Date: 10/14/2001 at 04:20:08 From: Doctor Jeremiah Subject: Re: Many dimensions Hi Bruce, The fourth dimension in physics is time. The reason why it counts as a fourth dimension is that any event can be nailed down with its location in the universe (three dimensions) and the time it happened (another dimension). However, there could be more than three dimensions of space that we just can't see. If we prove that there are four dimensions, then time will automatically become the fifth. Strangely enough (if I remember right), the unified theory of physics (which makes quantum physics and gravitation into one set of equations) requires at least 22 dimensions of space, plus time on top of that! I have a book on the fourth dimension of geometry where they talk about what a fourth-dimensional cube would look like. Here is one way to approach it: To move from zero dimensions (a point) to one dimension (a line), you double the number of points and create a line for each original point. 1. + 2. + ---> + 3. +-------------+ To move from one dimension (a line) to two dimensions (a face), you double the number of points and lines and create a line for each original point and create a face for each original line. 1. +-------------+ 2. +-------------+ | | V V 3. +-------------+ | | | | | | To move from two dimensions (a face) to three dimensions (a cube), you double the number of points and lines and faces and create a line for each original point and create a face for each original line and create a cube for each original face. So to create higher dimension objects you can use a table like this, where P(-1) means the number of points in the dimension above and L(2) means the number of lines in the second dimension: dimens points lines faces cubes 4Dcubes 1 2P(-1) P(-1) 0 0 0 2 2P(-1) P(-1)+2L(-1) L(-1) 0 0 3 2P(-1) P(-1)+2L(-1) L(-1)+2F(-1) F(-1) 0 4 2P(-1) P(-1)+2L(-1) L(-1)+2F(-1) F(-1)+2C(-1) C(-1) Or an an arbitrary recurrance relation: dimens points lines faces cubes 4Dcubes n 2P(n-1) P(n-1)+2L(n-1) L(n-1)+2F(n-1) F(n-1)+2C(n-1) C(n-1) So if n=3 (3D space) then: dimens points lines faces cubes 4Dcubes 3 2P(2) P(2)+2L(2) L(2)+2F(2) F(2)+2C(2) C(2) Now P92) is the number of points in a 2D square (4), L(2) is the number of lines in a 2D square (4), and F(2) is the number of faces in a 2D square (1), and C(2) is the number of cubes in a 2D square (0), so: dimens points line faces cubes 4Dcubes 3 2(4)=8 4+2(4)=12 4+2(1)=6 1+2(0)=1 1 And for a 4D "cube" you would have: dimens points lines faces cubes 4Dcubes 4 2P(3) P(3)+2L(3) L(3)+2F(3) F(3)+2C(3) C(3) 4 2(8) 8+2(12) 12+2(6) 6+2(1) 1 4 16 8+24 12+12 6+2 1 So according to that reasoning, a 4D "cube" would have 16 points, 32 lines, 24 faces, and 8 cubes. Don't try to draw this at home! You can imagine this in three dimensions (without right angles) as a cube inside a larger cube where each line of the larger cube is connected by a face to its corresponding line of the smaller cube. Of course that's not what it looks like in four dimensions. If you wonder why, think about how you would imagine a 3D cube in a 2D world (a square inside a square where each point on the larger square is connected by a line to the corresponding point on the smaller As for proving that a spatial fourth dimension exists in the real world, I have no idea. But it doesn't matter because we would never know even if it did. Imagine you are a 2D person. on a 2D plane in space. A cube moves down through your plane, but you don't know about "up" and "down." You don't see it as a 3D object; you see a 2D cross- section of it. In the same way, if a 4D cube were to intersect our 3D space, it would look like a 3D object and we would have no idea that it was a 4D object except that it just appeared for no reason and when it moved out of the other side of our 3D space, it would just disappear for no reason. So if something just appears and disappears with no apparent explanation, then it might be a higher-dimensional object intersecting with our 3D universe! Does that answer your question? If not, please write back. - Doctor Jeremiah, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/55363.html","timestamp":"2014-04-17T21:32:06Z","content_type":null,"content_length":"10169","record_id":"<urn:uuid:dc850db7-ad63-4843-b0be-f1b1ee7163c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenwich, CT Statistics Tutor Find a Greenwich, CT Statistics Tutor ...I explored the area around Beijing and also visited Shanghai, Hangzhou, and Xi'an. One of the highlights of my trip was hiking to the top of HuaShan, a mountain near Xi'an, at night and waiting to see the sun rise over the east peak. It was beautiful! 36 Subjects: including statistics, Spanish, reading, writing ...Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal. I also make myself available by phone and e-mail outside of lessons--My goal is for you to succeed on your tests. My expertise is in basic and advanced math: algebra 1/2, trigonometry, geometry, precalculus/analysis, calculus (AB/BC), and statistics. 10 Subjects: including statistics, calculus, physics, geometry ...I prefer to meet in Manhattan, anywhere between City College and NYU. In addition to the subjects listed elsewhere, I am also able to tutor: Proofs or Mathematical Reasoning, Set Theory, Modern Analysis, Modern Algebra, Mathematical Logic/Advanced Logic/Computability/Modal Logic., and Game Theory. Besides math I can also tutor programming in the Python programming language. 32 Subjects: including statistics, physics, calculus, geometry ...With each new student, I begin by carefully observing how the student naturally proceeds as a test-taker. I then tailor my instruction toward my student's strengths, making sure that each student feels challenged yet motivated. I have a strong track record of success: I've tutored some students toward perfect SAT 800's/ACT 36's and others out of trouble-zones into respectability. 14 Subjects: including statistics, writing, GRE, algebra 1 ...I have a BA in statistics from Harvard and will be starting nursing school shortly. As someone who is not a typical "math person", I can relate to those struggling to understand material - I get it. I am willing to travel to my students, but also can see my students at my home. 18 Subjects: including statistics, chemistry, geometry, biology
{"url":"http://www.purplemath.com/greenwich_ct_statistics_tutors.php","timestamp":"2014-04-18T21:19:53Z","content_type":null,"content_length":"24377","record_id":"<urn:uuid:08f227cf-d18b-4af5-99b9-2a77b2b0ad6e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Binomial expansion problem February 22nd 2010, 09:41 AM [SOLVED] Binomial expansion problem I'm having problem with my revision sheets - Any help would be much appreciated... (i) Write down the general term in the power series expansion of ((x^2) - (2/x))^18 WHAT IS THE GENERAL TERM AND HOW WOULD I GO ABOUT THIS? Hence find the term which is independant of x I DONT KNOW WHAT THIS MEANS (ii)Find the first four terms in the expansion of (1+x)^16 in ascending powers of x. hence expand (1 + z + z^2) in ascending powers of z up to and including the term in z^3 DO YOU FACTORISE THIS AND RAISE BOTH PARENTHESIS TO THE POWER 16?? February 22nd 2010, 11:05 AM Hello dojo I'm having problem with my revision sheets - Any help would be much appreciated... (i) Write down the general term in the power series expansion of ((x^2) - (2/x))^18 WHAT IS THE GENERAL TERM AND HOW WOULD I GO ABOUT THIS? Hence find the term which is independant of x I DONT KNOW WHAT THIS MEANS (ii)Find the first four terms in the expansion of (1+x)^16 in ascending powers of x. hence expand (1 + z + z^2) in ascending powers of z up to and including the term in z^3 DO YOU FACTORISE THIS AND RAISE BOTH PARENTHESIS TO THE POWER 16?? (i) Take out a factor $x^2$, whence it becomes $\big(x^2\big)^{18}$: $\left(x^2-\frac2x\right)^{18} = x^{36}\left(1-\frac{2}{x^3}\right)^{18}$ Now let $y = -\frac{2}{x^3}$, and expand $(1+y)^{18}$, the general term of which is $\binom{18}{r}y^r$, to get the general term: The term that's independent of $x$ is the one where the power of $x$ is zero. So put $r = 12$ ... $(1+x)^{16} = 1 + 16x+ \binom{16}{2}x^2 + \binom{16}{3}x^3+ ...$ Now put $x = z+z^2$: $(1+z+z^2)^{16} = 1 + 16(z+z^2)+ \binom{16}{2}(z+z^2)^2 + \binom{16}{3}(z+z^2)^3+ ...$ = ... Expand each $(z+z^2)$ term up to the term in $z^3$; collect like terms, and you're done. February 22nd 2010, 12:18 PM Hello, dojo! (i) Find the general term in the expansion of: . $\left(x^2 - \frac{2}{x}\right)^{18}$ Assuming that you know the Binomial Theorem, this is easy: . . ${18\choose n}\left(x^2\right)^n\left(\text{-}\frac{2}{x}\right)^{18-n}\;\;\text{ for }n \,=\,18,17,16\,\hdots 0$ Hence, find the term which is independant of $x$. ${\color{blue}\text{It means "Find the term which has no }x."}$ Somewhere in the middle, we have: . ${18\choose6}\left(x^2\right)^6\left(\text{-}\frac{2}{x}\right)^{12}$ . . $=\; \frac{18!}{6!\,12!}\left(x^{12}\right)\left(\frac{ (\text{-}2)^{12}}{x^{12}}\right) \;=\;(18,\!564)\left(x^{12}\right)\left(\frac{4,\! 096}{x^{12}}\right) \;=\; 76,\!038,\!144$ February 22nd 2010, 12:41 PM Thank you both for the insight very much appreciated!
{"url":"http://mathhelpforum.com/algebra/130133-solved-binomial-expansion-problem-print.html","timestamp":"2014-04-17T02:05:34Z","content_type":null,"content_length":"12331","record_id":"<urn:uuid:9f718d6c-933f-4eb3-b4c6-5dc4f518dda1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: On the following diagram what part of a wave is shown by letter B? wavelength trough crest fetch • one year ago • one year ago Best Response You've already chosen the best response. pretty sure it's trough right? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/510bfd8ee4b09cf125bc3f8c","timestamp":"2014-04-18T03:49:11Z","content_type":null,"content_length":"28580","record_id":"<urn:uuid:674e3185-de70-46e1-8dc1-a901ef7f930f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Homogenous System Trouble September 18th 2009, 11:25 AM Homo Trouble [UNSOLVED] There are 2 questions I can't simply get the answer to. The fact that in the first problem the equations are equivalent really trouble me. How do we solve these? I appreciate every help in advance! :) September 19th 2009, 09:35 AM NEW UPDATE FOR PROBLEM 1: I eliminate an equation because it's linearly dependent but then I'm left with one equation 4 variables. How do I go on from here? September 20th 2009, 07:15 AM Same problem I have a very similair problem! Only, in my problem there is only one equation anyways! Can anyone please help out? I have no idea how to do it! (Wondering) September 20th 2009, 08:41 AM
{"url":"http://mathhelpforum.com/algebra/102985-homogenous-system-trouble-print.html","timestamp":"2014-04-24T16:22:44Z","content_type":null,"content_length":"5289","record_id":"<urn:uuid:bc9438a5-81eb-4e08-9cbf-33009af76a86>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring binomials and polynomials This resource has been contributed by Winpossible, and can also be accessed on their website by clicking here - Getting started In this mini-lesson you'll learn how to factor the binomials and polynomials. It is an important step in solving problems in a good number of algebraic applications. In factoring polynomials, we determine all the terms that were multiplied together to get the given polynomial. We then try to factor each of the terms we found in the first step. This continues until we can’t factor anymore. For Example, here is the complete factorization of the polynomial x^4 – 16 = (x^2 + 4)(x + 2)(x – 2) This might seem complicated here as it is written in text, but it will be easy to follow once you hear the instructor explain it in the video here. This FREE mini-lesson is a part of Winpossible's online course that covers all topics within Algebra I. Click on the video below to go through it. If you like it, you can buy our online course in Algebra I by clicking here. This resource has been contributed by Winpossible, and can also be accessed on their website by clicking here - Factoring an expression of exponents with the same base In this mini-lesson you'll learn how to factor an expression of exponents with the same base. Generally speaking, for factoring exponents with the same base, we need to take a common factor out of the expression, and this factor is the base raised to the lowest exponent. For example, x is the common factor in the expression x + x , and hence the expression can be rewritten as x (1 + x This FREE mini-lesson is a part of Winpossible's online course that covers all topics within Algebra I. Click on the video below to go through it. If you like it, you can buy our online course in Algebra I by clicking here. This resource has been contributed by Winpossible, and can also be accessed on their website by clicking here - Factoring a quadratic into binomials This mini-lesson shows you how to factor a quadratic into binomials. As is the case in Algebra many times, the overview provided here in text might seem a little complicated, but don't worry -- it will be easy to follow once you hear the instructor explain it in the video provided. Some quadratics can be factored into two identical binomials. Such quadratics are called perfect square trinomials. As quadratic expression is the product of two binomials, factoring a quadratic means breaking the quadratic back into its binomial parts. Here factoring is done using the rule of LIOF (FOIL in reverse). A couple of general rules to keep in mind: • The factoring of x^2 + (a + b)x + ab will result into (x + a) (x + b). For example, the two factors of x^2 + 5x + 6 are (x + 2)(x + 3) • Another common type of algebraic factoring is called the difference of two squares: (x^2 – c^2) = (x + c) (x – c). For example: factors of x^2 – 4 are (x + 2) (x – 2) This FREE mini-lesson is a part of Winpossible's online course that covers all topics within Algebra I. Click on the video below to go through it. If you like it, you can buy our online course in Algebra I by clicking here. This resource has been contributed by Winpossible, and can also be accessed on their website by clicking here - Factoring a quadratic using the perfect square method In this mini-lesson you'll learn how to factor a quadratic using the perfect square method. In such cases, not only can the quadratic can be factored into two expressions, but the expressions are the same. If we try to explain it in text, here is the general rule -- if we have a quadratic equation in which first and last term are both perfect squares and middle term is two times the square root of the first and last terms multiplied, it simplifies the quadratic to a binomial product or just one binomial raised to the second power. Reading this explanation in text is confusing to many of us -- just click on the video of our instructor explaining it and you'll understand the concept much more easily. Note that perfect square trinomials are often expressions of one of the following forms: • (x^2 + 2ax + a^2), which is the same as (x + a)^2 • (x^2 – 2ax + a^2), which is the same as (x – a)^2 This FREE mini-lesson is a part of Winpossible's online course that covers all topics within Algebra I. Click on the video below to go through it. If you like it, you can buy our online course in Algebra I by clicking here. This resource has been contributed by Winpossible, and can also be accessed on their website by clicking here - Factoring 3rd degree polynomial In this mini-lesson you'll learn with the help of several examples how to factor 3 degree polynomial into 2 degree polynomial and 1 degree polynomial factor. As you know, if you write a polynomial as the product of two or more polynomials, you have factored it. It is fairly common to come across certain interesting forms of third degree polynomials, and here are a few rules to keep in mind in factoring them: RULE 1: a + b = (a + b) (a – ab + b RULE 2: a – b = (a – b)(a + ab + b RULE 3: (x + y + z)(x + y + z – xy – yz – zx) = x + y + z – 3xyz. This FREE mini-lesson is a part of Winpossible's online course that covers all topics within Algebra I. Click on the video below to go through it. If you like it, you can buy our online course in Algebra I by clicking here.
{"url":"http://www.curriki.org/xwiki/bin/view/Coll_wincurriki/Factoringbinomialsandpolynomials","timestamp":"2014-04-20T08:15:05Z","content_type":null,"content_length":"122541","record_id":"<urn:uuid:cad638d7-4b3b-4f1e-b968-4afe2406e6d8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Hickory Creek, TX Math Tutor Find a Hickory Creek, TX Math Tutor ...I directed a small church choir and gave some voice lessons. I worked as a substitute teacher many times for a middle school study skills class. In addition I am very organized in my teaching and studying. 15 Subjects: including algebra 2, algebra 1, geometry, prealgebra I have a B.S. in biochemistry with a double minor in biology and dance, and I also have some graduate coursework in biomedical research. I taught ballet for four years through college and danced for 17 years. When I teach or tutor I focus on discovering a student's specific learning pattern; this made me an excellent ballet teacher and a great tutor. 31 Subjects: including ACT Math, GED, SAT math, geometry ...I have been a teacher and coach for 13 years and I also hold certifications in physical education, health education, and special education. I taught special education for 4 years, and 2 of those have have been specifically in Math. I believe this experience helps me reach all types of students from advanced learners to learners with disabilities. 3 Subjects: including algebra 1, geometry, prealgebra ...MAPS analyzes a passage quickly yielding its organization and logic as well as the function of each of its parts. Next, teaching math for standardized tests can easily become bogged down by a mass of tricks students find difficult to remember on test day. I don't do that! 5 Subjects: including SAT math, GRE, GMAT, SAT reading ...I hold a Master's Degree in Education with emphasis on instruction in math and science for grades 4th through 8th. I have taken courses in pre-algebra, algebra I and II, Matrix Algebra, Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, Differential Equations. I was a tutor in college for students that needed help in math. 11 Subjects: including algebra 1, algebra 2, American history, geometry Related Hickory Creek, TX Tutors Hickory Creek, TX Accounting Tutors Hickory Creek, TX ACT Tutors Hickory Creek, TX Algebra Tutors Hickory Creek, TX Algebra 2 Tutors Hickory Creek, TX Calculus Tutors Hickory Creek, TX Geometry Tutors Hickory Creek, TX Math Tutors Hickory Creek, TX Prealgebra Tutors Hickory Creek, TX Precalculus Tutors Hickory Creek, TX SAT Tutors Hickory Creek, TX SAT Math Tutors Hickory Creek, TX Science Tutors Hickory Creek, TX Statistics Tutors Hickory Creek, TX Trigonometry Tutors Nearby Cities With Math Tutor Argyle, TX Math Tutors Bartonville, TX Math Tutors Copper Canyon, TX Math Tutors Corinth, TX Math Tutors Corral City, TX Math Tutors Cross Roads, TX Math Tutors Double Oak, TX Math Tutors Highland Village, TX Math Tutors Lake Dallas Math Tutors Lakewood Village, TX Math Tutors Little Elm Math Tutors Oak Point, TX Math Tutors Roanoke, TX Math Tutors Shady Shores, TX Math Tutors Westlake, TX Math Tutors
{"url":"http://www.purplemath.com/Hickory_Creek_TX_Math_tutors.php","timestamp":"2014-04-21T07:09:00Z","content_type":null,"content_length":"24184","record_id":"<urn:uuid:847c4cfe-5140-43e5-b698-b9b93f0b6292>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Mobius Glass @ Dementia Mobius has finally landed!! Nanos, Micros, Ions, Stratos.. Matrix percs, Retti percs. The craftmanship is absolutely Amazing! hey i was interested in buying one of the gridded mobius matrix dewarrs if possible My names Bret. I am very intersted in a mobius stemless with a matrix perc tube. Do you have any in stock? Thank you Bret lutz What do you have in stock for mobius bubblers? Also, do you ship…im in NY Hey, I was trying to get the mobius matrix stemless. I was wondering if you had it in stock and if you shipped to Maryland… would LOVE to get my hands on a matrix perc! let me know!!! It would be great if i could get a matrix or reti bubblers. Hey I was just wondering if you have any strata mobius’s with matrix percs still available and if they are at your state street store. Also, how much would one of those be? I live in the Columbus, Ohio area and am very interested in purchasing a mobious piece with the matrix perc. Please let me know what steps need to be taken to make this happen
{"url":"http://dementiagallery.com/mobius-glass-dementia/","timestamp":"2014-04-18T21:43:41Z","content_type":null,"content_length":"24361","record_id":"<urn:uuid:6f5e855c-c2d1-4992-80ab-432413f0e14b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
A Short Account of the History of Mathematics Additional Information • Year Published: 1908 • Language: English • Country of Origin: England • Source: Ball, W.W.R. (1908). A Short Account of the History of Mathematics. London, New York: Macmillan. • Readability: □ Flesch–Kincaid Level: 12.0 • Word Count: 1,010 Ball, W. (1908). Isaac Barrow. A Short Account of the History of Mathematics (Lit2Go Edition). Retrieved April 18, 2014, from http://etc.usf.edu/lit2go/218/ Ball, W.W. Rouse. "Isaac Barrow." A Short Account of the History of Mathematics. Lit2Go Edition. 1908. Web. <http://etc.usf.edu/lit2go/218/a-short-account-of-the-history-of-mathematics/5525/ isaac-barrow/>. April 18, 2014. W.W. Rouse Ball, "Isaac Barrow," A Short Account of the History of Mathematics, Lit2Go Edition, (1908), accessed April 18, 2014, http://etc.usf.edu/lit2go/218/ Browse Happy and update your internet browser today! Isaac Barrow was born in London in 1630, and died at Cambridge in 1677. He went to school first at Charterhouse (where he was so troublesome that his father was heard to pray that if it pleased God to take any of his children he could best spare Isaac), and subsequently to Felstead. He completed his education at Trinity College, Cambridge; after taking his degree in 1648, he was elected to a fellowship in 1649; he then resided for a few years in college, but in 1655 he was driven out by the persecution of the Independents. He spent the next four years in the East of Europe, and after many adventures returned to England in 1659. He was ordained the next year, and appointed to the professorship of Greek at Cambridge. In 1662 he was made professor of geometry at Gresham College, and in 1663 was selected as the first occupier of the Lucasian chair at Cambridge. He resigned the latter to his pupil Newton in 1669, whose superior abilities he recognized and frankly acknowledged. For the remainder of his life he devoted himself to the study of divinity. He was appointed master of Trinity College in 1672, and held the post until his death. He is described as "low in stature, lean, and of a pale complexion," slovenly in his dress, and an inveterate smoker. He was noted for his strength and courage, and once when travelling in the East he saved the ship by his own prowess from capture by pirates. A ready and caustic wit made him a favourite of Charles II., and induced the courtiers to respect even if they did not appreciate him. He wrote with a sustained and somewhat stately eloquence, and with his blameless life and scrupulous conscientiousness was an impressive personage of the time. His earliest work was a complete edition of the Elements of Euclid, which he issued in Latin in 1655, and in English in 1660; in 1657 he published an edition of the Data. His lectures, delivered in 1664, 1665, and 1666, were published in 1683 under the title Lectiones Mathematicae; these are mostly on the metaphysical basis for mathematical truths. His lectures for 1667 were published in the same year, and suggest the analysis by which Archimedes was led to his chief results. In 1669 he issued his Lectiones Opticae et Geometricae. It is said in the preface that Newton revised and corrected these lectures, adding matter of his own, but it seems probable from Newton's remarks in the fluxional controversy that the additions were confined to the parts which dealt with optics. This, which is his most important work in mathematics, was republished with a few minor alterations in 1674. In 1675 he published an edition with numerous comments of the first four books of the Conics of Apollonius, and of the extant works of Archimedes and Theodosius. In the optical lectures many problems connected with the reflexion and refraction of light are treated with ingenuity. The geometrical focus of a point seen by reflexion or refraction is defined; and it is explained that the image of an object is the locus of the geometrical foci of every point on it. Barrow also worked out a few of the easier properties of thin lenses, and considerably simplified the Cartesian explanation of the rainbow. The geometrical lectures contain some new ways of determining the areas and tangents of curves. The most celebrated of these is the method given for the determination of tangents to curves, and this is sufficiently important to require a detailed notice, because it illustrates the way in which Barrow, Hudde and Sluze were working on the lines suggested by Fermat towards the methods of the differential calculus. Fermat had observed that the tangent at a point P on a curve was determined if one other point besides P on it were known; hence, if the length of the subtangent MT could be found (thus determining the point T), then the line TP would be the required tangent. Now Barrow remarked that if the abscissa and ordinate at a point Q adjacent to P were drawn, he got a small triangle PQR (which he called the differential triangle, because its sides PR and PQ were the differences of the abscissae and ordinates of P and Q), so that TM : MP = QR : RP. To find QR : RP he supposed that x, y were the co-ordinates of P, and x - e, y - a those of Q (Barrow actually used p for x and m for y, but I alter these to agree with modern practice). Substituting the co-ordinates of Q in the equation of the curve, and neglecting the squares and higher powers of e and a as compared with their first powers, he obtained e : a. The ratio a/e was subsequently (in accordance with a suggestion made by Sluze) termed the angular coefficient of the tangent at the point. Barrow applied this method to the curves (i) x² (x² + y²) = r²y²;(ii) x³ + y³ = r³; (iii) x³ + y³ = rxy, called la galande; (iv) y = (r - x) tan πx/2r, the quadratrix; and (v) y = r tan πx/2r. It will be sufficient here if I take as an illustration the simpler case of the parabola y² = px. Using the notation given above, we have for the point P, y² = px; and for the point Q, (y - a)² = p(x - e). Subtracting we get 2ay - a² = pe. But, if a be an infinitesimal quantity, a² must be infinitely smaller and therefore may be neglected when compared with the quantities 2ay and pe. Hence 2ay = pe, that is, e : a = 2y : p. Therefore TP : y = e : a = 2y : p. Hence TM = 2y²/p = 2x. This is exactly the procedure of the differential calculus, except that there we have a rule by which we can get the ratio a/e or dy/dx directly without the labour of going through a calculation similar to the above for every separate case.
{"url":"http://etc.usf.edu/lit2go/218/a-short-account-of-the-history-of-mathematics/5525/isaac-barrow/","timestamp":"2014-04-18T10:58:54Z","content_type":null,"content_length":"18053","record_id":"<urn:uuid:56b0fa4e-1e87-474b-bf67-6d3cd4eec312>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
prime factorization using base 1 I recently realized the power of base-1 numbers. They don't have the arbitrary range limitations of Perl's regular number representations while converting between the two is nearly trivial. Plus finding prime base-1 numbers is particularly compact code in Perl. And when the primality test fails you are also handed some factors! So base-1 numbers are perfect for finding prime factorizations! They aren't very space efficient, unfortunately (hey, no one's perfect). So factor1() returns the prime factorizations of base-1 numbers (as base-1 numbers). factor10() just converts a base-10 number into a base-1 number so factor1() can factor it and then Description: converts the returned list of base-1 factors into base 10 again. One line of test code is included that factors any base-10 numbers given on the command line. Now updated to be faster! #!/usr/bin/perl -w use strict; sub factor1 { return @_ if $_[0] !~ /^(..+?)\1+$/; return map { factor1($_) } ( "$1", $_[0] =~ s/$1/1/g, $_[0] )[0,-1]; sub factor10 { return map {length} factor1( 1x$_[0] ); print join $/, map { join " ", factor10($_) } @ARGV; Back to Snippets Section Log In^? Node Status^? node history Node Type: snippet [id://52469] How do I use this? | Other CB clients Other Users^? Others having an uproarious good time at the Monastery: (8) As of 2014-04-24 11:26 GMT Find Nodes^? Voting Booth^? April first is: Results (565 votes), past polls
{"url":"http://www.perlmonks.org/index.pl?node_id=52469","timestamp":"2014-04-24T11:28:43Z","content_type":null,"content_length":"21029","record_id":"<urn:uuid:6b7b36f9-6c82-48d3-b9ba-0b6192088269>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
Union Square, NJ Prealgebra Tutor Find an Union Square, NJ Prealgebra Tutor ...Through this approach, my students are much better equipped to begin their individual journey to fluency. I am qualified to teach Praxis because in addition to passing the Praxis general exam with distinction and thus gaining my NJ teaching license (top 10% of test takers), I have tutored Praxis... 37 Subjects: including prealgebra, English, geometry, Chinese ...Since my graduation in 2009, I have continued to tutor, and especially love working with students in mathematics. I believe that through the use of manipulatives, games, and other hands-on methods, every student can learn, and come to enjoy doing so. There is nothing I find more rewarding than ... 18 Subjects: including prealgebra, physics, writing, algebra 1 ...I enjoy teaching students mathematics skills through real-life examples, such as with cooking/baking and games. Throughout college, I have proofread the work of my classmates in various classes. In addition, as a high school mentor, I read numerous personal statements for my students, as well as their own personal school work. 18 Subjects: including prealgebra, chemistry, geometry, biology I love math/science and love to share my enthusiasm for these subjects with my students. I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. 11 Subjects: including prealgebra, Spanish, calculus, physics ...Prior to that, I also taught several years of college prep physics, one year of honors physics, and several years of general physical science. Each one of my 14 full school years included at least 1 section of college preparatory chemistry. I have done home instruction in both chemistry and physics.I have been teaching chemistry from a basic level to AP for 14 years. 9 Subjects: including prealgebra, chemistry, physics, algebra 1 Related Union Square, NJ Tutors Union Square, NJ Accounting Tutors Union Square, NJ ACT Tutors Union Square, NJ Algebra Tutors Union Square, NJ Algebra 2 Tutors Union Square, NJ Calculus Tutors Union Square, NJ Geometry Tutors Union Square, NJ Math Tutors Union Square, NJ Prealgebra Tutors Union Square, NJ Precalculus Tutors Union Square, NJ SAT Tutors Union Square, NJ SAT Math Tutors Union Square, NJ Science Tutors Union Square, NJ Statistics Tutors Union Square, NJ Trigonometry Tutors Nearby Cities With prealgebra Tutor Arlington, NJ prealgebra Tutors Bayway, NJ prealgebra Tutors Elizabeth, NJ prealgebra Tutors Elmora, NJ prealgebra Tutors Greystone Park, NJ prealgebra Tutors Hopelawn, NJ prealgebra Tutors Menlo Park, NJ prealgebra Tutors Midtown, NJ prealgebra Tutors Monroe, NJ prealgebra Tutors North Elizabeth, NJ prealgebra Tutors Parkandbush, NJ prealgebra Tutors Peterstown, NJ prealgebra Tutors Rockaway Point, NY prealgebra Tutors Tabor, NJ prealgebra Tutors West Arlington, NJ prealgebra Tutors
{"url":"http://www.purplemath.com/Union_Square_NJ_Prealgebra_tutors.php","timestamp":"2014-04-20T16:03:19Z","content_type":null,"content_length":"24598","record_id":"<urn:uuid:2ab4aca2-dd00-4e88-a5cc-7ad03a72d848>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Analogy between the exterior power and the power set up vote 19 down vote favorite The symmetric algebra of an object exists in every cocomplete $\otimes$-category. For the category of sets $\mathrm{Sym}(X)$ is the set of multi-subsets of $X$. The usual definition of the exterior power works in every cocomplete linear $\otimes$-category in which $2$ is invertible. But what about the non-linear case? Are there also "exterior powers" in $\ otimes$-categories which are not linear? Of course the usual definition using alternating maps does not work. But isn't it striking that for the cartesian category of sets there is a quite natural candidate, namely the power set? Here are some analogies (here $P(X)$ denotes the power set of $X$ if $X$ is finite; in general it is the set of all finite subsets of $X$; $P_n(X)$ is the set of all subsets of $X$ with $n$ elements): • $P(X) = \coprod_n P_n(X)$ and $\Lambda(M) = \oplus_n \Lambda^n(M)$ • $P(X \sqcup Y) = P(X) \times P(Y)$ and $\Lambda(M \oplus N) = \Lambda(M) \otimes \Lambda(N)$ It follows the "categorified Vandermonde identity": • $P_n(X \sqcup Y) = \coprod_{p+q=n} P_q(X) \times P_q(Y)$ and $\Lambda^n(M \oplus N) = \oplus_{p+q=n} \Lambda^p(M) \otimes \Lambda^q(N)$ • $(P(X),\cup)$ is a commutative monoid and $(\Lambda(M),\wedge)$ is a graded-commutative algebra, i.e. commutative monoid object in the tensor category of graded modules equipped with with twisted • If $M$ is free with (ordered) basis $X$, then $\Lambda(M)$ is free with basis $P(X)$, and $\Lambda^n(M)$ is free with basis $P_n(X)$. In particular, $\dim \Lambda^n(M)=\dim P_n(X)$. • If $T$ is a commutative monoid, then homomorphisms $P(X) \to T$ correspond to maps $f : X \to T$ with $f(x)^2=f(x)$, and if $A$ is a graded-commutative algebra, then homomorphisms $\Lambda(M) \to A$ correspond to homomorphisms of modules $f : M \to A_1$ with $f(x)^2=0$ or rather $f(x)f(y)+f(y)f(x)=0$ in the context of $\otimes$-categories (so these conditions are not the same, but both use $f(x)^2$). Therefore I would like to ask: Is there a notion of exterior algebra for certain cocomplete $\otimes$-categories, including categories of modules and the category of sets? In the latter case, do we get the power set? categorification ct.category-theory exterior-algebra symmetric-monoidal-catego By $P_n(X)$ do you mean the set of $n$-element subsets? In that case your first identity holds only when $X$ is finite, while $\Lambda(M)$ makes sense for $M$ of any dimension and has the decomposition that you give. Interesting question! – MTS Apr 13 '13 at 17:26 Ah sorry, I should write "finite power set" everywhere, i.e. the set of finite subsets. – Martin Brandenburg Apr 13 '13 at 18:07 4 Maybe the power set is more like a Clifford algebra. – Tom Goodwillie Apr 13 '13 at 19:27 1 To make the algebraic structure on $P(X)$ closer to that on $\Lambda(X)$, you can use the disjoint union rather than the union. Then the last bullet looks a little better. But note that the $\ Lambda$ side of the last bullet doesn't make sense in arbitrary categories --- rather, it has something special to do with usual modules over a ring in which $2$ is invertible. – Theo Johnson-Freyd Apr 13 '13 at 22:24 1 I wasn't really thinking of a quadratic form. I was just thinking that, like a Clifford algebra, the power set has a filtration such that the associated graded object is (like) an exterior algebra. – Tom Goodwillie Apr 14 '13 at 0:26 show 3 more comments 1 Answer active oldest votes To a set $X$ associate the free vector space $M(X)$ over $X$; conversely, for a vector space $M$ let $X$ be the index set of a basis of $M$. Then the analogy is just how one does exterior algebra in terms of a basis. This fits into the way how representation theory for $GL(n)$ and for the symmetric group $S(n)$ are related to each other, both using Young projectors in interated tensor products of $\ up vote 0 mathbb C^n$. This becomes more striking even if we take the direct limit for $n\to \infty$. See books and papers by Yuri Neretin (in arXiv). down vote Edit: For modules $M$ over an algebra $A$, one could consider the corresponding algebra of dual numbers $A\circledS M$ (i.e., $A\oplus M$ with multiplication $(a,m).(a',m') = (a.a', a.m' + m.a')$ and the Kaehler differentials over this algebra. See 2.3 of here. 1 I don't think this answers the question. The question is whether there is a general construction which specializes to both the exterior algebra and the power set, not whether you can relate the power set and the exterior algebra. – Qiaochu Yuan Apr 13 '13 at 18:59 I agree with Qiaochu. See also the fourth $\bullet$. – Martin Brandenburg Apr 13 '13 at 19:18 I cannot see any connection between the Edit and my question. – Martin Brandenburg Apr 14 '13 at 9:33 The algebra of Kaehler differentials of $A\circledS M$ generalizes the exterior algebra from vector spaces to modules over a commutative algebra, or even to bimodules over a non-commutative algebra. – Peter Michor Apr 14 '13 at 12:41 And what are Kaehler differentials for non-linear tensor categories? – Martin Brandenburg Apr 14 '13 at 22:09 add comment Not the answer you're looking for? Browse other questions tagged categorification ct.category-theory exterior-algebra symmetric-monoidal-catego or ask your own question.
{"url":"http://mathoverflow.net/questions/127476/analogy-between-the-exterior-power-and-the-power-set","timestamp":"2014-04-16T16:57:25Z","content_type":null,"content_length":"66607","record_id":"<urn:uuid:f5e8c595-d276-4c99-9106-711653cd02fc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifying polynomial fractions - Equality question July 24th 2012, 03:49 PM #1 Jul 2012 Simplifying polynomial fractions - Equality question I am studying the simplification of polynomial fractions. My textbook has the example of: $\dfrac{2x+6}{x^2-9} = \dfrac{2}{x-3}$ : factored out (x+3) from the numerator and denominator My confusion is over the sense of 'equality' of the 2 expressions. I expected to be able to use either expression interchangeably as the definition for a function. However, $\ f(x) = \dfrac{2x+6}{x^2-9}$ is undefined for x = { 3, -3 } $\ f(x) = \dfrac{2}{x-3}$ is undefined for x = { 3 } The domains of the functions differ. It appears that 'information' was lost in the simplification of the original expression. Are the 2 expressions not really "equal"? Am I misunderstanding something about the concept of equality with regard to expressions? Last edited by ForumUser2; July 24th 2012 at 03:53 PM. Re: Simplifying polynomial fractions - Equality question That's right. As numbers, $\dfrac{2x+6}{x^2-9}$ and $\dfrac{2}{x-3}$ are equal iff $xe-3$. To compare these expressions as functions, we must equip each of them with a domain. As long as those domains do not include -3, they are equal as functions (in the set-theoretic sense). With their natural, i.e., maximal, domains, they are different as functions because their domains are In the process of transformations (e.g., when solving an equation), it is important to check if each equality is a true identity, i.e., holds for all values of variables. Similarly, it is important to check if two equalities or inequalities are truly equivalent, i.e., are both true or both false for all values of variables. Re: Simplifying polynomial fractions - Equality question I just love your blog.Thanks for posting it.i have something that someone comeback again….there is a lot of useful information a person can get from here…Web DevelopmentI must say,well done...i visited it daily,what a colourful blog it is....colours attracted me a lot...Free Legal Advice.i visited it daily,what a colourful blog it is.... a very decent and nice blog...i like it so very July 24th 2012, 04:14 PM #2 MHF Contributor Oct 2009 July 24th 2012, 10:14 PM #3 Jul 2012
{"url":"http://mathhelpforum.com/pre-calculus/201323-simplifying-polynomial-fractions-equality-question.html","timestamp":"2014-04-17T16:35:46Z","content_type":null,"content_length":"40012","record_id":"<urn:uuid:16d6a74e-7895-4bd8-ac21-f0302581be7d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Unrestricted effects and nondeterminism in purely functional code This post is partially a continuation of Referentially transparent nondeterminism. In a program's outermost input and output layers, unrestricted effects and nondeterminism do not break referential transparency since there is no enclosing program around to observe the effects. This gives us a lot of freedom to do things we wouldn't ordinarily be able to do without breaking purity. Just how far can we push this? Consider the mealy machine datatype: data Mealy a b = Mealy (a -> (b, Mealy a b)) Clearly, we can feed a Mealy a b from an IO a producer. But we can also combine two IO a producers nondeterministically. It is only when we sample the output that we observe the effect of the nondeterminism. Here is a typeclass expressing this: class (MonadPlus s) => Source s where -- <|> has the effect of merging two sources, nondeterministically -- perfectly okay, since when we sample later -- (at the end of the universe), we get IO (<|>) :: s a -> s a -> s a transform :: s a -> Mealy a b -> s b drain :: s a -> (a -> IO b) -> IO b -- deterministic merge - explicit interleaving zip :: s a -> s b -> s (a,b) filter :: (a -> Bool) -> s a -> s a -- etc This gives us nondeterminism in our input streams, which are nicely first class. Our choice of stream transformers, Mealy, are also first class. Only when we observe the output of a source (in drain) do we pay for the side effects we may have accumulated. Note that our (<|>) does not require us to pick a deterministic interleaving of the two sources - imagine we have a Source for keypresses of the letter a and another Source of keypresses of b. If our source representation were something like data Source a = Source { sample :: IO a }, our implementation of (<|>) would be required to pick a deterministic interleaving (wait for an a, then wait for a b, or vice versa). Our implementation can simply pass through either event when it occurs. Effects are paid for only when the stream is We can allow for more composition in our output handlers with a type like: type Sink a b = Iteratee a IO b -- replace drain signature in Source with: drain :: s a -> Sink a b -> IO b We then compose Sink values like we would any monadic Iteratee. What we lack, though, is the ability to pass along the nondeterminism from the merging we did with Source. Meaning, if we call drain (a <|> b) i, we ought to have the option (since this is the end of the universe), to evaluate this like drain a i OR drain b i, meaning we do not specify how the effects produced by the two separate instances of i are interleaved. To acheive this, we can add another constructor to Sink: data Sink a b = One (Iteratee a IO b) | Many (Sink a b) Many is used to signal to the Source evaluator than any effects <|>'d together may be fed to the given Sink concurrently. Again, since no one is around to observe it, this nondeterminism does not break RT. I don't know how often this sort of output nondeterminism is really useful, but it's interesting that it is possible. So, we so far have allowed unrestricted side effects and nondeterminism at both the input to our programs as well as the output of our programs. What else can we do? Well, we can allow commutative effects in the implementation of our stream transformer: data Mealy m a b = Mealy (a -> m (a, Mealy a b)) Here m must be some commutative monad. This ensures that we do not need to pick a global ordering on our effects. One interesting addition is to explicitly separate zip from the monad interface and require only that zip commute. This lets Mealy pipelines see effects from earlier pipeline stages, but not from parallel pipelines: class Monad m => Commutative m where -- subject to zip a b == fmap (snd *** fst) (zip b a) zip :: m a -> m b -> m (a,b) As a final thought, I wonder: is it possible to make an instance of Commutative for something like State? Clearly, the implementation of zip should feed the same input s into both rather than sequencing the state as usual, but what do we do with the two different output states? We require either some sort of commutative state merging function s -> s -> s to combine the two new states, or we are forced to propagate both values along somehow, without committing to an order! Related posts:
{"url":"http://pchiusano.blogspot.com/2011/07/unrestricted-effects-and-nondeterminism.html","timestamp":"2014-04-17T21:23:32Z","content_type":null,"content_length":"66846","record_id":"<urn:uuid:6b7c9440-ebf4-4559-a59f-b837345d5275>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Bergenfield Math Tutor ...I also taught college level geology labs from 2008-2011. I have a BA in Geology from CUNY: Queens College. I've taught high school Regents Earth Science since 2003. 6 Subjects: including algebra 1, biology, prealgebra, astronomy ...During my junior spring semester, I directly enrolled in a Bolivian University in La Paz (la Universidad Mayor de San Andres). My coursework, which included history and anthropology, took place entirely in Spanish with other Bolivian students. I have also taken summer courses in Buenos Aires, Ar... 13 Subjects: including trigonometry, PSAT, algebra 1, algebra 2 My experience in tutoring spans a wide variety of subjects and disciplines. I have degrees in Biology and Mathematics and am comfortable teaching any and all of the subjects in both science and math. I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between. 22 Subjects: including calculus, SAT math, English, reading ...I love playing and listening to music and I am also a big sports fan. Learning should always be fun so let's get started! We have mountains to move!As a Biological Sciences major at Cornell, I took both introductory and higher lever coursework in Genetics. 25 Subjects: including ACT Math, physics, probability, prealgebra ...I can help students of any age or ability. I will help in all areas for probability. Being able to use the calculator effectively where appropriate is key. 52 Subjects: including algebra 1, algebra 2, ACT Math, English
{"url":"http://www.purplemath.com/Bergenfield_Math_tutors.php","timestamp":"2014-04-17T13:05:59Z","content_type":null,"content_length":"23443","record_id":"<urn:uuid:bbe55593-16cc-4326-9e1b-8ed9371a3fb2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
complementary filter | Robottini Kalman filter vs Complementary filter Sep 25 Posted by robottini in Tips | 35 Comments Note: At the bottom of the post the complete source code The use of accelerometer and gyroscope to build little robots, such as the self-balancing, requires a math filter in order to merge the signals returned by the sensors. The gyroscope has a drift and in a few time the values returned are completely wrong. The accelerometer, from the other side, returns a true value when the acceleration is progressive but it suffers much the vibrations, returning values of the angle wrong. Usually a math filter is used to mix and merge the two values, in order to have a correct value: the Kalman filter . This is the best filter you can use, even from a theoretical point of view, since it is one that minimizes the errors from the true signal value. However it is very difficult (see here) to understand. In fact, the filter needs to be able to calculate the coefficients of the matrices, the process-based error, measurement error, etc. that are not trivial. In the hobbistic world, recently are emerging other filters, called complementary filters. In fact, they manage both high-pass and low-pass filters simultaneously. The low pass filter filters high frequency signals (such as the accelerometer in the case of vibration) and low pass filters that filter low frequency signals (such as the drift of the gyroscope). By combining these filters, you get a good signal, without the complications of the Kalman filter. Making a study from a theoretical point of view, the discussion is complicated and is beyond the scope of this tutorial. The complementary filters can be have different ‘orders’. Here I speak about the so-called first-order filter that filter already well, and the second-order filter which filter even better. Clearly, going from first to second order, the algorithm is more complicated to use and perhaps there is no gain so obvious to justify the increase in complexity. A great introduction to the first order complementary filters applied to the an accelerometer and a gyroscope, comes from MIT (here). It introduces the filter in a a very simple mode. On this document is based the first Arduino algorithm: // a=tau / (tau + loop time) // newAngle = angle measured with atan2 using the accelerometer // newRate = angle measured using the gyro // looptime = loop time in millis() float tau=0.075; float a=0.0; float Complementary(float newAngle, float newRate,int looptime) { float dtC = float(looptime)/1000.0; x_angleC= a* (x_angleC + newRate * dtC) + (1-a) * (newAngle); return x_angleC; It ’enough to choose the response time of tau, to send the arguments, ie the angle measured with the accelerometer and the gyroscope, the time of the loop and you get in two lines, the angle calculated by the filter. The algorithm at the base of the second order complementary filter is described here. Indeed it is not described at all, but now we’ve figured out how the filter works by the MIT’s documentation. The principle is the same, the algorithm is more complicated. The translation of this algorithm for the Arduino: // newAngle = angle measured with atan2 using the accelerometer // newRate = angle measured using the gyro // looptime = loop time in millis() float Complementary2(float newAngle, float newRate,int looptime) { float k=10; float dtc2=float(looptime)/1000.0; x1 = (newAngle - x_angle2C)*k*k; y1 = dtc2*x1 + y1; x2 = y1 + (newAngle - x_angle2C)*2*k + newRate; x_angle2C = dtc2*x2 + x_angle2C; return x_angle2C; Here too we just have to set the k and magically we get the angle. If we want to apply the Kalman filter, we can re-use one of the codes already present in internet. This is the code that I copied from the Arduino forum (here): // KasBot V1 - Kalman filter module float Q_angle = 0.01; //0.001 float Q_gyro = 0.0003; //0.003 float R_angle = 0.01; //0.03 float x_bias = 0; float P_00 = 0, P_01 = 0, P_10 = 0, P_11 = 0; float y, S; float K_0, K_1; // newAngle = angle measured with atan2 using the accelerometer // newRate = angle measured using the gyro // looptime = loop time in millis() float kalmanCalculate(float newAngle, float newRate,int looptime) float dt = float(looptime)/1000; x_angle += dt * (newRate - x_bias); P_00 += - dt * (P_10 + P_01) + Q_angle * dt; P_01 += - dt * P_11; P_10 += - dt * P_11; P_11 += + Q_gyro * dt; y = newAngle - x_angle; S = P_00 + R_angle; K_0 = P_00 / S; K_1 = P_10 / S; x_angle += K_0 * y; x_bias += K_1 * y; P_00 -= K_0 * P_00; P_01 -= K_0 * P_01; P_10 -= K_1 * P_00; P_11 -= K_1 * P_01; return x_angle; To get the answer, you have to set 3 parameters: Q_angle, R_angle,R_gyro. The activity is a bit complicated . But what happens with these algorithms? Similar curves are obtained? Here’s a comparison: There are 5 curves: Color lines: • Red - accelerometer • Green - Gyro • Blue - Kalman filter • Black - complementary filter • Yellow - the second order complementary filter As you can see the signals filtered are very similarly. Note that in the presence of vibrations, the accelerometer (red) generally go crazy. The gyro (green) has a very strong drift increasing int the time. Now let’s see a comparison only between a filtered signal. That kalman (green), complementary (black) and complementary second-order (yellow). You can see how the Kalman is a bit late vs complementary filters, but it is more responsive to the vibration. In this case the second order filter does not return an ideal curve, probably I have to work a bit on the coefficients. In conclusion I think that the complementary filter, in this case the first order, can be used in place of the Kalman filter. The smoothing is good and the algorithm is much simpler than Kalman. The hardware I used was composed of: - Arduino 2009 - 6-axis IMU SparkFun Razor 6 DOF This is the complete source code:
{"url":"http://robottini.altervista.org/tag/complementary-filter","timestamp":"2014-04-21T10:43:11Z","content_type":null,"content_length":"40905","record_id":"<urn:uuid:d6f46e07-b837-402f-8810-1b213775309a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment Submitted by Anonymous on December 22, 2011. You are not more likely to arrive at one of the smaller gaps because there are more of them; you seem to think that there is some sort of urn from which you are drawing out gaps of time with equal probability; easy mistake to make. Think of it this way; you are minutes from an urn, numbered :00-:59. There is one 45 minute gap and 15 one -minute gaps. You are most likely to pick a minute belonging to the 45 minute gap. Additionally, what he says doesn't apply to all non-uniform distributions, but only to those where the times between buses are exponentially distributed (and thus memoryless)
{"url":"http://plus.maths.org/content/comment/reply/2296/3019","timestamp":"2014-04-16T05:10:32Z","content_type":null,"content_length":"20476","record_id":"<urn:uuid:eb4c0182-0945-4c13-ba7f-142171e4cca4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Visualizing Cellular Automata – I Genetics has been a topic I was interested in for as long as I can remember. Recently I started reading about how biological systems function and how their principles are applied in creating programs. This led me to read more about genetic programming and AI, and eventually I came across Complex Systems and Cellular Automata. A quick google search was enough to get me excited about its generative nature and emergent patterns. The fact that I could create a purely rational system on my computer which applied logical rules on a set of states to generate such complex structures- orderly and chaotic at the same time, encouraged me to explore this as a project for 594P. The origins of Cell Automaton lie in Von Neumann’s simplification of the process of Kinematic Automata, a system designed to create self-replicating robots, due to Stanislaw Ulam’s insight on his methods. Though it became popular within a small computing community with John Conway’s “Game of Life”, it was Stephen Wolfram’s publication of “A New Kind of Science”, a book that explains how complex systems emerge from seemingly simplistic ones like Cell Automata, that reintroduced its concept as a thoroughly systematic investigation. The basics are very straightforward- you start with a set of initial states, iterate through all the cells, checking each cell’s neighborhood (a finite number of cells around it) and mapping its states to the rule being employed to calculate the next state of the cell. All the cells are updated once the rule is employed and then the process is repeated. I started with Elementary Cellular Automata- 1D structure of cells, where each cell’s neighborhood is composed of itself, the cell on its right and the cell on its left, and there are only two possible states for each cell: ’0′ and ’1′. With this configuration you have a possibility of 256 (2^(2^3)) rules to govern the behavior. Interesting behaviors emerge when the evolution of 1D cellular automata is tracked for a number of iterations. The following images display some of the interesting rules. The major observation Wolfram made was how some structures were very orderly while some very stochastic in nature. Although some of the most interesting ones are with a combination of both, order and randomness in their structure, for example rule 110. ( Continued on Visualizing Cellular Automata – II… ) One Response to “Visualizing Cellular Automata – I” 1. [...] …Continued from Visualizing Cellular Automata – I ) #gallery-1 { margin: auto; } #gallery-1 .gallery-item { float: left; margin-top: 10px; [...]
{"url":"http://www.riteshlala.net/home/visualizing-cellular-automata-i/","timestamp":"2014-04-16T13:03:29Z","content_type":null,"content_length":"18609","record_id":"<urn:uuid:42158acf-7e61-4d15-a6be-1f9810adf873>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof methods September 16th 2011, 06:51 PM Proof methods 1. Sometimes to prove a statement false, we negate it and prove the negation is true. Is it also possible to prove a true statement by proving its negation is false with a counter example? 2. Could you explain proof by reduction? Thanks in advance September 16th 2011, 08:22 PM Prove It Re: Proof methods 1. Yes. But the more common practice is to prove the negation is false by going through a series of logical steps to arrive at a contradiction. 2. No... September 17th 2011, 04:04 AM Re: Proof methods Yes. A counterexample is given to a statement of the form "For all x, P(x)." If this is a negation, the original statement must be "There exists an x such that not P(x)." A counterexample to "For all x, P(x)" is some x0 such that P(x0) is false. But if you have a counterexample like this, you can prove the original existential statement directly. More often, one derives a contradiction from the negation "For all x, P(x)" without exhibiting a counterexample explicitly. Sometimes this counterexample is still hidden in the proof, so it can be extracted and used to prove the original existential statement directly. Other times, extracting a counterexample cannot be done in principle. September 17th 2011, 04:56 AM Re: Proof methods No, in general a counterexample to the negation will not prove the statement. Mathematica theorems are typically of the form "for all x, P(x)". You could disprove that by giving a single counter-example, denying the "all x" part. But the negation of that is "for some x, not P(x)". A single counterexample, one value of x such that P(x) is true, does not disprove that. September 17th 2011, 07:15 AM Re: Proof methods We all agree that counterexamples can be given only to universal statements ("for all x, P(x)"). Therefore, I was talking about existential theorems, which have universal negations, and whose negations therefore can be disproved with a counterexample. I agree that theorems are typically universal, but many have the form "for all x there exists an y..." After fixing an x, the existential statement can be proved either by constructing a witness or by contradiction.
{"url":"http://mathhelpforum.com/discrete-math/188145-proof-methods-print.html","timestamp":"2014-04-17T11:34:34Z","content_type":null,"content_length":"8804","record_id":"<urn:uuid:840362ec-97c1-4d2d-9783-175907e35cfc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Neat facts from Euler 2010 Posted by: Dave Richeson | July 20, 2010 Neat facts from Euler 2010 I had the wonderful honor of being the keynote speaker at the 9th annual meeting of the Euler Society. I spoke today about my book. It is now the end of the second day of this 2.5 day conference. I thought I’d post a few of the many interesting things that I learned. 1. Larry D’Antonio shared this quote from Kant (Physical Monadology, 1756): But how in this business can metaphysics be reconciled with geometry [mathematics], when it appears easier to mate griffins with horses than to unite transcendental philosophy with geometry (Apparently the expression “mating griffins and horses” goes back to Virgil’s 8th Ecologue and is supposed to signify the impossible, since griffins view horses as prey. When the two do mate their offspring is a hippogriff—a creature that was recently reintroduced in the Harry Potter series.) 2. Let $S=\{m^n:m,n\in\mathbb{Z},m,n>1\}=\{4,8,9,16,25,27,32,36,\ldots\}$ be the set of all nontrivial powers (listed without repeats). Christian Goldbach discovered the following summation: $\displaystyle \sum_{j\in S}\frac{1}{j-1}=\frac{1}{3}+\frac{1}{7}+\frac{1}{8}+\frac{1}{15}+\frac{1}{24}+\frac{1}{31}+\frac{1}{35}+\cdots=1$. Euler gave a “proof” of this in Variae observationes circa series infinitas and extended it in a number of interesting directions. Bruce Burdick gave a talk in which he showed Euler’s slick proof—it begins with Euler taking $x$ to be the sum of the harmonic series (which we all know is infinite). You can read Euler’s short proof on the first page of this English translation or on Ed Sandifer’s How Euler Did It site. Then Bruce showed how to make Euler’s proof rigorous. 3. The zeta function has the following property: Quote of the Day from Bruce Petrie: “In the 18th century, existence proofs didn’t exist.” 5. Tom Osler spoke about oblique angle diameters for curves. The definition is a little wordy, so I’ll describe it for a parabola. Take any line parallel to the axis of symmetry of a parabola (this line is an oblique angle diameter). Draw the tangent line to the parabola where it meets the line. Then draw any line parallel to this tangent line that meets the parabola twice. This line segment is always bisected by the oblique angle diameter (in the diagram below FD is the same length as DC). It turns out that every conic section has an infinite family of oblique angle diameters. For the parabola it is any line parallel to the axis of symmetry. For an ellipse and a hyperbola it is any line through the origin. Euler had a lot to say about this, but he had a very slick proof that if a curve has two oblique angle diameters $a$ units apart, then it is possible to find infinitely many simply by repeatedly translating one of the given ones by $a$ units. 6. In this same paper Euler discusses a neat fact about triangles with vertices on a conic section. I’ll describe it for the ellipse. If you pick a point on an ellipse ($A$ on the ellipse below) and draw the tangent line to the ellipse through $A$, then draw a line from the center ($B$ below) to the ellipse parallel to the tangent line and it meets at the point $C$, then the area of triangle $ABC$ does not depend on the point $A$. In particular, if the ellipse has the form $x^2/a^2+y^2/b^2=1$, then the area is $ab/2$. Posted in Math | Tags: conic section, ellipse, Euler, hyperbola, infinite series, Kant, parabola, Virgil, zeta function
{"url":"http://divisbyzero.com/2010/07/20/neat-facts-from-euler-2010/?like=1&source=post_flair&_wpnonce=bcc517a348","timestamp":"2014-04-21T12:09:26Z","content_type":null,"content_length":"67086","record_id":"<urn:uuid:bef8a3d9-a3a8-42d4-a050-643b1c2e194b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: dumb question about rationals Bill Allombert on Wed, 08 Jun 2005 22:27:03 +0200 [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] Re: dumb question about rationals On Wed, Jun 08, 2005 at 02:07:42PM +0200, Vincent Torri wrote: > Ok, thank you for your quick answer. > My aim is to construct rationals from any long. It seems that > r = gdiv (stoi (l1), stoi (l2)); > is working (l1 and l2 are long int). > In that case, should I construct r, that is, should I call cgetg ? No, gdiv will do it for you. > If no, what is the precision of the rational ? > May i change it after this operation ? Yes, you can convert it to a real with finite precision by multiplying it by 1. with the precision you want or by calling gaffect. > Also, I've not well undestood the use of cgetg. cgetg() allocate objects on the PARI stack. Objects return by standard PARI functions are already allocated somewhere and usually on the stack, so you only need to use cgetg() when you want to build objects piece-wise. For example if you don't want PARI wasting time checking if l1 and l2 are coprime (because you know it is the case), you can do r=cgetg(3,t_FRAC); r[1]=stoi (l1); r[2]=stoi (l2); instead of gdiv (stoi (l1), stoi (l2)); but usually it is done for vectors and matrices. > cgetg (N, t_FRAC) allocate a rational whose components (numerator and > denominator) are coded on N*32 bits, right ? No, it is the length of the topmost part of the object. For a t_FRAC it is always 3 (1 codeword + 1 numerator + 1 denominator). > In case that there is an overflow, does PARI increase the precision of the > rational itself, or is there an error message (or something else) ? PARI functions never modify their input. Instead they allocate memory in the stack to store the result, so an overflow is not possible since PARI will alway allocate enough memory. > I ask that because the computations I do lead to intermediate rationals > with large coprime numerator and denominator (I don't know the limit of > these numbers). If you need to handle very large numbers, you can try to build PARI with GMP support which is much faster in that case.
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-users-0506/msg00012.html","timestamp":"2014-04-19T14:33:56Z","content_type":null,"content_length":"7200","record_id":"<urn:uuid:dba67eb4-3eb9-46e7-95b3-84c5d1939b6a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: In circle O, CD = 44, OM = 20, ON = 19, CD is perpendicular to OM and EF to ON. a. Find the radius. If your answer is not an integer, express it in radical form. b. Find FN. If your answer is not an integer, express it in radical form. c. Find EF. Express it as a decimal rounded to the nearest tenth. • 8 months ago • 8 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/52158695e4b0450ed75e58f6","timestamp":"2014-04-20T18:41:37Z","content_type":null,"content_length":"146154","record_id":"<urn:uuid:8bbd0b05-51fc-4f13-96f9-f8cc343392d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
[Tutor] very odd math problem Steven D'Aprano steve at pearwood.info Fri Mar 11 06:05:34 CET 2011 Alex Hall wrote: > Hi all, > I am trying to get a list of ordered pairs from the below function. In > my code, evaluate is more exciting, but the evaluate here will at > least let this run. The below runs fine, with one exception: somehow, > it is saying that -2+2.0 is 4.x, where x is a huge decimal involving > E-16 (in other words, a really tiny number). Does anyone have any idea > what is going on here? Let's reword the description of the problem... "2.0 - 2 is a really tiny number close to 4e-16" Welcome to the wonders of floating point maths! Repeat after me: Floats are not real numbers... floats are not real numbers... floats are not real numbers... everything you learned about arithmetic in school only *approximately* applies to floats. Half :) and half :( First off, anything involving e-16 isn't a "huge decimal", it's a tiny decimal, very close to zero, no matter what the x is: Also, although you say "-2 + 2.0" in a comment, that's not actually what you calculate. I know this even though I don't know what you calculate, because I can test -2 + 2.0 and see that it is exactly zero: >>> -2 + 2.0 == 0 Somewhere in your calculation you're probably calculating something which *looks* like 2.0 but isn't. Here's an example: >>> x = 2 + 1e-14 >>> print(x) >>> x == 2.0 but you can see the difference by printing the float with more decimal places than shown by the default view: >>> repr(x) Another problem: you calculate your values by repeated addition. This is the wrong way to do it, because each addition has a tiny little error, and repeating them just compounds error upon error. Here's an example: >>> x = 0.0 >>> for i in range(10): ... x += 0.1 >>> x == 1.0 >>> print(x) >>> repr(x) The right way is to do it like this: >>> x = 0.0 >>> for i in range(1, 11): ... x = i*0.1 >>> x == 1.0 This ensures that errors don't compound. Some further resources: David Goldberg used to have a fantastic (although quite technical) discussion of floating point issues, "What Every Computer Scientist Should Know About Floating-Point Arithmetic": Unfortunately, since Oracle bought Sun, they've removed the article. If you can find a copy of Apple's old "Apple Numeric Manual" (2nd Edition), it has a fantastic introduction by William Kahan. Even though the book is about Apple's SANE, a lot will apply to other floating point systems as well. Google on William Kahan and read his stuff :) More information about the Tutor mailing list
{"url":"https://mail.python.org/pipermail/tutor/2011-March/082428.html","timestamp":"2014-04-17T20:23:11Z","content_type":null,"content_length":"5554","record_id":"<urn:uuid:d8cd1135-f4b1-4e7f-a43a-3f85cb97a0d6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
God Plays Dice From the April Notices of the AMS, John D'Angelo writes Baseball and Markov Chains: Power Hitting and Power Series . Consider the following simple model of baseball. Players only hit singles; three singles score a run. That is, the third and every following player to get a hit in a given inning score a run. This can either be interpreted as that, say, all runners score from second on a single or all runners go from first to third on a single -- but not both! -- or that every third hit is actually a double. (And I do mean every third hit, not some random one-third of hits, so this is a bit unnatural.) Then the expected number of runs per half inning is p -10p+10)/(1-p). For real baseball the average number of runs per half-inning is around one half, which corresponds to p = 0.361. D'Angelo gives this as an exercise, but I independently came up with this model a while ago and can't resist sharing the solution. Let q = 1-p. The probability of getting k in an inning is p -- that's the probability of getting those hits in a certain order -- times the number of ways in which k hits and 3 outs can be arranged. Since the last batter of an inning must get out, the number of possible arrangements is the number of ways to pick 2 batters out of the first k+2 to get out, which is (k+2)(k+1)/2. The probability of getting k , if k is at least 1, is just the probability of getting k+2 hits, which is p (k+4)(k+3)/2. Call this f(k); then f(1) + 2f(2) + 3f(3) + ... = p by some annoying algebra. I'm pretty sure I came up with this exact model while procrastinating from some real work a couple years ago; it's probably been independently reinvented many times. With p = 0.361, the probabilities of scoring 0, 1, 2, 3, 4, 5 runs in an inning are .748, .123, .066, .034, .016, .008 (rounded to three decimal places). (Probabilities of larger numbers of runs can also be calculated; together they have probability around .006.) Assuming that each half-inning is independent, the probability G(k) of a team scoring k runs in a is, for each k, k 0 1 2 3 4 5 G(k) .073 .108 .129 .133 .124 .108 k 6 7 8 9 10 11 G(k) .088 .069 .052 .038 .026 .018 k 12 13 14 15 16 17 G(k) .012 .008 .005 .003 .002 .001 with probability about 0.0006 of scoring 18 runs or more. (This seems a bit low to me -- three times a season in the major leagues -- but after all this is a very crude model!) But one interesting thing here is that the distribution of the number of runs per game, which is a sum of nine skewed distributions, is still skewed; the mode is 3, and the median 4. Recall that I chose p so that the mean would be 4.5. And the actual distribution is similarly skewed. Of course a more sophisticated model of baseball is as a Markov chain. There are twenty-five states in this chain -- zero, one or two outs combined with eight possible ways to have runners on base, and three outs. We assume that each hitter hits randomly according to his actual statistics, and the runners move in the "appropriate" way. Of course determining what's appropriate here would be a bit tricky. How do runners move? A runner is probably more likely to take an extra base when a power hitter is hitting, but the sample size for any individual is fairly small. But one could probably predict from some measure of the hitter's power (say, the number of doubles and home runs, combined appropriately) the chances of a runner taking an extra base on a single. Something similar is necessary for sacrifice flies (which have to be deep enough to score the runner), grounding into double plays, etc. I'm not sure if the Markov models that are out there, such as that by , do this. Sagarin computes the (offensive) value of a player by determining how many runs per game a team composed of only that player would score. For the morbidly curious, here's my recently completed PhD thesis, Profiles of large combinatorial structures. (PDF, 1.1 MB, 262 pages (but double-spaced with wide margins)) This is why I haven't been posting! Abstract: We derive limit laws for random combinatorial structures using singularity analysis of generating functions. We begin with a study of the Boltzmann samplers of Flajolet and collaborators, a useful method for generating large discrete structures at random which is useful both for providing intuition and conjecture and as a possible proof technique. We then apply generating functions and Boltzmann samplers to three main classes of objects: permutations with weighted cycles, involutions, and integer partitions. Random permutations in which each cycle carries a multiplicative weight σ have probability (1-γ)^σ of having a random element be in a cycle of length longer than γn; this limit law also holds for cycles carrying multiplicative weights depending on their length and averaging σ. Such permutations have number of cycles asymptotically normally distributed with mean and variance ~ σ log n. For permutations with weights σ[k] = 1/k or σ[k] = k, other limit laws are found; the prior have finitely many cycles in expectation, the latter around √n. Compositions of uniformly chosen involutions of [n], on the other hand, have about √n cycles on average. These can be modeled as modified 2-regular graphs. A composition of two random involutions in S[n] typically has about n^1/2 cycles, characteristically of length n^1/2. The number of factorizations of a random permutation into two involutions appears to be asymptotically lognormally distributed, which we prove for a closely related probabilistic model. We also consider connections to pattern avoidance, in particular to the distribution of the number of inversions in involutions. Last, we consider integer partitions. Various results on the shape of random partitions are simple to prove in the Boltzmann model. We give a (conjecturally tight) asymptotic bound on the number of partitions p[M](n) in which all part multiplicities lie in some fixed set n, and explore when that asymptotic form satisfies log p[M](n) ~ π√(Cn) for rational C. Finally we give probabilistic interpretations of various pairs of partition identities and study the Boltzmann model of a family of random objects interpolating between partitions and overpartitions. What's the point of having two thousand readers if I can't ask a question like this once in a while? I'm working on the final version of my dissertation -- the one I'll submit to the graduate school next week. The dissertation manual states that no text may appear in the margin area. LaTeX, on the other hand, keeps wanting to put some pieces of mathematics, which appear inline, in the margins. (Presumably this is because this is "better" than the alternative of having very long inter-word spaces.) Two questions: - is there some way to check that nothing's sticking out in the margin? (I thought this is what "overfull \hbox" meant, but the line numbers where those appear aren't the ones where I have this problem.) There are some things that are just barely sticking out into the margin, and with thousands of lines total I don't trust my eye. - once I find all the places where text protrudes into the margin, is there some way around this other than just inserting \newline every time this problem occurs? This creates its own problems. I surely can't be the only person who's had this problem, but Google is failing me. From the Daily Mail: New ash cloud could delay re-opening of London airports. We have this gem: "Critics said the agency used a scientific model based on 'probability' rather than fact to forecast the spread of the ash cloud." See the Telegraph as well. What else are they supposed to do? The agency here -- the Met Office, which is the national weather service of the UK -- doesn't know what the ash cloud is going to do. If they waited to see what the cloud does, the planes would already be in the air. It would be too late. There's a mathematical relationships search. It will tell you, for example, that academically, Max Noether is the first cousin of Emmy Noether. (Both of their advisors were students of Jacobi.) But Michael Artin and Emil Artin aren't even related. It's less amusing, of course, when you search for people that aren't related in the standard way. But Paul Erdos is my great-great-great-great-uncle. (You can't search for me yet in the Mathematics Genealogy Project, which is where the data comes from; the link goes to the relationship between Erdos and another student of my advisor.) The word "probability" does not appear in the Bible, or so we learn from Conservapedia's List of missing words in the Bible. I can only conclude that Einstein was right, and God does not play dice.
{"url":"http://godplaysdice.blogspot.com/2010_04_01_archive.html","timestamp":"2014-04-19T00:04:45Z","content_type":null,"content_length":"77232","record_id":"<urn:uuid:4e8959f9-f21d-433b-ba98-f7d92ccd02ec>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Sheaves as full reflective subcategories up vote 8 down vote favorite Hello everyone. My question is concerned with the following statement. "Having a grothendieck topology on a category C is equivalent to having a full reflective subcategory Sh(C) in the category PSh(C) of presheaves, whose reflection is left exact." What i need is a reference for this containing a proof. I tried google but could not find anything besides citations of this result. sheaf-theory ct.category-theory add comment 3 Answers active oldest votes I have seen a reference for this fact, and I think it was in Artin's book on Grothendieck Topologies. I have no copy available to check this right now. Before I found that reference, I wrote up a little treatment for my own benefit; I took the "full reflective subcategory" idea as the definition of a Grothendieck topos, then proved that all such come from Grothendieck topologies. It's in section 3.7 of http://www.math.uiuc.edu/~rezk/homotopy-topos-sketch.pdf up vote 10 The proof goes like this. That a Grothendieck topology gives rise to a full reflective subcategory with left-exact reflection is standard. If you're given such a reflective subcategory $D down vote \subseteq Psh(C)$, consider all the sieves, i.e., monomorphisms $f:S\to h_X$ where $h_X$ is the representable functor determined by $X\in C$. Call $f$ a covering sieve if $Lf$ is an accepted isomorphism, where $L: Psh(C)\to D$ is the left adjoint. You then show (i) the collection of covering sieves is a Grothendieck topology $\tau$, and (ii) sheaves for $\tau$ are exactly those presheaves isomorphic to objects of $D$. Both (i) and (ii) require using the fact that $L$ is left exact. (ii) is equivalent to the statement: (ii') for all $f:X\to Y$ in $Psh(C)$, $Lf$ is iso if and only if $L_\tau f$ is iso (where "$L_\tau$" is sheafification with respect to $\tau$.) It's convenient to prove (ii') first for monomorphisms $f$, and then for epimorphisms $f$. thank you very much. this is exactly what i needed. plus the rest of your notes seem interesting as well. – Garlef Wegart Feb 28 '10 at 16:20 just wanted to say, nice notes! – B. Bischof Mar 1 '10 at 2:33 add comment To add to what Charles wrote, another reference is Mac Lane and Moerdijk's Sheaves in Geometry and Logic. They prove something a bit more general, involving Lawvere-Tierney topologies on a topos. For the purposes of understanding what I'm about to write, it's not necessary to know what a Lawvere-Tierney topology is. Mac Lane and Moerdijk's book contains the following two results: 1. Let $\mathcal{E}$ be a topos. Then the subtoposes of $\mathcal{E}$ (i.e. the reflective full subcategories with left exact reflectors) correspond canonically to the Lawvere-Tierney topologies on $\mathcal{E}$. 2. Let $\mathbf{C}$ be a small category. Then the Lawvere-Tierney topologies on $\mathbf{Set}^{\mathbf{C}^{\mathrm{op}}}$ correspond canonically to the Grothendieck topologies on $\mathbf up vote 4 {C}$. down vote Result 1 is almost part of Corollary VII.4.7. The "almost" is because they don't go the whole way in proving the one-to-one correspondence, but I guess it's not too hard to finish it off. (Edit: it also appears as Theorem A.4.4.8 of Johnstone's Sketches of an Elephant, where Lawvere-Tierney topologies are called local operators.) Result 2 is Theorem V.4.1. I agree with the point of view that Charles advocates. When I started learning topos theory I got bogged down in detailed stuff about Grothendieck topologies, and it all seemed pretty technical and unappealing. It wasn't until years later that I learned the wonderful fact that Charles mentions: an elementary topos is Grothendieck iff it's a subtopos of some presheaf topos. I wish someone had told me that in the first place! add comment The mentioned references and some more are at nLab: category of sheaves. For instance the book by Kashiwara-Shapira has a useful account. When I myself learned this stuff I found it useful to read Mac-Lane/Moerdijk in parallel to Kashiwara/Shapira. The up vote 0 former has more of the topos-theoretic picture, the latter more of the homotopy-theoretic picture. down vote As we know from Rezk and Lurie, it*s both these aspects taken together that give the full picture. add comment Not the answer you're looking for? Browse other questions tagged sheaf-theory ct.category-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/16672/sheaves-as-full-reflective-subcategories?sort=newest","timestamp":"2014-04-24T04:03:21Z","content_type":null,"content_length":"61103","record_id":"<urn:uuid:4faa1252-9b13-4d19-90e8-bca460238757>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Inductive vs. Deductive Reasoning Date: 07/24/2001 at 17:16:23 From: Angie Subject: Inductive vs. Deductive Reasoning I'm sure this is a simple question, but can you help me with the differences between inductive and deductive reasoning? To me, the explanations I have seen sound virtually the same. Date: 07/24/2001 at 21:53:44 From: Doctor Peterson Subject: Re: Inductive vs. Deductive Reasoning Hi, Angie. Inductive reasoning starts with specific examples or observations, and "deduces" (a confusing term) the apparent rules or patterns that lie behind them. It's what a scientist or detective uses; it's never completely certain, because the next observation might contradict the theory, or at least require us to modify it. But it does connect our conclusions to the real world around us. Deductive reasoning starts with the rules, and determines what the consequences will be. This is what we do in most of math, defining the rules for a mathematical entity (such as the commutative property of addition), and using those to prove that other, more complicated, facts are true. Here, we can be absolutely sure of our conclusions - as long as we assume the axioms are true. We would have to use inductive reasoning to decide whether our assumptions make sense in the world we live in, where it isn't safe just to assume anything! I went to the Dr. Math search page and entered the words inductive deductive to see what we have said before. Here's a longer explanation with Logic: Definitions - Doctor Peterson, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/55695.html","timestamp":"2014-04-19T21:09:56Z","content_type":null,"content_length":"6597","record_id":"<urn:uuid:8221c9a6-60a0-4c8f-8c92-10991cbc6d4b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Narrow Search Earth and space science Sort by: Per page: Now showing results 1-8 of 8 This is a collection of mathematics problems relating to the moons of the solar system. Learners will use simple proportional relationships and work with fractions to study the relative sizes of the larger moons in our solar system, and explore how... (View More) temperatures change from place to place using the Celsius and Kelvin scales. (View Less) This collection of activities is based on a weekly series of space science mathematics problems distributed during the 2012-2013 school year. They were intended for students looking for additional challenges in the math and physical science... (View More) curriculum in grades 5 through 12. The problems were created to be authentic glimpses of modern science and engineering issues, often involving actual research data. The problems were designed to be one-pagers with a Teacher’s Guide and Answer Key as a second page. (View Less) This book contains 24 illustrated math problem sets based on a weekly series of space science problems. Each set of problems is contained on one page. The problems were created to be authentic glimpses of modern science and engineering issues, often... (View More) involving actual research data. Learners will use mathematics to explore problems that include basic scales and proportions, fractions, scientific notation, algebra, and geometry. (View Less) In this problem set, students calculate precisely how much carbon dioxide is in a gallon of gasoline. A student worksheet provides step-by-step instructions as students calculate the production of carbon dioxide. The investigation is supported the... (View More) textbook "Climate Change," part of "Global System Science," an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact. (View Less) This is a resource that explains the rationale behind the multiple time zone divisions in the United States. Learners will work through a problem set to practice calculating the time in one time zone, given the time in another time zone. This is... (View More) activity 9 from the educator guide, Exploring Magnetism: Magnetic Mysteries of the Aurora. (View Less) This is an activity about vectors and velocity. It outlines the addition and subtraction of vectors, and introduces the application of trigonometry to describing vectors. The resource is designed to support student analysis of THEMIS (Time History... (View More) of Events and Macroscale Interactions during Substorms) Magnetometer line-plot data. Learners will complete worksheets consisting of problem sets that allow them to work with vector data in magnetic fields. This is activity 15 from Exploring Magnetism: Earth's Magnetic Personality. (View Less) This is an activity about satellite size. Learners will calculate the volume of the IMAGE (Imager for Magnetopause-to-Aurora Global Exploration) satellite, the first satellite mission to image the Earth's magnetosphere. They will then determine the... (View More) effect of doubling and tripling the satellite dimensions on the satellite's mass and cost. This is the first activity in the Solar Storms and You: Exploring Satellite Design educator guide. (View Less) This is an activity about interpretation of a data graph. Learners will use mathematics to create a pie chart of percentages and answer accompanying questions. This is the fourth activity in the Solar Storms and You: Exploring Satellite Design... (View More) educator guide. (View Less)
{"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects=Mathematics&resourceType%5B%5D=Instructional+materials%3AActivity&resourceType%5B%5D=Instructional+materials%3AProblem+set&instructionalStrategies=Homework+and+practice","timestamp":"2014-04-18T21:55:44Z","content_type":null,"content_length":"66219","record_id":"<urn:uuid:73a8265e-4517-4a99-99be-dd20b78c5150>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Jennifer on Wednesday, November 11, 2009 at 12:48pm. From a point A on the ground, the angle of elevation to the top of a tall building is 24.1 degrees. From a point B, which is 600 ft closer to the building, the angle of elevation is measured to be 30.2 degrees. Find the height of the building. PLEASE HELP ME! • Trig - drwls, Wednesday, November 11, 2009 at 1:15pm Get out a piece of paper and draw the situation. You have two right triangles. Each have two points that are the same (the top and bottom of the bulding), but the third points (where the observer is located) are different. Let H be the height of the building, in feet. Point A is X ft away and point B is A - 600 feet away. Solve those two simultaneous equations: H/X = tan 24.1 H/(600-X) = tan 30.2 As a first step in solving them, you can solve for X first, using (600-X)/X = tan 30.2/tan 24.1 = 1.3011 (I used a calculator for that) Rewrite as 780.67 = 2.3011 X X = 339.3 ft Now use either of the first two equations to solve for H. • Trig - Reiny, Wednesday, November 11, 2009 at 1:45pm look at the non-right-angles triangle, with its top angle at the top of the building. That top angle can be easily found to be 6.1 degrees. We can find the side coming up from the 30.2 degree angle, call it y by sine law y/sin24.1 = 600/sin6.1 y = 2305.56 then in the small right-angled triangle sin 30.2 = H/y H = 2305.56sin30.2 = 1159.74 Related Questions Pre-Calculus - 1.The angle of elevation to top of a building from a point on the... Trig - The angle of elevation to the top of a building from a point on the ... Trig - A 60-foot flagpole stands on top of a building. From a point on the ... Math- Pre calc - A 60 foot antenna stands on top of a building. From a point on ... Precalculus - To estimate the height of a building, two students find the angle ... Trig - A 60 ft. antenna stands on top of a building, From a point on the ground... Pre-Cal/Trig - Explain how to solve the following question. Surveying: From a ... TRIG Help help help test tommarow - A TV tower stands on top of the Empire State... Math - On top of the sears building is a tv tower. From a point 300 ft away the ... Math - On top of the sears building is a tv tower. From a point 300 ft away the ...
{"url":"http://www.jiskha.com/display.cgi?id=1257961702","timestamp":"2014-04-19T22:52:16Z","content_type":null,"content_length":"9579","record_id":"<urn:uuid:2baab8ea-d484-461b-803f-19f91ed3f5d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
HP "calculators" (was Re: World's first computer on ebay!) M H Stein dm561 at torfree.net Sat Apr 14 13:37:47 CDT 2007 ------------Original Message: Date: Sat, 14 Apr 2007 07:51:58 -0700 From: "Chuck Guzis" <cclist at sydex.com> Subject: Re: HP "calculators" (was Re: World's first computer on I said "X=Y=7" in GWBASIC changes both X and Y the same way regardless of their initial values. LET X=Y=7 changes them a different way, regardless of their initial values. If there's a conditional operator in either of those statements, I can't find it. Yes, it's true that BASIC doesn't differentiate lexically between the assignment operator and equality test, but that seems to be unrelated to the behavior of the two statements I gave. In fact, I don't know what the operation performed by GWBASIC is in "X=Y=7". We can't be talking about the same thing here; I program a fair bit in BASIC and use this technique quite often; (Y=7) is equivalent to 0 or -1 (0000H or FFFFH, depending on Y, when it is in a place where a numeric variable is expected. So if I read your examples correctly, when Y is 7 then (Y=7) is -1; in essence TRUE has a value of -1 and FALSE has a value of 0. The conditional operator is implied, which is why it's useful when an explicit IF/THEN is awkward; If Y=7 then (Y=7) is -1 else (Y=7) is 0. Y never changes. Just try: Y=7:PRINT Y=7: Y=6: PRINT Y=7 or: Y=7:PRINT Y>6:Y=6:PRINT Y>6 or: A$="X":PRINT A$="X":A$="Y":PRINT A$="X" or, to make it even more obscure: INPUT "Guess a letter":A$:PRINT mid$("WRONGRIGHT",-(A$="Z")*5+1,5) Admittedly, it's counterintuitive; X=Y=7 sure looks like a multiple assignment. More information about the cctech mailing list
{"url":"http://www.classiccmp.org/pipermail/cctech/2007-April/038458.html","timestamp":"2014-04-19T22:43:44Z","content_type":null,"content_length":"4708","record_id":"<urn:uuid:f73b6185-7f69-4da7-b429-165fd932f2ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2012/526 Invertible Polynomial Representation for Private Set OperationsJung Hee Cheon and Hyunsook Hong and Hyung Tae LeeAbstract: In many private set operations, a set is represented by a polynomial over a ring $\Z_{\sigma}$ for a composite integer $\sigma$, where $\Z_\sigma$ is the message space of some additive homomorphic encryption. While it is useful for implementing set operations with polynomial additions and multiplications, a polynomial representation has a limitation due to the hardness of polynomial factorization over $\Z_\sigma$. That is, it is hard to recover a corresponding set from a resulting polynomial over $\Z_\sigma$ if $\sigma$ is not a prime. In this paper, we propose a new representation of a set by a polynomial over $\Z_\sigma$, in which $\sigma$ is a composite integer with {\em known factorization} but a corresponding set can be efficiently recovered from a polynomial except negligible probability in the security parameter. Note that $\Z_\sigma[x]$ is not a unique factorization domain, so a polynomial may be written as a product of linear factors in several ways. To exclude irrelevant linear factors, we introduce a special encoding function which supports early abort strategy. As a result, our representation can be efficiently inverted by computing all the linear factors of a polynomial in $\Z_{\sigma}[x]$ whose roots locate in the image of the encoding function. When we consider group decryption as in most private set operation protocols, inverting polynomial representations should be done without a single party possessing the secret of the utilized additive homomorphic encryption. This is very hard for Paillier's encryption whose message space is $\Z_N$ with unknown factorization of $N$. Instead, we detour this problem by using Naccache-Stern encryption with message space $\Z_\sigma$ where $\sigma$ is a smooth integer with public factorization. As an application of our representation, we obtain a constant round privacy-preserving set union protocol. Our construction improves the complexity than the previous without an honest majority. It can be also used for a constant round multi-set union protocol and a private set intersection protocol even when decryptors do not possess a superset of the resulting set. Category / Keywords: cryptographic protocols / Date: received 6 Sep 2012, last revised 8 Jul 2013Contact author: htsm1138 at snu ac krAvailable format(s): PDF | BibTeX Citation Version: 20130708:165011 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2012/526","timestamp":"2014-04-20T03:12:33Z","content_type":null,"content_length":"3849","record_id":"<urn:uuid:c0e10a3d-b0f0-4c83-90cb-35f2cb4242b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylorsville, GA Geometry Tutor Find a Taylorsville, GA Geometry Tutor ...I guarantee that I can make mathematics understandable to you or your child. Mathematics is not always easy to grasp. Effort is always required on the part of the student and the teacher for this guarantee to become a reality. 12 Subjects: including geometry, calculus, ASVAB, algebra 1 ...My strongest tutoring subjects include the topics that I have tutored for at the college-level including general mathematics (pre-algebra, algebra I/II, geometry, trigonometry, integral and differential calculus, differential equations) and descriptive and inferential statistics with the statisti... 26 Subjects: including geometry, reading, statistics, English ...Later I was awarded an acting scholarship to three different universities. Attended BYU where I worked as a statistics teaching assistant while studying economics. Became a mentor/tutor to 160 nontraditional students completing a statistics course online. 28 Subjects: including geometry, calculus, statistics, GRE I am a 46-year-old certified teacher with three years teaching experience in alternative school settings. I have taught both middle and high school math, and have helped over 100 students pass the math sections of their CRCT'S, EOCT'S AND GED. I am a proud veteran of the U.S. 11 Subjects: including geometry, algebra 1, GED, algebra 2 I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry, algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe... 20 Subjects: including geometry, reading, chemistry, algebra 1
{"url":"http://www.purplemath.com/Taylorsville_GA_Geometry_tutors.php","timestamp":"2014-04-18T11:44:05Z","content_type":null,"content_length":"24070","record_id":"<urn:uuid:3d5fdd41-fb3c-4ddd-91c0-33e8f00c9f97>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
2004 Conference Proceedings Go to previous article Go to next article Return to 2004 Table of Contents Emley Henter Sales/Marketing Director Henter Math P.O. Box 40430 St. Petersburg, FL 33743-0430 Phone: (727) 347-1313 Fax: (727) 302-9422 The traditional pencil is a problem for people that are blind, or people that can't grip it or move it, or those that are learning disabled. A pencil plays a key part in learning Math, and other equation-solving disciplines. Typically a student uses a pencil to "work through" a math problem, writing down the intermediate answers and using them to get the final answer. But if you can't operate a pencil then you can't write down the intermediate answers, which makes it very difficult to use them in acquiring the final answer, and does not leave anything on the paper to show that you actually worked through the problem and you know how to solve it. Of course, if you are blind, the pencil doesn't tell you what numbers to add together either. Virtual Pencil is computer software that is used to interactively solve math problems. It is designed for those who are pencil impaired: unable to operate a pencil effectively. This is not a tutorial, although tutorial mode is part of the package. Think of it as a virtual pencil, a tool that can be used to solve a math problem. It moves to the right spot on the "paper", guided by the user, and inputs the answers that the user selects. When used with a screen reader the numbers and actions are read outloud, or displayed in Braille. The math problem is displayed on the screen, one number above the other with digits lined up in vertical columns. The Tutor tells the student where he is in the problem, what steps need to be done to solve it, and will even do the navigating and provide the answer. In test mode the student does not have the help provided by the tutor, extended tutor or next step features. He or she must know how to navigate around the problem, where to read the digits in the intermediate steps, and where to put the answers. Just like using a pencil. Teachers can create an assignment , password protect it, and then send it to the student via email, save it to a diskette, save it to the hard drive, or print it or emboss it. When emailing it or saving it, the password will stay with the assignment file wherever it goes. This is designed to prevent students from switching from test mode to tutor mode, so the test results will be valid. When an assignment or test is created in Virtual Pencil, the same file can be printed-out for the able-bodied students in the class, saving the teacher a lot of time. There are many options to change the look and behavior of Virtual Pencil, like the font size and color, the amount of information displayed or spoken, sound effects, hot keys, and message strings. The current product handles addition, subtraction, multiplication, and division with decimals and fractions. Future versions will do higher levels of math, like algebra, trigonometry, differential equations, and calculus. We are anticipating algebra being done by the end of 2004. For more information on Virtual Pencil please visit our website at www.VirtualPencil.com or call us at (727) 347-1313. Go to previous article Go to next article Return to 2004 Table of Contents Return to Table of Proceedings Reprinted with author(s) permission. Author(s) retain copyright.
{"url":"http://www.csun.edu/cod/conf/2004/proceedings/218.htm","timestamp":"2014-04-19T22:24:58Z","content_type":null,"content_length":"4655","record_id":"<urn:uuid:c3831e69-0c93-4d50-b67e-68edd268980d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Highlights of the Bertinoro workshop on Sublinear Algorithms (see workshop's webpage) The talks that I wish to highlight most are: 1. Optimal O(1)-time approximations of bounded-degree CSPs by Yuichi Yoshida 2. Learning and testing k-modal distributions by Rocco Servedio (et al) These two talks as well as 20 additional ones (15 regular talks and 5 survey talks) are briefly reviewed below. For the abstracts of all talks click HERE My top choices Yuichi Yoshida: Optimal O(1)-time approximations of bounded-degree CSPs This is a great result. It states that, for every CSP, there exists a constant $c$ such that every bounded-degree instance (of this CSP) can be approximated to within a factor of $c$ in O(1)-time, but pproximation to within any constant factor $c'>c$ requires at least $sqrt(n)$ queries. Furthermore, the constant $c$ equals the integrality gap of the generic LP relaxation of this CSP. (Cf., the optimality of polynomial-time approximations (under UGC) at the integrality gap of the generic SDP relaxation.) Rocco Servedio: Learning and testing k-modal distributions I found the algorithmic design very inspiring. When one talks of using testing as a preliminary step to learning, one usually envisions using the tester to select an algorithm out of several algorithms, each tailured for a different concept class. In this work the idea is to decompose the target input (object), which belongs to a complex concept class (which seems hard to learn) into several objects that should each belong to a simpler concept class. While such a partition exists, it is not known to the learner. Thus, the learner generates several possible partitions and tests them in attempt to find a good partition (i.e., a partition in which each part belongs to the said simpler concept class). Once a good partition is found, each part is learned (via the simpler learner corresponding to the basic class), and the full object is reconsructed. Additional choices Krzysztof Onak: A Near-Optimal Approximation of the Minimum Vertex Cover Size It is known that approximating the value of the mVC requires time that is linear in the average degree, and the current work provides an algorithm that essentially achieves this running time. An interesting feature of the algorithm is that it manages to conduct a large number of recursive calls to a non-trivial procedure while encurring a relatively small overhead (i.e., the amortized cost is logarithmic in the number of invocations). Sofya Raskhodnikova: Testing and Reconstruction of Lipschitz Functions The starting point is an appealing application of testing and reconstruction of Lipschitz functions to privacy preserving databases, where the key observation is that the archetypical technique for obtaining (differential) privacy is adding noise at a level that masks the effect of any individual entry on the query. This calls for testing whether the query indeed has such a limited effect, and for filtering the query such to obtain such a limited effect. This motivates the current study, which provides partial results (regarding cetian domains but not the most natural one). C. Seshadhri: Estimating the Longest Increasing Subsequence in Polylogarithmic Time The aim is to find an estimate that is accurate to within an additive error of $\e n$, where $n$ is the input length, and do it in polylogarithmic time (and arbitrary dependence on $\e>0$). This is done via "sublinear-time dynamic programming"; that is, running DP on small instances that emerge based on adaptive sampling. Tali Kaufman: Locally Testable Codes and Expanders I was intrigued by Tali's observation regarding the somewhat contradictory relation between LTC and expanders. On the one hand, it is known [BHR] that strong expanders (i.e., odd-neighbor expanders) yield codes that are hard to test (i.e., are not LTCs). On the other hand, strong LTC yield graphs that are weak expanders (i.e., small set expanders). (Caveat: the graphs in the two cases are only related; in the 1st case it corresponds to the parity check matrix of an LDPC, whereas in the 2nd case it describes positions that are probed on the same random-tape of the tester. Still...) Alexandr Andoni: Sublinear Algorithms via Precision Sampling This work makes explicit a method that was used in prior works, and may be called adaptive-presision sampling. The idea is that obtaining an estimate of a quantity defined over a domian that is partitioned into many sub-domains does not require obtaining the same level of precision in estimating this quantity over each sub-domain. Obtaining a precision that is bigger by a factor of $2^k$ on a $2^{-k}$ fraction of the subdomains, where these sub-domains are selected at random and the process is repeated for (logarithmically) many values of $k$ suffices. The authors cite some streaming algorithms as origins of this methods, but it has appeared also in property testing (see e.g.), and in cryptography (e.g., the analysis of hardcore predicates; cf. Clm 2.5.4.1 in Foundations of Crypto., Vol. 1). Asaf Shapira: Testing Odd-Cycle-Freeness in Boolean Functions Here a cycle is a sequence of points in the function domain that sum to zero and all evaluate to one. (This is a special case of a sequence of points that reside on a prescribed sub-space and evaluate to a certain pattern.) The current problem is reduced to testing bipartiteness. Pierre Fraigniaud: Local Distributed Decision Algorithms The question here is what decisions can be made by (synchronous) distributed algorithms (in the network model) that use a constant number of rounds. Specifically, the input string $x$ is distributed among the vertices of the graph $G$ (i.e., it may be viewed as $x:V(G)\to\{0,1\}^*$), and the requirement is that if $x$ is a YES-instance then all vertices accept, whereas if $x$ is a NO-instance then at least one vertex rejects. The work considers deterministic and randomized algorithms as well as deterministic and randomized evaluations of auxiliary certificates (i.e., vertex $v$ obatins both $x(v)$ and $w(v)$, where $w:V(G)\to\{0,1\}^*$ is a certificate for $x$). In the pure model (w.o., certificates) randomization helps iff the acceptance and rejection probabilities (re YES and NO instances, resp) satisfy a simple inequality. In the "certificated" model randomization is extremely powerful: it allows to decide membership in any set! Ning Xie: Local Computation Algorithms This work presents a general framework that generalizes problems such as local decodability and local reconstruction. For a function $F$ from strings to sets of strings, we seek a randomized algorithm that on explicit input $i$ and oracle access to $x$ answers with the $i$th bit of a fixed string in $F(x)$; that is, the string $y\in F(x)$ is determined based on the internal coin tosses of the algorithm, but is oblivious of $i$ (and so the answers obtained for all possible values of $i$ are consistent with a single string in $F(x)$). The problem of local deconstruction of a function $x:[N]\to\{0,1\}^*$ w.r.t the property $\Pi$ and proximity parameter $\e$ can be cast by letting $F(x)$ equal the set of all $y\in\Pi$ that are $\e$-close to $x$. Christian Sohler: Every Property of Hyperfinite Graphs is Testable This work refers to the bounded-degree graph model, and to testing within a number of queries that is independent of the size of the graph. A graph is called $(k,\e)$-hyperfinite if it is $\e$-close to a graph in which all connected components are of size at most $k$. The notion of a $k$-disc (i.e., the subgraph viewed by a BFS that is tructaed at depth $k$) serves as a pivot of this work. Once one implements a "partition oracle" (vizvis the aforementioned connected components), testing reduces to evaluating the frequency of the various possible $k$-discs. Oded Goldreich: Finding Cycles and Trees (Indeed, here I violate my promise not to choose on my works, but I get I should be excused given the context of this report.) This work advocates the study of sub-linear time algorithms for finding (small) substructures in graphs that are far from lacking such substractures; e.g., finding a cycle in a graph that is far from being cycle-free. This problem is related to one-sided error structure, especially when the tested property is cast as consisting of graphs that lack some (natural) substructures. The work begs for further study, e.g., of arbitrary minor-freeness and of extending the study to the general graph model (wheraes almost all the current results refer to the bounded-degree graph model). Oded Lachish: Testing acceptability by RO Boolean Formulea This "massively parameterized" property is defined in term of a fixed formula, and consists of testing whether a given assignment satisfies this formula. The results buikd on the "independence" of the parts of any read-once formula. Gilad Tsur: On Approximating the Number of Relevant Variables in a Function This work consideres a "double relaxation" of the task of determining the number of relevant variables of a function; specifically, one should distinguish between functikns that depend on $k$ variables and functions that are $\e$-far from depending on $k'$ variables, where $k'=(1+\gamma)\cdot k$ for a constant $\gamma>0$. The bottomline is that this double-relataxtion is not significantly easier than the standard propery testing problem (where $k'=k$). Andrew McGregor: Verifying the correct-operation of prioroty queues Given a stream of operations to/on a data structure (e.g., inserts and extracts to a priority queue), the task is to check whether these operations were performed correctly (e.g., whether each extract resulted in the minimum elements currently in the queue). The first observation is that fingerprinting can be used to check whether the multisets of inserted and deleted elements are equal. The algorithm works by treating differently short range versus long range violation, where the threshold is sqrt (i.e., square root of the length of the stream). The short range is dealt with by brute force, whereas the long range is dealt by a clever record keeping. Interestingly, the sqrt complexity that results is optimal. Robi Krauthgamer: Polylogarithmic Approximation for Edit Distance While the edit distance can be computed in quadratic time (by Dynamic Programming), the current work obtains a polylogarithmic approximation in nearly-linear time. The pivot is reducing the edit distance to a tree distance, which allows for faster evaluation via sampling. Specifically, this is obtained by referring to an asymmetric model in which one string is fixed and oracle access is provided to the other string, while keeping track of the number of queries made. Thus, the low query complexity achieved in the latter model, and the linear-time implementation of queries (in the standard model) yields the desired result (in the standard model). Artur Czumaj: Planar Graphs - Random Walks and Bipartiteness Testing This work presents techniques for dealing with the general graph model (i.e., extending results from the bounded-degree model to the general graph model). Ronitt Rubinfeld: Testing Properties of Collections of Distributions Two models are considered. In the sampling model, one gets pairs $(i,x)$, where $i$ is selected according to some distribution (i.e., uniform, fixed but known, or even unknown) and $x$ is selected according to the $i$th distribution. In the query model, in respond to the query $i$, one gets a sample $x$ selected according to the $i$th distribution. The question of testing the equality of distributions in the sampling model (for an unknown distribution) coincides with testing the independence of two parts of a distribution, but many other questions can be asked (and some were treated in this work). I wish to commend the organizers for including in the program a large number of surveys. Most of them are summarized below. Dana Ron: Approximating graph parameters Four types of parameters (of arbitrary graphs) were discussed and the ideas underlying their approximations were surveyed. 1. Average degrees, and the number of substructures (e.g., stars). Here the idea is to estimate the number from all sides of the object (where in the case of average degree the object is an edge). 2. The weight of a MST (in weighted graphs). Here the techiques build on connectivity testing in bounded-degree graphs. 3. The size of min-VC and maximial matching. Here the initial idea is a relation to ultra-fast distributed computing, but subsequent work relies on implementing certain partition oracles. 4. The distance to various graph properties. In the dense graph model a general result is known, but in the bounded-degree model the cuurently known results are sporadic. David Woodruff: Evaluating various norms in streaming Given a stream of updates to a large number of values, the task is to compute various norms of the resulting vector (of final aggregated values). This is relatively easy in the case of the Eucildian Norm (by relying on the fact that this norm squared equals the expected value of the square of the inner-product of the vector with a random $\pm1$-vector). The estimation of other norms relies on the relation between norms and on clustering of contributions according to their magnitude, which in turn can be carried out by a generic algorithm. Madhu Sudan: Affine Invariances in Property Testing This survey presents a unified perspective at testing algebraic properties. The pivot is the notion of affine invariances and the postulate that algebraic properties of functions are preserved under affine transformations of the function's domain (similarly to the way that graph properties are preserved under a relabelling of the vertices). Nir Ailon: Johnson-Lindenstrauss Transform(s) The survey started with a distinction between the distributed version (which states that a random linear transformation from $\R^n$ to $\R^k$ preserves a single distance w.p. $1-\exp(-k)$) and the metric embedding version (which preserves the distances among $N$ given points). It was shown that fast JL transformation are related to partial derandomization of the distributed version; e.g., rather than using a uniformly distributed linear transformation, one may use a transformation obtained by composing a few transformations including a sparse random transformation (from $\R^n$ to $\R^ k$) and a random diagonal transformation (of $\R^n$). Roger Wattenhofer: Distributed Algorithms The question addressed is obtaining MIS and approximate mVC by (synchronous) distributed algorithms (in the network model) that use a small number of rounds. As indicated in Dana's survey, this problem is closely related to obtaining fast sub-linear algorithms for these problems. The current survey started with a simple(r) analysis of Luby's randomized MIS algorithm, and a review of Cole-Vishkin "deterministic coin tossing". Lower and upper bounds on the performance of deterministic and randomized algorithms on arbitrary graphs were also presented. Back to list of Oded's choices.
{"url":"http://www.wisdom.weizmann.ac.il/~oded/MC/072.html","timestamp":"2014-04-16T19:01:40Z","content_type":null,"content_length":"16403","record_id":"<urn:uuid:ca97bad9-c357-4efe-a341-2f115d50adf8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Fair and accurate elections, statistically speaking Electoral College map of the 2000 election, one of the most disputed in U.S. history. A uniquely American institution, the Electoral College consists of popularly elected representatives apportioned to each state according to the size of states' congressional delegation. It's the electors who formally elect the President of the United States. According to Berkeley statistician Elchanan Mossel, this system of electing the president is significantly more likely to result in an erroneous election outcome compared to the simple majority voting system.. The political controversy surrounding the Electoral College -- the institution whereby we elect the president of the United States -- is as old as the republic. In spite of recent contentious elections that raised the controversy to new heights, the debate is unlikely to reach a resolution given the compelling political considerations on both sides. But rarely if ever does the public debate on this subject take into account objective, mathematical considerations. UC Berkeley s Elchanan Mossel, an associate professor in the departments of Statistics and Computer Science and an expert in probability theory, believes there is an important contribution statisticians can make to the debate. He is not alone. Statisticians have subjected voting-related issues to complex mathematical calculations at least since the 18th century, when Marquis de Condorcet, a French philosopher and mathematician began using probability theory in the context of voting. Mossel s analyses pit the Electoral College system against the simple majority-voting system in an attempt to test the strength of our electoral system in one key aspect: how prone to error is it and, in turn, what are the odds that the outcome of an election will actually be flipped by such random error? There are many ways of voting, Mossel says. You can vote by majority vote, Electoral College, weighed voting, even dictatorship. The statistical question is, Which voting method is most robust to Mossel s assumption is that any voting model is intrinsically subject to a finite error, meaning that the vote cast by a small number of voters in each election will end up being recorded differently from what those voters intended. This may be due to human error, hanging chads, or voting machines that flip some vote randomly. In a landslide election such unfortunate occurrences make no statistical difference. But in a close election the likes of which we ve often had in recent election cycles such errors may wreck havoc with the election, with and sometimes even without our Statistically, the most robust system in the world is a dictatorship, Mossel says, not without a measure of amusement. Under such a system, the results never depend on how people vote. But since most of us would prefer an alternative to dictatorship in spite of the system s robustness, the question then becomes which voting system in a democracy is most likely to produce accurate results. To that end Mossel compares all of the possible voting systems, including the two voting methods we are most familiar with simple majority-vote and the Electoral College system, both of which offer voters two alternatives to pick from. Before running his analysis, Mossel first sets out to tests the model to ensure it satisfies some basic statistical requirements for fair elections. One such mathematical criterion corresponds to the notion of fairness among all the alternatives meaning that the model must ensure that all alternatives (i.e., candidates) receive the same treatment. Let s say some people under one model voted for Candidate A and some people voted for Candidate B and the winner was Candidate A under a given system. Now we replace the people who voted for B with those who voted for A and vice versa and we want the result to flip, too. It s a natural notion of fairness that is also common in economics. The results should not depend on the names of the Another way to factor in democracy is transitivity, which assumes that every two people play the same role mathematically and no one person has a greater chance of changing the outcome than anyone else. One example of transitivity, Mossel says, is to imagine people seated in a circle. Then he rotates everyone (or every person s opinion) one seat to the left. We want the voting function to be transitive, meaning that the result is the same if we rotate people. Once criteria for democracy are factored in, the problem of finding the most robust voting system becomes a problem of mathematical analysis. The reasoning is not simple. Mathematicians do not rely on standard Euclidian geometry to solve social problems of such complexity, which makes voting analysis difficult to explain on national television. Instead they apply what s known as Gaussian geometry, or the geometry of spheres in very high-dimensions. This methodology is employed when studying aggregate behavior of large numbers of people. In the context of robustness of voting, a key role is played by geometric Isoperimetric theorems, which study the relationships between volumes and surface areas. ( Isoperirimetric means having the same perimeter.) To make his point, Mossel reduces the highly-complex problem to a very simple and amusing hypothetical question. We have the cold war all over again, he smiles. The U.S. and Russia decide to partition the world exactly in half, 50-50 each. The two states must have the exact same area, including the oceans. And they try to minimize the border between the two states so they need the fewest number of border guards. The optimal solution to this problem is obvious: split the world along the line of the equator. The mathematics we developed for the robustness problem in some sense corresponds to the partitioning of very high-dimensional spheres. After running his analysis, Mossel says, the answer is unequivocal. It also serves a mathematical mortal blow to the American system of electing a president. Applying isoperimetric theory tells us majority voting method is optimal. It is the most robust function. The difference between this common voting method and the Electoral College system is in fact stunning. The first person to determine a way to calculate the error for these voting methods was statistician W. F. Sheppard back in 1899. He determined that majority voting takes a noise rate of x to an error that s approximately the square root of x. So under majority vote, if the voting machine flips votes with a probability of 1 in 10,000, the chance that the result of the election will be flipped is roughly the square root of that probability, or 1 in 100. With Electoral College voting, in essence you re doing majority twice, Mossel says. First you do majority in each state and then you do the majority of the majority, so you take the square root of the square root. So you take square root of 1/10,000 once and get 1/100, and then you take square root again and get 1/10. The Electoral College appears to fail miserably based on the robustness to error criteria. We don t have the best system, Mossel says. Yet even in the face of his own analysis he remains highly philosophical about how meaningful this apparently whopping difference between the two systems really is. Philosophically it may not be morally relevant, he says. If the election is so close anyway and people don t have a strong preference, maybe it doesn t really matter? But to the extent that the democratic ideal is for the outcome to reflect the intent of the voter as much as humanly possible, then the difference in Mossel s robustness-to-error test could give political pundits food for thought. Voting theory is only one example of Mossel s vast work applying probability theory to a wide range of both scientific and social problems. These range from theoretical computer science and evolutionary biology to game theory and social choice the latter of which includes topics such as voting or economic problems. ScienceMatters@Berkeley is published online by the College of Letters and Science at the University of California, Berkeley. The mission of ScienceMatters@Berkeley is to showcase the exciting scientific research underway in the College of Letters and Science. More information: Mossel s statistical analyses can be found in the following papers: "Maximally Stable Gaussian Partitions with Discrete Applications," written in collaboration with Marcus Isaksson, and "Noise stability of functions with low influences: invariance and optimality," written with Ryan O Donnell and Krzysztof Oleszkiewicz. 3.5 / 5 (8) Feb 18, 2011 I have not voted in presidential election since I understood what the electoral college does and never will as long as it's there. The electoral college discounts my vote and your vote too. I live in a state that always votes democratic. The only vote that counts towards the election is the one vote that gives the dems a majority over the republicans in the state, all other votes are thrown away for the national election and that one vote is turned into electoral votes. Without a country wide majority vote and the electoral college instead there are only 50 votes that count, the ones that give a majority in each state. Yes, my vote counts in my state, but it doesn't count in the overall election and that is what is wrong with college. 1.7 / 5 (6) Feb 18, 2011 Mathematics has little to do with it. The Electors are not bound, in any legal, moral or ethical sense, to vote as their State has voted. The Electors retain the Right to Choose the President, as they see fit. Good protection against a dangerous Populist gaining the White House. These United States' are not a Democracy. They are fifty sovereign States, joined in a Republic, choosing who the next President of their Federation is going to be. 4.5 / 5 (2) Feb 18, 2011 I have NEVER understood why we need the electoral college. Can anyone offer a definitive reason as to why we still use this prehistoric election system? 2.6 / 5 (7) Feb 18, 2011 I have NEVER understood why we need the electoral college. Can anyone offer a definitive reason as to why we still use this prehistoric election system? It's called states rights and the desire for a weak president. 3 / 5 (2) Feb 18, 2011 Mathematics has little to do with it. The Electors are not bound, in any legal, moral or ethical sense, to vote as their State has voted. The Electors retain the Right to Choose the President, as they see fit. Good protection against a dangerous Populist gaining the White House. These United States' are not a Democracy. They are fifty sovereign States, joined in a Republic, choosing who the next President of their Federation is going to be. You seem to be missing a huge part of history. Out of 535 electoral votes, there are less than 10 who actually have a choice in the matter. Almost all states have laws that require their electoral college to vote according to the popular vote of that state. 5 / 5 (7) Feb 18, 2011 The National Popular Vote bill would guarantee the Presidency to the candidate who receives the most popular votes in all 50 states (and DC). Every vote, everywhere, would be politically relevant and equal in presidential elections. Elections wouldn't be about winning states. No more distorting and divisive red and blue state maps. Every vote, everywhere would be counted for and directly assist the candidate for whom it was cast. Candidates would need to care about voters across the nation, not just undecided voters in a handful of swing states. 5 / 5 (1) Feb 18, 2011 The National Popular Vote bill would take effect only when enacted, in identical form, by states possessing a majority of the electoral votes--enough electoral votes to elect a President (270 of 538). When the bill comes into effect, all the electoral votes from those states would be awarded to the presidential candidate who receives the most popular votes in all 50 states (and DC). The bill has passed 31 state legislative chambers, in 21 small, medium-small, medium, and large states, including one house in AR, CT, DE, DC, ME, MI, NV, NM, NY, NC, and OR, and both houses in CA, CO, HI, IL, NJ, MD, MA ,RI, VT, and WA . The bill has been enacted by DC, HI, IL, NJ, MD, MA, and WA. These 7 states possess 74 electoral votes — 27% of the 270 necessary to bring the law into 5 / 5 (1) Feb 18, 2011 The current system does not provide some kind of check on the "mobs." There have been 22,000 electoral votes cast since presidential elections became competitive (in 1796), and only 10 have been cast for someone other than the candidate nominated by the elector's own political party. The electors are dedicated party activists of the winning party who meet briefly in mid-December to cast their totally predictable votes in accordance with their pre-announced pledges. not rated yet Feb 18, 2011 A "republican" form of government means that the voters do not make laws themselves but, instead, delegate the job to periodically elected officials (Congressmen, Senators, and the President). The United States has a "republican" form of government regardless of whether popular votes for presidential electors are tallied at the state-level (as has been the case in 48 states) or at district-level (as has been the case in Maine and Nebraska) or at 50-state-level (as under the National Popular Vote bill). The Founding Fathers only said in the U.S. Constitution about presidential elections (only after debating among 30 ballots for choosing a method): "Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors . . ." The U.S. Supreme Court has repeatedly characterized the authority of the state legislatures over the manner of awarding their electoral votes as "plenary" and "exclusive." 2.1 / 5 (12) Feb 18, 2011 The fear the author's of the Constition had is coming to pass. They did NOT want a king and that is exactly what the president is becoming. 4.5 / 5 (2) Feb 18, 2011 Mattytheory, Just look at the map at the top of the article and you'll have your reason. The founding fathers did not want just a few populous areas deciding the presidency. I forget the exact states or numbers, but it's something like if every person in New York, Calif. and ohio voted one way they could negate the vote in every other state of the country. 4.9 / 5 (10) Feb 18, 2011 We're straining out gnats and swallowing camels. The real problem is that ballots are designed to make it impossible for a third party to gain any power at all. The real stupidity in the design isn't the electoral system vs. popular vote. Its in the ballot not allowing a voter to specify a second choice without having to effectively nullify his first choice. There are a myriad of ways to implement this simply. It would totally eliminate the third-candidate split effect (a third candidate splits the vote of one first-party candidate, placing the other first-party candidate in power), which has plagued so many elections. It also allows people to express their desires without punishing them by throwing away their vote when they vote third party. 5 / 5 (6) Feb 18, 2011 The 11 most populous states contain 56% of the population of the United States and a candidate would win the Presidency if 100% of the voters in these 11 states voted for one candidate. However, if anyone is concerned about the this theoretical possibility, it should be pointed out that, under the current system, a candidate could win the Presidency by winning a mere 51% of the vote in these same 11 states -- that is, a mere 26% of the nation's votes. The political reality is that the 11 largest states rarely agree on any political question. In terms of recent presidential elections, the 11 largest states include five "red states (Texas, Florida, Ohio, North Carolina, and Georgia) and six "blue" states (California, New York, Illinois, Pennsylvania, Michigan, and New Jersey). The fact is that the big states are just about as closely divided as the rest of the country. 1.8 / 5 (6) Feb 19, 2011 This study is pure junk science. The study is based on the assumption that "correct" can only be that the election result corresponds to the majority of the votes cast. As such, the entire study is a tautology, discovering what it falsely assumed to begin with. The goal of the American system is NOT to elect the President simply by majority but rather to temper that States as well. The choice of simple majority vs electoral college system was subject to intense debate by the founders not for "statistical accuracy" concerns but explicitly for political considerations of preserving the rights of individual States AGAINST the tyranny of the few highly populous States. The United States is made up of a number of States and as the Electoral map shows, most of those do NOT equate to the concentrated high saturation voting of a few large States. The Electoral system is best for what it was designed and insures the unique style of American democracy protecting minority rights. 3 / 5 (5) Feb 19, 2011 This study is pure junk science. The study is ... blah blah blah... The Electoral system is best for what it was designed and insures the unique style of American democracy protecting minority rights. One of the biggest problems with modern america is the complete lack of comprehension skills of some of our citizens. The article goes out of its way several times to state that it is not commenting on the validity of our system or the other reasons for it. The purpose of this study was SOLELY to determine the likelyhood of a miscounted vote to swing the election in a way opposite of what it should be based on the rules of the system. In our set up, the likelyhood of that to happen is 10 times higher than standard vote counts. It is NOT providing commentary on our election method if every vote was counted exactly correct. 3.7 / 5 (3) Feb 19, 2011 I think you miss the point. So what if all of the voters in NY, CA, and OH all vote one way and manage to outweigh all of the other voters? That would mean that more than 50% of the voting population resides in those 3 states. Are you saying that their votes shouldn't all matter just because they all happened to vote a certain way AND they make up more than 50% of the population? Because that is what the electoral college - by your assertion - is supposed to do. Shouldn't more than 50% decide a popular vote no matter what the distribution? And anyway, INDIVIDUAL people don't ever all vote one way just because they live in the same state so your scenario is preposterous. Different things are important to different people. Not everyone has access to the same information and not everyone uses the same logic to make a decision. The electoral college discounts votes from small pockets of minorities. Moving away from the electoral college would guarantee that ALL votes mattere 3 / 5 (2) Feb 19, 2011 and that is why I don't understand when people argue against this idea. 1.7 / 5 (7) Feb 19, 2011 and that is why I don't understand when people argue against this idea. That must be because you had a US public school education and never studied US history. 4 / 5 (1) Feb 19, 2011 A lot of good discussion going on here. But the Electoral College acts to make smaller states count for more. If the President never had to care about Iowa or Vermont, you would soon have some very unhappy people. Possibly even talk of For a more in depth look at this, Discover Magazine put out a great article "Math Against Tyranny". Since reading this article I feel much better about the Electoral College and its role in keeping this Federal system we have balanced..... not rated yet Feb 19, 2011 How about this: require a simple majority in the national popular vote and either a majority of the popular vote in a majority of the states and/or a popular majority in a majority of congressional districts. That seems like it'd sufficiently and fairly balance the will of the majority against the fact that 56% of the electorate lives in 11 states (and roughly a tenth in California alone), while also preventing small states from having disproportionate influence. Combined with an instant run-off/Single Transferable Vote system, a single date for the primaries, an end to the stupid caucus system in some states, and voter ID verification (no one wants to talk about it, but voter fraud is rampant, and all the pollsters can do is have you swear under penalty of perjury that you're who you say you are and haven't voted already), it'd be a much better system than the current one. not rated yet Feb 19, 2011 and that is why I don't understand when people argue against this idea. An unregulated public vote can lead to a tyranny of the majority. The erection of electoral colleges and apportioned representation prevents abuses due to popularity of the abuse method. not rated yet Feb 20, 2011 These statistical measures are but one feature of a voting system. Consider what would have happened in the 2000 election if we used a 50-state-wide, majority vote selection system. There would have been recounts not just in Florida, but in more than a dozen states, each of which could decide the entire election. Certainly some other state could have come up with 318 votes or so in the other direction. The disgust people felt with the Florida process would have been amplified by the wider uncertainty across the country, with correspondingly more challenges, more ugly party machine details emerging, and vastly more court time. Side question: does this assume a normal distribution for the vote spread across elections? Do previous elections correspond to this distribution? Does it handle varying numbers of candidates in the not rated yet Feb 20, 2011 The idea that recounts will be more likely and messy with a national popular vote is distracting. Recounts are far more likely in the current system of state-by-state winner-take-all methods. The possibility of recounts should not even be a consideration in debating the merits of a national popular vote. No one has ever suggested that the possibility of a recount constitutes a valid reason why state governors or U.S. Senators, for example, should not be elected by a popular vote. The question of recounts comes to mind in connection with presidential elections only because the current system so frequently creates artificial crises and unnecessary disputes. A nationwide recount would not happen. We do and would vote state by state. Each state manages its own election and recount. The state-by-state winner-take-all system is not a firewall, but instead causes unnecessary fires. The larger the number of voters in an election, the smaller the chance of close election results. not rated yet Feb 20, 2011 Recounts in presidential elections would be far less likely to occur under a national popular vote system than under the current state-by-state winner-take-all system (i.e., awarding all of a state's electoral votes to the candidate who receives the most popular votes in each separate state). Based on a recent study of 7,645 statewide elections in the 26-year period from 1980 through 2006 by FairVote: *The average change in the margin of victory as a result of a statewide recount was a mere 274 votes. *The original outcome remained unchanged in over 90% of the recounts. *The probability of a recount is 1 in 332 elections (23 recounts in 7,645 elections), or once in 1,328 years. 1 / 5 (1) Feb 21, 2011 The idea that smaller states need the electoral college is BS. Why should any citizen have different rights than another? A pure majority election makes us all equal. Here is an extreme example of what's wrong with the electoral college. What if enough of the most populous states had 100% of the votes for one candidate to get slightly less than half the electoral votes while the rest of the states all gave the win to the other candidate by one vote. This would give the LOSER around 75% of the popular vote. 2.3 / 5 (3) Feb 21, 2011 Moby, the USA is NOT a democracy and it was DESIGNED that way. The USA was intended as a federal system of united states. The House of Representatives were to be the peoples' representative to the federal govt. Senators were originally elected by state legislators. The president was supposed to have a more limited role and to support the balance of power idea, the president would be elected by state electors not popular vote. This would give the LOSER around 75% of the popular vote. Smaller states don't want to be controlled by cities and states with the highest population. not rated yet Feb 21, 2011 Keep in mind that the Electoral College was created to solve the problem of travel way back when, before we had the convenience of cars and air travel. The argument that the electoral college makes small states count for more in the elections is true, but it also highlights the fact that the electoral college grossly misrepresents the popular vote. Getting rid of it would mean that the state lines would be disintegrated when it comes to the elections, so every state would be represented by exactly the voting population. No more, no less.
{"url":"http://phys.org/news/2011-02-fair-accurate-elections-statistically.html","timestamp":"2014-04-18T06:01:28Z","content_type":null,"content_length":"112856","record_id":"<urn:uuid:64185884-f659-458b-b1a1-438e7c7624c7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Fair and accurate elections, statistically speaking Electoral College map of the 2000 election, one of the most disputed in U.S. history. A uniquely American institution, the Electoral College consists of popularly elected representatives apportioned to each state according to the size of states' congressional delegation. It's the electors who formally elect the President of the United States. According to Berkeley statistician Elchanan Mossel, this system of electing the president is significantly more likely to result in an erroneous election outcome compared to the simple majority voting system.. The political controversy surrounding the Electoral College -- the institution whereby we elect the president of the United States -- is as old as the republic. In spite of recent contentious elections that raised the controversy to new heights, the debate is unlikely to reach a resolution given the compelling political considerations on both sides. But rarely if ever does the public debate on this subject take into account objective, mathematical considerations. UC Berkeley s Elchanan Mossel, an associate professor in the departments of Statistics and Computer Science and an expert in probability theory, believes there is an important contribution statisticians can make to the debate. He is not alone. Statisticians have subjected voting-related issues to complex mathematical calculations at least since the 18th century, when Marquis de Condorcet, a French philosopher and mathematician began using probability theory in the context of voting. Mossel s analyses pit the Electoral College system against the simple majority-voting system in an attempt to test the strength of our electoral system in one key aspect: how prone to error is it and, in turn, what are the odds that the outcome of an election will actually be flipped by such random error? There are many ways of voting, Mossel says. You can vote by majority vote, Electoral College, weighed voting, even dictatorship. The statistical question is, Which voting method is most robust to Mossel s assumption is that any voting model is intrinsically subject to a finite error, meaning that the vote cast by a small number of voters in each election will end up being recorded differently from what those voters intended. This may be due to human error, hanging chads, or voting machines that flip some vote randomly. In a landslide election such unfortunate occurrences make no statistical difference. But in a close election the likes of which we ve often had in recent election cycles such errors may wreck havoc with the election, with and sometimes even without our Statistically, the most robust system in the world is a dictatorship, Mossel says, not without a measure of amusement. Under such a system, the results never depend on how people vote. But since most of us would prefer an alternative to dictatorship in spite of the system s robustness, the question then becomes which voting system in a democracy is most likely to produce accurate results. To that end Mossel compares all of the possible voting systems, including the two voting methods we are most familiar with simple majority-vote and the Electoral College system, both of which offer voters two alternatives to pick from. Before running his analysis, Mossel first sets out to tests the model to ensure it satisfies some basic statistical requirements for fair elections. One such mathematical criterion corresponds to the notion of fairness among all the alternatives meaning that the model must ensure that all alternatives (i.e., candidates) receive the same treatment. Let s say some people under one model voted for Candidate A and some people voted for Candidate B and the winner was Candidate A under a given system. Now we replace the people who voted for B with those who voted for A and vice versa and we want the result to flip, too. It s a natural notion of fairness that is also common in economics. The results should not depend on the names of the Another way to factor in democracy is transitivity, which assumes that every two people play the same role mathematically and no one person has a greater chance of changing the outcome than anyone else. One example of transitivity, Mossel says, is to imagine people seated in a circle. Then he rotates everyone (or every person s opinion) one seat to the left. We want the voting function to be transitive, meaning that the result is the same if we rotate people. Once criteria for democracy are factored in, the problem of finding the most robust voting system becomes a problem of mathematical analysis. The reasoning is not simple. Mathematicians do not rely on standard Euclidian geometry to solve social problems of such complexity, which makes voting analysis difficult to explain on national television. Instead they apply what s known as Gaussian geometry, or the geometry of spheres in very high-dimensions. This methodology is employed when studying aggregate behavior of large numbers of people. In the context of robustness of voting, a key role is played by geometric Isoperimetric theorems, which study the relationships between volumes and surface areas. ( Isoperirimetric means having the same perimeter.) To make his point, Mossel reduces the highly-complex problem to a very simple and amusing hypothetical question. We have the cold war all over again, he smiles. The U.S. and Russia decide to partition the world exactly in half, 50-50 each. The two states must have the exact same area, including the oceans. And they try to minimize the border between the two states so they need the fewest number of border guards. The optimal solution to this problem is obvious: split the world along the line of the equator. The mathematics we developed for the robustness problem in some sense corresponds to the partitioning of very high-dimensional spheres. After running his analysis, Mossel says, the answer is unequivocal. It also serves a mathematical mortal blow to the American system of electing a president. Applying isoperimetric theory tells us majority voting method is optimal. It is the most robust function. The difference between this common voting method and the Electoral College system is in fact stunning. The first person to determine a way to calculate the error for these voting methods was statistician W. F. Sheppard back in 1899. He determined that majority voting takes a noise rate of x to an error that s approximately the square root of x. So under majority vote, if the voting machine flips votes with a probability of 1 in 10,000, the chance that the result of the election will be flipped is roughly the square root of that probability, or 1 in 100. With Electoral College voting, in essence you re doing majority twice, Mossel says. First you do majority in each state and then you do the majority of the majority, so you take the square root of the square root. So you take square root of 1/10,000 once and get 1/100, and then you take square root again and get 1/10. The Electoral College appears to fail miserably based on the robustness to error criteria. We don t have the best system, Mossel says. Yet even in the face of his own analysis he remains highly philosophical about how meaningful this apparently whopping difference between the two systems really is. Philosophically it may not be morally relevant, he says. If the election is so close anyway and people don t have a strong preference, maybe it doesn t really matter? But to the extent that the democratic ideal is for the outcome to reflect the intent of the voter as much as humanly possible, then the difference in Mossel s robustness-to-error test could give political pundits food for thought. Voting theory is only one example of Mossel s vast work applying probability theory to a wide range of both scientific and social problems. These range from theoretical computer science and evolutionary biology to game theory and social choice the latter of which includes topics such as voting or economic problems. ScienceMatters@Berkeley is published online by the College of Letters and Science at the University of California, Berkeley. The mission of ScienceMatters@Berkeley is to showcase the exciting scientific research underway in the College of Letters and Science. More information: Mossel s statistical analyses can be found in the following papers: "Maximally Stable Gaussian Partitions with Discrete Applications," written in collaboration with Marcus Isaksson, and "Noise stability of functions with low influences: invariance and optimality," written with Ryan O Donnell and Krzysztof Oleszkiewicz. 3.5 / 5 (8) Feb 18, 2011 I have not voted in presidential election since I understood what the electoral college does and never will as long as it's there. The electoral college discounts my vote and your vote too. I live in a state that always votes democratic. The only vote that counts towards the election is the one vote that gives the dems a majority over the republicans in the state, all other votes are thrown away for the national election and that one vote is turned into electoral votes. Without a country wide majority vote and the electoral college instead there are only 50 votes that count, the ones that give a majority in each state. Yes, my vote counts in my state, but it doesn't count in the overall election and that is what is wrong with college. 1.7 / 5 (6) Feb 18, 2011 Mathematics has little to do with it. The Electors are not bound, in any legal, moral or ethical sense, to vote as their State has voted. The Electors retain the Right to Choose the President, as they see fit. Good protection against a dangerous Populist gaining the White House. These United States' are not a Democracy. They are fifty sovereign States, joined in a Republic, choosing who the next President of their Federation is going to be. 4.5 / 5 (2) Feb 18, 2011 I have NEVER understood why we need the electoral college. Can anyone offer a definitive reason as to why we still use this prehistoric election system? 2.6 / 5 (7) Feb 18, 2011 I have NEVER understood why we need the electoral college. Can anyone offer a definitive reason as to why we still use this prehistoric election system? It's called states rights and the desire for a weak president. 3 / 5 (2) Feb 18, 2011 Mathematics has little to do with it. The Electors are not bound, in any legal, moral or ethical sense, to vote as their State has voted. The Electors retain the Right to Choose the President, as they see fit. Good protection against a dangerous Populist gaining the White House. These United States' are not a Democracy. They are fifty sovereign States, joined in a Republic, choosing who the next President of their Federation is going to be. You seem to be missing a huge part of history. Out of 535 electoral votes, there are less than 10 who actually have a choice in the matter. Almost all states have laws that require their electoral college to vote according to the popular vote of that state. 5 / 5 (7) Feb 18, 2011 The National Popular Vote bill would guarantee the Presidency to the candidate who receives the most popular votes in all 50 states (and DC). Every vote, everywhere, would be politically relevant and equal in presidential elections. Elections wouldn't be about winning states. No more distorting and divisive red and blue state maps. Every vote, everywhere would be counted for and directly assist the candidate for whom it was cast. Candidates would need to care about voters across the nation, not just undecided voters in a handful of swing states. 5 / 5 (1) Feb 18, 2011 The National Popular Vote bill would take effect only when enacted, in identical form, by states possessing a majority of the electoral votes--enough electoral votes to elect a President (270 of 538). When the bill comes into effect, all the electoral votes from those states would be awarded to the presidential candidate who receives the most popular votes in all 50 states (and DC). The bill has passed 31 state legislative chambers, in 21 small, medium-small, medium, and large states, including one house in AR, CT, DE, DC, ME, MI, NV, NM, NY, NC, and OR, and both houses in CA, CO, HI, IL, NJ, MD, MA ,RI, VT, and WA . The bill has been enacted by DC, HI, IL, NJ, MD, MA, and WA. These 7 states possess 74 electoral votes — 27% of the 270 necessary to bring the law into 5 / 5 (1) Feb 18, 2011 The current system does not provide some kind of check on the "mobs." There have been 22,000 electoral votes cast since presidential elections became competitive (in 1796), and only 10 have been cast for someone other than the candidate nominated by the elector's own political party. The electors are dedicated party activists of the winning party who meet briefly in mid-December to cast their totally predictable votes in accordance with their pre-announced pledges. not rated yet Feb 18, 2011 A "republican" form of government means that the voters do not make laws themselves but, instead, delegate the job to periodically elected officials (Congressmen, Senators, and the President). The United States has a "republican" form of government regardless of whether popular votes for presidential electors are tallied at the state-level (as has been the case in 48 states) or at district-level (as has been the case in Maine and Nebraska) or at 50-state-level (as under the National Popular Vote bill). The Founding Fathers only said in the U.S. Constitution about presidential elections (only after debating among 30 ballots for choosing a method): "Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors . . ." The U.S. Supreme Court has repeatedly characterized the authority of the state legislatures over the manner of awarding their electoral votes as "plenary" and "exclusive." 2.1 / 5 (12) Feb 18, 2011 The fear the author's of the Constition had is coming to pass. They did NOT want a king and that is exactly what the president is becoming. 4.5 / 5 (2) Feb 18, 2011 Mattytheory, Just look at the map at the top of the article and you'll have your reason. The founding fathers did not want just a few populous areas deciding the presidency. I forget the exact states or numbers, but it's something like if every person in New York, Calif. and ohio voted one way they could negate the vote in every other state of the country. 4.9 / 5 (10) Feb 18, 2011 We're straining out gnats and swallowing camels. The real problem is that ballots are designed to make it impossible for a third party to gain any power at all. The real stupidity in the design isn't the electoral system vs. popular vote. Its in the ballot not allowing a voter to specify a second choice without having to effectively nullify his first choice. There are a myriad of ways to implement this simply. It would totally eliminate the third-candidate split effect (a third candidate splits the vote of one first-party candidate, placing the other first-party candidate in power), which has plagued so many elections. It also allows people to express their desires without punishing them by throwing away their vote when they vote third party. 5 / 5 (6) Feb 18, 2011 The 11 most populous states contain 56% of the population of the United States and a candidate would win the Presidency if 100% of the voters in these 11 states voted for one candidate. However, if anyone is concerned about the this theoretical possibility, it should be pointed out that, under the current system, a candidate could win the Presidency by winning a mere 51% of the vote in these same 11 states -- that is, a mere 26% of the nation's votes. The political reality is that the 11 largest states rarely agree on any political question. In terms of recent presidential elections, the 11 largest states include five "red states (Texas, Florida, Ohio, North Carolina, and Georgia) and six "blue" states (California, New York, Illinois, Pennsylvania, Michigan, and New Jersey). The fact is that the big states are just about as closely divided as the rest of the country. 1.8 / 5 (6) Feb 19, 2011 This study is pure junk science. The study is based on the assumption that "correct" can only be that the election result corresponds to the majority of the votes cast. As such, the entire study is a tautology, discovering what it falsely assumed to begin with. The goal of the American system is NOT to elect the President simply by majority but rather to temper that States as well. The choice of simple majority vs electoral college system was subject to intense debate by the founders not for "statistical accuracy" concerns but explicitly for political considerations of preserving the rights of individual States AGAINST the tyranny of the few highly populous States. The United States is made up of a number of States and as the Electoral map shows, most of those do NOT equate to the concentrated high saturation voting of a few large States. The Electoral system is best for what it was designed and insures the unique style of American democracy protecting minority rights. 3 / 5 (5) Feb 19, 2011 This study is pure junk science. The study is ... blah blah blah... The Electoral system is best for what it was designed and insures the unique style of American democracy protecting minority rights. One of the biggest problems with modern america is the complete lack of comprehension skills of some of our citizens. The article goes out of its way several times to state that it is not commenting on the validity of our system or the other reasons for it. The purpose of this study was SOLELY to determine the likelyhood of a miscounted vote to swing the election in a way opposite of what it should be based on the rules of the system. In our set up, the likelyhood of that to happen is 10 times higher than standard vote counts. It is NOT providing commentary on our election method if every vote was counted exactly correct. 3.7 / 5 (3) Feb 19, 2011 I think you miss the point. So what if all of the voters in NY, CA, and OH all vote one way and manage to outweigh all of the other voters? That would mean that more than 50% of the voting population resides in those 3 states. Are you saying that their votes shouldn't all matter just because they all happened to vote a certain way AND they make up more than 50% of the population? Because that is what the electoral college - by your assertion - is supposed to do. Shouldn't more than 50% decide a popular vote no matter what the distribution? And anyway, INDIVIDUAL people don't ever all vote one way just because they live in the same state so your scenario is preposterous. Different things are important to different people. Not everyone has access to the same information and not everyone uses the same logic to make a decision. The electoral college discounts votes from small pockets of minorities. Moving away from the electoral college would guarantee that ALL votes mattere 3 / 5 (2) Feb 19, 2011 and that is why I don't understand when people argue against this idea. 1.7 / 5 (7) Feb 19, 2011 and that is why I don't understand when people argue against this idea. That must be because you had a US public school education and never studied US history. 4 / 5 (1) Feb 19, 2011 A lot of good discussion going on here. But the Electoral College acts to make smaller states count for more. If the President never had to care about Iowa or Vermont, you would soon have some very unhappy people. Possibly even talk of For a more in depth look at this, Discover Magazine put out a great article "Math Against Tyranny". Since reading this article I feel much better about the Electoral College and its role in keeping this Federal system we have balanced..... not rated yet Feb 19, 2011 How about this: require a simple majority in the national popular vote and either a majority of the popular vote in a majority of the states and/or a popular majority in a majority of congressional districts. That seems like it'd sufficiently and fairly balance the will of the majority against the fact that 56% of the electorate lives in 11 states (and roughly a tenth in California alone), while also preventing small states from having disproportionate influence. Combined with an instant run-off/Single Transferable Vote system, a single date for the primaries, an end to the stupid caucus system in some states, and voter ID verification (no one wants to talk about it, but voter fraud is rampant, and all the pollsters can do is have you swear under penalty of perjury that you're who you say you are and haven't voted already), it'd be a much better system than the current one. not rated yet Feb 19, 2011 and that is why I don't understand when people argue against this idea. An unregulated public vote can lead to a tyranny of the majority. The erection of electoral colleges and apportioned representation prevents abuses due to popularity of the abuse method. not rated yet Feb 20, 2011 These statistical measures are but one feature of a voting system. Consider what would have happened in the 2000 election if we used a 50-state-wide, majority vote selection system. There would have been recounts not just in Florida, but in more than a dozen states, each of which could decide the entire election. Certainly some other state could have come up with 318 votes or so in the other direction. The disgust people felt with the Florida process would have been amplified by the wider uncertainty across the country, with correspondingly more challenges, more ugly party machine details emerging, and vastly more court time. Side question: does this assume a normal distribution for the vote spread across elections? Do previous elections correspond to this distribution? Does it handle varying numbers of candidates in the not rated yet Feb 20, 2011 The idea that recounts will be more likely and messy with a national popular vote is distracting. Recounts are far more likely in the current system of state-by-state winner-take-all methods. The possibility of recounts should not even be a consideration in debating the merits of a national popular vote. No one has ever suggested that the possibility of a recount constitutes a valid reason why state governors or U.S. Senators, for example, should not be elected by a popular vote. The question of recounts comes to mind in connection with presidential elections only because the current system so frequently creates artificial crises and unnecessary disputes. A nationwide recount would not happen. We do and would vote state by state. Each state manages its own election and recount. The state-by-state winner-take-all system is not a firewall, but instead causes unnecessary fires. The larger the number of voters in an election, the smaller the chance of close election results. not rated yet Feb 20, 2011 Recounts in presidential elections would be far less likely to occur under a national popular vote system than under the current state-by-state winner-take-all system (i.e., awarding all of a state's electoral votes to the candidate who receives the most popular votes in each separate state). Based on a recent study of 7,645 statewide elections in the 26-year period from 1980 through 2006 by FairVote: *The average change in the margin of victory as a result of a statewide recount was a mere 274 votes. *The original outcome remained unchanged in over 90% of the recounts. *The probability of a recount is 1 in 332 elections (23 recounts in 7,645 elections), or once in 1,328 years. 1 / 5 (1) Feb 21, 2011 The idea that smaller states need the electoral college is BS. Why should any citizen have different rights than another? A pure majority election makes us all equal. Here is an extreme example of what's wrong with the electoral college. What if enough of the most populous states had 100% of the votes for one candidate to get slightly less than half the electoral votes while the rest of the states all gave the win to the other candidate by one vote. This would give the LOSER around 75% of the popular vote. 2.3 / 5 (3) Feb 21, 2011 Moby, the USA is NOT a democracy and it was DESIGNED that way. The USA was intended as a federal system of united states. The House of Representatives were to be the peoples' representative to the federal govt. Senators were originally elected by state legislators. The president was supposed to have a more limited role and to support the balance of power idea, the president would be elected by state electors not popular vote. This would give the LOSER around 75% of the popular vote. Smaller states don't want to be controlled by cities and states with the highest population. not rated yet Feb 21, 2011 Keep in mind that the Electoral College was created to solve the problem of travel way back when, before we had the convenience of cars and air travel. The argument that the electoral college makes small states count for more in the elections is true, but it also highlights the fact that the electoral college grossly misrepresents the popular vote. Getting rid of it would mean that the state lines would be disintegrated when it comes to the elections, so every state would be represented by exactly the voting population. No more, no less.
{"url":"http://phys.org/news/2011-02-fair-accurate-elections-statistically.html","timestamp":"2014-04-18T06:01:28Z","content_type":null,"content_length":"112856","record_id":"<urn:uuid:64185884-f659-458b-b1a1-438e7c7624c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: If f(x) = 4x^2 -3x+1, find (f(x+h)-f(x))/(h)) • 3 months ago • 3 months ago Best Response You've already chosen the best response. For f(x+h), replace all x by x+h in 4x^3 -3x+1. For f(x), it's just 4x^3 -3x+1. \[ \frac{f(x+h)-f(x)}{h} = \frac{(4(x+h)^3 -3(x+h)+1) - (4x^3 -3x+1)}{h} =...\] Best Response You've already chosen the best response. Since this is how you get the derivative for small h, you should find answer close to 12x^2 - 3. Best Response You've already chosen the best response. okay I wrote the wrong equation b4, but I just corrected it. What is the next step after I write out: (4(x+h)^2 -3(x+h)+1)-(4x^2 -3x+1)/h) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/52c8e4a3e4b04f95cb848b42","timestamp":"2014-04-17T04:06:06Z","content_type":null,"content_length":"32618","record_id":"<urn:uuid:0899d921-6e9d-450b-9750-a4219be30389>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
All courses are in FW11 unless stated otherwise. Previous courses can be found If you would like to volunteer a course, please contact Matthew Parkinson Term: Easter 2005 A look at continuations and logic. Tim Griffin 1st and 3rd June at 10am Mobile processes and bigraphs Robin Milner 20th, 22nd, and 24th June at 10am Term: Lent 2005 Behavioural pseudometrics Franck van Breugal 1st and 3rd Feb: 10am-11:45 Monads Nick Benton 17th and 22nd Feb: 11am-12:30 Elliptic Curve Cryptography: A case study in formalization using a higher order logic theorem prover Joe Hurd 15th, 17th, 21st and 23rd Mar: 10am-11:45 Term: Michaelmas 2004 Minicourse on Ordinals Thomas Forster 1st and 5th Nov: 10am-11:45 How to solve recursive domain equations. Andrew Pitts Nov 8th and 12th: 10am - 11:45 Domain theory for concurrency Glynn Winskel 17th Nov: 10am-11:45 Additional lecture 24th Nov: 10-11am Category theory for dummies (Introduction for Marcelo's course) Lucy Saunders-Evans 19th Nov: 10am The simply typed lambda calculus categorically. Marcelo Fiore 22nd and 26th Nov: 10am-11:45 Concurrency and Pi calculus Peter Sewell 29th Nov and 3rd Dec Easter 2005 Tim Griffin 1st and 3rd June at 10am In FW09 on the 3rd of June. Details: Abstract. Some time ago I wrote a paper on a rather unexpected relationship between continuations and classical logic. See . I will present the basic results of this paper, and then attempt to reinterpret those results in a way that may make more sense. Robin Milner 20th, 22nd, and 24th June at 10am Bigraphs are an algebraic-cum-graphical model for mobile processes that reconfigure their placing and linking. The prefix `bi' connotes that these two structures, placing and linking, are orthogonal: `where you are doesn't affect who you may talk to'. They aim to be useful for pervasive computing; I can illustrate this briefly with behaviour in a sentient environment. The model is mildly category theoretic, but most of the work can be (and will be!) seen graphically. There is also an algebraic presentation of bigraphs that lends itself to animation as a programming language. Bigraphs subsume CCS, Petri nets, pi calculus, ambient calculus and lambda calculus. Of course, if you only need one of these you would not use bigraphs; but applications tend to want more than one, and a coordinating model can also be useful to tease out ideas that are in common. In the examples, I shall use link graphs (half of bigraphs) to model bisimilarity in Petri nets, then pure bigraphs to do the same for CCS. This leads finally to binding bigraphs, where --- as time permits --- I sall treat lambda calculus with explicit substitutions and conjecture a generalsiation of ther Church Rosser theorem (confluence). you can find the slides for the course. The fourth slide lists some useful papers. Lent 2005 Franck van Breugal 1st and 3rd Feb: 10am-11:45 In FW09 on 3rd Feb Details: To model concurrent systems in which quantitative data (like probabilities, time, costs or rewards) plays a crucial role, extensions of labelled transition systems have been put forward. Notions of behavioural equivalence like bisimilarity have been adapted to this setting. However, such discrete Boolean-valued notions (states are either behaviourally equivalent or they are not) sit uneasily with models featuring quantitative data. For example, consider probabilities. If some of the probabilities change a little bit --the probabilities are often obtained experimentally and, hence, are usually approximations-- states that used to be behaviourally equivalent may not be anymore or vice versa. In conclusion, behavioural equivalences like probabilistic bisimilarity are not robust. To address this problem, pseudometrics that assign a distance, a real number between 0 and 1, to each pair of states of a system have been proposed. Such a pseudometric yields a smooth and quantitative notion of behavioural equivalence. The distance between states is used to express the similarity of their behaviour. The smaller the distance, the more alike the systems behave. In particular, the distance between systems is 0 if they are behaviourally indistinguishable. Nick Benton 17th and 22nd Feb: 11am-12:30 A tension in language design has been between simple semantics on the one hand, and rich possibilities for side-effects, exception handling and so on on the other. The introduction of monads has made a large step towards reconciling these alternatives. First proposed by Moggi as a way of structuring semantic descriptions, they were adopted by Wadler to structure Haskell programs. Monads have been used to solve long-standing problems such as adding pointers and assignment, inter-language working, and exception handling to Haskell, without compromising its purely functional The course introduces monads, effects, and exemplifies their applications in programming and in compilation. The course presents typed metalanguages for monads and related categorical notions, and then describes how they can be further refined by introducing effects. Joe Hurd 15th, 17th, 21st and 23rd Mar: 10am-11:45 Formalizing a mathematical theory using a theorem prover is a necessary first step to proving the correctness of programs that refer to that theory in their specification. In this course I will show how modern higher order logic theorem provers help (and occasionally hinder) such formalizations. The case study for the course will be a formalization of (the foundations of) elliptic curve cryptography using the HOL4 theorem prover, and part of the course will be an introduction to the mathematics of public key cryptography and elliptic curves. For more details see: • 10:00-12:00, Tue 15 March: Elliptic Curve Cryptography This lecture provides a mathematical introduction to elliptic curves and their applications in cryptography. • 10:00-12:00, Thu 17 March: Introduction to HOL This lecture is a tutorial introduction to the HOL4 theorem prover, aimed at getting a newbie up and proving as quickly as possible. • 10:00-12:00, Mon 21 March and 10:00-12:00, Wed 23 March: Formalized Elliptic Curves The details of how the mathematics of Elliptic Curves Cryptography is formalized in HOL, as a medium scale case study that will be used as a golden reference for verifying programs. Michaelmas 2004 Thomas Forster 1st and 5th Nov: 10am-11:45 In FW09 Details: Ordinals are transfinite analogues of natural numbers, and are of great importance in CS because they give a precise and illuminating way of describing the complexity of various recursions. Andrew Pitts Nov 8th and 12th: 10am - 11:45 My aim is to give the up-to-date (post-Freydian) view on the solution of recursive domain equations. I will try to keep things as concrete as possible, by taking "domain" to mean "omega-chain complete partially ordered set with a least element"; familiarity with the first half of the Part II course on Denotational Semantics will be assumed. However, a certain amount of category theory is essential: some familiarity with the notions of "category" (and the opposite of a category), "functor", "natural transformation", "limit" and "colimit" will be assumed, but not much more. • Why do we need to solve domain equations and why is it hard to do so? • Locally continuous functors of mixed variance on the category of omega-cppos and strict continuous functions. • Minimal invariant solutions and free bialgebras. • The limit-colimit construction. • Applications (sketchily). • S. Abramsky and A. Jung, Domain Theory. • Chapter 1 of S. Abramsky and D. M. Gabbay and T. S. E. Maibaum (eds), Handbook of Logic in Computer Science, Volume 3. Semantic Structures (Oxford University Press, 1994). Glynn Winskel 17th Nov: 10am-11:45 Additional lecture 24th Nov: 10-11am Abstract: I'll present a simple domain theory for concurrency. Although a domain theory for nondeterministic processes it also forms a model of linear logic in which there are comonads interpreting variants of the linear logic exponential. The model highlights the role of linearity in concurrent computation. Two choices of comonad yield two expressive metalanguages for higher-order processes, both arising from canonical constructions in the model. Their denotational semantics are fully abstract with respect to contextual equivalence. One language derives from an exponential of linear logic; it supports a straightforward operational semantics with simple proofs of soundness and adequacy. The other choice of comonad yields a model of affine-linear logic, and a process language with a tensor operation which can be understood as a parallel composition of independent processes. I'll conclude with a discussion of a broader programme of research, towards a comprehensive domain theory for concurrency. I hope these talks will motivate, give background and references to domain theory, category theory, linear logic, ..., the subjects of future mini-courses. (This is based on joint work with Mikkel Nygaard.) Lucy Saunders-Evans 19th Nov: 10am Marcelo Fiore 22nd and 26th Nov: 10am-11:45 • Type theory: syntax and equational theory. • Categorical models: cartesian closed categories, soundness and completeness. • The initial algebra approach: syntax and semantics. • Lambda definability and normalisation by evaluation. • R.Crole. Categories for types. CUP, 1995. • J.Lambek and P.Scott. Introduction to higher order categorical logic. CUP, 1986. • M.Fiore. Semantic analysis or normalisation by evaluation for typed lambda calculus. PPDP, 2002. • D.Scott. Relating theories of the lambda calculus. AP, 1980. Peter Sewell 29th Nov and 3rd Dec
{"url":"http://www.cl.cam.ac.uk/~mjp41/mini_courses.html","timestamp":"2014-04-19T04:25:20Z","content_type":null,"content_length":"14825","record_id":"<urn:uuid:7d27c908-30a4-40bc-ba5e-8573e558d506>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
End of the year quote “Truth alone will endure, all the rest will be swept away before the tide of time. I must continue to bear testimony to truth even if I am forsaken by all. Mine may today be a voice in the wilderness, but it will be heard when all other voices are silenced, if it is the voice of Truth.” (Mahatma Gandhi, Basic Education (1951), p. 89.) 6 Responses to End of the year quote 1. Buon anno Anno nuovo, vita nuova ! □ Thanks! Auguri e felice anno nuovo! (Happy new year!). 2. Dear Marco, I do not find the renormalization ideology convincing. I see there wrong conclusions based on wrong assumptions. I wrote a simple illustrative article and I would like to learn your opinion about it. Please read it and tell me whether my reasoning is correct and convincing. □ Dear Vladimir, I think that it is an example of braveness to challenge a technique that works so well and proved to be fertile for the results it yielded. Generally, I would cite the power of renormalization group in this case that, thanks to the formulation due to Kenneth Wilson, gave us a deep understanding of phase transitions that we will have otherwise missed. Besides, in the history of science, there is nothing that agrees so well with experiments than Standard Model that largely uses renormalization techniques. Through them one is able to get all those cross sections and rates that presently people at CERN is proving to agree so perfectly well with experimental data. By my side, working just with a classical field theory as I have widely discussed in this blog, I showed how interaction produces finite effects on mass and this is a fact being those exact solutions. So, renormalization is a really sound idea and to understand the reasons of its success we need a higher order theory with respect to the Standard Model. This is what people is currently looking for. I think that to convince people that your criticisms are sound you will need well more than a couple of Newton equations. Otherwise it appears just like a matter of principle most in the same way the late Dirac did. 3. Dear Marco, Thank you for your answer. I do not deny the usefulness of renormalization. It is the only our tool in making practical calculations until we find a better formulation of theory. My toy model is exactly renormalizable and it shows that renormalization may work. As well, my toy model is exactly soluble, so its precision is absolute. What I wanted to demonstrate is that our current interpretation of renormalization is erroneous. There are no bare particles, but our errors in coupling equations. Dirac did not find an exactly renormalizable system to substantiate his concerns, and I found such a system. It can be easily promoted to QM, for example, by substituting its Lagrangian in the path integral. Simplifying it to a classical system, I just wanted to strip from renormalization those “relativistic, quantum, and non linear” clothes that hide the essence – our mathematical and physical errors in coupling equations. It is we who spoils the equation coefficients (passage from (1) to (5)), and it is we who restores the old, physical values by hand, and there are no vacuum polarization and other “bare particle interaction effects” who do renormalizations for us. □ Dear Vladimir, I think you have not a clear idea of what renormalization is. The problem is that an interacting quantum field theory has products of distribution valued operators that must be properly defined. Renormalization gives a prescription for this and it works excellently well. As a physicist I should say that there is a proper mathematical procedure that extends distribution theory as devised by Laurent Schwartz to manage such products. These mathematical operations are inescapable unless a theory is trivial. In your case, you have to take a simple quantum field theory like that of a massless scalar field and show that you are able to produce identical results as renormalization techniques do. Then, put your computations on a paper and send them to some refereed journal. In this way you can get a hope to be heard.
{"url":"http://marcofrasca.wordpress.com/2012/12/31/end-of-the-year-quote/","timestamp":"2014-04-17T18:22:53Z","content_type":null,"content_length":"92415","record_id":"<urn:uuid:0c11419e-5143-4687-bddb-3c5997d19cac>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > ALU Arithmetic and Logical Unit (ALU) is one of the core components of all Central Processing Units . It is capable of calculating the results of a wide variety of common computations. The most common available operations are the integer arithmetic operations of addition, subtraction, and multiplication, the bitwise logic operations of AND, NOT, OR, and XOR, and various shift operations. Typically, a standard ALU does not handle integer division nor any floating point operations. For these calculations a separate component, such as a divider or Floating Point Unit (FPU), is often used, although it is also possible that a program may use the ALU to emulate these operations. The ALU takes as inputs the data to be operated on and a code from the control unit indicating which operation to perform, and for output provides the result of the computation. In some designs it may also take as input and output a set of condition codes, which can be used to indicate cases such as carry-in or carry-out, overflow, or other statuses. See also: execution unit All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/al/ALU","timestamp":"2014-04-18T13:15:40Z","content_type":null,"content_length":"13071","record_id":"<urn:uuid:c8375fcd-f3ac-4334-b3a5-5544c7890632>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: a particle moves along the x-axis so that its velocity at any time t is given by v(t)=3t^2-18t+24 and its position is given by x(t)=t^3-9t^2+24t+4. A. For what values of t is the particle at rest? B. Find the total distance traveled by the particle from T=1 to T=3. Hint: the particle is moving to the left when v(t)<0 and to the right when v(t)>0. • 11 months ago • 11 months ago Best Response You've already chosen the best response. a) v=0 for rest so solve the quadratic equation of v(t) b)sketch v(t) for t=1 to v=3 the combination of the area under/over the graph between the x axis is the distance traveled, you may have to do two integrations for this Best Response You've already chosen the best response. I don't understand how to get these answers. I have an answer key, I just don't understand velocity at all. I am in precalculus and since it is the end of the year, my teacher is giving us a calculus packet so we can start learning. She has been gone every day since we've gotten it and she will be back tomorrow. I don't understand this velocity stuff at all. Best Response You've already chosen the best response. @Narses for partb) why don't we apply directly to x(t) ? Best Response You've already chosen the best response. x(t) may be positive and negative so simply doing x(3) - x(1) may not work Best Response You've already chosen the best response. I DON'T GET THISSSS Best Response You've already chosen the best response. that would be the magnitude of the distance between the two points not the distance traveled, I was giving a more general case Best Response You've already chosen the best response. Nope, not that Narses. the distance can get by absolute value @bandrockstar wait, after discussing, we will give you the correct instruction. Best Response You've already chosen the best response. the total distance traveled may not be the distance between x(1) and x(3) Best Response You've already chosen the best response. Okay, thank you. Best Response You've already chosen the best response. Ok, guide her steps, please Best Response You've already chosen the best response. velocity as a function of time is what you given is describes the speed and which direction, so part a says when is it stationary, that is when the velocity (or speed) is equal to 0 such 3t^ 2-18t+24 =0 Best Response You've already chosen the best response. Okay. I got that much now. Best Response You've already chosen the best response. so the first step is to set v(t) equal to zero? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. What am I trying to get out of setting it to zero? Like do I basically un-FOIL it? Best Response You've already chosen the best response. basically yes use whatever method you know to solve quadratic equations Best Response You've already chosen the best response. erm alright. Thanks hold on :) Best Response You've already chosen the best response. by setting it to zero Best Response You've already chosen the best response. by setting to zero you are telling the equation that the particle is at rest Best Response You've already chosen the best response. Okay, That i can actually understand. So what is it after you equal it to zero Best Response You've already chosen the best response. so for for part a you have 3t^2-18t+24 =0 that gives t^2-6t+8=0 by dividing both sides by 3 ( I guess you can solve is easier now?) Best Response You've already chosen the best response. I don't understand what you put in parenthesis. Best Response You've already chosen the best response. I said that you may be able to solve the equation now that we divided by 3? Best Response You've already chosen the best response. Yes. is it 3(t-2)(t-4) ? Best Response You've already chosen the best response. yes, there is no need for the three as the other side is equal to 0 as 0/3=0 Best Response You've already chosen the best response. Okay.. so where in the equation (t-2)(t-4) do I get my answer? What am I taking from it? The zeros? Best Response You've already chosen the best response. if (t-2)(t-4)=0 imagine a times b =0 how do we get that? what is a or b? Best Response You've already chosen the best response. Like the zeros being 2 and 4? I don't understand what I'm trying to get from this equation Best Response You've already chosen the best response. the solution is that either A or B HAASSS to equal 0 so, t-2=0 or t-4 =0 such t=2 and 4 Best Response You've already chosen the best response. Okay, that's what I was thinking. Best Response You've already chosen the best response. So t=2 and t=4 are the values for the particles at rest because thats the values where t=0. Correct? Best Response You've already chosen the best response. yes but a typo on your end is you want v=0 at the end Best Response You've already chosen the best response. That's what I meant :P How do I do B? Best Response You've already chosen the best response. so you have an eqaution for the distance from 0 in terms of t, how would you approach it? any ideas? Best Response You've already chosen the best response. Would i be plugging in 1 and 3 somewhere? But I have to use the second equation? I'm just not sure how to do this stuff AT ALL. I have absolutely NO background on it. Sorry :/ Best Response You've already chosen the best response. its ok, when I first had answered I thought you had basic calculus but no problem, from part a you know that its stationary at t=2 and 4 so in the time period 1->3 the particle must change Best Response You've already chosen the best response. Okay. Got it. Best Response You've already chosen the best response. as in the diagram shown it 'goes back on itself' so by simply finding the distance between point a and b where t =1 and 3 respectivly the particle will infact travelled further Best Response You've already chosen the best response. Best Response You've already chosen the best response. How do we find the distance? Will the second equation be used? Best Response You've already chosen the best response. yes, so between t=1 and 2 the particle will be traveling in the same direction so we can take the distance between the two points and do the same with t= 2 and 3 Best Response You've already chosen the best response. Okay. How do you find the distance between the two? Best Response You've already chosen the best response. I need to go soon but work out the difference of x(1) and x(2) and add to the difference of x(2) and x(3) that is the answer, Best Response You've already chosen the best response. x(1)= put 1 into to t of x(t) Best Response You've already chosen the best response. Thank you so much. I appreciate it very very very much :) Best Response You've already chosen the best response. I am not getting the same answer as the key. I get -4 for the difference of x(1) and x(2) and I get 2 for the difference of x(2) and x(3). I am getting 2 when I add them together. The answer on the key is 6 units. I don't understand what I'm doing wrong and I've checked my arithmetic twice. Best Response You've already chosen the best response. Uh.. Thank you.. Best Response You've already chosen the best response. Since you have the velocity at time v (t) = 3t ^ 2-18t +24, then the particle will be at rest when v (t) = 0, that is, you have to find the values of time t for which 0 = 3t ^ 2-18t +24, so you have to solve the quadratic equation. So 0 = 3 (t ^ 2-6t +8) Therefore 0 = (t-4) (t-2), so that t = 4 and t = 2. Thus at t = 2 and t = 4 the particle will be at rest. Best Response You've already chosen the best response. Best Response You've already chosen the best response. I already knew that. Best Response You've already chosen the best response. The answer is 6 units. Best Response You've already chosen the best response. No, it isn't. Best Response You've already chosen the best response. absolute valor (-2)+absolute Valor(4) =6 It is not a vector (Start Point-Final Point), but a scalar, sum of paths. Best Response You've already chosen the best response. v (t) = 3t-18t +24 ^ 2 = 3 (t-4) (t-2) is> zero when both (t-4) and (t-2) are positvos or when negative. And t-4> 0 when t> 4 and t-2> 0 when t> 2, so that both are positive parpentesis if t> 4 or both parentheses are negative if t <2, and both are negative paréntsis] 2 , 4 [. Has to calculate the distance of 1 to 2, d1, 2 = x (2)-x (1), this being the distance to the right (positive sign) and then the distance of 2 to 3, d2, 3 = x (3)-x (2), this being the distance reccorrida left (negative sign), Suma refusal distances more positive one and that is the total distance traveled. x (2) = 2 ^ 3-9 * 2 ^ 2 +24 * 2 +4 = 24, x (1) = 1 ^ 3-9 * 1 ^ 2 +24 * 1 +4 = 20 d1, 2 = x (2)-x (1) = 24-20 = 4 That is d1, 2 = 4 The distance to the right x (3) = 3 ^ 3-9 * 3 ^ 2 +24 * 3 +4 = 22 d2, 3 = x (3)-x (2) = 22-24 = -2 The distance to the left from the start point So the displacement vector is Right + Left = 4-2 = 2 as right shift vector, but if you want the total distance not as vector but a scalar, D=4+2=6 Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/518fc3c2e4b0cf6dd8f47b7e","timestamp":"2014-04-18T10:39:16Z","content_type":null,"content_length":"163903","record_id":"<urn:uuid:ee18f078-8555-4d83-93aa-24c9948d4149>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Harbor Acres, NY Statistics Tutor Find a Harbor Acres, NY Statistics Tutor ...I was a high school teacher for a brief time and, more recently, I taught Probability and Statistics at Stony Brook University for a few years, where I received the President's Award for Excellence in Teaching. I believe that learning mathematics is about understanding important concepts, not me... 16 Subjects: including statistics, calculus, GRE, geometry ...While there, I tutored students in everything from counting to calculus, and beyond. I then earned a Masters of Arts in Teaching from Bard College in '07. I've been tutoring for 8+ years, with students between the ages of 6 and 66, with a focus on the high school student and the high school curriculum. 26 Subjects: including statistics, physics, calculus, geometry ...I developed techniques that help students raise their scores several hundred points! I work with students to develop a custom study plan that attacks their weaknesses and enhances their strengths. Students that hone these techniques over considerable practice have had great success! 34 Subjects: including statistics, calculus, writing, GRE ...I have looked at, used, tested, and written about pretty nearly every type of computer product or program available. Although my career is varied, the one central element in everything I have done is my desire to explain complicated technical concepts in simple terms that almost everyone can und... 17 Subjects: including statistics, accounting, finance, economics ...Mistakes or poor performance displayed at this early stage are immediately addressed with probing and feedback before I introduce new concepts and related terminologies for the core of the lesson. This core usually takes the form of a problem posed that leads to the culmination of a formula or a... 23 Subjects: including statistics, Spanish, calculus, GRE Related Harbor Acres, NY Tutors Harbor Acres, NY Accounting Tutors Harbor Acres, NY ACT Tutors Harbor Acres, NY Algebra Tutors Harbor Acres, NY Algebra 2 Tutors Harbor Acres, NY Calculus Tutors Harbor Acres, NY Geometry Tutors Harbor Acres, NY Math Tutors Harbor Acres, NY Prealgebra Tutors Harbor Acres, NY Precalculus Tutors Harbor Acres, NY SAT Tutors Harbor Acres, NY SAT Math Tutors Harbor Acres, NY Science Tutors Harbor Acres, NY Statistics Tutors Harbor Acres, NY Trigonometry Tutors Nearby Cities With statistics Tutor Baxter Estates, NY statistics Tutors East Atlantic Beach, NY statistics Tutors Fort Totten, NY statistics Tutors Garden City South, NY statistics Tutors Glenwood Landing statistics Tutors Harbor Hills, NY statistics Tutors Kenilworth, NY statistics Tutors Manorhaven, NY statistics Tutors Maplewood, NY statistics Tutors Meacham, NY statistics Tutors Port Washington, NY statistics Tutors Roslyn, NY statistics Tutors Saddle Rock Estates, NY statistics Tutors The Terrace, NY statistics Tutors University Gardens, NY statistics Tutors
{"url":"http://www.purplemath.com/Harbor_Acres_NY_Statistics_tutors.php","timestamp":"2014-04-21T02:38:14Z","content_type":null,"content_length":"24629","record_id":"<urn:uuid:a0eb8e78-16ae-47b7-991d-1d70e3a7fb99>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 7: BJT Transistor Modeling Chapter 7: BJT Transistor Modeling ... to the ac domain where the conversion will become as ?=Po(ac)/Pi(dc) ... base BJT transistor. re model. re equivalent cct. 23. 23. isolation part, Zi=re. Zo ... – PowerPoint PPT presentation Number of Views:1541 Avg rating:3.0/5.0 Slides: 49 Added by: Anonymous more less Transcript and Presenter's Notes
{"url":"http://www.powershow.com/view/24f059-OTAzZ/Chapter_7_BJT_Transistor_Modeling_powerpoint_ppt_presentation","timestamp":"2014-04-18T08:13:40Z","content_type":null,"content_length":"104300","record_id":"<urn:uuid:9aee65ef-954d-4bfb-9234-e5069ce64a12>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Examination 1 Here's the PostScript version of this page. It's much more nicely typeset than the rubbish you'll get if you use your browser to print this page. CS 294-5: Meshing and Triangulation (Autumn 1999) Examination 1 (20% of final grade) This is an open-book exam. It is due at the start of class on Tuesday, November 2, 1999. You may submit your answers by email, by hand, or under my office door. Answers submitted after 2:15 pm Tuesday will not be accepted. You may consult papers and other references, but you may not receive assistance from other people (except me). You are welcome to ask me to clarify anything in the readings, the lectures, or this exam that you don't fully understand. There are 30 points worth of questions below. Answer exactly 20 points worth. If you answer questions totalling more than 20 points, I will randomly select answers to ignore, bringing the total down to 20 points. If I ignore a question that you answered correctly and grade one that you got wrong, tough luck. [1] The quad-edge data structure (1 point). Figure 1 illustrates two PSLGs. If we consider the 2-faces to be part of each complex, then these PSLGs are topologically distinct. If we represent each PSLG with a quad-edge data structure, how do the two representations differ topologically? (In other words, if we aren't allowed to look at the coordinates of the vertices, how can we distinguish the two representations from each other?) Figure 1: Two PSLGs which are topologically different if we consider the 2-faces to be part of the topology. [2] Deloopsy (1 point). What's wrong with the following algorithm for constructing the constrained Delaunay triangulation of a simple polygon? 1 while P has more than three vertices 2 Find three consecutive vertices a, b, and c on the perimeter of P such that P 3 Output b out of the perimeter of P (Unfortunately, this algorithm has made an appearance in the literature.) [3] An upper bound on two-dimensional triangulations I (1 point). In Lecture 4, I gave a proof that a d-dimensional triangulation of n vertices has at most d-simplices. (My transparencies can be viewed from the course web page). Show how the proof method I demonstrated in class can also be used to prove an upper bound of 2n - 5 in two dimensions. (This bound is tight.) [4] An upper bound on two-dimensional triangulations II (1 point). Show how the same proof method can also be used to show that the bound is tight. (In other words, a triangulation with 2n - 5 triangles exists for any [5] Gift-wrapping gaffes (1 point). If the gift-wrapping algorithm is used to construct the Delaunay tetrahedralization of a vertex set that includes groups of five or more cospherical vertices, the algorithm must perform checks to make sure that overlapping tetrahedra are not constructed. Even if these checks are made, if groups of six or more cospherical vertices are present, it may be possible for the algorithm to get stuck and fail to construct a valid triangulation. Explain why. [6] Fast times with Frankensimplex (1 point). Dr. Frankensimplex claims to have discovered an n-vertex polyhedron so that at least half the tetrahedra are equilateral. Prove that the good doctor is [7] Who needs transformations most? (2 points.) Let A and B be two bad tetrahedral meshes. Mesh A was created by applying the advancing front method to a cube. Three adjacent sides of the cube were subdivided into very small triangles, and the other three sides were subdivided into very large triangles. No background mesh or other method was used to control the sizes of the tetrahedra as the fronts advanced; rather, node placement was based only on the desire to create tetrahedra as close to equilateral as possible, until the fronts collided. Mesh B was created by applying Shephard and Georges' Finite Octree algorithm to a domain with complicated boundaries. Each boundary octant was triangulated separately (using a Delaunay triangulation where possible), and no postprocessing was applied to improve the tetrahedra that were created where the boundaries meet the octree. Fortunately, you have in your possession software that applies optimization-based smoothing and topological transformations (2-3 flips, edge removal, edge contraction, vertex insertion, etc.) to tetrahedral meshes. Unfortunately, the software is extremely slow, and you will only have time to apply topological transformations and smoothing to one of the two meshes. The other mesh will have to get by with smoothing alone. Which mesh (A or B) has problems that need topological transformations to fix, and which mesh can probably heal most of its bad tetrahedra through smoothing alone? Explain why. [8] Meshing prespecified boundaries (2 points). Assume that you are given a two-dimensional domain boundary that has been subdivided into edges, and you are not allowed to add, remove, or smooth boundary vertices. You are asked to triangulate the domain with the best-quality triangles the boundary will allow. Of the Delaunay, advancing front, and quadtree approaches, which is best suited and which is least suited to this demand? Why? [9] Minimum spanning trees (2 points). Let V be a set of n points in the plane. The Euclidean minimum spanning tree T of V is a set of n - 1 edges such that (V, T) is a connected graph--any two vertices of V are joined by exactly one path through T--and the total length of the edges of T is shorter than (or at least as short as) that of any other spanning tree. Prove that every edge of T is in the Delaunay triangulation of V. Hint: If (v, w) is an edge in T, but vw is not Delaunay, show that you can improve T. [10] Constrained mesh smoothing (2 points). The optimization-based smoothing algorithm for tetrahedral meshes described by Freitag and Ollivier-Gooch chooses a search direction g. It then performs a line search along g, attempting to improve the worst dihedral angle of the tetrahedra adjoining the vertex being smoothed. Suppose that we want to modify the algorithm so that it can smooth a vertex that is constrained to lie in a planar boundary facet. Our method is to find the orthogonal projection of g onto the facet, then use the projected search vector as our search direction. Explain why this idea works well when there is only one angle in the active set, but is a rotten idea when there are two or more angles in the active set. Suggest a very simple fix. [11] Off-center subsegment splitting (2 points). Ruppert's Delaunay refinement algorithm normally splits encroached subsegments at their midpoints. However, as one student pointed out in class, we might achieve smaller meshes if we use off-center splits in cases where the encroaching vertex is an input vertex (and hence can't be rejected) and is not at the subsegment's center. For example, suppose that s is a subsegment encroached upon by an input vertex v. If v is very close to s, but not near the center of s, then splitting s off-center might reduce the number of triangles in the final mesh. One idea is to project v orthogonally onto s. Unfortunately, if v is near an endpoint of s, this idea might create an unreasonably tiny new feature, as Figure 2 Figure 2: Projecting an encroaching input vertex onto an encroached segment may unnecessarily reduce the feature size. How can we modify this idea so that the termination guarantee and edge length guarantee given by Theorem 20 in the lecture notes remain intact? Be sure to use only local criteria in deciding where to place the splitting point (and not global criteria, like the value of lfs[min]). Explain why Theorem 20 remains true. [12] Herbert's typo (2 points). In Herbert Edelsbrunner's course handout, Preserving Topology, he states that the contraction of an edge ab is a local unfolding if and only if the following conditions are true. However, the version of the handout that I downloaded had a typo. The second condition was written as follows. This confused me for several days, until I obtained the original reference and discovered the typo. (I hacked the PostScript to fix the handout for your benefit). Demonstrate why I was so confused by giving an example of a local unfolding for which condition (ii') is not satisfied. [13] Delaunay triangulation of an x-monotone chain (4 points). Let V = {v[1], v[2], ..., v[n]} be a set of vertices sorted in increasing order by x-coordinate, with no two vertices having the same x -coordinate. Suppose that each edge v[i]v[i + 1], x-monotone chain of Delaunay edges. Consider the following ^o, it can triangulate the lower portion as well.) The algorithm moves a horizontal sweepline upward over the plane, creating each Delaunay triangle as the sweepline passes over its circumcenter. A priority queue Q maintains a list of potential Delaunay triangles, some of which will prove to be Delaunay, and some of which will not. Assume that V is represented as a doubly-linked list. 1 for n - 2 3 if p be the circumcenter of Q with key p 6 while Q is not empty 7 Remove triangle y-coordinate from Q 8 if each edge bc and cd still has no triangle above it 9 Output c out of V 11 If some vertex a precedes b in V and a, b, and d are in counterclockwise order 12 Let p be the circumcenter of Q with key p 14 If some vertex e follows d in V and b, d, and e are in counterclockwise order 15 Let p be the circumcenter of Q with key p Prove the correctness of this algorithm. Some hints: • Every triangle is above two of its edges (and below only one), as Figure 3 shows. (Why?) • When the sweepline passes over the circumcenter of any Delaunay triangle t, both of the edges on t's underside have already been created, so t is already in Q. (You have to prove this.) • When a non-Delaunay triangle is removed from the priority queue, at least one of its two lower edges has already been covered by a Delaunay triangle. (You have to prove this, too.) • For simplicity, please assume that no four vertices are cocircular, and no two circumcenters have the same y-coordinate. • It might help you to think of the Voronoi dual of the triangulation. Figure 3: The upper Delaunay triangulation of an x-monotone chain. The significance of this algorithm is that it generalizes easily to terrains in three dimensions, or their analogues in higher dimensions, while maintaining its don't have to prove this.) [14] Nearest neighbors in curve reconstruction (4 points). Let S be a point set that 0.3-samples a curve F. Let x be a point in S, and let y be its nearest neighbor in S. Prove that x and y are adjacent samples on F, so the edge xy should be included in any reconstruction. (You should only need Amenta, Bern, and Eppstein's Lemma 1 to prove this.) [15] Triangulate a PSLG in (4 points). In Lecture 1, we learned a method of regularizing simple polygons (i.e., partitioning them into y-monotone polygons). Generalize the algorithm so that it works for segment-bounded PSLGs. (A PSLG is segment-bounded if the boundary between the region we want to triangulate and the region we don't is a union of segments.) You'll need to identify what types of vertices appear in PSLGs but not in simple polygons, and for each new vertex type either give pseudocode for handling it, or explain why it can be handled exactly like some other vertex type.
{"url":"http://www.cs.berkeley.edu/~jrs/meshf99/exam1/exam1.html","timestamp":"2014-04-17T01:30:30Z","content_type":null,"content_length":"17654","record_id":"<urn:uuid:60efd7e8-8f1c-4dc8-ac23-4c64c037bcaa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about Inertia I think Macrobe meant to ask how much force would be required to get a body off its stationary position. On a flat surface, it is equal to the co-efficient of static friction x Weight of the body (weight, not mass). If the body is on, say, a road, the force required to get the body would be around 0.7 times the weight of the body.
{"url":"http://www.physicsforums.com/showthread.php?p=4233152","timestamp":"2014-04-21T02:08:21Z","content_type":null,"content_length":"31971","record_id":"<urn:uuid:173a3879-ec05-4478-9626-ad5d46385386>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - NURBS and Tracking algorithms Joe Drumm 04-05-2005, 11:25 AM I've been programming in C++ for about 12 or so years, but mainly in areas other then computer graphics. Anyway, lately I've been playing around in my free time with some graphics programming lately (has always interested me, but never did much more then draw and rotate a cube on my Amiga back in the day) and am interested in being able to display and interact with b-splines. What I'm looking for is an interactive way for the user to click points and have the curve build up, for example the way it works in some applications such as Commotion, Combustion, AfterEffects, Photoshop, etc. Can anyone point me in the right direction where I could find some information on the algorithms behind this? I have come across some the of the math, and while I can follow some of it, I'm no math expert. I'm looking for something more oriented to a programmer with some code examples or something that explains the math in a bit more down to earth level. The rest of the application I can handle no problem, but it's the rendering of the b-splines that I'm stuck on. Also, out of curiousity, I am interested in reading about the techniques behind image morphing as well a tracking (i.e. tracker in combustion / after effects). Thanks for any tips,
{"url":"http://forums.cgsociety.org/archive/index.php/t-227692.html","timestamp":"2014-04-17T06:54:18Z","content_type":null,"content_length":"8414","record_id":"<urn:uuid:3e1d8c9f-b373-4788-aa9e-3c9d81968631>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra ISBN: 9780321056580 | 0321056582 Edition: 8th Format: Hardcover Publisher: Addison Wesley Pub. Date: 1/1/2001 Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/college-algebra-8th-lial/bk/9780321056580","timestamp":"2014-04-20T14:06:25Z","content_type":null,"content_length":"43684","record_id":"<urn:uuid:29a165a2-334f-4a65-b4b7-2619df632aa4>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Bennett's Octagon-to-Square Dissection The Demonstration gives Bennett's dissection of an octagon to a square. G. N. Frederickson, Dissections: Plane & Fancy, New York: Cambridge University Press, 2002 pp. 150-151. "Bennett's Octagon-to-Square Dissection" from the Wolfram Demonstrations Project Contributed by: Izidor Hafner Based on work by: Greg N. Frederickson The Demonstration gives Bennett's dissection of an octagon to a square.
{"url":"http://demonstrations.wolfram.com/BennettsOctagonToSquareDissection/","timestamp":"2014-04-19T01:50:46Z","content_type":null,"content_length":"43629","record_id":"<urn:uuid:3854be81-d4fd-4e53-bd15-da45a81ef01b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Jamaica, NY Algebra 2 Tutor Find a Jamaica, NY Algebra 2 Tutor ...That being said, I managed to teach myself the large majority of the material, and ace the Math B Regents! I would love to improve any student's experience with that course, because it is quite interesting once you get it! My senior year of high school I took AP Statistics and became absolutely intrigued by the idea that math could be manipulated in such strange ways. 11 Subjects: including algebra 2, Spanish, statistics, geometry ...I have studied nearly every area of mathematics up to the undergraduate or graduate level. In high school, I took AP courses in statistics (5 on exam) and BC calculus (4 on exam). In college, in addition to my coursework I worked in the college's math help center, tutored privately, participated... 22 Subjects: including algebra 2, calculus, geometry, trigonometry ...I also excelled in Living Environment and other science courses while in high school. I took general chemistry at Fordham University in 2012, and received an A for the first semester and a B+ for the second. I tutored a small group of undergraduate evening students from Fordham University in this subject in 2013, where I chiefly took on a problem-based learning approach. 8 Subjects: including algebra 2, chemistry, biology, algebra 1 ...As a researcher for a very prominent political strategist, she had the opportunity to contribute to two published books published on politics and international relations. Lisa studied classical piano as a child at Juilliard School of Music. She continues her professional development and has com... 27 Subjects: including algebra 2, reading, writing, English ...In addition to scholarships, they were also accepted to many of their top pick schools, which provided scholarships to them as well. I have many references available if you wish to speak to a former student about their experiences. Feel free to contact me to talk about my methods further, and I... 33 Subjects: including algebra 2, physics, calculus, GRE Related Jamaica, NY Tutors Jamaica, NY Accounting Tutors Jamaica, NY ACT Tutors Jamaica, NY Algebra Tutors Jamaica, NY Algebra 2 Tutors Jamaica, NY Calculus Tutors Jamaica, NY Geometry Tutors Jamaica, NY Math Tutors Jamaica, NY Prealgebra Tutors Jamaica, NY Precalculus Tutors Jamaica, NY SAT Tutors Jamaica, NY SAT Math Tutors Jamaica, NY Science Tutors Jamaica, NY Statistics Tutors Jamaica, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/Jamaica_NY_Algebra_2_tutors.php","timestamp":"2014-04-19T15:13:00Z","content_type":null,"content_length":"24314","record_id":"<urn:uuid:e8d48486-23bd-4308-b4fe-604846d49e0a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
intersecting lines September 9th 2009, 03:27 AM #1 Junior Member Aug 2009 intersecting lines Given the intersecting lines: L1: x = 3t - 3, y = -2t, z = 6t + 7 L2: x = s - 6, y = -3s - 5, z = 2s + 1 a) Find the (acute) angle, rounded to the nearest degree, between the lines. b) Find the point of intersection of the two lines c) Find the equation of the plane containing the two lines. Given the intersecting lines: L1: x = 3t - 3, y = -2t, z = 6t + 7 L2: x = s - 6, y = -3s - 5, z = 2s + 1 a) Find the (acute) angle, rounded to the nearest degree, between the lines. b) Find the point of intersection of the two lines c) Find the equation of the plane containing the two lines. $v * u = |v||u|cos(\theta)$ So all you need is 2 vectors that align with those 2 lines. E.g. $v = \left(\frac{dx}{dt},\frac{dy}{dt},\frac{dz}{dt}\ri ght) , u = \left(\frac{dx}{ds},\frac{dy}{ds},\frac{dz}{ds}\ri ght)$ Hope that helps, but if you still have problems I'll try to help some more. To find the point of intersection, solve the two equation x= 3t- 3= s- 6 and y= -2= -3s-5 for s and t, then check to be sure they also satisfy z= 6t+ 7= 2s+ 1. (In three dimensions most pairs of lines don't intersect but since they ask for the point of intersection, I presume these do.) To find the plane they both lie in, take the cross product of the "direction vectors" (that you also used in finding the angle between them) to find a vector perpendicular to both and use that as the "normal vector" to the plane. This exact question was answered on "Question on Intersecting Lines" and you also posted this same question again on " lines/planes?" September 9th 2009, 04:43 AM #2 Sep 2009 September 9th 2009, 05:01 AM #3 MHF Contributor Apr 2005 September 9th 2009, 05:09 AM #4
{"url":"http://mathhelpforum.com/calculus/101306-intersecting-lines.html","timestamp":"2014-04-18T00:42:48Z","content_type":null,"content_length":"39327","record_id":"<urn:uuid:807f337f-1944-43c6-aac7-ca98465be073>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Cognitive Bias: Base-Rate Fallacy Image: MiG 19. Public Domain. The base-rate fallacy happens when available statistical data is ignored in favor of specific data to make a probability judgment. The C.I.A. gives this example to illustrate the problem: During the Vietnam War, a fighter plane made a non-fatal strafing attack on a US aerial reconnaissance mission at twilight. Both Cambodian and Vietnamese jets operate in the area. You know the following facts: (a) Specific case information: The US pilot identified the fighter as Cambodian. The pilot’s aircraft recognition capabilities were tested under appropriate visibility and flight conditions. When presented with a sample of fighters (half with Vietnamese markings and half with Cambodian) the pilot made correct identifications 80 percent of the time and erred 20 percent of the time. (b) Base rate data: 85 percent of the jet fighters in that area are Vietnamese; 15 percent are Cambodian. Question: What is the probability that the fighter was Cambodian rather than Vietnamese? A common procedure in answering this question is to reason as follows: We know the pilot identified the aircraft as Cambodian. We also know the pilot’s identifications are correct 80 percent of the time; therefore, there is an 80 percent probability the fighter was Cambodian. This reasoning appears plausible but is incorrect. It ignores the base rate–that 85 percent of the fighters in that area are Vietnamese. The base rate, or prior probability, is what you can say about any hostile fighter in that area before you learn anything about the specific sighting. The correct way to do this is to use Bayesian reasoning: If we suppose that there are 100 enemy fighter planes total, that means that 85 are Vietnamese and 15 are Cambodian. From paragraph (a), we know that the eye-witness identifies correctly enemy planes 80% of the time, so out of 85 Vietnamese planes, he would identify 68 correctly (85 * 0.80 = 68) and erroneously identify 17 (85 * 0.20 = 17). Out of the 15 Cambodian aircrafts, he would identify correctly 12 of them (15 * 0.80 = 12) and be mistaken about 3 (15 * 0.20 = 3). This makes a total of 71 Vietnamese and 29 Cambodian sightings, of which only 12 of the 29 Cambodian sightings are correct; the other 17 are incorrect sightings of Vietnamese aircraft. Therefore, when the pilot claims the attack was by a Cambodian fighter, the probability that the craft was actually Cambodian is only 12/29ths or 41 percent, despite the fact that the pilot’s identifications are correct 80 percent of the time. Ignore the base-rate in favor of specific data at your own risks! In some cases, it can make a huge difference. Normal (left) versus cancerous (right) mammography image. Public Domain image. Another example to make this kind of reasoning clearer: 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer? So, based on the numbers above, what is the probability that a woman who gets a positive mammography really has breast cancer? Lets go through it. If there are 10,000 women screened, 1% will have breast cancer. So that’s 100. 80% of those will get a positive result, so that’s 80. That leaves us with 9,900 women who don’t have breast cancer. Out of those, 9.6% will get a false-positive result, so that’s 950 women. You see where this is going? So out of 10,000 women who get tested, 80 will have a real positive result and 950 will have a false positive, for a total of 1,030 positive results. Out of those, only 80 really have cancer, that’s 7.76% (80/1,030 * 100 = 7.76). So if, with these numbers, you were to get a positive result to your mammography test, that would still mean that you only had a 7.76% chance of really having breast cancer. Counter-intuitive, but true. See also: Rationality Resources Nu Says: November 25, 2007 at 4:32 am | Reply I have two observations. In the first example, you don’t actually give the Bayesian probabilities for false alarms from the actual data, and assume that in each case it is 0.8. More likely, they will have differnent false alarm rates. Imagine for instance that all of the Migs were correctly identified and that 30% of the Cambodian a/c. This would bring your probability of a correct Cambodian sighting down to about 10%. In the second example, the statistic that scares me is that of those who get a second mamogram (standard procedure when you have an initial positive as you show) about 90 women out of 10,000 will have TWO false positives, and then be put on the track for having to treat breast cancer until found otherwise. Furthermore, 7 of those with breast cancer will have a second false negative. You are thus left with a conundrum in either case after the first positive. Michael Graham Richard Says: November 25, 2007 at 1:15 pm | Reply Hi Nu, I’m using the data from the CIA’s example (see link at the end). I’m sure that in a real-life situation things would be more complex. Also, I think that what they imply is that both the Vietnamese and Cambodian use the same planes (Soviet-made), so the witness is tested on how well he recognizes the markings on the planes and not plane types. I put a picture of a MiG 19 on top because it is one of the planes used in the war, but I didn’t do the research to see if both Cambodia and Vietnam used those. I know MiG 17 and 21 were more frequent, though. Very good observation about the second example. Scary stuff, really. why base rates matter – Small Gray Matters Says: September 13, 2009 at 9:51 pm | Reply [...] For a nice overview of empirical data on the base rate fallacy, see this article in BBS. For more blogospheric bloviation on base rates, see here, here, and here. [...] AmPahn Says: May 9, 2010 at 11:25 pm | Reply Hi everyone. I’m new here. Just wanted to say hi
{"url":"http://michaelgr.com/2007/11/24/cognitive-bias-base-rate-fallacy/","timestamp":"2014-04-18T00:12:51Z","content_type":null,"content_length":"54992","record_id":"<urn:uuid:0fc0d262-7442-47a4-a077-17bf9cb7098f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Poisson Lie 2-algebra Poisson Lie 2-algebra Symplectic geometry Basic concepts Classical mechanics and quantization A $(X, \omega)$ a 2-plectic manifold, its Poisson Lie 2-algebra is the higher analog of the Poisson bracket Lie algebra of a symplectic manifold. For the moment see here for more. Created on July 2, 2012 21:59:36 by Urs Schreiber
{"url":"http://www.ncatlab.org/nlab/show/Poisson+Lie+2-algebra","timestamp":"2014-04-21T09:39:08Z","content_type":null,"content_length":"18724","record_id":"<urn:uuid:889c612d-28db-4608-9a09-58f660d12fc6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematicians Find New Solutions To An Ancient Puzzle Many people find complex math puzzling, including some mathematicians. Recently, mathematician Daniel J. Madden and retired physicist, Lee W. Jacobi, found solutions to a puzzle that has been around for centuries. Jacobi and Madden have found a way to generate an infinite number of solutions for a puzzle known as 'Euler's Equation of degree four.' The equation is part of a branch of mathematics called number theory. Number theory deals with the properties of numbers and the way they relate to each other. It is filled with problems that can be likened to numerical puzzles. "It's like a puzzle: can you find four fourth powers that add up to another fourth power" Trying to answer that question is difficult because it is highly unlikely that someone would sit down and accidentally stumble upon something like that," said Madden, an associate professor of mathematics at The University of Arizona in Tucson. Equations are puzzles that need certain solutions "plugged into them" in order to create a statement that obeys the rules of logic. For example, think of the equation x + 2 = 4. Plugging "3" into the equation doesn't work, but if x = 2, then the equation is correct. In the mathematical puzzle that Jacobi and Madden worked on, the problem was finding variables that satisfy a Diophantine equation of order four. These equations are so named because they were first studied by the ancient Greek mathematician Diophantus, known as 'the father of algebra.' In its most simple version, the puzzle they were trying to solve is the equation: (a)(to the fourth power) + (b)(to the fourth power) + (c)(to the fourth power) + (d)(to the fourth power) = (a + b + c + d)(to the fourth power) That equation, expressed mathematically, is: a^4 + b^4 +c^4 +d^4 = (a + b + c + d)^4. Madden and Jacobi found a way to find the numbers to substitute, or plug in, for the a's, b's, c's and d's in the equation. All the solutions they have found so far are very large numbers. In 1772, Euler, one of the greatest mathematicians of all time, hypothesized that to satisfy equations with higher powers, there would need to be as many variables as that power. For example, a fourth order equation would need four different variables, like the equation above. Euler's hypothesis was disproved in 1987 by a Harvard graduate student named Noam Elkies. He found a case where only three variables were needed. Elkies solved the equation: (a)(to the fourth power) + (b)(to the fourth power) + (c)(to the fourth power) = e(to the fourth power), which shows only three variables are needed to create a variable that is a fourth power. Inspired by the accomplishments of the 22-year-old graduate student, Jacobi began working on mathematics as a hobby after he retired from the defense industry in 1989. Fortunately, this was not the first time he had dealt with Diophantine equations. He was familiar with them because they are commonly used in physics for calculations relating to string theory. Jacobi started searching for new solutions to the puzzle using methods he found in some number theory texts and academic papers. He used those resources and Mathematica, a computer program used for mathematical manipulations. Jacobi initially found a solution for which each of the variables was 200 digits long. This solution was different from the other 88 previously known solutions to this puzzle, so he knew he had found something important. Jacobi then showed the results to Madden. But Jacobi initially miscopied a variable from his Mathematica computer program, and so the results he showed Madden were incorrect. "The solution was wrong, but in an interesting way. It was close enough to make me want to see where the error occurred," Madden said. When they discovered that the solution was invalid only because of Jacobi's transcription error, they began collaborating to find more solutions. Madden and Jacobi used elliptic curves to generate new solutions. Each solution contains a seed for creating more solutions, which is much more efficient than previous methods used. In the past, people found new solutions by using computers to analyze huge amounts of data. That required a lot of computing time and power as the magnitude of the numbers soared. Now people can generate as many solutions as they wish. There are an infinite number of solutions to this problem, and Madden and Jacobi have found a way to find them all. "Modern number theory allowed me to see with more clarity the implications of his (Jacobi's) calculations," Madden said. "It was a nice collaboration," Jacobi said. "I have learned a certain amount of new things about number theory; how to think in terms of number theory, although sometimes I can be stubbornly The article, ""On a^4 + b^4 +c^4 +d^4 = (a + b + c + d)^4" is published in the March issue of The American Mathematical Monthly. Story Source: The above story is based on materials provided by University of Arizona. Note: Materials may be edited for content and length. Cite This Page: University of Arizona. "Mathematicians Find New Solutions To An Ancient Puzzle." ScienceDaily. ScienceDaily, 18 March 2008. <www.sciencedaily.com/releases/2008/03/080314145039.htm>. University of Arizona. (2008, March 18). Mathematicians Find New Solutions To An Ancient Puzzle. ScienceDaily. Retrieved April 16, 2014 from www.sciencedaily.com/releases/2008/03/080314145039.htm University of Arizona. "Mathematicians Find New Solutions To An Ancient Puzzle." ScienceDaily. www.sciencedaily.com/releases/2008/03/080314145039.htm (accessed April 16, 2014).
{"url":"http://www.sciencedaily.com/releases/2008/03/080314145039.htm","timestamp":"2014-04-16T13:51:36Z","content_type":null,"content_length":"86472","record_id":"<urn:uuid:3f39292c-84ad-42d1-8659-49dede07ece9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
How Not to Have Problems With the GMAT Problem Solving Section Edit Step 1: Sign Up You can sign up using your Facebook account too. Step 2: Reduce Your CO2 Step 3: Spread The Word <p>Students will develop visual reasoning and spatial skills while learning this branch of math. It can be made engaging through the use of objects and shapes which students can identify with. Since almost everything that we use and see can be broken down into geometrical shapes, getting students to see how Geometry can be useful should not be too hard. Students will use a lot of equations to find out the area, volume, perimeter and circumference of shapes and it is essential for students to memorize these.</p><p></p><p>Geometry students should make sure that they practice a lot of sums in order to memorize the formulas and learn how to apply them. Daily practice will also help students understand the properties of the different shapes that they learn about. If students find this subject a bit challenging, they should consider getting extra help with Geometry on a regular basis. Students can find great help online, from a number of online tutoring services. Students signing up for tutoring over the internet find the service convenient and easy to use, while providing excellent tutoring.</p><p></p><p>Geometry tutoring online enables students to work with qualified and experienced Geometry tutors who are available round the clock. Plane Geometry and Solid Geometry are two main divisions of Geometry; plane geometry deals with circles, lines and triangles where as solid geometry is related to prisms, cubes and pyramids. Additionally, different types of angles, symbols are also used to represent geometric formulas. These days, students can find all Geometry topics starting from the basic concepts to complex problems of different grades online. Students can also opt for online tutoring help for Geometry any time.</p><p></p><p>The subject is all about some topics like Geometry facts and calculations, finding Area, Surface Area, Calculating Perimeter, Circumference and Volume. Each topic contains detailed explanations with ample examples and these give students a better understanding. Hence, students become capable of solving all kinds of Geometric problems instantly. Additionally, the subject has vast usage in real life. While calculating a square footage of home or determining the length and width of any object, we use Geometric formulas. It is mostly used in Engineering and particularly in Architecture. Most students find this subject interesting due to its various angles, shapes and figures. Others who face some difficulty in Geometry, can easily take Geometry homework help online. It not only assists students to get their homework done in time, but also provides comprehensive and step-by-step explanations on each topic.</p><p></p><p>Online tutoring has brought dramatic changes in the way people learn and hence, it makes each subject understandable and interesting to students. The positive aspects of online tutoring make it a popular learning method among students. In that respect, Geometry tutoring is a definite way to solve any doubts instantly. Students can brush up concepts on any specific topic or can get solutions for any tough question instantly. In an online environment, students can easily interact with an online tutor who can answer several questions quickly and easily. The only thing students need to do is schedule an online session at a convenient time. More over, student can choose to take sessions with a preferred tutor anytime from home. It is a useful learning method for struggling Geometry students of any grade.</p><p></p><p><a href="http://male60help.skyrock.com/ 3178284035-The-Latest-Trend-Of-Supplemental-Education-Online-Tutoring.html">Geometry Study Tips</a>, <a href="http://www.nexopia.com/users/bun85font/blog/1-geometry-help-online">A Good Math Tutor</ a>, <a href="http://www.electricianprograms.org/content/tips-handle-gmat-geometry/">Earn Extra Money - Become an Online Tutor</a></p> How Not to Have Problems With the GMAT Problem Solving Section Carbon Tree 2 people in this tree reduced: Our Impact: 0.00Trees planted Annual Carbon Reduction Carbon Reductions Impact In Tons of Carbon Carbon Reduction Goals 32% (1.1 Tons) How Not to Have Problems With the GMAT Problem Solving Section's Wall
{"url":"http://www.makemesustainable.com/groups/253580","timestamp":"2014-04-21T07:05:58Z","content_type":null,"content_length":"22011","record_id":"<urn:uuid:42f5666a-bf52-4481-8cbb-37c9135fe2c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Kenneth I. Appel Summary This section contains 388 words (approx. 2 pages at 300 words per page) World of Mathematics on Kenneth I. Appel The American mathematician Kenneth I. Appel was born in Brooklyn, New York on October 8, 1932. He earned a B.S. degree from Queens College in 1953, then served in the U.S. Army for two years following his graduation. In 1955, he enrolled in graduate studies at the University of Michigan, subsequently earning his M.A. and Ph.D. degrees there in 1956 and 1959, respectively. In 1959, he married Carole Stein. Between 1959 and 1961, Appel was on the technical staff of the Institute for Defense Analyses. In 1961, he joined the faculty of the Mathematics Department at the University of Illinois as Assistant Professor, later being advanced to Associate Professor in 1967, and to Professor in 1977. In 1993 he became Chairman of the Department of Mathematics at the University of New Hampshire. In 1976, Appel and a colleague, Wolfgang Haken, succeeded in proving that any map in a plane or on a sphere can be colored with only four colors in such a way that no two neighboring countries are of the same color. When Appel came up with his proof, mathematicians had been attempting to prove the theorem for over a hundred years. In 1852, Francis Guthrie had written to his brother asking him whether he knew of any proof that four colors are always sufficient. The brother relayed the question to the noted British mathematician Augustus De Morgan (1806-1871), but De Morgan did not know the answer. In 1878, the Cambridge mathematician Arthur Cayley (1821-1895) brought the problem before the London Mathematical Society. Very soon thereafter, Alfred Bray Kempe published a proof that the conjecture is true. But in 1890 Percy John Heawood discovered a flaw in Kempe's proof. The theorem thus remained unproven until Appel and Haken came up with their proof in 1976. Appel and Hagen's proof, which required a very large amount of computer time, was described in the book-length article by Appel and Haken entitled Every Planar Map is Four Colorable, Contemporary Mathematics, vol. 98, American Mathematical Society, 1989. A summary of the proof and a history of work on the problem was given in an article by Appel and Haken entitled: The Solution of the Four-Color-Map Problem,, Scientific American, vol. 237, No. 4, pp. 108-121 (1977). In 1979, Appel was awarded the Fulkerson Prize in Discrete Mathematics by the American Mathematical Society and Mathematical Programming Society. This section contains 388 words (approx. 2 pages at 300 words per page)
{"url":"http://www.bookrags.com/biography/kenneth-i-appel-wom/","timestamp":"2014-04-17T11:23:34Z","content_type":null,"content_length":"32762","record_id":"<urn:uuid:fd4808e9-24c5-47f6-9031-73010ea36525>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics307L F08:Schedule/Week 8 agenda/Linear fit theory From OpenWetWare Following John R. Taylor, "An Introduction to Error Analysis," 2nd edition, Chapter 8: We have a relation as follows, and want to fit $\ A$ and $\ B$ to the data $\ y=A+Bx$ Assume same gaussian distribution for random error in each $\ y_i$ (same $\ \sigma$ for all). Not necessary, but simplifies derivation and results Principle of maximum likelihood For a given $\ A$ and $\ B$, the probability for each $\ y_i$ is: $Prob(y_i) \propto \frac{1}{\sigma_y}e^{-(y_i-A-Bx_i)^2/2\sigma_y^2}$ And we can call the probability of getting all of the data points as: $Prob = Prob(y_1) \cdot Prob(y_2) \cdot ... \cdot Prob(y_N)$ Each term has the same σ[y], so can be simplified as: $Prob \propto \frac{1}{\sigma_y^N}e^{-\chi^2/2}$ $chi-squared, \chi^2 = \sum_{i=1}^N \frac{\left (y_i - A - Bx_i \right )^2}{\sigma_y^2}$ To maximize the probability, minimize the chi-squared sum ... take derivatives, solve system of equations, obtain: $A=\frac{\sum x_i^2 \sum y_i^2 - \sum x_i \sum x_i y_i}{\Delta}$ $B=\frac{N\sum x_i y_i - \sum x_i \sum y_i}{\Delta}$ $\Delta=N \sum x_i^2 - \left ( \sum x_i \right )^2$ Can also derive formulas for weighting each point individually Also, formulas for calculating uncertainty in fit parameters
{"url":"http://www.openwetware.org/wiki/Physics307L_F08:Schedule/Week_8_agenda/Linear_fit_theory","timestamp":"2014-04-18T11:14:27Z","content_type":null,"content_length":"17919","record_id":"<urn:uuid:fc8ac8d0-08ed-4682-a865-8ef3b9a1f700>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Cartoon Delta V Map This is the second cartoon delta-v map I've drawn. Clicking on the above can give a larger version. first cartoon map gave a lot more space to EML5 and little to L1 or L2. I knew of L4 and L5 through fiction like Gundam which was probably inspired by Gerard O'Neill's The High Frontier . Since then I've become less interested in L4 and L5 and more interested in L1 and L2. This new map reflects that shift in focus. I had heard of the Interplanetary Transport Network as well as Shane Ross, Martin Lo, and Edward Belbruno. But I knew almost nothing about the low delta V routes achieved with n-body mechanics. I had a vague notion that Lagrange points were involved but that was about it. Then in 2009 I came across a thread in Nasa Space Flight entitled An Alternative Lunar Architecture . In that thread Kirk Sorensen wrote at length about EML2 and work done by Robert Farquhar. Farquhar's 3 body work was done in the late 1960's and early 70's, decades before Ross, Belbruno and other modern advocates of 3-body mechanics. Here's one of the Farquhar graphics Sorensen posted to that thread: This is a 9 day route from LEO to EML2 taking delta V of about 3.5 km/s. It's time reversible so .4 km/s can drop a payload from from EML2 to an atmosphere grazing perigee. There's a 4 day route to EML1 that takes 3.8 days. It was surprising to me that EML2 could be reached with less delta V even though it's on the far side of the moon. It was in 2009 that I became more interested in L1 and L2. There are routes between LEO and EML1&2 taking even less delta V, but these are time consuming. These are described by Andreas Stock's Investigation on Low Cost Transfer Options to the Earth-Moon Libration Point Region . 3.1 km/s seems to be the minimum between LEO for EML1 as well as EML2. In the map above I've depicted these routes with darker brown branches. EML1 moves slower than an ordinary earth orbit at that altitude. An EML1 object nudged a little earthward will fall into an approximately 100,000 by 300,000 km elliptical orbit about the earth. A .3 km/nudge suffices to send to it to a 36,000 km perigee where a 1 km/s burn can circularize the payload at geosynch orbit. A .7 burn can drop an EML1 payload to a LEO grazing orbit. If the LEO grazing orbit passes through the upper atmosphere, aerobraking can provide the 3.1 km/s needed to circularize at LEO. EML2 moves faster than an ordinary earth orbit at its altitude. Nudged a bit away from the moon, a payload from EML2 will sail to a 1.8 million km apogee. The Sun Earth Lagrange 1 and 2 are 1.5 million kilometers from earth, so transfer from EML2 to SEL1 or 2 can be done with little delta V. Or an EML2 payload can sail through SEL1 or 2 completely out of earth's sphere of influence. Nudge either EML1 or 2 a little moonward and they will fall into an approximately 5,000 x 60,000 km lunar orbit. Since the moon's rotating about the earth, a 60,000 km apolune can pass by both EML2 and EML1 over time. Thus it's possible to move between EML1 and 2 with very little delta V. An object falling to a 300 km earth altitude from either EML1 or 2 will be traveling just a hair under escape when it reaches low altitude. Both EML1 and 2 make a complete circuit each 27.3 days so by timing your drop it's possible to choose longitude of perigee during a launch window. Plane changes are much less expensive at high altitudes so the velocity vector can be pointed in the right direction at perigee. Starting at EML2, injection into Mars or Venus Hohmanns can be done with around .9 km/s delta V (.4 km/sec to drop and a .5 km/s burn at perigee). On arriving at Mars, .7 km/s suffices to exit Hohmann for a 300 x 570,000 km Mars capture orbit. A Near Earth Asteroid with a of 2 km/s or less can be dropped into an earth capture orbit using a lunar swing by. From there repeated lunar swing bys and little delta V can park the rock in high lunar orbit. Planetary Resources hopes to search for smaller rocks with their Arkyd orbital telescopes. If successful, they will likely find a multitude of rocks within .2 km/s of EML2. Many NEAs are water rich and the cold traps at the lunar poles may have minable water deposits. So there are a number of potential propellent sources close to the earth-moon L1 and 2. Propellent sources high on the slopes of earth's gravity could break the exponent in Tsiolkovsky's rocket equation. This would give us mass fractions much easier to deal with. The highest delta V budget we'd have to endure is the 9.5 km/s from earth to LEO. Round trips between most other orbits would be in the neighborhood of 4 or 5 km/s. Some notes on Venus: An earlier version of this map indicated delta V from Venus' surface to a capture orbit was 11.6 km/s. But then I came across an excellent delta V map by a fellow who calls himself Curious Metaphor. In a of his map, I was convinced Venus' thick dense atmosphere would make for a slower, steeper ascent from the planet surface. I've added 20 km/s gravity loss between Venus' upper atmosphere and surface. In Venus' upper atmosphere I've added a location labeled Landis Land . Named for Geoffrey Landis who noted there is a layer in Venus' atmosphere with earth like temperature and pressure. Moreover, a nitrogen/oxygen mix such as we breath would be buoyant in Venus' CO2 atmosphere. Landis is a scientist as well as a science fiction writer. Some of his fiction takes place on the cloud cities of Venus. 4 comments: Nydoc said... I really like this map. Do you have a copy of Andreas Stock's paper Investigation on Low Cost Transfer Options to the Earth-Moon Libration Point Region? The link appears to be broken now. If you do, please send me a copy at peter.m.mcarthur@gmail.com In your "Inflated delta V's" post you gave the delta-v between highly elliptical orbits from Earth to Mars as 1.1 km/s. However, here to go from EM2 to a highly elliptical orbit around Mars is given as .9 + .7 = 1.6 km/s. I would have thought to leave from EM2 would cost less delta-v since it is further out of Earth's gravity well. In any case, it is interesting that to go from EM2 to a Mars transfer orbit, the required delta-v is so small, less than 1 km/s, whereas from LEO it's about 4 km/s. Bob Clark Bob, the capture orbits for departure are a best case scenario that would seldom (if ever) occur in the normal course of things. For arrival, capture they can be used routinely, so long as peri-apsis passes through the upper atmosphere. Following Prussing and Conway's method of Sphere of Influence (SOI), earth's SOI would be about 1,000,000 kilometers from earth's center. This would be apogee. Perigee would be 300 km, just above earth's atmosphere. EML2 follows a roughly circular path 450,000 km from earth's center. If nudged loose from the moon's influence, an object at EML2 would fly to a 1,800,000 km apogee, beyond SOI. It is quite likely the sun would tear the object from earth's influence into a heliocentric orbit. Or if it does descend back to the earth, It would take around 4 months to return to a perigee where the Oberth benefit could be enjoyed. So for routes from EML2 I use Farquhar's path to a perigee deep in earth's gravity. It takes a little more delta V but is still quite good. Nydoc, I have sent you my copy of that pdf. I have also written the school asking permission to upload that pdf and make it available for download.
{"url":"http://hopsblog-hop.blogspot.com/2013/04/cartoon-delta-v-map.html","timestamp":"2014-04-18T23:16:27Z","content_type":null,"content_length":"70623","record_id":"<urn:uuid:88eebfe2-e923-473e-8d7b-ef40a4697ae5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Order of Operations with Parenthesis and Exponents Print Worksheet # 1 in PDF Answers on 2nd page of PDF. The order of operations is something in math that is often committed to memory. To some, the order makes sense and to others, it's just a matter of remembering the order. There are many mnenomics devices that will help and my advice, is to use what works for you. Some people use PEDMAS, other use BEDMAS (Parenthesis/Brackets, Exponents, Division, Multiplication, Addition, Subtraction). I have also heard PEMDAS as Please Excuse My Dear Aunt Sally as a way to remember the acronym. To learn more about the order of operations, you may wish to see the full article here.
{"url":"http://math.about.com/od/prealgeb2/ss/orderof.htm","timestamp":"2014-04-20T14:01:01Z","content_type":null,"content_length":"41963","record_id":"<urn:uuid:de2da0a7-31a5-4c82-954b-6ef383931d68>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Need urgent help in writing an algorithm No Profile Picture Junior Member Devshed Newbie (0 - 499 posts) Join Date Apr 2003 Rep Power Need urgent help in writing an algorithm Hi everyone, I am new to this forum and it seems to be very good one. I am in a position to write an algorithm in a short time and since I haven't done any programming for the last 2 to 3 years, i need any of your help. Below is the scnario: There is an array (100 rows X 6 cols) with string elements of X, Y, Z and U. These elements are randomly spread out in the array. The algorithm is supposed to search the array to connect X to X, Y to Y and Z to Z. And it shouldn't pair up with U (you can take U as garbage, have to avoid). The output is a file (an excel file) that should give the location of the pair. The critical part is that we should connect to the closest one. For example, say array[0,0], array[0,1] and array[3,5] all consist of X. The algorithm should pair up array[0,0] with [0,1]. I think I have said enough, but if it doesn't make sense or you want to do any assumption, go ahead and do it. Thanks in advance, can you say, for example how many of each of X, Y and Z in the matrix - and how many pairs you want recorded. thanks for your intention to help me. To answer your question, the number of X,Y and Z is not defined, it can be in any amount. Say we have 21 X es, obviously we can pair up 10 and leave the one. ok - and am i right to think the the ten pairs chosen should be such that the total distance between the items of the pairs is as small as possible? how do you calculate distance? is something that considers all possible pairs acceptable or does it miss the point? you are right. the distance should be as small as possible. however, the priority goes for the smallest distance. that means if you don't have a pair close, then you can pair with the one that is far instead of leaving it alone unpaired. in my case, you shouldn't worry about the number of X, Y and Z. I just need the algorithm to search (say in all 8 directions) and find the pair in an array. the distance can be calculated from the cordinates with Pythagrom theary ( d=sqrt(x2-x1)square + (y2-y1)square ) ). here's a thought, not sure how efficient it would be, but: go thru the entire array once, any x,y,z value found would have it's position in the array stored into a node on a linked list. x,y, and z would each have their own linked list. the node data members would be x,y, and the next pointer. at least this way u only need to traverse the array once. then u could write a method for the listclass that would sort the nodes according to proximity of each other, by this i mean: the first two nodes would be a pair, then the next two would be a pair, etc... im tryin to think of the algorithm to do that right now, i'll get back to u soon. ok, how does this sound, assuming that u take my advice and create linked lists like i said in above post. here is is: -list class would need a member that kept track of the number of nodes -node would need an additional data member, an array of ints representing the distance between itself and the other nodes. the array would need to grow, perhaps a vector would be a better idea? to explain this clearer, lets say there are 3 nodes in list: node 1 would have an array like this, distArray{dist to itself, dist. to node 2, dist. to node 3}, then node 2 array, distArray{dist. to node 1, dist. to itself, dist to node3}, node 3 array distArray{dist. to node1, dist. to node2, dist. to itself} -the measuring of distances would occur when u insert the node into the list. assuming the node being inserted is not the first node, something like this: addNode(node *ptr) node *tmpPtr = head; node *curPtr = ptr; int x = 0; while(tmpPtr->next != NULL&& x<numNodes) val = distance between tmpPtr and curPtr; tmpPtr[numNodes-1] = val; curPtr[x] = val; tmp = tmp->next; //here u need one more loop for the final node not done(when tmpPtr->next == NULL) //then set curPtr[x] = 0 as this will represent the distance to itself! //then attach curPtr to the list now u have all the distances stored in the arrays. so if there is an odd number of nodes, first thing to do is eliminate the further node, this can be calculated by summing the elements in the array of each node, the largest total is the furthest away and can be eliminated. - then start with the first node's distance array, find the smallest value in that array(besides 0 of course) -move to the next node's array. u need to compare 2 things here, 1)to see if u can match the value from the previous step and 2)to make sure that value isnt larger than any values in the current node, as that would mean this node has another node that is closer to it. -rinse and repeat -the good thing about this is that u never have to calculate the same distance twice! this was what i was trying to work around. -however there are still some holes, like how to deal with even distances, how to remove a pair from searching once u have made a match for it, prolly some more things i cant think of. hope that helps a bit. tell me if u come up with something better,this stuff interests me alot!! Last edited by infamous41md; April 7th, 2003 at 11:19 PM. HI infamous41md, thanks for your help here. I will take your advice and try out soon. if I have any questions or concerns, I will let you know. again, thanks a lot and I really appreciate your help. No Profile Picture Contributing User Devshed Newbie (0 - 499 posts) Join Date Mar 2001 Rep Power No Profile Picture Junior Member Devshed Newbie (0 - 499 posts) Join Date Apr 2003 Rep Power No Profile Picture Contributing User Devshed Newbie (0 - 499 posts) Join Date Mar 2001 Rep Power No Profile Picture Junior Member Devshed Newbie (0 - 499 posts) Join Date Apr 2003 Rep Power No Profile Picture Junior Member Devshed Newbie (0 - 499 posts) Join Date Apr 2003 Rep Power
{"url":"http://forums.devshed.com/software-design-43/urgent-help-writing-algorithm-57259.html","timestamp":"2014-04-16T23:16:37Z","content_type":null,"content_length":"85735","record_id":"<urn:uuid:e1eaa2e8-dcf7-4a56-9caa-63e8d877cd3c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus: Early Transcendentals More About This Textbook Success in your calculus course starts here! James Stewart's CALCULUS: EARLY TRANSCENDENTALS texts are world-wide best-sellers for a reason: they are clear, accurate, and filled with relevant, real-world examples. With CALCULUS: EARLY TRANSCENDENTALS, Seventh Edition, Stewart conveys not only the utility of calculus to help you develop technical competence, but also gives you an appreciation for the intrinsic beauty of the subject. His patient examples and built-in learning aids will help you build your mathematical confidence and achieve your goals in the course. Read More Show Less Editorial Reviews A textbook for a course introducing students both to the practical applications and to the beauty of the field. Stewert (McMaster U.) focuses on the basic concepts, and presents the topics geometrically, numerically, and algebraically. No dates are noted for the early editions, but the fourth contains new exercises, updated data, projects for individual or group work, and other pedagogical features. The CD-ROM offers a demonstration version of the Journey Through Calculus program, which is referred to at appropriate places in the text. A full array of auxiliary material, including instructors guide and laboratory manuals, are also available. Annotation c. Book News, Inc., Portland, OR (booknews.com) Read More Show Less Product Details • ISBN-13: 9780538497909 • Publisher: Cengage Learning • Publication date: 11/19/2010 • Edition description: New Edition • Edition number: 7 • Sales rank: 94,527 • Product dimensions: 9.00 (w) x 11.34 (h) x 1.72 (d) Table of Contents Diagnostic Tests. A Preview of Calculus. 1. FUNCTIONS AND MODELS. Four Ways to Represent a Function. Mathematical Models: A Catalog of Essential Functions. New Functions from Old Functions. Graphing Calculators and Computers. Exponential Functions. Inverse Functions and Logarithms. Review. Principles of Problem Solving. 2. LIMITS AND DERIVATIVES. The Tangent and Velocity Problems. The Limit of a Function. Calculating Limits Using the Limit Laws. The Precise Definition of a Limit. Continuity. Limits at Infinity; Horizontal Asymptotes. Derivatives and Rates of Change. Writing Project: Early Methods for Finding Tangents. The Derivative as a Function. Review. Problems Plus. 3. DIFFERENTIATION RULES. Derivatives of Polynomials and Exponential Functions. Applied Project: Building a Better Roller Coaster. The Product and Quotient Rules. Derivatives of Trigonometric Functions. The Chain Rule. Applied Project: Where Should a Pilot Start Descent? Implicit Differentiation. Laboratory Project: Families of Implicit Curves. Derivatives of Logarithmic Functions. Rates of Change in the Natural and Social Sciences. Exponential Growth and Decay. Related Rates. Linear Approximations and Differentials. Laboratory Project: Taylor Polynomials. Hyperbolic Functions. Review. Problems Plus. 4. APPLICATIONS OF DIFFERENTIATION. Maximum and Minimum Values. Applied Project: The Calculus of Rainbows. The Mean Value Theorem. How Derivatives Affect the Shape of a Graph. Indeterminate Forms and l'Hospital's Rule. Writing Project: The Origins of l'Hospital's Rule. Summary of Curve Sketching. Graphing with Calculus and Calculators. Optimization Problems. Applied Project: The Shape of a Can. Newton's Method. Antiderivatives. Review. Problems Plus. 5. INTEGRALS. Areas and Distances. The Definite Integral. Discovery Project: Area Functions. The Fundamental Theorem of Calculus. Indefinite Integrals and the Net Change Theorem. Writing Project: Newton, Leibniz, and the Invention of Calculus. The Substitution Rule. Review. Problems Plus. 6. APPLICATIONS OF INTEGRATION. Areas Between Curves. Applied Project: The Gini Index. Volume. Volumes by Cylindrical Shells. Work. Average Value of a Function. Applied Project: Calculus and Baseball. Applied Project: Where to Sit at the Movies. Review. Problems Plus. 7. TECHNIQUES OF INTEGRATION. Integration by Parts. Trigonometric Integrals. Trigonometric Substitution. Integration of Rational Functions by Partial Fractions. Strategy for Integration. Integration Using Tables and Computer Algebra Systems. Discovery Project: Patterns in Integrals. Approximate Integration. Improper Integrals. Review. Problems Plus. 8. FURTHER APPLICATIONS OF INTEGRATION. Arc Length. Discovery Project: Arc Length Contest. Area of a Surface of Revolution. Discovery Project: Rotating on a Slant. Applications to Physics and Engineering. Discovery Project: Complementary Coffee Cups. Applications to Economics and Biology. Probability. Review. Problems Plus. 9. DIFFERENTIAL EQUATIONS. Modeling with Differential Equations. Direction Fields and Euler's Method. Separable Equations. Applied Project: How Fast Does a Tank Drain? Applied Project: Which is Faster, Going Up or Coming Down? Models for Population Growth. Linear Equations. Predator-Prey Systems. Review. Problems Plus. 10. PARAMETRIC EQUATIONS AND POLAR COORDINATES. Curves Defined by Parametric Equations. Laboratory Project: Families of Hypocycloids. Calculus with Parametric Curves. Laboratory Project: Bezier Curves. Polar Coordinates. Laboratory Project: Families of Polar Curves. Areas and Lengths in Polar Coordinates. Conic Sections. Conic Sections in Polar Coordinates. Review. Problems Plus. 11. INFINITE SEQUENCES AND SERIES. Sequences. Laboratory Project: Logistic Sequences. Series. The Integral Test and Estimates of Sums. The Comparison Tests. Alternating Series. Absolute Convergence and the Ratio and Root Tests. Strategy for Testing Series. Power Series. Representations of Functions as Power Series. Taylor and Maclaurin Series. Laboratory Project: An Elusive Limit. Writing Project: How Newton Discovered the Binomial Series. Applications of Taylor Polynomials. Applied Project: Radiation from the Stars. Review. Problems Plus. 12. VECTORS AND THE GEOMETRY OF SPACE. Three-Dimensional Coordinate Systems. Vectors. The Dot Product. The Cross Product. Discovery Project: The Geometry of a Tetrahedron. Equations of Lines and Planes. Cylinders and Quadric Surfaces. Review. Problems Plus. 13. VECTOR FUNCTIONS. Vector Functions and Space Curves. Derivatives and Integrals of Vector Functions. Arc Length and Curvature. Motion in Space: Velocity and Acceleration. Applied Project: Kepler's Laws. Review. Problems Plus. 14. PARTIAL DERIVATIVES. Functions of Several Variables. Limits and Continuity. Partial Derivatives. Tangent Planes and Linear Approximation. The Chain Rule. Directional Derivatives and the Gradient Vector. Maximum and Minimum Values. Applied Project: Designing a Dumpster. Discovery Project: Quadratic Approximations and Critical Points. Lagrange Multipliers. Applied Project: Rocket Science. Applied Project: Hydro-Turbine Optimization. Review. Problems Plus. 15. MULTIPLE INTEGRALS. Double Integrals over Rectangles. Iterated Integrals. Double Integrals over General Regions. Double Integrals in Polar Coordinates. Applications of Double Integrals. Surface Area. Triple Integrals. Discovery Project: Volumes of Hyperspheres. Triple Integrals in Cylindrical Coordinates. Discovery Project: The Intersection of Three Cylinders. Triple Integrals in Spherical Coordinates. Applied Project: Roller Derby. Change of Variables in Multiple Integrals. Review. Problems Plus. 16. VECTOR CALCULUS. Vector Fields. Line Integrals. The Fundamental Theorem for Line Integrals. Green's Theorem. Curl and Divergence. Parametric Surfaces and Their Areas. Surface Integrals. Stokes' Theorem. Writing Project: Three Men and Two Theorems. The Divergence Theorem. Summary. Review. Problems Plus. 17. SECOND-ORDER DIFFERENTIAL EQUATIONS. Second-Order Linear Equations. Nonhomogeneous Linear Equations. Applications of Second-Order Differential Equations. Series Solutions. Review. Problems Plus. Appendix A: Numbers, Inequalities, and Absolute Values. Appendix B: Coordinate Geometry and Lines. Appendix C: Graphs of Second-Degree Equations. Appendix D: Trigonometry. Appendix E: Sigma Notation. Appendix F: Proofs of Theorems. Appendix G: The Logarithm Defined as an Integral. Appendix H: Complex Numbers. Appendix I: Answers to Odd-Numbered Exercises. Read More Show Less
{"url":"http://www.barnesandnoble.com/w/calculus-james-stewart/1100295539?ean=9780538497909&itm=1&usri=9780538497909&r=1","timestamp":"2014-04-23T11:12:46Z","content_type":null,"content_length":"149934","record_id":"<urn:uuid:af1580b3-0746-4608-8589-7953872c0537>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Negative Pedal Curves of an Ellipse The pedal curve of a curve with respect to a point is the curve whose points are the closest to on the tangents of . This Demonstration concerns the inverse of a pedal curve, sometimes called the negative pedal curve. The negative pedal of a curve can be defined as a curve such that the pedal of is . This Demonstration allows you to explore two sets of the negative pedal curves of an An ellipse can be described parametrically by two equations, each of which contains a constant; in this Demonstration these constants have been labeled and : , . The same two constants appear in the parametric equations of the negative pedal curves. The set of negative pedal curves with respect to the origin includes a curve known as "Talbot's curve" which has four cusps and two ordinary double points. The set of curves with respect to one of the foci is called "Burleigh's ovals" and includes a fish-like curve that inspired a paper by H. Martyn Cundy ( Mathematical Gazette , 85(504), pp. 439-445).
{"url":"http://demonstrations.wolfram.com/NegativePedalCurvesOfAnEllipse/","timestamp":"2014-04-20T23:31:18Z","content_type":null,"content_length":"43076","record_id":"<urn:uuid:8cbd6f04-c76a-4176-9c83-aa15c36ca30f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics This is a lovely article on Weierstrass and the early development of approximation theory. It begins with a short biography of Weierstrass. Two main themes stand out in his work: To set a new standard of rigor in analysis, and his love for power series or more generally for function series. The first theme is documented by his construction of a continuous, nowhere differentiable function which was shocking to the mathematical community at the time. Weierstrass presented this in his lectures since 1861 but published his example (using a cosine series) in 1872. Further history on that by Bolzano, Riemann, Takagi, and du Bois-Reymond is mentioned. The second theme is documented by the Fundamental Theorem of Approximation Theory: Algebraic polynomials are dense in $C\left[a,b\right] $, where $-\infty <a<\sigma <\infty$. This was published by Weierstrass. in 1885 when he was 70 years old, and proved by representing $f\in C\left[a,b\right]$ as a limit of integrals ${\int }_{-\ infty }^{\infty }$ depending on a parameter $k$. Thus $f$ is the uniform limit of a sequence of entire functions and hence of a sequence of polynomials. Weierstrass. states and proves also the analogous theorem about the density of trigonometric polynomials. The author then lists and analyses further proofs (before 1913) of the Fundamental Theorem. He puts them into three groups. In Group 1 there are proofs based on singular integrals (Weierstrass, Picard, Fejér, Landau), while those in Group 2 are based on the approximation of a particular function, like a polygonal function (Runge, Lebesgue, Mittag-Leffler, Lerch). Left over are those in Group 3 by Bernstein, Volterra, Lerch. It is interesting to note that Runge proved (also in 1885!) that rational functions are dense in $C\left[a,b\right]$ but overlooked the fact that this is true already for polynomials. Lebesgue reduces the Fundamental Theorem to the special case $f\left(x\right)=|x|$, and he raises (1908) apparently for the first time questions about the speed of approximation, three years before Jackson’s dissertation appeared. The last section deals with various generalizations: Müntz’s theorem, Hermite-Fejér interpolation, Carleman’s theorem, Stone-Weierstrass, and Bohman-Korovkin. All these theorems are given with full explanation, proofs, as well as historical notes. It is clear that this article is necessary reading for all approximators. 41-02 Research monographs (approximations and expansions)
{"url":"http://zbmath.org/?q=an:0968.41001","timestamp":"2014-04-17T18:50:34Z","content_type":null,"content_length":"23295","record_id":"<urn:uuid:42c3c120-b58b-4b15-a628-db3030df4a32>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Journd of Pure and Applied Algebra 40 (1986) 103-113 Eli ALJADEFF and Shmuel ROSSET Tel Aviv University, Ramat Aviv, 69978 Israel Communicated by H. Bass Received 18 December 1984 Revised 22 April 1985 Suppose r is a group and a homomorphism t : I'-+ Aut(K) is given. Here K is a field and Aut(K) is the group of field automorphisms of K. Then we say that r acts on K. In such circumstances the multiplicative group K* is a r-module and it is well known that elements of H2(r, K*) give rise to `crossed product' algebras. To recall this let aeN2(r, K*) and let f: TxT-+K* be a 2-cocycle representing a. One defines the crossed product, which we denote by as follows. As left K vector space it is a direct sum UaErKuo. Multiplication is defined so as to satisfy the rule Here a(~) is the action of t(a) on y. It is easy to see that this multiplication is associative (this follows from the cocycle condition) and that, up to isomorphism of rings, KfT only depends on (Y, not on the choice of f. It is thus assumed
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/378/5469714.html","timestamp":"2014-04-20T19:17:28Z","content_type":null,"content_length":"8156","record_id":"<urn:uuid:8ea2a2e5-4431-4ef8-80d4-e052fb3447b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
st: re: system estimation with dynamic panel [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: re: system estimation with dynamic panel From Kit Baum <baum@bc.edu> To statalist@hsphsun2.harvard.edu Subject st: re: system estimation with dynamic panel Date Tue, 5 Aug 2008 20:02:47 -0400 < > Hewan said I used the example from Nicola so as to not introduce a whole different example but to re-ask Nicola's question. In my particular case, my equations didn't have a dynamic nature altogether, but they did have endognous vars on the right hand sides (e.g. one equations dependent var was the other's regressor and vice versa). I had, at the time, wanted to use a "panel version" of reg3, i.e. use 3SLS to estimate a simultaneous equation system, only that each of three equations wasn't a cross- section (i.e. along the lines of y_i = a + b*x_i + e_i; consider the x_it a vector of regressors) but rather panel (ie. y_it = a + b*x_it + e_it). Is it correct then that there is no panel version of reg3, and one would have to use a more "manual" approach in imposing the desired restrictions on the parameter and error matrices? (1) There is to my knowledge no systems estimator that has panel- specific features. (2) What are the 'desired restrictions on the parameter and error matrices'? Do you have cross-equation restrictions or restrictions on the VCE in mind? If not, I see no reason why equation-by-equation -xtivreg2- would not yield the desired estimates, as it handles limited-information estimation for a panel with endogenous regressors. Kit Baum, Boston College Economics and DIW Berlin An Introduction to Modern Econometrics Using Stata: * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-08/msg00181.html","timestamp":"2014-04-17T00:55:22Z","content_type":null,"content_length":"6518","record_id":"<urn:uuid:a767a4c4-eedb-4d3c-9a81-580bd2584cac>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Good combinatorics textbooks for teaching undergraduates? up vote 13 down vote favorite Hello, can anyone recommend good combinatorics textbooks for undergraduates? I will be teaching a 10-week course on the subject at Stanford, and I assume that the students will be strong and motivated but will not necessarily have background in subjects like abstract algebra or advanced calculus. I intend to focus on the enumerative side of the subject and do permutations and combinations, generating functions, recurrence relations, Stirling and Catalan numbers, and related topics. However, this hasn't been set in stone and I also welcome advice for what topics to include. I would be grateful if people would not only suggest names of books but also say a little bit about their merits. Thank you! co.combinatorics mathematics-education textbook-recommendation books You might enjoy taking a leaf out of Arthur Benjamin's book, even if it won't fill an entire curriculum: books.google.com/… – Qiaochu Yuan Jun 22 '10 at 20:11 1 I made this community wiki since it is intended to be a collection of resources. See the faq for details - mathoverflow.net/faq#communitywiki – François G. Dorais♦ Jun 22 '10 at 21:58 Hello, thanks to everyone for your answers. In contrast to what I was told initially, my department will mostly be calling the shots (which is perfectly fine with me) and I will be doing a lot of graph theory after all, and using van Lint and Wilson. Thanks to all! – Frank Thorne Jun 23 '10 at 16:58 2 Andrew -- to answer what is not quite your question, I think consistency from year to year is a very reasonable thing for departments to strive for. I still have plenty of flexibility, and my instructions came from a tenured professor whose expertise in the subject is unquestioned. – Frank Thorne Jun 24 '10 at 17:18 1 The answer will probably overlap with mathoverflow.net/questions/4836/… . – Zsbán Ambrus Jun 28 '11 at 9:08 show 1 more comment 19 Answers active oldest votes Concrete Mathematics: A Foundation for Computer Science, by Ronald Graham, Donald Knuth, and Oren Patashnik. up vote 10 down vote I agree that this book does not make a standard combinatorics course, although it has chapters on binomial coefficients, special numbers, and generating functions. At least it's a very accessible additional source. It definitely does not require a solid background in algebra or calculus. add comment Two obvious answers are van Lint & Wilson "A Course in Combinatorics" and Peter Cameron "Combinatorics". Which is best really depends on the fine details of your course, and what content you want. Cameron's book has a lot of nice exercises, there are not as many in van Lint & Wilson (and they have a tendency to go of the deep end). As you would expect both books are very well written and have an excellent selection of topics. Cameron's book is possibly more approachable. up vote 9 down vote Grahm, Knuth and Patashnik is a fine book, but is much more focussed on classical combinatorial sequences and less on combinatorics in general. I think those books might be too difficult for what Frank wants,Chris.Then again,they are Stanford students.So maybe they'll work. – Andrew L Jun 22 '10 at 20:20 I like Cameron's book, and I don't think it's as advanced as all that. – Hugh Thomas Jun 23 '10 at 6:14 add comment I can hardly do better then recommend the 2 books by Milkos Bona: A Walk Through Combinatorics and Introduction To Enumerative Combinatorics. The first book is more comprehensive as well as classical,giving thorough discussions of counting arguements and the intuition behind them in addition to bijection arguements.It also contains a very good introduction to graph theory and some topics not normally found in introductory books,like lattices and partial orders. The second book has considerable overlap with the first,but the emphasis is a lot more on modern counting methods.It is more formal and less intuitive then the first,but the discussion of several topics is better and there are better exercises. The chapter on generating functions is probably the best single chapter source in the current textbook literature on the subject. up vote 7 down vote Using either or both of these books will give your students a terrific course.There's also quite a bit of material available online for free: Richard Stanley's 2003 Art Of Counting course at the MIT OpenCourseWare website has 233 substantial combinatorics problems for your students to chew on. You also might want to look at the terrific classic by Wilf on generating functions,generatingfunctionology-available free for download at Wilf's website. Anywho,those are some of my favorite books on combinatorics. 3 An acquaintance who taught a class out of (I believe) A Walk Through Combinatorics found it to be full of typos, so keep an eye out for that. – JBL Jun 22 '10 at 22:57 (I should say also that I read parts of another Bona book, The Combinatorics of Permutations. I found his writing style enjoyable, and was disappointed to hear about my acquaintance's problems with his other book.) – JBL Jun 23 '10 at 2:09 A Walk Through Combinatorics has some of the worst puns I know. – Charles Chen Jun 29 '10 at 0:15 Can I ask for an example of the puns? – Felix Goldberg May 17 '12 at 8:12 add comment For a single quarter basic introductory course, I recommend J. Matousek and J. Nesetril, Invitation to Discrete Mathematics, Oxford Univ Press, 1998. I have taught from it several times - it is well written, clear, inexpensive, and fun on occasion. It also has the advantage of being not overly ambitious in scope (compared to van Lint & Wilson, for example, excellent up vote 7 otherwise for a longer course and more advanced students), while still having a good selection of topics. down vote add comment It's obviously slanted towards the generating-function view of enumeration, but I enthusiastically recommend Generatingfunctionology by Herb Wilf. It covers all the topics you mentioned, written mainly in the style of examples, rather than theory---something that usually appeals to undergraduates. To me what makes the book a great introduction for a newcomer to combinatorics up vote is Wilf's obvious enthusiasm and easy-going (yet firmly exacting) writing style. The mileage he gets out of changing a recurrence relation into a generating function is truly amazing. I think 7 down most undergraduates would be amazed that their skills in calculus can help them enumerate discrete objects, and this book does exactly that over and over again. If price matters, this one is vote tough to beat---the second edition is free at Wilf's website. add comment I would recommend Combinatorics and Graph Theory, 2nd ed. by Harris, Hirst and Mossinghoff link to publisher's page. It presupposes little more than some knowledge of mathematical induction, a modicum of linear algebra, and some sequences and series material from calculus. The book is divided into three largish chapters: the first on graph theory, the second on combinatorics and up vote the third (more advanced) on infinite combinatorics. Your course sounds like it might cover much of chapter two (sum rule, product rule, binomial and multinomial coefficients, the pigeonhole 5 down principle, the principle of inclusion and exclusion, generating functions, Pólya's theory of counting, Stirling numbers, Bell numbers, stable marriage, etc.). There's even a brief vote introduction to combinatorial geometry. Furthermore, the exposition is clear, with a touch of humour. I forgot that one,J.W.-it's terrific,but it might be a bit too gentle for the students at Stanford.But by all means,Frank,I concur-give it a look. – Andrew L Jun 22 '10 at 20:18 add comment Notes on Introductory Combinatorics by Pólya, Tarjan, Woods. up vote 3 down vote add comment Combinatorics: Theory and Applications by V. Krishnamurthy. up vote 3 This is an undergraduate text. Comprehensive and clear. If you live in India, there is an additional benefit: it can be yours for Rs. 275 only (I bought it on flipkart for Rs. 215 down vote recently). It covers Polya Theory, Schur Functions, Matching Theory, Inversion Techniques, Ramsey Theory and Designs. It has a large collection of examples and problems. That's six US dollars! – Amritanshu Prasad Aug 25 '11 at 6:41 I should have bought one last year when I was in India and the rupee was weaker. Oh well. Gerhard "Maybe It Is Online Now" Paseman, 2011.08.25 – Gerhard Paseman Aug 25 '11 at 7:08 add comment Neither of these suggestions seem to exactly fit the level the OP was aiming for, but I add them for others who come across this thread with a different group of students in mind: (1) For a gentle, problem-based introduction for undergraduates, I really like Ken Bogart's Combinatorics Through Guided Discovery. Sadly, he passed away while writing the text, but up vote 3 down he has left it publically available at no charge. (2) For a comprehensive and structured approach to combinatorics at the introductory graduate level, I really like Martin Aigner's A Course in Enumeration. add comment I've also been searching for a good undergraduate book for combinatorics, which I'm teaching next fall for the first time. One book not mentioned yet is Brualdi's "Introductory It looks to be at a good level for beginning undergraduates while still maintaining a reasonable level of rigor. Some of the comments at Amazon seem say that the most recent edition is up vote 2 an improvement over the previous ones. Anyone have any specific experiences with this book? down vote [1] http://www.pearsonhighered.com/educator/product/Introductory-Combinatorics/9780136020400.page add comment Stanton and White's Constructive Combinatorics emphasizes bijective proofs, and enumerative algorithms (with the theoretical insights that follow from the analysis thereof). The approach beautifully bridges the cultures of mathematics and computer science. I think undergraduates appreciate seeing powerful theoretical methods that nevertheless don't involve much up vote 2 abstraction. The contrast here would be to generating function methods, for which one would need a separate source. down vote add comment It was long time ago, but I remeber Combinatorial Problems and Exercises by Lovasz fondly. However, this might be too challenging for undergraduates. up vote 2 down vote add comment Mazur's recently published Combinatorics guided tour is quite well-organized. This is the MAA page for the book up vote 1 down vote add comment If your students aren't yet sophisticated (that is, if the course is to include an intro to proofs, and truth-tables, and induction, and such) then you should definitely pick up a copy of Ralph Grimaldi's "Discrete and Combinatorial Mathematics: An Applied Introduction" (Amazon, Publisher). up vote 1 down vote This book is very carefully written, and in my opinion does an excellent job of reaching students who are intelligent but who begin the course with little mathematical maturity. @Kevin I'd prefer Harris,Hirst and Mossinghoff (mentioned in one of the previous posts) for such a course,Kevin. But that's me. – Andrew L Jun 23 '10 at 4:56 add comment Daniel I A Cohen, Basic Techniques Of Combinatorial Theory, covers all the requested topics and more, and has a superb collection of exercises. To give you some idea, in the chapter on binomial coefficients, there are exercises leading you through a proof of Bertrand's Postulate and Chebyshev's estimates for the counting function for the primes. up vote 1 I think that, unfortunately, the book is out of print. down vote Another fine book of the same vintage is Alan Tucker, Applied Combinatorics. add comment I liked Roberts & Tesman, Applied Combinatorics (2nd edition), but it's out of print. It has nice applications and nice references. up vote 1 down vote add comment 'Mathematics of Choice : Or, How to Count without Counting' by Ivan Niven. Clear, succinct and an absolute joy to read. up vote 1 down vote add comment Biggs' discrete mathematics is very well written. up vote 1 down I agree. It was my first introduction to mathematics and it was understandable to me even at that time (middle school). Biggs' books are very comprehensively written. – Jernej May 17 '12 at 19:33 add comment How about A Path to Combinatorics for Undergraduates: Counting Strategies by Andreescu and Feng. I remember using this for my olympiad preparation a decade ago. Its a well structured up vote 0 down book with lots of challenging problems, after all its Andreescu! add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics mathematics-education textbook-recommendation books or ask your own question.
{"url":"https://mathoverflow.net/questions/29137/good-combinatorics-textbooks-for-teaching-undergraduates/29153","timestamp":"2014-04-19T17:37:12Z","content_type":null,"content_length":"131202","record_id":"<urn:uuid:49ec7f46-07f2-402e-bd58-7f0875416c1b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Thematic Program on o-minimal Mailing List : To receive updates on the program please subscribe to our mailing list at www.fields.utoronto.ca/maillist The structure of this workshop, modeled on the very successful annual Arizona Winter School in Number Theory (http://swc.math.arizona.edu/), is the following: between three and five leading researchers give intensive tutorials (roughly five lectures per speaker) on the focus areas discussed during the semester, including posing problems. In small groups, graduate students work on these problems during the workshop. The meeting culminates with presentations by the students on their solutions. In order to give students the possibility to prepare adequately, the lecturers are asked to provide, a few weeks ahead of the workshop, a preliminary version of their lecture notes, or at least a list of suitable references and statements of the problems to be discussed. The final version of the lectures notes will be published later as part of the proceedings of the program. This format provides an intensive introduction to the topics and open problems related to the proposed program, and is designed to ensure significant interaction among all participants, especially between the students and the senior lecturers. The topics to be covered were selected to prepare the participants for the semester- long graduate courses, to commence after the introductory workshop. Our current plan is to have lectures in the following areas: 1. Model Theory (Deirdre Haskell) 2. o-minimality (Sergei Starchenko) 3. Real Analytic Geometry (K. Kurdyka) Manuscripta Mathematica 4. Planar Real Analytic Vector Fields (S. Yakovenko) - lectures are the relevant sections from the book Ilyashenko & Yakovenko, Lectures on analytic differential equations, AMS, 2007, Grad. Studies in Math vol. 86. Winter School schedule: Apply to the Program: All scientific events are open to the mathematical sciences community. Visitors who are interested in office space or funding are requested to apply by filling out the application form (available in 2008) . Additional support is available (pending NSF funding) to support junior US visitors to this program. Fields scientific programs are devoted to research in the mathematical sciences, and enhanced graduate and post-doctoral training opportunities. Part of the mandate of the Institute is to broaden and enlarge the community, and to encourage the participation of women and members of visible minority groups in our scientific programs. To be informed of when the application for support will be open please subscribe to the Fields maillist For additional information contact
{"url":"http://www.fields.utoronto.ca/programs/scientific/08-09/o-minimal/geometry/","timestamp":"2014-04-17T21:34:11Z","content_type":null,"content_length":"22177","record_id":"<urn:uuid:05825615-d8d2-41df-89fb-628631d90f68>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A massless rod of length L has a mass m fastened at one end and a mass 2m fastened at the other end. What is the ratio (Im/Ic) of the moment of inertia about an axis through the mass m (Im) to the moment of inertia through the center of the rod (Ic)? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5110784fe4b09cf125bd72cb","timestamp":"2014-04-20T21:18:47Z","content_type":null,"content_length":"25388","record_id":"<urn:uuid:f6f5e3e5-eeab-436c-9dae-f3bd35a63cf3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Date: 05/17/99 at 11:32:38 From: P. Grabowski Subject: Mod I have been interested in math (something about prime numbers) but can't find anywhere an explanation of what "mod" is. I am prevented from learning more until I know what it means. Can you help to explain it for me please? It comes in the line a^(p-1) = 1 mod p. Is it something like a remainder? Thank you, Date: 05/17/99 at 14:44:52 From: Doctor Rob Subject: Re: Mod Thanks for writing to Ask Dr. Math. The sentence a = b (mod n) means that n is a divisor of a - b. This sentence is read, "a is congruent to b modulo n." It is something like a remainder, because if you subtract a remainder from the dividend, the divisor will go into the result evenly. Examples: 100 = 86 (mod 7), because 100 - 86 = 14 has 7 as a divisor. On the other hand, if you divide 100 by 7, the quotient is 14 and remainder is 2, and 100 = 2 (mod 7), too. If you need more explanation, write again. - Doctor Rob, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/55910.html","timestamp":"2014-04-19T07:14:41Z","content_type":null,"content_length":"5800","record_id":"<urn:uuid:b1fb05da-10fe-4aae-b070-e64e38a4b560>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Identify Performance Bottlenecks Next: Performance Bottlenecks in the Up: Performance Evaluation Previous: Determine Whether Reasonable Performance The formulas mentioned in section 5.3.3, in addition to providing an estimate of performance, can help one identify whether the performance is limited by computation, by the number of messages, or by the volume of communication. Even if the estimate is far from correct, the user may get some information about the performance bottleneck by studying the computation and communication estimates provided by those formulas. Comparing the execution times of a problem of size N and one of size N/2 may also provide insight into the performance of the ScaLAPACK routine being used. Let N and size N/2, respectively, on P • If • If 5.1.1) and the underlying BLAS. • If N. • If N. This performance analysis suggests which computer characteristic is most likely limiting the performance. It cannot say whether one is getting good performance. Susan Blackford Tue May 13 09:21:01 EDT 1997
{"url":"http://netlib.org/scalapack/slug/node125.html","timestamp":"2014-04-19T09:23:53Z","content_type":null,"content_length":"4374","record_id":"<urn:uuid:29c6bc3c-7921-44e6-b5a7-8d6324c9ae94>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Research trends in geometry of numbers? up vote 17 down vote favorite Geometry of numbers was initiated by Hermann Minkowski roughly a hundred years ago. At its heart is the relation between lattices (the group, not the poset) and convex bodies. One of its fundamental results, for example, is Minkowski's theorem: If $L$ is a lattice in $\mathbb{R}^d$ and $C$ a centrally-symmetric convex body, then $\mbox{vol}(C) \geq 2^d \det(L)$ implies that $C$ contains another lattice point than $0$. While there was a lot of activity in the field until at least 1960, it seems that in recent decades not so many people are working on it anymore. One of the reasons could be that the field is somewhat stuck, or in Gruber's more polite words "It seems that fundamental advance in the future will require new ideas and additional tools from other areas." (see [1]). I would like to know more about current research trends in the geometry of numbers. What are hot topics right now? In which areas was recently considerable progress achieved? Did maybe even the "fundamental advance", that Gruber mentions, take place? [1] P. Gruber: Convex and discrete geometry, Springer 2007, p. 353 nt.number-theory discrete-geometry convex-geometry 1 By the way, an excellent historical account of the geometry of numbers can be found in the doctoral thesis of Sébastien Gauthier, written under the direction of Catherine Goldstein : La géométrie des nombres comme discipline (1890-1945) (math.univ-lyon1.fr/~gauthier/recherche.html). – Chandan Singh Dalawat Jan 30 '13 at 2:28 This is not new research in the geometry of numbers, but rather an application of classical results to another classical problem, that of determining primes of the form x^2+ny^2: tcnj.edu/ ~hagedorn/papers/… – Jeff H Jan 30 '13 at 3:29 WADR to Minkowski, the field should have been renamed "geometric number theory" in the 1950s. The part of geometric number theory that should be called "lattice theory" is not called that, because of overloading. Diophantine approximation is another rather questionable name of subdiscipline, but at least is named after a cluster of problems, rather than techniques or objects of study. The answers below suggest that some shifts of perspective are overdue. – Charles Matthews Jan 30 '13 at 13:44 add comment 3 Answers active oldest votes There has indeed been exciting recent work in this area, by Bhargava and Shankar (see this Bourbaki expose by Poonen) and also by Bhargava and Gross. Briefly, the work of Bhargava and Shankar bounds the average rank of the group of rational points of elliptic curves over $\mathbb{Q}$, while the Bhargava and Gross paper does the same for Jacobians of hyperelliptic up vote 10 Section 4 of the (quite readable) write-up by Poonen explains why I refer to these results as recent advances in the geometry of numbers: both of these results boil down to (subtle!) down vote computations of adelic volumes! It's worth noting that the work of Bhargava and Shankar does not use adelic language, and so is more obviously related to the "classical" geometry of +1, and I'd add that nearly all of Bhargava's work is related to geometry of numbers, fascinating, and beautifully written. "Higher Composition Laws" I-IV is a great place to start. In 3 addition to Shankar, his former students Melanie Matchett Wood and Wei Ho are also doing outstanding related work. Ho gave one of the four "Current Events Bulletin" lectures at the recent AMS/MAA meetings; if this field was ever stuck, it is roaring now. – Frank Thorne Jan 30 '13 at 0:02 add comment Recently several fundamental works have been done in Geometry of numbers. Beside Bhargava's revolutionary ideas (an of course the contribution of his students), Ergodic theory is a new idea that plays an important role in Modern Geometry of Numbers. It seems to me that several ideas are coming from Margulis and E. Lindenstrauss. up vote 5 down vote Here are a list of works in this area which I think they are extremely interesting add comment I can't say that what I'll relate is fundamental, but it does fit into the new ideas category. Since I and (my collaborator) Florent Balacheff have given talks on the subject and the paper will be in the ArXiv in a few days I feel free to comment on it. This post is an annoucement of joint work with Florent Balacheff and Kroum Tzanev. As you comment, the basic result in the geometry of numbers is Minkowski's (first) theorem: If the volume of a $0$-symmetric convex body $K \subset \mathbb{R}^n$ is at least $2^n$, then $K$ contains a non-zero integer point. But what happens when the body is not $0$-symmetric? It is easy to see that Minkowski's theorem fails completely, but that's because one is not thinking symplectically. By using some Hamiltonian dynamics of the sort Balacheff and I used to study isosystolic inequalities in this paper, we guessed that the "right" result should be the following: Conjecture. If a convex body in $\mathbb{R}^n$ contains no integer point other than the origin, then the volume of its dual body with respect to the origin is at least (n+1)/n! In other words, one should have a sort of uncertainty principle: if the origin is localized as the unique integer point inside a convex body, the dual body cannot be too small. In fact, its volume is bounded below by $(n+1)/n!$. Another formulation of the conjecture that seems more elementary goes as follows: If every hyperplane $m_1x_1 + \cdots m_nx_n = 1$, where the $m_i$ are integers not all equal to zero, intersects a convex body $K \subset \mathbb{R}^n$, then the volume of $K$ is at least We proved the conjecture in the case $n = 2$ and the asymptotic version: Theorem. There exists a (universal) constant $C \leq 1$ such that if a convex body $K \subset \mathbb{R}^n$ contains no integer point other than the origin, then the volume of $K^*$ is at least $C^n(n+1)/n!$. In fact, this result is equivalent to Bourgain-Milman. Moreover, it easily implies the asymptotic version of a conjecture of Ehrhart: up vote 5 down vote Theorem. There exists a universal constant $c \geq 1$ such that if $K \subset \mathbb{R}^n$ is a convex body with barycenter at the origin and containing no other integer point, then the volume of $K$ is at most $c^n (n+1)^n/n!$. However, what is really interesting for us is that at least in the case $n=2$ the result trascends the geometry of numbers and is really a result in Hamiltonian dynamics. I just need a Definition. A hypersurface in the cotangent bundle of a manifold $M$ is said to be optical if its intersection with every cotangent space is a convex hypersurface enclosing the origin. To an optical hypersurface in the cotangent of a compact manifold we can associate two numbers: the symplectic volume of the region enclosed by $\Sigma$ and the least action of its periodic characteristics. Theorem. An optical hypersurface $\Sigma$ in the cotangent space of the two-torus carries a periodic characteristic whose action is less than or equal to the square root of two-thirds the symplectic volume enclosed by $\Sigma$. The inequality is sharp. Finsler geometers will be happier if I translate: If the Holmes-Thompson volume of a (non-reversible) Finsler $2$-torus $(T^2,F)$ is $3/2\pi$, then $(T^2,F)$ carries a (non-contractible) periodic geodesic of length at most $1$. In other words, this is the (non-reversible) Finsler version of Loewner's systolic inequality. The reversible Finsler version (replace $3/2\pi$ by $2/\pi$) is due to Stéphane Sabourau and can be found here. add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory discrete-geometry convex-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/120253/research-trends-in-geometry-of-numbers/120266","timestamp":"2014-04-16T07:41:45Z","content_type":null,"content_length":"69006","record_id":"<urn:uuid:44145f1c-a408-46c3-9011-fb31d88b9e15>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Website Links Need help with algebra? You’ve found the right place! Select the subject from basic math to calculus! Provides info about basic math, algebra, study skills, math anxiety and learning styles. Submit your algebra problem for step-by-step solutions. A collection of lessons created to assist students of algebra. Offers a wealth of problems and puzzles. This is an online graphing calculator that will allow you to graph equations just like they are written in the text book for example x^2 + y^2 = 12 http://www.mayland.edu/aca111/ManagingMath.pdf - Managing Math
{"url":"http://www.uaf.edu/deved/math/math-website-links/index.xml?request=classic","timestamp":"2014-04-17T21:47:49Z","content_type":null,"content_length":"12334","record_id":"<urn:uuid:3bc20c08-e89a-4f22-8eea-85c21d0447a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös Zbl.No: 407.05006 Autor: Deza, M.; Erdös, Paul; Frankl, P. Title: Intersection properties of systems of finite sets. (In English) Source: Proc. Lond. Math. Soc., III. Ser. 36, 369-384 (1978). Review: The authors use a theorem of Erdös-Rado [P.Erdös and R.Rado, J. London Math. Soc. 35, 85-90 (1960; Zbl 103.27901)] to generalize theorems of Erdös-Ko-Rado [P.Erdös, Chao Ko and R.Rado, Quart. J. Math., Oxford II. Ser. 12, 313-320 (1961; Zbl 100.01902)] , M.Deza [J. Comb. Theory, Ser. B 16, 166-167 (1974; Zbl 263.05007)] , A.Hajnal and R. Rothschild [J. Comb. Theory, Ser. B 15, 359-362 (1973; Zbl 269.05003)] and A.J.W.Hilton and E.C.Milner [Theorem 2 in Quart. J. Math. Oxford II. Ser. 18, 369-384 (1967; Zbl 168.26205)]. X is a finite set with |X| = n, L = {l[1],...,l[r]}, l[1] < ... < l[r] and K = {k[1],...,k[s]}, k[1] < ... k[s] are sets of integers: an (n,L,K)-system is a collection A of subsets of X such that for each A[1],A[2] in A, |A[1]| ,|A[2]| in K and |A[1]cap A[2]| in L. Define K[i] = K\cap{l[i]*1,...,L[i+1]}, 0 \leq i \leq r, where l[0] = -1, L[r+1] = k[s], and k[1]^* = min{k| k in K[i]}. Theorem 7. (1) If |A| > k[s]c(k[s],L)prod[i = 2]^r(n-l[i])/(k[i]^*-l[i]) then there exists a set D such that |D| = l[1] and D\subseteq A for every A in A . (ii) If |A| > k[s]^32^r-1n^r-1 then there exists a k in K[r] such that l[i]-l[i-1] divides l[i+1]-l[i], 2 \leq i \ leq r, l[r+1] = k. (iii) |A| \leq sum[i = 0]^r\epsilon[1]prod(n-l[j])/(k[i]^*-l[j]) where \epsilon = 0or1 according as K[i] = Ø or not, and the product is taken over those j, 1 \leq j \leq r for which l[j] < k[i]^*. Theorem 8. If K = {k} and for a fixed q \geq 1 we can find, among any A[1],...,A[q+1] in A, two of them A[1],A[j] such that |A[i]\cap A[j]| in L, then there is a constant c = c(k,q) such that if |A| > (q-1)prod[i = 1]^r(n-l[i])/(k-l[i])+cn^r-1 then there are sets D[1],...,D[s], each of cardinality l[1], such that for every A in A there is an i for which D[i]\subset A. Further, if q[i] is the maximum number of sets A[j], 1 \leq j \leq q[i], such that D[i]\subset A[j], but for h\ne i, D[n]\not\subset A[j] and |A[j[1]]\cap A[j[2]]|\notin L for 1 \leq j[1] < j[2] \leq q[1], then sum[i = 1]^ sq[i] = q. Also, for n > n[0](k,q), |A| \leq prod[i = 1]^r(n-l[i])/(k-l[i])+0(n^r-1). Theorem 9. If, for any t different members of A, > |A[1]\cap...\cap A[t]| in L, then there is a constant c = c(k,t) such that if |A| > cn^r-1, then there is a set D, |D| = l[1], D\subset A for every A in A, and l[i]-l[i-1] divides l[i+1]-l[i], 2 \leq i \leq r. Also, for n > n[0](k,t), |A| \leq (t-1)prod[i = 1]^r(n-l[i])/(k-l[i]). The authors ask if it is true that L'\subset L implies the existence, for large enough n, of (n,L,k)- and (n,L',k)-systems A and A', each of maximum cardinaly with A'\subseteqA. They note that Theorem 7 and 9 may be simultaneously generalized to the families called quasi-block-designs by Vera T. Sós [Colloq. int. Teorie comb., Roma 1973, Tomo II, 223-233 (1976; Zbl 261.05022)]. Reviewer: R.K.Guy Classif.: * 05A05 Combinatorial choice problems Keywords: intersection properties; systems of finite sets Citations: Zbl.168.262; Zbl.261.05022; Zbl.103.027; Zbl.100.019; Zbl.263.05007; Zbl.269.05003 © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
{"url":"http://www.emis.de/classics/Erdos/cit/40705006.htm","timestamp":"2014-04-18T10:37:53Z","content_type":null,"content_length":"9469","record_id":"<urn:uuid:e76e8821-192e-402c-acec-432d3a14576a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
profinite homotopy type profinite homotopy type In Algebraic Topology, profinite homotopy types are frequently encountered. This is often because of the use of profinite completion?s of homotopy types in an attempt to get more ‘accessible’ information out of a homotopy type. The theory has applications both in Algebraic Geometry and Algebraic topology. One origin of the theory can be found in Grothendieck's Galois theory, in which he defined an algebraic fundamental group of a scheme using its finite ‘covering spaces’. These correspond to the finite field extensions in the classical case of fields, and from that perspective one can ask what the higher profinite homotopy n-types of a scheme should classify. In the 1960s Artin and Mazur constructed a functor which associates to each locally noetherian scheme $X$ its étale homotopy type, $X_{et}$ , an object of $pro\Ho(SSets)$, the pro-category of the homotopy category $Ho(SSets)$ of simplicial sets. They observed that this did not correspond to a homotopy category of a model category on a category of pro simplicial sets. (This observation was the start of the search for a suitable pro-homotopy theory.) Friedlander gave a rigidified version of the Artin-Mazur homotopy type, which he called the étale topological type? of the scheme. This was used by Quillen in the proof of the Adams conjecture, a result purely in Algebraic Topology. • Dennis Sullivan introduced profinite completions into topology in his work: D. Sullivan, Genetics of homotopy theory and the Adams conjecture, Annals of Math., 100, (1974), 1–79. The Adams conjecture, which is a statement about purely topological phenomena, was then proved by Quillen using these ideas. • At about the same time, Bousfield and Kan, studied profinite completions amongst a wealth of other stuff in A. Bousfield and D. Kan, 1972, Homotopy limits, Completions and localizations, volume 304 of Lecture Notes in Maths , Springer-Verlag. • In the 1990s, Morel and Voevodsky defined a neat framework for the use of topological methods in algebraic geometry. They embedded the category of smooth schemes of ‘finite type over a field $k$ into a larger category of ’$k$-spaces’, which carries the structure of a closed model category. The study of these $k$-spaces is linked to étale homotopy theory, see Schmidt, On the étale homotopy type of Morel-Voevodsky spaces, and Dan Isaksen, Etale realization on the $A^1$-homotopy theory of schemes. Adv. in Math. 184, 37–63 (2004). • A well motivated approach to profinite homotopy theory has been published by Gereon Quick, in order to give a good profinite completion construction for homotopy types; (see G. Quick, Profinite homotopy theory, Documenta Mathematica, 13, (2008), 585–612, (Archiv 0803.4082) and Some remarks on profinite completion of spaces , Arxiv preprint arXiv:1104.4659. The published version of the model category structure does not give enough generators of the fibrations, but this is corrected in a later article. Revised on November 26, 2013 08:58:38 by Tim Porter
{"url":"http://ncatlab.org/nlab/show/profinite+homotopy+type","timestamp":"2014-04-17T03:52:24Z","content_type":null,"content_length":"17085","record_id":"<urn:uuid:517223dc-4037-476f-b1f1-f90ca1104242>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Importance Of Present Value WHY IS THE CONCEPT OF PRESENT VALUE SO IMPORTANT FOR CORPORATE FINANCE? The importance of concept of present value to the world of corporate finance is that present value calculations are widely used in business and economics to provide a means to compare cash flows at different times. Present Va Finance Research Assignment Tiger Pty World is a private company in the USA looking to introduce a new line of golf clubs into production. The purpose of the first part of this report is to evaluate the viability of this investment by analysing the predicted cash flows of the company and evaluati Critics to DCF methods Ducht an UK companies * However, it is found inappropriate to use DCF methods for investments that have got strategic implications. * There are various reasons for the use of open approach. Since the outcomes of these projects are highly unforeseen, according one in Money has no legs of its own and yet it keeps on moving around faster than all of us. This makes money all powerful. But time is a great leveller. What looks like a mountain of money today may become dust tomorrow if money does not keep on moving with time. Johnny is, however, interested in unde NPV of a project is defined as the present value of all future cash flows produced In the calculation, we consider Research and Development Cost as a Sunk Cost because R&D will be considered part of the cost of the project when it occurs for a specific project and when we are valuing the project fr Net present value In finance, the net present value (NPV) or net present worth (NPW) of a time series of cash flows, both incoming and outgoing, is defined as the sum of the present values (PVs) of the individual cash flows. In case when all future cash flows are incoming (such as coupons and princ QUESTION FIVE (6 marks) Please answer each of the following questions. Each solution should be accompanied by a brief explanation of no more than two (2) typed lines in length. A) Cynthia is the Chief Financial Officer of Big Corporation (BC). Cynthia’s current objective is to evaluate fiv 1) What is the present value of a perpetuity (uniform, constant growth), starting a year from today, expected to grow at 1.5 percent per year with the discount rate of 12 percent per year, if the first payment is $5.00? $47.62 2) If ABC stock sells today for $150.00 per share, the ABC com . To find the PVA, we use the equation: PVA = C({1 – [1/(1 + r)]t } / r ) PVA = $60,000{[1 – (1/1.0825)9 ] / .0825} PVA = $370,947.84 The present value of the revenue is greater than the cost, so your company can afford the equipment. 7. Here we need to find the FVA. The equation Present Value Robert J. Blair TUI FIN 301 Module 2 Case Assignment Dr. Sopko 2 February 2011 Present Value Part I: A. Using the formula of PV=FV/(1+r)^y A bank account that will be worth $15,000.00 in one year with an interest rate of 7% would have a present value of $14, Trident University Billy H Burgess III Module 2 Case Present Value and Capital Budgeting FIN301 - Principles of Finance November 4, 2011 Part I: This part of the assignments tests your ability to calculate present value. A. Suppose your bank account will be worth $15,000.00 in one year. WEEK 4 ASSIGNMENT 1 “ASSIGNMENT #1” BY: INSTRUCTOR: FIN100 PRINCIPLES OF FINANCE 10-30-2011 The financial manager of every business is faced with many tough decisions in today’s economy. These decisions involve making choices that will affect the financial welfare of their company What is your understanding of the following concepts; present value, present value of an annuity, future value, and future value of an annuity. (please describe any formulas related to each.) Present Value is the current worth of a future sum of money or stream of cash flows given a specified rate of return. Future cash flows are discounted at the discount rate, and the higher the discount rate, the lower the present value of the future cash flows. Determining the appropriate discount ra Examples Of Net Present Value (NPV), ROI and Payback Analysis Introduction Terms and Definitions Net Present Value - Method of calculating the expected net monetary gain or loss from a project by discounting all expected future cash inflows and outflow Executive Summary The research centers on how value affects the organization when they focus on the lower level employees’ interest, fairness, transparency, and create opportunities to advance. The results being better product service retaining valuable employees and improving stakeholders’ val Assignment “Net Promoter Score: a strong indicator of loyalty and growth?” 2 Table of Content Page Introduction…………………………………………………………………………3 1 Main advantages of the NPS……………………………………………...4 1.1 1 Rita Collins FIN 501- Strategic Corporate Finance Module 2: Case Study T.U.I According to Wikipedia.com, “Present value is the value on a given date of a future payment or series of future payments, discounted to reflect the time value of money and other factors such as investment ris Present and Future Value • Calculate the future value of the following: o $5,000 compounded annually at 6% for 5 years $5000 (1+6%)^5 =$5000 (1.06^5) =$6691.13 o $5,000 compounded semiannually at 6% for 5 years $5000 (1+6% / 2)^5x2 =$5000 (1.03^10) =$6719.58 o $5,000 compo Present and Future Value HCA 270 Calculate the future value of the following: * $5,000 compounded annually at 6% for 5 years $6,691.13 * $5,000 compounded semiannually at 6% for 5 years $6719.58 * $5,000 compounded quarterly at 6% for 5 years $6734.28 Present and Future Values XXXX Axia College of University of Phoenix Instructor: Mary Pearson HCA 270 April 3, 2009 Calculate the future value of the following: $5,000 compounded annually at 6% for 5 years [pic] [pic] [pic] $5,000 compounded semiannually at 6% for 5 years Documents 1 - 20 of 1,000 Go to Page
{"url":"http://www.studymode.com/subjects/importance-of-present-value-page1.html","timestamp":"2014-04-16T21:52:50Z","content_type":null,"content_length":"41531","record_id":"<urn:uuid:189e4efe-7cc4-4fe6-ba50-b21a42e1ffaa>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Weinan E Professor, Department of Mathematics and Program in Applied and Computational Mathematics Princeton University Princeton, NJ 08544-1000 U.S.A. Phone: (609)258-3683 ~ Fax: (609)258-1735 Summary: My work draws inspiration from various disciplines of sciences and has made an impact in fluid dynamics, chemistry, material sciences, and soft condensed matter physics. I have contributed to the resolution of some long standing scientific problems such as the Burgers turbulence problem (which was the original motivation of Burgers for proposing the well-known Burgers equation), the Cauchy-Born rule for crystalline solids (which indeed dates back to Cauchy, and provides a microscopic foundation for the elasticity theory), and the moving contact line problem (which is still largely open). A common theme is to try bringing clarity to scientific issues through mathematics. A second theme is multi-scale and/or multi-physics problems. I have also worked on building the mathematical framework and finding effective numerical algorithms for modeling rare events which is a very difficult class of problems involving multiple time scales (string method, minimum action methods, transition path theory, etc). I have also worked on multiscale analysis and algorithms for stochastic simulation algorithms, homogenization problems, problems with multiple time scales, complex fluids, etc. My book provides a broad introduction to this subject. A third theme is to develop and analyze algorithms in general. In computational fluid mechanics, I was involved in analyzing and developing vorticity-based methods, the project method and the gauge method. In density functional theory (DFT), my collaborators and I have developed the selected inversion algorithm, which is so far the most efficient algorithm for DFT. Here are some examples of the work I have been involved with (click on the ``+'' sign to read more): Other topics I have made contributions to include: Onsager's conjecture on the energy conservation for weak solutions of the 3D Euler's equation, homogenization and two-scale convergence, singularity formation in solutions of Prandtl's equation, Ginzburg-Landau vortices, micromagnetics and the Landau-Lifshitz equation, stochastic resonance, etc.
{"url":"https://web.math.princeton.edu/~weinan/","timestamp":"2014-04-19T17:03:13Z","content_type":null,"content_length":"76573","record_id":"<urn:uuid:c4adc3e7-3514-4333-8f6f-b2ec4e5b445f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] riemann sum question February 23rd 2009, 11:04 AM #1 Feb 2009 [SOLVED] riemann sum question Hi, long-time lurker, first time poster The problem is this: find the limit of Sum[(i^2/(n^3 + i^3))] (going from i=1 to n) as n approaches infinity. Sorry I don't know how to write fancy math text but basically it's the limit of a series going to n as n --> infinity. I figure it's probably a riemann sum question, but I tend to have difficulty with those. Any help would be nice Last edited by mr fantastic; February 24th 2009 at 02:30 AM. Reason: Question deleted by OP has been restored. $\underset{n\to \infty }{\mathop{\lim }}\,\sum\limits_{i=1}^{n}{\frac{i^{2}}{n^{3}+i^{3} }}=\underset{n\to \infty }{\mathop{\lim }}\,\frac{1}{n}\sum\limits_{i=1}^{n}{\left( \frac{i}{n} \right)^ {2}\cdot \frac{1}{1+\left( \frac{i}{n} \right)^{3}}},$ thus $\int_{0}^{1}{\frac{x^{2}}{1+x^{3}}\,dx}$ is the limit and the partition is $\Delta x_{i}=\frac{1-0}{n}=\frac{1}{n}.$ February 23rd 2009, 11:12 AM #2
{"url":"http://mathhelpforum.com/differential-geometry/75335-solved-riemann-sum-question.html","timestamp":"2014-04-16T13:32:18Z","content_type":null,"content_length":"34489","record_id":"<urn:uuid:08f4a82d-8b38-445f-9a5d-b929833191da>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Hands-on science: Lightspeed Yan's video guide In order to see this content you need to have both Javascript enabled and Flash installed. Visit BBC Webwise for full instructions Dr Yan shows you how to try Lightspeed for yourself Difficulty: advanced Some big ideas to grasp and care is needed for an accurate measurement Time/effort: quite quick There's maths help available (see list on right of this page) Hazard level: very low Follow normal microwave advice WARNING: All the standard guidance about microwave ovens applies here. Do not run the oven when empty or with anything metal inside. Adults should supervise any children trying this who are not old enough to use a microwave oven on their own. What you do Place the four slices of bread on the plate in a square pattern touching each other as closely as possible. Spread margarine thickly and evenly over the entire top surface of the bread slices and any gaps in between. Find a way to remove the microwave oven's turntable or to stop it rotating. Often you can take everything out and put the bread plate on the base of the oven. If your oven has a rotating drive in the base that always spins, cover it with something like an upside-down cereal bowl and balance the plate centrally on that. With the bread inside, set the oven to run on full power for 30 seconds. Start the oven and watch closely for areas in the margarine where it melts first. Depending on the power of your oven, melting should start after about 15 seconds. As soon as you have three or four melted patches, stop the oven and take the plate out, without moving the bread around. Decide where you think the centre of each melt patch is and measure the distances between adjacent patch centres. Write them down and work out the average (also termed the mean value). Now search inside or around your microwave for a label that says what the frequency of the microwaves inside the oven is. It may be on the back of the case or just inside the door. Look for a figure followed by either MHz (for megahertz) or GHz (gigahertz). Don't worry if you can't find one as you can estimate the frequency instead. You've finished using the oven now, so if you took things out to disable the turntable, don't forget to put them back how they were. Lastly, prepare for some mathematics. It's straightforward multiplication but does involve very big numbers. Go to our helpful webpage that does the sums for you and has an explanation of the maths What should happen The margarine should melt in small patches. The gaps between them should be roughly 5-7cm. Do the maths - try this help page - and the figure for light speed that you are aiming for is around 300,000,000 metres per second. To put that another way, the speed of light is about 1 billion kilometres per hour. If it doesn't work for you Try as hard as you can to get an accurate measurement of the average distance between points where the margarine melts first. • Use a plate that is as flat as possible - you may find the oven's own turntable is better than a plate. • Use bread slices that are the same thickness. • Cutting the crusts off the bread slices (so they make one big slice) can help. • Don't let the margarine soften too much before you spread it. Keep it cool if you can, as long as it spreads evenly. • Be extremely careful to spread the margarine to an even thickness across all four slices of bread. Some microwave ovens have extra functions to stir the waves around. They are designed to heat food evenly (a good idea) but the exact opposite of what you want here. If yours has a 'stirrer fan', a 'chaos mode' or something that sounds similar you may need to try a more basic oven. You could also try melting other foods instead of margarine. • Slices of processed cheese are easy to lay out in an even pattern. • Bars of chocolate work as well, if you choose a type that's not too chunky. If you use other foods, make sure you watch closely what's happening inside the microwave oven and be ready to stop it running.
{"url":"http://www.bbc.co.uk/bang/handson/lightspeed.shtml","timestamp":"2014-04-21T06:07:31Z","content_type":null,"content_length":"57240","record_id":"<urn:uuid:06763a03-ac10-4ddd-8487-0bea07607b3d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Roger W. Garrison There is much pedagogical value in applying the economist's standard analytical tools to issues about which students have a common-sense understanding. By demonstrating this point repeatedly, Steven Landsburg has become the profession's pre-eminent Armchair Economist (Landsburg, 1993). "Why Popcorn Costs More at the Movies" (pp. 157-167) may not be a critical issue in its own right, but dealing with such light-weight issues can be an effective way of teaching analytical skills. Slate, 2002) has dealt with the downright frivolous question: Why don't people walk up escalators? Or alternatively, Why don't people stand still on stairs. Landsburg's answer, which he attributes to Mark Bils, involves taking escalators and stairs to be instances of superior and inferior machines. Just as workers should spend less time with an inferior machine, people should spend less time on stairs. Well, standing still on stairs clearly violates that maxim. While Bils and Landsburg get points here for matching answer to question in terms of the degree of frivolity, they forgo the opportunity to showcase the economic way of thinking by applying standard indifference-curve analysis. The relevant decision, undoubtedly made subconsciously by the typical stair climber or escalator rider, involves a trade-off between "resting" and "moving." These two gerunds, then, label the vertical axis (resting) and the horizontal axis (moving) of the indifference-curve map in Figure 1. The ultimate form of resting in this context consists of standing still, which is the level of rest that defines the vertical intercept of a linear budget constraint. (A possible budget-constraint non-linearity in the case of moving walkways will be considered below.) The horizontal intercept corresponds to zero rest and has our stair climber racing as fast as possible. We should let this budget constraint be applicable only to ascending the stairs; the constraint applicable to descending would have a horizontal intercept lying further to the right. People can race down the stairs faster than they can race up them.^ (1) In either application and except in extreme circumstances, the actual trade-off will be struck somewhere between standing still and racing as fast as possible. If the stairway is replaced by an escalator, our budget constraint must be replaced as well. The new constraint has the same slope as the old one but is shifted to the right by a magnitude representing the speed of the escalator. Having the same slope simply means that walking up an escalator and walking up stairs are equally unrestful activities. (If walking up an escalator is judged to be more unrestful—because of the higher steps and less suitable rise-to-run ratio—then the budget constraint for the escalator would be a little steeper than the one for the stairs.) Important for application in normal circumstances is the fact that the new budget constraint is truncated at the level of rest associated with standing still. The constraint does not extend upward from that point toward the vertical axis. This is only to say that you cannot improve on the restfulness associated with standing still by walking or running backwards on the escalator. Thus, the point that represents standing still and moving at escalator speed is a potential corner solution. The preference map shows that this corner, Point 2, is in fact the optimal choice for our typical We see from Figure 1 that our climber-cum-rider deals with the gain offered by an escalator in conventional ways. The gain is taken partly in the form of more rest and partly in the form of more speed. The corner solution implies that some riders would actually go further in trading speed for rest at the margin if that were technologically possible. But given their actual options, they are constrained to move at escalator speed while just standing there. Providing answers to the original inquiry (Why don't people walk up escalators?) leads us to a related question—a question that has a more satisfying answer: On what basis do escalator manufactures set the speed of their escalators? It would seem that they set the speed at a level that puts most riders at a corner solution.^ (2) With most riders standing still, the conflicts among riders are nullified. Further, the dominance of the corner solution justifies an escalator design that best accommodates standers. (The slow-moving escalator of Figure 2 should have a step height and rise-to-run ratio of a conventional stairway. That design would best accommodate the climbers.) Even with the faster-moving escalator of Figure 1, some people will climb. Their preferences imply a tangency solution. Some may even climb as rapidly as they would climb stairs. These people are simply taking all of the gain provided by the escalator in the form of speed and none of the gain in the form of rest.^ (3) It is probably the case that an escalator speed set so high that literally no one would walk or run up it is set so high that virtually no one would get on it. Further, we see in the following section that it is no contradiction of economic theory for some people in some circumstances to move faster up an escalator than they would move on stairs. Figure 3 is identical to Figure 1 in terms of the shape and location of the budget constraints, but it differs from the earlier figure in terms of the preference map. Given the particular indifference curves of Figure 3, we get a tangency solution in which the rider actually leverages the gain provided by the escalator. As implied by a movement from Point 1 to Point 2, he runs up it, though with stairs he would only have walked up. We can easily imagine the circumstances in which these preferences are understandable. Suppose it is very much worth while to get to the next floor quickly but that if you can't get there quickly, it doesn't much matter whether you get there a little later or even later still. Train stations and airports provide circumstances where these indifference curves might apply. Shopping malls are usually like convention hotels but are sometimes like train stations and airports. On the day after Christmas or on other special sales days, it is critical to get to the merchandise ahead of the crowd; but failing that, it will do just to see what's left over after the mad scramble. Shoppers whose preferences are more conventional—i.e., similar to the ones shown in Figure 1—might want to be put on notice that there are other shoppers among them whose preference maps are similar to the one shown in Figure 3. Possibly it would be worthwhile to create an iconic symbol that resembles the pattern of indifference curves of Figure 3 and post it near the mall entrance on sales days. The notice could serve a function similar to a posting at the beach that warns of a rip tide. The shallow slope alone makes it unlikely that a corner solution will dominate. In Figure 4, our typical rider takes advantage of a moving walkway by locating at Point 2, which constitutes a tangency solution. It is without contradiction, then, that many people (including the author) stand on an escalator even on grounds of restfulness, then the budget constraint itself rises from its vertical intercept and then slopes downward at speeds beyond the stroll. As is clear in Figure 5, this kind of non-linearity would preclude a corner solution. Of course, some people do stand on moving walkways. Standing may even be typical for riders who have luggage or are otherwise encumbered. Others stroll for added restfulness or to avoid boredom. Still others walk, taking only part of the gain provided by the moving walkway in the form of rest, or they walk fast, leveraging the gain. Without a corner solution to fix a dominant mode of usage, measures need to be taken to deal with the variety of modes. Typically, riders are reminded by conspicuous signs or by a taped voice to stand on the right or walk on the left. Though less common (except possibly in England), this same convention can be prescribed for escalator riders. Using indifference curve analysis to show why people stand still on escalators but walk on moving walkways helps establish the near-universal applicability of economic theory. Working with contrasting preference maps (such as those in Figures 1 and 3) to deal with an issue where the student's own intuition is fully in play may help the student to read indifference curves in less intuitive cases. And challenging the students to apply basic economic tools to similarly frivolous issues can result in fun and even learning. The only down side to exposing students to this armchair view of escalators is that they may never again be able to ride an airport escalator without thinking of indifference-curve analysis. Landsburg, S. E. (2002). Everyday Economics: "Why do you walk up staircases but not up escalators? Slate (http://slate.msm.com), August 28. Landsburg, S. E. (1993). The Armchair Economist: Economics and Everyday Life. New York: The Free Press. 1. One of my colleagues whose armchair is much newer than my own claims that she can go up the stairs (taking three steps at a time) faster than she can go down (having to use every step). 2. Actually polling the manufactures of escalators on this question, of course, would violate the spirit of armchair theorizing. In any case, we can claim they behave as if they have a corner solution in mind. 3. According to Landsburg (2002), the Bils-Landsburg argument "proves...that even if you choose to walk on the escalator, you should always walk even faster on the stairs" (emphasis added). The "always," it turns out, makes their statement too strong.
{"url":"http://www.auburn.edu/~garriro/escaland.htm","timestamp":"2014-04-19T22:14:59Z","content_type":null,"content_length":"15326","record_id":"<urn:uuid:b5faa9c0-0d40-4670-9848-1afa836a6b8d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
The Probability of Evolution There are lies, dammed lies, and statistics. (Mark Twain) I’m reminded of a story about newspaper reporting in the old Soviet Union (I don’t know if it’s true or not, but it illustrates a point well). A car race between the United States and the Soviets ended with the United States car in first, and the Soviet car second (you should also know only two cars were in this race). But the reporting in the Soviet Union stated the Soviet car came in second, while the United States car came in second to last. Perfectly true, and yet perfectly misleading. You must be careful with statistics. For example, it’s possible the air in your room right now could spontaneously all move in the same direction at once, piling up in the other side of the room leaving you gasping for air. Possible, yes. But when calculated, the probability is so small as to be reasonably rounded off to zero (it’s not going to happen, so breathe easy). A similar argument against evolution applies to the probability of events occurring which result in new species (mutations, natural selection and spontaneous generation). That probability is zero (when rounded off reasonably). It’s mathematically possible, but the expectation is so low we logically round it down to zero and state the event is never going to occur. So the evolutionist has a problem — the odds of evolution occurring are zero. One tactic evolutionists attempt to show the theory isn’t ridiculous (i.e. mathematically impossible) is showing highly improbable events happen all the time — unfortunately, it’s usually through a misapplication of statistics. You see, simple logic and common sense tell you if (as they claim) improbable events happen frequently one of two situations is most likely true. 1. The event really isn’t that improbable. Thus, our mathematical calculation of statistical odds is incorrect — an error in math has been made. 2. Statistics have been misused or misunderstood, similar to our car race example. The facts and math are correct, but the application of that knowledge is wrong. Common sense explains the argument is already wrong, but we can continue with a specific example and explain exactly why it’s wrong. One of the methods the evolutionist uses draws false conclusions from a deck of cards — a mistake even a college professor can make; consider the following discussion from a professor of mathematics at Temple university. So if, after the fact, we observe the particular evolutionary path actually taken and then calculate the a priori probability of its being taken, we will get the minuscule probability that creationists mistakenly attach to the process as a whole. Here’s another example. We have a deck of cards before us. There are almost 10 to the 68th power – a one with 68 zeroes after it – orderings of the 52 cards in the deck. Any of the 52 cards might be first, any of the remaining 51 second, any of the remaining 50 third, and so on. This is a humongous number, but it’s not hard to devise even everyday situations that give rise to much larger numbers. Now if we shuffle this deck of cards for a long time and then examine the particular ordering of the cards that happens to result, we would be justified in concluding that the probability of this particular ordering of the cards having occurred is approximately 1 chance in 10 to the 68th power. This certainly qualifies as minuscule. Still, we would not be justified in concluding that the shuffles could not have possibly resulted in this particular ordering because its a priori probability is so very tiny. Some ordering had to result from the shuffling, and this one did. (What’s wrong with Creationist Probability — Mathematics Professor John Allen Paulos of Temple University) Mr. Paulos gets his math right, but the statistics wrong. The card example comes up repeatedly in attempts to show evolution isn’t mathematically impossible, but this is the first time I’ve actually seen a professor of math make the mistake. His problem lies in the card example. Suppose I have a deck of cards. He is correct in the 10^68 combinations of cards (the probability of any 1 combination occurring). But he makes the mistake of applying statistics. Actually, by shuffling and dealing the cards the probability is 1 — it’s a certainty one sequence will occur (one of the 10^68 possibilities). Mr. Paulos does understand this, as he says “Some ordering had to result from the shuffling”. The one in 10^68 is the probability of calling out each card — in order — as you turn them up. That’s the correct analogy between cards and evolution. It’s a certainty you will get a sequence. But is it the exact sequence you want? Correct math, wrong application. The probability is 1 you will get a sequence, but much less likely you could correctly call out each card as it’s dealt (This is also sometimes illustrated as a group of monkeys randomly typing out the works of Shakespeare). The card example illustrates a common mistake in the application of statistics, and statistical mistakes can be difficult to uncover. As already noted, if such improbable events really do happen commonly, they’re not so improbable, are they (by definition)? But since the odds calculation is correct (it’s not an error in math), it must be the application of knowledge. Let’s turn to Physicist Richard Feynman to explain the faulty reasoning and the professor’s error immediately becomes obvious. For those who might not know, Feynman was a Nobel-prize winning physicist involved in The Manhattan Project, and on the panel investigating the space shuttle Challenger disaster. But perhaps best known for a series of undergraduate lectures captured in the famous “Feynman lectures on Physics”, Feynman had the ability to illustrate complex problems simply. What came to Feynman by “common sense” were often brilliant twists that perfectly captured the essence of his point. Once, during a public lecture, he was trying to explain why one must not verify an idea using the same data that suggested the idea in the first place. Seeming to wander off the subject, Feynman began talking about license plates. “You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!” A point even many scientists fail to grasp was made clear through Feynman’s remarkable “common sense”. (“The Feynman Lectures on Physics Volume I” , Feynman, Leighton, Sands page xi-xii) Feynman makes Professor Paulos’ mistake with the cards clear — it’s not an error in math, it’s an error in science. The issue with cards relating to evolution isn’t that any given sequence is wildly improbable, yet a sequence comes up — when dealing cards it’s a statistical certainty a sequence will occur (probability one). The correct example relating to evolution would be to predict each card as it is dealt (probability zero). Physicist Feynman illustrates the difficulty when applying mathematical statistics to science. It’s quite easy to make a mistake, even for a professor of mathematics; Feynman illustrates the error through his license plate example. Don’t be misled by lengthly, complicated examples — anyone truly understanding a subject should be able to explain it simply, as Feynman did. Sometimes (though certainly not always), the complicated explanation simply provides a way to mask the uncertainty involved (In Physics we called that “hand-waving” — the idea being to distract from a lack of Feynman was once asked by a Caltech faculty member to explain why spin 1/2 particles obey Fermi-Dirac statistics. He gauged his audience perfectly and said “I’ll prepare a freshman lecture on it”. But a few days later he returned and said “You know, I couldn’t do it. I couldn’t reduce it to the freshman level. That means we really don’t understand it”. (“The Feynman Lectures on Physics Volume I” , Feynman, Leighton, Sands page xii) Predicting a sequence of cards as it is dealt is impossible and correctly displays the improbability of evolution. The probability of evolution occurring rounds down to zero — it’s not going to
{"url":"http://www.dyeager.org/blog/2008/04/probability-evolution.html","timestamp":"2014-04-19T21:28:13Z","content_type":null,"content_length":"26742","record_id":"<urn:uuid:d258c4d5-6b7c-449a-a04f-b129e14fbd63>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Sciences - 2012 University Catalog Chairperson: Helen Marcus Roberts The Department of Mathematical Sciences offers programs leading to the Bachelor of Science Degree in Mathematics and the Bachelor of Science Degree in Physics. In particular, the Department offers a major in Mathematics, a major in Mathematics with a concentration in Mathematics Education and certification as a teacher of Mathematics, a major in Mathematics with a conand BS/MAT Mathematics with Teacher Certification in Mathematics (Preschool-Grade 12)/Teacher of Students with Disabilities.centration in Discrete Applied Mathematics, a major in Mathematics with a concentration in Mathematics of Finance, a major in Mathematics with a concentration in Statistics, a major in Physics, a major in Physics with certification as a Teacher of Physical Science, a major in Physics with certification as a teacher of Physics, a major in Physics with a concentration in Astronomy, a minor in Mathematics, and a minor in Physics. The Department also offers a 5-year combined BS/MS Mathematics/Statistics, a BS/MS Mathematics with a concentration in Statistics/Statistics, and a BS/MAT Mathematics with Teacher Certification in Mathematics (Preschool-Grade 12)/Teacher of Students with Disabilities. There are honor programs in Mathematics and Physics for qualified students. The programs introduce central ideas in a variety of areas in Mathematics and Physics, and develop problem-solving ability by teaching students to combine critical thinking with rigorous reasoning. The Mathematics program provides students with a spectrum of courses in pure and applied mathematics and develops rigorous mathematical thinking. The Bachelor of Science degree in Mathematics is an extremely versatile degree. Graduates with this degree have found their mathematical and analytical training in demand in business, industry, government, and in the teaching profession. The versatility afforded by a degree in mathematics allows graduates to easily adjust to unexpected shifts in employment opportunities from one of these areas to another. All our mathematics programs also prepare students for graduate study. The department administrator is the resource for specific information such as advanced placement, transfer credits, dual majors, Cooperative Education, and independent study. Programs of Study
{"url":"http://www.montclair.edu/catalog/view_requirements.php?DepartmentID=202","timestamp":"2014-04-20T17:38:33Z","content_type":null,"content_length":"20410","record_id":"<urn:uuid:391ec125-6043-4ba4-a3bd-cdc11868aec1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Second Order ODEs by Computing Integrating Factors for Them Integrating factors - depending on two variables - for second order ODEs • For second order ODEs integrating factors depending on three variables, (x,y,y') always exist - their determination however may be as difficult as solving the ODEs themselves. However, integrating factors of the form mu(x,y), mu(x,y'), mu(y,y'), that is, depending on just two variables, when they exist, can be determined systematically (see E.S. Cheb-Terrab and A.D. Roche, "Integrating factors for second order ODEs", Journal of Symbolic Computation V. 27, No. 5 (1999) 501). This method is implemented in dsolve. • The main idea of this method consists of algorithmically determining whether or not a given second order ODE is member of one of the three reducible ODE families respectively admitting integrating factors of the forms mu(x,y), mu(x,y'), mu(y,y') (see redode): > PDEtools[declare](y(x), prime=x); > subs(_F1 = G, redode(mu(x,y(x)),y(x),2) ); # ODE family admitting mu(x,y) > convert( subs(diff(y(x),x,x) = `y''`, diff(y(x),x) = `y'`,y(x)=y,(2)), diff); > subs(_F1 = G, redode(mu(x,diff(y(x),x)),y(x),2) ); # ODE family admitting mu(x,y') > convert( subs(diff(y(x),x,x) = `y''`, diff(y(x),x) = `y'`,y(x)=y,(4)), diff); > subs(_F1 = G, redode(mu(y(x),diff(y(x),x)),y(x),2) ); # ODE family admitting mu(y,y') > convert( subs(diff(y(x),x,x) = `y''`, diff(y(x),x) = `y'`,y(x)=y,(6)), diff); and in doing that, determining mu. Concretely, this means that members of these three ODE families can now be systematically reduced to first order ODEs - for arbitrary mu and G - as if they were simple exact equations (total derivatives). This has enlarged in a noticeable way the decision procedures available for tackling second order ODEs in the nonlinear case. Although the relevant thing here is that the new algorithms don't require solving any auxiliary differential equation, it is also worth mentioning that the ODE families above include members not having point symmetries and nonetheless reducible to quadratures; this fact was the main result of an interesting article in 1988 (see A. Gonzalez-Lopez, Phys. Lett. A, (1988) 190). As an ODE example, consider This ODE has the following integrating factor (see intfactor) This integrating factor can be tested using mutest The product Mu*ode is an exact ODE (a total derivative) and hence one can take advantage of this fact to reduce the second order ODE to a first order one, in this case of Bernoulli type (in turn fully solvable in the general case): From where the solution to the original ode follows straightforwardly: (for more details on how to build solutions from the knowledge of integrating factors see dsolve,education, subsection Constructing solutions using integrating factors.) All these steps are performed automatically by dsolve by calling It is also possible to invoke the use of "just" this method for reducible ODEs or for each of the three forms of integrating factors via For the specific example under discussion, the first three callings above will solve the problem. As a concluding remark, an ODE not having point symmetries and having such an integrating factor of the form 1/y is still a simple problem if compared with the three ODE families shown on top, related to arbitrary integrating factors depending on two variables. That the ode under discussion has no point symmetries can be checked with ease as follows. First we generate the determining system for the infinitesimals of the symmetry generator (see gensys): Then we simplify this system with respect to its integrability conditions using casesplit (in turn using DifferentialAlgebra or DEtools,Rif): verifying that the only solution to this system is the trivial (useless) one. There are in fact infinitely many cases like the one just mentioned, so that the integrating factor and the symmetry approaches end up complementing each other in a quite useful manner. See Also DEtools, dsolve/education, firint, firtest, gensys, intfactor, muchange, mutest, PDEtools, PDEtools[casesplit], redode Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=dsolve/integrating_factors","timestamp":"2014-04-16T22:46:47Z","content_type":null,"content_length":"146499","record_id":"<urn:uuid:3d056917-6bed-4ebc-9877-8ae2f801dec5>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
PH 414, 415, 417: Quantum Mechanics AY 2011/12 (D. Belitz) Chapter 1: Basic Notions of Quantum Mechanics Chapter 2: A Particle in a Spherically Symmetric Potential Chapter 3: Elements of measurement, representation, and transformation theory Note: Dar Dahlen has kindly typed up part of Ch. 3; thanks, Dar!! The current version can be found here . I have edited some parts that Dar had trouble reading. If you spot any problems, please let Dar or me know. Chapter 4: Spin, the Pauli equation, and identical particles Chapter 5: Time independent perturbation theory, the variational principle, and applications
{"url":"http://physics-server.uoregon.edu/~belitz/teaching/2011_12/PHYS_414-7/toc.html","timestamp":"2014-04-19T22:06:01Z","content_type":null,"content_length":"8428","record_id":"<urn:uuid:56715491-7907-4ef2-914f-7b5e2aa02797>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Support functions for property lists. Support functions for property lists. Property lists are ordinary lists containing entries in the form of either tuples, whose first elements are keys used for lookup and insertion, or atoms, which work as shorthand for tuples {Atom, true}. (Other terms are allowed in the lists, but are ignored by this module.) If there is more than one entry in a list for a certain key, the first occurrence normally overrides any later (irrespective of the arity of the tuples). Property lists are useful for representing inherited properties, such as options passed to a function where a user may specify options overriding the default settings, object properties, annotations, property() = atom() | tuple() property(P::property()) -> property() Creates a normal form (minimal) representation of a property. If P is {Key, true} where Key is an atom, this returns Key, otherwise the whole term P is returned. See also: property/2. property(Key::term(), Value::term()) -> property() Creates a normal form (minimal) representation of a simple key/value property. Returns Key if Value is true and Key is an atom, otherwise a tuple {Key, Value} is returned. See also: property/1. unfold(List::[term()]) -> [term()] Unfolds all occurences of atoms in List to tuples {Atom, true}. See also: compact/1. compact(List::[term()]) -> [term()] Minimizes the representation of all entries in the list. This is equivalent to [property(P) || P <- List]. See also: property/1, unfold/1. lookup(Key::term(), List::[term()]) -> none | tuple() Returns the first entry associated with Key in List, if one exists, otherwise returns none. For an atom A in the list, the tuple {A, true} is the entry associated with A. See also: get_bool/2, get_value/2, lookup_all/2. lookup_all(Key::term(), List::[term()]) -> [tuple()] Returns the list of all entries associated with Key in List. If no such entry exists, the result is the empty list. See also: lookup/2. is_defined(Key::term(), List::[term()]) -> bool() Returns true if List contains at least one entry associated with Key, otherwise false is returned. get_value(Key::term(), List::[term()]) -> term() get_value(Key::term(), List::[term()], Default::term()) -> term() Returns the value of a simple key/value property in List. If lookup(Key, List) would yield {Key, Value}, this function returns the corresponding Value, otherwise Default is returned. See also: get_all_values/2, get_bool/2, get_value/1, lookup/2. get_all_values(Key, Ps::List) -> [term()] Similar to get_value/2, but returns the list of values for all entries {Key, Value} in List. If no such entry exists, the result is the empty list. See also: get_value/2. append_values(Key::term(), List::[term()]) -> [term()] Similar to get_all_values/2, but each value is wrapped in a list unless it is already itself a list, and the resulting list of lists is concatenated. This is often useful for "incremental" options; e.g., append_values(a, [{a, [1,2]}, {b, 0}, {a, 3}, {c, -1}, {a, [4]}]) will return the list [1,2,3,4]. See also: get_all_values/2. get_bool(Key::term(), List::[term()]) -> bool() Returns the value of a boolean key/value option. If lookup(Key, List) would yield {Key, true}, this function returns true; otherwise false is returned. See also: get_value/2, lookup/2. get_keys(List::term()) -> [term()] Returns an unordered list of the keys used in List, not containing duplicates. delete(Key::term(), List::[term()]) -> [term()] Deletes all entries associated with Key from List. substitute_aliases(As::Aliases, List::[term()]) -> [term()] □ Aliases = [{Key, Key}] □ Key = term() Substitutes keys of properties. For each entry in List, if it is associated with some key K1 such that {K1, K2} occurs in Aliases, the key of the entry is changed to Key2. If the same K1 occurs more than once in Aliases, only the first occurrence is used. Example: substitute_aliases([{color, colour}], L) will replace all tuples {color, ...} in L with {colour, ...}, and all atoms color with colour. See also: normalize/2, substitute_negations/2. substitute_negations(As::Negations, List::[term()]) -> [term()] □ Negations = [{Key, Key}] □ Key = term() Substitutes keys of boolean-valued properties and simultaneously negates their values. For each entry in List, if it is associated with some key K1 such that {K1, K2} occurs in Negations, then if the entry was {K1, true} it will be replaced with {K2, false}, otherwise it will be replaced with {K2, true}, thus changing the name of the option and simultaneously negating the value given by get_bool(List). If the same K1 occurs more than once in Negations, only the first occurrence is used. Example: substitute_negations([{no_foo, foo}], L) will replace any atom no_foo or tuple {no_foo, true} in L with {foo, false}, and any other tuple {no_foo, ...} with {foo, true}. See also: get_bool/2, normalize/2, substitute_aliases/2. expand(Es::Expansions, List::[term()]) -> [term()] □ Expansions = [{property(), [term()]}] Expands particular properties to corresponding sets of properties (or other terms). For each pair {Property, Expansion} in Expansions, if E is the first entry in List with the same key as Property, and E and Property have equivalent normal forms, then E is replaced with the terms in Expansion, and any following entries with the same key are deleted from List. For example, the following expressions all return [fie, bar, baz, fum]: expand([{foo, [bar, baz]}], [fie, foo, fum]) expand([{{foo, true}, [bar, baz]}], [fie, foo, fum]) expand([{{foo, false}, [bar, baz]}], [fie, {foo, false}, fum]) However, no expansion is done in the following call: expand([{{foo, true}, [bar, baz]}], [{foo, false}, fie, foo, fum]) because {foo, false} shadows foo. Note that if the original property term is to be preserved in the result when expanded, it must be included in the expansion list. The inserted terms are not expanded recursively. If Expansions contains more than one property with the same key, only the first occurrance is used. See also: normalize/2. normalize(List::[term()], Stages::[Operation]) -> [term()] □ Operation = {aliases, Aliases} | {negations, Negations} | {expand, Expansions} □ Aliases = [{Key, Key}] □ Negations = [{Key, Key}] □ Key = term() □ Expansions = [{property(), [term()]}] Passes List through a sequence of substitution/expansion stages. For an aliases operation, the function substitute_aliases/2 is applied using the given list of aliases; for a negations operation, substitute_negations/2 is applied using the given negation list; for an expand operation, the function expand/2 is applied using the given list of expansions. The final result is automatically compacted (cf. compact/1). Typically you want to substitute negations first, then aliases, then perform one or more expansions (sometimes you want to pre-expand particular entries before doing the main expansion). You might want to substitute negations and/or aliases repeatedly, to allow such forms in the right-hand side of aliases and expansion lists. See also: compact/1, expand/2, substitute_aliases/2, substitute_negations/2. split(List::[term()], Keys::[term()]) -> {Lists, Rest} □ Lists = [[term()]] □ Rest = [term()] Partitions List into a list of sublists and a remainder. Lists contains one sublist for each key in Keys, in the corresponding order. The relative order of the elements in each sublist is preserved from the original List. Rest contains the elements in List that are not associated with any of the given keys, also with their original relative order preserved. split([{c, 2}, {e, 1}, a, {c, 3, 4}, d, {b, 5}, b], [a, b, c]) {[[a], [{b, 5}, b],[{c, 2}, {c, 3, 4}]], [{e, 1}, d]}
{"url":"http://erldocs.com/R11B-5/stdlib/proplists.html","timestamp":"2014-04-17T00:50:33Z","content_type":null,"content_length":"17386","record_id":"<urn:uuid:ce7bd63d-d958-4331-b098-4ebcb051be94>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Work: Force through a Distance Work: Force through a Distance (page 2) Science Fair Survival Guide The work done will depend on the force needed to pull the box and the distance it moves. In the example, the work done is 4.45 J. Work is what is accomplished when a force causes an object to move. The amount of work done is equal to the product of the force applied to an object times the distance the object moves in the direction of the force. Another requirement for work to be done is that the distance the object is moved must be in the same direction that the force is applied. In this experiment, a horizontal force moves the box in a horizontal direction, so work is done. Try New Approaches 1. Does the speed at which an object moves affect the work needed to move it? Repeat the experiment twice, first at a higher but constant speed and then at a lower but constant speed. 2. How does the weight of the object being moved affect the work done to move it? Repeat the original experiment twice, first using a lesser weight in the box and then using a greater weight. Note: Try to pull the box at the same speed for each testing. Design Your Own Experiment 1. A machine is a device that makes work easier. Machines make work easier by changing either the size or the direction of the input force. Simple machines are the most basic machines, such as an inclined plane (a flat, slanted surface). Inclined planes are used to transport an object to a specific height. Design an experiment to determine if using an inclined plane affects the overall work done on the object being moved. One way is to add weight, such as marbles, clay, or coins, to a small box with a lid. Close the box and secure the lid with tape. Tie a string around the box and attach the hook of a spring scale to the string. Use the scale to slowly raise the box a vertical distance of 1 meter. As you raise the box, ask a helper to note the reading on the scale in newtons, grams, or pounds. If the reading moves up and down slightly, record the average reading. Employ the previous method of determining force in newtons using pound or gram units. Then determine the work done in lifting the box using this equation: w = f × d. Then prepare an inclined plane by placing one end of a board at least 1 meter longer than the box on a stack of several books. Use the scale to move the box up the inclined plane for a distance of 1 meter. Repeat the procedure for determining the force needed to move the box and the work done. Use diagrams to display the results of the experiments. a. Sometimes a force on an object is at an angle to the direction of motion. An example would be pulling a wagon's handle at an angle, causing the wagon to move horizontally (see Figure 5.2). In this case, the relationship of the force acting on the wagon can be expressed by the equation d[a]/d[h] = f[h]/f[a], where d[a] is the distance of the side adjacent to the angle of the applied force, d[h] is the distance of the hypotenuse (side opposite the right angle), f[h] is the force causing horizontal motion parallel to the direction in which an object is moved, and f [a] is the force applied at angle A°. The cosine (cos) of an angle is equal to the length of the adjacent side (d[a]) divided by the hypotenuse (d[b]). Since cos A° = d[a]/d[h] and d[a]/d[h] = f[h]/f[a], then cos A° = f[h]/f[a]. Thus the horizontal force (f[h]) causing the wagon to move in a horizontal direction can be calculated using this equation: f[h] = f[a] × cos A°. (See Appendix 1 for the cosine value of different angles.) Design an experiment to calculate the work done by a force that is at an angle to the direction in which an object is moved. One way is to attach a scale to a weighted box. Move the box across a table by pulling on the scale so that this force is at an angle to the movement of the box, as shown in Figure 5.3. Measure and record the distance (d) the box is moved. Use a protractor to measure the angle (A°) of the applied force. Determine the work using this equation: For example, if the box is moved 0.6 m by a force of 10 N applied at an angle of 30°, the work done would be: For more information about work done by a constant force that is applied at an angle relative to the direction of motion, see J. P. Den Martog, Mechanics (New York: Dover, 1961), pp. 133-135. b. How does the angle affect the amount of work done in the previous experiment? Repeat the experiment three times, first at a smaller angle and second at a greater angle, but less than 90°. For the third trial, use an angle of 90°, thus slightly lifting the box above the table. Prove mathematically that while the box is moved horizontally while applying a force at 90°, no work is done. Science Fair Hint: Show vector diagrams for each angle. You do work in lifting an object, but once the object is lifted, you do no work in carrying it across a room. For an explanation of this seeming paradox, see work in a physics text and Larry Gonick and Art Hufamn, The Cartoon Guide to Physics (New York: HarperPerennial, 1990), p. 75. Get the Facts Power is the rate of doing work. Since power is work divided by time, power is expressed as joules per second in SI units. The power unit of watt was named after James Watt (1736–1819), the inventor of the steam engine. How do the units of watt and horsepower compare to the SI unit of joules/ sec? See a physics text for a comparison of power units. Disclaimer and Safety Precautions Education.com provides the Science Fair Project Ideas for informational purposes only. Education.com does not make any guarantee or representation regarding the Science Fair Project Ideas and is not responsible or liable for any loss or damage, directly or indirectly, caused by your use of such information. By accessing the Science Fair Project Ideas, you waive and renounce any claims against Education.com that arise thereof. In addition, your access to Education.com’s website and Science Fair Project Ideas is covered by Education.com’s Privacy Policy and site Terms of Use, which include limitations on Education.com’s liability. Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety.
{"url":"http://www.education.com/science-fair/article/work-force-distance/?page=2","timestamp":"2014-04-16T22:02:43Z","content_type":null,"content_length":"87590","record_id":"<urn:uuid:e46cbb01-4f8d-430a-8763-d9c2bd9b908e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimals: Terminating or Repeating? Date: 01/26/2001 at 15:10:31 From: Seegee Subject: How to tell if decimals are terminating or repeating? How can you tell just by looking at a fraction whether, in decimal form, it will terminate or repeat? My math teacher said there was a way, but I don't see how. Please help. Date: 01/26/2001 at 15:49:23 From: Doctor Greenie Subject: Re: How to tell if decimals are terminating or repeating? Hi, Seegee - If a decimal fraction terminates, then it has a name like one of the " ____ tenths" " ____ hundredths" " ____ thousandths" " ____ ten-thousandths" " ____ millionths" " ____ ten-billionths" etc., etc. When you write these numbers as common fractions, what is special about the denominators? The answer to that question should be a big hint toward the answer to your question, but it won't give the complete answer. For example, here are a couple of fractions whose decimal representations terminate but that don't have names from the "infinite" list above: 3/4 (= .75) and 5/8 (= .625). So why do these two have terminating decimals, while a fraction like 1/3 does not? It is because the first two can be written as equivalent fractions with names from the list above, while the fraction 1/3 3/4 = 75/100 = seventy-five hundredths 5/8 = 625/1000 = six hundred twenty-five thousandths but you can't write 1/3 = a/10 or = b/100 or = c/1000 or .... where a, b, c, or any other of the numerators are integers. I have still only hinted at the precise answer to your question. If you can't quite figure out the whole answer after studying what I've written, you can find the complete answer in the Dr. Math archives. Click on the "Search the Archives" link on the main Dr. Math page and use "repeating decimal" or "terminating decimal" as the phrase to search for (do not use quotation marks, but be sure to click on the button that makes the search engine look for the entire phrase instead of the individual words). The search will provide you with links to several pages where this question is discussed. - Doctor Greenie, The Math Forum Date: 01/26/2001 at 15:32:14 From: Doctor Rob Subject: Re: How to tell if decimals are terminating or repeating? Thanks for writing to Ask Dr. Math, Seegee. The fraction will terminate if and only if the denominator has for prime divisors only 2 and 5, that is, if and only if the denominator has the form 2^a * 5^b for some exponents a >= 0 and b >= 0. The number of decimal places until it terminates is the larger of a and b. The proof of this lies in the fact that every terminating decimal has the form n/10^e, for some e >= 0 (e is the number of places to the right that the decimal point must be moved to give you an integer, and n is that integer), and every fraction of that form has a terminating decimal found by writing down n and moving the decimal point e places to the left. Now when you cancel common factors from n/10^e = n/(2*5)^e = n/(2^e*5^e), it may reduce the exponents in the denominator, but that is all that can happen. - Doctor Rob, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/58174.html","timestamp":"2014-04-16T13:13:17Z","content_type":null,"content_length":"8188","record_id":"<urn:uuid:e1d7c0e0-d131-404a-a2c7-4992e12907e9>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Area vs. Perimeter of Rectangles Date: 03/19/2000 at 16:19:30 From: Melissa Subject: Perimeter and area I don't understand how two rectangles with exactly the same perimeter can enclose different areas. Can you explain that to me? Thank you, Date: 03/19/2000 at 23:15:36 From: Doctor Peterson Subject: Re: Perimeter and area Hi, Melissa. I can start by convincing you that it's really true. A mathematician often looks at a question like this by thinking about the extreme cases, to get a feel for how far things can go. So let's think about extreme rectangles. Suppose you make a loop of string, say 24 inches long, and try to make a rectangle of it, by putting four fingers into the loop and moving them around. What's the widest rectangle you can make? Pull your fingers as far as they can go, and you'll have something like this: If you imagine your fingers having no width, you can see that the widest rectangle possible would have zero height (or as little as you are willing to have and still call it a rectangle) and width 12 inches. Its area will be zero. At the other extreme, of course, you can stretch your rectangle vertically so that it is 12 inches high with no width, and again has zero area. Yet you know that in between you do have a positive area, and in fact it will turn out that a square (with the width and height the same) will have the greatest area you can make. So how can area change when the perimeter stays the same? Here's one way to look at it, suggested by a problem someone sent in recently. Let's reverse the question and try to build a rectangle out of 12 one-inch squares (a fixed area) and see why we won't always get the same perimeter. The 12 squares will have a total perimeter of 48 inches (4 inches each). If I line up the squares in a row, only two or three sides of each square will be part of the perimeter, while the others will be shared with neighbors: _ _ _ _ _ _ _ _ _ _ _ _ Each of the 11 "interior edges" between two squares takes away two inches from the perimeter (one side of each square), so the perimeter of this rectangle will be 48 - 22 = 26. Since the height is 1 and the width is 12, that's right: 1 + 12 + 1 + 12 = 26. Now let's stack the squares closer together, in two rows of 6: _ _ _ _ _ _ Now there are 16 interior edges, because more of the squares are touching, so we subtract not 22 but 32 inches from the perimeter, which is now 48 - 32 = 16. Yes, this is 2 + 6 + 2 + 6. Now let's lump them even closer together (more squarish), as a 3 x 4 _ _ _ _ Now there are 17 interior edges, so the perimeter is 48 - 34 = 14 inches, which is equal to 3 + 4 + 3 + 4. Do you see what's happening? The more squarish the rectangle is, the more edges the squares share, and the less they contribute to the perimeter, so the less the perimeter will be. The same sort of thing happens with three-dimensional shapes, and this effect is important in such questions as how your body dissipates heat: if we picture our squares as cells, then a flat shape will let each cell be close to the surface and cool itself off, while a roundish shape will force more cells into the interior, where they won't be part of the surface, and also won't lose heat easily. Lumpy things have less "outside" for the same amount of "inside." (That's why elephants have thin ears, to radiate more heat, and why cactuses have thick stems, to retain more moisture.) So the basic answer to your question is that area measures the "inside" of a shape and perimeter measures the "outside," and by changing the shape we can move outside parts to the inside without changing the outside. Or, if we keep the perimeter the same as you originally asked, we can keep the same "outside" but pack more "inside" into it, which will puff it up. Thanks for the question - it's fun to think about this sort of thing! - Doctor Peterson, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/57820.html","timestamp":"2014-04-19T00:52:30Z","content_type":null,"content_length":"9453","record_id":"<urn:uuid:6342d445-9265-43a4-bbe9-02ef9b80af7f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Necklaces and the generating function for inversions up vote 4 down vote favorite The problem of Necklaces is well-known, i.e "The number of fixed necklaces of length $n$ composed of $a$ types of beads $N(n,a)$" can be calculated: http://mathworld.wolfram.com/Necklace.html Let us consider the limit $\lim_{n\to \infty}\prod_{p=1}^n N(p,a)$. It is possible to show that the limit presents the result which looks like the generating function for inversion ( we may exclude one unimportant factor): $\frac {a^n} {n!}$ $\prod_{p=1}^n \frac {1-a ^p} {1-a}$ For $n \to \infty$ we have $\prod_{p=1}^n N(p,a) \approx \frac {a^n} {n!}$ $\prod_{p=1}^n \frac {1-a^p} {1-a}$. Then, for eg please see theorem #1 http://www.cs.uwaterloo.ca/journals/JIS/VOL4/ MARGOLIUS/inversions.pdf The generating function under theorem 1 looks like $\prod_{p=1}^n \frac {1-a^p} {1-a}$ So, a question appears, how to explain the influence of the symmetric group's properties for the particular case? In other words why and how the connection appears? gr.group-theory co.combinatorics Can you be more precise about the value of the limit? – Qiaochu Yuan Aug 1 '12 at 18:56 @Qiaochu Yuan Thank you, I've added the approximation of the limit – Mikhail Gaichenkov Aug 1 '12 at 19:06 I don't understand how this limit is equal to the claimed generating function. Can you elaborate? – Gjergji Zaimi Aug 3 '12 at 3:26 @ Gjergji Zaimi For $n \to \infty$ we have $\prod_{p=1}^n N(p,a) \approx \frac {a^n} {n!}$ $\prod_{p=1}^n \frac {1-a^p} {1-a}$. Then, for eg please see theorem #1:cs.uwaterloo.ca/journals/JIS/VOL4/ MARGOLIUS/inversions.pdf – Mikhail Gaichenkov Aug 3 '12 at 17:17 The coefficient of $a^n/n!$ in $\prod N(p,a)$ is $\prod \varphi(p)$, but in the other side it is 1... – Gjergji Zaimi Aug 3 '12 at 22:06 show 2 more comments Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged gr.group-theory co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/103716/necklaces-and-the-generating-function-for-inversions","timestamp":"2014-04-18T10:37:30Z","content_type":null,"content_length":"52799","record_id":"<urn:uuid:bc5956d7-8026-4f2c-a4c6-14efe368f600>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
The Shifting Demographic Landscape of Influenza Background: As Pandemic (H1N1) 2009 influenza spreads around the globe, it strikes school-age children more often than adults. Although there is some evidence of pre-existing immunity among older adults, this alone may not explain the significant gap in age-specific infection rates. Methods & Findings: Based on a retrospective analysis of pandemic strains of influenza from the last century, we show that school-age children typically experience the highest attack rates in primarily naive populations, with the burden shifting to adults during the subsequent season. Using a parsimonious network-based mathematical model which incorporates the changing distribution of contacts in the susceptible population, we demonstrate that new pandemic strains of influenza are expected to shift the epidemiological landscape in exactly this way. Conclusions: Our results provide a simple demographic explanation for the age bias observed for H1N1/09 attack rates, and a prediction that this bias will shift in coming months. These results also have significant implications for the allocation of public health resources including vaccine distribution policies.
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC2762811/?lang=en-ca","timestamp":"2014-04-17T20:09:54Z","content_type":null,"content_length":"123715","record_id":"<urn:uuid:879e2066-78b3-4284-8e54-73c021b43efc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Goes Pop! Last year, Professor Steven Strogatz of Cornell University wrote a series of op-eds for the New York Times that discussed the presence of mathematics in unlikely places. I discussed one of these columns here. Now, either those articles were well-received, or Professor Strogatz is well-connected, because this year he’s back in the Times with a much more ambitious series of articles. This time around, Strogatz is attempting to “[write] about the elements of mathematics, from preschool to grad school, for anyone out there who’d like to have a second chance at the subject.” Preschool to grad school is a significant amount of ground to cover, but thus far Strogatz has used his articles to assault this goal with gusto. To date, he has tackled counting, patterns in addition, negative numbers, division, and basic high school algebra. This doesn’t really do justice to his content, though. Along the way he . . . → Read More: Math in the News(paper) Math has gotten a bit of a visibility boost recently, in the form of posts by professor Steven Strogatz at the New York Times blog. For three weeks, starting at the end of May, Professor Strogatz filled in for usual blogger Olivia Judson, and during that time he used the platform to write some highly readable musings that show the presence of mathematics in unlikely places, and touch on some of the directions math is headed in the 21st century. Let me highlight the first post, titled “Math and the City.” Professor Strogatz begins this article by describing Zipf’s law, an observation attributed to linguist George Zipf regarding the distribution of words in a language (for a linguistic motivation, you can check the Wikipedia article on Zipf’s law). One of these things is not like the other. In the context of cities, the law states the following: in a given . . . → Read More: Math Gets Around in the Big City
{"url":"http://www.mathgoespop.com/tag/strogatz","timestamp":"2014-04-19T17:01:42Z","content_type":null,"content_length":"73173","record_id":"<urn:uuid:975b3ee8-e003-409f-863f-841b792cf57b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Application of Differentiation Sub The differentiation is the subfield of Calculus and there are various application of differentiation in real world. The differentiation is very important part of Math as it is used in many Topics scientific fields. Differentiation can be defined as the process of finding the Derivatives of the Functions. Differentiation can be used as a tool to calculate or study the rate of change of a quantity with respect to change in some other quantity. The most common example is calculation of velocity and acceleration. Velocity is given by v = dx / dt, where ' x ' is the distance covered by a moving body in time ‘t’. Similarly acceleration can be given by can be given by a = dv / dt as acceleration is rate of change of velocity with respect to time. Here ' a ' is the acceleration ' v ' is the velocity and ' t ' is time. Now we will see some other applications of differentiation- 1 ) Normal’s and tangents- Differentiation can be used to find the tangents and normal’s of curve we are studying the different forces acting on a body. Tangent- Tangent can be defined as a straight line that touches the curve at a Point and the Slope of curve and line is same. Normal- The perpendicular line to the Tangent of a curve is known as normal. Slope can be calculated by using dy / dx or Slope= dy / dx. 2 ) Curvilinear motion- As we can calculate the velocity and acceleration of a moving body we can also use differentiation in curvilinear in which object moves along a curved path. Here we express x and y as function of time and it is known as parametric form. Here horizontal component of velocity is given by v[x] = dx / dt, vertical component of velocity is given by v[y] = dy / Magnitude is calculated by v = √ ( v^2[x] + v^2[y] ). Direction ⊖ of an object can be calculated by tan ⊖[v ]= v[y] / v[x] 3 ) Related rates- When two are varying with respect to time and if they are related, then they can be expressed in terms of each other. We will have to differentiate both sides with respect to time d / dt. 4 ) Drawing a curve- We can sketch a curve using differentiation, we can find the Maxima and Minima using given data by finding the first derivative that is dy / dx or y ' and putting it equals to 0 that is y ' = 0 if value of ' x ' is positive then function has local minima and otherwise function has a local maxima. Then we calculate second derivative that is d^2 ( y ) / dx that is y' ' and if value of y' ' is greater than 0 or y' ' > 0 then curve has minimum type shape otherwise curve has a maximum type shape. Slope in Derivatives is a simple and very useful concept in Calculus. We will learn here that how to find the Slope in the different type of Differential Equations by the help of derivative. We will go through the several ways for finding the Slope of a derivative and also solves some of the problems related to the evaluation of the slopes in the derivatives. When we talk about the topic, first we have to understand the derivative of any expression. Derivative of a function generally shows the change in the function as the input of the function changes. So we can say that a derivative of a function is a quantity that shows that how much a quantity changes when any change occurs in the response of change in some other related quantity. For example the change in the Position of a particle occurs according to the change in the velocity of that object. The derivative of a function for a chosen value gives the best linear approximation in the function nearby of that input changing value. Now just talking about any of the real value function of a single variable, derivative at any of the Point of the function is the slope of the Tangent line to that graph at that point. In some of the higher dimensions, the derivative of a function at any particular point is the linear transformation of that function at that point. This linear transformation is also called as linearization of that function for that point. The concept of the slope in derivatives is totally same as the concept of the derivative in the differential equations. We simply calculate the differentiation of any of the function to get the derivative of that function. So, slope in derivatives is the process of finding differentiation of the function. In the process of differentiation, we simply finds the rate of change in the value of 'y' in the comparison of the change in the value of the other independent input variable 'x'. This rate of change in the function is called as the derivative of 'y' with respect to the other independent variable 'x'. In some simple words we can say that “y dependence on x” also means that the “y is a function of x” and this functional relationship is given by “ y = f (x)”. Here f is the function of x in the terms of y and this y is dependent to the value of x. If both of the variables are the real Numbers then we can plot a graph for this function and this graph will have a number of tangents for different points on the graph, for a particular point we can find a tangent for that graph and this tangent will show the slope of the function f(x) on that point. We can find the slope of the function at any of the point on the graph. The slope of any curve at any point on that curve is the slope of the tangent line of the curve on that point. Here are a number of types of the function in the calculus mathematics where we calculate the slope of the function for any point of that function. The simplest case is the linear types of Functions where we have to find the slope. For general Linear Function say y = f(x) = mx + c, is a function of line, where m is the slope of linear function and c is the intercept of the line made by line on the co-ordinates. Now here the slope 'm' of the line can be given as the: m = (change in y) / (change in x), m = Δy/Δx. The sign of the delta 'Δ' (delta) shows the changes occurred in the quantities. We can also get this formula by some of the calculation on the function. As Δ is the change in the quantities then we can write function as: y = f(x + Δx) = mx + c, y + Δ y = m (x + Δx) + c, y + Δy = mx + mΔx + c, As y = mx + c then it can be written as: Δy = mΔx, So, m = Δy/Δx. In the case if function is not linear, means it is not a Straight Line and the value of m changes accordingly. Differentiation is a perfect method to find the rate of change in any of the quantity. Here are several Notations for the slope in derivatives. In the Leibniz's notation, s we gives the slope in derivatives as dy/dx and this derivative is read as “derivative of y with respect to the x” or “dy by dx”. To understand the concept of the slope in derivative, if we zoom the graph at any of the point such that the graph looks like a straight line on that point then the derivative at that is the slope of the line. Now talking about a real life example for change in the derivatives, say a velocity of a bike changes as the driver changes the speed and change in the distance occurs when change in the velocity occurs. The acceleration is the example of double derivative. Now just talking about the some of the examples, 1. Derivative of the expression: f(x) = 2x2 – 5x + 3. Now, the dy/dx = differentiation of (2x2 – 5x + 3), dy/dx = 2*2x – 5 = 4x – 5. 2. Calculating the slope of function y = x3, for the points x = 1 to x = 3. So, we have to find the derivative of the function at that point. dy/dx = 3x2, dy/dx at point x = 1 is 3 and dy/dx at x = 3 is 27. 3. y = 5x2 + 3x so tangent can be given as dy/dx = 10x + 3. Similarly, we can calculate the slope in derivatives all type of equations. When we make an angle with the positive direction of x axis in anticlockwise sense is called as a tangent of line or Slope or gradient of line. So, tangent is a trigonometric angle, which is called as a Slope or gradient of the line. The slope of line is generally denoted by m, where m = tan t, here t is the angle which makes with the positive direction of x axis in anticlockwise sense and derivative of tangent is sec^2 x. Now we discuss how we calculate tangent angle- Parallel to x axis: when a line parallel to x axis makes an angle of 0 degree with x axis. Therefore its slope or tangent is tan 0 = 0. Perpendicular to x axis: perpendicular to x axis or parallel to y axis is same situation. So, when we calculate an angle from x axis, it produces 90 degree. Therefore its slope or tangent is tan 90 = Inclined with axis: The Slope of a line, equally inclined with axis, is 1 or -1 as it makes 45 degree or 135 degree with x axis and we know that tan 45 = 1 and tan 135 = -1. Now we discuss tangent in terms of coordinates of two points- Let two points A(x[1],y[1]) and B(x[2],y[2]) are represents a line, then tangent or slope of line is- m = (y[2] – y[1])/(x[2] – x[1]) = difference of ordinates/difference of Abscissa. For example a line represents two points (1,2) and (3,4), then tangent or slope of line is- m = (y[2] – y[1])/(x[2] – x[1]) = (4 – 2)/(3 – 1) = 2/2 = 1 So, tangent of line is m = 1. Now we discuss how we calculate a tangent from a given equation: Let a line whose equation is ax + by + c = 0, then tangent or slope of that line is - m = -a/b = -coefficient of x/coefficient of y For understand this property, we take an example - Example: Find the tangent of line, whose equation is 3x – 2y + 5 = 0. Solution: given line equation is 3x – 2y + 5 = 0, then tangent or slope of line is- m = -a/b = -(3/-2) = 3/2, So, tangent of line is 3/2. Now we discuss how we calculate tangent between two lines: If t is a Angle between two lines whose slope m[1] and m[2], then- tan t = (m[1] – m[2])/(1 + m[1]m[2]), If both lines are parallel, then angle between both line is 0 degree. Then tan 0 = 0 => m[1] – m[2] /(1 + m[1]m[2]) = 0 => m[1] = m[2]. Now we can say that if slope of two lines are equal than both lines are parallel. Similarly there is a condition which shows perpendicularity: When two lines are perpendicular, then angle between both lines are 90 degree. So, tan 90 = ∞ = (m[1] – m[2])/(1 + m[1]m[2]) => m[1]m[2 ]= -1, Thus if two lines are perpendicular, then product of their slope is -1 and if m is the slope of a line, then slope of a line perpendicular to it -1/m. In differential Point of view, a tangent is equal to dy/dx, where y is a function So, m = tan t = dy/dx, And differentiation of tangent is sec^2 x. We take some example which shows how we calculate tangent of a given curve by using differentiation - Example 1: Find the tangent of curve x^2 + 3y + y^2 = 5 at (1,1). Solution: The equation of curve is x^2 + 3y + y^2 = 5. Now we differentiate the given curve with respect to x - 2x + 3*(dy/dx) + 2y*(dy/dx) = 0, => dy/dx = -2x/(2y+3), => (dy/dx)[(1,1)] = -2/(2+3) = -2/5, So, slope or tangent of curve at (1,1) is (dy/dx)[(1,1) ]= -2/5. Example 2: Prove that tangent to the curve y = 2x^3 – 3 at the points x=2 and x=-2 are parallel. Solution: the equation of given curve is y = 2x^3 – 3 …........equation(1) Now we differentiate this curve with respect to x - dy/dx = 6x^2 at x = 2, dy/dx = 6*(2)^2 = 6*4 = 24, at x = -2, dy/dx = 6*(-2)^2 = 6*4 = 24, So, at both point x=2 and x = -2, tangent are same. Therefore m[1] = m[2,] Thus we can say that tangent to the curve y = 2x^3 – 3 at the points x=2 and x=-2 are parallel. Example 3: Prove that the tangents to the curve y = x^2 – 5x + 6 at the points (2,0) and (3,0) are perpendicular to each other. Solution: the equation of the curve is y = x^2 – 5x + 6 …..........equation(1) Now we differentiate this curve with respect to x- dy/dx = 2x – 5, Now, we calculate the slope- Slope at (2,0) = m[1] = dy/dx = 2*2 – 5 = 4 – 5 = -1, Slope at (3,0) = m[2] = dy/dx = 2*3 – 5 = 6 – 5 = 1, Here m[1]*m[2] = -1, So, we can say that the tangents to the curve y = x^2 – 5x + 6 at the points (2,0) and (3,0) are perpendicular to each other. These are example which shows us procedure to calculate tangents of curve and properties of tangents. Now we discuss how we derive equation of a line from a tangent: If a Straight Line is passing through a point P(x[1],y[1]) and m is slope of line, then equation of line is - y – y[1 ]= m(x – x[1]), Now we discuss how we calculate equation of a given tangent: We use following steps to calculate equation- Step 1 : First of all we assume a given curve as a variable like y = x^2. Step 2 : After assuming the variable, we calculate dy/dx. Step 3 : Find the value of dy/dx at the given point P(x[1],y[1]). Step 4 : if dy/dx at that point is a non-zero finite number, then the equations of tangent is - y – y[1 ]= (dy/dx) (x – x[1]). Step 5: if dy/dx = 0, then the equation of tangent y – y[1] = 0, and if dy/dx = ∞, then the equation of tangent x – x[1] = 0, We take some examples to understand the procedure to the equation of tangent- Example 1: Find the equation of the tangent to the curve y = -5x^2 + 6x + 7 at point (½,35/4). Step 1: First of all, the equation of given curve y = -5x^2 + 6x + 7. Step 2: now we calculate dy/dx of given curve with respect to x- dy/dx = -10x + 6 Step 3: the value of dy/dx at (½,35/4) is- (dy/dx)[(1/2,35/4)] = -10(1/2) + 6 = -5 + 6 = 1, Step 4: the required equation of the tangent at (½, 35/4) is- y – 35/4 = (dy/dx) (x - ½), => y – 35/4 = (1) (x - ½), => y = x + 33/4. So, equation of tangent is y = x + 33/4. Example 2: Find the equation of the tangent to the Parabola y^2 = 4ax at point (at^2,2at). Step 1: First of all, the equation of given curve y^2 = 4ax. Step 2: now we calculate dy/dx of given curve with respect to x- 2ydy/dx = 4a => dy/dx = 2a/y, Step 3: the value of dy/dx at (at^2,2at ) is- (dy/dx) = 2a/2at = 1/t, Step 4: the required equation of the tangent at (at^2,2at) is- y – 2at = (dy/dx) (x - at^2), => y – 2at = (1/t) (x – at^2), => y + tx = 2at + at^3. So, equation of tangent is y + tx = 2at + at^3. Example 3: Find the equation of the tangent to the curve y = 2x^2 + 3sin x at x=0. Step 1: First of all, the equation of given curve y = 2x^2 + 3sin x, Step 2 : now we calculate dy/dx of given curve with respect to x- dy/dx = 4x + 3 cos x, Step 3: the value of dy/dx at x = 0 is- (dy/dx) = 4(0) + 3 cos(0) = 0 + 3 = 3, Step 4: the required equation of the tangent at x = 0 is- y – 0 = (dy/dx) (x - 0), => y – 0 = (3) (x - 0), => y = 3x. So, equation of tangent is y = 3x. These are example which shows procedure to calculate equation of tangent and shows how to calculate differentiation tangents. Normal Differentiation is a method of obtaining the rate at which a dependent variable or a dependent output say ‘y’ changes with respect to the change in a independent variable or input. This rate of change is called the derivative of y with respect to x. However physical meaning of normal differentiation says that if a graph is plotted between a dependent variable y and a independent variable x, then Slope of the graph gives the derivative of variable 'y' with respect to independent variable 'x'. This Ratio of change or derivative is represented by Δy/Δx. Where symbol ‘Δ’ represents the change in variable whereas, a great mathematician Leibniz represented this ratio of change as dy/dx which also called the derivative of y with respect to x. Now, let’s derive the formula for normal differentiation of a function. Let f(x) is a function of x then, change in this function can be represented by f(x+Δx) and the derivative of this function is given by the formula: Δf(x)/ Δx = [f(x+Δx) – f(x)]/ Δx = df(x)/dx, This expression is called Newton’s difference quotient. Friends, limit of a function play an important role while calculating the Derivatives of different Functions. For example, the derivative of a function f(x) at some Point a can be computed by the relation. F'(a) = lim[h→0] [f(a+h)-f(a)]/[(a+h)-a], further we can say that if h approaches to zero and if limit exists then, we can say f(x) is a differentiable function at a point of its interval ‘a’. Friends, concept of differentiation is widely applicable in many calculations in Calculus differentiation and is mostly used to find the equations of Tangent normal and slopes of different curves. Study of differentiation leads to the study of derivability, continuity and differentiability of a function. In Geometry, a technique that defines basic concept of shape in a plane is called as curve sketching. So, curve sketching Calculus is basically used for solving a mathematical problem about shapes in geometry and for solving the typical mathematical problems like area, maximum, minimum value of certain equation or curve. For sketching a curve, we use following steps - Step 1 : Firstly we Find out the ‘x’ and ‘y’ intercepts of the curve, for finding the ‘x’ intercept, we put ‘y’ equals to 0 and for finding the ‘y’ intercept, we put ‘x’ equals to 0 means If y = 0, then result = x If x = 0, then result = y Step 2: After intercepts, we find out symmetry of curve by putting ‘x’ as a (-x), in given equation where ‘x’ is an assumed variable - If in the given equation of the curve, power of the ‘x’ is even, then in this situation y-axis is an Axis of Symmetry in curve and if in the given equation of the curve, power of the ‘y’ is even, then in this situation x-axis is an axis of symmetry in curve and if the sum of the degree of ‘x’ and ‘y’ is even or odd, then in this situation, the curve is symmetric about the origin, where origin is called center of the curve. Step 3: After completion of above two steps, we calculate the limits of ‘x’ and ‘y’. Step 4: After above three steps, we check that curve is passing through origin or not and if the curve passes through the origin, then we calculate Tangent lines. For calculation of tangent lines, we solve lowest order term from the equation and remove all other terms of equation in algebraic curves. Step 5: Next we solve highest order term of equation and calculate the points where the curve meets the line at infinity in algebraic curve. Step 6: After all these steps, we calculate asymptotes of curve and also calculate where the asymptotes of the curve intersect the curve, means we calculate from which side the curve approaches to the asymptotes. In above 6 steps, we discussed all the processes, but we have not discussed, what are asymptotes of curve and how we sketch asymptotes of a curve. Knowledge of asymptotes is important for sketching a curve, so we will discuss asymptotes of a curve: An Asymptote of a curve is a line such that the distance between the curve and the line approaches zero as they tend to infinity. The word asymptote is derived from the Greek word asymptoto which has the meaning “not falling together” means any line which doesn’t intersects the curve is called as an asymptote of curve. Apollonian of Persia introduced this term. Three kinds of asymptotes are used in mathematics named as a horizontal asymptotes, vertical asymptotes and oblique asymptotes. Horizontal asymptotes of a function are the horizontal lines such that the graph of the function approaches to +∞ or - ∞. Vertical asymptotes are vertical lines according to which the function grows without any bound. The asymptotes are generally encountered in study of calculus and for the curve y = f(x). Horizontal asymptotes are defined as, f( x ) = r or, [x→-∞ ] f( x ) = p, y = p is called the Horizontal Asymptote of graph of function ‘f’. Vertical asymptote of a function is defined as, [x →a+] f ( x ) = ± ∞ or, [x →a-] f ( x ) = ± ∞, Where ‘a’ is a real number and the vertical line ‘x = a’ is called the Vertical Asymptote. Other asymptotes can be defined as [f ( x )] – ( mx + b )] = 0 or, [ f (x )] – ( mx + b )] = 0, The line y = mx + b is an asymptote of the graph. Here if m ≠ 0; y = mx + b is called an oblique asymptote. If f( -x ) = f( x) then function ‘f’ is said to be symmetric about the y axis, If f( - x) = - f ( x ) then the function ‘f’ is symmetric about the origin. This is all about asymptotes of a curve. Now we take an example which describes how we use above 6 steps - Example 1: By using above six steps sketch the curve of following function: f(x) = x /(9 - x Solution: We use above six steps to sketch curve of function f(x) = x /(9 – x ) : Step 1: First of all we evaluate ‘x’ and ‘y’ intercepts - For ‘x’ intercept, we put ‘y’ equals to 0 Means y = x /(9 – x ) = 0 = > x /(9 – x ) = 0 = > x = 0, For ‘y’ intercept, we put ‘x’ equals to 0 Means y = x /(9 – x ) = > 0/(9 – 0) = > y = 0, So, x =0, and y =0 are intercepts. Step 2: Now, we will check the symmetry of function - We put ‘-x’ as ‘x’ in above function, then, f(-x) = (-x) /(9 – (-x) ) = -x /(9 – x ) = -f(x), Means f(-x) + f(x) = 0, If Function is odd than this function is symmetric about origin. Step 3: Now we evaluate the Domain of given function - (9 – x ) = 0 = > x = +/-3, Means domain of function is x = (-∞, -3)∪(-3, 3)∪(3, ∞), Step 4: Now we calculate the asymptotes of given function, vertical asymptotes are at -3 and at 3. Step 5: Now we sketch the curve of the above equation by using all these things, which we have calculated in above steps. This is an example which shows what the basic steps for evaluating a curve are? Now we will discuss applications of curve sketching: Curve sketching technique is useful for finding out the maximum and minimum value of curve: We use following steps for finding maximum and minimum value by curve sketching technique - Step 1: First of all we will assume variable as a curve like y = x + 5. Step 2: After assuming the variable, we will find derivative of that variable with respect to particular variable, like differentiation of ‘y’ with respect to ‘x’. Step 3: Now we calculate critical points of that variable, by putting differential equation equals to 0 like (dy/dx) = 0 gives x = a and x = b. Step 4: After finding critical points, we will calculate second order derivative for finding that critical Point is local maximum point or local minimum point. Like we will calculate (d ) and putting critical point values and if d gives positive sign, then that critical point is called as a local minima point and if after putting the value of critical point, d gives negative sign, then that critical point is called as local maxima point means - If at x = a, (d [x = a ] < 0, then x = a is a local maxima point. If at x = b, (d [x = b ] > 0, then x = b is a local minima point. If at x = c, (d [x = c ] = 0, then x = c is a point of inflexion. Step 5 : Next, we will sketch a curve of the given equation, local maxima and local minima makes this easier because at local minima, curve moves in upward direction and at local maxima, curve moves in downward direction and at point of inflexion, its center point moves between the upward and downward side of curve. We take an example which defines the above process - Example: Find the local maxima and local minima of following curve, + 2x + 1? Solution: We use following steps to calculate local maxima and local minima - Step 1: First of all we will assume a variable ‘y’ as a curve, means y = x + 2x + 1 Step 2: Next, we will differentiate ‘y’ with respect to ‘x’- dy/dx = d/dx (x + 2x + 1) = 2x + 2, Step 3: Next, we will calculate the critical points of the given curve- Put dy/dx = 0, = > 2x + 2 = 0, = > x = -1. Step 4: Now, we will again differentiate dy/dx with respect to x means, we will calculate second order derivative - = d/dx(dy/dx) = d/dx(2x + 2) = 2, which is a positive number. So, we can say that given critical point of above curve is a local minima because at x = -1, (d [x = -1 ] > 0, then x = -1 is a local minima point. Step 5: When we sketch a curve of above equation, it makes a sharp point at local minima point towards upside. This is basic and best application of curve sketching.
{"url":"http://math.tutorcircle.com/calculus/application-of-differentiation.html","timestamp":"2014-04-20T18:23:38Z","content_type":null,"content_length":"53250","record_id":"<urn:uuid:769856d4-aa19-496f-b52b-2896330d1993>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Rank The rank of a matrix is the number of linearly independent rows or columns. Using this definition, the rank can be calculated using the Gaussian elimination method. It can also be said that the rank is the order of the largest nonzero square submatrix. Using this definition, the rank can be calculated using determinants. The rank of a matrix is symbolized as: rank(A) or r(A). Calculating the Rank of a Matrix by the Gaussian Elimination Method A line can be discarded if: • All the coefficients are zeros. • There are two equal lines. • A line is proportional to another. • A line is a linear combination of others. r[3] = 2 · r[1] r[4] is zero r[5] = 2r[2] + r[1] r(A) = 2. In general, eliminate the maximum possible number of lines, and the rank is the number of nonzero rows. r[2] = r[2] − 3r[1] r[3]= r[3] − 2r[1] Therefore r(A) = 3. Calculate the rank of the following matrix: r[1] − 2 r[2] r[3] − 3 r[2] r[3] + 2 r[1] Therefore, r(A) =2. Calculating the Rank of a Matrix for Determimants 1. A line can eliminated a if: All the coefficients are zeros. There are two equal lines. A line is proportional to another. A line is a linear combination of others. The third column can be deleted because it is a linear combination of the first two: c[3 ] = c[1] + c[2] 2. Check to see if the rank is 1, for it must be satisfied that the element of the matrix is not zero and therefore its determinant is not zero. 3. The matrix will have a rank of 2 if there is a square submatrix of order 2 and its determinant is not zero. 4. The matrix will have a rank of 3 if there is a square submatrix of order 3 and its determinant is not zero. As all the determinants of the submatrices are zero, it does not have a rank of 3, therefore r(B) = 2. If the matrix had a rank of 3 and there was a submatrix of order 4, whose determinant was not zero, it would have had a rank of 4. In the same way as shown above, check to see if there is a rank greater than 4. 1.Calculate the rank of the matrix: r(B) = 4 2. Calculate the rank of the matrix: Remove the third column as it is zero, the fourth because it is proportional to the first and the fifth because it is the linear combination of the first and second: c[5] = −2 · c[1] + c[2] r(C) = 2
{"url":"http://www.ditutor.com/matrix/matrix_rank.html","timestamp":"2014-04-20T18:44:41Z","content_type":null,"content_length":"20996","record_id":"<urn:uuid:faa71c72-bd88-4f59-bfe3-a55a12f82764>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
[NumPy-Tickets] [NumPy] #1701: NumPy dtype arithmetic is the opposite of Python type arithmetic! NumPy Trac numpy-tickets@scipy.... Sat Dec 25 16:59:56 CST 2010 #1701: NumPy dtype arithmetic is the opposite of Python type arithmetic! Reporter: pv00 | Owner: somebody Type: defect | Status: new Priority: normal | Milestone: 2.0.0 Component: numpy.core | Version: 1.5.0 Keywords: dtype | NumPy dtype arithmetic is the opposite of Python type arithmetic! The operators "and" and "or" switch roles: <type 'int'> or <type 'float'> = <type 'int'> <type 'int'> and <type 'float'> = <type 'float'> int64 or float64 = float64 int64 and float64 = int64 This will be very confusing to users at large. Can we make the conventions agree in NumPy 2.0? More details: I was happy to find that NumPy exposed the numerical datatype promotion rules such as int + float = float by means of the logical operator or : dtype('int64'), ' or ', dtype('float64'), ' = ', dtype('float64') But then I was shocked to find that Python had the I propose that there be a negotiation between the NumPy developers and the Python developers, and that a consistent convention be set. Right now I think it's undocumented (or not very publically documented) behavior, and it would be better to make a change earlier, and fix a relatively small base of code, rather than carry it along forever, a potential source of confusion when code is written and whenever it is debugged. Ticket URL: <http://projects.scipy.org/numpy/ticket/1701> NumPy <http://projects.scipy.org/numpy> My example project More information about the NumPy-Tickets mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-tickets/2010-December/004261.html","timestamp":"2014-04-20T16:02:17Z","content_type":null,"content_length":"4909","record_id":"<urn:uuid:1a974b87-614f-4185-a64b-06d392b5369d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00439-ip-10-147-4-33.ec2.internal.warc.gz"}