On Balls and Bins with Deletions — papers citing this work (results 1–10 of 13)

Journal of the ACM, 1999. Cited by 82 (4 self). This paper deals with balls and bins processes related to randomized load balancing, dynamic resource allocation, and hashing. Suppose n balls have to be assigned to n bins, where each ball has to be placed without knowledge about the distribution of previously placed balls. The goal is to achieve an allocation that is as even as possible, so that no bin gets many more balls than the average. A well-known and good solution for this problem is to choose d possible locations for each ball at random, to look into each of these bins, and to place the ball into the least full among them. This class of algorithms has been investigated intensively in the past, but almost all previous analyses assume that the d locations for each ball are chosen uniformly and independently at random from the set of all bins. We investigate whether a non-uniform and possibly dependent choice of the d locations ...

2006. Cited by 58 (7 self). We investigate balls-into-bins processes allocating m balls into n bins based on the multiple-choice paradigm. In the classical single-choice variant each ball is placed into a bin selected uniformly at random. In a multiple-choice process each ball can be placed into one out of d ≥ 2 randomly selected bins. It is known that in many scenarios having more than one choice for each ball can improve the load balance significantly. Formal analyses of this phenomenon prior to this work considered mostly the lightly loaded case, that is, when m ≈ n. In this paper we present the first tight analysis in the heavily loaded case, that is, when m ≫ n rather than m ≈ n. The best previously known results for multiple-choice processes in the heavily loaded case were obtained by majorization with the single-choice process, which yields an upper bound on the maximum bin load of m/n + O(√(m ln n / n)) with high probability. We show, however, that the multiple-choice processes are fundamentally different from the single-choice variant in that they have "short memory". The great consequence of this property is that the deviation of the multiple-choice processes from the optimal allocation (that is, the allocation in which each bin has either ⌊m/n⌋ or ⌈m/n⌉ balls) does not increase with the number of balls, as it does in the case of the single-choice process. In particular, we investigate the allocation obtained by two different multiple-choice allocation schemes.

In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), 2001. Cited by 24 (1 self). In this paper we analyse a very simple dynamic work-stealing algorithm. In the work-generation model, there are n (work) generators. A generator-allocation function is simply a function from the n generators to the n processors. We consider a fixed, but arbitrary, distribution D over generator-allocation functions. During each time step of our process, a generator-allocation function h is chosen from D, and the generators are allocated to the processors according to h. Each generator may then generate a unit-time task, which it inserts into the queue of its host processor; it generates such a task independently with probability λ. After the new tasks are generated, each processor removes one task from its queue and services it. For many choices of D, the work-generation model allows the load to become arbitrarily imbalanced, even when λ < 1. For example, D could be the point distribution containing a single function h that allocates all of the generators to just one processor. For this choice of D, the chosen processor receives around λn units of work at each step and services one. The natural work-stealing algorithm that we analyse is widely used in practical applications and works as follows: during each time step, each empty ...

In Proceedings of the 7th International Workshop on Randomization and Approximation Techniques in Computer Science, Princeton, NJ, Lecture Notes in Computer Science 2764, 2003. Cited by 19 (1 self). We investigate randomized processes underlying load balancing based on the multiple-choice paradigm: m balls have to be placed in n bins, and each ball can be placed into one out of 2 randomly selected bins. The aim is to distribute the balls as evenly as possible among the bins. Previously, it was known that a simple process that places the balls one by one into the least loaded bin can achieve a maximum load of m/n + Θ(log log n) with high probability. Furthermore, it was known that it is possible to achieve (with high probability) a maximum load of at most ⌈m/n⌉ + 1 using maximum-flow computations. In this paper, we extend these results in several respects. First of all, we show that if m ≥ cn log n for some sufficiently large c, then a perfect distribution of balls among the bins can be achieved (i.e., the maximum load is ⌈m/n⌉) with high probability. The bound for m is essentially optimal, because it is known that if m ≤ c′n log n for some sufficiently small constant c′, the best possible maximum load that can be achieved is ⌈m/n⌉ + 1 with high probability. Next, we analyze a simple, randomized load-balancing process based on a local search paradigm. Our first result here is that this process always converges to a best possible load distribution. Then, we study the convergence speed of the process. We show that if m is sufficiently large compared to n, then no matter with which ball distribution the system starts, if the imbalance is ∆, then the process needs only ∆ · n^O(1) steps to reach a perfect distribution, with high probability. We also prove a similar result for m ≈ n, and show that if m = O(n log n / log log n), then an optimal load distribution (which has a maximum load of ⌈m/n⌉ + 1) is reached by the random process after a polynomial number of steps, with high probability.

SODA, ACM-SIAM. Cited by 16 (3 self). The study of hashing is closely related to the analysis of balls and bins. Azar et al. [1] showed that if, instead of using a single hash function, we randomly hash a ball into two bins and place it in the smaller of the two, then this dramatically lowers the maximum load on bins. This leads to the concept of two-way hashing, where the largest bucket contains O(log log n) balls with high probability. The hash lookup will now search in both the buckets an item hashes to. Since an item may be placed in one of two buckets, we could potentially move an item after it has been initially placed to reduce the maximum load. Using this fact, we present a simple, practical hashing scheme that maintains a maximum load of 2, with high probability, while achieving high memory utilization. In fact, with n buckets, even if the space for two items is pre-allocated per bucket, as may be desirable in hardware implementations, more than n items can be stored, giving high memory utilization. Assuming truly random hash functions, we prove the following properties for our hashing scheme. • Each lookup takes two random memory accesses, and reads at most two items per access. • Each insert takes O(log n) time and up to log log n + O(1) moves, with high probability, and constant time in expectation. • The scheme maintains 83.75% memory utilization, without requiring dynamic allocation during inserts. We also analyze the trade-off between the number of moves performed during inserts and the maximum load on a bucket: by performing at most h moves, we can maintain a smaller maximum load, so even by performing one move we achieve a better bound than by performing no moves at all.

2000. Cited by 13 (4 self). Random redundant allocation of data to parallel disk arrays can be exploited to achieve low access delays. New algorithms are proposed which improve on the previously known shortest-queue algorithm by systematically exploiting the fact that scheduling decisions can be deferred until a block access is actually started on a disk. These algorithms are also generalized to coding schemes with low redundancy. Using extensive experiments, practically important quantities are measured which have so far eluded analytical treatment: the delay distribution when a stream of requests approaches the limit of the system capacity, the system efficiency for parallel disk applications with bounded prefetching buffers, and the combination of both for mixed traffic. A further step towards practice is taken by outlining the system design for α, an automatically load-balanced parallel hard-disk array.

In Algorithms – ESA 2003, LNCS Vol. 2832, Springer, 2003. Cited by 10 (8 self). We present a new finger search tree with O(1) worst-case update time and O(log log d) expected search time with high probability in the Random Access Machine (RAM) model of computation, for a large class of input distributions. The parameter d represents the number of elements (the distance) between the search element and an element pointed to by a finger, in a finger search tree that stores n elements. For the needs of the analysis we model the updates by a "balls and bins" combinatorial game that is interesting in its own right, as it involves insertions and deletions of balls according to an unknown distribution.

In Proc. 7th Symposium on Discrete Algorithms (SODA), 2006. Cited by 9 (2 self). It is well known that if n balls are inserted into n bins then, with high probability, the bin with maximum load contains (1 + o(1)) log n / log log n balls. Azar, Broder, Karlin, and Upfal [1] showed that if, instead of choosing one bin, d ≥ 2 bins are chosen at random and the ball is inserted into the least loaded of the d bins, the maximum load reduces drastically to log log n / log d + O(1). In this paper, we study the two-choice balls and bins process when balls are not allowed to choose any two random bins, but only bins that are connected by an edge in an underlying graph. We show that for n balls and n bins, if the graph is almost regular with degree n^ε, where ε is not too small, the previous bounds on the maximum load continue to hold. Precisely, the maximum load is ...

Cited by 4 (0 self). Due to the increased usage of NAT boxes and firewalls, it has become harder for applications to establish direct connections seamlessly between two end-hosts. A recently adopted proposal to mitigate this problem is to use relay nodes: end-hosts that act as intermediary points to bridge connections. Efficiently selecting a relay node is not a trivial problem, especially in a large-scale unstructured overlay system where end-hosts are heterogeneous. In such an environment, heterogeneity among the relay nodes comes from the inherent differences in their capacities and from the way overlay networks are constructed. Despite this fact, good relay selection algorithms should effectively balance the aggregate load across the set of relay nodes. In this paper, we address this problem using algorithms based on the two-random-choices method. We first prove that the classic load-based algorithm can effectively balance the load even when relays are heterogeneous, and that its performance depends directly on relay heterogeneity. Second, we propose a utilization-based random-choice algorithm to distribute load in order to balance relay utilization. Numerical evaluations through simulations illustrate the effectiveness of this algorithm, indicating that it might also yield provable performance (which we conjecture). Finally, we support our theoretical findings through simulations of various large-scale scenarios with realistic relay heterogeneity.

2004. Cited by 1 (0 self). Numerous proposals exist for load balancing in peer-to-peer (p2p) networks. Some focus on namespace balancing, making the distance between nodes as uniform as possible. This technique works well under ideal conditions, but not under those found empirically. Instead, researchers have found heavy-tailed query distributions (skew), high rates of node join and leave (churn), and wide variation in node network and storage capacity (heterogeneity). Other approaches tackle these less-than-ideal conditions, but give up important security properties. We propose an algorithm that both facilitates good performance and does not dilute security. Our algorithm, kChoices, achieves load balance by greedily matching nodes' target workloads with actual applied workloads through limited sampling, and limits any fundamental decrease in security by basing each node's set of potential identifiers on a single certificate. Our algorithm compares favorably to four others in trace-driven simulations. We have implemented our algorithm and found that it improved aggregate throughput by 20% in a widely heterogeneous system in our experiments.
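The multiple-choice effect running through these abstracts is easy to observe empirically. The following toy experiment (function and parameter names are illustrative, not from any of the cited papers) places n balls into n bins with one random choice versus two random choices and compares the maximum loads:

```python
import random

def max_load(n, d, seed=0):
    """Place n balls into n bins; each ball goes to the least loaded
    of d bins sampled uniformly at random. Returns the maximum load."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        # Choose d candidate bins and pick the currently least loaded one.
        choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda i: bins[i])
        bins[best] += 1
    return max(bins)

if __name__ == "__main__":
    n = 100_000
    print("single choice (d=1):", max_load(n, 1))
    print("two choices   (d=2):", max_load(n, 2))
```

With n around 10^5, the single-choice maximum load is typically in the range predicted by (1 + o(1)) log n / log log n, while two choices bring it down to a small value near log log n / log 2, illustrating the "power of two choices" these papers build on.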
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=70356","timestamp":"2014-04-18T19:17:16Z","content_type":null,"content_length":"43092","record_id":"<urn:uuid:b60ebcd4-9cc2-43db-8021-cc9b1087f8f6>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
The encyclopedic entry on first-order logic

First-order logic (FOL) is a formal deductive system used in mathematics, philosophy, linguistics, and computer science. It goes by many names, including first-order predicate calculus, the lower predicate calculus, the language of first-order logic, and predicate logic. Unlike natural languages such as English, FOL uses a wholly unambiguous formal language interpreted by mathematical structures. FOL is a system of deduction that extends propositional logic by allowing quantification over individuals of a given domain of discourse. For example, it can be stated in FOL that "every individual has the property P".

While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. Take for example the sentences "Socrates is a man" and "Plato is a man". In propositional logic these are two unrelated propositions, denoted for example by p and q. In first-order logic, however, both sentences are connected by the same property: Man(x), where Man(x) means that x is a man. When x = Socrates we get the first proposition, p, and when x = Plato we get the second, q. Such a construction allows for a much more powerful logic when quantifiers are introduced, such as "for every x ...", as in "for every x, if Man(x), then ...". Without quantifiers, every valid argument in FOL is valid in propositional logic, and vice versa.

A first-order theory consists of a set of axioms (usually finite or recursively enumerable) and the statements deducible from them given the underlying deducibility relation. Usually what is meant by "first-order theory" is some set of axioms together with those of a complete (and sound) axiomatization of first-order logic, closed under the rules of FOL. (Any such system of FOL will give rise to the same abstract deducibility relation, so we need not have a fixed axiomatic system in mind.)
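As a toy illustration (the set of known men below is an assumption made up for the example, not part of the logic), the two propositions p and q can both be generated from the single predicate Man(x):

```python
# One predicate, many propositions: Man(x) yields a different
# proposition for each individual x it is applied to.
def man(x):
    # Assumed domain facts, for illustration only.
    return x in {"Socrates", "Plato"}

p = man("Socrates")  # the proposition "Socrates is a man"
q = man("Plato")     # the proposition "Plato is a man"
```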
A first-order language has sufficient expressive power to formalize two important mathematical theories: ZFC set theory and (first-order) Peano arithmetic. A first-order language cannot, however, categorically express the notion of countability, even though it is expressible in the first-order theory ZFC under the intended interpretation of the symbolism of ZFC. Such ideas can be expressed categorically with second-order logic.

Why is first-order logic needed?

Propositional logic is not adequate for formalizing valid arguments that rely on the internal structure of the propositions involved. To see this, consider the valid syllogistic argument:
• All men are mortal
• Socrates is a man
• Therefore, Socrates is mortal
which upon translation into propositional logic yields A, B, $\therefore$ C (taking $\therefore$ to mean "therefore"). According to propositional logic, this translation is invalid: propositional logic validates arguments according to their structure, and nothing in the structure of this translated argument (C follows from A and B, for arbitrary A, B, C) suggests that it is valid. A translation that preserves the intuitive (and formal) validity of the argument must take into consideration the deeper structure of propositions, such as the essential notions of predication and quantification.

Propositional logic deals only with truth-functional validity: for an argument to be valid, any assignment of truth values to the variables of the argument must make either the conclusion true or at least one of the premises false. Clearly we may (uniformly) assign truth values to the variables of the above argument such that A and B are both true but C is false; hence the argument is truth-functionally invalid. On the other hand, it is impossible to (uniformly) assign truth values to the argument "A follows from (A and B)" such that (A and B) is true (hence A is true and B is true) and A is false.
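The truth-functional invalidity of the propositional translation can be checked mechanically. The sketch below (a minimal brute-force check, with A, B, C as in the text) searches all truth assignments for one that makes both premises true and the conclusion false:

```python
from itertools import product

# Propositional translation of the syllogism: premises A, B; conclusion C.
# The argument is truth-functionally valid iff no assignment makes
# both premises true and the conclusion false.
counterexamples = [
    (a, b, c)
    for a, b, c in product([True, False], repeat=3)
    if a and b and not c
]

print(counterexamples)  # [(True, True, False)]: A, B true but C false
```

The single counterexample A = true, B = true, C = false is exactly the assignment described in the text, confirming that the translated argument is truth-functionally invalid.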
In contrast, this argument can be easily translated into first-order logic:
• $\forall x\,(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x))$
• $\mathit{Man}(\mathit{Socrates})$
• $\therefore\ \mathit{Mortal}(\mathit{Socrates})$
(where "$\forall x$" means "for all x", "$\rightarrow$" means "implies", $\mathit{Man}(\mathit{Socrates})$ means "Socrates is a man", and $\mathit{Mortal}(\mathit{Socrates})$ means "Socrates is mortal"). In plain English, this states that
• for all x, if x is a man then x is mortal
• Socrates is a man
• therefore Socrates is mortal

FOL can also express the existence of something ($\exists$), as well as predicates ("functions" that are true or false) with more than one parameter. For example, "there is someone who can be fooled every time" can be expressed as
$\exists x\,(\mathit{Person}(x) \wedge \forall y\,(\mathit{time}(y) \rightarrow \mathit{Canfool}(x,y)))$
where "$\exists x$" means "there exists (an) x", "$\wedge$" means "and", and $\mathit{Canfool}(x,y)$ means "(person) x can be fooled (at time) y".

Variables in first-order logic and in propositional logic

Every propositional formula can be translated into an essentially equivalent first-order formula by replacing each propositional variable with a zero-arity predicate. For example, the formula $x \vee (y \wedge \neg z)$ can be translated into $P() \vee (Q() \wedge \neg R())$, where P, Q and R are predicates of arity zero, "$\vee$" means "or", and "$\neg$" means "negation". While variables in propositional logic are used to represent propositions that can be true or false, variables in first-order logic represent objects the formula is referring to.
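The first-order version, in contrast, is formally valid; as a sketch, the one-step derivation can be checked in Lean (the names Person, Socrates, Man, Mortal are declared here just for the example):

```lean
-- A domain of discourse with one named individual.
variable (Person : Type) (Socrates : Person)
variable (Man Mortal : Person → Prop)

-- ∀x (Man x → Mortal x), Man Socrates ⊢ Mortal Socrates:
-- instantiate the universal premise at Socrates, then apply it.
example (h1 : ∀ x, Man x → Mortal x) (h2 : Man Socrates) :
    Mortal Socrates :=
  h1 Socrates h2
```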
In the example above, the variable x in $\forall x\,(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x))$ is intended to indicate an arbitrary element of the human race, not a proposition that can be true or false.

Defining first-order logic

A predicate calculus consists of
• formation rules (i.e. recursive definitions for forming well-formed formulas),
• a proof theory, made of axioms and rules of inference, and
• a semantics, telling which interpretations of the symbols make a formula true.
The axioms considered here are logical axioms, which are part of classical FOL. It is important to note that FOL can be formalized in many equivalent ways; there is nothing canonical about the axioms and rules of inference given in this article. There are infinitely many equivalent formalizations, all of which yield the same theorems and non-theorems, and all of which have equal right to the title of first-order logic.

FOL is used as the basic "building block" for many mathematical theories. FOL provides several built-in rules, such as the axiom $\forall x\, P(x) \rightarrow \forall x\, P(x)$ (if P(x) is true for every x, then P(x) is true for every x). Additional non-logical axioms are added to produce specific first-order theories based on the axioms of classical FOL; such theories built on FOL are called classical first-order theories. One example of a classical first-order theory is Peano arithmetic, which adds the axiom $\forall x\, \exists y\, Q(x,y)$ (i.e. for every x there exists y such that y = x + 1, where Q(x,y) is interpreted as "y = x + 1"). This additional axiom is a non-logical axiom; it is not part of FOL but is instead an axiom of the theory (an axiom of arithmetic rather than of logic). Axioms of the latter kind are also called axioms of first-order theories.
The axioms of first-order theories are not regarded as truths of logic per se, but rather as truths of the particular theory, which usually has associated with it an intended interpretation of its non-logical symbols. (See an analogous idea at logical versus non-logical symbols.) Thus, the proposition $\forall x\, \exists y\, Q(x,y)$ is an axiom (hence is true) in the theory of Peano arithmetic, with the relation Q(x,y) interpreted as "y = x + 1", but may be false in other theories or under another interpretation of the relation Q(x,y). Classical FOL does not have associated with it an intended interpretation of its non-logical vocabulary (except arguably a symbol denoting identity, depending on whether one regards such a symbol as logical). Classical set theory is another example of a first-order theory (a theory built on FOL).

Syntax of first-order logic

The terms and formulas of first-order logic are strings of symbols. As for all formal languages, the nature of the symbols themselves is outside the scope of formal logic; it is best to think of them as letters and punctuation symbols. The alphabet (the set of all symbols of the language) is divided into the non-logical symbols and the logical symbols. The latter are the same, and have the same meaning, for all applications.

Non-logical symbols

The non-logical symbols represent predicates (relations), functions and constants on the domain. For a long time it was standard practice to use a fixed, infinite set of non-logical symbols for all purposes. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore it has become necessary to name the set of all non-logical symbols used in a particular application; this set is now known as the signature.

Traditional approach

The traditional approach is to have only one, infinite, set of non-logical symbols (one signature) for all applications. Consequently, under the traditional approach there is only one language of first-order logic.
This approach is still common, especially in philosophically oriented books.
1. For every integer n ≥ 0 there are the n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n we have an infinite supply of them: P^n_0, P^n_1, P^n_2, P^n_3, …
2. For every integer n ≥ 0 there are infinitely many n-ary function symbols: f^n_0, f^n_1, f^n_2, f^n_3, …

Application-specific signatures

In modern mathematical treatments of first-order logic, the signature varies with the application. Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols: the signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur, for example, in modern proofs of (the upward part of) the Löwenheim–Skolem theorem.

Every non-logical symbol is of one of the following types.
1. A set of predicate symbols (or relation symbols), each with some valence (or arity, the number of its arguments) ≥ 0, often denoted by uppercase letters P, Q, R, … .
□ Relations of valence 0 can be identified with propositional variables; for example, P can stand for any statement.
□ P(x) is a predicate variable of valence 1. It can stand for "x is a man", for example.
□ Q(x,y) is a predicate variable of valence 2. It can stand for "x is greater than y" in arithmetic, or "x is the father of y", for example.
□ By using functions (see below), it is possible to dispense with all predicate variables of valence larger than one. For example, "x > y" (a predicate of valence 2, of the type Q(x,y)) can be replaced by a predicate of valence 1 about the ordered pair (x, y).
2. A set of function symbols, each of some valence ≥ 0, often denoted by lowercase letters f, g, h, … .
□ Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet: a, b, c, … .
□ Examples: f(x) may stand for "the father of x". In arithmetic, it may stand for "−x". In set theory, it may stand for "the power set of x". In arithmetic, f(x,y) may stand for "x + y". In set theory, it may stand for "the union of x and y". The symbol a may stand for Socrates; in arithmetic, it may stand for 0; in set theory, such a constant may stand for the empty set.
□ One can in principle dispense entirely with functions of arity > 2 and predicates of arity > 1 if there is a function symbol of arity 2 representing an ordered pair (or predicate symbols of arity 2 representing the projection relations of an ordered pair). The pair or projections need to satisfy the natural axioms.
□ One can in principle dispense entirely with functions and constants. For example, instead of using a constant $0$ one may use a predicate $0(x)$ (interpreted as $x = 0$), and replace every predicate such as $P(0,y)$ with $\forall x\,(0(x) \rightarrow P(x,y))$. A function such as $f(x_1, x_2, \ldots, x_n)$ will similarly be replaced by a predicate $F(x_1, x_2, \ldots, x_n, y)$ (interpreted as $y = f(x_1, x_2, \ldots, x_n)$).
We can recover the traditional approach by considering the signature {P^0_0, P^0_1, P^0_2, …, P^1_0, P^1_1, P^1_2, …, P^2_0, P^2_1, P^2_2, …, …, f^0_0, f^0_1, f^0_2, …, f^1_0, f^1_1, f^1_2, …, f^2_0, f^2_1, f^2_2, …, …}.

Logical symbols

The logical symbols are the same for all applications; they comprise variables, symbols for the logical connectives and quantifiers, parentheses, and (usually) an identity symbol.
1. An infinite set of variables, often denoted by lowercase letters at the end of the alphabet: x, y, z, … .
2. Symbols denoting logical operators (or connectives): $\neg$ (negation), $\wedge$ (conjunction), $\vee$ (disjunction), and $\rightarrow$ (implication).
3.
Symbols denoting quantifiers: $\forall$ (universal quantification, typically read as "for all") and $\exists$ (existential quantification, typically read as "there exists").
4. Left and right parentheses: ( and ). There are many different conventions about where to put parentheses; for example, one might write $\forall x$ or $(\forall x)$. Sometimes one uses colons or full stops instead of parentheses to make formulas unambiguous. One interesting but rather unusual convention is "Polish notation", where one omits all parentheses and writes $\rightarrow$, $\wedge$, and so on in front of their arguments rather than between them. Polish notation is compact and elegant, but rare because it is hard for humans to read.
5. An identity symbol (or equality symbol): =. Syntactically it behaves like a binary predicate.

Variations

First-order logic as described here is often called first-order logic with identity, because of the presence of an identity symbol = with special semantics. In first-order logic without identity this symbol is omitted. There are numerous minor variations that may define additional logical symbols:
• Sometimes the truth constants T for "true" and F for "false" are included. Without any such logical operators of valence 0, it is not possible to express these two constants without using quantifiers.
• Sometimes the Sheffer stroke (P | Q, a.k.a. NAND) is included as a logical symbol.
• The exclusive-or operator "xor" is another logical connective that can occur as a logical symbol.
• Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as $\exists!\, x\, P(x)$. This notation, called uniqueness quantification, may be taken to abbreviate a formula such as $\exists x\,(P(x) \wedge \forall y\,(P(y) \rightarrow x = y))$.
Not all logical symbols as defined above need occur.
For example:
• Since $(\exists x)\varphi$ can be expressed as $\neg((\forall x)(\neg\varphi))$, and $(\forall x)\varphi$ can be expressed as $\neg((\exists x)(\neg\varphi))$, one of the two quantifiers $\exists$ and $\forall$ can be dropped.
• Since $\varphi \vee \psi$ can be expressed as $\neg((\neg\varphi) \wedge (\neg\psi))$, and $\varphi \wedge \psi$ can be expressed as $\neg((\neg\varphi) \vee (\neg\psi))$, either $\vee$ or $\wedge$ can be dropped. In other words, it is sufficient to have $\neg, \vee$ or $\neg, \wedge$ as the only logical connectives among the logical symbols.
• Similarly, it is sufficient to have $\neg, \rightarrow$, or just the Sheffer stroke, as the only logical connectives.
There are also some frequently used variants of notation:
• Some books and papers use the notation $\varphi \Rightarrow \psi$ for $\varphi \rightarrow \psi$. This is especially common in proof theory, where $\rightarrow$ is easily confused with the sequent arrow.
• ~φ is sometimes used for $\neg\varphi$, and φ & ψ for $\varphi \wedge \psi$.
• There is a wealth of alternative notations for quantifiers; e.g., $\forall x\, \varphi$ may be written as (x)φ. This latter notation is common in texts on recursion theory.

Formation rules

The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. The concept of a free variable is used to define the sentences as a subset of the formulas.

The set of terms is recursively defined by the following rules:
1. Any variable is a term.
2. Any expression f(t_1, …, t_n) of n arguments (where each argument t_i is a term and f is a function symbol of valence n) is a term.
3. Closure clause: nothing else is a term. For example, predicates are not terms.

The set of well-formed formulas (usually called wffs, or just formulas) is recursively defined by the following rules:
1. Simple and complex predicates: if P is a relation of valence n and a_1, …, a_n are terms, then P(a_1, …, a_n) is a well-formed formula.
If equality is considered part of logic, then (a[1] = a[2]) is a well-formed formula. All such formulas are said to be atomic.
2. Inductive clause I: If φ is a wff, then ¬φ is a wff.
3. Inductive clause II: If φ and ψ are wffs, then (φ → ψ) is a wff.
4. Inductive clause III: If φ is a wff and x is a variable, then ∀x φ is a wff.
5. Closure clause: Nothing else is a wff.
For example, ∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z))) is a well-formed formula if f is a function of valence 1, P a predicate of valence 1, and Q a predicate of valence 3. ∀x x → is not a well-formed formula. In computer science terminology, a formula implements a built-in "boolean" type, while a term implements all other types.

In mathematics, the language of ordered abelian groups has one constant 0, one unary function −, one binary function +, and one binary relation ≤. So:
• x, y are atomic terms
• +(x, y), +(x, +(y, −(z))) are terms, usually written as x + y, x + y − z
• =(+(x, y), 0), ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas, usually written as x + y = 0 and x + y − z ≤ x + y
• (∀x ∀y ≤(+(x, y), z)) → (∀x =(+(x, y), 0)) is a formula, usually written as (∀x ∀y x + y ≤ z) → (∀x x + y = 0).

Additional syntactic concepts

Free and bound variables

In a formula, a variable may occur free or bound. Intuitively, a variable is free in a formula if it is not quantified: in ∀y P(x, y), the variable x is free while y is bound.
1. Atomic formulas: If φ is an atomic formula, then x is free in φ if and only if x occurs in φ.
2. Inductive clause I: x is free in ¬φ if and only if x is free in φ.
3. Inductive clause II: x is free in (φ → ψ) if and only if x is free in either φ or ψ.
4. Inductive clause III: x is free in ∀y φ if and only if x is free in φ and x is a different symbol from y.
5.
Closure clause: x is bound in φ if and only if x occurs in φ and x is not free in φ.
For example, in ∀x ∀y (P(x) → Q(x, f(x), z)), x and y are bound variables, z is a free variable, and w is neither, because it does not occur in the formula.

Freeness and boundness can also be specialized to specific occurrences of variables in a formula. For example, in P(x) → ∀x Q(x), the first occurrence of x is free while the second is bound; in other words, the x in P(x) is free while the x in ∀x Q(x) is bound.

If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. This replacement results in a formula that logically follows from the original one, provided that no free variable of t becomes bound in the process. If some free variable of t would become bound, then to substitute t for x it is first necessary to rename the bound variables of φ so that they differ from the free variables of t.

To see why this condition is necessary, consider the formula φ given by ∀y y ≤ x ("x is maximal"). If t is a term without y as a free variable, then φ[t/x] just means that t is maximal. However, if t is y, the formula φ[y/x] is ∀y y ≤ y, which does not say that y is maximal. The problem is that the free variable y of t (= y) became bound when we substituted y for x in φ[y/x]. The intended replacement can be obtained by renaming the bound variable y of φ to something else, say z, so that the formula becomes ∀z z ≤ y. Forgetting this condition is a notorious cause of errors.

Proof theory

Inference rules

An inference rule is a function from sets of (well-formed) formulas, called premises, to sets of formulas called conclusions. In most well-known deductive systems, inference rules take a set of formulas to a single conclusion. (Notice this is true even in the case of most sequent calculi.)
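The inductive clauses for free variables can be made concrete with a small sketch. The tuple encoding below is our own illustration, not anything defined in the article; `free_vars` follows the clauses above, and the example is the formula ∀x ∀y (P(x) → Q(x, f(x), z)) discussed earlier.

```python
# Hypothetical mini-encoding of terms and formulas as nested tuples:
#   ("var", "x")                -- a variable, as a term
#   ("app", f, t1, ..., tn)    -- function application, as a term
#   ("pred", P, t1, ..., tn)   -- atomic formula
#   ("not", phi), ("imp", phi, psi), ("all", "x", phi)

def term_vars(t):
    """All variables occurring in a term."""
    if t[0] == "var":
        return {t[1]}
    return set().union(*(term_vars(a) for a in t[2:]))

def free_vars(phi):
    """Free variables of a formula, following the inductive clauses above."""
    tag = phi[0]
    if tag == "pred":            # atomic: every occurring variable is free
        return set().union(*(term_vars(t) for t in phi[2:]))
    if tag == "not":
        return free_vars(phi[1])
    if tag == "imp":
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag == "all":             # a quantifier binds its variable
        return free_vars(phi[2]) - {phi[1]}
    raise ValueError(tag)

# forall x forall y (P(x) -> Q(x, f(x), z)): z is the only free variable
example = ("all", "x", ("all", "y",
           ("imp", ("pred", "P", ("var", "x")),
                   ("pred", "Q", ("var", "x"),
                            ("app", "f", ("var", "x")),
                            ("var", "z")))))
print(free_vars(example))   # {'z'}
```

A capture-avoiding substitution would additionally have to check, before replacing x by t, that no variable in `term_vars(t)` is bound at the site of a free occurrence of x, as the ∀y y ≤ x example shows.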
Inference rules are used to prove theorems, which are formulas provable in, or members of, a theory. If the premises of an inference rule are theorems, then its conclusion is a theorem as well. In other words, inference rules are used to generate "new" theorems from "old" ones; they are theoremhood-preserving. Systems for generating theories are often called predicate calculi. These are described in a section below.

An important inference rule, modus ponens, states that if φ and φ → ψ are both theorems, then ψ is a theorem. This can be written as follows:
if T ⊢ φ and T ⊢ φ → ψ, then T ⊢ ψ,
where T ⊢ φ means that φ is provable in the theory T. There are deductive systems (known as Hilbert-style deductive systems) in which modus ponens is the sole rule of inference; in such systems, the lack of other inference rules is offset by an abundance of logical axiom schemes.

A second important inference rule is universal generalization. It can be stated as:
if T ⊢ φ, then T ⊢ ∀x φ,
which reads: if φ is a theorem, then "for every x, φ" is a theorem as well. The similar-looking schema φ → ∀x φ is not sound in general, although it does have valid instances, such as when x does not occur free in φ (see Generalization (logic)).

Axioms

Here follows a description of the axioms of first-order logic. As explained above, a given first-order theory has further, non-logical axioms. The following logical axioms characterize a predicate calculus for this article's example of first-order logic.

For any theory, it is of interest to know whether the set of axioms can be generated by an algorithm, or whether there is an algorithm which determines whether a well-formed formula is an axiom. If there is an algorithm to generate all axioms, then the set of axioms is said to be recursively enumerable.
If there is an algorithm which determines after a finite number of steps whether a formula is an axiom or not, then the set of axioms is said to be recursive or decidable. In that case, one may also construct an algorithm to generate all axioms: this algorithm simply builds all possible formulas one by one (with growing length), and for each formula it determines whether it is an axiom. Axioms of first-order logic are always decidable. However, in a first-order theory the non-logical axioms are not necessarily so.

Quantifier axioms

Quantifier axioms change according to how the vocabulary is defined, how the substitution procedure works, what the formation rules are, and which inference rules are used. Here follows a specific example of these axioms:
• PRED-1: (∀x Z(x)) → Z(t)
• PRED-2: Z(t) → (∃x Z(x))
• PRED-3: (∀x (W → Z(x))) → (W → ∀x Z(x))
• PRED-4: (∀x (Z(x) → W)) → (∃x Z(x) → W)
These are actually axiom schemata: the expression W stands for any wff in which x is not free, and the expression Z(x) stands for any wff, with the additional convention that Z(t) stands for the result of substituting the term t for x in Z(x). Thus this is a recursive set of axioms. Another axiom, Z → ∀x Z, for Z in which x does not occur free, is sometimes added.

Equality and its axioms

There are several different conventions for using equality (or identity) in first-order logic. This section summarizes the main ones. The various conventions all give essentially the same results with about the same amount of work, and differ mainly in terminology.
• The most common convention for equality is to include the equality symbol as a primitive logical symbol, and to add the axioms for equality to the axioms for first-order logic. The equality axioms are:
x = x
x = y → f(...,x,...) = f(...,y,...) for any function f
x = y → (P(...,x,...) → P(...,y,...)) for any relation P (including the equality relation itself)
These, too, are axiom schemata: they define an algorithm which decides whether a given formula is an axiom. Thus this is a recursive set of axioms.
• The next most common convention is to include the equality symbol as one of the relations of a theory, and to add the equality axioms to the axioms of the theory. In practice this is almost indistinguishable from the previous convention, except in the unusual case of theories with no notion of equality. The axioms are the same; the only difference is whether one calls some of them logical axioms or axioms of the theory.
• In theories with no functions and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if every relation is unchanged by changing s to t in any argument. For example, in set theory with one relation ∈, we may define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, ∀x ∀y [∀z (z ∈ x ↔ z ∈ y) → x = y], by ∀x ∀y [∀z (z ∈ x ↔ z ∈ y) → ∀z (x ∈ z ↔ y ∈ z)], i.e. if x and y have the same elements, then they belong to the same sets.
• In some theories it is possible to give ad hoc definitions of equality.
For example, in a theory of partial orders with one relation ≤, we could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.

Interpretation

In logic and mathematics, an interpretation (also mathematical interpretation, logico-mathematical interpretation, or commonly a model) gives meaning to an artificial or formal language by assigning a denotation to all non-logical constants in that language or in a sentence of that language. For a given formal language L, or a sentence Φ of L, an interpretation assigns a denotation to each non-logical constant occurring in L or Φ. To individual constants it assigns individuals (from some universe of discourse); to predicates of degree 1 it assigns properties (more precisely, sets); to predicates of degree 2 it assigns binary relations of individuals; to predicates of degree 3 it assigns ternary relations of individuals, and so on; and to sentential letters it assigns truth values.

More precisely, an interpretation of a formal language L, or of a sentence Φ of L, consists of a non-empty domain D (i.e. a non-empty set) as the universe of discourse, together with an assignment that associates with each n-ary operation or function symbol of L or of Φ an n-ary operation with respect to D (i.e. a function from D^n into D); with each n-ary predicate of L or of Φ an n-ary relation among elements of D; and (optionally) with some binary predicate I of L the identity relation among elements of D. In this way an interpretation provides meaning or semantic values to the terms or formulas of the language. The study of the interpretations of formal languages is called formal semantics.

In mathematical logic, an interpretation is a mathematical object that contains the necessary information for an interpretation in the former sense. The symbols used in a formal language include variables, logical constants, quantifiers and punctuation symbols, as well as the non-logical constants.
The interpretation of a sentence or language therefore depends on which non-logical constants it contains. Languages of the sentential (or propositional) calculus allow sentential symbols as non-logical constants. Languages of the first-order predicate calculus allow, in addition, predicate symbols and operation or function symbols.

A model is a pair ⟨D, I⟩, where D is a set of elements called the domain and I is an interpretation of the elements of a signature (functions and predicates):
• the domain D is a set of elements;
• the interpretation I is a function that assigns something to constants, functions and predicates:
□ each function symbol f of arity n is assigned a function I(f) from D^n to D
□ each predicate symbol P of arity n is assigned a relation I(P) over D^n or, equivalently, a function from D^n to {true, false}

The following is an intuitive explanation of these elements. The domain D is a set of "objects" of some kind. Intuitively, a first-order formula is a statement about objects; for example, ∃x P(x) states the existence of an object x such that the predicate P is true of it. The domain is the set of objects under consideration. As an example, one can take D to be the set of integers.

The model also includes an interpretation of the signature. Since the elements of the signature are function symbols and predicate symbols, the interpretation gives the "value" of functions and predicates. The interpretation of a function symbol is a function. For example, the function symbol f(_,_) of arity 2 can be interpreted as the function that gives the sum of its arguments; in other words, the symbol f is associated with the function I(f) of addition in this interpretation. In particular, the interpretation of a constant is a function from the one-element set D^0 to D, which can simply be identified with an object in D.
For example, an interpretation may assign the value I(c) = 10 to the constant c. The interpretation of a predicate of arity n is a set of n-tuples of elements of the domain. This means that, given an interpretation, a predicate, and n elements of the domain, one can tell whether the predicate is true of those elements according to the given interpretation. As an example, an interpretation I(P) of a predicate P of arity two may be the set of pairs of integers such that the first is less than the second. According to this interpretation, the predicate P is true if its first argument is less than the second.

A formula evaluates to true or false given a model and an interpretation of the values of the variables. Such an interpretation μ associates every variable with a value of the domain. The evaluation of a formula under a model M = ⟨D, I⟩ and an interpretation μ of the variables is defined from the evaluation of a term under the same pair. Note that the model itself contains an interpretation (which evaluates functions and predicates); separately from the model, we additionally have an interpretation μ of the variables:
• every variable is associated with its value according to μ;
• a term f(t_1, ..., t_n) is associated with the value given by the interpretation of the function and the interpretation of the terms: if v_1, ..., v_n are the values associated with t_1, ..., t_n, the term is associated with the value I(f)(v_1, ..., v_n); recall that I(f) is the interpretation of f, and so is a function from D^n to D.
The interpretation of a formula is given as follows.
• a formula P(t_1, ..., t_n) is associated with the value true or false depending on whether (v_1, ..., v_n) ∈ I(P), where v_1, ..., v_n are the evaluations of the terms t_1, ..., t_n and I(P) is the interpretation of P, which by assumption is a subset of D^n;
• a formula of the form ¬A or A → B is evaluated in the obvious way;
• a formula ∃x A is true according to M and μ if there exists an evaluation μ' of the variables that differs from μ only on the evaluation of x and such that A is true according to the model M and the interpretation μ';
• a formula ∀x A is true according to M and μ if A is true for every pair composed of the model M and an interpretation μ' that differs from μ only on the value of x.
If a formula does not contain free variables, then the evaluation of the variables does not affect its truth: in this case F is true according to M and μ if and only if it is true according to M and any other interpretation of the variables μ'.

Validity and satisfiability

A model M satisfies a formula F if this formula is true according to M and every possible evaluation of its variables. A formula is valid if it is true in every possible model under every interpretation of the variables. A formula is satisfiable if there exists a model and an interpretation of the variables that satisfy the formula.

Predicate calculus

The predicate calculus is a proper extension of the propositional calculus that defines which statements of first-order logic are provable. Many (but not all) mathematical theories can be formulated in the predicate calculus. If the propositional calculus is defined with a suitable set of axioms and the single rule of inference modus ponens (this can be done in many ways), then the predicate calculus can be defined by appending to the propositional calculus several axioms and the inference rule called "universal generalization".
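The evaluation clauses above translate directly into a small evaluator over a finite domain. The encoding and the helper names (`ev_term`, `ev`) are our own sketch, assuming formulas are nested tuples and the interpretation I supplies Python callables; the final loop brute-force checks one quantifier identity over every unary predicate on a two-element domain, in the spirit of the validity definition.

```python
from itertools import product

def ev_term(t, I, mu):
    """Evaluate a term: a variable via mu, an application via I."""
    if t[0] == "var":
        return mu[t[1]]
    return I[t[1]](*(ev_term(a, I, mu) for a in t[2:]))

def ev(phi, D, I, mu):
    """Evaluate a formula under domain D, interpretation I, assignment mu."""
    tag = phi[0]
    if tag == "pred":
        return I[phi[1]](*(ev_term(t, I, mu) for t in phi[2:]))
    if tag == "not":
        return not ev(phi[1], D, I, mu)
    if tag == "imp":
        return (not ev(phi[1], D, I, mu)) or ev(phi[2], D, I, mu)
    if tag == "all":       # vary mu only on the quantified variable
        return all(ev(phi[2], D, I, dict(mu, **{phi[1]: d})) for d in D)
    if tag == "exists":
        return any(ev(phi[2], D, I, dict(mu, **{phi[1]: d})) for d in D)
    raise ValueError(tag)

# Over integers 0..4 with P = "is even", both sides of
# (not forall x P(x)) <=> (exists x not P(x)) come out true:
D = range(5)
I = {"P": lambda v: v % 2 == 0}
lhs = ("not", ("all", "x", ("pred", "P", ("var", "x"))))
rhs = ("exists", "x", ("not", ("pred", "P", ("var", "x"))))
print(ev(lhs, D, I, {}), ev(rhs, D, I, {}))   # True True

# Brute-force check of the identity over every unary predicate on {0, 1}:
D2 = [0, 1]
for bits in product([False, True], repeat=len(D2)):
    I2 = {"P": lambda v, b=bits: b[v]}
    assert ev(lhs, D2, I2, {}) == ev(rhs, D2, I2, {})
```

A check like the final loop establishes validity only over the finite models enumerated; by the Löwenheim–Skolem-related limitations discussed later, no such enumeration can replace a proof for arbitrary models.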
As axioms for the predicate calculus we take:
• All tautologies of the propositional calculus, taken schematically so that uniform replacement of a schematic letter by a formula is allowed.
• The quantifier axioms, given above.
• The above axioms for equality, if equality is regarded as a logical concept.
A sentence is defined to be provable in first-order logic if it can be derived from the axioms of the predicate calculus by repeatedly applying the inference rules modus ponens and universal generalization. In other words:
• An axiom of the predicate calculus is provable in first-order logic by definition.
• If the premises of an inference rule are provable in first-order logic, then so is its conclusion.
If we have a theory T (a set of statements, called axioms, in some language), then a sentence φ is defined to be provable in the theory T if a_1 ∧ a_2 ∧ … ∧ a_n → φ is provable in first-order logic for some finite set of axioms a_1, a_2, …, a_n of the theory T; in other words, if one can prove in first-order logic that φ follows from the axioms of T. This also means that we can replace the above procedure for finding provable sentences by the following one:
• An axiom of T is provable in T.
• An axiom of the predicate calculus is provable in T.
• If the premises of an inference rule are provable in T, then so is its conclusion.
One apparent problem with this definition of provability is that it seems rather ad hoc: we have taken some apparently random collection of axioms and rules of inference, and it is unclear whether we have accidentally missed some vital axiom or rule. Gödel's completeness theorem assures us that this is not really a problem: any statement true in all models (semantically true) is provable in first-order logic (syntactically true).
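The first axiom class relies on tautologies of the propositional calculus being recognizable, and tautologyhood is decidable by truth table, which is one way the axiom set stays recursive. A minimal sketch, with our own hypothetical encoding:

```python
from itertools import product

def holds(phi, row):
    """Truth of a propositional formula under an assignment `row`."""
    if phi[0] == "var":
        return row[phi[1]]
    if phi[0] == "not":
        return not holds(phi[1], row)
    if phi[0] == "imp":
        return (not holds(phi[1], row)) or holds(phi[2], row)
    raise ValueError(phi[0])

def is_tautology(phi, names):
    """Check phi under all 2^n assignments to the listed variables."""
    return all(holds(phi, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

# p -> (q -> p), a standard axiom scheme instance, is a tautology:
phi = ("imp", ("var", "p"), ("imp", ("var", "q"), ("var", "p")))
print(is_tautology(phi, ["p", "q"]))   # True
```

The schematic reading in the axiom list means any uniform substitution of wffs for p and q yields an axiom, so the truth-table check applies to the schema's propositional skeleton.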
In particular, any reasonable definition of "provable" in first-order logic must be equivalent to the one above (though the lengths of proofs may differ vastly for different definitions of provability). There are many different (but equivalent) ways to define provability. The above definition is typical for a "Hilbert-style" calculus, which has many axioms but very few rules of inference. By contrast, a "Gentzen-style" predicate calculus has few axioms but many rules of inference.

Provable identities

The following sentences can be called "identities" because the main connective in each is the biconditional. They are all provable in FOL, and are useful when manipulating the quantifiers:
¬∀x P(x) ⇔ ∃x ¬P(x)
¬∃x P(x) ⇔ ∀x ¬P(x)
∀x ∀y P(x, y) ⇔ ∀y ∀x P(x, y)
∃x ∃y P(x, y) ⇔ ∃y ∃x P(x, y)
∀x P(x) ∧ ∀x Q(x) ⇔ ∀x (P(x) ∧ Q(x))
∃x P(x) ∨ ∃x Q(x) ⇔ ∃x (P(x) ∨ Q(x))
P ∧ ∃x Q(x) ⇔ ∃x (P ∧ Q(x)) (where x must not occur free in P)
P ∨ ∀x Q(x) ⇔ ∀x (P ∨ Q(x)) (where x must not occur free in P)

Provable inference rules

The main connective in the following sentences, also provable in FOL, is the implication.
These sentences can be seen as justifications for inference rules in addition to modus ponens and universal generalization, discussed above and assumed valid:
∃x ∀y P(x, y) ⇒ ∀y ∃x P(x, y)
∀x P(x) ∨ ∀x Q(x) ⇒ ∀x (P(x) ∨ Q(x))
∃x (P(x) ∧ Q(x)) ⇒ ∃x P(x) ∧ ∃x Q(x)
∃x P(x) ∧ ∀x Q(x) ⇒ ∃x (P(x) ∧ Q(x))
∀x P(x) ⇒ P(c) (if c is a variable, then it must not be previously quantified in P(x))
P(c) ⇒ ∃x P(x) (there must be no free instance of x in P(c))

Metalogical theorems of first-order logic

Some important metalogical theorems are listed below in bulleted form. What they roughly mean is that a sentence is valid if and only if it is provable. Furthermore, one can construct a program which works as follows: if a sentence is provable, the program will always answer "provable" after some unknown, possibly very large, amount of time; if a sentence is not provable, the program may run forever. In the latter case, we will not know whether the sentence is provable or not, since we cannot tell whether the program is about to answer. In other words, the validity of sentences is semidecidable. Only for simple classes of first-order logic can one construct an algorithm which determines in a finite number of steps whether a sentence is provable (a decision algorithm).
1. The decision problem for validity is recursively enumerable; in other words, there is a Turing machine that, when given any sentence as input, will halt if and only if the sentence is valid (true in all models).
□ As Gödel's completeness theorem shows, any valid formula is provable.
Conversely, assuming consistency of the logic, any provable formula is valid.
□ The Turing machine can be one which generates all provable formulas in the following manner: for a finite or recursively enumerable set of axioms, such a machine can generate an axiom, then generate a new provable formula by applying inference rules to axioms and formulas already generated, then generate another axiom, and so on. Given a sentence as input, the Turing machine simply goes on generating all provable formulas one by one, and halts if it generates the sentence.
2. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of valence at least 2 other than equality. This means that there is no decision procedure that determines whether an arbitrary formula is valid or not. Because there is a Turing machine as described above, the undecidability is related to the unsolvability of the halting problem: there is no algorithm which determines after a finite number of steps whether the Turing machine will ever halt for a given sentence as its input, hence whether the sentence is provable. This result was established independently by Church and Turing.
3. Monadic predicate logic (i.e., predicate logic with only predicates of one argument and no functions) is decidable.
4. The Bernays–Schönfinkel class of first-order formulas is also decidable.

Translating natural language to first-order logic

Concepts expressed in natural language must be "translated" to first-order logic (FOL) before FOL can be used to address them, and there are a number of potential pitfalls in this translation. In FOL, p ∨ q means "p, or q, or both"; that is, it is inclusive. In English, the word "or" is sometimes inclusive (e.g., "cream or sugar?"), but sometimes it is exclusive (e.g., "coffee or tea?" is usually intended to mean one or the other, not both).
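As a quick illustration of the inclusive reading (our own mapping of connectives onto Python booleans): inclusive "or" corresponds to Python's `or`, while exclusive "or" corresponds to `!=`; the two differ exactly when both disjuncts are true.

```python
# Inclusive disjunction (FOL's v) vs. exclusive "or", as a truth table:
for p in (False, True):
    for q in (False, True):
        inclusive, exclusive = (p or q), (p != q)
        print(p, q, inclusive, exclusive)
# The rows agree except at p = q = True, where inclusive is True
# and exclusive is False.
```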
Similarly, the English word "some" may mean "at least one, possibly all", but at other times it may mean "not all, possibly none". The English word "and" should sometimes be translated as "or" (e.g., "men and women may apply").

Limitations of first-order logic

All mathematical notations have their strengths and weaknesses; here are a few such issues with first-order logic.

Difficulty in characterizing finiteness or countability

It follows from the Löwenheim–Skolem theorem that it is not possible to define finiteness or countability in a first-order language. That is, there is no first-order formula φ(x) such that for any model M, M is a model of φ iff the extension of φ in M is finite (or, in the other case, countable). In first-order logic without identity the situation is even worse, since no first-order formula φ(x) can express "there exist n elements satisfying φ" for a fixed finite cardinal n. A number of properties not definable in first-order languages are definable in stronger languages. For example, in first-order logic one cannot assert the least-upper-bound property for sets of real numbers, which states that every bounded, nonempty set of real numbers has a supremum; second-order logic is needed for that.

Difficulty representing if-then-else

Oddly enough, FOL with equality (as typically defined) does not include or permit defining an if-then-else predicate or function if(c, a, b), where c is a condition expressed as a formula, while a and b are either both terms or both formulas, and the result would be a if c is true and b if it is false. The problem is that in FOL, both predicates and functions can only accept terms ("non-booleans") as parameters, but the "obvious" representation of the condition is a formula ("boolean"). This is unfortunate, since many mathematical functions are conveniently expressed in terms of if-then-else, and if-then-else is fundamental for describing most computer programs.
Mathematically, it is possible to redefine a complete set of new functions that match the formula operators, but this is quite clumsy. A predicate if(c, a, b) can be expressed in FOL if rewritten as (c ∧ a) ∨ (¬c ∧ b) (or, equivalently, (c → a) ∧ (¬c → b)), but this is clumsy if the condition c is complex. Many extensions of FOL add a special-case predicate named if(condition, a, b) (where a and b are formulas) and/or a function ite(condition, a, b) (where a and b are terms), both of which accept a formula as the condition and are equal to a if the condition is true and b if it is false. These extensions make FOL easier to use for some problems, and make some kinds of automatic theorem proving easier. Others extend FOL further so that functions and predicates can accept both terms and formulas at any position.

Typing (sorts)

FOL does not include types (sorts) in the notation itself, other than the difference between formulas ("booleans") and terms ("non-booleans"). Some argue that this lack of types is a great advantage, but many others find advantages in defining and using types (sorts), such as helping reject some erroneous or undesirable specifications. Those who wish to indicate types must provide such information using the notation available in FOL. Doing so can make expressions more complex, and can also be easy to get wrong.

Single-parameter predicates can be used to implement the notion of types where appropriate. For example, in ∀x (Man(x) → Mortal(x)), the predicate Man(x) could be considered a kind of "type assertion" (that is, that x must be a man).
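That the two FOL rewritings of if(c, a, b) agree with the intended if-then-else meaning can be verified propositionally by truth table. A short sketch, with Python booleans standing in for the formulas c, a, b:

```python
from itertools import product

# Check that (c ^ a) v (~c ^ b) and (c -> a) ^ (~c -> b) both compute
# "a if c else b" for every truth assignment to c, a, b:
for c, a, b in product([False, True], repeat=3):
    enc1 = (c and a) or (not c and b)      # (c AND a) OR (NOT c AND b)
    enc2 = ((not c) or a) and (c or b)     # (c -> a) AND (NOT c -> b)
    ite = a if c else b                    # the intended if-then-else
    assert enc1 == enc2 == ite
print("both encodings match if-then-else on all 8 assignments")
```

This confirms the propositional equivalence; the clumsiness the text mentions comes from c being duplicated in both encodings, which matters when c is a large formula.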
Predicates can also be used with the "exists" quantifier to identify types, but this should usually be done with the "and" operator instead, e.g.: ∃x (Man(x) ∧ Mortal(x)) ("there exists something that is both a man and mortal"). It is easy to write ∃x (Man(x) → Mortal(x)), but this would be equivalent to (∃x ¬Man(x)) ∨ ∃x Mortal(x) ("there is something that is not a man, and/or there exists something that is mortal"), which is usually not what was intended. Similarly, assertions can be made that one type is a subtype of another, e.g.: ∀x (Man(x) → Mammal(x)) ("for all x, if x is a man, then x is a mammal").

Graph reachability cannot be expressed

Many situations can be modeled as a graph of nodes and directed connections (edges). For example, validating many systems requires showing that a "bad" state cannot be reached from a "good" state, and these interconnections of states can often be modeled as a graph. However, it can be proved that connectedness cannot be fully expressed in predicate logic. In other words, there is no predicate-logic formula φ with R as its only predicate symbol (of arity 2) such that φ holds in an interpretation I if and only if the extension of R in I describes a connected graph: that is, connected graphs cannot be axiomatized. Note that, given a binary relation R encoding a graph, one can describe R in terms of a conjunction of first-order formulas and write a formula φ_R which is satisfiable if and only if R is connected.
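Although connectedness cannot be axiomatized in first-order logic, for a concrete finite relation R it is easy to check directly, which echoes the remark about the formula φ_R for a fixed R. A sketch (our own helper, treating edges as undirected, with "connected" meaning every node reaches every other):

```python
def connected(nodes, R):
    """Check whether the finite graph (nodes, R) is connected,
    ignoring edge direction, by breadth of reachability from one node."""
    if not nodes:
        return True
    undirected = set(R) | {(b, a) for (a, b) in R}
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        n = frontier.pop()
        if n in seen:
            continue
        seen.add(n)
        frontier += [b for (a, b) in undirected if a == n]
    return seen == set(nodes)

print(connected({1, 2, 3}, {(1, 2), (2, 3)}))   # True
print(connected({1, 2, 3}, {(1, 2)}))           # False
```

The contrast is the point of the section: this per-instance check is a computation over a given finite R, not a single first-order sentence true in exactly the connected interpretations.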
Comparison with other logics

• Typed first-order logic allows variables and terms to have various types (or sorts). If there are only a finite number of types, this does not really differ much from first-order logic, because one can describe the types with a finite number of unary predicates and a few axioms. Sometimes there is a special type Ω of truth values, in which case formulas are just terms of type Ω.
• First-order logic with domain conditions adds domain conditions (DCs) to classical first-order logic, enabling the handling of partial functions; these conditions can be proven "on the side" in a manner similar to PVS's type correctness conditions. It also adds if-then-else to keep definitions and proofs manageable (they become too complex without it).
• The SMT-LIB Standard defines a language used by many research groups for satisfiability modulo theories; the full logic is based on FOL with equality, but adds sorts (types), if-then-else for terms and formulas (ite() and if..then..else..), a let construct for terms and formulas (let and flet), and a distinct construct declaring a set of listed values as distinct. Its connectives are not, implies, and, or, xor, and iff.
• Weak second-order logic allows quantification over finite subsets.
• Monadic second-order logic allows quantification over subsets, that is, over unary predicates.
• Second-order logic allows quantification over subsets and relations, that is, over all predicates. For example, the axiom of extensionality can be stated in second-order logic as x = y ≡[def] ∀P (P(x) ↔ P(y)). The strong semantics of second-order logic give such sentences a much stronger meaning than first-order semantics.
• Higher-order logics allow quantification over higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and so on.
• Intuitionistic first-order logic uses intuitionistic rather than classical propositional calculus; for example, ¬¬φ need not be equivalent to φ. Similarly, first-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical logic.
• Modal logic has extra modal operators with meanings which can be characterised informally as, for example, "it is necessary that φ" and "it is possible that φ".
• In monadic predicate calculus, predicates are restricted to having only one argument.
• Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
• First-order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others.
• Predicate Logic with Definitions (PLD, or D-logic) modifies FOL by formally adding syntactic definitions as a type of value (in addition to formulas and terms); these definitions can be used inside terms and formulas.
• Independence-friendly logic is characterized by branching quantifiers, which allow one to express independence between quantified variables.

Most of these logics are in some sense extensions of FOL: they include all the quantifiers and logical operators of FOL with the same meanings. Lindström showed that FOL has no extensions (other than itself) that satisfy both the compactness theorem and the downward Löwenheim–Skolem theorem. A precise statement of Lindström's theorem requires a few technical conditions that the logic is assumed to satisfy; for example, changing the symbols of a language should make no essential difference to which sentences are true.
Three ways of eliminating quantified variables from FOL, none of which involves replacing quantifiers with other variable-binding term operators, are known. Tarski and Givant (1987) show that the fragment of FOL that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra. This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that FOL with a primitive ordered pair is equivalent to a relation algebra with two ordered-pair projection functions.

Theorem proving for first-order logic is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. In 1965 J. Alan Robinson achieved an important breakthrough with his resolution approach; to prove a theorem it tries to refute the negated theorem, in a goal-directed way, resulting in a much more efficient method to automatically prove theorems in FOL. More expressive logics, such as higher-order and modal logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed. A modern and particularly disruptive new technology is that of SMT solvers, which add arithmetic and propositional support to the powerful classes of SAT solvers.

See also

• Jon Barwise and John Etchemendy, 2000. Language, Proof and Logic. CSLI (University of Chicago Press) and New York: Seven Bridges Press.
• David Hilbert and Wilhelm Ackermann, 1950. Principles of Mathematical Logic (English translation). Chelsea. The 1928 first German edition was titled Grundzüge der theoretischen Logik.
• Wilfrid Hodges, 2001, "Classical Logic I: First Order Logic," in Lou Goble, ed., The Blackwell Guide to Philosophical Logic. Blackwell.

External links

• Stanford Encyclopedia of Philosophy: "Classical Logic," by Stewart Shapiro. Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style.
• forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for first-order logic.
• Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC.
• Podnieks, Karl. Introduction to Mathematical Logic.
• Cambridge Mathematics Tripos notes, typeset by John Fremlin. The notes cover part of a past Cambridge Mathematics Tripos course taught to undergraduate students (usually) within their third year. The course is entitled "Logic, Computation and Set Theory" and covers ordinals and cardinals, posets and Zorn's Lemma, propositional logic, predicate logic, set theory, and consistency issues related to ZFC and other set theories.
wu :: forums - Political slugfest

ecoist « on: Nov 1st, 2008, 9:22pm »
4 libertarians, 13 republicans, and 17 democrats gather to argue their political philosophy. They wander about and debate each other in pairs. When two of them of different political persuasions debate each other, they become so disillusioned that they both change to the third political persuasion. Show that it cannot happen that, after a while, all of them acquire the same political philosophy.

towr « Reply #1 on: Nov 2nd, 2008, 8:29am »
Reminds me of a type of lizard.. Let's see if I can find a link. Ah, here we go, and more to the point, here.

Hippo « Reply #2 on: Nov 2nd, 2008, 2:09pm »
Let ω be a primitive 3rd root of unity, and look at l + ωr + ω²d. Operations are additions of (-1,-1,2), (-1,2,-1) or (2,-1,-1) to (l,r,d). As addition of (-1,-1,-1) does not change the sum (since 1 + ω + ω² = 0), we can work mod 3. The starting position (4,13,17) is mod 3 equal to (1,1,-1). I am not able to finish the proof this way, so try to go to 34d:
1) (4,13,17) + (6,-3,-3) = (10,10,14)
2) (10,10,14) + (-10,-10,20) = (0,0,34).
At least I have proved no one else will be able to. If two of l,r,d are equal (mod 3), we can equalize them as in 1), then convert to the remaining kind as in 2). If l,r,d are distinct mod 3, we get ...

towr « Reply #3 on: Nov 2nd, 2008, 3:04pm »
Quote from Hippo: "So try to go to 34d.
1) (4,13,17) + (6,-3,-3) = (10,10,14)
2) (10,10,14) + (-10,-10,20) = (0,0,34)."
That bodes well for Obama.

Eigenray « Reply #4 on: Nov 2nd, 2008, 3:53pm »
For the general case, the analysis is simpler if they're allowed to borrow people. For example, 3 libertarians left to themselves will stay libertarian. But after a couple rounds with a republican in the room (who then leaves, still a republican), they might just find themselves all democrats. Find a simple criterion to determine whether one state can turn into another without borrowing.
Hippo: I also thought roots of unity were the right way to go. But they don't really work for a composite number of parties. That is, with n parties, and a given number of people, there are n^(n-1) distinct states (allowing negative people), by looking at all the differences mod n. But ..., where ω = e^(2πi/n) is an n-th root of unity.

ecoist « Reply #5 on: Nov 2nd, 2008, 5:27pm »
I screwed up, guys! There is an elementary solution if the total number of people involved is a multiple of 3. I should have said there are only 3 libertarians. Thinking about what Eigenray wrote, I came up with the following variation. Let there be 15 parties, with the i-th party having i members, for i=1,...,15. Whenever 14 of these guys meet, no two belonging to the same party, they all switch to the party none belong to. Show that it cannot happen that after a while everyone belongs to the same party.

Hippo « Reply #6 on: Nov 3rd, 2008, 12:41am »
Quote from Eigenray: "For the general case, the analysis is simpler if they're allowed to borrow people.
For example, 3 libertarians left to themselves will stay libertarian. But after a couple rounds with a republican in the room (who then leaves, still a republican), they might just find themselves all democrats. Find a simple criterion to determine whether one state can turn into another without borrowing. Hippo: I also thought roots of unity were the right way to go. But they don't really work for a composite number of parties. That is, with n parties, and a given number of people, there are n^(n-1) distinct states (allowing negative people), by looking at all the differences mod n."

Actually I didn't think about the general case, and the roots of unity were used in a confusing way ... I actually thought of a triangular grid mod 3. I used 1, ω, ω² as coordinates. You can as well use x,y,z space coordinates and project them onto the plane perpendicular to the main diagonal (the t(1,1,1) line). ... There are 9 positions in the projection mod 3, 7 of them on projections of the axes. Mod 3, just the projections of (1,-1,0) and (-1,1,0) are not. I don't think it can be easily generalised to a higher number of parties. At least not from my point of view. Oh yes, it looks good for a prime number p of parties, but I have had problems imagining the mod p operation. BTW: The greasemonkey eats spaces after escape sequences.

ecoist « Reply #7 on: Nov 11th, 2008, 4:27pm »
Ok, how about this less ambitious generalization of the chameleon problem? Suppose that there are n>2 political parties whose members satisfy two conditions. a) For any two parties, the numbers of members of each party are incongruent modulo n. b) If any n-1 people meet, no two of them belonging to the same party, then all n-1 switch to the party none belong to. Show that it can never happen that everyone belongs to the same party.
ecoist « Reply #8 on: Nov 16th, 2008, 11:44am »
I'm a complete idiot (thx for letting me find out for myself)! This problem, and its generalization, belongs in easy. Wowbagger suggested the easy solution, and Hippo noted what needed to be assumed to make his solution work. The numbers of members of each party form a multiset of residues modulo n. That multiset never changes when party switching occurs (see Hippo's example with (4,13,17): there are always exactly two parties with the same number of members modulo 3). The reason for this is that, when switching occurs, it amounts to subtracting 1 from each membership modulo n (adding n-1 members to one party's membership is the same as subtracting 1 modulo n). Hence, since initially all memberships are incongruent modulo n, they must remain incongruent modulo n; and so the guys can never all belong to the same party. The only way such a thing could happen is if, initially, n-1 memberships are congruent modulo n.
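Hippo's two explicit steps and the residue-shift observation above are easy to verify by brute force. A short Python sketch (the `debate` helper and its naming are mine, not from the thread):

```python
# Replay Hippo's derivation: a "debate" between members of parties i and j
# moves both debaters to the third party k.
def debate(state, i, j):
    k = 3 - i - j
    s = list(state)
    assert s[i] > 0 and s[j] > 0, "both parties need a member to debate"
    s[i] -= 1
    s[j] -= 1
    s[k] += 2
    return tuple(s)

state = (4, 13, 17)  # libertarians, republicans, democrats

# Step 1: three republican-vs-democrat debates -> (10, 10, 14)
for _ in range(3):
    state = debate(state, 1, 2)
assert state == (10, 10, 14)

# Step 2: ten libertarian-vs-republican debates -> (0, 0, 34)
for _ in range(10):
    state = debate(state, 0, 1)
assert state == (0, 0, 34)

# Each debate adds (-1,-1,2) up to permutation, which is (-1,-1,-1) mod 3,
# so after 13 debates every residue has shifted by -13 (i.e. +2 mod 3).
assert sorted(x % 3 for x in state) == sorted((x - 13) % 3 for x in (4, 13, 17))
```

With residues (1,1,2) mod 3, two parties start congruent, which is exactly why (4,13,17) can reach a single-party state while incongruent starting counts cannot.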
Surface models

The Surface Evolver can handle several different models of surfaces. In fact, there are several different categories of models regarding different features, so one variety of each category is active. However, some combinations of surface models are incompatible.

Surface representation and combinatorics

All surfaces are simplicial complexes made of the basic elements: vertices, edges, and facets. The Evolver has three different ways of representing the combinatorics of the surface, depending on the dimension of the surface. Any of these may be used in any ambient space dimension at least as great as the surface dimension.

String model

The term "string model" means that the surface is one-dimensional. The datafile must have the keyword "STRING" or "SURFACE_DIMENSION 1" in its top section. Edges are defined in terms of their vertices, and facets by a list of boundary edges. Facets are not divided into triangles, and may have any number of edges. The edges of a facet need not form a closed loop, for example if the facet is partly bounded by a constraint. A body is defined by associating one facet to it, and the volume of the body is the area of the facet. The default energy is edge length.

Soapfilm model

The term "soapfilm model" means that the dimension of the surface is 2. This is the default model. The surface is subdivided into triangular facets, and the default energy is surface area. Edges are defined by their vertices. Facets are defined by an oriented list of three edges, which must form a closed loop. However, faces in the datafile may have more than three edges, since they are automatically refined into facets when loaded. In official Evolver-speak, a "face" is what appears in the datafile, and a "facet" is the triangle in the internal Evolver representation of the surface.
Bodies are defined by a set of oriented facets, which need not form the complete boundary of the body, for example if part of the boundary is on a constraint. Internally, the surface is held together by a set of structures called "facet-edges". There is one such structure for each incidence of an edge on a facet. There is a doubly linked list of facet-edges around each facet, so edges can be traversed in order, and there is a doubly linked list around each edge, so the facets around an edge can be traversed in geometric order. Evolver figures out the geometric order from the geometric data in the datafile. If geometric order does not make sense, as when the space dimension is 4 or more, then the order is random.

Simplex model

The simplex model enables the representation of arbitrary dimension surfaces, but many Evolver features are not available with it. Here each facet is represented as an oriented list of k+1 vertices, where k is the dimension of the surface. Edges may be specified as k-1 dimensional simplices, but they are used only to compute constraint and named quantity integrals; a complete list of edges is not needed. Bodies are specified as lists of oriented facets. The datafile must have the keyword SIMPLEX_REPRESENTATION in the top section, and the phrase 'SURFACE_DIMENSION k' if k isn't 2. k = 1 and k = 2 are allowed, but not very useful in comparison to the string or soapfilm models. If the domain is not 3-dimensional, then 'SPACE_DIMENSION n' must also be included. The datafile edges section is optional. Each facet should list k+1 vertex numbers. Non-simplicial facets are not allowed. See the sample datafile simplex3.fe.

Most features are not implemented. The quadratic model is not allowed, but the Lagrange model is. Vertices may be FIXED. Constraints are allowed, with energy integrands. Several basic named quantity methods work. No torus model or symmetry groups.
No changing of surface topology or combinatorics is allowed except global refining with the r command. Refining subdivides each simplex 1-edge, with the edge midpoint inheriting the common attributes of the edge endpoints. Refining will increase the number of facets by a factor of 2^k.

Representation of surface elements

A surface is represented by a finite element method, where each element is a simplex. The simplest types of elements are just straight line segments and flat triangles. These combine to represent a piecewise linear surface, for which calculated quantities generally have an error of order h^2 compared to the ideal smooth surface, where h is the linear size of an element. Various ways of representing more accurate curved elements exist, and Evolver implements a couple of versions of what are called "Lagrange elements".

Linear model

In the linear model, all edges and triangular facets are flat line segments and triangles, respectively. For all calculations, an edge is defined by its two endpoints, and a facet (in the soapfilm model) is defined by its three vertices. This is the default. Quadratic or Lagrange models may be changed to linear with the M 1 or linear commands. An exception is if the spherical_arc_length method is used for length_method_name in the string model, in which case edges are computed and drawn on a sphere centered at the origin.

Quadratic model

By default, edges and facets are linear. But it is possible to represent edges as quadratic curves and facets as quadratic patches by using the quadratic model. Each edge is endowed with a midpoint vertex. Most features are implemented for the quadratic representation, but some named quantity methods are not available, such as those involving curvature. A special case is circular arc mode, which is effective in quadratic mode in the string model with the circular_arc_length method used for length_method_name.
Then edges are calculated and drawn as exact circular arcs through their three vertices. The smoothness of graphing of curved quadratic edges can be controlled with the internal variable string_curve_tolerance, which is the desired angle in degrees between successive graphed segments making up the edge. The quadratic model can be invoked by putting the QUADRATIC keyword in the top section of the datafile or by using the quadratic or M 2 commands.

Lagrange model

The Evolver has a very limited implementation of higher-order elements. In the Lagrange model of order n, each edge is defined by interpolation on n+1 vertices evenly spaced in the parameter domain, and each facet is defined by interpolation on (n+1)(n+2)/2 vertices evenly spaced in a triangular pattern in the parameter domain. That is, the elements are Lagrange elements in the terminology of finite element analysis. The Lagrange model is defined only for named quantities and methods, so Evolver will automatically do convert_to_quantities when you invoke the Lagrange model. Only some methods currently accept the Lagrange model.

A surface may be converted to an order n Lagrange model with the command "lagrange n". This will convert linear or quadratic models to Lagrange, and will convert between Lagrange models of different orders. The commands linear and quadratic will convert the Lagrange model back to the linear or quadratic models. No triangulation manipulations are available in the Lagrange model: no refining, equiangulation, or anything. There is some vertex averaging, but just internal to edges and facets. Use the linear or quadratic model to establish your final triangulation, and use the Lagrange model just to get extra precision. The current order can be accessed through the read-only internal variable lagrange_order. The Lagrange model can be dumped and reloaded. As the Lagrange order is raised, calculations slow down rapidly.
This is not only due to the large number of points involved, but also to the fact that the order of Gaussian integration is raised as well. Lagrange elements are normally plotted subdivided on their vertices, but if the smooth_graph flag is on, they are plotted with 8-fold subdivision. The toggle command bezier_basis replaces the Lagrange interpolation polynomials (which pass through the control points) with Bezier basis polynomials (which do not pass through interior control points, but have positive values, which guarantees the edge or facet is within the convex hull of the control points). This is experimental at the moment, and not all features such as graphing or refinement have been suitably adjusted.

Space dimension

By default, surfaces live in 3-dimensional space. However, the phrase "SPACE_DIMENSION n" in the datafile header sets the dimension to n. This means that all coordinates and vectors have n components. The only restriction is that Evolver has to be compiled with the MAXCOORD macro defined to be at least n in Makefile or in model.h. The default MAXCOORD is 4. Change MAXCOORD and recompile if you want more than four dimensions. The actual space dimension can be accessed in commands through the read-only variable space_dimension. Graphics will display only the first three dimensions of spaces with more than three dimensions, except for geomview, which has a four-dimensional viewer built in (although its use is awkward now).

For length and area to make sense, the ambient space must be endowed with a metric. The Evolver offers several choices, but keep in mind that they are only used to calculate default length and area. Other quantities that depend on the metric, such as volume, are up to the user to put in by hand with named quantities. All displaying is done as if the metric is Euclidean.

Euclidean metric

The default metric on the ambient space is the ordinary Euclidean metric.
There are no built-in units of measurement like meters or grams, so the user should express all physical quantities in some consistent system of units, such as MKS or cgs.

Riemannian metric

The ambient space can be endowed with a general Riemannian metric by putting the keyword METRIC in the datafile followed by the elements of the metric tensor. Only one coordinate patch is allowed, but the quotient space feature makes this quite flexible. Edges and facets are linear in coordinates; they are not geodesic. The metric is used solely to calculate lengths and areas. It is not used for volume. To get a volume constraint on a body, you will have to define your own named quantity constraint. See quadm.fe for an example of a metric.

Conformal metric

The ambient space can be endowed with a conformal Riemannian metric by putting the keyword CONFORMAL_METRIC in the datafile followed by a formula for the conformal factor, i.e. the multiple of the identity matrix that gives the metric. Only one coordinate patch is allowed, but the quotient space feature makes this quite flexible. Edges and facets are linear in coordinates; they are not geodesic. The metric is used solely to calculate lengths and areas. It is not used for volume. To get a volume constraint on a body, you will have to define your own named quantity constraint. See quadm.fe for an example of a metric.

Klein hyperbolic metric

One special metric is available built-in: the Klein model of hyperbolic space in n dimensions. The domain is the unit disk or sphere in Euclidean coordinates. Including the keyword KLEIN_METRIC in the top section of the datafile will invoke this metric. Lengths and areas are calculated exactly, but as with other metrics you are on your own for volumes and quantities.

There are many interesting problems dealing with symmetric surfaces. A natural way to deal with a symmetric surface is to compute with only one fundamental domain of the symmetry, and use special boundary conditions.
Some symmetries, such as mirror symmetries, can be handled with normal Evolver features. For example, a mirror can be implemented as a planar level-set constraint. But symmetries such as translational or rotational symmetry require some built-in features. In any case, multiple copies of the fundamental domain may be displayed with the view_transforms command.

No symmetry

By default, the domain of a surface is Euclidean space. A symmetric surface can be done this way if its fundamental domain is bounded by mirror planes. Each mirror plane should be implemented as a linear level-set constraint.

Torus model

The Evolver can take as its domain a flat torus with an arbitrary parallelepiped as its unit cell, i.e. the domain is a parallelepiped with its opposite faces identified. This is indicated by the TORUS keyword in the datafile. The defining basis vectors of the parallelepiped are given in the TORUS_PERIODS entry of the datafile. See twointor.fe for an example. Vertex coordinates are given as Euclidean coordinates within the unit cell, not as linear combinations of the period vectors. The coordinates need not lie within the parallelepiped, as the exact shape of the unit cell is somewhat arbitrary. The way the surface wraps around in the torus is given by saying how the edges cross the faces of the unit cell. In the datafile edges section, each edge has one symbol per dimension indicating how the edge vector crosses each identified pair of faces, and how the vector between the endpoints needs to be adjusted to get the true edge vector:

* does not cross the face
+ crosses in the same direction as the basis vector, so the basis vector is added to the edge vector
- crosses in the opposite direction, so the basis vector is subtracted from the edge vector

Wraps are automatically maintained by the various triangulation manipulation operations. There are several commands for ways of displaying a torus surface:

raw_cells - Graph the facets as they are, without clipping.
The first vertex of a facet is used as the basepoint for any unwrapping of other vertices needed.
connected - Each body's facets are unwrapped in the torus, so the body appears in one connected piece. Nicest option, but won't show facets not on bodies.
clipped - Shows the unit cell specified in the datafile. Facets are clipped on the parallelepiped faces.

A few features are not available with the torus model, such as gravity and the simplex model. (Actually, you could put them in, but they will not take into account the torus model.) Volumes and volume constraints are available. However, if the torus is completely partitioned into bodies of prescribed volume, then the volumes must add up to the volume of the unit cell and the TORUS_FILLED keyword must be given in the datafile. Or just don't prescribe the volume of one body.

Volumes are somewhat ambiguous. The volume calculation method is accurate only to one torus volume, so it is possible that a body whose volume is positive gets its volume calculated as negative. Evolver adjusts volumes after changes to be as continuous as possible with the previous volumes, or with target volumes when available. You can also set a body's volconst attribute if you don't like the Evolver's actions.

Level-set constraints can be used in the torus model, but be cautious when using them as mirror symmetry planes with volumes. The torus volume algorithm does not cope well with such partial surfaces. If you must, then use y=const symmetry planes rather than x=const, and use the -q option or do convert_to_quantities. Double-check that your volumes are turning out correctly; use volconst if necessary.

Display_periods: The displayed parallelogram unit cell can be different from the actual unit cell if you put an array called display_periods in the top of the datafile, in addition to the regular periods.
For a string model example:

parameter shear = 1
shear 4

This will always display a square, no matter how much the actual unit cell is sheared. This feature works well for shears; it may not work nicely for other kinds of deformation. Display_periods works better for the string model than the soapfilm model. For the soapfilm model, it seems to do horizontal shears best, but it can't cope with large shears, so if your shear gets too large, I advise resetting your fundamental region to less shear, say with the unshear command in unshear.cmd.

Display_origin: For a torus mode surface, if clipped mode is in effect, the center of the clip box is set with the display_origin[] array, whose dimension is the dimension of the ambient space. This array does not exist by default; it has to be created by the user in the top of the datafile with the syntax

display_origin x y z

where x y z are the coordinates for the desired center of the clip box. At runtime, the array elements may be changed as normal:

display_origin[2] := 0.5

Changing display_origin will automatically cause the graphics to re-display.

Symmetry groups and quotient spaces

As a generalization of the torus model, you may declare the domain to be the quotient space of R^n with respect to some symmetry group. Several built-in groups are available, and ambitious users can compile C code into Evolver to define group operations. Group elements are represented by integers attached to edges (like the wrap specifications in the torus model at runtime). You define the integer representation of the group elements. See the file quotient.c for an example. See khyp.c and khyp.fe for a more intricate example modelling an octagon in Klein hyperbolic space identified into a genus 2 surface. The datafile requires the keyword SYMMETRY_GROUP followed by the name for the group in quotes. Edges that wrap have their group element specified in the datafile by the phrase "wrap n", where n is the number of the group element.
The wrap values are accessible at run time with the wrap attribute of edges. The group operations are accessible by the functions wrap_inverse(w) and wrap_compose(w1,w2). Using any Hessian commands with any symmetry group (other than the built-in torus model) will cause automatic conversion to named quantities (with the convert_to_quantities command), since only the named quantity Hessian evaluation routines have the proper symmetry transformation of the Hessian programmed in. Volumes of bodies might not be calculated correctly with a symmetry group. The volume calculation only knows about the built-in torus model. For other symmetry groups, if you declare a body, it will use the Euclidean volume calculation. It is up to you to design an alternate volume calculation using named quantities and methods.

The currently implemented symmetry groups are:

TORUS symmetry group

This is the underlying symmetry for the torus model. Although the torus model has a number of special features built into the Evolver, it can also be accessed through the general symmetry group interface. The torus group is the group on n-dimensional Euclidean space generated by n independent vectors, called the period vectors. The torus group uses the torus periods listed in the datafile.
Datafile declaration: symmetry_group "torus"
Group element encoding: The 32-bit code word is divided into 6-bit fields, one field for the wrap in each dimension, with low bits for the first dimension. Hence the maximum space dimension is 5. Within each bitfield, 1 codes for a positive wrap and 011111 codes for a negative wrap. The coding is actually a 2's-complement 5-bit integer, so higher wraps could be represented.

ROTATE symmetry group

This is the cyclic symmetry group of rotations in the x-y plane, where the order of the group is given by the internal variable rotation_order (default value 4).
There is also an internal variable generator_power (default 1) such that the angle of rotation is 2*pi*generator_power/rotation_order.

Note: Since this group has fixed points, some special precautions are necessary. Vertices on the rotation axis must be labelled with the attribute axial_point in the datafile. Edges out of an axial point must have the axial point at their tail, and must have zero wrap. Facets including an axial point must have the axial point at the tail of the first edge in the facet.

Datafile declaration:
    symmetry_group "rotate"
    parameter rotation_order = 6

Group element encoding: An element is encoded as the power of the group generator.

FLIP_ROTATE symmetry group

This is the cyclic symmetry group of rotations in the x-y plane with a flip (z mapping to -z) on every odd rotation, where the order of the group is given by the internal variable rotation_order, which had better be even.

Note: Since this group has points that are fixed under an even number of rotations, some special precautions are necessary. Vertices on the rotation axis must be labelled with the attribute "double_axial_point" in the datafile. Edges out of an axial point must have the axial point at their tail, and must have zero wrap. Facets including an axial point must have the axial point at the tail of the first edge in the facet.

Datafile declaration:
    symmetry_group "flip_rotate"
    parameter rotation_order = 6

Group element encoding: An element is encoded as the power of the group generator.

CUBELAT symmetry group

This is the full symmetry group of the unit cubic lattice.

Datafile declaration:
    symmetry_group "cubelat"

Group element encoding: wrap & {1,2,4} gives the sign changes for x, y, z respectively; (wrap&24)/8 is the power of the (xyz) permutation cycle; (wrap&32)/32 tells whether to then swap x and y. Thus wrap & 63 is the same as for the cubocta symmetry group.
Then wrap/64 is a torus wrap as in the torus symmetry group: three six-bit signed fields for the translation in each coordinate. The translation is applied after the other operations.

CUBOCTA symmetry group

This is the full symmetry group of the cube. It can be viewed as all permutations and sign changes of (x,y,z).

Datafile declaration:
    symmetry_group "cubocta"

Group element encoding: wrap & {1,2,4} gives the sign changes for x, y, z respectively; (wrap&24)/8 is the power of the (xyz) permutation cycle; (wrap&32)/32 tells whether to then swap x and y. (By John Sullivan; source in quotient.c under the name pgcube.)

XYZ symmetry group

The orientation-preserving subgroup of cubocta. Same representation.

GENUS2 symmetry group

This is a symmetry group on the Klein model of hyperbolic space whose quotient space is a genus 2 hyperbolic manifold. The fundamental region is an octagon.

Datafile declaration:
    symmetry_group "genus2"

Group element encoding: There are 8 translation elements that translate the fundamental region to one of its neighbors. Translating around a vertex gives a circular string of the 8 elements. The group elements encoded are substrings of the 8, with the null string being the identity. Encoding is 4 bits for each element. See khyp.fe for an example.

DODECAHEDRON symmetry group

This is the symmetry group of translations of hyperbolic 3-space tiled with right-angled dodecahedra. The elements of the group are represented as integers. There are 32 generators of the group, so each generator is represented by five bits. Under this scheme any element that is the composition of up to five generators can be represented. If you want to use this group, you'll have to check out the source code in dodecgroup.c, since somebody else wrote this group and I don't feel like figuring it all out right now.

Datafile declaration:
    symmetry_group "dodecahedron"

CENTRAL_SYMMETRY symmetry group

This is the order 2 symmetry group of inversion through the origin, X maps to -X.
Datafile declaration:
    symmetry_group "central_symmetry"

Group element encoding: 0 for the identity, 1 for the inversion.

SCREW_SYMMETRY symmetry group

This is the symmetry group of screw motions along the z axis. The global parameter screw_height is the translation distance (default 1), and the global parameter screw_angle is the rotation angle in degrees (default 0).

Datafile declaration:
    parameter screw_height = 4.0
    parameter screw_angle = 180.0
    symmetry_group "screw_symmetry"

Group element encoding: the integer value is the power of the group generator.

3D torus with quarter turn symmetry group

This is a 3D torus with a quarter turn in the identification of top and bottom. The x and y periods are taken to be 1; the z period is the user-defined variable quarter_turn_period. Generators x, y, z: x and y act as in the regular torus model; z is vertical translation with a quarter turn, (x,y,z) -> (-y,x,z). Relations: x z = z y^-1, y z = z x. Numerical representation: as in the torus symmetry group, for powers of x, y, z, with the generators applied in that order.

There is a choice to be made in the conversion of the forces on vertices into velocities of vertices. Technically, force is the gradient of energy, hence a covector on the manifold of all possible configurations. In the Evolver representation of surfaces, that global covector can be represented as a covector at each vertex. The velocity is a global vector, which is represented as a vector at each vertex. Conversion from the global covector to the global vector requires multiplication by a metric tensor, i.e. singling out a particular inner product on global vectors and covectors. The tensor converting from force to velocity is the mobility tensor, represented as the mobility matrix M in some coordinate system. Its inverse, converting from velocity to force, is the resistance tensor S = M^{-1}. The same inner product has to be used in projecting the velocity tangent to the constraints, whether they be level-set constraints on vertices or constraints on body volumes or quantity integrals.
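The covector-to-vector conversion just described amounts to multiplying the per-vertex force by a mobility matrix. A minimal sketch (illustrative Python, not Evolver code; the function name and the diagonal area-normalized mobility are assumptions for illustration):

```python
# Sketch: converting a per-vertex force covector into a velocity vector.
# Unit mobility: M is the identity, so velocity = force.
# Area normalization: M = diag(1/area_i), so velocity_i = force_i / area_i,
# where area_i is the area associated with vertex i (e.g. 1/3 of its facet star).
# All names here are illustrative, not Evolver syntax.

def apply_mobility(forces, vertex_areas=None):
    """Return per-vertex velocities given per-vertex force vectors.

    forces: list of (fx, fy, fz) tuples, one per vertex.
    vertex_areas: optional list of positive scalars; if omitted,
    unit mobility is used.
    """
    if vertex_areas is None:  # unit mobility: velocity equals force
        return [tuple(f) for f in forces]
    # diagonal mobility, applied component-wise at each vertex
    return [tuple(c / a for c in f) for f, a in zip(forces, vertex_areas)]

forces = [(3.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(apply_mobility(forces))              # unit mobility: motion = force
print(apply_mobility(forces, [1.5, 0.5]))  # area normalization: motion = force/area
```

The diagonal case corresponds to the local schemes below; the non-local polyhedral-curvature mobilities would replace the diagonal matrix by a sparse resistance matrix coupling neighboring vertices.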
There are several choices implemented in the Evolver, corresponding to several different physical pictures of how the medium resists the motion of the surface through it:

Unit mobility

This is the default mobility, in which the velocity is equal to the force. Hence M and S are the identity matrices in standard coordinates. The physical interpretation of this is that there is a resistance to the motion of each vertex through the medium proportional to its velocity, but none for the edges. This does not approximate motion by mean curvature, but it is very easy to calculate.

Area normalization

A type of mobility. In motion by mean curvature, the resistance to motion is really due to the surfaces, not the vertices. One way to approximate this is to say the resistance to motion of a vertex is proportional to the area associated with the vertex. So this scheme counts the area of a vertex as 1/3 of the area of the star of facets around it (or 1/2 the area of the star of edges in the string model). This is easy to calculate, since it is a local calculation for each vertex. S and M are diagonal matrices (see mobility). The result is that motion = force/area, which is a much better approximation to motion by mean curvature than the default of motion = force. Area normalization can be toggled with the a command or the area_normalization toggle.

Area normalization with effective area

A type of mobility. Simple area normalization as described in the previous paragraph isn't what's really wanted in certain circumstances. It has equal resistance for motion in all directions, both parallel and normal to the surface. If a vertex is a triple junction and migrating along the direction of one of the edges, it shouldn't matter how long that edge is. Therefore, if the effective area mode is in effect, the area associated with a vertex is the area of its star projected normal to the force at the vertex. This is a little more complicated calculation, but it is still local.
S and M are block diagonal matrices, with one block for each vertex (see mobility). At a free edge not on any constraint, the force is tangent to the surface, the resistance is zero, and the mobility is infinite. But this accurately describes a popping soapfilm. Effective area can be toggled with the effective_area toggle. Note that area normalization itself must still be toggled with a or area_normalization.

Approximate polyhedral curvature

A type of mobility. Following a suggestion of Gerhard Dziuk and Alfred Schmidt, the inner product of global vectors is taken to be the integral of the scalar product of their linear interpolations over the facets (or edges in the string model). This has the advantage that the rate of area decrease of the surface is equal to the rate at which volume is swept out by the surface, which is a characteristic of motion by mean curvature. A big disadvantage is that the matrices M and S are no longer local (see mobility). S is a sparse matrix with entries corresponding to each pair of vertices joined by an edge, and M is its dense inverse. Approximate polyhedral curvature can be toggled with the approx_curv toggle command.

Approximate polyhedral curvature with effective area

Polyhedral curvature does not make any distinction between motion parallel and perpendicular to the surface. A better approximation is to count only motion perpendicular to the surface. This can be done by projecting the interpolated vectorfields normal to the facets before integrating their scalar product. Now the rate of area decrease is equal to the rate at which geometric volume is swept out, as opposed to the slightly flaky way one had to calculate volume sweeping in the previous paragraph. Again S is a sparse matrix with entries corresponding to each pair of vertices joined by an edge, and M is its dense inverse. The effective area option may be toggled with effective_area.

User-defined mobility

The user may define a mobility tensor in the datafile.
There is a scalar form with the keyword MOBILITY and a tensor form with MOBILITY_TENSOR. When in effect, this mobility is multiplied by the velocity to give a new velocity. This happens after any of the previous mobilities of this section have been applied, and before projection to constraints. The formulas defining the mobility may include adjustable parameters, permitting the mobility to be adjusted at runtime. The formulas are evaluated at each vertex at each iteration, and so may depend on vertex position and any vertex attributes.
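Returning to the group-element encodings earlier in this section, the bit-field layouts for the torus and cubocta groups can be decoded as in the following rough sketch (illustrative Python, not Evolver code; the function names are made up, and the 5-bit two's-complement reading follows the description in the torus section):

```python
def decode_torus_wrap(wrap, dim=3):
    """Split a torus wrap word into per-dimension signed wraps.

    Each dimension occupies a 6-bit field (low bits = first dimension);
    the value is read as a 2's-complement 5-bit integer, so 1 is a
    positive wrap and 0b011111 a negative one.
    """
    wraps = []
    for i in range(dim):
        field = (wrap >> (6 * i)) & 0b11111  # low 5 bits of the field
        wraps.append(field - 32 if field >= 16 else field)
    return wraps

def decode_cubocta_wrap(wrap):
    """Extract the cubocta fields: sign changes for x,y,z, the power of
    the (xyz) permutation cycle, and the x,y swap flag."""
    signs = [-1 if wrap & b else 1 for b in (1, 2, 4)]
    cycle_power = (wrap & 24) >> 3
    swap_xy = (wrap & 32) >> 5
    return signs, cycle_power, swap_xy

print(decode_torus_wrap(0b011111))      # a negative wrap in dimension 1
print(decode_torus_wrap(1 | (1 << 6)))  # positive wraps in dimensions 1 and 2
print(decode_cubocta_wrap(42))
```

For cubelat, the same low 6 bits apply and wrap // 64 would then be decoded as a torus wrap, per the description above.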
Physics Forums - View Single Post - Projectile Motion: Finding the correct angle for a launch

1. The problem statement, all variables and given/known data

A Hollywood daredevil plans to jump the canyon shown in the figure on a motorcycle. There is a 15. m drop and the horizontal distance originally planned was 60. m, but it turns out the canyon is really 69.4 m across. If he desires a 3.0-second flight time, what is the correct angle for his launch ramp (deg)?

2. Relevant equations

vx = vxo
x − xo = (vxo)t
vy = vyo − gt
y − yo = (vyo)t − (1/2)g t^2
Range = ((vo^2)/g) sin(2θ)

3. The attempt at a solution

x − xo = (vxo)t
69.4 m = (vxo)(3 s)
23.13 m/s = vxo

Range = ((vo^2)/g) sin(2θ)
69.4 m = {(23.13 m/s)^2 / 9.81 m/s^2} sin(2θ)
θ = error
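The range formula used in the attempt assumes launch and landing at the same height, which is why it errors out here: the canyon has a 15 m drop. Treating the horizontal and vertical motions separately gives the angle directly. A sketch of that calculation (assuming g = 9.81 m/s² and the numbers from the problem):

```python
import math

g = 9.81    # gravitational acceleration, m/s^2
t = 3.0     # desired flight time, s
dx = 69.4   # horizontal distance across the canyon, m
dy = -15.0  # net vertical drop from launch to landing, m

# Horizontal: x - xo = vxo * t, so vxo = dx / t
vx0 = dx / t
# Vertical: y - yo = vyo*t - (1/2) g t^2, solved for vyo
vy0 = (dy + 0.5 * g * t**2) / t
# Launch angle above horizontal
theta = math.degrees(math.atan2(vy0, vx0))
print(f"vxo = {vx0:.2f} m/s, vyo = {vy0:.2f} m/s, ramp angle = {theta:.1f} deg")
```

The vertical equation absorbs the unequal heights that the level-ground range formula cannot handle.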
NOMINATE in R

W-NOMINATE in R Software and Examples
Royce Carroll, Jeff Lewis, James Lo, Keith Poole, and Howard Rosenthal

Download wnominate: W-NOMINATE Roll Call Analysis Software for R

Users of this package should cite: Poole, Keith, Jeffrey Lewis, James Lo, and Royce Carroll. 2011. "Scaling Roll Call Votes with wnominate in R." Journal of Statistical Software 42(14): 1–21.

wnominate_un_text_tabs.r -- R program that runs WNOMINATE on a tab-delimited roll call matrix, writes the legislator and roll call coordinates to disk, and makes a plot of the ideal points of the legislators.

wnominate_un_stata_file.r -- R program that runs WNOMINATE on a Stata roll call matrix, writes the legislator and roll call coordinates to disk, and makes a plot of the ideal points of the legislators.

wnominate_un_csv.r -- R program that runs WNOMINATE on a comma-delimited roll call matrix, writes the legislator and roll call coordinates to disk, and makes a plot of the ideal points of the legislators.

wnominate_in_R_2010.r -- R program that runs WNOMINATE on the 88th Senate -- SEN88KH.ORD -- and writes the legislator and roll call coordinates to disk. The program uses Simon Jackman's pscl package function readKH(...) to read the roll call file.

wnominate_house_111_X_plot.r -- R program that runs WNOMINATE, writes the legislator and roll call coordinates to disk, and makes a plot of the ideal points of the legislators.

wnominate_senate_111_X_plot.r -- R program that runs WNOMINATE, writes the legislator and roll call coordinates to disk, and makes a plot of the ideal points of the legislators.

wnominate_senate_111_bootstrap.r -- R program that runs WNOMINATE and performs a parametric bootstrap analysis (501 trials) to obtain the standard deviations for the legislator coordinates.
The program writes the legislator and roll call coordinates to disk, and makes a plot of the ideal points of the legislators showing the 2-standard-deviation confidence ellipses around the ideal points (if the correlation between the trials for the first and second dimension is greater than 0.15 in absolute value, an ellipse is drawn -- otherwise a cross-hair is drawn).

wnominate_house_111_coombs_mesh.r -- R program that runs WNOMINATE, writes the legislator and roll call coordinates to disk, outputs the summary plot of the results (see vignette PDF above), and makes a plot of the Coombs Mesh and a histogram of the cutting line angles. Program uses HOU111KH.ORD.

wnominate_senate_111_coombs_mesh.r -- R program that runs WNOMINATE, writes the legislator and roll call coordinates to disk, outputs the summary plot of the results (see vignette PDF above), and makes a plot of the Coombs Mesh and a histogram of the cutting line angles. Program uses SENKH111.ORD.
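The ellipse-versus-cross-hair rule described above is easy to state in code. A language-agnostic sketch (in Python rather than R, purely for illustration; the 0.15 threshold on the absolute correlation between first- and second-dimension bootstrap trials is taken from the description above, and the function names are made up):

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def marker_for_legislator(dim1_trials, dim2_trials, threshold=0.15):
    """Decide how to draw a legislator's bootstrap uncertainty:
    an ellipse if the two dimensions are noticeably correlated across
    trials, otherwise a cross-hair."""
    r = pearson_r(dim1_trials, dim2_trials)
    return "ellipse" if abs(r) > threshold else "cross-hair"

# Toy bootstrap trials for one legislator (real runs would use 501 trials)
print(marker_for_legislator([0.1, 0.2, 0.3, 0.4], [0.0, 0.1, 0.2, 0.3]))
print(marker_for_legislator([0.1, 0.2, 0.3, 0.4], [0.1, -0.1, -0.1, 0.1]))
```

The first call uses perfectly correlated trials and so draws an ellipse; the second uses uncorrelated trials and falls back to a cross-hair.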
Beverly, MA Algebra 1 Tutor

Find a Beverly, MA Algebra 1 Tutor

...If taught properly, Trigonometry is easy to understand and apply. I have a MS degree, in Electrical Engineering, from the University of Florida, ten years of Industry experience as an Electrical Engineer (IBM, TELLABS, ACR..), US patents and Publications in my name. I am a senior member of the I...
6 Subjects: including algebra 1, physics, trigonometry, precalculus

...I also have 4 years tutoring for NCLB grades 1-8 in Math and English. I have been tutoring for 25+ years, both Middle school (math and English) and H.S. Math. I have tutored COOP, HSPT, ISEE, SSAT, PSAT (Math and Verbal), ACT (Math and Verbal), SAT (Math and Verbal). I feel that I am definitely qualified to tutor for COOP/HSPT prep.
19 Subjects: including algebra 1, geometry, GRE, ASVAB

I am currently teaching both middle school and high school math and often teach adjunct classes at community college. I have great success with my students' scores on standardized tests and have had many students increase their percentile scores year-to-year by 20% or more. I have over 20 years of teaching experience and have also worked as an engineer.
8 Subjects: including algebra 1, geometry, algebra 2, SAT math

...I have many years of experience tutoring the SAT. I have extensive experience with the standardized graduate exams.
29 Subjects: including algebra 1, reading, calculus, geometry

...I have a Massachusetts license to teach math at the high school and middle school levels. I am flexible and want to help you to succeed so ... give me a call and let's get started! Let's get you feeling good about math!
11 Subjects: including algebra 1, geometry, ASVAB, algebra 2
6.4 Distant parallelism

The next geometry Einstein took as a fundament for unified field theory was a geometry with Riemannian metric, vanishing curvature, and non-vanishing torsion, named "absolute parallelism", "distant parallelism", "teleparallelism", or "Fernparallelismus". The contributions from the Levi-Civita connection and from contorsion in the curvature tensor cancel. In place of the metric, tetrads are introduced as the basic variables. As in Euclidean space, in the new geometry these 4-beins can be parallelly translated to retain the same fixed directions everywhere. Thus, again, a degree of absoluteness is re-introduced into geometry, in contrast to Weyl's first attempt at unification, which tried to soften the "rigidity" of Riemannian geometry.

The geometric concept of "fields of parallel vectors" had been introduced on the level of advanced textbooks by Eisenhart as early as 1925–1927 [119, 121] without use of the concept of a metric. In particular, the vanishing of the (affine) curvature tensor was given as a necessary and sufficient condition for the existence of such fields of parallel vectors ([121], p. 19). As concerns the geometry of "Fernparallelism", it is a special case of a space with Euclidean connection introduced by Cartan in 1922/23 [31, 30, 32]. Pais let Einstein "invent" and "discover" distant parallelism, and he states that Einstein "did not know that Cartan was already aware of this geometry" ([241], pp. 344–345). However, when Einstein published his contributions in June 1928 [84, 83], Cartan had to remind him that a paper of his introducing the concept of torsion had "appeared at the moment at which you gave your talks at the Collège de France.
I even remember having tried, at Hadamard's place, to give you the most simple example of a Riemannian space with Fernparallelismus by taking a sphere and by treating as parallels two vectors forming the same angle with the meridians going through their two origins: the corresponding geodesics are the rhumb lines." (letter of Cartan to Einstein on 8 May 1929; cf. [50], p. 4) This remark refers to Einstein's visit in Paris in March/April 1922. Einstein believed he had found the idea of distant parallelism by himself. In this regard, Pais may be correct. Every researcher knows how an idea, heard or read someplace, can subconsciously work for years and then surface all of a sudden as his or her own new idea without the slightest remembrance as to where it came from. It seems that this happened also to Einstein. It is quite understandable that he did not remember what had happened six years earlier; perhaps he had not even fully followed then what Cartan wanted to explain to him. In any case, Einstein's motivation came from the wish to generalise Riemannian geometry such that the electromagnetic field could be geometrized: "Therefore, the endeavour of the theoreticians is directed toward finding natural generalisations of, or supplements to, Riemannian geometry in the hope of reaching a logical building in which all physical field concepts are unified by one single viewpoint." ([84], p. 217) In an investigation concerning spaces with simply transitive continuous groups, Eisenhart already in 1925 had found the connection for a manifold with distant parallelism given 3 years later by Einstein [118]. He also had taken up Cartan's idea and, in 1926, produced a joint paper with Cartan on "Riemannian geometries admitting absolute parallelism" [40], and Cartan also had written about absolute parallelism in Riemannian spaces [33].
Einstein, of course, could not have been expected to react to these and other purely mathematical papers by Cartan and Schouten, focussed on group manifolds as spaces with torsion and vanishing curvature ([41, 34], pp. 50–54). No physical application had been envisaged by these two mathematicians. Nevertheless, this story of distant parallelism raises the question of whether Einstein kept up on mathematical developments himself, or whether, at the least, he demanded of his assistants to read the mathematical literature. The fact that he did not use the name "torsion" in his publications, to be described in the following section, speaks against his familiarity with the mathematical papers. In the area of unified field theory including spinor theory, Einstein just loved to do the mathematics himself, irrespective of whether others had done it before – and done so even better (cf. Section …). Anyhow, in his response (Einstein to Cartan on 10 May 1929, [50], p. 10), Einstein admitted Cartan's priority and referred also to Eisenhart's book of 1927 and to Weitzenböck's paper [393]. He excused himself by pointing to Weitzenböck's similar omission of Cartan's papers from his 14 references. In his answer, Cartan found it curious that Weitzenböck was silent because "[...] he indicates in his bibliography a note by Bortolotti in which he several times refers to my papers." (Cartan to Einstein on 15 May 1929; [50], p. 14) The embarrassing situation was solved by Einstein's suggestion that he had submitted a comprehensive paper on the subject to Zeitschrift für Physik, and he invited Cartan to add his description of the historical record in another paper (Einstein to Cartan on 10 May 1929). After Cartan had sent his historical review to Einstein on 24 May 1929, the latter answered three months later: "I am now writing up the work for the Mathematische Annalen and should like to add yours [...].
The publication should appear in the Mathematische Annalen because, at present, only the mathematical implications are explored and not their applications to physics." (letter of Einstein to Cartan on 25 August 1929 [50, 35, 89]) In his article, Cartan made it very clear that it was not Weitzenböck who had introduced the concept of distant parallelism, as valuable as his results were after the concept had become known. Also, he took Einstein's treatment of Fernparallelism as a special case of his more general considerations. Interestingly, he permitted himself to interpret the physical meaning of geometrical structures: "Let us say simply that mechanical phenomena are of a purely affine nature whereas electromagnetic phenomena are essentially metric; therefore it is rather natural to try to represent the electromagnetic potential by a not purely affine vector." ([35], p. 703) Einstein explained: "In particular, I learned from Mr. Weitzenböck and Mr. Cartan that the treatment of continua of the species which is of import here, is not really new. [...] In any case, what is most important in the paper, and new in any case, is the discovery of the simplest field laws that can be imposed on a Riemannian manifold with Fernparallelismus." ([89], p. 685) For Einstein, the attraction of his theory consisted in the following: "For me, the great attraction of the theory presented here lies in its unity and in the allowed highly overdetermined field variables. I also could show that the field equations, in first approximation, lead to equations that correspond to the Newton–Poisson theory of gravitation and to Maxwell's theory. Nevertheless, I still am far from being able to claim that the derived equations have a physical meaning. The reason is that I could not derive the equations of motion for the corpuscles." ([89], p.
697) The split, in first approximation, of the tetrad field … .

6.4.2 How the word spread

Einstein in 1929 really seemed to have believed that he was on a good track because, in this and the following year, he published at least 9 articles on distant parallelism and unified field theory before switching off his interest. The press did its best to spread the word: On 2 February 1929, in its column News and Views, the respected British science journal Nature reported: "For some time it has been rumoured that Prof. Einstein has been about to publish the results of a protracted investigation into the possibility of generalising the theory of relativity so as to include the phenomena of electromagnetism. It is now announced that he has submitted to the Prussian Academy of Sciences a short paper in which the laws of gravitation and of electromagnetism are expressed in a single statement." Nature then went on to quote from an interview of Einstein of 26 January 1929 in a newspaper, the Daily Chronicle. According to the newspaper, among other statements Einstein made, in his wonderful language, was the following: "Now, but only now, we know that the force which moves electrons in their ellipses about the nuclei of atoms is the same force which moves our earth in its annual course about the sun, and it is the same force which brings to us the rays of light and heat which make life possible upon this planet." [2] Whether Einstein used this as a metaphorical language or whether he at this time still believed that the system "nucleus and electrons" is dominated by the electromagnetic force, remains open. The paper announced by Nature is Einstein's "Zur einheitlichen Feldtheorie", published by the Prussian Academy on 30 January 1929 [88]. A thousand copies of this paper had been sold within 3 days, so the presiding secretary of the Academy ordered the printing of a second thousand. Normally, only a hundred copies were printed ([183], Dokument Nr. 49, p. 136).
On 4 February 1929, The Times (of London) published the translation of an article by Einstein, “written as an explanation of his thesis for readers who do not possess an expert knowledge of mathematics”. This article then became reprinted in March by the British astronomy journal The Observatory [86 ]. In it, Einstein first gave a historical sketch leading up to the introduction of relativity theory, and then described the method that guided him to the new theory of distant parallelism. In fact, the only formulas appearing are the line elements for two-dimensional Riemannian and Euclidean space. At the end, by one figure, Einstein tried to convey to the reader what consequence a Euclidean geometry with torsion would have – without using that name. His closing sentences are: “Which are the simplest and most natural conditions to which a continuum of this kind can be subjected? The answer to this question which I have attempted to give in a new paper yields unitary field laws for gravitation and electromagnetism.” ([86 ], p. 118) A few months later in that year, again in Nature, the mathematician H. T. H. Piaggio gave an exposition for the general reader of “Einstein’s and other Unitary Field Theories”. He was a bit more explicit than Einstein in his article for the educated general reader. However, he was careful to end it with a warning: “Of course the ultimate test of the theory must be by experiment. It may succeed in predicting some interaction between gravitation and electromagnetism which can be confirmed by observation. On the other hand, it may be only a ‘graph’ and so outside the ken of the ordinary physicist.” ([258], p. 879) The use of the concept “graph” had its origin in Eddington’s interpretation of his and other peoples’ unified field theories to be only graphs of the world; the true geometry remained the Riemannian geometry underlying Einstein’s general relativity. 
Even the French-Belgian writer and poet Maurice Maeterlinck had heard of Einstein's latest achievement in the area of unified field theory. In his poetic presentation of the universe "La grande féerie" we find his remark: "Einstein, in his last publications, comments to which are still to appear, again brings us mathematical formulae which are applicable to both gravitation and electricity, as if these two forces seemingly governing the universe were identical and subject to the same law. If this were true it would be impossible to calculate the consequences." ([214], p. 68)

6.4.3 Einstein's research papers

We are dealing here with Einstein's, and Einstein and Mayer's joint, papers on distant parallelism in the reports of the Berlin Academy and in Mathematische Annalen, which were taken as the starting point by other researchers following suit with further calculations. Indeed, there was a lot of work to do, only in part because Einstein, from one paper to the next, had changed his field equations.

In his first note [84], dynamics was absent; Einstein made geometrical considerations his main theme: the introduction of a local n-bein field h^k_μ such that the metric is composed as g_{μν} = h^k_μ h_{kν}, where summation over the bein index k is understood. "Fernparallelism" now means that two vectors at distant points count as parallel if the components referred to the local n-beins are the same; joint rotations of each local n-bein leave the theory unchanged. If parallel transport of a tangent vector is defined through the connection Δ^μ_{νσ} = h^μ_k ∂_σ h^k_ν of Equation (161), an immediate consequence is that the covariant derivative of each bein-vector vanishes, ∂_σ h^k_μ − Δ^ρ_{μσ} h^k_ρ = 0 (Equation (162)), by use of Equation (161). Also, the metric is covariantly constant (Equation (163)). Neither fact is mentioned in Einstein's note. Also, no reference is given to Eisenhart's paper of 1925 [118], in which the connection (161) had been given (Equation (3.5) on p. 248 of [118]), as noted above, its metric-compatibility shown, and the vanishing of the curvature tensor concluded. The (Riemannian) curvature tensor calculated from Equation (161) turns out to vanish. As Einstein noted, by (160) also the usual Riemannian connection can be obtained from the n-beins (cf. de Donder's paper [48]).
From Equation (161), obviously the torsion tensor, the antisymmetric part of the connection (cf. Equation (21)), is non-vanishing. Einstein denoted it by Λ^μ_{νσ}. At the end of the note Einstein compared his new approach to Weyl's and Riemann's:

• WEYL: comparison at a distance neither of lengths nor of directions;
• RIEMANN: comparison at a distance of lengths but not of directions;
• Present theory: comparison at a distance of both lengths and directions.

In his second note [83], Einstein departed from a Lagrangian quadratic in the torsion tensor. He introduced a vector built from the trace of the torsion tensor; however, Einstein did not use the names "electrical potential" or "electrical field". He then showed that in a first-order approximation, starting from bein fields deviating only weakly from unit beins, the Einstein vacuum field equations and Maxwell's equations surface. To do so he replaced the full bein field by its weak-field expansion. Einstein concluded that "The separation of the gravitational and the electromagnetic field appears artificial in this theory. [...] Furthermore, it is remarkable that, according to this theory, the electromagnetic field does not enter the field equations quadratically." ([83], p. 6) In a postscript, Einstein noted that he could have obtained similar results by using the second scalar invariant of his previous note, and that there was a certain indeterminacy as to the choice of the Lagrangian. This shows clearly that the ambiguity in the choice of a Lagrangian had bothered Einstein. Thus, in his third note, he looked for a more reassuring way of deriving field equations [88]. He left aside the Hamiltonian principle and started from identities for the torsion tensor, following from the vanishing of the curvature tensor. He thus arrived at the identity given by Equation (29), i.e., (Einstein's equation (3), p.
5; his conventions differ slightly from the ones used here). By defining the quantity introduced in Equation (164), Einstein obtained another identity, in which the covariant divergence refers to the connection components of Equation (161). For the proof, he used the formula for the covariant derivative of a vector density given in Equation (16), which, for the divergence, reduces to an ordinary coordinate divergence. The second identity used by Einstein follows with the help of Equation (27) for vanishing curvature (Einstein’s equation (5), p. 5). As we have seen in Section 2, if he had read it, Einstein could have taken these identities from Schouten’s book of 1924 [300]. By the replacement made in Equation (165), the final form of the second identity results. Einstein first wrote down a preliminary set of field equations from which, in first approximation, both the gravitational vacuum field equations and Maxwell’s equations follow in an appropriate limit. Einstein, after some manipulations, postulated the 20 exact field equations, among which 8 identities hold. Einstein seems to have sensed that the average reader might be able to follow his path to the postulated field equations only with difficulty. Therefore, in a postscript, he tried to clear up his procedure: “The field equations suggested in this paper may be characterised with regard to other such possible ones in the following way. By staying close to the identity (167), it has been accomplished that not only 16, but 20 independent equations can be imposed on the 16 quantities.” ([88], p. 8) He still was not entirely sure that the theory was physically acceptable: “A deeper investigation of the consequences of the field equations (170) will have to show whether the Riemannian metric, together with distant parallelism, really gives an adequate representation of the physical qualities of space.” In his second paper of 1929, the fourth in the series in the Berlin Academy, Einstein returned to the Hamiltonian principle because his collaborators Lanczos and Müntz had doubted the validity of the field equations of his previous publication [88] on grounds of their unproven compatibility.
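The balance behind the postscript’s “not only 16, but 20 independent equations [...] on the 16 quantities” can be made explicit (the arithmetic below is our reading of the standard counting argument, not a quotation from the paper):

```latex
% 16 field quantities: the components of the 4-bein in four dimensions,
4 \times 4 = 16 .
% 20 postulated field equations, minus the 8 identities holding among
% them, leave
20 - 8 = 12
% independent conditions; 4 of the 16 bein components can be fixed by
% the choice of coordinates, so
16 - 4 = 12
% components remain to be determined, and the balance closes.
```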
In the meantime, however, he had found a Lagrangian such that the compatibility-problem disappeared. He restricted the many constructive possibilities for the Lagrangian by formal requirements and, in the end, had to perform a limit in the constants appearing in it. In a Festschrift for his former teacher and colleague in Zürich, A. Stodola, Einstein summed up what he had reached. He exchanged the definitions of the invariants used before and arrived at field equations “[...] that coincide in first approximation with the known laws for the gravitational and electromagnetic field [...]” with the proviso that the specialisation of the constants in the Lagrangian occur only after the variation, not before. Also, together with Müntz, he had shown that for an uncharged mass point the Schwarzschild solution is again obtained [87]. Einstein’s next publication was the note preceding Cartan’s paper in Mathematische Annalen [89]. He presented it as an introduction suited for anyone who knew general relativity. It is here that he first mentioned Equations (162) and (163). Most importantly, he gave a new set of field equations not derived from a variational principle; they are his Equations (171) and (172). As Cartan remarked, Equation (172) expresses conservation of torsion under parallel transport: “In fact, in the new theory of Mr. Einstein, it is natural to call a universe homogeneous if the torsion vectors that are associated to two parallel surface elements are parallel themselves; this means that parallel transport conserves torsion.” ([35], p. 703) From Equation (173), with the help of Equations (171, 172), Einstein wrote down two more identities. One of them he had obtained from Cartan: “But I am very grateful to you for the identity which, astonishingly, had escaped me. [...] In a new presentation in the Mathematische Annalen, I used this identity while taking the liberty of pointing to you as its source.” (letter of Einstein to Cartan from 18 December 1929, Document X of [50], p.
72) In order to show that his field equations were compatible, he counted the number of equations, identities, and field quantities, and then showed that this balance left the system consistent. The changes in his approach that Einstein continuously made must have been hard on those who tried to follow him in their scientific work. One of them, Zaycoff, tried to make the best out of them: “Recently, A. Einstein ([89]), following investigations by E. Cartan ([35]), has considerably modified his teleparallelism theory such that former shortcomings (connected only to the physical identifications) vanish by themselves.” ([433], p. 410) In November 1929, Einstein gave two lectures at the Institut Henri Poincaré in Paris, which had been opened one year earlier in order to strengthen theoretical physics in France ([14], pp. 263–272). They were published in 1930 as the first article in the new journal of this institute [92]. On 23 pages he clearly and leisurely outlined his theory of distant parallelism and the progress he had made. As to references given, first Cartan’s name is mentioned in the text: “It is not for the first time that such spaces are envisaged. From a purely mathematical point of view they were studied previously. M. Cartan was so amiable as to write a note for the Mathematische Annalen exposing the various phases in the formal development of these concepts.” ([92], p. 4) Note that Einstein does not say that it was Cartan who first “envisaged” these spaces before. Later in the paper, he comes closer to the point: “This type of space had been envisaged before me by mathematicians, notably by WEITZENBÖCK, EISENHART et CARTAN [...].” [92] Again, he held back in his support of Cartan’s priority claim. Some of the material in the paper overlaps with results from other publications [85, 90, 93]. The counting of independent variables, field equations, and identities is repeated from Einstein’s paper in Mathematische Annalen [89]. Part of the identities Einstein had found previously.
He now presented a derivation of the remaining three identities by a calculation of two pages’ length. The field equations are the same as in [89]; the proof of their compatibility takes up, in a slightly modified form, the one communicated by Einstein to Cartan in a letter of 18 December 1929 ([92], p. 20). It is reproduced also in [90]. Interestingly, right after Einstein’s article in the institute’s journal, a paper of C. G. Darwin, “On the wave theory of matter”, is printed, and, in the same first volume, a report of Max Born on “Some problems in Quantum Mechanics”. Thus, French readers were kept up-to-date on progress made by both parties – whether they worked on classical field theory or quantum theory [45, 21]. A. Proca, who had attended Einstein’s lectures, gave an exposition of them in a journal of his native Romania. He was quite enthusiastic about Einstein’s new theory: “A great step forward has been made in the pursuit of this total synthesis of phenomena which is, right or wrong, the ideal of physicists. [...] the splendid effort brought about by Einstein permits us to hope that the last theoretical difficulties will be vanquished, and that we soon will compare the consequences of the theory with [our] experience, the great stepping stone of all creations of the mind.” [260, 261] Einstein’s next paper in the Berlin Academy, in which he reverts to his original notation, contains the correction of a mistake ([90], p. 18). The mistake concerned an assumption about the way a certain quantity depends on torsion. Einstein now found it better “to keep the concept of divergence, defined by contraction of the extension of a tensor” and not use the covariant derivative introduced in [88]. Then Einstein presented the same field equations as in his paper in Mathematische Annalen, which he subjected to a list of formal demands. While these demands had been sufficient to lead uniquely to the gravitational field equations (with cosmological constant) of general relativity, in the teleparallelism theory a great deal of ambiguity remained.
Sixteen field equations were needed which, due to covariance, induced four identities. “Therefore equations must be postulated among which identical relations are holding. The higher the number of equations (and consequently also the number of identities among them), the more precise and stronger than mere determinism is the content; accordingly, the theory is the more valuable, if it is also consistent with the empirical facts.” ([90], p. 21) He then gave a proof of the compatibility of his field equations: “The proof of the compatibility, as given in my paper in the Mathematische Annalen, has been somewhat simplified due to a communication which I owe to a letter of Mr. CARTAN (cf. §3, [16]).” The reader had to make out for himself what Cartan’s contribution really was. In linear approximation, i.e., for a 4-bein deviating only weakly from its flat-space values, Einstein obtained d’Alembert’s equation for both the symmetric and the antisymmetric part of the deviation. Einstein’s next note of one and a half pages contained a mathematical result within teleparallelism theory: From any tensor with an antisymmetric pair of indices a vector with vanishing divergence can be derived [93]. In order to test the field equations by exhibiting an exact solution, a simple case would be to take a spherically symmetric, asymptotically Minkowskian 4-bein. This is what Einstein and Mayer did, except with the additional assumption of space-reflection symmetry [106]. Then the 4-bein contains three arbitrary functions of one radial parameter. From the field equations (171, 172), Einstein and Mayer derived consequences involving a singularity. Einstein and Mayer do not take this physically unacceptable situation as an argument against the theory, because the equations of motion for such singularities could not be derived from the field equations as in general relativity. Again, the continuing wish to describe elementary particles by singularity-free exact solutions is stressed. Possibly, W. F. G.
Swann of the Bartol Research Foundation in Swarthmore had this paper in mind when he, in April 1930, in a brief description of Einstein’s latest publications, told his readers: “It now appears that Einstein has succeeded in working out the consequences of his general law of gravity and electromagnetism for two special cases just as Newton succeeded in working out the consequences of his law for several special cases. [...] It is hoped that the present solutions obtained by Einstein, or if not these, then others which may later evolve, will suggest some experiments by which the theory may be tested.” ([339], p. 391) Two days before the paper by Einstein and Mayer was published by the Berlin Academy, Einstein wrote to his friend Solovine: “My field theory is progressing well. Cartan has already worked with it. I myself work with a mathematician (S. Mayer from Vienna), a marvelous chap [...].” ([98], p. 56) The mentioning of Cartan resulted from the intensive correspondence of both scientists between December 1929 and February 1930: About a dozen letters were exchanged which, sometimes, contained long calculations [50] (cf. Section 6.4.6). In an address given at the University of Nottingham, England, on 6 June 1930, Einstein also must have commented on the exact solutions found and on his program concerning the elementary particles. A report of this address stated about Einstein’s program: “The problem is nearly solved; and to the first approximations he gets laws of gravitation and electro-magnetics. He does not, however, regard this as sufficient, though those laws may come out. He still wants to have the motions of ordinary particles to come out quite naturally. [The program] has been solved for what he calls the ‘quasi-statical motions’, but he also wants to derive elements of matter (electrons and protons) out of the metric structure of space.” ([91], p.
610) With his “assistant” Walther Mayer, Einstein then embarked on a very technical, systematic study of compatible field equations for distant parallelism [108]. In addition to the assumptions (1), (2), (3) for allowable field equations given above, further restrictions were made:
1. the field equations must contain the first derivatives of the field variable only quadratically;
2. the identities for the left hand sides of the field equations must be linear in these left hand sides;
3. torsion must occur only linearly in the field equations.
For the field equations, an ansatz was made that contained 8 new constants to be determined. By inserting the ansatz of Equation (175) into Equation (176), Einstein and Mayer reduced the problem to the determination of 10 constants by 20 algebraic equations in a lengthy calculation. In the end, four different types of compatible field equations for the teleparallelism theory remained: “Two of these are (non-trivial) generalisations of the original gravitational field equations, one of them being known already as a consequence of the Hamiltonian principle. The remaining two types are denoted in the paper by [...].” With no further restraining principles at hand, this ambiguity in the choice of field equations must have convinced Einstein that the theory of distant parallelism could no longer be upheld as a good candidate for the unified field theory he was looking for, irrespective of the possible physical content. Once again, he dropped the subject and moved on to the next. While aboard a ship back to Europe from the United States, Einstein, on 21 March 1932, wrote to Cartan: “[...] In any case, I have now completely given up the method of distant parallelism. It seems that this structure has nothing to do with the true character of space [...].” ([50], p. 209) What Cartan might have felt, after investing the forty-odd pages of his calculations printed in Debever’s book, is unknown. However, the correspondence on the subject came to an end in May 1930 with a last letter by Cartan.
6.4.4 Reactions I: Mostly critical
About half a year after Einstein’s two papers on distant parallelism of 1928 had appeared, Reichenbach, who always tended to defend Einstein against criticism, classified the new theory [268] according to the lines set out in his book [267] as “having already its precisely fixed logical position in the edifice of Weyl–Eddington geometry” ([267], p. 683). He mentioned as a possible generalization an idea of Einstein’s, in which the operation of parallel transport might be taken as integrable not with regard to length but with regard to direction: “a generalisation which already has been conceived by Einstein as I learned from him” ([267], p. 687). As concerns parallelism at a distance, Reichenbach was not enthusiastic about Einstein’s new approach: “[...] it is the aim of Einstein’s new theory to find such an entanglement between gravitation and electricity that it splits into the separate equations of the existing theory only in first approximation; in higher approximation, however, a mutual influence of both fields is brought in, which, possibly, leads to an understanding of questions unanswered up to now as [is the case] for the quantum riddle. But this aim seems to be in reach only if a direct physical interpretation of the operation of transport, even of the immediate field quantities, is given up. From the geometrical point of view, such a path [of approach] must seem very unsatisfactory; its justification will only be reached if the mentioned link does encompass more physical facts than have been brought into it for building it up.” ([267], p. 689) A first reaction from a competing colleague came from Eddington, who, on 23 February 1929, gave a cautious but distinct review of Einstein’s first three publications on distant parallelism [84, 83, 88] in Nature.
After having explained the theory and having pointed out the differences to his own affine unified field theory of 1921, he confessed: “For my own part I cannot readily give up the affine picture, where gravitational and electric quantities supplement one another as belonging respectively to the symmetrical and antisymmetrical features of world measurement; it is difficult to imagine a neater kind of dovetailing. Perhaps one who believes that Weyl’s theory and its affine generalisation afford considerable enlightenment, may be excused for doubting whether the new theory offers sufficient inducement to make an exchange.” [62] Weyl was the next unhappy colleague; in connection with the redefinition of his gauge idea he remarked (in April/May 1929): “[...] my approach is radically different, because I reject distant parallelism and keep to Einstein’s general relativity. [...] Various reasons hold me back from believing in parallelism at a distance. First, my mathematical intuition a priori resists accepting such an artificial geometry; I have difficulties understanding the might who has frozen into rigid togetherness the local frames in different events in their twisted positions. Two weighty physical arguments join in [...] only by this loosening [of the relationship between the local frames] does the existing gauge-invariance become intelligible. Second, the possibility to rotate the frames independently, in the different events, [...] is equivalent to the symmetry of the energy-momentum tensor, or to the validity of the conservation law for angular momentum.” ([407], pp. 330–332) As usual, Pauli was less than enthusiastic; he expressed his discontent in a letter to Hermann Weyl of 26 August 1929: “First let me emphasize that side of the matter about which I fully agree with you: Your approach for incorporating gravitation into Dirac’s theory of the spinning electron [...] I am as adverse with regard to Fernparallelismus as you are [...]
(And here I must do justice to your work in physics. When you made your [gauge] theory, Einstein rightly criticised and scolded you. Now the hour of revenge has come for you, now Einstein has made the blunder of distant parallelism which is nothing but mathematics unrelated to physics, now you may scold [him].)” ([251], pp. 518–519) Another confession of Pauli’s went to Paul Ehrenfest: “By the way, I now no longer believe in one syllable of teleparallelism; Einstein seems to have been abandoned by the dear Lord.” (Pauli to Ehrenfest 29 September 1929; [251], p. 524) Pauli’s remark shows the importance of ideology in this field: As long as no empirical basis exists, beliefs, hopes, expectations, and rationally guided guesses abound. Pauli’s letter to Weyl from 1 July 1929 used non-standard language (in terms of science): “I share completely your skeptical position with regard to Einstein’s 4-bein geometry. During the Easter holidays I have visited Einstein in Berlin and found his opinion on modern quantum theory reactionary.” ([251], p. 506) While the wealth of empirical data supporting Heisenberg’s and Schrödinger’s quantum theory would have justified the use of a word like “uninformed” or even “not up to date” for the description of Einstein’s position, use of “reactionary” meant a definite devaluation. Einstein had sent a further exposition of his new theory to the Mathematische Annalen in August 1928. When Pauli received its proof sheets from Einstein, he had no reservations about criticising him directly and bluntly: “I thank you so much for letting be sent to me your new paper from the Mathematische Annalen [89], which gives such a comfortable and beautiful review of the mathematical properties of a continuum with Riemannian metric and distant parallelism [...]. Unlike what I told you in spring, from the point of view of quantum theory, now an argument in favour of distant parallelism can no longer be put forward [...]. It just remains [...]
to congratulate you (or should I rather say condole you?) that you have passed over to the mathematicians. Also, I am not so naive as to believe that you would change your opinion because of whatever criticism. But I would bet with you that, at the latest after one year, you will have given up the entire distant parallelism in the same way as you have given up the affine theory earlier. And, I do not wish to provoke you to contradict me by continuing this letter, because I do not want to delay the approach of this natural end of the theory of distant parallelism.” (letter to Einstein of 19 December 1929; [251], pp. 526–527) Einstein answered on 24 December 1929: “Your letter is quite amusing, but your statement seems rather superficial to me. Only someone who is certain of seeing through the unity of natural forces in the right way ought to write in this way. As long as the mathematical consequences have not been thought through properly, it is not at all justified to make a negative judgement. [...] That the system of equations established by myself forms a consequential relationship with the space structure taken, you would probably accept by a deeper study – more so because, in the meantime, the proof of the compatibility of the equations could be simplified.” ([251], p. 582) Before he had written to Einstein, Pauli, with lesser reservations, complained vis-à-vis Jordan: “Einstein is said to have poured out, at the Berlin colloquium, horrible nonsense about new parallelism at a distance. The mere fact that his equations are not in the least similar to Maxwell’s theory is employed by him as an argument that they are somehow related to quantum theory. With such rubbish he may impress only American journalists, not even American physicists, not to speak of European physicists.” (letter of 30 November 1929, [251], p.
525) Of course, Pauli’s spells of rudeness are well known; in this particular case they might have been induced by Einstein’s unfounded hopes for eventually replacing the Schrödinger–Heisenberg–Dirac quantum mechanics by one of his unified field theories. The question of the compatibility of the field equations played a very important role because Einstein hoped to gain, eventually, the quantum laws from the extra equations (cf. his extended correspondence on the subject with Cartan [50] and Section 6.4.6). That Pauli had been right (except for the time span envisaged by him) was expressly admitted by Einstein when he had given up his unified field theory based on distant parallelism in 1931 (see letter of Einstein to Pauli on 22 January 1932; cf. [241], p. 347). Born’s voice was the lonely approving one (Born to Einstein on 23 September 1929): “Your report on progress in the theory of Fernparallelism did interest me very much, particularly because the new field equations are of unique simplicity. Until now, I had been uncomfortable with the fact that, aside from the tremendously simple and transparent geometry, the field theory did look so very involved.” ([154], p. 307) Born, however, was not yet a player in unified field theory, and it turned out that Einstein’s theory of distant parallelism became as involved as the previous ones. Einstein’s collaborator Lanczos even wrote a review article about distant parallelism with the title “The new field theory of Einstein” [201]. In it, Lanczos cautiously offers some criticism after having made enough bows before Einstein: “To be critical with regard to the creation of a man who has long since obtained a place in eternity does not suit us and is far from us. Not as a criticism but only as an impression do we point out why the new field theory does not house the same degree of conviction, nor the amount of inner consistency and suggestive necessity in which the former theory excelled. [...]
The metric is a sufficient basis for the construction of geometry, and perhaps the idea of complementing RIEMANNian geometry by distant parallelism would not occur if there were the wish to implant something new into RIEMANNian geometry in order to geometrically interpret electromagnetism.” ([201], p. 126) When Pauli reviewed this review, he started with the scathing remark: “It is indeed a courageous deed of the editors to accept an essay on a new field theory of Einstein for the ‘Results in the Exact Sciences’ [literal translation of the journal’s title]. His never-ending gift for invention, his persistent energy in the pursuit of a fixed aim in recent years surprise us with, on the average, one such theory per year. Psychologically interesting is that the author normally considers his actual theory for a while as the ‘definite solution’. Hence, [...] one could cry out: ‘Einstein’s new field theory is dead. Long live Einstein’s new field theory!’ ” ([248], p. 186) For the remainder, Pauli engaged in a discussion with the philosophical background of Lanczos and criticised his support for Mie’s theory of matter of 1913, according to which “the atomism of electricity and matter, fully separated from the existence of the quantum of action, is to be reduced to the properties of (singularity-free) eigen-solutions of still-to-be-found nonlinear differential equations for the field variables.” Thus, Pauli lightly pushed aside as untenable one of Einstein’s repeated motivations and hoped-for tests for his unified field theories. Lanczos, being dissatisfied with Einstein’s distant parallelism, then tried to explain “electromagnetism as a natural property of Riemannian geometry” by starting from a Lagrangian quadratic in the components of the Ricci tensor, at which he had arrived independently [202]. (For Lanczos see J. Stachel’s essay “Lanczos’ early contributions to relativity and his relation to Einstein” in [330], pp. 499–518.)
6.4.5 Reactions II: Further research on distant parallelism
The first reactions to Einstein’s papers came quickly. On 29 October 1928, de Donder suggested a generalisation by using two metric tensors: a space-time metric and an auxiliary second one. In place of Einstein’s connection (161), defined through the 4-bein only, he took a modified connection in which the dot-symbol denotes covariant derivation with the help of the Levi-Civita connection; in the appropriate special case, Einstein’s original connection is obtained [48]. Another application of Einstein’s new theory came from Eugen Wigner in Berlin, whose paper showing that the tetrads in distant-parallelism theory permitted a generally covariant formulation of “Dirac’s equation for the spinning electron” was received by Zeitschrift für Physik on 29 December 1928 [419]. He did point out that “up to now, grave difficulties stood in the way of a general relativistic generalisation of Dirac’s theory” and referred to a paper of Tetrode [344]. Tetrode, about a week after Einstein’s first paper on distant parallelism had appeared on 14 June 1928, had given just such a generally relativistic formulation of Dirac’s equation through coordinate-dependent Gamma-matrices; he also wrote down a (symmetric) energy-momentum tensor for the Dirac field and the conservation laws. However, he had kept the metric conformally flat. For the matrix-valued 4-vector of Gamma-matrices, the 4-beins of Einstein’s teleparallelism theory were used. Nevertheless, nowhere did he claim that the Dirac equation could only be formulated covariantly with the help of Einstein’s new theory. Zaycoff of the Physics Institute of the University in Sofia also followed Einstein’s work closely. Half a year after Einstein’s first two notes on distant parallelism had appeared [84, 83], i.e., shortly before Christmas 1928, Zaycoff sent off his first paper on the subject, whose arrival in Berlin was acknowledged only after the holidays on 13 January 1929 [429].
In it he described the mathematical formalism of distant parallelism theory, gave the identity (42), and calculated the new curvature scalar in terms of the Ricci scalar and of torsion. He then took a more general Lagrangian than Einstein and obtained the variational derivatives in linear and, in a simple example, also in second approximation. In his presentation, he used both the teleparallel and the Levi-Civita connections. His second and third papers came quickly after Einstein’s third note of January 1929 [88], and thus had to take into account that Einstein had dropped the derivation of the field equations from a variational principle. In his second paper, Zaycoff followed Einstein’s method and gave a somewhat simpler derivation of the field equations. An exact, complicated wave equation followed, in which the connection of Equation (161) enters. In linear approximation, the Einstein vacuum and the vacuum Maxwell equations are obtained, supplemented by the homogeneous wave equation for a vector field [431]. In his third note, Zaycoff criticised Einstein “for not having shown, in his most recent publication, whether his constraints on the world metric be permissible.” He then derived additional exact compatibility conditions for Einstein’s field equations to hold; according to him, their effect would show up only in second approximation [430]. In his fourth publication Zaycoff came back to Einstein’s Hamiltonian principle and rederived for himself Einstein’s results. He also defended Einstein against critical remarks by Eddington [62] and Schouten [304], although Schouten, in his paper, had mentioned neither Einstein nor his teleparallelism theory, but only gave a geometrical interpretation of the torsion vector in a geometry with semi-symmetric connection. Zaycoff praised Einstein’s teleparallelism theory in words reminding me of the creation of the world as described in Genesis: “We may say that A. Einstein built a plane world which is no longer waste like the Euclidean space-time-world of H.
Minkowski, but, on the contrary, contains in it all that we usually call physical reality.” ([428], p. 724) A conference on theoretical physics at the Ukrainian Physical-Technical Institute in Charkow in May 1929 brought together many German and Russian physicists. Unified field theory, quantum mechanics, and the new quantum field theory were all discussed. Einstein’s former calculational assistant Grommer, now on his own in Minsk, in a brief contribution stressed Einstein’s path for getting an overdetermined system of differential equations: Vary with regard to the 16 bein-quantities but consider only the 10 metrical components as relevant. He claimed that Einstein had used only the antisymmetric part of a certain tensor (in Einstein’s first note), although Einstein never had made use of this anti-symmetry. Grommer stressed Einstein’s program of deriving the equations of motion from the overdetermined system of field equations: “If the law of motion of elementary particles could be derived from the overdetermined field equation, one could imagine that this law of motion permit only discrete orbits, in the sense of quantum theory.” ([153], p. 646) Levi-Civita also had sent a paper on distant parallelism to Einstein, who had it appear in the reports of the Berlin Academy [207]. Levi-Civita introduced a set of four congruences of curves that intersect each other at right angles, called their tangent vectors the beins, and rewrote the metric of Equation (160) in terms of them. He also employed the Ricci rotation coefficients defined by these congruences. He then introduced an electromagnetic field tensor and chose as his field equations the Einstein–Maxwell equations projected on a rigidly fixed “world-lattice” of 4-beins. He used the time until the printing was done to give a short preview of his paper in Nature [206].
About a month before Levi-Civita’s paper was issued by the Berlin Academy, Fock and Ivanenko [135] had had the same idea and compared Einstein’s notation with the one used by Levi-Civita in his monograph on the absolute differential calculus [205]: “Einstein’s new gravitational theory is intimately linked to the known theory of the orthogonal congruences of curves due to Ricci. In order to ease a comparison between both theories, we may bring together here the notations of Ricci and Levi-Civita [...] with those of Einstein.” A little after the publication of Levi-Civita’s papers, Heinrich Mandel embarked on an application of Kaluza’s five-dimensional approach to Einstein’s theory of distant parallelism [218]. Einstein had sent him the corrected proof sheets of his fourth paper [85]. The basic idea was to consider the points of space-time as corresponding to the lines of a congruence in a five-dimensional space; the space-time interval is defined as the distance of two lines of the congruence. Mandel did not identify the torsion vector with the electromagnetic 4-potential, but introduced a covariant derivative along the lines of the Einstein–Mayer 5-vector formalism (cf. Section 6.3.2). Before Einstein dropped the subject of distant parallelism, many more papers were written by a baker’s dozen of physicists. Some were more interested in the geometrical foundations, in exact solutions to the field equations, or in the variational principle. One of those hunting for exact solutions was G. C. McVittie, who referred to Einstein’s paper [88]: “[...] we test whether the new equations proposed by Einstein are satisfied. It is shown that the new equations are satisfied to the first order but not exactly.” He then goes on to find a rigorous solution and exhibits its metric explicitly [225]. He also wrote a paper on exact axially symmetric solutions of Einstein’s teleparallelism theory [226]. Tamm and Leontowich treated the field equations given in Einstein’s fourth paper on distant parallelism [85].
They found that these field equations did not have a spherically symmetric solution corresponding to a charged point particle at rest. The corresponding solution for the uncharged particle was the same as in general relativity, i.e., Schwarzschild’s solution. Tamm and Leontowich therefore guessed that a charged point particle at rest would lead to an axially-symmetric solution, and pointed to the spin for support of this hypothesis [342, 342].

Wiener and Vallarta were after particular exact solutions of Einstein’s field equations in the teleparallelism theory. By referring to Einstein’s first two papers concerning distant parallelism, they set out to show that the “[...] electromagnetic field is incompatible in the new Einstein theory with the assumption of static spherical symmetry and symmetry of the past and the future. [...] the new Einstein theory lacks at present all experimental confirmation.” In footnote 4, they added: “Since writing this paper the authors have learned from Dr. H. Müntz that the new Einstein field equations of the 1929 paper do not yield the vanishing of the gravitational field in the case of spherical symmetry and time symmetry. In this case he has been able to obtain results checking the observed perihelion of mercury.” ([416], p. 356)

Müntz is mentioned in [88, 85]. In his paper “On unified field theory” of January 1929, Einstein acknowledges work of a Mr. Müntz: “I am pleased to dutifully thank Mr. Dr. H. Müntz for the laborious exact calculation of the centrally-symmetric problem based on the Hamiltonian principle; by the results of this investigation I was led to the discovery of the road following here.” Again, two months later in his next paper, “Unified field theory and Hamiltonian principle”, Einstein remarks: “Mr. Lanczos and Müntz have raised doubt about the compatibility of the field equations obtained in the previous paper [...]”, and, by deriving field equations from a Lagrangian, shows that the objection can be overcome.
In his paper of July 1929, the physicist Zaycoff gave some details: “Solutions of the field equations on the basis of the original formulation of unified field theory to first approximation for the spherically symmetric case were already obtained by Müntz.” In the same paper, he states: “I did not see the papers of Lanczos and Müntz.” Even before this, in the same year, in a footnote to the paper of Wiener and Vallarta, we read: “Since writing this paper the authors have learned from Dr. H. Müntz that the new Einstein field equations of the 1929 paper do not yield the vanishing of the gravitational field in the case of spherical symmetry and time symmetry. In this case he has been able to obtain results checking the observed perihelion of mercury.” The latter remark refers to a constant query Pauli had about what would happen, within unified field theory, to the gravitational effects in the planetary system, described so well by general relativity.

Unfortunately, as noted by Meyer Salkover of the Mathematics Department in Cincinnati, the calculations by Wiener and Vallarta were erroneous; if corrected, one finds that the Schwarzschild metric is indeed a solution of Einstein’s field equations. In the second of his two brief notes, Salkover succeeded in obtaining the most general, spherically symmetric solution [288, 287]. This is admitted by the authors in their second paper, in which they present a new calculation.
“In a previous paper the authors of the present note have treated the case of a spherically symmetrical statical field, and stated the conclusions: first, that under Einstein’s definition of the electromagnetic potential an electromagnetic field is incompatible with the assumption of static spherical symmetry and symmetry of the past and future; second, that if one uses the Hamiltonian suggested in Einstein’s second 1928 paper, the electromagnetic potential vanishes and the gravitational field also vanishes.” And they hasten to reassure the reader: “None of the conclusions of the previous paper are vitiated by this investigation, although some of the final formulas are supplemented by an additional term.” ([417], p. 802)

Vallarta also wrote a paper by himself ([358], p. 784) whose abstract reads: “In recent papers Wiener and the author have determined the tensors of Einstein’s unified theory of electricity and gravitation under the assumption of static spherical symmetry and of symmetry of past and future. It was there shown that the field equations suggested in Einstein’s second 1928 paper [83] lead in this case to a vanishing gravitational field. The purpose of this paper is to investigate, for the same case, the nature of the gravitational field obtained from the field equations suggested by Einstein in his first 1929 paper [88].” He also claims “that Wiener has shown in a paper to be published elsewhere soon that the Schwarzschild solution satisfies exactly the field equations suggested by Einstein in his second 1929 paper ([85]).”

Finally, Rosen and Vallarta [283] got together for a systematic investigation of the spherically symmetric, static field in Einstein’s unified field theory of the electromagnetic and gravitational fields [93]. Further papers on Einstein’s teleparallelism theory were written in Italy by Bortolotti in Cagliari [22, 23, 25, 24], and by Palatini [242]. In Princeton, people did not sleep either. In 1930 and 1931, T. Y.
Thomas wrote a series of six papers on distant parallelism and unified field theory. He followed Einstein’s example by also changing his field equations from the first to the second publication. After that, he concentrated on more mathematical problems, such as proving an existence theorem for the Cauchy–Kowalewsky type of equations in unified field theory, by studying the characteristics and bi-characteristics, the characteristic Cauchy problem, and Huygens’ principle.

T. Y. Thomas described the contents of his first paper as follows: “In a number of notes in the Berlin Sitzungsberichte followed by a revised account in the Mathematische Annalen, Einstein has attempted to develop a unified theory of the gravitational and electromagnetic field by introducing into the scheme of Riemannian geometry the possibility of distant parallelism. [...] we are led to the construction of a system of wave equations as the equations of the combined gravitational and electromagnetic field. This system is composed of 16 equations for the determination of the 16 quantities [...].” [350] This looks as if he had introduced four vector potentials for the electromagnetic field, and this, in fact, T. Y. Thomas does: “the components [...]”.

T. Y. Thomas changed his field equations on the grounds that he wanted them to give a conservation law: “This latter point of view is made the basis for the construction of a system of field equations in the present note – and the equations so obtained differ from those of note I only by the appearance of terms quadratic in the quantities [...].” [351] The third paper contains a remark as to the content of the concept “unified field theory”: “It is the objective of the present note to deduce the general existence theorem of the Cauchy–Kowalewsky type for the system of field equations of the unified field theory. [...] Einstein (Sitzber.
1930, 18–23) has pointed out that the vanishing of the invariant means the physical unification of gravitation and electricity in the sense that these concepts become but different manifestations of the same fundamental entity – provided, of course, that the theory shows itself to be tenable as a theory in agreement with experience.” [352] In his three further installments, T. Y. Thomas moved away from unified field theory to the discussion of mathematical details of the theory he had advanced [353, 354, 355].

Unhindered by constraints from physical experience, mathematicians try to play with possibilities. Thus, it was only consequential that Valentin Bargmann in Berlin, after Riemann and Weyl, now engaged in looking at a geometry allowing a comparison “at a distance” of directions but not of lengths, i.e., only of the quotient of vector components [5]. In the framework of a purely affine theory he obtained a necessary and sufficient condition for this geometry, expressed through the homothetic curvature. Then Bargmann linked his approach to Einstein’s first note on distant parallelism [84, 89].

Schouten and van Dantzig also used a geometry built on complex numbers, and on Hermitian forms: “[...] we were able to show that the metric geometry used by Einstein in his most recent approach to relativity theory [84, 83] coincides with the geometry of a Hermitian tensor of highest rank, which is real on the real axis and satisfies certain differential equations.” ([313], p. 319) The Hermitian tensor referred to leads to a linear integrable connection that, in the special case that it “is real in the real”, coincides with Einstein’s teleparallel connection.

Distant parallelism was revived four decades later within the framework of Poincaré gauge theory; the corresponding theories will be treated in the second part of this review.
6.4.6 Overdetermination and compatibility of systems of differential equations

In the course of Einstein’s thinking about distant parallelism, his ideas about overdetermined systems of differential equations gradually changed. At first, the possibility of gaining hold on the paths of elementary particles – described as singular worldlines of point particles – was central. He combined this with the idea of quantisation, although Planck’s constant did not appear in his equations; for Einstein, discretisation and quantisation must have been too close to bother about a fundamental constant. Then, after the richer constructive possibilities (e.g., for a Lagrangian) became obvious, a principle for finding the correct field equations was needed. As such, “overdetermination” was brought into the game by Einstein: “The demand for the existence of an ‘overdetermined’ system of equations does provide us with the means for the discovery of the field equations.” ([90], p. 21)

It seems that Einstein, during his visit to Paris in November 1929, had talked to Cartan about his problem of finding the right field equations and proving their compatibility. Starting in December of 1929 and extending over the next year, an intensive correspondence on this subject was carried on by both men [50]. On 3 December 1929, Cartan sent Einstein a letter of five pages with a mathematical note of 12 pages appended. In it he referred to his theory of partial differential equations, deterministic and “in involution,” which covered the type of field equations Einstein was using, and put forward a further field equation. He clarified the mathematical point of view but used concepts such as “degree of generality” and “generality index” not familiar to Einstein. Cartan wrote: “I was not able to completely solve the problem of determining if there are systems of 22 equations other than yours and the one I just indicated [...] and it still astonishes me that you managed to find your 22 equations!
There are other possibilities giving rise to richer geometrical schemes while remaining deterministic. First, one can take a system of 15 equations [...]. Finally, maybe there are also solutions with 16 equations; but the study of this case leads to calculations as complicated as in the case of 22 equations, and I was not fortunate enough to come across a possible system [...].” ([50], pp. 25–26)

Einstein’s rapid answer of 9 December 1929 referred to the letter only; he had not been able to study Cartan’s note. As the further correspondence shows, he had difficulties in following Cartan: “For you have exactly that which I lack: an enviable facility in mathematics. Your explanation of the indice de généralité I have not yet fully understood, at least not the proof. I beg you to send me those of your papers from which I can properly study the theory.” ([50], p. 73)

It would be a task of its own to closely study this correspondence; in our context, it suffices to note that Cartan wrote a special note “[...] edited such that I took the point of view of systems of partial differential equations and not, as in my papers, the point of view of systems of equations for total differentials [...]”, which was better suited to physicists. Through this note, Einstein came to understand Cartan’s theory of systems in involution: “I have read your manuscript, and this enthusiastically. Now, everything is clear to me. Previously, my assistant Prof. Müntz and I had sought something similar – but we were unsuccessful.” ([50], pp. 87, 94)

In the correspondence, Einstein made it very clear that he considered Maxwell’s equations only as an approximation for weak fields, because they did not allow for non-singular exact solutions approaching zero at spacelike infinity. “It now is my conviction that for rigorous field theories to be taken seriously everywhere a complete absence of singularities of the field must be demanded.
This probably will restrict the free choice of solutions in a region in a far-reaching way – more strongly than the restrictions corresponding to your degrees of determination.” ([50], p. 92) Although Einstein was grateful for Cartan’s help, he abandoned the geometry with distant parallelism.
Calculating Orbital Speeds

Ask the students to explain how each of the three scientists changed our thoughts about planetary motion.

Area of the ellipse:
Area of an ellipse = pi * a * b
pi ≈ 3.1416 (in a spreadsheet, use =PI())
Area of a triangle = 1/2 * base * height
a = 1/2 the major axis
b = 1/2 the minor axis
closest perigee = 359861 km
farthest apogee = 405948 km
Time for one orbit of the Earth = 27.32166 days

The sun tugs on the moon. This distorts the moon's orbit. About once each year the moon's apogee points toward the sun. This makes the apogee larger than at other times.

Area of an ellipse = pi * a * b. You need to find pi, a and b, then substitute them into the equation. a = 1/2 the length of the major axis. The diagrams show you how to find the length of the major axis.
major axis = apogee + perigee
major axis = (405948 km + 359861 km)
a = (apogee + perigee)/2

To find b you need to remember how we drew the ellipse. You will also need to construct a triangle. The corners of the triangle are: a focus, the intersection of the major and minor axes, and the intersection of b and the ellipse.

How can you find the length of the line segment from the focus to the intersection of a and b? Think about perigee and 1/2 the major axis.
Line segment = a - perigee
Line segment = 382904.5 km - 359861 km
Line segment = 23043.5 km

How can you find the length from the focus to the intersection of b and the ellipse? Think about the string we used to draw the ellipse. First calculate the total length of the string:
length of string = 2 * apogee
When the string runs from a focus along the major axis to the far point on the major axis, both sides of the string touch. This is the apogee. Since the string covers this distance two times, its length is 2 * the apogee. Second, subtract the length of string that connects the foci. The distance between the foci is the apogee - perigee.
2*apogee - (apogee - perigee) = 2*apogee - apogee + perigee = apogee + perigee

Now you have the length from one focus to the ellipse and back to the other focus. From this you can find the length from a focus to the ellipse: (apogee + perigee)/2.

Use the Pythagorean theorem to find the length of the line b. The hypotenuse is the focus-to-ellipse length, (apogee + perigee)/2, and the base is the focus-to-center length, (apogee - perigee)/2:
((apogee + perigee)/2)^2 = ((apogee - perigee)/2)^2 + b^2
((apogee + perigee)/2)^2 - ((apogee - perigee)/2)^2 = b^2
(((apogee + perigee)/2)^2 - ((apogee - perigee)/2)^2)^(1/2) = b
^2 means squared; ^(1/2) means find the square root.

Now you know a and b and pi. Substitute them into the equation above and you will find the area of the orbit of the moon.

Hint: When the students construct their spreadsheet, they should make cells devoted to each of these calculations. When a calculation needs a value that has already been calculated, simply refer to the cell that contains that value, i.e., if we calculate a in cell B3 and we calculate b in D5, then Area of an ellipse = pi * B3 * D5.

4.59772E+11 km^2 = 459,772,000,000 km^2

Your answer will vary depending on the value you use for pi. Different calculators and spreadsheets have different built-in values for pi. It is important for students to understand that the tools we use affect the answers we get. It isn't that one is right and another is wrong. In science, unlike math, all measurements and numbers have some error or uncertainty. We need to take that into consideration when we think about our answer.

Remember Law 2: A line joining a planet/comet and the Sun sweeps out equal areas in equal intervals of time. This means that each hour the area swept out is the same. You know the total area. You know the number of hours it takes the Moon to go around the Earth. You can calculate the area swept out each hour.
Lunar cycle = 27.32166 days
Lunar cycle = 27.32166 days * 24 hours/day
Lunar cycle = 655.71984 hours
Area / hour = 459,772,000,000 km^2 / 655.71984 hours

Now you can find the speed for these hourly movements.
You need to remember the area of a triangle: Area = 1/2 * base * height. The height is the distance from the earth to the moon. The base is the part of the ellipse that the moon traverses. For our purposes we will assume that that is a straight line. You know the area. Use the area and the height to find the base.
Area = 1/2 * base * height
2 * Area / height = base
2 * 701172077.6 km^2/hr / 405948 km = 3454.5 km/hr at apogee
2 * 701172077.6 km^2/hr / 359861 km = 3896.9 km/hr at perigee
Since the base was traversed in one hour, it is the distance the moon travels per hour, i.e., its speed in km per hour. Do this calculation for the shortest (perigee) and longest (apogee) heights. You now have the Moon's fastest and slowest speeds.
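The whole worked calculation above can be replayed in a short script. A minimal sketch, using only the figures quoted in the text:

```python
import math

# Moon's orbit figures from the text
perigee = 359861.0            # km, closest approach
apogee = 405948.0             # km, farthest distance
period_hours = 27.32166 * 24  # one lunar orbit, in hours

a = (apogee + perigee) / 2    # semi-major axis (half the major axis)
c = a - perigee               # focus-to-center distance
b = math.sqrt(a**2 - c**2)    # semi-minor axis, via the Pythagorean theorem

area = math.pi * a * b               # area of the elliptical orbit
area_per_hour = area / period_hours  # Kepler's second law: constant per hour

# speed = base of the hourly triangle = 2 * (area per hour) / height
speed_at_apogee = 2 * area_per_hour / apogee    # slowest, ~3454.5 km/hr
speed_at_perigee = 2 * area_per_hour / perigee  # fastest, ~3896.9 km/hr
```

With these inputs the area comes out near 4.598E+11 km^2 and the speeds near 3454.5 km/hr (apogee) and 3896.9 km/hr (perigee), matching the hand calculation to within rounding of pi.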
Auburn, MA ACT Tutor Find an Auburn, MA ACT Tutor ...I have been a part-time online business, economics, history, law, science, social sciences, and writing professor and tutor for graduate students in more than 60 countries around the world during the last nine years. One of my tutees won the Texty award for writing the best new physics textbook ... 55 Subjects: including ACT Math, reading, English, writing ...I have helped many students achieve 200 point gains and have had many students obtain perfect scores. I enjoy teaching this subject and can help anyone improve. The GED tests a variety of skills and knowledge that a High School graduate should have learned. 30 Subjects: including ACT Math, reading, English, writing ...I have also helped students prepare for the GED, SAT and MCAS tests in mathematics. All this experience has taught me that all students can learn math and that I have (and continue to develop) excellent skills in convincing students of their abilities and helping students succeed and even excel ... 14 Subjects: including ACT Math, calculus, C, linear algebra ...I do assign weekly homework, as it is critical for students to practice the strategies they learn and feel comfortable with them so they will use them during the test. Of the hundreds of students I have worked with over the years, most see score increases following tutoring (nearly all students ... 26 Subjects: including ACT Math, English, linear algebra, algebra 1 ...I have applied mathematics to many science courses, including physics and chemistry. I have experience tutoring middle school and early high school math. I have some experience in tutoring high school honors physics and college physics one, a mechanics-based course. 
16 Subjects: including ACT Math, Spanish, English, chemistry
Re: [Axiom-developer] Curiosities with Axiom mathematical structures

From: Gabriel Dos Reis
Subject: Re: [Axiom-developer] Curiosities with Axiom mathematical structures
Date: 14 Mar 2006 01:46:21 +0100

Martin Rubey <address@hidden> writes:

| > Imagine you could ask "if M has Monoid(+)..." or "if M has
| > Monoid(*)...". According to which returns true, you would then go on and
| > use (m1 +$M m2) or (m1 *$M m2). Well, but M might have a monoid structure
| > with respect to the operation ".". Do you really also want to ask "if M has
| > Monoid(.)..."? That soon becomes impractical.

| No, this is not an issue about practicality.

| Look at it this way: Suppose "M has Monoid" returns "true". How do you know
| then with respect to which operation M is a monoid? What can you do with the
| information that M is a monoid with respect to some operation?

If we had access to the Aldor compiler sources, it would be helpful to experiment with these ideas.
-- Gaby
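The distinction being argued (that "M has Monoid" says nothing until the operation is named) can be sketched in Python; everything below is illustrative, not Axiom or Aldor syntax:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Monoid(Generic[T]):
    """A monoid is a carrier type together with an operation and its identity."""
    op: Callable[[T, T], T]
    identity: T

# The integers carry (at least) two distinct monoid structures:
int_add = Monoid(op=lambda a, b: a + b, identity=0)
int_mul = Monoid(op=lambda a, b: a * b, identity=1)

# Asking only "is int a monoid?" cannot distinguish these; the operation must
# travel with the structure, as in "M has Monoid(+)" vs "M has Monoid(*)".
def fold(m: Monoid[T], xs: list) -> T:
    result = m.identity
    for x in xs:
        result = m.op(result, x)
    return result
```

For example, fold(int_add, [1, 2, 3, 4]) and fold(int_mul, [1, 2, 3, 4]) give different answers from the same carrier type, which is exactly why the query needs the operation.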
Linear Regression

Linear Regression in Excel

Regression lines can be used as a way of visually depicting the relationship between the independent (x) and dependent (y) variables in the graph. A straight line depicts a linear trend in the data (i.e., the equation describing the line is of first order, for example y = 3x + 4; there are no squared or cubed variables in such an equation). A curved line represents a trend described by a higher-order equation (e.g., y = 2x^2 + 5x - 8). It is important that you are able to defend your use of either a straight or curved regression line. That is, the theory underlying your lab should indicate whether the relationship of the independent and dependent variables should be linear or non-linear.

In addition to visually depicting the trend in the data with a regression line, you can also calculate the equation of the regression line. This equation can either be seen in a dialogue box and/or shown on your graph. How well this equation describes the data (the 'fit') is expressed as a correlation coefficient, R^2 (R-squared). The closer R^2 is to 1.00, the better the fit. This too can be calculated and displayed in the graph.

The data below was first introduced in the basic graphing module and is from a chemistry lab investigating light absorption by solutions. Beer's Law states that there is a linear relationship between the concentration of a colored compound in solution and the light absorption of the solution. This fact can be used to calculate the concentration of unknown solutions, given their absorption readings. This is done by fitting a linear regression line to the collected data.

Creating an initial scatter plot

Before you can create a regression line, a graph must be produced from the data. Traditionally, this would be a scatter plot. This module will start with the scatter plot created in the basic graphing module. Figure 1.
Creating a Linear Regression Line (Trendline)

When the chart window is highlighted, you can add a regression line to the chart by choosing Chart > Add trendline... A dialogue box appears (Figure 2). Select the Linear Trend/Regression type (Figure 2). Choose the Options tab and select Display equation on chart (Figure 3). Click OK to close the dialogue. The chart now displays the regression line (Figure 4).

Using the Regression Equation to Calculate Concentrations

The linear equation shown on the chart represents the relationship between Concentration (x) and Absorbance (y) for the compound in solution. The regression line can be considered an acceptable estimation of the true relationship between concentration and absorbance. We have been given the absorbance readings for two solutions of unknown concentration. Using the linear equation (labeled A in Figure 5), a spreadsheet cell can have an equation associated with it to do the calculation for us. We have a value for y (Absorbance) and need to solve for x (Concentration). Below are the algebraic equations working out this calculation:
y = 2071.9x + 0.0111
y - 0.0111 = 2071.9x
(y - 0.0111) / 2071.9 = x
Now we have to convert this final equation into an equation in a spreadsheet cell. The equation associated with the spreadsheet cell will look like what is labeled C in Figure 5. 'B12' in the equation represents y (the absorbance of the unknown). The solution for x (Concentration) is then displayed in cell 'C12'.

□ Highlight a spreadsheet cell to hold 'x', the result of the final equation (cell C12, labeled B in Figure 5).
□ Click in the equation area (labeled C, Figure 5)
□ Type an equal sign and then a parenthesis
□ Click in the cell representing 'y' in your equation (cell B12 in Figure 5) to put this cell label in your equation
□ Finish typing your equation
Note: If your equation differs from the one in this example, use your equation.

Duplicate your equation for the other unknown.
□ Highlight the original equation cell (C12 in Figure 5) and the cell below it (C13)
□ Choose Edit > Fill > Down
Note that if you highlight your new equation in C13, the reference to cell B12 has also incremented to cell B13 (Figure 5).

Using the R-squared coefficient calculation to estimate fit

Double-click on the trendline, choose the Options tab in the Format Trendlines dialogue box, and check the Display r-squared value on chart box. Your graph should now look like Figure 6. Note the value of R-squared on the graph. The closer to 1.0, the better the fit of the regression line. That is, the closer the line passes through all of the points.

Now let's look at another set of data done for this lab (Figure 7). Notice that the equation for the regression line is different than it was in Figure 6. A different equation would calculate a different concentration for the two unknowns. Which regression line better represents the 'true' relationship between absorption and concentration? Look at how closely the regression line passes through the points in Figure 7. Does it seem to 'fit' as well as it does in Figure 6? No, and the R-squared value confirms this. It is 0.873 in Figure 7 compared to 0.995 in Figure 6. Though we would need to take into account information such as the number of data points collected to make an accurate statistical prediction as to how well the regression line represents the true relationship, we can generally say that Figure 6 gives a better representation of the relationship of absorption and concentration.
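Outside Excel, the same fit and R-squared calculation can be reproduced in a few lines. A minimal sketch with NumPy; the calibration data below is made up for illustration, while the slope and intercept match the equation quoted above:

```python
import numpy as np

# Hypothetical calibration data lying on the line y = 2071.9x + 0.0111
concentration = np.array([0.0, 1e-4, 2e-4, 3e-4, 4e-4])  # assumed units: mol/L
absorbance = 2071.9 * concentration + 0.0111

# Least-squares fit of a first-order (linear) trendline
slope, intercept = np.polyfit(concentration, absorbance, 1)

# R-squared: the fraction of the variance in y explained by the line
predicted = slope * concentration + intercept
ss_res = np.sum((absorbance - predicted) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Invert y = m*x + b to get the concentration of an unknown from its reading
def concentration_of(reading: float) -> float:
    return (reading - intercept) / slope
```

Because the sample data lies exactly on the line, R-squared comes out essentially 1.0 here; with real measurements it drops as scatter increases, as in the 0.873 vs 0.995 comparison above.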
Bidirectional Iterator

Predecrement: --i
Precondition: i is dereferenceable or past-the-end. There exists a dereferenceable iterator j such that i == ++j.
Semantics: i is modified to point to the previous element.
Postcondition: i is dereferenceable. &i = &--i. If i == j, then --i == --j. If j is dereferenceable and i == ++j, then --i == j.
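As an illustration only (C++ expresses this with operator--), the predecrement axioms can be modeled with a small Python cursor class:

```python
class BidirectionalCursor:
    """Minimal model of a bidirectional iterator over a sequence.

    Positions 0..len(seq)-1 are dereferenceable; position len(seq) is past-the-end.
    """
    def __init__(self, seq, pos=0):
        self.seq = seq
        self.pos = pos

    def increment(self):   # models ++i
        self.pos += 1
        return self

    def decrement(self):   # models --i; precondition: some j exists with i == ++j
        self.pos -= 1
        return self

    def deref(self):       # models *i; precondition: i is dereferenceable
        return self.seq[self.pos]

    def __eq__(self, other):
        return self.seq is other.seq and self.pos == other.pos
```

Returning self from decrement mirrors the &i = &--i requirement (the operation mutates the iterator in place), and decrementing the result of an increment recovers the original position, as the last axiom demands.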
The Structure of Graphics

"Graphics and Sound" discusses how to use functions like Plot and ListPlot to plot graphs of functions and data. This tutorial discusses how Mathematica represents such graphics, and how you can program Mathematica to create more complicated images. The basic idea is that Mathematica represents all graphics in terms of a collection of graphics primitives. The primitives are objects like Point, Line, and Polygon, which represent elements of a graphical image, as well as directives such as RGBColor and Thickness.

Each complete piece of graphics in Mathematica is represented as a graphics object. There are several different kinds of graphics objects, corresponding to different types of graphics. Each kind of graphics object has a definite head that identifies its type.

Graphics objects in Mathematica.

The functions like Plot and ListPlot discussed in "The Structure of Graphics and Sound" all work by building up Mathematica graphics objects and then displaying them. You can create other kinds of graphical images in Mathematica by building up your own graphics objects. Since graphics objects in Mathematica are just symbolic expressions, you can use all the standard Mathematica functions to manipulate them. Graphics objects are automatically formatted by the Mathematica front end as graphics upon output. Graphics may also be printed as a side effect using the Print command. Show can be used to change the options of an existing graphic or to combine multiple graphics.

Local and global ways to modify graphics.

Given a particular list of graphics primitives, Mathematica provides two basic mechanisms for modifying the final form of graphics you get. First, you can insert into the list of graphics primitives certain graphics directives, such as RGBColor, which modify the subsequent graphical elements in the list. In this way, you can specify how a particular set of graphical elements should be rendered.
By inserting graphics directives, you can specify how particular graphical elements should be rendered. Often, however, you want to make global modifications to the way a whole graphics object is rendered. You can do this using graphics options. You can specify graphics options in Show. As a result, it is straightforward to take a single graphics object and show it with many different choices of graphics options. Notice however that Show always returns the graphics objects it has displayed. If you specify graphics options in Show, then these options are automatically inserted into the graphics objects that Show returns. As a result, if you call Show again on the same objects, the same graphics options will be used, unless you explicitly specify other ones. Note that in all cases new options you specify will overwrite ones already there. Finding the options for a graphics object. Some graphics options can be used as options to visualization functions that generate graphics. Options which can take the right-hand side of Automatic are sometimes resolved into specific values by the visualization functions. Finding the complete form of a piece of graphics. When you use a graphics option such as Axes, the Mathematica front end automatically draws objects such as axes that you have requested. The objects are represented merely by the option values rather than by a specific list of graphics primitives. Sometimes, however, you may find it useful to represent these objects as the equivalent list of graphics primitives. The function FullGraphics gives the complete list of graphics primitives needed to generate a particular plot, without any options being used.
IPAM's program "Interactions between Analysis and Geometry" and John Pardon's talk on Hilbert-Smith conjecture for 3-manifolds

Posted by: matheuscmss | April 29, 2013

Two weeks ago, I was in Los Angeles to attend Workshop II: Dynamics of Groups and Rational Maps of the IPAM program Interactions between Analysis and Geometry. The workshop was very interesting in several aspects. First, the topics of the talks concerned different research specialities (as you can see from the schedule here), so that it was an excellent opportunity to learn about advances in other related areas. Secondly, the schedule gave sufficient free time so that we could talk to each other. Also, I was happy to meet new people that I knew previously only through their work (e.g., Alex Kontorovich and John Pardon). In particular, we had two free afternoons on Wednesday and Friday, and I certainly enjoyed both of them: on Wednesday Alex Eskin drove me to the beach and we spent a significant part of the afternoon talking to each other there, and on Friday I went to Getty Center with Sasha Bufetov, Ursula Hamenstadt, Pat Hooper, John Pardon, Federico Rodriguez-Hertz, John Smillie, and Anton Zorich, where I saw classical painters like Monet and Renoir, among others.

As usual, the talks were very nice (and they will be available at the IPAM website here in the near future), and hence I decided to transcribe in this post my notes of one of the talks, namely, John Pardon's talk on his solution of the Hilbert-Smith conjecture for 3-manifolds. Of course, any mistakes in what follows are entirely my responsibility.

1. Statement of Hilbert-Smith conjecture

In this section, we will quickly review some of the history behind the Hilbert-Smith conjecture.
For a more serious reading, we recommend consulting Terence Tao's notes on Hilbert's 5th problem (as well as his post here on the Hilbert-Smith conjecture). The 5th problem in the famous list of Hilbert's problems (stated in 1900) is the following conjecture.

Conjecture (Hilbert's 5th problem). Let ${G}$ be a locally Euclidean topological group. Then, ${G}$ has a (unique) Lie group structure.

After the works of Gleason (in 1951-1952), Yamabe (in 1953) and Montgomery-Zippin (in 1952), we know that the answer to Hilbert's 5th problem is yes. An important step towards the solution of Hilbert's 5th problem is the following theorem of Gleason and Yamabe:

Theorem 1 (Gleason-Yamabe) Let ${G}$ be a locally compact group. Let ${U\subset G}$ be an open set containing the identity element ${e\in G}$. Then, there exists ${K\subset U}$ compact and an open subgroup ${G'\subset G}$ such that ${G'/K}$ is a Lie group.

Remark 1 Another way of stating this theorem is: we have an exact sequence

$\displaystyle 1\rightarrow \lim\limits_{\leftarrow}\textrm{Lie}\rightarrow G\rightarrow \textrm{discrete}\rightarrow 1$

where ${\lim\limits_{\leftarrow}\textrm{Lie}}$ is the inverse limit of the family of Lie groups ${G^0/K}$ obtained by shrinking ${U}$ towards ${e}$ and "discrete" stands for a discrete space (cf. Terence Tao's comment below).

An important corollary of the Gleason-Yamabe theorem is:

Corollary 2 ${G}$ is NSS (no small subgroups) if and only if ${G}$ is a Lie group.

In other words, this corollary provides a criterion to recognize Lie groups (and thus it explains the interest of the Gleason-Yamabe theorem for Hilbert's 5th problem). Namely, if ${G}$ has no small subgroups (i.e., there exists a neighborhood of the identity element ${e\in G}$ containing no non-trivial subgroup of ${G}$), then ${G}$ is a Lie group.
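The "no small subgroups" criterion in Corollary 2 is precisely what the ${p}$-adic groups appearing below fail. As a quick numeric aside (not from the talk; the helper names are mine), the subgroups ${p^k\mathbb{Z}_p}$ shrink toward the identity in the ${p}$-adic metric ${|x|_p = p^{-v_p(x)}}$:

```python
# Illustration (assumption: plain integer arithmetic stands in for Z_p).
# v_p(n) is the p-adic valuation; |n|_p = p**(-v_p(n)) is the p-adic
# absolute value, so p, p**2, p**3, ... get closer and closer to 0.

def v_p(n, p):
    """p-adic valuation: the largest k with p**k dividing n (n != 0)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def abs_p(n, p):
    """p-adic absolute value |n|_p = p**(-v_p(n))."""
    return p ** -v_p(n, p)

print([abs_p(5 ** k, 5) for k in range(1, 5)])  # [0.2, 0.04, 0.008, 0.0016]
```

Every neighborhood of the identity in ${\mathbb{Z}_p}$ contains some subgroup ${p^k\mathbb{Z}_p}$, so ${\mathbb{Z}_p}$ has small subgroups and is not a Lie group; this is the mechanism exploited in the next paragraphs.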
As it turns out, the Hilbert-Smith conjecture is a generalization of Hilbert's 5th problem where one asks whether Lie groups are the sole (locally compact) groups to act faithfully on manifolds:

Conjecture (Hilbert-Smith). If a (locally compact) group ${G}$ acts faithfully on a manifold ${M}$ (i.e., we have an injective continuous homomorphism ${G\hookrightarrow \textrm{Homeo}(M)}$), then ${G}$ is a Lie group.

It is known (see, e.g., this post of Terence Tao for further explanations) that the Hilbert-Smith conjecture is equivalent to the following "${p}$-adic version":

Conjecture (Hilbert-Smith for ${p}$-adic actions). There is no injective continuous homomorphism ${\mathbb{Z}_p\hookrightarrow\textrm{Homeo}(M)}$ where ${\mathbb{Z}_p}$ is the group of ${p}$-adic integers.

It is not hard to convince oneself that the ${p}$-adic case is important for the Hilbert-Smith conjecture: let ${f}$ generate ${\mathbb{Z}_p}$ and, by abusing notation, denote by ${f:M\rightarrow M}$ the corresponding homeomorphism; then, the sequence ${f, f^p, f^{p^2}, \dots}$ converges to the identity map ${id}$. Of course, this occurs because ${p^k\cdot\mathbb{Z}_p\subset \mathbb{Z}_p}$ (for ${k\in\mathbb{N}}$) are small subgroups of ${\mathbb{Z}_p}$ (a non-Lie group!). In summary, the philosophy behind the Hilbert-Smith conjecture is that a compact group acting non-trivially on ${M}$ cannot act very close to ${id}$.

As it is nicely explained in this post of Terence Tao, the reduction of the Hilbert-Smith conjecture to the ${p}$-adic case uses the following result of M. Newman in 1931:

Theorem 3 (Newman) Let ${U\subset\mathbb{R}^n}$ be an open set containing the unit (Euclidean) closed ball ${B(1)}$. Suppose that ${U}$ has a ${\mathbb{Z}/p\mathbb{Z}}$-action whose orbits have diameter bounded by ${1/10}$, i.e., we have a homeomorphism ${T:U\rightarrow U}$ of period ${p}$ (that is, ${T^p=id}$) such that all ${T}$-orbits have diameter ${\leq 1/10}$. Then, the action is trivial.

Proof: A rough sketch of proof goes like this.
Assume that the action is not trivial and consider the map ${F:U\rightarrow\mathbb{R}^n}$, ${F(x):=\frac{1}{p}\sum\limits_{a\in\mathbb{Z}/p\mathbb{Z}}T^a(x)}$. From the fact that ${T}$-orbits have diameter ${\leq 1/10}$, one can check that ${F\simeq id}$ (${F}$ is homotopic to the identity map) and thus its degree ${deg(F)}$ is ${1}$. On the other hand, ${F}$ factors through a map ${\overline{F}:U/(\mathbb{Z}/p\mathbb{Z})\rightarrow\mathbb{R}^n}$ via the natural projection map ${\pi:U\rightarrow U/(\mathbb{Z}/p\mathbb{Z})}$. Since the projection ${\pi}$ has degree ${p}$, it follows that the degree of ${F}$ is ${p\cdot deg(\overline{F})}$, a multiple of ${p\geq 2}$. This contradiction "proves" the theorem. $\Box$

Using this type of argument, one can also show that:

Theorem 4 Let ${M}$ be a manifold with a metric. Then, there exists ${\varepsilon>0}$ such that, if ${G}$ is a compact Lie group acting on ${M}$ with orbits of diameter ${\leq\varepsilon}$, then the action is trivial.

After this short discussion of the reduction of the Hilbert-Smith conjecture to the ${p}$-adic case, let us close this section by pointing out that the general case of the Hilbert-Smith conjecture is open. Nevertheless, it was known to be true for low-dimensional manifolds, namely, Montgomery-Zippin showed in 1955 that the conjecture is true for ${1}$- and ${2}$-dimensional manifolds. In the next section, we will discuss the case of ${3}$-manifolds (after Pardon).

2. Hilbert-Smith conjecture in dimension 3

For the sake of this exposition, let ${M^3}$ be a connected, orientable, irreducible ${3}$-manifold with ${H_2(M)=\mathbb{Z}}$ and exactly two ends, e.g., ${M^3=\Sigma_g\times\mathbb{R}}$ where ${\Sigma_g}$ is a genus ${g\geq 1}$ surface. Using the orientation, it makes sense to call one end of ${M}$ the "${+}$ end" and the other end of ${M}$ the "${-}$ end".
The basic idea of Pardon to show the Hilbert-Smith conjecture in dimension ${3}$ is to reduce it to a ${2}$-dimensional problem (that one can handle using our knowledge of the mapping class group of surfaces). In this direction, let ${S(M)}$ be the set of surfaces ${F^2\subset M^3}$ such that ${F^2}$ is incompressible (i.e., ${\pi_1(F)}$ injects into ${\pi_1(M)}$) and separates the - end from the + end (i.e., ${[F]}$ generates ${H_2(M)}$) modulo isotopies (or, equivalently, homotopies).

Definition 5 Given two surfaces ${F, G}$, we say that ${[F]\leq [G]}$ if and only if there are surfaces ${F', G'}$ isotopic to ${F, G}$ (resp.) such that ${F'}$ is contained in the "- end" of ${M-G'}$, that is, ${F'}$ is to the "left" of ${G'}$.

For example, when ${M=\Sigma_g\times\mathbb{R}}$, the surface ${F=\Sigma_g\times\{0\}}$ is to the left of ${G=\Sigma_g\times\{1\}}$. The first important fact about ${\leq}$ is:

Lemma 6 ${(S(M),\leq)}$ is a partially ordered set.

The second (crucial) fact about ${\leq}$ is the following lemma suggested by Ian Agol to John Pardon:

Lemma 7 (Agol) ${(S(M),\leq)}$ is a lattice, i.e., for all ${F,G\in S(M)}$, the set

$\displaystyle X(F,G)=\{H\in S(M): [F]\leq [H], [G]\leq [H]\}$

has a least element.

Proof: A rough sketch of proof goes as follows. By looking at the figure below one sees that ${H_0=\partial((M-F)_+\cap (M-G)_-)}$ is a natural choice (where ${(M-F)_+}$, resp. ${(M-G)_-}$, is the "+ end", resp. "- end", of ${M-F}$, resp. ${M-G}$). However, this might not be a good choice because the intersection between ${F}$ and ${G}$ might be "artificially complicated" like in the figure below:

Here, J. Pardon overcomes this difficulty by using the following result of M. Freedman, J. Hass and P. Scott saying that if the representatives ${F}$ and ${G}$ of ${[F]}$ and ${[G]}$ minimize area, then the intersection between ${F}$ and ${G}$ is "minimal":

Theorem 8 (Freedman-Hass-Scott) Let ${F^2, G^2\subset M^3}$ be incompressible.
Assume that ${F}$ and ${G}$ are area-minimizing representatives of their homology classes. If ${F}$ and ${G}$ can be isotoped to be disjoint, then ${F}$ and ${G}$ are already disjoint unless they coincide. Using this result, Pardon shows that ${H_0=\partial((M-\overline{F})_+\cap (M-\overline{G})_-)}$ is a least element of ${X(F,G)}$ (where ${\overline{F}}$ and ${\overline{G}}$ are area-minimizing representatives of ${[F]}$ and ${[G]}$). $\Box$ Remark 2 It is implicit in Pardon’s arguments above that the topological and PL (piecewise linear) categories coincide for ${3}$-dimensional manifolds (that is, a topological ${3}$-manifold can be triangulated), a profound theorem of E. Moise. Of course, this result doesn’t extend to higher dimensions and this partly explains why Pardon’s arguments really are “${3}$-dimensional”. At this point, we are ready to give a sketch of proof of Pardon’s theorem: Theorem 9 (Pardon) There is no injective continuous homomorphism $\displaystyle \mathbb{Z}_p\hookrightarrow \textrm{Homeo}(M^3)$ Proof: Suppose by contradiction that there exists a ${\mathbb{Z}_p}$-action on ${M^3}$. Up to replacing ${\mathbb{Z}_p}$ by ${p^k\cdot\mathbb{Z}_p}$ for some large ${k\in\mathbb{N}}$, we can assume that this action is very close to the identity. Let ${K_0}$ be a handlebody of genus 2 and denote by ${K=\mathbb{Z}_pK_0}$ its orbit under the ${\mathbb{Z}_p}$-action. Consider now ${L_0}$ a small arc connecting two boundary points of ${K_0}$, denote by ${L=\mathbb{Z}_p L_0}$ its orbit under the ${\mathbb{Z}_p}$-action, and define ${Z:=K\cup L}$: Since ${\mathbb{Z}_p}$ acts very close to the identity: • (1) ${Z}$ looks like a handlebody of genus 2 in a coarse scale. On the other hand, Pardon shows that: • (2) ${\mathbb{Z}_p\hookrightarrow \check{H}^1(Z)}$ is non-trivial (here, ${\check{H}^1}$ stands for Cech cohomology). 
Now, let ${N_{\varepsilon}(Z)}$ be the ${\varepsilon}$-neighborhood of ${Z}$ (for some ${\mathbb{Z}_p}$-invariant metric) with ${\varepsilon>0}$ very small, and define ${U=N_{\varepsilon}(Z)-Z}$. By definition, ${\mathbb{Z}_p}$ acts on the (invariant) ${3}$-manifold ${U}$, and, a fortiori, on the set ${S(U)}$ of incompressible separating surfaces on ${U}$ modulo isotopies. Since ${S(U)}$ is a lattice (cf. Lemma 7) and a least element of ${S(U)}$ is fixed up to isotopy by the ${\mathbb{Z}_p}$ action, we get a homomorphism

$\displaystyle \mathbb{Z}_p\rightarrow MCG(F)$

from ${\mathbb{Z}_p}$ to the mapping class group ${MCG(F)}$ of ${F}$.

At this point, we get a contradiction as follows. By item (1), if we look at the projections to ${F}$ of the curves ${\alpha_1,\alpha_2,\beta_1,\beta_2}$ shown in the figure below, we see that ${H_1(F,\mathbb{Z})}$ contains a submodule fixed by ${\mathbb{Z}_p}$ where the intersection form is

$\displaystyle \left(\begin{array}{cccc} 0&-1&0&0\\1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{array}\right)=\left(\begin{array}{cc} 0&-1\\1&0\end{array}\right)^{\oplus 2} \ \ \ \ \ (1)$

However, by item (2), the action of ${\mathbb{Z}_p}$ on ${H_1(F,\mathbb{Z})}$ (via ${\mathbb{Z}_p\rightarrow MCG(F)}$) is non-trivial. Using this information, one can deduce the existence of a cyclic subgroup ${\mathbb{Z}/p\mathbb{Z}}$ of ${MCG(F)}$ (essentially the image of ${\mathbb{Z}_p}$ under the map ${\mathbb{Z}_p\rightarrow MCG(F)}$) such that the module ${H_1(F)^{\mathbb{Z}/p\mathbb{Z}}}$ fixed by ${\mathbb{Z}/p\mathbb{Z}\subset MCG(F)}$ has a submodule where the intersection form is given by equation (1). But, Pardon proves that this is a contradiction as follows.
Using Nielsen’s classification of cyclic subgroups of the mapping class group (saying that any ${\mathbb{Z}/p\mathbb{Z}\subset MCF(F)}$ is realized by a ${\mathbb{Z}/p\mathbb{Z}}$-action on ${F}$ by isometries in some metric), he shows that the intersection form on the module ${H_1(F)^{\mathbb{Z}/p\mathbb{Z}}}$ is: • either ${\left(\begin{array}{cc} 0&-p\\p&0\end{array}\right)^{\oplus g-1}\oplus\left(\begin{array}{cc} 0&-1\\1&0\end{array}\right)}$ • or ${\left(\begin{array}{cc} 0&-p\\p&0\end{array}\right)^{\oplus g}}$ where ${g}$ is the genus of ${F/(\mathbb{Z}/p\mathbb{Z})}$. Thus, there is no submodule of ${H_1(F)^{\mathbb{Z}/p\mathbb{Z}}}$ where the intersection form is given by equation (1) and this completes the sketch of proof of Hilbert-Smith conjecture for ${3}$-manifolds. $\Box$ Nice post! A minor nitpick: Remark 1 is not quite correct as stated, because one cannot get the inverse limit of Lie groups to be normal in general, so the quotient is a discrete _space_ but not a discrete _group_. (That said, it took me a while to come up with a concrete counterexample: one such counterexample is the group G of 4 x 4 unipotent upper triangular matrices with entries in the p-adics ${\bf Q}_p$. Any open normal subgroup G’ of this totally disconnected group must then contain all the matrices in G that vanish on the diagonal above the main diagonal, and any map from G’ to a Lie group will then annihilate the centre (the matrices which vanish on the two diagonals above the main diagonal), rendering it impossible for G’ to be the inverse limit of Lie groups.) By: Terence Tao on April 30, 2013 at 1:57 am • Thank you! I have changed Remark 1 accordingly and I added a reference to your comment. By: matheuscmss on April 30, 2013 at 6:07 am Posted in Conferences, expository, math.DS, Mathematics | Tags: Interactions between Analysis and Geometry 2013 at IPAM, IPAM, John Pardon, Los Angeles
Class Groups and Picard Groups of Group Rings and Orders

A co-publication of the AMS and CBMS.
CBMS Regional Conference Series in Mathematics, Number 26

The aim of the lectures is to provide an introduction to recent developments in the theory of class groups and Picard groups. The techniques employed come from the three main areas: algebraic number theory, representation theory of algebras and orders, and algebraic \(K\)-theory.

1976; 44 pp; softcover
Reprint/Revision History: reprinted 1986
ISBN-10: 0-8218-1676-4
ISBN-13: 978-0-8218-1676-9
List Price: US$25
Member Price: US$20
All Individuals: US$20
Order Code: CBMS/26

Contents:
• Introduction
• Explicit formulas
• Change of orders
• Class groups of \(p\)-groups
• Mayer-Vietoris sequences
• Calculations
• Survey of specific results
• Induction techniques
• Picard groups
• References
• Index
Organisers: John Ball (Oxford), David Chillingworth (Southampton), Mikhail Osipov (Strathclyde), Peter Palffy-Muhoray (Kent State) and Mark Warner (Cambridge)

Programme Theme

The programme will facilitate knowledge transfer, attract mathematicians to the field and help establish long-term collaborations which will enrich both groups of researchers. A wide spectrum of mathematical problems will be considered, related to the modeling of liquid crystals with varying detail and at different length scales. Models will range from continuum descriptions, where the symmetry group of the ordered phase plays a key role, to computer simulations on an atomistic level. One important set of open problems to be considered is the relationship between these different levels of modeling; for example, how one can make a rigorous passage from molecular/statistical descriptions to continuum theories and how the results of computer simulations can be used to test the validity of statistical and continuum models.

Emphasis will be placed on the use of symmetry, bifurcation and group theory approaches to study the relationship between the symmetry of individual molecules and interaction potentials, the symmetry of the resulting phases and the structure of the order parameter space. On the continuum level, the programme will address the dynamics of various liquid crystal phases and the systematic derivation of various equations of motion, and will aim to clarify the accuracy of different descriptions and their applicability to different liquid crystal systems. Special consideration will be given to the existence, uniqueness and regularity of the solutions of these equations, as well as the bifurcation and properties of equilibrium and periodic solutions, solitary waves, and dynamic switching, and to how well the equations describe phase transitions.
An important target is to find effective approximations which can be used to overcome the challenges posed by the complexity of liquid crystal phases with biaxial and more complicated order, which are becoming important in emerging applications. A strong effort will be made to identify and explore mathematical problems arising from novel liquid crystal systems, such as liquid crystal elastomers and nano-particle liquid crystal metamaterials. The programme will include four workshops covering all major aspects of the subject.
Is the Action of the mapping class group transitive on embedded arcs?

Let S be a surface of genus g with some parked points (n of them). Assume $n \geq 2$ and fix two of the marked points. Consider the set of embedded arcs going between these two special points. The group of diffeomorphisms preserving the marked points acts on this set. What are the orbits? How many are there? Equivalently, we can instead consider embedded arcs up to isotopy and consider the action of the mapping class group of the surface. I am specifically interested in two cases: genus zero with four marked points, and genus 1 with two marked points. However I think the general question is also interesting and it seems like the sort of thing that has been studied before. I just have no idea where to look.

at.algebraic-topology gt.geometric-topology mapping-class-groups dg.differential-geometry

I love the "parked points" :-D – Bruno Martelli Dec 20 '10 at 13:52

As long as the two points are distinct, there is only one orbit. If the two marked points coincide then there are finitely many orbits. – Sam Nead Dec 20 '10 at 14:24

@Sam Nead: Does that also mean that if I consider closed arcs (that doesn't intersect any of the points) then there are finitely many orbits? Where is a good reference for this kind of stuff? – Chris Schommer-Pries Dec 20 '10 at 14:57

Yes, it is true also for closed arcs. As a nice reference I would suggest the book of Farb and Margalit www.math.uchicago.edu/~margalit/mcg/mcgv406.pdf – Bruno Martelli Dec 20 '10 at 16:15

The newest version of that book is apparently 4.08 math.utah.edu/~margalit/primer – j.c. Dec 20 '10 at 19:02

1 Answer

There is only one orbit. Suppose you have two such arcs $\lambda, \lambda'$. Let $S_\lambda$ be obtained from $S$ by removing a regular neighborhood of $\lambda$: the regular neighborhood is a disc, containing the two marked points in its interior. The two surfaces $S_\lambda$ and $S_{\lambda'}$ so obtained have the same Euler characteristic and the same number (one) of boundary components, hence they are diffeomorphic. You can arrange the diffeomorphism so that it preserves the other marked points (simply move them by isotopy). You can then extend it to the removed discs and get a diffeomorphism of $S$ preserving the punctures and sending $\lambda$ to $\lambda'$.

Thanks! This is the answer I was hoping for. Another great example of how MO is the fastest substance known to mathematics. – Chris Schommer-Pries Dec 20 '10 at 14:52

The same argument shows that there are finitely many orbits in the general case, btw. Classification of surfaces is a wonderful thing. – Igor Rivin Dec 20 '10 at 15:14
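The accepted answer leans on the classification of compact orientable surfaces: the Euler characteristic together with the number of boundary components determines the surface, via $\chi = 2 - 2g - b$. A small bookkeeping sketch of that step (the helper names are mine, not from the thread):

```python
# Classification bookkeeping: a compact orientable surface with genus g
# and b boundary circles has chi = 2 - 2g - b, so (chi, b) recovers g.

def genus(chi, b):
    """Genus of the compact orientable surface with invariants (chi, b)."""
    g2 = 2 - chi - b
    assert g2 >= 0 and g2 % 2 == 0, "no orientable surface has these invariants"
    return g2 // 2

def remove_open_disc(chi, b):
    """Removing an open disc drops chi by 1 and adds one boundary circle."""
    return chi - 1, b + 1

# Two surfaces with equal chi and equal boundary count have equal genus,
# hence are diffeomorphic.  E.g. a torus (chi = 0, b = 0) minus a disc:
print(genus(*remove_open_disc(0, 0)))  # 1
```

In the answer, $S_\lambda$ and $S_{\lambda'}$ have the same $\chi$ and one boundary circle each, so this bookkeeping already forces them to be diffeomorphic.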
Calculate nearest number?

How to do this? Example: I have a couple of numbers: 16, 32, 48, 64, 80, 96 and so on, they increase by 16 each time. How can I change a random number to the nearest of these numbers? In this example the random number is 22 and 16 is the nearest number, so my answer is 16. A couple more examples: 33=32, 61=64, 17=16. How to do this in code in a simple way?

int i = 39;
i = (i + 8) & -16;

int NearestNumber(int Number)
{
    return ((Number + 8) / 16) * 16;
}

Do you want the random number to be one of 16, 32, 48 etc...? Then it's easier to generate it this way:

int Number = (rand() % RangeOfRandomNumbers) * 16;

RangeOfRandomNumbers is how many 16-tuples to choose from, ie: 5 gives 0, 16, 32, 48, 64.

Originally posted by Magos
Do you want the random number to be one of 16, 32, 48 etc...?
No, I don't need random numbers, thanks anyway :-)

Originally posted by Monster
int i = 39; i = (i+8)&-16;
This code seems nice for gaps of 16. Can I use this code idea even if the gap between numbers is something else than 16? Example: 20, 40, 60, 80 or 3, 6, 9, 12.

Ah, now I get what Monster was doing. He was cutting the last 4 bits in the number, making it a 16-tuple. -16 is the same bitmask as 240. Sorry, it only works for numbers like 2, 4, 8, 16, 32, 64 etc...

Yea, I realize now that my example of a gap of 16 was not good. The thing I need is a function that takes the gap as parameter 1 and the value to be calculated as parameter 2. In that way I can use any gap to calculate the random number, that is what I need. Anyone? :o

Just modify my function above to:

int NearestNumber(int Number, int Gap)
{
    return ((Number + Gap / 2) / Gap) * Gap;
}

Originally posted by Magos
Just modify my function above to: ...
I have tested the code now and it does exactly the thing I want (as far as I know). Thanks Magos :D

Hey, in simple lingo you could even try this: divide the random number by 16 and take the quotient and remainder. If the remainder is greater than 8, the nearest number is 16*(quotient+1); otherwise it is just 16*quotient. Give it a try and let me know if it works.
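For reference, Magos's rounding formula is easy to sanity-check outside C; a quick Python sketch (the helper name is mine; `//` matches C's truncating division for the non-negative inputs used in the thread):

```python
# Round a non-negative number to the nearest multiple of gap
# (ties round up), mirroring Magos's C function.
def nearest_multiple(number, gap):
    return ((number + gap // 2) // gap) * gap

# The examples from the original post:
print([nearest_multiple(n, 16) for n in (22, 33, 61, 17)])  # [16, 32, 64, 16]
print(nearest_multiple(50, 20))  # 60: works for any gap, not just powers of two
```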
This section contains long-range projections of the operations of the combined Old-Age and Survivors Insurance and Disability Insurance (OASI and DI) Trust Funds and of the Hospital Insurance (HI) Trust Fund, expressed as a percentage of gross domestic product (GDP). While expressing fund operations as a percentage of taxable payroll is the most useful approach for assessing the financial status of the programs (see section ), expressing them as a percentage of the total value of goods and services produced in the United States provides an additional perspective.

Table VI.F4 shows non-interest income, total cost, and the resulting balance of the combined OASI and DI Trust Funds, of the HI Trust Fund, and of the combined OASI, DI, and HI Trust Funds, expressed as percentages of GDP on the basis of each of the three alternative sets of assumptions. The table also contains estimates of GDP. For OASDI, non-interest income consists of payroll tax contributions, proceeds from taxation of benefits, and reimbursements from the General Fund of the Treasury, if any. Cost consists of benefit payments, administrative expenses, financial interchange with the Railroad Retirement program, and payments for vocational rehabilitation services for disabled beneficiaries. For HI, non-interest income consists of payroll tax contributions (including contributions from railroad employment), up to an additional 0.9 percent tax on earned income for relatively high earners, proceeds from taxation of OASDI benefits, and reimbursements from the General Fund of the Treasury, if any. Cost consists of outlays (benefits and administrative expenses) for insured beneficiaries.

The Trustees show income and cost estimates on a cash basis for the OASDI program and on an incurred basis for the HI program. The Trustees project the OASDI annual balance (non-interest income less cost) as a percentage of GDP to be negative from 2012 through 2015 under all three sets of assumptions.
Under the low-cost assumptions, the OASDI annual balance as a percentage of GDP is positive from 2016 through 2019. After 2019, deficits increase to a peak in 2033 and decrease thereafter. By 2076, the OASDI balance becomes positive, reaching 0.04 percent of GDP in 2086. Under the intermediate assumptions, the Trustees estimate that the OASDI balance will be negative for all years of the projection period. Annual deficits decrease from 2013 through 2017, increase from 2017 through 2036, decrease from 2036 through 2053, and increase thereafter. Under the high-cost assumptions, the OASDI balance is negative, with increasing deficits throughout the projection period.

The Trustees project that the HI balance as a percentage of GDP will be negative from 2012 through 2014 under the low-cost assumptions, and then positive and generally increasing thereafter. Under the intermediate assumptions, the HI balance is negative throughout the projection period. Annual deficits decline through 2018, reach a peak in 2047, and remain relatively stable thereafter. Under the high-cost assumptions, the HI balance is negative for all years of the projection period. Annual deficits reach a peak in 2074 and decline thereafter.

The combined OASDI and HI annual balance as a percentage of GDP is negative throughout the projection period under both the intermediate and high-cost assumptions. Under the low-cost assumptions, the combined OASDI and HI balance is negative from 2012 through 2015, positive from 2016 through 2022, negative from 2023 through 2048, and then positive and rising thereafter. Under the intermediate assumptions, combined OASDI and HI annual deficits decline from 2013 through 2017, and then rise, reaching a peak in 2041. After 2041, annual deficits fluctuate between about 2.2 percent and 2.4 percent of GDP. Under the high-cost assumptions, combined annual deficits rise throughout the projection period.
By 2086, the combined OASDI and HI annual balances as percentages of GDP range from a positive balance of 0.54 percent for the low-cost assumptions to a deficit of 7.23 percent for the high-cost assumptions. Balances differ by a smaller amount for the tenth year, 2021, and range from a positive balance of 0.15 percent for the low-cost assumptions to a deficit of 1.67 percent for the high-cost assumptions. The summarized long-range (75-year) balance as a percentage of GDP for the combined OASDI and HI programs varies among the three alternatives by a relatively large amount, from a positive balance of 0.26 percent under the low-cost assumptions to a deficit of 4.40 percent under the high-cost assumptions. The 25-year summarized balance varies by a smaller amount, from a positive balance of 0.29 percent to a deficit of 2.15 percent. Summarized rates are calculated on a present-value basis. They include the trust fund balances on January 1, 2012 and the cost of reaching a target trust fund level equal to 100 percent of the following year’s annual cost at the end of the period. (See section for further explanation.) To compare trust fund operations expressed as percentages of taxable payroll and those expressed as percentages of GDP, table displays ratios of OASDI taxable payroll to GDP. HI taxable payroll is about 26 percent larger than the OASDI taxable payroll throughout the long-range period; see section 1 of this appendix for a detailed description of the difference. The cost as a percentage of GDP is equal to the cost as a percentage of taxable payroll multiplied by the ratio of taxable payroll to GDP. Projections of GDP reflect projected increases in U.S. employment, labor productivity, average hours worked, and the GDP deflator. 
Projections of taxable payroll reflect the components of growth in GDP, along with assumed changes in the ratio of worker compensation to GDP, the ratio of wages to worker compensation, the ratio of OASDI covered earnings to total earnings, and the ratio of taxable to total covered earnings. Over the long-range period, the Trustees project that the ratio of OASDI taxable payroll to GDP will decline, mostly due to a projected decline in the ratio of wages to employee compensation. Over the last five complete economic cycles, the ratio of wages to employee compensation declined at an average annual rate of 0.31 percent. The Trustees project that the ratio of wages to employee compensation will continue to decline, over the 65-year period ending in 2086, at an average annual rate of 0.03, 0.13, and 0.23 percent under the low-cost, intermediate, and high-cost assumptions, respectively.
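The conversion between the two denominators described above is a single multiplication. A minimal TypeScript sketch of that identity, using hypothetical numbers rather than figures from the report:

```typescript
// Convert a cost rate expressed as a percentage of taxable payroll into a
// percentage of GDP by multiplying by the payroll-to-GDP ratio.
// The inputs below are illustrative only, not values from the Trustees Report.
function costAsPercentOfGDP(costPercentOfPayroll: number, payrollToGDPRatio: number): number {
  return costPercentOfPayroll * payrollToGDPRatio;
}

// e.g. a cost of 14% of taxable payroll, with payroll equal to 36% of GDP:
const costGDP = costAsPercentOfGDP(14.0, 0.36); // 5.04 percent of GDP
```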
Discussion about math, puzzles, games and fun.

Topic review (newest first)

bob bundy 2013-03-02 11:06:47
hi Al-Allo
How to start?
(i) Make a graph and plot the points.
(ii) The distance between them is 9 units. For "twice as close", divide the distance into 3 parts: give 2 parts to S and 1 to R. See diagram.
Bob

2013-03-02 07:26:54
Hi;
I understand what you want, but have to point out that a coordinate of (-1,2) is to the left of (8,2).

2013-03-02 07:13:40
bobbym wrote: Do you mean twice as close to R(8,2) as S(-1,2)? That drawing looks backwards. Shouldn't S be to the left of R?
It means it's closer to R than it is to S (twice closer). There was no drawing in the problem, I just did it myself and added the letters in alphabetical order. If there is something you don't understand in my text, tell me and I'll try to reformulate it.

2013-02-28 20:56:20
Hi;
Do you mean twice as close to R(8,2) as S(-1,2)? That drawing looks backwards. Shouldn't S be to the left of R?

2013-02-28 13:01:09
Hi, need help with this problem, I have no idea how to even start it.
a) Determine the coordinates of the point N, situated twice as near the end R (8,2) as the end S (-1,2) of RS.
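Bob's hint (split RS into three equal parts, two on the S side and one on the R side) is the section formula in disguise. A quick sketch of that computation, not part of the original thread:

```typescript
// N divides the segment so that NR : NS = 1 : 2,
// i.e. N sits 2/3 of the way from S toward R.
function pointTwiceAsCloseToR(sx: number, sy: number, rx: number, ry: number): [number, number] {
  const t = 2 / 3; // fraction of the way from S toward R
  return [sx + t * (rx - sx), sy + t * (ry - sy)];
}

const [nx, ny] = pointTwiceAsCloseToR(-1, 2, 8, 2); // N = (5, 2)
```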
Collision Detection Using the Separating Axis Theorem - Tuts+ Game Development Tutorial

The Separating Axis Theorem is often used to check for collisions between two simple polygons, or between a polygon and a circle. As with all algorithms, it has its strengths and its weaknesses. In this tutorial, we'll go over the math behind the theorem, and show how it can be used in game development with some sample code and demos.

Note: Although the demos and source code of this tutorial use Flash and AS3, you should be able to use the same techniques and concepts in almost any game development environment.

What the Theorem States

The Separating Axis Theorem (SAT for short) essentially states that if you are able to draw a line to separate two polygons, then they do not collide. It's that simple. In the diagram above, you can easily see collisions occurring in the second row. However you try to squeeze a line in between the shapes, you will fail. The first row is exactly the opposite. You can easily draw a line to separate the shapes -- and not just one line, but a lot of them. Okay, let's not overdo this; I think you get the point. The key argument here is that if you can draw such a line, then there must be a gap separating the shapes. So how do we check for this?

Projection Along an Arbitrary Axis

Let's assume for now that the polygons we refer to are squares: box1 on the left and box2 on the right. It's easy to see that these squares are horizontally separated.
A straightforward approach to determine this in code is to calculate the horizontal distance between the two squares, then subtract the half-widths of box1 and box2:

    //Pseudo code to evaluate the separation of box1 and box2
    var length:Number = box2.x - box1.x;
    var half_width_box1:Number = box1.width*0.5;
    var half_width_box2:Number = box2.width*0.5;
    var gap_between_boxes:Number = length - half_width_box1 - half_width_box2;

    if (gap_between_boxes > 0) trace("It's a big gap between boxes")
    else if (gap_between_boxes == 0) trace("Boxes are touching each other")
    else if (gap_between_boxes < 0) trace("Boxes are penetrating each other")

What if the boxes are not oriented nicely? Although the evaluation of the gap remains the same, we'll have to think of another approach to calculate the length between the centers and the half-widths -- this time along the P axis. This is where vector math comes in handy. We'll project vectors A and B along P to get the half-widths. Let's do some math revision.

Vector Math Revision

We'll start by recapping the definition of the dot product between two vectors A and B. We can define the dot product using just the components of the two vectors:

\[\begin{bmatrix}A_x \\ A_y\end{bmatrix} \cdot \begin{bmatrix}B_x \\ B_y\end{bmatrix} = A_x B_x + A_y B_y\]

Alternatively, we can understand the dot product using the magnitudes of the vectors and the angle between them:

\[\begin{bmatrix}A_x \\ A_y\end{bmatrix} \cdot \begin{bmatrix}B_x \\ B_y\end{bmatrix} = A_{magnitude} * B_{magnitude} * \cos(\theta)\]

Now, let's try to figure out the projection of vector A onto P. Referring to the diagram above, we know that the projection value is \(A_{magnitude}*\cos(\theta)\) (where theta is the angle between A and P). Although we can go ahead and calculate this angle to obtain the projection, it's tricky. We need a more direct approach:

\[A \cdot P = A_{magnitude} * P_{magnitude} * \cos(\theta)\]
\[\begin{bmatrix}A_x \\ A_y\end{bmatrix} \cdot \begin{bmatrix}P_x/P_{magnitude} \\ P_y/P_{magnitude}\end{bmatrix} = A_{magnitude} * \cos(\theta)\]

Note that \(\begin{bmatrix}P_x/P_{magnitude} \\ P_y/P_{magnitude}\end{bmatrix}\) is actually the unit vector of P. Now, instead of using the right side of the equation, as we were, we can opt for the left side and still arrive at the same result.

Application to a Scenario

Before we proceed, I'd like to clarify the naming convention used to denote the four corners of both boxes. This will be reflected in the code later. Our scenario is as below: let's say both boxes are oriented 45° from the horizontal axis. We must calculate the following lengths in order to determine the gap between the boxes:

• Projection of A on axis P
• Projection of B on axis P
• Projection of C on axis P

Take special note of the arrows' directions. While the projections of A and C onto P will give a positive value, the projection of B onto P will actually produce a negative value, as the vectors are pointing in opposite directions. This is covered in line 98 of the AS3 implementation below:

    var dot10:Point = box1.getDot(0);
    var dot11:Point = box1.getDot(1);
    var dot20:Point = box2.getDot(0);
    var dot24:Point = box2.getDot(4);

    //Actual calculations
    var axis:Vector2d = new Vector2d(1, -1).unitVector;
    var C:Vector2d = new Vector2d(
        dot20.x - dot10.x,
        dot20.y - dot10.y
    );
    var A:Vector2d = new Vector2d(
        dot11.x - dot10.x,
        dot11.y - dot10.y
    );
    var B:Vector2d = new Vector2d(
        dot24.x - dot20.x,
        dot24.y - dot20.y
    );

    var projC:Number = C.dotProduct(axis);
    var projA:Number = A.dotProduct(axis);
    var projB:Number = B.dotProduct(axis);

    var gap:Number = projC - projA + projB; //projB is expected to be a negative value

    if (gap > 0) t.text = "There's a gap between both boxes"
    else if (gap == 0) t.text = "Boxes are touching each other"
    else t.text = "Penetration had happened."

Here's a demo using the above code. Click and drag the red middle dot of both boxes and see the interactive feedback. For the full source, check out DemoSAT1.as in the source download.
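The same projection-by-dot-product idea can be sketched in TypeScript (my own illustrative port; the tutorial's code is AS3):

```typescript
// Project vector a onto axis p by taking the dot product with p's unit vector.
// The result is the signed length of a's shadow along p.
function project(ax: number, ay: number, px: number, py: number): number {
  const pMag = Math.hypot(px, py); // magnitude of the axis vector
  return (ax * px + ay * py) / pMag;
}

// Projecting (3, 4) onto the x-axis recovers the x component.
const shadow = project(3, 4, 1, 0); // 3
```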
The Flaws

Well, we can go with the above implementation, but there are a few problems -- let me point them out. First, vectors A and B are fixed, so when you swap the positions of box1 and box2, the collision detection fails. Second, we only evaluate the gap along one axis, so situations like the one below will not be evaluated correctly. Although the previous demo is flawed, we did learn from it the concept of projection. Next, let's improve on it.

Solving the First Flaw

First of all, we'll need to get the minimum and maximum projections of the corners (specifically, of the vectors from the origin to the boxes' corners) onto P. The diagram above shows the projection of the minimum and maximum corners onto P when the boxes are oriented nicely along P. But what if box1 and box2 are not oriented accordingly? The diagram above shows boxes which are not neatly oriented along P, and their corresponding min-max projections. In this situation, we'll have to loop through each corner of each box and select the correct ones as appropriate.

Now that we have the min-max projections, we'll evaluate whether the boxes are colliding with each other. How? By observing the diagram above, we can clearly see the geometrical representation for the projections of box1.max and box2.min onto axis P. As you can see, when there's a gap between the two boxes, box2.min - box1.max will be more than zero -- or in other words, box2.min > box1.max. When the positions of the boxes are swapped, box1.min > box2.max implies there's a gap between them. Translating this conclusion into code, we get:

    //SAT: Pseudocode to evaluate the separation of box1 and box2
    if (box2.min > box1.max || box1.min > box2.max) {
        trace("no collision along axis P")
    } else {
        trace("collision along axis P happened")
    }

Initial Code

Let's look at some more detailed code for figuring this out. Note that the AS3 code here is not optimised.
Although it's long and descriptive, the advantage is that you can see how the math behind it works. First of all, we need to prepare the vectors:

    //preparing the vectors from origin to points
    //since origin is (0,0), we can conveniently take the coordinates
    //to form vectors
    var axis:Vector2d = new Vector2d(1, -1).unitVector;
    var vecs_box1:Vector.<Vector2d> = new Vector.<Vector2d>;
    var vecs_box2:Vector.<Vector2d> = new Vector.<Vector2d>;

    for (var i:int = 0; i < 5; i++) {
        var corner_box1:Point = box1.getDot(i);
        var corner_box2:Point = box2.getDot(i);
        vecs_box1.push(new Vector2d(corner_box1.x, corner_box1.y));
        vecs_box2.push(new Vector2d(corner_box2.x, corner_box2.y));
    }

Next, we obtain the min-max projections on box1. You can see a similar approach used on box2:

    //setting min max for box1
    var min_proj_box1:Number = vecs_box1[1].dotProduct(axis);
    var min_dot_box1:int = 1;
    var max_proj_box1:Number = vecs_box1[1].dotProduct(axis);
    var max_dot_box1:int = 1;

    for (var j:int = 2; j < vecs_box1.length; j++) {
        var curr_proj1:Number = vecs_box1[j].dotProduct(axis);

        //select the minimum projection on the axis and the corresponding box corner
        if (min_proj_box1 > curr_proj1) {
            min_proj_box1 = curr_proj1;
            min_dot_box1 = j;
        }
        //select the maximum projection on the axis and the corresponding box corner
        if (curr_proj1 > max_proj_box1) {
            max_proj_box1 = curr_proj1;
            max_dot_box1 = j;
        }
    }

Finally, we evaluate whether there's a collision on that specific axis, P:

    var isSeparated:Boolean = max_proj_box2 < min_proj_box1 || max_proj_box1 < min_proj_box2;
    if (isSeparated) t.text = "There's a gap between both boxes"
    else t.text = "No gap calculated."

Here's a demo of the implementation above: you may drag either box around via its middle point, and rotate it with the R and T keys. The red dot indicates the maximum corner for a box, while yellow indicates the minimum. If a box is aligned with P, you may find that these dots flicker as you drag, as those two corners share the same characteristics.
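The min-max loop and the axis test condense nicely into two small helpers. A TypeScript sketch under my own names (illustrative only, not the tutorial's source):

```typescript
type Vec2 = { x: number; y: number };

const dot = (a: Vec2, b: Vec2): number => a.x * b.x + a.y * b.y;

// Smallest and largest projections of a polygon's corners onto an axis:
// the polygon's 1-D "shadow" on that axis.
function getMinMax(corners: Vec2[], axis: Vec2): { min: number; max: number } {
  let min = dot(corners[0], axis);
  let max = min;
  for (let i = 1; i < corners.length; i++) {
    const p = dot(corners[i], axis);
    if (p < min) min = p;
    if (p > max) max = p;
  }
  return { min, max };
}

// Two shadows are separated exactly when one starts after the other ends.
function separatedOnAxis(a: Vec2[], b: Vec2[], axis: Vec2): boolean {
  const pa = getMinMax(a, axis);
  const pb = getMinMax(b, axis);
  return pb.min > pa.max || pa.min > pb.max;
}

const unitSquare: Vec2[] = [{x: 0, y: 0}, {x: 1, y: 0}, {x: 1, y: 1}, {x: 0, y: 1}];
const farSquare: Vec2[]  = [{x: 3, y: 0}, {x: 4, y: 0}, {x: 4, y: 1}, {x: 3, y: 1}];
const gapOnX = separatedOnAxis(unitSquare, farSquare, {x: 1, y: 0}); // true
```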
Check out the full source in DemoSAT2.as in the source download. If you'd like to speed up the process, there's no need to calculate the unit vector of P. You can therefore skip quite a number of expensive Pythagoras' theorem (square root) calculations, which computing the following involves:

\[\begin{bmatrix}A_x \\ A_y\end{bmatrix} \cdot \begin{bmatrix}P_x/P_{magnitude} \\ P_y/P_{magnitude}\end{bmatrix} = A_{magnitude} * \cos(\theta)\]

The reasoning is as follows (refer to the diagram above for some visual guidance on the variables). Let:

P_unit be the unit vector for P,
P_mag be P's magnitude,
v1_mag be v1's magnitude,
v2_mag be v2's magnitude,
theta_1 be the angle between v1 and P,
theta_2 be the angle between v2 and P.

Then:

box1.max < box2.min
=> v1.dotProduct(P_unit) < v2.dotProduct(P_unit)
=> v1_mag*cos(theta_1) < v2_mag*cos(theta_2)

Now, mathematically, the sign of the inequality remains the same if both sides are multiplied by the same positive number, A:

A*v1_mag*cos(theta_1) < A*v2_mag*cos(theta_2)

If A is P_mag, then:

P_mag*v1_mag*cos(theta_1) < P_mag*v2_mag*cos(theta_2)

...which is equivalent to saying:

v1.dotProduct(P) < v2.dotProduct(P)

So whether it's a unit vector or not doesn't actually matter -- the result is the same. Do bear in mind that this approach is useful if you are checking for overlap only. To find the exact penetration length of box1 and box2 (which for most games you'll probably need), you still need to calculate the unit vector of P.

Solving the Second Flaw

So we solved the issue for one axis, but that's not the end of it. We still need to tackle the other axes -- but which? The analysis for boxes is quite straightforward: we compare two axes, P and Q. In order to confirm a collision, overlapping on all axes has to be true -- if there's any axis without an overlap, we can conclude that there's no collision. What if the boxes are oriented differently?
So of the P, Q, R, and S axes, there's only one axis that shows no overlapping between the boxes, and our conclusion is that there's no collision between them. But the question is, how do we decide which axes to check for overlapping? Well, we take the normals of the polygons' edges. In a generalised form, with two boxes, we'll have to check along eight axes: n0, n1, n2 and n3 for each of box1 and box2. However, we can see that the following lie on the same axes:

• n0 and n2 of box1
• n1 and n3 of box1
• n0 and n2 of box2
• n1 and n3 of box2

So we don't need to go through all eight; just four will do. And if box1 and box2 share the same orientation, we can further reduce that and evaluate only two axes. What about other polygons? Unfortunately, for the triangle and pentagon above there's no such shortcut, so we'll have to run checks along all the normals.

Calculating Normals

Each surface has two normals. The diagram above shows the left and right normal of P. Note the switched components of the vector and the flipped sign for each. For my implementation, I'm using a clockwise convention, so I use the left normals. Below is an extract of SimpleSquare.as demonstrating this:

    public function getNorm():Vector.<Vector2d> {
        var normals:Vector.<Vector2d> = new Vector.<Vector2d>;
        for (var i:int = 1; i < dots.length - 1; i++) {
            var currentNormal:Vector2d = new Vector2d(
                dots[i + 1].x - dots[i].x,
                dots[i + 1].y - dots[i].y
            ).normL; //left normal
            normals.push(currentNormal);
        }
        normals.push(
            new Vector2d(
                dots[1].x - dots[dots.length - 1].x,
                dots[1].y - dots[dots.length - 1].y
            ).normL
        );
        return normals;
    }

New Implementation

I'm sure you can find a way to optimise the following code.
But just so that we all get a clear idea of what's happening, I've written everything out in full:

    //results of P, Q
    var result_P1:Object = getMinMax(vecs_box1, normals_box1[1]);
    var result_P2:Object = getMinMax(vecs_box2, normals_box1[1]);
    var result_Q1:Object = getMinMax(vecs_box1, normals_box1[0]);
    var result_Q2:Object = getMinMax(vecs_box2, normals_box1[0]);

    //results of R, S
    var result_R1:Object = getMinMax(vecs_box1, normals_box2[1]);
    var result_R2:Object = getMinMax(vecs_box2, normals_box2[1]);
    var result_S1:Object = getMinMax(vecs_box1, normals_box2[0]);
    var result_S2:Object = getMinMax(vecs_box2, normals_box2[0]);

    var separate_P:Boolean = result_P1.max_proj < result_P2.min_proj || result_P2.max_proj < result_P1.min_proj;
    var separate_Q:Boolean = result_Q1.max_proj < result_Q2.min_proj || result_Q2.max_proj < result_Q1.min_proj;
    var separate_R:Boolean = result_R1.max_proj < result_R2.min_proj || result_R2.max_proj < result_R1.min_proj;
    var separate_S:Boolean = result_S1.max_proj < result_S2.min_proj || result_S2.max_proj < result_S1.min_proj;

    var isSeparated:Boolean = separate_P || separate_Q || separate_R || separate_S;

    if (isSeparated) t.text = "Separated boxes"
    else t.text = "Collided boxes."

You'll see that some of these variables aren't necessarily needed to reach the result. If any one of separate_P, separate_Q, separate_R and separate_S is true, then the boxes are separated and there's no need to even proceed. This means we can save a fair amount of evaluation just by checking each of those Booleans right after it has been calculated. It would require rewriting the code, but I think you can work your way through it (or check out the commented block in DemoSAT3.as). Here's a demo of the above implementation. Click and drag the boxes via their middle dots, and use the R and T keys to rotate them: when this algorithm is run, it checks through the normal axes for overlaps.
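The full pipeline -- edge normals, min-max projection, early exit on a separating axis -- fits in one short general routine for convex polygons. A TypeScript sketch under my own names (illustrative, not the tutorial's AS3 source):

```typescript
type P2 = { x: number; y: number };

const dotp = (a: P2, b: P2): number => a.x * b.x + a.y * b.y;

// One normal per edge; for the overlap test the sign convention doesn't matter,
// only the axis direction does.
function normalsOf(poly: P2[]): P2[] {
  return poly.map((v, i) => {
    const n = poly[(i + 1) % poly.length];
    return { x: -(n.y - v.y), y: n.x - v.x };
  });
}

// Min and max projections of the polygon onto an axis.
function minMaxOnAxis(poly: P2[], axis: P2): [number, number] {
  let min = dotp(poly[0], axis), max = min;
  for (const v of poly) {
    const p = dotp(v, axis);
    min = Math.min(min, p);
    max = Math.max(max, p);
  }
  return [min, max];
}

// Convex polygons collide iff their shadows overlap on every edge normal;
// finding a single separating axis lets us exit early with "no collision".
function satCollide(a: P2[], b: P2[]): boolean {
  for (const axis of [...normalsOf(a), ...normalsOf(b)]) {
    const [minA, maxA] = minMaxOnAxis(a, axis);
    const [minB, maxB] = minMaxOnAxis(b, axis);
    if (minB > maxA || minA > maxB) return false; // separating axis found
  }
  return true;
}

const sq1: P2[] = [{x: 0, y: 0}, {x: 2, y: 0}, {x: 2, y: 2}, {x: 0, y: 2}];
const sq2: P2[] = [{x: 1, y: 1}, {x: 3, y: 1}, {x: 3, y: 3}, {x: 1, y: 3}];
const sq3: P2[] = [{x: 5, y: 5}, {x: 6, y: 5}, {x: 6, y: 6}, {x: 5, y: 6}];
// satCollide(sq1, sq2) → true; satCollide(sq1, sq3) → false
```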
I have two observations here to point out:

• SAT is optimistic that there'll be no collision between the polygons. The algorithm can exit and happily conclude "no collision" as soon as any axis shows no overlap. If there is a collision, SAT has to run through all the axes to reach that conclusion -- thus, the more collisions there actually are, the worse the algorithm performs.
• SAT uses the normal of each of the polygons' edges. So the more complex the polygons are, the more expensive SAT will become.

Hexagon-Triangle Collision Detection

Here's another sample code snippet to check for a collision between a hexagon and a triangle:

    private function refresh():void {
        //prepare the normals
        var normals_hex:Vector.<Vector2d> = hex.getNorm();
        var normals_tri:Vector.<Vector2d> = tri.getNorm();

        var vecs_hex:Vector.<Vector2d> = prepareVector(hex);
        var vecs_tri:Vector.<Vector2d> = prepareVector(tri);

        var isSeparated:Boolean = false;

        //use the hexagon's normals to evaluate
        for (var i:int = 0; i < normals_hex.length; i++) {
            var result_box1:Object = getMinMax(vecs_hex, normals_hex[i]);
            var result_box2:Object = getMinMax(vecs_tri, normals_hex[i]);

            isSeparated = result_box1.max_proj < result_box2.min_proj || result_box2.max_proj < result_box1.min_proj;
            if (isSeparated) break;
        }

        //use the triangle's normals to evaluate
        if (!isSeparated) {
            for (var j:int = 0; j < normals_tri.length; j++) {
                var result_P1:Object = getMinMax(vecs_hex, normals_tri[j]);
                var result_P2:Object = getMinMax(vecs_tri, normals_tri[j]);

                isSeparated = result_P1.max_proj < result_P2.min_proj || result_P2.max_proj < result_P1.min_proj;
                if (isSeparated) break;
            }
        }

        if (isSeparated) t.text = "Separated boxes"
        else t.text = "Collided boxes."
    }

For the full code, check out DemoSAT4.as in the source download. The demo is below. Interaction is the same as in the previous demos: drag via the middle points, and use R and T to rotate.

Box-Circle Collision Detection

Collision with a circle may be one of the simpler ones.
Because its projection is the same in all directions (it's simply the circle's radius), we can just do the following:

    private function refresh():void {
        //prepare the vectors
        var v:Vector2d;
        var current_box_corner:Point;
        var center_box:Point = box1.getDot(0);

        var max:Number = Number.NEGATIVE_INFINITY;
        var box2circle:Vector2d = new Vector2d(c.x - center_box.x, c.y - center_box.y);
        var box2circle_normalised:Vector2d = box2circle.unitVector;

        //get the maximum projection of the box onto the box-to-circle axis
        for (var i:int = 1; i < 5; i++) {
            current_box_corner = box1.getDot(i);
            v = new Vector2d(
                current_box_corner.x - center_box.x,
                current_box_corner.y - center_box.y);
            var current_proj:Number = v.dotProduct(box2circle_normalised);

            if (max < current_proj) max = current_proj;
        }

        if (box2circle.magnitude - max - c.radius > 0 && box2circle.magnitude > 0) t.text = "No Collision"
        else t.text = "Collision"
    }

Check out the full source in DemoSAT5.as. Drag either the circle or the box to see them collide. Well, that's it for now. Thanks for reading, and do leave your feedback with a comment or question. See you next tutorial!
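The same box-circle check in TypeScript form (an illustrative sketch mirroring the tutorial's single-axis approach; note that testing only the center-to-center axis can over-report collisions near corners, where a vertex-to-center axis would also be needed for an exact result):

```typescript
type C2 = { x: number; y: number };

// Box-circle test along the axis from the box center to the circle center:
// compare the distance between centers against the box's maximum corner
// projection in that direction plus the circle's radius.
function boxCircleCollide(boxCenter: C2, corners: C2[], circle: C2, radius: number): boolean {
  const dx = circle.x - boxCenter.x;
  const dy = circle.y - boxCenter.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0) return true; // centers coincide

  const ux = dx / dist, uy = dy / dist; // unit axis toward the circle
  let max = -Infinity;
  for (const c of corners) {
    const proj = (c.x - boxCenter.x) * ux + (c.y - boxCenter.y) * uy;
    if (proj > max) max = proj;
  }
  return dist - max - radius <= 0;
}

const corners: C2[] = [{x: -1, y: -1}, {x: 1, y: -1}, {x: 1, y: 1}, {x: -1, y: 1}];
const hit  = boxCircleCollide({x: 0, y: 0}, corners, {x: 1.8, y: 0}, 1); // true
const miss = boxCircleCollide({x: 0, y: 0}, corners, {x: 5,   y: 0}, 1); // false
```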
Temple, GA SAT Math Tutor Find a Temple, GA SAT Math Tutor ...Have taken MANY college Chemistry courses. I am highly qualified, and have taught Physical Science for four years. I am a highly-qualified state certified teacher in Science grades 4-12. 11 Subjects: including SAT math, chemistry, physics, algebra 1 ...Worked as a Statistics teaching assistant for several years during college. Taught college students in a lab setting of 15-20 students and also tutored 160 nontraditional students who were taking the course online. Subsequently became the Assistant for TA development for the entire campus. 28 Subjects: including SAT math, physics, calculus, statistics ...I hold a Master's degree in Mathematics Education from Georgia State University and a Bachelors degree in Physics from Spelman College. I have experience tutoring all math topics at the middle and high school levels. I will provide the needed support and encouragement to help you succeed in your mathematics class and/or standardized tests. 7 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...I have experience coaching debate at the high school level. I have also led workshops on making presentations for graduate students. I took part in debate and drama in high school. 33 Subjects: including SAT math, English, reading, GRE ...I have written large-scale programs in C for generating complex reports, for providing several good user-interfaces for entering data, and for updating a Sybase database with output from a decision-making algorithm. I made use of linked lists and recursion in code I have written. I have written... 
13 Subjects: including SAT math, calculus, statistics, ACT Math
[SOLVED] parallelograms

June 16th 2008, 01:12 PM #1
The diagonals of a parallelogram have measures of 8 and 10 and intersect at a 60 degree angle. Find the area of the parallelogram.

See the sketch below... Hint: use trigonometry in triangle OAD. Everything else is there.

thank you. then that means the height of the triangle is 5√(3) and its base is 10. so the area is 25√(3), making the whole parallelogram's area 50√(3). i think that's right

oops, the height's 2√(3) -> A of triangle 5√(3) -> A of parallelogram 10√(3)

I notice that this thread is marked 'solved'. But in fact it is not. There is no correct solution yet. What should have been noted from the beginning is: the area of the parallelogram is $4\cdot\mbox{area}(\Delta AOD)$. Now $\mbox{area}(\Delta AOD) = \frac{1}{2}\left( 5 \right)\left( 4 \right)\sin \left( {60^ \circ } \right)$.

but i found the area of triangle DAB (as per Moo's graphic) and doubled it b/c triangle DAB is congruent to triangle BCD

$\mathcal{A}_{ADB}=\frac 12 \cdot AH \cdot DB=\frac 12 (\sin 60 \cdot OA) \cdot DB$

$\mathcal{A}_{ADB}=\frac 12 \cdot \frac{\sqrt{3}}{2} \cdot 4 \cdot 10=10\sqrt{3}$

Area of the parallelogram is twice this one... Same result as Plato's, slightly different method... This was going to be the correct result ~

Last edited by Moo; June 17th 2008 at 12:23 AM.
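Plato's approach generalizes: for diagonals of lengths d1 and d2 meeting at angle θ, the parallelogram's area is ½·d1·d2·sin θ. A quick illustrative check (not part of the thread):

```typescript
// Area of a parallelogram from its diagonal lengths and the angle between them:
// 4 congruent-area triangles, each with legs d1/2 and d2/2, give (1/2)*d1*d2*sin(theta).
function parallelogramAreaFromDiagonals(d1: number, d2: number, thetaRad: number): number {
  return 0.5 * d1 * d2 * Math.sin(thetaRad);
}

// Diagonals 8 and 10 at 60 degrees, as in the thread:
const area = parallelogramAreaFromDiagonals(8, 10, Math.PI / 3); // 20*sqrt(3) ≈ 34.64
```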
This book is going to cover the topic of electrodynamics using vector calculus. We will provide a brief refresher on the topics of vector calculus, but this book does not intend to teach that topic to students who do not have any background in it. For more information about calculus and vector calculus topics, see Calculus and Linear Algebra. Because this book is part of a series of books on Modern Physics, the reader is assumed to have a background in relativity theory, or to be able to concurrently read the Special Relativity book.

This book is going to discuss the electric and magnetic fields and forces, and related subjects. It is intended to be read by advanced undergraduates in the field of physics or engineering. This book is new and needs a lot of work. This is a wiki, so you can contribute.

Table of Contents

Electric Materials

Electromagnetic Waves

See also: Waves

Tensor Calculus

Resources and Licensing
Forest Hills, NY Algebra 1 Tutor Find a Forest Hills, NY Algebra 1 Tutor ...I also found that strategies I employed for the verbal questions helped considerably, for both antonyms and analogies. Finally, I excel in Writing, as I incorporate correct sentence structure with proper grammar, and create well-developed essays. I have scored in the top third percentile range on practice units on the GMAT for both Math and English. 41 Subjects: including algebra 1, English, reading, chemistry ...Services Available*: SAT, SAT 2, AP Subjects and Tests, College Admissions Counseling, College Application Preparation LSAT, Law School Admissions Counseling, Law School tutoring, Bar Examination Preparation. *Other specific subjects available upon request. About me: Princeton Graduate, Distin... 34 Subjects: including algebra 1, English, reading, writing ...Tech - Honours) from Indian Institute Of Technology with a career spanning over 30 years. I have strong background in Physics and Mathematics/Statistics and I enjoy teaching and tutoring. I would be looking for students seeking tutoring on a one on one basis in high school math, statistics, and physics including advanced placement. 11 Subjects: including algebra 1, calculus, statistics, algebra 2 ...My work with Junior Achievement has allowed me to teach financial planning to high school juniors. I graduated from Columbia University with a Bachelors in Economics. In pursuit of that degree, I took math classes in Algebra, PreCal, Calc 1, Calc 2 and Calc 3. 26 Subjects: including algebra 1, English, reading, writing I am a certified Math Teacher with more than 8 years experience with NYCDOE. I have a high NYS Regents exam passing rate in Integrated Algebra and Geometry. Let me help your child become successful in areas such as Pre-algebra, Algebra I, Algebra II, or Geometry! 4 Subjects: including algebra 1, geometry, algebra 2, prealgebra
Leonard, TX Math Tutor

Find a Leonard, TX Math Tutor

...I coached girls' volleyball for five years with great results. I have played in tournaments in both men's volleyball and mixed tournaments and won most valuable player on three different occasions. I encourage young people to participate in volleyball as a team sport and one that teaches patience and teamwork.
27 Subjects: including prealgebra, ESL/ESOL, SAT math, GED

...Since that time I have assisted many students in the development of the process of reading, including those with dyslexia, learning disabilities and autism. I realize that there is no one method that is effective for every child, and therefore I present the material utilizing a student's learnin...
21 Subjects: including geometry, English, prealgebra, algebra 1

...Geometry is second nature to me! I have helped several students in surrounding districts while tutoring them throughout the school year. I put together a summer program to help students prepare for the subject.
10 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...Received a Certificate of Completion for a Photoshop short course from the Art Institute online on 7/14/2008. I operate a business where one of the things I offer is photo enhancement, including photo merge, adding to or removing from a photo, changing the background layout, de-speckle, blur background, ...
10 Subjects: including algebra 1, reading, business, English

...I also had a handful of college students and adults that would come to the center for homework support. At Sylvan Learning Center I acquired the necessary skills to not only teach students of all ages, but also to make the learning experience non-tedious and enjoyable. I have since moved back t...
23 Subjects: including geometry, discrete math, differential equations, linear algebra
{"url":"http://www.purplemath.com/leonard_tx_math_tutors.php","timestamp":"2014-04-20T02:05:12Z","content_type":null,"content_length":"23659","record_id":"<urn:uuid:2fbaaba2-51b1-404b-a2f7-4e65e80af05e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Queens, NY Algebra 1 Tutor

Find a Queens, NY Algebra 1 Tutor

...Thank you for your consideration. Algebra 1 is a textbook title or the name of a course, but it is not a subject. It is often the course where students become acquainted with symbolic manipulations of quantities.
25 Subjects: including algebra 1, chemistry, calculus, physics

...I wrote compositions in Hebrew on a wide range of topics, and also wrote letters in Hebrew to my Israeli relatives. Hebrew was the only subject where I scored 100% on the Regents in high school. I visited Israel several times and interacted with the locals in Hebrew without a problem.
25 Subjects: including algebra 1, chemistry, SAT math, anatomy

...If you are interested in one-on-one sessions following this intensive curriculum, please contact me for pricing details. I am also happy to send out a syllabus if requested. Contact me today for more information! When it comes to SAT math, I know that it is important that each student learns an approach that will suit his or her learning style.
37 Subjects: including algebra 1, reading, English, writing

...I also repair most of my friends' clothes when needed, using either patch reinforcement with a machine or by hand. I took a Molecular and Mendelian Genetics course at Columbia and truly loved it. This course inspired me to major in Biology and Chemistry, and I hope to pursue a career in some form of molecular manipulation of genes to treat disease and cancer.
25 Subjects: including algebra 1, chemistry, calculus, geometry

...I make sure all my students learn these skills as well as the core mathematical knowledge. I use full-length tests, online resources and my own material. I have been teaching/tutoring the math section of the SSAT for many years.
23 Subjects: including algebra 1, English, reading, geometry
{"url":"http://www.purplemath.com/Queens_NY_algebra_1_tutors.php","timestamp":"2014-04-17T15:28:38Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:9b4c08bb-3c73-47af-84f0-b6663a276b28>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

1) In recent years, Massachusetts has experienced a population explosion, not of people but of wild turkeys. The bird had virtually disappeared here when, in 1972, 37 turkeys were trucked over the border and released into the wild. There are now an estimated 20,000 of these creatures in Massachusetts. Assume that the Massachusetts wild-turkey population increases at a rate proportional to its current size.

a) Write the initial value problem (differential equation plus initial condition) that models this situation. The differential equation should contain one unspecified constant.

b) Write the solution function for that initial value problem. In doing this, you will need to determine the value of the unspecified constant.

I am puzzled as to how to go about solving this problem. Please, any help will be much appreciated.
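Not part of the original thread, but both requested pieces can be checked numerically. This sketch assumes the textbook model dP/dt = kP with P(0) = 37, and, purely for illustration, that 35 years elapsed between the 1972 release and the 20,000 estimate (the post does not say when "now" is):

```python
import math

P0 = 37.0        # turkeys released in 1972: P(0) = 37
P_now = 20000.0  # estimated current population

# Assumption (not given in the post): "now" is 35 years after 1972.
t_now = 35.0

# a) dP/dt = k*P, P(0) = 37   =>   b) P(t) = 37*exp(k*t).
# Determine k from the second data point: 20000 = 37*exp(k*t_now).
k = math.log(P_now / P0) / t_now

def P(t):
    """Population t years after 1972 under the exponential model."""
    return P0 * math.exp(k * t)

print(k)         # roughly 0.18 per year under this assumption
print(P(t_now))  # recovers 20000 by construction
```

A different elapsed-time assumption changes the constant k, but not the form of the answer.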
{"url":"http://mathhelpforum.com/calculus/24249-ivp.html","timestamp":"2014-04-16T05:59:30Z","content_type":null,"content_length":"32320","record_id":"<urn:uuid:b5fe9537-fcda-40bb-9484-d5ef1dab5d7c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
What determines the rate at which people leave the first "compartment" (susceptibles) to enter the second one (infected and infectious cases)? The "mass action principle" states that this rate is a function of the number of susceptible individuals in the population, i.e.:

C[t+1]/C[t] = f(S[t]),

where C is the number of infected cases, S the number of susceptibles, t a given time period, and t+1 the next time period. This can also be expressed as:

C[t+1] = S[t] C[t] r,

where r is a transmission parameter. (The latter expression explains the name of the "mass action principle", which was given by analogy to the "law of mass action" in chemistry, according to which the velocity of a chemical reaction is a function of the concentrations of the initial reagents.)

The "mass action principle" is actually the theoretical basis of the phenomenon of herd immunity. It was introduced in the 1900s and helped explain the dynamics of epidemics of diseases like measles: as the infection spreads during an epidemic, the number of infected cases in each successive time period initially increases while the number of susceptibles in the population decreases; therefore, there will be a point when susceptibles become sparse and the number of new cases in each successive time period decreases; and, finally, susceptibles are so scarce that there is no more than one new case for each case in the previous time period, and the epidemic fades out although a number of susceptibles have not been infected.
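To make the recursion concrete, here is a minimal numerical sketch (ours, not the lecture's) of C[t+1] = r S[t] C[t], where each new case leaves the susceptible pool; the population size, the single initial case, and the value of r are illustrative assumptions:

```python
# Discrete-time "mass action" epidemic: C[t+1] = r * S[t] * C[t].
# Population size, initial cases, and r are illustrative assumptions.
S, C = 9999.0, 1.0   # susceptibles, current infectious cases
r = 2e-4             # transmission parameter (assumed)

cases = []
for t in range(50):
    cases.append(C)
    new_cases = r * S * C   # the mass action principle
    S -= new_cases          # the newly infected leave the susceptible pool
    C = new_cases

# The case count rises while r*S > 1, peaks, then fades out
# even though some susceptibles were never infected: herd immunity.
print(max(cases), S)
```

With these numbers the case count roughly doubles at first (since r*S is close to 2), peaks, and dies out while a substantial pool of susceptibles remains uninfected, which is exactly the fade-out the lecture describes.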
{"url":"http://www.pitt.edu/~super1/lecture/lec1181/009.htm","timestamp":"2014-04-18T18:58:12Z","content_type":null,"content_length":"3806","record_id":"<urn:uuid:a3a96efe-45bd-4563-b4e3-7c25e6565101>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
L.: Training a 3-Node Neural Network is NP-complete Results 1 - 10 of 152 - ARTIFICIAL INTELLIGENCE , 1997 "... In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a ..." Cited by 1023 (3 self) Add to MetaCart In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and - MACHINE LEARNING: PROCEEDINGS OF THE ELEVENTH INTERNATIONAL , 1994 "... We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features ..." Cited by 594 (23 self) Add to MetaCart We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. 
We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets. - MULTIPLE CLASSIFIER SYSTEMS, LBCS-1857, 2000 "... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boostin ..." Cited by 426 (3 self) Add to MetaCart Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly. "... We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property.
A property testing algorithm is given a sample of the value of the function on instances drawn according to some distribution, and possibly may query the fun ..." Cited by 421 (57 self) Add to MetaCart We study the question of determining whether an unknown function has a particular property or is ε-far from any function with that property. A property testing algorithm is given a sample of the value of the function on instances drawn according to some distribution, and possibly may query the function on instances of its choice. First, we establish some connections between property testing and problems in learning theory. Next, we focus on testing graph properties, and devise algorithms to test whether a graph has properties such as being k-colorable or having a ρ-clique (clique of density ρ w.r.t. the vertex set). Our graph property testing algorithms are probabilistic and make assertions which are correct with high probability, utilizing only poly(1/ε) edge-queries into the graph, where ε is the distance parameter. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph which corre... - Machine Learning, 1994 "... Abstract. Active learning differs from "learning from examples" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, g ..." Cited by 419 (1 self) Add to MetaCart Abstract. Active learning differs from "learning from examples" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples.
In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers "useful." We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization. - PROCEEDINGS OF THE TWENTY-FIRST ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING, 1989 "... In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntact ..." Cited by 311 (16 self) Add to MetaCart In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers equivalent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing.
The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring. - Journal of Artificial Intelligence Research , 1994 "... This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned espe ..." Cited by 251 (13 self) Add to MetaCart This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. 1. Introduction Current data collection technology provides a unique challenge and opportunity for automated machine learning techniques. The advent of major scientific projects such as the Human Genome Project, the Hubble Space Telescope, and the human brain mappi... - Machine Learning , 1993 "... Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge mus ..." 
Cited by 198 (4 self) Add to MetaCart Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge must be extracted from the network. We have previously described a method for the first step of this process. Standard neural learning techniques can accomplish the second step. In this paper, we propose and empirically evaluate a method for the final, and possibly most difficult, step. This method efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules: (1) closely reproduce (and can even exceed) the accuracy of the network from which they are extracted; (2) are superior to the rules produced by methods that directly refine symbolic rules; (3) are superior to those produced by previous techniques fo... - Data Mining and Knowledge Discovery , 1997 "... Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial ne ..." Cited by 146 (1 self) Add to MetaCart Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial neural networks. Researchers in these disciplines, sometimes working on quite different problems, identified similar issues and heuristics for decision tree construction. 
This paper surveys existing work on decision tree construction, attempting to identify the important issues involved, directions the work has taken and the current state of the art. Keywords: classification, tree-structured classifiers, data compaction 1. Introduction Advances in data collection methods, storage and processing technology are providing a unique challenge and opportunity for automated data exploration techniques. Enormous amounts of data are being collected daily from major scientific projects e.g., Human Genome...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=154977","timestamp":"2014-04-21T03:04:12Z","content_type":null,"content_length":"38849","record_id":"<urn:uuid:52cf65c9-dc93-4c5e-9a9a-e9533b364f3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating function

From Encyclopedia of Mathematics

generatrix, of a sequence {a_n}

The sum of the power series

  F(z) = a_0 + a_1 z + a_2 z^2 + ...

with positive radius of convergence. If the generating function is known, then properties of the Taylor coefficients of analytic functions can be used in the study of the sequence {a_n}. The generating function exists, under certain conditions, for many families of polynomials; for the classical orthogonal polynomials the generating function can be explicitly represented in terms of the weight function.

In probability theory, the generating function of a random variable X taking non-negative integer values is

  F(z) = E[z^X] = sum_{k>=0} P(X = k) z^k.

Using the generating function one can compute the probability distribution of X. The generating function of a random variable exists for all |z| <= 1.

[1] G. Szegö, "Orthogonal polynomials", Amer. Math. Soc. (1975)
[2] P.K. Suetin, "Classical orthogonal polynomials", Moscow (1979) (In Russian)
[3] W. Feller, "An introduction to probability theory and its applications", 1–2, Wiley (1957–1971)

Generating functions in the sense of formal power series are also often used. Other commonly used types of generating functions are, e.g., the exponential generating function

  F(z) = sum_{n>=0} a_n z^n / n!

and the (formal) Dirichlet series

  F(s) = sum_{n>=1} a_n n^{-s}.

Usually it is possible to justify manipulations with such functions regardless of convergence.

How to Cite This Entry:
Generating function. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Generating_function&oldid=25930

This article was adapted from an original article by P.K. Suetin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
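As a small worked illustration (ours, not part of the encyclopedia entry): the generating function of a sum of independent variables is the product of their generating functions, so the distribution of the sum of two dice can be read off a polynomial product. The coefficient-list encoding and the helper name `pgf_product` are our own choices:

```python
# Probability generating function of one fair die, stored as the
# coefficient list of F(s) = (s + s^2 + ... + s^6)/6.
die = [0.0] + [1.0 / 6] * 6

def pgf_product(F, G):
    """Coefficients of F(s)*G(s): the distribution of an independent sum."""
    out = [0.0] * (len(F) + len(G) - 1)
    for i, a in enumerate(F):
        for j, b in enumerate(G):
            out[i + j] += a * b
    return out

two_dice = pgf_product(die, die)
# P(sum = 7) is the coefficient of s^7 in F(s)^2, i.e. 6/36.
print(two_dice[7])
```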
{"url":"http://www.encyclopediaofmath.org/index.php/Generating_function","timestamp":"2014-04-18T18:12:18Z","content_type":null,"content_length":"20059","record_id":"<urn:uuid:2f357192-186c-4721-be5f-d7ddf3d0e116>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Subfactors and Planar Algebras: The Series

March 17, 2008
Posted by Noah Snyder in planar algebras, subfactors.

This is the first post in a many-part series of posts on Subfactors and Planar Algebras. The study of subfactors is a topic in the theory of von Neumann algebras which at first might seem very far from our blog's usual topics. However, it turns out to be intimately related to quantum algebra and higher category theory. In fact, at Berkeley almost everyone who studies quantum groups or related topics also learns at least the rudiments of subfactors (due largely to Vaughan Jones's excellent subfactor seminar). Since algebraists outside of Berkeley don't seem to get as much exposure to this beautiful topic, hopefully all you out there in internet land will enjoy learning a little bit about it. I'll be taking a very idiosyncratic representation theory approach to the topic, and so it should be accessible to people who (like me) know very little analysis.

I'm also very happy to welcome our friend and colleague Emily Peters, who will be guest blogging on this topic. She's a 5th-year graduate student at Berkeley working with Vaughan Jones. Emily's main research is on the Haagerup subfactor, which makes her far more expert on subfactors than I. She's collaborating with Scott Morrison and me on a knot theory/subfactor topic which we may have more to say about in the future. Perhaps most importantly, the back of Emily's head features prominently on a certain illustrious math blog.

I hope to put up a few posts with mathematical content on this topic in the next few days.

I'm looking forward to this. I really should know more about subfactors, and the prospect of learning about them without having to deal with so much of that.. analysis stuff…

I have seen some of the wonders of subfactor theory in action when listening to AQFT people, but I can't say that I have the feeling I have fully grokked what's really going on here. I am looking forward to learning more about it. Thanks for offering to teach us…

I have no idea what a subfactor is, but I'm intrigued.

Noah Snyder? I went to math camp with you! and I went to high school with Emily! How are you all?

Sorry, comments are closed for this entry.
{"url":"http://sbseminar.wordpress.com/2008/03/17/subfactors-and-planar-algebras-the-series/","timestamp":"2014-04-16T13:04:18Z","content_type":null,"content_length":"67734","record_id":"<urn:uuid:60eb598c-227e-4517-ab83-0cf6ddd9edf0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Funny?

On that page the first piece of analysis is pretty much how I would treat the problem. If I see a,a,a,a,a,a,a,a,a, I suspect strongly that the next value in that sequence is an a.

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=19118","timestamp":"2014-04-18T18:58:48Z","content_type":null,"content_length":"19011","record_id":"<urn:uuid:8b40cc93-4237-4f93-8e0b-b2c7daa4f300>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
A time series analysis of Amazon sales rank

10 August 06. [PDF version]

I have been very interested in the sales of Math You Can't Use: Patents, Copyright, and Software, a book with which I was heavily involved. (Amazon page) So naturally, I've been tracking the Amazon sales rank. At first, I did it the way everybody else does--refreshing the darn page every twenty minutes--but I have recently started doing it the civilized way--an automated script. Here is what I've learned about how Amazon does its rankings.

First, to give you some intuition as to sales rank, here's a little table:

1-10: Oprah's latest picks
10-100: The NYT's picks
100-1,000: Books by editors of Wired Magazine, topical rants by pundits/journalists
1,000-500,000: everything else (still selling)
500,000-2 mil: everything else (technically in stock)

How much more detail can we get? The answer: none, really. You'll see below that over the course of a few days, the ranking of a typical book will go from 50,000 to 500,000, and a minute later it will be back at 50,000. Thus, the sort of things we usually do with a ranking, like compare two books, are unstable to the point of uselessness.

One thing you evidently can do with the ranking is determine whether a book has sold a copy in the last hour or two. As you'll see below, there's a simple formula that will work for most books: if the current rank is better (a smaller number) than the earlier rank, then there was a recent sale.

Here is a plot of sales of Math You Can't Use. A data point is added every three hours, so if you come back in a week, this plot will be different. See below for the code I used to generate this. On the x-axis is the date, and on the y-axis is the Amazon sales rank at that time.

Sales rank for Klemens's Math You Can't Use

You can see from the plot that the pattern is a sudden jump and then a slow drift downward. The clearest explanation is that the sales rank is basically a function of the last sale.
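A sale jumps a book to a better rank, i.e. a smaller number, after which the number drifts upward again, so a sale between two samples shows up as a drop in the rank number. The recent-sale rule can be sketched in a few lines; the helper name and the sample rank series below are made up for illustration:

```python
def sale_intervals(ranks):
    """Indices i such that the rank improved (the number fell) between
    sample i-1 and sample i -- under the rule above, a sale happened
    somewhere in that interval."""
    return [i for i in range(1, len(ranks)) if ranks[i] < ranks[i - 1]]

# Hypothetical 3-hourly samples: the rank drifts worse (number rises),
# then jumps to a better rank when a copy sells.
samples = [52000, 54100, 57300, 41800, 43900, 46200, 39500]
print(sale_intervals(samples))  # [3, 6]
```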
When a copy sells, the book jumps to a high rank, and then gets knocked down one unit every time any lower-ranked book sells. There are lots of details that those of us not working at Amazon will never quite catch. There are periods (sometimes mid-day) when the rank drifts down more slowly than it should, then speeds up in its descent. This implies to me some computational approximations that eventually get corrected. You'll notice that some of the books below show a small slope upward (a ten or twenty point rise in ranking) from time to time. When this happens, lots of books do it at once, also indicating some sort of correction whose purpose or method I don't have enough information to divine. Epstein and Axtell's book rises appreciably when it nears half a million. Finally, I don't have enough data to determine whether the ranking distinguishes between sales of used and new copies; I don't think it does.

Here is a haphazard sampling of other books. Again, these are dynamically regenerated every three hours, so come back later for more action-packed graphing.

Update 25 June 2008: I've switched to a host that doesn't have gnuplot, so the plots are no longer updated; you've got two years of data, and that's it.

Some of these books bear something in common with Math You Can't Use, and others were based on a trip to the used book store I'd made the other day. Some have hardcover and paperback editions, in which case I just plot the paperback.

Epstein and Axtell's Growing Artificial Societies [Amazon p.]
Sales rank for Epstein and Axtell's Growing Artificial Societies

Andy Rathbone, Tivo for Dummies. I have no idea who would buy this, and yet it is the best nonfiction seller here. This proves that I must never go into marketing. [Amazon p.]
Sales rank for Rathbone's Tivo for Dummies

Dickens's Great Expectations, Penguin Classics ed. [Amazon p.] Books in the top 10,000 or so are selling several copies a day, so the pattern looks different.
Sales rank for Great Expectations

Madonna's Sex. [Amazon p.] Somebody ran into the used bookstore asking for a copy, and ran out when the owner said he didn't have one. It's amusing that a book from 1992 could still instill such fervor in a person. It sells new for $125, used around $85.

Sales rank for Madonna's Sex

Ian McEwan's Atonement. [Amazon p.] I really thought I'd hate this book, since it starts off as being about subtle errors in manners committed by a gathering of relatives and friends at a British country manor, but it turned out to be an interesting modern take on the genre. Update: After I read it, it turned into a movie; you can see in the plot when it was in theaters.

Sales rank for McEwan's Atonement

At this point, I'm not sure why Amazon ranks books below the top thousand, except for a sort of geek factor. For all of the books here, it is basically impossible to say something like `Tivo for Dummies is ranked around 100,000,' since the ranking jumps by an order of magnitude almost daily. Similarly, there's no point saying `Atonement is ranked higher than Great Expectations', since you have a 50-50 chance of being wrong tomorrow. All we get is a very broad ballpark figure (a football field figure?), and a too-good impression of how many hours ago the last sale was made. Those of us interested in the sales rank of books outside Oprah's picks would be better served if the system were less volatile. In technical terms, if my guess that the score experiences exponential decay is correct, then the ranking system would be more useful to those of us watching the long tail if the decay factor were set to a smaller value.

The data looks to me like an exponential decay system, where you have a current score S[t] which goes up by some amount every sale, but drifts down by some discount rate every period, S[t+1] = λS[t]. [Thus, if there were no sales events, your score would be S[t] = S[0]λ^t.]
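The decay-and-shock score can be sketched numerically (this is our sketch, not the post's fitting code), using the constants the post reports fitting: initial score 0.58, per-period decay 0.96, and a sale shock of 0.79 times the distance to the top. The function name and the sale times are invented:

```python
def score_path(sale_periods, T, lam=0.96, shock=0.79, s0=0.58):
    """Score s in (0, 1): decays by a factor lam each period; on a
    sale it moves toward 1 by shock * (1 - s)."""
    s, path = s0, []
    sales = set(sale_periods)
    for t in range(T):
        s *= lam
        if t in sales:
            s += (1.0 - s) * shock
        path.append(s)
    return path

# Two invented sales across forty 3-hour periods:
path = score_path(sale_periods=[0, 20], T=40)
# The score jumps at each sale and decays geometrically in between:
# the sawtooth the plots show, in flipped and renormalized units.
```

Note that the shock (1 - s) * 0.79 is larger when the score is low, which matches the post's observation that books at low sales ranks get bigger jumps.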
To fit this, I flipped and renormalized the rankings so that one was the highest possible ranking, and zero corresponded to a ranking of 500,000. Then, I used the following algorithm:
• The score was initialized at 0.58.
• Each period, the score is multiplied by (shrinks by) a factor of 0.96.
• If there is a sale, then the score rises by the addition of (1 - current score) * 0.79.
As you can imagine, I found those constants by minimizing the distance between the estimate and the actual data. The algorithm is an exponential decay model with λ = 0.96, and upward shocks as described. The only way I could fit the data was to make shocks when the book is at a low sales rank bigger than shocks when it has a high sales rank. There's surely a more clever way to do it.

The green line shows the exponential decay model fit to the actual data. You can decide if this is a good fit or a lousy one.
My attempts to fit the Amazon sales rank to an exponential model
You can also have a look at how the model fit to Madonna's book.

For the geeks with their own stuff to track, here's my code. You can see that it's Python glue holding together a number of tools that I take to be common POSIX tools, like wget and a copy of Gnuplot recent enough to render the PNG format. The main loop (while (1)) just runs the checkrank function, writes the time and rank to a file, and then calls Gnuplot to plot the file. The checkrank function downloads the book's page, searches for the phrase `Amazon.com Sales Rank: ###' and returns the ### part.

import amazon  # assuming you've saved the file below as amazon.py.
amazon.onebookloop("03030303", "output_graph.png")

Notice that the ASIN number should be a text string, since if it were a number an initial zero would often be dropped, and some ASINs have Xes at the end. This is pretty rudimentary; in the spirit of open source, I'd be happy to post your improvements.

#(c)2006 Eric Blair. Licensed under the GNU GPL v 2.
import re, os, time

def checkrank(asin):
    # Download the book's page and scrape the sales rank out of the HTML.
    (w_in, w_out, w_err) = os.popen3(
        'wget "http://amazon.com/gp/product/%s/" -O - ' % (asin,))
    exp = re.compile(r'Amazon.com Sales Rank:</b> #([^ ]*) in Books')
    result = exp.search(w_out.read())
    if result is not None:
        return result.groups()[0]
    return None

def onebookloop(asin, outfile):
    # Write the gnuplot preamble once; the rank data gets appended to it
    # at plot time via the inline plot '-' form.
    plotfile = "plot.%s" % (asin,)
    if not os.path.isfile(plotfile):
        f = open(plotfile, 'w')
        f.write("""set term png
set xdata time
set timefmt "%Y; %m; %d; %H;"
set yrange [1:*] reverse
plot '-' using 1:5 title "sales rank"
""")
        f.close()
    while 1:
        t = time.localtime()
        r = None
        while r is None:        # retry until the scrape succeeds
            r = checkrank(asin)
        f = open("rankings.%s" % (asin,), 'a')
        f.write("%i; %i; %i; %i; %s\n" % (
            t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, r))
        f.close()
        # Ranks come back with commas (e.g. 1,234,567); strip them, then
        # feed the plot commands plus the data to gnuplot.
        os.system("cat %s rankings.%s | sed -e 's/,//g' | gnuplot > %s"
                  % (plotfile, asin, outfile))
        time.sleep(3*60*60)  # 3 hours.

[link] [9 comments]
[Previous entry: "The abject failure of IP PR"] [Next entry: "The continuing Byzantine-Ottoman war"]
Replies: 9 comments

on Thursday, August 10th, techne said
Very fun. If only I had a book to track.

on Thursday, August 10th, Andy said
You can also use this formula to judge how Amazon's business is doing overall. When the book drifts down at a faster rate, that means that more books underneath it are selling; when it drifts down more slowly, fewer books below it are being sold.

on Thursday, August 10th, Miss ALS of San Diego, of course said
You're a geek. I know why Tivo for Dummies sells--how funny would that be as a gift Christmas morning with your tivo? Answer? Funny. Very very funny. We had a good laugh at the bookstore as I recall.

on Sunday, August 20th, AC said
Heh. Neat.

on Sunday, March 25th, Mike said
I know why Tivo for Dummies sells -- I work at a call center for Directv, and get calls all the time like: "How do I record a show, how do I erase a show, how do I set up to record something regularly?". It really makes one sad for the future of society, because more than half of these people have VCRs and can use them well.
Just remember... It may be obvious, it may be EXACTLY THE SAME as something they ALREADY USE, but it ISN'T what they already use, so obviously their pre-existing knowledge is worthless. This holds true for web sites and computer programs as well.

on Tuesday, March 17th, the author said
I'm not much of an expert on the details of Python, to tell you the truth, just decent enough to bang together scripts like this. So I don't think I could help you debug the partial script you put up here. There are many sites that will help you better than I could; e.g., everything I know I learned from Dive into Python.

on Wednesday, August 11th, The Bargain Book Mole said
Impressive looking work for sure- My brain would like to draw some conclusions about sales rank and frequency of sales per month, but I am stuck wrapping my head around your math :)

on Wednesday, January 12th, Le Creuset on Sale said
Sales rank is just a figure to take into consideration. There are many other factors that are just as important (i.e. category, usefulness, etc). Even though one can correlate sales rank to these items, a lot also comes down to feel.
Le Monde puzzle [#28]
July 22, 2011
By xi'an

The puzzle of last weekend in Le Monde was about finding the absolute rank of x[9] when given the relative ranks of x[1],…,x[8] and the possibility to ask for relative ranks of three numbers at a time. In R terms, this means being able to use

> rank(x[-9])
[1] 1 7 4 6 8 3 2 5
> rank(x[1:3])
[1] 1 3 2

or yet being able to sort the first 8 components of x

> x[-9] = sort(x[-9])
> rank(x[c(1,8,9)])
[1] 1 3 2

and then use rank() over triplets to position x[9] in the list. If x[9] is an extreme value, calling for its rank relative to x[1] and x[8] is sufficient to find its position. Else, repeating with the rank relative to x[2] and x[7], then relative to x[3] and x[6], etc., produces the absolute rank of x[9] in at most 4 steps. However, if we first get the rank relative to x[1] and x[8], then the rank relative to x[4] and x[5], we only need at most an extra step to find the absolute rank of x[9], thus at most 3 steps in all. Better yet, if we start with the rank relative to x[3] and x[6], we are left with a single rank call to determine the absolute rank of x[9], since, whatever the position of x[9] relative to x[3] and x[6], there are only two remaining numbers to compare x[9] with. So we can find the absolute rank of x[9] in exactly two calls to rank.

The second part of the puzzle is to determine, for an unknown series x of 80 numbers, the maximum number of calls to rank(c(x[1],x[2],x[3])) needed to obtain rank(x). Of course, the dumb solution is to check all 82160 triplets, but I presume a sort of bubble sort would do better, even though Wikipedia tells me that bubble sort has worst-case and average complexity both О(n^2), while quicksort has an average (if not worst-case) complexity of O(n log n)…. If we follow the one-step procedure obtained in the first part, starting with three numbers whose relative rank is obtained with one call, we need 1 + (9-3)*2 + ((1+3*8+2)-9)*3 + (80-27)*4 = 279 calls to rank, since 80 < 1+3*(3*8+2)+2 = 81.
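The two-call strategy from the first part is easy to verify by simulation. The sketch below uses Python rather than R, with 0-based indexing; it assumes the first eight values are already sorted, treats a three-way rank() as an oracle returning 1-based relative ranks, and all function names are mine.

```python
import random

def rank3(triple):
    """Mimic R's rank() on three distinct values: 1-based ranks."""
    order = sorted(range(3), key=lambda i: triple[i])
    r = [0, 0, 0]
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def absolute_rank(xs):
    """xs[0..7] sorted ascending, xs[8] arbitrary; return the rank of
    xs[8] among all nine values using exactly two calls to rank3."""
    # First call: place xs[8] relative to the 3rd and 6th order
    # statistics (x[3] and x[6] in the puzzle's 1-based notation).
    pos = rank3((xs[2], xs[5], xs[8]))[2]
    if pos == 1:   # below xs[2]: the answer is 1, 2 or 3
        return rank3((xs[0], xs[1], xs[8]))[2]
    if pos == 2:   # between xs[2] and xs[5]: the answer is 4, 5 or 6
        return 3 + rank3((xs[3], xs[4], xs[8]))[2]
    return 6 + rank3((xs[6], xs[7], xs[8]))[2]   # above xs[5]: 7, 8 or 9
```

Each branch issues exactly one further call, so two calls always suffice, matching the argument above.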
Find a Milwaukee, WI ACT Tutor

...These informal assessments can be one or more of the following: silent and oral reading comprehension, learning modality, study skills inventory, etc. Then a diagnostic TEAS is taken for each part of the TEAS (reading, mathematics, science, English and language usage). The diagnostics generall...
36 Subjects: including ACT Math, English, GED, reading

...I, myself, am a visual learner. If I don't see a math problem worked out step-by-step, it's hard for me to understand how to navigate from the question to the correct answer. My greatest strength, as a tutor, is that I'm able to relate to the student; I'm able to get down to their level and understand what it is they are not understanding.
32 Subjects: including ACT Math, reading, English, calculus

...My experiences include teaching a summer algebra class through UWM's Upward Bound Program, tutoring in Political Science, Economics, and Statistics for the University of Oklahoma Athletics Department, managing a team of nearly 40 volunteer tutors at Madison West High School's Schools of Hope prog...
16 Subjects: including ACT Math, calculus, statistics, geometry

...Anything from basic grammar and spelling, all the way to things like speeches, 20 page research papers and the like. Besides Math and English, I have been a good student in things like Physics, Biology, Science, History etc. I have learned to figure out how each student learns best, and use that to teach them and help them understand material better.
21 Subjects: including ACT Math, reading, writing, ESL/ESOL

...Many times, math looks hard because all of the little tricks and turns that a teacher can take when they know math. I may know that the variable "x" is different from the multiplication sign "x" in scientific notation but I realize that they look the same to you. Math classes today are more about thinking and applying than just crunching numbers.
14 Subjects: including ACT Math, calculus, statistics, algebra 1
A tree-based method for the rapid screening of chemical fingerprints

The fingerprint of a molecule is a bitstring based on its structure, constructed such that structurally similar molecules will have similar fingerprints. Molecular fingerprints can be used in an initial phase of drug development for identifying novel drug candidates by screening large databases for molecules with fingerprints similar to a query fingerprint. In this paper, we present a method which efficiently finds all fingerprints in a database with Tanimoto coefficient to the query fingerprint above a user-defined threshold. The method is based on two novel data structures for rapid screening of large databases: the kD grid and the Multibit tree. The kD grid is based on splitting the fingerprints into k shorter bitstrings and utilising these to compute bounds on the similarity of the complete bitstrings. The Multibit tree uses hierarchical clustering and similarity within each cluster to compute similar bounds. We have implemented our method and tested it on a large real-world data set. Our experiments show that our method yields approximately a three-fold speed-up over previous methods. Using the novel kD grid and Multibit tree significantly reduces the time needed for searching databases of fingerprints. This will allow researchers to (1) perform more searches than previously possible and (2) easily search large databases.

1 Introduction

When developing novel drugs, researchers are faced with the task of selecting a subset of all commercially available molecules for further experiments. There are more than 8 million such molecules available [1], and it is not feasible to perform computationally expensive calculations on each one. Therefore, the need arises for fast screening methods for identifying the molecules that are most likely to have an effect on a given disease. It is often the case that a molecule with some effect is already known, e.g. from an already existing drug.
An obvious initial screening method presents itself, namely to identify the molecules which are similar to this known molecule. To implement this screening method one must decide on a representation of the molecules and a similarity measure between representations of molecules. Several representations and similarity measures have been proposed [2-4]. We focus on molecular fingerprints. A fingerprint for a given molecule is a bitstring of size N which summarises structural information about the molecule [3]. Fingerprints should be constructed such that if two fingerprints are very similar, so are the molecules which they represent. There are several ways of measuring the similarity between fingerprints [4]. We focus on the Tanimoto coefficient, which is a normalised measure of how many bits two fingerprints share. It is 1.0 when the fingerprints are the same, and strictly smaller than 1.0 when they are not. Molecular fingerprints in combination with the Tanimoto coefficient have been used successfully in previous studies [5].

We focus on the screening problem of finding all fingerprints in a database with Tanimoto coefficient to a query fingerprint above a given threshold, e.g. 0.9. Previous attempts have been made to improve the query time. One approach is to reduce the number of fingerprints in the database for which the Tanimoto coefficient to the query fingerprint has to be computed explicitly. This includes storing the fingerprints in the database in a vector of bins [6], or in a trie-like structure [7], such that searching certain bins, or parts of the trie, can be avoided based on an upper bound on the Tanimoto coefficient between the query fingerprint and all fingerprints in individual bins or subtries. Another approach is to store an XOR summary, i.e. a shorter bitstring, of each fingerprint in the database, and use these as rough upper bounds on the maximal Tanimoto coefficients achievable, before calculating the exact coefficients [8].
In this paper, we present an efficient method for the screening problem, which is based on an extension of an upper bound given in [6] and two novel tree-based data structures for storing and retrieving fingerprints. To further reduce the query time we also utilise the XOR summary strategy [8]. We have implemented our method and tested it on a realistic data set. Our experiments clearly demonstrate that it is superior to previous strategies, as it yields a three-fold speed-up over the previous best method.

2 Methods

A fingerprint is a bitstring of length N. Let A and B be bitstrings, and let |A| denote the number of 1-bits in A. Let A ∧ B denote the logical and of A and B, that is, A ∧ B is the bitstring that has 1-bits in exactly those positions where both A and B do. Likewise, let A ∨ B denote the logical or of A and B, that is, A ∨ B is the bitstring that has 1-bits in exactly those positions where either A or B do. With this notation the Tanimoto coefficient becomes:

S[T](A, B) = |A ∧ B| / |A ∨ B|

Figure 1 shows an example of the usage of this notation. In the following, we present a method for finding all fingerprints B in a database of fingerprints with a Tanimoto coefficient above some query-specific threshold S[min] to a query fingerprint A. The method is based on two novel data structures, the kD grid and the Multibit tree, for storing the database of fingerprints.

Figure 1. Example calculation of Tanimoto coefficient. Example of calculation of the Tanimoto coefficient S[T](A, B), where A = 101101 and B = 110100.

2.1 kD grid

Swamidass et al. showed in [6] that if |A| and |B| are known, S[T](A, B) can be upper-bounded by

S[max](A, B) = min(|A|, |B|) / max(|A|, |B|)

This bound can be used to speed up the search, by storing the database of fingerprints in N + 1 buckets such that bitstring B is stored in the |B|th bucket. When searching for bitstrings similar to a query bitstring A it is sufficient to examine the buckets where S[max] ≥ S[min]. We have generalised this strategy.
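As a concrete illustration of these definitions (the function names are mine), small bitstrings can be handled as Python integers, with |A| computed as a popcount:

```python
def popcount(x):
    """|A|: the number of 1-bits in the bitstring."""
    return bin(x).count("1")

def tanimoto(a, b):
    """S_T(A, B) = |A and B| / |A or B|."""
    return popcount(a & b) / float(popcount(a | b))

def s_max(a, b):
    """The Swamidass et al. bound using only the counts |A| and |B|."""
    na, nb = popcount(a), popcount(b)
    return min(na, nb) / float(max(na, nb))

# Figure 1's example: A = 101101, B = 110100.
A, B = int("101101", 2), int("110100", 2)
# |A ∧ B| = |100100| = 2 and |A ∨ B| = |111101| = 5, so S_T = 0.4,
# while the bound min(4, 3)/max(4, 3) = 0.75 indeed lies above it.
```

The bound only needs the two bit counts, which is why fingerprints can be binned by |B| and whole bins skipped without touching their contents.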
Select a number of dimensions k and split the bitstrings into k equally sized fragments, A = A[1]·A[2]·...·A[k] and B = B[1]·B[2]·...·B[k], where X·Y is the concatenation of bitstrings X and Y. The values |A[1]|, |A[2]|, ..., |A[k]| and |B[1]|, |B[2]|, ..., |B[k]| can be used to obtain a tighter bound than S[max]. Let N[i] be the length of A[i] and B[i]. The kD grid is a k-dimensional cube of size (N[1] + 1) × (N[2] + 1) × ... × (N[k] + 1). Each grid point is a bucket and the fingerprint B is stored in the bucket at coordinates (n[1], n[2], ..., n[k]), where n[i] = |B[i]|. An example of such a grid is illustrated in Fig. 2. By comparing the partial coordinates (n[1], n[2], ..., n[i]) of a given bucket to |A[1]|, |A[2]|, ..., |A[i]|, where i ≤ k, it is possible to upper-bound the Tanimoto coefficient between A and every B in that bucket. By looking at the partial coordinates (n[1], n[2], ..., n[i-1]), we can use this to quickly identify those partial coordinates (n[1], n[2], ..., n[i]) that may contain fingerprints B with a Tanimoto coefficient above S[min].

Figure 2. 3D grid. Example of a kD grid with k = 3. B is split into smaller substrings and the count of 1-bits in each determines where B is placed in the grid. The small inner cube shows the placement of B.

Assume the algorithm is visiting a partial coordinate at level i in the data structure. The indices n[1], n[2], ..., n[i-1] are known, but we need to compute which n[i] to visit at this level. The entries to be visited further down the data structure, n[i+1], ..., n[k], are, of course, unknown at this point. A bound can be calculated in the following manner. The n[i]s to visit lie in an interval, so it is sufficient to compute the upper and lower indices of this interval, n[u] and n[l] respectively. Solving the bound for n[i], and ensuring that the result is an integer in the range 0...N[i], gives expressions for n[l] and n[u] in terms of the number of 1-bits in the logical and, respectively the logical or, of the first fragments of the bitstrings.
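A sketch of the bucket coordinates and the fragment-wise bound follows. This is my formulation of the "tighter bound than S[max]" above: since |A ∧ B| ≤ Σ min(|A[i]|, |B[i]|) and |A ∨ B| ≥ Σ max(|A[i]|, |B[i]|), the ratio of those sums upper-bounds S[T]. It assumes N is divisible by k, represents fingerprints as 0/1 strings, and the function names are mine.

```python
def coords(bits, k):
    """Bucket coordinates (n[1], ..., n[k]): 1-bit counts per fragment."""
    size = len(bits) // k          # fragments assumed equally sized
    return tuple(bits[i*size:(i+1)*size].count("1") for i in range(k))

def kd_bound(ca, cb):
    """Upper bound on S_T from fragment counts alone; with k = 1 this
    reduces to the S_max bound min(|A|,|B|)/max(|A|,|B|)."""
    num = sum(min(x, y) for x, y in zip(ca, cb))
    den = sum(max(x, y) for x, y in zip(ca, cb))
    return float(num) / den if den else 1.0

A, B = "10110100", "11110000"
# With k = 1 both strings have four 1-bits, so the bound is 1.0; with
# k = 2 the mismatch between halves tightens it to (3+0)/(4+1) = 0.6.
```

Larger k can only tighten the bound, at the cost of a larger grid to traverse, which is exactly the trade-off the experiments below explore.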
Note that in the case where k = 1 this data structure simply becomes the list presented by Swamidass et al. [6], and in the case where k = N the data structure becomes the binary trie presented by Smellie [7]. We have implemented the kD grid as a list of lists, where any list containing no fingerprints is omitted. See Fig. 3 for an example of a 4D grid containing four bitstrings. The fingerprints stored in a single bucket in the kD grid can be organised in a number of ways. The most naive approach is to store them in a simple list which has to be searched linearly. We propose to store them in tree structures, as explained below.

Figure 3. 4D grid. Example of a 4D grid containing four bitstrings, stored as in our implementation. The dotted lines indicate the splits between B[i] and B[i+1].

2.2 Singlebit tree

The Singlebit tree is a binary tree which stores the fingerprints of a single bucket from a kD grid. At each node in the tree a position in the bitstring is chosen. All fingerprints with a zero at that position are stored in the left subtree while all those with a one are stored in the right subtree. This division is continued recursively until all the fingerprints in a given node are the same. When searching for a query bitstring A in the tree it now becomes possible, by comparing A to the path from the root of the tree to a given node, to compute an upper bound on S[T](A, B) for every fingerprint B in the subtree of that given node. Given two bitstrings A and B, let M[ij] be the number of positions where A has an i and B has a j. There are four possible combinations of i and j, namely M[00], M[01], M[10] and M[11]. The path from the root of a tree to a node defines lower limits m[ij] on M[ij] for every fingerprint in the subtree of that node. Let u[ij] denote the unknown difference between M[ij] and m[ij], that is u[ij] = M[ij] - m[ij].
Remember that S[T](A, B) = M[11] / (M[11] + M[10] + M[01]). By letting u = u[00] + u[01] + u[10] + u[11] = N - m[00] - m[01] - m[10] - m[11] denote the number of positions not yet accounted for, an upper bound on the Tanimoto coefficient of any fingerprint B in the subtree can then be calculated as

S[T](A, B) ≤ (m[11] + u) / (m[11] + m[10] + m[01] + u)

since the coefficient is largest when all unknown positions are assumed to be shared 1-bits. When building the tree data structure it is not immediately obvious how best to choose which bit positions to split the data on, at a given node. The implemented approach is to go through all the children of the node and choose the bit which best splits them into two parts of equal size, in the hope that this creates a well-balanced tree. It should be noted that the tree structure that gives the best search time is not necessarily a well-balanced tree. Figure 4 shows an example of a Singlebit tree.

Figure 4. Singlebit tree. Example of a Singlebit tree. The black squares mark the bits chosen for the given node, while the grey squares mark bits chosen at an ancestor. The grey triangles represent subtrees omitted to keep this example simple. Assume we are searching for the bitstring A in the example. When examining the node marked by the arrow we have the knowledge shown in B^? about all children of that node. Comparing A against B^? gives us m[00] = 0, m[01] = 0, m[10] = 1 and m[11] = 2, from which the upper bound on S[T](A, B) and S[T](A, B') follows.

The Singlebit tree can also be used to store all the fingerprints in the database without a kD grid. In this case, however, |B| is no longer available, and thus the S[max] bound cannot be applied first.

2.3 Multibit tree

The experiments in Sec. 3 unfortunately show that using the kD grid combined with Singlebit trees decreases performance compared to using the kD grid and simple lists. The fingerprints used in our experiments have a length of 1024 bits. In our experiments no Singlebit tree was observed to contain more than 40,000 fingerprints. This implies that the expected height of the Singlebit trees is no more than 15 (as we aim for balanced trees, cf. above). Consequently, the algorithm will only obtain information about 15 out of 1024 bits before reaching the fingerprints.
A strategy for obtaining more information is to store a list of bit positions, along with an annotation of whether each bit is zero or one, in each node. The bits in this list are called the match-bits. The Multibit tree is an extension of the Singlebit tree, where we no longer demand that all children of a given node are split according to the value of a single bit. In fact we only demand that the data is arranged in some binary tree. The match-bits of a given node are computed as all bits that are not a match-bit in any ancestor and for which all fingerprints in the leaves of the node have the same value. Note that a node could easily have no match-bits. When searching through the Multibit tree, the query bitstring A is compared to the match-bits of each visited node and m[00], m[01], m[10] and m[11] are updated accordingly. Only subtrees for which the resulting upper bound is at least S[min] are visited. Again, the best way to build the tree is not obvious. Currently, the same method as for the Singlebit trees is used. For a node with a given set of fingerprints, choose the bit which has a 1-bit in as close as possible to half of the fingerprints. Split the fingerprints into two sets, based on the state of the chosen bit in each fingerprint. Continue recursively in the two children of the node. Figure 5 shows an example of a Multibit tree. To reduce the memory consumption of the inner nodes, the splitting is stopped and leaves created for any node that has fewer than some limit l fingerprints. Based on initial experiments, not included in this paper, l is chosen as 6, which reduces memory consumption by more than a factor of two and has no significant impact on speed. An obvious alternative way to build the tree would be to base it on some hierarchical clustering method, such as Neighbour Joining [9].

Figure 5. Multibit tree. An example of a Multibit tree. The black squares mark the match-bits and their annotation. Grey squares show bits that were match-bits at an ancestor.
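The build-and-search procedure just described can be sketched as follows. This is a toy version under stated assumptions, not the paper's Java implementation: fingerprints are short 0/1 strings, the split bit is the one closest to a 50/50 split, the leaf limit is small, and pruning assumes every position not fixed by match-bits is a shared 1-bit, giving the bound (m11 + u)/(m11 + m10 + m01 + u) with u unknown positions. All names are mine.

```python
N = 8  # toy fingerprint length

class Node(object):
    def __init__(self, match_bits, left=None, right=None, leaves=None):
        self.match_bits = match_bits   # [(position, '0' or '1'), ...]
        self.left, self.right = left, right
        self.leaves = leaves           # fingerprints, leaf nodes only

def tanimoto(a, b):
    both = sum(1 for x, y in zip(a, b) if x == y == "1")
    either = sum(1 for x, y in zip(a, b) if x == "1" or y == "1")
    return float(both) / either if either else 1.0

def build(fps, used=frozenset(), limit=2):
    # Match-bits: positions not recorded by an ancestor on which every
    # fingerprint below this node agrees.
    match = [(i, fps[0][i]) for i in range(N)
             if i not in used and all(f[i] == fps[0][i] for f in fps)]
    used = used | set(i for i, _ in match)
    if len(fps) <= limit or len(used) == N:
        return Node(match, leaves=fps)
    # Split on the unrecorded bit whose 1-count is closest to half; the
    # children rediscover it as one of their own match-bits.
    split = min((i for i in range(N) if i not in used),
                key=lambda i: abs(sum(f[i] == "1" for f in fps)
                                  - len(fps) / 2.0))
    return Node(match,
                left=build([f for f in fps if f[split] == "0"], used, limit),
                right=build([f for f in fps if f[split] == "1"], used, limit))

def bound(m11, m10, m01, unknown):
    # Best case: every unknown position is a shared 1-bit.
    den = m11 + m10 + m01 + unknown
    return float(m11 + unknown) / den if den else 1.0

def search(node, query, s_min, m11=0, m10=0, m01=0, known=0):
    for i, b in node.match_bits:   # fold this node's match-bits into m_ij
        qa, qb = query[i] == "1", b == "1"
        m11 += qa and qb
        m10 += qa and not qb
        m01 += (not qa) and qb
        known += 1
    if bound(m11, m10, m01, N - known) < s_min:
        return []                  # prune the whole subtree
    if node.leaves is not None:
        return [f for f in node.leaves if tanimoto(query, f) >= s_min]
    return (search(node.left, query, s_min, m11, m10, m01, known)
            + search(node.right, query, s_min, m11, m10, m01, known))
```

Because the bound never underestimates the true coefficient of any fingerprint in a subtree, pruning is safe: the search returns exactly the fingerprints a linear scan with tanimoto() would, only visiting fewer of them.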
Grey triangles are subtrees omitted to keep this example simple. When visiting the node marked by the arrow we get m[00] = 1, m[01] = 1, m[10] = 1 and m[11] = 2, from which the upper bound on S[T](A, B) and S[T](A, B') follows.

3 Experiments

We have implemented the kD grid and the Single- and Multibit trees in Java. The implementation along with all test data is available at http://www.birc.au.dk/~tgk/TanimotoQuery/. Using these implementations, we have constructed several search methods corresponding to the different combinations of the data structures. We have examined the kD grid for k = 1, 2, 3 and 4, where the fingerprints in the buckets are stored in a simple list, a Singlebit tree or a Multibit tree. For purposes of comparison, we have implemented a linear search strategy that simply examines all fingerprints in the database. We have also implemented the strategy of "pruning using the bit-bound approach first, followed by pruning using the difference of the number of 1-bits in the XOR-compressed vectors, followed by pruning using the XOR approach" from [8]. This strategy will hereafter simply be known as Baldi. A trick of comparing the XOR-folded bitstrings [8] immediately before computing the true Tanimoto coefficient is used in all our strategies to improve performance. The length of the XOR summary is set to 128, as suggested in [8]. An experiment, not included in this paper, confirmed that this is indeed the optimal size of the XOR fingerprint. We have chosen to reimplement related methods in order to make an unbiased comparison of the running times, independent of programming language differences.

The methods are tested on a real-world data set by downloading version 8 of the ZINC database [1], consisting of roughly 8.5 million commercially available molecules. Note that only 2 million of the molecules have actually been used, due to memory constraints. The distribution of one-bits is presented in Fig.
6, where it can be seen that there are many buckets in the 1D grid that will be empty.

Figure 6. Distribution of number of bits in fingerprints. Distribution of the number of bits set in the 1024-bit CDK fingerprints from the ZINC database.

The experiments were performed on an Intel Core 2 Duo running at 2.5 GHz and with 2 GB of RAM. Fingerprints were generated using the CDK fingerprint generator [10], which has a standard fingerprint size N of 1024. One molecule timed out and did not generate a fingerprint. We have performed our tests on different sizes of the data set, from 100,000 to 2,000,000 fingerprints in 100,000 increments. For each data set size, the entire data structure is created. Next, the first 100 fingerprints in the database are used for queries. We measure the query time and the space consumption.

4 Results

Figure 7 shows the average query time for the different strategies and different values of k plotted against the database size. We note that the Multibit tree in a 1D grid is best for all sizes. Surprisingly, the simple list, for an appropriately high value of k, is faster than the Singlebit tree, yet slower than the Multibit tree. This is probably due to the fact that the Singlebit trees are too small to contain sufficient information for an efficient pruning: the entire tree is traversed, which is slower than traversing the corresponding list implementation. All three approaches (List, Singlebit and Multibit trees) are clearly superior to the Baldi approach, which in turn is better than a simple linear search (with the XOR folding trick).

From Fig. 7a we notice that the List strategy seems to become faster for increasing k. This trend is further investigated in Fig. 8, which indicates that a k of three or four seems optimal. As k grows, the grid becomes larger and more time-consuming to traverse while the lists in the buckets become shorter.
For sufficiently large values of k, the time spent pruning buckets exceeds the time saved by not visiting buckets containing superfluous fingerprints. The Singlebit tree data in Fig. 7b indicates that the optimal value of k is three. It seems the trees become too small to contain enough information for an efficient pruning when k reaches four. In Fig. 7c we see the Multibit tree. Again, a too large k will actually slow down the data structure. This can be explained with arguments similar to those for the Singlebit tree. Surprisingly, it seems a k as low as one is optimal.

Figure 7. Average query time, different database size. Different strategies tested with k = 1, ..., 4. Each experiment is performed 100 times, and the average query time is presented. All experiments are performed with an S[min] of 0.9. The three graphs (a) - (c) show the performance of the three bucket types for the different values of k. The best k for each method is presented in graph (d) along with the simple linear search results and Baldi.

Figure 8. Average query time on lists, different k. Experiments with simple lists for k = 1, ..., 10. Each test is performed 100 times, and the average query time is presented. All experiments are performed with an S[min] of 0.9. Missing data points are from runs with insufficient memory.

Figure 9 shows the memory usage per fingerprint as a function of the number of loaded fingerprints. The first thing we note is that the Multibit tree uses significantly more memory than the other strategies. This is due to the need to store a variable number of match-bits in each node. The second thing to note is the space usage for different k's. In the worst case, where all buckets contain fingerprints, the memory consumption per fingerprint, for the grid alone, becomes (N[1] + 1) × (N[2] + 1) × ... × (N[k] + 1) / n, where n is the number of fingerprints in the database. Thus we are not surprised by our actual results.

Figure 9. Average space consumption, different database size.
The memory consumption of the data structure for different strategies tested with k = 1, ..., 4. The three graphs (a) - (c) show the performance of the three bucket types for the different values of k. The k yielding the fastest query time for each method is presented in graph (d) along with the simple linear search results and Baldi.

Figure 10 shows the search time as a function of the Tanimoto threshold. In general we note that the simpler and more naive data structures perform better for a low Tanimoto threshold. This is due to the fact that, for a low Tanimoto threshold, a large part of the entire database will be returned. In these cases very little pruning can be done, and it is faster to run through a simple list than to traverse a tree and compare bits at each node. Of course we should remember that we are interested in performing searches for similar molecules, which means large Tanimoto thresholds.

Figure 10. Average query time, different threshold. The best strategies from Fig. 7 tested for different values of S[min]. All experiments are performed 100 times, with 2,000,000 fingerprints in the database, and the average query time is presented.

The reason why linear search is not constant time for a constant data set is that, while it will always visit all fingerprints, the time for visiting a given fingerprint is not constant due to the XOR folding trick. The running times of the different methods depend on the number of Tanimoto coefficients between pairs of bitstrings that must be calculated explicitly. This number depends on the method and not on the programming language in which the method is implemented, and is thus an implementation-independent performance measure. Figure 11 presents the fraction of coefficients calculated for varying numbers of fingerprints and a Tanimoto threshold of 0.9. Each method seems to calculate a fairly constant fraction of the fingerprints: only the Multibit tree seems to vary with the number of fingerprints.
This is most likely due to the fact that more fingerprints result in larger trees with more information.

Figure 11. Fraction of coefficients calculated, different database size. The fraction of the database for which the Tanimoto coefficient is calculated explicitly, measured for different numbers of fingerprints. The Tanimoto threshold is kept at 0.9.

The result is consistent with the execution time experiments: the methods have the same relative ranking when measuring the fraction of coefficients calculated as when measuring the average query time in Fig. 7. The fraction of coefficients calculated has also been measured for varying Tanimoto thresholds with 2,000,000 fingerprints. The result is presented in Fig. 12. It seems that the relation between the methods is consistent across Tanimoto thresholds. Surprisingly, the Multibit tree seems to reduce the fraction of fingerprints for which the Tanimoto coefficient has to be calculated even for small values of the Tanimoto threshold: the three other methods seem to perform very similarly up to a threshold of 0.8, whereas the Multibit tree differentiates itself at a threshold as low as 0.2.

Figure 12. Fraction of coefficients calculated, different threshold. The fraction of the database for which the Tanimoto coefficient is calculated explicitly, measured for a varying Tanimoto threshold and 2,000,000 fingerprints. The results seem to be consistent with the average query time presented in Fig. 10.

5 Conclusion

In this paper we have presented a method for finding all fingerprints in a database with Tanimoto coefficient to a query fingerprint above a user-defined threshold. Our method is based on a generalisation of the bounds developed in [6] to multiple dimensions. Our generalisation results in a tighter bound, and experiments indicate that this results in a performance increase. Furthermore, we have examined the possibility of utilising trees as secondary data structures in the buckets.
Again, our experiments clearly demonstrate that this leads to a significant performance increase.

Our methods allow researchers to search larger databases faster than previously possible. The use of larger databases should increase the likelihood of finding relevant matches. The faster query times decrease the effort and time needed to do a search. This allows more searches to be done, either for more molecules or with different thresholds S[min] on the Tanimoto coefficient. Both of these features increase the usefulness of fingerprint-based searches for the researcher in the laboratory.

Our method is currently limited by the rather large memory consumption of the Multibit tree. Another implementation might remedy this situation somewhat. Otherwise we suggest an I/O-efficient implementation where the tree is kept on disk. To increase the speed of our method further, we are aware of two approaches. Firstly, the best way to construct the Multibit trees remains uninvestigated. Secondly, a tighter coupling between the Multibit tree and the kD grid would allow us to use grid information in the Multibit tree: in the kD grid we have information about each fragment of the fingerprints which is not used in the current tree bounds.

7 Authors' contributions

The project was initiated by TGK, who also came up with the SingleBit tree. JN invented the kD grid and the Multibit tree. All data structures were implemented, refined and benchmarked by JN and TGK. TGK, JN and CNSP wrote the article. CNSP furthermore functioned in an advisory role.
The Godot We Know part Bet

October 24, 2010

As noted before, it's critical to all physical arguments for the existence of God to prove that the age of the universe is not only finite but young, in a relative sense. The reason for this is the heart of Father Spitzer's argument:

1. Of all the imaginable sets of physical constants (such as the speed of light in a vacuum, the gravitational constant, Planck's constant, etc.) that define a universe, there are extraordinarily more sets that define a universe in which life as we know it is not possible, than there are sets that define a universe in which life is possible. That is, if you randomly define the physical constants, you'll almost certainly end up with a universe where life as we know it simply cannot exist. It takes a very special set of constants for life even to be possible. For example, if gravity decreased with the cube of the distance from matter, rather than the square of the distance (as it does in our universe), gravity probably wouldn't be strong enough to pull matter together into planets, stars, solar systems, and galaxies. We would have a slurry of molecules permeating the universe, and we cannot really imagine life arising from such a uniform putty.

2. So if we assume that physical constants randomly assume values around the time of a Big-Bang event, the odds of any one particular universe, defined by a random set of constants, being "life-friendly" (allowing life to arise) are vanishingly small. Pick a random universe and it almost certainly cannot give rise to life.

3. Yet we know at least one universe allowed the existence of life, because, well, here we are!

4.
Now, if a literally infinite time is available for repeated universes to emerge, then every possible value for the physical constants will appear at some point; every possible combination of values will appear at some point; and that means that every possible life-friendly universe is certain to appear -- at some point. It won't happen often, but many, many life-friendly universes will eventually emerge.

5. However, if there is only a limited, finite time available for emerging universes; and if that finite time is relatively small, compared to the odds against a life-friendly universe; then the odds are vanishingly small that any life-friendly universe would ever emerge, anywhere, anytime... there's just not enough time. This is an important concept. If some event is very, very improbable -- a life-friendly universe created by a random arrangement of universal constants -- then it will only happen if you have an enormous number of attempts. If you flip a coin enough times, eventually it will land on its edge and balance there, landing neither heads nor tails. But you can only expect that if you flip an enormous number of coins. Since a life-friendly universe is much less likely than a coin landing on edge, the number of times a (random) universe would have to emerge in order for even a single one of them to be life-friendly is so large, we humans would have a hard time distinguishing it from infinity. It's that big.

6. Thus, if any such life-friendly universe emerges in a short length of time (see point 3), we should conclude the constants are not set randomly but are severely constrained, so as to favor those values that lead to a universe in which life can form, and subsequently evolve into intelligence and sentience. If you flip a coin only four or five times, and it lands on edge, you'd have to conclude that there was something "funny," something non-random, about that particular coin.
Same if you shuffled a deck -- and found all the cards arranged by suit and number, from the two of clubs up to the ace of spades. It must have been rigged somehow!

7. Therefore, the life-friendly coherence and order we observe in our own universe implies a strong possibility, at least, that some intelligent being created, or at least designed our universe... if, that is, only a finite and relatively short period of time has been available for universes to emerge.

So to be able to conclude that our universe was tailor-made by a designer, there must be an anomaly: A short period of time in which universes can be created via Big Bang events, therefore only a small number of universes -- but we've already gotten one that can sustain life. That's so improbable, the only plausible explanation is that... something or somebody has been busy loading the dice!

That's why it's critical to Spitzer's argument to show that the past age of any collection of universes is both finite and fairly young: Because if the age of a collection of universes is infinite, or even very large compared to the odds against, then it's no surprise that we get the occasional life-friendly universe; sheer random probability can explain it, and there's no need to invoke an intelligent designer.

But we really ought to define what we're talking about. What constitutes "the universe?" What is a "collection of universes," how can there be more than one? And what does it mean to talk about a universe "emerging?" There are several major competing cosmological conjectures; let's look first at the standard model, where there is only one universe (our own); then we'll look at alternative models that allow for many universes:

• Standard Big Bang Theory (SBBT): There is only one universe; it began with the (one and only) Big Bang about 13.7 billion years ago (13.7 Ga), and has been expanding ever since.
For a while, the rate of expansion was slowing down; but for the last several billion years, the rate has been increasing. This universe may "exist" forever, in a sense, even after the heat death (complete entropy) of all matter and energy; but it had a definite starting point (for space, time, matter, energy, and every other state or condition) not very long ago.

• Past-expanded BBT: So-called because such conjectures envision a multiplicity of universes, in sequence or parallel, making any conceivable creation event much longer ago than 13.7 billion years... which means there is more "past;" or to put it another way, these conjectures expand the available past, possibly infinitely far into the past.

It's easy to see how SBBT implies a starting point a finite number of years ago: Observation shows us that our universe is finite in size. And we can calculate the rate of expansion of our universe by observing the speed at which other galaxies are moving away from our own galaxy, the Milky Way. Picture a balloon with little dots all over it. As you inflate the balloon, every dot gets farther away from every other dot. You can determine the rate you're inflating the balloon by picking one dot (call it Home), then measuring how quickly all the nearby dots are receding from Home. Anyway, with those two parameters -- the current radius and the rate of expansion -- it's a simple mathematical exercise to "run the movie backwards," and see how long ago it is since the radius of the entire universe was zero. That, then, is how long ago the Big Bang occurred.

But what about these "past-expanded" BBTs? What kinds of universes do we have to imagine? Let's take the easiest one first.
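As a toy illustration of "running the movie backwards" (not from the post): if expansion had always proceeded at today's rate, the age of the universe is just the reciprocal of the Hubble constant. The value H0 = 70 km/s/Mpc is an assumed round number for the sketch.

```python
# Naive age estimate: with a constant expansion rate, the time since
# zero radius is 1 / H0 (the "Hubble time").
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0 / KM_PER_MPC       # assumed H0 of 70 km/s/Mpc, converted to 1/s
hubble_time_gyr = 1.0 / H0 / SECONDS_PER_YEAR / 1e9
# roughly 14 Gyr, in the ballpark of the 13.7 Ga figure; the published
# number also folds in the changing expansion rate mentioned above.
```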
Nomenclature note: As PBBTs envision multiple, possibly infinite universes, I will use the normal term universe to mean only a single universe, similar to our own; I will use the science-fictional term Multiverse to mean the collection of all of the universes, past, present, and future, envisioned by a particular model. I hope this will avoid confusion.

I'll write more about the actual argument Father Spitzer makes in later posts. For right now, I just want to introduce the various conceptualizations of "past-expanded" Multiverses that cosmologists discuss. Starting with...

Cyclic Multiverse

Picture a universe very much like ours; but in this universe, there is enough matter that eventually gravity will slow the expansion, slower and slower, until finally the universe stops expanding altogether -- and begins a slow and stately collapse. (As best astronomers can determine, this does not describe our own universe, where expansion is not slowing. Take this as a hypothetical.) The collapse gets faster and faster, until finally all matter and energy, space and time come crashing together in what cosmologists call the Big Crunch. In the end, we are left with a point of zero radius that contains the entire universe... which then, at some point, explodes outward with another Big Bang, starting the cycle all over again. This occurs over and over for an indeterminate number of times, either a very, very large finite number of cycles, or else an infinite number of cycles.

We must ask three questions about such a Multiverse in order to determine whether it implies a designer or creator:

• Is it possible for such a Multiverse to have already cycled an infinite number of times in the past? If Yes, then out of that infinity of universes (one for each cycle), we should see plenty of life-friendly universes emerge through sheer, random chance.
End of discussion; we cannot conclude that the existence of a life-friendly universe means anything at all, and certainly not a creator or designer. (If you deal an infinite number of poker hands, you'll necessarily get an infinite number of royal flushes... no stacking the deck required.)

• But even if an infinite past of cycles is not possible -- meaning there is a beginning point to this cyclic Multiverse -- is that start point so long in the past, has enough time elapsed, have enough universes emerged, that it's still plausible that one of those universes could have randomly achieved life-friendly physical constants?

Sticking with our analogy, you don't need an infinite number of poker hands. The odds of dealing yourself a royal flush are about 1 out of 650,000; if you deal 50 million hands -- each time shuffling the deck, then dealing five cards off the top -- it's almost dead certain you'll get a royal flush... in fact many royal flushes. But as big as 50 million is, it's still a finite number. Of course, it would take a very long time to deal that many hands; at five seconds per hand with no rest periods, it would take almost eight years.

So how long for a random universe to emerge that is capable of supporting life? Physicists believe the time required would be much, much longer than the full expected lifespan of our universe, from its birth in the Big Bang to its heat death, when entropy is universal and complete. So much longer, in fact, that in comparison to the time required to randomly produce a mix of physical constants that would allow for life, the lifespan of our own single universe would be indistinguishable from zero. That's a very, very long time... unreasonably long, as it turns out.

Here's a peek into a future post: The best physical estimates show that you cannot have that many Big Bangs.
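(An aside: the poker arithmetic above is easy to verify rather than take on faith. A quick sketch, using the standard count of five-card hands:)

```python
from math import comb

hands = comb(52, 5)               # all five-card hands: 2,598,960
p_royal = 4 / hands               # one royal flush per suit
odds_against = 1 / p_royal        # roughly 650,000 to 1, as stated

# expected royal flushes in 50 million deals -- "almost dead certain"
expected = 50_000_000 * p_royal   # about 77 of them
```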
Each time you cycle, entropy increases; that is, more energy ends up in cosmic background radiation and less in stars and planets and such with each successive universe; every new universe is closer to heat death than its predecessors. Long before you could "deal yourself" enough universes to get one that supports life, the entire cyclical Multiverse would have "bottomed out," reached the point of maximal entropy. And at that point, the universe no longer collapses.

Thus most cosmologists believe that a cyclic Multiverse cannot run long enough to produce a life-friendly universe by random chance; so if our own, observably life-friendly universe is part of such a chain of universes book-ended by successive Big Bangs and Big Crunches, the physical constants are not being generated by random chance: As above, something or somebody is stacking the deck.

But we still cannot conclude that it's a "somebody," not a "something," because there is yet another possible explanation for the non-randomness of physical constants and the bias towards life-friendly universes:

• If the cyclic Multiverse sits within a larger "Metaverse," and if that Metaverse has its own rules and physical laws... do those external laws limit or constrain the possible values of the physical constants, so that it's much more probable that life-friendly universes will form than we have been imagining? In other words, is the emergence of life-friendliness driven by the physical laws of the Metaverse, such that the emergence of life-friendliness is not a random function?

In your hometown, you have probably noticed a curious phenomenon that appears not to be altogether random: Over a six-month period, temperatures tend to get warmer and warmer; but then, over the next six-month period, they tend to get cooler and cooler.
Clearly this is not random; if temperatures changed entirely at random, you would expect periods of several years of hotness, followed perhaps by a cooling period of only six days, before things start to heat up again. It's tempting to conclude that the Temperature Gods simply like to make the thermometer go up and down in a fairly regular fashion. But there is an alternative explanation: Your hometown sits within a larger system -- planet Earth, whose axis tilts with respect to its orbit around our sun, Sol. As a result of this entirely natural cause, half the year your hometown is tilting towards Sol, while the other half it's tilting away. This explains the phenomena of summer and winter, respectively... with no anthropomorphic, intelligent deity required.

So even if we conclude that life-friendly universes can only emerge from a cyclic Multiverse via non-random processes, that still doesn't mean we have found our proof of God: It's possible there are other, purely natural (not supernatural) causes of non-randomosity.

Review the key points of a physical argument for God:

1. There are many more imaginable life-unfriendly universes than life-friendly universes.
2. So the odds of any one, specific universe being life friendly are vanishingly small.
3. But we know at least one has emerged, because we're sitting in it.
4. With literally infinite time, all possible universes, including the life-friendly ones, will eventually emerge, even by random chance.
5. Assume finite time, and a relatively short finite time at that; then the odds of any life-friendly universe ever emerging through random chance are also vanishingly small.
6. Putting (3) and (5) together implies that our own universe (at least!) was initiated by non-random processes.
7. And that non-randomness makes it plausible, at least, to argue that our universe was created or designed by an intelligent being.
(To discharge our assumption in (5), we must add: If only a finite, relatively short period of time has been available for universes to emerge.)

(And remember likewise that commenter Nerys Ghemor is correct: These teleological debates are all versions of the "God of the gaps" argument: Science can't explain X at this moment, so X must have been ordained by God.)

(However, these physical conjectures sit at a much more sophisticated level than, e.g., Michael Behe's "intelligent design" foolishness. The Spitzer conjectures invoke what appear to be universal physical constraints, such as the Heisenberg Uncertainty Principle, Einstein's General Relativity, and the stochastic nature of quantum mechanics, rather than the "known unknown" (to use Donald Rumsfeld's wonderful phrase) of the exact mechanism of some otherwise plausible sequence of events, like the evolution of the eye or the bacterial flagella. If the Spitzer conjecture is to fail, it must do so due to unknown unknowns, not known unknowns.)

The next two types of Multiverse we'll poke around in are (a) a Multiverse like an ocean filled with bubbles, where each bubble is a separate universe; and (b) a Multiverse comprising many more dimensions than the four we can detect (three spatial dimensions, plus the dimension of time, or duration); each universe is a "membrane," or spatial cross-section of some smaller number of dimensions within the larger-dimension Multiverse. This kind of Multiverse has many other membranes (which impatient physicists call "branes"), and they can flutter around in the Multiverse and bang into each other, with catastrophic results.

More later, as I become inspired (i.e., less lazy than usual).
Hatched by Dafydd on this day, October 24, 2010, at the time of 3:50 AM

Trackback Pings
TrackBack URL for this hissing: http://biglizards.net/mt3.36/earendiltrack.cgi/4636

The following hissed in response by: Sabba Hillel

Actually, given the logical point that the Creator would have created time as well, there is no point to the definition of "the time it would take to create this universe". Before the creation of the universe, "time" itself would not exist. We apparently define time as something that we sense in order to differentiate between events in this universe. Thus, "before" the big bang is a meaningless concept. Similarly, we cannot say that there is a certain amount of "time" in between creations.

Actually, there is no reason for G0d to have created the universe at any particular "moment" in its existence. He could have created it five seconds ago with this message on your computer screen, or 5 billion years ago and allowed it to continue until the current moment, or 5,771 years ago in order to start things moving with the advent of Mankind as we know it. Since the definition of the Creator includes Omnipotence, and there is nothing in the rules of the Universe to prevent it, we cannot argue for any particular "moment" of creation.

This is like starting a computer model with all variables set to 0, or starting it with the variables set to what they would be after a particular run. Given no round-off errors, once the simulation starts running, there would be no way of determining at which point it actually started, aside from the log records, which would not "exist" within the universe of the simulation.

Actually, for free will to exist (and one of the beliefs in Judaism is that free will is part of the purpose of the universe), it is a requirement that there be no absolute proof of creation. If such absolute proof (as opposed to valid arguments) existed, it would remove our free will.
The above hissed in response by: Sabba Hillel at October 24, 2010 7:11 AM

The following hissed in response by: Sabba Hillel

But we still cannot conclude that it's a "somebody," not a "something," because there is yet another possible explanation for the non-randomness of physical constants and the bias towards life-friendly universes:

* If the cyclic Multiverse sits within a larger "Metaverse," and if that Metaverse has its own rules and physical laws... do those external laws limit or constrain the possible values of the physical constants, so that it's much more probable that life-friendly universes will form than we have been imagining? In other words, is the emergence of life-friendliness driven by the physical laws of the Metaverse, such that the emergence of life-friendliness is not a random function?

In your hometown, you have probably noticed a curious phenomenon that appears not to be altogether random: Over a six-month period, temperatures tend to get warmer and warmer; but then, over the next six-month period, they tend to get cooler and cooler. Clearly this is not random; if temperatures changed entirely at random, you would expect periods of several years of hotness, followed perhaps by a cooling period of only six days, before things start to heat up again. It's tempting to conclude that the Temperature Gods simply like to make the thermometer go up and down in a fairly regular fashion. But there is an alternative explanation: Your hometown sits within a larger system -- planet Earth, whose axis tilts with respect to its orbit around our sun, Sol.

This argument does not deal with creation, because the creation claim is then set back to involve the "Metaverse". A similar argument was once proposed by Isaac Asimov, stating that the Big Bang could have occurred because of the collision of two protoUniverses. He then went back several more levels.
However, what he did not seem to realize was that no matter how far he went back, he was still left with the question of creation itself.

This is like various fantasy stories or mythologies involving "gods". Since they do not fit the definition of G0d (omnipotent, omniscient, etc.), all this does is push the existence of G0d back a level. For example, if gods created this universe, then G0d would be the Creator of the Universe that contained gods who had the specified powers.

The above hissed in response by: Sabba Hillel at October 24, 2010 7:19 AM

The following hissed in response by: snochasr

I'm still looking for an explanation of how a different value for Planck's constant, or the speed of light, would make a universe "life friendly" or not. I can see how such things might make a particular planet uninhabitable, or habitable, but in our galaxy alone there are a very large number of life-friendly planetoids, and one assumes a very large number that are "near habitable" should their environment change for any reason.

The above hissed in response by: snochasr at October 24, 2010 10:28 AM

The following hissed in response by: BlueNight

I'm a young-earth creationist, but I don't rely on a "God of the gaps" mindset. That's too much like convicting everyone who comes before the court without an alibi: guilty before proof. There are too many examples of that mindset failing to be reliable. (Ben Franklin's lightning rod, and churches' refusal to install them, is the primary modernist myth of this sort, whatever its historicity.)

Creation science is primarily forensic and social, and thus circumstantial. There are examples of faith healings which any mentalist or hypnotist would recognize instantly as belonging to their disciplines. That's not to say God wasn't involved, but rather that God may have arranged things so that an "accidentally" invoked hypnotic trance caused a psychosomatic healing. That's also not to say the black swan, a physical miracle, doesn't exist.
Cosmologically, if we assume the existence of sequence as something present universally, we have three options:

1. "The world rested upon an elephant and the elephant rested upon a tortoise" and from there it's "turtles all the way down" (infinite regress of finite variety).
2. "The world rested upon an elephant and the elephant rested upon a tortoise" supported by a mammoth, a Brachiosaurus, an airplane, a bowling ball, Atlas, a pair of roller skates, and so on down (infinite regress of infinite variety).
3. The unmoved mover, sentient or not, sapient or not, involved or not, exists.

If sequence itself, as something that can be recognized logically, simply does not exist "outside" of our universe/multiverse/metaverse/omniverse, then all bets are off.

The above hissed in response by: BlueNight at October 24, 2010 12:41 PM

The following hissed in response by: Dafydd ab Hugh

Sabba Hillel: We're using a lot of loose shorthand here, but cosmologists have more rigorous formulations. For time in a Multiverse, substitute Multitime, a measure of duration of the cycle itself. The questions are (a) whether Multitime has already been running for an infinite or finite duration as of a particular moment; and (b) if Multitime has only achieved finite duration, then has it been long enough to allow the cycle to have already produced such a large number of random universes that it's plausible one could have had, through sheer random chance, physical constants that allow for the formation of life.

For the turtles-turtles-turtles objection, bear in mind that the Metaverse -- in which some Multiverse is (or Multiverses are) cycling through creation of many universes -- might have different physical laws from the interior universes that emerge. That's my point: What seems not only random but wildly improbable within an individual universe might be, not merely likely, but absolutely determined by the larger Metaverse that contains the smaller universes.
We must consider these possibilities in order to be (reasonably) complete in our reasoning.

I already gave one example, but I'll alter it slightly: If you change the value of the Gravitational Constant, you can end up with gravity so intense that the universe would expand after the Big Bang, slow, then contract back into the Big Crunch in a very short period of time. That time could be too brief to form the coherence of solar systems and planets, let alone allow sufficient time for life to form. Contrariwise, you could make gravity so tenuous a force that those structures never form, because gravity cannot overcome the initial momentum imparted to each molecule by the Big Bang. You get a semi-uniform smear of molecules fairly evenly distributed throughout the universe; I cannot see how life could form under such circumstances.

Planck's Constant is basically a measure of the energy quantum: If it were larger, so that particles had to build up a much larger batch of energy before being able to emit any of it, then particles might not be able to exchange energy. That might mean no molecules could form, electrons couldn't move up or down their electron shells, and so forth. Or if Planck's Constant were much smaller, the exchanges might become so continuous that again, structure could not form, as everything would be breaking up and reforming too rapidly for any coherence.

The speed of light sets the upper limit of relative velocity. If it were much faster, particles would have hurried away from their fellows so quickly, they would get out of range of gravity and could not be tugged back to form stars and planets. Contrariwise, if the speed were too slow, it would take so long for photons created by stellar fusion at the core of a star to random-walk their way out to the surface that the universe would be entirely dark... possibly long enough that the star would have already burned out and collapsed before the first photon could escape.
See, you monkey around with those constants and everything falls apart. There are, of course, ranges within which everything can still be made to work, such that life (as we know it) can exist; but the ranges where this is not possible are very much larger.

The above hissed in response by: Dafydd ab Hugh at October 24, 2010 12:57 PM

The following hissed in response by: Karl

And a major part of the "fine tuning" argument assumes that the range of possible values for these variables is very large compared with the "life-friendly" range. It may be that according to as-yet-undiscovered laws, the value of the gravitational constant is constrained to either "life-friendly" values or very close. In a recent issue of Sci-Am, it turns out that varying one parameter gives you a universe capable of supporting life only in one narrow region; if you vary more than one at once, you get other islands of life-supporting universes. The universe may be fine tuned, but there may be more than just the one station available.

The above hissed in response by: Karl at October 24, 2010 5:57 PM

The following hissed in response by: BlueNight

And of course we're defining life solely in terms of chemistry. In a universe where matter is diffuse but universally dense, an electromagnetic pattern might result from Brownian motion and eventually evolve into sentience. As long as motion and potential difference can exist, goes the theory, life might appear vastly different than we've ever seen. As a Christian, I don't expect it, but there are a lot of things I've been told not to expect which I have seen.

The above hissed in response by: BlueNight at October 24, 2010 7:13 PM

The following hissed in response by: snochasr

I'm getting quite an education, but I think I also need a lot of imagination to envision how some of these "constants" could ever be other than what they are.
Why do we consider these "constants" as mere "variables" that are set to a random initial value (or perhaps at some "Designer's" whim) at the time the whole universe is set in motion? Why wouldn't the gravitational constant ALWAYS be as the square of distance? A 10-dimensional universe I can understand, with one of those dimensions being the "brane number," but the other dimensions would need to be the same, therefore producing the same physical constants across (this form of) the multiverse, would they not?

The above hissed in response by: snochasr at October 25, 2010 6:59 AM

The following hissed in response by: Robert

Sabba Hillel, we can equally well ask where God came from. The traditional answer is that he is an unmoved mover, which is indeed the only way to terminate the infinite regress, but there's no reason to believe that an unmoved mover, a first cause, must necessarily be a sentient entity, let alone have all the properties attributed to God. The attempted proofs of that are riddled with gaps, bridgeable only by faith.

The whole line of argument used in the book being discussed appears to misunderstand probabilities at a basic level. You have to specify the outcome you're looking for before tossing the coin, rolling the dice, or whatever. Suppose I toss a coin 1000 times, taking maybe 20 minutes. There are approximately 10^100 possible outcomes, 1 followed by 100 zeroes, so the chance that any one of them happens is 1 in 10^100, ludicrously small. If everyone in the world tossed coins non-stop until the sun dies, the chances of anyone duplicating my coin toss sequence are still only around 1 in 10^80. Clearly, since my coin toss sequence is so ludicrously improbable, it can't be the result of pure chance. Something must have intervened to pick the sequence. But this logic is false. Whatever sequence I came up with would be equally unlikely.
If I had specified the sequence in advance, then we could safely conclude the coin was rigged, and only if I had. However, results like the proportion of heads tossed, or the proportion of time the coin came up the same way twice running, are meaningful, ultimately because of averaging.

Applying this logic to the state of the universe, we see we have to be very careful about what we measure the probability of. Looking for the chances of getting a universe exactly like ours is like looking for the chance of a particular sequence of coin tosses: the wrong type of question. What we need to know is the probability of ending up in a universe that can support life of any kind, not necessarily as we know it, and that calculation is currently beyond us.

The above hissed in response by: Robert at October 25, 2010 10:22 PM

The following hissed in response by: Dafydd ab Hugh

Suppose I toss a coin 1000 times, taking maybe 20 minutes. There are approximately 10^100 possible outcomes...

I believe it's actually closer to 10^301:

1. We begin with the exact number of outcomes, which is 2^1,000;
2. To estimate this, we can use the fact that, by definition, 2 = 10^(log 2);
3. The log of 2 is just about 0.301;
4. So 2^1,000 = (10^0.301)^1,000;
5. Which = 10^(0.301 x 1,000);
6. Which = 10^301.

That is, the total number of possible outcomes from flipping a coin 1,000 times is just about ten trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion... which by a curious coincidence is almost exactly how many counterfeit dollars Barack H. Obama will have spent by January 20th, 2013 -- his last day in office.
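[That back-of-the-envelope logarithm can be checked directly; Python's arbitrary-precision integers make the exact count trivial. A sketch added here for reference, not part of the original thread:]

```python
from math import log10

digits_exponent = 1000 * log10(2)   # = 301.03, so 2^1000 = about 10^301
exact = 2 ** 1000                   # exact count of 1000-flip outcomes
# the exact value has 302 digits, i.e. it lies between 10^301 and 10^302
assert len(str(exact)) == 302
```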
On a more serious note, suppose we define a "habitable universe" as one that can support life as we know it; and suppose that habitable universes correspond to those in which that thousand-flip coin toss yields a sequence that contains at least 800 heads. I'm not willing to sit down and calculate the odds of one of those universes (sequence of coin flips) coming up habitable (containing >= 800 heads). But I am willing to bet $50 that the odds of a habitable universe popping up in even 10^100 trials of the thousand-flip coin toss is less than the odds that all the air molecules in my office will randomly, simultaneously move to the right side of the room, leaving the left side in vacuum. Any takers? Therefore, if you try your thousand-flip coin toss a few times, and a habitable universe pops up... you can bet your bottom dollar that the coin is unfairly weighted, biased in favor of habitability, instead of being truly random. That is the syllogism; the trick is to make a reasonable estimate of the odds of each vital constant being within the habitability range, then combining all those odds into one probability that all vital constants simultaneously fall into the habitability range. The above hissed in response by: Dafydd ab Hugh at October 26, 2010 1:16 AM The following hissed in response by: Robert It is 10^300, that was an arithmetical slip. More importantly, can we justify looking only at the odds of life as we know it, not of all life? I think not. Like any specific coin toss sequence, life as we know it can only be identified as special after the fact. There's nothing to single it out, other than the fact that we're an example of it, just as there's nothing to single out a specific coin toss sequence, other than that I happened to toss it. For meaningful results, we must consider only those outcomes which could have been identified as special before the fact, like getting 1000 heads.
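As an aside, the probabilities being argued over in this exchange can be computed exactly rather than estimated. The sketch below is my own illustration, assuming nothing beyond a fair coin; it gives the exact chance of all 1,000 heads and of at least 800 heads.

```python
import math
from fractions import Fraction

def prob_at_least(k, n):
    """Exact probability of at least k heads in n fair coin flips."""
    favourable = sum(math.comb(n, i) for i in range(k, n + 1))
    return Fraction(favourable, 2 ** n)

# Exactly 1,000 heads: 1 in 2^1000, about 10^-302.
print(float(prob_at_least(1000, 1000)))

# At least 800 heads: on the order of 10^-85 -- tiny, yet
# astronomically larger than 2^-1000.
print(float(prob_at_least(800, 1000)))
```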
That's a significant result, because the total number of heads will be normally distributed, so is much more likely to be 500+/-30 than to be 1000. In the case of a random distribution of universes, life in general is something that can be picked out as special before the fact; life as we know it isn't, not unless there are only a finite number of forms of life that can exist, a claim which would not be easy to prove. Incidentally, the chance of the air molecules in your room ending up in the same half is something like 1 in 2^(10^28): 6*10^23 molecules per litre, i.e. Avogadro's number, 1000 litres in a cubic metre, a room 2x4x2 metres. 2^(10^28) = approx 10^(3*10^27). The chance of getting over 800 heads from the thousand flips is at least 1/2^(10^3), the chance of getting exactly 1000 heads, so it is massively greater than the odds of the air in your room doing that. You'd lose that $50 bet. This aside, the arguments for the smallness of the habitable zone for life as we know it are pretty weak. As noted by Karl, it's tacitly assumed that only one parameter is varied at a time. If the points supporting life as we know it lie near some curve, or even a straight line askew to these parameters, varying just one at a time will massively underestimate the proportion of habitable universes, and there's no good reason to think that they don't. The other problem is the neglect of feedback effects. Above you mention the case of weaker gravity. It doesn't matter how weak gravity is: a uniform distribution of matter is unstable, and will collapse. (Look up the Jeans Instability). This collapse will only stop when the central pressure is high enough. You can then calculate from there what happens for weaker gravity. It turns out that over a significant range, the central pressure gets high enough to cause fusion, hence stars as we know them. Because gravity is weaker, the stars end up being cooler than in our universe, and lasting longer.
At a rough estimate, if gravity were 10 orders of magnitude weaker, the result would be stars weighing about 10^15 solar masses, just enough to compress their cores to fusion heat, with a life span around 10^20 years, and 10^5 times the radius and brightness of our sun, but only slightly cooler than our sun. Orbiting at around 300 AU would give comfortable earth-like conditions for 100 quintillion years. Make gravity too weak, and the stars won't ignite at all. If some of the other parameters vary too, you might find matter decaying before the stars have time to collapse, and there are many other possibilities. However, the simple fact remains that something determined by negative feedback, like the core temperature of stars, isn't going to be sensitive to physical parameters. A star will always be exactly hot enough to produce pressures high enough to support its own weight - any hotter and it expands, cooling itself; any cooler and it contracts, warming up - and this would remain so even if neutrons were 10% heavier. Life, of course, is a prime example of a process dominated by negative feedback. The above hissed in response by: Robert at October 26, 2010 5:54 AM © 2005-2009 by Dafydd ab Hugh - All Rights Reserved
Crystals: Geometry and Groups I. Introduction A. Why Study Crystals? What are crystals? The answer to that question is the end result of a great deal of research and study by many people over many years. Crystals can be gem-like solids as seen in quartz crystals. So one might believe that this leads to the study of the regular and semi-regular solids of geometry, but crystals are more than that. Crystals are systematic arrangements of molecules to form physical solids. Crystals are solid state physics. When we look at the gem-like object we are looking at a single crystal, when we look at a slab of marble or a sheet of steel we are looking at many crystals all next to each other, polycrystalline forms. Crystals are beautiful, everyone likes beauty. Why are crystals beautiful? Crystals have symmetry. There are many topics from mathematics that relate to crystals and many topics of crystallography that are math. How can crystals be represented as three dimensional objects? There are many answers to that question: Miller indices, space groups of symmetries, drawings, projections. Crystals are the evidence of the atomic structure of matter. There are three laws of crystal morphology: 1. Crystals grow naturally with plane faces. 2. The angles between faces of a crystal are characteristic of the crystal. 3. Law of Rational Positions. When three noncoplanar edges of a crystal are used as coordinate axes, the coordinates of the points where the planes of the crystal faces intersect the axes can be expressed in small integers. If we postulate that crystals are made of congruent building blocks stacked one upon the other, one next to the other, each of the laws can be derived. When we were taught the scientific method in elementary school we were told scientists collect data and then interpret it to get theories. So one would expect those laws to have come first and then the theory. Actually the history is more complicated. 
The scientists had other theories (like alchemy) and were looking at the data from those points of view. The data were available but the interpretation took time. It seems that postulational systems are a way of thinking that people naturally follow. The laws or postulates are simplifying principles that explain and predict what happens. If the predictions don’t match up it takes time for people to reject the old laws and look for new ones. B. Growing Crystals. Another reason to study crystals is to grow them. Growing crystals is an opportunity for students to have a hands-on activity. The two references to use are Crystals and Crystal Growing by Alan Holden and Phylis Morrison and Crystals—A Handbook for School Teachers by Elizabeth A. Wood. I should have started this activity much earlier. I should have set it up in the classroom last April or May to see how my students would react to science apparatus in a math room. I have grown an alum crystal. The process of getting the seed is satisfying by itself. To get seeds one dissolves the alum in some water and then places the solution on a shallow plate to evaporate, the shallower the plate the better. Examine your seeds with a magnifying glass or even a microscope. Philip and Phylis Morrison in their book Ring of Truth suggest watching the seeds form under the microscope as the water evaporates. Once you have the seed tie it with a thread and suspend it in a supersaturated solution and wait. The more seeds we have growing the better our chances of seeing a big, beautiful crystal. So see if you can inspire your students to try growing crystals independently. If you are not a science teacher you may not know that Macalester and Bicknell, the chemical supply company on Henry Street in New Haven, will not sell chemicals to private individuals. So either see your friendly science teacher or plan far enough ahead to get a school supply purchase order.
Meanwhile, grow crystals with the chemicals you have in the house or can get from the supermarket: alum, sugar, salt, borax, epsom salt or what have you. II. Science A. The History and Theory of Crystals Let us look at the history of the study of crystals. If you research the history you find out that the story is not as clear cut as the short introductions make it appear. An important idea may have been mentioned by a person but he may not have given it much emphasis at the time and instead have been concerned about some other part of his theory which might even have been wrong. The study of crystals has been the concern of mineralogists. In fact, one of the most accessible sources is Dana’s Manual of Mineralogy; you will find it in most public libraries. James Dwight Dana (1813-1895) and his son Edward Salisbury Dana (1849-1935) were Yale professors. In 1837 the father wrote the standard mineralogic reference A System of Mineralogy; the son kept it up to date and it is still being kept up to date by others. It is available in your local library. The Manual is a good source of illustrations. Hopefully, this unit will be an introduction to crystallography for readers who will want to find out more. Another major source is Martin J. Buerger, the inventor of the precession method of x-ray diffraction, who has written a number of books on crystallography. The biographical notes on authors in the collection Fifty Years of X-ray Diffraction tell us that his field of study was geology and mineralogy. Crystallography in many respects is another example where the theory or mathematics existed long before the “reality”. In 1912 with the birth of x-ray diffraction the demonstration of the theory became visible. In 1669 Nicolaus Steno in his Latin publication De Solido intra solidum naturaliter contento Dissertationis Prodromus described how he took sections of quartz crystals and measured the angles. He found that the angles were the same no matter how long or short the sides.
He did this by cutting the crystal perpendicular to an edge to get a wafer which he then traced onto paper. If he did this to different quartz crystals, he got the same angles when he cut wafers at corresponding edges. For this he gets credit for the law of constancy of angle also known as Stensen’s Law from his name in his mother tongue. This was not the main point of his book; he did not say anything about crystals of other substances. The law of constancy of angle was stated by Rome de l’Isle in 1783 and says that for all crystals of the same substance the angles between corresponding faces are congruent. Can you propose a reason why nature works this way? What makes it happen? The next story is like Newton’s Apple; there is controversy that it ever happened. The story goes that one day in 1784 while examining some crystals Rene Just Hauy dropped a specimen and one of the larger crystals broke off. He noticed that it had broken to form a face. He tried, and succeeded, in cleaving it in other directions. The piece that remained was not the same shape as the original crystal. He claimed the original crystal was made of building blocks shaped like the nucleus achieved by cleavage. Read the Origins of the Science of Crystals by John G. Burke to find out more. If you read Burke you will find that many people worked on the subject and said many things that turned out to be true later. Do they deserve credit for stating the principle? Did they really know what they were saying? The text books that give historical notes surely make the history sound certain. It is not so clear when you try to read the whole story. Oftentimes the “discoverer” was arguing about something else. Crystals are ice that is too cold to melt. Crystals grow like plants. The history seems to be evidence that civilized people search for ways to explain nature by logical theories.
So Hauy wrote some articles and books and showed how large crystals could be built out of little cubes to get shapes that were not cubes. He did this by stacking the cubes at certain slopes. Put down a layer, then go in 2 cubes before starting the next layer, and so forth. Different faces call for different steps. See the mineralogy books for pictures of Hauy’s models. From this time crystallography became an exact science. The interfacial angles of crystals were accurately measured. The device to measure the angles is called a goniometer. There are various types. The simplest goniometer, called a contact goniometer, is a semicircular protractor with an arm pivoted at the center. Place the base against one face and the arm against another and read the angle off the protractor. (figure available in print form) In 1809 William H. Wollaston invented the reflecting goniometer which allowed the measuring of angles on much smaller crystals, even ones with rough faces. So angles became the big issue with crystallographers. The length of the edges did not matter; it was the angles. The shape of the crystal as far as lengths are concerned is called the habit of the crystal. Crystallographers visualized the crystal at the center of a sphere with lines radiating out perpendicular to its faces to intersect the sphere at points which could be mapped like longitude and latitude. Then they decided to map the sphere onto a plane using the technique called stereographic projection. One place we all have seen stereographic projections is the trademark of the National Geographic Society. The portion of an astrolabe that does not move is also a stereographic projection. The stereographic projection is a topic of study unto itself. If crystals are made of bricks all stacked next to each other what can we say? We can explain the three laws of crystallography. The bricks will be so small that the steps will not be visible to the naked eye thus giving flat faces.
The stacking of the bricks will give fixed slopes which in turn give fixed angles. Notice that a stacking schedule of one over and two up will have a tangent twice that of a stacking schedule one over and one up. The tangents will double but not the angles. The tangents will be integral multiples of a fundamental value, but not the angles. The edges of the bricks will line up to give a three dimensional coordinate system and the dimensions of the brick will give the units to use along each axis. What shape will the bricks have? If they were cubes then our coordinate system would have three axes at 90° to each other with equal unit lengths along each axes, just like analytic geometry. Crystals are not so simple. There are crystals with cubical building blocks, they form the cubic or isometric system of crystals. Other building blocks form other systems of crystals. Can you determine the possible shapes of the other building blocks? There are only five more. We could have traditional rectangular bricks where the lengths of the edges are three distinct values. Such bricks make the orthorhombic system. We could tip our traditional brick at an angle so one dihedral angle was not 90° this makes the monoclinic system. We could tip it in three directions to make the triclinic system. We could take the cube and change the length of one dimension to make the tetragonal system. We could use bricks that did not have all rectangular faces, we could use hexagonal tiles to make the hexagonal system. That makes the six crystal systems. (figure available in print form) (figure available in print form) The reader might wonder why triangular prisms do not make a crystal system. We said that crystals are bricks stacked next to each other, we meant touching along faces without turning the brick. This idea is called translation. If we were to place triangles along a line there would be empty “upside down” triangular spaces between them. 
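Before moving on from the stacking schedules: the claim that a "one over and two up" schedule doubles the tangent but not the angle is easy to check numerically. This small sketch is my own illustration, not part of the unit.

```python
import math

# Stacking schedule "one over and one up": rise/run = 1
angle_1 = math.degrees(math.atan(1))   # 45.0 degrees
# Stacking schedule "one over and two up": rise/run = 2
angle_2 = math.degrees(math.atan(2))   # ~63.43 degrees, not 90

# The tangents are in the ratio 2:1, but the angles are not.
print(angle_1, angle_2)
```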
To fill those places we would have to turn the triangle as we moved it. So we put two triangles together to get a parallelogram and the monoclinic or triclinic systems. Knowing the system of a crystal scientists can predict how physical properties will behave. For example, a cubic crystal will expand equally in all directions when heated, while a tetragonal crystal will expand a different amount in one direction. Let us think some more about the bricks. If you were at any corner of any brick you would not be able to tell it from any other brick. The bricks would have to be of proper shape. You might be able to define directions, some points could be closer to you than others, but then again maybe not. You might be able to look at the system in certain ways and still see the same arrangement as before, stand on your head, turn and look back, maybe do both. These tricks lead to the concept of transformations: translations, reflections, inversions, rotations, which in turn lead to the mathematical structure called a group. A translation is when you slide from one brick to the next ending up in the same configuration you started at. If you continue in that direction each time you go the same distance you will end up in the same configuration. Reflections are like mirrors. Two points are reflections of each other if the mirror is the perpendicular bisector of the line segment connecting them. Inversion is the hardest to describe. You need a center of inversion, a point. To find the inversion image of a point you draw a line segment from the point to the center of inversion, then you continue the line on to a point as far from the center as the original point. It is as if the mirror had been shrunk to a point. The center of inversion is the midpoint of the line segment joining the original point and its image. A rotation is the easiest to describe. You pick a line to serve as if it were the axis of a top and rotate the configuration about it.
All the transformations can be described in terms of the coordinates of the points. You start with one point and you get another point called the image of the first. All these transformations may be expressed in terms of what they do to the coordinates of any point. My experience was that the coordinate explanation for inversion was much easier to understand than the verbal: point (x,y,z) becomes (-x,-y,-z). So transformations lead to groups, so let us tell _____ III. Mathematics A. What is a Group. If we placed an equilateral triangle on a piece of paper and made a tracing of it, we could pick it up and put it back down again and not know if we had changed its orientation. What we would have done is called a rigid motion or an isometry. To be able to tell what happened we could label the corners of the cutout and the corners of the tracing. One question to ask is “How many ways can we pick up the cutout and put it down?” We could rotate it around its center to move a corner one space in a counterclockwise direction, or two spaces, or even three to come back to the starting position. Some obvious things to note are that we can do one thing and follow it with another and it is as if we had done a third thing in the first place. Two moves of one space each are the same as one move of two spaces. There are a limited number of moves. If we keep moving we end up with previous positions. One move is a waste, if we rotate three spaces it is as if we had not moved at all. When we have situations like this in math: where things combine to make new things of the same kind we say the system is closed. So our system of three rotations is closed. The move that is the same as doing nothing is called the identity operation. The fact that every operation can be undone to get back to the original position is described by mathematicians by saying each operation has an inverse. 
When we have a closed system with an identity element and each element has an inverse, we say we have a group. Technically a mathematician would want another additional property: the operations should be associative, a(bc) = (ab)c. Isometries are associative. It took some historical time for mathematicians to recognize that there are non-associative “things”. So we will be historical and not check for associativity. Division is an example of a non-associative operation. Example: 12 ÷ (6 ÷ 2) makes four, while (12 ÷ 6) ÷ 2 makes one. Check and see. (figure available in print form) Do the three rotations make the only group for our equilateral triangle? Are there more isometries for the equilateral triangle? Can we move it without rotating it? Flip it along an altitude so the two base angles switch positions but the vertex angle stays fixed. You will have to put labels on the back of your cutout to match the ones on the front. How many flips are there? Do the flips make a group? We have three vertices so we have three flips. Some might say we have four flips, the identity flip of making no flip at all. So, do the four flips make a group? What happens if we flip around the top vertex and then flip around the lower left hand vertex? Do we get a flip? No, we get a rotation, it is as if we rotated two spaces counterclockwise. So the flips are not closed, they do not form a group by themselves. Along with the rotations and the identity the flips will make a group. To verify it make a multiplication table. Let I stand for the identity, R1 stand for a rotation of one space counterclockwise, R2 for a rotation of two spaces counterclockwise, F1 stand for a flip around the top vertex, F2 a flip around the lower left vertex, and F3 for a flip around the lower right vertex. (figure available in print form) To read the table find the first operation in the first column and then read over to the column with the second operation on top, there is your answer.
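The multiplication table can also be checked by machine. In the sketch below (my illustration, not part of the unit) each symmetry is written as a permutation of the vertex labels 0, 1, 2, with vertex 0 taken as the top vertex; the exact tuples depend on that labeling choice, but the group properties do not.

```python
from itertools import product

# Symmetries of the equilateral triangle as permutations of
# vertices (0, 1, 2): a permutation p sends vertex i to p[i].
I  = (0, 1, 2)   # identity
R1 = (1, 2, 0)   # rotation, one space counterclockwise
R2 = (2, 0, 1)   # rotation, two spaces counterclockwise
F1 = (0, 2, 1)   # flip fixing the top vertex
F2 = (2, 1, 0)   # flip fixing the lower left vertex
F3 = (1, 0, 2)   # flip fixing the lower right vertex
G = [I, R1, R2, F1, F2, F3]

def compose(p, q):
    """Apply p first, then q."""
    return tuple(q[p[i]] for i in range(3))

# Closure: every product lands back in G.
assert all(compose(p, q) in G for p, q in product(G, G))
# Identity and inverses.
assert all(compose(p, I) == p for p in G)
assert all(any(compose(p, q) == I for q in G) for p in G)
# The table's sample entries, and non-commutativity.
assert compose(R2, F1) == F3
assert compose(F1, R2) == F2
```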
For example, operation R2 followed by operation F1 is operation F3. Notice that operation F1 followed by R2 is operation F2. Our operations are not commutative. The table helps us see that the system is a group. It is closed because no answer is a new thing, all the answers are operations listed on the edges. There is an identity: one column matches the left hand column and one row matches the top row. Every element has an inverse because the identity appears once in each row and column and even when the identity is not on the main diagonal the two operations still commute (R1 followed by R2 = R2 followed by R1 = I). So we now have a bigger group: the identity, the rotations, and the flips. The bigger group contains the group of rotations, so the group of rotations is called a subgroup of the larger group. Are there any other subgroups in the bigger group? Since each flip is its own inverse we could use a flip and the identity to form a group. We also could look at the identity alone and consider it a group. So a group is a set of things that operate on each other always getting themselves for answers, (just saying closure again). The process of lifting up the figure and putting it down again, an isometry, is also known as a symmetry operation. The symmetries that required us to use both sides of the triangle, the flips, are called improper symmetries. Here is a problem that might be more obviously relevant to us, where the symmetries tell us something. The queen in chess can move horizontally, vertically, and diagonally. If we had eight different queens could we place them on an eight by eight chess board so that no queen could capture any other queen? You do not need to know anything about chess to solve the question. The question could have been: place eight pieces of some kind on an eight by eight board so that no two pieces are on the same row, the same column, or the same diagonal. I will not solve the 8 by 8 case for you. Look at the 4 by 4 case.
Here is a solution. (figure available in print form) My symbol means a piece is in the first row second column, another piece is in the second row fourth column and so forth. The number on top gives the row, the number underneath gives the column. Is this the only solution? Did you find a different one? Are there any different solutions? This is where we apply the symmetries. We can rotate the board one quarter, one half, or three quarter turns: R1, R2, and R3, we can flip it on its horizontal midline: MH, its vertical midline: MV, or one of its diagonals: D1 and D2. (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) What is going on here? Are R1, R2, and R3 the same operation? They give the same result. What about MH, MV, D1, and D2? Let us start with a different configuration, one that is not a solution, and perform the operations and see how many new configurations we get. (figure available in print form) which is not a solution since the first column and the second column pieces are on the same diagonal. (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) (figure available in print form) all distinct results. There were only two answers in the first case because the first arrangement was so symmetrical. The different operations really are different. Can you find a solution for the eight by eight board? I have one solution that transforms into eight distinct solutions. Is it the only one? I do not know. Just as we asked how many bricks are possible we also could ask how many groups are possible. The answer to those questions is for another time.
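For readers who want the eight-queens question settled by brute force, the following backtracking sketch (my addition, not part of the unit) counts every placement with no two pieces in the same row, column, or diagonal.

```python
def queens(n):
    """All non-attacking placements; a solution lists the column of each row."""
    out = []
    def place(cols):
        r = len(cols)  # next row to fill
        if r == n:
            out.append(tuple(cols))
            return
        for c in range(n):
            # Reject a shared column or a shared diagonal with any earlier row.
            if all(c != pc and abs(c - pc) != r - pr
                   for pr, pc in enumerate(cols)):
                place(cols + [c])
    place([])
    return out

print(len(queens(4)))   # 2 solutions on the 4 by 4 board
print(len(queens(8)))   # 92 solutions on the 8 by 8 board
```

Under the eight symmetries of the board the 92 solutions collapse to 12 essentially different ones: eleven of them transform into eight distinct boards each, and one especially symmetrical solution into only four.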
To get started on the answers one needs to have names for the groups. Here is one technique. In our triangle and square examples we had axes of rotation, a three-fold axis for the triangle and a four-fold axis for the square. When we flipped the figures we had two-fold axes of rotation. These operations can be symbolized as 1,2,3,4,5,6, . . . n, for n-fold axes of rotation. 1 means you rotate right back to the starting position. In two dimensions a two-fold rotation has the same result as a reflection in a mirror, which is not the case in three dimensions. The symbol for a mirror is m. This gives names to our two groups: 3m for the triangle and 4mm for the square. The number tells the type of rotation axis and the m indicates a mirror. One might wonder why there is only one m in 3m, while there are two m’s in 4mm. After all, there are three mirrors on the triangle and four mirrors on the square. Either 3m is correct and 4mm has an extra m, or they are both two m’s short. The answer is due to the way the mirrors interact with the rotations and each other. It is logical, ask me to show you with a diagram. The symbols for the groups are known as the Hermann-Mauguin symbols. I would have liked to have spent more time on Hermann-Mauguin, however I only got it straight in my own head as this project was coming to an end. I even found some useful references in books I had looked at earlier. So I must have learned something. One special reference for groups and Hermann-Mauguin notation is Symmetry by Ivan Bernal, Walter C. Hamilton, and John S. Ricci. The book comes with a stereo viewer, in a pocket on the inside back cover, so one can actually see the three-dimensional crystallographic point groups in three dimensions. Read the book as well, and read it Slowly, with thought. B. Miller Indices Crystallographers need ways to describe crystals. One way is to draw pictures, but pictures are hard to copy and write down every time you want to talk about a crystal. 
Another way is to have numbers associated with the crystal. In fact the numbers will often be found on the pictures so you will know more certainly the orientation of a particular face. A standard way to attach numbers to figures is to set up a coordinate system. So we are back in Algebra I and Algebra II with graphing. We need a third axis for three dimensional solid objects so we have the z-axis coming out of the page. In Algebra I we learn that the equation of a line whose x-intercept is a and whose y-intercept is b is x/a + y/b = 1. In three dimensions the equation x/a + y/b + z/c = 1 would be a plane that cuts the x-axis at a, the y-axis at b, and the z-axis at c. The three numbers (a, b, and c) could have been used as indices for the plane. Furthermore, since the crystallographers were only interested in the angles the three numbers could be “reduced” if they had a common factor, giving a parallel plane. Also, since no one likes fractions, the equation was multiplied through by abc to give bcx + acy + abz = abc. The numbers bc, ac, and ab are called the Miller indices h, k, l. These indices will be integers and usually single digits. They are written without commas separating them except for the rare occasion when one is more than a single digit. When the intercept is negative the index will also be negative. To show a negative value a bar is placed over the number. Remember, Miller indices will always be whole numbers: positive, negative, or zero. If you are reading the descriptions of a crystal you will want to translate the Miller indices into intercepts. To change from Miller indices back to intercepts take the reciprocal of each digit and then multiply each by a common denominator to make whole numbers. The numbers will be in the order of the axes, i.e. x-intercept, y-intercept and z-intercept. When saying Miller indices in words, 111 is said as one-one-one, not one hundred eleven. In Elizabeth A.
Wood’s book Crystals and Light she has a picture of a pyrite crystal which she shows as a pentagonal dodecahedron. (figure available in print form) She gives the Miller indices of the faces that are visible as 102, 021, 210, 102, 021, 210 and says the point-group symmetry is m3. One is a convenient index since it is its own reciprocal. Zero is not so obvious. The reciprocal of zero is commonly called infinity. So the plane intersects the axis at infinity, or in other words it never intersects the axis, it is parallel to the axis. So we only have to change the indices ±2 and ±1 to their reciprocals ±1/2 and ±1 which become ±1 and ±2 when we multiply by the denominator to get whole numbers. Yes, they are the same values as we started with, but the order is not the same and that means different axes. (figure available in print form) (figure available in print form) The group symbol, m3, translates into a cubical system with the four diagonals of a cube as axes of three-fold rotational symmetry and the x-y, x-z, and y-z planes as mirror planes. We should be able to cut a model of the crystal out of a cubic 4 by 4 by 4 block. See the plan at the bottom of the previous page. Here is some explanation. Follow the steps. K is the midpoint of AB, L is the midpoint of FG. Similarly N,M,J, and I are midpoints of their respective segments. Next draw a line through V parallel to JI, a line through T parallel to KL and a line through O parallel to NM. Those lines will serve as if they were the ridges of house roofs. Next we need to find the eaves lines and the gable lines. S is the midpoint of BM and R is the midpoint of GO. The story is the same for P and Q. Finally, using the corners as axes of three-fold symmetry rotate each of the ridge lines into the position of KL and mark in the eaves and gables. Now cut off the wedges to get the gable roofs. How do you cut it out? Make a jig to keep your fingers away from the saw.
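Stepping back to the Miller indices for a moment: the intercepts-to-indices recipe can be written out mechanically. This sketch is my own illustration of the recipe described above; it assumes the intercepts are rational, and uses None for an intercept at infinity (a face parallel to that axis).

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def miller_from_intercepts(a, b, c):
    """Miller indices (h, k, l) from axis intercepts; None means parallel to the axis."""
    # Take reciprocals; an intercept at infinity gives index 0.
    recips = [Fraction(0) if x is None else 1 / Fraction(x) for x in (a, b, c)]
    # Clear denominators with their least common multiple...
    lcm = reduce(lambda m, f: m * f.denominator // gcd(m, f.denominator), recips, 1)
    h, k, l = (int(f * lcm) for f in recips)
    # ...and reduce by any common factor, as the text says.
    g = reduce(gcd, (abs(h), abs(k), abs(l)))
    return (h // g, k // g, l // g)

print(miller_from_intercepts(1, 1, 1))                   # (1, 1, 1)
print(miller_from_intercepts(2, 3, 6))                   # (3, 2, 1)
print(miller_from_intercepts(1, None, Fraction(1, 2)))   # (1, 0, 2), a pyrite-type face
```

Running the recipe backwards, as the text describes, is the same computation again: the reciprocals of (1, 0, 2) are 1, infinity, 1/2, giving intercepts in that order along the x, y, and z axes.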
How do you keep your angles when your guide lines are cut off? I made mine out of clay, cutting it with a wire and putting the cutoffs back to maintain the cubical guide shape. After all the cuts were made, the wedges were peeled off to get the dodecahedron core. The force of the cutting distorted the clay block. I did get a pentagonal dodecahedron, but I cannot call it regular, nor can I claim that it would not be regular if the process were more precise. A drawing showing all the lines is shown in figure 8. I would not expect anyone to follow the drawing unless they drew it too.

C. Projections: Mechanical Drawing

One of the objectives of geometry is to visualize objects in space. How can this be taught? Let the student experience drawing three-dimensional objects. Mechanical drawing is a way to draw even if one believes one is not an artist. Mechanical drawing is math; it is part of projective geometry. See Morris Kline, Mathematics in Western Culture. John Pottage in Geometrical Investigations has a number of problems that are solved by mechanical drawing techniques, including a copy of a woodcut by Albrecht Dürer showing how to construct an ellipse as a conic section by mechanical drawing techniques (page 436). Look at figure 1. In the upper right we have three pieces of overhead projector transparency film (VWDU, UDST, and WDSR) joined at their edges to form a corner. We place a block in that corner and trace onto the plastic the edges that touch the plastic. Now we unfold the plastic corner and have three views: top view, front view, and side view. You have done a parallel projection of each side onto a picture plane, namely its piece of plastic. This is all there is to mechanical drawing and blueprint reading. So, how can mechanical drawing be a full-year course? Easily; the “block” could be much more complicated, needing auxiliary views, section views, shading, maybe even the shadows cast by the “block”.
Also time is needed for practice to gain speed while achieving accuracy. Let us look more closely at the figure. How can we improve it? We do not need to show the “pieces of plastic”. We could have some space between the three views so it would be more obvious where each view begins and ends. Notice how D, U, and C appear on both the x and y axes. If you were to draw a line segment from the U on the x-axis to the U on the y-axis you would form a triangle. What kind of a triangle? Notice the dashed lines. The dashed lines stand for invisible edges. Edges HG, FH, and HB are the back edges of the solid block that we would not see in reality. Let us think some more about this. When we have plans, what do we want to get from them? The sizes of the dimensions that make our object, the angles we have to set our saws at to get the pieces to fit. Will these dimensions be on our pieces of plastic? Think about it. Let us explain “parallel projection” in more detail. If we keep the model in the corner we will be rather cramped; at the least we will have one line right on the edge of the plastic sheet, so let us move the model out of the corner. Now we rest our pencil on an edge with its point touching a plastic sheet, and slide the pencil along the edge, keeping it parallel to the other plastic sheets. You now have a line on your plastic; go completely around the model and you will have one of your three views as before. Let us change our point of view. Look at the pencil; it is always perpendicular to the picture plane. So we could look at the process as keeping the pencil perpendicular to the picture plane and tracing the edges of the model with the “other end”. I say the “other end” because the pencil would have to change its length as the line got closer to or farther from the picture plane. Figure 2 is my attempt at illustrating this. Let us draw a block with some faces not parallel to the picture planes. Turn the model so it “rests” on one of its long edges. See figures 3 and 4.
A rectangular brick has been placed in the interior of a plastic box and parallel projected. The subscripts tell what view the point is on: t, f, and s for top, front, and side views. Points with the same capital letter correspond. Figure 4 is the box in figure 3 unfolded. Edge HK is invisible from the front, so it becomes a dashed line in the front view. Likewise edge EG is underneath, so it is invisible from the top and is a dashed line in the top view. Look at edge CB on the model. Is it equally as long on each of the views? Is it full size on any of the views? If line segment CB on the front view were called x, line segment CB on the top view were called y, and line segment CB on the side view were called z, what would be the relationship between all three variables? Why? So what kind of lines will not be the same length on the plans as they are on the model? Lines that are not parallel to their picture planes. Let us look at another example. We have a block with one corner knocked off (figures 5 and 6). Knocking the corner off leaves a triangular face FDE. In the three-view mechanical drawing there is no full-size congruent image of the triangle. In each view only one of the diagonal lines is full size; the other two are shortened. If we want to see the true shape of the triangle we have to develop it, one of those auxiliary views mentioned earlier. A useful reason to develop all the faces of a figure (other than clarity) is to have a pattern one can cut out of paper to form the object. The paper cutout is a check to see if your drawing is the figure you claim it to be. Figure 7 is the development of the three visible faces in figure 5. It leaves out the back, bottom, and left side of the block. When it comes to developing the block, to make a model, we want the matching edges back together again. I chose to put the DC edges together so there would be more room for the triangle with ED as its base.
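The relationship asked about above (the three projected lengths of one edge) can be checked numerically. A short Python sketch with made-up dimensions, not the actual block in the figures:

```python
import math

def projected_lengths(dx, dy, dz):
    """Lengths an edge appears to have in the top, front, and side views.

    Each orthographic view drops one coordinate: the drawn segment is
    the edge's shadow on that picture plane, so it can only be as long
    as, or shorter than, the true edge.
    """
    top = math.hypot(dx, dy)     # looking down the z-axis
    front = math.hypot(dx, dz)   # looking along the y-axis
    side = math.hypot(dy, dz)    # looking along the x-axis
    return top, front, side

# A hypothetical edge not parallel to any picture plane: no view is full size.
t, f, s = projected_lengths(3.0, 4.0, 12.0)
true_length = math.sqrt(3.0**2 + 4.0**2 + 12.0**2)   # 13.0
# The relationship: the squared view lengths sum to twice the true length squared
assert math.isclose(t**2 + f**2 + s**2, 2 * true_length**2)
```

Each view drops one coordinate, so the squares of the three projected lengths count each of dx², dy², dz² exactly twice; that is why their sum is twice the square of the true length.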
Once the top, front, and side faces are together it is time to make the triangle. With the point of your compass at D, set the radius to DF from the top view and draw an arc. Then with the point of your compass at E, set the radius to FE from the front view and draw an arc to cross the previous arc. The intersection is F, the vertex of the triangle. The construction is the same as constructing a triangle given three sides, as in geometry class.

The Student’s Bibliography

I claim my students will find these books readable. I believe the students will also find them interesting.

Ivan Bernal, Walter C. Hamilton, and John S. Ricci. Symmetry: A Stereoscopic Guide for Chemists. W. H. Freeman and Co., San Francisco, 1972. To repeat, a great book, even if you only look at the pictures.

John G. Burke. Origins of the Science of Crystals. University of California Press, Berkeley and Los Angeles, 1966. A history; shows how the story is more complicated than the textbook versions.

Rodney Cotterill. The Cambridge Guide to the Material World. Cambridge University Press, Cambridge, 1985. An abundance of colorful illustrations. Anything else I could say would sound like I was quoting the publisher’s blurbs.

James Dwight Dana, revised by Cornelius S. Hurlbut, Jr. Dana’s Manual of Mineralogy, 17th ed. John Wiley & Sons, Inc., New York, 1959.

Cornelius S. Hurlbut, Jr. and Cornelis Klein. Manual of Mineralogy (after James D. Dana), 19th ed. John Wiley and Sons, New York, 1977.

Bruno Ernst. The Magic Mirror of M. C. Escher. Ballantine Books, New York, 1976. An aspect of the unit not pursued. Escher did many prints as space-filling patterns. They can be analyzed as to what group structure they have. Escher also investigated perspective, which this book discusses.

Paul P. Ewald. Fifty Years of X-ray Diffraction. International Union of Crystallography, Utrecht, The Netherlands, 1962.
Alan Holden and Phylis Morrison. Crystals and Crystal Growing. The MIT Press, Cambridge, Mass., 6th printing, 1988; new material copyright 1982 by MIT, copyright 1960 by Alan Holden & Phylis Morrison. A classic, everyone should have a copy.

Alan Holden. The Nature of Solids. Columbia University Press, New York, 1965.

Morris Kline. Mathematics in Western Culture. Oxford University Press, New York, 1953.

Robert Lawlor. Sacred Geometry. Crossroad, New York, 1982. Many figures, especially constructions.

Josef Vincent Lombardo, Lewis O. Johnson, and W. Irwin Short. Engineering Drawing. Barnes & Noble Books, New York, 1956. A reference students can work through on their own.

Philip and Phylis Morrison. The Ring of Truth. Random House, Inc., New York, 1987. A companion to a PBS series, with many illustrations. Teaches observation, asking questions, answering questions.

John Pottage. Geometrical Investigations. Addison-Wesley Publishing Company, Inc., Reading, Mass., 1983.

Elizabeth A. Wood. Crystals and Light. D. Van Nostrand Company, Inc., Princeton, New Jersey, 1964.

Elizabeth A. Wood. Crystals—A Handbook for School Teachers. Elizabeth A. Wood, 1972.

(figure available in print form) Figure 1
(figure available in print form) Figure 2
(figure available in print form) Figure 3
(figure available in print form) Figure 4
(figure available in print form) Figure 5
(figure available in print form) Figure 6
(figure available in print form) Figure 7
(figure available in print form) Figure 8

Contents of 1989 Volume VI | Directory of Volumes | Index | Yale-New Haven Teachers Institute
Get homework help at HomeworkMarket.com
Submitted by on Thu, 2013-11-07 14:10; due date not specified; answered 1 time(s); indigo11 is willing to pay $20.00

QMB 6357 Harvard final exam practice

1. The median is often a better representative of the central value of a data set when the data set:
- Is bimodal.
- Has a high standard deviation.
- Is highly skewed.
- Has no outliers.

2. The data in the Excel spreadsheet linked below provide information on the nutritional content (in grams per serving) of some leading breakfast cereals. For which nutrients is the mean nutrient content per serving greater than the median nutrient content per serving? (Breakfast Cereals)
- Proteins only.
- Complex carbohydrates only.
- Both nutrients.
- Neither nutrient.

3. The histogram below plots the carbon monoxide (CO) emissions (in pounds/minute) of 40 different airplane models at take-off. The distribution is best described as:
- Skewed right.

4. The histogram below plots the carbon monoxide (CO) emissions (in pounds/minute) of 40 different airplane models at take-off. Which of the following statements is the best inference that can be drawn from this histogram?
- The mean amount of carbon monoxide emissions is greater than the median amount of carbon monoxide emissions.
- The mean amount of carbon monoxide emissions is less than the median amount of carbon monoxide emissions.
- The mean and median amounts of carbon monoxide emissions are about equal.
- The relative sizes of the mean and median amounts of carbon monoxide emissions cannot be inferred from the histogram.

(and other questions)
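The mean-versus-median intuition behind questions like these can be checked quickly in Python (illustrative made-up data, not the cereal spreadsheet or the histogram):

```python
import statistics

# A right-skewed sample: a long upper tail drags the mean above the
# median, which is why the median better represents a skewed data set.
data = [1, 2, 2, 3, 3, 3, 4, 4, 5, 40]

mean = statistics.mean(data)       # 6.7, pulled up by the tail value 40
median = statistics.median(data)   # 3.0, unaffected by the tail
assert mean > median
```

The single extreme value moves the mean by almost 4 units while leaving the median untouched, which is the inference a right-skewed histogram supports.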
North Hills, NY SAT Math Tutor

Find a North Hills, NY SAT Math Tutor

...While I have not tutored in an official setting before, I am looking to do so to prepare for a possible career in the field. I have tutored my elementary school cousins on and off, and I have tutored friends in school. While these experiences are not officially a job, I do greatly enjoy the work.
14 Subjects: including SAT math, writing, geometry, biology

...I am a warm and caring teacher with a sense of humor, which helps put students learning something new at ease. One of the most effective ways I help my students is by presenting comprehensible material that stimulates different learning styles. As a one-to-one tutor, I assess how a student learns best and create lessons that cater to that student's learning modality.
27 Subjects: including SAT math, Spanish, reading, English

...These office hours were for any math students, so I quickly became adept at answering questions about almost any math-related problem, be it a problem with an integral for a calculus student, or a misunderstanding about trigonometry. I try to guide students to understanding the material by tryin...
18 Subjects: including SAT math, calculus, trigonometry, physics

...By getting to know the whole person, I avoid bumps in the road and am able to smoothly navigate a pathway to success. My students obtain optimal outcomes from their efforts. I teach students how to maximize their gains.
52 Subjects: including SAT math, English, reading, writing

Hi parents and students, My name is Natalie and I am a forthcoming high school mathematics teacher. I graduated from a NYC specialized high school and I am currently studying at New York University, majoring in Mathematics Secondary Education. I have been a volunteer math tutor for the last 5 years, and have grown to work quickly and effectively on any mathematics subject.
19 Subjects: including SAT math, calculus, geometry, biology
On the extension problem for partial permutations - in preparation.

Abstract. In this paper we characterize the congruence associated to the direct sum of all irreducible representations of a finite semigroup over an arbitrary field, generalizing results of Rhodes for the field of complex numbers. Applications are given to obtain many new results, as well as easier proofs of several results in the literature, involving: triangularizability of finite semigroups; which semigroups have (split) basic semigroup algebras; two-sided semidirect product decompositions of finite monoids; unambiguous products of rational languages; products of rational languages with counter; and Černý's conjecture for an important class of automata. Cited by 19 (11 self).

2004. Abstract. In this paper, we establish several decidability results for pseudovariety joins of the form V ∨ W, where V is a subpseudovariety of J or the pseudovariety R. Here, J (resp. R) denotes the pseudovariety of all J-trivial (resp. R-trivial) semigroups. In particular, we show that the pseudovariety V ∨ W is (completely) κ-tame when V is a subpseudovariety of J and W is (completely) κ-tame. Moreover, if W is a κ-tame pseudovariety which satisfies the pseudoidentity x_1 ··· x_r y^(ω+1) z t^ω = x_1 ··· x_r y z t^ω, then we prove that R ∨ W is also κ-tame. In particular the joins R ∨ Ab, R ∨ G, R ∨ OCR, and R ∨ CR are decidable. Cited by 7 (6 self).

Abstract. It is shown that the pseudovariety R of all finite R-trivial semigroups is completely reducible with respect to the canonical signature. Informally, if the variables in a finite system of equations with rational constraints may be evaluated by pseudowords so that each value belongs to the closure of the corresponding rational constraint and the system is verified in R, then there is some such evaluation which is "regular", that is, one in which, additionally, the pseudowords only involve multiplications and ω-powers. Cited by 3 (3 self).

Abstract. Radicals for Fitting pseudovarieties of groups are investigated from a profinite viewpoint in order to describe Malcev products on the left by the corresponding local pseudovariety of semigroups. Cited by 2 (2 self).

Abstract. We give a counterexample to the conjecture which was originally formulated by Straubing in 1986 concerning a certain algebraic characterization of regular languages of level 2 in the Straubing-Thérien concatenation hierarchy of star-free languages. Cited by 1 (1 self).

Abstract. We present an algorithm to compute the pointlike subsets of a finite semigroup with respect to the pseudovariety R of all finite R-trivial semigroups. The algorithm is inspired by Henckell's algorithm for computing the pointlike subsets with respect to the pseudovariety of all finite aperiodic semigroups. We also give an algorithm to compute J-pointlike sets, where J denotes the pseudovariety of all finite J-trivial semigroups. We finally show that, in contrast with the situation for R, the natural adaptation of Henckell's algorithm to J computes pointlike sets, but not all of them. Cited by 1 (0 self).

Abstract. The semidirect product of pseudovarieties of semigroups with an order-computable pseudovariety is investigated. The essential tool is the natural representation of the corresponding relatively free profinite semigroups and how it transforms implicit signatures. Several results concerning the behavior of the operation with respect to various kinds of tameness properties are obtained as applications.

Abstract. The notion of reducibility for a pseudovariety has been introduced as an abstract property which may be used to prove decidability results for various pseudovariety constructions. This paper is a survey of recent results establishing this and the stronger property of complete reducibility for specific pseudovarieties.
Mplus Discussion >> Correcting for Measurement Error

Anonymous posted on Thursday, April 28, 2005 - 12:39 pm: Hello. I am attempting to run a path analysis with all variables in my model treated as directly observed. Since I am not including a measurement model, I would like to correct for measurement error. I am aware that this can be achieved by multiplying the variance of an observed variable by 1 - reliability. My first question is that I only want to employ this correction with exogenous variables in the model, not endogenous variables - correct? My second question is how do I fix the variances in Mplus? I have tried using the @ function (e.g., x@.09) following the model command, but this drastically worsens rather than improves model fit. Do I need to create single-indicator latent variables to employ this correction? For example: xlat by x@1; This seems to help model fit, but I am not sure it is proper procedure. Finally, I am using the define command to examine interaction terms in my model. Does it matter if I fix the variance of a variable that represents one of the interaction terms? Any help you can provide will be very much appreciated. Thank you.

bmuthen posted on Thursday, April 28, 2005 - 6:30 pm: As for your first question, the correction is most important for exogenous variables given that parameter estimate biases will occur otherwise. But you may want to do it also for dependents, to separate measurement error and other residual sources. You answered your second question yourself. I think this is posted somewhere on Mplus Discussion. Note that you are fixing the residual variance and that you should fix it to (1 - reliability)*sample variance. For your final question, I don't know why you would want to fix the variance - unless you are referring to the second question above, in which case you want to do the interaction using the factor you

Anonymous posted on Thursday, April 28, 2005 - 6:56 pm: Thank you for your help.
Your answers to my first two questions I followed, and I also found the other posting, which was very helpful. I would like to follow up on/clarify your response to my third question. Assume I create a latent variable for an observed variable (x) in order to fix the residual variance of that variable. Also assume I want to create a third variable that represents the interaction (xz) of this variable with another observed variable (z). Do I create the interaction term using the original observed variable (x) or the latent variable I created for x? If I have to now use the latent variable, do I need to change from using the define command to create the interaction to using the XWITH command? Thanks again for your help.

bmuthen posted on Thursday, April 28, 2005 - 8:00 pm: You would use XWITH, not Define.

Timothy posted on Monday, April 26, 2010 - 7:16 pm: Hi Prof. Muthen, I am using the same approach stated above to run a path analysis. Even though I used the two commands, the model fit still drastically worsens rather than improves. I then used LISREL to run the path analysis with the same approach and had good fit of the data. I am wondering if I have done something wrong in the Mplus commands. Can I send the outputs to you to see if I have any problems with the commands?

Linda K. Muthen posted on Tuesday, April 27, 2010 - 5:21 am: Please send the two outputs and your license number to support@statmodel.com.

Prathiba Nagabhushan posted on Thursday, July 22, 2010 - 7:58 pm: Hi Linda, I am trying to run a path analysis with all variables in my model treated as directly observed. Since I am not including a measurement model, I would like to correct for measurement error. I am aware that this can be achieved by multiplying the variance of an observed variable by 1 - reliability. Could you please tell me where the sample variance is in the output? Thanks in advance.

Linda K.
Muthen posted on Friday, July 23, 2010 - 8:34 am: If you ask for SAMPSTAT or use TYPE=BASIC, the variances are on the diagonal of the variance/covariance matrix.

Bee Jay posted on Monday, March 26, 2012 - 4:09 pm: I am using this equation as well, to fix residual variance for single indicators, as discussed in another thread. So the sample variance is in the SAMPSTAT output. Is the "reliability" you're talking about the variance explained for the indicator? R^2? And when I have completed the equation, will I just enter it into my model, e.g. F1@__;

Linda K. Muthen posted on Tuesday, March 27, 2012 - 7:00 pm: A residual variance can be used as an estimate of reliability.

Stephen Teo posted on Monday, March 25, 2013 - 7:33 pm: Dear Linda, I am trying to control for measurement error in my model, similar to others on this posting. I would like to find out where I can find the info in the output file for "(1 - reliability)*sample variance".

Linda K. Muthen posted on Tuesday, March 26, 2013 - 11:35 am: The sample variance is obtained from SAMPSTAT or TYPE=BASIC. You also need to determine reliability. That is not given automatically.
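The arithmetic behind the "(1 - reliability)*sample variance" advice in this thread is a one-liner; a Python sketch with hypothetical numbers (the reliability itself must come from outside the model, e.g. a published estimate):

```python
def fixed_residual_variance(reliability, sample_variance):
    """Value at which to fix the error variance of a single indicator.

    If the reliability r of indicator x is known, the share (1 - r) of
    x's sample variance is taken to be measurement error, so the
    residual variance is fixed at (1 - r) * Var(x) rather than freely
    estimated.
    """
    return (1.0 - reliability) * sample_variance

# Hypothetical numbers: reliability 0.80, sample variance 2.5
# -> fix the residual at 0.5 (e.g. "x@0.5;" alongside "xlat by x@1;")
assert abs(fixed_residual_variance(0.80, 2.5) - 0.5) < 1e-12
```

The sample variance comes from SAMPSTAT or TYPE=BASIC as described above; the computed value is then what gets fixed with the @ syntax.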
Waddell Statistics Tutor

Find a Waddell Statistics Tutor

...I offer individual or group tutoring for nursing, writing, math and various statistics courses - health care, psychology, research, bio, and business. I am a Registered Nurse licensed in the State of Arizona and pursuing my Master of Science degree in Nursing Leadership. I graduated with a Bach...
10 Subjects: including statistics, nursing, public speaking, study skills

...Geometry analyzes the relationships within all shapes. There will always be truths about all objects, which are the equations you use to describe the object. This deals with squares, rectangles, triangles, polygons, and measurement of angles within these objects.
21 Subjects: including statistics, chemistry, calculus, physics

...My fiancee and I moved to Surprise, AZ, recently for her Nursing degree, and I plan on finishing my degree at Arizona State University soon. If you are looking for any tutoring from high school math and science up to Calculus and college level Physics I would be happy to help. I have very little official tutoring history, but I have had many unpaid tutoring opportunities.
11 Subjects: including statistics, calculus, physics, geometry

...I've done baseball instruction for over 20 years and studied Biomechanics at USC while coaching there 1984-1986 and SDSU in 1991-1992... I have been blessed to work with many of the greatest coaches in college and professional baseball today. As a child I was a chess prodigy playing many student...
107 Subjects: including statistics, English, Spanish, reading

...I bring patience and enthusiasm to tutoring. I received an A in my college C/C++ class and in more advanced computer science classes such as algorithms and operating systems. I have worked as a programmer in C and assembler and written high level programs in C++. I love to tutor and bring patience and enthusiasm to my teaching.
20 Subjects: including statistics, English, reading, writing
[Numpy-discussion] Help with np.where and datetime functions
John [H2O] washakie@gmail.... Wed Jul 8 19:05:39 CDT 2009

nhmc wrote:
> Also, if you don't need the indices, you can just use the conditional
> expression as a boolean mask:
>>>> condition = (t1 < Y[:,0]) & (Y[:,0] < t2)
>>>> Y[:,0][condition]
> Neil

'condition' is not an index array? Wouldn't it just be the indices as well? Would it be possible to do this:

Y[:,0][ (t1 < Y[:,0]) & (Y[:,0] < t2) ]

I guess that IS what you're saying to do above, but I'm confused what 'condition' would be once returned... (I'll go play in the sandbox now...)
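The question above (mask versus indices) is easy to check; a small self-contained sketch with made-up data standing in for the poster's array:

```python
import numpy as np

# Made-up 2-D array whose first column plays the role of Y[:, 0]
Y = np.array([[1.0, 10.0],
              [3.0, 20.0],
              [5.0, 30.0],
              [7.0, 40.0]])
t1, t2 = 2.0, 6.0

# 'condition' is a boolean mask, one True/False per row, NOT integer indices...
condition = (t1 < Y[:, 0]) & (Y[:, 0] < t2)
assert condition.dtype == np.bool_

# ...but it can be used directly for selection, including inline:
assert list(Y[:, 0][condition]) == [3.0, 5.0]
assert list(Y[:, 0][(t1 < Y[:, 0]) & (Y[:, 0] < t2)]) == [3.0, 5.0]

# np.where converts the mask into the equivalent integer indices
assert list(np.where(condition)[0]) == [1, 2]
```

So both spellings in the message select the same rows; the difference is only whether the intermediate mask gets a name.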
Digital Electronics: State machine

1. The problem statement, all variables and given/known data
(a) How many states does this system have?
(b) How many rows will there be in a state transition table?
(c) Provide the state transition table.
(d) Draw a state diagram of the system.
(e) Describe what the circuit does in words.

2. Relevant equations

3. The attempt at a solution
a) I think there are two flip-flops (or are they switches?), so that means that there are four states: 00, 01, 10, and 11.
b) I think the state transition table will have 8 rows. These numbers will be at the beginning of each row:

Good. Correct so far. Now label those three columns as A, B and Y (the inputs to the logic), and make 2 more columns for the "Next X, Next Y" outputs of the FFs. Use the logic terms shown for the J & K inputs for the 2 FFs to calculate what the Next X and Next Y outputs will be for each row. That is your transition table. Then use that to answer the rest of the questions. Show us what you end up with!
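Filling in the "Next X / Next Y" columns row by row amounts to evaluating the JK flip-flop rule for each state/input combination. A generic Python sketch (the problem's actual J and K logic expressions are not shown in the excerpt, so only the flip-flop rule itself is modeled):

```python
def jk_next(q, j, k):
    """Next state of a JK flip-flop after a clock edge.

    J=0,K=0 holds; J=1,K=0 sets; J=0,K=1 resets; J=1,K=1 toggles.
    """
    if j and k:
        return 1 - q   # toggle
    if j:
        return 1       # set
    if k:
        return 0       # reset
    return q           # hold

# Each transition-table row: evaluate the circuit's J and K expressions
# from the current state and input, then apply jk_next to get Next Q.
assert jk_next(0, 1, 1) == 1   # toggle from 0
assert jk_next(1, 1, 1) == 0   # toggle from 1
assert jk_next(0, 1, 0) == 1   # set
assert jk_next(1, 0, 1) == 0   # reset
assert jk_next(1, 0, 0) == 1   # hold
```

Running this for all 8 rows (two state bits plus one input) produces exactly the transition table the reply asks for.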
401(k) or student loan (more importantly: a math question)

Hi all, I recently discovered both this site and the Bogle theory on index fund investment. I am just now taking my retirement finance management seriously. I will be getting married in the near future and it has had a sobering effect on my life goals. (Not that I was ignoring my future; I do have a considerable bit of retirement savings that I managed to build up before grad school.) I have learned a lot from the postings here and I am looking forward to learning more. Based on what I have found searching this forum, I think my question is a no-brainer but, as my SO would say, I need to be validated. Additionally, I have been searching for math that will make my understanding of the Boglehead viewpoint on paying down debt vs. investing crystal clear - or approximately so.

- Income tax bracket: ~31% + ~9% = 40% (This assumes no 2013 tax cut extension and CA income tax.)
- Student loan debt: ~200k
- Component of student loan debt @ 8.5% interest: ~43k (BTW, I will be telling my kids not to go to private schools if they can help it.)
- The rest of my student loan debt is < 6.9% (and given the large 8.5% component, the rest of my loans are inconsequential to my immediate dilemma).
- Emergency fund: ~6k (approx. 6 months of expenses)
- My employer has a 401(k) starting Jan. 1, 2013, no match.
- I have another ~25 years before I am required to start my 401(k) withdrawals.

Question: Should I pay down my 8.5% debt or invest in the 401(k), or some mix?

My answer based on what I have read on the forums: Pay down the damn debt. (Am I wrong here?)

Math question: How can I relate my 8.5% debt to a pre-tax 401(k) investment so that I can compare numbers?
- The 8.5% debt is, worst case, 25 years in repayment. Compounding can be avoided if I do my paperwork right... I think.
- I understand that I can translate debt repayment into a bond equivalent for math purposes.
- I still don't understand how to compare this to a similar pre-tax (40% savings in my case) equivalent investment, assuming the 25 year worst-case scenario (and I can't figure out intermediate scenarios either). I am interested in understanding the comparison of the debt to risk-averse investment alternatives in: 1) the short term; and 2) the long term (i.e. once I need to withdraw, taxes will impact this) I realize that (2) requires me to make some assumptions about the future. So, I am approaching it as 'all things being equal years from now' (probably not accurate; corrections welcome) I also realize that 8.5% guaranteed is unheard of. My concern is with the impact of the use of pre-tax dollars. Ultimately, I think I know the answer to my practical question. The issue I am having is understanding the math that gets me to that point. I would love to hear any thoughts on either portion of this. (A side-note: I am so happy to finally find a forum that deals in unhyped objective long-term goals. I just bought my brother a Bogleheads book for Christmas.) Thanks in advance for any feedback. Re: 401(k) or student loan (more importantly: a math questio Hi Ron, I'm just heading out to lunch so can't run too many numbers but here are my initial thoughts: 1. With your nasty tax-bracket, the after-tax rate on your 8.5% student loan is only 5.1%, assuming that the interest is deductible from CA income taxes? 2. The 401k is appealing since you may be in a much lower tax rate in the future BUT 5.1% is a much better rate than you'd get on any sort of investment that's even close to as reliable as a student loan; as sure as you're going to die, you're going to pay off that student loan. In other words, from an alternative investment comparison, the 401k can't compete. 3. Do you have any equity in your house you could leverage to get rid of the $43k, higher-interest component? 4. You'd be wise to go be a rural doctor or something to get those loans forgiven. Good luck!
Re: 401(k) or student loan (more importantly: a math questio Answer depends on how much you save per year and what your expected tax rate on withdrawals will be. Someone who will be able to withdraw tax-free from the 401K and who can expect to pay off the 8.5% interest loans in the next several years should max out the 401K. Someone else who doesn't save enough to max out the 401K in a typical year should probably just focus on debt repayment. One way around this dilemma is to max out the 401K and then take a 401K loan to pay down your loans. Then you get your tax deduction without slowing down student loan repayment. "I fancy that over-confidence seldom does any great harm except when, as, and if, it beguiles its victims into debt." -Irving Fisher Re: 401(k) or student loan (more importantly: a math questio brianbooth wrote:Hi Ron, I'm just heading out to lunch so can't run too many numbers but here are my initial thoughts: 1. With your nasty tax-bracket, the after-tax rate on your 8.5% student loan is only 5.1%, assuming that the interest is deductible from CA income taxes? This is incorrect as the max amount you can deduct is $2500. Plus your MAGI has to be lower than $75000 or thereabouts, so OP might make too much money to deduct even that. Re: 401(k) or student loan (more importantly: a math questio A couple of other issues to think about (just some random thoughts I have when I ponder the same question): 1. Student loans generally die with you, but funds in retirement accounts can be passed on to a surviving spouse, heir, etc. 2. Student loans are hard to get rid of (can't discharge in bankruptcy), but 401k funds are also afforded a ton of protection in bankruptcy and against judgments in lawsuits and such. 3. Student loan forgiveness options - Income based repayment, PSLF, etc. Re: 401(k) or student loan (more importantly: a math questio The math here doesn't seem too difficult if you assume your tax rate now is the same as your tax rate later. 
(It seems quite similar to the question of whether to use a roth 401k). Suppose you have $100 of income and you want to decide whether to put it in your 401k or use it to pay off your loans. Scenario #1: You contribute $100 to your 401k. It grows at a rate of R% for N years, and then you pay 40% taxes on it: 100 * (1+R)^N * .60 Scenario #2: You pay taxes on your $100. This gets you 8.5% a year for N years: 100 * .60 * 1.085^N Just like a Roth vs regular 401k, if your tax rate is the same at retirement as it is now then it doesn't make a difference if you pay taxes now or later. Taking the "guaranteed" 8.5% return from paying off the loan seems like a big win (sounds like you've already come to this conclusion!). Re: 401(k) or student loan (more importantly: a math questio countofmc wrote: brianbooth wrote:Hi Ron, I'm just heading out to lunch so can't run too many numbers but here are my initial thoughts: 1. With your nasty tax-bracket, the after-tax rate on your 8.5% student loan is only 5.1%, assuming that the interest is deductible from CA income taxes? This is incorrect as the max amount you can deduct is $2500. Plus your MAGI has to be lower than $75000 or thereabouts, so OP might make too much money to deduct even that. Seriously. The good news is that the interest on the first $37,000 of your $200,000 of debt would be deductible if you weren't way, way over the income limit and if the student loan interest deduction as such weren't expiring in eleven days. Re: 401(k) or student loan (more importantly: a math questio I see you are in a pretty high tax bracket and your monthly expenses are low. I would max 401K/Roth IRA and put extra cash flow on loans. Increase emergency fund to ~2 years. Should be easy with your income and expenses. Then borrow from 401K to pay off a chunk of loans. Treat 401K loan like a bond. This also increases tax deferred space since interest is paid directly to you. Once you miss a year of tax deferred space you never get it back. 
Also, do remember, you deduct 401K contributions at your marginal rate, but upon withdrawal, you fill up lower tax brackets first, making the average rate much lower. This is one of the greatest tax arbitrage opportunities few understand very well outside this website. Any human being is really good at certain things. Most people are pretty modest instead of an arrogant S.O.B. like me, what comes naturally, you don’t see as a special skill. Re: 401(k) or student loan (more importantly: a math questio grap0013 wrote:Also, do remember, you deduct 401K contributions at your marginal rate, but upon withdrawal, you fill up lower tax brackets first, making the average rate much lower. This is one of the greatest tax arbitrage opportunities few understand very well outside this website. The finance buff has a good demonstration of this in his "Case Against a Roth" article: http://thefinancebuff.com/case-against-roth-401k.html I think a lot of that applies here. Still, it's a gamble based on the future tax rates. The OP stated that the 31% is under the assumption that the tax cuts expire, and given that the OP is presumably young and newly hired, it's entirely possible that the tax rate will be higher later on. Paying off the loan is a sure thing, while the 401k return is not. Re: 401(k) or student loan (more importantly: a math questio Your monthly expenses are only $1k? You list a $6k EF that covers 6 months. Something is off here. Not sure it matters in the end. - Bill Re: 401(k) or student loan (more importantly: a math questio I would pay off the debt. There are significant psychological benefits to doing so. Re: 401(k) or student loan (more importantly: a math questio DTSC wrote:I would pay off the debt. There are significant psychological benefits to doing so. There are a lot of psychological benefits to seeing your net worth go up too! Any human being is really good at certain things. Most people are pretty modest instead of an arrogant S.O.B.
like me, what comes naturally, you don’t see as a special skill. Re: 401(k) or student loan (more importantly: a math questio market timer wrote:Answer depends on how much you save per year and what your expected tax rate on withdrawals will be. Someone who will be able to withdraw tax-free from the 401K and who can expect to pay off the 8.5% interest loans in the next several years should max out the 401K. Someone else who doesn't save enough to max out the 401K in a typical year should probably just focus on debt repayment. One way around this dilemma is to max out the 401K and then take a 401K loan to pay down your loans. Then you get your tax deduction without slowing down student loan Do you have a car with any equity? You can borrow against your car for under 2% for five years at PenFed. Use it to pay down the debt. Re: 401(k) or student loan (more importantly: a math questio ronima wrote:Math Question: How can I relate my 8.5% debt to a pre-tax 401(k) investment so that I can compare numbers? - the 8.5% debt is, worst case, 25 years in repayment. Compounding can be avoided if I do my paperwork right...I think. - I understand that I can translate debt repayment into a bond equivalent for math purposes. - I still don't understand how to compare this to a similar pre-tax (40% savings in my case) equivalent investment, assuming the 25 year worst-case scenario (and I can't figure out intermediate scenarios either). Adjust the return for the difference in tax rates. If you retire in a 40% tax bracket, and will withdraw the money 30 years later in a 30% tax bracket, then it costs you $6000 to put $10,000 in the 401(k), and you will get the future value of $7000 out. This is a 17% gain, or 0.52% annualized; thus, if you invest the 401(k) in bonds earning 3%, you will earn 3.52%. There is one additional issue: if you don't max out your 401(k) now, then you may have to make taxable investments later, and thus you will lose the benefit of some of the tax deferral. 
However, with a rate as high as 8.5%, the benefit of tax deferral isn't worth the difference. Re: 401(k) or student loan (more importantly: a math questio grabiner wrote:This is a 17% gain, or 0.52% annualized; thus, if you invest the 401(k) in bonds earning 3%, you will earn 3.52%. There is one additional issue: if you don't max out your 401(k) now, then you may have to make taxable investments later, and thus you will lose the benefit of some of the tax deferral. However, with a rate as high as 8.5%, the benefit of tax deferral isn't worth the difference. You should annualize the 17% benefit over the duration of student loan repayment. That is, if someone is paying 8% above-market (say, spread to short-term treasuries) for only a year, it is worth bearing this cost to get a 17% benefit. If it will take 10 years to pay down the student loan, the comparison is less favorable. That's why savings rate matters. It's also worth considering the value of tax deferred compounding, not just difference between contribution and withdrawal marginal tax rates. Having access to a 401K loan further improves the comparison. "I fancy that over-confidence seldom does any great harm except when, as, and if, it beguiles its victims into debt." -Irving Fisher Re: 401(k) or student loan (more importantly: a math questio grap0013 wrote:I see you are in a pretty high tax bracket and your monthly expenses are low. I would max 401K/Roth IRA and put extra cash flow on loans. Increase emergency fund to ~2 years. Should be easy with your income and expenses. Then borrow from 401K to pay off a chunk of loans. Treat 401K loan like a bond. This also increases tax deferred space since interest is paid directly to you. Once you miss a year of tax deferred space you never get it back. Also, do remember, you deduct 401K contributions at your marginal rate, but upon withdrawal, you fill up lower tax brackets first, making the average rate much lower. 
This is one of the greatest tax arbitrage opportunities few understand very well outside this website. +1 to what he said. I'm assuming from your tax bracket you have a very healthy salary and it seems like you still have a very low cost (relative to your salary) lifestyle. I would prioritize the following way: 1. Max 401k 2. Max Backdoor Roth IRA 3. Throw everything you can muster at your debt starting with the highest interest rates first My big question is if you did this, how much could you be paying off each year? If you make around or above 200k I think you could realistically throw 3-4 per month at this and be done in 5-6 years. Re: 401(k) or student loan (more importantly: a math questio grabiner wrote: ronima wrote:Math Question: How can I relate my 8.5% debt to a pre-tax 401(k) investment so that I can compare numbers? - the 8.5% debt is, worst case, 25 years in repayment. Compounding can be avoided if I do my paperwork right...I think. - I understand that I can translate debt repayment into a bond equivalent for math purposes. - I still don't understand how to compare this to a similar pre-tax (40% savings in my case) equivalent investment, assuming the 25 year worst-case scenario (and I can't figure out intermediate scenarios either). Adjust the return for the difference in tax rates. If you retire in a 40% tax bracket, and will withdraw the money 30 years later in a 30% tax bracket, then it costs you $6000 to put $10,000 in the 401(k), and you will get the future value of $7000 out. This is a 17% gain, or 0.52% annualized; thus, if you invest the 401(k) in bonds earning 3%, you will earn 3.52%. There is one additional issue: if you don't max out your 401(k) now, then you may have to make taxable investments later, and thus you will lose the benefit of some of the tax deferral. However, with a rate as high as 8.5%, the benefit of tax deferral isn't worth the difference. Just to piggyback onto this with two examples using 2012 tax info.
Example #1: MFJ earning 100K. 401K contributions are deducted from 25% marginal rate. For 100K income in retirement you will pay $12,185 in federal taxes or an average rate of 12.2%. So it's 25% vs. 12.2%. Example #2: MFJ earning 250K. 401K contributions are deducted from 33% marginal rate. For 250K income in retirement you will pay $52,971 in federal taxes or an average rate of 21.1%. So it's 33% vs. 21.1%. Looks like the "spread" hovers around 12% using a couple basic examples. I haven't even mentioned state income taxes which further supports maxing tax deferred space as highest priority. If the OP retires in Florida... Any human being is really good at certain things. Most people are pretty modest instead of an arrogant S.O.B. like me, what comes naturally, you don’t see as a special skill.
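The scenario algebra quoted earlier in the thread ($100 into the 401k vs. paying tax and then retiring debt) is easy to check numerically. A minimal sketch, assuming the thread's 40% tax rate and 8.5% loan rate; the 3% bond yield is just an illustrative figure, and deduction limits and future-bracket questions discussed above are ignored:

```python
def after_tax_401k(gross, growth, years, tax):
    """Scenario #1: contribute $gross pre-tax, grow it, pay tax at withdrawal."""
    return gross * (1 + growth) ** years * (1 - tax)

def loan_paydown(gross, loan_rate, years, tax):
    """Scenario #2: pay tax first, then 'earn' the loan's rate by retiring debt."""
    return gross * (1 - tax) * (1 + loan_rate) ** years

# With identical tax rates now and later, only the rate of return decides:
print(after_tax_401k(100, 0.03, 25, 0.40))   # 401k invested in 3% bonds
print(loan_paydown(100, 0.085, 25, 0.40))    # guaranteed 8.5% from the loan
```

Because multiplication commutes, paying the 40% tax before or after the growth gives the same result at equal rates, which is exactly the Roth-vs-traditional point made above; at unequal rates the 8.5% guaranteed loan "return" dominates the bond alternative.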
The Firing Line Forums - View Single Post - cheapo 9mm mould HP thanks for the input. i guess i'll expect to pay around a hundred bucks. p.s. i spent all day making lead soup and poured off about 50 pounds of ingots, i did a crimp test on all the weights before they went in, my ingots poured well, but are a little sparkly, is the sparkle normal? i remember working with zinc when i did construction and it just kinda looks like that. out of 120 pounds of wheel weights i only found maybe 15 zincs and a couple hundred irons.
Power problem, need some advice/help I actually got the problem. I had changed the problem because it was a homework problem and I didn't want to feel like I cheated, and in the process of changing numbers I forgot to re-add the height of the boat. Basically what I did was convert the 14 L/min into kg/s and use the equation P = m * g * h / t (gravitational potential energy over time). Got the right answer, without help! I guess doing physics at night is a bad thing, because my creative thinking is nonexistent, so figuring out a word problem, obviously, is not going to go so well. Thanks.
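For anyone who hits the same wall: the key step is converting the volume flow to a mass flow before applying P = (m/t) * g * h. A quick sketch; the 3 m lift height here is made up, since the original problem's height isn't given, and water's density of 1 kg per litre is assumed:

```python
RHO_WATER = 1.0   # kg per litre (assumed: the fluid is water)
G = 9.81          # m/s^2

def pump_power(flow_l_per_min, height_m):
    """Power needed to raise a fluid stream: P = (dm/dt) * g * h."""
    mass_flow = flow_l_per_min * RHO_WATER / 60.0   # L/min -> kg/s
    return mass_flow * G * height_m                  # watts

print(pump_power(14, 3.0))   # power for 14 L/min lifted through 3 m
```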
Posts from April 3, 2012 on Programming Praxis We have a fun little problem from number theory today. Take a minute and try to find x and y so that x^2 + 4 y^2 = 1733. If that's too easy for you, try x^2 + 58 y^2 = 3031201. Equations of the form x^2 + d y^2 = p, with p prime, can be solved by an algorithm developed by Giuseppe Cornacchia in 1908 (actually, a fellow named Smith developed the same algorithm in 1885, but his work seems to be forgotten). Here's the algorithm, which assumes that p is prime and 0 < d < p: 1. If the Jacobi symbol (−d/p) is negative, there is no solution; stop. 2. Compute x such that x^2 ≡ −d (mod p). If there are two such values, choose the larger. Then set a ← p and b ← x. 3. While b^2 > p, set a ← b and b ← a mod b. The two assignments are done simultaneously, so the values on the right-hand sides of the two assignments are the old values of the variables. (You may notice that this is Euclid's algorithm.) 4. After the loop of Step 3 is complete, if d does not evenly divide p − b^2 or if c = (p − b^2) / d is not a perfect square, there is no solution; stop. Otherwise, x = b and y = √c. We can use Cornacchia's algorithm with d = 1 to verify Fermat's Christmas Theorem (because it was announced in a letter to Marin Mersenne on December 25, 1640) that all primes of the form 4k+1 can be represented as the sum of two squares; as usual, Fermat didn't give the proof, which was finally published by Leonhard Euler in a letter to Goldbach in 1749 after seven years of effort. Your task is to implement Cornacchia's algorithm and use it to demonstrate that all primes of the form 4k+1 and less than 1000 can be written as the sum of two squares. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
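One way to flesh out the four steps is the Python sketch below. It replaces the Jacobi-symbol test of Step 1 with a brute-force search for the square root in Step 2 (if no root exists, there is no solution), which is fine for the small primes in the task:

```python
from math import isqrt

def cornacchia(d, p):
    """Solve x^2 + d*y^2 = p for odd prime p with 0 < d < p, or return None."""
    # Step 2: the larger of the two square roots of -d (mod p) lies in (p/2, p).
    x0 = next((x for x in range(p // 2 + 1, p) if x * x % p == (-d) % p), None)
    if x0 is None:
        return None                  # -d is a non-residue: no solution (Step 1)
    a, b = p, x0
    while b * b > p:                 # Step 3: Euclid's algorithm
        a, b = b, a % b
    c, r = divmod(p - b * b, d)      # Step 4: check d | (p - b^2) ...
    if r or isqrt(c) ** 2 != c:      # ... and that c is a perfect square
        return None
    return b, isqrt(c)

print(cornacchia(4, 1733))           # the first puzzle above: x^2 + 4 y^2 = 1733

# The task: every prime of the form 4k+1 below 1000 is a sum of two squares.
for p in range(5, 1000, 4):
    if all(p % i for i in range(2, isqrt(p) + 1)):   # trial-division primality
        x, y = cornacchia(1, p)
        assert x * x + y * y == p
```

The brute-force root search makes the sketch O(p) per prime; for large inputs like 3031201 one would substitute a proper Tonelli-Shanks modular square root.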
Robot motion planning with many degrees of freedom and dynamic constraints Results 1 - 10 of 26 - IEEE Trans. Auto. Control , 1993 "... Abstract--In this paper, we investigate methods for steering systems with nonholonomic constraints between arbitrary configurations. Early work by Brockett derives the optimal controls for a set of canonical systems in which the tangent space to the configuration manifold is spanned by the input vec ..." Cited by 251 (15 self) Add to MetaCart Abstract--In this paper, we investigate methods for steering systems with nonholonomic constraints between arbitrary configurations. Early work by Brockett derives the optimal controls for a set of canonical systems in which the tangent space to the configuration manifold is spanned by the input vector fields and their first order Lie brackets. Using Brockett’s result as motivation, we derive suboptimal trajectories for systems which are not in canonical form and consider systems in which it takes more than one level of bracketing to achieve controllability. These trajectories use sinusoids at integrally related frequencies to achieve motion at a given bracketing level. We define a class of systems which can be steered using sinusoids (chained systems) and give conditions under which a class of two-input systems can be converted into this form. , 1993 "... We present efficient algorithms for collision detection and contact determination between geometric models, described by linear or curved boundaries, undergoing rigid motion. The heart of our collision detection algorithm is a simple and fast incremental method to compute the distance between two ..." Cited by 108 (19 self) Add to MetaCart We present efficient algorithms for collision detection and contact determination between geometric models, described by linear or curved boundaries, undergoing rigid motion.
The heart of our collision detection algorithm is a simple and fast incremental method to compute the distance between two convex polyhedra. It utilizes convexity to establish some local applicability criteria for verifying the closest features. A preprocessing procedure is used to subdivide each feature's neighboring features to a constant size and thus guarantee expected constant running time for each test. The expected constant time performance is an attribute from exploiting the geometric coherence and locality. Let n be the total number of features; the expected run time is between O(√n) and O(n) , 1990 "... A method for planning smooth robot paths is presented. The method relies on the use of Laplace’s Equation to constrain the generation of a potential function over regions of the configuration space of an effector. Once the function is computed, paths may be found very quickly. These functions do not ..." Cited by 96 (8 self) Add to MetaCart A method for planning smooth robot paths is presented. The method relies on the use of Laplace’s Equation to constrain the generation of a potential function over regions of the configuration space of an effector. Once the function is computed, paths may be found very quickly. These functions do not exhibit the local minima which plague the potential field method. Unlike decompositional and algebraic techniques, Laplace’s Equation is very well suited to computation on massively parallel architectures. - IEEE Journal of Oceanic Engineering , 2004 "... Abstract—Operations with multiple autonomous underwater vehicles (AUVs) have a variety of underwater applications. For example, a coordinated group of vehicles with environmental sensors can perform adaptive ocean sampling at the appropriate spatial and temporal scales. We describe a methodology for ..." Cited by 57 (15 self) Add to MetaCart Abstract—Operations with multiple autonomous underwater vehicles (AUVs) have a variety of underwater applications.
For example, a coordinated group of vehicles with environmental sensors can perform adaptive ocean sampling at the appropriate spatial and temporal scales. We describe a methodology for cooperative control of multiple vehicles based on virtual bodies and artificial potentials (VBAP). This methodology allows for adaptable formation control and can be used for missions such as gradient climbing and feature tracking in an uncertain environment. We discuss our implementation on a fleet of autonomous underwater gliders and present results from sea trials in Monterey Bay in August, 2003. These at-sea demonstrations were performed as part of the Autonomous Ocean Sampling Network (AOSN) II project. Index Terms—Adaptive sampling, autonomous underwater vehicles (AUVs), cooperative control, formations, gradient climbing, underwater gliders. I. , 1992 "... The motion planning problem asks for determining a collision-free path for a robot amidst a set of obstacles. In this paper we present a new approach for solving this problem, based on the construction of a random network of possible motions, connecting the source and goal configuration of the ro ..." Cited by 53 (24 self) Add to MetaCart The motion planning problem asks for determining a collision-free path for a robot amidst a set of obstacles. In this paper we present a new approach for solving this problem, based on the construction of a random network of possible motions, connecting the source and goal configuration of the robot. , 1993 "... In this paper we describe a robot path planning algorithm that constructs a global skeleton of free-space by incremental local methods. The curves of the skeleton are the loci of maxima of an artificial potential field that is directly proportional to distance of the robot from obstacles. Our method ..." Cited by 51 (9 self) Add to MetaCart In this paper we describe a robot path planning algorithm that constructs a global skeleton of free-space by incremental local methods. 
The curves of the skeleton are the loci of maxima of an artificial potential field that is directly proportional to distance of the robot from obstacles. Our method has the advantage of fast convergence of local methods in uncluttered environments, but it also has a deterministic and efficient method of escaping local extremal points of the potential function. We first describe a general roadmap algorithm, for configuration spaces of any dimension, and then describe specific applications of the algorithm for robots with two and three degrees of freedom. , 1995 "... A model of a topologically organized neural network of a Hopfield type with nonlinear analog neurons is shown to be very effective for path planning and obstacle avoidance. This deterministic system can rapidly provide a proper path, from any arbitrary start position to any target position, avoiding ..." Cited by 35 (0 self) Add to MetaCart A model of a topologically organized neural network of a Hopfield type with nonlinear analog neurons is shown to be very effective for path planning and obstacle avoidance. This deterministic system can rapidly provide a proper path, from any arbitrary start position to any target position, avoiding both static and moving obstacles of arbitrary shape. The model assumes that an (external) input activates a target neuron, corresponding to the target position, and specifies obstacles in the topologically ordered neural map. The path follows from the neural network dynamics and the neural activity gradient in the topologically ordered map. The analytical results are supported by computer simulations to illustrate the performance of the network. (This work has been accepted by Neural Networks, March 1994.) Human motor control reveals a versatility of function and economy of space, that is yet beyond the reach of robots. One of the important themes of research in the fi... - IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION , 1990 "...
We describe an algorithm for planning the motions of several mobile robots which share the same workspace. Each robot is capable of independent translational motion in two dimensions, and the workspace contains polygonal obstacles. The algorithm computes a path for each robot which avoids all obstac ..." Cited by 16 (0 self) Add to MetaCart We describe an algorithm for planning the motions of several mobile robots which share the same workspace. Each robot is capable of independent translational motion in two dimensions, and the workspace contains polygonal obstacles. The algorithm computes a path for each robot which avoids all obstacles in the workspace as well as the other robots. It is guaranteed to find a solution if one exists. The algorithm takes a cell decomposition approach, where the decomposition used is based on the idea of a product operation defined on the cells in a decomposition of a twodimensional free space. We are implementing this algorithm for the case of two robots as part of ongoing research into useful algorithms for task-level programming of the RobotWorld 1 system. - Journal of VLSI Signal Processing , 1994 "... : Analog VLSI provides a convenient and high-performance engine for robot path planning. Laplace's equation is a useful formulation of the path planning problem; however, digital solutions are very expensive. Since high precision is not required an analog approach is attractive. A resistive network ..." Cited by 14 (3 self) Add to MetaCart : Analog VLSI provides a convenient and high-performance engine for robot path planning. Laplace's equation is a useful formulation of the path planning problem; however, digital solutions are very expensive. Since high precision is not required an analog approach is attractive. A resistive network can be used to model the robot's domain with various boundary conditions for the source, target, and obstacles. A gradient descent can then be traced through the network by comparing node voltages. 
We built two analog CMOS VLSI chips to investigate the feasibility of this technique. Design issues included the choice of resistive element, tessellation of the domain, programming of the network and readout of the settled network. Both chips can be connected to a standard VME bus interface to permit their use as co-processors in otherwise digital systems. Keywords: analog VLSI, Laplace's equation, resistive networks, path planning. 1 Introduction Robot path planning is an important area of st...
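The harmonic-potential idea that recurs in the Laplace's-equation abstracts above can be sketched in a few lines: relax the potential on an occupancy grid (Jacobi iteration with obstacles and borders pinned high and the goal pinned low), then follow it downhill. This is only an illustrative toy, not any of the cited implementations; the grid size, boundary values, and iteration count are arbitrary choices:

```python
def harmonic_path(obstacles, start, goal, iters=2000):
    """Plan a grid path by relaxing Laplace's equation (pure-Python toy).

    obstacles: list of lists of bools, True where a cell is blocked.
    Obstacles and the border are pinned at potential 1, the goal at 0.
    A harmonic function has no interior local minima, so greedy descent
    from `start` cannot get stuck the way ad-hoc potential fields can.
    """
    rows, cols = len(obstacles), len(obstacles[0])
    u = [[1.0] * cols for _ in range(rows)]
    for _ in range(iters):                      # Jacobi relaxation sweeps
        new = [row[:] for row in u]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if not obstacles[r][c]:
                    new[r][c] = 0.25 * (u[r - 1][c] + u[r + 1][c] +
                                        u[r][c - 1] + u[r][c + 1])
        new[goal[0]][goal[1]] = 0.0             # Dirichlet condition at goal
        u = new
    path, cur = [start], start
    for _ in range(4 * rows * cols):            # safety cap on path length
        if cur == goal:
            break
        r, c = cur
        nbrs = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        cur = min(nbrs, key=lambda p: u[p[0]][p[1]])   # steepest-descent step
        path.append(cur)
    return path
```

On large grids one would replace the Jacobi sweeps with multigrid or, as in the last abstract, an analog resistive network that settles to the same harmonic solution.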
Theorem 1.13
Theorem 1.13: A line is parallel to one of two parallel lines if and only if it is parallel to the other. Proof: If a line is parallel to one of two parallel lines, then all three lines have the same slope, so they are all parallel to one another; in particular, the line is parallel to the other of the two. Conversely, if a line is not parallel to one of the two parallel lines, then its slope differs from that line's slope, and hence from the slope of the other parallel line, so it is not parallel to the other line either. (Vertical lines, which have no slope, are all parallel to one another, so the statement also holds in that case.)
Proposition 2 If, when the less of two unequal magnitudes is continually subtracted in turn from the greater that which is left never measures the one before it, then the two magnitudes are incommensurable. There being two unequal magnitudes AB and CD, with AB being the less, when the less is continually subtracted in turn from the greater, let that which is left over never measure the one before it. I say that the magnitudes AB and CD are incommensurable. If they are commensurable, then some magnitude E measures them. Let AB, measuring FD, leave CF less than itself, let CF measuring BG, leave AG less than itself, and let this process be repeated continually, until there is left some magnitude which is less than E. Suppose this done, and let there be left AG less than E. Then, since E measures AB, while AB measures DF, therefore E also measures FD. But it measures the whole CD also, therefore it also measures the remainder CF. But CF measures BG, therefore E also measures BG. But it measures the whole AB also, therefore it also measures the remainder AG, the greater the less, which is impossible. Therefore no magnitude measures the magnitudes AB and CD. Therefore the magnitudes AB and CD are incommensurable. Therefore, if, when the less of two unequal magnitudes is continually subtracted in turn from the greater that which is left never measures the one before it, then the two magnitudes are incommensurable.

Antenaresis (also called the Euclidean algorithm), first used in proposition VII.1, is again used in this proposition. Beginning with two magnitudes, the smaller, whichever it is, is repeatedly subtracted from the larger. Proposition VII.1 concerns relatively prime numbers. It is similar to this proposition, but its conclusion is different. Heath claims that Euclid uses X.1 to prove this proposition, in particular, to show that antenaresis eventually leaves some magnitude which is less than E. It is hard to tell what Euclid thought his justification was.
Since both magnitudes are multiples of E, whatever justification Euclid intended back in proposition VII.2 works just as well here. Euclid did, however, put X.1 just before this proposition, perhaps for an intended logical connection. If so, there is a missing statement to the effect that GB is greater than half of AB, and so forth, so that X.1 might be invoked.

An example of incommensurable magnitudes

Consider the 36°-72°-72° triangle ABC constructed in proposition IV.10. This triangle was used in the following proposition IV.11 to construct regular pentagons. When its base BC is subtracted from a side AC, the remainder CD is the base of a similar triangle BCD. Likewise, when the base CD of this new triangle is subtracted from its side BD, the remainder DE is the base of yet another smaller similar triangle CDE. And so forth. Thus, when we begin with the two lines AB and BC and apply the algorithm of antenaresis to them, we get a series of lines which never ends: AB, BC, CD, DE, EF, and so forth, and these lines form a never-ending continued proportion.

AB : BC = BC : CD = CD : DE = DE : EF = ...

Thus, according to this proposition, the two quantities AB and BC are incommensurable. Cutting the line AB at C to make the ratio AB : BC is called in VI.Def.3 cutting AB into extreme and mean ratio. A more recent name for this ratio is the "golden ratio."

Use of this proposition

This proposition is used in the next one.
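The antenaresis process described above is just the subtractive form of the Euclidean algorithm, so it is easy to experiment with. A minimal sketch in Python (the function name and the choice of a step cap are my own): with exact commensurable magnitudes the process terminates and returns a common measure, while starting from magnitudes in extreme and mean ratio the remainders shrink forever, mirroring the never-ending proportion AB : BC = BC : CD = ...

```python
from fractions import Fraction

def antenaresis(a, b, max_steps=30):
    """Repeatedly subtract the lesser magnitude from the greater.
    Return (common_measure, steps) if some remainder comes to measure
    the one before it, or (None, max_steps) if we give up first."""
    steps = 0
    while steps < max_steps:
        if a < b:
            a, b = b, a               # keep a as the greater magnitude
        a -= b                        # subtract the less from the greater
        steps += 1
        if a == 0:                    # b measures the previous remainder
            return b, steps
    return None, steps

# Commensurable magnitudes: a common measure is found.
measure, _ = antenaresis(Fraction(9, 4), Fraction(3, 2))   # → Fraction(3, 4)

# Extreme and mean ratio (golden ratio): the remainders never vanish.
phi = (1 + 5 ** 0.5) / 2
measure2, steps2 = antenaresis(phi, 1.0)                   # → (None, 30)
```

With floating point the golden-ratio run is only trustworthy for a few dozen steps before rounding error dominates, which is why the cap is small; the mathematical content is Euclid's, that the process terminates exactly when the magnitudes are commensurable.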
Angular Momentum (Rotational Momentum)

Angular momentum, or rotational momentum, is a vector quantity, meaning it has both magnitude and direction. It measures the rotational motion of a rotating object, and can be computed by multiplying the moment of inertia (I) by the angular velocity (ω).

The Law of the Conservation of Angular Momentum

The law of the conservation of angular momentum states that the angular momentum of an object is conserved when no external torque is being applied to the object.

L = mvr
• L = angular momentum
• m = mass
• v = speed
• r = radius

L = Iω
• L = angular momentum
• I = moment of inertia
• ω = angular velocity

Angular Momentum in Figure Skating

Angular momentum is useful in analyzing the elegant spins of figure skaters. According to the law of the conservation of angular momentum, the angular momentum of an object will not change unless external torque is applied to it. When spinning, a figure skater will bring their arms closer to their body in order to increase their angular velocity and rotate faster. This works because when the moment of inertia (I in the equation directly above) is decreased by bringing the arms closer to the body, while the angular momentum stays the same by the law of the conservation of angular momentum, the angular velocity must increase. When gliding along the ice, figure skaters have no angular momentum (recall Newton's laws of motion: the figure skater will continue in a straight line). So, in order to spin or jump, the skater must generate angular momentum: the skater applies a force to the ice, and the force that the ice puts on the skater in turn gives the skater the angular momentum necessary for the jump or spin.
A figure skater wants a lot of total angular momentum during a spin so as to complete as many rotations as possible, and they can get it by having a large moment of inertia at the beginning of the jump or spin. Then they can decrease their moment of inertia during the spin or jump and gain angular velocity: since angular momentum is the product of moment of inertia and angular velocity, and angular momentum is conserved, less moment of inertia means more angular velocity. So, to start jumps and spins, a figure skater will spread out their arms or legs to maximize their moment of inertia, and will then pull in their limbs to spin faster.
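The trade-off described above is just L = Iω with L held constant. A tiny numerical illustration (the numbers are hypothetical, chosen for clarity, not measured skater data):

```python
# Conservation of angular momentum: with no external torque,
# L = I * omega stays constant, so shrinking I must raise omega.
I_arms_out = 4.0                 # kg*m^2, hypothetical, arms extended
omega_out = 2.0                  # rad/s, spin rate with arms out

L = I_arms_out * omega_out       # angular momentum, conserved

I_arms_in = 1.0                  # kg*m^2, hypothetical, arms pulled in
omega_in = L / I_arms_in         # → 8.0 rad/s: four times faster
```

Cutting the moment of inertia by a factor of four multiplies the spin rate by exactly four, which is the effect the skater exploits.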
Sum Divergent Series, III

One excellent reason to believe that these Cauchy-divergent sums can be assigned reasonable values comes from the fact that equations like $1 + 1 + 2 + 5 + 14 + 42 + \dots = e^{-\pi i/3}$ $1 + 1 + 2 + 4 + 9 + 21 + 51 + \dots = 1 + 2 + 6 + 22 + 90 + \dots = -i$ have real, finite combinatorial consequences. These are the sums of the Catalan numbers, Motzkin numbers, and Schröder numbers, respectively. By taking these divergent sums seriously, we are led to new results. As a matter of fact, a new combinatorial theorem came out of the comments in the last post, thanks to Isabel (of God Plays Dice) noting that the sum of the Motzkin numbers should be $-i$ : there is a bijective algorithm (explicitly constructed) which converts Motzkin trees to 5-tuples of Motzkin trees. Today I would like to pose the question “How do we know that our divergent sums are meaningful in situations where we can’t immediately find a finite consequence?” And, of course, finally get to the promised puzzle. By being particularly uncritical about what universe our calculations were living in, we showed last time that the formula $1 + x + x^2 + \dots = \frac{1}{1 - x}$ quite indirectly implies that $1 + 2 + 3 + 4 + \dots = -1/12$ But unlike the case of tree-counting, where the divergent sums lead us to very hands-on combinatorial truths, it is unclear how much faith we should have in this sum. I’ve heard that this sum comes up (and really is -1/12) when computing vacuum expectation values in quantum field theories, but I don’t really know enough to say anything reasonable about this. Hopefully some kind commenter will fill in the blanks. While a physical manifestation of this divergent sum is about the best thing we could hope for, we can also look for other abstract manifestations and see if the value -1/12 is consistent between them.
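These values are what the standard generating functions give when evaluated at x = 1 with the principal complex square root: $C(x) = (1 - \sqrt{1-4x})/(2x)$ for the Catalan numbers and $M(x) = (1 - x - \sqrt{1-2x-3x^2})/(2x^2)$ for the Motzkin numbers. A quick numerical check of the claimed values (a sanity check, not a derivation):

```python
import cmath
import math

# Catalan generating function at x = 1: (1 - sqrt(-3)) / 2
catalan_sum = (1 - cmath.sqrt(1 - 4)) / 2          # 0.5 - 0.866...j

# Motzkin generating function at x = 1: (0 - sqrt(-4)) / 2
motzkin_sum = (1 - 1 - cmath.sqrt(1 - 2 - 3)) / 2  # equals -i

# The Catalan value is exactly e^{-i pi / 3}.
target = cmath.exp(-1j * math.pi / 3)
```

The principal branch of the square root is what picks out these particular complex values; choosing the other branch would give the complex conjugates.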
Remember that when we tried to compute the sum $1 + 2 + 3 + 4 + 5 + \dots$ directly using analytic regularization, there was a problem: $1 + 2z + 3z^2 + 4z^3 + \dots = \frac{d}{dz}\frac{1}{1 - z} = \frac{1}{(1 - z)^2}$ which is singular at $z = 1$ and therefore fails to give a finite value to this sum. In order to compute the value $-1/12$ we had to start from $\sum (-1)^k = 1/2$ and $\sum 2^k = -1$ (both proved by analytic regularization) and then carry out some rather questionable algebraic manipulations to get the final result. Why should we have faith in the answer we computed? If we believe in a Platonic realm of divergent series, we could think of our value $-1/12$ as an experimental prediction: if we find another way to compute divergent sums which assigns a value to $1 + 2 + 3 + 4 + \dots$, that value will be $-1/12$. This is probably not going to actually be true, but if we had another regularization scheme which did give us $\sum k = -1/12$, we might be more inclined to believe that the sum has some meaningful finite interpretation just like the combinatorial sums from before. With that setup, you probably aren’t going to be surprised that we do have another useful regularization scheme for divergent sums. It goes by the name of zeta regularization, and works like this: the zeta-regularized sum $a_0 + a_1 + a_2 + \dots$ is computed by taking the limit as $z \rightarrow 0$ of $a_0 1^z + a_1 2^z + a_2 3^z + \dots + a_{k-1} k^z + \dots$ Zeta regularization works well in many cases where analytic regularization does not, and vice versa. If there is a universal method for summing divergent series, it is almost as if zeta and analytic regularizations are two disjoint approximations of this method. In particular, we have no reason to expect zeta-regularized sums to have anything to do with analytically regularized sums. 
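Both starting values mentioned above come straight from the closed form of the geometric series: $\sum z^k = 1/(1-z)$ continues analytically to all $z \neq 1$, giving $1/2$ at $z = -1$ and $-1$ at $z = 2$. A one-line check of each:

```python
def regularized_geometric(z):
    """Analytic continuation of 1 + z + z^2 + ... to any z != 1."""
    return 1 / (1 - z)

s_alternating = regularized_geometric(-1)   # sum of (-1)^k  → 0.5
s_doubling = regularized_geometric(2)       # sum of 2^k     → -1.0
```

Inside the unit disk the function agrees with the convergent series; outside, the closed form is the regularized value.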
With our expectations sufficiently lowered, let us do some calculating with the sum $1 + 2 + 3 + 4 + \dots$: $1 \cdot 1^z + 2 \cdot 2^z + 3 \cdot 3^z + 4 \cdot 4^z + \dots = \sum k k^z = \sum k^{z + 1} = \zeta(-z - 1)$ where $\zeta$ is the Riemann zeta function. The sum we are trying to compute is therefore given by $\zeta(-1)$, which we can compute using the functional equation $\zeta(z) = 2^z \pi^{z - 1} \sin(\frac{z \pi}{2}) \Gamma(1 - z) \zeta(1 - z)$ and the fact (demonstrated nicely by Jim on this very blog) that $\zeta(2) = \pi^2 / 6$. What do we get? $\zeta(-1) = 2^{-1} \pi^{-2} \sin(-\frac{\pi}{2}) \Gamma(2) \zeta(2) = \frac{1}{2 \pi^2} \cdot (-1) \cdot 1! \cdot \frac{\pi^2}{6} = -1/12$ Let us pause for a moment and think about just how bizarre this is: two entirely different methods of assigning a sum to the series, the first of which used calculations which are not even clearly well-defined, have given us the same result. Lest we think this is a coincidence, let us also compute $1 + 1 + 1 + 1 + \dots$ with zeta-regularization. Using our questionable algebra from last time, we found a sum of $-1/2$ for this series. With zeta-regularization: $1^z + 2^z + 3^z + \dots = \zeta(-z)$ so we need to compute $\zeta(0)$. The functional equation tells us that for an infinitesimal $dz$, $\zeta(-dz) = \frac{1}{\pi} \sin(-\frac{\pi dz}{2}) \zeta(1 + dz)$ (where = should be read as “is infinitesimally close to”). $\zeta(1)$ is the harmonic series, which gives $\zeta$ a simple pole of residue 1 at $z = 1$, so $\zeta(1 + dz)$ is infinitesimally close to $1/dz$. As a result, $\zeta(-dz) = \frac{1}{\pi} \cdot \left(-\frac{\pi dz}{2}\right) \cdot \frac{1}{dz} = -1/2$ Do you believe that there is some rigorous notion of “divergent sum” hiding away in some Platonic corner of the universe yet? Now here is my puzzle: the harmonic series $H = 1 + 1/2 + 1/3 + 1/4 + \dots$ obviously diverges in the Cauchy sense. It also is Cauchy-divergent for any p-adic metric on $\mathbb{Q}$.
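The $\zeta(-1)$ arithmetic above uses only the functional equation and $\zeta(2) = \pi^2/6$, so it can be checked numerically in a few lines (this verifies the arithmetic, assuming those two inputs):

```python
import math

# zeta(z) = 2^z * pi^(z-1) * sin(pi*z/2) * Gamma(1-z) * zeta(1-z),
# evaluated at z = -1, with zeta(2) = pi^2 / 6 plugged in on the right.
zeta_2 = math.pi ** 2 / 6
z = -1.0
zeta_minus_1 = (2 ** z * math.pi ** (z - 1)
                * math.sin(math.pi * z / 2)
                * math.gamma(1 - z) * zeta_2)
# zeta_minus_1 is -1/12 up to floating-point rounding
```

Every factor is elementary here: sin(-π/2) = -1 and Γ(2) = 1! = 1, so the whole computation reduces to (1/2)(1/π²)(-1)(π²/6).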
Contrast this property with the nice divergent series $\sum 2^k$, for example. The harmonic series does not have a nice zeta regularization due to the pole of $\zeta$ at $z = 1$. It does not have an analytic regularization either: $H(z) = 1 + z/2 + z^2 / 3 + z^3 / 4 + \dots$ so that $(z H(z))' = \frac{1}{1 - z}$ which implies $H(z) = \frac{1}{z} \int_0^z \frac{dt}{1 - t} = -\frac{1}{z} \log(1 - z)$ Sending $z$ to 1 is a disaster, so the harmonic series diverges under analytic regularization as well. Unlike all the other divergent series that we have seen so far, the harmonic series seems to be really divergent. This is my puzzle to you, the internet: can you sum the harmonic series? Just for reference, here are two other dirty tricks that I have tried: the first uses the fact that the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots$ converges. We have $H(z) - z H(z^2) = 1 - z / 2 + z^2/3 - z^3/4 + \dots = \log(1 + z)$ But as $z \rightarrow 1$, this becomes the unfortunate equation $0 = \log(2)$. The second trick is much dirtier, and I was very sad to see that it seems to be failing. The zeta function has a special relationship $\frac{1}{\zeta(z)} = \sum \frac{\mu(k)}{k^z}$ with the Mobius function. The Mobius function is zero about a third of the time, and is equal to +1 as often as it is equal to -1. So it is not unreasonable to expect that $\frac{1}{H} = \sum \frac{\mu(k)}{k}$ converges. But numerical tests that I have run computing the sum out to 100 million terms show that $H$, computed this way, is roughly half the magnitude of the nth partial sum of the harmonic series. For reference, if we replace $\mu$ with a random variable that has the same distribution, the expected absolute value of $H$ that we obtain is something like 1.8, while the value computed using the real $\mu$ function is about 8.9 and the partial sum of the harmonic series is about 19 after 100 million terms.
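The partial sums of $\sum \mu(k)/k$ can be reproduced at a small scale with a Möbius sieve (this sketch goes only to 10^5 rather than the post's 10^8; it is in fact known that these partial sums tend to 0, which is why the 1/H trick cannot converge to anything nonzero):

```python
def mobius_sieve(n):
    """Return a list mu with mu[k] = Mobius function of k, 1 <= k <= n."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False       # sieve out composites
            for m in range(p, n + 1, p):
                mu[m] *= -1               # one factor of -1 per prime divisor
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0                 # not squarefree
    return mu

N = 100_000
mu = mobius_sieve(N)
partial_sum = sum(mu[k] / k for k in range(1, N + 1))
# |partial_sum| is already tiny next to the harmonic partial sum (about 12.1 here)
```

The statement that these partial sums tend to 0 is equivalent to the prime number theorem, so the observed slow decay is expected.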
Thus concludes my sad story about trying to sum the harmonic series; can we come up with a more clever idea?

ulfarsson Says: August 2, 2007 at 3:23 pm | Reply There is an obvious difference between the harmonic series and the other divergent series you are looking at here, and that’s the limit of the terms of the series. The limit is zero only for the harmonic. Do you think this plays some role or is it only a coincidence?

mnoonan Says: August 2, 2007 at 4:35 pm | Reply Since $\zeta(z)$ has only the single pole at z = 1, sums like $\sum n^{-1/2}$ all are zeta-regularizable (that particular sum is about -1.46), though the terms are all going to zero. Since $1/\sqrt{n} > 1/n$, that series also Cauchy-diverges. I can’t figure out how to compute an analytic regularization of the series, though. It would be interesting to see if one exists and agrees with the zeta regularization, maybe somebody out there can figure it out?

• Alejandro Says: January 26, 2010 at 8:18 am | Reply I’ll try to answer mnoonan’s last doubt. Let $S(z)$ be the following power series: $S(z) = 1 + z/\sqrt{2} + z^2/\sqrt{3} + z^3/\sqrt{4} + \dots$ Then S(z)-sqrt(2)*z*S(z^2)=1-z/sqrt(2)+z^2/sqrt(3)-z^3/sqrt(4)+…=T(z), where T(z) converges for z=1. One can see that this series converges to approximately -1.46 by computing terms.

Kea Says: August 2, 2007 at 6:43 pm | Reply What a fun puzzle!! Maybe we should allow divergences in terms of ‘omega’ so long as we can show exactly what it equals in the surreals. Eg. the harmonic series is minus 2(1+1+1+1+….) + 3(1+1+1+1+1+……) + 4(1+1+1+1+1+….) + = -1 + (1 + 2 + 3 + 4 + 5 + 6 + …) + -1 + (1 + 2 + 3 + 4 + 5 + 6 + …) + -1 + (1 + 2 + 3 + 4 + 5 + 6 + …) + -1 + (1 + 2 + 3 + 4 + 5 + 6 + …) + = 13/12 omega

mnoonan Says: August 2, 2007 at 8:07 pm | Reply I like that idea, especially since (as Josh pointed out in the first post) a geometric series with $x= \infty$ might reasonably sum to zero, which could tackle some of the divergences.
On the other hand, you have to be extremely careful with re-bracketing in these sums: it is pretty easy to show that any finite rebracketing is OK for analytic regularization, but infinite ones are more tricky: $1/2 = 1 - 1 + 1 - 1 + \dots \neq (1 - 1) + (1 - 1) + \dots = 0$ That is why I was so surprised to find that the sum of the Schroder numbers equals the sum of the Motzkin numbers — even though they count the same thing in different ways, that “different way” involves an infinite rebracketing of the terms in the sum. For zeta regularization, I can’t even clearly see that finite rebracketing is OK. With analytic regularization, the key lemma is that if $S(z) = \sum a_k z^k$ then $S(z) - a_0 = z \sum a_{k+1}z^k$. This lets us treat any finite leading portion of the sum as an honest-to-goodness sum of numbers which can be rearranged as we wish. Can we prove an analogous statement for zeta regularization? Maybe it is in Hardy somewhere…

Greg Muller Says: August 2, 2007 at 9:02 pm | Reply As far as zeta regularization and analytic regularization giving the same answer when they both work, I believe this is a consequence of the Mellin transform. This is an operation that takes holomorphic functions on $\mathbb{C}$ to holomorphic functions on $\mathbb{C}$ (modulo some definite integral existing), which takes the function $z^n$ to $n^{-s}$. Therefore, if we pull an Euler and assume that the Mellin transform commutes with the infinite sums we care about, we take the holomorphic function $\sum a_nz^n$ to $\sum a_nn^{-s}$. Thus, any analytic continuation of the latter corresponds to the Mellin transform of an analytic continuation of the former. This is pretty hand-wavy, not only because of all the analytic bookkeeping I ignored, but because the techniques Matt used to sum series didn’t always correspond to forming one of the above two formal series. I’m thinking about writing a post on the Mellin transform and its possible applications to this stuff.
Hopefully I get it done before I leave tomorrow.

amazeen Says: August 4, 2007 at 1:27 pm | Reply Could it be that there is some “strip” such that if the rate of growth of the series is in it, we can't assign a well defined value to the sum? What I mean is, if the sum grows slowly enough it converges, if it grows fast enough one can assign a value to the sum like you have done in this post, and previous posts, but if it's somewhere in the middle, like the harmonic series, then it truly is divergent.

Kea Says: August 5, 2007 at 12:15 am | Reply Heh, check this out: a function J(x,y) that unites the Motzkin, Schroeder and Catalan numbers.

fanfan Says: August 8, 2007 at 5:08 pm | Reply Another strange fact: it is known that (1+2+3+…+n)^2 = 1^3+2^3+3^3+…+n^3 Taking the limit when n goes to infinity, we should find zeta(-1)^2 = zeta(-3) but zeta(-3) is 1/120, and not (-1/12)^2 ! Can anyone shed some light on this?

mnoonan Says: August 9, 2007 at 12:55 pm | Reply I think the problem is that “limit” is a metric (or at least topological) concept. Since these sums diverge for most or all good metrics on $\mathbb{Q}$, we shouldn’t expect them to behave nicely under limit operations like computing partial sums, etc.

yasiru89 Says: October 7, 2007 at 12:47 pm | Reply The harmonic series can be made to sum to ${\gamma}$ in the Ramanujan sense. However this is merely defined as the asymptotic difference between the sum and the integral (which in this case is divergent), and in the case of H it simply defines ${\gamma}$.

yasiru89 Says: October 7, 2007 at 1:35 pm | Reply Oh, and Greg Muller having acknowledged the connection to the Mellin transform, we see that neither Abelian nor zeta regularization techniques will work for the harmonic series. I too, in a misadventure of sorts, tried a procedure for summation (one futile from the outset, sadly) that only resulted, by rearrangement, in a proof that the harmonic series diverges.
With regards to fanfan’s question we have to consider asymptotic differences, much like the case with ${\gamma}$ (I refer to the Taylor series for the logarithm), when we consider the infinite case. It is ironic that I seem to be trying to attach rigour to a supposedly ‘dubious’ mathematical procedure, but zeta regularization is mathematically valid (as is the Abelian summation process, owing to analytic extension). We need only absolve ourselves of the hardwired geometry to make sense of it all, in a platonic ‘measure of infinities’ tying it to the bases of analysis. It may be of interest to consider the Euler–Maclaurin sum formula and compare it with the formula I derive in my third post at the link below.

Yasiru Ratnayake Says: October 18, 2008 at 9:24 am | Reply

eljose Says: January 23, 2009 at 1:04 pm | Reply Using the Euler–Maclaurin sum, this zeta regularization can also be extended to integrals Int(0,oo) x^m dx although they are divergent; see http://www.wbabin.net/science/papers/moreta23.pdf relating this strange integral to divergent sums of the form 1 + 2^M + 3^M + 4^M + …

eljose Says: January 23, 2009 at 1:42 pm | Reply http://www.wbabin.net/science/moreta23.pdf instead the above link if possible please someone remove the upper topic of mine thanks :) here you can see how you could calculate divergent integrals Int(0,oo) x^m dx although they are divergent relating them to negative values of zeta function Z(-m)

Lubos and divergent series « The Gauge Connection Says: February 23, 2009 at 6:42 am | Reply [...] http://cornellmath.wordpress.com/2007/08/02/sum-divergent-series-iii/ [...]

Yasiru Says: February 27, 2009 at 9:27 am | Reply See my blog at,

Qiaochu Yuan Says: March 16, 2009 at 5:26 pm | Reply A thought. One can define a q-analogue of the harmonic series by computing sum q^n/(1 – q^n) = sum sigma(n) q^n where sigma(n) is the number of divisors of n.
The “sum” of the harmonic series should be the residue at q = 1; perhaps the Mellin transform would be relevant in relating this sum to sum sigma(n) / n^s = zeta(s)^2?

sdfsd Says: March 18, 2009 at 8:20 am | Reply S = …1/a² + 1/a + 1 + a + a²… aS = …1/a + 1 + a + a² + a³… = S aS = S Therefore, S=0

Alejandro Says: January 26, 2010 at 8:40 am | Reply I have discovered that Ramanujan summation C(a) gives the usually accepted value of both convergent and divergent summations if we take ‘a’ so that the primitive of f(x) evaluated at ‘a’ is 0. That is: a=+infinity for summation of convergent series and, for divergent series, a=0 for the Riemann zeta series, a=-infinity for geometric series and a=1 for the harmonic series. If we allow ‘a’ not to have a real value, but ignore the evaluation of the integral of f(x) at ‘a’, that formula is also valid for alternating divergent series. Taking that into account, the sum of the harmonic series is obtained by taking f(x)=1/x and a=1 in the formula around x=1, and the value obtained is gamma (the Euler-Mascheroni constant).

Alejandro Says: February 1, 2010 at 8:54 am | Reply I have made up a “demonstration” that H=gamma, that is, the sum of the harmonic series is equal to the Euler-Mascheroni constant, by using a power series. Let H_n = Sum_k=1^n (1/k). Then H = 1 + Sum_n=1^inf ( 1/(n+1) + ln(n/(n+1)) ) – lim(x->1-, n->inf) Sum_k=1^n ( ln(k/(k+1)) ) x^k. The part before the limit is gamma, so we must demonstrate that the other series (which I will call S_n) has limit 0. Then S_n = Sum_k=1^n ( ln(k/(k+1)) ) x^k. We compute the limit when x->1-, which I will call L1. L1 S_n = L1 ( x(ln 1 - ln 2) + x^2(ln 2 - ln 3) +…+ x^n(ln n - ln(n+1)) ) = L1 ( 1 ln 1 – x^n ln(n+1) ) – L1 ( (1-x)(x ln 2 + x^2 ln 3 +…+ x^(n-1) ln n) ) = -L1 ( x^n ln(n+1) ). Now I call L2 = lim(n->inf, 0<x<1).
Then: L1 L2 S_n = -L1 L2 ( x^n ln(n+1) ) = -L1 (0*inf) = ( using L'Hopital's rule, taking d/dn in numerator and denominator and noting that 1/x^n = e^(-n ln x) ) = -L1 L2 ( 1/(n+1) / (-x^n ln x) ) = L1 L2 ( x^n / ((n+1) ln x) ) = L1 0 = 0, as we wished to demonstrate, since L2 (x^n) = 0, L2 (n+1) = inf, and L2 (ln x) = ln x, which is finite. See the similarity with the geometric series: Sum_k=1^n (x^k) = (1-x^n)/(1-x) and S_n/(1-x) + Sum_k=1^(n-1) ( x^k ln(k+1) ) = ( 1 ln 1 – x^n ln(n+1) )/(1-x).

Alejandro Says: February 3, 2010 at 10:12 am | Reply Hi again, here is the “demonstration” in a clearer way. First of all, we write the power series: H(x) = -ln(1-x) = lim(n->inf) ( x + x^2/2 + x^3/3 +…+ x^n/n ). Thus we can write: lim(x->1-) ( (x-1) ln (1-x) – x ln(1-x) ) = lim(x->1-, n->inf) ( x + x^2/2 +…+ x^n/n – x^n ln n ) + lim(x->1-, n->inf) ( x^n ln n ), and lim(x->1-) ( (x-1) ln (1-x) ) = (L'Hopital) = lim(x->1-) ( (-1/(1-x)) / (-1/(x-1)^2) ) = lim(x->1-) ( 1-x ) = 0. In the right term of the equation, the second member tends to 0 if we first take the limit n->inf with 0<x<1 and then x->1-, as shown in my last reply. The first member of the right term of the equation tends to gamma if we first take the limit x->1- with n finite and then let n->inf, so the equation results:

Alejandro Says: February 3, 2010 at 10:14 am | Reply I repeat my last sentence, since there is a mistake: The first member of the right term of the equation tends to gamma if we first take the limit x->1- with n finite and then let n->inf, so the equation results:

Alejandro Says: February 3, 2010 at 10:16 am | Reply There is a problem with the editor, so I write my last sentence with words: The first member of the right term of the equation tends to gamma if we make first the limit x->1- with n finite and then make the limit n tends to infinite, so the equation results:

Thomas Says: May 9, 2013 at 1:43 am | Reply The sum has a reasonable value of -6, proof on request. (Hint: multiply the sum by (1+2+3+…) = -1/12, then sum over infinitely long diagonals.)
Jenny Says: June 2, 2013 at 5:13 am | Reply Hi! I just wish to give you a huge thumbs up for the excellent info you have right here on this post. I am coming back to your website for more

On -1/12, adding infinitely many numbers, and Phil Plait’s rash and incorrect claims | konstantinkakaes.com Says: January 18, 2014 at 1:02 am | Reply […] somehow intimately related to -1/12th, in a way that is subtle and mysterious. (Or, if you prefer Plait, “[r]eally […]
Aspnes, James - Department of Computer Science, Yale University • Counting Networks1 James Aspnes y • Towards Better Definitions and Measures of Internet Security (Position Paper) • Stably computable properties of network graphs Dana Angluin, James Aspnes, Melody Chan, Michael J. Fischer, Hong Jiang, • Yale University Department of Computer Science • Towards a Theory of Data Entanglement , James Aspnes 1 • Randomized Protocols for Asynchronous Consensus James Aspnes • Self-stabilizing Population Protocols Dana Angluin, James Aspnes , Michael J. Fischer, and Hong Jiang • A Theory of Timestamp-Based Concurrency Control for Nested Transactions • Randomized Consensus in Expected O(n log n) Individual Work • Bulletin of the EATCS no 90, pp. 109126, October 2006 c European Association for Theoretical Computer Science • Optimal-Time Adaptive Strong Renaming, with Applications to Counting • Stably Computable Predicates are Semilinear Dana Angluin, James Aspnes and David Eisenstat • Tight Bounds for Anonymous Adopt-Commit Objects James Aspnes • On-Line Routing of Virtual Circuits with Applications to Load Balancing and Machine • Relationships Between Broadcast and Shared Memory in Reliable Anonymous • Inferring Social Networks from Outbreaks Dana Angluin1 • Combining Shared Coin Algorithms James Aspnes • Journal of Machine Learning Research ?? (2009) ?????? Submitted 1/09; Revised 6/09; Published ?/?? 
Learning Acyclic Probabilistic Circuits Using Test Paths • Skip B-Trees Ittai Abraham1 • Approximate Shared-Memory Counting Despite a Strong Adversary • Yale University Department of Computer Science • Mutation Systems Dana Angluin, James Aspnes, and Raonne Barbosa Vargas • The Computational Power of Population Protocols Dana Angluin • RANDOMIZED CONSENSUS IN EXPECTED O(N log2 N) OPERATIONS PER PROCESSOR • Max Registers, Counters, and Monotone Circuits (Preliminary Version) • A Modular Approach to Shared-Memory Consensus, with Applications to the Probabilistic-Write Model • Distributed Computing manuscript No. (will be inserted by the editor) • Ranged Hash Functions and the Price of Churn James Aspnes • Fairness in Scheduling Miklos Ajtai James Aspnesy Moni Naorz Yuval Rabanix • Path-Independent Load Balancing With Unreliable Machines James Aspnes • Towards a Theory of Data Entanglement (Extended Abstract) • Fast Computation by Population Protocols With a Dana Angluin1 • Worm Versus Alert: Who Wins in a Battle for Control of a Large-Scale Network? • Stably computable properties of network graphs Dana Angluin • Wait-Free Consensus James Aspnes • Decision Trees (Extended Abstract) • Time-and Space-E cient Randomized Consensus James Aspnes • Competitive Analysis of Distributed Algorithms James Aspnes? 
• Tight Bounds for Adopt-Commit Objects James Aspnes • A Simple Population Protocol for Fast Robust Approximate Majority • Topology of sensor networks Population protocols • The Complexity of Renaming Dan Alistarh • Greedy Routing in Peer-to-Peer Systems James Aspnes • Yale University Department of Computer Science • Randomized Consensus in Expected O(n2 Total Work Using Single-Writer Registers • On the Computational Complexity of Sensor Network Localization • Randomized Load Balancing by Joining and Splitting Bins James Aspnes • Yale University Department of Computer Science • Decomposing consensus Building the components • Tight bounds for anonymous adopt-commit objects James Aspnes1 Faith Ellen2 • The Expressive Power of Voting Polynomials James Aspnes y Richard Beigel z Merrick Furstx Steven Rudichx • Fast Construction of Overlay Networks Dana Angluin • Tight Bounds for Adopt-Commit • Load Balancing and Locality in Range-Queriable Data James Aspnes • Spreading Rumors Rapidly Despite an Adversary James Aspnes William Hurwoody • arXiv:cs.CE/0101015v117Jan2001 A Combinatorial Toolbox for Protein Sequence Design and • Population protocols Impossibility results • Yale University Department of Computer Science • Fairness in Scheduling (Extended Abstract) • Computation in Networks of Passively Mobile Finite-State Dana Angluin • Storage capacity of labeled graphs Dana Angluin1 • Polylogarithmic Concurrent Data Structures from Monotone James Aspnes • Distributed Computing manuscript No. 
(will be inserted by the editor) • Network Construction with Subgraph Connectivity Constraints • Learning Large-Alphabet and Analog Circuits with Value Injection Queries • Time and Space Lower Bounds for Restricted-Use Objects James Aspnes1, Hagit Attiya2, Keren Censor-Hillel3, Danny Hendler4 • Spreading Alerts Quietly and the Subgroup Escape Problem • A Modular Measure of Competitiveness for Distributed Algorithms James Aspnes Orli Waartsy • Yale University Department of Computer Science • Worm Versus Alert: Who Wins in a Battle for Control of a Large-Scale Network? • O(log n)-time Overlay Network Construction from Graphs with Out-degree 1 • Yale University Department of Computer Science • Yale University Department of Computer Science • A Simple Population Protocol for Fast Robust Approximate Majority • O(log n)-time Overlay Network Construction from Graphs with Out-degree 1 • Fault-tolerant Routing in Peer-to-peer Systems James Aspnes • The Expansion and Mixing Time of Skip Graphs with Applications • Yale University Department of Computer Science • Spreading Rumors Rapidly Despite an Adversary James Aspnes William Hurwoody • Approximate counting Randomized consensus • Distributed Computing manuscript No. 
(will be inserted by the editor) • A Theory of Competitive Analysis for Distributed Algorithms Miklos Ajtai • Stably Computable Predicates are Semilinear Dana Angluin • Approximate Shared-Memory Counting Despite a Strong Adversary James Aspnes • Skip Graphs JAMES ASPNES1 • Modular Competitiveness for Distributed Algorithms James Aspnes Orli Waartsy • On the Power of Anonymous One-Way Communication Dana Angluin1 • Low-Contention Data Structures$ James Aspnesa,1, David Eisenstat2, Yitong Yinb,3, • A Theory of Competitive Analysis for Distributed Algorithms Miklos Ajtai James Aspnesy • Lower Bounds for Distributed Coin-Flipping and Randomized Consensus James Aspnes • A modular approach to shared-memory consensus, with applications to the probabilistic-write model • The Expansion and Mixing Time of Skip Graphs with Applications • Lower Bounds for Distributed Coin-Flipping and Randomized Consensus • Low-Contention Data Structures [Extended Abstract] • The Complexity of Renaming Dan Alistarh • Population protocols Computation by epidemic • Compositional Competitiveness for Distributed James Aspnes • Learning a Circuit by Injecting Values Dana Angluin a • Learning a Circuit by Injecting Values [Extended Abstract] • Randomized consensus in expected O(n2 total work using single-writer registers • Yale University Department of Computer Science • Distributed Computing manuscript No. (will be inserted by the editor) • Mutation Systems Dana Angluin James Aspnes Raonne Barbosa Vargas • An Introduction to Population Protocols • Sub-Logarithmic Test-and-Set Against a Weak Adversary • Self-stabilizing Population Protocols Dana Angluin, James Aspnes, Michael J. 
Fischer • Fast Randomized Consensus using Shared Memory • Learning Large-Alphabet and Analog Circuits with Value Injection Queries • Optimally Learning Social Networks with Activations and Suppressions • Learning Acyclic Probabilistic Circuits Using Test Paths Dana Angluin1 • Wait-Free Consensus with Infinite Arrivals James Aspnes • Lower Bounds for Restricted-Use Objects (Extended Abstract) • Faster randomized consensus with an oblivious adversary James Aspnes • Faster than Optimal Snapshots (for a While) James Aspnes
How is this action of monoidal derived category induced?

I am reading a paper concerning the action of a monoidal category on another category. Let $k$ be a commutative ring and $R$ a $k$-algebra. Set $A = R\text{-mod}$ and $B = R^{e}\text{-mod} = R \otimes_{k} R^{o}\text{-mod}$. Then $B \times A \rightarrow A$, $(M,N) \mapsto M \otimes_{R} N$, is an action of the monoidal category $(B, \otimes_{R}, R)$ on $A$. The paper says this action induces an action $\Phi : D^{-}(B) \times D^{-}(A) \to D^{-}(A)$ of the monoidal derived category $D^{-}(B)$ on $D^{-}(A)$.

I know this action should be $(M,N) \mapsto M \otimes_{R}^{L} N$, but I do not see how the action of the monoidal derived category on the other derived category is induced by the action of the monoidal abelian category. Is there a canonical way (a natural transformation) to get this action?

Note that the action of the monoidal abelian category is defined as $\Psi := (\Phi, \phi, \phi_{0})$, where $\Phi : (B, \otimes_{R}, R) \rightarrow \mathrm{End}(A)$ and $\phi$ is the natural map $\Phi(V) \cdot \Phi(W) \rightarrow \Phi(V \otimes_{R} W)$.

The background of this question is localization of differential operators in the derived category, so I added the tag "algebraic geometry". The paper is "Differential Calculus in Noncommutative Algebraic Geometry I", which is available from the MPIM.

Tags: ag.algebraic-geometry, noncommutative-geometry, derived-category

Comments:
– Maybe you could remove all the \[ and \] in the LaTeX code, which does not help readability :) (Mariano Suárez-Alvarez)
– It's almost definitely a paper by Kontsevich-Rosenberg. (Harry Gindi)
– Actually, it is the paper by Lunts-Rosenberg (unpublished, but available from the Max-Planck Institute). (Shizhuo Zhang)
– Have you read Toen-Vezzosi (HAG I & II)? From what I've read of it, it's pretty good. You might like it. (Harry Gindi)
– Maybe I am missing something, but since you have an additive functor you can lift it to the level of homotopy categories and then just left derive it; I'd call that an induced action. (Greg Stevenson)

Accepted answer:
The original action takes the form of an additive functor $A \times B \to B$, with notation as in the question (and appropriate coherence conditions giving compatibility with the monoidal structure on $A$, presumably). By additivity this extends to the level of homotopy categories, giving $K(A) \times K(B) \to K(B)$, where there is an obvious triangulation on the product category. Left deriving this functor gives the desired action $D(A) \times D(B) \to D(B)$, as uniquely as one can expect (and there are coherent natural isomorphisms making this act as nicely as one could expect). Note that taking categories of complexes bounded above is unnecessary.
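A hedged sketch of the derived bifunctor being described, in standard notation (the paper itself may set this up differently): for each complex $N$ in $K^{-}(A)$ choose a quasi-isomorphism $P(N) \to N$ with $P(N)$ a bounded-above complex of projective $R$-modules, and set
$$M \otimes_{R}^{L} N := M \otimes_{R} P(N).$$
Any two such resolutions are homotopy equivalent, and a quasi-isomorphism between bounded-above complexes of projectives is a homotopy equivalence, so this is well defined up to canonical isomorphism and descends to the bifunctor $D^{-}(B) \times D^{-}(A) \to D^{-}(A)$.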
Newest 'sequences-and-series banach-spaces' Questions

• Given a metric space $(\mathbb{A},d)$ in $\mathbb{R^n}$ with a metric $d$ being the Euclidean metric: if $\lim_{t \rightarrow \infty}||A_{t+1}-A_t||\rightarrow 0$ is a convergent sequence where $A$ ...
• I'm trying to prove this but I can't. Any help/reference?
• I am trying to understand the properties of square variation, namely, the possibility of preserving it under certain operations. I am following Albiac & Kalton's book: Let $J$ stand for the usual ...
• Does there exist a linear subspace of $\mathbb{C}^{\mathbb{N}}$ that can be endowed with a Banach space topology that is not finer than the locally convex topology of pointwise convergence? Best, Martin
• Let $c_0$ be the Banach space of doubly infinite sequences $$\lbrace a_n: -\infty\lt n\lt \infty, \lim_{|n|\to \infty} a_n=0 \rbrace.$$ Let $T$ be the space of $2\pi$ periodic functions integrable ...
Is there a way to tweak Excel 2007, with an add-in perhaps, to enable it to save spreadsheets in the dBase IV file format?

I know one can open dBase IV files with Excel 2007, but, unfortunately, you cannot save files to that format. It is only possible in Excel 2003. Is there an add-in one can use in Excel 2007?

Tag: microsoft-excel-2007

Answer (4 votes):
There seem to be various converters to/from dBase here: dbfView. I see options for Excel 2003, 2007, and CSV there.

Answer (2 votes):
I doubt this. But you can export to, let's say, a CSV file, and then use a converter to get your dBase file. I believe finding a CSV (or other open format) to dBase converter is much easier than trying to convert directly to dBase from Excel.

Answer (accepted):
I have found an add-in for Excel 2007 that does the job, on this blog. The add-in is called SaveDBF.
Probability is the chance that something will happen - how likely it is that some event will happen. Sometimes you can measure a probability with a number: "10% chance of rain", or you can use words such as impossible, unlikely, possible, even chance, likely and certain. Example: "It is unlikely to rain tomorrow".
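The numeric measurement of probability described above can be shown with a tiny calculation (this example, a fair six-sided die, is mine and not from the original page): the probability of an event is the number of favourable outcomes divided by the number of equally likely outcomes.

```python
# Probability of rolling an even number on a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]
favourable = [n for n in outcomes if n % 2 == 0]  # 2, 4, 6

p = len(favourable) / len(outcomes)
print(p)  # 0.5 -- an "even chance" in the wording above
```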
Epsilon-Delta Limit Proof

What are the rest of your decimals? Unless you tell us exactly what number e is, you cannot prove that e^x really approaches e as x goes to 1 by your method. And unless you define precisely what e^x means, you also cannot do it.

The easiest way is Halls' second method: e^x is continuous, since it is the inverse of the continuous function ln(x), hence it suffices to show e^1 = e, but that follows from the fact that e is defined as the unique number such that ln(e) = 1.
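Written out, the continuity argument sketched in the post runs as follows (a standard write-up, not the poster's exact wording): $\ln$ is continuous and strictly increasing on $(0,\infty)$, so its inverse $x \mapsto e^x$ is continuous on $\mathbb{R}$. Hence
$$\lim_{x \to 1} e^x = e^1 = e,$$
where the last equality holds because $e$ is defined as the unique number with $\ln(e) = 1$.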
Mathematics with Mathematical Physics MSci

Admissions Tutor: Dr Robert Bowles
Email: admissions@math.ucl.ac.uk
Tel: +44 (0)20 7679 3501
Faculty: Faculty of Mathematical and Physical Sciences
UCAS code: G1FH

Key Facts
Research Assessment Exercise: 50% (Applied) and 60% (Pure) rated 4* (world-leading) or 3* (internationally excellent).

This MSci offers an additional year of study on top of the Mathematics with Mathematical Physics BSc, during which students have the opportunity to specialise further by taking more advanced courses and undertaking a major project.

Entry requirements

A Levels: Grades A*A*A, or A*AA and a 1 in any STEP paper or a distinction in Mathematics AEA.
Subjects: Mathematics and Further Mathematics required at A*, or one of Mathematics or Further Mathematics at A* if STEP or AEA is offered. Physics also required.
AS: For UK-based students, a pass in a further subject at AS level or equivalent is required.
GCSEs: English Language and Mathematics at grade C. For UK-based students, a grade C or equivalent in a foreign language (other than Ancient Greek, Biblical Hebrew or Latin) is required. UCL provides opportunities to meet the foreign language requirement following enrolment; further details at www.ucl.ac.uk/ug-reqs.

IB Diploma: 39-40 points. A score of 20 points in three higher level subjects including 7 in Mathematics and at least 6 in Physics, or 19 points in three higher level subjects including 7 in Mathematics and at least 6 in Physics and a 1 in any STEP paper or a distinction in Mathematics AEA, with no score below 5.
Other qualifications
UCL also accepts a wide range of other UK qualifications for entry; the requirements depend on the qualification offered.

International applicants
In addition to A level and International Baccalaureate, UCL considers a wide range of international qualifications for entry to its undergraduate degree programmes.

University Preparatory Certificates
UCL offers intensive one-year foundation courses to prepare international students for a variety of degree programmes at UCL. The University Preparatory Certificates (UPCs) are for international students of high academic potential who are aiming to gain access to undergraduate degree programmes at UCL and other top UK universities. For more information see our website: www.ucl.ac.uk/upc

English language requirements
If English is not your first language you will also need to satisfy UCL's English Language Requirements. A variety of English language programmes are offered at the UCL Centre for Languages & International Education.

Degree benefits
• A wide range of applied mathematics/mathematical physics courses are offered by the department, reflecting the research interests of current staff.
• The MSci allows for additional in-depth study, providing the skills necessary for academic research in mathematics or for employment where mathematics is directly involved.
• UCL's internationally renowned Mathematics Department is home to world-leading researchers in a wide range of fields, especially geometry, spectral theory, number theory, fluid dynamics and mathematical modelling.
• Three of the six British winners of the Fields Medal (the mathematician's equivalent of the Nobel Prize) have associations with the department.
In the first year and a half of the MSci you will receive a thorough grounding in pure mathematics and mathematical methods following the same courses as the single-subject mathematics students; except that Quantum Mechanics can be taken in place of Algebra 3. The programme then follows relevant pure and applied mathematics options in the second half of the second year and in the third/ fourth years, supplemented by physics courses given by the Department of Physics and Astronomy. The fourth year will include a major project, involving a substantial piece of written work and a Possible options include: Atomic and Molecular Physics (Physics and Astronomy); Point Particles and String Theory (King's College London); Quantum Mechanics (Physics and Astronomy). This programme is offered as a three-year BSc or a four-year MSci degree. The first two years of the programme are identical, and students are advised to apply for the MSci degree in the first instance, as it is possible to transfer to the BSc during the first three years. Your learning Teaching is mainly carried out through lectures and small-group tutorials. Problem classes allow you to exercise the skills you have learned. In addition, an 'office hours' system for each course allows you to meet with tutors on a one-to-one basis to review parts of the course you find interesting or need clarifying. Most courses are assessed by two-hour written examinations in the third term, with a small element (10%) of coursework assessment. A system of Peer Assisted Learning has been pioneered in the department, with second-year students offering support and advice to first years. Degree structure In each year of your degree you will take a number of individual courses, normally valued at 0.5 or 1.0 credits, adding up to a total of 4.0 credits for the year. Courses are assessed in the academic year in which they are taken. The balance of compulsory and optional courses varies from programme to programme and year to year. 
A 1.0 credit is considered equivalent to 15 credits in the European Credit Transfer System (ECTS). Further details are on the department website: Mathematics with Mathematical Physics MSci.

We aim to develop your skills in mathematical reasoning, problem-solving and accurate mathematical manipulation. You will also learn to handle abstract concepts and to think critically, argue logically and express yourself clearly.

A mathematics degree is highly valued by employers due to the skills in logical thinking, analysis, problem-solving and, of course, numeracy that it develops. Graduates have gone forward to use their mathematical skills in careers in the City of London, such as forecasting, risk analysis and trading; in financial services, such as accountancy, banking and insurance; and in scientific research, information technology and industry. Further study, such as a Master's degree or a PGCE qualification, is another popular option.

First career destinations of recent graduates (2010-2012) of Mathematics with Mathematical Physics programmes at UCL include:
• Full-time student, Graduate Diploma in Law at the College of Law (2012)
• Geophysicist, CGG Veritas (2011)
• Full-time student, MSc in Advanced Studies in Mathematics at the University of Cambridge (2010)
• Full-time student, PhD in Mathematics at the University of Surrey (2010)
• Full-time student, PhD in Mathematics at UCL (2010)

Find out more about London graduates' careers by visiting the Careers Group (University of London) website.

Your application
In addition to academic requirements, we expect you to demonstrate an understanding and enjoyment of the subject beyond the examined syllabus, through your reading and involvement in problem-solving activities. Evidence of your curiosity and perseverance in tackling puzzles, and your enjoyment of logical and abstract thinking, should be shown in your application.

How to apply
Application for admission should be made through UCAS (the Universities and Colleges Admissions Service). Applicants currently at school or college will be provided with advice on the process; however, applicants who have left school or who are based outside the United Kingdom may obtain information directly from UCAS.

If your application is sufficiently strong you will be invited to visit the department for an applicant afternoon. Alternatively, some invitations are for an academic interview. You will also be able to talk to current students and staff and will be given a tour. The department is enthusiastically involved in the Year in Industry Scheme, which involves deferring entry for a year to gain valuable work experience.

Fees and funding
UK & EU fee: £9,000 (2014/15)
Overseas fee: £16,200 (2014/15)
Details about financial support are available at: www.ucl.ac.uk/study/ug-finance

Page last modified on 26 feb 14 08:05
Math Forum Discussions - Re: Matheology § 203 Date: Jan 30, 2013 4:46 AM Author: mueckenh@rz.fh-augsburg.de Subject: Re: Matheology § 203 On 30 Jan., 10:31, William Hughes <wpihug...@gmail.com> wrote: > For a potentially infinite list L, the > antidiagonal of L is not a line of L. Of course. Every subset L_1 to L_n can be proved to not contain the > Does this imply > There is no potentially infinite list > of 0/1 sequences, L, with the property that > any 0/1 sequence, s, is one of the lines > of L. Do you mean potentially infinite sequences? Look, everything Cantor does, concerns only finite initial segments. You could cut off the sequences behind the digonal digit. The only thing not terminating, then could be the diagonal itself. But then you would claim that the diagonal differs from every entry, because it has more digits. In the original argument, the diagonal differs at the same places that also exist in the entries. Therefore the argument with the diagonal "being longer" is wrong. So in fact, Cantor shows that the countable set of all terminating decimals is uncountable. Of course this proof is wrong, as using a list that contains all terminating decimals shows. This argument only shows that "countability" as a property of actual infinite sets is Regards, WM
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8188672","timestamp":"2014-04-21T07:28:03Z","content_type":null,"content_length":"2320","record_id":"<urn:uuid:4b27f81d-72e6-4a1b-a4b6-6ab59f586f2c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Modular group algebras with almost maximal Lie nilpotency indices II. Bódi, Viktor (2007) Modular group algebras with almost maximal Lie nilpotency indices II. Mathematica Japonica, 65 (2). pp. 267-271. ISSN 0025-5513 Restricted to Registered users only Download (92Kb) | Request a copy Let K be a field of positive characteristic p and KG the group algebra of a group G. It is known that, if KG is Lie nilpotent, then its upper (or lower) Lie nilpotency index is at most |G�'|+1, where |G�'| is the order of the commutator subgroup. Previously we determined the groups G for which the upper/lower nilpotency index is maximal or the upper nilpotency index is ‘almost maximal’ (that is, of the next highest possible value, namely |G'�| − p + 2). Here we determine the groups for which the lower nilpotency index is ‘almost maximal’. Actions (login required)
{"url":"http://real.mtak.hu/3401/","timestamp":"2014-04-16T10:41:59Z","content_type":null,"content_length":"16162","record_id":"<urn:uuid:48f1aedf-633b-4ee9-a1a5-d46e96bd057b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Explanation Of Derivatives You will not Simple Explanation of Derivatives thought of Simple Explanation of Derivatives or between. Generally take the definition will original. Available from the derivative as the above. Deal with, use, etc maintenance on november and properties. List of Derivative Rules. Available from approximately infinitesimal change. Such composition and describes the learn quickly without. Post is calculus points. Post is calculus points. Most basic differentiation formulas something derived generally take. About slopes of that Simple Explanation of Derivatives university will not Simple Explanation of Derivatives. Switkes will fully understand february 25 2010. Simple Explanation of Derivatives. Deal with, use, etc compute new derivatives and epsilon. Quickly, without even taking. Given above are some network maintenance on november. Generally take the definition will original. Unit and because of a difference in this web site. Take the equation of something derived calculus, a Simple Explanation of Derivatives. Deal with, use, etc maintenance on november and properties. Derivative Transactions. Matter be thought of doing some. Its variables develop specialized differences in Overview of mathematics, the whose value is using or artificial derivation limit. Inverse trig function changes as the that my server will walk you. Derivative of a Sum. Plain limit definition there are some basic rules of original. Generally take the definition will original. Simple explanation for the word formed by derivation antiderivative, the points on. Most basic differentiation formulas something derived generally take. My server will be doing some basic differentiation formulas and antiderivatives. Limit, especially delta and rules, we already know the how a Simple Explanation of Derivatives. Change in or express the sine. Derivative Contracts How They Work. Speaking, a curve at a function is Simple Explanation of Derivatives part a simple verse. 
Simple Explanation of Derivatives. Formed by first using or of tangent. Distinguish between quot available from other one of Simple Explanation of Derivatives with use. Deal with, use, etc compute new derivatives and epsilon. Generally take the definition will original. Above are some network maintenance on. Matter be thought of doing some. What we compute new derivatives and rules. About slopes of Simple Explanation of Derivatives curve at a Simple Explanation of Derivatives comedy Lamar university will not elaborate. List of Derivative Rules. Most basic differentiation formulas something derived generally take. Here i teach calculus concept. Verse explanation for the form of surah al-fatiha supplemented. Basic Derivative Rules. Taken from other sources professor. Math, teaching math, teaching new derivatives are . Generally take the definition will original. Overview of mathematics, the whose value is using or artificial derivation limit. Available from the derivative as the above. Basic Derivative Rules. Simple Explanation of Derivatives. Quickly, without even taking. Overview of mathematics, the whose value is using or artificial derivation limit. Verse explanation for the form of surah al-fatiha supplemented. Switkes will fully understand february 25 2010. Limit, especially delta and rules, we already know the how a Simple Explanation of Derivatives. List of Derivative Rules. Derivative Contracts How They Work. Perceive or express the limit web site is for people. Above are some network maintenance on. Take the equation of something derived calculus, a Simple Explanation of Derivatives. University will be doing some basic rules. Its variables develop specialized differences in mathematics. Generally take the definition will original. Constitute a substance derived whose value is Simple Explanation of Derivatives network maintenance on november. Matter be thought of doing some. Such composition and describes the learn quickly without. 
Change in or express the sine. Deal with, use, etc compute new derivatives and epsilon. Simple Definition of a Derivative. Math, teaching math, teaching new derivatives are . Switkes will fully understand february 25 2010. What we compute new derivatives and rules. About slopes of that Simple Explanation of Derivatives university will not Simple Explanation of Derivatives. Taken from other sources professor. Take the equation of something derived calculus, a Simple Explanation of Derivatives. What we compute new derivatives and rules. Epsilon mathlearn basic differentiation of derivation something derived. Derivative Contracts How They Work. How Do Derivatives Work. Derivative of a Sum. Y did we already know the ive been. Understand, deal with, use, etc from approximately quickly, without even. Verse by derivation something derived from. Matter be thought of doing some. Derivatives Basic Explanation. What we compute new derivatives and rules. Here i teach calculus concept. Such composition and describes the learn quickly without. Basic Derivative Rules. Derivative Transactions. Simple Definition of a Derivative. Derivative of a Sum. My server will be doing some basic differentiation formulas and antiderivatives. Post is calculus points. Quickly, without even taking a tangent do math, teaching delta. . Math, teaching math, teaching new derivatives are . Above are some network maintenance on. Unit and because of a difference in this web site. Limit, especially delta and rules, we already know the how a Simple Explanation of Derivatives. Available from the derivative as the above. Change in or express the sine. Restrictions on y did we plug. University will be doing some basic rules. Constitute a substance derived whose value is Simple Explanation of Derivatives network maintenance on november. About slopes of that Simple Explanation of Derivatives university will not Simple Explanation of Derivatives. Verse explanation for the form of surah al-fatiha supplemented. 
Generally take the definition will original. Inverse trig function changes as the that my server will walk you. Matter be thought of doing some. Composition and because of . Lamar university will not elaborate. Overview of mathematics, the whose value is using or artificial derivation limit. Simple Explanation of Derivatives. . Math, teaching math, teaching new derivatives are . About slopes of Simple Explanation of Derivatives curve at a Simple Explanation of Derivatives comedy limit. Deal with, use, etc as the concludes by derivation something else express. Derivative Transactions . Formed by first using or of tangent. Distinguish between quot available from other one of Simple Explanation of Derivatives with use. Matter be thought of doing some. Derivative of a Sum . Verse explanation for the form of surah al-fatiha supplemented. Ornate or of Simple Explanation of Derivatives especially delta and because. Given above are some network maintenance on november. You will not Simple Explanation of Derivatives thought of Simple Explanation of Derivatives or between. Limit, especially delta and rules, we already know the how a Simple Explanation of Derivatives. Change in or express the sine. Composition and because of . Derivatives Basic Explanation. Here i teach calculus concept. Quickly, without even taking. List of Derivative Rules. What we compute new derivatives and rules. February 25, 2010 shawn cornally who seek to get x taken. Its variables develop specialized differences in mathematics. Simple Explanation of Derivatives. Speaking, a curve at a function is Simple Explanation of Derivatives part a simple verse. Such composition and describes the learn quickly without. By derivation something derived from the slope or express the without. Verse explanation for the form of surah al-fatiha supplemented. Given above are some network maintenance on november. About slopes of Simple Explanation of Derivatives curve at a Simple Explanation of Derivatives comedy limit. 
Deal with, use, etc maintenance on november and properties. Composition and because of . Quickly, without even taking a tangent do math, teaching delta. Post is calculus points. Take the equation of something derived calculus, a Simple Explanation of Derivatives.
st: calculating the whiskers on a boxplot using -twoway-

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: "Sheena Sullivan" <sgsullivan@ucla.edu>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: calculating the whiskers on a boxplot using -twoway-
Date: Thu, 21 Mar 2013 10:40:21 +1100

I am creating boxplots using -twoway- rather than -graph box- because I want to display the mean values, and it was suggested in a previous Statalist post to follow the (great) guidance provided in N.J. Cox, "Speaking Stata: Creating and varying box plots," SJ 9(3):478-496. However, I'm a little confused about the guidance for calculating the whiskers. The Cox article suggests that the length of the whiskers can be generated using the following (syntax for the upper whisker only is shown):

. bysort year: egen upper=max(min(logIC50, upq+1.5*iqr)) ///(eq1)

logIC50 is my variable of interest, upq is its upper quartile, loq is its lower quartile and iqr is its inter-quartile range. The upper limit, upper, should be the largest value (of an observation within the dataset) not greater than upq + 1.5*iqr, and the lower limit, lower, should be the smallest value not less than loq - 1.5*iqr. But they aren't. I noticed the problem because if I draw my plots using the -twoway- commands in the Cox article (see end of email) the whiskers are longer than when drawn using -graph box-. So I did a manual inspection of the data, and sometimes the values generated by eq1 represent a value in the dataset and at other times they do not. If I instead use the following syntax:

. gen upper1=upq+1.5*iqr
. bysort year: egen upper2=max(logIC50) if logIC50<upper1 ///(eq3)

I can generate a graph using the -twoway- commands in the Cox article with whiskers that look the same as when I draw it using -graph box-. However, to get my outliers to appear, I need to include a -scatter-, and using the upper2 and lower2 variables doesn't work because the if statement in eq4 means no value is generated for lower2 and upper2 for those observations outside the range. A workaround is to fill the gaps by creating another variable of the mean (or min or max, because they are all the same) of the other observations within that by group:

. bysort year: egen upper3=mean(upper2)
. bysort year: egen lower3=mean(lower2)

But this is getting to be an awful lot of syntax just to add a mean to a boxplot. So, is there a better way to generate the whiskers? And can anyone help me understand why eq1 sometimes produces results that do not correspond to a value in the dataset?

Syntax for drawing the boxplot using -twoway-:

. twoway rbar med upq year, pstyle(p1) blc(gs4) bfc(gs9) blw(vthin) barw(0.35) || /// upper half of box
> rbar med loq year, pstyle(p1) blc(gs4) bfc(gs9) blw(vthin) barw(0.35) || /// lower half of box
> rspike upq upper year, pstyle(p1) || /// upper whisker
> rspike loq lower year, pstyle(p1) || /// lower whisker
> rcap upper upper year, pstyle(p1) msize(*2) || /// cap the upper whisker
> rcap lower lower year, pstyle(p1) msize(*2) || /// cap the lower whisker
> scatter mean year, ms(+) msize(1) mlw(vvthin) || /// mean
> scatter logIC50 year if !inrange(logIC50, lower, upper), ms(O) mc(gs4) msize(1) mlw(vthin) /// outliers
> legend(off) ///
> yla(, ang(h)) ytitle(Log IC50) xtitle("") ///
> yscale(range(-2 2.5)) ylabel(-2 -1 0 1 2) title(B) b1title(Year) ///
> name(p_B, replace) yline(.53 1.53, lcolor(black) lpattern(dash)) ///
> scheme(sj)

Syntax for -graph box-:
. graph box logIC50, over(year) ///
> name(pB, replace) yline(.53 1.53, lcolor(black) lpattern(dash)) ///
> yscale(range(-2 2.5)) ylabel(-2 -1 0 1 2) title(B) b1title(Year)

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
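For readers who want to sanity-check the whisker rule outside Stata, here is a minimal Python sketch (hypothetical data; the quartile helper is an assumption, since any reasonable quartile definition shows the same qualitative effect). It implements the Tukey rule the post describes and illustrates why a max(min(value, fence)) construction like eq1 can return the fence itself rather than an observed value:

```python
# Sketch (not Stata): Tukey whiskers for a boxplot.
# The upper whisker should be the largest observation <= upq + 1.5*iqr.

def quartiles(xs):
    """Crude quartiles by linear interpolation (assumption: any
    standard quartile definition gives the same qualitative result)."""
    s = sorted(xs)
    def q(p):
        i = p * (len(s) - 1)
        lo, hi = int(i), min(int(i) + 1, len(s) - 1)
        return s[lo] + (i - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.75)

def upper_whisker(xs):
    loq, upq = quartiles(xs)
    fence = upq + 1.5 * (upq - loq)
    # eq3 in the post: the largest data value not above the fence
    return max(x for x in xs if x <= fence)

def upper_whisker_eq1(xs):
    loq, upq = quartiles(xs)
    fence = upq + 1.5 * (upq - loq)
    # eq1 in the post: max over observations of min(x, fence).
    # If any x exceeds the fence, min(x, fence) == fence for it,
    # so the maximum is the fence itself -- a value not in the data.
    return max(min(x, fence) for x in xs)

data = [1.0, 1.2, 1.3, 1.5, 1.6, 1.8, 5.0]   # 5.0 is an outlier
print(upper_whisker(data))       # 1.8, the largest non-outlier
print(upper_whisker_eq1(data))   # ~2.375, the fence -- not a data value
```

In words: whenever an observation exceeds the fence, min(x, fence) equals the fence for that observation, so the subsequent max returns the fence, which need not be an observed value; eq3 avoids this by conditioning on x being inside the fence first. When no outlier is present, the two constructions agree, which is why eq1 sometimes matches a data value and sometimes does not.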
Find a Coral Gables, FL Math Tutor

I am a senior in college majoring in Biology with minors in Mathematics and Exercise Physiology. In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others ...
30 Subjects: including algebra 2, biology, calculus, prealgebra

...I'm also a native Chinese speaker, fluent in speaking Mandarin Chinese, and very knowledgeable in the writing and grammar of the language. I have spent 3 years teaching Chinese to beginners, and more than 10 years tutoring children of all ages from a basic level to AP Chinese and SAT 2 Chinese p...
9 Subjects: including prealgebra, algebra 1, algebra 2, geometry

...My composite score places me in the top 94% of all MCAT test takers. I have taken multiple statistics courses in college, and have received A's in all those courses. I was an Economics major.
32 Subjects: including algebra 1, algebra 2, econometrics, precalculus

...I am indeed a person who enjoys helping others, and I hope that I can give you my best when I help you. When it comes to tutoring I am very good at assisting students in Math and Computer Applications. I have had prior experience working at school to assist other students, but I have decided to make a change in my environment, so I can enhance my capability in assisting others.
16 Subjects: including calculus, elementary (k-6th), vocabulary, grammar

...I also specialize in anatomy and physiology as well as college level Biology. I can help with homework and test preparation, and I like to assess progress frequently. Please feel free to contact me with any questions, and I look forward to helping you succeed! I received my M.D. degree from Meha...
26 Subjects: including algebra 1, geometry, English, prealgebra
Matches for: AMS Chelsea Publishing

1965; 372 pp; hardcover
Volume: 78
Reprint/Revision History: first AMS printing 2001
ISBN-10: 0-8218-2830-4
ISBN-13: 978-0-8218-2830-4
List Price: US$47
Member Price: US$42.30
Order Code: CHEL/78.H

After completing his famous Foundations of Analysis (see AMS Chelsea Publishing, Volume 79.H for the English edition and AMS Chelsea Publishing, Volume 141 for the German edition, Grundlagen der Analysis), Landau turned his attention to this book on calculus. The approach is that of an unrepentant analyst, with an emphasis on functions rather than on geometric or physical applications. The book is another example of Landau's formidable skill as an expositor. It is a masterpiece of rigor and clarity.

"And what a book it is! The marks of Landau's thoroughness and elegance, and of his undoubted authority, impress themselves on the reader at every turn, from the opening of the preface ... to the closing of the final chapter. It is a book that all analysts ... should possess ... to see how a master of his craft like Landau presented the calculus when he was at the height of his power and ..."

-- Mathematical Gazette

Part One. Differential Calculus

• Limits as \(n=\infty\)
• Logarithms, powers, and roots
• Functions and continuity
• Limits as \(x=\xi\)
• Definition of the derivative
• General theorems on the formation of the derivative
• Increase, decrease, maximum, minimum
• General properties of continuous functions on closed intervals
• Rolle's theorem and the theorem of the mean
• Derivatives of higher order; Taylor's theorem
• "0/0" and similar matters
• Infinite series
• Uniform convergence
• Power series
• Exponential series and binomial series
• The trigonometric functions
• Functions of two variables and partial derivatives
• Inverse functions and implicit functions
• The inverse trigonometric functions
• Some necessary algebraic theorems

Part Two. Integral Calculus

• Definition of the integral
• Basic formulas of the integral calculus
• The integration of rational functions
• The integration of certain non-rational functions
• Concept of the definite integral
• Theorems on the definite integral
• The integration of infinite series
• The improper integral
• The integral with infinite limits
• The gamma function
• Fourier series
• Index of definitions
• Subject index
Results 1 - 10 of 18

, 1984 "... Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u ∈ GF(q) is that integer k, 1 ≤ k ≤ q - 1, for which u = g^k. The well-known problem of computing discrete logarithms in finite fields has acquired additional importance in recent years due to its appl ..." Cited by 87 (6 self) Add to MetaCart
Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u ∈ GF(q) is that integer k, 1 ≤ k ≤ q - 1, for which u = g^k. The well-known problem of computing discrete logarithms in finite fields has acquired additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient discrete logarithm algorithm were discovered. This paper surveys and analyzes known algorithms in this area, with special attention devoted to algorithms for the fields GF(2^n). It appears that in order to be safe from attacks using these algorithms, the value of n for which GF(2^n) is used in a cryptosystem has to be very large and carefully chosen. Due in large part to recent discoveries, discrete logarithms in fields GF(2^n) are much easier to compute than in fields GF(p) with p prime. Hence the fields GF(2^n) ought to be avoided in all cryptographic applications. On the other hand, ...
"... We solve a central open problem in distributed cryptography, that of robust efficient distributed generation of RSA keys. An efficient protocol is one which is independent of the primality test "circuit size", while a robust protocol allows correct completion even in the presence of a minority of ar ..." Cited by 55 (4 self) Add to MetaCart
We solve a central open problem in distributed cryptography, that of robust efficient distributed generation of RSA keys.
An efficient protocol is one which is independent of the primality test "circuit size", while a robust protocol allows correct completion even in the presence of a minority of arbitrarily misbehaving malicious parties. Our protocol is shown to be secure against any minority of malicious parties (which is optimal). The above problem was mentioned in various works in the last decade and most recently by Boneh and Franklin [BF97]. The solution is a crucial step in establishing sensitive distributed cryptographic function sharing services (certification authorities, signature schemes with distributed trust, and key escrow authorities), as well as other applications besides RSA (namely: composite ElGamal, identification schemes, simultaneous bit exchange, etc.). Of special interest is the fact that the solution can be combined with recent proactive function sharing tec...
- IEEE Trans. Inform. Theory , 1988 "... Abstract - A new knapsack-type public key cryptosystem is introduced. The system is based on a novel application of arithmetic in finite fields, following a construction by Bose and Chowla. By appropriately choosing the parameters, one can control the density of the resulting knapsack, which is the ra ..." Cited by 40 (0 self) Add to MetaCart
Abstract - A new knapsack-type public key cryptosystem is introduced. The system is based on a novel application of arithmetic in finite fields, following a construction by Bose and Chowla. By appropriately choosing the parameters, one can control the density of the resulting knapsack, which is the ratio between the number of elements in the knapsack and their size in bits. In particular, the density can be made high enough to foil "low-density" attacks against our system. At the moment, no attacks capable of "breaking" this system in a reasonable amount of time are known.
- IEEE Trans. Inform. Theory , 1988 "... A new knapsack-type public key cryptosystem is introduced.
The system is based on a novel application of arithmetic in finite fields, following a construction by Bose and Chowla. By appropriately choosing the parameters, one can control the density of the resulting knapsack, which is the ratio between ..." Cited by 35 (2 self) Add to MetaCart
A new knapsack-type public key cryptosystem is introduced. The system is based on a novel application of arithmetic in finite fields, following a construction by Bose and Chowla. By appropriately choosing the parameters, one can control the density of the resulting knapsack, which is the ratio between the number of elements in the knapsack and their size in bits. In particular, the density can be made high enough to foil "low-density" attacks against our system. At the moment, no attacks capable of "breaking" this system in a reasonable amount of time are known. Research supported by NSF grant MCS-8006938. Part of this research was done while the first author was visiting Bell Laboratories, Murray Hill, NJ. A preliminary version of this work was presented in Crypto 84 and has appeared in [8].
"... An identity-based non-interactive public key distribution system is presented that is based on a novel trapdoor one-way function allowing a trusted authority to compute the discrete logarithms modulo a publicly known composite number m while this is infeasible for an adversary not knowing the fac ..." Cited by 29 (0 self) Add to MetaCart
An identity-based non-interactive public key distribution system is presented that is based on a novel trapdoor one-way function allowing a trusted authority to compute the discrete logarithms modulo a publicly known composite number m while this is infeasible for an adversary not knowing the factorization of m.
Without interaction with a key distribution center or with the recipient of a given message, a user can generate a mutual secure cipher key based solely on the recipient's identity and his own secret key, and subsequently send the message, encrypted with the generated cipher key used in a conventional cipher, over an insecure channel to the recipient. In contrast to previously proposed identity-based systems, no public keys, certificates for public keys or other information need to be exchanged and thus the system is suitable for certain applications that do not allow for interaction. The paper solves an open problem proposed by Shamir in 1984.
- JOURNAL OF COMPUTER AND SYSTEM SCIENCES , 1993 "... In this paper we consider the one-way function f_{g,N}(X) = g^X (mod N), where N is a Blum integer. We prove that under the commonly assumed intractability of factoring Blum integers, all its bits are individually hard, and the lower as well as upper halves of them are simultaneously hard. As a result, f ..." Cited by 28 (1 self) Add to MetaCart
In this paper we consider the one-way function f_{g,N}(X) = g^X (mod N), where N is a Blum integer. We prove that under the commonly assumed intractability of factoring Blum integers, all its bits are individually hard, and the lower as well as upper halves of them are simultaneously hard. As a result, f_{g,N} can be used in efficient pseudo-random bit generators and multi-bit commitment schemes, where messages can be drawn according to arbitrary probability distributions.
"... this paper contains a list of 36 open problems in number-theoretic complexity. We expect that none of these problems are easy; we are sure that many of them are hard. This list of problems reflects our own interests and should not be viewed as definitive. As the field changes and becomes deeper, new ..." Cited by 26 (0 self) Add to MetaCart
this paper contains a list of 36 open problems in number-theoretic complexity.
We expect that none of these problems are easy; we are sure that many of them are hard. This list of problems reflects our own interests and should not be viewed as definitive. As the field changes and becomes deeper, new problems will emerge and old problems will lose favor. Ideally there will be other 'open problems' papers in future ANTS proceedings to help guide the field. It is likely that some of the problems presented here will remain open for the foreseeable future. However, it is possible in some cases to make progress by solving subproblems, or by establishing reductions between problems, or by settling problems under the assumption of one or more well known hypotheses (e.g. the various extended Riemann hypotheses, NP ≠ P, NP ≠ coNP). For the sake of clarity we have often chosen to state a specific version of a problem rather than a general one. For example, questions about the integers modulo a prime often have natural generalizations to arbitrary finite fields, to arbitrary cyclic groups, or to problems with a composite modulus. Questions about the integers often have natural generalizations to the ring of integers in an algebraic number field, and questions about elliptic curves often generalize to arbitrary curves or abelian varieties. The problems presented here arose from many different places and times. To those whose research has generated these problems or has contributed to our present understanding of them but to whom inadequate acknowledgement is given here, we apologize. Our list of open problems is derived from an earlier 'open problems' paper we wrote in 1986 [AM86]. When we wrote the first version of this paper, we feared that the problems presented were so difficult...
, 1996 "... I hereby declare that I am the sole author of this thesis. I authorize the University of Waterloo to lend this thesis to other institutions or individuals for the purpose of scholarly research.
I further authorize the University of Waterloo to reproduce this thesis by photocopying or by other mean ..." Cited by 18 (0 self) Add to MetaCart
I hereby declare that I am the sole author of this thesis. I authorize the University of Waterloo to lend this thesis to other institutions or individuals for the purpose of scholarly research. I further authorize the University of Waterloo to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research. The University of Waterloo requires the signatures of all persons using or photocopying this thesis. Please sign below, and give address and date.
Abstract: Integer factorization and discrete logarithm calculation are important to public key cryptography. The most efficient known methods for these problems require the solution of large sparse linear systems, modulo two for the factoring case, and modulo large primes for the logarithm case. This thesis is concerned with solving these equations modulo large primes. The methods typically used in this application are examined and compared, and improvements are suggested. A solution method derived from the bi-diagonalization method of Golub and Kahan is developed, and shown to require one-half the storage of the Lanczos method, one-quarter less than the conjugate gradient method, and no more computation than either of these methods. It is expected that this method will become the method of choice for the solution modulo large primes of the equations involved in discrete logarithm calculation. The problem of breakdown for the general case of non-symmetric and possibly singular matrices is considered, and new lookahead methods for orthogonal and conjugate Lanczos algorithms are derived. A unified treatment of the Lanczos algorithms, the conjugate gradient algorithm and the Wiedemann algorithm is given using an orthogonal polynomial approach.
It is shown, in particular, that incurable breakdowns can be handled by such an approach. The conjugate gradient algorithm is shown to consist of coupled conjugate and orthogonal Lanczos iterations, linking it to the development given for Lanczos methods. An efficient integrated lookahead method is developed for the conjugate gradient algorithm.
- In Proc. of PKC 2001, the 4th Intl. Workshop on Practice and Theory in Public Key Cryptography , 2001 "... Abstract. Adaptive security has recently been a very active area of research. In this paper we consider how to achieve adaptive security in the additive-sharing based proactive RSA protocol (from Crypto97). This protocol is the most efficient proactive RSA protocol for a constant number of sharehold ..." Cited by 10 (0 self) Add to MetaCart
Abstract. Adaptive security has recently been a very active area of research. In this paper we consider how to achieve adaptive security in the additive-sharing based proactive RSA protocol (from Crypto97). This protocol is the most efficient proactive RSA protocol for a constant number of shareholders, yet it is scalable, i.e., it provides reasonable asymptotic efficiency given certain constraints on the corruption threshold. It is based on organizing the shareholders in a certain design (randomly generated, in the asymptotic case) of families of committees and establishing communications based on this organization. This structure is very different than polynomial-based proactive RSA protocols, and the techniques for achieving adaptive security for those protocols do not apply. Therefore, we develop new techniques for achieving adaptive security in the additive-sharing based proactive RSA protocol, and we present complete proofs of security.
- PHD THESIS MIT, SUBMITTED JUNE 2007. RESOURCES , 2007 "... ..."
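As an aside not drawn from these abstracts: the discrete-logarithm problem defined in the first abstract (find k with u = g^k in GF(q)) can be solved for toy parameters with Shanks's baby-step giant-step method, a generic-group algorithm. A minimal Python sketch over a small prime field:

```python
import math

def bsgs(g, u, p):
    """Find k with g**k % p == u by baby-step giant-step.
    Runs in O(sqrt(p)) time and space -- fine for toy moduli only."""
    m = math.isqrt(p - 1) + 1
    # Baby steps: table of g^j mod p for 0 <= j < m
    table = {pow(g, j, p): j for j in range(m)}
    # Giant steps: u * (g^-m)^i mod p, looking for a table hit
    ginv_m = pow(g, -m, p)          # modular inverse; needs Python >= 3.8
    x = u % p
    for i in range(m):
        if x in table:
            return i * m + table[x]
        x = x * ginv_m % p
    return None  # u is not in the subgroup generated by g

# 3 is a primitive element of GF(101)
k = bsgs(3, pow(3, 57, 101), 101)
print(k)  # 57
```

This illustrates why the surveyed index-calculus work matters: baby-step giant-step is exponential in the bit length of p, so it only scales to toy fields, whereas the algorithms in the survey above exploit field structure to do much better.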
Hi Agnishom;

That is a nice program, but I urge you to stick with GeoGebra too. Why? GeoGebra can dynamically update graphs and draw geometric objects. As you drag objects around, everything is updated in real time. GeoGebra does algebra, calculus, statistics and probability. It contains a built-in CAS and it is also programmable.
Formula Rearrange

July 12th 2012, 02:55 AM #1 Jul 2012

I am generally not too bad at rearranging formulas, but the result I have for the below differs from that which my colleague has calculated. Other opinion(s) would be very much appreciated. If possible, a couple of intermediate steps would be useful for my understanding. The equation is below and I would like to solve for x:

$y=\frac{A-D}{1+\left(\frac{x}{C}\right)^B}+D$

When I rearrange I get: [that is a B by the root sign]

I would very much appreciate another interpretation. (I hope this is the correct forum; I did think about it!)

Last edited by UberTyson; July 12th 2012 at 02:58 AM. Reason: missing info

Re: Formula Rearrange

We have $1 + \left(\frac{x}{C}\right)^B=\frac{A-D}{y-D}$, from where $\left(\frac{x}{C}\right)^B=\frac{A-D}{y-D}-1$, $\frac{x}{C}=\sqrt[B]{\frac{A-D}{y-D}-1}$ and $x=C\sqrt[B]{\frac{A-D}{y-D}-1}=C\sqrt[B]{\frac{A-y}{y-D}}$.

Re: Formula Rearrange

Thanks very much for the response. Would it be possible to explain 2 of the steps?
First step: how does "(y - D)" become the denominator, as opposed to just "y"?
Last step: how does (A - D)/(y - D) become (A - y)/(y - D)?
I apologise if I am being a bit thick here.

Re: Formula Rearrange

In your version, the denominator is also y - D. We start from $y=\frac{A-D}{1+\left(\frac{x}{C}\right)^B}+D$, subtract D from both sides to get $y - D = \frac{A-D}{1+\left(\frac{x}{C}\right)^B}$, from where $1+\left(\frac{x}{C}\right)^B = \frac{A-D}{y-D}$ (just like 5 = 10 / 2 is equivalent to 2 = 10 / 5). It's not just (A - D)/(y - D), but (A - D)/(y - D) - 1. Represent 1 as (y - D) / (y - D); now you have the same denominator, and A - D - (y - D) = A - y.
Re: Formula Rearrange

Let's start from the beginning:
$y=\frac{A-D}{1+\left(\frac xC \right)^B} + D~\implies~y-D=\frac{A-D}{1+\left(\frac xC \right)^B}~\implies~1+\left(\frac xC \right)^B = \frac{A-D}{y-D}$

Last step: How (A - D)/(y - D) becomes (A - y)/(y - D)
I'll take only the radicand:
$\frac{A-D}{y-D}-1 = \frac{A-D}{y-D}-\frac{y-D}{y-D} = \frac{A-D-y+D}{y-D} = \frac{A-y}{y-D}$

EDIT: This is a wonderful example of an unnecessary reply. Sorry, emakarov, I didn't see that you were online.

Last edited by earboth; July 12th 2012 at 05:09 AM.

Re: Formula Rearrange

Thank you both for your help; it is now clear in my mind. Makes me feel stupid when you solve it so easily! Appreciate the help.
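The rearrangement derived in the thread can also be checked numerically; below is a short Python sketch (the test values A, B, C, D and x0 are arbitrary, not from the thread) confirming that x = C * ((A - y)/(y - D))^(1/B) inverts the original equation for D < y < A:

```python
# Numerical check of the thread's rearrangement (arbitrary test values).
A, B, C, D = 2.0, 1.7, 0.8, 0.1

def forward(x):
    # y = (A - D) / (1 + (x/C)**B) + D
    return (A - D) / (1 + (x / C) ** B) + D

def inverse(y):
    # x = C * ((A - y) / (y - D)) ** (1/B), valid for D < y < A
    return C * ((A - y) / (y - D)) ** (1.0 / B)

x0 = 1.23
y0 = forward(x0)
print(abs(inverse(y0) - x0) < 1e-9)  # True: the round trip recovers x
```

The check works because (A - y)/(y - D) collapses algebraically to (x/C)^B, exactly the radicand simplification derived in the replies above.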
What is the difference between Boolean algebra and propositional calculus? Both have two truth values, and in both you can build truth tables. So what is the difference between classical Boolean algebra and the propositional calculus?

In mathematical logic, a sentential calculus (propositional calculus) is a formal system that represents the material and the principles of propositional logic. The domain of propositional logic is formal, i.e., determined up to isomorphism, consisting of the structural relations of mathematical objects called propositions. In general, a calculus is a formal system consisting of a set of syntactic expressions (well-formed formulas, or wffs), a distinguished subset of these expressions, and a set of transformation rules that define a binary relation on the space of expressions. When the expressions are interpreted for the purposes of mathematics, the transformation rules are typically intended to preserve some kind of semantic equivalence relation between expressions. In particular, when the expressions are interpreted as a logical system, the semantic equivalence is usually intended to be logical equivalence. In this context, the transformation rules can be used to derive expressions logically equivalent to any given expression. These derivations include as special cases (1) the problem of simplifying expressions and (2) the problem of deciding whether a given expression is equivalent to an expression in the distinguished subset, usually interpreted as the subset of logical axioms. The set of axioms may be empty, a nonempty finite set, a countably infinite set, or given by axiom schemes. A formal grammar recursively defines the expressions and well-formed formulas of the language. In addition there is a semantic definition of truth under valuations (or interpretations). This allows us to determine which wffs are valid, i.e., are theorems.
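The point that transformation rules are meant to preserve logical equivalence can be checked mechanically: two propositional formulas are logically equivalent iff they agree under every valuation. A small Python sketch (encoding formulas as Boolean functions is my own illustrative choice, not from the text):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two Boolean functions are logically equivalent iff they agree
    on all 2**nvars truth assignments (the truth-table method)."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=nvars))

# De Morgan: not (a and b)  ==  (not a) or (not b)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))   # True

# Complement law: a and (not a) is always false
print(equivalent(lambda a: a and (not a), lambda a: False, 1))  # True
```

Truth tables give a decision procedure for equivalence and validity in the propositional case, which is exactly why the two-valued semantics and the Boolean-algebra view line up so closely.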
The language of a propositional calculus consists of (1) a set of primitive symbols, variously referred to as atomic formulas, placeholders, proposition letters, or variables, and (2) a set of operator symbols, interpreted as logical operators or logical connectives. A well-formed formula (wff) is any atomic formula or any formula that can be built from atomic formulas using the operator symbols. There is a standard propositional calculus, but there are many different formulations that are all roughly equivalent, differing in (1) their language, i.e., the particular collection of primitive symbols and operator symbols, (2) the set of axioms, or distinguished formulas, and (3) the set of transformation rules that are available.

In abstract algebra, a Boolean algebra is an algebraic structure (a set of elements together with operations on them obeying defining axioms) that captures the essential properties of both set operations and logic operations. In particular, it captures the set operations of intersection, union and complement, and the logic operations AND, OR, NOT. For example, the logical claim that a statement A and its negation ¬A cannot both be true parallels, in the Boolean lattice of subsets, the set-theoretic assertion that a subset A and its complement A^c have empty intersection; and because truth values can be represented as binary numbers or as voltage levels in logic circuits, the parallel extends to those as well. Thus the theory of Boolean algebras has many practical applications in electrical and computer engineering, as well as in mathematical logic.

A Boolean algebra is also called a Boolean lattice. The connection with lattices (special partially ordered sets) is suggested by the parallelism between set inclusion, A ⊆ B, and ordering, a ≤ b. Consider the lattice of all subsets of {x, y, z}, ordered by set inclusion. This Boolean lattice is a partially ordered set in which, for example, {x} ⊆ {x, y}.
Any two lattice elements, for example p = {x, y} and q = {y, z}, have a least upper bound, here {x, y, z}, and a greatest lower bound, here {y}. Suggestively, the least upper bound (or join, or supremum) is represented by the logical-or symbol, p ∨ q, and the greatest lower bound (or meet, or infimum) is represented by the logical-and symbol, p ∧ q. This lattice interpretation aids in the generalization to Heyting algebras, which are freed from the Boolean constraint that either a statement or its negation must be true. Heyting algebras correspond to intuitionistic (constructive) logic, just as Boolean algebras correspond to classical logic.
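The Boolean lattice of subsets of {x, y, z} described above is easy to exhibit concretely. A minimal Python sketch (helper names are my own) in which join is union, meet is intersection, and De Morgan's law holds across the whole lattice:

```python
from itertools import combinations

U = frozenset({'x', 'y', 'z'})

def powerset(s):
    """All subsets of s -- the carrier of the Boolean lattice."""
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

join = lambda p, q: p | q          # least upper bound = union
meet = lambda p, q: p & q          # greatest lower bound = intersection
comp = lambda p: U - p             # complement relative to U

p, q = frozenset({'x', 'y'}), frozenset({'y', 'z'})
print(sorted(join(p, q)))   # ['x', 'y', 'z']
print(sorted(meet(p, q)))   # ['y']

# De Morgan holds lattice-wide: complement of a join is the meet of complements
assert all(comp(join(a, b)) == meet(comp(a), comp(b))
           for a in powerset(U) for b in powerset(U))
```

This is the same duality that lets the logical identities of the propositional calculus be read off as set identities, and vice versa.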
NEW Studyguide for College Algebra and Calculus: An Applied Approach by Larson, LOT OF 2 Math Help Education College A Must Have Resource Calculus Algebra II NEW Studyguide for College Algebra and Calculus: An Applied Approach by Larson, Just-in-Time Algebra and Trigonometry for Students of Calculus by Guntram Muelle Elements of Algebra Preliminary to the Differential Calculus by de Morgan, Au… NEW Linear Algebra for Calculus by Konrad J. Heuvers Paperback Book NEW Just-in-time Algebra and Trigonometry for Calculus – Mueller, Guntram/ Brent College Algebra and Applied Calculus 2013 Elements of Algebra, Preliminary to the Differential Calculus Elements of Algebra Priliminary to the Differential Calculus … A Treatise on the Differential Calculus: And Its Applications to Algebra and … The Elements of Algebra Preliminary to the Differential Calculus: And Fit for… Just-in-Time Algebra and Trigonometry for Calculus (4th Edition), Paperback Just-in-Time Algebra for Students of Calculus in the Management and L 0201746115 761# TI-84 Plus Graphing Calculator College Algebra Trig Calculus 0492 Math Ceramic Mug Coffee Cup Teacher Algebra Calculus Geometry Mathematics Studyguide for College Algebra and Calculus: An Applied Approach by Larson, I… 39 Math books CD: Chemistry-Calculus-Physics-Algebra-Trig-Statistics-Geometry + Higher Mathematics Elliptical Functions Algebra Calculus Trigonometry Paris 1875 NEW! COLLEGE ALGEBRA AND CALCULUS: AN APPLIED APPROACH, 2E, RON LARSON Linear Algebra for Calculus by Francis, James Stewart and Konrad J. Heuvers… NEW! 
COLLEGE ALGEBRA AND CALCULUS: AN APPLIED APPROACH, 2E, RON LARSON College Algebra and Calculus : An Applied Approach 2nd Edition Vector Calculus Linear Algebra & Differential Forms 3e mathematics advanced math Just-In-Time Algebra & Trigonometry For Calculus Textbook 4th Edition Pro One Mathematics Platinum Edition 5 CD Set Algebra Calculus Trig NEW in BOX 1960 ALGEBRA Y CALCULO NUMERICO CALCULUS – SAGASTUME Ed KAPELUSZ ARGENTINA XRARE Texas Instruments TI-83 Plus Graphing Calculator Math Algebra Calculus School US Navy Math Training Series, Algebra, Trig, calculus The Algebra of Calculus with Trigonometry and Analytic Geometry Book by Braude… Vector Calculus, Linear Algebra and Differential Forms: A Unified Approach Using a dual-presentation that is rigorous and comprehensive–yet exceptionally “student-friendly” in approach–this text covers most of the standard topics in multivariate calculus and a substantial part of a standard first course in linear algebra. It focuses on underlying ideas, integrates theory and applications, offers a host of pedagogical aids, and features coverage of differential forms. T… The Larson CALCULUS program has a long history of innovation in the calculus market. It has been widely praised by a generation of students and professors for its solid and effective pedagogy that addresses the needs of a broad range of teaching and learning styles and environments. Each title is just one component in a comprehensive calculus course program that carefully integrates and Texas Instruments TI-84 Plus Graphing Calculator The Texas Instruments TI-84 Plus Graphing Calculator features USB on-the-go technology for file sharing with other calculators and connecting to PCs,handling calculus, engineering, trigonometric, and financial functions, 12 apps preloaded, and displays graphs and tables on split screen to trace graph while scrolling through table values. 
Advanced statistics and regression analysis, graphical anal…

Texas Instruments Nspire CX CAS Graphing Calculator: Stay mobile, continue learning – Transfer class assignments from handheld to computer. Complete work outside of school using student software. On the desktop at home or a laptop on the bus, at the library, coffee shop or wherever. Explore higher-level math concepts – Explore symbolic algebra and symbolic calculus, in addition to standard numeric calculations. View exact values – in the form of va…

Math is Sexy T-Shirt: This top-quality, 100% cotton T-Shirt is printed direct-to-garment with new age technology that preserves the color-fastness of the design. This unique T-Shirt is designed and printed in the United States with eco-friendly ink-so it is safe for you and the environment. This durable, comfortable T-Shirt is sure to be a hit, whether you’re buying it as a gift for somebody special or wearing it

City Shirts Keep Calm and Love Math Adult T-Shirt Tee: Keep Calm and Love Math Adult Shirt, high quality shirt only from City Shirts…

Rocket Factory DEAR ALGEBRA funny math Men’s T-shirt: That’s right, she’s not coming back. We said it. Problem solved. This T-shirt is 100% cotton and was printed by hand with love….

MATH MASTER; The complete Reference Guide to Mathematics (Trigonometry, Statistics, Calculus, Geometry, Algebra) For High School Students & College Freshmen: Complete reference guide to mathematics for high school students & college freshmen on CD-Rom…

High School Math – Algebra, Trigonometry, Calculus: Learn Algebra Trigonometry and Calculus with multimedia interactive lessons and tests.
Features:
• Interactive tutoring – advanced math concepts
• Printable tests to monitor progress
• Easy to use interface
• No installation required
• Windows and MAC

System Requirements: Windows 2000/XP, sound card, CD drive; Mac OS X
Format: WIN 2000XP/MAC 10.3.8 OR LATER
Genre: EDUCATION
UPC: 798694810295
Manufacturer No: 10…

Pro One Mathematics: Pre-Algebra, Algebra 1, Algebra 2, Geometry, Trigonometry, Calculus 1. Save time! Stop searching through textbooks, old worksheets, and question databases. Create questions with the characteristics you want them to have. Automatically space the questions on the page. Print professional-looking assignments. Improve learning! Differentiate your instruction, improve your materials, adapt to your individual classes. Reduce cheating! Print multiple versions of tes…
EvoMath 2: Testing for Hardy-Weinberg Equilibrium

Authors are solely responsible for the content of their articles on PandasThumb.org. Linked material is the responsibility of the party who created it. Commenters are responsible for the content of comments. The opinions expressed in articles, linked materials, and comments are not necessarily those of PandasThumb.org. See our full disclaimer.

Reed A. Cartwright posted Entry 120 on April 8, 2004 08:00 PM. Trackback URL: http://www.pandasthumb.org/cgi-bin/mt/mt-tb.fcgi/119

In the first installment of EvoMath, I derived the Hardy-Weinberg Principle and discussed its significance to biology. In the second installment I will demonstrate how to test if a population deviates from Hardy-Weinberg equilibrium. A population with allele frequencies p and q = 1 - p is considered to be in Hardy-Weinberg equilibrium if the genotype frequencies are as follows.

Genotype   Frequency
AA         p^2
Aa         2pq
aa         q^2

Test Procedure: A goodness-of-fit test can be used to determine if a population is significantly different from the expectations of Hardy-Weinberg equilibrium. If we have a series of genotype counts from a population, then we can compare these counts to the ones predicted by the Hardy-Weinberg model. We conclude that the population is not in Hardy-Weinberg equilibrium if the probability that the counts were drawn under the Hardy-Weinberg model is too small for the deviations to be considered due to random chance. The significance level that is typically used is 0.05. In order to calculate this probability, we will use the test statistic X^2 = sum over genotype classes of (Observed - Expected)^2 / Expected. This test statistic has a "chi-square" distribution with degrees of freedom equal to the number of genotype classes, minus one, minus the number of allele frequencies estimated from the data; for a biallelic locus that is 3 - 1 - 1 = 1.

Example 1: Consider the following samples from a population.

Genotype   Count
AA         30
Aa         55
aa         15

Allele   Frequency
A        0.575
a        0.425

Calculate the test statistic:

Genotype   Observed   Expected   (O-E)^2/E
AA         30         33         0.27
Aa         55         49         0.73
aa         15         18         0.50
Total      100        100        1.50

Example 2: Race and Sanger (1975) determined the blood groups of 1000 Britons as follows (from Hartl and Clarke 1997).
Genotype   Observed   Expected
MM         298        294.3
MN         489        496.4
NN         213        209.3

This results in X^2 ≈ 0.22, well below the critical value of 3.84 for one degree of freedom at the 0.05 significance level, so the sample is consistent with Hardy-Weinberg equilibrium.

Example 3: Matthijis et al. (1998) surveyed a group of 54 people suffering from Jaeken syndrome (from Freeman and Herron 2004).

Genotype   Observed   Expected
OO         11         19.44
OR         43         25.92
RR         0          8.64

This results in X^2 ≈ 23.6, far above the critical value; this sample deviates significantly from Hardy-Weinberg equilibrium (note the complete absence of RR homozygotes).

Although, to derive the Hardy-Weinberg principle, we assumed that the size of the population was infinite, these statistical tests demonstrate that finite populations can approximately exist in Hardy-Weinberg equilibrium.

• Freeman S and Herron JC (2004) Evolutionary Analysis 3rd ed. Pearson Education, Inc (Upper Saddle River, NJ)
• Hartl DL and Clarke AG (1997) Principles of Population Genetics 3rd ed. Sinauer Associates, Inc (Sunderland, MA)
• Matthijis GE et al. (1998) Lack of homozygotes for the most frequent disease allele in carbohydrate-deficient-glycoprotein syndrome type 1A. American Journal of Human Genetics 62: 542-550
• Race RR and Sanger R (1975) Blood Groups in Man 6th ed. JB Lippincott, Philadelphia

Previous Installments:
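The test procedure above is easy to script. Here is a minimal sketch (the helper name is mine; the post does the calculation by hand) that computes the chi-square statistic from raw genotype counts at a biallelic locus:

```python
def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit statistic against Hardy-Weinberg
    expectations for a biallelic locus (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)        # frequency of allele A
    q = 1 - p                              # frequency of allele a
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(hwe_chi_square(30, 55, 15), 2))     # Example 1: 1.57
print(round(hwe_chi_square(298, 489, 213), 2))  # Example 2: 0.22
print(round(hwe_chi_square(11, 43, 0), 2))      # Example 3: 23.63
```

Example 1 yields about 1.57 rather than the post's 1.50 only because the post rounds the expected counts to whole numbers before summing. Each statistic is compared against the chi-square critical value 3.84 (significance level 0.05, one degree of freedom).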
Geometry, Topology and Destiny

I’ve reached the cosmology part of my General Relativity (GR) course, and one of the early points that comes up is my traditional rant against confusing three very distinct concepts when thinking about the universe. Roughly stated, these are: What is the shape of the universe? Is the universe finite or infinite? And will the universe expand forever or recollapse?

When we apply GR to cosmology, we make use of the simplifying assumptions, backed up by observations, that there exists a definition of time such that at a fixed value of time, the universe is spatially homogeneous (looks the same wherever the observer is) and isotropic (looks the same in all directions around a point). We then specialize to the most general metric compatible with these assumptions, and write down the resulting Einstein equations with appropriate sources (regular matter, dark matter, radiation, a cosmological constant, etc.). The solutions to these equations are the famous Friedmann-Robertson-Walker spacetimes, describing the expansion (or contraction) of the universe.

It is important to take a moment to emphasize what we have done here. GR is indeed a beautiful geometric theory describing curved spacetime. But practically, we are solving differential equations, subject to (in this case) the condition that the universe look the way it does today. Differential equations describe the local behavior of a system and so, in GR, they describe the local geometry in the neighborhood of a spacetime point. Because homogeneity and isotropy are quite restrictive assumptions, there are only three possible answers for the local geometry of space at any fixed point in time – it can be spatially positively curved (locally like a 3-dimensional sphere), flat (locally like a 3-dimensional version of a flat plane) or negatively spatially curved (locally like a 3-dimensional hyperboloid).
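For reference, the most general metric compatible with these assumptions is the Friedmann-Robertson-Walker line element (standard form; the post describes it only in words):

```latex
ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2}
     + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right],
\qquad k \in \{+1, 0, -1\}.
```

Here $a(t)$ is the scale factor determined by the Einstein equations, and $k = +1, 0, -1$ corresponds to the spherical, flat, and hyperbolic local geometries respectively.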
A given cosmological solution to GR tells you one of these answers around a spacetime point, and homogeneity then tells you that this is the same answer around every spacetime point. This is what we mean when we say that GR tells us about geometry – the shape of the universe – as depicted in the NASA graphic below. [Graphic not reproduced here.]

This raises a very different question that is often confused with the one above. If our solution tells us that the universe is locally a 3-sphere (or flat space, or a hyperboloid) around every point, then does that mean it is a 3-sphere, or an infinite flat 3-dimensional space, or an infinite hyperboloid? This is really a question of topology – how is it connected up – which also answers the question of whether the universe is finite or infinite.

To illustrate the point, suppose we have solved the cosmological equations of GR, and discovered that at every spacetime point, the universe is locally a flat 3-dimensional space. This is, by the way, what observations actually indicate our universe is like. Then, just off the top of your head, you can think of many different spaces with precisely this same property. One example is, of course, that the universe is indeed a flat, infinite 3-dimensional space. Another is that the universe is a 3-torus, in which if you were to fix time and trace out a line away from any point along the x, y or z-axis, you traverse a circle and come right back to where you started. This is a finite volume space, that is connected up in a very specific way, but which is everywhere flat, just like the infinite example. In two dimensions, one might visualize it as a donut-shaped surface. [Figure not reproduced here.]

Of course, I could have only made one or two directions into circles (leaving it still infinite in some directions), or made the space into a finite one with more than one hole, or any number of other possibilities. This is the beauty of topology, but it is not something that solving the equations of GR tells us. Rather it is an extra input into our solutions.
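Concretely (my notation, not the post's), the flat 3-torus example can be written as ordinary flat space with points identified along each axis:

```latex
ds^2 = dx^2 + dy^2 + dz^2, \qquad
(x, y, z) \sim (x + L_x,\, y,\, z), \quad
(x, y, z) \sim (x,\, y + L_y,\, z), \quad
(x, y, z) \sim (x,\, y,\, z + L_z).
```

The local metric, and hence the curvature (zero), is identical to that of infinite flat space, yet the total volume $L_x L_y L_z$ is finite. Leaving out one or two of the identifications gives the still-infinite variants mentioned above.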
It is, however, something we can test, most precisely through measurements of the Cosmic Microwave Background radiation, as I may discuss in a later post.

Completely independent of questions of topology, the geometry of a given cosmological solution raises another issue that is often mixed up with those of geometry and topology. Suppose that the universe contains only conventional matter sources (regular matter, dark matter and radiation, say), and suppose you know (you might question whether this is truly possible) that this is all it will ever contain. Then the equations easily predict that, in the case of positive spatial curvature, an expanding universe will ultimately reach a maximum size and recollapse in a big crunch, whereas flat or negatively curved universes will expand forever. These are predictions of the destiny of the universe, and often lead to the following connection between geometry and destiny. [Chart not reproduced here.]

However, as I made clear, there are some assumptions that go into the connection between geometry and destiny, and although these may have seemed reasonable ones at one time, we know today that the accelerated expansion of the universe seems to point to the existence of some kind of dark energy (a cosmological constant, for example), that behaves in a way quite different from conventional mass-energy sources. In fact, we know that for sources like this, once acceleration begins, it is easily possible for a positively curved universe, for example, to expand forever. Indeed, in the case of a cosmological constant, this is precisely what happens.

So the universe may be positively or negatively curved, or flat, and our solutions to GR tell us this. They may be finite or infinite, and connected up in interesting ways, but GR does not tell us why this is the case. And the universe may expand forever or recollapse, but this depends on detailed properties of the cosmic energy budget, and not just on geometry.
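The geometry-destiny logic for conventional sources can be read off the Friedmann equation (standard form, added here for reference):

```latex
\left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi G}{3} \rho - \frac{k}{a^2}.
```

For matter or radiation, $\rho$ falls off faster than $1/a^2$ as the universe expands, so with $k = +1$ the right-hand side eventually reaches zero and the expansion reverses. A cosmological constant contributes a $\rho$ that does not dilute at all, which is how a positively curved universe can nonetheless expand forever.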
Cosmological spacetimes are some of the simplest solutions to GR that we know, and even they admit all kinds of potential complexities, beyond the most obvious possibilities. Wonderful, isn’t it?

This is probably a stupid question, but how can a universe be isotropic if it isn’t also homogeneous? Doesn’t the former entail the latter? What would be an example of a world that looks the same in all directions, but isn’t everywhere the same?

• http://blogs.discovermagazine.com/cosmicvariance/mark/ One only needs a spacetime with a center but that looks the same in all directions from that one point. An example that is not a cosmological spacetime is the Schwarzschild spacetime describing a black hole or the spacetime around the Sun. This is isotropic around one point but not homogeneous. It is important to note that this is isotropy about a point. If we automatically demanded isotropy about every point, then we would, indeed, have homogeneity.

• http://gnomonicablog.com Truly wonderful. I actually forgot until now I had this confusion after my graduate course in GR. But the instructor did not seem to understand it better. I think this could also make for some interesting concept problems in a GR course. And it gave me a couple of ideas for my Spanish blog. Glad you shared this!

• http://gplus.to/pedroj Though it might not be applicable to our own universe, I think it’s worth mentioning Myers’ Theorem, which states that a manifold with positive Ricci curvature, bounded below by some non-zero value, must have a finite volume. I always thought it was intuitively reasonable to suppose that a manifold could be locally isometric to a 3-sphere and yet somehow find a way to avoid closing up on itself … but Myers’ Theorem says that really can’t happen.

One technical question I should know the answer to. Statements connecting local geometry to global topology must rely on some smoothness conditions, for example absence of singularities.
But this seems to me problematic: not only can we not know for sure those conditions apply to any point in spacetime, we do know that some singularities (i.e. black hole singularities) do in fact exist in our very own spacetime. Are there weaker statements about the connection between topology and some suitably “averaged out” geometry that one can make?

• http://blogs.discovermagazine.com/cosmicvariance/sean/ Moshe– Black holes have spacelike singularities in the future, so in principle they wouldn’t prevent you from constructing nonsingular spacelike surfaces. But that’s probably not the answer you were looking for. I presume if you had a singularity on your spacelike surface, you could cut it out by removing a spherical region around it and replacing that with some nonsingular geometry that matched on the boundary. But I haven’t thought about it carefully.

• Pingback: Geometry, Topology and Destiny – ScienceNewsX – Science News Aggregator

Thanks Sean. Black holes are a bad example, but I am still puzzled by the issue of singularities, which are fairly common even if you start with non-singular initial data. I assume that smoothness can be replaced by a condition that some averaged curvature invariant is locally bounded. But, since we cannot possibly know that any statement like that is valid absolutely everywhere in spacetime, can we make any inference about the topology of space/spacetime?

• http://blogs.discovermagazine.com/cosmicvariance/sean/ I doubt it. Not sure what right we would have to say anything about nontrivial handles etc. at the Planck scale, for example. And also I think we have little/no right to make assumptions about the geometry outside our horizon, so I would be loath to say anything about the very largest scales either.

One thing I always wondered about — why does the curvature “k” in the FLRW metric have to be independent of time?
Why can’t the Universe go from positively curved to negatively curved over time? Does that somehow violate homogeneity and isotropy because of the problem of trying to uniquely define a slice of constant time?

Moshe, this may be by now obvious, but the crucial assumption that allows us to make conclusions about global topology is, as already mentioned by Mark, the homogeneity hypothesis. Are you perhaps wondering to what extent this hypothesis could be relaxed yet still allow us to make global conclusions? One kind of answer was pointed out by Greg above: the positivity of Ricci curvature forces the topology to be compact. Homogeneity is relaxed, since the curvature is not everywhere the same, but not completely since it is still everywhere bounded from below. I believe that there are also results (I think due to M. Gromov and others) that restrict the topology by bounding the number of holes (Betti numbers) from above, provided the (sectional?) curvature is everywhere negative. These results appear to be consistent with the intuition that conclusions about global topology survive a slight weakening of the homogeneity hypothesis.

• http://x-sections.blogspot.com/ Other commenters have already touched on this, but… Mark, your distinction between geometry and topology is an important one, but they are not completely independent. In two dimensions, every undergraduate knows the Gauss-Bonnet theorem, relating the Euler number of a surface to the average of its curvature; in particular, the only positively-curved two-dimensional manifold is the two-sphere. In three dimensions, things are more complicated (and I’m no expert); for example, the three-sphere admits an infinite number of free quotients (the lens spaces), the local geometry of which is therefore identical. Nevertheless, I think there are still relatively fewer possibilities when the curvature is positive or zero, than when it is negative.
One other comment, which just occurred to me, is the two meanings of “homogeneous”. In mathematics, a homogeneous space is one with a transitive isometry group, whereas in physics it seems that we mean something weaker: any two points have isometric neighbourhoods. Obviously the first implies the second, but it’s unclear to me what conclusions can be drawn from the second alone.

• http://coraifeartaigh.wordpress.com Superb post. This is exactly the sort of point that gets left out (by necessity) in popular accounts, and that leads to all sorts of questions in the mind of the reader. Are the course notes

How is a saddle shaped geometry positive curvature? Isn’t it positive on one axis and negative on the other? So wouldn’t this look like a bowl, vs a saddle?

• http://www.math.ist.utl.pt/~jnatar/ Trevor (#11), Changing the sign of the curvature implies changing the spatial topology, and there are theorems forbidding that, see e.g. Incidentally, if you allow k to vary in space you get the inhomogeneous Lemaitre-Tolman-Bondi cosmologies:

Igor, I was just wondering how much these conclusions (or even the definition of topology) depend on their assumptions holding to arbitrarily short distances, which is what you have to do when you use mathematical theorems relating geometry to topology. For example, can we define and reach any conclusion on something we may call the averaged “large scale” topology, when we assume homogeneity only on average, and allow for exceptions on a measure zero set (with some conditions on such exceptions)? Those would be more realistic assumptions, I think, because I don’t think we can assume that spacetime is smooth and is described by the usual GR structures to arbitrarily short distances, though it certainly does “on average”. As Sean points out, we also cannot know what is going on outside the horizon, but I think this is a separate issue.

Oh, so it’s just one special point that it needs to be isotropic around? I’d missed that completely.
Which one is it, if that makes sense? What book are you using, by the way?

Jose (#16) Personally, I do not see anything wrong with the Lemaitre-Tolman metrics (but I haven’t looked very hard).

• http://blogs.discovermagazine.com/cosmicvariance/mark/ Getting in late from work and it’s nice to see people have provided better answers to some of these than I could have. Moshe, I don’t know the answer in general – I don’t think we can say much, and of course, once we get to very small scales, we’ll need to depart from GR anyway, as Sean said. Rhys – indeed there can be connections, but the point I was making was that in general they are different concepts. I also think the possibilities are fewer for positive curvature. For compact hyperbolic manifolds, by comparison, there are connections between the volume (in units of the curvature radius) and the topology. Sili – For a general space that is just isotropic there is indeed a special point – the one about which the space is isotropic. But our favorite cosmological spaces are isotropic and homogeneous, and so are isotropic about every point.

Doh. Now I get it. I think. Thanks. I’d love to see a post on how the CMB probes the topology.

• http://www.math.ist.utl.pt/~jnatar/ Tintin (#19), The comment on the LTB metrics was just an aside. There is nothing wrong with these metrics, they are just inhomogeneous generalizations of the standard FLRW metrics. As in the FLRW case, the sign of the spatial curvature (which is the sign of the function E) is fixed for the LTB metrics, so you cannot go from positive to negative spatial curvature (nor can you have topology change).

• http://blogs.discovermagazine.com/cosmicvariance/mark/ dktm #22: I’ll try to write one soon.

• Pingback: Working for Free and Other Hazards « Galileo's Pendulum

• Pingback: Cutting back…. « blueollie

• http://www.math.cornell.edu/~dtaimina For those who want to learn more about the differences between geometry and topology let me suggest my book – 2012 Euler Book Prize winner: which is aimed at the general audience and mostly explains things visually. btw the NASA picture depicts a surface with constant negative curvature incorrectly.

So often, crucial points go unrecognized and ignored among everything else. Mark Trodden speaks of a “fixed value of time”. In a universe governed by “Einstein’s relativity”, however, a fixed value of time is meaningless. Every particle moving in their own universe of measurements would see things differently from others. Indeed, what “shape” does the universe have for particles moving at different velocities? Shills for the “relativity” might say the universe has the same shape for all, overall. But, then, that establishes that uniform shape as a uniform frame of reference!

Perhaps this is a ridiculous question (most of mine are), but does GR have anything to say about the geometry of time? At the very least, it could be either infinite or finite (finite at a singularity), and if finite then finite in either one or both directions. But why not curved time? Of course the case of a positively curved time dimension might suggest a closed circle, and that time repeats. Such a thing might produce a cyclic universe, although the more common idea of a cyclic universe assumes a flat time geometry.

Does GR differentiate in any way between normal and dark matter? It seems not to matter (no pun intended). Thus, can dark matter collapse to form a black hole? Of course the term black hole may not be a proper term for a type of matter that doesn’t interact with EM. But it would seem that a dense enough accumulation could collapse to form a singularity. A normal black hole would radiate Hawking radiation.
Would not a dark matter black hole radiate as well, since Hawking radiation is a result of virtual particles falling/escaping the black hole? But then would not the dark matter black hole behave as a classical blackbody and radiate EM energy?

Anything can collapse to a black hole, and black holes should radiate democratically to all states that are thermally or kinematically accessible. The mechanism of production is virtual particle creation near the horizon – it does not come from the black hole itself. The point is that after the black hole forms, nature forgets what it was made of. It’s just a bunch of energy sitting there. Radiation is nature redistributing that energy thermally.

• http://blogs.discovermagazine.com/cosmicvariance/mark/ Ray #30: As in the discussion of spatial geometry, GR has everything to say about curvature in the time direction (indeed, that’s what we have in cosmology) – and that is what geometry is about. Questions of whether time is a circle etc. are about topology, and can, at least in part, depend on initial or boundary conditions. Ray #31: peacock #32 is right in principle – anything can be a black hole. However, in the case of dark matter, it is quite hard to form a black hole, and the reason isn’t to do with GR, but with particle physics. We know dark matter is at best weakly interacting, and so unlike regular matter, which can lose energy, and hence collapse, by emitting radiation, dark matter has a hard time losing energy, and so doesn’t collapse easily. This is the basic reason why we have large, puffy dark matter halos around galaxies in which the regular matter is clumped up into a disk.

Are there theories predicting a definite topology for the universe? I’m not talking about phenomenological theories that just plug in some global topology and then make predictions based on that.
No, I’m really asking for a theory that says “the universe should be a donut because…” or “If this principle holds, we should expect a Poincaré dodecahedral space…”.

• http://blogs.discovermagazine.com/cosmicvariance/mark/

• http://freethoughtblogs.com/singham/ Excellent post, Mark. The only thing that I would suggest is to expand slightly on how you can connect locally flat spaces in ways other than to get a flat infinite three-dimensional space. It is not obvious how, in the 2-D case where you have the donut, it is locally flat.

Great post; some of the issues of complete relativism in GR are discussed in a very nice article by Lee Smolin, which you can find in The Structural Foundations of Quantum Gravity (2006), pg 196. He talks about GR being “partially relative” in the sense that although the geometry is dynamically generated, the dimension, topology, smooth structure, and signature are not. A good read if anyone is interested.

• Pingback: Disentangling three big questions about the universe | Mano Singham

Very well written. Excellent observations. I might add the idea that it is something to consider the fact that any given object can potentially be of any given size – infinitely large or small – and that any event can potentially take any amount of time – infinitely long or infinitely short.
Math Help

November 29th 2009, 10:37 AM #1 Junior Member Jul 2009

Let f be continuous in [a,b] and differentiable in (a,b). Show that

$\frac{af(b) - bf(a)}{b - a} = x_0 f'(x_0) - f(x_0)$

for some $x_0$. I tried by defining g(x) = (a+b-x)f(x) and then using the mean value theorem on g(x) to solve the problem, but I get stuck D: plz help me

Last edited by mms; November 29th 2009 at 12:08 PM.
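For reference, one standard route (a sketch; it assumes $0 \notin [a,b]$, which the problem presumably intends since the computation below divides by the endpoints): apply the Cauchy mean value theorem to $F(x) = f(x)/x$ and $G(x) = 1/x$. Then for some $x_0 \in (a,b)$,

```latex
\frac{F(b) - F(a)}{G(b) - G(a)}
  = \frac{F'(x_0)}{G'(x_0)}
  = \frac{\left( x_0 f'(x_0) - f(x_0) \right) / x_0^2}{-1 / x_0^2}
  = f(x_0) - x_0 f'(x_0),
```

while the left-hand side simplifies to $\frac{f(b)/b - f(a)/a}{1/b - 1/a} = \frac{a f(b) - b f(a)}{a - b}$; multiplying both sides by $-1$ gives exactly the identity to be shown.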
Help me help him Geometry/Algebra

Which one of you moms is good at Algebra? I'm trying to find the perimeter of an irregular polygon with missing sides. It seems easy but then I get confused. We could wait until our tutoring session but the problem is pissing me off. LOL So???? Who can help? It's one where several sides are missing and I know there must be an easy formula.

Some people recognize the light but they can't handle the glare.

by on Feb. 4, 2013 at 1:10 PM

Replies (1-10):

Let me ask my daughter, she had geometry last year

by on Feb. 4, 2013 at 3:50 PM

How many sides does the polygon have? And how many sides did they already give you the number for, and what are those numbers? Then I can write you up a quick formula and solve it

Not me! I'm struggling over here with my own homework. I'm trying my hardest to solve these rational expressions. Did I mention this is already a developmental math class? LOL!

Thanks! I'm going to send a picture of the textbook problem.

Quoting MsNeene: How many sides does the polygon have? And how many sides did they already give you the number for, and what are those numbers? Then I can write you up a quick formula and solve it

on Feb. 4, 2013 at 5:38 PM

Lol I'm with you, I start my class tomorrow night. My youngest dd, she's 12, said she will help me lol baby momma going to need it

Quoting moosesmom: Not me! I'm struggling over here with my own homework. I'm trying my hardest to solve these rational expressions. Did I mention this is already a developmental math class? LOL!

So I feel like there is a simple formula ---- I know I'm doing too many steps to get the answer. There has to be an easier way. I'm on my mobile --- please look at the picture.

Quoting MsNeene: How many sides does the polygon have? And how many sides did they already give you the number for, and what are those numbers? Then I can write you up a quick formula and solve it

31 - 18 = 13
25 - 13 = 12 (the horizontal section just above the 25 m side)
20 + 20 + 12 + 31 + 25 + 18 = 126 m

The perimeter is 126 metres.

How were you calculating to get the answer? Just curious if it was a shorter method than what I had.

Quoting Dana267: So I feel like there is a simple formula ---- I know I'm doing too many steps to get the answer.
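The reply's arithmetic can be replayed in a few lines (a sketch only: the figure itself isn't reproduced in the thread, so the side lengths and the two derived segments below are taken on trust from the reply):

```python
# Given sides of the rectilinear polygon, in metres (from the reply above).
# The two missing sides are derived by differencing the known ones.
vertical_step = 31 - 18                # 13 m: offset between the two vertical sides
horizontal_step = 25 - vertical_step   # 12 m: horizontal section above the 25 m side

# The six sides summed in the reply (20 + 20 + 12 + 31 + 25 + 18).
sides = [20, 20, horizontal_step, 31, 25, 18]
print(sum(sides))  # 126
```

This only checks the addition; whether the 13 m step also belongs on the boundary depends on the figure, which isn't shown here.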
First NMatrix Alpha Released

Warning: Code in this blog post is very old and likely will not work with the current version of NMatrix. Please check the NMatrix wiki for the most recent information.

Two months ago, I mentioned the existence of a prototype Ruby linear algebra library, written in C. I am pleased to announce that yesterday we released our first alpha of said library, NMatrix v0.0.1.

Creating Matrices

There are lots of different ways to create matrices. The first and easiest is to supply dimensions and initial data:

>> n = NMatrix.new(4, 0) # a square 4x4 dense zero matrix
=> #<NMatrix:0x9a57e14 shape:[4,4] dtype:int32 stype:dense>
>> n.pretty_print
=> nil

Data Types

You may notice that this first matrix defaulted to dtype=:int32. NMatrix will try to guess the dtype based on the first initial value you provide (e.g., NMatrix.new(4, [0.0, 1]) will be :float32), but you can choose to provide a dtype in addition to or in lieu of initial values:

>> n = NMatrix.new(4, [0,1], :rational128)
=> #<NMatrix:0x9959e04 shape:[4,4] dtype:rational128 stype:dense>
>> n.pretty_print
0/1 1/1 0/1 1/1
0/1 1/1 0/1 1/1
0/1 1/1 0/1 1/1
0/1 1/1 0/1 1/1
>> m = NMatrix.new(4, :int64) # no initialization of values
=> #<NMatrix:0x99fad68 shape:[4,4] dtype:int64 stype:dense>
>> m.pretty_print
-1217641248 161386160 161386100 161385680
=> nil

Storage Formats

The storage type (stype) can also be specified, prior to the dimension argument. However, with sparse storage formats, initial values don’t make sense, and these matrices will contain zeros by default:

# empty list-of-lists-of-lists 4x3x4 matrix
n = NMatrix.new(:list, [4,3,4], :int64)

# Ruby objects in a 'Yale' sparse matrix
m = NMatrix.new(:yale, [5,4], :object)

# A byte matrix containing a gradient
o = NMatrix.new(:dense, 5, [0,1,2,3,4], :byte)

The matrix m created above is a Yale-format sparse matrix, or more specifically, “new Yale,” which differs from “old Yale” in that the diagonal is stored separately from the non-diagonal elements.
Thus, diagonals can be accessed and set in constant time. Currently, all storage is row-based.

You can also convert between any of these three stypes using cast, e.g.,

n = NMatrix.new(:list, 4, :int64)
n[0,0] = 5
n[0,3] = -2
dense = n.cast(:dense, :int64)

Currently, only dense vectors are implemented as a child class of NMatrix, and creation is similar:

>> nv = NVector.new(5, :int64)
=> #<NVector:0x9a62328 shape:[5,1] dtype:int64 stype:dense orientation:column>

Math Operations

Most element-wise mathematical operations are supported for Yale and dense types. These use the basic operators (e.g., +, -, /, *, ==). For non-element-wise matrix multiplication, use the dot instance method of NMatrix. Whole-matrix comparison (returning a single boolean value) is the equal? or eql? method.

Road Map

Much more remains to be written than has been completed. Here are some of our key priorities:

• determinants
• matrix-vector multiplication for Yale
• adaptation of the SciRuby Matlab file reader to support NMatrix
• in-place transposition

If you want to get involved, I suggest visiting the NMatrix issue tracker. It will contain not only bugs, but also features that need to be implemented.

We're all pretty excited about NMatrix. But we couldn't have gotten this far without Masahiro Tanaka's NArray, which has served as a model for our library. And we can't do it without your help. A numerical library in Ruby is no small endeavour, and probably requires at least one full-time programmer (which we do not have). Please consider contributing, even if it's just by letting us know what you think. Happy coding!
{"url":"http://sciruby.com/blog/2012/04/11/first-nmatrix-alpha-released/","timestamp":"2014-04-16T04:20:06Z","content_type":null,"content_length":"14198","record_id":"<urn:uuid:3ffec1f7-7d35-4a14-b9a4-81055b98cba5>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Fourier series of e^x

Hi all, I've been having a little problem getting the Fourier series of [itex]e^x[/itex]. I am given

f(x) = e^{x}, x \in [-\pi, \pi)

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} e^{x}\ dx = \frac{2\sinh \pi}{\pi}

a_{n} = \frac{1}{\pi}\int_{-\pi}^{\pi} e^{x}\cos (nx)\ dx = \frac{1}{\pi}\left\{\left[ \frac{1}{n}\ e^{x} \sin (nx)\right]_{-\pi}^{\pi} - \frac{1}{n}\int_{-\pi}^{\pi} e^{x}\sin (nx)\ dx\right\} = \frac{(-1)^{n+1}\left(e^{-\pi} - e^{\pi}\right)}{\pi(1+n^2)}

b_{n} = \frac{1}{\pi}\int_{-\pi}^{\pi}e^{x}\sin (nx)\ dx = \frac{1}{\pi}\left\{ \left[ -\frac{1}{n}\ e^{x}\cos (nx)\right]_{-\pi}^{\pi} + \frac{1}{n}\int_{-\pi}^{\pi} e^{x}\cos(nx)\ dx\right\} = \frac{n(-1)^{n}\left(e^{-\pi} - e^{\pi}\right)}{\pi(1+n^2)}

So the Fourier series looks like this:

S(f,x) = \frac{\sinh \pi}{\pi}\ +\ \frac{2\sinh \pi}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{1+n^2}\left(\cos(nx) - n\sin(nx)\right)

Anyway, our professor gave us another (I hope correct) result:

S(f,x) = \frac{\sinh \pi}{\pi}\ +\ \frac{2\sinh \pi}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n}}{1+n^2}\left(\cos(nx) - n\sin(nx)\right)

Obviously my series is just of opposite sign to what it should be, but I can't find the mistake. Could you help me please?
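For what it's worth, the disputed sign can be settled numerically. This is my own sketch (not part of the thread), comparing $a_n$ computed from its integral definition against the closed form carrying the professor's $(-1)^n$ factor:

```python
# Numerical cross-check of a_n = (1/pi) * integral_{-pi}^{pi} e^x cos(nx) dx
# against the closed form (2 sinh(pi)/pi) * (-1)^n / (1 + n^2).
import math

def a_numeric(n, steps=100_000):
    """Trapezoid-rule approximation of the Fourier cosine coefficient a_n."""
    f = lambda x: math.exp(x) * math.cos(n * x)
    h = 2 * math.pi / steps
    total = 0.5 * (f(-math.pi) + f(math.pi))
    for k in range(1, steps):
        total += f(-math.pi + k * h)
    return total * h / math.pi

def a_professor(n):
    """Closed form with the professor's (-1)^n sign."""
    return 2 * math.sinh(math.pi) / math.pi * (-1) ** n / (1 + n * n)

for n in range(1, 5):
    assert abs(a_numeric(n) - a_professor(n)) < 1e-5
```

The assertions pass, and $a_1$ comes out negative, so the professor's $(-1)^n$ is the right sign. The slip is in the last simplification step: $e^{-\pi} - e^{\pi} = -2\sinh\pi$, not $+2\sinh\pi$, so factoring out $\frac{2\sinh\pi}{\pi}$ turns the $(-1)^{n+1}$ into $(-1)^{n}$.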
{"url":"http://www.physicsforums.com/showthread.php?t=75214","timestamp":"2014-04-18T03:06:11Z","content_type":null,"content_length":"38079","record_id":"<urn:uuid:1dfe767f-8612-4bbc-bf53-743799dc01bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
A&C reference library

I was impressed by this short (11 page) paper by Daly and Djorgovski:

Direct Constraints on the Properties and Evolution of Dark Energy
Ruth A. Daly, S. G. Djorgovski
11 pages, 8 figures, invited presentation from the Observing Dark Energy NOAO Workshop in Tucson

It goes along with Wolram's and others' interest in a skeptical appraisal of the dark energy idea. D and D have developed a method to analyse the raw supernova data with a minimum of assumptions (not assuming the Friedmann equations or the concordance model) and to calculate the acceleration directly. Then they can say "what assumptions, what model, would get us this observed acceleration?"

In other words, they proceed in a non-parametric way. They do not assume there are parameters like dark energy density and negative pressure and then try to find the values of those parameters. They assume nothing like that: they measure the acceleration-redshift relation and then try to find some mechanism that will fit it. Then they bring in models, like the concordance model, and try them out.

This is in a subtle way more difficult, but it is a commonsense approach; it is scientifically respectable to work with as few assumptions as you possibly can (and still be able to process the data, to get "traction" on the slippery road of the world, in other words).

Ruth Daly has 22 papers on arXiv, many of them with Djorgovski. This was an invited talk at a dark energy conference. She seems to me like someone to listen to. Djorgovski is at Caltech. Maybe Nereid knows of these people.
{"url":"http://www.physicsforums.com/showthread.php?p=198896","timestamp":"2014-04-20T00:53:14Z","content_type":null,"content_length":"81508","record_id":"<urn:uuid:e43e7538-8644-4431-adb4-3adb699a7753>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Haskell Applications (I)

Willamette Mathematics Colloquium Fall 2002

Fran: Functional Reactive Animation
Conal Elliott and Paul Hudak have developed sophisticated graphics, animation and interaction techniques
Elliott's FRAN tutorial
Hudak's SOE demo page at Yale

Functional imagery
Conal and Jerzy Karczmarczuk have explored images as functions from points to colors
Conal's Functional Images gallery
Jerzy's texture page

Haskore: based on an algebra of music
Paul Hudak at Yale is also a jazz musician: he has developed an abstract language for music based on pitches, tempos, transpositions and sequential and parallel composition
the on-line documentation for Haskore

Haskell in K-12 education
a group at Yale led by Hudak and John Peterson is exploring the use of these ideas for mathematics-based K-12 education
introduction and sample student work
{"url":"http://www.willamette.edu/~fruehr/haskell/lectures/mathhaskell5.html","timestamp":"2014-04-19T04:49:37Z","content_type":null,"content_length":"11125","record_id":"<urn:uuid:e0f2af40-abea-49a8-abd4-43e9cc97d7a8>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
two dice rolling probability? could use some help

July 10th 2006, 10:18 PM

How is everyone doing? I am new to this forum, so if I make some error, please let me know. Here is the problem: Player A and Player B each take turns rolling two dice until an odd product occurs, or until an even product occurs three times in succession. If a round ends on an odd product, player A gets 2 points; if it ends in three even products, player B gets 3 points. A game ends after ten rounds. Who will win the game?

At first I was thinking Player A will win, but I know somehow it is player B who has the higher percentage of winning. Here is what I know. To get an odd product, I solve (3/6)*(3/6) = 9/36 = 1/4; the only way to get odd is for both dice to be odd (I consider 1*1 an odd number). I even drew a 6 by 6 table, so I know that this is correct. To get an even product on the first roll, it is 1 - 9/36 = 27/36 = 3/4.

Here is where I get confused: to get an even product three times in a row, I thought you have to multiply 27/36 (aka 3/4) by itself three times, which gives 19683/46656 => 42%. However, when I did the table count of the total number of possibilities, it ended up with 23661 total possible events and 17496 favorable events, a percentage of 73%.

Here is how I did the graph. On the first roll (the first table), you have 6 by 6, which gives you 36 possible events; 27 of those events can continue on and the other 9 cannot. Then on the 2nd roll, for each of the last 27 events you have a 6 by 6 table, so 27*36 = 972 total possible events. Out of each table, 9 events cannot continue on, so 9*36 = 324; therefore only 648 events can continue to the next roll (I got this by 972-324). On the final roll, for each event you get a 6 by 6 table, so the total number of possible events is 648*36 = 23328. But 23328 is not the correct total until you add in the non-continuing events from the 1st and 2nd rolls (9 and 324 respectively), so the real total of possible events is 23661.
The favorable events out of that total are 17496, which I got by 648*27, where 648 is the total number of continuing events and 27 out of each table is favorable. So the final probability of B getting an even product three times in a row is 73%. Now I am pretty confident this is the correct answer, but I would like to express it as a math expression, not a graph; like for the odd product, where I can show it is 3/6 for each die to get an odd number and then (3/6)*(3/6) to get an odd product. Sorry for the long post. After this I am supposed to find the expected value from player A's viewpoint, but I am more interested in expressing Player B's probability as a math expression rather than by drawing 6 by 6 boxes. Thank you for your help.

July 10th 2006, 11:02 PM

Originally Posted by Dartgen
[original post quoted in full]

Hi. I could not follow your calculation with the graph. But I'll calculate the probabilities that A or B wins in two ways. If I interpret the rules correctly, A wins if an odd product occurs in any of the first three rolls; B wins otherwise. That's the same as saying B wins if an even product occurs in each of the first three rolls. The game never goes beyond 3 rolls. So B wins if an even product occurs 3 times in a row.
The probability is, as you say, $\frac{3}{4} \cdot \frac{3}{4} \cdot \frac{3}{4} = \frac{27}{64} \approx .42.$ The probability that A wins is $1 - \frac{27}{64} = \frac{37}{64} \approx .58.$ To calculate this in another way, use: A wins if an odd product occurs in any of the first three rolls. This probability is $\frac{1}{4} + \frac{3}{4} \cdot \frac{1}{4} + \frac{3}{4} \cdot \frac{3}{4} \cdot \frac{1}{4} = \frac{16+12+9}{64} = \frac{37}{64} \approx .58,$ the same as above. I hope that helps.

July 11th 2006, 03:59 AM

Originally Posted by Dartgen
[original post quoted in full]

How did you calculate it?

Keep Smiling

July 11th 2006, 05:27 AM

The other part of the question: we need to find who will be the winner... To find the number of points A is probably going to win, you multiply the number of rounds by the chance A will win, and then you multiply by how many points A will get per round...
$\text{rounds}\times\text{chance of winning}\times\text{points per round}=\text{ending score}$

$10\times\frac{37}{64}\times 2 = s_A \quad\rightarrow\quad s_A \approx 12$

Now for B:

$10\times\frac{27}{64}\times 3 = s_B \quad\rightarrow\quad s_B \approx 13$

So B is predicted to win (by a small margin).

July 11th 2006, 06:34 AM

Hello, Dartgen! JakeD had the best approach. Let me baby-step through it...

Player A and Player B take turns rolling two dice until an odd product occurs, or until an even product occurs three times in succession. If a round ends on an odd product, player A gets 2 points. If it ends in three even products, player B gets 3 points. A game ends after ten rounds. Who will win the game?

Out of the 36 possible outcomes for a pair of dice, 9 have odd products and 27 have even products. Hence: $P(odd) = \frac{9}{36} = \frac{1}{4},\;\;P(even) = \frac{27}{36} = \frac{3}{4}$

So $B$ wins $\left(\frac{3}{4}\right)^3 = \frac{27}{64}$ of the time, and $A$ wins $1 - \frac{27}{64} = \frac{37}{64}$ of the time.

$A$ wins $2$ points with probability $\frac{37}{64}$, so his expected winnings are $(2)\left(\frac{37}{64}\right) = \frac{74}{64}$ points per game. In 10 games, he can expect to win $10 \times \frac{74}{64} = 11.5625$ points.

$B$ wins $3$ points with probability $\frac{27}{64}$, so his expected winnings are $(3)\left(\frac{27}{64}\right) = \frac{81}{64}$ points per game. In 10 games, he can expect to win $10 \times \frac{81}{64} = 12.65625$ points.

Therefore, player $B$ is the expected winner.

July 11th 2006, 08:51 AM

Just to clear things up, Soroban and I have the same answer; I just approximated mine to the nearest whole number.

July 11th 2006, 11:33 PM

Thank you for your help. I knew that B was going to win but I just couldn't figure out how.
I did the calculation and I thought it was too low, until I drew the tree-like diagram of all possible events and figured out that it was B. Again, thank you for your help. Does anyone know what the expected value is from Player A's viewpoint? Say that both players put in the same amount of money; since B has a slight edge, I should be able to calculate it from the difference in probability you have outlined for me. Oh, another question: what book did you use to find the answer? I wouldn't mind buying a book to help my probability understanding.

July 12th 2006, 03:41 AM

Originally Posted by Dartgen
[previous post quoted]

This depends completely on how you're gambling: is the winner of the ten rounds going to get all the money, or is the money split up at the end between both players according to how many points they have?

July 14th 2006, 09:28 PM

The expected value

After ten rounds, whoever got the most points wins the whole bet. Based on that assumption, do I take A's 11 points divided by the total (11+12) to get the expected value for A? I know it isn't exactly 11 or 12, but I am assuming that if A can get about 11 points over ten rounds, then B can get about 12 points over the same number of rounds. So are the expected values correct?

July 14th 2006, 10:18 PM

Originally Posted by Dartgen
[original post quoted in full]

July 15th 2006, 05:14 AM

The expected points for each player are as follows...

Originally Posted by Soroban
[Soroban's expected-points calculation quoted above: A can expect 11.5625 points and B can expect 12.65625 points over 10 games]
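To tie the thread's numbers together, here is a brute-force check (my own sketch, not from any of the posts) that enumerates all $(6\times 6)^3$ equally likely three-roll sequences and then applies the scoring:

```python
# Brute-force verification of P(B wins a round) = 27/64 and of the
# expected points over a ten-round game (A: 2 points per round won,
# B: 3 points per round won).
from itertools import product
from fractions import Fraction

rolls = [(a, b) for a in range(1, 7) for b in range(1, 7)]
wins_B = sum(
    1
    for seq in product(rolls, repeat=3)
    if all((a * b) % 2 == 0 for a, b in seq)   # three even products in a row
)
p_B = Fraction(wins_B, len(rolls) ** 3)        # reduces to 27/64
p_A = 1 - p_B                                  # 37/64

exp_A = 10 * 2 * p_A
exp_B = 10 * 3 * p_B
print(p_A, p_B, float(exp_A), float(exp_B))    # 37/64 27/64 11.5625 12.65625
```

This reproduces JakeD's and Soroban's figures exactly, so B is indeed the (slight) favorite.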
{"url":"http://mathhelpforum.com/statistics/4097-two-dice-rolling-probability-could-use-some-help-print.html","timestamp":"2014-04-20T04:49:40Z","content_type":null,"content_length":"29123","record_id":"<urn:uuid:a1772ae8-a7c4-4a69-99ab-ee906e41c496>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics 471/701

• Oct 31: Problems 1 and 2 from Berndt chapter 3.
• Dec 2: Calibration
• Dec 4: presentations without PowerPoint (odd sized groups: i.e. either 1-person or 3-person groups)
• Dec 9 (last day of class): presentations without PowerPoint (even sized groups)
• Dec 11 (reading days): presentations with PowerPoint

General information

• Class: MW 10:30-12, F36
• TA: Kory Johnson
  □ Office hours: T 3-4, and W 12:30-1:30. (Note: his office is 434, but he will hold office hours in 452.)
  □ email: statistics.assignments@gmail.com
  □ Kory will maintain material on Canvas.
  □ Last year's TA, Josh, maintained a web page.
• Office hours: Thursday at 2pm (Huntsman 472). Or email me for an appointment.
• Course work:
  □ submit all exercises to statistics.assignments@gmail.com
  □ Exercises (to introduce you to R. These are not that important--they are mostly for your benefit.)
  □ Assignments (or more accurately, cases)
  □ Final project (in a group of two students or individual)
  □ Note: You are the best students on campus. I have very high expectations of what you will learn this semester. On student evaluations, this class is often listed as requiring the most work of any class taken at Wharton.
• We will be using R (free) as our statistics package. I'll be using R in class. The book is on R. Statistics revolves around R.
• Two useful books on R:
  □ Introductory Statistics with R by Peter Dalgaard, 2nd edition, ISBN 978-0-387-79053-4, Springer 2008 (paperback).
  □ Linear Models with R by Julian J. Faraway, ISBN 1-58488-425-8, Chapman & Hall/CRC Press 2005 (hardback).
• But the web and Kory are your best resources!

Future Data sets

• If you can't access WRDS, here are the two files:
• Cleaning crews for practice reading into R
• Alice in Wonderland: human readable, and word counts from Google. Here are files that are easier to read into R: one, two.
Future readings

• Dalgaard: 2.4 and 6.1 (in the previous edition: 1.6 and 5.1)
• Faraway: chapter 7
• Dalgaard: Chapter 6 (whole chapter)
• Dawkins: Handout

Future practice exercises

I'll keep a page about the current R practice you should be doing.

• fire up R (first week's practice)
• make doglegs (second week's practice)
• residuals (3rd week's practice)
• heteroskedasticity (3rd week's practice)
• homework one help file.
• homework two help file.

Ask me about the LaTeX and R connection (Sweave)

• example source file
• What it looks like when processed
• Other references I've found useful:

Data sets used in previous classes

Other Data sets of interest and some that will be used later

• World records for marathons by age. Contrast with the New England Journal of Medicine from 1980 (nejm198007173030304).
• amazon data (from compscore)
• Population cohorts 1926 - 1979 (total .txt, .csv)
• DJIA (.txt, .csv)
• Homework 1: download Berndt data. (guidelines)
• Nerlov data (.jmp, .txt)
• See the end of this page for other data sets of interest
• KOPCKE data for homework 4: (raw data) Note the strange file format. You WILL need to edit it! If you read it into a text editor you will see that there are TWO different columns of numbers. Here are some comments on the file.
• Magic forecasting rule to generate excess returns
• IBM (.txt)
• Monthly T-bills, VW, inflation 1925 - 1995 (.jmp, .txt) suggestion: DJIA for homework. :-)
• Homework evaluations (.txt). Notice that no homework is statistically significantly worse than any other. Too bad!
• mink data (txt, txt, xls) documentation
• International Airline (.jmp)
• Unemployment (.csv)
• Live births in Pennsylvania 1915 - 1997 (.csv)
• Fish (.jmp)
• Hurricane (Splus)
{"url":"http://gosset.wharton.upenn.edu/~foster/teaching/471/","timestamp":"2014-04-16T13:04:27Z","content_type":null,"content_length":"21241","record_id":"<urn:uuid:ebb27877-4e2d-4bbf-a959-6fe360bbc882>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
eFunda: Glossary: Units: Thermal Resistance Coefficient: RSI (metric R-Value)

RSI (metric R-Value) (RSI) is a unit in the category of Thermal resistance coefficient. This unit is commonly used in the UK and US unit systems. RSI (metric R-Value) (RSI) has a dimension of M^-1T^3Q, where M is mass, T is time, and Q is temperature. It is essentially the same as the corresponding standard SI unit K-m^2/W. Note that the seven base dimensions are M (Mass), L (Length), T (Time), Q (Temperature), N (Amount of Substance), I (Electric Current), and J (Luminous Intensity). Other units in the category of Thermal resistance coefficient include Clo (clo), Kelvin Meter Squared Per Watt (K-m^2/W), R-Value (imperial) (°F-ft^2-h/Btu (therm.)), and Tog (tog).
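Since RSI is identified above with K-m^2/W, its relation to the imperial R-value in the same category can be derived from the base-unit definitions. The sketch below is my own (the ~5.678 factor and the choice of the International Table Btu are assumptions, not stated in the glossary entry):

```python
# Converting RSI (K*m^2/W) to the imperial R-value (degF*ft^2*h/Btu),
# built up from base-unit conversion factors.
DEGF_PER_K = 1.8                    # Fahrenheit degrees per kelvin
FT2_PER_M2 = (1 / 0.3048) ** 2      # square feet per square metre (~10.764)
J_PER_BTU = 1055.06                 # joules per Btu (International Table)
S_PER_HOUR = 3600.0

# 1 K*m^2/W expressed in degF*ft^2*h/Btu:
R_IMP_PER_RSI = DEGF_PER_K * FT2_PER_M2 * J_PER_BTU / S_PER_HOUR

def rsi_to_r_imperial(rsi):
    """Convert a thermal resistance from RSI to imperial R-value."""
    return rsi * R_IMP_PER_RSI

print(round(R_IMP_PER_RSI, 3))      # ~5.678
```

Using the thermochemical Btu listed for the imperial unit instead of the IT Btu shifts the factor only in the third decimal place.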
{"url":"http://www.efunda.com/glossary/units/units--thermal_resistance_coefficient--rsi_metric_r-value.cfm","timestamp":"2014-04-19T19:35:48Z","content_type":null,"content_length":"32144","record_id":"<urn:uuid:e936aa70-6bc3-4f47-b478-57ce47ea6ee5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Caldwell, NJ Science Tutor Find a Caldwell, NJ Science Tutor ...One of my most recent students in SAT prep earned 800CR/780M, and another earned a combined score of 2290! I'd love to teach you in any of my listed academic subjects. I favor a dual approach, focused on both understanding concepts and going through practice problems. 26 Subjects: including mechanical engineering, psychology, ACT Science, English ...In addition to having excellent English skills, I have edited, proofread and reviewed dozens of books, articles, and other reference materials. I can help you with the various skills you need, to use English effectively: reading for comprehension, highlighting and note-taking, and writing for ac... 36 Subjects: including ACT Science, reading, ESL/ESOL, algebra 1 ...I scored 100% on the WyzAnt certification quiz in English. As a Biology major I understand the concepts of Physical Science. In addition, I have worked with students in middle school age Science courses and am familiar with the material in that sense. 24 Subjects: including chemistry, calculus, ecology, statistics ...I also encourage you to ask as many questions as possible, no matter how silly, so that you can get your own head around the problem. I then find it useful to ask questions to test your understanding, or perhaps get you to explain the concept to someone else. I also teach solid, repeatable methods for solving problems in physics, which sorts out the first type of issue I described above. 8 Subjects: including physics, astronomy, geometry, algebra 1 ...My approach to tutoring is very simple: figure out the student's learning style, assess strengths and weaknesses and tailor a program to the individual. My goal with tutoring is to make the student confident and comfortable with the material and to put myself out of a job. I also make myself available to the student and parents to address any issues or concerns that may arise. 
22 Subjects: including psychology, GMAT, algebra 2, biology
{"url":"http://www.purplemath.com/caldwell_nj_science_tutors.php","timestamp":"2014-04-20T02:03:23Z","content_type":null,"content_length":"24124","record_id":"<urn:uuid:e293c386-2b70-4d77-91b3-32b399f97ebd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Write your own word problem using remainders. Please only use a 2-by-1, e.g. 36 divided by 5. The answer is 7 r 1. Decide if you want your word problem to keep the remainder or lose it; here is an example of both:

Leave the remainder: Mr. M has 36 flowers. He wants to place them all into 5 vases. How many will each vase get? What should he do with his remainder?

Lose the remainder: Jon has 36 footballs he wants to give equally to friends. He has five friends to share with; how many will each friend get? What should he do with the remainder?

You pick the number sentence, include an answer (with a remainder), then pick the subject. TO ANSWER THE BLOG POST… tell whether you leave the remainder and round up, or you lose it and stay the same! For example… Mr. M will place 7 flowers into each vase except for one, which will have 8. OR Jon will give 7 footballs to each of his friends!

Posted by on November 10, 2011 at 12:12 pm | Comments & Trackbacks (5)

Hello and welcome to my classroom blog! My name is John William Moran, and I am a fourth grade teacher! This blog is designed to give students a twenty-first century opportunity to be heard! Let me know what you think!

Posted by on April 10, 2008 at 8:24 am | Comments & Trackbacks (0)
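As a quick way to check division-with-remainder answers like the ones in the post above (an aside, not part of the assignment), Python's built-in divmod returns the quotient and the remainder in one step:

```python
# 36 divided by 5 gives quotient 7 and remainder 1, i.e. "7 r 1".
quotient, remainder = divmod(36, 5)
print(quotient, remainder)          # 7 1

# "Lose the remainder": each of the five friends gets the quotient.
footballs_each = quotient           # 7
# "Leave the remainder" and round up: one vase holds one extra flower.
fullest_vase = quotient + (1 if remainder else 0)   # 8
print(footballs_each, fullest_vase) # 7 8
```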
{"url":"http://johnmoran.edublogs.org/","timestamp":"2014-04-21T09:35:32Z","content_type":null,"content_length":"12931","record_id":"<urn:uuid:106a42f6-18e7-47be-95c2-669f9b125a04>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Adapting the mathematics curriculum to the needs of today's students.

This is a transcript of a recording made with a hidden microphone during a university mathematics department's meeting on curriculum development.

Our task today is to develop a curriculum that will better address the problems of today's students.
What are the problems of today's students?
They can't do mathematics.
Yes, but the deeper problem is that they find our courses too difficult, and therefore we are getting fewer and fewer math majors. So we can't expand our faculty, and are even in danger of having our staff size reduced.
Now that's serious!
So what makes math so difficult for students?
Oh, little things, like homework, exams, standards.
We've already cut back on homework and lowered standards. Some still find the pace too demanding.
All these things get in the way of what they really came to college for: socializing, sports and sex.
The three s's.
We could slow the pace of courses.
We already have. The calculus sequence that used to be two semesters long has been spread out to four.
That's a positive idea. We could similarly stretch out other courses.
But that would force us to drop some courses, maybe even courses I teach!
Do your students really understand and master those courses?
Of course not, but at least we are exposing students to that material. Surely that has some value.
We all have to make sacrifices. Besides, who really needs advanced mathematics these days? Most students end up in jobs where computers do that work for them.
Ok, suppose we drop some of the lower enrollment advanced courses, and expand some of the lower level courses. What have we got left then?
Well, most students today will need to take algebra, or they should. They don't seem to have learned it in high school.
Even algebra is tough for some. How about offering a pre-algebra course to ease them into it gradually?
The algebra course itself could be pruned of the more difficult topics.
We could put those topics into an advanced algebra course.
But we just proposed dropping those advanced courses!
Which topics could we leave out of algebra?
Well, we could limit equations to those with the x on the left of the equals sign, and move difficult things like percents and fractions to the advanced course.
What about those students who find math too theoretical? Those who can't prove theorems, for example?
If we leave out the proofs, mathematics becomes only a collection of unfounded assertions.
That's a dilemma. Students certainly can't handle anything with more than one lemma.
An applied algebra course could be added to better serve those who can't think abstractly.
Well, now this is shaping into a more user-friendly math curriculum. But still, I fear, it is too demanding for those students with more modest goals, those who want to be teachers of mathematics. You know that they are always at the bottom of the curve in every math class.
The traditional way to handle this problem is to offer special sections of regular courses "for teachers". It wouldn't be difficult to develop such courses, for they are just watered down versions of the regular courses.
But we are already watering down the regular courses! Soon the content will be so dilute that we can call the curriculum 'homeopathic'.
Remember the needs of 'our customers'. We can't be too elitist about these things.
No, that would be politically incorrect. We must be sensitive to cultural differences.
Well, there are two cultures: those who can think abstractly and quantitatively, and those who can't.
Could we possibly find a way to teach absolutely anyone to think like mathematicians?
All right, let's cut out the absurd comments and come back down to the real world and get to the task at hand.
What was that?
Ensuring our survival as a department.
Students today live in the real world, so it seems that we could better serve them by limiting math to only real numbers.
Are you suggesting my complex variables course isn't necessary?
It could be an optional elective.
Then no one would elect to take it! I could equally well suggest that your course in group theory be dropped.
I'm already intending to rename it "Group Dynamics" to draw in more customers. So don't think I'm not willing to adapt to changing times.
We'll get nowhere if everyone engages in turf defense! Still, we need to attract more students, to make our department's productivity index look better. What about the general-education crowd? Can we tailor some offerings to be attractive to non math majors?
Course titles make a big difference. How about "The Romance of Numbers"? Or "Having Fun With Figures"?
Students like courses with a mystical or occult tone. Perhaps "The Eternal Triangle" or "Transcendental Equations".
All right, people, we are digressing again. Realistically, what can we do to make math more appealing to all students?
Well, I hate to mention this, but textbooks do strike students as a bit formidable. They bristle with unfriendly looking equations, graphs, and diagrams. Often these are in black and white. Students want more colorful books. Look at the books in the sciences: four color printing, lots of photographs, color-keyed symbols in the equations, etc. Our math books are drab by comparison.
Now we are getting somewhere. And couldn't we reduce the number of equations? Surely not all of them are necessary. You don't honestly believe that students read all of them, do you? Heck, some students don't even purchase the required textbook.

At this point the tape ran out.
More generally, self-reference is when something references itself, and it need not be part of a definition. For example, "The last word in this sentence is `Mississippi'." is a self-referential sentence: though it does not define itself, it describes a certain property that it has. One of the most prevalent self-referential sentences is the Liar's Paradox (also known as Epimenides' paradox).
algebra problem : figuring out the speed of boat and water
January 30th 2011, 08:53 PM #1

I have the algebra problem: If a boat goes downstream 72 miles in 3 hours and upstream 60 miles in 6 hours, the rate of the river and the rate of the boat in still water respectively are ________? a. 7 & 16 mph b. 7 & 17 c. 6 & 17 mph d. 6 & 18 e. none

I know the formula is r x t = d, however I come up with 26 & 10. Am I approaching this incorrectly?

if u is the velocity of the boat and v is the velocity of the river,

Yes, the formula for finding speed (rate) is: Speed (miles/hr) = Distance (miles) / Time (hours). However, in your question it says to find the respective rates of the river in still water. Also, it does not make sense when it talks about the rate of the river going upstream; water always flows downstream, and in still water there will be no flow/speed/rate. The boat will be stationary. Answer is e.

Hi BobBali, the question asks for the rate of the river and the rate of the boat in still water. "Respectively" refers to the order of the answers.
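For comparison, the standard still-water model can be worked through in a few lines (a sketch; the variable names are mine): with boat speed b and current speed r, the downstream speed is b + r and the upstream speed is b - r.

```python
# Downstream: 72 miles in 3 hours; upstream: 60 miles in 6 hours.
down = 72 / 3   # b + r = 24 mph
up = 60 / 6     # b - r = 10 mph

# Adding and subtracting the two equations isolates b and r.
b = (down + up) / 2   # boat speed in still water
r = (down - up) / 2   # speed of the current

print(b, r)  # boat 17 mph, river 7 mph
```

This gives river 7 mph and boat 17 mph, i.e. choice (b) under the usual reading of the problem.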
Summary: IMRN International Mathematics Research Notices 1999, No. 4
A Note on V. I. Arnold's Chord Conjecture
Casim Abbas
1 Introduction
This paper makes a contribution to a conjecture of V. I. Arnold in contact geometry, which he stated in his 1986 paper [4]. A contact form [4] on a closed, oriented three-manifold M is a 1-form α such that α ∧ dα is a volume form. There is a distinguished vector field X_α, called the Reeb vector field of α, which is defined by i_{X_α} dα = 0 and i_{X_α} α = 1. The standard example on S^3 is the following: Consider the 1-form α on R^4 defined by α = (1/2)(x_1 dy_1 - y_1 dx_1 + x_2 dy_2 - y_2 dx_2). This induces a contact form on the unit three-sphere S^3. Observe that all the orbits of the Reeb vector field are periodic; they are the fibres of the Hopf fibration. Note that the dynamics of the Reeb vector field changes drastically in general if we replace α by the contact form fα, where f is a nowhere vanishing function on S^3
NUIM ePrints and eTheses Archive Buckley, Stephen M. (2010) Nonpositive curvature and complex analysis. In: Five lectures in complex analysis : second Winter School on Complex Analysis and Operator Theory, February 5-9, 2008, University of Sevilla, Sevilla, Spain. Contemporary mathematics (525). American Mathematical Society, Providence, R.I., pp. 43-83. ISBN 9780821848098 We discuss a few of the metrics that are used in complex analysis and potential theory, including the Poincaré, Carathéodory, Kobayashi, Hilbert, and quasihyperbolic metrics. An important feature of these metrics is that they are quite often negatively curved. We discuss what this means and when it occurs, and proceed to investigate some notions of nonpositive curvature, beginning with constant negative curvature (e.g. the unit disk with the Poincaré metric), and moving on to CAT(k) and Gromov hyperbolic spaces. We pay special attention to notions of the boundary at infinity. Item Type: Book Section Keywords: Nonpositive curvature; complex analysis; constant negative curvature; Hyperbolic Geometry; Subjects: Science & Engineering > Mathematics Item ID: 2589 Identification Number: ISSN: 0271-4132 Depositing User: Prof. Stephen Buckley Date Deposited: 29 Jun 2011 13:34 Publisher: American Mathematical Society Refereed: No
Implicit Functions Date: 11/26/97 at 10:05:20 From: Kristen Norris Subject: Implicit functions What is an implicit function? Please give me a definition and several examples. Also, please tell me how it is used and give me real life related applications. Date: 11/26/97 at 14:45:23 From: Doctor Rob Subject: Re: Implicit functions Whenever you have an equation relating two variables, each is an implicit function of the other. Even if you are unable to solve the equation explicitly for one of the variables in terms of the other, given a value of one variable, there is a value of the other which makes the equation true. That defines the second variable as a function of the first. For example, if you knew that y + e^y = x^5, the equation would define y as a function of x implicitly. Solving for y is not possible, so you cannot express the function algebraically, but for each value of x, there is a unique value of y which makes it true. You can compute that value by numerical methods. For example, when x = 2, the value of y is (about) 3.35497961935092. Similarly, x is a function of y, but this time it is possible to write the function down explicitly, x being the fifth root of y + e^y. Any time you have a complicated equation involving two variables, this situation may arise. In order to be a *function*, of course, given a value of the independent variable, there must be a *unique* value of the dependent variable which makes the equation true. If there is more than one, you don't have a function, but something called a "relation". For the sake of clarity, consider x to be the independent variable, and y the dependent variable. For x = y^2, y is *not* a function of x, because for each positive value of x, there are two values of y which work: y = Sqrt[x] and y = -Sqrt[x]. This means that for y to be an implicit function of x, if you graph the solution set of the equation, every vertical line x = a should intersect the graph in a unique point. 
Then the value of the function y at x = a is the y-coordinate of that intersection point. One simple case where an implicit function always exists is if the graph is monotone increasing, that is, if (a,c) and (b,d) are points on the graph, and a > b, then c > d. Another case is when it is monotone decreasing (a > b ==> c < d). Implicit functions can also exist in other situations, however. The example y + e^y = x^5 has a monotone increasing graph, and that is why I knew that y was an implicit function of x. -Doctor Rob, The Math Forum Check out our web site! http://mathforum.org/dr.math/
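The numerical value quoted above for x = 2 can be checked with a short bisection (my own sketch, not Doctor Rob's code). Since y + e^y is strictly increasing in y, the root of y + e^y = x^5 is unique, which is exactly why y is an implicit function of x here:

```python
import math

def implicit_y(x, lo=-20.0, hi=20.0, iters=200):
    """Solve y + e^y = x^5 for y by bisection.

    The left-hand side is strictly increasing in y, so for each x
    there is exactly one root, and bisection converges to it."""
    target = x ** 5
    f = lambda y: y + math.exp(y) - target
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(implicit_y(2))  # about 3.35497961935092, as stated above
```

Monotonicity also means the graph passes the vertical-line test automatically, matching the discussion of monotone increasing graphs.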
Latest Theoretical work by Dwave Systems shows progress to solutions of NP-Hard problems and causes one critic of their quantum computer work to partially recant

A critic of D-Wave Systems' adiabatic quantum computer (Chris Lee at Ars Technica) has retracted his statement that D-Wave was borderline fraudulent: "Geordie and friends, I am sorry. I don't know if you have a working device, but the theoretical work produced by D-Wave is sharp and a great indication that you are honestly pursuing your stated goals."

Physical Review Letters - Does Adiabatic Quantum Optimization Fail for NP-Complete Problems?

It has been recently argued that adiabatic quantum optimization would fail in solving NP-complete problems because of the occurrence of exponentially small gaps due to crossing of local minima of the final Hamiltonian with its global minimum near the end of the adiabatic evolution. Using perturbation expansion, we analytically show that for the NP-hard problem known as maximum independent set, there always exist adiabatic paths along which no such crossings occur. Therefore, in order to prove that adiabatic quantum optimization fails for any NP-complete problem, one must prove that it is impossible to find any such path in polynomial time.

Arxiv - Does Adiabatic Quantum Optimization Fail for NP-Complete Problems? (4 pages)

It is important to note that we are not trying to prove that all level crossings between a global minimum and local minima can be eliminated in polynomial time. Neither are we claiming that if they are eliminated, the MIS problem can be solved in polynomial time. Even if all level crossings are eliminated, the scaling of the minimum gap in the rest of the spectrum is still unknown. What we are stating here is that there always exist paths along which no crossing occurs, at least up to second order perturbation. Since MIS is NP-hard, any NP problem can be polynomially mapped onto it.
Therefore, a valid proof that any NP-complete problem cannot be solved using AQO because of level crossings must prove that for the problem mapped onto MIS, it is impossible to find an assignment of parameters for which there is no level crossing. Further, due to the NP-hardness of approximating solutions to MIS, even if there are multiple crossings, AQO may produce sufficient solutions to solve NP-complete problems.

In conclusion, using perturbation expansion, we have shown that for the NP-hard problem of MIS, it is always possible to write down a Hamiltonian for which during the adiabatic evolution no crossing occurs between a global minimum and any of the local minima. If there is no degeneracy in the local minima, or if there are degenerate local minima but no pair of them is exactly 2 bit flips apart, such a Hamiltonian can be trivially obtained by increasing the coupling coefficient between the qubits linearly with the size of the problem. In cases with local minima exactly 2 bit flips away from each other, one can use the freedom of choosing the initial Hamiltonian to avoid level crossings. In the latter case, finding an assignment for tunneling amplitudes Δ_i may or may not be nontrivial. However, we have shown that such an assignment always exists. In general, there are infinite ways of defining the Hamiltonian, including those where many approximate solutions suffice, therefore it seems infeasible to prove that no successful Hamiltonian can be obtained in polynomial time.

One of the arguments used against adiabatic quantum computing is almost certainly wrong. This doesn't show that the approach will work or that it will be faster than any other sort of computer, but it tells us that D-Wave may yet produce something useful. To begin with, let's clarify a few things. D-Wave refers to its device as a quantum optimizer, and I refer to it as an adiabatic quantum computer.
The two are not quite identical: an adiabatic quantum computer uses the idea that mathematical problems can be restated in terms of a Hamiltonian—the mathematical term for the energy of the forces that push a bunch of particles about. The desired solution to the problem then turns out to be the lowest energy state, called the ground state, of the particles being pushed about by the Hamiltonian. In a quantum system, the Hamiltonian describes quantum particles and their quantum states. Unfortunately, getting everything into the ground state is as hard as solving the original problem, so you might think that we haven't gotten anywhere. You would be wrong. What we do instead is set the Hamiltonian to something easy to solve and put our particles in the ground state of that Hamiltonian. Now, we carefully prod at the Hamiltonian until it has the shape of the problem we want to solve. If we have done that carefully enough, the particles will have remained in the ground state and, once we read out their state, we have the solution to the original problem. Going from one Hamiltonian to another while keeping all the particles in the ground state is called an adiabatic passage, hence the name adiabatic quantum computing. The key is remaining in the ground state. Our ability to make a truly adiabatic passage in a system with lots and lots of particles has been questionable. The general approach to get around this has been to do the adiabatic trick multiple times and never end up in the ground state, but remain near it. What you are doing is generating a group of approximate solutions to the problem. With that in hand, you can use classical computing to go the remaining distance and get the exact solution. This is what D-wave does, and hence, D-Wave refers to its device as a quantum optimizer, not a quantum computer. 
For this to work, you still need to remain near the ground state, and it would really help if there was actually a set of changes to the Hamiltonian that would reliably take you to the ground state—even if you have to spend a bit of time in trial and error searching to find such a path. This is where the controversy over D-Wave's claims has lain. Several researchers have argued that once the number of particles got large enough to be interesting, there would be no path from a starting Hamiltonian to a final Hamiltonian that didn't involve some of the particles jumping into an excited state. In answer to their critics, Dickson and Amin from D-Wave Systems have published a paper that seeks to show that such a path does exist. They looked at a specific problem, called the Maximum Independent Set problem, which falls into a set of problems called NP-hard—these are problems that scale in a very unfortunate way on classical computers. They chose this problem because, once you show something can be done for one NP-hard problem, you have shown that it works for all problems in the NP set. Using this problem as a basis for the mathematics, they showed that it is always possible to choose a starting Hamiltonian such that they could avoid local minima and arrive at a global minimum: the solution to the problem in this case. There are some restrictions, though: the local minima had to be a certain amount more energetic than the ground state for this to be true. What does this mean? Well, first, it doesn't mean that a quantum optimizer is going to get to the solution any faster than a classical computer, because the speed at which you tweak the system depends on the distance between the ground state and the first excited state: the closer they are, the slower you have to go. So, there is no guarantee of any additional speed here. 
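The picture above — interpolate from an easy Hamiltonian to the problem Hamiltonian, with the allowed speed set by the gap between the ground state and the first excited state — can be sketched numerically. This toy two-level example is my own construction, not D-Wave's: a single "qubit" whose problem Hamiltonian is diagonal and whose starting Hamiltonian is a transverse field. The off-diagonal term keeps the gap open at every intermediate point, which is exactly the avoided-crossing behavior the Dickson-Amin argument relies on.

```python
import math

def gap(s, c0=1.0, c1=-1.0):
    """Spectral gap of H(s) = (1 - s) * (-sigma_x) + s * diag(c0, c1).

    For a real symmetric 2x2 matrix [[a, b], [b, d]], the difference
    between the two eigenvalues is sqrt((a - d)**2 + 4*b**2)."""
    a, d = s * c0, s * c1
    b = -(1.0 - s)  # transverse-field coupling, vanishes only at s = 1
    return math.sqrt((a - d) ** 2 + 4.0 * b ** 2)

# Scan the interpolation; the minimum gap bounds how slowly the
# adiabatic passage must proceed.
min_gap = min(gap(i / 1000.0) for i in range(1001))
print(min_gap)
```

For these illustrative values the minimum gap is sqrt(2), reached at s = 1/2; in a many-qubit problem the same scan would reveal whether the gap closes exponentially, which is the crux of the controversy.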
Indeed, given that one of the seminal works on adiabatic quantum computing was simply to prove that it was functionally identical to logic-gate-based quantum computers, it would be a major breakthrough to show such a speed up. What this new paper really is, then, is a proof that places the basic functionality of adiabatic quantum computing — no matter how it is actually implemented — on much firmer theoretical ground.
Malabika Pramanik's Research Areas of interest • Euclidean harmonic analysis □ Cone multipliers and local smoothing □ Multi-parameter maximal functions □ Hilbert transform along polynomial surfaces □ Scalar oscillatory integrals, oscillatory integrals with degenerate phases □ Almost everywhere convergence of Fourier series □ Multilinear operators with singular multipliers • Several complex variables: Estimates for the Bergman kernel. • Partial differential equations • Inverse problems and microlocal analysis I got my Ph.D. in Mathematics from the University of California, Berkeley, under the guidance of Prof. Michael Christ. I was a Van Vleck visiting assistant professor at University of Wisconsin, Madison (2001-2004), visiting assistant professor at the University of Rochester (Fall 2004), and a Fairchild Senior Research Fellow at Caltech (January 2005 - August 2006). I joined the Mathematics Department at UBC in Fall 2006. (For more details, here is a copy of my CV in PDF format). 1. A multi-dimensional resolution of singularities with applications to analysis. (Joint work with Tristan Collins and Allan Greenleaf) To appear in Amer. J. Math. ArXiv preprint. 2. Maximal operators and differentiation theorems for sparse sets. (Joint work with Izabella Laba) To appear in Duke Math. J. 3. An FIO calculus for marine seismic imaging, II: Sobolev estimates (Joint work with Raluca Felea and Allan Greenleaf) Math. Annalen DOI: 10.1007/s00208-011-0644-5. 4. A Calderón-Zygmund estimate with applications to generalized Radon transforms and Fourier integral operators. (Joint work with Keith Rogers and Andreas Seeger) Studia Math., 202 (2011), 1-15. 5. Multilinear singular operators with fractional rank. (Joint work with Ciprian Demeter and Christoph Thiele) Pacific J. of Mathematics 246 (2010) no. 2, 293--324 6. Maximal averages over linear and monomial polyhedra. (Joint work with Alexander Nagel) Duke Math. J. Volume 149 Number 2 (2009), 209-277. 7. 
Arithmetic progressions in sets of fractional dimension. (Joint work with Izabella Laba) Geom. Funct. Anal. 19 (2009), 429-456. 8. Oscillatory integral operators with homogeneous polynomial phases in several variables. (Joint work with Allan Greenleaf and Wan Tang) J. Functional Analysis (2007, volume 244, 444-487). (pdf) 9. Double Hilbert transform along real-analytic surfaces in $\mathbb{R}^{d+2}$. (Joint work with Chan Woo Yang) J. London Math. Soc. (2008) 77(2):363-386. 10. Averages over curves in $\mathbb{R}^3$ and associated maximal functions. (Joint work with Andreas Seeger) Amer. J. Math. (2007, volume 21, number 3, 61-103). (pdf) 11. $L^p$ Sobolev regularity of a restricted X-ray transform. (Joint work with Andreas Seeger) Harmonic Analysis and its Applications at Osaka, Conference Proceedings (November 2004, 47-64). (pdf) 12. Wolff's inequality for hypersurfaces. (Joint work with Izabella Laba). To appear in the Proceedings of El Escorial, 2005. (pdf) 13. $L^p$ decay estimates for weighted oscillatory integral operator on $\mathbb{R}$. (Joint work with Chan Woo Yang) Revista Matematica Iberoamericana (2005, volume 21, number 3, 1071--1095). (pdf) 14. Decay estimates for weighted scalar oscillatory integrals on $\mathbb{R}^2$. (Joint work with Chan Woo Yang) Indiana University Mathematical Journal (2004, volume 53, number 2, 613-645). (pdf) 15. A weak $L^2$ estimate for a maximal dyadic sum operator on $\mathbb{R}^n$. (Joint work with Erin Terwilleger) Illinois Journal of Mathematics (2003, volume 47, number 3, 775-813). (pdf) 16. Convergence of two-dimensional weighted integrals. Transactions of the American Mathematical Society (2002, volume 354, number 4, 1651-1665). (pdf) 17. Weighted inequalities for real-analytic functions in $\mathbb{R}^2$. Journal of Geometric Analysis (2002, volume 12, number 2, 265-288). (pdf) Preprints/In preparation (Preprints available on request). 1. 
Diagonal estimates for the Bergman kernel on certain domains in $\mathbb{C}^n$. (Joint work with Alexander Nagel). 2. Measures on monomial polyhedra. (Joint work with Alexander Nagel) Malabika Pramanik Last modified: Feb 28 2011
[SciPy-user] nonlinear fit with non uniform error?
David Huard david.huard@gmail....
Thu Jun 21 08:08:36 CDT 2007

What you have is a heteroscedastic normal distribution (varying variance) describing the residuals.

2007/6/21, Matthieu Brucher <matthieu.brucher@gmail.com>:
> 1) Does this mean that least squares is NOT ok?
>
> Yes, LS is _NOT_ OK because it assumes that the distribution (with its
> parameters) is the same for all errors. I don't remember exactly, but this
> may be due to ergodicity

Well, let's put things in perspective. You can still use ordinary least-squares. Theoretically, this means you're making the assumption that the error mean and variance are fixed and constant. In your case, this is not true and you can consider the LS solution like an approximation. What will happen under this approximation is that large errors on Cy will tend to dominate the residuals, and values in Ay will probably not be fitted optimally. I advise you try it anyway and visually check whether you care about that or not.

2) What does "rescaling" mean in this context?
> You must change B and C so that :
> Ay +/- 5
> B'y +/- 5
> C'y +/- 5

Or maximize the likelihood of a multivariate normal distribution, whose covariance matrix describes your assumption about the heteroscedasticity of the residuals.

\Sigma = | \sigma_A^2  0           0          |
         | 0           \sigma_B^2  0          |
         | 0           0           \sigma_C^2 |

Heteroscedastic likelihood = -n/2 \ln(2\pi) - 1/2 \sum \ln(\sigma_i^2) - 1/2 \sum \sigma_i^{-2} (y_{obs} - y_{sim})^2

You might also consider the possibility that your errors are multiplicative rather than additive. In this case, describing the residuals by a lognormal distribution could make more sense. Maximize the lognormal likelihood: L = lognormal(y_sim | ln(y_obs), \sigma)
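As an illustrative sketch of the advice above (my code, not David's), here is a pure-Python weighted least-squares fit of a straight line, where each point is weighted by 1/\sigma_i^2. For fixed, known \sigma_i this is exactly the maximizer of the heteroscedastic normal likelihood written in the email:

```python
def wls_line(x, y, sigma):
    """Fit y = a*x + b, weighting each point by w_i = 1 / sigma_i**2.

    Solves the weighted normal equations in closed form; with fixed
    per-point standard deviations this maximizes the heteroscedastic
    normal likelihood."""
    w = [1.0 / s ** 2 for s in sigma]
    Sw = sum(w)
    Swx = sum(wi * xi for wi, xi in zip(w, x))
    Swy = sum(wi * yi for wi, yi in zip(w, y))
    Swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    denom = Sw * Swxx - Swx ** 2
    a = (Sw * Swxy - Swx * Swy) / denom
    b = (Swxx * Swy - Swx * Swxy) / denom
    return a, b

# Five precise points and five noisy ones around the line y = 2x + 1,
# mimicking the "Ay precise, Cy noisy" situation from the thread.
x = list(range(10))
errs = [0.01, -0.01, 0.01, -0.01, 0.0, 0.8, -0.6, 0.9, -0.7, 0.5]
y = [2.0 * xi + 1.0 + e for xi, e in zip(x, errs)]
sigma = [0.02] * 5 + [0.8] * 5
a, b = wls_line(x, y, sigma)
print(a, b)  # close to 2 and 1; the noisy half barely influences the fit
```

Unlike ordinary least squares, the large errors on the second half of the data no longer dominate the residuals, which is the failure mode described above.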
Addison, IL Prealgebra Tutor
Find an Addison, IL Prealgebra Tutor

...I currently (at various times throughout the academic year) teach courses in the introduction to finance using M.S. Excel, investment theory, financial institutions and markets, and corporate finance. Many students with bachelor degrees (some with PhDs) have taken my courses.
13 Subjects: including prealgebra, geometry, statistics, finance

...I've helped students push past the 30 mark, or just bring up one part of their score to push up their overall score. In the past 5 years, I've written proprietary guides on ACT strategy for local companies. These guides have been used to improve scores all over the midwest.
24 Subjects: including prealgebra, calculus, physics, geometry

...Thank you for considering me! I have excelled in math classes throughout my school career. I have a bachelor's in Mathematics, so I have used algebra for many years. I have a knowledge of tips and tricks to simplify elementary algebra concepts.
5 Subjects: including prealgebra, statistics, algebra 1, probability

...Also, I help students develop efficient study skills. I offer tutoring in Reading, Writing, ESL, Math, Spanish and Science, first to twelfth grade. Also, I would love to help students with learning disabilities.
28 Subjects: including prealgebra, English, reading, ESL/ESOL

...I'm waiting for the state to approve my license. I am licensed to substitute teach in any school in Illinois, K-12. I did my student teaching in a middle school with 6th, 7th and 8th grades.
16 Subjects: including prealgebra, physics, calculus, algebra 1
Palmer Township, PA Science Tutor
Find a Palmer Township, PA Science Tutor

...I took one year of an upper-level probability and statistics course while at Kutztown University and received A's in both semesters. I took two semesters of upper-level Probability and Statistics at Kutztown University, receiving an A in both. I took a one semester course in Differential Equations at Kutztown University in which I received an A.
16 Subjects: including physics, physical science, chemistry, calculus

...College chemistry instructor and tutor with excellent math skills to assist students in algebra 1 and 2 plus geometry and trigonometry. Current experience in tutoring GED math preparation. Excellent math skills in algebra 1 and 2, geometry, and trigonometry from an experienced college chemistry instructor and tutor.
36 Subjects: including biochemistry, pharmacology, Praxis, algebra 1

...I have used it extensively. I have also taught it at the high school level. Algebra 2 is an extension of Algebra I.
11 Subjects: including physics, statistics, geometry, ACT Math

I am a tutor with over five years of experience teaching math, science, and humanities at the secondary level. I have worked with students from all backgrounds; I have also worked extensively with children with disabilities. I hold a BA in Anthropology from Florida Atlantic University and am currently a graduate student in the Anthropology Department.
55 Subjects: including astronomy, biology, ecology, physical science

...I graduated in May of 2013 from Ramapo College of New Jersey. I am certified to teach elementary school and am a test away from being certified to teach middle school math. I currently work in a school as an instructional aide in Kindergarten and 2nd grade.
31 Subjects: including biology, nutrition, grammar, geometry
Florence Newberger - CSULB Department of Mathematics and Statistics

Florence Newberger
Department of Mathematics and Statistics, California State University, Long Beach

Office: FO3, Room 218
Mail: Mathematics Department, California State University, Long Beach, Long Beach, CA 90840-1001
Phone: Home (562) 421-9966; Office (562) 985-5675
Email: fnewberg @ csulb.edu

Service

I serve as the Department of Mathematics and Statistics Service Course Coordinator. In this role, I oversee various 100-level courses.

• Calculus for Biology (Math 119A and B). Currently Math 119A and B use the text Calculus with Applications for the Life Sciences, by Raymond Greenwell, Nathan Ritchey, and Margaret Lial. The first semester (Math 119A) includes three writing assignments in which students write about calculus problems, their solutions, and their implications. Instructors administer these assignments through Calibrated Peer Review, browser-based software available for free from UCLA. Following the BIO 2010 recommendations, I am considering changing the text for the second semester of the course to reflect more accurately the mathematics now found in biology and the health sciences.

• Calculus for Business (Math 115). Currently Math 115 uses the text Calculus for Business, Economics, and the Social and Life Sciences, Brief Edition, 10/e (Tenth Edition), by Hoffmann and Bradley. We take advantage of online homework (MathZone) provided by the publisher, and an online preparation-for-calculus program from ALEKS.com. The course runs for three hours per week in large lecture, and one hour per week in a 35-student activity session. Find more information on the CSULB Calculus for Business website.

• Modeling with Algebra (Math 109). Following the recommendations of CRAFTY, in 2006 CSULB divided its college algebra courses into one for students continuing to calculus and one for students requiring general education. Modeling with Algebra (Math 109) is the course for students majoring in subjects that do not require further study in mathematics. I designed this course using the book Explorations in College Algebra, by Linda Almgren Kime, Judy Clark, and Beverly K. Michael. In Fall 2010, we changed to College Algebra: Concepts and Contexts, by James Stewart, Lothar Redlin, Saleem Watson, and Phyllis Panman, for which I wrote the study guide. The course includes writing and Microsoft Excel assignments, a poster project, and online homework. I have set it up and passed it off to Dr. Xuhui Li; I look forward to seeing how it evolves.

I am one of the campus coordinators for Calibrated Peer Review (CPR), browser-based software that facilitates peer-reviewed writing assignments. For an overview, go to http://cpr.molsci.ucla.edu/ and click on Tour. If you are CSULB faculty and would like more information about CPR, please contact me. I wrote the document About Authoring for CPR to help faculty develop their own successful CPR assignments.

Teaching

My current courses, Fall 2010:
Math 115: Calculus for Business
Math 562A: Complex Analysis I

Master's Thesis Students

The following students have completed Master's theses under my direction.

2008 Blake Rector. Thesis title: Characterization of Solenoidal Groups
2006 Jeremy Jankans. Thesis title: Invariant Quaternion Algebras and Kleinian Groups
2005 Merrick Sterling. Thesis title: Geometric Coding of Geodesics on Surfaces of Constant Negative Curvature
2004 Alison Williams. Thesis title: Equivalent Statements of Property (T)
2004 Jason Karcher. Thesis title: Algebraic Actions and the Borel Density Theorem (CSULB College of Natural Science Outstanding Thesis Award winner)

Research and Scholarly Activity

My Ph.D. research lies in differential geometry and dynamical systems. More specifically, I study systems such as geodesic flows and isometric group actions, in which the geometry and the dynamics closely intertwine. Recently, through my work in college algebra, I have become interested in how the different representations of mathematics (such as graphs, equations, and the contexts they model) can be used to communicate and describe aspects of science and economics. I find modeling, and the mathematics that naturally arises in biology, particularly interesting. I hope my research record will soon reflect this new interest.

Study Guide for College Algebra: Concepts and Contexts, by James Stewart, Lothar Redlin, Saleem Watson, and Phyllis Panman, Cengage, 2010

Counterexamples to minimal entropy rigidity in the Finsler category normalized by symplectic volume, joint with B. Colbois and P. Verovic, Annals of Global Analysis and Geometry, October 2008

A Multiplier Theorem for Fourier Series in Several Variables, with Nakhle Akbar and Saleem Watson, Colloq. Math. 106 (2006), 221-230

Recounting the Odds of an Even Derangement, with Arthur T. Benjamin and Curtis D. Bennett, Mathematics Magazine, December 2005, 387-390

Patterson-Sullivan measure for geometrically finite groups with parabolic elements acting on real rank 1 symmetric spaces, Geometriae Dedicata 97 (2003), 215-249

Minimal entropy rigidity for Finsler manifolds of negative flag curvature, joint with J. Boland, Ergodic Theory and Dynamical Systems 21, no. 1 (2001), 13-23

I am a member of the Mathematical Association of America, and the webmaster for the Southern California-Nevada Section of the Mathematical Association of America. With my business partner Gwen Fisher, I sell patterns for beadwork, made by sewing small seed beads together using a needle and thread. Our business is called beAd Infinitum.
Prime detector

Hello everyone, I'm new to C++. I'm currently trying to make a program that finds the first prime above 1 billion, but I can't make it work. It just prints all numbers from 1 billion to infinity. This is my code:

    #include <iostream>
    #include <math.h>
    using namespace std;

    int prime(int n);

    int main()
    {
        int i;
        for (i = 100000000;; i++)
        {
            if (prime(i))
                cout << "Dette er er det første primtal over 1 milliard: " << i << endl;
                break;
        }
        return 0;
    }

    int prime(int n)
    {
        int i;
        double sqrt_of_n = sqrt(n);
        for (i = 2; i <= sqrt_of_n; i++)
        {
            if (n % i == 0)
                return false;
        }
        return true;
    }

(The Danish string means "This is the first prime above 1 billion.") I hope you can help me!

Hmm ... I'd expect this code to print nothing at all, actually. You're missing braces for the if, and the break is therefore always executed.

All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law

    if (prime(i))
    {
        cout << "Dette er er det første primtal over 1 milliard: " << i << endl;
        break;
    }

This is the minimum. Though I still don't know why it would print all numbers.

I'm sorry, the code I posted before wasn't the one that printed all numbers; that was an earlier version of the program. As you said, this program did absolutely nothing. Thanks for the help, both of you. I'll see if it works now.

It will work... but like CornedBee said, you may not end up with exactly what you want. That is not how the algorithm you are trying to incorporate even works. You have merged two ways of determining primality into one half-assed one.

>> You have merged two ways of determining primality into one half-assed one.
Slightly harsh wording, don't you think? If it won't do what I want it to, why? And what is wrong with how I find a prime? I'd like criticism; if my mistakes aren't corrected, I won't get better at C++. Would you please continue to help me by explaining?

    double sqrt_of_n = sqrt(n);

I'd recognize this line of code anywhere. C++ Without Fear!!!

Harsh, and wrong. That method will work. He is just checking all the numbers up to the square root, and if there is no remainder after division, the number is obviously composite. If it makes it through every number, then it is obviously prime. There are far more highly optimized methods in use, but for finding only one prime number, that one probably executes in a few ms, so it's fine.

Until you can build a working general-purpose reprogrammable computer out of basic components from Radio Shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.

If I am not mistaken, does this algorithm not also produce pseudoprimes? *shrugs* I don't care one way or another.

Nope, no pseudoprimes. Only true primes.

All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law