content
stringlengths
86
994k
meta
stringlengths
288
619
Need help setting up feval October 23rd 2012, 09:21 AM #1 Jun 2012 Need help setting up feval Hey guys, I'm trying to use feval to evaluate a very complicated trig function (as in, the fourth derivative of f(x) = log10(tan x)) at a particular point, x = 1.09. I got the fourth derivative no problem, but I'm new at Matlab, and I cannot figure out how to evaluate the function at the given x = 1.09. Can someone show me SPECIFICALLY how to do this? I am lost. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/math-software/205941-need-help-setting-up-feval.html","timestamp":"2014-04-20T07:04:17Z","content_type":null,"content_length":"28419","record_id":"<urn:uuid:bd459ffd-0383-491f-8586-062bc6c13742>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
The Best Constant of Sobolev Inequality Corresponding to Clamped Boundary Value Problem 1. Introduction The fact that To obtain the supremum of Concerning the uniqueness and existence of the solution to , we have the following proposition. The result is expressed by the monomial Proposition 1.1. For any bounded continuous function has a unique classical solution where Green's function With the aid of Proposition 1.1, we obtain the following theorem. The proof of Proposition 1.1 is shown in Appendices A and B. Theorem 1.2. (i) The supremum Clearly, Theorem 1.2(i), (ii) is rewritten equivalently as follows. Corollary 1.3. Next, we introduce a connection between the best constant of Sobolev- and Lyapunov-type inequalities. Let us consider the second-order differential equation where conjugate. It is wellknown that if there exists a pair of conjugate points in holds, where 1], Yang [2], and references there in. Among these extensions, Levin [3] and Das and Vatsala [4] extended the result for higher order equation For this case, we again call two distinct points [2]conjugate if there exists a nontrivial We point out that the constant which appears in the generalized Lyapunov inequality by Levin [3] and Das and Vatsala [4] is the reverse of the Sobolev best embedding constant. Corollary 1.4. If there exists a pair of conjugate points on Without introducing auxiliary equation 2, 4], we can prove this corollary directly through the Sobolev inequality (the idea of the proof origins to Brown and Hinton [5, page 5]). Proof of Corollary 1.4. In the second inequality, the equality holds for the function which attains the Sobolev best constant, so especially it is not a constant function. Thus, for this function, the first inequality is strict, and hence we obtain we obtain the result. Here, at the end of this section, we would like to mention some remarks about (1.12). The generalized Lyapunov inequality of the form (1.14) was firstly obtained by Levin [3] without proof; see Section 4 of Reid [6]. Later, Das and Vatsala [4] obtained the same inequality (1.14) by constructing Green's function for . The expression of the Green's function of Proposition 1.1 is different from that of [4]. The expression of [4, Theorem 2.1] is given by some finite series of 7–9], where the concrete expressions of Green's functions for the equation 2. Reproducing Kernel First we enumerate the properties of Green's function . Lemma 2.1. Consider the following: Note that subtracting the where we used the fact Using Lemma 2.1, we prove that the functional space Lemma 2.2. For any For functions Integrating this with respect to Using (1), (2), and (4) in Lemma 2.1, we have (2.9). 3. Sobolev Inequality In this section, we give a proof of Theorem 1.2 and Corollary 1.3. Proof of Theorem 1.2 and Corollary 1.3. Applying Schwarz inequality to (2.9), we have Note that the last equality holds from (2.9); that is, substituting (2.9), holds (this will be proved in the next section). From definition of Combining this and trivial inequality Hence, we have which completes the proof of Theorem 1.2 and Corollary 1.3. Thus, all we have to do is to prove (3.2). 4. Diagonal Value of Green's Function In this section, we consider the diagonal value of Green's function, that is, Thus, we can expect that Proposition 4.1. To prove this proposition, we prepare the following two lemmas. Lemma 4.2. Lemma 4.3. Proof of Proposition 4.1. From Lemmas 4.2 and 4.3, Inserting (4.9) into (4.8), we have Proposition 4.1. Proof of Lemma 4.2. then differentiating At first, for The first term vanishes because The third term also vanishes because Thus, we have Hence, we have by which we obtain (4.5). Next, for The first term vanishes because Thus, we obtain that is, This completes the proof of Lemma 4.2. Proof of Lemma 4.3. Thus, we have (4.5). If This proves Lemma 4.3. A. Deduction of (1.5) In this section, (1.5) in Proposition 1.1 is deduced. Suppose that has a classical solution is rewritten as Let the fundamental solution or equivalently, for Employing the boundary conditions (A.2), we have In particular, if On the other hand, using the boundary conditions (A.2) again, we have Solving the above linear system of equations with respect to Substituting (A.9) into (A.7), we have Taking an average of the above two expressions and noting Using properties Applying the relation B. Deduction of (1.6) To prove (1.6), we show On the other hand, let where we used which is the same equation as (B.2). Hence, we have 1. Ha, C-W: Eigenvalues of a Sturm-Liouville problem and inequalities of Lyapunov type. Proceedings of the American Mathematical Society. 126(12), 3507–3511 (1998). Publisher Full Text 2. Yang, X: On inequalities of Lyapunov type. Applied Mathematics and Computation. 134(2-3), 293–300 (2003). Publisher Full Text 3. Levin, AJ: Distribution of the zeros of solutions of a linear differential equation. Soviet Mathematics. 5, 818–821 (1964) 4. Das, KM, Vatsala, AS: Green's function for n-n boundary value problem and an analogue of Hartman's result. Journal of Mathematical Analysis and Applications. 51(3), 670–677 (1975). Publisher Full 5. Brown, RC, Hinton, DB: Lyapunov inequalities and their applications. In: Rassias TM (ed.) Survey on Classical Inequalities, Math. Appl., vol. 517, pp. 1–25. Kluwer Academic Publishers, Dordrecht, The Netherlands (2000) 6. Reid, WT: A generalized Liapunov inequality. Journal of Differential Equations. 13, 182–196 (1973). Publisher Full Text 7. Kametaka, Y, Yamagishi, H, Watanabe, K, Nagai, A, Takemura, K: Riemann zeta function, Bernoulli polynomials and the best constant of Sobolev inequality. Scientiae Mathematicae Japonicae. 65(3), 333–359 (2007) 8. Nagai, A, Takemura, K, Kametaka, Y, Watanabe, K, Yamagishi, H: Green function for boundary value problem of 2M-th order linear ordinary differential equations with free boundary condition. Far East Journal of Applied Mathematics. 26(3), 393–406 (2007) Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2011/1/875057/","timestamp":"2014-04-17T21:39:19Z","content_type":null,"content_length":"98991","record_id":"<urn:uuid:55c968a2-7e15-4a6f-a8b0-a450e9d533b3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 14 - In ICML , 2006 "... Scientists need new tools to explore and browse large collections of scholarly literature. Thanks to organizations such as JSTOR, which scan and index the original bound archives of many journals, modern scientists can search digital libraries spanning hundreds of years. A scientist, suddenly ..." Cited by 392 (22 self) Add to MetaCart Scientists need new tools to explore and browse large collections of scholarly literature. Thanks to organizations such as JSTOR, which scan and index the original bound archives of many journals, modern scientists can search digital libraries spanning hundreds of years. A scientist, suddenly - Journal of Machine Learning Research , 2006 "... Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observa ..." Cited by 36 (2 self) Add to MetaCart Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. "... We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, that is, discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM model ..." Cited by 27 (1 self) Add to MetaCart We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, that is, discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news. 1. Introduction. Network data - IEEE SIGNAL PROCESSING MAG , 2006 "... Distributed inference methods developed for graphical models comprise a principled approach for data fusion in sensor networks. The application of these methods, however, requires some care due to a number of issues that are particular to sensor networks. Chief of among these are the distributed na ..." Cited by 10 (0 self) Add to MetaCart Distributed inference methods developed for graphical models comprise a principled approach for data fusion in sensor networks. The application of these methods, however, requires some care due to a number of issues that are particular to sensor networks. Chief of among these are the distributed nature of computation and deployment coupled with communications bandwidth and energy constraints typical of many sensor networks. Additionally, information sharing in a sensor network necessarily involves approximation. Traditional measures of distortion are not sufficient to characterize the quality of approximation as they do not address in an explicit manner the resulting impact on inference which is at the core of many data fusion problems. While both graphical models and a distributed sensor network have network structures associated with them, the mapping is not one to one. All of these issues complicate the mapping of a particular inference problem to a given sensor network structure. Indeed, there may be a variety of mappings with very different characteristics with regard to computational complexity and utilization of resources. Nevertheless, it is the case that many of the powerful distributed inference methods have a role in information fusion for sensor networks. In this article we present an overview of research conducted by the authors that has - In Int. Conf. Acoustic, Speech and Sig. Proc , 2007 "... Abstract—Markov random fields are designed to represent structured dependencies among large collections of random variables, and are well-suited to capture the structure of real-world signals. Many fundamental tasks in signal processing (e.g., smoothing, denoising, segmentation etc.) require efficie ..." Cited by 8 (3 self) Add to MetaCart Abstract—Markov random fields are designed to represent structured dependencies among large collections of random variables, and are well-suited to capture the structure of real-world signals. Many fundamental tasks in signal processing (e.g., smoothing, denoising, segmentation etc.) require efficient methods for computing (approximate) marginal probabilities over subsets of nodes in the graph. The marginalization problem, though solvable in linear time for graphs without cycles, is computationally intractable for general graphs with cycles. This intractability motivates the use of approximate “message-passing ” algorithms. This paper studies the convergence and stability properties of the family of reweighted sum-product algorithms, a generalization of the widely used sum-product or belief propagation algorithm, in which messages are adjusted with graph-dependent weights. For pairwise Markov random fields, we derive various conditions that are sufficient to ensure convergence, and also provide bounds on the geometric convergence rates. When specialized to the ordinary sum-product algorithm, these results provide strengthening of previous analyses. We prove that some of our conditions are necessary and sufficient for subclasses of homogeneous models, but not for general models. The experimental simulations on various classes of graphs validate our theoretical results. Index Terms—Approximate marginalization, belief propagation, convergence analysis, graphical models, Markov random fields, sum-product algorithm. I. - Technometrics "... www.stat.berkeley.edu/users/binyu) This article examines the role of statistics in the age of information technology (IT). It begins by examining the current state of IT and of the cyberinfrastructure initiative aimed at integrating the technologies into science, engineering, and education to conver ..." Cited by 6 (0 self) Add to MetaCart www.stat.berkeley.edu/users/binyu) This article examines the role of statistics in the age of information technology (IT). It begins by examining the current state of IT and of the cyberinfrastructure initiative aimed at integrating the technologies into science, engineering, and education to convert massive amounts of data into useful information. Selected applications from science and text processing are introduced to provide concrete examples of massive data sets and the statistical challenges that they pose. The thriving field of machine learning is reviewed as an example of current achievements driven by computations and IT. Ongoing challenges that we face in the IT revolution are also highlighted. The paper concludes that for the healthy future of our field, computer technologies have to be integrated into statistics, and statistical thinking in turn must be integrated into computer technologies. 1. "... Popular methods for probabilistic topic modeling like the Latent Dirichlet Allocation (LDA, [1]) and Correlated Topic Models (CTM, [2]) share an important property, i.e., using a common set of topics to model all the data. This property can be too restrictive for modeling complex data entries where ..." Cited by 4 (0 self) Add to MetaCart Popular methods for probabilistic topic modeling like the Latent Dirichlet Allocation (LDA, [1]) and Correlated Topic Models (CTM, [2]) share an important property, i.e., using a common set of topics to model all the data. This property can be too restrictive for modeling complex data entries where multiple fields of heterogeneous data jointly provide rich information about each object or event. We propose a new extension of the CTM method to enable modeling with multi-field topics in a global graphical structure, and a mean-field variational algorithm to allow joint learning of multinomial topic models from discrete data and Gaussianstyle topic models for real-valued data. We conducted experiments with both simulated and real data, and observed that the multi-field CTM outperforms a conventional CTM in both likelihood maximization and perplexity reduction. A deeper analysis on the simulated data reveals that the superior performance is the result of successful discovery of the mapping among field-specific topics and observed data. 1 - In Advances in Neural Information Processing Systems , 2005 "... Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. ..." Cited by 2 (0 self) Add to MetaCart Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working in the computation-limited setting, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the “wrong ” model even in the infinite data limit) is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of variational methods. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product. 1 "... Belief Propagation (BP) can be very useful and efficient for performing approximate inference on graphs. But when the graph is very highly connected with strong conflicting interactions, BP tends to fail to converge. Generalized Belief Propagation (GBP) provides more accurate solutions on such graph ..." Cited by 2 (0 self) Add to MetaCart Belief Propagation (BP) can be very useful and efficient for performing approximate inference on graphs. But when the graph is very highly connected with strong conflicting interactions, BP tends to fail to converge. Generalized Belief Propagation (GBP) provides more accurate solutions on such graphs, by approximating Kikuchi free energies, but the clusters required for the Kikuchi approximations are hard to generate. We propose a new algorithmic way of generating such clusters from a graph without exponentially increasing the size of the graph during triangulation. In order to perform the statistical region labeling, we introduce the use of superpixels for the nodes of the graph, as it is a more natural representation of an image than the pixel grid. This results in a smaller but much more highly interconnected graph where BP consistently fails. We demonstrate how our version of the GBP algorithm outperforms BP on synthetic and natural images and in both cases, GBP converges after only a few iterations. 1. , 2008 "... The focus of this thesis is approximate inference in Gaussian graphical models. A graphical model is a family of probability distributions in which the structure of interactions among the random variables is captured by a graph. Graphical models have become a powerful tool to describe complex high-d ..." Cited by 1 (0 self) Add to MetaCart The focus of this thesis is approximate inference in Gaussian graphical models. A graphical model is a family of probability distributions in which the structure of interactions among the random variables is captured by a graph. Graphical models have become a powerful tool to describe complex high-dimensional systems specified through local interactions. While such models are extremely rich and can represent a diverse range of phenomena, inference in general graphical models is a hard problem. In this thesis we study Gaussian graphical models, in which the joint distribution of all the random variables is Gaussian, and the graphical structure is exposed in the inverse of the covariance matrix. Such models are commonly used in a variety of fields, including remote sensing, computer vision, biology and sensor networks. Inference in Gaussian models reduces to matrix inversion, but for very large-scale models and for models requiring distributed inference, matrix inversion is not feasible. We first study a representation of inference in Gaussian graphical models in terms of computing sums of weights of walks in the graph – where means, variances and correlations can be represented as such walk-sums. This representation holds in a wide class
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=771689","timestamp":"2014-04-19T22:51:05Z","content_type":null,"content_length":"39859","record_id":"<urn:uuid:2fb0ccca-5311-4de2-97a0-8d83505ebb4f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: College Discrete Math This page: discrete math Dr. Math See also the Internet Library: discrete math linear algebra modern algebra Discrete Math conic sections/ coordinate plane Logic/Set Theory Number Theory Browse College Discrete Math Stars indicate particularly interesting answers or good places to begin browsing. Given n bins and m (indistinguishable) balls, how many arrangements are possible such that no bin has greater than r balls? Can you state briefly the "official" Euclidean Algorithm? How can I prove that any two infinite subsets of the natural numbers can be put in a 1-1 correspondence? How would I show for m greater than or equal to 3 that s(m, m-2) = (1/ 24)m(m-1)(m-2)(3m-1), where s(m,n) are Stirling numbers of the first kind? Solve: P(2n,3) = 2P(n,4) If part(n,k) is the number of ways to partition a set of n elements into k subsets, what is Part(5,2)? Prove Part(n+1,k) = Part(n,k-1)+k* Part(n,k)... If have a particular board setup, how would I prove that a solution exists or does not? Given unlimited amounts of 2 types of beads, how many unique necklaces can you make using exactly k beads? How to express (1 2 3 5 7)(2 4 7 6) as the product of disjoint cycles. How many positive integers less than 100 can be written as the product of the first power of two different primes? Can a program be written in BASIC to compute the number of prime numbers smaller than n? Prove n^2 mod 5 = 1 or 4 when n is an integer not divisible by 5. I have found that there are 2^(n-1) ways to partition an integer (where order matters and all positive integers are available), but need a proof for this seemingly simple formula. Suppose a map is drawn using only lines that extend to infinity in both directions; are two colors sufficient to color the countries so that no pair of countries with a common border have the same color? Can you prove that 1 + 1 = 2? How can I prove that the game Brussels Sprouts always ends in 5n - 2 moves? For any graph G which is not connected, how do I prove that its complement must be connected? Can you explain what P and NP problems are at a level that a high school student can understand? A monkey is in one of 17 rooms built side by side. Each night the monkey moves one room right or left. Each day you can look in any two rooms. What's the optimal search pattern to find the Consider seating n dinner guests at k tables of l (lower case L) settings each, therefore n = k l , for m courses so that no guest shares a table more than once with any other guest. Equivalently, consider n players to be divided into k teams of l players for m rounds of a contest. No player may, more than once, be on the same team as any other player. 1. What is the maximum value of m, as a function of k and l? 2. How could one systematically specify the seating arrangement for the m courses? Thoughts on solving a system of modular equations such as: (1919ab) mod 5107 = 1 1919(a+1)(b-1) mod 5108 = 5047 1919(a+2)(b-2) mod 5109 = 1148 1919(a+3)(b-3) mod 5110 = 3631 1919(a+4)(b-4) mod 5111 = 2280 Find seven unique positive integers such that the sum of their reciprocals is 1. In computer programming, I have a result that contains several values, always a power of 2 (2^2, 2^3, 2^4). If my value is 2^3, 2^4, 2^6 304, how can I tell if 2^3 exists in 304? Is there an easy solution to the "Traveling Salesman Problem"? Prove: Assume that all points in the real plane are colored white or black at random. No matter how the plane is colored (even all white or all black) there is always at least one triangle whose vertices and center of gravity (all 4 points) are of the SAME color. Prove that on a chessboard with dimensions (2k+1) by (2k+1), covered by dominoes except in some of the corners, we can uncover any square with odd coordinates by sliding the dominoes around. Connect three houses with each of three utilities without crossing any lines. Is this possible? Page: [<prev] 1 2
{"url":"http://mathforum.org/library/drmath/sets/college_discrete.html?s_keyid=40092663&f_keyid=40092664&start_at=41&num_to_see=40","timestamp":"2014-04-17T02:41:43Z","content_type":null,"content_length":"17684","record_id":"<urn:uuid:bf55faec-e527-4e2b-aabf-7a3598a6fc48>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
#! /usr/bin/env python2.2 # distcc/benchmark -- automated system for testing distcc correctness # and performance on various source trees. # Copyright (C) 2003 by Martin Pool # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of the # License, or (at your option) any later version. # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA # Based in part on http://starship.python.net/crew/jhauser/NumAdd.py.html by Janko Hauser import Numeric def var(m): Variance of m. if len(m) < 2: return None mu = Numeric.average(m) return (Numeric.add.reduce(Numeric.power(Numeric.ravel(m)-mu, 2)) / (len(m)-1.)) def std(m): Standard deviation of m. v = var(m) return v and Numeric.sqrt(v) def mean(m): return Numeric.average(m)
{"url":"http://opensource.apple.com/source/distcc/distcc-1607/distcc_dist/bench/statistics.py","timestamp":"2014-04-17T20:12:59Z","content_type":null,"content_length":"4048","record_id":"<urn:uuid:32531abe-8cdd-4653-97d2-b58a9360fbd5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
50/50/90 rule There are two basic 50/50/90 rules. One of them refers to probability, and the other one to time management. The first one states that in any decision that a human makes which has a 50/50 chance of being correct, that the human will pick wrong 90 percent of the time. This one is often heard of when referring to multiple choice tests, but is also sometimes bandied as a taunt to the loser of a simple coin toss. The other 50/50/90 rule states that the first half, or fifty percent, of a project will take fifty percent of the time allowed; and the other half of the project will take the remaining ninety percent of the time. This apparent contradiction demonstrates how many projects tend to take far more time than originally intended.
{"url":"http://everything2.com/title/50%252F50%252F90+rule","timestamp":"2014-04-20T15:11:05Z","content_type":null,"content_length":"18932","record_id":"<urn:uuid:99de7f5b-300f-4a58-8093-e0a6cafcabf7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Cyclotomic extension of p-adic fields up vote 2 down vote favorite It is well-known what the $p^n$-cyclotomic extensions (i.e., adjoining $p^n$-th roots of unity) of $\mathbb{Q}_p$ are (see Serre, Local fields for instance). However, assume now that $K/\mathbb{Q}_p$ is an arbitrary finite extension. What can now be said about the $p^n$-cyclotomic extensions of $K$? It is clear that this is a harder problem and I haven't been able to find any literature on this but I'm sure that it's out there. At least some cases (for instance, the case $K/\mathbb{Q}_p$ unramified should be similar to the "classical case" I think). Edit: I should have specified what I mean by "What can be said...?". What I'm interested in in particular is the ramification groups and the jumps in the ramification filtration. 1 What do you mean "what can be said"? If you add the $p^n$-th roots of $1$ for all $n$, (and not just the $p$-th roots as in your question) you get an extension whose Galois group is an open subgroup of $Z_p^\times$, and whose residue field is a finite extension of $F_p$. These extensions are used a lot in $p$-adic Hodge theory, so I suggest you look at papers in that area, eg Fontaine's "Arithmétique des représentations galoisiennes $p$-adiques". There's a lot of info about the ramification of that extension, for example. – Laurent Berger Jul 8 '13 at 7:55 @Laurent: I have edited the question to specify what I mean. I knew that there was a lot of information i $p$-adic Hodge theory concerning this but but my impression is that this is mainly concerning the infinite cyclotomic tower, not its finite constituent. I haven't seen that paper by Fontaine, but I'll have a look. – Daniel Larsson Jul 8 '13 at 10:00 Typical that that Fontaine's paper is in Asterisque which I don't have access to. Do you have any other reference? – Daniel Larsson Jul 8 '13 at 10:04 1 @Larsson: "Typical"? :-( See math.u-psud.fr/~biblio/pub/2000/abs/ppo2000_24.html for the preprint version of Fontaine's paper, and see mathoverflow.net/questions/96257/… concerning the availability of Asterisque. – Laurent Berger Jul 9 '13 at 7:08 @Laurent: By "typical" I meant "Just my luck that I don't have access to..." For some idiotic reason it didn't occur to me that there was a preprint version. Thank you! – Daniel Larsson Jul 9 '13 at 8:59 show 1 more comment 1 Answer active oldest votes Here is an easy example of a $K$ such that $K(\zeta_{p^2})$ is the unramified extension of $K$ of degree $p$. Start with $F=\mathbf{Q}_p(\zeta_p)$, and consider the $\mathbf{F}_p$-space $F^\times/F^{\times p}$ lines in which correspond to degree-$p$ cyclic extensions of $F$. There are two special lines, the one (call it $C$) generated by the image of $\zeta_p$, and the one (call it $U$) such that $F(\root p\of U)$ is the unramified degree-$p$ extension of $F$. [One can say precisely which line $U$ is, but never mind.] These two lines are distinct because $F(\root p\of C)=\mathbf{Q}_p(\zeta_{p^2})$ is totally ramified over $\mathbf{Q}_p$ (as you know), whereas $F(\root p\of U)$ is not, by the definition up vote 2 of $U$. down vote So the plane $CU$ contains at least one more line $D$ (distinct from $C$ and $U$), and the extension $K=F(\root p\of D)$ is a ramified degree-$p$ extension of $F$. I claim that $K(\zeta_{p^2})$ is the unramified extension of $K$ of degree $p$, as you can easily verify by computing its ramification index and residual degree over $F$. The special case $p=2$ gives the classic example : $K(\sqrt{-1})$ is the unramified quadratic extension of $K=\mathbf{Q}_2(\sqrt3)$. Hmm, so what you're saying is that not all $p^n$-extensions of $K$ are ramified? Maybe I should have known this. – Daniel Larsson Jul 9 '13 at 9:02 @Daniel: Well, I do not know what you mean by $p^n$-extension, but if mean "extension of degree $p^n$" you might know that for every $p$-adic field, hence also for $\mathbb{Q}_p$, there exists a unique extension of degree $d$ which is unramified, and for all $d\geq 1$ (it is cyclic, moreover). This is in Serre's book. – Filippo Alberto Edoardo Aug 7 '13 at 16:46 @FilippoAlbertoEdoardo: Well, I actually meant $p^n$-cyclotomic extensions. Yup, I knew about the result about unramifiedness (?), but I'm actually interested in wild ramification. – Daniel Larsson Aug 7 '13 at 18:15 1 @Daniel: Well, in that case if $K/\mathbb{Q}_p$ is finite then $K(\zeta_{p^n})/K$ is wildly ramified for $n\gg 0$ but as Dalawat wrote it might very well be possible that for small $n$ the extension is unramified. – Filippo Alberto Edoardo Aug 8 '13 at 1:46 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/136052/cyclotomic-extension-of-p-adic-fields","timestamp":"2014-04-19T11:59:40Z","content_type":null,"content_length":"64033","record_id":"<urn:uuid:99aafd35-bec5-4909-8c38-b42b3903959c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Sets, Data Models and Data Independence March 10, 2010 David Childs authored pioneering research about implementing set operations that enabled a programmer to approach the data storage and retrieval problem from a logical model, rather than a physical model of the data. A thorough examination of databases and data management will include many flavors of data and information models, including conceptual, logical, physical, mathematical, and application models. Database technology is constantly evolving, with new approaches and refinements to existing platforms. The choice of a data access solution depends in part on the underlying data model; whether a data store operates with sets, graphs or other types of data.Data management technology has undergone evolutionary development since the 1950s. The modern database management system (DBMS) represents mature, but not static, technology. Besides the emergence of new approaches to data persistence, there are continued refinements to mature DBMS platforms. Today's data stores implement a variety of data models, including graphs, sets, collections, arrays, cubes and other variants, including the hierarchical data model, relational model, network data model (CODASYL), trees, nested sets, adjacency lists and object stores. The index sequential access method (ISAM), key-value data stores and record management systems have also been implemented in various forms for decades. The concept of a data store that supported set operations such as union, intersection, domain and range emerged in the 1960s, based of course on Georg Cantor's set theory published in the 19th century. In 1968, D.L. Childs, then at the University of Michigan, wrote seminal papers about set-theoretic data structures that provided data independence, meaning an application did not have to know the physical structure of the data. During that era of first-generation databases (see CODASYL 1968 Survey of Data Base Systems), data access typically required the use of pointers and descriptions of physical data structures. Childs authored pioneering research about implementing set operations that enabled a programmer to approach the data storage and retrieval problem from a logical view, rather than a physical view of the data. His March 1968 paper, "Description of A Set-Theoretic Data Structure", explained that programmers can query data using set-theoretic expressions instead of navigating through fixed "A set-theoretic data structure (STDS) is virtually a 'floating' or pointer-free structure allowing quicker access, less storage, and greater flexibility than fixed or rigid structures that rely heavily on internal pointers or hash-coding, such as 'associative or relational structures,' 'list structures,' 'ring structures,' etc. An STDS relies on set-theoretic operations to do the work usually allocated to internal pointers. A question in an STDS will be a set-theoretic expression. Each set in an STDS is completely independent of every other set, allowing modification of any set without perturbation of the rest of the structure; while fixed structures resist creation, destruction, or changes in data. An STDS is essentially a meta-structure, allowing a question to 'dictate' the structure or data-flow. A question establishes which sets are to be accessed and which operations are to be performed within and between these sets. In an STDS there are as many 'structures' as there are combinations of set-theoretic operations; and the addition, deletion, or change of data has no effect on set-theoretic operations, hence no effect on the 'dictated structures.' Thus in a floating structure like an STDS the question directs the structure, instead of being subservient to it." In August 1968, Childs published "Feasibility of a Set-Theoretic Data Structure. A General Structure Based on a Reconstituted Set-Theoretic Definition for Relations". "This paper is motivated by an assumption that many problems dealing with arbitrarily related data can be expedited on a digital computer by a storage structure which allows rapid execution of operations within and between sets of datum names. In order for such a structure to be feasible, two problems must be considered: (1) the structure should be general enough that the sets involved may be unrestricted, thus allowing sets of sets of sets...; sets of ordered pairs, ordered triples...; sets of variable length n-tuples, n-tuples of arbitrary sets; etc.; (2) the set-operations should be general in nature, allowing any of the usual set theory operations between sets as described above, with the assurance that these operations will be executed rapidly. A sufficient condition for the latter is the existence of a well-ordering relation on the union of the participating sets. These problems are resolved in this paper with the introduction of the concept of a 'complex' which has an additional feature of allowing a natural extension of properties of binary relations to properties of general relations." The Federal government, including the Defense Advanced Research Projects Agency (DARPA), frequently funded computer science research and development during that era. One such effort was the University of Michigan's Research in the Conversational Use of Computers (CONCOMP) project for which Childs did his work on set-theoretic data structures. During that era, DARPA also funded development of packet-switched network technology and the ARPAnet, the forerunner of today's Internet. Childs' CONCOMP papers were available only to 'qualified requesters' although Childs presented the August 1968 paper at that year's Congress of the International Federation for Information Processing (IFIP). Those 1968 papers did not receive the broad dissemination of research papers published today via the Internet. Nonetheless Dr. Edgar F. Codd, who'd gotten his PhD at the University of Michigan, cited Childs' paper on set-theoretic data structures in his June 1970 paper about the relational model. Many persons who had not discovered Childs' papers erroneously believed the foundation of data independence and set-theoretic operations over data had been laid by Codd. Following Codd's 1970 paper on the relational model, other database researchers published papers that discussed the concept of data independence. In 1971, Chris Date and Paul Hopewell authored "Storage Structures and Physical Data Independence" for the ACM Workshop on Data Definition, Access and Control. The authors wrote about data independence being integral to the relational model: "Such data independence was explicitly called out as one of the major objectives of the relational model by Ted Codd in 1970 in his famous paper "A Relational Model of Data for Large Shared Data Banks" (Communications of the ACM 13, No. 6, June 1970)." Dr. Michael Stonebreaker's 1974 paper, "A Functional View of Data Independence", cited Codd's 1970 paper and Date's 1971 paper, but not Childs' papers in 1968. Similarly I've found other publications that credit the notion of data independence or physical data independence to Codd and Date, without referring to Childs' papers. During the 1990s, the advent of object databases and object-oriented programming frequently surfaced topics related to the relational model and data independence in articles, conference presentations and online discussions. Prominent defenders of relational fidelity included Chris Date, David McGoveran, Hugh Darwen and Fabian Pascal. In debates about the relational model (then and now), data independence and relational algebra are often cited as key factors that differentiate Codds' relational model from less formal approaches. Relational algebra includes the group of set-theoretic operations that provide mathematical underpinnings to the relational model. With the emergence of the Internet, Childs' papers are now widely-available to researchers. We now know Childs' pioneered concepts of data independence and set-theoretic operations over data. The works of Georg Cantor and D.L. Childs provided groundwork that enabled Dr. Edgar F. Codd to develop the relational model. Several years ago I had an e-mail exchange about this with Don Chamberlin, the co-inventor of SQL who worked with the late Dr. Codd at IBM. He acknowledged Childs' contribution: i<>"Thanks for the reminder of David Childs' work. As you have observed, modern relational databases owe a lot to Childs and he deserves recognition for this early and pioneering work." Since his pioneering work on set-theoretic data structures, David Childs has published papers about extended set processing, XML processing and other subjects that will be a topic for the future. Part 2: "Laying the Foundation" It will be interesting to watch whether set-store data access architectures become high fashion for processing large data sets. Part 3: "Information Density, Mathematical Identity, Set Stores and Big Data" David Childs authored pioneering research about implementing set operations that enabled a programmer to approach the data storage and retrieval problem from a logical model, rather than a physical model of the data.
{"url":"http://www.drdobbs.com/database/sets-data-models-and-data-independence/228700616","timestamp":"2014-04-18T08:38:46Z","content_type":null,"content_length":"100373","record_id":"<urn:uuid:66b80d4d-e554-4c33-9491-3a496e1dd8dc>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Chelmsford Trigonometry Tutor Find a Chelmsford Trigonometry Tutor ...Thanks.My fascination with math really began when I started studying calculus. For more then a decade I have been using calculus to solve a wide variety of complex problems. As a math phd student I have particularly excelled in the field of analysis which is largely just a more rigorous and abstract formulation of traditional calculus concepts. 14 Subjects: including trigonometry, calculus, geometry, GRE ...I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams. 36 Subjects: including trigonometry, English, reading, calculus I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. 14 Subjects: including trigonometry, statistics, geometry, algebra 1 I'm a very experienced and patient Math Tutor with a wide math background and a Ph.D. in Math from West Virginia University. I teach high school through college students and can teach in person or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not doing that! 14 Subjects: including trigonometry, calculus, geometry, GRE ...I taught them reading, writing, spelling, phonics, grammar, and vocabulary as well as K-6 math. I also taught all high school subjects. My two oldest, a junior and a senior in high school, are currently enrolled full-time at MassBay Community College. 25 Subjects: including trigonometry, English, reading, calculus
{"url":"http://www.purplemath.com/Chelmsford_trigonometry_tutors.php","timestamp":"2014-04-19T19:50:26Z","content_type":null,"content_length":"24266","record_id":"<urn:uuid:a5ab699c-988c-4c2f-bd80-0ffd22d23bc4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply The following recursive algorithm will do it without any table lookup, and could be done by hand without having to do anything but multiply, subtract, and divide It should be pretty fast, since dividing by 3 on each step will drop the argument to arbitrarily small size in logarithmic time. Of course, don't compute sin(x/3) twice in every iteration! Here's a proof that it works:
{"url":"http://www.mathisfunforum.com/post.php?tid=12973&qid=122778","timestamp":"2014-04-21T09:52:05Z","content_type":null,"content_length":"26360","record_id":"<urn:uuid:2ebd7213-ef5b-4162-8370-482ee8025140>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: RE: proportion as a dependent variable [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: RE: proportion as a dependent variable From "Joao Pedro W. de Azevedo" <jazevedo@provide.com.br> To <statalist@hsphsun2.harvard.edu> Subject st: RE: RE: proportion as a dependent variable Date Mon, 14 Jul 2003 11:45:01 +0100 I've seen papers in which the authors used a tobit model censored at 0 and 1 to model a proportion. Would this be an acceptable approach? -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Nick Cox Sent: Monday, July 14, 2003 1:35 PM To: statalist@hsphsun2.harvard.edu Subject: st: RE: proportion as a dependent variable Ronnie Babigumira > I was attending a workshop in which one of the presenters > had a regression > in which a dependent variable was a proportion. One of the > participants > noted that it was wrong but didnt follow it up with a clear > explanation. Presumably the argument was that, given predictor x, a linear form a + bx must predict response values outside [0,1] for some x, so that at least in principle the functional form cannot be appropriate. In practice, if response were (say) proportion female and x were time, then the time at which the proportion passed outside the interval might be far outside the range of the data, but there are plenty of exceptions. This is most commonly mentioned, at least in my reading, as a simple argument why a + bx is likely to be a poor form for predicting responses which are either 0 or 1, an argument which usually leads to a case for logit or probit models. But the argument seems almost as strong for proportions. And -- historically -- logit as a transformation for continuous responses preceded logit as (in modern terms) a link function for binary responses. (The terminology of logit is more recent than its use.) Generalised linear models offer a nice approach to this question using e.g. logit link and some sensible family. There is a FAQ with further comments at How does one estimate a model when the dependent variable is a proportion? http://www.stata.com/support/faqs/stat/logit.html * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2003-07/msg00294.html","timestamp":"2014-04-19T10:27:13Z","content_type":null,"content_length":"7676","record_id":"<urn:uuid:c00f0371-4f37-4fe9-af88-48af4d945efb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating population means in covariance stationary process Halkos, George and Kevork, Ilias (2006): Estimating population means in covariance stationary process. Download (239Kb) | Preview In simple random sampling, the basic assumption at the stage of estimating the standard error of the sample mean and constructing the corresponding confidence interval for the population mean is that the observations in the sample must be independent. In a number of cases, however, the validity of this assumption is under question, and as examples we mention the cases of generating dependent quantities in Jackknife estimation, or the evolution through time of a social quantitative indicator in longitudinal studies. For the case of covariance stationary processes, in this paper we explore the consequences of estimating the standard error of the sample mean using however the classical way based on the independence assumption. As criteria we use the degree of bias in estimating the standard error, and the actual confidence level attained by the confidence interval, that is, the actual probability the interval to contain the true mean. These two criteria are computed analytically under different sample sizes in the stationary ARMA(1,1) process, which can generate different forms of autocorrelation structure between observations at different lags. Item Type: MPRA Paper Original Estimating population means in covariance stationary process Language: English Keywords: Jackknife estimation; ARMA; Longitudinal data; Actual confidence level Subjects: C - Mathematical and Quantitative Methods > C5 - Econometric Modeling C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C15 - Statistical Simulation Methods: General Item ID: 31843 Depositing Nickolaos Tzeremes Date 26. Jun 2011 10:18 Last 23. Feb 2013 14:49 Adam, N.R., 1983. Achieving a confidence interval for parameters estimated by simulation. Management Science, Vol.29, pp. 856-866. Conway, R.W., 1963. Some tactical problems in digital simulation. Management Science, Vol. 10, pp. 47-61. Crane, M.A., and D.L. Iglehart, 1974a. Simulating stable stochastic systems, I: General multi-server queues. Journal of the Association for Computing Machinery, Vol. 21, pp.103-113. Crane, M.A., and D.L. Iglehart, 1974b. Simulating stable stochastic systems, II: Markov chains. Journal of the Association for Computing Machinery, Vol. 21, pp.114-123. Crane, M.A., and D.L. Iglehart, 1974c Simulating stable stochastic systems, III: Regenerative processes and discrete event simulations. Operations Research, Vol. 23, pp.33-45. Crane, M.A., and D.L. Iglehart, 1975. Simulating stable stochastic systems, IV: Approximation techniques. Management Science, Vol. 21, pp.1215-1224. Ducket, S.D., and A.A.B. Pritsker, 1978. Examination of simulation output using spectral methods. Mathematical Computing Simulation, Vol. 20, pp. 53-60. Efron, B., 1979. Bootstrap methods: Another look at the jackknife. The Annals of Statistics 7, 1-26 Efron, B. and Tibshirani, R., 1993. An Introduction to the Bootstrap. Chapman & Hall, New York. Fishman, G.S., 1971. Estimating the sample size in computing simulation experiments. Management Science, Vol. 18, pp. 21-38. Fishman, G.S., 1973a. Statistical analysis for queuing simulations. Management Science, Vol. 20, pp. 363-369. Fishman, G.S., 1973b. Concepts and methods in discrete event digital simulation. John Wiley and Sons, New York. Fishman, G.S., 1977. Achieving specific accuracy in simulation output analysis. Communication of the Association for computing Machinery, Vol. 20, pp. 310-315. Fishman, G., 1978. Principles of Discrete Event Simulation. Wiley, New York. Fishman, G., 1999. Monte Carlo: Concepts, Algorithms, and Applications. Springer, New York. Gafarian, A.V., Ancker, C.J., JR, and T. Morisaku, 1978. Evaluation of commonly used rules for detecting steady state in computer simulation. Naval Research Logistics Quarterly, Vol. 25, pp. 511-529. Gordon, G., 1969. System simulation. Prentice-Hall, Englewood Cliffs N.j. Hall, P., Horowitz, J. and Jing, B.-Y., 1995. On blocking rules for the bootstrap with dependent data. Biometrika 82, 561-574. Heidelberger, P., and P.D. Welch, 1981a. A spectral method for confidence interval generation and run length control in simulations. Communications of the Association for Computing Machinery, Vol. 24, pp. 233-245. Heidelberger, P., and P.D. Welch, 1981b. Adaptive spectral methods for simulation output analysis. IBM Journal of Research and Development, Vol. 25, pp. 860-876. Heidelberger, P., and P.D. Welch, 1983. Simulation run length control in the presence of an initial transient. Operations Research, Vol. 31, pp. 1109-1144. Kevork, I.S, 1990. Confidence Interval Methods for Discrete Event Computer Simulation: Theoretical Properties and Practical Recommendations. Unpublished Ph.D. Thesis, University of London, London References: Kim, Y., Haddock, J. and Willemain, T., 1993a. The binary bootstrap: Inference with autocorrelated binary data. Communications in Statistics: Simulation and Computation 22, 205-216. Kim, Y., Willemain, T., Haddock, J. and Runger, G., 1993b. The threshold bootstrap: A new approach to simulation output analysis. In: Evans, G.W., Mollaghasemi, M., Russell, E.C., Biles, W.E. (Eds.), Proceedings: 1993 Winter Simulation Conference, pp. 498-502. Kelton, D.W. and A.M. Law, 1983. A new approach for dealing with the startup problem in discrete event simulation. Naval Research Logistics Quarterly, Vol. 30, pp. 6410658. Künsch, H., 1989. The jackknife and the bootstrap for general stationary observations. The Annals of Statistics 17, 1217-1241. Lavenberg, S., S., and C. H. Sauer, 1977. Sequential stopping rules for the regenerative method of simulation. IBM Journal of Research and Development, Vol. 21, pp. 545-558. Law, A.M., 1983. Statistical analysis of simulation output data. Operations Research, Vol. 31, pp. 983-1029. Law, A.M., and J.S. Carson, 1978. A sequential procedure for determining the length of a steady state simulation. Operation Research, Vol. 27, pp. 1011-1025. Law, A.M., and W.D. Kelton, 1982a. Confidence interval for steady state simulations: II. A survey of sequential procedures. Management Science, Vol. 28, pp. 560-562. Law, A.M., and W.D. Kelton, 1982b. Simulation modelling and analysis. McGraw Hill, New York. Law, A.M., and W.D. Kelton, 1984. Confidence intervals for steady state simulations: I. A survey of fixed sample size procedures. Operation Research, Vol. 32, pp. 1221-1239. Law, A. and Kelton, W., 1991. Simulation Modeling and Analysis, second ed. McGraw-Hill, New York. Liu, R. and Singh, K., 1992. Moving blocks jackknife and bootstrap capture weak dependence. In: Le Page, R., Billard, L., (Eds.), Exploring the Limits of Bootstrap. Wiley, New York, Mechanic, H., and W. McKay, 1966. Confidence intervals for averages of dependent data in simulation II. Technical report 17-202 IBM, Advanced Systems Development Division. Park, D. and Willemain, T., 1999. The threshold bootstrap and threshold jackknife. Computational Statistics and Data Analysis 31, 187-202. Park, D.S., Kim, Y.B., Shin, K.I. and Willemain, T.R., 2001. Simulation output Analysis using the threshold bootstrap, European Journal of Operational Research 134, 17-28. Quenouille, M., 1949. Approximation tests of correlation in time series. Journal of Royal Statistical Society Series B 11, 68-84. Schriber, T.J., 1974. Simulation using GPSS. John Wiley and Sons, New York. Sargent, R.G., Kang, K. and Goldsman, D., 1992. An investigation of finite-sample behavior of confidence interval estimators. Operation Research 40, 898-913. Schruben, L., 1983. Confidence interval estimation using standardized time series. Operations Research 31, 1090-1108. Song, W.T., 1996. On the estimation of optimal batch sizes in the analysis of simulation output. European Journal of Operational Research 88, 304-319. Song, W.T. and Schmeiser, B.W., 1995. Optimal mean-squared-error batch sizes. Management Science 41, 111-123. Tukey, J., 1958. Bias and confidence interval in not quite large samples (Abstract). The Annals of Mathematical Statistics 29, 614. Voss, P., Haddock, J. and Willemain, T., 1996. Estimating steady state mean from short transient simulations. In: Charnes, J.M., Morrice, D.M., Brunner, D.T. Welch, P.D., 1987. On the relationship between batch means, overlapping batch means and spectral estimation. Proceedings of the 1987 Winter Simulation Conference, pp. 320-323. URI: http://mpra.ub.uni-muenchen.de/id/eprint/31843
{"url":"http://mpra.ub.uni-muenchen.de/31843/","timestamp":"2014-04-20T06:02:35Z","content_type":null,"content_length":"32487","record_id":"<urn:uuid:aea751f1-1f8a-4887-b5b1-ed9875046fff>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Network Flows and Monotropic Optimization Results 1 - 10 of 73 - DISCRETE APPLIED MATHEMATICS , 2002 "... The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small size instances ..." Cited by 108 (11 self) Add to MetaCart The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small size instances (typically, n < 25). Neighborhood search algorithms are the most popular heuristic algorithms to solve larger size instances of the QAP. The most extensively used neighborhood structure for the QAP is the 2-exchange neighborhood. This neighborhood is obtained by swapping the locations of two facilities and thus has size O(n²). Previous efforts to explore larger size neighborhoods (such as 3-exchange or 4-exchange neighborhoods) were not very successful, as it took too long to evaluate the larger set of neighbors. In this paper, we propose very largescale neighborhood (VLSN) search algorithms where the size of the neighborhood is very large and we propose a novel search procedure to heuristically enumerate good neighbors. Our search procedure relies on the concept of improvement graph which allows us to evaluate neighbors much faster than the existing methods. We present extensive computational results of our algorithms on standard benchmark instances. These investigations reveal that very large-scale neighborhood search algorithms give consistently better solutions compared the popular 2-exchange neighborhood algorithms considering both the solution time and solution accuracy. , 2003 "... We consider the problem of approximating sliding window joins over data streams in a data stream processing system with limited resources. In our model, we deal with resource constraints by shedding load in the form of dropping tuples from the data streams. We first discuss alternate architectural m ..." Cited by 97 (2 self) Add to MetaCart We consider the problem of approximating sliding window joins over data streams in a data stream processing system with limited resources. In our model, we deal with resource constraints by shedding load in the form of dropping tuples from the data streams. We first discuss alternate architectural models for data stream join processing, and we survey suitable measures for the quality of an approximation of a set-valued query result. We then consider the number of generated result tuples as the quality measure, and we give optimal offline and fast online algorithms for it. In a thorough experimental study with synthetic and real data we show the efficacy of our solutions. For applications with demand for exact results we introduce a new Archive-metric which captures the amount of work needed to complete the join in case the streams are archived for later processing. - Math. Programming , 2000 "... this paper is to describe the f#eA damental results on M- and L-convex f#24L2A+ with special emphasis on algorithmic aspects. ..." , 1993 "... Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions ..." Cited by 89 (7 self) Add to MetaCart Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject. - in Video: Data, Metrics, and Protocol”, to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008. 4. Conclusion In this paper, an "... Abstract—Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objecti ..." Cited by 31 (3 self) Add to MetaCart Abstract—Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come. - Math. Programming , 1997 "... this paper occur at the tactical level. Strategic planning focuses on resource acquisition for the period from five to fifteen years ahead. Network planning problems may be viewed as the main strategic issues, but, in order to evaluate possible strategic alternatives, the subsequent stages including ..." Cited by 29 (6 self) Add to MetaCart this paper occur at the tactical level. Strategic planning focuses on resource acquisition for the period from five to fifteen years ahead. Network planning problems may be viewed as the main strategic issues, but, in order to evaluate possible strategic alternatives, the subsequent stages including at least line planning and train schedule generation have to be considered. The disadvantages of the hierarchical planning are obvious, since the optimal output of a subtask which serves as the input of a subsequent task, will not result, in general, in an overall optimal - MANAGEMENT SCIENCE , 1999 "... In this paper, we consider an integer convex optimization problem where the objective function is the sum of separable convex functions (that is, of the form (i,j)Q ij ij F(w)+ iP ii B( ) ), the constraints are similar to those arising in the dual of a minimum cost flow problem (that is, of the f ..." Cited by 29 (5 self) Add to MetaCart In this paper, we consider an integer convex optimization problem where the objective function is the sum of separable convex functions (that is, of the form (i,j)Q ij ij F(w)+ iP ii B( ) ), the constraints are similar to those arising in the dual of a minimum cost flow problem (that is, of the form i - j w ij , (i, j) Q), with lower and upper bounds on variables. Let n = |P|, m = |Q|, and U be the largest magnitude in the lower and upper bounds of variables. We call this problem the convex cost integer dual network flow problem. In this paper, we describe several applications of the convex cost integer dual network flow problem arising in dial-a-ride transit problems, inverse spanning tree problem, project management, and regression analysis. We develop network flow based algorithms to solve the convex cost integer dual network flow problem. We show that using the Lagrangian relaxation technique, the convex cost integer dual network flow problem can be transformed to a convex cost primal network flow problem where each cost function is a piecewise linear convex function with integer slopes. Its special structure allows the convex cost primal network flow problem to be solved in O(nm log n log(nU)) time using a cost-scaling algorithm, which is the best available time bound to solve the convex cost integer dual network flow problem. - Data Mining and Knowledge Discovery , 1996 "... Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. ..." Cited by 26 (3 self) Add to MetaCart Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained m... - Journal of Economic Theory "... The main contribution of this paper is to provide a framework in which the notion of farsighted stability for games, introduced by Chwe (1994), can be applied to directed networks. Then, using Chwe's basic result on the nonemptiness of farsightedly stable sets for games, we show that for any give ..." Cited by 24 (3 self) Add to MetaCart The main contribution of this paper is to provide a framework in which the notion of farsighted stability for games, introduced by Chwe (1994), can be applied to directed networks. Then, using Chwe's basic result on the nonemptiness of farsightedly stable sets for games, we show that for any given collection of directed networks and any given collection of rules governing network formation, there exists a farsightedly stable directed network. - SIAM J. Comput , 1997 "... We consider the problem of minimizing a separable convex objective function over the linear space given by a system Mx = 0 with M a totally unimodular matrix. In particular, this generalizes the usual minimum linear cost circulation and cocirculation problems in a network and the problems of determi ..." Cited by 22 (4 self) Add to MetaCart We consider the problem of minimizing a separable convex objective function over the linear space given by a system Mx = 0 with M a totally unimodular matrix. In particular, this generalizes the usual minimum linear cost circulation and cocirculation problems in a network and the problems of determining the Euclidean distance from a point to the perfect bipartite matching polytope and the feasible flows polyhedron. We first show that the idea of minimum mean cycle canceling originally worked out for linear cost circulations by Goldberg and Tarjan [J. Assoc. Comput. Mach., 36 (1989), pp. 873--886.] and extended to some other problems [T. R. Ervolina and S. T. McCormick, Discrete Appl. Math, 46 (1993), pp. 133--165], [A. Frank and A. V. Karzanov, Technical Report RR 895-M, Laboratoire ARTEMIS IMAG, Universite Joseph Fourier, Grenoble, France, 1992], [T. Ibaraki, A. V. Karzanov, ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=781927","timestamp":"2014-04-21T11:00:46Z","content_type":null,"content_length":"40588","record_id":"<urn:uuid:3a12fae5-f86f-4b8e-8cbb-22f619ba662e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] matrix multiply Alan G Isaac aisaac@american.... Sun Apr 6 21:51:51 CDT 2008 On Sun, 6 Apr 2008, Charles R Harris apparently wrote: > I prefer the modern usage myself as it is closer to the > accepted logic operations, but applying algebraic > manipulations like powers and matrix inverses in that > context leads to strange results. I have not really thought much about inverses, but nonnegative integer powers have a natural interpretation in graph theory (with the boolean algebra operations, not the boolean ring operations). This is exactly what I was requesting be preserved. Alan Isaac More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/032479.html","timestamp":"2014-04-18T00:20:49Z","content_type":null,"content_length":"3113","record_id":"<urn:uuid:6da0b4f8-c638-487d-85ef-320a2dbc6ff5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Completing the square: • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. Is there anything more to this question? Best Response You've already chosen the best response. \[y=x ^{2}-2x-3\] Best Response You've already chosen the best response. Best Response You've already chosen the best response. Ah ok. Sometimes it's easier to work with completing a square by first bringing the constant term to the other side. Start with y + 3 = x^2 - 2x and we can go from there. Best Response You've already chosen the best response. Best Response You've already chosen the best response. We added 3 to both sides, so the left side becomes y + 3. This step is not necessary, but it makes completing the square clearer. Best Response You've already chosen the best response. You should subtract 2/2 on the right Best Response You've already chosen the best response. I dont udnerstand... Best Response You've already chosen the best response. We started with this equation: \[y = x^{2} - 2x - 3\] Then, adding 3 to both sides (this is an optional step): \[y + 3 = x^{2} - 2x\] Take the coefficient of the x term, divide it by 2, then square it. That would be (2/2)^2 = 1. So we add 1 to both sides. \[y + 3 + 1 = x^{2} - 2x + 1\] \[y + 4 = x^{2} - 2x + 1\] Notice how the right side factors into a square now: \[y + 4 = (x - 1)^ {2}\] Now we can subtract 4 from both sides to get a final solution: \[y = (x - 1)^{2} - 4\] Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ee049fe4b07cd2b6497e15","timestamp":"2014-04-16T04:15:58Z","content_type":null,"content_length":"49464","record_id":"<urn:uuid:879cb620-a097-4261-a04a-6ea1f6245888>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Review for Grade 7 CMP Review by Mathematically Correct Seventh Grade CMP booklets Please note, most of the units detour around before coming to central ideas which are in the last couple chapters. 1. Variables and Patterns: Introducing Algebra The main goal of the Introducing Algebra unit is to teach the student the linear equation y = ax. This is mainly taught near the end of the unit. To get there, the students are led through detours of three chapters of learning about non-linear graphs such as those of hunger and happiness. (Please note that Okemos schools teaches students to read graphs starting in Kindergarten). In the case of my daughter, she was weak in understanding and writing linear equations at the beginning of this unit. At the end, she was still not proficient in it. In the first year, the teacher also introduced graphing calculators for simple straight line graphs. This practice was abandoned after parents complained. It really bother me that problems used in the book such as temperatures distributions in the day and/or bike tour obscure the simplicity of math about straight line The only positive things I can say about this unit is that at the end (after 4 weeks), my daughters could plot nice graphs. However, she was weak in understanding and writing linear equations at the beginning of this unit. At the end, she was still not proficient in it. Being a trained experimentalist, I may be one parent who wrote things down. There are many parents and students who hated these type of open questions of bike tour or suggesting a scenario for the graphs which show the sale of popcorn as a function of time. Some kids are so bored by these type of writings that they refuse to take these problems home because they do not want to do them. Others like my daughters would get frustrated and cry. 2. Moving Straight Ahead : Linear Relationships Similar complaints as above. Additionally, the homework assignments consists of many irrelevant word problems containing nonlinear graphs of happiness and hunger. While most parents share the concern that students should learn to solve word problems instead of just crunching numbers, CMP provides some of the worst word problems under the pretense of offering "real life problems." In a recent seventh grade homework assignment, students were asked to do several problems similar to the following one (Moving Straight Ahead, p. 29): The 1996 Olympic gold medal winner for the 20 kilometer walk was Jefferson Perez from Ecuador. His time was 1 hour, 20 minutes, 7 seconds. Perez's time was not good enough to beat the olympic record set in 1988 by Josef Pribilinec from Czechoslovakia. Pribilinec's record for the 20 kilometer was 1 hour, 19 minutes, 57 seconds. What was the walking rate of each person? My daughter dutifully punched the numbers into her calculator and wrote down the answers as: Jefferson Perez : 0.00416 km/s Josef Pribilinec : 0.00417 km/s When I asked my daughter what she was supposed to learn from this exercise, she looked at me with a blank expression. When I asked her why she chose the unit of kilometers/second instead of meters/ second or meters/hour, she gave me the standard CMP reply that her teacher said there is no absolute correct answer in math homework. The book asked for rate, so she calculated a rate. She then continued to do similar problems that evening by punching more numbers into her calculator. No new light was shed the next morning when her teacher graded her homework. If the problem meant that the student should calculate speed in certain terms, it should have said so clearly instead of using the fuzzy term "rate." Most 7th graders have a pretty good concept of speed (Just ask any parent who has been caught speeding by their child who reads speed limit signs). However, 0.00416 km/s means nothing to most 7th graders, and even to some parents. They cannot relate that to their daily experience. If the problem meant to compare the rate of winners, the original description of how long it took a winner to walk 20 kilometers is the most sensible description. Can you imagine a sportscaster announcing that the winner Jefferson Perez's walking rate was 0.00416 km/s, compared to the Olympic record of 0.00417 km/s achieved by Josef Pribilinec? Word problems that do not relate to the context of real life will train only low-skilled employees who cannot function without cash registers. Unfortunately, this type of problem is prolific throughout the CMP booklets. 3. Accentuate the Negative: Integers My daughter cried her ways through drawing chips (red and black). That is the unit that convinced her to get out of Kinawa math to go to CHAMP -- about the most positive thing I can say regarding 4. Stretching and Shrinking : Similarity Stretching and Shrinking teaches scaling factors and similarity in geometry. The main goal of this unit is to teach students to understand geometric similarity. More importantly, for a curriculum that stresses real life problems, the unit advertises teaching students applications, e.g.. using the properties of similar triangles to determine the height of objects. Unfortunately, after all the detours of drawing wumps and Rep-tiles, the students never mastered the shadow method of determining object height. Let me use some real life examples to illustrate the points I want to make. One night, my daughter spent nearly three hours drawing four "wumps." I have enclosed a wimp for your reference. Instead of drawing all those wumps, most of the points in the lesson could have been taught by drawing a rectangle, using much less time. Another example is the Rep-tile exercise. In the October parent math meeting, the parents were asked to find "a way to divide each shape (copy attached for your entertainment) into four congruent, smaller shapes that are similar to the original shape." When I asked what the students were supposed to learn from this, the question derailed the lesson. One teacher said that the objective was to teach the concept of reduction, which obviously was wrong. The exercise was to create "Rep-tiles," a term I could not find in any college or high school geometry books. I asked several Math and Physics professors, and none had heard the term nor could they see the reason for teaching this. Can you explain to me why precious class time is spent teaching this or drawing wumps instead of teaching students the essence of this unit, i.e. how to use the properties of similar triangles or scale factors to find distance or height? 5. Comparing and Scaling : Ratio, Proportion and Percent It reads like Bits and Pieces II. Thus I don't think much about it in teaching ratios, proportions and percent. Moreover, I will be surprised if most kids at this age will find the population census data interesting. However I do find Chapter 5 about estimating populations of deer in Michigan by sampling to be interesting. On the other hand, I remembered my older child learned similar method to estimate the number of bats in a cave in 6th grade in a non-CMP math class. Still I found the method interesting that I would not mind my 7th grade child repeat it. This is the CMP unit currently used to teach proportional reasoning. The teachers said that the last two chapters will barely be touched upon. For example, the sampling methods used in polling to estimate population which we hear everyday in the media will not be taught. Instead, my daughter and I went through a torturous explanation about the number of visitor hours spent in the Federal Recreational Park Service. 6. Data around us: Number Sense I don't know how to make out of the book. The only thing I can say is that both of my kids hate working with large numbers. I would say a couple problems to illustrate the points will be all they can take. They are definitely not interested in census data. However, for other kids, this may be good and is good PR to parents to hear that the kids are working with real data. 7. Filling and Wrapping: Three Dimensional Measurement I believe this units can be taught with formula in a week to calculate surface area and volume. I think it is good that they illustrate the surface area and volume by folding cardboard. However, I have strong reservations about calculating the surface area of a cylinder by counting squares on a grid paper though. It is much easier to explain to the kid about area of the circles and multiply that by the length to get volume. I believe the following unit was left out by mistake. I notified Lee Gerard already. 8. What Do you Expect : Probability and Expected values (not mentioned) There is one unit about Probability which I believe the teachers left out inadvertently, (I hope). I do not know how much experiments they do with coins and dice. Tossing coins 10 times is all my kids can take. I think analyzing one stage and two stage games especially if this is accompanied by software that illustrate the points may be fine. However putting out a page of nonsense pictograms on Pg. 71 is ridiculous. If they just want the kids to guess, it is not necessary to go to such extreme waste of paper.
{"url":"http://www.nscl.msu.edu/~tsang/CMP/cmp_7.htm","timestamp":"2014-04-20T23:30:49Z","content_type":null,"content_length":"10422","record_id":"<urn:uuid:8f16ec1c-fa3b-4184-acce-6e9a79720bd9>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Ernst Sigismund Fischer Born: 12 July 1875 in Vienna, Austria Died: 14 November 1954 in Cologne, Germany Click the picture above to see a larger version Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Ernst Fischer's father was Jacob Fischer who was a composer of music and Professor at the world famous Vienna Academy. His mother was Emma Grädener, the daughter of the musician Karl Grädener. Ernst was educated in Vienna, and he studied at the University of Vienna under Mertens from 1894. His doctoral studies were supervised by Gegenbauer and he was awarded his doctorate by the University of Vienna in 1899. He spent 1899 at the University of Berlin, then studied at Zurich and Göttingen with Minkowski. From 1902 he was assistant to E Waelsch at the German Technische Hochschule of Brünn (now Brno), becoming a privatdozent there in 1904, then an extraordinary professor in 1910. From 1911 until 1920, Fischer was professor at the University of Erlangen, appointed to fill the chair left vacant in the previous year when Paul Gordan retired. Emmy Noether had been awarded her doctorate from the University of Erlangen in 1907 having worked under Gordan's supervision. When Fischer arrived in Erlangen it was natural for Noether to work with him. After Noether's death in 1935, Weyl gave an address at which he spoke of Fischer's influence:- Fischer's field was algebra ..., in particular the theory of elimination and of invariants. He exerted upon Emmy Noether, I believe, a more penetrating influence than Gordan did. Under his direction the transition from Gordan's formal standpoint to the Hilbert method of approach was accomplished. She refers in her papers at this time again and again to conversations with Fischer. Fischer is best known for one of the highpoints of the theory of Lebesgue integration, called the Riesz-Fischer Theorem. The theorem is that the space of all square-integrable functions is complete, in the sense that Hilbert space is complete, and the two spaces are isomorphic by means of a mapping based on a complete orthonormal system. Fischer took part in World War I from 1915 to 1918. He married Ellis Strauss, the daughter of Pfarrers Eugen Strauss, in Erlangen in 1917. Fischer was 42 years old, his wife being 26; they had one daughter. From 1920 Fischer worked at the University of Cologne, remaining there until he retired in 1938. Let us note again the major result, the Riesz-Fischer Theorem, for which he is best known as Weyl noted in the above quote. In 1907 Ernst Fischer studied orthonormal sequences of functions and gave necessary and sufficient conditions for a sequence of constants to be the Fourier coefficients of a square integrable function. His two papers of 1907 were Sur la convergence en moyenne and Applications d'un théorèm sur la convergence en moyenne both published in Comptes rendus of the Academy of Sciences in Paris. This work led to the concept of a Hilbert space. Frigyes Riesz published a similar result in the same year. The theorem, now called the Riesz-Fischer theorem, is one of the great achievements of the Lebesgue theory of integration. Fischer went on to study Hadamard determinants, publishing his results in 1908 in the Archiv der Mathematik und Physik, and Sylvester determinants, publishing a paper in Crelle's Journal in the following year. He also published in the Carathéodory Problem and on finite abelian groups. Article by: J J O'Connor and E F Robertson Click on this link to see a list of the Glossary entries for this page A Reference (One book/article) A Poster of Ernst Fischer Mathematicians born in the same country Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © August 2006 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Fischer.html","timestamp":"2014-04-20T10:58:54Z","content_type":null,"content_length":"14571","record_id":"<urn:uuid:72dfdefb-807c-4bc1-9cd8-a320fa4d5873>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this a counterexample to a conjecture about independent domination in cartesian graph products? up vote 5 down vote favorite VIZING’S CONJECTURE: A SURVEY AND RECENT RESULTS (2009) by Bostjan Bresar , Paul Dorbec , Wayne Goddard , Bert L. Hartnell , Michael A. Henning , Sandi Klavzar , Douglas F. Rall p.25: Conjecture 9.6. For all graphs $G$ and $H$, $$\gamma(G \square H) \ge \min\{i(G)\gamma(H), i(H)\gamma(G)\}$$ where $\square$ is the cartesian product of graphs and $i(G)$ is the independent domination number. For cartesian squares it is $$\gamma(G \square G) \ge \gamma(G) i(G)$$ According to sage and my verification square of the graph on 7 vertices $[0 \ldots 6]$ with edges $$[(0, 4), (0, 5), (0, 6), (1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6)]$$ appears a counterexample to Conjecture 9.6 (note that vertex 3 is disconnected). sage: G=Graph(':Fo@I@I@J') #from sparse6 sage: Gs=G.cartesian_product(G) sage: (Gs.dominating_set(value_only=True),G.dominating_set(value_only=True),G.dominating_set(value_only=True,independent=True) ) (11, 3, 4) Since the order is only $7$, $\gamma(G)=3$ and $i(G)=4$ were verified by enumerating all subsets of the vertices. For $\gamma(G \square G)=11$ the dominating set returned by sage was verified and it is an upper bound for the correct value. I well might have misunderstood the conjecture. Is the above graph a counterexample of Conjecture 9.6? Adding disconnected vertices to $G$ gives more counterexamples. graph-theory counterexamples What is $\gamma(G)$? – Ketil Tveiten Sep 25 '12 at 11:28 @Ketil $\gamma(G)$ is the domination number, p. 1 from the paper: "As usual, γ stands for the domination number" – joro Sep 25 '12 at 11:43 1 maybe the original conjecture requires connected graphs? – Suvrit Sep 25 '12 at 12:29 @Suvrit don't know, just cited the conjecture and it says "for all graphs" – joro Sep 25 '12 at 12:42 @joro: I guess then you have uncovered a weakness in the conjecture! – Suvrit Sep 25 '12 at 13:51 add comment 2 Answers active oldest votes To summarize a bit: • $\gamma(G)$ is defined as the usual domination number of a graph $G$ • $i(G)$ is defined as the smallest cardinality of a dominating set that is also an independent set Your graph $C$ is the disjoint union of $K_{3,3}$ and $K_1.$ Clearly $\gamma(C) = 3$ and $i(C) = 4.$ For disjoint graphs $G,H$ we have $$(G \cup H) \square (G \cup H) = (G \square G) \cup (G \square H) \cup (H \square G) \cup (H \square up vote 4 down vote which gives for $C = K_{3,3} \cup K_1$ $$ C \square C = K_{3,3} \square K_{3,3} \cup K_{3,3} \cup K_{3,3} \cup K_1.$$ And thus $$\gamma(C \square C) = \gamma(K_{3,3} \square K_{3,3}) + 2\gamma(K_{3,3})+\gamma(K_1) = 11.$$ This would indeed imply that $\gamma(C \square C) = 11 < \gamma(C)i(C) = 12.$ Making the conjecture false for disconnected graphs. Edit. In this paper the authors construct an infinite family of graphs that are a counterexample to the claim of conjecture 9.6. The constructed family of graphs is disconnected but they remark it can be made connected. Thank you. What software are you using for computations? Sage's domination computations is slow for 8 X 8 products, haven't finished it yet for order 8. – joro Sep 25 '12 at I am also using sage! – Jernej Sep 25 '12 at 13:45 Thank you Jernej! Will check up tomorrow. I am doing G.dominating_set() which invokes the GLPK solver on sage 5.3 on linux. Even if your machine is several times faster this seems strange to me... – joro Sep 25 '12 at 14:04 Are you also using nauty to generate connected graphs? – Jernej Sep 25 '12 at 14:11 @Jernej, yes I usually use nauty - it is quite faster than sage's for g in graphs(7): – joro Sep 25 '12 at 14:27 show 8 more comments Since commenters asked about a connected counterexample, here is the connected counter example from the paper Jernej mentions. p. 2. A $k$-leaf star is a central vertex with $k$ vertices connected to it. A $2k$-leaf-two star is a graph from two $k$-leaf stars with their central vertices connected by an edge. Define $G_k$ as the disjoint union of $2k^2$-leaf-two star and $k$ $K_2$s. $\gamma(G_k)=k+2$ and $i(G_k)=k^2+k+1$. The disconnected counterexample is $F = G_k \square G_k$ with given dominating set $\gamma(F) \le 12k^2+8k+4$. up vote 0 down vote To make connected counterexample $G_k'$, add root vertex $r$ to $G_k$, connect it to one center of the two star and one vertex of each $K_2$. $\gamma(G_k' \square G_k') \le 16k^2+12k+9$. Because of the constants the smallest disconnected counterexample guaranteed by the construction is on $266$ vertices. add comment Not the answer you're looking for? Browse other questions tagged graph-theory counterexamples or ask your own question.
{"url":"http://mathoverflow.net/questions/108042/is-this-a-counterexample-to-a-conjecture-about-independent-domination-in-cartesi","timestamp":"2014-04-17T04:10:59Z","content_type":null,"content_length":"67856","record_id":"<urn:uuid:1f45cacd-bd62-43d0-81a9-c97a40e38afd>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Title page for ETD etd-09292009-020343 Maximum and minimum temperature for 14 stations in Western Virginia were used to develop a temperature estimation model. Locational data such as the UTM coordinates, elevation, aspect, distance from West Virginia and the distance to the coast and certain transformations of these variables were used as the independent variables. A variable was developed, called the distance-weighting variable, using the inverse distance to each of the 5 closest weather stations. The dependent variables selected were mean monthly maximum, mean monthly minimum, and mean monthly average temperatures. The statistical method used was stepwise regression analysis, with the diagnostic tools of the partial R2, the Cp, and the PRESS statistic being used as deciding factors for choosing a subset of 5 to 6 models to study closely. The Variance Inflation Factor and the Variance Proportion values were used to check for multicollinearity and to choose the final model. The models developed here were compared to those in a study that was done in 1981 (Anderson 1981). Anderson (1981) also developed temperature equations, using data obtained from the same general area, but using a larger data set of approximately 120 weather stations, and using only locational data. An independent data set of years other than those used to develop the model were used to validate the models by estimating the temperatures and comparing these estimates with the actual temperature values, using a paired t-test. A 2-sided t-test was used to compare the actual temperatures with the estimates calculated with Anderson's (1981) models and to compare Anderson's estimates with the estimates calculated from the models developed in this study. The t-test generally showed that this study developed models that fit the data well and seemed to predict well. In two cases where the model developed did not estimate well, reasons for this departure from the normal were discussed and possible solutions proposed. I also explored a new way of describing a temperature zone. A negative exponential equation was developed for the potential absolute maximum temperature estimate for the area of Western Virginia. A BASIC program was developed for managers to derive the temperature estimates, either on a point-by-point basis, or for a file to be entered into a geographic information system.
{"url":"http://scholar.lib.vt.edu/theses/available/etd-09292009-020343/","timestamp":"2014-04-17T06:55:13Z","content_type":null,"content_length":"11409","record_id":"<urn:uuid:ef65e0ef-9dad-4ee6-bdea-2bcbce240f5a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: "A panel of judges must consist of four students and three teachers. A list of potential judges includes six students and five teachers. How many different panels could be created from this list?" How do I solve this? Not looking for the actual answer, just looking for how to solve it. I've spent just about an hour on this question... • one year ago • one year ago Best Response You've already chosen the best response. It would be: the amount of ways you can pick 3 teachers from 5 multiplied by the amount of ways you can pick 4 students from 6. Best Response You've already chosen the best response. For example: If there are 2 red balls and 3 green balls. how many ways can you pick 1 red and 1 green. How many ways can you pick 1 red from 2? .... 2 How many ways can you pick 1 green from 3? ..... 3 2*3=6 ways. |dw:1358504712847:dw| Best Response You've already chosen the best response. Oh my... Best Response You've already chosen the best response. I see. Another question: what are the "!"s next to numbers in a combination for? Best Response You've already chosen the best response. I'm guessing you mean like: \(n!\) Means factorial. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Oh, I get it. Best Response You've already chosen the best response. Right... thanks. Best Response You've already chosen the best response. No problem. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f91f88e4b007c4a2ebf28d","timestamp":"2014-04-20T06:16:26Z","content_type":null,"content_length":"54496","record_id":"<urn:uuid:fa423509-0cc1-424f-a677-2198a81a3d00>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Trying to express a solution as a series April 29th 2010, 09:12 PM #1 Jan 2010 Trying to express a solution as a series Hello everyone! I'm trying to practice on the numerous steps one has to follow to obtain a power series solution so I tried to solve the simple ODE $(E): y''-2y'+y=0$ whos solutions are very well known to be $y_1=e^x$ and $y_2=x\cdot e^x$. Happily, I laid down the buliding blocks substituting in $(E), \ y=\sum_{n=0}^\infty c_n \dot x^n$ and, after reindexing and grouping of terms, got the following recurrence relation: $c_{k+2}=\frac{2(k+1) c_{k+1}-c_k}{(k+2)(k+1)}$. Okay, I tried to continue by assuming once, that $c_0 = 0$ and again $c_1=0$ but I got no pattern at all, I was expecting something familiar - a taylor series expansion at 0. Question: (1) When to consider 2 cases, a null $c_0$ and a real $c_1$ and the opposite. (2) Why didn't I get a pattern? Hello everyone! I'm trying to practice on the numerous steps one has to follow to obtain a power series solution so I tried to solve the simple ODE $(E): y''-2y'+y=0$ whos solutions are very well known to be $y_1=e^x$ and $y_2=x\cdot e^x$. Happily, I laid down the buliding blocks substituting in $(E), \ y=\sum_{n=0}^\infty c_n \dot x^n$ and, after reindexing and grouping of terms, got the following recurrence relation: $c_{k+2}=\frac{2(k+1) c_{k+1}-c_k}{(k+2)(k+1)}$. Okay, I tried to continue by assuming once, that $c_0 = 0$ and again $c_1=0$ but I got no pattern at all, I was expecting something familiar - a taylor series expansion at 0. Question: (1) When to consider 2 cases, a null $c_0$ and a real $c_1$ and the opposite. (2) Why didn't I get a pattern? As for a pattern, you actually do. Let me write out the first few terms <br /> \begin{aligned}<br /> c_2 &= c_1 - \frac{c_0}{2},\\<br /> c_3 &= \frac{c_1}{2} - \frac{c_0}{3},\\<br /> c_4 &= \frac{c_1}{3!} - \frac{c_0}{8},\\<br /> c_5 &= \frac{c_1}{4!} - \frac{c_0} {30}.<br /> \end{aligned}<br /> Setting $c_0 = 0$ and $c_1 = 1$ gives $<br /> x + \frac{x^2}{1} + \frac{x^3}{2!} + \frac{x^4}{3!} + \cdots<br /> <br />$ $<br /> x\left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} \cdots \right)<br />$ (look familar?) Setting $c_0 = 1$ and $c_1 = 0$ gives $<br /> 1 - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{8} - \frac{x^5}{30} - \cdots<br />$ which, our course, doesn't look familar. So, what happened? Remember in this case, your series solution should recover $<br /> (ax+b)e^x<br />$ for suitable choices of $a$ and $b$. For example, try a Taylor series for $(1-x)e^x$ and see what you get. Oh dear God how one earth did you get that, I was searching for something like that April 30th 2010, 06:18 AM #2 April 30th 2010, 09:18 AM #3 Jan 2010
{"url":"http://mathhelpforum.com/differential-equations/142246-trying-express-solution-series.html","timestamp":"2014-04-18T04:57:01Z","content_type":null,"content_length":"43397","record_id":"<urn:uuid:b1bb7253-5bdc-4a95-a388-f2bef886a925>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Combined Space and Time Spectrums March 2nd 2010, 10:58 PM Combined Space and Time Spectrums Hi all. I have a probability problem, beyond my knowledge or brain power! To give some context, I am looking the effect of turbulence on the design of wind turbine blades. The wind turbine control system can cope with low frequency turbulence, but not high frequency. Hence the following problem... Normal Distribution Wind Speed The wind speed distribution due to turbulence for a given mean wind speed can be assumed to be a simple Normal distribution with a standard deviation of 0.2, which is invariant of the mean wind Frequency Spectrum However there is also a frequency spectrum, as some turbulence is low frequency gusts, other are much higher frequency. The normalised frequency spectrum is given by Von Karman: $_{u}$ signifies the longitudinal direction (the one I'm interested in) $L_{2u}$ is a length scale which is (happily) known $n$ is the frequency My question: How can I calculate the probability distribution of the wind speed above a certain frequency? For frequencies of 1HZ and above I want to be able to have the probability distribution for the instantaneous wind velocity, given the mean wind speed. I can't believe its the same as the general Any help would, of course, be gratefully received.
{"url":"http://mathhelpforum.com/advanced-statistics/131800-combined-space-time-spectrums-print.html","timestamp":"2014-04-19T13:03:45Z","content_type":null,"content_length":"5305","record_id":"<urn:uuid:f4645930-de46-489d-9f6f-b337bf7d6947>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparative evaluation of Logan and relative-equilibrium graphical methods for parametric imaging of dynamic [18F]FDDNP PET determinations Logan graphical analysis with cerebellum as reference region has been widely used for the estimation of the distribution volume ratio (DVR) of [^18F]FDDNP as a measure of amyloid burden and tau deposition in human brain because of its simplicity and computational ease. However, spurious parametric DVR images may be produced with shorter scanning times and when the noise level is high. In this work, we have characterized a relative-equilibrium-based (RE) graphical method against the Logan analysis for parametric imaging and region-of-interest (ROI) analysis. Dynamic [^18F]FDDNP PET scans were performed on 9 control subjects and 12 patients diagnosed with Alzheimer’s disease. Using the cerebellum as reference input, regional DVR estimates were derived using both the Logan analysis and the RE plot approach. Effects on DVR estimates obtained at voxel and ROI levels by both graphical approaches using data in different time windows were investigated and compared with the standard values derived using the Logan analysis on a voxel-by-voxel basis for the time window of 35–125 min used in previous studies. Larger bias and variability were observed for DVR estimates obtained by the Logan graphical analysis at the voxel level when short time windows (85–125 and 45–65 min) were used, because of high noise levels in voxel-wise parametric imaging. However, when the Logan graphical analysis was applied at the ROI level over those short time windows, the DVR estimates did not differ significantly from the standard values derived using the Logan analysis on the voxel level for the time window of 35–125 min, and their bias and variability were remarkably lower. Conversely, the RE plot approach was more robust in providing DVR estimates with less bias and variability even when short time windows were used. The DVR estimates obtained at voxel and ROI levels were consistent. No significant differences were observed in DVR estimates obtained by the RE plot approach for all paired comparisons with the standard values. The RE plot approach provides less noisy parametric images and gives consistent and reliable regional DVR estimates at both voxel and ROI levels, indicating that it is preferred over the Logan graphical analysis for analyzing [^18F]FDDNP PET data. Keywords: Alzheimer’s disease (AD), Distribution volume ratio (DVR), [^18F]FDDNP, Graphical methods, Positron emission tomography (PET)
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3851575/?lang=en-ca","timestamp":"2014-04-21T15:42:50Z","content_type":null,"content_length":"168281","record_id":"<urn:uuid:7be404e8-a7c5-4d90-912b-9cb0fda38e0c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Servo Controller HELP! [Archive] - Parallax Forums 03-31-2007, 07:24 AM Hi guys, I am doing a project for school and I need to get a servo to interface in matlab, namely simulink. I want to be able to command the servo within my PID loop. I just bought this http://www.parallax.com/detail.asp?product_id=28823·which hooks up to the USB. I know I have to open the com port for this device in matlab, but im not exactly sure what else has to be done to get it to work. Can someone help me out, PLEASE?http://forums.parallax.com/images/smilies/freaked.gif I will be using a single standard fuaba servo.
{"url":"http://forums.parallax.com/archive/index.php/t-93308.html","timestamp":"2014-04-19T05:08:24Z","content_type":null,"content_length":"15009","record_id":"<urn:uuid:c573d248-36a6-4d98-8c99-6f0003996613>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: RE: panel data fixed vs random Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: RE: panel data fixed vs random From Christopher Baum <kit.baum@bc.edu> To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu> Subject Re: st: RE: panel data fixed vs random Date Mon, 4 Feb 2013 12:21:46 +0000 On Feb 4, 2013, at 2:33 AM, Pietro said: > If i run the hausman test in the reverse order: > hausman random fix > i get the following output: > Note: the rank of the differenced variance matrix (2) does not equal the number of coefficients > being tested (4); be sure this is what you expect, or there may be problems computing > the test. Examine the output of your estimators for anything unexpected and possibly > consider scaling your variables so that the coefficients are on a similar scale. > ---- Coefficients ---- > | (b) (B) (b-B) sqrt(diag(V_b-V_B)) > | random fix Difference S.E. > - -------------+---------------------------------------------------------------- > Penetratio~2 | -29177.79 -83456.95 54279.16 56513.77 > GastosSaud~a | -4.087394 .6067533 -4.694147 2.920822 > MortHomicp~b | 65.12774 4.54167 60.58607 31.10263 > interinsfi~a | .3677846 .238441 .1293436 .0192787 > - ------------------------------------------------------------------------------ > b = consistent under Ho and Ha; obtained from xtreg > B = inconsistent under Ha, efficient under Ho; obtained from xtreg > Test: Ho: difference in coefficients not systematic Garbage in, garbage out. You cannot arbitrarily reverse the order of arguments to -hausman-. As the footer says, you have specified that the first set of estimates is always consistent, i.e. FE, not RE. You have to use fixed as the first argument. The huge difference in point estimates suggests to me that RE is not acceptable. Furthermore, when you run the test properly, you get a negative Chi2 value, and the command tells you that you are not meeting the underlying assumptions. In my experience this often signals that you are comparing two inconsistent estimators, i.e., that your FE model is seriously misspecified, and is itself junk. I would work with the FE model, analyze its residuals, and consider whether there are obvious problems of omitted variables, endogeneity, etc. Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2013-02/msg00062.html","timestamp":"2014-04-17T10:17:44Z","content_type":null,"content_length":"10155","record_id":"<urn:uuid:0ef963a9-cf9d-48c5-a99e-d6083b116400>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
RaTLoCC (Ramsey Theory in...) I was at last week which stands for Ramsey Theory in Logic, Combinatorics and Complexity . The idea was to bring in researchers for all three areas (actually its more than three) and have them learn about each others areas. 1. During Szemeredi's talk he said I am not an expert on Ramsey Theory. He meant to say I am not an expert on Ramsey Numbers, e.g., the current upper and lower bounds on R(5). 2. An anagram of Banach-Tarski is Banach-Tarski Banach-Tarski. 3. Bertinoro hosted both RaTLoCC and SUBLINEAR (a workshop on ... SUBLINEAR things) at the same time. We had some shared activities with them. They were a younger, hipper crowd. Ronitt Rubinfeld (who was there) told me they had LESS women than they thought they would have- only 4. We had MORE women then we thought we would have- 2 (we had 0 last time). 4. There were several blogs about the SUBLINEAR workshop: Day 1a Day 1b Day 2a Day 2b Day 3 Day 4a Day 4b 5. One measure of how much you get out of a workshop or conference is how many papers you are inspired to read when you get back home (or perhaps how many pages-of-papers or theorems, or some measure.) With that in mind, here is a list of some of the papers that inspired me. (Slides and abstracts are posted at the workshop's website.) 1. Szemeredi talked on Arithmetic progressions in sumsets. Here is a sample theorem. Throughout A is a subset of {1,...,n} and n is large. A+A is the set of all sums of two elements of A. LA is A+A+...+A (L times). Sample theorem: For all C there exists c such that if |LA| > Cn then LA has a cn-AP (an arithmetic progression of length cn.) These theorems look HARD but it inspires me to read some papers on Alon's website on this topic, and also my own writeup of sum-product theorems (He used C and c in his talk- though some of them looked the same.) 2. Noga Alon talked on List Coloring and Euclidean Ramsey Theory. A graph is L-List colorable if there is a way to assign each vertex L colors so that there IS a coloring of the graph where each vertex uses on of the colors assigned to it. Let G be the unit distance graph in the plane: vertices are points in the plane and two points are connected if they are distance one apart. This graph is known to be 7 colorable, known to NOT be 3-colorable. All else is open. Noga talked about his proof (co-author Kostochka) that G is NOT list-s-colorable for any s. The paper is on his website and I am INSPIRED to read it. (ADDED LATER- THE DEFINITION I GAVE ABOVE IS NOT CORRECT. SEE COMMENT BELOW.) 3. Peter Cholak talked on Reverse Mathematics of Ramsey Theory. Reverse mathematics is a field where they have set up several axiom systems for mathematics (in a hierarchy) and, for MANY theorems of math, they know EXACTLY which system is needed. Infinite Ramsey Theory for PAIRS seems to be stubborn- it is not in any of the usual systems. (formally: RCA cannot prove it, but ACA is too much). It is provably DIFFERENT from Infinite Ramsey for TRIPLETS (which is equivalent to the theorem for 4-tuples, 5-tuples, etc, and for all of them ACA is exact.) My question: when I prove Ramsey for triples I do not feel the earth move or think MY GOODNESS- THAT STEP WAS NONCONSTRUCTIVE!!!! or anything else that is different from Ramsey for pairs. They told me that there IS a proof of Ramsey for pairs that, once you see it, you DO NOT know how to generalize to triples. I may try to look into this. I may not. The paper is at Peter Cholak's webpage and is titled On the strength of Ramsey's theorem for pairs. (co-authored by Jockusch and Slaman) 4. David Conlon's talk Hypergraph Ramsey Numbers was about Ramsey's theorem for triples. For this case the upper bound is double-exp but the lower bound is single-exp. They made some progress on this, but the upper and lower bounds are still, pretty much, what they were. Still- proofs look very interesting. Progress here is unpredictable--- Conlon said that the problem could be solved next week or next month or not for one hundred years. Also, the proofs are clever- so Erdos COULD HAVE done them. (As opposed to results that use mathematics unknown to anyone in Math at the time.) I am INSPIRED to read this paper by Conlon, Fox, and Sudakov. The case of 3-hypergraph is interesting because, if the lower bound can be gotten up to double exp then for k-hypergraphs we will know that upper and lower bounds are roughly tower(k-1). (Uses the STEPPING UP Lemma.) 5. Swastik Kopparty talked on The complexity of computing roots and residuosity in finite fields. Say you are in a ints mod p and you want to know whether x is a cube root or not by a constant depth circuit. This will take exp number of gates. Framework is the same as the Parity not in ACC_0 proof, but requires lots more math. MIGHT want to read it. MIGHT be too hard. 6. Imre Leader gave an excellent talk on Euclidean Ramsey Theory. Here is the basic problem: Let S be a set of points in R^n. IS it the case that, for all c there exists a finite set T in R^m. (m \ge n) such that, no matter how you c-color T there will be a CONGRUENT copy of S? The unit line has this property. If so then S is RAMSEY. It is known that if S is Ramsey then S lies on the surface of some (many dim) sphere. The Main conjecture is that the converse is true. NO says Imre! He has an alternative conjecture here. This paper I am INSPIRED to read. 7. Jan-Christoph Schlage-Puchta (that is one person) gave an application of VDW's theorem to Completely Multiplicative Automatic Functions This one I NEED to read to put in my book on VDW's theorem. Its here 6. The best talks were those that really TOLD YOU WHAT THE PROBLEM WAS so that the no-specialist could at least here that and then TOLD YOU WHY THEY CARE (even if you don't care you should know why they care) and TOLD YOU SOMETHING ABOUT THE TECHNIQUES. Not all of the talks did this. Also, the best talks were on BLACKBOARD- it forces you to go slower. Not possible at STOC/FOCS etc since the audience is too big, but quite possible here. Only drawback- can't put blackboard talks on the web so easily (though you can if you videotape). 7. The youngest participant: James Pinkerton, my High School Student, gave his talk on Duplicator Spoiler Games that go on an ordinal number of moves. The talk was AWESOME! Before hand he was not scared. He said Whats the worst they can do? Throw read and blue tomatoes at me? 8. One of the people there wore a T-shirt that said Stand back, I'm doing SCIENCE! that had a picture (stick figure really) of a scientists with a test tube. He wore it two days in a row. I commented: Nonconstructive proof that you are a supernerd. Either (1) You wore the same T-shirt two days in a row, so you are a supernerd, OR (2) you have two copies of the same nerdy T-shirt, so you are a supernerd. Of course, in these surroundings, being called a nerd or a supernerd is not an insult. 6 comments: 1. That was truly inspiring, thanks. If I were your dean that report would cause me to fund your next five years ttendance with no need to file a formal request! I immediately looked up the Cholak/ Jockusch/Slaman paper although it looks too long to add to my current reading pile. What a different world it would be if there were some way for people without mathematics backgrounds to be equally inspired by the great work that is out there. 2. It's almost surely this shirt: http://store.xkcd.com/xkcd/#StandBackScience 3. Just noticed that there seem to be 3 notions for coloring planar graphs: They went by so fast I didn't realize I was making acquaintance with 3 different terms. At least I think it was only 3, are there more? 4. Red and blue tomatoes. Tomatoes don't read or are read. 5. Micki- apologies, there are only two notions of graph coloring, and they apply to all graphs not just planar ones. L-list-colorable and list s-colorable are the same thing, though with a diff parameter. n-colorable is of course the usual notion of graph coloring. 6. MAJOR CORRECTION: a graph is L-list-colorable if FOR ANY assignment of L colors to each vertex there is a way to pick for each vertex a color from its list and get a proper coloring. (Thanks to the email that pointed this out.)
{"url":"http://blog.computationalcomplexity.org/2011/05/ratlocc-ramsey-theory-in.html","timestamp":"2014-04-18T23:16:45Z","content_type":null,"content_length":"169914","record_id":"<urn:uuid:5fe5afe7-3a54-4f56-94a4-c04df570141e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
2002 Conference Proceedings Go to previous article Go to next article Return to 2002 Table of Contents Yoshinobu Maeda*, Eiichi Tano**, Hideo Makino*, Takashi Konishi**, Ikuo Ishii** *Faculty of Engineering, Niigata University **Graduate School of Science and Technology, Niigata University 8050, Ikarashi-2, Niigata, 950-2181, Japan In order to assist the navigation of visually impaired pedestrians, using devices such as GPS (Global Positioning System), there should be a capability of automatically providing building names (landmarks) and compass information, in spoken format. In this report we experimentally manipulated availability of both building names and path intersection (waypoint) information. In our evaluation, sighted participants were blindfolded and walked along a given route using our guidance system. Using a multivariate analysis of the data, we demonstrated a reduction in walking time and distance with the addition of waypoint information. Our results indicate that the landmarks alone were sufficient to travel the route, but travel was more efficient using both types of information (landmarks and waypoints). We have developed a GPS-based speech-output guidance system for blind and visually impaired pedestrians [1]. Our system was originally composed of two separate units: a mobile unit with GPS, carried by the visually impaired pedestrian, and the base station, a centrally-located unit containing a Geographic Information System (GIS), as shown in Fig. 1(a). Recently, we have implemented a new version of the system which integrates the GPS and GIS components into one unit (Fig. 1(b)). Both the combined and two-component versions of the guidance system are intended for blind and visually impaired people. The numerical location data, comprising the latitude and longitude output from the GPS receiver, are conveyed to the GIS to obtain information about nearby buildings (landmarks). Also, the user's facing direction, provided by a fluxgate compass worn by the user, is conveyed to the GIS. Synthetic speech software provides speech information to orient the traveler with respect to the nearby When a blind user with the guidance system wishes to travel to some building, he or she may easily accomplish the task. However, finding the entrance to the building is difficult, for attributes of the building, like an entrance, are not included in the GIS. Also, when he or she has to take another way owing to temporary road repairs, he or she needs path intersection information in order to make a detour. Minimal information about path intersections (waypoints) can greatly improve a guidance system. The main purpose of this report was to investigate the importance of waypoint information for the visually impaired [2]. In our evaluation, we measured walking times and distances of 10 blindfolded, sighted participants in two experimental conditions: with landmark information only and with both landmark and waypoint information. These data were analyzed by means of Principal Components Analysis (PCA) and Discriminant Analysis (DA) using Mahalanobis distance. Fig. 1: (a) Two-component version of the system. (b) Integrated version of the system. "*personal computer" means a small mobile computer. The sighted participants in the experiment were 4 undergraduate students and 6 graduate students, aged 22-24 years, of Niigata University. They were blindfolded and behaved as if temporarily blind (Fig. 2(a)). The route of the experiment was located on the Niigata University campus. The participants knew several landmarks on the campus but had never walked the route using a blindfold. In order to insure each participantfs safety, two observers accompanied the participants on the walk. These observers did not converse with the participants at any time during the experiment. All participants walked on the same route (a bitumen walking path) three times. In Condition I, half of the participants walked along the route using a blindfold, a long cane and the integrated version of the guidance system (see hardware in Fig. 2 (b)) with access only to landmark information. Three months later, they walked the route again with a blindfold, a long cane and the guidance system, this time with both landmark and waypoint information (Condition II). The other half of the participants performed in the same two conditions but in reverse order. The observers measured the walking times of the participants to the nearest minute. Walking distances in meters were obtained from the GPS data. Finally, all participants performed in a third condition (Condition III); here they walked the same route using vision (i.e., without the blindfold, long cane, and guidance system). Condition III was included in order to obtain normal walking times as reference values. Digital maps of our GIS software are shown in Figs. 3(a) and 3(b). In all three conditions, participants walked counter-clockwise from the point labeled "Start & goal" in Fig. 3(b), returning to the same point. This route had 9 path intersections along the way, and the total length of the route was 978 meters. The guidance system was programmed so that the participant received synthesized speech every 10 seconds, even though the GPS receiver updated the location data more frequently. The landmark and the waypoint data were contained within different layers of the GIS. We gave waypoint information higher priority, so that the guidance system provided waypoint information whenever the smaller waypoint block overlapped the larger landmark block (see Fig. 3(b)). The walking times and walking distances were normalized by dividing by the participant's reference walking time (from Condition III) and the shortest route (=978m), respectively. We supposed that the data of Condition I and II were given by two-dimensional joint Gaussian distributions, which consisted of the normalized walking time (NWT) and the normalized walking distance (NWD), respectively. Using the NWT and the NWD, we obtained mean vectors and variance-covariance matrices to evaluate theprincipal components of the distribution and the Mahalanobis distances. We applied the PCA to two data sets of Conditions I and II to obtain the correlation between the NWT and the NWD. We also applied the DA to the data to get a discriminant function (or a boundary curve in the NWT-NWD plane) between two groups of data of Conditions I and II. Every point on the boundary curve takes the same Mahalanobis distance from two mean points (or two mean vectors) of the data of Conditions I and II. Fig. 2: (a) Configuration. (b) Guidance system Fig. 3: Digital maps in our GIS. (a) Landmark information (Condition I). (b) Landmark and waypoint information (Condition II). The small blocks labeled with "Start & goal" and with single digits are the waypoint information. All participants completed the three conditions. Figure 4 shows the GPS plots of one of the participants. This participant temporarily got lost in the vicinity of both waypoints "3" and "5", which are depicted in Fig. 3(b). Figure 5 gives the mean and the standard deviation of NWT (Fig. 5(a)) and NWD (Fig. 5(b)). In each graph, the left represents Condition I, and the right Condition II. In the PCA, the data of Conditions I and II were approximately scattered along a positively-slope line in the NWT-NWD plane, respectively (not shown in figure). These lines represent the eigenvectors corresponding to the maximum eigenvalues of the variance-covariance matrices. The fact that the data lie on a positively-slope line indicates that NWT has a positive correlation with NWD. In the DA, the discriminant function (thick solid curve in Fig. 6(a)) in the NWT-NWD plane is shown as the ellipse, where unfilled and filled circles represent the data of Conditions I and II, respectively (Fig. 6(a)). Figure 6(b) is a magnification in the vicinity of the ellipse of Fig. 6(a). Fig. 4: GPS trajectories. The labels "Start&goal", "3" and "5" correspond to those in Fig. 3(b). (a) Condition I. (b) Condition II. Fig. 5: Mean and standard deviation. (a) Condition I. (b) Condition II. Fig. 6: DA. (a) Unfilled (white) and filled (black) circles represent the data of Conditions I and II, respectively. Unfilled (white) and filled (red) squares represent the mean points of Conditions I and II, respectively. The ellipse represents the discriminant function. (b) A closeup of the data in the vicinity of the ellipse of (a). Abscissa, the normalized walking time (NWT); ordinate, the normalized walking distance (NWD). It was demonstrated in Fig. 4 that the blindfolded pedestrian lacking waypoint information in the GIS got lost in the vicinity of two intersections. We can also see it statistically in the NWT and the NWD of Fig. 5. These mean values were reduced in half when waypoint information was available. The results of the PCA confirms that Figs. 5(a) and 5(b) are the correct results. If the participant had stopped walking intentionally in the experiment, the NWT would be overestimated relative to NWD. Thus, the positive correlation between the NWT and the NWD would differ from the results of the PCA. The discriminant function of the DA (thick solid curve in Fig. 6) was elliptical, where the inside of the ellipse corresponded to the statistical group of the data of Condition II, and the outside Condition I. Indeed, the mean point of Condition II (filled square) was inside the ellipse, and conversely, the mean point of Condition I (unfilled square) was outside. Generally, if the two groups were to have the same statistical variances, the discriminant function would be linear. When the difference of the variances is large, the discriminant function is quadratic nd changes from hyperbolic to elliptic. Therefore our result indicates that, when the participants had waypoint information in the GIS, not only the mean values of the NWT and the NWD were reduced by so were the variances. This means that waypoint information is necessary for efficient travel, though navigation differs considerably from one individual to the next. It is expected, from the diameter of the ellipse in Fig. 6(b), that visually impaired people will take no more than 3.3 times (and no less than 1.8 times) as much time to traverse a path than sighted people, if they have both landmark and waypoint information. For distance, these corresponding maximum value is 1.3. In our guidance system for the visually impaired pedestrian, we divide guidance information into two categories: building names (landmarks), and path intersections (waypoints). These two types of information were provided in different layers in the GIS of our guidance system. We programmed the guidance system to give priority to the waypoint information. In order to assess the importance of waypoint information, we experimented using 10 blindfolded sighted participants and analyzed the walking times and distances. The results indicate that providing both landmark and waypoint information improves effectiveness of the system. We are going to investigate the effect of the other information such as paths and districts on the guidance system. Furthermore, we are going to improve the guidance system so that it has an input module by means of the speech recognition [3][4]. The authors would like to thank Prof. Jack M. Loomis for helpful discussions. [1] H. Makino, I. Ishii and M. Nakashizuka (1996) Proc. 18th Conf. IEEE/EMBS, Amsterdam, The Netherlands, Oct.31-Nov.3 [2] E. Tano, Y. Maeda, H. Makino, T. Konishi and I. Ishii (2001, in press) Theory and Applications of GIS, GISA, Vol.9, No.2 (in Japanese) [3] R. G. Golledge, R. L. Klatzky, J. M. Loomis, J. Speigle and J. Tietz (1998) Int. J. Geographical Information Science, Vol.12, No.7, pp.727-749 [4] J. M. Loomis, R. G. Golledge and R. L. Klatzky (2001) pp.429-446, In: W. Barfield and T. Caudell (eds.) Fundamentals of Wearable Computers and Augmented Reality, Mahwah NJ: Lawrence Erlbaum Go to previous article Go to next article Return to 2002 Table of Contents Return to Table of Proceedings Reprinted with author(s) permission. Author(s) retain copyright.
{"url":"http://www.csun.edu/cod/conf/2002/proceedings/296.htm","timestamp":"2014-04-21T07:34:09Z","content_type":null,"content_length":"14382","record_id":"<urn:uuid:b8a74b60-23ab-4d2b-91e5-a341e74ca457>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 43 "... We extend Duration Calculus to a logic which allows description of Discrete Processes where several steps of computation can occur at the same time point. The resulting logic is called Duration Calculus of Weakly Monotonic Time (W DC). It allows effects such as true synchrony and digitisation to be ..." Cited by 20 (9 self) Add to MetaCart We extend Duration Calculus to a logic which allows description of Discrete Processes where several steps of computation can occur at the same time point. The resulting logic is called Duration Calculus of Weakly Monotonic Time (W DC). It allows effects such as true synchrony and digitisation to be modelled. As an example of this, we formulate a novel semantics of Timed CSP assuming that the communication and computation take no time. - FM'99 , 1999 "... The UniForM Workbench supports combination of Formal Methods (on a solid logical foundation), provides tools for the development of hybrid, real-time or reactive systems, transformation, verification, validation and testing. Moreover, it... ..." Cited by 20 (2 self) Add to MetaCart The UniForM Workbench supports combination of Formal Methods (on a solid logical foundation), provides tools for the development of hybrid, real-time or reactive systems, transformation, verification, validation and testing. Moreover, it... - Nordic Journal of Computing , 2002 "... We present a new combination CSP-OZ-DC of three well researched formal techniques for the specification of processes, data and time: CSP [17], Object-Z [36], and Duration Calculus [40]. The emphasis is on a smooth integration of the underlying semantic models and its use for verifying properties ..." Cited by 19 (3 self) Add to MetaCart We present a new combination CSP-OZ-DC of three well researched formal techniques for the specification of processes, data and time: CSP [17], Object-Z [36], and Duration Calculus [40]. The emphasis is on a smooth integration of the underlying semantic models and its use for verifying properties of CSP-OZ-DC specifications by a combined application of the model-checkers FDR [29] for CSP and UPPAAL [1] for Timed Automata. This approach is applied to part of a case study on radio controlled railway crossings. - FM’99: WORLD CONGRESS ON FORMAL METHODS, LECT. NOTES IN COMPUT. SCI , 1999 "... Timed Communicating Object Z (TCOZ) combines Object-Z's strengths in modeling complex data and algorithms with Timed CSP's strengths in modeling real-time concurrency. TCOZ inherits CSP's channel-based communication mechanism, in which messages represent discrete synchronisations between process ..." Cited by 15 (3 self) Add to MetaCart Timed Communicating Object Z (TCOZ) combines Object-Z's strengths in modeling complex data and algorithms with Timed CSP's strengths in modeling real-time concurrency. TCOZ inherits CSP's channel-based communication mechanism, in which messages represent discrete synchronisations between processes. The purpose of most control systems is to observe and control analog components. In such cases, the interface between the control system and the controlled systems cannot be satisfactorily described using the channel mechanism. In order to address this problem, TCOZ is extended with continuous-function interface mechanisms inspired by process control theory, the sensor and the actuator. The utility of these new mechanisms is demonstrated through their application to the design of an automobile cruise control system. - Theoretical Computer Science , 1998 "... The principal objective of this paper is to lift basic concepts of the classical automata theory from discrete to continuous (real) time. It is argued that the set of nite memory retrospective functions is the set of functions realized by nite state devices. We show that the nite memory retros ..." Cited by 15 (1 self) Add to MetaCart The principal objective of this paper is to lift basic concepts of the classical automata theory from discrete to continuous (real) time. It is argued that the set of nite memory retrospective functions is the set of functions realized by nite state devices. We show that the nite memory retrospective functions are speed-independent, i.e., they are invariant under `stretchings' of the time axis. Therefore, such functions cannot deal with metrical aspects of the reals. - In FM 2005, volume 3582 of LNCS , 2005 "... Abstract. We present a new model-checking technique for CSP-OZ-DC, a combination of CSP, Object-Z and Duration Calculus, that allows reasoning about systems exhibiting communication, data and real-time aspects. As intermediate layer we will use a new kind of timed automata that preserve events and d ..." Cited by 14 (3 self) Add to MetaCart Abstract. We present a new model-checking technique for CSP-OZ-DC, a combination of CSP, Object-Z and Duration Calculus, that allows reasoning about systems exhibiting communication, data and real-time aspects. As intermediate layer we will use a new kind of timed automata that preserve events and data variables of the specification. These automata have a simple operational semantics that is amenable to verification by a constraint-based abstraction-refinement model checker. By means of a case study, a simple elevator parameterised by the number of floors, we show that this approach admits model-checking parameterised and infinite state real-time systems. 1 , 2002 "... We investigate a variant of dense-time Duration Calculus which permits model checking using timed/hybrid automata. We define a variant of the Duration Calculus, called Interval Duration Logic, (IDL), whose models are timed state sequences [1]. A subset LIDL of IDL consisting only of located time con ..." Cited by 10 (0 self) Add to MetaCart We investigate a variant of dense-time Duration Calculus which permits model checking using timed/hybrid automata. We define a variant of the Duration Calculus, called Interval Duration Logic, (IDL), whose models are timed state sequences [1]. A subset LIDL of IDL consisting only of located time constraints is presented. As our main result, we show that the models of an LIDL formula can be captured as timed state sequences accepted by an event-recording integrator automaton. A tool called IDLVALID for reducing LIDL formulae to integrator automata is briefly described. Finally, it is shown that LIDL has precisely the expressive power of event-recording integrator automata, and that a further subset LIDL- corresponds exactly to event-recording timed automata [2]. This gives us an automata-theoretic decision procedure for the satisfiability of LIDL- formulae. - Information Processing Letters , 1994 "... This paper presents an algebraic calculus like the relational calculus for reasoning about sequential phenomena. It provides a common foundation for several proposed models of concurrent or reactive systems. It is clearly differentiated from the relational calculus by absence of a general converse o ..." Cited by 9 (1 self) Add to MetaCart This paper presents an algebraic calculus like the relational calculus for reasoning about sequential phenomena. It provides a common foundation for several proposed models of concurrent or reactive systems. It is clearly differentiated from the relational calculus by absence of a general converse operation. This permits the treatment of temporal logic within the sequential calculus. 1 Introduction and general axioms. - In Workshop on Formal Techniques for Hardware and Hardware-like Systems, Marstrand , 1998 "... Most hardware verification techniques tend to fall under one of two broad, yet separate caps: simulation or formal verification. This paper briefly presents a framework in which formal verification plays a crucial role within the standard approach currently used by the hardware industry. As a basis ..." Cited by 8 (2 self) Add to MetaCart Most hardware verification techniques tend to fall under one of two broad, yet separate caps: simulation or formal verification. This paper briefly presents a framework in which formal verification plays a crucial role within the standard approach currently used by the hardware industry. As a basis for this, the formal semantics of Verilog HDL are defined, and properties about synchronization and mutual exclusion algorithms are proved. - Formal Aspects of Computing , 1994 "... Abstract. This paper deals with dependability of imperfect implementations concerning given requirements. The requirements are assumed to be written as formulas in Duration Calculus. Implementations are modelled by continuous semi-Markov processes with finite state space, which are expressed in the ..." Cited by 8 (3 self) Add to MetaCart Abstract. This paper deals with dependability of imperfect implementations concerning given requirements. The requirements are assumed to be written as formulas in Duration Calculus. Implementations are modelled by continuous semi-Markov processes with finite state space, which are expressed in the paper as finite automata with stochastic delays of state transitions. A probabilistic model for Duration Calculus formulas is introduced, so that the satisfaction probabilities of Duration Calculus formulas with respect to semi-Markov processes can be defined, reasoned about and calculated through a set of axioms and rules of the model. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=589597","timestamp":"2014-04-21T16:58:55Z","content_type":null,"content_length":"35658","record_id":"<urn:uuid:d2f8e4c0-16ba-444d-ad9a-f816ac6e8e1d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Reference prior for logistic regression Gelman et al. just published a paper in the Annals of Applied Statistics on the selection of a prior on the parameters of a logistic regression. The idea is to scale the prior in terms of the impact of a “typical” change in a covariate onto the probability function, which is reasonable as long as there is enough independence between those covariates. The covariates are primarily rescaled to all have the same expected range, which amounts to me to a kind of empirical Bayes estimation of the scales in an unormalised problem. The parameters are then associated with independent Cauchy (or t) priors, whose scale s is chosen as 2.5 in order to make the ±5 logistic range the extremal value. The perspective is well-motivated within the paper, and supported in addition by the availability of an R package called bayesglm. This being said, I would have liked to see a comparison of bayesglm. with the generalised g-prior perspective we develop in Bayesian Core rather than with the flat prior, which is not the correct Jeffreys’ prior and which anyway does not always lead to a proper prior. In fact, the independent prior seems too rudimentary in the case of many (inevitably correlated) covariates, with the scale of 2.5 being then too large even when brought back to a reasonable change in the covariate. On the other hand, starting with a g-like-prior on the parameters and using a non-informative prior on the factor g allows for both a natural data-based scaling and an accounting of the dependence between the covariates. This non-informative prior on g then amounts to a generalised t prior on the parameter, once g is integrated. Anyone interested in the comparison can use the functions provided here on the webpage of Bayesian Core . (The paper already includes a comparison with Jeffreys’ prior implemented as brglm and the BBR algorithm of Genkins et al. (2007).) In the revision of Bayesian Core, we will most likely draw this comparison. 3 Responses to “Reference prior for logistic regression” 1. [...] In connection with the discussion about reference priors for logistic regression posted two weeks ago, Aleks Jakulin pointed out the possibility to embed the slides for Bayesian Core that 2. In our paper we took care of scaling in a very radical way: all continuous variables were discretized and only took values of 0 or 1. I agree that the problem of scaling as well as the problem of inter-predictor correlations are important, and I’m looking forward to seeing how this is handled in Bayesian Core. A PDF of the relevant chapter sent via email would be helpful, as I’ll forget about the problem by the time I actually put my hands on the book. While the models are all fine, the challenge is to implement the model in a robust and efficient fashion so that it would survive the brutal testing on the corpus. I’ll try to put the code and data out there so that others can make their code sufficiently robust. 3. Interesting idea. I agree that it makes sense to use a hierarchical model for the coefficients so that they are scaled relative to each other. Regarding the pre-scaling that we do: I think something of this sort is necessary in order to be able to incorporate prior information. For example, if you are regressing earnings on height, it makes a difference if height is in inches, feet, meters, kilometers, etc. (Although any scale is ok if you take logs first.) I agree that the pre-scaling can be thought of as an approximation to a more formal hierarchical model of the scaling. Aleks and I discussed this when working on the bayesglm project, but it wasn’t clear how to easily implement such scaling. It’s possible that the t-family prior can be interpreted as some sort of mixture with a normal prior on the scaling.
{"url":"http://xianblog.wordpress.com/2009/01/14/reference-prior-for-logistic-regression/","timestamp":"2014-04-18T23:20:05Z","content_type":null,"content_length":"41655","record_id":"<urn:uuid:e2378dbf-6eb0-4efb-971d-35c8cbe8d24f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: working on earthquake lab and had a few questons on finding epicenter can someone • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f98cace4b007c4a2ec23eb","timestamp":"2014-04-18T08:11:09Z","content_type":null,"content_length":"34848","record_id":"<urn:uuid:8ad86930-72db-47b5-9da1-f1fcfdfc1acf>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2007 [00349] [Date Index] [Thread Index] [Author Index] Re: FullSimplify with Pi • To: mathgroup at smc.vnet.net • Subject: [mg84141] Re: [mg84129] FullSimplify with Pi • From: Andrzej Kozlowski <akoz at mimuw.edu.pl> • Date: Tue, 11 Dec 2007 06:11:30 -0500 (EST) • References: <200712110320.WAA26066@smc.vnet.net> On 11 Dec 2007, at 12:20, Uberkermit wrote: > Greetings, > I would like to simplify an expression involving Pi. Actually, > simplifying the expression isn't hard, getting Mathematica to > recognize the simplification is the hard part. Consider: > Assuming[Element[x, Reals], > FullSimplify[Log[1/Sqrt[2 Pi] Exp[x]]]] > Mathematica doesn't do anything to simplify this. However, replacing > the symbol Pi with any other variable in the above leads to the > trivial substitutions. > By trivial, I mean: > Log[Exp[x]] = x, and Log[a/b] = Log[a] - Log[b]. > Why does Pi confuse Mathematica so?? > Thanks, > -Chris Well, how to put it, everything is the other way round. According to Mathematica's default ComplexityFunction (which is very close to LeafCount) the expression that you get when you use Pi is simpler: p = Assuming[Element[x, Reals], x - Log[a]/2 - Log[2]/2 q = Assuming[Element[x, Reals], x - (1/2)*Log[2*Pi] Now compare the LeafCount : LeafCount /@ {p, q} {14, 10} q is smaller so the second expression is "simpler". So there are two questions that one may wish to ask now. 1. Why does FullSimplify return the "simpler" expression in the case of Pi? 2. How to make FullSimplify return the expanded form of the expression in the case of Pi. Let's start with the easier question, that is the second. Of course one way would be to use a replacement rule, but that is not the point here. What you need to do is to use a ComplexityFunction that will make an expanded expression simpler. One has to find a good choice; for example if we try to maximize the number of Logs we will get: Assuming[Element[x, Reals], FullSimplify[Log[E^x/Sqrt[2*Pi]], ComplexityFunction -> (LeafCount[#1] - 6*Count[#1, Log, Infinity, Heads -> True] & )]] Log[E^x] - Log[Pi]/2 - Log[2]/2 which is not what we wanted. One possibility is to minimize the complexity of the expression inside Log. Here is one attempt: Assuming[Element[x, Reals], FullSimplify[Log[E^x/Sqrt[2*Pi]], ComplexityFunction -> (LeafCount[#1] + 10*Max[0, Cases[{#1}, Log[x_] :> LeafCount[x], Infinity]] & )]] x - Log[Pi]/2 - Log[2]/2 The ComplexityFunction is pretty complicated, I am sure it is possible to find a simpler one but I don't wish to spend any time on this. Now question one. Why does the expansion take place in the first case? Well, I would speculate that this is related to the following strange FullSimplify[x + Log[a]/2 + Log[2]/2] x + (1/2)*Log[2*a] FullSimplify[x + Log[a]/2 + Log[2]/2] x + (1/2)*Log[2*a] FullSimplify[x - Log[a]/2 - Log[2]/2] x - Log[a]/2 - Log[2]/2 Just to confirm: LeafCount[x - Log[a]/2 - Log[2]/2] LeafCount[x - 1/2 log(2 a)] So it does look like an imperfection in FullSimplify but of a somewhat different kind than the one you had in mind. I believe that the cause of it all lies in the following behaviour: Andrzej Kolowski • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2007/Dec/msg00349.html","timestamp":"2014-04-19T09:40:35Z","content_type":null,"content_length":"28462","record_id":"<urn:uuid:6f339ab2-b19f-43fb-a283-74326f22178b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Vortex extraction apparatus, method, program storage medium, and display system Patent application title: Vortex extraction apparatus, method, program storage medium, and display system Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A vortex extraction apparatus includes: a first numerical data calculation section calculating, for each vertex of each of voxels into which the inside of a fluid space is split, first numerical data indicating whether the vertex is located inside or outside a vortex region in a fluid; and a text data creation section determining the size of the vortex region around a center line of the vortex based on the first numerical data, and creating text data indicating the center line and the size. The apparatus further includes a polygon count reduction processing section counting the number of polygons defining a surface of the vortex region and each including an isosurface of the first numerical data, and integrating the voxels with new voxels into which the fluid space is roughly split until the number of polygons is not more than a set value, to decimate the first numerical data. A vortex extraction apparatus comprising: a data acquisition section that acquires flow velocity data at a plurality of points distributed in a fluid space; a first numerical data calculation section that calculates, for each vertex of each of voxels into which the inside of the fluid space is split, first numerical data indicating whether the vertex is located inside or outside a vortex region in the fluid, based on the flow velocity data acquired by the data acquisition section; a text data creation section that determines a center line of the vortex in the fluid based on the flow velocity data acquired by the data acquisition section and determines the size of the vortex region around the center line based on the first numerical data on each vertex calculated by the first numerical data calculation section, the text data creation section creating text data indicative of the center line and the size; a polygon count reduction processing section that counts the number of polygons defining a surface of the vortex region and each including an isosurface of the first numerical data calculated by the first data calculation section and integrates the plurality of voxels with new voxels into which the fluid space is roughly split until the number of polygons is equal to or smaller than a set value, to decimate the first numerical data and thereby obtain first numerical data corresponding to each vertex of each of the new voxels; and a data output section that outputs the text data created by the text data creation section and the first numerical data obtained by the polygon count reduction processing section and corresponding to each vertex of each of the new voxels. The vortex extraction apparatus according to claim 1, further comprising a second numerical data calculation section that calculates, based on the flow velocity data acquired by the data acquisition section, second numerical data indicative of the intensity of the vortex in each of the voxels comprising the vortex region determined based on the first numerical data calculated by the first numerical data calculation section, wherein the polygon count reduction processing section decimates the first numerical data calculated by the first numerical data calculation section to the first numerical data corresponding to each vertex of each of the new voxels, and creates second numerical data corresponding to each of the new voxels based on the second numerical data calculated by the second numerical data calculation section, and the data output section outputs, in addition to the text data and the first numerical data, the second numerical data corresponding to the new voxels. The vortex extraction apparatus according to claim 1, further comprising an interpolation operation section that performs an interpolation operation based on the flow velocity data acquired by the data acquisition section, thereby calculating flow velocity data on each vertex of each of a plurality of voxels when the fluid space is split into structured grids. The vortex extraction apparatus according to claim 1, wherein a flow velocity gradient matrix of vertices of each voxel is representatively defined as: J = ( u x u y u z v x v y v z w x w y w z ) , # #EQU00010## and S and Q are defined as: S = 1 2 ( J + J T ) ##EQU00011## Q = 1 2 ( J - J T ) ##EQU 2## where when flow velocities u, v, and w in x, y, and z directions in the fluid space are represented by q, and x, y, and z are represented by r, q r = ∂ q ∂ r , ##EQU00012## and J is a transposition matrix of a matrix J, and the first numerical data calculation section then determines an eigenvalue λ corresponding to a median of three eigenvalues λ , λ , and λ of a matrix A=S arranged in numerical order such that λ (each of λ , λ , and λ is any of λ , λ , and λ ), and specifies numerical data associated with the eigenvalue λ as the first numerical data. The vortex extraction apparatus according to claim 2, wherein when a flow velocity vector of each of the voxels is defined as U, the second numerical data calculation section calculates: ω = ∇ × U ## EQU00013## where : ##EQU 2## ∇ = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) , ##EQU 3## x denotes an outer product, and |ω| denotes the magnitude of ω, and the second numerical data calculation section thus determines numerical data associated with the |ω| to be the second numerical The vortex extraction apparatus according to claim 1, wherein the text data creation section carries out, on a plurality of points of interest, a process of determining whether a point of interest in the fluid space is located on the center line based on a flow direction of the fluid represented by the flow velocity data on each of points around the point of interest, thereby determining the center line. The vortex extraction apparatus according to claim 1, wherein the text data creation section determines the size of the vortex region to be a minimum distance and a maximum distance from a representative point on the center line to the vortex region surface in a surface extending in a direction in which the center line crosses the surface. A non-transitory computer-readable storage medium that stores a vortex extraction program executed by an arithmetic apparatus that executes a program and causing the arithmetic apparatus to operate as a vortex extraction apparatus comprising: a data acquisition section that acquires flow velocity data at a plurality of points distributed in a fluid space; a first numerical data calculation section that calculates, for each vertex of each of voxels into which the inside of the fluid space is split, first numerical data indicating whether the vertex is located inside or outside a vortex region in the fluid, based on the flow velocity data acquired by the data acquisition section; a text data creation section that determines a center line of the vortex in the fluid based on the flow velocity data acquired by the data acquisition section and determines the size of the vortex region around the center line based on the first numerical data on each vertex calculated by the first numerical data calculation section, the text data creation section creating text data indicative of the center line and the size; a polygon count reduction processing section that counts the number of polygons defining a surface of the vortex region and each including an isosurface of the first numerical data calculated by the first data calculation section and integrates the plurality of voxels with new voxels into which the fluid space is roughly split until the number of polygons is equal to or smaller than a set value, to decimate the first numerical data and thereby obtain first numerical data corresponding to each vertex of each of the new voxels; and a data output section that outputs the text data created by the text data creation section and the first numerical data obtained by the polygon count reduction processing section and corresponding to each vertex of each of the new voxels. A vortex extraction method, comprising: acquiring flow velocity data at a plurality of points distributed in a fluid space; calculating, for each vertex of each of voxels into which the inside of the fluid space is split, first numerical data indicating whether the vertex is located inside or outside a vortex region in the fluid, based on the acquired flow velocity data; determining a center line of the vortex in the fluid based on the acquired flow velocity data, determining the size of the vortex region around the center line based on the calculated first numerical data on each vertex, and creating text data indicative of the center line and the size; counting the number of polygons defining a surface of the vortex region and each including an isosurface of the calculated first numerical data, and integrating the plurality of voxels with new voxels into which the fluid space is roughly split until the number of polygons is equal to or smaller than a set value, to decimate the first numerical data and thereby obtain first numerical data corresponding to each vertex of each of the new voxels; and outputting the created text data and the obtained first numerical data corresponding to each vertex of each of the new voxels. A vortex extraction display system comprising: a vortex extraction apparatus that creates data indicative of a vortex in a fluid space based on flow velocity data on each of a plurality of points distributed in the fluid space; and a vortex display apparatus that receives the data created by the vortex extraction apparatus to display the vortex, wherein the vortex extraction apparatus comprises: a data acquisition section that acquires flow velocity data at a plurality of points distributed in a fluid space, a first numerical data calculation section that calculates, for each vertex of each of voxels into which the inside of the fluid space is split, first numerical data indicating whether the vertex is located inside or outside a vortex region in the fluid, based on the flow velocity data acquired by the data acquisition section, a text data creation section that determines a center line of the vortex in the fluid based on the flow velocity data acquired by the data acquisition section and determines the size of the vortex region around the center line based on the first numerical data on each vertex calculated by the first numerical data calculation section, the text data creation section creating text data indicative of the center line and the size, a polygon count reduction processing section that counts the number of polygons defining a surface of the vortex region and each including an isosurface of the first numerical data calculated by the first data calculation section and integrates the plurality of voxels with new voxels into which the fluid space is roughly split until the number of polygons is equal to or smaller than a set value, to decimate the first numerical data and thereby obtain first numerical data corresponding to each vertex of each of the new voxels, and a data output section that outputs the text data created by the text data creation section and the first numerical data obtained by the polygon count reduction processing section and corresponding to each vertex of each of the new voxels. This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2009-285535, filed on Dec. 16, 2009, the entire contents of which are incorporated herein by reference. FIELD [0002] The embodiment discussed herein is related to a vortex extraction apparatus and method in which vortices in a fluid space are extracted. Furthermore, the embodiment is related to a non-transitory computer-readable storage medium that stores a vortex extraction program executed by an arithmetic apparatus that executes the program and causing the arithmetic apparatus to operate as a vortex extraction apparatus. Moreover, the embodiment is related to a vortex extraction display system including a vortex extraction apparatus and a vortex display apparatus that displays vortices extracted by the vortex extraction apparatus. BACKGROUND [0003] A technique to extract and visualize vortices in a fluid space is preferable for determining the behavior of a fluid in various fields including, for example, the behavior of wind around wings of an airplane, water flows around screws of a ship, and air flowing through the human trachea. Various techniques to extract vortices have been proposed in order to realize visualization of the vortices. A major challenge to the visualization technique is to handle a very large amount of data in order to extract vortices accurately. Thus, a high-speed large-sized arithmetic machine that handles a large amount of data at a high speed is preferably used. The amount of calculation for vortex extraction is preferably reduced. However, a greater challenge is the level of the visualization. In association with a large amount of data used for the calculation for the vertex extraction, a large amount of data is used to express extracted vortices. Thus, the visualization process is difficult to achieve unless a computer is used which is specified so as to demonstrate the same level of performance as that of the computer used for the vortex extraction. That is, visualization with an insufficiently specified computer leads to slow responses to operations such as rotation, movement, enlargement, and reduction of displayed vortices. Thus, obtaining preferable display is difficult. Furthermore, the vortices vary momentarily. Hence, temporal variations in vortices are preferably determined in order to know the behavior of the vortices. However, owing to the large amount of data used for the vortex extraction, visualizing temporal variations in vortices is difficult. When the data is decimated for visualization in order to achieve a high-speed visualization process, important phenomena may be lost by the decimation and fail to be displayed despite having spent the calculation cost to accurately extract the vortex. Thus, such phenomena may be missed. The followings are documents related to the vortex extraction, documents disclosing examples of a region splitting method for allowing a large amount of data to be handled, and documents to be refer to in the description provided later. (1) Japanese Laid-Open Patent Publication No. 2001-132700 (2) Japanese Laid-Open Patent Publication No. 2005-309999 (3) Japanese Laid-Open Patent Publication No. 2000-339297 (4) D. Sujudi, R. Haimes, IDENTIFICATION OF SWIRLING FLOW IN 3-D VECTOR FIELDS, Paper of American Institute of Aeronautics and Astronautics, 1995 (5) T. Weinkauf, Cores of Swirling Particle Motion in Unsteady Flows, Paper of IEEE, 2007 (6) On the identification of a vortex, Jinhee Jeong and Fazle Hussain, Jaurnal of fluid mechanics, vol. 285, pp 64-94, 1995 (7) Marching cubes: A high resolution 3 D surface construction algorithm, William E. Lorensen, Harvey E. Cline, Proceedings of the 14th annual conference on Computer graphics and interactive techniques, Pages: 163-169, 1987 SUMMARY [0014] According to an aspect of the invention, a vortex extraction apparatus includes: a data acquisition section that acquires flow velocity data at points distributed in a fluid space; a first numerical data calculation section that calculates, for each vertex of each of voxels into which the inside of the fluid space is split, first numerical data indicating whether the vertex is located inside or outside a vortex region in the fluid, based on the flow velocity data acquired by the data acquisition section; a text data creation section that determines a center line of the vortex in the fluid based on the flow velocity data acquired by the data acquisition section and determines the size of the vortex region around the center line based on the first numerical data on each vertex calculated by the first numerical data calculation section, the text data creation section creating text data indicative of the center line and the size; a polygon count reduction processing section that counts the number of polygons defining a surface of the vortex region and each including an isosurface of the first numerical data calculated by the first data calculation section and integrates the voxels with new voxels into which the fluid space is roughly split until the number of polygons is equal to or smaller than a set value, to decimate the first numerical data and thereby obtain first numerical data corresponding to each vertex of each of the new voxels; and a data output section that outputs the text data created by the text data creation section and the first numerical data obtained by the polygon count reduction processing section and corresponding to each vertex of each of the new voxels. The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed. BRIEF DESCRIPTION OF DRAWINGS [0022] FIG. 1 is a schematic diagram illustrating an example of a system including a large-scale computer according to an embodiment of a vortex extraction apparatus according to the present invention; [0023] FIG. 2 is a diagram schematically illustrating a vortex extraction program according to an embodiment of the present invention; [0024] FIG. 3 is a diagram schematically illustrating an embodiment of the vortex extraction apparatus according to the present invention; [0025] FIG. 4 is a flowchart schematically illustrating an embodiment of a vortex extraction method according to the present invention; [0026] FIG. 5 is a diagram schematically illustrating the internal structure of the large-scale computer; [0027] FIG. 6 is a flowchart illustrating steps of the vortex extraction program executed by the large-scale computer; [0028] FIG. 7 is a diagram illustrating an interpolation process; [0029] FIG. 8 is a diagram illustrating an example of an algorithm for determining the size of a pixel (voxel); FIG. 9 is a diagram illustrating a region splitting process; FIG. 10 is a diagram illustrating a vorticity calculation method; FIG. 11 is a diagram illustrating a vortex region extraction process; FIG. 12 is a flowchart illustrating the detailed flow of a vortex center segment creation process (S28) in FIG. 6 FIG. 13 is a schematic diagram of a voxel; FIG. 14 is a diagram illustrating a rotation determination algorithm; FIG. 15 is a diagram illustrating an additive rotation determination algorithm; FIG. 16 is a diagram illustrating a polygon count reduction process; FIG. 17 is a diagram illustrating an example of vortex display; FIG. 18 is a diagram translucently illustrating vortices as seen from a direction in which the center line extends in a lateral direction; FIG. 19 is a diagram translucently illustrating vortices as seen from a direction in which the center line extends perpendicularly to the sheet of FIG. 19; [0041] FIG. 20 is a diagram illustrating the display screen in FIG. 18 on which the center line and lines illustrating the vortex region are further superimposed; [0042] FIG. 21 is a diagram illustrating the display screen in FIG. 19 on which the center line and lines illustrating the vortex region are further superimposed; and [0043] FIG. 22 is a diagram illustrating a modification of the present embodiment. DESCRIPTION OF EMBODIMENTS [0044] Embodiments of the present invention will be described below. [0045] FIG. 1 is a schematic diagram illustrating an example of a system including a large-scale computer according to an embodiment of a vortex extraction apparatus according to the present invention. Here, a large-scale computer 10, a file server 20, a visualization computer 30 are connected together via a communication line network 40 such as the Internet. The large-scale computer 10 corresponds to an embodiment of the vortex extraction apparatus according to the present invention, and arithmetically processes a large amount of data at a high speed. The file server 20 is an apparatus in which there are filed: flow velocity data on which vortex extraction calculations carried out by the large-scale computer 10 are based and data indicative of the results of the vortex extraction calculations carried out by the large-scale computer 10. The visualization computer 30 is an example of a vortex display apparatus and receives data indicative of the results of vortex extraction calculations carried out by the large-scale computer 10 to display the results on a display screen 31. The visualization computer 30 may be a commercially available personal computer (hereinafter referred to as the "PC"). [0050] FIG. 1 illustrates only one visualization computer 30. However, this represents all such computers, and multiple or many visualization computers 30 may be provided. Here, it is assumed that data on the flow velocities measured at multiple points distributed in a fluid space is stored in the file server 20, the data being indicative of the distribution of the flow velocities in the fluid space. The flow velocity data may be actually measured by carrying out, for example, wind tunnel experiments or may be obtained through simulation performed by a simulation computer (not illustrated). Alternatively, the large-scale computer 10 illustrated in FIG. 1 may also serve as a simulation computer. The flow velocity data stored in the file server 20 and indicating the flow velocity distribution in the fluid space is input to the large-scale computer 10 (arrow A). The large-scale computer 10 then carries out a vortex extraction calculation. Data indicative of the result of the vortex extraction calculation performed by the large-scale computer 10 is transmitted to the file server 20 again. The data is temporarily stored in the file server 20 (arrow B). Thereafter, the data stored in the file server 20 and indicating the vortex extraction calculation result is transmitted to the visualization computer 30 in response to a request from the visualization computer 30 (arrow C). The visualization computer 30 receives the data indicative of the vortex extraction calculation result and displays vortices based on the data. In the file server 20, data on the flow velocity distribution, which varies momentarily, is accumulated in a time series manner. The large-scale computer 10 may extract vortices at each point in time. Thus, the visualization computer 30 may display momentary variations in vortices (occurrence, extinction, confluence, branching, and the like). Furthermore, the vortex at a certain point in time may be displayed such that, for example, the vortex is redirected, enlarged, or reduced. [0055] FIG. 2 is a diagram schematically illustrating a vortex extraction program according to an embodiment of the present invention. A vortex extraction program 100 illustrated in FIG. 2 is installed in and executed by the large-scale computer 10. The vortex extraction program 100 allows the large-scale computer 10 to operate as an example of the vortex extraction apparatus according to the present invention. The vortex extraction program 100 includes a data acquisition section 110, an interpolation operation section 120, a first numerical data calculation section 130, a second numerical data calculation section 140, a text data creation section 150, a polygon count reduction processing section 160, and a data output section 170. The sections 110 to 170 are program sections included in the vortex extraction program 100. The operations of the sections 110 to 170 will be described later. [0057] FIG. 3 is a diagram schematically illustrating an embodiment of the vortex extraction apparatus according to the present invention. The vortex extraction apparatus illustrated in FIG. 3 includes a data acquisition section 210, an interpolation operation section 220, a first numerical data calculation section 230, a second numerical data calculation section 240, a text data creation section 250, a polygon count reduction processing section 260, and a data output section 270. The sections 210 to 260 illustrated in FIG. 3 indicate functions implemented in the large-scale computer 10 when the program sections 110 to 170 included in the vortex extraction program 100 illustrated in FIG. 2 and having the same names as those in FIG. 3 are executed in the large-scale computer 10. Although the same names are used in both FIGS. 2 and 3, the sections 110 to 170 in FIG. 2 refer only to software, whereas the sections 210 to 270 in FIG. 3 refer to functions implemented in the large-scale computer 10 by a combination of the program sections illustrated in FIG. 2 and having the same names with hardware, OS (Operating System), and the like. Thus, here, the description of the operations of the sections 210 to 270 in FIG. 3 also serves as the description of the sections 110 to 170 in FIG. 2 The data acquisition section 210 included in the vortex extraction apparatus 200 illustrated in FIG. 3 provides a function to acquire data on flow velocities measured at multiple points distributed in a fluid space from which vortices are to be extracted, from the file server 20 illustrated in FIG. 1 via the communication line network 40. Furthermore, the interpolation operation section 220 executes an interpolation operation based on the flow velocity data acquired by the data acquisition section 210 to calculate flow velocity data on each vertex of each of multiple first voxels resulting from division of the fluid space into structured grids. Additionally, the first numerical data calculation section 230 calculates first numerical data indicating whether each vertex of each of the multiple first voxels into which the inside of the fluid space is split is located inside or outside a fluid vortex region. The first numerical data calculation section 230 calculates the first numerical data based on the flow velocity data acquired by the data acquisition section 210. Moreover, the second numerical data calculation section 240 calculates second numerical data indicative of the intensity of vortices in each first voxel included in a vortex region determined based on the first numerical data calculated by the first numerical data calculation section 230. The second numerical data calculation section 240 calculates the second numerical data based on the flow velocity data acquired by the data acquisition section 210. The second numerical data calculation section 240 may calculate the second numerical data on each voxel, either only for the inside of the vortex region or for the whole fluid space without determination of whether each vertex of the voxel is located inside or outside the vortex region. Furthermore, the text data creation section 250 determines the center line of vortices in the fluid based on the flow velocity data acquired by the data acquisition section 210. Additionally, the text data creation section 250 determines the size of the vortex region around the center line based on the first numerical data on each vertex calculated by the first numerical data calculation section 230. Then, the text data calculation section 250 creates text data indicative of the center line and size. Moreover, the polygon count reduction processing section 260 counts the number of polygons defining the surface of the vortex region, such polygons including isosurfaces of the first numerical data calculated by the first data calculation section 230. Then, the polygon count reduction processing section 260 integrates the multiple first voxels into second voxels into which the fluid space is roughly split until the number of polygons is equal to or smaller than a set value (a "set value" 261 illustrated in FIG. 3 ). That is, the first numerical data is decimated such that only the first numerical data corresponding to the vertices of the second voxels remains. Based on the second numerical data calculated by the second numerical data calculation section 240, the polygon count reduction processing section 260 further creates second numerical data corresponding to each of the second voxels resulting from the integration. Here, the "set value" 261 may be pre-specified for each visualization computer 30 (see FIG. 1 ) from the visualization computer 30. Alternatively, the "set value" 261 may be a uniform value set via the large-scale computer 10 regardless of the type of the visualization computer 30 or the like, so as to allow an average PC to carry out a visualization process at a high speed. Alternatively, the "set value" 261 may be contained in the vortex extraction program 100 illustrated in FIG. 2, as a constant. Moreover, the data output section 270 outputs the text data created by the text data creation section 250. Furthermore, the data output section 270 outputs the first numerical data corresponding to each vertex of each of the second voxels obtained by the polygon count reduction processing section 260. The data output section 270 outputs not only the text data and first numerical data but also the second numerical data corresponding to the second voxels. As described above, the data output by the vortex extraction apparatus 200 is temporarily stored in the file server 20 and then transmitted to the visualization computer 30. The visualization computer 30 displays vortices on the display screen 31 based on the data. [0067] FIG. 4 is a flowchart schematically illustrating an embodiment of a vortex extraction method according to the present invention. The vortex extraction method illustrated in FIG. 4 is implemented by executing the vortex extraction program 100 illustrated in FIG. 2 , in the large-scale computer 10 illustrated in FIG. 1 . The vortex extraction method includes a data acquisition step (S11), an interpolation operation step (S12), a first numerical data calculation step (S13), and a second numerical data calculation step (S14). The vortex extraction method further includes a text data creation step (S15), a polygon count reduction processing step (S16), and a data output step (S17). The execution contents of steps S11 to S17 are the same as the operations of the sections 210 to 270 of the vortex extraction apparatus 200 illustrated in FIG. 3 . Here, duplicate descriptions are omitted. Now, a further detailed embodiment will be described. Also in the embodiment described below, the general configuration of the system in FIG. 1 is used without any change. [0071] FIG. 5 is a diagram schematically illustrating the internal configuration of the large-scale computer illustrated in FIG. 1 The large-scale computer 10 illustrated in FIG. 5 includes multiple (in this case, four) CPUs (Central Processing Units) 11A to 11D that execute programs, and a memory 12 shared by the CPUs 11A to 11D. The multiple CPUs 11A to 11D serve to extract vortices in each of the regions into which the fluid space is split. In this case, the multiple CPUs carry out a vortex extraction process on the respective areas in the fluid space. This enables high-speed arithmetic processing. [0073] FIG. 6 is a flowchart illustrating the steps of the vortex extraction program executed by the large-scale computer 10 illustrated in FIGS. 1 and 5. In this case, processing starting with a data input process (S21) and ending with a data output process (S30) as illustrated in FIG. 6 is executed. Here, by way of example, the data input process (S21), an interpolation process (S22), a region splitting process (S23), and the data output process (S30) are carried out by the CPU 11A, one of the multiple CPUs 11A to 11D illustrated in FIG. 5 . For the other processes (S24 to S29), the other multiple CPUs 11B to 11D execute a calculation for the respective regions into which the fluid space is split. Flow velocity data that is momentarily varying time series data is input to the large-scale computer 10 illustrated in FIGS. 1 and 5. When the CPU 11A finishes the region splitting process (S23) on flow velocity data on a certain point in time, the CPUs 11B to 11D carry out a vortex extraction process on the respective regions for the same point in time. In the meantime, the CPU 11A executes the data input process (S21) and the like on flow velocity data on the next point in time while the CPUs 11B to 11D are carrying out the vortex extraction process on the respective regions. When the CPUs 11B to 11D finish the vortex extraction process (S23 to S29) for a certain point in time, the CPU 11A outputs the results of the extraction process thereof (S30). As described above, in the large-scale computer 10, the multiple CPUs 11A to 11D play the respective rolls to process the respective momentarily varying flow velocity data in parallel. In the data input process (S21) in the flowchart illustrated in FIG. 6 , processing corresponding to the data acquisition section 210 according to the embodiment (represented by the vortex extraction apparatus 200 illustrated in FIG. 3 ) is executed. That is, in the data input process (S21), the large-scale computer 10 sequentially receives the flow velocity data stored in the file server 20 as time series data and loads the received flow velocity data into the vortex extraction program illustrated in FIG. 6 Furthermore, in the interpolation process (S22), the large-scale computer 10 carries out processing corresponding to the interpolation operation section 220 of the vortex extraction apparatus 200 illustrated in FIG. 3 [0078] FIG. 7 is a diagram illustrating the interpolation process. Part (A) of FIG. 7 is a diagram illustrating an example of an unstructured grid obtained before an interpolation process. Part (B) of FIG. 7 is a diagram illustrating the unstructured grid in Part (A) of FIG. 7 on which a structured grid is superimposed. Part (C) of FIG. 7 is a structured grid resulting from the interpolation process. FIGS. 7(A) to 7(C) each illustrate, in the center, a cross section of wings of an airplane obtained when the wings are simulated. The fluid space (in this case, the space around the periphery of the wings) is three-dimensional, but is two-dimensionally illustrated here for simplification. Part (A) of FIG. 7 illustrates a mosaic pattern including irregular polygons. The polygons included in the mosaic pattern have a larger average area (a larger average side length) toward the periphery of the pattern and have a smaller average area (a smaller average side length) toward the wings, located in the center of the pattern. Each vertex of each of the polygons included in the mosaic pattern is associated with flow velocity data indicative of the flow velocity of a fluid (for example, air) flowing through the fluid space; the flow velocity is measured at the position of each vertex. The flow velocity data is indicative of a flow velocity vector including the three-dimensional flow velocity direction and size of the vortices measured at the associated position. The flow velocity data is indicative of the flow velocities measured at the multiple points distributed in the fluid space but is inconvenient to handle in this state. Thus, as illustrated in Part (B) of FIG. 7 , in which two patterns are superimposed on each other, quadrangular (hexahedral) pixels (voxels) orderly partitioned in two directions, that is, x and y directions (in actuality, three directions, that is, x, y, and z directions) are assumed. Then, an interpolation process is carried out based on the flow velocity data associated with each vertex of each of the irregular polygons illustrated in Part (A) of FIG. 7 to determine the flow velocity data associated with each vertex of each pixel (each voxel). Thus, as illustrated in Part (C) of FIG. 7 , flow velocity data is determined on the points arranged at equal intervals in the fluid space. This facilitates the subsequent arithmetic process. The length (the interval between the flow velocity data in the fluid space) of each side of each pixel (each voxel) is not particularly limited. However, excessively large data intervals may prevent phenomena from being extracted. On the other hand, when the data intervals are excessively small, the calculation takes a long time even with the high throughput of the large-scale computer 10. Furthermore, however small the data intervals are, a resolution equal to or higher than that of the flow velocity of the original unstructured grid (Part (A) of FIG. 7 ) is obtained. Thus, the size of the pixel (voxel) is preferably determined such that the pixel has an area (volume) equivalent to the average area (in three dimensions, the average volume) of a large number of polygons included in the unstructured grid (Part (A) of FIG. 7 ). Alternatively, the size of the pixel (voxel) may be determined such that the pixel has a side length equivalent to the average side length of a large number of polygons. [0083] FIG. 8 is a diagram illustrating an example of an algorithm for determining the size of the pixel (voxel). The axis of abscissas in FIG. 8 corresponds to ten portions on a logarithmic axis into which the value between the minimum and maximum values min and max of side length of the polygon appearing in the unstructured grid in Part (A) FIG. 7 is divided. Furthermore, the axis of ordinate in FIG. 8 indicates the number of times a side appears in each portion when the axis of abscissas (side length) is divided into the ten pieces on the logarithmic axis. The length of each side of each of the polygons included in the unstructured grid illustrated in Part (A) of FIG. 7 is determined. In connection with the number of times each side length appears in the corresponding portion on the axis of abscissas in FIG. 8 , the numbers in portions A and B with the minimum and maximum side lengths, respectively, are counted. The number of times the minimum side length appears in the portion A is defined as "a". The number of times the maximum side length appears in the portion B is defined as "b". The representative value of the minimum side length in the portion A is defined as I . The representative value of the maximum side length in the portion B is defined as I . The length of each side of the structured grid (voxel) illustrated in Part (C) of FIG. 7 is defined as L. Then, the length L of each side of the structured grid (voxel) may be calculated in accordance with: = a L min + b L max ( a + b ) ( 1 ) ##EQU00001## For example, the flow velocity data on each vertex of the voxel with the thus calculated side length L is determined by the interpolation process. However, the way of determining the length L mentioned here is an example, and there may be adopted any of various methods of calculating, for example, the simple average of the side lengths in the unstructured grid and the average length of sides in a sample region in a part of the fluid space. For example, Part (A) of FIG. 7 illustrates the unstructured grid in which the average area is smaller (the average side length is shorter) toward the wings, located in the center. Thus, when the vicinity of the wings is of interest, the average side length of a part of the unstructured grid which is located close to the wings may be determined and then, an interpolation process may be carried out so as to form a structured grid including sides with the determined average length. The interpolation process itself is a well-known technique and will not be described below. Furthermore, in the above description, the flow velocity data on the unstructured grid is input. However, when the original flow velocity data corresponds to such a structured grid as illustrated in Part (C) of FIG. 7 , the interpolation process (S22) is undesired and thus skipped. When the interpolation process (S22) is finished, the region splitting process (S23) and then a flow velocity gradient calculation process (S24) are carried out. As described above, the data input and interpolation processes (S21) and (S22) in FIG. 6 are executed by the CPU 11A, one of the multiple CPUs 11A to 11D illustrated in FIG. 5 . In contrast, the flow velocity gradient calculation process (S24) is shared by the other multiple CPUs 11B to 11D so that the CPUs 11A to 11D carry out the flow velocity gradient calculation process on the respective multiple regions into which the fluid space is split. Thus, here, the region splitting process (S23) is carried out. FIG. 9 is a diagram illustrating region splitting. Also in this case, a two-dimensional structured grid is illustrated for simplification. Here, the fluid space D is split into three overlapping regions D1 to D3. A region D12, one of two shaded regions D12 and D23, is included in both regions D1 and D2. The other region D23 is included in both regions D2 and D3. A region thus included in multiple regions is called a sleeve region. The multiple (three) CPUs 11B to 11D illustrated in FIG. 5 carries out, on each of the regions D1 to D3, a part of the vortex extraction process which includes the local eigenvalue calculation process (S25) and the subsequent processes in the flowchart in FIG. 6 . Here, since the sleeve regions D12 and D23 are set as illustrated in FIG. 9, for the sleeve regions D12 and D23, the multiple CPUs sharing the processing of the regions D1 to D3 including the sleeve regions D12 and D23 provide almost the same processing results. Thus, after the processing, the processing results for all the regions of the fluid space D may be smoothly integrated together. In the description below, the region resulting from the splitting and the whole fluid space are not distinguished from each other unless otherwise specified. In the flowchart in FIG. 6 , a combination of the flow velocity gradient calculation process (S24), the local eigenvalue calculation process (S25), and a vortex region extraction process (S27) corresponds to the processing in the first numerical data calculation section 230 in FIG. 3 . However, the vortex region extraction process (S27) is intended to improve the accuracy of vortex region extraction, and the first numerical data calculation process without the vortex region extraction process (S27) may be sufficient. In the flow velocity gradient calculation process (S24), the flow velocity gradient of each vertex of each voxel is calculated based on the following Expression (2). Here, the velocities in the x, y, and z directions in the fluid space are denoted as u, v, and w, respectively. u, v, and w are represented by "q", and x, y, and z are represented by "r". Furthermore, the coordinates of each vertex are representatively denoted as (i, j, k). Moreover, i, j, and k are representatively denoted as "n", and i, j, k, and n may be omitted. In this case, the flow velocity gradient of the vertex (i, j, k) is expressed by: q r = ∂ q ∂ r = q n + 1 - q n - 1 2 Δ r ( 2 ) ##EQU00002## However, Δr denotes the length of each side of each voxel in an r direction. Here, processing is preferably carried out such that q =0 on boundaries with quiescent objects such as boundaries with wings illustrated in the center of FIG. 7 and wall surfaces defining the fluid space. In this case, q is expressed by Expression (2) that is a single expression. However, Expression (2) denotes nine elements of a flow velocity matrix J in the following Expression (3). Once the flow velocity gradient is calculated for each vertex (i, j, k) of each voxel in accordance with Expression (2), the local eigenvalue calculation process (S25) is carried out. In the local eigenvalue calculation process (S25), a flow velocity matrix J containing, as elements, flow velocity gradients determined in accordance with Expression (2) is given as follows. = ( u x u y u z v x v y v z w x w y w z ) ( 3 ) ##EQU00003## Furthermore, S and Q are defined by: = 1 2 ( J + J T ) ( 4 ) Q = 1 2 ( J - J T ) ( 5 ) ##EQU00004## Here, J is a transposed matrix of J. Then, three eigenvalues λ , λ , and λ for a matrix A expressed as follows are calculated. The calculated three eigenvalues λ , λ , and λ are arranged in numerical order to be λ . Each of λ , λ , and λ is any of λ , λ , and λ Here, the eigenvalue λ , corresponding to the median of the three eigenvalues λ , λ , and λ arranged in numerical order, is extracted. λ corresponds to an example of the first numerical data calculated by the first numerical data calculation section 230 in FIG. 3 The eigenvalue λ is calculated for each vertex (i, j, k) of each voxel. When the eigenvalue λ calculated for a certain vertex (i, j, k) is λ <0, the vertex (i, j, k) is determined to be inside the vortex region. When λ ≧0, the vertex (i, j, k) is determined to be outside the vortex region. This vortex region determination algorithm focuses on the fact that when a broad view of a vortex is taken, the vortex makes two-dimensional motion by revolving around the center line, which extends one-dimensionally. The algorithm itself is well-known and is called a λ method proposed by Jeong et al (see "On the identification of a vortex" listed above). Here, when λ <0 is calculated for a certain vertex (i, j, k), λ is associated with the vertex (i, j, k) with the value unchanged. On the other hand, when λ is calculated to be λ≧0, λ is substituted with λ =0, which is then associated with the vertex. Thus, the vortex region is roughly extracted such that a region with λ <0 is determined to be a vortex region, whereas a region with λ =0 is located outside the vortex. Then, an absolute vorticity value calculation process (S26) is carried out. In the absolute vorticity value calculation process (S26), an absolute value |ω| of a vorticity ω is calculated. The absolute vorticity value calculation process (S26) corresponds to the processing carried out by the second numerical data calculation section 240 in FIG. 3 Here, the vorticity ω is calculated for each voxel. When the flow velocity vector of a certain voxel is defined as U, the vorticity ω is given as follows. ω=∇×U (7) Here, ∇ is an operator expressed as follows. ∇ = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) ##EQU00005## Here, x denotes an outer product (vector product). The use of Expression (8) results in Expression (9). = ( u c , v c , w c ) ( 8 ) ω = ∇ × U = ( ∂ w c ∂ y - ∂ v c ∂ z , ∂ u c ∂ z - ∂ w c ∂ x , ∂ v c ∂ x - ∂ u c ∂ y ) ( 9 ) ##EQU00006## Here, U denotes the representative flow velocity vector of the voxels, and the flow velocity data is related to each vertex of each voxel. As one technique, it is conceivable to calculate flow velocity data on the center of the voxel by the interpolation process based on the flow velocity data on each vertex and to define the calculated flow velocity data as U so that the calculation in Expression (9) is carried out. However, here, ω is calculated as follows without separating the interpolation operation from the calculation of ω based on Expression (9). Here, for easy understanding, the description will be provided based on two dimensions. Expression (9) is rewritten for the two dimensions as follows. ω = ∇ × U = ∂ v c ∂ x - ∂ u c ∂ y ( 10 ) ##EQU00007## FIG. 10 is a diagram illustrating a method for calculating the vorticity ω. Here, an x-y two-dimensional plane is assumed for simplification. The velocities of each of the vertices (i, j, k), (i+1, j, k), (i, j+1, k), and (i+1, j+1, k) of a certain voxel (corresponding to a certain pixel herein because the two dimensions are assumed) in the x and y directions are expressed as u and v with the coordinates of the vertex (for example, i, j, and k) as illustrated in FIG. 10. In this case, in accordance with Expression (10), the vorticity of a central point in the pixel is expressed as follows. ω i + 1 2 , j + 1 2 , k ##EQU00008## Then, the following expression is given. ω i + 1 2 , j + 1 2 , k = 1 2 [ v i + 1 , j + 1 - v i , j + 1 Δ x + v i + 1 , j - v i , j Δ x ] + 1 2 [ u i + 1 , j + 1 - u i + 1 , j Δ y + u i , j + 1 - u i , j Δ y ] ( 11 ) ##EQU00009## In actuality, a three-dimensional calculation in accordance with Expression (9) is executed instead of a two-dimensional calculation in accordance with Expressions (10) and (11). Thus, the vorticity ω of each voxel is calculated to determine the intensity of the vortices expressed as the absolute value |ω| thereof. The intensity |ω| of the vortices corresponds to an example of the second numerical data calculated by the second numerical data calculation section 240 in FIG. 3 Then, in the present embodiment, the vortex region extraction process (S27) in the flowchart in FIG. 6 allows, if desired, modification of the eigenvalue λ temporarily determined in the local eigenvalue calculation process (S25) and indicating whether the vertex is located inside (λ <0) or outside (λ =0) the vortex region. FIG. 11 is a diagram illustrating the vortex region extraction process. Part (A) of FIG. 11 is a diagram schematically illustrating a region with λ <0. Part (B) of FIG. 11 is a diagram illustrating separation of the vortex region based on the value of |ω|. Also in this case, a two-dimensional plane is assumed for simplification. In Part (A) of FIG. 11, it is assumed that the shaded region corresponds to the region with λ <0 (inside the vortex region). The condition of vortices is unknown at this stage, but it is assumed that the illustrated two vortices are present. Here, for the inside of a region with an eigenvalue calculated in the local eigenvalue calculation process (S25) to be smaller than zero, the absolute vorticity value determined in the absolute vorticity value calculation process (S26), that is, the vortex intensity |ω|, is checked. Then, |ω| is compared with a certain threshold. λ of voxels with |ω| equal to or smaller than the threshold is replaced with λ =0. For example, here, it is assumed that the voxel in the shaded region in Part (B) of FIG. 11 has |ω| equal to or smaller than the threshold. As described above, the vortex region temporarily extracted using λ is separated using |ω| if desired. This allows the vortex region to be more accurately extracted. After the vortex region is extracted as described above, a process of creating a segment representing the vortex center (S28) in FIG. 6 is carried out. The process of creating a segment representing the vortex center (S28) corresponds to the processing carried out by the text data creation section 250 in FIG. 3 FIG. 12 is a flowchart illustrating the detailed flow of the process of creating a segment representing the vortex center (S28). Furthermore, FIG. 13 is a schematic diagram of a voxel. As illustrated in FIG. 13, each vertex of the voxel is associated with flow velocity data expressed by such a vector as illustrated by an arrow in FIG. 13. Here, the flow velocity data on one vertex is representatively expressed as (u, v, w) ("u" denotes the flow velocity in the x direction, "v" denotes the flow velocity in the y direction, and "w" denotes the flow velocity in the z direction). In the process of creating a segment representing the vortex center as illustrated in FIG. 12, processing described below is carried out on two surfaces (I and II surfaces) directed in the z direction, two surfaces directed in the y direction, and two surfaces directed in the x direction in this order. Here, the two surfaces (I and II surfaces) directed in the z direction will be described. The same processing is executed on the two surfaces directed in each of the y and x directions except for the direction. First, for the I surface, one of the two surfaces (I and II surfaces) directed in the z direction, the flow velocity vectors of the four vertices corresponding to the four corners of the I surface are mapped to the I surface and processed. In other words, the program neglects a z direction component (w), one of the flow velocity vectors (u, v, w) of the four vertices. The program determines whether or not the fluid is rotating, based on the x and y components (u, v). A criterion for determination of whether or not the fluid is rotating will be described later. Upon determining that the fluid is rotating on the I surface, the program carries out similar determination on the II surface, the other surface directed in the same z direction. That is, the program neglects a (z) component 2, one of the flow velocity vectors (u, v, w) of the four vertices corresponding to the four corners of the II surface. The program determines whether or not the fluid is rotating in the same direction as the rotation direction on the I surface, based on the x and y components (u, v). Moreover, the program determines whether or not outward and inward flux conditions are the same for the I surface and for the II surface. Upon determining that rotation is occurring in the same direction as that for the I surface and for the II surface and that the outward and inward flux conditions are the same for the I surface and for the II surface, the program assigns a positive constant Vc to the voxel. The constant Vc indicates that the voxel is located on the vortex center axis. The same processing is carried out on voxels in the fluid space for which at least one vertex has λ <0 (this is indicative of the inside of the vortex region). Moreover, the same processing is carried out on the two surfaces directed in the y direction. The same processing is carried out on the two surfaces directed in the x direction. FIG. 12 is a flowchart basically illustrating the above-described processing. The contents of the processing will be described below in accordance with the flowchart. In step S31, on which of the z, y, and x directions processing is to be carried out is determined. The above-described processing is executed on the z direction. In step S32, whether or not the processing has been carried out on all of the three directions, namely the z, y, and x directions, is determined. When the processing has been carried out on all of the three directions, namely the z, y, and x directions, the program proceeds to step S42. The processing in step S42 will be described later. In the following description, it is assumed that the z direction is set in step S31. In step S33, one of the unprocessed voxels is set to be the next processing target. In step S34, the program determines whether or not all the voxels have been processed with respect to the direction (here, the z direction) set in step S31. In step S34, upon determining that all the voxels have been processed with respect to the currently set direction (here, the z direction) set in step S31, the program returns to step S31 to set the next direction (for example, the y direction follows the z direction). When an unprocessed voxel is set in step S33, the program proceeds to step S35 via step S34. In step S35, the program determines whether or not at least one of the eight vertices of the voxel set in step S33 meets λ <0. When the eight vertices all have λ2=0 (as described above, λ ≧0 has been replaced with λ =0), the voxel is located outside the vortex region. Thus, the processing is undesired for the voxel. The program returns to step S33 to set the next voxel. Upon determining, in step S35, that at least one of the eight vertices has λ <0, the program proceeds to step S36 to determine whether or not for the I surface, one of the two surfaces (I and II surfaces) directed in the current set direction (here, the z direction), the fluid is rotating. In the determination, as described above, the flow velocity components, in the currently set direction (here, the z direction), of the flow velocity vectors of the four vertices corresponding to the four corners of the I surface are neglected, with the flow velocity components in the remaining two directions taken into account. The determination criterion for rotation will be described further later. Upon determining that rotation is not occurring on the I surface, the program suspends the processing executed on that direction (here, the z direction) of the voxel, and returns to step S33 (step S37). Upon determining that rotation is occurring on the I surface, the program then carries out similar determination on the other surface (II surface) directed in the same direction (here, the z direction) (step S38). Upon determining that rotation is not occurring on the II surface (step S39) and upon determining that rotation is occurring on the II surface but that the rotation is not in the same direction as that on the I surface (the same direction also for the outward and inward fluxes described later) (step S40), the program returns to step S33. Upon determining that the rotation on the II surface is in the same direction as that of the rotation on the I surface (the same direction also for the outward and inward fluxes described), the program proceeds to step S41. Then, the program sets, for the current processing target voxel, a value Vc indicating that the voxel is located on the center axis. The program then returns to step S33. In step S33, the next one of the unprocessed voxels is set to be the next processing target with respect to the direction corresponding to the current processing (here, the z direction). When the above-described processing has been carried out on all of the three directions, the z, y, and x directions, the program proceeds to step S42. In step S42, in the voxels for which the value Vc, indicating that the voxel is located on the center axis, is set, a spline function is applied to a group of almost one-dimensionally arranged voxels to calculate the vortex center line. The condition for the group of almost one-dimensionally arranged voxels is used because besides the one-dimensionally arranged voxels, another secondary center line may be present away from the voxels. When the vortex center line is determined, the program then proceeds to step S43. Here, when a plane is assumed which crosses the center line at a right angle at a longitudinally central point on the center line, the minimum and maximum distances r and r from the center line (the intersection between the center line and the plane) to the surface of the vortex region (region with λ <0) on the plane are determined. Then, a region generally representing the vortex region is determined based on the minimum and maximum distances r and r . A segment is then created which corresponds to extension of the region along the center line and which represents the vortex center. In the process of creating a segment representing the vortex center as illustrated in FIG. 6 (S28), data on the segment is created which is based on the vortex center line and vortex region determined as described above and which represents the vortex center. The data on the segment representing the vortex center includes a function indicative of the center line and data including r and r and indicating the vortex region. The data is created as text data unassociated with the individual polygons. Now, the process of determining whether or not rotation is occurring in steps S36 and S38 in FIG. 12 will be described. FIG. 14 is a diagram illustrating a rotation determination algorithm. Also in this case, for simplification, each voxel is illustrated as a pixel mapped onto a two-dimensional plane. In both Part (A) and Part (B) of FIG. 14, a fluid is determined to be rotating in a shaded voxel and not to be rotating in a voxel on the immediate right of the shaded voxel. In Part (A) of FIG. 14, the flow velocity vectors of the four corners of the shaded pixel are directed in the same rotating direction (here, clockwise rotating direction) outside and along each side of the pixel with an angle of at most 90° to the side. In this case, the central point of an outward flux of a vortex is present in the pixel. Furthermore, in Part (B) of FIG. 14, the flow velocity vectors of the four corners of the shaded pixel are directed inward of each side and in the same rotating direction along each side with an angle of at most 90° to the side. In this case, the central point of an inward flux of a vortex is present in the pixel. In FIGS. 14(A) and 14(B), the pixel on the immediate right of the shaded pixel fails to meet both the outward and inward flux conditions described above. Here, only one of the I and II surfaces directed in the same direction (for example, the z direction) has been described. However, if both outward and inward fluxes are rotating in the same direction in the same manner on both the I and II surfaces directed in the same direction, then for the pixel (voxel), the value Vc indicating that the voxel is located on the center line of the vortex is set. However, the rotation determination algorithm described with reference to FIGS. 14(A) and 14(B) infrequently fails to determine the voxel actually located on the center line to be located on the center line. However, here, no problem occurs because the center line is specified through application of the spline function. That is, the spline function works continuously even though the value Vc infrequently fails to be set for the voxel on the center line. However, the determination accuracy for the voxel on the center line may be increased by adding a determination algorithm described below to the determination algorithm described with reference to FIGS. 14(A) and 14(B). FIG. 15 is a diagram illustrating an additive rotation determination algorithm. In both Part (A) and Part (B) of FIG. 15, the fluid is determined to be rotating in the lower right pixel (voxel). However, Part (A) and Part (B) of FIG. 15 illustrate only one of the I and II surfaces, and both the I and II surfaces preferably meet the rotation conditions as described above. The lower right quadrangular pixels in both Part (A) and Part (B) of FIG. 15 fail to meet the rotation determination conditions in both Part (A) and Part (B) of FIG. 14. Thus, the opposing corners of the pixel are connected together with lines to form triangles. Paired opposing corners are connected together with a line to form two triangles. Other paired opposing corners are connected together to form two other triangles. The same determination criteria as those described with reference to FIG. 14 are applied to each of the four triangles. In other words, when the shaded triangles in Part (A) and Part (B) of FIG. 15 are noted, the flow velocity vectors of the three vertices of each of the triangles are directed in the same rotating direction along the respective sides with an angle of at most 90° to the sides. Furthermore, the conditions for the outward and inward fluxes (both Part (A) and Part (B) of FIG. 15 illustrate an outward flux) are met. The value Vc is also set for voxels in which such triangles are present. The determination accuracy for the voxel on the center line of the vortex may be increased by adding the determination algorithm in FIG. 15 to the determination algorithm in FIG. 14. Now, a polygon count reduction process (S29) in the flowchart in FIG. 6 will be described. FIG. 16 is a diagram illustrating the polygon count reduction process. Also in this case, for simplification, polygons are illustrated as two-dimensional pixels. Furthermore, here, also for simplification, only 4×4 polygons (pixels), namely, a total of 16 polygons, are illustrated. Each vertex of each polygon (pixel) is associated with λ . Here, the subscript 2 of λ is omitted, and i, j, and the like which are indicative of coordinates are illustrated as subscripts. However, the representative of all the values λ of the coordinates carries the subscript 2 and is illustrated as λ . As described above, each λ is either λ <0 or λ =0. Here, a negative number with a small absolute value, for example, -0.1, is set, and a Marching Cube method is applied. Thus, polygons each formed of an isosurface with λ =-0.1 are generated. The polygons are expressed as segments on FIG. 16. The Marching Cube method itself is a well-known technique described in "Marching cubes: A high resolution 3D surface construction algorithm" listed above. The polygons each formed of an isosurface with λ =-0.1 define the surface of the vortex region. In FIG. 16, oblique solid lines correspond to the respective polygons. In a fluid space in which three-dimensional voxels are arranged, each polygon forms a triangle. The number of the polygons thus generated is counted. Here, as described with reference to FIG. 5 , the multiple CPUs 11B to 11D share the region to perform the processing (see FIG. 9). Thus, here, the CPU 11A receives reports of the polygon counts in the regions D1 to D3, from the CPUs 11B to 11D, assigned to the regions D1 to D3, respectively. The CPU 11A aggregates the polygon counts so as to avoid doubly counting the polygon counts in the sleeve regions D12 and D23. If the aggregate polygon count exceeds a set value, the CPU 11A instructs the CPUs 11B to 11D to roughly split the fluid space and integrate resultant new voxels with the above-described voxels until the polygon count is equal to or smaller than the set value. Specifically, the 4×4 voxels illustrated in FIG. 16 are integrated with adjacent 2×2 voxels to obtain new voxels indicated by an alternate long and short dash line in FIG. 16. The integration with the new voxels means that the resolution is reduced by removing the values λ of the points other than the vertices of the new voxels, with only the values λ of the vertices of the new voxels left. For the new voxels thus created, isosurfaces (polygons) with λ =-0.1 are created and the numbers of the polygons are aggregated, as described above. The numbers of the original voxels arranged in the x, y, and z directions are defined as m, n, and o, respectively. Then, with the numbers of the voxels sequentially reduced to, for example, m/2, n/2, and o/2; m/3, n/3, and o/3; . . . , that is, with the resolution gradually reduced, polygons are repeatedly created until the aggregate number of polygons is equal to or smaller than the set value. Here, the set value means the number (for example, 1,000,000) of polygons that may be processed by the visualization computer 30 illustrated in FIG. 1 , at an operation speed sufficient for visualization of the vortex. The set value may be pre-specified by the visualization computer 30 or may be a uniformly set value at which a standard PC preferably used as a visualization computer may process the polygons at a sufficient operation speed. Once the polygon count becomes equal to or smaller than the set value, the intensity |ω| of the vorticity is set for each of the voxels leading to the polygon count. The intensity |ω| of the vorticity has been determined for each of the original, non-integrated voxels (step S26 in FIG. 6 ). For each of the integrated voxels, a representative value is set, such as the average value or median of the vorticity intensities |ω| of the multiple original voxels contained in the integrated Such a polygon count reduction process (step S29 in FIG. 6 ) reduces the resolution for λ and |ω| in the fluid space but enables the visualization computer 30 to provide smooth display. This allows temporal variations in vortices (occurrence, extinction, branching, confluence, and the like) to be easily observed, and improves responsiveness to rotation used to allow the vortices to be viewed at a different angle and other operations such as enlargement, reduction, and movement of the vortices. Furthermore, the display resolution decreases from the display resolution obtained during the vortex extraction operation. However, the process of creating text data indicative of the vortex center line and region is carried out before the resolution is reduced. Thus, such important transition phenomena as hindered from appearing by reducing the resolution may be appropriately In the data output process (S30) in FIG. 6 , processing corresponding to the data output section 270 in FIG. 3 is carried out. That is, here, the text data indicative of the segment representing the vortex center and created in the process of creating the segment (S28) is output. Moreover, in the data output process (S30), the following data is also output: λ associated with each vertex of each of the new voxels resulting from the integration in the polygon count reduction process (S29) and |ω| associated with each of the new voxels resulting from the integration. Furthermore, the positional coordinates of the voxels located on the vortex center line (voxels associated with the value V ) may be output. However, in the present embodiment, data indicative of polygons created during the execution of the polygon count reduction process (S29) is not output. This is because data indicative of polygons is large in amount and is disadvantageous in terms of communication time and memory capacity for storage and because polygons may be created by the visualization computer 30 at a sufficient operation speed if desired. The data thus output by the large-scale computer 10 illustrated in FIG. 1 is temporarily stored in the file server 20. The data is then transmitted to the visualization computer 30 in response to a request from the visualization computer 30. Based on the received data, the visualization computer 30 displays, on the display screen 31, the segment representing the vortex center based on the text data and vortices themselves based on the voxel data. Although the display mode of the visualization computer 30 is not particularly limited, typical display modes of the visualization computer 30 will be described below. The minimum display includes the display of the segment representing the vortex center, based on the text data indicative of the segment. The minimum display allows the occurrence, extinction, confluence, branching, movement, and the like of vortices to be viewed. Detailed display includes the display of the surface shape of the vortices on the display screen 31 based on polygon data created by the visualization computer 30 using the Marching Cube method. Then, the detailed shape of the vortices, which is unclear from the display, based on the text data, of the segment representing the vortex center, may be determined. Furthermore, in addition to the display of the surface shape of the vertices based on polygons, the display of |ω| associated with the voxel containing the surface polygons may be provided in a color corresponding to the magnitude of the value |ω|. For example, |ω| with a large value is displayed in red, whereas |ω| with a small value is displayed in blue. Then, the vortex intensity is displayed in terms of a color distribution, enabling the direction of the vortex center to be understood. Furthermore, in addition to the display of the surface shape of the vertices based on polygons, the translucent display of the surface shape may be provided so that the vortex center line and region based on the text data may be superimposed on the surface shape. Moreover, in the above-described display modes, a cross section may be displayed so as to allow the inside of the vortices to be Now, display examples will be described with reference to the drawings. FIG. 17 is a diagram illustrating an example of vortex display. Here, a fluid is filled in a cubic. Vortices are illustrated which are extracted based on a flow velocity distribution created by simulating the case where the top surface of the cubic is moved at a constant speed. Furthermore, the magnitude of |ω| of the surface of the vortices may be expressed by shading. FIG. 18 is a diagram translucently illustrating vortices as viewed from a direction in which the center line extends in the lateral direction. FIG. 19 is a diagram translucently illustrating the vortices as viewed from a direction in which the center line extends perpendicularly to the sheet of FIG. 19. FIGS. 18 and 19 illustrate voxels for which the value Vc corresponding to the center line is set. FIGS. 20 and 21 are diagrams corresponding to the display screens in FIGS. 18 and 19 on which the center line and lines representing the segment region representing the vortex center are further According to the present embodiment, the vortices may be displayed in such various modes at a sufficient drawing speed. Furthermore, according to the present embodiment, text data on the segment representing the vortex center is created by calculations with a high resolution obtained before the integration of the voxels. This allows possible missing of preferable information to be avoided. [0171] FIG. 22 is a diagram illustrating a modification of the present embodiment. [0172] FIG. 22 illustrates a large-scale computer that is the same as the large-scale computer 10 illustrated in FIG. 5 . In the description of FIG. 5, data indicative of the flow velocity distribution in the fluid space is stored in the file server 20 (see FIG. 1 ). In contrast, in FIG. 22 , simulation for determining the flow velocity distribution in the fluid space is also performed by the large-scale computer 10. The fluid space is split into regions assigned to the respective CPUs 11A to 11D. Each of the CPUs 11A to 11D first calculates the flow velocity distribution in the corresponding region by simulation, and then extracts vortices. When the vortex extraction is finished, the simulated flow velocity distribution calculation is carried out on the next point in time, and vortices at that point in time are then extracted. The CPUs 11A to 11D in FIG. 22 alternately carry out the simulated flow velocity distribution calculation and the vortex extraction. Thus, in the present case, the large-scale computer 10 may avoid specializing in vortex extraction and may be used for the simulated flow velocity distribution calculation. According to the present embodiment and the like, the text data indicative of the center line and size of the vortex region is created based on the flow velocity data with a relatively high resolution. These preferable pieces of information are created first. Therefore, even when voxel data, polygon data, and the like are reduced to perform display in an apparatus with specifications inferior to those of the vortex extraction apparatus according to the present embodiment and the like, the vortex display process may be carried out at a high speed without missing preferable All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiment of the present invention has been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention. Patent applications by Masahiro Watanabe, Kawasaki JP Patent applications by FUJITSU LIMITED Patent applications in class Fluid measurement (e.g., mass, pressure, viscosity) Patent applications in all subclasses Fluid measurement (e.g., mass, pressure, viscosity) User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20110144928","timestamp":"2014-04-17T14:40:15Z","content_type":null,"content_length":"123522","record_id":"<urn:uuid:9fcf8235-3178-4fc2-8202-fa50a826282e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] What's the deal with inexact numbers? From: Todd O'Bryan (toddobryan at mac.com) Date: Mon Mar 5 17:31:14 EST 2007 I've noticed a strange inconsistency with respect to inexact numbers. The values of e and pi in HtDP Beginning Student are accurate to 15 decimal places: > e > pi But it seems that trigonometric functions are accurate to 16 decimal places. Is this just an unintended inconsistency? How accurate should inexact numbers be? I also noticed the following little interesting tidbit: > (min 2 pi) I guess it makes sense, given that the two numbers have to be compared to one another to convert them to the least exact representation, but does anyone else get the heebie-jeebies about a minimum function returning an element that's not in the set of things it was given to Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2007-March/016689.html","timestamp":"2014-04-18T18:18:50Z","content_type":null,"content_length":"6074","record_id":"<urn:uuid:0d919913-1a16-478e-8260-7011230c12c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring Polynomials of Degree 3 Date: 3/17/96 at 15:50:7 From: Ian Rounthwaite Subject: factoring 1. Factor completely (x+2)3 (to the third) + (x+2)2 (to the second) 2. Factor completely 18x(squared) + 9x - 2 Thank you, Barb R. Date: 3/17/96 at 19:8:42 From: Doctor Jodi Subject: Re: factoring Hi there Barb! Well, in your first problem the first step is to multiply everything out.(By the way, on computers, we usually write exponents like this: 5^2 (read this "5 squared"). So, if we want to find out what (x+2)^3 + (x+2)^2 is, first we multiply (x+2)(x+2) Do you know how to do this? we can also write (x+2)(x+2) as x(x+2) + 2(x+2) which equals x^2+2x + 2x+4 We can simplify this to x^2 + 4x + 4, since we have two terms (2x and 2x) which are both multiplied by x. Does this make sense so far? Well, know we know one half of the problem: (x+2)^3 + (x+2)^2 = (x+2)^3 + x^2 + 4x + 4 (from our multiplication). Now, to find out what (x+2)^3 is, we need to multiply Well, we already know that (x+2)(x+2) = x^2+4x+4 (from our multiplication above). Now we need to multiply that by another (x+2) (x^2 + 4x +4) which we can also write as x(x^2 + 4x + 4) + 2 (x^2 + 4x + 4) (Do you understand why?) Now, multiplying, we get x^3 + 4x^2 + 4x + 2x^2 + 8x + 8 Now, if we add like terms (like 4x^2 and 2x^2, etc) We get x^3 + 6X^2 + 12 x + 8 = (x+2)^3 But we need to put this answer back into our original equation where (x+2)^3 was. So instead of (X+2)^3 + x^2 + 4x + 4 We now have x^3 + 6x^2 + 12x + 8 + x^2 + 4x + 4 ^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^ from (x+2)^3 from (x+2)^2 Okay. At this point, we're going to add like terms again. So after we do this, we have x^3 + 7x^2 + 16x + 12 right? NOW comes the tricky part: the factoring. There are two methods: trial and error (which means, you try a couple of likely I tried trial and error for a while but I didn't come up with an So then I tried division. Maybe you haven't seen division before in this context: it's pretty similar to division with plain old numbers, but much cooler. It looks something like this. I think that our polynomial (x^3 + 7x^2 + 16x + 12) is going to be divisible by x+2 (since we multiplied out some powers of x+2 and added them) So I try this x+2 |x^3 + 7x^2 + 16x +12 I look at x^3 + 7x^2. How many times will x+2 go into x^3 + 7x^2? I guess that it's around x^2 times, since x+2 times x^2 = x^3 + 2x^2 So now I write that down and subtract: x+2 |x^3 + 7x^2 + 16x +12 -(x^3 + 2x^2) Next I bring down the next term: x+2 |x^3 + 7x^2 + 16x +12 -(x^3 + 2x^2) 5x^2 + 16x and guess again: This time, my guess is 5x. Can you see why? x^2 + 5x x + 2 |x^3 + 7x^2 + 16x +12 -(x^3 + 2x^2) 5x^2 + 16x -(5x^2 + 10x) Next, I'll bring down the 12 and finish up with a guess of 6. x^2 + 5x + 6 x+2|x^3 + 7x^2 + 16x +12 -(x^3 + 2x^2) 5x^2 + 16x - (5x^2 + 10x) 6x +12 -(6x +12) Whew! Looks like it worked... We can check by multiplying (x+2) (x^2 + 5x +6) Now we've factored x^3 + 7X^2 + 16x +12 To FULLY factor, we'll need to see if either x+2 or x^2 + 5x +6 have factors. Intuition tells me that x^2 + 5x + 6 has some factors. Think you can handle this part on your own? Do you think that you'll be okay with the second problem now? If you need more help, write us back! -Doctor Jodi, The Math Forum Date: 3/21/96 at 13:58:53 From: "Ian Rounthwaite" Date: Thu, 21 Mar 1996 11:54:28 MST Subject: help Hi - my question ((x+2)^3 + (x+2)^2 - you took me as far as x^2+5x-6 and said to factor more. So I think the answer is x(x+5)-6. I understood your answer for the rest I think. It was very helpful. Thank you. Also, question 2 was 18x^2 +9x-2. (factor completely like the other one). Is the answer 9x(2x+1)-2? Thanks, Barb R. Date: 3/21/96 at 23:53:57 From: Doctor Patrick Subject: Re: help Hi there. Sorry, that's not quite right. When you factor a problem, you have to factor the entire polynomial, not just parts of it, just like when you find the factors of a whole number. If you were going to factor 6, you would say it was 3*2, not 2*2 +2, even though they both equal 6. Likewise, when you factor you need to break it into two parts that when multiplied together will give you the whole polynomial back. I'll show you how to do this part, and then let you do the second on your own, okay? One thing before we start - when you wrote back to us you were using x^2+5x-6 - instead of x^2+5x + 6 which was the correct polynomial. I'll use that to demonstrate the problem. Let me give you the answer first, so you know what we are looking for: (x+3)*(x+2). Do you know how to multiply these? The way you do it is to multiply both of the numbers in the first half by both of the numbers of the second half and add the results when you multiply this you get (x*x) - (2x) +(3x) +(6), which gives us our x^2+5x-6 when we add like terms and multiply the x's. . Now, on to how to get the answer. To get the x^2 we will half to multiply an x times an x, so we can safely say that the factors will look like: (x [+ or -] some number) * (x [+ or -] some other number) Further, we know that the second numbers must be factors of 6 in order to give us the 6 in the final equation. There factors will add up to the 5 which is in the middle term of the polynomial. Since the 6 is positive, it will have to have either both positive, or both negative, factors. And since the 5 is positive, and can only be made by adding at least one positive number (either a positive and a negative, or a positive and a lesser negative) then the factors can only be positive. So now our factors are: (x+ some factor of 6) * (x+ another factor of 6). Since the only factor of six that add up to 5 are 3 and 2 we have for answer (X+3)(x+2). Also - don't forget about the other (x+2) left over from before - that still part of the number too, making the final answer (x+2)*(x+2)*(x+3), or (x+2)^2 * (x+3). Does that make sense to you? One last thing There is an easier way to factor the first problem than our last answer gave. Whenever I have to factor, the first thing I look for is a common factor of all of the terms so I can divide it out and simplify the expression. What do you think it might be in this problem? Let's see - (X+2)^3 could be rewritten as (X+2)(X+2)^2. Can you see why? If you're not sure, try rewriting (X+2)(X+2)^2 without using the expound, and then do the same for (X+2)^3 and compare what you This makes our expression (X+2)(X+2)^2 +(X+2)^2. The (X+2)^2 is common to both terms, so we can factor it out, making our expression (X+2)^2 ((X+2) +1). We can simplify this further by re-writing it as (X+2)^2 (X+2 +1) and then adding the like terms. This gives us (X+2)^2 (X+3), which is the final answer -Doctor Patrick, The Math Forum Date: 4/2/96 at 18:42:30 From: "Ian Rounthwaite" Subject: help Factor completely Barb R. Date: 4/16/96 at 19:27:35 From: Doctor Patrick Subject: Re: help Hello again! To factor 18x^2+9x-2 we need to find two terms which, when multiplied, will give us back our original equation. Whenever I factor this kind of problem, I start by looking for the possible factors for the number before the x^2 term (in this case 18) and the number which is not multiplied by a varible (2 in your problem). Then I look to see which combinations of the two will give me the middle term (9x). For example 18x^2 can be factored into 18x*1x, 9x*2x, 6x*3x, or -18x*-1x, -9x*-x2, and -6x*-3x. -2 can only be factored to -2*1 or -1*2. We now have to find a way to combine the two parts in order to make 9. However we combine it, since the middle term is positive we know that when we start multiplying the largest product will have to be positive also. Do you understand why? Let's start with the first combination and see what happens. (18x+2)(1x-1)=18x^2-17x-2. That didn't work very well, did it? We could try some other combinations with the 18, but I have the feeling it won't work, since we only have the 2 to add to or subtract from it. I don't think that that will be enough, do you? What we are left with at this point are the 9x*2x and 6x*3x combinations. There really isn't a way to tell which will work so a lot of what follows is guesswork and trial and error. You have to play with the different combinations to figure out which will work. Often, you can rule out an entire set of possibilities by looking at some of its results, as we did above. It turns out that the 9x*2x combinations don't work either, but I'll let you test those on your own in order to keep this answer from getting too long. All we have left are 6x*3x combinations to make the 18x^2. I'll let you work out these on your own - good luck! Write us back if you need more help! -Doctor Patrick, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/58544.html","timestamp":"2014-04-17T07:19:59Z","content_type":null,"content_length":"14422","record_id":"<urn:uuid:da18ef07-fc44-43d5-b480-efecc154a398>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Big Theta refers to an asymptotic upper bound. refers to an asymptotic lower bound. refers to an asymptotically tight bound -- something with asymptotic performance. Theta(f(x)) is simultaneously O(f(x)) and Omega(f(x)). See also http://en.wikipedia.org/wiki/Big_O_notation OrderNotation of this page (last edited March 11, 2011) or FindPage with title or text search
{"url":"http://c2.com/cgi/wiki?BigTheta","timestamp":"2014-04-17T05:31:44Z","content_type":null,"content_length":"1515","record_id":"<urn:uuid:89934d93-9c9b-47ca-88db-f98fb67b7fde>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Londonderry, NH Tutor ...Well, the teaching bug has not left and I would like to get back into working with students. I try to make the tutor/student experience rewarding. I am not just here to lecture but to engage you in the learning process to get you to ask the questions that are begging to be asked.I have a PhD in organic chemistry from Montana State University. 11 Subjects: including chemistry, physics, calculus, algebra 1 ...Throughout my time as an undergrad, I helped students with subjects ranging from College Writing, History and Spanish. After I finished my bachelor’s, I worked for a couple of years as a Spanish teacher. I later moved to Spain to study for my Master’s at Saint Louis University Madrid, where I graduated with distinction. 14 Subjects: including Spanish, ESL/ESOL, Italian, grammar ...This experience allowed me to work with students with varying learning styles, and aided me in developing the ability to explain complex scientific and mathematic concepts in ways that each student could understand. My time in academia has taken me through many college-level math and science cou... 8 Subjects: including geometry, algebra 1, algebra 2, elementary math ...However, I continue to teach and tutor students on a part time basis. Within these four years I have been teaching many areas of science with the Goffstown Adult Education Program. Teaching within this program has allowed me to expand my content knowledge in areas outside of biology. 4 Subjects: including biology, anatomy, physiology, genetics ...In addition, I understand the struggles that many students experience in their educational journey. The years of counseling experience with various children/adolescents with different disabilities provides a greater depth of understanding for individuals with learning and emotional problems, whi... 8 Subjects: including biology, grammar, special needs, study skills Related Londonderry, NH Tutors Londonderry, NH Accounting Tutors Londonderry, NH ACT Tutors Londonderry, NH Algebra Tutors Londonderry, NH Algebra 2 Tutors Londonderry, NH Calculus Tutors Londonderry, NH Geometry Tutors Londonderry, NH Math Tutors Londonderry, NH Prealgebra Tutors Londonderry, NH Precalculus Tutors Londonderry, NH SAT Tutors Londonderry, NH SAT Math Tutors Londonderry, NH Science Tutors Londonderry, NH Statistics Tutors Londonderry, NH Trigonometry Tutors Nearby Cities With Tutors Bedford, NH Tutors Derry, NH Tutors Haverhill, MA Tutors Hudson, NH Tutors Lawrence, MA Tutors Litchfield, NH Tutors Lowell, MA Tutors Manchester, NH Tutors Merrimack Tutors Methuen Tutors Nashua, NH Tutors North Andover Tutors Peabody, MA Tutors Salem, NH Tutors Windham, NH Tutors
{"url":"http://www.purplemath.com/Londonderry_NH_tutors.php","timestamp":"2014-04-19T02:30:41Z","content_type":null,"content_length":"23795","record_id":"<urn:uuid:6ae07cca-234b-4d5c-a3dd-55019498c463>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate the Product–moment Correlation Coefficient Edit Article Edited by Caidoz, Teresa, Glutted, Kreidler2 and 16 others The product-moment correlation coefficient allows you to work out the linear dependence of two variables (referred to as x and y). An example in economics might be that you are the owner of a restaurant. For every 10th customer you record the time he stayed in your restaurant (x, in minutes) and the amount spent (y, in dollars). Is it generally true that the long stayers are also the bigger spenders? This would be a positive correlation. Or is it actually the other way around, e.g., the richer the client the less time he takes for his lunch? This would be a negative correlation. In order to shed some light on this mystery you can calculate the product-moment correlation coefficient, r, sometimes known as Pearson's correlation. Note: The equations are for the linear least squares fit which statistically fits the set of data pairs to a straight line. 1. 1 Remove incomplete pairs. In the next steps, use only the observations where both x and y are known. However do not exclude observations just because one of the values equals zero. 2. 2 Summarize the data into the values needed for the calculation. □ n - the number of data. □ Σ(x^2) - the sum of the squares of the x values. □ Σx - the sum of all the x values. □ Σ(x*y) - the sum of each x value multiplied by its corresponding y value. □ Σy - the sum of all the y values. □ Σ(y^2) - the sum of the squares of the y values. 1. 1 Calculate ss[xy], ss[xx] and ss[yy] using these values. □ ss[xy]=Σxy-(ΣxΣy÷n)=283-(12*93/5)=59.8 □ ss[xx]=Σx^2-(ΣxΣx÷n)=40-(12*12/5)=11.2 □ ss[yy]=Σy^2-(ΣyΣy÷n)=2089-(93*93/5)=359.2 2. 2 Insert these values into the equation for r, the product-moment correlation coefficient. The value should be between 1 and -1, inclusive. □ A value close to 1 implies strong positive correlation. (The higher the x, the higher the y). □ A value close to 0 implies little or no correlation. □ A value close to -1 implies strong negative correlation. (The higher the x, the lower the y). • Always make a scatter plot. Otherwise you may miss your discovery because the product moment correlation coefficient only takes straight lines into consideration in the business of predicting y from x. • This is the reason why a lot of questionnaires feature the same questions, making them incredibly boring to answer. The researchers often know a lot about question x and question y, but they don't know yet how they are related. • Before you state that two variables are correlated make sure the correlation coefficient is statistically significant. That is to say that the calculated correlation coefficient is unlikely to be a result of pure chance. For example, all your points may lay on the same line, this has a coefficient of +1 or -1, but it would still be inconclusive. (When the coefficient is not significant there is generally no point in reporting its value.) • When the correlation is significant you have not shown causality- that one variable "causes" the other. You have only proven that knowledge of the value of x may help to some degree in predicting the value of y and/or the other way around. Article Info Thanks to all authors for creating a page that has been read 68,775 times. Was this article accurate?
{"url":"http://www.wikihow.com/Calculate-the-Product%E2%80%93moment-Correlation-Coefficient","timestamp":"2014-04-21T08:14:33Z","content_type":null,"content_length":"67891","record_id":"<urn:uuid:aed1ae6e-c6da-47bd-a3a1-c3ed518a4aef>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof that (1+x)^n >= 1+nx? - Straight Dope Message Board Originally Posted by Mayo Speaks! I assume that a proof by contradiction or a proof by induction is possible, as those two techniques were what the test was about. Ah, this helps. Whenever you have to prove a statement about "any natural number n," consider using induction on n. Try starting with the assumption that it's true for and multiply both sides of the inequality by (1+x) (which is positive if x > -1) to get (1+x) >= ... something which can be shown to be >= 1 + (n+1)x. I'll let you work out the details.
{"url":"http://boards.straightdope.com/sdmb/showthread.php?t=395695","timestamp":"2014-04-21T04:33:27Z","content_type":null,"content_length":"113050","record_id":"<urn:uuid:97fa44bd-d621-4742-a3fa-1ea7b506e6b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Consistency using the median March 8th 2011, 07:26 PM Consistency using the median How can you use the sample median Y' to prove consistency? If 2n+1 random observations are drawn from a continuous and symmetric pdf with mean (mu) and if f_Y(mu;mu) does not equal 0, then the sample median Y'_(n+1), is unbiased for (mu) and Var( Y'_ (n+1) ) = 1/8[ f_Y(mu;mu)^2 *n ]. Show that (mu) hat = Y'_(n+1) is consistent for (mu). March 8th 2011, 07:56 PM How can you use the sample median Y' to prove consistency? If 2n+1 random observations are drawn from a continuous and symmetric pdf with mean (mu) and if f_Y(mu;mu) does not equal 0, then the sample median Y'_(n+1), is unbiased for (mu) and Var( Y'_ (n+1) ) = 1/8[ f_Y(mu;mu)^2 *n ]. Show that (mu) hat = Y'_(n+1) is consistent for (mu). When the mean and variance exist it suffices to show that the mean converges to the thing you want and the variance goes to 0 (for all values of mu).
{"url":"http://mathhelpforum.com/advanced-statistics/173913-consistency-using-median-print.html","timestamp":"2014-04-16T11:46:20Z","content_type":null,"content_length":"4531","record_id":"<urn:uuid:4b84a829-f41e-4681-94ca-cc3c35c7f766>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Recent LSUS Math Grads It is true that the LSUS Department of Mathematics does not boast a huge number of graduates in any single year. There are many reasons for that, not least of which is the relative scarcity of mathematics majors. (The few, the proud, ...) That being the case, our department can boast that we do more with few. We are proud of the careers and trajectories of our graduates. The following is only a partial list of recent graduates (we're waiting to hear from more of them). • Laura McCormick (2011 Math): PhD student in Mathematics and Teaching Assistant at the University of South Carolina, Columbia, SC (2011). • David Poe (2011 Math+Physics): Graduate Student in Electrical Engineering at the University of North Texas, Denton, TX (2011). • Charles Crawford (2010, Math+Physics): NASA Internship - Applied Aerosciences and Computational Fluid Dynamics Branch (fall of 2010); User Experience and Design Engineer, Twin Engine Labs, Shreveport, LA (2011). • Derrick Hughes (2010 Math): Graduate Student and Teaching Assistant in Statistics, Florida State University, Tallahassee, FL (2011). • Will Sandifer (2010 Math): PhD student in Pure Mathematics at the University of Denver, Denver, CO (2011). • Steven Smith (2010 Math, 2008 Physics): Mechanical engineer for Weldship Corporation (a transportation company in the compressed gas industry), Bethlehem, PA. • Jared Vicory (2008 Math+CS): Masters in Comp. Sci. at LSUS (2009); PhD Student and Research/Teaching Assistant at the University of North Carolina, Chapel Hill, NC (2010); specializing in Medical Image Analysis. • Jason Downey (2007 Math): Mathematics and QEP Coordinator at Bossier Parish Community College, Bossier City, LA (2008). • Sara Jayne Slocombe (2008 Math): Learning Technologist for course development at the University of Manchester, Manchester, UK (2011).
{"url":"http://www.lsus.edu/academics/college-of-arts-and-sciences/school-of-mathematics-and-sciences/department-of-mathematics/math-department-news/recent-math-graduates","timestamp":"2014-04-19T17:56:11Z","content_type":null,"content_length":"16780","record_id":"<urn:uuid:c31e096c-5fd4-4ec8-a8ca-02631b561e8b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Moreno Valley Trigonometry Tutors ...I'm Andrew! I just got finished with my second year of college at CSUSB. I'm majoring in Mathematics. 12 Subjects: including trigonometry, calculus, geometry, algebra 1 ...Once 1/4 of the students learned their multiplication tables from 1 through 12, I started them on division. I got to see children who hated math tell me a countless number of times that "I've always been horrible at math"; however, once these little kids understood the concepts, they were no lon... 6 Subjects: including trigonometry, calculus, algebra 2, geometry I tutored for two years in elementary Geometry/Algebra all the way through to advanced placement calculus and physics. I am a good tutor because I have an inherent passion for the topics themselves and find that I can communicate their intricacies thoroughly to my students. I have taken: Different... 14 Subjects: including trigonometry, physics, calculus, statistics ...I took courses at UCR in Differential Equations while a graduate student in mathematics. I was a Teaching Assistant at UCR and covered Differential Equations. I was an instructor for Differential Equations at MSJC. 27 Subjects: including trigonometry, English, calculus, physics ...Physics has always been my favorite subject. I have taken 4 semesters of algebra-based physics in high school and college and received an A every semester. In high school, I had my first exposure to physics, where I gained a strong understanding of the subject and used it to ace the course as well as aid my classmates in understanding the material. 9 Subjects: including trigonometry, chemistry, physics, geometry
{"url":"http://www.algebrahelp.com/Moreno_Valley_trigonometry_tutors.jsp","timestamp":"2014-04-18T20:50:33Z","content_type":null,"content_length":"25179","record_id":"<urn:uuid:1862cec0-4556-485f-89d7-a4eeec1e8ea4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Limit Proof January 6th 2006, 03:06 PM #1 Junior Member Nov 2005 Limit Proof Hey, I am trying to prove that: lim(nth root of n)=1 as n approaches infinity and that lim(nth root of n+1)=1 as n approaces infinity I do not know how to go about it. Can anybody help?? Is using the delta-epsilon definition of a limit or a less rigorous method? Start with considering any function $f(n)=n^\left(\frac{1}{n}\right)$. You can do this both graphically and analytically. For the first way, use a calculator to see the trend as n tends to infinity. For the second way, look at the exponential part of the function. What is $\lim_{n\rightarrow\infty}\frac{1}{n}$? And what is any number or variable to that answer? Last edited by Jameson; January 6th 2006 at 04:34 PM. This happens to be a famous limit I will prove that $\lim_{n\rightarrow \infty} n^{\frac{1}{n}}$=1. Notice that the function $n^{\frac{1}{n}}$ is a monotonic decreasing function, which has a lower bound. Thus, $\lim_{n\rightarrow \infty} n^{\frac{1}{n}}$ has a limit. Call that $L$ thus, L= $\lim_{n\rightarrow \infty} n^{\frac{1}{n}}$ $\ln L=\ln (\lim_{n\rightarrow \infty} n^{\frac{1}{n}})$ But because $\ln$ is countinous for that interval, $\ln L=\lim_{n\rightarrow \infty} \ln (n^{\frac{1}{n}})$ $\ln L=\lim_{n\rightarrow \infty} \frac{\ln n}{n}$ But this fits the necessary condition of L'Hopital's Rule, $\ln L=\lim_{n\rightarrow \infty} \frac{1}{n}=0$ Thus, $\ln L=0$ thus, $L=1$ Finally putting all of this together, $\lim_{n\rightarrow \infty} n^{\frac{1}{n}}$=1 Is using the delta-epsilon definition of a limit or a less rigorous method? Start with considering any function $f(n)=n^\left(\frac{1}{n}\right)$. You can do this both graphically and analytically. For the first way, use a calculator to see the trend as n tends to infinity. For the second way, look at the exponential part of the function. What is $\lim_{n\rightarrow\infty}\frac{1}{n}$? And what is any number or variable to that answer? Jameson I believe the rule goes that if the composition of the limits exists then the limit is the functional composition. That means what you said about looking at the exponent part lacks mathematical rigor. For the second way, look at the exponential part of the function. What is $\lim_{n\rightarrow\infty}\frac{1}{n}$? And what is any number or variable to that answer? This is invalid as it pays no attention to the rate of growth of what is being raised to $1/n$. Such an argument would apply to for any real function $f(n)$. In particular it would appy when $f(n)=n^n$. Jameson I believe the rule goes that if the composition of the limits exists then the limit is the functional composition. That means what you said about looking at the exponent part lacks mathematical rigor. - its actualy false (fallacious?), see my other post Hey, I am trying to prove that: lim(nth root of n)=1 as n approaches infinity and that lim(nth root of n+1)=1 as n approaces infinity I do not know how to go about it. Can anybody help?? $n^{\frac{1}{n}}=e^{\frac{ \ln(n)}{n}}$, and as the exponential function is continuous if the limit exists: $\lim_{n\rightarrow \infty} n^{\frac{1}{n}}=e^{\lim_{n\rightarrow \infty}\frac{ \ln(n)}{n}}$, so we need only worry about: $\lim_{n\rightarrow \infty}\frac{ \ln(n)}{n}$. Now this is dealt with elsewhere in this thread by ThePerfectHacker, but an informal short cut is available which tells us what this limit is. The sort cut is that log functions grow more slowly than any power of its argument, that is for all $\alpha \epsilon \mathbb{R}$, there exists a $k$ such that: $\ln (x) < k.x^{\alpha}$, for all $x$ sufficiently large. Which is sufficient to show that $\lim_{n\rightarrow \infty}\frac{ \ln(n)}{n^{\beta}}=0$, for any $\beta \epsilon \mathbb{R}$. Of course this can also be demonstrated using L'Hopital's rule as is done for $\beta = 1$ elsewhere in this thread. Many Thanks You guys are awesome. I don't know what I would do without you. Thanks a million! - its actualy false (fallacious?), see my other post What are you talking about CaptainBlack. What are you talking about CaptainBlack. Jameson's statement is a fallacy - you need to consider the rate of growth of what is raised to $1/n$ not just that the limit of the exponent is $0$. Its explained in my response to Jameson's post. Jameson's statement is a fallacy - you need to consider the rate of growth of what is raised to $1/n$ not just that the limit of the exponent is $0$. Its explained in my response to Jameson's post. I know, I told him that. For the second thing you cannot just take the natural logarithm of the limit you must first prove it exists. It is a common mistake by just taking the natural logarithm because we are assuming it This is invalid as it pays no attention to the rate of growth of what is being raised to $1/n$. Such an argument would apply to for any real function $f(n)$. In particular it would appy when $f(n)=n^n$. I don't understand your point. Obviously $lim_{x\rightarrow\infty}n^n=\infty$ I was simply telling her to analytically think about the trend of the exponent. If I made a mistake I apologize, but I really don't see how I was false in showing that as n approaches infinity, the function becomes $n^0$ Jameson I believe the rule goes that if the composition of the limits exists then the limit is the functional composition. That means what you said about looking at the exponent part lacks mathematical rigor. Yes it did lack rigor. I like your method better. I make a quick post and I as I said above I was trying to analytically show the trend of the function. I'm still not seeing my fallacy though. I don't understand your point. Obviously $lim_{x\rightarrow\infty}n^n=\infty$ I was simply telling her to analytically think about the trend of the exponent. If I made a mistake I apologize, but I really don't see how I was false in showing that as n approaches infinity, the function becomes $n^0$ Your argument about the exponent if valid would also prove that: as here also the outer exponent goes to zero and as we know anything raised to the power zero (if non-zero) is unity. Jameson, your problem, informally was that you looked at the exponent you should have also seen the base thus it is of the form $\infty^0$ no conclusion could be drawn. January 6th 2006, 04:31 PM #2 MHF Contributor Oct 2005 January 6th 2006, 06:18 PM #3 Global Moderator Nov 2005 New York City January 6th 2006, 06:21 PM #4 Global Moderator Nov 2005 New York City January 6th 2006, 11:25 PM #5 Grand Panjandrum Nov 2005 January 6th 2006, 11:27 PM #6 Grand Panjandrum Nov 2005 January 7th 2006, 01:28 AM #7 Grand Panjandrum Nov 2005 January 7th 2006, 07:59 AM #8 Junior Member Nov 2005 January 7th 2006, 11:27 AM #9 Global Moderator Nov 2005 New York City January 7th 2006, 12:24 PM #10 Grand Panjandrum Nov 2005 January 7th 2006, 02:11 PM #11 Global Moderator Nov 2005 New York City January 8th 2006, 08:18 AM #12 MHF Contributor Oct 2005 January 8th 2006, 08:24 AM #13 MHF Contributor Oct 2005 January 8th 2006, 09:06 AM #14 Grand Panjandrum Nov 2005 January 8th 2006, 09:38 AM #15 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/calculus/1561-limit-proof.html","timestamp":"2014-04-17T04:36:42Z","content_type":null,"content_length":"94069","record_id":"<urn:uuid:11367050-d9a1-48ce-8c84-b8c66652c1db>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Newest &#39;semigroups nt.number-theory&#39; Questions Let $b > a > 0$ be two real numbers. I am interested in the set of numbers $X(p,q) = p a + q b$ with $p,q$ positive integers. Basically this is the set $a \mathbb{N} + b \mathbb{N}$. What ... For an integer $m$, let $S^m_{x_0,x_1} = \{ t | x_0 ≤ t ≤ x_1 $ and $t$ is a square modulo $m \}$. Let $S^m_x$ = $S^m_{0,x}$. Determining whether the sets $S^m_x$ are empty is easy (1 is always a ...
{"url":"http://mathoverflow.net/questions/tagged/semigroups+nt.number-theory","timestamp":"2014-04-18T11:04:58Z","content_type":null,"content_length":"33884","record_id":"<urn:uuid:cdb7b6e8-fc5e-4007-8eea-3da7d92d7ed5>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
[Solved] hide and sort columns and rows macro It looks like Excel 2010 uses different arguments for a VBA Sort than Excel 2003, and it is not backwards compatible. Luckily the 2003 code works in 2010, so the following code should work in both versions. I made some changes to your code, mainly because of this: 'Sort Descending on Column C Set myRange = Range("A2:C" & lastRw) Selection.Sort ... Rarely do you have to actually "select" an object in VBA in order to perform an operation on it. You can almost always perform the operation directly on the object. It more efficient to use this: 'Sort Descending on Column C Set myRange = Range("A2:C" & lastRw) myRange.Sort ... This code worked for me in both 2003 and 2010: Sub SortHideErrZero() 'Unhide all rows Cells.EntireRow.Hidden = False 'Determine length of data lastRw = Range("A" & Rows.Count).End(xlUp).Row - 1 'Sort Descending on Column C Set myRange = Range("A2:C" & lastRw) myRange.Sort Key1:=Range("C2"), Order1:=xlDescending, Header:=xlGuess, _ OrderCustom:=1, MatchCase:=False, Orientation:=xlTopToBottom, _ 'Hide Errors and 0 values based on Column C 'Hide "Gap" in Column A For myRow = 2 To lastRw If IsError(Cells(myRow, 3)) Then Cells(myRow, 3).EntireRow.Hidden = True ElseIf Cells(myRow, 3) = 0 Then Cells(myRow, 3).EntireRow.Hidden = True ElseIf Cells(myRow, 1).Value = "Gap" Then Cells(myRow, 3).EntireRow.Hidden = True End If End Sub Click Here Before Posting Data or VBA Code ---> How To Post Data or Code.
{"url":"http://www.computing.net/answers/office/hide-and-sort-columns-and-rows-macro/17566.html","timestamp":"2014-04-16T16:14:02Z","content_type":null,"content_length":"60679","record_id":"<urn:uuid:bacdd0b1-5f72-4c49-aabf-ccb4c5f0042a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate the Product–moment Correlation Coefficient Edit Article Edited by Caidoz, Teresa, Glutted, Kreidler2 and 16 others The product-moment correlation coefficient allows you to work out the linear dependence of two variables (referred to as x and y). An example in economics might be that you are the owner of a restaurant. For every 10th customer you record the time he stayed in your restaurant (x, in minutes) and the amount spent (y, in dollars). Is it generally true that the long stayers are also the bigger spenders? This would be a positive correlation. Or is it actually the other way around, e.g., the richer the client the less time he takes for his lunch? This would be a negative correlation. In order to shed some light on this mystery you can calculate the product-moment correlation coefficient, r, sometimes known as Pearson's correlation. Note: The equations are for the linear least squares fit which statistically fits the set of data pairs to a straight line. 1. 1 Remove incomplete pairs. In the next steps, use only the observations where both x and y are known. However do not exclude observations just because one of the values equals zero. 2. 2 Summarize the data into the values needed for the calculation. □ n - the number of data. □ Σ(x^2) - the sum of the squares of the x values. □ Σx - the sum of all the x values. □ Σ(x*y) - the sum of each x value multiplied by its corresponding y value. □ Σy - the sum of all the y values. □ Σ(y^2) - the sum of the squares of the y values. 1. 1 Calculate ss[xy], ss[xx] and ss[yy] using these values. □ ss[xy]=Σxy-(ΣxΣy÷n)=283-(12*93/5)=59.8 □ ss[xx]=Σx^2-(ΣxΣx÷n)=40-(12*12/5)=11.2 □ ss[yy]=Σy^2-(ΣyΣy÷n)=2089-(93*93/5)=359.2 2. 2 Insert these values into the equation for r, the product-moment correlation coefficient. The value should be between 1 and -1, inclusive. □ A value close to 1 implies strong positive correlation. (The higher the x, the higher the y). □ A value close to 0 implies little or no correlation. □ A value close to -1 implies strong negative correlation. (The higher the x, the lower the y). • Always make a scatter plot. Otherwise you may miss your discovery because the product moment correlation coefficient only takes straight lines into consideration in the business of predicting y from x. • This is the reason why a lot of questionnaires feature the same questions, making them incredibly boring to answer. The researchers often know a lot about question x and question y, but they don't know yet how they are related. • Before you state that two variables are correlated make sure the correlation coefficient is statistically significant. That is to say that the calculated correlation coefficient is unlikely to be a result of pure chance. For example, all your points may lay on the same line, this has a coefficient of +1 or -1, but it would still be inconclusive. (When the coefficient is not significant there is generally no point in reporting its value.) • When the correlation is significant you have not shown causality- that one variable "causes" the other. You have only proven that knowledge of the value of x may help to some degree in predicting the value of y and/or the other way around. Article Info Thanks to all authors for creating a page that has been read 68,775 times. Was this article accurate?
{"url":"http://www.wikihow.com/Calculate-the-Product%E2%80%93moment-Correlation-Coefficient","timestamp":"2014-04-21T08:14:33Z","content_type":null,"content_length":"67891","record_id":"<urn:uuid:aed1ae6e-c6da-47bd-a3a1-c3ed518a4aef>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Samacheer Kalvi Maths Model Question Paper-SSLC Samacheer kalvi Maths model question paper-SSLC is provided here for the pupils to develop suggestions for resolving the trouble and design for which type of questions asked in the public exam. You could download the web links furthermore supplied here. This article has points which include 4 parts from sequences and series of real numbers, trigonometry, probability, coordinate geometry and matrices device. In part-I, decide on the best response from the offered 4 alternatives which holds one mark. Part-II, you need to address any type of 10 in the given inquiries through which point no. 30 is compulsory and each inquiry carries 2 marks. Part-III is a five marks concerns. In this you must go to any type of 9 questions through which question no. 45 is must. Part-IV consists of two points, each with 2 options. Pupils have to respond to both the concerns by deciding on either of the alternatives and each point holds ten marks. Tags: 10th model question papers, samacheer maths SSLC question paper model, tamilnadu 10 maths model question paper, xth model question paper for maths No comments yet. Leave a Reply Click here to cancel reply.
{"url":"http://samacheerkalvi.net/samacheer-kalvi-maths-model-question-paper-sslc","timestamp":"2014-04-20T18:24:34Z","content_type":null,"content_length":"28304","record_id":"<urn:uuid:20bc75a0-dde0-4c3c-8f4e-f1dc347b2b85>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Lansing, IL Calculus Tutor Find a Lansing, IL Calculus Tutor ...I have been a teacher's assistant for 1st and 3rd grade for the past 2 years. I am currently a student at Purdue University Calumet, and I am majoring in math education. One day I plan on being a high school math teacher. 9 Subjects: including calculus, algebra 2, precalculus, elementary (k-6th) ...I am licensed in both math and physics at the high school level. I have taught a wide variety of courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus, advanced placement calculus, integrated chemistry/physics, and physics. I also have experience teaching physics at the college level and have taught an SAT math preparation course. 12 Subjects: including calculus, physics, geometry, algebra 1 ...I use both analytical as well as graphical methods or a combination of the two as needed to cater to each student. Having both an Engineering and Architecture background, I am able to explain difficult concepts to either a left or right-brained student, verbally or with visual representations. ... 34 Subjects: including calculus, reading, writing, statistics ...The main subject I have taught is mathematics. I have also taught PLTW in basic electronics. I have 20-plus years of experience in heavy industry (steel mills and the like) and I interpret/ translate English/Greek and vice-versa. 12 Subjects: including calculus, geometry, statistics, algebra 2 ...The subject forms a part of our curriculum in our undergraduate and postgraduate classes. Presently, it has been a part of my teaching at grades 11 and 12. The more rigorous treatment of the subject has been taught by me at the undergraduate level. 14 Subjects: including calculus, geometry, algebra 1, algebra 2 Related Lansing, IL Tutors Lansing, IL Accounting Tutors Lansing, IL ACT Tutors Lansing, IL Algebra Tutors Lansing, IL Algebra 2 Tutors Lansing, IL Calculus Tutors Lansing, IL Geometry Tutors Lansing, IL Math Tutors Lansing, IL Prealgebra Tutors Lansing, IL Precalculus Tutors Lansing, IL SAT Tutors Lansing, IL SAT Math Tutors Lansing, IL Science Tutors Lansing, IL Statistics Tutors Lansing, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Lansing_IL_Calculus_tutors.php","timestamp":"2014-04-21T05:13:17Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:b98ce5b3-eb66-426e-9ba4-8d68e098b0f1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Shell Space: Flare, Verm, and Spire This resembles Stephen Wolfram's model of shell growth. However, this is an implementation of a simpler model, due to David Raup and later used by Richard Dawkins in an influential popularization. The idea is that all naturally occurring shells conform to the same basic design, varying only in a limited number of quantifiable ways. In this model, you can specify a shell by setting the flare, (the rate of growth of the spiral), the verm, (the "tightness" of the shell cavity), and the spire, (the rate of creep parallel to the spiral axis). David Raup proposes a simple three-parameter "shell space," the parameters being (the rate of growth of the spiral), (the "tightness" of the shell cavity), and (the rate of creep parallel to the spiral axis). This simplified model of "shell space" was later adopted by Dawkins, who coined the terms "flare," "verm," and "spire" for the parameters , and , respectively, as an illustration of the notion of "design space" in the book Climbing Mount Improbable Here are precise definitions of the parameters. 1. If is the distance between a point on the generating spiral and that spiral's axis, and the distance from the corresponding point one turn later, then , which is strictly greater than 1. 2. If is the distance between the spiral axis and the inner edge of the shell cavity, and the distance between the spiral axis and the outer edge on the same turn, then , which is greater than or equal to 0 and strictly less than 1. 3. If the acute angle between the line and the horizontal is , then .
{"url":"http://demonstrations.wolfram.com/ShellSpaceFlareVermAndSpire/","timestamp":"2014-04-18T02:58:37Z","content_type":null,"content_length":"45496","record_id":"<urn:uuid:c2b7994a-f604-4f2a-adb0-0f404386adfe>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
A Cross-National Analysis of How Economic Inequality Predicts Biodiversity Loss Abstract:We used socioeconomic models that included economic inequality to predict biodiversity loss, measured as the proportion of threatened plant and vertebrate species, across 50 countries. Our main goal was to evaluate whether economic inequality, measured as the Gini index of income distribution, improved the explanatory power of our statistical models. We compared four models that included the following: only population density, economic footprint (i.e., the size of the economy relative to the country area), economic footprint and income inequality (Gini index), and an index of environmental governance. We also tested the environmental Kuznets curve hypothesis, but it was not supported by the data. Statistical comparisons of the models revealed that the model including both economic footprint and inequality was the best predictor of threatened species. It significantly outperformed population density alone and the environmental governance model according to the Akaike information criterion. Inequality was a significant predictor of biodiversity loss and significantly improved the fit of our models. These results confirm that socioeconomic inequality is an important factor to consider when predicting rates of anthropogenic biodiversity loss. Resumen:Utilizamos modelos socioeconómicos que incluyeron la inequidad económica para predecir la pérdida de biodiversidad, medida como la proporción de especies amenazadas de plantas y vertebrados, en 50 países. Nuestra principal meta fue evaluar sí la inequidad económica, medida como el índice Gini de distribución del ingreso, mejoraba el poder predictivo de nuestros modelos estadísticos. Comparamos cuatro modelos que incluyeron lo siguiente: solo densidad poblacional, huella económica (i.e., el tamaño de la economía en relación con la superficie del país); huella económica e inequidad de ingresos (índice Gini) y un índice de gobernabilidad ambiental. También probamos la hipótesis de la curva ambiental de Kuznets, pero no fue sustentada por los datos. Las comparaciones estadísticas de los modelos revelaron que el modelo que incluyó la huella ecológica y la inequidad fue el mejor pronosticador de especies amenazadas. Superó significativamente el funcionamiento de la densidad poblacional sola y la gobernabilidad ambiental de acuerdo con el criterio de información de Akaike. La inequidad fue un pronosticador significativo de la pérdida de biodiversidad y mejoró significativamente el ajuste de nuestros modelos. Los resultados confirman que la inequidad socioeconómica es un factor importante a considerar cuando se pronostican tasas de pérdida antropogénica de
{"url":"http://onlinelibrary.wiley.com/doi/10.1111/j.1523-1739.2009.01207.x/full","timestamp":"2014-04-25T06:01:21Z","content_type":null,"content_length":"134111","record_id":"<urn:uuid:bd8ff148-90d1-472e-b41b-1d3a4c4c7ba0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Saugus Trigonometry Tutor Find a Saugus Trigonometry Tutor ...I am new to Wyzant but very experienced in tutoring, so if you would like to meet first before a real lesson to see if we are a good fit, I am willing to arrange that.I was a swim teacher for 8 years at Swim facilities and summer camps. I also coached. I have worked with infants of 6 mos to senior citizens to providing therapeutic lessons to those with disabilities. 19 Subjects: including trigonometry, Spanish, chemistry, calculus ...Most recently I have done this at the secondary level as a public charter school teacher. Prior to that, alongside my own graduate work in mathematics, I taught and assistant-taught college-level math classes, from remedial Calculus to Multivariate Calculus. Because my GRE scores are in the 98-... 29 Subjects: including trigonometry, English, reading, writing ...I loved learning math when I was in school and greatly enjoy passing on that love and knowledge now. I know many people have difficulties with it and so want to use my skills to make it as easy as possible for people to get. Trigonometry is when math starts getting more complicated and you move from Algebra towards pre-calculus and calculus. 28 Subjects: including trigonometry, English, reading, writing ...Even if you aren't shooting for 800, the SAT math exam is definitely one where you can markedly improve your score with even just a limited amount of studying and practice. Fortunately, the SAT math exam covers a specific, well-defined set of topics, and tends to ask the same types of questions ... 44 Subjects: including trigonometry, English, chemistry, reading ...Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level. I know the programs of high and middle school math, as well as the preparation for the SAT process. 14 Subjects: including trigonometry, statistics, geometry, algebra 1 Nearby Cities With trigonometry Tutor Belmont, MA trigonometry Tutors Chelsea, MA trigonometry Tutors Danvers, MA trigonometry Tutors East Boston trigonometry Tutors Everett, MA trigonometry Tutors Lynn, MA trigonometry Tutors Malden, MA trigonometry Tutors Melrose, MA trigonometry Tutors Peabody, MA trigonometry Tutors Reading, MA trigonometry Tutors Revere, MA trigonometry Tutors Stoneham, MA trigonometry Tutors Swampscott trigonometry Tutors Wakefield, MA trigonometry Tutors Winchester, MA trigonometry Tutors
{"url":"http://www.purplemath.com/saugus_ma_trigonometry_tutors.php","timestamp":"2014-04-16T13:28:09Z","content_type":null,"content_length":"24130","record_id":"<urn:uuid:10367894-8eed-456f-8853-025d447b9969>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Machine Level Optimizations Machine Level Optimizations Author: Allen K. Leung Advisor: Krishna V. Palem Two machine instruction level compiler optimization problems are considered in this work. The first problem is time-constrained instruction scheduling, i.e., finding optimal schedules for machine code in the presence of time constraints such as release-times and deadlines. These types of time constraints appear naturally in embedded applications, and also as a side effect of many other compiler optimization problems. While the general problem is NP-hard, we have developed a new algorithm which can optimally handle many P-time solvable sub-instances. In fact, we show that almost all previous algorithms in this related area can be seen as an instance of the priority computation scheme that we have developed. Our work extends and unifies many algorithmic results in classical deterministic scheduling theory related to release-times, deadlines and pipeline The second problem that we investigate in this work is scalar optimizations in machine code. We present a new framework that utilizes static single assignment form (SSA) at the level of individual machine instructions. Complementing the framework, we have also developed new SSA construction algorithms which are faster than previous algorithms, and are very simple to implement.
{"url":"http://www.cs.nyu.edu/web/Research/Theses/leung_allen.html","timestamp":"2014-04-17T22:02:16Z","content_type":null,"content_length":"1756","record_id":"<urn:uuid:94347657-616e-489b-bd1c-9809c086e243>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
New version of analogue on CRAN December 14, 2013 By Gavin L. Simpson It has been almost a year since the last release of the analogue package. At lot has happened in the intervening period and although I’ve been busy with a new job in a new country and coding on several other R packages, activity on analogue has also progressed a pace. As the version 0.12-0 of the package hits a CRAN mirror near you, I thought I’d outline the major changes in the packages, which range from at long last having dissimilarity matrices computed in fast C code to lots of new functionality that makes fitting principal curves and plotting and interpreting the results much easier, from a more robust way to determine the posterior probability that two samples are analogues to rounding out the fitting of calibration models using principal components regression with ecologically-meaningful transformations. Dissimilarity matrices The original intent for the analogue package was for methods related to analogue matching, which at their heart involve computing dissimilarities between samples. Computing these pairwise-distances is quite time consuming, and even though I had a fairly efficient R-based implementation I always wanted to rewrite distance() to use fast C code. This has finally happened! The old behaviour can still be accessed via oldDistance() and as I have two implementations of the same functions I use both and compare results as part of new unit tests for the package. As far as the user is concerned, nothing has changed; the interface and arguments provided by distance() are the same as with previous versions. But the underlying code is now much quicker, especially for larger problems. Principal curves Fitting and working with principal curves, flexible smooth curves fitted in high dimensions, saw lots of additions and improvements in the period between the 0.10-0 and 0.12-0 releases. There are new methods for lines(), points(), scores() and residuals() which work with the output of prcurve(), and a nice 3D plotting function, plot3d(), courtesy of the rgl package. [DEL:Passive samples can now be handled through the provision of a predict() method.:DEL] Up to now, the only smoother that could be used to fit principal curves was a smoothing spline via smooth.spline(). With this release of analogue, GAMs can be used instead. This allows for better handling of species data via say Poisson or logistic regression, just as you’d fit response curves to individual species. This functionality is provided via gam() from package mgcv. The object returned from prcurve() has also expanded to supply more useful information on the fit and to allow easier plotting of the curve. Now, each of the fitted smooth models is returned so that they can be inspected for individual species. In addition, the PCA space of the data is available as component ordination and the original species data is also returned. Posterior probability of analogue-ness analogue contains two functions to assess the degree to which samples are analogues of one another; roc() and logitreg(). logitreg() uses a logistic regression to model the posterior probability that two samples are analogues given their dissimilarity via a glm() fit. Such binomial models can suffer from several problems, especially separation, whereby at some value of the covariates perfect discrimination between the two values of the response is achieved. These models can also become biased if the relative proportions of 0s and 1s in the data is strongly skewed to one class or the other. Firth’s bias-reduced logistic regression is a useful alternative in such circumstances. With this release, logitreg() can fit bias-reduced logistic regression models by use of functions in the brglm package, as well as the standard GLM implemented in glm(). Principal component regression Principal component regression (PCR) is a linear calibration method, used in chemometrics, and a form of PCR was used by Imbrie & Kipp in their original palaeoecological transfer function methodology. However, as it is a linear method it generally fails to adapt well to the often non-linear responses observed in species-environment data sets. In 2001 Pierre Legendre and Eugene Gallagher introduced the ecological world to the use of PCA on transformed data which could adequately model species data and being a simpler method it did not suffer from issues related to outliers or odd samples that plague CA. Their method achieved this via transformations of data that, when ordinated using PCA and the implicit Euclidean distance, result in an ordination that preserved a distance function other than the Euclidean. For example, if a Hellinger transformation is used, the PCA of such transformed data results in the ordination reflecting the Hellinger distances between samples in the scale of the original data. The pcr() function in analogue extends this idea to principal component regression, and was added to the package in version 0.8-0. Version 0.12-0 completes the basic functionality required to use PCR with ecologically-meaningful transformations. The full range of cross-validation methods (n repeats of k-fold, leave-one-out, and bootstrap CV) are now included in the crossval() method. Predictions from new samples can now be produced using the predict() method and sample-specific errors derived using n repeats of k-fold, and bootstrap CV. A summary of the main changes is in the new NEWS file, and a detailed list in the ChangeLog. I have a number of posts in development that will illustrate some of the above new functionality and methods, which will be posted over the next few weeks. You can get the new version of analogue now from CRAN. daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/new-version-of-analogue-on-cran/","timestamp":"2014-04-17T21:46:23Z","content_type":null,"content_length":"40921","record_id":"<urn:uuid:d263ca82-a751-40f9-9d11-3b90e564718c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifying Products of Radicals In mathematics, the radicals are considered as one of the important topic. Radical numbers, which are having the root of square are called as radicals. It can have all types of roots. Index number is always given in the radicals. The symbols which are used in the radicals as symbol for symbol. The number which is inside the given root is called as radicand number. For example, `root(7)(15)` are known as the radicals. Check this awesome Simplify Radicals Calculator to solve your problems. Explanation to Simplifying Products of Radicals The explanations given for simplifying the products of the given radicals are given below, Radicals consists of mainly two types. They are given below the following section, Products law for radicals: • For the radicals the products law is also used. • The products is law used only we are having the same index value. Then the multiplication are made to the inside values. Example: `root(5)(9)` `xx` `root(5)(2)` = `root(5)(18)` Distributive law: • For the radicals the distributive law is also used. • the multiplication are made to each and every term present in the given radicals. Example: x (y + z) = xy + xz Example Problems to Simplifying Products of Radicals Problem 1: Simplify the product of the following radicals, `sqrt(13)` and `sqrt(13)` . Step 1: The given radicals for finding the product are as follows, `sqrt(13)` `xx` `sqrt(13)` Step 2: For the above terms we have to find the product and then we are simplifying, = 13 This is the obtained answer for simplifying the radicals. My Upcoming post is on how to divide radicals keep checking my blogs. Problem 2: Simplify the product of the following radicals, `5(sqrt(2)+sqrt(7))` and `5(sqrt(7)+sqrt(2))` . Step 1: The given radicals for finding the product are as follows, `5(sqrt(2)+sqrt(7))` and `5(sqrt(7)+sqrt(2))` . Step 2: For the above terms we have to find the product and then we are simplifying, `5(sqrt(2)+sqrt(7))` and `5(sqrt(7)+sqrt(2))` . = 5`sqrt(2) + sqrt(7)` `xx` 5 `sqrt(7)` + 5 `sqrt(2)` = 5 `sqrt(14)` `xx` 5 `sqrt(14)` = 5 `sqrt(196)` = 5 `xx` 14 = 70 This is the obtained answer for simplifying the radicals. Practice Problems to Simplifying Products of Radicals Problem 1: Simplify the product of the following radicals, `sqrt(12)` and `sqrt(13)` . Answer: 156 Problem 2: Simplify the product of the following radicals,`4(sqrt(4)+sqrt(6))` and `4(sqrt(6)+sqrt(4))` . Answer: 4`sqrt(576)`
{"url":"http://findmathhelp.jimdo.com/number-sense/simplifying-products-of-radicals/","timestamp":"2014-04-18T01:00:25Z","content_type":null,"content_length":"20519","record_id":"<urn:uuid:b0d05ca2-2f18-46a5-806b-16ca5cc5cca2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
SSAC Geology of National Parks Modules This material is based upon work supported by the National Science Foundation under Grant Number NSF DUE-0836566. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Following are short descriptions of the available modules. To access any of them, select from the list and you will connect to cover material including learning goals, context of use, and other information. Within the cover material (under "Teaching Materials") there is a link by which you can download the student version of the module. You can also request an instructor's version by clicking on the "Instructor Module Request Form" link under Teaching Materials. To find content, use the search box for a full text search of title, author, and text of the cover pages of the modules. Refined results can be seen in the boxes on the right side of the page. There are 26 modules currently in the collection. Current Search Limits Quantitative Concepts showing only Algebra; Modeling; Functions Results 1 - 2 of 2 matches Let's Take a Hike in Catoctin Mountain Park part of Pedagogy in Action:Partners::Examples Module by: Meghan Lindsey, University of South Florida Cover Page by: Meghan Lindsey, Len Vacher and Denise Davis, University of South Florida Spreadsheets Across the Curriculum module/Geology of National Parks course. Students use a topographic map and spreadsheet to find how many Big Macs they burned off on a five-mile hike at Catoctin Mountain Park. Quantitative Concepts: Geometry; Trigonometry , Measurement; Data presentation and analysis; Probability:Uncertainty, Visual display of data , Geometry; Trigonometry :Triangles and trigonometric ratios, Measurement; Data presentation and analysis; Probability, :Visual display of data :Reading graphs, Measurement; Data presentation and analysis; Probability:Uncertainty:Error; relative error, Geometry; Trigonometry :Circles (including radians and pi), Basic arithmetic; Number sense:Ratio and proportion; percentage; interpolation, Unit conversions, Rates, Number operations:Number operations: addition, subtraction, multiplication, division, Basic arithmetic; Number sense:Units and Dimensions, Basic arithmetic; Number sense, Algebra; Modeling; Functions:Nonlinear functions of a single variable, Nonlinear functions of a single variable:Trigonometric functions, Algebra; Modeling; Functions:Straight lines and linear functions, Straight lines and linear functions:Slope, intercept; linear trends, Basic arithmetic; Number sense:Units and Dimensions:Unit Conversions Excel Skills: Logic Functions:IF, Logic Functions, Angles and Trig Functions, Basic Arithmetic, Angles and Trig Functions:TAN, ATAN, ATAN2, Basic Arithmetic:Simple Formulas, Angles and Trig Functions:DEGREES, RADIANS, Basic Arithmetic:Arithmetic Functions, Arithmetic Functions:SUM Subject: Earth & Space Science:Geology, Earth & Space Science What is the Discharge of the Congaree River at Congaree National Park? part of Pedagogy in Action:Partners::Examples Denise Davis Spreadsheets Across the Curriculum module/Geology of National Parks course. Students use a rating curve to determine discharge at various stage heights. Quantitative Concepts: Measurement; Data presentation and analysis; Probability:Descriptive statistics; trend lines , Measurement; Data presentation and analysis; Probability, :Descriptive statistics; trend lines :Goodness of fit (R2), Geometry; Trigonometry :Squares and rectangles, Algebra; Modeling; Functions:Modeling:Forward modeling, Basic arithmetic; Number sense:Logarithms; orders of magnitude; scientific notation, Measurement; Data presentation and analysis; Probability:Gathering data , Visual display of data :XY scatter plots, Measurement; Data presentation and analysis; Probability:Visual display of data , Visual display of data :Logarithmic scale, Measurement; Data presentation and analysis; Probability:Descriptive statistics; trend lines :Line- and Excel Skills: Graphs and Charts:XY Scatterplot:Trendlines, Log Scale, Other Elementary Math Functions:LOG, LOG10, Graphs and Charts:XY Scatterplot, Basic Arithmetic Subject: Environmental Science:Natural Hazards, Earth & Space Science:Geology, Ground & Surface Water, Environmental Science:Resource Use & Consequences:Land Use & Planning, Environmental Science: Ecosystem Monitoring, Environmental Science
{"url":"http://serc.carleton.edu/sp/ssac/national_parks/GNP_modules.html?q1=sercvocabs__66%3A30","timestamp":"2014-04-17T21:28:18Z","content_type":null,"content_length":"27858","record_id":"<urn:uuid:327075c6-16e1-4917-a77e-251a9e38a52d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Bacteria growth question with formula Could someone please help me with this problem! Thanks! #29 First, we need to find k (Note that 100 is the initial size) We are told when N = 300, t = 5, so plug those in: $\Rightarrow 300 = 100e^{5k}$ Now solve for k. Once you have k, put it in the original formula. then plug in N = 200 and solve for t, that will give you the required time Any questions? i get .2197 for k do i have to double the 300 as well or just the 100? correct! do i have to double the 300 as well or just the 100? now that we know k, we have: $N = 100 e^{0.2197t}$ now plug in N = 200 (since that is double 100 -- the initial population size) and solve for t you were just given the 300 to find k, now that you have, forget about it. i believe the question is asking how long it takes to double from the initial population (i may be wrong) all i know, you only have to use one or the other, not both. if you want to find how long it takes to double from 300, then plug in N = 600 and solve for t, then subtract 5 from the answer
{"url":"http://mathhelpforum.com/pre-calculus/18149-bacteria-growth-question-formula.html","timestamp":"2014-04-21T10:27:58Z","content_type":null,"content_length":"47303","record_id":"<urn:uuid:2d311a33-835e-4e47-8377-4a67d3d8dc90>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
NEED HELP FAST!!!!! [Archive] - OpenGL Discussion and Help Forums Does anyone know how to move the eye of a cube along the x axis without using glulookat(...); Please Help! does this translate just translate the 3d cube or does it move the eye of the cube around the x axis. b/c thats what i need to do what precisely do you mean by the eye of the cube? Yes it's kindof what I want but what do I put where you have rotation++, do I put the value of my x coordinates of my cube?I just don't understand how to use it. I want the cube to rotate but stay in the same spot. So when you want to increment the eye.x position the cube turns as if you were looking from the x axis. Powered by vBulletin® Version 4.2.2 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved.
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-131837.html","timestamp":"2014-04-18T23:24:11Z","content_type":null,"content_length":"5544","record_id":"<urn:uuid:220e5b67-333a-4a22-aeac-94668c666611>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Limits using trig functions September 26th 2008, 04:59 AM #1 Limits using trig functions I got stuck on this one, once i saw the sin, i just didn't know what to do. All this is one problem. The Answer according to the book is Cos(x) f(x) = sin(x) f(x) = lim sin(x+h) - sin (x) over h You're expected to recognise the limit as the derivative of sin x (calculated from first principles). Interesting, let me give this a bash: $\frac{f(x+h)-f(x)}{h}=\frac{sin(x+h)-sin x}{h}$ $\frac{sin x cos h+sin h cos x-sinx}{h}$ as h---->0: $sin h\rightarrow h \, and \, cos h\rightarrow 1$ $\frac{sin x+hcos x-sinx}{h}$ $\frac{hcos x}{h}$ September 26th 2008, 05:01 AM #2 September 26th 2008, 08:49 AM #3
{"url":"http://mathhelpforum.com/calculus/50696-limits-using-trig-functions.html","timestamp":"2014-04-18T06:09:25Z","content_type":null,"content_length":"37485","record_id":"<urn:uuid:b14d2a43-22fe-4f01-a49e-bcfb1a74d2e9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
looping through possible combinations of McNuggets packs of 6, 9 and 20 looping through possible combinations of McNuggets packs of 6, 9 and 20 News123 news1234 at free.fr Fri Aug 13 22:22:44 CEST 2010 What would your solution be if you weren't allowed to 'know' that 120 is an upper limit. Assume you were only allowed to 'know', that you won't find any other amount, which can't be bought A AOON A you found six solutions in a row? I have a rather straightforward solution trying from 0 nuggets on until I found six 'hits' in a row, but would be interested about other On 08/13/2010 12:38 PM, Roald de Vries wrote: > On Aug 13, 2010, at 12:25 PM, Roald de Vries wrote: >> My previous algorithm was more efficient, but for those who like >> one-liners: >> [x for x in range(120) if any(20*a+9*b+6*c == x for a in range(x/20) >> for b in range(x/9) for c in range(x/6))][-1] > OK, I did some real testing now, and there's some small error in the > above. All solutions for all x's are given by: > [(x, a, b, c) for x in range(120) for a in range(x/20+1) for b in > range(x/9+1) for c in range(x/6+1) if x == a*20+b*9+c*6] > ... and all non-solutions by: > [x for x in range(120) if not any(x == a*20+b*9+c*6 for a in > range(x/20+1) for b in range(x/9+1) for c in range(x/6+1))] > Cheers, Roald More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2010-August/584716.html","timestamp":"2014-04-19T05:09:05Z","content_type":null,"content_length":"4154","record_id":"<urn:uuid:bc892437-86cb-471d-88b3-399f1b2e40ac>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Equation Editor. (fw Re: Equation Editor. (fwd) Subject: Re: Equation Editor. (fwd) From: Karl Ove Hufthammer (huftis@bigfoot.com) Date: Tue Sep 05 2000 - 11:41:44 CDT ----- Original Message ----- From: "Caolan McNamara" <cmc@stardivision.de> To: "Karl Ove Hufthammer" <huftis@bigfoot.com> Cc: <abiword-dev@abisource.com> Sent: Tuesday, September 05, 2000 5:04 PM Subject: Re: Equation Editor. (fwd) | At 16:16 05.09.00 +0200, Karl Ove Hufthammer wrote: | >One solution I would like for an AbiWord equation editor is a way to create | >*good* content MathML using a "verbal" language. One program which uses | >this is | >Dave Raggett's (the editor of the HTML specification) EZMath. Here you write | >mathematics like you would speak it, and it outputs (good) content MathML. | > | >The quadratic formula: | >x = {-b plus or minus sqrt {b^2 - 4ac}}/2a | Well you'll like starmath then, the starmath equivalent is... | x = {-b plusminus sqrt {b^2 - 4ac}}/2a Looks good ... | Where I think it falls down is that with grouping you cannot easily see | which { | is with }, Syntax colouring. Make the outer {}s (i.e. the ones directly outside the cursor) | the standard programming problems apply, and as the equation | gets larger | and matrices and other more verbose constructs make their appearance it gets | difficult very fast, Nevertheless these are visual interface issues, | primarily I | am concerned with the file formats under the hood of all this. If the equation/formula is entered as 'x=a+b', there's really no reason to save it as MathML: But it should of course be possible to convert it to MathML (e.g. for exporting into XHTML+MathML). | The starmath UI | interface will remain but the native format will hopefully move to MathML, | Ill have | a bit of a think during the implementation about preserving closer the | users intent | when importing from visual based ones. | >Trigonometrical functions: | >sin {2x} = 2 sin x times cos x | sin {2x} = 2 sin x times cos x | Certainly theres a similarity, | perhaps a similar root interface is behind both | apps. | >Limits: | >{limit as x tends to infinity of integral from 0 to x wrt y of e^y^2} = | >sqrt pi | >/ 2 | This is probably, | lim csup {x rightarrow infinity} int from 0 to x y(e^{y^2} )} = sqrt pi / 2 The EZMath approach seems more natural here. StarMath seems more visually oriented ('righarrow'). | though this is more limited because I personally don't construct math | verbally, | seems like we need a "tends" word. Well, in EZMath, you can write things in *several* ways. Example: x to the power of 3 x cubed are all equal. | Isn't there though some issues as to | peoples | languages ?, this language isn't localized as far as I know in StarMath, | its always | pseudo english, and at even from my (native speaker) perspective some of the | terminology is different even from what I would use, though thats more likely | a personal thing. You raise an important point here. Localizability would certainly be very nice. It's easy if the syntax is similar (as in Spanish), but can be difficult to localize into some other languages. Anyway, it should always be stored in English (or MathML). | So a graphical interface would negate some of those issues. But would introduce other problems (mainly concerning *meaning* which is necessary for using MathML). Though I don't think it's impossible to constuct a graphical (point and click) interface ... Karl Ove Hufthammer This archive was generated by hypermail 2b25 : Tue Sep 05 2000 - 11:42:09 CDT
{"url":"http://www.abisource.com/mailinglists/abiword-dev/00/September/0044.html","timestamp":"2014-04-17T22:12:49Z","content_type":null,"content_length":"7334","record_id":"<urn:uuid:631c57bc-de3b-4e4e-97af-dc0834c47d51>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Equality March 18th 2012, 09:48 AM #1 Mar 2012 United States Trig Equality When doing some trig practice in my free time, I came across the problem if $\sin 2\alpha = \frac{1}{7}$, find $\sin^4 \alpha + \cos^4 \alpha$. I did a little research online, and I found that $\sin^4 \alpha + \cos^4 \alpha = \frac{1}{4}(\cos 4\alpha + 3)$, which simplified the problem greatly, and I was able to find that the answer is $ \frac{97}{98}$. Nevertheless, I feel that there is probably a "cleaner" way of doing this that does not require knowing the above equivalence. Does anyone have any suggestions on how to attempt this question? Thanks! Re: Trig Equality Use following facts : $\sin^4 \alpha + \cos^4 \alpha=(\sin^2 \alpha+\cos^2 \alpha)^2-2\sin^2 \alpha \cdot \cos^2 \alpha$ $\sin 2 \alpha=2\sin \alpha \cdot \cos \alpha$ Re: Trig Equality Thank you very much! That made it so easy to do! March 18th 2012, 10:14 AM #2 Senior Member Nov 2011 Crna Gora March 18th 2012, 10:37 AM #3 Mar 2012 United States
{"url":"http://mathhelpforum.com/trigonometry/196099-trig-equality.html","timestamp":"2014-04-19T09:29:25Z","content_type":null,"content_length":"35977","record_id":"<urn:uuid:43a9e9e2-4602-40ea-8bb9-fe02e7b86fed>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Extension theorems, orbits, and automorphisms of the computably enumerable sets - J. OF SYMBOLIC LOGIC , 2002 "... We show that if A and A are automorphic via # then the structures SR (A) and SR ( 3 -isomorphic via an isomorphism # induced by #. Then we use this result to classify completely the orbits of hhsimple sets. ..." Cited by 4 (4 self) Add to MetaCart We show that if A and A are automorphic via # then the structures SR (A) and SR ( 3 -isomorphic via an isomorphism # induced by #. Then we use this result to classify completely the orbits of hhsimple sets. , 2007 "... The goal of this paper is to show there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; th ..." Cited by 3 (3 self) Add to MetaCart The goal of this paper is to show there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; there is no arithmetic description of all orbits of E; for all finite α ≥ 9, there is a properly ∆0 α orbit (from the proof). - BULLETIN OF SYMBOLIC LOGIC , 2008 "... The goal of this paper is to announce there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; ..." Cited by 2 (0 self) Add to MetaCart The goal of this paper is to announce there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; there is no arithmetic description of all orbits of E; for all finite α ≥ 9, there is a properly ∆0 α orbit (from the proof).
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1103711","timestamp":"2014-04-21T08:52:23Z","content_type":null,"content_length":"16762","record_id":"<urn:uuid:749a002d-11fa-4623-a04c-e700a74285f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/malkassir/answered","timestamp":"2014-04-19T10:19:21Z","content_type":null,"content_length":"114844","record_id":"<urn:uuid:50410e9a-06e5-4c13-a730-d27f5196807d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
One dimensional Diffusion Equation with time-dependent BC January 20th 2011, 07:37 AM #1 Jan 2011 One dimensional Diffusion Equation with time-dependent BC Hi everyone! I have been struggling with this problem. Im trying to find an analytical solution to the problem below: All coefficients are known constants. I know that it has been solved using Laplace transforms, but the source I have doesn't explain how, and the publications my source are reffering to are impossible to find. I am having trouble when applying the boundary condition at z=0. When I Laplace transform the boundary condition i get: dF/dz = (VMK/ZRTDA)*Fs where F(x,s)=L{C(x,t)} Can anybody give me a little help as to how I should solve this? You're going to have fun with that one. The overall procedure (you can find this in the book The Mathematics of Diffusion, by J. Crank) is to take the Laplace Transform of the DE, which gives you a second-order ODE. You'll also take the LT of the boundary conditions, as you've done. Solve the resulting system. The result is a function of $z$ and the LT variable $s.$ Then you have to take the inverse LT. This is where the fun and games begin, because most likely, you can't use a table to just find the inverse LT. You'll have to go back to the definition of the inverse LT using the complex line integral, and then use residue calculus to compute the integrals. This is a fairly involved problem, it looks like. Is this for a class? Thanks for your reply! Its not for a class per se, I am doing these calculations in connection with my master thesis in petroleum engineering. That equation should describe some experiments being done at my university where co2 diffuses into water. I have done as you said, taken the LT of the equation and the boundary conditions, but the resulting equation makes no sense. The image below shows what i've done. F is the Laplace transform of C with the first boundary condition applied. I dont understand what ive done wrong in my methods, if anything. I agree with the first line. You can throw out the $c_{2}e^{\sqrt{s/D}\,z}$ solution because $C=0$ when $z\to\infty$ for $t\ge 0.$ When you do the LT of the boundary condition and you do some solving for constants, might you be confusing the $c_{1}$ of the previous LT with the LT of the boundary condition? You're certainly not guaranteed that they'll be the same. Its definitely the same constant. The c1 comes the derivative of F. I just equate the derivative F (0 put in for z after derivation) with dF/dz as given by the boundary condition. Yeah, I can see you did that. But is that allowed? Might you not need to solve the boundary condition DE separately? Incidentally, I suppose I should ask this question: what about your solution doesn't make sense to you? January 20th 2011, 07:50 AM #2 January 21st 2011, 09:39 AM #3 Jan 2011 January 21st 2011, 01:12 PM #4 January 21st 2011, 02:43 PM #5 Jan 2011 January 21st 2011, 03:18 PM #6
{"url":"http://mathhelpforum.com/differential-equations/168861-one-dimensional-diffusion-equation-time-dependent-bc.html","timestamp":"2014-04-19T17:15:51Z","content_type":null,"content_length":"50651","record_id":"<urn:uuid:2d40fe55-baff-43bb-a03b-a7b3f603a37c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
transfer function of a PDE system Hi chiro it is a dynamic model of an electrical system which could be described as below equations. except t,tetha,y and i are constants. f1 - f2 = f3 + f4 f1 = \frac{4 I\ddot{\theta}{\beta} f2 = \dot{\theta} {M^{2}\ddot{y}^{2} + \tilde{M}^2\tilde{g}^2 + 2 M\tilde{M}\tilde{g}\ \ddot{y}}{4 \tau_{0}^{2}}} f3 = 4F f4 = A i^{2} t^2 the laplace transform should be applied on this equation in order to transform all of the non-constant parameters(tetha,y,i,t) to 's' parameter!!!! have you any idea??? It would help if you (or a moderator) fixed up the latex as I can't interpret it from what is being written. However I can offer some advice. If you can transform your PDE into an ODE, then you can use normal Laplace transform to get your transfer function F(s) and depending on the function you could get an analytic expression using tables and transform identities, or you could use the direct approach and use residue theorems to extract the function in the time domain. If you need to use the direct approach, then you will need to use residue theory which means you will need to be familiar with complex analysis at a basic level. When you fix up the latex, I can give you more specific help but unfortunately I can't really make sense of relationships: but yeah see if you can transform your PDE into a system of ODE's and from there you can use standard Laplace transform identities to get your transfer functions for the system of ODE's and then solve for the function using indirect or direct method.
{"url":"http://www.physicsforums.com/showthread.php?t=585507","timestamp":"2014-04-18T21:24:01Z","content_type":null,"content_length":"32458","record_id":"<urn:uuid:b6191ddd-3a0f-4029-b5cc-ed003fd0b8c0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Can't find derivative. August 6th 2012, 04:19 AM #1 Aug 2012 Can't find derivative. Can somebody tell me how to find the rate of change to this? p = 240(1 - [3 / {3 + e^(-0.0005x)}] I need to find the rate of change when x = 1000. Re: Can't find derivative. Re: Can't find derivative. Re: Can't find derivative. \displaystyle \begin{align*} p &= 240\left(1 - \frac{3}{3 + e^{-0.0005x}}\right) \\ &= 240 - \frac{720}{3 + e^{-0.0005x}} \\ &= 240 - 720\left(3 + e^{-0.0005x}\right)^{-1} \end{align*} You should be able to apply the Chain Rule now. Re: Can't find derivative. Am I supposed to use the chain rule with the exponent of e as in f(p)=-0.0005x and use that as the inner function? My teacher said that when using the chain rule with an exponential function like this we're supposed to use the equation d/dx(e^f(x)) = e^f(x) * f '(x). I'm trying to use that for this equation but I don't think I'm supposed to because the answer doesn't make sense. Re: Can't find derivative. There is no way for anyone to know where your answer makes sense or not if you don't tell us what answer you got! Re: Can't find derivative. Am I supposed to use the chain rule with the exponent of e as in f(p)=-0.0005x and use that as the inner function? My teacher said that when using the chain rule with an exponential function like this we're supposed to use the equation d/dx(e^f(x)) = e^f(x) * f '(x). I'm trying to use that for this equation but I don't think I'm supposed to because the answer doesn't make sense. I mean you let \displaystyle \begin{align*} u = 3 + e^{-0.0005x} \end{align*} which gives \displaystyle \begin{align*} y = 240 - 720u^{-1} \end{align*} You should be able to evaluate \displaystyle \begin{align*} \frac{du}{dx} \end{align*} and \displaystyle \begin{align*} \frac{dy}{du} \end{align*} then multiply them together... Re: Can't find derivative. I mean you let \displaystyle \begin{align*} u = 3 + e^{-0.0005x} \end{align*} which gives \displaystyle \begin{align*} y = 240 - 720u^{-1} \end{align*} You should be able to evaluate \displaystyle \begin{align*} \frac{du}{dx} \end{align*} and \displaystyle \begin{align*} \frac{dy}{du} \end{align*} then multiply them together... Sorry, it was midnight here last night, so ended up going to bed. Ok, so if it's \displaystyle \begin{align*} \frac{du}{dx} \end{align*} then $u(1000)=3 + e^{-0.0005{1000}}$ which is $3.60653066...$ Using \displaystyle \begin{align*} \frac{dy}{du} \end{align*} it has to be $y(u)=240 - 720 (3.60653066)^{-1}$ $= 240 - 720 (0.277274781)$ Is this correct? The rate of change I should be getting is -0.0168? Last edited by viper483; August 6th 2012 at 06:31 PM. August 6th 2012, 04:40 AM #2 August 6th 2012, 04:41 AM #3 Aug 2012 August 6th 2012, 04:53 AM #4 August 6th 2012, 05:18 AM #5 Aug 2012 August 6th 2012, 05:34 AM #6 MHF Contributor Apr 2005 August 6th 2012, 05:58 AM #7 August 6th 2012, 03:26 PM #8 Aug 2012
{"url":"http://mathhelpforum.com/calculus/201801-can-t-find-derivative.html","timestamp":"2014-04-16T06:47:49Z","content_type":null,"content_length":"60043","record_id":"<urn:uuid:d4a31d10-b2f9-426e-b4cc-e2144fb36889>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Categorical Abstract Algebraic Logic: Meet-Combination of Logical Systems Journal of Mathematics Volume 2013 (2013), Article ID 126347, 8 pages Research Article Categorical Abstract Algebraic Logic: Meet-Combination of Logical Systems School of Mathematics and Computer Science, Lake Superior State University, Sault Sainte Marie, MI 49783, USA Received 20 December 2012; Accepted 11 March 2013 Academic Editor: Abdul Hamid Kara Copyright © 2013 George Voutsadakis. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The widespread and rapid proliferation of logical systems in several areas of computer science has led to a resurgence of interest in various methods for combining logical systems and in investigations into the properties inherited by the resulting combinations. One of the oldest such methods is fibring. In fibring the shared connectives of the combined logics inherit properties from both component logical systems, and this leads often to inconsistencies. To deal with such undesired effects, Sernadas et al. (2011, 2012) have recently introduced a novel way of combining logics, called meet-combination, in which the combined connectives share only the common logical properties they enjoy in the component systems. In their investigations they provide a sound and concretely complete calculus for the meet-combination based on available sound and complete calculi for the component systems. In this work, an effort is made to abstract those results to a categorical level amenable to categorical abstract algebraic logic techniques. 1. Introduction The widespread and rapid proliferation of logical systems in several areas of computer science has led to a resurgence of interest in various methods for combining logical systems and in investigations into the properties inherited by the resulting combinations. One of the oldest methods for combining connectives is fibring [1]. In fibring one combines two logical systems by possibly imposing some sharing of common connectives or identification of connectives from the constituent logical systems. When such interaction occurs, the combined connectives inherit all properties of the components from both logical systems, and this leads often to inconsistencies. A typical example of this strong interaction is the combination of an intuitionistic negation from one logical system with a classical negation from another. The combined connective behaves like a classical negation, and this outcome defeats any intended purpose for the combination. Fibring has been studied substantially since its original introduction, and both its virtues and its vices are relatively well understood. For instance in [2], fibring was presented as a categorical construction (see also [3 ]), in [4] fibred logical systems were investigated from the point of view of preserving completeness, in [5] some work was carried out on the effect of fibring in logics belonging to specific classes of the classical abstract algebraic logic hierarchy [6–8], and more recently, in [9] fibring was employed to obtain some modal logics, first considered in [10], in a structured way and to draw some conclusions regarding their algebraic character. To avoid some of the drawbacks and undesired effects involved in the application of fibring, Sernadas et al. [11, 12] introduced, recently, another way of combining logical systems, called meet-combination, in which the combined connectives, instead of inheriting all properties they enjoy in the component logical systems, inherit only those properties that are common to both connectives. A very illuminating example of the difference that this entails as contrasted to the fibring method consists of the result of combining two logics and , one including a classical conjunction and one including a classical disjunction , with the intention of obtaining a combined connective “identifying" these two connectives from the component logics. Roughly speaking, if fibring is used, then, since in the combination the combined connective has all properties that are enjoyed by each of the connectives in either logic, the derivation shows that in the combined logic a single formula entails all other formulas; that is, there are only two possible theories, the empty theory and the entire set of formulas. On the other hand, this derivation would not be valid in the meet-combination of the two logics, since the afore-used Properties of Disjunction and Conjunction in and , respectively, are not shared by in and by in , respectively. Commutativity, however, is a shared property, whence the derived rule is a derived rule of the meet-combination. In [11] Sernadas et al. start from a given logical system with a Hilbert style calculus and with a matrix semantics and define a new logic that incorporates all meet-combinations of connectives of of the same arity. Moreover, this system includes in a canonical way the connectives of the original logical system. Roughly speaking, the Hilbert calculus of the combination consists of all old Hilbert rules plus two new rules that ensure that the combined connectives inherit the common properties of the component connectives and only those properties. The matrix semantics consists, also roughly speaking, of the direct squares of the matrices in the original matrix semantics. In the main results, [11, Theorems 3.9 and 3.13], it is shown that soundness and a special form of completeness, called concrete completeness, are inherited in from . Moreover, Sernadas et al. [11] investigate in some detail the case of classical propositional logic, which constitutes the main motivation and paradigmatic example behind their work. Based on classical propositional calculus, they present several interesting examples, which, in addition, serve as illustrations for various sensitive points of the general theory. In the present paper, we adapt the framework of [11] to a categorical level, using notions and techniques of categorical abstract algebraic logic [13, 14]. Our main goal is providing a framework in which, starting from a -institution whose closure system is axiomatized by a set of rules of inference, we may construct a new -institution that includes, in a precise technical sense, natural transformations corresponding to meet-combinations of operations available in the original -institution. The closure system of this new -institution is created by essentially mimicking the process of [11] to create a new set of rules of inference, suitable for the new sentence functor, and by using this new set of rules to define the inferences in the newly created structure. Under conditions analogous to those imposed by Sernadas et al. in [11], we are also able to establish a form of soundness and a form of restricted completeness for the new system, with respect to a suitably constructed matrix system semantics, under the proviso that these properties are satisfied by the original system. We close this section by providing an outline of the contents of the paper. In Section 2, we introduce the basic notions underlying the framework in which our work will be carried out. The inspiration comes from categorical abstract algebraic logic [13, 14] and, more specifically, uses the notion of a category of natural transformations on a given sentence functor and, implicitly, many aspects of the theory of -rule based -institutions, where is a category of natural transformations on the sentence functor of the -institution under consideration. A recent reference on this material is [15]. The reader should be aware that basic categorical notions are used rather heavily, but the elementary references to the subject [16–18] should be enough for necessary terminology and In Section 3 the basic constructions that take after corresponding constructions in [11] are presented. Here the meet-combination of logical systems refers to logical systems based on sentence functors, whose “signatures" are categories of natural transformations on the sentence functors and whose rules of inference and model classes are all categorical in nature. The goal is to work in a framework that would be amenable to categorical abstract algebraic logic methods and techniques so as to be able to consider aspects drawing from both theories. In Sections 4 and 5, we show that a form of soundness and a form of restricted completeness are inherited by the meet-combination, subject to the condition that it is present in the components being combined. These results yield also results on conservativeness and on consistency, which are presented in Section 6. Finally, based on the thorough work of [11], we present in Section 7 some examples showcasing various aspects of the general theory. These examples are relevant to both the theory developed in [11] and to its extension elaborated on in the present paper and, whenever appropriate, we draw attention to points where the two theories overlap and points where some differences occur. 2. Basic Framework In the sequel we consider an arbitrary but fixed category Sign, called the category of signatures, and an arbitrary but fixed Set-valued functor , called the sentence functor. Also into the picture in a critical way will be an arbitrary but fixed category of natural transformations on , which we view as the clone of all algebraic operations on . We remind the reader here of the precise definition of such a category, as presented, for example, in [15]. The clone of all natural transformations on is defined to be the locally small category with collection of objects and collection of morphisms -sequences of natural transformations . Composition is defined by A subcategory of this category containing all objects of the form for , and all projection morphisms , , , with given by and such that, for every family of natural transformations in , the sequence is also in , is referred to as a category of natural transformations on . A natural transformation in is called a constant if, for all and all , If is a constant, then we set , to denote the value of the constant in , which is independent of . An -rule of inference or simply an -rule is a pair of the form , sometimes written more legibly , where , are natural transformations in . The elements ,, are called the premises and the conclusion of the rule. An -Hilbert calculus is a set of -rules. Using the -rules in , one may define derivations of a natural transformation in from a set of natural transformations in . Such a derivation is denoted by . If the calculus is fixed and clear in a particular context, we might simply write . Given two functors and , with categories of natural transformations on , respectively, a pair , where is a functor and is a natural transformation, is called a translation from to . Moreover, it is said to be -epimorphic if there exists a correspondence between the natural transformations in and that preserves projections (and, thus, also arities), such that, for all , all and all , An -epimorphic translation from to will be denoted by , with the relevant categories of natural transformations on , respectively, understood from context. An -algebraic system consists of (i)a functor , with a category of natural transformations on ; (ii)an -epimorphic translation . An -matrix system or, simply, -matrix is a pair consisting of (i)an -algebraic system ; (ii)an axiom family on , that is, a collection of subsets . We perceive of the elements of as truth values for evaluating the natural transformations in and those of as being the designated ones. An -matrix semantics is a class of -matrices. Given a natural transformation in , we set where and . The matrix satisfies at under , written , if . An -rule is a rule of an -matrix semantics , written if , for all , implies , for every -matrix , all , all -assignments in , and all . If the semantics is clear from context, we simply write . In the remainder of this paper, by a logical system, or simply a logic, we understand a pentuple , where (i) is a category; (ii) is a sentence functor; (iii) is a category of natural transformations on ; (iv) is an -Hilbert calculus; (v) is a -matrix semantics. 3. Meet-Combinations Let be a logical system. Define the product logical system or, simply, product logic as follows: the logic has the same signature category as . The sentence functor is defined by setting for all , and, similarly, for morphisms. The category of natural transformations on has the same objects as and its morphisms into are pairs of natural transformations in . We call the members of the combined natural transformations or combined operations or, following [11], but rather apologetic for abusing terminology, combined connectives. Given in , we set in and, accordingly, given in , we set Every -rule gives rise to an -rule The calculus is an “enrichment" of in the sense that it contains all rules of the form , for , and some additional -rules devised for dealing with the combined operations: (i)for each in , the lifting rule (LFT) is included in to enforce inheritance by in of all the common properties of and in ; (ii) for each constant in , the special colifting rules (cLFT) are included in to enforce that should enjoy in only those properties that are common properties of and in . The reason for allowing only the special co-lifting rules (i.e., ones that admit only constants), rather than the (general) co-lifting rules, is that, unless this restriction is imposed, the rules are not in general sound. This will become apparent in the analysis to follow. Before introducing the semantics of , we show, following [11], that given constant natural transformations in , the two combined constructors and are closely related. Theorem 1 (Sernadas, Sernadas, and Rasga). Let be a logical system. Consider a constant natural transformation in and set . Then and are interderivable in . Proof. Apply first cLFT twice and then LFT, in each direction. One gets the following proof: Let and be -algebraic systems with the same underlying sentence functors and the same signature functor component . Let be defined, for all , by and similarly for morphisms, and let be given, for all , by Denote by the -algebraic system Moreover, given two -matrix systems and , let where , such that, for all , The semantics is the class consisting of all -matrix systems of the form , for , having underlying -algebraic systems , respectively, with the same underlying sentence functors and the same signature functor components. The semantics will be called the product semantics, taking after [11]. Finally, we let and stand for satisfaction and entailment in the product logic . 4. Soundness Recall that, given a natural transformation in , we use the notation to denote the natural transformation in . Proposition 2. Let be a logical system and consider the product system . Suppose that in , and , where the th component of is , for all . Then Moreover, for all , , all and all , Proof. We have the following equivalences: iff iff and ,iff and .This proves the Proposition. Proposition 3. Let be a logical system. If the -rule is sound in , then the -rule , is sound in . Proof. Suppose that and are in so that , , , and , such that , for all . Then, by Proposition 2, for all . Thus, by soundness of in , we get that and . Therefore, again by Proposition 2, and, hence, is sound for . Let be in and suppose that , and . Then, by the definition of , Proposition 4. Let be a logical system. The lifting rule LFT is sound in . Proof. Suppose that in , , and , such that, for some and , This implies that and . These imply that , whence, by (24), This proves the soundness of lifting. Let be a category and a sentence functor with a category of natural transformations on . Recall that a natural transformation is called a constant if, for all , all and that we use the notation , for this value, which is independent of . A class of -matrix systems is said to be a -semantics if, for all and in , every constant in and all , Intuitively, a semantics is a -semantics if and only if every constant is consistently interpreted as true or false under all matrix systems in the semantics, that is, under all combinations of interpretations and designated truth values included in the semantics. Proposition 5. Let be a logical system, where is a -semantics. For all constants in , the special co-lifting rules are sound in . Proof. Let , a constant in , , and , such that, for some and , Then (recalling the notation for constants) , whence Since is a -semantics, we get that the four following relations hold: Therefore, we obtain that which show that the special co-lifting rules are sound in . Theorem 6 (soundness). Let be a logical system, where is a -semantics. If is sound, then the product logic is also sound. Proof. We have shown in Proposition 3 that all rules inherited by are sound in . By Proposition 4, the lifting rule is sound in and, since is assumed to be a -semantics, by Proposition 5, the special co-lifting rules are sound in . Therefore the product logic is also sound. 5. c-Completeness A logic is -complete if it is complete with respect to constant natural transformations. More precisely, for all sets of constants in , we have that Proposition 7. If a logic is -complete, then, for all sets of constants in , Proof. Suppose that . Then, since includes all -rules of the form , for all , we get that . Therefore, by the -completeness of , we get that . Thus, there exists a model , together with , and , such that and . Hence, the model is such that and . Therefore , showing that is also -complete. Proposition 8. Let be a logic and suppose that, for some in , Then it is also the case that Proof. Suppose that . By the lifting rule, we must have or . Therefore, by hypothesis, or . Suppose, without loss of generality, that the first holds. Thus, there exists a model , and , such that Thus, we must have or This implies that either or bears witness to and concludes the proof. To formulate the following proposition we introduce a convenient notation: given a set of natural transformations in , we write Proposition 9. Let be a logic and suppose for some set of constants in Then it is also the case that Proof. If , then, by the special co-lifting property, . Thus, by hypothesis, . Hence, there exists , , and , such that while, at the same time, These relations imply that whence . Theorem 10 (-completeness). If the logic is -complete, then the product logic is -complete also. Proof. If is -complete, then, by Proposition 7, we get that, for all sets of constants in , Thus, by Proposition 8, for all sets of constants in , Finally, by Proposition 9, we get that, for all sets of constants in , This proves that is -complete. 6. Conservativeness and Consistency Theorem 11 (conservativeness). Let be a logic. For every set of natural transformations in , Proof. Suppose . If is such that, for some , , , , then, we get that , whence, by the hypothesis, , and, therefore, . This shows that . Theorem 12 (consistency). If the logic is consistent, then so is the product logic . Proof. This follows directly from conservativeness. 7. Examples from Classical Propositional Logic We present a simple example, essentially borrowed from [11], with the twofold goal of, first, seeing how the theory of [11] can be easily accommodated in the categorical framework (becoming actually a trivial case) and, second, showcasing the difference between the soundness of special co-lifting and the lack of soundness obtained by allowing the full power of the general co-lifting rule. Suppose, first, that is a logic, such that contains two binary natural transformations and two constants that obey the usual laws of conjunction, disjunction, truth, and falsity of classical propositional logic. Then, if , we have that This can be shown by observing that the hypothesis yields, by special co-lifting, and . These, by following usual derivations in , yield and , whence, by lifting, we finally obtain the conclusion. In fact, if we arrange for to consist, essentially, of Boolean algebras and evaluations together with Boolean filters, it is the case that where are the two projection natural transformations; that is, “commutativity" is valid in general, not just for constants. However, the derivation (50) cannot be inferred directly from this using -completeness, since there are nonconstant natural transformations involved. To illustrate, using the same example, that the general co-lifting rule fails, we may employ Boolean models to show that In fact, note that whereas the first belonging to the product filter of 2-element Boolean algebras, the second failing to do so. Note, next, that A straightforward computation shows that in the direct product of 2-element Boolean algebras, the left-hand side evaluates to , whereas the right-hand side to . Even though this serves as a counterexample for an analog of Theorem 1 concerning the exchangeability of components in the context of [11], this problem does not arise in our context. In fact, our reformulation of [ 11, Theorem 2.1] in the form of Theorem 1 would only ensure that Suppose now that in , one has the, possibly derived, rule , where are both constants in . Then it can be shown that In fact, follows from the special co-lifting, whereas lifting helps establish the opposite direction Finally, if one has available in a disjunction and an implication , both behaving classically, then, since both derived rules are rules of , one obtains the rule in by an application of lifting. We close with a generally phrased (rather informally formulated) problem that would be of interest in the context developed in the present work from the point of view of abstract algebraic logic. For more details on the motivations and the state of the art in that theory, as well as the precise definitions and more insights on the notions employed in the phrasing of this problem, the reader is referred to [13–15] and further references therein. Problem for Investigation. Suppose that we have some knowledge about the algebraic classification of the -institution , where is a logic in the sense of the present paper, possibly satisfying some additional conditions. The closure system is the system induced by the set of -rules, as detailed in, for example, [15]. What corresponding information may then be drawn about the -institution , that corresponds, in a similar manner, to the product logic ? 1. D. M. Gabbay, “Fibred semantics and the weaving of logics. I. Modal and intuitionistic logics,” Journal of Symbolic Logic, vol. 61, no. 4, pp. 1057–1120, 1996. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 2. A. Sernadas, C. Sernadas, and C. Caleiro, “Fibring of logics as a categorial construction,” Journal of Logic and Computation, vol. 9, no. 2, pp. 149–179, 1999. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 3. C. Caleiro, W. Carnielli, J. Rasga, and C. Sernadas, “Fibring of logics as a universal construction,” in Handbook of Philosophical Logic, vol. 13, pp. 123–187, 2nd edition, 2005. View at Publisher · View at Google Scholar 4. A. Zanardo, A. Sernadas, and C. Sernadas, “Fibring: completeness preservation,” Journal of Symbolic Logic, vol. 66, no. 1, pp. 414–439, 2001. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 5. V. L. Fernández and M. E. Coniglio, “Fibring in the Leibniz hierarchy,” Logic Journal of the IGPL, vol. 15, no. 5-6, pp. 475–501, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 6. W. J. Blok and D. Pigozzi, “Algebraizable logics,” Memoirs of the American Mathematical Society, vol. 77, no. 396, 1989. View at Zentralblatt MATH · View at MathSciNet 7. J. Czelakowski, Protoalgebraic Logics, vol. 10 of Trends in Logic-Studia Logica Library, Kluwer Academic, Dodrecht, The Netherlands, 2001. View at Publisher · View at Google Scholar · View at 8. J. M. Font, R. Jansana, and D. Pigozzi, “A survey of abstract algebraic logic,” Studia Logica, vol. 74, no. 1-2, pp. 13–97, 2003. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 9. M. A. Martins and G. Voutsadakis, “Malinowski Modalization, Modalization through Fibring and the Leibniz Hierarchy,” http://www.voutsadakis.com/RESEARCH/papers.html. 10. J. Malinowski, “Modal equivalential logics,” The Journal of Non-Classical Logic, vol. 3, no. 2, pp. 13–35, 1986. View at Zentralblatt MATH · View at MathSciNet 11. A. Sernadas, C. Sernadas, and J. Rasga, “On combined connectives,” Logica Universalis, vol. 5, no. 2, pp. 205–224, 2011. View at Publisher · View at Google Scholar · View at MathSciNet 12. A. Sernadas, C. Sernadas, and J. Rasga, “On meet-combination of logics,” Journal of Logic and Computation, vol. 22, no. 6, pp. 1453–1470, 2012. 13. G. Voutsadakis, “Categorical abstract algebraic logic: algebraizable institutions,” Applied Categorical Structures, vol. 10, no. 6, pp. 531–568, 2002. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 14. G. Voutsadakis, “Categorical abstract algebraic logic: equivalent institutions,” Studia Logica, vol. 74, no. 1-2, pp. 275–311, 2003. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 15. G. Voutsadakis, “Categorical abstract algebraic logic: algebraic semantics for pi-Institutions,” Mathematical Logic Quarterly. In press. 16. M. Barr and C. Wells, Category Theory for Computing Science, Les Publications CRM, Montreal, Canada, 3rd edition, 1999. View at Zentralblatt MATH 17. F. Borceux, Handbook of Categorical Algebra, Vol. I, Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge, UK, 1994. 18. S. Mac Lane, Categories for the Working Mathematician, Springer, New York, NY, USA, 1971. View at MathSciNet
{"url":"http://www.hindawi.com/journals/jmath/2013/126347/","timestamp":"2014-04-17T19:36:14Z","content_type":null,"content_length":"728959","record_id":"<urn:uuid:7df869e0-09ff-4107-9d90-833cd5a354e3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Today we will advance our coverage toward quantum mechanics by looking at an unusual feature of daily life. We’ll be looking at an aspect of the world which doesn’t quite behave as expected; though it won’t be as counterintuitive as, say, the Heisenberg uncertainty relations, it does tend to make people blink a few times and say, “That’s not — well, I guess it is right.” Furthermore, poking into this area will motivate the development of some mathematical tools which will remarkably simplify our study of symmetry in quantum physics. Fortunately, then, I found an assistant to help me with the demonstrations. Please welcome my fellow physics enthusiast, here on an academic scholarship after a rough-and-tumble life in Bear City: Those of us who grew up, as I did, with the aftereffects of the “New Math” are familiar with the “commutative law.” At some tender age, we learned that “addition is commutative,” which we were taught meant that the order in which we do additions doesn’t matter. 22 + 17 is the same as 17 + 22, and in fact for any numbers real or complex, [tex]a + b = b + a.[/tex] When we’re talking about numbers which we use for counting things, this seems like the most unremarkable property in the world. How could it be false? My assistant will illustrate the process at Multiplication, which we define in terms of repeated additions, inherits the commutative property of addition, and as we build more types of numbers — irrational, real, complex — this nice and unsurprising character trait continues to hold. It seems so unremarkable that we’d be foolish not to wonder why it deserves a fancy name at all! Why make such a fuss about it and turn it into a grand thing. . . unless there were a place where it wasn’t true. Subtraction, we note, is not commutative. Six minus three gives one result, but three minus six gives another — a result which, as it happens, we can’t even use the “natural” or “counting” numbers to specify. But wait, isn’t subtraction just a special kind of addition — the addition of negatives? Aha: something must be going on with that “flip” which turns a number into its opposite, bearded Thinking in terms of a number line, the expression a + b just means counting b units from some starting point a. Flipping the sign to get subtraction, a – b, means counting b units in the opposite direction from the same starting point. Addition of any number is a translation along the number line, and negation is a flip to the other side of zero, the origin. A translation followed by a flip does not give the same result as a flip followed by a translation. Starting from a number a, the former sequence of operations gives -(a + b), while the latter gives [tex](-a) + b = b – a.[/tex] Now, if addition is a shift to the side, then multiplication is a scaling. On the number line, twice a is the segment from 0 to a stretched out so that it extends from 0 to 2a. Repeated multiplications are successive scalings; b^2 is scaling by the amount b twice in succession, and the square root of b is that scaling which one must perform twice in order to scale by b. This viewpoint is useful for the insight it gives into the next question: what is the operation which, when performed twice, gives a flip? In more “numerical” terms, we’re asking if there exists a scaling operation — a multiplier — which, when we multiply it by itself, has the same effect as flipping, or negation. This is the geometric interpretation of asking what is the square root of negative one! Thinking geometrically, it is not so difficult to find the answer. If we imagine the number a represented by a line segment from the origin 0, sticking out in the right-hand direction, we can pivot the number a one quarter-turn clockwise or counterclockwise (your choice), to give a line segment pointing up (or down). By repeating the same operation, we’ll get a line segment of length a pointing to the left — which is just the number (-a). The square root of -1 is a rotation by a quarter-turn! Well, we’ve worked ourselves right into the complex numbers. Instead of a number line, we’ve got a number plane, each number in which can be represented by a scaling (a shrinking or an expansion) and a turning. (Incidentally, we’re in a very good position now to understand why -i is just as good a square root of -1 as is i.) Starting with the number 1, we can rotate by a gradually increasing angle to trace out a full circle, on which the vertical coordinate is sin θ and the horizontal coordinate is cos θ. Rotation is just multiplication by a complex number, and the complex number which rotates by θ without scaling has a name; it’s called e^i θ. This is the geometric interpretation of Euler’s formula [tex]e^{i\theta} = \cos\theta + i\sin\theta[/tex] which we used a little while ago to prove some trigonometric identities. My assistant Whew! We’ve covered a fair bit of territory just thinking about translations, rotations and scalings. One important thing to notice is that successive rotations in the 2D plane commute. Intuitively, we feel pretty confident that twisting by 30 degrees, taking a breather and then twisting by 60 degrees will have the same result as turning by 60 degrees, pausing and turning by another 30. What can we say about geometric operations in three dimensions? Naturally we can translate shapes in three different directions, but what does the extra “room” mean for rotations? Instead of having one axis about which we can turn, we’ve got three independent ways we can spin and twist. In aerospace lingo, in order to represent an arbitrary rotation in 3D we have to give pitch, roll and yaw. (There are many different ways to represent the same rotation, but they all end up giving the same amount of information.) To specify a scale factor in addition, we need one additional number, for a total of four. Therefore, whatever mathematical objects we employ to do for 3D space what the complex numbers do for 2D space, they must have four components. We can deduce another fact about the 3D analogues of complex numbers by looking at how successive rotations in 3D behave. Let’s pick a “zero point” somewhere in space to be our origin, and choose three perpendicular axes, which can be left-right, up-down and forward-back. We can rotate an object around any of these axes, by any amount we wish. To begin, we’ll consider rotations around the vertical and around the horizontal left-right axes, and we’ll rotate by one quarter-turn (90 degrees or π/4 radians) each time. My assistant will demonstrate a rotation around the vertical, followed by one around the horizontal: Now, surely, performing the same operations in the opposite order will have the same outcome, yes? It worked in two dimensions, didn’t it? Surprise! Rotations about different axes in three dimensions do not commute! This means that if we represent a rotation around the vertical by some “hypercomplex” number v, and one about the left-right axis by h, then whatever the specific form of v and h, we must have [tex] vh \neq hv.[/tex] Historically, this problem was approached in two different ways. One group of people followed the path to quaternions, while another took the other fork and developed vectors and matrices. Because we’re aiming for quantum mechanics, we’ll be taking the latter approach, though quaternions have interesting properties and practical applications too. (Mark Chu-Carroll and Tim Lambert wrote about them a few months ago.) 4 Comments 1. Well I don’t mean to be flippant. While I thoroughly enjoyed the matrix, I didn’t care for the other matrices. I might have to take my chances with quaternions! 2. thats a relly good way of explaining it in a cool way! 3. Thanks. 4. This is really cute and helpful. And overall cute!!!!! But at the same time helpfull. You have combinde 2 complete opposites to make one huge amazing math learning website of cuteness. Im shocked. Keep up the good work
{"url":"http://www.sunclipse.org/?p=174","timestamp":"2014-04-17T10:33:33Z","content_type":null,"content_length":"30393","record_id":"<urn:uuid:45eaf4d9-ec85-4988-9d70-a0f6222a969c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
A new nonlinear approximation for elastic wave scattering. ASA 127th Meeting M.I.T. 1994 June 6-10 3aPA1. A new nonlinear approximation for elastic wave scattering. S. Kostek T. M. Habashy C. Torres-Verdin Schlumberger--Doll Res., Ridgefield, CT 06877-4108 A novel approximation to simulate elastic wave scattering in arbitrarily heterogeneous media is introduced. The approximation, formulated in the frequency domain, is derived from a volume integral equation governing the displacement vector within the scatterer. It is shown that if the displacement vector is assumed a locally smooth function of position within the scatterer then it can be expressed as the projection of the background displacement onto a scattering operator. This scattering operator is nonlinear with respect to the spatial variations of density and Lame constants and can be computed via simple explicit formulas. The scattering operator adjusts the background displacement by way of amplitude, phase, and polarization corrections which are needed to estimate the internal displacement due to an arbitrary source excitation. It is also shown that the new approximation is substantially more efficient than iterative Born techniques whose convergence is hardly guaranteed for large contrasts in material properties. Validation tests are presented which confirm that the new approximation remains accurate for large contrasts of the elastic parameters and over a wide frequency range. Moreover, the new approximation has nearly the computational efficiency of the first-order Born approximation but is much more accurate. These two features make the scattering formulation extremely attractive to approach multifrequency inversion problems involving a multitude of scatterers.
{"url":"http://www.auditory.org/asamtgs/asa94mit/3aPA/3aPA1.html","timestamp":"2014-04-21T02:01:56Z","content_type":null,"content_length":"2143","record_id":"<urn:uuid:afa747b2-84cd-4fb5-9258-39b39e612df0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/gurvinder/answered","timestamp":"2014-04-19T17:05:38Z","content_type":null,"content_length":"114987","record_id":"<urn:uuid:79987588-c5e7-491a-8e51-a5cf344a03fc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
BEGIN:VCALENDAR VERSION:2.0 METHOD:PUBLISH BEGIN:VEVENT DTSTART:20130308T210000Z DTEND:20130308T223000Z LOCATION:Roger Bacon 412 TRANSP:OPAQUE UID:www.siena.edu DTSTAMP:20140421T081112Z SUMMARY:Math Colloquium DESCRIPTION: Dr. Yu Sun\,&nbsp\;Assistant Professor of Quantitative Business Analysis\, will speak in the Math Colloquium on&nbsp\;Friday at 4:00pm in RB412. Consensus-Type Stochastic Approximation Algorithms The interest in this work is motivated by cooperative and coordinated control of unmanned aerial vehicles (UAVs). This work is concerned with asymptotic properties of consensus-type algorithms for networked systems whose topologies switch randomly. The regime- switching process is modeled as a discrete-time Markov chain with a finite state space. The consensus control is achieved by using stochastic approximation methods. Distinct convergence properties of three different scenarios with respect to the stationary measure of the Markov chain are studied. Simulation results are presented to demonstrate these findings.&nbsp\;&nbsp\; Refreshments will be served in the Math Library (RB434) at 3:30pm with the talk to follow at 4:00pm in Roger Bacon 412.
{"url":"http://www.siena.edu/snipplets/27.asp?item=335131&date=3/8/2013","timestamp":"2014-04-21T12:11:12Z","content_type":null,"content_length":"2567","record_id":"<urn:uuid:fb5bd769-1a81-4961-8fd1-6b237445a45d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra and Its Applications Gilbert Strang Renowned professor and author Gilbert Strang demonstrates that linear algebra is a fascinating subject by showing both its beauty and value. While the mathematics is there, the effort is not all concentrated on proofs. Strang's emphasis is on understanding. He explains concepts, rather than deduces. This book is written in an informal and personal style and teaches real mathematics. The gears change in Chapter 2 as students reach the introduction of vector spaces. Throughout the book, the theory is motivated and reinforced by genuine applications, allowing pure mathematicians to teach applied mathematics. User ratings 5 stars 4 stars 3 stars 2 stars 1 star Review: Linear Algebra and Its Applications User Review - Jim Robles - Goodreads This was the textbook for a class I took at Stanford while working for Watkins-Johnson. Read full review Review: Linear Algebra and Its Applications User Review - Rahul Nyamangoudar - Goodreads Best book some one looking to understand linear algebra topics.......... Combined this with videos of Gilbert strang himself is great to learn,.. Read full review Bibliographic information
{"url":"http://books.google.com/books?vid=ISBN0030105676","timestamp":"2014-04-17T19:13:37Z","content_type":null,"content_length":"35615","record_id":"<urn:uuid:b1ff745b-e687-46e2-97c9-feba6ec4e0de>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
reddit is a website about everything Why is the derivative of a circle's area (pi*r^2) its circumference (2*pi*r)? by Vulmoxin math [–]antonfire0 points1 point2 points ago It's more a matter of what description of the shape's size you differentiate with respect to than the shape itself. For example, if you differentiate a circle's area with respect to its diameter, you don't get its circumference. So what's the "right" thing to differentiate with respect to? For those that want the spoiler, I've recently posted it in another comment in this thread. I'm finally keeping hygienic again by Hyperionxjdin depression [–]antonfire0 points1 point2 points ago Usually when I neglect hygiene, it's because I don't think it's worth it that day. "I'm not getting close enough for anyone to smell me anyway", that sort of thing. What helped me was, when decision time came, thinking of it as practice for overcoming lethargy. Then I had two reasons to shower: (a) I smell better and don't have to worry about getting close to people, and (b) I'm practicing activating myself so that it will be easier to activate myself in the future. Sometimes, even when (a) or (b) alone isn't enough, the combination is. You can also compromise. Shower, but only soap/scrub the smelly bits. Brush your teeth, but rush through it and only do the outside. Better to do it poorly every day than to do it perfectly once a week. You're practicing activating yourself, which is a skill. If you're not good at it, you make it easier for yourself until you get the hang of it. And when you do it poorly, you're practicing not being a perfectionist to boot! Does depression mean my thoughts don't count? by CandDin depression [–]antonfire1 point2 points3 points ago If just dismissing your "negative" thoughts seems dishonest to you, you are not alone. Here's part of the thought process that help me deal with that hang-up. Everything you think you only think because because you're X. Being human means my thoughts are biased in a variety of (often quite predictable) ways. My thoughts are habitual, they're based on many layers of abstraction and interpretation and selective attention, and they have more to do with my circumstances than "who I am". I think "I should get food" when I'm hungry but I can't think of anything in my house I want to eat. I think "I'm sleepy" when I catch my eyes drooping. I think "I wish I'd done more work yesterday" when I have a bunch of stuff to do. I think a driver is careless if they change lanes without signaling. I think "I should just kill myself" when I'm feeling particularly bad about myself. Those thoughts are automatic habits, and they affect my, mood, behavior, and other thoughts down the line. That fact is never going to change; I don't think I will attain enlightenment in this lifetime. The vast majority of my thoughts is always going to be a jumble of biased habitual responses to whatever is going on around me and in me. That includes the "good" ones and the "bad" ones. It includes the ones that are "true" and the ones that are "false". It includes everything I'm thinking and writing down now. Some of these habits, however, are causing me problems and making me unhappy. I've been advised to pay attention when that happens and try to notice what the habits actually are, and what the biases are in the habitual thoughts. The idea is not to dismiss the thoughts, it is to recognize where they are coming from and not feel too attached to them. Why don't I also make an effort to do the same thing with the "positive" thoughts? For the same reason I don't put sunscreen on the soles of my feet. When I say "Fuck, I didn't get any work done yesterday, I'm so lazy", am I only thinking that because I'm depressed? No, sometimes I'm also thinking it because it's true. But am I only thinking it because it's true? Also no. There's a bunch of other true things I could be thinking. True or not, It's a habitual thought which worsens my mood and makes me less productive. So when the thought happens and is true, I recognize that it's true, but I also recognize that having that thought is a habit which doesn't benefit me very much, and I would actually be better off if I didn't think it so often. So which of my thoughts should I attribute to myself? All of them, I'm the one having them. In that sense, they all "count". Which of my thoughts should I attribute to myself and nothing else? None of them, they're all brought about by a big ol' mess of circumstances and causality. In that sense, none of them "count". Help with Apathetic depression? by Single_mom_and_Proudin depression [–]antonfire0 points1 point2 points ago The thing that has helped me most so far is group cognitive-behavioral therapy (CBT). This was a group of 10 or so depressed people, meeting weekly for an hour and a half. Every week, the therapist would talk about the way depression is thought of from the CBT point of view, and discuss some skills and habits that it would be helpful for us to acquire, and how to go about acquiring them. We would bring up our own experiences and struggles and so on. I found this a lot more engaging than one-on-one therapy; I tend to have more clarity when thinking about other people's problems than about my own. So if you have access to something like that, you might want to give it a shot. If you'd like, I can give more details about what it was like for me. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire0 points1 point2 points ago I've given myself a time limit in reading / composing these posts, so excuse me if I've missed some crucial bit of information. I don't understand what "Statement prioritizes" means, but I don't have to in order to add another column to this table, because knowing whether or not B+ happened doesn't require me to know anything about any statement. World Statement prioritises In Bt? (some mystery person) In Bt? (/u/Brian) In B+? BB Boys Yes Yes Yes BB Girls No Yes Yes BG Boys Yes Yes Yes BG Girls No No Yes GB Boys Yes Yes Yes GB Girls No No Yes GG Boys No No No GG Girls No No No B+ doesn't seem relevant, since we're talking about the difference in information under different interpretations of Bt. That is not what I am talking about. I've said many times that the interpretation I am talking about does not even have to involve any "speaker". If you scroll up, you'll find that the reason I even brought up the Bt that you have attributed to me is to make it clear that that was not the interpretation I was talking about, even though it gives the same answer as the one I am talking about. But you keep insisting that that is what I'm talking about, and I don't know why. Do divergent sequences have a UNIQUE sum? For example, 1+2+3+... = -1/12. Is that unique? by curtdbzin math [–]antonfire6 points7 points8 points ago You are correct. You can't. It's not true that 1-1+1-1... = 1/2, not when you interpret it the standard way anyway. What's true is that if you were to concoct some procedure which is both linear and stable (see here) for assigning numbers to some infinite series, and if that procedure actually did assign some value to that series, then that value would have to be 1/2. The value for the series 1+2+4+8+..., if any, would have to be -1. Note that a priori it's not even clear that there is a consistent way of doing this. One shouldn't just assume that because something seems to work out, it won't lead to a contradiction down the Do divergent sequences have a UNIQUE sum? For example, 1+2+3+... = -1/12. Is that unique? by curtdbzin math [–]antonfire5 points6 points7 points ago No, e.g. the series sum_n n s^n ((n+1)s - (n-1))/2 converges to 0 on a neighborhood of s = 0, which would suggest that "its value at s = 1" is 0, not -1/12. Edit: A few more words about this. Generally a central idea in complex analysis is that smooth functions on the complex plane are forced to be much more rigid than smooth ones on the real line. That is, what's going on in one area can tell you a lot about what's going on in another area. That's why analytic continuation works in the first place. Phrased more negatively, you can't control what's going on in one area of the complex plane independently of what's going on in another area. However, in this case, we still have enough control to do pretty much whatever we want, because we only need to control (a) approximate values in some region of the complex plane, and (b) the exact value at some particular point outside that region. Specifically, all we need is for the partial sums of the series to take on the values n(n+1)/2 at some point p, and to converge on some region to an analytic function f, for which f(p) is something other than -1/12. But this is easy, because there are enough complex functions that are small on some region but relatively large at some point outside it. I chose s^n, which is very small near 0, but large enough at 1. Scaling these up by n(n+1)/2 gives them the right value at 1, and keeps them small enough near 0 that they still converge to 0. So I took the series whose partial sums are n(n+1)/2 s^n. The moral of the story is that even analytic functions aren't rigid enough to always guarantee a particular value for that sum. A fun exercise might be to go through the manipulations made in these videos to get the fact that 1+2+3+... = -1/12 and see what "goes wrong" if you make those manipulations with this series. (And, of course, to make them with the zeta function and see what "goes right".) Maybe it will suggest some ideas for a more restrictive framework which actually does force that value. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire0 points1 point2 points ago Specifically, that you'd get it 0% of the time for GG, and 100% of the time otherwise. This is certainly not treating the information as "the only thing you found out". No, that is exactly what it is, from the mathematical perspective, and that's the only way I can talk precisely about "information" at all in this context. (See my edit below for the formalism.) Did you look at the computations I did? Are you trying to think about this in that framework at all? Then she can't conclude anything without a prior assumption about what information would be revealed. Yes she can. When a Bayesian agent find out something is true, she conditions on the truth of that statement. You keep forcing a complicated model of the world at Alice when you can answer the question without it. Now, before she reads the very next word, what probability would you say she should assign to whether the next word is going to "boy" or "girl"? Unless you think this is biased 3:1 in favour of "boy", you simply can't conclude the 1/3 answer. That's right, in the situation you are describing, she wouldn't assign a probability of 1/3 to BB if she read "boy". That's because in the situation you're describing, she would have to use a whole bunch of information about how god behaves, including this idea that god behaves symmetrically. Eg. if Alice had asked "Is there at least one boy - yes or no?", she would indeed expect that distribution (and the 1/3 answer would apply). But without adding in that assumption about the nature of the information, it's simply not true that you're treating that information as the only information you have. I think this is the interpretation you're assuming "The family has at least one boy and no other information" corresponds to. No it isn't. I said it wasn't in the last paragraph of the previous post. The interpretation you just described is the one I described earlier, where Alice conditions on Bt, assuming P(Bt | BB) = P (Bt | BG) and P(Bt | GG) = 0. In that interpretation understanding the event Bt involves a complicated model of the world including a speaker and questions and how they behave and so on, it just gives an answer that's the same as when you condition on B+. The event Bt is a proper subset of the event B+, i.e. it contains all the information that B+ contains, and then some. (In particular, it contains the information that there is even a speaker telling us something.) In the special interpretation I'm talking about, Alice just conditions on the event B+. As far as I can tell, the idea of conditioning on B+ doesn't even seem to exist for you, and you insist on interpreting it as her conditioning on some other event like Bt. You know what, let me be more specific about what I even mean by "contains more information". Alice has a total probability space Omega. Points in this space are "possible worlds", and events are subsets of this total probability space, with assigned probabilities for each event. Every time Alice learns something, it limits what possible worlds she could be in, so she restricts her attention to a subset U of Omega, and the probability she assigns to an event X is now P(X|U) = P(X n U) / P(U). When she learns another fact, she restricts to an even smaller subset of Omega, and so on. I say an event X has more information than Y if X is a subset of Y, and P(X) < P(Y). For example, the event B+ is the set of all possible worlds where the family has two boys. The event Bs is the set of all possible worlds where Alice spotted a boy and didn't spot the other child. The event Bt is the set of all possible worlds where some speaker says to Alice "this family has at least one boy". Bs contains more information than B+. If we make some assumptions about the speaker, Bt also contains more information than B+. So, when she spots a boy and doesn't spot the other child, she restricts her attention to Bs, and P(BB|Bs) = 1/2 as we saw above. If instead she hears this mystery speaker say "this family has at least one boy", she restricts her attention to Bt, and under the assumptions about the speaker I stated in the previous post, P(BB|Bt) = 1/3. When she somehow manages to learn only that the family has at least one boy, she can still update her knowledge sensibly, just like the other two situations. She conditions on the event B+, and we have P(BB|B+) = 1/3. And yes, the last one is a bizarre situation, because it's nearly impossible to imagine a mechanism by which she could learn only B+ and nothing else. In any practical situation, every moment she's cutting away huge swaths of Omega by learning facts like "That's an energetic child" or "god decided to talk to me for some reason" or "god has a funny beard" and so on. There are good reasons to interpret the problem in a way where she conditions on some event other than B+. But, again, in every one of those interpretations, that event contains more information than B+. If she is to use the information that the family has at least one boy and nothing else, then she has to condition on B+. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire-1 points0 points1 point ago that you're assuming the speaker would have said this I'm not even assuming the existence of any "speaker"! I'm not making assumptions about the format the information was presented in, I am only assuming that it doesn't carry any other information with But sure, if we decide to picture a situation where we're getting this information from someone saying "this family has at least one boy", then I'm assuming that if I hear that person say it, then the only thing I found out about the family is that it has at least one boy. Now, yes, that boils down to the same thing you said: the speaker says it if and only if the family has at least one boy. Yes, I'm already convinced that this is a strange scenario. Yes, it is difficult to picture any non-contrived situation where you find out that the family has at least one boy and nothing else. Yes, that makes it fairly reasonable to pick a different interpretation for this problem. But none of that contradicts the fact that, in the other interpretations, you're getting more information about the family than that it has at least one boy, and in this one, you are I'd say (b) is much closer to what I'm assuming than you. You are wrong. Let GG, BG, and BB be the events that the family is a two-girl, boy-girl, two-boy family, respectively, and B+ be the event (BG or BB). Let's say Alice is a rational Bayesian agent, and she starts out assigning probabilities P(GG) = 1/4, P(BG) = 1/2, P(BB) = 1/4. If Alice somehow obtains the new piece of information "the family has at least one boy" and no other new information, then she will condition her probabilities on the event B+. Her new probabilities are P'(GG) = P(GG | B+) = P(B+ | GG) * P(GG)/P(B+) = 0, P'(BG) = P(BG | B+) = P(B+ | BG) * P(BG)/P(B+) = 2/3, P'(BB) = P(BB | B+) = P(B+ | BB) * P(BB)/P(B+) = 1/3. In order to get any other answer, Alice would have to condition on an event other than B+. For example, let's say the probability of spotting a particular child in the family is p, and whether a child is spotted is independent of their genders and of whether the other child is spotted. Then say Bs is the event that Alice spots exactly one child, who is a boy. If she conditions on that event, then her new probabilities are P'(GG) = P(GG | Bs) = P(Bs | GG) * P(GG)/P(Bs) = 0, P'(BG) = P(BG | Bs) = P(Bs | BG) * P(BG)/P(Bs) = p(1-p) * 1/2 / P(Bs) = 1/2, P'(BB) = P(BB | Bs) = P(Bs | BB) * P(BB)/P(Bs) = 2*p(1-p) * 1/4 / P(Bs) = 1/2. In the first case, Alice is using only the piece of information "the family has at least one boy". In the second case, she is using the piece of information "the family has at least one boy, and also I spotted one such boy, and also I didn't spot the other child." Now, we could also condition on the event Bt, the event that some speaker tells us "this family has at least one boy". If we make the assumptions P(Bt | GG) = 0 and P(Bt | BG) = P(Bt | BB), then the answer comes out to 1/3 again. This is what you seem to think I'm doing, and your argument is "well sure, I condition on the event Bs, but why is conditioning on the event Bt any more special?" Well, I'm not conditioning on some event Bt. I'm conditioning on the event B+. That event is more special than any other choice of event to condition on: after all, the only information about the family that was actually given in the problem statement is whether or not B+ occurred. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire0 points1 point2 points ago I would say assuming either of these scenarios qualifies as "making something up" I am not assuming either of those scenarios. I'm not assuming anything about the process of obtaining the information, except that (a) somehow we got the piece of information that the family has at least one boy, and (b) that we didn't get any other information about the family. Now (b) does not seem like a reasonable assumption to you, and that's fine. But this is still the only interpretation where you don't make up any information about the family that wasn't mentioned in the problem statement. That singles this interpretation out from all the others. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire0 points1 point2 points ago Well, it doesn't matter much for (E), since the answer is 1/2 either way. This happens roughly because you get enough information to distinguish the two children even when they're both boys. The case where the answers are actually different is (C), so that's the one we should be looking at if we want to know what the issue is. It requires us to assume that we're going to pick a random family and then reveal the truth of the statement "This family has at least one boy named Alex: true or false". Which ultimately is a very strange way to behave, and there isn't anything in the question suggesting this. But that's the only information you actually get about the family in the statement of the question. The interpretation you're suggesting, where the answer to (C) is 1/2, involves making something up about the process of obtaining that information. It involves concocting a situation where you somehow obtain more information about the family than was in the problem statement. There is certainly not anything in the question suggesting any particular process for finding out that the family has at least one boy. My interpretation of the problem as a straight-up conditional probability question does not involve making anything like that up. That is a decent reason. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire0 points1 point2 points ago [Spoiler warning: I give away some of the answers below, so don't read this if you don't want to see them.] I agree with the gist of what you're saying, but I think there is a decent reason to make the interpretations I have in mind. Namely, those are the interpretations you make when you treat it as a straightforward conditional probability problem. So, for example, (E) becomes "what percentage of two-child families with at least one boy named Alex have two boys"? The problem is that it's not at all clear that what we're choosing to ask about ("Boy" / boy named "Alex") is independent of the family we chose. I don't think this is a good description of what the problem is. (C), (D), and (E) are separate questions, and there's no reason to decide to mix parts of them together into one situation. But I think what you're really saying is that it's not at all clear in each of these situations how we got the information we got. (Just like in Monty Hall.) And you're right. If you try to make (C) into an actual real-world situation, it's hard to come up with one which gives the answer 1/3. This is because it's hard to picture a situation where you get the datum that the family has at least one boy without also getting some other information. For example, if you're visiting this two-child family and you happen to spot only one child, who is a boy, then under some reasonable assumptions the answer is 1/2: you are twice as likely to spot a boy when they're both boys as you are to spot one in a one-boy-one-girl family. (For this same reason the answer to (E) actually is roughly 1/2, in case you thought it wasn't.) To make the answer 1/3, you need some observation which you are equally likely to make of a two-boy family as of a one-boy-one-girl family. Maybe boys have a distinctive smell, but having two boys in the house doesn't make the smell any stronger. Yeah, that's pretty much the least contrived one I can come up So yes, as intended, (C) is a very unnatural question to ask, and it's not surprising if people interpret it as some more natural question. But I think that's pretty much the moral of that series of questions anyway: the answers to these things are very sensitive to the setup. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire1 point2 points3 points ago When we know something happened purely by coincidence, we don't draw conclusions from it. If there are two people in your class with the same last name, you might expect them to walk home in the same direction after school, because they're likely to be siblings. But if you know they're not related, then you no longer have a reason to make that prediction. Similarly, if Monty opens 98 doors to reveal a goat, you might expect that the one he didn't open contains the car. But if you know it was just coincidence, then you no longer have a reason to make that prediction. The Monty Hall Problem is cognitive Three-Card Monte by Prooffreaderin math [–]antonfire19 points20 points21 points ago The reason a lot of people get it wrong is that there are two fairly reasonable ways to interpret the problem, and they give two different answers. (A) Monty knows which door has the car, and intentionally opens a different one. (B) Monty picks one of the two doors randomly, and by chance it happens not to have the car. If you want to give someone the problem and then be smug about the answer, you have to be a bit delicate in your phrasing. Edit: since some people aren't convinced that the two situations are actually different, here is a simulation: http://codepad.org/awZal2nW. Edit 2: while we're at it: (C) You pick a family at random, and it turns out they have two children, one of whom is a boy. What's the probability that the other one is a boy? (D) You pick a family at random, and it turns out they have two children, the elder of whom is a boy. What's the probability that the other one is a boy? (E) You pick a family at random, and it turns out they have two children, one of whom is a boy named Alex. What's the probability that the other one is a boy? (Make some reasonably simple assumptions about how people name their kids.)
{"url":"http://www.reddit.com/user/antonfire","timestamp":"2014-04-18T06:05:55Z","content_type":null,"content_length":"141337","record_id":"<urn:uuid:853ad082-af58-42cd-8970-cd305f87c523>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
University Place Prealgebra Tutor Find an University Place Prealgebra Tutor ...I have tutored at risk students with general knowledge skills as well as tutor people for the GED exam. I am currently a certified teacher in two states, Florida and Washington. Part of my everyday duties is instructing physical science for four periods of the day. 16 Subjects: including prealgebra, chemistry, physics, biology ...I feel that tutoring will prepare me for my future career and to earn some money to pay for my classes. I will work hard to assure that the student achieves their highest potential. I believe that learning should be a fun thing that sparks interest in the student. 6 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...Included in these studies were several 200 and 300 level courses on microbiology, cellular physiology, lab work, and organic and inorganic cellular chemistry. I have tutored microbiology as a homework assistant at the local high school and taught introductory classes to home school students. I ... 48 Subjects: including prealgebra, chemistry, Spanish, reading ...Last year, I took the following courses: AP English Language and Composition, AP Calculus AB, AP Spanish Language, and AP Chemistry. I received scores of 5 on all of the AP exams for these subjects. I am currently enrolled in AP Biology, AP US Government and Politics, and AP Calculus BC. 15 Subjects: including prealgebra, chemistry, English, writing ...I look forward to the point where students are able to teach me new things. I currently offer a reduced rate for grade-school subjects in math and science. Regards, GlennAlgebra, is one of the broadest parts of mathematics, along with geometry, trigonometry, number theory, calculus, etc. to work and study science. 45 Subjects: including prealgebra, chemistry, physics, calculus
{"url":"http://www.purplemath.com/University_Place_Prealgebra_tutors.php","timestamp":"2014-04-16T04:30:54Z","content_type":null,"content_length":"24266","record_id":"<urn:uuid:89566d8b-f2d5-43ef-b181-465e581c7815>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization Question May 1st 2010, 05:57 PM #1 Super Member Dec 2008 Optimization Question I need help on the following question: 1) A symmetrical gutter is made of metal sheeting 30cm. wide by bending it twice as shown in the cross section. Find an expression for the area $A(\theta)$ of the cross section and hence find the value of $\theta$ for the gutter to have maximum carrying capacity. This is what i have try to do so far. I know that Area of a square is L*W and Area of 2 triangles is $2(0.5*x^2 * sin(\theta))$ so $L*W + x^2 * sin(\theta) = 30$ Make $W = \frac{30 - x^2(sin(\theta))}{10}$ $A = L*W$ $A = 10(\frac{30 - x^2(sin(\theta))}{10})$ $A = 30 - x^2(sin(\theta))$ $A' = -x^2(\frac{dA}{d\theta})cos(\theta) + 2xsin(\theta)$ $A' = \frac{2xsin(\theta)}{-x^2(cos(\theta))}$ Don't think this is correct. I need help on the following question: 1) A symmetrical gutter is made of metal sheeting 30cm. wide by bending it twice as shown in the cross section. Find an expression for the area $A(\theta)$ of the cross section and hence find the value of $\theta$ for the gutter to have maximum carrying capacity. area of a trapezoid ... $A = \frac{h}{2}(b_1+b_2)$ $h = 10\cos{\theta}$ $b_1 = 10$ $b_2 = 10 + 20\sin{\theta}$ try again. I need help on the following question: 1) A symmetrical gutter is made of metal sheeting 30cm. wide by bending it twice as shown in the cross section. Find an expression for the area $A(\theta)$ of the cross section and hence find the value of $\theta$ for the gutter to have maximum carrying capacity. This is what i have try to do so far. I know that Area of a square is L*W and Area of 2 triangles is $2(0.5*x^2 * sin(\theta))$ This makes no sense because you don't say what "x" means. Your area should depend on the single variable $\theta$, not both $\theta$ and some undefined "x". What you can do is this: you know that the hypotenuse of each right triangle is 10 so, since " $sin(\theta)= \frac{opposite side}{hypotenuse}$", $opposite side= hypotenuse* sin(\theta)= 10sin(\ theta)$ and " $cos(\theta)= \frac{near side}{hypotenuse}$", $near side= hypotenuse* cos(\theta)= 10 cos(\theta)$. Of course, the area of a right triangle is "(1/2) near side* opposite side" so $A= \frac{1}{2}(10 cos(\theta))(10 sin(\theta))= 50 sin(\theta)cos(\theta)$. The base of the central rectangle is 10 and the height is the "near side" of the two right triangles, $10 cos(\theta)$ so its area is $100 cos(\theta)$. The entire area is $100 cos(\theta)+ 100 sin(\theta)cos(\theta)= 100 cos(\theta)(1+ sin(\theta))$. so $L*W + x^2 * sin(\theta) = 30$ Make $W = \frac{30 - x^2(sin(\theta))}{10}$ $A = L*W$ $A = 10(\frac{30 - x^2(sin(\theta))}{10})$ $A = 30 - x^2(sin(\theta))$ $A' = -x^2(\frac{dA}{d\theta})cos(\theta) + 2xsin(\theta)$ $A' = \frac{2xsin(\theta)}{-x^2(cos(\theta))}$ Don't think this is correct. so then to find the value of $\theta$ do i make the equation: $100 cos(\theta)(1+sin(\theta)) = 30$ why are you setting the area = 30 ??? you want to find the value of $\theta$ that maximizes the cross-sectional area of the gutter ... i.e. the trapezoid. find $\frac{dA}{d\theta}$ and maximize like you were taught. ok this is what i have done $\frac{d(100cos(\theta)(1+sin(\theta)) = 900)}{dx}$ $100cos(\theta)cos(\theta)+(1+sin(\theta))-100sin(\theta) = 0$ $100cos^2(\theta) -100sin^2(\theta)-100sin(\theta) = 0$ $1-2sin^2(\theta) - 100sin(\theta) =0$ this is where i get stuck, what should i do next? $A = 100\cos{\theta}(1 + \sin{\theta})$ using the product rule ... $<br /> \frac{dA}{d\theta} = 100\cos{\theta}(\cos{\theta}) + (1+\sin{\theta})(-100\sin{\theta})<br />$ $<br /> \frac{dA}{d\theta} = 100\cos^2{\theta} - 100\sin{\theta} - 100\sin^2{\theta}<br />$ $<br /> 100\cos^2{\theta} - 100\sin{\theta} - 100\sin^2{\theta} = 0<br />$ $<br /> (1-\sin^2{\theta}) - \sin{\theta} - \sin^2{\theta} = 0<br />$ $<br /> 1 - \sin{\theta} - 2\sin^2{\theta} = 0<br />$ $<br /> (1 - 2\sin{\theta})(1 + \sin{\theta}) = 0<br />$ finish it May 1st 2010, 06:14 PM #2 May 2nd 2010, 03:52 AM #3 MHF Contributor Apr 2005 May 2nd 2010, 04:42 PM #4 Super Member Dec 2008 May 2nd 2010, 05:40 PM #5 May 2nd 2010, 10:41 PM #6 Super Member Dec 2008 May 3rd 2010, 05:11 AM #7
{"url":"http://mathhelpforum.com/calculus/142548-optimization-question.html","timestamp":"2014-04-16T08:50:34Z","content_type":null,"content_length":"63837","record_id":"<urn:uuid:cab3df0a-f680-44ed-80dc-3f9b18726b1b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: calculate the mass in kilograms of 6.7x10^21 molecules of NO2 Best Response You've already chosen the best response. You need to use Avogadro's number 6.0221415 × 10^23 which explains how many molecules go per mole, and then you just need to calculate the molecular weight of NO2 (just combine the molecular weight of one N atom and two O atoms) so the calculus is: x molecules of NO2 * 6.0221415 × 10^23 molecules/mole = x moles of NO2 x moles of NO2 * molecular weight of NO2 (g/mole) = x g of NO2 Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4e9e51170b8b81e68807de02","timestamp":"2014-04-21T07:50:46Z","content_type":null,"content_length":"27888","record_id":"<urn:uuid:bfb76497-d987-43cf-aa07-ce163e9d6f75>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Universal Harsanyi type spaces In the main body of this entry, we have been concerned with streams, trees, and sets, especially in settings where one wants one object to be a part (in some sense) of another, this to be a part of yet another, and so on ad infinitum. These are the standard motivating examples for the Anti-Foundation Axiom and related modeling techniques. In this section, we broaden the discussion by turning to another field. Type spaces are mathematical structures used in theoretical parts of economics and game theory. The are used to model settings where agents are described by their types, and these types give us “beliefs about the world”, “beliefs about each other's beliefs about the world”, “beliefs about each other's beliefs about each other's beliefs about the world”, etc. That is, the formal concept of a type space is intended to capture in one structure an unfolding infinite hierarchy related to interactive belief. John C. Harsanyi (1967) makes a related point as follows: It seems to me that the basic reason why the theory of games with incomplete information has made so little progress so far lies in the fact that these games give rise, or at least appear to give rise, to an infinite regress in reciprocal expectations on the part of the players. This quote is from the landmark papers he published in 1967 and 1968, showing how to convert a game with incomplete information into one with complete yet imperfect information. Many of the details are not relevant here and would take us far from our topic. But three points are noteworthy. First, the notion of types goes further than what we described above: an agent A's type involves (or induces) beliefs about the types of the other agents, their types involve beliefs about A's type, etc. So the notion of a type is already circular. Second, despite this circularity, the informal concept of a universal type space (as a single type space in which all types may be found) is widespread in areas of non-cooperative game theory and economic theory. And finally, the formalization of type spaces was left open: Harsanyi did not really formalize type spaces in his original papers (he used the notion); this was left to later researchers. Work started with Böge and Eisele (1979) and continued in several papers up through that of Heifetz and Samet(1998). It was later realized that there is a connection to the kind of work that is emphasized in this entry, and this connection is what we are concerned with, both in this section and in A supplement on additional related modeling of circularity Getting back to our very rough informal description above, what exactly are “beliefs”? And how can a structure contain types which give rise to beliefs about other types? What is the relation of this to the infinite hierarchy of beliefs about beliefs about … beliefs about the world? Can we characterize the space of all possible types? Again, we are not concerned with conceptual matters concerning beliefs and games in this entry. Most of the important papers for our study are technical contributions dealing with the matter of universal type spaces. We are concerned with the conceptual matters related to our own topics, and with some of the technical matters connected with measure theory, probability, and the like. Returning to type spaces, we recall that the usual modeling of belief in game theory is via probability. So we would expect that type spaces should be probabilistic versions of Kripke models. One should replace the functor ℘ with something like Δ where Δ(W) = {μ : μ is a probability measure on W}}. Indeed, this is the case: most proposals in the literature do end up studying certain mappings from a space X to some variation of the functor Δ applied to X. This is our third clue of the connection. But note that this informal description leaves many loose ends: if W is just a set, how do we know that it has any probability measures? Does it matter which σ-algebra we use? For the remainder of this discussion, we need the notion of a measurable space, which is defined in the following supplementary document: Measurable Spaces Let S be a fixed measurable space. A type space over S is a tuple T = (M, σ, N, τ), where M and M are measurable spaces, and σ:M → Δ(S × N) τ:N → Δ(S × M) What we are describing would be better called a two-player type space over S, and the spaces M and M are the spaces of the two players. The idea is that S represents the possible “states of nature”, each m ∈ M is a possible “type” of the first player, and each n ∈ N is a possible type for the second. Each player does not know the exact state of nature or which type of player he or she faces. But each does have some probabilistic “opinion” on this matter: each σ(m) gives a measure on S × N, and so the first player can measure subsets of both S and N (using marginals). One assumption incorporated into our definition is that the players “know” their own types. This is why σ(m) is a measure on S × N and not S × M × N. However, each μ ∈ Δ( S × N) and each m ∈ M together define a measure μ^*[m] on the larger product S × M × N by concentrating μ on m: μ^*[m](E) = μ(m)({(s,n) : (s,m,n) ∈ E}). Everything we said for the first player goes through for the second, mutatis mutandis, and so we also have measures ρ^*[n] ∈ Δ (S × M × N) for each ρ ∈ Δ(S × M) and each n ∈ N. The basic modal language for a type space over S would have the following sentences: 1. Each measurable subset A of S is taken as an atomic sentence. 2. If φ is a sentence and p ∈ [0,1], then both B^1[p] φ and B^2[p] φ are sentences. 3. If φ and ψ are sentences, then so is φ ∧ ψ. We read B^i[p] φ as saying that player i believes the probability of φ to be at least p. We define a semantics for this language, interpreting each sentence φ by subset [φ] ⊆ S × M × N as follows: 1. [A] = A × M × N. 2. [B^1[p] φ] = {(s, m, n): (σ(m)^*[m]([φ]) ≥ p} 3. [B^2[p] φ] = {(s, m, n): (τ(n)^*[n]([φ]) ≥ p} 4. [φ ∧ ψ] = [φ] ∩ [ψ]. One may also add negation or implication to the language, with the semantics from classical logic. A good first exercise would then be to show that [B^1[p] φ → B^1[1] B^1[p] φ] = S × M × N. This is an echo of the assumption agents know their types: they are able to introspect on their own beliefs, and to do so with certainty. Reformulation of the problem of universal type spaces. The language of beliefs is more natural than the formulation of type spaces (and it also came first), and so the reason for introducing type spaces is to interpret that language. One then wants a type space T^* which is rich enough to contain all possible types. It might be better to work on the semantic side, and consider, for each space T and each point x = (s,m,n), the theory of x; this is the set of sentences φ such that x ∈ [φ]. Let us call this set th(T,x). One wants a space T^* with the property that for all type spaces T and all x from it, there is a unique point x^* ∈ T^*, th(T,x) = th(T^*,x^*). Intuitively, points with the same theory represent players with the same beliefs, beliefs about beliefs, etc., so they should be identified for the purposes of game theory. These theories then serve as surrogates for the types. A universal space T^* would contain a point realizing each theory of any point x (in any space), exactly one copy in fact. The mathematical problem is then to find such a space. Note that it is not clear that such a space exists in the first place: one requires the space T^* to be a set, and perhaps the universality requirement would force it to be a proper class.
{"url":"http://plato.stanford.edu/entries/nonwellfounded-set-theory/harsanyi-type-spaces.html","timestamp":"2014-04-20T10:48:09Z","content_type":null,"content_length":"23373","record_id":"<urn:uuid:338be6a9-84b1-48be-8358-13df1ae00e69>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Great Moments in Pedantry: The odds of your existence What are the odds that you, as an individual, exist? Pretty good, you'd guess, since you're sitting right here reading this. But, in an abstract sense, the chances that you exist are really rather slim. In fact, once you see the full infographic, put together by futurist and designer Sofya Yampolsky of Visual.ly, I'm sure you'll be much more skeptical of your existence. The infographic is based on this post by Dr. Ali Binazir. Maggie Koerth-Baker is the science editor at BoingBoing.net. She writes a monthly column for The New York Times Magazine and is the author of Before the Lights Go OutTwitter and Facebook. Maggie goes places and talks to people. Find out where she'll be speaking next. More at Boing Boing 93 Responses to “Great Moments in Pedantry: The odds of your existence” 2. and yet this has happened over 7 Billion times. Imaging that. □ I can’t wait for those images. □ 7puck7 says: Not really, you were not born 7 Billion times… He’s calculating the odds for ONE specific person, not the odds for one human being’s birth. □ 7 billions? I think you mixed up with number of living people in the world right now with number of people born.. it’s way higher than that for the latter.. 3. David Forbes says: The odds that I would exist are about 1:1. You could ask my twin brother, whose odds are about the same. I have a relative who made his name by writing books about creationism, in which he posits that the odds of life beginning the evolutionary way (amino acids and whatnot) were something like 1 in 10^3000. I suspect he was going about it all wrong, though, since that would lead to us not existing. □ Daniel Meyer says: Hah. Is this guy’s last name Meyer? It’s the same as mine… and I also have a degree in mathematics. His argument is pretty shameful. I like to counter this with a very easy thought experiment. Roll a standard 6-sided die. The odds that the number you got was 1/6 . Roll 2… the odds that you get that sequence is (1/6)^2. Go ahead and roll 50 die. Look at the sequence. The odds it will come up is (1/6)^50. Wow! What an amazingly small chance for this occurring!… yet all it took was one attempt. Amazing! 4. Patrick Grote says: Can you link to the original or a larger version? The small type face is very hard to read. Also, Facebook logon isn’t working for comments. Thanks! 5. Dan Fornika says: Learn about conditionals! I do exist. Odds that I exist are 1.0. □ a11138776 says: The question is: Who is “I”? 6. Small odds, roll the bones Small transfinitesimalFlip the flop cards fast 7. Can you prove, without a shadow of a doubt that I do exist? Or that anybody does? □ GTMoogle says: Yes, you exist, QED Prove me wrong :) 8. Is there a version of this image somewhere not riddled with JPG compression artifacts? I can hardly read it. 9. The odds are 1:1 even in an abstract sense. It would have been impossible for there to have not been me now.. 10. xzzy says: The odds that I can’t read the small text in the graphic because it’s been obliterated by jpeg compression: 1 Where is the original at? □ mindtheink says: Exactly. I feel the odds of us getting a PNG file of this are slim at best. ☆ Miracles! ;) http://i.imgur.com/Dub8k.png 11. GlenBlank says: Cogito, ergo sum. Ergo, probabilty that I exist = 1 □ “I think that I think, therefore I think that I am.” ~Ambrose Bierce □ Guest says: truly putting Descartes before the hordes 12. Glen Able says: Jeez…it seems that everyone is very reluctant to “go forth and feel like the miracle they are”. 13. Dr Manhatten’s conversation with Laurie on Mars. □ Uh…Temba, his arms wide…? 14. GlenBlank says: Entirely aside from that, this is one of those daffy attempts to calculate ‘odds’ that turn on bogus assumptions, the mistaken idea that average results indicate odds, and the illusion that long odds mean low probability. That makes it the most irritating sort of pedantry – wrong in so many different ways it’s hard to even know where to start. 15. David Llopis says: Yeah… some people walking around acting like “the miracle that they are” don’t seem to be aware that they aren’t special. 16. Greeny says: My head hurts. What are the odds of me buying a bottle of wine on the way home from work tonight? 17. RobDobbs says: I hate these graphics that are so huge yet somehow manage to use letters so small that they compress to an illegible chunky mass. 18. Colin Lickwar says: The point is taken, but this infographic represents the probability that you would exist “exactly the same” if the world was restarted and not the probability of your existence, which as pointed out is 1, if we use the authors definition of existence. □ Mallet Head says: Thank You. Now the point of this exercise makes sense. If the entire universe were restarted then the chances of any one specific person existing as they do now, is near zero. Ok. Great. So what? It’s a stupid thought exercise. 1) you do exist. 2) the universe isn’t going to start over 3) even if it did you’d never know you existed before. a) Don’t know about you but I can barely remember last week, much less the time between universes. 19. Indeed, this kind of wrong-headed thinking leads to all kinds of problems. Change the example a bit. The odds of selecting any particular number at random from a set of one million are 1/ 1,000,000. However, the odds of selecting any number from a set of one million are 1/1. Only if you take the post hoc result as the intended outcome does it seem statistically unlikely. However, some result is certain. □ you are the one number out of a million. If there were exactly one million clones of you, this would make sense. 20. Greeny says: Think I’d better give my girlfriend some of that one in 400 quadrillion tonight 21. hnice says: Is it Dawkins or Dennett who writes about 1,024 people in a single-elimination coin-flipping tournament? At the end of it, there’s guaranteed to be one winner who actually wins 10 coin tosses in a row, which seems incredible but is actually whatever the opposite of incredible is. It’s a great illustration of how silly and backwards this setup is. To me, the trick is figuring out how to find meaning in life given that I’m entirely *not* special. I’m just this guy, and that’s OK. □ VerySincerely says: Yes, a coin flipping tournament pre-supposes a winner. But is the universe such a tournament? ☆ Guest says: heads. heads. heads. heads. heads. heads. heads. heads. We were called. 22. GlenBlank says: ….and then finally, turned into an ‘infographic’, so that even the borderline-literate can think they learned something of value. 23. TooGoodToCheck says: Yeah, if there were some way to roll back the universe 100 years and replay it from that time, I can pretty much guarantee that I, in my current form, would not exist. but, well, BFD . . .I was trying to come up with a more significant way of dismissing this analysis than just BFD but while I was doing that, hnice did it for me 24. Lobster says: The odds that I exist are 1. The odds that any particular piece of matter in the entire universe should happen to be part of me are astronomically (literally) small. 25. emo hex says: I don’t exist I’m not! 26. I like to roll out something like this when creationists blather on about the odds of evolution. Fun, now I have a handy image, but somehow I still think they just won’t get it. The odds of a particular person winning the lottery are slim, the odds of someone, somewhere winning the lottery are 100% (depending of course on the rules of the particular lottery.) □ VerySincerely says: The lottery analogy is useful, and I think the particular rules matter. If we’re talking about the kind of lottery where there is guaranteed a winner, the odds of someone winning are 1. Of course the universe didn’t agree to any such rules about producing intelligent life, so I think the analogy falls down. In the other kind of lottery, where a whole bunch of people choose numbers, the odds that you or me will win are slim, but the odds that someone from the pool will win are actually pretty good. I think this is the kind of lottery we’re talking about here (no granteed winners), except that the odds of anyone winning are impossibly slim (as the poster illustrates). The comments on BoingBoing suggest that everyone reasons that since they’ve bought the winning lotto ticket (I exist), the odds of winning the lottery were 1. I disagree. The universe could have produced no winners. It seems to me that we do exist in spite of impossibly slim odds, and that I exist in the face of even slimmer odds. ☆ flagler23 says: But you see it makes no sense to speak of the odds of us existing. That’s because not only is the universe a one-off event, but if we didn’t exist we wouldn’t be around to question the odds. That we do exist, as intelligent, self-reflecting beings, brings about that question, so that even if the odds were 1 (absolute) that the universe would produce us we would necessarily, by your logic, conclude that our existence is highly improbable-and be wrong. If we could observe an experimental run of multiple universes and note which ones produced intelligent life and which ones did not we would be in a position to determine the odds of us existing. But conceptually that is paradoxical because the universe we are observing the experiment from would itself be a one-off event and include ourselves and all the universes of the experiment in which intelligent life exists, so the odds would still be absolute that we 27. TooGoodToCheck says: As long as we’re floating meaningless odds, he left out the odds that there would be a habitable planet, in a universe whose physical constants aren’t inimical to life. Try calculating those 28. daneyul says: Odds that multiple posters will stubbornly answer “1″ and take this rather innocuous speculative article way too seriously…uh…1? 29. Guest says: “Five to one against and falling…” she said, “four to one against and falling…three to one…two…one…probability factor of one to one…we have normality, I repeat we have normality.” She turned her microphone off — then turned it back on, with a slight smile and continued: “Anything you still can’t cope with is therefore your own problem.” 30. Glen Able says: so…anyone want to remark on which of the (dozens of) horribly wrong things in this calculation is the most horribly wrong? The particular one that jumps out at me is this: It seems to be trying to determine how likely it is that I got my particular genome, and that “who I am” is solely dependent on this. This seems rather at odds with the final message that I’m some sort of unique miracle – just shove this genome in another body and you’ll get ME again! 31. robdobbs says: This reminds me of the Doomsday argument: http://en.wikipedia.org/wiki/Doomsday_argument 32. Badly flawed in the first posit: my parents lived within 300 feet of each other in a village north of Manhattan that had about 5,500 residents. If you further narrow that to the number of kids/ people within an age range of say give or take 5-8 years, then we talking about a few hundred. There is no validity in referencing the rough population of the globe at any point! Dumb. □ umbriel says: That assumption caught my eye right out of the gate as well — clearly that “number of prospects” term varies enormously on a case by case basis. And you don’t have to be a starry-eyed romantic to recognize that basic issues of attraction and compatibility aren’t really “random”, and further hone down the successive parent numbers. So I’d guesstimate that, for my daughter, that first 1:40,000,000 probability would improve to about 1:50 or better. The non-uniqueness of sex cells issue, discussed below, should pretty much negate that whole term, though post-conception variables would open up a whole ‘nother can of worms. 33. sussed says: Wait didn’t Douglas Adams solve this in the ‘Hitch Hikers Guide to the Galexy’ when he wrote (dare I say proved) how the population of the universe is essentially Zero. (…because the Universe is infinite and the amount of planets which developed life on them is so small compared to infinity that the number is practically zero) …so in turn the odd are that you exist are Zero. 34. s2redux says: “Roxanny, every life is unlikely.” — Louis Wu 35. Andrew Rice says: I remember hearing someone say, “Remember if you never win anything in your life you won the races to make it into the egg” 36. In the words of Mike Skinner… “For billions of years since the outset of time, Every single one of your ancestors survived, Every single person on your mum and dads side, Successfully looked after and passed onto you life… what are the chances of that, like?” I think that we all understand that technically the chance of us each existing is 1; because we exist, but that’s not really the purpose of the musing. 37. What are the odds that a tossed handful of glitter lands the way it does? It can’t happen! 38. flagler23 says: Rather than attacking the obvious flaws in the original argument, why not restate the idea-that we are a product of chance-in a more accurate sense? If time were reset to some arbitrary point, say, the beginning of life on earth, would we still be eventualities? I think before the discovery of quantum phenomena the answer would be an absolute yes. In a fixed, determined universe without spontaneous events everything follows from everything else, in time and space. But with quantum fluctuations wouldn’t the universe unfold differently if, hypothetically, time-states were repeated? The butterfly effect produced from quantum uncertainty might be infinitesmal but would still be exponentially proportional to the difference in time, so at what starting point of this “experiment” would we become unlikely? Of course our identity needs to be defined as well, which is totally arbitrary. At what point does something resembling us and thinking like us cease to truly be us? I’ll stop there before I get too carried away. 39. Hey all, here’s the hi-res version: http://i.imgur.com/Dub8k.png 40. Recluse says: I frequently use this argument when people tell me how stupid I am for playing the lottery. Pretty much the chances of YOU being here are ZERO..but there you are! When SOMEBODY wins, the odds of them winning are 100% □ Recluse says: OOPS Neal Starkey beat me to it way up there in Comment Land….. □ flagler23 says: And when YOU lose, the odds of you losing are 100% 43. ookluh says: “Now go forth and feel and act like the miracle that you are.” No pressure! How about I just go forth and act like the completely improbably accident that I am instead? Aw shit, I just typed a sentence of words. What a fucking miracle! 44. Sounds like my hillbilly neighbors with 10 kids should be playing the lotto. 45. GawainLavers says: But tell me, what are the odds that my existence has precluded the existence of someone smarter, more attractive and happier from existing? 46. PaulDavisTheFirst says: I wrote something here that was startlingly, flagrantly wrong. □ lknope says: What? Why are not all siblings identical then? Why can one couple have both boys and girls? Explain fraternal twins and fraternal twins where one is a girl and one is a boy. ☆ PaulDavisTheFirst says: Yep, I really am an idiot. ○ GlenBlank says: I believe that barring mutation, all X-carrying sperm are genetically identical and the same for all Y-carrying sperm. But your believing it doesn’t make it true. :-) Go look up Meiosis. We’ll wait. ■ Antinous / Moderator says: Does this mean that your sign-in problem is fixed? ■ GlenBlank says: Did you not get my new email? :-) (Short answer: Yes, is fixed. Bad cookie, no comment.) ■ Antinous / Moderator says: Oops. Missed it in the pile. ■ PaulDavisTheFirst says: not sure how i forgot that level of biology. I’ve edited my previous posts to reflect my idiocy. 47. Antinous / Moderator says: There was a newspaper story a decade or so ago, “Science Proves That Dragonflies Can’t Possibly Fly.” 48. Daemonworks says: 100%… or i wouldn’t be reading this ;) 49. Sparrow says: So what you’re saying is that for me to be here, the rest of the world rolled a natural one on a 10^2685000 sided die. □ umbriel says: But clearly such a die would be effectively spherical, and would therefore never stop rolling. 50. Okay, time for a stats lesson. First, “odds” implies a ratio. It’s the probability of something happening divided by the complementary probability (p/(1-p)). I think the infographic meant to use the term “probability.” Second, this highlights the difference between permutations and combinations. A permutation is any specific set of event outcomes. Flipping a coin four times and getting H, T, H, T is one permutation, getting H, H, T, T is another. A set of permutations with a specific number of each event outcomes is a combination. The permutations both listed are both examples of the combination two heads and two tails. Let’s say I flip a coin 500 times. Getting any one predetermined sequence of heads and tails is phenomenally unlikely, so there is a low probability for that permutation. Getting 250 heads and 250 tails, in any sequence, is much more likely – the probability of this combination is the sum of the probability of each combination where there are 250 heads and 250 tails. The infographic gives the probability of the permutation of the event outcome known as you. The particular gametes that met (and all the necessary prior gametes meeting) and expressing themselves in a particular way is highly unlikely. You are a permutation. The probability of the combination of a person though – very likely. Even the probability of the combination of a class of people like you is fairly likely, i.e. that you in particular might be unlikely, but the probability in this set of events of producing a tall man with brown hair who likes computers and Megadeth is much more likely. Who was it that said “Remember, you’re unique – just like everyone else”? 51. SCAQTony says: I swear to God, this is why I play the lotto each week – The odds above make the lotto look like a coin toss! 52. hinten says: This presupposes that if the sperm right next to me would have won that it wouldn’t have been me! 53. This old creationist canard is embarrassingly stupid every time the creationists trot it out. It is no less stupid when put to any other use. The failure to understand the workings of “odds” or “chance” does not prove miracles nor magic. 54. Andrew S says: On top of all of this, we have to look at the determinist argument. Yes, a person’s father could have met 200 million women, but due to social groups and such, it is very unlikely he would have met almost all of those people. If we look at the people he plausibly could have met, it’s a lot closer to the number he actually met. But then, you have to define plausible, and you have to figure out how much detail you want to go into, because that’s how probability works. Whoever made this didn’t go into too much detail though, as evidenced by the 200 million women thing. In reality, any physicist or neuroscientist will tell you that causality is total, except perhaps for certain things in the realm of quantum mechanics (though the chances of those acting on the macroscopic or even on the microscopic are quite small). In reality, the anything happening is exactly delineated by what came before. Who your father ends up with totally depends on genetics and environment. As humans, we’re resistant to that idea because we evolved to be resistant to it, but it’s absolutely true. Neurons in the brain take an input and turn it into an output. There’s no way to do something without a reason. Even if you were to punch yourself in the face right now, the reason would be to prove me wrong, which is determined by your personality, which is determined by (say it with me) genetics and environment. Yes, there are things we have no way of knowing based on our ability to gather information, and this necessitates probability, but it doesn’t mean that probability is actually manifest in the world. From what we can determine, there could be incredible numbers of different paths, but if we could somehow look close enough, we would see that there is only one path anything could have ever taken. 55. LYNDON says: The problem we have is that all probabilities are conditional. What is the probability of me existing given present conditions? 1. Of the world as it was when my parents were born producing me (assuming some degree of randomness)? Quite small. Of a randomly selected possible universe having a me in it? You’re gonna need a bigger infographic. 56. GlenBlank says: Also, when he says, Probability of boy meeting girl: 1 in 20,000. So far, so unlikely. Now let’s say the chances of them actually talking to one another is one in 10. …all I can say is, he obviously never met my mother. :-) The chances of her meeting a man she found attractive but then NOT talking to him were… well, maybe one in eleventy bazillion or so. 57. If you took a random person off the street, what would be the probability that their height was exactly X feet? The answer is zero– basic continuous probability question. Nobody is ‘exactly’ six feet, because height is not something that can be perfectly exactly measured (a counterexample to this would be a dice roll, which is discrete). Now let’s think about you. You have a height, which is finite, of course. Let’s call it Y feet. Now, what is the probability that a random person off the street would have height Y? zero. 59. TheMudshark says: Look at all you scientists, taking the magic out of things again, you ought to be ashamed of yourselves. In fact, you´re all muthafuckas lyin´and gettin´me pissed. 60. 7puck7 says: Not really, you were not born 7 Billion times… He’s calculating the odds for ONE specific person, not the odds for one human being’s birth. 61. “[Y]our existence here, now, and on planet earth presupposes another supremely unlikely and utterly undeniable chain of events. Namely, that every one of your ancestors lived to reproductive I think the odds that all of my ancestors were able to reproduce is pretty great… that’s why they are my ancestors. The odds are certainly greater than the possibility that one of my ancestors spontaneously generated into existence. It’s certainly true to say that the odds of a single-celled organism’s genetic material surviving for 4 billion years to become a multi-celled organism is great. But it doesn’t work in reverse. That’s like saying the odds that my wooden chair came from a tree is equal to the odds that any one tree will become my wooden chair. 62. Jared Holt says: Here’s a link to the original: http://visual.ly/what-are-odds 63. John Ridley says: Complete tripe. Throw this out, replace with anthropic principle (you’re here to think about this BECAUSE you are here, end of story). The odds of that rock being in that exact place on this planet given a random big bang are very very close to zero, but to talk about that is simply crap. SOME universe exists, and we’re standing in this one, so it’s the only one we can talk about. In any other universe, we wouldn’t be here asking the question. 64. hymenopterid says: Indeed, the odds that my parents would marry a person who could actually stand to be with them are vanishingly small. 65. a11138776 says: I propose another – hopefully less controversial – subject: The moon seems to be as big as the sun, because the distance to the earth has randomly the right size. The moon-earth distance is changing with time. So, what is the probability of this phenomenon to occure just during the short period of mankind?
{"url":"http://boingboing.net/2011/11/09/great-moments-in-pedantry-the.html","timestamp":"2014-04-20T21:08:47Z","content_type":null,"content_length":"145557","record_id":"<urn:uuid:2d7756dc-1e94-4847-a5a4-9555129e08ea>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
An Evolution Model of Trading Behavior Based on Peer Effect in Networks Discrete Dynamics in Nature and Society Volume 2012 (2012), Article ID 138178, 15 pages Research Article An Evolution Model of Trading Behavior Based on Peer Effect in Networks ^1School of Economics and Management, Southeast University, Nanjing, Jiangsu 211189, China ^2Nanjing Institute of Railway Technology, Nanjing, Jiangsu 210031, China ^3School of Business, Nanjing Normal University, Nanjing, Jiangsu 210097, China Received 3 November 2011; Revised 23 February 2012; Accepted 6 March 2012 Academic Editor: M. De la Sen Copyright © 2012 Yue-Tang Bian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This work concerns the modeling of evolvement of trading behavior in stock markets which can cause significant impact on the movements of prices and volatilities. Based on the assumption of the investors' limited rationality, the evolution mechanism of trading behavior is modeled according to peer effect in network, that investors are prone to imitate their neighbors' activity through comprehensive analysis on the neighboring preferred degree, self-psychological preference, and the network topology of the relationship among them. We investigate by mean-field analysis and extensive simulations the evolution of investors' trading behavior in various typical networks under different characteristics of peer effect. Our results indicate that the evolution of investors' behavior is affected by the network structure of stock market and the effect of neighboring preferred degree; the stability of equilibrium states of investors' behavior dynamics is directly related with the concavity and convexity of the peer effect function; connectivity and heterogeneity of the network play an important role in the evolution of the investment behavior in stock market. 1. Introduction In the behavioral finance literature, investors are considered to be limited rational, especially for the less sophisticated ones, who always attempt to mimic financial gurus or follow the activities of successful investors, since using their own information/knowledge might incur a higher cost [1]. The most typical example is that in the financial crisis of 2008, agents’ were rushed to sell shares in the same direction, leading the market behavior to herding critically. More imitation behavior among the investors is probe to result in herding behavior in stock market, as Nofsinger and Sias [2] note, “a group of investors trading in the same direction over a period of time,” and introducing big fluctuations easily, particularly in bull or bear states. Empirically, this may lead to observed behavior patterns that are correlated across individuals and that bring about systematic, erroneous decision-making by entire populations [3]. Therefore, mechanism of dynamic and herding behavior of stock markets has attracted much academic and industrial attention. In recent years, the literature about dynamic or herding behavior of stock markets can be classified into two categories. One category studies focus on examining the existence of herding behavior in stock market, which the investors’ trading behavior evolves into. Griffin et al. [4] conclude, the nature of herding is not universal and differs across exchanges and countries. Specifically, investors in emerging markets might exert herding patterns different from those observed in developed countries. In [5], the authors find evidence of intentional herding in China among both domestic accesses to information and expertise between these two cohorts. Zhou and Lai [6] discover that herding activity in Hong Kong’s market tends to be more prevalent with small stocks and that investors are more likely to herd when selling rather than buying stocks. In [7], Chiang and Zheng extend the investigation of herding behavior from domestic markets to global markets and find evidence of herding in advanced stock markets (except the USA) and in Asian markets. In all cases, herding behavior is proved to be a common state of stock behavior’s evolving into. The other studies concentrate on using various methods to study the evolution process of trading behavior in stock market. Wei et al. [8] propose that instability in the stock market is partly due to social influences impacting investors’ decision to buy, sell, or hold stock. By developing a Cellular Automata model of investment behavior in the stock market they show that increased imitation among investors leads to a less stable market. In [9], Liang and Han construct artificial stock market models by multiagent method based on small-world relationship network and find that evolvement of investors’ trading behavior in stock market emerges most of stylized facts, such as clustered volatility, bubbles, and crashes. Based on incorporating stock price into investor decisions, Bakker et al. construct a social network model of investment behavior in stock market and find that real life trust networks can significantly delay the stabilization of a market [10]. Chen et al. use experimental platform to study the correlation between herd behavior and earnings volatility in stock market and find that stock price bubbles or crashes are caused by synergy herding behavior through imitation agent and market sentiment signals [11]. Liu et al. study herding behavior by designing an artificial stock market model analyzing its results through computational experiment. They find that, in the short run, herding interacts with the returns, and destabilizing the market; in the long run, it is not the traders’ herding behavior but the traders’ disregard of discovering their own information, the low proportion of informed traders and the lack of market liquidity that are to blame for the anomalies in stock markets [12]. Hassan et al. integrates agent computational modes and fuzzy set theory, to study how to simulate friendship dynamics in an agent-based model, based on the principle that social relationships are ruled by proximity [13]. Falbo and Grassi proposes a market with two kinds of agents: speculators and rational investors to analyze the dynamics of a financial market when agents anticipate the occurrence of a correlation breakdown and finds that the market equilibrium results depend on the prevalence of one of the two types of agents [14]. Consequently, it has a great important chance to study the evolvement of the trading behavior, for it affects the market critically and vice versa. All the above-mentioned models and methods may capture some mechanisms of investor trading behavior and its impact on the market, but most of the studies mainly focus on the macroscopic features of trading behavior through quantitative analysis and only a few articles explore the inner mechanism of trading behavior’s evolvement from the complexity theory perspective. In this work, we attempt to fill this gap by investigating the evolution of individual investors’ trading behavior through mean-field theory and complex networks theory from microscopic perspective, so as to probe the collective dynamical evolution mechanism of trading behavior. According to Johansen and Sornette [15], all traders around the world can be seen as a network organized from family, friends, colleagues, incorporated not only by the source of opinion but also the local influence among them. We consider a network of interacting agents whose trading behavior is determined by the action of their peer neighbors, according to peer effect evolvement rules. Using a mean-field approach, the evolvement equilibrium of investors’ trading behavior is analyzed and it crucially depends on two components: the connectivity distribution of the network and the concavity of peer effect function. We show that, the stability of equilibrium states of investors’ behavior dynamics is directly related with the concavity and convexity of the neighbors preference function. These results and their analysis can be used to generate insights to understand the evolution law of the collective dynamical behavior. The rest of the paper is organized as follows. In Section 2, we introduce the model and define the evolvement rules of investor’s trading behavior. In Section 3, the evolvement characteristics of the trading behavior are presented by the method of mean-field equation. In Section 4, the comparison between analytic and simulation results are conducted. And the conclusion is drawn in Section 5. 2. The Model 2.1. The Network Nature, society, and many technologies are sustained by numerous networks that are not only too important to fail but paradoxically for decades have also proved too complicated to understand Albert and Barabási [16]. Based on the viewpoint posted by Johansen and Sornette [15] that all traders around the world can be seen as a network organized from family, friends, colleagues, incorporated not only by the source of opinion but also the local influence among them, the evolution system of investors’ behavior in stock markets can be described as a network, while nodes represent investors, the edges between every two nodes represent their relationship, such as, social relations and trade association. Consider a finite but large population of individuals . Each investor interacts with a subset of the population which form a complex network , where means that and are linked in network. We consider undirected networks, that is, if then . Let be the set of individuals with whom is linked. Formally, , where is the number of neighbors of , often referred as his connectivity. The connectivity can differ across individuals in the population and its distribution displays for each the fraction of nodes with connectivity . More precisely, . 2.2. The Evolvement Mechanism Our model studies evolvement of trading behavior in a population of stock markets. Normally, a trader can only exist in three discreet states: , where if select behavior “sell,” if select behavior “buy,” and if select the behavior “hold.” Considering the following continuous dynamic process at time , the state of the stock market is a vector , so, each investor can switch their trading behavior among these three states. For investors’ limited rationality and there incomplete knowledge about the market information, the factors affecting the investors’ behavior can be roughly divided into two categories which are self-psychological preference and peer effect. Self-psychological preference is a comprehensive judgment of the macroeconomic environment, individual psychological and other random factors which have influence on the investors’ recognition. Peer effect means the direct effect of neighbors’ behavior; therefore, the evolvement mechanism of investors’ trading behavior can be addressed as follows.(1) Self-psychological preference: for each trader , let , , and be preference probabilities of trading behavior “buy,” “hold,” and “sell,” respectively, where .(2)Peer effect: affected by neighbors’ trading behavior, at time , trader switches his behavior at a rate of peer effect function , which depends on neighboring preferred rate , connectivity , and the number of neighbors who select the certain behavior at time , where . Assuming that the neighboring preferred rate and the effect of neighbors’ behavior are independent, we can configure the peer effect function as , where is nonnegative function which represents the effect of neighbors’ behavior and declines for . As , obviously shows no peer effect. According to the above statement, the evolvement mechanism of trading behavior can be expressed as , where , , and represent self-psychological preference factor, neighboring preferred degree, and the factor of neighbors’ behavior effect, respectively. Supposing that the investors’ initial status of trading behavior is “hold,” so that the default state vector of the stock market can be described as . In addition, we assume that the traders whose trading behavior states have already changed can only switch trading behavior between “buy” and “sell,” while the ones whose trading behaviors has not changed yet can select behavior among three states above mentioned. Notice that, since the switch rates only depend on the properties of the present state, the dynamics, induced by the connectivity distribution and evolvement mechanism , determines a continuous Markov chain over the space of possible states . The analytic results of this dynamic are extremely complicated and thus will not be addressed. We concentrate instead on the study of a mean-field described below. 3. Mean-Field Analysis The mean-field approximation allows us to address questions that otherwise would be intractable. For instance, given a certain peer effect function of evolvement mechanism, how does the connectivity distribution of the network affect the evolution of trading behavior? Furthermore, given a certain network, how do the collective dynamics depend on the properties of the peer effect function? All these questions will be discussed below. Consequently, in what follows, we will assume that the population of investors is infinite. More precisely, let be the relative density of traders who show trading behavior “buy” at time with connectivity . So, will be the relative density of traders with behavior “buy” at time . Denote by and the probabilities that any given link points to a trader with behavior “buy” or “sell,” respectively, at time . Therefore, the probability that a “nonbuy” behavior trader with neighbors has exactly neighbors with “buy” behavior at time is . In the same, the probability that a “nonsell” behavior trader with neighbors has exactly neighbors with “sell” behavior at time is . Hence, the transition rate from “nonbuy” behavior to “buy” behavior, for a trader with connectivity , is given The transition rate from “buy” behavior to “sell” behavior, for a trader with connectivity , is given by So, for each the dynamic mean-field equation of trading behaviors’ evolvement can be written as Equation (3.3) shows the following: the variation of the relative density of “buy” behavior traders with links at time equals the proportion of “non-buy” behavior traders with neighbors at time who change their behavior minus the proportion of “buy” behavior traders with neighbors at time who become “non-buy” behavior ones. Consequently, for all , the stationary condition of behavior “buy” is equivalent to . Therefore, the stationary state must satisfy that Let denote the average connectivity of the network, that is, . The probability that a trader links to another one with connectivity equals . Thus, the value of can be computed as Equations (3.4) and (3.5) determine the stationary values for and of the stock markets. On submitting (3.3) into (3.4) we can obtain that The solutions of (3.6) are the stationary values of . Therefore, given a specific connectivity distribution , peer effect , and traders’ self-psychological preference , the stationary values of trading behavior “buy” in stock markets can be computed. So do the trading behaviors “sell” and “hold.” 4. Comparison between Analytic and Simulation Results Johansen and Sornette [15] point that the market collapse is the mutual influence of the continuous decline process, associated with the characteristics of psychology, especially depends on the investors’ conception of loss and their selection of reference. Therefore, the evolution of trading behavior will be subjected to investors’ psychological impact on the peer effect function, reflecting the essential characteristics of traders’ limited rationality. When the degree of investors’ rationality is high, marginal utility of peer effect increases with a high investors’ rationality. While the degree of investors’ rationality is low, marginal utility of peer effect will decreases. In this section we will present the analytic and simulation results so as to study how the connectivity distribution and the evolvement mechanism affect the mean-field equilibrium outcomes of the trading behavior in stock markets. 4.1. Marginal Utility Decreasing 4.1.1. Theoretical Analysis Normally, when investors’ rationality is high, there is marginal utility decreasing of peer effect function. Therefore, let be a weekly convex function. More precisely, for all , the function shows the following characteristic: The interpretation for (4.1) is that, for any given investor, adding one more “buy” behavior trader has an impact on her probability of selecting this action, which is weakly decreasing with respect to the existing number of “buy” behavior traders. For being all continuous and differentiable within their domain, we let and can obtain Considering that, function is nonnegative for and is nondecreasing for , so it can be derived that and . Meanwhile, based on the assumption of concave characteristic of peer effect function, we can obtain that and . Therefore, is a nondecreasing concave function within its domain and has an only stable solution. Thus, we can present an example of a concave function and let . Based on this, we can see the logical equilibrium stable under the condition that peer effect function is marginal utility decreasing, shown in Figure 1. 4.1.2. Simulation Analysis Notice that, evolution of investors’ trading behavior not only depends on the peer effect function , but also on investors’ self-psychological preference factor , neighboring preferred degree , and network structure . Particularly, due to the complexity of network structure, it cannot be resolved directly by mathematical methods to describe its impact on the evolution of investors’ behavior. In this section, we will analyze the influence on the evolution of investors’ behavior from the network structure and investors’ neighboring preferred degree through simulation analysis. Let be the number of investor and be the initial state of investors’ behavior; simulation experiment will be done 100 times and each of them will go 100 time steps. Degree and of ER Network [17], WS Network [18], BA Network [19], and IN Network [20] are comparatively analyzed in the simulation, respectively. Noticing that this paper focuses on the behavior “buy,” we assume that the psychological preference of choosing behavior “buy” is stronger than behavior “sell.” It is saying that the stock market being in good condition is assumed. (i) Evolution of Trading Behavior Affected by Network Structure Figure 2 shows the comparative analysis on evolution of trading behavior in ER Network, WS Network, BA Network and IN Network, under the condition of marginal utility decreasing of peer effect function, where and respectively. Overall, the volatility of equilibrium of behavior’s evolvement in BA Network and IN Network is little stronger than in ER Network and WS Network. In BA Network and IN Network, the proportion of investors who select behavior “buy” at is higher than at ; while, the change magnitude in IN Network is larger than in BA Network. From the graph, we can conclude that, when peer effect function is marginal utility decreasing, to some extent, network heterogeneity reduces the proportion of investors who choose trading behavior “buy”, and increases the volatility of the evolution state of the trading behavior, especially leading to the impact on the equilibrium state from network degree. (ii) Evolution of Trading Behavior Affected by Neighboring Preferred Degree Figure 3 describes the evolution path of the proportion of investors who select behavior “buy” as the neighboring preferred degree changes, under the condition of marginal utility decrease of peer effect function, at different network degree. In BA Network and IN Network, the proportion of investors who choose behavior “buy” increases as the investors’ neighboring preferred degree increases, while in ER Network and WS Network, the proportion shows stable state. Overall, the proportion of investors who choose behavior “buy” presents a linear evolvement. Meanwhile, the fluctuation range of the proportion equilibrium at is larger than at . 4.2. Marginal Utility Increasing 4.2.1. Theoretical Analysis In contrast with the elaboration in Section 4.1.1, when investors’ rationality is low, there is marginal utility increase of peer effect function. Therefore, let be a weekly concave function. More precisely, for all , The interpretation for (4.7) is that, for any given investor, adding one more “buy” behavior trader has an impact on her probability of selecting this action, which is weakly increasing with respect to the existing number of “buy” behavior traders. Consequently, . For ’s being positive or negative depends on the parameters , , and ; is not second-order monotonic and is prone to be multiple equilibrium. Considering that , and are continuous and nondecreasing functions, the requirement of ’s being multiple equilibrium is that it shows the characteristics of both concave and convex, so we suppose that when , and when , . When , we can obtain that , and . Based on the assumption of and upon substituting it into , we can obtain that Letting we have the following results: When , we can obtain that and . Based on the assumption of , , and upon substituting it into , we obtain where , and . Leting we have the following result: At this time, satisfies the above two conditions both and it is prone to find multiple equilibrium solutions of function . Therefore, we can present an example of a convex function and let . Based on this, we can see the logical evolvement equilibrium stable under the condition that peer effect function is marginal utility decreasing, shown in Figure 4. 4.2.2. Simulation Analysis (i) Evolution of Trading Behavior Affected by Network Structure Figure 5 shows the comparative analysis on evolution of trading behavior in ER Network, WS Network, BA Network, and IN Network [15], under the condition of marginal utility increasing of peer effect function, where and , respectively. In ER Network and WS Network, about at time , evolvement of trading behavior almost comes to the equilibrium. The equilibrium values of the two networks at and are basically the same, that is to say, network degree has little influence on the evolvement equilibrium of trading behavior in ER Network and WS Network. Compared with ER Network and WS Network, the stability of evolvement equilibrium is little weaker in BA Network and IN Network. About at time , equilibrium gradually stabilized. As time goes, in contrast with the evolution state at , the proportion of investors who choose behavior “buy” at transforms from initially lower than at to higher than at . The changes in IN Network are more obvious. Therefore, we can obtain that, when peer effect function is marginal utility increasing, network heterogeneity delays the process that the evolution of trading behavior reaches the equilibrium and strengthens the impact on the equilibrium of trading behavior which network degree places. (ii) Evolution of Trading Behavior Affected by Neighboring Preferred Degree Figure 6 describes the evolution path of the proportion of investors who select behavior “buy” as the neighboring preferred degree changes, at different network degrees, when peer effect function shows the characteristic of marginal utility increasing. Overall, the proportion of investors who choose behavior “buy” increases as the investors’ neighboring preferred degree increases. In ER Network and WS Network, the evolution equilibrium of trading behavior is not affected by the neighboring preferred degree, and the validity of the equilibrium is a little strong at . In BA Network and IN Network, the validity at is obviously stronger than at , especially in IN Network. When neighboring preferred degree , the proportion of investors who select behavior “buy” in BA Network and IN Network at , gradually transform from lower to larger than at . By the way, at in IN Network, there is multiequilibrium state of trading behavior evolvement. 5. Conclusion In this paper, we introduce an evolution model of investors’ trading behavior in stock market based on peer effect, represented by peer effect function, in networks, according to the assumption of investors’ limited rationality in behavioral financial literature. The model describes the evolution mechanism of investors’ trading behavior from the two aspects of self-psychological preference and peer effect. Letting the behavior “buy” be the benchmark, we investigate by mean-field analysis and extensive simulations the evolution of investors’ trading behavior in various typical networks under different characteristics of peer effect function. Our results indicate that the evolution of investors’ trading behavior is affected by the network structure of stock market and the neighboring preferred degree of the investors. Particularly, when peer effect function is marginal utility decreasing, to some extent, network heterogeneity reduces the proportion of investors who choose trading behavior “buy,” and increases the volatility of the evolution state of the trading behavior, especially leading to the impact on the equilibrium state of trading behavior from network degree. When peer effect function is marginal utility increasing, network heterogeneity delays the process that the evolution of trading behavior reaches the equilibrium and strengthens the impact on the equilibrium of trading behavior which network degree places, that there comes the multiequilibrium of investors’ trading behavior. Meanwhile, the proportion of investors who choose trading behavior “buy” increase with the network degree’s increasing in both circumstance. However, in this work, the evolution model is comparatively simple while that of the investors’ trading behavior in real life is much more complex. Especially, investors’ preference to certain trading behavior being deterministic and stochastic [21], and beliefs among investors’ being heterogeneous [22]; therefore, our future study will focus on the models that are even closer to the ones in real life, which could include considering the influence from the network structure based on social relationship, online network, and other relationship among investors, adopting the strategic characteristics of the investors’ choosing behavior, such as game learning strategy, and self-adapting learning strategy. The authors wish to thank the anonymous referee for his/her invaluable comments and suggestions. This research was supported by the National Natural Science Foundation of China (Grants no. 71071034 and 71171051), and Graduate Education Innovation Project of Jiangsu Province of China (Grant no. CXLX11_0159). 1. F. Villatoro, “The delegated portfolio management problem: reputation and herding,” Journal of Banking and Finance, vol. 33, no. 11, pp. 2062–2069, 2009. View at Publisher · View at Google Scholar · View at Scopus 2. J. R. Nofsinger and R. W. Sias, “Herding and feedback trading by institutional and individual investors,” Journal of Finance, vol. 54, no. 6, pp. 2263–2295, 1999. View at Scopus 3. S. Bikhchandani, D. Hirshleifer, and I. Welch, “A theory of fads, fashion, custom, and cultural change as informational cascades,” Journal of Political Economy, vol. 100, pp. 992–1026, 1992. 4. J. M. Griffin, J. H. Harris, and S. Topaloglu, “The dynamics of institutional and individual trading,” Journal of Finance, vol. 58, no. 6, pp. 2285–2320, 2003. View at Publisher · View at Google Scholar · View at Scopus 5. L. Tan, T. C. Chiang, J. R. Mason, and E. Nelling, “Herding behavior in Chinese stock markets: an examination of A and B shares,” Pacific Basin Finance Journal, vol. 16, no. 1-2, pp. 61–77, 2008. View at Publisher · View at Google Scholar · View at Scopus 6. R. T. Zhou and R. N. Lai, “Herding and information based trading,” Journal of Empirical Finance, vol. 16, no. 3, pp. 388–393, 2009. View at Publisher · View at Google Scholar · View at Scopus 7. T. C. Chiang and D. Zheng, “An empirical analysis of herd behavior in global stock markets,” Journal of Banking and Finance, vol. 34, no. 8, pp. 1911–1921, 2010. View at Publisher · View at Google Scholar · View at Scopus 8. Y. M. Wei, S. J. Ying, Y. Fan, and B. H. Wang, “The cellular automaton model of investment behavior in the stock market,” Physica A, vol. 325, no. 3-4, pp. 507–516, 2003. View at Publisher · View at Google Scholar · View at Scopus 9. Z. Z. Liang and Q. L. Han, “Coherent artificial stock market model based on small world networks,” Complex Systems and Complexity Science, vol. 2, pp. 70–76, 2009 (Chinese). 10. L. Bakker, W. Hare, H. Khosravi, and B. Ramadanovic, “A social network model of investment behaviour in the stock market,” Physica A, vol. 389, no. 6, pp. 1223–1229, 2010. View at Publisher · View at Google Scholar · View at Scopus 11. Y. Chen, J. H. Yuan, X. D. Li, et al., “Research on collaborative herding behavior and market volatility: based on computational experiments,” Journal of Management Sciences in China, vol. 13, pp. 119–128, 2010. 12. H. F. Liu, S. Yao, B. Q. Xiao, and H. Qu, “Stock market herd behavioral mechanism and its impact based on computational experiment,” System Engineering Theory and Practice, vol. 31, no. 5, pp. 805–812, 2011. View at Scopus 13. S. Hassan, M. Salgado, and J. Pavón, “Friendship dynamics: modelling social relationships through a fuzzy agent-based simulation,” Discrete Dynamics in Nature and Society, vol. 2011, Article ID 765640, 19 pages, 2011. View at Publisher · View at Google Scholar 14. P. Falbo and R. Grassi, “Market dynamics when agents anticipate correlation breakdown,” Discrete Dynamics in Nature and Society, vol. 2011, Article ID 959847, 33 pages, 2011. View at Publisher · View at Google Scholar 15. A. Johansen and D. Sornette, “Log-periodic power law bubbles in Latin-American and Asian markets and correlated anti-bubbles in Western stock markets—an empirical study,” International Journal of Theoretical and Applied Finance, vol. 4, pp. 853–920, 2001. 16. R. Albert and A. L. Barabási, “Statistical mechanics of complex networks,” Reviews of Modern Physics, vol. 74, no. 1, pp. 47–97, 2002. View at Publisher · View at Google Scholar · View at Scopus 17. P. Erdös and A. Rényi, “On the evolution of random graphs,” Publications of the Mathematical Institute of the Hungarian Academy of Sciences, vol. 5, pp. 17–61, 1960. 18. D. J. Watts and S. H. Strogatz, “Collective dynamics of “small-world” networks,” Nature, vol. 393, no. 6684, pp. 440–442, 1998. View at Scopus 19. A. L. Barabási and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509–512, 1999. View at Publisher · View at Google Scholar · View at Scopus 20. Y. T. Bian, J. M. He, and Y. M. Zhuang, “A network model of investment and its robustness based on the intrinsic characteristics of the subjects in stock market,” Journal of Industrial Engineering and Engineering Management. In press. 21. L. I. Dobrescu, M. Neamtu, A. L. Ciurdariu, et al., “A dynamic economic model with discrete time and consumer sentiment,” Discrete Dynamics in Nature and Society, vol. 2009, Article ID 509561, 18 pages, 2009. View at Publisher · View at Google Scholar 22. A. Foster and N. Kirby, “Analysis of a heterogeneous trader model for asset price dynamics,” Discrete Dynamics in Nature and Society, vol. 2011, Article ID 309572, 12 pages, 2011. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/ddns/2012/138178/","timestamp":"2014-04-16T19:31:54Z","content_type":null,"content_length":"350106","record_id":"<urn:uuid:cadfb659-f9a9-4553-b323-67d1f835164f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I think the answer is - 1/5, but I need an A so I need to be positive. Heres the question: What is the slope of the line containing the ordered pair (2, 3) and (-4, 0)? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ddf5a0e4b0f2b98c86d6c6","timestamp":"2014-04-19T07:04:50Z","content_type":null,"content_length":"44452","record_id":"<urn:uuid:4a39bef2-c69e-4228-a965-11910f04c104>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
2pPP5. Information integration of signal frequency. Session: Tuesday Afternoon, May 14 Time: 2:00 Author: Robert D. Irwin Author: Daniel L. Weber Location: Dept. of Psych., Wright State Univ., Dayton, OH 45435 A listener's ability to integrate information can be evaluated by presenting multiple samples in the distribution discrimination procedure. To examine frequency discrimination with a 2IFC form of this procedure, each interval contains samples drawn from one of two distributions of signal frequency. The listener indicates which interval contained the sample(s) drawn from the distribution with the higher mean. This experiment evaluated the integration of frequency information in terms of improvement in frequency discrimination for an increasing number of samples (n=1,2,3,4,5,6,8,12,16). Seven listeners discriminated a ``standard'' distribution (means of 400, 565, and 1000 Hz) from each of four ``comparison'' distributions (means of 401, 403, 406, and 414 Hz for the 400-Hz standard; 566.5, 569.5, 572, and 584 Hz for the 565-Hz standard; and 1002, 1005, 1010, and 1020 Hz for the 1000-Hz standard). All samples were 100-ms sinusoids. All conditions were alike to an ideal observer in that the distributions were normal with a standard deviation equal to the difference between the means. The results indicate that integration of frequency information appears constant across different frequencies when initial performance is equated. [Work supported by AFOSR through WPAFB AL/CFBA.] from ASA 131st Meeting, Indianapolis, May 1996
{"url":"http://www.auditory.org/asamtgs/asa96ind/2pPP/2pPP5.html","timestamp":"2014-04-18T10:35:43Z","content_type":null,"content_length":"2040","record_id":"<urn:uuid:6da40dfd-b160-4f5f-afa6-884a3e83c7de>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Replicating the Body Fat Example from "Bayesian Model Averaging: A Tutorial" (1999) with BMS in R February 7, 2011 By BMS Add-ons » BMS Add-ons In their paper Bayesian Model Averaging: A Tutorial (Statistical Science 14(4), 1999, pp. 382-401), Hoeting, Madigan, Raftery and Volinsky (HMRV) do an exercise in Bayesian Model Averaging (BMA) at pp.394-397 in estimating body fat data from Johnson (1996): "Fitting Percentage of Body Fat to Simple Body Measurements"; Journal of Statistics Education 4(1). This tutorial shows how to reproduce the results with the R-package BMS. Getting R and BMS Before continuing with the tutorial, you should have R installed (any version > 2.5) and the library BMS installed. How HMRV Did It The following is a short outline of how FLS set up their BMA example on pp.394-397: • They use data from Johnson (1996) with 13 covariates and 251 observations. (The original data contains 252 observations, but one is omitted.) • HMRV use slightly different priors compared to BMS: □ HMRV use proper priors on variance and the intercept, that differ slightly from the standard framework used in BMS. □ Prior coefficient covariance: They do not rely on Zellner's g prior as does BMS, but instead use a diagonal matrix: However, their matrix is the same as under Zellner's g prior, with the off-diagonal elements set to zero. □ Consequently, HMRV use a notation that differs from the approach used in Zellner. But since the coefficient prior nearly coincides, Zellner's g in BMS is equivalent to their choice of phi^ 2*n. Since they choose phi=2.85 the equivalent g-prior should be approximately 2000. • Their model prior is the uniform model prior (same prior probability for all models) - (p.7). • For sampling through the model space, they use complete enumeration of all models. • They base their final posterior model probabilities on the analytical posterior model probabilities of all models. Getting the Body Fat Data The original data is available from the web appendix of Johnson (1996) as the file fat.dat Before downloading it, find out your current working directory by opening R and typing the following into the R console: Download fat.dat to this location. Then load it into R by fat1 becomes a data.frame containing the data. HMRV take only a meaningful sub-sample of these. A glance at fat.txt reveals which data they are. Therefore the next step is to consolidate fat1 to the same data as used in HMRV, by extracting the response variable (column 2) and the explanatory variables from HMRV (columns 5-7 and 10-19). fat = fat1[,c(2, 5:7, 10:19)] Moreover, HMRV drop observation 42 since its reported body height is only 29.5 inch (75 cm). Finally, let's assign more meaningful names to the data. fat= fat[-42,] Replicating: Demonstration First, load the package BMS with the following command Now we can sample models according to HMRV: We need a fixed g prior equal to 2000 - set by the argument g=2000. The model priors should be uniform and are assigned via mprior="uniform". Finally, mcmc ="enumeration" requires full enumeration of all models and should be quite fast: 2^13=8192 potential models, which should take two to three seconds to compute. mf= bms(fat,mprior="unif",g=1700, mcmc="enumeration", user.int=FALSE) The variable mf now holds the BMA results and can be used for further inference. The argument user.int=FALSE just suppresses printing output on the console in order to avoid confusion in this HMRV report standardized coefficients in Table 8. This can be reproduced by typing: Here, std.coefs=TRUE forces the coefficients to be standardized (as if for data with mean zero and variance one). The above command produces the following output The first column PIP above holds the posterior inclusion probabilities that correspond to the fifth column in Table 8 of HMRV (p.396). The values match up pretty well, even though the prior concept used here was different from HMRV. The column Post Mean reports posterior expected values of coefficients ('Mean' in HMRV Table 8) which are, again quite close to the ones in the article. The same applies to their standard deviations (PostSD). The slight differences are most likely due to the effect exerted by Zellner's g prior vs the prior used by HMRV. Note that the coefficients in HMRV are also sign-standardized (no negative signs), which is not the case here. The Other Results In addition to PIPs and coefficients, HMRV report a graphic representation of the best 10 models (by posterior model probability) in their Table 9. The BMS equivalent can be plotted with the following command: The [1:10] means that only the best ten models should be plotted (for more on best models consider argument nmodel in help(bms)). order=FALSE orders the output according to the original data. The output looks as follows - note that only the variables included among the top ten models are shown, just as in HMRV, Table 9: The chart can be read as follows: The very best model is plotted to the left-hand side, and contains three variables: weight (red for negative sign), abdomen (blue for positive coefficient) and wrist (red for negative). This best model has a posterior model probability of 20%. To its right is the second-best model, which contains forearm in addition. The other models follow. In its structure, this image conforms to Table 9 in HMRV. However, the reported posterior model probabilities are different in absolute values, which is due to the different prior structures employed. Nonetheless, the model distribution corresponds to HMRV in relative magnitudes. Finally, HMRV report the posterior density for the standardized coefficient of wrist in Figure 4. Replicate this plot with the following command: The result looks exactly similar to Figure 4 in HMRV: More Results There are several other functions in the BMS package that can be used to explore the data further. Try out the following commands: • plotModelsize(mf) plots the prior and posterior model size distribution • summary(mf) reports some summary statistics, in particular posterior expected model size. • help(bms) reveals more about the prior options to play around with body fat data. • Refer to the documentation for even more functions. daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/replicating-the-body-fat-example-from-bayesian-model-averaging-a-tutorial-1999-with-bms-in-r/","timestamp":"2014-04-16T10:23:52Z","content_type":null,"content_length":"45650","record_id":"<urn:uuid:456ece0d-a4b9-4114-93b1-51433fed84a8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Mathematics as a language Replies: 5 Last Post: Nov 10, 2010 12:05 AM Messages: [ Previous | Next ] Re: Mathematics as a language Posted: Nov 9, 2010 11:48 PM I wrote: >> Basically, our intuitions just aren't up to >> dealing with general set theory. (OC some claim theirs ARE.) Transfer Principle <lwal...@lausd.net> responded: > And I'm just wondering, does this "some claim theirs _are_" have > anything to do with "cranks," as in WM, AP, and so on? Not necessarily at all. Those who claim their intuitions about sets are all well and good, include... 1: Many crackpots; 2: Many oridnary working mathematicians; 3: Some of the top logicians and set theorists of the past and present, including Godel and Woodin. Of course, these groups may be further qualified... (1) probably usually have a totally different concept of sets from everyone else; (2) mostly couldn't give a toss one way or the other; (3) mostly would be very chary of ascribing much personal confidence in their view of set ontology. The two mentioned are (I suspect) among a tiny minority. -- Workaday William Date Subject Author 11/2/10 Re: Mathematics as a language Jesse F. Hughes 11/4/10 Re: Mathematics as a language Bill Taylor 11/6/10 Re: Mathematics as a language lwalke3@lausd.net 11/9/10 Re: Mathematics as a language Bill Taylor 11/10/10 Re: Mathematics as a language lwalke3@lausd.net
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2167061&messageID=7274747","timestamp":"2014-04-16T07:36:38Z","content_type":null,"content_length":"21634","record_id":"<urn:uuid:f7b55adf-e607-4a34-8533-765c8d34ae0e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Studio City Algebra 2 Tutor ...This was a fantastic experience for both the students and tutors alike, and I will be sure to bring my same enthusiasm and excitement as your tutor. I know what it’s like to not understand a topic (or an entire subject) in school. Tutors, therefore, assist in evaluating the core of the student’... 16 Subjects: including algebra 2, English, reading, geometry My services are great for students and prospective students studying for standardized tests (ACT, SAT, GRE, GMAT) and /or who need help with school coursework. I was educated at West Point and Cornell University where I was in the top 5% of my class and received a degree in Economics and Mandarin C... 30 Subjects: including algebra 2, reading, Spanish, English ...I have enabled students to reach their maximum potential on standardized examinations (up to a 650 point increase on the SAT exams!) as well as on school work, and to get into the college of their dreams! To provide some more details on my academic background, I have listed my experiences in the... 43 Subjects: including algebra 2, English, calculus, reading ...It has provided me with challenges and exposure to a wide array of activities. The experiences have reaffirmed my passion to help others. My teaching style helps to develop a deeper understanding of how to analyze problems. 14 Subjects: including algebra 2, geometry, algebra 1, SAT math ...I can be flexible with time and have a 2-hour cancellation policy, because I know unforeseen events can occur. I hope to hear from you soon in order to build your ability in math or science. I graduated from Cal State LA as a Mechanical Engineer. 9 Subjects: including algebra 2, calculus, algebra 1, trigonometry
{"url":"http://www.purplemath.com/Studio_City_algebra_2_tutors.php","timestamp":"2014-04-18T21:56:46Z","content_type":null,"content_length":"24031","record_id":"<urn:uuid:f99afe86-015c-436a-8a4c-781e7a408358>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Grothendieck topologies, Mayer-Vietoris, and points up vote 13 down vote favorite I am trying to think about certain problems in the theory of motives without having a proper background in Grothendieck topologies and the like, hoping to teach myself the related techniques in the process. Here is a rather specific question that I've stumbled upon; I would appreciate any references and/or explanations of the relevant issues. Consider a topology "of reasonable size". What I have in mind is the Zariski, Nisnevich, or etale site of an algebraic variety. Let us call the objects of this topology "open sets". My understanding is that there is also some notion of "points" of a Grothendieck topology, and that for the three topologies mentioned above these are the spectra of the local rings of the points of the scheme, the spectra of Henselizations of these local rings, and the spectra of the strict Henselizations of these local rings, respectively. Please correct me if this is wrong. I think one can define a "cohomology theory" on a site as a sequence of contravariant functors, indexed by nonnegative integers, from the category of open sets to, say, abelian groups, such that whenever an open set is covered by two other ones, there is a cohomological Mayer-Vietoris sequence. Given a cohomology theory, one can also define its value on any point of the site simply by passing to the inductive limit. Suppose that I have a morphism of cohomology theories on, say, the Nisnevich site of an algebraic variety. Assume that this morphism is an isomorphism at all points. Does it follow that it is an isomorphism of cohomology theories (i.e., an isomorphism on all open sets)? ag.algebraic-geometry grothendieck-topology Let me briefly advocate the idea that thinking on the group level is perhaps not what you want. For example, one way to produce the kinds of objects you seem to be getting at is to start with a bounded-below cochain complex of injective sheaves on your site, and to each open set associate the cohomology groups of the chain complex of sections. Remembering only the groups doesn't provide the "connective tissue" that connects the groups at different levels. – Tyler Lawson Oct 24 '10 at 15:44 (And more generally one might take values in a stable model category, stable infinity-category, or perhaps might have a "sheaf of infinity-categories" - but I digress.) – Tyler Lawson Oct 24 '10 at add comment 1 Answer active oldest votes You definition of a cohomology theory is rather strange since it does not seem to depend on the choice of the site (it corresponds to Zariski topology). For the correct definition of cohomology theory (I don't know it by heart:)), the answer to your question is probably 'yes'. The corresponding condition is 'topos has enough points' (for up vote 3 example, see http://webcache.googleusercontent.com/search?q=cache:htiGfZlZqc0J:ncatlab.org/nlab/show/point%2Bof%2Ba%2Btopos+site+has+enough+points+sheaf&cd=1&hl=en&ct=clnk). It is fullfiled down vote for the topologies your mentioned. Moreover, this condition was mentioned explicitly in some papers of Voevodsky or Suslin. I don't quite understand why does the notion of one variety being covered by two others not depend on the topology, or always corresponds to Zariski topology. I would think that I can cover a curve with two other curves mapping to it by etale maps and this wouldn't come from Zariski topology. However, the images of these two morphisms would form a Zariski covering, indeed. Is this what you had in mind? – Leonid Positselski Oct 24 '10 at 12:49 3 @Leonid: I think the point is that in general one should instead expect a descent spectral sequence associated with the Cech nerve of the cover, rather than just a Mayer-Vietoris sequence. The Mayer-Vietoris sequence is so simple because of the natural identifications $U \cap (U \cap V) \cong U \cap V$. This certainly fails for the etale topology if $U \to X$ is an etale cover rather than an open immersion. – Tyler Lawson Oct 24 '10 at 13:02 1 For an arbitrary topology, what do you mean by Mayer-Viertoris? It usually does not suffice to consider the fibre square of the covering (and certainly there is no reasonable analogue for the 'intersection of two components' of the covering; it is quite a strange idea to consider coverings consisting of two components). – Mikhail Bondarko Oct 24 '10 at 13:02 Yes, a spectral sequence of the type mentioned is what you would expect here. – Mikhail Bondarko Oct 24 '10 at 13:04 @Tyler: Thanks, now I understand. – Leonid Positselski Oct 24 '10 at 13:14 show 7 more comments Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry grothendieck-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/43345/grothendieck-topologies-mayer-vietoris-and-points/43374","timestamp":"2014-04-18T10:37:46Z","content_type":null,"content_length":"61191","record_id":"<urn:uuid:da71e517-e259-4ede-be3e-75bf06738473>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Artesia, CA Math Tutor Find an Artesia, CA Math Tutor Hi, I'm Zeke and I am looking for students to tutor over the summer and continuing into the fall. Teaching is my passion. My tutoring is always fashioned with the student in mind. 10 Subjects: including geometry, precalculus, trigonometry, physical science ...I look forward to hearing from you.I am a substitute teacher and recently completed my Mathematics Single Subject Teaching Credential at Cal State Long Beach. My student teaching assignment consisted of teaching one class of high school Algebra and two classes of 7th grade PreAlgebra. At the b... 10 Subjects: including prealgebra, algebra 1, algebra 2, geometry ...I also created lessons plans and worked closely with the standards therefore I have a good understanding of what is required by the student in their own classroom. I am eager to use all the skills I have learned in college and start working with students. I am very enthusiastic and always trying to make learning fun and interesting. 12 Subjects: including algebra 2, elementary (k-6th), geometry, phonics ...However, my wife and I have just moved to the LA area, so I thought I would take on some new students while my course load is light. While my experience tutoring has primarily been with 1st-12th graders, the material I command the greatest knowledge of stands out when applied to college students... 29 Subjects: including SAT math, reading, English, writing ...I also attended Loyola Marymount University where I obtained my teaching credential. Let's learn together!I have a multiple subjects credential for teaching K- 8th grade. I look forward to working with your student. 12 Subjects: including algebra 1, reading, English, grammar Related Artesia, CA Tutors Artesia, CA Accounting Tutors Artesia, CA ACT Tutors Artesia, CA Algebra Tutors Artesia, CA Algebra 2 Tutors Artesia, CA Calculus Tutors Artesia, CA Geometry Tutors Artesia, CA Math Tutors Artesia, CA Prealgebra Tutors Artesia, CA Precalculus Tutors Artesia, CA SAT Tutors Artesia, CA SAT Math Tutors Artesia, CA Science Tutors Artesia, CA Statistics Tutors Artesia, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/artesia_ca_math_tutors.php","timestamp":"2014-04-21T12:54:11Z","content_type":null,"content_length":"23619","record_id":"<urn:uuid:dc602283-fb9b-436c-998e-35fdcb5c55ee>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Rings: Principal Ideal domain March 7th 2010, 09:23 AM Rings: Principal Ideal domain Hey Guys, I've never been on one of these sites before but am really stuck on an problem sheet I have been set. I'm doing fine in all my other modules but this algebra one doesn't seem to click well with me :( Any help would be much appreciated. So I have to prove that R is a principal ideal domain where R is the ring {a + b(sqrt(-2)) | a,b are integers} (by b(sqrt(-2)) I mean b multiplied by the square root of -2) TY in advance March 7th 2010, 05:21 PM Hey Guys, I've never been on one of these sites before but am really stuck on an problem sheet I have been set. I'm doing fine in all my other modules but this algebra one doesn't seem to click well with me :( Any help would be much appreciated. So I have to prove that R is a principal ideal domain where R is the ring {a + b(sqrt(-2)) | a,b are integers} (by b(sqrt(-2)) I mean b multiplied by the square root of -2) TY in advance define the map $f: R \setminus \{0 \} \longrightarrow \mathbb{N}$ by $f(a+b \sqrt{-2})=a^2+2b^2.$ show that $f$ is a norm-Euclidean and so $R$ is an Euclidean domain and therefore a PID.
{"url":"http://mathhelpforum.com/advanced-algebra/132483-rings-principal-ideal-domain-print.html","timestamp":"2014-04-18T09:18:05Z","content_type":null,"content_length":"5612","record_id":"<urn:uuid:8c9a7d61-a2a3-48f3-82d0-f6a4e9e9e8f7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 50 To find the third binomial line. Set out two numbers AC and CB such that the sum of them AB has to BC the ratio which a square number has to a square number, but does not have to AC the ratio which a square number has to a square Also set out any other number D, not square, and let it not have to either of the numbers BA and AC the ratio which a square number has to a square number. Set out any rational straight line E, and let it be contrived that D is to AB as the square on E is to the square on FG. Then the square on E is commensurable with the square on FG. And E is rational, therefore FG is also rational. And, since D does not have to AB the ratio which a square number has to a square number, neither does the square on E have to the square on FG the ratio which a square number has to a square number, therefore E is incommensurable in length with FG. Next let it be contrived that the number BA is to AC as the square on FG is to the square on GH. Then the square on FG is commensurable with the square on GH. But FG is rational, therefore GH is also rational. And, since BA does not have to AC the ratio which a square number has to a square number, neither does the square on FG have to the square on HG the ratio which a square number has to a square number, therefore FG is incommensurable in length with GH. Therefore FG and GH are rational straight lines commensurable in square only. Therefore FH is binomial. I say next that it is also a third binomial straight line. Since D is to AB as the square on E is to the square on FG, and BA is to AC as the square on FG is to the square on GH, therefore, ex aequali, D is to AC as the square on E is to the square on GH. But D does not have to AC the ratio which a square number has to a square number, therefore neither does the square on E have to the square on GH the ratio which a square number has to a square number. Therefore E is incommensurable in length with GH. Since BA is to AC as the square on FG is to the square on GH, therefore the square on FG is greater than the square on GH. Let then the sum of the squares on GH and K equal the square on FG. Then, in conversion, AB is to BC as the square on FG is to the square on K. But AB has to BC the ratio which a square number has to a square number, therefore the square on FG also has to the square on K the ratio which a square number has to a square number. Therefore FG is commensurable in length with K. Therefore the square on FG is greater than the square on GH by the square on a straight line commensurable with FG. And FG and GH are rational straight lines commensurable in square only, and neither of them is commensurable in length with E. Therefore FH is a third binomial straight line.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX50.html","timestamp":"2014-04-20T18:37:09Z","content_type":null,"content_length":"6399","record_id":"<urn:uuid:b60d396d-6f98-4920-a3d7-0ee7ca3fc358>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Dealing with survey data when the entire population is also in [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Dealing with survey data when the entire population is also in the dataset From Austin Nichols <austinnichols@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Dealing with survey data when the entire population is also in the dataset Date Sat, 25 Jul 2009 23:56:46 -0400 Margo Schlanger<margo.schlanger@gmail.com> : I think Michael I. Lichter means for you to -append- your sample and population in step 2 below. Then you can run -hotelling- or the equivalent linear discriminant model (with robust SEs) to compare means for a bunch of variables observed in both. I.e. . reg sample x* [pw=wt] in step 2b, not tabulate, with or without svy: and chi2. On Fri, Jul 24, 2009 at 11:24 PM, Michael I. Lichter<MLichter@buffalo.edu> wrote: > Margo, > 1. select your sample and save it in a new dataset, and then in the new > dataset: > a. define your stratum variable -stratavar- as you described > b. define your pweight as you described, wt = 1/(sampling fraction) for each > stratum > 2. combine the full original dataset with the new one, but with stratavar = > 1 for the new dataset and wt = 1 and with a new variable sample = 0 for the > original and =1 for the sample, and then > a. -svyset [pw=wt], strata(stratavar)- > b. do your chi square test or whatever using svy commands, e.g., -svy: tab > var1 sample- > Michael > Margo Schlanger wrote: >> Hi -- >> I have a dataset in which the observation is a "case". I started with >> a complete census of the ~4000 relevant cases; each of them gets a >> line in my dataset. I have data filling a few variables about each of >> them. (When they were filed, where they were filed, the type of >> outcome, etc.) >> I randomly sampled them using 3 strata (for one strata, the sampling >> probability was 1, for another about .5, and for a third, about .75). >> I end up with a sample of about 2000. I know much more about this >> sample. >> Ok, my question: >> 1) How do I use the svyset command to describe this dataset? It would >> be easy if I just dropped all the non-sampled observations, but I >> don't want to do that, because of question 2: >> 2) How do I compare something about the sample to the entire >> population, just to demonstrate that my sample isn't very different >> from that entire population on any of the few variables I actually >> have comprehensive data about. I could do this simply, if I didn't >> have to worry about weighting: >> tabulate year sample, chi2 >> But I need the weights. In addition, I can't simply use weighting >> commands, because in the population (when sample == 0), everything >> should be weighted the same; the weights apply only to my sample (when >> sample == 1). And I can't (so far) use survey commands, because I >> don't know the answer to (1), above. >> NOTE: Nearly all the variables I care about are categorical: year of >> filing, type of case. But it's easy enough to turn them into dummies, >> if that's useful. >> Thanks for any help with this. >> Margo Schlanger * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-07/msg01060.html","timestamp":"2014-04-19T22:38:11Z","content_type":null,"content_length":"9618","record_id":"<urn:uuid:ce35124e-669e-4101-9c90-3ac7f8365cb4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
proton magnetic moment flip angle 54.7 in an external magnetic field hi mkj! loosely speaking, the angular momentum P is fixed if there's no magnetic field, there's no problem … the "spin axis" can be in any direction if there is a magnetic field in the z direction, the component P is quantised, and (for a spin-half particle) must be P/√3 so the angle between P and z must be cos 1/√3, = 54.74° (see eg
{"url":"http://www.physicsforums.com/showthread.php?t=725298","timestamp":"2014-04-16T10:38:26Z","content_type":null,"content_length":"23837","record_id":"<urn:uuid:72af4d58-4308-4d04-8ffa-076e00d63154>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 39 If two straight lines incommensurable in square which make the sum of the squares on them rational but the rectangle contained by them medial are added together, then the whole straight line is irrational; let it be called major. Let two straight lines AB and BC incommensurable in square, and fulfilling the given conditions, be added together. I say that AC is irrational. Since the rectangle AB by BC is medial, therefore twice the rectangle AB by BC is also medial. But the sum of the squares on AB and BC is rational, therefore twice the rectangle AB by BC is incommensurable with the sum of the squares on AB and BC, so that the sum of the squares on AB and BC together with twice the rectangle AB by BC, that is, the square on AC, is also incommensurable with the sum of the squares on AB and BC. Therefore the square on AC is irrational, so that AC is also irrational. Let it be called major. This proposition is used in X.57, X.63, and X.68.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX39.html","timestamp":"2014-04-20T18:37:13Z","content_type":null,"content_length":"2829","record_id":"<urn:uuid:719f64e8-dfd5-4b21-a21e-914974b69db0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] how to plot implicit functions Duncan Murdoch murdoch at stats.uwo.ca Mon Dec 29 20:28:13 CET 2008 YIHSU CHEN wrote: > Dear R users -- > I think this question was asked before but there was no reply to it. > I would appreciate any suggestion any of you might have. I am > interested in plotting several "implicit functions" (F(x,y,z)=0) on > the same fig. Is there anyone who has an example code of how to do > this? The misc3d package has a contour3d function that could draw a contour of the function F at level 0. Assuming you have written a function to calculate F and know the region where you want to plot it, you could just say contour3d(F, 0, seq(minx, maxx, 30), seq(miny, maxy, 30), seq(minz, maxz, 30)) Duncan Murdoch More information about the R-help mailing list
{"url":"https://stat.ethz.ch/pipermail/r-help/2008-December/183555.html","timestamp":"2014-04-17T18:28:04Z","content_type":null,"content_length":"3165","record_id":"<urn:uuid:75604780-b1e5-4f94-92f1-c41fc5c82020>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Baxter Estates, NY Geometry Tutor Find a Baxter Estates, NY Geometry Tutor ...I took the MCAT and received a score of 40 (14PS-14VR-12B), which placed me in approximately the 99.75 percentile of all test takers. I took the test July 27th 2013. I have been accepted to medical school and am matriculating in August, though I currently tutor full time. 24 Subjects: including geometry, chemistry, ASVAB, physics ...As well, I have also tutored for the TACHS which has replaced the COOPs in the New York City area regarding Catholic High Schools. I also tutor the NYC SHSAT which is the exam for entry into the New York City Specialized High Schools. I have actually gone to a NYC Specialized High School, Brooklyn Tech, therefore, I actually passed the test as a teenager. 17 Subjects: including geometry, reading, English, writing Hello! My name is Mary, and I may be the perfect tutor for you! I am an engineering graduate from Cornell University with a Masters in Math Education. 9 Subjects: including geometry, statistics, algebra 2, SAT math ...Let me tell you a little bit about myself:I'm a member of MENSA. I received a 780 on the math portion of the SAT in 2001. I majored in mathematics in college, and was educated through calculus 12 Subjects: including geometry, reading, SAT math, algebra 2 ...My student rate of passing exams is between 90 and 100%. I am patient and have the tact to help students to get the concept, complete assignments, and apply mathematics in real life. I really have the secret to help students build their math skills in an easy way. I am currently teaching math in the CUNY system in three colleges, and tutor as well. 11 Subjects: including geometry, French, trigonometry, algebra 1 Related Baxter Estates, NY Tutors Baxter Estates, NY Accounting Tutors Baxter Estates, NY ACT Tutors Baxter Estates, NY Algebra Tutors Baxter Estates, NY Algebra 2 Tutors Baxter Estates, NY Calculus Tutors Baxter Estates, NY Geometry Tutors Baxter Estates, NY Math Tutors Baxter Estates, NY Prealgebra Tutors Baxter Estates, NY Precalculus Tutors Baxter Estates, NY SAT Tutors Baxter Estates, NY SAT Math Tutors Baxter Estates, NY Science Tutors Baxter Estates, NY Statistics Tutors Baxter Estates, NY Trigonometry Tutors Nearby Cities With geometry Tutor Albertson, NY geometry Tutors Glen Head geometry Tutors Glenwood Landing geometry Tutors Greenvale geometry Tutors Harbor Acres, NY geometry Tutors Kings Point, NY geometry Tutors Manorhaven, NY geometry Tutors Plandome, NY geometry Tutors Port Washington, NY geometry Tutors Roslyn, NY geometry Tutors Russell Gardens, NY geometry Tutors Saddle Rock, NY geometry Tutors Sands Point, NY geometry Tutors The Terrace, NY geometry Tutors Thomaston, NY geometry Tutors
{"url":"http://www.purplemath.com/Baxter_Estates_NY_geometry_tutors.php","timestamp":"2014-04-18T04:22:18Z","content_type":null,"content_length":"24326","record_id":"<urn:uuid:46ab7328-0d76-43f7-8492-0f0f05ae0058>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
1- Basic Facts 3. Student Learning Map • Topic:1- Basic Facts • Subject(s):Math Key Learning: Recalling basic addition and related subtraction facts will help us solve real-world problems. (Course Code: 5012040) Unit Essential Question(s): How do I use basic facts (addition and subtraction) to solve problems? Basic Addition and Subtraction Facts Lesson Essential Question(s): How can I use fluency of basic addition and subtraction facts to solve real-world problems? (facts 0-12) How can I use cue words to set up and solve addition and subtraction word problems? Lesson Essential Question(s): What is the relationship between addition and subtraction? (inverse operation) What strategies can I use when determining the accuracy of addition and subtraction problems? (constructing support with justification) Lesson Essential Question(s): How do I use repeated addition to solve real world problems? (i.e. 4+4+4+4=16; addends no greater than 10) How can I show counting groups of numbers to 100? (2's, 5's, 10's) Additional Information: United Streaming Video; Manipulative;NCTM Process Standards (located in Public Folders) The connection to writing includes answering the extended thinking questions. Additionally, the FCIM math mini-lessons have a daily summarization prompt that is to be answered through writing. Student and Teacher Edition pages:121-124, 125-128, 129-132, 133-135,137-140, 141-144, 153-156 Additional Teacher Edition pages: 129B,137B, 141B, 153B Student and Teacher Edition pages:137-140, 141-144, 464, 465 Additional Teacher Edition pages: 137B,141B Student and Teacher Edition pages:145-148, 509-512, 364 Additional Teacher Edition pages: 145B,509B Student and Teacher Edition pages:25-28, 33-36, 29-32 Additional Teacher Edition pages: 25B,33B
{"url":"http://publish.learningfocused.com/5522773","timestamp":"2014-04-18T05:30:00Z","content_type":null,"content_length":"31559","record_id":"<urn:uuid:64139dd3-71ae-4dd7-b522-eb4547dce1b5>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
How many cubic yards? August 3rd 2010, 05:55 AM #1 Aug 2010 Need help determining how many cubic yards of gravel I need for a flower bed in front of a building. The flower bed is 42' (feet) long, by 8' (feet) wide. The height closest to the building of gravel needs to be 16" (inches). The height of the gravel by the lawn needs to be 3" (inches). How many cubic YARDS of gravel to I need? Thank you. From the building to the lawn: do you want a linear grade there, so that I could describe the shape of the gravel bed as a trapezoidal prism? The cross section of this prism of gravel would be a right trapezoid laid on its side. The two bases would be 3" and 16", and the "height" would be 8' = 96". The area would then be $A = \frac{1}{2}(3 + 16)(96) = 912\,in^2$ The volume of the prism would be the area of this trapezoid times the width of the flower bed, which is 42' = 504": $V = 912 \times 504 = 459,648\,in^3$ One cubic yard = 46,656 cubic inches, so $V = (459,648\,in^3)\left( \dfrac{1\,yd^3}{46,656\,in^3} \right)\approx 9.852\,yd^3$ EDIT: Didn't see your post Ackbeet, sorry. thanks so much guys, sorry didn't get a chance to see ackbeet question before i saw eumyang's! August 3rd 2010, 06:05 AM #2 August 3rd 2010, 06:14 AM #3 August 3rd 2010, 06:51 AM #4 Aug 2010
{"url":"http://mathhelpforum.com/geometry/152682-how-many-cubic-yards.html","timestamp":"2014-04-17T19:23:35Z","content_type":null,"content_length":"38364","record_id":"<urn:uuid:edc2468e-413f-4f6b-a2b2-d5daa092b1f7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Emeryville Calculus Tutor Find an Emeryville Calculus Tutor ...When I think of Algebra 1, what comes to mind is a hybrid of all maths (obviously, between Pre-Algebra and Algebra 2) - a place where the foundations of math are laid. I think it is integral for this particular segment of math to be carefully experienced and very hands-on, and that is how I woul... 12 Subjects: including calculus, chemistry, geometry, statistics ...I have a broad background in the Physical Sciences, Mathematics, and Economics. I have taught a number of courses and have previously worked as a GRE/SAT/ACT tutor. I relish the opportunity to see the light-bulb go off in a students head when they 'get' a subject or concept for the first time. 19 Subjects: including calculus, physics, statistics, geometry ...I know many issues of learning Calculus and have effective solutions. I also wrote 5 sections for a free Calculus textbook. On top of 4 standard Calculus courses, I have taken several graduate level Analysis courses (real and complex variables, topology, measure theory). Honors class and Math... 15 Subjects: including calculus, geometry, algebra 1, GRE ...In addition, he has a lifelong passion for mathematics and, in addition to tutoring all grade levels in math, has volunteered for 6 years in the local public schools in San Rafael (including mathematics instruction and Odyssey of the Mind coach). Dr. G. has a daughter who is currently in high school. He enjoys music, hiking and geocaching.Dr. 13 Subjects: including calculus, physics, statistics, geometry ...My hope is that they will internalize my role and learn how to flesh out and check their own understanding. This tutoring philosophy and methodology is complemented by years of experience teaching difficult students. In Poland, I taught ESL to busy and tired adults at Motorola Krakow and other Krakovian businesses. 6 Subjects: including calculus, chemistry, physics, geometry
{"url":"http://www.purplemath.com/emeryville_ca_calculus_tutors.php","timestamp":"2014-04-16T16:43:29Z","content_type":null,"content_length":"24020","record_id":"<urn:uuid:872cbd67-a18b-4ac8-80bd-ce1adf40831e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Ellie on Friday, March 29, 2013 at 8:29pm. Tina solved for the decimal value 0f (3/5*18) and got an answer of 5.4. Four of her classmates argued that her answer was not reasonable. Which of the following statements best explain why that value can not =5.4 A. the number 18 is a multiple of 3 , so the answer must be a whole number B. The number 18 can be rounded to 20 and 3/5 can be rounded to 1, so the value of (3/5*18) must be less than 20*1, or 20 C. the fraction3/5 is greater than 1/2, so the value of (3/5*18) must be greater than 1/2 of 18 or 9 D. The fraction 3/5 is less then 1, so the value of (3/5*18) must be less than (1*18) or 18 I think it is D. I did 18*3/5=104/5=54/5=10.8 I really wasn't sure if C or D was the better answer since to me they are both correct. Can you help explain? thanks for your help!! • math - Ms. Sue, Friday, March 29, 2013 at 8:42pm C is correct. It is more exact than D. • math - Ellie, Friday, March 29, 2013 at 8:56pm ok thank you for explaining that. • math - Ms. Sue, Friday, March 29, 2013 at 9:04pm You're welcome. Related Questions social studies - In war 0f 1812... Describe the destruction of Washington D.C... algebra - what percentage 0f 480 is 450 type a decimal College Algebra - Tina invested $30,000 in a stock. In the first year, the stock... Math - using the range rule of thumb to approximate the standard deviation - a ... Math - Event Probability .37 .18 .13 .32 Dollar Payoff -2 0 6 3 1)Calculate the ... chem 121 - what volumes of the folowing concentrated solutions are required to ... Math - I just want to double check myself. For the decimal 8.62354....what is ... Math - Write the decimal equivalent, using the bar notation. 13/18 b) Find the ... math - what is the EXACT VALUE of cos (pi/3)? Physics - suppose you have 3 resistors, each of value 0f 30 omega. List all the ...
{"url":"http://www.jiskha.com/display.cgi?id=1364603376","timestamp":"2014-04-18T17:21:01Z","content_type":null,"content_length":"9216","record_id":"<urn:uuid:3d71ae61-6627-45ee-8dfb-7272a15ca5c6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
US Monetary Policy in the 2010’s: The Mankiw Rule Today To make a short story even shorter, the Mankiw Rule suggests that the Zero Interest Rate Policy will continue for quite some time, barring dramatic changes in the inflation and/or unemployment rates. “The Mankiw Rule” is what I call Greg Mankiw’s version of the Taylor Rule . “Taylor Rule” is now the general term for a rule that sets a monetary policy interest rate (usually the federal funds rate in the US case) as a linear function of an inflation rate and a measure of economic slack. Such rules provide a simple way of either describing or prescribing monetary policy. Unfortunately, there are now many different versions of the Taylor Rule, which all lead to different conclusions. Not only are there many different measures of both slack and inflation; there are also an infinite number of possible coefficients that could be used to relate them to the policy interest rate. In fact, if you ask John Taylor today, he will advocate a very different set of coefficients than the ones he proposed in his original 1993 paper ( Parsimony suggests that a good Taylor rule should have 3 characteristics: it should be as simple as possible; it should use robust, easily defined, and well-known measures of slack and inflation; and it should fit reasonably well to past monetary policy. Also, to have credibility, such a rule should have “stood the test of time” to some extent: it should fit reasonably well to some subsequent monetary policy experience after it was first proposed. The Mankiw Rule has all these characteristics. It uses the unemployment rate and the core CPI inflation rate as its measures, and it applies the same coefficient to both. This setup leaves it with only two free parameters, which Greg set in a 2001 paper ( ) so as to fit the results to actual 1990’s monetary policy. As you can see from the chart below, the rule fits subsequent monetary policy rather well, although policy has tended to be slightly more easy (until 2008) than the rule would imply. You will notice a substantial divergence, however, after 2008, between the Mankiw Rule and the actual federal funds rate. If the reason for this divergence isn’t immediately clear, you need to take a closer look at the vertical axis. Extrapolating from pre-Lehman experience, this chart suggests that the Fed is still doing the best it can to approximate the Mankiw Rule. When banks lend money to one another in the federal funds market, lenders stubbornly refuse to pay for the privilege of lending, and this perversity does limit the Fed’s options. If we wanted to make a guess as to when the Fed will (or should) raise its target for the federal funds rate, a reasonable guess would be “when the Mankiw Rule rate rises above zero.” When will that happen? (Will it ever happen?) Nobody knows, of course, but the algebra is straightforward as to what will need to happen to inflation and unemployment. If the core inflation rate remains near 1%, the unemployment rate will have to fall to 7%. If the core inflation rate rises to 2%, the unemployment rate will still have to fall to 8%. Do you expect either of these things to happen soon? I DISCLOSURE: Through my investment and management role in a Treasury directional pooled investment vehicle and through my role as Chief Economist at Atlantic Asset Management, which generally manages fixed income portfolios for its clients, I have direct or indirect interests in various fixed income instruments, which may be impacted by the issues discussed herein. The views expressed herein are entirely my own opinions and may not represent the views of Atlantic Asset Management. 9 comments: Good analysis! How do we amend the Mankiw Rule (or any variant of the Taylor Rule) to account for the impact of quantitative easing? I would think of quantitative easing (as well as fiscal policy) as a substitute for negative interest rates. The Fed (with the help of Congress) has tried to approximate what the impact of negative interest rates would be, so that they can induce a recovery in spite of the zero constraint. But I don't think this requires amending the Mankiw Rule. Quantitative easing is a last resort that should no longer be happening by the time the rule gets up to zero. In fact, we're already perhaps seeing a transition from quantitative easing to quantitative tightening (largely passive removal of prior quantitative easing), as the Mankiw Rule rate starts to turn up from the bottom (or at least, as it's anticipated to do so -- and may have started by the time you read this). There ought to be a substantial period of quantitative tightening before we get to actual interest rate tightening. Actually (to amend my previous comment) it's not clear what constitutes quantitative tightening, and a substantial amount may already have taken place. If you think of the relevant variable as the growth rate of nontraditional Fed assets, that growth rate has already fallen significantly, and such may constitute tightening. If that's the right way to look at things, then it implies that, relative to the Mankiw Rule, the Fed is tightening too soon. Another proposal for a modified Taylor Rule that readers might find interesting was put forth by JD Foster, PhD at the Heritage Foundation here: http://bit.ly/alxKrK Is there a Web site that tracks these numbers and displays this graph, perhaps weekly? You forgot one desirable characteristic of a monetary rule: it should stave off or moderate booms and avoid deflation. I don't know that a regression-based model that fits, or describes, a particular historical period of monetary policy necessarily meets the test of wise monetary policy. Andy, this Mankiw rule reminds me a little of Knut Wicksell's theory of observed interest rate and real interest rate. I know it is not the same but what is your take on that? Also with a seemingly negative real rate of return, even with low rates do you think deflation is a potential issue, or that no matter how low rates can go the economy might be stuck with monetary policy being less effective. Great post, I believe blog owners should larn a lot from this site its really user pleasant…. Isn't this also a critical assumption: "the real return on safe short-term investments averages about 1-2 percent over the long run." Is there anything that says this is constant? What if the real return is negative for a period? In 1979 you had inflation at 13%, but the fed funds rate was at 10%. you can see more about tangkas ball at http://ligaplayer.com/bola-tangkas
{"url":"http://blog.andyharless.com/2010/06/us-monetary-policy-in-2010s-mankiw-rule.html?showComment=1275652754127","timestamp":"2014-04-19T14:29:07Z","content_type":null,"content_length":"71853","record_id":"<urn:uuid:7e1a4a5a-76c6-47e0-9d85-708303fe90a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayes’ theorem: Its triumphs and discontents Lessons learned from 250 years of a famous statistical theorem by Matthew Francis - Jun 7, 2013 4:45 pm UTC Nate Silver, baseball statistician turned political analyst, gained a lot of attention during the 2012 United States elections when he successfully predicted the outcome of the presidential vote in all 50 states. The reason for his success was a statistical method called Bayesian inference, a powerful technique that builds on prior knowledge to estimate the probability of a given event Bayesian inference grew out of Bayes' theorem, a mathematical result from English clergyman Thomas Bayes, published two years after his death in 1761. In honor of the 250th anniversary of this publication, Bradley Efron examined the question of why Bayes' theorem is not more widely used—and why its use remains controversial among many scientists and statisticians. As he pointed out, the problem lies with blind use of the theorem, in cases where prior knowledge is unavailable or unreliable. As is often the case, the theorem ascribed to Bayes predates him, and Bayesian inference is more general than what the good reverend worked out in his spare time. However, Bayes' posthumous paper was an important step in the development of probability theory, and so we'll stick with using his name. Bayes' theorem in essence states that the probability of a given hypothesis depends both on the current data and prior knowledge. In the case of the 2012 United States election, Silver used successive polls from various sources as priors to refine his probability estimates. (In other words, saying he "predicted" the outcome of the election is slightly misleading: he calculated which candidate was most likely to win in each state based on the polling data.) In other cases, priors could be the outcome of earlier experiments or even educated assumptions drawn from experience. The wise statistician or scientist constructs priors that are informative, but that isn't always easy to do. Partly for that reason, many who work with statistics and probability reject the use of prior data at all. Stats geeks refer to this as the "frequentist" approach, but if you learned it in a formal setting, you probably just called it "statistics." The basis of frequentist reasoning is a prediction of the outcome of many repetitions of the same test, providing an estimate of how frequently a particular result will show up. That's arguably a more objective approach, since it sidesteps the problem Bayesians have when there isn't an obvious prior. However, as Efron pointed out, when there are reasonable prior data—especially from disparate sources, as with Nate Silver's analysis—the Bayesian method performs better than the frequentist approach. While zealots exist in both the Bayesian and frequentist camps, scientists are often pragmatic, picking a method based on the particular problem at hand. Efron, himself a sophisticated Bayesian, admitted that he uses pure Bayesian methods only when the data allows him to, and he does some frequentist double-checking when priors aren't available. A third approach, called "empirical Bayes", can be used when the data set is large enough to act as a sort of prior in its own right. In this case, there are enough experimental outcomes within a single trial to provide a "prior" to test a hypothesis against one specific data point in the set. Using the entire data set, empirical Bayes can provide an estimate for the probability of a given outcome within the set. As Efron wrote, Bayes' theorem is "controversial," but not because the equation itself is in doubt. (It's a mathematical theorem, after all; it even has an interpretation in frequentist thinking, albeit one that doesn't make reference to prior knowledge.) Rather, its use is sometimes controversial, especially in light of unsophisticated application of poorly chosen priors. Nevertheless, the real lesson is that of the sharp kitchen knife that can cut you as well as the vegetables you're chopping: use the blade of Bayes poorly, and you'll regret it. Use it wisely, and it will serve you better than the dull but reliable knife. Science, 2013. DOI: 10.1126/science.1236536 (About DOIs). 79 Reader Comments 1. F22RaptureSmack-Fu Master, in training - http://xkcd.com/1132/ 2. hbar_squaredWise, Aged Ars Veteran So basically, garbage in garbage out? 3. l8gravelySmack-Fu Master, in training The article was nice, but boy would I have appreciated more historical context on the Rev Bayes himself, which put him in more context with his mathematical peers, etc. Time to do my own research 4. JakelsharkArs Scholae Palatinae Bayes' Theorem and Mohr's Circle were the bane of my college education. 5. Sarge MisfitWise, Aged Ars Veteran I'm not a mathematician or statistician, but I am curious about something. Can this theorem be used in reverse? That is, if you have the outcome, the data and some of the priors, can you infer other priors that you do not know or priors that you do not know exist? 6. daemoniosArs Scholae Palatinae The reason for his success was a statistical method called Bayesian inference, a powerful technique that builds on prior knowledge to estimate the probability of a given event happening. Thus was psychohistory born. 7. longhairedboyArs Scholae Palatinae use the blade of Bayes poorly, and you'll regret it. 8. psdArs Tribunus Militum l8gravely wrote: The article was nice, but boy would I have appreciated more historical context on the Rev Bayes himself, which put him in more context with his mathematical peers, etc. Time to do my own research He was trying to use math to prove the existence of God. He came up with this formula. It scared him that he might be right and put it away. 9. MJ the ProphetArs Tribunus Militum Have to say, I've been a fan of Bayes for a while now. If you're reasoning correctly, you're using Bayes' Theorem, whether you realize it or not. And if you're not using Bayes' Theorem, consciously or not, you're not reasoning correctly. 10. FrankMArs Tribunus Militum I'm not a mathematician or statistician, but I am curious about something. Can this theorem be used in reverse? That is, if you have the outcome, the data and some of the priors, can you infer other priors that you do not know or priors that you do not know exist? Bayesian techniques take two buckets of information, one labeled the "prior" and the other labeled the "data," and smoosh them together into a new bucket of information called a "posterior." The posterior is a distribution rather than a single outcome. If you take a completely non-informative prior distribution and process in some Gaussian data, you will end up with a Gaussian posterior distribution that has a 95% of its mass within the 95% confidence interval of a frequentist result. Things get more complicated when there is informative prior information. In principle, if you knew the prior and the posterior, you could back out a summary of the data. I don't like theta-upper-bar and theta-lower-bar enough to work out exactly how you'd do that. 11. redleaderArs Legatus Legionis I'm not a mathematician or statistician, but I am curious about something. Can this theorem be used in reverse? That is, if you have the outcome, the data and some of the priors, can you infer other priors that you do not know or priors that you do not know exist? Perhaps you should be a mathematician, as your idea is widely used in stats, machine intelligence, etc. One interpretation of Bayes theorem is just that it connects present events to future priors, such that each new event simply becomes a bit of information that serves as a prior used to predict the what will happen next. 12. psdArs Tribunus Militum I'm not a mathematician or statistician, but I am curious about something. Can this theorem be used in reverse? That is, if you have the outcome, the data and some of the priors, can you infer other priors that you do not know or priors that you do not know exist? I don't know but I bet there will be a lot of (1 - something) terms. 13. MJ the ProphetArs Tribunus Militum psd wrote: He was trying to use math to prove the existence of God. He came up with this formula. It scared him that he might be right and put it away. Funny how now, with 250 more years of experience and the accumulated successes of naturalism helping our priors, Bayes is one of the strongest tools in the arsenal of the nonreligious. 14. psdArs Tribunus Militum MJ the Prophet wrote: Have to say, I've been a fan of Bayes for a while now. If you're reasoning correctly, you're using Bayes' Theorem, whether you realize it or not. And if you're not using Bayes' Theorem, consciously or not, you're not reasoning correctly. Yes, the lay person term is profiling... 15. charleskiArs Tribunus Militum Frankly, the source of much of the 'controversy' surrounding Bayes' Theorem lies in the fact that at times it's "soared to scientific celebrity" (as Efron puts it). You need only delve into Silver's book to see an example of this sort of thing happening. Bayes' Theorem is a useful tool in the right circumstances, that's all. It's not going to spit out the Ultimate Answer any more than Pythagoras' Theorem would. 16. b.essiambreSmack-Fu Master, in training If you are interested in the Bayesian approach, this is what you should read: 17. HerrKaputtSmack-Fu Master, in training I think the picture is misleading. The formulation of Bayes' Theorem (at least in all books on Statistics and on Machine Learning I've read so far, and I've read quite a few) is: P(H|D) = P(D|H) * P(H) / P(D) The equation on the picture is actually a generalization of the formula I just typed. It's a correct statement, but it's not what the vast majority of scientists call "Bayes' Theorem". In the picture, everything is conditioned on X as well (which is why an "X" appears at the right side of every bar, including the two bars which are omitted above). (Minor) corrections aside, a bit of trivia: one of the simplest machine learning algorithms, called Naive Bayes classifier, uses little more than Bayes' Theorem. It forms the basis of pretty much any modern spam filter, and (I suspect) it is also how Google's language detector (the "detect language automatically" button on Google Translate) works. My suspicion comes from the fact that I implemented my own NB language detector and it makes many of the same mistakes as Google's. More info: 18. FrankMArs Tribunus Militum charleski wrote: Bayes' Theorem is a useful tool in the right circumstances, that's all. Shhhhh, my econometrics professor might hear you and get all medieval on your posterior. 19. metalsheepSmack-Fu Master, in training hbar_squared wrote: So basically, garbage in garbage out? Amen; as in stat and pretty much every other facet of life. The tricky bit with bayes is sometimes it's hard to tell whether your input (prior) is garbage or not. 20. MJ the ProphetArs Tribunus Militum psd wrote: Yes, the lay person term is profiling... Oh, governments realized a long while back that Bayes' Theorem is the key to ruling (or at least comprehending) the universe. Which is why, when it was so successful during WWII, they considered it classified information. They didn't want other people to figure out how to use its awesome power. 21. Jeff SArs Praetorian Question: Are Bayesian methods actually useful for *truly* random outcomes? Or do they tend more towards being useful for sets of data which at first APPEAR random, but in fact, have hidden non-random influences? What I mean is, take the example of Nate Silver's "predicting" the election - arguably, who a person is going to vote for is not truly random, and the overall percentage of the population who has already decided to vote for a candidate near to election day is not very random either - it's hard, sometimes, to look at any given person and figure out who they are going to vote for (unless, you know, they are wearing something which strongly correlates - like a t-shirt or button for their favored candidate), but in fact, who they are voting for is not going to be determined Sure, right up to the last day there's some level of fluidity as people perhaps change their minds, but an election doesn't seem like perfectly randomized data. However, in stuff like physics, there's often a high degree of true randomness. Do Bayesian methods help much compared to frequentist methods, for truly hard-random data? 22. IonitorArs Tribunus Militumet Subscriptor Frequentists statistics almost always use prior information as well -- everything from the experimental design to the method chosen for analysis are typically based off of knowledge of what is being studied. Bayesians include it in the actual analysis, which can be viewed as more "honest". A big reason for the surge of Bayesian analysis is that is often can solve more complex problems in simpler fashion, but that is only true due to the computing power enabling the analysis. For this reason, in the modern day, very few (if any) "frequentist" statisticians will completely reject Bayesian methods, while most Bayesian statisticians will do what they can to ensure their methods are reasonable under frequentist standards. 23. IonitorArs Tribunus Militumet Subscriptor Question: Are Bayesian methods actually useful for *truly* random outcomes? Or do they tend more towards being useful for sets of data which at first APPEAR random, but in fact, have hidden non-random influences? What I mean is, take the example of Nate Silver's "predicting" the election - arguably, who a person is going to vote for is not truly random, and the overall percentage of the population who has already decided to vote for a candidate near to election day is not very random either - it's hard, sometimes, to look at any given person and figure out who they are going to vote for (unless, you know, they are wearing something which strongly correlates - like a t-shirt or button for their favored candidate), but in fact, who they are voting for is not going to be determined Sure, right up to the last day there's some level of fluidity as people perhaps change their minds, but an election doesn't seem like perfectly randomized data. However, in stuff like physics, there's often a high degree of true randomness. Do Bayesian methods help much compared to frequentist methods, for truly hard-random data? Bayesian methods tend to make it easier to incorporate "expert knowledge" and to model data that is less random than it appears, but that doesn't mean that it's limited to that. Physicians often have some of the more complicated models that make non-Bayesian analysis more difficult, or there is knowledge from previous experiments that people want to incorporate into their own testing. Bayesian analysis definitely has a place there. 24. MJ the ProphetArs Tribunus Militum Question: Are Bayesian methods actually useful for *truly* random outcomes? Quite useful, actually. For example, radioactive decay is one of the few truly random phenomena we know of. So how does Bayes help us? You know the probability that an atom will decay at any given moment. You then get some evidence of what looks like a decay product. How likely is it that the atom has actually decayed? Plug in the prior probability of the decay happening, the likelihood that you'd have that evidence if a decay had occurred, the likelihood that you'd have that evidence if the decay hadn't occurred (say, if it were a cosmic ray sneaking in), and Bayes gives you the answer. 25. OrangeCreamArs Legatus Legionis hbar_squared wrote: So basically, garbage in garbage out? While necessary it is an insufficient statement since that applies all the time. Not that I'm a statistician, but the gist is that if your data doesn't conform to a particular set of requirements (it's not garbage by any definition, it's just not of the correct type), it won't work. 26. Guest "Nevertheless, the real lesson is that of the sharp kitchen knife that can cut you as well as the vegetables you're chopping: use the blade of Bayes poorly, and you'll regret it. Use it wisely, and it will serve you better than the dull but reliable knife." Hmmmm..... well, the sharp knife will cut into the onion and then through, while you will apply force to the dull/semi-sharp blade which then suddenly slides the onion off and severs a finger or an artery 27. FrankMArs Tribunus Militum Question: Are Bayesian methods actually useful for *truly* random outcomes? Or do they tend more towards being useful for sets of data which at first APPEAR random, but in fact, have hidden non-random influences? What I mean is, take the example of Nate Silver's "predicting" the election - arguably, who a person is going to vote for is not truly random, and the overall percentage of the population who has already decided to vote for a candidate near to election day is not very random either - it's hard, sometimes, to look at any given person and figure out who they are going to vote for (unless, you know, they are wearing something which strongly correlates - like a t-shirt or button for their favored candidate), but in fact, who they are voting for is not going to be determined Sure, right up to the last day there's some level of fluidity as people perhaps change their minds, but an election doesn't seem like perfectly randomized data. However, in stuff like physics, there's often a high degree of true randomness. Do Bayesian methods help much compared to frequentist methods, for truly hard-random data? I believe a Bayesian-like method was used to combine the data from two different detectors to declare the Higgs discovered. Bayes theorem definitely applies in that kind of situation. For something like polling, it's hard to find useful prior information other than other polls (notwithstanding platitudes like "the undecided tend to break late" or "polls tighten up as election day nears"). You can make some inferences that one poll or another tends to have a certain relative bias, or that one poll or another tends to bounce around more. With decent information about these, it becomes possible to combine polls meaningfully to get a snapshot of the population at the time the poll was taken. What bothers me about the Nate Silver predictions is that one campaign had an election-day meltdown of its get-out-the-vote effort, yet this didn't cause any unexpected results??? This only makes sense if (1) Silver was systematically underestimating turnout for Romney supporters or (2) Silver used a Delorean to get a future history book and bet accordingly :-) 28. deas187Ars Scholae Palatinae Ahhh the second best cult next to Burning Man. 29. WhitneyLandWise, Aged Ars Veteran Anyone understand Bayes well enough to explain Monty Hall to a layman? 30. WaveRunnerArs Tribunus Militum HerrKaputt wrote: I think the picture is misleading. The formulation of Bayes' Theorem (at least in all books on Statistics and on Machine Learning I've read so far, and I've read quite a few) is: P(H|D) = P(D|H) * P(H) / P(D) Not only that but the use of DX is also troubling. It should be A and B... 31. GDwarfArs Centurion FrankM wrote: Question: Are Bayesian methods actually useful for *truly* random outcomes? Or do they tend more towards being useful for sets of data which at first APPEAR random, but in fact, have hidden non-random influences? What I mean is, take the example of Nate Silver's "predicting" the election - arguably, who a person is going to vote for is not truly random, and the overall percentage of the population who has already decided to vote for a candidate near to election day is not very random either - it's hard, sometimes, to look at any given person and figure out who they are going to vote for (unless, you know, they are wearing something which strongly correlates - like a t-shirt or button for their favored candidate), but in fact, who they are voting for is not going to be determined Sure, right up to the last day there's some level of fluidity as people perhaps change their minds, but an election doesn't seem like perfectly randomized data. However, in stuff like physics, there's often a high degree of true randomness. Do Bayesian methods help much compared to frequentist methods, for truly hard-random data? I believe a Bayesian-like method was used to combine the data from two different detectors to declare the Higgs discovered. Bayes theorem definitely applies in that kind of situation. For something like polling, it's hard to find useful prior information other than other polls (notwithstanding platitudes like "the undecided tend to break late" or "polls tighten up as election day nears"). You can make some inferences that one poll or another tends to have a certain relative bias, or that one poll or another tends to bounce around more. With decent information about these, it becomes possible to combine polls meaningfully to get a snapshot of the population at the time the poll was taken. What bothers me about the Nate Silver predictions is that one campaign had an election-day meltdown of its get-out-the-vote effort, yet this didn't cause any unexpected results??? This only makes sense if (1) Silver was systematically underestimating turnout for Romney supporters or (2) Silver used a Delorean to get a future history book and bet accordingly :-) Or that "get out and vote" programs are a waste of money, which seems likely. They're stocked with lists of people who have already indicated a voting preference, which seems like it would make them almost useless. 32. BitPoetModeratoret Subscriptor I'd like to point out that sharp knives are safer, and more predictable to use in the kitchen vs. dull ones, which may slip and do damage. So I'm thinking your analogy at the end might need a 33. linnenArs Centurion Here is the example used in the book I'm reading (security professionals have similar explanations for low occurrence events as well); A diagnostic test for a specific disease is 95% accurate (you HAVE this disease 95% of the time when this test is positive, you DON'T have the disease 95% of the time when the test shows negative.) Your test results show positive. Do you in fact have this disease? We will go with the assumption that this disease is rare in your location, that only one person in a thousand have Bayes equation for P[ disease | test says yes ] is the product of probability of the test ( .95 ) and the probability of the disease ( .001 ) divided by the probability of all positive cases ( true positives plus false positives ). Long, short; the answer is that only about 1.9 out of 100 of those that tested positive actually have the disease. Of course the inverse problem is equally interesting (in all senses of the word) in that a negative result is valid in 99.995 results out of 100. 34. FrankMArs Tribunus Militum GDwarf wrote: Or that "get out and vote" programs are a waste of money, which seems likely. They're stocked with lists of people who have already indicated a voting preference, which seems like it would make them almost useless. There is a difference between a voting preference and the probability of actually voting. Anecdotally, the elephants' probabilities seem more polarized to 0 and 1 than the donkeys'. You see both the "I would vote even if a volcano popped up in my front yard" and the "I wouldn't pull a lever for that ____ even if the voting machine was at the foot of my bed" responses, but relatively few of the "I'll vote if someone gives me a ride" on that side of the ideological spectrum. We'd need to a breakdown on the donkey side to empirically test if get-out-the-vote is truly useless. 35. metalsheepSmack-Fu Master, in training Question: Are Bayesian methods actually useful for *truly* random outcomes? Or do they tend more towards being useful for sets of data which at first APPEAR random, but in fact, have hidden non-random influences? What I mean is, take the example of Nate Silver's "predicting" the election - arguably, who a person is going to vote for is not truly random, and the overall percentage of the population who has already decided to vote for a candidate near to election day is not very random either - it's hard, sometimes, to look at any given person and figure out who they are going to vote for (unless, you know, they are wearing something which strongly correlates - like a t-shirt or button for their favored candidate), but in fact, who they are voting for is not going to be determined Sure, right up to the last day there's some level of fluidity as people perhaps change their minds, but an election doesn't seem like perfectly randomized data. However, in stuff like physics, there's often a high degree of true randomness. Do Bayesian methods help much compared to frequentist methods, for truly hard-random data? In a truly random data set there's no information to recover: statistics is only interested in datasets that aren't random. The more powerful a test, the more it is able to cut through noisy situations like you experience in physics, where the influence of the data is very small compared to the influence of the noise. Bayes is just as applicable in these (general) situations as any other test or statistical tool. Pick the right test for the job. 36. metalsheepSmack-Fu Master, in training BitPoet wrote: I'd like to point out that sharp knives are safer, and more predictable to use in the kitchen vs. dull ones, which may slip and do damage. So I'm thinking your analogy at the end might need a True in skilled hands respectful of how sharp the knife is, less so when the wielder of the implement is clueless to the damage they could do themselves. 37. MJ the ProphetArs Tribunus Militum WhitneyLand wrote: Anyone understand Bayes well enough to explain Monty Hall to a layman? The Monty Hall problem is easier to grasp by giving you more of an intuitional handle to take hold of. Imagine there are a thousand doors. You pick one, which you know has a 1/1000 chance of being right. Monty then eliminates 998 other doors, showing you that they're all wrong. What's a better bet, the door you picked randomly at first, or the one remaining door that has been non-randomly pointed out? When you see the probabilities collapse that way, it makes more sense. And it works the same even when there's only one door for Monty to eliminate. 38. FrankMArs Tribunus Militum metalsheep wrote: In a truly random data set there's no information to recover: statistics is only interested in datasets that aren't random. The more powerful a test, the more it is able to cut through noisy situations like you experience in physics, where the influence of the data is very small compared to the influence of the noise. Bayes is just as applicable in these (general) situations as any other test or statistical tool. Pick the right test for the job. You can have unknown random variables of interest, such as the probability of dying from a novel disease, or the probability of a particular particle decay sequence. Think of it this way... loaded dice are still random, but with enough data we can detect the bias. 39. AlhazredArs Scholae Palatinae I'm not a mathematician or statistician, but I am curious about something. Can this theorem be used in reverse? That is, if you have the outcome, the data and some of the priors, can you infer other priors that you do not know or priors that you do not know exist? No, not really. You have no idea what an appropriate statistical probability would be. Since you KNOW the outcome of the experiment you're stuck with 100%. Do you model a situation where you would have guessed there was a 1%, 5%, 50%, 99.5% or what probability of the given outcome. Each of these would have been the result of very different statistics. Statistics isn't really useful in that respect post-hoc. Though clearly its possible to gain insight into the utility of various statistical methods themselves. You must login or create an account to comment. Feature Story (2 pages) Introducing Steam Gauge: Ars reveals Steam’s most popular games We sampled public data to estimate sales and gameplay info for every Steam game.
{"url":"http://arstechnica.com/science/2013/06/bayes-theorem-its-triumphs-and-discontents/?comments=1","timestamp":"2014-04-17T20:54:49Z","content_type":null,"content_length":"141351","record_id":"<urn:uuid:6ce35702-00d3-43ab-9376-c4193fb1fd50>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Limitations of the operator expansion method. ASA 125th Meeting Ottawa 1993 May 2aUW7. Limitations of the operator expansion method. Peter J. Kaczkowski Eric I. Thorsos Appl. Phys. Lab., Univ. of Washington, Seattle, WA 98105 Preliminary studies of the operator expansion method applied to scattering from rough surfaces satisfying the Dirichlet boundary condition [P. J. Kaczkowski and E. I. Thorsos, J. Acoust. Soc. Am. 90, 2258 (A) (1991)] have indicated that this relatively new method [D. M. Milder, J. Acoust. Soc. Am. 89, 529--541 (1991)] has a broad range of validity. For moderate rms surface slopes, the method is accurate over almost all scattering angles, and represents a vast improvement over the Kirchhoff approximation and small perturbation methods. Further study of the operator expansion method has led to new insights that establish a link between the formal validity of the method and the validity of the Rayleigh hypothesis. While the Rayleigh hypothesis appears to place a strict limit on the operator expansion, numerical examples will be presented that illustrate that the accuracy of the scattering cross section computed by the operator expansion method degrades only gradually as the rms slope is increased beyond that limit. For scattering from surfaces rough in one dimension, the accuracy of the operator expansion solution is established through comparison with the solution to an integral equation. Studies of the convergence of the terms in the operator expansion series indicate how the convergence rate can be used to infer the accuracy of the solution at any given order. This property will be useful when applying the operator expansion method to scattering from surfaces rough in two dimensions, for which exact solutions are still very costly. [Work supported by ONR.]
{"url":"http://www.auditory.org/asamtgs/asa93ott/2aUW/2aUW7.html","timestamp":"2014-04-18T09:11:08Z","content_type":null,"content_length":"2196","record_id":"<urn:uuid:affde08b-6907-47ee-9d44-9bbef71cb581>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Rand Implementation Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I would like to go through how rand() and srand() functions are implemented and would like to tweak the code to modify it to my requirements. Where can i find the source code of rand() and srand(). up vote 6 down vote favorite c gcc random add comment I would like to go through how rand() and srand() functions are implemented and would like to tweak the code to modify it to my requirements. Where can i find the source code of rand() and srand(). It takes a seed as in input argument, usually like follows:- double result = srand(time(NULL)); and returns a random number that adheres to the probability and hence expected number of occurrences. from CodeGuru forums:- void __cdecl srand (unsigned int seed) #ifdef _MT _getptd()->_holdrand = (unsigned long)seed; #else /* _MT */ holdrand = (long)seed; #endif /* _MT */ up vote 4 down vote accepted } int __cdecl rand (void) #ifdef _MT _ptiddata ptd = _getptd(); return( ((ptd->_holdrand = ptd->_holdrand * 214013L + 2531011L) >> 16) & 0x7fff ); #else /* _MT */ return(((holdrand = holdrand * 214013L + 2531011L) >> 16) & 0x7fff); #endif /* _MT */ Hope this helps. show 4 more comments It takes a seed as in input argument, usually like follows:- and returns a random number that adheres to the probability and hence expected number of occurrences. void __cdecl srand (unsigned int seed) { #ifdef _MT _getptd()->_holdrand = (unsigned long)seed; #else /* _MT */ holdrand = (long)seed; #endif /* _MT */ } int __cdecl rand (void) { #ifdef _MT _ptiddata ptd = _getptd(); return( ((ptd->_holdrand = ptd->_holdrand * 214013L + 2531011L) >> 16) & 0x7fff ); #else /* _MT */ return(((holdrand = holdrand * 214013L + 2531011L) >> 16) & 0x7fff); # endif /* _MT */ } rand and srand are usually implemented as a simple LCG, you can easily write your own (it's few lines of code) without looking for the sources of rand and srand. Notice that, if you need random numbers for "serious" purposes (e.g. cryptography), there are much better RNGs than LCG. By the way, the C standard itself includes a sample implementation of rand and srand: static unsigned long int next = 1; int rand(void) // RAND_MAX assumed to be 32767 up vote 7 next = next * 1103515245 + 12345; down vote return (unsigned int)(next/65536) % 32768; void srand(unsigned int seed) next = seed; add comment rand and srand are usually implemented as a simple LCG, you can easily write your own (it's few lines of code) without looking for the sources of rand and srand. Notice that, if you need random numbers for "serious" purposes (e.g. cryptography), there are much better RNGs than LCG. By the way, the C standard itself includes a sample implementation of rand and srand: static unsigned long int next = 1; int rand(void) // RAND_MAX assumed to be 32767 { next = next * 1103515245 + 12345; return (unsigned int)(next/65536) % 32768; } void srand(unsigned int seed) { next = seed; } The glibc one (used by gcc) is the simple formula: x = 1103515245 * x + 12345 wrapping around at 2^32, as shown here. You can just set x as the seed then keep calling a function to evaluate that expression (and update the seed). up vote 2 down vote But you should be aware the linear congruential generators like this are considered adequate but not ideal. While the only ideal random number generator would be perfectly random, the Mersenne Twister probably comes closer. add comment The glibc one (used by gcc) is the simple formula: wrapping around at 232, as shown here. You can just set x as the seed then keep calling a function to evaluate that expression (and update the seed). But you should be aware the linear congruential generators like this are considered adequate but not ideal. While the only ideal random number generator would be perfectly random, the Mersenne Twister probably comes closer.
{"url":"http://stackoverflow.com/questions/4768180/rand-implementation?answertab=active","timestamp":"2014-04-18T01:21:40Z","content_type":null,"content_length":"80690","record_id":"<urn:uuid:fbe37f3b-2b4d-49c4-86a7-dcc9af3319b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00513-ip-10-147-4-33.ec2.internal.warc.gz"}