solving polynomial with degree 3... March 11th 2011, 02:35 PM #1

the problem is... This polynomial was created for a word problem. The problem started off w/ this guy that had a block of ice 3x4x5=60 ft^3. He wants to reduce the volume TO (3/5) the original volume by removing the same # of feet from each dimension.

I made the equation... $(3-x)(4-x)(5-x)=V(x)$ to represent the volume of the cube in terms of the amount of feet removed. After multiplying out, this becomes $V(x)=-x^3+12x^2-47x+60$.

Since he wants to reduce the volume TO (3/5) the original volume, I can say that $36=-x^3+12x^2-47x+60$, or $0=-x^3+12x^2-47x+24$. You can see this is the same polynomial that appears at the beginning of this post.

I tried to find a whole number root (positive in this case only) and couldn't find one. I know the answer is ABOUT .6 feet. So, it makes sense that I can't plug in positive factors of 24 into the equation and get zero. Obviously, trying all the factors of 24 where one factor is a fraction is absurd. I can't use the quadratic formula for this problem either, since this polynomial is not a quadratic function. It has a constant, so it's not like I can factor x out of the polynomial. Any idea how I might solve this problem?

Last edited by jonnygill; March 12th 2011 at 09:10 AM. Reason: There was some unnecessary/incorrect information before

You can solve it using Cardano's method, but I doubt that will give you any more insight on the answer. The only real solution is not rational... x = 0.597152101 or so. (Here's the exact

Thanks Dan. The book I am working from would not expect me to know or use Cardano's method. This section of the book is dealing with using synthetic division to solve polynomials. This problem occurs towards the end of the problem set, so it's one of the more difficult problems in this section. There is something I am missing, but I don't know what it is. I've looked this problem over for awhile now.

Last edited by jonnygill; March 11th 2011 at 04:39 PM.
Perhaps, instead of me asking how to solve a polynomial w/ a degree of 3 for x, I should ask this... given the word problem presented in the first post of this thread, how could I solve the problem? Keep in mind that the section of the book this problem is found in deals w/ using synthetic division to solve polynomial equations. And let's say that we are not allowed to use Cardano's method in this problem. Thank you.

[The original problem is quoted again, this time saying the guy has a cube of ice and wants to reduce the volume by (3/5).]

It is possible that since you have not posted the entire question, what the question is actually asking is not being answered. Please post the entire question... Also, it is possible that since it says "reduce the amount by 3/5", he will only be left with 2/5 of the original volume...

I'm sorry, in the initial problem I said he has a cube of ice. He in fact has a three-dimensional rectangular figure. This problem is broken up into 4 parts: a, b, c, and d.
All parts concern a sculptor who has a block of ice that measures 3 x 4 x 5 = 60 ft^3. Before he starts sculpting, he wants to shave off the same number of feet from each dimension to make the block of ice smaller.

a asks to write a polynomial function to model the situation. I came up with $V(x)=-x^3+12x^2-47x+60$. This function shows Volume in terms of the amount removed from each dimension, x. The function itself was derived from $(3-x)(4-x)(5-x)$, where x is the number of feet removed from each dimension.

b asks to graph the function (graphing calculator).

c states that he wants to reduce the volume TO 3/5 the original volume and to write an equation to model this situation... I figured that if $V(x)=60=-x^3+12x^2-47x+60$, then $\dfrac{3}{5}\cdot60=-x^3+12x^2-47x+60$, or $36=-x^3+12x^2-47x+60$, or $0=-x^3+12x^2-47x+24$.

d asks how much he should remove from each dimension... essentially they are asking to solve $0=-x^3+12x^2-47x+24$ for x.

I tried solving the equation using synthetic division. I knew that x must be < 3 since the smallest dimension is 3 feet to begin with. I also figured that x must be > 0 because it is not practical to remove a negative length from a dimension. So, the only two factors of 24 that satisfy these requirements are 1 and 2 (I knew zero was not an option since there is a constant in the polynomial equation). I tried using synthetic division for both 1 and 2, and both resulted in a remainder that did not equal zero. And of course, I know the answer is about .6, so it's not a surprise that using whole number factors was not fruitful.

How can I figure out how much to remove from each dimension so that the resulting volume is three fifths of the original volume without using Cardano's method?

Prove It: I made an error, the problem in fact said that the sculptor wants the new volume to be three fifths OF the original volume. Thank you to whoever is kind enough to read through this thread.

Last edited by jonnygill; March 12th 2011 at 09:33 AM.
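The synthetic-division attempts described above can be reproduced mechanically. Here is a short sketch (my own, for illustration) that runs synthetic division of $-x^3+12x^2-47x+24$ by $(x-r)$ for the candidates $r=1,2$ and shows the remainders are indeed nonzero:

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients from highest degree down) by (x - r).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(out[-1] * r + c)
    return out[:-1], out[-1]

coeffs = [-1, 12, -47, 24]   # -x^3 + 12x^2 - 47x + 24
for r in (1, 2):
    quotient, remainder = synthetic_division(coeffs, r)
    print(r, remainder)       # nonzero remainder means r is not a root
```

By the remainder theorem, the remainder is just the value of the polynomial at $r$, so the nonzero remainders ($-12$ and $-30$) confirm that neither candidate is a root.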
Reason: oopsie

[The previous post is quoted in full, with the polynomial mistakenly written as $V(x)=-x^3+12x^2-37x+60$ and the equation as $0=-x^3+12x^2-37x+24$.]
I expect that since the factor theorem does not work, in that case you would have to use technology to solve the equation.

Thanks! Yeah, I accidentally wrote -37x instead of -47x. Prove It is correct, the authors of the book probably expected me to use technology (e.g. a graphing calculator) to determine the zeros of the function. I did this and I got x = .5971, or about .6. Thank you for your help. Somehow, it did not occur to me that in this case (i.e. this level of mathematics) using the graphing calculator is the way to go. Thanks again.
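For completeness, the "use technology" step can be reproduced with any numerical root finder; e.g. a quick sketch in Python (my own, using numpy) confirming the single real root near 0.597:

```python
import numpy as np

# Solve 0 = -x^3 + 12x^2 - 47x + 24 numerically.
roots = np.roots([-1, 12, -47, 24])
real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
print(real_roots)   # only one real root, about 0.6 feet
```

The other two roots of the cubic turn out to be complex, which is why no rational-root candidate could ever have worked.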
An introduction to interlacing families

Perhaps you heard on Gil Kalai's blog, but the 54-year-old Kadison-Singer problem was recently solved by Adam Marcus, Dan Spielman and Nikhil Srivastava. The paper is fantastically short, and comes as a sequel to another recent paper which used similar methods to solve another important problem: For every degree $d>2$, there exists an infinite family of $d$-regular bipartite Ramanujan graphs. Considering the gravity of these results, it's probably wise to get a grasp of the techniques that are used to prove them. The main technique they use ("the method of interlacing families") is actually new, and so I decided to write a blog post to discuss it (and force myself to understand it).

— 1. A new probabilistic method —

The main idea is that you're given a list of polynomials, and you're asked to establish that the largest real root of one of these polynomials is smaller than something. This is very similar to the sort of problem that the probabilistic method could be used to solve: Suppose you have a huge collection $S$ of real numbers, and you want to see how small these numbers can be. Define a random variable $X$ to be uniformly distributed over $S$ and compute the expected value. Then you know that there is some member of $S$ which is no larger than $\mathbb{E}[X]$. Another way to say this is the minimum is always less than or equal to the average. Functional analysts use this pigeonhole principle-type argument all the time without consciously acknowledging that they are using the probabilistic method. Of course, the probabilistic method can be formulated as "just counting," but oftentimes, it is easier to think in terms of probability.

Now if you're given a collection of polynomials, you might be inclined to define a uniform distribution over this collection, and then find the "average" polynomial, that is, the polynomial whose $k$th coefficient is the average of the $k$th coefficients.
But you’re asked to establish that there is a polynomial in the collection whose largest real root is smaller than something. How can you relate this to the roots of the average polynomial? In general, there is no correspondence, but the following lemma reveals an interesting case in which something can actually be said: Lemma (Lemma 4.2 in Part I, essentially). Take a finite collection of polynomials $\{f_i(x)\}_i$, each with all real coefficients, a positive leading coefficient, and at least one real root. Let $R_i$ and $r_i$ be the largest and second largest real roots (respectively) of $f_i(x)$; if $f_i(x)$ only has one real root, take $r_i$ to be $-\infty$. If $\displaystyle{\max_j r_j\leq \min_j R_j}$, then $f_\emptyset(x):=\sum_j f_j(x)$ has a real root (say, $R$) which satisfies $\min_j R_j \leq R$. Note that the average polynomial is just a scaled version of $f_\emptyset(x)$, which will have the same roots as $f_\emptyset(x)$. In practice, you will use the largest real root of $f_\emptyset(x)$ as the upper bound on $\min_j R_j$. The coolest thing about this result is you could assign the proof as homework in an basic real analysis class. Proof of Lemma: View each polynomial $f_i(x)$ as a function over the real line. Since the leading coefficient of $f_i(x)$ is positive, there exists $y_1$ such that $f_i(y)>0$ for every $y\geq y_1$ and every $i$. Taking $y_0:=\min_j R_j$, we then have $r_i\leq y_0 \leq R_i\leq y_1$. We claim that $f_i(y_0)\leq 0$. Suppose otherwise, and consider two cases: Case I: There exists $y\in(y_0,R_i)$ such that $f_i(y)\leq 0$. Then either $y$ is a root of $f_i(x)$ in $(y_0,R_i)$, or $f_i(x)$ has a root in $(y_0,R_i)$ by the intermediate value theorem. This contradicts the fact that the second largest root of $f_i(x)$ satisfies $r_i\leq y_0$. Case II: $f_i$ is strictly positive over $(y_0,R_i)$. 
Note that $f_i$ is also strictly positive over $(R_i,\infty)$, since otherwise $f_i(y_1)>0$ implies that $R_i$ is not the largest root by the intermediate value theorem. Since $f_i$ is differentiable, $f_i'(R_i)$ exists, and we must have $f_i'(R_i)=0$, since otherwise there is a point $y$ in a neighborhood of $R_i$ for which $f_i(y)<0$. Writing out the definition of the derivative, and taking $g_i(x)$ to be the polynomial such that $f_i(x)=(x-R_i)g_i(x)$, we have

$\displaystyle{0=f_i'(R_i)=\lim_{y\rightarrow R_i}\frac{f_i(y)-f_i(R_i)}{y-R_i}=\lim_{y\rightarrow R_i}\frac{f_i(y)}{y-R_i}=g_i(R_i)}$.

As such, $R_i$ is also a root of $g_i(x)$, and so $r_i=R_i$. Combined with $r_i\leq y_0\leq R_i$, we then have that $y_0=R_i$, which contradicts the assumption that $f_i(y_0)>0$.

To summarize up to this point, we have $f_i(y_0)\leq 0$ for every $i$, where $y_0:=\min_j R_j$, and also $f_i(y_1)>0$ for every $i$. By addition, it then follows that $f_\emptyset(y_0)\leq 0<f_\emptyset(y_1)$, meaning $f_\emptyset(x)$ has a root $R\in[y_0,y_1)$ by the intermediate value theorem. $\Box$

Note that I presented this result differently from the original, mostly because I wanted to weaken the hypothesis to improve my understanding. The main takeaway is you want there to be a "caste system" on the top roots in order for this new probabilistic method to work.

— 2. A useful condition —

Consider the following definition:

Definition. Take a finite collection of polynomials $\{f_i(x)\}_i$ of degree $n$ with all real roots. Let $\beta_1^{(i)}\leq\cdots\leq\beta_n^{(i)}$ denote the roots of $f_i(x)$. We say the polynomials $\{f_i(x)\}_i$ have a common interlacing if there exists $\{\alpha_k\}_{k=1}^{n-1}$ such that $\displaystyle{\beta_1^{(i)}\leq \alpha_1\leq \beta_2^{(i)}\leq \alpha_2\leq \cdots\leq \alpha_{n-1}\leq \beta_n^{(i)}}$ for every $i$.
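To make the lemma concrete, here is a quick numerical sanity check (in Python, with quadratics I chose for illustration): $f_1(x)=x(x-2)$ and $f_2(x)=(x-1)(x-3)$ have a common interlacing (take $\alpha_1=1.5$), and the largest root of $f_1+f_2$ indeed lands at or above $\min_j R_j=2$.

```python
import numpy as np

# Two illustrative quadratics with a common interlacing:
# f1 = x(x-2) has roots 0, 2; f2 = (x-1)(x-3) has roots 1, 3.
# Second-largest roots: r1 = 0, r2 = 1; largest roots: R1 = 2, R2 = 3.
# max_j r_j = 1 <= min_j R_j = 2, so the lemma's hypothesis holds.
f1 = np.array([1.0, -2.0, 0.0])   # coefficients of x^2 - 2x
f2 = np.array([1.0, -4.0, 3.0])   # coefficients of x^2 - 4x + 3

f_sum = f1 + f2                   # f_emptyset = f1 + f2
largest_root_of_sum = max(np.roots(f_sum).real)

min_R = min(max(np.roots(f1)), max(np.roots(f2)))  # min_j R_j = 2
print(largest_root_of_sum, min_R)  # the largest root of the sum is >= min_R
```

Here the largest root of the sum is $(3+\sqrt{3})/2\approx2.366$, above $\min_j R_j=2$, exactly as the lemma promises.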
Note that $\{f_i(x)\}_i$ having a common interlacing implies that $\max_j\beta_{n-1}^{(j)}\leq \alpha_{n-1}\leq \min_j\beta_n^{(j)}$, and so such polynomials satisfy the hypothesis of the previous lemma (provided the leading coefficients are positive). Moreover, such polynomials are rather common, at least in linear algebra. Theorem 4.3.10 of Horn and Johnson's Matrix Analysis gives the following example: Consider all self-adjoint $n\times n$ matrices such that deleting the last row and column produces the same matrix $A$. Then the characteristic polynomials of these matrices have a common interlacing (the $\alpha_k$'s in this case can be taken to be the eigenvalues of $A$).

Not only does this notion of common interlacing naturally occur in linear algebra, it also can be verified by testing the hypothesis in the following lemma:

Lemma (Proposition 1.35 in this paper, essentially). Take polynomials $f(x)$ and $g(x)$ of degree $n$ with positive leading coefficients such that, for every $\lambda\in[0,1]$, the polynomial $\lambda f(x)+(1-\lambda)g(x)$ has $n$ real roots. Then $f(x)$ and $g(x)$ have a common interlacing.

Proof: We only treat the special case where the $2n$ roots of $f(x)$ and $g(x)$ are all distinct (the degenerate cases can be treated using basic calculus-type arguments). By assumption, $f(x)$ and $g(x)$ each have $n$ real roots, which form $2n$ points on the real line, and their complement in the real line forms $2n+1$ open intervals. Viewing $f$ and $g$ as functions over the real line, note that their signs are constant over each of these open intervals (this follows from the intermediate value theorem). In fact, ordering these intervals from right to left as $\{I_k\}_{k=0}^{2n}$, then since the polynomials have positive leading coefficients, both $f$ and $g$ are positive on $I_0$. Next, the largest of the $2n$ roots belongs to either $f(x)$ or $g(x)$ (but not both), and so $f$ and $g$ have opposite sign over $I_1$.
Continuing in this way, $f$ and $g$ have common sign over even-indexed intervals and opposite sign over odd-indexed intervals. As such, addition gives that for any $\lambda>0$, every real root of $\lambda f(x)+(1-\lambda)g(x)$ must lie in an odd-indexed interval. Moreover, the (real) roots are continuous in $\lambda$, and so the boundary of each odd-indexed interval had better match a root of $f(x)$ with a root of $g(x)$. Picking a point from each of the even intervals then produces a common interlacing of $f(x)$ and $g(x)$. $\Box$

Corollary. Take a finite collection of polynomials $\{f_i(x)\}_i$ of degree $n$ with positive leading coefficients such that every convex combination of these polynomials has $n$ real roots. Then the polynomials $\{f_i(x)\}_i$ have a common interlacing.

Proof: By the lemma, any pair has a common interlacing. This in mind, suppose $\{f_i(x)\}_i$ does not have a common interlacing. Then there is some $i,j,k$ such that the $k$th largest root of $f_i(x)$ is smaller than the $(k+1)$st largest root of $f_j(x)$, contradicting the fact that $f_i(x)$ and $f_j(x)$ have a common interlacing. $\Box$

In discerning whether the new probabilistic method is valid for a given sequence of polynomials, the above result reduces this problem to verifying the real-rootedness of a continuum of polynomials. While this may seem particularly cumbersome at first glance, Marcus, Spielman and Srivastava have found success in verifying this (in both papers) using a theory of real stable polynomials. This leads to the most technical portions of both papers, and I plan to study these portions more carefully in the future.

— 3. The method of interlacing families —

In practice, the polynomials you are given may not have a common interlacing, but you might be able to organize them into something called an interlacing family.

Definition. Let $S_1,\ldots,S_m$ be finite sets, and use the $m$-tuples in $S_1\times \cdots\times S_m$ to index $|S_1|\cdots|S_m|$ polynomials.
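The real-rootedness hypothesis of the lemma is easy to probe numerically. Below is a small sketch (my own illustration, not from the papers) that checks, over a grid of $\lambda\in[0,1]$, that every convex combination of two quadratics is real-rooted, which by the lemma guarantees a common interlacing:

```python
import numpy as np

# Illustrative pair: f = x^2 - 2x and g = x^2 - 4x + 3.
# By the lemma, if every lambda*f + (1-lambda)*g is real-rooted,
# then f and g have a common interlacing.
f = np.array([1.0, -2.0, 0.0])
g = np.array([1.0, -4.0, 3.0])

def all_real_roots(coeffs, tol=1e-9):
    """Check that every root has (numerically) zero imaginary part."""
    return bool(np.all(np.abs(np.roots(coeffs).imag) < tol))

real_rooted = all(all_real_roots(lam * f + (1 - lam) * g)
                  for lam in np.linspace(0.0, 1.0, 101))
print(real_rooted)
```

A grid check like this is of course only evidence, not a proof; for this pair one can verify by hand that the discriminant of every convex combination is $4(\lambda^2-\lambda+1)>0$.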
In particular, let each $f_{s_1,\ldots,s_m}(x)$ be a real-rooted degree-$n$ polynomial with a positive leading coefficient. For every $k<m$ and $s_1\in S_1,\ldots, s_k\in S_k$, define

$\displaystyle{f_{s_1,\ldots,s_k}(x):=\sum_{s_{k+1}\in S_{k+1}}\cdots\sum_{s_m\in S_m}f_{s_1,\ldots,s_m}(x)}$.

For the degenerate case where $k=0$, we take $f_\emptyset(x)$ to be the sum of all of the $f_{s_1,\ldots,s_m}(x)$'s. We say the polynomials $\{f_{s_1,\ldots,s_m}(x)\}_{s_1,\ldots,s_m}$ form an interlacing family if for every $k<m$ and $s_1\in S_1,\ldots, s_k\in S_k$, the polynomials $\{f_{s_1,\ldots,s_k,t}(x)\}_{t\in S_{k+1}}$ have a common interlacing.

If you think of $f(x)$ as being a random polynomial drawn uniformly from the collection $\{f_{s_1,\ldots,s_m}(x)\}_{s_1,\ldots,s_m}$, then $f_\emptyset(x)$ is just $|S_1|\cdots|S_m|$ times the expected polynomial $\mathbb{E}[f(x)]$. The other polynomials defined above can be thought of in terms of conditional expectation. That is, suppose you are told that the first $k$ indices of $f(x)$ are $s_1,\ldots,s_k$. Then $f_{s_1,\ldots,s_k}(x)$ is a scaled version of the conditional expectation $\mathbb{E}[f(x)\mid s_1,\ldots,s_k]$.

Unfortunately, this is the extent of my intuition with interlacing families. I would like to relate them to eigensteps, for example, but interlacing families are very different because they involve partial sums of polynomials (as opposed to characteristic polynomials of partial sums of rank-1 matrices). Regardless, by the following result, interlacing families are just as useful as polynomials with a common interlacing:

Theorem (Theorem 4.4 in Part I). Suppose $\{f_{s_1,\ldots,s_m}(x)\}_{s_1,\ldots,s_m}$ form an interlacing family. Then there exists an $m$-tuple $(s_1,\ldots,s_m)$ such that the largest root of $f_{s_1,\ldots,s_m}(x)$ is at most the largest root of $f_\emptyset(x)$.

Proof: By assumption, $\{f_t(x)\}_{t\in S_1}$ has a common interlacing, and so by the first lemma, there is a polynomial ($f_{s_1}(x)$, say) whose largest root is at most the largest root of $\displaystyle{\sum_{t\in S_1}f_t(x)=f_\emptyset(x)}$.
Next, we know $\{f_{s_1,t}(x)\}_{t\in S_2}$ has a common interlacing, and so by the first lemma, there is a polynomial ($f_{s_1,s_2}(x)$, say) whose largest root is at most the largest root of $\displaystyle{\sum_{t\in S_2}f_{s_1,t}(x)=f_{s_1}(x)}$, which in turn is no larger than the largest root of $f_\emptyset(x)$. The result follows by induction. $\Box$

At this point, we provide the two-step "method of interlacing families" which MSS use in both papers:

1. Prove that the given polynomials form an interlacing family (using real stable polynomials).
2. Find $f_\emptyset(x)$ and produce an upper bound on its largest root.

— 4. How to prove Kadison-Singer —

It was a little disingenuous of me to suggest in the previous section that the polynomials are "given" to you. Indeed, there is a bit of ingenuity that goes into actually finding the polynomials of interest. For the sake of example (and because it motivated me to write this blog entry in the first place), I will sketch how Marcus, Spielman and Srivastava use the method of interlacing families to prove Kadison-Singer. Actually, they prove an equivalent version of Kadison-Singer called Weaver's conjecture: There exists $r\geq2$, $\eta\geq 2$ and $\theta>0$ such that every finite-dimensional $\eta$-tight frame with frame elements of norm at most 1 can be partitioned into $r$ subcollections of vectors such that the frame operator of each subcollection has operator norm at most $\eta-\theta$. Apparently, it suffices to take $r=2$, $\eta=18$ and $\theta=2$. This follows from the following result:

Theorem (Corollary 1.3 in Part II). Let $u_1,\ldots, u_m$ be column vectors in $\mathbb{C}^d$ such that $\sum_{i=1}^m u_iu_i^*=I$ and $\|u_i\|^2\leq\alpha$ for all $i$. Then there exists a partition $T_1\sqcup T_2=\{1,\ldots,m\}$ such that

$\displaystyle{\bigg\|\sum_{i\in T_j}u_iu_i^*\bigg\|\leq\frac{(1+\sqrt{2\alpha})^2}{2}}$

for both $j=1$ and $j=2$.
Even after seeing the result they proved, it's not obvious what the polynomials are, but we're getting there. Let $v_1,\ldots,v_m$ be independent random column vectors in $\mathbb{C}^{2d}$. Each $v_i$ has $2d$ entries, so think of the top and bottom halves of this vector to define the distribution: With probability $1/2$, the top half is $\sqrt{2}u_i$ and the bottom half is all zeros, and the rest of the time the top half is all zeros and the bottom half is $\sqrt{2}u_i$. In this way, each vector $u_i$ is randomly lifted to a doubly-dimensional vector (qualitatively, this bears some resemblance to the random 2-lifts of graphs described in Part I).

Notice that $\sum_{i=1}^m v_iv_i^*$ is self-adjoint and positive semidefinite, and so its operator norm is precisely the largest root of its characteristic polynomial. Since there are $2^m$ possible realizations of $\sum_{i=1}^m v_iv_i^*$, we have a total of $2^m$ characteristic polynomials to analyze; these are the polynomials we are "given." In Section 4 of Part II, they use real stable polynomials to show that these $2^m$ polynomials form an interlacing family. The most technical portion of Part II (namely, Section 5) is dedicated to bounding the largest root of $f_\emptyset(x)$ in this case, and they manage to bound it by $(1+\sqrt{2\alpha})^2$. With this, the last theorem implies that the largest root of one of the $2^m$ polynomials is at most $(1+\sqrt{2\alpha})^2$. In other words, there exists a subset $T_1\subseteq\{1,\ldots,m\}$ such that

$\displaystyle{2\max\bigg\{\bigg\|\sum_{i\in T_1}u_iu_i^*\bigg\|,\bigg\|\sum_{i\notin T_1}u_iu_i^*\bigg\|\bigg\}=\bigg\|\sum_{i\in T_1}\binom{\sqrt{2}u_i}{0}\binom{\sqrt{2}u_i}{0}^*+\sum_{i\notin T_1}\binom{0}{\sqrt{2}u_i}\binom{0}{\sqrt{2}u_i}^*\bigg\|\leq(1+\sqrt{2\alpha})^2}$.

As such, the result follows by taking $T_2:=\{1,\ldots,m\}\setminus T_1$.
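As a sanity check on the block-norm identity, here is a small numerical illustration (my own construction, not from the papers): a tiny Parseval frame of four vectors in dimension 2, lifted as described, with both sides of the identity computed explicitly.

```python
import numpy as np

# A tiny Parseval frame (real, for simplicity):
# u1 = u2 = (1/sqrt(2), 0), u3 = u4 = (0, 1/sqrt(2)),
# so that sum_i u_i u_i^* = I and alpha = max ||u_i||^2 = 1/2.
u = [np.array([1, 0]) / np.sqrt(2), np.array([1, 0]) / np.sqrt(2),
     np.array([0, 1]) / np.sqrt(2), np.array([0, 1]) / np.sqrt(2)]
assert np.allclose(sum(np.outer(x, x) for x in u), np.eye(2))

# Lift: indices in T1 go to the top half of the doubled space, the rest to the bottom.
T1 = {0, 2}
v = [np.concatenate([np.sqrt(2) * x, np.zeros(2)]) if i in T1
     else np.concatenate([np.zeros(2), np.sqrt(2) * x])
     for i, x in enumerate(u)]

# Left side: 2 * max of the two operator (spectral) norms.
lhs = 2 * max(np.linalg.norm(sum(np.outer(u[i], u[i]) for i in T1), 2),
              np.linalg.norm(sum(np.outer(u[i], u[i])
                                 for i in range(4) if i not in T1), 2))
# Right side: operator norm of the lifted frame operator (block diagonal).
rhs = np.linalg.norm(sum(np.outer(x, x) for x in v), 2)
print(lhs, rhs)   # the two sides agree
```

Here both sides equal 1, comfortably below the MSS bound $(1+\sqrt{2\alpha})^2=4$; the identity holds for any choice of $T_1$ because the lifted frame operator is block diagonal.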
differential geometry help

November 27th 2008, 01:13 AM #1

I'm not sure this is the right forum, I hope the moderator will move it if necessary.

(1) I have to sketch level sets $f^{-1}(c)$, $c=1, 0, -1$, for the function $f(x_1, x_2, x_3)=x_1 x_2 -x_3^2$. I know level sets are $\{(x_1, x_2, x_3) \mid x_1 x_2 -x_3^2=c\}$ for $c=1, 0, -1$, but how do I sketch this?

(2) Let $\phi:\mathbb{R}^3 \rightarrow S^3$ be the inverse of stereographic projection from $S^3-\{(0, 0, 0, 1)\}$ to the equatorial hyperplane $x_4=0$. Show that for every vector $p \in \mathbb{R}^3$ there exists $\lambda(p) \in \mathbb{R}$ such that $\|d\phi(v)\|= \lambda (p) \|v\|, \forall v \in \mathbb{R}^3_p$. I don't even know how to start...

Thank you for your time and help!

Those level sets are surfaces in R3. You can draw them with most math packages. The command usually is called "implicitplot" or similar. Do you have Maple or MuPAD?

(1) You can use a software, but you can also sketch it yourself. For our convenience, let us denote the coordinates by $(x,y,z)$, so that the equations are $z^2=xy-c$ for $c=-1,\,0,\, 1$. These are equations of quadrics. A usual trick in this case is to introduce the new coordinates $X=\frac{1}{2}(x+y)$ and $Y=\frac{1}{2}(x-y)$ (this just corresponds to rotating the axes of the x-y plane by an angle $\frac{\pi}{4}$ and rescaling by $\frac{1}{\sqrt{2}}$), for then we obtain $X^2-Y^2=xy$, and the equations are now $X^2=z^2+Y^2+c$.

First remark is that it depends only on $X$ and $r^2=z^2+Y^2$, so that the surface has a "revolution symmetry" around the axis $(OX)$. So it suffices to find the sketch of a slice (containing $(OX)$), for instance $z=0$, and rotate it around $(OX)$ to generate the whole surface.

Consider $c=0$. Then for $z=0$ we have $X^2=Y^2$, hence $Y=X$ or $Y=-X$. The graph is nothing else but a cone with axis $(OX)$ and angle $\frac{\pi}{2}$.

Consider $c=-1$. Then for $z=0$ we have $X^2=Y^2-1$, hence $Y^2=X^2+1$.
This is the equation of a hyperbola centered at 0, with asymptotes $Y=\pm X$ (one piece above the axis, one piece below). The surface is a hyperboloid of one sheet (i.e. a connected one).

Consider $c=1$. This time you have $Y^2=X^2-1$, which has no solution when $|X|<1$; this is the equation of the same hyperbola as above but rotated by $\frac{\pi}{2}$. Hence we get a hyperboloid of two sheets (i.e. with two connected parts).

(2) A first step is to find an expression for $\phi$. A sketch is pretty helpful. Of course not in $\mathbb{R}^4$, but the situation (and the result) is pretty much the same in any dimension, so you can draw in $\mathbb{R}^2$: draw the unit circle, the line $y=0$ (this is the hyperplane $\{x_4=0\}$ of the text), pick a point $X=(x,0)$ on the line, draw the line from $N=(0,1)$ (the "north pole") to $X$. It intersects the circle at one point, which is $M=\phi(X)$. In my explanation, I will think of $X$ as a point outside the circle (or the sphere), so perhaps it will be clearer if you draw your sketch accordingly. In the following, I will denote the north pole by $N=(0,\ldots,0,1)$.

In a geometric way: remark that $ON=1=OM$ (remember $M=\phi(X)$ where $X$ is in the hyperplane $\{x_n=0\}$ of $\mathbb{R}^n$), hence the triangle $OMN$ is isosceles at $O$, so that the point $O$ projects orthogonally onto the middle of the segment $[M,N]$. Subsequently, $\overrightarrow{NM}$ is twice the projection of $\overrightarrow{NO}$ onto the line $(NX)$, that is to say: $\overrightarrow{NM}=2\left(\overrightarrow{NO}\cdot\frac{\overrightarrow{NX}}{NX}\right)\frac{\overrightarrow{NX}}{NX}$.
Notice that $\overrightarrow{NO}\cdot\overrightarrow{NX}=\overrightarrow{NO}\cdot(\overrightarrow{NO}+\overrightarrow{OX})=1+0=1$ since $\overrightarrow{OX}\perp\overrightarrow{ON}$ and $ON=1$. Finally, we have:

$\phi(X)=M=N+2\frac{\overrightarrow{NX}}{\|\overrightarrow{NX}\|^2}$.

If you prefer not to look at the picture, you can write $\overrightarrow{NM}=\lambda\overrightarrow{NX}$ for some $\lambda\in\mathbb{R}$ (because $M$ lies on the line $(NX)$), and find $\lambda$ by writing $\|\overrightarrow{OM}\|^2=1$ (because $M$ lies on the unit sphere). You'll end up with the same expression of course.

Now, we must differentiate. It would have been tedious with coordinates, but this is easy with vectors, and the conclusion will be very nice. The differential of $\overrightarrow{NX}=\overrightarrow{NO}+\overrightarrow{OX}$ (at $X$, evaluated at $v\in\mathbb{R}^n_X$) is just $\vec{v}$. Then the differential of $\|\overrightarrow{NX}\|^2=\overrightarrow{NX}\cdot\overrightarrow{NX}$ is $2\vec{v}\cdot\overrightarrow{NX}$. So that the differential of $\frac{1}{\|\overrightarrow{NX}\|^2}$ is $-\frac{2\vec{v}\cdot\overrightarrow{NX}}{\|\overrightarrow{NX}\|^4}$ (by the chain rule). Finally,

$d\phi_X(\vec{v})=\frac{2}{\|\overrightarrow{NX}\|^2}\vec{v}-\frac{4(\vec{v}\cdot\overrightarrow{NX})}{\|\overrightarrow{NX}\|^4}\overrightarrow{NX}$.

Let me write this slightly differently:

$d\phi_X(\vec{v})=\frac{2}{\|\overrightarrow{NX}\|^2}\left(\vec{v}-2\left(\vec{v}\cdot\frac{\overrightarrow{NX}}{\|\overrightarrow{NX}\|}\right)\frac{\overrightarrow{NX}}{\|\overrightarrow{NX}\|}\right)$.

What do you recognize in the bigger parentheses? This is $\vec{v}$ minus twice its projection on $(NX)$. In other words, this is the reflection of $\vec{v}$ with respect to the vectorial hyperplane orthogonal to $(NX)$. A reflection preserves the length, so that

$\|d\phi_X(\vec{v})\|=\frac{2}{\|\overrightarrow{NX}\|^2}\|\vec{v}\|$.

This is what you need, with an explicit $\lambda$.
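The conformal factor $\lambda(p)=2/\|\overrightarrow{NX}\|^2=2/(1+\|p\|^2)$ can be checked numerically with a finite-difference approximation of $d\phi$. A quick sketch of my own (with $\phi$ written in coordinates for $\mathbb{R}^3\to S^3\subset\mathbb{R}^4$ and an arbitrary sample point and vector):

```python
import numpy as np

N = np.array([0.0, 0.0, 0.0, 1.0])   # north pole of S^3

def phi(p):
    """Inverse stereographic projection: phi(X) = N + 2*NX/||NX||^2."""
    X = np.append(p, 0.0)             # embed p in the hyperplane x4 = 0
    NX = X - N
    return N + 2 * NX / np.dot(NX, NX)

p = np.array([0.3, -1.2, 0.7])        # arbitrary base point
v = np.array([0.5, 0.1, -0.4])        # arbitrary tangent vector

eps = 1e-6
dphi_v = (phi(p + eps * v) - phi(p)) / eps   # finite-difference d(phi)_p(v)
lam = 2.0 / (1.0 + np.dot(p, p))             # predicted conformal factor
print(np.linalg.norm(dphi_v) / np.linalg.norm(v), lam)   # these should agree
```

(Here $\|\overrightarrow{NX}\|^2=\|p\|^2+1$ since $X$ lies in the hyperplane $x_4=0$, which gives the explicit formula for $\lambda$.)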
That means that $d\phi$ multiplies lengths by a constant factor: it is a similitude (if my translation is correct). In particular, this shows that $\phi$ preserves the angles. But this is another story...

Love the question (and the solution)! Thank you so very much!!! I was offline for two days and until now I didn't see your reply. I'm sure it took a lot of time and effort and I am extremely grateful! I will study it in detail today, and I hope it's okay to post some further questions, if I have some. Thank you!

I can't seem to get the graphs right. I don't have the necessary software, could someone please post the output picture? Thank you again.

I have plotted those surfaces with MuPAD. It seems that there is a transition from "two sheets" to "one sheet". I don't have time to look into the math itself, you will learn more from Laurent here.

I understood everything but this "revolution symmetry" part, and why it suffices to sketch a slice and rotate it around OX. I've googled it, but couldn't understand it. Could you please explain a bit more? And thank you, Andreas!

Perhaps "revolution symmetry" is not the proper term in English, I'm not sure. Anyway, here is what it means: We noticed that the equation depends only on $X$ and on the distance from the $X$-axis to the point $(X,Y,z)$ (this distance equals $r=\sqrt{z^2+Y^2}$). So if a point $(X,Y,z)$ is on the surface, then any point $(X, Y', z')$ where $Y^2+z^2=Y'^2+z'^2$ will lie on the surface as well. And how can we go from the first point to the second one? By rotating around the axis $(OX)$. Or I could say that the circle of center $(X,0,0)$, of radius $r$, and orthogonal to $(OX)$ is a subset of the surface.

Now if we know the intersection of the surface with a slice containing the axis, like the slice $\{z=0\}$, then the whole surface is the union of the previous circles, centered on the axis, orthogonal to the axis, and going through the points of the section.
In other words, the whole surface is obtained by rotating the slice around this axis. You can see this on Andreas' plots, except that the scale is not the same on the three axes, so what we can see on the pictures is not really a revolution symmetry (the sections are ellipses rather than circles). I hope it was clearer...

Much clearer, thank you. :-) I am now required to solve the second one using a given hint, but I will post it in another thread. Thank you for all your help!
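To make the rotate-the-slice picture concrete, here is a small plain-Python sketch (the sample section point is an arbitrary choice) that generates the circles described above from a section in the slice $\{z=0\}$:

```python
import math

def rotate_slice(slice_points, n_angles):
    """From section points (X, Y) in the slice {z = 0}, generate points of
    the surface of revolution around the X-axis: (X, r*cos t, r*sin t),
    with r = |Y|, i.e. the circles centered on the axis described above."""
    surface = []
    for X, Y in slice_points:
        r = abs(Y)
        for k in range(n_angles):
            t = 2.0 * math.pi * k / n_angles
            surface.append((X, r * math.cos(t), r * math.sin(t)))
    return surface

# One section point at distance 2 from the axis yields a circle of
# radius 2 in the plane {X = 1}, orthogonal to the axis:
pts = rotate_slice([(1.0, 2.0)], n_angles=4)
for p in pts:
    print(p)
```

Every generated point keeps the same distance to the $X$-axis as the section point it came from, which is exactly the revolution symmetry being discussed.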
Billiards as dynamical systems

Dynamical systems are defined by a set of prescribed rules to evolve a certain state in time. If the rules involve some random or probability feature (e.g., a lottery) we say that the dynamical system is stochastic. Otherwise the system is called deterministic. Billiards are a very illustrative example of a dynamical system. In this case point particles move in straight lines and experience specular collisions at the boundaries (like light in a mirror). The main general properties of billiard motion are illustrated below in a mushroom billiard (semi-circular stem placed on top of a triangular foot, see references at the end).

Magnetic Billiards

Another kind of dynamical system are the so-called magnetic billiards. In this case the straight-line movement between collisions is replaced by movement on semi-circles. The physical motivation is the movement of electric charges under the influence of a perpendicular magnetic field.

Chaos in billiards

Non-linear dynamical systems typically present chaos. Its most striking characteristic is sensitivity to initial conditions. In the simulation below this effect is illustrated: the same position was chosen for the red and blue balls (no interaction between them), with a difference of 0.5% in the direction of movement (first case), and of 0.5% in the position (second case). One sees that after some bounces the two trajectories are far away from each other. Chaos introduces unpredictability in fully deterministic dynamical systems.

Chaos and Order coexist

Trajectories in billiards may be chaotic (red) or regular (blue), depending on the initial condition. Regular trajectories perform a periodic or quasi-periodic movement and are restricted to the stem of the mushroom. If one waits long enough, every chaotic trajectory visits every point of the mushroom billiard table. This property is a generic property of Hamiltonian systems (to which billiards belong).
We say that such systems have a mixed phase space.

Intermittent Chaos

One interesting problem of nonlinear dynamics which is not completely solved is what the movement of chaotic trajectories in a mixed phase space looks like. When chaotic trajectories approach the regular region (located at the mushroom's stem) they spend a very long time close to it before visiting again the rest of the chaotic region (the foot of the billiard). During this time the movement is very regular, and we thus call the full movement intermittent (it alternates between chaos and regularity). This can be seen also in another paradigmatic example of Hamiltonian systems, the standard map, whose phase space is shown at the right.

How to generate such animations? Making gif animations using Xmgrace.

About mushroom billiards
• L. Bunimovich, Mushrooms and other billiards with divided phase space, Chaos 11 (2001), 802. Kinematics, equilibrium, and shape in Hamiltonian systems: The "lab" effect, Chaos 13 (2003), 903.
• E. G. Altmann, A. E. Motter, and H. Kantz, "Stickiness in mushroom billiards" [Chaos 15, 033105 (2005)] or pre-print [nlin.CD/0502058]. And "Stickiness in Hamiltonian systems: from sharply divided to hierarchical phase space" [Phys. Rev. E 73, 026207 (2006)] or pre-print nlin.CD/0601008.

About chaos in Hamiltonian systems
• E. Ott, Chaos in dynamical systems, Cambridge University Press, Cambridge, 2002.
• A. M. Ozorio de Almeida, Hamiltonian systems: Chaos and quantization, Cambridge University Press, Cambridge, 1992.
• Mackay and Meiss (Eds.), Hamiltonian Dynamical systems, Adam Hilger, Bristol, 1987.

About intermittent chaos and stickiness
• J.D. Meiss, Symplectic maps, variational principles and transport, Rev. Mod. Phys. 64 (1992), 795.
• J. D. Meiss and E. Ott, Markov-tree model of intrinsic transport in Hamiltonian systems, Phys. Rev. Lett. 55 (1985), 2741. Markov-tree model of transport in area-preserving maps, Physica D 20 (1986), 387.
• G. M.
Zaslavsky, Chaos, fractional kinetics, and anomalous transport, Phys. Rep. 371 (2002), 461.
• E. G. Altmann, Intermittent chaos in Hamiltonian dynamical system [Ph.D. Thesis] (2007).

Questions? Write me: edugalt(AT)pks.mpg.de or visit my homepage http://www.pks.mpg.de/~edugalt/.
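As a small aside, the standard map mentioned in the intermittency section is easy to iterate; a minimal sketch (the kick strength K and the initial conditions are arbitrary illustrative choices, not values taken from the references):

```python
import math

TWO_PI = 2.0 * math.pi

def standard_map_orbit(x0, p0, K, n_steps):
    """Iterate the Chirikov standard map on the torus [0, 2*pi)^2:
        p' = p + K * sin(x)  (mod 2*pi),   x' = x + p'  (mod 2*pi).
    K = 0 gives regular (integrable) motion; large K is mostly chaotic."""
    x, p = x0, p0
    orbit = [(x, p)]
    for _ in range(n_steps):
        p = (p + K * math.sin(x)) % TWO_PI
        x = (x + p) % TWO_PI
        orbit.append((x, p))
    return orbit

# Sensitivity to initial conditions (cf. the red/blue balls above):
# two orbits starting 1e-8 apart in the chaotic sea end up far apart.
a = standard_map_orbit(1.0, 1.0, K=5.0, n_steps=50)
b = standard_map_orbit(1.0 + 1e-8, 1.0, K=5.0, n_steps=50)
sep = max(abs(ax - bx) for (ax, _), (bx, _) in zip(a, b))
print(sep)  # the tiny initial difference has blown up
```

With K = 0 the momentum is conserved and the motion is regular; for large K most of the phase space is chaotic, and the run above shows the sensitivity to initial conditions discussed earlier.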
How Many Is A Million?

This activity was selected for the On the Cutting Edge Exemplary Teaching Collection. This page first made public: Feb 9, 2012.

Roger Steinberg, Department of Natural Sciences, Del Mar College

To help students visualize the immensity of geologic time, or even the immensity of just one million years, I have created a very large sheet of paper that contains one million dots. The dots are regularly spaced, 100 per square inch. The dimensions of the sheet of paper are approximately 50 by 200 inches. I made it by cutting and taping together 200 pieces of 8 1/2 by 11 paper, each with 5000 dots. Because of the texture of the dots on the paper, you can kind of see or 'experience' all million of the dots while viewing from one end of the paper. Students both visually and viscerally 'get it'!

How this visualization is used in class: I do this visual demonstration at the end of the semester, just before passing out the Final Exams, emphasizing that although the semester is over, learning never ceases. Plus, it helps relieve some pre-test tension. But you could present it earlier in the semester, if you wish. I first hand out individual copies of the 8 1/2 by 11 pieces of paper containing 5000 dots to each student, and ask them the significance.
Students usually look for a pattern in the dots (which I may actually encourage), until I inform them that it is the number of dots on the page that is significant. I ask them to guess the number of dots on the pages. Usually, someone will eventually correctly guess (or calculate) 5000. Then I select a student to help me unfurl the large sheet of paper, and ask the class to guess (or quickly calculate) the number of dots on it. Most commonly the responses are 4.6 billion (age of the Earth) or 13.7 billion (age of the Universe). Students are always shocked when I reveal that the number is only 1 million. (Yes, I actually cut and taped together 200 pieces of 8 1/2 by 11 paper, each with 5000 dots.) Because I made the dots by hand on the original paper, using a grid that disappeared in the copying, my sheet of 1 million dots has some 'texture' that enables individual dots to be discerned from one end of the paper to the other. Students can see that a million is a very, very, large number, and a million years must be a very long time--yet only the blink of an eye compared to all of geologic time. (Computerized dots, even on a single piece of paper containing just 5000, are dizzying in a way that my hand-drawn paper is not—I know, I've tried it, and it doesn't work for that reason. I've included a pdf of my 5000 dots drawn by hand and 5000 dots drawn by computer with this visualization.) As a final step, I ask the class to calculate the size of the sheet of paper, at the same scale, that would be required to contain 4.6 billion dots. I tell the students that I'm going to start working on creating that really, really large sheet of dot-covered paper after the semester ends, and ask for volunteers to help me with it. (It's not actually a project I intend to work on, of course.) To a first approximation, 4.6 billion is 5000 million. So imagine one of my 8 1/2 by 11 pieces of paper with 5000 dots. 
Now replace each dot with a sheet of paper as large as my 50 by 200 inch sheet containing 1 million dots. If you do the math, you will find that the new sheet of paper with 5 billion dots would be about 1.3 times as wide as a football field and 5.5 times as long! With only 4.6 billion dots, the paper would be the same width and still a bit longer than 5 football fields!

Advice on using this visualization in the classroom: Good luck cutting and taping! Don't try to do it all at once; I spent several weeks on the effort, whenever I had spare time. When using this visual demonstration for classes, make it fun as well as instructive. There are many other potential statements you could make or questions you could ask while presenting it. I sometimes make a point of emphasizing that there are 100 dots per square inch, and if a person is lucky (and picks their parents well), they may live to be the number of dots (years) contained in that one square inch of paper. This certainly puts a human life in perspective compared to all of geologic time, or even compared to just one million years. As another example, you could ask students if they have ever seen a million of anything all at the same time and all in one place. Grains of sand on a beach may come to mind. But unlike my million dots, you really can't see the individual grains of sand on a beach unless you get quite close, so close that your field of view probably no longer encompasses one million grains. That's the beauty of my sheet of paper--you can both visually and viscerally understand that one million of anything is a lot because you can see the individual dots--and if a student had a million dollars, they probably wouldn't be sitting in my classroom!

Visualization and Supporting Materials
5000 Dots by Hand (Acrobat (PDF) 570kB Feb7 12)
5000 Dots by Computer (Acrobat (PDF) 411kB Feb7 12)
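The "do the math" step can be checked in a few lines. One assumption of mine: the 5000 dots on an 8 1/2 by 11 page form a 50-by-100 grid (10 dots per inch each way, which matches the stated 100 dots per square inch):

```python
# Replace every dot of the 5000-dot page by a full 50 x 200 inch
# million-dot sheet, then compare to an American football field.
DOTS_WIDE, DOTS_LONG = 50, 100          # assumed layout of the 5000-dot page
SHEET_W_IN, SHEET_L_IN = 50, 200        # the 1-million-dot sheet, in inches

big_w_ft = DOTS_WIDE * SHEET_W_IN / 12  # width of the 5-billion-dot sheet, feet
big_l_ft = DOTS_LONG * SHEET_L_IN / 12  # length of the 5-billion-dot sheet, feet

FIELD_W_FT, FIELD_L_FT = 160, 300       # football field width and length, feet

print(big_w_ft / FIELD_W_FT)  # ~1.30 fields wide
print(big_l_ft / FIELD_L_FT)  # ~5.56 fields long
```

The result matches the figures in the text: about 1.3 football fields wide and about 5.5 fields long.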
On the Phase Transition Width of K-Connectivity in Wireless Multihop Networks
Xiaoyuan Ta, Guoqiang Mao, Brian D.O. Anderson
IEEE Transactions on Mobile Computing, July 2009 (vol. 8 no. 7), pp. 936-949, doi:10.1109/TMC.2008.170

In this paper, we study the phase transition behavior of k-connectivity (k=1,2,\ldots) in wireless multihop networks where a total of n nodes are randomly and independently distributed following a uniform distribution in the unit cube [0,1]^{d} (d=1,2,3), and each node has a uniform transmission range r(n). It has been shown that the phase transition of k-connectivity becomes sharper as the total number of nodes n increases.
In this paper, we investigate how fast such phase transition happens and derive a generic analytical formula for the phase transition width of k-connectivity for large enough n and for any fixed positive integer k in d-dimensional space, by resorting to a Poisson approximation for the node placement. This result also applies to mobile networks where nodes always move randomly and independently. Our simulations show that to achieve a good accuracy, n should be larger than 200 when k=1 and d=1; and n should be larger than 600 when k\le 3 and d=2, 3. The results in this paper are important for understanding the phase transition phenomenon, and they also provide valuable insight into the design of wireless multihop networks and the understanding of their characteristics.

Index Terms: Phase transition width, k-connectivity, connectivity, wireless multihop networks, transmission range, average node degree, random geometric graph.

Xiaoyuan Ta, Guoqiang Mao, Brian D.O. Anderson, "On the Phase Transition Width of K-Connectivity in Wireless Multihop Networks," IEEE Transactions on Mobile Computing, vol. 8, no. 7, pp. 936-949, July 2009, doi:10.1109/TMC.2008.170
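The sharpening transition described in the abstract can be illustrated with a toy Monte Carlo sketch (my own illustration, not the paper's method: plain k = 1 connectivity of n uniform points in the unit square, d = 2, checked with union-find):

```python
import random

def is_connected(points, r):
    """Union-find check that the random geometric graph on `points`
    (edge iff Euclidean distance <= r) is connected (the k = 1 case)."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= r * r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

def connectivity_prob(n, r, trials=50, seed=0):
    """Monte Carlo estimate of P(connected) for n uniform points in [0,1]^2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        hits += is_connected(pts, r)
    return hits / trials

# Around the critical radius (roughly sqrt(log n / (pi n)) ~ 0.12 for n = 100)
# the probability climbs steeply from near 0 to near 1:
for r in (0.05, 0.12, 0.25):
    print(r, connectivity_prob(100, r))
```

Radii well below the critical value almost never give a connected graph, radii well above almost always do, and the window in between narrows as n grows; that window is the phase transition width the paper quantifies.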
Ornstein isomorphism theorem From Encyclopedia of Mathematics Ergodic theory, the study of measure-preserving transformations or flows, arose from the study of the long-term statistical behaviour of dynamical systems (cf. also Measure-preserving transformation; Flow (continuous-time dynamical system); Dynamical system). Consider, for example, a billiard ball moving at constant speed on a rectangular table with a convex obstacle. The state of the system (the position and velocity of the ball), at one instant of time, can be described by three numbers or a point in Euclidean 3-dimensional space, and its time evolution by a flow on its state space, a subset of 3-dimensional space. The Lebesgue measure of a set does not change as it evolves and can be identified with its probability. One can abstract the statistical properties (e.g., ignoring sets of probability 0) and regard the state-space as an abstract measure space. Equivalently, one says that two flows are isomorphic if there is a one-to-one measure-preserving (probability-preserving) correspondence between their state spaces so that corresponding sets evolve in the same way (i.e., the correspondence is maintained for all time). It is sometimes convenient to discretize time (e.g., look at the flow once every minute), and this is also referred to as a transformation. Measure-preserving transformations (or flows) also arise from the study of stationary processes (cf. also Stationary stochastic process). The simplest examples are independent processes such as coin tossing. The outcome of each coin tossing experiment (the experiment goes on for all time) can be described as a doubly-infinite sequence of heads \(H\) and tails \(T\). The state space is the collection of these sequences. Each subset is assigned a probability. For example, the set of all sequences that are \(H\) at time 3 and \(T\) at time 5 gets probability 1/4. The passage of time shifts each sequence to the left (what used to be time 1 is now time 0). 
(This kind of construction works for all stochastic processes; independence and discrete time are not needed.) The above transformation is called the Bernoulli shift \(B(1/2,1/2)\). If, instead of flipping a coin, one spins a roulette wheel with three slots of probability \(p_1, p_2, p_3\), one would get the Bernoulli shift \(B(p_1, p_2, p_3)\). Bernoulli shifts play a central role in ergodic theory, but it was not known until 1958 whether or not all Bernoulli shifts are isomorphic. A.N. Kolmogorov and Ya.G. Sinai solved this problem by introducing a new invariant for measure-preserving transformations: the entropy, which they took from Shannon's theory of information (cf. also Entropy of a measurable decomposition; Shannon sampling theorem). They showed that the entropy of \(B(p_1,\ldots,p_n)\) is \[-\sum_{i=1}^n p_i \log p_i,\] thus proving that not all Bernoulli shifts are isomorphic. The simplest case of the Ornstein isomorphism theorem (1970), [a3], states that two Bernoulli shifts of the same entropy are isomorphic. A deeper version says that all the Bernoulli shifts are strung together in a unique flow: There is a flow \(B_t\) such that \(B_1\) is isomorphic to the Bernoulli shift \(B(1/2, 1/2)\), and for any \(t_0\), \(B_{t_0}\) is also a Bernoulli shift. (Here, \(B_{t_0}\) means that one samples the flow every \(t_0\) units of time.) In fact, one obtains all Bernoulli shifts (more precisely, all finite entropy shifts) by varying \(t_0\). (There is also a unique Bernoulli flow of infinite entropy.) \(B_t\) is unique up to a constant scaling of the time parameter (i.e., if \(\widetilde{B}_t\) is another flow such that for some \(t_0\), \(\widetilde{B}_{t_0}\) is a Bernoulli shift, then there is a constant \(c\) such that \(B_{ct}\) is isomorphic to \(\widetilde{B}_t\)). The thrust of this result is that at the level of abstraction of isomorphism there is a unique flow that is the most random possible.
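The entropy invariant is easy to evaluate; a quick sketch of the formula \(-\sum_i p_i\log p_i\):

```python
import math

def bernoulli_entropy(probs):
    """Kolmogorov-Sinai entropy of the Bernoulli shift B(p_1, ..., p_n):
    -sum_i p_i * log(p_i) (note the minus sign)."""
    assert abs(sum(probs) - 1.0) < 1e-12
    return -sum(p * math.log(p) for p in probs if p > 0)

h_coin = bernoulli_entropy([0.5, 0.5])            # the coin-tossing shift
h_roulette = bernoulli_entropy([1/3, 1/3, 1/3])   # a three-slot roulette wheel

print(h_coin)      # log 2 ~ 0.6931
print(h_roulette)  # log 3 ~ 1.0986
# Different entropies: B(1/2,1/2) and B(1/3,1/3,1/3) cannot be isomorphic,
# while equal entropies would, by the isomorphism theorem, force an isomorphism.
```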
The above claim is clarified by the following part of the isomorphism theorem: Any flow \(f_t\) that is not completely predictable has as a factor \(B_{ct}\) for some \(c>0\) (the numbers \(c\) involved are those for which the entropy of \(B_{ct}\) is not greater than the entropy of \(f_t\)). The only factors of \(B_t\) are \(B_{ct}\) with \(0<c\le 1\). Here, completely predictable means that all observations on the system are predictable in the sense that if one makes the observation at regular intervals of time (e.g., every hour on the hour), then the past determines the future. (An observation is simply a measurable function \(P\) on the state space; one can think of repeated observations as a stationary process.) It is not hard to prove that "completely predictable" is the same as zero entropy. Also, "\(B_t\) is a factor of \(f_t\)" means that there is a many-to-one mapping \(\phi\) from the state space of \(f_t\) to that of \(B_t\) so that a set and its inverse image evolve in the same way (\(\phi^{-1}(B_t(E))=f_t(\phi^{-1}E)\); this is the same as saying that one gets \(B_t\) by restricting \(f_t\) to an invariant sub-sigma-algebra or by lumping points). Thus, \(B_t\) is, in some sense, responsible for all randomness in flows. The most important part of the isomorphism theorem is a criterion that allows one to show that specific systems are isomorphic to \(B_t\). Using results of Sinai, one can show that billiards with a convex obstacle (described earlier) are isomorphic to \(B_t\). If one perturbs the obstacle, one gets an isomorphic system; if the perturbation is small, then the isomorphism mapping of the state space (a subset of 3-dimensional space) can be shown to be close to the identity. This is an example of "statistical stability", another consequence of the isomorphism theorem, which provides a statistical version of structural stability.
Note that the billiard system is very sensitive to initial conditions and the perturbation completely changes individual orbits. This result shows, however, that the collection of all orbits is hardly changed. Geodesic flow on a manifold of negative curvature is another example where results of D. Anosov allow one to check the criterion and is thus isomorphic to \(B_t\). Here too one obtains stability for small perturbations of the manifold's Riemannian structure. Results of Ya.B. Pesin allow one to check the criterion for any ergodic measure-preserving flow of positive entropy on a 3-dimensional manifold (i.e., not completely predictable). Thus, any such flow is isomorphic to \(B_t\) or the product of \(B_t\) and a rotation. Stability is not known.

[a1] D.S. Ornstein, "Ergodic theory, randomness, and dynamical systems", Yale Univ. Press (1974)
[a2] Ya.G. Sinai (ed.), "Dynamical systems II", Springer (1989)
[a3] V.I. Arnold, A. Avez, "Ergodic problems of classical mechanics", Benjamin (1968)
[a4] M. Smorodinsky, "Ergodic theory. Entropy", Lecture Notes Math., 214, Springer (1970)
[a5] P. Shields, "The theory of Bernoulli shifts", Univ. Chicago Press (1973)
[a6] D.S. Ornstein, B. Weiss, "Statistical properties of chaotic systems", Bull. Amer. Math. Soc., 24 : 1 (1991)
[a7] R. Mañé, "Ergodic theory and differentiable dynamics", Springer (1987)
[a8] D. Rudolph, "Fundamentals of measurable dynamics — Ergodic theory of Lebesgue spaces", Oxford Univ. Press (to appear)

How to Cite This Entry: Ornstein isomorphism theorem. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Ornstein_isomorphism_theorem&oldid=27182

This article was adapted from an original article by D. Ornstein (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Free symmetric monoidal $(\infty,n)$-categories with duals

The reading of (Hopkins-)Lurie's On the Classification of Topological Field Theories (arXiv:0905.0465) suggests that a stronger version of the cobordism hypothesis should hold; namely, that (under suitable technical assumptions) the inclusion of symmetric monoidal $(\infty,n)$-categories with duals into $(\infty,n)$-categories should have a left adjoint, the "free symmetric monoidal $(\infty,n)$-category with duals on a given category $\mathcal{C}$", and that this free object should be given by a suitably $\mathcal{C}$-decorated $(\infty,n)$-cobordism category $Bord_n(\mathcal{C})$. This would be a higher-dimensional generalization of Joyal-Street-Reshetikhin-Turaev decorated tangles. Such an adjunction would in particular give a canonical symmetric monoidal duality-preserving functor $Z: Bord_n(\mathcal{C})\to \mathcal{C}$, which seems to appear underneath the constructions in Freed-Hopkins-Lurie-Teleman's Topological Quantum Field Theories from Compact Lie Groups (arXiv:0905.0731). Yet, I've been unable to find an explicit statement of this conjectured adjointness in the above mentioned papers, and my google searches for "free symmetric monoidal n-category" only produce documents in which this continues with "generated by a single fully dualizable object", as in the original form of the cobordism hypothesis. Is anyone aware of a formal statement or treatment of the cobordism hypothesis from the left adjoint point of view hinted at above?
If you have a symmetric monoidal $(\infty,n)$-category which is can be built by first freely adjoining some objects, then some $1$-morphisms, then some $2$-morphisms, and so forth, up through $n$-morphisms and then stop, then there is an explicit geometric description of the $(\infty,n)$-category you get by up vote 12 "enforcing duality" in terms of manifolds with singularities. This is sketched in one of the sections of the paper you reference. I don't know of a geometric description for what you get down vote if you start with an arbitrary symmetric monoidal $(\infty,n)$-category and then "enforce duality". Yes, that's the section 4.3, Manifolds with singularities, from your paper on the classification of tqft, where singularity data are described. It is precisely that which suggested me the idea that those data were implicitly expressing the cobordism hypothesis as a left adjoint, but then I was unable to find this explicited in some form in the paper, so I became unsure about it. Thanks a lot. – domenico fiorenza Dec 17 '10 at 18:43 p.s. By the way, the existence of a natural morphisms $Bord^{SO}_n(\mathcal{C})\to \mathcal{C}$ for any symmetric monoidal $(\infty,n)$-category with duals seems to simplify the exposition of a few points in your paper with Freed-Hopkins-Teleman. Should you be interested in the details of this, there is an on-going forum discussion on the topic here: math.ntnu.no/~stacey/Mathforge/nForum/… – domenico fiorenza Dec 17 '10 at 18:48 add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology ct.category-theory extended-tqft or ask your own question.
let q(x) and f(x) be relatively prime and elements of F[x] where F is a field

April 27th 2011, 03:42 PM #1
Let q(x) and f(x) be relatively prime elements of F[x], where F is a field, and let q(x)|f(x)g(x). Prove q(x)|g(x).

I don't know where to start, guys. I know their gcd = 1, obviously. Maybe it has something to do with their degrees? I don't know! BIG HELP PLEASE!!

Since gcd(f,q)=1, there are polynomials a,b such that f(x)a(x)+q(x)b(x)=1. You also know something about f(x)g(x), so try to get that product involved somehow. It doesn't seem like I gave a big hint, but that's already most of the solution.

Wow! Thank you very much. With the product, are you hinting at deg(f(x)g(x)) = m+n, where m = deg(f(x)) and n = deg(g(x))? Is that helpful at all?

You don't really need the degrees here. Try multiplying the equation f(x)a(x)+q(x)b(x)=1 through by g(x). Now q(x) divides both terms on the left side, so what can you conclude?
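To see the hint carried through, here is a quick check of the Bezout argument in SymPy. The polynomials q, f, g below are hypothetical examples chosen so the hypotheses hold; this is an illustration of the proof's steps, not the required formal proof itself:

```python
from sympy import symbols, gcdex, rem, expand

x = symbols('x')

# Hypothetical example over Q: gcd(q, f) = 1 and q | f*g.
q = x + 1
f = x + 2
g = (x + 1)*(x - 3)

# Since gcd(f, q) = 1, the extended Euclidean algorithm gives
# polynomials a, b with f*a + q*b = 1.
a, b, h = gcdex(f, q, x)
assert h == 1 and expand(f*a + q*b) == 1

# Multiply the identity through by g:  (f*g)*a + q*(b*g) = g.
# q divides f*g (the hypothesis) and q divides q*(b*g),
# so q divides the sum on the left, i.e. q | g.
assert rem(f*g, q, x) == 0   # hypothesis: q | f*g
assert rem(g, q, x) == 0     # conclusion: q | g
```

The same two-line argument works verbatim over any field F, since F[x] is a Euclidean domain.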
Re: st: How to insert the name of a variable into a matrix automatically
From: Nick Cox <njcoxstata@gmail.com>
To: statalist@hsphsun2.harvard.edu
Date: Mon, 10 Sep 2012 13:36:39 +0100

Stata won't support the inclusion of textual information in a matrix. Rather, you mean you want automation of row names, which can be thought of as marginal labels. This is exactly equivalent to automating column names.

matrix cancertypes = J(2,2,.)
matrix colnames cancertypes = N median_age
matrix rownames cancertypes = head lowgi
local row = 1
qui foreach var of varlist head lowgi {
    count if `var'==1
    matrix cancertypes[`row',1]=r(N)
    summarize AGE if `var'==1, detail
    matrix cancertypes[`row',2]=r(p50)
    local row = `row'+1
}
matrix list cancertypes

On Sun, Sep 9, 2012 at 4:58 PM, yaacov lawrence <yaacovla@gmail.com> wrote:
> For a medical epidemiological project looking at cancer incidence I
> should like Stata to create a descriptive table of the variables as a
> matrix that I will then export to Excel. There will be one row for
> each variable. The various columns include number of observations,
> median age, etc. I am using the "foreach" command.
> All works well.
> The only problem is that I should like the first cell of each row to
> contain the variable name, and I cannot see how to do this. I realize
> that I could do this manually using "matrix rownames" but I would
> rather do it automatically.
> example of the code below:
> thank you!!
> yaacov
> -------------
> matrix cancertypes = J(10,4,.)
> matrix colnames cancertypes = "cancer_type" N median_age "%white"
> local rownumber 1
> foreach var of varlist head lowgi {
> count if `var'==1
> matrix cancertypes[`rownumber',2]=r(N)
> summarize AGE if `var'==1, detail
> matrix cancertypes[`rownumber',3]=r(p50)
> local rownumber=`rownumber'+1
> }
> matrix list cancertypes
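For comparison only (not part of the Stata thread): the same labelled summary table falls out almost for free in pandas, since row labels travel with the data. The patient data below is entirely made up:

```python
import pandas as pd

# Hypothetical patient data: one 0/1 indicator column per cancer site.
df = pd.DataFrame({
    'AGE':   [61, 70, 55, 66, 59, 72],
    'head':  [1, 1, 0, 0, 1, 0],
    'lowgi': [0, 0, 1, 1, 0, 1],
})

rows = {}
for var in ['head', 'lowgi']:
    sub = df.loc[df[var] == 1, 'AGE']
    rows[var] = {'N': int(sub.size), 'median_age': float(sub.median())}

# The dict keys become the row labels, which is the point of the question.
table = pd.DataFrame.from_dict(rows, orient='index')
```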
Roslindale Geometry Tutor

Find a Roslindale Geometry Tutor

...I have also found that study skills and organization play a large role in students' academic success, and that certain study techniques are particularly useful for math and physics. I like to check my students' notebooks and give them guidelines for note-taking and studying. Here too I stress a...
9 Subjects: including geometry, calculus, physics, algebra 1

...I've taught various group classes at small studios. I have also worked with wedding couples in preparation for their First Dance. I enjoy choreographing.
13 Subjects: including geometry, English, writing, algebra 1

...I'm looking forward to teaching ESL and working with students on a regular basis. My major is Chemistry, but I have good experience in teaching Russian as well. I'm a native Russian speaker; Russian is my first language.
23 Subjects: including geometry, English, biology, calculus

...I am a very good writer. I did well in college, and got a 5 on my AP English exam. I also write a lot for my career in insurance.
90 Subjects: including geometry, chemistry, reading, English

...I've been successfully tutoring students for more than 8 years. I teach a wide variety of subjects, and will work with anyone from grades 5 through adult. I specialize in middle and high school math (Algebra 2 is my favorite) and standardized test prep: PSAT, SAT, GED, ACT, etc.
41 Subjects: including geometry, chemistry, reading, English
Mike Sullivan’s time-tested approach focuses students on the fundamental skills they need for the course: preparing for class, practicing with homework, and reviewing the concepts. In the Ninth Edition, Algebra and Trigonometry has evolved to meet today’s course needs, building on these hallmarks by integrating projects and other interactive learning tools for use in the classroom or online. New Internet-based Chapter Projects apply skills to real-world problems and are accompanied by assignable MathXL exercises to make it easier to incorporate these projects into the course. In addition, a variety of new exercise types, Showcase examples, and video tutorials for MathXL exercises give instructors even more flexibility, while helping students build their conceptual understanding.

CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book.

Table of Contents

R. Review
R.1 Real Numbers
R.2 Algebra Essentials
R.3 Geometry Essentials
R.4 Polynomials
R.5 Factoring Polynomials
R.6 Synthetic Division
R.7 Rational Expressions
R.8 nth Roots; Rational Exponents

1. Equations and Inequalities
1.1 Linear Equations
1.2 Quadratic Equations
1.3 Complex Numbers; Quadratic Equations in the Complex Number System
1.4 Radical Equations; Equations Quadratic in Form; Factorable Equations
1.5 Solving Inequalities
1.6 Equations and Inequalities Involving Absolute Value
1.7 Problem Solving: Interest, Mixture, Uniform Motion, and Constant Rate Jobs Applications

2. Graphs
2.1 The Distance and Midpoint Formulas
2.2 Graphs of Equations in Two Variables; Intercepts; Symmetry
2.3 Lines
2.4 Circles
2.5 Variation

3. Functions and Their Graphs
3.1 Functions
3.2 The Graph of a Function
3.3 Properties of Functions
3.4 Library of Functions; Piecewise-defined Functions
3.5 Graphing Techniques: Transformations
3.6 Mathematical Models: Building Functions

4. Linear and Quadratic Functions
4.1 Linear Functions and Their Properties
4.2 Linear Models: Building Linear Functions from Data
4.3 Quadratic Functions and Their Properties
4.4 Building Quadratic Models from Data
4.5 Inequalities Involving Quadratic Functions

5. Polynomial and Rational Functions
5.1 Polynomial Functions and Models
5.2 Properties of Rational Functions
5.3 The Graph of a Rational Function
5.4 Polynomial and Rational Inequalities
5.5 The Real Zeros of a Polynomial Function
5.6 Complex Zeros: Fundamental Theorem of Algebra

6. Exponential and Logarithmic Functions
6.1 Composite Functions
6.2 One-to-One Functions; Inverse Functions
6.3 Exponential Functions
6.4 Logarithmic Functions
6.5 Properties of Logarithms
6.6 Logarithmic and Exponential Equations
6.7 Financial Models
6.8 Exponential Growth and Decay Models; Newton’s Law; Logistic Growth and Decay Models
6.9 Building Exponential, Logarithmic, and Logistic Models from Data

7. Trigonometric Functions
7.1 Angles and Their Measure
7.2 Right Triangle Trigonometry
7.3 Computing the Values of Trigonometric Functions of Acute Angles
7.4 Trigonometric Functions of Any Angle
7.5 Unit Circle Approach; Properties of the Trigonometric Functions
7.6 Graphs of the Sine and Cosine Functions
7.7 Graphs of the Tangent, Cotangent, Cosecant, and Secant Functions
7.8 Phase Shift; Sinusoidal Curve Fitting

8. Analytic Trigonometry
8.1 The Inverse Sine, Cosine, and Tangent Functions
8.2 The Inverse Trigonometric Functions (continued)
8.3 Trigonometric Equations
8.4 Trigonometric Identities
8.5 Sum and Difference Formulas
8.6 Double-Angle and Half-Angle Formulas
8.7 Product-to-Sum and Sum-to-Product Formulas

9. Applications of Trigonometric Functions
9.1 Applications Involving Right Triangles
9.2 Law of Sines
9.3 Law of Cosines
9.4 Area of a Triangle
9.5 Simple Harmonic Motion; Damped Motion; Combining Waves

10. Polar Coordinates; Vectors
10.1 Polar Coordinates
10.2 Polar Equations and Graphs
10.3 The Complex Plane; DeMoivre’s Theorem
10.4 Vectors
10.5 The Dot Product

11. Analytic Geometry
11.1 Conics
11.2 The Parabola
11.3 The Ellipse
11.4 The Hyperbola
11.5 Rotation of Axes; General Form of a Conic
11.6 Polar Equations of Conics
11.7 Plane Curves and Parametric Equations

12. Systems of Equations and Inequalities
12.1 Systems of Linear Equations: Substitution and Elimination
12.2 Systems of Linear Equations: Matrices
12.3 Systems of Linear Equations: Determinants
12.4 Matrix Algebra
12.5 Partial Fraction Decomposition
12.6 Systems of Nonlinear Equations
12.7 Systems of Inequalities
12.8 Linear Programming

13. Sequences; Induction; The Binomial Theorem
13.1 Sequences
13.2 Arithmetic Sequences
13.3 Geometric Sequences; Geometric Series
13.4 Mathematical Induction
13.5 The Binomial Theorem

14. Counting and Probability
14.1 Sets and Counting
14.2 Permutations and Combinations
14.3 Probability

Appendix: Graphing Utilities
1. The Viewing Rectangle
2. Using a Graphing Utility to Graph Equations
3. Using a Graphing Utility to Locate Intercepts and Check for Symmetry
4. Using a Graphing Utility to Solve Equations
5. Square Screens
6. Using a Graphing Utility to Graph Inequalities
7. Using a Graphing Utility to Solve Systems of Linear Equations
8. Using a Graphing Utility to Graph a Polar Equation
9. Using a Graphing Utility to Graph Parametric Equations

Purchase Info

With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.

Buy Access: Algebra and Trigonometry, CourseSmart eTextbook, 9th Edition
Format: Safari Book
$89.99 | ISBN-13: 978-0-321-71672-9
Run The Strongly Connected Components Algorithm... | Chegg.com

Run the strongly connected components algorithm on the following directed graphs G. When doing DFS on G^R: whenever there is a choice of vertices to explore, always pick the one that is alphabetically first. In each case answer the following questions.

(a) In what order are the strongly connected components (SCCs) found?
(b) Which are source SCCs and which are sink SCCs?
(c) Draw the “metagraph” (each meta-node is an SCC of G).
(d) What is the minimum number of edges you must add to this graph to make it strongly connected?
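The exercise's graphs aren't reproduced here, but the algorithm being asked about, the two-pass DFS (Kosaraju-style) SCC algorithm, can be sketched as below; the tie-breaking is alphabetical as the problem specifies, and the example graph in the test is hypothetical, not one of the book's figures:

```python
def sccs(graph):
    """graph: dict vertex -> set of successors. Returns SCCs in order found."""
    # Pass 1: DFS on the reverse graph G^R, recording finish (post) order.
    rev = {v: set() for v in graph}
    for u, outs in graph.items():
        for v in outs:
            rev[v].add(u)

    finished, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in sorted(rev[u]):          # alphabetical tie-breaking
            if v not in seen:
                dfs1(v)
        finished.append(u)
    for u in sorted(rev):
        if u not in seen:
            dfs1(u)

    # Pass 2: DFS on G in decreasing finish time; each tree is one SCC.
    comps, seen2 = [], set()
    def dfs2(u, comp):
        seen2.add(u)
        comp.append(u)
        for v in sorted(graph[u]):
            if v not in seen2:
                dfs2(v, comp)
    for u in reversed(finished):
        if u not in seen2:
            comp = []
            dfs2(u, comp)
            comps.append(sorted(comp))
    return comps
```

With this convention (pass 1 on G^R), the first component output is a sink SCC of G, which is one way to start answering parts (a) and (b).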
Bowdon, GA Trigonometry Tutor

Find a Bowdon, GA Trigonometry Tutor

I am attending Jacksonville State University to complete my degree in secondary math education. I have tutored many students in algebra who are high school students and students in college. I enjoy teaching math and helping others learn math.
9 Subjects: including trigonometry, calculus, geometry, algebra 1

...All of my teaching includes reading, English, and language arts. I have a master's degree in education and have taught students how to study and prepare for all types of exams, in all subjects, including for state and national exams. In addition, I have tutored hundreds of students preparing for the ACT and SAT entrance exams.
47 Subjects: including trigonometry, chemistry, English, physics

...I am also available for teaching Spanish, as well as almost any subject for lower grades. I am fun and outgoing and I like to make learning fun! In high school, I took a course preparing for educational fields.
40 Subjects: including trigonometry, reading, Spanish, geometry

I love math and I love helping others learn math! I am a certified teacher with 8 years of experience teaching math in public high schools. I have taught Pre-Algebra, Algebra 1, Algebra 2, Geometry, and Discrete Math and am willing to tutor students in other math classes as well.
10 Subjects: including trigonometry, geometry, algebra 2, ASVAB

...During my 20 years of tutoring, I have always emphasized both arithmetic and logical reasoning skills. My unique method of teaching makes it easier for my students to understand and solve almost any type of math problem. Moreover, I've been extremely successful in teaching my students to always utilize a step-wise approach to problem-solving.
57 Subjects: including trigonometry, reading, chemistry, English
sampling strategy

August 6th 2010, 12:45 AM #1
I have a map of presence-absence (raster 1 or 0). The area where the phenomenon is present is much smaller than the area where it is absent. What I would like to do is run a binomial regression using multiple independent variables and presence-absence as the dependent variable. To do this I need to sample the whole area.

My question is about sample sizes. I have run an algorithm to give me a representative sample size (9000 points for the area is statistically robust). However, I am struggling to figure out how these points should be distributed. I just took 9000 random points over the whole area at first; however, this meant that only 70 points fell within the 'presence' area. When I subsequently run regression analysis, it doesn't seem like there is enough data in the presence area to really get a handle on relationships.

So should I instead weight the points between the two distributions by area, or should I take 9000 points from the presence area and 9000 points from the absence area, thus getting a representative sample of each area? Any help would be appreciated!!
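One standard answer (not from the thread itself) is the case-control / rare-events approach: sample the two classes separately, then correct the fitted logistic intercept for the distorted base rate. A rough NumPy sketch with a made-up raster, following the prior-correction formula of King & Zeng (2001):

```python
import numpy as np

rng = np.random.default_rng(0)
raster = (rng.random((200, 200)) < 0.02).astype(int)   # rare 'presence' class

pres = np.argwhere(raster == 1)        # (row, col) of presence cells
absn = np.argwhere(raster == 0)

n = 500                                # per-class sample size (hypothetical)
pres_s = pres[rng.choice(len(pres), size=min(n, len(pres)), replace=False)]
abs_s = absn[rng.choice(len(absn), size=n, replace=False)]

# If a logistic regression is fit on this balanced sample, subtract the
# correction below from the fitted intercept to recover probabilities
# calibrated to the true prevalence on the full map.
tau = raster.mean()                                  # true prevalence
ybar = len(pres_s) / (len(pres_s) + len(abs_s))      # sample prevalence
correction = np.log((ybar / (1 - ybar)) * ((1 - tau) / tau))
```

The slope coefficients are unaffected by this kind of class-balanced sampling; only the intercept needs the shift.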
ppt for infinite dimension vector space

Posted by: computer science crazy, Tuesday 07th of April 2009 01:05:21 PM

...w finite matrices over the underlying field. In section II we consider the conjugate space of an infinite dimensional vector space and define its dimension and cardinality and will s...
In section II we consider the conjugate space of an infinite dimensional vector space and define its dimension and cardinality and will s..................[:=> Show Contents <=:] Posted by: computer science crazy real vector space example, dual vector space example , vector space exercises, vector space explained , vector space examples, vector space distance , vector space direct Created at: Tuesday 07th sum, vector space dual , vector space dimension, vector space definition , pivot vector space approach, proof of vector space axioms , a vector space approach to geometry, of April 2009 01:05:21 vector space analysis , vector space algebra, vector space applications , vector space and field, vector space and subspace , vector space axioms proof, vector space axioms PM , Vector Space, Space , Vector, Dimensional , Infinite, seminar topics for vector , infinite dimensional vector spaces seminar, infinite dimension vector space , ppt for Last Edited Or Replied infinite dimension vector space, at :Tuesday 07th of April 2009 01:05:21 PM mensional vector spaces over a field. Many of the results that are valid in finite dimensional vector spaces can very well be extended to infinite dimensional cases sometimes with slight modifications in definitions. But there are certain results that do not hold in infinite dimensional cases. Here we consolidate some of those results and present it in a readable form. We present the whole work in three chapters. All those concepts in vector spaces and linear algebra which we require in the sequel are included in the first chapter. In section I of chapter II we discuss the fundamental conce..................[:=> Show Contents <=:] Cloud Plugin by Remshad Medappil
Wilderness generation using Voronoi diagrams

Reposted from an old post on rec.games.roguelike.development.

I'm starting to think about re-doing wilderness generation for Unangband, a variant I've developed from Angband. I've spent considerable amounts of time developing a complex, dynamic terrain system, and whilst this is fun in the dungeon, I think for the player to fully appreciate it, I'm going to have to include a more complex wilderness than Unangband currently features.

Unangband currently has a fixed graph-based wilderness travel. Each wilderness location is the same size as a dungeon, which contains one or more terrain types generated by a drunken walk type algorithm, over a background terrain, and may feature dungeon rooms connected by trails, which are guaranteed to be traversable. To travel between wilderness areas, you press '<' on the surface level, and can travel to a number of 'adjacent' locations, which increases as you defeat the boss monster (Angband unique) featured at the bottom of the dungeons. At some points, the graph is one way - you can't travel back to the original location once you get there. Some wilderness locations act as towns, one-screen sized locations containing a variety of shops. Dungeons under a wilderness location are distinguished by the types of locations, terrain types on the level generated by the same drunken walk algorithm as surface terrain, the background terrain through which corridors are tunnelled, and boss monster at the bottom of the dungeon, which is always fixed.

Unangband's current terrain system has received praise for being focused and simple. You don't spend forever wandering around a wilderness level - just until you find a set of stairs down. The wilderness levels are just dungeons with open space instead of walls around the terrain (they are still surrounded by permanent walls at the edges). Unfortunately, the wilderness graph is not particularly easy to understand.
Players complain that wilderness locations appear and disappear randomly as they move (a consequence of the implementation of the graph). In particular, it's hard to make it clear that some graph locations are 'one-way'. It's also hard to make clear that some locations have no dungeon, just a surface boss monster, who, for play-balance reasons, only appears at night.

I've looked at a number of wilderness implementations, and have focused on Zangband's in particular (at least the new wilderness implementation). The terrain in Zangband's wilderness system, for want of a better word, is beautiful, and something I aspire to. However, the Zangband terrain generation system suffers from a number of serious problems, in particular, that the algorithms used are overly complicated, there is not enough incentive to travel between wilderness locations (I know the Zangband development team are improving this) and that the difficulty level is too high for starting-out characters and exploitable for higher level characters.

Recently, I ran across an indirect reference to Voronoi diagrams in this newsgroup, and realised that this will provide the solution to my issues with the Zangband wilderness. I'll run through what I intend to do with them, and raise some questions that hopefully someone here can point me in the direction of.

The new Unangband wilderness will be divided up into regions, by randomly generating sufficient points on a large wilderness map (4096 x 4096) and using a Voronoi diagram to divide the space into the regions (each map grid will be associated with the region of the nearest point generated). Take the Delaunay triangulation of the points and select a vertex (region point) and give it a difficulty 0. Give its neighbours a difficulty 1, and so on, 'flood-filling' the graph until all vertices are assigned difficulties. Then normalise these difficulties against the desired difficulty range of the whole wilderness.
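The two steps just described, nearest-point (Voronoi) region assignment and breadth-first difficulty flood-fill over the Delaunay neighbour graph, can be sketched as follows. This uses Python with SciPy rather than the C the variant itself uses, and the sizes and names are illustrative only:

```python
import numpy as np
from collections import deque
from scipy.spatial import Delaunay, cKDTree

rng = np.random.default_rng(1)
SIZE, N_REGIONS = 256, 40            # toy sizes; the post proposes 4096x4096
points = rng.random((N_REGIONS, 2)) * SIZE

# Voronoi assignment: each map grid belongs to its nearest region point.
tree = cKDTree(points)
ys, xs = np.mgrid[0:SIZE, 0:SIZE]
_, region = tree.query(np.column_stack([ys.ravel(), xs.ravel()]))
region = region.reshape(SIZE, SIZE)  # region index for every map grid

# Region adjacency comes straight from the Delaunay triangulation.
tri = Delaunay(points)
neighbours = {i: set() for i in range(N_REGIONS)}
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (a, c)):
        neighbours[u].add(v)
        neighbours[v].add(u)

# Flood-fill difficulty: 0 at a chosen start vertex, +1 per graph step (BFS).
start = 0
difficulty = {start: 0}
queue = deque([start])
while queue:
    u = queue.popleft()
    for v in neighbours[u]:
        if v not in difficulty:
            difficulty[v] = difficulty[u] + 1
            queue.append(v)
```

Normalising `difficulty` to the desired range is then a single linear rescale over its values.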
The region data structure can then be expanded as per the following pseudo-structure:

point (x,y)
int difficulty_level
text region_name
dungeon_type dungeon
race_type monster
terrain_type terrain

Selecting a terrain type can be done using a variety of methods. I quite like the Zangband 'three-space' terrain parameters: law (which corresponds to difficulty_level above), population and altitude. It's possible to generate this fractally by picking one or more random low and high points on the Delaunay triangulation, and interpolating between them (perhaps using a fractal algorithm, and/or weighting the differences based on the distance between region points). You could pick points in a manner that guarantees that all possible terrain is available on each map, which Zangband cannot currently do. It's also more efficient than the Zangband method, because once you've generated this once, you can store the selected terrain type and throw away the generation data.

Once you have the terrain type for the region, the actual terrain in each map grid can be determined in a number of ways. I suspect I'll have the following:

1. Open regions, with randomly / fractally placed point terrain. These should be selected so that terrain types that are likely to be adjacent have some terrain types in common, so terrain transitions between two terrains without creating a hard edge.
2. Building regions - as open regions above, but a rectangular feature is placed within the bounds of the region.
3. Field-type regions (as in farmers' fields), which are filled with passable terrain but have an impassable edge, with a gate/bridge placed at a random location on the edge. If two field regions are next to each other, the region with the lower difficulty level does not place the edge.
4. Hill-type regions, where the height is calculated as the distance from the point to the edge of the region, and the slope of the height change determines the terrain (flat, slope, or wall).
5.
Mountain / lake-type regions, which are filled up with an impassable terrain up to a hard / fractal distance from the edge.
6. Completely impassable regions.

I will use the Delaunay triangulation graph to ensure every passable node is accessible. It should be possible to 'merge' adjacent regions of the same type, so that any regions that share a common edge type / centre type do not generate edge terrain when next to each other. Because I will be storing regions, it should be easier to replace region types, in the event, for instance, I want to have a huge building on a magma island surrounded by lava that takes up multiple regions' space.

Of course, I'm going to need some fairly good programming to get the above done, but I can see how to proceed. Firstly, can someone point me in the direction of a fast integer-based look-up algorithm to determine which region a map-grid is in? Ideally, I need a data structure that supports searching the minimum number of points. Ideally in C.

I also need a fast algorithm to draw the terrain on a subset of the whole map. Obviously, I don't want to have every map-grid in memory. I'm thinking of adopting Zangband's in-memory representation of 16x16 grid patches, which can be generated quickly when scrolling into the map area, and destroyed as the player moves away. Alternatively, I'll have to have a large scrolling window looking down on the total map and generate larger areas as the player moves. So I need a fast drawing algorithm for each of the above terrain types, one that doesn't create gaps. So some kind of rasterisation algorithm for a 2-dimensional Voronoi diagram please, and a good suggestion as to what memory management technique to adopt (patches / scrolling window / etc.). Also, ideally in C.

Finally, any suggested strategies for generating and representing in memory the Delaunay triangulation.
This is only required for the initial region generation - however, I may also use it to determine difficulties for overland quests (by finding the highest difficulty of the regions required to cross to travel to the quest location).

3 comments:

James McNeill said...
I like your idea of flood-filling difficulty level through the regions, but be aware that this may not do exactly what you want on the edges of your map, because distant nodes can end up as neighbors in the Delaunay triangulation. (Here's an illustration.) You might need to trim skinny triangles off the outer perimeter or something like that.

James McNeill said...
Come to think of it, you might get better luck just scaling difficulty with distance from the player start point, as long as the landscape is fairly open. In general I'd think you would want to scale difficulty by travel distance/difficulty. If it takes special equipment to get up into mountains, for instance, the encounters up there might be more difficult even if it's close to an easy region, since the player can't get there until they've acquired crampons and ice axes.

I'm looking over some code I wrote to generate Delaunay triangulations years ago. Here's a sketch of one way to do it:

Create a simple initial triangulation that encloses all of the points. This could be a pair of triangles forming a rectangle, for instance. Ensure it is Delaunay (that is, the circle through any three points does not contain any other points). For a rectangle this property always holds.

Insert the points into the triangulation one at a time, ensuring the empty-circumcircle property still holds after each one:
1. Find the existing triangle containing the new point and split it into three new ones such that the new point is now part of the triangulation.
2. Restore the empty circumcircle property by turning edges around the new point.
When you're done you can strip off the outer triangles connected to the original rectangle corner points, if you like, to get a triangulation of only the points of interest. The Wikipedia article says pretty much the same stuff. For actual implementation you'll find a half-edge data structure pretty useful, I would think.

Another way is to come up with some triangulation, any triangulation, of the points. Then go through and turn edges wherever you find a point inside a circumcircle. If you attack them in an organized fashion, you can get the triangulation into Delaunay shape fairly quickly.

Andrew Doull said...

Thanks for the advice. I had in the back of my head similar ideas to your mountain suggestion, where difficult-to-traverse locations could separate locations of extreme difficulty (e.g. mountains, seas, walls of fire or ice, etc.). I also quite like the idea of being able to choose which path to take based on a difficulty slope, e.g. the adjacent locations are either +1 or +2 difficulty. That way you can navigate a path of +2 difficulty increases if you find a powerful item that boosts your overall survival chances. It'd be important to distinguish the +2 difficulty slopes (dead bodies, piles of skulls, warning signs, etc.) of course. This is equivalent to diving down multiple stairs quickly.
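The difficulty flood fill the comments keep coming back to can be sketched as a breadth-first search over the region adjacency graph that the triangulation yields: each hop away from the start region raises the difficulty by one. Everything below (the fixed-size adjacency matrix, the names) is an assumption for illustration, since the post shows no code:

```c
#include <stddef.h>

#define MAX_REGIONS 64

/* adj[i][j] is nonzero when regions i and j share a Delaunay edge.
 * Fills difficulty[] with the hop distance from the start region;
 * unreachable regions are left at -1. */
void flood_difficulty(unsigned char adj[MAX_REGIONS][MAX_REGIONS],
                      size_t n, size_t start, int difficulty[MAX_REGIONS])
{
    size_t queue[MAX_REGIONS];
    size_t head = 0, tail = 0;

    for (size_t i = 0; i < n; i++)
        difficulty[i] = -1;          /* -1 marks "not reached yet" */

    difficulty[start] = 0;
    queue[tail++] = start;

    while (head < tail) {
        size_t cur = queue[head++];
        for (size_t j = 0; j < n; j++) {
            if (adj[cur][j] && difficulty[j] < 0) {
                difficulty[j] = difficulty[cur] + 1;
                queue[tail++] = j;
            }
        }
    }
}
```

The +1 / +2 difficulty-slope idea would replace the uniform `+ 1` with a per-edge cost, turning the BFS into Dijkstra's algorithm over the same graph.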
Convert g/cm^3 to kg/m^3 - Conversion of Measurement Units

How many g/cm^3 in 1 kg/m^3? The answer is 0.001. We assume you are converting between gram/cubic centimetre and kilogram/cubic metre. The SI derived unit for density is the kilogram/cubic metre: 1 g/cm^3 is equal to 1000 kilogram/cubic metre. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between grams/cubic centimetre and kilograms/cubic metre.
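The factor of 1000 works out because a kilogram is 1000 g while a cubic metre is 1,000,000 cm^3, so the ratio gains a net factor of 1,000,000 / 1000 = 1000. As a trivial sanity check (helper names are illustrative):

```c
/* 1 g/cm^3 = (0.001 kg) / (0.000001 m^3) = 1000 kg/m^3 */
double g_per_cm3_to_kg_per_m3(double g_per_cm3)
{
    return g_per_cm3 * 1000.0;
}

/* Inverse direction: 1 kg/m^3 = 0.001 g/cm^3 */
double kg_per_m3_to_g_per_cm3(double kg_per_m3)
{
    return kg_per_m3 / 1000.0;
}
```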
Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications

Over the past decade, the field of finite-dimensional variational inequality and complementarity problems has seen a rapid development in its theory of existence, uniqueness and sensitivity of solution(s), in the theory of algorithms, and in the application of these techniques to transportation planning, regional science, socio-economic analysis, energy modeling, and game theory. This paper provides a state-of-the-art review of these developments as well as a summary of some open research topics in this growing field.

The research of this author was supported by the National Science Foundation Presidential Young Investigator Award ECE-8552773 and by the AT&T Program in Telecommunications Technology at the University of Pennsylvania. The research of this author was supported by the National Science Foundation under grant ECS-8644098.
Pang, “Newton's method for B-differentiable equations,” to appear in:Mathematics of Operations Research. 206. J.S. Pang, “Solution differentiability and continuation of Newton's method for variational inequality problems over polyhedral sets,” to appear in:Journal of Optimization Theory and 207. J.S. Pang and D. Chan, “Iterative methods for variational and complementarity problems,”Mathematical Programming 24 (1982) 284–313. 208. J.S. Pang and J.M. Yang, “Parallel Newton methods for the nonlinear complementarity problem,”Mathematical Programming (Series B) 42 (1988) 407–420. 209. J.S. Pang and C.S. Yu, “Linearized simplicial decomposition methods for computing traffic equilibria on networks,”Networks 14 (1984) 427–438. 210. P.V. Preckel, “Alternative algorithms for computing economic equilibria,”Mathematical Programming Study 23 (1985) 163–172. 211. P.V. Preckel, “A modified Newton method for the nonlinear complementarity problem and its implementation,” paper presented at the ORSA/TIMS Meeting, Miami Beach, FL, October 1986. 212. Y. Qiu and T.L. Magnanti, “Sensitivity analysis for variational inequalities defined on polyhedral sets,”Mathematics of Operations Research 14 (1989) 410–432. 213. Y. Qiu and T.L. Magnanti, “Sensitivity analysis for variational inequalities,” Working Paper OR 163-87, Operations Research Center, M.I.T. (Cambridge, MA, 1987). 214. A. Reinoza,A Degree For Generalized Equations. Ph.D. dissertation, Department of Industrial Engineering, University of Wisconsin (Madison, WI, 1979). 215. A. Reinoza, “The strong positivity conditions,”Mathematics of Operations Research 10 (1985) 54–62. 216. W.C. Rheinboldt,Numerical Analysis of Parameterized Nonlinear Equations (Wiley, New York, 1986). 217. S.M. Robinson, “Generalized equations and their solutions, part I: basic theory,”Mathematical Programming Study 10 (1979) 128–141. 218. S.M. Robinson, “Strongly regular generalized equations,”Mathematics of Operations Research 5 (1980) 43–62. 219. S.M. 
Robinson, “Generalized equations and their solutions, part II: applications to nonlinear programming,”Mathematical Programming Study 19 (1982) 200–221. 220. S.M. Robinson, “Generalized equations,” in: A. Bachem, M. Grötschel and B. Korte, eds.,Mathematical Programming: The State of the Art (Springer, Berlin, 1982) pp. 346–367. 221. S.M. Robinson, “Implicit B-differentiability in generalized equations,” Technical Summary Report No. 2854, Mathematics Research Center, University of Wisconsin (Madison, WI, 1985). 222. S.M. Robinson, “Local structure of feasible sets in nonlinear programming, part III: stability and sensitivity,”Mathematical Programming Study 30 (1987) 45–66. 223. S.M. Robinson, “An implicit-function theorem for a class of nonsmooth functions,” to appear in:Mathematics of Operations Research. 224. R.T. Rockafellar, “Characterization of the subdifferentials of convex functions,”Pacific Journal of Mathematics 17 (1966) 497–510. 225. R.T. Rockafellar, “Convex functions, monotone operators, and variational inequalities,”Theory and Applications of Monotone Operators: Proceedings of the NATO Advanced Study Institute, Venice, Italy (Edizioni Oderisi, Gubbio, Italy, 1968) pp. 35–65. 226. R.T. Rockafellar, “On the maximal monotonicity of subdifferential mappings,”Pacific Journal of Mathematics 33 (1970) 209–216. 227. R.T. Rockafellar,Convex Analysis (Princeton University Press, Princeton, NJ, 1970). 228. R.T. Rockafellar, “Augmented Lagrangian and application of the proximal point algorithm in convex programming,”Mathematics of Operations Research 1 (1976) 97–116. 229. R.T. Rockafellar, “Monotone operators and the proximal point algorithm,”SIAM Journal on Control and Optimization 14 (1976) 877–898. 230. R.T. Rockafellar, “Lagrange multipliers and variational inequalities,” in: R.W. Cottle, F. Giannessi, and J.L.Lions, eds.,Variational Inequalities and Complementarity Problems: Theory and Applications (Wiley, New York, 1980) pp. 303–322. 231. T.F. 
Rutherford,Applied General Equilibrium Modeling. Ph.D. dissertation Department of Operations Research, Stanford University (Stanford, CA, 1986). 232. T.F. Rutherford, “Implementation issues and computational performance solving applied general equilibrium models with SLCP,” Discussion Paper 837, Cowles Foundation for Research in Economics, Yale University (New Haven, CT, 1987). 233. R. Saigal, “Extension of the generalized complementarity problem,”Mathematics of Operations Research 1 (1976) 260–266. 234. P.A. Samuelson, “Spatial price equilibrium and linear programming,”American Economic Review 42 (1952) 283–303. 235. H.E. Scarf, “The approximation of fixed points of a continuous mapping,”SIAM Journal on Applied Mathematics 15 (1967) 1328–1342. 236. H.E. Scarf and T. Hansen,Computation of Economic Equilibria (Yale University Press, New Haven, CT, 1973). 237. A. Shapiro, “On concepts of directional differentiability,” Research Report 73/88(18), Department of Mathematics and Applied Mathematics, University of South Africa (Pretoria, South Africa, 238. J.B. Shoven, “Applying fixed points algorithms to the analysis of tax policies,” in: S. Karmardian and C.B. Garcia, eds.,Fixed Points: Algorithms and Applications (Academic Press, New York, 1977) pp. 403–434. 239. J.B. Shoven, “The application of fixed point methods to economics,” in: B.C. Eaves, F.J. Gould, H.O. Peitgen, and M.J. Todd, eds.,Homotopy Methods and Global Convergence (Plenum Press, New York, 1983) pp. 249–262. 240. S. Smale, “A convergent process of price adjustment and global Newton methods,”Journal of Mathematical Economics 3 (1976) 107–120. 241. M.J. Smith, “The existence, uniqueness and stability of traffic equilibria,”Transportation Research 13B (1979) 295–304. 242. M.J. Smith, “The existence and calculation of traffic equilibria,”Transportation Research 17B (1983) 291–303. 243. M.J. 
Smith, “A descent algorithm for solving monotone variational inequality and monotone complementarity problems,”Journal of Optimization Theory and Application 44 (1984) 485–496. 244. M.J. Smith, “The stability of a dynamic model of traffic assignment- an application of a method of Lyapunov,”Transportation Science 18 (1984) 245–252. 245. T.E. Smith, “A solution condition for complementarity problems: with an apilication to spatial price equilibrium,”Applied Mathematics and Computation 15 (1984) 61–69. 246. J.E. Spingarn, “Partial inverse of a monotone operator,”Applied Mathematics and Optimization 10 (1983) 247–265. 247. J.E. Spingarn, “Applications of the method of partial inverses to convex programming: decomposition,”Mathematical Programming 32 (1985) 199–223. 248. J.E. Spingarn, “On computation of spatial economic equilibria,” Discussion Paper 8731, Center for Operations Research and Econometrics, Université Catholique de Louvain (Louvain-la-Neuve, Belgium, 1987). 249. G. Stampacchia, “Variational inequalities,” inTheory and Applications of Monotone Operators, Proceedings of the NATO Advanced Study Institute, Venice, Italy (Edizioni Oderisi, Gubbio, Italy, 1968) pp. 102–192. 250. R. Steinberg and R.E. Stone, “The prevalence of paradoxes in transportation equilibrium problems,” Working paper, AT&T Bell Laboratories (Holmdel, NJ, 1987). 251. R. Steinberg and W.I. Zangwill, “The prevalence of Braess' paradox,”Transportation Science 17 (1983) 301–319. 252. J.C. Stone, “Sequential optimization and complementarity techniques for computing economic equilibria,”Mathematical Programming Study 23 (1985) 173–191. 253. P.K. Subramanian, “Gauss-Newton methods for the nonlinear complementarity problem,” Technical Summary Report No. 2845, Mathematics Research Center, University of Wisconsin (Madison, WI, 1985). 254. P.K. Subramanian, “Fixed-point methods for the complementarity problem,” Technical Summary Report No. 
2857, Mathematics Research Center, University of Wisconsin (Madison, WI, 1985). 255. P.K. Subramanian, “A note on least two norm solutions of monotone complementarity problems,”Applied Mathematics Letters 1 (1988) 395–397. 256. A. Tamir, “Minimality and complementarity properties associated with Z-functions and Mfunctions,”Mathematical Programming 7 (1974) 17–31. 257. R.L. Tobin, “General spatial price equilibria: sensitivity analysis for variational inequality and nonlinear complementarity formulations,” in: P.T. Harker, ed.,Spatial Price Equilibrium: Advances in Theory, Computation and Application, Lecture Notes in Economics and Mathematical Systems, Vol. 249 (Springer, Berlin, 1985) pp. 158–195. 258. R.L. Tobin, “Sensitivity analysis for variational inequalities,”Journal of Optimization Theory and Applications 48 (1986) 191–204. 259. M.J. Todd,The Computation of Fixed Points and Applications (Springer, Berlin, 1976). 260. M.J. Todd, “A note on computing equilibria in economics with activity models of production“,Journal of Mathematical Economics 6 (1979) 135–144. 261. G. Van der Laan and A.J.J. Talman, “Simplicial approximation of solutions to the nonlinear complementarity problem with lower and upper bounds,”Mathematical Programming 38 (1987) 1–15. 262. J.A. Ventura and D.W. Hearn, “Restricted simplicial decomposition for convex constrained problems,” Research Report No. 86-15, Department of Industrial and Systems Engineering, University of Florida (Gainesville, FL, 1986). 263. J.G. Wardrop, “Some theoretical aspects of road traffic research,”Proceedings of the Institute of Civil Engineers, Part II (1952) 325–378. 264. L.T. Watson, “Solving the nonlinear complementarity problem by a homotopy method,”SIAM Journal on Control and Optimization 17 (1979) 36–46. 265. J. Whalley, “Fiscal harmonization in the EEC: some preliminary findings of fixed point calculations,” in: S. Karamardian and C.B. 
Garcia, eds.,Fixed Points: Algorithms and Applications (Academic Press, New York, 1977) pp. 435–472. 266. Y. Yamamoto, “A path following algorithm for stationary point problems,”Journal of the Operations Research Society of Japan 30 (1987) 181–198. 267. Y. Yamamoto, “Fixed point algorithms for stationary point problems,” in: M. Zri and K. Tanabe, eds.,Mathematical Programming: Recent Developments and Applications (KTK Scientific Publishers, Tokyo, 1989) pp. 283–308. Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications Cover Date Print ISSN Online ISSN Additional Links □ Variational inequality □ complementarity □ fixed points □ Walrasian equilibrium □ traffic assignment □ network equilibrium □ spatial price equilibrium □ Nash equilibrium Industry Sectors Author Affiliations □ 1. Decision Sciences Department, The Wharton School, University of Pennsylvania, 19104-6366, Philadelphia, PA, USA □ 2. Department of Mathematical Sciences, The Whiting School of Engineering, The Johns Hopkins University, 21218, Baltimore, MD, USA
{"url":"http://link.springer.com/article/10.1007%2FBF01582255","timestamp":"2014-04-23T10:56:30Z","content_type":null,"content_length":"114989","record_id":"<urn:uuid:9819c5c5-b815-4d61-a0a0-3370eb544ffb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Random sampling of elementary flux modes in large-scale metabolic networks

Bioinformatics. Sep 15, 2012; 28(18): i515–i521.

Motivation: The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set.

Results: Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of being selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks.

Availability: Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler.

Contact: dmachado/at/deb.uminho.pt

Supplementary information: Supplementary data are available at Bioinformatics online.

1 INTRODUCTION

The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis (Schuster et al., 1999).
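As a toy illustration of the steady-state condition that every EM must satisfy, consider a hypothetical three-reaction chain (not a network from the paper):

```python
import numpy as np

# Hypothetical linear pathway: r1 (uptake) -> A, r2: A -> B, r3: B -> (secretion).
# Rows are the internal metabolites A and B; columns are the reactions r1-r3.
S = np.array([
    [1, -1,  0],   # A: produced by r1, consumed by r2
    [0,  1, -1],   # B: produced by r2, consumed by r3
])

# The only elementary mode of this chain runs all three reactions at equal
# flux; no proper subset of reactions can keep both A and B balanced.
e = np.array([1, 1, 1])

# Every EM keeps all internal metabolites at steady state: S @ e = 0.
steady = bool(np.allclose(S @ e, 0))
```

Elementarity then means that the support of e (its set of active reactions) cannot be reduced while preserving this balance.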
Elementary mode analysis identifies all minimal functional pathways connecting substrates with biomass and products inherent to a metabolic network. EMs have been used to understand cellular metabolism through analysis of the network structure, regulation and characterization of all possible phenotypes (Çakir et al., 2007; Schuster et al., 1999, 2000, 2002; Stelling et al., 2002). Examples of other recent applications of pathway analysis are the determination of minimum medium requirements (Schilling and Palsson, 2000) and the development of reduced kinetic models (Provost and Bastin, 2004). They also play an essential role in the development of model-based metabolic engineering strategies for strain optimization by identification of suitable intervention targets (Hädicke and Klamt, 2010, 2011; Trinh et al., 2008). A comprehensive review on elementary mode analysis and other applications of EMs can be found in Trinh et al. (2009). EMs are also closely related to the problem of identifying all transition invariants (t-invariants) in Petri net theory (Chaouiya, 2007). In fact, if all reactions are irreversible, the set of EMs is equivalent to the minimal t-invariants of a Petri net. Thus, it is not surprising that the algorithms for computation of EMs and t-invariants have evolved closely [see Schuster et al. (2002) for a comparison of both concepts].

Despite recent improvements in the algorithms for computation of EMs (Klamt et al., 2005; Terzer and Stelling, 2008), their application to real-world metabolic networks has been hampered by the combinatorial explosion in the number of modes as the size of the networks increases. The enumeration of the complete set of EMs for genome-scale networks has been infeasible so far, and perhaps even undesirable due to the hardly manageable number of modes that would be generated. An attractive approach is the enumeration of a subset of pathways representing the complete system.
Several approaches have been proposed to this end, though none of them provides a purely random sample of EMs. Current state-of-the-art approaches typically enumerate EMs with a certain objective or constraint, such as the enumeration of the shortest pathways (De Figueiredo et al., 2009; Rezola et al., 2011), the enumeration of pathways including a specific target reaction (Kaleta et al., 2009b), enumeration based on available measurements (Jungers et al., 2011; Soons et al., 2011), enumeration of all possible pathways through selected reactions that satisfy the steady-state flux of the entire network (elementary flux patterns) (Kaleta et al., 2009a), or decomposition of the network into modules (Schuster et al., 2002; Schwartz et al., 2007). These approaches do not represent the full solution space, and hence a number of potentially interesting solutions may be missed. One of the key requirements for successful understanding of cellular metabolism based on EMs is the ability to enumerate a representative subset of modes. In this work, we develop a method for generating random samples of EMs without computing their whole set. The goal is to obtain a well-distributed sample, which is representative of the complete set of EMs and suitable for most EM-based methods for analysis and optimization of metabolic networks.

2 METHODS

2.1 Algorithm

The EM sampler was implemented as an adaptation of the canonical basis approach by Schuster and Hilgetag (1994). The algorithm begins with a matrix containing the transposed stoichiometric matrix augmented with the identity matrix. Then, for each metabolite, all the non-zero entries in the corresponding column are detected and replaced with all possible combinations that annul the respective entries. The combinatorial nature of the pairwise input/output reaction combination for each metabolite is the key to the exponential growth of the number of modes during the execution of the algorithm.
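The starting tableau just described can be sketched as follows (a hypothetical 2×3 stoichiometric matrix; the variable names are ours):

```python
import numpy as np

# Hypothetical stoichiometric matrix: m = 2 metabolites, n = 3 reactions.
S = np.array([
    [1, -1,  0],
    [0,  1, -1],
])
m, n = S.shape

# Canonical basis approach: start from T = [S^T | I_n]. Row j pairs the
# stoichiometry of reaction j (first m columns) with its coefficients in
# the candidate mode (last n columns, initially the unit vector e_j).
T = np.hstack([S.T, np.eye(n)])
```

The algorithm then processes the first m columns one metabolite at a time, so that at the end only the last n columns (the mode coefficients) remain non-zero.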
Therefore, we add an additional filtering step which, at each iteration, selects only a subset of the possible combinations. This selection is based on a given probability function that randomly selects among the candidates. The algorithm is described as follows:

• initialize the matrix T = [S^T | I_n]
• for i ∈ {1, …, m}:
  – let R = {j | T[j,i] ≠ 0}
  – delete all rows R from T
  – let T_new = []
  – for each admissible pair (j, k) of rows in R:
    * t = add(T[j,·], T[k,·])
    * rev(t) = rev(T[j,·]) ∧ rev(T[k,·])
    * if minimal(t[m+1 : m+n]): append t to T_new
  – T_new = filter(T_new)
  – append T_new to T
• get E from T = [0 | E^T]

where S ∈ ℝ^(m×n) is the stoichiometric matrix, I_n is the identity matrix of size n, and E is the elementary mode matrix. In order to find all possible row combinations, rows j and k are combined by a non-negative combination scaled so that their entries in column i cancel, add(T[j,·], T[k,·]) = |T[k,i]|·T[j,·] + |T[j,i]|·T[k,·]. Function rev keeps track of the reversibility of the candidate modes, so that reversible modes can be freely combined and irreversible modes can only be combined in the appropriate direction. A candidate mode is reversible if it only contains reversible reactions. In order to combine candidate modes such that column i is annulled (i.e. metabolite i is balanced), it is important to make sure that they are combined in the proper direction; this is enforced by checking the signs of the entries in column i together with the reversibility of each candidate, negating a reversible candidate when necessary so that the two entries cancel.

For a mode to be elementary, it must have minimal support (Schuster et al., 1999). There are two methods to check if a mode is elementary: the combinatorial test and the rank test (Terzer and Stelling, 2008). The combinatorial test compares the support vector of the new mode with the support of all modes computed at that point. However, this method is not appropriate in our case, because we do not have the full set of elementary modes required for the test. On the other hand, the rank test is based only on the support vector of the candidate mode.
Therefore, we opted to use this test in our implementation: a candidate mode e is elementary if and only if

rank(S_{1:i, s(e)}) = |s(e)| − 1,

where s(e) = {i | e_i ≠ 0} is the support of e, and S_{1:i, s(e)} is the submatrix of the stoichiometric matrix composed of the metabolites that have been processed so far and the reactions that belong to the support of e. We also verify whether the new candidate mode contains any reversible reaction occurring in both directions simultaneously. In that case, the candidate can be disregarded without performing the test.

The filtering step is the novelty of the method proposed in this work (Fig. 1). In order to prevent the exponential growth in the number of candidate modes, we randomly select a sample of the new candidate modes at each step: each of the N new candidates is kept independently with a given selection probability P.

Fig. 1. The computation of elementary modes consists of iteratively removing all internal metabolites and combining every pair of input/output reactions. For highly connected nodes, this results in a combinatorial explosion of new connections (expansion phase).

The selection probability is a critical aspect of the algorithm. A low probability may cause the elimination of vital connections in the network, whereas a high probability may not prevent the combinatorial explosion. Ideally, one would want a high selection probability for low-connectivity nodes and a low selection probability at high-connectivity nodes. Therefore, we opted to define the selection probability as a function of the number of candidate modes. Furthermore, we can observe that the selection of modes follows a binomial distribution with an average selection size equal to N·P. Hence, at each step we define

P = min(1, K/N),

where K is a given constant that imposes an upper bound on the expected number of new candidate modes at each step.
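A minimal sketch of the two ingredients just described, in the spirit of the paper's Python implementation; the function names, the `S_proc` submatrix of already-processed metabolites and the use of NumPy here are our own assumptions:

```python
import numpy as np

def is_minimal(candidate_mode, S_proc):
    # Rank test for elementarity: a candidate mode e is minimal iff
    # rank(S_proc[:, s(e)]) == |s(e)| - 1, where s(e) is its support and
    # S_proc holds the rows of S for the metabolites processed so far.
    support = np.flatnonzero(candidate_mode)
    return np.linalg.matrix_rank(S_proc[:, support]) == len(support) - 1

def filter_candidates(candidates, K, rng=np.random.default_rng()):
    # Random filtering step: keep each of the N candidates independently
    # with probability P = min(1, K / N), so the number kept follows a
    # binomial distribution with expected size min(N, K) <= K.
    N = len(candidates)
    if N == 0:
        return candidates
    P = min(1.0, K / N)
    keep = rng.random(N) < P
    return [c for c, kept in zip(candidates, keep) if kept]
```

With K larger than the number of candidates the filter keeps everything (P = 1); for N much larger than K it keeps roughly K candidates on average, which is what prevents the expansion phase from exploding.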
2.2 Implementation

The algorithm was implemented in Python and uses the open libraries NumPy/SciPy (Jones et al., 2012) for numerical computations, and libSBML (Bornstein et al., 2008) for reading SBML (Systems Biology Markup Language) model files. All tests were performed on an Intel Core 2 Duo 2.13 GHz processor with 3 GB RAM, running Linux Kernel 3.0 and Python 2.7.

2.3 Model

The algorithm was tested using a condensed genome-scale metabolic reconstruction of Escherichia coli (Orth et al., 2009). The model contains 72 metabolites (52 internal, 20 external) and 95 reactions (75 internal, 20 drains). Glucose was set as the external carbon source, and flux variability analysis (FVA) (Mahadevan and Schilling, 2003) was performed in order to detect blocked reactions, which were then removed from the model. The simplified model contains 68 metabolites and 87 reactions. The EMs for this model were calculated with efmtool (Terzer and Stelling, 2008), resulting in a total of 100,274 EMs.

3 RESULTS

3.1 Sampling

The selection probability is controlled by the constant K, the only adjustable parameter in the algorithm (see Section 2). Therefore, we performed several tests using different values for this parameter. For each value of K, a total of 10 trials were run, and the individual samples were merged into a larger set. Additionally, we verified that all modes obtained are truly elementary modes contained within the full set of EMs for the tested model. The first test is the sample size and computation time as a function of K. By controlling the selection probability, it is expected that the resulting sample size will be affected and, consequently, the computation time as well. Results for these experiments are shown in Table 1. By performing linear regression of these values on a log–log scale, it is possible to observe that the number of modes obtained grows linearly with K, whereas the computation time grows nearly quadratically (Fig. 2).
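The log–log regression used for these scaling claims can be sketched as follows, with made-up measurements for illustration only (these are not the paper's Table 1 values):

```python
import numpy as np

# Hypothetical (K, sample size, runtime) measurements, illustrative only.
K = np.array([1e2, 1e3, 1e4, 1e5])
n_modes = np.array([9.0e1, 1.1e3, 9.0e3, 1.2e5])
runtime = np.array([0.8, 1.2e2, 8.0e3, 1.5e6])

# A power law y = a * x^b is a straight line with slope b on a log-log
# scale, so a degree-1 polyfit recovers the scaling exponent.
slope_size, _ = np.polyfit(np.log10(K), np.log10(n_modes), 1)
slope_time, _ = np.polyfit(np.log10(n_modes), np.log10(runtime), 1)
# slope_size ~ 1 (linear growth), slope_time ~ 2 (quadratic growth)
```

A fitted exponent near 1 for sample size versus K, and near 2 for runtime versus sample size, corresponds to the linear and nearly quadratic growth reported above.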
Fig. 2. Sample size (no. of EMs) and computation time (s) as a function of K (log–log scale); the number of EMs grows linearly with K.

Table 1. Size of the EM samples obtained and respective computation times for different values of K.

In order to obtain a well-distributed sample, it is important to guarantee that the reaction participation (i.e. the fraction of EMs in which a reaction participates) is preserved by the sampling procedure. Otherwise, we would obtain a biased sample of the full EM set. We compared the reaction participation of the generated samples against the respective values in the full set of EMs (Fig. 3). It is possible to observe that for lower values of K, there is a weaker correlation between the reaction participation of the samples and the reaction participation of the full EM set. This is likely due to the fact that the sample size is too small to obtain a good coverage of the solution space. However, it is possible to observe that, as K increases, the Pearson correlation coefficient (r) also increases. For K = 10^5 we observe a high correlation (r = 0.986) between the reaction participation that is estimated by the sampling approach and the true values. In all cases, the dispersion seems to be homogeneous, showing no observable bias, hence the degree of correlation is only affected by the sample size.

Fig. 3. Reaction participation in the full EM set versus the participation in samples of different sizes, and the respective Pearson correlation coefficients (r).

Furthermore, we analyzed the EM samples regarding their distribution within the flux solution space. For that matter, we plotted the EM distribution within the phenotypic phase plane for oxygen uptake and cellular growth normalized by glucose uptake (Edwards et al., 2002). Figure 4 shows the distribution for the full set compared with the samples for different values of K.
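Both quantities analyzed in this section, reaction participation and (next) pathway length, can be computed directly from an EM matrix with rows as modes and columns as reactions; a minimal sketch with toy data rather than the E. coli sets:

```python
import numpy as np

def reaction_participation(E):
    # Fraction of EMs in which each reaction is active (non-zero flux).
    # E is a (num_modes x num_reactions) elementary mode matrix.
    return (E != 0).mean(axis=0)

def pathway_lengths(E):
    # Support size (number of active reactions) of each mode.
    return (E != 0).sum(axis=1)

# Toy "full set" of 3 modes over 3 reactions, and a sample of 2 of them.
E_full = np.array([[1, 1, 0],
                   [1, 0, 1],
                   [1, 1, 1]])
E_sample = E_full[[0, 2]]

p_full = reaction_participation(E_full)    # [1.0, 0.667, 0.667]
p_samp = reaction_participation(E_sample)  # [1.0, 1.0, 0.5]

# Pearson correlation between estimated and true participation values.
r = np.corrcoef(p_full, p_samp)[0, 1]
```

With the real EM sets, `r` is the coefficient reported in Fig. 3, and a histogram of `pathway_lengths(E)` gives the length distributions compared in Fig. 5.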
It can be observed that the samples are unbiased relative to the full set, and that the coverage of the solution space improves with the size of the samples.

Fig. 4. Comparison of the phenotypic phase planes for oxygen uptake and cellular growth normalized by glucose uptake (full set and different samples).

We also evaluated how the sampling procedure affects the pathway length distribution (Fig. 5). It is possible to observe that the full EM set has a skewed Gaussian-like distribution with a maximum frequency of pathways with 50 reactions. However, we observe that as K decreases, the distribution shifts towards smaller pathway lengths. This is not surprising, since the EMs with larger support vectors will undergo more sampling steps. Nonetheless, it is observed that for K = 10^5, the distribution of the sample is considerably close to the distribution of the full set.

Fig. 5. Comparison of the pathway length distribution of the full set of EMs against the distribution for samples at different values of K, and the correlation coefficient (r) between the original frequency distribution and the latter.

3.2 Case study: rational strain design

One of the applications of elementary mode analysis is the rational design of mutant strains for industrial production of chemical compounds (Hädicke and Klamt, 2010, 2011; Trinh et al., 2008). The enumeration of the EMs of a metabolic network allows the determination of the most efficient pathways for production of the selected compound. Then, one can find the best knockout candidates that eliminate the maximum number of competing pathways, channeling the metabolic fluxes to the desired pathways (Trinh et al., 2008). In order to understand if the utilization of EM samples is appropriate for rational strain design, we compared the knockout strategy obtained with the full EM set, using the method of Trinh et al. (2008), against the strategies obtained with different samples (for K = 10^5).
Succinate production was used as the case study; all experiments were constrained to a maximum of 8 knockouts, and only strategies with viable biomass production were allowed. In order to have an estimate of the production rate for each case, we used the method of minimization of metabolic adjustment (MOMA) to predict the flux distribution of the mutants (Segrè et al., 2002). The results are presented in Table 2. It is possible to observe that the knockout strategies found for the samples differ from that of the full EM set (see Supplementary Fig. S1 for a clustering analysis). Nonetheless, all the knockouts predicted with the full set appear frequently in the other knockout strategies. Also, the succinate production rates estimated with MOMA are high in most cases. In only one case (Sample 8) was there no production, although it predicted 4 knockouts common to the full EM set.

Table 2. Comparison of the optimal knockout strategies for succinate production for the full EM set and different EM samples.

4 DISCUSSION

4.1 Sampling quality

The main goal of this work is the creation of a sampling approach for computing EMs in large-scale networks without computing their whole set. For that matter, it is important to guarantee that the sampling approach provides a uniform coverage of the complete solution space. Our method is controlled by a single parameter that influences the number of computed EMs by adjusting the selection probability during the execution of the algorithm. Our results show that the sample size obtained is directly proportional to K. For most EM-based applications, it is important to obtain a sample that preserves the reaction participation (the fraction of EMs in which a reaction participates). The results show that the sampling is unbiased in that aspect. However, the correlation of the estimated values with the true values is affected by the size of the sample. The larger the sample size, the better the correlation obtained.
This is also reflected in the analysis of the phenotypic phase plane for oxygen uptake and cellular growth. The samples present an unbiased representation of the solution space, although the coverage obtained will depend on the sample size. Regarding the pathway length distribution of the EMs, the results show that there can be a bias towards smaller pathway lengths for low values of K. The importance of this bias depends on the application for which the sample will be used. One may argue that shorter pathways are more efficient, hence more likely to carry higher fluxes. For a large value of K, we observe that the bias is not significant. However, for larger networks, the demands in terms of computational time and memory may not allow for arbitrarily large values of K, and the effect may become more significant. One way to compensate for this effect would be to give larger selection probabilities to modes with larger support vectors. In that case, additional testing would be required to check whether this artificial selection causes any bias in other properties of the samples. We tested our approach with a case study of rational strain design for succinate production in E. coli. The results have shown that, using an EM sample, it is possible to predict most of the best potential reaction knockouts and to obtain close-to-optimal solutions. The utilization of heuristic methods to search for satisfactory solutions is a common approach in metabolic engineering for large metabolic networks, when an exhaustive search becomes prohibitive (Patil et al., 2005).

4.2 Performance

We implemented our sampling method as an adaptation of the canonical basis approach (Schuster and Hilgetag, 1994). This approach has a very simple and intuitive topological interpretation in terms of the graph of the metabolic network (Fig. 1). However, it is very inefficient compared to the more recent nullspace approach (Klamt et al., 2005).
There are very efficient implementations of this approach (e.g. using bit pattern trees (Terzer and Stelling, 2008)). The efmtool software, which implements these state-of-the-art methods (Terzer and Stelling, 2008), is able to compute a full set of approximately a hundred thousand EMs in the order of seconds to minutes. Our implementation of the canonical basis approach, on the other hand, takes minutes to hours to compute a few thousand EMs. In our tests, we used a condensed genome-scale reconstruction of E. coli (Orth et al., 2009), which is a simplified version of the full genome-scale model (Feist et al., 2007). In order to apply our method to the full model, it will be necessary to analyze how this approach can be reformulated as a modification of the nullspace approach and integrated into the most recent implementations (Terzer and Stelling, 2008). The most significant bottleneck in our algorithm is the computation of a matrix rank for every candidate mode. As explained earlier (see Section 2), the combinatorial test is not appropriate for EM sampling because we do not have the full set of EMs to compare against. Using this test in our approach would result in the computation of a sample of modes that are elementary among themselves but not truly elementary modes of the full set. Therefore, the rank test must be used. However, computing the rank of a large matrix is very expensive and hampers EM computation at the genome scale. This limitation may be overcome by improving the efficiency of the rank calculation. Note that we are constantly computing the ranks of matrices which are very similar (submatrices of the stoichiometric matrix). Therefore, one may take advantage of methods with pre-computation, such as the lazy rank updating method proposed by Terzer and Stelling (2008). Our results show that the computational time grows nearly quadratically with the size of the sample.
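The rank test mentioned above can be sketched in a few lines. The criterion used here (a steady-state flux vector v is elementary iff the columns of the stoichiometric matrix S restricted to the support of v have rank |support| - 1) is the standard one; the tiny network below is an invented example, not the model used in the paper:

```python
import numpy as np

def is_elementary(S, v, tol=1e-9):
    """Rank test: a steady-state flux vector v is an elementary mode iff the
    columns of S indexed by the support of v have rank |support| - 1."""
    support = np.flatnonzero(np.abs(v) > tol)
    return np.linalg.matrix_rank(S[:, support]) == len(support) - 1

# Toy network: R1: -> A,  R2: A -> B,  R3: B -> ,  R4: A ->
S = np.array([[1, -1,  0, -1],    # metabolite A
              [0,  1, -1,  0]])   # metabolite B

em = np.array([1, 1, 1, 0])       # R1 + R2 + R3: elementary
combo = np.array([2, 1, 1, 1])    # sum of two modes: steady state, but not elementary
```

Here `is_elementary(S, em)` holds, while the combined vector fails the test because its support is larger than a single minimal pathway requires.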
Therefore, it would seem advantageous to use a divide-and-conquer strategy to compute a sample of size N by appending together P independent samples of size N/P. However, in order to obtain smaller sample sizes, one has to decrease the selection probability (by adjusting K), which affects the quality of the samples with regard to the pathway length distribution. Note that it is possible to take advantage of multiple CPUs to run several samplers in parallel and combine the samples into one larger set. However, it must be kept in mind that this does not provide the same sampling quality as a sample of the same size obtained with a higher selection probability. One of the advantages of elementary mode analysis, when compared with methods based on flux balance analysis, is the fact that the EM set needs to be computed only once. Once the EM set is computed, the analysis and optimization of the metabolic network is quite straightforward. On the other hand, bi-level optimization frameworks require expensive computation at every utilization (Burgard et al., 2003; Patil et al., 2005). Therefore, even if the computation of an EM sample large enough to obtain an unbiased coverage of the solution space is highly time consuming at the genome scale, this effort is compensated in the long term. As more data are collected, metabolic models are constantly growing in size. This increases the challenge for EM-based analysis of metabolic networks, as the number of EMs grows exponentially with the network size. For that matter, the development of EM sampling approaches will become increasingly important. This work is a contribution in that direction. We developed a method that prevents the combinatorial explosion of the number of EMs during computation by adding a filtering step that randomly samples among the candidate modes at each iteration.
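The filtering step can be illustrated with a simple thinning rule. The keep-probability used below, min(1, K / number of candidates), is an assumption for illustration; the paper's exact selection rule may differ:

```python
import random

def filter_candidates(candidates, K, rng=None):
    """Illustrative filtering step: keep each candidate mode independently with
    probability min(1, K / len(candidates)), so that roughly K candidates
    survive each iteration and the combinatorial explosion is contained."""
    rng = rng or random.Random(42)   # seeded for reproducibility
    if len(candidates) <= K:
        return list(candidates)
    p = K / len(candidates)
    return [c for c in candidates if rng.random() < p]

kept = filter_candidates(list(range(50_000)), 1_000)
```

`len(kept)` comes out close to 1000; a larger K keeps more candidates per iteration and yields a larger, better-covering sample, at a higher computational cost.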
Unlike other methods for obtaining reduced sets of EMs (De Figueiredo et al., 2009; Jungers et al., 2011; Kaleta et al., 2009b; Rezola et al., 2011), our approach does not use any objective functions or experimental flux constraints. EFMEvolver (Kaleta et al., 2009b) is the approach most similar to ours. It samples the EMs that contain a target reaction, rather than the whole solution space. It uses linear programming (LP) to find a single EM, and a genetic algorithm (GA) to search for different solutions. It has the advantage that the procedure can be stopped after a desired number of modes has been collected, whereas our approach only yields valid EMs after completion. On the downside, it requires tuning the parameters of the GA and selecting a proper fitness function, whereas our method is tunable by a single parameter. Our method can show a bias towards smaller EM pathway lengths if the selection probability is too low. Given its formulation, it is likely that EFMEvolver exhibits the same bias, although it has not been evaluated how strong that bias can be. Despite the current shortcomings, EM sampling is a promising approach for the computation of EMs at the genome scale, and opens the possibility of applying EM-based metabolic engineering methods to optimize metabolic networks at this scale.

Funding: Research supported by the Portuguese Foundation for Science and Technology (FCT), through the projects “Bridging Systems and Synthetic Biology for the development of improved microbial cell factories” (MIT-Pt/BS-BB/0082/2008) and “SYNBIOBACTHER - Synthetic biology approaches to engineer therapeutic bacteria” (PTDC/EBB-BIO/102863/2008).

Conflict of Interest: none declared.

• Bornstein B., et al. LibSBML: an API library for SBML. Bioinformatics. 2008;24:880–881.
• Burgard A., et al. Optknock: a bilevel programming framework for identifying gene knockout strategies for microbial strain optimization. Biotechnol. Bioeng. 2003;84:647–657.
• Çakir T., et al. Effect of carbon source perturbations on transcriptional regulation of metabolic fluxes in Saccharomyces cerevisiae. BMC Syst. Biol. 2007;1:18.
• Chaouiya C. Petri net modelling of biological networks. Brief. Bioinform. 2007;8:210–219.
• De Figueiredo L., et al. Computing the shortest elementary flux modes in genome-scale metabolic networks. Bioinformatics. 2009;25:3158–3165.
• Edwards J., et al. Characterizing the metabolic phenotype: a phenotype phase plane analysis. Biotechnol. Bioeng. 2002;77:27–36.
• Feist A., et al. A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1260 ORFs and thermodynamic information. Mol. Syst. Biol. 2007;3.
• Hädicke O., Klamt S. CASOP: a computational approach for strain optimization aiming at high productivity. J. Biotechnol. 2010;147:88–101.
• Hädicke O., Klamt S. Computing complex metabolic intervention strategies using constrained minimal cut sets. Metab. Eng. 2011;13:204–213.
• Jones E., et al. 2001–2012. SciPy: open source scientific tools for Python. www.scipy.org.
• Jungers R., et al. Fast computation of minimal elementary decompositions of metabolic flux vectors. Automatica. 2011;47:1255–1259.
• Kaleta C., et al. Can the whole be less than the sum of its parts? Pathway analysis in genome-scale metabolic networks using elementary flux patterns. Genome Res. 2009a;19:1872–1883.
• Kaleta C., et al. EFMEvolver: computing elementary flux modes in genome-scale metabolic networks. In: Grosse I., editor. Lecture Notes in Informatics. Vol. 157. Gesellschaft für Informatik, Bonn; 2009b. pp. 179–189.
• Klamt S., et al. Algorithmic approaches for computing elementary modes in large biochemical reaction networks. Syst. Biol. 2005;152:249–255.
• Mahadevan R., Schilling C. The effects of alternate optimal solutions in constraint-based genome-scale metabolic models. Metab. Eng. 2003;5:264–276.
• Orth J., et al. Reconstruction and use of microbial metabolic networks: the core Escherichia coli metabolic model as an educational guide. In: Bock A., editor. EcoSal – Escherichia coli and Salmonella: Cellular and Molecular Biology. Washington, DC: ASM Press; 2009. pp. 56–99.
• Patil K., et al. Evolutionary programming as a platform for in silico metabolic engineering. BMC Bioinformatics. 2005;6:308.
• Provost A., Bastin G. Dynamic metabolic modelling under the balanced growth condition. J. Process Control. 2004;14:717–728.
• Rezola A., et al. Exploring metabolic pathways in genome-scale networks via generating flux modes. Bioinformatics. 2011;27:534–540.
• Schilling C., Palsson B. Assessment of the metabolic capabilities of Haemophilus influenzae Rd through a genome-scale pathway analysis. J. Theor. Biol. 2000;203:249–283.
• Schuster S., Hilgetag C. On elementary flux modes in biochemical reaction systems at steady state. J. Biol. Syst. 1994;2:165–182.
• Schuster S., et al. Detection of elementary flux modes in biochemical networks: a promising tool for pathway analysis and metabolic engineering. Trends Biotechnol. 1999;17:53–60.
• Schuster S., et al. A general definition of metabolic pathways useful for systematic organization and analysis of complex metabolic networks. Nat. Biotechnol. 2000;18:326–332.
• Schuster S., et al. Exploring the pathway structure of metabolism: decomposition into subnetworks and application to Mycoplasma pneumoniae. Bioinformatics. 2002;18:351–361.
• Schwartz J., et al. Observing metabolic functions at the genome scale. Genome Biol. 2007;8:1–17.
• Segrè D., et al. Analysis of optimality in natural and perturbed metabolic networks. Proc. Natl. Acad. Sci. USA. 2002;99:15112–15117.
• Soons Z., et al. Identification of minimal metabolic pathway models consistent with phenotypic data. J. Process Control. 2011;21:1483–1492.
• Stelling J., et al. Metabolic network structure determines key aspects of functionality and regulation. Nature. 2002;420:190–193.
• Terzer M., Stelling J. Large-scale computation of elementary flux modes with bit pattern trees. Bioinformatics. 2008;24:2229–2235.
• Trinh C., et al. Minimal Escherichia coli cell for the most efficient production of ethanol from hexoses and pentoses. Appl. Environ. Microbiol. 2008;74:3634–3643.
• Trinh C., et al. Elementary mode analysis: a useful metabolic pathway analysis tool for characterizing cellular metabolism. Appl. Microbiol. Biotechnol. 2009;81:813–826.

Articles from Bioinformatics are provided here courtesy of Oxford University Press
doubts in momentum

A body with mass 'm' moving with velocity 'v' possesses momentum, and the magnitude of the momentum of the particle is given by the product of its mass and its velocity.

It is the definition of momentum. Newton called it the quantity of motion: http://en.wikisource.org/wiki/The_Ma...finitions#Def1

Definition II. The Quantity of Motion is the measure of the same, arising from the velocity and quantity of matter conjuctly. The motion of the whole is the Sum of the motions of all the parts; and therefore in a body double in quantity, with equal velocity, the motion is double; with twice the velocity, it is quadruple.

Who came up with this idea? What observations led him to think of it this way? I know that Newton's 2nd law states that force is nothing but the rate of change of momentum. So this idea (i.e., the concept of momentum) dates back to before Newton proposed his laws, right, or am I wrong?

He formulated his second law as:

The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the straight line in which that force is impressed. If any force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. And this motion (being always directed the same way with the generating force), if the body moved before, is added to or subtracted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both.

His Principia was based on the experimental observations of Galilei and others.
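Newton's "quantity of motion" is simply p = mv, and the scaling claims in Definition II can be checked directly. A trivial sketch, just to make the proportionality concrete:

```python
def momentum(mass, velocity):
    """Quantity of motion, p = m * v (Newton's Definition II)."""
    return mass * velocity

p = momentum(2.0, 3.0)               # some body
p_double_mass = momentum(4.0, 3.0)   # "a body double in quantity, with equal velocity, the motion is double"
p_double_both = momentum(4.0, 6.0)   # "with twice the velocity, it is quadruple"
```

Doubling the mass doubles p, and doubling both mass and velocity quadruples it, exactly as the Definition states.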
FOM: re: On CH/1
Kanovei kanovei at wminf2.math.uni-wuppertal.de
Mon Dec 8 13:04:08 EST 1997

>From: Harvey Friedman <friedman at math.ohio-state.edu>
>Date: Mon, 8 Dec 1997 02:28:20 +0100
> I strongly suspect that by 2100, the CH situation will be clarified

Still, there is something strange around CH. It looks more like a cultural than a mathematical phenomenon. Indeed, compare two *problems*:

CH: ........
CD (cube duplication): express 2^(1/3) as a combination of square roots

To solve CH means to find a "computation", which involves logical inference, ending with CH or its negation. To solve CD means to find a "computation", which involves square roots, ending with 2^(1/3). Both have been proved to be impossible, so in principle both have one and the same mathematical status. Both are important for certain types of mathematics. Perhaps there are many mathematicians who would say CD is more important, at least for 1 and 2 (if not for 3 as well) in the HF scheme. But who now cares how to solve CD by 2100? Why, after all, is CH so attractive as a topic for discussion?

Vladimir Kanovei

More information about the FOM mailing list
ALEX Lesson Plans

Subject: Mathematics (5 - 6)
Title: Decimal Dilemma: Carlos the Centipede needs to buy tennis shoes!
Description: In this lesson, Carlos the Centipede must buy new baseball shoes. His dilemma will be how to multiply decimals (money) by multiples of 10. To add to the dilemma, Carlos' baseball team needs to buy shoes too! Students will multiply decimals by 100. Students will investigate what is happening to the decimal point each time a decimal is multiplied by a multiple of 10.

Subject: Mathematics (5 - 6)
Title: Decimal Dilemmas: Carlos the Centipede is Adding Decimals!
Description: In this lesson, students will work in collaborative/cooperative groups to understand adding decimals. Students will use decimal grids to represent the sum of decimals. Emphasis is on using place value, lining up the decimals and bringing the decimal down. Students will understand how important estimation of decimals is in real-life situations.

Subject: Mathematics (5 - 6)
Title: Decimal Dilemma: Carlos the Centipede is Subtracting Decimals!
Description: In this lesson, students will work in collaborative/cooperative groups to understand subtracting decimals. Students will use decimal grids to represent the difference of decimals. Emphasis is on using place value, lining up the decimals and bringing the decimal down. Students will understand how important estimation of decimals is in real-life situations.

Subject: Information Literacy (K - 12), or Mathematics (5 - 6)
Title: Dewey Decimal Goes to Math Class
Description: Math students take an onsite field trip to the library (media center) to meet the Dewey Decimal System. An "Essential Question" will be used to set the purpose for understanding why we use the Dewey Decimal System. Students will compare and order decimals based on the Dewey Decimal System.
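The skills these lessons target (shifting the decimal point when multiplying by powers of ten, and ordering decimals such as Dewey call numbers) can be demonstrated with Python's decimal module. This snippet is illustrative only and is not part of the lesson plans:

```python
from decimal import Decimal

# Multiplying by a multiple of 10 just shifts the decimal point.
price = Decimal("4.25")
assert price * 10 == Decimal("42.5")
assert price * 100 == Decimal("425")

# Ordering Dewey call numbers is ordering decimals: 512.07 < 512.7 < 512.9.
call_numbers = ["512.9", "512.07", "510.2", "512.7"]
ordered = sorted(call_numbers, key=Decimal)
```

Using `Decimal` rather than `float` keeps the place-value behaviour exact, which matches how the lessons teach lining up the decimal point.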
Subject: Character Education (K - 12), or Mathematics (4 - 5)
Title: Lining up the Decimals
Description: This lesson provides a chance for the students to order decimals from least to greatest and greatest to least. The numbers are based on students' ability. This lesson plan was created as a result of the Girls Engaged in Math and Science (GEMS) Project, funded by the Malone Family Foundation.

Thinkfinity Lesson Plans

Subject: Language Arts, Mathematics
Title: Post-Office Numbers
Description: In this lesson, one of a multi-part unit from Illuminations, students participate in activities in which they focus on the role of numbers and language in real-world situations. Students discuss a picture of things you might see at a post office and then discuss, describe, read, and write about whole numbers to thousands, decimal fractions to hundredths, and common fractions.
Thinkfinity Partner: Illuminations
Grade Span: 3, 4, 5

Subject: Mathematics
Title: Number and Operations Web Links
Description: This collection of Web links, reviewed and presented by Illuminations, offers teachers and students information about and practice in concepts related to arithmetic. Users can read the Illuminations Editorial Board's review of each Web site, or choose to link directly to the sites.
Thinkfinity Partner: Illuminations
Grade Span: K, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

Subject: Mathematics
Title: Communicating about Mathematics Using Games: Playing Fraction Tracks
Description: Mathematical games can foster mathematical communication as students explain and justify their moves to one another. In addition, games can motivate students and engage them in thinking about and applying concepts and skills. This e-example from Illuminations contains an interactive version of a game that can be used in the grades 3-5 classroom to support students' learning about fractions.
e-Math Investigations are selected e-examples from the electronic version of the Principles and Standards of School Mathematics (PSSM). The e-examples are part of the electronic version of the PSSM document. Given their interactive nature and focused discussion tied to the PSSM document, the e-examples are natural companions to the i-Math investigations.
Thinkfinity Partner: Illuminations
Grade Span: 3, 4, 5
Anna University B.E PH1X01: ENGINEERING PHYSICS (MODEL) Question paper

Course: B.E
University: Anna University (Common to all Branches of Engineering and Technology)
Regulation 2004
Time: 3 Hours
Maximum Marks: 100
(Answer all Questions)

PART A (10 x 2 = 20 marks)

1. The noise level inside a room with a printer operating is found to be 80 dB. Find the noise level produced by the printer alone, if the background noise without the printer is 73 dB.
2. What is a unit cell? How many lattice parameters are required to describe a cubic lattice?
3. A monochromatic source of light is used to get circular fringes in Michelson's interferometer. When the movable mirror is moved by 0.06 mm, a shift of 200 circular fringes is observed. Find the wavelength of the light used.
4. What is the principle of the laser? Give the important requisites for laser action to take place.
5. State the Wiedemann-Franz law.
6. The Fermi energy of cesium is 1.55 eV. Calculate the number of conduction electrons in 1 cm^3 of the metal.
7. With an increase in temperature the resistivity of semiconductors decreases, while that of metals increases. Give reasons.
8. Explain the isotope effect in superconductors.
9. What are metallic glasses?
10. What are the requirements of good insulating materials?

PART B (5 x 16 = 80 marks)

11. a) i) Explain the various factors affecting the acoustics of buildings and their remedies. (8)
ii) What is the magnetostriction effect? With a neat circuit diagram, describe the production of ultrasonic waves by the magnetostriction method. (8)
b) i) Describe the simple cubic, body-centered cubic and face-centered cubic structures with examples, and calculate the atomic packing density. (12)
ii) A NaCl crystal has an fcc lattice. The density of NaCl is 2.18 g/cm^3. Calculate the distance between the sodium and chlorine ions. (Atomic weight of sodium is 23 and atomic weight of chlorine is 35.5.) (4)

12.
a) i) Describe the construction and working of Michelson's interferometer. Explain the method of determination of the wavelength of a monochromatic source of light. (12)
ii) Explain the fibre-optic based displacement sensor. (4)
b) i) With the necessary theory, explain the construction and working of the CO2 laser. (12)
ii) Discuss the classification of optical fibres based on refractive index profile. (4)

13. a) i) Solving Schrödinger's time-independent wave equation, find the energy eigenvalues of a particle moving freely inside a one-dimensional box. (12)
ii) In a Compton experiment the wavelength of the incident photon is 1.325 Å, whereas that of the scattered photon is 1.351 Å. Find the angle through which the photon is scattered and also calculate the kinetic energy of the recoiling electron. (4)
b) i) Derive an expression for the density of energy states. Obtain an expression for the Fermi energy in metals at T = 0 K. (12)
ii) Aluminum is a trivalent metal. Its electrical conductivity at room temperature is 3.8 x 10^7 Ω^-1 m^-1. The atomic weight and density of aluminum are 27 and 2700 kg/m^3 respectively. Calculate the mean free path of the conduction electrons in aluminum at room temperature. (4)

14. a) i) Derive an expression for the electrical conductivity of an intrinsic semiconductor. Describe the experimental method of determination of the band gap of a semiconductor. (16)
b) i) Explain with a sketch the variation of the Fermi level with temperature and doping concentration in an N-type semiconductor. (12)
ii) Explain Type I and Type II superconductors. (4)

15. a) i) What is meant by the internal field in dielectrics? Obtain the Clausius-Mossotti equation and explain how it can be used to determine the dipole moment of polar molecules. (12)
ii) Explain shape memory alloys. (4)
b) i) Explain the non-destructive testing of materials by the liquid penetrant method. (8)
ii) Draw the block diagram of an ultrasonic flaw detector and explain how it is used for detecting defects in materials. (8)
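For revision, the purely numerical parts of Q.1 and Q.3 of Part A can be checked quickly. The formulas assumed here are the standard ones: incoherent addition of sound intensities in decibels, and wavelength = 2d/N for N fringe shifts over a mirror displacement d in a Michelson interferometer:

```python
import math

# Part A, Q.1: room with printer = 80 dB, background alone = 73 dB.
# Intensities (not decibels) add, so subtract intensities to isolate the printer.
total_db, background_db = 80.0, 73.0
printer_db = 10 * math.log10(10 ** (total_db / 10) - 10 ** (background_db / 10))
# printer_db is about 79 dB

# Part A, Q.3: mirror moved d = 0.06 mm, N = 200 fringe shifts.
d = 0.06e-3   # metres
N = 200
wavelength = 2 * d / N   # 6.0e-7 m, i.e. 600 nm
```

So the printer alone produces roughly 79 dB, and the interferometer source has a wavelength of 600 nm.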
User J. E. Pascoe
Location: University of California, San Diego
I am a graduate student at University of California, San Diego under the advice of Jim Agler.

11 awarded Teacher
26 accepted Structure theorem for finite dimensional $C^*$-algebras and their representations
Jul Structure theorem for finite dimensional $C^*$-algebras and their representations
26 comment This proof is interesting. Luckily, I know enough about the theory of vN algebras for the context of this proof to make sense. I understand the feeling that this is somehow the decomposition into factors combined with some manipulatorics. However, I'm certain there are a lot of different ways to approach the problem. I was certainly more interested in the location of known solutions so that I could use this for some simplification of some proof.
26 awarded Commentator
Jul Structure theorem for finite dimensional $C^*$-algebras and their representations
26 comment This is a nice succinct explanation of the fact. I needed to use this to simplify some argument I was doing, and if someone asks why, this kind of logic/perspective will be useful.
Jul Structure theorem for finite dimensional $C^*$-algebras and their representations
24 comment Thanks, Mike! That's exactly what I was looking for!
24 asked Structure theorem for finite dimensional $C^*$-algebras and their representations
25 awarded Citizen Patrol
May Metrics and completions on the direct limit of matrices of all sizes over arbitrary fields
1 comment What exactly do you mean by uniform norm? Do you mean you need to use a different norm on $M_n$ than the two norm/maximum singular value?
Apr Metrics and completions on the direct limit of matrices of all sizes over arbitrary fields
29 revised edited title
29 asked Metrics and completions on the direct limit of matrices of all sizes over arbitrary fields
Mar Injectivity bounds for complex analytic functions
26 revised added 4 characters in body
26 awarded Editor
Mar Injectivity bounds for complex analytic functions
26 revised added 4 characters in body
Mar Injectivity bounds for complex analytic functions
26 comment I posted an answer. It should clear up your multivariable question. Somehow I think it should encode a basic proof, but I don't know what it is.
26 answered Injectivity bounds for complex analytic functions
24 awarded Supporter
Mar Injectivity bounds for complex analytic functions
24 comment The theorem is an algebraic criterion for the invertibility of functions. It talks about a precise sense in which having a nonsingular derivative on a domain implies that the function is injective. (However, there are some provisos here! I'll link to it later when I can upload it! (It's not complicated, I just don't want to do all the TeXing involved again just for MO))
Mar Injectivity bounds for complex analytic functions
24 comment Somehow you need the series to have the superlinear terms with negative coefficients. That is, take $$F(z) = id(z) - \sum_{|I|\geq 2} a_Iz^I$$ where $a_I\geq 0.$ Further suppose $F$ is defined on some polydisk at 0. If the derivative of $F$ is nonsingular throughout the intersection of the polydisk and the positive reals, then $F$ is injective.
Injectivity bounds for complex analytic functions
Mar comment It is some analogue of an algebraic theorem I proved; I wanted to know if it had a simple proof in complex variables. Your proof exhibits this fact.
In fact it should be that for holomorphic maps $F:\mathbb{C}^n\rightarrow \mathbb{C}^n$ of the form $$F(z)= \text{id}(z) + O(\|z\|^2)$$ the above holds (for appropriate statements of what the domain could be, some kind of polydisks). Thank you for your time.
May 1st 2010, 04:10 PM
Why is it that if B = A^-1, then B^-1 = A? I understand that whatever you do to one side, you have to do the same for the other side, but I don't understand how (B^-1)(B^-1) = B. Can anyone please explain?

May 1st 2010, 05:02 PM
What is this in regards to? What field are we working in? Matrices?

May 1st 2010, 09:06 PM
mr fantastic
$B^{-1} = (A^{-1})^{-1} = A$. Alternatively, note that $B B^{-1} = I$. But if $B = A^{-1}$ then $A^{-1} B^{-1} = I \Rightarrow A A^{-1} B^{-1} = A \Rightarrow I B^{-1} = A ....$

May 2nd 2010, 04:34 AM
The definition of $X^{-1}$, in general (matrices, elements of a group or field, etc.), is that $Y= X^{-1}$ if and only if $XY= I$ and $YX= I$, where $I$ is the "multiplicative identity" for the algebraic structure. If you know that $B= A^{-1}$ then you know that $AB= I$ and $BA= I$, by replacing X above with A and Y with B. But if we were to replace X with B and Y with A, we would get exactly the same equations! Therefore, $A= B^{-1}$.
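The argument can be sanity-checked numerically; any invertible matrix works (this snippet only illustrates the thread's point, it is not a proof):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det = 1, so A is invertible
B = np.linalg.inv(A)         # B = A^-1

# AB = BA = I characterises the inverse uniquely, so inverting B recovers A.
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
assert np.allclose(np.linalg.inv(B), A)
```

The symmetry of the definition (AB = I and BA = I say the same thing about A and B) is exactly why inverting twice gets you back where you started.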
Are you a poet or a mathematician? - Journal - Agile* Many geologists can sometimes be rather prone to a little woolliness in their language. Perhaps because you cannot prove anything in geology (prove me wrong), or because everything we do is doused in interpretation, opinion and even bias, we like to beat about the bush. A lot. Sometimes this doesn't matter much. We're just sparing our future self from a guilty binge of word-eating, and everyone understands what we mean—no harm done. But there are occasions when a measure of unambiguous precision is called for. When we might want to be careful about the technical meanings of words like approximately, significant, and certain. Sherman Kent was a CIA analyst in the Cold War, and he tasked himself with bringing quantitative rigour to the language of intelligence reports. He struggled (and eventually failed), meeting what he called aesthetic opposition: What slowed me up in the first instance was the firm and reasoned resistance of some of my colleagues. Quite figuratively I am going to call them the poets—as opposed to the mathematicians—in my circle of associates, and if the term conveys a modicum of disapprobation on my part, that is what I want it to do. Their attitude toward the problem of communication seems to be fundamentally defeatist. They appear to believe the most a writer can achieve when working in a speculative area of human affairs is communication in only the broadest general sense. If he gets the wrong message across or no message at all—well, that is life. Sherman Kent, Words of Estimative Probability, CIA Studies in Intelligence, Fall 1964 Kent proposed using some specific words to convey specific levels of certainty (right). We have used these words in our mobile app Risk*. The only modification I made was setting P = 0.99 for Certain, and P = 0.01 for Impossible (see my remark about proving things in geology). There are other schemes. Most petroleum geologists know Peter Rose's work. 
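Kent's idea of tying each phrase to a numeric band is easy to encode. The bin boundaries below are illustrative assumptions, not Kent's exact published values; only the P = 0.01 and P = 0.99 ends for 'impossible' and 'certain' come from the post:

```python
# Illustrative probability bands for words of estimative probability (WEPs).
# The phrases follow Kent's scheme; the exact cut-offs here are assumptions.
KENT_BINS = [
    (0.00, 0.02, "impossible"),
    (0.02, 0.15, "almost certainly not"),
    (0.15, 0.40, "probably not"),
    (0.40, 0.60, "chances about even"),
    (0.60, 0.85, "probable"),
    (0.85, 0.99, "almost certain"),
    (0.99, 1.01, "certain"),   # open upper bound so that p = 1.0 lands here
]

def wep(p):
    """Map a probability in [0, 1] to a word of estimative probability."""
    for lo, hi, phrase in KENT_BINS:
        if lo <= p < hi:
            return phrase
    raise ValueError("probability must be between 0 and 1")
```

A lookup like this is presumably close to what an app such as Risk* does internally: the user picks a phrase, the tool works with the number.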
A common language, with some quantitative meaning, can dull the pain of prospect risking sessions. Almost certainly.

Do you use systematic descriptions of uncertainty? Do you think they help? How can we balance our poetic side of geology with the mathematical?

As I was adding to the SubSurfWiki page for WEPs, I came across a wonderful article by Bernie O'Brien (1989), a physician. He interviewed 56 doctors about twenty-three words and phrases, asking them to place them on a probability scale. He then used the interquartile range of the responses as an indication of ambiguity. Here's how his data plot:

This is very interesting: words used to express equivocality and fence-sitting are themselves ambiguous and uncertain. That makes intuitive sense, but it's a fascinating insight into the language of uncertainty. O'Brien also compared the interquartile range to a 3-point rating of ambiguity, as given by the respondents. You can read more about it on the wiki page.

O'Brien, B (1989). Words or numbers? The evaluation of probability expressions in general practice. Journal of the Royal College of General Practitioners 39, p 98–100, March 1989. Link to PDF.

Reader Comments (13)

"Unwary readers should take warning that ordinary language undergoes modification to a high-pressure form when applied to the interior of the Earth. A few examples of equivalents follow:"

    High Pressure Form       Ordinary Meaning
    Certain                  Dubious
    Undoubtedly              Perhaps
    Positive proof           Vague suggestion
    Unanswerable argument    Trivial objection
    Pure iron                Uncertain mixture of all the elements

A similar exercise was done by the IPCC to translate their probabilities related to climate predictions into 'everyday' phrases of certainty/uncertainty.

Two examples I can think of that do BOTH 'poetry' and 'mathematics' very well are Philip Allen and Chris Paola. They both write wonderfully, with evocative descriptions and explanations of quantitative concepts for which they also lay out the equations. I love that second figure.
The problem, I think, when communicating certainty and uncertainty with a wider audience is that most people's thinking on probabilities seems to be rather binary in nature, and language tends to be forced into bimodal bins: 'probable' is interpreted as 'very likely' or 'almost certain', and 'probably not' translates as 'not at all likely'. So the fact that something described as probable might not happen 1 time in 3, or that something described as not probable will happen 1 time in 3, is not appreciated at the time, and is used as evidence of something fishy going on later. Then, of course, there are the particle physicists, who only accept existence past the 99.9th percentile or something.

@Brian: Great tip— thank you. I added a section on the IPCC's WEPs to the wiki page.

Thanks for that. The document I reference is full of interesting stuff about reporting on uncertain models. And I love the idea that we can be poets and mathematicians. My new goal in life!

@Chris: I think that's a really important point: the flip side of uncertainty is the certainty of the occasional anti-outcome. A 40% chance of showers means it will probably be dry. A 90% pass rate means one in ten fail. Perhaps this is intuitive for some people, maybe most scientists, but if the anti-outcome is especially bad (or good), then it's worth knowing about. I guess this is the basis for buying a lottery ticket!

There is one slippery class of event, though: the one-off. Things like drilling a single oil well, or predicting the outcome of a single football game, are conceptually tricky for me. There will not be a succession of trials, converging on some predicted frequency like dice rolls. The thing will happen once, and the probability will collapse into a Schrödinger-esque finality. The prediction is good or bad, the probability irrelevant in retrospect.
I'll go along with your dichotomy of people into poets and mathematicians, notwithstanding people who are competent at both, for they typically know when to use which. I see the problem that you start with a fuzzy estimate and fuzz it even worse by trying to express it in "everyday language". DON'T DO THAT. If people cannot be bothered to understand a numerical probability with error term (P = 0.83 ± 0.1) they don't deserve any information. Poetry is not the tool for transmitting quantitative information.

I "might" understand what I am "trying" to say, but the "chances" that your neurons will reconstruct a "similar" or even an "overlapping" mental map of my concept is "practically nil" or at least "unpredictably unreliable". The chart of words expressing uncertainty should adequately convince us that words alone will not succeed in communicating the information. Giving someone a warm-fuzzy feeling is not the same as expressing a fuzzy measurement.

Use math to express math. SI units are great for measurement. Non-standard units can work in context, too. I can deal with a 2x4 not having a 1:2 aspect ratio. I can deal with needing a 2x4 at least 10 feet long. I stop short of "a long board"... it takes poetic license to imagine scenarios where that is adequate info... "Timmy fell through the ice. Lassie, run and get a long board." (and call 911 while you are at it)

@Rik: Thanks for reading, and for the image of Lassie with a longboard in her mouth. I can see how I gave the impression I was proposing casting probabilities as WEPs, especially with the tables arranged as they are. But no — I am with you: I would rather cast WEPs as probabilities. I think there is a role for Kent et al's work in helping us do this more consistently, within an organization, say. Often, it matters more that we are consistent in our treatments than that we are accurate. Indeed, we can't have accuracy without consistency. The notion of fuzzy probabilities is one I have never played with.
At least in petroleum geoscience, we tend to draw the uncertainty into a key parameter—the expected volume of gas in a trap, say, or the return on the investment—and use a single probability. The probability therefore represents the chance of getting onto the distribution. There is then a separate probability distribution function, usually a log-normal one, to describe the parameter. I suppose it amounts to the same thing, but I find this more intuitive than the idea of error bars on probabilities. Maybe I'm just used to it.

McLane et al (2008), AAPG Bulletin, describe a similar study to your O'Brien one, but done on geoscientists. The results (p1437) are absolutely shocking, but I suspect that there is a significant chance that it is possible that they perhaps could be invalidated by one study participant giving a 10% ("P10") confidence answer when everyone else was giving a 90% confidence answer. Which is a shame, because it would be a useful dataset otherwise.

@Richie: Wow, that's a great reference, thank you. The entire paper is online at the US Securities and Exchange Commission. Highly recommended, and essential if you deal with portfolio management or reserve reporting. The figure you referred to is on page 7 of that version of the paper, page 1437 in the original Bulletin. I have also added it to the wiki page. As you say, it's clear from the next figure, their Figure 4, that one respondent may have misinterpreted the exercise, though it seems odd that he or she would then give Proved a 25% 'confidence' level, and (assuming it's the same person), Reasonable certainty a lower 10% level. Shocking indeed.

@Matt, Excellent article and discussions in the comments. I just wanted to discuss it in a fairly heavy industry context. What you have discussed is a common and very important problem I have come across during my time in an exploration asset.
As geoscientists, we often have little training in statistics, and the probability theory it is based on was designed for finite games of chance. For me, it has so far been best to review other subjects for publications by people with similar viewpoints: http://en.wikipedia.org/wiki/Mathematical_economics (scroll down to Criticisms and defenses). Your reading list and views suggest this would not be new for you, and it is refreshing to come across these views in a fellow geoscientist.

The problem raised here is amplified in current oil and gas exploration practice. Your app, Risk*, is an example of a series of factors multiplied together. The chance of success (PG, POS or whichever abbreviation is used) is an aggregation of all these parameters. It is commonly encouraged not to consider a chance of success directly, but to consider each of the risk factors (independently) and then take the outputted PG. I, for one, cannot calculate the conditional probability of 5 factors on the fly during an intense discussion of a prospect.

So to word it, two risk factors with a WEP of "probably not" = "almost certainly not"? (0.3 * 0.3 = 0.09). Three risk factors with a WEP of "probably not" = (almost) impossible (0.3 * 0.3 * 0.3 = 0.027). This rarely makes sense and is often fudged around and played with until a chance of success is found that pleases the group (or, much worse, the manager!). There are different "risking factor" systems applied by different companies. The problem becomes more acute the more factors, methods and segments are included. This also applies to other methods like Bayesian modification: how can anyone calibrate the inputs to a DFI upgrade/downgrade?

The key thing missing in my general rant so far is where and why we are making a chance of success in the first place.
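The arithmetic the comment walks through is just a product of independently-assessed factors. A minimal sketch (the 0.3 "probably not" value is the commenter's; the function name is mine):

```python
from functools import reduce
from operator import mul

def chance_of_success(factors):
    """Aggregate chance of success as the product of independent
    risk-factor probabilities (the common industry shortcut)."""
    return reduce(mul, factors, 1.0)

# Two factors at "probably not" (0.3) -> "almost certainly not":
p2 = chance_of_success([0.3, 0.3])        # ~0.09
# Three such factors -> close to "impossible":
p3 = chance_of_success([0.3, 0.3, 0.3])   # ~0.027
```

Multiplying shrinks the result quickly, which is exactly the commenter's complaint: a handful of merely doubtful factors compounds to near-impossibility.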
The chance of success is typically applied to allow for an estimate of the project/opportunity value, and for the COS and volumes to cross company benchmarks (despite Rose pointing out the nonsense in this) and help managers high-grade their portfolio so they can make a decision. This is the key thing: everything we do boils down to making an investment decision, and a series of good-looking numbers gives managers a good basis to take a decision and to fall back on if the well is dry! So if we recognize that the input to the risking numbers is flawed, and therefore the decision basis is flawed, then why do we use it? Every time I ask that, I receive a dumbfounded look and a response that we have to have numbers to take a decision on!

As you mentioned in your reply to @Brian (October the 14th, 2011), oil wells are not common. Even the biggest companies' portfolios are very small data sets. You mentioned the importance of being consistent in risking; however, even if consistency could be achieved, this is still statistical inference. Statistical inference that has a severe self-reference problem due to the small number of exploration wells drilled each year. I feel that a company whose 30% success rate matches a 30% prognosis on their exploration portfolio has done very little apart from getting two numbers to coincidentally match over a specific time-frame.

I will stop now; there are many additional things that could be added. For example, there is an important connection between the risk and the data quality: can we ever say "almost certainly not" given an immature data set, and for that matter, what defines an immature data set? Do you have any ideas for a different system for valuing and risking exploration prospects? I have only very recently found your website; this is a long comment, I just had to get it out of my system!

@Adam: Thanks for the awesome comment. I'm very grateful that you found the time to bash it out; rants on uncertainty and risk are my favourite kind of rant!
I agree — there are more problems with risk analysis than there are non-problems. The best you can hope for is consistency across a portfolio, and even that is probably impossible to achieve. And besides, decisions are often made on non-technical grounds (because of commitments, politics, egos, etc.).

Fundamentally, I feel like risking is hard because it's more like Schrödinger's cat than a roulette wheel, because the events we are risking have already happened. In fact, it's not even like the quantum cat, because the events were not determined by chance, but by a natural system. I don't know if it's possible to model that physical system with enough precision to make chance (i.e. stochastic models) a weaker tool, but I think it should be our goal. On this point, I do have some ideas about another way to do it, but I am having trouble manifesting them. As a sort of prelude, I just wrote an article about the subject... it's due out this month in the CSEG Recorder. It will be freely available online, but not till about June. Get in touch and I'll gladly send it to you.

The most beneficial aspect will always be that we think through every aspect (or risk factor) and discuss them, rather than the output of a risk element for a model. I don't think we can model that system, at least not with current technology. Your scales of geoscience (I've continued to browse your excellent site) reflect the limitations of that, and then there are additional layers of uncertainty on every one of our measurements. So it may as well be Schrödinger's cat. For me, perhaps some form of qualitative guide would suffice, underpinned by high-quality technical work, but then it will not fit into calculating a dollar value for the project. There will always be abuse or mistakes using very advanced quantitative systems unless everyone who uses them has a thorough understanding of the model's background.
Perhaps making better WEPs could bridge the gap. As you mentioned, the decisions are not always on technical grounds; often the current models are being manipulated for egos and political reasons, yet they still form the defence and the justification. I sent you a mail with my address; I would be interested in viewing the article.
{"url":"http://www.agilegeoscience.com/journal/2011/10/13/are-you-a-poet-or-a-mathematician.html","timestamp":"2014-04-20T10:53:09Z","content_type":null,"content_length":"95711","record_id":"<urn:uuid:4c7ecd48-3148-4ffa-a9b0-272c3bbd86b7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
MAJOR CONCERNS AND CHALLENGES

Bienvenido F. Nebres, S.J.
Ateneo de Manila University

As the paper of Prof. Zhang and other papers for TSG 22 note, the countries of East Asia have been very successful in teaching mathematics fundamentals, particularly at the elementary and secondary levels. The performance of their students in all kinds of international comparative assessment bears this out. Their concern is thus on moving forward and creating a methodology and an environment which would build on these achievements and move students towards greater creativity.

In Southeast Asia, notably in the bigger countries such as the Philippines, Indonesia, Thailand and Malaysia, the situation is quite different. The concern is the relatively weak achievement of students and the school systems in learning basic mathematics. The concern is thus on success in teaching mathematics fundamentals. The focus of attention over the last decades has been on:

□ curriculum and textbook reform, a continuing search for a curriculum that would lead to greater success in teaching fundamentals
□ teachers and teacher-training, a concern for the weak mathematics foundations of teachers and for a better pedagogy for them

The papers on mathematics education prepared for our TSG 22 from Indonesia (Susanti Linuwih), from Laos (Beth Southwell), from Malaysia (Lim Chap Sam) show that we need to continue to address these concerns of mathematics achievement and teacher capabilities and preparation in different ways. In the Philippines they remain the dominant topics of discussion. Thus these concerns remain valid today in Southeast Asia. What needs reflection in ICME 9 and in the coming joint meeting in Singapore of the East and Southeast Asian countries (EARCOME and SEACME) in 2002 is the method we in the Southeast Asian countries have followed in curriculum and teacher-training reform.
Although our countries are geographically closer to the East Asian countries, our mathematics education has been more influenced by colonial history, notably by the United States and the United Kingdom. The typical method of reform from the United States has been:

□ a new theory of mathematics education (new math, back to basics, problem-solving approach, realistic mathematics education, etc.)
□ a pilot program, which usually succeeds in showing that the new theory is better than the old
□ implementation of a reform program along the new lines across the country

While this approach may have worked well in the U.S. with its more decentralized school system and larger resources, assessment of the impact of such curricular reform in, say, the Philippines or Indonesia has not shown much improvement in mathematics achievement. We might say that reform has focused on the intended curriculum (as carried out through a new curriculum, new textbooks, and teacher-training for this curriculum). Assessment years later of the impact of the reform on the achieved curriculum shows little improvement. We might guess that the missing link is insufficient attention to the implemented curriculum (what actually goes on in the classroom), not in specialized pilot programs but in a broad range of actual classrooms. We have used this way of analysis in looking at the experience of improvement at the Ateneo de Manila Grade School and High School.
In particular, over an intensive weeklong seminar we used:

(1) The TIMSS videotapes of actual classroom teaching in the U.S., Germany, and Japan, as a way of comparing how we ourselves teach mathematics
(2) Liping Ma's "Knowing and Teaching Elementary Mathematics", to reflect on the mathematics we teach to present and future teachers and how this mathematics relates to the mathematics they actually need in the classroom
(3) The description of the process of Lesson Study in Stigler and Hiebert's "The Teaching Gap", to reflect on how actual classroom experience can become the basis for curricular and teacher-training reform

The experience was that it was very effective in helping teachers reflect on:

□ our way of teaching mathematics, and that there are other ways from which we can learn best practices
□ the mathematics content in the education of future teachers. There was much discussion of Liping Ma's Profound Understanding of Fundamental Mathematics (PUFM)
□ how teachers and actual classroom experience can become the key element in an incremental process of curricular and teacher-training reform

Since I assume that the participants in TSG 22 are familiar with these studies I will not go into a description of them here, though I would be happy to discuss them further if necessary. Our conclusion from our workshops at the Ateneo de Manila is that the concern for the Philippines, and probably for other countries in Southeast Asia, is that we do have to continue with curricular and textbook reform and with teacher-training reform. The challenge is how to have these efforts empirically based on actual classroom and teaching experience.
The challenges to doing this are great, for:

□ this would require that the center and energy for reform would come from teachers and classrooms
□ mathematics education departments and mathematics education experts should play supportive and empowering roles, rather than more dominant roles
□ the mathematics education system would need to be organized for the learning from teachers and classrooms to be recorded, disseminated, discussed and evaluated for the larger system

In any case, the challenge is how to center curricular and teacher-training reform more on the actual practice of teachers and classrooms and how to work in a patient step-by-step incremental process of improvement, as opposed to the "sweep away the past and begin anew" approach of previous reforms. Since TSG 22 and the forthcoming joint meeting in 2002 of SEACME and EARCOME bring together mathematics education experts and teachers from East and Southeast Asia, this may be an opportunity for us in Southeast Asia to engage in a dialogue with and learn best practices from colleagues in East Asia.
{"url":"http://www.math.admu.edu.ph/tsg22/nebres.htm","timestamp":"2014-04-16T10:10:18Z","content_type":null,"content_length":"7846","record_id":"<urn:uuid:55de777b-34bb-41f9-a3ce-ef684724a9bc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the biggest possible earthquake?

The size of an earthquake is directly related to the area of the fault plane and how much it moves. The bigger the area and the more it moves, the bigger the earthquake. The largest quakes occur on subduction zones. To imagine the fault plane of a subduction zone, imagine a piece of paper lying on its side. The paper has a large surface area. Faults such as the San Andreas are more like narrow ribbons lying on edge and therefore have a more limited surface area. Although a 'mega-quake' of magnitude 10 is theoretically possible, the probability of such a large quake is extremely small. Since we can calculate how big a fault plane would be required for such a large quake, we can compare this to the known faults of the world. The fact is, there just isn't a fault big enough to make such a quake. It would take 32 of the recent Sumatra 9.0 earthquakes to equal a magnitude 10. The largest earthquake on record was a whopping magnitude 9.5, which occurred in Chile on May 22, 1960.
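The "32 Sumatra quakes" figure follows from the standard magnitude-energy relation: radiated energy scales as 10^(1.5·M), so each whole magnitude step releases about 31.6 times more energy. A quick check:

```python
def energy_ratio(m_big, m_small):
    """Ratio of seismic energy radiated by earthquakes of moment
    magnitudes m_big and m_small; energy scales as 10**(1.5 * M)."""
    return 10 ** (1.5 * (m_big - m_small))

# A magnitude 10 vs. the Sumatra magnitude 9.0:
print(round(energy_ratio(10.0, 9.0)))  # 32

# Chile 1960 (M 9.5) vs. Sumatra (M 9.0):
print(round(energy_ratio(9.5, 9.0), 1))  # 5.6
```

The same half-step relation shows why the record 9.5 Chile quake is still well short of a hypothetical magnitude 10.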
{"url":"http://www.experts123.com/q/what-is-the-biggest-possible-earthquake.html","timestamp":"2014-04-18T16:36:14Z","content_type":null,"content_length":"43713","record_id":"<urn:uuid:b2e2145f-2413-4f5b-b68e-140b9a5f300d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Cam Quinlan : Blog

We did more with the measures of central tendency today. Do WS 10.2.

Today we did more modeling with quadratic functions. Today we used the calculator for the process. Do the quadratic functions worksheet.

Today we took a short quiz over our knowledge of vectors at this point. We then learned how to apply vectors to story problems. Do: p.511 {41-46, 48-52}

We took our unit 9 test today. We then did WS 10.1.

Today we did some modeling with polynomial functions: p.309 {9-11, 18-20, 27-29}, and p.383 {15-18}

Today we did more with vectors: p.511 {21-30 x3, 33-36, 57-59}

Today we looked at ways to distinguish between linear, quadratic, and exponential data. Do WS 9.11.

Today we reviewed the chapter for the test tomorrow. You can check the answers for the review here.

Today we took part 2 of our chapter 5 test.

Today we finished the worksheet from yesterday, as well as a new one today. Both require the use of the graphing calculator.

Mr. Quinlan
{"url":"http://blogs.canby.k12.or.us/quinlanc/blog","timestamp":"2014-04-21T02:12:05Z","content_type":null,"content_length":"19385","record_id":"<urn:uuid:22b1d33b-411b-40fa-9375-ae9b131521a1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
fluid mechanics (idea)

Fluid mechanics is the study of the behaviour of fluids under external forces. In general, a distinction is made between solids and fluids (lumping the gasses in with the liquids). The distinction lies in the fact that a shear stress applied to a solid results in a distinct, or measurable, distortion, whereas a shear stress applied to a fluid results in a continuing distortion, albeit one with a distinct, again measurable, rate. For a large part the mechanics of liquids and gasses are identical. One major difference is that only liquids have a free surface level. Another difference is that gasses are generally more compressible than liquids.

Fluid characteristics

When dealing with fluid mechanics, one uses a continuum model to describe the behaviour of the fluid, thereby bypassing the intricacies of the molecular interactions on a lower level.

Mass per unit of volume gives the density of the material under study, generally denoted with the symbol ρ (rho). The dimension for density follows from the definition: [ρ] = M L^-3; the SI unit is 1 [kg/m^3]. The density of a pure liquid such as water depends only on temperature and pressure. In most cases where one uses fluid mechanics (like in civil engineering) the density can be approximated by a constant value instead of including the influence of temperature and pressure in the calculations.

Constitutive equations

Constitutive equations give the relation between the stresses in a material and the resulting distortion of that material. The three areas for which we need equations are:

1. Stress
2. Compressibility
3. Fluidity

In liquids and gasses tensile stress is a very rare occurrence, and therefore the pressure (p) is introduced, which is equivalent to minus the isotropic part of the stress in a liquid or gas:

p = -σ_0 = -(1/3) (σ_xx + σ_yy + σ_zz)

Variations in the isotropic part of the stress (tension or pressure) result in changes in the volume of the fluid, while variations in the deviator stress (the remainder) result in changes in the shape of the fluid (usually resulting in the fluid being in motion).

If the pressure increases, fluids become compressed. The relation between volume (V) and pressure (p) for fluids under the idealization of linear elasticity is expressed using the compressibility modulus K:

ΔV / V = -Δp / K

The value of K increases with increasing pressure. However, for a large range of pressures the value of K for water (without gas bubbles!) is practically constant, namely equal to roughly 2.2 x 10^9 N/m^2.

The dynamic viscosity η determines the fluidity of a liquid or gas, and has the dimension M L^-1 T^-1; the SI unit is 1 [Pa s] = 1 [kg m^-1 s^-1]. Usually the kinematic viscosity ν is used in calculations, which is defined as follows:

ν = η / ρ

This is just the dynamic viscosity divided by the density of the material in question.

The rest

The above information is the main background needed to understand fluid mechanics, at least in the context of situations in the size range usually encountered in civil engineering. One of the things still left out here is capillary attraction, which is a phenomenon that is only of importance in very small-scale situations. Also missing in the above is the importance of dimensionless parameters in fluid mechanics. For example, the discussion of the compressibility modulus K can be followed further to the definition of the Mach number, which is the ratio of the velocity of the flow of a medium to the velocity of sound in that medium (which is linked to the compressibility of the medium → sound == compression waves). The Reynolds number is another such dimensionless parameter, which can be arrived at by following the discussion of the fluidity further. The Reynolds number gives an indication whether a certain flow situation is laminar or turbulent. It is also used to do scale-model experiments on differing scales, the idea being that if the Reynolds number is kept constant the simulation is dynamically similar to the original.

An adaptation of one of my college textbooks - node your homework. My first nodeshell rescue, August 16, 2001
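A small numerical illustration of the quantities above (the water values are standard textbook figures; the pipe-flow numbers are illustrative assumptions, not taken from this write-up):

```python
rho = 1000.0    # density of water [kg/m^3]
eta = 1.0e-3    # dynamic viscosity of water at ~20 degrees C [Pa s]
K   = 2.2e9     # compressibility (bulk) modulus of water [Pa]

# Kinematic viscosity: nu = eta / rho
nu = eta / rho  # [m^2/s], about 1e-6 for water

# Relative volume change under a pressure increase dp: dV/V = -dp / K
dp = 1.0e7            # 10 MPa, roughly 1 km of water depth
dV_over_V = -dp / K   # about -0.45%: water is nearly incompressible

def reynolds(v, L, nu):
    """Reynolds number Re = v * L / nu for a characteristic
    velocity v [m/s] and length L [m]; it indicates whether a
    flow situation is laminar or turbulent."""
    return v * L / nu

Re = reynolds(1.0, 0.1, nu)   # 1 m/s in a 0.1 m pipe -> ~1e5, turbulent
```

Keeping Re constant while changing v and L is exactly the scale-model trick mentioned above: the smaller flow stays dynamically similar to the original.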
{"url":"http://everything2.com/user/getha/writeups/fluid+mechanics","timestamp":"2014-04-17T04:42:26Z","content_type":null,"content_length":"28906","record_id":"<urn:uuid:d470dccd-8ed6-48d8-bf05-31abfd2a7db4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Robbinsville, NJ Math Tutor Find a Robbinsville, NJ Math Tutor ...In addition to the subject matter, I assist my students with test-taking skills and study habits that will help them for many years to come. An unsolicited testimonial from a recent student that I tutored in SAT Preparation: "I got my SAT results, and I'm pleased to tell you that they far excee... 19 Subjects: including algebra 1, algebra 2, calculus, ACT Math ...I try to relate the concepts to real-life situations so they have a reference point and then they can make the connection. My students usually feel much better about the algebra and then they have a solid foundation for future math courses/proficiency tests. "Pre-algebra" skills are the foundat... 10 Subjects: including ACT Math, algebra 1, algebra 2, geometry ...I have nearly completed a PhD in math (with a heavy emphasis towards the computer science side of math) from the University of Delaware. I have 20+ years of solid experience tutoring college-level math and theoretical computer science, having mostly financed my education that way. I also have 7+ years experience teaching college-level math. 11 Subjects: including differential equations, logic, calculus, precalculus ...I currently hold my substitute teacher certification in Middlesex County. I have tutored in facilities, schools and freelance. I work comfortably with students in grades 4-12 and college students who are in maths up to calculus. 24 Subjects: including calculus, Microsoft Excel, geometry, Microsoft Word ...It is universally acknowledged that our students in China are required a high level mathematics since a young age. I do believe math needs skills and methods which is far above a large number of exercises. Additionally, I am a Statistics and Mathematics major. 
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
{"url":"http://www.purplemath.com/robbinsville_nj_math_tutors.php","timestamp":"2014-04-18T08:28:43Z","content_type":null,"content_length":"24167","record_id":"<urn:uuid:bd5928ca-37b5-40b2-910a-5a719eb6551b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Petaluma Precalculus Tutor ...As opposed to teaching, I help the student develop their own tools to teach themselves. I break down the ACT math section into the major classes of fundamental problems that will be presented. From each fundamental problem, we will work from the base easiest example and then increase in complexity. 37 Subjects: including precalculus, chemistry, physics, statistics ...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I was trained in and provided materials for each of these topics. I often find, when working with my stu... 20 Subjects: including precalculus, calculus, Fortran, Pascal ...I am a chemist because of the way it has helped me to see the world and my greatest achievement in life has been to help people see the world through the eyes of a chemist. As a bachelor degree holder in chemistry I have completed multiple levels of chemistry and have tutored them all along the ... 6 Subjects: including precalculus, chemistry, algebra 1, prealgebra I'm a retired engineer and math teacher with a love for teaching anyone who wants to learn. As an engineer, I regularly used all levels of math (from arithmetic through calculus), statistics, and physics. I hold a California Single Subject Teaching Credential in math and physics (I taught high school math from pre-algebra to geometry). 26 Subjects: including precalculus, reading, calculus, statistics ...I actually co-founded the Bioengineering Advising Representatives Program in the Bioengineering department at UC Berkeley with the goal of coaching undergraduate students interested in graduate school how to best prepare for the application process and the challenges faced as they pursue advanced... 24 Subjects: including precalculus, chemistry, physics, calculus
{"url":"http://www.purplemath.com/Petaluma_Precalculus_tutors.php","timestamp":"2014-04-18T19:07:34Z","content_type":null,"content_length":"24102","record_id":"<urn:uuid:18111ea7-0941-4a2a-a3ea-27da0ceeb869>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating NFL Win Probabilities for Matchups Between Teams of Various Records » Sports Reference
Posted by Neil on October 30, 2012

WARNING: Math post.

PFR user Brad emailed over the weekend with an interesting question: "Wondering if you've ever tracked or how it would be possible to find records vs. records statistics....for instance a 3-4 team vs. a 5-2 team...which record wins how often? but for every record matchup in every week."

That's a cool concept, and one that I could answer historically with a query when I get the time. But in the meantime, here's what I believe is a valid way to estimate that probability...

1. Add eleven games of .500 ball to the team's current record (at any point in the season). So if a team is 3-4, their "true" wpct talent is (3 + 5.5) / (7 + 11) = .472. If their opponent is 5-2, it would be (5 + 5.5) / (7 + 11) = .583.

2. Use the following equation to estimate the probability of Team A beating Team B at a neutral site:

p(Team A Win) = Team A true_win% * (1 - Team B true_win%) / (Team A true_win% * (1 - Team B true_win%) + (1 - Team A true_win%) * Team B true_win%)

3. You can even factor in home-field advantage like so:

p(Team A Win) = (Team A true_win% * (1 - Team B true_win%) * HFA) / (Team A true_win% * (1 - Team B true_win%) * HFA + (1 - Team A true_win%) * Team B true_win% * (1 - HFA))

In the NFL, home teams win roughly 57% of the time, so HFA = 0.57.

This means in Brad's hypothetical matchup of a 5-2 team vs. a 3-4 team, we would expect the 5-2 team to win .583 * (1 - .472) / (.583 * (1 - .472) + (1 - .583) * .472) = 61% of the time at a neutral site.

Really Technical Stuff: Now, you may be wondering where I came up with the "add 11 games of .500 ball" part. That comes from this Tangotiger post about true talent levels for sports leagues. Since the NFL expanded to 32 teams in 2002, the yearly standard deviation of team winning percentage is, on average, 0.195. This means var(observed) = 0.195^2 = 0.038.
The random standard deviation of NFL records in a 16-game season would be sqrt(0.5*0.5/16) = 0.125, meaning var(random) = 0.125^2 = 0.016. var(true) = var(observed) - var(random), so in this case var(true) = 0.038 - 0.016 = 0.022. The square root of 0.022 is 0.15, so 0.15 is stdev(true), the standard deviation of true winning percentage talent in the current NFL.

Armed with that number, we can calculate the number of games G a season would need to contain in order for var(true) to equal var(random). Since var(random) over G games is 0.5*0.5/G, setting the two equal gives:

G = (0.5 * 0.5) / var(true)

In the NFL, that number is 11 (more accurately, it's 11.1583, but it's easier to just use 11). So when you want to regress an NFL team's W-L record to the mean, at any point during the season, take eleven games of .500 ball (5.5-5.5), and add them to the actual record. This will give you the best estimate of the team's "true" winning percentage talent going forward.

That's why you use the "true" wpct number to plug into Bill James' log5 formula (see step 2 above), instead of the teams' actual winning percentages. Even a 16-0 team doesn't have a 100% probability of winning going forward -- instead, their expected true wpct talent is something like (16 + 5.5) / (16 + 11) = .796. (For more info, see this post, and for a proof of this method, read what Phil Birnbaum wrote in 2011.)

This entry was posted on Tuesday, October 30th, 2012 at 7:20 pm and is filed under Announcement, Pro-Football-Reference.com, Stat Questions. You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.

3 Responses to "Estimating NFL Win Probabilities for Matchups Between Teams of Various Records"

1. [...] originally posted this at the S-R Blog, but I thought it would be very appropriate here as [...]

2. What's really great is that you can prove this using Bayes' Theorem... Say we have a 3-4 team.
Their observed (mean) record is 3 / 7 = 0.429, and the binomial standard deviation of that is (sqrt((W+L)*WPct*(1-WPct)))/(W+L) = (sqrt((3+4)*0.429*(1-0.429)))/(3+4) = 0.187. Since we’re regressing halfway to the mean, we’ll use a 0.500 WPct as the Bayesian prior mean, with a standard deviation of 0.15 (aka the standard deviation of true NFL winning percentage talent that we derived in the post). Bayes’ Theorem states that: Result_mean = ((prior_mean/prior_stdev^2)+(observed_mean/observed_stdev^2))/((1/prior_stdev^2)+(1/observed_stdev^2)) Plugging in the means and standard deviations we found above, we get: Result_mean = ((0.5/0.15^2)+(0.429/0.187^2))/((1/0.15^2)+(1/0.187^2)) Which equals… 0.472. Or, exactly the same “true” WPct talent we found via (W + 5.5) / (G + 11). Pretty cool, right? 3. Does this mean you’d add 11 games of 0.500 ball to last season’s record to get the best projection of this season’s record?
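The regression-to-the-mean and log5 steps above can be sketched in a few lines of Python (the function names are mine, not from the post):

```python
def true_wpct(wins, losses, regression_games=11):
    """Estimate 'true' winning-percentage talent by adding
    regression_games of .500 ball to the observed record."""
    return (wins + 0.5 * regression_games) / (wins + losses + regression_games)

def log5(p_a, p_b, hfa=0.5):
    """P(team A beats team B) via Bill James' log5 formula.
    hfa=0.5 means a neutral site; use 0.57 for an NFL home team A."""
    num = p_a * (1 - p_b) * hfa
    den = num + (1 - p_a) * p_b * (1 - hfa)
    return num / den

p_52 = true_wpct(5, 2)   # (5 + 5.5) / (7 + 11) ≈ 0.583
p_34 = true_wpct(3, 4)   # (3 + 5.5) / (7 + 11) ≈ 0.472
print(round(log5(p_52, p_34), 2))  # the 5-2 team wins ≈ 61% at a neutral site
```

Setting `hfa=0.5` makes the formula collapse to the plain log5 equation from step 2, so the same function covers both cases.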
{"url":"http://www.sports-reference.com/blog/2012/10/estimating-nfl-win-probabilities-for-matchups-between-teams-of-various-records/","timestamp":"2014-04-16T04:31:43Z","content_type":null,"content_length":"34495","record_id":"<urn:uuid:c657d9f9-e286-47fa-a372-016ff05a015b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Concavity and the Derivative of e^x

December 4th 2009, 03:16 PM #1

Let f be the function given by f(x) = 2xe^x. The graph of f is concave down when
A) x < -2
B) x > -2
C) x < -1
D) x > -1
E) x < 0

I know that you can use the 2nd derivative to find concavity: if f''(x) is negative, the graph is concave down, and if f''(x) is positive, it is concave up. However, I don't know how to find f''(x) of this function. Please help.

December 4th 2009, 03:30 PM #2

f(x) = 2x · e^x

Product rule:

f'(x) = 2x · e^x + 2e^x
f'(x) = 2e^x(x + 1)

Now do the product rule again.

December 4th 2009, 03:48 PM #3

I'm guessing the derivative of 2e^x is still 2e^x, is that OK? I distributed, but I don't know how to solve from here (I'm not quite sure how to use the natural log).

December 4th 2009, 04:01 PM #4

f''(x) = 2e^x(x + 2) = 0
x = -2

Since 2e^x > 0 for every x, f''(x) < 0 exactly when x + 2 < 0, so the graph is concave down for x < -2 (choice A).
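As a numerical sanity check on the algebra above (not part of the original thread), a central finite difference approximates f'' and confirms the sign change at x = -2:

```python
import math

def f(x):
    return 2 * x * math.exp(x)

def second_derivative(g, x, h=1e-4):
    # central finite-difference approximation of g''(x)
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

# Analytically f''(x) = 2*e^x*(x + 2), so the sign flips at x = -2:
print(second_derivative(f, -3))  # negative -> concave down
print(second_derivative(f, -1))  # positive -> concave up
```

At x = -1 the analytic value is 2e^(-1) ≈ 0.7358, which the approximation matches to several decimal places.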
{"url":"http://mathhelpforum.com/calculus/118530-concavity-derivative-e-x.html","timestamp":"2014-04-21T04:52:12Z","content_type":null,"content_length":"46788","record_id":"<urn:uuid:a1ad5765-2d39-4693-846d-47cc7c757524>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
4.5 Implementation

Conceptually, the operation of Eq. (12) can be performed classically by matrix multiplication. However, since the matrices have 2^n rows, evaluating these products classically quickly becomes impractical; quantum computers, in contrast, can rapidly perform many matrix operations of this size. Here we show how this is possible for the operations used by this algorithm.

For describing the implementation, it is useful to denote the individual components in a superposition explicitly. Traditionally, this is done using the ket notation introduced by Dirac [18]. For instance, the superposition described by the state vector of Eq. (1) can be equivalently written in this notation; an example of these alternate, and equivalent, notations is:

Tad Hogg
Feb. 1999
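Eq. (12) itself is not reproduced in this excerpt, but the classical matrix-multiplication view can be sketched with a stand-in operator (the Hadamard matrix below is my illustrative choice, not the paper's operator):

```python
import math

# One classical step: a matrix-vector product psi' = U psi over the
# 2^n-dimensional state space (here n = 1, so vectors have length 2).
def apply(matrix, state):
    return [sum(m * s for m, s in zip(row, state)) for row in matrix]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard, a 2x2 unitary

psi = [1.0, 0.0]        # amplitudes of the state |0>
psi = apply(H, psi)     # equal superposition of |0> and |1>
print(psi)
```

Classically this product over an n-variable state costs on the order of (2^n)^2 operations, which is exactly the exponential blow-up the passage refers to.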
{"url":"http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume10/hogg99a-html/node13.html","timestamp":"2014-04-20T06:35:03Z","content_type":null,"content_length":"3285","record_id":"<urn:uuid:ab531956-4b4f-4a4f-bb80-ed479c9d9186>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Nested generics and ugly syntax

05-23-2010, 01:17 PM

While I know that generics will always be somewhat ugly in syntax, there are limits to what is feasible for programming. I wonder if the following could not be simplified, e.g. make part of it just copy its generics from another class.

I'm doing mathematical simulations over certain (finite) fields, with different internal datatypes. So I work with generics: Field<ElementType,VectorType> can be
□ Field<Byte,byte[]> for small fields,
□ Field<Short,short[]> for large fields,
□ Field<Boolean,BitSet> for the binary field, etc.

For matrices, I then need to use FieldMatrix<ElementType, VectorType, FieldType<ElementType,VectorType>>, hence declaring element and vector type twice. When considering structures over those fields, things seem to get arbitrarily ugly. Projective points are just ProjectivePoint<ElementType, VectorType, Field<ElementType, VectorType>>, projective lines are similarly ProjectiveLine<ElementType, VectorType, Field<ElementType, VectorType>>, and the projective space is then ProjectiveSpace<ProjectivePoint<ElementType, VectorType, Field<ElementType, VectorType>>, ProjectiveLine<ElementType, VectorType, Field<ElementType, VectorType>>, Field<ElementType, VectorType>, ElementType, VectorType>.

And it gets even worse. Now I had to type ElementType and VectorType 6 times, but for embedding projective spaces in each other, I have to type it 16 times. And it goes up. Am I doing this wrong, or do you see any way I could do this more efficiently? I don't mind a long header, but I'd like to make it simple for people to contribute to my code. If GF implements Field<Byte,byte[]>, is there any way I can type new ProjectiveLine<GF>(parameters), rather than ProjectiveLine<Byte,byte[],GF>?
05-25-2010, 01:07 AM

I'm not sure I understand your problem 100%, but maybe this is helpful:

    class Field<ElementType, VectorType> { }

    class GF extends Field<Byte, byte[]> { }

    class Test {
        GF gf = new GF(); // no need for type parameters here at all
    }

05-26-2010, 04:06 AM

I guess if I were lazy and I were doing this, ...and I am in fact lazy, I'd make the generic key off only a single type. So instead of using Byte and byte[], I'd be lazy and just use Byte, and then use Byte[]:

    public class Junk<T extends Number> {
        public void stuff(T myT, T[] myTArray) {
            for (T thisT : myTArray) {
                // ...
            }
        }
    }
{"url":"http://www.java-forums.org/advanced-java/29102-nested-generics-ugly-syntax-print.html","timestamp":"2014-04-17T22:24:59Z","content_type":null,"content_length":"7024","record_id":"<urn:uuid:8b85cec4-98c7-4e0b-8208-fca9f7c48b7b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: RE: Multicollinerity test in IV regression

From: Stas Kolenikov <skolenik@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: Multicollinerity test in IV regression
Date: Wed, 13 Oct 2004 14:56:13 -0400

On Wed, 13 Oct 2004 18:10:10 +0100, Nick Cox <n.j.cox@durham.ac.uk> wrote:
> I assert that multicollinearity is a property of the predictors and does not
> depend on what you do with them before, during or after any examination of
> multicollinearity.

Indeed, multicollinearity has nothing to do with the estimation method, but rather is an intrinsic property of the regressor configuration. Any good regression book (not an econometrics book!) would have a discussion of multicollinearity. One of the basic references (actually, written by economists) is Belsley, Kuh and Welsch; other books to look at are Fox (it seems to me he is in psychology, although I am not sure, hence his examples are more pertinent for social sciences), or the classic text by Draper and Smith. My advisor at UNC, Richard Smith, has compiled a very modern and comprehensive text on regression, but just does not seem to have time to polish it for publication; otherwise, this would be the default reference I would provide.

There are no "formal" tests of collinearity; all the measures are ad hoc. The most advanced one is to use singular value decomposition of the regressor matrix and look at the singular values close to zero -- they would correspond to the linear combinations that do not have much variability, and thus cannot be estimated with sufficient precision. Principal component analysis of the regressor matrix serves the same purpose. Stata has the -vif- (variance inflation factors) command that shows by how much the variance of the estimated coefficients goes up compared to the imaginary case should the regressors be orthogonal to each other.
The methods to deal with collinearity are tightly related to variable selection methods and regularization approaches, such as ridge regression, principal components regression, lasso, etc. Again, Richard Smith's unpublished manuscript deals with them quite nicely, and among the published sources, I would recommend Hastie, Tibshirani and Friedman's book "The Elements of Statistical Learning".

Stas Kolenikov
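As a rough illustration of what a VIF measures (toy data of my own, not a Stata feature): the variance inflation factor for a predictor is 1/(1 - R^2), where R^2 comes from regressing that predictor on the others; with just two predictors, that R^2 is simply their squared correlation.

```python
def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1]   # nearly a copy of x1 -> highly collinear

r2 = corr(x1, x2) ** 2
vif = 1 / (1 - r2)               # variance inflation factor for x1 (or x2)
print(round(vif, 1))             # a large VIF (>> 10) signals collinearity
```

With orthogonal regressors R^2 would be 0 and the VIF would be exactly 1, which is the "imaginary case" the post mentions.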
{"url":"http://www.stata.com/statalist/archive/2004-10/msg00390.html","timestamp":"2014-04-19T00:09:20Z","content_type":null,"content_length":"7137","record_id":"<urn:uuid:eab4ef1b-3389-441b-b62f-98303cea114a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Learn Algebra the Easy Way

Algebra is one of the most important and broadest parts of mathematics. It is applied in many fields of knowledge to find solutions to underlying problems. Despite its inclusion in basic courses in schools, colleges and universities, some people find it extremely difficult to understand because its questions can take complex forms. However, it is not that hard to understand; you just have to make your basics strong in order to master it. Follow some simple methods to help you learn algebra the easy way.

• 1 First of all, you have to make your arithmetic stronger. Almost every individual knows the basic rules of addition, subtraction, multiplication and division. You must learn the advanced application of these rules in order to understand them in a more profound way. For instance, you can do word problems to improve your understanding of arithmetic. In addition, you must know how to solve mathematical questions involving exponents, fractions, ratios and other standard operations.

• 2 Most of algebra deals with problems involving equations. Therefore, you should learn the basics of equation solving, which is considered the most important part of algebra. You must learn how to expand products of different numbers and how to simplify a numerical expression. Additionally, learn and practise the factorisation of polynomials and the placement of fractions over one common denominator. You can make equation solving easier if you isolate a variable or simplify one side at a time. Always remember that if you add, subtract, multiply or divide one side of an equation by a number, you will have to do the same with the other side, or your answer will be incorrect.

• 3 If you are dealing with a word problem related to algebra, you should match your formula with a simple case. This will help you in assessing your solution.
Furthermore, never start an exercise before understanding and reviewing the concepts given in the book.

• 4 One of the most common problems students have is thinking that every answer to an algebra problem will be a whole number. However, this is not the case, as the answer can involve decimals or fractions, or even be irrational. So don't always expect a simple answer, and apply the right concepts in finding the solution to a problem.
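The "do the same thing to both sides" rule from step 2 can be illustrated with a tiny worked example of my own: solving 3x + 5 = 20.

```python
# The equation a*x + b = rhs, here 3x + 5 = 20.
a, b, rhs = 3, 5, 20

rhs = rhs - b        # subtract b from BOTH sides: 3x = 15
x = rhs / a          # divide BOTH sides by a:     x = 5.0
print(x)

# Substitute back into the original equation to check the solution:
assert a * x + b == 20
```

Note that the answer comes out as 5.0, a float; as step 4 warns, changing the right-hand side to, say, 21 would give a non-whole answer (16/3), which is just as valid.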
{"url":"http://www.stepbystep.com/how-to-learn-algebra-the-easy-way-107394/","timestamp":"2014-04-20T01:24:34Z","content_type":null,"content_length":"41682","record_id":"<urn:uuid:1a476da4-ea55-42c7-b778-6838143fb7c0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
base-3.0.1.0: Basic libraries

Data.Word

Portability: portable
Stability: experimental
Maintainer: libraries@haskell.org

Unsigned integer types.

Unsigned integral types

data Word -- A Word is an unsigned integral type, with the same size as Int.
data Word8 -- 8-bit unsigned integer type
data Word16 -- 16-bit unsigned integer type
data Word32 -- 32-bit unsigned integer type
data Word64 -- 64-bit unsigned integer type

Notes

• All arithmetic is performed modulo 2^n, where n is the number of bits in the type. One non-obvious consequence of this is that negate should not raise an error on negative arguments.
• For coercing between any two integer types, use fromIntegral, which is specialized for all the common cases so should be fast enough. Coercing word types to and from integer types preserves representation, not sign.
• It would be very natural to add a type Natural providing an unbounded size unsigned integer, just as Integer provides unbounded size signed integers. We do not do that yet since there is no demand for it.
• The rules that hold for Enum instances over a bounded type such as Int (see the section of the Haskell report dealing with arithmetic sequences) also hold for the Enum instances over the various Word types defined here.
• Right and left shifts by amounts greater than or equal to the width of the type result in a zero result. This is contrary to the behaviour in C, which is undefined; a common interpretation is to truncate the shift count to the width of the type, for example 1 << 32 == 1 in some C implementations.

Produced by Haddock version 0.8
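The "arithmetic modulo 2^n" rule and the shift behaviour described in the notes can be modelled quickly in Python (a sketch for Word8; the real types, of course, live in Haskell's Data.Word):

```python
# Model Word8: every result is reduced mod 2^8 = 256, so nothing overflows
# or raises; it wraps around, exactly as the notes describe.
BITS = 8
MOD = 1 << BITS          # 256

def w8(x):
    """Reduce an integer into the Word8 range 0..255."""
    return x % MOD

print(w8(250 + 10))      # addition wraps past 255 back to 4
print(w8(-3))            # negate does not raise; -3 wraps to 253
print(w8(1 << BITS))     # a shift by the full type width yields 0
```

The last line mirrors the final note: unlike C (where such a shift is undefined), shifting by the full width simply produces zero.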
{"url":"http://www.haskell.org/ghc/docs/6.8.2/html/libraries/base/Data-Word.html","timestamp":"2014-04-16T23:01:05Z","content_type":null,"content_length":"18686","record_id":"<urn:uuid:28a3ff55-d6b6-4af4-83d1-0df58319dc77>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Line Graphs and Scatter Plots

Line graphs provide an excellent way to map independent and dependent variables that are both quantitative. When both variables are quantitative, the line segment that connects two points on the graph expresses a slope, which can be interpreted visually relative to the slope of other lines or expressed as a precise mathematical formula. Scatter plots are similar to line graphs in that they start with mapping quantitative data points. The difference is that with a scatter plot, the decision is made that the individual points should not be connected directly together with a line but, instead, express a trend. This trend can be seen directly through the distribution of points or with the addition of a regression line, a statistical tool used to mathematically express a trend in the data.

One Independent and One Dependent Variable

1. Scatter Plot

With a scatter plot, a mark, usually a dot or small circle, represents a single data point. With one mark (point) for every data point, a visual distribution of the data can be seen. Depending on how tightly the points cluster together, you may be able to discern a clear trend in the data. Because the data points represent real data collected in a laboratory setting rather than theoretically calculated values, they will represent all of the error inherent in such a collection process. A regression line can be used to statistically describe the trend of the points in the scatter plot to help tie the data back to a theoretical ideal. This regression line expresses a mathematical relationship between the independent and dependent variable. Depending on the software used to generate the regression line, you may also be given a constant that expresses the 'goodness of fit' of the curve, that is, to what degree of certainty we can say this line truly describes the trend in the data. This correlation constant is usually expressed as R^2 (R-squared).
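A minimal sketch of what graphing software does when it fits a linear regression line and reports R^2 (the data below is made up for illustration):

```python
def linfit(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Goodness of fit: 1 - (residual sum of squares / total sum of squares)."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]     # noisy points near y = 2x
a, b = linfit(xs, ys)
print(round(b, 2), round(r_squared(xs, ys, a, b), 3))
```

An R^2 near 1 means the line describes the trend in the scattered points with high certainty; an R^2 near 0 means it explains almost none of the variation.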
Whether this regression line should be linear or curved depends on what your hypothesis predicts the relationship is. When a curved line is used, it is typically expressed as either a second order (quadratic) or third order (cubic) curve. Higher order curves may follow the actual data points more closely, but rarely provide a better mathematical description of the relationship.

2. Line Graph

Line graphs are like scatter plots in that they record individual data values as marks on the graph. The difference is that a line is created connecting each data point together. In this way, the local change from point to point can be seen. This is done when it is important to be able to see the local change between any two points. An overall trend can still be seen, but this trend is joined by the local trend between individual or small groups of points. Unlike scatter plots, the independent variable can be either scalar or ordinal. In the example above, Month could be thought of as either scalar or ordinal. The slopes of the line segments are of interest, but we would probably not be generating mathematical formulas for individual segments. The above example could have also been produced as a bar graph. You would use a line graph when you want to be able to more clearly see the rate of change (slope) between individual data points. If the independent variable was nominal, you would almost certainly use a bar graph instead of a line graph.

Two (or more) Independent and One Dependent Variable

1. Multiple Line Graph

Here, we have taken the same graph seen above and added a second independent variable, year. Both the independent variables, month and year, can be treated as either ordinal or scalar. This is often the case with larger units of time, such as weeks, months, and years. Since we have a second independent variable, some sort of coding is needed to indicate which level (year) each line is.
Though we could label each line with text indicating the year, it is more efficient to use color and/or a different symbol on the data points. We will need a legend to explain the coding.

Multiple line graphs have space-saving characteristics over a comparable grouped bar graph. Because the data values are marked by small marks (points) and not bars, they do not have to be offset from each other (only when data values are very dense does this become a problem). Another advantage is that the lines can easily be dual coded: they can be both color coded (for computer and color print display) and shape coded with symbols (for black & white reproduction). With bars, shape coding cannot be used, and pattern coding has to be substituted. Pattern coding tends to be much more limiting.

Notice that there is a break in the 1996 data line (green/triangle) between August and October. Because the data point for September is missing, the line should not be connected between August and October since this would give an erroneous local slope. This is particularly important if you display the line without symbols at individual data points.

Excel Tips
{"url":"http://www.ncsu.edu/labwrite/res/gh/gh-linegraph.html","timestamp":"2014-04-16T22:02:34Z","content_type":null,"content_length":"13562","record_id":"<urn:uuid:9bc6958b-e430-44ab-8c26-3c5aa5b711af>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Results on the propositional µ-calculus Results 1 - 10 of 228 - In Verification: Theory and Practice , 2003 "... Dedicated to Zohar Manna, for his 2 6 th birthday. Abstract. Abstract interpretation theory formalizes the idea of abstraction of mathematical structures, in particular those involved in the specification of properties and proof methods of computer systems. Verification by abstract interpretation is ..." Cited by 192 (16 self) Add to MetaCart Dedicated to Zohar Manna, for his 2 6 th birthday. Abstract. Abstract interpretation theory formalizes the idea of abstraction of mathematical structures, in particular those involved in the specification of properties and proof methods of computer systems. Verification by abstract interpretation is illustrated on the particular cases of predicate abstraction, which is revisited to handle infinitary abstractions, and on the new parametric predicate abstraction. 1 - In 29th International Conference on Software Engineering (ICSE’07 , 2007 "... Model Management addresses the problem of managing an evolving collection of models, by capturing the relationships between models and providing well-defined operators to manipulate them. In this paper, we describe two such operators for manipulating hierarchical Statecharts: Match, for finding corr ..." Cited by 64 (17 self) Add to MetaCart Model Management addresses the problem of managing an evolving collection of models, by capturing the relationships between models and providing well-defined operators to manipulate them. In this paper, we describe two such operators for manipulating hierarchical Statecharts: Match, for finding correspondences between models, and Merge, for combining models with respect to known correspondences between them. Our Match operator is heuristic, making use of both static and behavioural properties of the models to improve the accuracy of matching. 
Our Merge operator preserves the hierarchical structure of the input models, and handles differences in behaviour through parameterization. In this way, we automatically construct merges that preserve the semantics of Statecharts models. We illustrate and evaluate our work by applying our operators to AT&T telecommunication features. 1 - Annals of Discrete Mathematics , 1985 "... We develop a Logic in which the basic objects of concern are games, or equivalently, monotone predicate transforms. We give completeness and decision results and extend to certain kinds of many-person games. Applications to a cake cutting algorithm and to a protocol for exchanging secrets, are given ..." Cited by 63 (5 self) Add to MetaCart We develop a Logic in which the basic objects of concern are games, or equivalently, monotone predicate transforms. We give completeness and decision results and extend to certain kinds of many-person games. Applications to a cake cutting algorithm and to a protocol for exchanging secrets, are given. 1 , 2008 "... We present an algorithm to solve XPath decision problems under regular tree type constraints and show its use to statically type-check XPath queries. To this end, we prove the decidability of a logic with converse for finite ordered trees whose time complexity is a simple exponential of the size of ..." Cited by 60 (33 self) Add to MetaCart We present an algorithm to solve XPath decision problems under regular tree type constraints and show its use to statically type-check XPath queries. To this end, we prove the decidability of a logic with converse for finite ordered trees whose time complexity is a simple exponential of the size of a formula. The logic corresponds to the alternation free modal µ-calculus without greatest fixpoint, restricted to finite trees, and where formulas are cycle-free. Our proof method is based on two auxiliary results. 
First, XML regular tree types and XPath expressions have a linear translation to cycle-free formulas. Second, the least and greatest fixpoints are equivalent for finite trees, hence the logic is closed under negation. Building on these results, we describe a practical, effective system for solving the satisfiability of a formula. The system has been experimented with some decision problems such as XPath emptiness, containment, overlap, and coverage, with or without type constraints. The benefit of the approach is that our system can be effectively used in static analyzers for programming languages , 2000 "... Model-checking is a successful technique for automatically verifying concurrent finite-state systems. When building a model-checker, a good compromise must be made between the expressive power of the property description formalism, the complexity of the model-checking problem, and the user-friendlin ..." Cited by 58 (11 self) Add to MetaCart Model-checking is a successful technique for automatically verifying concurrent finite-state systems. When building a model-checker, a good compromise must be made between the expressive power of the property description formalism, the complexity of the model-checking problem, and the user-friendliness of the interface. We present a temporal logic and an associated model-checking method that attempt to fulfill these criteria. The logic is an extension of the alternation-free µ-calculus with ACTL-like action formulas and PDL-like regular expressions, allowing a concise and intuitive description of safety, liveness, and fairness properties over labeled transition systems. The model-checking method is based upon a succinct translation of the verification problem into a boolean equation system, which is solved by means of an efficient local algorithm having a good average complexity. The algorithm also allows to generate full diagnostic information (examples and counterexamples) for temporal for... - J. 
of Logic and Computation , 1999 "... Recent proposals to improve the quality of interaction with the World Wide Web suggest considering the Web as a huge semistructured database, so that retrieving information can be supported by the task of database querying. Under this view, it is important to represent the form of both the network, ..." Cited by 55 (8 self) Add to MetaCart Recent proposals to improve the quality of interaction with the World Wide Web suggest considering the Web as a huge semistructured database, so that retrieving information can be supported by the task of database querying. Under this view, it is important to represent the form of both the network, and the documents placed in the nodes of the network. However, the current proposals do not pay sufficient attention to represent document structures and reasoning about them. In this paper, we address these problems by providing a framework where Document Type Definitions (DTDs) expressed in the eXtensible Markup Language (XML) are formalized in an expressive Description Logic equipped with sound and complete inference algorithms. We provide methods for verifying conformance of a document to a DTD in polynomial time, and structural equivalence of DTDs in worst case deterministic exponential time, improving known algorithms for this problem which were double exponential. We also deal with parametric versions of conformance and structural equivalence, and investigate other forms of reasoning on DTDs. Finally, we show how to take advantage of the reasoning capabilities of our formalism in order to perform several optimization steps in answering queries posed to a document base. - In Proc. of the 16th Int. Joint Conf. on Artificial Intelligence (IJCAI’99 , 1999 "... 
Cited by 55 (12 self).
In the last years, the investigation on Description Logics (DLs) has been driven by the goal of applying them in several areas, such as software engineering, information systems, databases, information integration, and intelligent access to the web. The modeling requirements arising in the above areas have stimulated the need for very rich languages, including fixpoint constructs to represent recursive structures. We study a DL comprising the most general form of fixpoint constructs on concepts, all classical concept-forming constructs, plus inverse roles, n-ary relations, qualified number restrictions, and inclusion assertions. We establish the EXPTIME decidability of such logic by presenting a decision procedure based on a reduction to nonemptiness of alternating automata on infinite trees. We observe that this is the first decidability result for a logic combining inverse roles, number restrictions, and general fixpoints.

- In Proc. Verification, Model Checking, and Abstract Interpretation (VMCAI’06), 2006. Cited by 54 (7 self).
Abstract. We consider the problem of synthesizing digital designs from their LTL specification.
In spite of the theoretical double exponential lower bound for the general case, we show that for many expressive specifications of hardware designs the problem can be solved in time N^3, where N is the size of the state space of the design. We describe the context of the problem, as part of the Prosyd European Project, which aims to provide a property-based development flow for hardware designs. Within this project, synthesis plays an important role, first in order to check whether a given specification is realizable, and then for synthesizing part of the developed system. The class of LTL formulas considered is that of Generalized Reactivity(1) (generalized Streett(1)) formulas, i.e., formulas of the form (□◇p1 ∧ · · · ∧ □◇pm) → (□◇q1 ∧ · · · ∧ □◇qn), where each pi, qi is a boolean combination of atomic propositions. We also consider the more general case in which each pi, qi is an arbitrary past LTL formula over atomic propositions. For this class of formulas, we present an N^3-time algorithm which checks whether such a formula is realizable, i.e., there exists a circuit which satisfies the formula under any set of inputs provided by the environment. In the case that the specification is realizable, the algorithm proceeds to construct an automaton which represents one of the possible implementing circuits. The automaton is computed and presented symbolically.

- In Automata, Languages, and Programming, LNCS 2719, 2003.
- ACM Transactions on Computational Logic, 2005. Cited by 44 (5 self).
We define five increasingly comprehensive classes of infinite-state systems, called STS1–STS5, whose state spaces have finitary structure.
For four of these classes, we provide examples from hybrid systems.
STS1: These are the systems with finite bisimilarity quotients. They can be analyzed symbolically by iteratively applying predecessor and Boolean operations on state sets, starting from a finite number of observable state sets. Any such iteration is guaranteed to terminate in that only a finite number of state sets can be generated. This enables model checking of the μ-calculus.
STS2: These are the systems with finite similarity quotients. They can be analyzed symbolically by iterating the predecessor and positive Boolean operations. This enables model checking of the existential and universal fragments of the μ-calculus.
STS3: These are the systems with finite trace-equivalence quotients. They can be analyzed symbolically by iterating the predecessor operation and a restricted form of positive Boolean operations (intersection is restricted to intersection with observables). This enables model checking of all ω-regular properties, including linear temporal logic.
STS4: These are the systems with finite distance-equivalence quotients (two states are equivalent if for every distance d, the same observables can be reached in d transitions). The systems in this class can be analyzed symbolically by iterating the predecessor operation and terminating when no new state sets are generated. This enables model checking of the existential conjunction-free and universal disjunction-free fragments of the μ-calculus.
STS5: These are the systems with finite bounded-reachability quotients (two states are equivalent if for every distance d, the same observables can be reached in d or fewer transitions). The systems in this class can be analyzed symbolically by iterating the predecessor operation and terminating when no new states are encountered (this is a weaker termination condition than above). This enables model checking of reachability properties.
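The predecessor-iteration scheme these classes share can be sketched concretely on a finite transition system (my own toy sketch, not code from the paper): backward reachability is the least fixpoint of repeatedly applying the predecessor operation to a target set.

```python
def pre(transitions, S):
    # Predecessor operation: states with at least one successor in S.
    return {s for (s, t) in transitions if t in S}

def backward_reach(transitions, target):
    # Iterate X := X ∪ pre(X) starting from the target set; on a finite
    # system only finitely many state sets can arise, so it terminates.
    X = set(target)
    while True:
        nxt = X | pre(transitions, X)
        if nxt == X:
            return X
        X = nxt
```

On the infinite-state systems the abstract describes, the same loop runs over symbolic representations of state sets (e.g. predicates) rather than explicit sets, and the finite-quotient conditions STS1–STS5 are what guarantee termination.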
Grover’s quantum search algorithm for an arbitrary initial amplitude distribution
Results 1 - 10 of 13

Cited by 8 (1 self).
Abstract: L. K. Grover's search algorithm in quantum computing gives an optimal, square-root speedup in the search for a single object in a large unsorted database. In this paper, we expound Grover's algorithm in a Hilbert-space framework that isolates its geometrical essence, and we generalize it to the case where more than one object satisfies the search criterion.

Cited by 6 (2 self).
Abstract: One of the most promising and versatile approaches to creating new quantum algorithms is based on the quantum hidden subgroup (QHS) paradigm, originally suggested by Alexei Kitaev. This class of quantum algorithms encompasses the Deutsch-Jozsa, Simon, and Shor algorithms, and many more. In this paper, our strategy for finding new quantum algorithms is to decompose Shor’s quantum factoring algorithm into its basic primitives, then to generalize these primitives, and finally to show how to reassemble them into new QHS algorithms.
Taking an “alphabetic building blocks” approach, we use these primitives to form an “algorithmic toolkit” for the creation of new quantum algorithms, such as wandering Shor algorithms, continuous Shor algorithms, the quantum circle algorithm, the dual Shor algorithm, a QHS algorithm for Feynman integrals, free QHS algorithms, and more. Toward the end of this paper, we show how Grover’s algorithm is most surprisingly “almost” a QHS algorithm, and how this result suggests the possibility of an even more complete “algorithmic toolkit” beyond the QHS algorithms.

Cited by 4 (2 self).
The structure of satisfiability problems is used to improve search algorithms for quantum computers and reduce their required coherence times by using only a single coherent evaluation of problem properties. The structure of random k-SAT allows determining the asymptotic average behavior of these algorithms, showing they improve on quantum algorithms, such as amplitude amplification, that ignore detailed problem structure but remain exponential for hard problem instances. Compared to good classical methods, the algorithm performs better, on average, for weakly and highly constrained problems but worse for hard cases. The analytic techniques introduced here also apply to other quantum algorithms, supplementing the limited evaluation possible with classical simulations and showing how quantum computing can use ensemble properties of NP search problems.

, 2006.
Cited by 1 (1 self).
Abstract: The arguments given in this paper suggest that Grover’s and Shor’s algorithms are more closely related than one might at first expect. Specifically, we show that Grover’s algorithm can be viewed as a quantum algorithm which solves a non-abelian hidden subgroup problem (HSP). But we then go on to show that the standard non-abelian quantum hidden subgroup (QHS) algorithm cannot find a solution to this particular HSP. This leaves open the question as to whether or not there is some modification of the standard non-abelian QHS algorithm which is equivalent to Grover’s algorithm.

, 2008.
Given an item and a list of values of size N, it is required to decide if such an item exists in the list. A classical computer can search for the item in O(N). The best known quantum algorithm can do the job in O(√N). In this paper, a quantum algorithm will be proposed that can search an unstructured list in O(1) to get the YES/NO answer with certainty.

, 2008.
Grover’s quantum search algorithm is considered one of the milestones in the field of quantum computing.
The algorithm can search for a single match in a database with N records in O(√N), assuming that the item must exist in the database, with quadratic speedup over the best known classical algorithm. This review paper discusses the performance of Grover’s algorithm in the case of multiple matches, where the problem is expected to be easier. Unfortunately, we will find that the algorithm will fail for M > 3N/4, where M is the number of matches in the list.

, 1999.
Grover’s quantum algorithm improves any classical search algorithm. We show how random Gaussian noise at each step of the algorithm can be modelled easily because of the exact recursion formulas available for computing the quantum amplitude in Grover’s algorithm. We study the algorithm’s intrinsic robustness when no quantum correction codes are used, and evaluate how much noise the algorithm can bear with, in terms of the size of the phone book and a desired probability of finding the correct result. The algorithm loses efficiency when noise is added, but does not slow down. We also study the maximal noise under which the iterated quantum algorithm is just as slow as the classical algorithm. In all cases, the width of the allowed noise scales with the size of the phone book as N^(−2/3).

, 2000.
Consider the unstructured search of an unknown number l of items in a large unsorted database of size N. The multi-object quantum search algorithm consists of two parts. The first part of the algorithm is to generalize Grover’s single-object search algorithm to the multi-object case ([3, 4, 5, 6, 7]) and the second part is to solve a counting problem to determine l ([4, 14]). In this paper, we study the multi-object quantum search algorithm (in continuous time), but in a more structured way, by taking into account the availability of partial information. The modeling of available partial information is done simply by the combination of several prescribed, possibly overlapping, information sets with varying weights to signify the reliability of each set. The associated statistics is estimated and the algorithm efficiency and complexity are analyzed. Our analysis shows that the search algorithm described here may not be more efficient than the unstructured (generalized) multi-object Grover search if there is “misplaced confidence”. However, if the information sets have a “basic confidence” property in the sense that each information set contains at least one search item, then a quadratic speedup holds on a much smaller data space, which further expedites the quantum search for the first item.

, 2002.
In this letter, we show that the laser Hamiltonian can perform the quantum search. We also show that the process of quantum search is a resonance between the initial state and the target state, which implies that Nature already has a quantum search system using a transition of energy.
In addition, we provide a particular scheme to implement the quantum search algorithm based on a trapped ion. Quantum computation has been in the spotlight, supplying solutions for problems which are intractable in the context of classical physics. The quantum factorization algorithm and the quantum search algorithm are good examples.[1] In particular, the quantum search algorithm provides a quadratic speedup in solving a search problem. Here, the search problem is to find the target among the N unstructured items. Grover’s fast quantum search algorithm is composed of discrete-time operations (e.g., Walsh-Hadamard). When we use Grover’s algorithm, we need O(√N) iterations of the Grover operator. There is an analog quantum search algorithm which is based on Hamiltonian evolution. Farhi and Gutmann proposed the quantum search Hamiltonian.[5] Moreover, Fenner provided

, 2004.
Hypothesis elimination is a special case of Bayesian updating, where each piece of new data rules out a set of prior hypotheses. We describe how to use Grover’s algorithm to perform hypothesis elimination for a class of probability distributions encoded on a register of qubits, and establish a lower bound on the required computational resources.
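Several of the abstracts above lean on the closed-form behaviour of the amplitudes in Grover's algorithm. A minimal noiseless state-vector sketch (my own illustration, tracking only the two distinct amplitudes shared by marked and unmarked items) reproduces that behaviour:

```python
import math

def grover_success_prob(N, M, iterations):
    # Uniform superposition: every item starts with amplitude 1/sqrt(N).
    a = b = 1 / math.sqrt(N)   # a: amplitude of marked items, b: unmarked
    for _ in range(iterations):
        a = -a                              # oracle: flip sign of marked amplitudes
        mean = (M * a + (N - M) * b) / N
        a, b = 2 * mean - a, 2 * mean - b   # diffusion: inversion about the mean
    return M * a * a                        # total probability on the M marked items
```

The output agrees with the closed form sin²((2k+1)θ) with sin²θ = M/N, which is also why running too many iterations, or having a large fraction of marked items, degrades the success probability as noted in the M > 3N/4 discussion above.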
Institute for Mathematics and its Applications (IMA) - Talk abstract: Slow passage through a homoclinic orbit with subharmonic resonances
Richard Haberman, Southern Methodist Univ.
The slow passage through a homoclinic orbit is analyzed for a periodically forced and weakly damped oscillator corresponding to a double-well potential. Multiphase averaging fails at an infinite sequence of subharmonic resonance layers which coalesce on the homoclinic orbit. An accurate phase of the strongly nonlinear oscillator after passage through each subharmonic resonance is obtained using a time shift and a constant phase adjustment. Near the unperturbed homoclinic orbit, the solution is a large sequence of nearly homoclinic orbits in which one saddle approach is mapped into the next. The method of matched asymptotic expansions is used to relate the solution in subharmonic resonance layers to the solution near the unperturbed homoclinic orbit. In this way, we determine an asymptotically accurate description for the boundaries of the basins of attraction corresponding to capture into each well. This is joint work with Jerry D. Brothers.
Simplify the expression. Write the answer with positive exponents. Assume all variables represent positive numbers.
First we want to simplify the expression in the numerator. Since the term (x^2 z) is being raised to the 1/4 power, we multiply the exponents of each variable to give us a simplified power. The denominator is already simplified. To further simplify from this point, we make use of the fact that x^A / x^B = x^(A−B). That is, the quotient of exponential terms with the same base equals the base raised to the difference of the exponents. We do this for both x and z. We recognize that subtracting a negative term in the power of x leads us to add the two exponents, and we find a common denominator, 4, to subtract the powers of z. The power of 1 on the x term gives us our base, x, and the power of −1/4 on the z term can be rewritten as a positive exponent in the denominator of our simplified expression. This gives the answer.
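The two exponent rules used in this answer can be spot-checked numerically (a sketch with made-up sample values, since the original expression is not shown in the extract):

```python
def power_of_power(x, z):
    # (x^2 z)^(1/4) = x^(2·1/4) · z^(1/4) = x^(1/2) · z^(1/4):
    # raising a product of powers to a power multiplies each exponent.
    return (x**2 * z) ** 0.25, x**0.5 * z**0.25

def quotient_rule(x, A, B):
    # x^A / x^B = x^(A−B); when B is negative, subtracting it
    # adds the two exponents, as in the explanation above.
    return x**A / x**B, x ** (A - B)
```

For instance, `quotient_rule(x, 1/2, -1/2)` shows x^(1/2) / x^(−1/2) = x^1 = x, matching the "power of 1 in the x term" in the answer.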
Weekly Challenge 33: Crazy Cannons Copyright © University of Cambridge. All rights reserved. Printed from http://nrich.maths.org/ Two cannons both fire balls at speed $100 \mathrm{ms}^{-1}$. One of the cannons is fixed at an angle of $45^\circ$ to the horizontal and the other is fixed at angle $30^\circ$ to the horizontal. The cannons are set up facing each other at a distance $D$ from each other. One cannon is fired and then, $T$ seconds later, the other cannon is fired. The balls subsequently strike each other in mid air. Show that $T$ must lie within a range of values and that both $T$ and $D$ must be greater than zero. Did you know ... ? The mathematics used in this question, along with an understanding of how gravity works at large distances, is sufficiently complex to send rockets to the moon and the planets of our solar system.
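One way to get a feel for the answer (a numerical sketch of mine, not the intended solution; taking g = 9.8 m s⁻², with the 30° cannon firing first since its ball rises more slowly) is to pick a flight time for the 45° ball and solve the height-matching quadratic for the 30° ball's flight time, which then fixes T and D:

```python
import math

g, v = 9.8, 100.0

def meeting(t1):
    # Height of the 45-degree ball after flying for t1 seconds.
    y = v * math.sin(math.radians(45)) * t1 - 0.5 * g * t1**2
    # The 30-degree ball must be at the same height after flying t2 seconds:
    # solve 0.5*g*t2^2 - v*sin(30)*t2 + y = 0 for t2.
    a, b, c = 0.5 * g, -v * math.sin(math.radians(30)), y
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                         # the 30-degree ball never reaches that height
    t2 = (-b - math.sqrt(disc)) / (2 * a)   # earlier (ascending) crossing time
    T = t2 - t1                             # the 30-degree cannon fires T seconds first
    D = v * math.cos(math.radians(45)) * t1 + v * math.cos(math.radians(30)) * t2
    return T, D
```

Scanning `t1` over the values for which the discriminant stays non-negative traces out the admissible range of T, and shows both T and D staying positive, in line with what the challenge asks you to prove.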
MathGroup Archive: May 2007 Re: Re: Solve & RotationMatrix • To: mathgroup at smc.vnet.net • Subject: [mg76070] Re: [mg76002] Re: Solve & RotationMatrix • From: DrMajorBob <drmajorbob at bigfoot.com> • Date: Wed, 16 May 2007 05:16:05 -0400 (EDT) • References: <f21g0q$7d6$1@smc.vnet.net> <f23qq6$oa3$1@smc.vnet.net> <31173113.1179139106764.JavaMail.root@m35> • Reply-to: drmajorbob at bigfoot.com There MUST be a built-in solver I'm missing, but here's a version 6.0 solver, anyway. For version 5, you'd have to replace RandomReal with RandomArray or a Table of Random[] entries. I'm not sure if there are other issues or not; I've moved on! randomRotationMatrix creates problems, and angleVector solves them. angleVector uses Eigenvectors to find the axis of rotation, NullSpace to find two perpendiculars to it, uses FindRoot to find the angle that allows m[t].perp == target.m for both perpendiculars, and uses Mod to scale the answer to the range -Pi to Pi. The code assumes the matrix is real, has a real eigenvector, and IS a rotation matrix. If not, FindRoot will fail. I haven't allowed for difficulties due to imprecision in the input, which could also cause FindRoot to fail. To take care of that, you could replace FindRoot with FindMinimum applied to a sum of squared errors. Of course, the axis is unique only up to a sign change.
(* code *)
Clear[realQ, realSelect, error, randomRotationMatrix, angleVector]

realQ[z_?NumericQ] := Re[z] == z
realQ[v_?VectorQ] := VectorQ[v, realQ]
realQ[v_?MatrixQ] := MatrixQ[v, realQ]

realSelect[m_?MatrixQ] := First@Select[m, realQ]

error[target_?MatrixQ, perp_?VectorQ, m_?MatrixQ] := #.# &[target.perp - m.perp]
error[target_?MatrixQ, perps_?MatrixQ, m_?MatrixQ] :=
  Chop@Total[error[target, #, m] & /@ perps]

randomRotationMatrix[] := Module[{t, v},
  t = RandomReal[{-Pi, Pi}];
  v = Normalize@Table[RandomReal[], {i, 3}];
  {"angle" -> t, "axis" -> v, "rotation matrix" -> RotationMatrix[t, v]}]

angleVector[target_?MatrixQ] /; realQ[target] := Module[{v, t, m, perps, root},
  v = realSelect@Eigenvectors[target];
  m = RotationMatrix[t, v];
  perps = Normalize /@ NullSpace[{v}];
  root = FindRoot[error[target, perps, m] == 0, {t, 0}];
  {"angle" -> Mod[t /. root, 2 \[Pi], -\[Pi]], "axis" -> v}]

(* create a test problem *)
randomRotationMatrix[] // TableForm
target = "rotation matrix" /. %;

(* solve for the angle and vector *)
rotation = angleVector[target]

On Mon, 14 May 2007 04:49:33 -0500, Mathieu G <ellocomateo at free.fr> wrote:
> CKWong.P at gmail.com a écrit :
>> Obviously, you need to provide the function RotationMatrix[ angle, axisVector ] for the algorithm to work.
>> On the other hand, matrices for rotations about the Cartesian axes, i.e., RotX, RotY, & RotZ, can be written down directly. Why not do so?
> Hi,
> Thank you for your reply.
> I am not interested in a simple rotation matrix around an axis, but in finding the angles corresponding to a 3D rotation matrix.
> I compute VectN which is the vector normal to my surface, and I am interested in getting the rotation parameters that bring VectY to VectN.
> Regards,
> Mathieu
DrMajorBob at bigfoot.com
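For readers without Mathematica, the same angle-and-axis recovery can be sketched in plain Python (my own version, using the standard trace and antisymmetric-part formulas rather than DrMajorBob's FindRoot approach; like his code, it assumes a real, proper rotation matrix and recovers the axis only up to the 0 < θ < π sign convention):

```python
import math

def rotation_matrix(theta, axis):
    # Rodrigues' rotation formula for a unit axis (x, y, z).
    x, y, z = axis
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def angle_axis(R):
    # Angle from the trace: tr(R) = 1 + 2 cos(theta).
    tr = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (tr - 1) / 2)))
    # Axis from the antisymmetric part: R - R^T = 2 sin(theta) [axis]_x.
    ax = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
    n = math.sqrt(sum(v * v for v in ax)) or 1.0
    return theta, tuple(v / n for v in ax)
```

Near θ = 0 or θ = π the antisymmetric part vanishes and the axis has to be taken from the eigenvector of eigenvalue 1 instead, which is essentially what the Mathematica code's `Eigenvectors` call does.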
What is the three-dimensional hyperbolic volume of a four-manifold?

Every smooth closed orientable 4-manifold may be constructed via a handle decomposition. Before asking a couple of questions, I recall some well-known facts about handle-decompositions of 4-manifolds. We can of course order the handles according to their index. Handles of index 0 and 1 form a connected 4-dimensional handlebody, whose boundary is a closed 3-manifold, diffeomorphic to a connected sum $\#_g(S^2\times S^1)$ of some $g$ copies of $S^2\times S^1$ (if $g=0$ we get $S^3$). Handles of index 2 are attached to some framed link $L\subset \#_g(S^2\times S^1)$. Since 3- and 4-handles also form a 1-dimensional handlebody, after the attaching of the 2-handles we must necessarily obtain a 4-manifold whose boundary is again diffeomorphic to $\#_h(S^2\times S^1)$, for some $h$ which is not necessarily equal to $g$. The new $\#_h(S^2\times S^1)$ is obtained by surgery along the framed link $L$. Therefore, in some sense, constructing closed orientable 4-manifolds reduces to constructing framed links $L\subset \#_g(S^2\times S^1)$ that produce some $\#_h(S^2\times S^1)$ via surgery. Let us define the 3-dimensional hyperbolic volume of a handle decomposition as the Gromov norm of $\#_g(S^2\times S^1) \setminus L$ (which is in turn the sum of the hyperbolic volumes of its pieces according to geometrization, whence the name). Let us then define the 3-dimensional hyperbolic volume of a closed orientable 4-manifold as the infimum of all the hyperbolic 3-dimensional volumes among all its handle decompositions. The infimum is actually a minimum because the set of 3-dimensional hyperbolic volumes is well-ordered. The general question is: What can we say about the 3-dimensional hyperbolic volume of a closed 4-manifold? A more specific one: Which closed 4-manifolds have zero 3-dimensional hyperbolic volume?
which is equivalent to the following: Which closed 4-manifolds admit a handle decomposition such that $\#_g(S^2\times S^1)\setminus L$ is a graph manifold? The complex projective plane belongs to this class, and so do many doubles of 2-handlebodies: if you take any link $L\subset S^2\times S^1$ whose complement is a graph manifold, you can attach 2-handles to it, and then take the double of the resulting bounded 4-manifold. The resulting double has volume zero. Finally, we have the following very specific question: Is there a 4-manifold with positive 3-dimensional hyperbolic volume? I would expect that most (all?) aspherical 4-manifolds have positive volume, and maybe also many simply connected ones, but I don't know the answer.

Comment (Dmitri, Jul 11 '11): This question sounds a bit similar to what is written in the last lines of a recent article of Gromov and Guth: arxiv.org/abs/1103.3423
5. In problem 2, assume that there is no point inside the circle where three of the lines meet. How many points of intersection are there inside the circle? The picture from problem 2 will not work in this problem, because there are several points where more than two lines intersect, and this will throw off our theoretical count. We will need to adjust the points somewhat so that there is no point inside the circle where more than two lines intersect. We see in the figure above that it is possible to arrange 8 points on a circle so that there is no point inside the circle where more than two of the lines meet. We also see that there are quite a few points of intersection inside the circle for these lines. While it is possible to count them, it would probably be better to solve enough of the simpler problems to be able to find a pattern which would work in general and enable us to solve this problem. If there are 1, 2, or 3 points on the circle, there are no points of intersection inside the circle. The first time we actually get a point of intersection inside the circle is with 4 points. It gets a little more interesting with 5 points: there are 5 points of intersection. With 6 points there are 15 points of intersection, though with 6 points we need to take a little bit of care to make sure that there is no point where more than two lines meet. At this point, the figures are getting more and more complicated, so let's see if we can deduce the pattern. We can find these numbers in Pascal's triangle: they are the numbers of 4-element subsets of a given set. The next number in the sequence is 35, which should be the number of points of intersection inside the circle if there are 7 points on the circle. You can count that there are 35 points of intersection inside, and in our problem, there are 70 points inside the circle.
The reason that this procedure works is because if you have a point inside a circle where 2 lines intersect, each line is determined by 2 points on the circle, so if you have two lines, they will be determined by 4 points on the circle. This sets up a one to one correspondence between points inside the circle and 4 element subsets of the set of points on the circle.
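The correspondence described above can be checked in code (a small sketch of mine): counting the pairs of chords whose endpoints interleave around the circle, which is the condition for two chords to cross in the interior, reproduces C(n, 4) exactly.

```python
from math import comb
from itertools import combinations

def interior_intersections(n):
    # One interior intersection per 4-element subset of the n points:
    # the 4 endpoints of the two crossing chords.
    return comb(n, 4)

def brute_force(n):
    # Two chords (a,b), (c,d) with a < b, c < d cross inside the circle
    # exactly when their endpoints interleave around it.
    chords = list(combinations(range(n), 2))
    crossings = 0
    for (a, b), (c, d) in combinations(chords, 2):
        if a < c < b < d or c < a < d < b:
            crossings += 1
    return crossings
```

For n = 5, 6, 7, 8 this gives 5, 15, 35, 70, matching the counts in the solution above.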
Author Message

cool_jonny009 (Director, Location: MS):
x is an integer greater than 7. What is the median of the set of integers from 1 to x inclusive?
(1) The average of the set of integers from 1 to x inclusive is 11.
(2) The range of the set of integers from 1 to x inclusive is 20.

trublu (Manager):
Answer: D.
Statement (1): sum of the numbers from 1 to x = x(x+1)/2, so average = (x(x+1)/2)/x = (x+1)/2 = 11, hence x + 1 = 22 and x = 21. Now that we know x, the median is easy to find: median = 11.
Statement (2): range = x - 1 = 20, therefore x = 21.
Both statements are sufficient separately.

Professor (VP):
E. Both statements are not sufficient to answer the question.

trublu:
Why E? cool_jonny009, what is the OA?

Professor:
The question is about the median, not about the value of x.

trublu:
You can calculate the median once you know the value of x. The median of 1, 2, 3, ..., 21 can easily be calculated.

believe2:
The question stem does not say consecutive; it says "... set of integers ...", so I guess the answer should be E. @trublu: the formula you used for the summation is applicable to consecutive integers. The problem doesn't mention that in any way.

jacksparrow (Raleigh, NC):
I would go for (E). The OA will make interesting reading.

trublu:
What does "the set of integers from 1 to x inclusive" mean? Doesn't it mean all the integers between 1 and x? Even if the set of numbers is not given in consecutive order, they still have to be ordered to get the median.

Professor:
The question is not as easy as you think...

trublu:
Explain it to me then. You said the question was about the median, not about x. However, you need x to get the median, so I don't understand your reasoning. @jacksparrow: I don't think order matters; for example, the sums of the sets {1,2,3} and {3,1,2} are equal.

Professor:
The question is not that difficult and does not need this much discussion. It is pretty clear; anyway, let me try. From (1), the average is 11, but we do not know how many integers there are between 1 and x. Suppose the number of integers is 3 and the integers are 1, 11 and 21: the median is 11. Suppose the integers are 1, 10, 22: the median is 10. We can vary the number of integers (2, 4, 5, 6, 7, 8, ... without limit), so there is not one median; (1) is insufficient. From (2), the first and last integers are 1 and 21, but we do not know how many integers lie between 1 and 21; we can insert infinitely many, so this is also not sufficient. Even combining statements (1) and (2), we cannot pin down the median, so the answer is E. Hope this helps; if not, you need to contact HongHu.

trublu:
I see what you mean, but I thought "the set of integers from 1 to x inclusive" meant ALL the numbers between 1 and x with both included. So if x = 8 the set is {1,2,3,4,5,6,7,8}.

(another member):
Inclined to accept E. cool_jonny, what's the OA?

trublu:
cool_jonny, could you post the OA? I'm really confused now. I need closure.

HongHu (SVP):
The question is unclear, in my opinion, which you should not expect from the official GMAT. If the stem said "the set of consecutive integers", then D would be the answer. However, here we do not know how many integers are in the set, although the use of "the" instead of "a" makes it sound like there could only be one set of integers from 1 to x. I would personally choose E as well. Even if we know x, we don't know the median of the set if we don't know what the set looks like. For example, a set can be {1, 21} with a median of 11, but another set could be {1, 8, 12, 13, 21}, where the median is 12.

cool_jonny009:
You are right, HongHu: the question is not clear. And it's not from the official GMAT; it's an MGMAT question. By the way, the OA given is D.
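Under the consecutive reading of the stem (the intended MGMAT reading, with OA = D), both statements force x = 21 and a median of 11; under the arbitrary-set reading, the counterexamples from Professor and HongHu go through and the answer would be E. A quick check (my own sketch, not from the thread):

```python
from statistics import mean, median

# Consecutive reading: the set is {1, 2, ..., x}.
x1 = next(x for x in range(8, 100) if mean(range(1, x + 1)) == 11)  # statement (1)
x2 = next(x for x in range(8, 100) if (x - 1) == 20)                # statement (2)
print(x1, x2, median(range(1, x1 + 1)))   # 21 21 11  -> answer D

# Arbitrary-set reading: same average, different medians -> insufficient.
a, b = [1, 11, 21], [1, 10, 22]
print(mean(a), median(a))   # average 11, median 11
print(mean(b), median(b))   # average 11, median 10
# Same range (20), different medians:
print(median([1, 21]), median([1, 8, 12, 13, 21]))   # 11.0 and 12
```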
{"url":"http://gmatclub.com/forum/ds-average-26586.html","timestamp":"2014-04-19T22:52:22Z","content_type":null,"content_length":"194027","record_id":"<urn:uuid:fa871a55-5aa3-42d1-8d4d-a8657b249a02>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Parallelism
Workshop on Finsler Geometry and its Applications, Debrecen 2009

On the parallel displacement and parallel vector fields in Finsler Geometry
Tetsuya NAGANO, Department of Information and Media Studies, University of Nagasaki

• §1. Definition of the parallel displacement along a curve
• §2. Parallel vector fields on curves c and c-1
• §3. HTM
• §4. Paths and Autoparallel curves
• §5. Inner Product
• §6. Geodesics
• §7. Parallel vector fields
• §8. Comparison to Riemannian cases
• References

(The displayed formulas on the slides were images and did not survive extraction; "[equation on slide]" marks where they appeared.)

§1. Definition of the parallel displacement along a curve
A vector field v along a curve c is parallel if it satisfies the defining equation [equation on slide]. Given such a pair, we also have the reversed curve c-1 and the reversed vector field v-1. In general, the vector field v-1 is not parallel along c-1; if v-1 were parallel, [equation on slide] would have to hold.

§2. Parallel vector fields on curves c and c-1
So we take a parallel vector field u on c-1 with u(a) = v-1(a) = B, and consider the transformation Φ: A → A' on TpM. Φ is linear because of the linearity of the equation. In the Riemannian case, Φ is the identity; in general, in the Finsler case, Φ is not. We therefore obtain the (1,1)-Finsler tensor field Φi j with the parameter t, where A = (Ai) and A' = (A'i). Under an assumption [stated on slide], we can prove that v-1 is parallel along c-1: from u(a) = v-1(a) = B it follows that u = v-1.

§3. HTM
This section shows the geometrical meaning of Definition 1 on HTM. Taking the derivative operator with respect to xi, the lift decomposes into horizontal parts and vertical parts.

§4. Paths and Autoparallel curves
If c is a path, is c-1 also one? In general, not. However, if the condition [equation on slide] is satisfied, c-1 is a path.
Definition 2. The curve c = (ci(t)) is called an autoparallel curve if [equation on slide]; in other words, its canonical lift to HTM is horizontal (the vertical parts vanish).

§5. Inner Product
Here we call [expression on slide] the "inner product" on the curve c = (ci(t)), where the vector fields v = (vi(t)) and u = (ui(t)) are on c. For parallel vector fields v, u on c: if c is a path, then [identity on slide] holds, so under the stated condition the inner product is constant on c. Conversely, if the inner product is constant on c for arbitrarily chosen parallel v and u (which is not the Riemannian case), then c is a path.

§6. Geodesics
By using the Cartan connection, the equation of a geodesic c = (ci(t)) is [equation on slide], where t is the arc-length. According to the above discussion, we have [result on slide].

§7. Parallel vector fields
In the case of Riemannian geometry, a vector field v(x) on M is parallel if and only if ∇v = 0 (∇ a connection); locally, [equation on slide]. Then v(x) has the following properties: (1) v is parallel along any curve c; (2) the norm ||v|| is constant on M.
I want the notion of a parallel vector field in Finsler geometry. In general, a Finsler tensor field T is parallel if and only if ∇T = 0 for a Finsler connection ∇, but this is not good enough to obtain an interesting notion like the Riemannian one. So we consider the lift to HTM and calculate the differential with respect to [coordinates on slide], and we treat the case satisfying the equations (7.1) and (7.2).
First of all, the curve c determined by v is called the flow of v. The restriction of v to c satisfies [equation on slide] because of (7.2), so we can call v parallel along the curve c. Next, we can see that the solution c(t) satisfies [equation on slide] because of (7.1), so the curve c is a path. Thus v is a parallel vector field along the path c, the inner product is constant on c, and the norm ||v|| is constant on c.
Next, we study the conditions in order for a vector field satisfying (7.1) and (7.2) to exist locally at every point (x, y). By the integrability conditions of (7.1), the equation [on slide] is satisfied; by the integrability conditions of (7.2), the equation [on slide] is satisfied. Lastly, the solutions of (7.1) and (7.2) must coincide, that is, [equation on slide]. Thus we have [theorem on slide].

§8. Comparison to Riemannian cases
(The content of this section consisted entirely of slide images.)
{"url":"http://www.docstoc.com/docs/74978898/On-the-Parallelism","timestamp":"2014-04-23T15:21:47Z","content_type":null,"content_length":"73854","record_id":"<urn:uuid:702ce157-9b72-4afe-9fbc-0b45d7a8348f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Hockey Stick Graph Formula

November 9th 2006, 07:02 PM
I need to come up with the (Excel) formula for a 'hockey stick' graph. Although the numbers do not really matter, here's what I need to show: time line 24 months, saturation value 12,000,000. I hope this makes (some) sense.

November 10th 2006, 08:15 AM
Guess it did not :(

November 10th 2006, 09:55 AM
Let's say you start a new service and at the beginning you double your subscriber base every week, then every month, then every quarter, and you reach saturation at about 2,000,000 subscribers within 24 months. How would an (Excel) formula for that scenario look?

November 10th 2006, 11:29 PM
You need the Verhulst equation, which has solution:

$S(t)=\frac{K S(0)\, e^{rt}}{K+S(0)\,(e^{rt}-1)}$

where $S(0)$ is the initial number of subscribers, $r$ is the initial growth constant (in your case 2/wk, so if you measure time $t$ in weeks it has a value of 2), and $K$ is your saturation level.

November 11th 2006, 08:36 AM
Thanks so much :D Now I just have to figure out how to put that into Excel.
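To see the hockey-stick shape numerically, here is a small sketch of the solution above. The parameter values S(0)=1, r=2 per week, K=2,000,000 follow the thread; the Excel cell formula at the end is my own translation, not from the thread:

```python
import math

def verhulst(t, S0=1.0, r=2.0, K=2_000_000.0):
    """Logistic (Verhulst) growth: S(t) = K*S0*exp(r*t) / (K + S0*(exp(r*t) - 1))."""
    e = math.exp(r * t)
    return K * S0 * e / (K + S0 * (e - 1.0))

print(round(verhulst(0)))    # 1: the initial subscriber count S(0)
print(round(verhulst(3)))    # 403: still in the near-exponential "blade"
print(round(verhulst(52)))   # 2000000: saturated at K after a year
```

In Excel, with t (in weeks) in cell A1, the same expression would read something like `=2000000*EXP(2*A1)/(2000000+EXP(2*A1)-1)` for S(0)=1.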
{"url":"http://mathhelpforum.com/advanced-math-topics/7392-hockey-stick-graph-formula-print.html","timestamp":"2014-04-20T05:53:40Z","content_type":null,"content_length":"7348","record_id":"<urn:uuid:c73a2dd2-1cc9-4d30-b585-04351b43e0da>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] dot function or dot notation, matrices, arrays?
Keith Goodman kwgoodman@gmail....
Fri Dec 18 15:57:48 CST 2009
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-December/047542.html","timestamp":"2014-04-20T10:23:42Z","content_type":null,"content_length":"3395","record_id":"<urn:uuid:854a4e6f-2319-434a-977e-027cec2617c5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from March 2012 on Quantum field theory

Update (18 March 2012): Copy of my comment submitted to Calder’s blog post on Quantum Computing: “If you’re not sure whether an electron in an atom is in one possible energy state, or in the next higher energy state permitted by the physical laws, then it can be considered to be both states at once.”

Thanks for this article. The quantum computing idea depends on intrinsic indeterminism, the single wavefunction of Schrodinger’s equation. This gives a spread of probabilities for the energy state, until the wavefunction is “collapsed” by an actual measurement. The quantum computing question is whether the single wavefunction (1st quantization quantum mechanics) mathematical model is an accurate, experimentally justified model. It’s non-relativistic, and in 1929 Dirac showed that the Hamiltonian in Schroedinger’s equation needs to be replaced by an SU(2) spinor to make it relativistic, which quantizes the field. This is Feynman’s path integral (2nd quantization, or QFT), where there is no single wavefunction amplitude. Instead, each path has a separate wavefunction amplitude, and apparent indeterminism is just multipath interference from the virtual particles (similar to multipath interference of old HF radio waves due to partial reflection by different charged layers in the ionosphere). Feynman explains this fact clearly in his 1985 book QED, stating that Heisenberg’s uncertainty principle is unnecessary. All indeterminism is multipath interference, a physical mechanism. So if Feynman is right, there is no real mathematical magic, and the 1st quantization single wavefunction states at the heart of quantum computing research are a delusion. The Majorana fermions news is very interesting, but again is a spin story. The “pair of Majorana fermions” described in the paper referenced by the Nature article (R. M. Lutchyn et al.
http://arxiv.org/abs/1002.4033; 2010) is simply an electron and a semi-conductor “hole” at the interface between a superconductor and a semiconducting nanowire. The hole behaves as a fermion, and is electrically like a positron. So this Majorana pair is electrically neutral, and with entangled wavefunctions would prove useful for quantum computing. But according to Feynman, the only entangled wavefunctions are from the 1st quantization non-relativistic model. Aspect’s experiments alleging quantum entanglement, and others, are fully explained by Feynman’s 2nd quantization multipath interference mechanism in path integrals, which simply isn’t included in Bell’s inequality (a statistical test of 1st quantization). There is no discrimination between 1st and 2nd quantization in these experiments. Experimental spin correlation is assumed to be the entanglement of single wavefunctions. They simply ignore the path integral’s multipath interference mechanism. The use of statistical hypothesis testing is fiddled with a false selection of explanations: it is assumed that the experiments are a test of whether 1st quantization is right or wrong. Of course, under this assumption, it appears correct. A more scientific version of Bell’s inequality would include a third possibility, namely Feynman’s path integral where all indeterminism is due to multipath interference, so there are no single wavefunctions to begin with. Supposed pairs of spin-correlated particles actually follow all paths, most of which cancel one another. There is no single wavefunction; instead, Aspect’s two apparently correlated wavefunctions (one for each detected particle) are each the sum of wavefunction amplitudes for all the virtual paths taken. This provides the physical mechanism for what is actually taking place.
{"url":"https://nige.wordpress.com/2012/03/","timestamp":"2014-04-16T16:07:23Z","content_type":null,"content_length":"30554","record_id":"<urn:uuid:4bc4420d-47e0-4f78-8c33-b72cbf8d30a8>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Cryptography with Constant Input Locality (extended abstract)
Benny Applebaum, Yuval Ishai, and Eyal Kushilevitz
Computer Science Department, Technion, Haifa 32000, Israel

Abstract. We study the following natural question: Which cryptographic primitives (if any) can be realized by functions with constant input locality, namely functions in which every bit of the input influences only a constant number of bits of the output? This continues the study of cryptography in low complexity classes. It was recently shown (Applebaum et al., FOCS 2004) that, under standard cryptographic assumptions, most cryptographic primitives can be realized by functions with constant output locality, namely ones in which every bit of the output is influenced by a constant number of bits from the input. We (almost) characterize what cryptographic tasks can be performed with constant input locality. On the negative side, we show that primitives which require some form of non-malleability (such as digital signatures, message authentication, or non-malleable encryption) cannot be realized with constant input locality. On the positive side, assuming the intractability of certain problems from the domain of error correcting
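To make the locality notion concrete: the input locality of an input bit is the number of output bits it can influence. A toy sketch (the 8-bit function and the probing routine are my own illustration, not from the paper):

```python
import random

def input_locality(f, n_in, n_out, trials=100):
    """Probe which output bits each input bit can influence by flipping it
    over random inputs (a heuristic estimate, not a proof)."""
    influence = [set() for _ in range(n_in)]
    for _ in range(trials):
        x = random.getrandbits(n_in)
        for i in range(n_in):
            diff = f(x) ^ f(x ^ (1 << i))   # flip input bit i, compare outputs
            influence[i] |= {j for j in range(n_out) if (diff >> j) & 1}
    return [len(s) for s in influence]

# Toy 8-bit function: output bit j = x_j XOR x_{(j+1) mod 8}.  Each input bit
# feeds exactly two output bits, so its input locality is 2.
rot1 = lambda x: ((x >> 1) | ((x & 1) << 7)) & 0xFF
f = lambda x: (x ^ rot1(x)) & 0xFF
print(input_locality(f, 8, 8))   # [2, 2, 2, 2, 2, 2, 2, 2]
```

Output locality is the dual count (how many input bits feed one output bit); for this toy function it is also 2, but the paper's point is that the two measures behave very differently for cryptographic functions.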
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/920/3941740.html","timestamp":"2014-04-21T07:35:30Z","content_type":null,"content_length":"8418","record_id":"<urn:uuid:1e93d9d2-dea8-423d-92fc-46c37a142477>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
2.5GB equals how many hours of internet/data usage? - QnA Go

Converting GB to hours: need to find out how much internet usage 2.5GB is.

Alternate wordings:
how much hours of internet is 2.5gb?
how much usage is 2.5 gigabytes?
how much is 2.5gb of data per month?
how many hours can you be on the internet with 2.5 gb of data?
how much is 2.5 gb of internet usage?
how many hours is 2 gbs of internet usage?
how much is 5gb of data internet usage?
how much is 2.5GB data on a smart phone?
what is 2.5GB of Smartphone usage?
how much internet useage is 2.5gb?
how much is 2.5 gb of data usage?
how many hours is 2.5gb of video?
how many hours / how much is 2.5 gb?
How much is 2.5 gigabytes of data equal?
how much is 2.5gb data for phone?
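There is no single conversion from gigabytes to hours: it depends entirely on the activity. A rough back-of-envelope calculation (the per-hour rates below are illustrative assumptions, not figures from the page):

```python
# Assumed average data rates in MB per hour; real usage varies a lot.
RATES_MB_PER_HOUR = {
    "plain web browsing": 60,
    "music streaming": 75,
    "SD video streaming": 700,
}

def hours_from_gb(gb, mb_per_hour):
    return gb * 1024 / mb_per_hour        # using 1 GB = 1024 MB

for activity, rate in RATES_MB_PER_HOUR.items():
    print(f"2.5 GB is about {hours_from_gb(2.5, rate):.0f} h of {activity}")
```

So under these assumptions, 2.5 GB is anywhere from a few hours of video streaming to roughly 40 hours of light browsing.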
{"url":"http://qnago.com/q/25gb_equals_how_many_hours_of_internetdata_usage-q1283/","timestamp":"2014-04-17T10:45:45Z","content_type":null,"content_length":"585480","record_id":"<urn:uuid:1c075387-4702-4f08-a743-9ed55998b3ba>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Show only one group exists for Groups of prime order

May 29th 2010, 06:30 PM #1
I'm trying to dust off the cobwebs by studying some basic Group Theory. Can someone provide a proof that a group of order 5, or any prime for that matter, must be a cyclic Abelian group, and that there can be only one such group... I can easily find the multiplication table for the group, but I don't see how to prove the statement that it is the only such group. Any insights would be much appreciated.

Hi there. You might try thinking about Lagrange's Theorem, which states that if $G$ is a finite group, the order of any subgroup $H$ must divide the order of $G$. Can you see how to use this to show that every group of prime order is cyclic? And of course then use this and the important fact that every cyclic group is isomorphic to $\mathbb{Z}_n$ or $\mathbb{Z}$ to finish.

I see that a group of prime order cannot have any subgroups (other than E), but how does that lead to the non-existence of groups of a given prime order, other than the cyclic, Abelian group? For example, I can create more than one multiplication table for $G_5$ which obeys the rule that no element appears more than once in any row or column. However, I can show the table for the non-Abelian $G_5$ violates the associative property, and thus is not a proper group. But such a brute force approach is not practical for primes of higher order. I guess the cobwebs have pretty high tensile strength!

Am I heading in the right direction: If the order $n$ of any element $g$ of the group $G_p$, where $p$ is the order of the group $G$, is such that $g^n=E$, then by Lagrange's theorem $n$ must divide $p$, so for $p$ prime it must be true that $n$ is either 1 or $p$.
Since every element $g_k$ of the Group $G_p$ must have an order $n$, and since $n$ is restricted to either 1 or $p$ when $p$ is prime, all elements (except E) must have the same order -- i.e. $p$. Now I need to show that only one such Group $G_p$ can satisfy this condition. You're on the right track. Every element $g$ in a group of order $p$ has order 1 or $p$. What can be said about the subgroup generated by $g$, if $g$ is not the identity? If g is not the identity, and it has order p, then $g^1, g^2, g^3, \ldots, g^p$ should generate all the elements of the group. Thus, I believe I can say three things about the subgroup: 1) The subgroup generated by g in this way is actually the entire group. 2) The group formed in this way, i.e. through successive powers of an element, is cyclic. 3) The group is Abelian. Yes, and you're done! Thank you for the help along the way.
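For what it's worth, the argument is easy to check by brute force for small primes. The sketch below (mine, not from the thread) models the group of order p as Z_p under addition mod p -- which, up to isomorphism, is the only group of that order -- and verifies that every non-identity element has order p, hence generates the whole group:

```python
# Brute-force check of the argument for small primes: in a group of
# prime order p, every non-identity element has order p and therefore
# generates the entire group. We use Z_p under addition mod p.

def order(g, p):
    """Order of g in (Z_p, +): smallest n >= 1 with n*g == 0 (mod p)."""
    n, acc = 1, g % p
    while acc != 0:
        acc = (acc + g) % p
        n += 1
    return n

def generates(g, p):
    """True if the successive 'powers' g, 2g, 3g, ... cover all of Z_p."""
    return {(k * g) % p for k in range(1, p + 1)} == set(range(p))

for p in [2, 3, 5, 7, 11, 13]:
    for g in range(1, p):           # every non-identity element
        assert order(g, p) == p     # its order is p, not 1
        assert generates(g, p)      # so <g> is the whole group
print("checked: every non-identity element of Z_p has order p")
```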
{"url":"http://mathhelpforum.com/advanced-algebra/146939-show-only-one-group-exists-groups-prime-order.html","timestamp":"2014-04-20T08:17:34Z","content_type":null,"content_length":"59100","record_id":"<urn:uuid:5423921f-7615-42d6-9d75-ae969e259529>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
2 primes that equal Re: 2 primes that equal Yes, there are more. Why not do a few more before any upgrade. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=264376","timestamp":"2014-04-23T15:40:24Z","content_type":null,"content_length":"35748","record_id":"<urn:uuid:0f533601-38db-4f87-8a62-3d506d1053e2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
How is the right adjoint $f_*$ to the inverse image functor $f^*$ described for functor categories $Set^C$, $Set^D$ and $f : C \to D$ For $C,D$ small categories, and $f : C \to D$ a functor between them, there is a precomposition, or "inverse image", functor $f^* = (-) \circ f : Set^D \to Set^C$. It has a left and a right adjoint. What are their definitions, and in particular what is the right adjoint $f_*$? I couldn't find a definition in terms of functor categories, just "topological" ones. ct.category-theory sheaf-theory adjoint-functors gn.general-topology If I remember right, there's a good section or two on this in the Mac Lane and Moerdijk book: gen.lib.rus.ec/… Unfortunately I'm in a rush at the moment — hopefully someone else can provide chapter and verse and perhaps a digest... – Peter LeFanu Lumsdaine Jul 21 '10 at 13:15 Pietro: I might be wrong here, but $f^*$ seems to be covariant, with $f^*(F : Set^D) = F\circ f$ and $f^*(\gamma : F \to G)_{c : C} = \gamma_{fc} : Ffc \to Gfc$. – vincenzoml Jul 21 '10 at 14:09 The adjoints (left and right) to such a pre-composition functor are called Kan extensions. They're the subject of the last chapter of Mac Lane's "Categories for the Working Mathematician." – Andreas Blass Jul 21 '10 at 14:09 Peter: Mac Lane and Moerdijk define $f_*$ in the "topological" way, that is (p. 68 of the copy you linked) $(f_*F)V=F(f^{-1}V)$. But it is not clear to me how this definition gives us (from a presheaf $F : Set^C$) a presheaf $f_*F: Set^D$. That is: 1) What is now $F(f^{-1}V)$? f may be non-injective on objects. 2) What is the action of $f_*F$ on arrows? $C$ may even be a discrete category, so $f^{-1}$ of an arrow in $D$ may be undefined. – vincenzoml Jul 21 '10 at 14:23 vincenzo: sorry, you're right. Not concentrated!
– Pietro Majer Jul 21 '10 at 18:37 Given a functor $f:\mathcal{C}\to\mathcal{D}$ and any complete category $\mathcal{A}$ (e.g., take $\mathcal{A}=\text{Sets}$ to get the case you are asking about), there exists a right adjoint $f_{*}:[\mathcal{C},\mathcal{A}]\to[\mathcal{D},\mathcal{A}]$ to the "inverse image functor" $f^{*}$, and this is given by taking right Kan extension. Explicitly, given a functor $X:\mathcal{C}\to\mathcal{A}$, the functor $f_{*}(X):\mathcal{D}\to\mathcal{A}$ is the right Kan extension of $X$ along $f$. This can be described explicitly using the limit formula $$f_{*}(X)(d)=\lim_{d\to f(c)}X(c)$$ for $d$ an object of $\mathcal{D}$ (the action on arrows of $\mathcal{D}$ is then induced by the universal property of limits). The indexing category of the limit here is of course the comma category $(d\downarrow f)$. When $\mathcal{A}$ is cocomplete there is a corresponding left adjoint $f_{!}\dashv f^{*}$ which is given by taking left Kan extension along $f$. This can be explicitly described by the colimit formula dual to the limit formula given above. (I should say that all of this is described very nicely in Mac Lane's book Categories for the Working Mathematician.) Michael, thanks! This helped me understand the definition in MacLane. But reading p. 237-238 (in the second edition), that is, the definition you spell out above, and having made calculations, it seems to me that there's a typo above: the index category of the limit is $(d \downarrow f)$ and the limit is indexed over $d \to f(c)$. For the rest, a very clear explanation. – vincenzoml Jul 22 '10 at 12:00 You're quite right about the typo. I'm so used to using this formula when dealing with presheaves $[\mathcal{C}^{\text{op}},\textbf{Sets}]$ that I mixed up the variance, doh! – Michael A Warren Jul 22 '10 at 15:04
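As a concrete sanity check (my own illustration, not part of the thread): when $\mathcal{C}$ and $\mathcal{D}$ are discrete, the comma category $(d\downarrow f)$ reduces to the fiber $f^{-1}(d)$, so the right adjoint $f_*$ is a product over fibers and the left adjoint $f_!$ is a disjoint union over fibers. A small Python sketch of that degenerate case:

```python
from itertools import product

# Discrete case: C, D are plain sets, f : C -> D a function, and a
# functor X : C -> Set is just a dict mapping each object c to a set.
# Right adjoint f_* : product over the fiber f^{-1}(d);
# left adjoint  f_! : disjoint union (tagged union) over the fiber.

def f_star(X, f, D):
    """f_*(X)(d) = product of X(c) over {c : f(c) = d} (tuples, fiber sorted)."""
    return {d: set(product(*[sorted(X[c]) for c in sorted(X) if f[c] == d]))
            for d in D}

def f_shriek(X, f, D):
    """f_!(X)(d) = disjoint union of X(c) over {c : f(c) = d}."""
    return {d: {(c, x) for c in X if f[c] == d for x in X[c]} for d in D}

C = ['c1', 'c2', 'c3']
D = ['d1', 'd2']
f = {'c1': 'd1', 'c2': 'd1', 'c3': 'd2'}
X = {'c1': {0, 1}, 'c2': {'a'}, 'c3': {True}}

print(len(f_star(X, f, D)['d1']))    # 2 = 2 * 1 (product over the fiber)
print(len(f_shriek(X, f, D)['d1']))  # 3 = 2 + 1 (disjoint union)
```

Note the empty-fiber behavior comes out right for free: an empty product is a one-element set, an empty disjoint union is empty.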
{"url":"http://mathoverflow.net/questions/32791/how-is-the-right-adjoint-f-to-the-inverse-image-functor-f-described-for?sort=votes","timestamp":"2014-04-17T10:00:27Z","content_type":null,"content_length":"61092","record_id":"<urn:uuid:23a79f29-58c8-4a3c-ace6-02b887af50c6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
San Marino Algebra 1 Tutor ...I've played at venues such as The Shrine in L.A. and Highline Ballroom in NYC. Since 2002, I have released 3 short EP's, and a full-length album in 2009. I'm sure you have your reasons for wanting to learn the guitar - whether you want to play along with songs, write your own music, or expand your musical horizons. 42 Subjects: including algebra 1, reading, English, elementary (k-6th) ...I have completed student teaching and hope to teach in my own class soon. I am aware of Common Core standards and can assist students in a variety of ways. I have two kids of my own, so timely cancellation is always appreciated. 7 Subjects: including algebra 1, chemistry, calculus, geometry ...I have aced pre-algebra as well as Algebras I and II. I scored a 780 in the harder SAT math test, which covered up to Algebra II. Having a solid foundation in pre-algebra immensely helped me in the higher level math and engineering courses in college. 12 Subjects: including algebra 1, calculus, physics, geometry I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always work with students to overcome obstacles that they might have. 37 Subjects: including algebra 1, English, chemistry, calculus Hi! I am excited to have the opportunity to help and serve my students. Growing up in a mediocre high school, I have witnessed peers that would neglect and throw away their potentiality of receiving good grades, earning recognition from their teachers, and entering a renowned college. 16 Subjects: including algebra 1, English, reading, writing
{"url":"http://www.purplemath.com/san_marino_ca_algebra_1_tutors.php","timestamp":"2014-04-20T13:50:38Z","content_type":null,"content_length":"23896","record_id":"<urn:uuid:92a0e198-ef36-4735-96b3-c3945ed39e4a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Rationals/Irrationals and Continuity February 27th 2010, 03:21 PM #1 Jan 2010 Rationals/Irrationals and Continuity If you define $g: \mathbb{R} \rightarrow \mathbb{R}$ by $g(x) := 2x$ for $x$ rational, and $g(x) := x + 3$ for $x$ irrational, at what points is $g$ continuous? Think about it like this. If $g$ is continuous and $x_n\to x$ then $g(x_n)\to g(x)$. So, let $x\in\mathbb{R}$; then, since both the irrationals and rationals are dense in the reals, there exist sequences $\{q_n\}_{n\in\mathbb{N}},\{i_n\}_{n\in\mathbb{N}}$ of rationals and irrationals respectively such that $q_n\to x,i_n\to x$. Thus, we must have that $g(x)=\lim g(q_n)=\lim 2q_n=2\lim q_n=2x$ and $g(x)=\lim g(i_n)=\lim\left(i_n+3\right)=\lim i_n+3=x+3$. In particular, $2x=x+3\implies x=3$. Conversely, since both $2x$ and $x+3$ are continuous functions and they agree at $x=3$, $g$ really is continuous there. Thus, that is the only point of continuity.
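A quick numerical illustration of the argument (mine, not part of the original thread): along rationals the values approach $2x$, along irrationals they approach $x+3$, and the two agree only at $x=3$:

```python
import math

# g(x) = 2x on the rationals, x + 3 on the irrationals.
# Approaching x along a rational sequence forces the limit 2x;
# approaching along an irrational sequence forces the limit x + 3.
# Continuity at x requires both, which happens only when 2x = x + 3.

def g(x, is_rational):
    return 2 * x if is_rational else x + 3

candidates = [0.0, 1.0, 3.0, 5.0]
for x in candidates:
    q_limit = g(x, True)     # limit along rationals: 2x
    i_limit = g(x, False)    # limit along irrationals: x + 3
    print(x, math.isclose(q_limit, i_limit))

# Only x = 3 gives 2*3 == 3 + 3 == 6.
assert [x for x in candidates
        if math.isclose(g(x, True), g(x, False))] == [3.0]
```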
{"url":"http://mathhelpforum.com/differential-geometry/131071-rationals-irrationals-continuity.html","timestamp":"2014-04-21T07:47:45Z","content_type":null,"content_length":"37721","record_id":"<urn:uuid:34c72961-5e48-40c3-9cce-7e66aa4b0f2f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Concrete member definition - beam Use this dialog to define the parameters of an RC beam. The dialog contains the following options: • Member field - displays the name of the selected member type. The default name is: "Beam" and is assigned the first number available. You can also enter another name in place of the default. • Span length - contains options that define the calculation length of a beam. There are three ways to define the length: □ Click At support faces - the distance between the support faces is adopted as a design length. □ Click In axes - the design length equals the beam length. □ Click Coefficient, and enter a value - the value becomes the coefficient by which the length at support faces is multiplied to obtain the required value. For example, if you enter a value of 0.25, it means that the required value equals ¼ of the length at support faces. L[0] denotes the beam length between the nodes. • Support width field - contains options that define the width of supports. The support width can be defined in two ways: • Manually - to manually define the support width, enter the required values in the Beginning and End fields. • According to structure geometry. If the data concerning support geometry are to be taken directly from the structure model, the according to structure geometry option should be selected. In this case, the support dimensions are calculated on the basis of the projection of supporting element dimensions on the beam axis. • Admissible deflection - defines the maximum deflection calculated according to code requirements for RC beams. It can be defined in two ways. • Absolute deflection - enter the value of the deflection. • Relative deflection - the deflection is defined with respect to bar length (e.g. L/200). If the allowable values are exceeded, this is flagged in the calculation results tables.
• RC beams may be designed for a selected set of forces: □ Axial force Nx □ Bending moment and transversal force My / Fz □ Bending moment and transversal force Mz / Fy □ Torsional moment Mx. • T-beam (slab considered) option - if selected, the design of T-beams (beams considered integrally with slabs) is active. The edit fields b1 and b2 then become available, and you can specify the values of the maximum effective slab widths. If you select the Geometrical verification option, Robot checks during calculations whether the values b1 and b2 are excessive (that is, whether they geometrically exceed half the span length). The value b1 corresponds to the panel located in the Y+ part of the member, while b2 - in the Y- part. Definition of a beam in the slab is justified if the member lies in the plane of the panel (an offset may be additionally defined for the beam). If the member does not lie in the plane of the panel, then it is assumed that the effective slab width equals zero. Note: T-beams (beams considered integral with the slab) may be defined for two structure types: 3D Shell and Plate; for the remaining structure types these options are not available. Results of RC beam calculations in which a T-beam (beam considered integral with the RC slab) was taken into account are displayed in the Maps on Bars dialog (the Design tab) and in internal force tables. RC T-beams can also be designed in the RC Beams module. See also: Calculations of RC T-beams considered with the slab • Use the buttons at the bottom of the dialog to: □ Additional parameters - click it to view all the codes involving calculation of deflection (for the serviceability limit state). The Additional parameters dialog opens. □ Note - click it to display a calculation note. □ Save - click it to add a member type with the defined name and parameters to the list of the formerly-defined RC member types.
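The span-length and admissible-deflection rules above can be summarized in a short sketch. All names below are invented for illustration -- this is not the Robot API -- and the sketch assumes the length at support faces equals the length in axes minus half of each support width:

```python
# Illustrative sketch of the span-length and deflection rules described
# above. Names are invented for this example; this is NOT the Autodesk
# Robot API. Assumption: the beam axis runs between support centers, so
# the length at support faces is L_axes - w_begin/2 - w_end/2.

def design_length(length_in_axes, support_width_begin, support_width_end,
                  mode="at_support_faces", coefficient=1.0):
    """Return the calculation length of the beam.

    mode: 'in_axes'          -> full beam length between the nodes (L[0]);
          'at_support_faces' -> distance between the support faces;
          'coefficient'      -> coefficient * (length at support faces).
    """
    at_faces = (length_in_axes
                - support_width_begin / 2.0 - support_width_end / 2.0)
    if mode == "in_axes":
        return length_in_axes
    if mode == "at_support_faces":
        return at_faces
    if mode == "coefficient":
        return coefficient * at_faces
    raise ValueError(mode)

def deflection_ok(deflection, span, relative_divisor=200):
    """Relative admissible deflection check, e.g. span/200."""
    return deflection <= span / relative_divisor

# Coefficient 0.25 means a quarter of the 5.6 m length at support faces:
L = design_length(6.0, 0.4, 0.4, mode="coefficient", coefficient=0.25)
print(L)
print(deflection_ok(0.02, 5.6))   # 0.02 m vs 5.6/200 = 0.028 m
```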
{"url":"http://docs.autodesk.com/RSA/2012/ENU/filesROBOT/GUID-1DEB4F93-557B-4B12-8F27-8589E6EB14C-1092.htm","timestamp":"2014-04-17T15:34:52Z","content_type":null,"content_length":"13857","record_id":"<urn:uuid:44dd19e2-1161-4833-a166-b02d09b8886e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
No New Z Bosons Below A TeV New important information on high-energy particle physics has recently been released by the CDF experiment, one of the two detectors scrutinizing the 2-TeV proton-antiproton collisions copiously produced by the Fermilab Tevatron collider located near Batavia, Illinois (see aerial view of the site below). The CDF experiment has ruled out the existence of so-called "Z' bosons" (particles extraneous to the Standard Model which are predicted by a number of new physics models) for Z' masses below one Tera-electronvolt. In other words, if such particles exist they must be more massive than a thousand protons. The limit is M(Z')>1071 GeV, so to be precise Z' must be more massive than eleven hundred and forty protons, since a proton weighs 0.938 GeV. Or if you need an alternative unit of measure, let's take my own body weight after Christmas parties are over: at 72.5 kilograms I weigh about 43.3 thousand trillion trillion protons, so we can also say that Z' bosons must be heavier than 26.3 thousandths of a trillionth of a trillionth of a Tommaso. If I carried a handful of such particles in my pocket, you would not notice the difference. Three minutes of boring theory before the exciting new result As I have grown accustomed to do when time allows, before offering the interesting new experimental result I get a bit didactical. Given that this medium does not guarantee much concentration from the reader, I will constrain this introduction to three minutes' worth of text. Of course, if you are knowledgeable or impatient, you are free to jump to the next section... Z' (read: Z-prime) bosons are ubiquitous in extensions of the gauge group SU(2)xU(1) on which the Standard Model is based, whenever a U(1) group is added to the structure. U(1) is the one-dimensional unitary group, and it can be represented by rotations on a plane.
Rotations are parametrized by an arbitrary angle, or phase - let us call it φ. To understand what the connection is between this rotation group and the physics of elementary particles, we may suppose that to each point of space can be associated a small dial with one hand. The angle (with respect to a fixed direction) the hand aims at is immaterial, and does not modify the properties of space: in quantum-mechanical terms, an arbitrary choice of the angle in a point of space corresponds to multiplying the wave-function of our physical system in that point by an unobservable complex phase. Please do not be scared by the term "wave function": it is just a mathematical expression which may be describing, for instance, a single electron in quantum mechanical calculations. The important thing is that we cannot observe wave function amplitudes (complex numbers) but only the resulting intensities (real numbers!), which are obtained by taking the squared modulus of the amplitudes. Because of that, the phase is irrelevant for the observable physics: the squared modulus of the phase is equal to unity, |e^(iφ)|² = 1. In the course of the past century physicists have understood that their theories, in order to represent the reality of the mechanics of particles, need to be "gauge invariant". Gauge invariance is the requisite that the behaviour of elementary particles (i.e., their predicted future history given a set of initial conditions) is to be left unmodified by an arbitrary choice of the phases in every point of space. The classic example is electromagnetism: one starts with a description of electrons as wave functions of point charges with spin in an electromagnetic field.
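The invariance just described is easy to verify numerically. The snippet below (my illustration, not from the original post) multiplies an amplitude by unit phases and checks that the observable intensity is unchanged:

```python
import cmath

# Multiplying a quantum amplitude psi by an arbitrary phase exp(i*phi)
# does not change the observable intensity |psi|^2, because
# |exp(i*phi)| = 1 for any real phi.

psi = 0.6 + 0.8j                     # some amplitude; |psi|^2 = 1.0
for phi in [0.0, 0.7, 3.14159, 42.0]:
    phase = cmath.exp(1j * phi)
    assert abs(abs(phase) - 1.0) < 1e-12                  # unit modulus
    assert abs(abs(phase * psi)**2 - abs(psi)**2) < 1e-12  # same intensity
print("intensity is invariant under a global phase")
```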
Once one allows that an arbitrary phase, different in every space-time point, multiplies the wave function of electrons, and one insists that this does not affect their dynamics, one is mathematically forced to admit that a new vector boson exists: this particle is the "messenger" of the interaction, which connects the different space-time points sewing them together and canceling the different complex phases. This vector boson is the photon, the quantum of the electromagnetic field. The arbitrariness of the phase corresponds to an "invariance" of the physical laws under the choice of this parameter. It has been known for about a century that to such an invariance must correspond a conservation law. The conservation law in question, for electromagnetism, is that of electric charge: the theory constructed with local gauge invariance guarantees that the number of positive minus negative electric charges is a constant. Now, it is very tempting to hypothesize that the gauge group of elementary particles contains an additional U(1) group besides the one that generates the photon. Such an extension would have several attractive properties, and indeed theorists have generated a set of theories all based on additional U(1) groups. Add a U(1) group to the theory, and you gain the privilege of posing the existence of a fifth force, a new kind of electric-like charge, and a number of as-yet unseen phenomena. Isn't that cool?
But experimentalists with their searches gradually restrict the domain of existence of these theories, by excluding parts of the allowed parameter space. That is what CDF is doing with the unknown Z' mass. The CDF search Okay, theoretical introduction over. Now that you understand that these beasts must hide, if they exist, at large masses and low production rates, and that they are neutral and massive, you also may guess how we look for them. A Z' boson produced in a proton-antiproton collision will, a fraction of the times, decay into a pair of charged leptons: electron-positron, or muon-antimuon pairs. Tau pairs, of course, are less attractive due to the difficulty in separating these from hadronic jets; quark-antiquark pairs are even harder; and neutrino pairs are simply a nightmare, since neutrinos will escape unseen (we can still detect their departure, just as if, blindfolded and deaf and rowing in a boat, we suddenly feel the boat tilting on our side: our friend on the other side must have fallen in the water!). Muons and electrons of high energy are equally powerful signatures, but the new CDF analysis utilizes the former. The search for high-mass dimuon pairs is based on a data sample corresponding to an integrated luminosity of 4.6 inverse femtobarns of proton-antiproton collisions. What this means is that the set of events are collected from a total of just a bit less than 400 trillion collisions. This number is easy to compute: the total proton-antiproton cross section is 80 millibarns, or 80 trillion femtobarns (since milli- stands for 10^-3, micro- for 10^-6, nano- for 10^-9, pico- for 10^-12, and femto- for 10^-15). Multiply the cross section by the luminosity and you get the number of collisions: N = 80 trillion femtobarns times 4.6 inverse femtobarns is, indeed, 370 trillions of them. The large statistics allows CDF to look for Z' bosons which would be produced at very small rates -even a few times in a trillion, to be sure. 
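The back-of-the-envelope computation above is easy to reproduce (a sketch of the arithmetic only, nothing CDF-specific):

```python
# Number of collisions N = total cross section * integrated luminosity.
# sigma_tot ~ 80 millibarn; 1 barn = 1e15 femtobarn, so 80 mb = 8e13 fb.
# Integrated luminosity of the dataset: 4.6 inverse femtobarns.

sigma_fb = 80e-3 * 1e15          # 80 mb expressed in femtobarn
lumi_inv_fb = 4.6                # fb^-1
n_collisions = sigma_fb * lumi_inv_fb
print(n_collisions)              # ~3.68e14: just under 400 trillion
assert 3.6e14 < n_collisions < 3.7e14
```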
But in order to avoid being deceived, one needs to correctly assess the background rate of processes that produce two identified muons of high invariant mass, without being due to the decay of the new Z' boson. This is indeed the hardest part of the work: high-mass dimuon pairs can be produced by the Drell-Yan process (that is, decays of Z bosons, or virtual photons), but also by the decay of top quark pairs, or of pairs of W bosons; they also may be the result of hadrons that fake the muon signature; and finally, one must beware of cosmic rays -muons produced in the upper atmosphere, which may cross the detector at its center and get reconstructed as two separate muons of opposite charge! All the possible background processes are computed with a number of different methods, and their sum is evaluated as a function of the invariant mass they would produce. The total expected spectrum is shown in the figure below; the black points with error bars show the observed data. It is clear by simple visual inspection that the data agree with the expected backgrounds. The comparison, performed with the help of some complicated statistical method, allows to extract a so-called "upper limit" on the production cross section of Z' bosons, as a function of the mass of the particle. This upper limit is the maximum production rate of the particle given that it has escaped detection in the analyzed dataset. The upper limit is shown by the red curve in the figure below. In the figure are also shown the expected rates predicted for different kinds of Z' bosons, in different theoretical models. By comparing the upper limit with the theoretical curves, one may deduce that the Z', if it exists, must be heavier than the mass value corresponding to where the theoretical curves intercept the red curve. That is because for lower masses the predicted rate would exceed the observed upper limit. 
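The actual statistical treatment is more sophisticated than this, but the flavor of an "upper limit" can be conveyed with the simplest possible counting experiment. The toy sketch below (my own, not the CDF method; the efficiency value is invented) computes the classic 95% CL limit for zero observed events and negligible background:

```python
import math

# Toy counting experiment, NOT the actual CDF statistical method:
# with zero observed events and negligible background, the 95% CL upper
# limit s_up on the expected signal count satisfies
#   P(0 events | s_up) = exp(-s_up) = 0.05  =>  s_up = -ln(0.05) ~ 3.0
s_up = -math.log(0.05)
print(round(s_up, 2))            # about 3 expected signal events

# Dividing by efficiency and integrated luminosity turns the event
# limit into a cross-section limit (hypothetical efficiency below):
lumi = 4.6                       # fb^-1, as in the CDF dataset
efficiency = 0.5                 # invented acceptance x efficiency
sigma_up = s_up / (efficiency * lumi)
print(sigma_up)                  # upper limit in femtobarn
```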
The described exercise allows CDF to exclude that Z' bosons, if they exist, have masses below 1071 GeV. This value corresponds to the "simplest" kind of Z' bosons; the different models predict enhanced or suppressed rates of production and/or decay into muon pairs, such that the corresponding limits are higher or lower than the quoted one. In summary... To summarize, CDF excludes the existence of new Z' bosons of mass below a TeV or so. This limit is probably the last one that CDF will produce in its long and illustrious career: the LHC experiments ATLAS and CMS, with their bounty of 7-TeV proton-proton collisions, are already in the position of extending the search to much higher mass values, effectively putting CDF and DZERO out of business in this particular research topic. Such new improved limits are probably going to be produced soon... Unless a Z' boson does finally appear at the LHC! For more information on the CDF analysis, please see here. Is there a publicly accessible link to this analysis? Remember, you have physicist as well as general readers... Anonymous (not verified) | 01/09/11 | 10:59 AM Tommaso Dorigo | 01/09/11 | 12:06 PM Cool! I like (colored) preon-composite standard Z bosons better, anyway. Kea (not verified) | 01/09/11 | 15:57 PM and neutrino pairs are simply a nightmare, since neutrinos will escape unseen (we can still detect their departure, just as if, blindfolded and deaf and rowing in a boat, we suddenly feel the boat tilting on our side: our friend on the other side must have fallen in the water!) Tommaso, I wondered how often the boat is felt to be tilting and which graphs would show this? 
My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http:// Helen Barratt | 01/09/11 | 16:17 PM Tommaso Dorigo | 01/09/11 | 16:28 PM Thanks Tommaso, so the LHC boat must be starting to really rock, as that would be 100 every second wouldn't it? According to the Large Hadron Collider Communication website When the LHC is fully operational, it will produce roughly 1 billion proton-proton collision events per second in the detectors (40 million bunch crossings per second). This data will be heavily filtered so that only about 100 events of interest per second will be recorded permanently. Each event represents a few Megabytes of data, so the total data rate from the experiments will be of order 1 Gigabyte per second. My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http:// Helen Barratt | 01/09/11 | 17:23 PM Yes :-) but beware that at the LHC, due to the higher centre-of-mass energy, the number of W and Z boson decays to neutrinos produce a rate that is comparatively larger by a factor of 5 or so (at 8 TeV, the energy which we will be running at this year). So once in 2 million collisions. In 2011 the instantaneous luminosity will bring the number of collisions in the LHC experiments to a hundred million per second, so this will be 50 Hertz. Beware, these are rough numbers, but the order of magnitude is that one. Tommaso Dorigo | 01/10/11 | 01:05 AM Tommaso, the LHC boat must be really rocking wildly with 500 neutrinos per second being generated. Are we still rowing blindfolded and deaf and suddenly feeling the boat tilting on our side as 500 friends on the other side fall into the water and if so how are we detecting this? 
Do we have any clearer understanding of the solar neutrino problem through Earth generated neutrinos from these LHC experiments yet? If so what are they doing and where are they going? According to this Wiki article :- The crux of the solar neutrino problem, and its resolution, lies in the fact that both the interior of the Sun and the behavior of traveling neutrinos is unknown to begin with. One may assume knowledge of one and determine the other by experiment here on Earth. My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http:// Helen Barratt | 01/10/11 | 07:06 AM There are not 500 neutrinos in an event; there are of the order of 50 events per second containing energetic neutrino(s) -one or two. They are detected by the imbalance in transverse energy, as I have explained several times in this column: they leave undetected, but the total energy flow in the direction transverse to the beam must be null (conservation of momentum: the initial protons carry no net momentum orthogonally from the beam line, so the same applies to the produced particles). The solar neutrino problem is not a problem - it is the manifestation of neutrino oscillations. We cannot study neutrino oscillations with the LHC, this is a topic for other experiments. Tommaso Dorigo | 01/10/11 | 07:20 AM Tommaso, if the LHC is generating millions of neutrino oscillations then why can't CMS study them? Isn't that what this paper called 'Searches for fourth generation particles and heavy neutrinos at CMS experiment' was claiming to be doing in 2009? I thought to some extent MoEDAL was also looking for some of these higher generation or flavoured neutrino oscillations being generated at LHC or am I completely confused as usual? 
My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http:// Helen Barratt | 01/10/11 | 14:36 PM Tommaso Dorigo | 01/10/11 | 14:50 PM No, I can't wait for that, I've already used up at least half a lifetime trying to understand this much. Maybe we need better detection methods? MINOS seem to be doing well at detecting neutrinos, do they have better detectors than LHC? My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http:// Helen Barratt | 01/10/11 | 16:00 PM Tommaso Dorigo | 01/10/11 | 16:05 PM
{"url":"http://www.science20.com/quantum_diaries_survivor/no_new_z_bosons_below_tev","timestamp":"2014-04-21T02:13:20Z","content_type":null,"content_length":"68056","record_id":"<urn:uuid:cb748179-fa4a-4dce-872c-c513dc6433d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Occidental ACT Math Tutors ...The coolest part about that kind of math? For me the coolest part about that kind of math is explaining it to non-math people. Non-math people are a gift to someone like me because they force me to reexamine my own way of looking at math and to come up with new, different and innovative ways of explaining it. 11 Subjects: including ACT Math, calculus, geometry, algebra 1 ...Expert in pitching mechanics, hitting strategies, and basic gameplay. Finished my sixth season as an allstar starting pitcher, and having led my team to win the league championship. Received an "A" in differential equations at UCSD. 50 Subjects: including ACT Math, reading, English, calculus I'm a retired engineer and math teacher with a love for teaching anyone who wants to learn. As an engineer, I regularly used all levels of math (from arithmetic through calculus), statistics, and physics. I hold a California Single Subject Teaching Credential in math and physics (I taught high school math from pre-algebra to geometry). 26 Subjects: including ACT Math, reading, calculus, physics ...I'm aware that people need to enjoy something to focus on learning, and I always try to make the lessons fun as well as thorough. I firmly believe that everyone can get a lot better at math with just effort and guidance. You do not have to "be a math person" to do well. 21 Subjects: including ACT Math, English, reading, statistics ...I studied proof, logic, Markov chains, probability, and graph theory. I am quite knowledgeable in this subject. I have studied differential and partial differential equations and I am quite proficient in these two subjects. 39 Subjects: including ACT Math, reading, chemistry, calculus
{"url":"http://www.algebrahelp.com/Occidental_act_math_tutors.jsp","timestamp":"2014-04-19T09:32:19Z","content_type":null,"content_length":"24854","record_id":"<urn:uuid:4c67cab1-4fc5-4cab-854c-1f264689ba5c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with the answers with the following proofs?? Hi everyone, I'm just curious what would be the answers to the following two proofs? Attachment 29156 Re: Need help with the answers with the following proofs?? I can't see what is to be proved on either of those. Angle C is the same in the two triangles (vertical angles). Apply the SAS similarity theorem.
{"url":"http://mathhelpforum.com/geometry/221919-need-help-answers-following-proofs-print.html","timestamp":"2014-04-21T16:10:47Z","content_type":null,"content_length":"4159","record_id":"<urn:uuid:2f174807-ad8d-42d8-9c1b-2f224510ff82>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Memory Dump I've been hacking on the Pycha autopadding feature lately and, as with most things I write here, I'll try to summarize the tricky parts so I can understand them in a couple of months. As a side effect, I hope you enjoy this post. In current Pycha versions there is a padding option but it is not what you expect. The current padding represents the space between the limit of the graphics area and the chart area. The labels and ticks are drawn in this space. What this means is the programmer has to manually adjust the padding in order to make the labels render correctly. When you are using pycha for drawing one or two charts in an environment under your control this is not a big issue. When you draw lots of charts from wild data sources, autopadding helps a lot. In Figure 1 you can see old style pie charts where we had to manually adjust the padding to make the labels fit. As a side effect the pie was not centered in the graphics area. Here the padding options are: {'top': 0, 'right': 10, 'left': 70, 'bottom': 0}. Imagine what happens if the dataset changes: we would probably need to change the padding. So, what exactly is autopadding? Well, it's the feature that lets pycha compute the labels and ticks area automatically. Now the padding that you specify is real empty space. For most of the charts that pycha supports this feature is not very difficult. Except for the pie charts. In this case, it took me a while to get it working and the results, while good enough, are not perfect. In Figure 2 you can see such results. As you can see the pie is smaller now and the slices are ordered in a different way. The order really matters and I will probably add an option in the future to indicate the starting angle (now it is 0) so there is a small degree of control left to the programmer. Note that this time the four paddings are equal to zero.
Let's describe the problem: given a rectangular area (width and height) and a list of tuples made of a label and an angle, compute the circle's radius and the label positions so that the pie area is maximized and the labels do not overlap the pie. For each label, the associated angle is the one that splits its pie slice into two slices of the same size. If you have the initial and final angle of each slice (as I do), computing this angle is as easy as summing both and dividing by two. Step 1. So the first thing I do is compute the center of the area and the bounding boxes of the labels. Any decent graphics library has a function for that, and Cairo is no different. Step 2. Now for each tuple in my label-angle list I draw a ray that starts at the center of the chart and goes in the direction given by the angle. Remember that a line can be defined by a point and an angle. Step 3. I calculate the intersection of these rays with the rectangle that defines my graphics area (drawn in blue in Figure 3 because debug mode is enabled). Clipping is not useful here since I want the coordinates of every intersection; the rays are not drawn in the final result unless debug mode is enabled. In order to make this computation easier I divide the area into four parts delimited by the two diagonals of the rectangle. Every ray is defined by an equation of the form y = mx + n, where m is tan(angle), and I know which of the four sides of the rectangle the ray is going to intersect based on the angle. As every side of the rectangle is also defined by a very simple equation, all I have to do is solve a system of equations for each ray. Nice. We now have a center and a bunch of rays and intersection points. We also have bounding boxes for the labels. Now I have to place each bounding box with two constraints: • Its center lies on its associated ray • One of its sides lies on one of the sides of the big chart rectangle Step 4. It's easy to guess which side of the bounding box matches which side of the chart rectangle. A little bit of trigonometry will do the rest. Step 5. Ok, now I have all the labels, or more correctly their bounding boxes, positioned in the chart rectangle, as far from the center as they can be. Now the maximum radius for the pie is the minimum of the distances from the bounding boxes to the center. For every bounding box there is one side that is closer to the center than the others, so the distance we want is the distance from that side to the center. Again, having divided the area into those four parts makes this step easier. In our example shown in Figure 5 the label that restricts the pie radius is the one that reads "stackedbar.py (6.9%)". The good news is that we have achieved one of our goals: computing the radius of our pie chart. The bad news is that that was the easy part. In the second round of our algorithm we are going to move the label bounding boxes as close as we can to the pie. Otherwise they don't look good and the connection between each slice and its label is lost. As you may guess, all the labels can be moved closer to the center except for one: the label that gave us the radius of the pie circle. Step 6. So let's try to move the labels. For this task I will divide the chart rectangle into four parts again, but this time I'll use two perpendicular lines, one horizontal and one vertical. So I'll get my circle divided into four quadrants. For each label I'll compute the intersection of its ray and our pie circle. Remember that now we know its radius. Step 7. For each label, place one corner of its bounding box at the point computed in step 6. For the first quadrant this corner is the bottom-left corner, for the second quadrant it's the bottom-right corner, for the third quadrant it's the top-right corner, and for the fourth quadrant it's the top-left corner.
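The ray-rectangle intersection of steps 2 and 3 can be sketched as follows. This is a standalone illustration, not Pycha's actual code; it uses the parametric form of the ray, which sidesteps the vertical-slope special case of y = mx + n:

```python
import math

def ray_rect_intersection(cx, cy, angle, width, height):
    # Intersection of a ray starting at (cx, cy) with the boundary of the
    # rectangle [0, width] x [0, height].  Parametric form:
    # (x, y) = (cx + t*cos(angle), cy + t*sin(angle)), t > 0.
    dx, dy = math.cos(angle), math.sin(angle)
    hits = []
    if dx > 0:
        hits.append((width - cx) / dx)   # ray exits through the right side
    elif dx < 0:
        hits.append(-cx / dx)            # ray exits through the left side
    if dy > 0:
        hits.append((height - cy) / dy)  # far horizontal side
    elif dy < 0:
        hits.append(-cy / dy)            # near horizontal side
    t = min(hits)                        # first side the ray reaches
    return cx + t * dx, cy + t * dy
```

For example, from the centre of a 100 x 100 area a ray at angle 0 exits at (100, 50), and a ray at angle pi/2 exits at (50, 100).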
This is a feasible position but it doesn't look very nice, so we will try to improve it. Step 8. For each label draw another circle with the same center as the pie circle and a radius equal to the distance between the center of the bounding box and the center of the circle. Now let's compute the intersection between this second circle and the ray of the same label. Voilà, that's the point where we are going to move the center of the bounding box. So that's the end of the algorithm. The labels are now positioned closer to the pie circle but, as I said before, it's not perfect, since labels near the top and bottom of the circle could be closer than they are. I could have used a soft computing algorithm that moves the bounding box along its ray until it touches the pie circle, but that would be more expensive. By the way, I'm using a named branch for this feature and hopefully it will get merged into the main branch for the 0.7.0 release of pycha.
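Step 8 reduces to a few lines. This sketch (mine, not Pycha's code) keeps the label's distance to the pie centre and slides the centre of its bounding box onto the label's ray:

```python
import math

def slide_label_onto_ray(pie_cx, pie_cy, angle, box_cx, box_cy):
    # Step 8: the second circle's radius is the distance from the pie centre
    # to the bounding-box centre; intersecting it with the label's ray just
    # means placing the box centre at that distance along the ray direction.
    r = math.hypot(box_cx - pie_cx, box_cy - pie_cy)
    return pie_cx + r * math.cos(angle), pie_cy + r * math.sin(angle)
```

For instance, a box centred at (3, 4) relative to the pie centre is at distance 5, so sliding it onto the ray at angle 0 moves its centre to (5, 0).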
{"url":"http://lorenzogil.com/blog/category/pycha/","timestamp":"2014-04-19T10:12:22Z","content_type":null,"content_length":"20387","record_id":"<urn:uuid:e40f427f-d028-487d-adc7-05d1f6b6a442>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Sugar Land Prealgebra Tutor Find a Sugar Land Prealgebra Tutor ...Sometimes, I even convince the student that math can be fun. At another private tutoring company, I tutored several students for the ISEE test. This test is much easier than the SAT test and thus is much less mathematically challenging. I have gone over practice tests and feel comfortable tutoring for this test. 14 Subjects: including prealgebra, calculus, geometry, algebra 1 Hi! My name is Tara and while I graduated from Southwestern University with a major in Biology (Pre-med track) and a minor in Spanish, my love has always been math, which led me to become a math teacher and tutor. I am a certified 8-12 Math teacher and I have taught Algebra I, Geometry, and Geo... 15 Subjects: including prealgebra, reading, GED, geometry ...My tutoring personality: I would first like to say that I firmly believe that every student can learn any subject. To achieve that, I try to break the information down into smaller pieces to make it easier for students to comprehend. Also, I show students how to figure out what is the most important information so that they can more quickly master the material. 24 Subjects: including prealgebra, chemistry, English, reading ...I have taught chemistry, physics, and chemical process technology at Alvin Community College for 6 years, and I currently also tutor High School students. My strongest areas are chemistry and math up through precalculus. I will also tutor freshman/sophomore college students in these areas. 11 Subjects: including prealgebra, chemistry, English, geometry ...I have also taught TAKS Math classes at Willowridge high school for many years and have tutored students with great success for this exam. I taught TAKS Math classes for 10th through 12th grades.
These were classes that were scheduled for students who had taken their grade level TAKS tests and had failed the previous year and were more than 2 grade levels behind in their math. 5 Subjects: including prealgebra, geometry, ASVAB, algebra 1
{"url":"http://www.purplemath.com/sugar_land_tx_prealgebra_tutors.php","timestamp":"2014-04-17T07:34:06Z","content_type":null,"content_length":"24355","record_id":"<urn:uuid:c2824bb6-9b4e-4710-9324-085e00a89542>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
A problem in complex analysis April 13th 2012, 05:08 PM #1 Aug 2010 A problem in complex analysis How do I find the sum S = Σ (x_n)^(-2), where the x_n are the positive roots of tan x = x? Does that have to do with the Weierstrass product? I am also trying to find a differential equation whose eigenvalues are the x_n.
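The requested sum can be checked numerically. The sketch below (my own illustration, not from the thread) finds the first two thousand positive roots of tan x = x by bisection on f(x) = sin x - x cos x, whose positive zeros coincide with them; the classical closed form, obtained from the product expansion sin x - x cos x = (x^3/3) Π (1 - x^2/x_n^2) and the series coefficient of x^5, is exactly 1/10:

```python
import math

# Positive roots of tan x = x are the positive zeros of f(x) = sin x - x cos x;
# the n-th one lies strictly between n*pi and (n + 1/2)*pi, where f changes sign.
def f(x):
    return math.sin(x) - x * math.cos(x)

total = 0.0
for n in range(1, 2001):
    a, b = n * math.pi, (n + 0.5) * math.pi
    for _ in range(60):          # plain bisection on the sign change
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    total += (0.5 * (a + b)) ** -2

print(total)   # approaches 1/10 as more roots are included
```

With 2000 roots the partial sum is about 0.09995; since x_n is close to (n + 1/2)π for large n, the missing tail behaves like 1/(π²N), consistent with the limit 1/10.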
{"url":"http://mathhelpforum.com/advanced-math-topics/197240-problem-complex-analysis.html","timestamp":"2014-04-20T14:17:17Z","content_type":null,"content_length":"28824","record_id":"<urn:uuid:d9efd64f-1fbe-42fc-8b2e-d789f3c9ab08>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
No New Z Bosons Below A TeV New important information on high-energy particle physics has recently been released by the CDF experiment, one of the two detectors scrutinizing the 2-TeV proton-antiproton collisions copiously produced by the Fermilab Tevatron collider located near Batavia, Illinois (see aerial view of the site below). The CDF experiment has ruled out the existence of so-called "Z' bosons" (particles extraneous to the Standard Model which are predicted by a number of new physics models) for Z' masses below one Tera-electronvolt. In other words, if such particles exist they must be more massive than a thousand protons. The limit is M(Z')>1071 GeV, so to be precise Z' must be more massive than eleven hundred and forty protons, since a proton weighs 0.938 GeV. Or if you need an alternative unit of measure, let's take my own body weight after Christmas parties are over: at 72.5 kilograms I weigh about 43.3 thousand trillion trillion protons, so we can also say that Z' bosons must be heavier than 26.3 thousandths of a trillionth of a trillionth of a Tommaso. If I carried a handful of such particles in my pocket, you would not notice the difference. Three minutes of boring theory before the exciting new result As I have grown accustomed to do when time allows, before offering the interesting new experimental result I get a bit didactical. Given that this medium does not guarantee much concentration from the reader, I will constrain this introduction to three minutes worth of text. Of course, if you are knowledgeable or impatient, you are free to jump to the next section... Z' (read: Z-prime) bosons are ubiquitous in extensions of the gauge group SU(2)xU(1) on which the Standard Model is based, whenever a U(1) group is added to the structure. U(1) is the one-dimensional unitary group, and it can be represented by rotations on a plane.
Rotations are parametrized by an arbitrary angle, or phase (let us call it φ). To understand the connection between this rotation group and the physics of elementary particles, we may suppose that to each point of space can be associated a small dial with one hand. The angle (with respect to a fixed direction) the hand aims at is immaterial, and does not modify the properties of space: in quantum-mechanical terms, an arbitrary choice of the angle in a point of space corresponds to multiplying the wave-function of our physical system in that point by an unobservable complex phase. Please do not be scared by the term "wave function": it is just a mathematical expression which may be describing, for instance, a single electron in quantum mechanical calculations. The important thing is that we cannot observe wave function amplitudes (complex numbers) but only the resulting intensities (real numbers!), which are obtained by taking the squared modulus of the amplitudes. Because of that, the phase is irrelevant for the observable physics: the squared modulus of the phase is equal to unity, |exp(iφ)|^2 = 1.
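That a global phase drops out of the intensity is easy to check numerically. This is a toy illustration of my own, not from the article:

```python
import cmath
import math
import random

random.seed(1)
# A toy wave-function amplitude (an arbitrary complex number).
psi = complex(random.uniform(-1, 1), random.uniform(-1, 1))

for _ in range(5):
    phi = random.uniform(0.0, 2.0 * math.pi)
    # Multiplying by exp(i*phi) changes the amplitude but, since
    # |exp(i*phi)| = 1, it leaves the intensity |psi|^2 unchanged.
    rotated = cmath.exp(1j * phi) * psi
    assert abs(abs(rotated) ** 2 - abs(psi) ** 2) < 1e-12

print("the intensity |psi|^2 is independent of the phase")
```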
Once one allows that an arbitrary phase, different at every space-time point, multiplies the wave function of electrons, and one insists that this does not affect their dynamics, one is mathematically forced to admit that a new vector boson exists: this particle is the "messenger" of the interaction, which connects the different space-time points, sewing them together and canceling the different complex phases. This vector boson is the photon, the quantum of the electromagnetic field. The arbitrariness of the phase corresponds to an "invariance" of the physical laws under the choice of this parameter. It has been known for about a century that to such an invariance must correspond a conservation law. The conservation law in question, for electromagnetism, is that of electric charge: the theory constructed with local gauge invariance guarantees that the number of positive minus negative electric charges is a constant. Now, it is very tempting to hypothesize that the gauge group of elementary particles contains an additional U(1) group besides the one that generates the photon. Such an extension would have several attractive properties, and indeed theorists have generated a set of theories all based on additional U(1) groups. Add a U(1) group to the theory, and you gain the privilege of positing the existence of a fifth force, a new kind of electric-like charge, and a number of as-yet unseen phenomena. Isn't that cool? In prosaic terms, all of the proposed extensions imply the existence of (at least) a new gauge boson. Such a boson must be electrically neutral, and massive -otherwise we would know it already! In fact, it must be so massive as to have eluded previous searches. At a collider, the more massive a particle is, the less frequently it is produced. This is the way theorists keep their theories alive: by positing that the particles their theories predict have been hiding where they could not be spotted until yesterday.
But experimentalists with their searches gradually restrict the domain of existence of these theories, by excluding parts of the allowed parameter space. That is what CDF is doing with the unknown Z' mass. The CDF search Okay, theoretical introduction over. Now that you understand that these beasts must hide, if they exist, at large masses and low production rates, and that they are neutral and massive, you also may guess how we look for them. A Z' boson produced in a proton-antiproton collision will, a fraction of the time, decay into a pair of charged leptons: electron-positron, or muon-antimuon pairs. Tau pairs, of course, are less attractive due to the difficulty in separating these from hadronic jets; quark-antiquark pairs are even harder; and neutrino pairs are simply a nightmare, since neutrinos will escape unseen (we can still detect their departure, just as if, blindfolded and deaf and rowing in a boat, we suddenly feel the boat tilting on our side: our friend on the other side must have fallen in the water!). Muons and electrons of high energy are equally powerful signatures, but the new CDF analysis utilizes the former. The search for high-mass dimuon pairs is based on a data sample corresponding to an integrated luminosity of 4.6 inverse femtobarns of proton-antiproton collisions. What this means is that the set of events is collected from a total of just a bit less than 400 trillion collisions. This number is easy to compute: the total proton-antiproton cross section is 80 millibarns, or 80 trillion femtobarns (since milli- stands for 10^-3, micro- for 10^-6, nano- for 10^-9, pico- for 10^-12, and femto- for 10^-15). Multiply the cross section by the luminosity and you get the number of collisions: N = 80 trillion femtobarns times 4.6 inverse femtobarns is, indeed, 370 trillion of them. The large statistics allows CDF to look for Z' bosons which would be produced at very small rates -even a few times in a trillion, to be sure.
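The back-of-the-envelope collision count quoted above is easy to verify:

```python
# Number of collisions = total cross section x integrated luminosity.
millibarn_in_femtobarn = 1e12                    # milli = 1e-3, femto = 1e-15
sigma_total_fb = 80 * millibarn_in_femtobarn     # 80 mb = 80 trillion fb
integrated_lumi_inv_fb = 4.6                     # 4.6 inverse femtobarns

n_collisions = sigma_total_fb * integrated_lumi_inv_fb
print(n_collisions)   # about 3.7e14, i.e. roughly 370 trillion collisions
```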
But in order to avoid being deceived, one needs to correctly assess the background rate of processes that produce two identified muons of high invariant mass, without being due to the decay of the new Z' boson. This is indeed the hardest part of the work: high-mass dimuon pairs can be produced by the Drell-Yan process (that is, decays of Z bosons, or virtual photons), but also by the decay of top quark pairs, or of pairs of W bosons; they also may be the result of hadrons that fake the muon signature; and finally, one must beware of cosmic rays -muons produced in the upper atmosphere, which may cross the detector at its center and get reconstructed as two separate muons of opposite charge! All the possible background processes are computed with a number of different methods, and their sum is evaluated as a function of the invariant mass they would produce. The total expected spectrum is shown in the figure below; the black points with error bars show the observed data. It is clear by simple visual inspection that the data agree with the expected backgrounds. The comparison, performed with the help of some complicated statistical method, allows one to extract a so-called "upper limit" on the production cross section of Z' bosons, as a function of the mass of the particle. This upper limit is the maximum production rate of the particle given that it has escaped detection in the analyzed dataset. The upper limit is shown by the red curve in the figure below. In the figure are also shown the expected rates predicted for different kinds of Z' bosons, in different theoretical models. By comparing the upper limit with the theoretical curves, one may deduce that the Z', if it exists, must be heavier than the mass value corresponding to where the theoretical curves intersect the red curve. That is because for lower masses the predicted rate would exceed the observed upper limit.
The described exercise allows CDF to exclude that Z' bosons, if they exist, have masses below 1071 GeV. This value corresponds to the "simplest" kind of Z' bosons; the different models predict enhanced or suppressed rates of production and/or decay into muon pairs, such that the corresponding limits are higher or lower than the quoted one. In summary... To summarize, CDF excludes the existence of new Z' bosons of mass below a TeV or so. This limit is probably the last one that CDF will produce in its long and illustrious career: the LHC experiments ATLAS and CMS, with their bounty of 7-TeV proton-proton collisions, are already in the position of extending the search to much higher mass values, effectively putting CDF and DZERO out of business in this particular research topic. Such new improved limits are probably going to be produced soon... Unless a Z' boson does finally appear at the LHC! For more information on the CDF analysis, please see here. Is there a publicly accessible link to this analysis? Remember, you have physicist as well as general readers... Anonymous (not verified) | 01/09/11 | 10:59 AM Tommaso Dorigo | 01/09/11 | 12:06 PM Cool! I like (colored) preon-composite standard Z bosons better, anyway. Kea (not verified) | 01/09/11 | 15:57 PM and neutrino pairs are simply a nightmare, since neutrinos will escape unseen (we can still detect their departure, just as if, blindfolded and deaf and rowing in a boat, we suddenly feel the boat tilting on our side: our friend on the other side must have fallen in the water!) Tommaso, I wondered how often the boat is felt to be tilting and which graphs would show this? 
My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http:// Helen Barratt | 01/09/11 | 16:17 PM Tommaso Dorigo | 01/09/11 | 16:28 PM Thanks Tommaso, so the LHC boat must be starting to really rock, as that would be 100 every second wouldn't it? According to the Large Hadron Collider Communication website When the LHC is fully operational, it will produce roughly 1 billion proton-proton collision events per second in the detectors (40 million bunch crossings per second). This data will be heavily filtered so that only about 100 events of interest per second will be recorded permanently. Each event represents a few Megabytes of data, so the total data rate from the experiments will be of order 1 Gigabyte per second. Helen Barratt | 01/09/11 | 17:23 PM Yes :-) but beware that at the LHC, due to the higher centre-of-mass energy, the number of W and Z boson decays to neutrinos produce a rate that is comparatively larger by a factor of 5 or so (at 8 TeV, the energy which we will be running at this year). So once in 2 million collisions. In 2011 the instantaneous luminosity will bring the number of collisions in the LHC experiments to a hundred million per second, so this will be 50 Hertz. Beware, these are rough numbers, but the order of magnitude is that one. Tommaso Dorigo | 01/10/11 | 01:05 AM Tommaso, the LHC boat must be really rocking wildly with 500 neutrinos per second being generated. Are we still rowing blindfolded and deaf and suddenly feeling the boat tilting on our side as 500 friends on the other side fall into the water and if so how are we detecting this?
Do we have any clearer understanding of the solar neutrino problem through Earth generated neutrinos from these LHC experiments yet? If so what are they doing and where are they going? According to this Wiki article :- The crux of the solar neutrino problem, and its resolution, lies in the fact that both the interior of the Sun and the behavior of traveling neutrinos is unknown to begin with. One may assume knowledge of one and determine the other by experiment here on Earth. Helen Barratt | 01/10/11 | 07:06 AM There are not 500 neutrinos in an event; there are of the order of 50 events per second containing energetic neutrino(s) -one or two. They are detected by the imbalance in transverse energy, as I have explained several times in this column: they leave undetected, but the total energy flow in the direction transverse to the beam must be null (conservation of momentum: the initial protons carry no net momentum orthogonally from the beam line, so the same applies to the produced particles). The solar neutrino problem is not a problem - it is the manifestation of neutrino oscillations. We cannot study neutrino oscillations with the LHC, this is a topic for other experiments. Tommaso Dorigo | 01/10/11 | 07:20 AM Tommaso, if the LHC is generating millions of neutrino oscillations then why can't CMS study them? Isn't that what this paper called 'Searches for fourth generation particles and heavy neutrinos at CMS experiment' was claiming to be doing in 2009? I thought to some extent MoEDAL was also looking for some of these higher generation or flavoured neutrino oscillations being generated at LHC or am I completely confused as usual?
Helen Barratt | 01/10/11 | 14:36 PM Tommaso Dorigo | 01/10/11 | 14:50 PM No, I can't wait for that, I've already used up at least half a lifetime trying to understand this much. Maybe we need better detection methods? MINOS seem to be doing well at detecting neutrinos, do they have better detectors than LHC? Helen Barratt | 01/10/11 | 16:00 PM Tommaso Dorigo | 01/10/11 | 16:05 PM
{"url":"http://www.science20.com/quantum_diaries_survivor/no_new_z_bosons_below_tev","timestamp":"2014-04-21T02:13:20Z","content_type":null,"content_length":"68056","record_id":"<urn:uuid:cb748179-fa4a-4dce-872c-c513dc6433d2>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Which of the following statements are true?
{"url":"http://openstudy.com/updates/50b14e8ce4b0e906b4a619fc","timestamp":"2014-04-19T10:30:21Z","content_type":null,"content_length":"52275","record_id":"<urn:uuid:f7cde7c3-7a97-446f-bd64-419da2f4549b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Concordville Statistics Tutor Find a Concordville Statistics Tutor ...Routinely score 800/800 on practice tests. Taught high school math and have extensive experience tutoring in SAT Math. Able to help students improve their math skills and also learn many valuable test-related shortcuts and strategies. 19 Subjects: including statistics, calculus, geometry, algebra 1 ...I have been teaching SAT, GMAT, ACT, MCAT and GRE prep for over 7 years. I have also taught calculus, algebra and college level physics and chemistry to students who need help. I have experience teaching college students as a teaching assistant with the State University of New York when I pursued my Masters there. 18 Subjects: including statistics, physics, calculus, finance ...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun! In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. 13 Subjects: including statistics, calculus, geometry, algebra 1 ...Know and practice the format. Then, let's develop the structure: thesis statement, 3 supporting paragraphs, conclusion. Grammar, spelling and vocabulary are important. 35 Subjects: including statistics, English, reading, chemistry ...I can provide references from satisfied clients. My bachelor's degree is in math, and I have a master's of education (M.Ed) from Temple University. Here are some testimonials from some of my students and their parents: "Jonathan was able to work with my son and decode what he needed to know t... 22 Subjects: including statistics, calculus, writing, geometry
{"url":"http://www.purplemath.com/concordville_pa_statistics_tutors.php","timestamp":"2014-04-17T01:39:07Z","content_type":null,"content_length":"24085","record_id":"<urn:uuid:68f5a313-5372-4181-a974-4035f938681b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick question regarding partial decomposition. September 15th 2008, 06:21 PM Quick question regarding partial decomposition. I was just wondering about the steps that are described here but not shown. I don't see what they mean by subtracting equation 3 from equation 1 and using that in equation 2. Could someone show me the steps that were left out? September 15th 2008, 06:47 PM They used elimination to get rid of A and then used elimination again to get rid of B or C. September 16th 2008, 12:17 AM \begin{aligned} (1) ~ & 1 &=& {\color{red}\boxed{A}} &+&3B&&& \\ (2) ~ &1&=&&&-B&+&3C \\ (3) ~ &-2&=&{\color{red}\boxed{A}}&&&-&C \end{aligned} The red boxes tell us to subtract (1) from (3) or (3) from (1). This gives $1-(-2)=A+3B-(A-C) \implies 3=3B+C \implies C=3-3B$ Substitute it in (2). September 16th 2008, 12:26 AM \begin{aligned} (1) ~ & 1 &=& {\color{red}\boxed{A}} &+&3B&&& \\ (2) ~ &1&=&&&-B&+&3C \\ (3) ~ &-2&=&{\color{red}\boxed{A}}&&&-&C \end{aligned} The red boxes tell us to subtract (1) from (3) or (3) from (1). This gives $1-(-2)=A+3B-(A-C) \implies 3=3B+C \implies C=3-3B$ Substitute it in (2). I tried so hard to get mine to look like that but couldn't. I just gave up and gave an explanation lol September 16th 2008, 12:29 AM I was just wondering about the steps that are described here but not shown. I don't see what they mean by subtracting equation 3 from equation 1 and using that in equation 2. Could someone show me the steps that were left out? Hey, are you in Mrs. Duncan's class? haha, I'm in her 8AM class. That is from elluminate? September 16th 2008, 02:30 AM Lol, yeah I am in the 2:00 class. Thanks for all the replies.
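For completeness, the elimination described above can be carried to the end and checked numerically. This sketch is mine (the thread stops at the substitution step):

```python
# The system from the thread:
#   (1)  1 = A + 3B
#   (2)  1 =    -B + 3C
#   (3) -2 = A      - C
# Subtracting (3) from (1):  3 = 3B + C  =>  C = 3 - 3B.
# Substituting into (2):     1 = -B + 3*(3 - 3B) = 9 - 10B  =>  B = 0.8.
B = 0.8
C = 3 - 3 * B        # = 0.6
A = 1 - 3 * B        # = -1.4, from (1)

# Verify all three original equations hold.
assert abs(A + 3 * B - 1) < 1e-12
assert abs(-B + 3 * C - 1) < 1e-12
assert abs(A - C + 2) < 1e-12

print(round(A, 6), round(B, 6), round(C, 6))   # -1.4 0.8 0.6
```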
{"url":"http://mathhelpforum.com/calculus/49245-quick-question-regarding-partial-decomposition-print.html","timestamp":"2014-04-16T07:35:28Z","content_type":null,"content_length":"8489","record_id":"<urn:uuid:480a78e3-1625-4ca7-ae66-0342b5d0fe09>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Bachelier’s Legacy: Applications of Theoretical Physics in Market Price Modeling Simon Strong Louis Bachelier’s paper Théorie de la Spéculation, published in 1900, was the first notable attempt to apply mathematical methods to the prediction of stock prices and the behavior of financial markets. Bachelier used a random walk model; the same model was later used by Albert Einstein to explain the phenomenon of Brownian motion. Bachelier’s approach was groundbreaking, but his original model is now widely agreed to be too simplistic. Various methods and approaches from theoretical physics have been co-opted by economists in an attempt to build more accurate models of price movements in financial markets. This paper surveys the benefits and drawbacks of some of these approaches. Theoretical physics and financial modeling seem, at first glance, to be strange bedfellows. Theoretical physics is devoted to building models of various aspects of the natural world, in a tradition dating back to Galileo Galilei, the first Western scientist to use mathematics to describe the regular behavior of the pendulum and other recurring patterns in the world around him. Financial modeling studies and attempts to predict the behavior of financial markets, which are driven by the hopes, fears, beliefs, and decisions of numerous individual investors. There is, however, a deep connection between the two disciplines: they both attempt to model and understand the behavior of complex systems by looking for macroscopic rules, rather than trying to predict the behavior of individual components. In the case of theoretical physics, calculating the behavior and interactions of each separate atom and molecule in a gas, fluid, or solid object may be possible in principle but is not feasible in practice. Instead, physicists develop equations of motion or equations of state, which describe connections between the overall behavior of a system and its macroscopic properties such as temperature and pressure. 
Similarly, models of financial markets do not try to predict the motivations and decisions of individual investors; instead, they attempt to model patterns in the behavior of the most easily observed attribute of markets, their prices. For most financial assets, it is not feasible to build an exact model that will predict the future market price of the asset with no uncertainty – indeed, if this were possible then the modeled asset would, by definition, become a risk-free asset. However, the goal of market price modeling is more realistic and more achievable than this. Its goal is to build a probabilistic or “stochastic” model of market prices, which can be used to predict the probability distribution of future prices.

Stochastic market pricing models are used by cash market traders to decide when to buy or sell a given asset; by derivatives traders to determine the “fair” value of options and other financial derivatives; by risk managers to quantify the risk of loss on a portfolio of assets; and by portfolio managers to determine the construction of a portfolio that gives the optimum balance of risk and return.

The same ideas used in theoretical physics to build quantitative models of physical phenomena are often found in market price models. Sometimes this is the result of parallel but independent thinking; in other cases, a conscious effort is made to take a concept from physics, such as phase transition, and apply it to financial markets. This cross-fertilization of ideas has given rise to a discipline called “econophysics.” The econophysics paradigm is not universally accepted, and has been criticized for lacking theoretical underpinnings [Gallegati et al. (2006)]. However, there are parallel lines of development of mathematical models in both theoretical physics and in economics, from simple random walk models, through more sophisticated linear models, to highly nonlinear
On the 10th Day of Christmas… A KitchenAid Mixer Giveaway! (winners announced)

UPDATE: The winner of the KitchenAid 5-Quart Artisan Stand Mixer is: #726 – Katie: “chewy all the way!” The winner of the KitchenAid 5-Speed Ultra Power Hand Mixer is: #6,831 – Leana: “following you on pinterest!” Congratulations, Katie and Leana! Be sure to reply to the email you’ve each been sent, and your new KitchenAid mixers will be shipped out to you!

The KitchenAid Mixer is the holy grail of kitchen equipment. I couldn’t imagine my kitchen without one, and use it multiple times per week (sometimes multiple times per day!). It makes everything infinitely easier – from mixing cookie dough to whipping egg whites and kneading dough, its capabilities are virtually limitless. I couldn’t have a 12 Days of Giveaways without gifting the classic KitchenAid mixer to a lucky reader! In fact, I’m giving away TWO KitchenAid mixers today (one stand mixer and one hand mixer)!

Perhaps you could use one to enter The Cookie Rumble contest being sponsored by KitchenAid, in conjunction with Fonseca BIN 27, a Port wine. Submit a cookie recipe that would pair well with the wine and have a chance to win a KitchenAid Dual Fuel Double Oven Range, a new mixer, and a trip to New York City for private cooking lessons with Jacques Torres and Dorie Greenspan! Pretty awesome, right?! While you dream up your killer cookie recipe, read below for today’s giveaway details and how to enter to win one of TWO KitchenAid mixers… (The 12 Days of Christmas Giveaways will resume on Monday with Day #11!)

To enter to win, simply leave a comment on this post and answer the question: “Do you like your cookies chewy or crispy?”

You can receive up to FIVE additional entries to win by doing the following:

1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment on this post.
2. Become a fan of Brown Eyed Baker on Facebook.
Come back and let me know you became a fan in an additional comment on this post.
3. Follow Brown Eyed Baker on Pinterest. Come back and let me know you’ve followed in an additional comment on this post.
4. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment on this post.
5. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment on this post.

Deadline: Saturday, December 15, 2012 at 11:59pm EST.

Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected.

Disclaimer: This giveaway is sponsored by Brown Eyed Baker and Fonseca BIN 27, in conjunction with KitchenAid.

7,829 Responses to “On the 10th Day of Christmas… A KitchenAid Mixer Giveaway! (winners announced)”
RSA Encryption

OK, in the previous section we described what is meant by a trap-door cipher, but how do you make one? One commonly used cipher of this form is called “RSA Encryption”, where “RSA” are the initials of the three creators: Rivest, Shamir, and Adleman. It is based on the following idea: it is very simple to multiply numbers together, especially with computers, but it can be very difficult to factor numbers. For example, if I ask you to multiply together 34537 and 99991, it is a simple matter to punch those numbers into a calculator and read off the answer: 3453389167. But the reverse problem is much harder.

Suppose I give you the number 1459160519. I’ll even tell you that I got it by multiplying together two integers. Can you tell me what they are? This is a very difficult problem. A computer can factor that number fairly quickly, but (although there are some tricks) it basically does it by trying most of the possible combinations. For any size number, the computer has to check something on the order of the square root of the number to be factored. In this case, that square root is roughly 38000. Now it doesn’t take a computer long to try out 38000 possibilities, but what if the number to be factored is not ten digits, but rather 400 digits? The square root of a number with 400 digits is a number with 200 digits. The lifetime of the universe is approximately 10^18 seconds, hopelessly short compared to the time needed to try that many possibilities.

It is, however, not too hard to check whether a number is prime, in other words, to check that it cannot be factored. If a number is not prime it is difficult to factor, but if it is prime, it is not hard to show it is prime.

So RSA encryption works like this. I will find two huge prime numbers, p and q, and publish their product while keeping p and q themselves secret. But exactly how is this used to encode and decode messages? In the following example, suppose that person A wants to make a public key, and that person B wants to use that key to send A a message. In this example, we will suppose that the message B sends to A is just a number.
We assume that A and B have agreed on a method to encode text as numbers. Here are the steps:

1. Person A selects two prime numbers, p and q. (Small primes are used in the worked example; real keys use much larger ones.)
2. Person A multiplies them together to get the public modulus n = pq.
3. Person A also chooses another number e that has no factor in common with (p - 1)(q - 1); the pair (n, e) is published as the public key.
4. Now B knows enough to encode a message to A. Suppose, for this example, that the message is just a number m.
5. B calculates the value of c = m^e mod n and sends c to A. (In the worked example, c comes out to 545.)
6. Now A wants to decode 545. To do so, he needs to find a number d such that e·d leaves a remainder of 1 when divided by (p - 1)(q - 1).
7. To find the decoding, A must calculate c^d mod n. But since we only care about the result mod n, each multiplication can be reduced mod n as we go, so the intermediate numbers never get unmanageably large.

Using this tedious (but simple for a computer) calculation, A can decode B’s message and obtain the original message m.

Zvezdelina Stankova-Frenkel, 2000-12-22
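The whole scheme fits in a few lines of Python. The specific values below (p, q, e, and the message m) are our own illustrative assumptions, since the page’s displayed numbers did not survive; they are chosen so that the ciphertext works out to the 545 that does appear in the text:

```python
# Toy RSA; p, q, e, and m are illustrative assumptions (the page's own
# displayed values were lost), picked so the ciphertext matches the 545
# that survives in the text. Requires Python 3.8+ for pow(e, -1, phi).
p, q = 23, 41
n = p * q                    # public modulus: 943
phi = (p - 1) * (q - 1)      # 880
e = 7                        # public exponent, no factor in common with phi

m = 35                       # the message, already encoded as a number < n
c = pow(m, e, n)             # encryption: m^e mod n  -> 545

d = pow(e, -1, phi)          # decoding exponent: e*d = 1 (mod phi)  -> 503
assert pow(c, d, n) == m     # decryption recovers the original message
```

The three-argument `pow` does the "reduce mod n as you go" trick from step 7, so even 400-digit moduli are handled quickly.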
How much steam should be introduced into the vessel?

December 6th 2011, 12:03 PM

Steam (water vapor) at 100 degrees C enters an insulated vessel containing 100 g of ice at 0 degrees C. How much steam must be introduced into the vessel so that at equilibrium 50 g of the ice has melted and 50 g of the ice remains?
Specific heat of water = 4.184 J/(g·K)
Latent heat of fusion of ice = 333 J/g
Latent heat of vaporization of water = 2260 J/g

December 8th 2011, 10:42 PM

Re: How much steam should be introduced into the vessel?

1. I'll convert all mass measures into kg and energy into kJ.
2. Melting 0.05 kg of ice needs $0.05\ kg \cdot 333 \tfrac{kJ}{kg} = 16.65\ kJ$.
3. When steam condenses to water you have water at a temperature of 100°C. This hot water cools down to 0°C, melting ice too!
4. Solve for m (= mass of steam): $\underbrace{0.05\ kg \cdot 333 \tfrac{kJ}{kg}}_{\text{melting ice}} = \underbrace{m \cdot 2260 \tfrac{kJ}{kg}}_{\text{steam to water}} + \underbrace{m \cdot 4.184 \tfrac{kJ}{kg \cdot K} \cdot (100^\circ-0^\circ)K}_{\text{cool down water}}$
5. Solving: m = 16.65 / (2260 + 418.4) kg ≈ 0.0062 kg = 6.2 g.
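The energy balance in step 4 can be checked in a couple of lines; the arithmetic gives roughly 6.2 g of steam:

```python
# Energy balance from the post: heat released by m kg of steam
# (condensing at 100 C, then cooling to 0 C) melts 0.05 kg of ice.
L_fusion = 333.0      # kJ/kg, latent heat of fusion of ice
L_vapor = 2260.0      # kJ/kg, latent heat of vaporization of water
c_water = 4.184       # kJ/(kg*K), specific heat of water

heat_needed = 0.05 * L_fusion            # 16.65 kJ to melt 50 g of ice
heat_per_kg = L_vapor + c_water * 100.0  # 2678.4 kJ released per kg of steam
m = heat_needed / heat_per_kg            # mass of steam, in kg
```

Since each kilogram of steam delivers far more heat than melting a kilogram of ice absorbs, only a few grams of steam are needed.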
coming up with prime numbers [Archive] - Free Math Help Forum

05-31-2009, 09:09 PM

I am totally lost with this. I have an equation that is supposed to produce prime numbers but I do not know how to do this. What I did come up with though is 0, 2, 3, 5, and I need one more even one, or am I doing this wrong? Then I have to show the work with this equation but it is beating me. The equation is x^2 - x + 41. Any help would be appreciated.
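The intended exercise is to substitute whole numbers for x in x^2 - x + 41 and verify that each output is prime (this is a shifted form of Euler's famous prime-generating polynomial, which eventually fails). A quick check, with a primality helper of our own:

```python
def is_prime(k):
    """Trial division: test whether k has a divisor between 2 and sqrt(k)."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

# Evaluate n^2 - n + 41 for n = 0, 1, ..., 40: every value is prime.
values = [n * n - n + 41 for n in range(41)]
assert all(is_prime(v) for v in values)

# The streak breaks at n = 41, where the value is 41^2, a perfect square.
assert not is_prime(41 * 41 - 41 + 41)
```

So plugging in x = 0, 1, 2, … yields 41, 41, 43, 47, … and the "work" is showing each result is prime, not finding roots.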
Exact Asymptotic Expansion of Singular Solutions for the ()-D Protter Problem

Abstract and Applied Analysis, Volume 2012 (2012), Article ID 278542, 33 pages

Research Article

^1Faculty of Technology, Narvik University College, Lodve Langes Gate 2, 8505 Narvik, Norway
^2Faculty of Mathematics and Informatics, University of Sofia, 1164 Sofia, Bulgaria

Received 29 March 2012; Accepted 24 June 2012

Academic Editor: Valery Covachev

Copyright © 2012 Lubomir Dechevski et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We study three-dimensional boundary value problems for the nonhomogeneous wave equation, which are analogues of the Darboux problems in . In contrast to the planar Darboux problem the three-dimensional version is not well posed, since its homogeneous adjoint problem has an infinite number of classical solutions. On the other hand, it is known that for smooth right-hand side functions there is a uniquely determined generalized solution that may have a strong power-type singularity at one boundary point. This singularity is isolated at the vertex of the characteristic light cone and does not propagate along the cone. The present paper describes an asymptotic expansion of the generalized solutions in negative powers of the distance to this singular point. We derive necessary and sufficient conditions for existence of solutions with a fixed order of singularity and give a priori estimates for the singular solutions.

1. Introduction

In the present paper some boundary value problems (BVPs) formulated by M. H. Protter for the wave equation with two space and one time variables are studied as a multidimensional analogue of the classical Darboux problem in the plane.
While the Darboux BVP in is well posed, the Protter problem is not, and its cokernel is infinite dimensional. Therefore the problem is not Fredholm, and the orthogonality of the right-hand side function to the cokernel is one necessary condition for the existence of a classical solution. Alternatively, to avoid an infinite number of conditions, the notion of a generalized solution is introduced that allows the solution to have a singularity on a characteristic part of the boundary. It is known that for smooth right-hand side functions there is a unique generalized solution and that it may have a strong power-type singularity that is isolated at one boundary point. In the present paper we prove an asymptotic expansion formula for the generalized solutions in negative powers of the distance to the singular point in the case when the right-hand side is a trigonometric polynomial. We leave for the next section the precise formulation of the paper’s main results and comparisons with recent publications concerning Protter problems, including a semi-Fredholm solvability result in the general case of a smooth right-hand side, but for the somewhat easier ()-D wave equation problem. First we give here a short historical survey. Protter arrived at the multidimensional problems for hyperbolic equations while examining BVPs for mixed type equations, starting with planar problems with a strong connection to transonic flow phenomena. In the plane, the problems of Tricomi, Frankl, and Guderley-Morawetz are the classical boundary-value problems that appear in the hodograph plane for 2D transonic potential flows (see, e.g., the survey of Morawetz [1]). The first two of these problems are relevant to flows in nozzles and jets, and the third problem occurs as an approximation to a respective “exact” boundary-value problem in the study of flows around airfoils.
At the same time, he formulates boundary value problems in the hyperbolic part of the domain, which is bounded by two characteristic surfaces and one noncharacteristic surface of the equation. The planar Guderley-Morawetz mixed-type problem is well studied. Existence of weak solutions and uniqueness of strong solutions in weighted Sobolev spaces were first established by Morawetz by reducing the problem to a first order system, which then gives rise to solutions of the scalar equation in the presence of sufficient regularity. The availability of such sufficient regularity follows from the work of Lax and Phillips [3], who also established that the weak solutions of Morawetz are strong. On the other hand, for the 3D Protter mixed-type problems a general understanding of the situation is not at hand—even the question of well posedness is surprisingly subtle and not completely resolved. One has uniqueness results for quasiregular solutions, a class of solutions introduced by Protter, but there are real obstructions to existence in this class. To investigate the situation, we study a simpler problem—the Protter problems in the hyperbolic part of the domain for the mixed-type problem. For the wave equation this is the set , which is bounded, see Figure 1, by two characteristic cones of (1.1) and the disk , centered at the origin . One could think of the Protter problems in as a three-dimensional variant of the planar Darboux problem. The classic Darboux problem involves a hyperbolic equation in a characteristic triangle bounded by two characteristic and one noncharacteristic segments. The data are prescribed on the noncharacteristic part of the boundary and on one of the characteristics. Actually, the set could be produced via rotation around the -axis in of the flat triangle —a characteristic triangle for the corresponding string equation (1.4). As mentioned before, the classical Darboux problem for (1.4) is to find a solution in with data prescribed on and , for example.
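The defining formulas for the set and its boundary pieces did not survive in this copy. In the standard formulation used in the Protter-problem literature (e.g., Popivanov and Schneider [6]), and offered here only as our reconstruction of what the missing displays likely contained, the domain and its boundary read:

```latex
% Standard form of the 3D Protter domain (as in Popivanov--Schneider);
% reconstructed here since the displayed formulas were lost from this copy.
\[
  \Omega \;=\; \Bigl\{ (x_1,x_2,t) \;:\; 0 < t < \tfrac12,\;
      t < \sqrt{x_1^2 + x_2^2} < 1 - t \Bigr\},
\]
\[
  \Sigma_0 = \bigl\{ t = 0,\ \sqrt{x_1^2+x_2^2} < 1 \bigr\}, \qquad
  \Sigma_1 = \bigl\{ \sqrt{x_1^2+x_2^2} = 1 - t \bigr\}, \qquad
  \Sigma_2 = \bigl\{ \sqrt{x_1^2+x_2^2} = t \bigr\},
\]
```

with $\Sigma_1$ and $\Sigma_2$ the outer and inner characteristic cones of the wave equation and $\Sigma_0$ the disk in the plane $t=0$, matching the description of Figure 1 above.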
In conformity with this planar BVP, Protter [2, 4] formulated and studied the following problems. Problems and . Find a solution of the wave equation (1.1) in which satisfies one of the following boundary conditions: or . Nowadays, it is known that the Protter Problems and are not well posed, in contrast to the planar Darboux problem. In fact, in 1957 Tong [5] proved the existence of an infinite number of nontrivial classical solutions to the corresponding homogeneous adjoint problem . The adjoint BVPs to Problems and were also introduced by Protter. Problems and . Find a solution of the wave equation (1.1) in which satisfies the boundary conditions: or . Since [5], for each of the homogeneous Problems and (i.e., in (1.1)), an infinite number of classical solutions has been found (see Popivanov, Schneider [6], Khe [7]). According to this fact, a necessary condition for classical solvability of Problem or is the orthogonality in of the right-hand side function to all the solutions of the corresponding homogeneous adjoint problem or . Although Garabedian proved [8] the uniqueness of a classical solution of Problem (for its analogue in ), generally, Problems and are not classically solvable. Instead, Popivanov and Schneider [6] introduced the notion of a generalized solution. It allows the solution to have a singularity on the inner cone , and by this the authors avoid the infinite number of necessary conditions in the framework of classical solvability. In [6] some existence and uniqueness results for the generalized solutions are proved and some singular solutions of Protter Problems and are constructed. In the present paper we study the properties of the generalized solution of Protter Problem in . From the results in [6] it follows that for there exists a smooth right-hand side function such that the corresponding unique generalized solution of Problem has a strong power-type singularity at the origin and behaves like there.
This feature deviates from the conventional belief that such BVPs are classically solvable for very smooth right-hand side functions . Another interesting aspect is that the singularity is isolated only at a single point, the vertex of the characteristic light cone, and does not propagate along the bicharacteristics, which makes this case different from the traditional case of propagation of singularities (see, e.g., Hörmander [9], Chapter 24.5). The Protter problems have been studied by different authors using various types of techniques, like the Wiener-Hopf method, special Legendre functions, a priori estimates, nonlocal regularization, and others. For recent known results concerning Protter’s problems see the paper [6] and the references therein. For further publications in this area see [7, 10–16]. On the other hand, Bazarbekov gives in another analogue of the classical Darboux problem (see [17]) and analogously in (see [18]) in the corresponding four-dimensional domain . Some different statements of Darboux-type problems in , or of Protter problems for mixed type equations connected with them (also studied in [2]), can be found in [19–25]. Some results concerning the nonexistence principle for nontrivial solutions of semilinear mixed type equations in the multidimensional case can be found in [26]. For recent existence results concerning closed boundary-value problems for mixed type equations see, for example, [27], and also [28], which studies an elliptic-hyperbolic equation that arises in models of electromagnetic wave propagation through zero-temperature plasma. The existence of bounded or unbounded solutions for the wave equation in and , as well as for the Euler-Poisson-Darboux equation, has been studied in [7, 13–16, 29]. Further, we aim to find some exact a priori estimates for the singular solutions of Problem and to outline the exact structure and order of the singularity.
For some other Protter problems, necessary and sufficient conditions for the existence of solutions with a fixed order of singularity were found (see [15] in and [16] in ), and an asymptotic formula for the solution of Problem in was obtained in [30]. Considering Protter Problems, Popivanov and Schneider [6] proved the existence of singular solutions for both the wave and a degenerate hyperbolic equation. First a priori estimates for singular solutions of Protter Problems, involving the wave equation in , were obtained in [6]. In [10] Aldashev mentioned the results of [6] and, for the case of the wave equation in , he notes the existence of solutions in the domain ( and approximates if ), which blow up on the cone like , when . It is obvious that for these results can be compared to the estimates in Corollary 2.4 here. Finally, we point out that in the case of an equation which involves the wave operator and nonzero lower-order terms, Karatoprakliev [24] obtained a priori estimates, but only for the sufficiently smooth solutions of Protter Problem . Regarding the ill-posedness of the Protter Problems, some possible regularization methods have appeared in the case of the wave equation, involving either lower-order terms ([11, 31]) or some other type of perturbation, like an integrodifferential term or a nonlocal one ([12]). In Section 2 the result on the existence of an infinite number of classical solutions to the homogeneous Problem (Lemma 2.1) and the definition of a generalized solution of Problem are given. The main results of the paper, concerning the asymptotic expansion of the unique generalized solution of Problem (Theorem 2.3), are formulated and discussed. The expansion of is given in negative powers of the distance to the point of singularity. An estimate for the remainder term and the exact behavior of the singularity under the orthogonality conditions imposed on the right-hand side function of the wave equation are found.
Necessary and sufficient conditions for the existence of only bounded solutions are given in Corollary 2.4. In Section 3, the auxiliary 2D boundary value Problems and , which correspond to the ()-D Problem , are considered. Actually, these 2D problems are reduced to an integral Volterra equation, which is invertible. Using the special Legendre functions , some exact formulas for the solution of Problem are derived in Lemma 3.4. Some figures showing the effects appearing near the singularity point are also presented. Section 4 contains the most technical part of the paper. In this section the results concerning the asymptotic expansions of the generalized solution of the 2D Problem are proved and the proof of the main Theorem 2.3 is given.

2. Main Results on the ()-D Protter Problem

Define the functions where the coefficients are with , . Then for the functions we have the following lemma. Lemma 2.1 (see [29]). Let , . For and , the functions are classical solutions to the homogeneous Problem . A necessary condition for the existence of a classical solution of Problem is the orthogonality of the right-hand side function to all functions which are solutions of the homogeneous adjoint Problem . To avoid this infinite number of necessary conditions in the framework of classical solvability, one needs to introduce some generalized solutions of Problem with possible singularities on the characteristic cone , or only at its vertex . Popivanov and Schneider in [6] give the following definition. Definition 2.2. A function is called a generalized solution of Problem in if: (1) , , ; (2) the identity holds for all on , and in a neighborhood of . The uniqueness of the generalized solution of Problem and existence results for can be found in [6]. Further, we fix the right-hand side function as a trigonometric polynomial of order with respect to the polar angle: with some complex-valued function-coefficients .
For and , denote by the constants . Note that actually and in the cases and , due to the special form of the functions and the fact that in the representation (2.5) of the function the sum starts from . The main result is as follows. Theorem 2.3. Suppose that the function is a trigonometric polynomial (2.5). Then there exist functions , , with the following properties: (i) the unique generalized solution of Problem exists, belongs to , and has the asymptotic expansion at the origin : (ii) for the coefficient functions the representation holds, where the functions are bounded and independent of ; (iii) if in the expression (2.8) for at least one of the constants is different from zero (i.e., the corresponding orthogonality condition is not fulfilled), then there exists a direction with for , such that ; (iv) if in the expression (2.8) for at least one of the constants is different from zero (i.e., the corresponding orthogonality condition is not fulfilled), then the generalized solution is not continuous at ; (v) for the function the estimate holds with a constant independent of . As a consequence of Theorem 2.3 one gets the following results, which highlight the two extreme cases of the assertion. The first part gives a rough estimate of the expansion (2.7) and describes the “worst” possible singularity. The second part shows that one could control the solution by making some of the constants (2.6) in (2.8) equal to zero, that is, by taking to be orthogonal in to the corresponding functions defined in (2.3). Corollary 2.4. Suppose that has the form (2.5). (i) Without any orthogonality conditions imposed, the unique generalized solution of Problem satisfies the a priori estimate . (ii) Let the orthogonality conditions (2.11) be fulfilled for all and . Then the generalized solution belongs to , is bounded, and the a priori estimate holds. (iii) In addition to (ii), if the conditions (2.11) are fulfilled for also, then is a classical solution and .
Let us point out that in case (ii), the generalized solution is bounded if and only if the conditions (2.11) are fulfilled for , due to Theorem 2.3(iii). In addition, if all the conditions (2.11) are fulfilled for , but for some the corresponding orthogonality condition is not satisfied, then is not continuous at , according to Theorem 2.3(iv). Such a solution is illustrated in Figure 4. Notice that some of the functions involved in the orthogonality conditions in Corollary 2.4(ii) and (iii) are not classical solutions of the homogeneous adjoint Problem in view of Lemma 2.1, although they satisfy the homogeneous wave equation in . In fact, for some , or their derivatives may be discontinuous at . For example, when is an odd number and , the functions are not continuous at the origin . On the other hand, when is even and , are singular on the cone and do not satisfy the homogeneous adjoint boundary condition there. However, this singularity is integrable in the domain . To explain the results in Theorem 2.3 and Corollary 2.4 we construct Table 1. It illustrates the connection between the singularity of the generalized solution and the functions . Both functions , are located in column number and row number of Table 1. Thus, form the rightmost diagonal, the next one is empty—we put in these cells “diamonds” , constitute the third one, and so on. The row number designates the order of singularity of the generalized solution. Corollary 2.4 shows that the generalized solution is bounded when the right-hand side function is orthogonal to the functions in Table 1, except the ones in row number . If is orthogonal to all the functions in Table 1 (including row ), then is continuous in .
When the right-hand side satisfies the orthogonality conditions (2.11) for all the functions from the rows in Table 1 with row number larger than , , but there is a function with from the th row which is not orthogonal to (i.e., ), then the solution behaves like at the origin, according to the expansion (2.7). If there are no orthogonality conditions, then the worst case, with singularity , appears. Figures 2–5 are created using MATLAB and represent some numerical computations for singular solutions of Problem (actually the behaviour in the -domain , not including the terms and ). They illustrate different cases, according to the main results, for the existence of a singularity at the origin depending on the orthogonality conditions. Figure 2 is related to Corollary 2.4(i)—it gives the graph of the solution for the worst case, without any orthogonality conditions fulfilled, and the solution goes to at the singular point . In Figure 3, only one of the orthogonality conditions (2.11) for is not fulfilled and the solution tends to . Figures 4 and 5 are connected to Corollary 2.4(ii) and (iii): Figure 4 presents the case when all the orthogonality conditions (2.11) for are satisfied and the solution is bounded but not continuous at , while Figure 5 concerns the last part (iii) of Corollary 2.4, when conditions (2.11) are additionally fulfilled for and the solution is continuous. Remark 2.5. We mention some differences between the results given here for Problem and some other results in , but for Problem , like those from [15]. (i) In [15], assuming the right-hand side function is smooth enough (i.e., ), only the behavior of the singularities was studied, using some weighted norms (analogous to the weighted Sobolev norms in corner domains). In the present paper we need only and, in addition, find the explicit asymptotic expansion of the generalized solution. Solutions that are bounded but not continuous at the origin are also studied here.
(ii) Comparing the power of singularity of the generalized solution for Problem here and for Problem in [15] for the worst case without any orthogonality conditions, one can see that the power in the estimate (2.10) from Corollary 2.4(i) is , while in the analogous estimate in Conclusion 1 [15] it is . (iii) It is interesting to compare the results [14, 15], published in 2002. Although they proceed in different ways, in both cases the authors looked for singular solutions of Problem in . However, in [14] there are no analogues of the orthogonality conditions presented in [15], and, in contrast to [15], in [14] the dependence of the exact order of singularity on the data is not clarified. Remark 2.6. Let us also compare the present expansion and the results in [30], where an asymptotic expansion of Problem is found for the somewhat easier four-dimensional case. (i) Both for Problem here and for Problem as in [30], the study is based on the properties of the special Legendre functions. Instead of Legendre functions with non-integer indices here, in the four-dimensional case one has to deal with integer indices , that is, simply with the Legendre polynomials . One can easily modify both these techniques to obtain similar results for the -dimensional problems: for even (analogous to the present case ) or for odd (similarly to the case). Results of a different kind for the Protter problems in are presented in [10, 11]. (ii) For the four-dimensional Problem in [30], Corollary 3.3 gives only that the solution is bounded; it could be discontinuous at the origin. On the other hand, here Theorem 2.3 also gives us control over the bounded but not continuous parts of the generalized solution (through the coefficient for in the expansion formula (2.7)).
As a consequence, Corollary 2.4(iii) guarantees that the solution is continuous. (iii) Based on the formulae and the computations from [30], the general case in is also treated, when the right-hand side is smooth enough, but not a finite harmonic polynomial analogous to (2.5). The results are announced and published in [32, 33]. For right-hand side functions in [33], the necessary and sufficient conditions for the existence of a bounded solution are found. They involve an infinite number of orthogonality conditions for , which comes from the fact that this is not a Fredholm problem. On the other hand, the results from [33] show that the linear operator mapping the generalized solution into is a semi-Fredholm operator in . Let us recall that a semi-Fredholm operator is a bounded operator that has a finite-dimensional kernel or cokernel and closed range. Additionally, in [32] a right-hand side function is constructed such that the unique generalized solution of Protter Problem in has an exponential-type singularity. One expects that similar results could also be obtained for the Problem in studied here. These questions correspond to Open Problem (1) below. Remark 2.7. Let us mention one obvious consequence of Theorem 2.3 and all the arguments above, concerning the construction of functions orthogonal to the solutions of the homogeneous adjoint Problem . Take an arbitrary function satisfying the boundary conditions . Then the function with the wave operator , is orthogonal to all the functions , . Finally, we formulate some still-open questions that naturally arise from the previous works on the Protter problem and the discussions above. Open Problems. (1) To study the more general case when the right-hand side function , for an appropriate .
The smooth function could be represented as a Fourier series rather than the finite trigonometric polynomial (2.5) in the discussions here. (i) Find some appropriate conditions for the function under which there exists a generalized solution of the Protter problem . (ii) What kind of singularity can the generalized solution have? The a priori estimates obtained in [6, 31] indicate that the generalized solutions of Problem (including the singular ones) can have at most exponential growth as . The natural question is as follows: is there a singular solution of these problems with exponential growth as , or do all such solutions have only polynomial growth? (iii) Is it possible to prove some a priori estimates for generalized solutions of the Problem with a smooth function which is not a harmonic polynomial? (iv) Find some appropriate conditions for the function under which the Problem has only regular, bounded solutions, or even classical solutions. (2) To study the Protter problems for degenerate hyperbolic equations. Up to now it is only known that some singular solutions exist. (i) We do not know the exact behavior of the singular solution even when the right-hand side function is a finite sum like (2.5). Can we prove some a priori estimates for generalized solutions? (ii) Is it possible to find some orthogonality conditions for the function , as here, under which only bounded solutions exist? (3) Why does a singularity appear for such a smooth right-hand side, even for the wave equation? Can we numerically model this phenomenon? (4) What happens with the ill-posedness of the Protter problems in a more general domain (as in [2, 4]) when the maximal symmetry is lost, that is, if the cone is replaced by another characteristic light cone with vertex away from the origin? 3. Preliminaries We have a relation between the functions and the Legendre functions . For , the functions could be defined by the equality (Section 3.7, formula (6), from Erdélyi et al.
[34]), where for in this formula . Let , be the cylindrical coordinates in , that is, , . For simplicity, define the function . The following result is in connection with Lemma 5.1 from [15]. Actually, to prove this result one could formally follow the arguments of Lemma 2.3 from [16], where the four-dimensional analogue of Problem is treated. Lemma 3.1. For and define the functions for . Then in the equality holds for with some constants . Proof. Lemma 5.1 from [15] for gives where and according to Lemma 2.2 from [29] Therefore the equality (3.3) holds for . We have to prove it for . In the proof of Lemma 5.1 from [15] the integrals were calculated using the Mellin transform. In order to compute in the same way, let us first introduce the variables and : As a consequence, after some calculations, formulas (2.2.(4)), (1.10), and (1.4) from Samko et al. [35] show that where is the Riemann-Liouville fractional integral (for its properties see, e.g., [34, 35]); in our case we have . As usual, we denote also for for . The substitution of (3.6) in (3.7) shows that and thus The next result is crucial for the construction of solutions of Problem in the discussions later. Lemma 3.2. Let , and the functions satisfy . Then all solutions of the Volterra integral equation of the first kind are Proof. Formulas (35.17) and (35.28) from Samko et al. [35] state that the solution of the integral equation (3.10) is given by Then, using that , an integration gives (3.11). One could use the Mellin transform to calculate the following integral. Lemma 3.3 (see [16]). Let , , then According to the existence and uniqueness results in [6], it is sufficient to study Problem when the right-hand side of the wave equation is simply Then we seek solutions for the wave equation of the same form: Thus Problem reduces to the following one. Solve the equation in with the boundary conditions Let us now introduce new coordinates and set Denoting , one transforms Problem into the following.
Find a solution of the equation in the domain with the following boundary conditions: Problems and were introduced in [6], although the change of coordinates and was used there instead of (3.17). Of course, because the solution of Problem may be singular, the same is true for the solutions of and . For that reason, Popivanov and Schneider [6] defined and proved the existence and uniqueness of generalized solutions of Problems and , which correspond to the generalized solution of Problem . Further, by "solution" of Problem or we mean exactly this unique generalized solution. Lemma 3.4. Let and . Then the solution of Problem is given by the following formula: where Proof. Notice that the function is a Riemann function for (3.19) (Copson [36]). Therefore, following Aldashev [10], we can construct the function as a solution of a Goursat problem in with boundary conditions and with some unknown function , which will be determined later: Now, the boundary condition gives an integral equation for . For that reason, let us define the function : Obviously, . The condition (3.25) leads us to the following equation: Then, using , we have A necessary solvability condition for the unknown function is: . One could solve this Volterra integral equation of the first kind using Lemma 3.2. The result is Integrating, we find Now, using Lemma 3.3 and the equality for , one finds that the coefficient of in (3.30) is zero. Using (3.31) again for , is simply Obviously, and . Finally, the solution of Problem is given by the formulae (3.20), (3.21), and (3.22). 4. Proofs of Main Results In order to study the behavior of the generalized solution of Problem , in view of relations (3.18) and Lemma 3.4, we will examine the function defined by the formulae (3.20), (3.21), and (3.22). It is not hard to see that the part "responsible" for the singularity is the integral in (3.21) for the function .
In fact, blows up at , since the argument and thus the values of the Legendre function in (3.21) go to infinity when . Actually, grows like at infinity. In the next lemma we find the dependence of the exact order of singularity of on the function . It is governed by the constants Actually, these numbers are closely connected to the constants from Theorem 2.3. We will clarify this relation later in Lemma 4.1 and the proof of Theorem 2.3. Lemma 4.1. Let , where , , and let the function . Then the function belongs to and satisfies the representation where the function , and the nonzero constants and are independent of . Proof. The argument of the Legendre function in (4.2) satisfies the inequality , which allows us to apply the representation (3.1): We will study the expansion at of the function Let us define the functions for . Then, obviously First, we will examine the properties of the functions and their derivatives with respect to . We start with the equality Further, the index will be less than . Notice that for the integrals are convergent. Then, for we have the equality and the inequality Differentiating with respect to , one finds Therefore, for the derivatives of we find by induction where the coefficients are positive constants. We want to evaluate these derivatives of at . Let us estimate the terms in the last sum for : (i) when is such that , the inequality (4.10) gives the estimate (ii) when and , we have Hence for . Therefore, at the point the only nonzero term in the sum (4.12) is for , that is, The last observation is that (4.8) and (4.12) imply where are constants. Now, we go back to the function . We want to differentiate times and evaluate at . Differentiating (4.7), we find the following: Next, since the assertion for is only , instead of we will differentiate the function Notice that it belongs to and the derivative is In the same way, after denoting , define for the functions with the constants from (4.16).
Then, using (4.16), it follows by induction that is continuous in and On the other hand, Hence, according to (4.15), for , The next step is to evaluate the integral . Using (4.12), one could rewrite it in the form For all the terms in the last sum, except one, the estimate is straightforward. (1) When is such that , for the corresponding terms we have and, therefore, where . (2) For the last term in (4.24) with there are two cases:(2a) when is an even number this is the integral (2b) when is an odd number, the integral is For simplicity let us define some constants related to the constants given by (4.1). Indeed, due to (4.9) the equality holds. Let us begin with the following case.(2a):. We will evaluate the difference: For the first integral, using the estimate (4.10), we calculate and, therefore, For the second integral From the last two inequalities we get the estimate Therefore, in the case , , where .(2b) When , we have to study the integral (4.28). Obviously, For the last integral we have Now, to estimate the first term in the right-hand side of (4.36), there are two cases:(i)when , we have and similarly to the previous case (2a) we can apply inequality (4.31). Thus, we estimate the difference: (ii)when , denote for simplicity , then and directly from the definition (4.6) of the functions , we get
Hero to Zero: How to transition to Folrart from Crest Um, darklogos, one question. Would only playing Mirage Master for 30-40 FM per day really let you get cards that fast? "Playing for an extra 30 or 40 fight money a day from mirage master will get you maybe one max two 5 star cards a month." Let's check the math. 1 Point card: 2500 FM 1 5-star card: 60 point cards 1 5-star card: 150,000 FM (And for kicks, a full play-set of all the 5-stars currently available, only counting Zu-jyava once? 9,000,000 FM!) I think if someone really wants to get point cards, they'll probably play MM to get maybe 1000 FM per day. Not too bad, a respectable amount without risking burning out too badly. Even then, they'd only get 2 point cards every five days. That'd be close to five months for a 5-star, four months for a 4-star, 2 and a quarter months for a 3, just under 2 months for a 2, and about a month for a Re: Hero to Zero: How to transition to Folrart from Crest For some reason I had the trainer numbers in my head. Correcting it now. Re: Hero to Zero: How to transition to Folrart from Crest I'd just like to add that for our last two starter sets there has been a main character card which is a 3-star rare and has 2 copies. Assuming this trend continues, I think most non-paying players are going to want to make obtaining a 3rd copy of that 3-star rare a priority, and Point Cards are a pretty efficient way to do it. Obviously, randomness isn't an issue and neither is Gran. You'll obtain a certain number of point cards just participating in the low-impact site events, and then you just grind for FM. As has been mentioned, MM is probably the fastest way to do this, but playing isn't a bad idea. Even if you lose, you get a decent amount. Obviously it's a bit soon to say this, but my guess is that 3-stars are going to be the magic number for Point Cards and free players. Other than the starter main characters, there are a lot of great 3-star grimoires like Panther Soul.
I feel like earning 4's and 5's will be too slow for 90% of the free players out there, as well. "Scissors are overpowered. Rock is fine." -Paper Re: Hero to Zero: How to transition to Folrart from Crest Logress wrote: I feel like earning 4's and 5's will be too slow for 90% of the free players out there, as well. Well, certain high-end meta decks depend on rares. EM and EN are prime examples. EN is a far cheaper deck overall than EM. If someone is gunning for the top, I might as well give them the tools needed. Re: Hero to Zero: How to transition to Folrart from Crest That's fair. But even though aiming for the long term can really help (I myself am a long-term planner when it comes to these games), short-term planning has its place in Alteil for free players. Some might think it's a total waste to cash in PC's for a rarity 3 when they're halfway to a rarity 5, but if that rarity 3 can get you a lot of wins in the short term, it can help you raise Gran in the form of rankings and Treasure battles. They're both valid strategies for building up your Files. "Scissors are overpowered. Rock is fine." -Paper Re: Hero to Zero: How to transition to Folrart from Crest I think proxies are really awesome units to have in almost every deck. Other 5-stars rather force you to build a deck around them instead of helping your deck. But the non set 3, 5 stars have much better soul skills. :// Alteil is pretty much well balanced. Sometimes you lose and sometimes the enemy wins.
Re: Hero to Zero: How to transition to Folrart from Crest Re: Hero to Zero: How to transition to Folrart from Crest This was a good read, especially for a beginner who doesn't plan on spending any money. I will fight my way to the top! Re: Hero to Zero: How to transition to Folrart from Crest With so many hits, it looks as if this sticky is getting a lot of traffic. A comment or two would be nice from these readers.
High-Frequency Statistical Arbitrage
Computational statistical arbitrage systems are now de rigueur, especially for high-frequency, liquid markets (such as FX). Statistical arbitrage can be defined as an extension of riskless arbitrage, and is quantified more precisely as an attempt to exploit small and consistent regularities in asset price dynamics through use of a suitable framework for statistical modelling. Statistical arbitrage has been defined formally (e.g. by Jarrow) as a zero initial cost, self-financing strategy with cumulative discounted value \(v(t)\) such that: • \( v(0) = 0 \), • \( \lim_{t\to\infty} E^P[v(t)] > 0 \), • \( \lim_{t\to\infty} P(v(t) < 0) = 0 \), • \( \lim_{t\to\infty} \frac{Var^P[v(t)]}{t}=0 \mbox{ if } P(v(t)<0) > 0 \mbox{ , } \forall{t} < \infty \) These conditions can be described as follows: (1) the position has a zero initial cost (it is a self-financing trading strategy), (2) the expected discounted profit is positive in the limit, (3) the probability of a loss converges to zero, and (4) a time-averaged variance measure converges to zero if the probability of a loss does not become zero in finite time. The fourth condition separates a standard arbitrage from a statistical arbitrage opportunity. We can represent a statistical arbitrage condition as $$ \left| \phi(X_t - SA(X_t))\right| < \mbox{TransactionCost} $$ where \(\phi()\) is the payoff (profit) function, \(X\) is an arbitrary asset (or weighted basket of assets) and \(SA(X)\) is a synthetic asset constructed to replicate the payoff of \(X\). Some popular statistical arbitrage techniques are described below.
Index Arbitrage
Index arbitrage is a strategy undertaken when the traded value of an index (for example, the index futures price) moves sufficiently far away from the weighted components of the index (see Hull for details).
For example, for an equity index, the no-arbitrage condition could be expressed as: \[ \left| F_t - \sum_{i} w_i S_t^i e^{(r-q_i)(T-t)}\right| < \mbox{Cost}\] where \(q_i\) is the dividend rate for stock \(i\), and \(F_t\) is the index futures price at time \(t\). The deviation between the futures price and the weighted index basket is called the basis. Index arbitrage was one of the earliest applications of program trading. An alternative form of index arbitrage was a system in which sufficient deviations between the forecasted variance of the relationship (estimated by regression) between index pairs and the implied volatilities (estimated from index option prices) on the indices were classed as an arbitrage opportunity. There are many variations on this theme in operation based on the VIX market today. Statistical pairs trading is based on the notion of relative pricing – securities with similar characteristics should be priced roughly equally. Typically, a long-short position in two assets is created such that the portfolio is uncorrelated with market returns (i.e. it has a negligible beta). The basis in this case is the spread between the two assets. Depending on whether the trader expects the spread to contract or expand, the trade action is called shorting the spread or buying the spread. Such trades are also called convergence trades. A popular and powerful statistical technique used in pairs trading is cointegration, which is the identification of a linear combination of multiple non-stationary data series to form a stationary (and hence predictable) series.
Trading Algorithms
In recent years, computer algorithms have become the decision-making machines behind many trading strategies. The ability to deal with large numbers of inputs, utilise long variable histories, and quickly evaluate quantitative conditions to produce a trading signal has made algorithmic trading systems the natural evolutionary step in high-frequency financial applications.
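As an illustration of the pairs-trading idea above, here is a minimal sketch on synthetic data. It is a toy, not a production strategy: the hedge ratio is estimated by a plain least-squares fit rather than a formal cointegration test (such as Engle-Granger), and all names are mine.

```python
import random

random.seed(42)

# Synthetic cointegrated pair: y tracks 2*x plus stationary noise.
x = [100.0]
for _ in range(999):
    x.append(x[-1] + random.gauss(0.0, 1.0))
y = [2.0 * xi + random.gauss(0.0, 1.0) for xi in x]

# Least-squares hedge ratio beta for y ~ beta * x (no intercept, for brevity).
beta = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# The spread y - beta*x should be roughly stationary; trade on its z-score:
# short the spread when it is unusually high, buy it when unusually low.
spread = [yi - beta * xi for xi, yi in zip(x, y)]
mu = sum(spread) / len(spread)
sd = (sum((s - mu) ** 2 for s in spread) / len(spread)) ** 0.5

signals = [(-1 if (s - mu) / sd > 2.0 else (1 if (s - mu) / sd < -2.0 else 0))
           for s in spread]
```

In practice one would first test the estimated spread for stationarity (e.g. with an augmented Dickey-Fuller test) before trusting such a signal.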
Originally the main focus of algorithmic trading systems was in neutral impact market strategies (e.g. Volume Weighted Average Price and Time Weighted Average Price trading); however, their scope has widened considerably, and much of the work previously performed by manual systematic traders can now be done by "black box" algorithms. Trading algorithms are no different from human traders in that they need an unambiguous measure of performance – i.e. risk versus return. The ubiquitous Sharpe ratio (\(\frac{\mu_r - \mu_f}{\sigma} \)) is a popular measure, although other measures are also used. A measure of trading performance that is commonly used is that of total return, which is defined as \[ R_T \equiv \sum_{j=1}^{n}r_j \] over a number of transactions \(n\), with a return per transaction \(r_j\). The annualized total return is defined as \(R_A = R_T \frac{d_A}{d_T}\), where \(d_A\) is the number of trading days in a year, and \(d_T\) is the number of days in the trading period specified by \(R_T\). The maximum drawdown over a certain time period is defined as \(D_T \equiv \max(R_{t_a}-R_{t_b} \mid t_0 \leq t_a \leq t_b \leq t_E)\), where \(T = t_E - t_0\), and \(R_{t_a}\) and \(R_{t_b}\) are the total returns of the periods from \(t_0\) to \(t_a\) and \(t_b\) respectively. A resulting indicator is the Stirling Ratio, which is defined as \[ SR = \frac{R_T}{D_T} \] High-frequency tick data possesses certain characteristics which are not as apparent in aggregated data. Some of these characteristics include: • Non-normal characteristic probability distributions. High-frequency data may have large kurtosis (heavy tails), and be asymmetrically skewed; • Diurnal seasonality – an intraday seasonal pattern influenced by the restrictions on trading times in markets. For instance, trading activity may be busiest at the start and end of the trading day.
This may not apply so much to foreign exchange, as the FX market is a decentralized 24-hour operation; however, we may see trend patterns in tick interarrival times around business end-of-day times in particular locations; • Real-time high-frequency data may contain errors, missing or duplicated tick values, or other anomalies. Whilst historical data feeds will normally contain corrections to data anomalies, real-time data collection processes must allow for the fact that adjustments may need to be made to the incoming data feeds.
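The performance measures defined above (total return, annualized return, maximum drawdown, Stirling ratio) translate directly into code. A sketch, assuming simple additive per-transaction returns (the function names are mine):

```python
def total_return(returns):
    # R_T: sum of per-transaction returns r_j
    return sum(returns)

def annualized_return(returns, trading_days, days_per_year=252):
    # R_A = R_T * d_A / d_T
    return total_return(returns) * days_per_year / trading_days

def max_drawdown(returns):
    # D_T: largest peak-to-trough drop of the cumulative return path
    cum = peak = dd = 0.0
    for r in returns:
        cum += r
        peak = max(peak, cum)
        dd = max(dd, peak - cum)
    return dd

def stirling_ratio(returns):
    # SR = R_T / D_T
    return total_return(returns) / max_drawdown(returns)

rets = [0.5, -0.2, 0.3, -0.4, 0.6]
# cumulative path: 0.5, 0.3, 0.6, 0.2, 0.8 -> max drawdown = 0.6 - 0.2 = 0.4
```

Note that with multiplicative (percentage) returns one would compound instead of summing; the additive form matches the \(R_T\) definition above.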
Richmond Hill, NY Algebra Tutor Find a Richmond Hill, NY Algebra Tutor ...At these tutoring centers I taught over 100 students in grades kindergarten to 12th grade in biology, physics, math, English, global and American history, state tests, specialized high school tests, AP tests, SAT subject tests, and the SAT. I have created an enriching relationship with all my s... 57 Subjects: including algebra 2, Spanish, chemistry, algebra 1 ...While studying, my students not only learn school basics but also how to communicate with others and express their feelings and opinions in an educated and civil manner. I am a fluent speaker of the Turkish language. I have been learning it for the last 15 years and I communicate fluently with the natives of the country. 36 Subjects: including algebra 1, reading, English, ASVAB ...I have experience teaching students to read, including students who have special needs. I have experience with teaching the sounds of the letters, as well as letter combinations. I am interested in tutoring people with Asperger's because I am very familiar with the condition, and with the emotional and educational needs that come with it. 50 Subjects: including algebra 2, algebra 1, reading, English ...I am an expert in all math concepts required for Algebra. I am licensed to teach this course in New York State, and I taught it for 5 years (ending June 2013). I have also taught the courses that precede it (algebra 1 and 2 including trigonometry) and follow it (AP Calculus). I taught Algebra 2... 10 Subjects: including algebra 1, algebra 2, calculus, SAT math ...I can submit transcripts if necessary. I have a Master's degree in Applied Math from the University of Michigan and three years of tutoring experience (both private tutoring and with the University's learning center). I have taken multiple college-level courses, including symbolic logic and introduction to mathematical proof which involve logic.
I received an A (4.00) in both. 15 Subjects: including algebra 1, algebra 2, calculus, statistics
Summary: Local Rainbow Colorings Noga Alon Ido Ben-Eliezer Given a graph H, we denote by C(n, H) the minimum number k such that the following holds. There are n colorings of E(Kn) with k-colors, each associated with one of the vertices of Kn, such that for every copy T of H in Kn, at least one of the colorings that are associated with V (T) assigns distinct colors to all the edges of E(T). We characterize the set of all graphs H for which C(n, H) is bounded by some absolute constant c(H), prove a general upper bound and obtain lower and upper bounds for several graphs of special interest. A special case of our results partially answers an extremal question of Karchmer and Wigderson motivated by the investigation of the computational power of span programs. 1 Introduction Consider the following question, motivated by an extremal problem suggested by Karchmer and Wigderson, see [6]. Given a fixed graph H, let C(n, H) denote the minimum number k such that there is a set of n colorings {fv : E(Kn) [k] : v V (Kn)}, with the following property. For every copy T of H in Kn, there is a vertex u V (T) so that fu is a rainbow coloring of E(T), that is, no two edges of T get the same color by fu. A set of colorings that satisfies this condition is called an (n, H)-local coloring. Determine or estimate the function C(n, H). Note that each coloring in the set above does not have to be a proper edge coloring.
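To make the definition above concrete, here is a small brute-force checker for the special case H = K3 (the triangle). This is only an illustration: the helper names are mine, and the final example is a trivial witness showing that C(n,2) colors always suffice, not a construction achieving C(n, K3).

```python
from itertools import combinations

def is_local_coloring(n, colorings):
    # colorings[v] maps each edge (i, j) of K_n (with i < j) to a color.
    # Condition: every triangle T contains some vertex u whose coloring
    # assigns three distinct colors to the edges of T (a rainbow copy).
    for T in combinations(range(n), 3):
        edges = list(combinations(T, 2))
        if not any(len({colorings[u][e] for e in edges}) == 3 for u in T):
            return False
    return True

# Trivial witness: give every vertex the same injective edge coloring,
# so any triangle is rainbow for every vertex.
n = 5
all_edges = list(combinations(range(n), 2))
injective = {e: i for i, e in enumerate(all_edges)}
colorings = {v: injective for v in range(n)}
assert is_local_coloring(n, colorings)
```

Note that, as the abstract points out, the individual colorings need not be proper edge colorings; the checker only tests the local rainbow condition.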
Math Help
August 22nd 2009, 01:22 PM #1
Let $T^{2}=\mathbb{R}^{2}/2\pi\mathbb{Z}^{2}$, with the quotient topology, and consider the space $B(L^{2}(T^{2}))$. Can we show that $B(L^{2}(T^{2}))$ is compact? Here $B$ denotes the set of all bounded linear operators mapping $L^{2}(T^{2})$ to itself.
August 23rd 2009, 11:52 PM #2
The space $B(L^{2}(T^{2}))$ can never be compact in any Hausdorff vector topology, because it is unbounded. For example, the sequence $(nI)$, where $I$ is the identity operator, cannot have a convergent subsequence. The unit ball of $B(L^{2}(T^{2}))$ is, however, compact in the weak operator topology.
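To spell out the unboundedness argument in the reply: for the operator norm on $B(L^{2}(T^{2}))$ and $n \ne m$ one has

```latex
\| nI - mI \| \;=\; \sup_{\|f\|_{L^2(T^2)} = 1} \|(n-m)f\|_{L^2(T^2)} \;=\; |n-m| \;\ge\; 1,
```

so no subsequence of $(nI)$ is Cauchy and hence none converges in norm; since the norm topology is metrizable, compactness of the whole space would force every sequence to have a convergent subsequence.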
R: heatmaps with gplots
April 8, 2010 By Stewart MacArthur
I use heatmaps quite a lot for visualizing data, microarrays of course but also DNA motif enrichment, base composition and other things. I particularly like the heatmap.2 function of the gplots package. It has a couple of defaults that are a little ugly, but they are easy to remove. Here is a quick example. First let's make some example microarray data.

exampleData <- matrix(log2(rexp(1000)/rexp(1000)), nrow=200)

This just makes two exponential distributions and takes the log2 ratio to make it look a bit like microarray fold changes, but this really could be any matrix of numbers. Next I will just plot the most variable rows/genes/whatever; this step is obviously optional, but it reduces the size of the plot to make things easier to see, and normally I only care about the things that are different.

evar <- apply(exampleData, 1, var)
mostVariable <- exampleData[evar > quantile(evar, 0.75),]

This just calculates the variance of each row in the matrix, then makes a new matrix of those rows whose variance is above the 75th percentile, so the top 25% most variable rows. Next we load the gplots package (install it first if you do not already have it). We then simply pass the mostVariable matrix to the heatmap.2 function. The option removes a default, which is to add a line to each column, which I find distracting. The option uses another gplots function ( ), which simply generates a color scheme from green to red via black. You could use any color scheme here such as or a scheme from That is about it really for basic heatmaps. For more advanced heatmaps, you can do other things such as adding color strips to the rows or columns to show groupings, for example: Another useful trick is not to use the default clustering methods of heatmap.2, but use your own.
For example:

ord <- order(rowSums(abs(mostVariable)), decreasing=TRUE)

Here we are generating the ordering of the rows ourselves, in this case by the sum of the absolute values of each row. Then we turn off the clustering of the rows and the row dendrogram and get something like this: There are lots of other options too, but that is enough for today.
Integration question relating to inverse trig functions. January 4th 2011, 05:50 PM #1 Apr 2008

I've been doing some differentiation of inverse trig. In doing this I had to find the derivative of $\arctan(3x/2)$. I found this to be $6/(9x^2 + 4)$. To check this answer I thought I would try to integrate to see if I arrived back at the original question. However, I'm a bit stumped as to how I should go about integrating it. I know that the original derivative was derived using the chain rule, so is there some substitution that I can use to find the integral of $6/(9x^2 + 4)$?

Your question asked for the derivative of y = arctan(3x/2), so x = 2tan(y)/3. Use this substitution for the integral. It should simplify very nicely (assuming you did your differentiation correctly).

Commit this to memory: $\displaystyle \int \frac{a}{a^2+x^2}~dx = \text{Tan}^{-1}\frac{x}{a}+C$

Thanks for the quick reply. How would you find the right substitution if you were just given the question: integrate $6/(9x^2 + 4)$? I'm sure that I could spot that it related to arctan (the x^2 and the positive constant are the giveaway), but how could I get it into a form from which I could integrate? Sorry if I'm missing the obvious...

Got it now. If I compare $6/(9x^2 + 4)$ with $\displaystyle \int \frac{a}{a^2+x^2}~dx = \text{Tan}^{-1}\frac{x}{a}+C$: divide through by 9 to make the coefficient of x^2 equal to 1. Immediately you have the general form given above.

It is actually standard to try to substitute trig functions when you see forms like $ax^2 + b$ in the denominator of an integral (without an $x$ term in the numerator). Normally you use the trig function that allows you to simplify the expression with a trig identity. For example, when you see $1 - x^2$ in the denominator, you might use $x = \sin(y)$. There may be multiple trig subs (or even hyperbolic subs, i.e. sinh, cosh) that work, but they should all come out to be equivalent answers.
I can see how you could reasonably easily work out which trig or hyperbolic to sub in. But given the integration question above, all I would have been able to tell you was that I should sub in tan. I wouldn't have been able to get the rest of the substitution at all...

Sure you can: the 9 and 4 have something to do with the 3 and 2. Normally, if you have $(a + bx^2)$ you think $a(1 + \frac{b}{a}x^2)$. Now choose the substitution so that the coefficient $\frac{b}{a}$ cancels out. Last edited by snowtea; January 4th 2011 at 06:40 PM.

You can also find the result on your own if you don't remember the derivative/identity:

$\displaystyle \int \frac{6}{9x^2+4}~dx$

Make $\displaystyle x = \frac{2}{3}\tan\theta \implies \theta = \tan^{-1}\frac{3x}{2}$ and $\displaystyle dx=\frac{2}{3}\sec^{2}\theta d\theta$

$\displaystyle \int \frac{6}{9( \frac{2}{3}\tan\theta )^2+4}\times \frac{2}{3}\sec^{2}\theta d\theta$

$\displaystyle \int \frac{6}{9\times \frac{4}{9}\tan^2\theta +4}\times\frac{2}{3}\sec^{2}\theta d\theta$

$\displaystyle \int \frac{6}{4\tan^2\theta +4}\times\frac{2}{3}\sec^{2}\theta d\theta$

$\displaystyle \int \frac{6}{4(\tan^2\theta +1)}\times\frac{2}{3}\sec^{2}\theta d\theta$

$\displaystyle \int \frac{6}{4\sec^2\theta }\times\frac{2}{3}\sec^{2}\theta d\theta$

Cancelling down:

$\displaystyle \int 1 ~d\theta = \theta +C = \tan^{-1}\frac{3x}{2}+C$

Thanks for all the replies. They've been extremely helpful. My only niggling problem is that in future I may be given a slightly more complicated question in which the necessary substitution isn't quite so obvious. I think I'll always be able to identify which trig or hyperbolic I should use, but is there a general way to work out what the correct full sub should be? I've tried just subbing in x = tan y in the above example, as a test to see if it's really always necessary to include the correct coefficients, and I haven't managed to make it work yet (which may well be because my algebra is somewhat lacking...)
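The worked substitution above can also be cross-checked symbolically; here is a quick sketch in Python using SymPy (assuming SymPy is available), verifying that the antiderivative differentiates back to the integrand and differs from atan(3x/2) only by a constant:

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.integrate(6 / (9*x**2 + 4), x)

# Differentiating the result should recover the original integrand
assert sp.simplify(sp.diff(antiderivative, x) - 6/(9*x**2 + 4)) == 0
# The result should agree with atan(3x/2) up to an additive constant
assert sp.simplify(sp.diff(antiderivative - sp.atan(3*x/2), x)) == 0

print(antiderivative)
```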
My only niggling problem is that in future I may be given slightly more complicated questions in which the necessary substitution isn't quite so obvious. I think I'll always be able to identify which trig or hyperbolic I should use, but is there a general way to work out what the correct full sub should be?

No, but with enough practice, you'll be able to spot the relevant substitution (at least for the regular ones).

My only niggling problem is that in future I may be given slightly more complicated questions in which the necessary substitution isn't quite so obvious. I think I'll always be able to identify which trig or hyperbolic I should use, but is there a general way to work out what the correct full sub should be?

Practice makes perfect here; the more you do, the better you will get at identifying which substitution to use. Trial and error won't hurt; patterns will present themselves. Look here for more info: Trigonometric substitutions. Once again, practice will make perfect: you are looking to cancel any coefficients, leaving something neat like $1-\cos^2\theta,\ 1-\sin^2\theta,\ 1+\tan^2\theta$.

My only niggling problem is that in future I may be given slightly more complicated questions in which the necessary substitution isn't quite so obvious. I think I'll always be able to identify which trig or hyperbolic I should use, but is there a general way to work out what the correct full sub should be? I've tried just subbing in x = tan y in the above example, as a test to see if it's really always necessary to include the correct coefficients, and I haven't managed to make it work yet.

Instead of assuming you need a sub of the form x = some function of tan theta, you might prefer to get the denominator in the form 1 + some square or other, thereby making it analogous to the left-hand side of $1 + \tan^2\theta = \sec^2\theta$, which is the whole point of the sub. Then it's obvious to sub tan theta directly for the square you made.
Just in case a picture helps... [the original post illustrated this with balloon-calculus diagrams, omitted here]. Then adjust for this expression being a derivative via the chain rule, and then you can make the direct analogy with integrating 1 + tan squared.

See also http://www.mathhelpforum.com/math-he...tml#post546765

Don't integrate - balloontegrate! Balloon Calculus; standard integrals, derivatives and methods. Balloon Calculus Drawing with LaTeX and Asymptote!
Linear transformation problem September 18th 2013, 08:16 AM #1 Sep 2013

Hi! I have some trouble solving one problem in my assignment; it would be great if somebody could help me. The problem is the following: Let n>0 and let L[n,k], where 1<=k<=n, be the subspace spanned by the vectors e[j] for j=1,...,k. Show that there exists a linear transformation T: R^n => R^(n-k) such that L[n,k] = Ker(T). Last edited by bogdano; September 18th 2013 at 08:22 AM.

Re: Linear transformation problem

You can always find a basis for $L_{n,k}$ contained in $\{e_j\}_{j=1}^k$, then extend it to a basis for $R^n$. Define $T(e_i)= 0$ for $e_i\in \{e_j\}_{j=1}^k$, T(v)= 0 for v in the extended basis. T(v) is defined "by linearity" for all other v in $R^n$. That is, since $\{e_1, e_2,..., e_k, f_1, f_2, ..., f_{n-k}\}$ is a basis, we can write $v= a_1e_1+ a_2e_2+ ...+ a_ke_k+ b_1f_1+ b_2f_2+ ...+ b_{n-k} f_{n-k}$ and then $Tv= a_1T(e_1)+ a_2T(e_2)+ ...+ a_kT(e_k)+ b_1T(f_1)+ b_2T(f_2)+ ...+ b_{n-k}T(f_{n-k})= a_1+ a_2+ ...+ a_k$. Then Tv= 0 if and only if v is in the subspace. Last edited by HallsofIvy; September 18th 2013 at 08:56 AM.

Re: Linear transformation problem

Thanks for the answer! Could you please specify what you mean by saying that Tv=0; is it a transformation of v? What is v if it can be represented as v = a(1)e(1)+....+b(n-k)*f(n-k)? And if you said that T(e_j)=0, how can Tv be equal to a_1+a_2+....+a_k if they are all supposed to be transformed into null!?
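Setting the thread's (somewhat muddled) construction aside, one concrete choice of T, not taken from the thread, is the coordinate projection that drops the first k coordinates; its kernel is exactly the span of e_1,...,e_k. A small numerical sketch, assuming NumPy (the choice of n and k is arbitrary):

```python
import numpy as np

n, k = 5, 2
# T : R^n -> R^(n-k), dropping the first k coordinates.
# Its matrix is [0 | I_(n-k)], so T(e_j) = 0 exactly for j = 1..k,
# and Ker(T) = span{e_1, ..., e_k}.
T = np.hstack([np.zeros((n - k, k)), np.eye(n - k)])

e1 = np.zeros(n)
e1[0] = 1.0                    # a spanning vector of L[n,k]
v = np.arange(1.0, n + 1)      # a vector with nonzero tail coordinates

print(T @ e1)   # [0. 0. 0.] -- e1 lies in the kernel
print(T @ v)    # [3. 4. 5.] -- the last n-k coordinates of v
```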
[FOM] afterthoughts re my reply to Tim Chow's "Re: Harvey's effective number theorists."
Gabriel Stolzenberg gstolzen at math.bu.edu
Mon Apr 17 14:06:10 EDT 2006

After rereading my reply to Tim Chow's "Re: Harvey's effective number theorists," I wish to add the following comments.

Tim begins by quoting me inviting Harvey's unnamed number theorist "to explain what makes the question of getting an 'effective' version of Falting's theorem that yields an 'effective' algorithm for finding all rational points a 'fundamental' problem?" But I don't see where Tim addresses my request for an explanation.

Notice that the number theorist didn't just say "problem." He didn't even say "homework problem." He said "fundamental problem." That's what I would like to see explained.

In reply to my "Harvey's effective number theorists," Harvey said that the number theorist's answer would be that it is obvious. By contrast, nothing in Tim's account seems to say this.

> Sometimes if you keep pushing on a bound then you'll eventually
> cross some kind of threshold that suddenly opens up a qualitatively
> new realm of knowledge that you couldn't touch before.

Tim, this sounds wonderful, almost magical. But it's wholly lacking in particulars. Why, instead of providing some, do you offer an analogy and then say the following?

> Similarly, in number theory, Tijdeman showed that Catalan's conjecture
> could have only finitely many exceptions, but until Mihailescu's work,
> we couldn't actually assert Catalan's conjecture as a theorem.

Similarly, since 1934, we knew that there could be at most one exception to Gauss's conjecture on quadratic imaginary fields of class number 1 (a great result) but, until Stark, we couldn't assert the conjecture as a theorem. But what does this have to do with the question at issue? Did Mihailescu use Tijdeman's work? If not, what is its relevance? Did Stark use the work of Heilbronn and Linfoot? I don't think so.
> After enough experience with this sort of thing, one learns to
> respect the value of passing from no bound to some bound to a good
> bound just in general, knowing that this represents increased
> knowledge and power, as well as increased chances of crossing
> thresholds into new, uncharted territory. In some cases, of course,
> this optimistic viewpoint may turn out to be unfounded,

Why do you talk about the value of passing "from no bound to some bound to a good bound" when what is at issue is the value of passing "from no bound to some bound"?

Also, do you really think increased knowledge and power, no matter what, is always desirable? My head used to be filled with stuff I couldn't forget, from old phone numbers to what was written on the blackboard during last week's seminar talk. I found it oppressive and, on balance, unhelpful.

Nevertheless, I think the value of increased knowledge is at the heart of this discussion. Harvey believes that, at least for bounds and algorithms, increased knowledge, no matter what, is desirable. But although some things you say also invite this reading, your praise of "from no bound to some bound to a good bound" but not of "from no bound to some bound" doesn't seem to fit with it.

With best regards,
Convert (x,y) data into a function y => f(x)

To whom it may concern: It is one of those days when apparently simple tasks seem hard for some reason. I have a continuous but highly non-smooth dataset of emissivity versus wavelength. I would really like to put this into a (lookup) function, so that my dataset (lamda, e), i.e.,

Lamda e
0.25 0.545
0.26 0.556
0.27 0.654
...etc.

will turn into a function e = f(lamda). This would be very helpful since I can then create a function handle and integrate over certain wavelength regions and perform other operations. Any suggestions?

Accepted answer:

You could use POLYFIT, or the curve fitting toolbox, or a simple interpolation.

>> x = 0:.25:5;
>> y = x.^2;
>> f = @(z) interp1(x,y,z); % Lookup table function
>> x2 = 1/8:1/8:5; % Just to compare func to data.
>> plot(x,y,'sb',x2,f(x2),'*r')

Comment:

Perfect Matt, this solved the problem. I preferred the interp1 function since the data is too jagged; a POLYFIT would require too many coefficients. My code now reads:

% Import the data from a text file.
ems = importdata('Emmisivity.txt'); % the spectrally resolved emissivity data
Data = ems.data; % Selects the values, neglects the headers
% Create a lookup function handle for the data.
specEms = @(lamda) interp1(Data(:,1), Data(:,2), lamda);
% Create a function handle for the spectral exitance, basically
% Max Planck's law for a black body.
specExbb = @(lamda, T) specexitance(lamda, T); % function created by me
% Create an exitance function for a selective emitter.
specExSel = @(lamda, T) specExbb(lamda, T).*specEms(lamda);
% Integrate over a user-specified range (lamda1-lamda2), leaving the
% function parameterized with respect to T.
ExSel = @(lamda1, lamda2, T) integral(@(lamda) specExSel(lamda,T), lamda1, lamda2);

Thank you so much; my faith in humanity is once again restored. Jaap

Other answer:

Hi Jaap, I can't fully understand your question. Do you mean you want a lookup table in function form? If so, you can use a fitting process (the fit highly depends on the fit type you use) to build a function, but the function may not be very accurate at each data point.
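The same lookup-table idea carries over to Python; here is a minimal sketch using NumPy's 1-D linear interpolation (mirroring Matt's MATLAB example, not code from the thread):

```python
import numpy as np

x = np.arange(0, 5.25, 0.25)   # sample grid, as in the MATLAB example
y = x**2

# Lookup-table function via linear interpolation,
# analogous to MATLAB's f = @(z) interp1(x, y, z)
f = lambda z: np.interp(z, x, y)

print(f(0.25))    # 0.0625  (exact grid point)
print(f(0.125))   # 0.03125 (halfway between 0 and 0.0625)
```

Once wrapped like this, f can be passed to a quadrature routine to integrate over a wavelength range, just as the thread does with MATLAB's integral.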
Re: st: graph combine of tabplot graphs: uncomparable bar heights

From: Nick Cox <njcoxstata@gmail.com>
To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject: Re: st: graph combine of tabplot graphs: uncomparable bar heights
Date: Fri, 28 Jun 2013 13:40:21 +0100

Here's a dopey example to show the principle. By comparing the initial graphs, we see that the highest bar across the two graphs shows 30. So the scale for the other graph (highest bar 27) must be adjusted relative to that. Note that the height to be used can be worked out by Stata on the fly: `=0.8 * 27/30' says "work out the result of 0.8 * 27/30 and use it".

sysuse auto, clear
tabplot foreign rep78, showval
gen himpg = mpg > 30
label define himpg 1 ">30" 0 "<= 30"
label val himpg himpg
label var himpg "miles per gallon >30?"
tabplot himpg rep78, showval
tabplot foreign rep78, showval height(`=0.8 * 27/30') name(g1)
tabplot himpg rep78, showval name(g2)
graph combine g1 g2, ycommon

On 28 June 2013 12:22, Nick Cox <njcoxstata@gmail.com> wrote:
> -tabplot- is from SSC. You are asked to explain where user-written
> programs you use come from. Otherwise you waste the time of anyone
> who thinks "That sounds interesting" but then can't find -tabplot- on
> their system because it has not been installed. The small point of
> giving credit to whoever wrote it I leave on one side. This is all
> explained in the FAQ you were asked to read before posting.
> The underlying problem is that -tabplot- internally uses a scale that
> is only indirectly related to the numbers being shown.
> In a case like
>
> sysuse auto
> tabplot foreign rep78, showval
>
> the highest bar is drawn between y = 2 and y = 2.8. If the highest bar
> had been on the bottom row it would have been drawn between y = 1 and
> y = 1.8. This is regardless of what the highest bar represents. -graph
> combine- can only align x and y coordinates; it won't restructure the
> graphs internally. Similarly -yscale()- can have no effect here,
> because scaling has already taken place. In fact it could not be otherwise.
> Much of the effort of -tabplot- goes into using the space available as
> effectively as possible. The default is to leave 20% vertical space
> between the height of the tallest bar and the line above (or
> correspondingly if the bars are aligned horizontally).
> You have I think two choices. One is to choose a -height()- option
> separately for each graph so that the values come out as you want when
> you combine them.
> The second is to restructure your data, so that you use a -by()-
> option as well with -tabplot-. Then the bars in each panel will be
> drawn on the same scale.
> Nick
> njcoxstata@gmail.com
> On 28 June 2013 11:34, Brunelli Cinzia
> <Cinzia.Brunelli@istitutotumori.mi.it> [edited]
>> I'm going to combine together in a single figure 3 graphs previously obtained with -tabplot- to show patient and physician agreement on the assessment of three different variables.
>> The syntax is the following:
>>
>> tabplot ecs_cp_q2 alberta_q0_imput, showval percent xtitle(" " " ") subtitle("") ///
>> ytitle("PHYSICIAN ASSESSMENT") title("INCIDENT PAIN", size(msmall)) saving(IP_AGREE, replace)
>> tabplot ecscp_neu pain_dtct_cat2, showval percent xtitle(" " "PATIENT ASSESSMENT") subtitle("") ///
>> ytitle("") saving(NP_AGREE, replace) title("NEUROPATHIC PAIN ", size(msmall)) yscale(range(0 1))
>> tabplot ecs_cp_q3 any_dep_PHQ, showval percent xtitle(" " " ") subtitle("") ///
>> ytitle("") saving(PD_AGREE, replace) title("PSYCHOLOGICAL DISTRESS", size(msmall))
>> graph combine IP_AGREE.gph NP_AGREE.gph PD_AGREE.gph, col(3) ycommon xcommon
>>
>> The commands work and the figure is nice, but... proportions between bar heights are correct within each graph but not between graphs: i.e. the fourth bar in the first graph, which represents 35% of the sample, is as tall as the first one in the second graph, which should represent 66% of the sample. I guess this has to do with different graphical formatting rules within each graph.
>> Is there a way to overcome this misrepresentation? I tried to use something like yscale(range(0 100)) in order to standardize the y-axis scale, but it doesn't work. I find -tabplot- very useful for my purpose and it would be a pity not to be able to use it and present a "boring" table.
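The proportional-height rule in Nick's example is just arithmetic; a tiny sketch of it in Python (not Stata; 0.8 is -tabplot-'s default maximum bar height, and 30 and 27 are the tallest-bar counts from his example):

```python
# Scale each graph's height() so that equal counts get equal bar
# heights once the graphs are combined on a common scale
max_counts = [30, 27]          # tallest bar in each tabplot
global_max = max(max_counts)   # bars are sized relative to this count

heights = [0.8 * m / global_max for m in max_counts]
print(heights)   # roughly [0.8, 0.72]
```

The graph whose tallest bar already equals the global maximum keeps the default 0.8; every other graph gets a proportionally smaller height() value.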
Being Bayesian about Bayesian Network Structure: A Bayesian Approach to Structure Discovery in Bayesian Networks. (2003) by N. Friedman and D. Koller [older version, 2000]

Abstract: In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model selection attempts to find the most likely (MAP) model, and uses its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want to compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed order over network variables. This allows us to compute, for a given order, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC) method, but over orders rather than over network structures. The space of orders is smaller and more regular than the space of structures, and has a much smoother posterior "landscape". We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach.

Download Information

N. Friedman and D. Koller (2003). "Being Bayesian about Bayesian Network Structure: A Bayesian Approach to Structure Discovery in Bayesian Networks." Machine Learning, 50(1--2), 95-125. Full version of UAI 2000 paper.

Bibtex citation

author = "N. Friedman and D. Koller",
title = "Being {Bayesian} about {Bayesian} Network Structure: {A} {Bayesian} Approach to Structure Discovery in {Bayesian} Networks.",
journal = "Machine Learning",
volume = 50,
number = {1--2},
pages = {95--125},
year = "2003",
Note = "Full version of UAI 2000 paper",
Stony Brook Math Tutor

...I am a CT certified teacher and have taught and tutored beginning readers for 6 years in Westport and Norwalk, CT. I am also trained in the "Lindamood Bell Learning Process," an innovative, highly effective program for teaching reading, from phonics to reading comprehension and vocabulary. I have also worked as a reading aide in kindergarten, 4th and 6th grade classes over the last 12 years.
30 Subjects: including prealgebra, algebra 1, reading, English

...I received my bachelor's in mathematics and participated in peer tutoring for my school while I was there. I try to work at the student's pace while tutoring, so as to let students actually get an understanding of the material. I like to create a friendly atmosphere so the student remains comfortable.
3 Subjects: including algebra 1, geometry, prealgebra

...Prior to teaching I was a financial analyst with a major corporation. I have an MS in Education and an MBA in Accounting and Finance. As a tutor, I take a personal approach when working with my students.
8 Subjects: including algebra 2, SAT math, algebra 1, prealgebra

...Happy Tutoring, Alec. As an avid reader, I have always known on some level that vocabulary is best developed by simply following one's natural inclination in subject matter. The words we enjoy the most stick in our memories the best. This means that in order to get an expansive vocabulary, you need to read the subjects that interest you, by the authors you find most compelling.
13 Subjects: including SAT math, English, writing, reading

...I also teach how to read music, as well as music theory and how to apply music theory. The student will learn how to analyze music and also how to compose songs. My teaching method is to have my students learn and play what they want immediately.
17 Subjects: including calculus, composition (music), general music, geometry
4Tests Forums • High School Life • algebra

colonel000 Posted - 30 July 2001 20:16
How can I solve this? A and B can build a wall in 24 hrs. After A worked 7 hrs, B helped A and both together finished the rest of the work in 20 hours. How long does it take each one to do the work?

DimaMedvedev Posted - 12 September 2001 19:40
Hi there. Well... this reply is kinda late now... But if you need help with Algebra, Physics, or Chemistry, please message me, and I will be very happy to help you... You see, I am a Junior in High School and I am already taking AP Statistics and AP Calculus, so I am sure I could solve any of your algebra problems. Here's the solution for this one:

Let x be the speed (work per hour) at which worker A does his job, and y the speed at which worker B does his. Then

24x + 24y = 27x + 20y,

because, as you said, both of them working together finish in 24 hours, and on the right side of the equation both work together for 20 hours after worker A has worked for 7 hours (20 + 7 = 27). Simplify the equation above to get 3x = 4y; divide both sides by 3 to get x = (4/3)y. That means that worker A works faster than worker B, and it takes him only 3 hours to do the work that B does in 4 hours.

Now you can substitute (4/3)y in for x on either side of the equation. For example, 27x + 20y = 27*(4/3)y + 20y = 36y + 20y = 56y. That means it will take 56 hours for worker B to build the wall at speed y. Now replace y with (3/4)x to get 27x + 20y = 27x + 20*(3/4)x = 27x + 15x = 42x, which means it takes 42 hours for worker A to do his job.

I hope that's what the question asked. You don't have to use 27x + 20y; you could use 24x + 24y to get the same results. Sorry that I am reading your post only now, months later, but I will be willing to help you in the future. Best regards,

PS. My e-mail address is bearalex777@hotmail.com. Anyone is welcome with math or science questions! See ya

godsdiamond Posted - 24 June 2006 12:57
Do you think that you could help me with some algebra? And maybe help me with comprehending some of the science problems? It would be such a great help!
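Dima's answer (42 hours for A, 56 hours for B) can be verified with exact arithmetic; a short check in Python using fractions:

```python
from fractions import Fraction

rate_A = Fraction(1, 42)   # walls per hour if A alone needs 42 h
rate_B = Fraction(1, 56)   # walls per hour if B alone needs 56 h

# Working together, the two rates should finish one wall in 24 hours
assert rate_A + rate_B == Fraction(1, 24)

# A works 7 hours alone, then both work together for 20 hours
work_done = 7 * rate_A + 20 * (rate_A + rate_B)
print(work_done)   # 1 -- exactly one whole wall
```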
There's no 4-branched Belyi's theorem -- right? There’s no 4-branched Belyi’s theorem — right? Much discussion on Math Overflow has not resolved the following should-be-easy question: Give an example of a curve in ${\mathcal{M}}_g$ defined over $\bar{Q}$ which is not a family of 4-branched covers of P^1. Surely there is one! But then again, you’d probably say “surely there’s a curve over $\bar{Q}$ which isn’t a 3-branched cover of P^1.” But there isn’t — that’s Belyi’s theorem. 6 thoughts on “There’s no 4-branched Belyi’s theorem — right?” 1. Would you be willing to tag this post “PlanetMO” so that it can be found via mathblogging.org/planetmo ? 2. done! 3. Thanks! 4. Dear Jordan — I think you should remove the overline from Mgbar. I am sure there are curves completely contained in the boundary which are not families of 4-branched covers (for any of the usual extensions of this notion to the boundary). Best — Jason 5. done, Jason. 6. [...] paper is related to the question I discussed last week about “4-branched Belyi” — or rather the theorem of Diaz-Donagi-Harbater that inspired our paper is related to that [...] Tagged algebraic curves, algebraic geometry, belyi, moduli of curves, PlanetMO
Problem C2.4. Laminar Flow around a Delta Wing

A laminar flow at high angle of attack around a delta wing with a sharp leading edge and a blunt trailing edge. As the flow passes the leading edge it rolls up and creates a vortex together with a secondary vortex. The vortex system remains over a long distance behind the wing. This problem is aimed at testing high-order and adaptive methods for the computation of vortex-dominated external flows. Note that methods which show high order on smooth solutions will show only about 1st order on this test case because of the reduced smoothness (e.g. at the sharp edges) of the flow solution. Finally, h-adaptive and hp-adaptive computations can also be submitted for this test case.

Governing Equations
The governing equations are the 3D Navier-Stokes equations with a constant ratio of specific heats of 1.4 and Prandtl number of 0.72. The viscosity is assumed constant.

Flow Conditions
Subsonic viscous flow with M_∞ = 0.3 and α = 12.5°, Reynolds number (based on a mean chord length of 1) Re = 4000. The geometry is a delta wing with a sloped and sharp leading edge and a blunt trailing edge. The geometry can be seen from Fig. 5, which shows the top, bottom and side view of the half model of the delta wing.

Half model:
Figure 4: Left: Top, bottom and side view of the half model of the delta wing. The grid has been provided by NLR within the ADIGMA project. Right: Streamlines and Mach number isosurfaces of the flow solution over the left half of the wing and Mach number slices over the right half. The figures are taken from [LH10].

Reference values
Reference area: 0.133974596 (half model)
Reference moment length: 1.0
Moment line: Quarter chord

Boundary Conditions
Far field boundary: Subsonic inflow and outflow
Wing surface: no-slip isothermal wall with [].

1. Start the simulation from a uniform free stream everywhere, and monitor the L_2 norm of the density residual. Track the work units needed to achieve steady state.
Compute the drag and lift coefficients c_d and c_l.
2. Perform grid and order refinement studies to find "converged" c_d and c_l values.
3. Plot the c_d and c_l errors vs. work units.
4. Study the numerical order of accuracy according to c_d and c_l errors vs. []. (Note that due to the locally non-smooth solution, e.g. at the sharp edges, globally high-order methods will show only about 1st order.)
5. Submit two sets of data to the workshop contact for this case:
a) c_d and c_l error vs. work units
b) c_d and c_l error vs. []
6. The following data sets can also be submitted:
a. for sequences of locally refined meshes (h-adaptive mesh refinement) and
b. for sequences of meshes with locally varying mesh size and order of convergence (hp-adaptive mesh refinement), possibly including improved data based on a posteriori error estimation results.
Note that here the error-vs-work-unit data sets should take account of the additional work units possibly required
- for auxiliary problems (like e.g. adjoint problems),
- for the evaluation of refinement indicators or mesh metrics,
- and for the actual mesh refinement or mesh regeneration procedure.

[LH10] T. Leicht and R. Hartmann. Error estimation and anisotropic mesh refinement for 3d laminar aerodynamic flow simulations. J. Comput. Phys., 229(19), 7344-7360, 2010.
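For task 4, a common way to report the observed order of accuracy is the two-grid estimate p = log(e1/e2)/log(h1/h2) between consecutive refinements; here is a generic sketch in Python with made-up numbers (illustrative only, not data from this test case):

```python
import math

# Hypothetical c_d errors on three uniformly refined meshes,
# with the characteristic mesh size h halved at each step
h = [0.4, 0.2, 0.1]
errors = [8.0e-3, 2.0e-3, 5.0e-4]   # decays like O(h^2)

# Observed order between consecutive meshes
for (h1, e1), (h2, e2) in zip(zip(h, errors), zip(h[1:], errors[1:])):
    p = math.log(e1 / e2) / math.log(h1 / h2)
    print(f"h: {h1} -> {h2}, observed order p = {p:.2f}")
```

As the problem statement warns, on this case even nominally high-order schemes can be expected to report p near 1 because of the reduced solution smoothness at the sharp edges.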
Summer Programs

Several mathematics majors choose to participate in the summer REU programs (Research Experiences for Undergraduates) available across the country each year. For a list of such programs see these web pages. The Mathematics Department supports a small number of mathematics majors for a summer research program amounting to a total of 8 weeks during the summer. This program is open to students who will be junior or senior mathematics majors during the following academic year; it is an opportunity not just to learn a body of mathematics but also to engage in some independent investigation. A student wishing to participate must find a faculty member who agrees to supervise the program, and in consultation with the advisor must submit an outline of the proposed program to the Special Projects Manager, Scott Kenney, Fine 320, by March 3rd. Students will be notified by April 30 whether their applications are accepted. From time to time the Mathematics Department has additional summer mathematics programs for undergraduates. These are announced on the Mathematics Department web page.
Mathematics at College of San Mateo - Associate in Science Degree for Transfer/SB1440: Mathematics AS-T

Major in Mathematics
The AS-T major in Mathematics prepares students for transfer into bachelor's degree programs in mathematics and similar areas.

Degree Requirements
Complete General Education and other requirements listed for the Associate degree and Major requirements: 21-22 semester units. A grade point average of 2.0 is required for the major courses.

Required Core:
MATH 251 Calculus with Analytical Geometry I - 5.0 units
MATH 252 Calculus with Analytical Geometry II - 5.0 units
MATH 253 Calculus with Analytical Geometry III - 5.0 units

List A: Plus one course from the following:
MATH 270 Linear Algebra - 3.0 units
MATH 275 Ordinary Differential Equations - 3.0 units

List B: Plus one course from the following:
MATH 270 Linear Algebra (if not selected in List A) - 3.0 units
MATH 275 Ordinary Differential Equations (if not selected in List A) - 3.0 units
MATH 268 Discrete Mathematics - 4.0 units
CIS 278 Programming Methods: C++ - 4.0 units
PHYS 250 Physics with Calculus I - 4.0 units

General Education requirements: Select courses to complete CSU General Education.
This degree does not require the CSM AA/AS General Education pattern.

CSU GE:
Area A1 Oral Communication - 3.0 units
Area A2 Written Communication - 3.0 units
Area A3 Critical Thinking - 3.0 units
Area B1 Physical Science - 3.0 units
Area B2 Life Science - 3.0 units
Area B3 Science Lab - 1.0 unit
Area B4 Math Concepts - 3.0 units
Area C1 Arts - 3.0 units
Area C2 Humanities - 3.0 units
Area C1 or C2 - 3.0 units
Area D Social, Political, and Economic Institutions - 9.0 units
Area E Lifelong Understanding - 3.0 units

Area 1A English Composition - 3.0 units
Area 1B Critical Thinking/Composition - 3.0 units
Area 1C Oral Communication - 3.0 units
Area 2 Math Concepts - 3.0 units
Area 3A Arts - 3.0 units
Area 3B Humanities - 3.0 units
Area 3A or 3B - 3.0 units
Area 4 Social and Behavioral Science - 9.0 units
Area 5A Physical Science - 3.0 units
Area 5B Biological Science - 3.0 units
Area 5C Either 5A or 5B must be a lab course - 1.0 unit

Additional CSU transferable courses based on student interest to reach 60 transferable units total.
Graph theory glossary

Graph-theory glossary for Freshman Seminar 23j: Chess and Mathematics (Fall 2004)

Graph theory, like chess, has an extensive collection of technical terminology. As with the chess glossary, this glossary is limited to basic terms of graph theory that we'll need for our seminar and whose meaning may not be obvious.

adjacency matrix (n.; the plural is ``adjacency matrices''): A table of 0's and 1's that encodes the structure of a graph. The rows and columns are labeled by the vertices; the entry in row r and column c is 1 if and only if (r,c) is an edge of the graph. Thus an adjacency matrix is a square table, which is diagonally symmetrical unless the graph is directed. For example, the 5-by-5 table with 1's exactly where r and c differ by 1 (mod 5) is an adjacency matrix of a pentagon, and the table with 1's exactly where c = r+1 (mod 5) is an adjacency matrix of a directed pentagon. (I write ``an adjacency matrix'' rather than ``the adjacency matrix'' because the table depends on how the vertices are ordered; reordering the vertices yields other adjacency matrices of the same directed pentagon. What's the total number of different adjacency matrices of this directed graph?)

adjacent (adj.): Two vertices v,v' of a graph are said to be ``adjacent'' [to each other] if {v,v'} is an edge of the graph.

bipartite (adj.): A graph is bipartite if its set of vertices can be split into two parts V1, V2, such that every edge of the graph connects a V1 vertex to a V2 vertex. For example, a hexagon is bipartite but a pentagon is not. More generally, we'll see that a graph is bipartite if and only if all cycles in the graph have even length. In particular, all trees (q.v.) are bipartite: a tree has no cycles at all, so a fortiori no odd cycles.

clique (n.): A subset S of the vertices of a graph such that all pairs of vertices in S are adjacent.

coclique (co-clique) (n.): A subset S of the vertices of a graph such that no two vertices in S are adjacent. A coclique in a graph is a clique in its complementary graph (q.v.).
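The adjacency-matrix construction can be sketched in a few lines of Python; this is a minimal illustration (vertex numbering 0..4 for the pentagon is an assumption of the sketch, not part of the glossary):

```python
def adjacency_matrix(n_vertices, edges, directed=False):
    """0/1 matrix: entry [r][c] is 1 iff (r,c) is an edge of the graph."""
    A = [[0] * n_vertices for _ in range(n_vertices)]
    for r, c in edges:
        A[r][c] = 1
        if not directed:       # undirected edge counts both ways
            A[c][r] = 1
    return A

# Pentagon: a 5-cycle on vertices 0..4.
pentagon = [(i, (i + 1) % 5) for i in range(5)]
A = adjacency_matrix(5, pentagon)

# Undirected, so the matrix is diagonally symmetrical.
assert all(A[r][c] == A[c][r] for r in range(5) for c in range(5))
```

Passing `directed=True` with the same edge list gives the (non-symmetrical) adjacency matrix of the directed pentagon.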
complement, complementary graph (n.): For any graph G, the complementary graph of G (a.k.a. the complement of G) is the graph with the same vertices as the vertices of G but with all the edges not in G. For instance, the complementary graph of a pentagon is a pentagram. If G' is this complementary graph then G is in turn the complement of G'.

complete bipartite graph (n.): A bipartite graph in which every V1 vertex is connected with every V2 vertex. Such a graph is sometimes called K_{n1,n2}, where n1,n2 are the numbers of vertices in the two parts V1,V2. For example, a square is a complete bipartite graph (namely K_{2,2} -- right?), but no other polygon is.

complete graph (n.): A graph in which every pair of vertices is adjacent. Such a graph is sometimes called K_n, where n is the number of vertices. For example, a triangle is a complete graph (namely K_3), but no other polygon is.

connected (adj.): A graph is connected if every pair of vertices is connected by some path. The Bishop graph of an 8*8 chessboard is not connected; all other pieces yield connected graphs.

degree (n.): The degree of a vertex of a graph is the number of other vertices that it is adjacent to. For instance, in a polygon all vertices have degree 2; in the Petersen graph, all vertices have degree 3; and in the complete graph K_n, all vertices have degree n-1. Thus all three of these graphs are regular (q.v.). The sum of the degrees of all vertices in a graph G equals twice the number of edges of G (why?); in particular, this sum must be an even number.

diameter (n.): The diameter of a graph is the maximal distance between any two points on the graph. If the graph is not connected, its diameter is infinite.

distance (n.): The distance between two vertices v,v' (often denoted ``d(v,v')'') is the length of the shortest path connecting them.
Thus:
- d(v,v')=0 if and only if v=v';
- d(v,v')=1 if and only if v and v' are adjacent;
- d(v,v')=2 if and only if v and v' are neither the same nor adjacent but have a common neighbor;
- ``etc.''.
Note that this definition does in fact satisfy the axioms (required properties) for a ``distance'':
i) nonnegativity: for all vertices v and v', the distance d(v,v') is either positive or zero, and zero if and only if v=v';
ii) symmetry: d(v,v')=d(v',v) for all v and v';
iii) triangle inequality: for all v,v',v'' the distance d(v,v'') is no larger than d(v,v')+d(v',v'') [why?].
If the graph is not connected, there are some vertices v,v' not connected by any path. In that case we say that d(v,v') = +infinity.

edge (n.): See graph.

Euler(ian) circuit, Euler(ian) path (n.): An Euler path (a.k.a. Eulerian path) in a graph G is a path that traverses each edge of G exactly once. If the initial and final vertices are the same then the path is an Euler(ian) circuit. Named in honor of Euler, who proved that: G has an Euler path if and only if the number of vertices of odd degree is 0 or 2; it has an Euler circuit if and only if all vertices have even degree; and in this case all Euler paths are circuits. (There cannot be just one vertex of odd degree because the sum of all the degrees is even.) Look up ``bridges of Königsberg'' on the Web for the charming history of this problem.

girth (n.): The girth of a graph is the length of the shortest cycle(s) in the graph. If the graph has no cycles, its girth is infinite.

graph (n.): A mathematical structure consisting of a (usually finite) set of ``vertices'' together with a set of ``edges'', which are pairs of vertices (usually regarded as ``connections'' between pairs of vertices). Unless otherwise specified, we do not allow loops (edges connecting a vertex to itself), nor multiple edges joining the same two vertices.
If the graph is directed, its edges are ordered pairs of vertices; usually our graphs are undirected, so the edges are unordered pairs.

Hamiltonian circuit, Hamiltonian path (n.): A Hamiltonian path in a graph G is a path that goes through each vertex of G once. If the initial and final vertices are adjacent then the path can be completed to a Hamiltonian circuit. Knight's tours and closed Knight's tours are examples of Hamiltonian paths and Hamiltonian circuits respectively. Unlike their Eulerian cousins, Hamiltonian paths and circuits can be hard to find, or even to tell whether they exist on a given graph G: it is known that finding a Hamiltonian path or circuit on a general graph G is an ``NP-complete'' problem.

isomorphic (adj.), isomorphism (n.): These are central notions not just in graph theory but in all of modern mathematics. Two mathematical objects, call them A and B, are isomorphic if they have the same structure; that is, if there is a 1:1 map from A to B that preserves all of the structure relevant to these objects. Such a map is an isomorphism. When A,B are graphs, an isomorphism is a bijection from the vertices of A to the vertices of B such that any two vertices of A are adjacent if and only if their images in B are adjacent. For example, the pentagon and pentagram are isomorphic as graphs; one isomorphism takes vertices 1,2,3,4,5 to 1,3,5,2,4. If A and B are isomorphic then they have identical graph-theoretical properties; for instance, since the pentagon is regular of degree 2, with diameter 2 and girth 5, the same is automatically true also of the pentagram once we know that the two graphs are isomorphic.

regular (adj.): A graph is regular if all its vertices have the same degree (q.v.). The Rook graph of an 8*8 chessboard is regular of degree 14 -- that is, a Rook on an empty board can go to 14 other squares, regardless of which square it stands on. None of the other chess pieces yields a regular graph.
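Several of the notions above (degree, regularity, the observation that the degrees sum to twice the number of edges, and distance/diameter via shortest paths) can be checked on the pentagon with a short script. This is an illustrative sketch only; the 0..4 vertex numbering is an assumption of the sketch:

```python
from collections import deque

def degrees(n, edges):
    """Degree of each vertex in an undirected graph."""
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d

def bfs_distances(n, edges, start):
    """Shortest-path distance from `start` to every vertex (None if unreachable)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [None] * n
    dist[start] = 0
    q = deque([start])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] is None:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

pentagon = [(i, (i + 1) % 5) for i in range(5)]
d = degrees(5, pentagon)
assert d == [2] * 5                    # the pentagon is regular of degree 2
assert sum(d) == 2 * len(pentagon)     # degree sum = twice the edge count
diam = max(max(bfs_distances(5, pentagon, s)) for s in range(5))
assert diam == 2                       # the pentagon has diameter 2
```

Since the pentagram is isomorphic to the pentagon, running the same checks on a pentagram edge list would give the same degree sequence and diameter.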
tree (n.): A connected graph with no cycles. Exercise: Prove that every tree has exactly one fewer edge than its number of vertices.

vertex (n.): See graph. The plural is vertices.
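The fact in the exercise goes both ways: a graph on n vertices is a tree exactly when it is connected, acyclic, and has n-1 edges. A small union-find sketch (illustrative only, not part of the glossary) checks this characterization:

```python
def is_tree(n, edges):
    """A graph on n vertices is a tree iff it has exactly n-1 edges
    and no edge closes a cycle; verified with union-find."""
    if len(edges) != n - 1:
        return False
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # this edge would close a cycle
            return False
        parent[ru] = rv
    return True

star = [(0, i) for i in range(1, 6)]                 # star on 6 vertices: a tree
pentagon = [(i, (i + 1) % 5) for i in range(5)]      # the 5-cycle: not a tree
assert is_tree(6, star)
assert not is_tree(5, pentagon)
```

The star passes because its 5 edges never join two vertices already connected; the pentagon fails immediately since it has as many edges as vertices.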
Deerfield Beach SAT Math Tutor

- Economics major in undergrad at JMU who completed CPA requirements at FAU while working. Worked for a Big 4 public accounting firm for 4 1/2 years as an auditor, which included teaching staff and clients proper accounting processes and GAAP. I have direct experience with Sarbanes-Oxley 404 (SOX), acco...
6 Subjects: including SAT math, accounting, algebra 1, prealgebra

...I have over 200 hours tutoring in algebra I/II and in geometry. I provide additional resources to all my students to help them prepare for the ACT. If you want someone to help you love math also, you have found your tutor!
12 Subjects: including SAT math, chemistry, geometry, biology

Hello, my name is Daniel and I double majored in Applied Mathematics and Economics at Florida State University. I have experience teaching Calculus I & II to college students as well as Algebra II to high school students. I am pursuing a graduate degree focused in Statistics.
24 Subjects: including SAT math, reading, physics, calculus

...I also taught adult students at the university level for over 2 years. I am qualified to tutor Calculus. I have professionally taught Microsoft Office, including Excel, at the university level.
16 Subjects: including SAT math, calculus, geometry, algebra 1

...The total time each week should be 3 hours per week. As the testing deadline approaches we will meet less, but each meeting will be for a longer duration. The emphasis in grades through 2nd is add, subtract, multiply, and divide whole numbers.
30 Subjects: including SAT math, calculus, geometry, ASVAB