A sequence of generic filters that does not come from an iteration

Fix a countable transitive model $M$ of ZFC. In my answer to this question I indicated that there are forcing iterations $((Q_\alpha:\alpha\leq\omega),(\dot P_\alpha:\alpha<\omega))$ in $M$ and sequences $(G_\alpha:\alpha<\omega)$ of filters such that the following happens: Each $G_\alpha$ is a filter in the evaluation of $\dot P_\alpha$ with respect to the filter $G_0*\dots*G_{\alpha-1}$, and $G_\alpha$ is generic over $M[G_0,\dots,G_{\alpha-1}]$ (call such a sequence $(G_\alpha:\alpha<\omega)$ a sequence of generics), but there is no $Q_\omega$-generic filter over $M$ whose $\alpha$-th projection is $G_\alpha$ for all $\alpha<\omega$.

An example can be obtained as follows: Take the countable support iteration of Sacks forcing (or any other nontrivial $\omega^\omega$-bounding proper forcing notion) of length $\omega$ (i.e., the supports are actually everything, but this doesn't matter). This forcing adds no Cohen real. Compare this to the finite support iteration of the same forcing notions. This iteration does add a Cohen real. The Cohen real is coded by the sequence of generics, and hence this sequence of generics does not come from a generic filter for the countable support iteration mentioned before. This sequence of generics is not even contained in a forcing extension obtained using the countable support iteration.

Now here are two questions:

1) Is there an example of a sequence of generics (of length $\omega$) that cannot come from any iteration of the $\dot P_\alpha$? I am asking here for iterations where the finite initial segments are as usual (just plain iteration) and we choose whatever ideal for the supports, including all finite subsets of the index set. But I am open to more general forms of iteration. For example, take a large forcing notion $Q$ along with commuting complete embeddings of the $Q_\alpha$, $\alpha<\omega$.
This would be an iteration of the $\dot P_\alpha$, too, the most general one that I can think of right now.

2) Is there an example of a sequence of generics over $M$ that is not contained in any countable transitive extension of $M$ with the same ordinals as $M$ that is a model of ZFC?

Obviously, a positive answer to 2) solves 1) as well.

2 Answers

There are a number of interesting things to say. The answer to your first question is yes. Suppose that $M$ is a countable transitive model of set theory and we have a forcing iteration $P_\omega$ in $M$ of length $\omega$, forcing with, say, Cohen forcing $Q_n$ at stage $n$. Let $z$ be any real that cannot be added by forcing over $M$, such as a real that codes all the ordinals of $M$. This real cannot exist in any extension of $M$ to a model of ZFC with the same ordinals. Now, suppose that $G$ is any $M$-generic filter for the iteration, with $G_n$ being the stage $n$ generic filter. Let $H_n$ be the filter that results from $G_n$ by changing the first bit so as to agree with $z(n)$. That is, we change a single bit at each stage. The resulting sequence $\langle H_n \mid n\in\omega\rangle$ will be generic at every stage, since only finitely many bits are changed by a given stage, but the whole sequence computes $z$, which cannot be added by forcing.

Second, a similar phenomenon occurs even just with 2-step product forcing:

Theorem. If $M$ is a countable transitive model of ZFC, then there are two $M$-generic Cohen reals $c$ and $d$ such that $M[c]$ and $M[d]$ have no common extension to a model of ZFC with the same ordinals.

The proof is to build $c$ and $d$ in stages. Fix a real $z$ which cannot exist in any extension of $M$ with the same ordinals, and enumerate the dense sets of $M$ by $D_0, D_1$ and so on. Build $c$ and $d$ in zig-zag fashion: first provide $c_0$ meeting $D_0$, and $d_0$ all $0$s of the same length as $c_0$, followed by the first digit of $z$.
Now extend $d_0$ to $d_1$ meeting the dense set, adding all $0$s to $c_0$ making $c_1$ of the same length, and adding one more bit of $z$. And so on. The point is that we ensure that each of $c$ and $d$ is $M$-generic, but together, they reveal the coding points of $z$. So no model extending $M$ with the same ordinals can have both $c$ and $d$, for then it would have $z$.

Third, this is essentially the only kind of obstacle, for there is a positive result here. The following theorem is proved in a paper by G. Fuchs, myself and J. Reitz on the topic of set-theoretic geology:

Theorem. If $M$ is a countable transitive model of set theory, and $M[G_n]$ is a sequence of generic extensions of $M$ by forcing $G_n\subset P_n\in M$ of bounded size in $M$, such that the extensions are finitely amalgamable, in the sense that any finitely many of the $M[G_n]$ have a common forcing extension $M[H]$, then there is a single forcing extension $M[H]$ containing all $M[G_n]$.

I'll try to post a proof sketch later, but the main idea is to perform a very large combination of forcing, and then perform surgery so as to replace certain coordinate generics with $G_n$, in such a way that the resulting extension can see only finite fragments of the sequence $\langle G_n \mid n\lt\omega\rangle$, without being able to construct the whole sequence. A special case of this theorem answers your second question, in a sense, for if we have a sequence of extensions $M\subset M[G_0]\subset M[G_1]\subset\cdots$, then these extensions are finitely amalgamable, and so there is indeed a common extension $M[H]$ containing every $M[G_n]$. This extension, however, is not an $\omega$-iteration of the forcing notions in your iteration, and in general we cannot expect that the sequence $\langle G_n \mid n\lt\omega\rangle$ is in $M[H]$, for the reasons described above. I added a link to the geology paper.
– Joel David Hamkins Jul 30 '11 at 12:44

Take the steps $Q_\alpha$ to be Cohen forcing, and begin by taking a generic filter $G$ for the usual finite-support product. So $G$ codes an $\omega$-sequence of Cohen reals $x_n\in 2^\omega$. Fix some $z\in 2^\omega$ coding an ordinal larger than the height of $M$. Define $y_n\in 2^\omega$ to be exactly the same as $x_n$ except that $y_n(0)=z(n)$. Since only one entry in $x_n$ has been changed, it is clear that $y_n$ is Cohen-generic over $M$ and that, for each natural number $n$, $\langle y_k:k<n\rangle$ is generic for $P_0*\dots*P_{n-1}$; let $G_n$ be the corresponding generic filter in $P_0*\dots*P_{n-1}$. The sequence $\langle G_n:n\in\omega\rangle$ is not in any extension of $M$ with the same ordinals, because it encodes $z$.

While I was typing this, Joel gave the same construction (and more). – Andreas Blass Sep 14 '10 at 17:52

Andreas, we gave the same answer simultaneously! – Joel David Hamkins Sep 14 '10 at 17:53

I accept Joel's answer for being a hair earlier and more extensive. – Stefan Geschke Sep 14 '10 at 18:25
Find a Trigonometry Tutor ...He scored 800/800 on the Math section of the GRE. Rajiv has 15 peer-reviewed publications/abstracts and holds 8 granted patents. "The data are clear. Recent results from the Third International Mathematics and Science Study (TIMSS) show that U.S. eighth- and 12th-graders do not do well by inte... 15 Subjects: including trigonometry, calculus, algebra 1, geometry ...I taught as a teaching assistant for two years and during this time was trusted to teach a class on my own while cooperatively generating the course material with other teaching assistants. I also held voluntary office hours which I regularly extended in order to provide the maximum opportunity ... 24 Subjects: including trigonometry, chemistry, physics, calculus ...I tutor part-time, and am available in the evenings and on Saturdays. I look forward to working with you or your child!I've tutored Calculus for about 5 years now. Most of my calculus students have been high school seniors, but some are high-achieving sophomores and juniors. 21 Subjects: including trigonometry, Spanish, reading, geometry ...I can work at your house or mine. Flexible hours could include right after school, evenings or weekends. I look forward to hearing from you and being a part of your success.I'm working with a geometry student who is doing 2 letter grades better this semester than last semester. 12 Subjects: including trigonometry, statistics, geometry, ASVAB ...I graduated high school in 2013 with a 4.3 GPA. I have several hours of tutoring experience before and with WyzAnt; tutoring students from elementary math to precalculus. Therefore, I have experience working with a wide variety of ages. 
17 Subjects: including trigonometry, chemistry, algebra 2, biology
Summary: Symmetric shift radix systems and finite expansions
S. Akiyama and K. Scheicher

Shift radix systems provide a unified notation to study several important types of number systems. However, the classification of such systems is already hard in two dimensions. In this paper, we consider a symmetric version of this concept which turns out to be easier: the set of such number systems with finite expansions can be completely classified in dimension two.

1 Introduction

Shift radix systems, defined in [4], provide a unified notation for canonical number systems (for short CNS) as well as $\beta$-expansions. Both concepts are generalisations of the well-known $b$-ary expansions of integers. Let $d \geq 1$ be an integer and $r = (r_1, \dots, r_d) \in \mathbb{R}^d$. With $r$ we associate a mapping $\tau_r : \mathbb{Z}^d \to \mathbb{Z}^d$ in the following way: if $z = (z_1, \dots, z_d) \in \mathbb{Z}^d$
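The extracted summary breaks off before stating the map itself. For orientation, here is a minimal Python sketch of the shift radix system map $\tau_r$ as it is usually defined in the literature (the definition below is supplied from the standard SRS papers, not from the truncated text above):

```python
import math

def tau(r, z):
    """Standard shift radix system map:
    tau_r(z_1, ..., z_d) = (z_2, ..., z_d, -floor(r . z)).
    (Definition assumed from the literature; the summary above
    is cut off before stating it.)"""
    t = math.floor(sum(ri * zi for ri, zi in zip(r, z)))
    return z[1:] + (-t,)

# r has the "finiteness property" when every orbit eventually reaches 0.
# Example in dimension one with r = (1/2,): the orbit of 3 is 3 -> -1 -> 1 -> 0.
orbit = [(3,)]
while orbit[-1] != (0,):
    orbit.append(tau((0.5,), orbit[-1]))
```

The classification question in the summary asks for which parameters $r$ this finiteness behavior holds for every starting point.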
Tuesday, May 25th, 2010 I’m working on a graphical editor in WPF. One feature of this editor is the ability to resize an object. As cool as that feature is, the comment I got from my partner on the project was even better. In this editor, you can drag the mouse across the surface to pan. You can use your scroll wheel to zoom in and out. And you can select an object and use your scroll wheel to resize it. The trick is figuring out just how much to resize with every tick. When the scale is zoomed out, you want each tick of the mouse to adjust the size by large amount. When the scale is zoomed in tight, you want to adjust by a proportionally smaller amount. This gives the user a chance to fine-tune by zooming in. But on the other end of the spectrum, you have to prevent the size of the object from reaching zero. So when the object gets small, it no longer shrinks in proportion to the zoom. Design an equation The first problem is to change the size in proportion to the zoom. The easiest way to do this is to convert the object size into screen size, adjust by a constant amount, and then convert back. Here’s the math to do that: • Screen size = actual size * zoom factor • New screen size = screen size + ticks * pixels per tick • New actual size = new screen size / zoom factor The second problem is to switch modes when the object appears small on the screen. Instead of changing screen size by a fixed number of pixels per tick, you want to change by fewer pixels as the object gets smaller. This way you will approach zero but never actually reach it. The key to solving this second problem is that idea of approaching a target without reaching it. In mathematical terms, that target – or limit – is an asymptote. We want to design an equation that asymptotically approaches zero as we scale down. But at the same time, we want to asymptotically approach a constant growth as we scale up. 
Here are the two asymptotes: y = 0 and y = x. The first asymptote (y = 0) makes the size of our object approach zero but never actually reach it. The second asymptote (y = x) makes the size of the object increase by the same amount each time we tick the scroll wheel. Let y be the screen position, and all this happens relative to the zoom. To design an equation with asymptotes, simply multiply to make each one a root of the equation: y * (y - x) = 0. The first factor represents the first asymptote (y = 0). The second represents the second (y = x). We just subtract one side from the other to turn the equation into a root (i.e. y = x, y - x = 0).

Study the equation

Plug this equation into Wolfram Alpha and it tells you that it is a pair of intersecting lines. We already knew that. We need to pull back from those two lines and see what happens. Let's change the equation to this: y * (y - x) = 9. Now Wolfram Alpha tells us that it is a hyperbola. We can visually verify that it approaches zero on the left, and it approaches a 1-to-1 slope on the right. We can also see that the equation crosses the y axis at 3. This is about where we "switch modes" from a big object to a little one. And finally, we can see the solution for y: y = (x + sqrt(x^2 + 36)) / 2. With a little work on the whiteboard, we can confirm that the numbers 3 and 36 are related to the arbitrarily chosen constant 9. The y intercept (3) is the square root, and the constant (36) is quadruple. With this knowledge, we can adjust the equation to "switch modes" at any screen size.

Complete the algorithm

With a little more help from Steven Wolfram, we can find the inverse of this equation: x = (y^2 - 9) / y.
With that, we turn this equation into an algorithm for determining the new size of an object:

• Screen size = actual size * zoom factor
• Starting x = (screen size^2 - small object^2) / screen size
• New x = starting x + ticks * pixels per tick
• New screen size = 1/2 * (new x + sqrt(new x^2 + 4 * small object^2))
• New actual size = new screen size / zoom factor

And with that we have one algorithm that operates one way on small objects, and another way on large objects. And it allows the user to fine-tune by zooming in.

My partner's comment

I walked through this code with my partner, and explained how it solved three problems at once. His reaction was, "You used math where I would have used an if statement." I think that was the most telling result of the whole exercise. When we write code using discrete concepts like conditions, we apply brute force. We decide exactly where the solution changes from one mode to another. We force a corner into the solution. A singularity at which the behavior of the system changes violently. Bugs gather at such singularities. And those corners poke the user in the eye, even if he can't put his finger on them. But when we find one simple, continuous, elegant solution to many problems, we allow that solution to take its own form. It emerges naturally from the problem space itself. The benefit is not simply more beautiful code. It is also fewer bugs, and a more pleasant user experience. So when faced with a multitude of problems, put them all together and see if a single solution emerges.
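The whole resize can be sketched in a few lines. This is an illustrative Python version, not the original WPF code; the parameter names (`pixels_per_tick`, `small`) and default values are my own choices, and it takes the positive root of the quadratic so the result stays positive:

```python
import math

def resize(actual_size, zoom, ticks, pixels_per_tick=4.0, small=3.0):
    """Scroll-wheel resize that grows linearly for large objects but only
    approaches zero asymptotically for small ones, by moving along the
    hyperbola y * (y - x) = small**2.  `small` is the screen size (pixels)
    at which the behavior switches modes."""
    screen = actual_size * zoom                          # actual -> screen
    x = (screen ** 2 - small ** 2) / screen              # invert the curve
    x += ticks * pixels_per_tick                         # apply wheel ticks
    screen = 0.5 * (x + math.sqrt(x ** 2 + 4 * small ** 2))  # back onto the curve
    return screen / zoom                                 # screen -> actual
```

With zero ticks this round-trips the size exactly, and no number of downward ticks can drive the result to zero or below, which is precisely the corner-free behavior the post describes.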
U Mn Science & Engineering Library News & Events

IMA Public Lecture: Patterns Patterns Everywhere
Martin Golubitsky, Cullen Distinguished Professor of Mathematics, University of Houston
March 7, 2007, 7:00 pm, Willey Hall 125

In this lecture, Professor Golubitsky will show some of these fascinating patterns and explain how mathematical symmetry enters the picture.
Math Forum Discussions - Re: More WMythology

Date: Feb 7, 2013 2:15 AM
Author: fom
Subject: Re: More WMythology

On 2/7/2013 12:45 AM, WM wrote:
> Matheology § 222 Back to the roots
> Consider a Cantor-list with entries a_n and anti-diagonal d:
> For every n: (a_n1, a_n2, ..., a_nn) =/= (d_1, d_2, ..., d_n).
> For every n: (a_n1, a_n2, ..., a_nn) is terminating.
> For every n: (d_1, d_2, ..., d_n) is terminating.
> For all n: (a_n1, a_n2, ..., a_nn) =/= (d_1, d_2, ..., d_n).
> For all n: (a_n1, a_n2, ..., a_nn) is terminating.
> For all n: (d_1, d_2, ..., d_n) is *not* terminating.
> That's the origin of matheology.
> Regards, WM

Correction. That is the origin of WMythology.

"I justify with the observation that most persons pass over this inconspicuous small detail and consequently tangle themselves in doubt and contradictions over the irrational; but, by observing the facts emphasized here they would spare themselves these problems and would clearly discern that the irrational number, in virtue of the property given to it by the definitions has just as definite a reality in our minds as the rational numbers or even the integers, and that one does not even need to gain it through a limiting process, but by possession of it one becomes convinced of the practicability and evidence of limiting processes in

Notice the word DEFINITION in Cantor's statement. That is how LOGIC is applied in the FOUNDATIONAL STUDY of DEMONSTRATIVE
Seeking help with a program.

March 14th, 2012, 02:52 PM

I'm still fairly new to Java; I've made a few simple programs but nothing too advanced. I've just been assigned a project where I must make a program that draws graphs of the two equations y = 600 - sqrt(x) * 10 and y = 300 + 300*sin(x/50.0). Normally this would not be an issue, and getting the graph to show is not the problem. I realize I'll have to make two methods, one for each operation. The trick is I cannot use the Math.sqrt function and I have to use the binary search method. I guess I'm just looking for some guidance on where to start exactly with the code for the method. Any input or advice is appreciated.

March 14th, 2012, 05:45 PM

Re: Seeking help with a program.

I'm not sure you'd have to make methods for the two functions - you'd end up calling them function1(double x) and function2(double x) or f(double x) and g(double x) or something equally not very helpful. If you're not allowed to use Math.sqrt then you'll certainly have to write a method for square root. Something like:

Code java:
public static double squareRoot(double x)

and then implement the 'binary search method', whatever that is. Is it one of these methods? Methods of computing square roots - Wikipedia, the free encyclopedia

March 14th, 2012, 09:08 PM

Re: Seeking help with a program.

It looks like the high-low method would work the best. Any suggestions or advice? Thank you for the help.

March 15th, 2012, 07:51 AM

Re: Seeking help with a program.

Binary search is used to find a value in a sorted list. I am not sure why you would need a binary search algorithm to plot two equations as a graph ...
unless of course you have been asked to find where the two intersect; then it would make a bit of twisted sense, because you could push the results of the equations into two arrays and then search for equality for every value of the first against the second (basically a very long-winded and problematic approach to solving simultaneous equations). As for not being allowed to use Math.sqrt .. tough break. Your lecturer is obviously a bit of a sadist. Your best bet would be to look at the Babylonian_method. It translates better to pseudo-code than the high-low method (which is better for manual computation). Personally, to start this project I would clarify the question with your lecturer. Then, as Sean4u stated, write a square root function and test it thoroughly. Next I would write a binary search function that accepts an array and a search value and returns the index of the position or -1. Hope that helps.
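To make the "high-low" suggestion concrete, here is a sketch of square root by bisection. It is written in Python for brevity (the thread is about Java, but this translates line-for-line into a static method): keep halving an interval that is known to bracket the root.

```python
def bisect_sqrt(x, eps=1e-10):
    """Square root by bisection (the 'high-low' / binary-search idea):
    repeatedly halve an interval [lo, hi] that brackets sqrt(x)."""
    if x < 0:
        raise ValueError("square root of a negative number")
    lo, hi = 0.0, max(x, 1.0)      # sqrt(x) <= max(x, 1) for x >= 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid               # root lies above mid
        else:
            hi = mid               # root lies at or below mid
    return (lo + hi) / 2
```

The `max(x, 1.0)` upper bound matters: for inputs below 1 (like 0.25) the square root is larger than the input, so `[0, x]` alone would not bracket it.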
Algebra Word Problem

April 25th 2010, 02:26 PM #1

Q. After inheriting $30,000 from a rich aunt, David decides to place the money in three different investments:
= a savings account paying 5%
= a bond account paying 7%
= a stock account paying 9%
After 1 year, he earned $2080 in interest. Find how much was invested at each rate if $8000 less was invested at 9% than at 7%.

Please explain how I can set up this mind boggling question for this mathematically challenged grasshopper. Thank you!

April 25th 2010, 04:12 PM #2

Hi Sasuria,

Let x = the investment at 7%. Then
x - 8000 = the investment at 9%
30000 - (2x - 8000) = the investment at 5%

Calculate the interest for each investment in terms of x, add them, and equate the sum to 2080. It is not difficult, but it is extremely easy to make mistakes with the signs.

April 25th 2010, 04:50 PM #3

Thank you sir!
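Following the setup in the reply, the arithmetic can be checked in a few lines (a throwaway sketch, not part of the thread):

```python
# Let x be the amount at 7%; then x - 8000 is at 9%,
# and 30000 - (2x - 8000) = 38000 - 2x is at 5%.
# Interest: 0.07x + 0.09(x - 8000) + 0.05(38000 - 2x) = 2080,
# which simplifies to 0.06x + 1180 = 2080.
x = (2080 - 1180) / 0.06     # amount at 7%: 15000
at_9 = x - 8000              # amount at 9%: 7000
at_5 = 30000 - x - at_9      # amount at 5%: 8000
interest = 0.07 * x + 0.09 * at_9 + 0.05 * at_5   # should be 2080
```

The three amounts sum to the full $30,000 and reproduce the $2080 of interest, confirming the setup.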
Here's the question you clicked on: if m<2=67 (degrees) then find m<1? anyone know how to do that?
Results 1 - 10 of 39 , 1992 "... Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are ..." Cited by 44 (11 self) Add to MetaCart Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff... - Information and Computation , 1999 "... Important refinements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied. Let c be any concept; every infinite sequence of elements exhausting c is called positive presentation of c. In all learning models considered the learning ma ..." Cited by 39 (29 self) Add to MetaCart Important refinements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied.
Let c be any concept; every infinite sequence of elements exhausting c is called positive presentation of c. In all learning models considered the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data item coming in. In k-bounded example-memory inference (k is a priori fixed) the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture, the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis , 1994 "... Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how ..." Cited by 18 (6 self) Add to MetaCart Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how e(p) uses its self knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self replication trick isomorphic to that employed by single-celled organisms.
Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low level models of themselves and the other prog... , 1993 "... A team of learning machines is essentially a multiset of learning machines. ..." - Journal of Computer and System Sciences , 1996 "... A new investigation of the complexity of language identification is undertaken using the notion of reduction from recursion theory and complexity theory. The approach, referred to as the intrinsic complexity of language identification, employs notions of ‘weak’ and ‘strong’ reduction between learn ..." Cited by 17 (7 self) Add to MetaCart A new investigation of the complexity of language identification is undertaken using the notion of reduction from recursion theory and complexity theory. The approach, referred to as the intrinsic complexity of language identification, employs notions of ‘weak’ and ‘strong’ reduction between learnable classes of languages. The intrinsic complexity of several classes is considered and the results agree with the intuitive difficulty of learning these classes. Several complete classes are shown for both the reductions and it is also established that the weak and strong reductions are distinct. An interesting result is that the self referential class of Wiehagen in which the minimal element of every language is a grammar for the language and the class of pattern languages introduced by Angluin are equivalent in the strong sense. This study has been influenced by a similar treatment of function identification by Freivalds, Kinber, and Smith. 1 - In Proceedings of the Ninth Annual Conference on Computational Learning Theory , 1996 "...
this paper we assume, without loss of generality, that for all σ ⊆ τ, [M(σ) ≠ ?] ⇒ [M(τ) ≠ ?]. ..." - Journal of Symbolic Logic , 1997 "... Limiting identification of r.e. indexes for r.e. languages (from a presentation of elements of the language) and limiting identification of programs for computable functions (from a graph of the function) have served as models for investigating the boundaries of learnability. Recently, a new approac ..." Cited by 15 (7 self) Add to MetaCart Limiting identification of r.e. indexes for r.e. languages (from a presentation of elements of the language) and limiting identification of programs for computable functions (from a graph of the function) have served as models for investigating the boundaries of learnability. Recently, a new approach to the study of "intrinsic" complexity of identification in the limit has been proposed. This approach, instead of dealing with the resource requirements of the learning algorithm, uses the notion of reducibility from recursion theory to compare and to capture the intuitive difficulty of learning various classes of concepts. Freivalds, Kinber, and Smith have studied this approach for function identification and Jain and Sharma have studied it for language identification. The present paper explores the structure of these reducibilities in the context of language identification. It is shown that there is an infinite hierarchy of language classes that represent learning problems of increasing difficulty. It is also shown that the language classes in this hierarchy are incomparable, under the reductions introduced, to the collection of pattern languages. Richness of the structure of intrinsic complexity is demonstrated by proving that any finite, acyclic, directed graph can be embedded in the reducibility structure. However, it is also established that this structure is not dense. The question of embedding any infinite, acyclic, directed graph is open. 1 - Information and Computation , 1999 "...
Cited by 12 (0 self): An index for an r.e. class of languages (by definition) is a procedure which generates a sequence of grammars defining the class. An index for an indexed family of languages (by definition) is a procedure which generates a sequence of decision procedures defining the family. Studied is the metaproblem of synthesizing from indices for r.e. classes and for indexed families of languages various kinds of language-learners for the corresponding classes or families indexed. Many positive results, as well as some negative results, are presented regarding the existence of such synthesizers. The negative results essentially provide lower bounds for the positive results. The proofs of some of the positive results yield, as pleasant corollaries, subset-principle or tell-tale style characterizations for the learnability of the corresponding classes or families indexed. For example, the indexed families of recursive languages that can be behaviorally correctly identified from positive data are surprisingly characterized by Angluin's (1980b) Condition 2 (the subset principle for circumventing overgeneralization). - Information and Computation, 1995
Cited by 12 (9 self): It was previously shown by Barzdin and Podnieks that one does not increase the power of learning programs for functions by allowing learning algorithms to converge to a finite set of correct programs instead of requiring them to converge to a single correct program. In this paper we define some new, subtle, but natural concepts of mind change complexity for function learning and show that, if one bounds this complexity for learning algorithms, then, by contrast with Barzdin and Podnieks' result, there are interesting and sometimes complicated tradeoffs between these complexity bounds, bounds on the number of final correct programs, and learning power. CR Classification Number: I.2.6 (Learning - Induction). - International Journal of Foundations of Computer Science, 1992

Cited by 11 (5 self): Machine learning of limit programs (i.e., programs allowed finitely many mind changes about their legitimate outputs) for computable functions is studied. Learning of iterated limit programs is also studied. To partially motivate these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable (n + 1)-iterated limit programs for them which cannot be proved from any n-iterated limit programs for them. It is shown that learning power is increased when (n + 1)-iterated limit programs rather than n-iterated limit programs are to be learned. Many trade-off results are obtained regarding learning power, number (possibly zero) of limits taken, program size constraints and information, and number of errors tolerated in final programs learned.
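As a toy illustration of the identification-in-the-limit paradigm these abstracts build on (the hypothesis class and all names below are invented, not taken from the papers): a learner that outputs the first hypothesis consistent with the data seen so far identifies any function in a finite class in the limit, after finitely many mind changes.

```python
# Toy Gold-style learner: identify a function in the limit from a finite
# hypothesis class by outputting the first hypothesis consistent with all
# data seen so far. For any target in the class, the guesses converge.
HYPOTHESES = [
    ("zero", lambda x: 0),
    ("square", lambda x: x * x),
    ("double", lambda x: 2 * x),
]

def learner(data_points):
    """data_points: list of (x, f(x)) pairs seen so far."""
    for name, h in HYPOTHESES:
        if all(h(x) == y for x, y in data_points):
            return name
    return None

target = lambda x: 2 * x
guesses = [learner([(x, target(x)) for x in range(n + 1)]) for n in range(4)]
print(guesses)  # ['zero', 'double', 'double', 'double'] -- one mind change
```

Here the single switch from "zero" to "double" is a mind change in the sense of the second abstract; bounding the number of such changes restricts which classes are learnable.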
Why does r=cos((2n+1)A) have 2n+1 petals, while cos((2n)(A)) has 4n petals?

April 3rd 2012, 08:20 AM #1 (Aug 2010)

In graphing the polar equation (letting A be the angle, being too lazy to put theta every time) r = r(A) = cos(nA), for n a positive integer, one has n petals if n is odd, and 2n petals if n is even. (Letting the domain of A be sufficiently large.) It appears that this is because the petals are repeated more often in the odd cases, but I cannot figure out why. I would be grateful for any clarification. Thanks.

Re: Why does r=cos((2n+1)A) have 2n+1 petals, while cos((2n)(A)) has 4n petals?

Actually, a polar rose $r = \cos[(2n+1)\theta]$ has $2(2n+1)$ petals ... the second petal is superimposed on the first. That is why you can graph a complete rose with an odd coefficient with limits from $0$ to $\pi$ (red graph) ... or from $\pi$ to $2\pi$ (blue graph).

Re: Why does r=cos((2n+1)A) have 2n+1 petals, while cos((2n)(A)) has 4n petals?

Thanks, skeeter, but apparently I did not express myself clearly enough, and you merely repeated my question in other terms: what you said about superimposition is what I mean by the petals being repeated more often (that is, letting the domain of theta be all reals, the polar roses of cos(2nA) also have superimposed petals, merely with less frequency than those of cos((2n+1)A)). So my question is: why exactly is there this difference between odd and even, i.e., why are the odd ones superimposed more frequently than the even ones?
Re: Why does r=cos((2n+1)A) have 2n+1 petals, while cos((2n)(A)) has 4n petals?

Rose (mathematics) - Wikipedia, the free encyclopedia ... scroll down to the topic "How the parameter k affects shapes".

Re: Why does r=cos((2n+1)A) have 2n+1 petals, while cos((2n)(A)) has 4n petals?

Again thanks, skeeter, but that was one of the first sources I checked (good ol' Wiki...). But it basically tells me that there is the overlap, not why. Here is an example of what I meant (being a bit sloppy, and too lazy to put the Greek symbols in):

It is obvious that {(A, cos(kA)) : 0 < A < 2(pi)} and {(A, cos(kA)) : 2(pi) < A < 4(pi)} will be the same. In fact, point by point, (A, cos(kA)) = (A+2(pi), cos(k(A+2(pi)))).

Now, however, to check (pi) in the same role, compare (A, cos(kA)) with (A+(pi), cos(k(A+(pi)))).

If k = 2n, this becomes (A+(pi), cos(2n(A+(pi)))) = (A+(pi), cos(2nA+2n(pi))) = (A+(pi), cos(2nA)) = (A+(pi), cos(kA)), which is definitely different from (A, cos(kA)), so no overlap.

If k = 2n+1, this becomes (A+(pi), cos((2n+1)(A+(pi)))) = (A+(pi), cos(2nA+2n(pi)+A+(pi))) = (A+(pi), cos(2nA+A+(pi))) = (A+(pi), cos((2n+1)A+(pi))) = (A+(pi), -cos((2n+1)A)) = (A+(pi), -cos(kA)) = (A, cos(kA)).

That could be made more rigorous (and/or more elegant), but this is the main idea, I think.
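The pointwise argument above can be checked numerically; a quick sketch (the helper names are mine), converting polar points with signed r to Cartesian coordinates:

```python
import math

def rose_point(k, theta):
    """Cartesian point traced by the polar rose r = cos(k*theta)."""
    r = math.cos(k * theta)
    return (r * math.cos(theta), r * math.sin(theta))

def same_point(p, q, tol=1e-12):
    return abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol

theta = 0.7  # arbitrary test angle
# Odd k: the curve retraces itself after a shift of pi, so only k petals show.
print(same_point(rose_point(3, theta), rose_point(3, theta + math.pi)))  # True
# Even k: the shifted point lands elsewhere, so all 2k petals are distinct.
print(same_point(rose_point(2, theta), rose_point(2, theta + math.pi)))  # False
```

This is exactly the (A+pi, -cos(kA)) = (A, cos(kA)) identity for odd k: a negative radius at angle A+pi lands on the same Cartesian point as a positive radius at angle A.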
Goal Expectation and Efficiency

This article has been written by Constantinos Chappas (who can be followed on Twitter @cchappas)

Introducing Goal Expectation

It's Sunday afternoon, you are heading home trying to catch the end of the match on TV but traffic is holding you back. By the time you get home, the final whistle has blown and you manage to catch a glimpse of the match statistics screen. Your team had 12 shots, with 8 on target, yet they only drew 1-1. "Their keeper must have played a blinder", you think to yourself. Well, possibly. But then again, maybe not. It depends on where those 12 shots were taken from. If they were taken from way outside the box and tamely reached the opposition goal, then they don't really count for much, do they? On the other hand, if the opposition goalie had just saved 7 1-on-1s, well that's different. In effect, some shots or chances in general are worth more than others. While discussing this with Colin Trainor (do follow him at @colinttrainor), we realised that a shot – or a shot on target for that matter – is not an adequate metric. Some shots have a higher probability of being converted into goals whereas others have a much lower one. As a result, we came up with a metric which considers a number of important factors affecting the chance of a particular shot being scored and assigns a figure for the probability of a goal (or the shot's goal expectation), and we named it ExpG. The exact calculations of ExpG will remain private, as a lot of work between us has been dedicated to its creation.

ExpG and Shooting Efficiency

The reason behind the introduction of ExpG is to provide a metric that chances / strikers / teams can be compared on. If a striker has a 25% conversion rate, that does not mean that he is a better finisher compared to someone with a 20% conversion rate. Perhaps his chances were from more favourable positions compared to the other striker's chances. Therefore unless we somehow break down the conversion rate (e.g.
shots from inside/outside the area) and look at those individual figures, we would be comparing apples with oranges. The proposed metric ExpG alleviates part of this problem. If a player, given the chances he was presented with, had an ExpG of 10.3 goals, he would be expected to score around 10 goals. If he managed to get on the scoresheet 12 times it could mean that he is an above average finisher, whereas a player who only managed 12 goals while he was expected to score ExpG = 16.4 goals would be considered inferior. An efficiency measure can be introduced here by dividing the number of goals a player or team has scored – excluding own goals when looking at team efficiency – by the number of goals that player or team was expected to score. An average player or team, in terms of efficiency, would have a rating equal to 1. Colin briefly introduced ExpG in his piece here and previously here, so the purpose of this piece is to delve a little bit deeper and present some of our analysis and results. Potential explanations for some of the results are offered, but it has to be stressed here that this is very much work in progress, therefore the ExpG figures are always likely to be updated depending on what our research uncovers.

Shooting Efficiency Across Leagues

Without further ado, we start by looking at the shooting efficiency figures across the top divisions in England, France, Germany, Italy and Spain for the 2012-2013 season. For comparison purposes we can look at the deviation of each shooting efficiency figure from 1.00. Based on our measure of shooting efficiency, it would seem that players in the English league are on average more inefficient, i.e. they score fewer goals (approximately 10% fewer) than what they would be expected to score given the chances they are presented with, compared to players from other leagues.
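The efficiency measure just defined is a plain ratio; a minimal sketch in Python, using figures that appear elsewhere in this piece (Lorient's 56 goals from an ExpG of 41.5, and Demba Ba's 15 goals from an ExpG of 16.97, quoted in the comments):

```python
def shooting_efficiency(goals, exp_g):
    """Actual goals divided by expected goals; 1.0 is an average finisher."""
    return goals / exp_g

# Figures quoted elsewhere in the piece:
print(round(shooting_efficiency(56, 41.5), 2))    # Lorient 2012-13: 1.35
print(round(shooting_efficiency(15, 16.97), 2))   # Demba Ba 2012-13: 0.88
```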
Germany appears at the opposite end of the spectrum of shooting efficiency, and in fact this has also been mentioned by others such as Ted Knutson in his piece here. Not the first time the words "Germany" and "efficiency" appear in the same sentence, I guess! One potential issue to consider here, which is not taken into account in our analysis due to a shortage of available data, is the effect the defensive organization of a team, including the defenders' positioning or defensive pressure, has on ExpG. Faced with better defenders, a striker might be less likely to score a goal compared to when he faces a team with average defensive capabilities. This could potentially be a factor explaining the below average efficiency in England and Italy, as identified by Colin himself in his article here.

Shooting Efficiency By Team

If we now turn our attention to the team level, shooting efficiency for all 98 teams in these leagues follows a reasonably symmetric and roughly Normal distribution, with the majority of teams exhibiting around average efficiency. For comparison purposes, league average efficiency figures have been included as dashed lines and correspond to the previous plot. A lot has been said about the Barcelona team of recent years and they excel in this metric with a shooting efficiency of 1.44. Having first accounted for a number of factors through this analysis, Barcelona's conversion of chances remains very high, and while part of this may be down to the quality of strikers they possess, interestingly enough even a Messi-less Barcelona registers a shooting efficiency of 1.35. On the other hand, only 3 English teams register an above-average attacking shooting efficiency. A different way of visualizing the results is by plotting the expected versus the actual goals per match scored by each team – having first excluded own goals. Note that the number of expected goals for a team is simply the sum of ExpG for all of their attempted shots.
The blue line is a simple linear regression whereas the black dashed line is the equality line, i.e. when teams have scored the same amount of goals they were expected to. The green area highlights the confidence limits for the model fit whereas the blue area presents 95% prediction limits for individual teams. In other words, we expect approximately 5% of the number of teams to fall outside of the blue band. The fact that both lines appear very close and the model appears to be a very good fit is reassuring. It suggests that ExpG is on average a good metric or even a substitute for actual goals, because it doesn't consistently over- or under-estimate the number of goals a team/player will score. Barcelona, Bayern Munich and Borussia Dortmund have actually scored a significantly higher number of goals compared to what they were expected to, given the type of chances they were presented with, whereas Everton and Manchester City have vastly underperformed in this area. Somewhat surprisingly, neither Bayern Munich nor Borussia Dortmund has the second highest shooting efficiency after the Catalans. Remember that shooting efficiency is defined as the number of actual goals divided by the number of expected goals, i.e. the slope of the dashed line. So the second prize goes to … (drumroll!): … Lorient! Scoring a total of 56 goals whereas, given their chances, they were expected to score just 41.5 goals, registering an attacking shooting efficiency of 1.35. Intriguingly enough, all but 1 of the Lorient players expected to score at least 1 goal registered above-average efficiency. A further look at the Top 10 teams in terms of attacking efficiency reveals no English or Italian team excelling at this measure. The top ranked Italian team and 16^th overall is Catania with an attacking efficiency rating of 1.144, while champions Juventus appear in 81^st place out of 98 teams with a rating of 0.843.
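The least-squares fit of actual on expected goals described above can be illustrated with toy numbers; the four (expected, actual) pairs below are invented, not the article's data. A well-calibrated ExpG model should produce a slope near 1 and an intercept near 0:

```python
# Hypothetical per-match (expected goals, actual goals) pairs for four teams;
# a well-calibrated ExpG model should fit close to the equality line y = x.
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.0), (4.0, 4.0)]

mean_x = sum(x for x, _ in data) / len(data)
mean_y = sum(y for _, y in data) / len(data)
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) \
        / sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # 0.98 0.05
```

A systematic departure of the fitted line from y = x (as the article later reports on the defensive side) would indicate that the model consistently over- or under-estimates goals.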
The 3 English teams with above average efficiency are Aston Villa (ranked 27^th overall with 1.055), Man Utd (ranked 32^nd overall with 1.041) and Swansea (ranked 40^th overall with 1.020). Other notable teams include Real Madrid (30^th with 1.054), PSG (38^th with 1.025), Chelsea (50^th with 0.980), Arsenal (56^th with 0.961) and Liverpool (79^th with 0.858).

Shooting Efficiency in Defensive Terms

To win matches, a team does not only need to score goals but to defend against them too. It's therefore only natural to look at the same measure (shooting efficiency) but in defensive terms. Which were those teams that defended well enough to prevent their opponents from scoring the number of goals their chances should have allowed them to? Once again, as would be expected, most of the teams have close to average figures. Numbers above 1.00 indicate teams which conceded more goals than they were expected to, whereas small numbers highlight teams with defences that prevented their opposition from scoring their expected numbers. A single French team (PSG) comes out on top of this metric with a value of 0.65, which translates to 35% fewer goals conceded than what would have been expected under this analysis. At the bottom of this statistic one can find Hoffenheim, who conceded 66 goals whereas, based on our research, the chances the opposition had would only justify conceding 43.2 goals. Looking at the actual and expected number of goals conceded per match, an interesting issue appears. The equality line between actual and expected goals does not fall within the fitted line's confidence limits. This could be down to the particular dataset (as the points only correspond to the 2012-2013 performance of these teams) or it could be a result of the fact that the ExpG model was based on data from the attacking side, given the lack of defensive statistics. Perhaps further research is needed on this.
Other than Hoffenheim, Werder Bremen and Mallorca also let in a significantly larger number of goals than what they were expected to, whereas on the other side, Sunderland were the pick of the teams who conceded fewer goals than what was expected of them. This ties in well with other pieces such as Colin's one here on the performance of Simon Mignolet, as well as the suggestion that Sunderland were probably the "luckiest" of the teams in 2012-2013 which weren't relegated. Following PSG, the Black Cats have the second best shooting efficiency against, at 0.71. A full Top 10 in terms of defensive efficiency: Notable teams which are missing from the table include Man City (ranked 15^th with 0.857), Man Utd (22^nd with 0.872), Arsenal (27^th with 0.891), Chelsea (35^th with 0.921), Real Madrid (46^th with 0.978), Liverpool (63^rd with 1.028) and Barcelona (67^th with 1.045).

Overall Shooting Efficiency

Bringing it all together and looking at shooting efficiency For and Against (or attacking and defending), we visualize the data in the following way: Points on the right hand side of the plot are efficient teams in terms of their attack, i.e. they score more goals than expected given the quality of chances they were presented with. On the other hand, defensively efficient teams occupy the lower part of the plot, because the number of goals conceded is lower compared to its expectation. Not a lot of teams excel both in terms of attacking and defensive efficiency. In fact, if we were to create quantiles and slice the data in terms of Top 5%, Top 10%, Top 15% etc for the two types of efficiency, the only team that appears in the Top 15% in both measures is the Champions League holders Bayern Munich, with Catania (!) only just missing out. There is no single way to combine the two types of efficiency, so in addition to the above hierarchy we could look at a different measure such as the ratio between Attacking and Defensive Efficiency.
The higher the ratio, the more efficient a team is overall. To visualize this we can plot lines on which this ratio is constant. The flatter the line, the more efficient a team is, so in terms of this statistic PSG register the highest overall efficiency of 1.585 (attacking efficiency of 1.025 divided by defensive efficiency of 0.647) followed by Bayern Munich at 1.505. In table format, the Top 10 teams in overall efficiency are: Some surprising names in this list perhaps, but a nice spread with all leagues represented in the top 6. Notable exceptions include Man Utd (ranked 14^th with 1.193), Juventus (29^th with 1.088), Arsenal (32^nd with 1.079), Real Madrid (34^th with 1.077), Chelsea (36^th with 1.064), Man City (74^th with 0.877) and Liverpool (86^th with 0.834).

Conclusion and the Way Forward

This piece was an introduction to ExpG, which is designed to estimate a shot's goal expectation. Armed with this measure, we can look at how teams or players have fared in terms of the number of goals scored or conceded relative to their respective expectations and derive efficiency figures for the attack or defence. ExpG allows the analyst to compare figures which have been adjusted for a number of factors affecting goal expectation, indirectly placing these figures in context, thus making comparisons across teams or players more relevant. This also presents huge scope for further analysis, looking at individual teams or players and throwing more light into understanding what football statistics really mean.

1. "Not the first time the words "Germany" and "efficiency" appear in the same sentence, I guess!" If you are to believe Johan Cruyff, you just have to drink a lot of beer (judging from his comment on the alleged doping scandal of West Germany in the bad old days). Sorry, just couldn't help myself…

2. Great post by the way. Will be interesting to follow the development of this new metric! □ Thanks. Hopefully, there will be a follow-up with some more analysis.
3. Fantastic work! I think Colin or Ted mentioned Ba's ExpG last season to be around 0.88. Can you tell me what his ExpG was for his tenure at Chelsea? Also, can we expect that you guys will include the ExpG stat for players when you populate your league tables? I will totally understand if you want to keep it confidential though. Moreover, I am sure you guys have sufficient data to generate an ExpS stat for keepers to judge their save-making efficiency. And something similar which accounts for a team's defence's ability to keep a clean sheet. □ Thanks for your comments. Ba's overall ExpG last year was 16.97, and with 15 goals scored that gives him a 0.88 efficiency rating. But if you were to look at his Chelsea figures only, his ExpG was 7.72 and he scored just 2 goals, i.e. 0.26 efficiency. I don't know yet if ExpG will be available in the tables. And as for ExpS, we haven't really gone down that road, but perhaps it's an idea worth exploring. ☆ Thanks for sharing!! Could you clear something up for me though: Torres had an efficiency of 0.73 last season, meaning he was expected to score around 11 goals, and he managed 8. Does this mean 1. He didn't get into enough scoring positions 2. He shot way less than usual 3. His teammates failed to create good chances for him 4. Some combination of the above. This is pushing it a lot, but is it possible for you to give weights to these points by how much they were a factor in the player's poor performance? ○ Sid, the efficiency of 0.73 means that, based on the shots he took, he only scored 73% of his expected goals figure. If you bear that in mind it should answer your question, but I would say that your Points 1-4 are all answered in the negative. It means he finished the shots he had with less proficient finishing than would be expected of the average player. ■ That clears it up a bit!! But now I have even more questions!!! Thanks for replying! ○ Regarding your points 1-4, I think you need to think about it in a different way.
Torres' low efficiency score does not mean that he didn't get into scoring positions or that the number of his shots was low. ExpG is calculated based on the type of chances/shots he took. So if, for example, his chances were difficult to score, they would be associated with low ExpG figures. As a result, the expected number of goals from those chances would not need to be high. The same applies to the number of shots he took. If he only took a few shots then obviously he wouldn't be expected to score a lot of goals. The idea is to try to assign a goal expectancy figure to each shot and compare the expected number of goals with the actual number. Therefore, based on his chances (some were difficult – low ExpG, some were easy – high ExpG), he was expected to score 11 goals but only managed 8, therefore his efficiency, compared to what an average striker would get from those chances, was low. As a result, your question on weights doesn't really apply to what we've done. ■ Thanks for clearing that up! 4. Interesting work indeed. I would like to see the results adjusted for league. If we assume it is easier to score in Germany, how do their teams fare when adjusted to reflect that? □ I'm not really sure I follow. Our calculation of ExpG and therefore efficiency was not based on leagues. It's the results that showed that German teams were on average more efficient in taking those chances compared to the rest of the teams. So perhaps you could explain what you mean by adjusting those teams to reflect that? ☆ I'm just interpreting the results in a slightly different way. One way is that players in a country with a higher ExpG are on average better at finishing (but then this doesn't consider that they would be more likely to face better goalkeepers!). Or, the way I see it, the players in a league with a high ExpG could be playing in a league with a different culture tactics-wise, or a greater concentration of "whipping-boys" to boost their overall score tallies.
In the second explanation, you could say that on average players for German teams are 10% more likely to score, so the same players in the English league would be -10% due to being taken out of the German league, and a further -10% when placed into the EPL. This would mean that, to compare teams across different leagues, the league average ExpG would have to be reflected in the team/player ExpG. Or used to compare a striker's efficiency when playing in a different league. 5. "One potential issue to consider here which is not taken into account in our analysis due to shortage of available data is the effect the defensive organization of a team, including the defenders' positioning or defensive pressure has on ExpG." So does this mean that every shot from the same area is treated with the same goal ExpG? I feel like this could lead to undervaluing of lone strikers who lead attacks and often have the ball and attempt shots in very heavy traffic. For example, Suarez was listed in another article as the #3 most "underperforming" scorer based on ExpG. On the flip side, wing players who slash in on the break are often taking shots with fewer defenders back and will have a good chance of being overvalued. □ I won't go into specifics, but in general the answer to your question "So does this mean that every shot from the same area is treated with the same goal ExpG?" is No. There are other factors that we consider which try to address the situation that you are describing, but the main problem remains the defenders' position/pressure the striker is faced with. As for Suarez, perhaps you can direct me to the article you are referring to, as according to the numbers we have him at 1.08 shooting efficiency. 6. Love your analysis and think it's pretty innovative. I also have some questions: Have you used the team data to see if they have any predictive power on match results? The other one: Where did you get the data for such an elaborate measure of shooting efficiency? 7.
Are you able to reveal which data sets you are using for your analysis?
Winthrop Harbor Algebra 1 Tutor Find a Winthrop Harbor Algebra 1 Tutor ...This course starts out with basic properties of operations such as associative, distributive, and many more. The course evaluates expression using an order of operations known as PEMDAS. Also, it has review of natural numbers, arithmetic operations, integers, fractions, decimals and negative nu... 11 Subjects: including algebra 1, calculus, algebra 2, trigonometry ...I am deeply passionate about the Classics, and strongly believe that learning Latin and Greek is a great way for students to gain an understanding of Greek and Roman culture. As a tutor, I hope to share my love of Latin and Greek with others! --------------------- In addition to tutoring Latin... 20 Subjects: including algebra 1, English, reading, ESL/ESOL ...I also have particular expertise in tutoring students in mathematics, up through trigonometry, and at all levels of English - having served as a Mathematics Coach in the Chicago Public Schools, and as a reporter and copy editor in my first career for The Daily Southtown Economist in Chicago, now ... 20 Subjects: including algebra 1, reading, English, writing I have over 10 years of experience teaching math and English in the public, private, and international school setting. I've taught all levels of math in the middle school and high school levels. As an experienced classroom teacher (now a stay-at-home mom to two little ones), I know the struggles and triumphs that students face in the classroom. 8 Subjects: including algebra 1, geometry, algebra 2, trigonometry ...My favorite part about teaching is watching a student's confidence grow as they realize they are capable of learning. I love when the light bulb clicks!As a Resource Teacher, I have successfully taught students with learning disabilities math. Before my current job, I was a Teachers Associate for 4 years in Glencoe and Deerfield. 18 Subjects: including algebra 1, reading, writing, grammar
Card Trick

Pull a 3 from the deck and lay it face down. Now have the participant pick any card from the deck. Have them double the value of the card and then add 2. Have them multiply this number by 5. Finally, have them subtract 7 from the result. In the example below, the person picked the 8.

Here's how it works: [(2(8)+2)5]-7 = [(16+2)5]-7 = [(18)5]-7 = 90-7 = 83

They picked the 8 and you picked the 3. In general, a card of value v produces (2v+2)(5)-7 = 10v+3, so the result always ends in 3 (the card you laid face down), and its leading digit(s) reveal the participant's card. The math will tell you the rest!

Thanks to http://www.videojug.com/film/how-to-do-a-maths-magic-trick.
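A brute-force check of the trick over every card value (a quick sketch; the function name is mine):

```python
def trick_result(card_value):
    """Apply the spectator's arithmetic: double, add 2, times 5, minus 7."""
    return (card_value * 2 + 2) * 5 - 7

# For every card value (Ace=1 ... King=13) the result is 10*v + 3:
# the last digit is always 3 (the card set aside face down) and the
# leading digit(s) spell out the spectator's card.
for v in range(1, 14):
    n = trick_result(v)
    assert n == 10 * v + 3
    assert n % 10 == 3 and n // 10 == v

print(trick_result(8))  # 83, as in the example above
```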
Topic: cylindrical coordinate system
Replies: 0
Posted: Jul 10, 2012 6:22 PM

Thanks for reading my posting. I am using an engineering "Finite Element Method" program. To get a better idea of what I am going to ask, please see the attached picture. I am trying to stretch a piece of a cylinder (with a central angle of 75 degrees) circumferentially and make it a half cylinder (central angle of 180 degrees). The picture is from the solution based on a Cartesian coordinate system. I was going to solve the problem using a cylindrical coordinate system, where in the boundary conditions section I need to specify two DISPLACEMENTS for the inner surface: one in the r direction (ur) and the other in the phi (tangential) direction (ut). Unfortunately, the program is not asking for the change in the angle phi, but for the change in length (displacement) along phi. I have the initial and final radii of the inner surface (and even the outer radii), and also the initial and final phi values. Specifying ur is very simple; however, in the phi direction things seem vague to me, as both the radius and the angle are changing. So what I probably need to do is to enter the delta value of: d(r*phi) = r*d(phi) + phi*dr. My question is: is it correct to evaluate this using the initial radius (the one before deformation) and also the initial values of the angle phi? I am asking this because the above equation is general in terms of r and phi, and I was asking myself whether I should use r = Ri or r = ri (where ri is the inner radius after deformation), and likewise whether I should use phi before or after deformation. Any comments are highly appreciated.
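Not knowing the actual model dimensions, here is a small numeric sketch of the distinction the question turns on; every radius below is invented, and the assumption that the stretch is symmetric about the mid-plane is mine, not the poster's:

```python
import math

# Hypothetical geometry: the radii are made up for illustration; only the
# central angles (75 degrees stretched to 180 degrees) come from the post.
R_i = 10.0                       # inner radius before deformation (assumed)
r_i = 9.0                        # inner radius after deformation (assumed)
phi_0 = math.radians(75.0 / 2)   # angular coordinate of a free-edge point
phi_1 = math.radians(180.0 / 2)  # same material point after the stretch

dr = r_i - R_i
dphi = phi_1 - phi_0

# Exact change in the arc-length coordinate r*phi for that point:
exact = r_i * phi_1 - R_i * phi_0

# First-order estimate d(r*phi) = r*d(phi) + phi*dr, evaluated at the
# *initial* state (R_i, phi_0), as the post contemplates:
linearized = R_i * dphi + phi_0 * dr

print(round(exact, 4), round(linearized, 4))  # 7.5922 8.5085
```

For a deformation this large the two values differ by more than 10%, which suggests the answer depends on whether the FEM program interprets the prescribed displacement as a total (finite) displacement, in which case the exact difference of final minus initial r*phi applies, or as an increment in a linearized setting, in which case evaluating at the initial state is the natural choice.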
What Would Be Left, If...? What Would Be Left, If…? December 21, 2012 Apocalypses in math and theory, plus a complexity question Douglas Adams wrote the watchword for today: Don’t Panic. Still, his novel and series The Hitchhiker’s Guide to the Galaxy begins with the impending destruction of the Earth, which goes ahead 5 minutes too soon. The remainder is post-apocalyptic. It is also pre-apocalyptic, because there are multiple Earths that face destruction at different times. At least having multiple days of prophesied doom is something we’ve recently been dealing with. Today—the last time we can use this word?—we wish to cover real apocalypses in mathematical and scientific theories. We have already blogged about what would happen to complexity theory if ${\mathsf{P = NP}}$ were true and proved. As we said, “Most of the 573 pages of [the] Arora-Barak [textbook] would be gone…” Well this is still hypothetical. Now we will look at cases in the past where whole theories were blown up by a surprise result. This is different from theories going out of fashion and dying out, even if it was from internal causes. Likewise we don’t consider the fadeouts of past civilizations to be catastrophes, only ones destroyed by things like volcanoes. Ironically, the branch of mathematics called catastrophe theory itself is said to be one of the fadeouts. As mathematical historian David Aubin wrote in “Chapter III: Catastrophes” of his 1998 Princeton PhD thesis: Catastrophe Theory is dead. Today very, very few scientists identify themselves as ‘catastrophists’; the theory has no institutional basis, department, institute, or journal totally or even partly devoted to it. But do mathematics die? He goes on to cite an article by Charles Fisher that proclaimed the death of Invariant Theory. To be sure, theories like that sometimes get revived. 
But first a word about the Mayans and ultimate Baktun The Future All the fuss is about today’s ticking over of a Mayan unit of time called a baktun, or more properly b’ak’tun. It’s not even a once-in-5,000-years event like everyone says, but rather once-in-144,000 days, making just over 394 years. The point is there have been 13 of them since the inception of the Mayan creation date according to their “Long Count” calendar, making ${13 \times 394.26 = 5,125.38}$ years in all. So the 14th b’ak’tun starts today—big whoop. The buzz comes from many Mayan inscriptions seeming to max out at 13, but others go as far as 19 and it is known that they counted by 20. Hence the real epoch will be when the 20th and final baktun ticks over to initiate the next piktun. That will be on October 13, 4772. If human civilization lasts that long, that is. This still has us thinking, what if Earth really were suddenly blown up by Vogons or by Vegans or by a space rock a little bigger than last week’s? What would be left? Anything? The reason is that according to a recently-agreed principle in fundamental physical theory, the answer should be everything. The principle, as enunciated in small capitals by popular science author Charles Seife in his 2007 book Decoding the Universe, states: Information can be neither created nor destroyed. As we mentioned last March, the agreement was symbolized by Stephen Hawking conceding a bet to John Preskill, who has graced these pages. Hawking underscored the point by making it a main part of the plot of a children’s novel written with his daughter Lucy. The father falls into a black hole, but is resurrected by a computer able to piece back all the information because it was all recoverable. At least in theory. 
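The calendar arithmetic above is simple enough to check in a couple of lines (this sketch is ours, not the post's; the year length chosen here is the mean Gregorian year, so the last decimal differs slightly from the figure quoted above):

```python
BAKTUN_DAYS = 144_000          # 1 b'ak'tun = 20 k'atun = 144,000 days
days = 13 * BAKTUN_DAYS        # 13 b'ak'tuns since the Long Count epoch
years = days / 365.2425        # mean Gregorian year
print(days, round(years, 2))   # 1,872,000 days, a bit over 5,125 years
```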
Hence even if Earth really is swallowed up later today, or if we disappear—leaving all our stored information and literary artefacts to decay within a 50,000-year time-span estimated for The History Channel’s series Life After People—all the information would in principle still exist. Is this comforting? Perhaps not. It could be that while all natural processes conserve information, the more violent ones might embody the computation of a one-way function. It then becomes an issue of complexity theory whether the output of that function could be reverted to its pre-apocalyptic state. Apocalypses In Math Here are a few examples from mathematics of “extinction” events: usually the extinction was of a theory or whole approach to math. ${\bullet}$Bertrand Russell and Gottlob Frege: Frege was just finishing his tome on logic when the letter from Russell arrived showing that Frege’s system was inconsistent. The letter basically noticed that the set $\displaystyle \{ x \mid x \notin x \},$ was not well-defined. This destroyed the whole book that Frege had worked so hard on for years. Frege’s reaction was recorded in his revised preface: A scientist can hardly meet with anything more undesirable than to have the foundations give way just as the work is finished. I was put in this position by a letter from Mr. Bertrand Russell when the work was nearly through the press. A study in being low-key as we might say today. ${\bullet}$David Hilbert and Paul Gordan: Gordan was known as “the king of invariant theory.” His most famous result is that the ring of invariants of binary forms of fixed degree is finitely generated. Hilbert proved his famous theorem that replaced “binary” by any degree, replacing horribly complex arguments with a beautiful existence proof.
To quote Wikipedia: [This] almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later. Gordan was less low-key than Frege, since his comment on Hilbert’s brilliant work was: “This is not mathematics; this is theology.” Oh well. ${\bullet}$Kurt Gödel and David Hilbert: Hilbert, again, wanted to create a formal foundation of all mathematics based on an axiomatic approach. He had already done this for geometry in his famous work of 1899. No, Euclid did have an axiomatic system thousands of years earlier, but it was not really formal. Some proofs relied on looking at diagrams and other obvious facts, so Hilbert added extra notions that made geometry rest on a complete system. For example, Hilbert added the notion of betweenness of three points: point ${x}$ is between ${y}$ and ${z}$. Of course Gödel proved via his famous Incompleteness Theorem that what Hilbert could do for geometry was impossible to do for number theory. ${\bullet}$Stephen Kleene and Barkley Rosser: Once Alonzo Church’s lambda calculus and Haskell Curry’s combinators were discovered in the 1930′s, it seemed natural to build systems of logic around them. That was the original intent of both Curry and Church. It was therefore a shock when Kleene and Rosser, as students of Church, showed at a stroke that they were inconsistent. The reason is that the theories’ standard of “well-defined” claimed too extensive a reach, as with Frege’s formalization of the notion of “set.” It essentially allowed defining an exhaustive countable list of well-defined real numbers, for which the Cantor diagonal number was well-defined within the system, a contradiction. Ken likens this paradox phenomenon to the collapse of the Tower of Babel.
${\bullet}$Riemann’s Non-Conjecture Refuted: At the very end of his famous 1859 paper which included the Riemann Hypothesis, Bernhard Riemann made a carefully-worded statement about the relationship between the prime-counting function ${\pi(x)}$ and the logarithmic integrals ${li(x)}$ and ${Li(x) = li(x) - li(2)}$: Indeed, in the comparison of ${Li(x)}$ with the number of prime numbers less than ${x}$, undertaken by Gauss and Goldschmidt and carried through up to ${x =}$ three million, this number has shown itself out to be, in the first hundred thousand, always less than ${Li(x)}$; in fact the difference grows, with many fluctuations, gradually with ${x}$. Further calculations were consistent with the inequality holding in general, until in 1914 John Littlewood refuted this not just once, but infinitely often. That is, he did not find a counterexample by computation, but rather proved that ${li(x) - \pi(x)}$ must change sign infinitely often. In fact, the first number giving a sign flip is still unknown, though it must be below ${e^{727.951346801}}$. Although this is included on Wikipedia’s short list of disproved mathematical ideas, its significance is not the inequality hypothesis itself, but the fallible nature of numerical evidence. Michael Rubinstein and Peter Sarnak showed an opposite surprise: the set of integers ${x}$ giving a negative sign has non-vanishing density, in fact about 0.00000026, so it is disturbing that no such ${x}$ is within the current range of calculation. ${\bullet}$Mertens Conjecture Refuted: The conjecture, which was first made in 1885 by Thomas Stieltjes, not Franz Mertens, states that the sum of the first ${n}$ values of the Möbius function has absolute value at most ${\sqrt{n}}$. That is, $\displaystyle M(n) = |\sum_{k=1}^n \mu(k)| \leq n^{\frac{1}{2}}.$ Despite the fact that all computer calculations still support this, Andrew Odlyzko and Herman te Riele disproved it theoretically in 1985.
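As an illustration of the kind of numerical evidence being discussed (this sketch is ours, not the post's), one can sieve the Möbius function and watch the ratio $|M(n)|/\sqrt{n}$ stay comfortably below 1 over a small range, exactly the behavior that made the conjecture so plausible:

```python
import math

def mobius_sieve(limit):
    """Compute mu(k) for k = 0..limit with a linear sieve."""
    mu = [0] * (limit + 1)
    mu[1] = 1
    primes = []
    is_composite = [False] * (limit + 1)
    for i in range(2, limit + 1):
        if not is_composite[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_composite[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]
    return mu

N = 10_000
mu = mobius_sieve(N)
M, worst = 0, 0.0
for n in range(1, N + 1):
    M += mu[n]                     # running value of the Mertens function M(n)
    if n > 1:                      # n = 1 gives the trivial ratio 1
        worst = max(worst, abs(M) / math.sqrt(n))
print(worst)                       # stays below 1 on this whole range
```

Of course, Odlyzko and te Riele's point is precisely that no feasible amount of this kind of checking could have been decisive.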
At least it has an exponentially bigger leeway than the previous one: the best known upper bound on a bad ${n}$ is currently $\displaystyle e^{15,900,000,000,000,000,000,000,000,000,000,000,000,000}.$ Moreover the following weaker statement, which Stieltjes thought he had proved, is still open: $\displaystyle (\exists C > 0)(\exists n_0 > 0)(\forall n \geq n_0) M(n) \leq Cn^{\frac{1}{2}}.$ The reason this is portentous is that the following slight further weakening, $\displaystyle (\forall \epsilon > 0)(\exists C > 0)(\exists n_0 > 0)(\forall n \geq n_0) M(n) \leq Cn^{\frac{1}{2} + \epsilon},$ is actually equivalent to the Riemann Hypothesis. The failure of Riemann would have a definite apocalyptic effect: it would wipe away all the many papers that assume it. It is not clear whether those papers could even be saved by the kind of “relativization” we have in complexity theory, whereby results obtained assuming ${\mathsf{P \neq NP}}$ and so on may still be valid relative to oracle languages ${B}$ such that ${\mathsf{P}^B \neq \mathsf{NP}^B}$. Our Scientific Neighbors’ Houses Still, the loss of papers assuming Riemann would be nothing compared to what would happen in physics if supersymmetry were disproved, as its failure could take all of string theory down with it. The Standard Model of particle physics seems also to have survived problems the absence of the Higgs Boson would have caused, although issues with the Higgs are still causing apocalyptic reactions from some physicists. At least news today is that other bosons are behaving well according to Scott Aaronson and Alexander Arkhipov’s protocol, which is related to our kind of hierarchy collapse. Perhaps we in computer science theory and mathematics are fortunate to experience less peril. Even so, we are left with this quotation attributed to Hilbert by Howard Eves: One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it.
Open Problems Do you have other favorite examples of results in the mathematical and general sciences that caused the collapse of theories? Does Nature compute complexity-theoretic one-way functions? [fixed Seife quote, minor tweaks] 1. December 22, 2012 12:58 am Dick Lipton asks “Do you have other favorite examples of results in the mathematical and general sciences that caused the collapse of theories?” In the immortal phrase of Eric Idle “that’s easy! Namely, chinook collapsed the theory of checkers (draughts) by solving the game.” 2. December 22, 2012 4:53 am Dick Lipton: You know that all of us know that we are left with a single fact about theory which is there is no theory. You mention the Kleene-Rosser paradox. You have already examined the proof that the set of all KR-paradoxes is decidable iff it is undecidable. ZFC is inconsistent iff the lambda-calculus is inconsistent. The objects defined by the later are all objects of the former. I’m sure that you as well as others (now) believe in this result, but no one is ready to admit it, except in anonymous comments. Rafee Kamouna. 3. December 22, 2012 9:57 pm Frege’s system is not really “destroyed”, the reality is similar to Cantor’s naive set theory. I remmebe reading an article by George Boolos in JSL that if we replace axiom V with Hume’s principle the theory can be saved. (Huh! this is on Wikipedia: http://en.wikipedia.org/wiki/George_Boolos I should donate more often to them). 4. December 23, 2012 6:19 am More examples (some serious, some not): • (from the 1960s) the collapse of diagnostic auscultation by (e.g.) doppler ultrasound. • (from the 1970s) the collapse of analytic S-matrix theory by gauge field theories. • (2000++) The collapse of post-P computation as a planned milestone for quantum information technology. • (500BC++) “The history of philosophy (and AI too) is the history of failed models of the brain.” • (1900s) The collapse of capitalism by Marxism/Leninism/Maoism ! • (2000++) No! 
… the collapse of Marxism/Leninism/Maoism by Ayn Randianism !! • (2000++) No! … the collapse of Ayn Randian capitalism by euro-socialism !!! • (2000++) No! … the collapse of euro-socialism by China-style state-capitalism !!!! • (2000++) No! … the worldwide collapse of family farms by corporate farms !!!!! • (2000++) No! … the worldwide collapse of Jeffersonian trading by computerized trading !!!!! • (2000++) No! … the worldwide collapse of academic degree programs via MOOCS !!!!!! • (2000++) No! … the accelerating collapse of the planetary biome from heat stress !!!!!!!! □ December 23, 2012 6:22 am Gee, I forgot: • (2000+) The collapse of single-investigator hypothesis-driven wet-bench biology by large-scale robotic survey experiments guided by efficient dynamical simulations. □ December 23, 2012 10:55 pm LOL … the Seattle Seahawks are beating the San Francisco 49ers so overwhelmingly, that it’s more fun to read Raymond Streater’s on-line list of Lost Causes in Theoretical Physics (he is the “Streater” of the celebrated “Streater and Wightman” field theory textbook PCT, Spin and Statistics, and All That). Is *your* favorite theoretical physics goal on Streater’s list? What challenges might plausibly appear in a Streater-style list of “Lost Causes in Mathematics”? ” … in Complexity Theory”? ” … in Quantum Computing”? 5. December 23, 2012 6:23 am You can obviously reduce the halting problem to the Kleene-Rosser paradox recognition problem. Thus, KRP is undecidable. Then you can write a trivial program to decide it. Hence, ZFC is inconsistent. This is the proof that made Richard Karp say it is far removed from his area of coverage, by Edward Blum as JCCS editor. Stephen Cook and Lance: Albert Meyer on behalf of them said “far-fetched and incomprehensible” as EiC of Information & Computation. Dick Lipton: Why the biggest names in theory had to run away. I understand it is harsh on them as they find meaning in life through mathematics. Now, mathematics is meaningless, so they have meaningless life. Is it so? or something else? Why do they have to take it like this? Rafee Kamouna. 6. December 23, 2012 7:15 am “Does Nature compute complexity-theoretic one-way functions?” 7. December 23, 2012 9:28 am I’ve been intrigued by the repeating emergence / hype / backlash cycle in areas of nonlinear mathematics, particularly with connections to biology. Examples include catastrophe theory, fractals, chaos theory, general systems theory, L-systems, and some of the more heady parts of cybernetics. Does anyone know if a historian of science has looked at some subset of these in a unifying analysis? 8. December 23, 2012 1:17 pm David D Lewis asks “I’ve been intrigued by the repeating emergence / hype / backlash cycle in areas of nonlinear mathematics, particularly with connections to biology … Does anyone know if a historian of science has looked at some subset of these in a unifying analysis?” David, your question has many good answers, and please let me commend the following articles in particular. When Gil Kalai posed his MathOverflow question “What is an integrable system?“, Gil quoted from Nigel Hitchin’s admirable first chapter to the book Integrable Systems: Twistors, Loop Groups, and Riemann Surfaces (1999) Introduction (by Nigel Hitchin) Integrable systems, what are they? It’s not easy to answer precisely. The question can occupy a whole book (Zakharov 1991), or be dismissed as Louis Armstrong is reputed to have done once when asked what jazz was—`If you gotta ask, you’ll never know!’ If we steer a course between these two extremes, we can say that integrability of a system of differential equations should manifest itself through some generally recognizable features: • the existence of many conserved quantities; • the presence of algebraic geometry; • the ability to give explicit solutions. The study of integrable systems is not just about cunning methods of solving isolated special equations.
Each equation is slightly different, indeed there are many of them: a trawl through a couple of standard books on the subject gives at least the following list of equations which are seriously considered to be related to integrability: [list of 49 named equations follows]. … Another task of the mathematician, apart from solving special equations, is to put some order into a universe like this. Is there some overarching structure of which these are special cases which explains integrability? For whatever reason, few libraries contain the book in which Hitchin’s essay appears (it languishes unreviewed at rank #3,555,481 in Amazon’s book sales). Fortunately, Amazon, Google, and Oxford Press provide free on-line previews that in aggregate encompass the entirety of Hitchin’s essay. What David’s question calls the cycle of “emergence/hype/backlash” is thoroughly surveyed in a book-length Physics Report by Martyushev and Seleznev (2006): Maximum entropy production principle in physics, chemistry and biology The notions of entropy and its production in equilibrium and nonequilibrium processes not only form the basis of modern thermodynamics and statistical physics, but have always been at the core of various ideological discussions concerned with the evolution of the world, the course of time, etc. These issues were raised by many outstanding scientists, including Clausius, Boltzmann, Gibbs and Onsager. As a result, today we have thousands of books, reviews and papers dedicated to properties of the entropy in different systems. The present review deals with the entropy production behavior in nonequilibrium processes. This topic is not new. What was the impetus to this study? […] Two essentially extreme opinions have been formed in the literature. Some scientists glorify the principle and think it is capable of describing various nonequilibrium processes to a certain extent. 
Other researchers, who observed weak points of the principle and unceasing efforts by Prigogine and his progeny to generalize it, are very skeptical about the possibility to formulate universal entropy principles, which would govern so diverse and dissimilar nonequilibrium processes. For a survey of more recent developments (there are many!) in this ongoing cycle of discourse relating to entropy, information, biology, cognition, and nonlinear dynamics, see Martyushev and Seleznev’s (shorter, and free!) recent preprint “Fluctuations, trajectory entropy, and Ziegler’s maximum entropy production” (arXiv:1112.2848). Note: Although the word “quantum” appears nowhere in arXiv:1112.2848 … it arguably should … because anywhere that fundamental questions arise that are associated to “fluctuations”, it is a safe bet that fundamental questions associated to “quantum dynamics” are nearby. And finally, for a survey of the societal aspects of this cycle, historian-of-science Lily Kay’s The Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology (1993) is highly recommended. Prof. Kay was good enough to supply our UW QSE Group with a file copy of Linus Pauling’s (unpublished) 1946 proposal to the Rockefeller Foundation “The possibilities for progress in the fields of biology and biological chemistry.” This makes fascinating reading! All of these references are at-hand because they bear upon (what seems to our UW QSE Group to be) the key issue of the Kalai/Harrow debate, namely the question: How does QM work — mathematically, physically, and thermodynamically — and what great 21st century objectives can we practically accomplish via our growing understanding of it?
For us the main virtue of modern-day QM/QIT/QSE research in general — and discourse like GLL’s Harrow/Kalai debate in particular — is the wonderful answers that it suggests to this strategically vital question, whose answer (arguably) will very largely determine the technological course of the 21st century. That was a fine question, DavidDLewis! :) 9. December 28, 2012 5:44 pm There’s a form feed character between “out to be, in the ” and “first hundred thousand” that’s causing the RSS feed to fail in my feed reader, Liferea. It’s an XML validation error, so this might affect other feed readers as well. □ December 29, 2012 2:44 pm I manually typed over the words in question. Did that fix the problem? I couldn’t see anything. ☆ December 29, 2012 4:43 pm Thanks! It works again now.
Tracyton Geometry Tutor Find a Tracyton Geometry Tutor ...I focus on the task at hand because I love to see it when understanding begins.I've been using algebra in my life and my studies for many years. I love teaching algebra in particular because a solid foundation in this subject is useful throughout life. I've completed coursework in college-level biology, microbiology, anatomy and physiology with a 4.0 gpa. 18 Subjects: including geometry, chemistry, biology, algebra 2 ...Sincerely, Mary AnnI enjoy tutoring Algebra 1, trying to make it interesting and easy to learn. I've helped many students with their math and improved their grades. If you don't understand something or can't solve an algebra problem, I can simplify it until you get it and solve it all by yourself. 13 Subjects: including geometry, Chinese, algebra 1, algebra 2 ...I'm also nearly fluent in Spanish, and would be happy to converse with students taking Spanish classes. I like to communicate plainly and simply, and have always enjoyed presenting material in a way that I find easy to understand, and like to approach the subject matter so that it becomes engagi... 39 Subjects: including geometry, Spanish, English, chemistry ...I approach the material in a patient, non-threatening manner, and try to accommodate the student's needs and time-table to the best of my ability. It is my goal to make math an interesting, if not enjoyable subject. I am detail oriented, and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience. 12 Subjects: including geometry, chemistry, algebra 1, algebra 2 My name is Susana and I am currently a PhD student in molecular and cellular biology at the University of Washington. Previously, I worked in an academic biology lab on the UC Berkeley campus, while tutoring math and science one-on-one to high school students. 
I also taught 7th and 8th grade science at a middle school in Oakland, CA through a program called Teach For America. 9 Subjects: including geometry, chemistry, biology, algebra 1
Math Horizons – April 2011 : Archimedes And The Parabola Among the many physical and mathematical results proved by Archimedes (287–212 BCE), one of his more interesting was the determination of the area of the parabolic segment, that portion of a parabola cut off by a chord. Each chord determines a unique parabolic segment. Each chord also determines a unique inscribed triangle: the triangle whose base is the chord and whose third vertex P is the point where the tangent line is parallel to the chord (see figure 1). Archimedes proved that for any chord, the area of the parabolic segment is precisely 4/3 the area of the inscribed triangle. So why is this result interesting? First, it's a result proved by one of the greatest mathematicians of all time, Archimedes of Syracuse, and it's always interesting to study the methods of the masters. And though he found the area under a curve without the use of the integral calculus, he employed some distinctly calculus-like techniques (which makes you wonder what Archimedes would have done had he been around in the 17th century, or later). Finally, it's fascinating that the area under the parabola, a curved figure, compared to the area of the inscribed triangle, yields a simple 4 to 3 ratio. It does not detract from the genius of Archimedes's proof if we reframe parts of it using modern mathematical notation; his underlying logic is still there. Hence, we begin with a modern definition of a parabola: it is the curve defined by the quadratic y = a0x^2 + b0x + c0, where a0 ≠ 0. This equation can be rewritten in the vertex form y = a0(x − h)^2 + k, which by a suitable rigid transformation can be transformed into y = b − ax^2, a downward-opening parabola whose axis of symmetry is the y-axis. This will make subsequent calculations easier. To determine the inscribed triangle MPR, where the tangent line at the vertex P is parallel to the chord MR, we make use of the Mean Value Theorem (MVT).
The MVT as applied to the parabola f(x) = b − ax^2 says that on the open interval (x1, x2) there is a point x3 such that f′(x3) = [f(x2) − f(x1)]/(x2 − x1). If we plug in expressions for the quadratic f and its derivative f′ we obtain −2a·x3 = −a(x1 + x2). Solving yields x3 = (x1 + x2)/2, the midpoint. This means that the tangent line at point P is parallel to the chord MR if and only if the "plumb-line" through P, which is parallel to the axis of symmetry (the y-axis), bisects chord MR at W. (See figure 2.) This is the first proposition used in Archimedes's proof. Proposition 1: From point P on a parabola, drop the straight line parallel to the axis of symmetry, meeting the chord MR at W. Then MR is parallel to the tangent at P if and only if W bisects the chord MR. Suppose next that chords NQ and MR are parallel, with midpoints V and W respectively. We will see that there is a quadratic relationship between the ratio of the lengths of the "plumb-line" segments PV and PW and the corresponding ratio of the "half-chords" QV and RW. This follows by direct calculation. Let M and R be the points (x1, b − ax1^2) and (x2, b − ax2^2); let N and Q be the points (x3, b − ax3^2) and (x4, b − ax4^2). The coordinates of P, V, and W are easily computed, the latter two using midpoint formulas. By direct calculation it follows that PV/PW = (QV/RW)^2. Thus the second proposition used by Archimedes: Proposition 2: From a point P on a parabola draw a straight line parallel to the axis of symmetry, meeting two chords NQ and MR that are parallel to the tangent line at P, at points V and W respectively. Then PV/PW = (QV/RW)^2. With these two propositions in hand, Archimedes had the tools to erect the scaffolding needed to prove the main result. As shown in figure 4, the inscribed triangle MPR, with its sides MP and PR as chords, determines two more inscribed triangles, MNP and PQR. Archimedes proved (see Proposition 3 below) that together these two triangles have one-fourth the area of MPR. Continuing on, for MNP there are two more inscribed triangles whose combined area is one-fourth that of MNP. The same is true for PQR.
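Proposition 1 is easy to check numerically; the short sketch below (not part of the original article; the constants are arbitrary) confirms that the tangent at the abscissa midway between a chord's endpoints has exactly the chord's slope:

```python
a, b = 2.0, 9.0                       # arbitrary parabola y = b - a*x^2
f = lambda x: b - a * x * x
df = lambda x: -2.0 * a * x           # its derivative

x1, x2 = -0.5, 2.0                    # endpoints of an arbitrary chord
chord_slope = (f(x2) - f(x1)) / (x2 - x1)
x3 = (x1 + x2) / 2.0                  # the midpoint abscissa from the MVT

print(df(x3), chord_slope)            # both equal -a*(x1 + x2)
```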
So with seven inscribed triangles, the area of the parabolic segment can be approximated by T(1 + 1/4 + 1/16), where T is the area of the original inscribed triangle MPR. This process can be extended indefinitely to obtain T(1 + 1/4 + 1/4^2 + 1/4^3 + ...). Summing the geometric series will yield Archimedes's result: the area of the parabolic segment is (4/3)T. To show the one-fourth claim we consider only the right half of the figure; that is, we'll show that the area of triangle PQR is one-eighth that of triangle MPR. Proposition 3: area(PQR) = (1/8) area(MPR). Proof (see figure 5): From points N and Q, drop parallel plumb-lines that by Proposition 1 bisect MP and PR at points X and Y, respectively. A computation with Proposition 2 and similar triangles (figures 5 and 6) shows that the plumb segment QY, from Q down to the midpoint Y of PR, has length (1/4)PW. Taking the vertical segments QY and PW as cevian bases, area(PQR) = (1/2)·QY·(horizontal extent of PR) and area(MPR) = (1/2)·PW·(horizontal extent of MR); since the horizontal extent of MR is twice that of PR, the ratio of areas is (1/2)·(1/4) = 1/8. Corollary: By a similar argument, area(MNP) = (1/8) area(MPR). Therefore, area(MNP) + area(PQR) = (1/4) area(MPR). Extending the inscribed triangle process indefinitely gives better and better approximations to the area, so that in the limit we have the result. However, Archimedes did not sum the series. Instead, he showed that the area of the parabolic segment could neither be greater than nor less than (4/3)T, leaving only the remaining possibility, that the area of the parabolic segment must equal (4/3)T. His method depended on showing that inscribed triangles could be used to get arbitrarily close to the area of the parabolic segment. In figure 7, it is seen that the area of the inscribed triangle is half the circumscribed parallelogram. Thus, the inscribed triangle is greater than half the parabolic segment. Since each added inscribed triangle accounts for over half the remaining area for that parabolic segment, it is possible to use 2^n − 1 inscribed triangles to construct an inscribed polygon with 2^n + 1 sides such that the difference between the area of the polygon and the parabolic segment is less than any prescribed amount. To see how this can be accomplished, note first that the area of the inscribed polygon, denoted by Tn, is given by the finite sum Tn = T(1 + 1/4 + ... + 1/4^(n−1)). Next, observe that the difference between Tn and (4/3)T is (4/3)T − Tn = (1/3)(1/4)^(n−1)·T, for if we add them the finite sum telescopes to (4/3)T. Now it's time to finish off Archimedes's argument! Let A be the area of the parabolic segment. Let Tn be the area of the (2^n + 1)-sided polygon formed from 2^n − 1 triangles inscribed under the parabola. Let en = (1/3)(1/4)^(n−1)·T, which can be made arbitrarily small by making n sufficiently large. 1. Assume A > (4/3)T, so the area of the parabolic segment is larger, and let d = A − (4/3)T > 0. Construct an inscribed polygon such that the difference between the area of the parabolic segment and the polygon is less than d; that is, A − Tn < d, or Tn > A − d = (4/3)T. But Tn = (4/3)T − en < (4/3)T. Contradiction! 2. Assume A < (4/3)T, so the area of the parabolic segment is smaller, and let d = (4/3)T − A > 0. Choose a value of n such that en < d. Then Tn = (4/3)T − en > (4/3)T − d = A. But an inscribed polygon cannot have area exceeding that of the parabolic segment. Contradiction! Therefore, since both A > (4/3)T and A < (4/3)T are false, A = (4/3)T must be true; Archimedes's result follows. Further Reading Archimedes's full proof can be found in The Works of Archimedes, by Thomas L. Heath (Dover Press, 1953). Over the course of 24 propositions, Archimedes actually gives two proofs for the area of the parabola, a mechanical one (Proposition 17) and a mathematical one (Proposition 24). Proofs of his first three propositions, which include the result obtained from the MVT and the plumb-line and half-chord squared ratio result, are not given; Archimedes simply states "and these propositions are proved in the elements of conics," referring to treatises on conics by Euclid and Aristaeus. Exercise 1: Using the integral calculus, verify Archimedes's result by showing that the area between the curve y = b − ax^2 and the x-axis is 4/3 times the area of the inscribed triangle.
Exercise 2: Verify Archimedes’s result using the area between the parabola y = 4 – x2 and the line y = 2 – x. Use the MVT to find the vertex of the inscribed triangle and determine its altitude by constructing a line perpendicular to the base of the triangle. About the author: Brian J. Shelburne is an associate professor of mathematics and computer science at Wittenberg University in Springfield, Ohio. email: bshelburne@wittenberg.edu
The realm of infinite possibility When I was in college, I imagined something I called the realm of infinite possibility. It contained every word, every note, every everything, in every possible combination. I liked to think about this when I needed to write a paper for class. Writing was always a struggle, but somewhere in the realm of infinite possibility, the ideal paper already existed. I just needed to discover it. I found this idea comforting. It didn't occur to me until years later that, if the possibilities are truly infinite, there's more than one version of whatever it is you're trying to create. I realized this after I started writing songs. The process is so random. For instance: the song I'm currently working on has a melody and a basic sense of what it will be about, but almost nothing in the way of actual lyrics. Tonight, while taking a shower, I thought of a few lines for the last verse. They involved hiding from the world by locking myself in the bathroom and taking a bath. Obviously, my immediate environment inspired the lines. If I had been somewhere else, or if I hadn't been thinking about the song at that moment, the last lines would have turned out completely different. Some physicists believe there are alternate universes for every potential happening. In this universe I stopped at Trader Joe's after rehearsal, but in some alternate universe I went straight home. If it were possible to observe myself in the alternates, I'd spend half my time thinking, "Wait, the song isn't supposed to go like that," or, "Shoot, that's better than the way I wrote it." I find this idea disconcerting. When I write a song, I try to chase down the best lyrics possible. If some other me could think of something better, or even just as good, how do I know when to declare myself satisfied with what I've done? This is the point where I decide not to think about it anymore. 5 comments: Cool thought on those alternate universes, they are infinite, I guess?
I try not to edit myself at all when a new song is coming in, just let as much flow as possible. Then my real work begins when I tell myself it's time to take all the brainstorming and hone. There are only a few songs, even after they're recorded, that I think are done. I probably need to get over that! :) I love how bathing can let ideas in, back to the pre-birth us, I suppose all cocooned in water. :) Hope all is awesome in your realm! My realm now contains Singing for Dummies, thank you! I think my head would explode if I tried to make myself not edit. But yeah, the shower is a great place to think. It's also a great place to not think. This is why it's impossible to take a shower that's less than 15 minutes long. :) This reminds me of an Onion-esque article I once wrote based on two premises: first, that any piece of information, text, audio, video, etc., can be mapped to a set of numbers by any of an essentially infinite number of algorithms (this is what computers do, except they use one of only a few standardized algorithms), and second, that any random string of numbers has a 1 in X chance of being mappable to some given piece of information (a million monkeys/Shakespeare). If you have an infinite string of numbers, there is a 100% chance that somewhere in there is every piece of information that ever has or ever will exist; you just have to figure out which ones. So in the article, the world was in a mad rush to find, for instance, nude pictures of famous historical figures in the digits of Pi, academic papers on future technology in the digits of the square root of two, etc. So, start calculating "e" (2.718....), your next song is in there somewhere if you just find the right algorithm for teasing it out. ...and even if you don't find a song, you might stumble upon the FTL jump coordinates for Earth. I've wondered if there was a way to compute, for really real, the probability that two people would write a song with the same melody and the same lyrics.
When unknown songwriters have accused famous songwriters of plagiarizing their stuff, the defense has been that the famous songwriter simply came up with song independently. I'd love to know the
A better solution to the 12 identical pool balls problem A friend of mine told me this problem, and I worked out a solution. Then I looked up the solution on this website. I like my solution much better! The Problem: You have 12 balls identical in size and appearance but 1 is an odd weight (could be either light or heavy). You have a set of scales (balance) which will give 3 possible readings: Left = Right, Left > Right or Left < Right (i.e. Left and Right have equal weight, Left is Heavier, or Left is Lighter). You have only 3 chances to weigh the balls in any combination using the scales. Determine which ball is the odd one and if it's heavier or lighter than the rest. How do you do it? My Solution: Label the balls 1-12 First Weighing: Left: 1 2 3 4 Right: 5 6 7 8 Off: 9 10 11 12 Record the heavier side (L, R, or B) Second Weighing: Left: 1 2 5 9 Right: 3 4 10 11 Off: 6 7 8 12 Record the heavier side (L, R or B) Third Weighing: Left: 3 7 9 10 Right: 1 4 6 12 Off: 2 5 8 11 Record the heavier side (L, R, B) There are 27 (3^3) possible combinations of scale readings. A complete sorted list of the scale readings appears below. Note that only 24 of the 27 readings should be possible given the original problem statement. The algorithm was designed so that if all three scale readings are the same, an error is flagged indicating that the scale is stuck. BBB Error! There is not a single light or heavy ball (or scale is stuck). BBL Ball #12 is light BBR Ball #12 is heavy BLB Ball #11 is light BLL Ball #9 is heavy BLR Ball #10 is light BRB Ball #11 is heavy BRL Ball #10 is heavy BRR Ball #9 is light LBB Ball #8 is light LBL Ball #6 is light LBR Ball #7 is light LLL Error! Scale is stuck!
LLB Ball #2 is heavy LLR Ball #1 is heavy LRB Ball #5 is light LRL Ball #3 is heavy LRR Ball #4 is heavy RBB Ball #8 is heavy RBL Ball #7 is heavy RBR Ball #6 is heavy RLB Ball #5 is heavy RLL Ball #4 is light RLR Ball #3 is light RRB Ball #2 is light RRL Ball #1 is light RRR Error! Scale is stuck! Re: A better solution to the 12 identical pool balls problem What a neat approach ... I haven't checked your answer, but the approach looks sound ! I will put it as part of official solution when I get a chance. Should I credit "cnaumann" ? "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: A better solution to the 12 identical pool balls problem Thanks! I am glad you liked my solution. You can credit Charles Naumann. Re: A better solution to the 12 identical pool balls problem I have added your solution here. "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Super Member Re: A better solution to the 12 identical pool balls problem wow ur famous I come back stronger than a powered-up Pac-Man I bought a large popcorn @ the cinema the other day, it was pretty big...some might even say it was "large Re: A better solution to the 12 identical pool balls problem Hello. I'm a new guy in here. Mr. Cnaumann 's solution is so impressive and systematic. Bravo ! I figured out a solution several months ago, although less nimble but...I tried my best : Weigh (1st time) 4 balls against other 4 (choose randomly). IF THE SCALE NOT BALANCED: the last 4 balls (not on the scale) are normal ones. Name the 4 balls on heavier side A,B,C,D. If the odd ball is in group ABCD, it must be heavier than normal one. Name the 4 balls on lighter side E,F,G,H. If the odd ball is in group EFGH, it must be lighter than normal one. Weigh (2nd time) : (N is a normal ball) A B N ~ C D E 1a/ If ABN = CDE : the odd ball is in group F,G,H. 
Weigh F against G (3rd time): If not balanced, the lighter one is the odd ball. If balanced, H is the odd ball (and is lighter). 1b/ If ABN ≠ CDE : 1b1/ If the scale tipped towards A,B,N side (heavier): the odd ball is in group A,B,E. Weigh A against B (3rd time) : If not balanced, the heavier one is the odd ball. If balanced, E is the odd ball and lighter. 1b2/ If the scale tipped towards C,D,E side : the odd ball is in group C,D. Weigh C against D. The heavier one is the odd ball. IF THE SCALE BALANCED (at the 1st weighing) : these 8 balls are normal ones. Name the last 4 balls A,B,C,D. Weigh (2nd time) : A N ~ C D 1a/ If not balanced (AN≠ CD) : the odd ball is in group A,C,D. 1a1/ If the scale tipped towards A,N side : If the odd ball is A, it must be heavier than normal one. If the odd ball is C or D, it must be lighter than normal one. Weigh C against D (3rd time): If not balanced, the lighter one is the odd ball. If balanced, A is the odd ball and heavier. 1a2/ If the scale tipped towards C,D side : If the odd ball is A, it must be lighter than normal one. If the odd ball is C or D, it must be heavier than normal one. Weigh C against D (3rd time) : If not balanced, the heavier one is the odd ball. If balanced, A is the odd ball and lighter. 1b/ If balanced (AN=CD) : B is the odd ball. Weigh B against a normal ball to know if it is lighter or heavier than normal one. Last edited by tt (2005-07-05 09:33:46) Re: A better solution to the 12 identical pool balls problem Hi tt, and welcome to the forum That is a good solution, and would look good on TV, I reckon. Hero says "you are going to weigh each one? Harumph!" *throws four on one side, four on the other* *switches a few balls around* *switches one more time* "Here- this one!" "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: A better solution to the 12 identical pool balls problem It helps if you have skills with balls.
(j/k) Re: A better solution to the 12 identical pool balls problem Oh yes, I love billiards (ahem) "The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: A better solution to the 12 identical pool balls problem What does tt stand for, tt? Or isn't it meant to stand for anything? Re: A better solution to the 12 identical pool balls problem It's short for my email address. Super Member Re: A better solution to the 12 identical pool balls problem Aha, I see. Heeheeheeheeheeheehee....... School is practice for the future. Practice makes perfect. But - nobody's perfect, so why practice? Super Member Re: A better solution to the 12 identical pool balls problem "tt" stands for "Too true", actually. Boy let me tell you what: I bet you didn't know it, but I'm a fiddle player too. And if you'd care to take a dare, I'll make a bet with you. Re: A better solution to the 12 identical pool balls problem hello. i'm sorry to say it, but the neat answer isn't as new as you would expect... it has been published in at least five books (however, it isn't very popular on the Internet, if it makes you feel better). the problem with this solution is that the rules of inference cannot be observed as clearly as in the ``classical'' solution. it would be quite good if someone (the author?) explained what is the rule of putting the balls on the left and on the right throughout the 3 weighings. Last edited by rszopa (2005-10-05 08:09:28) Re: A better solution to the 12 identical pool balls problem Mathematics is like that ... Pythagoras was not the first to discover the "Pythagoras" Theorem, for example! Full Member Re: A better solution to the 12 identical pool balls problem who can juggle 12 balls? Re: A better solution to the 12 identical pool balls problem I can, if they are all placed in a bag!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman Re: A better solution to the 12 identical pool balls problem Twelve!? I can't even juggle three properly. Can you juggle, wcy? Re: A better solution to the 12 identical pool balls problem Instead of dividing it into 3 groups...why not into 4 groups of 3 each (1) 3 (4) 3 1) (1) vs (2) - if equal weight (1) vs (3) - from this attemp you will find which has the odd one and you will find in which group that odd one is Now you got one group having 3 balls..in which the odd ball is now 1 ball vs 1 ball ...solution found in 3 attempts Re: A better solution to the 12 identical pool balls problem Hi koundinya17; Welcome to the forum! That is a solution! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
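cnaumann's fixed-schedule solution above lends itself to a mechanical check: simulate the three weighings for all 24 possible cases (12 balls, each either heavy or light) and confirm that every case yields a distinct reading. A short sketch:

```python
# The three fixed weighings from cnaumann's solution: (left pan, right pan).
WEIGHINGS = [
    ({1, 2, 3, 4}, {5, 6, 7, 8}),
    ({1, 2, 5, 9}, {3, 4, 10, 11}),
    ({3, 7, 9, 10}, {1, 4, 6, 12}),
]

def reading(ball, heavy, left, right):
    # Which side goes down when `ball` is the odd one out?
    if ball in left:
        return "L" if heavy else "R"
    if ball in right:
        return "R" if heavy else "L"
    return "B"                      # odd ball is off the scale: balanced

signatures = {}
for ball in range(1, 13):
    for heavy in (True, False):
        sig = "".join(reading(ball, heavy, l, r) for l, r in WEIGHINGS)
        signatures[sig] = (ball, "heavy" if heavy else "light")

assert len(signatures) == 24        # all 24 cases give distinct readings
assert signatures["LLR"] == (1, "heavy")     # spot-checks against the table
assert signatures["BBL"] == (12, "light")
# BBB, LLL and RRR never occur, which is why the table flags them as errors.
assert not {"BBB", "LLL", "RRR"} & signatures.keys()
```

The same simulation idea also verifies adaptive solutions like tt's: enumerate the 24 cases, follow the branching instructions, and check that each case ends at the correct answer.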
last independence and RV question November 4th 2008, 11:30 PM #1 I want to show that if Y = Z/X is independent of both Z and X, where X, Z > 0, then Y is a constant. The hints I got are to consider two expressions, both of which equal 0; then to define a new variable U = 1/Y, which is supposed to help somehow; and then to consider V(Y). I need help in understanding how to tie these hints together to get the final answer (V(Y) = 0).
Winthrop Harbor Algebra 1 Tutor Find a Winthrop Harbor Algebra 1 Tutor ...This course starts out with basic properties of operations such as associative, distributive, and many more. The course evaluates expression using an order of operations known as PEMDAS. Also, it has review of natural numbers, arithmetic operations, integers, fractions, decimals and negative nu... 11 Subjects: including algebra 1, calculus, algebra 2, trigonometry ...I am deeply passionate about the Classics, and strongly believe that learning Latin and Greek is a great way for students to gain an understanding of Greek and Roman culture. As a tutor, I hope to share my love of Latin and Greek with others! --------------------- In addition to tutoring Latin... 20 Subjects: including algebra 1, English, reading, ESL/ESOL ...I also have particular expertise in tutoring students in mathematics, up through trigonometry, and at all levels of English - having served as a Mathematics Coach in the Chicago Public Schools, and as a reporter and copy editor in my first career for The Daily Southtown Economist in Chicago, now ... 20 Subjects: including algebra 1, reading, English, writing I have over 10 years of experience teaching math and English in the public, private, and international school setting. I've taught all levels of math in the middle school and high school levels. As an experienced classroom teacher (now a stay-at-home mom to two little ones), I know the struggles and triumphs that students face in the classroom. 8 Subjects: including algebra 1, geometry, algebra 2, trigonometry ...My favorite part about teaching is watching a student's confidence grow as they realize they are capable of learning. I love when the light bulb clicks!As a Resource Teacher, I have successfully taught students with learning disabilities math. Before my current job, I was a Teachers Associate for 4 years in Glencoe and Deerfield. 18 Subjects: including algebra 1, reading, writing, grammar
Snellville ACT Tutor Find a Snellville ACT Tutor ...I have tutored ACT math questions to high school aged students for 3 years. I received vocal training at Emory University as part of the voice major program. I have also participated Emory's student choir group all 4 years of college, and I've been a part of a community choir for 3 years. 30 Subjects: including ACT Math, reading, Chinese, algebra 1 I graduated from Clemson University in December 2011. I majored in electrical engineering and currently work in the power industry. My love for math has grown since grade school which prompted me to take all of the math courses that I could in college. 14 Subjects: including ACT Math, calculus, geometry, algebra 1 ...I specialize in statistics, however I am experienced in a wide range of math subjects as well as business subjects (finance, accounting, Microsoft software), reading and English. I have worked in 2 colleges as a CRLA certified tutor and as a private tutor. I have received excellent ratings, reviews and referrals for professionalism, cordiality and instruction. 23 Subjects: including ACT Math, reading, English, calculus ...I also have experience with special needs students. I have done after school tutoring for 3 years for 2nd and 3rd grade students who struggle. I currently teach Special Education in Gwinnett 12 Subjects: including ACT Math, reading, writing, grammar ...I have an undergraduate degree in this subject. I do still study the topics to keep the information fresh in my head. Took the actual test when I considered joining the military. 
29 Subjects: including ACT Math, chemistry, reading, physics Nearby Cities With ACT Tutor Alpharetta ACT Tutors Buford, GA ACT Tutors Decatur, GA ACT Tutors Duluth, GA ACT Tutors Dunwoody, GA ACT Tutors Grayson, GA ACT Tutors Johns Creek, GA ACT Tutors Lawrenceville, GA ACT Tutors Lilburn ACT Tutors Loganville, GA ACT Tutors Norcross, GA ACT Tutors Roswell, GA ACT Tutors Stone Mountain ACT Tutors Tucker, GA ACT Tutors Woodstock, GA ACT Tutors
Dynamic fluid–structure interaction using finite volume unstructured mesh procedures Slone, A.K., Pericleous, K., Bailey, C. and Cross, M. (2002) Dynamic fluid–structure interaction using finite volume unstructured mesh procedures. Computers & Structures, 80 (5-6). pp. 371-390. ISSN 0045-7949 (doi:10.1016/S0045-7949(01)00177-8) Full text not available from this repository. A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using two dissimilar methods, thus a single comprehensive computational model of both phenomena is a considerable challenge. Until recently work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807] employing FV-UM methods for solving the Euler flow equations and a conventional finite element method for the elastic solid mechanics and the spring based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: • a single mesh covering the entire domain, • a Navier–Stokes flow, • a single FV-UM discretisation approach for both the flow and solid mechanics procedures, • an implicit predictor–corrector version of the Newmark algorithm, • a single code embedding the whole strategy. 
How many numbers in a FedEx tracking number? There are always 15 numbers in a FedEx tracking number. If you ship items a lot, the first 11 numbers will not change each time you make a shipment. How many digits does a MoneyGram control number have? 8. How many numbers in a MoneyGram tracking number? 8 numbers. See related link below for more info. How many digits is the reference number for Western Union? 10.
Homework Help Posted by robbie on Saturday, July 23, 2011 at 5:48pm. Two angles are complementary. One of the angles is twice the other. The larger angle has a measure of how many degrees? • math - Ms. Sue, Saturday, July 23, 2011 at 5:53pm Solve for x. Then multiply it by 2. x + 2x = 90 • math - Jen, Saturday, July 23, 2011 at 6:45pm x + 2x = 90, so you would divide both sides by 2x, which gives x = 45. So you would then assume that both angles measure 45 degrees each, which would equal 90 degrees, because complementary angles sum to 90 degrees. Hope this helps! • math - Ms. Sue, Saturday, July 23, 2011 at 6:50pm Jen -- you did not read the problem carefully! One of the angles is twice the other. • math - Ms. Sue, Saturday, July 23, 2011 at 7:03pm Also x + 2x does not equal 2x. • math - Anonymous, Saturday, July 23, 2011 at 11:34pm 3x = 90°. Divide both sides by 3: x = 30°. The larger angle has a measure of 2(30°) = 60°. Related Questions math - Two angles are complementary. One of the angles is twice the other. The ... Geometry. Please help. - 1) Two angles are complementary. The measure of one ... Math - The measure of one of two complementary angles is 21 less than twice the ... Algebra - Two angles are complementary. The sum of the measure of the first ... Algebra - Two angles are complementary. The sum of the measure of the first ... Math - Two angles are complementary. The sum of the measure of the first angle ... math - I posted this early, I don't understand, Two angles are complementary. ... algebra - two angles are complementary. the sum of the measure of the first ... math - Two angles are complementary. One angle is twice the other. The largest ... Math - The measure of two complementary angles are 6y+3 and 4y-13. Find the ...
Binomial Expansion August 6th 2010, 08:11 AM #16 Here's how to cheat... If the terms start from $x^0$ upwards, then there is a term free of x if we multiply the left terms only. The lowest power of x is obtained by multiplying the first four 6's, then by -13x... the middle term in the final factor. The next highest power of x is found by multiplying the first four 6's by the 3rd term in the last factor. Then you need to count the number of ways that these calculations can be replicated. Finally (-13x)(-13x) will also give $x^2$ terms. Of course, it takes too long!
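The counting shortcut above can be brute-force checked. The thread doesn't quote the original product, but the hints (five identical factors whose terms are 6, -13x, and a third term in x^2) fit (6 - 13x + 6x^2)^5, which is assumed here purely for illustration:

```python
def poly_mul(p, q):
    # Multiply two polynomials given as ascending coefficient lists.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

factor = [6, -13, 6]            # assumed factor: 6 - 13x + 6x^2
result = [1]
for _ in range(5):              # raise it to the 5th power
    result = poly_mul(result, factor)

# x^1: pick -13x from one of the 5 factors and 6 from the other four.
assert result[1] == 5 * 6**4 * (-13)
# x^2: either one 6x^2 term (5 ways), or (-13x)(-13x) from C(5,2) = 10 pairs.
assert result[2] == 5 * 6**4 * 6 + 10 * 6**3 * (-13)**2
```

The same multiplicity counting works for any power of any trinomial; only the multinomial coefficients change.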
Form a polynomial f(x) Number of results: 19,683 Form a polynomial f(x) from coefficients and its zeros Form a polynomial, f(x), with real coefficients having the given degree and zeros. Degree: 4; zeros: 6i and 7i I completely don't know what to do with this problem... if someone can solve and give a good explanation, I'd appreciate it. Thanks. Monday, May 30, 2011 at 12:01am by Armando college algebra find all the zeros of the following polynomial. write the polynomial in factored form. f(x)=x^2-256 Thursday, November 15, 2012 at 4:46pm by ladybug College Algebra Find all zeros of the following polynomial. Write the polynomial in factored form. f(x) = x^4 - 256 Monday, August 6, 2012 at 1:18pm by Kameesha find all zeros of the following polynomial. write the polynomial in factored form. f(x)=x^3-3x^2+16x-48 Saturday, December 22, 2012 at 9:07pm by ladybug find the polynomial degree 4 zeros i & (1+i) constant term 12 How do I start this problem. Thanks A degree four polynomial will have the form (x^4 +... x + c), where c is a constant. You will need to generate an equation that has the above form, using c=12. Solve for the roots [i & (1+i)]. Thursday, February 1, 2007 at 12:34pm by Jen Form a polynomial f(x) Form a polynomial f(x) with the real coefficients having the given degree and zeros. Degree 5; Zeros: -3; -i; -6+i f(x)=a( ) Tuesday, March 26, 2013 at 7:11pm by Ashley Form a polynomial f(x) Form a polynomial f(x) with real coefficients having the given degree and zeros.
Degree 5; Zeros: -3; -i; -6+i F(x)=a ( ) Wednesday, March 27, 2013 at 8:09am by Hailey Form a polynomial Form a polynomial f(x) with real coefficients having the given degree and zeros Degree 5; zeros: -8; -i; -8+i Wednesday, March 27, 2013 at 10:33am by Alley 1.binomial 2.degree of monomial 3.monomial 4.perfect-square trinomial 5.standard form of polynomial A.a polynomial with two terms B.a polynomial in which the terms decrease in degree from left to right and there are no like terms C.a polynomial with two identical binomial ... Wednesday, May 12, 2010 at 10:10pm by ed Is a polynomial in standard form equal to a polynomial not in standard form? Tuesday, February 17, 2009 at 4:58pm by Angie calculus-can someone please help me with this ques Find all zeros of the following polynomial. Write the polynomial in factored form. f(x)=x^3-3x^2+16x-48 I put: x^2(x-3)+16(x=3) (x-3)(x^2+16) For zeros: x-3=0 x=0 **My teacher stated check the equation solution again. What is the value for x and hence what is the zero for the ... Thursday, December 27, 2012 at 5:08pm by allison--Please help!! Algebra 2 1. Write a polynomial in standard form that has solutions: 0, -2, 3 2. Write a trinomial that has a degree of 4 and a lead coefficient of -3 3. True or false: 3/x^2 is a polynomial expression Saturday, January 12, 2008 at 8:02pm by Christian Form a polynomial f(x) What would be the polynomial? Tuesday, March 26, 2013 at 7:11pm by Chama find all the zeros of the following polynomial. Write the polynomial in factored form. Show all the work. f(x)= x^4+3x^2-40 Wednesday, December 26, 2012 at 9:33pm by phyllys Factor the polynomial as the product of factors that are irreducible over the real numbers. Then write the polynomial in completely factored form involving complex nonreal or imaginary numbers. x^4 + 20x^2 -44=0 Tuesday, January 31, 2012 at 7:38pm by Chelsea Through n + 1 points there passes a polynomial of degree n. In this case you have 7 points. The polynomial will be degree 6.
Write the polynomial in this form : a * x ^ 6 + b * x ^ 5 + c * x ^ 4 + d * x ^ 3 + e * x ^ 2 + f * x + g = y Now put values of x and y in this equation. a * ( - 3... Friday, July 5, 2013 at 10:19pm by Bosnian college algebra form a polynomial f(x) with real coefficients having the given degree and zeros degree 4 zeros 5+3i;3 multiplicity 2 enter the polynomial f(x)=a?() Monday, August 27, 2012 at 12:48am by ash Form a polynomial f(x) with real coefficients having the given degree and zeros Degree 5; Zeros: 2; -i;-7+i Enter the polynomial f(x)=a(____) type expression using x as the variable Saturday, June 8, 2013 at 11:25pm by Sharday form a polynomial f(x) with real coefficients having the given degree and zeros. Degree 4; zeros:-3 +5i; 2 multiplicity 2 enter the polynomial f(x)=a(?) Tuesday, November 20, 2012 at 10:57pm by Heather can someone please help me find all the zeros of the following polynomial. write the polynomial in factored form. f(x)=x^3-3x^2+16x-48 can you please show all work Monday, December 24, 2012 at 9:45pm by Josh form a polynomial f(x) with real coefficients having the given degree and zeros. degree 5; zeros -7; -i;-9+i enter the polynomial. f(x)=a(?) Monday, October 1, 2012 at 11:23pm by ladybug Math:Polynomial Functions That works, but as a trigonometric function and not a polynomial function. In the definition of a polynomial function, the right hand side must be a polynomial, as defined above. Tuesday, September 8, 2009 at 9:48pm by MathMate Numerical Analysis x 0 2 3 4 y 7 11 28 63 a) Using these values, use the Lagrange interpolation process to obtain a polynomial of least degree. b) Rearrange the points in the table of a) and find the Newton form of the interpolating polynomial. Show that they are the same even though their forms... Monday, October 6, 2008 at 11:04am by Julie
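The interpolation question just above has a concrete answer that is easy to verify. A sketch of the Lagrange process on Julie's table (x: 0, 2, 3, 4; y: 7, 11, 28, 63); the Newton (divided-difference) form expands to the same least-degree polynomial, which works out to x^3 - 2x + 7:

```python
def lagrange(xs, ys, t):
    # Evaluate the Lagrange interpolating polynomial at t.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (t - xj) / (xi - xj)
        total += term
    return total

xs, ys = [0, 2, 3, 4], [7, 11, 28, 63]

# The degree-3 interpolant agrees with x^3 - 2x + 7 at more than four
# points, and two cubics that agree at four points are identical, so
# that cubic is the least-degree polynomial through the table.
for t in (0, 1, 2, 2.5, 3, 4, 10):
    assert abs(lagrange(xs, ys, t) - (t**3 - 2 * t + 7)) < 1e-9
```

This also answers part b): however the points are ordered, the interpolating polynomial of least degree through them is unique, so the Newton form must expand to the same cubic.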
P(x) = 1/2x^4 - 9/2x^3+21/2x^2+1/2x-15 = 1/2 (x+1)(x-2)(x-3)(x-5) State the behavior at the ends: (up/down) at the ... Tuesday, July 21, 2009 at 10:10am by Tammie Can someone show me how to do this? I got 5 more problems of the same kind. Thannk! Consider the polynomial P(x), shown in both standard form and factored form. P(x) = 1/2x^4 - 9/2x^3+21/2x^2+1/2x-15 = 1/2 (x+1)(x-2)(x-3)(x-5) State the behavior at the ends: (up/down) at the ... Tuesday, July 21, 2009 at 6:40pm by Tammie 8th Grade Algebra Multiply. 1) (3t^2 - 2t - 4) * (5t + 9) Writing. 1) Explain why the product of a quadratic polynomial and a linear polynomial must be a cubic polynomial. Tuesday, May 5, 2009 at 10:40pm by Bridget If t were linear it would be of form y = m x + b if it were constant it would be of form y = b it is in fact a cubic polynomial. By the way we usually write exponents as: y = x^3 - x^2 + 8 Monday, June 9, 2008 at 3:09pm by Damon zeros of a polynomial polynomial is x^3 - x^2 -11x + 15 If 3 is the zero of the polynomial, when you divide this polynomial with x-3, the remainder has to be 0? yes. If y=x^3 - x^2 -11x + 15, when y=0 the roots are on the Thursday, February 1, 2007 at 6:24pm by Jen Algebra 2 I am trying to factor a 4th degree polynomial that does not have any rational roots. I need to somehow get it factored into two quadratics. Anyone know of a method to use. 3x^4 - 8x^3 - 5x^2 + 16x - 5 Two of the irrational roots are 1.135.. and 0.382.. but that won't help you ... Friday, March 9, 2007 at 12:46am by Brenda Math ( Polynomial ) If the polynomial, x^4 - 6x^3 + 16x^2 - 25x + 10 is divided by another polynomial x^2 - 2x + k, the remainder comes out to be x + a, find k + a, please work the complete solution instead of giving simply an answer. Saturday, April 13, 2013 at 11:04am by Jack Form a polynomial f(x) So then a=0? Tuesday, March 26, 2013 at 7:11pm by Chama Find a polynomial f(x) with leading coefficient 1 and having the given degree and zeros. 
Each polynomial should be expanded from factored form, simplified and written in descending order of exponents on the variable. For example: (x+5)(x-2) should be given as the answer x^2 + ... Tuesday, April 12, 2011 at 11:09am by marissa Factor the polynomial x^2 + 3x – 18. Which one of the following is a factor? Factor the polynomial 12x^2 + 11x – 5 completely. Factor the polynomial 18a^2b – 4ab + 10a completely. Factor the polynomial 2x^2 – 50 completely. Wednesday, November 9, 2011 at 12:51pm by jesus The quartic polynomial you are looking for has the form kΠ(x-r_i) where the product is taken over the order of the polynomial (4 in the given case) and r_i is the i-th zero. k is a constant of multiplication, and is reflected in the coefficient of the highest order term. Among ... Wednesday, July 22, 2009 at 11:12pm by MathMate can someone please help figure this problem out Determine whether each expression is a polynomial. If it is a polynomial, state the degree of the polynomial. 5x^3+2xy^4+6xy can someone please help me Monday, April 28, 2008 at 8:39pm by meika algebra!!!! please help me!!!! A number is called algebraic if there is a polynomial with rational coefficients for which the number is a root. For example, √2 is algebraic because it is a root of the polynomial x^2−2. The number √(2+√3+√5) is also algebraic because it is a root... Wednesday, May 22, 2013 at 9:08am by bob Form a polynomial f(x) If the zeroes are a1, a2, a3, etc. then the polynomial is: p(x) = A (x-a1)(x-a2)(x-a3)... If all the coefficients are real, then the zeroes come in pairs of complex conjugates. So, in this problem it is given that -i is a zero, and then it follows that i is also a zero. Tuesday, March 26, 2013 at 7:11pm by Count Iblis Yes, this works. You may be expected to expand the terms to the standard polynomial form. Note that the last two factors are of the form (x-a)(x+a) which expands conveniently to two terms. Wednesday, July 22, 2009 at 7:38pm by MathMate Rational Zero Theorem Also, if there are many possible candidates like in this case, you should not try them all out. Instead, you should proceed as follows. Let's denote the polynomial by P(x). Suppose y is a possible rational root. But then we check if y is a root and see that P(y) is not equal ... Tuesday, January 13, 2009 at 12:32am by Count Iblis algebra 3 The polynomial would have to have 4 zeros, meaning it would have to be a polynomial of the 4th degree. The general form for a polynomial of the 4th degree with zeros a, b, c and d would be: f*(x-a)*(x-b)*(x-c)*(x-d) where f is a random real number (let's take this to be 1 in ... Thursday, September 24, 2009 at 8:48pm by Christiaan Graphs of polynomial functions What do polynomial functions look like? And what can be considered a polynomial function? Would a graph that is like an upside down V be considered as a graph of a polynomial function? Tuesday, October 9, 2007 at 10:37pm by Jessi What is a polynomial function in standard form with zeroes 1,2,-3,and -3 Sunday, November 18, 2012 at 10:02am by lee Algebra 2 What is a polynomial function in standard form with zeros 1, 2, 3, and –3? Sunday, December 8, 2013 at 10:27pm by Algebraic! Math:Polynomial Functions h(x) = -7x I was thinking that this was not a polynomial function because -7x is just one term making it a monomial but the answer key says it is a polynomial function. Could someone explain why that is so? Thanks :D Tuesday, September 8, 2009 at 9:48pm by Lena What is cubic polynomial function in standard form with zero 1, -2, and 2 Friday, November 9, 2012 at 9:35pm by lee write a polynomial in factored form x^3-3x^2-10x Monday, November 12, 2012 at 9:48am by jay divide the polynomial x^3-3x^2+x+2 by x+2. Express your answer in the form f(x)=d(x)q(x)+r(x). Monday, November 25, 2013 at 4:17pm by priya calculus--please help!! I have two questions that I don't understand and need help with. 1. information is given about a polynomial f(x) whose coefficients are real numbers. Find the remaining zeros of f. degree 4, zeros i; 9+i 2. form a polynomial f(x) with real coefficients having the given degree ... Thursday, December 27, 2012 at 7:07pm by Paul college algebra form the polynomial given, degree 4 and zeros: i, 1+2i Wednesday, June 17, 2009 at 5:29pm by Anonymous What is the complete factor form of the polynomial? -32a^6-28a^5-36a Thursday, February 3, 2011 at 3:08pm by Lamonica Rose Let p(x)=x^3+4x^2+x-6. Find a polynomial g(x) and a remainder r, such that p(x) can be expressed in the form p(x)=(x-2)g(x)+r Tuesday, September 4, 2012 at 4:53pm by Jeconiah Algebra 2 Write a polynomial function in standard form with the given zeros: x=2,-2,4 Monday, January 7, 2013 at 11:41am by Alex algebra1 HELP PLEASE Thank you!!
Write a polynomial function in standard form with zeros at 6, -4, and 1 Wednesday, July 10, 2013 at 10:31am by lisa Evaluate the polynomial for x=2 and x = -2. 3x^5-1/4x^2 The value of the polynomial for x=2 The value of the polynomial for x=-2 Monday, October 14, 2013 at 5:12pm by Anonymous Pre Cal Due to my child being injured I missed this lecture at class. I am so lost now; I read the chapter books but let's face it, they don't make those in English. Write a function in factored form, then in expanded polynomial form, having as zeros 2, -1 and 1 (plus and minus) square root of... Sunday, March 9, 2008 at 12:35am by Deb Algebra 2 1. Write a polynomial in standard form that has solutions: 0, -2, 3: x(x+2)(x-3) Expand this to write it in standard form. Saturday, January 12, 2008 at 8:02pm by Count Iblis Algebra Question~! Suppose you divide a polynomial by a binomial. How do you know if the binomial is a factor of the polynomial? Create a sample problem that has a binomial which IS a factor of the polynomial being divided, and another problem that has a binomial which is not a factor of the ... Wednesday, April 2, 2014 at 6:12pm by Gabby No to both questions. A polynomial is a sum of at least one integer power of x (or other unknown), each multiplied by a constant. '9' is simply a constant. 2^x is a power of 2, not a power of x. Is 9 a polynomial? Is 2^x a polynomial? Sunday, January 7, 2007 at 7:43pm by drwls write, in extended form, a polynomial (with real coefficients) of degree 3, with solutions 2, 2-i. Tuesday, July 26, 2011 at 9:57am by Jill write a polynomial function in standard form with the given zeros. 1-2i, 2 Sunday, March 10, 2013 at 8:59am by lisa algebra 1 how do you write this polynomial 5x^3 + x^5 − 8 + 4x in standard form? Wednesday, March 5, 2014 at 3:43pm by monica x = -1 is one root, as you can easily verify by inspection. That means that (x +1) is a factor of the polynomial.
Divide x^3-3x^2+25x+29 by x+1 (using polynomial long division) to get a second order polynomial, from which you can get the other roots easily, using the quadratic... Saturday, October 29, 2011 at 8:16pm by drwls Algebra II The key concept here is that imaginary numbers always appear as conjugates, so if 2i is a zero, so is -2i. So we would have x = ±2i, x^2 = 4i^2, x^2 = -4, so x^2 + 4 is a factor; the polynomial is f(x) = (x+1)(x-3)(x^2 + 4). I will leave it up to you to expand it, but I would ... Friday, April 5, 2013 at 12:23am by Reiny Write a polynomial function in standard form that has the given zeros and has a leading coefficient of 1 for: -2, 4+i Wednesday, January 12, 2011 at 7:22pm by Larry *polynomial in standard form 1) 5/12+7/18=5/6 2) 3x/(x+1)-1/(x)=______/x^2-1x Thursday, February 28, 2013 at 6:08am by Isis find a polynomial f(x) of degree 4 that has the following zeros: 0, 7, -4, 5 Leave your answer in factored form Monday, March 24, 2014 at 7:02pm by Adnan Consider the polynomial P(x), shown in standard form and in factored form. (a) State the behavior at the ends (fill in blanks): At the left, as x → −∞, P(x) → __ (choose −∞ or ∞). At the right, as x → ∞, P(x) → __ (choose −∞ or ∞). (b) State the y-intercept: (c) ... Monday, November 22, 2010 at 6:19am by Algebra Q Consider the polynomial P(x), shown in standard form and in factored form. p(x) = -(1/5)x^3 + (1/5)x^2 + (17/5)x + 3 = -1/5(x+3)(x+1)(x-5) (a) State the behavior at the ends (fill in blanks): At the left, as x → −∞, P(x) → __ (choose −∞ or ∞). At the right, as x → ∞... Tuesday, November 23, 2010 at 1:28am by Wizard Do I have this right? A first degree polynomial crosses the x axis. A second degree polynomial touches the y axis without crossing. A third degree polynomial flattens against the y axis. Monday, November 30, 2009 at 9:58pm by Nicki The difference of two squares is a^2-b^2, for a and b any expression. It should be easy to determine if a polynomial fits that form.
Saturday, January 15, 2011 at 1:28pm by Marth find the complex zeros of the polynomial function. write F in the factored form. f(X)=x^3-7x^2+20x-24 use the complex zeros to write f in factored form. f(x)= (reduce fractions and simplify roots) Monday, October 1, 2012 at 11:29pm by ladybug Find the complex zeros of the polynomial function. Write f in factored form. F(x)=x^3-8x^2+29x-52 Use the complex zeros to write f in factored form F(x)=____(reduce fractions and simplify roots) Sunday, June 9, 2013 at 4:10am by sharday calculus--please help!! find the complete zeros of the polynomial function. Write f in factored form. f(x)=3x^4-10x^3-12x^2+122x-39 **Use the complex zeros to write f in factored form.** f(x)= Please show work Sunday, December 30, 2012 at 5:18am by NayNay By the Descartes rule of signs, we know that there are two positive roots out of three, which also tells us that all the roots are real. Using the rational zero theorem, we know that rational roots, if any, have to be of the form ±p/q, where p is a factor of 16, and q ... Monday, October 25, 2010 at 10:30pm by MathMate college algebra find the complex zeros of the polynomial function. write f in factored form. use complex zeros to write f in factored form f(x)= x^3-6x^2+21x-26 Friday, August 24, 2012 at 9:02pm by Bella find complex zeros of the polynomial function. put f in factored form. f(x)=x^3-6x^2+21x-26 Friday, August 24, 2012 at 11:11pm by mema Find the complex zeros of the polynomial function, and write f in factored form: f(x)= x^3 - 8x^2 + 30x - 36 Wednesday, August 21, 2013 at 10:57pm by sheila calculus totally confused find a first degree polynomial function p(1) whose value and slope agree with the value and slope of f at x=c. i think you use taylor series for this f(x)=4/sqrt(x), c=1 i'm totally confused on how to do it. i know you find the derivative but how do i get the function. this is... Tuesday, March 13, 2007 at 10:24pm by david true/false 1. 
a cubic polynomial has at least one zero.............. 2. a quadratic polynomial can have at most two zeroes.......... 3. if r(x) is the remainder and p(x) is the divisor, then degree r(x) < degree p(x)............ 4. if zeroes of a quadratic polynomial ax^2+bx... Friday, June 14, 2013 at 5:22am by Shiana the discriminant = b^2-4ac a=1 b=-5 c=6 25-4(1)(6)=1 Since 1 > 0, the polynomial has two real roots; if the discriminant = 0, the polynomial has one real root, and if it's < 0, the polynomial has no real roots Sunday, May 16, 2010 at 10:34pm by Jen college algebra find the complex zeros of the polynomial function. write f in factored form. f(x)=x^3-10x^2+37x-52 Sunday, November 18, 2012 at 10:53pm by ladybug i am stuck on this problem! help! Write a quadratic equation that has two solutions, 4 and -4. Leave the polynomial in reduced factored form. Tuesday, August 6, 2013 at 9:42pm by mariana hi my name is hedi and i am in serious need of help with some math problems dealing with polynomials can someone please help me? Determine whether each expression is a polynomial. If it is a polynomial, state the degree of the polynomial. (This is the question) 1. 5x^3+2xy^6... Monday, April 28, 2008 at 6:56pm by hedi Pre Calculus 1. Find all rational zeros of the polynomial. Then determine any irrational zeros, and factor the polynomial completely. 3x^4-11x^3+5x^2+3x 2. Find the polynomial with leading coefficient 1 that has a degree of 4, a zero of multiplicity 2 at x=1 and a zero at x=2+i Monday, November 21, 2011 at 1:27am by Josh What is 9x^2 +4x^2 Factor each polynomial completely. If a polynomial is prime, say so.
Saturday, February 21, 2009 at 8:26pm by jackie Factor each polynomial completely, if a polynomial is prime say so 9x^2+4y^2 Monday, March 9, 2009 at 6:57pm by Debbie I know to do long division but i still cant figure out how to divide a polynomial by a polynomial Wednesday, February 9, 2011 at 9:46am by Tammy What answers this definition: A polynomial, you would write the polynomial as a product of factors. Tuesday, March 1, 2011 at 10:43pm by Megan Ruff
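Many of the questions above ask to form a polynomial from given zeros, with complex zeros entering in conjugate pairs when the coefficients must be real (as the Count Iblis replies explain). A quick sketch in Python for the degree-5 question with zeros 2, −i, −7+i; the helper name and the expansion approach are mine, not part of any posted answer:

```python
def poly_from_zeros(zeros, leading=1):
    """Expand leading * prod (x - r) into a coefficient list, highest power first."""
    coeffs = [complex(leading)]
    for r in zeros:
        nxt = [0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c          # existing term multiplied by x
            nxt[i + 1] -= c * r  # existing term multiplied by -r
        coeffs = nxt
    return coeffs

# Degree 5, zeros 2, -i, -7+i: add the conjugates i and -7-i so that
# the expanded coefficients come out real.
coeffs = poly_from_zeros([2, -1j, 1j, -7 + 1j, -7 - 1j])
print([round(c.real) for c in coeffs])
# [1, 12, 23, -88, 22, -100], i.e. x^5 + 12x^4 + 23x^3 - 88x^2 + 22x - 100
```

The imaginary parts cancel exactly because every complex zero is paired with its conjugate.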
[Numpy-discussion] A faster median (Wirth's method) Robert Bradshaw robertwb@math.washington.... Wed Sep 2 11:57:05 CDT 2009 On Wed, 2 Sep 2009, Dag Sverre Seljebotn wrote: > Sturla Molden wrote: >> Dag Sverre Seljebotn skrev: >>> Nitpick: This will fail on large arrays. I guess numpy.npy_intp is the >>> right type to use in this case? >> By the way, here is a more polished version, does it look ok? >> http://projects.scipy.org/numpy/attachment/ticket/1213/generate_qselect.py >> http://projects.scipy.org/numpy/attachment/ticket/1213/quickselect.pyx > I didn't look at the algorithm, but the types look OK (except for the > gil as you say). Comments: > a) Is the cast to numpy.npy_intp really needed? I'm pretty sure shape is > defined as numpy.npy_intp*. > b) If you want higher performance with contiguous arrays (which occur a > lot as inplace=False is default I guess) you can do > np.ndarray[T, ndim=1, mode="c"] > to tell the compiler the array is contiguous. That doubles the number of > function instances though... >> Cython needs something like Java's generics by the way :-) > Yes, we all long for that. It will come as soon as somebody volunteers I > suppose -- it shouldn't be all that difficult, but I don't think any of > the existing devs will be up for it any time soon. Danilo's C++ project has some baby steps in that direction, though it'll need to be expanded quite a bit to handle this. - Robert More information about the NumPy-Discussion mailing list
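For readers skimming the thread, the selection algorithm under discussion (Wirth's in-place partial sort, which the quickselect.pyx attachment implements in Cython) is short enough to sketch in pure Python. This is an illustrative re-implementation, not the code from the ticket:

```python
def kth_smallest(data, k):
    """Return the k-th smallest element (0-based) via Wirth's selection."""
    a = list(data)              # work on a copy; the Cython version is in-place
    lo, hi = 0, len(a) - 1
    while lo < hi:
        x = a[k]                # Wirth pivots on the current k-th element
        i, j = lo, hi
        while i <= j:
            while a[i] < x:
                i += 1
            while x < a[j]:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if j < k:               # k-th element lies in the right partition
            lo = i
        if k < i:               # k-th element lies in the left partition
            hi = j
    return a[k]

def median(data):
    """Median via one or two selections instead of a full sort."""
    n = len(data)
    if n % 2:
        return kth_smallest(data, n // 2)
    return 0.5 * (kth_smallest(data, n // 2 - 1) + kth_smallest(data, n // 2))

print(median([3, 1, 4, 1, 5]))   # 3
print(median([3, 1, 4, 2]))      # 2.5
```

The partitioning loop is the part the thread's type discussion (numpy.npy_intp for indices, contiguous-array specializations) applies to.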
Combinations and Permutations May 21st 2009, 03:39 PM #1 Hi everyone! Looking for someone to check over the work I have done so far to see if I'm on the right track. Also looking for some help to start the questions I am really stuck on. Thank you! 1.) P(12,5) For this one, would I just use the nPr button on my calculator to get the answer of 95040? 2.) $\binom{7}{3} + \binom{8}{3}$ (the stacked brackets are meant to be continuous, i.e., binomial coefficients). I am not sure how to solve this. The instructions just say to Evaluate each expression, and this is review for me so I can't quite remember how to start. 3.) What is the seventh term in the expansion of (3 - 2x)^15? Using the formula $t_{k+1} = \binom{n}{k}x^{n-k}y^k$, k = one less than the term you want. Since I am looking for the seventh term, k would be 6. $t_{6+1} = \binom{15}{6}(3)^{15-6}(-2x)^6$ t7 = 5005(3)^9 (-2x)^6 t7 = 5005(19683)(64x^6) t7 = 6,304,858,560x^6 This number seems extremely large to be the correct answer. 4.) a.) P(n,2) + P(n,3) = 42(n-1) b.) C(n,2) + C(n,1) = C(7,2) Not sure how to start either of these questions as this is review and I learned these types of problems months ago. 5.) Last month 200 students enrolled in courses: 56 students took English; 52 took mathematics; 72 took history; 12 took mathematics and English; 19 took mathematics and history; 18 took English and history; and 7 took all 3 courses. a.) How many students took English or mathematics or history? b.) How many students did not take English, mathematics, or history? Any help with any of these questions and reviews of the ones I have already done would be GREATLY appreciated. Quote: 5.) Last month 200 students enrolled in courses: 56 students took English; 52 took mathematics; 72 took history; 12 took mathematics and English; 19 took mathematics and history; 18 took English and history; and 7 took all 3 courses. a.) How many students took English or mathematics or history? b.) How many students did not take English, mathematics, or history?
Students who took only English: 56 − 12 − 18 + 7 = 33; only mathematics: 52 − 12 − 19 + 7 = 28; only history: 72 − 19 − 18 + 7 = 42. Total: 103 Did not take any of the above subjects: 97 When you learned these types of problems all those months ago were you given class notes and examples to follow? After substituting the definitions the equations become: a) Solve $n(n-1) + n(n-1)(n-2) = 42(n-1)$. b) Solve $\frac{n(n-1)}{2} + n = 21$. May 21st 2009, 09:08 PM #2 May 22nd 2009, 04:03 AM #3
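For what it's worth, the numerical parts of the thread can be checked mechanically (Python 3.8+ with math.comb/math.perm). Two things the check surfaces: (−2x)^6 is positive, so the coefficient in problem 3 is +6,304,858,560; and 5(a) asks for the union of the three courses, which inclusion-exclusion gives as 138 (the 103 above counts students taking exactly one subject, so 200 − 103 = 97 ignores students in two or three courses):

```python
from math import comb, perm

# 1) P(12,5) -- the nPr value
print(perm(12, 5))                    # 95040

# 2) C(7,3) + C(8,3) -- the stacked-bracket expression
print(comb(7, 3) + comb(8, 3))        # 35 + 56 = 91

# 3) seventh term of (3 - 2x)^15 is C(15,6) * 3^9 * (-2x)^6;
#    (-2x)^6 = +64 x^6, so the coefficient is positive
print(comb(15, 6) * 3**9 * (-2)**6)   # 6304858560

# 5) inclusion-exclusion over English, mathematics, history
union = 56 + 52 + 72 - 12 - 19 - 18 + 7
print(union, 200 - union)             # 138 took at least one, 62 took none
```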
The ACL Learning Framework The learning framework of ACL differs from ILP because both the background knowledge and the learned theory are abductive theories. An abductive theory T in Abductive Logic Programming is a triple ⟨P, A, I⟩, where P is a definite logic program, A is a set of predicates called abducible predicates (or simply abducibles), and I is a set of range-restricted clauses called integrity constraints. As a knowledge representation framework, when we represent a problem in ALP via an abductive theory T, we generally assume that the abducible predicates in A carry all the incompleteness of the program P in modelling the external problem domain, in the sense that if we (could) complete the abducible predicates in P then P would completely describe the problem domain. An abductive theory can support abductive (or hypothetical) reasoning for several purposes such as diagnosis, planning or default reasoning. The central notion used for this is that of an abductive explanation for an observation or a given goal. When an observation O can be abductively explained in a theory T with explanation Δ, we write T ⊨_A O with Δ. We can now define the learning problem when the language of the background and target theories is the one of ALP. Abductive Concept Learning Given: • a set of positive examples E⁺, • a set of negative examples E⁻, • an abductive theory T = ⟨P, A, I⟩ as background theory, • a hypothesis space ⟨H, Y⟩ consisting of a space of possible programs H and a space of possible constraints Y. Find: A set of rules P' ∈ H and a set of constraints I' ∈ Y such that the new abductive theory T' = ⟨P ∪ P', A, I ∪ I'⟩ satisfies the following conditions: • T' ⊨_A E⁺, • for all e⁻ ∈ E⁻, T' does not abductively entail e⁻, where E⁺ stands for the conjunction of all positive examples. In effect, we have replaced the deductive entailment in the ILP problem with abductive entailment to define the ACL learning problem.
Example 1 Suppose we want to learn the concept father. Let the background theory be T = ⟨P, A, ∅⟩ where: P = { parent(john, mary), parent(david, steve), parent(katy, ellen), male(john), female(katy) } and A = { male, female }. Let the training examples be: E⁺ = { father(john, mary), father(david, steve) }, E⁻ = { father(katy, ellen), father(john, steve), father(steve, john), father(steve, katy) }. In this case, a possible hypothesis learned by ACL would consist of P' = { father(X,Y) :- parent(X,Y), male(X) } I' = { :- male(X), female(X) } This hypothesis satisfies the definition of ACL because: 1. T' ⊨_A father(john,mary), father(david,steve) with Δ = {male(david)}; 2. T' does not abductively entail father(katy,ellen), as the only possible explanation for this goal, namely {male(katy)}, is made inconsistent by the learned integrity constraints I'; 3. T' does not abductively entail father(john,steve), father(steve,john) and father(steve,katy) because they have no possible abductive explanations. Hence, despite the fact that the background theory is incomplete (in its abducible predicates), ACL can find an appropriate solution to the learning problem by suitably extending the background theory with abducible assumptions. Note that the learned theory without the integrity constraint would not satisfy the definition of ACL, because there would exist an abductive explanation for the negative example father(katy,ellen), namely {male(katy)}. This explanation is prohibited in the complete theory by the learned constraint together with the fact female(katy). The full ACL problem can be split into two subproblems: (1) learning the rules together with appropriate strong explanations and (2) learning integrity constraints. The solutions of the two subproblems can be combined to obtain a solution for the original problem. The first subproblem, called ACL1, has the following definition.
Given: • a set of positive examples E⁺, • a set of negative examples E⁻, • an abductive theory T = ⟨P, A, I⟩ as background theory, • a hypothesis space of possible programs H. Find: A set of rules P' ∈ H such that the new abductive theory T_ACL1 = ⟨P ∪ P', A, I⟩ satisfies the following conditions: • T_ACL1 ⊨_A E⁺ with Δ⁺, • T_ACL1 ⊨_A not_E⁻ with Δ⁻, • Δ⁺ ∪ Δ⁻ is consistent, where not_E⁻ stands for the conjunction of the complement of every negative example. Example 2 In the case of example 1 above, a solution to the ACL1 problem would consist of P' = { father(X,Y) :- parent(X,Y), male(X) } The ACL1 problem is solved by the system ACL1. The input file for the learning problem of example 1 is father.bg. From this input file, the ACL1 system finds the above solution, as can be seen from the father.rules file. Indeed, the information generated by ACL1 through the abductive explanations for negative examples can be used to provide a solution of the full ACL problem through a second learning phase. From the output of ACL1, i.e., its set of rules and the sets of assumptions Δ⁺ and Δ⁻ for covering positive examples and uncovering negative ones, a solution to ACL can be found by learning constraints that are consistent with Δ⁺ and inconsistent with the complement of every abducible in Δ⁻. The definition of the second subproblem, called ACL2, can be given as follows. Given: • a solution of ACL1: □ T_ACL1 = ⟨P ∪ P', A, I⟩, □ Δ⁺, □ Δ⁻; • a hypothesis space of possible constraints Y. Find: A set of constraints I' ∈ Y such that the new abductive theory ⟨P ∪ P', A, I ∪ I'⟩ satisfies the following conditions: • Δ⁺ is a positive interpretation for I', • for every literal l ∈ Δ⁻, {not_l} is a negative interpretation for I', where not_l is the complement of l with respect to default negation. The ACL2 problem is solved by means of the ICL system.
The input for ICL is generated by the ACL1 system. For the case of example 1, the input files for ICL are again father.bg, containing the background knowledge, and father.kb, containing the interpretations, which is generated by the ACL1 system. The theory obtained by combining the solutions of the two subproblems gives a solution to the full ACL problem. Example 3 In the case of examples 1 and 2 above, the input to the second learning phase, performed by ICL, would consist of • the positive interpretation p = {male(david)} • the negative interpretation n = {male(katy)} The solution of the ACL2 problem is I' = { :- male(X), female(X) } Indeed the above solution is found by ICL from the above interpretations, as can be seen from the file father.theory.
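To make the father example concrete, the cover/uncover checks can be simulated in a few lines of Python. This sketch is mine (it is not part of the ACL or ICL systems), and the fact base is my reading of the truncated background theory in Example 1:

```python
# Reconstructed background facts (abducible predicates: male/1, female/1)
parent = {("john", "mary"), ("david", "steve"), ("katy", "ellen")}
male, female = {"john"}, {"katy"}

def explain_father(x, y):
    """Abductive explanation (a set of assumptions) for father(x, y) under
    father(X,Y) :- parent(X,Y), male(X)   with   :- male(X), female(X).
    Returns None when no consistent explanation exists."""
    if (x, y) not in parent:
        return None              # the rule body cannot be satisfied at all
    if x in male:
        return set()             # already entailed, nothing to abduce
    if x in female:
        return None              # abducing male(x) would violate the constraint
    return {("male", x)}         # abduce male(x)

print(explain_father("david", "steve"))   # {('male', 'david')} -- the Delta of item 1
print(explain_father("katy", "ellen"))    # None -- blocked by the constraint (item 2)
```

Dropping the constraint check reproduces the failure the text describes: father(katy, ellen) would then be explainable by {male(katy)}.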
Pattern Reduction in Paper Cutting Aldridge, C. and Chapman, S. J. and Gower, R. and Leese, R. and McDiarmid, C. and Shepherd, M. and Tuenter, H. and Wilson, H. and Zinober, A. Pattern Reduction in Paper Cutting. [Study Group Report] A large part of the paper industry involves supplying customers with reels of specified width in specified quantities. These 'customer reels' must be cut from a set of wider 'jumbo reels', in as economical a way as possible. The first priority is to minimize the waste, i.e. to satisfy the customer demands using as few jumbo reels as possible. This is an example of the one-dimensional cutting stock problem, which has an extensive literature. Greycon have developed cutting stock algorithms which they include in their software packages. Greycon's initial presentation to the Study Group posed several questions, which are listed below, along with (partial) answers arising from the work described in this report. (1) Given a minimum-waste solution, what is the minimum number of patterns required? It is shown in Section 2 that even when all the patterns appearing in minimum-waste solutions are known, determining the minimum number of patterns may be hard. It seems unlikely that one can guarantee to find the minimum number of patterns for large classes of realistic problems with only a few seconds on a PC available. (2) Given an n → n-1 algorithm, will it find an optimal solution to the minimum-pattern problem? There are problems for which n → n - 1 reductions are not possible although a more dramatic reduction is. (3) Is there an efficient n → n-1 algorithm? In light of Question 2, Question 3 should perhaps be rephrased as 'Is there an efficient algorithm to reduce n patterns?' However, if an algorithm guaranteed to find some reduction whenever one existed then it could be applied iteratively to minimize the number of patterns, and we have seen this cannot be done easily. (4) Are there efficient 5 → 4 and 4 → 3 algorithms?
(5) Is it worthwhile seeking alternatives to greedy heuristics? In response to Questions 4 and 5, we point to the algorithm described in the report, or variants of it. Such approaches seem capable of catching many higher reductions. (6) Is there a way to find solutions with the smallest possible number of single patterns? The Study Group did not investigate methods tailored specifically to this task, but the algorithm proposed here seems to do reasonably well. It will not increase the number of singleton patterns under any circumstances, and when the number of singletons is high there will be many possible moves that tend to eliminate them. (7) Can a solution be found which reduces the number of knife changes? The algorithm will help to reduce the number of necessary knife changes because it works by bringing patterns closer together, even if this does not proceed fully to a pattern reduction. If two patterns are equal across some of the customer widths, the knives for these reels need not be changed when moving from one to the other. Item Type: Study Group Report Problem Sectors: Discrete Company Name: Greycon ID Code: 329 Deposited By: Dr Kamel Bentahar Deposited On: 06 Jun 2011 16:59 Last Modified: 06 Jun 2011 16:59
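For readers unfamiliar with the setting, a toy first-fit-decreasing heuristic in Python illustrates the objects involved: each "pattern" is the list of customer widths cut from one jumbo reel. This is a generic bin-packing sketch with made-up widths, not Greycon's algorithm or the Study Group's:

```python
def first_fit_decreasing(widths, jumbo):
    """Greedily assign customer reel widths to jumbo reels of width `jumbo`."""
    patterns = []                      # one list of widths per jumbo reel used
    for w in sorted(widths, reverse=True):
        for p in patterns:
            if sum(p) + w <= jumbo:    # this reel still has room
                p.append(w)
                break
        else:
            patterns.append([w])       # open a new jumbo reel
    return patterns

pats = first_fit_decreasing([50, 30, 30, 20, 20, 10], jumbo=60)
print(pats)   # [[50, 10], [30, 30], [20, 20]] -- 3 jumbos, 3 distinct patterns
```

Minimizing waste fixes the number of jumbo reels; the report's question is the separate one of how few distinct patterns (and hence knife settings) such a solution can use.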
Confidence Intervals for Univariate Ordinal ACE Model with 2 Thresholds I was wondering if there was a reason I can't seem to get confidence intervals for an ordinal ACE model with 2 thresholds. I made sure to specify the mxCI for my standardized variance components in my model and to put "intervals=T" in my mxRun line. In my fitted model, $output$confidenceIntervals exists, but does not have any values. Is this normal? If not, is there a way to create confidence intervals in the ordinal case? If it's just a silly code error, my code is below. Any help would be appreciated!

univACEOrdModel <- mxModel("univACEOrd",
  mxModel("ACE",
    # Matrices a, c, and e to store a, c, and e path coefficients
    mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="a11", name="a" ),
    mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="c11", name="c" ),
    mxMatrix( type="Full", nrow=nv, ncol=nv, free=TRUE, values=.6, label="e11", name="e" ),
    # Matrices A, C, and E compute variance components
    mxAlgebra( expression=a %*% t(a), name="A" ),
    mxAlgebra( expression=c %*% t(c), name="C" ),
    mxAlgebra( expression=e %*% t(e), name="E" ),
    # Algebra to compute total variances and standard deviations (diagonal only)
    mxAlgebra( expression=A+C+E, name="V" ),
    mxMatrix( type="Iden", nrow=nv, ncol=nv, name="I"),
    mxAlgebra( expression=solve(sqrt(I*V)), name="sd"),
    mxAlgebra( expression=cbind(A/V, C/V, E/V), name="stndVCs"),  # V is the total variance defined above
    # Calculate 95% CIs here
    ## Yes, it's repetitive, but I was desperate and the above was exactly how it ran in my continuous univariate ACE model
    mxCI(c("stndVCs")),
    # Constraint on variance of ordinal variables
    mxConstraint(V == I, name="Var1"),
    # Matrix & Algebra for expected means vector
    mxMatrix( type="Zero", nrow=1, ncol=nv, name="M" ),
    mxAlgebra( expression= cbind(M,M), name="expMean" ),
    mxMatrix( type="Full", nrow=2, ncol=nv, free=TRUE, values=c(0.8,1.2), label=c("thre1","thre2"), name="T" ),
    mxAlgebra( expression= cbind(T,T), dimnames=list(c('th1','th2'),selVars), name="expThre" ),
    # Algebra for expected variance/covariance matrix in MZ
    mxAlgebra( expression= rbind ( cbind(A+C+E , A+C),
                                   cbind(A+C   , A+C+E)), name="expCovMZ" ),
    # Algebra for expected variance/covariance matrix in DZ, note use of 0.5, converted to 1*1 matrix
    mxAlgebra( expression= rbind ( cbind(A+C+E     , 0.5%x%A+C),
                                   cbind(0.5%x%A+C , A+C+E)), name="expCovDZ" )
  ),
  mxModel("MZ",
    mxData( observed=mzData, type="raw" ),
    mxFIMLObjective( covariance="ACE.expCovMZ", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThre" )
  ),
  mxModel("DZ",
    mxData( observed=dzData, type="raw" ),
    mxFIMLObjective( covariance="ACE.expCovDZ", means="ACE.expMean", dimnames=selVars, thresholds="ACE.expThre" )
  ),
  mxAlgebra( expression=MZ.objective + DZ.objective, name="min2sumll" ),
  mxAlgebraObjective("min2sumll")
)

univACEOrdFit <- mxRun(univACEOrdModel, intervals=T)
Returns the depreciation of an asset for a specified period using the fixed-declining balance method.

Cost is the initial cost of the asset.
Salvage is the value at the end of the depreciation (sometimes called the salvage value of the asset).
Life is the number of periods over which the asset is being depreciated (sometimes called the useful life of the asset).
Period is the period for which you want to calculate the depreciation. Period must use the same units as life.
Month is the number of months in the first year. If month is omitted, it is assumed to be 12.

● The fixed-declining balance method computes depreciation at a fixed rate. DB uses the following formula to calculate depreciation for a period:
(cost - total depreciation from prior periods) * rate
where rate = 1 - ((salvage / cost) ^ (1 / life)), rounded to three decimal places
● Depreciation for the first and last periods is a special case. For the first period, DB uses this formula:
cost * rate * month / 12
● For the last period, DB uses this formula:
((cost - total depreciation from prior periods) * rate * (12 - month)) / 12

Example 1
The example may be easier to understand if you copy it to a blank worksheet.
● Create a blank workbook or worksheet.
● Select the example in the Help topic. Note: do not select the row or column headers.
● Press CTRL+C.
● In the worksheet, select cell A1, and press CTRL+V.
● To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click Show Formulas.

Data            Description
1,000,000       Initial cost
100,000         Salvage value
6               Lifetime in years

Formula                 Description (Result)
=DB(A2,A3,A4,1,7)       Depreciation in first year, with only 7 months calculated (186,083.33)
=DB(A2,A3,A4,2,7)       Depreciation in second year (259,639.42)
=DB(A2,A3,A4,3,7)       Depreciation in third year (176,814.44)
=DB(A2,A3,A4,4,7)       Depreciation in fourth year (120,410.64)
=DB(A2,A3,A4,5,7)       Depreciation in fifth year (81,999.64)
=DB(A2,A3,A4,6,7)       Depreciation in sixth year (55,841.76)
=DB(A2,A3,A4,7,7)       Depreciation in seventh year, with only 5 months calculated (15,845.10)
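The three formulas in the Help text above translate directly into code. A short sketch (the `db` helper here is ours, mirroring the documented three-decimal rounding of the rate and the first/last-period special cases):

```python
def db(cost, salvage, life, period, month=12):
    """Fixed-declining balance depreciation for one period,
    following the formulas in the Help text above (sketch)."""
    # rate is rounded to three decimal places, as the Help text specifies
    rate = round(1 - (salvage / cost) ** (1 / life), 3)
    total = 0.0  # total depreciation from prior periods
    for p in range(1, period + 1):
        if p == 1:
            dep = cost * rate * month / 12                    # first period
        elif p == life + 1:
            dep = (cost - total) * rate * (12 - month) / 12   # last period
        else:
            dep = (cost - total) * rate                       # ordinary period
        total += dep
    return dep
```

Called with the example's inputs, `db(1000000, 100000, 6, 1, 7)` reproduces the first-year figure of 186,083.33 from the table.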
System of equations

Re: System of equations

These are either olympiad problems, in which case there is a general method which I do not know yet. Solving numerically I can do. Whoever picked these equations knew their onions, I will tell you that. I went through the entire Sturmfels book and got nothing. One of the worst math books I have ever seen.

"In mathematics, you don't understand things. You just get used to them."
"I have the result, but I do not yet know how to get it."
"All physicists, and a good many quite respectable mathematicians, are contemptuous about proof."
Summary: Rational torsion in elliptic curves and the cuspidal subgroup

Amod Agashe

Let A be an elliptic curve over Q of square-free conductor N. Suppose A has a rational torsion point of prime order r such that r does not divide 6N. We prove that then r divides the order of the cuspidal subgroup C of J0(N). If A is optimal, then viewing A as an abelian subvariety of J0(N), our proof shows more precisely that r divides the order of A ∩ C. Also, under the hypotheses above, we show that for some prime p that divides N, the eigenvalue of the Atkin-Lehner involution Wp acting on the newform associated to A is -1.

1 Introduction

Let A be an elliptic curve over Q of square-free conductor N and let A be the optimal curve in the isogeny class (over Q) of A. Let X0(N) denote the modular curve over Q associated to Γ0(N), and let J0(N) be its Jacobian. By [BCDT01], we may view A as an abelian variety quotient over Q of J0(N). By dualizing, A can also be viewed as an abelian subvariety of J0(N), as we shall do in the rest of this article. The cuspidal subgroup C of J0(N)(C) is the group of degree-zero divisors on X0(N)(C) that are supported on the cusps.
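For prime level N (the simplest instance of the square-free hypothesis above), the cuspidal subgroup C of J0(N) is known to be cyclic of order the numerator of (N-1)/12, by a theorem of Mazur. That makes the divisibility statement easy to probe numerically in small cases; a sketch (the helper name is ours):

```python
from fractions import Fraction

def cuspidal_order(N):
    """Order of the cuspidal subgroup C of J0(N) for PRIME level N:
    the numerator of (N - 1)/12 (Mazur).  Not valid for composite N."""
    return Fraction(N - 1, 12).numerator
```

For example, N = 11 gives order 5, matching the well-known rational 5-torsion of the elliptic curve X0(11).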
Numerical Analysis Textbooks

Browse New & Used Numerical Analysis Textbooks

Numerical analysis involves the use of algorithms to solve mathematical problems. Its uses are wide-ranging and can be applied to computer science, mathematics, engineering and the sciences. We have an extensive range of affordable textbooks to buy or rent that will provide you with all the information that you need for your college studies. The textbooks look at the real-world applications from which numerical analysis problems generally arise, such as algebra, geometry and calculus. They then go on to look at the algorithms themselves, and at the common uses for numerical analysis in industry, such as the natural sciences, social sciences, medicine, engineering and business. Our new and used textbooks examine the increasing levels of complexity of numerical analysis that result from the more accurate and complex mathematical models being developed. We will deliver the textbooks to the address of your choice, saving you time-consuming trips to the campus bookstore. This will give you valuable time to work on those complex algorithms! When you are done with your books at the end of your course, make the most of our buyback service.
Numerical Transform Inversion

• The Fourier-Series Method for Inverting Transforms of Probability Distributions. Queueing Systems, vol. 10, No. 1, 1992, pp. 5-88 (with Joseph Abate). [published PDF]
• Numerical Inversion of Probability Generating Functions. Operations Research Letters, vol. 12, No. 4, 1992, pp. 245-251 (with Joseph Abate). [published PDF]
• Solving Probability Transform Functional Equations for Numerical Inversion. Operations Research Letters, vol. 12, 1992, pp. 275-281 (with Joseph Abate). [published PDF]
• Multi-Dimensional Transform Inversion with Applications to the Transient M/G/1 Queue. Annals of Applied Probability, vol. 4, 1994, pp. 719-740 (with Gagan L. Choudhury and David M. Lucantoni). [published PDF]
• Numerical Inversion of Laplace Transforms of Probability Distributions. ORSA Journal on Computing, vol. 7, 1995, pp. 36-43 (with Joseph Abate). [published PDF]
• Asymptotic Analysis of Tail Probabilities Based on the Computation of Moments. Annals of Applied Probability, vol. 5, 1995, pp. 983-1007 (with Gagan L. Choudhury and David M. Lucantoni). [published PDF]
• An Operational Calculus for Probability Distributions Via Laplace Transforms. Advances in Applied Probability, vol. 28, 1996, pp. 75-113. [published PDF]
• On the Laguerre Method for Numerically Inverting Laplace Transforms. INFORMS Journal on Computing, vol. 8, 1996, pp. 413-427 (with Joseph Abate and Gagan L. Choudhury). [published PDF]
• Probabilistic Scaling for the Numerical Inversion of Non-Probability Transforms. INFORMS Journal on Computing, vol. 9, No. 2, 1997, pp. 175-184 (with Gagan L. Choudhury). [PDF] [published PDF]
• Numerical Inversion of Multidimensional Laplace Transforms by the Laguerre Method. Performance Evaluation, vol. 31, 1998, pp. 229-243 (with Joseph Abate and Gagan L. Choudhury). [published PDF]
• An Introduction to Numerical Transform Inversion and its Application to Probability Models. In Computational Probability, W. Grassman (ed.), Kluwer, Boston, 1999, pp. 257-323 (with Joseph Abate and Gagan Choudhury). [PostScript] [PDF]
• Infinite-Series Representations of Laplace Transforms of Probability Density Functions for Numerical Inversion. Journal of Operations Research Society of Japan, vol. 42, No. 3, September 1999, pp. 268-285. [published PDF]
• Computing Laplace Transforms for Numerical Inversion Via Continued Fractions. INFORMS Journal on Computing, vol. 11, No. 4, Fall 1999, pp. 394-405. [published PDF]
• A Unified Framework for Numerically Inverting Laplace Transforms. INFORMS Journal on Computing, vol. , No. , Fall 2005, pp. [PDF]
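The Fourier-series method surveyed in the first paper above can be sketched compactly. This is a hedged implementation of the Abate-Whitt "EULER" variant (trapezoidal discretization of the Bromwich integral plus Euler summation); the defaults A=18.4, n=15, m=11 are the conventional choices and give roughly 1e-8 discretization error for well-behaved transforms:

```python
import math
from math import comb

def euler_inversion(F, t, A=18.4, n=15, m=11):
    """Invert the Laplace transform F at t > 0 (sketch of the
    Abate-Whitt 'EULER' Fourier-series algorithm)."""
    def partial(N):
        # trapezoidal approximation of the Bromwich inversion integral
        s = 0.5 * F(A / (2 * t)).real
        for k in range(1, N + 1):
            s += (-1) ** k * F((A + 2j * math.pi * k) / (2 * t)).real
        return math.exp(A / 2) / t * s
    # Euler (binomial) averaging of the partial sums s_n, ..., s_{n+m}
    return sum(comb(m, k) * 2.0 ** -m * partial(n + k) for k in range(m + 1))
```

For F(s) = 1/(s+1), whose inverse is e^{-t}, `euler_inversion(lambda s: 1/(s+1), 1.0)` agrees with exp(-1) to high accuracy.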
absolute maximum and absolute minimum value

Posted November 18th 2009:

Find the absolute maximum and absolute minimum values of f on the given interval.
$f(x) = x^3 - 3x + 1$
How do I find this?

Reply:

Absolute extrema can occur
1. at endpoints
2. at critical points.
Compute f(0), f(3), and the value of f(x) at all critical points on [0,3]. Simply put, the largest is the absolute max and the smallest is the absolute min. Note also that in order to get the right answer we use that the min and max are attained on that interval.
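The recipe in the reply (evaluate at the endpoints and at the interior critical points) takes only a few lines to carry out for this f on [0, 3] (the interval is taken from the reply); a sketch:

```python
def f(x):
    return x**3 - 3*x + 1

# f'(x) = 3x^2 - 3 = 0  gives  x = -1 and x = 1; only x = 1 lies in [0, 3]
candidates = [0, 1, 3]                 # endpoints plus interior critical points
values = {x: f(x) for x in candidates}
abs_min = min(values, key=values.get)  # attained at x = 1, where f(1) = -1
abs_max = max(values, key=values.get)  # attained at x = 3, where f(3) = 19
```

So the absolute minimum is f(1) = -1 and the absolute maximum is f(3) = 19.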
Velocity Reviews - An implementation where sizeof(short int) does not divide sizeof(int)

In article <1168909869.077912.146100@38g2000cwa.googlegroups.com> "user923005" <dcorbit@connx.com> writes:
> Spiros Bousbouras wrote:
> > Do you have an example of an implementation where
> > sizeof(short int) does not divide sizeof(int) or
> > sizeof(int) does not divide sizeof(long int) or
> > sizeof(long int) does not divide sizeof(long long int)?
> >
> > Same question for the corresponding unsigned types.
> How about where sizeof int is not evenly divisible by sizeof char?
> CDC Cyber UTexas C compiler

That would violate the standard. sizeof is expressed as an integral unit, where sizeof char == 1. And that was already true in K&R.

> 60 bits is not divisible by 8. They stored characters two words at a
> time though, so you had 120 bits, which is divisible by 8.

That is possible (but gives problems with floats that are also 60 bits). But on those machines it is much more reasonable to have 48-bit ints with the remaining 12 bits (possibly) garbage. Integer arithmetic on those machines had quite a few strange points. That is, I could easily construct examples where:
(a + a) * 2 != a + a + a + a
in integer arithmetic, even when a < 2**48.

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland;
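On a hosted platform you can sanity-check the divisibility chain the original question asks about. A quick probe with Python's ctypes; this only reports the host ABI, and, as the thread shows, the C standard itself does not require the chain to hold:

```python
import ctypes

types = (ctypes.c_short, ctypes.c_int, ctypes.c_long, ctypes.c_longlong)
sizes = [ctypes.sizeof(t) for t in types]
# On mainstream ABIs (e.g. 2/4/8/8 on x86-64 Linux, 2/4/4/8 on Windows)
# each size divides the next; a CDC Cyber's 60-bit words would not fit
# this pattern, which is exactly the historical point of the thread.
chain_divides = all(b % a == 0 for a, b in zip(sizes, sizes[1:]))
```

On every common modern platform `chain_divides` comes out True; the standard only guarantees that the ranges of the types are nested.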
Proving with deductive geometry

July 24th 2011, 02:17 AM

Hi, I need some help in solving this question: "Draw any quadrilateral, find the midpoints of its sides and join consecutive midpoints to form an inner quadrilateral. Make a hypothesis about the inner quadrilateral. Use deductive geometry to prove your hypothesis is true."

This is what I got so far: Attachment 21883

The quadrilateral inside seems to always be a parallelogram, no matter what shape the outside quadrilateral is. The hard part is that I don't know how to use deductive geometry to prove this. (I do know some deductive geometry, but I just don't know where to start.)

July 24th 2011, 02:42 AM

Re: Proving with deductive geometry

The proof depends heavily upon the axioms and definitions you have to use. We do not know those, so the best we can do here is to give an outline. If you connect two non-consecutive vertices of the original quadrilateral you have a diagonal, and two sub-triangles are formed. Now the segment formed by joining the midpoints of the two adjacent sides is parallel to, and one half the length of, the third side, that diagonal. Do the same on the second triangle. Now you have a quadrilateral whose opposite sides are parallel and of the same length, which is a parallelogram.
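The outlined proof can also be checked numerically: the vector from one midpoint to the next equals half the corresponding diagonal, so opposite sides of the inner quadrilateral come out equal as vectors. A quick sketch (the random quadrilateral stands in for "any quadrilateral"):

```python
import random

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def vec(a, b):
    return (b[0] - a[0], b[1] - a[1])

random.seed(1)
quad = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(4)]
mids = [midpoint(quad[i], quad[(i + 1) % 4]) for i in range(4)]

# m1 - m0 = (q2 - q0)/2 = m2 - m3: opposite sides are equal vectors,
# which is exactly the parallelogram condition from the outline.
side_a = vec(mids[0], mids[1])
side_b = vec(mids[3], mids[2])
```

Both sides equal half the diagonal q2 - q0, for any choice of the four vertices.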
STR vs LUC

Posted Oct 15, 2012:
I want to know what most people think is better. Say something like 600 LUC, 1k DEX, rest in STR, or 1k DEX and the rest in LUC. I'm not talking about just distributable stat points but lapis and OJ stats. Let's assume there is no budget, that I can link any piece of gear I need with any lapis I'd like, and that I can afford GM max OJ on each piece of gear. In PvP, which setup will get me the highest DPS? I'm looking for a more in-depth talk with numbers and math than just "STR or LUC".

Posted Oct 15, 2012:
We need a bit more information before we can answer this question, I'm afraid:
- What level are you now?
- What level are you aiming at?
- What sort of budget do you have?
One thing though: all luck is an ineffective build. Even mostly luck is ineffective as a grinding build until you can get enough to consistently crit (around 250-300 luck is okay, but the more the better, up to a 95% crit rate).

Posted Oct 16, 2012:
Edited original post.

Posted Oct 16, 2012:
Which class are you? This makes a bit of a difference to the maths side of it, as sins/rangers get DEX/LUC buffs, and hunters get increased damage from LUC. James Godden (think I spelt that right) is very good with this question; maybe PM him or check out his Assassin/Ranger guide in the class guides section of the forum.

Posted Oct 16, 2012:
Archer. I'll try and PM him.

Posted Oct 17, 2012:
On average, 1 point in STR yields more damage than 1 point in LUC for a critical hit, and 1 point in STR always yields more damage than 1 point in LUC for a non-critical hit. Both of the above are true for low-DEF and low-ABS targets. When dealing with low-ABS and high-DEF targets, there are some cases where LUC might score more damage than STR.

Posted Oct 17, 2012:
So is absorb factored in before or after the critical hit? I know that attack vs. defense is factored in before the critical damage is applied. But what exactly is considered a high-DEF target: something with DEF higher than your attack, or something with DEF within (say) 25% of your max attack? With the new armors and their ridiculously high DEF, do you think LUC is, or will be, superior?

Posted Oct 17, 2012:
The order of the calculation is very important here.
1. Determine if you actually hit the target.
2. If yes, determine if the hit was a critical hit.
3. If yes, calculate the critical hit bonus:
Bonus = LUC * (random number between 0 and 1.5) - (armor enchant) - (lapis absorb)
The Bonus can be either positive or negative. Here is where the tricky part comes in: if the Bonus is positive, that value is scored as direct damage to the target. If the Bonus is negative, it is carried over to the base damage equation. To simplify this, we will call the remaining bonus Residual_Absorb:
Residual_Absorb = 0 if Bonus > 0
Residual_Absorb = Abs(Bonus) if Bonus <= 0
4. Calculate base damage:
Base Damage = (Attack - Defense) * Ele * C1 - Residual_Absorb
where C1 = 1.0 for a non-crit hit and C1 = 1.5 for a crit hit, and Ele is the elemental damage multiplier, whose value varies depending on the elements in the weapon and armor.

Absorb is computed first. Depending on your target's absorb and your LUC, there may be zero damage to the target for every critical hit. You need LUC to get the critical hits that maximize the base damage, but the direct damage due to the LUC bonus is heavily affected by absorb. This is one of the biggest things that nerfed the archer class when EP 5 came into play.

Right now it is hard to say how LUC will play with the new high-DEF armors. If someone has 1000 LUC, the average LUC bonus will be 750 (a range from 0 to 1500). The lvl 7 absorb lapis is already in the game at 100 absorb a piece; if the target has 4 of them, that is 400 off the 750, for a remainder of 350 average LUC bonus damage. The lvl 8 and lvl 9 absorb lapis have 150 and 200 absorb respectively. The lvl 8 absorbs will be AP items in tiered events very soon; three of those will absorb another 450 points. Hence with a few lapis, a target can completely negate any special damage that an archer could deliver. And because the archer has put so much of their stats into LUC, their base attack is pathetically low, so the target's base defense overwhelms most or all of the archer's attack power. Hence archers are nerfed even more by the new armors.

If Nexon doesn't fix this, archers will remain a novelty class for the few people who want to endure the pain of leveling slowly, because the majority of the community doesn't want to party archers; everyone already knows they have extremely low DPS. Archers should not be an AOE class, but they should be deadly in the one-on-one fight. I will continue to play my archers to the very end. I enjoy the class, but wish it was a bit more effective.

Posted Oct 18, 2012:
Thanks, that is exactly what I needed to run the numbers myself.

Posted Oct 19, 2012:
That abs calc is wrong, or written incredibly badly, as it implies the bonus from LUC is affected, whereas it's the total damage. Absorption is a deduction from the total damage output, nothing more complicated than that; i.e. if you would hit 5000 vs. 0 abs, you'd hit 4000 vs. 1000 abs. Absorb affects every hit in exactly the same way, and also affects every build in the same way; it doesn't especially hurt any style of build.

Maximum non-crit:
("max" atk - def) * ele - abs
Maximum crit:
(("max" atk - def) * ele) * 1.5 + LUC * RND[0-1.5] - abs

In both cases the damage removed by absorb is the same, and in both cases the abs is a simple deduction. Basically, absorb acts the same regardless of build, class or anything else, and it hurts crits and non-crits by the same amount (i.e. LUC is irrelevant).
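The simpler formulation in the last reply can be written out directly. A sketch (the function and its signature are ours, and the real game mechanics are undocumented, so treat this as illustrative only):

```python
import random

def hit_damage(atk, defense, ele, luc, absorb, crit=False, rng=random.random):
    """Sketch of the last reply's formula: absorb is a flat deduction
    from the total damage, so it hits crits and non-crits equally;
    LUC only matters through the crit bonus term."""
    dmg = (atk - defense) * ele
    if crit:
        dmg = dmg * 1.5 + luc * 1.5 * rng()  # LUC * RND[0-1.5]
    return max(dmg - absorb, 0.0)            # damage cannot go negative
```

It reproduces the reply's example: a hit that would land for 5000 against 0 absorb lands for 4000 against 1000 absorb, crit or not.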
How to Calculate Vessel Capacity [Archive] - bulk-online Forums

4th February 2010, 19:01

How to calculate the maximum iron ore cargo a ship can carry?

Hello everyone. If you are given a ship's gross tonnage, net tonnage, and Suez Canal net tonnage, how can you find the maximum iron ore carrying capacity of that ship? I can't work out the ship's design draft deadweight because no other data, such as the length or breadth, is given. I hope someone sheds a little light on this. Thank you.
Kids.Net.Au - Encyclopedia > Ring (mathematics)

In mathematics, a ring is an algebraic structure in which addition and multiplication are defined and have similar properties to those familiar from the integers. The branch of mathematics which studies rings is called ring theory.

Definition

A ring is an abelian group (R, +), together with a second binary operation * such that for all a, b and c in R,

<math>a * (b*c) = (a*b) * c</math>
<math>a * (b+c) = (a*b) + (a*c)</math>
<math>(a+b) * c = (a*c) + (b*c)</math>

and such that there exists a multiplicative identity, or unity, that is, an element 1 so that for all a in R,

<math>a*1 = 1*a = a</math>

Many authors omit the requirement for a multiplicative identity, and call those rings which do have multiplicative identities unitary rings. Similarly, the requirement for the ring multiplication to be associative is sometimes dropped, and rings in which the associative law holds are called associative rings. In this encyclopedia, associativity and the existence of a multiplicative identity are taken to be part of the definition of a ring.

The symbol * is usually omitted from the notation, so that a * b is just written ab. From the axioms, one can immediately deduce that

<math>0a = a0 = 0</math>
<math>(-1)a = -a</math>
<math>(-a)b = a(-b) = -(ab)</math>

for all elements a and b in R. Here, 0 is the neutral element with respect to addition +, and -x stands for the additive inverse of the element x in R.

An element a in a ring is called a unit if it is invertible, i.e., there is an element b such that

<math>ab = ba = 1</math>

If that is the case, then b is uniquely determined by a and we write <math>b = a^{-1}</math>. The set of all invertible elements in a ring is closed under multiplication * and therefore forms a group, the group of units of the ring.
If a and b are both invertible, then we have <math>(ab)^{-1} = b^{-1} * a^{-1}</math>.

Constructing new rings from given ones

• If a subset H of a ring (R,+,*) together with the operations + and * restricted to H is itself a ring, then it is called a subring of (R,+,*).
• The direct sum of two rings (G,+,*) and (H,#,•) is the set G×H together with the operations (g[1],h[1]) + (g[2],h[2]) = (g[1]+g[2], h[1]#h[2]) and (g[1],h[1]) * (g[2],h[2]) = (g[1]*g[2], h[1]•h[2]).
• Given a ring R and an ideal I, the factor ring R/I is the set of cosets of I together with the operations (g+I) + (h+I) = (g+h)+I and (g+I)(h+I) = gh+I.

See Glossary of ring theory for more definitions in ring theory.

All Wikipedia text is available under the terms of the GNU Free Documentation License
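A finite example makes the axioms concrete: the integers modulo n form a ring, and a brute-force check of associativity, distributivity, and the group of units takes only a few lines (a sketch, using n = 6):

```python
n = 6
R = range(n)
add = lambda a, b: (a + b) % n   # ring addition in Z/6Z
mul = lambda a, b: (a * b) % n   # ring multiplication in Z/6Z

# exhaustively verify the associativity and distributivity axioms
associative = all(mul(a, mul(b, c)) == mul(mul(a, b), c)
                  for a in R for b in R for c in R)
distributive = all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
                   for a in R for b in R for c in R)

# the units (invertible elements) of Z/6Z
units = [a for a in R if any(mul(a, b) == 1 for b in R)]
```

Here the units come out as 1 and 5, a group of order two under multiplication, illustrating the "group of units" defined above.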
Polar Graphs: Rose Curves and Lemniscates -- from Wolfram Library Archive Polar Graphs: Rose Curves and Lemniscates Organization: Wolfram Research, Inc. Students will be able to recognize from the equation whether the polar function is a rose curve, lemniscate, limaçon, or a circle, and determine information about the polar graph, such as orientation, radius, diameter, number of petals, etc. from the equation. Students will also learn how to plot these graphs using Mathematica. This notebook covers topics that fulfill the following NCTM standards: • Algebra 9–12: understand patterns, relations, and functions • Algebra 9–12: use mathematical models to represent and understand quantitative relationships Education > Precollege Mathematics > Algebra Mathematics > Geometry > Analytic Geometry Mathematics > Geometry > Trigonometry Pre-Calculus, Algebra 2, College Algebra, Trigonometry, Analytic Geometry, Polar Graphs, Polar Equations, Rose Curves, Lemniscates, Petals, Symmetry, High School, Precollege, NCTM PolarGraphsRoseCurvesLemniscates.nb (1.2 MB) - Mathematica Notebook
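One fact the notebook teaches, recognizing the petal count of a rose curve from its equation, reduces to a one-line rule. A sketch, assuming the standard form r = a·cos(kθ) or r = a·sin(kθ) with positive integer k:

```python
def rose_petals(k):
    """Petal count for the rose curve r = a*cos(k*theta) (or sin):
    k petals when k is odd, 2k petals when k is even."""
    return k if k % 2 == 1 else 2 * k
```

For example, r = cos(3θ) has 3 petals while r = cos(4θ) has 8; the k = 1 case degenerates to a circle (a single "petal").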
The Business of Direct Mail (Business Education Series)

Author(s): Rene Gnam
ISBN: 0910475237
ISBN-13: 9780910475235
Format: Hardcover
Edition: Workbook
Pub. Date: 1984-06
Publisher: Ket Enterprise
List Price: $26.00

Click the link below to compare prices at 110+ bookstores and get up to 90% off list price!
[Detail & Customer Review from Barnes & Noble] [Detail & Customer Review from Amazon]
Van Den Bergh / Lectures on the Mathematics of Finance (CRM Monograph) / Ioannis Karatzas
{"url":"http://www.alldiscountbooks.net/_0910475237_i_.html","timestamp":"2014-04-18T00:13:37Z","content_type":null,"content_length":"34655","record_id":"<urn:uuid:511b66e1-6676-4073-b9bb-7cb2b9626b2d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
A farmer with 12,000 meters of fencing wants to enclose a rectangular plot that borders on a straight highway. If the farmer does not fence the side along the highway, what is the largest area that can be enclosed?
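The standard answer to the fencing question above can be sanity-checked in a few lines. This is just an illustrative sketch; the variable names are mine, not from the thread:

```python
# Three sides are fenced; the side along the highway is not.
# If x is the depth of the plot (the two fenced sides perpendicular
# to the highway), the remaining fencing gives the highway-parallel
# side length L - 2*x, so the area is A(x) = x * (L - 2*x).
L = 12000  # meters of fencing

def area(x):
    return x * (L - 2 * x)

# A(x) is a downward parabola; its vertex x = L/4 gives the maximum.
x_best = L / 4
print(x_best, L - 2 * x_best, area(x_best))  # 3000.0 6000.0 18000000.0
```

So the largest enclosable area is 18,000,000 square meters, from a 3,000 m by 6,000 m plot.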
{"url":"http://openstudy.com/updates/50907850e4b043900ad93647","timestamp":"2014-04-18T23:24:26Z","content_type":null,"content_length":"59008","record_id":"<urn:uuid:77b1e95a-bcfd-4ab7-be16-e7f7b6b934db>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with linear transformation

April 6th 2013, 01:20 AM #1 (Stormey, joined Nov 2012)

Hi guys. I need to prove the following:

Let $V$ be a vector space over $\mathbb{R}$ such that $\dim(V)>1$. Let $\varphi:V\rightarrow V$ be a linear transformation such that $\varphi^2=-I$. Prove that for every $v\neq 0$: $v,\varphi(v)$ are linearly independent.

This is what I tried so far: Let $v\in V$. Suppose $\alpha_1v+\alpha_2\varphi(v)=0$; I need to prove $\alpha_1=\alpha_2=0$. Since $\varphi$ is linear and $\alpha_1v+\alpha_2\varphi(v)=0$, then also $\varphi(\alpha_1v+\alpha_2\varphi(v))=0$, and then $\alpha_1\varphi(v)+\alpha_2\varphi^2(v)=0$. Now since $\varphi^2=-I$, this gives $\alpha_1\varphi(v)-\alpha_2v=0$. But now I'm not sure what conclusion I should draw from this... I could really use some guidance here. Thanks in advance!

Last edited by Stormey; April 6th 2013 at 01:23 AM.

April 6th 2013, 04:44 AM #2

Re: Help with linear transformation

Hi Stormey!

Suppose $v,\varphi(v)$ are linearly dependent. Then there is some $t \in \mathbb{R}$ such that $\varphi(v) = tv$. What do you get for $\varphi(\varphi(v))$?

April 6th 2013, 05:19 AM #3

Re: Help with linear transformation

Hi ILikeSerena! Thanks for the help. Let me see if I got it right: if $\varphi(v)=tv$, then $\varphi(\varphi(v))=\varphi(tv)=t\varphi(v)$, and we'll get a contradiction, since on the one hand $\varphi(\varphi(v))=-v$, so $\varphi(v)=-\frac{1}{t}v$ (because according to the assumption $t\neq 0$), while on the other hand $\varphi(v)=tv$; together these give $t^2=-1$, which has no solution in $\mathbb{R}$.

April 6th 2013, 05:24 AM #4
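As a concrete sanity check of the argument in this thread (my own numeric sketch, not from the forum): on $\mathbb{R}^2$, rotation by 90 degrees satisfies $\varphi^2=-I$, and a determinant test confirms that $v$ and $\varphi(v)$ are independent for every sampled $v\neq 0$. The matrix, random seed, and tolerance below are my choices:

```python
import numpy as np

# A concrete phi with phi^2 = -I on R^2: rotation by 90 degrees.
J = np.array([[0.0, -1.0],
              [1.0, 0.0]])
assert np.allclose(J @ J, -np.eye(2))

# The 2x2 matrix with columns v and J v has determinant v1^2 + v2^2,
# which is strictly positive for v != 0, so v and phi(v) are independent.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(2)
    M = np.column_stack([v, J @ v])
    assert abs(np.linalg.det(M)) > 1e-12
print("v and phi(v) were independent for all sampled v")
```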
{"url":"http://mathhelpforum.com/advanced-algebra/216794-help-linear-transformation.html","timestamp":"2014-04-21T02:55:06Z","content_type":null,"content_length":"49973","record_id":"<urn:uuid:01bd9503-3305-4c6e-aee8-4ce569b1bbaa>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Vineland Trigonometry Tutor Find a Vineland Trigonometry Tutor ...As part of that job, I led calculus recitations, tutored students in a variety of math classes at a math assistance center, and also worked as a private tutor. I have a Certificate of Eligibility to teach mathematics in New Jersey, and hope to become a teacher through the Alternate Route To Teac... 19 Subjects: including trigonometry, calculus, geometry, GRE ...Chemistry can be confusing for a lot of students, mainly because most of the work is conceptual and cannot be physically seen; however it is a very interesting field once you master the basics. I love to get students excited about science, and chemistry is no exception. I passed the AP chemistry exam in high school and have taken 1 more year of inorganic in college. 30 Subjects: including trigonometry, chemistry, English, biology ...Providing a simple explanation to complex math principles at the student's level is how I work with students to help them master geometry skills and concepts. Determining parameters, such as perimeter, area, surface area, and volume for circles, triangles, and quadrilaterals will become a simple... 9 Subjects: including trigonometry, chemistry, algebra 2, geometry Latoya graduated from the University of Pittsburgh in December of 2007 with a Bachelor's degree in Psychology and Medicine and a minor in Chemistry. Currently, she is pursuing her Master's in Physician Assistance. Her goal is to practice pediatric medicine in inner city poverty stricken communities. 13 Subjects: including trigonometry, chemistry, geometry, biology ...During my time in college, I took one 3-credit course in Linear Algebra. At least three of the other fourteen math courses I took also touched on topics from Linear Algebra. While I was studying, I worked in the Math Center at my college. 11 Subjects: including trigonometry, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/vineland_trigonometry_tutors.php","timestamp":"2014-04-19T17:27:28Z","content_type":null,"content_length":"24367","record_id":"<urn:uuid:30d360eb-bd78-49f8-9d61-36a5888bf643>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
DR on Agility for Warriors [Archive] - TankSpot 06-05-2009, 09:04 AM So I haven't seen anyone talk about it. Like almost every warrior, I think agility is great. However, one thing I noticed a while back when I had Broken Promise with mongoose is that in a raid setting the actual proc was only giving me about 0.12%-0.25% dodge. I monitored it closely because I thought I was seeing things, and I even made sure my glasses were on. When I was not in a raid I was getting about the 1.2% dodge I should get from it. After that experience I have been a firm believer in going for another enchant besides mongoose. Has anyone else noticed anything similar? This is mainly focused on warriors.
{"url":"http://www.tankspot.com/archive/index.php/t-51051.html","timestamp":"2014-04-17T19:23:45Z","content_type":null,"content_length":"13613","record_id":"<urn:uuid:ff402248-4922-414e-8f8d-c6efefe751b0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
FASTlab: Nearest-Neighbor and Computational Geometry

Many of our fast algorithms rely on data structures, both existing ones and new ones we have introduced. Nearest neighbor search and other fundamental proximity problems based on metrics are ubiquitous across science and engineering, and also serve as illustrative computational prototypes for solving more complex problems.

FASTlab Home Papers/Code Team

See also:

All-Nearest-Neighbors: The all-nearest-neighbors problem (as well as the standard single-query nearest-neighbor problem) is a special case of the generalized N-body problem. [see webpage here]

Euclidean Minimum Spanning Tree: We demonstrate the most overall efficient algorithm for the classic EMST problem, in the context of hierarchical clustering. [see webpage here]

Cover Trees: We showed new analysis methods for cover trees, in the context of bichromatic, dual-tree algorithms. [see full entry here]

Cosine Trees: We introduced cosine trees, an analog of ball trees for dot products rather than distances, in the context of accelerating linear algebraic computations. [see full entry here]

Subspace Trees: We introduced subspace trees, which create splits along principal component directions, in the context of accelerating high-dimensional kernel summations. [see full entry here]

In preparation:

Comparison/Survey of Data Structures: We have implemented and compared 45 of the most widely-known or promising algorithms and data structures for proximity problems (5 variants of nearest-neighbor search and 3 variants of range search). We believe this is likely the most comprehensive experimental study of proximity problems that exists.

Adaptive and Parameterized Analysis of Proximity Data Structures: We are pursuing the development of theoretical techniques for more realistic runtime analysis of practical algorithms, toward a theory for the design of optimal data structures for proximity problems.

Rank-Approximate Nearest Neighbor Search: Retaining Meaning and Speed in High Dimensions
Parikshit Ram, Dongryeol Lee, Hua Ouyang, and Alexander Gray
Neural Information Processing Systems (NIPS) 2009, appeared 2010

A new formulation of approximate nearest-neighbor search which allows for the first time direct control of the error in terms of ranks, which retain meaning while the traditionally-used numerical distances are known to become meaningless as dimension grows. Empirical results show favorable accuracy for the same runtime compared to existing methods such as LSH.

Abstract: The long-standing problem of efficient nearest-neighbor (NN) search has ubiquitous applications ranging from astrophysics to MP3 fingerprinting to bioinformatics to movie recommendations. As the dimensionality of the dataset increases, exact NN search becomes computationally prohibitive; (1 + epsilon) distance-approximate NN search can provide large speedups but risks losing the meaning of NN search present in the ranks (ordering) of the distances. This paper presents a simple, practical algorithm allowing the user to, for the first time, directly control the true accuracy of NN search (in terms of ranks) while still achieving the large speedups over exact NN. Experiments with high-dimensional datasets show that it often achieves faster and more accurate results than the best-known distance-approximate method, with much more stable behavior.

@inproceedings{ram2010ranknn, author = "Parikshit Ram and Dongryeol Lee and Hua Ouyang and Alexander G. Gray", title = "{Rank-Approximate Nearest Neighbor Search: Retaining Meaning and Speed in High Dimensions}", booktitle = "Advances in Neural Information Processing Systems (NIPS) 22 (Dec 2009)", year = "2010", publisher = {MIT Press} }

New Algorithms for Efficient High-Dimensional Nonparametric Classification
Ting Liu, Andrew W. Moore, Alexander G. Gray
Journal of Machine Learning Research (JMLR), 2006

Fast exact algorithms for nearest-neighbor classification (and related problems), exploiting the fact that solving this does not strictly require finding the nearest neighbors, and demonstrating speedups in higher dimensionalities than typical for nearest neighbor search. [pdf]

Abstract: This paper is about non-approximate acceleration of high-dimensional nonparametric operations such as k nearest neighbor classifiers. We attempt to exploit the fact that even if we want exact answers to nonparametric queries, we usually do not need to explicitly find the data points close to the query, but merely need to answer questions about the properties of that set of data points. This offers a small amount of computational leeway, and we investigate how much that leeway can be exploited. This is applicable to many algorithms in nonparametric statistics, memory-based learning and kernel-based learning. But for clarity, this paper concentrates on pure k-NN classification. We introduce new ball-tree algorithms that on real-world data sets give accelerations from 2-fold to 100-fold compared against highly optimized traditional ball-tree-based k-NN. These results include data sets with up to 10^6 dimensions and 10^5 records, and demonstrate non-trivial speed-ups while giving exact answers.

@article{liu2006nnclsf, Author = "Ting Liu and Andrew W. Moore and Alexander G. Gray", Title = "{New Algorithms for Efficient High Dimensional Nonparametric Classification}", journal = "Journal of Machine Learning Research (JMLR)", volume = "7", pages = "1135--1158", year = "2006" }

An Investigation of Practical Approximate Nearest Neighbor Algorithms
Ting Liu, Andrew W. Moore, Alexander Gray, and Ke Yang
Neural Information Processing Systems (NIPS) 2004, appeared 2005

A way to do approximate nearest-neighbor search using a data structure called a spill tree, which yields significant speedup over other methods such as LSH. [pdf]

Abstract: This paper concerns approximate nearest neighbor searching algorithms, which have become increasingly important, especially in high dimensional perception areas such as computer vision, with dozens of publications in recent years. Much of this enthusiasm is due to a successful new approximate nearest neighbor approach called Locality Sensitive Hashing (LSH). In this paper we ask the question: can earlier spatial data structure approaches to exact nearest neighbor, such as metric trees, be altered to provide approximate answers to proximity queries and if so, how? We introduce a new kind of metric tree that allows overlap: certain datapoints may appear in both the children of a parent. We also introduce new approximate k-NN search algorithms on this structure. We show why these structures should be able to exploit the same random projection-based approximations that LSH enjoys, but with a simpler algorithm and perhaps with greater efficiency. We then provide a detailed empirical evaluation on five large, high dimensional datasets which show up to 31-fold accelerations over LSH. This result holds true throughout the spectrum of approximation levels.

@inproceedings{liu2005approxnn, Author = "Ting Liu and Andrew W. Moore and Alexander G. Gray and Ke Yang", Title = "{An Investigation of Practical Approximate Nearest Neighbor Algorithms}", booktitle = "Advances in Neural Information Processing Systems (NIPS) 17 (Dec 2004)", Year = "2005", publisher = {MIT Press} }
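To make the flavor of these methods concrete, here is a minimal brute-force exact k-NN classifier in Python. This is my own sketch, not the ball-tree or spill-tree algorithms described above; those compute the same answers as this exact baseline (for exact search) while pruning most of the distance computations. The data and function names are mine:

```python
import numpy as np

# Brute-force exact k-NN classification by majority vote.
def knn_predict(X_train, y_train, x_query, k=3):
    d = np.linalg.norm(X_train - x_query, axis=1)   # distances to all points
    nearest = np.argsort(d)[:k]                     # indices of the k nearest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                # majority vote

X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.0]), k=3))  # -> 0
```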
{"url":"http://www.fast-lab.org/compgeometry.html","timestamp":"2014-04-16T16:03:30Z","content_type":null,"content_length":"12447","record_id":"<urn:uuid:27d71a0b-e6d2-4bed-ba9c-c90290ad34c4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
El Monte Trigonometry Tutor Find an El Monte Trigonometry Tutor ...I'm a senior at Caltech, where I study mathematics. My interests range from the sciences to humanities, focusing especially on math, physics, and art history. I am currently focusing on research, and planning to attend a graduate program for a PhD in art history. 51 Subjects: including trigonometry, reading, chemistry, calculus ...I achieved A's in my algebra II class in high school and have been tutoring many high school students in algebra II through WyzAnt. When I was an organic chemistry tutor at UCI, we shared the tutoring room with general chemistry tutors; I would aid students that came by looking for help with gen... 9 Subjects: including trigonometry, chemistry, physics, geometry ...I moved to California 7 years ago, but I have been tutoring for 12 years and I have taught almost every course from 7th grade Math to Calculus over the past 9 years in Canada and the United States. I love tutoring because it gives me a chance to focus on one person at a time and most people just... 11 Subjects: including trigonometry, physics, geometry, algebra 1 I graduated from UCI with a Bachelor's Degree in biology, and my goal is to seek a career in the medical field. I have experience tutoring middle school, high school, and college students, assisting them with subjects such as biology, introductory microbiology, introductory biochemistry, algebra, a... 22 Subjects: including trigonometry, English, reading, calculus ...I tutored the students in one-on-one sessions, group sessions, and conducted review sessions before exams. In addition, I was a teaching assistant for undergraduate and graduate students in the Biomedical Engineering and Kinesiology departments. It is my goal to not only teach my students the material, but to give them the tools needed to succeed in all their classes. 30 Subjects: including trigonometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/El_Monte_trigonometry_tutors.php","timestamp":"2014-04-19T12:16:42Z","content_type":null,"content_length":"24164","record_id":"<urn:uuid:fab550ae-8b2c-44ce-a14f-d7f4cd2831c2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparison of plots using Stata, R base, R lattice, and R ggplot2, Part I: Histograms

One of the nicer things about many statistics packages is the extremely granular control you get over your graphical output. But I lack the patience to set dozens of command line flags in R, and I'd rather not power the computer by pumping the mouse trying to set all the clicky-box options in Stata's graphics editor. I want something that just looks nice, using the out-of-the-box defaults. Here's a little comparison of 4 different graphing systems (three using R, and one using Stata) and their default output for plotting a histogram of a continuous variable split over three levels of a categorical variable.

First I'll start with the three graphing systems in R: base, lattice, and ggplot2. If you don't have the last two packages installed, go ahead and download them:

install.packages("lattice")
install.packages("ggplot2")

Now load these two packages, and download this fake dataset I made up containing 100 samples each from three different genotypes ("geno") and a continuous outcome ("trait"):

library(lattice)
library(ggplot2)
mydat <- read.csv("http://people.vanderbilt.edu/~stephen.turner/ggd/2009-09-21-histodemo.csv")

Now let's get started...

R: base graphics

R: lattice

histogram(~trait | factor(geno), data=mydat, layout=c(1,3))

R: ggplot2

qplot(trait,data=mydat,facets=geno~.)
# Update Tuesday, September 22, 2009
# A commenter mentioned that this code did not work.
# If the above code does not work, try explicitly
# stating that you want a histogram:
qplot(trait,geom="histogram",data=mydat,facets=geno~.)

Stata

insheet using "http://people.vanderbilt.edu/~stephen.turner/ggd/2009-09-21-histodemo.csv", comma clear
histogram trait, by(geno, col(1))

Commentary

In my opinion ggplot2 is the clear winner. Again I'll concede that all of the above graphing systems give you an incredible amount of control over every aspect of the graph, but I'm only looking for what gives me the best out-of-the-box default plot using the shortest command possible. R's base graphics give you a rather spartan plot, with very wide bins. It also requires 4 lines of code.
(If you can shorten this, please comment.) By default, the base graphics system gives you counts (frequency) on the vertical axis.

The lattice package in R does a little better, perhaps, but the default color scheme is visually less than stellar. Also, I'm not sure why the axis labels switch sides every other plot, and the ticks on top of the plot are probably unnecessary. I still think the bins are too wide. You lose some information, especially on the bottom plot towards the right tail. The vertical axis is proportion of total.

Stata's default plot looks very similar to lattice, but again uses a very unattractive color scheme. It uses density for the vertical axis, which may not mean much to non-statisticians.

The default plot made by ggplot2 is just hands-down good-looking. There are no unnecessary lines delimiting the bins, and the binwidth is appropriate. The vertical axis represents counts. The black bars on the light-gray background have a good data-ink ratio. And it required the 2nd shortest command, only 3 characters longer than the Stata equivalent.

I'm ordering the ggplot2 book (Amazon, ~$50), so as I figure out how to do more with ggplot2 I'll post more comparisons like this. If you use SPSS, SAS, MATLAB, or something else, post the code in a comment here and send me a picture or link to the plot and I'll post it here.

6 comments:

1. Your ggplot code does not work for me. "Error: ggplot: Some aesthetics (y) are missing sceles..." Didn't you forget to set the "geom" parameter? I am new to ggplot. I am looking forward to more examples. Honestly, this one does not convince me that ggplot2 is worth learning.

2. Hmm... worked for me just now. Are you using the most up to date version of ggplot2? I'm not sure how new the qplot function is, but that's what I'm using here.
qplot apparently knew I wanted a histogram based on how I called it, but you could explicitly state that you want a histogram:

qplot(trait,geom="histogram",data=mydat,facets=geno~.)

Alternatively you could use the longwinded ggplot function, but I don't know yet how to use this syntax and get the three plots to line up vertically.

ggplot(mydat, aes(trait)) + geom_histogram() + facet_wrap(~geno)

Learning R has a much more extensive comparison of lattice and ggplot2, including commentary from ggplot2's creator, in a 14-part series here.

3. I am sorry. You are right - I did not use the most recent version. I installed ggplot2 yesterday. However, I was using R 2.6.2 and therefore I got version 0.6. The current version is 0.8.3.

4. Great post. I agree that for basic exploratory analysis, it's important to have a graphic system that is efficient. When it comes to a finished product for inclusion in a report, then programming control and lots of options are more important to me. The following are two ways to produce plots in base graphics with fewer lines of code.

sapply(airquality[1:3], hist)

for(i in seq(3)) hist(airquality[[i]], main = names(airquality)[i], xlab = names(airquality)[i])

The first is concise but does not have pretty labels. The second is a little longer but does have labels.

5. You can do this in two lines as well with the base package. For example:

val <- c(rnorm(500), rpois(500,6), rnorm(500,sd=2))
cod <- c(rep('a',500), rep('b',500), rep('c',500))

6. In regards to your ggplot2 example for multiple histograms, do you know if there is a) an easy way to reorder the facets OR b) a way to relabel the facet components? For example, if you wanted from top to bottom the labels to read 'Aa','aa','AA' (and correspond with their respective graphs). The facet command seems to order the information alphabetically.
I read through the files you posted on July 27, 2010 from Hadley Wickham's short course (paired with multiple google searches) but haven't been able to figure out how to reorder or relabel a properly ordered set of histograms.
{"url":"http://gettinggeneticsdone.blogspot.com/2009/09/comparison-of-plots-using-stata-r-base.html","timestamp":"2014-04-19T04:20:45Z","content_type":null,"content_length":"114673","record_id":"<urn:uuid:c1fc3050-5bdc-4c8f-a420-473589005c83>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Items tagged with 3d

In geom3d. I want to find the vertices A(x1,y1,z1), B(x2,y2,z2), where x1, y1, z1, x2, y2, z2 are integers, so that the triangle OAB (O is the origin) has integer perimeter and integer area. I tried:

> restart:
L:=[]:
for x1 from -N to N do for y1 from x1 to N do for z1 from y1 to N do
for x2 from -N to N do for y2 from -N to N do for z2 from -N to N do
a:=sqrt(x1^2+y1^2+z1^2): b:=sqrt(x2^2+y2^2+z2^2): c:=sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2):
p:=(a+b+c)/2: S:=sqrt(p*(p-a)*(p-b)*(p-c)):  # semi-perimeter and Heron's area; these two assignments were missing from the posted code
if type(2*p, integer) and type(S, posint) then L:=[op(L), [[0, 0, 0], [x1, y1, z1], [x2, y2, z2]]]: fi:
od: od: od: od: od: od:

But my computer runs for too long, and I cannot get the result. How can I get the answer? If the lengths of the sides are 6, 25, 29, I tried:

DirectSearch:-SolveEquations([(x2-x1)^2+(y2-y1)^2+(z2-z1)^2 = 6^2, (x3-x2)^2+(y3-y2)^2+(z3-z2)^2 = 25^2, (x3-x1)^2+(y3-y1)^2+(z3-z1)^2 = 29^2], {abs(x1) <= 30, abs(x2) <= 20, abs(x3) <= 20, abs(y1) <= 20, abs(y2) <= 20, abs(y3) <= 20, abs(z1) <= 20, abs(z2) <= 20, abs(z3) <= 20}, assume = integer, AllSolutions, solutions = 1);
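For comparison, here is a brute-force version of the search from the question, written in Python rather than Maple. It is only a sketch: I require each side length to be an integer, which is slightly stronger than the "2p is an integer" condition in the posted code, and I compute the area from the cross product |OA x OB|/2 instead of Heron's formula. The small bound N=4 keeps it fast while still finding solutions such as the 3-4-5 triangle:

```python
from itertools import product
from math import isqrt

def int_sqrt_or_none(n):
    # exact integer square root, or None if n is not a perfect square
    r = isqrt(n)
    return r if r * r == n else None

def search(N):
    found = []
    rng = range(-N, N + 1)
    for A in product(rng, repeat=3):
        a = int_sqrt_or_none(sum(t * t for t in A))
        if not a:
            continue  # |OA| must be a positive integer
        for B in product(rng, repeat=3):
            b = int_sqrt_or_none(sum(t * t for t in B))
            c = int_sqrt_or_none(sum((s - t) ** 2 for s, t in zip(A, B)))
            if not b or not c:
                continue
            # area = |OA x OB| / 2 must be a positive integer
            cx = A[1] * B[2] - A[2] * B[1]
            cy = A[2] * B[0] - A[0] * B[2]
            cz = A[0] * B[1] - A[1] * B[0]
            n = int_sqrt_or_none(cx * cx + cy * cy + cz * cz)
            if n and n % 2 == 0:
                found.append((A, B, a + b + c, n // 2))  # (A, B, perimeter, area)
    return found

res = search(4)
print(len(res), res[0])
```

For example, A=(0,3,4), B=(0,0,4) gives sides 5, 4, 3, perimeter 12 and area 6, and appears among the results.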
{"url":"http://www.mapleprimes.com/tags/3d","timestamp":"2014-04-20T22:02:29Z","content_type":null,"content_length":"134396","record_id":"<urn:uuid:f4df5114-68ef-4690-bebb-561c9f7953e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
My life with Android :-)

Before we get back to Android programming, we need some theoretical background on signal analysis. I uploaded a report onto Sfonge.com knowledge exchange. The document is somewhat heavy on math. To quote Dr. Kaufman's Fortran Coloring Book: if you don't like it, skip it. But if your teacher likes it, you failed. Click here to read the report; you need free registration to access it.

If you are too impatient to read, here is the essence. There is no single perfect algorithm when analysing acceleration signals. The analysis framework should provide a toolbox of different algorithms, some working in the time domain, some operating in the frequency domain. The decision engine that classifies the movements may use a number of algorithms, a characteristic set for each movement type. It has been concluded in the medical research community that wavelet transformation is the optimal algorithm for frequency-domain analysis of acceleration signals. This report presented concrete cases of how wavelet transformation can be used to classify three common movements: walking, running and shake. In addition, the wavelet transformation provided data series that can be used to extract other interesting information, e.g. step count.

For those who would like to repeat my experiments, I uploaded the prototype. First you need Sage (I used version 4.3.3). Download and unpack the prototype package and enter the proto directory. Then launch Sage (with the "sage" command) and issue the following commands:

import accel
accel.movements(5)

Now you will be able to look at the different waveforms. Sage is scriptable in Python. If you know Python, you will find everything familiar; if not - bad luck, you won't get far in Sage.

This blog entry does not have an associated example program; the samples of this post were taken using the sensor monitoring application presented here.
This time I will give a taste of what can be achieved by using the acceleration sensor for identifying movement patterns. Previously we have seen that the acceleration sensor is pretty handy at measuring the orientation of the device relative to the Earth's gravity. In that discussion we were faced with the problem of what happens if the device is subject to acceleration other than the Earth's gravity. In that measurement, it caused noise. For example, if you move the device swiftly, you can force a change between landscape and portrait mode even if the device's position relative to the Earth's surface is constant. That's because the acceleration of the device's movement is added to the Earth's gravity acceleration, distorting the acceleration vector.

In this post, we will be less concerned about the device's orientation. We will focus on these added accelerations. Everyday movements have characteristic patterns, and by analysing the samples produced by the acceleration sensor, cool context information can be extracted from the acceleration data.

The acceleration sensor measures the acceleration along three axes. This 3D vector is handy when one wants to measure the orientation of e.g. the gravity acceleration vector. When identifying movements, it is more useful to work with the absolute value of the acceleration, because the device may change its orientation during the movement. Therefore we calculate

ampl = sqrt( x^2 + y^2 + z^2 )

where x, y, z are the elements of the measured acceleration vector along the three axes. Our diagrams will show this ampl value.

Let's start with something simple. This is actually me walking at Waterloo station. The diagram shows the beginning of the walking: it starts with a stationary position, then I start to walk.
On the other hand, if we move with constant speed but our direction changes, that's quite a significant change in the velocity vector. That is the source of the gravity acceleration, the Earth's surface (and we along with it) rotates with constant speed but as the speed is tangent to the Earth's surface, its direction constantly changes as the Earth rotates. The same goes for walking. Whenever the foot hits the ground, its velocity vector changes (moved down, starts to move up). Similarly, when the foot reaches the highest point, it moved up and then it starts to move down. The maximums and the minimums of the acceleration amplitude are at these points. As these accelerations are added to the Earth's acceleration (9.81 m/s^2), you will see an offset of about 10. The sine wave-like oscillation of the acceleration vector's absolute value is unmistakable. The signal is not exactly sine wave but the upper harmonics attenuate quickly. You can see that my walking caused about 1.2g fluctuation (-0.6 g to 0.6 g). Running is very similar except that the maximum acceleration is much higher - about 2 g peak-to peak. And now, everybody's favourite, the shaking. Shaking is very easy to identify. First, the peak acceleration is very high (about 3 g in our case). Also, the peaks are spike-like (very high upper harmonic content). In the following, I will present some algorithms that produce handy values when identifying these movements.
Why the quantum? Insights from classical theories with a statistical restriction It is common to assert that the discovery of quantum theory overthrew our classical conception of nature. But what, precisely, was overthrown? Providing a rigorous answer to this question is of practical concern, as it helps to identify quantum technologies that outperform their classical counterparts, and of significance for modern physics, where progress may be slowed by poor physical intuitions and where the ability to apply quantum theory in a new realm or to move beyond quantum theory necessitates a deep understanding of the principles upon which it is based. In this talk, I demonstrate that a large part of quantum theory can be obtained from a single innovation relative to classical theories, namely, that there is a fundamental restriction on the sorts of statistical distributions over classical states that can be prepared. This restriction implies a fundamental limit on the amount of knowledge that any observer can have about the classical state. I will also discuss the quantum phenomena that are not captured by this principle, and I will end with a few speculations on what conceptual innovations might underlie the latter set and what might be the origin of the statistical restriction.
Lines CD and DE are tangent to circle A shown below. If arc CE is 126°, what is the measure of ∡CDE?

126°
63°
117°
54°
Number of integers coprime to l

up vote 6 down vote favorite A long time ago I've seen a paper considering, given $\ell$ fixed, estimates for $$ \sum_{n \leq x, (n, \ell) = 1} 1 $$ Of course, this is easy to estimate with a trivial error term of $O(\varphi(\ell))$. However, in the paper I am looking for, the authors attempted to obtain better bounds using some Fourier analysis (in particular the Fourier series for the fractional part of $x$). I think bounds on the sum $$ \sum_{n \leq x} (n, \ell) $$ are essentially an equivalent variation of the problem, so references on this problem are welcome as well. The reason why I am interested in this problem is ... pure curiosity. I am curious to see how the Fourier methods meshed in, and what kind of bounds they gave, even though of course we cannot really expect anything too fantastic in this problem.

nt.number-theory analytic-number-theory fourier-analysis reference-request

I vaguely remember that the journal in question is likely to be the Canadian Math. Bulletin, or the Canadian Math. Journal, but I could be completely wrong on this hunch (so far my attempts at googling with "canadian" have failed). – kolik May 30 '12 at 8:56

2 Answers

It is easy to explain "how the Fourier analysis meshed in". Namely, using the standard notation for the Möbius function, Euler's totient function, and the integer / fractional part functions, your sum can be written as $$ \sum_{n\le x} \sum_{d\mid(n,l)} \mu(d) = \sum_{d\mid l} \mu(d) \lfloor x/d \rfloor = x \sum_{d\mid l} \frac{\mu(d)}{d} + R = \frac{\phi(l)}{l}x + R, $$ where, since $\lfloor x/d \rfloor = x/d - \{x/d\}$, $$ R = -\sum_{d\mid l} \mu(d) \{x/d\}. $$ As Fedor Petrov observed, this already suffices to improve the remainder term from $\phi(l)$ to $\tau(l)$ and indeed, to the number of square-free divisors of $l$, which is $2^{\omega(l)}$. To get better estimates, one can try to plug in the Fourier expansion for $\{x/d\}$ and estimate the resulting sums.
up vote 5 down vote As to the paper you mention, I think I was able to spot it: is it "Extremal values of $\Delta(x,N)=\sum_{n<xN,(n,N)=1} 1-x\phi(N)$" by P. Codeca and M. Nair, published in Canad. Math. Bull. 41 (3) (1998), pp. 335–347? Another paper by the same authors on the same subject: "Links between $\Delta(x,N)=\sum_{n<xN,(n,N)=1} 1-x\phi(N)$ and character sums", Boll. Unione Mat. Ital. Sez. B Artic. Ric. Mat. 6 (2) (2003), pp. 509–516. I could find one more paper on this problem published in a Canadian journal: "The distribution of totatives" by D.H. Lehmer, Canad. J. Math. 7 (1955), pp. 347–357.

2 It already gives a much better bound than $O(\varphi(l))$, namely, $O(\tau(l))$. – Fedor Petrov May 30 '12 at 10:39
1 Well, I know how it meshed in. I want to see the real work :-) – kolik May 30 '12 at 11:19
Thank you! It is amazing that you've found the reference :-) – kolik Jun 2 '12 at 3:21
I was looking for the paper by Codeca and Nair in the Math. Bulletin – kolik Jun 2 '12 at 3:22

up vote 0 down vote As written above by Seva, one is led to exponential sums of the form $$ \sum_{d|\ell} \mu(d) e^{\frac{2 i \pi y}{d}} $$ where $y = hx$ is an integer multiple of $x$. If one wants to reduce the trivial error term $O( \tau(\ell))$ to $O(\varepsilon \tau(\ell))$, one must consider the range $h \ll \frac {1}{\varepsilon}$ (at least). But I doubt that something really useful can be said about this particular sum, due to the presence of the arithmetic factor $1_{d| \ell}$ (let alone the Möbius function). If the condition $d|\ell$ is dropped (the sum is over a whole interval), then the best known results (to my knowledge) on this kind of sums are contained in this paper of Y.-F.S. Pétermann. Note also that sieve methods give nontrivial bounds on the quantity $\sum_{n \leq x, (n, \ell) = 1} 1$ (without Fourier analysis).
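The Möbius-inversion identity in the first answer is easy to check numerically. Below is a small pure-Python sketch (the helper names are ours); note that since $\lfloor x/d\rfloor = x/d - \{x/d\}$, the fractional-part sum enters with a minus sign relative to the main term.

```python
from math import gcd, floor

def mobius(n):
    """Mobius function mu(n) via trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n has a squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result  # one leftover prime factor
    return result

def count_coprime(x, l):
    """Direct count of 1 <= n <= x with gcd(n, l) = 1."""
    return sum(1 for n in range(1, floor(x) + 1) if gcd(n, l) == 1)

def main_plus_remainder(x, l):
    """phi(l)/l * x minus the Mobius sum over fractional parts {x/d}.

    Exact (up to float rounding), since sum_{d|l} mu(d)*floor(x/d)
    counts the coprime n <= x and floor(x/d) = x/d - {x/d}.
    """
    divs = [d for d in range(1, l + 1) if l % d == 0]
    phi = sum(1 for k in range(1, l + 1) if gcd(k, l) == 1)
    r = -sum(mobius(d) * (x / d - floor(x / d)) for d in divs)
    return phi / l * x + r

for l, x in [(2, 5.0), (60, 1000.5), (210, 317.25)]:
    assert abs(count_coprime(x, l) - main_plus_remainder(x, l)) < 1e-9
print("identity verified for all test cases")
```

Since the identity is exact, any discrepancy beyond float rounding would indicate a sign or bookkeeping error; this is a quick sanity check, not an estimate of the remainder's size.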
Is time universal? NO (and its proof)

I think you are referring to what Einstein thought of Relativity in 1907 in making your analysis? Because the link you provided clearly makes the point that the analysis you provided is not accepted in "modern relativity".

Yes, I've conceded that point already (that post got a bit lost in the midst of Prosoothus's proselytising).

OK, but how about the Einstein point of view - why has it been abandoned by modern physicists? I guess it wasn't useful any more, like Lorentzian relativity, or a scythe? I really don't know.

Billy T, All of the evidence points to the assumption that gravitational fields propel photons, and therefore photons only travel at the speed of c relative to those gravitational fields. I'm not sure why this is the case, but I'm pretty sure that the photon must have a non-standard gravitational field. This "non-standard" gravitational field interacts with the external gravitational field the photon is travelling through to exert a force on the photon if it is travelling at a speed that is slower or faster than c relative to the gravitational field. Does this mean that a photon has a dipolar gravitational field? Not necessarily, but it's the model that makes the most sense at the moment. If you have an alternative model that would explain how a particle could be accelerated in a uniform gravitational field, then please share.

Billy T, All of the evidence points to the assumption that gravitational fields propel photons.... You probably don't know it, but Pete and Aer are having a high-level discussion, well above my understanding, in which your gravity may not even exist. I don't want to interrupt them seriously, but can you cite even one piece of journal-published evidence that you are referring to?

Sounds a lot like the "local ether" model of wave propagation. When did you come up with your so-called theory? Over a year ago.
Most of the conservative scientists on this forum are desperately trying to avoid one fundamental question: Why do photons travel at c? If photons are at rest until acted upon by an outside force, then one can claim that photons don't have an aether, or a medium of travel/propulsion. But if photons don't have an aether, how the hell are they travelling at c? Why aren't they just "floating around"? Shouldn't they have to "push" against something to reach the speed of c?

Once you assume that photons must have an aether in order to have a constant speed of c, the mission is to find this aether. Since the MM experiment proved that the omnidirectional speed of light is constant on the surface of the Earth, it can only lead to two rational conclusions:

1) The Earth's gravitational field is dragging the aether around with it, and the speed of light is only equal to c relative to this aether.

2) The Earth's gravitational field is directly influencing the photons, so the speed of light is only equal to c relative to the field.

Although all "aetherists" assume #1, there is experimental evidence against it. As Billy suggested, if the medium of travel/propulsion of light is a dynamic aether, then this aether would drag light, which would change the apparent location of stars being viewed by an observer on the surface of the Earth. Since there is no evidence to support this phenomenon, model #1 is likely wrong.

In model #2, light would not be influenced at all by (uniform) gravitational fields unless it was travelling at a speed other than c relative to the field. The (uniform) gravitational field would only accelerate/decelerate a photon's speed to c, but would not change its path. In other words, where a uniform gravitational field is present, space would determine the photon's path, while the gravitational field would determine the photon's speed.
Billy T, I don't want to interrupt them seriously but can you cite even one piece of journal-published evidence that you are referring to?

The results of the Michelson-Morley experiment. Unless, of course, you believe in length contraction, time dilation, and leprechauns.

The results of the Michelson-Morley experiment.... Strange! Results of M&M type experiments and several others are generally accepted by most physicists as showing not only that the observed constancy of the speed of light implies c does not depend upon motion through an aether, but also that it does not depend upon many other things, like path length (which rejects the "tired light" crackpots' explanation of the red shift) or the strength of the gravity field where the experiment was done. Earth's gravity field is a vector field with different strength in different locations, and when the sun's and moon's gravity is considered, it is even modulated at each location. I asked you for a published reference that states that the speed of light does depend upon the strength of the gravity field and you give one that shows it does NOT!!!!

Last edited by Billy T; 09-02-05 at 02:45 PM.

Billy T, Strange! Results of M&M type experiments and several others are generally accepted by most physicists as showing not only that the observed constancy of the speed of light implies c does not depend upon motion through an aether, but also that it does not depend upon many other things, like path length (which rejects the "tired light" crackpots' explanation of the red shift) or the strength of the gravity field where the experiment was done. Earth's gravity field is a vector field with different strength in different locations, and when the sun's and moon's gravity is considered, it is even modulated at each location. I asked you for a published reference that states that the speed of light does depend upon the strength of the gravity field and you give one that shows it does NOT!!!!

I see you didn't understand my reply.
If you were to assume that length contraction and time dilation do not exist, then there is no other explanation for the fact that the omnidirectional speed of light is constant on the surface of the Earth than that the speed of light is directly, or indirectly, linked to gravitational fields. Also, if the speed of light is only equal to c relative to the gravitational field through which it is passing, then the average speed of light in an object that is moving through a gravitational field would decrease, resulting in the reactions in that object slowing down. So even the decreased tick rate of atomic clocks that are moving through the Earth's gravitational field can be used as proof of the link between the speed of photons and the gravitational field.

In summary, I'm stating that the very same experimental evidence that relativists use as proof of relativity can be used as proof of the linkage between the speed of light and gravitational fields. It all depends on which model you choose to believe.

Note: You seemed to imply in your response that you believe that I think that the strength of the gravitational field determines the speed of light. This is not the case. The speed of light is always equal to c relative to the gravitational field, regardless of the strength of the field. However, it is possible that a stronger field will accelerate/decelerate a photon to c faster than a weaker field.

...If you were to assume that length contraction and time dilation do not exist, then there is no other explanation for the fact that the omnidirectional speed of light is constant on the surface of the Earth than that the speed of light is directly, or indirectly, linked to gravitational fields. ... I asked for a published reference that supports your claim that many experiments prove the speed of light changes with the strength of the gravity field. I ask again (third time). Perhaps you do not understand this question/request?
To help you with your understanding: A published reference is something of the form: Modern Physics, Vol. 33 #4, pp. 34-45. It definitely is not of the form: "Assume several well accepted facts are wrong, then I can interpret the M&M experiment as supporting my view."

Recall several posts back, you said: Billy T, All of the evidence points to the assumption that gravitational fields propel photons....

All I am asking for is one published report - that should be easy if your just-quoted statement were true.

Last edited by Billy T; 09-03-05 at 03:19 PM.

Billy T, "Assume several well accepted facts are wrong, then I can interpret the M&M experiment as supporting my view."

More like: "Assuming one assumption is wrong, the results of the MM experiment support my theory."

You're making the mistake of assuming that the principle of invariance of light is a fact. IT IS NOT. It's only an assumption that has been tested only in an object that is stationary in the Earth's gravitational field. Your overgeneralization is similar to me dropping a rock on my foot and concluding that 10 m/s^2 is a universal constant everywhere in the universe.

Perhaps you do not understand this question/request? To help you with your understanding: A published reference is something of the form: Modern Physics, Vol. 33 #4, pp. 34-45.

Well, I don't know if there are any "published references" that contain the results of the MM experiment, or data showing the decreased tick rates of clocks that are moving through gravitational fields. If there aren't, I guess relativity must be true, right?

I strongly recommend you read the following: An explanation of the invariance of light which does not require any relativity as a consequence.

Last edited by MacM; 09-03-05 at 09:40 PM.

I admit to having not read the book you reference, MacM, but I will venture an opinion anyway because I feel the book is likely bogus for very simple reasons that I now explain.
It is a simple mathematical fact that Maxwell's equations, the equations that govern light propagation, are invariant under the Lorentz transformation and not the Galilean transformation. Maxwell's equations are amply confirmed in every respect. The very precise predictions they make are used in every aspect of modern technology and as the foundation for extremely predictive advanced physical theories of nature. I question then how light can be explained or understood in terms of Galilean transformations when these transformations are not a part of Maxwell's equations. I emphasize that this is not some confusion about observers and velocities but a simple mathematical fact: the Galilean transformation and Maxwell's equations are mathematically incompatible. I look forward to your reply.

Physics Monkey, I admit to having not read the book you reference, MacM, but I will venture an opinion anyway because I feel the book is likely bogus for very simple reasons that I now explain. It is a simple mathematical fact that Maxwell's equations, the equations that govern light propagation, are invariant under the Lorentz transformation and not the Galilean transformation. Maxwell's equations are amply confirmed in every respect. The very precise predictions they make are used in every aspect of modern technology and as the foundation for extremely predictive advanced physical theories of nature. I question then how light can be explained or understood in terms of Galilean transformations when these transformations are not a part of Maxwell's equations. I emphasize that this is not some confusion about observers and velocities but a simple mathematical fact: the Galilean transformation and Maxwell's equations are mathematically incompatible. I look forward to your reply.

I can only point out that this is the work of a NASA PhD physicist and he mathematically supports his work, including explanations involving Maxwell, etc.
It would seem that professionals should find this interesting, if not challenging. Rather than just thumbing their noses and refusing to even look at the concept based on faith and assumptions, they should actually scrutinize the mathematics and its consequences.

Unfortunately I can't speak with your physicist, and I'm not sure what exactly of your claims he is supporting. Some clarification might be helpful to me if you have the time, since I'm new to the community. It seems to me that if he is proposing describing light in all regimes using Galilean transformations and Maxwell's equations simultaneously, then something is very wrong.
Go to any PhD physicist you want, besides perhaps this fellow from NASA, and they will tell you the same thing. I can understand how in this fairly hostile intellectual environment you might feel like my assertion is simply faith-based, but the reason professional physicists tend to disregard books like the one you posted is that they state things which aren't true. In this case, the book proposes to explain light using only Galilean transformations, and this is simply not possible unless Maxwell's equations are to be modified. Given the wealth of experimental evidence for the equations as they stand, any professional physicist must, as a practicing member of an experimental science, proceed with tremendous skepticism. If I understand you correctly, you see this as faith, but I and most of the rest of the physics community see it as belief in a self-consistent and massively successful theory.

Again you are making assumptions without having actually read the material. ExtinctionShift is completely plausible and resolves numerous "counter-intuitive" consequences of relativity.

...You're making the mistake of assuming that the principle of invariance of light is a fact. IT IS NOT. It's only an assumption that has been tested only in an object that is stationary in the Earth's gravitational field. ... You can't assume that c is not constant to re-interpret the M&M experiment so that you can then conclude from M&M's results that c is not constant! As far as measuring the velocity of light only on the surface of the Earth, are you not aware that the astronauts left laser reflectors on the moon more than a decade ago? Time of flight measurements using them are routine. Do you not know that the Pioneer satellite is far beyond Pluto's orbit, yet routinely sending electromagnetic signals back to Earth and using the Doppler shift of them to get the speed, which can be integrated to give the distance, etc. Only with very extensive ignorance can you make the claims you do.
I am still asking for one published reference - you claimed "all experiments" support your personal theory about gravity pushing or slowing down light. None do!

I have to agree with Billy T here. It is a simple logical fallacy to assume what you are trying to prove. Also, as a practicing physicist, I am totally unaware of any evidence linking the propagation of light to the gravitational field. Certainly the curvature of spacetime can affect the motion of light, but it isn't why light moves. The motion of light is governed by Maxwell's equations as I discuss in my previous posts. No mention of gravity is necessary or made. This theory is extremely accurate and predictive; why would we abandon it without good reason?

MacM, how do you have intuition about what traveling near the speed of light is like? If by counter-intuitive you mean that near the speed of light things don't behave like they do when speeds are slow, then I agree. But why should intuition in one regime necessarily apply to another? Things that are very small also don't behave like everyday things; this is quantum mechanics. Is that wrong too? In a precise mathematical sense, special relativity is a simpler description of the world than Galilean relativity. It is also an accurate and predictive theory.

Billy T, You can't assume that c is not constant to re-interpret the M&M experiment so that you can then conclude from M&M's results that c is not constant!

I don't know why you are being so stubborn. The near-null results of the MM experiment can lead to two conclusions:

1) The omnidirectional speed of light is equal to c for all inertial observers everywhere in the universe. This would require the introduction of two new paraphysical phenomena into physics: length contraction and time dilation.

2) The omnidirectional speed of light is only equal to c for an observer that is stationary in a gravitational field.
Therefore there is a link between gravitational fields and the speed of light.

As any rational person can see, it is unscientific to jump to the conclusion that since the omnidirectional speed of light is equal to c for an observer that is stationary in a gravitational field, it must be equal to c for all inertial observers.

So tell me, if relativity can use the results of the MM experiment as proof that the speed of light is equal to c for all inertial observers, why can't I use the experiment as proof that the omnidirectional speed of light is only equal to c for an observer that is stationary in a gravitational field? If relativity can use the decreased tick rates of atomic clocks circling the Earth as proof of time dilation, why can't I use the decreased tick rate as proof that the speed of light has changed inside the clock?

As far as measuring the velocity of light only on the surface of the Earth, are you not aware that the astronauts left laser reflectors on the moon more than a decade ago? Time of flight measurements using them are routine. Do you not know that the Pioneer satellite is far beyond Pluto's orbit, yet routinely sending electromagnetic signals back to Earth and using the Doppler shift of them to get the speed, which can be integrated to give the distance, etc.

First of all, since the Moon is at a fixed distance from the Earth, and the Earth is at a fixed distance from the Sun, the speed of light coming to/from the moon, and the speed of radio waves travelling through the Sun's gravitational field, would be equal to c relative to the Earth. And if they were not, how exactly would you determine that they are not? For example, if there were radio waves coming from Pluto to Earth that are travelling at c - 10,000 m/s, how would you prove it? Wouldn't you just conclude Pluto was just a little farther away?

Only with very extensive ignorance can you make the claims you do.

Spare me your personal remarks.
Give me a list of experimental evidence proving that SR is valid, and I'll show you that very same evidence can be even more easily interpreted to prove that SR is not valid.

I am still asking for one published reference - you claimed "all experiments" support your personal theory about gravity pushing or slowing down light.

Light is propelled by gravity. The speed of light is constant through a gravitational field.

A) An observer that is stationary in a gravitational field will measure the omnidirectional speed of light to be constant. (Proof: The MM experiment can be interpreted as experimental proof that light is propelled by gravity.)

B) The average speed of light decreases in an object that is moving through a gravitational field. This causes the electromagnetic reactions in that object to slow down. (Proof: The decreased tick rates of atomic clocks, and the decreased decay rate of muons that are moving through the Earth's gravitational field, can both be interpreted as experimental proof that light is propelled by gravity.)

Now do you understand why I said that the experiments support my theory? Is the step-by-step logic I gave above sufficient for you, or do you want me to publish it somewhere?
Concentration results for inner products of two independent random Gaussian vectors

up vote 5 down vote favorite I wanted to know if there are standard results on concentration of the absolute value of inner products of two random vectors. Thus if $X, Y \in R^m$ are two independent random vectors with each entry distributed as $\mathcal{N}(0, 1/m)$, then how can we bound the following probability expression: $P ( | X^T Y | > \epsilon )$? Here, $\epsilon > 0$ is a given constant that is small.

3 Answers

up vote 5 down vote Since you're trying to bound the sum of zero-mean i.i.d. RVs, I would recommend you try to develop a Chernoff bound: $$\Pr(X^TY>\epsilon)\leq \inf_{s\geq 0} e^{-s\epsilon }\left(Ee^{sZ}\right)^m $$ where $Z=X_1Y_1$ is distributed according to a Normal Product distribution. I haven't carried out the calculation in full but I believe the moment generating function $Ee^{sZ}$ can be computed in closed form using the expression (6) for $K_0$ found here.

As to the tightness of the bound, notice that $$\Pr(X^TY>\epsilon)=\Pr\left(\sum_{i=1}^m\hat{Z}_i>m\epsilon\right)$$ where the $\hat{Z}_i$ are i.i.d. and each one is the product of two independent standard ($\mathcal{N}(0,1)$) Gaussian RVs. It is a standard Large Deviations result that such a probability goes to zero exponentially fast as $m\to\infty$ for every constant $\epsilon>0$. I am 99% sure that the Chernoff bound always yields the correct exponential rate (but not the correct coefficient of the leading exponent).

An alternative method is to exploit the rotational invariance of the Gaussian.
You can write $$X^T Y = |X| \left( \left(\frac{X}{|X|}\right)^T Y \right).$$ Because $Y$ is rotationally invariant, the inner product is now independent of $X$, and in fact just has distribution $N(0,1/m)$. Now let $C>1$ be an arbitrary parameter. We can bound the probability that $X^T Y > \epsilon$ by the probability that at least one of the following two events occurs.

1. $ \left(\frac{X}{|X|}\right)^T Y \geq \frac{\epsilon}{C}$. Assuming $ \epsilon \sqrt{m}/C$ tends to infinity, this occurs with probability $1-\Phi\left(\frac{\epsilon \sqrt{m}}{C}\right)=(1+o(1))\, \frac{C}{\epsilon\sqrt{2 \pi m}} \exp\left(-\frac{\epsilon^2 m}{2C^2}\right)$.

2. $|X| \geq C$. The norm of a Gaussian vector is well studied, and it is standard (see, for example, Chapter 2 of these notes) that $|X|$ is tightly concentrated around its expectation. For example, applying Corollary 2.3 of the linked notes gives that the probability this occurs is at most $\exp(-\frac{1}{4} (1-\frac{1}{C^2})^2 m)$.

For $\epsilon$ bounded away from $0$ you can choose $C$ to optimize the sum of the two terms, getting a bound that is exponential in $m$ but with a non-optimal exponent. If $\epsilon$ is tending to $0$ with $m$, then the first term is dominant. That term remains small so long as $\epsilon$ is much larger than $\sqrt{\frac{\log m}{m}}$.
Here's the question: Gina has a pair of wall hangings in the shape of congruent triangles. The measure of angle ABC is equal to the measure of angle DEF. What is the value of x? A. 3 B. 4.5 C. 6 D. 6.5

I did this twice and got x = 2 both times. What am I doing wrong here? Can someone help me?
Risk Management - Probability vs. Frequency

Re: Risk Management - Probability vs. Frequency

Accurately estimating probability (or frequency) can be difficult, so in most cases this is purely a "theoretical" discussion. ISO 14971 specifically states that risk is based on the probability of harm. The correct parameter is frequency (average events per period), to account for events that occur at a frequency > 1. Since probability cannot exceed 1, under ISO 14971 all events with a frequency exceeding one are treated as having the same "risk". So, for example, a minor burn that occurs 100 times a year has the same "risk" as a minor burn that occurs 1-2 times a year. Obviously, in this extreme example no-one would consider the risk the same. But I have seen cases where risks that differ by factors of 10 are treated the same.

Also, this can cause errors if there is a sequence of events and the overall probability of harm is based on the product of the individual probabilities. Again, if one of the intermediate events occurs at a frequency greater than 1, the result may underestimate the risk. The potential for "error" increases as the unit of time increases. Calculations based on per use or per day are usually OK, but calculations on a per year or per lifetime basis might have problems.

Obviously, risk should be risk no matter what units are used. Units are just for convenience. The ratio of risk for event A vs event B should be constant no matter what units are used. If frequency is used, the ratio never changes. But if probability is used, it can change ...
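The minor-burn comparison above can be made concrete with a toy scoring sketch (my own illustration, not from ISO 14971; the severity score and the severity-times-likelihood formula are assumptions):

```python
# Toy risk scores: severity x likelihood, computed once with raw
# frequency and once with probability capped at 1, to show how the
# cap erases the difference between frequent and rare events.

def risk_by_frequency(severity, events_per_year):
    return severity * events_per_year

def risk_by_probability(severity, events_per_year):
    # Probability of harm cannot exceed 1, so frequent events saturate.
    return severity * min(events_per_year, 1.0)

minor_burn = 1  # hypothetical severity score

print(risk_by_frequency(minor_burn, 100),
      risk_by_probability(minor_burn, 100))  # frequency keeps the 100x gap
print(risk_by_frequency(minor_burn, 1),
      risk_by_probability(minor_burn, 1))    # probability rates both the same
```

With the probability-based score, the 100-per-year burn and the once-per-year burn both come out at the same value, which is exactly the distortion the post describes.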
Re: [RFC] High Performance Packet Classifiction for tc framework

Michael Bellion and Thomas Heinz wrote:
> >This generalises to multiple dimensions e.g. for doing multiple
> >prefixes on source+target + different combinations of other bits such
> >as protocol, TOS etc. - i.e. arbitrary bit-subset classifiers. The
> >basic principle and the algorithm are the same.
> Hm, how do you want to solve the d-dimensional PCP by
> doing binary search for each dimension? Remember that
> PCP is not related to longest prefix matching. Instead
> priorities are used.

I don't know what you mean by "PCP", so can't answer the question.

> Maybe you should describe in little more detail what you mean
> by "This generalises to multiple dimensions ...".

I mean that the lookup algorithm works for multi-dimensional searches. Creating the search tree can be a little more involved, and I am not sure how much (if any) node duplication is needed when some kinds of rule priorities are used. It may help to say that you don't have to do binary search for each dimension separately, although that is a possible strategy for prefix matching.

You can build a tree where each node represents a point (a,b,c...) in prefix-length space, and whose children are other points in that space, if you choose to see the general problem as matching multiple prefixes.

At one extreme, a general bit matcher (i.e. no non-power-of-2 numerical ranges) can be treated as a multi-dimensional prefix match where each prefix is length 0 or 1. You see that, if the input rule set is _equivalent_ to a longest-prefix single-dimensional rule set, the optimal multi-dimensional search tree is trivially found and it does not do binary search on each dimension separately. -- Jamie
Comments on Haskell for Maths: Conjugacy classes, part 2

scheman (18 October 2009): Why is the first vertex index in c, k and kb 1, but 0 in q?

DavidA (20 October 2009): This is just convention. When we're labelling the vertices of a graph with integers, we usually start from 1. However, with the cube (q 3), the natural labelling is with 3-tuples from {0,1}. This is q' 3 in the code. We can convert this to an integer labelling by interpreting the 3-tuples as binary numbers - ie [0,0,0] = 0, [0,0,1] = 1, [0,1,0] = 2, etc. Thus we define q 3 = fromBinary (q' 3). HaskellForMaths gives you great flexibility over vertex labellings. For example, the Petersen graph can be labelled with the 2-subsets of [1..5], eg [1,5], [2,4]. But using the fromDigits function, we can instead have the vertices labelled as 15, 24, etc.
In the diagram below, PA and PB are tangent segments to circle O from P. If m∠P = 38, find m∠ABO.

First, since PA and PB are tangent segments to the circle, we know that angles PAO and PBO are right angles. (Radii drawn to a point of tangency are perpendicular to the tangent.) Then the measure of angle AOB is 142. (PAOB is a quadrilateral and thus the sum of the interior angles is 360 degrees. We have accounted for two right angles and the given 38 degrees already, so the remaining angle has measure 142. Or you could know that the central angle and the angle formed by tangents meeting at a point are supplementary.)

Now triangle AOB is isosceles. (In the same or congruent circles all radii are congruent.) So angles OAB and OBA are congruent. (The base angles of an isosceles triangle are congruent.) Since the measure of AOB is 142, the measures of angles BAO and ABO sum to 38. Since they are congruent, each angle must have measure 19.

Alternatively -- since PA and PB are tangents drawn from a point to a circle, they are congruent. So triangle PAB is isosceles. Then angles PAB and PBA are congruent and have measure 71. Since the radii are perpendicular to the tangents, the measure of angle PBO is 90; with the measure of PBA being 71, that means angle ABO has measure 19 by the angle addition postulate.

The measure of angle ABO is 19.
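The two computations in the answer above can be written out in one line:

$$m\angle AOB = 360^\circ - 90^\circ - 90^\circ - 38^\circ = 142^\circ, \qquad m\angle ABO = \frac{180^\circ - 142^\circ}{2} = 19^\circ.$$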
valid correlation matrix

August 13th 2005, 04:51 AM #1

Hi guys, could you help me with the following question? For a correlation matrix, the elements are r12 = 0, r23 = 4/5 and r13 = 0. Assuming that all other elements stay fixed, what are the largest possible upper and the smallest possible lower bounds on r12 which ensure that the matrix is still a valid correlation matrix? Thanks a lot.

I've just pulled the relevant section from my thesis. You could get an alternative explanation by checking out the Palisade @RISK manual. (Below, ' does not mean transpose.) Check out my problem in the other forum if you can help.

The price risk (ie milk price, supplementary feed price and asset prices) is found using the distribution of prices determined from actual data. These prices are assumed to be normally distributed (Neal, 2004d). Sets of prices are generated from uniform random numbers and transformed into a normal distribution by means of an inversion process (the ppnd algorithm as presented by Beasley and Springer, 1977) to give a matrix Z. Z in this case is a matrix of n price sets (default of 100) by p prices (default of 3).

The random (uncorrelated) prices are then correlated using the process described by Iman and Conover (1982). This process involves a user-defined correlation matrix C, which is a symmetric matrix of size p, where p is the number of variables (prices) to correlate. To be valid, the correlation matrix must be positive definite. This can be checked by ensuring the smallest eigenvalue of C is positive. If this is not the case, C can be adjusted to become positive semidefinite and as close as possible to the user-defined correlation matrix. The smallest eigenvalue of C is found and labelled E0.
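The uniform-to-normal inversion step described above can be sketched as follows (my own illustration, not the thesis code; Python's stdlib `NormalDist.inv_cdf` stands in for the Beasley-Springer ppnd algorithm):

```python
# Generate an n x p matrix Z of uncorrelated standard-normal scores by
# pushing uniform random numbers through the inverse normal CDF.
import random
from statistics import NormalDist

random.seed(1)
inv_cdf = NormalDist().inv_cdf
n, p = 100, 3  # the defaults mentioned in the text

# Keep the uniforms strictly inside (0, 1), where inv_cdf is defined.
Z = [[inv_cdf(random.uniform(1e-12, 1.0 - 1e-12)) for _ in range(p)]
     for _ in range(n)]

print(len(Z), len(Z[0]))
```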
Then an adjusted C matrix, C', is found:

C' = C - E0*I (Equation 5.5)

The new (valid) C matrix, C'', is then:

C'' = [1/(1 - E0)] * C' (Equation 5.6)

Hart et al (2003) suggest using a rank correlation rather than a Pearson correlation, as this allows a distribution-free approach to imposing correlation. Distribution-free implies that the distribution of each variable need not belong to a particular family (eg normal). Z is the uncorrelated matrix of prices found above. R is a matrix of (van der Waerden) scores, ranked in the same way as the elements of Z. C'' is the target (symmetric) matrix of (rank) correlations, and D is the current rank correlation matrix of the matrix R. V must be calculated as the Cholesky decomposition of C'' and U as the Cholesky decomposition of D. The following matrix multiplications are then performed:

S = V U^(-1) (Equation 5.7)

R* = R S (Equation 5.8)

Sorting X to give the same ranking as R* gives X*, a matrix correlated according to the desired matrix C''. This matrix of prices can then be used in Monte Carlo simulations using the WFM.
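The eigenvalue adjustment of Equations 5.5-5.6 above is small enough to sketch directly (my own code, following the recipe in the post rather than any published implementation):

```python
# Adjust a symmetric matrix C that is not positive definite into a
# valid (positive semidefinite, unit-diagonal) correlation matrix C''.
import numpy as np

def adjust_correlation(C):
    e0 = np.linalg.eigvalsh(C).min()  # smallest eigenvalue, E0
    if e0 > 0:
        return C                      # already positive definite
    C1 = C - e0 * np.eye(C.shape[0])  # Equation 5.5: C' = C - E0*I
    return C1 / (1.0 - e0)            # Equation 5.6: C'' = C'/(1 - E0)

# An inconsistent "correlation" matrix (it has a negative eigenvalue).
C = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])
C2 = adjust_correlation(C)
print(np.linalg.eigvalsh(C2).min(), np.diag(C2))
```

Because the diagonal entries 1 become (1 - E0)/(1 - E0), the adjusted matrix keeps a unit diagonal, and its smallest eigenvalue is shifted to zero, matching the "positive semidefinite and as close as possible" description in the text.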
On the structure of some families of fuzzy measures

Miranda Menéndez, Pedro and Combarro, Elías F. (2007) On the structure of some families of fuzzy measures. IEEE Transactions on Fuzzy Systems, 15 (6). pp. 1068-1081. ISSN 1063-6706

Restricted to Repository staff only until 2020.
Official URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4387919

The generation of fuzzy measures is an important question arising in the practical use of these operators. In this paper, we deal with the problem of developing a random generator of fuzzy measures. More concretely, we study some of the properties that any random generator should satisfy. These properties lead to some theoretical problems concerning the group of isometries that we tackle in this paper for some subfamilies of fuzzy measures.

Item Type: Article
Uncontrolled Keywords: fuzzy measures; isometric transformations; random generation
Subjects: Sciences > Mathematics > Operations research
ID Code: 17042
Deposited On: 07 Nov 2012 09:35
Last Modified: 07 Feb 2014 09:40
Continuity and effectiveness - THEORY AND APPLICATIONS OF CATEGORIES , 2000 "... The contravariant powerset, and its generalisations ΣX to the lattices of open subsets of a locally compact topological space and of recursively enumerable subsets of numbers, satisfy the Euclidean principle that φ ∧ F (φ) =φ ∧ F (⊤). Conversely, when the adjunction Σ (−) ⊣ Σ (−) is monadic, this ..." Cited by 7 (0 self) Add to MetaCart The contravariant powerset, and its generalisations ΣX to the lattices of open subsets of a locally compact topological space and of recursively enumerable subsets of numbers, satisfy the Euclidean principle that φ ∧ F (φ) =φ ∧ F (⊤). Conversely, when the adjunction Σ (−) ⊣ Σ (−) is monadic, this equation implies that Σ classifies some class of monos, and the Frobenius law ∃x.(φ(x) ∧ ψ) =(∃x.φ (x)) ∧ ψ) for the existential quantifier. In topology, the lattice duals of these equations also hold, and are related to the Phoa principle in synthetic domain theory. The natural definitions of discrete and Hausdorff spaces correspond to equality and inequality, whilst the quantifiers considered as adjoints characterise open (or, as we call them, overt) and compact spaces. Our treatment of overt discrete spaces and open maps is precisely dual to that of compact Hausdorff spaces and proper maps. The category of overt discrete spaces forms a pretopos and the paper concludes with a converse of Paré’s theorem (that the contravariant powerset functor is monadic) that characterises elementary toposes by means of the monadic and Euclidean properties together with all quantifiers, making no reference to subsets. - COMPUT. SCI , 2006 "... We develop the semantic foundations of the specification language HasCasl, which combines algebraic specification and functional programming on the basis of Moggi’s partial λ-calculus. Generalizing Lambek’s classical equivalence between the simply typed λ-calculus and cartesian closed categories, we ..." 
Cited by 6 (4 self) Add to MetaCart We develop the semantic foundations of the specification language HasCasl, which combines algebraic specification and functional programming on the basis of Moggi’s partial λ-calculus. Generalizing Lambek’s classical equivalence between the simply typed λ-calculus and cartesian closed categories, we establish an equivalence between partial cartesian closed categories (pccc’s) and partial λ-theories. Building on these results, we define (set-theoretic) notions of intensional Henkin model and syntactic λ-algebra for Moggi’s partial λ-calculus. These models are shown to be equivalent to the originally described categorical models in pccc’s via the global element construction. The semantics of HasCasl is defined in terms of syntactic λ-algebras. Correlations between logics and classes of categories facilitate reasoning both on the logical and on the categorical side; as an application, we pinpoint unique choice as the distinctive feature of topos logic (in comparison to intuitionistic higher-order logic of partial functions, which by our results is the logic of pccc’s with equality). Finally, we give some applications of the model-theoretic equivalence result to the semantics of HasCasl and its relation to first-order Casl. "... Abstract. The construction of a free restriction category can be broken into two steps: the construction of a free stable semilattice fibration followed by the construction of a free restriction category for this fibration. Restriction categories produced from such fibrations are “unitary”, in a sen ..." Cited by 5 (2 self) Add to MetaCart Abstract. The construction of a free restriction category can be broken into two steps: the construction of a free stable semilattice fibration followed by the construction of a free restriction category for this fibration. Restriction categories produced from such fibrations are “unitary”, in a sense which generalizes that from the theory of inverse semigroups. 
Characterization theorems for unitary restriction categories are derived. The paper ends with an explicit description of the free restriction category on a directed graph. 1. - In Jerzy Marcinkowski and Andrzej Tarlecki, editors, Computer Science Logic (CSL 04 , 2004 "... Abstract. We investigate the logical aspects of the partial λ-calculus with equality, exploiting an equivalence between partial λ-theories and partial cartesian closed categories (pcccs) established here. The partial λ-calculus with equality provides a full-blown intuitionistic higher order logic, w ..." Cited by 3 (1 self) Add to MetaCart Abstract. We investigate the logical aspects of the partial λ-calculus with equality, exploiting an equivalence between partial λ-theories and partial cartesian closed categories (pcccs) established here. The partial λ-calculus with equality provides a full-blown intuitionistic higher order logic, which in a precise sense turns out to be almost the logic of toposes, the distinctive feature of the latter being unique choice. We give a linguistic proof of the generalization of the fundamental theorem of toposes to pcccs with equality; type theoretically, one thus obtains that the partial λ-calculus with equality encompasses a Martin-Löf-style dependent type theory. This work forms part of the semantical foundations for the higher order algebraic specification language HasCasl. , 2009 "... Foundations should be designed for the needs of mathematics and not vice versa. We propose a technique for doing this that exploits the correspondence between category theory and logic and is potentially applicable to several mathematical disciplines. Stone Duality. We express the duality between al ..." Add to MetaCart Foundations should be designed for the needs of mathematics and not vice versa. We propose a technique for doing this that exploits the correspondence between category theory and logic and is potentially applicable to several mathematical disciplines. 
Stone Duality. We express the duality between algebra and geometry as an abstract monadic adjunction that we turn into a new type theory. To this we add an equation that is satisfied by the Sierpiński space, which plays a key role as the classifier for both open and closed subspaces. In the resulting theory there is a duality between open and closed concepts. This captures many basic properties of compact and closed subspaces, despite the absence of any explicitly infinitary axiom. It offers dual results that link general topology to recursion theory. The extensions and applications of ASD elsewhere that this paper survey include a purely recursive theory of elementary real analysis in which, unlike in previous approaches, the real closed interval [0, 1] in ASD is compact.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3897013","timestamp":"2014-04-20T10:31:06Z","content_type":null,"content_length":"24363","record_id":"<urn:uuid:c3e44fa6-603c-4166-819e-ee9766bfdc6b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös

Zbl.No: 721.11034
Autor: Erdös, Paul; Granville, A.; Pomerance, C.; Spiro, C.
Title: On the normal behavior of the iterates of some arithmetic functions. (In English)
Source: Analytic number theory, Proc. Conf. in Honor of Paul T. Bateman, Urbana/IL (USA) 1989, Prog. Math. 85, 165-204 (1990).

Review: [For the entire collection see Zbl 711.00008.] Let \phi[1](n) = \phi(n) (Euler's function), \phi[k](n) = \phi(\phi[k-1](n)) for k \geq 2; denote by k(n) the least k such that \phi[k](n) = 1, and let F(n) = k(n) or k(n)-1 according as n is even or odd. This paper contains a wealth of interesting results concerning these functions and related quantities, and also the iterated function s[k](n), where s[1](n) = \sigma(n)-n, s[k](n) = s(s[k-1](n)) for k \geq 2. We select just a few typical ones here. The authors begin by obtaining conditional results on the average and normal order of the (completely additive) function F(n) under the assumption of suitable strong versions of the Elliott-Halberstam conjecture. They prove also that the normal order of \phi[k](n)/\phi[k+1](n) for n \leq x is ke^\gamma log log log x for any k \leq (log log x)^\epsilon(x) for positive \epsilon(x) tending to zero arbitrarily slowly as x \to \infty. They go on to show that there is an absolute constant c > 0 such that the set of positive integers n, for which there is some k with \phi[k](n) divisible by every prime up to (log n)^c, has asymptotic density 1. The proofs are elementary in nature but very intricate. The authors correct some errors pointed out to them by A. Smati by making some minor changes to the proof of a lemma in [Rocky Mt. Math. J. 15, 343-352 (1985; Zbl 617.10037)] by P. Erdös and C. Pomerance.
Keywords: iteration of Euler's function; normal order; Elliott-Halberstam conjecture Citations: Zbl 711.00008; Zbl 617.10037 © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
Lucky Lottery Numbers?

Is there such a thing as lucky lottery numbers? Yes and no. There are numbers that give you better odds, as I'll show you in a moment. However, I want to start by saying that this isn't about luck in the sense of some mystical force. Yes, there are ways to be "lucky," meaning more good things happen in your life. But these ways are based on experience and science, not magic. Now, about those numbers.

First, to understand why there are numbers which are "luckier," you have to understand the difference between event-odds and investment-odds. For example, suppose you bet on a number on a roulette wheel. There are 38 numbers on American wheels, so you have a 1-in-38 chance of your number coming up. The event-odds, which refer to the probability of a specific event happening, are 1-in-38 in this case. Unless you have a biased wheel, they will be the same in every casino. But what if one casino paid you 40-to-1 when your number came up? All other casinos pay 35-to-1, so which casino would you like to play at? The one that pays more, of course. The odds of your number haven't changed, but you get paid more when you win. This means you have better investment-odds at that casino. The relationship between the event-odds and the amount you win changed. Even without getting into the calculations necessary to demonstrate this, you can see the principle: you have better investment-odds if the event-odds are the same but the payoff is higher. Now, how does this relate to lucky lottery numbers?

The Best Lottery Numbers

This is simple math, because you really only need to understand the basic idea. Then you can forget the calculations. The bottom line is that some lottery numbers pay the winners more on average than others. How can this be? It's all about your niece's birthday. Or your son's, or your own. People use birthdays more than anything else to determine which lottery numbers to bet on.
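The roulette comparison above works out like this (my own quick calculation, using the odds and payouts given in the passage):

```python
# Expected value per $1 bet on a single number: same event-odds,
# different payouts, therefore different investment-odds.
p_win = 1 / 38  # American wheel: 38 numbers

ev_standard = p_win * 35 - (1 - p_win) * 1  # usual 35-to-1 casino
ev_generous = p_win * 40 - (1 - p_win) * 1  # hypothetical 40-to-1 casino

print(round(ev_standard, 4))  # -0.0526 -- lose about 5.3 cents per dollar
print(round(ev_generous, 4))  # 0.0789 -- a positive-expectation bet
```

The 1-in-38 event-odds never change; only the payout does, and that is enough to flip the sign of the expected value.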
Birthdays, in case you aren't getting a clue yet, only go up to 31, since there are no months with more days than that. Lottery numbers, however, typically go up to 40. A player chooses six numbers and must match all six to win the entire jackpot. Now, if lottery players, or at least many of them, are betting birthdays, and far fewer are betting the numbers from 32-40, what does this mean? Well, none of the numbers are more likely than others to come up. In fact, as strange as it seems, the numbers 1, 2, 3, 4, 5, and 6 are just as likely to come up as any other combination. So why does it matter which numbers you bet?
Remote Sensing (ISSN 2072-4292), Molecular Diversity Preservation International (MDPI); doi:10.3390/rs4082199.

Article: Four-Component Scattering Power Decomposition Algorithm with Rotation of Covariance Matrix Using ALOS-PALSAR Polarimetric Data

Mitsunobu Sugimoto *, Kazuo Ouchi and Yasuhiro Nakamura

Department of Computer Science, School of Electrical and Computer Engineering, National Defense Academy, 1-10-20 Hashirimizu, Yokosuka, Kanagawa 239-8686, Japan; E-Mails: ouchi@nda.ac.jp (K.O.); yas@nda.ac.jp (Y.N.)

* Author to whom correspondence should be addressed; E-Mail: msugimot@ieee.org; Tel.: +81-46-841-3810 (ext. 3773); Fax: +81-46-843-6236.

Remote Sens. 2012, 4(8), 2199–2209. Received: 5 June 2012; Revised: 11 July 2012; Accepted: 16 July 2012; Published: 25 July 2012.

© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: The present study introduces the four-component scattering power decomposition (4-CSPD) algorithm with rotation of covariance matrix, and presents an experimental proof of the equivalence between the 4-CSPD algorithms based on rotation of covariance matrix and coherency matrix. From a theoretical point of view, the 4-CSPD algorithms with rotation of the two matrices are identical. Although it seems obvious, no experimental evidence has yet been presented. In this paper, using polarimetric synthetic aperture radar (POLSAR) data acquired by the Phased Array L-band SAR (PALSAR) on board the Advanced Land Observing Satellite (ALOS), an experimental proof is presented to show that both algorithms indeed produce identical results.
Keywords: polarimetric synthetic aperture radar (POLSAR); scattering power decomposition; radar polarimetry; covariance matrix rotation

With the recent improvement in the quality of synthetic aperture radar (SAR) systems utilizing polarimetric information, the development and applications of polarimetric SAR (POLSAR) have become one of the major topics in radar remote sensing. While conventional SAR systems handle only single-polarization information, data acquired by POLSAR systems contain fully polarimetric information on the change in polarization between the transmitted and received microwaves. Thus, they have the potential to further improve the extraction of physical properties of the scattering targets. Consequently, they are used in broad fields of study such as visualization for classification [1,2], oil detection [3,4], and ship detection [5,6], to name a few. Several decomposition techniques have been proposed along with the utilization of the fully polarimetric data sets provided by POLSAR platforms. Most of them can be categorized into one of two main groups. One is based on eigenvalue analysis [7–9], and the other employs the scattering model-based decomposition originally proposed by Freeman and Durden [10]. The basic idea behind the latter is that the backscattered power can be expressed as a linear sum of three different scattering power components. The four-component scattering power decomposition (4-CSPD) [11,12] is one of the model-based decomposition methods, and it improves on the previously devised three-component decomposition [10]. Using the 4-CSPD, one can decompose POLSAR data into four power categories: surface scattering power, double-bounce scattering power, volume scattering power, and helix scattering power. Many studies are being made in this field. For example, Zhang et al. [13] suggested a multiple-component scattering model (MCSM) by introducing an additional component that they call wire scattering.
Some other studies incorporate an eigenvalue analysis into model-based decomposition [14–16] to correct for negative eigenvalues of the remainder covariance matrix after the volume contribution is subtracted. According to [9,17], among all of the scattering components, double-bounce scattering occurs when the transmitted signal is reflected by ground/sea surfaces and man-made structures (or natural targets such as tree trunks). However, a problem appears for oblique urban blocks or man-made structures whose main scattering center is at an oblique direction with respect to the radar illumination [18]. In such areas, volume scattering (the cross-polarized component) often becomes a major scattering process. Thus, the output of the decomposition analysis is sometimes confusing when classifying, for example, urban versus forested areas, because volume scattering comes from both. This makes separating man-made structures from other areas difficult. From the classification point of view, these two types of areas have quite different characteristics, so it would be better if they were separated more clearly. To overcome this problem, the concept of rotation in the 4-CSPD has recently been proposed by Yamaguchi et al. [18]. They applied rotation to coherency matrices so that the cross-polarization (i.e., HV and VH) components, which are directly related to volume scattering, are suppressed, and double-bounce scattering increases instead. As a result, urban and industrial areas are separated from forested areas more effectively. In the present article, we introduce the 4-CSPD algorithm with rotation of the covariance matrix and compare rotation of the coherency and covariance matrices. Although both approaches should yield the same result [19], a detailed comparison has not yet been made and reported to date. The comparison is made among 4-CSPD analyses with and without rotation of the matrices.
Examples are presented using ALOS-PALSAR (Advanced Land Observing Satellite-Phased Array L-band SAR) PLR (PoLaRimetric) data. Since the details of the rotation of the coherency matrix can be found in [18], we describe only the rotation of the covariance matrix, which should give the same results as the rotation of the coherency matrix (although experimental verification has not yet been reported). The covariance matrix can be expressed as:

$$\langle [C] \rangle = \begin{bmatrix} \langle |S_{HH}|^2 \rangle & \sqrt{2}\,\langle S_{HH}S_{HV}^{*} \rangle & \langle S_{HH}S_{VV}^{*} \rangle \\ \sqrt{2}\,\langle S_{HV}S_{HH}^{*} \rangle & 2\,\langle |S_{HV}|^2 \rangle & \sqrt{2}\,\langle S_{HV}S_{VV}^{*} \rangle \\ \langle S_{VV}S_{HH}^{*} \rangle & \sqrt{2}\,\langle S_{VV}S_{HV}^{*} \rangle & \langle |S_{VV}|^2 \rangle \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix} \tag{1}$$

where $S_{HH}$, $S_{HV}$, $S_{VH}$ and $S_{VV}$ denote the complex scattering elements at HH, HV, VH, and VV polarizations respectively, $\langle \cdot \rangle$ denotes the ensemble average over an arbitrary window size, and $*$ denotes the complex conjugate. The covariance matrix after rotation can be expressed using a unitary rotation matrix as:

$$[C(\theta)] = [U_{\theta}][C][U_{\theta}]^{T} \tag{2}$$

$$[U_{\theta}] = \frac{1}{2}\begin{bmatrix} 1+\cos 2\theta & \sqrt{2}\sin 2\theta & 1-\cos 2\theta \\ -\sqrt{2}\sin 2\theta & 2\cos 2\theta & \sqrt{2}\sin 2\theta \\ 1-\cos 2\theta & -\sqrt{2}\sin 2\theta & 1+\cos 2\theta \end{bmatrix} \tag{3}$$

where $T$ denotes the matrix transpose and $\theta$ denotes the rotation angle. The elements of the rotated covariance matrix are expressed as follows:

$$[C(\theta)] = \begin{bmatrix} C_{11}(\theta) & C_{12}(\theta) & C_{13}(\theta) \\ C_{21}(\theta) & C_{22}(\theta) & C_{23}(\theta) \\ C_{31}(\theta) & C_{32}(\theta) & C_{33}(\theta) \end{bmatrix} \tag{4}$$

where, using Equation (3), each element after rotation can be expressed in the same manner as in the rotation of the coherency matrix, by replacing the elements of the coherency matrix with those of the covariance matrix. The important element is the cross-polarized term $C_{22}(\theta)$, given by

$$C_{22}(\theta) = \frac{1}{4}\left[-C_{11} + 2\,\mathrm{Re}(C_{13}) + 2C_{22} - C_{33}\right]\cos 4\theta - \frac{\sqrt{2}}{2}\,\mathrm{Re}(C_{12} - C_{23})\sin 4\theta + \frac{1}{4}\left[C_{11} - 2\,\mathrm{Re}(C_{13}) + 2C_{22} + C_{33}\right] \tag{6}$$

Now, we minimize $C_{22}(\theta)$, because this is equivalent to minimizing the volume scattering power after the decomposition.
Polarimetric matrices are rotated by the angle that minimizes the cross-polarized component, so that the contribution of the volume scattering power after the decomposition is suppressed. The derivative of $C_{22}(\theta)$ with respect to $\theta$ is

$$C'_{22}(\theta) = \left[C_{11} - 2\,\mathrm{Re}(C_{13}) - 2C_{22} + C_{33}\right]\sin 4\theta - 2\sqrt{2}\,\mathrm{Re}(C_{12} - C_{23})\cos 4\theta \tag{7}$$

Therefore, when $C'_{22}(\theta) = 0$, the angle satisfies

$$\tan 4\theta = \frac{2\sqrt{2}\,\mathrm{Re}(C_{12} - C_{23})}{C_{11} - 2\,\mathrm{Re}(C_{13}) - 2C_{22} + C_{33}} \tag{8}$$

$$\theta = \frac{1}{4}\tan^{-1}\frac{2\sqrt{2}\,\mathrm{Re}(C_{12} - C_{23})}{C_{11} - 2\,\mathrm{Re}(C_{13}) - 2C_{22} + C_{33}} \tag{9}$$

An extreme value is obtained by substituting Equation (9) into Equation (6). It should be noted that the arctan2 function, available in many programming languages, should be used to evaluate Equation (9); otherwise, the obtained angle may not minimize the cross-polarized component as it should. The algorithm of the 4-CSPD analysis using rotation of the covariance matrix is summarized in this section. Figure 1 is the flowchart of the entire algorithm. First, a rotated covariance matrix $[C(\theta)]$ is created using the rotation angle described in the previous section. Next, the 4-CSPD algorithm is applied to the rotated covariance matrix $[C(\theta)]$ and the scattering powers are calculated. The helix scattering power $P_c$ is derived first. Then, the volume scattering power $P_v$ is calculated based on the value of

$$10\log\left[C_{33}(\theta)/C_{11}(\theta)\right] \tag{10}$$

Once $P_c$ and $P_v$ are calculated, the surface scattering power $P_s$ and the double-bounce scattering power $P_d$ are determined from the remaining power ($TP - P_c - P_v$, where $TP$ is the total power). If $P_v + P_c > TP$, the algorithm ends as a two-component scattering power decomposition. The branch condition $\mathrm{Re}(C_0) > 0$ is used to determine which scattering power, $P_s$ or $P_d$, is dominant. $C_0$ can be defined in terms of the covariance matrix elements as:

$$C_0 = C_{13}(\theta) - \frac{1}{2}C_{22}(\theta) + \frac{1}{2}P_c \tag{11}$$

As a result, all four scattering components are determined. If $P_s$ or $P_d$ becomes negative, it is set to zero and the other is determined by $TP - P_c - P_v$.
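As a rough numerical sanity check of the rotation step (this is not the authors' code; the sample scattering vectors and helper names are invented for illustration), the following Python sketch builds a covariance matrix from a few lexicographic scattering vectors, computes the minimizing angle of Equation (9) with atan2 as the text recommends, and rotates the matrix with Equation (3):

```python
import math

# Illustrative sketch of the covariance-matrix rotation step.
# All sample values are made up; only the formulas follow the text.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rotation_matrix(theta):
    # Equation (3), with the overall factor 1/2 folded into each entry
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    r2 = math.sqrt(2)
    return [[(1 + c) / 2,  r2 * s / 2, (1 - c) / 2],
            [-r2 * s / 2,  c,          r2 * s / 2],
            [(1 - c) / 2, -r2 * s / 2, (1 + c) / 2]]

def covariance(samples):
    # <k k^H> over lexicographic scattering vectors k = (S_HH, sqrt2*S_HV, S_VV)
    C = [[0j] * 3 for _ in range(3)]
    for k in samples:
        for i in range(3):
            for j in range(3):
                C[i][j] += k[i] * k[j].conjugate() / len(samples)
    return C

def min_xpol_angle(C):
    # Equation (9), evaluated with atan2 to land in the correct quadrant
    y = 2 * math.sqrt(2) * (C[0][1] - C[1][2]).real
    x = C[0][0].real - 2 * C[0][2].real - 2 * C[1][1].real + C[2][2].real
    return 0.25 * math.atan2(y, x)

def rotate(C, theta):
    U = rotation_matrix(theta)
    return matmul(matmul(U, C), transpose(U))

# Arbitrary illustrative samples (not real PALSAR data)
samples = [(1.0 + 0.2j, 0.5 + 0.1j, -0.8 + 0.3j),
           (0.9 - 0.1j, 0.6 + 0.2j, -0.7 + 0.1j),
           (1.1 + 0.3j, 0.4 - 0.2j, -0.9 + 0.2j)]
C = covariance(samples)
theta = min_xpol_angle(C)
C_rot = rotate(C, theta)
print(theta, C_rot[1][1].real)  # rotated C22 = minimized cross-pol power
```

A grid search over the period of $C_{22}(\theta)$ confirms that the angle from the closed form is indeed the minimizer, which is a quick way to catch a sign error when implementing the equations.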
It should also be noted that all four scattering components can be obtained directly from the rotated covariance matrix elements. Since the covariance matrix and the coherency matrix are mutually interchangeable by a unitary transformation, the output of this algorithm should be exactly the same as the output from the rotation of the coherency matrix, as long as the same angle, the one which minimizes the cross-polarized component, is chosen. This can easily be proven mathematically: both Equation (9) and

$$2\theta = \frac{1}{2}\tan^{-1}\frac{2\,\mathrm{Re}(T_{23})}{T_{22} - T_{33}} \tag{12}$$

from [18] have exactly the same form once the relevant scattering components are assigned in each equation. As for the equivalence between the covariance and coherency matrices, it can also be shown mathematically, in the same manner as above, that $P_c$ and $P_v$ have exactly the same form in both algorithms; thus their equivalence is guaranteed. This equivalence applies to Equations (10) and (11) as well. The contributions of the remaining components, $P_s$ and $P_d$, are also identical for both matrices, as stated in [19]. In order to confirm the equivalence between the 4-CSPD based on the covariance and coherency matrices, we provide experimental results from both approaches in the following section. The algorithm is applied to ALOS-PALSAR data, and the results and discussions are presented in this section. Figure 2 shows parts of decomposed images of the Tokyo Bay and Futtsu Horn in Chiba Prefecture, Japan. The central coordinate is approximately (139°52′E, 35°20′N), and the image size is about 11 km in both directions. The quad-polarization data used here were acquired by ALOS-PALSAR on 24 November 2008 (ALPSRP150972900-P1.1). In Figure 2, the left column shows the results from the coherency matrix and the right column shows the results from the covariance matrix.
The upper row shows the decomposition images from the conventional 4-CSPD and the lower row shows the decomposition images from the 4-CSPD with rotation. The red, green, and blue colors represent the double-bounce, volume, and surface scattering components respectively. There is no difference between the image from covariance matrix rotation and the image from coherency matrix rotation, nor between those without rotation. This result confirms that the rotation algorithm with the covariance matrix agrees with the theory of unitary transformation. The effect of the size of the moving window (i.e., the ensemble average) has also been analyzed. As expected, if the size is too small, the result is too noisy and classification does not work very well. On the other hand, if the size is too large, the output becomes coarser and fine details are lost. An appropriate window size depends on each situation and should be taken into consideration when the decomposition is performed. Here, the ensemble average window size is 2 pixels in the range direction and 16 pixels in the azimuth direction. Table 1 shows a comparison of the relative contributions of the double-bounce, volume, surface, and helix scattering to the total power before and after rotation, using the same region as in Figure 2. Only results from the 4-CSPD with rotation of the covariance matrix are shown in Table 1, because we confirmed that the results are precisely identical between the coherency and covariance matrices. After rotation, volume scattering is suppressed and the contribution of double-bounce scattering becomes larger. The central area in Figure 2 is classified as yellow in the upper images but as red in the lower images, increasing the likelihood of its being recognized as man-made structures. Figure 3 is an enlarged Google Earth image corresponding to the area in Figure 2.
The red areas in the lower images in Figure 2 can be identified as industrialized bay areas in Figure 3, and the surface scattering areas represented in blue over land correspond to rice paddies. After the autumn harvest, these paddy fields are left as rough surfaces of bare soil. Some of the dots scattered on the sea surface turned from greenish to reddish after rotation; they are considered to be ships because they show strong Pd scattering and we know that there are no small islands in the area. Thus, by emphasizing the Pd component, the rotation method helps classify backscattering on the sea not as rocks or tiny islands but as ships with more certainty. Figure 4 shows the rotation angle distributions of selected areas in Figure 2. From left to right, the results from Areas A, B, C, and D are shown respectively. Area A is part of an urban area and shows strong double-bounce scattering before applying rotation. Only small rotation is observed in Area A. A possible explanation is that most urban structures there face the radar in such a way that double-bounce scattering is already mostly observed; thus, rotation is less necessary. Area B is part of a mountainous area covered by forests and shows strong volume scattering. In Area B, the rotation angles are randomly distributed across the entire range: because the phases of the cross-polarized components are randomly distributed, the angles minimizing them are also randomly distributed. Area C is part of the sea area and mostly shows surface scattering. In Areas A and C, the center of the rotation angle distribution is around zero. Finally, Area D is part of an industrial area which shows a remarkable change after rotation. Here, the peak of the angle distribution is around −10°, which is clearly different from the other areas. This coincides with the fact that the structures in Area D are slightly tilted to the right in Figure 3. Figure 5 shows the Tokyo Bay Aqua-Line (a highway across Tokyo Bay).
The green color produced by the highway bridge in the left image turned reddish in the middle image, highlighting enhanced double-bounce scattering, and Figure 5(c) clearly shows the increase in double-bounce scattering between Figure 5(a,b). As ground truth, we confirmed that the bridge does not have tall towers and large cables, as described in the bridge analysis in [20], and that there are many highway lamps and traffic and direction signs on the bridge. Thus, the double-bounce scattering comes from the dihedral reflection between the surface of the bridge and these structures, and between the sea surface and the bridge. Figure 6 shows the rotation angle image and the rotation angle distribution around the bridge. The peak around 30° in Figure 6(b) corresponds to the direction of the bridge relative to the radar illumination. In this study, the four-component scattering power decomposition (4-CSPD) algorithm with rotation of the covariance matrix was introduced. We demonstrated that the algorithm is correct by showing that the result of covariance matrix rotation is identical to that of coherency matrix rotation, using ALOS-PALSAR quad-polarization data. Although it is well known that both matrices should produce the same result based on the theory of unitary transformation, experimental proof with rotation of the matrices had not been given before. We clarified that different types of areas react to the rotation algorithm differently. Urban or industrial areas showing strong double-bounce scattering with the original 4-CSPD (without rotation) are little affected by rotation. Forested areas show a random distribution of rotation angles because of their randomness in polarization. Sea or smooth ground surface areas are moderately affected by rotation.
Urban or industrial areas which have structures oblique to the radar illumination show peaks of the rotation angle distribution away from zero degrees (the center), unlike the other areas, and the shift seems to correspond to the angle between the radar illumination and the structures. We also showed that the rotation can improve the classification of man-made objects such as ships and bridges on the sea.

Acknowledgments: We would like to thank the Japan Aerospace Exploration Agency (JAXA) for cordially providing the ALOS-PALSAR data. We also thank the anonymous reviewers for constructive comments to improve the paper.

References
[1] Turner, D.; Woodhouse, I.H. An icon-based synoptic visualization of fully polarimetric radar data. Remote Sens. 2012, 4, 648–660. doi:10.3390/rs4030648.
[2] Margarit, G.; Fabregas, X.; Mallorqui, J.J.; Pipia, L.; Borquetas, T. Polarimetric SAR interferometry simulator of complex targets. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 2015–2018.
[3] Migliaccio, M.; Gambardella, A.; Nunziata, F.; Shimada, M.; Isoguchi, O. The PALSAR polarimetric mode for sea oil slick observation. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4032–4041. doi:10.1109/TGRS.2009.2028737.
[4] Ramsey, E., III; Rangoonwala, A.; Suzuoki, Y.; Jones, C.E. Oil detection in a coastal marsh with polarimetric synthetic aperture radar (SAR). Remote Sens. 2011, 3, 2630–2662.
[5] Margarit, G.; Barba Milanes, J.A.; Tabasco, A. Operational ship monitoring system based on synthetic aperture radar processing. Remote Sens. 2009, 1, 375–392. doi:10.3390/rs1030375.
[6] Margarit, G.; Mallorqui, J.J.; Fortuny-Guasch, J.; Lopez-Martinez, C. Exploitation of ship scattering in polarimetric SAR for an improved classification under high clutter conditions. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1224–1235. doi:10.1109/TGRS.2008.2008721.
[7] Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518. doi:10.1109/36.485127.
[8] Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. doi:10.1109/36.551935.
[9] Lee, J.S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2009.
[10] Freeman, A.; Durden, S. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. doi:10.1109/36.673687.
[11] Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. doi:10.1109/TGRS.2005.852084.
[12] Yajima, Y.; Yamaguchi, Y.; Sato, R.; Yamada, H.; Boerner, W.M. POLSAR image analysis of wetlands using a modified four-component scattering power decomposition. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1667–1673. doi:10.1109/TGRS.2008.916326.
[13] Zhang, L.; Zou, B.; Cai, H.; Zhang, Y. Multiple-component scattering model for polarimetric SAR image decomposition. IEEE Geosci. Remote Sens. Lett. 2008, 5, 603–607. doi:10.1109/LGRS.2008.2000795.
[14] Arii, M.; van Zyl, J.J.; Kim, Y. Adaptive model-based decomposition of polarimetric SAR covariance matrices. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1104–1113. doi:10.1109/TGRS.2010.2076285.
[15] van Zyl, J.J.; Arii, M.; Kim, Y. Model-based decomposition of polarimetric SAR covariance matrices constrained for nonnegative eigenvalues. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3452–3459. doi:10.1109/TGRS.2011.2128325.
[16] Antropov, O.; Rauste, Y.; Hame, T. Volume scattering modeling in PolSAR decompositions: Study of ALOS PALSAR data over boreal forest. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3838–3848. doi:10.1109/TGRS.2011.2138146.
[17] Yamaguchi, Y. Power Decomposition Based on Scattering Model (in Japanese); Chapter 8; IEICE: Tokyo, Japan, 2007; pp. 125–139.
[18] Yamaguchi, Y.; Sato, A.; Boerner, W.M.; Sato, R.; Yamada, H. Four-component scattering power decomposition with rotation of coherency matrix. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2251–2258. doi:10.1109/TGRS.2010.2099124.
[19] Yamaguchi, Y.; Yajima, Y.; Yamada, H. A four-component decomposition of POLSAR images based on the coherency matrix. IEEE Geosci. Remote Sens. Lett. 2006, 3, 292–296. doi:10.1109/LGRS.2006.869986.
[20] Lee, J.S.; Krogager, E.; Ainsworth, T.L.; Boerner, W.M. Polarimetric analysis of radar signature of a manmade structure. IEEE Geosci. Remote Sens. Lett. 2006, 3, 555–559. doi:10.1109/LGRS.2006.879564.

Figure 1. 4-CSPD algorithm using rotation of the covariance matrix (the structure of the flowchart mainly comes from [18]).
Figure 2. ALOS-PALSAR decomposition images of Tokyo Bay, Japan. The central coordinate of each image is approximately (139°52′E, 35°20′N). The upper row (a,b): 4-CSPD (helix component excluded). The lower row (c,d): 4-CSPD with rotation. The left column (a,c) shows results from the coherency matrix and the right column (b,d) shows results from the covariance matrix.
The red, green, and blue colors represent the double-bounce, volume, and surface scattering components respectively. Areas A, B, and C are mostly composed of urban, mountainous, and sea areas respectively. Area D is an area which shows a remarkable change after rotation.
Figure 3. Optical photograph corresponding to the area in Figure 2. The central coordinate of the image is approximately (139°52′E, 35°20′N).
Figure 4. Rotation angle distribution of selected areas in Figure 2. The horizontal axis is the rotation angle and the vertical axis is frequency. (a) Area A. (b) Area B. (c) Area C. (d) Area D.
Figure 5. Tokyo Bay Aqua-Line (highway) near the area of Figure 2. The central coordinate of each image is approximately (139°53′E, 35°26′N). (a) 4-CSPD image without rotation. (b) 4-CSPD image with rotation. (c) Difference in the Pd component between the left and the middle image.
Figure 6. Tokyo Bay Aqua-Line (highway) near the area of Figure 2. (a) Rotation angle image. The central coordinate of the image is approximately (139°53′E, 35°27′N). (b) Rotation angle distribution of the left image. The peak around 30° represents the highway bridge.

Table 1. Relative contribution to the total power of the Tokyo Bay area before and after rotation.

Method (Rotation Range, Approach)  | Pd      | Pv      | Ps      | Pc
-----------------------------------|---------|---------|---------|------
4-CSPD without rotation            | 26.26%  | 30.63%  | 40.06%  | 3.05%
4-CSPD with rotation               | 36.34%  | 17.53%  | 43.68%  | 2.45%
Turin, GA Geometry Tutor Find a Turin, GA Geometry Tutor ...The highlight of the course is when students take another diagnostic test and see how much they have improved, they are always surprised and proud of how far they have come. I am a scientist by trade, so I love the science section of the ACT. I bring in data from my own research and relate current events to the test to make things more interesting for students. 17 Subjects: including geometry, chemistry, writing, physics ...With teaching, consulting and banking job experience spanning over a period of 19 years. I derive satisfaction in career counseling, knowledge impartation and mentoring. Going by my personal experience, I had no interest and often struggled with mathematics as a subject until I came in contact ... 18 Subjects: including geometry, calculus, accounting, algebra 1 ...I earned a bachelor's degree in Early Childhood Education in 2005. I am a certified elementary school teacher with 5 years experience. For three years I taught first grade. 10 Subjects: including geometry, reading, algebra 1, grammar ...I am very proud that all of my students are making A's! I was a straight A student, top of my class, and had a perfect GPA; and I translated my skills into helping others excel. I am a highly proficient teacher in math, English, science, and several other areas. 31 Subjects: including geometry, reading, English, chemistry ...Scored in the 99th percentile on the GMAT. Worked as a project manager at a manufacturing company managing over 1.5 million in revenue. Interned with a real estate company doing a discounted cash flow analysis on a timeshare development and a break even analysis of a multi-million dollar renovation project. 
28 Subjects: including geometry, calculus, statistics, GRE Related Turin, GA Tutors Turin, GA Accounting Tutors Turin, GA ACT Tutors Turin, GA Algebra Tutors Turin, GA Algebra 2 Tutors Turin, GA Calculus Tutors Turin, GA Geometry Tutors Turin, GA Math Tutors Turin, GA Prealgebra Tutors Turin, GA Precalculus Tutors Turin, GA SAT Tutors Turin, GA SAT Math Tutors Turin, GA Science Tutors Turin, GA Statistics Tutors Turin, GA Trigonometry Tutors Nearby Cities With geometry Tutor Brooks, GA geometry Tutors Chattahoochee Hills, GA geometry Tutors Concord, GA geometry Tutors Gay, GA geometry Tutors Grantville, GA geometry Tutors Haralson geometry Tutors Hogansville geometry Tutors Lovejoy, GA geometry Tutors Palmetto, GA geometry Tutors Sargent, GA geometry Tutors Sharpsburg, GA geometry Tutors Whitesburg, GA geometry Tutors Williamson, GA geometry Tutors Woolsey, GA geometry Tutors Zebulon, GA geometry Tutors
Forest Heights, MD SAT Math Tutor Find a Forest Heights, MD SAT Math Tutor ...I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way to learn.Have studied and scored high marks in econometric... 14 Subjects: including SAT math, calculus, geometry, statistics ...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil... 15 Subjects: including SAT math, calculus, physics, GRE ...I recently published an eBook describing how eLearning technologies will change the future of education: "The Electronic Schoolhouse: A Bright Future for Education." I recognize that students and parents are making a significant investment, and I pack in as much value per hour as possible. I te... 25 Subjects: including SAT math, chemistry, reading, writing ...Being a good tutor means nothing without my students. When my students want to achieve more I'm there to assist them. I am looking forward to meeting you! 10 Subjects: including SAT math, geometry, algebra 1, GED ...As someone trained in linguistics, I am used to thinking about the meanings of words and how they are used in practice. This training has been supplemented by reading and writing on a variety of subjects. So I have both theoretical knowledge and practical experience. 
22 Subjects: including SAT math, Spanish, English, reading Related Forest Heights, MD Tutors Forest Heights, MD Accounting Tutors Forest Heights, MD ACT Tutors Forest Heights, MD Algebra Tutors Forest Heights, MD Algebra 2 Tutors Forest Heights, MD Calculus Tutors Forest Heights, MD Geometry Tutors Forest Heights, MD Math Tutors Forest Heights, MD Prealgebra Tutors Forest Heights, MD Precalculus Tutors Forest Heights, MD SAT Tutors Forest Heights, MD SAT Math Tutors Forest Heights, MD Science Tutors Forest Heights, MD Statistics Tutors Forest Heights, MD Trigonometry Tutors Nearby Cities With SAT math Tutor Alexandria, VA SAT math Tutors Brandywine, MD SAT math Tutors Cottage City, MD SAT math Tutors District Heights SAT math Tutors Fairmount Heights, MD SAT math Tutors Fort Myer, VA SAT math Tutors Hillcrest Heights, MD SAT math Tutors Martins Add, MD SAT math Tutors Martins Additions, MD SAT math Tutors Mount Rainier SAT math Tutors Mount Vernon, VA SAT math Tutors Oxon Hill SAT math Tutors Seat Pleasant, MD SAT math Tutors Temple Hills SAT math Tutors University Park, MD SAT math Tutors
Surface Area of Revolution - around an arbitrary line
December 9th 2009, 08:58 AM #1
Sep 2009
I don't have an example (if someone could provide one it would be great!) but I need to figure out how to revolve around an arbitrary line. I was given a hint that it is rotating around the bottom line, not the x-axis, and that h = r sin(x), and that I need to look at h'. Here h is the distance going down to the x-axis.
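In case a concrete sketch helps: for a curve y = f(x) revolved about a line y = mx + b, the standard approach is S = 2π ∫ r(x) √(1 + f′(x)²) dx, where r(x) is the perpendicular distance from the curve to the axis line (this assumes the curve stays on one side of the line). The function and variable names below are my own, and the integral is evaluated numerically:

```python
import math

# Numeric sketch: surface area of y = f(x), x in [x0, x1], revolved about
# the line y = m*x + b. Each arc element ds at perpendicular distance r
# sweeps out a band of area 2*pi*r*ds.

def surface_of_revolution(f, df, m, b, x0, x1, n=10_000):
    total = 0.0
    dx = (x1 - x0) / n
    for i in range(n):
        x = x0 + (i + 0.5) * dx                      # midpoint rule
        r = abs(f(x) - (m * x + b)) / math.sqrt(1 + m * m)
        total += r * math.sqrt(1 + df(x) ** 2) * dx  # r * ds
    return 2 * math.pi * total

# Sanity check: y = x about y = 0 on [0, 1] is a cone with base radius 1
# and slant height sqrt(2), so the lateral area should be pi * sqrt(2).
S = surface_of_revolution(lambda x: x, lambda x: 1.0, 0.0, 0.0, 0.0, 1.0)
print(S, math.pi * math.sqrt(2))
```

The hint about h = r sin(x) is the same idea in disguise: it is one way of expressing the perpendicular distance from a point on the curve to the axis of revolution.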
SPSSX-L archives -- June 2011 (#105)
LISTSERV at the University of Georgia

Date: Wed, 8 Jun 2011 14:26:03 -0600
Reply-To: Jon K Peck <peck@us.ibm.com>
Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>
From: Jon K Peck <peck@us.ibm.com>
Subject: Re: Generating random 'constants'
Comments: To: David Marso <david.marso@gmail.com>
In-Reply-To: <1307563872864-4470810.post@n5.nabble.com>
Content-Type: multipart/alternative;

I interpreted the request as asking for a column of constant values, so something like this would be required.

DO IF $casenum = 1.
COMPUTE RV = rv.normal(0,1).
END IF.
LEAVE RV.

Jon Peck
Senior Software Engineer, IBM
new phone: 720-342-5621

From: David Marso <david.marso@gmail.com>
To: SPSSX-L@LISTSERV.UGA.EDU
Date: 06/08/2011 02:20 PM
Subject: Re: [SPSSX-L] Generating random 'constants'
Sent by: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU>

DO IF $CASENUM=1.
COMPUTE #RV=NORMAL(0,.1).
END IF.
* If you want a permanent copy *.
COMPUTE RV=#RV.

Note use of scratch variable. Read up on scratch variables.

drewk wrote:
> I'm attempting to generate random 'constant' variables. I mean random as
> in randomly generated, and constant as in constant across all
> observations. I intend this to be used as a multiplier of sorts which is
> randomly generated when the syntax is run to introduce a little
> For example, I have a probability for a certain event, and I am running
> multiple 'trials' of this event and recording the outcome. For
> reasons, I want to add or subtract a randomly generated value (something
> like RV.Normal(0,.1)) from this probability for each observation.
> just generating a variable such as:
> compute randomadjustment = RV.Normal(0,.1).
> will create a unique random number for each observation. However, I want
> this adjustment to be constant across all observations. This way, every
> time the syntax is run, every observation's probability has the same
> random number added or subtracted from it.
> Is it possible to generate a
> 'random constant variable'?
> I hope all of this makes sense. Thanks for any help,
> -dk

View this message in context:
Sent from the SPSSX Discussion mailing list archive at Nabble.com.

To manage your subscription to SPSSX-L, send a message to
LISTSERV@LISTSERV.UGA.EDU (not to SPSSX-L), with no body text except the command.
To leave the list, send the command
For a list of commands to manage subscriptions, send the command
Unscented Kalman Filter for Brain-Machine Interfaces Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. Citation: Li Z, O'Doherty JE, Hanson TL, Lebedev MA, Henriquez CS, et al. (2009) Unscented Kalman Filter for Brain-Machine Interfaces. PLoS ONE 4(7): e6243. doi:10.1371/journal.pone.0006243 Editor: Yann LeCun, New York University, United States of America Received: January 28, 2009; Accepted: April 15, 2009; Published: July 15, 2009 Copyright: © 2009 Li et al. 
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This research was supported by DARPA N66001-06-C-2019 and TATRC W81XWH-08-2-0119 to MALN. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Research on brain-machine interfaces (BMI) – devices that directly link the brain to artificial actuators [1], [2], [3] – has experienced rapid development during the last decade primarily because of the expectation that such devices may eventually cure severe body paralysis caused by injury or neurodegenerative disease [4], [5], [6], [7], [8]. A core component of BMIs is the computational algorithm that decodes neuronal activity into commands that drive artificial actuators to perform movements at the operator's will. Signal processing and machine learning techniques have been applied to the problem of inferring desired limb movements from neural recordings [9]. These include the population vector method [10], [11], [12], [13], [14], [15], [16], the Wiener filter [3], [17], [18], [19], [20], the Kalman filter [21], [22], [23], [24], the particle filter [25], [26], [27], [28], point process methods [29], [30], [31], [32], artificial neural networks [18], [33], [34], [35], and discrete state Bayesian approaches [18], [36], [37], [38]. Decoding methods using linear models of the relationship between neural activity and limb movements, such as the Wiener filter and Kalman filter, are most commonly used in experimental research on BMIs. 
These methods cannot handle non-linear models, which describe neuronal modulations better but require more complex algorithms such as the particle filter [39], a non-parametric recursive Bayesian estimator. However, along with the power of particle filters comes a heavy computational cost, which makes this approach difficult to implement in real-time BMI systems. The space of possible non-linear models is vast, and selecting an appropriate model – one that offers significant improvement over a linear model while avoiding “over-fitting” of parameters [40] – is a non-trivial task. Combined with the more difficult software engineering involved, these factors explain the rarity of non-linear models in real-time BMI. We propose a new computational approach for BMIs, the n-th order unscented Kalman filter (UKF), to improve the extraction of motor commands from brain activity. Our experiments showed that this new approach offers greater accuracy compared to methods which use linear models while remaining computationally light enough for implementation in real-time. This filter offers three improvements upon previous designs of BMI decoding algorithms. First, our filter allows the use of non-linear models of neuronal modulations to movements (neural tuning models). Our experiments demonstrate the increased accuracy of our quadratic model versus the previously-used linear model. Second, our filter takes advantage of the patterns of movements performed during the execution of tasks. For example, a prosthetic used to aid in feeding has to perform a stereotypical pattern of movements: the prosthetic actuator moves back and forth between the user's mouth and the food items placed on a tray. Our approach uses this stereotypic pattern to improve BMI output accuracy. Third, our filter allows the relationships between neural activity and arm movement at multiple time offsets to be used simultaneously. These improvements were facilitated by extending the Kalman filter in two ways.
First, the unscented Kalman filter [41], which uses a non-stochastic simulation method to approximate non-linear function evaluation on random variables, was used to allow non-linear neural tuning models. Second, the state of our filter was extended to keep a history (of length n) of the desired hand movements to allow an autoregressive (AR n) movement model and neural tuning to all n consecutive time offsets. These two elements were combined in a system that is relatively simple, robust, and fast enough for real-time, closed-loop BMI applications. Our algorithm was tested both off-line and in real-time, closed-loop experiments in which cortical recordings were obtained from macaque monkeys (Macaca mulatta) trained to perform two reaching tasks. In off-line comparisons, our method demonstrated significantly better accuracy compared to the Kalman filter, the Wiener filter, and the population vector method [10], [13]. In on-line, closed-loop BMI control, the monkeys followed targets significantly better when using our method than when using the Kalman or the Wiener filter. Behavioral Tasks and Cortical Recordings We trained 2 rhesus macaques (Monkey C and Monkey G) to perform reaching tasks that incorporated stereotypic patterns of movements. The monkeys manipulated a hand-held joystick to acquire visual targets with a computer cursor (Figure 1A). In the center-out task, the cursor was moved from the screen center to targets randomly placed at a fixed radius around the center (Figure 1C). In the pursuit task the monkeys tracked a continuously moving target which followed a Lissajous curve (Figure 1D). Figure 1. Schematics of the experimental task and cortical implants. A: The cursor and the visual targets were projected to the screen mounted 1.5 m in front of the monkey, and the monkeys moved the cursor with a hand held joystick with length 30 cm and maximum deflection 12 cm. The monkeys received fruit juice rewards when they placed the cursor inside targets. 
B: Microwire electrode array diagram (top) and schematics of the placement of the arrays in the cortex of two monkeys. C: Schematics of the center-out task. After holding the cursor at the screen center, the monkeys moved it to a peripheral target that appeared at a random angle and a fixed radius from the center. D: Schematics of the pursuit task. The monkeys tracked a continuously moving target whose trajectory was a Lissajous curve. Both monkeys were implanted with multielectrode arrays in multiple cortical areas. Monkey C was implanted in M1, PMd, posterior parietal cortex (PP) and supplementary motor area (SMA) in the right hemisphere. Monkey G was implanted bilaterally in primary motor cortex (M1), primary somatosensory cortex (S1) and dorsal premotor cortex (PMd). Extracellular discharges of 94 to 240 (average 142) cortical neurons were recorded while each monkey performed the behavioral tasks. We applied the n-th order unscented Kalman filter to the data collected in 16 daily sessions: 6 sessions from Monkey C and 10 sessions from Monkey G. Data used from each session ranged from 9 to 25 minutes. After evaluating filter accuracy off-line, we conducted six on-line experiments, three with each monkey, while the monkeys controlled the BMI using the unscented Kalman filter and comparison methods in closed-loop operation. We treated the neurons recorded from different cortical areas as one ensemble; differences between individual cortical areas were not considered here. N-th Order Unscented Kalman Filter Our n-th order unscented Kalman filter (UKF) combined two extensions to the standard Kalman filter [42]: (1) the unscented transform [41], which allowed approximate filtering under non-linear models, and (2) the n-th order extension, which allowed autoregressive movement models and multiple temporal-offset neural tuning models.
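The sigma-point machinery behind the unscented transform can be illustrated with a short sketch. This is a generic, minimal version using the symmetric sigma-point set with a scaling parameter kappa, not the authors' implementation; the function and variable names are ours.

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a non-linear function f
    using 2n+1 symmetric sigma points."""
    n = len(mean)
    # Matrix square root of (n + kappa) * cov via Cholesky factorization.
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] \
        + [mean + S[:, i] for i in range(n)] \
        + [mean - S[:, i] for i in range(n)]
    # Weights: the central point gets kappa/(n+kappa), the rest 1/(2(n+kappa)).
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Evaluate f at each sigma point, then recover mean and covariance.
    ys = np.array([f(x) for x in sigma])
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear f the transform is exact, which makes a convenient sanity check; for the quadratic tuning model it gives the Gaussian approximation the UKF needs without computing Jacobians.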
Figure 2 shows a comparison of the standard Kalman filter (Figure 2A) and the n-th order unscented Kalman filter (Figure 2B), as well as examples of a linear neural tuning model (Figure 2C), quadratic neural tuning model (Figure 2D), and autoregressive (AR 1 vs AR n) movement models (Figure 2E). A side-by-side comparison of the filtering equations is shown in Table 1. Figure 2. Comparison of the standard Kalman filter with the n-th order unscented Kalman filter. A: The standard Kalman filter predicts future position and velocity based on a linear model of neural tuning and predictions of the present position and velocity only. B: The n-th order unscented Kalman filter predicts future position and velocity based on a quadratic model of neural tuning and n history taps of position and velocity (AR n). C: Example of linear neural tuning model. D: Example of quadratic tuning model. E: Example AR 1 and AR n movement models. Table 1. Comparison of the equations for the standard Kalman filter and our unscented Kalman filter. Like the standard Kalman filter, the n-th order unscented Kalman filter inferred the hidden state (the position and velocity of the desired movement) from the observations (neuronal rates). The state transition model, or movement model, predicted the hidden state at the current time step given the state at the previous n time steps. The observation model, or neural tuning model, predicted the expected neuronal rates from the estimated desired movement via a non-linear function. We incorporated multiple taps of the state in the neural tuning model to relate neural activity with hand kinematics at multiple time offsets simultaneously. We used a nonlinear quadratic model of tuning to express neuronal rates as a function of hand position and velocity. Tuning Model Validation We analyzed the predictive accuracy of the quadratic tuning model used in our n-th order unscented Kalman filter.
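A quadratic tuning model of the kind just described can be sketched as a feature expansion of position and velocity followed by a least-squares fit with the Moore-Penrose pseudoinverse, as in the validation below. The particular set of quadratic terms chosen here is illustrative; the paper's exact feature set is not reproduced, and all names are ours.

```python
import numpy as np

def quad_features(px, py, vx, vy):
    """Quadratic tuning features: linear terms, squared terms, and a bias.
    (An illustrative choice of terms, not necessarily the paper's.)"""
    return np.stack([px, py, vx, vy,
                     px**2, py**2, vx**2, vy**2,
                     np.ones_like(px)], axis=1)

def fit_tuning(kin, rates):
    """Fit per-neuron tuning coefficients via the pseudoinverse.
    kin: tuple of (px, py, vx, vy) arrays; rates: binned spike counts."""
    X = quad_features(*kin)
    return np.linalg.pinv(X) @ rates  # shape: (n_features,) or (n_features, n_neurons)
```

The squared terms let the model express the low-center, high-surround velocity tuning noted in Figure 3, which a linear model (a plane in rate-velocity space) cannot capture.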
Firing rates of single neurons were predicted from hand position and velocity using the quadratic (with n = 1 and n = 10 taps) and the linear neural tuning models after the models were fit with linear regression using the Moore-Penrose pseudoinverse. A 10-fold cross-validation procedure was used to test predictive accuracy from 16 recording sessions with an average of 142 neurons recorded per session, and we report results using signal-to-noise ratios (SNR, where the signal was the recorded binned spike count) and correlation coefficients (CC). The n = 1 tap quadratic model (SNR = 0.03±0.29 dB, CC = 0.10±0.09; mean±standard deviation) was more predictive (P <0.001, two-sided, paired sign-test) than the linear model (SNR = 0.01±0.27 dB, CC = 0.07±0.08). 1753 out of 2273 units (approximately 77%) were better predicted using the quadratic model. The n = 10 tap quadratic model (SNR = 0.05±0.32 dB, CC = 0.11±0.10) was more predictive (P<0.001) than the n = 1 tap quadratic model (about 900 or approximately 40% of units were better predicted). The superior performance of the quadratic tuning model is illustrated in the contour plots of Figure 3, which show the tuning to position and velocity of eight representative neurons and parameter fits using the linear and quadratic (n = 1) models. The x and y coordinates in the plots indicate x and y positions or velocities and the brightness of the shading indicates the predicted firing rate (Figure 3, left two columns) and true firing rate (Figure 3, right-most column). For clarity, the fits to velocity (Figure 3, top four rows) and position (Figure 3, bottom four rows) are shown separately. 
The right-most column of Figure 3 shows the actual firing rate estimated on a 50 by 50 grid, which spanned plus and minus three standard deviations of the position or velocity values (smaller of the standard deviations for x and y) observed during the experimental session, using Gaussian kernel smoothing, with kernel width one standard deviation of the observed values (smaller of the standard deviations for x and y). Figure 3. Contour plots of parameter fits for linear and quadratic tuning models to the tuning of eight representative neurons. The plot axes are the x- and y-axis of the hand position or velocity. Brighter intensity of shading indicates higher firing rate, in spikes/sec. The right-most column depicts the smoothed true firing rate. The quadratic model captures the trends of neuronal modulations better than the linear model for most neurons. For velocity tuning (Figure 3, top four rows), the quadratic model captures the low-center, high-surround tuning pattern seen in many neurons, while the linear model cannot capture this pattern because it is restricted to fitting a plane in the (rate, x, y) space. For position tuning (Figure 3, bottom four rows), the more expressive quadratic model captures the tuning patterns better than the linear model. While more sophisticated models of tuning, such as higher-order or non-parametric models, may model neural activity more accurately, our model is relatively simple, fast to fit and evaluate, and grounded in previous work (see Materials and Methods), while demonstrating significantly better predictive accuracy than the commonly-used linear model. Off-line reconstruction We compared the ability of our method to reconstruct hand movements from neural recordings with several commonly used, real-time methods by performing 10-fold cross-validation on 16 previously recorded sessions. 
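The Gaussian kernel smoothing used for the true-rate maps above amounts to kernel-weighted averaging of the observed rates over a grid. A minimal sketch follows (a Nadaraya-Watson style estimator; the normalization and names are our assumptions, not the paper's code):

```python
import numpy as np

def smoothed_rate_map(x, y, rates, grid_size=50, n_sd=3.0):
    """Estimate firing rate on a grid_size x grid_size grid spanning
    +/- n_sd standard deviations, by Gaussian-kernel-weighted averaging.
    Kernel width = one standard deviation of the observed values
    (the smaller of the x and y standard deviations, as described above)."""
    sd = min(x.std(), y.std())
    gx = np.linspace(-n_sd * sd, n_sd * sd, grid_size)
    gy = np.linspace(-n_sd * sd, n_sd * sd, grid_size)
    out = np.empty((grid_size, grid_size))
    for i, cx in enumerate(gx):
        for j, cy in enumerate(gy):
            w = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sd**2))
            out[j, i] = (w @ rates) / w.sum()  # weighted mean rate at (cx, cy)
    return out
```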
Parameters for the algorithms were fitted by ridge regression, a regularized form of linear regression, using recorded neural and behavioral (joystick position and velocity) data. The first cross-validation fold of each session was used to optimize ridge regression parameters and omitted from the results. The mean off-line reconstruction accuracy of the 10th order unscented Kalman filter (UKF), the 1st order unscented Kalman filter, the standard Kalman filter, the 10 tap Wiener filter fitted with ridge regression (RR), the 10 tap Wiener filter fitted with ordinary least squares (OLS), and the population vector method used by Taylor et al. [13] is shown in Figure 4, grouped by monkey. The y-axis shows the signal-to-noise ratio (SNR, where the signal was the recorded behavior) of the hand position reconstruction and error bars indicate plus and minus one standard error over the 9 cross-validation folds of each session and the x and y axes (for a total of 108 observations for Monkey C and 180 observations for Monkey G). Reconstruction accuracy for position and velocity, measured in SNR and correlation coefficient, for the algorithms is shown in Table 2, grouped by behavioral task. Figure 4. Off-line reconstruction accuracy for 2 monkeys (C and G) for each algorithm. Accuracy is quantified as signal-to-noise ratio (SNR) of the position reconstructions, averaged between x and y dimensions. Error bars indicate plus and minus one standard error. Table 2. Off-line reconstruction accuracy for the 10th order UKF, Kalman filter, Wiener filter, and population vector method. In terms of position estimates, the 10th order UKF with our quadratic tuning model was consistently more accurate than the other algorithms. The two-sided, paired sign test with 288 observations (16 sessions, 9 folds, 2 dimensions) and a fixed significance level was used to evaluate significance.
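Ridge regression, used above to fit the filter parameters, adds an L2 penalty to ordinary least squares so that coefficient estimates stay stable when many correlated neuronal inputs are present. A minimal sketch of the closed-form solution (the penalty strength `lam` stands in for the ridge parameter optimized on the first cross-validation fold; names are ours):

```python
import numpy as np

def ridge_fit(X, Y, lam):
    """Ridge regression: minimize ||X B - Y||^2 + lam * ||B||^2.
    Closed form: B = (X'X + lam * I)^{-1} X'Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)
```

With `lam = 0` this reduces to ordinary least squares; larger values shrink the coefficients toward zero, which is why (as the neuron-dropping analysis later shows) the benefit of ridge regression grows with the number of neurons and hence parameters.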
The 10th order UKF produced position estimates with significantly higher SNR than the 1st order UKF (, mean difference 0.85 dB), the standard Kalman filter (, mean difference 1.25 dB), the 10 tap Wiener filter fit using ridge regression (, mean difference 1.11 dB), the 10 tap Wiener filter fit using ordinary least squares ( mean difference 1.55 dB), and Taylor's variant of the population vector method (, mean difference 5.42 dB). When sessions of pursuit task and center-out task were separately analyzed, the 10th order UKF was 1.23 dB more accurate than the 1st order UKF in the pursuit task and 0.48 dB more accurate in the center-out task. The 1st order UKF produced position estimates with significantly higher SNR than the standard Kalman filter (, mean difference 0.39 dB), the 10 tap Wiener filter fit using ridge regression (, mean difference 0.25 dB), the 10 tap Wiener filter fit using ordinary least squares (, mean difference 0.70 dB), and Taylor's variant of the population vector method (, mean difference 4.57 dB). For predicting velocity, the 10th order UKF produced estimates with significantly higher SNR than the 1st order UKF (, mean difference 0.27 dB), the standard Kalman filter (, mean difference 0.36 dB), 10 tap Wiener filter fit using ridge regression (, mean difference 0.29 dB), the 10 tap Wiener filter fit using ordinary least squares ( mean difference 0.82 dB), and Taylor's variant of the population vector method (, mean difference 2.60 dB). The 1st order UKF produced velocity estimates with significantly higher SNR than the standard Kalman filter (, mean difference 0.09 dB), the 10 tap Wiener filter fit using ordinary least squares ( mean difference 0.55 dB), and Taylor's variant of the population vector method (, mean difference 2.33 dB). Similar results were obtained when the correlation coefficient was used as a measure of filter performance. 
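The SNR and correlation-coefficient metrics reported throughout can be computed as follows. This sketch assumes SNR is defined as the variance of the recorded signal over the mean squared error of the estimate, expressed in decibels; the paper's exact definition may differ in detail.

```python
import numpy as np

def snr_db(signal, estimate):
    """Signal-to-noise ratio in dB: variance of the recorded signal
    divided by the mean squared error of the estimate (assumed definition)."""
    mse = np.mean((signal - estimate)**2)
    return 10 * np.log10(np.var(signal) / mse)

def corr_coef(signal, estimate):
    """Pearson correlation coefficient between signal and estimate."""
    return np.corrcoef(signal, estimate)[0, 1]
```

Note that the two metrics can disagree: CC is invariant to offset and scale of the estimate, while SNR penalizes both, which is consistent with some comparisons above reaching significance in SNR but not in CC.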
On-line performance We compared the 10th order UKF to the Kalman filter and Wiener filter in on-line, closed-loop BMI control in six recording sessions: three with monkey C and three with monkey G. In each session, the monkey first performed the pursuit task using joystick control for 6 to 10 minutes. During this time period, 5 minutes of data was used to fit parameters for the algorithms. In each session, all algorithms were fit on the same data. Then the monkey performed the pursuit task using BMI control with each algorithm in turn for 5 to 8 minutes. The evaluation order of the algorithms was switched between sessions; however, not all orderings could be used in the three sessions for each monkey. During BMI control, the monkey was required to hold the joystick as an indication of active participation; time periods when the monkey did not hold the joystick were omitted from the analysis. Performance was measured by comparing the position of the target (the signal for SNR calculations) and the BMI-controlled cursor. Table 3 shows the signal-to-noise ratio and correlation coefficient for each algorithm in each session, with the mean taken across the x- and y-axes. Figure 5 shows example traces of the BMI-controlled cursor and target positions in session 19. The two-sided, paired sign-test was used to measure significance with the two axes treated separately and significance value was set at . In terms of SNR, the monkeys performed significantly better when using the 10th order UKF than when using the Kalman filter (p<0.05, 12 observations) and the 10 tap Wiener filter fitted with ridge regression (p<0.05, 10 observations). In terms of CC, no comparison was significantly different at the level. Figure 5. Example traces of y-position during on-line, closed-loop BMI operation in a representative experimental session (session 19, Monkey C). The dashed sinusoidal curves indicate target position. Table 3.
Comparison of behavioral performance using on-line, closed-loop BMI driven by a 10th order UKF, a Kalman filter, and a 10 tap Wiener filter fit using ridge regression. Model, parameter, and algorithm analysis Our neural tuning model related neural activity with behavior both prior to and after the time instant of neural activity. The parameters past taps and future taps, in units of 100 ms, described the time offsets prior to and after the instant of neural activity between which tuning was modelled, respectively (see Materials and Methods). We investigated the relationship between choices of the number of future and past taps and reconstruction accuracy for the n-th order UKF (Figure 2B). The ridge regression parameter was optimized for each setting of the number of taps using the first fold of 10 fold cross-validation, and we report the accuracy on the remaining 9 folds. Plots of the mean position accuracy over various choices of the number of future and past taps for two sessions, one with center-out task (session 1) and one with pursuit task (session 16), are shown in Figure 6A. The number of future taps is shown on the x-axis and each setting of past taps is depicted as a separate curve. For the pursuit task, the performance steadily increases with the number of future taps and increases slowly with the number of past taps. For the center-out task, the performance was maximum when 15 future and 2 past taps were used. A large number of future taps resulted in decreased performance, while the number of past taps had small effects on performance. Figure 6. Dependency of reconstruction accuracy on the filter parameters. A: Reconstruction accuracy quantified as signal-to-noise ratio (SNR) versus number of future (x-axis) and past taps (curves). B: Example traces of position reconstruction with parts of filter disabled. The thick dashed curve shows the joystick x-axis position. The solid curve shows the reconstruction using the fully-functional 10th order UKF. 
The dotted curve shows the reconstruction using a 10th order UKF with the movement model assumed to be the physical equations relating position and velocity, instead of fitted to data. The dash-dotted curve shows the 10th order UKF with the neural observations ignored. To test the capacity of the movement model to predict hand trajectories, we conducted two analyses. In the first analysis, the neural tuning model update step of the 10th order UKF was disabled so that the filter ignored neural activity and used only the movement model to “dead reckon.” In the second analysis, the movement model was not fit to the training data but set by assumption so that position was the discrete integral of velocity and velocity remained constant except for noise perturbations. The movement model noise covariance was fit to the data under these assumptions by calculating the mean-squared-error matrix of the residuals when using this movement model to predict next states. Figure 6B shows example traces of reconstruction under these two conditions on pursuit task (session 16). The true position of the joystick is shown by the thick dashed curve. The “dead reckoning” filter (dash-dotted curve) produced useless predictions shortly after filtering began, showing that the movement model could not reconstruct the hand trajectory alone, even though the monkey tried to follow a deterministic Lissajous curve. The 10th order filter with the assumed movement model (dotted curve) produced less accurate predictions than the filter with movement model fitted from the data. The position estimate SNR of the 10th order assumed movement model filter was 3.88±0.27 dB (mean±standard error, 18 observations) for the pursuit task session and 2.89±0.44 dB in the center-out task session. 
For the fully-functional 10th order UKF, the SNR was 6.25±0.23 dB for the pursuit task session and 4.08±0.36 dB for the center-out task session, showing a large benefit to using a fitted movement model, especially for the pursuit task. The position estimate SNR of the 1st order assumed movement model filter was 1.57±0.76 dB for the pursuit task session and 0.54±0.59 dB for the center-out task session. Since the assumed movement model was not fitted to data and the movement model noise covariances were identical, this difference in performance between the 1st order and 10th order assumed movement model filters must arise from the different accuracies of the 1 tap and 10 tap quadratic neural tuning model. The large difference in accuracy (2.30 and 2.35 dB) shows the benefit of modeling neural tuning across multiple time offsets simultaneously, although much of this benefit likely comes from the autocorrelation of movements, which is also captured by data-fitted movement models. To quantify the extent the approximations of the unscented Kalman filter affected performance, we performed off-line reconstructions using standard particle filters with identical models as the 1st and 10th order unscented Kalman filter. The particle filters used 50,000 particles and the same parameters, initial conditions, and test data as the unscented Kalman filters. Since we had many sessions and cross-validation folds for comparison, only one particle filter run was performed per session and cross-validation fold. We used the posterior mean of the particles as the output. For the 1st order model, the particle filter produced significantly more accurate position reconstructions (two-sided, paired sign-test, 288 observations, , mean difference 0.07 dB) than the unscented Kalman filter. For the 10th order model, the difference in performance was not significant at the level, with the unscented Kalman filter having a nominal 0.02 dB advantage in mean SNR. 
This was likely due to the large state space (40 dimensional) associated with the 10th order model—even the large number of particles could not represent distributions in this state space as well as a multivariate normal distribution, hence the UKF provided similar accuracy even with the unscented approximation. Figure 7 shows off-line reconstruction accuracy for a pursuit task session when different-sized subsets of the neurons are used (neuron dropping curves). For each setting of the number of neurons, 10 subsets of neurons were randomly selected and each algorithm was evaluated on these subsets using 10 fold cross-validation. The first fold was reserved for finding optimal ridge regression parameters, and the mean accuracy on the nine remaining folds is plotted in Figure 7. The 1st and 10th order unscented Kalman filters reconstruct position more accurately than the Kalman filter, Wiener filter, and population vector method even for small numbers of neurons. The advantage of the 10th order UKF increases with the number of neurons. The Wiener filter fitted with ridge regression approaches the accuracy of the 1st order UKF as the number of neurons increases. As expected, the benefit of ridge regression for fitting the Wiener filter grows larger as the number of neurons, and hence number of parameters, increases. Modeling the noise covariance between neurons becomes more important as the number of neurons increases, as can be seen by the lower performance of a modified Kalman filter which does not model neuron noise covariance (Kalman w/o covariance) compared to the unmodified Kalman filter. The neural tuning model noise covariance of the Kalman w/o covariance filter has all entries not on the diagonal set to zero. The population vector method peaks in performance at around 60 neurons and then decreases in accuracy, demonstrating the sub-optimality of the parameter fitting procedure which ignores covariance among neurons. Figure 7.
Dependency of reconstruction accuracy for each algorithm on the number of neurons. The y-axis depicts the mean accuracy among 10 random subsets of neurons used by all algorithms to make reconstructions. The curve labeled Kalman w/o covariance indicates the reconstruction accuracy of a Kalman filter with the off-diagonal entries of the neural tuning model noise covariance set to zero. In terms of computational load, the MATLAB implementation of the 10th order UKF on an Intel Pentium 4 class computer used 0.012±0.005 seconds per iteration (mean±standard deviation), or around 80 Hz on average. The 30th order UKF (15 future and 15 past taps) used 0.0360±0.0001 seconds per iteration, or around 28 Hz on average. Our on-line implementation in C++ using Automatically Tuned Linear Algebra Software (ATLAS) easily executed faster than 10 Hz, our binning frequency. In this study, we achieved an improvement over previous closed-loop linear BMI decoding by implementing a more accurate decoding algorithm, the n-th order unscented Kalman filter (UKF). This filter modeled arm movement profiles better because it used the history of past movement, and it described neuronal modulations to movements better by using a quadratic model of neuronal tuning which included tuning at multiple time offsets. The filter performed well both in off-line reconstruction of previously recorded data and on-line, closed-loop BMI operation. Review of previous algorithms Much work has been done investigating algorithmic methods for decoding continuous control signals from extracellular neural recordings for neuroprosthetics (for a survey see Bashashati et al. [9]). The underlying theory stems from the pioneering work of Georgopoulos et al. [43], which reported the cosine relationship between firing rates of M1 neurons and the angle between arm movement and the neurons' preferred directions. 
The observation of this relationship led to a hypothesis of neuronal encoding of movements called the population vector model, in which movement velocity is calculated as vector sums of single-neuron vectors pointing in the neurons' preferred directions and scaled by the neurons' firing rates [10]. Many BMI studies used this approach to decode movement parameters from population activity [11], [12], [13], [14], [15]. The Wiener filter, an optimal linear regression method, improves upon the population vector approach. The Wiener filter has been used in many studies [3], [17], [18], [19], [20], [44] and remains a staple of BMI research because of its relative simplicity and efficacy. As research on BMI decoding methods progressed, attention turned to the Kalman filter [21], [22], [23], [24], [45], [46], which explicitly separates the models of how neural activity relates to produce movements and how these movements evolve over time. The Kalman filter, being a probabilistic method, also provides confidence estimates. Non-linear models of neural tuning provide a better description of neuronal modulations related to motor parameters, but are more computationally demanding to use. The switching Kalman filter, in which several Kalman filters operate in parallel using different parameters, was a non-linear method shown to be superior to the Kalman filter for BMI decoding by Wu et al. [23]. Another non-linear approach, called the particle filter, sequential Monte-Carlo, or condensation, is a recursive Bayesian estimator based on non-parametric representations of probability distributions and stochastic simulation [39]. Several studies have investigated the particle filter for BMI decoding with a variety of non-linear models for neural tuning: Gao et al. [26], [27], Brockwell et al. [25], Shoham et al. [28]. However, due to the heavy computational burden, online closed-loop BMI using the particle filter has not been reported. 
Another class of decoding methods works directly from individual neuron spikes instead of instantaneous firing rate estimates. In this approach, spike trains are modeled as discrete events or point processes and decoding can operate at millisecond time scales. The point process analog of the Kalman filter, using a Gaussian representation for uncertainty in state estimates and an inhomogeneous Poisson model of spiking, was derived by Eden et al. (2004a, 2004b) and called the stochastic state point process filter (SSPPF) [29], [30]. Barbieri et al. estimated the location of a foraging rat using recordings from CA1 hippocampal neurons and the SSPPF [47]. Truccolo et al. (2005, 2008) analyzed and compared the ability of the SSPPF to estimate several behavioral variables in simulations, monkeys, and humans [31], [32]. Wang et al. (2006) showed that preserving a non-parametric posterior distribution for estimated hand movements using a point process particle filter improves decoding accuracy versus the SSPPF in simulation [48]. Brockwell et al. (2007) used a Markov chain Monte-Carlo procedure for fitting point process filter parameters [49]. However, there has been no implementation of an online, closed-loop BMI which uses a point process filter. To improve decoding of simple reaching movements, tuning to the goal coordinates of reach trajectories has been used to augment tuning to movement. Kemere et al. (2004) included both movement tuning and target position tuning in a maximum-likelihood filter [50]. Srinivasan et al. (2005, 2006) incorporated the estimated target position of a reaching movement in both the Kalman and point process filter frameworks [51], [52]. Later, Srinivasan et al.
(2007) combined tuning to target position, point process inputs, and continuous-value inputs to allow neural spikes and other neural measurements such as local field potentials (LFPs), electrocorticography (ECoG), electroencephalography (EEG), and electromyography (EMG) to be used in a single Bayesian filter [38]. Mulliken et al. (2008) included the target location in the state of a Kalman filter for prediction from posterior parietal cortex [53]. Other techniques have been investigated for decoding of continuous hand movements. Isaacs et al. (2000) used principal component analysis and the nearest-neighbor algorithm [54]. Kim et al. (2003) proposed a competitive mixture of linear filters [55], [56]. Sanchez et al. (2002, 2003, and 2004) and Hatsopoulos et al. (2004) proposed various artificial neural-network based approaches [18], [33], [34], [35], [57]. Shpigelman et al. (2003, 2004, and 2005) used support vector regression and a custom-built kernel called the spikernel [58], [59], [60]. Fisher and Black (2006) proposed an autoregressive moving average (ARMA) approach [61], and Shpigelman et al. (2008) demonstrated the kernel autoregressive moving average (KARMA) method with the spikernel in closed-loop BMI [62]. In addition to decoding continuous hand movements, a variety of techniques have been employed for decoding discretized action choices, for example, in the studies of Hatsopoulos et al. [18], Musallam et al. [36], and Santhanam et al. [37]. While there is a large variety of algorithms available for decoding desired movement from neural signals, only our approach and the KARMA algorithm of Shpigelman et al. [62] have incorporated non-linear models of neural tuning in closed-loop BMI.

Quadratic tuning model

In this study, we explored whether a quadratic model of neural tuning can improve BMI decoding accuracy. Our analysis showed that our quadratic model of neural tuning was significantly more predictive of neuron firing rate than a linear model.
We then implemented an unscented Kalman filter which used this quadratic model to infer desired hand movements. The increased spike count prediction accuracy (0.02 dB) and off-line reconstruction accuracy of the (1st order) UKF versus the standard Kalman filter (0.39 dB) and 10 tap Wiener filter (0.25 dB) demonstrate the benefits of our quadratic model. By using the unscented transform, we were able to implement a non-linear filter without resorting to computationally expensive particle filtering techniques.

Movement history

Our decoding method was further enhanced by incorporating a short history of hand kinematics into the hand movement model. We implemented an n-th order UKF which used the hand movement in the n previous time steps to predict hand movements in the next time step. Adding a short history to the state space had the additional benefit of modeling neural tuning across multiple time offsets simultaneously. When using multiple taps, the 10th order UKF produced more accurate reconstructions than the 1st order UKF (0.85 dB improvement), demonstrating the value of incorporating a short history in the state space. We explored the optimal history length, or number of taps, for the UKF. Our results suggest that the number of taps for best performance depends on the behavioral task. For the pursuit task, accuracy increased with the number of future taps and plateaued at n = 15 or slightly later, and accuracy increased slowly with the number of past taps. For the center-out task, a small number of future taps resulted in the highest accuracy, while the number of past taps had small effects. The improvement of the 10th order UKF versus the 1st order UKF was greater for the pursuit task (mean 1.23 dB) than for the center-out task (mean 0.48 dB).
Based on these results, we conjecture that the richer movement model of the 10th order UKF was able to capture the hand movement patterns produced during the performance of the pursuit task better than those generated during execution of the center-out task. This is likely because hand movements for the center-out task are autocorrelated over shorter time spans than hand movements for the pursuit task. Hand movements during center-out reaches were brief and unrelated between reaches, while during the pursuit task the hand moved relatively smoothly. The n taps of our movement model can be viewed as extra smoothing, hence the pursuit task, with smoother movement trajectories, benefits more than the center-out task. Our analysis showed that the movement model of the 10th order UKF made large contributions to the accuracy of the filter (6.25 vs 3.88 dB in pursuit task), yet this movement model was unable to provide accurate estimates by itself, without the aid of the neural recordings (dead reckoning, Figure 6B). In previous studies on the Kalman filter, one lingering question was how to set the best time offset in the model between hand movements and neural activity [22]. Wu et al. (2006) searched for the best time offset using a greedy stochastic search mechanism [24]. Our n-th order implementation allowed multiple time offsets to be used simultaneously. The ridge regression regularization used during parameter fitting automatically chooses the best time offset(s) by suppressing the weight coefficients of less useful time offsets. By using regularization, we have essentially replaced the combinatorial search for the best time offset for each neuron with a continuous optimization problem, at the cost of increased bias. 
We indirectly gauged the benefit of modeling tuning relationships across multiple time offsets by comparing the 1st and 10th order unscented Kalman filters with movement models assumed to be the physical equations relating position and velocity, instead of being fitted to training data. The large difference in accuracy (around 2.3 dB) showed the benefit of modeling tuning relationships across multiple time offsets, though much of this improvement is also captured by data-fitted movement models.

Advantages of the n-th order unscented Kalman filter

The 10th order and 1st order UKF both produced significantly more accurate reconstructions than the standard Kalman filter, Wiener filter, and the population vector method [10], [13]. In online, closed-loop BMI operation, the 10th order UKF allowed the monkey to perform a pursuit task significantly better than both the Kalman filter (mean improvement 1.04 dB) and Wiener filter (mean improvement 2.49 dB). While the SNR values reported in this study may seem low compared to filter performance in other domains, the large inherent noise in neural activity (compare 0.05 dB mean predictive accuracy per neuron with the accuracy of sensors from other domains) makes the BMI decoding problem challenging. These results demonstrate the advantage of the non-linear model of neural tuning to arm movements at multiple time offsets and the advantage of leveraging patterns of movement. We have demonstrated one computational approach that can achieve these improvements without resorting to a computationally heavy particle filter, the filter design typically used for non-linear observation models. One may argue that the heavy cost of particle filters is not a significant obstacle due to the rapid improvement of computing power, for example, in personal desktop computers. However, an ideal BMI-driven prosthetic device will need to be highly mobile, placing strict limits on power consumption and weight, thus limiting computational power.
While modern portable personal computers may be fast enough to host particle filters, they also consume dozens of watts of power and only manage a few hours on a typical battery pack. Thus, an accurate yet computationally efficient filtering algorithm is desirable for a compact BMI-driven prosthetic device. When compared to the commonly-used Wiener filter, our approach has several advantages. When the parameters of the Wiener filter are fitted using least squares, the noise of the neurons is assumed to be independent and of the same variance. These assumptions are violated by real neural populations [24]. The UKF explicitly models the noise of neurons in a full covariance matrix, allowing different variances among neurons and excess covariance among neurons not due to the desired output variable [24] to be modeled. The Wiener filter typically requires more parameters to be fitted than the UKF, leading to increased training data requirements and increased risk of overfitting. However, overfitting can be mitigated with regularization techniques such as ridge regression or sophisticated Bayesian regression techniques such as Variational Bayesian Least Squares [63]. In contrast to the Wiener filter, the UKF is a Bayesian technique which explicitly models the uncertainty of hand kinematics estimation, giving users access to measures of confidence in kinematic estimates. Furthermore, the UKF explicitly separates the neural tuning model and the movement model. Besides theoretical elegance, this separation allows parameter fitting schemes which can make better use of training data. For example, the model for neural tuning may be estimated from data obtained while the user is performing several different tasks, while individual movement models are estimated for each task. Attempting this with a Wiener filter will confound the autocorrelations from hand movements with the cross-correlation between hand movements and neural activity. 
Compared to the point process based methods, our approach offers lower temporal resolution. However, the increased temporal resolution offered by point process methods comes at higher computational cost. The normally-distributed noise assumption inherent in all Kalman filters is likely violated by some neurons with such low firing rates that their spike counts per bin are very low. This is one of several approximations made for computational convenience in the Kalman filter approach and a main reason for the development of point process methods. However, point process methods assume all neurons are well discriminated single units, an assumption which is difficult to verify and which forces multiunits to be discarded. To model covariance of the noise among neurons, point process methods must model neuron interactions, which further increases their computational cost or approximation error, while neuron noise covariance is included in the basic Kalman filter. For real-time operation on mobile devices, approximations and assumptions of convenience will likely be made by any approach, and the best algorithm will be the one which has the most appropriate tradeoff between accuracy and computational speed. The switching Kalman filter proposed by Wu et al. [23] is the algorithm most similar to our UKF in design. The switching Kalman filter can be thought of as using a piecewise-linear model, where the pieces are combined in a weighted manner. The space of piecewise-linear functions is clearly more expressive than the space of quadratic functions, but the number of pieces required to approximate quadratic tuning functions for each neuron over many input dimensions (position, velocity, history taps) is very large. Wu et al. reported an approximately 8.9% reduction in mean squared error versus the standard Kalman filter, corresponding to about 0.37 dB improvement.
In comparison, our 1st order UKF outperforms the standard Kalman filter by about 0.39 dB and the 10th order UKF outperforms the standard Kalman filter by about 1.25 dB. The kernel autoregressive moving average (KARMA) algorithm proposed by Shpigelman et al. [62] is the algorithm most similar to our algorithm in capability. Shpigelman et al. used a kernel transform custom-built for neural tuning, called the spikernel, as the kernel for the KARMA algorithm. This kernel allows non-linear, non-parametric tuning models to be used for decoding. The KARMA algorithm, the kernel-trick extension of the well-known ARMA algorithm, also employs an autoregressive movement model to improve predictions. Like our approach, the approach by Shpigelman et al. has achieved real-time, closed-loop BMI operation with a non-linear and pattern-exploiting method. Unlike our approach, the KARMA algorithm is not Bayesian and does not directly produce confidence estimates of its output.

Future clinical applications

Our n-th order unscented Kalman filter is particularly suited for use in cortically driven prosthetic devices because of its relatively high accuracy and unique features. Our algorithm takes advantage of a non-linear model of neural tuning in a computationally inexpensive implementation that is well suited for mobile, low-power prosthetic systems. Furthermore, our algorithm takes advantage of patterns of movement, abundantly found in typical tasks, such as feeding, that a prosthetic device may be engaged to perform. Since this new approach is Bayesian, it allows the computation of the certitude of decoded movements. Thus, decoded movements with low probability can be suppressed, and undesired movements caused by decoding errors or unexpected neural activity can be detected and prevented. The separation of the neural tuning and movement models also allows training data to be used more efficiently, making the prosthetic easier to calibrate.
The unscented Kalman filter can be applied to learn neural tuning model parameters or adapt to time-varying neural tuning and time-varying patterns of movement through a technique called dual Kalman filtering for joint parameter and hidden state estimation [64]. Using this approach, a person with paralysis can be trained to use a BMI-driven cortical prosthetic. The user first observes example movements performed by a technician or computer algorithm. Neural activity recorded from the patient's brain and the example movements are then used to compute a first estimate of the neural tuning model. Next, the user assumes control of the BMI. Then, the UKF would simultaneously decode neural activity and improve the estimates of the neural tuning model parameters. As neural tuning changes over time due to learning, the UKF would modify the neural tuning model to exploit these changes. Unlike the co-adaptive framework of Taylor et al. (2002), the UKF would compute in a probabilistically optimal fashion, without requiring knowledge of what the user is doing, and would update models in the background without explicit recalibration, making the system more user-friendly. The UKF can also compensate for degradation of neural recordings as this can be described as changes in the neural tuning model. Furthermore, models of movement can be improved over time to best predict movements produced during execution of particular tasks. These models can also be learned over time to handle novel tasks. Our future work will pursue these approaches toward the development of user-friendly, computationally efficient, and accurate algorithms for BMIs.

Materials and Methods

Neuronal recordings

All surgical and experimental procedures conformed to the National Research Council's Guide for the Care and Use of Laboratory Animals (1996) and were approved by the Duke University Animal Care and Use Committee.
Cortical recordings were collected from 2 rhesus monkeys (Macaca mulatta) performing reaching tasks by moving a computer cursor using a hand-held joystick and by controlling the cursor directly through their cortical activity decoded by a BMI (Figure 1). Monkey C (which performed the task with its left hand) was implanted with four 32-microwire arrays in M1, PMd, PP and supplementary motor area (SMA) in the right hemisphere. Monkey G (which performed the task with its right hand) was implanted with six microelectrode arrays (32 microwires in each) in primary motor cortex (M1), primary somatosensory cortex (S1) and dorsal premotor cortex (PMd) of both hemispheres. Within each array, electrodes were grouped into 16 pairs. The separation between adjacent pairs was 1 mm. Each pair consisted of two microwires placed tightly together, with one electrode 300 microns longer than the other. The longer electrode in each pair was of equal or larger diameter. Monkey C was implanted with stainless steel and tungsten electrodes of 46 and 51 micron diameter in areas SMA and M1 and tungsten electrodes of 51 micron diameter in areas PMd and PP. Monkey G was implanted with stainless steel electrodes of 40 and 63 micron diameter (Figure 1B). The sites with the best quality of neuronal signals were selected. Data from Monkey C were recorded from left PMd (9 daily recording sessions), left SMA (9 sessions), left M1 (9 sessions), and right PP (1 session). Data from Monkey G were recorded from left PMd (13 sessions), left M1 (13 sessions), left S1 (8 sessions), and right PMd (7 sessions). Extracellular neural signals were amplified, digitized, and high-pass filtered using Multichannel Acquisition Processors (Plexon, Inc.). Neuronal action potentials were discriminated by thresholding and sorted on-line through waveform templates set by the experimenter using Plexon spike-sorting software or using templates produced by custom-built spike sorting software [65].
This custom spike sorting software clusters waveforms by their three largest principal components using a modified expectation-maximization algorithm and removes spurious clusters by thresholding on various criteria [65]. Single and multi-units were not treated differently for prediction purposes.

Behavioral Tasks

During the experimental sessions, each monkey sat in a primate chair. Their heads were unrestrained, and the recording system was connected to the implants using light flexible wires. A two degree of freedom (left-right and forward-backward) analog joystick was mounted vertically at the monkey's waist level. The joystick was 30 cm in length and had a maximum deflection of 12 cm. The monkeys were trained to manipulate the joystick with their hands. Monkey C performed with the left hand, and Monkey G performed with the right hand. An electrical resistance-based touch sensor on the joystick handle measured whether the monkey was holding the joystick. An LCD projector projected visual images on a screen mounted 1.5 m in front of the monkeys (Figure 1A). Using the joystick, monkeys moved a round cursor, defined by a ring 1.6 cm in diameter. Forward, backward, rightward, and leftward movements of the joystick translated to the upward, downward, rightward, and leftward movements of the cursor, respectively. The joystick to cursor gain varied between 3.2 and 6.4, depending on session (i.e. a 1 cm movement of the joystick translated into a 3.2 to 6.4 cm movement of the cursor). Targets were defined by rings 16 to 20.8 cm in diameter on the screen. The median speeds at which monkeys moved the joystick were approximately 3.5 to 5.5 cm/s, depending on the session. Each behavioral task required placing the computer cursor over the target using the joystick. The monkeys performed two tasks: (1) center-out and (2) pursuit.
The center-out task (Figure 1C) used stationary targets that occurred at randomly chosen points on a fixed-radius ring around the center of the screen. The monkey had to hold the cursor at the center target at the screen center. After the center target disappeared and a peripheral target appeared, the monkey had to move the cursor to the peripheral target and keep it inside the target until it received a fruit-juice reward. The inter-trial interval that followed a successful trial was 500 ms. The inter-trial interval after an error trial was 700 to 1000 ms. Hold times varied per session from 350 to 1050 ms. The trials in which the monkey failed to put the cursor over the target or failed to fulfill the hold requirement were not rewarded. After a trial was finished, the center target appeared again to start the next trial. In our analysis, data collected during the center-out task were treated as a continuous stream and not segmented by trial or movement onset. The pursuit task (Figure 1D) used a moving target which followed a Lissajous curve of the form:

x(t) = A sin(2π f_x t + φ), y(t) = A sin(2π f_y t), (1a, 1b)

where x(t) and y(t) are the x- and y-axis coordinates and t is time in milliseconds. Fixed values were used for the frequencies f_x and f_y (in Hz), the phase φ, and the amplitude A in cm (in joystick scale). The temporal frequency was different for the x- and y-axes, making the two coordinates uncorrelated. The monkey had to keep the cursor within the moving target to receive periodic juice rewards.

Data preprocessing

For all algorithms, spike counts were calculated in 100 ms nonoverlapping bins to estimate the instantaneous firing rate. Joystick position was recorded at 1 kHz and down-sampled to 10 Hz to match the binning rate. Velocity was calculated from position by two-point digital differentiation. Position and velocity data were centered at their means. Spike counts were centered at their means for the Kalman-based filters. Data recorded while the monkey did not hold the joystick were disregarded. Off-line analysis was conducted using MATLAB (Mathworks, Inc).
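The preprocessing steps described above (100 ms binning, down-sampling to 10 Hz, two-point differentiation, and mean-centering) can be sketched as follows. This is a minimal illustration with hypothetical inputs (spike times in seconds and a one-dimensional 1 kHz position trace), not the authors' MATLAB code:

```python
import numpy as np

def preprocess(spike_times, position_1khz, bin_ms=100):
    """Bin spikes, down-sample a 1 kHz position trace to the bin rate,
    differentiate position to get velocity, and mean-center everything."""
    dt = bin_ms / 1000.0                       # 0.1 s bins -> 10 Hz
    step = bin_ms                              # samples per bin at 1 kHz
    n_bins = len(position_1khz) // step
    # spike counts in non-overlapping 100 ms bins
    edges = np.arange(n_bins + 1) * dt
    counts = np.histogram(spike_times, bins=edges)[0].astype(float)
    # down-sample position by taking one sample per bin
    pos = position_1khz[::step][:n_bins].astype(float)
    # two-point digital differentiation for velocity
    vel = np.empty_like(pos)
    vel[1:] = (pos[1:] - pos[:-1]) / dt
    vel[0] = vel[1]
    # center each signal at its mean
    return counts - counts.mean(), pos - pos.mean(), vel - vel.mean()
```

In a real pipeline this would run per neuron and per kinematic coordinate; the sketch shows one of each.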
Real-time filters were implemented in a custom built BMI system running on a workstation with an Intel Xeon 2.2 GHz processor.

Computational Model

Our n-th order unscented Kalman filter (UKF) can be described as a modification of the Kalman filter [42], a commonly-used Bayesian recursive estimation method for a specific class of hidden Markov models (HMMs) with continuous states and observations, normally distributed uncertainty, normally distributed noise, and linear transition and observation models. An introduction to the Kalman filter can be found in the Supporting Information section (Materials S1). The n-th order unscented Kalman filter combines two extensions: (1) the unscented Kalman filter [41], which allows arbitrary non-linear models to be used in Kalman filtering, and (2) the n-th order extension, which allows more expressive autoregressive order n (AR n) movement models and neural tuning models. Figure 2 provides a comparison of the hidden Markov models for the Kalman filter (Figure 2A) and the n-th order unscented Kalman filter (Figure 2B). An example of a linear neural tuning model is shown in Figure 2C, and an example of a quadratic neural tuning model is shown in Figure 2D. Figure 2D also depicts example autoregressive (AR 1 vs AR n) movement models. In the hidden Markov model for BMI decoding using the n-th order unscented Kalman filter (Figure 2B), the hidden state is the position and velocity of the desired hand movement, described by the variable x_t. The state transition model or movement model, a linear function f, predicts the hidden state at the current time step t given the state at the previous n time steps:

x_t = f(x_{t-1}, x_{t-2}, ..., x_{t-n}) + w_t, (2)

where w_t is normal, i.i.d. noise, called the movement model noise, which describes the uncertainty arising from approximations made in the model and intrinsic randomness in the movement process.
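For reference, one predict-and-update cycle of the standard (linear) Kalman filter that the UKF extends can be sketched as below. The matrix names (F for the transition model, H for the linear observation model, W and Q for the noise covariances) follow the conventions of this section, but the function itself is a generic textbook implementation, not the authors' code:

```python
import numpy as np

def kalman_step(x, P, y, F, H, W, Q):
    """One predict + update cycle of the standard linear Kalman filter.
    x, P: previous state mean and covariance; y: current observation."""
    # prediction step
    x_pred = F @ x
    P_pred = F @ P @ F.T + W
    # update step
    S = H @ P_pred @ H.T + Q               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With a non-linear observation model, the update step above no longer has this closed form, which is what motivates the unscented transform described later in this section.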
This movement model is an autoregressive process of order n (AR n), as compared to the AR 1 movement models of the Kalman filters previously used for BMI decoding (Figure 2D) [21], [22], [23], [24]. Note that the standard unscented Kalman filter allows non-linear movement models, but we did not design a non-linear movement model and instead focused on a non-linear observation model, described next. The observation model relates the observations to the state via a non-linear function h:

y_t = h(x_t, x_{t-1}, ..., x_{t-n+1}) + q_t, (3)

where y_t are the observations (100 ms binned spike counts) at time t and q_t is normal, i.i.d. noise, called the observation model noise, which describes the uncertainty in the neural tuning model and the intrinsic randomness of the neurons. The observation model predicts the expected neural activity for a given hand movement state. Following neurophysiological convention, we call it the neural tuning model. We incorporate multiple taps of both position and velocity in the neural tuning model to relate neural activity with hand kinematics at multiple time offsets simultaneously, avoiding the need to search for a best time offset [22], [24]. Note that the neural tuning model captures relationships between neural activity at time t and movements from several time steps in the past up to several time steps in the future, meaning that during decoding, desired movement in the future is predicted. We call the number of taps ahead of the current time the number of future taps and the number of taps behind it the number of past taps. In practice, the predictions into the future are usually inaccurate, but as they pass through the time-tap structure of the filter, they are improved by incorporating information from more neural observations. In all experiments, we used the state tap corresponding to the current observations as the filter output, i.e. we did not use lagged estimates or future predictions.
Quadratic Neural Tuning Model

Many models have been proposed to describe the relationship between neural activity and arm movement, notably the cosine tuning model [43], tuning to speed [31], [66], [67], and tuning to the distance of reach [68], [69]. We used a more general model which we call the quadratic model of tuning that combined several features used in the previously proposed models: tuning to position, velocity, distance, and speed. In Cartesian coordinates, the model is:

z_t = b_1 p_x + b_2 p_y + b_3 v_x + b_4 v_y + b_5 (p_x^2 + p_y^2) + b_6 (v_x^2 + v_y^2), (4)

where z_t is the mean-subtracted single-neuron firing rate at time t, p_x and p_y are the x and y coordinates of the cursor at time t, v_x and v_y are the x and y velocities of the cursor, and b_1, ..., b_6 are scalar parameters, one set per neuron; the b_5 and b_6 terms capture tuning to (squared) distance from the center and (squared) speed. Note that this equation describes the quadratic neural tuning model for the 1st order UKF. For higher values of n, additional terms for the other time offsets are added. For example, the 2nd order UKF with 1 future tap and 1 past tap has the set of terms duplicated for the additional time offset. In general, the number of scalar parameters per neuron in our quadratic model grows in proportion to the number of taps n. This quadratic model worked well for our experimental task in which the movements were performed by a joystick where the zero position corresponded to the center of the video screen. We chose not to include higher derivative terms, such as acceleration and jerk, because they did not contribute substantially to decoding accuracy. We implemented the n-th order UKF in Matlab and C++ using the equations presented by Julier et al. [41] with one exception: we used a linear movement model, which meant the first step was the same as that in the standard Kalman filter [42]. The variables in the algorithm are as follows. The vector x_t of length 4n contained the means of the history of state variables (x and y position and velocity for each of the n taps) at time t:

x_t = [p_{x,t}, p_{y,t}, v_{x,t}, v_{y,t}, ..., p_{x,t-n+1}, p_{y,t-n+1}, v_{x,t-n+1}, v_{y,t-n+1}]^T. (5)

The 4n by 4n matrix P_t was the state variable covariance matrix. The vector y_t of length N was the observed binned spike counts at time t, where N is the number of neurons.
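A minimal sketch of evaluating this quadratic tuning model for one neuron, assuming a 1st order model with position, velocity, squared-distance, and squared-speed terms; the parameter vector `b` is hypothetical and would be fitted per neuron:

```python
import numpy as np

def quadratic_features(px, py, vx, vy):
    """Feature vector of the quadratic tuning model: position, velocity,
    squared distance from the center, and squared speed."""
    return np.array([px, py, vx, vy,
                     px**2 + py**2,    # (squared) distance from center
                     vx**2 + vy**2])   # (squared) speed

def predicted_rate(b, px, py, vx, vy):
    """Mean-subtracted firing rate predicted by one neuron's parameters b."""
    return float(b @ quadratic_features(px, py, vx, vy))
```

For an n-th order model, the same six features would be duplicated for each additional time offset, with a separate set of coefficients per tap.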
An iteration of the filter began with the prediction step, in which the state at the previous time step was used to predict the state at the current time step:

x_t^- = F x_{t-1}, P_t^- = F P_{t-1} F^T + W, (6)

where x_t^- and P_t^- were the mean and covariance of the predicted state, x_{t-1} and P_{t-1} were the mean and covariance of the previous state, the matrix F implemented the linear movement model, and W was the covariance of the movement model noise. F and W are square 4n by 4n matrices. Details on how these and other parameter matrices were fitted are described in the next section. Besides predicting position and velocity from previous values, the matrix F implemented the propagation of taps through time. Next, the update step corrected the prediction from the prediction step using the observations in a Bayesian way. In the Kalman filter, the neural tuning model is linear and the update step can be implemented in a series of matrix equations (Table 1) [42], because linear models allow straightforward, closed-form computation of the posterior distribution of the state estimate given the observation. However, analytical calculation of the posterior distribution is, in general, only possible under this linear model assumption [70]. For arbitrary non-linear observation models, computing the posterior distribution poses an intractable integration problem [70]. The unscented Kalman filter gives an approximate solution using the unscented transform, a method for approximating the mean and covariance of normally distributed random variables after they have passed through a non-linear function [41]. This transform uses a fixed set of algorithmically selected simulation points, called sigma points. The sigma points completely capture the first and second moments of the distribution [70]. Geometrically speaking, the sigma points are located at the mean and along the eigenvectors of the covariance matrix, if the orthogonal matrix square root is used in their calculation [41], though we used the Cholesky decomposition for the matrix square root.
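The structure of the transition matrix F, which both applies the fitted movement model to the newest tap and shifts the history taps one step into the past, can be sketched as follows. The dimensions (4 state variables per tap) and the name `F_tilde` for the fitted coefficient block are illustrative assumptions:

```python
import numpy as np

def build_transition(F_tilde, n, d=4):
    """Assemble the full (d*n) x (d*n) transition matrix: the top d rows
    apply the fitted AR-n coefficients F_tilde (d x d*n) to the stacked
    history; the remaining rows shift each tap one step into the past."""
    dn = d * n
    F = np.zeros((dn, dn))
    F[:d, :] = F_tilde                 # AR-n prediction of the newest tap
    F[d:, :dn - d] = np.eye(dn - d)    # identity blocks propagate the taps
    return F
```

Multiplying this matrix by the stacked state vector produces a new first tap from the AR-n model while the older taps slide down one slot.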
2d+1 sigma points are required, where d is the dimension of the state space (here d = 4n). The set of sigma points is calculated from the state mean and covariance and evaluated through the non-linear observation function. The mean and covariance of the result are then calculated by taking the weighted mean and weighted covariance of the sigma points (for a detailed review see [70]). This approximation scheme computes precisely the effect on the mean and covariance of a normal distribution by the third order and below terms of the Taylor expansion of the non-linear function, while the presence of fourth order or higher terms in the Taylor expansion introduces error [70]. Since we use a quadratic observation function, the mean and covariance of our predicted observations are calculated precisely by the unscented transform. However, the non-linear observation function makes the distribution of the predicted observation no longer normal, while the unscented Kalman filtering paradigm assumes normality and discards the higher order moments, introducing approximation error. Compared to the extended Kalman filter (EKF) [42], a well-known non-linear filtering technique, the unscented Kalman filter has better approximation accuracy for the same asymptotic computational cost [70]. In the general unscented Kalman filter, the sigma points are generated from x_{t-1} and P_{t-1} and evaluated in the non-linear state transition and observation functions. In our implementation, only the neural tuning model was non-linear, so sigma points were generated from x_t^- and P_t^-. The sigma points were set as:

χ_0 = x_t^-, (8a)
χ_i = x_t^- + (sqrt((d + κ) P_t^-))_i, i = 1, ..., d, (8b)
χ_{i+d} = x_t^- - (sqrt((d + κ) P_t^-))_i, i = 1, ..., d, (8c)

where the subscript outside the parentheses indicates the row taken from the matrix inside the parentheses. The square root is the matrix square root. For robustness, this computation was performed using the Cholesky decomposition. κ is a parameter which specifies how heavily the center sigma point is weighted compared to the other sigma points. Adjusting this parameter can improve the approximation of higher order moments [70].
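A minimal sketch of sigma-point generation with a Cholesky square root, as described above; the weight formulas follow the standard unscented transform of Julier et al., and the weighted mean and covariance of the generated points reproduce the input mean and covariance exactly:

```python
import numpy as np

def sigma_points(mean, cov, kappa):
    """Generate the 2d+1 sigma points of the unscented transform and
    their weights, using a Cholesky factor as the matrix square root."""
    d = len(mean)
    S = np.linalg.cholesky((d + kappa) * cov)   # columns give the offsets
    pts = [mean]
    for i in range(d):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    w = np.full(2 * d + 1, 1.0 / (2 * (d + kappa)))
    w[0] = kappa / (d + kappa)                  # center-point weight
    return np.array(pts), w
```

Because the points are placed symmetrically about the mean, odd moments cancel, which is what makes the transform exact through the second moment.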
We used the conventional value for normal distributions. Next, the sigma points were evaluated in the quadratic neural tuning function (9), where denote the sigma points after observation function evaluation. These function evaluations were implemented as separate matrix multiplications of the form (10), where is the predicted (mean-subtracted) spike count for neuron at time t, the vector on the left hand side is one post-function sigma point, and the right-most vector is one pre-function sigma point with augmented terms. The bolded augmented terms are added to each of the sigma points using the sigma points' own values for position and velocity. Note that equation 10 shows the multiplication for the 1st order UKF. For higher n, there are more columns of model parameters in the parameter matrix and more rows in the vector corresponding to the history taps. The matrix in the center of equation 10, containing the neural tuning model parameters for all N neurons, plays a role similar to the observation matrix of the Kalman filter. The mean and covariance of the predicted neural firing rates were found using the weighted mean and weighted covariance (11), where is the covariance matrix of the tuning model noise. The weights were set as in (13a). Then, the Kalman gain was calculated (14), where the state-observation cross-covariance was given by (15). The Kalman gain was used to correct the state estimate using the discrepancy between the predicted and actual (mean-subtracted) spike counts (16). Finally, the state covariance was updated (17). Equations 6 through 17 implement one iteration of the algorithm. A side-by-side comparison of the equations for the Kalman filter and the n-th order unscented Kalman filter is shown in Table 1. In off-line reconstructions, the initial values of were set by taking the means of the state variables in the training data, and the initial values of were set by taking the covariance of the state variables in the training data.
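The exact augmented terms of equation 10 are not shown here; the sketch below assumes they are the squares of position and velocity, so treat the feature list as hypothetical.

```python
def predict_spike_count(state, coeffs):
    """Hypothetical quadratic tuning model for one neuron: linear in
    position, velocity, and their squares (the 'augmented' terms)."""
    px, py, vx, vy = state
    features = [px, py, vx, vy, px * px, py * py, vx * vx, vy * vy]
    return sum(c * f for c, f in zip(coeffs, features))

# A neuron tuned to rightward velocity with a quadratic position component.
coeffs = [0.0, 0.0, 2.0, 0.0, 0.5, 0.5, 0.0, 0.0]
rate = predict_spike_count((1.0, -1.0, 3.0, 0.0), coeffs)
```

In the filter, each sigma point would be pushed through this function, one multiplication per neuron per sigma point.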
When n was larger than 1, the means and covariances for the initial values were duplicated for each tap, so that the initial covariance matrix had a block-diagonal form with n blocks. In on-line BMI, the initial values of were set as the joystick position and velocity at that time, and the initial values of were set to the identity matrix, corresponding to a variance of 1 cm for position and 10 cm/sec for velocity. Parameter Fitting We fitted the parameter matrices and to training data using regularized linear regression and estimated the matrices and from the regression residuals. We chose a form of Tikhonov regularization called ridge regression because of its simplicity and low computational cost. To fit , we first composed the 4 by T matrix of the training data positions and velocities, where T is the number of data points (i.e. the time length of the training data). We then constructed a by T matrix , where column i of was the vertical concatenation of columns of matrix . To avoid the missing data problem when filling the first n columns of , the first n columns of and were omitted when fitting . Then, we fitted the intermediary matrix using ridge regression (18), where was the ridge regression parameter. The selection of ridge regression parameters is discussed in the next section. was then augmented with entries which propagated the history taps to make , where is the identity matrix and is the zero matrix. Subscripts indicate matrix sizes. An alternative method for setting the movement model is to use the equations describing motion, e.g. position is the integral of velocity over time. However, this method does not capture the patterns in the movements generated by the BMI user as well as movement models fit from kinematic data.
In practice, our fits to are similar to the matrix implementing the motion equations except for modest differences. The movement model noise covariance matrix was estimated by first computing (20), where is the 4 by residual matrix from fitting , and the division is executed per element. We then augmented to construct (21). To fit , we first constructed the N by T matrix of mean-subtracted binned spike counts from the training data, with the spike counts from all neurons at one time step in each column. We then constructed the by T matrix , where column i of was the vertical concatenation of columns of matrix with the bolded quadratic terms in equation 10 inserted appropriately. To implement the k future taps, must be shifted back in time by k steps. This was done by removing the last k columns of and the first k columns of . Subsequently, to avoid the missing data problem when filling the first columns of , the first columns of and were removed. We then fitted using ridge regression (22), where was the ridge regression parameter. The N by N neural tuning model noise covariance matrix was estimated using (23), where is the N by residual matrix from fitting , and the division is executed per element. Algorithm Evaluation The n-th order unscented Kalman filter and several comparison methods were evaluated off-line using data collected in experiments in which monkeys moved a computer cursor using the joystick. The n-th order UKF used ten taps, with five future taps and five past taps. The UKF with past taps only was tested to evaluate the benefit of the additional taps. A standard Kalman filter was evaluated to determine the benefit of the quadratic tuning model. For comparison against algorithms commonly used for a closed-loop BMI, a Wiener filter with 10 taps and the population vector method used by Taylor et al. [13] were evaluated. For off-line reconstructions, cross-validation was conducted.
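Ridge regression itself is simple enough to show in one dimension; lam below plays the role of the ridge parameters in equations 18 and 22.

```python
def ridge_1d(x, y, lam):
    """Ridge solution for a single-feature model y ~ b*x:
    b = sum(x_i * y_i) / (sum(x_i^2) + lam).
    lam = 0 recovers ordinary least squares."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]            # exactly y = 2x
b_ols = ridge_1d(x, y, 0.0)    # no shrinkage
b_ridge = ridge_1d(x, y, 1.0)  # coefficient shrunk toward zero
```

The same shrinkage applies column-wise in the matrix case, which is why regularization tames the many quadratic and history-tap features.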
In this procedure, a portion of the data for each session was held out for testing and the rest was used to fit parameters. Performance of the algorithms was evaluated on the held-out portion to avoid fitting models and making predictions on the same data. The data for each session were divided into 10 equal-sized portions (or folds) and the testing procedure was repeated on each held-out portion in turn. Both the movement and neural tuning models were fit for each cross-validation fold. In this study we did not address the question of how to design a general movement model; instead, we leave this for future work. For off-line reconstructions, ridge regression parameters for every algorithm fitted using ridge regression were chosen by optimizing for the highest position reconstruction accuracy on the first cross-validation fold of each session, i.e. fitting and predicting were performed repeatedly for different choices of (for the UKF, and were sought independently) on the first cross-validation fold. This first fold was omitted when aggregating performance metrics. For on-line experiments, ridge regression parameters were set to for the 10th order UKF, for the 1st order UKF and Kalman filter, and for the Wiener filter. These values were picked based on previous experience. Wiener filter parameters were also fit with ordinary least squares (OLS) without ridge regression to demonstrate the benefit of regularization. The Kalman filter used for comparison had the same state variables as the 1st order UKF, and its models were fitted in a similar way as the 1st order UKF, less the quadratic terms for the observation model matrix (see Supplementary Materials). For the population-vector method, the neuronal weights were fit via ordinary least squares without regularization. The original formulation of the population vector method predicted velocity and did not predict position directly.
To make position predictions, we substituted the Cartesian position coordinates for the velocity components. We implemented the Taylor et al. [13] version of the population vector method with one slight modification: the baseline firing rate (mean) and normalization constant (standard deviation) of neurons were fit once from training data, instead of being updated during filtering using a sliding window of spiking history. To quantify filter performance, we compared algorithm-estimated trajectories to joystick trajectories (in off-line reconstructions) and to target trajectories (in closed-loop BMI). We computed two metrics: the signal-to-noise ratio (SNR) and the correlation coefficient (CC). SNR was calculated as in (24), where is the sample variance of the desired values (joystick or target) and is the mean squared error of the predicted values from the desired values. Position, velocity, and the x and y axes were evaluated separately. The signal-to-noise ratio can be viewed as the inverse of the normalized mean squared error, where the normalization factor quantifies the power of the desired signal. SNR is widely used in engineering and has been previously used to measure BMI decoding performance [34], [44]. The SNR is unitless and comparable across experimental setups, unlike the mean squared error, which is usually incomparable between studies due to differences in movement magnitudes. In this respect the SNR is similar to the CC. However, the SNR is not translation and scale invariant, unlike the CC. This is an advantage because translation and scale invariance imply that the CC may leave undetected certain unwanted filtering results. For example, a predicted hand trajectory that is incorrect by a large but constant displacement has the same CC as a trajectory without the erroneous displacement, since only deviations from the mean are analyzed by the CC. As indicated by its name, CC is a measure of correlation, but we are interested in measuring accuracy.
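The SNR/CC contrast in this passage can be demonstrated directly. This sketch uses the eq. (24) definition of SNR in dB; the data are made up.

```python
import math

def snr_db(desired, predicted):
    """SNR (dB) per eq. (24): variance of the desired signal over the MSE."""
    n = len(desired)
    mean_d = sum(desired) / n
    var_d = sum((d - mean_d) ** 2 for d in desired) / n
    mse = sum((d - p) ** 2 for d, p in zip(desired, predicted)) / n
    return 10.0 * math.log10(var_d / mse)

def pearson_cc(xs, ys):
    """Pearson correlation coefficient (translation and scale invariant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A prediction off by a constant displacement: CC stays perfect,
# but the SNR penalizes the offset, as the text argues.
desired = [0.0, 1.0, 2.0, 3.0]
shifted = [1.0, 2.0, 3.0, 4.0]
cc = pearson_cc(desired, shifted)
snr = snr_db(desired, shifted)
```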
Furthermore, as the CC saturates at 1, its scale is compressed as it approaches 1, making it more difficult to grasp intuitively and making similar increments at lower values of CC and higher values of CC incomparable. Short of benchmark datasets, we believe the SNR measure best facilitates direct comparison between algorithms developed by different authors. To aggregate results for each session, mean SNR and CC among the cross-validation folds and between the x- and y-axis predictions were computed. Standard error of the mean was calculated for each session with 18 observations (9 folds × 2 axes). To test for significant effects, we treated each cross-validation fold and each axis as a condition for paired, two-sided sign tests. We used an α significance level. Supporting Information (0.26 MB DOC) We thank Dragan Dimitrov for conducting the surgery, Gary Lehew for engineering the experimental setup, Nathan Fitzsimmons for his helpful comments, Benjamin Grant for his automatic sorting software, Laura Oliveira, Weiying Drake, Susan Halkiotis, Ian Peikon, and Aaron Sandler for outstanding technical assistance. Author Contributions Conceived and designed the experiments: ZL JEO MAL MALN. Performed the experiments: JEO TH. Analyzed the data: ZL. Wrote the paper: ZL JEO TH MAL CH MALN.
{"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006243","timestamp":"2014-04-19T17:07:38Z","content_type":null,"content_length":"341484","record_id":"<urn:uuid:b6aadcc7-ff02-4683-af36-f930ec083342>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

March 28th 2011, 04:29 AM #1
Dear Colleagues,
Could you please help me in solving the following problem: If $Y$ is a subspace of a vector space $X$ and codim $Y=1$, then every element of $X/Y$ is called a hyperplane parallel to $Y$. Show that for any linear functional $f \neq 0$ on $X$, the set $H_{1}= \{x\in X \mid f(x)=1\}$ is a hyperplane parallel to the null space $N(f)$ of $f$.

March 28th 2011, 05:04 AM #2
Definition: A subspace $H$ of a vector space $V$ is called a hyperplane iff it is a maximal proper subspace, i.e. $\langle H, v\rangle = V$ for all $v\in V\setminus H$.
Claim: a subspace of a vector space is a hyperplane iff it is the kernel of a non-zero linear functional on the space.
Claim: a subspace of a finite-dimensional vector space is a hyperplane iff its dimension is one less than the space's.
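A sketch of the standard argument (not taken from the thread):

```latex
% Claim: H_1 = \{x \in X : f(x) = 1\} is a coset of N(f), i.e. an element
% of X/N(f), and N(f) has codimension 1.
\textbf{Sketch.} Since $f \neq 0$, pick $x_0$ with $f(x_0) = 1$.
For any $x \in H_1$ we have $f(x - x_0) = 1 - 1 = 0$, so $x - x_0 \in N(f)$;
conversely $f(x_0 + n) = 1$ for every $n \in N(f)$. Hence
\[
  H_1 = x_0 + N(f),
\]
an element of $X/N(f)$. Moreover every $x \in X$ decomposes as
$x = \bigl(x - f(x)\,x_0\bigr) + f(x)\,x_0$ with the first term in $N(f)$,
so $X = N(f) \oplus \operatorname{span}\{x_0\}$ and
$\operatorname{codim} N(f) = 1$, making $H_1$ a hyperplane parallel to $N(f)$.
```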
{"url":"http://mathhelpforum.com/differential-geometry/176054-hyperplane.html","timestamp":"2014-04-19T19:36:51Z","content_type":null,"content_length":"36862","record_id":"<urn:uuid:ce344aff-555e-4099-b017-18fe629a6a99>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/updates/4fcc0ae9e4b0c6963ad72953","timestamp":"2014-04-17T09:46:28Z","content_type":null,"content_length":"275750","record_id":"<urn:uuid:2aba5257-ebc3-4398-aad6-4fe227d9f331>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of universal gas constant: the constant of proportionality that relates the energy scale in physics to the temperature scale. Source: Wikipedia - CC BY-SA 3.0 Examples of universal gas constant in the following topics: □ For gas-specific reactions, you can use the equilibrium concentration. □ The equilibrium constant for gases can therefore also be expressed in terms of partial pressures. □ We can show this relationship mathematically using the ideal gas equation: PV = nRT where P is pressure, V is volume, n is concentration, R is the universal gas constant, and T is □ Because "RT" is constant as long as the temperature is constant, the pressure is directly proportional to the concentration. □ The equilibrium expression can be expressed in terms of partial pressure for a gas-phase reaction: $K_{p} = K_{c}(RT)^{\Delta n}$ where Kp is the equilibrium constant expressed in partial pressure, R is the universal gas constant, T is temperature, and $\Delta n$ is the change in the number of moles in the reaction. □ For gas-specific reactions, you can calculate the equilibrium constant using concentration or partial pressure.
□ The two (ultimately equivalent) equations for these two cases (half-cell, full cell) are as follows: Ered = Estd,red - RT/zF ln ared/aox (half-cell reduction potential) Ecell = Estd,cell - RT/zF ln Q (total cell potential) where: Ered is the half-cell reduction potential at the temperature of interest Estd,red is the standard half-cell reduction potential Ecell is the cell potential (electromotive force) Estd,cell is the standard cell potential at the temperature of interest R is the universal gas constant T is the absolute temperature a is the chemical activity for the relevant species F is the Faraday constant z is the number of moles transferred in the cell reaction or half-reaction Q is the reaction quotient □ This equation of state can be presented as: $(p + a/V_m^2)(V_m-b) = RT$ where p is the pressure, Vm is the molar volume, R is the universal gas constant, and T is the absolute temperature. □ The constants a and b have positive values and are specific to each gas. □ The term involving the constant a corrects for intermolecular attraction. □ The b term represents the excluded volume of the gas or the volume occupied by the gas particles. □ The van der Waals equation becomes the ideal gas law as these two correction terms approach zero. □ The van der Waals equation is a modification to the ideal gas law that corrects for non-zero gas volume and intermolecular interactions. □ The ideal gas law is a convenient approximation for predicting the behavior of gas-phase chemical reactions. □ The universal attractive force, or London dispersion force, also generally increases with molecular weight. The London dispersion force is caused by correlated movements of the electrons in interacting molecules. □ Finally, as a gas is compressed and pressure increases, repulsive forces from the gas molecules oppose the decrease in volume.
□ He corrected for the intermolecular attractive forces which hold the particles together and reduce the pressure on the container by replacing the pressure term with $P + (a/V_m^2)$ where Vm is the molar volume and a is a constant specific to each gas. □ The van der Waals equation of state can then be presented as: $(P+(a/V_m^2))(V_m-b) = RT$ Where b is the excluded volume, R is the universal gas constant, and T is the absolute temperature. □ The Arrhenius equation is a simple, but remarkably accurate, formula for the temperature dependence of the reaction rate constant, and therefore, the rate of a chemical reaction. □ A is the pre-exponential factor and R is the universal gas constant. □ What is "decaying" here is not the concentration of a reactant as a function of time, but the magnitude of the rate constant as a function of the exponent –Ea/RT. □ So, what would limit the rate constant if there were no activation energy requirements? □ They are sometimes estimated by comparing the observed rate constant with the one in which A is assumed to be the same as Z.
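To make the correction terms concrete, here is a small numerical comparison. The a and b values are approximate literature constants for CO2, and R is in L·atm/(mol·K); treat the numbers as illustrative.

```python
R = 0.082057  # universal gas constant in L·atm/(mol·K)

def ideal_pressure(n, V, T):
    """Ideal gas law solved for pressure: P = nRT / V."""
    return n * R * T / V

def vdw_pressure(n, V, T, a, b):
    """van der Waals equation solved for P:
    P = nRT / (V - nb) - a n^2 / V^2."""
    return n * R * T / (V - n * b) - a * n * n / (V * V)

# One mole in one litre at 300 K; a ~ 3.59, b ~ 0.0427 for CO2 (approximate).
p_ideal = ideal_pressure(1.0, 1.0, 300.0)
p_vdw = vdw_pressure(1.0, 1.0, 300.0, 3.59, 0.0427)
```

At this density the attraction term dominates, so the real-gas pressure comes out below the ideal prediction; setting a = b = 0 recovers the ideal gas law, as the bullet points state.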
{"url":"https://www.boundless.com/chemistry/definition/universal-gas-constant/","timestamp":"2014-04-16T08:03:33Z","content_type":null,"content_length":"49575","record_id":"<urn:uuid:6e1c41ae-16a4-46b8-8f0e-4e7c075f6046>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Capacitors: How do they store energy? Nice job Van !!! Maybe it'll help also to think about what goes on in a good dielectric vs in free space: Dielectric materials contain polar molecules, i.e. they have a + and a - end. Water is a good example. Pure water has a dielectric constant around 80, meaning that a capacitor with pure water between its plates would have 80X the capacitance of one with nothing but free space between them. (Water Molecules image courtesy of these guys; it's an interesting page.) In the presence of an increasing electric field those polar molecules will begin to align with it, abandoning their preferred random orientations, and that takes mechanical work. Discharging the capacitor removes the field so the dielectric relaxes. That's why oil is used for severe duty AC capacitors - its slippery molecules don't heat up so much as they oscillate with the field. Plastic capacitors will melt in some applications where oils thrive, like commutating or snubbing SCR's. It's analogous to a mechanical spring. You doubtless noticed the similarity - Van's W = K E^2, for a spring it's K X^2. Doubtless this is oversimplified but it helped me in my early days. Now - if someone can explain why it is that empty space has a dielectric constant - I'd be much obliged. old jim
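The spring analogy at the end can be written out; the 1/2 factors make the two formulas identical in form (the post's K absorbs them). Values below are arbitrary.

```python
def capacitor_energy(C, V):
    """Energy stored in a capacitor: U = (1/2) C V^2."""
    return 0.5 * C * V * V

def spring_energy(k, x):
    """Elastic energy stored in a spring: U = (1/2) k x^2 (same quadratic form)."""
    return 0.5 * k * x * x

u_cap = capacitor_energy(1e-6, 10.0)   # 1 uF charged to 10 V
u_spring = spring_energy(200.0, 0.1)   # 200 N/m spring compressed 10 cm
```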
{"url":"http://www.physicsforums.com/showthread.php?t=665452","timestamp":"2014-04-16T07:46:06Z","content_type":null,"content_length":"80320","record_id":"<urn:uuid:e5e1f4d3-5c86-400f-bb02-12424a2b5fbd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus III: Area and Arc Length in Polar Coordinates I

October 19th 2007, 02:08 PM #1 (Junior Member, May 2007)
Hi! I attempted to solve the following problems and I would really appreciate it if someone checks them and shows me the exact steps so I know where I went wrong. (I forgot how to find the limits, please show me how.) Thanks!
1) Write the integral that represents the area of the shaded region shown in the figure (please see attachment) of the following equation. r = 1 - cos 2θ
My answer: 1/2 ∫(1 - cos 2θ)^2 dθ with limits pi/2, 2pi (pure guess)
2) Find the area of the region by finding the limits and evaluating the integral of the following equations:
a) One petal of r = cos 5θ
b) Interior of r = 1 - sin θ
My answers:
a) limits: 0, pi/5; integral: 1/2 ∫ (cos 5θ)^2 dθ; area: pi/20 or 0.157
b) limits (these were a pure guess): pi/6, 5pi/6; integral: 1/2 ∫ (1 - sin θ)^2 dθ; area: pi/6 or 0.523

October 19th 2007, 02:57 PM #2
One petal of rose, $r=\cos(5\theta)$: Multiply by two because one petal is symmetric about the x-axis.
$\int_{0}^{\frac{\pi}{10}}\cos^{2}(5\theta)\,d\theta$
Last edited by galactus; November 24th 2008 at 05:39 AM.

October 19th 2007, 03:20 PM #3 (Junior Member, May 2007)
How do you find the limits by hand? Does that mean my answers are incorrect? If they are, then how would you solve it?

October 19th 2007, 03:43 PM #4
For the rose, I just used: Your answer to that one is correct. I just showed another way. For the other, it would appear you should integrate over the whole region, 0 to 2Pi.
Last edited by galactus; November 24th 2008 at 05:39 AM.
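The petal-area answer can be checked numerically; this midpoint-rule sketch assumes the limits 0 to pi/10 with the factor 2 * (1/2) = 1, as in the reply.

```python
import math

def petal_area(n=100000):
    """Midpoint-rule estimate of 2 * (1/2) * integral_0^{pi/10} cos^2(5t) dt,
    the area of one petal of r = cos 5t."""
    a, b = 0.0, math.pi / 10.0
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h   # midpoint of each subinterval
        total += math.cos(5.0 * t) ** 2 * h
    return total

area = petal_area()   # should approach pi/20, about 0.1571
```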
{"url":"http://mathhelpforum.com/calculus/20920-calculus-iii-area-arc-length-polar-coordinates-i.html","timestamp":"2014-04-20T05:02:25Z","content_type":null,"content_length":"41595","record_id":"<urn:uuid:9a1e8995-e2f0-4ed2-ad07-6724341bf610>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://mrwhatis.net/plaster-coverage-per-bag-of-cement.html","timestamp":"2014-04-18T08:09:05Z","content_type":null,"content_length":"38977","record_id":"<urn:uuid:97027469-62c3-4bfb-96e9-1394b3d6dcf1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Engineering Mechanics-Statics
PRITHVI C
Department of Mechanical Engineering, PESIT
1. INTRODUCTION TO STATICS
Engineering is the application of mathematics and science to meet human needs. This is fun. But now we have some bad news. Engineers need to know a great deal of science and mathematics in order to be able to engineer. Much of this science and mathematics is quite difficult, and there's a whole lot of it. It takes many years to learn everything you need. The next section of the course will start you on your quest to acquire these skills, by introducing you to statics. Mechanics is one of the oldest sciences. The earliest recorded writings in mechanics are those of Archimedes (287-212 B.C.) on the principle of the lever and the principle of buoyancy. The first investigation of a dynamic problem is credited to Galileo (1564-1642) for his experiments with falling stones. Newton was born in the year Galileo died. Newton (1642-1727) is famous for his three laws of motion. In his honor, a portion of classical mechanics is called Newtonian Mechanics. An alternate formulation for mechanics problems was provided by Lagrange (1736-1813) and Hamilton (1805-1865). Their formulation is based on the concept of energy. Mechanics is the physical science which deals with the effects of forces on objects. MECHANICS What is statics? Statics is actually the application of mathematics and basic physics (Newton's laws) to study forces in materials, machines and structures. Forces are of interest to engineers for two reasons: they cause materials to deform and break, and they cause things to move. Statics is used to calculate forces in systems that don't move, or move at constant velocity. Who needs statics? Structural engineers use statics to design buildings and structures.
Mechanical engineers use statics to design machinery, which may range from engines to micro-electro-mechanical systems.
{"url":"http://www.coursehero.com/file/6777035/Statics/","timestamp":"2014-04-21T07:05:58Z","content_type":null,"content_length":"34155","record_id":"<urn:uuid:9c741320-92c1-4b2e-a1fe-423f860b5bdd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
-6(a+8) distributive property simplify expression - WyzAnt Answers
-6(a+8): need to know how to work this problem out
{"url":"http://www.wyzant.com/resources/answers/4469/6_a_8_distributive_property_simplify_expression","timestamp":"2014-04-18T19:07:40Z","content_type":null,"content_length":"35954","record_id":"<urn:uuid:150b34bf-e50e-4374-9937-39527edcdb51>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
[Edu-sig] Pythonic Math must include... kirby urner kirby.urner at gmail.com Thu Jan 15 05:13:50 CET 2009 "Must include" would be like an intersection of many sets in a Venn Diagram, where we all gave our favorite movies and a very few, such as 'Bagdad Cafe' or 'Wendy and Lucy' or... proved common to us all (no suggesting those'd be -- just personal favorites). In this category, three candidates come to mind right away: Guido's for the gcd: def gcd(a,b): while b: a, b = b, a % b return a Then two generics we've all seen many times, generators for Pascal's Triangle and Fibonacci Sequence respectively: def pascal(): Pascal's Triangle ** row = [1] while True: yield row row = [ a + b for a, b in zip( [0] + row, row + [0] ) ] def fibonacci(a=0, b=1): while True: yield a a, b = a + b, a IDLE 1.2.1 >>> from first_steps import * >>> gcd(51, 34) >>> g = pascal() >>> g.next() >>> g.next() [1, 1] >>> g.next() [1, 2, 1] >>> g.next() [1, 3, 3, 1] >>> f = fibonacci() >>> f.next() >>> f.next() >>> f.next() >>> f.next() >>> f.next() >>> f.next() Check 'em out in kid-friendly Akbar font (derives from Matt Groening of Simpsons fame): http://www.wobblymusic.com/groening/akbar.html ( feel free to link or embed in your gnu math website ) I'm not claiming these are the only ways to write these. I do think it's a feature, not a bug, that I'm eschewing recursion in all three. Will get to that later, maybe in Scheme just like the Scheme folks would prefer (big lambda instead of little, which latter I saw put to good use at PPUG last night, well attended (about 30)). In terms of curriculum, these belong together for a host of reasons, not just that we want students to use generators to explore On-Line Encyclopedia of Integer Sequences type sequences. 
Pascal's Triangle actually contains Fibonaccis along successive diagonals but more important we're laying the foundation for figurate and polyhedral ball packings ala The Book of Numbers, Synergetics, other late 20th century distillations (of math and philosophy respectively). Fibonaccis converge to Phi i.e. (1 + math.sqrt(5) )/2. gcd will be critical in our relative primality checks, leading up to Euler's Theorem thence RSA, per the review below (a literature search from my cube at CubeSpace on Grand Ave): Remember, every browser has SSL, using RSA for handshaking, so it's not like we're giving them irrelevant info. Number theory goes into every DirecTV box thanks to NDS, other companies making use of this powerful public You should understand, as a supermarket manager or museum administrator, something about encryption, security, what's tough to the crack and what's not. The battle to make RSA public property was hard won, so it's not like our public school system is eager to surrender it back to obscurity. Student geek wannabes perk up at the thought of getting how this works, not hard to show in Javascript and/or Python. Makes school more interesting, to be getting the low-down. By the same token, corporate trainers not having the luxury of doing the whole nine yards in our revamped grades 8-12, have the ability to excerpt specific juicy parts for the walk of life they're in. Our maths have a biological flavor, thanks to Spore, thanks to Sims. We do a Biotum class almost right away ("Hello World" is maybe part of it's __repr__ ?). I'm definitely tilting this towards the health professions, just as I did our First Person Physics campaign (Dr. Bob Fuller or leader, University of Nebraska emeritus). The reason for using bizarre charactersets in the group theory piece is we want to get their attention off numbers and onto something more generic, could be pictograms, icons, pictures of vegetables... 
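One quick follow-up on the "Fibonaccis converge to Phi" remark, a sketch (not from the original post):

```python
def fib_ratio(n):
    """Ratio of consecutive Fibonacci numbers F(n+1)/F(n)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return b / float(a)

phi = (1 + 5 ** 0.5) / 2   # the golden ratio
ratio = fib_ratio(30)       # already agrees with phi to many digits
```

The error shrinks roughly by a factor of phi squared per step, so thirty iterations are plenty.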
Feedback welcome, ** http://www.flickr.com/photos/17157315@N00/3198473850/sizes/l/ -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.python.org/pipermail/edu-sig/attachments/20090114/c717638b/attachment.htm> More information about the Edu-sig mailing list
{"url":"https://mail.python.org/pipermail/edu-sig/2009-January/009008.html","timestamp":"2014-04-21T08:31:46Z","content_type":null,"content_length":"7880","record_id":"<urn:uuid:e2fce50a-f183-4af9-8590-cca98d873906>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
PowerPoint presentations on rotation:

* Rotation, Vibration, Electronic Spectra - Texas A&M University. Summary: Rotation and vibration spectra; rotational states; molecular spectroscopy: we can learn about molecules by studying how molecules absorb, emit, and scatter ...
  Source: http://sibor.physics.tamu.edu/teaching/phys689/lecture/13Rotation%20and%20vibration%20spectra.ppt
* Rotation and Revolution - Rutherford County Schools. Summary: How does the movement of the sun, earth, and moon relate to days, lunar cycles, and years? Orbit - the path the earth takes to travel around ...
  Source: http://www.lms.rcs.k12.tn.us/TEACHERS/tyreet/documents/RotationandRevolutionEclipseNotes.ppt
* Rotation and Revolution - Weebly.
  Source: http://msjanaulis.weebly.com/uploads/2/4/3/2/2432202/sol4.7rotationrevolution_manisha.ppt
* Symmetry, Rotation, Reflection, Translation.
  Source: http://inmanpearson.weebly.com/uploads/1/2/3/4/1234547/symmetry_rotation_reflection_translation_-_standard_1.24.11.ppt
* Rotation and Revolution - George Mason University.
  Source: http://sunrise.ite.gmu.edu:8080/Sunrise/DOCS/LessonsSOL/SOL4.7Rotation&RevolutionPre1.ppt
* Symmetry - Paulding County School District. Summary: Symmetry has 3 types: rotation, translation, reflection. What is a line of symmetry? A line on which a figure can be folded so that both sides match.
  Source: http://schools.paulding.k12.ga.us/ischooldistrict/media/files/1886/Symmetry%2C%20Rotation%2C%20Translation%2C%20Reflection.ppt
* Rotation - GCSE Resources for teachers and students. Summary: Objectives (D grade): rotate shapes about the origin; describe rotations fully about the origin; identify reflection symmetry in 3-D solids. (C grade): rotate shapes ...
  Source: http://djmaths.weebly.com/uploads/5/5/3/7/5537924/rotation.ppt
* Rotation and Revolution - LMS. Summary: Rotation & revolution; solar & lunar eclipse. Orbit - the path the earth takes to travel around the sun. Rotation - the Earth spinning on its axis.
  Source: http://lms.rcschools.net/TEACHERS/tyreet/documents/RotationandRevolution_Solar_Lunar_NOTES.ppt
* Revolution and Rotation - Amelon Elementary School.
  Source: http://amelon.amherst.k12.va.us/sites/default/files/Earth,%20Moon,%20Sun%204.7.ppt
* Transformations! Translations, Reflections, and Rotations. Summary: Commonly known as turns, a rotation is a transformation that turns a figure about a fixed point called the center of rotation.
  Source: http://plaza.ufl.edu/mel97/EME_4401_Micro_Micro_Teaching.ppt
* Rotations and Rotational Symmetry - Solon. Summary: We are learning to rotate a figure, describe a rotation, and identify rotational symmetries.
  Source: http://www.solonschools.org/accounts/PHardy/215201162849_RotationsandRotationalSymmetry.ppt
* Earth's Rotation and Revolution - Jefferson County Public Schools. Summary: Rotation is the spinning of the Earth on its axis. The time for one rotation is 24 hours.
  Source: http://classroom.jc-schools.net/collinsj/science/EARTHS_ROTATION_AND_REVOLUTION.ppt
* Rotation and Revolution of Earth - Ohio Wesleyan University. Summary: Legend has it that Galileo muttered the words "Eppur si muove" (It still moves) under his breath while being tried for heresy ...
  Source: http://go.owu.edu/~rakaye/ASTR110/Earth.ppt
* Translations, Rotations, Reflections, and Dilations. Summary: If a shape spins 90 degrees, how far does it spin? Describe how triangle A was transformed to make triangle B.
  Source: http://toddhamontree.pbworks.com/f/Translations,+Rotations,+Reflections,+and+Dilations.ppt
* The Concepts of Orientation/Rotation Transformations. Summary: ME 4135, Lecture Series 2, Fall 2011, Dr. R. Lindeke.
  Source: http://www.d.umn.edu/~rlindek1/ME4135_11/The%20Concepts%20of%20Orientation_Lecture%202.pptx
* Rotations (9-3). Summary: Apply rotation formulas to figures on the coordinate plane. A rotation is a transformation in which a figure is turned about a fixed point, called the center of rotation.
  Source: http://bakermath.org/Classes/Geometry/Chapter%209/9-3_rotations.ppt
* Rotational Motion and Equilibrium - Tenafly Public Schools. Summary: Translation, rotation, rolling. Translational motion: all particles in the object have the same instantaneous velocity (linear motion).
  Source: http://sites.tenafly.k12.nj.us/~hcoyle/AP%20Physics%20%20C%20Power%20Points/6%20APC%20Rotation%20Ch%2010/1%20AP%20Angular%20Position,%20Velocity%20and%20Acceleration.ppt
* Rotation of Rigid Bodies - WKU. Summary: Author: Phillip C. Womble.
  Source: http://physics.wku.edu/~womble/phys250/ch9.ppt
* Crop Rotations - Washington State University. Summary: Individual producers select the most appropriate rotation system; the rotation system allows for a great deal of customization for the individual producer.
  Source: http://www.tristate.wsu.edu/croprotations.ppt
* Symmetry - Cecil County Public Schools. Summary: Symmetry: rotation, translation, reflection. What is a line of symmetry? A line on which a figure can be folded so that both sides match.
  Source: http://tech.ccps.org/downloads/Symmetry,%20Rotation,%20Translation,%20Reflection.ppt
* Solar Rotation - George Mason University. Summary: Lab 3, differential rotation. The sun lacks a fixed rotation rate; since it is composed of a gaseous plasma, the rate of rotation is fastest at the ...
  Source: http://solar.gmu.edu/teaching/ASTR112_2005/astr114_lab3_Solar_Rotation.ppt
* Chapter 13 Rotation of a Rigid Body - Georgia State University. Summary: 12.1 Rotational motion; 12.2 Center of mass; 12.3 Rotational energy; 12.4 Moment of inertia; 12.5 Torque; 12.6 Rotational dynamics.
  Source: http://www.phy-astr.gsu.edu/Hsiao-Ling/Chapter12-F08.ppt
* Torque and Rotation - Cloud County Community College. Summary: Applied Physics, Lesson 14. By the end of the lesson you should understand terms and concepts related to the physics ideas known ...
  Source: http://www.cloud.edu/faculty_downloads/tleif/downloads/SC%20109%20Applied%20Physics%20(T-R%201200,%20MWF%20100)/Syllabus%20and%20Presentations/Lesson%2014%20(Torque%20and%20Rotation).ppt
numeral

from The American Heritage® Dictionary of the English Language, 4th Edition:
* n. A symbol or mark used to represent a number.
* n. The numbers, usually the last two digits, indicating by year a graduating class in a school or college.
* adj. Of, relating to, or representing numbers.

from Wiktionary, Creative Commons Attribution/Share-Alike License:
* n. A symbol that is not a word and represents a number, such as the Arabic numerals 1, 2, 3 and the Roman numerals I, V, X, L.
* n. A word or symbol representing a number.

from the GNU version of the Collaborative International Dictionary of English:
* adj. Of or pertaining to number; consisting of number or numerals.
* adj. Expressing number; representing number.
* n. A figure or character used to express a number.
* n. A word expressing a number.

from The Century Dictionary and Cyclopedia:
* Pertaining to number; consisting of numbers.
* Expressing number; representing number: as, numeral letters or characters, such as V or 5 for five.
* Synonyms: Numeral, Numerical. Numeral is more concrete than numerical: as, numeral adjectives or letters; numerical value, difference, equality, or equations.
* n. One of the series of words used in counting; a cardinal number.
* n. A figure or character used to express a number: as, the Arabic numerals, 1, 2, 3, etc., or the Roman numerals, I, V, X, L, C, D, M.
* n. In grammar, a word expressing a number or some relation of a number.
* n. In musical notation: an Arabic or Roman figure indicating a tone of the scale, as 1 for the tonic or do, 2 for re, 3 for mi, etc.; also, one of the figures used in thorough-bass, by which the constitution of a chord is indicated with reference to the bass tone or to the key-chord.
* n. In the Anglo-Saxon Church, a calendar or directory telling the variations in the canonical hours and the mass caused by saints' days and festivals.

from WordNet 3.0, Copyright 2006 by Princeton University. All rights reserved.:
* adj. of or relating to or denoting numbers
* n. a symbol used to represent a number

Etymology:
* From Middle English, of number, from Late Latin numerālis, from Latin numerus, number; see number. (American Heritage® Dictionary of the English Language, Fourth Edition)
* From Middle French numeral, from Latin numerālis ("pertaining to a number"), from numerus ("a number"). (Wiktionary)

Examples:
* Each numeral is outlined by silvered disks of reflecting material, and floodlights play upon the figures to make them show up clearly at a distance.
* Not to mention the fact that the site's example starts a sentence with a numeral, which is wrong.
* V (KI) n, where the second occurrence of n indicates a numeral, that is, the combinator that represents n.
* I was really excited about the bills, but then realized they looked strange -- on the bottom left corner of one of them the numeral was a '2' rather than a '50', and the upper left numeral was a
* Although the numerals used by the Babylonians were cumbersome (due, perhaps, to the necessity of having to adapt them to the use of the stylus and baked clay media), their place value system in which the "value" of a single digit depended on its position ("place") within the numeral was the same as that used in the decimal system.
* We still call our numeral system Arabic despite the fact that the Arabs do not use it anymore.
* Which brings me actually to my Romans numeral which is 23.
* The value of each card on the numeral line must be _added_ to that of the card on the balcony immediately above it, and you must again transfer from the ground packets to the numeral line any cards whose value corresponds with the addition thus made, it being understood that any card taken from the ground packet must always be placed on the numeral which is exactly underneath the balcony card to whose value it is added. (Lady Cadogan's Illustrated Games of Solitaire or Patience, New Revised Edition, including American Games)
* ThinkProgress attended the event and observed a variant of the original revolutionary war flag, which depicts a roman numeral II within the original thirteen stars, widely distributed throughout Brown party. (Think Progress » Brown victory party featured flag calling for a 'second' revolution, tea party-inspired civil war.)
* Technically, of course, Vidic's surname does not form a Roman numeral, rather it comprises them. (The Knowledge | Which footballers have been mentioned in parliament? | John Ashdown)
Water in rotating cylinder

Hi Pi-Bond! (just got up …)

> I was wondering if something similar to this method can be used to find the equation of the surface if the cylinder is rotated around a point at some angle to the vertical. For example, rotating a bucket of water by using a rope. The question previously mentioned used
>
> dh/dr = Centrifugal Force / Gravitational Force
>
> h: height of water w.r.t. the centre
> r: distance from the centre

Yes, so long as the water keeps the same shape inside the bucket, you can use a rotating frame whose axis is the vertical line through the top of the rope. Since the surface has a shape such that an object placed on top of the surface will not move relative to the water, it will be in equilibrium in the rotating frame. Since the only three forces on it are gravity, centrifugal, and normal, you can take tangential components and get

dh/dr = Centrifugal Force / Gravitational Force
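For the plain vertical-axis case, the relation dh/dr = (centrifugal force)/(gravitational force) becomes dh/dr = ω²r/g, which integrates to a paraboloid. A minimal numerical sketch (the function name, spin rate, and cylinder size below are made up for illustration):

```python
import math

def surface_height(r, omega, g=9.81, h0=0.0):
    """Free-surface height of water rotating about a vertical axis.

    Integrating dh/dr = (centrifugal accel) / (gravitational accel)
    = omega**2 * r / g outward from the axis gives a paraboloid:
    h(r) = h0 + omega**2 * r**2 / (2 g).
    """
    return h0 + (omega ** 2) * (r ** 2) / (2.0 * g)

# Example: a 10 cm radius cylinder spun at 2 revolutions per second.
omega = 2 * 2 * math.pi                  # angular velocity in rad/s
rise = surface_height(0.10, omega) - surface_height(0.0, omega)
print(f"rim sits {100 * rise:.2f} cm above the centre of the surface")
```

The tilted-bucket case discussed above works the same way once r and h are measured in the rotating frame about the rope's axis.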
Math Forum Discussions

Topic: Linear algebra equation solver?
Replies: 3    Last Post: Nov 25, 2012 5:15 AM

Linear algebra equation solver?
Posted: May 17, 2012 6:07 AM

I'm looking for software that can solve systems of linear algebra equations. Example: I have two equations, z = y + ax (a a scalar; x, y, z vectors) and p^T z = 0 (p another vector); assume no two vectors are orthogonal. The vectors x, y, p are known; the vector z and the scalar a are unknown. The solution of this is z = y + ax where a = -(p^T x)^{-1}(p^T y). Is there software that can solve this sort of system of equations?

Date      Subject                              Author
5/17/12   Linear algebra equation solver?      Victor Eijkhout
5/19/12   Re: Linear algebra equation solver?  clicliclic@freenet.de
5/19/12   Re: Linear algebra equation solver?  Victor Eijkhout
11/25/12  Re: Linear algebra equation solver?  auditor
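The closed form asked about here is a one-liner in any linear-algebra package: from z = y + ax and p^T z = 0 one gets a = -(p^T y)/(p^T x). A NumPy sketch (the function name and sample vectors are invented for illustration):

```python
import numpy as np

def solve_for_z(x, y, p):
    """Given z = y + a*x and the constraint p @ z = 0, the scalar is
    a = -(p @ y) / (p @ x), provided p is not orthogonal to x."""
    ptx = p @ x
    if abs(ptx) < 1e-12:
        raise ValueError("p is (numerically) orthogonal to x")
    a = -(p @ y) / ptx
    return a, y + a * x

# Made-up data: x, y, p known; a and z recovered.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 1.0])
p = np.array([2.0, 1.0, 1.0])
a, z = solve_for_z(x, y, p)
print(a, p @ z)   # p @ z should come out ~0
```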
Minimum-time turn trajectories to fly-to points

Cited by:

* 1997. Cited by 64 (8 self).
  Abstract: The design (synthesis) of analog electrical circuits starts with a high-level statement of the circuit's desired behavior and requires creating a circuit that satisfies the specified design goals. Analog circuit synthesis entails the creation of both the topology and the sizing (numerical values) of all of the circuit's components. The difficulty of the problem of analog circuit synthesis is well known and there is no previously known general automated technique for synthesizing an analog circuit from a high-level statement of the circuit's desired behavior. This paper presents a single uniform approach using genetic programming for the automatic synthesis of both the topology and sizing of a suite of eight different prototypical analog circuits, including a lowpass filter, a crossover (woofer and tweeter) filter, a source identification circuit, an amplifier, a computational circuit, a time-optimal controller circuit, a temperature-sensing circuit, and a voltage reference circuit. The problem-specific information required for each of the eight problems is minimal and consists primarily of the number of inputs and outputs of the desired circuit, the types of available components, and a fitness measure that restates the high-level ...

* In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997. Cited by 18 (0 self).
  Abstract: Motivated by applications in robotics, we formulate the problem of minimizing the total angle cost of a TSP tour for a set of points in Euclidean space, where the angle cost of a tour is the sum of the direction changes at the points. We establish the NP-hardness of both this problem and its relaxation to the cycle cover problem. We then consider the issue of designing approximation algorithms for these problems and show that both problems can be approximated to within a ratio of O(log n) in polynomial time. We also consider the problem of simultaneously approximating both the angle and the length measure for a TSP tour. In studying the resulting tradeoff, we choose to focus on the sum of the two performance ratios and provide tight bounds on the sum. Finally, we consider the extremal value of the angle measure and obtain essentially tight bounds for it. In this extended abstract we restrict our attention to the planar setting, but all our results are easily extended to higher dimensions.

* Comput. Geom. Theory Appl., 1996. Cited by 13 (1 self).
  Abstract: For a given set A ⊆ (−π, +π] of angles, the problem "Angle-Restricted Tour" (ART) is to decide whether a set P of n points in the Euclidean plane allows a closed directed tour consisting of straight line segments, such that all angles between consecutive line segments are from the set A. We present a variety of algorithmic and combinatorial results on this problem. In particular, we show that any finite set of at least five points allows a "pseudoconvex" tour (i.e. a tour where all angles are nonnegative), and we derive a fast algorithm for constructing such a tour. Moreover, we give a complete classification (from the computational complexity point of view) for the special cases where the tour has to be part of the orthogonal grid.

* In Proceedings of 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Los Alamitos, CA: Computer Society Press, pages 340-, 1997. Cited by 5 (3 self).
  Abstract: Genetic programming is an automatic programming technique that evolves computer programs to solve, or approximately solve, problems. This paper presents two examples in which genetic programming creates a computer program for controlling a robot so that the robot moves to a specified destination point in minimal time. In the first approach, genetic programming evolves a computer program composed of ordinary arithmetic operations and conditional operations to implement a time-optimal control strategy. In the second approach, genetic programming evolves the design of an analog electrical circuit consisting of transistors, diodes, resistors, and power supplies to implement a near-optimal control strategy.

* Stanford University, 1997. Cited by 4 (1 self).
  Abstract: Most problem-solving techniques used by engineers involve the introduction of analytical and mathematical representations and techniques that are entirely foreign to the problem at hand. Genetic programming offers the possibility of solving problems in a more direct way using the given ingredients of the problem. This idea is explored by considering the problem of designing an electrical controller to implement a solution to the time-optimal fly-to control problem.
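The "angle cost" defined in these abstracts (the sum of the direction changes at the points of a closed tour) is easy to compute directly. A small sketch, assuming the usual convention that each vertex contributes the absolute exterior angle between its incoming and outgoing edges:

```python
import math

def angle_cost(points):
    """Total turning of a closed polygonal tour: at each vertex, the
    absolute change of direction between the incoming and outgoing
    edges (the exterior angle), summed over all vertices."""
    n = len(points)
    total = 0.0
    for i in range(n):
        ax, ay = points[i - 1]
        bx, by = points[i]
        cx, cy = points[(i + 1) % n]
        in_dir = math.atan2(by - ay, bx - ax)
        out_dir = math.atan2(cy - by, cx - bx)
        # Wrap the direction change into (-pi, pi] before taking |.|
        turn = (out_dir - in_dir + math.pi) % (2 * math.pi) - math.pi
        total += abs(turn)
    return total

# Any convex tour, e.g. the unit square, turns through 2*pi in total.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(angle_cost(square) / math.pi)
```

For a convex tour this returns exactly 2π; nonconvex tours accumulate extra turning, which is what the approximation results above try to bound.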
MathGroup Archive: December 2010

Re: Mathematica daily such and so

* To: mathgroup at smc.vnet.net
* Subject: [mg115005] Re: Mathematica daily WTF
* From: Daniel Lichtblau <danl at wolfram.com>
* Date: Tue, 28 Dec 2010 06:49:27 -0500 (EST)

----- Original Message -----
> From: "Richard Fateman" <fateman at cs.berkeley.edu>
> To: mathgroup at smc.vnet.net
> Sent: Friday, December 24, 2010 3:11:55 AM
> Subject: [mg114958] Re: Mathematica daily WTF
>
> On 12/23/2010 12:55 AM, kj wrote:
> > In <ies9t9$aa9$1 at smc.vnet.net> Andrzej Kozlowski <akoz at mimuw.edu.pl> writes:
> >> Actually, there is nothing puzzling about this at all.
>
> You can only judge if something puzzles you, yourself. The general topic of naming and binding has been seriously mishandled over the years by WRI. Hence the original Block was found lacking and so Module was introduced. Module does not really do lexical scope since the names it makes up are accessible global symbols like foo$123.

I see no indication from this of any mishandling. Block is primarily a dynamic scoping mechanism. Module emulates lexical scoping. It is an imperfect emulation. That does not make it bad, and indeed there are situations in which symbol leakage is useful (some examples of such have appeared in this forum). I am fairly certain it is useful to have both types of scoping construct, so if anything the adding of Module was a really good idea. Whether this allowing of symbol leakage (or other possibly accidental aspects of the implementation) was intentional I cannot say. The question of whether the implementation and semantics comprise the "best" possible way for Mathematica to emulate lexical scoping is not terribly important. I make this claim because there are no obvious grounds, of which I am aware, to regard it as bad and it seems to work well in practice.

> There are other issues, which have often been debated in the context of Lisp, a language from which Mathematica has borrowed some ideas. For example, Sin=1234 is forbidden since Sin is "protected".
> Block[{Sin=1234}, Sin] returns 1234 since the local variable Sin is different from the global Sin, and is not protected, explaining this WTF, perhaps more simply.
> Indeed, Block[{Sin = 1234}, Sin[Pi]] returns 1234[Pi], but wait, there's more.
> Block[{Sin}, Sin[Pi]] returns 0, because local Sin evaluates to global Sin which then is the usual sine function.
> Module[{Sin}, Sin] returns Sin$1317 or something like that.
> Block[{Sin = 1234}, Sin[Pi]] returns 1234[Pi].
> Now of course it will not puzzle Andrzej, but here is a little test that you can run on (a) your brain, (b) your Mathematica.
>
> Block[{Sin = notSin}, Block[{Sin}, Sin[Pi]]]
> Block[{Sin = notSin}, Module[{Sin}, Sin[Pi]]]
> Module[{Sin = notSin}, Module[{Sin}, Sin[Pi]]]
> Module[{Sin = notSin}, Block[{Sin}, Sin[Pi]]]
>
> You might consider as possibilities:
> 1. 0
> 2. notSin[Pi]
> 3. Sin$1325[Pi]  (your number may vary)
> ..............
> In Common Lisp, the question of whether a local symbol (e.g. sort of like a Module variable) has the global property list of the symbol that prints the same is resolved as: No. Does it have the global function definition of the symbol with the same name: No. However, if you use a name as a function, e.g. Sin, then regardless of local value bindings, you get the function. Thus
>
> (let ((sin 1234)) (values (sin pi) sin))
>
> returns the values 0.0 and 1234. (Actually, not 0.0 but 10^(-16) approx.)
> So Common Lisp is a 2-lisp in this respect: functions and values are separate. The Scheme variety is a 1-lisp and requires function=value. It seems that if Mathematica were a lisp, it would be closer to a 1-lisp, though if it were really lisp, it would have a much different and far simpler evaluation mechanism. Lisp has "quote" and "eval", instead of all the Hold, Release, Eval, and friends. In Lisp, the explicit use of eval is almost never used except by novices who are usually making mistakes. Whether Mathematica's evaluation is the right thing is presumably now moot. Remember that "evaluate until it stops" makes x=x+1 hurt.

Emulation of infinite evaluation (sans side effects) will do that. As it ought. So what?

> RJF

I don't see anything here that shows bad behavior. If you are claiming that one or more items above are bad, feel free to elaborate as to what they might be. What I do see is that Mathematica has more complicated semantics than Lisp. This can be a cause of confusion but it is hardly unexpected.

Package design, modularity, and context manipulation might be another matter. That stuff gets confusing, and I'm never sure there is a solid reason behind it. Bad design for emulation of encapsulation features in other languages? Bad documentation? Both? Neither? I don't know. As variable and function encapsulation go, this does seem to catch more people (myself included) in more ways during routine usage than anything I've seen mentioned regarding Block, Module, Attributes, or localization of variables in this and other recent threads.

I mention this mostly to motivate my main point. To wit, I think the present thread is wildly off the mark from anything resembling either general usage of Mathematica, or a productive line of enquiry.

Daniel Lichtblau
Wolfram Research
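The Block/Module contrast discussed in this thread (dynamic save-and-restore of a symbol's value versus a fresh renamed symbol such as Sin$1317) can be caricatured outside Mathematica. A rough Python sketch, with `env`, `block`, and `module` as invented stand-ins, not an account of Mathematica's actual internals:

```python
import contextlib
import math

# A tiny "global symbol table" standing in for the Global` context.
env = {"sin": math.sin}

@contextlib.contextmanager
def block(**bindings):
    """Block-like dynamic scoping: save the current values, install the
    local bindings, run the body, then restore the originals on exit."""
    saved = {k: env[k] for k in bindings if k in env}
    missing = [k for k in bindings if k not in env]
    env.update(bindings)
    try:
        yield
    finally:
        env.update(saved)
        for k in missing:
            del env[k]

_counter = 0

def module(**inits):
    """Module-like emulation of lexical scope: each local gets a fresh
    name such as sin$1 (cf. Sin$1317), which leaks into the table."""
    global _counter
    _counter += 1
    renamed = {}
    for name, value in inits.items():
        fresh = f"{name}${_counter}"
        env[fresh] = value
        renamed[name] = fresh
    return renamed

# Dynamic shadowing: inside the block, every lookup of "sin" sees 1234.
with block(sin=1234):
    assert env["sin"] == 1234
assert env["sin"] is math.sin       # restored afterwards

# Module leaves the global "sin" alone and creates a renamed local,
# which remains reachable afterwards -- the "symbol leakage" above.
locals_map = module(sin=1234)
assert env["sin"] is math.sin
assert env[locals_map["sin"]] == 1234
```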
find all possible values

Let $a,b,c,d$ be positive real numbers such that $a+b+c+d=12$ and $abcd = 27+ab+ac+ad+bc+bd+cd$. Find all possible values of $a,b,c,d$.

Solution: Given $a+b+c+d=12$ and $abcd=27+ab+ac+ad+bc+bd+cd$, try $a=b=c=d=3$. Then $3+3+3+3=12$, and
LHS: $abcd = 3\cdot 3\cdot 3\cdot 3 = 81$;
RHS: $27+9+9+9+9+9+9 = 81$.
Hence $a=b=c=d=3$ works.

Solve the second equation for $a$, substitute into the first equation, and multiply through by $bcd$ to get: (…) = 12bcd - 27. The left side has to be positive, so the equations have a solution for all $b,c,d$ such that $12bcd>27$.

Last edited by Hartlw; September 18th 2013 at 07:03 AM.

The above is correct, but it doesn't work, I tried it; it is necessary but not sufficient. A sufficient condition is given below, following an example.

a+b+c+d=12
abcd=27+a(b+c+d)+b(c+d)+cd

Example: Assume a=4 and b=2. Then
1) c+d=6
8cd=27+4(8)+2(6)+cd, i.e.
2) 7cd=71
1) and 2) give a quadratic equation for c, which here has no real solution, since (c+d)^2 = 36 < 4(71/7).

In general: Assume a and b given. Then:
3) c+d=12-(a+b)
(ab-1)cd=27+a[b+(12-(a+b))]+b[12-(a+b)], or:
4) cd=k
3) and 4) give a quadratic equation for c which has a solution if (c+d)^2>4k (not very nice).

Topsquark: I pissed in your ear and told you it was raining. My sincerest apologies. Feel free to remove the thanks; I appreciate the gesture.
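The symmetric solution is easy to verify mechanically. A small sketch (helper names are arbitrary) checking both equations for a=b=c=d=3:

```python
from itertools import combinations

def pair_sum(values):
    """Sum of all pairwise products: ab + ac + ad + bc + bd + cd."""
    return sum(x * y for x, y in combinations(values, 2))

def product(values):
    result = 1
    for v in values:
        result *= v
    return result

v = (3, 3, 3, 3)
assert sum(v) == 12                   # a + b + c + d = 12
assert product(v) == 27 + pair_sum(v) # 81 == 27 + 54
print("a=b=c=d=3 satisfies both equations")
```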
Antreas P. Hatzipolakis and Paul Yiu, Pedal Triangles and Their Shadows, Forum Geometricorum, 1 (2001) 81--90.

Abstract: The pedal triangle of a point P with respect to a given triangle ABC casts equal shadows on the side lines of ABC if and only if P is the internal center of similitude of the circumcircle and the incircle of triangle ABC, or the external center of similitude of the circumcircle with one of the excircles. We determine the common length of the equal shadows. More generally, we construct the point the shadows of whose pedal triangle are proportional to given p, q, r. Many interesting special cases are considered.
Wolf Prize The Wolf Prize in mathematics The Wolf Prize in mathematics has been awarded since 1978. The books [ 1] and [ 2] contain details of the life and work of the winners between 1978 and 2000. We quote from the Preface of the books:- There is no Nobel prize in mathematics. Perhaps this is a good thing. Nobel prizes create so much public attention that mathematicians would lose their concentration to work. There are several other prizes for mathematicians. There is the Fields Medal (only for mathematicians); it honours outstanding work and encourages further efforts. Then there is the Wolf Prize. The Wolf Foundation began its activities in 1976. Since 1978, five or six annual prizes have been awarded to outstanding scientists and artists, irrespective of nationality, race, colour, religion, sex or political view, for achievements in the interest of mankind and friendly relations among people. In science, the fields are agriculture, chemistry, mathematics, medicine, and physics; in the arts, the prize rotates annually among music, painting, sculpture and architecture. The Fields Medal goes to young people, and indeed many mathematicians do their best work in the early years of their life. The Wolf Prize often honours the achievements of a whole life. But it may also honour the work of young people. The first Wolf Prize winners in mathematics were Izrail M Gel'fand and Carl L Siegel (1978). Siegel was born in 1896 and Gel'fand in 1913. Several prize winners were born before 1910. Thus the achievements of the prize winners cover much of the twentieth century. The documents collected in these two volumes characterize the Wolf Prize winners in a form not available up to now: bibliographies and curricula vitae, autobiographical accounts, reprints of early papers or especially important papers, lectures and speeches, for example at International Congresses, as well as reports on the work of the prize winners by others. 
Since the work of the Wolf laureates covers a wide spectrum, a large part of contemporary mathematics comes to life in these books. Wolf Prize winners in mathematics: 1978 - Izrail M Gelfand ... for his work in functional analysis, group representation, and for his seminal contributions to many areas of mathematics and its applications. 1978 - Carl L Siegel ... for his contributions to the theory of numbers, theory of several complex variables, and celestial mechanics. 1979 - Jean Leray ... for pioneering work on the development and application of topological methods to the study of differential equations. 1979 - André Weil ... for his inspired introduction of algebro-geometry methods to the theory of numbers. 1980 - Henri Cartan ... for pioneering work in algebraic topology, complex variables, homological algebra and inspired leadership of a generation of mathematicians. 1980 - Andrei N Kolmogorov ... for deep and original discoveries in Fourier analysis, probability theory, ergodic theory and dynamical systems. 1981 - Lars V Ahlfors ... for seminal discoveries and the creation of powerful new methods in geometric function theory. 1981 - Oscar Zariski ... creator of the modern approach to algebraic geometry, by its fusion with commutative algebra. 1982 - Hassler Whitney ... for his fundamental work in algebraic topology, differential geometry and differential topology. 1982 - Mark Grigor'evich Krein ... for his fundamental contributions to functional analysis and its applications. 1983/4 - Shiing-Shen Chern ... for outstanding contributions to global differential geometry, which have profoundly influenced all mathematics. 1983/4 - Paul Erdös ... for his numerous contributions to number theory, combinatorics, probability, set theory and mathematical analysis, and for personally stimulating mathematicians the world over. 1984/5 - Kunihiko Kodaira ... for his outstanding contributions to the study of complex manifolds and algebraic varieties. 1984/5 - Hans Lewy ... 
for initiating many, now classic and essential, developments in partial differential equations. 1986 - Samuel Eilenberg ... for his fundamental work in algebraic topology and homological algebra. 1986 - Atle Selberg ... for his profound and original work on number theory and on discrete groups and automorphic forms. 1987 - Kiyosi Itô ... for his fundamental contributions to pure and applied probability theory, especially the creation of the stochastic differential and integral calculus. 1987 - Peter D Lax ... for his outstanding contributions to many areas of analysis and applied mathematics. 1988 - Friedrich Hirzebruch ... for outstanding work combining topology, algebraic and differential geometry, and algebraic number theory; and for his stimulation of mathematical cooperation and research. 1988 - Lars Hörmander ... for fundamental work in modern analysis, in particular, the application of pseudo-differential and Fourier integral operators to linear partial differential equations. 1989 - Alberto P Calderon ... for his groundbreaking work on singular integral operators and their application to important problems in partial differential equations. 1989 - John W Milnor ... for ingenious and highly original discoveries in geometry, which have opened important new vistas in topology from the algebraic, combinatorial, and differentiable viewpoint. 1990 - Ennio De Giorgi ... for his innovating ideas and fundamental achievements in partial differential equations and calculus of variations. 1990 - Ilya Piatetski-Shapiro ... for his fundamental contributions in the fields of homogeneous complex domains, discrete groups, representation theory and automorphic forms. 1992 - Lennart A E Carleson ... for his fundamental contributions to Fourier analysis, complex analysis, quasi-conformal mappings and dynamical systems. 1992 - John G Thompson ... for his profound contributions to all aspects of finite group theory and connections with other branches of mathematics. 
1993 - Mikhael Gromov ... for his revolutionary contributions to global Riemannian and symplectic geometry, algebraic topology, geometric group theory and the theory of partial differential equations. 1993 - Jacques Tits ... for his pioneering and fundamental contributions to the theory of the structure of algebraic and other classes of groups and in particular for the theory of buildings. 1994/5 - Jürgen K Moser ... for his fundamental work on stability in Hamiltonian mechanics and his profound and influential contributions to nonlinear differential equations. 1995/6 - Robert Langlands ... for his path-blazing work and extraordinary insight in the fields of number theory, automorphic forms and group representation. 1995/6 - Andrew J Wiles ... for spectacular contributions to number theory and related fields, major advances on fundamental conjectures, and for settling Fermat's last theorem. 1996/7 - Joseph B Keller ... for his innovative contributions, in particular to electromagnetic, optical, acoustic wave propagation and to fluid, solid, quantum and statistical mechanics. 1996/7 - Yakov G Sinai ... for his fundamental contributions to mathematically rigorous methods in statistical mechanics and the ergodic theory of dynamical systems and their applications in physics. 1999 - László Lovász ... for his outstanding contributions to combinatorics, theoretical computer science and combinatorial optimization. 1999 - Elias M Stein ... for his contributions to classical and "Euclidean" Fourier analysis and for his exceptional impact on a new generation of analysts through his eloquent teaching and writing. 2000 - Raoul Bott ... for his deep discoveries in topology and differential geometry and their applications to Lie groups, differential operators and mathematical physics. 2000 - Jean-Pierre Serre ... for his many fundamental contributions to topology, algebraic geometry, algebra, and number theory and his inspirational lectures and writing. 
2001 - Vladimir I Arnold ... for his deep and influential work in a multitude of areas of mathematics, including dynamical systems, differential equations, and singularity theory. 2001 - Saharon Shelah ... for his many fundamental contributions to mathematical logic and set theory, and their applications within other parts of mathematics. 2002/3 - Mikio Sato ... for his creation of "algebraic analysis", including hyperfunction and microfunction theory, holonomic quantum field theory, and a unified theory of soliton equations. 2002/3 - John T Tate ... for his creation of fundamental concepts in algebraic number theory. 2005 - Gregory A Margulis ... for his monumental contributions to algebra, in particular to the theory of lattices in semi-simple Lie groups, and striking applications of this to ergodic theory, representation theory, number theory, combinatorics, and measure theory. 2005 - Sergei P Novikov ... for his fundamental and pioneering contributions to algebraic and differential topology, and to mathematical physics, notably the introduction of algebraic-geometric methods. 2006/7 - Stephen Smale ... for his groundbreaking contributions that have played a fundamental role in shaping differential topology, dynamical systems, mathematical economics, and other subjects in mathematics. 2006/7 - Harry Furstenberg ... for his profound contributions to ergodic theory, probability, topological dynamics, analysis on symmetric spaces and homogeneous flows. 2008 - Pierre R Deligne ... for his work on mixed Hodge theory; the Weil conjectures; the Riemann-Hilbert correspondence; and for his contributions to arithmetic. 2008 - Phillip A Griffiths ... for his work on variations of Hodge structures; the theory of periods of abelian integrals; and for his contributions to complex differential geometry. 2008 - David B Mumford ... 
for his work on algebraic surfaces; on geometric invariant theory; and for laying the foundations of the modern algebraic theory of moduli of curves and theta functions. 2010 - Shing-Tung Yau ... for his work in geometric analysis that has had a profound and dramatic impact on many areas of geometry and physics; 2010 - Dennis Sullivan ... for his innovative contributions to algebraic topology and conformal dynamics. 1. S S Chern and F Hirzebruch (eds.), Wolf Prize in mathematics Vol. 1 (River Edge, NJ, 2000). 2. S S Chern and F Hirzebruch (eds.), Wolf Prize in mathematics Vol. 2 (River Edge, NJ, 2001). JOC/EFR August 2011
Summary: Sponsored by: UGA Math Department and UGA Math Club Ciphering Round / 2 minutes per problem October 13, 2007 No calculators are allowed on this test. 2 minutes per problem, 10 points for each correct answer. Problem 1. The length of a rectangle is increased by 30% and its width is decreased by 20%. By what percentage does its area increase? Answer. 4% Solution. If the original length and width are l and w, respectively, then the new length is 1.3l and the new width is 0.8w, so the area of the new rectangle is (1.3l)(0.8w) = 1.04lw. Thus, the area increases by 4%. Problem 2. When two joggers run around a 1-mile oval track in the same direction, one passes the other every 30 minutes. When they run in opposite directions, they pass every 10 minutes. What is the speed of the slower jogger (in mph)? Answer. 2 mph Solution. Let x be the speed (in mph) of the faster jogger and y the speed of the slower jogger. Then we have x - y = 2 (the faster jogger gains one full mile every half hour) and x + y = 6 (together they close one full mile every ten minutes), so x = 4 and y = 2.
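Both answers can be sanity-checked in a few lines. A quick numerical check (a sketch, not part of the original sheet; the sample dimensions in Problem 1 are arbitrary, since they cancel):

```python
# Problem 1: scale length by 1.3 and width by 0.8.
l, w = 5.0, 3.0  # any positive dimensions give the same percentage
pct_change = ((1.3 * l) * (0.8 * w) / (l * w) - 1) * 100
print(pct_change)  # ~4.0, i.e. the area grows by 4%

# Problem 2: relative speeds give x - y = 2 (same direction, one lap
# gained per 30 min) and x + y = 6 (opposite directions, one lap
# closed per 10 min).
x = (2 + 6) / 2  # faster jogger, mph
y = 6 - x        # slower jogger, mph
print(y)  # 2.0 mph
```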
Solution Set 1 Problem 1 1.A. A state is a complete A,B bipartite graph where A <= J and B >= K. An operator on state S is to add a vertex V to the hubs of the graph, and to restrict the authorities to those vertices in S that have inarcs from V. The start state is the "graph" with no hubs and where every vertex is an authority. The goal state is a complete bipartite J,M graph where M >= K. 1.B. The branching factor is W. The depth of search is J. The size of the state space is certainly less than W^J. 1.C. { C,F,G ; D,E} 1.D. No, since the depth of the goal state is known. Problem 2 2.A. A state is a complete A,B bipartite graph where A >= J and B <= K. An operator on state S is to add a vertex V to the authorities of the graph, and to restrict the hubs to those vertices in S that have outarcs to V. The start state is the "graph" with no authorities and where every vertex is a hub. The goal state is a complete bipartite M,K graph where M >= J. 2.B. The branching factor is Z. The depth of search is K. The size of the state space is certainly less than Z^K. 2.C. The algorithm actually returns a 4,2 bipartite graph: { D,E,F,H; C,G}. Problem 3 There is more than one way to do this. One can take a state to be the state at the top of each iteration of the "repeat" loop. Then an operation combines the "choose" and the "either/or". Alternatively, one can take a state to be the state at any choice point. That is, there would be one state at the "choose" statement, and an operator corresponding to that choice, followed by a different state at the "either/or" statement and an operator corresponding to one or the other branch. We will take the first approach. 3.A. A state is a complete A,B bipartite graph where A <= J and B <= K. An operator is either to add a hub or to add an authority, so that the graph remains complete. The start state is the empty graph. The goal state is a complete J,K bipartite graph. 3.B. The branching factor is V. The depth is J+K. 
The size of the state space is therefore at most V^(J+K). Problem 4 For simplicity, let J=K. Step 1: Construct a K,K complete bipartite graph. Let G be the set of hubs and H the set of authorities. Pick one hub and call it vertex A; pick one authority and call that vertex B. Step 2: We now construct a second graph. For any N > 2K, let c[1] ... c[N] be a set of hubs and x[1] ... x[N] be a set of authorities connected according to the following rule: there is an arc from c[P] to x[Q] just if 0 <= (P-Q) mod N < 2K-1. It is not hard to show (a) that this has no K,K complete bipartite subgraph; (b) that algorithms 1 and 2 will run a very long time before finding that out. Step 3: Make a copy of the graph in step 2. Label the hubs y[1] ... y[N] and label the authorities d[1] ... d[N]. Step 4: Join the three graphs together by identifying c[1] with A and d[1] with B. Step 5: Order the vertices alphabetically, with A and B first. Now, if you run algorithm 1, it will begin by trying to find a good set of hubs among the c[i], spend a very long time at that, and only then move on to G;H. Similarly, if you run algorithm 2, it will begin by trying to find a good set of authorities among the d[i]. However, if you run algorithm 3, it will • First choose A as a hub. • Then choose B as an authority (it can't be a hub, because it has no outarcs). • Since A has been chosen as a hub, we can ignore all the d[i]'s, since there's no arc from A to them. Similarly, since B is an authority, we can ignore all the c[i]'s, since there is no arc from them to B. So we can proceed next, alphabetically, to G and H, and find the solution immediately.
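The property the algorithms search for — a complete J,K bipartite subgraph — can also be checked by brute force, which makes it easy to experiment with the constructions above on small examples. A minimal sketch (an illustration, not code from the assignment; the function name and graph encoding are made up here):

```python
from itertools import combinations

def has_biclique(arcs, hubs, auths, j, k):
    """True iff some j hubs and k authorities are completely connected:
    every chosen hub has an arc to every chosen authority.
    arcs is a set of (hub, authority) pairs."""
    for H in combinations(hubs, j):
        for A in combinations(auths, k):
            if all((h, a) in arcs for h in H for a in A):
                return True
    return False

# Tiny example: hubs 0 and 1 both point to authorities 'a' and 'b',
# so a 2,2 complete bipartite subgraph exists; a 3,2 one does not.
arcs = {(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b'), (2, 'a')}
print(has_biclique(arcs, [0, 1, 2], ['a', 'b'], 2, 2))  # True
print(has_biclique(arcs, [0, 1, 2], ['a', 'b'], 3, 2))  # False
```

The nested loops over combinations mirror the exponential state spaces counted in parts 1.B, 2.B and 3.B, which is exactly why the ordering tricks in Problem 4 matter.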
MATH-348 Advanced Engineering Mathematics - Spring 2010 From Physiki Main Page > Mathematical and Computer Sciences Course Wikis Course Information MATH348: Advanced Engineering Mathematics - Introduction to partial differential equations, with applications to physical phenomena. Fourier series. Linear algebra, with emphasis on sets of simultaneous equations. Prerequisite: MATH 225 or equivalent. Instructor Information Instructor : Scott Strong Office : Chauvenet Hall 266 Office Phone : 303.384.2446 email : math348.spring2010@gmail.com Course Calendar Classes Begin : January 13th, 2010 Lecture Days : Monday, Wednesday, Friday Course Sections : B : 11:00am-11:50am - Location: Coolbaugh Hall 131 C : 1:00pm-1:50pm - Location: Green Center 211 D : 2:00pm-2:50pm - Location: Alderson Hall 430 Last Day to Drop Without a W : January 28th Last Day to Withdraw : March 30th Classes End : May 14th, 2010 Important Dates : February 14th : No Classes March 15th-19th : Spring Break April 8th-10th : E-Days May 3th-7th : Dead Week May 7th : Dead Day Office Hours Fixed Office Hours : MWF : 12:00pm-12:50pm Monday : 3:00pm-5:00pm If you cannot meet during the previous office hours then please contact me to schedule another meeting time. Please see this google calender to see the times I am unavailable. Textbook Information Textbook : Advanced Engineering Mathematics - Erwin Kreyszig, ISBN 978-0-471-48885-9 9th Edition Amazon : Advanced Engineering Mathematics - Erwin Kreyszig, ISBN 978-0-471-48885-9 8th Edition Amazon (Used) : Advanced Engineering Mathematics - Erwin Kreyszig, ISBN 978-0-471-48885-9 Course Materials These downloads require Adobe Acrobat Reader Lecture Slides 01.LS.Classical Vector Spaces 02.LS.Geometry in R^n 03.LS.KinematicsAndDynamics - Those Evil Natured Robots Linear Algebra and its Applicationsby Peter D. 
Lax 04.LS.Abstract Vector Spaces 05.LS.Fourier Series to Fourier Integral to Fourier Transform - Update 4/5/2010 06.LS.1D Heat Equation-Separation of Variables 07.LS.The Acoustic Approximation and Wave Equations Lecture Notes 00.LN.Overview And Outline 01.LN.LinearDefinitions : Updated 1.27.2010. Footnotes have been added referencing locations in the text where these definitions can be found. 02.LN.Introduction To Linear Equations 03.LN.Solving Linear Systems 04.LN.Square Systems - Determinants and Matrix Inversion 05.LN.Introduction to Linear Vector Spaces 06.LN.Chapter 7 - Wrap Up 09.LN.Introduction to Fourier Series : Review of Periodic and Symmetric Functions 10.LN.Complex Fourier Series 11.LN.Fourier Integral to Fourier Transform 12.LN.Fourier Transform Homework0 - Due Jan. 18th by 5:00pm Homework0 - Solutions Homework1 - Due Feb. 3rd by 5:00pm - Note: Updated 1/19/2010, fixed a typo in problem 2 matrix 2, a_{22} = -3 Homework1 - Solutions Graphics for Homework 1 Geometry of Problem 2 System 1 Geometry of Problem 2 System 2 Geometry of Problem 2 System 3 Geometry of Problem 2 System 4 Geometry of Problem 2 System 5 Interpolated Parabolas of Problem 4 Set 1 Interpolated Parabolas of Problem 4 Set 2 Geometry of Least Squares Problem of Problem 4 Set 2 Interpolated Parabolas of Problem 4 Set 3 Fourier Transform Homework2 - Due Feb. 12th by 5:00pm : 1) Header Box Updated 2) Problem 4.2 \lambda = n^2 Homework2 - Solutions : Update - There were a couple of typos, nothing major, corrected. 2/8/2010 : Updated again - One of the typos I corrected last time was not a typo at all (1.4). I have put it back in its place. Homework 3 - Note : I have just noticed a pesky typo. Equation (2) from the assignment, (26) from the solutions, should read l_1 u(a) + k_1 u'(a) = 0 and NOT l_1 u(a) + k_1 u'(b) = 0 Homework3 - Due Feb. 22th by 5:00pm : Update - There were multiple things going on here. Once I updated the assignment with an old copy that was missing problems.... 
Ugh, it's all fixed up now. :) Homework3 - Solutions Homework4 - Due: March 12th Homework4 - Solutions Homework5 - Due: March 31st Homework5 - Solutions Homework6 - Due: April 12th Homework6 - Solutions Homework7 - Due April 28 - Last edits (minor) at 4:31pm. Homework7 - Solutions - Last edits (minor) at 4:31pm : Note at a step in problem 1.1 I use quantities like I1 and I2. By these I mean l1 and l2. Homework8 - Due May 3 Homework8 - Solutions Exam I Exam I will be held on March 1st in class. There will be no notecards or calculators. The exam will have five required questions and contain material outlined in the following review: Exam 1 - Review Sheet The following exams with solutions are posted for your review. Exam 1 - Fall2008 Exam 1 - Fall2008 Solutions Exam I - Spring2009 Exam I - Spring2009 Solutions Exam I - Summer2009 Exam I - Summer2009 Solutions Exam I - Fall2009 Exam I - Fall2009 Solutions Exam I - Statistics Mean = 37.15 (74,31%) Median = 38 (76%) Mode = 47 (94%) A's = 34, B's = 17, C's = 24, D's = 18, F's = 25, Total Number of Exams = 118 A's = 29%, B's = 14%, C's = 20%, D's = 15%, F's = 21 % Exam I - Spring2010 Exam I - Spring2010 Solutions Exam II Exam II will be held on April 16th in class. There will be no notecards or calculators. The exam will have five required questions and contain material outlined in the following review: Exam 2 - Review Sheet The following are the results of Q+A's from previous semesters: Exam 2 - Spring2009 Q + A Exam 2 - Fall2008 Q + A The following exams with solutions are posted for your review. Exam II - Spring2009 See Soln for problem 3 graph. 
Exam II - Spring2009 Solutions Exam 2 - Fall2008 Exam 2 - Fall2008 Solutions Exam II - Summer2009 Exam II - Summer2009 Solutions Exam II - Fall2009 Exam II - Fall2009 Solutions - Graphs Included Exam II - Statistics Mean = 36.5 (72.5%) Median = 37.5 (75%) A's = 9, B's = 32, C's = 38, D's = 19, F's = 16, Total Number of Exams = 114 A's = 8%, B's = 28%, C's = 33%, D's = 17%, F's = 14 % Exam II - Spring2010 Exam II - Spring2010 Solutions Final Exam The final exam will be held Saturday May 8th from 7:00pm-9:00pm. The classes will be testing in the following rooms: Class : Meeting Time : Testing Room : Proctor MATH348B : 11:00am Section : Petroleum Hall : Jennifer Strong MATH348C : 1:00pm Section : CT 102 : Scott Strong MATH348D : 2:00pm Section : CO209 : Doug Poole Since we will be in different rooms it is very important that you go to the room associated with your section. There will be no notecards or calculators. The exam will have ten required questions and contain material outlined in the following review: Final Exam - Review Sheet The following is an old 50 minute PDE exam, which should give you some idea of the content and structure of the PDE portion of the exam. OLD PDE EXAM - See Soln for the graph in problem 1 OLD PDE EXAM - SOLN Other Materials Linear Algebra Three Planes in Space Three Planes in Space - Four Different Ways Three Planes in Space - Four Different Ways Legend for the Animations Red = First Plane Equation Orange = Second Plane Equation Yellow = Third Plane Equation Green = Column Space of A (AKA the set of all linear combination of the pivot columns of A) Blue = Right Hand Side for non-homogeneous problem. Animation : Ax=0 with oo-many solutions that form a line in space. Animation : Ax=b with oo-many solutions that form a line in space. 
Animation : Ax=b with a single solution Animation : Ax=b with no solutions Linear Algebra Software Linear Algebra Toolkit Fourier Methods Review of Functions Special Angles and the Unit Circle Odd and Even Functions (Wikipedia) : (see Also 09.LN) Periodic Functions (Wikipedia) : (See Also 09.LN) Fourier Series FS for f(x)=x, x \in (-\pi,\pi) FS for f(x)=Exp(Abs(x)), x \in (-\pi,\pi) Fourier Series - Wikipedia Gibbs Phenomenon - Wikipedia Fourier Transform Fourier Transform - Wikipedia Wikipedia - Sinc Function Mathworld - Sinc Function Wikipedia - Nyquist-Shannon Sampling Theorem Mathworld - Convolution (Animation) Convolution and Diffraction (Animations) Convolution and Diffraction (Animations) Wikipedia - Convolution (Animation) Green's Function - Wikipedia Frequency Response Graph for a Harmonic Oscillator m=k=1, Gamma = {1,.5,.25,.125} Partial Differential Equations Ordinary Differential Equations Review of Ordinary Differential Equations (DRAFT - 11/16/09) Millennium Bridge - Wikipedia You Tube Video - Millennium Bridge Resonance Heat Equation Heat Movie 1 - abs(x) Heat Movie 2 - parabola Heat Movie 3 - Double V Heat Movie 4 - Forced Heat Equation with B.C. u(0,t)=u(L,t)=0 Heat Movie 5 - Forced Heat Equation with B.C. u_{x}(0,t)=u_{x}(L,t)=0 Wave Equation 1D Wave Equation Wave on a 1-D String with Fixed Endpoints Wave on a 1-D String with Fixed Endpoints - Animated with first 5 Fourier Modes (Fundamental Mode in Red) Wave on a 1-D String with FLAT Endpoints from HW10 Wave on a 1-D String with FLAT Endpoints from HW10 - Animated with first 5 Fourier Modes (Fundamental Mode in Red) Traveling Wave :u[0](x) = − tanh(x): Red = Right Traveling, Blue=Left Traveling, Black = Superposition 2D Wave Equation Rectangular and Polar Rectangular Membrane Movie 1 -Text Example pg577 Rectangular Membrane 2 -Text Example pg577 Applet - Pretty Cool Rectangular Membrane Modes Animations of Rectangular Membrane Modes - Pretty Good Animations done by Dr. Russell - All sorts of stuff! 
The Well-Tempered Timpani By Richard K. Jones Vibrating Membrane1 - 12.9.1 Example Vibrating Membrane2 - 12.9.1 Example Vibrating Membrane3 - 12.9.1 Example Vibrating Membrane4 - 12.9.1 Example Nonlinear Wave Phenomenon Wikipedia Article on Shock Waves Animation of Shock Wave Formation in Pressure Field Shock Wave (Plane) - You Tube 1 Shock Wave (Plane) - You Tube 2 Shock Wave (Explosion) - You Tube 3 Shock Wave (Explosion) - You Tube 4 : Ignore the cartoon bubble Shock Wave (Simulation) - You Tube 5 : Notice the distortion of the expanding wave-front Shockwave Slowmo NASA - Shock Wave Simulator Shockwave :)
Math Tools Newsletter MATH TOOLS NEWSLETTER - OCTOBER 2, 2009 - No. 80 As you browse the catalog, please take a moment to rate a resource, comment on it, review it -- join an existing conversation, or start a new discussion! ***FEATURED ACTIVITY Activity: Printable compass / straightedge construction worksheets - Math Open Reference John Page A web page that contains a list of 35 printable worksheets for compass and straightedge constructions. Each printable sheet has 2 or 3 problems and links back to the pages interactively explaining the constructions needed. Each sheet has a space for student name. ***FEATURED ACTIVITY Activity: A Japanese Temple Problem Pat Ballew: Explorations in Geometry In this guided Geometer's Sketchpad exploration, students use their knowledge of tangent circles to discover a relationship between the radii of three circles which are mutually tangent to each other and an external line. This proof was drawn up on a wooden block and hung in a temple in Gunma prefecture of Japan in the early 1800's. The practice of hanging such proofs was called Sangaku. ***FEATURED ACTIVITY Activity: Math Operations in Lists Pat Ballew: TI-82 Users Guide These instructions detail mathematical operations that can be used on lists or sequences of numbers with the TI-82. They include questions to test students' understanding, with answers at the bottom of the page. ***FEATURED TOOL Tool: Function Grapher Online Amir Rashedi Graph functions and find roots (zeros) of functions without guesswork. Specify domain for functions, drag the graphs and rotate the axes. ***FEATURED TOOL Tool: Abacus v1.4 Dana M. Proctor The Abacus Java Applet/Application is an implementation of an Abacus counting machine. The machine displays a total for the number represented by the machine's current state in addition to each digits value. The program provides a learning tool to teach basic concepts of the base 10 counting system. 
***FEATURED TOOL Tool: Parametric Surfaces in Rectangular Coordinates Barbara Kaskosz and Doug Ensley This 3D grapher graphs a parametric surface defined in rectangular coordinates. The user enters parametric formulas for the x, y, and z coordinates and the applet draws the corresponding surface in 3D. The surface can be rotated in real time by the user with the mouse and its opacity can be changed. The second version of the grapher is much older and slower as it is written in AS1, but it has a few practice problems that might be of interest. ***FEATURED TOOL Tool: The Abacus in Various Number Systems Alexander Bogomolny An interactive Java applet showing one or two Russian abacuses with options to select the base of the number system and the number of wires. The applet displays the number matching the current configuration and performs an automatic carry. The abaci can be synchronized so that adding to one removes an equal number from the other. Bogomolny links to a page discussing the history and use of the abacus. ***FEATURED TOOL Tool: Suan Pan in Various Number Systems Alexander Bogomolny An interactive Java gadget simulating suan pan -- the Chinese abacus. Options include selecting the number system base and the number of wires. The applet displays the number matching the configuration and performs an automatic carry. CHECK OUT THE MATH TOOLS SITE: Math Tools http://mathforum.org/mathtools/ Register http://mathforum.org/mathtools/register.html Discussions http://mathforum.org/mathtools/discuss.html Research Area http://mathforum.org/mathtools/research/ Developers Area http://mathforum.org/mathtools/developers/ Newsletter Archive http://mathforum.org/mathtools/newsletter/ .--------------.___________) \ |//////////////|___________[ ] `--------------' ) ( The Math Forum @ Drexel -- 2 October 2009
Trigonometry Question Being Find the coordinates of the points (Tip: Consider the point Hey Don, Refer to the figure attached. Choose the x-axis along $AB$ and the y-axis along $AC$ with the origin at $A$. $Q,R$ can be interchanged. Case 1. $QR$ on sides $AB$ and $AC$: we have $Q=(2\alpha,0)$, $R=(0,2\beta)$, with the additional condition that $0 \le \alpha \le \frac{b}{2}$, $0 \le \beta \le \frac{c}{2}$. Case 2. $QR$ on sides $AC$ and $BC$: the abscissa of $Q$ is $0$ and hence the abscissa of $R$ has to be $2\alpha$. Let the ordinate of $Q$ be $\gamma$; then the ordinate of $R$ has to be $2\beta-\gamma$ and it has to satisfy the line equation $\frac{y}{x-b} = -\frac{c}{b} \implies \gamma = -\frac{c(2\alpha - b)}{b}$, with the additional bound constraints $0 \le \alpha \le \frac{b}{2}$ and $0 \le \gamma \le c$. Case 3. Can be done on similar lines to Case 2. Now use the rotation matrix to translate to the rectangular Cartesian coordinates. ~Kalyan.
Spooky math problem On my home node for some time I have had an interesting problem: Suppose I have two envelopes. All you know is that they contain different numbers. I randomly hand you one of them. You open it, look at it, then hand it back. You now have sufficient information to, with guaranteed better than even odds, correctly tell me whether I gave you the envelope with the larger number. How? I warn that the answer is crazier than the problem. It being Halloween I think it appropriate to publicly demonstrate that by giving the answer. The trick is that you make up a number, pretend it is the one that I still have, and that gives you just enough information to get better than even odds, guaranteed! How is that for sheer and utter insanity? Not to mention being spooky! Now before I fill in details and explanations, I should mention that this problem is not original to me. I learned it from Laurie Snell, a well-known probability theorist, and it has a history dating back to the 60's. From me it found its way onto the Internet, and can be found in such places as rec.puzzles. First details. The above answer is indeed correct, and to have it work you need to pick your number from a "continuous probability distribution with non-zero density everywhere". If that confuses you, just think of a normal distribution which is widely known as the bell curve. Now explanations. The first is a simple algebraic explanation. If you sit down and do the algebra, you will find that your probability of being right turns out to be exactly 50% plus 1/2 the probability that you pick a number between my two. While obviously you don't know when you pick a number between my two, you do know that you have some chance of doing so, so you know that you have better than even odds of being right. Some people like that explanation, some prefer to see a picture. Well, get out a piece of paper and draw a line. Put two marks on it to represent my two numbers. From the right mark draw an arrow left. This is the range of numbers you need to pick in to get the answer right if you are handed the larger number. 
From the right mark draw an arrow left. This is the range of numbers you need to pick in to get the answer right if you are handed the larger number. From the left mark draw an arrow right. This is the range of numbers you need to pick in to get the answer right if you are handed the smaller number. Now look closely. If you pick a number bigger than both of mine, you have even odds of being right. Ditto if you pick a number smaller than both of mine. But if you pick a number between mine, You are right. Guaranteed. Since you always have a chance of picking a number between my two, you always have better than even odds. See? You can get something for nothing! Real information from just making something up! How much you cannot know, but you get something! :-) This appears in the rec.puzzles FAQ as "high or low" and their answer appears here. UPDATE 2 I have posted Perl code at Spooky math - with Perl which shows the argument in detail. It is a little raw, but works and shows the key ideas in the explanation. Comment on Spooky math problem RE: Spooky math problem by BlueLines (Hermit) on Nov 01, 2000 at 06:02 UTC Hrmm, i dunno about this. Let x,y be your two numbers. So there are |x-y| possible "correct" guesses (a "correct" guess being one that falls between x and y). Let z be the set from which i'm randomly picking a number n. This number can be anything (1, pi, 2^.5, e^i*pi, etc). So there's an infinite set of numbers from which i'm choosing one member. (I'm not going to get in to the infiniteness of this infinity, as it's been too many years since i've taken discrete math). The chances of me picking a "correct" number are |x-y|/|z|, where z is infinite, which is 0. The assumption of gaining odds based on this imagined third number assumes that there is a last number at some point. Once you can quantify infinity, you can retrieve a decimal approximation of your odds. Until that point, you are dealing with zero. 
If you're going to pretend that a number is in between the other two, you might as well pretend you know what the other number is.... Update: Here's the proof in the rec.puzzle FAQ. I see two things wrong with it (i think), but i'm going to check into this before posting:-) Pick any cumulative probability function P(x) such that a > b ==> P(a) > P(b). Now if the number shown is y, guess "low" with probability P(y) and "high" with probability 1-P(y). This strategy yields a probability of > 1/2 of winning since the probability of being correct is 1/2*( (1-P(a)) + P(b) ) = 1/2 + (P(b)-P(a)), which is > 1/2 by assumption. Update II, Electric Boogaloo: Here's the problem with the proof. As it is written, it is correct, but the first assumption it makes is false. For this situation (picking two random numbers from an infinite set), it is impossible to create a cumulative probability function P(x) as the solution describes. The cumulative probability function is based on the probability mass function, which is always undefined in this case. The probability mass function for any value in this problem is 1/infinity, which is zero. The cumulative probability function is a sum of probability mass functions, which will be zero no matter how many you add together. So if it were indeed possible to pick a function like the proof describes, the proof would hold. However, since the initial assumption is a contradiction, the proof fails. Here's a nice discussion on cumulative probability functions, and here's a discussion of the probability mass function. Neat-o. Has anyone seen another proof of this? Disclaimer: This post may contain inaccurate information, be habit forming, cause atomic warfare between peaceful countries, speed up male pattern baldness, interfere with your cable reception, exile you from certain third world countries, ruin your marriage, and generally spoil your day. No batteries included, no strings attached, your mileage may vary. Read very, very carefully. 
:-) What your probability is of being correct depends on what two numbers I have. You only can guarantee that it is better than even, but it could be by a very small amount indeed. There is theoretical discussion on this, and the fact that you don't know the probability turns out to be important. (I don't remember details though.) However there is a variant of this problem where both you and I pick our numbers independently out of the same distribution. Now you can work out the probability of your being right prior to my picking my numbers. And even though your number has no actual information about mine, you turn out to be right exactly 2/3 of the time. If you disbelieve this it is easy to write a short script to test it. Again, I got this problem from a probability theorist and it really does work.

A Perl script describing this would be most helpful. All this math talk is giving me a headache. Theoretical physics I can handle. It's conceptual. Math is another matter. Given that I think in Perl, I think that'd be best.

Questions about how I got my numbers get you into the kind of trouble you describe. But that is outside of the problem. What is inside is that when you make up a number, you need to use a "good" cumulative distribution. And there are plenty of them indeed to be found in any good probability theory book. In fact I named one. The standard normal, which is the prototypical bell curve, is a probability function which will work. (Albeit with a tail that falls off very rapidly, so your win is minuscule if my numbers are very far away from zero.) Trust me. I studied math long before I studied Perl, and I got this problem off of a well-known probability theorist. The solution is good.

RE: Spooky math problem by runrig (Abbot) on Nov 01, 2000 at 07:04 UTC

This is voodoo math. I looked in the rec.puzzles FAQ on deja and all I could find is this: "Someone has prepared two envelopes containing money. One contains twice as much money as the other.
You have decided to pick one envelope, but you can then "prove" that the other envelope contains more money than the one you chose." "Threads about these and other logical paradoxes tend to go on for a long time and can get nasty as people try to convince each other of the truth of their positions. If you would like to start a thread about a paradox, please read the archive explanation first to see if that clears things up for you. Whether you are reading or posting to one of these threads, remember that there are many logical interpretations that are often equally valid. If there weren't, it wouldn't be a paradox, would it?"

Update: I looked at the url you gave, but I like this one (update: that link now dead AFAICT) better :-) (At least he uses perl... I wonder if he uses taint?)

Update regarding BlueLines's post: I would've bet that there'd be a division by zero somewhere...

RE: Spooky math problem by rlk (Pilgrim) on Nov 01, 2000 at 11:37 UTC

I don't buy this. Call the amounts in the two envelopes "A" and "B". A is known, B is not. So you guess at B and pick the number "B'". Now, if B' > A, you will pick B, and you will be right if B > A. B can be anything, so the probability of this is 50%. Alternately, if you pick B' such that A > B', your chance of being correct with respect to B is also 50%. So, 2 out of 4 possibilities of being correct. Even odds overall.

Ryan Koppenhaver, Aspiring Perl Hacker
"I ask for so little. Just fear me, love me, do as I say and I will be your slave."

Non-Perl Topic OK? by princepawn (Parson) on Nov 01, 2000 at 13:24 UTC

Because it has been a while since I have received a deluge of downvotes and because I am eager to secure my fourth position in the all-time worst nodes hall of fame, it is time for me to ask a question: What does the original post in this thread have to do with Perl?
And, assuming I know enough about Perl that my conclusion that it has nothing to do with Perl is correct, then why should I not downvote this because it has no business here? Of course, it is very hard to criticize someone whom you have met in person, but, well, somebody's gotta do the dirty work. And of course finally, due to my highly developed psychic powers, I can already see a -30 or -40 reputation on this post. But, if you would be so kind, please drop me a line as to why I deserve such treatment for an innocent and reasonable post. Over and out, the -- magnet, princepawn

princepawn, I believe that until your post contains the words "Natalie Portman naked and petrified" you may still be looking at a single digit negative reputation for this node. And for the record, the above is the first time I have ever typed the words "Natalie Portman naked and petrified", much less posted such in a monastery.

Not everything on this site is strictly perl. For example, look at the current poll. At other times, conversations take twists and end up being about everything but programming. I think it would be pretty dull if we never brought up other topics. On the other hand, too much off-topic discussion will obscure the main subject of the site. I, at least, would find it nice if posts were labeled in some way, either with an 'ot' disclaimer in the title, or with a section as discussed.

RE: Spooky math problem (featuring perl!) by turnstep (Parson) on Nov 01, 2000 at 22:49 UTC

Someone asked for some perl? Try this out with a small range (3) and then boost it up to 100 and watch the difference. Running at least 100,000 games is recommended but play around with the variables. :)

```perl
## HiLowDeal
use strict;

## This program has a "dealer" and a "player"
## The dealer generates two random numbers and sticks them
## into two envelopes. One of these is handed to the player.
## The player looks at her number, then picks another
## number at random and assumes that she has picked the
## dealer's number. She then either states that she has
## the high or the low envelope. 50/50? Perhaps not...

## Usage: HiLowDeal 10 10000 50
## The above example plays ten thousand games,
## uses the numbers 1 through 10 to draw the random numbers from,
## and outputs a running total about 50 times

my $games = shift || 10000;  ## How many games shall we play?
my $limit = shift || 10;     ## What range of numbers?
my $checkpoint = shift || 3; ## Approximate # of checkpoint displays
$checkpoint = int $games/$checkpoint;

## The total set is really an infinite set, but we
## will restrain our players to this without letting
## them "know" the range, making it effectively infinite.
## In other words, if a "10" is received, they do not know
## that there is no "11", "12", etc... and cannot automatically
## deduce that they have the higher card.
my @money=(1..$limit);

print "Games=$games, from 1 to $limit, checkpoint is $checkpoint.\n";

my ($letter1, $letter2);
my ($guess, $result, $actual);
my $total=0;
my $counter=0;

for (1..$games) {

  ## Select two random cards, must be different
  $letter1 = $money[rand @money];
  { $letter2 = $money[rand @money]; redo if $letter1 == $letter2; }

  ## Player one has both letters, then hands one to player #2.
  ## Player two looks at his, then makes a guess as to the other one.
  ## Cannot guess their own number
  { $guess = $money[rand @money]; redo if $guess == $letter2; }

  ## The player's best guess: (1 means they have the higher card)
  $result = $letter2 > $guess ? 1 : 0;

  ## The actual value:
  $actual = $letter2 > $letter1 ? 1 : 0;

  ## Are they correct? If so, give them a point. If not, take one away
  $total += ($result==$actual) ? +1 : -1;

  if (++$counter == $checkpoint) {
    print "TOTAL: $total\n";
    $counter = 0;
  }
}
print "Grand total: $total\n";
```

Update: OK I'm re-reading everything and I'm still not sure that I understand it after all.. *sigh*..
For those that don't get it, like I didn't, realize that we're making one important assumption here: that both parties know the upper and lower bounds of the numbers! Sure, if we know we're working from 0..100, and the envelope we see is number 88, odds are it's the higher of the two envelopes (assuming both are truly random; this can be defeated if the "dealer" keeps writing two consecutive numbers or otherwise knows the trick). Now I look at the situation and think, "Well yah, duh." But what if you don't know the upper bounds? Modify the lines in the script like below, and the odds come out to exactly 50%: what we'd expect. The way *I* would play this game in real life, without knowing the limit, is to try to "figure out" the limit as I go, but statistically, given a one-time shot, we have to assume that the number we were given is, on average, mid-way through the distribution. This means that no matter what number we think of, we're still 50% likely of guessing right.

```perl
$letter1 = $money[rand @money / 2];
{ $letter2 = $money[rand @money / 2]; redo if $letter1 == $letter2; }
{ $guess = $money[rand (($letter2+1) * 2)]; redo if $guess == $letter2; }
```

But if we know we're going to play 100 games, we can sorta "figure out" the upper limit as we go (assuming there is one, and it's kept constant during play), by assuming that our numbers will average out to be mid-way through the distribution. That would let us approach a higher probability of being correct further down the line. The script can be modified to play by those rules too, but I'll leave that as an exercise for the reader. If you were to actually make these bold statements and try them in real life (give them 10 cards, pick two at random, show me one and I'll guess 66% correctly whether it's the higher of the two cards), I would seriously hope that they would figure out the trick without me needing to guess very much, if at all.
If they were smart, they'd shut up when they figured it out and would keep pulling, say, 9 and 10, and showing me the 9 all the time.

Whoa. What tilly is claiming is that we have a "better than 50% chance" of getting it right. The degree to which it's "better" can be *mind-bogglingly minuscule* (we're talking continuous quantities here) so I don't think we're warranted in exporting any claims about the actual probabilities from what a program reports.

Philosophy can be made out of anything. Or less -- Jerry A. Fodor

No such assumption is made in the problem. With a continuous probability distribution you can have some probability of picking a number in any range at all. The probability may be low, but it is always positive. That is why I said "non-zero density everywhere" in the details. More carefully stated, the experiment is defined as, "I hand you one of my two numbers at random, you look at it, hand it back, and make a guess as to high or low." The assertion is that there is a method you can use such that, no matter what my numbers are, you will have better than even odds of being right. What your odds are will depend, of course, on what my numbers are. All that you can guarantee is better than even. Of course any purely mechanical computer will have the usual problems with "randomly pick out of a continuous distribution"; however, you can specify a concrete algorithm for doing that with coin-flipping (note that you don't need to find the number precisely, just determine it to sufficient detail to compare it with the one you were handed). You need make no assumptions on my numbers. And an outside observer who knows both my numbers and your method can calculate the exact probability you will be right - and that will be better than even. (Yes, this skirts on the boundary of a lot of interesting problems, philosophical and otherwise, in math.)
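The coin-flipping idea mentioned above can be sketched in a few lines of Perl. This is an editor's illustration, not code from the thread: generate the binary digits of a uniform random number in (0,1) one coin flip at a time, narrowing an interval until the comparison against a given number is settled.

```perl
use strict;
use warnings;

# A sketch (not from the thread) of the coin-flipping idea: decide whether a
# "random real" uniform on (0,1) is below or above a given $u by generating
# one binary digit per coin flip, stopping as soon as the comparison is
# settled. No digit beyond the point of decision is ever needed.
sub flip_compare {
    my $u = shift;              # compare against this, with 0 < $u < 1
    my ($lo, $hi) = (0, 1);     # interval still consistent with the flips so far
    while (1) {
        my $mid = ($lo + $hi) / 2;
        if (rand() < 0.5) { $hi = $mid } else { $lo = $mid }   # one coin flip
        return -1 if $hi <= $u;   # the random number is certainly below $u
        return  1 if $lo >= $u;   # certainly above
    }
}
```

Over many calls, flip_compare($u) comes back -1 about a fraction $u of the time, which is exactly what sampling from the uniform distribution on (0,1) requires; the point of the construction is that only finitely many flips are needed per comparison.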
(crazyinsomniac) RE: Spooky math problem gone CRAZY by crazyinsomniac (Prior) on Nov 02, 2000 at 00:36 UTC

When I saw tilly's question for the first time, I was really confused. My problem, as always, turned out to be that I didn't really understand the question in the first place. The English combined with my lack of attention to detail really warped my perception of what the question was. Here's my version of the question (modified so normal people can understand): Suppose I have two envelopes. All you know is that they contain a different number of flashcards, with different numbers on each of them. I randomly hand you one of them. You open it, look at it, then hand it back. You now have sufficient information to, with guaranteed better than even odds, correctly tell me whether I gave you the envelope with the larger sum. How?

This is how I understood tilly's problem at first, so I thought, "what the $@@$!?! THAT'S IMPOSSIBLE!!!" Then I thought, "OK, ok, don't give up so fast, he says there's a solution." I continued thinking about this problem and the answer tilly gave occurred to me, but I thought you don't know how many flashcards you have, so the sum can be anything. But think about it. You make up a number, and pretend it is in between the sums of the two envelopes, and that the number on the flashcard I handed you is between the number of flashcards in each envelope, and so you can guarantee to have better than 1/2 odds of guessing which envelope has the largest sum. It is a more minuscule percentage than the one from tilly's original problem but it is the same principle. 1/2 + (crazy math) = better than 1/2

update: May 2, 2001 11:50AM PST I've been reviewing some of my posts lately, and I've noticed that this one doesn't quite reflect my ignorance and misunderstanding of the question. With that said, I point out that what is written here is not what was in my head at the time of writing, but rather a broken argument of misunderstanding.
So please just accredit this one to irresponsibility, sleep deprivation, and whatever else you want. Like all (most) of my posts, I leave this one as a reminder and example of what to avoid, in hopes I will not repeat my stupendous error, or merely dyslexic miscommunication. "cRaZy is co01, but sometimes cRaZy is cRaZy". - crazyinsomniac

RE: Spooky math problem by Compilers-R-Us (Initiate) on Nov 02, 2000 at 01:04 UTC

Let's frame this in the simplest terms: You can only 'win' if you know something about the mean of the probability distribution that produced the numbers. As we say in physics, this breaks the symmetry of the problem. You have more information than just the value of one number, and from there you can compare your number with the mean. If your number is higher than the mean, then you are likely to have the higher number. Otherwise it is likely to be the lower. If you have a uniform probability distribution between + and - infinity, one could argue that the mean is zero. Therefore, if the number you viewed is positive, it is likely the higher one and vice versa. However, I don't feel comfortable saying it is the mean - too many infinities are involved. It depends on your definition of zero and infinity. A good definition of zero is that it is the mean of integers, rational and/or irrational numbers. A bell curve, although infinite, has a definite mean and finite integral. The probability is zero at +/- infinity, so there are fewer affronts to nature.

Read the problem again. Were such assumptions needed you can be sure I would have stated them. But they are not. The key is that your random number needs some chance of falling in the interval between the numbers. As long as you can guarantee that, you get better than even odds. But it turns out that is not hard to guarantee. However if the two numbers differ by, say, 1 from each other it turns out that on average you are ahead by all of zero percent (very large tails with probabilities near 50% dominate).
So against a mildly malicious opponent you - on average - don't come out ahead. But that is an average across an infinite number of situations, every last one of which you came out ahead in. (Infinity has lots of strange stuff like this.) Again, the probability of your winning depends on the two numbers that I have. But it is always better than even. And you can guarantee that no matter how I tried to produce my numbers.

RE: Spooky math problem by Compilers-R-Us (Initiate) on Nov 04, 2000 at 19:06 UTC

Actually, you do. In order to generate a random number, you have to know the probability distribution. Therefore your original assertion was right, and my analysis was right. I didn't get it 100% last time. In the problem the probability distribution is the 'trivial' uniform, infinite distribution and with a mean of zero you can actually beat better than 50-50 odds with only the ability to generate a random number - you don't actually have to do it. If you are randomly given a second number without knowledge of the distribution, the result tells you something about the distribution, again giving you better than 50-50 odds. I'm done!

Please look at Spooky math - with Perl. You will see that the numbers I have are parameters to the experiment, the guesser uses no knowledge about numbers, and the trick is that the guesser is sometimes guaranteed of being right. The only limits to the guessing rule in that program are internal to how computers select pseudo-random numbers and the floating point math that Perl uses. Other incidental notes. Your "trivial" uniform, infinite distribution actually does not and cannot exist. The fact that it cannot exist has deep consequences. The details of why not are covered in real analysis. In the US and Canada this would traditionally be taken either by an advanced 4th-year math student or a beginning graduate student. And random trivia.
Not only is an infinite uniform distribution impossible, but attempts to look for really random numbers invariably turn up patterns that don't fit with a uniform distribution. For instance Benford's law states that the first digit obeys a logarithmic distribution. It isn't really a theorem, but other than that detail the following is a good introduction for the general public. Knuth tries to explain it in his series, but does not manage IMNSHO to show why his abstract model has anything to do with reality. Just thought I would throw that out there...

Re: Spooky math problem by lidden (Deacon) on Sep 05, 2004 at 17:01 UTC

Suppose I have two envelopes. All you know is that they contain different numbers. I randomly hand you one of them. You open it, look at it, then hand it back. You now have sufficient information to, with guaranteed better than even odds, correctly tell me whether I gave you the envelope with the larger number. How?

No, that question actually is: If I give you a number from a set of two different numbers, tell me if you got the small or large number? My odds of doing that are exactly 50%.

If you sit down and do the algebra, you will find that your probability of being right turns out to be exactly 50% plus 1/2 the probability that you pick a number between my two.

Hmm. If I choose to guess myNumber + epsilon (where epsilon is small enough) I have 50% chance of being between your two numbers and hence have 75% chance of being correct, which is silly :-) Ok, four years late but I got here from this node.

Suppose I have two envelopes. All you know is that they contain different numbers. I randomly hand you one of them. You open it, look at it, then hand it back. You now have sufficient information to, with guaranteed better than even odds, correctly tell me whether I gave you the envelope with the larger number. How?

No, that question actually is: If I give you a number from a set of two different numbers, tell me if you got the small or large number?
My odds of doing that are exactly 50%.

You'd think that your odds of doing that are 50%. It turns out that they don't have to be. This violates common sense, which is what makes the problem interesting. Mathematicians are very interested in understanding situations where their intuitions differ from what happens. By examining those "pathological cases" you sharpen your intuition for how things work. Note that in these weird boundary cases it is important to be very precise in your thinking. Any sloppiness will cause you to misanalyze the problem to fit your preconceptions, not to analyze what is actually happening.

If you sit down and do the algebra, you will find that your probability of being right turns out to be exactly 50% plus 1/2 the probability that you pick a number between my two.

Hmm. If I choose to guess myNumber + epsilon (where epsilon is small enough) I have 50% chance of being between your two numbers and hence have 75% chance of being correct, which is silly :-)

And that is what sloppiness looks like. The algebra only works if your method of choosing the other number is independent of the number that you get. As soon as you introduce a dependency you have to analyze that dependency, and it changes the answer. Since you don't seem inclined to try the algebra, allow me to demonstrate what it looks like. Suppose that the numbers that I have are x and y with x < y. Suppose that p(z) is the function that tells you for any number how likely you are to think that you got the larger one if you're handed that number. Then:

    P(You're right) = P(You're handed x)*P(You don't think that x is larger)
                    + P(You're handed y)*P(You think that y is larger)
                    = 0.5 * (1 - p(x)) + 0.5 * p(y)
                    = 0.5 + 0.5*(p(y) - p(x))

So far we haven't introduced any details about the method.
With the method that I described, though, p is a monotonically increasing function; in fact p(y) - p(x) is the probability that you pick a number between x and y, so you're better than even odds by half the probability that your number is between my two. With the choosing method that you came up with, you always conclude that you're handed the smaller number. Therefore p(x) and p(y) are both 0.5 and your odds of being right remain at 50%. Slight difference!

If your eyes are glazing over at the algebra, then please examine the following (bad) ASCII art version of the picture that I described in my root node:

      (Right if handed the larger)
    <----------------------------|
      (even odds) | (guaranteed) | (even odds)
                  |---------------------------->
      (Right if handed the smaller)

As you can see, no matter what number you independently come up with, you never have worse than even odds of being right, and you have some chance of guaranteeing that you're right. That chance gives you better than even odds overall.

In order for the envelope receiver to receive the benefit of additional odds above 50%, I think the receiver must come up with a random number before viewing the number in the envelope. Then given the number in the envelope they can determine whether to say high or low. Using tilly's ascii chart above: my guess is z; assume my guess is between x and y. When I open the envelope to reveal x, which is smaller than z, I will then say that I was handed the smaller number and I will be right. If I open the envelope to reveal y, which is larger than z, I will say it is the larger number and I will be right. If my guess z is smaller than x, I will be wrong whenever I am handed x, and right whenever I am handed y. Similarly if z is larger than y, I will be right whenever I am handed x and wrong when handed y. Now, if I look at the number in the envelope first, as a human, I am incapable of picking a truly random number not biased by that number. (Hell, humans are incapable of picking a truly random number, period.)
So, I now have a 50-50 shot at deciding to pick higher or lower than that number. Which means 50% chance of deciding whether to say the received envelope number is higher or lower than the number in the other envelope. It no longer matters whether my 'guess' falls between the actual numbers anymore, because I cannot receive the other envelope. Therefore, the odds are 50%. Only by choosing a number before the envelope is handed to you can you increase your odds, however slightly.

Re: Spooky math problem by SimonClinch (Chaplain) on Aug 30, 2005 at 13:39 UTC

Another approach: there is some additional information needed. 1) The smallest size per digit at which you are capable of writing the numbers, 2) the largest envelope size you are capable of acquiring; these collectively limit the number of digits the remaining number can have. For the sake of argument let's call it D. If the numbers in the envelopes are M and N, with M being the one handed to you, then the probability that N < M is p = (M-1)*10^(-D). Because D is arbitrarily large, p is arbitrarily small.

One world, one people

First of all, no additional information was needed. Secondly, you are implicitly assuming a probability distribution on the numbers in the envelopes, namely that it is evenly distributed among all possible numbers that could be written on the pieces of paper. This assumption is both wrong and unnecessary. Thirdly, note that the technique must work no matter what pair of numbers happen to be in the envelope. Creating a technique which will work for 90% of the pairs that you think could be there won't cut it. It has to be all pairs.

Re: Spooky math problem by tmoertel (Chaplain) on Sep 21, 2005 at 18:57 UTC

The trick here – and there is a trick – is a subtle use of equivocation: You make a claim about one problem but then explain the claim in terms of another problem that is subtly yet significantly different. In the first problem, each envelope can contain any number.
In the second problem, however, you require that the distribution of numbers have a "continuous probability distribution with non-zero density everywhere." These problems are not the same. To see why, imagine an infinite number line representing all numbers. If you pick any segment on this line, say that between 0 and 1, the portion of the line that the segment represents is zero. Hence the probability density function over the segment is also zero, which contradicts the assumption made in your analysis. If we repeat your analysis, this time using the distribution of numbers for your originally stated problem, we find that the probability of picking a number in between your two numbers is zero, and thus knowing the first number provides no benefit. Our intuition turns out to be correct.

There is no trick. In the problem, each envelope can contain any number. The requirement of a distribution of numbers that has a continuous probability distribution with non-zero density everywhere is on the one you create for your algorithm. That number is made up out of thin air and has no connection to either of the numbers in the envelopes. The trick is that it gives us a chance of telling the two numbers apart. The probability density function that you are trying to describe, "uniform over the real numbers", is not a valid probability density function. It is a classic result both in real analysis and probability that it can't be. (The real analysis statement is that no such measure exists; the probability theory statement is that no such probability distribution exists. Those statements are the same.) This is why you have to be very careful in the wording to even get a well-defined problem. Given any two numbers and the algorithm, there is a well-defined probability that you're right, and that probability is over 0.5. Prior to the numbers and algorithm, the probability of your being right is undefined and undefinable.
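The claim that the probability is well-defined once the pair and the algorithm are fixed can be checked numerically. The sketch below is an editor's illustration, not code from the thread; it uses the standard logistic CDF as the guessing rule, and the pair (-1, 2) is an arbitrary choice:

```perl
use strict;
use warnings;

# Against a fixed pair (x, y) with x < y, claiming "I hold the larger"
# with probability F(shown) wins with probability 0.5 + 0.5*(F(y) - F(x)).
# F here is the standard logistic CDF; any continuous CDF with non-zero
# density everywhere would do.
sub F { 1 / (1 + exp(-$_[0])) }

my ($x, $y) = (-1, 2);       # the dealer's numbers (arbitrary, x < y)
my $trials  = 200_000;
my $wins    = 0;
for (1 .. $trials) {
    my $shown = rand() < 0.5 ? $x : $y;        # handed one at random
    my $claim = rand() < F($shown) ? 1 : 0;    # 1 = "I hold the larger"
    my $truth = $shown == $y        ? 1 : 0;
    $wins++ if $claim == $truth;
}
printf "simulated %.4f, predicted %.4f\n",
    $wins/$trials, 0.5 + 0.5*(F($y) - F($x));
```

For this pair the predicted win rate is about 0.806; moving the pair far out into a tail of F shrinks the edge toward 0.5 without ever eliminating it, which is exactly the "better than even, but perhaps by very little" behavior described above.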
Again I'd like to point people to Bently Preece's excellent explanation. The goal of the question is to come up with a single strategy that will give better than even odds for every game in a particular class of games. And the described strategy does that. Now I haven't explained why this particular probability distribution can't exist. The technical reason is that the axioms of probability require the probability of a countable disjoint union to be the sum of the probabilities, and that the probability of the whole set is 1. But the real line is a countable disjoint union of intervals, each of which has probability 0 (for the reason that you explained), so the probability of the whole set is a countable sum of 0's, which is 0. Contradicting the requirement that it be 1. Now it is easy to object that one could just define probabilities differently. And it is true, one could. But any way you define it, you'll have a lot of subtleties around infinity, and attempting to think about a uniform probability distribution on the real numbers will be one place that you'll encounter lots of them. BTW in the USA this fact is generally covered in a real analysis class in advanced undergraduate or beginning graduate school mathematics.

There is no trick.

Yes, there is – even though it is probably not intentional – and it is equivocation. First, you say: In the problem, each envelope can contain any number. Here, you present "any number" as meaning "any number, with absolutely no restrictions." Later, however, you do place restrictions on the numbers by requiring that it be possible for a guessed third number to fall between two such "any numbers" with a probability of greater than zero: Given any two numbers and the algorithm, there is a well-defined probability that you're right, and that probability is over 0.5. In other words, you subtly (and perhaps unknowingly) redefined "any number" to effectively mean "any number within a finite range."
This is why you have to be very careful in the wording to even get a well-defined problem.... Prior to the numbers and algorithm, the probability of your being right is undefined and undefinable.

Precisely. Prior to the numbers and the algorithm, the probability of your being right is undefinable. How, then, did you arrive at a concrete statement about that probability? You redefined the problem to make it possible. You did it while explaining the "numbers and the algorithm," which made it harder to see, but you did do it. I'll say it again: The problem you originally presented and the problem you ultimately analyzed are not the same. The original problem's numbers were free of restrictions, but the analyzed problem's numbers were not. Two different problems.

Re: Spooky math problem by polettix (Vicar) on Sep 26, 2005 at 14:58 UTC

Hi, your link to the puzzle has to be updated; it now points to a nonexistent page. Nevertheless, I found the solution and also this discussion about the problem, but the solution is flawed IMHO. Apart from the fact that the author is arguing about the phrasing in the original solution, the algebra should be the following (quoted from one of the messages in the thread):

    P(correct guess) = P(we were shown the higher number H) * P(we guessed "high" given H)
                     + P(we were shown the lower number L) * P(we guessed "low" given L)
    => (plugging in from the original selection method)
    P(correct guess) = (1/2) * (1 - F(H)) + (1/2) * F(L)
    P(correct guess) = (1/2) * (1 - (F(H) - F(L)))
    => Since F(H) - F(L) > 0 (by assumption)
    P(correct guess) = (1/2) * (1 - (a positive value))
    P(correct guess) < 1/2.

What I find very amusing is that we're using two different values (namely H and L) to feed the F function, but this is not applicable. We have to use the same value - that is, the value we figure out in our mind; let's call it y. This leads to:

    P(correct guess) = (1/2) * (1 - (F(y) - F(y))) = 1/2

as it should clearly be.
perl -ple'$_=reverse' <<<ti.xittelop@oivalf
Don't fool yourself.

You've got it backwards. It should be

    P(correct guess) = (1/2) * F(H) + (1/2) * (1 - F(L))

which does give you a probability > 1/2.

Re: Spooky math problem by ikegami (Pope) on Mar 25, 2009 at 04:31 UTC

One flaw: The guarantee fails if the envelope contains two adjacent numbers. 50% + 0.5 * 0 is not greater than 50%.

In standard mathematics there is no such thing as adjacent real numbers. Endless arguments from non-mathematicians notwithstanding, 1 and 0.999... are two different ways of representing the same number and not two different numbers. This is because one of the rules the real numbers follow is trichotomy, which says that if x and y are real numbers then exactly one of the statements x-y>0, x=y and y-x>0 must be true. (Depending on the axiomatization chosen, trichotomy can be either an axiom or a theorem. Either way it is true.) The requirement in the problem that the numbers be different rules out the second possibility. In fact we can make an even stronger statement. There is a basic theorem (called the Archimedean principle) which makes an even stronger assertion: given any two distinct reals there is always a rational number between them. So let n/m be a rational number between 0 and x-y. Then x and y must differ by more than 1/m. So you see that between any two real numbers there is always a finite visible gap. There is therefore no such thing as an infinitesimal in the standard real number system. (Google will provide adequate references to demonstrate that I'm not just making this up.)

(Google will provide adequate references to demonstrate that I'm not just making this up.)

You hacked google to become a reference so it would appear that way ...
;P

Re: Spooky math problem
by wazoox (Prior) on Mar 31, 2009 at 20:38 UTC

I can't wrap my head around this; however, I know you must be right, because I've read exactly the same problem (maybe with a slightly different story) from Martin Gardner, the famous headache-provider :)

Re: Spooky math problem
by oiskuu (Pilgrim) on Jan 08, 2014 at 20:07 UTC

Here's my attempted proof. Please do not hesitate to correct me if it's insufficiently crazy. First, we need to gather the important bits of the puzzle: Suppose I have two envelopes. All you know is that they contain different numbers. The second crucial hint is more readily apparent in explications that follow, I quote: ... given any two distinct reals there is always a rational number between them. Okay. From the above, we now know that (a) the entertainer has two envelopes; (b) he passes you one of the envelopes, opening which you discover a piece of paper where a real number is written in its precise infinite glory. Clearly, this is a feat that only God could pull off. Having witnessed the magnitude of the Number, you are invited to make a guess. However, by the mere fact that we exist at all, we know that God is good. And omnipotent. It is therefore evident that whatever guess you may make, you are always correct! As God hands you the second envelope, it shall magically contain yet another piece of paper with yet another infinitely glorious and satisfactory Number. Unless maybe, perhaps, if you've been naughty?

Re: Spooky math problem
by hdb (Parson) on Jan 09, 2014 at 20:47 UTC

In my opinion, this great thread still needs a simple Perl script to prove what's going on. Here it is, so that you can play the game yourself against the script. In order to avoid any confusion about the source of the numbers in the envelope, you have to specify them as parameters to the script yourself. The third required parameter is the number of repetitions in the simulation to evaluate the odds.
    use strict;
    use warnings;
    use List::Util qw(min max);

    # chose($x) is the probability of answering 'high' for value $x:
    # a strictly increasing function from 0 to 1 over the reals.
    sub chose { my $x = shift; $x > 0 ? 1 - chose(-$x) : 0.5 * exp($x) }

    my ($a, $b, $n) = @ARGV;    # provide numbers and repetitions on command line
    my $ctr = $n;
    my $correct = 0;
    while ( $ctr-- ) {
        my $envelope = rand() < 0.5 ? $a : $b;    # pick an envelope at random
        my $answer   = rand() < 1 - chose($envelope) ? 'low' : 'high';
        $correct++                                # count the correct answers
            if ( $envelope == min($a,$b) and $answer eq 'low' )
            or ( $envelope == max($a,$b) and $answer eq 'high' );
    }
    printf "Correct ones: %20.17f%%, theoretical odds: %20.17f%%\n",
        100*$correct/$n, 50 + 50*(chose(max($a,$b)) - chose(min($a,$b)));

IMHO, the script is simple enough to prove that no cheating is going on. If you play with the inputs you can see that if you provide numbers that are easily distinguishable (by the chose function) the odds are significantly larger than 50% (for example, choosing -1 and 1 will lead to odds larger than 80%). If the numbers are less distinguishable, like 100 and 200, the odds will converge to 50%, and the simulation error will be larger than the advantage. Great puzzle!

Re: Spooky math problem
by oiskuu (Pilgrim) on Jan 10, 2014 at 00:21 UTC

If you want a good, counter-intuitive but concrete puzzle, allow me to direct you towards the Monty Hall problem. Personally, I find this "spooky math" problem little more than a guileful parlor trick. It is vague and ambiguous. We don't really know the full setup. Possible choices lead to effectively different puzzles. In reality, both Contestant and Entertainer would have individual bias, and a very constrained range for their rand(). Assuming the Entertainer gets to choose his numbers, is he spiteful or sympathetic? Now it's a behavioral game with external factors. The abstract problem on an infinite number set is different. In this case, there is no strategy to take advantage of (struck out in a later edit). The third scenario, two persons and/or programs acting it out—that's just a curiosity, a sleight of mind, a voluntary(?) deception.
Re: Spooky math problem
by oiskuu (Pilgrim) on Jan 11, 2014 at 06:07 UTC

Considering and reconsidering the abstract case once more, it now admittedly appears very similar to the Monty Hall problem. First, let's get rid of the red herrings and ambiguities. A random list needs no reshuffling: no need to pick an envelope. Generate the numbers, including the guess. Assume they are distinct. Now the roles of Entertainer and Contestant have become superfluous. Values of numbers are also irrelevant; only their order matters. All we are left with is six permutations:

    A < B < C .......... 1
    A < C < B .......... 1
    B < A < C .......... 0
    B < C < A .......... 1
    C < A < B .......... 0
    C < B < A .......... 1

The favorable outcome is one where A (the revealed number) is not between B and C (the guess). Comparing to the guess reduces the permutations (to the first or second half). The chances of winning are evidently 2/3. But increasing the number of guesses will asymptotically improve your outlook towards 3/4. So there is a strategy that works. Très bizarre.

Can you please explain how this relates to the original question? Where does the guess come from? The original question was to state whether it is high or low. Also, there is no way that the six cases you list have the same probability. If A and B are very close (whatever that means), C will not be between them with the same probability as outside.

Quoting from here:

    The numbers x and y are part of the experiment. How they came to be is not part of the question asked, and therefore questions about how to choose them do not enter into the problem.

Despite tilly's meticulous attempts to befuddle, bemuddle and obfuscate, there is an implicit assumption made. The numbers have a defined relationship, a comparison function. In the absence of further constraints, they form an open-ended (infinite) ordered set. Drawing elements from a finite set, randomly and without bias, yields n**3 equally likely hands. In this abstract case, the set is infinite.
Elements are drawn from the same set because there is an (implicit) mutual understanding, and a presumption of rationality.

Sorry, you're skirting a ton of paradoxes in probability theory. This looks reasonable but isn't, for the simple reason that there is no way to pick "a truly random real number". That probability distribution does not exist. In order to avoid paradoxes the problem has to be *very* carefully stated.

Node Type: perlmeditation [id://39366]
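oiskuu's 2/3 figure can be checked by simulation, but only under the extra assumption that the rebuttals attack: that A, B, and C are drawn independently from the same continuous distribution, so that all six orderings really are equally likely. A minimal Perl sketch (uniform draws are my arbitrary choice):

    use strict;
    use warnings;

    # Monte Carlo check of the counting argument, assuming A, B, C are
    # i.i.d. uniform so all six orderings are equally likely.
    # "Win" = A (the revealed number) is not strictly between B and C.
    my $trials = 100_000;
    my $wins   = 0;
    for (1 .. $trials) {
        my ($A, $B, $C) = (rand, rand, rand);
        my ($lo, $hi) = $B < $C ? ($B, $C) : ($C, $B);
        $wins++ unless $lo < $A && $A < $hi;
    }
    printf "win rate = %.3f (expected 2/3 under these assumptions)\n",
        $wins / $trials;

Under any other joint distribution — for instance hdb's case where A and B are very close — the six orderings are no longer equally likely, and the win rate moves away from 2/3.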