content
stringlengths
86
994k
meta
stringlengths
288
619
Dallas ACT Tutor Find a Dallas ACT Tutor ...After taking a GRE course for graduate school, I started designing my own course for the GED, SAT, and ACT. The course covers all areas of the test, and all of my students have raised their scores. I make it fun, while helping you reach your goals. 29 Subjects: including ACT Math, reading, Spanish, GRE ...For that time period, I spent at least three hours daily working in AutoCAD, creating & updating drawings. Prior to my collegiate and professional experience, I took four years of drafting. These classes utilized AutoCAD and CADKey; board drafting and 3D modeling were also included. 17 Subjects: including ACT Math, calculus, statistics, physics ...I have extensive experience taking standardized tests and even spent some time grading TAKS as well as the Arkansas standardized test. However, my expectations for what was acceptable grammatically in a written essay were higher than those of the other scorers. I recently took an SAT preparation course and mastered the material. 24 Subjects: including ACT Math, English, chemistry, reading ...My success is largely through an education which focuses on process rather than solution. In the same way you can give a man a fish you can cram for a test. However, in the same way you can TEACH him to fish, you can give a deeper understanding of a subject that will provide for continued success. 15 Subjects: including ACT Math, reading, geometry, algebra 1 ...For the 2010-2011 school year, I have worked as a science teacher at Heritage Academy where I incorporate these disciplines in my everyday instruction. Over the years, I have come to realize that appropriate language and literacy development are the foundation for young children to thrive academ... 29 Subjects: including ACT Math, reading, writing, geometry
{"url":"http://www.purplemath.com/Dallas_ACT_tutors.php","timestamp":"2014-04-16T04:46:14Z","content_type":null,"content_length":"23489","record_id":"<urn:uuid:21037fb8-076a-4ffd-8113-ff7542d4af01>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Error message-Cov matrix could not be inverted Susan Seibold-Simpson posted on Monday, September 10, 2007 - 2:18 pm I recently attended the Baltimore conference on MLM and am attempting SEM with the Add Health Data set. I am in the preliminary stages and have not added the multi-levels as of yet. I have decreased the background variables I am controlling for and have only included the variables of primary interest. I have TWO categorical DVs. I am getting the following error message: I reviewed the discussions on error messages but each time I found this message it seemed the response was to send the data to Linda. Thank you for your help. Sue Linda K. Muthen posted on Tuesday, September 18, 2007 - 5:04 am I think you need to send your information to support. Susan Seibold-Simpson posted on Tuesday, September 18, 2007 - 12:15 pm I made an initial error but didn't know how to retract my post. I'm sure I'll still end up contacting support, but not yet. Thanks Qilong Yuan posted on Thursday, August 12, 2010 - 2:54 pm Hi Linda, I have the same problem. I am running a CFA and I have 6 factors in my model and had to reduce the integration points to 4 since my computer does not have enough memory. I am using a MLR estimator and use the results from WLSMV as starting values. My understanding is fixing factor loadings does not help, so I fixed the factor correlations but the model stopped at iteration 1. Is there any other parameters that I can change (I mean change the default)? I just need the fit indices (like AIC BIC). Thank you very much!!! Linda K. Muthen posted on Thursday, August 12, 2010 - 3:43 pm WLSMV gives probit coefficients for factor loadings. The default for maximum likelihood is logistic regression so your starting values are not going to be in the right ballpark. You can use LINK= PROBIT with maximum likelihood to get probit coefficients. You should free all factor loadings and fix the factor variances to one. It may be that the first factor loading that is fixed at one as the default is not close to one and this is causing problems. Otherwise, send the full output and your license number to support@statmodel.com. Qilong Yuan posted on Thursday, August 19, 2010 - 12:24 pm Hi Linda, Thank you very much for your reply. I have four follow up questions: 1. For some models, if I fix all factor correlations the model actually converges in the end. However, if I only specify the starting values then the model won't converge. Why fixing factor correlations helps? 2. Is there a way that I can get the fit indices (AIC, BIC etc) without using numerical integration? 3. Will it help if I am able to use more integration points? Right now I am using 4. 4. I got different fit indices using logit and probit link function. Why is that? Thank you very much!!! Linda K. Muthen posted on Thursday, August 19, 2010 - 4:11 pm 1. If it works when all factor indicators are free and factor correlations are fixed at one, this means that the first factor indicator is most likely not estimated close to one which is the value it is fixed at as the default. This suggests choosing another factor indicator to fix at one. 2. You get these whenever you use maximum likelihood estimation. It is not necessary to have numerical integration. 3. It may help. Try it and see. 4. The models are different. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=11&page=2544","timestamp":"2014-04-19T17:07:32Z","content_type":null,"content_length":"25542","record_id":"<urn:uuid:31288d0d-977b-43a8-b4dd-524936fe1a33>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Killer - WINNER - mr. CD Killer - WINNER - mr. CD 'Killer' is a 1 v 1 tournament, looking for a minimum of 20 players, may increase if demand is there. This could potentially be a long tournament so dedicated CC players only, minimum requirement of 250 completed games and 98% or above attendance record. No exceptions. All players start with 5 points. When a player wins 3 games, they become a 'Killer' This means that they can start taking points off their opponents. That does mean of course that someone could come up against a 'Killer' as early as the fourth round of games, it's all in the luck of the draw if it happens to you. A successful defence against a 'Killer' by a non-'Killer' will gain the non-'Killer' a point but you can't move above the original 5. Only losing to a 'Killer' leads to you losing a point. As the tournament moves on, all players can potentially win 3 games and become a 'Killer' themselves, the point for a defence rule will be dropped at this stage. Once you are a 'Killer', you will stay a 'Killer' First round of games will be played on the 'Classic' map and will then work their way down through the CC maps, second round of games on 13 Colonies, third round on 8 Thoughts and so on. All settings for games: Auto, seq, flat, unlimited, foggy. When your points drop to 0, you are out. Last player standing wins. If at any stage there is an odd number of players, the lowest scoring player will sit out that round of games. Last edited by paulgamelb on Fri Oct 21, 2011 1:10 am, edited 6 times in total. Re: 'Killer' Last edited by paulgamelb on Sun Oct 02, 2011 4:23 am, edited 28 times in total. Re: 'Killer' (1/20) in bro Re: 'Killer' (1/20) In please Re: 'Killer' (1/20) in, i want to make some kills Re: 'Killer' (1/20) Nice idea! but won't be playing Last edited by benga on Sun Dec 26, 2010 7:20 pm, edited 1 time in total. Re: 'Killer' (1/20) im in. 16:00:18 ‹Pixar› Valentines Day the one day in they year that the V and the D come together Re: 'Killer' (1/20) In please/ Re: 'Killer' (1/20) in please Re: 'Killer' (1/20) In plz Re: 'Killer' (1/20) I will play. Re: 'Killer' (1/20) in please Re: 'Killer' (1/20) I'll play Killer Question paulgamelb wrote:A successful defence against a 'Killer' by a non-'Killer' will gain the non-'Killer' a point but you can't move above the original 5. Only losing to a 'Killer' leads to you losing a point. So, what happens to a 'killer' who loses to a 'non-killer'? Do they lose the point that the 'non-killer' gained? Oh, I'm in.
{"url":"http://www.conquerclub.com/forum/viewtopic.php?t=133646","timestamp":"2014-04-18T07:16:38Z","content_type":null,"content_length":"162042","record_id":"<urn:uuid:a5c54312-d36f-457c-8cc9-a7bc2117759b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about polynomials with coefficients in Z up vote 15 down vote favorite Let $f = a_0 + a_1 x + \ldots + a_n x^n$ ($f \ne 0$), where $a_i \in \{-1, 0, 1\}$. Let $p(f)$ be the largest number such that $f(x)$ is divisible by $y$ for any integer $x$ and for any $1 \leq y \ leq p(f)$. Let $g(n)=max_f\; p(f)$. Is it true that $g(n) = o(n)$? What is the best upper or lower bound on $g(n)$ can be derived? For my application it would be great to prove that $g(n) = o(n)$ in order to obtain something non-trivial, or $g(n) = o(n^{2/5})$ in order to improve the best known result. Do you think it is real? UPD It is an obvious consequence of Bertrand's postulate and Schwartz–Zippel lemma that $g(n) \leq 2n$. Using bruteforce I've got the following values: $g(10) = 7$, $f = x^{10} + x^8 - x^4 - x^2$. $g(15) = 10$, $f = x^{15} + x^{13} + x^{12} + x^{11} + x^{10} - x^7 - x^6 - x^5 - x^4 - x^3$. $g(17) = 10$, $f = x^{16} + x^{15} + x^{14} + x^{13} + x^{12} + x^{11} - x^8 - x^7 - x^6 - x^5 - x^4 - x^3$. Here's an idea, but I'd have to open a book in analytic number theory to push it further. Consider a polynomial f of degree (at most) n such that p(f)=g(n). You didn't say it, but surely meant to 1 imply that we're supposed to be assuming f is non-zero. So this f is non-zero. Now consider f(2). This number cannot vanish, because the leading term beats all the other terms put together. However f(2) is clearly less than 2^{n+1}. This already gives a non-trivial bound for g(n), because it shows that 2^{n+1} is an upper bound for the "product of all prime powers <= g(n)". What does that tell us? – Kevin Buzzard Jan 21 '10 at 14:20 OK so I opened the analytic number theory book, and this argument gives (if I got it right) that g(n)=O(n.log(2)). So not as strong as you want. – Kevin Buzzard Jan 21 '10 at 14:28 Fp[x] has unique prime factorization, so if a polynomial is 0 everywhere mod p, then its reduction mod p must be a multiple of x(x-1)(x-2)...(x-p+1) = x^p-x which has degree p. That means results on the density of primes tell you g(n) can't be much more than n. – Douglas Zare Jan 21 '10 at 14:56 "g(n) can't be much more than n". Just to point out that log(2)<1 so my argument also gives this. – Kevin Buzzard Jan 21 '10 at 16:30 Here's what may be a (completely non-rigorous) reason you might NOT expect o(n^{2/5}) to hold: For p(f) to be at least k, it suffices for p to satisfy the k^2 equations corresponding to y|f(x) for 5 each y and x between 0 and k. Assuming everything you could possibly want to be true actually is true (the probability each equation is satisfied by a random polynomial is at least 1/k, and satisfying the equations are all independent), then a random p would work with probability at least k^{-k^2}. But there are about 3^n polynomials, so for k much smaller than n^{-1/2} we'd expect a solution. – Kevin P. Costello Jan 21 '10 at 22:11 show 4 more comments 2 Answers active oldest votes I'll prove the upper bound $g(n) = O(n^{1/2+o(1)})$, which is essentially best possible if Kevin Costello's heuristics are correct. Suppose that $q$ is a prime with $q > n^{1/2}+1$. Reducing $f(x)$ modulo $x^{q-1}-1$ in $\mathbb{Z}[x]$ amounts to reducing the exponents modulo $q-1$, so the result is a polynomial $h(x)$ of degree less than $q-1$ whose coefficients are at most $n/(q-1) + 1$ (which is less than $q$) in absolute value. On the other hand, if $q \le p(f)$, then $f(x) \bmod q$ is divisible by $x^q-x$ and hence also by $x^{q-1}-1$, so $h(x) \bmod q$ must be $0$; this is possible only if $h(x)=0$ in $\mathbb{Z}[x]$, which implies that $f(x)$ vanishes at the $\phi(q-1)$ primitive up vote 20 $(q-1)$-th roots of unity. Since $f(x)$ has at most $n$ complex zeros, we get $\sum_{n^{1/2}+1 < q \le p(f)} \phi(q-1) \le n$ (the sum ranges over primes $q$ in the interval). By the prime down vote number theorem and Theorem 327 in Hardy and Wright (which states that $\phi(m)/m^{1-\delta} \to \infty$ for any $\delta>0$), this is a contradiction for sufficiently large $n$ if $p(f)>n^ accepted {1/2+\epsilon}$ for a fixed $\epsilon>0$. EDIT: The more precise bound $\phi(m) \ge (e^{-\gamma} - o(1)) m/\log \log m$ given by Theorem 328 in Hardy and Wright leads to $g(n) \le (e^{\gamma}/2 + o(1)) n^{1/2} \log n \log \log n$. I need time to undrestand it. Actually, number theory is a topic of next year in my university. :) – ilyaraz Jan 22 '10 at 8:07 What is a prime number theorem? – ilyaraz Jan 22 '10 at 8:11 As far as I understand in your sum $q$ is also prime. Is it right? – ilyaraz Jan 22 '10 at 8:31 The Prime Number Theorem tells you that the number of primes up to x is asymptotically x/log x. Yes, q is assumed to be prime. Poonen's very nice proof shows that primes q greater than sqrt(n) which divide all values of f tell you complex zeros of f, primitive (q-1)st roots of unity. Then there can't be too many primes, o(n^(1/2 + epsilon)), since more would mean there would be too many distinct complex zeros of f. The prime number theorem shows that asymptotically, there would be too many primes between p(f) and sqrt(n) if p(f) were much larger than sqrt(n). – Douglas Zare Jan 22 '10 at 12:57 Thank you for your explanations. – ilyaraz Jan 22 '10 at 21:47 add comment The point of this answer is to point out that Kevin Costello's heuristic can be made rigorous. For any positive $\epsilon$, if $y=O(n^{1/2-\epsilon})$ then such a polynomial exists for large $n$. Lemma: Let $G$ be a finite abelian group and let $g_1$, $g_2$, ..., $g_n$ be elements of $G$. If $2^n > |G|$ then there are integers $\epsilon_i \in \{ -1, 0, 1 \}$, not all zero, such that $\sum \epsilon_i g_i =0$. Proof: Consider the $2^n$ sums $\sum a_i g_i$ with $a_i \in \{ 0, 1 \}$. By the pigeonhole principle, two of these are equal. Subtracting them, we get the claimed relation. QED up vote 11 down vote Now, consider the abelian group $$G:=\bigoplus_{k=1}^y (\mathbb{Z}/k)^{\oplus k}.$$ Let $g_i$ be the element of $G$ whose $k$-th component is $(0^i, 1^i, 2^i, 3^i, \ldots, (k-1)^i)$, for $i=0$, $1$, ..., $n$. The order of $G$ is $\exp( \sum k \log k) = \exp( O(y^2 \log y))$. So, if $y=O(n^{1/2-\epsilon})$, then $2^{n+1} > |G|$ and the lemma tells us that there are $\ epsilon_i$ such that $\sum \epsilon_i g_i=0$. Then $\sum \epsilon_i x^i$ is the required polynomial. There is a lot of slack in this argument, but Bjorn's argument shows that we can't improve the exponent of $n$ by tightening it. As David remarked, his argument can be refined. Here is one such refinement: Let G be (Z/LZ)^y, where L=lcm(1,...,y), and let g_i=(0^i,...,(y-1)^i). Since log L = (1+o(1)) y, this leads to the lower bound g'(n) >= ((log 2)^{1/2} - o(1)) n^{1/2}, which is pretty close to my upper bound. – Bjorn Poonen Jan 24 '10 at 1:33 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/12530/question-about-polynomials-with-coefficients-in-z?sort=oldest","timestamp":"2014-04-16T04:57:37Z","content_type":null,"content_length":"70001","record_id":"<urn:uuid:2888583f-b46d-4565-9560-0a7f8a5506b3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Deriving the Kinematic Equations The Kinematic equations are the bread and butter equations of Newtonian projectile motion, and are quite useful for games. There are four of them, but you just need to remember two very intuitive ones and you can derive the other two from there. For one derivation, we need Calculus, for the other, it's straight Algebra. But don't be afraid, these are trivial! I'm mostly doing this to refresh my memory and it may help someone. First, some definitions: X &=& position,\ distance\ traveled \\ V &=& velocity \\ a &=& acceleration \\ t &=& time \\ ?_0 &=& initial\ value\ of\ whatever\ ?\ is \\ ?_f &=& final \ value\ of\ whatever\ ?\ is Pretty straightforward. So, the First Kinematic Equation ? Well, what is velocity anyway? If I know I'm accelerating forward at 10 m/s^2, and I travel for 2 seconds, how fast am I going at the end? 10*2 = 20m/s. If I was going 5 m/s at first, then I'm now going 20+5=25m/s. So velocity is a function of acceleration and time plus an initial velocity: [math]V_f = a*t + V_0[/math] Indeed, if we look at the units, they match up: [math]V_f (\frac{m}{s}) = a (\frac{m}{s^2}) * t (s) + V_0 (\frac{m}{s})[/math] The seconds in the time factor cancel one of the seconds in the acceleration factor, leaving a sum of m/s. Now let's integrate with respect to time! [math]\int{V_f dt} = \int{(a*t + V_0)dt} = \frac{1}{2}*a*t^2 + V_0*t + C[/math] Convince yourself that when we integrate velocity over time, we get back position, and therefore the C constant is just the initial position: [math]X_f = \frac{1}{2}*a*t^2 + V_0*t + X_0[/math] And that is the Second Kinematic Equation Third Kinematic Equation is another intuitive one. What else is position, if we don't have information about acceleration, but only velocity? If I'm traveling at 60 mph, and I travel for two hours, how far have I gone? 120 miles. (Plus an initial position, of course.) But in the real world, I can't just jump to 60 mph instantly, and there may be stop signs, slow downs, speed ups when no cops are around... But if I know speed is 60mph, then the above holds. Position is thus a function of average velocity and time, the third equation: [math]X_f = \frac{V_f + V_0}{2} * t + X_0[/math] And looking at units: [math]X_f (m) = \frac{V_f + V_0}{2} (\frac{m}{s}) * t (s) + X_0 (m)[/math] The seconds in time cancel out the seconds in velocity, giving a sum of meters. To get the final Kinematic equation, it's straight algebra from here. Let's take the third equation, solve for t, and plug it in to the first equation. Why not? If we're careful we can get an equation that will let us know our velocity from simply an acceleration and position alone, no need for time. Solve\ for\ t: \\ \frac{V_f + V_0}{2} * t &= X_f - X_0 \\ t &= \frac{2 * (X_f - X_0)}{V_f + V_0} \\ Now\ plug\ it\ in: \\ V_f &= a*t + V_0 \\ &= \frac{2*a*(X_f - X_0)}{V_f + V_0} + V_0 Multiply\ both\ sides\ by\ (V_f + V_0): \\ V_f^2 + V_f*V_i = (V_f + V_0) * (\frac{2*a*(X_f - X_0)}{V_f + V_0} + V_0) \\ \ \ \ = V_0^2 + \frac{V_0*2*a*(X_f - X_0)}{V_f + V_0} + V_f*V_0 + \frac{V_f*2*a*(X_f - X_0)}{V_f + V_0} \\ V_f^2 &= \frac{2*a*(X_f - X_0) * (V_f + V_0)}{V_f + V_0} + V_0^2 \\ &= 2*a*(X_f - X_0) + V_0^2 And that's the Fourth Kinematic Equation Simple, right? : There is also a second derivation for the fourth equation, following from conservation of energy. Recall that: [math]Kinetic\ Energy = \frac{1}{2} * m * V^2 [/math] [math]Potential\ Energy = m*g*h[/math] where m is mass, V is velocity, g is acceleration due to gravity, h is positional height. Recall the law for Conservation of Energy: [math]PE_0 + KE_0 = PE_f + KE_f[/math] So, by simple algebra: [math]m_0*g_0*h_0 + \frac{1}{2}*m_0*V_0^2 = m_f*g_f*h_f + \frac{1}{2}*m_f*V_f^2[/math] We assume that mass is constant (so we can divide it out) as well as the gravitational acceleration: g*h_0 + \frac{1}{2}*V_0^2 &= g*h_f + \frac{1}{2}*V_f^2 \\ Multiply\ both\ sides\ by\ two: \\ 2*g*h_0 + V_0^2 &= 2*g*h_f + V_f^2 \\ V_f^2 &= 2*g*h_0 - 2*g*h_f + V_0^2 \\ &= 2*g*(h_0 - h_f) + V_0^2 So this form is essentially the same as the above. Posted on 2010-11-15 by Jach Tags: math, physics Permalink: http://www.thejach.com/view/id/142 Trackback URL: http://www.thejach.com/view/2010/11/deriving_the_kinematic_equations Larry 15 September 2012 03:09:17 AM I don't understand where the denominator "2" comes from in the 3rd kinematic equation. It's probably something simple that I'm missing. Perhaps you can clarify this for me. Jach 15 September 2012 03:28:52 PM Hey Larry. The third equation says that the final position of an object is the average velocity multiplied by time, plus a starting position. Equivalently, the average velocity is equal to the total distance traveled divided by time. (m/s). Averages in general are calculated by adding up the relevant data points and dividing by the total number of data points that were added. Since we're only adding up the final velocity and the initial velocity, we divide by 2. That result is the average, it's another way of calculating it besides change in distance over change in time. There are some assumptions here that I glossed over when I wrote this post... Why pick the initial velocity and final velocity to use as sample points for the average, as opposed to some other velocities in the time interval of interest? It's arbitrary, unless I assume that acceleration is constant. That's an okay assumption because in the fourth equation, the acceleration variable "a" does not depend on time so it is constant. For the problems where kinematics are useful (like cannon shots and free-falling objects), it's also not a bad assumption. If I drop a ball, it starts at 0 and ends at say 10 m/s, the average speed was 5 m/s. (Note we have to adjust the reference frame for time to be the moment before collision, or else the average would be 0 m/s and that's not true.) In reality, if I wanted my true average velocity I would have to have an equation for velocity in terms of time and integrate over all the instantaneous velocities in a time interval, and divide by a factor of the final time minus the initial time. (And you can always get an equation for velocity if you have an equation for position or acceleration that depend on time.) An interesting thing to think about is how different speeds affect the average. Here's a fun problem: A car travels a distance of 60 miles at an average speed of 30 mph. How fast would the car have to travel the same 60 mile distance home to average 60 mph over the entire trip? The answer is it can't. In order to average 60 mph over the entire trip, the car would need to travel the 120 miles within 2 hours. (Distance over time.) Any longer and it is below average, any shorter and it is above average. Since it averaged 30 mph on the way there, that means it's already been traveling for 2 hours. Too late. The point is that periods of low speeds with destroy your overall average, and by the time you notice it may be too late to fix it. Larry Schlanger 16 September 2012 02:16:43 AM Thanks so much for your response. It was very helpful. I think it is fantastic you are willing to share your knowledge so graciously. I am a physician, far removed from math and physics training but I am trying to relearn some of it for the purpose of helping my son get through it but also because I just love the stuff. Any suggestions for a basic physics text? Thanks again. Jach 16 September 2012 08:51:04 AM No problem, glad I could help. For textbooks, the standard for high schools and undergraduates in college is still Physics for Scientists and Engineers, and there are camps of people who prefer the latest edition while other people prefer the 4th edition when Giancoli was the author. I don't have much preference for either in particular. Khan Academy has become very popular over the past year and a half. The free videos they have cover a broad range of topics pretty well and in a short time period (here is the mechanics section). One of the neatest features is their Knowledge Map for math, where anyone can practice the subjects and let someone else (a "Coach") have access to the results. That is great for teachers or tutors because then they know exactly what areas a student is struggling with. Walter Lewin's MIT class in 1999 is also good material, and he usually ends a class with some sort of experiment. I don't know any book of experiments, but doing the fun ones help cement the If you want The Best Physics Text Ever (in my opinion), check out Richard Feynman's Feynman's Lectures on Physics volumes 1, 2, and 3. (Though 1 is all you need if you just want the Newtonian mechanics.) It doesn't have problem sets to solve and it's not exactly basic (you need to be pretty comfortable with calculus), but it's still excellent. If you want a digital copy to check out either Feynman's Lectures or the Giancoli/non-Giancoli book I can email you a link--libraries should have the latter two but they don't always have Feynman's biggest work. If you or your son are ever interested in electricity and magnetism, all of the above resources cover the subject but I think Introduction to Electrodynamics by Griffith along with Div, Grad, Curl, and All That by Schey as an optional supplement are a better combination. Outside electrodynamics in general and into computer engineering, I'd recommend Digital Electronics: A Practical Approach by Kleitz because even though it's pretty basic it's also fun to follow along and build interesting circuits out of regular wires, chips, and a breadboard. Recent Comments Make sure to include tank.c when compilingby Anonymous on OOP in C with Function Pointers and Structs at 2014-04-18 19:16:18 look, i see people talking aboutby Alyssa on 40 + 40 x 0 + 1= ? at 2014-04-08 07:27:13 Thanks! I don't know how I could I found thisby yogsototh on Fixing a trivial Clojure Error... at 2014-02-10 09:52:24 As an update to this post, Seattle no longerby Jach on TSA Patdowns at 2014-01-01 02:07:46 Well they just started it on Xmas. And I amby Neogreen on Furcadia is Dead at 2013-12-26 11:56:31
{"url":"http://www.thejach.com/view/2010/11/deriving_the_kinematic_equations","timestamp":"2014-04-19T17:01:59Z","content_type":null,"content_length":"34057","record_id":"<urn:uuid:2c5df155-b050-4d7c-97a0-b1d4331d323f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding fractions with unlike denominators Adding fractions with unlike denominators Adding Fractions with Unlike Denominators Adding fractions with unlike denominators Discussion and questions for this video why does the denominator have to be the same as the other denominator Good question. In mathematics, the concept of adding and subtracting fractions require the denominator to be common. The numerator does not need to be common but the denominator must be. If the denominator wasn't common how would we work out the question? E.g. 1/3 + 3/7 = ? To work out this question, we would have to find the LCM (Lowest Common Multiple) of the denominators; in this case, 3 and 7. Then when we find that it is 21, we can adjust the numerators to 7 and 9. With that done, we can add 3 and 7 to make 10 over the denominator, which is 21. This is why the denominators have to be common. There was no equation or formula built in the concept of adding and subtracting fractions that do not require the denominators to be common. I hope this clears everything up. :) Why did he say to turn the nine in to a 36 and why multiply by 4 what.] nine times four is thirty six but why cant the nine just be nine. I'm confused ): and why do we have to multiply the numerator by 4 too.~_~ to add fractions, you need to have the denominator the same. these two fractions have denominators of 9 and 12. The lowest common multiple of these two numbers is 36. That requires multiplying by x4 and x3 respectively. We multiply the numerator and denominator by the same number so the number does not change. 4/9 * 4/4 = 16/36. 4/4 is the same as 1, so the number doesn't change If both denominators are prime numbers, is the only least common multiple between them always going to be the product of the two? Well, yes because since both are prime numbers, and prime numbers can't divide into each other's multiples, it will have to be the product. For example, 1/5 + 1/7, 7 can't divide into 5's multiples and 5 can't divide into 7's multiples, so the denominator has to be the product. By the way the answer is 7/35 + 5/35 = 12/35 Why do the denominators have to be the same? You do not have to make them the same but it makes it far easier to do. why does the denominater have to be greater than the top number? Excellent question my friend. If you had two fractions like 6/5 and 7/5 you would get 13/5. That is what you would call a improper fraction. In arithmetic you wouldn't leave it like that because it's considered a improper fraction. You would need to change it into a mixed number. When you get to algebra 1 they will allow you to leave it as a improper fraction as an answer to a problem. I don't get this it is hard can anybody help me? it is quite simple if you ask me all you have do is this Whenever you add fractions with unlike denominators, you must make the denominators of the same value. In this example, the easiest approach is to multiply 8 x 3 to get 24. Here, you multiply by 3 to get . Then you multiply the second equation by 8 to get . Note that both addends have 24 as a denominator. Add in the same manner as with the unit Adding fractions with like denominators. yes you can beacuse fractions are the answer of 1 divide by a number so yes they can i cant understand this, how did you get the x3 but the other x4? Because it is a fraction you can multiply it by anything - it will still be the same fraction of a whole number. (1/2 * 2 is 2/4 - but both are 0.5 in decimal terms) The objective is to get the denominators the same. What if you have a problem where one of the fractions has to be divided into a fraction, and the other number is whole? If one number is whole, you put it over a denominator of 1. If you have 24/8 - 2, write: 24/8 - 2/1 Hope this helps! Is multiplying a fraction as easy as adding and subtracting one? Multiplying is even easier because you don't have to find any common denominators or anything. You just multiply across: the numerator times the numerator and the denominator times the denominator. can both fractions be negativs I still don't understand. Please help me i have test next week. Well when you add a fraction like 4/6 and 9/10 you have to get a common denominator. To do that multiply the denominator 6x10= 60 now the fraction would look like _/60 all we need to do is fine the numerator 6x what = 60 its 10 then we do 10x4 to get the numerator 40/60 do the same thing to the other one now we got 54/60 + 40/60 ok add the numarator 54+40=94 so now we got 94/60 we need t turn it into a mixed number. How much times can 94 go into 60 its 1 time so now we have 1 and 34/60 and thats not your answer yet you have to reduce. So divided by 2 = 17 then 60 divided by 2 is 30 so 17/ When numerator is greater than the denominator.. it is considered as improper fraction? Yes it is a improper fraction so you need to turn it into a mixed number. what if the denominators of two fractions don't have a least common multiple you work with them in the way as they are - they are already simplified ;) we are doing this in math and i kinda understand it but its hard also Agreed, I still think fractions is a pain in the neck, but just work hard and you'll understand it. i think you can also just multiply the denominaters by each other........right? yes, you can just multiply the denominators by each other, but when you use larger numbers, it becomes tedious to multiply all of those giant numbers. It is much simpler to work with smaller numbers, and you are less likely to make a mistake that way. Multiplying the two numbers together is sometimes the only way to get a common denominator. This website has helped me a lot in my school work. Although i am a little confuzzled on how he got his answer, like how did he turn it to a mixed #? Take a improper fraction like 4/3. Divide the denominator by the numerator. That's 1. How much is left? 1/3, so the mixed number is 1 1/3. This problem doesn't need simplifying, but remember to in other problems if they need it. Hope this helps. why does the denominator have to be the same? The denominator has to be the same because then it makes it easier to add, subtract, multiply, or divide the fraction. I do not get the mixed fractions, is there an easier way What don't you understand about them? What if both numbers don't have a greatest common factor? Hi William. Two numbers always have a greatest common factor. Sometimes, it is 1, but often it is larger. do you have fractions with multiplication Yes they do have fractions with multiplication khan academy. what is he trying to say? it is confusing. it is, why doesn't it have audio what if there was an x in the denominator like in the problem 1/x + 4/1 That is a more advanced type of problem, but you can still find a common denominator. you could multiply 4/1 by x/x and get 4x/x then add the numerators 1 + 4x, which can't be simplified further. So the new form is the (1 + 4x)/x also written as 1 + 4x This isn't really any simpler than the previous expression but they are equal. What if the Denominator was different from the other Denominator? That's what were talking about here. "Unlike Denominators" are fractions with- literally- Unlike denominators. Try the Skill "Adding Fractions" when you're done! This is too fast can you slow down? If it goes too fast you can always pause the video or rewatch it. Hope this helps! What I don't get is 36/49= 0.7... I have to make it 1? And if it was 0.5? I need help with dividing fractions how do I do that Ex: 1/2 divided by 3/4 You will flip 3/4 and the dividing sign will be multiplying sign, so it will be 1/2 multiplied by 4/3 so the answer will be 2/3. Can u multiply it with a bigger denominater ? Sure, you can use any denominator you want, as long as they are common. Then reduce at the end for the last step. Tip: See the recognizing divisibility video. It is very helpful. i dont get it why does the denominator have to be the same as the other denominator? (: why do u need to do a common denominator u are soo smart.. just what i need to no except can you do a vidio were you can like turn the inproper fraction into a proper one?? if that makes sence hey jcnz2011 u boy or girl you cant answer that it dosent make sense fractions only Make a video that tells me how to simplify a fraction that is NOT an improper fraction! why do you have to have the sam e denominator the denominator of a fraction is its place value and the numerator is the digit . when we add we must add digits with the same place value. would is easier for you guys to complete 4/5 + 6/7 or 5/6 +1/2? I don't understand. This just confuses me. Can anyone turn this video into something that a 13 yr. old can understand? I grasped this concept when I was 8. You'll get it, just keep trying. But what if you get a improper fraction do you have to turn it into a mixed number or can you just let it be. If your asked to "simplify" the answer, turn it into an improper fraction. Other than that, it depends on whether or not your teacher wants you to. what did he say I still don't get it it is conufseing me i don't now why i still dont get nothing by the way dont put my question at the bottom So how would you do a question like 1/4 - 3/12 ? Find the least common factor of the denominators (twelve here), find out what to multiply the numerator AND denominator with to achieve a denominator of twelve, i.e. three for the first fraction and one for the second. Then do the subtraction. Easy Peasy! Okay so this is today's homework is about that and I haven't really get what I had to do so look at this problem 1/3 + 5/9 they both can't reduce! or unless you can do the factor tree on 1 and 3 okay never mind thank you for doing this video I really didn't understand Adding Fractions with Unlike Denominators. and plus is their ever gonna be a problem with dividing? Like with unlike denominators? To add fractions, find the least common denominator (LCD). Here, it would be 9. 1) You can convert 1/3 to 3/9. So you end up with 1/3 + 5/9 = 3/9 + 5/9. 2) Add the numerators. (3+5)/9 = 8/9. ok this will be a question but does anyone have mrs sullivan for math at mckinley elementary ok if so what is tha simplest form of 2/3-1/9? Why are both the denominators the same? If you want to add or subtract a fraction, you have to make both side have the same denominator. Why would you not simplify this answer, 22/21 to 1 1/21? The problem was 5/7 + 1/3. In my homework book it said the answer was simply 22/21. I'm puzzled...someone please explain! How do you simplify it if it is not an improper fraction? the whole thing was confusing Is there any other faster ways to do the problems instead of just wasting so much time on only one problem when i could be on the next one in just a minute or less? Get the LCD(least common denominator) of the denominator then add. He explains things that is why the video is long..:) Why do we have to find a common multiple? Why cant we just the numerators and the denominators together, like we multiply the numerators and the denominators when we are multiplying fractions. I'm confused. Help! tell me the answer to this 56/93 + 43/94 This section inst for doing your home work. is there any videos on subtracting fractions with different denominators vote-if u agree that the person who made this website is trying to turn the unlike denominator into a common denominator to make it more easier Can u write a fraction like this -5/9? I thought you have to divide 36 to 4/9 and to 11/12 ??? this video has helped me like sooooooooo much. THANKS SAL! can you show an ex. of m-5 over m squared+9m+20 +4m-1 over m squared +7m+10 ? How do you add fractions with unlike fractions can anyone give me some steps where are the problem exercises? why do you make it so complicated? I don't get why we have to multiply can you please teach me calculus ? That's in the calculus section. When I search for hint. It does not seems well And i Gotta do my math right now. BYE EVERYONE BYE , when you subtract the fraction do you have to slimply it ? can somebody answer it in a different answer how did you know to multiply in the begging of the video i am in 4th grade and i loved the way u teach in a simplified manner. where can i find problems to solve adding fractions with unlike denominators. i don't understand this formula its a little bit tricky.can some one help me. Why do we have covert mixed numbers to improper fractions? why do you have to multiply by the same number You have to multiply the same number for the denominator and numerator because whatever you do to the bottom, you do to the top. If you don't do this your fraction would not be equivalent so you would get a different answer because you changed the value of the fraction completely. Hope this helped. :) why can't you add them wit the denomenator that the # come with If you mean by counting the multiples of 9 to find one that is divisible by 12, you can check every single one (but it's not very efficient), or you can learn the multiples of 9 and 12 up to a certain point. If you mean the multiplying of the denominator as well as the numerator, my teacher always says: "what you do to the bottom you do to the top!" so you make an equivalent fraction. how would you multiply 5/20 * 8/5 Well you would cancel some of the numbers. the 5 and the 5 could cancel out, and that would leave you with 8/20. u are soo smart.. just what i need to no except can you do a vidio were you can like turn the inproper fraction into a proper one?? if that makes sence how do u add and subtracted fractions I do not understand this? i thought that you had to add the different denominators but you have to divide them so when your multiplying do you have to divide the different denominators and so when your dividing different denominators you have to multiply them When you find a common denominator are you allowed to divide instead of multiply 9x2=18 ,9x3=27and blah blah blah ? Why does it have to be unlike denominators? what should i do if my fraction is 9/11+6/10 This looks hard can some one help? i figured it out! it's easy Add My Bro as Couch: ¸„ø¤º°¨°º¤ø„¸¸„ø¤º°¨ is there any way to do it without changing the denominators? Why do the denominators have to be the same? how about if we have like 36, and we have to find a GCF, and we have to do 2. what can save me from doin all that work? i dont know but good question. When subtracting mixed fractions, what if the second fraction is larger than the first? Like 7 3/5 - 5 4/5 = ? I don't understand. Why do you have to multiply the numerator? You want the new fraction to be equivalent to the original one so that the sum of the two fractions doesn't change. To do that, you need to multiply the numerator and denominator of the fraction by the same number. why is this so hard i kinda understand but not really who agrees Is it possible for there to not be a common denominator ? I don't really understand how to simplify. It is confusing to me somehow and everytime I get a question that asks me to simplify I can't do it, I don't understand it. Can somebody please help me figure it out? I was going through the adding mixed fractions with unlike denominators and simplifying them to their lowest term and I am so lost right now. I had to watch the videos like five times. I am just trying to figure it out. 10,020,396 over 300,285,932 plus 125 over 70,000,000. Seems hard. bet you I can fool ya. :) If the denominator of both numbers are prime then why can't 1 be divisible by that prime number? can anyone give an example of what he means You have to fined a number that goes in to the denominators evenly.... i think Not sure! Hope that helps what is common denominator in 6/7 + 8/21 How did you get the x 4 and x 3? How would you do it it you had variables in different spots of the equation still adding and subtracting fractions with unlike and like denominators? Why is this so difficult? If you work harder, it will be easier. Why does the numorator have to be on the top and the denominator have to be on the bottom? Hi, I was wondering if you were going make 49/36 into a proper fraction like he did... it there any other possible way of doing that? Or is that the only way to covert the improper fraction into a proper fraction? Just wondering! Thanks! Why Does This Video Have To Be So Short? I Don't Understand! why do we need to know this? how can this help us in the future? Do you understand English? Click here to see more discussion happening on Khan Academy's English site. Report a mistake in the video At 2:33, Sal said "single bonds" but meant "covalent bonds." Discuss the site For general discussions about Khan Academy, visit our Reddit discussion page. Flag inappropriate posts Here are posts to avoid making. If you do encounter them, flag them for attention from our Guardians. • disrespectful or offensive • an advertisement not helpful • low quality • not about the video topic • soliciting votes or seeking badges • a homework question • a duplicate answer • repeatedly making the same post wrong category • a tip or thanks in Questions • a question in Tips & Thanks • an answer that should be its own question about the site • a question about Khan Academy • a post about badges • a technical problem with the site • a request for videos or features
{"url":"https://khan-academy.appspot.com/math/arithmetic/fractions/fractions-unlike-denom/v/adding-fractions-with-unlike-denominators?_escaped_fragment_=","timestamp":"2014-04-20T08:30:20Z","content_type":null,"content_length":"991729","record_id":"<urn:uuid:b6c3248a-0239-45f7-aa39-e13b8d2e5731>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Conic Sections/Rotation of Axes A conic is a second degree polynomial. When it is expanded, you get an equation of the form: $Ax^2 + Bxy +Cy^2 + Dx + Ey + F = 0$. If the value of $B$ is zero then the conic is not roated and sits on the x- and y- axis. If $B$ is non-zero, then the conic is rotated about the axes, with the rotation centred on the origin. Graphing a Rotated ConicEdit If you are asked to graph a rotated conic in the form $Ax^2 + Bxy +Cy^2 + Dx + Ey + F = 0$, it is first necessary to transform it to an equation for an identical, non-rotated conic. This is then plotted onto new axes which are drawn onto the graph. The equation for the nonrotated conic can be found by: $AX^2 + BXY +CY^2 + DX + EY + F = 0$. Note how capital letters are used for the pronumerals. This signifies that they reperesent different values to the original equaiton. To determine the values of X and Y, you use the formulae: • $tan 2\theta = \frac{B}{A-C}$ • $x = X cos \theta - Y sin \theta$ • $y = X sin \theta + Y cos \theta$ By substituting the new values of $x$ and $y$ into the original equation, a new one can be obtained which represents a non-rotated conic which can be plotted on a set of axes rotated at $\theta$ (anti-clockwise) to the original x- and y- axes. When you do this, however, it will still be necessary to determine the new rotated location of points such as the vertex, foci and directrixes. This can be done using the following formulae: • $X = x cos \theta + y sin \theta$ • $Y = -x sin \theta + y cos \theta$ Where (X,Y) are the new rotated coordinates of the orginal point (x,y). The formula: $B^2 -4AC$ can be used to determine the type of conic from the original equation before you start graphing: • $B^2 -4AC = 0$: Parabola • $B^2 -4AC < 0$: Ellipse • $B^2 -4AC > 0$: Hyperbola Rotating a ConicEdit If you wish to rotate a conic by a certain angle, $\theta$, it is relatively simple. All you do is make the following substitution from the previous section: • $x = X cos \theta - Y sin \theta$ • $y = X sin \theta + Y cos \theta$ Replacing the $x$ and $y$ values from the function with these new ones. Then simplify the answer, and it will be a function for the same conic rotated by $\theta$ counter-clockwise about the origin. Last modified on 25 February 2011, at 00:45
{"url":"http://en.m.wikibooks.org/wiki/Conic_Sections/Rotation_of_Axes","timestamp":"2014-04-20T03:14:10Z","content_type":null,"content_length":"19194","record_id":"<urn:uuid:db72e52a-8a88-4bc2-98e6-b1a319760b35>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Nutting Lake Math Tutor Find a Nutting Lake Math Tutor ...I teach students that SAT Math is not like classroom math. While classroom math rewards students for doing problems the "right" way, for SAT Math, it's not about how you get there, just that you get the right answer. While we review the important concepts for the test, I teach students more about how to tackle problems they don't know by using alternate strategies. 26 Subjects: including ACT Math, probability, linear algebra, algebra 1 ...Students doing theses/dissertations have found my guidance especially useful. My specialty is in biostatistics, but the foundation of statistics is the same in all fields. Arguably, the foundation of statistics is the most important to understand. 18 Subjects: including prealgebra, trigonometry, SPSS, English ...If you have any questions or need further references, please feel free to contact me. Gracias!!! "Happy Learning, while Building a Bridge to Success!"I taught Spanish classes at a Montessori Charter Public School for 3 years. Each classroom usually had 25 students from K-8. 26 Subjects: including algebra 1, prealgebra, Spanish, ESL/ESOL ...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences. 11 Subjects: including algebra 1, algebra 2, Microsoft Excel, general computer ...My goal in tutoring students is to alleviate the anxiety that often arises when facing problems by supplying them with a step by step approach to arriving at the answer. In my one-on-one tutoring sessions, I hope to inspire a joy for learning, if not chemistry and math! In addition to my passion for chemistry, I enjoy playing soccer. 10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
{"url":"http://www.purplemath.com/nutting_lake_math_tutors.php","timestamp":"2014-04-18T19:18:19Z","content_type":null,"content_length":"24123","record_id":"<urn:uuid:2c732160-fd9d-4bbc-a070-f73d4d894592>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2010 [00331] [Date Index] [Thread Index] [Author Index] Re: Replacement Rule with Sqrt in denominator • To: mathgroup at smc.vnet.net • Subject: [mg114639] Re: Replacement Rule with Sqrt in denominator • From: David Bevan <david.bevan at pb.com> • Date: Sat, 11 Dec 2010 01:54:18 -0500 (EST) It seems to me that the reason rules seem counterintuitive to some na=EFve users is due to the difference between FullForm and StandardForm. Obviously rules can't be based on some nebulous concept of mathematical understanding and must be applied syntactically to a specific form of expressions; Mathematica implements rules against FullForm. If it were possible to represent StandardForm as a 'tree structure' (rather than just as a visible display), then rules could be applied against that form. For example, we might have: FullForm[x/Sqrt[5] + 1/z + I] Plus[Times[Power[5, Rational[-1, 2]], x], Power[z, -1], Complex[0, 1]] FullStandardForm[x/Sqrt[5] + 1/z + I] Plus[Divide[x, Sqrt[5]], Divide[1, z], I] Of course, this does not provide any more mathematical intuition than apply ing rules to FullForm, but the ability to apply rules to FullStandardForm would perhaps sometimes be a bit closer to what some users expect / want. Maybe this is something Wolfram might consider. David %^> -----Original Message----- From: AES [mailto:siegman at stanford.edu] Sent: 10 December 2010 07:30 To: mathgroup at smc.vnet.net Subject: [mg114639] [mg114611] Re: Replacement Rule with Sqrt in denominator In article <idnqq6$q5i$1 at smc.vnet.net>, Noqsi <noqsiaerospace at gmail.com> wrote: > It is easy to see the kind of chaos the vague and ambiguous "rules > should be interpreted semantically in a way that makes mathematical > sense" would cause. How should > a + b I /. I->-I > be interpreted *semantically*? I do not possess anything like the depth of knowledge of symbolic algebra or the understanding of the principles of semantics that would embolden me to offer any answer to the preceding question. But I will offer the following opinion: However the above rule is to be interpreted, in any decent symbolic algebra system, assuming a and b have not yet been assigned any values, the symbol I should be interpreted (i.e., modified) identically -- i.e., in *exactly* the same fashion -- for either of the inputs a + b I /. I->-I OR a + 2 b I /. I->-I This is NOT the case in Mathematica. This behavior is a "gotcha" that can be responsible for large and hard to trace difficulties for many Furthermore, I believe that Mathematica WILL interpret (i.e. , modify) the two inputs above in exactly the same fashion if the character I in thee two expressions is replaced by ANY OTHER single upper or lower case letter in the alphabet. Does anyone else find this not to be true?
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00331.html","timestamp":"2014-04-18T20:47:25Z","content_type":null,"content_length":"27634","record_id":"<urn:uuid:f9368759-1e8c-492a-95bf-6c7add4c8d26>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
A Certain Integral Closure March 30th 2013, 10:49 AM A Certain Integral Closure I have a bunch of assertions without motivation which I'm trying to sort out. Let H be a subgroup of S_n, [; A = k[x_1,...,x_n] ;], and [; \sigma_i ;] the elementary symmetric polynomials. The assertions are: i) A_H is defined as the integral closure of [; k[\sigma_1,...,\sigma_n] ;] in [; k(x_1,...,x_n)^H ;], and [; A_H = k[x_1,...,x_n] \cap k(x_1,...,x_n)^H ;]. ii) [; k(x_1,...,x_n)^H ;] is the field of fractions of A_H, "i.e." [; k(x_1,...,x_n)^H = A_H[1/k[\sigma_1,...,\sigma_n]] ;]. Alright, so about the first assertion, that A_H is the intersection. This intersection contains only polynomials, i.e. we must have [; k[x_1,...,x_n] \cap k(x_1,...,x_n)^H \subset k[x_1,...,x_n] ;], so we can simply consider the intersection [; k[x_1,...,x_n] \cap k[x_1,...,x_n]^H ;], which must be [; k[x_1,...,x_n]^H ;]. I've no idea how to show or see that [; k[x_1,...,x_n]^H ;] is the supposed integral closure, but let's leave it at that for the moment. Second assertion. What the notation [; A_H[1/k[\sigma_1,...,\sigma_n]] ;] means I have no idea; my guess is that it's the set [; \{ f/g \mid f \in A_H, g \in k[\sigma_1,...,\sigma_n] \} ;]. If this is the case, how do I verify it? Because I would have thought that Frac A_H = k(x_1,...,x_n)^H? March 30th 2013, 10:56 AM Re: A Certain Integral Closure I have a bunch of assertions without motivation which I'm trying to sort out. Let H be a subgroup of S_n, $A = k[x_1,...,x_n]$, and $\sigma_i$ the elementary symmetric polynomials. The assertions are: i) A_H is defined as the integral closure of $k[\sigma_1,...,\sigma_n]$ in $k(x_1,...,x_n)^H$, and $A_H = k[x_1,...,x_n] \cap k(x_1,...,x_n)^H$. ii) $k(x_1,...,x_n)^H$ is the field of fractions of A_H, "i.e." $k(x_1,...,x_n)^H = A_H[1/k[\sigma_1,...,\sigma_n]]$. Alright, so about the first assertion, that A_H is the intersection. This intersection contains only polynomials, i.e. we must have $k[x_1,...,x_n] \cap k(x_1,...,x_n)^H \subset k[x_1,...,x_n]$, so we can simply consider the intersection $k[x_1,...,x_n] \cap k[x_1,...,x_n]^H$, which must be $k[x_1,...,x_n]^H$. I've no idea how to show or see that $k[x_1,...,x_n]^H$ is the supposed integral closure, but let's leave it at that for the moment. Second assertion. What the notation $A_H[1/k[\sigma_1,...,\sigma_n]]$ means I have no idea; my guess is that it's the set $\{ f/g \mid f \in A_H, g \in k[\sigma_1,...,\sigma_n] \}$. If this is the case, how do I verify it? Because I would have thought that Frac A_H = k(x_1,...,x_n)^H? I have replaced your [; and ;] with "tex" and "/tex".
{"url":"http://mathhelpforum.com/advanced-algebra/216027-certain-integral-closure-print.html","timestamp":"2014-04-20T18:57:40Z","content_type":null,"content_length":"8921","record_id":"<urn:uuid:f1a796c0-fe51-469c-ac6a-b9d008a5da6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Two equations with two unknowns: how to rearrange or solve the equations February 25th 2013, 11:38 AM #1 Feb 2013 New York Two equations with two unknowns: how to rearrange or solve the equations I have to equations: y = T*Cx*U and x = T*Cy*U where x and y are scalar values that are known, Cx and Cy are both 4x4 matrices that are known. T is a 1x4 vector : T=[1 t t^2 t^3] where t is unknown and U is a 4x1 vector : U=[1 u u^2 u^3]' where ' means transpose and u is unknown. So basically I have two equations and two unknowns, t and u. I would like to figure out a way to represent the equation so that the unknowns are a function of the knowns so that t = f(x,y,Cx,Cy) and u = f(x,y,Cx,Cy) or T = f(x,y,Cx,Cy) and U = f(x,y,Cx,Cy). I imagine that getting the final numerical solution might involve getting the roots of a cubic but I don't know how to get that far. I've tried to figure out a way to substitute one equation into the other to eliminate one of the unknowns (i.e., U or T) but I get stuck because I'm not sure what to do about dividing by a vector (i.e., finding the inverse of a vector?). Re: Two equations with two unknowns: how to rearrange or solve the equations can you post the matrix C ? and the values x,y? Re: Two equations with two unknowns: how to rearrange or solve the equations I can give you example numbers but they are floating point values that I'm pulling from a program I'm working on so I don't know if they would be helpful. In my program I can go from u,t to x,y but I would like to go the other way so I am trying to invert the equation. For this example, t = 0.3624 and u = 0.4950. t and u are always between 0 and 1. (I am trying to invert a bicubic interpolation fn. ) x = 1022.9; y = 495.9297; Cx = 930.8436 12.3171 36.8271 -18.4136 220.5397 -2.7589 -8.5661 4.2831 2.2148 0.3214 0.6427 -0.3214 -0.4175 -0.0662 -0.1324 0.0662 Cy = 400.3912 194.3285 -5.4094 2.7047 0.3092 1.2141 0.4406 -0.2203 -0.0395 -0.1551 -0.0583 0.0291 0.0066 0.0139 0.0013 -0.0006 February 25th 2013, 12:04 PM #2 February 25th 2013, 12:43 PM #3 Feb 2013 New York
{"url":"http://mathhelpforum.com/advanced-algebra/213804-two-equations-two-unknowns-how-rearrange-solve-equations.html","timestamp":"2014-04-17T19:35:04Z","content_type":null,"content_length":"36843","record_id":"<urn:uuid:9d610955-628b-4dfd-bc3c-e38d62159ff7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics at St. Olaf Practical - Popular - Visible - Active - Useful - Fun Mathematics is all of those things--and more--at St. Olaf, where the mathematics program is recognized nationally for innovative and effective teaching. Our program was cited as an example of a successful undergraduate mathematics program by the Mathematical Association of America (Models That Work, Case Studies in Effective undergraduate Mathematics Programs) and St. Olaf consistently ranks as a top producer of students who go on to complete Ph.D.'s in the mathematical sciences. [edit] Areas of emphasis The mathematics major does not have different tracks, but by designing an Individualized Mathematics Program (IMaP) with the help of a mathematics faculty member, students can complete their majors in a variety of ways. Here are some popular area of emphasis: • Pure Mathematics Students intending to earn higher degrees in theoretical mathematics should take a broad range of 200-level courses and as many 300-level courses as possible. At the 200-level, the "transition" courses Real Analysis I (Math 244) and Abstract Algebra I (Math 252) are a must. A variety of courses with different perspectives will provide excellent breadth of knowledge. Advanced courses in Real Analysis II (Math 344) and Abstract Algebra II (Math 352) are also a must. Courses in Topology (Math 348), Combinatorics (Math 364), and Complex Analysis (Math 340) are highly recommended. Students should be alert to special topics courses and independent study & research opportunities. More and more graduate programs expect their successful applicants to have had an undergraduate research experience. Students should strive to achieve good scores on the general and mathematics GRE exams. • Applied Mathematics Students intending to earn higher degrees in applied mathematics should take a broad range of 100- and 200-level courses in mathematics, statistics, computer science and other fields, and as many 300-level courses as possible. At the 200-level, mathematics courses such as Multivariable Calculus (Math 226), Differential Equations (Math 230), Real Analysis I (Math 244), Modern Computational Mathematics (Math 242), Probability (Math 262), and Operations Research (Math 266) teach material that is used in a wide variety of applications to the biological, physical, and social sciences. Advanced mathematics courses in Differential Equations II (Math 330), Complex Analysis (Math 340), Real Analysis II (Math 344), and Mathematics Practicum (Math 390) are highly recommended. Students should be alert to special topics courses and independent study & research opportunities. More and more graduate programs expect their successful applicants to have had an undergraduate research experience. Students should strive to achieve good scores on the general and mathematics GRE exams. • Secondary School Teaching Students planning to teach secondary school mathematics complete a standard mathematics major (with certain courses prescribed by state certification requirements). In addition, they take several courses in the Department of Education and devote part of one senior semester to student teaching. • General Mathematics Major Many mathematics majors do not enter graduate school, law school, business school, or medical school right away or even at all. For those students a broad and deep mathematics major can serve them well in a variety of settings: business, technology, the non-profit sector, consulting, actuarial work, etc. Search the alumni directory for mathematics majors and see the kind of professions Oles have entered. • Double Majoring Many students combine mathematics with another major or concentration. Doubling with majors in the sciences and economics is especially common, as is combining mathematics with a statistics concentration. We also graduate a fair number of students who major in religion, philosophy, art, English, theatre, etc. as well as mathematics. [edit] Students and Graduates • About 60 mathematics majors graduate each year (79 in 2010!) • 8-10% of St. Olaf graduates are mathematics majors • One third of St. Olaf Mathematics majors are women • 50+ students employed as tutors, clinic workers, or paper graders • Daily "Mathematics Clinic" for homework help. • 75% of St. Olaf students take a course in MSCS • 30% graduate school: 20% in mathematical science; 10% in other sciences • 15% professional programs (business, law, medicine, etc.) • 10% secondary school teaching • 35% business and industry • 10% other [edit] Employment • Cray Research • IBM • Peace Corps • Unisys • Travelers • CSC Consulting • Accenture • Thrivent Financial for Lutherans • Target Corporation • Northwest Airlines • Best Buy Corporation • Mayo Clinic • US Bank • General Mills [edit] Graduate schools in the mathematical sciences attended by alumni (a sample) • University of Minnesota • University of Wisconsin • University of Illinois • Clemson University • Iowa State University • University of Iowa • University of Nebraska • Northwestern University • University of Chicago • University of Colorado • University of North Carolina • Rice University • Brandeis University [edit] Doctorates received by alumni in various areas: • Mathematics • Statistics • Physics • Theology • Economics • Law • Medicine • Computer Science [edit] Resources [edit] Faculty The Department of Mathematics, Statistics, and Computer Science has more than 20 faculty members, most of whom teach full or part-time in mathematics. All hold doctorates in the mathematical sciences, and have expertise in areas including the following: • Algebra • Graph Theory • Artificial Intelligence • Logic • Combinatorics • Mathematical Physics • Real Analysis • Complex Analysis • Mathematics Education • Computer Science • Number Theory • Mathematical Biology • Operations Research • Differential Equations • Probability • Dynamical Systems • Statistics • Mathematical exposition • Symbolic Computation • Functional Analysis • Topology • Geometry [edit] Computer Facilities • The Advanced Mathematics Computing Lab is equipped with modern mathematics and statistics software. • All classrooms are equipped with computing resources for the professor, and several are fully equipped for students too. • The Computer Science program has separate facilities with extraordinary computing power, including a Beowulf cluster. [edit] Library Resources • One of the nation's largest undergraduate mathematics libraries, with more than 10,000 mathematics books and 80 journals, each with extensive back issues. • Access to several online services, including JSTOR • Easy access to the Carleton College libraries and others via Interlibrary Loan. [edit] Grant Support • Over the years St. Olaf has attracted considerable support for its leadership in mathematical sciences education. Recent grant activity includes: □ NSF S-STEM (Richey) □ NSF funded International Research Scholars (Humke and Hanson) [edit] Special Opportunities [edit] Colloquium Series Weekly presentations by mathematicians, statisticians, computer scientists, employers, alumni, and graduate school faculty on MSCS topics beyond the classroom. [edit] Mathematics Practicum During January, three teams of five students work for a month on real industrial problems and present their results to scientists and executives of the company that posed the problem. [edit] Recent Practicum topics include: • Time-Efficient Suturing During Cardiac Surgery • Estimation of Minimum Freight Car Needs • Optimal Positioning of Manufacturing Equipment • Load Factors for Airline Scheduling • Federal Fairness Test for Benefit Plans [edit] MAA Student Chapter This organization for mathematics students arranges social and mathematical activities. Past events include a Halloween pumpkin-carving party, a pig roast and the Math-Bowl. [edit] Mathematical Contests Students compete in annual contests on calculus and other undergraduate mathematics. Prizes, fanfare, and a bronze plaque serve to recognize the winners. An Opportunity for study abroad in one of the world's leading mathematical centers. St. Olaf has supplied the largest number of students enrolled in this program, which is open to all North American students of mathematics or computer science. This lively weekly publication of fact, opinion, news, jokes, and misinformation keeps students and faculty informed of happenings in the Mathematics Department. [edit] National Leadership Members of the St. Olaf mathematics faculty not only keep up in their field, but also help lead collegiate mathematics through active research and writing. The professional record of St. Olaf mathematics faculty includes service in many capacities: • More than ten books, including: Counterexamples in Topology, Problem Solving Through Problems, Mathematics Today, Calculus for a New Century, A Course in Modern Geometries, The Wohascum County Problems Book, Calculus from Graphical, Numerical, and Symbolic Points of View, and "Understanding Real Analysis" • Dozens of research papers in mathematics journals • Four national awards for expository writing • President of the Mathematical Association of America • Editors-in-chief, problems editors, notes editor, and book reviews editors of the American Mathematical Monthly, Mathematics Magazine, and the Real Analysis Exchange • Associate Director of the William Lowell Putnam Mathematical Competition • President of the Minnesota Council of Teachers of Mathematics (MCTM) • Chair of the North Central Section of the Mathematical Association of America • Chairs of the New Mathematical Library Editorial Committee and the MAA Committee on the Undergraduate Program in Mathematics (CUPM) • North American Director of the Budapest Semester in Mathematics • Members of numerous committees and councils of the Mathematical Association of America and the American Mathematical Society • Chair of the Council of Scientific Society Presidents (CSSP) and the Conference Board of the Mathematical Sciences (CBMS) and the Mathematical Sciences Education Board (MSEB) • Lectures, research, seminars, and teaching in many countries, including Thailand, France, Switzerland, China, Sweden, Hungary, Italy, Crete, Germany, Mexico, Japan, Australia, New Zealand, Russia, Poland, Austria, and Czechoslovakia
{"url":"http://devel.cs.stolaf.edu/depts/mscs/Math_Information","timestamp":"2014-04-19T13:04:31Z","content_type":null,"content_length":"32894","record_id":"<urn:uuid:62bc2bbc-7f3b-4d53-a98f-c63973dcb898>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help April 4th 2011, 09:49 AM #1 Nov 2010 if equations 2x-y+z=0 ,x-2y+z=0 and tx-y+2z=0 have non trivial solutions and f(x)be continuous functions such that f(5+x)+f(x)=2 then integrate f(x) dx (limit 0 to -2t) Here are some hints to get you started. First find the determinant of this matrix and set it equal to zero and solve for $t$ $\begin{vmatrix}2 & -1 & 1 \\ 1 & -2 & 1 \\ t & -1 & 2 \end{vmatrix}=0$ For the 2nd part use the linearlity of the integral to break it up into two parts $\displaystyle \int_{a}^{b}f(x)dx=\int_{a}^{c}f(x)dx+\int_{c}^{b} f(x)dx$ Then make a u-substitution. Good luck and post your workings if you get stuck. April 4th 2011, 10:42 AM #2
{"url":"http://mathhelpforum.com/calculus/176784-integration.html","timestamp":"2014-04-19T07:19:39Z","content_type":null,"content_length":"33524","record_id":"<urn:uuid:e620e7c8-5fbb-49a9-8106-6a870fe4c3ef>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding fractions with unlike denominators Adding fractions with unlike denominators Adding Fractions with Unlike Denominators Adding fractions with unlike denominators Discussion and questions for this video why does the denominator have to be the same as the other denominator Good question. In mathematics, the concept of adding and subtracting fractions require the denominator to be common. The numerator does not need to be common but the denominator must be. If the denominator wasn't common how would we work out the question? E.g. 1/3 + 3/7 = ? To work out this question, we would have to find the LCM (Lowest Common Multiple) of the denominators; in this case, 3 and 7. Then when we find that it is 21, we can adjust the numerators to 7 and 9. With that done, we can add 3 and 7 to make 10 over the denominator, which is 21. This is why the denominators have to be common. There was no equation or formula built in the concept of adding and subtracting fractions that do not require the denominators to be common. I hope this clears everything up. :) Why did he say to turn the nine in to a 36 and why multiply by 4 what.] nine times four is thirty six but why cant the nine just be nine. I'm confused ): and why do we have to multiply the numerator by 4 too.~_~ to add fractions, you need to have the denominator the same. these two fractions have denominators of 9 and 12. The lowest common multiple of these two numbers is 36. That requires multiplying by x4 and x3 respectively. We multiply the numerator and denominator by the same number so the number does not change. 4/9 * 4/4 = 16/36. 4/4 is the same as 1, so the number doesn't change If both denominators are prime numbers, is the only least common multiple between them always going to be the product of the two? Well, yes because since both are prime numbers, and prime numbers can't divide into each other's multiples, it will have to be the product. For example, 1/5 + 1/7, 7 can't divide into 5's multiples and 5 can't divide into 7's multiples, so the denominator has to be the product. By the way the answer is 7/35 + 5/35 = 12/35 Why do the denominators have to be the same? You do not have to make them the same but it makes it far easier to do. why does the denominater have to be greater than the top number? Excellent question my friend. If you had two fractions like 6/5 and 7/5 you would get 13/5. That is what you would call a improper fraction. In arithmetic you wouldn't leave it like that because it's considered a improper fraction. You would need to change it into a mixed number. When you get to algebra 1 they will allow you to leave it as a improper fraction as an answer to a problem. I don't get this it is hard can anybody help me? it is quite simple if you ask me all you have do is this Whenever you add fractions with unlike denominators, you must make the denominators of the same value. In this example, the easiest approach is to multiply 8 x 3 to get 24. Here, you multiply by 3 to get . Then you multiply the second equation by 8 to get . Note that both addends have 24 as a denominator. Add in the same manner as with the unit Adding fractions with like denominators. yes you can beacuse fractions are the answer of 1 divide by a number so yes they can i cant understand this, how did you get the x3 but the other x4? Because it is a fraction you can multiply it by anything - it will still be the same fraction of a whole number. (1/2 * 2 is 2/4 - but both are 0.5 in decimal terms) The objective is to get the denominators the same. What if you have a problem where one of the fractions has to be divided into a fraction, and the other number is whole? If one number is whole, you put it over a denominator of 1. If you have 24/8 - 2, write: 24/8 - 2/1 Hope this helps! Is multiplying a fraction as easy as adding and subtracting one? Multiplying is even easier because you don't have to find any common denominators or anything. You just multiply across: the numerator times the numerator and the denominator times the denominator. can both fractions be negativs I still don't understand. Please help me i have test next week. Well when you add a fraction like 4/6 and 9/10 you have to get a common denominator. To do that multiply the denominator 6x10= 60 now the fraction would look like _/60 all we need to do is fine the numerator 6x what = 60 its 10 then we do 10x4 to get the numerator 40/60 do the same thing to the other one now we got 54/60 + 40/60 ok add the numarator 54+40=94 so now we got 94/60 we need t turn it into a mixed number. How much times can 94 go into 60 its 1 time so now we have 1 and 34/60 and thats not your answer yet you have to reduce. So divided by 2 = 17 then 60 divided by 2 is 30 so 17/ When numerator is greater than the denominator.. it is considered as improper fraction? Yes it is a improper fraction so you need to turn it into a mixed number. what if the denominators of two fractions don't have a least common multiple you work with them in the way as they are - they are already simplified ;) we are doing this in math and i kinda understand it but its hard also Agreed, I still think fractions is a pain in the neck, but just work hard and you'll understand it. i think you can also just multiply the denominaters by each other........right? yes, you can just multiply the denominators by each other, but when you use larger numbers, it becomes tedious to multiply all of those giant numbers. It is much simpler to work with smaller numbers, and you are less likely to make a mistake that way. Multiplying the two numbers together is sometimes the only way to get a common denominator. This website has helped me a lot in my school work. Although i am a little confuzzled on how he got his answer, like how did he turn it to a mixed #? Take a improper fraction like 4/3. Divide the denominator by the numerator. That's 1. How much is left? 1/3, so the mixed number is 1 1/3. This problem doesn't need simplifying, but remember to in other problems if they need it. Hope this helps. why does the denominator have to be the same? The denominator has to be the same because then it makes it easier to add, subtract, multiply, or divide the fraction. I do not get the mixed fractions, is there an easier way What don't you understand about them? What if both numbers don't have a greatest common factor? Hi William. Two numbers always have a greatest common factor. Sometimes, it is 1, but often it is larger. do you have fractions with multiplication Yes they do have fractions with multiplication khan academy. what is he trying to say? it is confusing. it is, why doesn't it have audio what if there was an x in the denominator like in the problem 1/x + 4/1 That is a more advanced type of problem, but you can still find a common denominator. you could multiply 4/1 by x/x and get 4x/x then add the numerators 1 + 4x, which can't be simplified further. So the new form is the (1 + 4x)/x also written as 1 + 4x This isn't really any simpler than the previous expression but they are equal. What if the Denominator was different from the other Denominator? That's what were talking about here. "Unlike Denominators" are fractions with- literally- Unlike denominators. Try the Skill "Adding Fractions" when you're done! This is too fast can you slow down? If it goes too fast you can always pause the video or rewatch it. Hope this helps! What I don't get is 36/49= 0.7... I have to make it 1? And if it was 0.5? I need help with dividing fractions how do I do that Ex: 1/2 divided by 3/4 You will flip 3/4 and the dividing sign will be multiplying sign, so it will be 1/2 multiplied by 4/3 so the answer will be 2/3. Can u multiply it with a bigger denominater ? Sure, you can use any denominator you want, as long as they are common. Then reduce at the end for the last step. Tip: See the recognizing divisibility video. It is very helpful. i dont get it why does the denominator have to be the same as the other denominator? (: why do u need to do a common denominator u are soo smart.. just what i need to no except can you do a vidio were you can like turn the inproper fraction into a proper one?? if that makes sence hey jcnz2011 u boy or girl you cant answer that it dosent make sense fractions only Make a video that tells me how to simplify a fraction that is NOT an improper fraction! why do you have to have the sam e denominator the denominator of a fraction is its place value and the numerator is the digit . when we add we must add digits with the same place value. would is easier for you guys to complete 4/5 + 6/7 or 5/6 +1/2? I don't understand. This just confuses me. Can anyone turn this video into something that a 13 yr. old can understand? I grasped this concept when I was 8. You'll get it, just keep trying. But what if you get a improper fraction do you have to turn it into a mixed number or can you just let it be. If your asked to "simplify" the answer, turn it into an improper fraction. Other than that, it depends on whether or not your teacher wants you to. what did he say I still don't get it it is conufseing me i don't now why i still dont get nothing by the way dont put my question at the bottom So how would you do a question like 1/4 - 3/12 ? Find the least common factor of the denominators (twelve here), find out what to multiply the numerator AND denominator with to achieve a denominator of twelve, i.e. three for the first fraction and one for the second. Then do the subtraction. Easy Peasy! Okay so this is today's homework is about that and I haven't really get what I had to do so look at this problem 1/3 + 5/9 they both can't reduce! or unless you can do the factor tree on 1 and 3 okay never mind thank you for doing this video I really didn't understand Adding Fractions with Unlike Denominators. and plus is their ever gonna be a problem with dividing? Like with unlike denominators? To add fractions, find the least common denominator (LCD). Here, it would be 9. 1) You can convert 1/3 to 3/9. So you end up with 1/3 + 5/9 = 3/9 + 5/9. 2) Add the numerators. (3+5)/9 = 8/9. ok this will be a question but does anyone have mrs sullivan for math at mckinley elementary ok if so what is tha simplest form of 2/3-1/9? Why are both the denominators the same? If you want to add or subtract a fraction, you have to make both side have the same denominator. Why would you not simplify this answer, 22/21 to 1 1/21? The problem was 5/7 + 1/3. In my homework book it said the answer was simply 22/21. I'm puzzled...someone please explain! How do you simplify it if it is not an improper fraction? the whole thing was confusing Is there any other faster ways to do the problems instead of just wasting so much time on only one problem when i could be on the next one in just a minute or less? Get the LCD(least common denominator) of the denominator then add. He explains things that is why the video is long..:) Why do we have to find a common multiple? Why cant we just the numerators and the denominators together, like we multiply the numerators and the denominators when we are multiplying fractions. I'm confused. Help! tell me the answer to this 56/93 + 43/94 This section inst for doing your home work. is there any videos on subtracting fractions with different denominators vote-if u agree that the person who made this website is trying to turn the unlike denominator into a common denominator to make it more easier Can u write a fraction like this -5/9? I thought you have to divide 36 to 4/9 and to 11/12 ??? this video has helped me like sooooooooo much. THANKS SAL! can you show an ex. of m-5 over m squared+9m+20 +4m-1 over m squared +7m+10 ? How do you add fractions with unlike fractions can anyone give me some steps where are the problem exercises? why do you make it so complicated? I don't get why we have to multiply can you please teach me calculus ? That's in the calculus section. When I search for hint. It does not seems well And i Gotta do my math right now. BYE EVERYONE BYE , when you subtract the fraction do you have to slimply it ? can somebody answer it in a different answer how did you know to multiply in the begging of the video i am in 4th grade and i loved the way u teach in a simplified manner. where can i find problems to solve adding fractions with unlike denominators. i don't understand this formula its a little bit tricky.can some one help me. Why do we have covert mixed numbers to improper fractions? why do you have to multiply by the same number You have to multiply the same number for the denominator and numerator because whatever you do to the bottom, you do to the top. If you don't do this your fraction would not be equivalent so you would get a different answer because you changed the value of the fraction completely. Hope this helped. :) why can't you add them wit the denomenator that the # come with If you mean by counting the multiples of 9 to find one that is divisible by 12, you can check every single one (but it's not very efficient), or you can learn the multiples of 9 and 12 up to a certain point. If you mean the multiplying of the denominator as well as the numerator, my teacher always says: "what you do to the bottom you do to the top!" so you make an equivalent fraction. how would you multiply 5/20 * 8/5 Well you would cancel some of the numbers. the 5 and the 5 could cancel out, and that would leave you with 8/20. u are soo smart.. just what i need to no except can you do a vidio were you can like turn the inproper fraction into a proper one?? if that makes sence how do u add and subtracted fractions I do not understand this? i thought that you had to add the different denominators but you have to divide them so when your multiplying do you have to divide the different denominators and so when your dividing different denominators you have to multiply them When you find a common denominator are you allowed to divide instead of multiply 9x2=18 ,9x3=27and blah blah blah ? Why does it have to be unlike denominators? what should i do if my fraction is 9/11+6/10 This looks hard can some one help? i figured it out! it's easy Add My Bro as Couch: ¸„ø¤º°¨°º¤ø„¸¸„ø¤º°¨ is there any way to do it without changing the denominators? Why do the denominators have to be the same? how about if we have like 36, and we have to find a GCF, and we have to do 2. what can save me from doin all that work? i dont know but good question. When subtracting mixed fractions, what if the second fraction is larger than the first? Like 7 3/5 - 5 4/5 = ? I don't understand. Why do you have to multiply the numerator? You want the new fraction to be equivalent to the original one so that the sum of the two fractions doesn't change. To do that, you need to multiply the numerator and denominator of the fraction by the same number. why is this so hard i kinda understand but not really who agrees Is it possible for there to not be a common denominator ? I don't really understand how to simplify. It is confusing to me somehow and everytime I get a question that asks me to simplify I can't do it, I don't understand it. Can somebody please help me figure it out? I was going through the adding mixed fractions with unlike denominators and simplifying them to their lowest term and I am so lost right now. I had to watch the videos like five times. I am just trying to figure it out. 10,020,396 over 300,285,932 plus 125 over 70,000,000. Seems hard. bet you I can fool ya. :) If the denominator of both numbers are prime then why can't 1 be divisible by that prime number? can anyone give an example of what he means You have to fined a number that goes in to the denominators evenly.... i think Not sure! Hope that helps what is common denominator in 6/7 + 8/21 How did you get the x 4 and x 3? How would you do it it you had variables in different spots of the equation still adding and subtracting fractions with unlike and like denominators? Why is this so difficult? If you work harder, it will be easier. Why does the numorator have to be on the top and the denominator have to be on the bottom? Hi, I was wondering if you were going make 49/36 into a proper fraction like he did... it there any other possible way of doing that? Or is that the only way to covert the improper fraction into a proper fraction? Just wondering! Thanks! Why Does This Video Have To Be So Short? I Don't Understand! why do we need to know this? how can this help us in the future? Do you understand English? Click here to see more discussion happening on Khan Academy's English site. Report a mistake in the video At 2:33, Sal said "single bonds" but meant "covalent bonds." Discuss the site For general discussions about Khan Academy, visit our Reddit discussion page. Flag inappropriate posts Here are posts to avoid making. If you do encounter them, flag them for attention from our Guardians. • disrespectful or offensive • an advertisement not helpful • low quality • not about the video topic • soliciting votes or seeking badges • a homework question • a duplicate answer • repeatedly making the same post wrong category • a tip or thanks in Questions • a question in Tips & Thanks • an answer that should be its own question about the site • a question about Khan Academy • a post about badges • a technical problem with the site • a request for videos or features
{"url":"https://khan-academy.appspot.com/math/arithmetic/fractions/fractions-unlike-denom/v/adding-fractions-with-unlike-denominators?_escaped_fragment_=","timestamp":"2014-04-20T08:30:20Z","content_type":null,"content_length":"991729","record_id":"<urn:uuid:b6c3248a-0239-45f7-aa39-e13b8d2e5731>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra plus MyMathLab with Pearson eText -- Access Card Package 4th Edition | 9780321639394 | eCampus.com FREE SHIPPING OVER $59! Your order must be $59 or more, you must select US Postal Service Shipping as your shipping preference, and the "Group my items into as few shipments as possible" option when you place your order. Bulk sales, PO's, Marketplace Items, eBooks, Apparel, and DVDs not included. • The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc. • The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
{"url":"http://www.ecampus.com/college-algebra-plus-mymathlab-pearson/bk/9780321639394","timestamp":"2014-04-19T15:58:54Z","content_type":null,"content_length":"67467","record_id":"<urn:uuid:17409b26-0652-47f9-b5bd-39e7b5f215e2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Probably Overthinking It It's been a while since the last post because I have been hard at work on Think Bayes . As always, I have been posting drafts as I go along, so you can read the current version at I am teaching Computational Bayesian Statistics in the spring, using the draft edition of the book. The students will work on case studies, some of which will be included in the book. And then I hope the book will be published as part of the Think X series (for all ). At least, that's the plan. In the next couple of weeks, students will be looking for ideas for case studies. An ideal project has at least some of these characteristics: • An interesting real-world application (preferably not a toy problem). • Data that is either public or can be made available for use in the case study. • Permission to publish the case study! • A problem that lends itself to Bayesian analysis, in particular if there is a practical advantage to generating a posterior distribution rather than a point or interval estimate. Examples in the book include: • The hockey problem: estimating the rate of goals scored by two hockey teams in order to predict the outcome of a seven-game series. • The paintball problem, a version of the lighthouse problem. This one verges on being a toy problem, but recasting it in the context of paintball got it over the bar for me. • The kidney problem. This one is as real as it gets -- it was prompted by a question posted by a cancer patient who needed a statistical estimate of when a tumor formed. • The unseen species problem: a nice Bayesian solution to a standard problem in ecology. So far I have a couple of ideas prompted by questions on Reddit: But I would love to get more ideas. If you have a problem you would like to contribute, let me know!
{"url":"http://allendowney.blogspot.com/2013/01/call-for-bayesian-case-studies.html","timestamp":"2014-04-17T19:07:48Z","content_type":null,"content_length":"94563","record_id":"<urn:uuid:7c326a5c-7555-4761-9cc6-50bb92907ec2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
The primary aim of this workshop is to provide a venue for academics, including graduate students and postdoctoral fellows, to meet and discuss their latest research topics in the broad area of insurance mathematics and its related disciplines (e.g. mathematical finance, applied probability and statistics). The workshop does not have a unique theme and/or topic in mind, but is intended to cover a rather broad scope of research interests in the general area of actuarial science. Among others, this includes life and non-life insurance, risk management in insurance and finance, risk and ruin theory, financial modelling and applications of statistical methods in insurance. As for the first edition of this workshop, one of the main objectives of the second edition is to give the opportunity to the up-and-coming researchers in the provinces of Québec and Ontario to promote their research program and facilitate their integration into the actuarial academic community in Canada and abroad. As such, the workshop plans to actively involve graduate students and postdoctoral fellows in its scientific program. This will provide a natural platform for these individuals to present their most recent research contributions to an audience of experts in the field of insurance mathematics. Also, young faculty will be invited to play a preponderant role in the scientific program. Canada has been known for years to be a stronghold of the actuarial science profession with many high-profile academics among its ranks. As such, our goal is to take advantage of Canada's unique feature in this regard and ensure continuity through the development of a strong cohort of young actuarial science academics.The workshop also intends to stimulate interaction and scientific collaboration, and foster relations of an academic and professional nature among the actuarial science groups in the Québec-Ontario area (as well as outside of these two provinces). Keynote speaker: Jose Garrido (Concordia University) Invited speakers: Maciej Augustyniak (Université de Montréal) Mathieu Boudreault (UQAM) Hélène Cossette (Université Laval) Sebastian Jaimungal (University of Toronto) Bruce Jones (University of Western Ontario) Hyejin Ku (York University) Khouzeima Moutanabbir (Université Laval) David Saunders (University of Waterloo) Wei Wei (University of Waterloo) Scientific Program audio and slides available here │Friday February 3, 2012 │ │8:30-8:50 │Registration │ │8:50-9:00 │Opening remarks │ │SESSION 1 │ │9:00-10:00 │José Garrido (Concordia University) │ │10:00-10:30 │David Saunders (University of Waterloo) │ │10:30-11:00 │Coffee break │ │SESSION 2 │ │11:00-11:30 │Bruce L. Jones (University of Western Ontario) │ │11:30-12:00 │Khouzeima Moutanabbir (University Laval) │ │12:00-12:30 │Mathieu Boudreault (UQAM) │ │12:30-2:00 │Lunch at Fields │ │SESSION 3 │ │2:00-2:30 │Hélène Cossette (University Laval) │ │2:30-3:00 │Wei Wei (University of Waterloo) │ │3:00-3:30 │Hyejin Ku (York University) │ │3:30-4:00 │Coffee Break │ │SESSION 4 │ │4:00-4:30 │Sebastian Jaimungal (University of Toronto) │ │4:30 -5:00 │Maciej Augustyniak (University de Montréal) │ │RECEPTION │ │6:00 - 10:00│Cocktail Hour followed by Buffet Dinner at 7:00 (The Faculty Club, University of Toronto, 41 Willcocks Street) │ Speaker Abstracts Keynote Speaker: José Garrido (Concordia University) Credit risk; a complex system seen from an actuarial perspective Credit risk models share several common characteristics with actuarial risk theory models. Even if the problems studied with these models are different, their solutions are similar in some respects. In modern science, risk credit could be considered a complex system, where it is not sufficient to isolate the effect of a single factor on the risk credit quantity of interest (like the probability of default on a corporate bond). Rating agencies, like Moody`s or Standard and Poor`s use complex econometrical models with several variables, some quite subjective, to come up with their credit ratings. We propose to revisit the problem with a more classical actuarial approach. In classical finance, a consistent market is in balance if it does not let agents take advantage of price differences to make a risk-free profit at zero cost. The existence of such classical arbitrage opportunities can arise from over- or under-estimation of the underlying risk, like with current credit ratings on European governments bonds, indicating inefficiencies in the market. As an alternative to the classical arbitrage methods to deal with this problem, we introduce a new ranking based on risk measures. We first introduce a new type of arbitrage defined from the properties of risk measures. That is, if under a specific risk measure, the risk of a portfolio is less than or equal to zero, then a possible positive portfolio income is considered as an arbitrage income. Inconsistencies in bond markets refer to the existence of these arbitrage opportunities. A new tool to detect and measure these is established. Numerical examples with corporate bonds will serve to illustrate the ideas. David Saunders (University of Waterloo) Mathematical and Computational Issues in Calculating Capital for Credit Risk The inadequacies of methods for calculating credit risk capital, particularly in the trading book, in the lead-up to the global financial crisis have led to a reevaluation of regulatory capital, resulting in the new Basel III requirements. I will discuss mathematical and computational problems that arise when computing the new capital requirements for credit risk in the trading book. Bruce L. Jones (University of Western Ontario) Credibility for Pension Plan Terminations In establishing demographic assumptions for pension plan calculations, pension actuaries must decide on suitable termination rates. These rates typically depend on age and years of service, but may also depend on other factors such as economic conditions. Restricting our attention to terminations other than mortality, disability or retirement (i.e. resignations and firings), we investigate an approach to adjusting a standard termination table to reflect the experience of the plan and other variables. Actual to expected ratios are modeled using a generalized linear model, and a limited fluctuation approach is used to reflect the credibility of the plan experience. This is joint work with Chou Chio Leong (University of Western Ontario). Khouzeima Moutanabbir (Université Laval) Asset-liability management for pension fund using an international investment model We introduce an asset liability model using stochastic programming. We use an international investment model where investors are allowed to hold assets in both domestic and foreign economies. We formulate a multi-stage optimization problem for pension fund asset liability management and we provide a solution based on scenario generation and stochastic programming. The model is calibrated to Canadian and American data. Mathieu Boudreault (UQAM) Multivariate integer-valued autoregressive models applied to earthquake occurrences In various situations in the insurance industry, in finance, in epidemiology, etc., one needs to represent the joint evolution of the number of occurrences of an event. In this paper, we present a multivariate integer-valued autoregressive (MINAR) model, derive its properties and apply the model to earthquake occurrences across various pairs of tectonic plates. The model is an extension of Pedeli & Karlis (2011) where cross autocorrelation (spatial contagion in a seismic context) is considered. We fit various bivariate count models and find that for many contiguous tectonic plates, spatial contagion is significant in both directions. Furthermore, ignoring cross autocorrelation can underestimate the potential for high numbers of occurrences over the short-term. Our overall findings seem to further confirm Parsons & Velasco (2011), meaning that reinsurance companies can still diversify earthquake risk across different regions of the planet. Hélène Cossette (Université Laval) Analysis of the discounted sum of ascending ladder heights Within the Sparre-Andersen risk model, the ruin probability corresponds to the survival function of the maximal aggregate loss. It is well known that the maximum aggregate loss follows a compound geometric distribution, in which the summands consists of the ascending ladder heights. We propose to investigate the distribution of the discounted sum of ascending ladder heights over finite or infinite-time intervals. In particular, the moments of the discounted sum of ascending ladder heights over finite- and infinite-time intervals are derived in both the classical compound Poisson risk model and the Sparre-Andersen risk model with exponential claims. The application of a particular Gerber-Shiu functional is central to the derivation of these results, as is the mixed Erlang distributional assumption. Finally, we define VaR and TVaR risk measures in terms of the discounted sum of ascending ladder heights. We use a moment-matching method to approximate the distribution of the discounted sum of ascending ladder heights allowing the computation of the VaR and TVaR risk measures. Wei Wei (University of Waterloo) Optimal Allocations of deductibles and policy limits with generalized dependence structures Optimal allocations of deductibles and policy limits have been studied by Cheung (2007), Hua and Cheung (2008 a, b), and Zhuang et al. (2008) among many others. In those literatures, only independent and comonotonic structures have been taken into considerations. This paper aims to develop a generalized dependence structure so as to unify and generalize the studies in the previous models. Motivated by the bivariate characterizations of likelihood ratio order and joint likelihood ratio order (Shanthikumar and Yao (1991)), we employ the concept of arrangement increasing to define dependence between multivariate random variables. Specifically, we associate arrangement increasing survival functions and arrangement increasing joint density function with two different dependence structures (SAI and UOAI) respectively, both of which include independence and comonotonicity as special cases. It turns out that most results derived in Cheung (2007), Hua and Cheung (2008 a, b), and Zhuang et al. (2008) are preserved under these dependence structures. Namely, the deductibles or policy limits could be ordered accordingly. We also solve a more general optimal allocation problem under the dependence of SAI. Hyejin Ku (York University) Discrete Time Pricing and Hedging of Options under Liquidity Risk Liquidity risk is the additional risk in a financial market due to the timing and size of a trade. In the past decade, the literature on the liquidity risk has been growing rapidly. Built on the asset pricing theory developed by Cetin-Jarrow-Protter, we study how the classical hedging strategies should be modified and how the prices of derivatives should be changed in the presence of liquidity costs, especially when we hedge only at discrete time points. Sebastian Jaimungal (University of Toronto) Valuing GWBs with Stochastic Interest Rates and Volatility Abstract: Guaranteed withdrawal benefits (GWBs) are long term contracts which provide investors with equity participation while providing them a secured income stream. Due to the long investment horizons involved, stochastic volatility and stochastic interest rates are important factors to include in their valuation. Here, we provide an efficient method for valuing these path-dependent products through re-writing the problem in the form of an Asian styled claim and a dimensionally reduced PDE. The PDE is then solved using an Alternating Direction Implicit (ADI) method. Furthermore, we derive an analytical closed form approximation and compare the approximate, as well as the results from the ADI method, with Monte Carlo simulations. We illustrate the various effects of the parameters on the valuation through numerical experiments and discuss their financial implications. This is joint work with Dmitri Rubisov (BMO Capital Markets) and Ryan Donnelly (University of Toronto). Maciej Augustyniak (Université de Montréal) Estimation of a path dependent RS-GARCH model by a Monte Carlo EM algorithm Regime-switching generalized autoregressive conditional heteroskedasticity (RS-GARCH) models are becoming increasingly popular to model financial data in the econometric literature. Estimating these models is a challenging task because the path dependence element of these models renders the exact computation of the likelihood infeasible in practice. This led some authors to propose estimation methods that do not depend on the likelihood such as a generalized method of moments procedure and a Bayesian algorithm. Other authors suggested estimating by maximum likelihood modified versions of the RS-GARCH model that avoid the path dependence problem. However, there is not yet a method available to obtain the maximum likelihood estimator (MLE) of the path dependent RS-GARCH model without resorting to some sort of modification of the model. In this presentation, I propose a novel approach based on the Monte Carlo expectation-maximization algorithm to estimate the MLE. Practical implementation of this method and its effectiveness in recovering the MLE are studied. Confirmed Participants as of January 31, 2012 │Full Name │University/Affiliation │ │Abdallah, Anas │Université Laval │ │Al Jarousha, Ayat │University of Western Ontario │ │Augustyniak, Maciej │Université de Montréal │ │Badescu, Andrei │University of Toronto │ │Bernard, Carole │University of Waterloo │ │Bosch Frigola, Irene │Concordia University │ │Boucher, Jean-Philippe│UQAM │ │Boudreault, Mathieu │UQAM │ │Chen, Bingzheng │Tsinghua University │ │Chen, Yingying │University of Waterloo │ │Cheng, Jianhua │Jilin University │ │Cheng, Xiaohua │University of Western Ontario │ │Chong, Yuxiang │University of Toronto │ │Cossette, Hélène │Laval University │ │Cousineau, Alexandre │Université de Montréal │ │Donnelly, Ryan │University of Toronto │ │Elmahdaoui, Raymond │University of Montreal │ │Gao, Huan │University of Western Ontario │ │Garrido, José │Concordia University │ │Ge, Jing │University of Western Ontario │ │Geng, Li │University of Western Ontario │ │Gu, Zhimin │University of Western Ontario │ │Guan, Jiali │University of Western Ontario │ │Hackmann, Daniel │York University │ │Han, Dezhao │Concordia University │ │Hou, Xueting │University of Western Ontario │ │Huang, Yue │Carleton University │ │Hyun, Darae │University of Western Ontario │ │Iftekhar, Aisha │ │ │Jackson, Ken │University of Toronto │ │Jaimungal, Sebastian │University of Toronto │ │Jin, Shu │University of Western Ontario │ │Jin, Tao │University of Western Ontario │ │Jones, Bruce │University of Western Ontario │ │Ke, Wanjun │University of Western Ontario │ │Kim, Taehee Kyle │University of Western Ontario │ │Kreinin, Alexander │Algorithmics Incorporated │ │Ku, Hyejin │York University │ │Kunka, Robert │University of Western Ontario │ │Landriault, David │University of Waterloo │ │Lee, Wing Yan │University of Waterloo │ │Lemieux, Christiane │University of Waterloo │ │Li, Dongchen │University of Waterloo │ │Li, Shu │University of Waterloo │ │Lin, X. Sheldon │University of Toronto │ │Liu, Fangda │University of Waterloo │ │Liu, Xiaoming │University of Western Ontario │ │MacKay, Anne │University of Waterloo │ │Mailhot, Mélina │Université Laval │ │Marceau, Etienne │Université Laval │ │Morales, Manuel │Université de Montréal │ │Moutanabbir, Khouzeima│Laval University │ │Qian, Cheng │University of Western Ontario │ │Ren, Jiandong │University of Western Ontario │ │Renaud, Jean-François │Université du Québec à Montréal │ │Ricci, Jason │University of Toronto │ │Rosu, Cristina │University of Waterloo │ │Saunders, David │University of Waterloo │ │Scott, Alexandre │University of Western Ontario │ │Shi, Tianxiang │University of Waterloo │ │Wei, Wei │University of Waterloo │ │Willmot, Gordon │University of Waterloo │ │Woo, Jae Kyung │Columbia University │ │Wu, Panpan │University of Toronto │ │Yang, Guang │University of Western Ontario │ │Zang, Yanyan │University of Western Ontario │ │Zhou, Xiaowen │Concordia University │ back to top
{"url":"http://www.fields.utoronto.ca/programs/cim/11-12/insurancemath/","timestamp":"2014-04-17T21:41:47Z","content_type":null,"content_length":"37108","record_id":"<urn:uuid:8b92c2ba-f5e9-4409-9124-3a13dba0c29b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2011 [00196] [Date Index] [Thread Index] [Author Index] NIntegrate fails to work... • To: mathgroup at smc.vnet.net • Subject: [mg122740] NIntegrate fails to work... • From: GQ Wang <gqwang1984 at gmail.com> • Date: Wed, 9 Nov 2011 06:24:22 -0500 (EST) • Delivered-to: l-mathgroup@mail-archive0.wolfram.com Hi guys, I came across this problem in my calculation: I have G(x, y), which is a very complicated matrix function of x and y. The integrand of my problem, denoted by f(G(x, y)), is a function of the eigenvalues and eigenvectors of G(x, y), and so naturally is a function of x and y. G(x, y) is formally so complicated that it's impractical to be diagonalized symbolically on the level of variables x and y. When I indeed tried this symbolic calculation, I get the error message Eigenvectors::eivec0: Unable to find all eigenvectors. >> and get the zero vector as the result. So I thought numerical integration should be the way to go. The procedure in my mind goes like this: write down G(x, y) numerically for each (x0, y0) point which appeared in the numerical integration, diagonalize the numerical matrix, calculate the eigenvectors and eigenvalues, and then calculate f(G(x0, y0)). In this way, at least in principle, the numerical integration could be performed. The code goes like G[x_, y_]:=... NIntegrate[f[x, y],{x, xmin, xmax}, {y, ymin, ymax}] But it did not work out. The reason probably is, I suspect (based on the same error message that I received), that mathematica evaluates the integrand at each (x0, y0) point using something like f(G(x, y))/{x->x0, y->y0}, which requires the explicit form of the integrand f(G(x, y)), which, as I have mentioned, mathematica fails to calculate symbolically. So, my question is, how should I proceed... It seemed such an innocent Thanks a lot guys.
{"url":"http://forums.wolfram.com/mathgroup/archive/2011/Nov/msg00196.html","timestamp":"2014-04-17T01:13:57Z","content_type":null,"content_length":"26570","record_id":"<urn:uuid:335f4796-9e94-40de-a0a6-34469639e085>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is algebra so important? Algebra is known as a gatekeeper subject, so when should your child take it? Last fall results from national math exams stirred up a tempest in a standardized test. It turns out math scores rose more quickly before No Child Left Behind was implemented, and fourth-grade math scores haven’t improved since 2007. As reported in the New York Times, the achievement gap remains a chasm between the haves and the have-nots. What does this mean for your child? While pundits and politicians battle over the big issues, it's up to parents to stay on top of the little ones: their own kids' academic development. Make sure your tween or teen is on track for high school math with this guide to algebra. Why algebra matters It is frequently called the gatekeeper subject. It is used by professionals ranging from electricians to architects to computer scientists. It is no less than a civil right, says Robert Moses, founder of the Algebra Project, which advocates for math literacy in public schools. Basic algebra is the first in a series of higher-level math classes students need to succeed in college and life. Because many students fail to develop a solid math foundation, an alarming number of them graduate from high school unprepared for college or work. Many end up taking remedial math in college, which makes getting a degree a longer, costlier process than it is for their more prepared classmates. And it means they're less likely to complete a college-level math course. For middle-schoolers and their parents, the message is clear: It's easier to learn the math now than to relearn it later. The first year of algebra is a prerequisite for all higher-level math: geometry, algebra II, trigonometry, and calculus. According to a study (pdf) by the educational nonprofit ACT, students who take algebra I, geometry, algebra II, and one additional high-level math course are much more likely to do well in college math. Algebra is not just for the college-bound. Even high school graduates headed straight for the work force need the same math skills as college freshmen, the ACT found. This study looked at occupations that don't require a college degree but pay wages high enough to support a family of four. Researchers found that math and reading skills required to work as an electrician, plumber, or upholsterer were comparable to those needed to succeed in college. Algebra is, in short, the gateway to success in the 21st century. What's more, when students make the transition from concrete arithmetic to the symbolic language of algebra, they develop abstract reasoning skills necessary to excel in math and science. Algebra I: Learn it now or later? Students typically take algebra in eighth or ninth grade. The benefit of studying it in eighth grade is that if your child takes the PSAT as a high school sophomore, she will have completed geometry. By the time she's ready to take the SAT or ACT as a junior, she will have completed algebra II, which is covered in both of these college admissions tests. There's a growing movement to require algebra in seventh grade, but many seventh-graders aren't prepared for it, math educators say. "Some kids get turned off of math because they start math too early," says Francis "Skip" Fennell, president of the National Council of Teachers of Mathematics (NCTM). If you're wondering whether your child is ready to advance, he recommends talking to her current teacher. The goal is for your child to learn algebra well and stay engaged in math, not to push her through the curriculum as quickly as possible. Is your child on track? Math curriculum varies widely from state to state, so it can be difficult to determine whether your child is getting the right preparation for higher-level courses. For a better sense of how your child's schoolwork compares, look up your state's math standards. Or see what the NCTM recommends for preschool through high school. W. Stephen Wilson, a math professor at Johns Hopkins University, reviewed K-12 math standards nationwide for the Thomas B. Fordham Institute and has strong opinions about which offer the best guidance. He calls California's the " gold standard" and recommends that parents who want to make sure their kids are prepared for high school and college compare their curriculum to the California The answer is in the homework Wilson offers this advice to parents trying to evaluate their children's math instruction:"If a student isn't bringing home work that requires lots of manipulation and word problems, then there is probably a problem." Fennell suggests talking to your child and her math teacher about how homework is used, specifically: • Are homework assignments corrected and returned in a timely way? • Is homework reviewed in class so students can learn from their mistakes? • Does the teacher change the pace or direction of his or her instruction, based on student feedback? You don't need to be a mathematician to ask good questions about your child's curriculum, Fennell adds. "Ask the teacher, 'Is it a repeat of math that should have already been mastered? When my child finishes this year, will he be ready for high school math?'" Bill Moore directs Washington’s Transition Mathematics Project, which is working to better prepare students for college math. According to him, middle-schoolers need to have a solid foundation of “basic procedural skills that really make problem solving more fluid. There's a fundamental set of stuff that just has to be memorized, and then there's a sense of numbers, a sense of what's a reasonable answer." Calculators: Tool or crutch? How much should students rely on calculators? The issue has been debated by math teachers, university professors, and parents, but there is general agreement that calculators shouldn’t be a substitute for learning basic arithmetic and standard algorithms. “In some cases,” says Moore, “students go straight to calculators, and if the calculator says it’s right, then it must be right.” "The calculator is an instructional tool,” says Fennell. “It should support but not supplant anything. You don't use it for 6 x 7." Updated January 2010
{"url":"http://www.greatschools.org/print-view/students/academic-skills/354-why-algebra.gs?fromPage=1","timestamp":"2014-04-18T05:50:04Z","content_type":null,"content_length":"17360","record_id":"<urn:uuid:05eb3c6d-f60e-444b-94e5-7b448849fa5a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellevue, WA Geometry Tutor Find a Bellevue, WA Geometry Tutor Summary of QualificationsPresentation Skills Man ManagementPassion for Technology and Quality Train and Lead TeamsSoftware Skills: MAC and PCADOBE Creative Suite HTML 3DMax LCMS MSOffice EducationPittsburg State University, Pittsburg KSMaster of Science in Technology with Academic Honors, G... 12 Subjects: including geometry, algebra 1, web design, photography ...I have excellent interpersonal and communication skills. I am an extensive professional with teaching experience in the fields of Mathematics (Pre-calculus, Calculus, Differential Equations) and Medical and Biological Physics in a position of associate professor. I have extensive experience in ... 9 Subjects: including geometry, calculus, physics, algebra 1 ...Sincerely, Mary AnnI enjoy tutoring Algebra 1, trying to make it interesting and easy to learn. I've helped many students with their math and improved their grades. If you don't understand something or can't solve an algebra problem, I can simplify it until you get it and solve it all by yourself. 13 Subjects: including geometry, Chinese, algebra 1, algebra 2 I have a strong understanding of science and math and I love to share interesting ideas. I teach with patience and thoughtful communication. I studied computer science in college, with additional coursework in biology, physics, chemistry, anatomy and physiology, and linguistics. 18 Subjects: including geometry, chemistry, biology, algebra 2 ...Working with them and going over multiple problems until they understood the concepts they were struggling with. I have also taken a leadership program at the University of Berkeley and through it gained skills to successfully lead others through their challenges. Challenges such as ropes cours... 15 Subjects: including geometry, reading, Spanish, piano Related Bellevue, WA Tutors Bellevue, WA Accounting Tutors Bellevue, WA ACT Tutors Bellevue, WA Algebra Tutors Bellevue, WA Algebra 2 Tutors Bellevue, WA Calculus Tutors Bellevue, WA Geometry Tutors Bellevue, WA Math Tutors Bellevue, WA Prealgebra Tutors Bellevue, WA Precalculus Tutors Bellevue, WA SAT Tutors Bellevue, WA SAT Math Tutors Bellevue, WA Science Tutors Bellevue, WA Statistics Tutors Bellevue, WA Trigonometry Tutors Nearby Cities With geometry Tutor Beaux Arts Village, WA geometry Tutors Bothell geometry Tutors Burien, WA geometry Tutors Clyde Hill, WA geometry Tutors Hunts Point, WA geometry Tutors Issaquah geometry Tutors Kirkland, WA geometry Tutors Medina, WA geometry Tutors Mercer Island geometry Tutors Newcastle, WA geometry Tutors Redmond, WA geometry Tutors Renton geometry Tutors Seattle geometry Tutors Shoreline, WA geometry Tutors Yarrow Point, WA geometry Tutors
{"url":"http://www.purplemath.com/Bellevue_WA_geometry_tutors.php","timestamp":"2014-04-17T13:26:20Z","content_type":null,"content_length":"24080","record_id":"<urn:uuid:465fef33-1680-42f2-a177-efc3db571e83>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigen Vectors and eigen values! February 10th 2010, 08:20 PM #1 Feb 2010 Hi all, I am a complex systems researcher and I need to have complete knowledge about eigen vectors and eigen values. How does change in dimension affect a point's eigen vector and eigen value? What does principal eigen vector and principal eigen value mean for a point of n-dimension? Please help. Thanks in advance. Hi all, I am a complex systems researcher and I need to have complete knowledge about eigen vectors and eigen values. How does change in dimension affect a point's eigen vector and eigen value? What does principal eigen vector and principal eigen value mean for a point of n-dimension? Please help. Thanks in advance. I'm sorry but I simply don't understand your question. Points do not have "eigenvalues" or "eigenvectors". Linear transformations have eigenvalues and eigenvectors. By Points, I meant, nodes in a network. It's true a node doesn't have an eigen value unique to itself. But, the entries in the principal eigen vector is unique for each node in the network. So, if I isolate a node, or add a new node, the dimension changes. So, what effect will this have on the principal eigen value and the entries of the principal eigen vector when compared to the previous value? First of all, what does, principal eigen value and principal eigen vector mean for a node in an n-dimensional network? Please reply. February 10th 2010, 11:58 PM #2 MHF Contributor Apr 2005 February 11th 2010, 06:29 AM #3 Aug 2009
{"url":"http://mathhelpforum.com/advanced-algebra/128298-eigen-vectors-eigen-values.html","timestamp":"2014-04-16T16:28:21Z","content_type":null,"content_length":"36523","record_id":"<urn:uuid:7840f8c9-c1a0-4aff-b828-cf32c02c35c0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about injective functions on Math ∩ Programming In this post we’ll cover the second of the “basic four” methods of proof: the contrapositive implication. We will build off our material from last time and start by defining functions on sets. Functions as Sets So far we have become comfortable with the definition of a set, but the most common way to use sets is to construct functions between them. As programmers we readily understand the nature of a function, but how can we define one mathematically? It turns out we can do it in terms of sets, but let us recall the desired properties of a function: • Every input must have an output. • Every input can only correspond to one output (the functions must be deterministic). One might try at first to define a function in terms of subsets of size two. That is, if $A, B$ are sets then a function $f: A \to B$ would be completely specified by $\displaystyle \left \{ \left \{ x, y \right \} : x \in A, y \in B \right \}$ where to enforce those two bullets, we must impose the condition that every $x \in A$ occurs in one and only one of those subsets. Notationally, we would say that $y = f(x)$ means $\left \{ x, y \ right \}$ is a member of the function. Unfortunately, this definition fails miserably when $A = B$, because we have no way to distinguish the input from the output. To compensate for this, we introduce a new type of object called a tuple. A tuple is just an ordered list of elements, which we write using round brackets, e.g. $(a,b,c,d,e)$. As a quick aside, one can define ordered tuples in terms of sets. We will leave the reader to puzzle why this works, and generalize the example provided: $\displaystyle (a,b) = \left \{ a, \left \{ a, b \right \} \right \}$ And so a function $f: A \to B$ is defined to be a list of ordered pairs where the first thing in the pair is an input and the second is an output: $\displaystyle f = \left \{ (x, y) : x \in A, y \in B \right \}$ Subject to the same conditions, that each $x$ value from $A$ must occur in one and only one pair. And again by way of notation we say $y = f(x)$ if the pair $(x,y)$ is a member of $f$ as a set. Note that the concept of a function having “input and output” is just an interpretation. A function can be viewed independent of any computational ideas as just a set of pairs. Often enough we might not even know how to compute a function (or it might be provably uncomputable!), but we can still work with it abstractly. It is also common to call functions “maps,” and to define “map” to mean a special kind of function (that is, with extra conditions) depending on the mathematical field one is working in. Even in other places on this blog, “map” might stand for a continuous function, or a homomorphism. Don’t worry if you don’t know these terms off hand; they are just special cases of functions as we’ve defined them here. For the purposes of this series on methods of proof, “function” and “map” and “mapping” mean the same thing: regular old functions on sets. One of the most important and natural properties of a function is that of injectivity. Definition: A function $f: A \to B$ is an injection if whenever $a eq a'$ are distinct members of $A$, then $f(a) eq f(a')$. The adjectival version of the word injection is injective. As a quick side note, it is often the convention for mathematicians to use a capital letter to denote a set, and a lower-case letter to denote a generic element of that set. Moreover, the apostrophe on the $a'$ is called a prime (so $a'$ is spoken, “a prime”), and it’s meant to denote a variation on the non-prime’d variable $a$ in some way. In this case, the variation is that $a' eq a$. So even if we had not explicitly mentioned where the $a, a'$ objects came from, the knowledgeable mathematician (which the reader is obviously becoming) would be reasonably certain that they come from $A$. Similarly, if I were to lackadaisically present $b$ out of nowhere, the reader would infer it must come from $B$. One simple and commonly used example of an injection is the so-called inclusion function. If $A \subset B$ are sets, then there is a canonical function representing this subset relationship, namely the function $i: A \to B$ defined by $i(a) = a$. It should be clear that non-equal things get mapped to non-equal things, because the function doesn’t actually do anything except change perspective on where the elements are sitting: two nonequal things sitting in $A$ are still nonequal in $B$. Another example is that of multiplication by two as a map on natural numbers. More rigorously, define $f: \mathbb{N} \to \mathbb{N}$ by $f(x) = 2x$. It is clear that whenever $x eq y$ as natural numbers then $2x eq 2y$. For one, $x, y$ must have differing prime factorizations, and so must $2x, 2y$ because we added the same prime factor of 2 to both numbers. Did you catch the quick proof by direct implication there? It was sneaky, but present. Now the property of being an injection can be summed up by a very nice picture: The arrows above represent the pairs $(x,f(x))$, and the fact that no two arrows end in the same place makes this function an injection. Indeed, drawing pictures like this can give us clues about the true nature of a proposed fact. If the fact is false, it’s usually easy to draw a picture like this showing so. If it’s true, then the pictures will support it and hopefully make the proof obvious. We will see this in action in a bit (and perhaps we should expand upon it later with a post titled, “Methods of Proof — Proof by Picture”). There is another, more subtle concept associated with injectivity, and this is where its name comes from. The word “inject” gives one the mental picture that we’re literally placing one set $A$ inside another set $B$ without changing the nature of $A$. We are simply realizing it as being inside of $B$, perhaps with different names for its elements. This interpretation becomes much clearer when one investigates sets with additional structure, such as groups, rings, or topological spaces. Here the word “injective mapping” much more literally means placing one thing inside another without changing the former’s structure in any way except for relabeling. In any case, mathematicians have the bad (but time-saving) habit of implicitly identifying a set with its image under an injective mapping. That is, if $f :A \to B$ is an injective function, then one can view $A$ as the same thing as $f(A) \subset B$. That is, they have the same elements except that $f$ renames the elements of $A$ as elements of $B$. The abuse comes in when they start saying $A \ subset B$ even when this is not strictly the case. Here is an example of this abuse that many programmers commit without perhaps noticing it. Suppose $X$ is the set of all colors that can be displayed on a computer (as an abstract set; the elements are “this particular green,” “that particular pinkish mauve”). Now let $Y$ be the set of all finite hexadecimal numbers. Then there is an obvious injective map from $X \to Y$ sending each color to its 6-digit hex representation. The lazy mathematician would say “Well, then, we might as well say $X \subset Y$, for this is the obvious way to view $X$ as a set of hexadecimal numbers.” Of course there are other ways (try to think of one, and then try to find an infinite family of them!), but the point is that this is the only way that anyone really uses, and that the other ways are all just “natural relabelings” of this way. The precise way to formulate this claim is as follows, and it holds for arbitrary sets and arbitrary injective functions. If $g, g': X \to Y$ are two such ways to inject $X$ inside of $Y$, then there is a function $h: Y \to Y$ such that the composition $hg$ is precisely the map $g'$. If this is mysterious, we have some methods the reader can use to understand it more fully: give examples for simplified versions (what if there were only three colors?), draw pictures of “generic looking” set maps, and attempt a proof by direct implication. Proof by Contrapositive Often times in mathematics we will come across a statement we want to prove that looks like this: If X does not have property A, then Y does not have property B. Indeed, we already have: to prove a function $f: X \to Y$ is injective we must prove: If x is not equal to y, then f(x) is not equal to f(y). A proof by direct implication can be quite difficult because the statement gives us very little to work with. If we assume that $X$ does not have property $A$, then we have nothing to grasp and jump-start our proof. The main (and in this author’s opinion, the only) benefit of a proof by contrapositive is that one can turn such a statement into a constructive one. That is, we can write “p implies q” as “not q implies not p” to get the equivalent claim: If Y has property B then X has property A. This rewriting is called the “contrapositive form” of the original statement. It’s not only easier to parse, but also probably easier to prove because we have something to grasp at from the To the beginning mathematician, it may not be obvious that “if p then q” is equivalent to “if not q then not p” as logical statements. To show that they are requires a small detour into the idea of a “truth table.” In particular, we have to specify what it means for “if p then q” to be true or false as a whole. There are four possibilities: p can be true or false, and q can be true or false. We can write all of these possibilities in a table. p q T T T F F T F F If we were to complete this table for “if p then q,” we’d have to specify exactly which of the four cases correspond to the statement being true. Of course, if the p part is true and the q part is true, then “p implies q” should also be true. We have seen this already in proof by direct implication. Next, if p is true and q is false, then it certainly cannot be the case that truth of p implies the truth of q. So this would be a false statement. Our truth table so far looks like p q p->q T T T T F F F T ? F F ? The next question is what to do if the premise p of “if p then q” is false. Should the statement as a whole be true or false? Rather then enter a belated philosophical discussion, we will zealously define an implication to be true if its hypothesis is false. This is a well-accepted idea in mathematics called vacuous truth. And although it seems to make awkward statements true (like “if 2 is odd then 1 = 0″), it is rarely a confounding issue (and more often forms the punchline of a few good math jokes). So we can complete our truth table as follows p q p->q T T T T F F F T T F F T Now here’s where contraposition comes into play. If we’re interested in determining when “not q implies not p” is true, we can add these to the truth table as extra columns: p q p->q not q not p not q -> not p T T T F F T T F F T F F F T T F T T F F T T T T As we can see, the two columns corresponding to “p implies q” and “not q implies not p” assume precisely the same truth values in all possible scenarios. In other words, the two statements are logically equivalent. And so our proof technique for contrapositive becomes: rewrite the statement in its contrapositive form, and proceed to prove it by direct implication. Examples and Exercises Our first example will be completely straightforward and require nothing but algebra. Let’s show that the function $f(x) = 7x - 4$ is injective. Contrapositively, we want to prove that if $f(x) = f (x')$ then $x = x'$. Assuming the hypothesis, we start by supposing $7x - 4 = 7x' - 4$. Applying algebra, we get $7x = 7x'$, and dividing by 7 shows that $x = x’$ as desired. So $f$ is injective. This example is important because if we tried to prove it directly, we might make the mistake of assuming algebra works with $eq$ the same way it does with equality. In fact, many of the things we take for granted about equality fail with inequality (for instance, if $a eq b$ and $b eq c$ it need not be the case that $a eq c$). The contrapositive method allows us to use our algebraic skills in a straightforward way. Next let’s prove that the composition of two injective functions is injective. That is, if $f: X \to Y$ and $g: Y \to Z$ are injective functions, then the composition $gf : X \to Z$ defined by $gf (x) = g(f(x))$ is injective. In particular, we want to prove that if $x eq x'$ then $g(f(x)) eq g(f(x'))$. Contrapositively, this is the same as proving that if $g(f(x)) = g(f(x'))$ then $x=x'$. Well by the fact that $g$ is injective, we know that (again contrapositively) whenever $g(y) = g(y')$ then $y = y'$, so it must be that $f(x) = f(x')$. But by the same reasoning $f$ is injective and hence $x = x'$. This proves the statement. This was a nice symbolic proof, but we can see the same fact in a picturesque form as well: If we maintain that any two arrows in the diagram can’t have the same head, then following two paths starting at different points in $X$ will never land us at the same place in $Z$. Since $f$ is injective we have to travel to different places in $Y$, and since $g$ is injective we have to travel to different places in $Z$. Unfortunately, this proof cannot replace the formal one above, but it can help us understand it from a different perspective (which can often make or break a mathematical idea). Expanding upon this idea we give the reader a challenge: Let $A, B, C$ be finite sets of the same size. Prove or disprove that if $f: A \to B$ and $g: B \to C$ are (arbitrary) functions, and if the composition $gf$ is injective, then both of $f, g$ must be injective. Another exercise which has a nice contrapositive proof: prove that if $A,B$ are finite sets and $f:A \to B$ is an injection, then $A$ has at most as many elements as $B$. This one is particularly susceptible to a “picture proof” like the one above. Although the formal the formal name for the fact one uses to prove this is the pigeonhole principle, it’s really just a simple observation. Aside from inventing similar exercises with numbers (e.g., if $ab$ is odd then $a$ is odd or $b$ is odd), this is all there is to the contrapositive method. It’s just a direct proof disguised behind a fact about truth tables. Of course, as is usual in more advanced mathematical literature, authors will seldom announce the use of contraposition. The reader just has to be watchful enough to notice Though we haven’t talked about either the real numbers $\mathbb{R}$ nor proofs of existence or impossibility, we can still pose this interesting question: is there an injective function from $\mathbb {R} \to \mathbb{N}$? In truth there is not, but as of yet we don’t have the proof technique required to show it. This will be our next topic in the series: the proof by contradiction. Until then!
{"url":"http://jeremykun.com/tag/injective-functions/","timestamp":"2014-04-17T12:40:44Z","content_type":null,"content_length":"85634","record_id":"<urn:uuid:849f073a-afe4-4d68-a6bf-71a6aff0f8bd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Horizontal Shift of Graphs An applet helps you explore the horizontal shift of the graph of a function. The exploration of the graphs of function f(x) is carried out by adding a constant c to the independent variable x and change it. 1 - Click on the button above "click here to start" and MAXIMIZE the window obtained. 2 - Use the scrollbar to set the constant c to negative values and observe the effect on the graph. Is the graph shifted to the left or to the right? 3 - the scrollbar to set the constant c to positive values and observe the effect on the graph. Is the graph shifted to the left or to the right? Note: You have the choice (left panel, top) of any of the three functions f(x)=||x|-2| (this has a "W" shaped graph), f(x)=x^2 or f(x)=x^3. Related topics on graph transformations can be found in this site. • Explore interactively and understand the stretching and compression of the graph of a function when this function is multiplied by a constant a. Vertical Stretching and Compression(scaling) • Explore the changes that occur to the graph of a function when its independent variable x is multiplied by a positive constant a. Horizontal Stretching and Compression • Explore interactively the vertical shifting of the graph of a function. Vertical Shifting/translation of Graphs SEARCH THIS SITE Home Page -- HTML5 Math Applets for Mobile Learning -- Math Formulas for Mobile Learning -- Algebra Questions -- Math Worksheets -- Free Compass Math tests Practice Free Practice for SAT, ACT Math tests -- GRE practice -- GMAT practice Precalculus Tutorials -- Precalculus Questions and Problems -- Precalculus Applets -- Equations, Systems and Inequalities -- Online Calculators -- Graphing -- Trigonometry -- Trigonometry Worsheets -- Geometry Tutorials -- Geometry Calculators -- Geometry Worksheets -- Calculus Tutorials -- Calculus Questions -- Calculus Worksheets -- Applied Math -- Antennas -- Math Software -- Elementary Statistics High School Math -- Middle School Math -- Primary Math Math Videos From Analyzemath Author - e-mail Updated: 2 April 2013 Copyright © 2003 - 2014 - All rights reserved
{"url":"http://www.analyzemath.com/Horizontal_Shift.html","timestamp":"2014-04-18T10:39:53Z","content_type":null,"content_length":"9263","record_id":"<urn:uuid:9027f70f-63d4-4a32-9102-57cba12dc96d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
"Scattering matrix" redirects here. For the meaning in linear electrical networks, see Scattering parameters In physics, the S-matrix or scattering matrix relates the initial state and the final state of a physical system undergoing a scattering process. It is used in quantum mechanics, scattering theory and quantum field theory. More formally, the S-matrix is defined as the unitary matrix connecting asymptotic particle states in the Hilbert space of physical states (scattering channels). While the S-matrix may be defined for any background (spacetime) that is asymptotically solvable and has no horizons, it has a simple form in the case of the Minkowski space. In this special case, the Hilbert space is a space of irreducible unitary representations of the inhomogeneous Lorentz group; the S-matrix is the evolution operator between time equal to minus infinity (the distant past), and time equal to plus infinity (the distant future). It is defined only in the limit of zero energy density (or infinite particle separation distance). It can be shown that if a quantum field theory in Minkowski space has a mass gap, the state in the asymptotic past and in the asymptotic future are both described by Fock spaces. The S-matrix was first introduced by John Archibald Wheeler in the 1937 paper "'On the Mathematical Description of Light Nuclei by the Method of Resonating Group Structure'".^[1] In this paper Wheeler introduced a scattering matrix – a unitary matrix of coefficients connecting "the asymptotic behaviour of an arbitrary particular solution [of the integral equations] with that of solutions of a standard form".^[2] In the 1940s, Werner Heisenberg developed, independently, the idea of the S-matrix. Due to the problematic divergences present in quantum field theory at that time Heisenberg was motivated to isolate the essential features of the theory that would not be affected by future changes as the theory developed. In doing so he was led to introduce a unitary "characteristic" S-matrix.^[2] After World War II, the clout of Heisenberg and his attachment to the S-matrix approach may have retarded development of alternative approaches and the closer study of sub-hadronic physics for a decade or more, at least in Europe: "Pretty much like medieval Scholastic Magisters were extremely inventive in defending the Church Dogmas and blocking the way to experimental science, some great minds in the sixties developed the S-Matrix dogma with great perfection and skill before it was buried down in the seventies after discovery of quarks and asymptotic freedom" ^[3] In high-energy particle physics we are interested in computing the probability for different outcomes in scattering experiments. These experiments can be broken down into three stages: 1. Collide together a collection of incoming particles (usually two particles with high energies). 2. Allowing the incoming particles to interact. These interactions may change the types of particles present (e.g. if an electron and a positron annihilate they may produce two photons). 3. Measuring the resulting outgoing particles. The process by which the incoming particles are transformed (through their interaction) into the outgoing particles is called scattering. For particle physics, a physical theory of these processes must be able to compute the probability for different outgoing particles when we collide different incoming particles with different energies. The S-matrix in quantum field theory is used to do exactly this. It is assumed that the small-energy-density approximation is valid in these cases. Use of S-matrices[edit] The S-matrix is closely related to the transition probability amplitude in quantum mechanics and to cross sections of various interactions; the elements (individual numerical entries) in the S-matrix are known as scattering amplitudes. Poles of the S-matrix in the complex-energy plane are identified with bound states, virtual states or resonances. Branch cuts of the S-matrix in the complex-energy plane are associated to the opening of a scattering channel. In the Hamiltonian approach to quantum field theory, the S-matrix may be calculated as a time-ordered exponential of the integrated Hamiltonian in the interaction picture; it may also be expressed using Feynman's path integrals. In both cases, the perturbative calculation of the S-matrix leads to Feynman diagrams. In scattering theory, the S-matrix is an operator mapping free particle in-states to free particle out-states (scattering channels) in the Heisenberg picture. This is very useful because often we cannot describe the interaction (at least, not the most interesting ones) exactly. Mathematical definition[edit] In Dirac notation, we define $|0\rangle$ as the vacuum quantum state. If $a^{\dagger}(k)$ is a creation operator, its hermitian conjugate (destruction or annihilation operator) acts on the vacuum as $a(k)\left |0\right\rangle = 0.$ Now, we define two kinds of creation/destruction operators acting on different Hilbert spaces (initial space i, final space f), $a_i^\dagger (k)$ and $a_f^\dagger (k)$. So now $\mathcal H_\mathrm{IN} = \operatorname{span}\{ \left| I, k_1\ldots k_n \right\rangle = a_i^\dagger (k_1)\cdots a_i^\dagger (k_n)\left| I, 0\right\rangle\},$ $\mathcal H_\mathrm{OUT} = \operatorname{span}\{ \left| F, p_1\ldots p_n \right\rangle = a_f^\dagger (p_1)\cdots a_f^\dagger (p_n)\left| F, 0\right\rangle\}.$ It is possible to play the trick assuming that $\left| I, 0\right\rangle$ and $\left| F, 0\right\rangle$ are both invariant under translation and that the states $\left| I, k_1\ldots k_n \right\ rangle$ and $\left| F, p_1\ldots p_n \right\rangle$ are eigenstates of the momentum operator $\mathcal P^\mu$, by adiabatically turning on and off the interaction. In the Heisenberg picture the states are time-independent, so we can expand initial states on a basis of final states (or vice versa) as follows: $\left| I, k_1\ldots k_n \right\rangle = C_0 \left| F, 0\right\rangle\ + \sum_{m=1}^\infty \int{d^4p_1\ldots d^4p_mC_m(p_1\ldots p_m)\left| F, p_1\ldots p_m \right\rangle}$ Where $\left|C_m\right|^2$ is the probability that the interaction transforms $\left| I, k_1\ldots k_n \right\rangle$ into $\left| F, p_1\ldots p_m \right\rangle$ According to Wigner's theorem, $S$ must be a unitary operator such that $\left \langle I,\beta \right |S\left | I,\alpha\right\rangle = S_{\alpha\beta} = \left \langle F,\beta | I,\alpha\right\ rangle$. Moreover, $S$ leaves the vacuum state invariant and transforms IN-space fields in OUT-space fields: $S\left|0\right\rangle = \left|0\right\rangle$ $\phi_f=S\phi_i S^{-1}$ If $S$ describes an interaction correctly, these properties must be also true: S-matrix and evolution operator U[edit] Define a time-dependent creation and annihilation operator as follow $a^{\dagger}\left(k,t\right)=U^{-1}(t)a^{\dagger}_i\left(k\right)U\left( t \right)$ $a\left(k,t\right)=U^{-1}(t)a_i\left(k\right)U\left( t \right)$ $\phi_f=U^{-1}(\infty)\phi_i U(\infty)=S^{-1}\phi_i S.$ where we have $S= e^{i\alpha}\, U(\infty)$ . We allow a phase difference given by $e^{i\alpha}=\left\langle 0|U(\infty)|0\right\rangle^{-1}$ because for $S$: $S\left|0\right\rangle = \left|0\right\rangle \Longrightarrow \left\langle 0|S|0\right\rangle = \left\langle 0|0\right\rangle =1$ Substituting the explicit expression for U we obtain: $S=\frac{1}{\left\langle 0|U(\infty)|0\right\rangle}\mathcal T e^{-i\int{d\tau H_{\rm{int}}(\tau)}}.$ where $H_{\rm{int}}$ is the interaction part of the hamiltonian and $\mathcal T$ is the time ordering. By inspection it can be seen that this formula is not explicitly covariant. Dyson series[edit] Main article: Dyson series The most widely used expression for the S-matrix is the Dyson series. This expresses the S-matrix operator as the series: $S = \sum_{n=0}^\infty \frac{(-i)^n}{n!} \int \cdots \int d^4x_1 d^4x_2 \ldots d^4x_n T [ H_{\rm{int}}(x_1) H_{\rm{int}}(x_2) \cdots H_{\rm{int}}(x_n)]$ See also[edit]
{"url":"http://www.digplanet.com/wiki/S-matrix","timestamp":"2014-04-17T22:27:05Z","content_type":null,"content_length":"62085","record_id":"<urn:uuid:65b5ea27-9c37-4ac2-a385-507a69c044b0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
line plot October 14th 2007, 09:36 AM line plot Help! How would you make a line plot of a set of 15 name lengths with a range from 8 letters to 16 letters? This is a bit confusing for me. Trying to help my 11 yr. old grandson to understand this. Thanks October 14th 2007, 09:40 AM is it possible for you to be a bit more specific? i'm afraid i don't really get what you are asking October 14th 2007, 10:03 AM line plot The problem just says "make a lice plot of a set of fifteen name lengths with a range from 8 to 16 letters".
{"url":"http://mathhelpforum.com/algebra/20560-line-plot-print.html","timestamp":"2014-04-25T01:22:46Z","content_type":null,"content_length":"4626","record_id":"<urn:uuid:820798f0-1f1f-4607-a9c8-586f121d017a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Proceedings of School and Workshops on the "Standard Model and Beyond - Standard Cosmology" and on "Cosmology - Strings: Theory - Cosmology - Phenomenology" The proceedings of the 1st and 2nd week of the 9th Hellenic School on Elementary Particle Physics and Gravity, Corfu 2009 will be published in Fortsch.Phys. Prospective authors should have received an invitation by email. If you think that you should have received one but didn't, please contact Konstantinos Anagnostopoulos. Manuscripts should be submitted electronically to the journal. The formal procedures for the submission are explained at the homepage of the journal (click here to be redirected) In the submission procedure you will be asked for a cover letter where you can write that this is a contribution to the Corfu 2009 proceedings. Please ask Hans-Jörg Otto to be your editor. The length of the manuscript should not exceed 6 pages for a lecture and 4 pages for a regular talk and the deadline for the submission is February 28 (extended), 2010. Instructions for authors (Note: Upload your pdf as "main document" and all sources in a zip file as "suppl. material not for review"). LaTeX files and style sheets Proceedings of the 2nd School on Quantum Gravity and Quantum Geometry The proceedings of the 2nd School on Quantum Gravity and Quantum Geometry session of the 9th Hellenic School on Elementary Particle Physics and Gravity, Corfu 2009 will be published in a dedicated issue of General Relativity and Gravitation Prospective authors should have received an invitation by email. If you think that you should have received one but didn't, please contact Konstantinos Manuscripts should be submitted electronically to the journal. The formal procedures for the submission are explained at the homepage of the journal (click here to be redirected). In the online submission form there is a section called "Comments" where your should should enter: "Proceedings Corfu 2009". There are no LaTeX templates, a standard file will do. The length of the manuscript should be no longer than 25 pages for lectures and 10 pages for talks and the deadline for the submission is February 28 (extended), 2010.
{"url":"http://www.physics.ntua.gr/corfu2009/proceedings.html","timestamp":"2014-04-16T10:09:18Z","content_type":null,"content_length":"15250","record_id":"<urn:uuid:d5c12565-8a37-4f44-8b69-aa6448c06003>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
The Diesel The Diesel and Flat Car Inelastic Collision The animation below portrays the inelastic collision between a very massive diesel and a less massive flatcar. Before the collision, the diesel is in motion with a velocity of 5 km/hr and the flatcar is at rest. The mass of the diesel is 8000 kg and the mass of the flatcar is 2000 kg. The diesel has four times the mass of the flatcar. After the collision, both the diesel and the flatcar move together with the same velocity. (Collisions such as this where the two objects stick together and move with the same post-collision velocity are referred to as inelastic collisions.) What is the after-collision velocity of the two railroad cars? Collisions between objects are governed by laws of momentum and energy. When a collision occurs in an isolated system, the total momentum of the system of objects is conserved. Provided that there are no net external forces acting upon the two cars, the momentum of the diesel and the flatcar before the collision equals the momentum of the diesel and the flatcar after the collision The mathematics of this problem is simplified by the fact that before the collision, there is only one object in motion and after the collision both objects have the same velocity. That is to say, a momentum analysis would show that all the momentum was concentrated in the diesel before the collision. And after the collision, all the momentum was the result of a single object (the combination of the diesel and flatcar) moving at an easily predictable velocity. The prediction of the final velocity of the two cars involves determining the ratio by which the mass which is in motion changed; and then dividing the initial velocity by that ratio. That is if the amount of mass in motion increases by a factor of two, then the velocity would decrease by a factor of two (divide the original velocity by two). If the amount of mass in motion increases by a factor of five, then the velocity would decrease by a factor of five (divide the original velocity by five). In the case of the animation above, the amount of mass in motion increased by a factor of 5/4; a change from say 8000 kg for the diesel before the collision to 10 000 kg for the combination of the diesel and flatcar after the collision. Since the amount of mass in motion increased by a factor of 5/4, the velocity at which that mass is in motion must decrease by a factor of 5/4. That is, the original velocity of 5 km/hr must be divided by 5/4. The result is 4 km/hr; the diesel and flatcar move together with a velocity of 4 km/hr after the collision. For more information on physical descriptions of motion, visit The Physics Classroom Tutorial. Detailed information is available there on the following topics: Momentum Conservation Principle Isolated Systems Momentum Conservation in Collisions
{"url":"http://www.physicsclassroom.com/mmedia/momentum/dft.cfm","timestamp":"2014-04-17T10:08:54Z","content_type":null,"content_length":"48518","record_id":"<urn:uuid:12226548-d4f2-425b-9c48-56751be58a34>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Vertex Form of a Quadratic Ppt Presentation Vertex Form of a Quadratic : Vertex Form of a Quadratic Equation: Slide 2: To review: Given K – shifts the graph k units up or down (positive – up, negative – down) H – shifts the graph h units up or down (positive – left, negative – right) A – if negative, reflects the graph over the x axis Slide 3: Graph of a Quadratic: Vertex: (0, 0) Axis of Symmetry: x = 0 Vertex is a minimum point Slide 4: The green graph is the graph of the function above while the purple graph Is the standard quadratic. Vertex: (-2, -3) Axis of Symmetry: x= -2 Vertex is a minimum Slide 5: The red graph is the graph of the function above Vertex: (2, 4) Axis of symmetry: x = 2 Vertex is a maximium Slide 6: As you can see from the previous graphs, you can find the vertex, axis of symmetry, and the maximum or minimum just by using the equation. Vertex: (h, k) (the opposite of “h” and “k”) Axis of symmetry: x = h (use the x coordinate of the vertex) If a is positive, the vertex is a minimum If a is negative, the vertex is a maximum Slide 7: Find the vertex, the axis of symmetry, and determine the maximum or minimum. Vertex: (-5, -3) Axis of symmetry: x = -5 Vertex is a maximum Vertex: (4, -1) Axis of Symmetry: x = 4 Vertex is a minimum
{"url":"http://www.authorstream.com/Presentation/bsndev-242957-vertex-form-quadratic-entertainment-ppt-powerpoint/","timestamp":"2014-04-19T00:06:22Z","content_type":null,"content_length":"127432","record_id":"<urn:uuid:fa0d44e3-9fc2-4c48-af2b-588463d2c8b9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Popular Vote v Electoral Vote 8:24 PM Jun 16, 2008 Popular Vote v Electoral Vote This might be my favorite graph that we’ve done so far: a comparison of Barack Obama’s popular and electoral vote totals across the first 1,000 simulations that we ran last night: Several interesting things to point out: 1. The relationship between the popular vote and the electoral vote is approximately linear, except at the endpoints. As a rule of thumb, a gain of one percentage point in a Obama’s popular vote share results in a gain of 25 electoral votes. This is also, you will note, a pretty steep slope. If Obama wins the election by 4 percentage points, he projects to win by approximately 100 electoral votes (319-219). 2. The regression line crosses the y-intercept at 269.3 electoral votes, which is almost exactly half of 538. That means that there does not appear to be any systematic advantage in the electoral vote math to one candidate or another, at least based on our present rendering of these numbers. 3. Where you do see a little bit of skew are those scenarios where one candidate wins by about 5-15 percentage points. In those cases, the winning candidate tends to win by more electoral votes than is predicted by the regression line. This is because an especially high number of states are within reach for one or another candidate. In contrast to 2004, when 16 states and the District of Columbia were decided by 20 or more points, very few are polling that way this year. 4. The range of possible outcomes given any specific value of the popular vote is about 80-100 electoral votes wide. For example, an Obama win by 5 percentage points could easily be associated with any number from about 290 electoral votes up to as many as 390, depending on how the individual states shake out. Likewise, for any given value of the electoral vote, the range of the popular vote margin is about 6 or 7 percentage points wide. What this means, among other things, is that it’s virtually impossible for a candidate to win the electoral college while losing the popular vote by more than about 3 or 3.5 percentage points. comments Add Comment
{"url":"http://fivethirtyeight.com/features/popular-vote-v-electoral-vote/","timestamp":"2014-04-19T12:31:39Z","content_type":null,"content_length":"58304","record_id":"<urn:uuid:081b8f13-8ff1-4199-a262-b0e55d786818>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher-order uni with dependent types , 2001 "... We present a variant of Proof-Carrying Code (PCC) in which the trusted inference rules are represented as a higherorder logic program, the proof checker is replaced by a nondeterministic higher-order logic interpreter and the proof by an oracle implemented as a stream of bits that resolve the nondet ..." Cited by 55 (3 self) Add to MetaCart We present a variant of Proof-Carrying Code (PCC) in which the trusted inference rules are represented as a higherorder logic program, the proof checker is replaced by a nondeterministic higher-order logic interpreter and the proof by an oracle implemented as a stream of bits that resolve the nondeterministic interpretation choices. In this setting, Proof-Carrying Code allows the receiver of the code the luxury of using nondeterminism in constructing a simple yet powerful checking procedure. This oracle-based variant of PCC is able to adapt quite naturally to situations when the property being checked is simple or there is a fairly directed search procedure for it. As an example, we demonstrate that if PCC is used to verify type safety of assembly language programs compiled from Java source programs, the oracles that are needed are on the average just 12% of the size of the code, which represents an improvement of a factor of 30 over previous syntactic representations of PCC proofs. ... - Topics in Advanced Language Implementation , 1991 "... ions *) and varbind = Varbind of string * term (* Variable binders , Type *) In the implementation of the term language and the type checker, we have two constants type and pi. And, yes, type is a type, though this could be avoided by introducing universes (see [16]) without any changes to the code ..." Cited by 35 (0 self) Add to MetaCart ions *) and varbind = Varbind of string * term (* Variable binders , Type *) In the implementation of the term language and the type checker, we have two constants type and pi. And, yes, type is a type, though this could be avoided by introducing universes (see [16]) without any changes to the code of the unifier. As is customary, we use A ! B as an abbreviation for \Pix : A: B if x does not occur free in B. Also, however, \Pix : A: B is an abbreviation for the application pi A (x : A: B). In our formulation, then, the constant pi has type \PiA : type: ((A ! type) ! type). As an example consider a predicate constant eq of type \PiA : type: A ! A ! o (where o is the type of formulas as indicated in Section 9). The single clause eqAM M: correctly models equality, that is, a goal of the form eq AM N will succeed if M and N are unifiable. The fact that unification now has to branch can be seen by considering the goal eq int (F 1 1) 1 which has three solutions for the functional logic var...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2198287","timestamp":"2014-04-20T05:04:45Z","content_type":null,"content_length":"15641","record_id":"<urn:uuid:e389f769-d238-411d-844d-3c4a1d62f5a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Using the difference quotient? May 16th 2010, 10:51 AM #1 May 2010 Using the difference quotient? Hi! I'm new to Math Help Forums, so you'll have to forgive me if I don't do things correctly. I'm not quite sure how to post math icons either =/ So, I just have a basic difference quotient problem that was actually included in my review packet for the beginning of calculus one. Evaluate the function at the given value of the independent variable. F(x) = x^3 f(x+deltax)-f(x)/delta x I got as far as plugging it in: (x+delta x)^3-x^2/delta x And, I know I'm supposed to simplify things out to get rid of the delta x on the bottom. I tried multiplying out the top and I got [x^2+x(delta x) +x(delta x) + (delta x)^2][x+delta x]/delta x. I'm not entirely sure if that's right, because the delta x's kind of confuse me. I wasn't quite sure how to continue. EDIT: I forgot to mention! The back of the book states that the answer is 3x^2 + 3x(delta x)+(delta x)^2, where delta x does not equal zero. Thanks again! Last edited by Ohoneo; May 16th 2010 at 11:00 AM. Reason: Addendum It's mostly just an algebra problem in the numerator. Notice that $(a+b)^3 = a^3 + 3a^2 b + 3a b^2 + b^3$ $\frac{f(x+ \Delta x) - f(x)}{\Delta x} = \frac{(x+ \Delta x)^3 - x^3}{\Delta x} = \frac{x^3 + 3 x^2 \Delta x + 3 x (\Delta x)^2 + (\Delta x)^3 - x^3}{\Delta x}$$= \frac{3 x^2 \Delta x + 3 x (\ Delta x)^2 + (\Delta x)^3}{\Delta x}$ If the $\Delta x$ terms are throwing you off, you can use $h$ instead. This is actually more common notation anyway. $\frac{f(x+ h) - f(x)}{h} = \frac{(x+ h)^3 - x^3}{h} = \frac{x^3 + 3 x^2 h + 3 x h^2 + h^3 - x^3}{h}$$= \frac{3 x^2 h + 3 x h^2 + h^3}{h}$ Does this help? Oh my gosh! I couldn't believe I forgot the difference of cubes! Thank you so much, that definitely makes more sense. I was trying to multiply everything out by hand instead of using a simple equation. Thank you! May 16th 2010, 11:06 AM #2 Senior Member Jan 2010 May 16th 2010, 11:09 AM #3 May 2010
{"url":"http://mathhelpforum.com/calculus/145009-using-difference-quotient.html","timestamp":"2014-04-23T21:00:40Z","content_type":null,"content_length":"36139","record_id":"<urn:uuid:00a6bc8d-4356-4617-acd7-25945c9c47f6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Rutherford, NJ SAT Math Tutor Find a Rutherford, NJ SAT Math Tutor ...I have an advanced degree in Philosophy, which has made me proficient in the English language sections of the ASVAB; and I've completed a Bachelors program in mathematics, which qualifies me to teach the mathematical sections. I cannot offer tutoring for any of the other sections. I have taken MIT's 6.00x course in Computer Science and gained certification. 32 Subjects: including SAT math, physics, calculus, GRE ...I also teach students to love these incredibly boring passages, which makes the section more tolerable for those who generally dislike this type of work. Secondarily, I have developed a vocabulary bank that helps students on vocab based questions. As an attorney I possess the highest level of proofreading and writing ability. 25 Subjects: including SAT math, English, reading, geometry ...I have been a very active tutor in excellent standing with WyzAnt (see my ratings and reviews) and I always include study skills as part of my tutoring, regardless of subject; skills such as how to focus, time management, recitation, using index cards, using the internet, as well as planning and ... 29 Subjects: including SAT math, reading, biology, ASVAB ...I have experience tutoring in a broad subject range, from Algebra through college level Calculus.I recently passed and an proficient in the material on both Exams P/1 and FM/2. I am able to tutor for the Praxis for Mathematics Content Knowledge. I have passed this test myself, getting only one question incorrect. 21 Subjects: including SAT math, calculus, geometry, statistics ...Joseph's College in Patchogue and a masters degree in English literature from St. John's University. I am recently becoming certified to become a ESL/TEFL instructor. 21 Subjects: including SAT math, reading, English, ESL/ESOL Related Rutherford, NJ Tutors Rutherford, NJ Accounting Tutors Rutherford, NJ ACT Tutors Rutherford, NJ Algebra Tutors Rutherford, NJ Algebra 2 Tutors Rutherford, NJ Calculus Tutors Rutherford, NJ Geometry Tutors Rutherford, NJ Math Tutors Rutherford, NJ Prealgebra Tutors Rutherford, NJ Precalculus Tutors Rutherford, NJ SAT Tutors Rutherford, NJ SAT Math Tutors Rutherford, NJ Science Tutors Rutherford, NJ Statistics Tutors Rutherford, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Rutherford_NJ_SAT_Math_tutors.php","timestamp":"2014-04-18T06:16:18Z","content_type":null,"content_length":"24202","record_id":"<urn:uuid:58c0aac3-afdc-402b-9ea4-bcbf0988e617>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Within Any Possible Universe, No Intellect Can Ever Know It All Deep in the deluge of knowledge that poured forth from science in the 20th century were found ironclad limits on what we can know. Werner Heisenberg discovered that improved precision regarding, say, an object’s position inevitably degraded the level of certainty of its momentum. Kurt Gödel showed that within any formal mathematical system advanced enough to be useful, it is impossible to use the system to prove every true statement that it contains. And Alan Turing demonstrated that one cannot, in general, determine if a computer algorithm is going to halt. David H. Wolpert, a physics-trained computer scientist at the NASA Ames Research Center, has chimed in with his version of a knowledge limit. Because of it, he concludes, the universe lies beyond the grasp of any intellect, no matter how powerful, that could exist within the universe. Specifically, during the past two years, he has been refining a proof that no matter what laws of physics govern a universe, there are inevitably facts about the universe that its inhabitants cannot learn by experiment or predict with a computation. Philippe M. Binder, a physicist at the University of Hawaii at Hilo, suggests that the theory implies researchers seeking unified laws cannot hope for anything better than a “theory of almost everything.” Wolpert’s work is an effort to create a formal rigorous description of processes such as measuring a quantity, observing a phenomenon, predicting a system’s future state or remembering past information—a description that is general enough to be independent of the laws of physics. He observes that all those processes share a common basic structure: something must be configured (whether it be an experimental apparatus or a computer to run a simulation); a question about the universe must be specified; and an answer (right or wrong) must be supplied. He models that general structure by defining a class of mathematical entities that he calls inference devices. The inference devices act on a set of possible universes. For instance, our universe, meaning the entire world line of our universe over all time and space, could be a member of the set of all possible such universes permitted by the same rules that govern ours. Nothing needs to be specified about those rules in Wolpert’s analysis. All that matters is that the various possible inference devices supply answers to questions in each universe. In a universe similar to ours, an inference device may involve a set of digital scales that you will stand on at noon tomorrow and the question relate to your mass at that time. People may also be inference devices or parts of one. Wolpert proves that in any such system of universes, quantities exist that cannot be ascertained by any inference device inside the system. Thus, the “demon” hypothesized by Pierre-Simon Laplace in the early 1800s (give the demon the exact positions and velocities of every particle in the universe, and it will compute the future state of the universe) is stymied if the demon must be a part of the universe. Researchers have proved results about the incomputability of specific physical systems before. Wolpert points out that his result is far more general, in that it makes virtually no assumptions about the laws of physics and it requires no limits on the computational power of the inference device other than it must exist within the universe in question. In addition, the result applies not only to predictions of a physical system’s future state but also to observations of a present state and examining a record of a past state. The theorem’s proof, similar to the results of Gödel’s incompleteness theorem and Turing’s halting problem, relies on a variant of the liar’s paradox—ask Laplace’s demon to predict the following yes/ no fact about the future state of the universe: “Will the universe not be one in which your answer to this question is yes?” For the demon, seeking a true yes/no answer is like trying to determine the truth of “This statement is false.” Knowing the exact current state of the entire universe, knowing all the laws governing the universe and having unlimited computing power is no help to the demon in saying truthfully what its answer will be.
{"url":"http://www.scientificamerican.com/article/limits-on-human-comprehension/","timestamp":"2014-04-16T04:50:43Z","content_type":null,"content_length":"61280","record_id":"<urn:uuid:265c2fd3-54d8-48ff-a478-7477424c8474>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Code to sort the leading digit of a number. Join Date Nov 2013 Rep Power Hello Guys, So I have been given an assignment by my teacher to create a program that takes a bunch of data and sorts them by the leading digit. By this I mean it finds the leading digit of a number (e.g. 260 leading digit: 2) and then will increase the count of the array for "2" by one (yes im horrible at explaining this). So here is what I have so far. Java Code: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; public class BenfordsLaw { public static void main(String[] args) throws FileNotFoundException { // TODO Auto-generated method stub Scanner input = new Scanner(new File("spanish-city-pop.txt")); while (input.hasNextInt()) { int next = input.nextInt(); // process next int array[]={next}; for(int counter = 0; counter<array.length;counter++){ System.out.println(counter + "\t" + array [counter]); Yes I know all this does is print out all the digits in the txt document. I have no clue where to start so for now I really need to just find a way to get the leading digit of each number. Anyone have any recommendations? What is required by the end of the program: 1) Create a project in Eclipse called Benford with a class named Benford (make sure you have the main method check box!) 2) Your project should read in the data from a file called data.txt - you can put any data you want in the file, such as US population numbers over the past 100 years, etc. Just make sure all of the data is of type integer! This file should be located in your project folder, NOT THE SRC. 3) Your project then should determine for each number what the leading digit is, and tally the results in an array of appropriate length. 4) Finally, you should display the results in a table. Last edited by CreatingDrake; 11-30-2013 at 08:12 PM. Would this be the correct order for sorted numbers: 11111 22 3333333 4 55 What does this mean: count of the array for "2" by one ? What would the results printed by the program look like? If you don't understand my response, don't ignore it, ask a question. Join Date Nov 2013 Rep Power Okay well I will try to split it up I have worked on the code and am getting an "type mismatch" saying it cant convert from int to int[]. Here is the code: Java Code: public static int[] countDigits(Scanner input) { int[] count = new int[10]; int i = input.nextInt(); while (Math.abs(i) >= 10 ) { i = i / 10; return Math.abs(i); why am I getting this error? cant convert from int to int[]. You can't convert an int to an array of int. The method is defined to return an array of int. The abs() method does not return an array. Either change what the method is defined to return to an int or have the method return an int array. If you don't understand my response, don't ignore it, ask a question. Join Date Jan 2013 United States Rep Power Sounds like a frequency count too me, counting numbers based on their first digit. Join Date Nov 2013 Rep Power Okay now I have to make a few lines of code that return the first nonzero digit of a string, 0 if no such digit found. I was thinking of something like this. Java Code: public static int firstDigitOf(String token) { for (char ch : token.toCharArray()) { if (ch >= '1' && ch <= '9') { return ch - '0'; return 0; I sent that over to my teacher but he told me to try to use it without char or token. In particular I am having trouble finding an alternative to this phrase. Java Code: for (char ch : token.toCharArray()) { Any alternative code to this? Join Date Jan 2013 United States Rep Power Since you are returning 0 if you can't find non-zero, you just want the first digit regardless. Try looking at the String methods. There are several that can do the job. my teacher but he told me to try to use it without char or token. Is your teacher asking you to use int values? Not String or char. A way to strip digits out of an int value is to use the % and / operators. If you don't understand my response, don't ignore it, ask a question. Join Date Nov 2013 Rep Power We haven't touched char at all in class, I just did some research and used it in self projects so I get it. We have touched on String I just don't know a way to put for (char ch : token.toCharArray()) { into a string compatible form. Join Date Nov 2013 Rep Power Okay I think I came up with an alternative I am still stuck on the math.abs(i); error. I didnt post all the code which is my bad but changing the int [] to just int creates other problems posed earlier in the code. Here is the ENTIRE code. Java Code: import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; public class BenfordsLaw { public static void main(String[] args) throws FileNotFoundException { Scanner console = new Scanner(System.in); System.out.println("Let's count those leading digits..."); System.out.print("input file name? "); String name = console.nextLine(); Scanner input = new Scanner(new File(name)); int[] count = countDigits(input); // Reads integers from input, computing an array of counts // for the occurrences of each leading digit (0-9). public static int[] countDigits(Scanner input) { int[] count = new int[10]; int i = input.nextInt(); while (Math.abs(i) >= 10 ) { i = i / 10; return Math.abs(i); // returns the first nonzero digit of a string, 0 if no such digit found public static int firstDigitOf(String digits) { if (digits.length() == 0) return 0; int firstDigit = Integer.parseInt(digits.substring(0, 1)); if (firstDigit > 0 && firstDigit < 10) return firstDigit; return firstDigitOf(digits.substring(1)); // Reports percentages for each leading digit, excluding zeros public static void reportResults(int[] count) { if (count[0] > 0) { System.out.println("excluding " + count[0] + " tokens"); int total = sum(count) - count[0]; System.out.println("Digit Count Percent"); for (int i = 1; i < count.length; i++) { double pct = count[i] * 100.0 / total; System.out.printf("%5d %5d %6.2f\n", i, count[i], pct); System.out.printf("Total %5d %6.2f\n", total, 100.0); // returns the sum of the integers in the given array public static int sum(int[] data) { int sum = 0; for (int n : data) { sum += n; return sum; // returns the first digit of the given number public static int firstDigit(int n) { int result = Math.abs(n); while (result >= 10) { result = result / 10; return result; How should I change this code to eliminate the Type mismatch error but not coming up with any other errors. Please copy the full text of the error messages for the posted code and paste it here. It has important info about the error. If you don't understand my response, don't ignore it, ask a question. Join Date Nov 2013 Rep Power Exception in thread "main" java.lang.Error: Unresolved compilation problem: Type mismatch: cannot convert from int to int[] at BenfordsLaw.countDigits(BenfordsLaw.java:27) at BenfordsLaw.main(BenfordsLaw.java:15) Type mismatch: cannot convert from int to int[] at BenfordsLaw.countDigits(BenfordsLaw.java:27) At line 27 the compiler found code that tries to convert an int to an int[]. That can't be done. See post#4 If you don't understand my response, don't ignore it, ask a question. Join Date Nov 2013 Rep Power I know that but the problem is if I change the int [] to int on line 21 I get a error on line 15 Join Date Jan 2013 United States Rep Power In the following code: Java Code: // Reads integers from input, computing an array of counts // for the occurrences of each leading digit (0-9). public static int[] countDigits(Scanner input) { int[] count = new int[10]; int i = input.nextInt(); while (Math.abs(i) >= 10 ) { i = i / 10; return Math.abs(i); It is called countDigits but you aren't counting anything. Why do you have an array allocated? It isn't being used. And why to you repeatedly take the absolute value of a number? No matter how many times you divide a number, it will never turn I would recommend that you simply get the user input outside of the method and then use the method to get the first digit of the passed integer. Change line 15 to be an int, not an int[] If you don't understand my response, don't ignore it, ask a question. Join Date Nov 2013 Rep Power The reportResults doesn't work on line 16 Join Date Nov 2013 Rep Power Nevermind the reportResults is pointless actually (i think), starting to get overwhelmed with the code. Then go the other way: create an int[] and return that instead of an int value. If you don't understand my response, don't ignore it, ask a question. Join Date Nov 2013 Rep Power I had a friend recommend coding the first two sections like this: Java Code: public static void main(String[] args) throws FileNotFoundException { Scanner console = new Scanner(System.in); System.out.println("Let's count those leading digits..."); System.out.print("input file name? "); String name = console.nextLine(); Scanner input = new Scanner(new File(name)); int[] count = countDigits(input); // Reads integers from input, computing an array of counts // for the occurrences of each leading digit (0-9). // Reads integers from input, computing an array of counts // for the occurrences of each leading digit (0-9). public static int[] countDigits(Scanner input) { int[] count = new int[10]; while (input.hasNextInt()) { int n = input.nextInt(); return count; I am wondering does "firstDigit" obtain the first digit of the number (could be a dumb questions sorry). If that is the case I am sure that is not allowed.
{"url":"http://www.java-forums.org/new-java/83925-code-sort-leading-digit-number.html","timestamp":"2014-04-20T19:54:49Z","content_type":null,"content_length":"136173","record_id":"<urn:uuid:7bb8340f-f00f-439d-8dc5-f16018213dfa>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
NWCU by the numbers Law Students > Distance Education Law Schools NWCU by the numbers (1/3) > >> At present there are in the neighborhood of 600 students enrolled at NWCU school of law. About 300 are 1L. In 2011, 152 took the FYLSE for the first time. 31 passed. 72 took the FYLSE for the second time. 27 passed. 224 students took the FYLSE in 2011. 58 students passed the FYLSE in 2011. If you're willing to accept an inference or two, then I'll take a stab at drawing a few conclusions. If yours are different than mine, I'd love to hear them. I'm not a rocket scientist. First assumption: Since there are about 300 1L's today, there were 300 1L's between 2010 and 2011, give or take. First question: if two-thirds of the 1L's took the baby bar for the first or second time in 2011, what happened to the other third, more or less? First conclusion: The 1L attrition rate is between 35% and 50%. This isn't really surprising. Second question: if only 58 of 224 FYLSE takers passed ( 25%), what became of the 166 who failed? Second conclusion: Given the number of second time takers is half that of first timers, half of those who fail the first time give up and leave, becoming the remainder of 1L attrition. The rest become repeat takers. 83 first time fails + 75 that dropped before taking the FYLSE= 150+/-, divided into 300 equals 50%. Second assumption: The population of 2, 3 and 4L's is also about 300, give or take. In 2012, 26 graduates took the bar for the first time. 5 passed. 55 retook the bar. 10 passed. 81 graduates took the bar in 2012. 15 became lawyers. Second conclusion: The attrition rate after 1L is much lower. Neither is this terribly surprising. Third conclusion: Given that the 15 graduates who became lawyers in 2012 were once among a 1L class of 300, the odds of a new 1L becoming a lawyer are 15 in 300, or 5%. I'll bet these 15 lawyers could have succeeded at any ABA law school, but that's just speculation. Back to the 1L's. If you accept the facts as I have proposed them, every year about 150 1L's hand over $3000 to NWSCU and leave empty handed. That adds up to $450,000 a year. If every one of the remaining 300 1L's goes on to graduate, they will have paid a total of another $2,700,000 over another 3 years, but only 15 become lawyers. The remaining 285 leave with a JD from an unaccredited correspondence school. But let's focus on the positive. First, 15 new lawyers each paid just $12,000 for law school. And if they find jobs, it'll be worth every penny. Could you say that our winners are subsidized on the backs of a great many losers? OK. But hey, NWCU school of law, winner, winner, chicken dinner! Jackpot. $1,800,000 a year in revenues. You don't think this business model has great margins? Think again. I know. I'm a 1L. At NWCU school of law, 1L's had better be prepared to fail alone, and on their own. 1L's interact with 1P. One instructor for 300 1L's, that is. He got his law degree from, wait for it…NWCU school of law. There are actually a total of 11 members of the faculty. I have no idea what the other 10 do. Did I make a mistake choosing NWCU? Maybe. On the other hand I don't intend to become a lawyer, so I'm sure to succeed here! Thanks for the interesting observations. Not everyone believes education is a business, but when it boils down to it, almost everything is run to some degree by bean counters. Please keep us posted on your progress. Where did you get the statistic 600 student at NWCU Law? That seems extremely high. Are you sure you read those stats right? You're asking me if I can read? June 2012 FYLSE has 31 NWCU First timers taking it and 69 total October 2011 - 31 First Timers from NWCU and 74 total June 2011 41 FT and 78 total Given that many of these students were stalled and only a third passed - the ongoing enrollment at NWCU is likely less than 200. The ones that can't get by the FYLSE exam don't count. The First Timer total would seem to reflect an annual enrollment of of under 100 new students a year even counting the wash outs that don't make it to the FYLSE. No way the NWCU is collecting tuition at an annual rate of 600 students. Bottom line, may will try but few succeed because most are unqualified to pass a simple exam. [0] Message Index [#] Next page Go to full version
{"url":"http://www.lawschooldiscussion.org/index.php?topic=4027734.0;wap2","timestamp":"2014-04-21T08:08:05Z","content_type":null,"content_length":"7028","record_id":"<urn:uuid:3c98a918-c59e-49c7-a084-598b554cee34>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
An Inventory Controlled Supply Chain Model Based on Improved BP Neural Network Discrete Dynamics in Nature and Society Volume 2013 (2013), Article ID 537675, 7 pages Research Article An Inventory Controlled Supply Chain Model Based on Improved BP Neural Network Research Center of Cluster and Enterprise Development, School of Business Administration, Jiangxi University of Finance & Economics, Nanchang 330013, China Received 22 June 2013; Revised 8 September 2013; Accepted 10 October 2013 Academic Editor: Zhigang Jiang Copyright © 2013 Wei He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Inventory control is a key factor for reducing supply chain cost and increasing customer satisfaction. However, prediction of inventory level is a challenging task for managers. As one of the widely used techniques for inventory control, standard BP neural network has such problems as low convergence rate and poor prediction accuracy. Aiming at these problems, a new fast convergent BP neural network model for predicting inventory level is developed in this paper. By adding an error offset, this paper deduces the new chain propagation rule and the new weight formula. This paper also applies the improved BP neural network model to predict the inventory level of an automotive parts company. The results show that the improved algorithm not only significantly exceeds the standard algorithm but also outperforms some other improved BP algorithms both on convergence rate and prediction accuracy. 1. Introduction Inventory control is one of the key topics for supply chain management. Usually inventory takes the form of raw material, work in process (WIP) products, semifinished products, or finished products. Inventory cost is the main cost for supply chain management. A drop of just several percentage points of inventory cost can greatly increase the profits of the whole supply chain. In addition, sound inventory level can prevent shortage of material, maintain the continuity of the production process, and quickly satisfy customers' demand. Thereby, exploring the optimal inventory level is very necessary and valuable for supply chain management. To date, the following inventory control problems need to be addressed [1, 2].(1)There are highly nonlinear models which are hard to process.(2)There are qualitative indicators which are hard to deal with.(3)The unchangeable indicators of inventory control lack self-adaptation.(4)Information of inventory control models is always indirect and the collection of information is time-consuming and of low efficiency.(5)Inventory control models always ignore the influence of uncertain factors, such as lead time, transportation conditions, and change of demand. Considering the above problems, traditional inventory control theory is hard to meet the requirement posed by the new environment. Thanks to the uncertain feature of inventory control and the strengths of neural network in model prediction, this paper chooses to use BP neural network to establish inventory model and predict inventory level. BP neural network is a kind of nonlinear feed forward network which has good nonlinear mapping ability. Theories have proved that BP network can approach any nonlinear mapping relationship given enough input and hidden layers while there is no necessity to establish a mathematical model. Furthermore, by learning and training, BP network can store information systematically in weight matrix . In doing so, it indicates that BP network can memorize the characteristics of inventory information and at the same time can adapt to the changes of inventory environment. In view of the features of BP neural network, it has great advantages in classification and prediction. However, it is acknowledged that BP neural network also has such problems as slow convergence and easily converging to local minimum when forecasting. Considering the shortcomings of standard BP algorithm, this paper proposes a new fast convergent BP neural network model for predicting inventory level. By adding an error offset, this paper deduces the new chain propagation rule and the updated weight formula. The application of the improved BP neural network model to predict the inventory level of an automotive parts company shows that the improved algorithm significantly outperforms the standard algorithm and some other improved BP algorithms both on convergence rate and prediction accuracy. This paper proceeds as follows: Section 2 has a wide review of related literature. Based on the standard BP neural network, Section 3 introduces an improved BP neural network. Section 4 applies the improved BP algorithm to predict the inventory level of an automotive parts company. Section 5 draws some conclusions according to the results. 2. Literature Review Recently, more and more scholars have applied neural network technique to inventory control. Bansal et al. used a neural network-based data mining technique to solve the problem of inventory of a large medical distribution company [3]. Based on the neural network model described by them, a prototype was conceived with data from a large decentralized organization. The prototype was successful in reducing the total level of inventory by 50% in the organization, while maintaining the same level of probability that a particular customer's demand would be satisfied. Shanmugasundaram et al. [4 ] discussed the use of neural network-based data mining and knowledge discovery techniques to optimize inventory levels in a large medical distribution company [4]. They identified the strategic data mining techniques used to address the problem of estimating the future sales of medical products using past sales data and used recurrent neural networks to predict future sales. Reyes-Aldasoro et al. adopted neural network technique to create a hybrid framework that could be utilized for analysis, modeling, and forecasting purposes [5]. The framework combined two existing approaches and introduced a new associated cost parameter that served as a surrogate for customer satisfaction. Hong et al. developed an online neural network controller that optimized a three-stage supply chain. With the inventory data feedback from an RFID system, the neural network controller minimized the total cost of the supply chain rapidly while satisfying a target order fulfillment ratio [6]. Some of these studies further proved that the neural network technique exceeded the traditional statistical technique in forecasting inventory level [7]. In fact, comparing with traditional prediction methods, neural network has its own unique advantages in processing prediction problems such as high fault tolerance, fast prediction speed, avoidance of description of complex relation between characteristic factors and object, strong adaptation, and good uncertain information processing ability [8–10]. Although there are various neural network models, BP neural network is the most widely used model because of its simple structure and strong ability to learn. In fact, it has been widely used in inventory control. Zhang et al. used the reinforcement learning technique and the BP neural network to propose a new adaptive inventory control method for supply chain management [11]. Wang proposed a neural network-based classification approach to inventory risk level of spare parts [12]. The BP algorithm for training a neural network is used to decide the weights to connections in the model. Mansur and Kuncoro cooperated to use the market basket analysis (MBA) and artificial neural network (ANN) Back propagation to predict inventory level [13]. In addition, ANN Back propagation is used to predict product inventories requirements/needs for each product. Huang et al. applied back-propagation network (BPN) to evaluate the criticality class (I, II, III, and IV) of spare parts [14]. They found that the proposed BPN could successfully decrease inventory holding costs by modifying the unreasonable target service level setting which was decided by the criticality class. The BP neural network we mentioned above is referring to the standard BP neural network. The standard BP neural network is based on the Widrow-Hoff rule and uses the gradient descent method to transfer the mapping of a set of inputs to its correct output into nonlinear optimal problems. However, the standard BP algorithm has inherent disadvantages such as slow convergence, problem of converging to local minimum, complication of system, and random network structure selection [15, 16]. Aiming at the weaknesses of standard BP neural network, scholars have made further studies and proposed different improved BP neural network models [17–25]. The improvements of BP neural network mainly incorporate two perspectives: the direct improvements on BP neural network and the improvements based on the proposals of other new theories. Usually the first perspective includes adding the momentum factor [17], varying the learning rate dynamically [18], and introducing resilient back propagation (RPROP) [19]. The second perspective usually includes the introduction of simulated annealing genetic algorithm [24] and introduction of multiple extended Kalman algorithm [25]. All of these improved BP algorithms can reduce the training time to some degree and increase the prediction accuracy. Some have even applied the improved BP algorithms to forecasting inventory level and inventory control [26, 27]. By adding an error offset to the error function, this paper puts forward a direct improvement on standard BP neural network. Based on a dataset of an automotive parts company, it proves that the improved BP algorithm not only exceeds the standard BP algorithm both on convergence rate and prediction accuracy but also outperforms some other improved BP neural networks. 3. Improvement of BP Neural Network 3.1. Standard BP Neural Network Back-propagation algorithm or BP algorithm, one of the most widely used algorithms in artificial neural network, is a kind of supervised learning algorithm. Its main purpose is to adjust weight matrix according to the squared error between the actual output and target output. The squared error is expressed as follows: Here, denotes the th training sample, denotes the target output of the th training sample, and denotes the actual output of the th training sample. The weights to each neuron are revised according to the following delta rule: Here, denotes theth layer neural network and denotes the weight on the connection from the th neuron in the th layer to the th neuron in the th layer. is expressed as follows: Here, denotes the learning rate. By analyzing the above formula, we know that the key of BP algorithm is the calculation of . Suppose that denotes the input of th neuron, denotes the output of th neuron, and denotes the output of th neuron. Then, , . When the th neuron is the output node, we have If theth neuron is not the output node, it must be the hidden node and we have From the above analysis, we can know that the standard BP algorithm updates the weights of its output layer and hidden layer just according to the above formula. Regarded as a part of the weights, the update of bias is quite similar to that of weights so we will not give further details about its deduction. 3.2. Improved BP Neural Network To improve the convergence rate of standard BP algorithm, we propose a new algorithm, which can achieve the goal by adding an error offset. The essence of BP algorithm is the forward propagation of data and backward propagation of errors. The weight value is revised according to the errors in back propagation. However, the convergence rate of standard BP algorithm is slow and often cannot satisfy the requirements when applied. Therefore, we propose a new method: adding an error offset in back propagation to greatly improve the convergence rate. The latter experiment illustrates that its effect is quite outstanding. Here, we redefine the squared error as follows: and is the error offset, and what follows next is our deduction of from the revised squared error. Consider For the right-hand side of (7), the first half part is the same with that of standard BP algorithm. What we need to calculate is the second half part. If is the output node, then , . Consider The new weight formula is If th neuron is not the output node, then it must be the hidden node. To avoid confusion, we suppose that th is the output layer and we have The new weight formula is 4. Model Construction This paper uses the dataset of an automotive parts company to train the improved BP neural network. As we know, nowadays automobiles are comprised of lots of parts. These parts are produced on the demand of automobile manufacturers and then are sent to assembly factories to form a complete product. In this way, the whole production process of an automobile exists in the form of a supply chain. To realize the highest overall efficiency, it needs cooperation of all the suppliers, manufacturers, wholesalers, and retailers. Inventory control is an important aspect which reflects such kind of cooperation. In the following part, this paper will use the improved BP neural network to forecast the inventory level of bearings—one of the components for an automobile. 4.1. Factors Influencing Inventory Control and Selection of Sample Usually accurate inventory level is the precondition for good inventory management. For inventory management, inventory controlling cost and customers’ service levels as well as inventory controlling quality are the main factors to estimate the inventory level. Therefore, in the design of inventory control system, we mainly use these factors to predict. They are described as follows [2]. (1) Various Costs. They are one of the main indicators to evaluate inventory control strategy. The costs mainly include all the expenses in product purchase and production as well as sales. For enterprises, analyzing inventory controlling cost can effectively reduce the overall cost of enterprises. However, inventory controlling costs include many aspects and these aspects can influence each other. Therefore, dividing inventory controlling cost in details and analyzing the accumulated data of business systems to find the main factors will be helpful for enterprises to make corresponding decisions and control all kinds of costs. The costs mainly include ordering cost, storage cost, transportation cost, and shortage cost. (2) Demand Level. The purpose of inventory control is to best satisfy the demands. Therefore, demand is another important factor influencing inventory control. However, demand may be certain but also may be stochastic or seasonal. Demand level is positively proportional to inventory level. (3) Supply Level. It refers to supply level of finished products of producers. It is positively proportional to inventory level. (4) Quantity of Substitutes. It refers to the types of other parts which can substitute for the parts used. It is negatively proportional to inventory level. (5) Lead Time. It refers to the period of time from sending the order to being ready for production. It includes the time for ordering, waiting time, preparatory time for suppliers to deliver goods, time on transportation, time for check and acceptance for warehouse entry, and time for preparation for use. It is positively proportional to inventory level. (6) Customer Service Level. It refers to the possibility for enterprises to satisfy customers’ needs after customers propose the ordering requirements. It is negatively proportional to inventory level. The higher the customer service level goes, the lower the inventory level will be. In this case, we use 2 (very good), 1 (general), and 0 (poor) to represent the extent of the customer service This paper chooses the historical data of factors which influence the safety inventory level and inventory data of bearing of an automotive parts production company in one of the middle provinces of China from March 2012 to March 2013 as a sample to train the improved BP neural network. We mainly choose 100 groups of the data to train the network and then check its prediction ability. The number of training samples cannot be too small; otherwise, the network cannot learn enough which may result in low prediction ability. However, too large samples will lead to redundancy. At this time, the network will be overfitted. Therefore, this paper chooses 100 groups of data as input to train and predict and chooses inventory level as output to establish the BP neural network model. In this case, because the system is nonlinear, the initial value plays very important role in achieving local minimum. Therefore, the input sample needs to be normalized and the purpose is to make the big input values also fall in the range with large gradients of activation function. Before network training, we normalized the training data according to and made them within (see Table 1). 4.2. Network Variables Any continuous function can be realized by a three-layer artificial neural network. Therefore, this paper adopts the three-layer BP neural network structure. When all information is input into the network, the information starts by being transmitted from input layer to hidden layer. With the work of activation function, the information is then transmitted to output layer. There are 9 input factors and the output is inventory level. The selection of variables of the network is as follows. (1) Input Layer. The input layer includes 9 factors: storage cost (X1), ordering cost (X2), shortage cost (X3), transportation cost (X4), demand level (X5), supply level (X6), quantity of substitutes (X7), waiting time (X8), and service level (X9). (2) Hidden Layer. Usually when there are one or two hidden layers, it has the best convergent attributes. If there is no hidden layer or there are too many hidden layers, the convergent effect is not so good. Theories have proved that networks which have deviations and at least one S-type hidden layer and one linear output layer can approach any nonlinear function. That is, a three-layer BP network with a hidden layer can approach any nonlinear function. According to empirical formula , is the number of nodes of hidden layer and is the number of nodes of input layer. We suppose . (3) Output Layer. The number of nodes of output layer is the number of system objects. We choose one node as the inventory level of March 2013 to be measured. (4) Selection of Initial Value and Threshold Value. Because both of them are two random groups of value, we choose a random value between . (5) Selection of Expected Error and Number of Iterations. We choose 10000 as the number of iterations and the expected error is 0.1. 5. Training Process and Experimental Result This paper uses the neural network tool package of MATLAB 7.6 to program the model for safety inventory level based on BP neural network. In the BP neural network model established in this paper, there are 9 inputs and the number of neurons is relatively large. We preliminarily set the training variables as follows: times of training are 10000, training target is 0.01, and learning rate is 0.1. The code and training result is as follows:net. trainParam. Epochs = 10000;net. trainParam. goal = 0.1;LP. lr = 0.1;net-train(net, P, T);after 1000 trainings, the training is finished. After network finishes training, the network gets tested. We use the data of March 2013 to test. The code of prediction is as follows:_test = [0.5 0.78 0.63 1 0.43 0.4 0.25 0.08 1]:Out = sim (net, By comparing Figures 1 and 2, we can clearly see that the convergence rate of the improved algorithm is significantly faster than that of standard algorithm. We select the data from February 1, 2013, to February 20, 2013, to test. The result is as follows. From Table 2, we can know that the improved BP algorithm is significantly better than that of standard BP algorithm on convergence rate. In addition, we also compare our improved BP algorithm with some other improved BP algorithms. The result shows that our BP algorithm also outperforms the other two improved BP algorithms mentioned in the literature review on convergence rate. As prediction accuracy is concerned, from Figure 3, we can know that our improved BP algorithm exceeds significantly the standard BP algorithm. Suppose is the prediction set error. From Table 3 we can clearly see that our improved BP algorithm not only exceeds the standard BP algorithm but also outperforms the other two improved BP algorithms mentioned in the literature review on prediction effect. 6. Conclusions We conclude the following with the practical importance of our findings. First, this paper proposes a new, fast convergent BP algorithm and deduces new chain propagation rules of neural network by introducing an error offset. Secondly, this paper applies it to the prediction of inventory level of an automotive parts company and achieves good effect. From the experimental results, we can see that using neural network to predict inventory is effective. The improved BP algorithm not only significantly exceeds the standard algorithm both on convergence time and prediction effect but also outperforms some other improved BP algorithms on these two main indicators. In this sense, this paper provides a valuable reference for inventory control of supply chain. However, this paper also has limitations. There are still some problems that need to be solved such as how to decide the number of nodes of hidden layer and the optimization of whole structure of network. Apart from that, the introduction of the error offset is based on experiences. The theoretical explanation for it still needs to be further discussed. All these problems wait to be further explored in future research. This work is supported by the NSFC (71361013 and 71163014) and The Education Department of Jiangxi Province Science and Technology Research Projects (11728). 1. P. W. Balsmeier and W. J. Voisin, “Supply chain management: a time-based strategy,” Industrial Management, vol. 38, no. 5, pp. 24–27, 1996. View at Scopus 2. S. Minner, “Multiple-supplier inventory models in supply chain management: a review,” International Journal of Production Economics, vol. 81-82, pp. 265–279, 2003. View at Publisher · View at Google Scholar · View at Scopus 3. K. Bansal, S. Vadhavkar, and A. Gupta, “Brief application description. A neural networks based forecasting techniques for inventory control applications,” Data Mining and Knowledge Discovery, vol. 2, no. 1, pp. 97–102, 1998. View at Scopus 4. J. Shanmugasundaram, M. V. N. Prasad, S. Vadhavkar, and A. Gupta, “Use of recurrent neural networks for strategic data mining of sales information,” MIT Sloan 4347-02; Eller College Working Paper 1029-05, 2002. 5. C. C. Reyes-Aldasoro, A. R. Ganguly, G. Lemus, and A. Gupta, “A hybrid model based on dynamic programming, neural networks, and surrogate value for inventory optimisation applications,” Journal of the Operational Research Society, vol. 50, no. 1, pp. 85–94, 1999. View at Scopus 6. S. R. Hong, S. T. Kim, and C. O. Kim, “Neural network controller with on-line inventory feedback data in RFID-enabled supply chain,” International Journal of Production Research, vol. 48, no. 9, pp. 2613–2632, 2010. View at Publisher · View at Google Scholar · View at Scopus 7. F. Y. Partovi and M. Anandarajan, “Classifying inventory using an artificial neural network approach,” Computers and Industrial Engineering, vol. 41, no. 4, pp. 389–404, 2002. View at Scopus 8. J. Li, Y. Li, J. Xu, and J. Zhang, “Parallel training algorithm of BP neural networks,” in Proceedings of the 3rd World Congress on Intelligent Control and Automation, vol. 2, pp. 872–876, July 2000. View at Scopus 9. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, D. E. Rumelhart and J. L. McClelland, Eds., vol. 1, chapter 8, MIT Press, Cambridge, Mass, USA, 1986. 10. N. Ampazis and S. J. Perantonis, “Two highly efficient second-order algorithms for training feedforward networks,” IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1064–1074, 2002. View at Publisher · View at Google Scholar · View at Scopus 11. K. Zhang, J. Xu, and J. Zhang, “A new adaptive inventory control method for supply chains with non-stationary demand,” in Proceedings of the 25th Control and Decision Conference (CCDC '13), pp. 1034–1038, Guiyang , China, May 2013. View at Publisher · View at Google Scholar 12. W. P. Wang, “A neural network model on the forecasting of inventory risk management of spare parts,” in Proceedings of the International Conference on Information Technology and Management Science (ICITMS '12), pp. 295–302, Springer, 2012. 13. A. Mansur and T. Kuncoro, “Product inventory predictions at small medium enterprise using market basket analysis approach-neural networks,” Procedia Economics and Finance, vol. 4, pp. 312–320, 2012. View at Publisher · View at Google Scholar 14. Y. Huang, D. X. Sun, G. P. Xing, and H. Chang, “Criticality evaluation for spare parts based on BP neural network,” in Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence (AICI '10), vol. 1, pp. 204–206, October 2010. View at Publisher · View at Google Scholar · View at Scopus 15. Z. Zheng, “Review on development of BP neural network,” Shanxi Electronic Technology, no. 2, pp. 90–92, 2008. 16. H. Yu, W. Q. Wu, and L. Cao, “Improved BP algorithm and its application,” Computer Knowledge and Technology, vol. 19, no. 5, pp. 5256–5258, 2009. 17. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986. View at Scopus 18. T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon, “Accelerating the convergence of the back-propagation method,” Biological Cybernetics, vol. 59, no. 4-5, pp. 257–263, 1988. View at Publisher · View at Google Scholar · View at Scopus 19. M. Riedmiller and H. Braun, “Direct adaptive method for faster backpropagation learning: the RPROP Algorithm,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN '93), vol. 1, pp. 586–591, San Francisco, Calif, USA, April 1993. View at Scopus 20. C. Charalambous, “Conjugate gradient algorithm for efficient training of artificial neural networks,” IEE Proceedings G, vol. 139, no. 3, pp. 301–310, 1992. View at Scopus 21. M. F. Møller, “A scaled conjugate gradient algorithm for fast supervised learning,” Neural Networks, vol. 6, no. 4, pp. 525–533, 1993. View at Scopus 22. F. D. Foresee and M. T. Hagan, “Gauss-Newton approximation to Bayesian learning,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1930–1935, June 1997. View at Scopus 23. R. Battiti, “First and second order methods for learning: between steepest descent and newton's method,” Neural Computation, vol. 4, no. 2, pp. 141–166, 1992. View at Publisher · View at Google 24. Y. Gao, “Study on optimization algorithm of BP neural network,” Computer Knowledge and Technology, vol. 29, no. 5, pp. 8248–8249, 2009. 25. S. Shah and F. Palmieri, “MEKA-A fast, local algorithm for training feed forward neural networks,” in Proceedings of the International Joint Conference on Neural Networks, pp. 41–46, June 1990. View at Scopus 26. X. P. Wang, Y. Shi, J. B. Ruan, and H. Y. Shang, “Study on the inventory forecasting in supply chains based on rough set theory and improved BP neural network,” in Advances in Intelligent Decision Technologies Smart Innovation, Systems and Technologies, vol. 4, pp. 215–225, Springer, Berlin, Germany, 2010. 27. H. Lican, Z. Yuhong, X. Xin, and F. Fan, “Prediction of investment on inventory clearance based on improved BP neural network,” in Proceedings of the 1st International Conference on Networking and Distributed Computing (ICNDC '10), pp. 73–75, Hangzhou, China, October 2010. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/ddns/2013/537675/","timestamp":"2014-04-16T19:09:57Z","content_type":null,"content_length":"168522","record_id":"<urn:uuid:cefac96b-8458-429b-8102-5db08fc3f800>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Determination of Analyte Concentration One of the most common applications of spectrophotometry is to determine the concentration of an analyte in a solution. The experimental approach exploits Beer's Law, which predicts a linear relationship between the absorbance of the solution and the concentration of the analyte (assuming all other experimental parameters do not vary). In practice, a series of standard solutions are prepared. A standard solution is a solution in which the analyte concentration is accurately known. The absorbances of the standard solutions are measured and used to prepare a calibration curve, which is a graph showing how the experimental observable (the absorbance in this case) varies with the concentration. For this experiment, the points on the calibration curve should yield a straight line (Beer's Law). The slope and intercept of that line provide a relationship between absorbance and concentration: A = slope c + intercept The unknown solution is then analyzed. The absorbance of the unknown solution, A[u], is then used with the slope and intercept from the calibration curve to calculate the concentration of the unknown solution, c[u]. □ Determine the concentration of an unknown solution. □ Measure the intensity of transmitted light for various standard solutions. □ For each standard solution, calculate the absorbance of the solution. □ Construct a calibration curve. □ Plot the line-of-best-fit through the experimental points. □ Measure the intensity of transmitted light for the unknown solution. □ Calculate the absorbance of the unknown solution. □ Use the calibration curve to determine the concentration of the unknown solution. Run each simulation sufficiently long to detect at least 1000 photons. (Not all photons are shown on the screen.) Because the intensity for the blank is used to calculate all absorbances, it is especially important that the intensity for the blank be known accurately. If possible, wait until at least 4000 photons are detected for the blank. Light Cell Detector UnknownSolution.html version 2.1 © 2000-2014 David N. Blauch
{"url":"http://www.chm.davidson.edu/vce/spectrophotometry/UnknownSolution.html","timestamp":"2014-04-19T02:32:35Z","content_type":null,"content_length":"17185","record_id":"<urn:uuid:bc7c26cb-c170-4513-8ed8-d9b9ad3810fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Cohen was right Monroe Eskew meskew at math.uci.edu Tue Sep 13 17:06:56 EDT 2011 On the other hand we have models like V_{\omega*2} in which powerset holds and instances of replacement fail and there is no set of all countable ordinals. But a lot of ordinary mathematics can be done within it, and equivalents of cardinal arithmetic can be stated within the model. What philosophical lessons should be drawn from it? Perhaps replacement is a powerful new principle transcending powerset. Or maybe we should say both axioms come from a common idea. On Tue, Sep 13, 2011 at 11:00 AM, Ali Enayat <ali.enayat at gmail.com> wrote: > The following two examples justify Cohen's position challenged by > Monore Eskew's recent postings. > In particular, the first ones addresses Eskew's comment that he sees > no philosophical difference between "completed R" (set of real > numbers) and "completed \omega_1." (set of countable ordinal), while > the second one shows the fundamental difference between "completed R" > and "completed alephs of all orders". > Example 1: > Let N be a model of ZFC in which the continuum is aleph_2; Cohen > showed us how to build N assuming Con(ZF). > Let M be H(aleph_2) as computed within M, i.e., M is the collection of > sets that are *hereditarily* of cardinality at most that aleph_1, as > viewed in N, > Then we have (1)-(3) below: > (1) All of the axioms of ZFC with the exception of the power set axiom > hold in M; > (2) The collection of real numbers DO NOT form a set in M; > (3) The collection of countable ordinals DO form a set in M (and they > are the last aleph in M). > So in M, "completed R" does not exist, but "completed omega_1" exists; > hence illustrating Cohen's claim. > Example 2: > Assuming Con(ZF + there exists an inacccessible cardinal), there is a > model N* of ZFC in which the continuum is a regular limit cardinal > (i.e., a weakly inaccessible cardinal). This is a consequence of > Solovay's classical modificaion of Cohen's argument in his "The > continuum can be anything it ought to be" paper, in which he > demonstarted that the continuum can be arranged to be any prescribed > aleph of uncountable cofinality in a cofinality-preserving generic > extension of the universe (Easton, in turn, generalized Solovay's > theorem, but that's a different story). > In such a model N*, if we define M* as H(continuum), i.e., then we have: > (1*) All of the axioms of ZFC with the exception of the power set > axiom hold in M*; > (2*) The collection of real numbers DO NOT form a set in M*'; > (3') There is no last aleph in M*. > Regards, > Ali Enayat > _______________________________________________ > FOM mailing list > FOM at cs.nyu.edu > http://www.cs.nyu.edu/mailman/listinfo/fom More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2011-September/015750.html","timestamp":"2014-04-20T12:01:08Z","content_type":null,"content_length":"6048","record_id":"<urn:uuid:099b79e8-5378-48ed-bdd5-6c57abe8c188>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: UNIVERSITY OF CALIFORNIA, SANTA BARBARA BERKELEY · DAVIS · IRVINE · LOS ANGELES · MERCED · RIVERSIDE · SAN DIEGO · SAN FRANCISCO Geometry, Topology, and Physics Seminar N=2 dualities and Riemann surfaces David Morrison Friday, October 2nd, 2009, 4:00 p.m. Room 6635 South Hall Abstract: N=2 supersymmetric field theories in four dimensions have been studied from many points of view, notably by Seiberg and Witten in the mid-1990's who introduced an associated Riemann surface and used its properties to derive remarkable results about the physics, and remarkable consequences for mathematics. In the past six months, work of Gaiotto and collaborators has shown that these theories can be studied by means of *another* Riemman surface, this time used to compactify the six-dimensional N=(2,0) field theories to obtain a four-dimensional theory. Moreover, the two-dimesional field theory on this "other" Riemann surface is related in many interesting ways to the N=2 four-dimensional field theory. This lecture will give an overview of these developments, which will be described in more detail in future lectures of this seminar.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/1013/1383501.html","timestamp":"2014-04-21T05:46:33Z","content_type":null,"content_length":"8327","record_id":"<urn:uuid:2fab143a-caa5-4afe-83fb-db319c21508e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
wow that's offensive i do these when i dont know what to do My math binders are always red every year I feel like math is just a red subject Math is a blue subject and I’m prepared to fight you over this My math binders are always red every year I feel like math is just a red subject Math is a blue subject and I’m prepared to fight you over this Side note: Shoppers, please put back the stuff where you found it. Pic 1 is ridiculous.
{"url":"http://wowthatsoffensive.tumblr.com/","timestamp":"2014-04-17T15:50:07Z","content_type":null,"content_length":"31087","record_id":"<urn:uuid:f6337adc-e4f1-47c9-b00b-23c9d1317d95>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
function of y second order equation November 20th 2010, 12:01 PM #1 MHF Contributor Nov 2008 function of y second order equation1 $y(0)=1 y'(0)=2$ i know i need to do x'(y) but i dont know how to put it there how i solve it? Last edited by transgalactic; November 20th 2010 at 01:19 PM. Or you could try this: $\dfrac{y''}{y'}=\dfrac{y'}{y},$ and integrate. November 20th 2010, 04:27 PM #2
{"url":"http://mathhelpforum.com/differential-equations/163865-function-y-second-order-equation.html","timestamp":"2014-04-17T19:27:57Z","content_type":null,"content_length":"34028","record_id":"<urn:uuid:4f732283-84b2-408e-acb5-9b0727147595>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 376 Nontransitive sets of dice Problem 376 Published on Sunday, 18th March 2012, 01:00 am; Solved by 205 Consider the following set of dice with nonstandard pips: Die A: 1 4 4 4 4 4 Die B: 2 2 2 5 5 5 Die C: 3 3 3 3 3 6 A game is played by two players picking a die in turn and rolling it. The player who rolls the highest value wins. If the first player picks die A and the second player picks die B we get P(second player wins) = ^7/[12] > ^1/[2] If the first player picks die B and the second player picks die C we get P(second player wins) = ^7/[12] > ^1/[2] If the first player picks die C and the second player picks die A we get P(second player wins) = ^25/[36] > ^1/[2] So whatever die the first player picks, the second player can pick another die and have a larger than 50% chance of winning. A set of dice having this property is called a nontransitive set of dice. We wish to investigate how many sets of nontransitive dice exist. We will assume the following conditions: • There are three six-sided dice with each side having between 1 and N pips, inclusive. • Dice with the same set of pips are equal, regardless of which side on the die the pips are located. • The same pip value may appear on multiple dice; if both players roll the same value neither player wins. • The sets of dice {A,B,C}, {B,C,A} and {C,A,B} are the same set. For N = 7 we find there are 9780 such sets. How many are there for N = 30 ?
{"url":"http://projecteuler.net/problem=376","timestamp":"2014-04-19T14:29:59Z","content_type":null,"content_length":"6293","record_id":"<urn:uuid:dffec076-c4c9-4315-9144-597373effa27>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Calc Word Problem. Please Help!!! November 19th 2012, 05:59 PM #1 Nov 2012 Calc Word Problem. Please Help!!! a staircase has stairs with a width of 11 inches and a height of 8 inches per step. a storage closet is to be built under the staircase between the 5th and 13th steps. 1. draw a model of the back wall of the closet showing the stairs. 2. if the ceiling of the closet is the bottom of the stairs, what is the area of back wall of the closet? 3. if the length of the step is 38 inches, then what is the volume of the closet? 4. instead, if a board is placed over the bottom of the stairs so that the top of the closet is flat but slanted, draw a new model of the back wall of the closet. 5. using geometry formulas, find the area of the back wall of the closet. 6. find an equation for the position of the board. use calculus to find the area of the back wall of the closet. 7. if the length of the step is 38 inches, the what is the volume of the closet with slanted ceiling? Re: Calc Word Problem. Please Help!!! Please post what you have so far or what your thoughts are on what needs to be done, so we can assist you where you are stuck. Re: Calc Word Problem. Please Help!!! Ok so I believe that this is a riemann sums problem. So far I figured that there were 9 steps and since each one is 8 inches tall, I multiplied 8 by the 9 steps. This gave me a total of 72 in. for how deep the closet is. So I thought that maybe the formula for the closet was 72(x). I thought that maybe I should use riemann sums with the end points 5 and 13, but I am not even sure if this is the right approach. Re: Calc Word Problem. Please Help!!! If you are to include the 5th and 13th steps, then you are correct about there being 9 steps over the closet, since (13 - 5) + 1 = 9. The way the problem is worded, I was unsrue if the ends are included. Let's assume they are. To find the area A of the back wall in square inches, we could use the Riemann sum type of approach as you suggested: We could also deconstruct the back wall into a trapezoid and 9 congruent right triangles. Both give the same result. Now, once you have the area of the back wall, multiply this area by the depth of the closet which is 38 inches, to get the volume in cubic inches. Once you have correctly done this, we will move on to the next part. Re: Calc Word Problem. Please Help!!! Where is the 32 and 8 coming from Re: Calc Word Problem. Please Help!!! The measures are in inches, and 32 is the elevation of the 4th step and we want to add 8 for each step 1 through 9 of the included steps. Re: Calc Word Problem. Please Help!!! So would the area be 6023.11 Re: Calc Word Problem. Please Help!!! No, you should get an integral answer somewhat larger than that. I suggest computing the area using both methods I suggested, and the second method is simpler to use and can be used as a method to check your result. Re: Calc Word Problem. Please Help!!! I'm not sure what you mean by your second method, but I resolved it and got a value of 7128, is that correct? Re: Calc Word Problem. Please Help!!! or do I evaluate it like this: 88*(4k+k^2/2)from 1 to 9 this would give an answer of 6336 Re: Calc Word Problem. Please Help!!! The second method was deconstructing the back wall into a trapezoid and 9 congruent right triangles to get: $A=\frac{99}{2}(32+104)+\frac{9}{2}\cdot8\cdot11=71 28$ We may even use the area of the trapezoid in the next part of the problem where we are instructed to use a geometric method to compute the area of the back wall when a board is placed over the bottom of the stairs. Okay, so you have correctly found the area of the back wall, so what is the volume in cubic inches of the closet? Re: Calc Word Problem. Please Help!!! Re: Calc Word Problem. Please Help!!! So the volume would then be 270,864 in cubed Re: Calc Word Problem. Please Help!!! Yes, that's right. Now, if you are supposed to give the result in cubic feet, how would you convert it? Re: Calc Word Problem. Please Help!!! You would divide it by 12 so it would be 22572 cubic feet November 19th 2012, 06:04 PM #2 November 19th 2012, 06:11 PM #3 Nov 2012 November 19th 2012, 06:29 PM #4 November 19th 2012, 06:50 PM #5 Nov 2012 November 19th 2012, 07:00 PM #6 November 19th 2012, 07:18 PM #7 Nov 2012 November 19th 2012, 07:25 PM #8 November 19th 2012, 07:30 PM #9 Nov 2012 November 19th 2012, 07:45 PM #10 Nov 2012 November 19th 2012, 07:50 PM #11 November 19th 2012, 07:54 PM #12 November 19th 2012, 07:54 PM #13 Nov 2012 November 19th 2012, 07:58 PM #14 November 19th 2012, 08:00 PM #15 Nov 2012
{"url":"http://mathhelpforum.com/calculus/208021-calc-word-problem-please-help.html","timestamp":"2014-04-20T16:12:01Z","content_type":null,"content_length":"69691","record_id":"<urn:uuid:20b32f8a-1a17-4e5c-8665-d9d168b0e519>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> GMM with missing/truncated data GMM with missing/truncated data Adam Slez posted on Monday, October 01, 2007 - 12:29 pm I am interested in using growth mixture modeling to identify types of career trajectories. Is this possible given that career lengths vary significantly across individuals? I'm not certain whether it is appropriate to think of variation in career length as either a missing or truncated data issue. Any suggestions on whether it is feasible to use GMM? If so, what is the best way to do so? What I would like to be able to do is to be able to identify career types defined by both the shape and the duration of the trajectory. Bengt O. Muthen posted on Monday, October 01, 2007 - 6:57 pm Can you tell me a bit about the outcome as well as the different shape classes that you expect? Adam Slez posted on Monday, October 01, 2007 - 8:10 pm The outcome variable is a hierarchy measure which varies between 0 and 1. Essentially, what the measure does is rank all jobs or positions that an individual might occupy and then assign each job a score between 0 and 1 which is proportionate to its position within the overall ranking. The top job is assigned a hierarchy score of 1, the lowest a score of 0; all jobs in between receive a score proportional to their distance from the top. If there is a strict career ladder in play, we might expect a linear relationship between time and job rank. Alternatively, I think that it would be reasonable to observe stalled careers in which their is a non-linear relationship such that there is a an early increase and then a leveling-off. The other dimension in play here is career duration. Some people have long careers and some people have short careers. Most of what I have read on the use of growth curve models assumes an ideal situation in which there are observations for all individuals at all points in time. In this case, the fact that there aren't observations at all points in time (i.e. short careers) is part of what needs to be explained. I wish I could be off more help, but I am still trying to figure out whether this can even be done with a GMM or LGM. Bengt O. Muthen posted on Wednesday, October 03, 2007 - 11:03 am It's an interesting question that seems to connect to both growth modeling and survival analysis. I don't think I have a final answer, but here are some thoughts. One approach would be growth modeling, either with one or several latent trajectory classes, where a short career is simply treated as missing data in line with "MAR" (see missingness lit.). MAR says that missingness is predicted by previous observed outcomes - for instance, a leveled-off development might predict missingness (leaving the career). Given great variation in career length, perhaps the growth modeling should not be done as a single-level multivariate approach but as a two-level approach. Another approach would be survival analysis where you model the time to leaving the career, i.e. the career length. A third approach combines the above two. For example, certain early career shapes predict not surviving. Such modeling is not always, however, straightforward. Some related modeling ideas are given in the Muthen-Masyn (2005) JEBS article on the web site under Papers, Survival Analysis. Adam Slez posted on Wednesday, October 03, 2007 - 11:31 am Much thanks! Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=14&page=2619","timestamp":"2014-04-20T13:21:00Z","content_type":null,"content_length":"22931","record_id":"<urn:uuid:b39236b1-5fef-4347-bd09-a348063c7746>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
My watch list With an accout for my.chemeurope.com you can always see everything at a glance – and you can configure your own website and individual newsletter. • My watch list • My saved searches • My saved topics • My newsletter Heat equation The heat equation is an important partial differential equation which describes the variation of temperature in a given region over time. General-audience description Suppose one has a function u which describes the temperature at a given location (x, y, z). This function will change over time as heat spreads throughout space. The heat equation is used to determine the change in the function u over time. The image below is animated and has a description of the way heat changes in time along a metal bar. One of the interesting properties of the heat equation is the maximum principle which says that the maximum value of u is either earlier in time than the region of concern or on the edge of the region of concern. This is essentially saying that temperature comes either from some source or from earlier in time because heat permeates but is not created from nothingness. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below). Another interesting property is that even if u has a discontinuity at an initial time t = t[0], then the temperature becomes instantly smooth as soon as t > t[0]. For example, if a bar of metal has temperature 0 and another has temperature 100 and they are stuck together end to end, then instantaneously the temperature at the point of connection is 50 and the graph of the temperature is smoothly running from 0 to 100. This is not physically possible, since there would then be information propagation at infinite speed, which would violate causality. Therefore this is a property of the mathematical equation rather than of heat conduction itself. However, for most practical purposes, the difference is negligible. The heat equation is used in probability and describes random walks. It is also applied in financial mathematics for this reason. It is also important in Riemannian geometry and thus topology: it was adapted by Richard Hamilton when he defined the Ricci flow that was later used to solve the topological Poincaré conjecture. See also the Dirac delta function. The physical problem and the equation In the special case of heat propagation in an isotropic and homogeneous medium in the 3-dimensional space, this equation is ${\partial u\over \partial t} = k \left({\partial^2 u\over \partial x^2 } + {\partial^2 u\over \partial y^2 } + {\partial^2 u\over \partial z^2 }\right) = k ( u_{xx} + u_{yy} + u_{zz} ) \quad$ • $u=u(t,x,y,z) \,\!$ is temperature as a function of time and space; • $\frac{\partial u}{\partial t}$ is the rate of change of temperature at a point over time; • $u_{xx}\,\!$, $u_{yy}\,\!$, and $u_{zz}\,\!$ are the second spatial derivatives (thermal conductions) of temperature in the x, y, and z directions, respectively The heat equation is a consequence of Fourier's law of cooling (see heat conduction). If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume an exponential bound on the growth of solutions, this assumption is consistent with observed experiments. Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, to reverse the solution and conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods. The heat equation is the prototypical example of a parabolic partial differential equation. Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of arbitrary number of dimensions, as $u_t = k abla^2 u = k \Delta u, \quad \,\!$ where the Laplace operator, Δ or $abla^2$, the divergence of the gradient, is taken in the spatial variables. The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some phenomena arising in finance, like the Black-Scholes or the Ornstein-Uhlenbeck processes. The equation, and various non-linear analogues, has also been used in image analysis. The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead -- like a partial differential equation involving a second-order time derivative. Solving the heat equation using Fourier series The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Let us consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is $(1) \ u_t = k u_{xx} \quad$ where u = u(t, x) is a function of two variables t and x. Here • x is the space variable, so x ∈ [0,L], where L is the length of the rod. • t is the time variable, so t ≥ 0. We assume the initial condition $(2) \ u(0,x) = f(x) \quad \forall x \in [0,L] \quad$ where the function f is given and the boundary conditions $(3) \ u(t,0) = 0 = u(t,L) \quad \forall t > 0 \quad$. Let us attempt to find a solution of (1) which is not identically zero satisfying the boundary conditions (3) but with the following property: u is a product in which the dependence of u on x, t is separated, that is: $(4) \ u(t,x) = X(x) T(t). \quad$ This solution technique is called separation of variables. Substituting u back into equation (1), $\frac{T'(t)}{kT(t)} = \frac{X''(x)}{X(x)}. \quad$ Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value − λ. Thus: $(5) \ T'(t) = - \lambda kT(t) \quad$ $(6) \ X''(x) = - \lambda X(x). \quad$ We will now show that solutions for (6) for values of λ ≤ 0 cannot occur: 1. Suppose that λ < 0. Then there exist real numbers B, C such that $X(x) = B e^{\sqrt{-\lambda} \, x} + C e^{-\sqrt{-\lambda} \, x}.$ From (3) we get $X(0) = 0 = X(L). \quad$ and therefore B = 0 = C which implies u is identically 0. 2. Suppose that λ = 0. Then there exist real numbers B, C such that $X(x) = Bx + C. \quad$ From equation (3) we conclude in the same manner as in 1 that u is identically 0. 3. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that $T(t) = A e^{-\lambda k t} \quad$ $X(x) = B \sin(\sqrt{\lambda} \, x) + C \cos(\sqrt{\lambda} \, x).$ From (3) we get C = 0 and that for some positive integer n, $\sqrt{\lambda} = n \frac{\pi}{L}.$ This solves the heat equation in the special case that the dependence of u has the special form (4). In general, the sum of solutions to (1) which satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by $u(t,x) = \sum_{n = 1}^{+\infty} D_n \left(\sin \frac{n\pi x}{L}\right) e^{-\frac{n^2 \pi^2 kt}{L^2}}$ $D_n = \frac{2}{L} \int_0^L f(x) \sin \frac{n\pi x}{L} \, dx.$ Generalizing the solution technique The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator u[xx] with the zero boundary conditions can be represented in terms of its eigenvectors. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators. Consider the linear operator Δ u = u[x x]. The infinite sequence of functions $e_n(x) = \sqrt{\frac{2}{L}}\sin \frac{n\pi x}{L}$ for n ≥ 1 are eigenvectors of Δ. Indeed $\Delta e_n = -\frac{n^2 \pi^2}{L^2} e_n.$ Moreover, any eigenvector f of Δ with the boundary conditions f(0)=f(L)=0 is of the form e[n] for some n ≥ 1. The functions e[n] for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means $\langle e_n, e_m \rangle = \int_0^L e_n(x) e_m(x) dx = \left\{ \begin{matrix} 0 & n eq m \\ 1 & m = n\end{matrix}\right..$ Finally, the sequence {e[n]}[n ∈ N] spans a dense linear subspace of L^2(0, L). This shows that in effect we have diagonalized the operator Δ. Heat conduction in non-homogeneous anisotropic media In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of • The time rate of heat flow into a region V is given by a time-dependent quantity q[t](V). We assume q has a density, so that $q_t(V) = \int_V Q(t,x)\,d x \quad$ • Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area d S and with unit normal vector n is $\mathbf{H}(x) \cdot \mathbf{n}(x) \, dS$ Thus the rate of heat flow into V is also given by the surface integral $q_t(V)= - \int_{\partial V} \mathbf{H}(x) \cdot \mathbf{n}(x) \, dS$ where n(x) is the outward pointing normal vector at x. • The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient $\mathbf{H}(x) = -\mathbf{A}(x) \cdot abla u (x)$ where A(x) is a 3 × 3 real matrix that is symmetric and positive definite. By Green's theorem, the previous surface integral for heat flow into V can be transformed into the volume integral $q_t(V) = - \int_{\partial V} \mathbf{H}(x) \cdot \mathbf{n}(x) \, dS$ $= \int_{\partial V} \mathbf{A}(x) \cdot abla u (x) \cdot \mathbf{n}(x) \, dS$ $= \int_V \sum_{i, j} \partial_{x_i} a_{i j}(x) \partial_{x_j} u (t,x)\,dx$ • The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ $\partial_t u(t,x) = \kappa(x) Q(t,x)\,$ Putting these equations together gives the general equation of heat flow: $\partial_t u(t,x) = \kappa(x) \sum_{i, j} \partial_{x_i} a_{i j}(x) \partial_{x_j} u (t,x)$ • The coefficient κ(x) is the inverse of specific heat of the substance at x × density of the substance at x. • In the anisotropic case where the coefficient matrix A is not scalar (i.e., if it depends on x), then an explicit formula for the solution of the heat equation can seldom be written down. Though, it is usually possible to consider the associated abstract cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). This is usually done by one-parameter semigroups theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by $Au(x):=\sum_{i, j} \partial_{x_i} a_{i j}(x) \partial_{x_j} u (x)$ is self-adjoint and dissipative, thus by the spectral theorem it generates a one-parameter semigroup. Particle diffusion Particle diffusion equation One can model particle diffusion by an equation involving either: • the volumetric concentration of particles, denoted c, in the case of collective diffusion of a large number of particles, or • the probability density function associated with the position of a single particle, denoted P. In either case, one uses the heat equation $c_t = D \Delta c, \quad$ $P_t = D \Delta P. \quad$ Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in meters squared over second. If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation. The random trajectory of a single particle subject to the particle diffusion equation is a brownian motion. If a particle is placed in $\vec R = \vec 0$ at time t = 0, then the probability density function associated to the vector $\vec R$ will be the following: $P(\vec R,t) = G(\vec R,t) = \frac{1}{(4 \pi D t)^{3/2}} e^{-\frac {\vec R^2}{4 D t}}$ It is related to the probability density functions associated to each of its components R[x], R[y] and R[z] in the following way: $P(\vec R,t) = \frac{1}{(4 \pi D t)^{3/2}} e^{-\frac {R_x^2+R_y^2+R_z^2}{4 D t}} = P(R_x,t)P(R_y,t)P(R_z,t)$ The random variables R[x], R[y] and R[z] are distributed according to a normal distribution of mean 0 and of variance $2\,D\,t$. In 3D, the random vector $\vec R$ is distributed according to a normal distribution of mean $\vec 0$ and of variance $6\, D\,t$. At t=0, the expression of $P(\vec R,t)$ above is singular. The probability density function corresponding to the initial condition of a particle located in a known position $\vec R = \vec 0$ is the Dirac delta function, noted $\delta (\vec R)$ (the generalisation to 3D of the Dirac delta function is simply $\delta (\vec R) = \delta (R_x) \delta (R_y) \delta (R_z)$). The solution of the diffusion equation associated to this initial condition is also called a Green function. Historical origin of the diffusion equation The particle diffusion equation was originally derived by Adolf Fick in 1855. Solving the diffusion equation through Green functions Green functions are the solutions of the diffusion equation corresponding to the initial condition of a particle of known position. For another initial condition, the solution to the diffusion equation can be expressed as a decomposition on a set of Green Functions. Say, for example, that at t=0 we have not only a particle located in a known position $\vec R = \vec 0$, but instead a large number of particles, distributed according to a spatial concentration profile $c(\vec R, t=0)$. Solving the diffusion equation will tell us how this profile will evolve with time. As any function, the initial concentration profile can be decomposed as an integral sum on Dirac delta functions: $c(\vec R, t=0) = \int c(\vec R^0,t=0) \delta(\vec R - \vec R^0) dR_x^0\,dR_y^0\,dR_z^0$ At subsequent instants, given the linearity of the diffusion equation, the concentration profile becomes: $c(\vec R, t) = \int c(\vec R^0,t=0) G(\vec R - \vec R^0,t) dR_x^0\,dR_y^0\,dR_z^0$, where $G(\vec R - \vec R^0,t)$ is the Green function defined above. Although it is more easily understood in the case of particle diffusion , where an initial condition corresponding to a Dirac delta function can be intuitively described as a particle being located in a known position, such a decomposition of a solution into Green functions can be generalized to the case of any diffusive process, like heat transfer, or momentum diffusion, which is the phenomenon at the origin of viscosity in liquids. List of Green function solutions in 1D $\begin{cases} u_{t}=ku_{xx} & -\infty<x<\infty,\,0<t<\infty \\ u(x,0)=g(x) & IC \end{cases}$ $u(x,t)=\frac{1}{\sqrt{4\pi kt}} \int_{-\infty}^{\infty} \exp\left(-\frac{(x-y)^2}{4kt}\right)g(y)\,dy$ $\begin{cases} u_{t}=ku_{xx} & \, 0\le x<\infty, \, 0<t<\infty \\ u(x,0)=g(x) & IC \\ u(0,t)=0 & BC \end{cases}$ $u(x,t)=\frac{1}{\sqrt{4\pi kt}} \int_{0}^{\infty} \left(\exp\left(-\frac{(x-y)^2}{4kt}\right)-\exp\left(-\frac{(x+y)^2}{4kt}\right)\right) g(y)\,dy$ $\begin{cases} u_{t}=ku_{xx} & \, 0\le x<\infty, \, 0<t<\infty \\ u(x,0)=g(x) & IC \\ u_{x}(0,t)=0 & BC \end{cases}$ $u(x,t)=\frac{1}{\sqrt{4\pi kt}} \int_{0}^{\infty} \left(\exp\left(-\frac{(x-y)^2}{4kt}\right)+\exp\left(-\frac{(x+y)^2}{4kt}\right)\right) g(y)\,dy$ $\begin{cases} u_{t}=ku_{xx}+f & -\infty<x<\infty,\,0<t<\infty \\ u(x,0)=0 & IC \end{cases}$ $u(x,t)=\int_{0}^{t}\int_{-\infty}^{\infty} \frac{1}{\sqrt{4\pi k(t-s)}} \exp\left(-\frac{(x-y)^2}{4k(t-s)}\right)f(s)\,dyds$ $\begin{cases} u_{t}=ku_{xx}+f(x,t) & 0\le x<\infty,\,0<t<\infty \\ u(x,0)=0 & IC \\ u(0,t)=0 & BC \end{cases}$ $u(x,t)=\int_{0}^{t}\int_{0}^{\infty} \frac{1}{\sqrt{4\pi k(t-s)}} \left(\exp\left(-\frac{(x-y)^2}{4k(t-s)}\right)-\exp\left(-\frac{(x+y)^2}{4k(t-s)}\right)\right) f(y,s)\,dyds$ $\begin{cases} u_{t}=ku_{xx} & 0\le x<\infty,\,0<t<\infty \\ u(x,0)=0 & IC \\ u(0,t)=h(t) & BC \end{cases}$ $u(x,t)=\int_{0}^{t} \frac{x}{\sqrt{4\pi k(t-s)^3}} \exp\left(-\frac{x^2}{4k(t-s)}\right)h(s)\,ds$ $\begin{cases} u_{t}=ku_{xx}+f & -\infty<x<\infty,\,0<t<\infty \\ u(x,0)=g(x) & IC\end{cases}$ $\begin{cases} v_{t}=kv_{xx}+f, \, w_{t}=kw_{xx} \, & -\infty<x<\infty,\,0<t<\infty \\ v(x,0)=0,\, w(x,0)=g(x) \, & IC\end{cases}$ $\begin{cases} u_{t}=ku_{xx}+f & 0\le x<\infty,\,0<t<\infty \\ u(x,0)=g(x) & IC \\ u(0,t)=h(t) & BC\end{cases}$ $\begin{cases} v_{t}=kv_{xx}+f, \, w_{t}=kw_{xx}, \, r_{t}=kr_{xx} \, & 0\le x<\infty,\,0<t<\infty \\ v(x,0)=0, \; w(x,0)=g(x), \; r(x,0)=0 & IC \\ v(0,t)=0, \; w(0,t)=0, \; r(0,t)=h(t) & BC \end Schrödinger equation for a free particle With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way: $\psi_t = \frac{i \hbar}{2m} \Delta \psi$, where i is the unit imaginary number, and $\hbar$ is Planck's constant divided by 2π, and ψ is the wavefunction of the particle. This equation is a mathematical analogue of the particle diffusion equation, which one obtains through the following transformation: $c(\vec R,t) \to \psi(\vec R,t)$ $D \to \frac{i \hbar}{2m}$ Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wavefunction at any time through an integral on the wavefunction at t=0: $\psi(\vec R, t) = \int \psi(\vec R^0,t=0) G(\vec R - \vec R^0,t) dR_x^0\,dR_y^0\,dR_z^0$, with $G(\vec R,t) = \bigg( \frac{m}{2 \pi i \hbar t} \bigg)^{3/2} e^{-\frac {\vec R^2 m}{2 i \hbar t}}$ Remark: this analogy between quantum mechanics and diffusion is a purely mathematical one. In physics, the evolution of the wavefunction according to Schrödinger equation is not a diffusive process. Diffusion (of particles, heat, momentum...) describes the return to global thermodynamic equilibrium of an inhomogeneous system, and as such is a time-irreversible phenomenon, associated to an increase in the entropy of the universe: in the case of particle diffusion, if $c(\vec R,t)$ is a solution of the diffusion equation, then $c(\vec R,-t)$ isn't. Intuitively we know that particle diffusion tends to resorb spatial concentration inhomogeneities, and never amplify them. As a generalization of classical mechanics, quantum mechanics involves only time-reversible phenomena: if $\psi(\vec R,t)$ is a solution of the Schrödinger equation, then the complex conjugate of $\ psi(\vec R,-t)$ is also a solution. Note that the complex conjugate of a wavefunction has the exact same physical meaning as the wavefunction itself: the two react exactly in the same way to any series of quantum measurements. It is the imaginary nature of the equivalent diffusion coefficient $i \hbar/(2m)$ that makes up for this difference in behavior between quantum and diffusive systems. On a related note, it is interesting to notice that the imaginary exponentials that appear in the Green functions associated to the Schrödinger equation create interferences between the various components of the decomposition of the wavefunction. This is a symptom of the wavelike properties of quantum particles. The heat equation arises in the modeling of a number of phenomena and is often used in financial mathematics in the modeling of options. The famous Black-Scholes option pricing model's differential equation can be transformed into the heat equation allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The heat equation can be efficiently solved numerically using the Crank-Nicolson method and this method can be extended to many of the models with no closed form solution. (Wilmott, 1995) An abstract form of heat equation on manifolds provides a major approach to the Atiyah-Singer index theorem, and has led to much further work on heat equations in Riemannian geometry. See also • Heat • Partial differential equation • Heat kernel regularization • Caloric polynomial • Neher–McGrath • Einstein, A. "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen." Ann. Phys. 17, 549, 1905. [1] • Wilmott P., Howison S., Dewynne J. (1995) The Mathematics of Financial Derivatives:A Student Introduction. Cambridge University Press. • L.C. Evans, Partial Differential Equations, American Mathematical Society, Providence, 1998. ISBN 0-8218-0772-2. This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Heat_equation". A list of authors is available in Wikipedia.
{"url":"http://www.chemeurope.com/en/encyclopedia/Heat_equation.html","timestamp":"2014-04-16T10:31:50Z","content_type":null,"content_length":"86275","record_id":"<urn:uuid:94588fb4-2c01-4668-8701-b90c2bbd7989>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Totowa Science Tutor Find a Totowa Science Tutor ...MH ranks 45th in NJ and 1,311 out of 21000 schools nationally-- top 6% in the nation!!I have a B.S. in Secondary Education Biology/Gen Science from Kutztown University (GPA 3.70) and a MAT Master in the Art of Teaching from Marygrove College (GPA 3.85). I was Morris Hills 2008 "Outstanding Teac... 5 Subjects: including chemistry, biology, physical science, track & field ...After going over what is desired to learn, then can I work out the best way to approach the subject to ensure comprehension. I look forward to hearing from you.I am a Biology major and am in the process of acquiring a teaching certification. I am going to teach Upper-Level Biology and am very comfortable with the material involved. 24 Subjects: including chemistry, calculus, ecology, statistics ...I scored in the top 1 to 3 percent on every standardized test I've taken (SAT 2240 | ACT 36 | GMAT 740 | GRE 167 QR/168 VR) and have been helping students of all ages boost their standardized test scores, full-time for the past 2 years, and part-time since college. ACADEMIC TUTORING Whether you... 18 Subjects: including ACT Science, geometry, GRE, algebra 1 ...I also can tutor conversational French.I have a strong math background from college (I was a chemistry major). I have tutored several students in algebra 1 in the last 3 years. It is very important for a student to understand the basic concepts in Algebra. I have a strong math background from ... 7 Subjects: including organic chemistry, chemistry, French, algebra 2 ...I have a PhD in Biochemistry and the equivalent of a bachelors degree in Computer Science. I am passionate about teaching - as a graduate student I won an award as the best graduate student teacher. My area of specialization in biochemistry is bioinformatics - the use of computer techniques to solve biological problems. 1 Subject: biochemistry Nearby Cities With Science Tutor Cedar Grove, NJ Science Tutors East Rutherford Science Tutors Fairfield, NJ Science Tutors Glen Rock, NJ Science Tutors Haledon Science Tutors Hawthorne, NJ Science Tutors Hillcrest, NJ Science Tutors Lincoln Park, NJ Science Tutors Little Falls, NJ Science Tutors North Caldwell, NJ Science Tutors North Haledon, NJ Science Tutors Paterson, NJ Science Tutors Verona, NJ Science Tutors Wayne, NJ Science Tutors Woodland Park, NJ Science Tutors
{"url":"http://www.purplemath.com/totowa_science_tutors.php","timestamp":"2014-04-17T07:50:25Z","content_type":null,"content_length":"23746","record_id":"<urn:uuid:6f4d8638-3cd1-40ab-a16b-186eb6cf01aa>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Just for Fun on Random Walks In my Calculus class today I showed just a short clip from this video (“the limit does not exist!”). Apparently, completely and totally without my knowledge, I showed this video today, on October 3rd, which just happens to be Mean Girls Day. So..happy Mean Girls Day! (happy/mean…that sounds a little Kudos to the folks who put these videos together! Summer Odds and Ends I promise I’ll start blogging again. But as followers of this blog might know, I like to take the summer off–both from teaching and blogging. I never take a break from math, though. Here are some fun things I’ve seen recently. Consider it my own little math carnival :-). I love this comic, especially as I start my stat grad class this semester @ JHU. After this class, I’ll be half-way done with my masters. It’s a long road! [ht: Tim Chase] Speaking of statistics, my brother also sent me this great list of lottery probabilities. Could be very useful in the classroom. These math dice. Honestly I don’t know what I’d do with them, but you have to admit they’re awesome. [ht: Tim Chase] These two articles about Khan academy and the other about edX I found very interesting. File all of them under ‘flipping the classroom.’ I’m still working up the strength to do a LITTLE flipping with my classroom. My dad forwarded these links to me. He has special interest in all things related to MIT (like Khan, and like edX) since it’s his alma mater. I’ll be teaching BC Calculus for the first time this semester and we’re using a new book, so I read that this summer. Not much to say, except that I did actually enjoy reading it. I also started a fabulous book, Fearless Symmetry by Avner Ash and Robert Gross. I have a bookmark in it half way through. But I already recommend it highly to anyone who has already had some college math courses. I just took a graduate course in Abstract Algebra recently and it has been a great way to tie the ‘big ideas’ in math together with what I just learned. The content is very deep but the tone is conversational and non-threatening. (My dad, who bought me the book, warns me that it gets painfully deep toward the end, however. That’s to be expected though, since the authors attempt to explain Wiles’ proof of Fermat’s Last Theorem!) I had this paper on a juggling zeta function (!) sent to me by the author, Dr. Dominic Klyve (Central Washington University). I read it, and I pretended to understand all of it. I love the intersection of math and juggling, and I’m always on the look out for new developments in the field. And most recently, I’ve been having a very active conversation with my math friends about the following problem posted to NCTM’s facebook page: Feel free to go over to their facebook page and join the conversation. It’s still happening right now. There’s a lot to say about this problem, so I may devote more time to this problem later (and problems like it). At the very least, you should try doing the problem yourself! I also highly recommend this post from Bon at Math Four on why math course prerequisites are over-rated. It goes along with something we all know: learning math isn’t as ‘linear’ an experience as we make it sometimes seem in our American classrooms. And of course, if you haven’t yet checked out the 90th Carnival of Mathematics posted over at Walking Randomly (love the name!), you must do so. As usual, it’s a thorough summary of recent quality posts from the math blogging community. Okay, that’s all for now. Thanks for letting me take a little random walk! 87th Carnival of Mathematics The 87th Carnival of Mathematics has arrived!! Here’s a simple computation for you: What is the sum of the squares of the first four prime numbers? That’s right, it’s Good job. Now, onto the carnival. This is my first carnival, so hopefully I’ll do all these posts justice. We had lots of great submissions, so I encourage you to read through this with a fine-toothed comb. Enjoy! Here’s a post (rant) from Andrew Taylor regarding the coverage from the BBC and the Guardian on the Supermoon that occurred in March 2011. NASA reports the moon as being 14% larger and 30% brighter, but Andrew disagrees. Go check out the post, and join the conversation. Have you ever heard someone abuse the phrase “exponentially better”? I know I have. One incorrect usage occurs when someone makes the claim that something is “exponentially better” based on only two data points. Rebecka Peterson has some words for you here, if you’re the kind of person who says this! Physics and Science-flavored Frederick Koh submitted Problem 19: Mechanics of Two Separate Particles Projected Vertically From Different Heights to the carnival. It’s a fun projectile motion question which would be appropriate for a Precalculus classroom (or Calculus). I like the problem, and I think my students would like it too. John D. Cook highlights a question you’ve probably heard before: Should you walk or run in the rain? An active discussion is going on in the comments section. It’s been discussed in many other places too, including twice on Mythbusters. (I feel like I read an article in an MAA or NCTM magazine on this topic once, as well. Anyone remember that?) Murray Bourne submitted this awesome post about modeling fish stocks. Murray says his post is an “attempt to make mathematical modeling a bit less scary than in most textbooks.” I think he achieves his goal in this thorough development of a mathematical model for sustainable fisheries (see the graph above for one of his later examples of a stable solution under lots of interesting constraints). If I taught differential equations, I would absolutely use his examples. Last week I highlighted this new physics blog, but I wanted to point you there again: Go check out Five Minute Physics! A few more videos have been posted, and also a link to this great video about the physics of a dropping Slinky (see above). Statistics, Probability, & Combinatorics Mr. Gregg analyzes European football using the Poisson distribution in his post, The Table Never Lies. I liked how much real world data he brought to the discussion. And I also liked that he admitted when his model worked and when it didn’t–he lets you in on his own mathematical thought process. As you read this post, you too will find yourself thinking out loud with Mr. Gregg. Card Colm has written this excellent post that will help you wrap your mind around the number of arrangements of cards in a deck. It’s a simple high school-level topic, but he really puts it into the number of possible ways to order or permute just the hearts is 13!=6,227,020,800. That’s about what the world population was in 2002. So back then if somebody could have made a list of all possible ways to arrange those 13 cards in a row, there would have been enough people on the planet for everyone to get one such permutation. I think it’s good to remind ourselves that whenever we shuffle the deck, we can be almost certain that our arrangement has never been created before (since $52!\approx 8\times 10^{67}$ arrangements are possible). Wow! Alex is looking for “random” numbers by simply asking people. Go contribute your own “random” number here. Can’t wait to see the results! Quick! Think of an example of a real-world bimodal distribution! Maybe you have a ready example if you teach stat, but here’s a really nice example from Michael Lugo: Book prices. Before you read his post, you should make a guess as to why the book prices he looked at are bimodal (see histogram above). Philosophy and History of Math Mike Thayer just attended the NCTM conference in Philadelphia and brings us a thoughtful reaction in his post, The Learning of Mathematics in the 21st Century. Mike wrote this post because he had been left with “an ambivalent feeling” after the conference. He wants to “engage others in mathematics education in discussions about ways to improve what we do outside of the frameworks that are being imposed on us by those outside of our field.” As a secondary educator, I agree with Mike completely and really enjoyed his post. Mike isn’t satisfied with where education is going. In his post, he writes, “We are leaping ahead into the unknown with new educational models, and we never took the time to get the old ones right.” Edmund Harriss asks Have we ever lost mathematics? He gives a nice recap of foundational crises throughout the history of mathematics, and wonders, ultimately, if we’ve actually lost any mathematics. There’s also a short discussion in the comments section which I recommend to you. Peter Woit reflects on 25 Years of Topological Quantum Field Theory. Maybe if you have degree in math and physics you might appreciate this post. It went over my head a bit, I’m afraid! Book Reviews In this post, Matt reviews a 2012 book release, Who’s #1, by Amy N. Langville and Carl D. Meyer. The book discusses the ranking systems used by popular websites like Amazon or Netflix. His review is thorough and balanced–Matt has good things to say about the book, but also delivers a bit of criticism for their treatment of Arrow’s Impossibility Theorem. Thanks for this contribution, Matt! [edit: Thanks MATT!] Shecky R reviews of David Berlinski’s 2011 book, One, Two Three…Absolutely Elementary mathematics in his Brief Berlinski Book Blurb. I’m not sure his review is an *endorsement*. It sounds like a book that only a small eclectic crowd will enjoy. Peter Rowlett submitted this post about linear programming and provides a link to an interactive problems solving environment. Peter Rowlett also weighs in on the recent news about a German high school boy who has (reportedly) solved an open problem. Many news sources have picked up on this, and I’ve only followed the news from a distance. So I was grateful for Peter’s comments–he questions the validity of the news in his recent post “Has schoolboy genius solved problems that baffled mathematicians for centuries?” His comments in another recent post are perhaps even more important though–Peter encourages us to think of ways we can remind our students that lots of open problems still exist, and “Mathematics is an evolving, alive subject to which you could contribute.” Here’s a fun-loving post about Heptagrins, and all the crazy craft projects you can do with them. Don’t know what a Heptagrin is? Neither did I. But go check out Jess Hawke’s post and she’ll tell you all about them! Any Lewis Carroll lovers out there? Julia Collins submitted a post entitled “A Night in Wonderland” about a Lewis Carroll-themed night at the National Museum of Scotland. She writes, “Other people might be interested in the ideas we had and also hearing about what a snark is and why it’s still important.” When you check out this post, you’ll not only learn about snarks but also about creating projective planes with your sewing machine. Cool! Mike Croucher over at Walking Randomly gives a shout out to the free software Octave, which is a MATLAB replacement. Check out his post, here. MATLAB is ridiculously expensive, and so the world needs an alternative like Octave. He provides links to the Kickstarter campaign–and Mike has backed the project himself. I too believe in Octave. I’ve used it a few times for my grad work and I’ve been very grateful for a free alternative to MATLAB. The End Okay, that’s it for the 87th Carnival of Mathematics. Hope you enjoyed all the posts! Sorry it took me a couple days to post it–there was a lot to digest :-). If you missed the previous carnival (#86), you can find it here. The next carnival (#88) will be hosted by Christian at checkmyworking.com. For a complete listing of all the carnivals, and more information & FAQ about the carnivals, follow this link. Jake Scott Mr. Scott hits another one out of the park! A math carnival here?? Yes, that’s right! In just a few weeks, I’ll be hosting the 87th Carnival of Mathematics. Please submit articles here, sometime before June 1st. I look forward to curating the submissions, and of course, sharing some great mathematics with the math blogging community! And if you haven’t done so yet, please go check out the current carnival at the Math Less Traveled. To get you in the carnival mood, here’s a juggling video. See if you can spot Mr. Chase :-). In fact, today, I just gave the “Mathematics of Juggling” lecture three times. I try to give this lecture as a fun-day at the end of the year in my Precalculus classes. So, needless to say, I’m in the juggling mood! For Reals A few people have pointed me to this mathy web comic: I’m not sure how often discrete mathematics uses the phrase “for reals”….I would think “for natural numbers” would be more appropriate, don’t you? I’m Perfect! Happy Birthday to Mr. Chase, today! Today, I think I can safely say, is the last time my age will be a perfect number. The last time my age was perfect was when I was 6 years old. For those that forget the definition of a perfect A number is perfect if it is the sum of its proper divisors (that is, the sum of its divisors, excluding itself). For example, 6 is perfect because 1+2+3=6. So, how old am I? If you’re a consummate mathematician, you have the first couple perfect numbers memorized, and this is an easy question. If you’ve never thought about perfect numbers, or you forget what the next one is, I challenge you to figure it out for yourself. I challenged my students today to figure out my age, and two of them got it out without my help. For a real challenge, prove that there are infinitely many perfect numbers. (open problem!)
{"url":"http://mrchasemath.wordpress.com/category/just-for-fun/page/3/","timestamp":"2014-04-16T11:14:00Z","content_type":null,"content_length":"86754","record_id":"<urn:uuid:8ef465a3-9cd7-4e58-aa04-f1acc6974758>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Low-Jitter 0.1-to-5.8GHz Clock Synthesizer for Area-Efficient Per-Port Integration Journal of Electrical and Computer Engineering Volume 2013 (2013), Article ID 364982, 8 pages Research Article Low-Jitter 0.1-to-5.8GHz Clock Synthesizer for Area-Efficient Per-Port Integration ^1PMC-Sierra, Burnaby, BC, V5A 4V7, Canada ^2University of British Columbia, Vancouver, BC, V6T 1Z4, Canada Received 31 December 2012; Accepted 11 June 2013 Academic Editor: Chih-Wen Lu Copyright © 2013 Reza Molavi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Phase-locked loops (PLLs) employing LC-based voltage-controlled oscillators (LC VCOs) are attractive in low-jitter multigigahertz applications. However, inductors occupy large silicon area, and moreover dense integration of multiple LC VCOs presents the challenge of electromagnetic coupling amongst them, which can compromise their superior jitter performance. This paper presents an analytical model to study the effect of coupling between adjacent LC VCOs when operating in a plesiochronous manner. Based on this study, a low-jitter highly packable clock synthesizer unit (CSU) supporting a continuous (gapless) frequency range up to 5.8GHz is designed and implemented in a 65nm digital CMOS process. Measurement results are presented for densely integrated CSUs within a multirate multiprotocol system-on-chip PHY device. 1. Introduction The design of clock multipliers for multirate multistandard applications involves a tradeoff between the output clock jitter and the frequency tuning range. Traditionally, a wide range is achieved via non-LC-based oscillators such as relaxation or ring oscillators [1–3] at the cost of higher phase noise and intrinsic jitter. LC VCOs are used for low-jitter multigigahertz applications, but their tuning range is inherently small [2, 4]. Moreover, dense integration of multiple LC VCOs on a silicon die poses a new challenge due to mutual coupling between inductors and the resulting frequency pulling and induced phase jitter among adjacent oscillators. In this work, a low-jitter highly packable Clock-Synthesizer Unit (CSU) supporting a continuous (gapless) frequency range up to 5.8GHz is designed and implemented in 65nm digital CMOS process. One of the objectives of this clock generation architecture is to close the gap between ring oscillators with wide tuning range but high phase-noise and jitter and LC oscillators with limited tuning range and low phase noise. The clock synthesizer architecture is described in Section 2. In Section 3, a model is presented that describes the effect of magnetic coupling between adjacent VCOs and the resulting phase jitter in the PLL under test. Implementation results and conclusions are presented in Sections 4 and 5, 2. Architecture The clock synthesizer unit presented in this work is intended for per-port integration in transceivers supporting various wireline telecommunications and data communication standards. As shown in Figure 1, the CSU receives a stable crystal-based reference clock (REFCLK) and employs two LC VCOs, a programmable charge pump, a high-speed fractional feedback divider, and flexible bank of post-PLL dividers (postdividers) to multiply up the reference frequency to generate the intended half-baud-rate clock. This synthesizer employs a moderate bandwidth PLL, programmable from 400kHz to 1.2MHz, to attenuate fractional-N spurs, and the reference and charge-pump noise, while suppressing the VCO phase noise to comply with stringent jitter specifications of numerous wireline standards. As shown in Figure 2, the CSU provides complementary CMOS output clocks, CLKHR and CLKHRB, at half the baud rate driving one transmitter (TX), which transmits data on both transitions of the differential clock (CLKHR-CLKHRB). The large tuning range of the VCO (3.6GHz to 5.8GHz), comes from two LC tanks, combined with a flexible postdivider bank implementing multiple divide ratios with 50% output duty cycle which guarantees gapless frequency synthesis for baud rates from the VCO’s maximum frequency of 5.8GHz down to 0.1GHz. Relying on the wide VCO frequency range and the postdivider flexibility, a redundant frequency mapping is planned for critical telecom rates, most notably 2.488Gb/s SONET, that employs alternative VCO rate and postdivider combinations to avoid running adjacent VCOs at the same (or close) nominal rates. This allows dense integration of a large number of serializer-deserializer (SERDES) links each with a per-port frequency synthesizer, without any significant inductor coupling amongst adjacent VCOs. The CSU feedback path consists of a high-speed multimodulus divider (MMD) running at the VCO rate that is controlled by a modulator (DSM) [5, 6]. The 24b DSM uses a 3rd-order single-loop topology, allowing frequency synthesis resolution down to 2 parts per billion (ppb). A programmable integrated passive loop filter is used to suppress the reference clock and the DSM quantization noise from the VCO’s control voltage. A parallel combination of accumulation mode (AMOS) varactors and PMOS capacitors is used to linearize the characteristics of the on-chip capacitor to maintain optimal loop dynamics across the range of VCO’s control voltage (see Figure 1, inset). Two LC VCOs with overlapping tuning ranges, each comprised of a cross-coupled NMOS and PMOS topologies, generate the required 3.6GHz to 5.8GHz tuning range. Integrated inductors with stacked metal for lower resistance are used to achieve high quality factor () and hence low VCO phase noise. To increase the headroom for low-voltage operation on 1 volt supply, the tail current source of the VCO is eliminated. One advantage of this approach is the removal of the tail-current noise, which would otherwise fold back into the close-in phase noise of the VCO [4]. Furthermore, the increased oscillator swing due to the added headroom improves the phase noise performance. The overall silicon area is reduced due to the removal of a large current source, current mirrors, and associated noise filters for biasing. It is worth noting that since there is no tail current source in this design, the of the devices and hence the total negative resistance is solely governed by the size of the NMOS and PMOS transistors. To guarantee oscillation, it is necessary that across the frequency band, where is the equivalent shunt resistance of the inductor’s series loss resistance (), and is the overall transconductance of the cross-coupled transistors. Assuming a relatively constant versus frequency, the minimum required transconductance for oscillation varies by a factor of across the frequency range of each VCO. Since the of the cross-coupled pairs has to be large enough to guarantee the oscillation startup at the lower end of the frequency band, there is a waste of power at the higher end of the frequency range, especially at fast process corner (FF), where the transistor threshold voltages are smaller. To alleviate this problem, a set of programmable parallel switches control the total resistance to ground and hence the VCO’s power consumption (Figure 1, inset). This flexible scheme results in up to 30% power reduction for high-frequency settings or Fast silicon process corner. The wide tuning range of the VCO is achieved through the combination of coarse tuning using fixed switchable capacitors, implemented by a stack of interdigitated metal capacitors and fine-tuning of the AMOS varactors via the control voltage. A VCO calibration scheme which sets to one of multiple voltage levels at startup (nominally ) selects the optimum metal capacitor for the target rate and given process corner. Provisions have been made for temperature-aware calibration, that is, to choose for the calibration based on the calibration temperature so as to offer additional margin for postcalibration variations of temperature and supply voltage. A dedicated flip-chip power bump near the VCO core is intended to minimize IR drop and power supply noise caused by other blocks in the SERDES, including adjacent PLLs. To further stabilize the VCO’s supply, a large decoupling capacitor, consisting of AMOS varactor and metal capacitor using metal layers M1-M2, is implemented underneath the patterned ground shield (PGS) of the VCO’s inductor. The PGS is implemented in a higher metal layer (M3) to allow this implementation. The incremental effect on the inductor quality factor is negligible, while a large area of silicon die is reused to filter the sensitive VCO supply. 3. Clock Jitter in Plesiochronous Neighboring PLLs According to ITU standards for telecommunications (ITU-T), two signals are plesiochronous if they have the same nominal rate, with any variation in rate being constrained within specified limits. For example, two bit streams are plesiochronous if they are clocked off two independent clock sources that have the same nominal frequencies but may have a slight frequency mismatch measured in parts per million (ppm), which would lead to a drifting phase and cycle slips. In other words, two plesiochronous signals or systems are almost synchronous but not quite perfectly. One of the most challenging situations for noise coupling among densely integrated SERDES links with independent rates is when adjacent links run in a plesiochronous manner with the line rates offset anywhere in the approximate range of ±10 to ±500ppm. In this case, any coupling between the links in general, and magnetic coupling between their respective LC VCOs in particular, can cause in-band noise and spurs. The unwanted pulling of one VCO by another VCO right around the bandwidth of the victim’s PLL proves to be problematic, especially for Telecommunication standards with close-in jitter specifications, for example, SONET OC-48 with jitter integration band specified from 12kHz to 20MHz offset from the carrier. We present a model that helps understand the behavior of the unwanted periodic jitter in two adjacent PLLs (here known as aggressor and victim), when the two PLLs operate at a small frequency offset and the magnetic isolation between their VCO inductors is finite. To quantify this effect, consider two adjacent VCOs operating at slightly different frequencies, the victim VCO at and the aggressor VCO at , separated by small frequency offset . The coupling factor () between the inductors and in the two VCOs is simulated using an electromagnetic (EM) simulation tool. Assuming identical inductors used in the two VCOs in neighboring links, the open-circuit voltage induced by the aggressor on the victim can be calculated as in (2): where is the current flowing through the aggressor inductor. The noise voltage induced in the victim’s inductor is then calculated as follows: Equation (3) indicates that when loaded by the tank impedance of the victim VCO, which also includes the impedance of the cross-coupled pair, the induced voltage, , becomes smaller by a loading factor . As can be seen in Figure 3, this voltage appears as two asymmetric sidebands in the output voltage spectrum of the victim VCO. This is because the interference from the aggressor at some offset from the victim VCO frequency, that is, , can be modeled as the superposition of two AM and PM components. To explain this, we express the victim VCO’s output voltage as where the first term represents the desired VCO output voltage oscillating at , while the second term is the interference due to the aggressor VCO as expressed by (3). Using the phasor representation in Figure 4 and assuming that , the victim’s output voltage may be rewritten as where The term represents a periodic amplitude modulation (AM) of the VCO’s carrier with a modulation index of at frequency and generates two in-phase sidebands around the VCO frequency. The term represents phase modulation (PM) with a modulation index of and produces two opposite-phase sidebands around the VCO frequency. This explains the existence of a sideband at in Figure 3 that is smaller in magnitude than the sideband at the aggressor frequency. The PM modulation of can be described by a voltage perturbation at angular frequency on the control voltage of the VCO’s varactor. This voltage would modulate the varactor capacitance, hence the frequency and phase of the oscillator, and creates sideband spurs. This modeling is useful since it allows us to evaluate noise-shaping behavior of the PLL on the induced phase interference, as described next. The response of a PLL to a voltage disturbance at the input of its VCO largely depends on the dynamics of the loop and the location of zeros and poles set for the stability of the PLL. This can be analyzed based on the closed-loop phase model of the victim PLL as shown in Figure 5. As explained, represents a small-signal voltage perturbation referenced to the input control voltage of the victim VCO that describes the frequency/phase modulation caused by the magnetic coupling from the aggressor VCO. The frequency of this unwanted modulation is the difference (offset) between the two VCO frequencies. The transfer function from this ripple voltage to the output phase of the victim VCO () is calculated as where , , and are the VCO gain, charge pump current, and feedback divider ratio, respectively, in the charge pump-based PLL. and are the values of the resistor and capacitor comprising the loop filter zero frequency. As implied by (9), the transfer function from the unwanted coupled spur to the output phase of the PLL has a bandpass characteristics, with the passband extending from the zero frequency () to the PLL’s unity-gain bandwidth frequency (denoted by ). The transfer function versus the offset frequency is shown in Figure 6. This implies that plesiochronous links with rate offsets close to the bandwidth of the PLL have the largest impact on one another. To support this analysis and key conclusion, an experiment is carried out in which the frequency offset between two adjacent PLLs is varied from 0 (synchronous operation) to values larger than the bandwidth of each PLL. The total RMS jitter (TJ[rms]) of the PLL is measured for each offset case, and the results are plotted as in Figure 7. The PLL under test has its zero frequency and bandwidth set to 90kHz and 300kHz, respectively. As seen in Figure 7, the total RMS jitter peaks around 200kHz (i.e., near the transfer function peaking predicted by Figure 6) and drops off at frequencies below the zero frequency and above the bandwidth of the PLL, as expected from (9). This behavior can also be explained by the PLL dynamics. That is, if the induced spur is far below the loop’s zero frequency, the PLL response is fast enough to correct this variation and the jitter goes down. Conversely, if the spur is far above the PLL bandwidth, the VCO being an integrator does not follow fast changes on its control voltage, and hence the output spur will be small. Note that the jitter at zero offset reaches its lowest limit, that is, the intrinsic jitter (a.k.a. random jitter or ) of the victim PLL. In other words, the lowest total jitter is achieved by synchronous In synchronous operation (0ppm offset), the total jitter is dominated by the random jitter of a standalone PLL, which in turn depends on the noise contribution of the blocks within the PLL, as well as the PLL dynamics. Hence, the charge pump, the VCO, the feedback divider, and the passive loop filter are designed with careful attention to their random jitter contribution. This noise optimization, as will be discussed shortly, allows the use of a moderate quality low-cost reference clock for this multirate PLL. In order to reduce the plesiochronous magnetic coupling effect, several techniques have been proposed that may be exercised including [10–12]. In this work, we propose a variation of the straightforward solutions of spacing out the links physically, hence lowering the mutual coupling and the induced noise. An exercise employing this technique would be to power up every other link (rather than all links) on the chip and measure the resulting spurs. This is shown in Table 1. As can be seen, doubling the distance between the active links results in about 12dB reduction in the aggressor spur observed at the output spectrum of the victim PLL, which agrees with the fact that magnetic coupling is inversely proportional to the square of the distance between the inductors. However, if an aggressor VCO operates at a frequency corresponding to a frequency offset far above the bandwidth of the nearby victim PLL, the aggressor will have very little impact due to 20dB/ decade suppression of the coupled spur beyond the bandwidth of the victim PLL. As a result, rather than powering down every other VCO, one can run them alternately at totally different frequencies to satisfy the previously mentioned frequency offset condition. This technique can be implemented if the dividers following each VCO provide the same final half-baud-rate clocks to their respective TX. In other words, the goal is to have a redundant frequency plan to achieve the same HRCLK(B) frequencies after the PLL postdividers, while the VCOs run at totally different rates. In practical terms, every other VCO is tuned to a different frequency, hence circumventing the unwanted coupling between adjacent PLLs and effectively increasing the spacing between plesiochronous VCOs by a factor of two. In this case, one would only worry about the coupling between every other link, which means 12dB improvement in the magnitude of unwanted coupled spurs. This frequency scheme virtually eliminates noise coupling amongst plesiochronous neighboring links and allows for dense placement of the links with integrated per-port clock synthesizers. 4. Implementation and Summary Each clock synthesizer unit occupies an area of (560 × 700)μm^2, integrated along with a transceiver link making it 1.2mm tall, thereby allowing a minimum integration pitch of 560μm for abutting multiple links. Figure 8 shows the die micrograph of a high-capacity single-chip multirate multiprotocol PHY device in which 18 SERDES ports are integrated as described. This device enables the convergence of high-bandwidth data, video, and voice services over optical transport network (OTN) and offers advanced protocol mapping and multiplexing capabilities for more efficient multiservice integration on a single platform. The VCO and its output multiplexer and buffers draw a typical current of 11mA at 1 volt, while the entire CSU draws under 20mA. The measured tuning characteristics of the dual VCO versus coarse tuning metal capacitor settings over process, temperature, and supply voltage (PVT) variations are shown in Figure 9. Measurement results are within 0 to 2% of the simulations at . The 2% discrepancy occurs at higher frequencies where most fixed metal capacitors are disconnected, and hence any inaccuracies in modeling varactors and parasitic capacitors are more pronounced. The RMS jitter measured is 538 fs for 2.488Gb/s applications (integrated from 1kHz to 40MHz), as shown in the phase noise snapshot in Figure 10. With an integration bandwidth of 12kHz to 20MHz based on SONET OC-48 specifications, the RMS jitter is 0.46ps (±0.01ps) for an isolated channel and 0.50ps (±0.01ps) with all channels active, as shown in Table 2. Note that the PLL’s reference clock is the dominant phase noise contributor below 1kHz. Despite the low output jitter of the CSU, its input reference clock has fairly relaxed requirements for most applications. The reference clock comes from a low-cost 2- (12kHz to 20MHz) source, enters the chip through a single-ended pad, and is conveniently autorouted through the digital core to all the links. Table 2 summarizes measured RMS jitter of the CSU output for two representative supported wireline standards. Comparative measurements are done first on an isolated link-under-test and then with full activity on all links. Also, both alternative configurations for odd and even channels are shown for the SONET OC-48 case. Table 3 presents the performance summary and comparison with prior 5. Conclusion The design and integration of an array of LC-based clock synthesizers for multiple transceiver links supporting various wireline standards, especially Telecommunication standards, requires particular attention to the issue of electromagnetic coupling amongst LC VCOs. This paper develops a modeling technique that explains the behavior of a victim synthesizer PLL due to this coupling effect. In addition, a highly packable clock synthesizer, employing redundant frequency mapping, is designed and fabricated in a 65nm digital CMOS technology. The measured clock jitter of this synthesizer is only 0.5 (integrated from 12kHz to 20MHz) in SONET OC-48 application when all adjacent links are up and running in a plesiochronous manner that is the worst-case scenario for noise coupling. The authors would like to thank anonymous reviewers for their useful and constructive comments. The authors acknowledge the support of PMC-Sierra for the chip fabrication and testing. The research is also supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and CMC Microsystems.
{"url":"http://www.hindawi.com/journals/jece/2013/364982/","timestamp":"2014-04-18T14:03:51Z","content_type":null,"content_length":"156177","record_id":"<urn:uuid:efaba097-abe3-4b48-b971-2459594b7a48>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Sudoku Rules Sudoku Rules Are Simple It's the solution that's the challenge. Here are the illustrated Sudoku rules. In the illustration at the left the numbers 5, 3, 1, and 2 are the "givens". They can not be changed. The remaining numbers in black are the numbers that you fill in to complete the row. In the illustration at the left the numbers 7, 2, and 6 are the "givens". They can not be changed. You fill in the remaining numbers as shown in black to complete the column. Like the Sudoku requirements for rows and columns, every region must also contain the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9. Duplicate numbers are not permitted in any region. Each region will differ from the other regions. In the illustration to the left the numbers 1, 2, and 8 are the "givens". They can not be changed. Fill in the remaining numbers as shown in black to complete the region. In summary, the Sudoku rule is: Complete the Sudoku puzzle so that each and every row, column, and region contains the numbers one through nine only once. There is only one solution to a properly designed Sudoku puzzle. Let's learn How To Play Sudoku! Return from Sudoku Rules And Instructions to Sudoku. home page
{"url":"http://www.sudokuessentials.com/sudoku_rules.html","timestamp":"2014-04-18T10:34:51Z","content_type":null,"content_length":"11860","record_id":"<urn:uuid:229e178c-e23f-4d00-a6f2-17a4cd5f147a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of Different Bases Date: 03/27/2002 at 05:57:06 From: L. McCrory Subject: Base numbers Dear Dr. Math, I am trying to find three bases other than base 2, and find a use for them. Any help would be greatly appreciated. Thank you. L. McCrory Date: 03/27/2002 at 23:43:11 From: Doctor Twe Subject: Re: Base numbers Hi - thanks for writing to Dr. Math. First, check out our "Number Bases" FAQ at: Some number bases (other than 2 and 10) that come immediately to mind for their practical use are: Octal (base 8) and hexadecimal (base 16): These are used in computer sciences as a "shorthand notation" of binary values. Their use extends to diagnostics, programming, and html, among other computer-based Sexagesimal (base 60): Used in hour:minute:second representation of time, as well as degree*minute'second" representation of angular measure. (We don't represent the minutes and seconds with a single symbol, but the minutes and seconds can be thought of as a single "digit" represented by a two-symbol combination.) The Babylonians also used a sexagesimal-based number system. Vigesimal (base 20): The Mayans used a vigesimal-based number system. The names of numbers in French are also loosely based on 20's. Senary (base 6): Sometimes used in dice-based games. Multiple dice are read by color as a multi-digit senary value. The result is usually then looked up on a chart of some sort. There are also some specialized "weird" uses of number bases. See, for example, the following from our Ask Dr. Math archives: Paint Formulas in Base 48 Duotrigesimal (Base 32) Numbers In the latter entry, you'll also find references to uses of base 64, base 85, and base 36 in specific computer applications. I hope this helps. If you have any more questions, write back. - Doctor TWE, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/56104.html","timestamp":"2014-04-17T12:55:14Z","content_type":null,"content_length":"7106","record_id":"<urn:uuid:79ba2952-7a64-4aa1-a0f0-c54413c1aa05>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Use the shell method to find the volume of the solid generated by revolving about the y-axis. x=sqrt(9-x^2) and x=0 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50bb40ace4b0017ef6252475","timestamp":"2014-04-18T16:23:05Z","content_type":null,"content_length":"42426","record_id":"<urn:uuid:7218f3d0-55a4-4441-847e-1092085933d8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Advanced linear regression question (non constant random perturb [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Advanced linear regression question (non constant random perturbation variance) From "Arne Risa Hole" <arnehole@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Advanced linear regression question (non constant random perturbation variance) Date Wed, 28 Jun 2006 13:29:13 +0100 Hi Guillermo You can estimate this model using the -regh- command written by Jeroen Weesie (type findit regh). You have to generate a dummy variable indentifying the two groups and include this variable in the var() part of the model specification which models the (log) error variance. The help file includes some useful references. Best wishes On 28/06/06, Guillermo Villa <guillermo.villa@uc3m.es> wrote: Dear statalisters, I want to estimate a linear regression in which the variance of the random perturbation is not constant. I do not want this variance to depend on some explanatory variable, rather I have two types of observations and each of this types should have its own constant variance. My sample is divided in two parts (I = I1 + I2). Then, ei follows a normal distribution with mean 0 and variance sigma1 if i belongs to I1 and ei follows a normal distribution with mean 0 and variance sigma2 if i belongs to I2. I suppose this model should be estimated using GLS, but I do not know how to tell Stata that here the random perturbation variance is not constant. Any idea? * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2006-06/msg00911.html","timestamp":"2014-04-16T19:51:56Z","content_type":null,"content_length":"7587","record_id":"<urn:uuid:ccbd7134-f5ba-4d40-976a-5438e537df3a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
How many tons of dynamite are in one megaton? In physics, mass (from Greek μᾶζα "barley cake, lump [of dough]") is a property of a physical system or body, giving rise to the phenomena of the body's resistance to being accelerated by a force and the strength of its mutual gravitational attraction with other bodies. Instruments such as mass balances or scales use those phenomena to measure mass. The SI unit of mass is the kilogram For everyday objects and energies well-described by Newtonian physics, mass has also been said to represent an amount of matter, but this view breaks down, for example, at very high speeds or for subatomic particles. Holding true more generally, any body having mass has an equivalent amount of energy, and all forms of energy resist acceleration by a force and have gravitational attraction; the term matter has no universally-agreed definition under this modern view. TNT equivalent is a method of quantifying the energy released in explosions. The "ton of TNT" is a unit of energy equal to 4.184 gigajoules, which is approximately the amount of energy released in the detonation of one metric ton of TNT. The "megaton of TNT" is a unit of energy equal to 4.184 petajoules. The kiloton and megaton of TNT have traditionally been used to rate the energy output, and hence destructive power, of nuclear weapons (see nuclear weapon yield). This unit is written into various nuclear weapon control treaties, and gives a sense of destructiveness as compared with ordinary explosives, like TNT. More recently, it has been used to describe the energy released in other highly destructive events, such as asteroid impacts. However, TNT is not the most energetic of conventional explosives. Dynamite, for example, has about 60% more energy density (approximately 7.5 MJ/kg, compared to about 4.7 MJ/kg for TNT). Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are four basic design types. In all except the last, the explosive energy of deployed devices is derived primarily from nuclear fission, not fusion. Pure fission weapons historically have been the first type to be built by a nation state. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost effective option once the necessary industrial infrastructure is built. The explosive yield of a nuclear weapon is the amount of energy discharged when a nuclear weapon is detonated, expressed usually in TNT equivalent (the standardized equivalent mass of trinitrotoluene which, if detonated, would produce the same energy discharge), either in kilotons (kt; thousands of tons of TNT) or megatons (Mt; millions of tons of TNT), but sometimes also in terajoules (1 kiloton of TNT = 4.184 TJ). Because the precise amount of energy released by TNT is and was subject to measurement uncertainties, especially at the dawn of the nuclear age, the accepted convention is that one kt of TNT is simply defined to be 1012 calories equivalent, this being very roughly equal to the energy yield of 1,000 tons of TNT. The yield-to-weight ratio is the amount of weapon yield compared to the mass of the weapon. The theoretical maximum yield-to-weight ratio for fusion weapons (thermonuclear weapons) is 6 megatons of TNT per metric ton of bomb mass (25 TJ/kg).]citation needed[ Yields of 5.2 megatons/ton and higher have been reported for large weapons constructed for single-warhead use in the early 1960s. Since this time, the smaller warheads needed to achieve the increased net damage efficiency (bomb damage/bomb weight) of multiple warhead systems, has resulted in decreases in the yield/weight ratio for single modern warheads. A petaton is a unit of mass that is equal to 1,000 teratons. It can also used as a unit of energy equivalent to 1×1015 (one million billion) tons of TNT. This latter use is usually restricted to astronomical events such as meteor impacts or large science fiction weapons. The energy released by the explosion of one petaton of TNT, 4.18×1024 joules, is equivalent to the energy of an earthquake of magnitude 12 on the Richter Scale, or to the energy of a 60 km rocky meteorite impacting the earth at 25 km/s.]citation needed[ Because energy is defined via work, the SI unit for energy is the same as the unit of work – the joule (J), named in honour of James Prescott Joule and his experiments on the mechanical equivalent of heat. In slightly more fundamental terms, 1 joule is equal to 1 newton-metre and, in terms of SI base units An energy unit that is used in atomic physics, particle physics and high energy physics is the electronvolt (eV). One eV is equivalent to J−191.60217653×10. In spectroscopy the unit cm−1 = 0.000123986 eV is used to represent energy since energy is inversely proportional to wavelength from the equation $E = h u = h c/\lambda$. Business Finance Related Websites:
{"url":"http://answerparty.com/question/answer/how-many-tons-of-dynamite-are-in-one-megaton","timestamp":"2014-04-19T06:53:53Z","content_type":null,"content_length":"31887","record_id":"<urn:uuid:61885596-daea-4f75-9cfd-59f57ef3e33e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: roll rates Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: roll rates From Nick Cox <njcoxstata@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: roll rates Date Sat, 23 Apr 2011 00:58:29 +0100 . h xttrans On Sat, Apr 23, 2011 at 12:25 AM, Argyn Kuketayev <akuketayev@mail.primaticsfinancial.com> wrote: > i wonder if Stata has a package for pool roll rate analysis. > the roll rates are probabilities of transitions between asset's > states, such as credit grades. let's say, we have N assets, each can > be in one of the states S. some of these states are end-states, i.e. > once an asset gets into this state, it exits the pool. > so we can observe monthly asset states, and transitions between them. > the assumptions is that all assets have the same state transition > probabilities, and that these probabilities remain constant over time > (stationary). i need to estimate the probabilities of transitions > between states. one can think of a matrix with rows corresponding to > an asset state this month, and the columns are states in next month. > so sum of columns in each row is 100%. each cell is a probability of > transition from row state to column state. > what would be the most straightforward way to estimate the transition > probability matrix (roll rates) in Stata? * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ • References: □ st: roll rates ☆ From: Argyn Kuketayev <akuketayev@mail.primaticsfinancial.com>
{"url":"http://www.stata.com/statalist/archive/2011-04/msg01099.html","timestamp":"2014-04-17T12:49:30Z","content_type":null,"content_length":"8443","record_id":"<urn:uuid:6c861d3d-ab9d-4587-b81f-f6059195d185>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Training & E-Learning Zone for Quizzes Math is usually considered as a profound and dull subject and it arouses many complaints from students, so teachers always need to wrack their brains to make teaching job more inspiring. When it comes to online teaching, the online math test comes as an effective assistant for math teachers. It helps them ensure the concentration of their students in test. Then how could teachers make a math test effectively? Of course, the tool, I mean, math quiz creator is quite critical. Here I will introduce 3 popular math test makers. 1. Wondershare QuizCreator – Flash math test maker Wondershare QuizCreator is one of the few Flash quiz making software which enable users to make math test. It has an equation editor which allows you to edit all the mathematical symbols. Now I will show you the steps to make a math test with this math quiz creator. First, launch your Wondershare QuizCreator and choose to create a new quiz. Pick one question type you want from the nine question types it provides. I will take “multiple choice” for example. In the question panel, you could add math symbols both in questions and answers. Click the “Equation” button and a math symbol panel would pop out. You can choose any symbol you want and edit the math formula. After making the quizzes, you could publish them as Flash, SCORM, exe, word, excel, etc. Also you can publish them to QuizCreator Online which helps teachers track, analyze, and report the test 2. Math Test Creator from Software Reflections – specialized math quiz maker Math Test Creator is the math quiz creator which allows teachers and parents to create many different types of math tests. These tests are designed for students of grades 1st-7th. Teachers and parents can select a variety of options to generate the math test with the answer sheets quickly. All the math tests generated are automatically saved and can therefore be retaken by the students or Math test Creator has a free evaluation version. With this version, you could make four question types. The insufficiency of making math tests with specialized math quiz creator is that you can only make math quiz. Unlike Wondershare QuizCreator, the pure math test maker could not make stunning Flash quiz with multimedia which may turn the test more interesting. 3. Math Test Worksheets – the printable math quiz creator online There’re lots of math test worksheets you could download online to make a math test. They’re free, easy to use, and effective in creating printable quiz, but they cannot well support online test. The following are some math test worksheets Websites for teachers: So above are the 3 popular math test makers for teachers to make mathematic quizzes. They enable teachers to create effective Flash math quiz, professional math test, or printable math quiz. Facing so many choices teachers can choose the proper one according to their own needs.
{"url":"http://www.quiz-creator.com/blog/tag/make-math-test/","timestamp":"2014-04-16T13:22:27Z","content_type":null,"content_length":"27058","record_id":"<urn:uuid:77c814dc-1748-4729-957c-faac5a0e338d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
T.,Yoshise, A.: A unified approach to interior point algorithms for linear complementarity problems Results 1 - 10 of 130 - SIAM REVIEW , 1996 "... ..." , 1998 "... We present primal-dual interior-point algorithms with polynomial iteration bounds to find approximate solutions of semidefinite programming problems. Our algorithms achieve the current best iteration bounds and, in every iteration of our algorithms, primal and dual objective values are strictly imp ..." Cited by 181 (34 self) Add to MetaCart We present primal-dual interior-point algorithms with polynomial iteration bounds to find approximate solutions of semidefinite programming problems. Our algorithms achieve the current best iteration bounds and, in every iteration of our algorithms, primal and dual objective values are strictly improved. - SIAM Review , 1997 "... Abstract. This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions f ..." Cited by 127 (24 self) Add to MetaCart Abstract. This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions for the complementarity formulations. The goal of this documentation is threefold: (i) to summarize the essential applications of the nonlinear complementarity problem known to date, (ii) to provide a basis for the continued research on the nonlinear complementarity problem, and (iii) to supply a broad collection of realistic complementarity problems for use in algorithmic experimentation and other studies. - Proceedings KDD-2001: Knowledge Discovery and Data Mining , 2001 "... Abstract—A new approach to support vector machine (SVM) classification is proposed wherein each of two data sets are proximal to one of two distinct planes that are not parallel to each other. Each plane is generated such that it is closest to one of the two data sets and as far as possible from the ..." Cited by 109 (14 self) Add to MetaCart Abstract—A new approach to support vector machine (SVM) classification is proposed wherein each of two data sets are proximal to one of two distinct planes that are not parallel to each other. Each plane is generated such that it is closest to one of the two data sets and as far as possible from the other data set. Each of the two nonparallel proximal planes is obtained by a single MATLAB command as the eigenvector corresponding to a smallest eigenvalue of a generalized eigenvalue problem. Classification by proximity to two distinct nonlinear surfaces generated by a nonlinear kernel also leads to two simple generalized eigenvalue problems. The effectiveness of the proposed method is demonstrated by tests on simple examples as well as on a number of public data sets. These examples show the advantages of the proposed approach in both computation time and test set correctness. Index Terms—Support vector machines, proximal classification, generalized eigenvalues. 1 "... . This paper is summary of a comprehensive study of the problem of predicting the possible acceleration(s) of a set of rigid, three-dimensional bodies in contact in the presence of Coulomb friction. We begin with a brief introduction to this problem and a survey of related work and previous approach ..." Cited by 75 (18 self) Add to MetaCart . This paper is summary of a comprehensive study of the problem of predicting the possible acceleration(s) of a set of rigid, three-dimensional bodies in contact in the presence of Coulomb friction. We begin with a brief introduction to this problem and a survey of related work and previous approaches. This is followed by the introduction of two novel complementarity formulations for the contact problem under two friction laws: Coulomb's Law and an analogous law in which Coulomb's quadratic friction cone is approximated by a pyramid. Under a full column rank assumption on the system Jacobian matrix, we establish the existence and uniqueness of a solution to our new models in the case where the friction coefficients are nonnegative and sufficiently small. For the model based on the friction pyramid law, we also show that the classical Lemke almost-complementary pivot algorithm and our new feasible interior point method are guaranteed to compute a solution. Extensive computational result... , 1992 "... CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..." Cited by 70 (6 self) Add to MetaCart CONTENTS 1 Introduction 1 2 The Basics of Predictor-Corrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 Piecewise-Linear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincar'e (1881--1886), Klein (1882-- 1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations Partially supported by the National Science Foundation via grant # DMS-9104058 y Preprint, Colorado State University, August 2 E. Allgower and K. Georg may be traced back at least to Lahaye (1934). The classical embedding methods were the , 1996 "... In this paper a symmetric primal-dual transformation for positive semidefinite programming is proposed. For standard SDP problems, after this symmetric transformation the primal variables and the dual slacks become identical. In the context of linear programming, existence of such a primal-dual tran ..." Cited by 54 (10 self) Add to MetaCart In this paper a symmetric primal-dual transformation for positive semidefinite programming is proposed. For standard SDP problems, after this symmetric transformation the primal variables and the dual slacks become identical. In the context of linear programming, existence of such a primal-dual transformation is a well known fact. Based on this symmetric primal-dual transformation we derive Newton search directions for primal-dual path-following algorithms for semidefinite programming. In particular, we generalize: (1) the short step path following algorithm, (2) the predictor-corrector algorithm and (3) the largest step algorithm to semidefinite programming. It is shown that these algorithms require at most O( p n j log ffl j) main iterations for computing an ffl-optimal solution. The symmetric primal-dual transformation discussed in this paper can be interpreted as a specialization of the scaling-point concept introduced by Nesterov and Todd [12] for self-scaled conic problems. The ... - MATH. PROGRAMMING , 2004 "... We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the arithmetic operation complexity bound of O(n 4 log(1/ɛ)) for computing an ɛ-equilibrium solution. If the p ..." Cited by 36 (7 self) Add to MetaCart We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the arithmetic operation complexity bound of O(n 4 log(1/ɛ)) for computing an ɛ-equilibrium solution. If the problem data are rational numbers and their bit-length is L, then the bound to generate an exact solution is O(n 4 L) which is in line with the best complexity bound for linear programming of the same dimension and size. This is a significant improvement over the previously best bound O(n 8 log(1/ɛ)) for approximating the two problems using other methods. The key ingredient to derive these results is to show that these problems admit convex optimization formulations, efficient barrier functions and fast rounding techniques. We also present a continuous path leading to the set of the Arrow-Debreu equilibrium, similar to the central path developed for linear programming interior-point methods. This path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem. The defining equations are bilinear and possess some primal-dual structure for the application of the Newton-based path-following method.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=409869","timestamp":"2014-04-17T16:54:52Z","content_type":null,"content_length":"35450","record_id":"<urn:uuid:baa2bc72-813f-4912-bd5f-323f2c32e883>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
What graph parameters are determined by parameters for strongly regular graph up vote 6 down vote favorite Say two graphs are not isomorphic but are both strongly regular with the same set of parameters. Are there any parameters (other than the usual such as order, degrees, eigenvalues and multiplicities, etc.) that are determined, e.g., independence number, chromatic number, etc.? Thanks for any help add comment 5 Answers active oldest votes It's a classic result that a graph parameter called Lovasz theta-function $\theta(\Gamma)$ of a strongly regular graph $\Gamma$ is determined by its parameters. And the significance of $\ theta(\Gamma)$ is that it is "sandwiched" between the clique number and the chromatic number. up vote 7 In more detail, the parameters of the s.r.g. $\Gamma$ determine a 3-dimensional commutative algebra of symmetric matrices (the adjacency matrix $A(\Gamma)$ of $\Gamma$, the adjacency matrix down vote of its complement, and the identity matrix span this algebra). Anything that can be expressed in terms of this algebra, which is specified by the eigenvalues of $A(\Gamma)$, is a parameter you are asking about, and $\theta(\Gamma)$ is one of them. Another one is the number of spanning trees, as by Matrix Tree Theorem it is determined by the eigenvalues. Can you suggest a standard reference for this fact? Thanks! – Felix Goldberg Dec 13 '12 at 18:54 For s.r.g.'s? Say, designtheory.org/library/preprints/srg.pdf Or E.Bannai, T.Ito "Algebraic combinatorics I. Association schemes.", ISBN 978-0805304909. – Dima Pasechnik Dec 14 '12 at add comment The number of cycles of length 3,4,5 are determined. If the girth is 4, the number of 6-cycles is determined too. up vote 2 down vote add comment Okay, well, I checked Brouwer's website and combined that with the comment to the accepted answer of a question on this site. I checked the complement of the Shrikhande graph versus the complement of the line graph of $K_{4, 4}$ using Sage and found independence numbers of 3 and 4, and chromatic numbers of 6 and 4, respectively. Both are strongly regular with parameters (16, 9, 4, 6). So, that answers my question for some parameters. up vote 1 down vote They have the same girth though. What about the girth of the Shrikhande graph vs $K_{4,4}$? – Aaron Meyerowitz Dec 13 '12 at 7:25 @aaron Also equal. Sorry, I was in a hurry last night so I didn't have time to say everything I should have said. – Graphth Dec 13 '12 at 14:04 @aaron The reason I used the complements of those graphs was because the chromatic numbers and independence numbers were equal for the graphs themselves, all of those being 4. But, I did check the girth for all 4 and the pairs with same parameters had equal girth. – Graphth Dec 13 '12 at 14:44 1 If the girth of a strongly regular graph is not three, it is four and the graph is bipartite. – Chris Godsil Dec 13 '12 at 17:05 1 @chris The Petersen graph is strongly regular with girth 5... Did you say quite what you meant there? – Louis Deaett Dec 20 '12 at 15:12 show 1 more comment It seems the girth of a strongly regular graph would be determined by its parameters in the following way. If $\lambda > 0$, then the girth is 3. If $\lambda=0$ and $\mu > 1$, then the up vote 1 girth is 4. If $\lambda=0$ and $\mu=1$ then the girth is 5. That last case is a little unusual... down vote add comment The diameter, energy and number of closed walks could be determined by parameters. up vote 0 down vote add comment Not the answer you're looking for? Browse other questions tagged graph-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/116220/what-graph-parameters-are-determined-by-parameters-for-strongly-regular-graph?answertab=votes","timestamp":"2014-04-16T07:57:44Z","content_type":null,"content_length":"73367","record_id":"<urn:uuid:f06f3ba2-ee71-4216-aa32-35f20631d37e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 1413, Survey of Calculus, Fall 2006 MATH 3303, Ordinary Differential equations, Fall 2006 Tue, Thu 2:00 pm – 3:15 pm (3 credit hours), 303 Boyd Prerequisites: A grade of C or above in MATH 2644 (Calculus II). Instructor: Dr. Kwang Shin Office Hours: 11:00 am - 1:00 pm on Thursdays, and 3:30 pm – 5:30 pm on Tuesdays and Thursdays. Office Hours at Math Lab (205 Boyd): Tuesdays 11:00 am - 1:00 pm Office: 328 Boyd Phone: 678-839-4138 E-mail: kshin@westga.edu through your campus e-mail (myUWG). Course Webpage: http://www.westga.edu/~kshin/math3303 Course Description: This course is an introduction to the subject of differential equations and has three components: 1. Existence theory and classical methods for first order equations (chapters 1&2) 2. Real life applications and the theory of linear equations (chapters 3&4) 3. Techniques and methods for solving general linear equations: operator method, power series, and an introduction to the Laplace transform (chapters 6&7). We plan to use the computer algebra system Maple to explore topics such as the Laplace transform and the numerical integration of differential equations. If time permits, we will discuss some topics in Chap. 8. Required Text: Differential Equations with Boundary-Value Problems, Sixth Edition, by Dennis G. Zill and Michael R. Cullen, Brooks-Cole Publishing Company, 2004. Learning Outcomes: the student will be able: -To identify and classify a differential equation, -To decide whether a solution is unique, and to find its domain of existence, -To solve first order equations by classical methods, -To model a simple process and determine its evolution for large time, -To solve an inhomogeneous equation using undetermined coefficients or variation of parameters, -To find power series solutions of linear equations with analytic coefficients, -To use computer resources to solve ordinary differential equations. Hour Exams: Exam 1 (Tue, Feb 13), Exam 2 (Tue, Mar 13), Exam 3 (Tue, Apr 24). Final Exam: Tuesday, May 1, 2:00 pm - 4:00 pm. The final exam will be cumulative. Homework: Homework will be collected three times during the semester and will be posted at http://www.westga.edu/~kshin/math3303/. It is due at the beginning of class on Feb 6, Mar 6, and Apr 17. Each complete submission will receive 20 points and partial credit will be considered for incomplete work. However, late submission will receive zero point. Quizzes: There will be a quiz on almost every Thursday, consisting of one or two problems that are identical or almost identical to homework problems. Each quiz will be 10 points and three lowest scores will be dropped. If needed, the total quiz score will be converted to 90 point scale at the end of semester. Grade Scale: 3 hour exams 300 points (100 points each) Final 200 points Quizzes 90 points Homework 60 points Total 650 points A: 585 (90%) - 650, B: 520 (80%) - 584, C: 455 (70%) - 519, D: 357 (55%) - 454, F: 0 – 356. March 1 is the last day to withdraw the class with a grade of W. Attendance: Attendance is expected and required. You are responsible for all material covered in class and all announcements made. Undetermined number of pop-up quizzes may be given for extra points as a way of checking attendance. Such a quiz will consist of one problem, discussed during the same class. Make-up: There will be no make-up quiz. Make-up hour exams will be granted for official University activities if the student notifies the instructor at least a week in advance and for well-documented illness. There will be no make-up final except when a conflict with other finals occurs. If a conflict occurs to you, please inform the instructor at least two weeks in advance. Make-up exams will not be given after the scheduled exam date. Classroom Behavior: You are expected not to disturb your classmate's learning. Academic Honesty: Academic honesty is fundamental to the activities and principles of a university. All members of the academic community must be confident that each person's work has been responsibly and honorably acquired, developed and presented. Any effort to gain an advantage not given to all students is dishonest whether or not the effort is successful. The academic community regards academic dishonesty as an extremely serious matter, with serious consequences. In this class, when it happens, the corresponding quiz or exam will receive 0 point and the person's final letter grade will be lowered by one level.
{"url":"http://www.westga.edu/~math/syllabi/syllabi/spring07/MATH3303.html","timestamp":"2014-04-19T12:03:08Z","content_type":null,"content_length":"10522","record_id":"<urn:uuid:3ebdc6e4-3340-4562-a275-dd404b15adfe>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Abel's equation for the dilog up vote 9 down vote favorite Abel's identity for the dilogarithm (see the wikipedia page about polylogarithms) plays a role in web geometry as it is one of the abelian relations of the first example of exceptional web (Bol's 5-web) to appear in the literature. I have heard it is important in other domains (cohomology of SL(3,C), algebraic K-theory, motives ). I would like to learn more about it. I am asking for: 1. Insights on why Abel's identity is relevant in this or that field; 2. References where it plays a role. Edit. I have just learned from this blog about Bridgeman's orthospectrum identity. Those interest in the question above might want to take a look at it. dg.differential-geometry cohomology nt.number-theory hyperbolic-geometry add comment 4 Answers active oldest votes One basic answer is given by hyperbolic geometry. Ideal tetrahedra in hyperbolic 3-space H^3 are equivalent (under the action of the automorphism group PGL_2(C)) to tetrahedra with vertices {0,1,oo,z}, and their volume is given by D(z), where D(z) is the Bloch-Wigner dilogarithm, which is a slightly modified version of the dilogarithm. This amounts to writing down the hyperbolic metric and evaluating an integral, which turns out to be (very close to) Li_2(z) (although it is real valued for complex z). The tetrahedron {0,1,oo,z} is equivalent under PSL_(2)(C) to {0,1,oo,1/(1-z)} and {0,1,oo,1-1/z}, and so we get formulae: D(z) = D(1/(1-z)) = D(1 - 1/z). The tetrahedron {0,1,oo,z} is also equivalent to {0,1,oo,1/z}, except with an odd permutation of the vertices, and thus:D(z) = - D(1/z). Finally, choose a random point y in the boundary P^1(C) of H^3. If we take the tetrahedron {0,1,oo,y}, we can break it off into {0,1,oo,x} and three other tetrahedra (just like in Euclidean space). Transforming the coordinates of the other three tetrahedra into the standard form gives the 5-term relation: D(x) - D(y) + D(y/x) - D((1-1/x)/(1-1/y)) + D((1-x)/(1-y)) = 0,which gives a proof of Abel's equation. up vote 11 down vote Let's think some more about a closed hyperbolic 3-manifold M. By definition, M = H^3/Gamma for a lattice Gamma in PSL_2(C). Since H^3 is contractible, M is a K(Pi,1) space, and so there accepted is a canonical isomorphism H_*(M,Z) = H_*(Gamma,Z), comparing simplicial homology with the group homology of Gamma. Now M has a fundamental class [M] in H_3(M,Z), which gives an element in H_3(Gamma,Z) and hence also a class in H_3(PSL_2(C),Z). On the other hand, [M] can be decomposed ("triangulated") into ideal tetrehedra with parameters z_i. The set of parameters [z_i] is not unique, however, the only real "move" is the subdivision of tetrahedra, and so associated to M we get an element of the group generated by [z_i] for z_i in P^1(C) and with relation s exactly of the form satisfied by D above. This is essentially the definition of the Bloch group. D is a function this group, and this decomposition gives a map from H_3(PSL_2(C),Z) to the Bloch group. Note that it is not obvious that the z_i can be taken inside some field F, this is a consequence of Mostow Rigidity. It turns out that if we take the Bloch group B(F) generated by elements of F, this is, by work of Suslin, essentially equal to K_3(F). To summarize, the connection between the identity, the cohomology of PSL_2(C), and the Bloch group is well understood, see some papers by Walter Neumann. For the connection between the Bloch group B(F) and K_3(F), see papers of Suslin. The connection with motives is more speculative, but here you should look at some papers of Goncharov. (There are some generalizations/connections to higher regulators for K-groups, but this is a very nice example to understand, being both somewhat accessible yet still very interesting.) If I could, I would add a flag to this post "hyperbolic geometry". Thanks. Very nice answer! I will try to add the hyperbolic-geometry tag to the question. – jvp Oct 25 '09 at 4:35 added a hyperbolic-geometry tag to the question – Greg Stevenson Oct 25 '09 at 4:44 add comment There is a remarkable article, "The remarkable dilogarithm," J. Math. Phys. Sci. 22 (1988), 131--145, by Don Zagier, which was recently reprinted and updated as "The dilogarithm up vote 3 down function" (63 pages!) in one of the collections by Springer Verlag. add comment For the relation to motives (and K-theory), I'd suggest the first several article of Motives, volume 2 (the proceedings of the Seattle conference link text). I don't really know this stuff, but it is apparently believed that the polylogarithms are related to the "higher regulators" from K-theory to Deligne cohomology. These regulators are supposed to help explain the values of up vote 1 L-functions of motives at integers. Apparently the usual logarithm occurs in the first chern class of a variety (and the regulators are thought of as generalizations of chern class, or down vote something). Good luck. add comment There is a little book by Bloch, called Higher regulators, algebraic K-theory, and zeta functions of elliptic curves. It was published quite recently but is based on a famous lecture up vote 1 series from the late 70s or so. He treats the dilog specifically, rather than the more general polylog framework referred to by Rob. See in particular chapter 6. down vote add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry cohomology nt.number-theory hyperbolic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/2402/abels-equation-for-the-dilog?sort=oldest","timestamp":"2014-04-21T10:30:22Z","content_type":null,"content_length":"65306","record_id":"<urn:uuid:595a9ee3-c82d-49bb-b75c-9591323a313b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Encrypt a Winding Way Cipher Step 1 Write down your PLAINTEXT message on a piece of paper without any spaces between the words. Count the number of letters used in the message. Step 2 Create a MATRIX, or grid, that is large enough to contain your message. In our example, the message has 22 letters and uses a 4x6 matrix (4 squares wide by 6 squares high), but we could have used a matrix that was 5x5. Step 3 Writing from left to right, fill in the boxes of the matrix. Use a NULL to fill in any remaining boxes in the matrix. Step 4 Determine the shape of the pattern, or path, you will use to encrypt the plaintext. Patterns can start in any corner of the matrix and then move up or down, left or right, diagonally, zig-zag, or spiral throughout the grid. Step 5 Draw the pattern lightly over the matrix and then copy the letters as they appear on your chosen path. CONGRATULATIONS! You have created a Winding Way Cipher! How to Decrypt a Winding Way Cipher Step 1 Using the KEY (the number of squares and the type of pattern used to make the grid) that your friend gave you, draw a matrix and fill in the boxes with the CIPHERTEXT. If there are any empty boxes remaining, use NULLS to fill them in. Step 2 Reading from left to right, you can now read the plaintext message. CONGRATULATIONS! You have deciphered a Winding Way Cipher!
{"url":"http://www.nsa.gov/kids/ciphers/ciphe00013.shtml","timestamp":"2014-04-18T08:05:34Z","content_type":null,"content_length":"8252","record_id":"<urn:uuid:22ae098f-e9bc-453c-9c8d-885362c31633>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
circle graph July 11th 2006, 12:14 AM #1 Junior Member Jun 2006 circle graph What part of the total quantity is represented bya 24-degree sector of a circle graph? (I don't even understand what they are asking for) What part of the total quantity is represented bya 24-degree sector of a circle graph? (I don't even understand what they are asking for) This is a circle graph (aka pie chart). The question asks what percent is 24 degrees out of the total 360 degrees. What part of the total quantity is represented bya 24-degree sector of a circle graph? (I don't even understand what they are asking for) Just a bit more detail then what Jake said.... A pie chart shows percentages, looking at Jake's pie chart you see that Individual income taxes are the governments biggest source of revenue and therefore have the biggest sector, which means that it has the most degrees, compared to every other sector, at its vertex (the center of the circle) Now let's answer your question... What part of the total quantity is represented by a 24-degree sector of a circle graph? They're asking what percentage of a circle is 24-degrees A circle has 360 degrees, so to find the percentage we divide 24 by 360 so... $\frac{24}{360}=\boxed{\frac{1}{15}}$ and that is the "part of the total quantity represented by 24-degrees" July 11th 2006, 12:40 AM #2 July 11th 2006, 04:56 AM #3
{"url":"http://mathhelpforum.com/math-topics/4098-circle-graph.html","timestamp":"2014-04-16T20:45:49Z","content_type":null,"content_length":"35007","record_id":"<urn:uuid:036bfb39-a3ed-49d9-861a-756594c909e4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Great Neck Estates, NY Algebra 1 Tutor Find a Great Neck Estates, NY Algebra 1 Tutor ...I had the student create noises; the younger students I had them make faces, and for the older students have show them the position where the tongue and shape of the lips. I am currently a Spanish major with a concentration in linguistics. I have tutors students in ESL but based it more with sounds and practicing their speaking. 13 Subjects: including algebra 1, Spanish, English, algebra 2 ...I am 24 years old and in my final year of law school at Columbia Law School. I transferred to Columbia from Fordham University School of Law. At Fordham, I was awarded the Wilkinson Scholarship for finishing with one of the top GPAs after my first year. 16 Subjects: including algebra 1, reading, writing, GED ...I recently sat for and passed the New York and New Jersey Bar Exams on the first attempt. I developed my own study plan which I plan to utilize with students. I am a law school graduate and currently work as a lawyer in the field of criminal law. 34 Subjects: including algebra 1, English, reading, writing ...I mastered HOW to learn, and then I excelled in my academic pursuits. As a tutor, I help each student find his or her own unique learning style. I want to empower you to stride forward with confidence in all of your studies. 9 Subjects: including algebra 1, reading, English, grammar ...For the last year, I have tutored college students in Calculus I and Calculus 2. I feel very confident tutoring this subject. I have been tutoring students grades K-5 for the last 5 years, in addition to middle and high school students. 19 Subjects: including algebra 1, calculus, geometry, biology Related Great Neck Estates, NY Tutors Great Neck Estates, NY Accounting Tutors Great Neck Estates, NY ACT Tutors Great Neck Estates, NY Algebra Tutors Great Neck Estates, NY Algebra 2 Tutors Great Neck Estates, NY Calculus Tutors Great Neck Estates, NY Geometry Tutors Great Neck Estates, NY Math Tutors Great Neck Estates, NY Prealgebra Tutors Great Neck Estates, NY Precalculus Tutors Great Neck Estates, NY SAT Tutors Great Neck Estates, NY SAT Math Tutors Great Neck Estates, NY Science Tutors Great Neck Estates, NY Statistics Tutors Great Neck Estates, NY Trigonometry Tutors Nearby Cities With algebra 1 Tutor East Atlantic Beach, NY algebra 1 Tutors Great Nck Plz, NY algebra 1 Tutors Great Neck algebra 1 Tutors Great Neck Plaza, NY algebra 1 Tutors Harbor Hills, NY algebra 1 Tutors Kensington, NY algebra 1 Tutors Kings Point, NY algebra 1 Tutors Lake Gardens, NY algebra 1 Tutors Little Neck algebra 1 Tutors Manhasset algebra 1 Tutors Plandome, NY algebra 1 Tutors Russell Gardens, NY algebra 1 Tutors Saddle Rock Estates, NY algebra 1 Tutors Saddle Rock, NY algebra 1 Tutors Thomaston, NY algebra 1 Tutors
{"url":"http://www.purplemath.com/Great_Neck_Estates_NY_algebra_1_tutors.php","timestamp":"2014-04-20T04:25:31Z","content_type":null,"content_length":"24548","record_id":"<urn:uuid:036ada8b-d400-4079-9d37-ab0dcdb21a71>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylor series, please check July 29th 2006, 02:08 PM #1 Junior Member May 2006 Taylor series, please check f(x) = 36/(6+x)^2 and g(x) = 36/(1+x)^2 By writing f(x) = 1/(1+1/6x)^2 I found the Taylor series about 0 for f up to term in x^3, = 1 - 1/3x + 1/12x^2-1/54X^3 .........(a) Valid for -6<x<6 Then g(x) can be written as 1/(1+(x-5)/6)^2 Replace x in solution 1 by x-5 ??????? 1 - 1/3(x-5) + 1/12(x-5)^2-1/54(X-5)^3 ......(b)...????? Valid for -1<x<11 Not sure how to check the first four terms in the Taylor seris found in part b by finding the first, second and third derivatives of g and using these to find the cubic Taylor polynomial about 5 for g. Thanx a lot. f(x) = 36/(6+x)^2 and g(x) = 36/(1+x)^2 By writing f(x) = 1/(1+1/6x)^2 I found the Taylor series about 0 for f up to term in x^3, = 1 - 1/3x + 1/12x^2-1/54X^3 .........(a) Valid for -6<x<6 Then g(x) can be written as 1/(1+(x-5)/6)^2 Replace x in solution 1 by x-5 ??????? 1 - 1/3(x-5) + 1/12(x-5)^2-1/54(X-5)^3 ......(b)...????? Valid for -1<x<11 Not sure how to check the first four terms in the Taylor seris found in part b by finding the first, second and third derivatives of g and using these to find the cubic Taylor polynomial about 5 for g. Thanx a lot. It looks like a right substitution to me. Though the second series is a Taylor's series for 'g' centered at x=5. Do you want to "uncenter" it? Is the range of validity in the first post correct for the second series. Should it not be -11>x>1? Sorry if thats what you are hinting at in the previous post. Just keen to learn! July 29th 2006, 05:24 PM #2 Global Moderator Nov 2005 New York City August 4th 2006, 05:15 AM #3 Jul 2006
{"url":"http://mathhelpforum.com/calculus/4559-taylor-series-please-check.html","timestamp":"2014-04-19T11:19:27Z","content_type":null,"content_length":"36253","record_id":"<urn:uuid:abb61d0b-9d4d-462b-8a17-53611962299b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: 1/19 Analysis of Algorithms Lecture 6 Max Alekseyev University of South Carolina September 8, 2010 Fast Integer Multiplication Fast Matrix Multiplication Fast Integer Multiplication Let b, c 0 be integers, represented in binary, with n bits each. Here, n is assumed to be large, so we cannot assume as we usually do that b and c can be added, subtracted, or multiplied in constant time. We imagine that the b and c are both represented as arrays of n bits: b = bn-1 · · · b0 and c = cn-1 · · · c0, where the bi and ci are individual bits (leading 0's are allowed). Thus,
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/488/2203793.html","timestamp":"2014-04-18T03:52:05Z","content_type":null,"content_length":"7806","record_id":"<urn:uuid:f64624b7-b6e3-4990-9914-cc0477c9b788>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Directional derivative February 4th 2011, 04:07 PM #1 Junior Member Feb 2011 Directional derivative Find $a,b,c$ constants so that the directional derivative of $f(x,y,z)=axy^2+byz+cz^2x^3$ on $(1,2,-1)$ has a maximum value of $64$ on a parallel direction to the $z$ axis. I think we can calculate the directional derivative by using $\langleabla f(x_0),x_0\rangle$ where $x_0=(1,2,-1),$ but a maximum value is asked which I don't get, and I don't either get when it says "on a parallel direction to the $z.$" The directional derivative to the direction to the $<br /> abla f \cdot (0,0,1)=(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z})\cdot (0,0,1)=\frac{\partial f}{\partial z} \; | \; _{at \; point \; (1,2,-1)}=64.<br />$ Last edited by zzzoak; February 5th 2011 at 01:14 PM. The deriviative in the direction of the z axis is simply the partial derivative with respect to z, $by+ 2czx^4$, evaluated at (1, 2, -1). That will give you as single equation for b and c. a can be anything. February 4th 2011, 09:57 PM #2 February 5th 2011, 10:33 AM #3 Junior Member Feb 2011 February 5th 2011, 12:13 PM #4 Senior Member Mar 2010 February 6th 2011, 04:02 AM #5 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/calculus/170223-directional-derivative.html","timestamp":"2014-04-18T21:34:24Z","content_type":null,"content_length":"44791","record_id":"<urn:uuid:0898d4d0-3a5f-4b70-8a74-d9c68ac7b211>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding maximum value of degree-3 homogeneous polynomials when variables sum to 1 up vote 1 down vote favorite I would like to be able to find maximum values of degree-3 homogeneous polynomials, when the variables are non-negative real numbers that sum to 1. For example, For example, the maximum value of $xy^2$ subject to $x+y=1$, $x\ge0$, $y\ge0$, occurs when $x=1/3$ and $y=2/3$. And the maximum value of $xyz + xyw + xzw$ subject to $x+y+z+w=1$, $x,y,z,w\ge0$, occurs when $x=1/3$ and $y=z=w=2/9$. I have found that I can do many cases by hand (using Lagrange multipliers), but I would like to be able to do this computationally. The motivation is I would like to be able to compute 3-graph Lagrangians (see e.g. this paper) of arbitrary 3-graphs. (A 3-graph is a 3-uniform hypergraph.) I would appreciate any pointers in the right direction... Edit: I am only interested in obtaining exact answers. I know how to solve these problems numerically. co.combinatorics hypergraph graph-theory computer-algebra Is there any reason to believe that there are exact (by which I assume you mean "rational") answers? – Igor Rivin May 14 '12 at 15:37 1 I asked a similar question at mathoverflow.net/questions/1493/… The short answer is that the problem is computationally hard. Small instances can be solved by general decision algorithms for the theory of real closed fields, but anything realistic is currently impractical. – Boris Bukh May 14 '12 at 15:39 @Igor: no, the answer may not be rational (see page 10 of the linked paper). @Boris: thanks! – Emil May 14 '12 at 16:06 add comment 3 Answers active oldest votes There can be no efficient way to compute such optimal values. Already for degree two this is NP-hard. Let $A$ be the adjacency matrix of a graph. The optimization problem $\min_{\Delta} x up vote 4 ^T(I+A)x$ over the standard simplex $\sum x_i=1$ is equal to $1/\alpha$, where $\alpha$ is the independence number of the graph. Computing $\alpha$ is known to be NP-hard. down vote Nice! But maybe there is an inefficient programmbale way... – Felix Goldberg May 14 '12 at 16:01 Good point. However, I'm not really interested in the complexity of this problem. – Emil May 14 '12 at 16:07 add comment I do not know how far these methods can go, but in both your examples the maximum can be computed easily and without calculus using some algebraic inequality techniques, the kind of "standard tricks" that is taught to contestants in high-school math Olympiads. For the first inequality, AM-GM suffices: $$ xy^2 = 4 (x \cdot y/2 \cdot y/2) \leq 4 \left(\frac{x+y/2+y/2}{3}\right)^3 = \frac{4}{27} (x+y)^3 $$ (look also for weighted AM-GM). up vote 2 down For the second, use one of Maclaurin's inequalities in three variables to get $$ (yz+yw+zw) \leq \frac{1}{3}\left(y+z+w\right)^2, $$ and then the inequality can be reduced to the vote previous one by setting $Y=y+z+w$. This kind of tricks can work in simple cases, or where there is much symmetry in the variables (check Muirhead's inequality for instance for another highly-symmetric case); if this is not your case you may be out of luck though. add comment Googling a bit, I found this paper: Penalized maximum-likelihood estimation, the Baum–Welch algorithm, diagonal balancing of symmetric matrices and applications to training acoustic data up vote 0 down vote It seems to tangentially discuss your problem on p.3 (1.3-1.4). The thrust of this paper is to compare the numerical Baum–Welch algorithm to something called the "degree raising algorithm" which might be the kind of thing you're looking for. Thanks for the link! It looks like they are interested in numerical solutions though. – Emil May 14 '12 at 16:09 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics hypergraph graph-theory computer-algebra or ask your own question.
{"url":"http://mathoverflow.net/questions/96909/finding-maximum-value-of-degree-3-homogeneous-polynomials-when-variables-sum-to?sort=votes","timestamp":"2014-04-17T22:02:21Z","content_type":null,"content_length":"65863","record_id":"<urn:uuid:a4078f72-b9bf-41d7-b386-07a6ff1e0a9b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
random number algorithm im having trouble finding any concrete random number algorithms on the web. does anyone have any links or even books to refer me? ive tried the library as well but ... Enjoy an ad free experience by logging in. Not a member yet? Just Joined! Join Date Jun 2005 random number algorithm im having trouble finding any concrete random number algorithms on the web. does anyone have any links or even books to refer me? ive tried the library as well but i cant seem to find anything... Linux Newbie Join Date Aug 2004 are you trying to find one, or make one? Just Joined! Join Date Feb 2005 Delft, Holland There is no such bloody thing as a random number, but how about the Mersenne Twister? Linux Newbie Join Date Aug 2004 There is no such bloody thing as a random number, but how about the Mersenne Twister? what about a coin, die, spinner? Just Joined! Join Date Jun 2005 i am using CAD for VLSI and I need a random number generator in my design. The tools and the board that I am using do not include any random number generators and so i need to make one myself. I was looking on the web for some algorithms but have had trouble finding sites that properly explain themselves. and morgoth, there may not be random numbers but as long as the numbers are equally distributed after simulation then I will be happy. Once I find an algorithm that I can understand then I will attempt to implement it on the board and simulate. Just Joined! Join Date Feb 2005 Delft, Holland @judhkqkhsd, have you looked at: Mersenne Twister what about a coin, die, spinner? Smile Since the coin follows the laws of nature, the outcome can be predicted with 100% accuracy when all the variables are known, examples of these variables are air density, the distance to the ground, the weight and size of the coin, the used force, the spin, etc. These variables can never be measured with 100% accuracy though ("the act of observing disturbs the observed", as that very cool Dr Seuss-like rhyme on quantum mechanics went) so we can never predict the exact outcome, and therefor it is only "random" from our flawed point of view. With computers however, everything is measurable, so everything can be determined with 100% accuracy, and therefor computers can never act randomly (until Bill Gates came along, after that they crashed randomly Just Joined! Join Date Mar 2005 South Africa The maths involved in random number gerneration is rather involved. Would in not be possible for you to use another program (c++, matlab/octave ...) to generate the random numbers and then read them from a text file in your application? Just Joined! Join Date Jun 2005 i had thought of that but the number generator needs to be apart of the design (using the same data from the text file is not sufficient). I guess i'll just keep looking. Linux Newbie Join Date Aug 2004 just to be sure, you are trying to write a piece of code for a persudo-random number generator? or are you trying to make a piece of hardware that does this?
{"url":"http://www.linuxforums.org/forum/miscellaneous/35844-random-number-algorithm.html","timestamp":"2014-04-24T16:57:03Z","content_type":null,"content_length":"62393","record_id":"<urn:uuid:b5c0e7c8-80f6-4cad-ab1b-184af3922ae2>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
That doesn’t look like a Jericho missile. (Source: kayytx) Erik Johansson (Sweden/Germany) Erik Johansson is a full time photographer and retoucher from Sweden based in Berlin, Germany. He works on both personal and commissioned projects and creates sometimes street illusions. Erik creates realistic photos of impossible scenes - capturing ideas, not moments: “To me photography is just a way to collect material to realize the ideas in my mind. I get inspired by things around me in my daily life and all kinds of things I see. Although one photo can consist hundreds of layers I always want it to look like it could have been captured. Every new project is a new challenge and my goal is to realize it as realistic as possible.” Erik has been invited to speak at the TED conference in London on how something can look real but at the same time be impossible. © All images courtesy the artist [more Erik Johansson] Watermelon snow, also called snow algae, red snow, or blood snow, is Chlamydomonas nivalis, a species of green algae containing a secondary red carotenoid pigment in addition to chlorophyll. This phenomenon is especially common during the summer months in the Sierra Nevada of California where snow has lingered from winter storms, mainly at altitudes of 10,000 to 12,000 feet. Compressing the snow with your boot leaves a distinct footprint the color of watermelon pulp. The snow even has a fresh watermelon scent. Photo credit: © Michal Renee (Source: malformalady) Japanese KitKat (by Silivren) HOLY SHIT (Source: sizvideos) … Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square donuts in than circle donuts if the circumference of the circle touched the each of the corners of the square donut. So you might end up with more donuts. But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole? A round donut with radius R[1] occupies the same space as a square donut with side 2R[1]. If the center circle of a round donut has a radius R[2] and the hole of a square donut has a side 2R [2], then the area of a round donut is πR[1]^2 - πr[2]^2. The area of a square donut would be then 4R[1]^2 - 4R[2]^2. This doesn’t say much, but in general and throwing numbers, a full box of square donuts has more donut per donut than a full box of round donuts. The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R[2] = R[1]/4) and replacing in the proper expressions, we have a 27,6% more donut in the square one (Round: 15πR[1]^2/16 ≃ 2,94R[1]^2, square: 15R[1]^2/4 = 3,75R[1]^2). Now, assuming a large center hole (R[2] = 3R[1]/4) we have a 27,7% more donut in the square one (Round: 7πR[1]^2/16 ≃ 1,37R[1]^2, square: 7R[1]^2/4 = 1,75R[1]^2). This tells us that, approximately, we’ll have a 27% bigger donut if it’s square than if it’s round. tl;dr: Square donuts have a 27% more donut per donut in the same space as a round one. Thank you donut side of Tumblr. (Source: nimstrz)
{"url":"http://a-level-30-wizard.tumblr.com/","timestamp":"2014-04-21T02:18:43Z","content_type":null,"content_length":"44491","record_id":"<urn:uuid:26e996c6-b827-4f41-ace6-4057d4224928>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Orthomagic Square of Squares It's not known if there exists a 3x3 magic square of squares, i.e., a 3x3 arrangement of nine distinct integer squares such that the sum of each row, column, and main diagonal is the same. A recent note discussed one approach to this problem, namely, to determine the form of all 3x3 arrangements of squares that satisfy the four sums involving the central number, and then see if any of those arrangements can be made to also satisfy the four outer sums. In this way it was shown that no solution is possible if the central square is expressible as a sum of two squares in only four ways (which is the simplest non-trivial case). It may be possible to extend that method to the general case, but I wonder if another approach might be more effective. Instead of looking at the 3x3 arrangements that satisfy the four sums involving the central number, suppose we consider the arrangements that satisfy the six orthogonal sums, i.e.,the sums of the rows and columns. If these "orthomagic squares" of squares could be completely characterized, it might be possible to show that they can never satisfy the sums on the two main diagonals, thereby proving the impossibility of a 3x3 magic square of squares. (Of course, if this can't be shown, this approach may help to construct an example.) Remarkably, it turns out that most orthomagic squares of squares also possess another property: the common sum of the rows and columns is a square! For example, the smallest orthomagic arrangement of distinct squares is 4^2 23^2 52^2 32^2 44^2 17^2 47^2 28^2 16^2 and each rows and column of this arrangement sums to 3249 = 57^2. The same is true for the next several OMSOS's. In any case, this is nice because we know the common sum of a completely magic arrangement of squares must be of the form 3E^2 where E^2 is the central square. Therefore, since a square can't be 3 times a square, we can immediately rule out all orthomagic arrangements whose common sum is a square. Of the twelve smallest OMSOS's, nine of them have a square common sum, so this just leaves three possibilities, and those can also be ruled out individually. Interestingly, the smallest OMSOS that does NOT have a square common sum happens to be unique in another sense, namely, all the entries are squared primes: 11^2 23^2 71^2 61^2 41^2 17^2 43^2 59^2 19^2 The common sum of the rows and columns is 5691 = 3*7*271. Obviously we can permute the rows and columns of an OMSOS without affecting the sums, but since 3*7*271 is not 3 times a square, we know this can't be permuted into a fully magic square. Still, this is an interesting square in its own right. The next two "all-prime" OMSOS's (after the one noted above) are based on the matricies It's also interesting that the next two "exceptional" OMSOS's (meaning those whose common sum is NOT a square) also have common sums of the form 3*7*p where p is a prime congruent to 1 (mod 6). Even though the OMSOS's with square common sums are immediately excluded from being completely magic, they are interesting in their own right, and it's worthwhile to consider why the condition of equal sums for the row and columns predisposes the common sum to be a square (when the elements themselves are squares). First, notice that they seem to occur in infinite families, and it's not too hard to figure out parametric representations for some of them. For example, there's an infinite family containing "(1)^2": (1)^2 (8+10k)^2 (4+16k+10k^2)^2 (4+8k+6k^2)^2 (4+14k+8k^2)^2 (7+8k)^2 (8+14k+8k^2)^2 (1+8k+6k^2)^2 (4+6k)^2 with the common sum (9+16k+10k^2)^2. (Of course there are a few values of k for which the elements of this array are not distinct, so I exclude those from the set of orthomagic squares. Note also that k can be positive or negative, because all the results are squared anyway.) Similarly an infinite family containing (2)^2 is given by (2)^2 (14+10k)^2 (5+14k+5k^2)^2 (11+10k+3k^2)^2 (2+10k+4k^2)^2 (10+8k)^2 (10+10k+4k^2)^2 (5+10k+3k^2)^2 (10+6k)^2 with the common sum (15+14k+5k^2)^2. We can give a similar infinite 1-parameter family containing (n)^2 for any given n, so there ought to be a 2-parameter representation covering all of these. Ideally we'd like to find a complete characterization of all OMSOS's, or at least the possible common sums, to see if complete magicality can be ruled out. The class of orthomagic squares whose "common sum" is a square is closely related to quaternions, spatial rotation matricies, and representations of numbers as sums of FOUR squares. This is an observation that essentially goes back to Euler (see Dickson's History). Specifically, for any numbers a,b,c,d we can construct a 3x3 matrix a^2 + b^2 - c^2 - d^2 2(bc - ad) 2(ac + bd) 2(ad + bc) a^2 - b^2 + c^2 - d^2 2(cd - ab) 2(bd - ac) 2(ab + cd) a^2 - b^2 - c^2 + d^2 Each row and column, regarded as a 3D vector, has the magnitude L = a^2 + b^2 + c^2 + d^2 so obviously if we construct a 3x3 square whose components are the squares of the components of the above matrix, it will be an "orthomagic square of squares" with the common sum L^2. This accounts for the frequent occurrance of OMSOS's with a square common sum. For example, with a=1, b=2, c=4, d=6 we have the basic matrix -47 4 32 28 -23 44 and if we square each number this is the smallest orthomagic square of squares, with the common sum 3249 = 57^2. The Determinant is 57^3. Note that the three row vectors constitute an orthogonal triad, as do the three column vectors, and if we normalize each term by dividing it by the magnitude L=57 the above matrix is the rotation operator representing the space rotation relating the row triad to the column Obviously the product of two such base squares gives another base square. For example, we have the product _ _ _ _ _ _ | -47 4 32 | | -51 18 46 | | 3397 1190 -1850 | | 28 -23 44 | | 42 -19 54 | = | -1250 3845 178 | |_ 16 52 17 _| |_ 26 66 3 _| |_ 1810 422 3595 _| The second factor on the left side is given by setting a=1, b=3, c=5, and d=6, it's determinant is 71^3, and the common sum of squares of its rows and columns is 71^2. The matrix product is also a base square, i.e., the squares of its elements form an orthogonal orthomagic square of squares, with the common sum (57*71)^2 and the base square has determinant (57*71)^3. It is produced by setting a=-61, b=-1, c=15, d=10, which can be inferred from the four-square multiplication formula (A^2 + B^2 + C^2 + D^2)(a^2 + b^2 + c^2 + d^2) = w^2 + x^2 + y^2 + z^2 w = Aa + Bb + Cc + Dd x = Ab - Ba + Cd - Dc y = Ac - Bd - Ca + Db z = Ad + Bc - Cb - Da The above is an interesting example of how, when trying to work in three parameters (or dimensions), it often seems that we're led to a much more natural formulation by going to four. I had been looking at sums of THREE squares, noting that the the most general solution (according to Dickson) of the equation X^2 + Y^2 + Z^2 = N^2 is of the form X = 2( a^2 + b^2 - c^2 ) Y = 2( a^2 - b^2 + c^2 ) + 2a( b-3c) Z = (-a^2 + b^2 + c^2 ) + 2b(2a-3c) N = 3( a^2 + b^2 + c^2 ) - 2c(2a+ b) and in retrospect it's clear that there's a 4-parameter family hidden behind this, trying to get out. Indeed, it was derived from Euler's four-square product formula, supressing one of the parameters. Incidentally, the quaternionic formulas noted above are very similar to the generalized Heron's formula, relating the volume of a "perfect tetrahedron" in terms of the areas of its faces, as discussed in Heron's Formula For Tetrahedrons. Anyway, the OMSOS's with square common sum seem to be well covered by the above parameterization, although I'm not quite sure it necessarily includes ALL OMSOS's with square sum. Another interesting question is whether the OMSOS's that *don't* have a square sum are also given by the 4-parameter matrix above, with non-integer values of a,b,c,d. In other words, are there any sets of numbers a,b,c,d such that the nine elements of the basic matrix are integers but the magnitude of the row and column vectors is not an integer? There are 91 primitive orthomagic squares of squares with common sums less than 30000. Of those, 56 have a square common sum, whereas the remaining 35 do not. Of the 35 non-square cases, none of them is of the form 3k^2, so they clearly can't give a complete magic square of squares. For more on the search for a magic square of squares, see the note Automedian Triangles and Magic Squares Return to MathPages Main Menu
{"url":"http://www.mathpages.com/home/kmath427.htm","timestamp":"2014-04-21T09:37:39Z","content_type":null,"content_length":"10136","record_id":"<urn:uuid:ac3c448f-e91d-4778-9cc0-b7ff0ea29808>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Anjela Govan, an expert on the Markov method of rating and ranking, weighs in on the “science” behind getting a jump on the competition. The topic of ranking (or the question “Who’s #1?”) is usually accompanied by the question “Who will win this game?” Granted, ranking does not apply to sports only, but sports is viewed as less academic than most applications. The question of “Who will win?” essentially asks us to somehow identify which team is stronger, better, of higher quality, or is higher ranked. We need to recall that even though we often interchange words “ranking” and “rating,” they do have different meaning. Rating may somehow summarize the quality of a team based on some associated criteria. Ranking is simply an indication of the place in the list that reflects relative importance of teams to each other. The difficulty of this topic is to determine what constitutes quality and importance as far as this particular set of teams is concerned. Ideally, to determine rating we would need to know the characteristics of the perfect team and measure all the teams against it, thus arriving at an absolute measure. Given that we know how far any given team is from the ideal, we can compute the absolute ratings. However, very few (if any) real world applications allow this ideal method. What we are able to do is measure relative difference in quality between teams, thus arriving at a rating based on these relative measurements. Now we are waxing philosophical! Back to game predictions: This question has two aspects to it, first being “which team in a given competition will win?” and the second being “by how much?” The first is easier to answer. Suppose we pick a method that produces rating scores, a favorite one from the great collection introduced by Dr. Langville and Dr. Meyer. For a game between teams A and B to determine a winner we simply compare each team’s rating scores, rA and rB , the better rated team wins! Now for the by how much, referred to in chapter 9 of Who’s # 1 as point spread: there are many ways to estimate point spread. One of the simplest approaches is to think of the point spread being proportional to the difference between the ratings of the teams. That is, point spread for a game between team A and team B = α|rA − rB |, where α is some constant. In this simplistic point spread approach, the work is concentrated in estimating an appropriate constant α. This constant could be the same for all the games, and could be determined using the previous season, that is the point spreads from the previous season are known and least squares could be a way to approximate α. Another way is to customize α to the pairs of teams. Maybe there is a trend between teams that could be observed across a number of seasons. The described method is simplistic, perhaps it is evident why. For a more in depth discussion do consider the well laid out chapter on point spreads in Who’s #1?
{"url":"http://blog.press.princeton.edu/category/math-science/mathematics/march-mathness/algorithms-march-mathness/page/2/","timestamp":"2014-04-18T18:11:10Z","content_type":null,"content_length":"121691","record_id":"<urn:uuid:1407d5da-fd92-4a0a-859b-b2f7c0decdda>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: State the steps used to construct a segment of length d such that a/b =c/d given three other segments of length a, b and c O.o • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50a7f569e4b082f0b853098c","timestamp":"2014-04-21T08:06:44Z","content_type":null,"content_length":"63967","record_id":"<urn:uuid:6c535dac-dd86-41e2-9f41-f2828b7aea36>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph Theory: problem with an n-partite graph up vote 2 down vote favorite I came across a problem in complexity theory which I believe reduces to the following graph theory problem. I am not familiar with discrete math and so I do not know how to approach this problem. Does anyone have a solution, or can anyone recommend a source that might help me? The problem: Let n and r be parameters. We are interested in r being constant, or at least very small compared to n (say r=loglog(n)). We have a directed n-partite graph G, with vertex sets V_1,...,V_n, where each V_i={1,...,r}. Denote the j-th vertex of V_i by (i,j). We want to choose vertices (1,j_1),...,(n,j_n) such that the total number of paths in the induced graph is polynomial in n. Given the structure of G, a weaker though nontrivial result would be to show that no path in the induced graph has length greater than Clog(n) for every constant C. We have the following structure on the original graph: 1) If i >= i' then (i,j) is not connected to (i',j') for all j,j'. 2) If (i,j) is connected to (i',j') and (i,k) connected to (i',k') then j=k. 3) If (i,j) is connected to (i',j') then (i,j) is connected to (i',k) for all k=1,...,j'. 4) If (i,j) is connected to (i',j') and (i',j') is connected to (i'',j'') then (i,j) is connected to (i'',j''). Thanks. Go M-O!! add comment 2 Answers active oldest votes The result appears to be false for $r=1$. It's not immediately obvious whether the counterexample extends even to $r=2$. First, a (hopefully accurate) restatement of the conditions on $G$. The directed graph $G$ has vertex set $[n]\times[r]$, which we think of as embedded in the plane in the usual way for the purposes of restating conditions (1)–(4): 1. All edges in $G$ are directed right to left. 2. For each pair of columns, at most one vertex of the first column sends edges to the second. up vote 2 3. The neighbourhood of each vertex is a down-set (consisting of initial segments, possibly empty, of each column). down vote 4. $G$ is transitive. For $r=1$, the complete graph with all edges oriented right to left satisfies these properties, and the only possible induced graph is $G$ itself, which has almost $2^n$ paths. For higher $r$, one approach to searching for counterexamples is to try and weave together complete transitive graphs, but conditions 2 and 4 mean you need to keep them well separated, and condition 3 tends to force them to collide if you try to avoid leaving large gaps. add comment Yes, I forgot to mention that the statement is known to be false for r=1, and the result in complexity theory I'm going for is known to be true when r=log(n). Though it is proved with different methods entirely, so even a positive answer when r=log(n) would be interesting. Personally, I believe it is true for r=2, but have resigned myself to attempting to prove it up vote 0 assuming that $r\in\omega(1)$ for the time being. down vote add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/112547/graph-theory-problem-with-an-n-partite-graph?sort=newest","timestamp":"2014-04-16T13:33:15Z","content_type":null,"content_length":"53076","record_id":"<urn:uuid:a4982573-a76d-40e4-8154-b48b59ddfc8f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Several maths questions June 18th 2009, 01:44 PM #1 Oct 2008 Several maths questions Alright, at uni they make us sit a skills test of stuff we did back at school. Problem is i can't remember half of it! Struggling big time, goign to see a tutor on Monday and got till august to sit but i want to learn as much as i can now. I can pratice on the web this test as many times as i want but the test is supervised the questions that follow are from the practice so am not asking for you guys to do my homework just need help. The test is always the same qurestions just diffrent numbers so i need to find out how to do things. So I have 10 of the 20 questions am not sure of can you please show the steps how to do it or for the 3 graph ones can you tell me how to go around doing them. Since it's alot of questions might be best if you pick one or two to show me and then someone else can pick another few to do or what not, if you want. Al label the ones i can do. A. Explained! B. Explained! C. Explained! D. Explained! E. Explained! I know it's alot so any help will be much appricated. Any questions on the above ask and i will explain what the questions asking. The really frustrating thing is i have already passed the degree exam and i still need to sit this Thanks guys Last edited by Johnt447; June 19th 2009 at 07:53 AM. A & E) (they are the same problem) Find the common denominator. $\frac{x}{x + 1} + \frac{2}{x - 1}$ \begin{aligned}<br /> &= \frac{x(x - 1)}{(x + 1)(x - 1)} + \frac{2(x + 1)}{(x + 1)(x - 1)} \\<br /> &= \frac{x^2 - x}{(x + 1)(x - 1)} + \frac{2x + 2}{(x + 1)(x - 1)} \\<br /> &= \frac{x^2 - x + 2x + 2}{(x + 1)(x - 1)} \\<br /> &= \frac{x^2 + x + 2}{(x + 1)(x - 1)}<br /> \end{aligned} \begin{aligned}<br /> 2x - z &= -x(yz + 1) \\<br /> 2x + x(yz + 1) &= z \\<br /> x(2 + yz + 1) &= z \\<br /> x(yz + 3) &= z \\<br /> x &= \frac{z}{yz + 3}<br /> \end{aligned} $\frac{x^3 - 3x^2 + 3x - 1}{x^2 - 2x + 1}$ \begin{aligned}<br /> &= \frac{(x - 1)^3}{(x - 1)^2} \\<br /> &= x - 1<br /> \end{aligned} Problem C I suppose I could help out with problem C, since that one hasn't been done yet. Start by factoring $(2x+1)^2$ and $(6x-1)^3$ from the numerator, since they are common to both terms in the Now there are two things you can do. You can combine like terms inside the brackets, and you can remove the common factor of $(2x+1)^2$ from the numerator and denominator. It doesn't matter which one you do first. When finished you should come up with $\frac{(6x-1)^3(-4x + 2)}{(2x+1)^4}$ At this pioint you can also factor out a negative two from the numerator to finish the problem off. I come up with Hope this helps. Last edited by mr fantastic; June 18th 2009 at 06:36 PM. Reason: Fixed latex, removed color (it was causing problems so easier to get rid of it) $y = (x - 2)(x^2 - 4x + 3)$ This is the same as $y = (x - 2)(x - 1)(x - 3)$ We have a cubic, with three real roots, x = 1, 2, 3. That means that the graph crosses the x-axis at (1, 0), (2, 0), and (3, 0). If you were to multiply the factors out, you would see that the leading coefficient is positive (+1), and since the degree of the polynomial (3, cubic) is odd, you're graph is such that as x-> -∞, y -> -∞, and as x -> ∞, y -> ∞. x = 1 is the first root, and the curve to the left of this root would be under the x-axis. You can see that this is correct by plugging in x = 0 (part of the interval [0, 4]) -> y = -6. The next place that the graph crosses the x-axis is at x = 2, which means that between x = 1 and x = 2, there is a small "hump" above the axis. And between x = 2 and x = 3 (the final place where the graphs crosses the x-axis), there must be a small "hump" below the axis. Finally, after x = 3, the curve is above the x-axis, and when x = 4, y = 6. Just so you know, it's possible to draw a rough sketch of a polynomial without graphing it on a calculator first, if you know things like degree, end behavior, roots, etc. And again thank you June 18th 2009, 02:17 PM #2 Super Member May 2009 June 18th 2009, 02:22 PM #3 Oct 2008 June 18th 2009, 03:15 PM #4 Jun 2009 June 19th 2009, 06:27 AM #5 Oct 2008 June 19th 2009, 06:57 AM #6 Super Member May 2009 June 19th 2009, 07:52 AM #7 Oct 2008
{"url":"http://mathhelpforum.com/algebra/93205-several-maths-questions.html","timestamp":"2014-04-19T11:03:46Z","content_type":null,"content_length":"50426","record_id":"<urn:uuid:bac5dd89-f9c9-4cb9-9cc2-c0f55004cc4b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Integration problem • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. This looks like one of the trig substitutions. Best Response You've already chosen the best response. or maybe integration by parts would work. Best Response You've already chosen the best response. Best Response You've already chosen the best response. so I will rewrite it as|dw:1349016056313:dw| Best Response You've already chosen the best response. Am I doing it correct so far? Best Response You've already chosen the best response. I think it's best if we leave it inside. Something tells me that will have to stay as u^(3/2) Best Response You've already chosen the best response. then I don't see with trig sub I am suppose to use Best Response You've already chosen the best response. oh nvm Best Response You've already chosen the best response. Nope . Use Inverse sec :) or Sine? I forgot what to use but simple as that :D Best Response You've already chosen the best response. It is trig, u-sub won't work. Yeah keep it out, it's inverse sin of 6x i believe Best Response You've already chosen the best response. Best Response You've already chosen the best response. Its nerp it is sin inverse x/6 +C :) Best Response You've already chosen the best response. |dw:1349016274224:dw| is what my solution is suppose to be. Of course I am more interested in how to get to the solution Best Response You've already chosen the best response. I think we should use sec instead of sin?! Let x = 6sec (theta) Best Response You've already chosen the best response. 36Sec^2(theta) - 36 = 36 tan^2(theta) Best Response You've already chosen the best response. so the bottom is: sqrt (36 tan^2(theta)) = 6tan^2(theta), yea callisto's right Best Response You've already chosen the best response. and top is: dx = 6Sec(theta)Tan(theta)dtheta Best Response You've already chosen the best response. so is my given solution incorrect? Best Response You've already chosen the best response. yea precal Best Response You've already chosen the best response. I have some integrals at the backof my book "Integral involving x^2-a^2; a>0" Best Response You've already chosen the best response. \[\int\limits \frac{ 6secutanudu }{ (36\sec^2u - 36)^{3/2} } = \int\limits \frac{ 6secu tanu du }{ 6^3 \tan^3u }\] \[1/36 \int\limits secdu/\tan^2u \] \[1/36 \int\limits \frac{ cosudu }{ \sin^2u? Best Response You've already chosen the best response. cos u/ sin^2 u = cscu cotu \[\int \csc u \cot udu = -cscu +C\] Time to draw a triangle... Best Response You've already chosen the best response. time to draw a triangle? how is that going to help....I wonder if we are taking the correct approach. I was given problems to study for an upcoming exam. Are we on the correct path? Best Response You've already chosen the best response. I don't see how this is going to get me the solution I was given at all???? Best Response You've already chosen the best response. Take sinu as t. => -dt = cosudu \[1/36 \int\limits \frac{ -dt }{ t^2 } = \frac{ 1 }{ 36t } + C = cosecu/36 + C = \frac{ \sqrt6 }{ 36\sqrt{x^2 - 6} } \] Best Response You've already chosen the best response. Best Response You've already chosen the best response. LOL ITS FUNNY CUZ UR NAME IS PRECAL AND UR DOING CAL LOL Best Response You've already chosen the best response. |dw:1349017310113:dw| since we let x = 6 secu secu = x/6 cosu = 6/x You can find the opposite side, hence sin u, and cscu in terms of x Best Response You've already chosen the best response. gotta study all types of math. I am just more comfortable at the precal and below level Best Response You've already chosen the best response. Yeaah my bad. Thats \[\sqrt x\] NOT \[\sqrt 6\] Best Response You've already chosen the best response. Best Response You've already chosen the best response. Yes. And what is sin u and cscu? Best Response You've already chosen the best response. Best Response You've already chosen the best response. And cscu? Best Response You've already chosen the best response. Best Response You've already chosen the best response. And remember we got \[-\frac{1}{36}cscu+C\]For the integral? Best Response You've already chosen the best response. Best Response You've already chosen the best response. ALTERNATE: \[x^2 - 36 = t^2 =>tdt=xdx\] \[I = \int\limits \frac{ tdt }{ t^3\sqrt{t^2 + 36}} = \int\limits \frac{ dt }{t^3 \sqrt{ 1 + \frac{ 36 }{ t^2 }} }\] Best Response You've already chosen the best response. Thanks Callisto, you were correct (back to the basics) Best Response You've already chosen the best response. You're welcome :) Best Response You've already chosen the best response. that is in reference to drawing the triangle. Yes, I got my solution but now I have to go back and study the process Best Response You've already chosen the best response. Take: \[ 1 + \frac{ 36 }{ t^2 } = u = > du = -36dt/t^3\] Best Response You've already chosen the best response. You would get the same answer again after the sub. :) Best Response You've already chosen the best response. @siddhantsharan I am always interested in the alternate approach, especially if it gets me to the answer quicker Best Response You've already chosen the best response. thanks everyone, gotta take a break and look at this later........ Best Response You've already chosen the best response. Perhaps it's easier for you to read. (PS: I'm not giving direct answer!!) First Method: By Trigo Sub.\[\int \frac{dx}{(x^2 -36)^\frac{3}{2}}\]Let x=6sec u; dx = 6secu tanu du. Then, the integral becomes\[=\int \frac{6secu \ tanu\ du}{((6secu)^2 -36)^\frac{3}{2}}\]\[=\int \frac{6secu \ tanu\ du}{[36(sec^2u -1)]^\frac{3}{2}}\]\[=\int \frac{6secu \ tanu\ du}{[36(tan^2u)]^\frac{3}{2}}\]\[=\ frac{1}{36}\int \frac{secu \ tanu\ du}{tan^3u}\]\[=\frac{1}{36}\int cscutanudu\]\[=-\frac{1}{36}\csc u+C\] Since x=6secu , secu = x/6 , cosu = 6/x By Pyth. Thm., opposite side is \(\sqrt{x^2-6^2} =\sqrt{x^2-36}\) So, cscu = \(\frac{x}{\sqrt{x^2-36}}\). Substituting this into the last step, you can get your answer. Best Response You've already chosen the best response. Second method: by substitution, suggested by @siddhantsharan \[\int \frac{dx}{(x^2 -36)^\frac{3}{2}}\]Let x^2-36 = t^2 ; 2x dx = 2tdt => x dx = t dt => \(dx = \frac{t dt}{ x} = \frac{t dt}{ \sqrt {t^2+36}}\) PS: we already know x \(\ge\) 6, so it should be +ve sqrt The integral becomes \[\int \frac{tdt}{\sqrt{(t^2 +36)}(t^3)}\]\[=\int \frac{\frac{tdt}{t}}{\sqrt{\frac{(t^2 +36)}{t^2}}(t^ 3)}\] \[=\int \frac{dt}{\sqrt{1+\frac{36}{t^2}}(t^3)}\]Let u = 1+ (36/t^3) ; du = \(\frac{-72}{t^3}\)dt The integral becomes \[=-\frac{1}{72}\int \frac{du}{\sqrt{u}}\]\[=-\frac{1}{72}(2\sqrt{u}) +C\]\[=-\frac{1}{36}(\sqrt{u}) +C\] u= 1+ (36/t^2) And t^2 = x^2-36, So,\( u = 1 +\frac{36}{x^2-36} = \frac{x^2}{x^2-36}\) Sub. \( u = \frac{x^2}{x^2-36}\) into the last step, and simplify it, you should be able to get the answer. It's so amazing :D Best Response You've already chosen the best response. @Callisto Clarified succinctly. Really well done :D Best Response You've already chosen the best response. Thanks you are so awesome :) Both of you, wish I could give more than 1 medal Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/506859aae4b0e3061a1d518b","timestamp":"2014-04-19T04:25:34Z","content_type":null,"content_length":"369851","record_id":"<urn:uuid:bf9f4602-35bf-4f1c-b4bc-1f2a9665362a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Universal One-Way Hash Functions via Inaccessible Entropy Iftach Haitner, Thomas Holenstein, Omer Reingold, Salil Vadhan, and Hoeteck Wee This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs, which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of inaccessible entropy (Haitner et al. STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: Consider F(x,i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with a different x' such that F(x,i)=F(x',i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction simply amplifies and exploits this basic property. With this and other recent works, we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy. ● In Advances in Cryptology---EUROCRYPT `10, Lecture Notes on Computer Science, Springer-Verlag, 30 May-3 June 2010. [pdf][Springer page] ● Cryptology ePrint Archive, Report 2010/120, March 2010. [pdf][ePrint page]
{"url":"http://people.seas.harvard.edu/~salil/research/UOWHFs-abs.html","timestamp":"2014-04-17T09:35:15Z","content_type":null,"content_length":"9072","record_id":"<urn:uuid:141b3636-4182-44e9-8658-59171462c6e1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Ramón Bonfil Wildlife Conservation Society 2300 Southern Blvd, Bronx NY, 10460, USA 10.1 INTRODUCTION Perhaps the most influential, but not necessarily the best, works on shark stock assessment were those of Holden in the 1960s and 1970s. Holden (1977) was one of the first scientists to consider the problem of shark fisheries stock assessment from a general point of view. He correctly pointed out that sharks were different from bony fishes in terms of their biology but unfortunately he wrongly concluded that classic fisheries models such as stock production models could not be applied to sharks and rays. Holden dismissed these models and called for new models to be developed. He stated that the assumptions of surplus production models regarding immediate response in the rate of population growth to changes in population abundance and independence of the rate of natural increase from the age composition of the stock do not hold for sharks. These conclusions were based mainly on the time delays caused by the longer reproductive cycles of sharks and their reproductive mode, which in his view would cause a linear and direct stock-recruitment relationship. Because of his influential paper, surplus-production models have been mostly ignored for shark stock assessment and scientists and non-scientists reading Holden's papers have sought new methods and models for dealing with shark fisheries stock assessment. For a while, Holden's thoughts influenced the works of other scientists who opted for the more detailed approach offered by age structured models (e.g. Wood et al., 1979; Walker, 1992). The main problem of surplus production models is not that they are inadequate when applied to sharks but the way in which they were being applied. A paramount obstacle for the use of classic surplus production models in the 1960s and part of the 1970s was the equilibrium constraint (see Section 10.6 on fitting models data). At that time, due to the lack of readily available computers to perform iterative search algorithms, scientists engaged in surplus production model-fitting were forced to assume that populations were in equilibrium at all exploitation levels (i.e. that every catch observed was sustainable) to simplify the process of fitting surplus production models to data. The dangerous consequences of this assumption are well known and explicitly warned against in fishery text books (Pitcher and Hart, 1982; Hilborn and Walters, 1992). However, the personal computer revolution has helped to overcome the equilibrium constraint through the availability of non-linear optimization routines which are accessible to virtually any fishery scientist in the world today. The diversity of approaches this offers for fitting surplus production models has translated into a new era of popularity for the utilization of what are presently known as dynamic surplus production models that have been applied to organisms as slow-growing as whales and sharks (Punt, 1991; Prager et al., 1994; Polachek et al., 1993; Babcock and Pikitch, 2001). Perhaps the most interesting outcome of all this re-appraisal of surplus production models is the view that most of the problems associated with successfully applying them are due to the quality of the fisheries data (Hilborn, 1979; see also Section 10.3), and the finding that simple surplus production fishery models can sometimes perform better than the more elaborate and biologically detailed age-structured approaches (Ludwig and Walters, 1985, 1989; Ludwig et al., 1988; Punt, 1991). One of the reasons for the difficulty in applying these models to sharks is that the data available on shark fisheries and our knowledge about shark biological parameters may not be adequate. This is expressed clearly in the work of Anderson (1990), Anderson and Teshima (1990) and Bonfil (1996). In fisheries science, independent of the species in question, the most common problem is that lack good and sufficient data that lack contrast in the data when it is available. Another problem often overlooked is that the more ‘realistic’ age-structured models also pose problems in their application. Age-structured data are much more difficult and expensive to obtain. Further, the life cycles of most shark species, even in terms of the basic parameters of age, growth and reproduction, have just started to be unveiled during the last 20 years, and this only in the case of a handful of stocks [see Pratt and Casey, (1990) and Cortés, (2000) for reviews]. In addition, there are some relevant areas of elasmobranch population dynamics that are still largely unknown. For example: empirically derived stock-recruitment relationships have never been documented for any elasmobranch, although a strong relationship is suspected due to the reproductive strategies of the group (Holden, 1973; Hoff, 1990); the size, structure and spatial dynamics of most stocks of elasmobranchs are almost totally unknown. Inadequate knowledge of migration routes, stock delimitation and movement rates amongst them, can seriously undermine otherwise “solid”assessments and management regimes. Hoff (1990) favored the use of dynamic surplus-production models for shark stock assessment for a variety of reasons. Punt (1988, 1991) also reported dynamic surplus production models to be the most reliable for management of slow-growing resources with limited reproductive potential such as baleen whales, when tested using a simulated fully age-structured population. Similar positive results were reported with a Schaefer model for a swordfish age-structured simulation model (Prager et al., 1994). The results of Bonfil (1996) suggest that surplus production models are good enough for shark biomass assessment but less so for management parameter estimation. He found that although generally inferior to the Deriso-Schnute model (Section 10.4.1 below), surplus production models are capable both of estimating biomass benchmarks and obtaining good biomass fits for most of the scenarios analysed. The best advice in regard to model choice for elasmobranch stock assessment is found in Section 2.2.3. Surplus production models can, and should, be applied to elasmobranch fisheries as they are one of the easiest to implement, but their results should be taken as a first and preliminary assessment. A complete and reliable assessment should not stop there but attempt to apply delay-difference and fully-age structured models as soon as that is also possible. 10.2 SURPLUS PRODUCTION MODELS 10.2.1 Ease of application These models are among the simplest and most widely used in stock assessment. They are easy to use because they require only two or three types of data. These models are flexible and have different variations; the Schaefer, Fox, and Pella-Tomlinson models are some of the best known. Surplus production models (SPM) are based on the following principles: Next biomass = last biomass + recruitment + body growth -catch -natural mortality If there is no catch Next biomass = last biomass + production -natural mortality where production is the sum of recruitment and body growth, and Surplus production = production -natural mortality New biomass = last biomass + surplus production -catch 10.2.2 Logistic growth and the Schaefer model Population growth has been typified in several ways, but the logistic model of population growth has been found to fit a large number of populations both in nature and in captivity. This model is expressed in the following way (differential equation or continuous model): where B = biomass, K = carrying capacity, and r = intrinsic rate of population increase. The carrying capacity of the system, K (or B[∞]), is the maximum population size that can be achieved. Mortality, age-structure, reproduction, and tissue growth are all captured by a simple parameter called the intrinsic rate of increase, or intrinsic rate of production, r. In theory, the intrinsic rate of increase is fully realized at the lowest population level while the finite rate of population growth is highest at the midpoint of K. Figure 10.1 illustrates some of these concepts and shows the trajectories of population growth for two different values of r. FIGURE 10.1 Examples of population growth according to the logistic model. Two different r values are shown. The Schaefer model is the most commonly used among SPMs (known also as Biomass Dynamic Models). This model is based exactly on the logistic population growth model. The continuous logistic model explained above can also be written in discrete form in the following way (Hilborn and Walters 1992): When catch is included in the above equation we obtain the discrete version of the Schaefer (1954) surplus production model: and C is catch, q is the catchability coefficient and f is effort. In the Schaefer model above, the middle term is known as the surplus production. If the surplus production is greater than catch, population size increases; if catch equals surplus production, catch is sustainable and the population size remains constant (B[t+1]= B[t]); if catch is greater than surplus production, population size declines. The Schaefer model has the following assumptions: • there are no species interactions • r is independent of age composition • no environmental factors affect the population • r responds instantaneously to changes in B (no time delays) • q is constant • there is a single stock unit • fishing and natural mortality take place simultaneously • no changes in gear or vessel efficiency have taken place • catch and effort statistics are accurate In practice, many of the above assumptions are not met but this does not mean that the method cannot be used. As long as it is used critically, the Schaefer model is a powerful tool for an initial assessment of a stock. The management parameters of importance from the Schaefer model are given by: MSY = r K/4 B[MSY]= K/2 Optimum effort (f[MSY]) = r/2q 10.2.3 Fox and Pella-Tomlinson models There are other SPMs that have been proposed to more ‘realistically’ describe fisheries. Fox (1970) describes a model that is based not on the logistic population growth model but on the Gompertz growth model. The Fox model equation is: The model is supposed to be more realistic because it assumes that the population can never be totally driven to extinction, something that sounds intuitive but may be wrong in the light of the severe depletion of fishery resources in recent years and well-documented human-caused terrestrial species extinctions. The management parameters of the Fox model are given by: MSY = rKe^-1/InK B[MSY]= Ke^-1 f^MSY= r/q InK Pella and Tomlinson (1969) proposed a generalized model that can take any shape, including that of the Schaefer (m =2) and Fox (m = 1) models. However, there is a price to be paid for this ‘improvement’, one must estimate an additional parameter (m) to fit the model to the data. This model is not much more useful because despite its ‘flexibility’ the fit will probably be worse than with the Schaefer or Fox models as there is often an inverse relationship between the number of parameters to be estimated and the performance of the models (Hilborn and Walters 1992). 10.2.4 Data requirements In its simplest form, SPMs have only two data requirements: • a time series of total catch data (including discards, bycatch, etc.) and • at least one time series of relative abundance data (usually CPUE from the fishery but much better if fishery independent surveys are available). The abundance data can be constructed if effort data is available corresponding to the time series of catches and if we assume that CPUE is linearly related to abundance. The assessment can greatly benefit if an estimate of the virgin biomass is also available, but this is not essential for applying the model. The longer the time series and the better the quality of these data, the greater chances of having a good assessment. Modern implementation of SPMs through Bayesian approaches can incorporate additional heterogeneous information such as estimates of the intrinsic rate of increase of the stock and estimates of historical catches for which no effort or abundance index is available (McAllister and Pikitch 1998a; Apostolaki et al., 2002; Cortés et al., 2002). 10.2.5 Advantage and disadvantages of Surplus Production Models These models offer an excellent cost/benefit ratio. Data requirements are modest compared with age-structured models yet, SPMs can yield critical information for assessment and management such as estimates of virgin and current biomass, level of population depletion, MSY and optimal effort (f[op]t). Most importantly, they can be used to make projections of the population under several scenarios of management (quotas or efforts) and to evaluate the outcomes of each scenario. This is possible because SPMs explicitly incorporate the time variable unlike demographic analysis and yield-per-recruit (Y/R) models. Thus, they are dynamic models that can be used to make predictions. A further advantage (simplicity), but at the same time criticism (lack of biological reality) of SPMs is that they do not include age structure. They assume that all the processes occurring in a population can be captured by the simple processes described above while ignoring the size or age structure of the population and the dynamics of different parts of the population. Another common criticism of SPMs, especially in respect to elasmobranchs, is that they do not incorporate time delays between reproduction and recruitment. While this is true, in practice this seems to be the least of the problems for the application of SPMs to real shark fisheries. Often the shortage and bad quality of the data available for the assessment are more pressing problems. Bonfil (1996) using Monte Carlo simulation showed, that despite criticisms of these models, SPMs can be useful for certain situations when applied to elasmobranch fisheries data. 10.2.6 Examples of use of Surplus Production Models in shark stock assessment Aasen (1964) was the first to apply the Schaefer model to a shark fishery and probably the first scientist to perform a stock assessment of an elasmobranch species. Although there was a dominant view 40 years ago that these models were not adequate for sharks due to incompatibility between the assumptions of the models and the biology of sharks, they are now widely accepted as applicable although not necessarily recommended as the best. They have been used in the multispecies shark fishery of the east coast of the USA (Otto et al 1977; Anderson 1980; McAllister and Pikitch 1998a; McAllister et al., 2001; Cortés 2002; Cortés et al., 2002), for the kitefin shark fishery in Portugal (Silva 1987), the Australian fishery for school and gummy sharks (Xiao 1995; Walker 1999) and in the multispecies skate and ray fishery of the Falkland Islands (Agnew et al., 2000). 10.3 YIELD PER RECRUIT MODEL 10.3.1 Introduction Beverton and Holt (1957) first developed this model, which provides a steady-state (static) view of the population that allows determination of the catch or yield relative to recruitment (catch divided by recruitment, thus the yield per recruit, or Y/R, name of the technique) that can be obtained from a stock at different levels of fishing mortality F (which is dependent on effort) and age of entry to the fishery. The method is described in detail by Pitcher and Hart (1982), Megrey and Wespestad (1988), and Quinn and Deriso (1999). The model describes the population in terms of the biological processes of growth, recruitment and mortality and treats the exploited population as the sum of its individual members. It has more biological detail than surplus production models but is not as powerful and detailed as the fully age-structured models treated below. Also, it is inferior to SPMs in the sense that it is static, assumes that there is no dependence between stock size and recruitment, and cannot provide estimates of absolute biomass or be used for making projections of stock size according to different management strategies. Its main utility is that it indicates if the fishery is catching fish at an age that is too early or too late to obtain the maximum biomass relative to recruitment and if the level of fishing mortality is too high or could be higher. 10.3.2 Data requirements and assumptions The calculation of yield per recruit requires the following data: • at least two mortality rates (Z-total mortality; M natural mortality; or F-fishing mortality as F = Z-M) • the parameter k of the von Bertalanffy growth function (VBGF) • the age of first capture in the fishery • the age of recruitment to the stock and • the maximum age in the stock. The method has the following assumptions: • there is a distinct spawning period and all fish recruit at the same time and age (they are both knife-edge processes) • growth parameters do not change over time, stock size or age • M is assumed known and constant over all ages, over time and over stock size • F is constant over all ages • Recruitment is constant and can be ignored • the length-weight relationship has an exponent of value = 3 and • there is complete mixing within the stock. 10.3.3 Methodology This model is based on three equations: (i) Von Bertalanffy Growth Model (in weight): W[t] = W[∞] (1 - e^-k(t-t[0]))^3 (10.7) (ii) Exponential survival model: N[t] = R . e^- M(t[c] - t[r]) . e ^- (M + F) (t - t[c]) (10.8) where R is the number of recruits, t[c] is age at first capture and t[r] is age of recruitment to the stock. (iii) General yield equation: where Y represents yield (catch). These three equations can be integrated to obtain the yield equation of Beverton and Holt (1957): • t[0] is the von Bertalanffy parameter that describes age at zero length • t[1] is maximum age of fish in stock • k is the von Bertalanffy growth coefficient • and the integration constants Ω0=1, Ω1=-3, Ω2=3, Ω3= -1 Because the level of recruitment is not known, the above equation is usually expressed in relative terms, as yield per recruit: The model predicts the level of yield (catch) that can be obtained depending on the age of entry and maximum age in the stock and the level of natural and fishing mortality. This model allows managers to investigate the effects of varying fishing mortality (F) or age of first entry (t[c]) on yield. One disadvantage of the model is that the shape of yield is completely determined by growth and mortality. If the stock has a low rate of growth and high M the yield curve is asymptotic (this wrongly suggests yield does not decrease as you fish harder and harder). Conversely, if the stock has rapid growth rate and a low M the yield curve is dome-shaped. 10.3.4 Advantages and disadvantages The main advantages of this method it that it is relatively simple to implement and does not require historical data on catch and effort. It is a step forward from demographic methods because it informs within a relatively simple implementation procedure if fish are being exploited at the right age (or size) and also if fishing is at the right intensity. Using this method advice can be provided on the best age of entry to the fishery and an adequate level of effort, thus offering information that can potentially translate into direct management recommendations such as changing the fishing mesh size of gillnets used to catch sharks, or taking a number of boats out of the fishery to reduce fishing mortality. The main disadvantages are that the method provides no estimate of the absolute biomass of the stock and gives only limited advice on management actions. As with life tables, a disadvantage of this method is that it is not dynamic (there is no time variable) and therefore cannot be used to make predictions. Nor does it incorporate density-dependent processes such as stock-recruitment relationships. Other disadvantages of the model are that it unrealistically assumes constant growth and mortality rates; it is more expensive to implement than SPMs as age needs to be frequently determined requiring large samples of fish; the curve shape is predetermined and inflexible; the model predicts yield even at infinite effort, which is unrealistic; and yield is not expressed in absolute terms so the real magnitude of the catch cannot be known. Using the Y/R method alone can be misleading as pointed by Grant et al. (1979). These authors suggested that the recommended 10-fold increases in fishing mortality from their Y/R assessment was a bad advice as only a 2-fold increase could already reduce the reproductive stock of school sharks to less than half of its original abundance. Using a modified demographic method Au and Smith (1997) showed that the estimates of Y/R obtained by Smith and Abramson (1990) for the leopard shark (Triakis semifasciata) were considerably lower after adjusting for the effect of reduction in recruitment due to fishing. Also, Rago et al. (1998) found that the optimum age of entry predicted by the Y/R model would lead to recruitment failure and stock collapse in spiny dogfish (Squalus acanthias) because of the late age of maturity in this species. Another problem of the Y/R method is that a poor estimation of growth or mortality can strongly influence conclusions and lead to decisions that could put the stock in jeopardy. 10.3.5 Applications The Y/R method has been used for stock assessment of school sharks (Grant et al., 1979), for little skate (Waring, 1984), for leopard shark (Smith and Abramson, 1990), for silky sharks (Bonfil, 1990), for sandbar sharks (Cortés, 1998) and for porbeagle (Campana et al., 1999, 2001). To my knowledge this method has not been used as the main basis for the management of any elasmobranch 10.4 DELAY-DIFFERENCE MODEL 10.4.1 Application and assumptions The delay-difference model of Deriso (1980) is a clever simplification that allows the inclusion of biological information of the species to be taken into account in a simple way. This model belongs to an intermediate class known as partially age-structured models. They represent a step forward from the rather simple surplus-production models that ignore biological processes like recruitment and individual growth, while avoiding the demanding data requirements of the more sophisticated fully age-structured models and they consider age structure implicitly, not explicitly. The biological realism of the delay-difference model arises from terms for recruitment, natural and fishing mortality and growth. Yet, this model can be simplified to be fitted to data on catch and effort and an index of abundance, as in the case of surplus production models. Additional requirements are knowledge of the growth in weight of the species and an estimate of natural mortality. An important advantage of the model is that it has fewer model parameters to be estimated in comparison to fully age-structured models. Thus, it can be applied to fisheries with limited amounts of data while still offering a more realistic representation of population dynamics. The delay-difference model of Deriso (1980) was further generalized by Schnute (1985). The model incorporates four main types of biological information: body growth, recruitment, survival and a measure of age-structure. The main formula of the model links present available biomass (exploitable biomass or that recruited to the gear) to available biomass and population numbers from the previous year. The advantage of the model lies on several simplifications that allow the incorporation of important population dynamics processes into a simple equation. However, perhaps its more important characteristic is that the model allows for time-lags in the dynamics of the stock, such as are found in species with slow growth and late age of entry to the fishery. This ability to take into account time-delay is what gives the model its name of ‘delay-difference’ model. The derivation of the delay-difference model here is taken from Hilborn and Walters (1992). The model assumes that body growth of the exploitable stock can be represented by a linear function (the Brody equation): where w[a] is body weight at age a, and α and ρare constants. This equation states that after a certain age, the typical von Bertalanffy model of growth in weight shown in Figure 10.2 can be represented by a linear equation of weight at age a against weight at age a+1. FIGURE 10.2 Individual growth in weight according to the von Bertalanffy Growth Model. The parameters α and ρof the Brody equation are determined by linear regression as shown in Figure 10.3. This figure shows several possible linear regressions, which differ in how many points are considered for the regression (i.e. different starting points). Which regression is chosen and therefore which parameters α and ρused in the model depends on the age of entry to the fishery. The delay-difference model also assumes that all fish older than age k (in this particular model age of entry to the fishery) are vulnerable to fishing and have the same natural mortality M. Another simplification of the model is that the total survival rate S at time t is given by and can be decomposed into terms for constant and variable (harvest) survival by FIGURE 10.3 Ford-Walford plot of weights at age. Solid diamonds represent the original data points and each straight line is a linear regression using a different starting age (0, 2, 4, 6, and 7). where Ψ is the natural survival rate and h is the harvest rate in year t. This assumes that harvest (fishing) takes place in a short time during the beginning or end of the year. Biomass at age can be represented as numbers at age times average weight at age: This can be extended for the whole exploited population plus the recruitment R: where k is the age of recruitment (to the gear or fishery). Population number N, can be expressed as survivors from last year at age a-1, and all the weights at age a can be expressed using the Brody equation, thus arriving at the following formula: Factoring out terms that do not depend on age results in sums over age k and older for year t-1: B[t] = S[t-1]αN[t-1] + S[t-1]ρB[t-1] + w[k] R[t] (10.18) and total numbers in the population are N[t] = S[t-1]N[t-1] + R[t] (10.19) But the term αN[t-1] can be expressed as αN[t-1] = αS[t-2] + αR[t-2] (10.20) and the term α S[t-2] N[t-2] can be expressed in terms of B[t-1] and N[t-2] using the equation for B[t] above as αS[t-2]N[t-2] = B[t-1] - ρS[t-2]B[t-2] - w[k]R[t-1] (10.21) Combining the last two equations and with some more algebraic manipulations gives the delay-difference equation (Schnute, 1985): B[t] = (1 + ρ)S[t-1]B[t-1] - ρ S[t-1]S[t-2]B[t-2] - ρw[k-1]S[t-1]R[t-1] + w[k]R[t] (10.22) This is the original form of the model and it requires 7 parameters to predict biomass dynamics and to fit the model to catch and CPUE data: i. p and w[k] for the Brody growth equation ii. ψ, the natural survival rate (no fishing) iii. a, b or a', b', for the stock recruitment relationship iv. B[0], the stock size at the beginning of the fishery v. R[0], the recruitment at equilibrium (when mortality equals births) and vi. q, the catchability for the catch equation Recruitment can be expressed using either the Ricker or the Beverton and Holt model, simplified by assuming that the population was in equilibrium (i.e. virgin population) when exploitation began. For the Ricker recruitment model the equations are R[t+1] = S[t-k+1]e^(α' - b'S[t-k+1]) (10.23) For the Beverton and Holt recruitment model the equations are: Other parameters needed to fit the delay-difference model can be estimated externally or internally with the following assumptions: • ρ and w[k] are estimated directly from growth data and • ψ depends on external estimates of natural mortality, M. This leaves us with only 3 parameters to be estimated during model fitting by non-linear methods: i. b or b'for the stock recruitment relationship ii. B[0] stock size at the beginning of the fishery iii. q the catchability coefficient for the catch equation Thus, the delay-difference model can be simplified by fixing values for the first 3 parameters listed above and fitted to the catch and effort data by finding the values of the last 3 parameters using nonlinear iterative methods such as those included in spreadsheet software. The parameter a or a' of the recruitment model is eliminated by the assumptions above. 10.4.2 Advantages and disadvantages of the delay-difference model The advantages of this model are: • the model offers more biological realism than SPMs without the demanding data requirements of fully age-structured models • it takes into account the time delays due to growth and recruitment • it can be fitted to simple catch-effort time series of data when information on mortality and growth is available • fitting the model to data requires estimation of fewer parameters than fully age-structured models, thus simplifying the estimation process and improving performance • it can be used to estimate stock size and for the calculation of management benchmarks and • it can be used to make predictions of different management scenarios. The main disadvantages of this model are (Hilborn and Walters, 1992): • it can provide an acceptable fit to the data in terms of goodness-of-fit criteria while providing parameter values that are meaningless from a biological point of view (e.g. extremely high or low virgin stock sizes, virgin recruitment levels) and • they can at times provide biased estimates of management benchmarks such as optimum fishing effort. 10.4.3 Use of delay-difference models in shark stock assessment This simplification of age-structured population dynamics was initially welcomed with excitement but has been seldom used in practice due to the availability of more sophisticated models that can be easily applied thanks to the powerful computer technology now available. The delay-difference model has not been used often for the assessment of shark fisheries but Monte Carlo simulations performed by Bonfil (1996) showed that it performed better than surplus production models for estimating stock size in shark-like fishes. In addition, this model was used as part of the assessment of the school and gummy shark fisheries of Australia by Walker (1999). Cortés (2002) and Cortés et al. (2002) used a simplified version of the Deriso (1980) delay-difference model known as lagged recruitment, survival and growth model as part of the assessment of small and large coastal sharks, respectively, off the U.S. eastern seaboard. 10.5 VPA AND CATCH-AT-AGE ANALYSIS 10.5.1 VPA structures This family of methods is based on catch-at-age data, i.e. the catch is disaggregated into age-groups. These methods are more detailed and are more realistic than the previously reviewed models. Nevertheless, age-structured models are also extremely data demanding and require much detailed information that is often expensive to obtain. Age-structured models can be classified into two groups (Hilborn and Walters, 1992): Virtual Population Analysis or VPA and statistical catch-at-age analysis or CAGEAN. These methods are recursive algorithms that calculate stock size based on catches broken down by each age-class. Using these methods it is possible to estimate the magnitude of fishing mortality, recruitment and the numbers at age in the stock for each past year using only catch-at-age and an estimate of natural mortality, M. VPA does the calculations without having any specific statistical underlying assumptions. In contrast, the more sophisticated CAGEAN methods depend on formal statistical models and have been developed to the degree that various types of data can be integrated in a statistical framework to be used for the assessment. Thus, data on S/R (stock-recruitment) relationships, CPUE time series, biomass time series and others can be integrated into a powerful analysis. The stock synthesis method of Methot (1989) is one of the best examples of a sophisticated CAGEAN model. 10.5.2 Cohorts as the basis of VPA and CAGEAN A fundamental part of age-structured models is the concept of a cohort. A cohort comprises all the individuals (fish in this case) that were born in the same year. An example of a human cohort is all the persons that were born in 1960. The cohort of 1960 can be followed through time year-after-year by looking at individuals that are age 1 in 1961, age 2 in 1962, and so on. The size of the 1960 cohort in the year 2003 consists of all the individuals that were born in 1960 and have survived to that year. The cohort concept is illustrated in Figure 10.4. FIGURE 10.4 Diagrammatic representation of the 1960 cohort of humans (all individuals born in 1960). N represents the numbers of age A alive each year for cohort 1960. VPA and CAGEAN are recursive algorithms that track the history of each cohort in the exploited population back in time from the present to the time when each cohort was born or more commonly to the time it recruited to the fishery, i.e. they calculate the number of fish alive in each cohort for each past year, following each cohort through time. They are used to reconstruct the entire exploited population to estimate fishing mortality and numbers at age for each age class in each year. 10.5.3 Virtual Population Analysis The VPA is also known as a cohort analysis because each cohort is treated separately. The method is based on the following equation: N alive at beginning of next year = (N alive at beginning of this year) - (catch this year) - (natural mortality this year) In this particular case recruitment is not considered because we are analysing only a single cohort. We can change the above equation to: N alive at beginning of this year = (N alive at beginning of next year) + (catch this year) + (natural mortality this year) Assuming that natural mortality, M, is known and that at some age x there are no more fish alive (that is, all fish in the cohort die after age x) we can iteratively calculate the number of fish alive each year, starting from the oldest age and moving backwards to the youngest. The basis of the method is the assumption that if we know that this year we have zero fish of the oldest age left alive and we know how many of them we caught last year (in theory those were the last fish of that age left in the sea after those which died of natural causes) and if we know the instantaneous natural mortality rate then, for fisheries where the fishing period is short it can be assumed that there is no natural mortality during the short fishing period so that: N[t]= N[t+1] + C[t] + D[t] (10.27) so that N[t]- N[t](1-S) = N[t+1] + C[t] (10.30) N[t]- N[t]+ N[t]S = N[t+1] + C[t] (10.31) N[t]= (N[t+1] + C[t]) / S (10.33) where N is number of fish, C is catch, D is the number of deaths, t is time (year) and S is the finite survival rate. The last equation above is the key equation for VPA or cohort analysis, when fishing takes place in a single short period of time during which we can consider M to be negligible. This equation allows the calculation of the numbers alive last year from the numbers this year, the catch-at-age and natural mortality, but because we assume there were no more fish left of the oldest age this year (they were all caught or died) we can calculate the numbers last year with only catch and mortality as parameter values. An illustrative example of the principles of VPA Consider a shark species that lives only to 10 years (such as Rhizoprionodon terraenovae) when we assume that all the individuals die. Consider a situation where this species recruits to a fishery at age 3. Further consider that this fishery takes place in only a couple of weeks each year when the fish aggregate to mate. The information needed for a cohort analysis is an estimate of M, which for this stock is considered to be 0.5 (finite rate) and the total catch of fish in each age class for each year. A table with such hypothetical catch data is given in column 3 of table 10.1 and represents the total numbers in the catch for the cohort of Rhizoprionodon terraenovae born in 1980. Using these data and the following equations provides estimates of: • the population at the end of the fishery each year • the population just before the fishery each year • the harvest rate and • the instantaneous fishing mortality rate TABLE 10.1 Hypothetical example of data required and the results of a cohort analysis for a short-lived elasmobranch, loosely based on the life history of Rhizoprionodon terraenovae. See text for methods used to calculate each column. Year Age Catch Cohort size at start of year Cohort size before fishery Harvest rate Instantaneous fishing mortality rate 1990 10 0 0 - - - 1989 9 900 1,800 900 1.00 Infinite 1988 8 2,480 8,560 4,280 0.58 0.87 1987 7 6,032 29,184 14,592 0.41 0.53 1986 6 13,985 86,338 43,169 0.32 0.39 1985 5 8,183 189,042 94,521 0.09 0.09 1984 4 7,653 393,390 196,695 0.04 0.04 1983 3 2,045 790,870 395,435 0.01 0.01 The numbers at age t at the start of the year is given by N[t]= (N[t+1] + C[t]) / s (10.34) For numbers alive at the beginning of the fishery: The harvest rate is given by: The instantaneous fishing mortality rate is given by: Table 10.1 shows the results of the calculations for the cohort born in 1980; but other cohorts can be treated in the same way for a full VPA. For the last cohort in the last year of data we assume there are no fish left, they all die after age 10 in 1990. The table is constructed for this cohort using equation (1) to calculate cohort size at the beginning of each year (note that fish age 10 in 1990 were age 9 in 1989, etc.). The equations for VPA when fishing takes place during the whole year (continuous fishing) are more complicated and can be found in Hilborn and Walters (1992) and Quinn and Deriso (1999) Sparre and Venema (1992) describe a length-based VPA method. The above example of cohort analysis includes only one cohort. For a complete VPA the same method should be applied for all cohorts that have completely ceased to exist, which is all cohorts that are no longer present in the fishery. One remaining problem after doing this is that there is no information to do the analysis for living cohorts (those still present in the fishery) and these are usually the most important for managers. One way to solve the problem of incomplete cohorts is to estimate the fishing mortality rate of cohorts currently being fished and use this to estimate the sizes of the incomplete cohorts. Two ways used to estimate the size of current cohorts are: (a) to obtain population size estimates from surveys or mark recapture methods or (b), more commonly, to assume a value for the current F and estimate previous values from there. This last case, known as the terminal F assumption, comes from the following equation: There are two ways to estimate F here: (a) from tag-recapture methods or (b), from effort (f) data while assuming that q can be obtained from the relation F = fq. The catchability coefficients (q) for each age can be obtained from the complete cohorts and assuming q is constant over time we can use that together with effort data to calculate F for each age. Another variation of this approach is known as the ‘tuned’ VPA which first uses the q's from complete cohorts and this is used to derive a new set of catchability coefficients for the incomplete Disadvantages of VPAs A problem of VPA is that using the wrong M estimate can lead to severely overestimated or underestimated cohort sizes. More worryingly, when catchability increases as the stock declines in size, using the assumption that the terminal F has not changed has been found to introduce great errors, overestimating the stock size and probably recommending larger catches than can be sustained, which can lead to overfishing of the stock. Another problem is that to obtain the necessary catch-at-age data it is essential to perform routine ageing of large samples of fish from the catch (which is costly) and if the ages are wrongly estimated this will introduce systematic biases to the assessment. Use of VPAs for shark stock assessment Smith and Abramson (1990) used a backward VPA in combination with Y/R to estimate replacement rates of leopard sharks off California 10.5.4 Catch-at-age analysis 10.5.4.1 Methods and assumptions CAGEAN or statistical catch-at-age analysis is similar to VPA but differs in that it uses formal statistical methods to estimate the current abundance of incomplete cohorts. CAGEAN methods also provide a means to estimate natural mortality rate provided that the data have clearly contrasting levels of fishing effort and total mortality rate. CAGEAN starts by using the catch curve concept (Section 8) to calculate the instantaneous total mortality rate for each age class from the catch at age data. In the same way that curves for the catches at age of one single year are calculated, the same concept is applied to the catches of all cohorts between subsequent years. The equation used for normal catch curves (one single year of data) is a linear regression of the numbers-at-age in the catch (Ca) against age (a), where the slope of the line is the estimate of Z, and the intercept of the Y axis represents the logarithm of the recruitment (R) times the vulnerability to the gear (v): ln(C[a]) = 1n(Rv) -Za (10.39) To use the catch equation to estimate mortality within a single cohort we use a modified version of the catch curve is used with the following equation: ln(C[ai]) = 1n(R[j]v) - Za (10.40) where j denotes a specific cohort. This allows the estimation of the total mortality and the relative recruitment ‘strength’ of each cohort. This method assumes that fishing and natural mortality are constant and that vulnerability to the fishing gear is constant above a given age. One problem is that these catch curves do not allow an estimate of the natural mortality rate or vulnerability, so their usefulness is limited. CAGEAN is a modification of these techniques. An introduction to the CAGEAN methods explained below is provided by Hilborn and Walters (1992) and is recommended for beginners: Quinn and Deriso (1999) offer an updated and mathematically more rigorous treatment of the same topics. 10.5.4.2 Paloheimo method There are several versions of the CAGEAN method. That of Paloheimo (1980) is the simplest and the one analysed here with some detail. The Paloheimo method uses the following equations and some algebra to arrive at its key equation. It starts with the catch equation, which in this version assumes that fishing mortality is responsible for a fraction (F/Z) of the total mortality: Second, numbers at age a can be related to recruitment times cumulative fishing and natural mortality for each previous age by A linear relation between effort and fishing mortality is assumed where f is effort and q is the catchability coefficient. These equations can be combined and manipulated to give: This equation relates CPUE at age to recruit numbers, the catchability coefficient, total and natural mortality and fishing effort. It can be shown that this becomes: The Paloheimo method assumes that M is constant over years and uses a well-known approximation for the last term (which is valid for values of Z that are no larger than 0.7): The Paloheimo equation can be arranged to give: where j = year, a = age, and k = the number of years that the cohort has been fished. Equation 10.47 is a linear multiple regression of the form: Given the needed data (usually catch by age for several ages and the corresponding effort that produced the catches), this equation can be solved with standard multiple regression packages to obtain estimates of Rq, q, and M. The following example taken from Hilborn and Walters (1992) shows an application of Paloheimo's method. Table 10.2 presents the required data on catch at age and corresponding effort for Lake Erie TABLE 10.2 Data on catch at age and corresponding effort for the 1971 cohort of Lake Erie perch Age Catch Effort 2 103 15.9 3 59 15.4 4 11 13.5 5 3 12.6 The estimates of the parameters given by Paloheimo's methods are: Ln (Rq) = 2.37 q = - 0.22 M = 4.34 The correlations between the parameters are shown in Table 10.3. TABLE 10.3. Parameter correlations for the CAGEAN analysis based on the Paloheimo method for the data of Table 10.2. Parameter correlations Rq q M Rq 1 q -0.71 1 M -0.69 -1 1 These results are suspicious and suffer from strong parameter correlation. This occurs because of poor data contrast (see Section 10.6); q is negative, which is impossible, while M is extremely high. To perform this catch-at-age analysis, not only the catches at age for each year for this cohort are needed, but also the fishing effort used to catch them. These efforts are all of the same magnitude and almost constant (i.e. poor contrast in effort) and this is why there is a strong negative correlation between q and M. To simultaneously analyse data for three cohorts of Lake Erie perch using this method (see Hilborn and Walters 1992 for further details), dummy variables may be used to form an experimental design table, to perform a multiple linear regression. In this case, the equation becomes: Y = b[1]X[1]+ b[2]X[2]+ b[3]X[3]+ b[4]X[4]+ b[5]X[5] (10.49) The first three coefficients represent the recruitment level of each cohort. The dummy variables X[1–3] take the values 1 or 0 depending on which cohort we are analyzing, so that the corresponding coefficient b (recruitment) is included or excluded. The last two terms are the same as before and are the fishing effort and number of years of accumulated natural mortality. An analysis the results would still not be satisfactory because there is still poor data contrast in the effort for this set of data despite the fact that there are data for 3 different cohorts and 4 different years of fishing. It is still impossible to differentiate between the effects of natural and fishing mortality from these data. However, it is possible to obtain good estimates of the recruitment levels because there is good contrast in the relative abundance data (CPUE). 10.5.4.3 Doubleday's method A more general approach to the catch-at-age method was proposed by Doubleday (1976). This method does not assume a linear relationship between the variables and is thus more difficult to calculate, requiring non-linear estimation methods. Its advantages are that fishing mortality F is not assumed to be proportional to effort, so the method can be applied in the absence of effort data. However, this method suffers from the general problem that good contrast is needed between fishing mortalities for good parameter estimation. The main Doubleday equation is presented below and more details about this method can be found in Hilborn and Walters (1992) and Quinn and Deriso (1999). 10.5.4.4 Other methods A more developed and powerful catch-at-age method is that developed by Fournier and Archibald (1982). Paloheimo and Doubleday derived their models assuming an underlying deterministic process but in nature variables are measured and are subject to natural variability, which may be interpreted as noise. The method of Fornier and Archibald is flexible and accounts for explicit estimation of errors • catch measurement (C) • fishing mortality and • stock recruitment relationship (S/R) Their method also explicitly accounts for a stock-recruitment relationship. This method is sophisticated both mathematically and statistically and is not analysed here, but has the advantage that it can include several types of external information that can help in the estimation of parameters, such as estimates of recruitment levels, fishing mortalities from other studies and effort data. A further sophistication of this type of analysis was developed by Methot (1989) and is able to use CPUE, gear selectivity and independent survey biomass data in the estimation of parameters. 10.6 PRINCIPLES OF FITTING MODELS TO DATA 10.6.1 Assumptions Some of the models used in fisheries stock assessment are simple but the estimation of their parameters, which implies fitting the models to the data, is not always simple. In the case of the surplus production models treated above, there are three main approaches that are commonly employed for the estimation of their parameters. First, one might assume equilibrium conditions, that is, that all the catches observed so far in the fishery are sustainable at the corresponding level of fishing effort. This assumption is invariably wrong and must be avoided. Equilibrium methods were used to simplify the computations because of difficulties in calculating parameter values analytically. However, modern computers allow the use of other methods mentioned below or even more sophisticated ones and there is no longer any need to assume equilibrium conditions. 10.6.2 Linear regression A better option than assuming equilibrium conditions is to use linear regression. In the case of the Schaefer model, it is shown below that this model can be expressed as a linear equation to which standard regression methods can be applied to provide the values of the parameters and fit the model to our data. Given the Schaefer model equation for biomass dynamics in a fishery: Thus, substituting the last equation in the first gives: Rearranging, dividing by U[t] and multiplying by q gives: Equation 10.55 is a linear equation of the general form: Y = b[0] + b[1]X[1] + b[2]X[2] (10.56) which can be easily solved using the multiple regression facilities available in most spreadsheet software programs. Although regression methods are easily applied to solve fisheries models, it has been demonstrated that they can give biased answers (Uhler 1979). They can also produce obviously wrong answers, such as negative values of r or q, which are biologically impossible. The general corollary is that illogical answers only mean bad data! 10.6.3 Time-series fitting The most recommended method for fitting fisheries models to data is time-series fitting. Hilborn and Walters (1992) note that this method was first proposed by Pella and Tomlinson (1969) and implies taking an initial estimate of the stock size at the beginning of the time series of data (catch and CPUE) and using the Schaefer model to predict each point in the entire time series of data. Initial parameter values (guesses) are iteratively adjusted to minimize the difference (ε[t]) between the observed CPUE and the CPUE predicted by the Schaefer model: Where U (CPUE) is: This means that r, q, K, and the initial biomass size B[0] be estimated. Usually, the problem of finding the best parameter values while minimizing the difference given by equation 10.57 is solved by using nonlinear estimation procedures such as those available in spreadsheets. 10.6.4 Bayesian estimation Bayesian estimation is a powerful method for fitting fisheries models to data because it allows the incorporation of previous knowledge about the system into the estimation process, effectively helping to find better solutions. The types of additional information that can be incorporated into Bayesian estimation are extremely varied and include, e.g. fishery CPUE, independent survey CPUE, catches, estimates of intrinsic rate of population growth from life-table analyses, biological limits, knowledge from similar stocks and mark-recapture information. Bayesian estimation is also extremely useful because it quantifies the uncertainty of the parameter estimates. The method uses previous knowledge to determine a probability distribution for the parameters that are to be estimated. This distribution is known as the prior probability distribution or ‘the prior’. Although relatively new in fisheries stock assessments, Bayesian estimation has rapidly become a powerful and accepted method to fit models to data. Bayes theorem is based on the conditional probability and states that the probability of a parameter or group of parameters given certain data is equal to the product of: (a) the probability of the data given the parameters and (b) the probability of the parameters themselves, all divided by the sum over all possible parameter values of the product of (a) and (b): The left term of the equation is the posterior probability distribution or ‘posterior’. The right-most terms in the numerator and denominator, imply that previous knowledge about the shape of the distribution of the parameters is available. This is the strength of the method as it allows additional ‘external’ information, such as biological or fisheries information to be included into the estimation process. Depending on the type of ‘external’ information that can be incorporated, different possible prior distributions can be used for the parameters such as the binomial, normal, uniform, Poisson, multinomial and others. For more details about the types of distributions for different types of data users should consult a statistical text book. A rudimentary, but simple way, to implement Bayesian statistics is to calculate the “kernel”, which is based on the sum of squares. where L is the likelihood of the parameters and SS, the sum of squared differences between the real data and the estimated data points derived from a given set of model parameter values for t-1 degrees of freedom. Bayesian approaches have been applied to elasmobranch fisheries by, e.g. McAllister and Pikitch (1998a,b), Punt and Walker (1998), Babcock and Pikitch (2001), McAllister et al. (2001) and Apostolaki et al. (2001, 2002). Berger (1985), Gelman et al. (1995) and Congdon (2001) provide a comprehensive treatment of Bayesian analysis. Hilborn and Walters (1992), Quinn and Deriso (1999) and Haddon (2001) provide more detailed treatment of parameter estimation issues. 10.6.5 Data quality An extremely important principle of practical fisheries science identified by Hilborn and Walters (1992) and one often overlooked is that one cannot know exactly how a fish stock will respond to exploitation until the stock has been exploited. A good stock assessment depends as much on having an adequate model to describe the system dynamics as on the quality of the data that the model is fitted to. Data quality does not only refer to biases or errors, but also to the danger ofhow much information is embedded in the data. Historical variation in stock size and fishing pressure are needed if the data are to be used to estimate the parameters of the model with reliability. Otherwise the assessment may produce meaningless estimates that do not represent the stock dynamics well. The most important quality of fisheries data is the degree of contrast imbedded in the data. To obtain good parameter estimates data must have high contrast. For example, in a SPM we should ideally have a data point at low stock sizes with low fishing effort (for information about r), data points at high stock sizes with low fishing effort (to estimate q and K) and data points at high fishing effort to estimate q. This is difficult to find in real fisheries because of the way most fisheries develop. Typically, low effort at large stock sizes is gradually increased to high fishing levels that usually lead to low stock sizes. Thus, one usually misses having a point of low fishing effort at low stock sizes. This common way in which fisheries develop leads to uninformative data and a typical case known as the “one way trip”in which the data show an increase in effort with time accompanied by a declining CPUE see (figure 10.5). This lack of contrast in the data makes for uncertain parameter estimates. In general, the standard deviation of such parameters is as large as, or larger than, the actual parameter values, and signales unreliable results. Under such circumstances management will be severely handicapped. FIGURE 10.5 A hypothetical example of a ‘one-way’ trip type of data (modified from Hilborn and Walters, 1992). Data with better contrast can be obtained when a fishery shows a period of increased effort followed by a period when effort was reduced gradually such that the stock was allowed to rebuild after heavy exploitation. This case has been termed by Hilborn and Walters (1992) as ‘moving up and down the isocline’. Note how Figure 10.6 shows that there is a better scatter in the data instead of them all falling along one single line as before. These data have inherently more variation and contrast than the preceding example (the solid diamonds in the figure represent the start and finish points of the time series). Typically, in these cases the model parameters are more precisely estimated than for a ‘one-way trip’ case, but the slow pace of change in effort in these data still does not generally provide enough contrast for good precision. In cases like that in Figure 10.6, the standard deviation of the parameters is usually about half or less than the actual parameter estimates and although not good, it is better than in the previous example. FIGURE 10.6 Hypothetical example of data with better contrast (modified from Hilborn and Walters, 1992). Data sets with high contrast have strong variations in data values, with relatively rapid changes back and forth between high and low effort. In these cases, parameters can be much more precisely estimated although other factors such as the total number of points in the time-series of data and the intrinsic variability of the data also influence the final precision of model parameter In summary, when fitting models to fisheries data it is imperative to look at the uncertainty in the parameter estimates and not only at a single ‘goodness-of-fit’ measure such as the sum of squares. It is always advisable to apply different models to the same data set and compare the results between models, trying to validate results or to ask questions about why results might be different and what the implications of this are. In addition, it is important to learn how to use uncertain (‘bad’ ) results to improve the contrast in the data through carefully thought and well planned management regulations aimed at improving the quality of the data (such as large variations in effort over short periods of time). 10.6.6 The relationship between CPUE and abundance There is an important assumption at the core of most fisheries models that use fisheries-dependent CPUE information (as most do) that the abundance of the fish stock (or other aquatic animal) has a direct relationship with CPUE, i.e. that CPUE is an index of abundance. This can be expressed mathematically for fisheries where the fishing season occurs as a single pulse or over a relatively short part of the year as: C[t]= qf[t]B[t] ⇒ U[t] = qB[t] where U[t] is CPUE in year t. According to this expression, CPUE is directly linked to biomass (abundance) by the coefficent q, the catchability coefficient. This model assumes that there is a linear relationship between CPUE and the abundance of the stock. This is a dangerous but necessary assumption of most fisheries models, but one that should be questioned and validated. Hilborn and Walters (1992) note that the relationship between CPUE and abundance can have at least two other forms apart from the linear form. Hyperdepletion occurs when the stock abundance decreases at a much slower rate than the CPUE. Thus, the CPUE signal tells us that the stock abundance is low when it is still high. If hyperdepletion is not detected, one would conclude that there was overexploition of the stock when in fact the stock might be in a good state. Hyperstability happens when the stock abundance falls more rapidly than the CPUE index, thus giving the opposite impression, that the stock abundance is still high when there may be dangerous overexploition of the resource. Hyperdepletion can occur when the species is being exploited only over a relatively small part of its range, as when there are natural refuge areas (such as deeper waters or rougher grounds where the gear cannot fish). In such cases the exploited part of the stock will decrease rapidly but the overall abundance of the entire stock does not. Given that the abundance index (CPUE) is based only on the fishing grounds, it will show a faster decrease than if it was based on fishing over the entire geographical range of the stock. Hyperstability is a well known phenomenon in fisheries for highly gregarious or schooling species such as herrings, sardines, anchovies and tunas. In these fisheries, searching for fish schools is highly efficient and once located, fishing an entire school is relatively quick and efficient. The remaining schools remain concentrated as the overall abundance of fish goes down. Possible ways to detect a lack of proportionality between CPUE and effort include mapping and stratification of CPUE and effort data to analyse spatial patterns and depletion experiments to gain additional information. Overall, hyperstability is more common and more dangerous, as it leads to stock collapses. A way to avoid this is to obtain fishery-independent indices of stock abundance (see Section 12), either through research cruises or by coordinating efforts with fishermen to perform controlled experiments to fish in other areas or other ways than they usually do, such as following a systematic sampling design. Quinn and Deriso (1999) summarize different ways to model non-linear relationships between CPUE and abundance. Finally, it should be mentioned that generalized linear models (GLMs) are becoming common methods for standardizing fishery-dependent CPUE data. These methods take account of the effect of various factors (such as environmental variables or fishery operational variables) on catch rates. Fisheries stock assessment in not usually a problem of the species or group under analysis but rather a problem of the approach used for the analysis. There are several methods available to perform stock assessment and some of them have been presented here in detail. However, keep in mind that there are three main rules for good stock assessment: i. The data drive the analysis and although we should always try to do the best we can with whatever data we have, only complete and good quality data provide reliable assessments in the long run. Having limited data provides only limited and uncertain advice no matter which model is used. The main problem for elasmobranch stock assessment is not the model used, but that data be available. For this reason, fisheries managers should strive to build the necessary systems to collect the appropriate information needed for stock assessment. ii. There is no single ‘best’ model that should be used for fisheries stock assessments. The best assessment is one that uses all the models that can be applied depending on available data and compares the results of all models to detect inconsistencies, coincidences and patterns. A complete picture of the situation can only be obtained when the conclusions from one analysis are compared with those of a different analysis and the different results are used critically to gauge conclusions, improve the data and therefore have the capacity for better assessments in the iii. Stock assessment is a long-term and dynamic process that never ends. Models are used not only to decide how many fish should be taken next year or how many fishermen should be allowed to fish, but also and perhaps more importantly, to set goals about the ways in which to obtain fisheries data, the type of data that are lacking (including biological and ecological information) and that must be obtained in the future to improve the quality of the assessments. Fish stock assessment must be a feedback system to be successful. Table 10.4 presents a few examples of real elasmobranch fisheries with a list of their characteristics, the methods used in each case for stock assessment, the status of the fishery and major references. These examples can be reviewed more closely by those interested in more detailed analyses of real elasmobranch fisheries and the practice of their stock assessment and management. TABLE 10.4 A referenced selection of real shark fisheries, summarizing their main characteristics, the assessment methods in use and the state of management and the resource. Fishery Species Catch Management system Stock Assessment Methods Status Main references Southern Galeorhinus galeus, Mustelus antarcticus 2,800 Controls on amount of gear Surplus Production, Delay-difference and Overexploited, under Australian shark and other spp t/y (licenses) Age-structured models recovering regulations Walker 1999 Canadian Porbeagle Lamna nasus 850 t/ TAC (250 t), Fishing licenses Catch curves, catch rate trends, Overexploited, under severe Campana et al.1999, 2001 shark fishery y plus fishing restrictions agestructured mode recovering regulations Galeorhinus galeus, New Zealand shark Squalus acanthias, 17,000 None, quotas established through ad hoc Recovered after fisheries Callorhinchus milii, t/y ITQs and TACs methods (proportion of past catches) overxploitation or unknown Francis and Shallard 1999 Mustelus lenticulatus, Raja spp. Hydrolagus spp. and other 15 spp East coast of US 39 species mostly Carcharhinus 3,500 TAC Bayesian Surplus Production Models Overexploited, under MacAllister and Pikitch. shark fishery s t/y recovering regulations 1998a, b; Branstetter 1999 Gulf of Mexico 35 species mostly Carcharhinus 12,000 5 prohibited species and other None Unknown, likely heavily Bonfil 1997, Castillo et al. shark fisheries t/y simple regulations overexploited 1998 Argentinean shark Mustelus schmitii, Galeorhinus galeus, 30,000 None None Unknown, likely heavily Chiaramonte 1998 fisheries Carcharhinus brachyurus and other 10 spp t/y overexploited 10.8 LITERATURE CITED Aasen, O. 1964. The exploitation of the spiny dogfish (Squalus acanthias, L.) in European waters. Fisk. Dir. Ski. Senie. Haocundus, 13: 5–16. Agnew, D.J., Nolan, C.P., Beddington, J.R. & Baranowski, R. 2000. Approaches to the asessment and management of multispecies skate and ray fisheries using the Falkland Islands fishery as an example. Can. J. Fish. Aquat. Sci., 57: 429–440. Anderson, E.D. 1980. MSY estimate of pelagic sharks in the Western North Atlantic (mimeo.), U.S. Dep. Commer., NOAA, NMFS, NEFC, Woods Hole Lab. Ref. Doc. No. 80–18: 13. Anderson, E.D. 1990. Fishery models as applied to elasmobranch fisheries. In H.L. Pratt, Jr., S.H. Gruber. & T. Taniuchi (eds). Elasmobranchs as Living Resources: Advances in biology, ecology, systematics and the status of the fisheries, pp. 473–484. NOAA Technical Report NMFS 90. Anderson, E.D. & Teshima, K. 1990. Workshop on fisheries management. In H.L. Pratt, Jr., S.H. Gruber. & T. Taniuchi (eds). Elasmobranchs as Living Resources: Advances in biology, ecology, systematics and the status of the fisheries, pp. 499–504. NOAA Technical Report NMFS 90. Apostolaki, P., Babcock, E.A., Bonfil, R. & Mcallister, M.K. 2002. Assessment of large coastal sharks using a two-area, fleet disaggregated, age-structured model. NMFS Shark Evaluation Workshop. SB-02-1. 27pp. Apostolaki, P., Mcallister, M.K., Babcock, E.A. & Bonfil, R. 2001. Use of a generalized stagebased, age-and sex-structured model for shark stock assessment. ICCAT Data preparatory meeting for pelagic shark assessment, 11–13 September 2001. Halifax, Nova Scotia. Au, D.W. & Smith, S.E. 1997. A demographic method with population density compensation for estimating productivity and yield per recruit of the leopard shark (Triakis semifasciata). Can. J. Fish. Aquat. Sci., 54: 415–420. Babcock, E.A. & Pikitch, E.K. 2001. Bayesian methods in shark fishery management. Shark News. Newsletter of the IUCN Shark Specialist Group, 13: 3pp. Berger, J.O. 1985. Statistical decision theory and Bayesian Analysis (second edition). Springer-Verlag, N.Y. 617 pp. Beverton, R.J.H. & Holt, S.J. 1957. On the dynamics of exploited fish populations. Fisheries Investment Series 2, vol 19. U.K. Ministry of Agriculture and Fisheries, London. Bonfil, R. 1990. Contribution to the fisheries biology of the silky shark, Carcharhinus falciformis (Bibron 1839) from Yucatan, Mexico. M.Sc. Thesis. School of Biological Sciences. Bangor, University College of North Wales: 112 pp. Bonfil, R. 1996. Elasmobranch fisheries: status, assessment and management. Ph.D. Thesis. Faculty of Graduate Studies. Vancouver, BC, University of British Columbia, 301 pp. Bonfil, R. 1997. Status of shark resources in the Southern Gulf of Mexico and Caribbean: implications for management. Fisheries Research, 29: 101–117. Branstetter, S. 1999. The management of the United States Atlantic shark fishery. In R.Shotton (ed.). Case studies of the management of elasmobranch fisheries, pp. 109–148. FAO Fisheries Technical Paper No. 378/1. Rome. Campana, S., Marks, L., Joyce, W. & Harley, S. 2001. Analytical assessment of the porbeagle shark (Lamna nasus) population in the Northwest Atlantic, with estimates of long-term sustainable yield. Canadian Science Advisory Secretariat Research Document 2001/067:59 pp. Campana, S., Marks, L., Joyce, W., Hurley, P., Showell, M. & Kulka, D. 1999. An analytical assessment of the porbeagle shark (Lamna nasus) population in the northwest Atlantic. Canadian Stock Assessment Secretariat Research Document 99/158: 57pp. Castillo-Geniz, J.L., Marquez-Farias, J.F., Rodriguez De La Cruz, M.C., Cortés, E. & Cid Del Prado, A. 1998. The Mexican artisanal shark fishery in the Gulf of Mexico: towards a regulated fishery. Marine and Freshwater Research, 49(7): 611–620. Chiaramonte, G.E. 1998. Shark fisheries in Argentina. Marine and Freshwater Research, 49(7): 601–609. Congdon. P. 2001. Bayesian statistical modelling. Wiley and Sons, Chichester, England. 556 pp. Cortés, E. 1998. Demographic analysis as an aid in shark stock assessment and management. Fish. Res., 39: 199–208. Cortés, E. 2000. Life History Patterns and Correlations in Sharks. Fisheries Science, 8:299–344. Cortés, E. 2002. Stock assessment of small coastal sharks in the U.S. Atlantic and Gulf of Mexico. Sust. Fish. Dir. Contrib. SFD-01/02-152. NOAA Fisheries. Panama City, FL. Cortés, E., Brooks, L. & Scott, G. 2002. Stock assessment of large coastal sharks in the U.S. Atlantic and Gulf of Mexico. Sust. Fish. Dir. Contrib. SFD-02/03-177. NOAA Fisheries. Panama City, FL. Deriso, R.B. 1980. Harvesting strategies and parameter estimation for an age-structured model. Can. J. Fish. Aquat. Sci., 37: 268–282. Doubleday, W.G. 1976. A least squares approach to analyzing catch at age data. Res. Bull.Int. Comm. NW Atl. Fish., 12: 69–81 Fournier, D.A. & Archibald, C. 1982. A general theory for analyzing catch at age data. Can.J. Fish. Aquat. Sci., 39: 1195–1207 Fox, W.W. 1970. An exponential surplus-yield model for optimizing exploited fish populations. Trans. Am. Fish. Soc., 99(1): 80–88. Francis, M.P. & Shallard, B. 1999. New Zealand shark fishery management. In R. Shotton (ed.). Case studies of the management of elasmobranch fisheries, pp. 515–551. FAO Fisheries Technical Paper No. 378/2. Rome. Gelman, A., Carlin, J., Stern, H. & Rubin, D. 1995. Bayesian data analysis. Chapman and Hall, New York. Grant, C.J., Sandlord, R.L. & Olsen, A.M. 1979. Estimation of growth, mortality and yield per recruit of the Australian school shark, Galeorhinus australis (Macleay), from tag recoveries. Aust. J. Mar. Freshwater Res., 30: 625–637. Haddon, M. 2001. Modelling and quantitative methods in fisheries. Chapman and Hall, CRC, Boca Raton, FL. 424 pp. Hilborn, R. 1979. Comparison of fisheries control systems that utilize catch and effort data. J. Fish. Res. Bd. Can., 33: 1477–1489. Hilborn, R. & Walters, C.J. 1992. Quantitative Fisheries Stock Assessment: Choice, dynamics and uncertainty. Chapman and Hall. 570 pp. Hoff, T.B. 1990. Conservation and Management of the Western North Atlantic Shark Resource Based on the Life History Strategy Limitations of Sandbar Sharks. Ph.D. Thesis. Marine Studies. Delaware, University of Delaware. 282 pp. Holden, M.J. 1973. Are long-term sustainable fisheries for elasmobranchs possible? J.Reun. Cont. Int. Explor. Mer., 164: 360–367. Holden, M.J. 1977. Chapter 9: Elasmobranchs. In J. Gulland (ed.). Fish Population Dynamics, pp: 187–215. John Wiley and Sons, London. Ludwig, D. & Walters, C.J. 1985. Are age-structured models appropriate for catch-effort data? Can. J. Fish. Aquat. Sci., 42: 1066–1072. Ludwig, D. & Walters, C.J. 1989. A robust method for parameter estimation from catch and effort data . Can. J. Fish. Aquat. Sci., 46: 137–144. Ludwig, D., Walters, C.J. & Cook, J. 1988. Comparison of two models and two estimation methods for catch and effort data. Natural Resource Modeling, 2(3): 457–498. Mcallister, M.K. & Pikitch, E.K. 1998a. A Bayesian approach to assessment of sharks: fitting a production model to large coastal shark data. London, Renewable Resources Assessment Group, pp. 1–23. Mcallister, M.K. & Pikitch, E.K. 1998b. Evaluating the potential for recovery of large coastal sharks: a Bayesian decision analysis. NMFS Shark Evaluation Workshop. SBIV-27: 25pp. Mcallister, M.K., Pikitch, E.K. & Babcock, E.A. 2001. Using demographic methods to constructs Bayesian priors for the intrinsic rate of increase in the Schaefer model and implications for stock rebuilding. Canadian Journal of Fisheries and Aquatic Sciences, 58: 1871–1890. Megrey, B. & Wespestad, V.G. 1988. A Review of Biological Assumptions underlying Fishery Assessment Models. National Marine Fisheries Service, 44 pp. Methot, R.D. 1989. Synthetic estimates of historical abundance and mortality for northern anchovy. Am. Fish. Soc. Symp., 6: 66–82. Otto, R.S., Zuboy, J.R. & Sakagawa, G.T. 1977. Status of Northwest Atlantic billfish and shark stocks (mimeo). Report of the La Jolla Working Group, 28 March–8 April 1977. Paloheimo, J.E. 1980. Estimation of mortality rates in fish populations. Trans. Am. Fish. Soc., 1094: 378–386. Pella, J.J. & Tomlinson, P.K. 1969. A generalized stock production model. Bull. Inter-Am. Trop. Tuna Comm., 13: 419–496. Pitcher, T.J. & Hart, P.J.B. 1982. Fisheries Ecology. Croom Helm. London. 414 pp. Polacheck, T., Hilborn, R. & Punt, A.E. 1993. Fitting surplus production models: comparing methods and measuring uncertainty. Can. J. Fish. Aquat. Sci., 50: 2597–2607. Prager, M.H., Goodyear, C.P. & Scott, G.P. 1994. Application of a stock-production model to age-structured simulated data: a swordfish-like model. ICCAT Working Document SCRS/94/116: 9 pp. Pratt, H.L.J. & Casey, J.G. 1990. Shark reproductive strategies as a limiting factor in directed fisheries, with a review of Holden's method of estimating growth-parameters. In H.L. Pratt, Jr., S.H. Gruber. & T. Taniuchi (eds). Elasmobranchs as Living Resources: Advances in biology, ecology, systematics and the status of the fisheries, pp. 97–109. NOAA Technical Report NMFS 90. Punt, A.E. 1988. Model selection for the dynamics of the southern African hake resources. M.Sc. Thesis. University of Cape Town, 395 pp. Punt, A.E. 1991. Management procedures for the cape hake and baleen whale resources. Benguela Ecology Programme Report No. 32, 643 pp. Punt, A.E. & Walker, T.I. 1998. Stock assessment and risk analysis for the school shark (Galeorhinus galeus) off southern Australia. Marine and Freshwater Research, 49:719–731. Quinn, T.J.Ii. & Deriso, R.B. 1999. Quantitative Fish Dynamics. Oxford University Press, New York. Oxford, 542 pp. Rago, P.J., Sosebee, K.A., Brodziak, J.K.T., Murawski, S.A. & Anderson, E.D. 1998. Implications of recent increases in catches on the dynamics of Northwest Atlantic spiny dogfish (Squalus acanthias). Fisheries Research, 39: 165–181. Schaefer, M.B. 1954. Some aspects of the dynamics of populations important to the management of commercial marine fisheries. Bull. Inter-Am. Trop. Tuna Comm., 1:27–56. Schnute, J. 1985. A general theory for analysis of catch and effort data. Can. J. Fish. Aquat. Sci., 42: 419–429. Silva, H.M. 1987. An assessment of the Azorean stock of kitefin shark, Dalatias licha (Bonn, 1788). ICES Demersal Fish Committee G:66: 10 pp. Smith, S.E. & Abramson, N.J. 1990. Leopard shark Triakis semifasciata distribution, mortality rate, yield, and stock replenishment estimates based on a tagging study in San Francisco Bay. Fish. Bull. , 88: 371–381. Sparre, P. & Venema, S. 1992. Introduction to tropical fish stock assessment. (Second Edition). FAO Fisheries Technical Paper 306. Uhler, R. 1979. Least squares regression estimates of the Schaefer production model: some Monte Carlo simulation results. Can. J. Fish. Aquat. Sci., 37: 1248–1294. Walker, T.I. 1992. A fishery simulation model for sharks applied to the gummy shark, Mustelus antarcticus, from southern Australian waters. Australian Journal of Marine and Freshwater Research, 43: Walker, T.I. 1999. Southern Australian shark fishery management. Case studies of the management of elasmobranch fisheries. In R. Shotton (ed.). Case studies of the management of elasmobranch fisheries, pp. 480–514. FAO Fisheries Technical Paper No. 378/2. Rome. Waring, G.T. 1984. Age, growth, and mortality of the little skate off the northeast coast of the United States. Transactions of the American Fisheries Society, 113: 314–321. Wood, C.C., Ketchen, K.S. & Beamish, R.J. 1979. Population dynamics of spiny dogfish (Squalus acanthias) in British Columbia waters. J. Fish. Res. Board Can., 36(6):647–656. Xiao, Y. 1995. Stock assessment of the school shark Galeorhinus galeus (Linneaus) off Southern Australia by Schaefer production model. Prepared for Southern Shark Fishery Assessment Workshop 27 February–3 March 1995. CSIRO Division of Fisheries, Hobart, 58 pp.
{"url":"http://www.fao.org/docrep/009/a0212e/A0212E14.htm","timestamp":"2014-04-16T20:09:08Z","content_type":null,"content_length":"110052","record_id":"<urn:uuid:8c659416-b8ed-4149-a190-a75eeb3c757c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Marcus Hook Statistics Tutor Find a Marcus Hook Statistics Tutor ...My credentials include over 10 years tutoring experience and over 4 years professional teaching experience. I received 800/800 on the GRE math section and perfect marks on the Praxis I math section, as well as the Award for Excellence on the Praxis II mathematics content test. I possess clean FBI/criminal history and Child Abuse clearances. 58 Subjects: including statistics, reading, geometry, biology ...Tutoring for one or two weeks with the goal of passing a test or completing a single assignment does not usually prepare the student to continue their study of mathematics. When teaching calculus at The Rochester Institute of Technology, I ran into students who struggled due to a lack of confide... 18 Subjects: including statistics, calculus, geometry, GRE I am currently a volunteer math tutor at the Center for Literacy in Philadelphia. I have a degree in engineering and math. My approach towards tutoring is simple. 23 Subjects: including statistics, physics, geometry, biology ...As an Aide at Harriton High School I assist students daily in Algebra I, II, Geometry I, II, Trig, and Honors for these courses. I have my personal notes to take students from basic concepts to complex problems. The key to Algebra II is a mastery of Algebra I. 35 Subjects: including statistics, English, reading, chemistry I am a graduate of Stony Brook University's secondary education of mathematics program, and New York and Delaware State certified to teach teach mathematics in public schools, for all topics. To teach students of all educational types, one must be open to what the student responds to and what is be... 22 Subjects: including statistics, physics, SAT math, GRE
{"url":"http://www.purplemath.com/Marcus_Hook_statistics_tutors.php","timestamp":"2014-04-19T19:42:54Z","content_type":null,"content_length":"24103","record_id":"<urn:uuid:c22ee573-db1c-4566-bab5-e92d124dfdaf>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Fenchel-Nielsen coordinates Fenchel-Nielsen coordinates The Fenchel-Nielson coordinates are certain coordinates on Teichmüller space. They parameterize Teichmüller space by cutting surfaces into pieces with geodesic boundaries and Euler characteristic $\xi = -1$. These building blocks (of hyperbolic 2d geometry) are precisely • the 3-holed sphere; • the 2-holed cusp; • the 1-holed 2-cusp; • the 3-cusp Each surface of genus $g$ with $n$ marked points will have • $2g - 2 + n$ generalized pants; • $3 g - 3 + n$ closed curves. The boundary lengths $\ell_i \in \mathbb{R}_+$ and twists $t_i \in \mathbb{R}$ of these pieces for $1 \leq i \leq 3g-3+n$ constitute the Fenchel-Nielsen coordinates on Teichmüller space $\Tau$. Also use $\theta_i := t_i/\ell_i \in \mathbb{R}/\mathbb{Z}$ This constitutes is a real analytic atlas of Teichmüller space. On $M$ this reduces to coordinates $t_i \in \mathbb{R}/{\ell_i \mathbb{Z}}$, and these constitute a real analytic atlas of moduli • Kathy Paur, The Fenchel-Nielson coordinates of Teichmüller spaces (pdf) • Werner Fenchel, Jakob Nielsen, reprinted in Discontinuous groups of isometries in the hyperbolic plane, edited by Asmus L. Schmidt; De Gruyter Studies in Math. 29, 2003. Revised on September 9, 2010 19:28:29 by Zoran Škoda
{"url":"http://ncatlab.org/nlab/show/Fenchel-Nielsen+coordinates","timestamp":"2014-04-16T16:31:50Z","content_type":null,"content_length":"16508","record_id":"<urn:uuid:9ff4c5b7-2697-4889-9fb0-7a58427a98d2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Floating 2 Date: 06/12/2001 at 10:34:59 From: Britney Turner Subject: Algebra The floating 2: Start with the number 2 on the far left side, then float the number 2 to the far right side. The new number must be three times more than the old number. For example: 268 682. The only problem is that they're not three times apart. Date: 06/12/2001 at 12:55:22 From: Doctor Greenie Subject: Re: Algebra Hi, Britney - I'm hoping that you haven't written your question the way you really meant to. If the new number is "three times more than" the old number, then it is FOUR times AS LARGE AS the old number. For that problem, your question has no solution. However, if you meant to say that the new number is three times AS LARGE AS the old number, then there are solutions. So let's look at this problem: You want to find a string of digits "dd...dd" (with an unknown number of digits) such that the number "dd...dd"2 is three times as large as the number 2"dd...dd". (This is awkward notation. Remember that "dd...dd" is a string of digits; there is no multiplication implied by writing the "d"s next to each other.) We can get a series of equations to try to solve by trying different numbers of digits in the string "dd...dd". If we try a string of a single digit, then we have "d"2 = 3 * 2"d" "d" is now a 1-digit number; call it x. Then algebraically we have 10x+2 = 3(20+x) 10x+2 = 60+3x 7x = 58 The solution to this equation is not an integer. So next let's try a string of 2 digits. We now have "dd"2 = 3 * 2"dd" "dd" is now a 2-digit number; call it x. Then algebraically we have 10x+2 = 3(200+x) 10x+2 = 600+3x 7x = 598 The solution to this equation also is not an integer. Let's go one step further and try a string of 3 digits. We now have "ddd"2 = 3 * 2"ddd" "ddd" is now a 3-digit number; call it x. Then algebraically we have 10x+2 = 3(2000+x) 10x+2 = 6000+3x 7x = 5998 The solution to this equation also is not an integer. If we pause now and compare the calculations we have made for these first three cases, we can see that what we are going to need is an equation of the form 7x = 599...998 which has an integer solution. To find equations of this form, we can be a bit clever. We are currently looking for numbers of the form 599...998 which, when divided by 7, leave no remainder. Let's look instead for numbers of the form 600...000 which, when divided by 7, leave a remainder of 2. The repeating decimal equivalent of the fraction 6/7 is 6/7 = 0.857142857142857142... From this we can see that 60/7 = 8.57142857142857142... = 8 remainder 4 600/7 = 85.7142857142857142... = 85 remainder 6 6000/7 = 857.142857142857142... = 857 remainder 1 60000/7 = 8571.42857142857142... = 8571 remainder 3 600000/7 = 85714.2857142857142... = 85714 remainder 2 We have now that 600000/7 leaves remainder 2, so 599998/7 is an integer (=85714). So our first solution to the problem is x = 285714; 3x = 857142 And if we think about continuing looking for numbers of the form 600...000 which leave remainder 2 when divided by 7, we can see that we will have an infinite number of similar solutions to the problem: x = 285714285714; 3x = 857142857142 x = 285714285714285714; 3x = 857142857142857142 and so on.... And not only do we have an infinite number of solutions to the problem; this infinite set of solutions of this form provides the only solutions to the problem. Let's go back and see why the question as you asked it has no solutions (that is, there are no solutions if dd...dd2 is four times as large as 2dd...dd instead of three times as large). In the above analysis of the problem where "dd...dd2" is three times as large as 2dd...dd, we encountered the following series of equations to try to 7x = 58 where x is a 1-digit integer 7x = 598 where x is a 2-digit integer 7x = 5998 where x is a 3-digit integer If we do the same type of analysis for the problem where dd...dd2 is supposed to be four times as large as 2dd...dd, then the corresponding series of equations we get is the following: 6x = 78 where x is a 1-digit integer 6x = 798 where x is a 2-digit integer 6x = 7998 where x is a 3-digit integer In every case, these equations have integer solutions (every number of the form 799...998 is divisible by both 2 and 3, and therefore by 6) - but they are always the wrong number of digits. Thanks for the interesting problem. It turned out to be a lot more interesting than it sounded when I first read it! - Doctor Greenie, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/53264.html","timestamp":"2014-04-17T02:04:06Z","content_type":null,"content_length":"9873","record_id":"<urn:uuid:6977aa1a-d9dd-4522-bf31-b032547f8106>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
GMAT Gurus Speak Out: Permutation and Combination Basics The Fundamental Counting Principle states that if an event has x possible outcomes and a different independent event has y possible outcomes, then there are xy possible ways the two events could occur together. For example, how many three-digit integers have either 6 or 9 in the tens digit and 1 in the units digit? To solve, we need to find the possible outcomes for each digit (hundreds, tens, and units) and multiply them. Each digit has 10 possible values (0 through 9). The hundreds digit can be any of these except 0 (since a three-digit number cannot begin with 0). The tens digit has only 2 options (6 or 9). The units digit has only 1 possibility (1). Therefore, the total number of possibilities is 9 x 2 x 1 = 18. Permutations are sequences. In a sequence, order is important. How many different ways can four people sit on a bench? For the first spot on the bench, we have 4 to choose from. For the next spot we’ll have 3, for the third spot we’ll have 2, and the last remaining person will take the final spot. Therefore, there are 4 x 3 x 2 x 1 = 24 ways. Harder permutations problems will require you to use this formula: n = the number of options r = the number chosen from those options For example, how many possible options are there for the gold, silver, and bronze medals out of 12 athletes? Here n = 12 and r = 3. Since the order in which the athletes finish matters, we know to use the Permutation formula: n! / (n – r)! = 12! / (12 – 3)! = 12! / 9! = 12 x 11 x 10 = 1,320 options Combinations are groups. Order doesn’t matter. The Combination formula is only slightly different from the Permutation formula: Let’s say Dominic took 10 photos. He wants to put 7 of them on Facebook. How many groups of photos are possible? n! / r! (n – r)! = 10! / 7! (10 – 7)! = 10! / 7! 3! = 10 x 9 x 8 / 3 x 2 x 1 = 720 / 6 = 120 different groups Remember to ask yourself whether order matters in the problem, and don’t forget the Fundamental Counting Principle! The GMAT may also combine one or more of these concepts in a longer Word Problem to make the question more challenging, but if you can remember these basics, you’ll be good to go! Plan on taking the GMAT soon? We have GMAT prep courses starting all the time. And, be sure to find us on Facebook and Google+, and follow us on Twitter! Vivian Kerr is a regular contributor to the Veritas Prep blog, providing tips and tricks to help students better prepare for the GMAT and the SAT.
{"url":"http://www.veritasprep.com/blog/2013/01/gmat-gurus-speak-out-permutation-and-combination-basics/","timestamp":"2014-04-19T06:52:36Z","content_type":null,"content_length":"47557","record_id":"<urn:uuid:d30e2d82-9046-47ef-87ad-f7efe6653c2b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
- Boston.com Re: gasoline to the posted at 11/1/2013 7:19 PM EDT In response to pcmIV's comment: In response to pezz4pats' comment: Like I said, Please post those stats. Also post where they rank in return of investment. ROI I have already shown you that their draft index does not support your findings (agenda). In fact in the past 5 years it has dropped. Probably due to 07-09 drafts, but dropped none the less. When you can come up with something that disputes the posted player index. let me know. Also do you consider the 11-13 drafts conclusive? Would you still give the 2010 draft an A-. today? I wouldn't even consider one of the remaining players worthy of that A- grade, never mind the whole draft where 3 of the 6 high round picks were busts..... sorry I told you where you can look up the stats genius. I exported the data from http://www.pro-football-reference.com/ and did the computations myself. The CareerAV of Patriots players drafted from 2010-2012 is tops in the league at 188. The CareerAV of Patriots players drafted from 2008-2012 is 331 which is 4th in the league. The Patriots players drafted from 2008-2012 have been named to 4 pro bowls and 2 first team all pro teams both of which are tops in the league. I am not going to post the AV and pro bowl and all pro teams for the 1,275 players that have been drafted from 2008 to 2012. Why don't you go look at the data and tell me where my mistake is. I assure you it isn't there. Your obsession with this ROI statistic is laughable. I looked at the article that explained how this "metric" was calculated and it is misguided to say the least. What it does it look at the careerAV /seasons played for each player drafted in a particular spot and averages them to compute an "expected value" for each draft spot. It then compares a pick to this "expected value" to compute ROI. This is silly because the draft is a high variance event. This means that the average (which is the basis for the ROI calculation) provides very little statistical value. Consider one of the most extreme examples. Since Tom Brady was drafted 199th overall in 2000 here are the players that have been drafted in that spot, their CareerAV and how many seasons they played. Note that the 2013 season does not count as AV does not get calculated on partial seasons. The expected value for this spot would be 1.65 which is the average AV/season of the 13 players drafted. This value doesn't have any real meaning though. There is exactly one player that is even remotely in the ballpark to this value and he was still 15% lower. After that each player is at least 40% larger or smaller. This is reflected in the fact that the standard deviation is 3.11 which is practically twice the average. As I said before, the draft is a high variance event and the average of a high variance population is not valuable. This is statistics 101 which apparently both you and the author from NinersNation (the source of this statistic and the ROI chart you posted earlier) flunked. The larger question is why would you be reading a 49ers fan blog? Do you really have nothing better to do than google for "articles" that purport to show that BB sucks at drafting (and btw that article only talks about the 2006 draft which I said was bad so dunno why you think it generalizes to other years). As for your other chart it doesn't show what you think it does. What it shows is in the last 10 years the Patriots have drafted the 5th most players that are still active in the NFL despite having the highest winning percentage over that period meaning they have a highly competitive roster and lower draft picks before any trade downs. That must mean they drafted some pretty good talent which is reflected in the most pro bowlers over that period. Your obsession with "efficiency" completely misses the point of the trade down strategy. The whole point is that you think the loss in efficiency will be more than made up for by the additional picks. The fact that in absolute terms the Patriots have gotten more out of the draft than most teams in the NFL in terms of active players and have gotten the most pro bowlers speaks to the effectiveness of that strategy. I would also point out that I find it amusing that this "analysis" treats all draft picks the same when calculating "efficiency" whereas the ROI "article" you referenced specifically argues that each draft spot is different (not each round, but each individual pick). It appears consistency isn't really important to you when pushing that agenda of yours huh Pezzy? So it looks like that agenda of yours has once again gone down in flames just like that 9 win prediction. I guess you're just going to have to stew through another double digit win season. I'll enjoy every moment of course. Keep throwing out those predictions though buddy. Even blind squirrels find an acorn from time to time. LMAO @ U. We'll think what you want but the ROI value is significant for all ayers and especially when applied to top picks as their contribution and time is paramount. The top picks are not meant to last a couple of years with little or no contribution. That's why it is measured. The chart you supplied amounts to the top , lifetime but each player is given a value. That is why in the previous chart it rated the Pats 2006 as an F with each second and first rounder with a negative value and why it rated the top team with each as a positive value. , 6 years later. So, no! The past few drafts would not be relevant or computable unless a pick were already out with no contribution and no time Based on that and AV stats. Which are dynamic, the last few drafts are incomplete and no value can be assigned. Like I said, look at the 2010 draft. Would you still give it an A grade, 3 years later.? And another thing, if you feel things like longevity and contribution of players is irrelevant. As apposed to replacing the same player over and over two years into his contract because he sucks, then there really is no more need to discuss anything with you. You might as well live on another And the last thing before I put you to bed. The way I read the other chart is 100% accurate. You don't understand that if 2 teams have 45 players in the league but one had 23 more picks than the other, then that means that the team with 23 more picks to achieve the same results, also had 23 more misses. Really? sorry but that points directly to inefficiency and not the way you see it.
{"url":"http://boston.com/community/forums/sports/patriots/on-the-front-burner/gasoline-to-the-best-gm-debate/100/6870460?page=7","timestamp":"2014-04-20T11:10:12Z","content_type":null,"content_length":"73295","record_id":"<urn:uuid:96c9d678-ce2a-4fc8-a3f5-db4071491f81>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Semilocal Convergence Analysis for Inexact Newton Method under Weak Condition Abstract and Applied Analysis Volume 2012 (2012), Article ID 982925, 13 pages Research Article Semilocal Convergence Analysis for Inexact Newton Method under Weak Condition Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China Received 29 May 2012; Accepted 5 August 2012 Academic Editor: Jen-Chih Yao Copyright © 2012 Xiubin Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Under the hypothesis that the first derivative satisfies some kind of weak Lipschitz conditions, a new semilocal convergence theorem for inexact Newton method is presented. Unified convergence criteria ensuring the convergence of inexact Newton method are also established. Applications to some special cases such as the Kantorovich type conditions and -Conditions are provided and some well-known convergence theorems for Newton's method are obtained as corollaries. 1. Introduction Let be a continuously Fréchet differentiable nonlinear operator from a convex subset of Banach space to Banach space . Finding solutions of a nonlinear operator equation: in Banach space is a basic and important problem in applied and computational mathematics. A classical method for finding an approximation of a solution of (1.1) is Newton's method which is defined by There is a huge literature on local as well as semilocal convergence for Newton's method under various assumptions (see [1–9]). Besides, there are a lot of works on the weakness of the hypotheses made on the underlying operators, see for example [2, 3, 5–9] and references therein. In particular, Wang in [7, 8] introduced the notions of Lipschitz conditions with average, under which Kantorovich like convergence criteria and Smale's point estimate theory can be put together to be investigated. However, Newton's method has two disadvantages. One is to evaluate involved, the other is to solve the exact solution of Newton equations: In many applications, for example, those in Euclidean spaces, computing the exact solutions using a direct method such as Gaussian elimination can be expensive if the number of unknowns is large and may not be justified when is far from the searched solution. While using linear iterative methods to approximate the solutions of (1.3) instead of solving it exactly can reduce some of the costs of Newton's method. One of the methods is inexact Newton method which can be found in [10] and takes the following form: where is a sequence in . As is well known, the convergence behavior of the inexact Newton method depends on the residual controls of under the hypothesis that satisfies different conditions. Some relative results can be found in [10–24], for example. Under the Lipschitz continuity assumption on , different residual controls were used. For example, the residual controls were adopted in [10, 12]; in [15] the affine invariant conditions were considered; while in [21] Shen has analyzed the semilocal convergence behavior in some manner such that the relative residuals satisfy Assuming that the residuals satisfy , where is a sequence of invertible operators from to , and that satisfies the Hölder condition around , Li and Shen established the local and semilocal convergence in [16, 20], respectively. Besides, the -condition was also introduced into inexact Newton method in [22] by considering residual controls (1.5) with , that is, Smale's -theory for the inexact Newton method was established there. In the present paper, by considering the residual controls (1.6), we will study the convergence of inexact Newton method under the assumption that has a continuous derivative in a closed ball , exists and satisfies the weak Lipschitz condition: where is a positive number, , and is a positive integrable nondecreasing function on . We also establish the unified convergence criteria, which include Kantorovich type and Smale type convergence criteria as special cases. In particular, in the special case when , (1.4) reduces to Newton's method and our result extends the corresponding one in [7]. The paper is organized as follows. Section 2 gives some lemmas which are used in the proof of our main theorem. In Section 3, the semilocal convergence of inexact Newton method is studied under the weak Lipschitz condition (1.7). Its applications to some special cases are provided in Section 4. 2. Preliminaries Let and be Banach spaces. Throughout this paper, are two positive numbers, is a positive integrable nondecreasing function on any involved intervals, and is an open ball in with center and radius . Let , and . Define Obviously, Set Write . Then where is such that . Furthermore, it follows that Let The following two lemmas describe some properties about the majorizing function and the convergence property of . Lemma 2.1. Suppose that and is defined by (2.1). Then the function is strictly decreasing and has exact one zero on satisfying . Proof. By (2.4) and (2.5), we know is strictly increasing on and has the values and . This implies that is strictly decreasing on . Note that and by the definition of . Thus, has exact one solution on . Since we have . The proof is complete. Lemma 2.2. Let be the positive solution of equation on . Suppose that and the sequence is defined by (2.9). Then Consequently, is strictly increasing and converges to . Proof. We prove the lemma by mathematical induction. Note that . For , assume that Since is strictly decreasing on . Hence, Moreover, by of Lemma 2.1. It follows that Define a function on by Note that , unless , and , for which we adopt the convention that and . Hence, the function is well defined and continuous on . Moreover, by (2.2) and (2.3), we have Hence, is monotonically increasing on . This together with (2.9) and (2.14) implies that Therefore, by mathematical induction, (2.11) holds. Consequently, is increasing, bounded, and converges to a point , which satisfies . Hence, . The proof is complete. To prove our main result, we need two more lemmas. The first can be found in [23] and the second in [7]. Lemma 2.3. Suppose that has a continuous derivative satisfying the weak Lipschitz condition (1.7). Let satisfy . Then is invertible in the ball and Lemma 2.4. Let and define Then, is increasing on . 3. Semilocal Convergence Analysis Recall that is a nonlinear operator with continuous Fréchet derivative. Let and be such that exists. In the present paper, we adopt the residuals satisfying (1.6) and assume that . Thus, if and is well defined, then Let Write Recall that is determined by (2.5), , and is generated by (2.9) with and given in (3.3). Lemma 3.1. Let be a sequence generated by (1.4). Suppose that F satisfies the weak Lipschitz condition (1.7) on and that . For an integer , if hold for each , then the following assertions hold: Proof. Assume that (3.4) holds for each . Write . Applying (1.4), we have Hence, To estimate , by (3.4), we notice that In particular, Thus, by the weak Lipschitz condition (1.7), we obtain Below we estimate . We firstly notice that (3.1) and (3.4) yield Since we have Combining this with (3.1) implies that Consequently, by (3.7), (3.10), (3.14) and Lemma 2.4, we get Noting that and , we have Moreover, since is decreasing on [0, ], one has And therefore That is, (3.5) holds, and the proof is complete. We now give the main result. Theorem 3.2. Suppose that and , and that satisfies the weak Lipschitz condition (1.7) on . Then the sequence generated by the inexact Newton method (1.4) converges to a solution of (1.1). Moreover, Proof. We firstly use mathematical induction to prove that (3.4) holds for each . For , by the above condition and (3.2), the first inequality in (3.4) holds trivially. While the second one can be proved as follows: Assume that (3.4) holds for all . Then, Lemma 3.1 is applicable to concluding that Hence, by (3.5), together with the weak Lipschitz condition (1.7) and Lemma 2.3, one has Therefore, (3.4) holds for and so for each . Consequently, for and , This together with Lemma 2.2 means that is a Cauchy sequence and so converges to some . While taking in (3.23), we obtain The proof is complete. In the special case when , inexact Newton method (1.4) reduces to Newton's method. Moreover, . Thus, Theorem 3.2 reduces to the related theorem of Newton's method. Corollary 3.3. Assume that and , where and satisfying . Suppose that satisfies the weak Lipschitz condition (1.7) on . Then the sequence generated by Newton's method (1.2) converges to a solution of (1.1). Moreover, where and are defined in Lemma 2.2 for . In more particular, suppose that and . Then Corollary 3.3 reduces to the following result given in (Theorem 3.1, [7]). Corollary 3.4. Assume that , where and . Suppose that satisfies weak Lipschitz condition (1.7) on . Then the sequence generated by Newton's method (1.2) converges to a solution of (1.1). Moreover, where and are defined in Lemma 2.2 for and . 4. Application This section is divided into two subsections: we consider the applications of our main results specializing, respectively, in Kantorovich type condition and in -condition. In particular, our results reduce some of the corresponding results of Newton's method. 4.1. Kantorovich-Type Condition Throughout this subsection, let be a positive constant. By (2.1), we have By (2.5) and (2.6), we get The convergence criterion becomes Moreover, suppose that and . Then criterion (4.3) reduces to the well-known Kantorovich type criterion of Newton's method in [7]. Corollary 4.1. Let be a positive constant, and , where and . Assume that satisfies the condition: where . Then the sequence generated by Newton's method (1.2) converges to a solution of (1.1), and 4.2. -Condition Throughout this subsection, we assume that and has continuous second derivative and satisfies Let Then, by (2.1), we have By (2.5) and (2.6), and satisfy The convergence criterion becomes In the more special case, when and , we obtain the criterion the same with Newton's method in [7]. Corollary 4.2. Let be a positive constant, and , where and . Assume that F satisfies the condition: where . Then the sequence generated by Newton's method (1.2) converges to a solution of (1.1), and Supported in part by the National Natural Science Foundation of China (Grants no. 61170109 and no. 10971194) and Zhejiang Innovation Project (Grant no. T200905). 1. I. K. Argyros, “On the Newton-Kantorovich hypothesis for solving equations,” Journal of Computational and Applied Mathematics, vol. 169, no. 2, pp. 315–332, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 2. J. A. Ezquerro and M. A. Hernández, “Generalized differentiability conditions for Newton's method,” IMA Journal of Numerical Analysis, vol. 22, no. 2, pp. 187–205, 2002. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 3. J. A. Ezquerro and M. A. Hernández, “On an application of Newton's method to nonlinear operators with $w$-conditioned second derivative,” BIT. Numerical Mathematics, vol. 42, no. 3, pp. 519–530, 4. J. M. Gutiérrez, “A new semilocal convergence theorem for Newton's method,” Journal of Computational and Applied Mathematics, vol. 79, no. 1, pp. 131–145, 1997. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 5. J. M. Gutiérrez and M. A. Hernández, “Newton's method under weak Kantorovich conditions,” IMA Journal of Numerical Analysis, vol. 20, no. 4, pp. 521–532, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 6. M. A. Hernández, “The Newton method for operators with Hölder continuous first derivative,” Journal of Optimization Theory and Applications, vol. 109, no. 3, pp. 631–648, 2001. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 7. X. H. Wang, “Convergence of Newton's method and inverse function theorem in Banach space,” Mathematics of Computation, vol. 68, no. 225, pp. 169–186, 1999. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 8. X. H. Wang, “Convergence of Newton's method and uniqueness of the solution of equations in Banach space,” IMA Journal of Numerical Analysis, vol. 20, no. 1, pp. 123–134, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 9. X. H. Wang and C. Li, “Convergence of Newton's method and uniqueness of the solution of equations in Banach spaces. II,” Acta Mathematica Sinica, English Series, vol. 19, no. 2, pp. 405–412, 2003. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 10. R. S. Dembo, S. C. Eisenstat, and T. Steihaug, “Inexact Newton methods,” SIAM Journal on Numerical Analysis, vol. 19, no. 2, pp. 400–408, 1982. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 11. I. K. Argyros, “A new convergence theorem for the inexact Newton methods based on assumptions involving the second Fréchet derivative,” Computers & Mathematics with Applications, vol. 37, no. 7, pp. 109–115, 1999. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 12. Z. Z. Bai and P. L. Tong, “Affine invariant convergence of the inexact Newton method and Broyden's method,” Journal of University of Electronic Science and Technology of China, vol. 23, no. 5, pp. 535–540, 1994. 13. J. Chen and W. Li, “Convergence behaviour of inexact Newton methods under weak Lipschitz condition,” Journal of Computational and Applied Mathematics, vol. 191, no. 1, pp. 143–164, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 14. M. G. Gasparo and G. Morini, “Inexact methods: forcing terms and conditioning,” Journal of Optimization Theory and Applications, vol. 107, no. 3, pp. 573–589, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 15. X. P. Guo, “On semilocal convergence of inexact Newton methods,” Journal of Computational Mathematics, vol. 25, no. 2, pp. 231–242, 2007. View at Zentralblatt MATH 16. C. Li and W. P. Shen, “Local convergence of inexact methods under the Hölder condition,” Journal of Computational and Applied Mathematics, vol. 222, no. 2, pp. 544–560, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 17. I. Moret, “A Kantorovich-type theorem for inexact Newton methods,” Numerical Functional Analysis and Optimization, vol. 10, no. 3-4, pp. 351–365, 1989. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 18. J. M. Martínez and L. Q. Qi, “Inexact Newton methods for solving nonsmooth equations,” Journal of Computational and Applied Mathematics, vol. 60, no. 1-2, pp. 127–145, 1995. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 19. B. Morini, “Convergence behaviour of inexact Newton methods,” Mathematics of Computation, vol. 68, no. 228, pp. 1605–1613, 1999. View at Publisher · View at Google Scholar · View at Zentralblatt 20. W. P. Shen and C. Li, “Convergence criterion of inexact methods for operators with Hölder continuous derivatives,” Taiwanese Journal of Mathematics, vol. 12, no. 7, pp. 1865–1882, 2008. View at Zentralblatt MATH 21. W. P. Shen and C. Li, “Kantorovich-type convergence criterion for inexact Newton methods,” Applied Numerical Mathematics, vol. 59, no. 7, pp. 1599–1611, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 22. W. P. Shen and C. Li, “Smale's $\alpha$-theory for inexact Newton methods under the $\gamma$-condition,” Journal of Mathematical Analysis and Applications, vol. 369, no. 1, pp. 29–42, 2010. View at Publisher · View at Google Scholar 23. M. Wu, “A new semi-local convergence theorem for the inexact Newton methods,” Applied Mathematics and Computation, vol. 200, no. 1, pp. 80–86, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 24. T. J. Ypma, “Local convergence of inexact Newton methods,” SIAM Journal on Numerical Analysis, vol. 21, no. 3, pp. 583–590, 1984. View at Publisher · View at Google Scholar · View at Zentralblatt
{"url":"http://www.hindawi.com/journals/aaa/2012/982925/","timestamp":"2014-04-17T21:45:08Z","content_type":null,"content_length":"567258","record_id":"<urn:uuid:eb877354-0ac7-4d53-ac81-eaa468ea695a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What does a 124 rock weigh if it is accelerating upward at 19.0 ? • one year ago • one year ago Best Response You've already chosen the best response. do you have units for original weight of rock and acceleration? Best Response You've already chosen the best response. sorry yes, 124 N and 19.0 m/s^2 Best Response You've already chosen the best response. a a newton is defined as: a kilogram meter per second squared \[\text N=\frac{\text{kg}\cdot\text m}{\text s^2}\] Best Response You've already chosen the best response. I disagree. Think of riding in an elevator accelerating upwards. Your legs have to take up the extra push from the elevator floor as the elevator travels (i.e., your weight should be greater not less). If a rock weighs 124N while at rest on the surface of the Earth, it's mass is 124/9.8=12.7kg. Now, it's weight as measured while accelerating upward will be \[W=12.7*28.8=364.4N\] Best Response You've already chosen the best response. But, if I misunderstood your question and you are asking instead "what is the weight of the rock while at rest on the Earth's surface if it weights 124N while accelerating 19m/s^2 upward, away from the surface of the Earth", then we have for the mass of the rock, \[124N/28.8=4.3 kg\]This gives a weight of:\[W=4.3*9.8=42.2N\]while at rest at the surface of the Earth. Best Response You've already chosen the best response. basically, 124N means mass is 124/9.8 kg, so upward force (net on the body) is 124/9.8 *19, there is no meaning of it's weight changing when it's accelerated... (If you take the frame of reference as the rock, then ofcourse the rock will feel heavier)... In a lift, the net force downwards increases(wrt to lift) hence u feel heavier.... Best Response You've already chosen the best response. I take an operational definition of weight in which weight is defined to be the value read off a spring (or suitable) balance. In an elevator accelerating upwards, the spring will have a longer extension than when the elevator is at rest (or moving uniformly), hence the weight in the accelerated case is said to be greater than in the non-accelerated case. Best Response You've already chosen the best response. the \(effective\) weight of the rock will be 364.4N Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fd5b3dfe4b04bec7f16d7fd","timestamp":"2014-04-17T16:00:09Z","content_type":null,"content_length":"45495","record_id":"<urn:uuid:f15d791b-0870-499e-9d0d-4544eab923e7>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2005 [00864] [Date Index] [Thread Index] [Author Index] Re: "large" matrices, Eigenvalues, determinants, characteristic polynomials • To: mathgroup at smc.vnet.net • Subject: [mg56492] Re: [mg56458] "large" matrices, Eigenvalues, determinants, characteristic polynomials • From: Daniel Lichtblau <danl at wolfram.com> • Date: Tue, 26 Apr 2005 21:52:57 -0400 (EDT) • References: <200504260533.BAA14390@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com Grischa Stegemann wrote: > Dear group > What is the difference between calculating Eigenvalues of a matrix M by > a) Eigenvalues[M] > b) Solve[CharacteristicPolynomial[M,x]==0,x] > c) Solve[Det[M - x IdentityMatrix[n]]==0,x] > ? > Have a look at the following setting: > n = 12; > M = Table[Table[Random[Real, {0, 100}], {n}], {n}] > In this case all the 3 methods a, b and c give the same set of Eigenvalues > (neglecting small numerical differences and maybe using Chop). > As soon as I increase n to at least 13 the result of method c gives a different > set of solutions. In particular method c gives 18 solutions instead of the > expected 13, where the 5 new ones always lay close together with small > imaginary parts. > As far as I can see the problem occurs already when calculating the > characteristic polynomial: > h[x_] = Det[M - x IdentityMatrix[n]] > looks good up to n=12, for n>=13 this function looks very strange, including a > fraction and orders of x larger than n. > This is particularly annoying since the actual problem I am dealing with > involves the calculation of a characteristic function like this: > h2[lambda_]=Det[M[lambda] - lambda IdentityMatrix[31]] > where M[lambda] is a sparse 31x31 matrix having only the last row and the last > column as well as the main diagonal and the first secondary diagonals unequal > to zero. The dependence of lambda is an Exp[-lambda] in M[[31,31]]. > Of course I want to solve the transcendental equation > FindRoot[h2[lambda]==0,{lambda,...}] afterwards. But since the calculation of > the determinant already "fails" (giving really high order terms in lambda) I > have no chance to get any sane results out of FindRoot. > In this case it also doesn't make a difference whether I use > h2[lambda_]=Det[M[lambda] - lambda IdentityMatrix[31]] > or > h2[lambda_]=CharacteristicPolynomial[M[lambda],lambda] > which is only available in Mathematica 5 whereas I am using 4.0 or 4.1 very > often. > Any suggestions, clarifications or hints are really appreciated. > Thank you, > Grischa For a matrix of approximate numbers, Eigenvalues will use Lapack functions. Solve will call on a Jenkins-Traub based rootfinder to obtain roots of the input given by CharacteristicPolynomial. Depending on the matrix, even finding the characteristic polynomial can be numerically unstable. Moreover, the step of finding its roots can likewise be problematic even if the polynomial, expressed in Horner form, is stable as a "black box" for numeric evaluation. Explicit computation of Det[...] will depend very much on internals of how it is computed. I think you are seeing the effect of it using a row reduction based approach when dimension is sufficiently high and input contains symbolic data. As to getting too many solutions, the explanation follows readily from what you have observed. Det is producing a rational function that, were it done in exact arithmetic, would have denominator exactly cancelled by numerator factors. The numeric root finder is passed the numerator which has degree too high, and any later checking in Solve will reveal that the denominator does not vanish at the roots (it merely "almost" vanishes). All this simply shows that symbolic methods will not be up to the task if input is not exact. Among your choices are: (1) Use exact arithmetic. (2) Use, say, PolynomialReduce or PolynomialQuotient/Remainder to figure out a "close" numerator of correct degree, that is, such that the fractional part may be discarded. (3) Use more refined methods based on optimization e.g. least squares, in order to get a better approximation of the determinant. (4) Compute the determinant by interpolation. Specifically, find e.g. Map[Det[mat/.lambda->#]&, Range[32]] and use these in I do not recommend (3) because it will involve nontrivial programming and some familiarity with the relevant literature. Moreover it is not likely to be worth the bother as an improvement to (2) since already in computing Det you will have introduced numeric error. I would say either (2) or (4) are your best bets unless exact arithmetic is an option and does not turn out to be too slow. Daniel Lichtblau Wolfram Research • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Apr/msg00864.html","timestamp":"2014-04-19T19:40:36Z","content_type":null,"content_length":"39211","record_id":"<urn:uuid:8a78c9de-769b-495a-859f-489feedd422d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Interaction Categories I: Synchronous processes - In Deductive Program Design: Proceedings of the 1994 Marktoberdorf Summer School, NATO ASI Series F , 1995 "... We propose Interaction Categories as a new paradigm for the semantics of functional and concurrent computation. Interaction categories have specifications as objects, processes as morphisms, and interaction as composition. We introduce two key examples of interaction categories for concurrent compu ..." Cited by 123 (19 self) Add to MetaCart We propose Interaction Categories as a new paradigm for the semantics of functional and concurrent computation. Interaction categories have specifications as objects, processes as morphisms, and interaction as composition. We introduce two key examples of interaction categories for concurrent computation and indicate how a general axiomatisation can be developed. The upshot of our approach is that traditional process calculus is reconstituted in functorial form, and integrated with type theory and functional programming. - In Proceedings of IEEE Symposium on Logic in Computer Science , 1995 "... We propose a typed calculus of synchronous processes based on the structure of interaction categories. Our aim has been to develop a calculus for concurrency that is canonical in the sense that the typed -calculus is canonical for functional computation. We show strong connections between syntax, lo ..." Cited by 56 (4 self) Add to MetaCart We propose a typed calculus of synchronous processes based on the structure of interaction categories. Our aim has been to develop a calculus for concurrency that is canonical in the sense that the typed -calculus is canonical for functional computation. We show strong connections between syntax, logic and semantics, analogous to the familiar correspondence between the typed -calculus, intuitionistic logic and cartesian closed categories. 1 Introduction T ypes are fundamental to the study of functional computation, for both theoretical and practical reasons. On the foundational side there are elegant connections between the typed -calculus, intuitionistic logic and cartesian closed categories, leading to the Propositions as Types paradigm [14] and the development of categorical logic [9,17]. From a practical point of view, compile-time type reconstruction is a boon to the programmer in languages such as Standard ML and Haskell. Turning to concurrency, the situation is much less sati... - Logics for Concurrency: Structure vs. Automata---Proceedings of the VIIIth Banff Higher Order Workshop, volume 1043 of Lecture Notes in Computer Science , 1995 "... Many different notions of "property of interest" and methods of verifying such properties arise naturally in programming. A general framework of "Specification Structures" is presented for combining different notions and methods in a coherent fashion. This is then applied to concurrency in the se ..." Cited by 21 (5 self) Add to MetaCart Many different notions of "property of interest" and methods of verifying such properties arise naturally in programming. A general framework of "Specification Structures" is presented for combining different notions and methods in a coherent fashion. This is then applied to concurrency in the setting of Interaction Categories.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1662361","timestamp":"2014-04-18T11:29:27Z","content_type":null,"content_length":"18116","record_id":"<urn:uuid:0546304e-4215-4402-9231-653528b32d24>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Spring 2000 LACC Math Contest - Problem 4 Problem 4. In triangle ABC pictured below, DA, EB, and FC are altitudes (perpendicular to the sides of the triangle with which they intersect). Side AB is 21 units in length, altitude FC is 12 units long, and altitude DA is 12 3/5 units long. Find the length of altitude EB. [Problem submitted by Robert Hart, LACC Associate Professor of Computer Science.]
{"url":"http://lacitycollege.edu/academic/departments/mathdept/samplequestions/2000prob4.html","timestamp":"2014-04-21T08:23:01Z","content_type":null,"content_length":"2575","record_id":"<urn:uuid:e0bd3c83-ec79-43b1-9576-80c40d9589fd>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD OF SETTING GAMMA OF DISPLAY DEVICE Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method of setting gamma of a display device comprises: sensing an optical characteristic from a display module; comparing a color coordinate the sensing of an optical characteristic and a target color coordinate and determining a fluctuation value of R gamma and a fluctuation value of G gamma; determining whether the fluctuation value of R gamma and the fluctuation value of G gamma determined at the determining of a fluctuation value satisfy an allowable error; first correcting of correcting the fluctuation value of R gamma and the fluctuation value of G gamma and lowering or raising according to the fluctuation values of R, G and B gamma; second correcting of lowering or raising according to the fluctuation values of a corrected R, G and B gamma; and applying R gamma, G gamma, and B gamma corrected at the second correcting to the display module. A method of setting gamma of a display device, the method comprising: sensing an optical characteristic from a display module; comparing a color coordinate sensed at the sensing of an optical characteristic and a target color coordinate and determining a fluctuation value of R gamma and a fluctuation value of G gamma of RGB gamma; determining whether the fluctuation value of R gamma and the fluctuation value of G gamma determined at the determining of a fluctuation value satisfy an allowable error; first correcting of correcting, if the fluctuation value of R gamma and the fluctuation value of G gamma do not satisfy an allowable error, the fluctuation value of R gamma and the fluctuation value of G gamma, and lowering, if the fluctuation value of R gamma and the fluctuation value of G gamma are a positive number, a fluctuation value of B gamma, and raising, if the fluctuation value of R gamma and the fluctuation value of G gamma are a negative number, a fluctuation value of B gamma; second correcting of lowering, if one of the fluctuation value of R gamma and the fluctuation value of G gamma arrives at a gamma maximum value at the first correcting, the fluctuation value of B gamma, and raising, if one of the fluctuation value of R gamma and the fluctuation value of G gamma arrives at a gamma minimum value, the fluctuation value of B gamma, and lowering, if the fluctuation value of B gamma arrives at a gamma maximum value, the fluctuation value of R gamma and the fluctuation value of G gamma, and raising, if the fluctuation value of B gamma arrives at a gamma minimum value, the fluctuation value of R gamma and the fluctuation value of G gamma; and applying RGB gamma corrected at the second correcting to the display module. The method of claim 1, wherein at the first correcting, if the fluctuation value of R gamma and the fluctuation value of G gamma are a positive number, the fluctuation value of B gamma is lowered with a method of lowering the fluctuation value of R gamma and the fluctuation value of G gamma; and if the fluctuation value of R gamma and the fluctuation value of G gamma are a negative number, the fluctuation value of B gamma is raised with a method of raising the fluctuation value of R gamma and the fluctuation value of G gamma. The method of claim 1, wherein the first correcting is performed based on an Equation of if (R & G are positive number) then {B , G )| and R =0, G =0} and if (R & G are negative number) then {B , G )| and R =0, G =0}, where the R is a fluctuation value of R gamma, G is a fluctuation value of G gamma, and the |Min(R , G )| is a function that returns an absolute value of a small value of the fluctuation value of R gamma and the fluctuation value of G gamma. The method of claim 1, wherein at the second correcting, if one of the fluctuation value of R gamma and the fluctuation value of G gamma arrives at a gamma maximum value, one of the fluctuation value of R gamma and the fluctuation value of G gamma sustains the gamma maximum value, if one of the fluctuation value of R gamma and the fluctuation value of G gamma arrives at a gamma minimum value, one of the fluctuation value of R gamma and the fluctuation value of G gamma sustains the gamma minimum value, if the fluctuation value of B gamma arrives at a gamma maximum value, the fluctuation value of B gamma sustains the gamma maximum value, and if the fluctuation value of B gamma arrives at a gamma minimum value, the fluctuation value of B gamma sustains the gamma minimum value. The method of claim 1, wherein the second correcting is performed based on an Equation of if (R or G =MAX) then {B decreases, R or G sustains MAX}, if (R or G =MIN) then {B increases, R or G sustains MIN}, if (B =MAX) then {R and G decreases, B sustains MAX}, and if (B =MIN) then {R and G increase, B sustains MIN}, where the R is a fluctuation value of R gamma, G is a fluctuation value of G gamma, and the Min is a gamma minimum value, and the MAX is a gamma maximum value. The method of claim 1, wherein the sensing, the determining of a fluctuation value, the determining, the first correcting, the second correcting, and the applying of RGB gamma are repeated until the fluctuation value of R gamma and the fluctuation value of G gamma satisfy an allowable error. The method of claim 1, wherein the applying of RGB gamma comprises: transferring the corrected fluctuation value of R gamma, fluctuation value of G gamma, and fluctuation value of B gamma from a system to a board; and generating the corrected fluctuation value of R gamma, fluctuation value of G gamma, and fluctuation value of B gamma into a corrected R gamma value, G gamma value, and B gamma value using firmware existing in the board, and storing the corrected R gamma value, G gamma value, and B gamma value in the display module. The method of claim 1, wherein at the determining, if the fluctuation value of R gamma and the fluctuation value of G gamma determined at the determining of a fluctuation value satisfy an allowable error, the first correcting and the second correcting are terminated. The method of claim 1, wherein the display module is one of a module formed with an organic light emitting display panel and a module formed with a liquid crystal display panel. Pursuant to 35 U.S.C. §119(a), this application claims the benefit of earlier filing date and right of priority to Korean Application 10-2010-0135804 filed on Apr. 22, 2011, the content of which is incorporated by reference herein in its entirety. BACKGROUND [0002] 1. Field of the Invention This document relates to a method of setting gamma of a display device. 2. Discussion of the Related Art As information technology develops, the market for a display device, which is a connection medium between a user and information increases and thus use of a display device such as an organic light emitting display (OLED), a liquid crystal display (LCD), and a plasma display panel (PDP) has increased. The display device is used in various industrial fields of a mobile phone or a computer such as a notebook computer as well as a household appliance field such as a television (TV) or a video In order to express desired luminance and color coordinate, a display device generally sets a gamma value (corrects a color) and stores the set gamma value in a memory, and displays a screen with reference to the gamma value stored in the memory whenever driving. Conventionally, in order to set a target color coordinate of a panel and to adjust internal gamma of a data driver, by adjusting only RG gamma of red color, green color, and blue color (RGB) gamma, a color coordinate was set. Because the related art changes only RG gamma, when setting gamma, RG gamma arrives at a gamma minimum value or a gamma maximum value and thus the RG gamma may be no longer In this way, because the related art sets a color coordinate using only two RG gamma, a time required for approaching a target color coordinate is longer than that when using all three RGB gamma. However, in order to use all RGB gamma, conventionally used devices should be changed for compatibility of firmware on a control board as well as an operating program of a person computer (PC), but this is uneasy in view of technical difficulty of algorithm, etc., and as a program becomes complex, much difficulty in solving problems that may occur exists. Therefore, conventionally, a method of setting gamma for expressing luminance and a color coordinate using only two RG gamma due to the above-described problems should have been continuously used. BRIEF SUMMARY [0012] In an aspect, a method of setting gamma of a display device, the method comprises: sensing an optical characteristic from a display module; comparing a color coordinate sensed at the sensing of an optical characteristic and a target color coordinate and determining a fluctuation value of R gamma and a fluctuation value of G gamma; determining whether the fluctuation value of R gamma and the fluctuation value of G gamma determined at the determining of a fluctuation value satisfy an allowable error; first correcting of correcting, if the fluctuation value of R gamma and the fluctuation value of G gamma do not satisfy an allowable error, the fluctuation value of R gamma and the fluctuation value of G gamma, and lowering, if the fluctuation value of R gamma and the fluctuation value of G gamma is a positive number, a fluctuation value of B gamma, and raising, if the fluctuation value of R gamma and the fluctuation value of G gamma is a negative number, a fluctuation value of B gamma; second correcting of lowering, if one of the fluctuation value of R gamma and the fluctuation value of G gamma arrives at a gamma maximum value, through the first correction, the fluctuation value of B gamma, and raising, if one of the fluctuation value of R gamma and the fluctuation value of G gamma arrives at a gamma minimum value, the fluctuation value of B gamma, and lowering, if the fluctuation value of B gamma arrives at a gamma maximum value, the fluctuation value of R gamma and the fluctuation value of G gamma, and raising, if the fluctuation value of B gamma arrives at a gamma minimum value, the fluctuation value of R gamma and the fluctuation value of G gamma; and applying R gamma, G gamma, and B gamma corrected by the second correcting to the display module. BRIEF DESCRIPTION OF THE DRAWINGS [0013] The accompany drawings, which are comprised to provide a further understanding of the invention and are incorporated on and constitute a part of this specification illustrate implementations of the invention and together with the description serve to explain the principles of the invention. FIG. 1 is a schematic block diagram illustrating a display device according to an implementation of this document; FIG. 2 is a diagram illustrating a subpixel circuit configuration of an OLED panel according to an implementation of this document; [0016] FIG. 3 is a diagram illustrating a subpixel circuit configuration of an LCD panel according to an implementation of this document; [0017] FIG. 4 is a diagram illustrating a configuration of a device for setting gamma of a display device according to an implementation of this document; FIG. 5 is a flowchart illustrating a method of setting gamma according to an implementation of this document; and FIGS. 6 and 7 are flowcharts illustrating a method of setting gamma according to an implementation of this document. Reference will now be made in detail implementations of the invention examples of which are illustrated in the accompanying drawings. Hereinafter, an implementation of this document will be described in detail with reference to the attached drawings. FIG. 1 is a schematic block diagram illustrating a display device according to an implementation of this document, FIG. 2 is a diagram illustrating a subpixel circuit configuration of an OLED panel according to an implementation of this document, and FIG. 3 is a diagram illustrating a subpixel circuit configuration of an LCD panel according to an implementation of this document. As shown in FIG. 1, the display device comprises a timing driver TCN, a gate driver SDRV, a data driver DDRV, and a panel PNL. The timing driver TCN receives a vertical synchronous signal Vsync, a horizontal synchronous signal Hsync, a data enable signal DE, a clock signal CLK, and a data signal DDATA from the outside. The timing driver TCN controls operation timing of the data driver DDRV and the gate driver SDRV using a timing signal such as the vertical synchronous signal Vsync, the horizontal synchronous signal Hsync, the data enable signal DE, and the clock signal CLK. Because the timing driver TCN can determine a frame period by counting a data enable signal DE of one horizontal period, the vertical synchronous signal Vsync and the horizontal synchronous signal Hsync supplied from the outside may be omitted. Control signals generated in the timing driver TCN comprise a gate timing control signal GDC for controlling operation timing of the gate driver SDRV and a data timing control signal DDC for controlling operation timing of the data driver DDRV. The gate timing control signal GDC comprises a gate start pulse GSP, a gate shift clock GSC, a gate output enable signal GOE, etc. The gate start pulse GSP is supplied to a gate drive integrated circuit (IC) in which a first gate signal occurs. The gate shift clock GSC is a clock signal commonly input to gate drive ICs and is a clock signal for shifting a gate start pulse GSP. The gate output enable signal GOE controls an output of the gate drive ICs. The data timing control signal DDC comprises a source start pulse SSP, a source sampling clock SSC, a source output enable signal SOE, etc. The source start pulse SSP controls a data sampling start time point of the data driver DDRV. The source sampling clock SSC is a clock signal for controlling a sampling operation of data within the data driver DDRV based on a rising or falling edge. The source output enable signal SOE controls an output of the data driver DDRV. The source start pulse SSP supplied to the data driver DDRV may be omitted according to a data transmission method. The gate driver SDRV sequentially generates a gate signal while shifting a level of a signal with a swing width of a gate driving voltage in which transistors of subpixels SP comprised in the panel PNL can be operated in response to a gate timing control signal GDC supplied from the timing driver TCN. The gate driver SDRV supplies a gate signal generated through gate lines SL1-SLm to subpixels SP comprised in the panel PNL. The gate driver SDRV is directly formed in the panel in a gate in panel (GIP) method or is formed in the outside of the panel PNL. The data driver DDRV samples and latches a data signal DDATA of a digital form supplied from the timing driver TCN and converts the data signal DDATA to data of a parallel data system in response to a data timing control signal DDC supplied from the timing driver TCN. As described above, when the data signal DDATA is converted to data of a parallel data system, the data driver DDRV converts the data signal DDATA of a digital form to a gamma reference voltage and converts the gamma reference voltage to a data signal ADATA of an analog form. The data driver DDRV supplies a data signal ADATA converted through data lines DL1-DLn to subpixels SP comprised in the panel PNL. The panel PNL comprises red, green, and blue (hereinafter, referred to as `RGB`) subpixels SP disposed in a matrix form. The panel PNL comprises an OLED panel or an LCD panel. When the panel PNL is formed as an OLED panel, a subpixel has a circuit configuration of FIG. 2. In a switching transistor T1, a gate is connected to a gate line SL1 to which a gate signal is supplied, one end thereof is connected to a data line DL1 to which a data signal is supplied, and the other end thereof is connected to a first node n1. In a driving transistor T2, a gate is connected to the first node n1, one end thereof is connected to a second node n2 connected to a first power source wiring VDD to which a driving power source Vdd of a high potential is supplied, and the other end thereof is connected to a third node n3. One end of a storage capacitor Cst is connected to the first node n1 and the other end thereof is connected to the second node n2. In an organic light emitting diode D, an anode is connected to the third node n3 connected to the other end of the driving transistor T2, and a cathode is connected to a second power source wiring VSS to which a driving power source Vss of a low potential is supplied. In an OLED panel having such a subpixel SP structure, according to a gate signal supplied through the gate line SL1 and a data signal supplied through the data line DL1, as a light emitting layer comprised in each subpixel emits light, an image is displayed. Alternatively, when the panel PNL is formed as an LCD panel, a subpixel SP has a circuit configuration of FIG. 3 . In a switching transistor TFT, a gate is connected to a gate line SL1 to which a gate signal is supplied and one end thereof is connected to a data line DL1 to which a data signal is supplied, and the other end thereof is connected to a first node n1. One end of a pixel electrode 1 positioned at one side of a liquid crystal cell Clc is connected to the first node n1 connected to the other end of the switching transistor TFT, and a common electrode 2 positioned at the other side of the liquid crystal cell Clc is connected to a common voltage wiring Vcom. One end of a storage capacitor Cst is connected to the first node n1, and the other end thereof is connected to the common voltage wiring Vcom. The LCD panel having such a subpixel SP structure can display an image with transmission of light according to a change of a liquid crystal layer comprised in each subpixel according to a gate signal supplied through the gate line SL1 and a data signal supplied through the data line DL1. In the foregoing description, for a better understanding of a subpixel, FIGS. 2 and 3 illustrate a common circuit configuration, and the implementation is not limited thereto. In order to express desired luminance and color coordinate, a display device comprising the above-described OLED panel or LCD panel sets a gamma value (corrects a color) and displays, when a set gamma value is stored in a memory, a screen with reference to the gamma value stored in the memory whenever driving. Hereinafter, a method of setting gamma of a display device will be described. [0032] FIG. 4 is a diagram illustrating a configuration of a device for setting gamma of a display device according to an implementation of this document, FIG. 5 is a flowchart illustrating a method of setting gamma according to an implementation of this document, and FIGS. 6 and 7 are flowcharts illustrating a method of setting gamma according to an implementation of this document. As shown in FIGS. 4 and 5, the gamma setting device of a display device comprises a sensing unit 120, an optical measuring unit 130, a processor 140, and a board 150. The gamma setting device of the display device senses an optical characteristic (color coordinate) displayed in a display module 110 using the sensing unit 120. The optical characteristic sensed by the sensing unit 120 is converted to data of a digital signal form by the optical measuring unit 130 and is transferred to the processor 140. The processor 140 provides a sensed color coordinate of the display module 110 based on data transferred from the optical measuring unit 130, sets a gamma value appropriate for the display module 110 by comparing the sensed color coordinate and a target color coordinate and a gamma value through the board 150, and stores the gamma value in a memory of a data driver comprised in the display module 110 through the board 150. Here, a computer that can check, correct, and output a result through input data can be used as the processor 140, but the processor 140 is not limited thereto. An external memory or an internal memory comprised in the data driver can be selected as the memory of the data driver, but the memory of the data driver is not limited thereto. An OLED panel or an LCD panel comprising the panel PNL, the data driver DDRV, and the gate driver SDRV described with reference to FIG. 1 can be selected as the display module 110, but the display module 110 is not limited thereto. Here, the display module 110 is driven by a pattern generator, etc., and thus an RGB subpixel is displayed over a specific area or an entire panel PNL, but it is not limited thereto. A method of setting gamma finds a gamma value appropriate for the display module 110 by repeatedly performing a series of process of FIG. 5 using the above device. As shown in FIG. 5, the method of setting gamma is generally performed with step of performing luminance setting algorithm (S10), step of performing color coordinate setting algorithm (S30), and determining whether luminance & a color coordinate satisfy an allowable error (S50). In order to set a target optical characteristic for the display module 110, luminance is set and then a color coordinate is set, as described above. If luminance and a color coordinate satisfy an allowable error, gamma setting is terminated and a value that is set at this time is stored in the memory of the data driver through the board 150. If luminance and a color coordinate do not satisfy an allowable error, the process returns to step S10. As shown in FIGS. 6 and 7, the method of setting gamma of the display device according to an implementation of this document is performed in order of step of sensing (S101), step of determining a fluctuation value of RG gamma (S103), step of determining (S105), step of first correction (S107), step of second correction (S109), and step of applying (S111 and S113). Hereinafter, a method of setting gamma of a display device will be described in detail with reference to FIGS. 1 to 7. Step of sensing (S101) is step of sensing an optical characteristic from the display module 110. Step of sensing (S101) is performed by sensing an optical characteristic displayed in the display module 110 with the sensing unit 120. An optical characteristic of the display module 110 sensed by the sensing unit 120 is converted to data of a digital signal form by the optical measuring unit 130 and is transferred to the processor 140. Step of determining a fluctuation value of RG gamma (S103) compares a color coordinate sensed at sensing step (S101) and a target color coordinate and determines a fluctuation value RG of R gamma and a fluctuation value GG of G gamma. At step of determining a fluctuation value of RG gamma (S103), by comparing a sensed color coordinate and a target color coordinate, a fluctuation value RG of R gamma and a fluctuation value GG of G gamma are determined. Step of determining (S105) is step of determining whether a fluctuation value RG of R gamma and a fluctuation value GG of G gamma determined at step of determining a fluctuation value of RG gamma (S103) satisfy an allowable error. At step of determining (S105), if a fluctuation value RG of R gamma and a fluctuation value GG of G gamma determined at step of determining a fluctuation value of RG gamma (S103) satisfy an allowable error, the following correction step is not performed and the process is terminated. A figure illustrating step of determining (S105) illustrates that an allowable error of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma is 0, but the allowable error is set as α, which is a value of 1 or less comprising a decimal point instead of 0, and this can be changed according to a request characteristic of a product. However, in the implementation, for a better understanding, it is described as the allowable error is 0. If the fluctuation value RG of R gamma and the fluctuation value GG of G gamma do not satisfy an allowable error at step S105, the fluctuation value RG of R gamma and the fluctuation value GG of G gamma are corrected (S107). If the fluctuation value RG of R gamma and the fluctuation value GG of G gamma are a positive number at step S107, a fluctuation value BG of B gamma is lowered, and if the fluctuation value RG of R gamma and the fluctuation value GG of G gamma are a negative number, a fluctuation value BG of B gamma is raised. If the fluctuation value RG of R gamma and the fluctuation value GG of G gamma are a positive number at step S107, the fluctuation value BG of B gamma is lowered with a method of lowering the fluctuation value RG of R gamma and the fluctuation value GG of G gamma, and if the fluctuation value RG of R gamma and the fluctuation value GG of G gamma are a negative number, the fluctuation value BG of B gamma is raised with a method of raising the fluctuation value RG of R gamma and the fluctuation value GG of G gamma. First correction step (S107) is performed based on Equation 1. (RG & GG are positive number) then {BG=-|Min(RG,GG)| and RG=0, GG=0}, (RG & GG are negative number) then {BG=|Min(RG,GG)| and RG=0, GG=0} [Equation 1] In Equation 1, RG is a fluctuation value of R gamma, GG is a fluctuation value GG of G gamma, and |Min(RG, GG)| is a function of returning an absolute value of a smaller value of a fluctuation value of R gamma and a fluctuation value GG of G gamma. As can be seen in Equation 1, a method of setting gamma of a display device according to an implementation uses a characteristic that if a fluctuation value RG of R gamma is raised, an X color coordinate rises, if a fluctuation value RG of R gamma is lowered, an X color coordinate falls, if a fluctuation value GG of G gamma is raised, a Y color coordinate rises, if a fluctuation value GG of G gamma is lowered, a Y color coordinate falls, if a fluctuation value BG of B gamma is raised, an XY color coordinate falls, and if a fluctuation value BG of B gamma is lowered, an XY color coordinate rises. At second correction step (S109), if one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma becomes a gamma maximum value MAX at step S107, the fluctuation value BG of B gamma is lowered, and if one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma becomes a gamma minimum value MIN, the fluctuation value BG of B gamma is raised. Further, if the fluctuation value BG of B gamma becomes a gamma maximum value MAX, the fluctuation value RG of R gamma and the fluctuation value GG of G gamma are lowered, and if the fluctuation value BG of B gamma becomes a gamma minimum value MIN, an addition correction is performed in a form of raising the fluctuation value RG of R gamma and the fluctuation value GG of G gamma. If one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma becomes a gamma maximum value MAX at step S109, one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma sustains the gamma maximum value MAX, and if one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma becomes a gamma minimum value MIN, one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma sustains the gamma minimum value MIN. If the fluctuation value BG of B gamma becomes a gamma maximum value MAX, the fluctuation value BG of B gamma sustains the gamma maximum value MAX, and if the fluctuation value BG of B gamma becomes a gamma minimum value MIN, the fluctuation value BG of B gamma sustains the gamma minimum value MIN. Second correction step S109 is performed based on Equation 2. (RG or GG=MAX) then {BG decreases, RG or GG sustains MAX}, (RG or GG=MIN) then {BG increases, RG or GG sustains MIN}, (BG=MAX) then {RG and GG decrease, BG sustains MAX}, (BG=MIN) then {RG and GG increase, BG sustains MIN}, [Equation 2] In Equation 2, RG is a fluctuation value RG of R gamma, GG is a fluctuation value GG of G gamma, BG is a fluctuation value BG of B gamma, Min is a gamma minimum value MIN, and MAX is a gamma maximum value MAX. As can be seen in Equation 2, if one of the fluctuation value RG of R gamma and the fluctuation value GG of G gamma arrives at a gamma minimum value MIN or a gamma maximum value MAX, a method of setting gamma of a display device according to an implementation lowers or raises other values while sustaining a corresponding value. When gamma by the fluctuation value RG of R gamma and the fluctuation value GG of G gamma corrected at first correction step S107 arrives at a limit value, second correction step S109 is performed and thus may be omitted. Step of applying (S111, S113) is step of applying an R gamma value, a G gamma value, and a B gamma value corrected at second correction step S109 to the display module 110. Step of applying (S111, S113) comprises step of transferring a fluctuation value RG of R gamma, a fluctuation value GG of G gamma, and a fluctuation value BG of B gamma corrected at second correction step S109 from a system to a board (S111) and step of generating a corrected fluctuation value RG of R gamma, fluctuation value GG of G gamma, and fluctuation value BG of B gamma into a corrected R gamma value, G gamma value, and B gamma value using firmware existing in a board and storing the corrected R gamma value, G gamma value, and B gamma value in the display module 110 (S113). Step of transferring (S111) is step of transferring the corrected fluctuation value RG of R gamma, fluctuation value GG of G gamma, and fluctuation value BG of B gamma from the processor 140 to be a system to the board 150 to be a board. Accordingly, firmware existing in the board 150 generates a corrected R gamma value, G gamma value, and B gamma value by applying the fluctuation value RG of R gamma, the fluctuation value GG of G gamma, and the fluctuation value BG of B gamma transferred from the processor 140 to an R gamma value, a G gamma value, and a B gamma value, respectively. Step of storing (S113) is step of storing the corrected R gamma value, G gamma value, and B gamma value in a memory of the data driver DDRV comprised in the display module 110 to be a sample using the board 150 to be a board and applying the corrected gamma value. As described above, a method of setting gamma of a display device according to an implementation of this document is performed in order of step of sensing (S101), step of determining a fluctuation value of RG gamma (S103), step of determining (S105), first correction step (S107), second correction step (S109), and step of applying (S111, S113). The process is repeatedly performed in the above order until the fluctuation value RG of R gamma and the fluctuation value GG of G gamma satisfy an allowable error. In the method of setting gamma of a display device according to an implementation of this document, step of determining a fluctuation value of RG gamma (S103), step of determining (S105), first correction step (S107), and second correction step (S109) are performed by actual algorithm on an operating system (OS) of the processor 140 to be a system. Further, the method of setting gamma of a display device according to an implementation of this document is performed with a method of determining the fluctuation value RG of R gamma and the fluctuation value GG of G gamma and then determining the fluctuation value BG of B gamma. Table 1 is a result table of an experiment of a target optical characteristic adjustment consumption time and arrival at a limitation value (determination as bad panel) on a panel basis with a method of setting gamma by a conventional method and a method of setting gamma according to this document. -US-00001 TABLE 1 Method according to this Conventional method document Target optical Arrival at Target optical Arrival at characteristic limitation characteristic limitation adjustment value(deter- adjustment value(deter- Panel consumption mination as consumption mination as # time (sec) bad panel) time (sec) bad panel) 1 120 good 108 good 2 122 good 107 good 3 121 good 110 good 4 118 good 100 good 5 118 good 101 good 6 130 bad 101 good 7 117 good 108 good 8 116 good 100 good 9 114 good 99 good 10 125 bad 111 good 11 114 good 105 good 12 128 good 105 good 13 119 good 107 good 14 131 good 100 good 15 124 good 109 good 16 124 good 100 good 17 129 good 110 good 18 118 good 105 good 19 122 good 101 good 20 118 bad 106 good 21 122 good 100 good 22 131 good 100 good 23 118 good 98 good 24 117 good 106 good 25 126 bad 110 bad 26 129 good 107 good 27 129 good 107 good 28 117 good 103 good 29 121 good 102 good 30 130 good 100 good Average 122 4 104 1 As can be seen in Table 1, in a method according to this document, an average consumption time is good as 104 seconds and a bad panel determining rate according to arrival at a limitation value is very good as 1, compared with a conventional method. Therefore, in the method of setting gamma of a display device according to an implementation, a time consumed for setting gamma can be reduced, compared with a conventional method and thus a system quantity, an operator, and operation space can be remarkably reduced, and because an addition device configuration or algorithm setting is unnecessary, the method has a merit in an installation cost. Further, because the method of setting gamma of a display device according to an implementation can approach a more accurate target value, compared with a conventional method, an allowable error can be remarkably reduced, and a high quality of product can be provided. Further, because the method of setting gamma of a display device according to an implementation can effectively avoid from arriving at a gamma limitation value when setting gamma, compared with a conventional method, a yield of a product can be improved. As described above, according to this document, a method of setting gamma of a display device that can expect reduction of a time consumed for setting gamma, reduction of an installation cost, improvement of accuracy according to decrease of an allowable error, and improvement of a yield through escaping of arrival at a gamma limitation value is provided. The foregoing implementations and advantages are merely exemplary and are not to be construed as limiting this document. The present teaching can be readily applied to other types of apparatuses. The description of the foregoing implementations is intended to be illustrative, and not to limit the scope of the claims. Many alternatives, modifications, and variations will be apparent to those skilled in the art. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents but also equivalent structures. Patent applications in class Light detection means (e.g., with photodetector) Patent applications in all subclasses Light detection means (e.g., with photodetector) User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120162168","timestamp":"2014-04-24T13:23:14Z","content_type":null,"content_length":"62334","record_id":"<urn:uuid:2c3cf8f7-5e70-476c-b5aa-346a2d0532aa>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
Aligning time-series Recently I wanted to align some time-series data recently for plotting. I couldn’t find some library function in R to do it for me (actually I did, but they involved stuffing around with special time series objects instead of vectors), and the problem was interesting anyway so I wrote something myself. The idea is to find the offset using cross-correlation and then do a phase shift to align. This can all be done in the frequency domain, where the cross-correlation is just the Hadamard product between the Fourier components of the signal to align with the conjugate of the reference[1]. there’s two align function, align2() aligns a pair of signals with x as the reference and align() aligns the columns of a matrix. There’s a bunch of padding added for the Fourier transform so the final vectors will contain leading and trailing zeros. The plotting function removes these. Here’s an example, unaligned: and aligned: [1] Semi-cute but well known, reversing the Fourier coefficients of a real signal reduces to calculating the complex conjugate. As an ordinary convolution in the frequency domain is x ⊗ y, the cross-correlation is thus x ⊗ y̅. jealousdispu likes this nickthejam likes this jbedo posted this
{"url":"http://blog.cua0.org/post/3324273287/aligning-time-series","timestamp":"2014-04-16T22:49:39Z","content_type":null,"content_length":"26102","record_id":"<urn:uuid:85b8418e-96e3-4490-9cf1-3d9474e45ac2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Faculty Research - Pure Mathematics - MSCS@UIC Faculty Research - Pure Mathematics Paul Fong (Emeritus), Ph.D. Harvard University, 1959. Group theory; representation theory of finite groups. Daniel Groves, Ph.D. University of Oxford, 2000. Geometric group theory. Michael Hull, Ph.D. Vanderbilt University, 2013. Geometric group theory. Olga Kashcheyeva, Ph.D. University of Missouri, Columbia, 2003. Algebraic geometry and commutative algebra. Richard G. Larson (Emeritus), Ph.D. University of Chicago, 1965. Applications of Hopf algebras to control theory and data mining; structure of Hopf algebras and quantum groups; applications of algebra to computer science. Jeffrey Leon (Emeritus), Ph.D. California Institute of Technology, 1971. Computation group theory; computational combinatorics. David E. Radford (Emeritus), Ph.D. University of North Carolina at Chapel Hill, 1970. Hopf algebras; quantum groups; invariants of knots, links and 3-manifolds. Mark Ronan (Emeritus), Ph.D. University of Oregon, 1978. Groups; geometry. Stephen Smith (Emeritus), Ph.D. Oxford University, 1973. Finite group theory; combinatorics; computer science; algebraic topology. Bhama Srinivasan (Emeritus), Ph.D. University of Manchester, 1960. Representation theory of finite groups of Lie type . Kevin Tucker, Ph.D. University of Michigan, 2010. Algebraic Geometry and Commutative Algebra. Number Theory and Algebraic Geometry Alina Cojocaru, Ph.D. Queen's University, 2002. Number Theory (including analytic number theory, algebraic number theory, and arithmetic geometry) . Izzet Coskun, Ph.D. Harvard University, 2004. Algebraic Geometry. Lawrence Ein, Ph.D. University of California, Berkeley, 1981. Algebraic geometry. Henri Gillet, Ph.D. Harvard University, 1978. Arithmetic geometry; algebraic geometry; algebraic k-theory. Majid Hadian-Jazi, Ph.D. Max Planck Institute for Mathematics, 2010. Arithmetic Geometry. Jack Huizenga, Ph.D. Harvard University, 2012. Algebraic Geometry. Olga Kashcheyeva, Ph.D. University of Missouri, Columbia, 2003. Algebraic geometry and commutative algebra. Anatoly S. Libgober (Emeritus), Ph.D. Tel-Aviv University, 1977. Topology of algebraic varieties; theory of singularities; mirror symmetry. Mihnea Popa, Ph.D. University of Michigan, 2001. Algebraic Geometry. Kevin Tucker, Ph.D. University of Michigan, 2010. Algebraic Geometry and Commutative Algebra. Jan Verschelde, Ph.D. Katholieke Universiteit Leuven, Belgium, 1996. Computational algebraic geometry; symbolic-numeric computation; combinatorial and polyhedral methods; development of mathematical software; high performance computing; numerical analysis. Stephen Yau (Emeritus), Ph.D. State University of New York-Stonybrook, 1976. Algebraic geometry, singularity theory; complex geometry; CR geometry; nonlinear filtering theory; algebraic geometry code; information theory; control theory; financial math; image database; computer software testing, Bioinformatics. Neil Berger (Emeritus), Ph.D. Courant Institute of Mathematical Sciences, 1968. Elasticity, fluid mechanics. Calixto Calderon (Emeritus), Ph.D. University of Buenos Aires, 1969. Harmonic analysis; differential equations; mathematical biology and history of science. Alexey Cheskidov, Ph.D. Indiana University, 2004. Nonlinear PDE, fluid dynamics, and infinite-dimensional dynamical systems. Mimi Dai, Ph.D. University of California - Santa Cruz, 2012. Partial differential equations, fluids dynamics, complex fluids. Laura DeMarco, Ph.D. Harvard University, 2002. Dynamical systems, complex analysis. David Dumas, Ph.D. Harvard University, 2004. Teichmuller theory, Kleinian groups, hyperbolic manifolds, geometric structures. Shmuel Friedland, Ph.D. Technion - Israel Institute of Technology, 1971. Matrices, Tensors & Applications, Statistical Mechanics, Math. Biology, Dynamical Systems. Michael Greenblatt, Ph.D. Princeton University, 1998. Resolution of singularities in analysis, oscillatory integrals, Radon transforms. Jeff E. Lewis (Emeritus), Ph.D. Rice University, 1966. Analysis: Harmonic Analysis and Partial Differential Equations. Charles S.C. Lin (Emeritus), Ph.D. University of California-Berkeley, 1967. Operator theory; perturbation theory; functional analysis. Irina Nenciu, Ph.D. California Institute of Technology, 2005. Integrable systems, random matrices and mathematical physics. Cheng Ouyang, Ph.D. Northwestern University, 2009. Probability theory and Stochastic analysis: diffusions and differential geometry, Malliavin calculus and Gaussian processes, Levy processes, mathematical finance. Christian Rosendal, Ph.D. University of Paris 6, 2003. Descriptive set theory and applications. Yoram Sagher (Emeritus), Ph.D. University of Chicago, 1967. Harmonic analysis, interpolation theory, mathematics education. Andy Sanders, Ph.D. University of Maryland - College Park, 2013. Teichmuller theory, minimal surfaces, Kleinian groups, Higgs bundles and the geometry of character varieties. Roman Shvydkoy, Ph.D. University of Missouri, Columbia, 2001. Euler, Navier-Stokes equations, turbulence, spectral problems, stability. Zbigniew Slodkowski, Ph.D. Institute of Mathematics of the Polish Academy of Sciences, 1974. Several complex variables. Christof Sparber, Ph.D. University of Vienna, 2004. Mathematical Physics, Partial Differential Equations, Analysis and Numerical Simulation of Multi-scale Problems. David Tartakoff (Emeritus), Ph.D. University of California-Berkeley, 1969. Partial differential equations, several complex variables. Jie Yang, Ph.D. University of Chicago, 2006. Multiple comparisons, Cluster Analysis, Discriminant Analysis, Dimension Reduction, Design of Experiments, and Financial Mathematics. Ergodic Theory and Dynamical Systems Laura DeMarco, Ph.D. Harvard University, 2002. Dynamical systems, complex analysis. Shmuel Friedland, Ph.D. Technion - Israel Institute of Technology, 1971. Matrices, Tensors & Applications, Statistical Mechanics, Math. Biology, Dynamical Systems. Alexander Furman, Ph.D. The Hebrew University of Jerusalem, 1996. Ergodic theory and Lie groups, especially aspects of rigidity for group actions. Steven Hurder (Emeritus), Ph.D. University of Illinois-Urbana Champaign, 1980. Differential topology and geometry of foliations; smooth ergodic theory and rigidity of group actions; spectral and index theory of operators. Howard A. Masur (Emeritus), Ph.D. University of Minnesota, 1974. Dynamical systems; low dimensional topology; Riemann surface theory. Geometry and Topology Aldridge K. Bousfield (Emeritus), Ph.D. Massacusetts Institute of Technology, 1966. Algebraic topology, homotopy theory. Marc Culler, Ph.D. University of California, Berkeley, 1978. Group theory; low-dimensional topology; 3-manifolds; hyperbolic geometry; computation in geometry and topology. David Dumas, Ph.D. Harvard University, 2004. Teichmuller theory, Kleinian groups, hyperbolic manifolds, geometric structures. Brayton Gray (Emeritus), Ph.D. University of Chicago, 1965. Algebraic topology; homotopy theory. Daniel Groves, Ph.D. University of Oxford, 2000. Geometric group theory. James Heitsch (Emeritus), Ph.D. University of Chicago, 1971. Differential topology; theory of foliations; index theory; heat equation methods. Jack Huizenga, Ph.D. Harvard University, 2012. Algebraic Geometry. Michael Hull, Ph.D. Vanderbilt University, 2013. Geometric group theory. Steven Hurder (Emeritus), Ph.D. University of Illinois-Urbana Champaign, 1980. Differential topology and geometry of foliations; smooth ergodic theory and rigidity of group actions; spectral and index theory of operators. Louis Kauffman, Ph.D. Princeton University, 1972. Knot theory, topological quantum field theory, quantum topology, topological quantum computing and information. Benjamin Klaff, Ph.D. Univeristy of Illinois at Chicago, 2003. low-dimensional topology, 3-manifolds, geometric group theory, informal STEM learning (math circles). Anatoly S. Libgober (Emeritus), Ph.D. Tel-Aviv University, 1977. Topology of algebraic varieties; theory of singularities; mirror symmetry. Howard A. Masur (Emeritus), Ph.D. University of Minnesota, 1974. Dynamical systems; low dimensional topology; Riemann surface theory. Andy Sanders, Ph.D. University of Maryland - College Park, 2013. Teichmuller theory, minimal surfaces, Kleinian groups, Higgs bundles and the geometry of character varieties. Peter B. Shalen, Ph.D. Harvard University, 1972. Low-dimensional topology; geometric and combinatorial group theory; hyperbolic geometry. Brooke Shipley, Ph.D. Massachusetts Institute of Technology, 1995. Algebraic Topology and Homotopy Theory. John Steenbergen, Ph.D. Duke University, 2013. Spectral Graph Theory, Simplicial Complexes, Dimension Reduction. Martin Tangora (Emeritus), Ph.D. Northwestern University, 1966. Algebraic topology. Kevin Whyte, Ph.D. University of Chicago, 1998. Topology and geometric group theory . John Wood (Emeritus), Ph.D. University of California, Berkeley, 1968. Differential and algebraic topology; topology of nonsingular varieties. John Baldwin (Emeritus), Ph.D. Simon Fraser University, 1971. Model theory: stability theory; applications to algebra and universal algebra; finite model theory; random graphs; math education. Joel Berman (Emeritus), Ph.D. University of Washington, 1970. Universal algebra; ordered sets. Isaac Goldbring, Ph.D. University of Illinois at Urbana-Champaign, 2009. Model Theory, applications of nonstandard analysis to Lie Theory and Group Theory, Model Theory for Metric Structures, Operator Algebras. William Howard (Emeritus), Ph.D. University of Chicago, 1956. Proof theory; foundations of mathematics; history of mathematics. David E. Marker, Ph.D. Yale University, 1983. Mathematical logic; model theory; applications to algebra and geometry, real and complex exponetntation, connections between model theory and descriptive set theory differential algebra, connections between model theory and descriptive set theory. Christian Rosendal, Ph.D. University of Paris 6, 2003. Descriptive set theory and applications. Dima Sinapova, Ph.D. University of California, Los Angeles, 2008. Logic and Set Theory. Large cardinals, forcing, infinitay combinatorics, cardinal arithmetic. John Lenz, Ph.D. Univeristy of Illinois at Urbana-Champaign, 2011. Combinatorics, Theoretical Computer Science. Dhruv Mubayi, Ph.D. University of Illinois at Urbana-Champaign, 1998. extremal and probabilistic combinatorics. Vera Pless (Emeritus), Ph.D. Northwestern University, 1957. Coding theory; combinatorics. Gyorgy Turan, Ph.D. Joszef Attila University, Hungary, 1981. Complexity theory; computational learning theory; combinatorics; logic.
{"url":"http://www.math.uic.edu/research/pure_mathematics","timestamp":"2014-04-17T21:23:30Z","content_type":null,"content_length":"33810","record_id":"<urn:uuid:65199d7c-b145-465d-81c3-22e7f116c56d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
reflections and a parallelogram April 4th 2009, 08:21 AM #1 Sep 2008 reflections and a parallelogram (2) Let l be a line and let m = τQ(l). Suppose A is some point on the plane. Let B= σl(A), C = τQ(B) and D = σm(C). Show that ABCD is a parallelogram. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/geometry/82213-reflections-parallelogram.html","timestamp":"2014-04-20T04:13:04Z","content_type":null,"content_length":"28486","record_id":"<urn:uuid:257e0d99-b869-4070-827d-e68615d8655b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: AW: AW: st: AW: beta coefficients for interaction terms [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: AW: AW: st: AW: beta coefficients for interaction terms From lschoele@rumms.uni-mannheim.de To statalist@hsphsun2.harvard.edu Subject Re: AW: AW: st: AW: beta coefficients for interaction terms Date Tue, 21 Apr 2009 18:10:01 +0200 I got different results first, but trying it again gave me the same results. According to the book "Data analysis with stata" from Kohler and Kreuter, you are not supposed to use reg mpg headroom length ia2, beta you are supposed to use reg smpg shead slength sia2 for interaction effects, but if I get the same beta coefficients, why is it wrong to use reg mpg headroom length ia2, beta? That confuses me a little. Zitat von Martin Weiss <martin.weiss1@gmx.de>: How do you mean that? The example that Kit provided gives the same results? Did you get something else for your special dataset? -----Ursprüngliche Nachricht----- Von: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von Gesendet: Dienstag, 21. April 2009 16:10 An: statalist@hsphsun2.harvard.edu Betreff: re: AW: st: AW: beta coefficients for interaction terms I tried it out and got different figures for the beta coefficient. Thats why I asked that question. It should give me the same figures, right? Zitat von Christopher Baum <baum@bc.edu>: Why don't you just try it out? sysuse auto, clear egen shead = std(headroom) egen slength = std(length) egen smpg = std(mpg) gen ia2 = shead*slength egen sia2 = std(ia2) reg smpg shead slength sia2 reg mpg headroom length ia2, beta Kit Baum | Boston College Economics and DIW Berlin | An Introduction to Stata Programming | An Introduction to Modern Econometrics Using Stata | * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-04/msg00902.html","timestamp":"2014-04-17T18:53:59Z","content_type":null,"content_length":"8827","record_id":"<urn:uuid:cc8b87d4-aa6b-4836-809c-6dc32af7ec2b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Taking a test is the best way to learn Re: Taking a test is the best way to learn Thorough once said that he would have learned more by sailing once around the harbor. Dry studying rarely sticks, being forced to apply your knowledge as in a test is a powerful stimulus. The adrenalin rush is unparalleled. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=173935","timestamp":"2014-04-18T16:47:59Z","content_type":null,"content_length":"9940","record_id":"<urn:uuid:a70c0b77-46e5-41dc-822d-63d26d5d540b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Fair Lawn Statistics Tutor Find a Fair Lawn Statistics Tutor ...I'm able to provide direct and clear explanations for which choice is the correct one, and why each of the others are false. I am also able to advise techniques for drawing diagrams on the logic games section, and well as general logical rules to help understand the arguments and constraints in each problem. I have tutored the GRE at least 20 times. 32 Subjects: including statistics, physics, calculus, geometry ...I will methodically and patiently work step by step to make the material easy. I instruct students ranging from pre-K to adult. I offer the following: A multisensory approach to learning. 30 Subjects: including statistics, English, piano, reading ...Having now Aced calculus II and taking Differential Equations and Econometrics (advanced Stats), I'm very familiar with Statistics, Algebra 1, Calculus, and Precalculus. Now, I am more than ready and able to explain all facets of math to any struggling student. SAT math is just Algebra and geometry, (which can look kinda scary sometimes)which I can demystify easily. 15 Subjects: including statistics, geometry, algebra 1, algebra 2 ...I know all of the material that is currently taught in schools today. I have aced this course during my senior year of high school. This topic has a lot of algebra involved. 20 Subjects: including statistics, physics, geometry, algebra 1 ...Can you prove the existence of God? What is the nature of man? I hope, through tutoring, to both share my love for philosophy and to help individuals develop the critical thinking skills needed to analyze philosophy. 18 Subjects: including statistics, English, reading, French
{"url":"http://www.purplemath.com/Fair_Lawn_Statistics_tutors.php","timestamp":"2014-04-18T21:36:17Z","content_type":null,"content_length":"23903","record_id":"<urn:uuid:13b79953-f0b5-4762-8eb6-40ce2ac6748c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite groups of even cardinality February 24th 2006, 02:10 PM #1 Finite groups of even cardinality Let (G,*) be a finite group with an even number of elements. Show that there must exist at least one element $a eq e \in G$ such that $a^2=e$. I have no clue how to start and what ticks me off about that is that I think I've seen it before. (And I don't recall it was that difficult, either.) However my class notes (from years ago) doesn't contain the proof. Consider pairing up elements with their inverses. Consider pairing up elements with their inverses. Got it. Let me formalize it then. Let (G,*) be a finite group with an even number of elements. Consider the subset of G: $A \equiv \{b|b^2=e, b \in G\}$ where e is the identity in G. It may be that A is empty. If not, the inverse of any element $b \in A$ is also in A and is distinct from b. So b and its inverse form a pair of elements in A and this is true of any element in A. Thus we know that A has no elements or has an even number of them. Now form the set G - A. (For others, I believe the notation is G\A?) We know that G - A is not empty because $e^2=e$. We also know that the set G - A has an even number of elements since both G and A do (calling 0 an even number). Because of this, the set G - A - {e} cannot be empty, since this would imply that G has an odd number of elements. Since any element $a \in G-A-{e}$ is not in A we know that $a^2=e$. Thus there is at least one element a in G that is not the identity that has the property $a^2=e$. (End of proof.) Wow. That took a lot more writing than I thought it would. Thanks for the tip! Last edited by topsquark; February 25th 2006 at 01:56 PM. Reason: Corrections Got it. Let me formalize it then. Let (G,*) be a finite group with an even number of elements. Consider the subset of G: $A \equiv \{b|b^2=e, b \in G\}$ Don't you need to add $be e$ to the definition of $A$? where e is the identity in G. It may be that A is empty. If not, the inverse of any element $b \in A$ is also in A and is distinct from b. Take the group $\mathcal{G}=(\{e,b\},*)$, where $b=b^{-1}$. Then $b \in A$ but $b^{-1}$ is not distinct form $b$. So b and its inverse form a pair of elements in A and this is true of any element in A. Thus we know that A has no elements or has an even number of them. Now form the set G - A. (For others, I believe the notation is G\A?) We know that G - A is not empty because $e^2=e$. We also know that the set G - A has an even number of elements since both G and A do (calling 0 an even number). Because of this, the set G - A - {e} cannot be empty, since this would imply that G has an odd number of elements. Since any element $a \in G-A-{e}$ is not in A we know that $a^2=e$. Thus there is at least one element a in G that is not the identity that has the property $a^2=e$. (End of proof.) Wow. That took a lot more writing than I thought it would. Thanks for the tip! Last edited by CaptainBlack; February 26th 2006 at 04:32 AM. Let (G,*) be a finite group with an even number of elements. Show that there must exist at least one element $a eq e \in G$ such that $a^2=e$. I have no clue how to start and what ticks me off about that is that I think I've seen it before. (And I don't recall it was that difficult, either.) However my class notes (from years ago) doesn't contain the proof. If there is no such element every element $a$ except $e$ has a distinct partner $a^{-1}$, adding up to an even number of elements, but $e$ is un-partnered so there are an odd number of elements in the group, a contradiction. In regards to your second to last post on this thread, CaptainBlack, yes I forgot to include b<>e in the definition of A, thank you. For providing a proof MUCH shorter than mine...thppppt! February 25th 2006, 01:16 PM #2 February 25th 2006, 01:54 PM #3 February 26th 2006, 04:24 AM #4 Grand Panjandrum Nov 2005 February 26th 2006, 04:49 AM #5 Grand Panjandrum Nov 2005 February 26th 2006, 10:49 AM #6
{"url":"http://mathhelpforum.com/advanced-algebra/2000-finite-groups-even-cardinality.html","timestamp":"2014-04-18T06:11:47Z","content_type":null,"content_length":"57436","record_id":"<urn:uuid:abe31665-455b-4d5f-86a9-64656cdec0b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Characters separating points on Maximal Torus modulo Weyl group? up vote 2 down vote favorite Let G be a compact Lie group, for example, SU(n). Let T be its maximal torus. Let W be its Weyl group. Every finite-dimensional representation of G has a character, which is a function on G, T and T/W. I want to prove that for two different points a and b in T/W, we can find such a character $\chi$ that $\chi(a)\neq \chi(b)$. Sorry, Piotr Achinger, you are probably right. I have to reformulate my thoughts and repost it. lie-groups rt.representation-theory fa.functional-analysis harmonic-analysis Presumably $G$ is connected. Points $t, t' \in T$ have distinct images in $T/W$ iff they are not conjugate in $G$. Indeed, if $t' = gtg^{-1}$ then $gTg^{-1}$ and $T$ contain $t'$ and so lie in the 2 connected compact Lie group $Z_G(t)^0$. By conjugacy of maximal tori in such Lie groups, there exists $h \in Z_G(t)^0$ such that $h(gTg^{-1}) = T$, so $t' = ht'h^{-1} = (hg)t(hg)^{-1}$ with $hg \ in N_G(T)$, so $t'$ is in the $W$-orbit of $t$. So your question is exactly if characters of $G$ separate conjugacy classes in $G$ (as every $g \in G$ lies in a torus, since $G$ is connected and compact). – user29720 Jan 12 '13 at 17:37 I should have written $Z_G(t')^0$ rather than $Z_G(t)^0$ above. – user29720 Jan 12 '13 at 17:53 Many thanks to your comment. Now I feel more secure. As you may know, by Stone-Weierstrass theorem, this would imply that the linear combination of characters would be dense in the space of continuous function on T/W. It is something like Peter-Weyl theorem which says that all continuous functions can be approximated by linear combinations of characters in L^2 norm. – Jeep Wrangler Jan 12 '13 at 18:08 add comment 1 Answer active oldest votes This type of question (separating classes by representations) is natural throughout representation theory of various different flavors. In particular, the systematic treatment of (finite dimensional) irreducible representations of connected compact Lie groups leads to a straightforward affirmative answer: see for instance the textbook Representations of Compact Lie Groups by Brocker and tom Dieck (Springer GTM 98), expecially IV.2 and VI.2. Their treatment of class functions is spread out a bit, but doesn't actually require all details of the Weyl character Briefly, the compact Lie group case involves some structure theory: all maximal tori are conjugate and every element is conjugate to some element in your fixed $T$. Moreover, two elements of $T$ are conjugate iff they are conjugate under the Weyl group $W = N_G(T)/T$. Then one has to pin down the class functions on $G$ (invariant on conjugacy classes) by identifying them with $W$-invariant functions on $T$. (Here the category of groups determines what kind of functions are relevant.) A key fact is that characters of irreducible (necessarily finite dimensional) up vote representations generate the algebra of class functions on $G$, and you can pass to the isomorphic algebra of $W$-invariant functions on $T$. These ingredients are standard but not trivial to 4 down develop. As a refinement, when $G$ is semisimple and simply connected, one sees that classes in $G$ are already separated just by the values of the finitely many fundamental characters. To get more perspective on these ideas, it's worthwhile to look at the closely parallel treatment of a connected semisimple algebraic group (over an algebraically closed field of any characteristic): see especially Theorem 6.1 and its consequences in Steinberg's 1965 paper on regular elements here. For a semisimple algebraic group virtually the same results can be proved as in the compact case, with the important difference that not all elements of $G$ are semisimple (and indeed, characters of representations fail to distinguish elements from their semisimple parts). Here it's more obvious that you don't have to know all the fine details about irreducible highest weight representations (which in fact aren't yet complete in prime characteristic). Broadly speaking, three types of semisimple groups behave similarly: compact Lie groups, complex Lie groups, linear algebraic groups. @Allen: Thanks for the edit. I was writing off the top of my head and have now made references more precise. (Is there a better source for the compact groups?) – Jim Humphreys Jan 13 '13 at add comment Not the answer you're looking for? Browse other questions tagged lie-groups rt.representation-theory fa.functional-analysis harmonic-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/118744/characters-separating-points-on-maximal-torus-modulo-weyl-group?sort=oldest","timestamp":"2014-04-20T18:54:30Z","content_type":null,"content_length":"58262","record_id":"<urn:uuid:de4add00-6543-4847-9317-b505cb88a28a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Browse by Author Jump to: Aragón, J Bai, M Baker, R Barrio, R Gil, D Maini, P Naumis, G Torres, M Varea, C Woolley, T Number of items: 8. Aragón, J Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Non-linear effects on Turing patterns: time oscillations and chaos. Physical Review Letters . (Submitted) Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Nonlinear effects on Turing patterns: time oscillations and chaos. Physical Review E, 86 (2). 026201-1. ISSN 1063-651X Woolley, T. E. and Baker, R. E. and Maini, P. K. and Aragón, J. L. and Barrio, R. A. (2010) Analysis of stationary droplets in a generic Turing reaction-diffusion system. Physical Review E, 82 (5). ISSN 1063-651X Aragón, J. L. and Naumis, G. G. and Bai, M. and Torres, M. and Maini, P. K. (2008) Turbulent luminance in impassioned van Gogh paintings. Journal of Mathematical Imaging and Vision, 30 (3). pp. Aragón, J. L. and Torres, M. and Gil, D. and Barrio, R. A. and Maini, P. K. (2002) How to generate pentagonal symmetry using Turing systems. Physical Review E, 65 (5). 051913-1-051913-9. Barrio, R. A. and Maini, P. K. and Aragón, J. L. and Torres, M. (2002) Size dependent symmetry breaking in models for morphogenesis. Physica D, 168-169 (1). pp. 61-72. Barrio, R. A. and Varea, C. and Aragón, J. L. and Maini, P. K. (1999) A two-dimensional numerical study of spatial pattern formation in interacting Turing systems. Bulletin of Mathematical Biology, 61 (3). pp. 483-505. Aragón, J. L. and Varea, C. and Barrio, R. A. and Maini, P. K. (1998) Spatial patterning in modified Turing systems: Application to pigmentation patterns on marine fish. FORMA, 13 (3). pp. 213-221. ISSN 0911-6036 Bai, M Aragón, J. L. and Naumis, G. G. and Bai, M. and Torres, M. and Maini, P. K. (2008) Turbulent luminance in impassioned van Gogh paintings. Journal of Mathematical Imaging and Vision, 30 (3). pp. Baker, R Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Non-linear effects on Turing patterns: time oscillations and chaos. Physical Review Letters . (Submitted) Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Nonlinear effects on Turing patterns: time oscillations and chaos. Physical Review E, 86 (2). 026201-1. ISSN 1063-651X Woolley, T. E. and Baker, R. E. and Maini, P. K. and Aragón, J. L. and Barrio, R. A. (2010) Analysis of stationary droplets in a generic Turing reaction-diffusion system. Physical Review E, 82 (5). ISSN 1063-651X Barrio, R Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Non-linear effects on Turing patterns: time oscillations and chaos. Physical Review Letters . (Submitted) Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Nonlinear effects on Turing patterns: time oscillations and chaos. Physical Review E, 86 (2). 026201-1. ISSN 1063-651X Woolley, T. E. and Baker, R. E. and Maini, P. K. and Aragón, J. L. and Barrio, R. A. (2010) Analysis of stationary droplets in a generic Turing reaction-diffusion system. Physical Review E, 82 (5). ISSN 1063-651X Aragón, J. L. and Torres, M. and Gil, D. and Barrio, R. A. and Maini, P. K. (2002) How to generate pentagonal symmetry using Turing systems. Physical Review E, 65 (5). 051913-1-051913-9. Barrio, R. A. and Maini, P. K. and Aragón, J. L. and Torres, M. (2002) Size dependent symmetry breaking in models for morphogenesis. Physica D, 168-169 (1). pp. 61-72. Barrio, R. A. and Varea, C. and Aragón, J. L. and Maini, P. K. (1999) A two-dimensional numerical study of spatial pattern formation in interacting Turing systems. Bulletin of Mathematical Biology, 61 (3). pp. 483-505. Aragón, J. L. and Varea, C. and Barrio, R. A. and Maini, P. K. (1998) Spatial patterning in modified Turing systems: Application to pigmentation patterns on marine fish. FORMA, 13 (3). pp. 213-221. ISSN 0911-6036 Gil, D Aragón, J. L. and Torres, M. and Gil, D. and Barrio, R. A. and Maini, P. K. (2002) How to generate pentagonal symmetry using Turing systems. Physical Review E, 65 (5). 051913-1-051913-9. Maini, P Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Non-linear effects on Turing patterns: time oscillations and chaos. Physical Review Letters . (Submitted) Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Nonlinear effects on Turing patterns: time oscillations and chaos. Physical Review E, 86 (2). 026201-1. ISSN 1063-651X Woolley, T. E. and Baker, R. E. and Maini, P. K. and Aragón, J. L. and Barrio, R. A. (2010) Analysis of stationary droplets in a generic Turing reaction-diffusion system. Physical Review E, 82 (5). ISSN 1063-651X Aragón, J. L. and Naumis, G. G. and Bai, M. and Torres, M. and Maini, P. K. (2008) Turbulent luminance in impassioned van Gogh paintings. Journal of Mathematical Imaging and Vision, 30 (3). pp. Aragón, J. L. and Torres, M. and Gil, D. and Barrio, R. A. and Maini, P. K. (2002) How to generate pentagonal symmetry using Turing systems. Physical Review E, 65 (5). 051913-1-051913-9. Barrio, R. A. and Maini, P. K. and Aragón, J. L. and Torres, M. (2002) Size dependent symmetry breaking in models for morphogenesis. Physica D, 168-169 (1). pp. 61-72. Barrio, R. A. and Varea, C. and Aragón, J. L. and Maini, P. K. (1999) A two-dimensional numerical study of spatial pattern formation in interacting Turing systems. Bulletin of Mathematical Biology, 61 (3). pp. 483-505. Aragón, J. L. and Varea, C. and Barrio, R. A. and Maini, P. K. (1998) Spatial patterning in modified Turing systems: Application to pigmentation patterns on marine fish. FORMA, 13 (3). pp. 213-221. ISSN 0911-6036 Naumis, G Aragón, J. L. and Naumis, G. G. and Bai, M. and Torres, M. and Maini, P. K. (2008) Turbulent luminance in impassioned van Gogh paintings. Journal of Mathematical Imaging and Vision, 30 (3). pp. Torres, M Aragón, J. L. and Naumis, G. G. and Bai, M. and Torres, M. and Maini, P. K. (2008) Turbulent luminance in impassioned van Gogh paintings. Journal of Mathematical Imaging and Vision, 30 (3). pp. Aragón, J. L. and Torres, M. and Gil, D. and Barrio, R. A. and Maini, P. K. (2002) How to generate pentagonal symmetry using Turing systems. Physical Review E, 65 (5). 051913-1-051913-9. Barrio, R. A. and Maini, P. K. and Aragón, J. L. and Torres, M. (2002) Size dependent symmetry breaking in models for morphogenesis. Physica D, 168-169 (1). pp. 61-72. Varea, C Barrio, R. A. and Varea, C. and Aragón, J. L. and Maini, P. K. (1999) A two-dimensional numerical study of spatial pattern formation in interacting Turing systems. Bulletin of Mathematical Biology, 61 (3). pp. 483-505. Aragón, J. L. and Varea, C. and Barrio, R. A. and Maini, P. K. (1998) Spatial patterning in modified Turing systems: Application to pigmentation patterns on marine fish. FORMA, 13 (3). pp. 213-221. ISSN 0911-6036 Woolley, T Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Non-linear effects on Turing patterns: time oscillations and chaos. Physical Review Letters . (Submitted) Aragón, J. L. and Barrio, R. A. and Woolley, T. E. and Baker, R. E. and Maini, P. K. (2012) Nonlinear effects on Turing patterns: time oscillations and chaos. Physical Review E, 86 (2). 026201-1. ISSN 1063-651X Woolley, T. E. and Baker, R. E. and Maini, P. K. and Aragón, J. L. and Barrio, R. A. (2010) Analysis of stationary droplets in a generic Turing reaction-diffusion system. Physical Review E, 82 (5). ISSN 1063-651X This list was generated on Wed Apr 16 08:21:54 2014 BST.
{"url":"http://eprints.maths.ox.ac.uk/view/author/Arag=F3n=3AJ=2E_L=2E=3A=3A.html","timestamp":"2014-04-16T07:23:03Z","content_type":null,"content_length":"25654","record_id":"<urn:uuid:0177bca8-db08-49be-94b8-f7ddbc0d25d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
snoburbia, the blog The TI-84 graphing calculator. In snoburbia, every student in Algebra 1 or above is required to have one - at $99.99 apiece. The TI-84 is a gold mine - but not for math. The real benefit is that snoburban dads can use it to drop how advanced their kids are in math. Bonus! For example, this recent listserv request: "Does anyone have a TI-84 calculator for my 7th grade daughter who's in Algebra I?" Or this article disguised as a rant against the costly TI-83: I turned to my Daughter. She's just finished 9th grade pre-International Baccalaureate Geometry, and also has a mandatory TI-83 graphing calculator. Some enterprising youths have made a killing on eBay with my son's TI-83 calculators - four stolen in two years. My husband is convinced he has bought back the exact same calculator four times. Sorry, I'm getting old. My father received a calculator as a graduation gift. I was maybe $500 and it could add, subtract, multiply and divide. When I was in high school, I had a Radio Shack programmable calculator that was so novel that my French teacher agreed that I could use it for my test, doubtful that it would be any advantage: I programmed in conjugations and got only of my only good grades in the class. Let's see what TI has to offer when my girls get to the 7th grade... Funny, when I was taking those classes in HS and college, graphing calculators were prohibited because (a) they were programmable and students could bring in notes or share questions with friends taking the test later and (b) we were expected to LEARN how to graph. I can still plot a mean hyperbola . . . After 10 years those calculators still have not dropped in price...supply and demand is awesome. In 1995 my mother was shocked that a $100 calculator was a requirement and of course a new and better edition came out before I graduated. Once I got to college all engineering majors needed a TI-89 for calc and of course differential equations and linear algebra. Who would ever do matrices by hand, I can second the comments on prices. I remember getting my TI-83 (no plus!) in 7th grade at about $100. Almost fifteen years later, I'm in grad school and still using the thing, and it would (still) cost me $100 to replace it. I'm the underachiever. I am. If you weren't taking Algebra 1 in 8th grade you were a loser (there were only like 5 kids who took Algebra 1 in 7th grade) and I was taking Math C or Pre-Algebra 1 in 8th grade. The school had put the TI-83 calculator on the back-to-school supplies list, and as much as I told my mother that I would not need the thing for another year, she bought it anyway because "you never know". I didn't need it that year. By the time 9th grade rolled around, the new TI-83+ and the TI-84 calculators came about, and I wanted one of them because you could change the colors... so I tried to hide the calculator she had gotten the year before. She found it. I still have it, granted I need it for Algebra 2 this year (I'm a senior taking Algebra 2... I don't meet the 7 keys to success) but whatever.
{"url":"http://blog.snoburbia.com/the_snoburbs/2010/08/ti84.html","timestamp":"2014-04-23T11:47:15Z","content_type":null,"content_length":"38488","record_id":"<urn:uuid:5b7c8ba6-87cc-4789-a978-bf654342e65b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Issue 49 The two major events over the last couple of months have been the credit crunch and the US presidential election. We take a mathematical view of both of these, muse over the surprising effectiveness of maths when it comes to describing the world we live in, and scrutinise some mathematical philosophy. Plus the usual mix of news, reviews and podcasts.
{"url":"http://plus.maths.org/content/issue/49","timestamp":"2014-04-19T17:21:30Z","content_type":null,"content_length":"40957","record_id":"<urn:uuid:420de9b3-5f72-4ca5-a3cb-8d4fd002a448>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: RE: Already Differenced Variables in xtabond2 [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] RE: st: RE: Already Differenced Variables in xtabond2 From "David Roodman" <DRoodman@CGDEV.ORG> To <statalist@hsphsun2.harvard.edu> Subject RE: st: RE: Already Differenced Variables in xtabond2 Date Wed, 9 Nov 2005 11:13:12 -0500 Jean Salvati is right that the problem is that there is no straightforward way in xtabond2 to prevent differencing of regressors in difference GMM-no direct analogy to the diff() option of xtabond. Seems to me the easiest thing to do is to undifference your variables before entering them in xtabond2-that is, create variables whose first differences are the variables you want to enter. Here's how to do it. If x is the already differenced variable, type: gen xc = x if L.x >= . replace xc = x + L.xc if L.xc < . D.xc will then equal c. ("c" for cumulative.) A couple things need explaining here. First missing (.) is like +infinity in most mathematical expressions. And other missing values (.a, .b., etc.) are greater than ".". That's why I do ">= ." and "< ." to determine if a value is missing or not. Second, the replace command is self-referential in that xc is on both the left and right side. Stata will do what you want here, first computing xc for one period, then moving to then next period and computing the next xc as a function of L.xc. Below is an example where I get nearly identical results with xtabond and xtabond2 using this. Before going to that let me mention a few other sources of difficulty in making xtabond and xtabond2. One is that xtabond treats the constant term as an already-differenced exogenous variable. It enters it straight in, which is equivalent to entering *time* as a regressor in your levels-equation model. xtabond2 differences the constant away. Second, in xtabond2 every variable ordinarily appears twice in the command line, once as a regressor, once as a basis for instrumenting. Third, the default in xtabond2 is system GMM. Use "noleveleq" in xtabond2 for difference GMM. Finally, last I checked there appeared to be a bug in xtabond that occurs when the time series for an individual is interrupted in the middle. It may be that when the core xtabond code (which is not public and not an .ado) tries to obtain lags of variables that are in fact missing from the regression sample, it jumps back farther to use available but incorrect values. Here's the example of a perfect match. There is another at the bottom of the xtabond2 help file, and another in bbest.do, an auxiliary file that comes with xtabond2. --David Roodman . clear . webuse abdata . gen kc = k if L.k >= . (891 missing values generated) . replace kc = k + L.kc if L.kc < . (891 real changes made) . xtabond n, diff(k) nocons robust Arellano-Bond dynamic panel-data estimation Number of obs = 751 Group variable (i): id Number of groups = 140 Wald chi2(.) = . Time variable (t): year Obs per group: min = 5 avg = 5.364286 max = 7 One-step results | Robust D.n | Coef. Std. Err. z P>|z| [95% Conf. Interval] n | LD. | 1.203445 .078324 15.36 0.000 1.049933 1.356957 k | -.004177 .0020447 -2.04 0.041 -.0081845 -.0001695 Arellano-Bond test that average autocovariance in residuals of order 1 is 0: H0: no autocorrelation z = -2.48 Pr > z = 0.0131 Arellano-Bond test that average autocovariance in residuals of order 2 is 0: H0: no autocorrelation z = -1.39 Pr > z = 0.1652 . xtabond2 n L.n kc, iv(kc) gmm(L.n) noleveleq robust Building GMM instruments.. Performing specification tests. Arellano-Bond dynamic panel-data estimation, one-step difference GMM results Group variable: id Number of obs = 751 Time variable : year Number of groups = 140 Number of instruments = 29 Obs per group: min = 5 F(2, 139) = 136.10 avg = 5.36 Prob > F = 0.000 max = 7 | Robust | Coef. Std. Err. z P>|z| [95% Conf. Interval] n | L1. | 1.203445 .078324 15.36 0.000 1.049933 1.356957 kc | -.004177 .0020447 -2.04 0.041 -.0081845 -.0001695 Hansen test of overid. restrictions: chi2(27) = 69.06 Prob > chi2 = 0.000 Arellano-Bond test for AR(1) in first differences: z = -2.48 Pr > z = 0.013 Arellano-Bond test for AR(2) in first differences: z = -1.39 Pr > z = 0.165 David Roodman Research Fellow Center for Global Development * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2005-11/msg00275.html","timestamp":"2014-04-20T13:50:05Z","content_type":null,"content_length":"10198","record_id":"<urn:uuid:13f1cd35-2b7e-4760-b4d8-124fde251505>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Automating the search for elegant proofs , 1996 "... Introduction Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will lead more quickly to proofs than will the use of B. The literature contains statements of the sort "ou ..." Cited by 24 (5 self) Add to MetaCart Introduction Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will lead more quickly to proofs than will the use of B. The literature contains statements of the sort "our rule is complete and it heavily prunes the search space; therefore it is efficient". 2 These positions are highly questionable and indicate that the authors have little or no experience with the practical use of automated inference systems. Restrictive rules (1) can block short, easy-to-find proofs, (2) can block proofs involving simple clauses, the type of clause on which many practical searches focus, (3) can require weakening of redundancy control such as subsumption and demodulation, and (4) can require the use of complex checks in deciding whether such rules should be applied. The only way to determ - American Mathematical Monthly , 2001 "... 1. INTRODUCTION. For geometers, Hilbert’s influential work on the foundations of geometry is important. For analysts, Hilbert’s theory of integral equations is just as important. But the address “Mathematische Probleme ” [37] that David Hilbert (1862– 1943) delivered at the second International Cong ..." Cited by 10 (4 self) Add to MetaCart 1. INTRODUCTION. For geometers, Hilbert’s influential work on the foundations of geometry is important. For analysts, Hilbert’s theory of integral equations is just as important. But the address “Mathematische Probleme ” [37] that David Hilbert (1862– 1943) delivered at the second International Congress of Mathematicians (ICM) in Paris has tremendous importance for all mathematicians. Moreover, a substantial part of , 1997 "... Experimentation strongly suggests that, for attacking deep questions and hard problems with the assistance of an automated reasoning program, the more effective paradigms rely on the retention of deduced information. A significant obstacle ordinarily presented by such a paradigm is the deduction and ..." Cited by 9 (4 self) Add to MetaCart Experimentation strongly suggests that, for attacking deep questions and hard problems with the assistance of an automated reasoning program, the more effective paradigms rely on the retention of deduced information. A significant obstacle ordinarily presented by such a paradigm is the deduction and retention of one or more needed conclusions whose complexity sharply delays their consideration. To mitigate the severity of the cited obstacle, I formulated and feature in this article the hot list strategy. The hot list strategy asks the researcher to choose, usually from among the input statements characterizing the problem under study, one or more statements that are conjectured to play a key role for assignment completion. The chosen statements---conjectured to merit revisiting, again and again---are placed in an input list of statements, called the hot list. When an automated reasoning program has decided to retain a new conclusion C---before any other statement is chosen to initiat... - J. Automated Reasoning , 2000 "... For more than three and one-half decades beginning in the early 1960s, a heavy emphasis on proof finding has been a key component of the Argonne paradigm, whose use has directly led to significant advances in automated reasoning and important contributions to mathematics and logic. The theorems t ..." Cited by 8 (5 self) Add to MetaCart For more than three and one-half decades beginning in the early 1960s, a heavy emphasis on proof finding has been a key component of the Argonne paradigm, whose use has directly led to significant advances in automated reasoning and important contributions to mathematics and logic. The theorems that have served well range from the trivial to the deep, even including some that corresponded to open questions. Often the paradigm asks for a theorem whose proof is in hand but that cannot be obtained in a fully automated manner by the program in use. The theorem whose hypothesis consists solely of the Meredith single axiom for two-valued sentential (or propositional) calculus and whose conclusion is the Lukasiewicz three-axiom system for that area of formal logic was just such a theorem. Featured in this article is the methodology that enabled the program OTTER to find the first fully automated proof of the cited theorem, a proof with the intriguing property that none of its - Technical Memorandum ANL/MCS-TM-221, Mathematics and Computer Science Division, Argonne National Laboratory , 1997 "... ..." "... Throughout the twentieth century, the worlds of logic and mathematics were well aware of Hilbert’s twenty-three problems and the challenge they offered. Although not known until very recently, there existed yet one more challenge offered by Hilbert, his twenty-fourth problem. This problem focuses on ..." Add to MetaCart Throughout the twentieth century, the worlds of logic and mathematics were well aware of Hilbert’s twenty-three problems and the challenge they offered. Although not known until very recently, there existed yet one more challenge offered by Hilbert, his twenty-fourth problem. This problem focuses on finding simpler proofs, on the criteria for measuring simplicity, and on the ‘‘development of a theory of the method of proof in mathematics in general’’. Of the three themes of Hilbert’s twenty-fourth prob-lem, the first two are central to this article. We visit various areas of logic, showing that some of the studies of the masters are indeed strongly connected to this newly discovered problem. We also demonstrate that the use of an automated reasoning program (specifically, W. McCune’s OTTER) enables one to address this challenging problem. We offer questions that remain unanswered. "... Gibbard [2] presents an argument to the effect that any conditional satisfying certain principles must be equivalent to the material (viz., classical) conditional. Here is one rendition of Gibbard’s (informal) argument. Let be the classical material conditional, and let be the indicative conditional ..." Add to MetaCart Gibbard [2] presents an argument to the effect that any conditional satisfying certain principles must be equivalent to the material (viz., classical) conditional. Here is one rendition of Gibbard’s (informal) argument. Let be the classical material conditional, and let be the indicative conditional. Suppose that the indicative satisfies the import-export law. Thatis, suppose (IE) A B C is logically equivalent to A & B C.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2492075","timestamp":"2014-04-16T22:20:32Z","content_type":null,"content_length":"27190","record_id":"<urn:uuid:ff8ef170-6a73-4193-af44-27a9b3f4af69>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00004-ip-10-147-4-33.ec2.internal.warc.gz"}