logical question

Chris Daniel (Ranch Hand; Joined: Jul 10, 2001; Posts: 1934):
This is what I got a couple of weeks ago; some of you probably know about it. There are 9 rectangles, 8 of which are of the same weight and 1 with a different weight. You are given a balance (kind of like a pendulum) on which you can place the items to pick out the one with the odd weight (the one that is not the same as the 8 others). Give the least number of weighings in which you can find the item that is different in weight. You can place the items anywhere on the balance (in the center, on the left, on the right, or in all three places in any combination). I need a couple of tips before I can nail it.
[February 25, 2004: Message edited by: Chris Daniel]
SCJP, blog

Ken Smart (Joined: Feb 11, 2004; Posts: 25):
Looks to me like you need 4 weighings in the worst case. There are many ways to do it: put 3 on each side, or 4 on each side, to start with. But the odd one may be heavier or lighter than normal, so it takes more weighings than you may think.
[February 24, 2004: Message edited by: Ken Smart]

Ranch Hand (Joined: Jul 27, 2001; Posts: 1365):
I'm going to cheat a bit. I already know the solution with 13 balls is 3, so I'm going to say that it can't take more than 3 weighings. It's also going to take at least two weighings, since I can't possibly solve the problem in one weighing. The question becomes: can I do it in two? If my first weighing is the maximum possible 4 and 4, I also know from the above-mentioned problem that it'll take at least two more weighings to solve the problem. If my first weighing is 2 and 2, then if it comes out even I'm left with 5 squares I know nothing about -- I can't really solve that in one weighing. 1 and 1 is even worse. So if I can do it in two weighings when my first weighing is 3 and 3, then I have a best solution. If not, I know I can always do it in 3 weighings. If I weigh 3 vs. 3 and it comes out even, I'm left with 3 squares I know nothing about. Weighing one of them against another doesn't immediately provide an answer, since we don't know if the odd one is heavier or lighter. Putting them on the same side of the scale certainly isn't going to help. And so I conclude that the answer is 3 weighings. There's probably a short and pretty solution using information theory, but that's not my department.

Jim Yingst (Sheriff; Joined: Jan 30, 2000; Posts: 18671):
It's possible to solve this in one weighing if you're really lucky. Put 4 on the left side of the balance, and 4 on the right. If they balance, the remaining rectangle is the odd one.
"I already know the solution with 13 balls is 3"
It is? I'm familiar with a 12-ball puzzle where you need 3 weighings; I don't see how to do it for 13 balls, unless you know in advance whether the odd ball is heavier or lighter. And if you know that, then you can actually handle 27 balls in 3 weighings. Why stop at 13? Maybe there's something different about the 13-ball problem which I'm missing.
[February 24, 2004: Message edited by: Jim Yingst]
"I'm not back." - Bill Harding, Twister

Chris Daniel (Ranch Hand; Joined: Jul 10, 2001; Posts: 1934):
Assume it is heavier than the others; tell me what it could be.
Leverager of our synergies

Sheriff (Joined: Aug 26, 2000; Posts: 10065):
I agree with David about 3. The first weighing: 3 vs. 3. If they are equal, the wrong rectangle is among the other 3 and the rest is trivial. If they are not, then we have 3 rectangles left that we know are standard. Use them to replace either of the two groups that were weighed. Now if the result is "equal" again, then the 3 suspicious rectangles are the ones that were replaced, and we know whether the odd one is heavier or lighter (depending on whether those 3 were lighter or heavier on the last weighing). Weigh any two of them, and the problem is solved. If the result is "not equal", then we have the same situation as in the previous case: 3 suspicious rectangles that we know are lighter or heavier, only now they are on the other side of the scales (up or down).
Uncontrolled vocabularies
"I try my best to make *all* my posts nice, even when I feel upset" -- Philippe Maquet

Jim Yingst (Sheriff; Joined: Jan 30, 2000; Posts: 18671):
If you know the odd rectangle is heavier than the others, 2 weighings are sufficient. Weigh 3 vs. 3. If one side goes down, one of those 3 is the heavy one. Use the second weighing to weigh 2 of those 3: either one side goes down, and you know which is heavier, or they balance, and the one you left out is the heaviest. If the first weighing balanced, then you've got 3 remaining rectangles to investigate. Weigh two of them against each other: if one side goes down, that one is the heavy one; if they balance, it's the one you left out.

Karthikeyan Rajendraprasad (Ranch Hand; Joined: Apr 16, 2003; Posts: 70):
2 weighings is sufficient. Group the 9 squares as 3 in a group, say A, B and C. Now weigh A and B. If one of them is heavier, then the odd square is in that one; if they are equal, then the odd square is in the third group. Now we have found the odd group. Take two squares from that group and weigh them. If one is heavier, then that is the odd square; if they are equal, then the third square in the group is the odd one. So 2 weighings is all we require to find it out.
Karthikeyan
SCJP 1.4, SCWCD.

Ranch Hand (Joined: Mar 11, 2002; Posts: 1140):
Originally posted by Karthikeyan Rajendraprasad: "2 weighings is sufficient"
Only if you are told in advance whether the odd piece is heavier or lighter than the other pieces. But if you don't know whether the piece is heavier or lighter, then you need a minimum of 3 tries.
Quaerendo Invenietis

Ranch Hand (Joined: Jul 27, 2001; Posts: 1365):
"It is? I'm familiar with a 12-ball puzzle where you need 3 weighings; I don't see how to do it for 13 balls, unless you know in advance whether the odd ball is heavier or lighter"
I originally encountered the 12-ball problem a few years ago and faced the 13-ball problem just recently in class. Notice that this question asks us to identify the different square but doesn't require us to determine whether it's lighter or heavier. We can only do 12 if we also have to figure out whether it weighs more or less.
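One concrete 3-weighing strategy along the lines discussed in the thread can be checked by brute force. This is an illustrative sketch (the function names are my own, not from the thread); it tries all 18 scenarios -- 9 positions for the odd rectangle, each either heavier or lighter -- and confirms that the strategy always identifies the odd one in at most 3 weighings:

```python
def find_odd(weights):
    """Adaptively locate the odd item among 9 on a balance scale.

    Returns (index_of_odd_item, number_of_weighings_used)."""
    count = 0

    def weigh(left, right):
        nonlocal count
        count += 1
        l = sum(weights[i] for i in left)
        r = sum(weights[i] for i in right)
        return (l > r) - (l < r)  # +1 left heavy, 0 balanced, -1 right heavy

    A, B, C = [0, 1, 2], [3, 4, 5], [6, 7, 8]
    r1 = weigh(A, B)
    if r1 == 0:                        # odd is in C; everything in A is good
        r2 = weigh([C[0]], [C[1]])
        if r2 == 0:
            return C[2], count
        r3 = weigh([C[0]], [A[0]])     # compare against a known-good item
        return (C[0] if r3 != 0 else C[1]), count
    r2 = weigh(A, C)                   # C is known good; retest A against it
    if r2 == 0:
        suspects, heavy = B, (r1 == -1)   # odd in B; r1 gives its direction
    else:
        suspects, heavy = A, (r2 == 1)    # odd in A; r2 gives its direction
    r3 = weigh([suspects[0]], [suspects[1]])
    if r3 == 0:
        return suspects[2], count
    if heavy:
        return (suspects[0] if r3 == 1 else suspects[1]), count
    return (suspects[0] if r3 == -1 else suspects[1]), count

# Exhaustive check: every position, both heavier and lighter.
worst = 0
for odd in range(9):
    for delta in (+1, -1):
        w = [10] * 9
        w[odd] += delta
        found, n = find_odd(w)
        assert found == odd
        worst = max(worst, n)
print(worst)  # → 3
```

Note that this only identifies which rectangle is odd, not whether it is heavier or lighter, which is exactly the distinction the last post makes about the 12- vs. 13-ball variants.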
{"url":"http://www.coderanch.com/t/35251/Programming/logical","timestamp":"2014-04-20T06:27:01Z","content_type":null,"content_length":"40387","record_id":"<urn:uuid:449c792c-a5f1-4d1b-9908-de22cab471fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/updates/5010b398e4b009397c68b35c","timestamp":"2014-04-20T10:52:36Z","content_type":null,"content_length":"89789","record_id":"<urn:uuid:eb2a7f5b-1aa8-4aeb-93b4-dc1bb268c1d4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
The measure of an interior angle of a regular polygon is 165.6°. How many sides make up the regular polygon?

A regular polygon has equal interior angles, and the formula for the sum of the interior angles is (n − 2)·180 = total degrees, where n equals the number of sides. Also, the exterior angles always sum to 360°. Using these facts we can solve this question:

1) Find the exterior angle: 180 − 165.6 = 14.4°.
2) Divide 360 by the exterior angle to get the number of sides: 360 / 14.4 = 25.

Equivalently, solve \[\frac{ (n - 2) \times 180 }{ n } = 165.6\] for n, which gives n = 25.

The "n − 2" is the number of triangles one can get from the polygon by drawing diagonals from one given vertex to the n − 3 other vertices (all vertices except the given vertex and the 2 adjacent vertices). Example: look at a square. Take one vertex as "the vertex"; you can draw only one diagonal.
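The exterior-angle shortcut is easy to check numerically. A minimal sketch (the function name is my own):

```python
def polygon_sides(interior_deg):
    # Each exterior angle is 180 - interior, and the exterior angles of a
    # regular polygon always sum to 360, so n = 360 / exterior.
    exterior = 180.0 - interior_deg
    return round(360.0 / exterior)

print(polygon_sides(165.6))  # → 25
```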
{"url":"http://openstudy.com/updates/50e1d4ede4b0e36e3513d298","timestamp":"2014-04-20T06:22:34Z","content_type":null,"content_length":"44720","record_id":"<urn:uuid:aae64324-f6f6-4da5-8ba2-7b3a9173a816>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Intuition behind the diagonal intersection

Suppose that for all $\alpha<\kappa$ we have $A_\alpha\subseteq\kappa$. We define the diagonal intersection to be $$\bigtriangleup_{\alpha<\kappa}A_\alpha = \left\lbrace\xi<\kappa\ \middle|\ \xi\in\bigcap_{\alpha<\xi}A_\alpha\right\rbrace.$$ One of the most surprising theorems in basic set theory, I think, is that if each $A_\alpha$ is closed and unbounded (and $\kappa$ is regular and uncountable) then this diagonal intersection is also a closed and unbounded set. Looking at it from a measure-theoretic point of view now, clubs correspond to sets of measure one. Is there any measure-theoretic operation which corresponds to diagonal intersections? Are there possibly other analogies in mathematics which can be used to describe this construction in a rather simple way that non-set-theorists could relate to? Furthermore, it is quite clear that changing the order of the $A_\alpha$ or taking a subsequence can completely change the resulting set. Is there some invariance? For example, up to order the result is unique modulo a non-stationary set?

Tags: lo.logic, set-theory, measure-theory

Comment: Just a nit-pick: $\kappa$ must be an uncountable regular cardinal. – Trevor Wilson Sep 14 '12 at 15:55
Comment: Thanks Trevor, I corrected this. – Asaf Karagila Sep 14 '12 at 17:46

Answer: The diagonal intersection corresponds to the infimum in the boolean algebra $\mathbb{B}_I := P(\kappa)/I$ (where $I$ is the nonstationary ideal, or more generally where $I$ is any normal ideal on $\kappa$). More precisely: if $Z \subset P(\kappa)/I$ and $|Z| = \kappa$, then $Z$ has an infimum in $\mathbb{B}_I$, and this infimum is exactly "the" diagonal intersection of representatives from the members of $Z$. (This diagonal intersection does not depend on the particular $\kappa$-enumeration of $Z$ or the choice of representatives from the equivalence classes in $Z$; they'll all yield the same element of $\mathbb{B}_I$.)

Comment: Hm, that is quite a wonderful insight. I have two follow-up questions now: (1) Is there yet a measure-theoretic notion corresponding to that? (2) If we take an arbitrary ideal $I$, is there a reasonable way to describe the operation $\inf[X_\alpha]$ in $P(\kappa)/I$, or is that just a very nice property of normal ideals? – Asaf Karagila Sep 14 '12 at 14:57

Comment: I do not think there is a nice notion of $\inf[X_{\alpha}]$ for arbitrary ideals $I$, since the $\inf[X_{\alpha}]$ generally does not exist. For instance, in $P(\omega)/\mathrm{fin}$, where $\mathrm{fin}$ is the ideal consisting of finite sets, the least upper bound of a countable sequence of elements almost never exists, since there is no countable strictly increasing sequence of elements $(x_{n})$ in $P(\omega)/\mathrm{fin}$ with a least upper bound. – Joseph Van Name Sep 14 '12 at 16:06

Comment: Regarding Asaf's questions: 1) I don't know, other than viewing elements of a Boolean algebra as being 0, 1, or "positive". 2) It's just a very nice property of normal ideals. For non-normal ideals on $\kappa$, sups and infs of $\kappa$-sized subsets of $\mathbb{B}_I$ do not necessarily exist, as shown by Joseph's example (his example was a non-normal ideal on $\kappa = \omega$, but more generally, for any regular $\kappa$ the ideal $J$ of bounded subsets of $\kappa$ is $\kappa$-complete, non-normal, and some $\kappa$-sized subsets of $\mathbb{B}_{J}$ fail to have sups). – Sean Cox Sep 14 '12 at 17:02

Comment: Ah, I see. Thanks Joseph and Sean. Is normality somehow equivalent to completeness (or some $\kappa^+$-closedness) of the quotient algebra? – Asaf Karagila Sep 14 '12 at 17:33

Comment: Asaf, it is not equivalent. Suppose $I$ is a $\kappa^+$-saturated ideal on a regular $\kappa$. Let $f$ be a bijection between the limit ordinals and successor ordinals of $\kappa$. Then $f$ projects $I$ to a $\kappa$-complete, $\kappa^+$-saturated ideal $J$ for which the set of successor ordinals is measure one. $J$ is not normal, but $P(\kappa)/J$ is a complete boolean algebra. – Monroe Eskew Sep 14 '12 at 18:04
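For readers who want the two directions of the answer's claim spelled out, here is a sketch of the usual argument (my own reconstruction, not text from the thread), in LaTeX:

```latex
% Fix a normal ideal $I$ on $\kappa$ containing all bounded sets, and sets
% $A_\alpha \subseteq \kappa$ for $\alpha < \kappa$.
%
% Lower bound: if $\xi \in \bigtriangleup_{\alpha<\kappa} A_\alpha$ then
% $\xi \in A_\beta$ for every $\beta < \xi$, so
% $\bigtriangleup_{\alpha<\kappa} A_\alpha \setminus A_\beta \subseteq \beta+1 \in I$,
% hence $[\bigtriangleup_{\alpha<\kappa} A_\alpha] \le [A_\beta]$ in
% $P(\kappa)/I$ for every $\beta < \kappa$.
%
% Greatest lower bound: if $[B] \le [A_\alpha]$ for all $\alpha$, i.e. each
% $B \setminus A_\alpha \in I$, then normality (closure of $I$ under diagonal
% unions) gives
% $B \setminus \bigtriangleup_{\alpha<\kappa} A_\alpha \subseteq
%  \bigtriangledown_{\alpha<\kappa} (B \setminus A_\alpha) \in I$,
% hence $[B] \le [\bigtriangleup_{\alpha<\kappa} A_\alpha]$.
```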
{"url":"http://mathoverflow.net/questions/107179/intuition-behind-the-diagonal-intersection?sort=oldest","timestamp":"2014-04-19T22:24:52Z","content_type":null,"content_length":"61077","record_id":"<urn:uuid:4576d1f5-13ef-4da8-bb3b-8bd5554307eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Why can an obtuse triangle have only one obtuse angle? Use the Triangle Angle Sum Theorem. Or try fitting two elephants into a one-bedroom flat. Let's see what happens if we say that ∠1, ∠2, and ∠3 make up the interior angles of a triangle and both ∠1 and ∠2 are obtuse. Obtuse angles are greater than 90°, so the measures of ∠1 and ∠2 alone would already add up to more than 180°. That means ∠3 would have to be negative, and we can't have negative angles.
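A throwaway check of that arithmetic (the 91° values are just an example pair):

```python
# If two interior angles were both obtuse (> 90°), the third would be
# forced negative by the Triangle Angle Sum Theorem.
a1, a2 = 91, 91            # smallest whole-degree obtuse pair
a3 = 180 - (a1 + a2)
print(a3)  # → -2
```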
{"url":"http://www.shmoop.com/congruent-triangles/triangle-types-exercises.html","timestamp":"2014-04-16T22:16:52Z","content_type":null,"content_length":"35328","record_id":"<urn:uuid:5f8fb076-8800-44eb-bdfc-96e447475056>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Pressure and flow rates

There is a pipe whose diameter narrows smoothly from 10 cm to 5.0 cm. The pressure where the pipe's diameter is 5 cm is 50 kPa. Given water's density of 1000 kg/m^3 and a flow rate of 5.0 L/s, what is the pressure at the point where the pipe's diameter is 10 cm?

First the continuity equation, then Bernoulli's equation, substituting v1 = A2v2/A1. I left the potential energy out of the equation because it is negligible given the number of significant figures we have; even when I did the calculation with potential energy added, the result was essentially the same. So I got 79993 Pa. I know it is very wrong but I cannot check it since I don't have the answers. Any help please?

Reply: Your approach is difficult to follow because you are plugging in numbers too soon. You have made a simple arithmetic error. I am not sure what you did; the second 2.546 should be squared, but that does not explain the error. Do the analysis first, then plug in the numbers at the end. You appear to be using Bernoulli's equation (ignoring gravitational potential):

(1) [tex]P_1 + \frac{1}{2}\rho v_1^2 = P_2 + \frac{1}{2}\rho v_2^2[/tex]

and continuity:

(2) [tex]v_1A_1 = \frac{dV}{dt} = v_2A_2[/tex]

so [itex]v_2 = v_1A_1/A_2[/itex], and from (1) then:

[tex]P_2 = P_1 + \frac{1}{2}\rho v_1^2 - \frac{1}{2}\rho \frac{v_1^2A_1^2}{A_2^2} = P_1 + \frac{1}{2}\rho v_1^2\left(1-\frac{A_1^2}{A_2^2}\right)[/tex]

So far, this is what you have done. Now plug in your numbers, taking point 1 as the narrow (5 cm) section, so [itex]A_1/A_2 = (5/10)^2 = 1/4[/itex] and [itex]v_1 = 2.546[/itex] m/s:

[tex]P_2 = 50000 + 0.5 \times 1000 \times 2.546^2\left(1-\tfrac{1}{16}\right) \approx 50000 + 3040 = 53040 \text{ Pa} \approx 53 \text{ kPa}[/tex]
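The numbers in this thread are easy to check end-to-end. A quick script (variable names are mine, not from the thread) that applies continuity and then Bernoulli for a level pipe:

```python
import math

rho = 1000.0          # water density, kg/m^3
Q = 5.0e-3            # flow rate, m^3/s (5.0 L/s)
d1, d2 = 0.05, 0.10   # narrow and wide diameters, m
P1 = 50e3             # pressure at the narrow point, Pa

A1 = math.pi * (d1 / 2) ** 2
A2 = math.pi * (d2 / 2) ** 2
v1, v2 = Q / A1, Q / A2                  # continuity: v = Q / A
P2 = P1 + 0.5 * rho * (v1**2 - v2**2)    # Bernoulli, level pipe
print(round(P2))  # → 53040
```

The wide section is slower, so its pressure is higher: about 53 kPa.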
{"url":"http://www.physicsforums.com/showthread.php?t=135276","timestamp":"2014-04-17T18:41:24Z","content_type":null,"content_length":"24585","record_id":"<urn:uuid:a642d565-5af2-4682-9a24-c1b2ac71fff8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra I/Lines (Linear Functions)/Chapter Review
From Wikibooks
Lesson 1. Graphing Points
Lesson 2. Equations in Two Variables
Lesson 3. Graphing Lines (Linear Functions)
Lesson 4. Slope of a Line
Lesson 5. Slope-Intercept Form of a Line
Lesson 6. Find Parallel and Perpendicular Lines
Lesson 7. Find the Equation of the Line Using Two Points
Lesson 8. Functions and Relations
{"url":"https://simple.wikibooks.org/wiki/Algebra_I/Lines_(Linear_Functions)/Chapter_Review","timestamp":"2014-04-16T07:30:00Z","content_type":null,"content_length":"19724","record_id":"<urn:uuid:fce4fe12-3832-463a-8bc1-c179a71c2746>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Part 2 of Statistical Sampling by Bruce Truitt
by AccountingWeb
Confessions of a Recovering Auditor
Ye Shall Know the Formula, and the Formula Shall Set Ye Free!
August 2009
By Bruce Truitt

Welcome back, Dear Reader. Our first installment in these tales of woe and intrigue (q.v., "Is Statistics a Criminal Act?") ended with your humble scribe positing that "the seed, the primum mobile, the Alpha-Omega, the thread, the DNA, the glue that ties it all together" was the following, AKA "The Formula":

Sample Size = (Confidence x Variation) / Precision

In Part One, we also established that a whole passel of folks sprinted from their final "Sadistics" exam to the nearest trading post to turn their textbook into cash (and immediately thereafter into either anti-inflammatories or an adult beverage), assuming that the dumpster wasn't too seductive en route, and noting that "passel" really should be officially recognized by the National Bureau of

Yet, those of you who avoided this rush to rubles and somehow found the Herculean stamina required to take Statistics in multiple collegiate departments ("Inconceivable!" quoth Wallace Shawn) found yourselves in Dante's eighth level of academic Hell. You likely discovered that statistics terminology varies by department and—will the fun never stop?—by textbook or author. Next time the Ambien fails, compare statistical revelations from engineering, physics, psychology, sociology, math, education, and business textbooks. Vertigo by variation is assured!

Even limiting ourselves to the parlance of auditing texts is daunting:
• "Risk of over-reliance" AKA "Confidence level" AKA "Alpha"
• "Estimated deviation rate" AKA "Expected population attribute error percentage" AKA "Standard deviation" AKA "Standard error, née Relative error"
• "Tolerable error rate" AKA "Upper error limit" AKA "Desired precision" AKA "Margin of error" AKA "Confidence interval"

With this many aliases, crimes must be afoot, or, at least, quests for immortality in a footnote.

(This part of the page deliberately blank for deep, cleansing breaths.)

Anyhoo, fact is all these terminologies do collapse into one of the three words in "The Formula"—confidence, variation, or precision. Repeat with me: Confidence. Variation. Precision. Or, even more logically:

Sample Size = (How Confident Must I Be In My Results x How Much Variation Is In The Population) / (How Precise Do I Want To Be)

Or, to totally eliminate the lingering stench of math:

The Amount Of Work I Gotta Do = (How Often I Wanna Be Right x How Screwy The World Is) / (How Close I Wanna Be To The Bullseye)

So, let's play with The Formula a bit. For example, what happens to sample size if confidence goes up, i.e., if you want to be right more often?

Sample Size ↕ ? = (Confidence ↑ x Variation) / Precision

Right you are! Sample size goes up if you want to be more confident in your results. That's reasonable with or without math. You gotta do more work to be right more often.

Okus dokus. What happens to sample size if variation in the population increases, that is, if the world gets more screwy?

Sample Size ↕ ? = (Confidence x Variation ↑) / Precision

Right again—you're good! The sample size must again go up. This also makes sense. If the stuff you are sampling turns out to be more messed up than you thought it would be, you have to sample more to figure out how screwy it really is.

Then, what happens to sample size if you want to be more precise, i.e., the numeric value of precision goes down? Does sample size go up or down if you want to get closer to the bull's eye?

Sample Size ↕ ? = (Confidence x Variation) / Precision ↓

No math needed here either. Sample size has to rise. If you wanna get closer to the bull's eye, you gotta fire more arrows (sample more). Take it from one who rarely hit the blasted bull's eye and often missed the whole target.

All this makes perfect logical sense and is easily understood without those pesky Greek and Latin letters and {formulas [formulas (formulas) formulas] formulas}. More confidence—more work. More variation—more work. More precise—more work. Period.

It does not matter whether you think about The Formula logically, linguistically, or mathematically, as long as you think about it, speak it, share it, dream it, mantra it ... without ceasing until next we speak ...

Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision
Confidence - Variation - Precision ...

Bruce Truitt has 25+ years' experience in applied statistics and government auditing, with particular focus on quantitative methods and reporting in health and human services fraud, waste, and abuse. His tools and methods are used by public and private sector entities in all 50 states and 33 foreign countries and have been recognized by the National State Auditors Association for Excellence in

He also teaches the US Government Auditor's Training Institute's "Practical Statistical Sampling for Auditors" course, is on the National Medicaid Integrity Institute's faculty, and taught Quantitative Methods in Saint Edward's University's Graduate School of Business. Bruce holds a Master of Public Affairs from the LBJ School of Public Affairs, as well as Masters' Degrees in Foreign Language Education and Russian and East European Studies from The University of Texas at Austin.
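One concrete, textbook instance of The Formula (my own illustration, not from the article) is the classic sample-size formula for estimating a proportion, where confidence enters as a squared z-score, variation as p(1 − p), and precision as the squared margin of error:

```python
import math

def sample_size(z, p, e):
    # n = z^2 * p(1-p) / e^2 : more confidence -> bigger n, more variation
    # -> bigger n, tighter precision (smaller e) -> bigger n, exactly as
    # the article argues.
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size(1.96, 0.5, 0.05))   # 95% confidence, max variation, ±5% → 385
print(sample_size(2.576, 0.5, 0.05))  # more confidence → bigger sample → 664
print(sample_size(1.96, 0.5, 0.03))   # tighter precision → bigger sample → 1068
```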
www.auditskills.com [1]
{"url":"http://www.accountingweb.com/print/206684","timestamp":"2014-04-20T07:34:30Z","content_type":null,"content_length":"18360","record_id":"<urn:uuid:2325d15d-682b-464e-a29f-a910f072d5fe>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 4: Nonlocal transport parameter (a, c) and Mach numbers (b, d) for simulations at various values of the mass ratio and cooling parameter. For the Mach number plots, heavy lines denote the Mach number as measured in an inertial (static) frame, whereas light lines show the Doppler-shifted Mach number (measured in a frame corotating with the fluid). Taken from Cossins et al. [13].
{"url":"http://www.hindawi.com/journals/aa/2012/846875/fig4/","timestamp":"2014-04-19T07:50:18Z","content_type":null,"content_length":"14679","record_id":"<urn:uuid:eb2bb610-66ef-4a58-bf74-2f74824e74d6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry: Parallel and Perpendicular Lines

February 8th 2010, 03:44 PM
I just want to know how to do an equation like this: find an equation of the line that has a y-intercept of 2 and is parallel to the graph of the line 4x + 2y = 8.

February 8th 2010, 03:47 PM
Parallel lines have the same slope, so find the slope of your first line. Then use that along with your second y-intercept to get the equation of your second line.
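A worked check of that hint (my own sketch, not the forum's answer): rearranging 4x + 2y = 8 gives y = -2x + 4, so the slope is -2; with y-intercept 2, the parallel line is y = -2x + 2.

```python
from fractions import Fraction

a, b = 4, 2              # coefficients in ax + by = c
slope = Fraction(-a, b)  # solving ax + by = c for y gives slope -a/b
intercept = 2            # the required y-intercept
print(f"y = {slope}x + {intercept}")  # → y = -2x + 2
```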
{"url":"http://mathhelpforum.com/algebra/127853-geometry-parallel-perpendicular-lines-print.html","timestamp":"2014-04-16T16:28:02Z","content_type":null,"content_length":"3623","record_id":"<urn:uuid:e4b03fa0-3cf5-427c-8f07-fe6ef0d65edb>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
External Factor Evaluation Matrix: Wal-Mart
Posted by mbalectures | Posted in Strategic Management | 12,411 views | Posted on 12-10-2010

The External Factor Evaluation (EFE) Matrix is a strategic management tool which allows strategists to examine cultural, social, economic, demographic, political, legal, and competitive information. The EFE Matrix indicates whether the firm is able to effectively take advantage of existing opportunities while minimizing external threats. Similarly, it helps strategists formulate new strategies and policies on the basis of the company's existing position. An example of an external factor evaluation (EFE) matrix is given for Wal-Mart (an American public corporation that runs a chain of large discount department stores and a chain of warehouse stores).

Steps in the Construction of an EFE Matrix
1. In the first column, list all the opportunities and threats. An EFE matrix should include 10 to 20 key external factors.
2. In the second column, assign each factor a weight that ranges from 0.0 (not important) to 1.0 (most important). The weights must sum to 1.00. (Note that the weights reflect the probable impact of each factor on the strategic position of the company.)
3. In the third column, rate each factor from 1 to 4 on the basis of the company's response to that factor (1 shows a poor response, 2 an average response, 3 an above-average response, and 4 a superior response).
4. In the fourth column, calculate the weighted score by multiplying each factor's weight by its rating.
5. Find the total weighted score by adding the weighted scores for all variables.

Wal-Mart External Factor Evaluation Matrix

By adding the weighted scores of Wal-Mart's various opportunities and threats, we get a total weighted score of 3.40. Note that the highest possible total weighted score for a firm is 4, whereas the lowest possible total weighted score is 1.

The total weighted score remains between 1 and 4 regardless of the total number of opportunities and threats, and the average total weighted score is 2.5. If a company's total weighted score is 4, the company is effectively taking advantage of existing opportunities and is also able to minimize risk. On the other hand, a total weighted score of 1 shows that the firm is not able to take advantage of current opportunities or avoid external threats. In the case of Wal-Mart, the total weighted score of 3.40 is above average, which means that Wal-Mart's strategies are effective and the company is taking advantage of existing opportunities while minimizing the potential adverse effects of external threats.
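The five steps above reduce to a weighted sum. A minimal sketch of the computation; the factors and numbers below are hypothetical placeholders (the article's actual Wal-Mart table is not reproduced here):

```python
# (factor description, weight, rating 1-4)
factors = [
    ("Growth of online retail (opportunity)",         0.20, 4),
    ("Expansion into emerging markets (opportunity)", 0.15, 3),
    ("Intense price competition (threat)",            0.35, 4),
    ("Regulatory and legal pressure (threat)",        0.30, 3),
]

weights = [w for _, w, _ in factors]
assert abs(sum(weights) - 1.00) < 1e-9      # step 2: weights must sum to 1.00

total = sum(w * r for _, w, r in factors)   # steps 4-5: sum of weight * rating
print(round(total, 2))  # → 3.55
```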
{"url":"http://mba-lectures.com/management/strategic-management/985/external-factor-evaluation-matrix-wal-mart.html","timestamp":"2014-04-17T18:23:17Z","content_type":null,"content_length":"26623","record_id":"<urn:uuid:c6468dbc-716f-48e7-a81a-096e41fba2c1>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Cheverly, MD Geometry Tutor Find a Cheverly, MD Geometry Tutor ...As a private tutor, I have accumulated over 750 hours assisting high school, undergraduate, and returning adult students. And as a research scientist, I am a published author and have conducted research in nonlinear dynamics and ocean acoustics. My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving. 9 Subjects: including geometry, calculus, physics, algebra 1 ...A large part of studying successfully is first learning HOW to study. I have worked with students to teach them how to take notes, test taking strategies, manage their time, become more organized, and use their class materials to their fullest advantage. By learning these study skills and more, students are much more likely to achieve their goals. 26 Subjects: including geometry, reading, ESL/ESOL, writing ...I believe clarity and patience are two important traits that each tutor must possess in order to help their students in the most effective way. Also it is very important to encourage the student by letting them know how much they know instead of pointing out their weaknesses. I have helped, on ... 35 Subjects: including geometry, calculus, physics, accounting ...To provide the student with the best opportunity for course success, cancelled classes should be rescheduled as soon as possible. Each year I will be unavailable between 7:00-9:00 pm on the following days:- January 5th, March 7th and 9th, Good Friday, May 10th, July 25th, and September 26th and ... 20 Subjects: including geometry, chemistry, calculus, GED ...I have tutored algebra, pre-algebra, and geometry for the past 6 years. I have worked in the public school setting with students of all learning abilities. In particular, I have worked with students with special needs and those needing extra academic support. 24 Subjects: including geometry, reading, writing, English
{"url":"http://www.purplemath.com/Cheverly_MD_geometry_tutors.php","timestamp":"2014-04-16T04:41:38Z","content_type":null,"content_length":"24395","record_id":"<urn:uuid:a4d0049d-bd75-4fb4-8c92-ae32f5e97ccb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
help with automorphisms

I have two problems:

1) Identify Aut(Aut( $\mathbb{Z}_8$)). I know that Aut( $\mathbb{Z}_8$) is isomorphic to $\mathbb{Z}_2 \oplus \mathbb{Z}_2$, but what about Aut(Aut( $\mathbb{Z}_8$))?

2) Identify Aut G, where G = ({1, 3, 5, 7}, $\cdot_8$). {1, 3, 5, 7} are the generators of $\mathbb{Z}_8$, so I'm guessing I should head in that direction, but I don't know how.

Please help.
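For problem 2, a brute-force count is a useful sanity check before identifying the group abstractly. A sketch of my own (not from the thread) that enumerates the bijections of G = ({1, 3, 5, 7}, $\cdot_8$) preserving multiplication mod 8:

```python
from itertools import permutations

G = [1, 3, 5, 7]

def mul(a, b):
    return (a * b) % 8      # the group operation of ({1,3,5,7}, ._8)

autos = []
for image in permutations(G):
    f = dict(zip(G, image))
    if f[1] == 1 and all(f[mul(a, b)] == mul(f[a], f[b]) for a in G for b in G):
        autos.append(f)
print(len(autos))  # → 6
```

Six automorphisms is the order of $S_3$, consistent with G being isomorphic to $\mathbb{Z}_2 \oplus \mathbb{Z}_2$ and Aut( $\mathbb{Z}_2 \oplus \mathbb{Z}_2$) $\cong S_3$, which is also the direction to pursue for problem 1.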
{"url":"http://mathhelpforum.com/advanced-algebra/83908-help-automorphisms.html","timestamp":"2014-04-17T06:47:41Z","content_type":null,"content_length":"42798","record_id":"<urn:uuid:da114b7d-0d45-4f64-8821-7a7ac6e3d360>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Derwood Geometry Tutor Find a Derwood Geometry Tutor ...Not only that, I try to correlate the theoretical knowledge with real life examples. The end is very satisfying, students learn and remember the subject for very long. Effective teaching depends on elaborate preparation of the material. 9 Subjects: including geometry, chemistry, calculus, algebra 1 ...I truly believe that math can be fun and easy if it's broken down for you in a way that you can comprehend it. I believe that there is a way to learn math for everyone and I look forward to finding out which way works best for you. Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. 22 Subjects: including geometry, calculus, algebra 1, GRE ...I am good at explaining and helping people to grasp ideas and theory of why the subjects are when we learn them. I have good math skills and can help conceptualize problems. I also play guitar and would love to teach basic guitar to new players. 19 Subjects: including geometry, algebra 1, algebra 2, grammar ...Math is a key skill and a worthwhile subject. The examples are countless: finding the "best buy" at the store, calculating sales, approximating values, organizing budgets, planning trips, designing new spaces, forming and following logical arguments, or, maybe most importantly, knowing when some... 19 Subjects: including geometry, English, reading, physics ...I have taught elementary, middle and high school levels, as well as college level courses. With my experience and expertise I use a variety of teaching methods and will tailor my methodology to meet your child's needs. I guarantee that your child will comprehend difficult concepts, be motivated to learn and will have grade improvement. 14 Subjects: including geometry, English, reading, biology
The GPS Network Consists of 24 Satellites, Each ... | Chegg.com

The GPS network consists of 24 satellites, each of which makes two orbits around the earth per day. Each satellite transmits a 50.0 W (or even less) sinusoidal electromagnetic signal at two frequencies, one of which is 1575.42 MHz. Assume that a satellite transmits half of its power at each frequency and that the waves travel uniformly in a downward hemisphere. (a) What average intensity does a GPS receiver on the ground, directly below the satellite, receive? (Hint: First use Newton's laws to find the altitude of the satellite.) (b) What are the amplitudes of the electric and magnetic fields at the GPS receiver in part (a), and how long does it take the signal to reach the receiver? (c) What wavelength must the receiver be tuned to?
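A back-of-the-envelope check of parts (a)–(c) can be sketched in Java. The physical constants (GM, the earth's radius, the speed of light, the vacuum permittivity) are standard values assumed here, not given in the problem statement, and the class name is mine:

```java
public class GpsSignal {
    // Standard physical constants (assumed, not from the problem statement)
    static final double GM   = 3.986e14;   // G * M_earth, m^3/s^2
    static final double R_E  = 6.37e6;     // radius of the earth, m
    static final double C    = 3.00e8;     // speed of light, m/s
    static final double EPS0 = 8.854e-12;  // permittivity of free space

    // Kepler's third law: two orbits per day => period T = 43,200 s
    static double altitude() {
        double T = 43200.0;
        double r = Math.cbrt(GM * T * T / (4 * Math.PI * Math.PI)); // orbit radius
        return r - R_E;                                             // height above the ground
    }

    // (a) 25 W spread uniformly over a downward hemisphere of radius h
    static double intensity() {
        double h = altitude();
        return 25.0 / (2 * Math.PI * h * h);
    }

    public static void main(String[] args) {
        double h  = altitude();
        double I  = intensity();
        double E0 = Math.sqrt(2 * I / (EPS0 * C));   // (b) electric field amplitude
        double B0 = E0 / C;                          //     magnetic field amplitude
        double t  = h / C;                           //     travel time in seconds
        double lambda = C / 1575.42e6;               // (c) wavelength in meters
        System.out.printf("h = %.3g m, I = %.3g W/m^2%n", h, I);
        System.out.printf("E0 = %.3g V/m, B0 = %.3g T, t = %.3g s, lambda = %.3g m%n",
                          E0, B0, t, lambda);
    }
}
```

This gives an altitude of roughly 2.0 × 10^7 m (about 20,200 km, the familiar GPS orbit) and an intensity on the order of 10^-14 W/m^2 directly below the satellite.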
Sphere Layout

Hi, I'm new to this forum; I tried to do a search but the wording is complicated. Anyway, I'm trying to find a formula, or whether there is a mathematical way of finding this answer. I have a bowl that is 36" in diameter. It is 2" deep at the center. If I were to cut it into 8 sections (pies) and laid them flat, the long sides of the pies would have a curve to them. How do I find that curve? Is there an equation that tells you the distance from the curved line on the flat to the straight line when in a sphere? How do I find the length of the new curved line? Any help would be appreciated. I need to create a bowl out of flat pieces of paper. Hope I made my question clear without any diagrams.

I cannot visualize the problem. How can I cut flat paper into triangles, so the long sides of the triangles meet when I lay the paper in a bowl? The bowl is 36" in diameter and 2" deep in the center. Is there a way to find the radius of the long sides of the triangles? Does the radius change if there are 8 triangles or 10 or 12 to make up the bowl? Similar to the shapes of fabric used to create a baseball cap, they are not straight sides of the triangle.
Hope this helps,
Sil

Reply: You need to know calculus.
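One way to get started (a sketch of the first steps, not a full flat-pattern layout): treat the bowl as a spherical cap. From the rim radius a = 18" and depth d = 2", the radius of the sphere follows from the chord relation R = (a² + d²) / (2d), and the arc length from the rim down to the center gives the true length of each pie's long side when flattened. The class and method names below are mine:

```java
public class BowlLayout {
    // Radius of the sphere containing the cap: R = (a^2 + d^2) / (2d)
    static double sphereRadius(double a, double d) {
        return (a * a + d * d) / (2 * d);
    }

    // Arc length along the sphere from the rim to the center of the bowl
    static double arcLength(double a, double d) {
        double R = sphereRadius(a, d);
        double theta = Math.asin(a / R);   // half-angle subtended by the cap
        return R * theta;
    }

    public static void main(String[] args) {
        double a = 18.0, d = 2.0;          // 36" diameter bowl, 2" deep
        System.out.printf("sphere radius R = %.1f in%n", sphereRadius(a, d));
        System.out.printf("slant arc length = %.2f in%n", arcLength(a, d));
    }
}
```

This gives R = 82" and an arc length of about 18.15", so each flattened pie piece is slightly longer than the 18" rim radius. The exact curve of the long sides (the "gore" problem familiar from map-making and cap patterns) comes from comparing the bowl's circumference at each depth with the flat sector's width, and it does depend on how many pieces you use.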
[FOM] Hilbert's proof
Stephen G Simpson simpson at math.psu.edu
Mon Mar 15 17:21:23 EDT 2010

Hilbert's proof of the Hilbert Basis Theorem is famously nonconstructive. Therefore, from a historical/philosophical/foundational standpoint, it is interesting to ask about the metamathematical aspects or "logical strength" of this theorem. Many precise questions can be formulated. My old paper on this subject is

  author  = "Stephen G. Simpson",
  title   = "Ordinal numbers and the {H}ilbert basis theorem",
  journal = "Journal of Symbolic Logic",
  year    = 1988,
  volume  = 53,
  pages   = "961--974"

There I proved the following result concerning the Hilbert Basis Theorem. Let $x_1,\ldots,x_n,\ldots$ be commuting indeterminates. The following statements are pairwise equivalent over RCA$_0$.

1. For each $n$, every ideal in the polynomial ring $Q[x_1,\ldots,x_n]$ is finitely generated. Here $Q$ is the field of rational numbers.

2. For each $n$ and each countable field $F$, every ideal in the polynomial ring $F[x_1,\ldots,x_n]$ is finitely generated.

3. The standard notation system for the ordinal numbers up to $\omega^\omega$ is well ordered.

I also obtained an analogous result concerning the Robson Basis Theorem. Here $x_1,\ldots,x_n,\ldots$ are noncommuting indeterminates, and the appropriate ordinal number is $\omega^{\omega^\omega}$ rather than $\omega^\omega$. Thus we see that the Robson Basis Theorem is strictly and measurably "stronger" or "more nonconstructive" than the Hilbert Basis Theorem.

A still-open question is to measure the strength of the Ritt Basis Theorem. Here $x_1,\ldots,x_n,\ldots$ are commuting differential indeterminates.

Those who wish to discuss the speculative or philosophical aspects of these matters are encouraged to do so.

Stephen G. Simpson
Professor of Mathematics
Pennsylvania State University
Research interests: foundations of mathematics, mathematical logic.
Bourbaki theory of isomorphism, examples of untransportable formulas

In their book "Theory of Sets" Bourbaki suggested a general theory of isomorphism. (See also http://www.tau.ac.il/~corry/publications/articles/pdf/bourbaki-structures.pdf ) The example of an untransportable relation (i.e., formula) in the book involves 2 principal base sets. Are there examples of untransportable formulas when we allow only one principal base set?

Tags: metamathematics, set-theory, bourbaki

Comment: Very few people here are familiar with Bourbaki's set theory. You should probably state the definitions here as well. – Harry Gindi Jun 19 '10 at 12:14
Comment: A reference is added in the question. – Victor Makarov Jun 22 '10 at 13:44

Accepted answer: An example of an untransportable sentence, when there is only one principal base set X, may be the following one: "All elements of the set X are finite sets," because, by definition, the truth value of a transportable sentence must be preserved under all bijections from the set X. Obviously, there exists a bijection from X to a set Y where not all elements of Y are finite sets.

A simpler example is "the set X contains the empty set". There is a paper "Sentences of type theory: the only sentences preserved under isomorphisms" by Victoria Marshall and Rolando Chuaqui - see The Journal of Symbolic Logic, vol. 56, no. 3, Sep 1991.
Quantum Electrodynamics. III. The Electromagnetic Properties of the Electron—Radiative Corrections to Scattering The discussion of vacuum polarization in the previous paper of this series was confined to that produced by the field of a prescribed current distribution. We now consider the induction of current in the vacuum by an electron, which is a dynamical system and an entity indistinguishable from the particles associated with vacuum fluctuations. The additional current thus attributed to an electron implies an alteration in its electromagnetic properties which will be revealed by scattering in a Coulomb field and by energy level displacements. This paper is concerned with the computation of the second-order corrections to the current operator and the application to electron scattering. Radiative corrections to energy levels will be treated in the next paper of the series. Following a canonical transformation which effectively renormalizes the electron mass, the correction to the current operator produced by the coupling with the electromagnetic field is developed in a power series, of which first- and second-order terms are retained. One thus obtains second-order modifications in the current operator which are of the same general nature as the previously treated vacuum polarization current, save for a contribution that has the form of a dipole current. The latter implies a fractional increase of $\frac{\alpha }{2\pi }$ in the spin magnetic moment of the electron. The only flaw in the second-order current correction is a logarithmic divergence attributable to an infra-red catastrophe. It is remarked that, in the presence of an external field, the first-order current correction will introduce a compensating divergence. Thus, the second-order corrections to particle electromagnetic properties cannot be completely stated without regard for the manner of exhibiting them by an external field. 
Accordingly, we consider in the second section the interaction of three systems, the matter field, the electromagnetic field, and a given current distribution. It is shown that this situation can be described in terms of an external potential coupled to the current operator, as modified by the interaction with the vacuum electromagnetic field. Application is made to the scattering of an electron by an external field, in which the latter is regarded as a small perturbation. It is found convenient to calculate the total rate at which collisions occur and then identify the cross sections for individual events. The correction to the cross section for radiationless scattering is determined by the second-order correction to the current operator, while scattering that is accompanied by single quantum emission is a consequence of the first-order current correction. The final object of calculation is the differential cross section for scattering through a given angle with a prescribed maximum energy loss, which is completely free of divergences. Detailed evaluations are given in two situations, the essentially elastic scattering of an electron, in which only a small fraction of the kinetic energy is radiated, and the scattering of a slowly moving electron with unrestricted energy loss. The Appendix is devoted to an alternative treatment of the polarization of the vacuum by an external field. The conditions imposed on the induced current by the charge conservation and gauge invariance requirements are examined. It is found that the fulfillment of these formal properties requires the vanishing of an integral that is not absolutely convergent, but naturally vanishes for reasons of symmetry. This null integral is then used to simplify the expression for the induced current in such a manner that direct calculation yields a gauge invariant result. 
The induced current contains a logarithmically divergent multiple of the external current, which implies that a non-vanishing total charge, proportional to the external charge, is induced in the vacuum. The apparent contradiction with charge conservation is resolved by showing that a compensating charge escapes to infinity. Finally, the expression for the electromagnetic mass of the electron is treated with the methods developed in this paper. DOI: http://dx.doi.org/10.1103/PhysRev.76.790 • Received 26 May 1949 • Published in the issue dated September 1949 © 1949 The American Physical Society
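Numerically, the fractional correction α/2π to the electron's spin magnetic moment quoted in the abstract works out to about 0.00116. A one-line check, using the standard modern value α ≈ 1/137.036 (an assumption of this sketch; the class name is mine):

```java
public class SchwingerCorrection {
    public static void main(String[] args) {
        double alpha = 1.0 / 137.036;              // fine-structure constant (standard value)
        double correction = alpha / (2 * Math.PI); // fractional increase in the spin magnetic moment
        System.out.printf("alpha / 2pi = %.7f%n", correction);
    }
}
```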
Bayesian Variable Order Markov Models
Christos Dimitrakakis

We present a simple, effective generalisation of variable order Markov models to full online Bayesian estimation. The mechanism used is close to that employed in context tree weighting. The main contribution is the addition of a prior, conditioned on context, on the Markov order. The resulting construction uses a simple recursion and can be updated efficiently. This allows the model to make predictions using more complex contexts, as more data is acquired, if necessary. In addition, our model can be alternatively seen as a mixture of tree experts. Experimental results show that the predictive model exhibits consistently good performance in a variety of domains.

We consider Bayesian estimation of variable order Markov models (see Begleiter et al., 2004, for an overview). Such models create a tree of partitions, where the disjoint sets of every partition correspond to different contexts. We can associate a sub-model or expert with each context in order to make predictions. The main contribution of this paper is a conditional prior on the Markov order (or equivalently, the context depth). This is based on a recursive construction that estimates, for each context at a certain depth k, whether it makes better predictions than the contexts at depths smaller than k. This simple model defines a mixture of variable order Markov models, and its parameters can be updated in closed form in time O(D) for trees of depth D with each new observation. For unbounded-length contexts, the complexity of the algorithm is O(T^2) for an input sequence of length T. Furthermore, it exhibits robust performance in a variety of tasks. Finally, the model is easily extensible to controlled processes.

References:
- Beal, Ghahramani, et al. (2002). The infinite hidden Markov model.
- Cleary, Witten (1984). Data compression using adaptive coding and partial string matching.
- Friedman, Koller (2003). Being Bayesian about network structure: a Bayesian approach to structure discovery in Bayesian networks.
- Ferguson (1974). Prior distributions on spaces of probability measures.
- Meila, Jordan (2000). Learning with mixtures of trees.
- Bühlmann, Wyner (1999). Variable length Markov chains.
- Begleiter, El-Yaniv, et al. (2004). On prediction using variable order Markov models.
- Ross, Chaib-draa, et al. (2008). Bayes-adaptive POMDPs.
- Gasthaus, Wood, et al. (2010). Lossless compression based on the Sequence Memoizer.
- Mochihashi, Sumita (2007). The infinite Markov model.
- Dimitrakakis (2010). Variable order Markov decision processes: Exact Bayesian inference with an application to POMDPs. Submitted.
- Mena, Walker (2005). Stationary autoregressive models via a Bayesian nonparametric approach.
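The paper's exact Bayesian mixture is more involved, but the basic idea of variable-order contexts can be illustrated with a much simpler (non-Bayesian) predictor that backs off from the longest matching context to shorter ones. Everything below is an illustrative sketch of that idea, not the author's algorithm:

```java
import java.util.HashMap;
import java.util.Map;

public class VomPredictor {
    private final int maxDepth;
    // counts.get(context).get(symbol) = how often `symbol` followed `context`
    private final Map<String, Map<Character, Integer>> counts = new HashMap<>();

    public VomPredictor(int maxDepth) { this.maxDepth = maxDepth; }

    // Record every (context, next-symbol) pair for contexts up to maxDepth.
    public void train(String s) {
        for (int i = 0; i < s.length(); i++) {
            for (int k = 0; k <= Math.min(maxDepth, i); k++) {
                String ctx = s.substring(i - k, i);
                counts.computeIfAbsent(ctx, c -> new HashMap<>())
                      .merge(s.charAt(i), 1, Integer::sum);
            }
        }
    }

    // Predict the most frequent next symbol after `history`, backing off
    // to shorter and shorter context suffixes until one has been seen.
    public char predict(String history) {
        for (int k = Math.min(maxDepth, history.length()); k >= 0; k--) {
            String ctx = history.substring(history.length() - k);
            Map<Character, Integer> dist = counts.get(ctx);
            if (dist == null) continue;
            char best = 0; int bestCount = -1;
            for (Map.Entry<Character, Integer> e : dist.entrySet())
                if (e.getValue() > bestCount) { best = e.getKey(); bestCount = e.getValue(); }
            return best;
        }
        return '?';   // no context of any length has been seen
    }

    public static void main(String[] args) {
        VomPredictor p = new VomPredictor(3);
        p.train("abcabcabcabd");
        System.out.println(p.predict("abcab"));  // the depth-3 context "cab" was followed by 'c' twice and 'd' once
    }
}
```

The Bayesian construction in the paper replaces this hard longest-match rule with a posterior-weighted mixture over context depths, which is what allows the closed-form O(D) updates mentioned in the abstract.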
7.8 Cryptography

This section is under major construction.

Cryptology. Cryptology is the science of secret communication. It has two main subfields: cryptography is the science of creating secret codes; cryptanalysis is the science of breaking codes. There are five pillars of cryptology:
• Confidentiality: keep communication private.
• Integrity: detect unauthorized alteration to communication.
• Authentication: confirm the identity of the sender.
• Authorization: establish the level of access for trusted parties.
• Non-repudiation: prove that communication was received.
We will focus primarily on confidentiality, the most romantic of these endeavors. Highly recommended reading for entertainment: The Code Book. Useful Flash demo: e-Security history from rsa.com.

Some applications of crypto. Phil Zimmermann asserts, "Cryptography used to be an obscure science, of little relevance to everyday life. Historically, it always had a special role in military and diplomatic communications. But in the Information Age, cryptography is about political power, and in particular, about the power relationship between a government and its people. It is about the right to privacy, freedom of speech, freedom of political association, freedom of the press, freedom from unreasonable search and seizure, freedom to be left alone." (Code Book, p. 296). Crypto benefits both ordinary citizens and terrorists, and it enables e-commerce. Below is a table of activities that we would like to be able to implement digitally and securely. We also list a number of everyday analog implementations of each task.
Task: Analog implementations
Protect information: code book, lock and key
Identification: driver's license, Social Security number, password, biometrics, secret handshake
Contract: handwritten signature, notary
Money transfer: coin, bill, check, credit card
Public auction: sealed envelope
Public election: anonymous ballot
Poker: cards with concealed backs
Public lottery: dice, coins, rock-paper-scissors
Anonymous communication: pseudonym, ransom note

A malicious adversary can sometimes subvert these analog implementations: forgery, lock picks, counterfeiters, card cheats, ballot-stuffing, loaded dice.

Our goal. Our goal is to implement all of these tasks digitally and securely. We would also like to implement additional tasks that can't be done with physics! For example: play a poker variant where the dealer wins if no one has an Ace, or hold an anonymous election where everyone learns the winner, but nothing else. Is any of this possible? If so, how? In the remainder of this section, we will give a flavor of modern (digital) cryptography, implement a few of these tasks, and sketch a few technical details.

History. Decryption of Mary Stuart's encrypted letters revealed her intent to assassinate Elizabeth I. In the 1800s, Edgar Allan Poe boasted that he could break anyone's cipher using frequency analysis. Alan Turing led a team at Bletchley Park which cracked the German Enigma cipher. Many historians believe this was the turning point of World War II. Here's an Enigma applet.

Security by obscurity. The Content Scrambling System (CSS) is used by Hollywood to encrypt DVDs. Each disc has three 40-bit keys, and each DVD decoder has a unique 40-bit key. In principle it is "not possible" to play back a DVD on a computer without the disc. In 1999, two Norwegians (Canman and SoupaFrog) wrote a decryption algorithm that cracked the CSS system. CSS was a proprietary algorithm, and Hollywood was banking on the fact that nobody would discover the algorithm.
Moreover, the size of the keys was too small, so brute-force attacks were possible. Other high-profile failures due to this ad hoc approach: GSM cell phones, Windows XP product activation, RIAA digital music watermarking, VCR+ codes, Adobe eBooks, Diebold AccuVote-TS electronic voting machines, and ExxonMobil SpeedPass.

In 1883, the Dutch linguist Auguste Kerckhoffs von Nieuwenhof embodied the underlying principle guiding modern cryptography in his paper Cryptographie militaire: "Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l'ennemi." (The system must not require secrecy, and it must be able to fall into the enemy's hands without causing trouble.) This is now known as Kerckhoffs' Principle: the security of a cryptosystem should not depend on keeping the algorithm secret, but only on keeping the numeric key secret.

There are two primary distinctions between the algorithm and the numeric key. (Ed Felten) First, since we generate the numeric key at random, we can accurately model and quantify how long it would take an adversary to guess it (under general technical conditions); in contrast, it is much harder to predict or quantify how long it would take an adversary to guess our algorithm. Second, it's easy to use different numeric keys for different purposes or people, or to stop using a key that has been compromised; it's more difficult to design new algorithms.

Kerckhoffs' Principle says that systems based on "security by obscurity" are fatally flawed. It is equivalent to Shannon's maxim: "The enemy knows the system." The design of secure systems should be left to the experts. Despite this, we can still explore the basic ideas of cryptography that the experts use.

The participants. In keeping with the rich tradition of cryptographers, Alice and Bob are the two people trying to communicate securely over an insecure communication channel. We will assume that the message is already encoded in binary, so we can treat it as a (potentially huge) integer m.
We let N denote the number of bits in the message m. Alice applies an encryption function E to the message, which produces another N-bit integer E(m). Bob receives E(m) and applies his decryption function D to this. An obvious condition for this to make any sense is that D(E(m)) = m. In other words, Bob recovers the original message. Eve is a third party who wishes to intercept the message. Eve can observe E(m), so for the scheme to be secure, it should be prohibitively difficult for Eve to recover m from E(m) alone.

Private key cryptography. Private key means that the two parties share a secret key prior to their communication. One-time pads (Chapter 1) are provably secure if the bits in the key are generated from a truly random source. They are also extremely easy to implement. Nevertheless, one-time pads have several drawbacks that render them impractical in most situations. First, it is a challenge to generate truly random bits, free of biases and correlations. One must go outside the world of digital computers and extract them from some physical source (e.g., time between emission of particles due to radioactive decay, sound from a microphone, elapsed time between keystrokes). Such sources are often biased, and we would need to take great care to prevent Eve from observing or tampering with the process. The scheme is called one-time since we need a new key for each message or part of the message. If we re-use a one-time pad, then the system is no longer secure. Nor do one-time pads by themselves provide signatures or non-repudiation. Perhaps the most limiting factor is key distribution: Alice and Bob must know each other and exchange the key before sending the secret message. The Kremlin and the White House used to communicate with each other using this method: a trusted courier would be sent across the Atlantic Ocean with a briefcase of one-time pads handcuffed to his arm. This method is ridiculously impractical if Alice merely wants to purchase a product from Bob over the Internet.

Other private key encryption schemes.
The Data Encryption Standard (DES), the Advanced Encryption Standard (AES, the Rijndael algorithm), and Blowfish are other private key schemes. These methods are not provably secure like one-time pads, but they are efficient and have withstood years of mathematical scrutiny. However, these schemes suffer from the same key-distribution problem that plagues one-time pads. One emerging solution to the key distribution problem is to use quantum mechanics. This is known as Quantum Key Distribution: an unconditionally secure way for two parties to share a one-time pad. Moreover, there is an intrusion-detection component, so that if Eve observes even one bit, both parties will learn about the attempted eavesdropping.

Modern cryptography.
This is referred to as a "1-way trapdoor function" since it is easy to go one way (from factors to product), but apparently hard to go the other way (from product to factors). • Fact. Primality testing is easy computationally. Miller-Rabin primality testing algorithm. PRIMES in P proved in 2002. If the three axioms above are valid, then digital cryptography exists. That is, it is possible to do all of the previous tasks digitally. Public key cryptography. Public key cryptography is an amazing scheme that enables two parties to communicate securely, even if they've never met. It is the digital analog of a box with a combination lock. Suppose Alice wants to send Bob a message. First, Bob send the box to Alice with the padlock in the open position, without revealing the combination to anyone. Alice puts her message in the box, closes the combination lock, and sends it back to Bob. Eve may intercept the box in transit, but since she doesn't know the combination, she is unable to open it. She can try to guess the combination, but there are just too many possibilities. When the box arrives, Bob can open it, knowing that nobody else looked inside (unless they knew the combination). To do this digitally, Bob has two keys (or combinations): his private key d is not revealed to anybody, his public key e is published in an Internet phonebook. We think of the keys as integers, but they are really just sequences of bits, say 1024. If Alice wants to send a message to Bob, she looks up Bob's public key e on the Internet. She uses e to encrypt her message and sends it to Bob. Bob uses his private key d to decrypt the message. The idea of public key cryptography was first published in 1976 by Whitfield Diffie and Martin Hellman in their groundbreaking paper New Directions in Cryptography. This paper described a public key cryptosystem for the key distribution problem. 
The idea was apparently discovered independently by Ellis, Cocks, and Williamson in the UK at the Government Communications Headquarters (GCHQ) in the early 1970s, but their work remained a secret for two decades. RSA cryptosystem. We will describe the basic mechanics of the RSA cryptosystem , a scheme developed by Adleman, Rivest, and Shamir in 1978. Here's the RSA paper . RSA is very widely used today for secure Internet communication (browsers, S/MIME, SSL, S/WAN, PGP, Microsoft Outlook), operating systems (Sun, Microsoft, Apple, Novell) and hardware (cell phones, ATM machines, wireless Ethernet cards, Mondex smart cards, Palm Pilots). Then, we will give intuition for why it works and describe to implement it efficiently. The RSA cryptosystem involves modular arithmetic. Recall ..... Key generation. To participate in the RSA cryptosystem, Bob must first generate a public and private key. He only needs to do this once, even if he plans to use the system many times. • Select two large prime numbers p and q at random. • Compute n = p × q. • Select two integers e and d such that (m^e)^d ≡ m (mod n) for all integers m. As an example, we might choose the following parameters, although in practice we would need to use much larger integers to guarantee security. p = 11, q = 29 n = 11 * 29 = 319 e = 3 d = 187 Encryption. Alice wants to send an N-bit secret message m to Bob. She obtains Bob's public key (e, n) from the Internet. Then she encrypts the message m using the encryption function E(m) = m^e (mod n), and sends E(m) to Bob. m = 100 E(m) = 100^3 (mod 319) = 254 Decryption. Bob receives the encrypted message c from Alice. Bob recalls his private key (d, n). Then he decrypts the ciphertext by applying the decryption function D(c) = c^d (mod n). Since Bob knows d, he can compute this function. c = 254 D(c) = 254^187 (mod 319) = 100 RSA simulator. RSA simulator. Correctness. To make sure that Bob receives the original message, we must check that D(E(m)) = m. 
It worked in the example above where m = 100, E(100) = 254, D(254) = 100, but we need to be sure it works for all possible messages, and for all valid choices of e, d, and n. This follows in a straightforward way from the defintions and the way we chose e and d. We have supressed one important detail - how to choose e and d so that the magic property holds. Implementing the RSA cryptosystem. Implementing the RSA cryptosystem is a formidable engineering challenge. A successful implementation requires many ingenious algorithms and knowledge of several theorems in number theory. We will describe a bare-bones implemenation, but commercial implementations are more sophisticated. • Big integers. Can't use built in int or long types since numbers are too big. Need to re-implement the laws of arithmetic, e.g., addition, subtraction, multiplication, and division. Grade school algorithms are reasonably efficient for all of these operations, although there is always opportunity for improvement using clever algorithms. • Modular exponentation. How to perform modular exponentation: a^b (mod c). The naive method would be to repeatedly multiply a by itself, b times, and then divide by c and return the remainder. When a, b, and c are N-bit integers, this fails spectacularly for two reasons. First, the intermediate number a^b can be monstrously large. The number of digits can be exponential in N. When N = 50, this consumes 128TB memory. The second problem is that the number of multiplications also takes exponential time. So it will take forever. A better alternative is to use repeated squaring. This idea dates back to at least 200 BCE according to Knuth. Program ModExp.java uses the following recurrence to compute a^b mod n: 1. if b is zero: 1 2. if b is even: (a^b/2 * a^b/2) mod n 3. if b is odd: (a * a^b/2 * a^b/2) mod n This is analogous to the following recurrence for multiplication. 1. if b is zero: 1 2. if b is even: (a * b/2) + (a * b/2) 3. 
if b is odd: (a * b/2) + (a * b/2) + a • Computing a random prime. To generate the key, we must have a method for generating a random N-bit prime, say N = 1024. One idea is to choose an N-bit integer at random and check if it is prime. If it is, then stop; otherwise repeat until you stumble upon one that is prime. DO x = random N-bit integer UNTIL (x is prime) This simple idea works, but implementing it requires two crucial ideas. First, the loop may take a very long time if there are not enough prime numbers. Fortunately, the Prime Number Theorem (Hadamard, de la Vallée Poussin, 1896) asserts that the number of primes between 2 and x is approximately x / ln x. There are over 10^151 primes with 512 bits or fewer. In other words, roughly 1 out of every ln x numbers near x is prime, so we expect to wait only about ln x steps before stumbling upon a prime number. But how do we check to see if a number is prime? Attempting to factor it would be prohibitively expensive. Instead, we can use an ingenious algorithm due to Miller and Rabin (or a more recent one due to Agrawal, Kayal, and Saxena) that checks whether an integer is prime in a polynomial number of steps. • Generating random numbers. (move to one-time pad?) Physical sources of randomness. The Java library SecureRandom is a pseudo-random number generator that generates cryptographically secure random numbers. This means that it is computationally intractable to predict future bits. Unlike an LFSR, you can't reverse-engineer it. • Computing the private exponent. One final challenge is choosing the public and private keys. In practice it is common to use e = 65,537 as the public exponent. But this means we need to find a private key that makes the magic property hold. This turns out to be a well-understood problem in number theory, and a unique d always exists provided gcd(e, (p-1)(q-1)) = 1. We can use an extension of Euclid's algorithm (see exercise xyz) for this purpose. This is easy to do using Java's BigInteger library for manipulating huge integers.
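The repeated-squaring recurrence for a^b mod n described above maps almost directly onto Java's BigInteger. Here is a minimal recursive sketch (the class and method names are our own; it assumes b is nonnegative):

```java
import java.math.BigInteger;

public class ModExpSketch {
    // Computes a^b mod n by repeated squaring: O(log b) multiplications,
    // and every intermediate value stays below n^2.
    static BigInteger modExp(BigInteger a, BigInteger b, BigInteger n) {
        if (b.signum() == 0) return BigInteger.ONE;        // case 1: a^0 = 1
        BigInteger half = modExp(a, b.shiftRight(1), n);   // a^(b/2) mod n
        BigInteger sq = half.multiply(half).mod(n);        // case 2: square it
        if (b.testBit(0)) sq = sq.multiply(a).mod(n);      // case 3: odd b, one extra factor of a
        return sq;
    }

    public static void main(String[] args) {
        // Matches the toy example: 100^3 mod 319
        System.out.println(modExp(BigInteger.valueOf(100),
                                  BigInteger.valueOf(3),
                                  BigInteger.valueOf(319)));   // prints: 254
    }
}
```

Since b loses one bit per recursive call, only about N levels of recursion (and 2N modular multiplications) are needed for an N-bit exponent.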
private final static SecureRandom random = new SecureRandom();
BigInteger ONE = new BigInteger("1");
BigInteger p = BigInteger.probablePrime(N/2, random);
BigInteger q = BigInteger.probablePrime(N/2, random);
BigInteger n = p.multiply(q);
BigInteger phi = (p.subtract(ONE)).multiply(q.subtract(ONE));
BigInteger e = new BigInteger("65537");
BigInteger d = e.modInverse(phi);

BigInteger rsa(BigInteger a, BigInteger b, BigInteger n) { return a.modPow(b, n); }

RSA Attacks. Cryptanalysis is the science of breaking secret codes. We describe a few common attacks on the RSA cryptosystem to give you the flavor of modern cryptanalysis. • Factoring. The most obvious way to break the RSA cryptosystem is by factoring the modulus n. If Eve can factor n = pq, then she has exactly the same information as Bob, so she can efficiently compute his private exponent from the public one (using exactly the same algorithm that Bob used to compute his private exponent in the first place). Using a very sophisticated factoring algorithm known as the general number field sieve, researchers were recently able to factor RSA-576, a 576-bit (174 decimal digit) composite integer offered as a challenge problem by RSA Security. This effort required 100 workstations and 3 months of number crunching. The running time of this algorithm is super-polynomial but sub-exponential: O(exp(c (log n)^(1/3) (log log n)^(2/3))). • Improper usage. The RSA system can also be broken if it is used improperly. For example, if Bob decides to use a small private exponent to lessen his computational burden for decryption, then he is sacrificing security. If d < 1/3 n^(1/4), then Eve can recover d in polynomial time (Wiener's attack). Note that it is okay to use a small public exponent e, and 65,537 is common in practice. Another mistake is to allow two participants to share the same modulus n (even if neither party knows how to factor n).
For example, suppose Bob and Ben have (d1, e1) and (d2, e2) as their private and public exponents, respectively, but they are both using n as their modulus. Then it is possible for either party to discover the other's private exponent (Simmons's attack). • Side channel attack. Exploit physical information leaked from the machine, including electromagnetic emanations, power consumption, diffuse visible light from CRT displays, and acoustic emanations. For example, in a timing attack Eve gleans information about Bob's private key by measuring the amount of time it takes for Bob to exponentiate. If Bob is using a highly optimized exponentiation routine, then Eve can discover enough information to reveal Bob's private key. Recently, Dan Boneh showed how to use this technique to break SSL on a LAN. It is a long-standing open research question whether or not there is a way to break the RSA system without factoring or physical access. There is no guarantee that RSA is secure even if factoring is hard. Also, there is currently no mathematical guarantee that factoring is hard! FACTOR and its complementary problem NON-FACTOR are both in NP. This makes it unlikely that FACTOR is NP-complete, since this would imply NP = coNP... Semantic security. Other stronger notions of security. A public key cryptosystem is semantically secure if anything Eve can compute in polynomial time with the ciphertext can be computed without the ciphertext. Thus, observing the ciphertext provides no useful information. For example, we shouldn't be able to determine if the last bit of the plaintext is 0 or 1, or if the plaintext has more 1 bits than 0 bits. The RSA system is not semantically secure, and in fact no deterministic scheme can be. This is not just a theoretical shortcoming. To see why, suppose that Eve knows Alice is going to send Bob either the message ATTACK or RETREAT.
Eve can encrypt both messages using Bob's public key and then compare against the encrypted message that Alice sends to Bob. Thus, Eve can learn exactly which message was sent. Naive ideas like appending a random sequence of 0's and 1's to the plaintext before encrypting do not typically guarantee additional security. Provably secure cryptosystems. It is a bit unsatisfying to be using a cryptosystem that is not provably as difficult as some hard problem, e.g., factoring. Theoretical high ground = Blum-Goldwasser (1985). Provably as hard as factoring, semantically secure. Based on the probabilistic encryption scheme of Goldwasser and Micali. Comparable in speed to RSA. Electronic voting. We need a crypto scheme that makes it possible to confirm that your vote was correctly counted, without revealing whom the vote was for. We need the second condition to prevent someone from "buying" your vote, since if they have no way to verify for whom you voted, they have no incentive to bribe you. Zero knowledge. Alice wants to prove to Bob that a graph G is 3-colorable, but doesn't want to reveal any additional information. The example generalizes to many other problems since 3-Color is NP-complete. Digital rights management. In the traditional setting, the Alice, Bob, and Eve who are trying to communicate are human beings, and they use a computer to assist with the computation. An intriguing variant is when Alice and Bob are computers, and Eve is a human being. This is exactly the setting that the music industry envisions with digital rights management. In this case Alice is your computer, Bob is your speakers, and you are Eve. The music industry wants only your computer to be able to play the legally purchased music on your computer, but does not want you to be able to intercept the raw audio data. We can quickly imagine a world where there are restrictions for copying DVDs, running software, printing documents, and forwarding email.
All of these restrictions will be enforced via cryptographic algorithms and protocols. Goal: transform a program into an obfuscated version that computes the same function, but reveals no extra information (e.g., the source code) to a polynomially bounded adversary. Obfuscation is not possible in general. Security. Cryptography is only one part of overall computer security. One survey revealed that 70% of people would reveal their computer password in exchange for a chocolate bar. One security expert comments: "using encryption on the Internet is the equivalent of arranging an armored car to deliver credit card information from someone living in a cardboard box to someone living on a park bench." CAPTCHAs. Completely automated public Turing test to tell computers and humans apart. A reverse Turing test where the computer is the judge, trying to distinguish between a human and a computer. New York Times article. 1. Write a program to empirically determine the running time of the methods BigInteger.add, BigInteger.multiply, BigInteger.mod, and BigInteger.modPow. Try to model the running time of each operation as c N^k seconds for some constants c and k. Use the BigInteger(int numBits, Random rnd) constructor to generate random input parameters. For add, multiply, and modular exponentiation use N-bit integers for all of the arguments; for division, use an N-bit numerator and an N/2-bit denominator. 2. Write a program RandomPrime.java that takes a command-line argument N and prints out an N-bit integer that is (probably) prime. Use BigInteger.probablePrime for primality testing and SecureRandom to generate cryptographically secure pseudorandom numbers. 3. Estimate the running time of RandomPrime.java as a function of the number of bits N. 4. Suppose that instead of using RandomPrime.java to choose a prime with N bits, you used the following strategy: generate all primes with at most N bits, and choose a random one. What will happen if N is large, say 512? 5.
Suppose that instead of using repeated squaring to compute a^b mod c, you repeatedly multiply a by itself, b times, modding out by c. Estimate how long it will take if a, b, and c are N-bit integers. 6. What is the complexity of the following problem: given an even integer x, determine if x has any odd factors greater than one. Answer: polynomial - check whether x is a power of 2. 7. What is the complexity of the following problem: given an even integer x and another integer y, determine whether x has any odd factors between 3 and y. Answer: equivalent to the factoring problem. Creative Exercises 1. Extended Euclid's algorithm. Extend Euclid's algorithm for computing the greatest common divisor of p and q to also compute coefficients a and b (possibly zero or negative) such that ap + bq = gcd(p, q). Write a program ExtendedEuclid.java that takes two command-line parameters p and q and outputs gcd(p, q) and a pair of integers a and b as described above. EXTENDED-EUCLID(p, q) if q = 0 then return (p, 1, 0) (d', a', b') <- EXTENDED-EUCLID(q, p % q) (d, a, b) <- (d', b', a' - (p/q) b') return (d, a, b) Hint: since your method needs to return three integers, consider using an array of three elements. 2. Best rectangle. Given the area A of a rectangle, find a rectangle with integer width and height whose area is A and whose height and width are as close to each other as possible. For example, if A = 48, then the best rectangle is 6-by-8 and not 3-by-16 or 4-by-12. Show that if you could solve this problem, you could break the RSA cryptosystem. 3. Buckets of water. Given two buckets of capacity p and q, a receptacle of infinite capacity, a water hose, and a drain, devise a method to get exactly k liters of water into the receptacle using the following rules: □ You can fill either of the two buckets with the hose. □ You can empty either bucket to the drain.
□ You can transfer water between the two buckets, or between either bucket and the receptacle, until one is full or the other is empty. Prove that you can solve the problem if and only if k is a multiple of gcd(p, q). Hint: use the fact from the previous exercise that there exist integers a and b such that ap + bq = gcd(p, q). 4. Multiplicative inverse. Given a positive integer n, a multiplicative inverse mod n of an integer k is an integer x such that (k * x) % n = 1. Such an inverse exists if and only if gcd(k, n) = 1. Write a program Inverse.java that reads in two command-line arguments k and n and computes the modular inverse if it exists. Hint: use the answer to the previous exercise. 5. Breaking the RSA cryptosystem. One potential way to break the RSA cryptosystem is to compute φ(n) given n. Recall that if n = pq, then φ(n) = (p-1)(q-1). Show that computing φ(n) is equivalent to factoring n. Solution: obviously if you can factor n = pq, then computing φ(n) = (p-1)(q-1) is easy. To see the other direction, observe that n + 1 - φ(n) = pq + 1 - (p-1)(q-1) = p + q = n/q + q. Thus q^2 - (n + 1 - φ(n))q + n = 0. Assuming we know φ(n), we can solve this quadratic equation for q and recover one of the factors of n. We can recover the other factor p by computing n/q. 6. Generating public and private RSA keys. Write a program RSA.java to generate a key pair for use with the RSA cryptosystem: generate two N/2-bit primes p and q, set e = 65537, compute n = pq and φ = (p-1)(q-1), and find a number d such that (e * d) % φ == 1. Assuming gcd(e, φ) = 1, the inverse d will exist. Hint: use RandomPrime.java to compute p and q, and use Inverse.java to compute d. 7. Sophie Germain primes. The security of the RSA cryptosystem appears to be improved if you use special types of primes for p and q. Specifically, use primes p where (p-1)/2 is also prime; such a p is known as a safe prime, and the corresponding (p-1)/2 is a Sophie Germain prime. Generate a public and private RSA key where p and q are safe primes.
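Following the definition used in the exercise above (a prime p such that (p-1)/2 is also prime), the search can be sketched with BigInteger as follows; the class and method names are our own, and the bit size shown is small only so the demo runs quickly.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class SafePrimeSketch {
    // Repeatedly draws a random probable prime p with the given number of bits
    // and keeps it only if (p-1)/2 is also (probably) prime.
    static BigInteger safePrime(int bits, SecureRandom random) {
        while (true) {
            BigInteger p = BigInteger.probablePrime(bits, random);
            BigInteger half = p.subtract(BigInteger.ONE).shiftRight(1);  // (p-1)/2
            if (half.isProbablePrime(50)) return p;
        }
    }

    public static void main(String[] args) {
        // A 32-bit prime p with (p-1)/2 also prime (output varies per run)
        System.out.println(safePrime(32, new SecureRandom()));
    }
}
```

Because only a fraction of random primes qualify, the loop typically needs many draws; timing this loop for growing N is exactly the investigation the exercise asks for.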
Investigate how long it takes to find such a prime as a function of the number of bits N. 8. Fermat primality testing. The Fermat primality test is an algorithm that takes an odd integer n and reports that it is definitely composite or "likely" prime. By "likely" we mean the algorithm is sometimes wrong, but not too often. Fermat's theorem says that if p is prime and gcd(a, p) = 1, then a^(p-1) ≡ 1 (mod p). A version of the converse is used as a crude primality test in the PGP cryptosystem: if 2^(p-1) ≡ 3^(p-1) ≡ 5^(p-1) ≡ 7^(p-1) ≡ 1 (mod p), then use p as a prime. Unfortunately, there are some numbers that satisfy this Fermat test but are not prime (e.g., 29341 and 46657). 9. Miller-Rabin primality testing. The Miller-Rabin algorithm is a randomized algorithm for determining whether an odd integer n is prime. It takes a security parameter t and outputs either prime or composite. If it outputs composite, then n is definitely composite; if it outputs prime, then n is probably prime, but the algorithm could be wrong with probability 2^-t.
boolean isProbablyPrime(BigInteger n, int t) {
    Compute r and s such that n-1 = 2^s * r and r is odd
    Repeat t times {
        Choose a random integer a such that 1 < a < n-1
        Compute y = a^r mod n by repeated squaring
        If (y ≠ 1 and y ≠ n-1) {
            j = 1
            while (j < s and y ≠ n-1) {
                y = y^2 mod n
                if (y == 1) return false
                j = j + 1
            }
            if (y ≠ n-1) return false
        }
    }
    return true
}
10. Factoring. Win $200,000 from RSA Security for factoring a 2048-bit number (616 digits). Factor a 64-bit number (32-bit RSA) using Program xyz in under a minute. How long would it take to factor a 128-bit number? 11. Pollard's rho method. Pollard's rho method is a randomized factoring algorithm that can factor 128-bit numbers in a reasonable amount of time, especially if the numbers have some small factors. It is based on the following fact: if d is the smallest nontrivial factor of N and x - y is a nontrivial multiple of d, then gcd(x-y, N) = d.
A naive method would be to generate a bunch of random values x[1], x[2], ..., x[m] and compute gcd(x[i]-x[j], N) for all pairs i and j. Pollard's rho method is an ingenious way to find x and y without doing all of the pairwise computations. It works as follows: choose a and b at random between 1 and N-1, and initialize x = y = a. Repeatedly update x = f(x), y = f(f(y)), where f(x) = x^2 + b, as long as gcd(x-y, N) = 1. When the loop stops, gcd(x-y, N) is a factor of N, but if you get unlucky, it could be equal to N. By randomly choosing a and b each time, we ensure that we never get too unlucky. Write a program PollardRho.java that takes a command-line argument N and uses the Pollard rho method to compute a prime factorization of N. Estimate the running time as a function of N. 12. Feit-Thompson conjecture. Disprove the Feit-Thompson conjecture: there are no two primes p and q such that (p^q - 1) / (p - 1) and (q^p - 1) / (q - 1) have a common factor other than 1. Counterexample: (17, 3313), with common factor 112643. 13. Karatsuba multiplication. Write a program Karatsuba.java that multiplies two integers using the Karatsuba algorithm. This ingenious algorithm computes the product of two 2N-bit integers using only three N-bit multiplications (and a linear amount of extra work). To multiply x and y, break up x and y into N-bit chunks and use the following identity: xy = (a + 2^N b)(c + 2^N d) = ac + [(a+b)(c+d) - ac - bd] 2^N + bd 2^(2N) Your recursive algorithm should compute the number of bits N and switch to the default BigInteger.multiply method when N is small (say 10,000), applying the Karatsuba divide-and-conquer strategy otherwise. Investigate the optimal cutoff point and compare its effectiveness against BigInteger.multiply when N = 10 million. 14. Factoring reduces to finding a factor. Given a function factor(N) that returns 1 if N is prime, and any nontrivial factor of N otherwise, write a function factorize(N) that returns the prime factorization of N. 15. Perfect power.
An integer N is a perfect power if N = p^q for two integers p ≥ 2 and q ≥ 2. Design an efficient algorithm (polynomial in the number of bits of N) to determine if N is a perfect power, and if so, find its prime factorization. Hint: for each q ≤ lg N, binary search for a p satisfying N = p^q. 16. Euler's conjecture. In 1769 Euler conjectured that there are no positive integer solutions to a^4 + b^4 + c^4 = d^4. Noam Elkies discovered the first counterexample, 2682440^4 + 15365639^4 + 18796760^4 = 20615673^4, over 218 years later. Write a program Euler.java to disprove Euler's conjecture. The brute force solution outlined in Exercise XYZ won't work for two reasons: (i) it will take too much time to find the solution using a quadruply nested loop, and (ii) computing a^4 will overflow a long, since the smallest such counterexample is 95800^4 + 217519^4 + 414560^4 = 422481^4. 1. Use the following idea. Iterate over all integers a and b between 1 and N and insert a^4 + b^4 into a hash table. Then, iterate over all integers c and d between 1 and N and search to see if d^4 - c^4 is in the hash table. Use extended precision integers to avoid overflow. 2. Using extended precision integers can be a significant overhead over using primitive types. Instead of inserting a^4 + b^4 into the hash table, insert a^4 + b^4 modulo p, where p is some big prime, say XYZ. Then, iterate over all c and d and search for d^4 - c^4 modulo p. If there's a match, use extended precision arithmetic to check that it isn't just a coincidental collision. Hint: to avoid overflow when computing a^4 + b^4 modulo p, mod out multiples of p after each multiplication. 17. Fingerprinting. Alice and Bob maintain two copies of a large genomics database in different locations. For consistency, they want to be able to compare whether the two databases are identical. We interpret the databases as N-bit integers, say A and B. Because N is very large, they can't afford to transmit the whole database.
Instead, consider the following scheme for sending a fingerprint of the data that enables Alice and Bob to check whether the data is inconsistent. Alice generates a random prime number p between 2 and, say, N^2 and sends p and (A % p). This takes only O(log N) bits. Bob declares that A and B are the same if ((A % p) == (B % p)). The probability of having a false negative (a no that should have been a yes) from this scheme is zero. Show that the probability of having a false positive (a yes that should have been a no) goes to 0 as N goes to infinity. Hint: use the fact that the number of primes less than N^2 is at least c N^2 / log N, for some constant c > 0. How many bits are sent? 18. Flipping a coin over the phone. Alice and Bob are in the midst of a bitter divorce. They have decided to flip a coin to see who will get custody of their only son Carl. However, they refuse to see each other in person and they don't want anyone else to know how they resolved the custody dispute. In other words, we want to devise a method to flip a fair coin over a phone line or the Internet so that neither party can cheat. Here is an elegant protocol: 1. Alice multiplies together two or three large primes and sends the product N to Bob. 2. Bob receives the integer N and responds with the number 2 or 3. 3. Alice waits for a valid response from Bob and then sends Bob the prime factorization of N. 4. If Bob guesses the correct number of factors, then he gets custody. Otherwise, assuming Alice follows the protocol, she wins custody. Explain why the system works by answering each of the following questions. You may assume that there is no efficient way to determine whether a given integer N has at least 3 nontrivial factors (although this is an unresolved conjecture). 1. How can Alice compute N efficiently? 2. Why can't Bob efficiently determine the true answer on his own? 3. How can Bob efficiently check that Alice sent him the correct factorization of N?
In other words, what's to prevent Alice from revealing two factors (one of which is not prime) if Bob says 3, even if she multiplied three (or more) primes together? 19. Poker over the phone. Use the bit-commitment scheme described above to develop a protocol to play poker over the phone, say between two parties. 20. Discrete log. Let p be a prime number. The discrete log of a to the base b is the unique integer x between 0 and p-1 such that a = b^x (mod p). For example, if p = 97, b = 5, and a = 35, then log_5 35 = 32 since 5^32 = 35 (mod 97). Write a program DiscreteLog.java that takes three command-line inputs a, b, and p, and computes log_b a modulo p via brute-force search. 21. Diffie-Hellman. Let p be a prime number, and let a and b be two integers. Given p, x, x^a (mod p), and x^b (mod p), the Diffie-Hellman problem is to compute x^(ab) (mod p). 22. Rabin's cryptosystem. Select primes p and q such that p ≡ 3 (mod 4) and q ≡ 3 (mod 4). The public key is n = pq and the private key is (p, q). To encrypt, compute E(m) = m^2 mod n. To decrypt, compute D(c) = sqrt(c) mod n. How to compute the square root c = x^2 mod n? Use the extended Euclid's algorithm to find a and b such that ap + bq = 1. Compute r = c^((p+1)/4) mod p and s = c^((q+1)/4) mod q. Compute m = (aps + bqr) mod n and t = (aps - bqr) mod n. The four square roots of c are m, -m mod n, t, and -t mod n. 23. Analog private-key exchange. You are stranded on an island with a box, a padlock with key, and a copy of Introduction to Computer Science. You have a friend on another island who also has a box and a padlock with key, and who wants to borrow your copy of the textbook. You can ship stuff via an unscrupulous courier service who will pillage anything inside the box if it is left unlocked. How can you get your book to your friend? 24. Cryptographically secure hash functions. SHA-1 and MD5. You can compute a hash by converting a string to bytes, or while reading in bytes one at a time.
import java.security.MessageDigest;
MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
byte[] hash = sha1.digest();
25. RSA in Java. There is built-in functionality for RSA and DSA. Untested code below.
// key generation
KeyPairGenerator keygen = KeyPairGenerator.getInstance("DSA");
SecureRandom random = new SecureRandom();
keygen.initialize(512, random);
KeyPair keys = keygen.generateKeyPair();
PublicKey pubkey = keys.getPublic();
PrivateKey prikey = keys.getPrivate();
// digital signing
Signature signer = Signature.getInstance("DSA");
signer.initSign(prikey);
byte[] signature = signer.sign();
// verifying
Signature verifier = Signature.getInstance("DSA");
verifier.initVerify(pubkey);
boolean check = verifier.verify(signature);
26. Blum-Blum-Shub pseudorandom bit generator. Choose two distinct N-bit primes p and q such that p mod 4 = q mod 4 = 3. Set n = pq and choose a starting value x[0] by selecting a random seed 1 < s < n such that gcd(s, n) = 1. Form the sequence of integers x[0] = s^2 mod n and x[i+1] = x[i]^2 mod n. Use x[i] % 2 as the sequence of pseudorandom bits. There is no need to keep n secret. Discovering any pattern (in polynomial time) is provably as hard as factoring n. Note: we still need to generate p, q, and s at random, but these have only O(N) bits, and we will be able to generate 2^N pseudorandom bits. Can be used as a one-time pad. Can also be used directly for public-key crypto. 27. VCR Plus decoding. A remote control scheme for recording programs on a VCR using a special code printed in newspapers. Bad cryptography, so easy to break. paper 28. Pascal's triangle. One way to compute the kth row of Pascal's triangle is to compute (2^b + 1)^k for a sufficiently large b and read its binary representation b bits at a time. 1^0 = 1 (1^0 = 1 = 1) 11^1 = 1 1 (3^1 = 3 = 1 1) 101^10 = 01 10 01 (5^2 = 25 = 1 2 1) 101^11 = 01 11 11 01 (5^3 = 125 = 1 3 3 1) 1001^100 = 001 100 110 100 001 (9^4 = 6561 = 1 4 6 4 1) 10001^101 = 0001 0101 1010 1010 0101 0001 (17^5 = 1419857 = 1 5 10 10 5 1) 29. Bailey-Borwein-Plouffe algorithm.
Compute the ith binary digit of π without computing the earlier digits using the BBP algorithm, which requires modular exponentiation. 30. Secret sharing. We want to distribute a message to N people so that any 3 of them can recover the original message, but any 1 or 2 cannot. (Reference: Scientific American puzzle.) 31. Mersenne primes. A Mersenne prime is a prime of the form M_p = 2^p - 1, where p is an odd prime. To test whether M_p is prime, form the following sequence: s_0 = 4, s_(i+1) = (s_i^2 - 2) mod M_p. M_p is prime if and only if s_(p-2) = 0 mod M_p. This method is known as the Lucas-Lehmer primality test.
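The Lucas-Lehmer test just described translates into a few lines with BigInteger; the class and method names below are our own. For example, it confirms that 2^7 - 1 = 127 is prime while 2^11 - 1 = 2047 = 23 * 89 is not.

```java
import java.math.BigInteger;

public class LucasLehmerSketch {
    // Lucas-Lehmer test for M_p = 2^p - 1, p an odd prime:
    // s_0 = 4, s_(i+1) = s_i^2 - 2 (mod M_p); M_p is prime iff s_(p-2) = 0.
    static boolean isMersennePrime(int p) {
        BigInteger mp = BigInteger.ONE.shiftLeft(p).subtract(BigInteger.ONE);  // 2^p - 1
        BigInteger s = BigInteger.valueOf(4);
        for (int i = 0; i < p - 2; i++)
            s = s.multiply(s).subtract(BigInteger.valueOf(2)).mod(mp);
        return s.signum() == 0;
    }

    public static void main(String[] args) {
        System.out.println(isMersennePrime(7) + " " + isMersennePrime(11));  // prints: true false
    }
}
```

Each of the p - 2 iterations squares a number with at most p bits, so the test runs in time polynomial in p even though M_p itself is astronomically large.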
Free online tutorials from Interactmath.com for many Pearson textbooks, but you can use with any precalculus textbook. For precalculus, choose Blitzer's Precalculus textbook, and you can practice sections and subsections that you need help with. Step by step help is provided. Free videos from Khan Academy These videos address all the main topics of algebra and precalculus. Free registration is required. Study Guide for Precalculus - This notebook organizer will help you develop a section-by-section summary of the key concepts in Precalculus. It is a set of templates to help you take notes, review section highlights, draw graphs, and keep track of homework assignments. From the Cengage companion web site to Larson's Precalculus, 4th ed. (but you can use it with any textbook - the page numbers and section numbers may differ, but the content will be very similar .) NEW! These Wolfram|Alpha widgets can be very useful. Change the entries in the box, and try them out! You can even embed them to your own web site. Type in an equation to solve, or a function to graph in the dialog box for Wolfram Alpha! Use it to explore concepts and check homework problems. For example, type solve x^2-9=0 or plot x^2-9, x=-2 to 4 Student success organizer - This notebook organizer will help you develop a section-by-section summary of the key concepts in College Algebra. It is a set of templates to help you take notes, review section highlights, draw graphs, and keep track of homework assignments. From the Cengage companion web site to Larson's College Algebra, 6th ed. Online algebra quizzes Self checking quizzes on various basic algebra topics from a textbook companion web site. Accessible to anyone. Purplemath algebra page - from pre-algebra to precalculus; lots of topics covered for review. Graphing calculator resources - online guides and tutorials for using many popular models of graphing calculators; some are interactive Modeling with Excel. 
A self contained Excel workbook showing how to graph functions, build tables and perform curve fitting with polynomials. The file contains macros to use for graphing. Download and open with Excel for maximal functionality. www.mathdemos.org - site containing many demos for courses; an NSF project General data graphing applet from the National Center for Education Statistics. Very cool. SOS Math algebra page - various topics in algebra are covered here; graphical approach when appropriate Geogebra- free graphing program; Check out the GeoGebra page on this site
Fractals and Benoit Mandelbrot: Lessons for society It was announced this week that Benoit Mandelbrot passed away at the age of 85. One news source called him a ‘maverick’ mathematician. It was Mandelbrot who introduced the word ‘fractals’ to the Western world to capture an aspect of mathematics that had been resisted by the Western academy because of a worldview that would not deal with an ‘alien’ concept of uncertainty and the infinite complexity of nature. We want to use the news of his passing to bring to the fore the importance of fractals and fractal thinking in society. According to the report on his passing by the New York Times, ‘Dr. Mandelbrot coined the term “fractal” to refer to a new class of mathematical shapes whose uneven contours could mimic the irregularities found in nature.’ In the era of quantum mechanics, complexity and chaos, the ideas behind fractal thinking could no longer be ignored and grudgingly, fractal geometry began to gain acceptance in the Western academy. We want to salute Mandelbrot for his tenacity in bringing the concept of fractals to the Western academy. While we commend Mandelbrot for his doggedness, we use this opportunity to state that before Mandelbrot coined the term ‘fractal’ and popularised it in the Western academy, the knowledge and application of this geometry of nature had always existed in the thinking of African peoples. Fractal geometry was at the heart of the African ontology and knowledge system, from divination and architecture to hair weave and craft. More than 40 years ago, Claudia Zaslavsky exposed to the West her research on the African mathematical heritage. Her book, ‘Africa Counts: Number and Pattern in African Culture’ was a major contribution to the understanding of mathematics in everyday life in Africa. This analysis was carried to another level by Ron Eglash at the end of the 20th century. 
In his research presented in the book ‘African Fractals: Modern Computing and Indigenous Design’, Ron Eglash was exposed to the fact that the knowledge and application of fractals had been alive for millennia in Africa. There are invaluable lessons to be learned for humanity by exploring further the heap of ideas surrounding fractals. Particularly, African societies, the African academy and the political leadership in Africa must pay close attention to exploring the transformational and revolutionary ideas embedded in fractals. There is no doubt about the tremendous contribution of Mandelbrot to the fields of mathematics and science. Almost every discipline in the Western academy has been affected by fractal geometry. For decades, Benoit Mandelbrot was at the forefront of explaining and writing about fractals. ‘If you cut one of the florets of a cauliflower, you see the whole cauliflower but smaller. Then you cut again, again, again, and you still get small cauliflowers. So there are some shapes which have this peculiar property, where each part is like the whole, but smaller,’ explained Mandelbrot. He argued that seemingly random mathematical shapes followed a pattern if broken down into a single repeating shape. The concepts of self-similarity and scaling in fractals enabled scientists to measure previously immeasurable objects, including the coastline of the British Isles and the geometry of a lung or a cauliflower. We now know that the seminal contribution of fractal mathematics led to technological breakthroughs in the fields of digital music and image compression. Computer modelling and the information technology revolution have been pushed by insights from fractal geometry. This repeating pattern is what fractal geometry calls self-similarity.
This concept of self-similarity is also linked to the other key elements of fractal geometry: scaling, recursion and infinity. In fractals, this concept of infinity is also known as the Cantor set. In the late 19th century, Georg Cantor (1845–1918) provided a new approach for European mathematicians when he showed that it was possible to ‘keep track of the number of elements in an infinite set’, and did so in a descriptively simple fashion. Starting with a single straight line, Cantor erased the middle third, leaving two lines. He then carried out the same operation on those two lines, erasing their middles and leaving four lines. In other words, he used a sort of feedback loop, with the end result of one stage brought back as the starting point for the next. The technique is called ‘recursion’ (Eglash, p. 8).

This concept of infinity had long, before Cantor, been part of the African divination system. In Africa, Eglash encountered some of the most complex fractal systems that exist in religious activities, such as the sequence of symbols used in sand divination, a method of fortune telling found in Senegal. For the diviners, this concept of infinity carried a metaphysical significance. This sand divination was later referred to as ‘geomancy’ in Europe (Eglash, p. 99–101).

Eglash and others credited Mandelbrot with the conceptual leap of applying fractal geometry to the simulation of natural objects. The relevant point is that fractals existed in nature, and before Mandelbrot there were Koch and Cantor. Before Koch and Cantor there were many people in Africa who understood fractal geometry and the explicit and implicit mathematical ideas that were to be found in everyday life in Africa. It has been established that before Mandelbrot exposed the Western world to the application of fractals, these forms of knowledge had always existed in the ontology and creativity of Africans.
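Cantor's construction described above can be sketched directly in code. The sketch below (a minimal illustration, not taken from the source) applies the middle-third erasure repeatedly: each pass replaces every interval with its two outer thirds, which is exactly the recursion, the feedback loop with the end result of one stage fed back in as the start of the next.

```python
from fractions import Fraction

def cantor(intervals, depth):
    """Apply Cantor's middle-third erasure `depth` times:
    each interval (a, b) is replaced by its two outer thirds."""
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))      # keep the left third
            nxt.append((b - third, b))      # keep the right third
        intervals = nxt
    return intervals

# Two erasures starting from the unit interval: 1 -> 2 -> 4 segments
stage2 = cantor([(Fraction(0), Fraction(1))], 2)
print(len(stage2))   # 4
print(stage2[0])     # (Fraction(0, 1), Fraction(1, 9))
```

Each pass doubles the number of segments while shrinking each to a third of its parent, which is where the set's fractal dimension log 2 / log 3 comes from.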
The ideas about the infinite nature of the universe that are now central to particle physics were manifest in many African communities, the celebrated case of the Dogon people being the most widely known. Other aspects of advanced geometry and physics were present in the numeric systems of many societies, especially in relation to the Lusona drawings of the Chokwe people. When the colonial missionaries could not decipher the complex mathematics behind the Lusona, they deemed the Chokwe to be the most backward and uncivilised in Africa. It is now known that the Dogon and Chokwe reflected a deep understanding of the mathematics of nature.

African village settlements show self-similar characteristics: circles of circles, circular dwellings, and streets in which broad avenues branch down to tiny footpaths with striking geometric repetition, distinguishable from the Euclidean layout. Ron Eglash presented his research findings in his book ‘African Fractals’ to show that African fractals emanated from a conscious knowledge system and not from unconscious activity. It was during an aerial exploration of rural parts of Africa that Eglash grasped the central aspect of the architectural designs in terms of self-similarity and scaling of patterns. In his book he said clearly that, ‘While fractal geometry can take us into the far reaches of high tech science, its patterns are common in traditional African designs and the concepts are fundamental to African knowledge systems.’

Eglash’s findings also include the use of sophisticated mathematical ideas in everyday objects. In the arid region of the Sahel, for example, artisans produce windscreens by utilising a scaling design that gives them the maximum effect – keeping out the wind-driven dust – for the minimum amount of effort and material.
Abdul Karim Bangura, another scholar of African science and mathematics, in his review of Eglash’s text noted that: ‘Aerial photographs of various settlement compounds revealed that many were composed of circular structures enclosed in other circles, or rectangles within rectangles, and that the compounds were likely to have street patterns in which broad avenues branched into very small footpaths. As Eglash notes, at first he thought it was just from unconscious social dynamics. But during his fieldwork, he found that fractal designs also appear in a wide variety of intentional designs – carving, hairstyling, metalwork, painting, textiles – and the recursive process of fractal algorithms are even employed in African quantitative systems…. These results, Eglash concludes, are congruent with recent developments in complex systems theory, which suggest that pre-modern, non-state societies were neither utterly anarchic, nor frozen in static order, but rather utilized an adaptive flexibility that capitalized on the non-linear aspects of ecological dynamics.’

Since the writing of this review, Ron Eglash has not only written extensively on African fractals; his widely watched presentation at the TED conference has also brought the ideas of fractals to an international audience. When Eglash returned from Africa, one of his colleagues advised him to focus on scaling patterns in African hairstyles. In the conclusion on scaling, Eglash himself admitted: ‘While it is not difficult to invent explanations based on unconscious social forces – for example flexibility in conforming designs to material surfaces as expressions of social flexibility – I do not think that any such explanations can account for this diversity. From optimisation engineering, to modelling organic life, to mapping between different spatial structures, African artisans have developed a wide range of tools, techniques and design practices based on the conscious application of fractal geometry’ (p. 85).
Scaling and self-similarity are descriptive characteristics; one can see these in African designs. The idea is to grasp how these were intentionally designed so that we can have a better grasp of African fractals. Eglash then went on to look closely at African architecture, designs, art and village structure, cosmology and divination systems, and sought to understand how all of these are linked to an African knowledge system. I have elsewhere used the term the African ideation system or worldview. The question for us is to understand how this is linked to political relations in Africa.

Of the five main elements of fractals that were highlighted in his book – scaling, self-similarity, recursion, infinity and fractal dimensions – Eglash drew attention to the recursive processes that generate a feedback loop. Eglash gave three examples of recursion, namely, cascade, iteration and self-reference.

I was introduced to fractals and African mathematics by Sam E. Anderson, and I met Eglash in 1999 to engage him on this concept of African fractals. Ever since my meeting with Eglash, I have seen the revolutionary implications of fractal thinking and a fractal worldview. I have sought to further the understanding of the relationship between fractal optimism and politics in my book, ‘Barack Obama and Twenty First Century Politics’. In this book, I sought to underline the importance of self-organisation and self-mobilisation as the basis for a new bottom-up politics that could unleash a new form of participatory democracy for the 21st century (based on the intentional activities of conscious humans).

Fractals have been applied in many other fields. In the application of fractals to political science, elements such as recursion, cascading, self-similarity and memory help us understand the self-replication of genocidal violence, exploitation, militarism, masculinity and environmental plunder, among others.
Thus, it becomes imperative for there to be a coordinated human intention to make a break with such traditions (negative recursion) and to establish a different legacy that would form a positive recursive loop for the transformation of society for posterity. One lesson of fractals for African (and other) societies is the conceptual application of the ideas of self-similarity and self-referencing in recursion, and the imperative that this mode of thinking breaks the certainty and predictability of determinism. Determinism, simplicity and reductionism had migrated from the physical sciences to implant the artificial divisions in the academic disciplines that became the hallmark of the social sciences in the Western world.

Fritjof Capra had warned against this certainty of Western thinking. In the book, ‘The Turning Point’, he argued: ‘For two and a half centuries physicists have used a mechanistic view of the world to develop and refine the conceptual framework known as classical physics. They have based their ideas on the mathematical theory of Isaac Newton, the philosophy of René Descartes, and the scientific methodology advocated by Francis Bacon … Like human-made machines, the cosmic machine was thought to consist of elementary parts. Consequently it was believed that complex phenomena could always be understood by reducing them to their basic building blocks and by looking for the mechanisms through which these interacted. This attitude, known as reductionism, has become so deeply ingrained in our culture that it has often been identified with the scientific method.’

Humans now know that this reductionism of the ‘scientific method’ emanated from a European reading of science and human knowledge.
With the advances in digital technology and genetic engineering, advances made possible by the application of fractal geometry, the promise of the future demands that humans have a deep appreciation of the inter-relationship between humans and nature so that we do not become slaves to technology. This demands from us the obligation to intervene as humans to reverse the headlong rush towards dehumanisation and the destruction of the planet earth. Fractal thinking and the understanding of the consequences of the reference points for progress demonstrate the necessity to make a break with the recursion of negative self-similar patterns such as conflicts and wars, domination, exploitation, militarisation and religious and ethnic tensions. We can see that we are in a feedback loop of economic crisis, intensified exploitation, stock-market failures and conflicts. This kind of recursive process has a definite reference point, which is the history of capitalism, racism, domination, oppression, greed and plunder. It is in examining the connection between the two (recursion and cultural categories) that the use of fractal geometry as a knowledge system (and not just unconscious social dynamics) becomes evident.

The next lesson of African fractals is for African educational institutions. African education must support research agendas that seek to unearth the richness of Africa and focus on positive aspects of the African knowledge system as an indispensable site of knowledge. The road to the re-establishment and reaffirmation of Africa as a site of knowledge has never been smooth, and may get rougher unless our scholarly tradition refrains from following a recursive path that is self-similar to that which attempted to deny and subjugate our intelligence and ontology. Three years ago, Paulus Gerdes and Ahmed Djebbar produced the important bibliography ‘Mathematics in African History and Culture’.
This bibliography carried forward the traditions of Cheikh Anta Diop, who did so much to unearth and highlight the contributions of African mathematics to research and learning. Diop studied in France at the same time as Mandelbrot. Diop moved to Paris in 1946 and studied nuclear physics and Egyptology. He submitted his thesis to the University of Paris in 1951, but could not find a committee to examine his work on the Egyptian contribution to math and science. It was only after nine years that he was granted his doctorate by the University of Paris, in 1960. It was not by chance that Diop was a physicist who had studied relativity and quantum physics. It was this study that brought Diop back to an awareness of the richness of African knowledge and intellectual traditions, and although he did not use the term fractals, his research and work shared many points of convergence with Benoit Mandelbrot.

Just as it was difficult for the ideas of Diop to be accredited in the French academy, so Mandelbrot’s popularisation of the idea of fractals in the West was not an easy task. Mandelbrot attended school in France in the same period when the African scientist Cheikh Anta Diop was also studying in Paris. Between 1949 and 1952, Mandelbrot wrote his Docteur d’Etat ès Sciences Mathématiques at the Faculté des Sciences, Paris. After receiving his doctorate in 1952 in France, Mandelbrot moved to the United States, where he pursued postdoctoral work. Mandelbrot followed a tortuous career between industry and the academy because of his views on complexity and infinity. It was not until he was nearly 75 years old that he was granted tenure in the mathematics department at Yale, in 1999. His book, ‘The Fractal Geometry of Nature’, was first published in 1982.
Writing in the popular magazine New Scientist, one reviewer said of the book: ‘Fractal geometry is one of those concepts which at first sight invites disbelief but on second thought becomes so natural that one wonders why it has only recently been developed…’ The reviewer further writes about Mandelbrot: ‘First, he has enriched our geometric imagination … with computer graphics of stunning beauty … Secondly, he demonstrates that fractals are good models for an impressive variety of natural objects … Thirdly, he emphasizes that fractals imply an unconventional philosophy of geometry [contrary to the conventional] “Newtonian” picture … Mandelbrot’s essay is written in a personal, intense and immediate style.’

Mandelbrot also wrote the book ‘The (Mis)behavior of Markets: A Fractal View of Risk, Ruin, and Reward’. In this book, Mandelbrot warned that markets are far riskier than society wanted to believe. From the gyrations of IBM’s stock price and the Dow, to cotton trading and the dollar–euro exchange rate, Mandelbrot showed that the world of finance can be understood through its volatility. Contrary to the advice of stockbrokers, there was nothing certain about the future and stability of the stock market. The ideas of fractals were further popularised in Scientific American in 1999 under the title ‘A Multifractal Walk Down Wall Street’. In this article Mandelbrot argued that: ‘Fractal patterns appear not just in the price changes of securities but in the distribution of galaxies throughout the cosmos, in the shape of coastlines and in the decorative designs generated by innumerable computer programs.

‘In finance, this concept is not a rootless abstraction but a theoretical reformulation of a down-to-earth bit of market folklore – namely, that movements of a stock or currency all look alike when a market chart is enlarged or reduced so that it fits the same time and price scale.
An observer then cannot tell which of the data concern prices that change from week to week, day to day or hour to hour. This quality defines the charts as fractal curves and makes available many powerful tools of mathematical and computer analysis.’

Despite these warnings that there was uncertainty in this branch of finance, a brand new group of financial wizards attempted to bring back the linearity and certainty of capitalist development and growth to predict the unlimited rise of the stock market. These wizards were to be called ‘quants’ on Wall Street, and they populated the area of speculation called the market for derivatives. Warren Buffett had called these derivatives ‘financial weapons of mass destruction’. The world was brought face to face with the complexity and chaos of this branch of finance in 2008, yet the mindset of certainty and the unlimited potential of capitalism has meant that the gurus of the world of quants have returned to the mythical world of unlimited profits.

In the New York Times report on the passing of Mandelbrot we are reminded by Mandelbrot himself that life is not linear and not based on a straight line: ‘Dr. Mandelbrot compared his own trajectory to the rough outlines of clouds and coastlines that drew him into the study of fractals in the 1950s. “If you take the beginning and the end, I have had a conventional career,” he said, referring to his prestigious appointments in Paris and at Yale. “But it was not a straight line between the beginning and the end. It was a very crooked line.”’

The important point is that human intentions become an important aspect of human interactions with nature, and it is this intentionality, which existed in Africa, that was brought out in the book ‘African Fractals’ by Eglash. The study of fractals illustrates the importance of the human intention to make a break when recursive processes lead to militarism, destruction and greed.
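Mandelbrot's observation quoted above — that a rescaled market chart looks like the original — can be illustrated with an ordinary random walk (a hedged toy model; Mandelbrot's own point was that real prices are wilder than this Gaussian sketch). The spread of k-step moves grows like √k, so stretching the time axis by k and the price axis by √k leaves the chart statistically unchanged:

```python
import random
import statistics

random.seed(1)                       # deterministic toy data
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0, 1))

# Self-affinity check: the stdev of k-step increments, divided by
# sqrt(k), should be roughly constant across scales.
for k in (1, 4, 16, 64):
    moves = [walk[i + k] - walk[i] for i in range(0, len(walk) - k, k)]
    print(k, round(statistics.stdev(moves) / k ** 0.5, 2))
```

Each printed ratio hovers near 1 regardless of the lag k, which is the statistical sense in which an observer "cannot tell" which sampling interval a chart uses.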
While the quants have applied fractal geometry to the modelling of the derivatives market, it is only conscious action by citizens that can make a break from these financial weapons of mass destruction. This break with negative recursion and the establishment of a positive recursive loop is applicable to our education system, our leadership orientation, our engagement with the environment and our relations as humans. To this end, we propose that there must be a human intention to make Ubuntu – shared humanity and respect for the environment – the reference point that would self-replicate and cascade itself across all sections of society.

Written by Horace Campbell

Horace Campbell is a teacher and writer. His latest book is ‘Barack Obama and 21st Century Politics: A Revolutionary Moment in the USA’, published by Pluto Press.

PAMBAZUKA NEWS 2010-10-21, Issue 501
solve for x.

5(4x+5) + 4(6x+5) + 2 = 44x + 48

Reply: 20x + 25 + 24x + 20 + 2 = 44x + 48, so 44x + 47 = 44x + 48. But 47 is not equal to 48, so as you can see this equality doesn't hold... no value of x (ans.), or else you have posted the wrong question.

Asker: also i just can't get this: \[\frac{ 1 }{ 5 }x+ \frac{ 1 }{ 3 }=1(\frac{ 2 }{ 3 }x+2)\]

Asker: regarding the answer above — thank you sir! answer: no value :'D

Reply: always welcome.

Asker: thanks man, you're awesome bro!! :) props

Reply: please click "best response" if you think I have given you the best one.

Reply (worked solution):
5(4x+5)+4(6x+5)+2 = 44x+48
Multiply 5 by each term inside the parentheses: 20x+25+4(6x+5)+2 = 44x+48
Multiply 4 by each term inside the parentheses: 20x+25+24x+20+2 = 44x+48
Since 20x and 24x are like terms, add 24x to 20x to get 44x: 44x+25+20+2 = 44x+48
Add 20 to 25 to get 45: 44x+45+2 = 44x+48
Add 2 to 45 to get 47: 44x+47 = 44x+48
Since 44x contains the variable to solve for, subtract 44x from both sides: 47 = 48
Since 47 ≠ 48, there are no solutions.
No solution.

Reply (worked solution for the second equation):
(1/5)x + (1/3) = 1((2/3)x + 2)
Multiply (1/5) by x to get x/5: x/5 + 1/3 = 1((2/3)x + 2)
Multiply (2/3) by x to get 2x/3: x/5 + 1/3 = 1(2x/3 + 2)
Remove the parentheses around 2x/3 + 2: x/5 + 1/3 = 2x/3 + 2
Since 1/3 does not contain the variable to solve for, subtract 1/3 from both sides: x/5 = -(1/3) + 2x/3 + 2
Simplify the right-hand side: x/5 = (2x + 5)/3
Since there is one rational expression on each side, cross-multiply (A/B = C/D is equivalent to A*D = B*C): 3x = 5(2x + 5)
Multiply 5 by each term inside the parentheses: 3x = 10x + 25
Since 3x and -10x are like terms, subtract 10x from both sides: -7x = 25
Divide each term by -7 and simplify: x = -(25/7)
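Both questions in this thread reduce to the same mechanical step — collecting terms into a*x + b = c*x + d and isolating x. A small exact-arithmetic checker (illustrative only, not part of the original thread) reproduces both outcomes, including the "no solution" case where the x terms cancel:

```python
from fractions import Fraction as F

def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d exactly.
    Returns 'all reals', None (no solution), or the unique x."""
    if a == c:
        return "all reals" if b == d else None
    return F(d - b) / F(a - c)

# 5(4x+5) + 4(6x+5) + 2 = 44x + 48 simplifies to 44x + 47 = 44x + 48
print(solve_linear(44, 47, 44, 48))                   # None -> no solution

# (1/5)x + 1/3 = (2/3)x + 2
print(solve_linear(F(1, 5), F(1, 3), F(2, 3), F(2)))  # -25/7
```

Using `Fraction` keeps the answer as the exact -25/7 rather than a rounded float, matching the hand computation above.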
Optimal capital structure

December 7th 2012, 07:18 AM

Company Y requires 10 million in capital. The rate of interest on debt is 10% and the tax rate is 40%. What is the optimal percentage of debt when the model stating the negative value of financial distress, V = I*A*X^2, applies (where I is the total amount of investment, A is a constant, which in this case is 0.05, and X is the proportion of the investment financed through debt)?

I'm not sure how to begin solving this. I tried writing a formula for WACC and attempted to minimize it, but didn't get a reasonable answer. Please help, I'm clueless here.
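One common trade-off-theory reading of this setup — an assumption on my part, since the post leaves the benefit side of debt unspecified — nets the annual debt tax shield t·r·I·X against the given distress cost V = I·A·X², then maximizes over the debt fraction X. Under that assumption the first-order condition gives a closed form:

```python
# Assumed model (not stated in the thread):
#   net_benefit(X) = tax shield - distress cost = t*r*I*X - I*A*X**2
# t = tax rate, r = interest rate on debt, I = investment, X = debt fraction
I, r, t, A = 10_000_000, 0.10, 0.40, 0.05

def net_benefit(X):
    return t * r * I * X - I * A * X ** 2

# First-order condition: t*r*I - 2*I*A*X = 0  =>  X* = t*r / (2*A)
x_star = t * r / (2 * A)
print(round(x_star, 4))   # optimal debt fraction under these assumptions

# Sanity check: coarse grid search over X in [0, 1]
best_x = max(range(0, 101), key=lambda p: net_benefit(p / 100)) / 100
print(best_x)
```

With these numbers the optimum lands at X = 0.4, i.e. 40% debt; if the intended benefit is instead the full perpetuity tax shield t·D, the formula changes, so treat this only as one plausible reading of the exercise.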
Using total differential to find the marginal rate of substitution help please.. Consider the utility function U(x, y) = xy + y. (a) Use the total differential to find the marginal rate of substitution (MRS) between y and x. (b) Use the MRS to show that the indifference curves are strictly convex to the origin (i.e. show that the MRS is diminishing).
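A sketch of part (a), with the convexity argument left for the exercise: on an indifference curve, dU = U_x dx + U_y dy = 0, so MRS = −dy/dx = U_x/U_y = y/(x+1). The code below (illustrative, not from the original post) checks numerically that this ratio falls as x rises along a fixed indifference curve:

```python
# U(x, y) = x*y + y, so U_x = y and U_y = x + 1.
# Total differential: dU = U_x dx + U_y dy = 0 on an indifference curve,
# hence MRS = -dy/dx = U_x / U_y = y / (x + 1).
def mrs(x, y):
    return y / (x + 1)

c = 12                              # fix the indifference curve U = 12
y_on_curve = lambda x: c / (x + 1)  # x*y + y = c  =>  y = c / (x + 1)

vals = [mrs(x, y_on_curve(x)) for x in (0, 1, 2, 3)]
print(vals)   # strictly decreasing -> diminishing MRS, convex curves
```

Along U = c the MRS is c/(x+1)², which is visibly decreasing in x — exactly the diminishing MRS that part (b) asks you to show, and the reason the indifference curves are strictly convex to the origin.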
Packing in the Cookies

January 6, 2011

Consider packing circles inside a circular container, or less abstractly, placing cookie dough on a cookie sheet. In the case of cookies, which expand to a roughly circular shape, you don’t want them so close that they run into each other. At the same time, you don’t want them too far apart, because that would mean fewer cookies.

One of the latest features of Wolfram|Alpha is the ability to get information about packing circles into circles. For instance, suppose you have a circular baking sheet with a diameter of 12 inches, and you want to make 20 cookies. You can ask Wolfram|Alpha “pack 20 circles in a diameter 12 inch circle”; not only does it give you a diagram of the densest packing, but also the largest radius of the circular cookies on the 12-inch baking sheet. Or you might know the size of the cookies and want to know how many can fit? One way to get the answer would be “pack r=1 circles in a diameter 12 circle”.

You can also get information about hexagonal and square arrangements, which are packings that you get when you space out the cookies in a regular pattern. In the latter query above, you discover that the densest packing of radius-1 cookies on a 12-inch diameter baking sheet is 27: better than with a hexagonal grid, where you can only fit 24, or with a square grid, where you can only fit 22.

Suppose we’ve made two batches of 27 cookies, making a total of 54 cookies. How big of a serving platter would you need so that none overlap? One way to ask Wolfram|Alpha this question is “pack 54 r=1 circles in a circle”. You learn that the densest packing requires the serving platter to have a radius just a little more than 8.2 inches, while a hexagonal arrangement is almost as efficient, and a square arrangement requires a platter with a little more than a 9-inch radius.
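The square-grid figure of 22 cookies can be reproduced with a short brute-force count (a sketch of one plausible method, not Wolfram|Alpha's actual algorithm): place radius-r circles on a square lattice of spacing 2r, slide the lattice around, and count how many centers stay within R − r of the container's center. The sliding matters — the lattice anchored at the origin only fits 21.

```python
import math

def square_grid_count(R, r, steps=40):
    """Max number of radius-r circles on a 2r-spaced square grid
    inside a radius-R container, brute-forcing the grid offset."""
    limit = R - r                    # a center must stay this close in
    n = int(limit // (2 * r)) + 2    # enough lattice indices to cover
    best = 0
    for sx in range(steps):
        for sy in range(steps):
            dx, dy = sx * 2 * r / steps, sy * 2 * r / steps
            count = sum(
                1
                for i in range(-n, n + 1)
                for j in range(-n, n + 1)
                if math.hypot(i * 2 * r + dx, j * 2 * r + dy) <= limit + 1e-9
            )
            best = max(best, count)
    return best

print(square_grid_count(6, 1))   # 22 radius-1 cookies on the 12" sheet
```

The densest (irregular) packing of 27 is a different, much harder optimization problem; grids are easy to enumerate precisely because of their regularity.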
At this point, you might be wondering, “How many calories is that?” and “How many miles do I need to jog to compensate?” These are beyond the scope of this blog. Still, not surprisingly, Wolfram|Alpha will answer these questions, but I leave them as an exercise for the reader. (Hint: “calories 54 cookies” and “running 3693 calories”.)

4 Comments

adrian (January 7, 2011 at 7:47 am): Try “pack 1 circle in circle”… Or “pack 1 circle in a diameter 12 circle”.

Dan (January 10, 2011 at 3:13 pm): “pack 2″x3″ rectangles in a 10″ diameter circle” is my problem. I wish Wolfram|Alpha could do it! Instead it thinks I want to know about airplanes. “pack 2″ diameter circles in a 10″ diameter circle” is the same… airplanes. “pack r=2 circles in a 10 diameter circle” is the same… more airplanes. It’d be nice if there was a “It looks like you’re trying to pack things… here’s the tool”, which could show drop-down boxes: “What shape do you want to pack?”, “Into what shape do you want to pack it?”, etc. I love Wolfram|Alpha, but often I can’t get it to do what I want it to, despite knowing that it should be able to do it…

The Wolfram|Alpha Team (January 17, 2011 at 12:13 am): Hi Dan, thank you for your feedback. Wolfram|Alpha can’t do this at the moment, but it is something that we’re working on. We’ll keep you posted as more develops. Thank you.

Kevin (March 7, 2011 at 3:48 am): This seems cool – does Alpha do any search for non-lattice packings or just rely on a list of known constructions? Off topic, but are you by any chance the same Todd Rowland I was in grad school with at the Univ. of Chicago (if so, please contact me, I’d love to catch up).
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies… Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes! Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon? Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
7.16: Input-Output Tables for Function Rules

Remember the car wash from an earlier Concept? Well, here is the problem once again.

Kara and Marc decided to spend one morning helping out at a car wash to benefit a club that their grandparents have participated in for years. The car wash was a busy place. At the beginning there weren’t any cars, but between 9 am and 10 am the class washed 5 cars. From 10 to 11, the class washed 10 cars, from 11 to 12 the class washed 15 cars and from 12 – 1 the class washed 20 cars. Marc kept track of the cars washed each hour.

Hour 1 - 5 cars
Hour 2 - 10 cars
Hour 3 - 15 cars
Hour 4 - 20 cars

The number of cars washed is connected to the number of hours worked. One is a function of the other. Look at this list and write a function rule. Then figure out how many cars would be washed in the fifth hour of the car wash. This Concept will show you how to accomplish this task.

An input-output table, like the one shown below, can also be used to represent a function. Because of that, we can also call this kind of a table a function table.

Input number $(x)$ | Output number $(y)$

Each pair of numbers in the table is related by the same function rule. That rule is: multiply each input number (the $x$-value) by the same number to find its corresponding output number (the $y$-value).

Working with function tables and function rules is a lot like being a detective! You have to use the clue of the function rule to complete a table! Patterns are definitely involved in this work. Now let’s look at how we can use a function rule to complete a table.

The rule for the input-output table below is: add 1.5 to each input number to find its corresponding output number. Use this rule to find the corresponding output numbers for the given input numbers in the table.

Input number $(x)$ | Output number $(y)$
0 |
1 |
2.5 |
5 |
10 |

To find each missing output number, add 1.5 to each input number. Then write that output number in the table.
Input number $(x)$ | Output number $(y)$
0 | 1.5 $\leftarrow 0+1.5=1.5$
1 | 2.5 $\leftarrow 1.0+1.5=2.5$
2.5 | 4 $\leftarrow 2.5+1.5=4.0$
5 | 6.5 $\leftarrow 5.0+1.5=6.5$
10 | 11.5 $\leftarrow 10.0+1.5=11.5$

The table above shows five ordered pairs that match the given function rule. Let’s write the answer in ordered pairs.

The answer is (0, 1.5) (1, 2.5) (2.5, 4) (5, 6.5) (10, 11.5).

Now let’s look at how to create a function table given a rule.

The rule for a function is: multiply each $x$-value by 4 to find each $y$-value. Create a function table for this rule.

First, choose three $x$-values, then multiply each by 4 to find the matching $y$-values.

$x$ | $y$
1 | 4 $\leftarrow 1 \times 4=4$
2 | 8 $\leftarrow 2 \times 4=8$
3 | 12 $\leftarrow 3 \times 4=12$

The table above shows three ordered pairs that match the given function rule.

The answer is (1, 4) (2, 8) (3, 12).

We can also write a function rule in the form of an equation. Just like an equation shows the relationship between values, the function table does too. Let’s look at one.

The equation $y=\frac{x}{3}+1$ can be shown in a function table. The table requires you to find the value of $y$ when $x = 9$. To do this, substitute 9 for $x$ in the equation and solve for $y$:

$y &= \frac{x}{3}+1\\y &= \frac{9}{3}+1\\y &= 3+1\\y &= 4$

So, when $x = 9, y = 4$.

The table also requires you to find the value of $x$ when $y = 8$. To do this, substitute 8 for $y$ in the equation and solve for $x$:

$y &= \frac{x}{3}+1\\8 &= \frac{x}{3}+1\\8-1 &= \frac{x}{3}+1-1\\7 &= \frac{x}{3}+0\\7 &= \frac{x}{3}\\\\7 \times 3 &= \frac{x}{3} \times 3\\21 &= \frac{x}{\cancel{3}} \times \frac{\cancel{3}}{1}\\21 &= \frac{x}{1}=x$

So, when $y = 8, x = 21$.

The completed table will look like this:

$x$ | $y$
9 | 4
21 | 8

You could say that an equation is another way of writing a function rule. Use what you have learned to complete the following examples.

Example A

Complete the table given the rule "add 2."

Input $(x)$ | Output $(y)$
3 |
5 |
6 |

Solution: The answers are 5, 7 and 8.

Example B

Write the rule for the table.

Solution: $y = x + 2$

Example C

Create a function table given the rule $y = x \div 2 + 3$.

Solution: Look at the data below.

Input $(x)$ | Output $(y)$

Here is the original problem once again.
Kara and Marc decided to spend one morning helping out at a car wash to benefit a club that their grandparents have participated in for years. The car wash was a busy place. At the beginning there weren't any cars, but between 9 am and 10 am the class washed 5 cars. From 10 to 11, the class washed 10 cars, from 11 to 12 the class washed 15 cars and from 12 to 1 the class washed 20 cars. Marc kept track of the cars washed each hour.

Hour 1 - 5 cars
Hour 2 - 10 cars
Hour 3 - 15 cars
Hour 4 - 20 cars

The number of cars washed is connected to the number of hours worked. One is a function of the other. Look at this list and write a function rule. Then figure out how many cars would be washed in the fifth hour of the car wash.

To write the rule, notice that each hour the number of cars increased by 5. We can use multiplication to show this rule: $y = 5x$. This rule works for all of the values. Then we can evaluate using the fifth hour: $5(5) = 25$. Given this rule, 25 cars will be washed in hour five.

Here are the vocabulary words in this Concept.

Function: a set of ordered pairs in which one element corresponds to exactly one other element. Functions can be expressed as a set of ordered pairs or in a table.
Domain: the $x$-values of a function.
Range: the $y$-values of a function.
Input-Output Table: a way of showing a function using a table where the $x$-values (inputs) and the $y$-values (outputs) are listed.
Function Table: another name for an input-output table.
Function Rule: a written equation that shows how the domain and range of a function are related through operations.

Guided Practice

Here is one for you to try on your own. At the amusement park, Taylor noticed that there seemed to be a pattern for people who won the dart throwing game. She was so curious that she watched people play the game for a few hours. When 12 people played, there were only 6 winners. When ten people played, there were five winners. This is a table to represent the data that Taylor collected.

Input Output

Can you write a rule for this data?
We can accomplish this task by looking at what happened to each $x$-value to produce its $y$-value. Notice that each $x$-value was divided by 2 to get the corresponding $y$-value.

$\frac{x}{2} = y$

This is our rule.

Video Review Here is a video for review. - This Khan Academy video is an introduction to functions.

Directions: Use the given rule or equation to complete the table. 1. The rule for the input-output table below is: multiply each input number by 7 and then add 2 to find each output number. Use this rule to find the corresponding output numbers for the given input numbers in the table. Fill in the table with those numbers. Input number $(x)$ Output number $(y)$ 2. The rule for this function table is: subtract 6 from each $x$-value to find each $y$-value. 3. The equation $y=\frac{x}{2}-1$ can be used to complete the table.

Directions: Evaluate each function rule. 4. $2x$ 5. $3x-1$ 6. $2x+1$ 7. $4x$ 8. $6x-3$ 9. $2x$ 10. $3x-3$

Directions: Create a table for each rule. 11. $7x$ 12. $3x+1$ 13. $5x-3$ 14. $4x+3$ 15. $4x-5$
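The substitute-and-solve steps in the worked example (the rule $y=\frac{x}{3}+1$) are mechanical enough to script. A minimal sketch in Python; the function names are mine, not from the Concept:

```python
def rule(x):
    # The function rule from the worked example: y = x/3 + 1.
    return x / 3 + 1

def invert_rule(y):
    # Solve y = x/3 + 1 for x by undoing each operation:
    # subtract 1, then multiply by 3 (as in the worked solution).
    return (y - 1) * 3

# Build the input-output (function) table as ordered pairs.
inputs = [0, 3, 9]
table = [(x, rule(x)) for x in inputs]
print(table)           # the pair (9, 4.0) matches the worked example
print(invert_rule(8))  # 21.0, matching x = 21 when y = 8
```

The same two functions reproduce both directions of the table: evaluating the rule fills in missing $y$-values, and inverting it fills in missing $x$-values.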
Several Problems
March 26th 2008, 07:25 PM #1 Oct 2007

I've got these three problems that I just can't work out. If someone could explain one or all of them, that would be fantastic.

1) Find a formula for the Y(Nth derivative) for X^4+Y^4=A^4 ^^^(Derivative of X with respect to Y)^^^

2) Suppose the curve X^4+aX^3+bX^2+cX+d has a tangent line with a formula y=2x+1 at x=0, and one with a formula y=2-3x at x=1. Find the values of a, b, c, and d.

3) Suppose that XY=C has a tangent line at point P. a) Show that the midpoint of the line segment cut from this tangent line by the coordinate axes is P. b) Show that the triangle formed always has the same area, no matter where P is located on the graph of the function.

You guys have yet to let me down, and I'm really counting on you this time! Thanks!

Okay, here's what I got for #2...

$\rightarrow f'(x)=4x^3+3ax^2+2bx+c$

Now we can tell a few things from the information that they give us from the tangent lines. If the tangent at $x=0$ is $y=2x+1$, then we know that $f'(0)=2$ because the derivative takes the value of the slope of the tangent line. We also know that $f(0)=1$ because $y=2x+1$ is tangent at $x=0$ and $2(0)+1=1$. Likewise the other tangent line tells us that $f'(1)=-3$ and $f(1)=-1$. With this information we can start chipping away at the coefficients. Let's start with the derivative:

$\Rightarrow 4(0)^3+3a(0)^2+2b(0)+c=2$
$\Rightarrow c=2$

using this and our other derivative point we find...

$\Rightarrow 4+3a+2b+2=-3$
$\Rightarrow 3a+2b=-9$

this equation of a and b may seem like a dead end, but we'll use it later. Now let's look at the function itself and the points that we have for it. notice that I'm going to continue using c=2.

$\Rightarrow (0)^4+a(0)^3+b(0)^2+2(0)+d=1$
$\Rightarrow d=1$

using this information and the other function point we discover...

$\Rightarrow (1)^4+a(1)^3+b(1)^2+2(1)+1=-1$
$\Rightarrow a+b=-5$

lo and behold, we have yet another equation of a and b.
Now we can set up a linear system to solve for the last two coefficients:

$3a+2b=-9$
$a+b=-5$

you can solve this by whatever method suits you, and you'll discover that a=1 and b=-6. collecting our results: $a=1$, $b=-6$, $c=2$, $d=1$.

In order to answer the first part of this question, we need to figure out how we can find the midpoint of the line segment in question. In order to do this we need to notice that the endpoints of the line segment in question will be the x-intercept and y-intercept of the associated tangent line. So we need to find an equation for the line tangent to $xy=c$. in order to avoid having to do implicit differentiation, I'm going to rewrite $xy=c$ as $y=cx^{-1}$. This means that $y'(x)=-\frac{c}{x^2}$

Now we need to write the equation of the line tangent to $y=cx^{-1}$ at some arbitrary point $(x_0, y_0)$. Using the point-slope formula we get:

$\rightarrow y-y_0=y'(x_0)(x-x_0)$
$\rightarrow y-y_0=-\frac{c}{x_0^2}(x-x_0)$

now that we have an equation of the line tangent at $(x_0, y_0)$, we can find the intercepts. Let's start with the x-intercept, which occurs when y=0...

$\rightarrow -y_0=-\frac{c}{x_0^2}(x-x_0)$
$\rightarrow -y_0=-\frac{cx}{x_0^2}+\frac{c}{x_0}$
$\rightarrow x=\left(y_0+\frac{c}{x_0}\right)\left(\frac{x_0^2}{c}\right)$
$\rightarrow x=\frac{x_0^2y_0}{c}+x_0$
$\rightarrow x=\frac{x_0x_0y_0}{c}+x_0=x_0+x_0=2x_0$
$\therefore xint\ at\ (2x_0,0)$

if you're wondering about the second to last line, remember that since $(x_0,y_0)$ is a specific point on the curve $xy=c$, it is also true that $x_0y_0=c$. By a similar process, we can set x=0 and discover that the y-intercept occurs at $(0,2y_0)$. now we apply the midpoint formula to the two intercepts that we have found ($(2x_0,0)$ and $(0,2y_0)$)...

$\left(\frac{2x_0+0}{2},\frac{0+2y_0}{2}\right)=(x_0,y_0)$

so we have shown that the midpoint of any line segment formed in the prescribed manner will be the point of tangency on $xy=c$.
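(Not part of the original thread:) the coefficients for #2 are easy to sanity-check numerically; a quick Python sketch, assuming the values a=1, b=-6, c=2, d=1 derived above:

```python
def f(x):
    # The quartic with the coefficients derived in the thread.
    a, b, c, d = 1, -6, 2, 1
    return x**4 + a*x**3 + b*x**2 + c*x + d

def fprime(x):
    a, b, c = 1, -6, 2
    return 4*x**3 + 3*a*x**2 + 2*b*x + c

# Tangent y = 2x + 1 at x = 0: slope 2, value 1.
assert fprime(0) == 2 and f(0) == 1
# Tangent y = 2 - 3x at x = 1: slope -3, value -1.
assert fprime(1) == -3 and f(1) == -1
print("both tangency conditions hold")
```

Both tangent lines check out, so the linear system was solved correctly.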
as for the area of the triangle formed from the points $(2x_0,0)$, $(0,2y_0)$, and $(0,0)$, simply note that the triangle has a base of $2x_0$ and a height of $2y_0$, and apply the formula for area of a triangle...

$A=\frac{1}{2}(2x_0)(2y_0)=2x_0y_0=2c$

and so we see that the area of the triangle will always be 2c regardless of our choice of $x_0$ and $y_0$

Sorry, I've got no idea about how to do #1.
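(Again an editorial aside, not from the thread:) the intercept, midpoint, and constant-area claims for $xy=c$ can be spot-checked numerically for a few sample tangent points; the constant c and the sample points below are arbitrary:

```python
def tangent_intercepts(x0, c):
    # For y = c/x, the slope at x0 is -c/x0**2; the tangent line is
    #   y - y0 = -(c/x0**2)(x - x0).
    # Setting y = 0 and x = 0 gives the axis intercepts.
    y0 = c / x0
    x_int = x0 + (x0**2 * y0) / c   # works out to 2*x0
    y_int = y0 + c / x0             # works out to 2*y0
    return x_int, y_int

c = 6.0
for x0 in (1.0, 2.0, 3.0):
    xi, yi = tangent_intercepts(x0, c)
    mid = (xi / 2, yi / 2)
    area = 0.5 * xi * yi
    assert abs(mid[0] - x0) < 1e-9 and abs(mid[1] - c / x0) < 1e-9
    assert abs(area - 2 * c) < 1e-9   # the area is always 2c
print("midpoint and area claims verified for sample points")
```

Every sampled tangent point is indeed the midpoint of its intercept segment, and the triangle area stays pinned at $2c$.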
Work-Energy Theorem: Spring potential energy vs Kinetic Energy

1. The problem statement, all variables and given/known data
A 1350-kg car rolling on a horizontal surface has a speed v = 40 km/h when it strikes a horizontal coiled spring and is brought to rest in a distance of 2.5 m. What is the spring constant of the spring? Ignore friction and assume the spring is massless.

2. Relevant equations
[tex] W = \Delta E[/tex]
[tex] E_{pspring} = \frac{1}{2}(kx^2) [/tex]
[tex] E_k = \frac{1}{2}(mv^2) [/tex]

3. The attempt at a solution
First, right off the bat, i converted 40 km/h to its m/s equivalent of approx. 11.11 m/s. i state the law of conservation of energy:

Energy before = Energy after
E_k = E_{pspring}
\frac{1}{2}(mv^2) = \frac{1}{2}(kx^2)

then i isolate k

[tex] k = \frac{-mv^2}{x^2} [/tex]

now here's the issue, is x negative? because the displacement is against the direction of motion? and 2.5m = x, (-2.5)^2 gives me an answer of 4266 N/m but -(2.5)^2 is entirely different.. This has been a long lasting math issue for me. And what if x is positive? i know k MUST be positive right?
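For what it's worth, the energy balance gives a positive k regardless of the sign convention chosen for x, since x enters squared. A quick check with the numbers in the problem (variable names are mine):

```python
m = 1350.0            # mass of the car in kg
v = 40 / 3.6          # 40 km/h converted to m/s (about 11.11 m/s)
x = 2.5               # spring compression in m

# Conservation of energy: (1/2) m v**2 = (1/2) k x**2,
# so k = m v**2 / x**2.  Whether x is written as +2.5 or -2.5
# is irrelevant: (-x)**2 == x**2 either way.
k = m * v**2 / x**2
print(k)  # spring constant in N/m
assert m * v**2 / (-x)**2 == k
```

This comes out near 2.7 x 10^4 N/m, so the 4266 figure in the post is worth re-checking; either way the sign worry dissolves because x is squared before it ever reaches k.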
Constructing rational functions with ramification locus the divisor of some $n$-form

I'm still busy learning the theory of linear systems for compact Riemann surfaces. If the answer to the following question is negative, then there might not be any point in continuing. Let $X$ be a compact connected Riemann surface and let $\omega$ be an $n$-form on $X$. That is, $\omega$ is a global section of the canonical sheaf $\omega_X^{\otimes n}$. Now, let $D$ be the divisor of $\omega$ on $X$. Can we construct a morphism $X\to \mathbf{P}^1$ such that the support of the ramification locus equals the support of $D$ for some choice of $n$? If yes, the degree of such a morphism equals the degree of $\omega_X^{\otimes n}$, right? Slightly weaker: can we construct a morphism $X\to \mathbf{P}^1$ such that the support of the ramification locus is contained in the support of $D$? As Francesco points out, this is not possible if $g=2$ and $n=1$. Probably, if $g$ is small compared to $n$, the answer will be negative.

Based on your comment on Francesco Polizzi's answer, you have stated this wrong. You should say something like "Can we construct a morphism $X \to \mathbb P^1$ such that the ramification locus is the divisor of some $n$-form on $X$?" Do you want to count ramification with multiplicity or without? – Will Sawin Mar 19 '12 at 19:35
You're right! Without multiplicity at first. I'll take care of multiplicities later. – Harry Mar 19 '12 at 20:58
Why did you delete this question? – S. Carnahan♦ Mar 20 '12 at 10:35
It's better if I figure it out by myself maybe. I did not mean to offend and am grateful to Francesco for his helpful answer. – Harry Mar 20 '12 at 10:45

1 Answer

The answer is no, as the following simple example shows. Assume $g(X)=2$ and take $n=1$, i.e. $D$ is the divisor of a holomorphic $1$-form. Then $\deg D=2$, so if your morphism $f \colon X \to \mathbf{P}^1$ exists, it is ramified at two points.
Consequently, $f$ is branched at at most two points, so at exactly two points since $\mathbf{P}^1$ minus a point is simply connected. But any cover of $\mathbf{P}^1$ branched at two points is still $\mathbf{P}^1$, a contradiction.

EDIT. For completeness, let me show my assertion that if the cover $f \colon X \to \mathbf{P}^1$ is branched at two points, then $X \cong \mathbf{P}^1$. In general, if $f$ has degree $d$, the branch points are $b_1, \ldots, b_n$ and the permutation $\sigma_i$ giving the local monodromy at $b_i$ is the product of $k_i$ disjoint cycles, then $$g(X)=1 + \frac{(n-2)d-\sum_{i=1}^n k_i}{2},$$ see for instance [Miranda, Algebraic curves and Riemann surfaces, page 93]. In particular if $n=2$ the only possibility is $k_1=k_2=1$ and $g(X)=0$, so $X$ is isomorphic to the projective line.

Yes, but that's why I allow one to take powers of $\omega$. So what if we take for example a section of $\omega^{g(g+1)/2}$? Then the degree of the divisor is $g^3-g$. So according to your answer, it should ramify at $g^3-g$ points (counted with multiplicity?). So the number of branch points is bounded by the genus again, but now the bound is big. By the way, what is the relation between the ramification divisor (so ramification locus with proper multiplicities) and the divisor of our section in this example? – Harry Mar 19 '12 at 18:18
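To spell out the last step of the EDIT, here is the $n=2$ specialization of the quoted genus formula, using nothing beyond what the answer already cites:

```latex
g(X) \;=\; 1 + \frac{(n-2)d - \sum_{i=1}^{n} k_i}{2}
\;\overset{n=2}{=}\; 1 - \frac{k_1 + k_2}{2}.
```

Since each $k_i \ge 1$ (every permutation has at least one cycle) and $g(X) \ge 0$, the only possibility is $k_1 = k_2 = 1$ and $g(X) = 0$, which is exactly the claim that $X \cong \mathbf{P}^1$.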
Homework Help

Posted by anonymous on Saturday, February 5, 2011 at 3:21am.

solve the equation 2^(9x-7) = 7. give an exact solution, and also an approximate solution to four decimal places. My first step was to change it to log(base 2) 7 = 9x-7 i got 1.0897 Did i do something wrong or did i miss one question?

• pre calculus - Reiny, Saturday, February 5, 2011 at 7:02am
just take log of both sides
log (2^(9x-7)) = log 7
(9x-7) log2 = log7
9x - 7 = log7/log2
your answer is correct

• pre calculus - drwls, Saturday, February 5, 2011 at 7:03am
[9x -7] log 2 = log 7
Any log base can be used, as long as it is the same for both sides.
9x - 7 = 2.807
9x = 9.807
x = 1.0897
Your answer is correct!
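The exact and approximate answers in the thread can be checked directly; this assumes the equation is 2^(9x-7) = 7, as both replies read it:

```python
import math

# Exact solution: 9x - 7 = log2(7), so x = (7 + log2(7)) / 9.
x = (7 + math.log2(7)) / 9
print(round(x, 4))                    # 1.0897, matching the thread
assert abs(2**(9*x - 7) - 7) < 1e-9   # plug back into the original equation
```

Substituting back confirms the value, so the only thing "missed" in the original attempt was carrying the +7 and the division by 9 through to the end.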
The Geomblog

I've just returned (well almost: I'm in JFK waiting out the storm) from a Dagstuhl workshop on data structures. For those not in the know, this is one of the common retreat spots for computer scientists to get together and chat about problems in a very relaxed atmosphere (close to the true spirit of a workshop). In that spirit, people present things that are more "uncooked", half-baked, or incomplete, and so I won't talk about too many of the presentations, except for the ones which are available online. In this post, I'll discuss a new take on an old warhorse: the red-black tree

A bit of background (for those unfamiliar with the story thus far): One of the most fundamental data structures in computer science is the binary search tree. It's built over a set of items with an ordering defined on them, and has the property that all nodes in the left subtree of the root are smaller than all nodes in the right subtree (and so on recursively). BSTs are handy for building structures that allow for quick membership queries, or "less-than"-type queries.

One can build a BST with depth log n for a set of n ordered items. This means that operations on this structure will take log n time in general. However, if the items can change on the fly, then it's more difficult to maintain such a structure while making sure that updates themselves are cheap (O(log n)). Among many solutions proposed to handle dynamic maintenance of BSTs is the red-black tree, proposed in its current form by Leo Guibas and Bob Sedgewick back in 1978. The red-black tree is the definitive dynamic BST data structure, at least in practice: it has worst-case log n bounds for all operations, and shows up in the implementations of basic structures in the STL and in many language libraries. By virtue of its being in CLRS, it also has stupefied the minds of undergraduates for many years now.
The new story

Conceptually, the re-balancing operations that are used to maintain a red-black tree are not terribly hard. However, there are numerous cases to consider, especially when doing deletions, and this tends to make the actual code used to write these trees fairly complex, and thus potentially error prone. In the first talk at Dagstuhl, Bob Sedgewick talked about a simpler variant of red-black trees, called left-leaning red-black trees, whose main virtue is that they are simpler to implement (many fewer lines of code) while maintaining the nice properties of red-black trees. Bob's slides have more details: although this new structure hasn't been published anywhere, it will appear in the next edition of his book.

The key idea is to first understand how a red-black tree simulates a 2-3-4 tree (a more complex search tree in which nodes can have two, three or four children, and therefore one, two or three keys). It's possible to construct a direct mapping between a node in a 2-3-4 tree and a subtree in a red-black tree. Once this is understood, the LLRB tree comes about by restricting the subtrees thus formed to be "left-leaning". Specifically, right-leaning red edges are not allowed, and to prevent the tree from being too skewed, more than three consecutive left-leaning red edges are disallowed as well. Doing this allows us to simplify various cases in the insertion and deletion steps, while not increasing the tree depth by too much (this last statement was argued by analogy with 2-3-4 trees). It's a simple idea, and simplifies code by a substantial amount without sacrificing asymptotics.
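To make the "fewer lines of code" claim concrete, here is a rough sketch of LLRB insertion in the spirit of the talk. This is my own paraphrase of the well-known scheme, not code from the slides; the point is that the entire rebalancing logic is three local transformations applied on the way back up the tree.

```python
RED, BLACK = True, False

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None
        self.color = RED  # new nodes start red, like a 2-3-4 node split

def is_red(h):
    return h is not None and h.color == RED

def rotate_left(h):
    # Turn a right-leaning red link into a left-leaning one.
    x = h.right
    h.right, x.left = x.left, h
    x.color, h.color = h.color, RED
    return x

def rotate_right(h):
    # Straighten two consecutive left-leaning red links.
    x = h.left
    h.left, x.right = x.right, h
    x.color, h.color = h.color, RED
    return x

def flip_colors(h):
    # Split a temporary 4-node: push redness up to the parent.
    h.color, h.left.color, h.right.color = RED, BLACK, BLACK

def insert(h, key):
    if h is None:
        return Node(key)
    if key < h.key:
        h.left = insert(h.left, key)
    elif key > h.key:
        h.right = insert(h.right, key)
    # The whole fix-up: three if-statements, no further case analysis.
    if is_red(h.right) and not is_red(h.left):
        h = rotate_left(h)
    if is_red(h.left) and is_red(h.left.left):
        h = rotate_right(h)
    if is_red(h.left) and is_red(h.right):
        flip_colors(h)
    return h

def put(root, key):
    root = insert(root, key)
    root.color = BLACK  # the root is always black
    return root

root = None
for k in [5, 1, 9, 3, 7, 2, 8, 4, 6, 0]:
    root = put(root, k)
```

Deletion is where the real savings over a textbook red-black tree show up, but even insertion illustrates the idea: the left-leaning restriction collapses the usual case explosion into three symmetric-free fix-ups.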
Because CONFERENCE accepts such a small percentage of papers, however, rejection from CONFERENCE may not at all mean your paper will be rejected at other high quality conferences. To that end, I encourage you to take into account the reviewers' recommendations. They have spent many hours with your paper and their remarks (and even their misunderstandings) may help you to clarify your paper or perhaps to do some more work. I can just imagine the PC sitting around a table, tears streaming down their eyes, as they pen this dejected missive to me. For a problem that I've been working on, it turns out that if a related range space has bounded VC-dimension, the problem can be solved exactly (but with running time exponential in the dimension). The range space is constructed from two parameters: a metric space (X, d), and a radius e, and consists of the domain X, and all balls of radius at most e in X. So a natural question that I've been unable to answer is: What properties of a metric space ensure that the induced range space has bounded VC dimension ? Most of what we do know comes from the PAC-learning community. For instance, the doubling dimension of a metric space is the smallest number d such that any ball of radius e can be covered by at most 2^d balls of radius e/2. In recent years, it has been popular to explore the extent to which "metric space of bounded doubling dimension == bounded dimensional Euclidean space" is true. Unfortunately, there are spaces of bounded VC-dimension that do not have bounded doubling dimension. Another related proxy for the "dimension" of a metric space is its metric entropy: Determine the minimum number of balls of radius e needed to cover all points in a metric space. The log of this number is the metric entropy, and among other things is useful as a measure of the number of points needed to "cover" a space approximately (in a dominating set sense). 
It's known that the metric entropy of concept classes is closely related to their VC-dimension (where the underlying metric is symmetric difference between the classes). I am not aware of any general result that relates the two though. For more on the dizzying array of numbers used to describe metric space dimension, do read Ken Clarkson's magnificent survey.

[On a side note, I don't quite understand why the term "entropy" is used. It seems to me that if one wanted to use entropy, one would compute the entropy of the resulting set of balls, rather than merely the log of their number.]
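As a toy aside of my own (not from the post): the covering number behind the metric entropy can be estimated greedily by scanning the points and making any uncovered point a new ball center. The resulting centers are pairwise more than e apart, so by the standard packing/covering sandwich their count sits between the covering numbers at radii e and e/2.

```python
import math

def greedy_cover(points, e, dist):
    """Build an e-cover greedily: any point left uncovered becomes a
    new center.  By maximality every point ends up within e of some
    center, and the centers form an e-separated set."""
    centers = []
    for p in points:
        if all(dist(p, c) > e for c in centers):
            centers.append(p)
    return centers

# Example: the unit interval sampled at step 0.01, covered at radius 0.25.
pts = [i / 100 for i in range(101)]
d = lambda a, b: abs(a - b)
cover = greedy_cover(pts, 0.25, d)
print(len(cover), math.log2(len(cover)))  # cover size and its log (the "entropy")
```

On the sampled interval this produces 4 centers, so the metric entropy at this scale is 2 bits, which matches the intuition that [0,1] at resolution 1/4 carries about that much positional information.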
On $k$-intersecting curves and related problems 12. Mohammad Ali Abam, Mark de Berg and Joachim Gudmundsson. A Simple and Efficient Kinetic Spanner 13. Frederic Chazal and Steve Oudot. Towards Persistence-Based Reconstruction in Euclidean Spaces 14. Ken Clarkson. Tighter Bounds for Random Projections of Manifolds 15. Krzysztof Onak and Anastasios Sidiropoulos. Circular Partitions with Applications to Visualization and Embeddings 16. Bernard Chazelle and Wolfgang Mulzer. Markov Incremental Constructions 17. Kenneth L. Clarkson and C. Seshadhri. Self-Improving Algorithms for Delaunay Triangulations 18. Frederic Cazals, Aditya Parameswaran and Sylvain Pion. Robust construction of the three-dimensional flow complex 19. Herbert Edelsbrunner, John Harer and Amit Patel. Reeb Spaces of Piecewise Linear Mappings 20. Evangelia Pyrga and Saurabh Ray. New Existence Proofs for $\epsilon$-Nets 21. Lars Arge, Gerth Stølting Brodal and S. Srinivasa Rao. External memory planar point location with logarithmic updates 22. Eric Berberich, Michael Kerber and Michael Sagraloff. Exact Geometric-Topological Analysis of Algebraic Surfaces 23. Misha Belkin, Jian Sun and Yusu Wang. Discrete Laplace Operator on Meshed Surfaces 24. Olivier Devillers, Marc Glisse and Sylvain Lazard. Predicates for 3D visibility 25. Luca Castelli Aleardi, Eric Fusy and Thomas Lewiner. Schnyder woods for higher genus triangulated surfaces 26. Erin Chambers, Jeff Erickson and Pratik Worah. Testing Contractibility in Planar Rips Complexes 27. Rado Fulek, Andreas Holmsen and János Pach. Intersecting convex sets by rays 28. Noga Alon, Dan Halperin, Oren Nechushtan and Micha Sharir. The Complexity of the Outer Face in Arrangements of Random Segments 29. Maarten Löffler and Jack Snoeyink. Delaunay triangulations of imprecise points in linear time after preprocessing 30. Erin Chambers, Éric Colin de Verdière, Jeff Erickson, Sylvain Lazard, Francis Lazarus and Shripad Thite. 
Walking Your Dog in the Woods in Polynomial Time 31. Jean-Daniel Boissonnat, Camille Wormser and Mariette Yvinec. Locally Uniform Anisotropic Meshing 32. Adrian Dumitrescu, Micha Sharir and Csaba Toth. Extremal problems on triangle areas in two and three dimensions 33. Timothy M. Chan. A (Slightly) Faster Algorithm for Klee's Measure Problem 34. Timothy M. Chan. Dynamic Coresets 35. Timothy M. Chan. On Levels in Arrangements of Curves, III: Further Improvements 36. Minkyoung Cho and David Mount. Embedding and Similarity Search for Point Sets under Translation 37. Jacob Fox and Janos Pach. Coloring K_k-free intersection graphs of geometric objects in the plane 38. Timothy G. Abbott, Zachary Abel, David Charlton, Erik D. Demaine, Martin L. Demaine and Scott D. Kominers. Hinged Dissections Exist 39. Vida Dujmovic, Ken-ichi Kawarabayashi, Bojan Mohar and David R. Wood. Improved upper bounds on the crossing number 40. Pankaj Agarwal, Lars Arge, Thomas Mølhave and Bardia Sadri. I/O Efficient Algorithms for Computing Contour Lines on a Terrain 41. Pankaj Agarwal, Bardia Sadri and Hai Yu. Untangling triangulations through local explorations 42. Ileana Streinu and Louis Theran. Combinatorial Genericity and Minimal Rigidity The 2007 Turing Award was awarded to Edmund Clarke, E. Allen Emerson and Joseph Sifakis for their work on model checking. In our "theory A" cocoon, we tend not to pay attention to much "theory B" work, and model checking is one of the shining stars in theory B, with a rich theory and immense practical value. I am not competent to discuss model checking at any level, and so I asked someone who is ! Ganesh Gopalakrishnan is a colleague of mine at the U. and heads our Formal Verification group. I asked him if he could write a perspective on model checking, and he kindly agreed. What follows is his piece (with URLs added by me). 
As usual, all credit to him, and any mistakes in transcribing are my fault A Perspective On Model Checking Ganesh Gopalakrishnan One of the earliest attempts at proving the correctness of a program was by Alan Turing in 1949. The activity was raised to a much more prominent level of interest and importance through the works of Dijkstra, Hoare, and others. Yet, the enterprise of "Program Verification" was met with more noticeable failures than successes in the 1970's, due to its inability to make a dent on practice. The introduction of Temporal Logic for modeling concurrent systems (for which Amir Pnueli won the 1996 Turing Award) was the first attempt at shifting the attention away from the impossibly difficult task of 'whole program verification' to a much more tractable focus on verifying the reactive aspects of concurrency. Looking back, this enterprise did not meet with the expected degree of success, as Temporal Logic reasoning was still based on `deduction' (proof from first principles). From an engineer's perspective, what was needed was a simple way to specify the classes of "legal and illegal waveforms" a system can produce on its signals / variables, and an efficient way of checking that all those waveforms are contained in the set of "legal waveforms" as can be specified through temporal properties. In other words, it was sufficient to show that finite state machines (or finite state control skeletons of programs) served as *models* for a catalog of desired temporal properties. This was essentially the approach of model checking. It immediately took off as a practically viable method for debugging concurrent systems; the approach guaranteed exhaustive verification for small instances of a problem. In practice it meant that all relevant corner cases were considered in the confines of afforable resource budgets. This was the gist of Allen Emerson's contribution in his PhD dissertation which he wrote under Ed Clarke's guidance. 
A parallel effort underway in France in Joseph Sifakis's group also arrived at the same pragmatic approach to verification. Model checking was given crucial impetus by Ken McMillan who, using the technology of Binary Decision Diagrams formulated and popularized by Randy Bryant, developed the highly efficient Symbolic Model Checking approach. The technology has since then skyrocketed in its adoption rate. Today, model checking and its derivatives are part of every verification tool used to debug digital hardware. While "full verification" is not the goal (and is largely a meaningless term), the ability of model checkers to find *high quality bugs* is uncanny. It is those bugs that escape simulation, and may not even manifest in hardware that operates at the rate of GHz. Some of the error traces produced by model checkers are only about two dozen steps long; yet, if one imagines the number of reachable nodes in a tree with arity (branching factor) 2000 in two dozen steps, one understands why conventional approaches to testing miss bugs: they have to make the same lucky "1 in 2000" choices two dozen times in a row. A model checker can often get away by not really modeling these choices explicitly. Reduction techniques allow model checkers to detect and avoid exploring many "equivalent" behaviors. In that sense, model checking is one of the most powerful "test acceleration technologies" that mankind has invented. Today, microprocessor companies employ several kinds of model checkers. In software, Bell Labs (mainly before its collapse), JPL, NASA, and Microsoft (to name a few) have had tremendous success using model checking for verifying device drivers, switch software, and flight control software. Speaking more generally, anything with a finite state structure (e.g., decision processes, reliability models, planning in AI) can be approached through model checking.
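The "exhaustive for small instances" point is easy to see in code. Below is a minimal explicit-state safety checker of my own devising (a toy, not any of the tools mentioned above): breadth-first search over the reachable states, reporting the kind of short counterexample trace model checkers are prized for.

```python
from collections import deque

def check_invariant(init, next_states, invariant):
    """Return a shortest trace to an invariant violation, or None if
    every reachable state satisfies the invariant."""
    frontier = deque((s, (s,)) for s in init)
    seen = set(init)
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace                      # counterexample found
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + (nxt,)))
    return None

# Toy model: two processes that enter and leave a critical section
# with no locking at all -- mutual exclusion should (and does) fail.
def step(state):
    for i in (0, 1):
        pcs = list(state)
        pcs[i] = 'cs' if pcs[i] == 'idle' else 'idle'
        yield tuple(pcs)

trace = check_invariant([('idle', 'idle')], step,
                        lambda s: s != ('cs', 'cs'))
print(trace)  # a three-state trace ending in the bad state ('cs', 'cs')
```

Because BFS explores states in order of distance from the initial states, the first violation found comes with a shortest witness trace, which is exactly the "two dozen steps" debugging artifact described above.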
• Daniel Jackson (MIT): "With its dramatic successes in automatically detecting design errors (mainly in hardware and protocols), model checking has recently rescued the reputation of formal methods." (source)
• "Proofs of programs are too boring for the social process of mathematics to work." (DeMillo et al., CACM 1979)
{"url":"http://geomblog.blogspot.com/2008_02_01_archive.html","timestamp":"2014-04-16T11:12:15Z","content_type":null,"content_length":"175519","record_id":"<urn:uuid:9bc2db4b-aba4-40ca-a9f3-f9efea710708>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
United States Patent Application 20060097594 Kind Code A1 Abou-Akar; Atef; et al. May 11, 2006 Synchronous electrical machine comprising a stator and at least one rotor, and associated control device The invention relates to a synchronous electric motor comprising a stator (10) and at least one rotor (20) with permanent magnets (21), characterised by an embodiment with X.sub.d>X.sub.q, where X.sub.d is the direct reactance and X.sub.q the quadrature reactance. Inventors: Abou-Akar; Atef; (L'Isle d'Espagnac, FR); Saint-Michel; Jacques; (Angouleme, FR) Correspondence Address: OLIFF & BERRIDGE, PLC P.O. BOX 19928 Serial No.: 547194 Series Code: 10 Filed: March 29, 2004 PCT Filed: March 29, 2004 PCT NO: PCT/FR04/00787 371 Date: November 2, 2005 Current U.S. Class: 310/156.01 Class at Publication: 310/156.01 International Class: H02K 21/12 20060101 H02K021/12 Foreign Application Data Date Code Application Number Mar 31, 2003 FR 0303980 1. A synchronous electrical machine comprising: a stator (10); and at least one rotor (20) having permanent magnets (21), characterized in that it is designed so as to have X.sub.d>X.sub.q, where X.sub.d is the direct reactance and X.sub.q is the quadrature reactance. 2. The machine as claimed in claim 1, characterized in that X.sub.d/X.sub.q>1.1 and better still X.sub.d/X.sub.q>1.5. 3. The machine as claimed in claim 2, characterized in that X.sub.d/X.sub.q is approximately 3. 4. The machine as claimed in claim 1, characterized in that X.sub.q I.sub.o/E is between 0.33 and 0.6. 5. The machine as claimed in claim 1, characterized in that X.sub.d I.sub.o/E is between 0.66 and 1. 6. The machine as claimed in claim 1, characterized in that the stator (10) has teeth (11), each carrying at least one individual coil (12). 7. The machine as claimed in claim 6, characterized in that the teeth (11) of the stator (10) are devoid of pole shoes. 8.
The machine as claimed in claim 1, characterized in that the rotor (20) is a flux-concentrating rotor, the permanent magnets (21) of the rotor being placed between pole pieces (22). 9. The machine as claimed in claim 8, characterized in that the pole pieces (22) of the rotor each have a face turned toward the stator (10), which face has a convex portion (24). 10. The machine as claimed in claim 9, characterized in that the convex portion (24) of a pole piece (22) has a radius of curvature of between 20% and 30% of the inside radius (R) of the stator. 11. The machine as claimed in claim 10, characterized in that the circumferential ends (25) of the convex portion (24) of a pole piece (22) are angularly offset relative to the permanent magnets (21) adjacent this pole piece (22). 12. The machine as claimed in claim 11, characterized in that the angular offset .beta. of the circumferential ends (25) relative to the adjacent permanent magnets (21) lies: between 80.degree./ n.sub.teeth and 100.degree./n.sub.teeth, being especially about 90.degree./n.sub.teeth, for a machine in which the ratio of the number of stator teeth n.sub.teeth to the number of rotor poles n.sub.poles is 3/2 or which satisfies the relationship n.sub.teeth/n.sub.poles=6n/(6n-2), where n is an integer greater than or equal to 2; and between 50.degree./n.sub.teeth and 70.degree./ n.sub.teeth, being especially about 60.degree./n.sub.teeth, for a machine that satisfies the relationship n.sub.teeth/n.sub.poles=6n/(6n+2), where n is an integer greater than or equal to 2. 13. The machine as claimed in claim 8, characterized in that each of the permanent magnets (21) of the rotor (20) lies radially set back from the circumferential ends of the convex portions (24) of the two adjacent pole pieces (22). 14. 
The machine as claimed in claim 13, characterized in that the setback (r) in the radial direction of the magnets (21) relative to the circumferential ends (25) of the convex portions (24) lies between 10% and 20% of the inside radius (R) of the stator (10). 15. The machine as claimed in claim 8, characterized in that each of the pole pieces (22) of the rotor (20) has two shoulders (26), at least one permanent magnet (21) lying between the shoulders of two adjacent pole pieces (22). 16. The machine as claimed in claim 8, characterized in that each of the pole pieces (22) of the rotor (20) has a salient part (27) extending toward the stator (10), having radial edges (28) that are angularly offset relative to the radially directed edges (29) of the permanent magnets (21) adjacent the pole piece (22). 17. The machine as claimed in claim 1, characterized in that the permanent magnets (21) have, when the machine is observed along the axis (X) of rotation of the rotor, a cross section of elongate shape with its long axis lying in a radial direction. 18. The machine as claimed in claim 1, characterized in that the permanent magnets (21) of the rotor (20) have, when the machine is observed along the axis (X) of rotation of the rotor, a rectangular cross section with its large side oriented parallel to a radius of the machine. 19. The machine as claimed in claim 1, characterized in that the stator (10) has 6n teeth (11) and the rotor (20) has 6n.+-.2 poles (22), n being greater than or equal to 2. 20. The machine as claimed in claim 1, characterized in that it has a single inner rotor. 21. The machine as claimed in claim 1, characterized in that the power of the machine is equal to or greater than 0.5 kW. 22. The machine as claimed in claim 1, characterized in that it constitutes a generator. 23. The machine as claimed in claim 1, characterized in that it constitutes a motor. 24. 
An assembly comprising: a machine as defined in claim 1, this machine constituting a synchronous motor; and a control device for controlling a synchronous motor, allowing the motor to operate at approximately constant power P.sub.o over a range of rotation speeds of the rotor, which includes a computer (45) designed to determine the direct current component I.sub.d and the quadrature current component I.sub.q of the motor supply current, the current components I.sub.d and I.sub.q being equal, to within 20%, better still to within 10% and even better to within 5%, to: I.sub.d = i.sub.d I.sub.o = -i sin .alpha. I.sub.o and I.sub.q = i.sub.q I.sub.o = i cos .alpha. I.sub.o, where I.sub.o is the maximum intensity of the current imposed by the rating of the control device; .alpha. = arctan[x.sub.q(e-y)/(x.sub.d x)]; i = sqrt[(x/x.sub.q).sup.2+((e-y)/x.sub.d).sup.2], the unitary current flowing in one phase of the armature; (x,y) being one of the real roots of the equations: x.sup.2+y.sup.2 = v.sup.2/m.sup.2 and y = e[1-x.sub.d/(x.sub.d-x.sub.q)]+(p e/m) x.sub.d x.sub.q/[(x.sub.d-x.sub.q)x]; m denotes the ratio of the rotation speed of the rotor to the base speed; e is the ratio of, on the one hand, the electromotive force and, on the other hand, the product of m multiplied by the voltage V.sub.o imposed by the mains supply; v is the ratio of the voltage across the terminals of one phase of the armature to the maximum voltage per phase V.sub.o imposed by the mains supply; p is the ratio of the rms power to the power P.sub.o; .alpha. is the phase difference between the current and the electromotive force; x.sub.d is the quotient X.sub.d I.sub.o/(m V.sub.o), X.sub.d being the direct reactance; and x.sub.q is the quotient X.sub.q I.sub.o/(m V.sub.o), where X.sub.q is the quadrature reactance. 25. The assembly as claimed in claim 24, characterized in that the root (x,y) chosen is that which minimizes i. 26.
The assembly as claimed in claim 24, characterized in that it includes: a three-phase inverter (35); and a vector controller (37) designed to transmit, according to the current components i.sub.d and i.sub.q, control signals to electronic switches (60) of the inverter (35). 27. A method of controlling a motor as defined in claim 23, in which: at least the supply voltage (V.sub.DC) of an inverter connected to the motor and the rotation speed (.OMEGA.) of the motor are measured; and the direct current component i.sub.d and the quadrature current component i.sub.q of the supply current for maintaining constant power for a given speed setpoint (.OMEGA.*) above the base speed are determined by real-time calculation and/or by access to a register on the basis of at least the voltage V.sub.DC and the measured speed. 28. The method as claimed in claim 27, characterized in that a torque setpoint t* is determined as a function of at least the difference between the measured rotation speed (.OMEGA.) and the rotation speed setpoint (.OMEGA.*) of the rotor. 29. The method as claimed in claim 28, characterized in that a power setpoint (p*) is determined as a function of at least the torque setpoint and the measured rotation speed. 30. The method as claimed in claim 29, characterized in that the direct current component i.sub.d and quadrature current component i.sub.q values are calculated in real time from the power setpoint, the measured rotation speed and the DC supply voltage of the inverter. [0001] The present invention relates to the field of rotating electrical machines. [0002] The invention relates more particularly, but not exclusively, to permanent-magnet synchronous machines, able to operate at substantially constant power over a large speed range, for example to lifting machines or electrical traction machines.
[0003] Within the context of lifting, it is useful to match the lifting speed to the load being lifted, so as to reduce the lifting time when this load is small, while still being able to lift heavier articles. [0004] Within the context of electrical traction, at startup or when the vehicle comes to a rise, the motor must deliver a high torque at low speed. In contrast, on a horizontal path, the loads to be delivered are less and the vehicle can run more quickly without requiring more power from the motor. [0005] Synchronous machines can operate at constant torque up to a certain speed, called the base speed. Up to this base speed, the power increases approximately proportionally to the rotation speed of the rotor. Above the base speed, the torque decreases at approximately constant power. [0006] The armature phases may be modeled, each by an inductance that groups together the terms: self-induction, mutual induction between phases and leakage induction. This inductance depends on the angular position of the rotor relative to the stator and it has, as components in a reference frame tied to the electrical angular frequency, the direct inductance L.sub.d and the quadrature inductance L.sub.q. The direct reactance X.sub.d denotes the product of the direct inductance L.sub.d multiplied by the electrical angular frequency .omega. and the quadrature reactance X.sub.q denotes the product of the quadrature inductance L.sub.q multiplied by the electrical angular frequency .omega.. The rotation speed .OMEGA. of the rotor is related to the electrical angular frequency .omega. through the relationship .omega.=z.OMEGA., where z denotes the number of pairs of poles. [0007] In the reference frame tied to the electrical angular frequency, the direct inductance L.sub.d of a phase of the armature is the value of the inductance on the d axis, called the direct axis, that is to say when the axis of the armature poles coincides with that of the stator coils of this same phase. 
The quadrature inductance L.sub.q is the value of the inductance on the q axis, called the quadrature axis, that is to say when the axis of the inductor poles is perpendicular to the axis of the stator coils for this same phase. [0008] Known permanent-magnet rotating electrical machines for lifting and for electrical traction are predominantly machines called "smooth pole" machines, for which the direct reactance X.sub.d is approximately equal to the quadrature reactance X.sub.q. [0009] In addition to smooth pole machines, there are also machines called "inverted salient pole" machines, for which the direct reactance X.sub.d is substantially less than the quadrature reactance X.sub.q. Their main advantage is that the reluctance torque, which is proportional to the difference between the reactances X.sub.q and X.sub.d, is added, in normal operation, to the electromotive force torque generated by the magnets. This makes it possible, for the same demanded torque, to reduce the volume of the magnets and therefore the cost of the machine. For this type of machine, there is an optimum phase lead of the current relative to the electromotive force, for which the torque is a maximum. It is this operating point that is retained up to the base speed. [0010] Above the base speed, the voltage across the phase terminals of the machine becomes, all other things being equal, greater than the available voltage supplied by the mains to the machine via the control device, because of the electromotive force that varies proportionally with the speed. [0011] To reduce the voltage across the phase terminals of the machine, the current in the stator windings and its phase difference relative to the armature flux, that is to say that of the magnets, are varied in order to create a magnetic flux that partly opposes the armature flux. This operation is called "defluxing" and generates electrical losses that are greater the higher the current needed for defluxing. 
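The relations of paragraphs [0006] and [0007] (.omega.=z.OMEGA., X.sub.d=L.sub.d.omega., X.sub.q=L.sub.q.omega.) can be checked numerically. Every machine value below is an illustrative assumption, not a figure from the patent:

```python
import math  # not strictly needed here, kept for further phasor work

# omega = z * Omega : electrical angular frequency from rotor speed.
z = 5                          # pole pairs (the FIG. 1 rotor has 10 poles)
Omega = 150.0                  # rotor speed [rad/s], invented value
omega = z * Omega              # electrical angular frequency [rad/s]

# X = L * omega for each axis of the rotating reference frame.
L_d, L_q = 8e-3, 3.6e-3        # direct / quadrature inductances [H], invented
X_d, X_q = L_d * omega, L_q * omega   # reactances [ohm]
```

For these numbers X.sub.d comes out larger than X.sub.q, i.e. the machine would fall in the salient-pole family the patent targets.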
[0012] There is a need to improve synchronous machines and to allow them to operate with a high efficiency at substantially constant power over a wide speed range, and especially above the base speed. [0013] The invention satisfies this need thanks to a synchronous electrical machine comprising a stator and at least one rotor having permanent magnets, the machine being characterized in that it is designed so as to have X.sub.d>X.sub.q, where X.sub.d is the direct reactance and X.sub.q is the quadrature reactance. For example, X.sub.d/X.sub.q>1.1 and better still X.sub.d/X.sub.q>1.5. For example, it is possible to have X.sub.d/X.sub.q of about 3. [0014] The advantages afforded by the invention are given below. [0015] Firstly, since the power factor cos .PHI. varies inversely with the quadrature reactance X.sub.q, a low X.sub.q value allows a high power factor to be obtained. For example, depending on the desired power factor level, X.sub.q I.sub.o/E lies between 0.33 and 0.6, where I.sub.o denotes the maximum line current intensity imposed by the rating of the controller and E denotes the electromotive force induced per phase of the machine. [0016] Secondly, since the flux of the magnets is oriented along the direct axis d, the defluxing is achieved by injecting a current into the armature so as to generate, along the direct axis d, a flux proportional to the direct reactance X.sub.d and to the component I.sub.d of the current along the direct axis. With a high direct reactance X.sub.d, substantial defluxing is obtained with a lower direct current I.sub.d and therefore lower corresponding losses. This consequently reduces the rating of the control device and improves the efficiency. [0017] In addition, in the event of a short circuit, a high X.sub.d reduces the risk of demagnetization, which depends on the value of the short-circuit current.
This current is proportional to the ratio of the electromotive force to the direct reactance, and it is therefore low when the direct reactance X.sub.d is large. For example, over the defluxing range required, X.sub.dI.sub.o/E lies between 0.66 and 1, where I.sub.o denotes the maximum line current intensity imposed by the rating of the controller and E is the electromotive force induced per phase of the machine. [0018] Up to the base speed, the machine can operate with a current in phase with the electromotive force. The electromotive force torque is a maximum and the reluctance torque is zero. The base speed may for example be greater than 100 or 200 revolutions per minute. [0019] In one particular embodiment, the stator has teeth, each carrying at least one individual coil, and these teeth are devoid of pole shoes. This makes it possible in particular to install prefabricated coils on the teeth, thereby simplifying the manufacture of the machine. [0020] The rotor is advantageously a flux-concentrating rotor, the permanent magnets of the rotor then being placed between pole pieces. This makes it possible to reduce the number of magnets, and therefore to reduce the cost of the machine. [0021] The direct and quadrature reactance values may be determined by the shape of the rotor pole pieces, and especially by the shape of the salient parts of these pole pieces. [0022] The salient parts of two successive pole pieces may define, between them, a notch that has two opposed edges, including radial portions, and a bottom partly formed by one face of at least one permanent magnet. [0023] Such a pole piece shape introduces a dissymmetry between the direct and quadrature reactances and a relatively large positive difference between the direct and quadrature reactances. [0024] The rotor pole pieces may each have a face turned toward the stator, which face has a convex portion. 
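The per-unit design windows of paragraphs [0015] to [0017] are easy to check for a candidate design. Only the inequalities come from the text; all machine numbers below are invented for illustration:

```python
# Invented candidate design (illustrative only):
E, I_o = 48.0, 10.0      # EMF per phase [V], max current per the rating [A]
X_d, X_q = 4.0, 1.8      # direct / quadrature reactances [ohm]

# Salience target of the invention:
assert X_d / X_q > 1.5

pf_term = X_q * I_o / E       # paragraph [0015]: 0.33 .. 0.6 for power factor
deflux_term = X_d * I_o / E   # paragraph [0017]: 0.66 .. 1 over defluxing range
i_short = E / X_d             # short-circuit current [A]: low when X_d is large
```

A larger X.sub.d directly lowers i_short, which is the demagnetization-risk argument of paragraph [0017].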
The convex portion of a pole piece may have a radius of curvature of between 20% and 30% of a radius of the stator, especially the inside radius of the stator, or even about 25% thereof. [0025] The circumferential ends of this convex portion may be angularly offset relative to the permanent magnets that are adjacent this pole piece. The angular offset of the circumferential ends relative to the adjacent permanent magnets may lie: [0026] between 80.degree./n.sub.teeth and 100.degree./n.sub.teeth, being for example about 90.degree./n.sub.teeth, for a machine in which the ratio of the number of stator teeth n.sub.teeth to the number of rotor poles n.sub.poles is 3/2 or which satisfies the relationship n.sub.teeth/n.sub.poles=6n/(6n-2), where n is an integer greater than or equal to 2; and [0027] between 50.degree./n.sub.teeth and 70.degree./n.sub.teeth, being for example about 60.degree./n.sub.teeth, for a machine that satisfies the relationship n.sub.teeth/n.sub.poles=6n/(6n+2), where n is an integer greater than or equal to 2. [0028] Each of the permanent magnets of the rotor may lie radially set back from the circumferential ends of the convex portions of the two adjacent pole pieces. The setback in the radial direction of the magnets relative to the circumferential ends of the convex portions may lie between 10% and 20%, for example being about 15%, of a radius of the stator, especially the inside radius of the stator. [0029] Each of the pole pieces of the rotor may have two shoulders. A permanent magnet may lie between the shoulders of two adjacent pole pieces. [0030] Each of the pole pieces of the rotor may have a salient part extending toward the stator, having radial edges that are angularly offset relative to radially directed edges of the permanent magnets adjacent this pole piece. [0031] The permanent magnets of the rotor may have, when the machine is observed along the rotation axis of the rotor, a cross section of elongate shape in a radial direction.
In particular, the permanent magnets of the rotor may have, when the machine is observed along the rotation axis of the rotor, a rectangular cross section with its large side oriented parallel to a radius of the machine. [0032] In one particular embodiment of the invention, the X.sub.d/X.sub.q ratio is chosen so as to obtain, at the maximum rotation speed of the rotor, substantially the same power as that obtained at the base speed, with the same voltage and the same current. [0033] It is preferable to choose, from among the possible values of the X.sub.d/X.sub.q ratio for obtaining the abovementioned result, the smallest one in order to avoid having a high salience, which would result in poles of smaller opening and a larger equivalent gap, and which would consequently increase the volume of the magnets, and therefore the cost and the weight of the machine. A high salience would furthermore reduce the maximum torque that the machine would be able to deliver, which would limit the overload possibilities. [0034] The stator may have 6n teeth and the rotor may have 6n.+-.2 poles, n being greater than or equal to 2. Such a structure allows both the torque ripple and the voltage harmonics to be reduced. [0035] The machine may have a single inner rotor or, as a variant, an inner rotor and an outer rotor that are placed radially on either side of the stator and are rotationally coupled. By using a double rotor, it is possible to reduce the iron losses. [0036] The machine may constitute a generator or a motor. [0037] The power of the machine may be equal to or greater than 0.5 kW, for example being around 1.5 kW, although this value is in no way limiting.
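The tooth/pole families and offset bands of paragraphs [0026], [0027] and [0034] can be captured in a small helper. The function name and interface are assumptions made for illustration; only the numeric bands come from the text:

```python
from fractions import Fraction  # exact ratio comparison avoids float surprises

def beta_band_deg(n_teeth, n_poles):
    """Angular-offset band (degrees) for the circumferential ends of a
    pole piece's convex portion, per paragraphs [0026]-[0027].
    Illustrative helper, not part of the patent."""
    ratio = Fraction(n_teeth, n_poles)
    n = n_teeth // 6
    if ratio == Fraction(3, 2) or (n >= 2 and ratio == Fraction(6 * n, 6 * n - 2)):
        lo, hi = 80.0, 100.0      # family centered on ~90 deg / n_teeth
    elif n >= 2 and ratio == Fraction(6 * n, 6 * n + 2):
        lo, hi = 50.0, 70.0       # family centered on ~60 deg / n_teeth
    else:
        raise ValueError("tooth/pole combination not covered by the text")
    return lo / n_teeth, hi / n_teeth

# The example machine of FIG. 1: n = 2, i.e. 12 teeth and 10 poles.
lo, hi = beta_band_deg(12, 10)
```

For the 12-tooth, 10-pole example this gives a band of roughly 6.7 to 8.3 degrees, centered near 90/12 = 7.5 degrees.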
[0038] The subject of the invention is also, independently of or in combination with the foregoing, an electrical machine comprising: [0039] at least one stator; and [0040] at least one rotor, the rotor having pole pieces and permanent magnets that are placed between the pole pieces, and each pole piece having a salient part and, on either side of this salient part, a shoulder. [0041] The shoulders of two adjacent pole pieces may be flush with the permanent magnet placed between them. [0042] Each salient part may be bounded in the circumferential direction by an edge running radially. [0043] It is possible for each pole piece not to cover the adjacent permanent magnets in the circumferential direction. [0044] Each salient part may be radially bounded by a continuously rounded edge. [0045] Each pole piece may be symmetrical relative to a mid-plane lying radially. [0046] Each pole piece may comprise a stack of magnetic laminations. [0047] The salient part may have a radially external circular edge, the center of curvature of which is different from the center of rotation, the center of curvature for example lying on a radius between the center of rotation and the maximum half-diameter of the rotor. [0048] The angular separation between two adjacent salient parts may be greater than the angular width of the permanent magnet placed between the corresponding pole pieces. [0049] The permanent magnets may have an outer face turned toward the stator. [0050] The subject of the invention is also a control device for controlling a machine as defined above. 
[0051] The subject of the invention is also, independently of or in combination with the foregoing, a control device for controlling a synchronous motor, allowing the motor to operate at approximately constant power P.sub.o over a range of rotation speeds of the rotor, which includes a computer designed to determine the direct current component I.sub.d and the quadrature current component I.sub.q of the motor supply current, the current components I.sub.d and I.sub.q being equal, to within 20%, better still to within 10% and even better to within 5%, to: I.sub.d = i.sub.d I.sub.o = -i sin .alpha. I.sub.o and I.sub.q = i.sub.q I.sub.o = i cos .alpha. I.sub.o, where I.sub.o is the maximum intensity of the current imposed by the rating of the control device; .alpha. = arctan[x.sub.q(e-y)/(x.sub.d x)]; i = sqrt[(x/x.sub.q).sup.2+((e-y)/x.sub.d).sup.2], the unitary current flowing in one phase of the armature; (x,y) being one of the real roots of the equations: x.sup.2+y.sup.2 = v.sup.2/m.sup.2 and y = e[1-x.sub.d/(x.sub.d-x.sub.q)]+(p e/m) x.sub.d x.sub.q/[(x.sub.d-x.sub.q)x]; [0052] m denotes the ratio of the rotation speed of the rotor to the base speed; [0053] e is the ratio of, on the one hand, the electromotive force and, on the other hand, the product of m multiplied by the voltage V.sub.o imposed by the mains supply; [0054] v is the ratio of the voltage across the terminals of one phase of the armature to the voltage V.sub.o imposed by the mains supply; [0055] p is the ratio of the rms power to the constant power P.sub.o at which it is desired to operate the machine; and [0056] .alpha. is the phase difference between the current and the electromotive force. [0057] The terms "direct current component" and "quadrature current component" are understood to mean the current intensities projected onto the direct axis d and the quadrature axis q of the reference frame tied to the electrical angular frequency.
[0058] In the above, x.sub.d denotes the quotient X.sub.d I.sub.o/(m V.sub.o), X.sub.d being the direct reactance. [0059] Likewise, x.sub.q = X.sub.q I.sub.o/(m V.sub.o), where X.sub.q is the quadrature reactance. [0060] Such a control device shifts the current through an angle .alpha. relative to the electromotive force, while keeping the voltage constant, and the component i.sub.d of i on the direct axis d will create a flux that opposes the main flux. The magnetomotive force is therefore reduced, consequently resulting in a drop in the overall induced voltage. [0061] The unitary value i of the current may only be increased above its nominal value for reasons associated with the heat-up of the machine and with the rating of the control device. [0062] The desired voltage may be obtained with the minimum shift .alpha., that is to say the lowest unitary direct intensity i.sub.d, so as to have a higher quadrature current, which helps to create the torque. [0063] The defluxing ratio is the maximum value of m for obtaining the same power P.sub.o as that obtained at the base speed with the same voltage V.sub.o and the same current I.sub.o defined above. From these may be deduced the values of the electromotive force and the values of the direct and quadrature reactances for a given machine. The X.sub.d/X.sub.q ratio thus obtained for given P.sub.o, V.sub.o and I.sub.o is an increasing function of the desired defluxing ratio, the latter possibly being, for example, greater than 2, for example equal to 6. [0064] Among the possible solutions (x,y), it is preferred to choose that which minimizes i. [0065] The control device described above is preferably used in combination with a synchronous motor having X.sub.d>X.sub.q, as defined above.
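Read through the standard per-unit phasor model (x = x.sub.q i.sub.q and e - y = x.sub.d|i.sub.d|), the pair (x,y) can be found numerically and the currents recovered. The sketch below is illustrative only: the function name, the grid-plus-bisection root finder, and all numeric inputs are assumptions, not the patent's implementation. It applies the preference of paragraph [0064], keeping the root that minimizes i:

```python
import math

def defluxing_currents(m, e, v, p, x_d, x_q, n=20000):
    """Return (alpha, i, i_d, i_q) in per-unit for one operating point.

    Solves the voltage circle x^2 + y^2 = (v/m)^2 together with the
    constant-power locus y(x), then keeps the real root minimizing the
    unitary current i. Illustrative sketch only."""
    def y_of_x(x):
        return (e * (1.0 - x_d / (x_d - x_q))
                + (p * e / m) * x_d * x_q / ((x_d - x_q) * x))

    r = v / m                                   # voltage-circle radius
    f = lambda x: x * x + y_of_x(x) ** 2 - r * r

    roots, prev = [], None
    for k in range(1, n + 1):                   # bracket roots on (0, r]
        x = r * k / n
        fx = f(x)
        if abs(fx) < 1e-12:
            roots.append(x)
        elif prev is not None and prev[1] * fx < 0:
            a, b = prev[0], x                   # refine by bisection
            for _ in range(80):
                mid = 0.5 * (a + b)
                if f(a) * f(mid) <= 0:
                    b = mid
                else:
                    a = mid
            roots.append(0.5 * (a + b))
        prev = (x, fx)

    best = None
    for x in roots:
        y = y_of_x(x)
        i = math.hypot(x / x_q, (e - y) / x_d)      # unitary phase current
        alpha = math.atan2(x_q * (e - y), x_d * x)  # lead angle vs. EMF
        cand = (alpha, i, -i * math.sin(alpha), i * math.cos(alpha))
        if best is None or cand[1] < best[1]:
            best = cand
    return best
```

For example, with the invented values m=2, e=0.9, v=1, p=1, x_d=1.0, x_q=0.4, the circle and the power locus intersect twice, and the minimizing root gives i_d = -0.5 and i_q = 0.75: half the rated current is spent on defluxing along d while three-quarters remains to produce torque.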
[0066] The control device may furthermore include: [0067] a three-phase inverter; and [0068] a vector controller designed to transmit, according to the current components i.sub.d and i.sub.q, the control signals to electronic switches of the inverter. [0069] The subject of the invention is also, independently of or in combination with the foregoing, a method of controlling a motor in which at least the supply voltage of an inverter connected to the motor and the rotation speed of the motor are measured, and the direct current component i.sub.d and quadrature current component i.sub.q of the supply current for maintaining constant power for a given speed setpoint .OMEGA.* above the base speed are determined by real-time calculation and/or by access to a register. [0070] A torque setpoint t* may be determined as a function of at least the difference between the measured rotation speed and the rotation speed setpoint .OMEGA.* of the rotor. A power setpoint may be determined as a function of at least the torque setpoint and the measured rotation speed. The unitary direct current component i.sub.d and quadrature current component i.sub.q values may be calculated in real time from the power setpoint, the measured rotation speed and the DC supply voltage of the inverter. The direct and quadrature current components may be determined according to the control laws as a function of the load and the supply voltage of the inverter. These control laws may be integrated into the computer so as to improve its dynamic performance. [0071] The subject of the present invention is also, independently of or in combination with the foregoing, an electric vehicle having a motor comprising: [0072] a stator; and [0073] at least one rotor having permanent magnets, the motor being designed so as to have X.sub.d>X.sub.q, where X.sub.d is the direct reactance and X.sub.q is the quadrature reactance.
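A minimal sketch of the control method of paragraphs [0069] and [0070], assuming a proportional outer speed loop and a precomputed register; every gain, name and table entry here is invented for illustration:

```python
def speed_loop_step(omega_ref, omega_meas, k_p=0.5):
    """Outer speed loop: error -> torque setpoint t*, then power
    setpoint p* = t* x measured speed (illustrative names/gain)."""
    t_ref = k_p * (omega_ref - omega_meas)  # torque setpoint t*
    p_ref = t_ref * omega_meas              # power setpoint p*
    return t_ref, p_ref

# A "register" of precomputed (i_d, i_q) pairs, indexed by rounded
# (speed, DC-link voltage) buckets, as the patent's alternative to
# solving the defluxing equations online (table values invented):
register = {(200, 540): (-0.5, 0.75), (250, 540): (-0.6, 0.65)}

def lookup_currents(omega_meas, v_dc):
    key = (round(omega_meas, -1), round(v_dc, -1))
    return register[key]
```

In practice such a register would be filled offline from the (x,y) equations above, so the real-time path reduces to a table lookup plus interpolation.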
[0074] The vehicle may also include a control device for controlling a synchronous motor, allowing the motor to operate at approximately constant power over a range of rotation speeds of the rotor, which includes a computer designed to determine the direct current component I.sub.d and the quadrature current component I.sub.q of the motor supply current, which are injected into the motor, the current components I.sub.d and I.sub.q being equal, to within 20%, better still to within 10% and even better to within 5%, to: I.sub.d = i.sub.d I.sub.o = -i sin .alpha. I.sub.o and I.sub.q = i.sub.q I.sub.o = i cos .alpha. I.sub.o, [0075] where I.sub.o is the maximum intensity of the current imposed by the rating of the control device; .alpha. = arctan[x.sub.q(e-y)/(x.sub.d x)]; i = sqrt[(x/x.sub.q).sup.2+((e-y)/x.sub.d).sup.2], the unitary current flowing in one phase of the armature; (x,y) being one of the real roots of the equations: x.sup.2+y.sup.2 = v.sup.2/m.sup.2 and y = e[1-x.sub.d/(x.sub.d-x.sub.q)]+(p e/m) x.sub.d x.sub.q/[(x.sub.d-x.sub.q)x]; [0076] m denotes the ratio of the rotation speed of the rotor to the base speed; [0077] e is the ratio of, on the one hand, the electromotive force and, on the other hand, the product of m multiplied by the maximum voltage per phase V.sub.o imposed by the mains supply; [0078] v is the ratio of the voltage across the terminals of one phase of the armature to the voltage V.sub.o imposed by the mains supply; [0079] p is the ratio of the rms power to the constant power at which it is desired to operate the machine; and [0080] .alpha. is the phase difference between the current and the electromotive force. [0081] The present invention will be better understood on reading the following detailed description of a nonlimiting illustrative example of the invention and on examining the appended drawing, in which: [0082] FIG.
1 shows a schematic partial view, in cross section, of a machine according to the invention; [0083] FIG. 2 is a Blondel diagram showing various sinusoidal quantities in a reference frame tied to the electrical angular frequency; [0084] FIG. 3 is a simplified block diagram of a control device for a synchronous motor according to the invention; and [0085] FIG. 4 shows schematically an illustrative example of the main computer of the control device of FIG. 3. [0086] FIG. 1 shows a synchronous electrical machine 1 comprising a stator 10 and a rotor 20 having permanent magnets 21. [0087] The stator 10 has teeth 11, each carrying an individual coil 12, the coils 12 being electrically connected together so as to be supplied by a three-phase current. [0088] The rotor 20 is a flux-concentrating rotor, the permanent magnets 21 being placed between pole pieces 22. The permanent magnets 21 and the pole pieces 22 are appropriately fastened to a shaft 23 of the machine. [0089] The pole pieces 22 may be held in place on the shaft 23 by bonding, or else by producing complementary shapes on the shaft and on the pole pieces, or else they can be held in place by rods engaged in the pole pieces 22 and fastened at their ends to flanges of the rotor. [0090] The pole pieces 22 are produced by a stack of magnetic laminations, each coated with an insulating varnish, so as to limit the induced current losses. [0091] The magnets 21 have polarities of like type, these being directed toward the pole piece 22 placed between them, as may be seen in FIG. 1. [0092] The pole pieces 22 each have a salient part 27 and their face turned toward the stator 10 has a convex portion 24. The convex portion 24 of a pole piece 22 may have a radius of curvature of between 20% and 30% of a radius of the stator, especially the inside radius of the stator, or even about 25% thereof. [0093] Each convex portion 24 has circumferential ends 25 angularly offset relative to the adjacent permanent magnets 21. 
The angular offset β of the circumferential ends 25 relative to the adjacent permanent magnets 21 may lie: [0094] between 80°/n_teeth and 100°/n_teeth, being for example about 90°/n_teeth, for a machine in which the ratio of the number of stator teeth n_teeth to the number of rotor poles n_poles is 3/2 or which satisfies the relationship n_teeth/n_poles = 6n/(6n − 2), where n is an integer greater than or equal to 2, for example equal to 2 or 3; and [0095] between 50°/n_teeth and 70°/n_teeth, being for example about 60°/n_teeth, for a machine that satisfies the relationship n_teeth/n_poles = 6n/(6n + 2), where n is an integer greater than or equal to 2, for example equal to 2 or 3. [0096] In these equations, n_teeth denotes the number of stator teeth 11 and n_poles denotes the number of pole pieces 22. [0097] The permanent magnets 21 lie radially set back from the circumferential ends 25 of the convex portions 24. The setback r in the radial direction of the magnets 21 relative to the circumferential ends 25 of the convex portions 24 may lie between 10% and 20%, or even about 15%, of the inside radius R of the stator. [0098] Each pole piece 22 furthermore has two shoulders 26 lying on either side of the salient part 27, each permanent magnet 21 lying between two shoulders 26. [0099] The salient parts 27 of each of the rotor pole pieces 22 have radial edges 28 which, just like the circumferential ends 25, are angularly offset relative to the radially directed faces 29 of the adjacent permanent magnets 21. [0100] The permanent magnets 21 have, when the machine is observed along the rotation axis X, a cross section of elongate shape in a radial direction. This cross section is, in the example described, rectangular, with its large side oriented parallel to a radius of the machine. As a variant, the permanent magnets could each have a wedge shape.
[0101] In the example in question, the rotor has ten poles and the stator twelve teeth, the stator thus having 6n teeth and the rotor 6n ± 2 poles, n being equal to 2. It would not be outside the scope of the present invention if n were to be greater than 2. [0102] In the example described, the rotor is an inner rotor, but it would not be outside the scope of the present invention if the rotor were to be an outer rotor, or if the machine were to have both an inner rotor and an outer rotor, each placed radially on either side of the stator and rotationally coupled. Advantageously, the motor satisfies, in accordance with the invention, the relationship X_d > X_q. Control Device [0103] The machine described above with reference to FIG. 1 may be controlled by a control device that allows it to operate at constant power over a wide range of rotation speeds of the rotor, as will be described with reference to FIGS. 2 and 3. This control device is particularly suitable for a machine in which X_d > X_q. [0104] For greater clarity, FIG. 2 shows a well-known reference frame tied to the electrical angular frequency ω, having a direct axis d in the same sense and the same direction as the armature flux φ or main flux, which passes through one phase of the armature, and a quadrature axis q shifted through an angle of +π/2 relative to the direct axis d. In the figure, the sinusoidal quantities may be represented by fixed vectors. [0105] In what follows, the unitary values e, x_d and x_q defined above will be used.
[0106] The shift of the unitary current i relative to the unitary electromotive force e is chosen so as to keep the voltage constant at speeds above the base speed, since the unitary component i_d of i on the direct axis d will create a flux that opposes the main flux, the total magnetomotive force therefore being reduced and consequently causing a reduction in the induced unitary voltage v, without this unduly impairing the motor torque, the control device being designed so as to allow the machine to operate with the highest possible torque above the base speed. In particular, the purpose of the control device is to allow operation at a power above the base speed substantially equal to the power at the base speed. [0107] The synchronous motor 1 is supplied with a three-phase current coming from an inverter 35 comprising six electronic switches 60, for example one or more IGBTs, each associated with a diode 61 and controlled by six control signals 62 coming from a vector controller 37. [0108] The latter is used to correct the intensity of the current delivered to the motor according to direct current component i_d and quadrature current component i_q setpoints that it receives from a main computer 45, to the measured currents i_a and i_b for two of the three phases, and to an angular position datum θ. [0109] The angular position datum θ is transmitted by a position calculator 40 connected to a position sensor 39. [0110] The position sensor 39 is also connected to a speed calculator 41. [0111] The value of the rotation speed Ω calculated by the speed calculator 41 is transmitted to the main computer 45, to a multiplier 46 and to a subtractor 47. [0112] The rotation speed Ω is subtracted from a rotation speed setpoint Ω* of the rotor in the subtractor 47, and then the difference Ω* − Ω
is processed by a regulating circuit 48 of the PID (proportional-integral-differential) type and transmitted to a torque calculator 49, which determines a torque setpoint t* according to the difference between the measured rotation speed Ω and the rotation speed setpoint Ω* of the rotor. The torque setpoint t* is limited to the maximum torque t_max that the machine is capable of delivering. [0113] The torque setpoint t* is transmitted to the multiplier 46 which calculates a power setpoint p* according to the measured rotation speed Ω of the rotor. [0114] This power setpoint p* is transmitted to the main computer 45. [0115] Moreover, the voltage V_DC across the terminals of the inverter 35 is measured and transmitted to the main computer 45 via a regulating circuit 50 of the PID type in order to smooth out any possible variations. This regulating circuit 50 delivers a unitary voltage v that may vary, being dependent on the mains voltage. [0116] The main computer 45 determines, from the data that it receives, the direct i_d and quadrature i_q current components that correspond to operation at the power setpoint p*. [0117] These i_d and i_q values are transmitted to the vector controller 37 and allow it to control, as described above, the inverter 35. [0118] The main computer 45 may determine the i_d and i_q values by access to a register 53 containing precalculated values, as illustrated in FIG. 4. This register 53 contains the i_d and i_q values for a large number of inputs v, p* and Ω. For given v, p* and Ω values, the computer determines the closest values of v, p* and Ω for which i_d(v, p*, Ω) and i_q(v, p*, Ω) are known in the register 53. [0119] The main computer 45 may also determine i_d and i_q analytically, by a real-time calculation, using the following formulae: i_d = −i·sin(α) and i_q = i·cos(α); α = arctan(x_q(e − y)/(x_d·x)); i = √((x/x_q)² + ((e − y)/x_d)²); (x, y) being one of the real roots of the equations: x² + y² = v²/m² and y = e(1 − x_d/(x_d − x_q)) + (p/(m·e))·(x_d·x_q/(x_d − x_q))·(1/x). [0120] The terms "calculator", "register", "regulating circuit", "subtractor" and "multiplier" must be understood in the broad sense of the words. All these functions may be carried out by one or more specific electronic circuits on one or more electronic cards. These functions may be carried out in hardware and/or software form. In particular, the elements 40, 41, 47, 48, 49, 46, 45, 50 and 37 may be integrated into one and the same electronic card comprising one or more microcontrollers and/or microprocessors. [0121] In the example in question, the defluxing ratio is 6, that is to say the maximum rotation speed of the rotor is six times the base speed, and for example about 1350 revolutions per minute. [0122] Of course, it would not be outside the scope of the present invention if the defluxing ratio were to be different from 6, especially greater than or equal to 2 for example. The electronic switches 60 of the inverter 35 are therefore designed accordingly. [0123] Of course, the invention is not limited to the illustrative example described above. For example, the electrical machine may be produced differently, while still having X_d > X_q. [0124] Throughout the description, the expression "having a" must be considered as being synonymous with "having at least one", unless specified to the contrary. * * * * *
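The analytic route of paragraph [0119] can be sketched numerically: find a real root (x, y) of the circle equation and the y(x) relation by bisection, then evaluate α, i, i_d and i_q. The per-unit values below (e, x_d, x_q, m, p, v) are illustrative assumptions, not figures from the patent, and the bisection bracket was chosen by inspection for these numbers:

```python
import math

# Illustrative per-unit parameters (assumed, not from the patent), with x_d > x_q.
e, x_d, x_q = 1.0, 1.2, 0.6   # emf and direct/quadrature reactances
m, p, v = 1.0, 1.0, 1.0       # speed ratio, power ratio, voltage ratio

def y_of_x(x):
    # y = e(1 - x_d/(x_d - x_q)) + (p/(m e)) * (x_d x_q/(x_d - x_q)) * (1/x)
    return e * (1 - x_d / (x_d - x_q)) + (p / (m * e)) * (x_d * x_q / (x_d - x_q)) / x

def f(x):
    # Root condition: (x, y(x)) must lie on the circle x^2 + y^2 = (v/m)^2.
    return x**2 + y_of_x(x)**2 - (v / m) ** 2

# Bisection on a bracket where f changes sign (found by inspection for these numbers).
lo, hi = 0.9, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
y = y_of_x(x)

alpha = math.atan(x_q * (e - y) / (x_d * x))
i = math.hypot(x / x_q, (e - y) / x_d)
i_d = -i * math.sin(alpha)   # defluxing component (negative)
i_q = i * math.cos(alpha)    # torque-producing component
print(x, y, i_d, i_q)
```

The defluxing component i_d comes out negative and i_q positive, consistent with the voltage-limiting strategy described in [0106]; note the identities i_q = x/x_q and i_d = −(e − y)/x_d, which follow from the definitions of α and i.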
Adding and Subtracting Constants

Sample Problem

Consider the equation x = 5. This equation claims that x is equal to 5. We can create equivalent equations to x = 5 by adding or subtracting the same amount of weight to or from each side of the scale. If we add a hacky sack with weight 1 to each pan, the scale must still balance. (Yes, those are hacky sacks. Just go with it.) It now represents the equation x + 1 = 6...although it's much more difficult to play hacky sack this way. If we add another hacky sack with weight 1 to each pan, the scale must still balance, now representing the equation x + 2 = 7. Where are we getting all these things? If we remove two hacky sacks from each side, we are back where we started, representing the equation x = 5. Plus, we get our hacky sacks back. We were starting to worry.

Now we know how to solve equations such as x + 2 = 7. We want to isolate x on one side of the equation, so we take 2 from each side and see the scale balance at x = 5. The solution to the equation is x = 5.

Although the hacky sack analogy breaks down a bit with negative numbers, unless you have one of those newfangled "negative space hacky sacks," the idea is the same. We can add or subtract any number we like, as long as we add or subtract it from both sides of the equation. Otherwise, we are unbalancing the scale. The goal is to keep the scale balanced while getting the variable all by itself on one side of the equation.

Sample Problems

Solve each equation.

Add 3 to each side of the equation to find that x = 2.

Subtract 5 from each side of the equation to find that x = -7.

One way to keep track of what we are doing is to write the operation we are performing under each side of the equation. Doctors will often do this sort of thing when they are performing operations:

Adding And Subtracting Constants Practice:

Solve the equation z + 6 = 3.
Solve the equation x + 5 - 2 = 3(4).
Solve the equation z - 6 = 5.
Solve the equation 9 - 8 + 2 = x + 4.
Solve the equation x + 1 = 1.
Solve the equation 4 + y = 100.
Solve the equation 24 ÷ 8 = y + 3.
Solve the equation 5(6) - 4 = z + 11(2).
The following equation is solved incorrectly. Identify the mistake and then correctly solve the equation: x + 5 = 14. Incorrect solution:
The following equation is solved incorrectly. Identify the mistake and then correctly solve the equation: x + 4 = 16. Incorrect solution:
The following equation is solved incorrectly. Identify the mistake and then correctly solve the equation: y - 1 = - 3. Incorrect solution:
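The balance-scale idea can be checked mechanically: to undo an added constant, subtract it from both sides. A minimal sketch (the helper name `solve_plus` is hypothetical, not from the original page):

```python
# Solve x + a = b for x by subtracting a from both sides: x = b - a.
def solve_plus(a, b):
    """Return the x satisfying x + a = b."""
    return b - a

# Check a few of the practice problems above.
z = solve_plus(6, 3)          # z + 6 = 3
y = solve_plus(3, 24 / 8)     # 24 / 8 = y + 3
x = solve_plus(5 - 2, 3 * 4)  # x + 5 - 2 = 3(4)

print(z, y, x)  # -3 0.0 9
```

Each answer can be verified by substituting back into the original equation, which is exactly the "keep the scale balanced" check.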
T.G. Clarkson, D. Gorse, J.G. Taylor, C.K. Ng, "Learning Probabilistic RAM Nets Using VLSI Structures," IEEE Transactions on Computers, vol. 41, no. 12, pp. 1552-1561, December 1992.

BibTeX:
@article{10.1109/12.214663,
  author = {T.G. Clarkson and D. Gorse and J.G. Taylor and C.K. Ng},
  title = {Learning Probabilistic RAM Nets Using VLSI Structures},
  journal = {IEEE Transactions on Computers},
  volume = {41},
  number = {12},
  issn = {0018-9340},
  year = {1992},
  pages = {1552-1561},
  doi = {http://doi.ieeecomputersociety.org/10.1109/12.214663},
  publisher = {IEEE Computer Society},
  address = {Los Alamitos, CA, USA}
}

Abstract: Hardware-realizable learning probabilistic RAMs (pRAMs) which implement local reinforcement rules utilizing synaptic rather than threshold noise in the stochastic search procedure are described. The design allows for both global and local rewards and penalties (in this latter case implementing a modified version of backpropagation). The architecture allows for serial updating of the weights of a pRAM net according to a reward/penalty learning rule. It is possible to generate a new set of pRAM outputs at least every 100 μs, which is faster than the response time of biological neurons.

References:
[1] T. G. Clarkson, D. Gorse, and J. G. Taylor, "Hardware-realisable models of neural processing," in Proc. IEE Int.
Conf. Artificial Neural Networks, London, 1989, pp. 242-246. [2] D. Gorse and J. G. Taylor, "A general model of stochastic neural processing," Biol. Cybern., vol. 63, pp. 299-306, 1990. [3] D. Gorse and J. G. Taylor, "Universal associative stochastic learning automata," Neural Network World, vol. 1, pp. 192-202, 1991. [4] D. Gorse and J. G. Taylor, "A continuous input RAM-based stochastic neural model," Neural Networks, vol. 4, pp. 657-666, 1991. [5] T. G. Clarkson, D. Gorse, and J. G. Taylor, "From wetware to hardware: Reverse engineering using probabilistic RAMs," to appear in a Special Issue: "Recent Advances in Neural Nets", J. Intell. Syst., vol. 2, pp. 11-30, 1992. [6] H. Eguchi, T. Faruta, H. Hariguchi, S. Oteki, and T. Kitaguchi, "Neural network LSI chip with on-chip learning," in Proc. IJCNN '91, vol. I, Seattle, WA, 1991, pp. 453-456. [7] C. Schneider and H. C. Card, "CMOS implementation of analog Hebbian synaptic learning circuits," in Proc. IJCNN '91, vol. I, Seattle, WA, 1991, pp. 437-442. [8] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representation by error propagation," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vols. 1 and 2. Cambridge, MA: MIT Press, 1986. [9] D. Gorse and J. G. Taylor, "An analysis of noisy RAM and neural nets," Physica D, vol. 34, pp. 90-114, 1989. [10] D. E. Rumelhart et al., Parallel Distributed Processing: Explorations in the Microstructures of Computing. Cambridge, MA: MIT Press, 1986. [11] A. G. Barto and P. Anandan, "Pattern recognizing stochastic learning automata," IEEE Trans. Syst., Man, Cybern., vol. 15, pp. 360-375, 1985. [12] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike adaptive elements that can solve difficult learning control problems," IEEE Trans. Syst., Man, Cybern., vol. 13, pp. 834-846, 1983. [13] J. G. Taylor, "Spontaneous behavior in neural networks," J. Theor. Biol., vol. 36, pp. 513-528, 1972. [14] P. C. Bressloff and J. G.
Taylor, "Random iterative networks," Phys. Rev. A, vol. 41, pp. 1126-1137, 1990. [15] T. G. Clarkson, J. G. Taylor, and D. Gorse, "pRAM automata," in Proc. IEEE Int. Workshop Cellular Neural Networks and their Appl. (CNNA '90), Budapest, 1990, pp. 235-243. [16] P. Y. Alla et al., "Silicon integration of learning algorithms and other auto-adaptive properties in a digital feedback neural network," in VLSI Design of Neural Networks, Ramacher and Rückert, Eds. Boston, MA: Kluwer, 1991, pp. 174-175. [17] D. Gorse and J. G. Taylor, "Hardware-realisable learning algorithms," Proc. INNC-90, Paris, Dordrecht, Kluwer, 1990, pp. 821-824. [18] G. E. Hinton and T. J. Sejnowski, "Learning and relearning in Boltzmann machines," in PARALLEL DISTRIBUTED PROCESSING: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, pp. [19] D. Gorse and J. G. Taylor, "Learning sequential structure with recurrent pRAM nets," Proc. IJCNN '91, vol. II, Seattle, WA, 1991, pp. 37-42. [20] D. Servan-Schreiber, A. Cleeremans, and J. L. McClelland, "Encoding sequential structure in simple recurrent networks," Carnegie Mellon Univ. Tech. Rep. CMU-CS-88-183, Nov. 1988. [21] C. L. Giles, G. Z. Sun, H. H. Chen, Y. C. Lee, and D. Chen, "Higher order recurrent networks and grammatical inference," in Advances in Neural Information Processing Systems 2, D. S. Touretzky, Ed. San Mateo, CA: Morgan Kauffman, 1990, pp. 380-387. [22] A. H. Klopf, The Hedonistic Neuron: A Theory of Memory, Learning and Intelligence. Washington, DC: Hemisphere, 1982. [23] D. E. Koshland, "Bacterial chemotaxis in relation to neurobiology," Ann. Rev. Neurosci., vol. 3, pp. 43-75, 1980. [24] K. S. Narendra and M. A. L. Thathachar, "Learning automata--A survey," IEEE Trans. Syst., Man., Cybern., vol. 4, pp. 323-334, 1974. [25] B. Katz, Nerve, Muscle and Synapse. New York: McGraw-Hill, 1966. [26] A. G. Barto and R. S. Sutton, "Landmark learning: An illustration of associative search," Biol. Cybern., vol. 42, pp. 1-8, 1981. [27] C. E.
Myers, "Reinforcement training when results are delayed and interleaved in time," in Proc. INNC-90-Paris, 1990, pp. 860-863. [28] S. Jones et al., "Toroidal neural network: Architecture and processor granularity issues," in VLSI Design of Neural Networks, Ramacher and Rückert, Eds. Boston, MA: Kluwer, 1991, pp. 174-175. [29] U. Ramacher et al., "Design of a 1st generation neurocomputer," in VLSI Design of Neural Networks, Ramacher and Rückert, Eds. Boston, MA: Kluwer, 1991, pp. 174-175. [30] F. M. A. Salam and Y. Wang, "A real-time experiment using a 50-neuron CMOS analog silicon chip with on-chip digital learning," IEEE Trans. Neural Networks, vol. 2, no. 4, pp. 461-464, 1991. [31] T. G. Clarkson, C. K. Ng, D. Gorse, and J. G. Taylor, "A serial update VLSI architecture for the learning probabilistic RAM neuron," in Proc. ICANN91, Helsinki, 1991, pp. 1573-1576.

Index Terms: synaptic noise; global rewards; global penalties; local penalties; RAM nets; VLSI structures; learning probabilistic RAMs; local reinforcement rules; stochastic search; local rewards; backpropagation; serial updating; weights; learning rule; backpropagation; content-addressable storage; neural nets; VLSI.

T.G. Clarkson, D. Gorse, J.G. Taylor, C.K. Ng, "Learning Probabilistic RAM Nets Using VLSI Structures," IEEE Transactions on Computers, vol. 41, no. 12, pp. 1552-1561, Dec. 1992, doi:10.1109/12.214663.
Sampling Methods

Sampling Methods and Sampling Distributions

GOALS
(The goals apply to both means and proportions.)

WHY SAMPLE THE POPULATION?
The destructive nature of certain tests.
The physical impossibility of checking all items in the population.
The cost of studying all the items in a population is often prohibitive.
The adequacy of sample results.
To contact the whole population would often be time-consuming.

PROBABILITY SAMPLING
What is a Probability Sample? A sample selected in such a way that each item or person in the population being studied has a known (nonzero) likelihood of being included in the sample.
Simple Random Sample: A sample formulated so that each item or person in the population has the same chance of being included.

SIMPLE RANDOM SAMPLING VIA:
Given a list of elements, select a random sample using a
» Uniform Random Number Generator
» Sort Function

PROBABILITY SAMPLING (continued)
Systematic Random Sampling: The items or individuals of the population are arranged in some way, alphabetically or by some other method. A random starting point is selected, and then every kth member of the population is selected for the sample.
Stratified Random Sampling: A population is first divided into subgroups, called strata, and a sample is selected from each stratum.

PROBABILITY SAMPLING (continued)
Sampling Error: The difference between a sample statistic and its corresponding population parameter. For example …

SAMPLING DISTRIBUTION OF THE SAMPLE MEANS
A probability distribution consisting of a list of all possible sample means of a given sample size selected from a population, and the probability of occurrence associated with each sample mean.
EXAMPLE: A law firm has five partners. At their weekly partners meeting each reported the number of hours they charged clients for their professional services last week. The results are given on the next slide.

EXAMPLE (continued)
Two partners are randomly selected. How many different samples are possible?
EXAMPLE (continued)
This is the combination of 5 objects taken 2 at a time. That is, C(5,2) = 5!/[2!·3!] = 10. List the possible samples of size 2 and compute the mean.

EXAMPLE (continued)
Organize the sample means into a sampling distribution. The sampling distribution is shown below.

EXAMPLE (continued)
Compute the mean of the sample means and compare it with the population mean. The population mean is μ = (22 + 26 + 30 + 26 + 22)/5 = 25.2. The mean of the sample means = [(22)(1) + (24)(4) + (26)(3) + (28)(2)]/10 = 25.2. Observe that the mean of the sample means is equal to the population mean.

CENTRAL LIMIT THEOREM
For a population with a mean μ and a variance σ², the sampling distribution of the means of all possible samples of size n generated from the population will be approximately normally distributed, with the mean of the sampling distribution equal to μ and the variance equal to σ²/n, assuming that the sample size is sufficiently large.

POINT ESTIMATES
One value (called a point) that is used to estimate a population parameter. Examples of point estimates are the sample mean, the sample standard deviation, the sample variance, the sample proportion, etc.
EXAMPLE: The number of defective items produced by a machine was recorded for five randomly selected hours during a 40-hour work week. The observed numbers of defectives were 12, 4, 7, 14, and 10. So the sample mean is 9.4. Thus a point estimate for the weekly mean number of defectives is 9.4.

INTERVAL ESTIMATES
An Interval Estimate states the range within which a population parameter probably lies. The interval within which a population parameter is expected to occur is called a confidence interval. The two confidence intervals that are used extensively are the 95% and the 99%. A 95% confidence interval means that about 95% of the similarly constructed intervals will contain the parameter being estimated.
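The ten samples in the partner example can be enumerated directly, confirming both the sampling distribution and the fact that the mean of the sample means equals the population mean of 25.2:

```python
from itertools import combinations
from collections import Counter

hours = [22, 26, 30, 26, 22]  # hours charged by the five partners

# All C(5,2) = 10 samples of size 2 and their means.
sample_means = [sum(pair) / 2 for pair in combinations(hours, 2)]

distribution = Counter(sample_means)                  # sampling distribution of the mean
grand_mean = sum(sample_means) / len(sample_means)    # mean of the sample means
pop_mean = sum(hours) / len(hours)                    # population mean

print(sorted(distribution.items()))  # [(22.0, 1), (24.0, 4), (26.0, 3), (28.0, 2)]
print(grand_mean, pop_mean)          # 25.2 25.2
```

This is the unbiasedness of the sample mean seen concretely: averaging over every possible sample recovers the population mean exactly.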
INTERVAL ESTIMATES (continued)
Another interpretation of the 95% confidence interval is that 95% of the sample means for a specified sample size will lie within 1.96 standard deviations of the hypothesized population mean. For the 99% confidence interval, 99% of the sample means for a specified sample size will lie within 2.58 standard deviations of the hypothesized population mean. (Figure: standard normal curve with cutoffs at -2.58, -1.96, 0, 1.96 and 2.58.)

STANDARD ERROR OF THE SAMPLE MEANS
This is the standard deviation of the sampling distribution of the sample means. The standard error of the sample means is computed by σ_x̄ = σ/√n, where:
σ_x̄ is the symbol for the standard error of the sample means;
σ is the standard deviation of the population;
n is the size of the sample.

STANDARD ERROR OF THE SAMPLE MEANS (continued)
If σ is not known and n = 30 or more (considered a large sample), the standard deviation of the sample, designated by s, is used to approximate the population standard deviation, σ. The formula for the standard error then becomes s_x̄ = s/√n. What happens as n gets larger?

THE 95% AND THE 99% CONFIDENCE INTERVALS FOR μ
The 95% and the 99% confidence intervals for μ are constructed as follows when n ≥ 30:
the 95% CI for the population mean μ is given by X̄ ± 1.96·s/√n;
the 99% CI for μ is given by X̄ ± 2.58·s/√n.

CONFIDENCE INTERVALS (CI) FOR μ
In general, a confidence interval for the mean is computed by X̄ ± z·s/√n. The z value is obtained from the standard normal table in Appendix D (look-up table).

EXAMPLE
The Dean of Students at Penta Tech wants to estimate the mean number of hours worked per week by students. A sample of 49 students showed a mean of 24 hours with a standard deviation of 4 hours. What is the point estimate of the mean number of hours worked per week by students?
» The point estimate is 24 hours (the sample mean).
What is the 95% confidence interval for the average number of hours worked per week by the students?

EXAMPLE (continued)
Using the formula, we have 24 ± 1.96(4/√49) = 24 ± 1.12, or 22.88 to 25.12. What are the 95% confidence limits?
The endpoints of the confidence interval are the confidence limits. The lower confidence limit is 22.88 and the upper confidence limit is 25.12. What degree of confidence is being used? The degree of confidence (level of confidence) is 0.95.

EXAMPLE (continued)
Interpret the findings.
» If we had time to select 100 samples of size 49 from the population of the number of hours worked per week by students at Penta Tech and compute the sample means and 95% confidence intervals, the population mean of the number of hours worked by the students per week would be found in about 95 out of the 100 confidence intervals. Either a confidence interval contains the population mean or it does not. About 5 out of the 100 confidence intervals would not contain the population mean.

CONFIDENCE INTERVAL FOR A POPULATION PROPORTION
The confidence interval for a population proportion is p̂ ± z·s_p̂, where s_p̂ is the standard error of the proportion, s_p̂ = √(p̂(1 − p̂)/n). The confidence interval is therefore constructed by p̂ ± z·√(p̂(1 − p̂)/n), where:
p̂ is the sample proportion;
z is the z value for the degree of confidence selected;
n is the sample size.

EXAMPLE
Chris Cooper, a financial planner, is studying the retirement plans of young executives. A sample of 500 young executives who owned their own home revealed that 175 planned to sell their homes and retire to Arizona. Develop a 98% confidence interval for the proportion of executives that plan to sell and move to Arizona. Here n = 500, p̂ = 175/500 = 0.35, and z = 2.33. The 98% CI is 0.35 ± 2.33·√(0.35(1 − 0.35)/500), that is, 0.35 ± 0.0497.

CORRECTION FACTOR
A population that has a fixed upper bound is said to be finite. For a finite population, where the total number of objects is N and the size of the sample is n, the following adjustment is made to the standard errors of the sample means and the proportion. Standard error of the sample means: σ_x̄ = (s/√n)·√((N − n)/(N − 1)).

CORRECTION FACTOR (continued)
EXAMPLE 30 The Dean of Students at Penta Tech wants to estimate the mean number of hours worked per week by students. A sample of 49 students showed a mean of 24 hours with a standard deviation of 4 hours. Construct a 95% confidence interval for the mean number of hours worked per week by the students if there are only 500 students on campus. Now n/N = 49/500 = 0.098 > 0.05, so we have to use the finite population correction factor. 49 500 1 = [22.9352,25.1065] SELECTING A SAMPLE SIZE 31 There are 3 factors that determine the size of a sample, none of which has any direct relationship to the size of the population. They are: 1. The degree of confidence selected. 2. The maximum allowable error. 3. The variation of the population. SAMPLE SIZE FOR THE MEAN 32 A convenient computational formula for determining n is: n Z S E is the allowable error. z is the z score associated with the degree of confidence selected. s is the sample deviation of the pilot EXAMPLE 33 A consumer group would like to estimate the mean monthly electric bill for a single family house in July. Based on similar studies the standard deviation is estimated to be $20.00. A 99% level of confidence is desired, with an accuracy of $5.00. How large a sample is required? n = [(2.58)(20)/5]2 = 106.5024 107. SAMPLE SIZE FOR PROPORTIONS 34 The formula for determining the sample size in the case of a proportion is: n p(1 p) Z p is the estimated proportion, based on past experience or a pilot survey. z is the z value associated with the degree of confidence selected. E is the maximum allowable error the researcher will EXAMPLE 35 The American Kennel Club wanted to estimate the proportion of children that have a dog as a pet. If the club wanted the estimate to be within 3% of the population proportion, how many children would they need to contact? Assume a 95% level of confidence and that the Club estimated that 30% of the children have a dog as a pet. n = (0.30)(0.70)(1.96/0.03)2 = 896.3733 897.
Models and selection criteria for regression and classification

- Level Perspective on Branch Architecture Performance, IEEE Micro-28, 2002. Cited by 4 (1 self).
Bayesian network models are widely used for supervised prediction tasks such as classification. Usually the parameters of such models are determined using `unsupervised' methods such as maximization of the joint likelihood. In many cases, the reason is that it is not clear how to find the parameters maximizing the supervised (conditional) likelihood. We show how the supervised learning problem can be solved efficiently for a large class of Bayesian network models, including the Naive Bayes (NB) and tree-augmented NB (TAN) classifiers. We do this by showing that under a certain general condition on the network structure, the supervised learning problem is exactly equivalent to logistic regression. Hitherto this was known only for Naive Bayes models. Since logistic regression models have a concave loglikelihood surface, the global maximum can be easily found by local optimization methods.

- In , 2000. Cited by 3 (0 self).
We propose a data reduction method based on a probabilistic similarity framework where two vectors are considered similar if they lead to similar predictions.
We show how this type of a probabilistic similarity metric can be defined both in a supervised and unsupervised manner. As a concrete application of the suggested multidimensional scaling scheme, we describe how the method can be used for producing visual images of high-dimensional data, and give several examples of visualizations obtained by using the suggested scheme with probabilistic Bayesian network models. 1. INTRODUCTION Multidimensional scaling (see, e.g., [3, 2]) is a data compression or data reduction task where the goal is to replace the original high-dimensional data vectors with much shorter vectors, while losing as little information as possible. Intuitively speaking, it can be argued that a pragmatically sensible data reduction scheme is such that two vectors close to each other in the original multidimensional s... , 2002 "... my family-- especially my father, Donald. iv Abstract Many important data analysis tasks can be addressed by formulating them as probability estimation problems. For example, a popular general approach to automatic classification problems is to learn a probabilistic model of each class from data in ..." Cited by 3 (1 self) Add to MetaCart my family-- especially my father, Donald. iv Abstract Many important data analysis tasks can be addressed by formulating them as probability estimation problems. For example, a popular general approach to automatic classification problems is to learn a probabilistic model of each class from data in which the classes are known, and then use Bayes's rule with these models to predict the correct classes of other data for which they are not known. Anomaly detection and scientific discovery tasks can often be addressed by learning probability models over possible events and then looking for events to which these models assign low probabilities. 
Many data compression algorithms such as Huffman coding and arithmetic coding rely on probabilistic models of the data stream in order achieve high compression rates. - Hong Kong University of Science and Technology , 1999 "... xvi 1) ..." - Proceedings of the Fifth European Workshop on Case-based Reasoning (EWCBR’2000). LNAI1898 , 2000 "... . We introduce a distance measure based on the idea that two vectors are considered similar if they lead to similar predictive probability distributions. The suggested approach avoids the scaling problem inherent to many alternative techniques as the method automatically transforms the original ..." Cited by 2 (0 self) Add to MetaCart . We introduce a distance measure based on the idea that two vectors are considered similar if they lead to similar predictive probability distributions. The suggested approach avoids the scaling problem inherent to many alternative techniques as the method automatically transforms the original attribute space to a probability space where all the numbers lie between 0 and 1. The method is also flexible in the sense that it allows different attribute types (discrete or continuous) in the same consistent framework. To study the validity of the suggested measure, we ran a series of experiments with publicly available data sets. The empirical results demonstrate that the unsupervised distance measure is sensible in the sense that it can be used for discovering the hidden clustering structure of the data. 1 Introduction Machine learning techniques usually aim at compressing available sample data into more compact representations called models. These models can then be used for ... , 2002 "... this paper we show, how this supervised learning problem can be solved e#ciently. We introduce an alternative parametrization in which the supervised likelihood becomes concave. From this result it follows that there can be at most one maximum, easily found by local optimization methods. We present ..." 
Cited by 2 (1 self) Add to MetaCart this paper we show, how this supervised learning problem can be solved e#ciently. We introduce an alternative parametrization in which the supervised likelihood becomes concave. From this result it follows that there can be at most one maximum, easily found by local optimization methods. We present test results that show this is feasible and highly beneficial "... We present an overview of the data collection and transcription efforts for the COnversational Speech In Noisy Environments (COSINE) corpus. The corpus is a set of multi-party conversations recorded in real world environments, with background noise, that can be used to train noise-robust speech reco ..." Add to MetaCart We present an overview of the data collection and transcription efforts for the COnversational Speech In Noisy Environments (COSINE) corpus. The corpus is a set of multi-party conversations recorded in real world environments, with background noise, that can be used to train noise-robust speech recognition systems or develop speech de-noising algorithms. We explain the motivation for creating such a corpus, and describe the resulting audio recordings and transcriptions that comprise the corpus. These high quality recordings were captured in-situ on a custom wearable recording system, whose design and construction is also described. On separate synchronized audio channels, seven-channel audio is captured with a 4-channel far-field microphone array, along with a close-talking, a monophonic far-field, and a throat microphone. This corpus thus creates many possibilities for speech algorithm research. "... Abstract: Bayesian networks encode causal relations between variables using probability and graph theory. They can be used both for prediction of an outcome and interpretation of predictions based on the encoded causal relations. In this paper we analyse a tree-like Bayesian network learning algorit ..." 
Add to MetaCart Abstract: Bayesian networks encode causal relations between variables using probability and graph theory. They can be used both for prediction of an outcome and interpretation of predictions based on the encoded causal relations. In this paper we analyse a tree-like Bayesian network learning algorithm optimised for classification of data and we give solutions to the interpretation and analysis of predictions. The classification of logical – i.e. binary – data arises specifically in the field of medical diagnosis, where we have to predict the survival chance based on different types of medical observations or we must select the most relevant cause corresponding again to a given patient record. Surgery survival prediction was examined with the algorithm. Bypass surgery survival chance must be computed for a given patient, having a data-set of 66 medical examinations for 313 patients.
The mathematics of slots

Parameters and variables of the probability models

We denote by p the number of distinct symbols of the machine. If the machine has blank stops, the blank should be counted as a symbol among them. The parameter p is specific to the machine. We denote by n the length of a given payline; n is specific to that payline.

Each slot machine belongs to one of two types:
Type A – All reels have the same distribution of symbols;
Type B – The reels have different numbers of stops and each symbol has a different distribution over the stops of each reel.

In case A, denote by t the number of stops on each reel and by c(S) the distribution (number of instances) of a symbol S on each reel. In case B, denote by t(j) the number of stops on reel number j and by c(S, j) the distribution of symbol S on reel number j. Given a specific symbol S, the probability of S occurring on a reel after a spin is P(S) = c(S)/t in case A and P(S, j) = c(S, j)/t(j) in case B, where j is the number of that reel. The probabilities P(S), respectively P(S, j), are called the basic probabilities in slots.

Winning combinations, slots events

Any winning rule on a payline is expressed through a combination of symbols (for instance, a specific combination of three given symbols, any bar-symbol twice, or any triple of symbols), and any outcome is a specific combination of stops on that line. Therefore, the combination of stops should naturally be taken as the elementary event of the probability field. For a payline of length n across n reels we have t^n possible combinations of stops and p^n possible combinations of symbols in case A. In case B, we have the same number of possible combinations of symbols and t(1)·t(2)·…·t(n) possible combinations of stops for that payline of length n.

With regard to the complexity of the events in respect to the ease of the probability computations, we have: Simple events. These are the events related to one line, which are types of combinations of stops expressed through specific numbers of identical symbols (instances); for example, a specific symbol appearing exactly twice on the line. Complex events of type 1.
These are unions of simple events related to one line. For instance, the event "any triple" on a payline of length 3 of a fruit machine is a complex event of type 1, being the union of the simple events "a specific symbol three times" over all symbols. "Any double", "two cherries or two oranges", and "at least one cherry" are also complex events of type 1. Complex events of type 2. These are events that are types of combinations of stops expressed through specific numbers of identical symbols, related to several lines. For instance, "a winning combination on at least one payline" is a complex event of type 2. Complex events of type 3. These are unions of events that are types of combinations of stops expressed through specific numbers of identical symbols (like the complex events of type 2), related to several lines. For instance, "any triple on paylines 1 or 2" is a complex event of type 3. "At least one cherry on at least one payline" is also a complex event of type 3.

General formulas of the probability of the winning events related to one payline

For an event E related to a line of length n, the general formula of the probability of E is

P(E) = F(E)/t^n in case A and P(E) = F(E)/(t(1)·t(2)·…·t(n)) in case B, (1)

where F(E) is the number of combinations of stops favorable for the event E to occur, t is the number of stops per reel in case A, and t(j) is the number of stops on reel j in case B.

For an event E expressed through the number of instances of each symbol on a payline in case A, formula (1) is equivalent to

P(E) = [n!/(n(1)!·n(2)!·…·n(p)!)] · (c(1)/t)^n(1) · (c(2)/t)^n(2) · … · (c(p)/t)^n(p), (2)

where n(1) is the number of instances of the first symbol, and so on, n(p) is the number of instances of the p-th symbol (with n(1) + … + n(p) = n), and c(i) is the number of stops carrying the i-th symbol on each reel. Formula (2) can be directly applied for winning events defined through the distribution of all symbols on the payline, in case A. These are simple events. For more complex events, we must apply the general formula (1), which reverts to counting the number of favorable combinations of stops F(E), or, in particular situations, apply formula (2) several times and add the results. In case B, the number of variables is larger and therefore most of the explicit formulas for case B are too unwieldy to reproduce here.
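As an illustration of formula (2), here is a small sketch (the function names and the notation are our own, since the original symbols did not survive extraction): it computes the case-A probability of a given distribution of symbols on a payline and checks it against brute-force enumeration of all stop combinations on a toy machine.

```python
from math import factorial
from itertools import product

def payline_prob_case_a(counts, t, instances):
    """Case A (identical reels): probability that a payline of length
    n = sum(instances) shows exactly instances[i] copies of symbol i,
    where counts[i] is the number of stops carrying symbol i on each
    reel and t is the total number of stops per reel (formula (2))."""
    n = sum(instances)
    coef = factorial(n)
    for m in instances:
        coef //= factorial(m)          # multinomial coefficient n!/(n1!...np!)
    prob = float(coef)
    for c, m in zip(counts, instances):
        prob *= (c / t) ** m           # (c_i / t)^{n_i}
    return prob

def brute_force(counts, t, instances):
    """Check via formula (1): count favorable stop combinations among
    all t^n of them (assumes sum(counts) == t)."""
    reel = [i for i, c in enumerate(counts) for _ in range(c)]  # one entry per stop
    n = sum(instances)
    favorable = sum(
        1 for combo in product(reel, repeat=n)
        if all(combo.count(i) == m for i, m in enumerate(instances))
    )
    return favorable / t ** n
```

For instance, on a 4-stop reel carrying symbols A, A, B, C, the probability that a 3-reel payline shows A twice and B once is 3 · (2/4)^2 · (1/4) = 0.1875, and the enumeration agrees.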
We take here one particular type of event for which we present its probability formula in terms of basic probabilities, namely the events expressed through a number of instances of one symbol. If E is the event "exactly m instances of S" (0 ≤ m ≤ n), then

P(E) = sum over all choices of m reels j(1) < … < j(m) of [P(S, j(1)) · … · P(S, j(m))] · [product over the remaining reels k of (1 − P(S, k))],

where P(S, j) and P(S, k) are the basic probabilities (the probability of symbol S occurring on reel number j, respectively k).

Probability calculus tools for events related to several lines

For events related to several lines, other properties of probability are used (for instance, the inclusion-exclusion principle), along with formulas (1) and (2) and some approximation methods necessary for the ease of computations. When estimating the probability of an event related to several lines, some topological properties of that group of lines count; for instance, the independence of the lines. We call two lines independent if they do not contain stops of the same reel. This means that the outcome on one line does not depend on the outcome of the other, and vice versa. Two lines that are not independent will be called non-independent. For two non-independent lines, the outcome of one is influenced (partially or totally) by the outcome of the other. This definition can be extended to several lines (m), as follows: we call m lines independent if every pair of lines among them is independent. From a probabilistic point of view, any two or more events, each related to a line from a group of independent lines, are independent in the sense of the definition of independence of events from probability theory.

[Figure: Independent and non-independent lines in a 3 x 3-display of a 9-reel slot machine.] In that figure, one pair of lines is independent, while two other pairs are non-independent (for these last two pairs, the lines have a stop in common).
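The event "exactly m instances of one symbol" described above can be computed directly from the basic probabilities by summing over the possible subsets of reels showing the symbol. A minimal sketch (names are illustrative), valid for case B, where each reel j has its own basic probability:

```python
from itertools import combinations

def prob_exactly_m(basic_probs, m):
    """Probability that symbol S appears on exactly m of the n reels
    crossed by the payline; basic_probs[j] is the basic probability
    of S on reel j (case B: the reels may differ)."""
    n = len(basic_probs)
    total = 0.0
    for chosen in combinations(range(n), m):
        chosen = set(chosen)
        term = 1.0
        for j, p in enumerate(basic_probs):
            term *= p if j in chosen else 1.0 - p  # S on chosen reels, not elsewhere
        total += term
    return total
```

With equal basic probabilities this reduces to the binomial distribution, and the values for m = 0, ..., n always sum to 1, which makes a convenient sanity check.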
[Figure: Non-independent lines in a 4 x 5-display of a 5-reel slot machine.] In that figure, several pairs of lines, and hence the groups formed from them, are non-independent, since within each of the mentioned groups we have stops of the same reel on different lines. In such a configuration there is no group of independent lines, regardless of the shape or other properties of the lines.

An immediate consequence of the definition of independent lines is that if two lines intersect each other (that is, they share common stops), they are non-independent, so any group of lines containing them will be non-independent. Another consequence is that if two lines are independent, they do not intersect each other. If two lines do not intersect each other, they are not necessarily independent: the last figure contains non-intersecting lines that are nevertheless non-independent, while the previous figure contains non-intersecting lines that are independent. The non-independent lines (intersecting or non-intersecting) for which there are non-shared stops belonging to the same reels are called linked lines. For events related to linked lines, the probability estimations are only possible if we know the arrangements of the symbols on the reels, not only their distributions.

All probabilities were worked out under the following assumptions:
- the reels spin independently;
- a payline does not contain two stops of the same reel (it crosses over the reels without overlapping them); this reverts to the fact that any n events, each one related to one stop of the payline, are independent of each other;
- each reel contains p symbols; this is actually a convention: if a symbol does not appear on a reel, we simply take its distribution on that reel to be zero.

Given parameters

Of course, any practical application can be fulfilled only if we know in advance the parameters of the given slot machine, that is, the numbers of stops of the reels and the symbol distributions on the reels.
All the probability formulas and tables of values are ultimately useless without this information. In the book The Mathematics of Slots: Configurations, Combinations, Probabilities you will find explained some methods of estimating these parameters based on empirical data collected through statistical observation and physical measurements. Of course, taking into account the incomputable error ranges of such approximations, any credible information regarding these parameters should prevail over these methods of estimating them. The Mathematics Department of Infarom will soon launch the project Probability Sheet for any Slot Game, which deals with collecting statistical data from slot players, using the data to estimate the parameters of the slot machine, refining the estimations with newly collected data, and computing the probabilities and other statistical indicators attached to the payout schedule of the slot machine, in order to provide the so-called PAR sheet of any slot game on the market. Contact us with the subject "slots data project" if you want to be part of this future project.

Practical applications and numerical probabilities

This section is dedicated to practical results, in which the general formulas are particularized in order to provide results for the most common categories of slot games and winning events. The practical results are presented both as specific formulas, ready for inputting the parameters of the slot game, and as computed numerical results, where the specific formulas allow the generation of two-dimensional tables of values. The collection of results holds for winning combinations with no wild symbols (jokers) and is partial. You can find the complete collection of practical results in the book The Mathematics of Slots: Configurations, Combinations, Probabilities, for 3-reel, 5-reel, 9-reel, and 16-reel slot machines.
3-reel slot machines

The 3-reel slot machines could have the following common configurations of the display: 1 x 3, 2 x 3, 3 x 3. The standard length of a payline is 3. The common winning events on a payline are:

│ Winning event │ Case A │ Case B │
│ – A specific symbol three times (for example, …) │ table │ formula and tables │
│ – Any symbol three times (triple) │ │ │
│ – A specific symbol exactly twice (for example, … … any) │ table │ formula and tables │
│ – Any symbol exactly twice (double) │ │ │
│ – A specific symbol exactly once (for example, … any any) │ │ │
│ – Any combination of two specific symbols (for example, mix of … & …) │ tables │ formula │
│ – Any combination of at least one of three specific symbols (for example, any bar any bar any bar, with three bar symbols like …) │ formula │ formula │

(The symbols from the examples are just for illustrating the winning combinations and may be replaced by symbols of any graphic. For the same parameters of the machine, the probabilities of the above events are the same regardless of the chosen graphic for the symbols.)

Unions of winning events on a payline (disjunctions of the previous events, operated with "or"):

│ Winning event │ Case A │ Case B │
│ 8. A specific symbol at least twice │ table │ formula and tables │
│ 9. A specific symbol at least once │ │ │
│ 10. A specific symbol three times or another specific symbol twice │ table │ formula │
│ 11. A specific symbol three times or another specific symbol once │ │ │
│ 12. A specific symbol three times or another specific symbol at least once │ table │ formula │
│ 13. A specific symbol three times or any combination of that symbol with another specific symbol │ table │ formula │
│ 14. A specific symbol twice or another specific symbol once │ │ │
│ 15. A specific symbol twice or any combination of at least one of three other specific symbols │ │ │

On a 3-reel 2 x 3- or 3 x 3-display slot machine, any two paylines are linked; therefore we cannot estimate the probabilities of the winning events related to several lines.

16-reel slot machines

The 16-reel slot machines usually have the 4 x 4 configuration of the display. The standard length of a payline is 4, but it could also be 3, 6, 7, or 8. The 16-reel 4 x 4-display slot machine could have 8 to 22 paylines of length 4, as follows: 4 horizontal, 4 vertical, 2 oblique (diagonal), or 12 trapezoidal lines. It could also have 4 transversal stair lines of length 7, 12 double-stair lines of length 6, or 10 double-stair lines of length 8. It could also have 4 oblique lines of length 3. The common winning events on a payline are:

│ Winning event │ Case A │ Case B │
│ – A specific symbol four times (on a payline of length at least 4; for example, …) │ table │ formula │
│ – Any symbol four times (quadruple; on a payline of length at least 4) │ │ │
│ – A specific symbol exactly three times (on a payline of length at least 3; for example, … any) │ table │ formula │
│ – Any symbol exactly three times (triple) (on a payline of length at least 3) │ │ │
│ – Any combination of two specific symbols (on a payline of length at least 3; for example, mix of … & …) │ tables │ formula │
│ – Any combination of at least one of three specific symbols (on a payline of length at least 3; for example, any bar any bar any bar any bar, with three bar symbols like …) │ │ │

The table notes the probabilities of the winning events on a payline of length 4.

Unions of winning events on a payline (disjunctions of the previous events, operated with "or"):

│ Winning event │ Case A │ Case B │
│ 7. A specific symbol at least three times │ table │ formula │
│ 8. A specific symbol four times or another specific symbol three times │ │ │
│ 9. A specific symbol four times or another specific symbol at least three times │ tables │ formula │
│ 10. A specific symbol four times or any combination of that symbol with another specific symbol │ tables │ formula │
│ 11. A specific symbol three times or any combination of at least one of three other specific symbols │ │ │

Winning events on several paylines

For the probabilities of these events, we considered only paylines of the regular length 4 in case A.

1.1 A winning event on any of the horizontal lines
1.2 A winning event on any of the vertical lines
1.3 A winning event on any of the horizontal or vertical lines
1.4 A winning event on either or both of the diagonals
1.5 A winning event on any of the horizontal or diagonal lines
1.6 A winning event on any of the vertical or diagonal lines
1.7 A winning event on any of the horizontal, vertical, or diagonal lines
1.8 A winning event on any of the left-right trapezoidal lines
1.9 A winning event on any of the horizontal or left-right trapezoidal lines
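For a winning event on at least one line of a group of independent lines (in the sense defined earlier), the per-line probabilities combine through the simple complement rule; a small sketch of our own, which is explicitly not valid for linked lines:

```python
def prob_at_least_one(line_probs):
    """P(event on at least one of several INDEPENDENT paylines)
    = 1 - prod_j (1 - p_j). Not valid for linked lines, where the
    joint behaviour depends on the symbol arrangement on the reels."""
    miss = 1.0
    for p in line_probs:
        miss *= 1.0 - p          # probability of missing every line so far
    return 1.0 - miss
```

With two independent lines of probabilities 0.1 and 0.2 this gives 1 - 0.9 * 0.8 = 0.28; for non-independent lines one falls back on the inclusion-exclusion principle over the joint events.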
Robust Monte Carlo Methods for Light Transport Simulation

Reference: Eric Veach, Ph.D. dissertation, Stanford University, December 1997.

Light transport algorithms generate realistic images by simulating the emission and scattering of light in an artificial environment. Applications include lighting design, architecture, and computer animation, while related engineering disciplines include neutron transport and radiative heat transfer. The main challenge with these algorithms is the high complexity of the geometric, scattering, and illumination models that are typically used. In this dissertation, we develop new Monte Carlo techniques that greatly extend the range of input models for which light transport simulations are practical. Our contributions include new theoretical models, statistical methods, and rendering algorithms.

We start by developing a rigorous theoretical basis for bidirectional light transport algorithms (those that combine direct and adjoint techniques). First, we propose a linear operator formulation that does not depend on any assumptions about the physical validity of the input scene. We show how to obtain mathematically correct results using a variety of bidirectional techniques. Next we derive a different formulation, such that for any physically valid input scene, the transport operators are symmetric. This symmetry is important for both theory and implementations, and is based on a new reciprocity condition that we derive for transmissive materials. Finally, we show how light transport can be formulated as an integral over a space of paths. This framework allows new sampling and integration techniques to be applied, such as the Metropolis sampling algorithm. We also use this model to investigate the limitations of unbiased Monte Carlo methods, and to show that certain kinds of paths cannot be sampled.
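The Metropolis sampling idea referenced above can be illustrated away from rendering with a one-dimensional toy (our own sketch, not code from the dissertation): a random walk whose visit frequency is proportional to an unnormalized target function, which is exactly the property Metropolis light transport exploits in path space.

```python
import random

def metropolis(f, step, n, rng, x0=0.5):
    """Random-walk Metropolis sampling: states are visited with
    frequency proportional to f (no normalization constant needed)."""
    x, fx = x0, f(x0)
    samples = []
    for _ in range(n):
        y = x + rng.uniform(-step, step)          # symmetric proposal
        fy = f(y)
        if fy > 0 and rng.random() < min(1.0, fy / fx):
            x, fx = y, fy                          # accept the move
        samples.append(x)                          # a rejected move repeats x
    return samples

# Target: density proportional to x^2 on [0, 1]; normalized it is 3x^2,
# so the exact probability of the upper half is P(X > 0.5) = 7/8.
f = lambda x: x * x if 0.0 <= x <= 1.0 else 0.0
rng = random.Random(1)
samples = metropolis(f, 0.25, 200_000, rng)
frac_upper = sum(1 for s in samples if s > 0.5) / len(samples)
```

The empirical fraction of samples above 0.5 approaches 0.875, with no need to ever compute the normalization constant of the target.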
Our statistical contributions include a new technique called multiple importance sampling, which can greatly increase the robustness of Monte Carlo integration. It uses more than one sampling technique to evaluate an integral, and then combines these samples in a way that is provably close to optimal. This leads to estimators that have low variance for a broad class of integrands. We also describe a new variance reduction technique called efficiency-optimized Russian roulette. Finally, we link these ideas together to obtain new Monte Carlo light transport algorithms. Bidirectional path tracing uses a family of different path sampling techniques that generate some path vertices starting from a light source, and some starting from a sensor. We show that when these techniques are combined using multiple importance sampling, a large range of difficult lighting effects can be handled efficiently. The algorithm is unbiased, handles arbitrary geometry and materials, and is relatively simple to implement. The second algorithm we describe is Metropolis light transport, inspired by the Metropolis sampling method from computational physics. Paths are generated by following a random walk through path space, such that the probability density of visiting each path is proportional to the contribution it makes to the ideal image. The resulting algorithm is unbiased, uses little storage, handles arbitrary geometry and materials, and can be orders of magnitude more efficient than previous unbiased approaches. It performs especially well on problems that are usually considered difficult, e.g. those involving bright indirect light, small geometric holes, or glossy surfaces. To our knowledge, this is the first application of the Metropolis method to transport problems of any kind. Click on any chapter heading to retrieve an individual chapter (Postscript with low-resolution grayscale images). The full dissertation can also be downloaded as a single file. 
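The multiple importance sampling idea mentioned above can be made concrete with a toy integral (a sketch of the standard balance-heuristic combination with equal sample counts, not code from the dissertation): two sampling techniques cover the integral of x^2 over [0, 1], and each sample is weighted by its technique's density relative to the sum of all densities.

```python
import math
import random

def balance_heuristic_estimate(f, techniques, n, rng):
    """Multiple importance sampling, balance heuristic, n samples per
    technique: a sample x drawn from technique i receives the weight
    p_i(x) / sum_j p_j(x), so its contribution simplifies to
    f(x) / (n * sum_j p_j(x))."""
    total = 0.0
    for draw, _pdf in techniques:
        for _ in range(n):
            x = draw(rng)
            denom = sum(pdf(x) for _draw, pdf in techniques)
            total += f(x) / (n * denom)
    return total

f = lambda x: x * x                                   # integral over [0, 1] is 1/3
uniform = (lambda rng: rng.random(), lambda x: 1.0)   # pdf 1 on [0, 1]
linear = (lambda rng: math.sqrt(rng.random()), lambda x: 2.0 * x)  # pdf 2x

rng = random.Random(0)
est = balance_heuristic_estimate(f, [uniform, linear], 20_000, rng)
```

The combined estimator stays unbiased, and each per-sample contribution is bounded even where one technique's density is small, which is the robustness the combination strategy is designed to provide.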
It is formatted for two-sided printing, so don't worry if you see an occasional blank page. If you are having trouble downloading the files because of an unreliable network connection, please note that they are also available by ftp.

Last modified: January 22, 1998. Eric Veach
Adding an affine term to linear SVM / logistic regression objective function

Hello. I am currently working on a problem where I have to solve either an L2-regularized logistic regression or an L2-regularized linear SVM problem with an added affine term. So my problem, for example, is:

min_w { C * sum_i log(1 + exp(-y_i * (w . x_i))) + ||w||^2_2 + w . v }

where v is a constant vector. Of course this is a convex problem and can be solved with the usual methods, but I have to solve very many large problems of this type, so I would very much like to use a standard library such as … My question is, is there a way to transform the data x, the labels y, or the weighting factor C (perhaps into a different C_i for each instance), such that this problem will be equivalent to a standard logistic regression or hinge-loss SVM problem? asked Feb 08 '12 at 04:48

Plugging an efficient way of computing this function into a fast L-BFGS implementation shouldn't be that much slower than liblinear. Have you tried this?

No, I haven't tried. I was under the impression it wouldn't be as fast, given how optimized the solvers for large linear classification problems are now. I will try it now following your suggestion, and update on the results. (Feb 08 '12 at 14:48) ualex

I can't think of a way to turn it into something which can be processed by something like liblinear. However, you could easily solve the SVM version of this optimization problem with one of the general-purpose cutting plane optimization libraries. All you have to do is write code to compute an element of the subgradient (which in your case is just w + v - C * sum, over the examples with margin y_i * (w . x_i) < 1, of y_i * x_i) and the value of the objective. Then a cutting plane routine can find the optimal w. There is a CPA optimizer in Shogun and also one in dlib. I haven't used Shogun's version but I have used the one in dlib for a lot of problems (I'm also the author of dlib). Someone also mentioned using L-BFGS, which should work fine for the log loss.
However, a cutting plane method should work a lot better for the hinge loss. For some more background, the following paper is excellent reading:
• Bundle Methods for Regularized Risk Minimization, by Choon Hui Teo, S.V.N. Vishwanathan, Alex J. Smola, and Quoc V. Le; JMLR 11(Jan):311-365, 2010.
answered Feb 08 '12 at 17:49 Davis King

SVM-perf uses a cutting plane optimization method; see the SGD link I provided for a benchmark comparison. For large-scale problems these days, I don't know why anyone still uses batch algorithms.

Thanks a lot (for your double answer...)! I wasn't very familiar with general-purpose cutting plane algorithms, and I will definitely look into the libraries you mention. (Feb 09 '12 at 05:05) ualex

SGD is great for some problems. But I find that when I use it I often spend a lot of time running it multiple times and playing with parameters, or wondering to myself "maybe it didn't converge, I'll run it again and check...". On the other hand, the state-of-the-art batch methods leave no doubt about convergence and have no parameters, and so often require me to spend less of my own time even if they take more computer time. In this sense, I think the batch methods are more user friendly.

This is usually called elastic net regularization and you can implement it fairly easily and efficiently with stochastic gradient descent (assuming you remember some calculus!). answered Feb 08 '12 at 14:56 Travis Wolfe

Elastic net adds an L1 term, sum_i |w_i|, not a linear term sum_i v_i * w_i. The absolute value function matters a lot.

If I'm not mistaken, in elastic net you add an L1 term to the penalty. Here I'm doing something that in essence is simpler: just adding an affine term, without any other constraints. This makes it very amenable to SGD, but as I mentioned, I wish to use a fast existing library because I'm going to solve this problem tens of thousands of times, for hundreds of thousands of instances each time.
(Feb 08 '12 at 15:03) ualex

Sorry, I just assumed that you wanted an L1 penalty because it is far more common. If you are writing code in a fast language and you are willing to tune a parameter, then I would just implement it myself. I think you may have a hard time finding a library that does what you want, because it is uncommon.

My thought/hope was that perhaps a simple transformation (change of variables) would do the trick. For example, if we define w' = w + v/2, then ||w'||^2_2 = ||w||^2_2 + w . v + const. But now you have to adjust the loss term somehow... In the SVM / hinge-loss case I managed to solve it assuming that 1 + v . x_i * y_i > 0 for all i, an assumption that of course doesn't hold in general. (Feb 08 '12 at 16:46) ualex

One reason why this is not particularly useful advice is that liblinear deals with elastic net regularization. I'm skeptical about finding a nice closed-form solution for the hinge-loss and log-loss cases, precisely because these losses are nonlinear, so a linear reparametrization can change them in weird ways.
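A concrete (if naive) way to see that the extra affine term causes no structural difficulty: the objective stays strongly convex, so even plain full-batch gradient descent finds the unique optimum. The sketch below is our own, with made-up data; it is not liblinear and not tuned for speed, it just minimizes the logistic-loss objective from the question.

```python
import numpy as np

def objective(w, X, y, C, v):
    # C * sum_i log(1 + exp(-y_i * w.x_i)) + ||w||_2^2 + w.v
    margins = y * (X @ w)
    return C * np.sum(np.log1p(np.exp(-margins))) + w @ w + w @ v

def gradient(w, X, y, C, v):
    margins = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(margins))           # sigma(-margin)
    return -C * ((y * sig)[:, None] * X).sum(axis=0) + 2.0 * w + v

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # synthetic data
true_w = rng.normal(size=5)
y = np.sign(X @ true_w + 0.1 * rng.normal(size=200))
v = rng.normal(size=5)                            # the constant affine vector

w = np.zeros(5)
for _ in range(2000):                             # fixed small step: crude but safe here
    w -= 0.01 * gradient(w, X, y, 1.0, v)
```

Note on the substitution w' = w + v/2 mentioned in the thread: it folds the affine term into the regularizer, since ||w'||^2 = ||w||^2 + w . v + const, but the loss then depends on w' - v/2, so the problem still does not reduce to a standard library call; the loss term is exactly what blocks the reduction.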
Langhorne Calculus Tutor

Find a Langhorne Calculus Tutor

...I look forward to working with you soon! Ryan. I am proficient in Microsoft Excel and use it on a daily basis professionally. I can show you how to navigate through Excel, create functions and pivot tables, and analyze data.
16 Subjects: including calculus, geometry, algebra 1, algebra 2

...His industrial career included technical presentations and workshops, throughout North America and Europe, to multinational companies, to NATO, and to trade delegations from China and Russia. In particular, he was proud to be part of the NASA Space Shuttle program and the development of new-generation jet engines by the General Electric Company. Dr.
10 Subjects: including calculus, GRE, algebra 1, GED

...Many have known me as Carl The Math Tutor. I have nearly completed a PhD in math (with a heavy emphasis towards the computer science side of math) from the University of Delaware. I have 20+ years of solid experience tutoring college-level math and theoretical computer science, having mostly financed my education that way.
11 Subjects: including calculus, statistics, ACT Math, precalculus

I have over 15 years of experience teaching and tutoring physics, and have a PhD. I am very patient and can teach at all levels - from high school through college. I believe in active learning - using many examples, pictures, and simulations to illustrate key concepts while making it fun to learn as well.
15 Subjects: including calculus, physics, geometry, algebra 1

...Along with helping in one-on-one and group environments in homes and schools, I got to tutor inmates at the Jones Farm Prison in Trenton, NJ who were working towards their GEDs. This has taught me to adapt to particular situations requiring special attention, and alter my style according to the ...
26 Subjects: including calculus, chemistry, physics, statistics
Summary: Finding and counting given length cycles
Noga Alon, Raphael Yuster, Uri Zwick
We present an assortment of methods for finding and counting simple cycles of a given length in directed and undirected graphs. Most of the bounds obtained depend solely on the number of edges in the graph in question, and not on the number of vertices. The bounds obtained improve upon various previously known results.
1 Introduction
The main contribution of this paper is a collection of new bounds on the complexity of finding simple cycles of length exactly k, where k ≥ 3 is a fixed integer, in a directed or an undirected graph G = (V, E). These bounds are of the form O(E^{a_k}) or of the form O(E^{a_k} · d(G)^{b_k}), where d(G) is the degeneracy of the graph (see below) and the exponents depend only on k. The bounds improve upon previously known bounds when the graph in question is relatively sparse or relatively degenerate. We let C_k stand for a simple cycle of length k. When considering directed graphs, a C_k is assumed to be directed. We show that a C_k in a directed or undirected graph G = (V, E), if one exists, can be found in O(E^{2-2/k}) time, if k is even, and in O(E^{2-2/(k+1)}) time, if k is odd. For finding triangles
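As a small concrete illustration (the standard edge-iteration triangle count, shown here for orientation; it is not the paper's algorithm, and the graph is assumed simple and undirected):

```python
# Count triangles (C_3) by intersecting neighbor sets over the edges.
# For a simple undirected graph given as a list of unique edges, this
# is the classic O(E^{3/2})-flavored approach for sparse graphs.
def count_triangles(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for u, v in edges:
        # each triangle {u, v, w} is seen once per edge, i.e. 3 times
        total += len(adj[u] & adj[v])
    return total // 3

# K4 has C(4,3) = 4 triangles
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```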
What exactly do gluProject and gluUnProject do? [Archive] - OpenGL Discussion and Help Forums 07-15-2003, 04:32 PM Greetings all, I was reading up on the two functions in the subject line and slapped together a little MFC test program that's on the site in my sig. What exactly is gluUnProject "unprojecting"? I know my mouse arithmetic isn't perfect yet, but the numbers it's returning don't make any sense - they're not affected by whether or not the mouse is actually pointing at the cube. So what exactly do these functions do and when might you want to use them? Pointers appreciated! http://daltxcoltsfan.tripod.com/OpenGL/OpenGL.htm (please pardon the popups)
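What the pair does can be sketched without OpenGL at all: gluProject runs object coordinates through the modelview and projection matrices, does the perspective divide, and maps normalized device coordinates to window coordinates; gluUnProject applies the exact inverse. A minimal pure-Python sketch of that math (the matrix names and the orthographic example are illustrative, not from the post):

```python
# Pure-Python sketch of the gluProject / gluUnProject math (no OpenGL).
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_inv(m):
    # Gauss-Jordan elimination on the augmented matrix [m | I]
    n = 4
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def project(obj, modelview, projection, viewport):
    # object coords -> clip coords -> NDC -> window coords
    x, y, z, w = mat_vec(mat_mul(projection, modelview), list(obj) + [1.0])
    x, y, z = x / w, y / w, z / w
    vx, vy, vw, vh = viewport
    return [vx + vw * (x + 1) / 2, vy + vh * (y + 1) / 2, (z + 1) / 2]

def unproject(win, modelview, projection, viewport):
    # window coords -> NDC -> apply inverse(P*M) -> object coords
    vx, vy, vw, vh = viewport
    ndc = [2 * (win[0] - vx) / vw - 1,
           2 * (win[1] - vy) / vh - 1,
           2 * win[2] - 1,
           1.0]
    x, y, z, w = mat_vec(mat_inv(mat_mul(projection, modelview)), ndc)
    return [x / w, y / w, z / w]

# Example: identity modelview, simple orthographic projection of [-2, 2]^3
I4 = [[float(i == j) for j in range(4)] for i in range(4)]
ORTHO = [[0.5, 0.0, 0.0, 0.0],
         [0.0, 0.5, 0.0, 0.0],
         [0.0, 0.0, -0.5, 0.0],
         [0.0, 0.0, 0.0, 1.0]]
VIEW = (0, 0, 800, 600)
win = project([1.0, -0.5, 0.25], I4, ORTHO, VIEW)
back = unproject(win, I4, ORTHO, VIEW)
```

In these terms, the numbers gluUnProject returns are just the object-space point that maps to the given window coordinates at the chosen winz depth; they depend only on the transforms, not on whether anything (like the cube) is actually drawn under the cursor, which would explain the behavior described above.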
Wolfram Demonstrations Project A Double Exponential Equation Since and , there are points on the graphs of and where . These graphs are the special cases of where and . All points with can be found as intersections of the graph with the lines with slope . In this case, parametric equations in terms of have simple formulas. The graph of is black. The graph of interest, where , is blue for and red for , and is the graph of a function . The intersection points with and , for , and corresponding points on , are plotted. It is interesting to see that when is varied between 0 and 2, the graph of bows from concave up to concave down, and appears to be a line segment from to for some . The graphs of and are shown to help you decide whether the graph of for this really is straight. The special satisfies . The case is especially interesting because then the equation is equivalent to , which has a solution and . (The slider for can take values from -2 to 5.) Assume . Conveniently, iff iff iff . Thus, is on the graph of where iff and , where . Let and . Then and , and are parametric equations for the graph of . Since the graph of is symmetric with respect to , and . Parametric equations for the graphs of and are obtained by differentiating the identity . They are , , and , . Where does the graph of meet the line ? Two ways to determine it are 1) observe it is the point such that , where , and 2) compute using L'Hospital's rule. You will find . What are the domain and range of ? It is an interesting exercise in L'Hospital's rule to determine that and when and and when . By symmetry, the domain and range of are when and when .
[FOM] Roth's Theorem; Liouville numbers Timothy Y. Chow tchow at alum.mit.edu Tue Apr 18 11:50:15 EDT 2006 Stephen G Simpson <simpson at math.psu.edu> > Earlier this semester, in an introductory course on foundations of > mathematics, I naively assigned students the following problem: > Prove that the function f(n) = the nth digit of pi is primitive > recursive. > This turned out to be harder than I expected. The proof that I > eventually came up with uses a result due to K. Mahler in the 1950s. > Can anyone here supply an alternative proof that doesn't involve such > heavy number theory? Ah yes, a classic chestnut! This sometimes shows up as the puzzle: how can you make money betting on the next n digits of pi, that haven't been computed yet? That is, what do we know about the digits of pi that distinguishes them from a "truly random" sequence? The basic issue is that of an upper bound on the length of a consecutive string of 9's in the decimal expansion of pi. This is unavoidably going to lead to questions of irrationality measures for pi and hence to some fairly sophisticated number theory. Maybe if all you want is a primitive recursive bound then Mahler's proof can be simplified, but I've not heard of anyone doing that. If you choose the base to be a power of 2, then perhaps a simpler proof using a Bailey-Borwein-Plouffe-type formula is possible. However, I don't see how to do this immediately, and a quick look at the original BBP paper (Math. Comp. 66 (1997), 903-913) reveals that they skirt this issue themselves. They write: "There is always a possible ambiguity when computing a digit string base b in distinguishing a sequence of digits a(b-1)(b-1)(b-1) from (a+1)000. In this particular case we consider either representation as an acceptable computation. In practice this problem does not arise." More information about the FOM mailing list
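For what it is worth, the BBP formula mentioned above is easy to play with directly. A sketch using exact rational arithmetic (so the a(b-1)(b-1)... / (a+1)000 ambiguity the authors mention cannot arise for a short digit prefix, given enough terms):

```python
# Hexadecimal digits of pi from the Bailey-Borwein-Plouffe formula:
#   pi = sum_{k>=0} 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
# Exact rational arithmetic; truncating after n terms leaves an error
# below 16^-(n-1), so 20 terms is plenty for 10 hex digits.
from fractions import Fraction

def pi_hex_digits(ndigits, nterms=20):
    s = sum(Fraction(1, 16 ** k) *
            (Fraction(4, 8 * k + 1) - Fraction(2, 8 * k + 4)
             - Fraction(1, 8 * k + 5) - Fraction(1, 8 * k + 6))
            for k in range(nterms))
    frac = s - 3                    # fractional part of pi
    digits = []
    for _ in range(ndigits):
        frac *= 16
        d = int(frac)
        digits.append("0123456789ABCDEF"[d])
        frac -= d
    return "".join(digits)

# pi = 3.243F6A8885A308D3... in hexadecimal
```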
Steiner's Porism
Internet Explorer users should click empty spaces to start the applet.
Take 2 circles, one inside the other (the red ones in the applet). Try to fill the gap between them with a chain of sequentially tangential circles (the black ones in the applet). Steiner found that if you could not close the chain for a particular circle pair starting at one spot, you never could from any starting spot (button 5+). However, if you could close it for a particular circle pair, it works from any starting spot (buttons 4, 6 and 10). It is obvious why this is so if the starting circles are concentric, but not so if they are not concentric. The reason is that every "Steiner circle set" can be generated by a geometric transformation called an inversion from a starting pair of concentric circles with filler circles of constant diameter. Under geometric inversions, circles remain circles and touch points remain touch points. You can see the concentric circles and the filler circles in light gray if you click [Show Inversion]; the circle of inversion is green. It is clearest with 10 circles showing.
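The inversion map itself is simple enough to check numerically. A sketch (the sample circle and its expected image are a worked example, not taken from the applet):

```python
# Inversion in a circle with center c and radius r:
#   p  ->  c + r^2 * (p - c) / |p - c|^2
# Circles not through c map to circles; verified numerically below.
import math

def invert(p, c=(0.0, 0.0), r=1.0):
    dx, dy = p[0] - c[0], p[1] - c[1]
    s = r * r / (dx * dx + dy * dy)
    return (c[0] + s * dx, c[1] + s * dy)

# Sample the circle of radius 1 about (3, 0); under inversion in the unit
# circle its image is the circle of radius 1/8 about (3/8, 0).
image = [invert((3 + math.cos(t), math.sin(t)))
         for t in (k * 2 * math.pi / 12 for k in range(12))]
```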
How Not to Do Message Integrity, featuring CBC-MAC In my last cryptography post, I wrote about using message authentication codes (MACs) as a way of guaranteeing message integrity. To review briefly, most ciphers are designed to provide message confidentiality – which means that no one but the sender and the intended receiver can see the plain-text of the message. But ciphers that provide confidentiality don't necessarily make any guarantees that the message received is exactly the message that was sent. There are a good number of cryptographic attacks that work by altering the message in transit, and depending on the cipher, that can result in a variety of undesirable results. For example, if you use DES encryption with the ECB mode of operation, you can insert new blocks anywhere in a message that you want. By using a replay attack (where you take encrypted blocks from other messages using the same encryption, and resend them), an attacker can alter your messages, and you won't be able to detect it. So in addition to just confidentiality, we need to provide integrity. What does integrity really mean? Basically, it expands the definition of the decryption function. Written as a function signature, confidential message decryption is a function decrypt : ciphertext × key → plaintext. With message integrity, we add the option that decrypt can return a result saying that the message is invalid: decrypt[integ] : ciphertext × key → (plaintext | REJECT). In the last post, I explained the basic concept behind integrity schemes: you add an additional chunk of data to the message, which summarizes the message; any change to the message will (with a very high degree of probability) result in a change in the authentication code; so if the message is altered, that change will be detected, because the authentication code will not match the rest of the message. The idea of the message authentication code seems quite simple. 
We know lots of pseudo-random functions that can do a good job as MAC functions. But like so many things in cryptography, the fact that the concept is simple doesn't mean that the actual application of that concept is simple: it's incredibly easy to get it disastrously wrong, even when all of the building blocks are right. For example, let's look at a very simple (and very weak) integrity system, called CBC-MAC. In CBC-MAC, you compute an authentication code by running your block cipher on the message with an initialization vector set to 0. The output from encrypting the last block is your MAC. The reason that this works is fairly simple – CBC keeps pushing the result from the previous encryption step through to the next one – so the output from the final block conceptually includes information from all of the other blocks. So you've got an authentication code whose security is, roughly, equivalent to the security of the block cipher used in CBC mode. CBC-MAC isn't a bad mode of operation. It's got serious flaws – the biggest one is that there are a number of attacks against it if it's used for variable length messages. (And you can't fix it just by adding a message length indicator to the message, it's deeper than that.) But CBC is a pretty good integrity mode for fixed-length messages, and there's a widely used variant, called CMAC, which works for variable length messages. So how can this go wrong? What do you use as the password to the MAC computation? CBC is a secure cryptographic mode, so why not just use the same password? Lots of novices (and even some not-so-novice folks!) have implemented CBC-MAC where the integrity and confidentiality modes used the same key. If you compute the MAC using the same key as the message, then you're open to an attack which completely voids the integrity guarantee! Imagine a message M, which is divided into n blocks: (M[1], …, M[n]). For convenience, we'll treat the IV as if it were ciphertext block 0.
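As a concrete sketch of the CBC-MAC construction just described (a real system would use AES; here a toy 16-byte SHA-256-based PRF stands in for the block cipher, an assumption made purely for illustration, and one that suffices because CBC-MAC only ever runs the cipher in the forward direction):

```python
# CBC-MAC: run the "block cipher" in CBC mode with a zero IV and keep
# only the final ciphertext block as the authentication code.
import hashlib

BLOCK = 16

def toy_cipher(block, key):
    # toy stand-in for a block cipher (NOT a real cipher)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(blocks, key):
    chain = bytes(BLOCK)                       # IV = 0
    for b in blocks:
        chain = toy_cipher(xor(b, chain), key)
    return chain
```

Deterministic for a given key and message, and any change to any block changes the result with overwhelming probability; as the post goes on to explain, this is only safe for fixed-length messages.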
Then using CBC with a key K, M encrypts to ciphertext (C[1], …, C[n]), where C[i] = Encrypt(M[i] ⊕ C[i-1], K), plus a CBC-MAC integrity block. If you look at the CBC-MAC block, if it's using the same key as the main encryption, then the value of the MAC is the result of computing Encrypt(M[n] ⊕ C[n-1], K). That value is exactly the same as C[n]! So it's trivial to fake the MAC. If you use CBC-MAC with the same encryption key as you used to encrypt the message, you're wasting your time: you might as well not bother with the MAC at all, because you've got no integrity guarantee. But that's an amazingly common mistake. A huge number of implementations of supposedly secure cryptosystems that claim to guarantee integrity have actually used CBC-MAC with one key. I mentioned above that CBC-MAC is only safe for fixed-length messages. The reason for that is closely related to the trick I described above. Basically, the nature of CBC ultimately means that given a desired MAC value, it's not hard to generate a string to append to a message which will result in it having that MAC. Suppose we know that the message M will generate the MAC T[M], and we know that the message we want to replace it with, N, has the MAC T[N]. Then we can easily generate a string N′ such that MAC(M followed by N′) = T[N]. All we do is XOR the first block of N with T[M], and append the result. The new message is (M[1], …, M[n], (N[1] ⊕ T[M]), N[2], …, N[n]) – that is, we can remove the MAC from the first message, use it to glue the two messages together, and tack T[N] on to the message as the MAC. T[N], the MAC computed for N alone, will be the MAC for the combined message. It's easy to do that if we're allowed to change the length of the message. If the receiver doesn't know how long the message is going to be, then they can't count on its integrity using CBC-MAC.
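The splice attack described above can be run end to end. A self-contained sketch (again with a toy 16-byte SHA-256-based PRF standing in for a real block cipher, and made-up message contents; the structure of the forgery is exactly the (M[1], …, M[n], N[1] ⊕ T[M], N[2], …, N[n]) construction):

```python
# Demonstrate: appending N with its first block XORed with T[M] onto M
# yields a message whose CBC-MAC equals T[N].
import hashlib

BLOCK = 16

def toy_cipher(block, key):
    # toy stand-in for a block cipher (NOT a real cipher)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(blocks, key):
    chain = bytes(BLOCK)                       # IV = 0
    for b in blocks:
        chain = toy_cipher(xor(b, chain), key)
    return chain

key = b'secret-mac-key!!'                      # 16-byte illustrative key
M = [b'pay alice $100..', b'ref 0000000001..']
N = [b'pay mallory $999', b'ref 0000000002..']

t_m = cbc_mac(M, key)                          # attacker has seen MAC(M)
t_n = cbc_mac(N, key)                          # attacker has seen MAC(N)

# Forged message: M, then N with its first block XORed with T[M].
# After processing M the chaining value is T[M], so the cipher input for
# the next block is (N[1] xor T[M]) xor T[M] = N[1], and the rest of the
# computation replays MAC(N) exactly.
forged = M + [xor(N[0], t_m)] + N[1:]
```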
Next time, I’ll look at a variation of CBC-MAC that provides integrity for variable-length messages using 2 keys – one for confidentiality, and one for integrity. Then I’ll go on to a two-pass single-key mode of operation that provides both confidentiality and integrity. 1. #1 Nobody Important October 28, 2008 Hey Mark, Just a suggestion, but you should really go back through all these posts and tag them with “Cryptography” or something so that they’re easy to dig up (assuming SB supports tagging, of course).
Five Properties: basic proofs April 22nd 2009, 08:39 PM #1 Jan 2009 Five Properties: basic proofs I'm not expecting anyone to answer all of these, but hopefully someone can do one for me. It seems pretty straightforward and I wouldn't be posting if I had time to talk to my prof. before my exam tomorrow. Again, only using the 5 rules . (1) For any real numbers a and b and any positive real number c. a < b => ac < bc (2) For any real number a, a is positive <=> -a is negative (3) For any real numbers a, b, c if a < b and c is negative, then ac > bc (4) For any real numbers a and b ab is positive <=> a and b are both positive or both negative (5) -1 < 0 < 1 USING ONLY THESE RULES (a) a^2>=0 for any a (b) For any real number a, if a is positive, then 1/a is positive. (c) For any positive real numbers a and b, if a < b, then 1/a > 1/b Thanks guys. Hello glover_m I'm not expecting anyone to answer all of these, but hopefully someone can do one for me. It seems pretty straightforward and I wouldn't be posting if I had time to talk to my prof. before my exam tomorrow. Again, only using the 5 rules . (1) For any real numbers a and b and any positive real number c. a < b => ac < bc (2) For any real number a, a is positive <=> -a is negative (3) For any real numbers a, b, c if a < b and c is negative, then ac > bc (4) For any real numbers a and b ab is positive <=> a and b are both positive or both negative (5) -1 < 0 < 1 USING ONLY THESE RULES (a) a^2>=0 for any a (b) For any real number a, if a is positive, then 1/a is positive. (c) For any positive real numbers a and b, if a < b, then 1/a > 1/b Thanks guys. (a) If $a = 0, a^2 = 0$. Otherwise, using Rule (4), $\forall \,a , b \in \mathbb{R},\, (b = a)\Rightarrow (a$ and $b$ are both positive or both negative) $\Rightarrow aa = a^2$ is positive. (b) $(a \cdot \frac1a = 1 > 0)$ from (5) $\Rightarrow a$ and $\frac1a$ are both positive or both negative, using Rule 4, with $b = \frac1a$. So if $a>0, \frac1a>0$. 
(c) $a>0 \Rightarrow \frac1a > 0$, from (b). So $0<a<b \Rightarrow a\cdot \frac1a < b\cdot \frac1a$, using Rule (1) with $c = \frac1a$ $\Rightarrow 1 < b\cdot\frac1a$ Also $b>0 \Rightarrow \frac1b>0$ from (b), so $1\cdot\frac1b < \frac1b\cdot b\cdot \frac1a$, using (1) $\Rightarrow \frac1b < \frac1a$ April 23rd 2009, 05:24 AM #2
Fulton, MD Algebra 2 Tutor Find a Fulton, MD Algebra 2 Tutor ...As a former engineering student, I know that concepts in math and math related subjects build one over the other. Hence, I make sure a student understands a given material very well before moving on to the next. If I find that a student has missed a fundamental concept in the past, I go back and address it. 14 Subjects: including algebra 2, chemistry, calculus, physics ...These students range from high school freshmen to adults returning to college after focusing on their careers for many years. I've helped them with Latin, SAT prep, English, and creative writing. I love teaching, and I am committed to helping my students learn.I was the layout editor of my university's literary magazine for two years. 22 Subjects: including algebra 2, English, reading, writing ...My tutoring method relies heavily on organization, confidence building, and study strategies. Throughout my college experience, I was a double major (math and political science) and varsity athlete with extracurricular activities and a part-time job. I learned that the best way to juggle everything successfully was to stay organized and study efficiently. 17 Subjects: including algebra 2, English, reading, calculus ...I have had prior experience tutoring college bound students in both Math and Reading for the TOEFL and exam. I have had prior experience tutoring college bound students in both Math and Reading for the SAT, ACT and GMAT exams. I excel at Math, I am apt at tutoring Prealgebra, Algebra 1 & 2, Geometry and Trigonometry. 26 Subjects: including algebra 2, reading, SAT math, GED ...Applications of Integrals C. Fundamental Theorem of Calculus D. Techniques of Antidifferentiation E. 
18 Subjects: including algebra 2, physics, calculus, Java
Implicit & Logarithmic differentiation problems. October 23rd 2012, 10:21 PM #1 Sep 2012 #1. Find the derivative of: $e^{xy}=\sin(y^2)$ I need to use 2 chain rules on $\ln\sin(y^2)$ I'm sure there is something wrong in there #2. Find the derivative using logarithmic differentiation: Now I am unsure of what to do. Can I, and if so should I, rewrite the right side as: Or should I use a combination of the quotient, product and multiple chain rules on what I had before: You don't have to show me any work past this point, I would like to figure this out on my own, I just need a push in the right direction. Re: Implicit & Logarithmic differentiation problems. #2. Find the derivative using logarithmic differentiation: Now I am unsure of what to do. Can I, and if so should I, rewrite the right side as: Or should I use a combination of the quotient, product and multiple chain rules on what I had before: You don't have to show me any work past this point, I would like to figure this out on my own, I just need a push in the right direction. You were on the right track: $\ln y=\ln\frac{x(x+1)^3}{(3x+1)^2} = \underbrace{\ln(x)+3\ln(x+1)}_{numerator}-\overbrace{2\ln(3x+1)}^{denominator}$ Now the differentiation of the RHS is quite simple. Re: Implicit & Logarithmic differentiation problems. Re: Implicit & Logarithmic differentiation problems. #1. Find the derivative of: $e^{xy}=\sin(y^2)$ I need to use 2 chain rules on $\ln\sin(y^2)$ I'm sure there is something wrong in there #2. Find the derivative using logarithmic differentiation: Now I am unsure of what to do. Can I, and if so should I, rewrite the right side as: Or should I use a combination of the quotient, product and multiple chain rules on what I had before: You don't have to show me any work past this point, I would like to figure this out on my own, I just need a push in the right direction. I think in Question 1, attempting logarithmic differentiation is a waste of time.
\displaystyle \begin{align*} e^{x\,y} &= \sin{\left( y^2 \right)} \\ \frac{d}{dx} \left( e^{x\,y} \right) &= \frac{d}{dx}\left[ \sin{\left( y^2 \right)} \right] \\ e^{x\,y} \, \frac{d}{dx} \left( x\, y\right) &= \frac{d}{dy}\left[ \sin{\left( y^2 \right)} \right] \frac{dy}{dx} \\ e^{x\,y} \left( x\,\frac{dy}{dx} + y \right) &= 2y \cos{\left( y^2 \right)} \, \frac{dy}{dx} \\ x\, e^{x\,y} \, \frac{dy}{dx} + y \, e^{x\,y} &= 2y\cos{\left( y^2 \right)} \, \frac{dy}{dx} \\ y\, e^{x\,y} &= 2y\cos{\left( y^2 \right)} \, \frac{dy}{dx} - x\, e^{x\,y} \, \frac{dy}{dx} \\ y\, e^{x\,y} &= \left[ 2y\cos{\left( y^2 \right)} - x\, e^{x\,y} \right] \frac{dy}{dx} \\ \frac{y\, e^{x\,y}}{2y\cos{\left( y^2 \right)} - x \, e^{x\,y} } &= \frac{dy}{dx} \end{align*} October 23rd 2012, 10:45 PM #2 October 23rd 2012, 11:01 PM #3 Sep 2012 October 24th 2012, 05:12 AM #4
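For problem #2, the result of the logarithmic differentiation can also be sanity-checked numerically (the sample point is chosen arbitrarily):

```python
# Check that logarithmic differentiation of y = x(x+1)^3 / (3x+1)^2,
# which gives y'/y = 1/x + 3/(x+1) - 6/(3x+1), agrees with a central
# difference.
def f(x):
    return x * (x + 1) ** 3 / (3 * x + 1) ** 2

def f_prime(x):
    return f(x) * (1 / x + 3 / (x + 1) - 6 / (3 * x + 1))

x0, h = 2.0, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
```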
petsc-3.4.4 2014-03-13

MatGetSubMatrices

Extracts several submatrices from a matrix. If submat points to an array of valid matrices, they may be reused to store the new submatrices.

#include "petscmat.h"
PetscErrorCode MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[])

Collective on Mat

Input Parameters
mat - the matrix
n - the number of submatrixes to be extracted (on this processor, may be zero)
irow, icol - index sets of rows and columns to extract (must be sorted)
scall - either MAT_INITIAL_MATRIX or MAT_REUSE_MATRIX

Output Parameter
submat - the array of submatrices

MatGetSubMatrices() can extract ONLY sequential submatrices (from both sequential and parallel matrices). Use MatGetSubMatrix() to extract a parallel submatrix. Currently both row and column indices must be sorted to guarantee correctness with all matrix types. When extracting submatrices from a parallel matrix, each processor can form a different submatrix by setting the rows and columns of its individual index sets according to the local submatrix desired. When finished using the submatrices, the user should destroy them with MatDestroyMatrices(). MAT_REUSE_MATRIX can only be used when the nonzero structure of the original matrix has not changed from that last call to MatGetSubMatrices(). This routine creates the matrices in submat; you should NOT create them before calling it. It also allocates the array of matrix pointers submat. For BAIJ matrices the index sets must respect the block structure, that is if they request one row/column in a block, they must request all rows/columns that are in that block. For example, if the block size is 2 you cannot request just row 0 and column 0.

Fortran Note
The Fortran interface is slightly different from that given below; it requires one to pass in as submat a Mat (integer) array of size at least m.
See Also MatDestroyMatrices(), MatGetSubMatrix(), MatGetRow(), MatGetDiagonal(), MatReuse Index of all Mat routines Table of Contents for all manual pages Index of all manual pages
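The extraction semantics can be illustrated apart from PETSc entirely. A plain-Python sketch (not PETSc code; the function and names are illustrative only): for each pair (irow[k], icol[k]) of index sets, submat[k] is the matrix of entries at those rows and columns.

```python
# Sketch of the MatGetSubMatrices semantics on a dense list-of-lists
# matrix: one submatrix per (row index set, column index set) pair.
def get_sub_matrices(mat, irows, icols):
    return [[[mat[i][j] for j in icol] for i in irow]
            for irow, icol in zip(irows, icols)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
# Two requests: (rows {0,2}, cols {0,1}) and (row {1}, col {2})
subs = get_sub_matrices(A, irows=[[0, 2], [1]], icols=[[0, 1], [2]])
```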
Hello friends, I am using Excel 2007. I have created two sheets (Sheet1 & DATA). In the DATA sheet: 20,000 rows of records in total (sorted: column "C" in order A to Z & column "B" newest to oldest). Column "C" is the code & "B" is the date. In Sheet1: I calculate customer-wise records, i.e. the quantity sold & the last date of transaction. For example, in cell F6 I enter the formula =SUMIF(DATA!$AI:$AI,":"&$A6&"::"&$F$2&":",DATA!$N:$N) In Sheet1: A6 is the code; F2, G2, H2, I2, J2 are the series. In the DATA sheet, column AI holds a key built from two columns (=":"&C2&"::"&Q2&":"), code & series, and column "N" is the quantity. The same formula is used in columns G6, H6, I6, J6. In cell K6 I use the formula: =INDEX(DATA!$B:$B,MATCH(1,INDEX(($A6=DATA!$C:$C)*(Sheet1!$K$2=DATA!$Q:$Q),0),0)) In cell L6 I use the formula: =INDEX(DATA!$B:$B,MATCH(1,INDEX(($A6=DATA!$C:$C)*(Sheet1!$L$2=DATA!$Q:$Q),0),0)) In cell M6 I use the formula: =INDEX(DATA!$B:$B,MATCH(1,INDEX(($A6=DATA!$C:$C)*(Sheet1!$M$2=DATA!$Q:$Q),0),0)) In cell N6 I use the formula: =INDEX(DATA!$B:$B,MATCH(1,INDEX(($A6=DATA!$C:$C)*(Sheet1!$N$2=DATA!$Q:$Q),0),0)) In cell O6 I use the formula: =INDEX(DATA!$B:$B,MATCH(1,INDEX(($A6=DATA!$C:$C)*(Sheet1!$O$2=DATA!$Q:$Q),0),0)) In the DATA sheet, column B is the date, column C is the code, column Q is the series. In Sheet1: column A6 is the code; K2, L2, M2, N2, O2 are the series. Sheet1 has approximately 3,000 data rows. My problem is: when I apply a filter and select any criterion, i.e. > or < or any data record, recalculation takes more than 15 to 20 minutes; it works very slowly. Is there a way to make the calculation faster? I have attached a sample file with some records.
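For reference, the underlying computation is a one-pass grouping by the (code, series) key: total quantity and latest date per key. The slowness comes from thousands of SUMIF/INDEX formulas each rescanning 20,000 rows; grouped in a single pass the same work is trivial. A sketch of that aggregation in Python (not an Excel fix itself; the field names are illustrative):

```python
# Group 20,000-row data once by (code, series): total qty and latest date.
def summarize(rows):
    totals, last_date = {}, {}
    for r in rows:                     # r: dict with code, series, qty, date
        key = (r["code"], r["series"])
        totals[key] = totals.get(key, 0) + r["qty"]
        if key not in last_date or r["date"] > last_date[key]:
            last_date[key] = r["date"]
    return totals, last_date

data = [
    {"code": "A", "series": "S1", "qty": 5, "date": "2013-01-02"},
    {"code": "A", "series": "S1", "qty": 3, "date": "2013-03-01"},
    {"code": "A", "series": "S2", "qty": 7, "date": "2013-02-10"},
]
totals, last = summarize(data)
```

In Excel the analogous single-pass tool would be a pivot table over the (code, series) key rather than one lookup formula per cell.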
Hedwig Village, TX Houston, TX 77092 Master Tutor for Math, Chem, Phys, & Computers for HS and College ...am a master tutor of all levels of mathematics, statistics, chemistry, physics, computers, and computer programming (C, Java, Fortran, MatLab, Excel, Maple), and engineering for high school and college. Math classes include , precalculus, calculus, differential... Offering 10+ subjects including algebra 1 and algebra 2
Solving a Typical Rate-Time-Distance Problem Date: 09/02/2005 at 20:14:45 From: Daniela Subject: I need help with figuring out word problems I need help with figuring out word problems. I just don't understand them! I have this problem: You are taking part in a charity walk-a-thon where you can either walk or run. You walk at 4 km per hour and run 8 km per hour. The walk-a-thon lasts 3 hours. Money is raised based on the total distance you travel in 3 hours. Your sponsors donate $15 for each kilometer you travel. Write an expression that gives the total amount of money raised. Evaluate the expression if you walk for 2 hours and run for 1 hour. The most difficult part about problems like this is what do I do first? When I read the question, I don't understand what they are asking me and from there I give up. In this problem they ask me to find the expression, but I don't know how to set it up and I don't understand what is going to be my variable in the expression. Well, if I walk for 2 hours, then I must travel 8 km because I multiply 4 km/h by 2 hrs. Then, I did 8 km/h multiplied by 1 hour, and find that I will run 8 km. After that, I added the 8's together to find the whole distance and I got 16 km. Then, I took that 16 km and multiplied it by $15 to find the amount of money raised...I got $240. I'm pretty sure that this is the right answer, but now I don't know how to make it into an expression, so that when I substitute 2 hours and 1 hour for the variable I get $240. Date: 09/02/2005 at 21:34:37 From: Doctor Peterson Subject: Re: I need help with figuring out word problems Hi, Daniela. A lot of students have trouble with word problems. Let's see what I can do for you. Probably the first thing to do is just to make sure you understand the problem--what is going on, and how are the various parts related? 
I like to write down the data in some orderly fashion, like this:

  Walk: 4 km/hr
  Run: 8 km/hr
  Time: 3 hr (total)
  Money earned: $15 per km

Next, look for the unknown(s). What don't you know, that you would need to know in order to answer the question (in this case, how much money do you earn)? In the first part of the problem, you don't know how long you walked and how long you ran; you are asked to find an expression that will tell you what you earned IF you know these things. Now, we could either define TWO variables (for these two numbers), or we can notice that the two times are related by the fact that the total is 3 hr. If you walk 2 hr, you HAVE TO run 1 hr. So let's arbitrarily choose the time you walk as the unknown. (We could have chosen the running time instead.)

  Let w = time walked (in hours)

I always state the unit used, because that saves a lot of trouble! Now, you've already done one of the things I like to suggest: walk through the calculation you'd do IF you knew the values. Since they gave you specific values to try, you tried them before making an expression. That's fine: it gives you a practice run before you actually start the algebra. So what did you do? You took the time walked and multiplied it by 4 to get the distance walked, and did the same with the time and distance run. Then you added those to get the total distance, and multiplied by 15 to get the money earned. Great job there! I'll put that down in an orderly way:

            time   * speed    = distance
  walking   2 hr     4 km/hr    2*4 = 8 km
  running   1 hr     8 km/hr    1*8 = 8 km
  total                         16 km * $15/km = $240

Now all we have to do is to do just the same thing using variables. Remember that the walking time is w hours; the running time is what's left of the 3 total hours, or 3-w hours.

            time   * speed    = distance
  walking   w hr     4 km/hr    4w km
  running   3-w hr   8 km/hr    8(3-w) km

Do you see what I've done so far? I replaced your 2 hr and 1 hr with the expressions w and (3-w). 
Now I have to add those expressions, then multiply the sum by 15. Can you finish, and write up the final expression? Once you've got that, try replacing w with 2 and see if you get 240. That will be a good check, besides answering the last part of the problem.

If you have any further questions, feel free to write back.

- Doctor Peterson, The Math Forum
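As an aside (not part of the original exchange), the finished expression 15(4w + 8(3-w)) can be checked in a couple of lines of Python; the function name and defaults are mine:

```python
def money_raised(w, total_hours=3, walk_kmh=4, run_kmh=8, rate=15):
    """Dollars raised when walking w hours and running the rest."""
    distance = walk_kmh * w + run_kmh * (total_hours - w)  # km traveled
    return rate * distance                                 # $15 per km

print(money_raised(2))  # walk 2 hr, run 1 hr -> 240
```

Replacing w with 2 gives 15 * (8 + 8) = 240, matching the arithmetic done by hand above.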
124 A Manual for Design of Hot Mix Asphalt with Commentary

final mix design should avoid borderline values for aggregate specification properties that, when the blend is actually tested, might fail to meet requirements.

Aggregate Blending: Summary

One of the most important and complicated parts of the HMA mix design process is determining the appropriate aggregate blend to use for a given application. Various procedures are available, including the Bailey method, and several techniques described in mix design manuals published by the Asphalt Institute. Engineers and technicians comfortable with the methods they are currently using for proportioning aggregates for HMA mix designs should continue to use these methods. The procedure given in this manual is based on a few simple concepts relating aggregate blends to VMA.

In most cases, HMA mix designs are not designed from scratch. Instead, existing mix designs are modified by replacing aggregates or the binder or by changing the binder content and VMA. In these cases, the best guide for adjusting the aggregate proportions is the experience of the engineer or technician with the materials being used. When modifying existing mix designs, one or two aggregate blends are developed by modifying the blend used in the existing mix.
A trial-and-error approach is then used to refine the aggregate blend until the desired mix properties are achieved. In situations where an entirely new HMA mix design is to be developed, three initial trial blends are developed using dense/coarse, dense/dense, and dense/fine aggregate gradations. The design closest to meeting all requirements is then further refined by making additional trial blends, evaluating their properties, and modifying the aggregate gradation as needed.

An important part of the mix design process is determining the specification properties of the aggregate blends. For initial trial batches, specification properties can be estimated by using mathematical equations and the specification property values for the individual aggregates. This is done automatically in HMA Tools (and many similar spreadsheets and computer programs). However, the specification properties for the final mix design should be verified by actual measurements on the aggregate blend.

Step 9. Calculate Trial Mix Proportions by Weight and Check Dust/Binder Ratio

At this point in the HMA mix design, the amount of air voids, binder, and aggregate has been determined on a volume basis, and up to three different aggregate blends have been developed on a proportion-by-weight basis. Now, the overall mixture composition in percent by weight must be calculated and the dust/binder ratio checked to make sure it is within the specified values. If desired, the mixture composition by volume can also be determined. The following procedure and equations can be used to calculate mix proportions by weight and related mix properties.

First, calculate the overall aggregate bulk specific gravity:

  G_{sb} = \frac{P_{s1/A} + P_{s2/A} + P_{s3/A} + \cdots}{\frac{P_{s1/A}}{G_{sb1}} + \frac{P_{s2/A}}{G_{sb2}} + \frac{P_{s3/A}}{G_{sb3}} + \cdots}    (8-4)

where
  Gsb = overall bulk specific gravity for aggregate blend
  Ps1/A = volume % of aggregate 1 in aggregate blend
  Gsb1 = bulk specific gravity for aggregate 1
  Ps2/A = volume % of aggregate 2 in aggregate blend
  Gsb2 = bulk specific gravity for aggregate 2
  Ps3/A = volume % of aggregate 3 in aggregate blend
  Gsb3 = bulk specific gravity for aggregate 3

As discussed in Step 7, the volume percentage of the aggregate is simply 100% minus the target VMA. The weight percentages of binder and aggregate are then calculated using the following equations:

  P_b = \frac{V_b G_b}{V_{sb} G_{sb} + V_b G_b} \times 100\%    (8-5)

  P_s = \frac{V_{sb} G_{sb}}{V_{sb} G_{sb} + V_b G_b} \times 100\%    (8-6)

where
  Pb = total binder content, % by total mix weight
  Vb = total binder content, % by total mix volume
  Gb = binder specific gravity
  Vsb = aggregate content, % by total mix volume
  Gsb = overall bulk specific gravity of aggregate (Equation 8-4)
  Ps = total aggregate content, % by total mix weight

Then, calculate the effective asphalt binder content by weight:

  P_{be} = \frac{V_{be} G_b}{V_{sb} G_{sb} + V_b G_b} \times 100\%    (8-7)

where
  Pbe = effective binder content, % by total mix weight
  Vbe = effective binder content, % by total mix volume
  Gb = binder specific gravity
  Vsb = aggregate content, % by total mix volume
  Gsb = overall bulk specific gravity of aggregate (Equation 8-4)

Calculate the percent by weight of each aggregate:

  P_{s1} = P_s \, \frac{P_{s1/A}}{100}    (8-8)

where
  Ps1 = weight percent (by total mix) of aggregate 1 (or aggregate 2, 3, etc.)
  Ps = weight percent (by total mix) of combined aggregate, from Equation 8-6
  Ps1/A = weight percent (in aggregate blend) of aggregate 1 (or aggregate 2, 3, etc.)

If desired, the volume percent of the aggregates can also be calculated, but the equation is more complicated:

  V_{sb1} = \frac{(P_{s1}/G_{sb1})(100 - P_b)}{\frac{P_b}{G_b} + \frac{P_{s1}}{G_{sb1}} + \frac{P_{s2}}{G_{sb2}} + \frac{P_{s3}}{G_{sb3}} + \cdots}    (8-9)

where
  Vsb1 = volume % of aggregate 1 in total mix
  Ps1 = weight % of aggregate 1 in total mix
  Pb = weight % binder in total mix
  Gb = binder specific gravity
  Gsb1 = bulk specific gravity for aggregate 1
  Ps2 = weight % of aggregate 2 in total mix
  Gsb2 = bulk specific gravity for aggregate 2
  Ps3 = weight % of aggregate 3 in total mix
  Gsb3 = bulk specific gravity for aggregate 3

Calculate the percent of mineral dust (material finer than 0.075 mm) in the total mixture:

  P_{0.075} = \frac{P_{0.075/s1} P_{s1} + P_{0.075/s2} P_{s2} + P_{0.075/s3} P_{s3} + \cdots}{100}    (8-10)

where
  P0.075 = mineral dust content (material finer than 0.075 mm), percent by total mix weight
  P0.075/s1 = % passing the 0.075-mm sieve for aggregate 1
  Ps1 = weight percent (by total mix) of aggregate 1
  P0.075/s2 = % passing the 0.075-mm sieve for aggregate 2
  Ps2 = weight percent (by total mix) of aggregate 2
  P0.075/s3 = % passing the 0.075-mm sieve for aggregate 3
  Ps3 = weight percent (by total mix) of aggregate 3

Calculate the dust/binder ratio, using the effective asphalt binder content:

  D/B = \frac{P_{0.075}}{P_{be}}    (8-11)

where
  D/B = dust/binder ratio, calculated using effective binder content
  P0.075 = mineral dust content, % by total mix weight (Equation 8-10)
  Pbe = effective binder content, % by total mix weight (Equation 8-7)

The required range for dust/binder ratio is 0.8 to 1.6 for all mixtures larger than 4.75-mm NMAS. However, the specifying agency may reduce the requirements to a range of 0.6 to 1.2 if local materials and conditions warrant this change. For 4.75-mm NMAS mixtures, the required dust/binder ratio is 0.9 to 1.2, and this should not be modified. These requirements are similar to those given in the Superpave method, but the required and optional ranges are reversed; in the Superpave method, the required range is 0.6 to 1.2, but agencies can increase the requirement to 0.8 to 1.6.
Higher dust/binder ratios are desirable for several reasons. Perhaps most importantly, they help provide stiffness and rut resistance to the HMA. Higher dust/binder ratios also will tend to reduce the permeability of an HMA mixture, improving durability. However, it is possible that in some locations obtaining high dust/binder ratios might be prohibitively expensive, and the nature of the local materials might allow the design of HMA with good performance at lower dust/binder ratios. Because of the beneficial effects of high dust/binder ratios on rut resistance, if VMA requirements are increased above those given in Table 8-5, the dust/binder ratio requirement should not be reduced. Otherwise, the rut resistance of the resulting mixtures might, in some cases, be marginal. Table 8-12 summarizes the requirements for dust/binder ratio.

Table 8-12. Requirements for dust/binder ratio.

  Mix Aggregate NMAS, mm    Allowable Range for Dust/Binder Ratio, by Weight
  > 4.75                    0.8 to 1.6 (a)
  4.75                      0.9 to 2.0

  (a) The specifying agency may lower the allowable range for dust/binder ratio to 0.6 to 1.2 if warranted by local conditions and materials. The dust/binder ratio should, however, not be lowered if VMA requirements are increased above the standard values as listed in Table 8-5.

When including RAP in a mixture, the same principles described above are applied. RAP is composed of both binder and aggregate. The weight and volume of binder in the RAP must be added to the weight and volume of new binder added to a mixture. Similarly, the weight and volume of aggregate must be added to the weight and volume of new aggregate added to the mix. As will be discussed in Chapter 9, HMA Tools automatically performs the needed calculations when including RAP in an HMA mix design.

Example Problem 8-1. Calculation of Mix Composition

Table 8-13 presents the results of an example calculation of mixture composition for a trial batch.
The mixture is a 12.5-mm NMAS design, with a target air void content of 4% and a target VMA value of 15%. Column 1 describes the various mix components; this includes total binder, absorbed binder, and effective binder--this makes the relationship among these values clear. Column 2 gives the mix composition in percentage by volume, which is determined using the procedure described above. Column 3 lists the bulk specific gravity for the various components, while Column 4 lists apparent specific gravity values for the aggregates. Column 5 lists the aggregate contents as a percentage by weight of the aggregate blend.

Table 8-13. Example calculation of HMA mix composition by weight percentage from volume percentage and specific gravity values.

  (1)                        (2)          (3)        (4)        (5)         (6)
                             Percent by   Bulk       Apparent   Percent by  Percent by
                             Total Mix    Specific   Specific   Aggregate   Total Mix
  Mix Component              Volume       Gravity    Gravity    Weight      Weight
  Air                         4.00        ---        ---        ---          0.0
  Total Asphalt Binder       11.38        1.025      ---        ---          4.60
  Absorbed Asphalt Binder    -0.40        1.025      ---        ---         (0.16)
  Effective Asphalt Binder   10.98        1.025      ---        ---          4.44
  No. 7 Traprock             19.54        2.971      2.992      24          22.90
  Traprock screenings        24.47        2.867      2.893      29          27.67
  Manufactured sand          24.46        2.868      2.891      29          27.67
  Natural sand               13.74        2.642      2.676      15          14.31
  Mineral filler              2.80        2.588      2.629       3           2.86

  Note: Calculations may not agree exactly because of rounding.
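As a rough cross-check on Table 8-13 (this sketch is not part of the manual; variable names are mine, and small differences from the table are rounding), the weight percentages can be recomputed from the volume basis and specific gravities using Equations 8-4 through 8-8:

```python
# Aggregate blend: (name, percent by aggregate weight Ps_i/A, bulk specific gravity Gsb_i)
blend = [("No. 7 Traprock", 24, 2.971),
         ("Traprock screenings", 29, 2.867),
         ("Manufactured sand", 29, 2.868),
         ("Natural sand", 15, 2.642),
         ("Mineral filler", 3, 2.588)]

# Equation 8-4: overall bulk specific gravity of the aggregate blend
Gsb = sum(p for _, p, _ in blend) / sum(p / g for _, p, g in blend)

Vb, Gb = 11.38, 1.025   # total binder: % by total mix volume, and its specific gravity
Vsb = 100 - 15.0        # aggregate volume = 100% minus the target VMA (15%)

# Equations 8-5 and 8-6: binder and aggregate content, % by total mix weight
Pb = Vb * Gb / (Vsb * Gsb + Vb * Gb) * 100
Ps = Vsb * Gsb / (Vsb * Gsb + Vb * Gb) * 100
print(round(Gsb, 3), round(Pb, 2), round(Ps, 2))  # 2.846 4.6 95.4

# Equation 8-8: per-aggregate content, % by total mix weight
for name, p, _ in blend:
    print(name, round(Ps * p / 100, 2))
```

The recomputed binder content (4.60%) and the No. 7 Traprock weight (22.90%) agree with Columns 6 of Table 8-13.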
Alameda Trigonometry Tutor

Find an Alameda Trigonometry Tutor

...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I was trained in and provided materials for each of these topics. I often find, when working with my stu...
20 Subjects: including trigonometry, calculus, statistics, geometry

...This undergrad had dropped the course in the Fall due to a failing grade after the first midterm. After regularly working with me during the Spring semester, my undergrad tutee ended up with an A in the Statistics class. Also, as an undergrad, I was a Teaching Assistant for the intro to probability and statistics course at Caltech.
27 Subjects: including trigonometry, chemistry, physics, geometry

...It is essential when teaching calculus to make full use of graphs, even when problems can be solved exclusively with equations. There is frequent confusion between a function and its derivative function, and this confusion can be remedied only by lots of practice with graphs. Calculus classes now include some physics, particularly the physics of trajectories.
17 Subjects: including trigonometry, calculus, physics, geometry

...I also have organized astronomical star parties to educate interested persons about astronomy and navigating the night sky. Each year that I taught Astronomy/Earth Science, I took my students on a field trip to a major science center, including the Chabot Space and Science Center (CA), Goddard S...
32 Subjects: including trigonometry, reading, calculus, physics

...I most recently worked on detector development in the Space Sciences Lab associated with UC Berkeley and still continue some previous research with my group in Colorado at the Center for Astrophysics and Space Astronomy. In addition to formal training in education gained through coursework, I ha...
29 Subjects: including trigonometry, English, French, calculus
Homework Help - Conductors & Methods

October 16th, 2012, 05:56 AM

I'm in a Java online course. I am completely lost on a homework problem and cannot proceed, since I don't have my Java textbook to refer off of anymore (accidentally left it in another state). Any suggestions on how to start? I can't seem to find any help online. Here is the problem.

Write and fully test a class that represents rational numbers. A rational number can be represented as the ratio of two integer values, a and b, where b is not zero. The class has attributes for the numerator and denominator of this ratio. The ratio should always be stored in its simplest form. That is, any common factor of a and b should be removed. For example, the rational number 40/12 should be stored as 10/3. The class has the following constructors and methods:
* A default constructor that sets the rational number to 0/1
* A constructor that has parameters for the numerator and denominator and converts the resulting ratio to simplified form.
* simplify--a private method that converts the rational number to simplified form.
* getGCD(x, y)--a private static method that returns the largest common factor of the two positive integers x and y, that is, their greatest common divisor. For example, the greatest common divisor of 40 and 12 is 4.
* getValue--returns the rational number as a double value.
* toString--returns the rational number as a string in the form a/b.

//Here is what I currently have... like I said, it's hard for me to proceed without my book. How would I even begin? I am completely lost. Much help is appreciated!!

import java.util.Scanner;
public class rational

October 16th, 2012, 08:24 AM

Re: Homework Help - Conductors & Methods

First off, it's constructor, not conductor. And secondly, who needs a textbook when you have the tutorials?
The Java™ Tutorials October 16th, 2012, 01:20 PM Re: Homework Help - Conductors & Methods First, thanks to KevinWorkman for an excellent link. A favorite reference of mine. Second, please see the announcements page for the use of code tags and other forum tips. I think with a choice of the many search engines, and keywords "java hello world" you can get a bit more code than provided in the op.
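In case it helps a future reader, here is one possible sketch written straight from the spec quoted above (I've capitalized the class name per Java convention; this is not the course's official solution):

```java
public class Rational {
    private int numerator;
    private int denominator;

    // Default constructor: sets the rational number to 0/1
    public Rational() {
        this(0, 1);
    }

    // Stores the ratio in simplified form
    public Rational(int numerator, int denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("denominator must not be zero");
        }
        this.numerator = numerator;
        this.denominator = denominator;
        simplify();
    }

    // Private: divide out the greatest common factor and keep the sign on top
    private void simplify() {
        int gcd = getGCD(Math.abs(numerator), Math.abs(denominator));
        if (gcd > 0) {
            numerator /= gcd;
            denominator /= gcd;
        }
        if (denominator < 0) {
            numerator = -numerator;
            denominator = -denominator;
        }
    }

    // Private static: greatest common divisor, e.g. getGCD(40, 12) == 4
    private static int getGCD(int x, int y) {
        while (y != 0) {
            int r = x % y;
            x = y;
            y = r;
        }
        return x;
    }

    public double getValue() {
        return (double) numerator / denominator;
    }

    public String toString() {
        return numerator + "/" + denominator;
    }

    public static void main(String[] args) {
        System.out.println(new Rational(40, 12));        // 10/3
        System.out.println(new Rational());              // 0/1
        System.out.println(new Rational(1, 2).getValue()); // 0.5
    }
}
```

The Scanner import from the original stub isn't needed until you add interactive testing.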
How Many People Would Work in the Death Star? | Science Blogs | WIRED • By Rhett Allain • 05.02.13 | • 8:36 am | In case you haven’t seen it, here is my recent post on the ESA ATV blog. Basically, I calculate how many ATV launches we would need to supply a Death Star. Check it out. There was one thing that bothered me. In my estimation, I said that there could be 6 x 10^12 people working in the Death Star. That number is crazy high. But where did it come from? Basically, I looked at the people per volume on a Nimitz Class carrier and said that the Death Star would have the same density. Clearly, this was a bad idea. Estimating the People Density Here is a new idea. What if I look at the number of crew on different sized vessels? I suspect that the people volume density is not constant – but that is just a guess. I need some data on naval ships with the size and the number of crew. It makes me feel like Chekov looking for this stuff. Perhaps what I need is access to Jane’s Fighting Ships, but it seems I will have to do this the hard way with Wikipedia – starting with this page. Here is the data I gathered (as a spreadsheet). I went through the list of US Navel Wessels and tried to get one of each class. In order to calculate the volume, I had to make a couple of guesses. First, I had to guess the height of the ship. Wikipedia seems to list the draft (which would be the depth of the ship below water level – I think). So, I just kind of guessed at the height. Really, in my mind I pictured the ship as a rectangle. So, the height listed is my approximation of how tall the ship would be if it were squashed into a rectangular cube. I know you are ready to see the data. Well, here is my plot of the number of people in a vessel vs. the estimated volume of the vessel. I am quite disappointed at how linear this turned out. It would have been more fun to have some crazy logarithmic function or something. 
Really, I expected that the people density would be lower for larger vessels, but this says it is constant. The linear function that fits this data is: Interesting. I am surprised the y-intercept is so high. With this model, a tiny tub of a ship would still need 118 people in it. Well, it’s clear that this model doesn’t work for small ships but that last data point is what makes this look linear. I really need more data – but what has more crew than a Nimitz Class aircraft carrier? It would be nice to have some data in the middle (so in between 1,000 and 6,000 crew). The only thing that might fit would be a giant battleship like the Bismarck. Unfortunately, it looks like it would be right around the same range as the stuff I already have since it uses around 1,000 sailors. What if I plot the same data but remove the Nimitz Class carrier? Here is what that looks like. It’s still mostly linear – but the people density is a bit higher at 0.007 people per cubic meter. Also, this has a negative y-intercept of -16 people. I like the negative intercept. It says that if you want a ship for one person, it would need at least a volume of over 2000 m^3. What about the people density? This is actually a little bit higher than my previous estimate (I had used 0.003 people per cubic meter). I guess the only thing to say that is the Death Star requires more people than are available on Earth. It’s that simple. This is why we can not yet build the Death Star. Ok, one more point to be clear. I am making a crazy assumption here. I am assuming that this people density function is valid for the very large volumes of the Death Star. This is a crazy assumption, but I have no other data to use with my model. Really, what we need is to at least design the Death Star. That way we can know the answer to number of people it would need. Vessel Density While I have the data, how about a look at the density of different sized ships? Here is a plot of density as a function of length of the ship. 
From this it looks like a fairly constant vessel density except for a few with a much higher value. It's almost as if there are two kinds of vessels here. Yes, there are. These higher density values are for submarines. So, from this a density of 100 kg/m^3 seems reasonable. Using this density and a Death Star with a diameter of 160 km, I get a mass of 2 x 10^17 kg – a bit smaller than the estimate from Centives. If I use the smaller Death Star, I get a mass of 1.4 x 10^17 kg.
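A quick sketch (mine, not from the article) that plugs the fitted 0.007 people/m^3 slope and the 100 kg/m^3 hull density back into a 160 km sphere:

```python
import math

diameter_m = 160e3                                    # 160 km Death Star
volume = (4 / 3) * math.pi * (diameter_m / 2) ** 3    # ~2.1e15 m^3

crew = 0.007 * volume   # people, at 0.007 people per cubic meter
mass = 100 * volume     # kg, at 100 kg per cubic meter

print(f"volume ~ {volume:.1e} m^3, crew ~ {crew:.1e} people, mass ~ {mass:.1e} kg")
```

The crew estimate lands around 1.5 x 10^13 people, which is still far more than Earth's population, and the mass reproduces the 2 x 10^17 kg figure above.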
by Sarah Glaz I tell my students the story of Newton versus Leibniz, the war of symbols, lasting five generations, between The Continent and British Isles, involving deeply hurt sensibilities, and grievous blows to national pride; on such weighty issues as publication priority and working systems of logical notation: whether the derivative must be denoted by a "prime," an apostrophe atop the right hand corner of a function, evaluated by Newton's fluxions method, Δy/Δx; or by a formal quotient of differentials dy/dx, intimating future possibilities, terminology that guides the mind. The genius of both men lies in grasping simplicity out of the swirl of ideas guarded by Chaos, becoming channels, through which her light poured clarity on the relation binding slope of tangent line to area of planar region lying below a curve, The Fundamental Theorem of Calculus, basis of modern mathematics, claims nothing more. While Leibniz―suave, debonair, philosopher and politician, published his proof to jubilant cheers of continental followers, the Isles seethed unnerved, they knew of Newton's secret files, locked in deep secret drawers— for fear of theft and stranger paranoid delusions, hiding an earlier version of the same result. The battle escalated to public accusation, charges of blatant plagiarism, excommunication from The Royal Math. Society, a few blackened eyes, (no duels); and raged for long after both men were buried, splitting Isles from Continent, barring unified progress, till black bile drained and turbulent spirits becalmed. 
Calculus―Latin for small stones, primitive means of calculation; evolving to abaci; later to principles of enumeration advanced by widespread use of the Hindu-Arabic numeral system employed to this day, as practiced by algebristas―barbers and bone setters in Medieval Spain; before Calculus came the Σ (sigma) notion― sums of infinite yet countable series; and culminating in addition of uncountable many dimensionless line segments― the integral first to thirst for knowledge, at any price. That abstract concepts, applicable―at start, merely to the unseen unsensed objects: orbits of distant stars, could generate intense earthly passions, is inconceivable today; when Mathematics is considered a dry discipline, depleted of life sap, devoid of emotion, alive only in convoluted brain cells of weird scientific minds.
Stock market kurtosis over time

In the last decade we have observed an increase in computational power, information availability, speed of execution and stock market competition in general. One might think that, as a result, we are prone to larger shocks that occur faster than what was common in the past. I crunched some numbers and was surprised to see that this is not the case.

We can use the kurtosis as a proxy for extreme events. High kurtosis in a certain month means that the return distribution for that month was such that most observations were in the center and some were in the tails; the higher the kurtosis, the fatter the tail. Here I do not differentiate between positive and negative shocks.

Here is a figure of the monthly kurtosis, calculated from the S&P returns, over time, starting from 1993 until 2012. The lines correspond with the 90%, 95% and 99% quantiles of the series. We can see that this measure is quite steady over time with a few jumps that correspond with past extreme events. No upward trend is observed, and months with very high kurtosis do not appear to be more likely nowadays than in the past. This means that you can prepare yourself before losing your money, as opposed to just waking up to see your account, which was just fine the night before, now needs more collateral.

You can also try this exercise with semi-kurtosis (call it partial moment for academic ambiguity). We should not really care about the right tail when we calculate risk (if we are long, that is). Another idea is to check the ratio of the two semi-kurtoses. The positive side will represent "potential upward jumps" while the denominator (negative side) will counter that with "potential train through your stomach".
library(quantmod)
library(e1071)

tckr = c('SPY')  # the ETF on the S&P
end   = format(Sys.Date(), "%Y-%m-%d")  # yyyy-mm-dd
start = format(Sys.Date() - 365*20, "%Y-%m-%d")
dat1 = getSymbols(tckr[1], src = "yahoo", from = start, to = end, auto.assign = FALSE)
ret1 = as.numeric((dat1[,4] - dat1[,1]) / dat1[,1])  # intraday (open-to-close) returns
n = length(ret1)
time0 = index(dat1)
head(time0) ; tail(time0)
s = seq(1, n, 22)  # make it monthly
k = NULL
for (i in 1:(length(s)-1)) {
  k[i] = 3 + kurtosis(ret1[s[i]:s[(i+1)]])  # e1071 gives excess kurtosis, so add 3
}
time0[s[which(k > quantile(k, .95))]]  # which months
plot(k ~ time0[s[1:(length(s)-1)]], ty = "b", xlab = "Time", ylab = "Monthly Kurtosis",
     main = "Monthly Kurtosis over Time")
abline(a = quantile(k, .90), b = 0, lwd = 2, col = 2)
abline(a = quantile(k, .95), b = 0, lwd = 2, col = 3)
abline(a = quantile(k, .99), b = 0, lwd = 2, col = 4)

2 thoughts on "Stock market kurtosis over time"

1. In the code shown above, you seem to have taken the intraday return (close-open) for calculating the kurtosis for the month. Is this intentional and do we have to use that to calculate kurtosis? Or should we use the traditional close-close return for this?

2. Your question is good. I trade intra-day, so I naturally care more about the intra-day move (position is only open during trading hours). For over night positions it would indeed be better to use the more standard Close to Close and take it from there. Thanks for the comment.
The Two Trains and How Fast

Here's my approach to it but I can't find what's wrong:

Let time taken by the first train TO COMPLETE THE WHOLE JOURNEY be t1
Let time taken by the second train TO COMPLETE THE WHOLE JOURNEY be t2
Let the distance between London and Liverpool be d
Let the time when they had met each other be f

Because at the moment of passing each other the distance both trains have covered add up to d:
  f(d/t1) + f(d/t2) = d
Because Train1 takes one hour more after the passing each other moment:
  t1 = f + 1
Because Train2 takes four hours more after the passing each other moment:
  t2 = f + 4

I shall call the above Equation 1, 2 & 3 respectively.

From Equation 2: f = t1 - 1
From Equation 3: f = t2 - 4
Therefore, we can say that: t2 = t1 + 3

Putting t2 = (t1 + 3) and f = (t1 - 1) in Equation 1:
  (t1 - 1)(1/t1 + 1/(t1 + 3)) = 1

After fooling around with it you get:
  t1^2 - 2t1 - 3 = 0

Now according to the brute force method of Quadratic equations, I get:
  t1 = 3 (rejecting the negative root t1 = -1)

Ultimately we get the following things:
  t1 = 3, t2 = 6, f = 2

What are the mistakes (if any) I have committed in my above calculations?

Now the most important thing: The puzzle told us to find how much faster is one train running than the other and the answer given in the book is this: one train was running twice as fast as the other, since the ratio of speeds is sqrt(4/1) = 2, the square root of the ratio of the times taken after the meeting.

How can I come to the conclusion in the answer?

'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
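For what it's worth, a quick numerical check (mine, not from the thread) of the algebra in the post:

```python
import math

# Equation 1 with t2 = t1 + 3 and f = t1 - 1 reduces to t1^2 - 2*t1 - 3 = 0
t1 = (2 + math.sqrt(4 + 12)) / 2          # positive root of the quadratic
f, t2 = t1 - 1, t1 + 3

assert abs(f / t1 + f / t2 - 1) < 1e-12   # the meeting condition (Equation 1) holds
print(t1, t2, f)                          # 3.0 6.0 2.0

# Ratio of speeds: (d/t1) / (d/t2) = t2/t1, which equals sqrt(4/1),
# the square root of the ratio of the post-meeting times (4 hr and 1 hr)
print(t2 / t1, math.sqrt(4 / 1))          # 2.0 2.0
```

So the working in the post is consistent: the first train covers the route in 3 hours, the second in 6, and the first is twice as fast.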
Excel maths applications

Re: Excel maths applications

Good luck with your page.

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
module without using %

01-16-2008 #1
Registered User
Join Date: Jan 2007

I remember there is a way to calculate the remainder without using the % operator but can't seem to find it. Anyone know what it was?

Divide, and subtract (quotient*divisor)?

In what situation do you need any other means of getting the remainder than using the "%"?

When you are asked to implement modulus without using %?

I might be wrong. Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope).
That's a multiply and a shift, instead of a division, making it definitely faster. You wouldn't do this for powers of two, though, as then you can just use a single shift.

template <unsigned char n>
unsigned short div(unsigned short x)
{
    return (int)x * ((0x10000 + n - 1) / n) >> 16;
}

Mod can then be trivially based on this, though there may be a faster way also.

template <unsigned char n>
unsigned short mod(unsigned short x)
{
    return x - div<n>(x) * n;
}

Note that I'm also assuming that x is positive, as I can't be bothered checking the behaviour of negatives; hence the use of unsigned to make sure.

Repeated subtraction until the number is less than the divisor will also work, not that you'd want to do that of course.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
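For what it's worth, the fixed-point trick can be sanity-checked outside C++. The sketch below (my own, not from the thread) mirrors the template above and exercises it over the range the poster states: a divisor fitting in an unsigned char, and a dividend no larger than half the 16-bit register width (i.e. 8 bits).

```python
# Emulating the C++ fixed-point division trick from the thread.

def fixed_point_div(x, n):
    """Mirrors: (int)x * ((0x10000 + n - 1) / n) >> 16."""
    return (x * ((0x10000 + n - 1) // n)) >> 16

def fixed_point_mod(x, n):
    """Mirrors the mod<n> template: x - div<n>(x) * n."""
    return x - fixed_point_div(x, n) * n

# Exhaustive check over the stated range: 8-bit dividend, unsigned-char divisor.
ok = all(fixed_point_div(x, n) == x // n and fixed_point_mod(x, n) == x % n
         for n in range(1, 256) for x in range(256))
print(ok)  # True

# The "half your register size" restriction matters: at a full 16-bit
# dividend the shortcut can be off by one.
print(fixed_point_div(65533, 7), 65533 // 7)  # 9362 vs 9361
```

This is only a model of the integer arithmetic, of course; it says nothing about the relative speed of the C++ versions.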
Question on exponential distributions

December 1st 2010, #1 (Senior Member, joined Sep 2009)

From an exponential distribution with expected value 0.2: if I take the mean of 20 observations, then repeat (take another 20), and so on, 1000 times in total, I get 1000 means. If I compare this with doing the whole thing again but now with 100 observations per mean, I notice that more of the mean values from the 100-observation experiment are close to 0.2. What property of convergence in probability is this? Is it the strong law of large numbers, the weak law of large numbers, or something else? And can someone explain briefly what the difference is between the strong law and the weak law? I read the Wikipedia article, but it makes no sense to me.

December 2nd 2010, #2 (joined Mar 2010)

Essentially it's a consequence of a law of large numbers. As n gets larger, the probability of being close to the mean goes to 1. If you aren't interested in mathematical statistics or probability theory, you probably don't need to be too concerned with the difference between the strong and weak laws. To the layperson the difference is rather technical (weak convergence is quite intuitive, while strong convergence requires a more sophisticated view of random variables).
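A quick simulation (my own sketch, not from the thread, using only Python's standard library) reproduces the effect the poster describes: means of 100 observations cluster more tightly around 0.2 than means of 20, with the spread shrinking roughly like 1/sqrt(n).

```python
import random
import statistics

random.seed(1)

def sample_means(n_obs, n_reps, mean=0.2):
    """Means of n_obs draws from an Exponential with the given mean,
    repeated n_reps times."""
    lam = 1 / mean  # expovariate takes the rate; expected value is 1/rate
    return [statistics.fmean(random.expovariate(lam) for _ in range(n_obs))
            for _ in range(n_reps)]

means_20 = sample_means(20, 1000)
means_100 = sample_means(100, 1000)

# The spread of the sample mean shrinks as n grows (the LLN at work):
print(statistics.stdev(means_20), statistics.stdev(means_100))
```

The second standard deviation comes out noticeably smaller than the first, which is exactly "more of the 100-observation means are close to 0.2" in numerical form.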
Baum-Welch Algorithm

The Baum-Welch algorithm, also known as the forward-backward algorithm, was invented by Leonard E. Baum and Lloyd R. Welch. It is a special case of the Expectation-Maximization (EM) method. The Baum-Welch algorithm is very effective for training a Markov model without using manually annotated corpora.

The Baum-Welch algorithm works by assigning initial probabilities to all the parameters. Then, until the training converges, it adjusts the probabilities of the HMM's parameters so as to increase the probability the model assigns to the training set.

Maximization Process

If no prior information is available, the parameters are assigned random initial probabilities. If domain knowledge is available, an informed initial guess is made for the parameters.

Once the initial values are assigned to the parameters, the algorithm enters a training loop. In each iteration, probabilities are re-estimated based on the tags and corresponding probabilities that the current model assigns; that is, the parameter values are adjusted in each iteration. Training stops when the increase in the probability of the training set between iterations falls below some small value.

The forward-backward algorithm is guaranteed to find a locally optimal set of parameter values starting from the initial ones. It works well even when only a small amount of manually tagged corpus is available.
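The training loop described above can be sketched concretely. The following is a toy illustration (my own, not from the article) of Baum-Welch re-estimation for a discrete two-state HMM; the observation sequence and the initial model numbers are made up.

```python
def forward(obs, pi, A, B):
    """alpha[t][i] = P(o_1..o_t, state_t = i)."""
    N = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, len(obs)):
        alpha.append([sum(alpha[t - 1][j] * A[j][i] for j in range(N))
                      * B[i][obs[t]] for i in range(N)])
    return alpha

def backward(obs, A, B):
    """beta[t][i] = P(o_{t+1}..o_T | state_t = i)."""
    N, T = len(A), len(obs)
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(N))
    return beta

def baum_welch_step(obs, pi, A, B):
    """One EM re-estimation step; returns updated (pi, A, B)."""
    N, T = len(pi), len(obs)
    alpha, beta = forward(obs, pi, A, B), backward(obs, A, B)
    lik = sum(alpha[-1])  # P(obs | current model)
    # gamma[t][i] = P(state_t = i | obs);  xi[t][i][j] = P(s_t=i, s_{t+1}=j | obs)
    gamma = [[alpha[t][i] * beta[t][i] / lik for i in range(N)]
             for t in range(T)]
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / lik
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(N)] for i in range(N)]
    M = len(B[0])
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T))
              for k in range(M)] for i in range(N)]
    return new_pi, new_A, new_B

obs = [0, 1, 0, 0, 1, 1, 0, 1]                 # made-up observation sequence
pi = [0.6, 0.4]                                # made-up initial guesses
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.8, 0.2], [0.3, 0.7]]
for _ in range(20):   # fixed iteration count for brevity; real code would
    pi, A, B = baum_welch_step(obs, pi, A, B)  # stop when the gain is tiny
print(sum(forward(obs, pi, A, B)[-1]))  # likelihood under the trained model
```

Each step is guaranteed not to decrease the likelihood of the training data, which is the "adjusts the parameters so as to increase the probability" property described above. Real implementations scale alpha and beta (or work in log space) to avoid underflow on long sequences.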
Blackhole can be relative?

Hi all, the problem is this: you and I observe a star. The star moves with velocity V relative to me, and you are in the rest frame of the star. In my frame of reference, I see the star length-contracted in the direction of its motion relative to me; hence its volume is smaller than its proper volume, and its mass is greater than its rest mass. Then I calculate its gravitational constant, say a. And you calculate its gravitational constant as well, say you get a'.

Since I observe a smaller volume and a larger mass, a > a'. If a just reaches the limit at which light cannot escape from that star, then the result is: in my frame, that star is a black hole, and in your frame, that star is not. However, reality is invariant under Lorentz transformations, so this cannot be true. Where am I wrong?

Thanks in advance!
Equation-solving front page

Quick description

There are several different kinds of equations, and techniques for solving them vary considerably. So this page consists mainly of links to pages about solving particular kinds of equations. There are a few general principles, however.

General discussion

If we have a number $x$, there are many things we can do to it. For example, we can add 2 to it, or multiply it by 17, or square it. Or one can combine two such operations and obtain a number like $17x^2$.

This leads to a general class of questions of the following form: if I start with $x$ and do such-and-such to it and get $y$, then what is $y$? Here we think of $x$ as given and $y$ as something that is calculated from $x$. Equations arise when one reverses this question and asks instead the following: if I started with $x$ and did such-and-such to it, and the answer came out to be $y$, then what was $x$? This time $y$ is given and $x$ is to be determined. To give a simple example, if $17x^2 = 68$, we can work out that $x^2 = 4$ and therefore that $x = \pm 2$.

Questions of this kind do not have to be about numbers. For example, we could ask a question like this: I start with a function $f$, differentiate it, and end up with $2x$; what was the original function? This is a simple differential equation, and the unknown object is the function $f$. In this case, the equation does not fully determine $f$, but we can at least say that $f$ must be of the form $f(x) = x^2 + C$ for some constant $C$.

As this simple example already suggests, the number of different kinds of equation is more or less the same as the number of different kinds of processes we like to apply to mathematical objects. Since there are many kinds of processes, there are many kinds of equations. Below is a list of different kinds, with brief explanations of what they are and links to articles or navigation pages about them. However, some techniques for solving equations are very general, so those are linked first.

Generic equation-solving techniques

Transform equation to reach a solved pattern
Brief summary: Make logically equivalent rearrangements of the equation until it, or the "hard" part of it, matches a pattern you already know how to deal with.

Articles about different kinds of equations

How to solve linear equations in one variable
Brief summary: These are equations like $3x + 4 = 10$. Such an equation is called linear because the graph of $y = 3x + 4$ is a straight line.

How to solve linear equations in two variables
Brief summary: Things get slightly more complicated if you have two unknowns, and two equations to help you to determine them. For example, the equations could be $2x + y = 7$ and $x - y = 2$. However, there are various systematic ways of solving such pairs of equations and determining $x$ and $y$.

How to solve linear equations in many variables
Brief summary: The methods used to solve linear equations in two variables can be generalized to any number of variables. At this point it becomes fruitful to think of the unknown as a vector rather than as a sequence of many individual variables.

How to solve quadratic equations
Brief summary: A quadratic equation is something like $x^2 - 5x + 6 = 0$, which involves $x^2$ as well as $x$. There are several techniques for solving them: which is best varies from example to example.

How to solve cubic and quartic equations
Brief summary: These are like quadratic equations, but now they also involve cubes and fourth powers, respectively. For instance, the equation $x^4 - 3x^2 + x - 5 = 0$ is a quartic equation. Cubic and quartic equations can be solved systematically, but they are much harder than quadratic equations.

What makes some equations so much easier to solve than others?
Brief summary: There are several answers one might give to this question, but here is one: if you can work something out on your calculator without using any memory, then the resulting equation will be easy to solve. A few examples will make clear what this means.

How to solve polynomial equations in several variables
How to solve linear Diophantine equations
How to solve quadratic Diophantine equations
Diophantine equations front page

Differential equations front page
Brief summary: Differential equations are equations where the unknown involves a function (or more than one function), and the equation involves not just the function but also some of its derivatives. If the function is of more than one variable, then these may be partial derivatives. There are many importantly different kinds of differential equation. This page gives some initial guidance.
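To make the quadratic case concrete, here is a short sketch (my own illustration; the particular equation is an arbitrary choice) applying the quadratic formula, the most familiar "solved pattern" for this kind of equation:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 (assumes a != 0 and real roots)."""
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    root = math.sqrt(disc)
    return (-b - root) / (2 * a), (-b + root) / (2 * a)

# Solving x^2 - 5x + 6 = 0:
print(solve_quadratic(1, -5, 6))  # (2.0, 3.0)
```

Reversing the check is the "start with x, do such-and-such" direction from the general discussion: substituting either root back into $x^2 - 5x + 6$ gives 0.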
help with ekvation?

Work backwards from the information given. (Are you sure there isn't an interval to solve over? Something like $0 \leq x \leq 2\pi$.)

$5x - \dfrac{4\pi}{3} = \arcsin \left(\dfrac{1}{2}\right)$

If you know your special triangles then you will know that

$\arcsin \left(\dfrac{1}{2}\right) = \dfrac{\pi}{6} + 2k\pi \ \ ,\ \ k \in \mathbb{Z}$

Then solve the linear equation.

edit: that is one epic typo in the title if you meant "equation"
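Following the responder's steps with $k = 0$: $5x = \pi/6 + 4\pi/3 = 3\pi/2$, so $x = 3\pi/10$. A quick numeric check (mine, not from the thread):

```python
import math

# Solve 5x - 4*pi/3 = arcsin(1/2) for the principal value (k = 0).
x = (math.asin(0.5) + 4 * math.pi / 3) / 5

print(x, 3 * math.pi / 10)                 # both are 0.9424...
print(math.sin(5 * x - 4 * math.pi / 3))   # recovers 0.5, up to rounding
```

Other integer values of $k$ give the rest of the solution family $x = 3\pi/10 + 2k\pi/5$, which is where a stated interval would come in.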
How to Factor a Number

Edited by Histrion, Sondra C, Tom Viren, Flickety and 45 others

Method 1: Factoring Basic Integers. Method 2: Strategy for Factoring Large Numbers.

A number's factors are numbers which multiply together to form it as a product. Another way of thinking of this is that every number is the product of multiple factors. Learning how to factor - that is, break up a number into its component factors - is an important mathematical skill that is used not only in basic arithmetic but also in algebra, calculus, and beyond. See Step 1 below to start learning how to factor!

Method 1 of 2: Factoring Basic Integers

1. Write your number. To begin factoring, all you need is a number - any number will do, but, for our purposes, let's start with a simple integer. Integers are numbers without fractional or decimal components (all positive and negative whole numbers are integers).
□ Let's choose the number 12. Write this number down on a piece of scratch paper.

2. Find two more numbers that multiply to make your first number. Any integer can be written as the product of two other integers. Even prime numbers can be written as the product of 1 and the number itself. Thinking of a number as the product of two factors can require "backwards" thinking - you essentially must ask yourself, "what multiplication problem equals this number?"
□ In our example, 12 has multiple factors - 12 × 1, 6 × 2, and 3 × 4 all equal 12. So, we can say that 12's factors are 1, 2, 3, 4, 6, and 12. For our purposes, let's work with the factors 6 and 2.
□ Even numbers are especially easy to factor because every even number has 2 as a factor. 4 = 2 × 2, 26 = 13 × 2, etc.

3. Determine whether any of your factors can be factored again. Lots of numbers - especially large ones - can be factored multiple times. When you've found two of a number's factors, if one has its own set of factors, you can reduce this number to its factors as well.
Depending on the situation, it may or may not be beneficial to do this.
□ For instance, in our example, we have reduced 12 to 2 × 6. Notice that 6 has its own factors - 3 × 2 = 6. Thus, we can say that 12 = 2 × (3 × 2).

4. Stop factoring when you reach prime numbers. Prime numbers are numbers that are evenly divisible only by themselves and 1. For instance, 2, 3, 5, 7, 11, 13, and 17 are all prime numbers. When you've factored a number so that it's the product of exclusively prime numbers, further factoring is superfluous. It does you no good to reduce each factor to itself times one, so you may stop.
□ In our example, we've reduced 12 to 2 × (2 × 3). 2, 2, and 3 are all prime numbers. If we were to factor further, we'd have to factor to (2 × 1) × ((2 × 1)(3 × 1)), which isn't typically useful, so it's usually avoided.

5. Factor negative numbers in the same way. Negative numbers can be factored nearly identically to how positive numbers are factored. The sole difference is that the factors must multiply together to make a negative number as their product, so an odd number of the factors must be negative.
□ For example, let's factor -60. See below:
☆ -60 = -10 × 6
☆ -60 = (-5 × 2) × 6
☆ -60 = (-5 × 2) × (3 × 2)
☆ -60 = -5 × 2 × 3 × 2. Note that any odd number of negative factors gives the same product. For example, -5 × 2 × -3 × -2 also equals 60.

Method 2 of 2: Strategy for Factoring Large Numbers

1. Write your number above a 2-column table. While it's usually fairly easy to factor small integers, larger numbers can be daunting. Most of us would be hard-pressed to break a 4- or 5-digit number into its prime factors using nothing but mental math. Luckily, using a table, the process becomes much easier. Write your number above a t-shaped table with two columns - you'll use this table to keep track of your growing list of factors.
□ For the purpose of our example, let's choose a 4-digit number to factor - 6,552.

2. Divide your number by the smallest possible prime factor. Divide your number by the smallest prime factor (besides 1) that divides into it evenly with no remainder. Write the prime factor in the left column and write your answer across from it in the right column. As noted above, even numbers are especially easy to start factoring because their smallest prime factor will always be 2. Odd numbers, on the other hand, will have differing smallest prime factors.
□ In our example, since 6,552 is even, we know that 2 is its smallest prime factor. 6,552 ÷ 2 = 3,276. In the left column, we'll write 2, and in the right column, write 3,276.

3. Continue to factor in this fashion. Next, factor the number in the right column by its smallest prime factor, rather than the number at the top of the table. Write the prime factor in the left column and the new number in the right column. Continue to repeat this process - with each repetition, the number in the right column should decrease.
□ Let's continue with our process. 3,276 ÷ 2 = 1,638, so at the bottom of the left column, we'll write another 2, and at the bottom of the right column, we'll write 1,638. 1,638 ÷ 2 = 819, so we'll write 2 and 819 at the bottom of the two columns as before.

4. Deal with odd numbers by trying small prime factors. Odd numbers are more difficult to find the smallest prime factor of than even numbers, because they don't automatically have 2 as their smallest prime factor. When you reach an odd number, try dividing by small prime numbers other than 2 - 3, 5, 7, 11, and so on - until you find one that divides evenly with no remainder. This is the number's smallest prime factor.
□ In our example, we've reached 819. 819 is odd, so 2 is not a factor of 819. Instead of writing down another 2, we'll try the next prime number: 3. 819 ÷ 3 = 273 with no remainder, so we'll write down 3 and 273.
□ When guessing factors, you should try all prime numbers up to the square root of the largest factor found so far. If none of the factors you try up to this point divide evenly, you're probably trying to factor a prime number and thus are finished with the factoring process.

5. Continue until you reach 1. Continue dividing the numbers in the right column by their smallest prime factor until you obtain a prime number in the right column. Divide this number by itself - this will put the number in the left column and "1" in the right column.
□ Let's finish factoring our number. See below for a detailed breakdown:
☆ Divide by 3 again: 273 ÷ 3 = 91, no remainder, so we'll write down 3 and 91.
☆ Let's try 3 again: 91 doesn't have 3 as a factor, nor does it have the next lowest prime (5) as a factor, but 91 ÷ 7 = 13, with no remainder, so we'll write down 7 and 13.
☆ Let's try 7 again: 13 doesn't have 7 as a factor, or 11 (the next prime), but it does have itself as a factor: 13 ÷ 13 = 1. So, to finish our table, we'll write down 13 and 1. We can finally stop factoring.

6. Use the numbers in the left-hand column as your original number's factors. Once you reach 1 in the right-hand column, you're done. The numbers listed on the left side of the table are your factors. In other words, the product when you multiply all of these numbers together will be the number at the top of the table. If the same factor appears multiple times, you can use exponent notation to save space. For instance, if your list of factors has four 2's, you can write 2^4 rather than 2 × 2 × 2 × 2.
□ In our example 6,552 = 2^3 × 3^2 × 7 × 13. This is the complete factorization of 6,552 into prime numbers. No matter what order these numbers are multiplied in, the product will be 6,552.

• Also important is the concept of a prime number: a number that has only two factors, 1 and itself. 3 is a prime number because its only factors are 1 and 3. 4, on the other hand, has 2 as a factor.
A number that isn't prime is called composite. (The number 1 itself, however, is considered neither prime nor composite -- it's a special case.)
• The lowest prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, and 23.
• Understand that one number is a factor of another, larger number if it "divides it cleanly" -- that is, the larger number can be divided by the smaller number without leaving a remainder. For instance, 6 is a factor of 24, because 24 ÷ 6 = 4 with no remainder. On the other hand, 6 is not a factor of 25.
• If the digits of a number add up to a multiple of three, then three is a factor of that number. (819 = 8 + 1 + 9, which = 18; 1 + 8 = 9. Three is a factor of nine, so it is a factor of 819.)
• Some numbers can be factored in faster ways, but this method works every time and, as an added bonus, the prime factors are listed in ascending order when you're done.
• Remember that we're only talking about the so-called "natural numbers" -- sometimes called the "counting numbers": 1, 2, 3, 4, 5... We're not going to get into negative numbers or fractions, which might warrant their own articles.
• Don't make unnecessary work for yourself. Once you've eliminated a factor candidate, you don't have to test it again. Once we decided that 819 didn't have 2 as a factor, we didn't have to test 2 any further throughout the rest of the process.

Things You'll Need
• Paper
• Writing tools, preferably pencil and eraser
• Calculator (optional)

Article Info
Thanks to all authors for creating a page that has been read 174,596 times.
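The table method maps directly onto code. Here is a small Python sketch (my own illustration, not part of the article) that repeatedly divides out the smallest remaining prime factor, reproducing the worked example:

```python
# Trial division: exactly the "try 2, then 3, then 5..." table strategy,
# with the square-root cutoff mentioned in the guessing tip.

def prime_factors(n):
    """Return the prime factorization of an integer n > 1 as an ascending list."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:    # d is the smallest remaining prime factor
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(6552))  # [2, 2, 2, 3, 3, 7, 13], i.e. 2^3 x 3^2 x 7 x 13
```

Note that `d` runs over composites too (4, 6, ...), but those never divide evenly by the time they are tried, since their own prime factors have already been divided out; this is the "don't retest eliminated candidates" tip in disguise.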
Hallandale Beach, FL Math Tutor Find a Hallandale Beach, FL Math Tutor ...My name is Jesse and I recently graduated cum laude from Tulane University with a major in sociology and a pre-dental concentration. I am currently applying to dental school. Last summer, I took the Dental Admissions Test (DAT) and scored in the 94th percentile in Academic Average (AA), 91st pe... 8 Subjects: including algebra 1, biology, chemistry, prealgebra ...Personally, I believe that such experience is very much in phase with my personality and has given me a very particular perspective about general psychological impediments to enjoying and learning Physics and Math concepts. I also strongly believe that learning Math and Sciences is of the upmost... 11 Subjects: including precalculus, algebra 1, algebra 2, calculus Hello students! My name is Ross, and I am in my senior year of college. Like many people, I have had many troubles in school throughout my educational career. 36 Subjects: including algebra 1, algebra 2, probability, linear algebra ...I have been exposed to different types of learners and I love it! I strive to tailor my teaching to their needs. One question that students always ask me is "why am I studying algebra, will I ever use it?". 6 Subjects: including algebra 2, algebra 1, prealgebra, French ...I tried out as a walk on for the men's basketball team and was even invited to practice with the team. However, due to a large influx of guards on the team, I was let go. I currently hold basketball skills camps at local Miami gyms. 
8 Subjects: including geometry, prealgebra, study skills, algebra 1
6.830: Lecture 3 (2/13/13)
The reading for this lecture is:
1. The start of Chapter 4 to the end of Section 4.2.5 (pages 100-110) of Ramakrishnan and Gehrke
2. The start of Chapter 2 to the end of Section 2.6 (pages 25-46) of Ramakrishnan and Gehrke
3. Section 3.5 (pages 75-86) of Ramakrishnan and Gehrke
4. The start of Chapter 19 to the end of Section 19.4 (pages 605-619) of Ramakrishnan and Gehrke
The first reading discusses the "relational algebra" proposed by Ted Codd, which we will discuss at the beginning of lecture. The second two readings discuss ER modeling, which is one practical way to model a database and generate relational database schemas. The fourth reading discusses a formal model based on the notion of functional dependencies that makes it possible to reason about whether schemas are free of anomalies that lead to operational problems in database system execution. You should focus on understanding BCNF and 3NF; we will not discuss the higher normal forms. The relationship between these last three readings is that ER modeling generally produces relational schemas that conform to 3NF/BCNF, though this isn't strictly true (see Section 19.7 for a discussion of cases where ER modeling doesn't lead to BCNF or 3NF schemas). For those who are interested, it turns out that there are relatively straightforward algorithms for generating BCNF/3NF schemas given a collection of functional dependencies -- these are given in Sections 19.5 and 19.6 of Ramakrishnan and Gehrke, but you don't need to know these in detail. As you read these chapters, think about and be prepared to answer the following questions in lecture:
• What problems does schema normalization solve? Do you believe that these are important problems?
• What is the distinction between BCNF and 3NF? Is there a reason to prefer one over the other?
• Think about a data set you have worked with recently, and try to derive a set of functional dependencies that correspond to it.
What assumptions did you have to make in modeling your data in this way?
• How is it that ER modeling generally leads to BCNF/3NF schemas?
Last change: 2/7/13.
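The functional-dependency reasoning in the fourth reading rests on one small computation: the closure of an attribute set under a set of FDs, which is the basic subroutine behind superkey and BCNF/3NF checks. A short sketch (not part of the course materials; the example schema is invented):

```python
def closure(attrs, fds):
    """All attributes functionally determined by `attrs` under `fds`.

    `fds` is a list of (lhs, rhs) pairs of attribute sets: lhs -> rhs.
    """
    result = set(attrs)
    changed = True
    while changed:           # iterate to a fixed point
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Hypothetical FDs: ssn -> name, ssn -> dept, dept -> building.
fds = [({"ssn"}, {"name"}), ({"ssn"}, {"dept"}), ({"dept"}, {"building"})]
print(closure({"ssn"}, fds))   # every attribute: {ssn} is a superkey here
print(closure({"dept"}, fds))  # only {dept, building}: dept -> building
```

In this toy schema, `dept -> building` is an FD whose left side is not a superkey, the kind of dependency a BCNF decomposition would split into its own relation.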
Trenton, NJ Science Tutor Find a Trenton, NJ Science Tutor ...I make video tutorials to help scientists in the National Cancer Institute Physical Sciences-Oncology Network understand how to derive governing systems of differential equations to describe dynamical systems. As part of my PhD at Princeton and postdoctoral work at the University of California, ... 13 Subjects: including physical science, physics, writing, reading ...I also taught physics in the university for more than 8 years and in high school for other 4 years. All the spectrum of subjects from mechanics, optics, electrostatics, magnetostatics, thermodynamics were in my curriculum. I also tutored high school students. 9 Subjects: including physical science, physics, calculus, algebra 1 ...Additionally, I studied German for 5 years in high school, and then continued studying it for 4 years in college. This included a semester abroad, at the University of Tuebingen, in Tuebingen, Germany. While there, I took several classes with other German students (as opposed to classes designe... 12 Subjects: including chemistry, biology, reading, calculus ...I fully understand the student's frustration with trigonometry. I will help students to feel more comfortable with the subject matter, by relating it to real situations. I will meet with parents or guardians to discuss student progress. 62 Subjects: including physical science, mechanical engineering, sociology, biology ...This may first involve presentation of a concept followed by some computations that would illustrate this concept. The next steps would involve solving word problems to make sure the student understands how to apply the concepts presented. I have a Master's degree in statistics from Villanova University. 
13 Subjects: including psychology, anatomy, physiology, biostatistics
Examples where the analogy between number theory and geometry fails
[21 votes]

The analogy between $O_K$ ($K$ a number field) and affine curves over a field has been very fruitful. It also admits many variations: the field over which the curve is defined may have positive or zero characteristic; it may be algebraically closed or not; it may be viewed locally (by various notions of "locally"); it may be viewed through the "field with one element" (if I understand that program); and so forth. Often when I have dealt with this analogy, the geometric analog of a question has been easier to deal with than the arithmetic one, and strongly suggestive of the veracity of the arithmetic statement.

My question is: what are some examples where this analogy fails? For example, where something holds in the geometric case, and it is tempting to conjecture that it is true in the arithmetic case, but it turns out to be false. If you can attach an opinion as to why the analogy doesn't go through in your example, that would be extra nice, but not necessary.

Tags: nt.number-theory, ag.algebraic-geometry, big-list, examples

Comments (6):

The abc-conjecture without the epsilon in it? – KConrad Oct 28 '10 at 8:46

Oh, for a reference, in Lang's Algebra he gives examples showing the epsilon is needed for the abc-conjecture over Z. – KConrad Oct 28 '10 at 8:48

Periodicity of zeta functions (may be cheating). – S. Carnahan♦ Oct 28 '10 at 8:53

Scott, what do you have in mind: that it is periodic in s? That wouldn't seem tempting to conjecture for the arithmetic case. – KConrad Oct 28 '10 at 12:49

Example 1: vanishing of ${\rm{H}}^1(k,G)$ for global function fields $k$ and connected semisimple $k$-groups $G$ that are simply connected. (Over number fields one has to account for the effect of 6 real places.)
Example 2: if a nonzero element of a global function field is an $n$th power in all completions, then it is globally an $n$th power; but this is false for certain number fields and certain $n$. (Over number fields one has to account for the effect of 2-adic places.) In both cases, the phenomena underlying the failure over number fields were known before the proofs were found over function fields. – BCnrd Oct 28 '10 at 20:27

2 Answers

[7 votes] Of the commenters on this question, two are authors (with Harald Helfgott) of the very nice paper "Root numbers and ranks in positive characteristic", which gives an example (under the parity conjecture) of a non-isotrivial 1-parameter family of elliptic curves over a global function field $K = \mathbf{F}_q(u)$ (any odd $q$) such that each fiber $E_t$ for $t \in K$ has rank strictly greater than that of the generic fiber. This is conjectured to be impossible in the number field case.

[2 votes] If $A/K$ is an abelian variety, $v$ is a place of $K$, $h$ is the global height function and $\lambda_v$ is the local height function at $v$, then the comparison of $h(P)$ with $\lambda_v(P)$ for $P \in A(K)$ varies a lot depending on the situation: $\lambda_v(P) = O(1)$ if $K$ is a function field of characteristic zero; $\lambda_v(P) = O(h(P)^{1/2})$ (usually) in positive characteristic, and this cannot be improved; and $\lambda_v(P) = O(\log(h(P)))$ conjecturally for number fields (and it is definitely not $O(1)$).
Probability/Combinatorics Q

April 10th 2010, 07:11 AM

I have the formula for the probability of chance agreement where:

k people are giving ratings
n is the number of possible ratings to choose from

This is the probability for chance agreement where the ratings are allowed to differ by 2 (e.g. 1 and 3 would be classed as in agreement):

$P = \frac{3^{k-2} (5n-6)+I \sum_{i=3}^k 3^{k-i} (n-3) 2^{i-1}}{n^k}$

$I$ is an indicator function: $I = 1 \mbox{ if } k \ge 3, I = 0 \mbox{ if } k \le 2.$

Can anyone help me describe how this probability arises? I was hoping for an explanation along the lines of "No. of exact agreements + No. of agreements where ratings differ by 1 ..." etc.

I understand the way that the probability for chance agreement in the case where the ratings can differ by one works:

$P = \frac{{\left(n-1\right)}{\displaystyle\sum_{i=1}^{k-1} 2^{i}}+n}{n^k} = \frac{(n-1) \sum_{i=1}^{k-1} 2^{i}}{n^k}+ \frac{1}{n^{k-1}}$

The final term in the numerator is then the number of exact matches. The $(n-1)$ gives the number of adjacent pairs that would be classed as in agreement. The summation can then be written as

$\displaystyle\left(\frac{2}{n}\right)^k - 2\left(\frac{1}{n}\right)^k$

which is the matches differing by one minus the exact matches, as they have already been counted. I just can't work out the similar ideas for the probability where ratings can differ by 2!
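As a sanity check (my own addition, not from the thread), both closed forms can be verified against brute-force enumeration of all $n^k$ rating tuples. "Agreement within $d$" is taken to mean that the largest and smallest of the $k$ ratings differ by at most $d$:

```python
from itertools import product

def within_d_count(n, k, d):
    """Count rating k-tuples from {1..n} whose max and min differ by <= d."""
    return sum(1 for t in product(range(1, n + 1), repeat=k)
               if max(t) - min(t) <= d)

def formula_within_1(n, k):
    """Numerator of the within-1 formula quoted in the question."""
    return (n - 1) * sum(2 ** i for i in range(1, k)) + n

def formula_within_2(n, k):
    """Numerator of the within-2 formula quoted in the question."""
    I = 1 if k >= 3 else 0
    return 3 ** (k - 2) * (5 * n - 6) + I * sum(
        3 ** (k - i) * (n - 3) * 2 ** (i - 1) for i in range(3, k + 1))

# Both closed forms agree with direct counting for small n and k
for n in range(4, 7):
    for k in range(2, 5):
        assert within_d_count(n, k, 1) == formula_within_1(n, k)
        assert within_d_count(n, k, 2) == formula_within_2(n, k)
print("both formulas verified for n in 4..6, k in 2..4")
```

Dividing either count by $n^k$ gives the probability, so the check confirms the numerators line by line (e.g. for $n=5$, $k=3$ both give 65 within-2 tuples out of 125).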
Math Forum Discussions - Seasons greetings from Math Forum neighbors
Date: Dec 29, 2008 3:33 PM
Author: Kirby Urner
Subject: Seasons greetings from Math Forum neighbors

Here's something from math-teach, a sister list:

Even if your school is too poor to access these geo-cams (used for GPS work), your teacher still has access to Google Streets probably. Here's how we teach it in Portland (as of a couple years ago -- big strides since then). This is from a Public School for geeks (we're Open Source Capital, take that).

So as I've been discussing with some friends (and Friends) on other lists, a lesson plan for our children involves discovering the dodecacam, a real piece of equipment warehoused around SE 10th in our coordinate system, with a website worth visiting.

The geometry angle is of course the camera itself, with 11 lenses occupying facets of a pentagonal dodecahedron, in terms of how they're oriented, with the 12th facet reserved for the pedestal, which attaches the camera to the camera person, who also lugs the recording media (a hard drive).

We use SQL to pull up tables of duals, starting with the Oregon state standard V + F = E + 2 (search for details), columns in our Polyhedra table. By searching on (V,F) == (F,V) pairs, E constant, you find which polyhedra interpenetrate in a dualistic manner -- check any geometry book for this idea (tetrahedron self dual). The dual of the pentagonal dodecahedron is of course the icosahedron, of volume 18.51 relative to the tetrahedron of same edge lengths (icosa: E = 30, V = 12, F = 20; dodeca: E = 30, V = 20, F = 12 -- just switch F and V).

Of course actually field testing the equipment is of interest, as one of our mantras is you can't do math if you don't have access to a night sky free of light pollution, i.e. no certification by an "indoors only" route anymore, unless you're disabled. So our math lessons have shifted to calories and joules, their expenditure, in the context of physical work.
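The dual-finding query described above -- match polyhedra on swapped (V,F) with E fixed, after checking V + F = E + 2 -- is easy to mock up without SQL. The table values below are the standard Platonic-solid counts; the code itself is my own sketch, not from the post:

```python
# (name, V, E, F) for the five Platonic solids
polyhedra = [
    ("tetrahedron",   4,  6,  4),
    ("cube",          8, 12,  6),
    ("octahedron",    6, 12,  8),
    ("dodecahedron", 20, 30, 12),
    ("icosahedron",  12, 30, 20),
]

# Euler's law V + F = E + 2 holds for every entry
for name, V, E, F in polyhedra:
    assert V + F == E + 2, name

# Dual pairs: same E, with (V, F) swapped -- the "(V,F) == (F,V)" search
duals = [(a[0], b[0])
         for a in polyhedra for b in polyhedra
         if a[2] == b[2] and (a[1], a[3]) == (b[3], b[1])]
print(duals)  # tetrahedron pairs with itself; cube/octa and dodeca/icosa pair up
```

The self-dual tetrahedron falls out of the same query, since (4, 4) swapped is still (4, 4).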
This may sound like a detour, but a mathematical algorithm, such as sorting alphabetically (or by any dimension -- no limit on how many dimensions SQL might sort by) takes energy, and less efficient equipment or programming tends to squander energy (and/or time). Important lessons for anyone, yes? Your body is a mathematical instrument (Keith Devlin good on this).

Getting our hands on actual dodecacams is maybe more feasible here in Portland. I've met the CTO and know this company is eager to partner with urban studies type people, has a long track record in that respect already. So to keep this properly academic, I will be going through the professoriate. In fact, the University of Rochester is responsible for getting me into this loop in the first place (actually a Reed connection, but academia is like that -- lots of criss-crossing affiliations).

Speaking of which, congratulations to the VPython team for the new release announcement (Bruce Sherwood et al), I'm looking forward to testing it soon. For those of you just tuning in, VPython is our tool of choice for actually showing these duals I've been talking about, i.e. once you're ready to see the progression, of tetrahedron, cube etc., we have a graphical engine at the ready, our big departure from TI culture and "technology in the classroom" ala NCTM, which is rarely anything but flat, "two dee". We don't go for that. No can do. Gotta have "shapes" (polyhedra) as 3rd graders called 'em in the high Himalayas (I used to call this "Bhutanese Math" when writing for Father Mackey in the 1980s).

Of course you've probably guessed my fantasies, of where I'd like to take these dodecacams (Paro anyone?) That's hard country to beat, when it comes to "immersive".
So, to put it another way: I've been working with my Meeting committees to put together opportunities for our teens that would involve a field trip to one of our respected Portland based companies, in the context of a gnu math unit on duals and Euler's Law for polyhedra (in its easiest form -- proofs by cartoon very doable).

The Columbia Gorge is a likely outdoor school for practicing the camera skills, even with dummy cameras some of the time (expensive gear can't be wasted). The goal of the outdoor segment is to fulfill our commitment to keeping math from being physically wasting, a kind of disease in that case. If you don't have a physical education component, you won't get it about calories and joules. We have lots of gym equipment. Gnu math teachers tend to be pretty fit.

PS: my daughter is having trouble with Spore suddenly not working, a very fun biology game, a computer fantasy of course (long in the making) but good for putting down hooks for later science, recommended (despite this sudden glitch...). School is out at this time. Doesn't mean there's no learning curve.
foldr : ('a -> 'b -> 'c -> 'c) -> ('a, 'b) func -> 'c -> 'c

Folds an operation iteratively over the graph of a finite partial function.

This is one of a suite of operations on finite partial functions, type ('a,'b)func. These may sometimes be preferable to ordinary functions since they permit more operations such as equality comparison, extraction of domain etc. If a finite partial function p has graph [x1,y1; ...; xn,yn] then the application foldr f p a returns

f x1 y1 (f x2 y2 (f x3 y3 (f ... (f xn yn a) ... )))

Note that the order in which the pairs are operated on depends on the internal structure of the finite partial function, and is often not the most obvious.

Fails if one of the embedded function applications does.

# let f = (1 |-> 2) (2 |=> 3);;
val f : (int, int) func =
# graph f;;
val it : (int * int) list = [(1, 2); (2, 3)]
# foldr (fun x y a -> (x,y)::a) f [];;
val it : (int * int) list = [(2, 3); (1, 2)]

Note how the pairs are actually processed in the opposite order to the order in which they are presented by graph. The order will in general not be obvious, and generally this is applied to operations with appropriate commutativity properties.

See also: |->, |=>, apply, applyd, choose, combine, defined, dom, foldl, graph, is_undefined, mapf, ran, tryapplyd, undefine, undefined.
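For readers outside OCaml, here is a rough Python analogue of the operation (hypothetical, not part of HOL Light), representing the graph as an explicit list of pairs. Note the caveat above still applies in the real system: HOL Light's traversal order follows its internal structure, whereas this sketch simply folds right-to-left over the given list:

```python
from functools import reduce

def foldr(f, graph, a):
    """Right fold over (key, value) pairs of a finite partial function,
    modeled as a list of pairs: returns f x1 y1 (f x2 y2 (... (f xn yn a)))."""
    return reduce(lambda acc, kv: f(kv[0], kv[1], acc), reversed(graph), a)

# Mirror of the manual's example: rebuild the graph by consing pairs onto []
pairs = [(1, 2), (2, 3)]
result = foldr(lambda x, y, a: [(x, y)] + a, pairs, [])
print(result)  # [(1, 2), (2, 3)] -- list order here, unlike HOL Light's internal order
```

Because the pair order is unspecified in the real implementation, the fold is safest with operations that are insensitive to ordering, as the manual notes.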
Charge and Dipole Units

The Debye is the unit for the dipole moment of molecules. The Debye has the dimensions of charge times distance.

1 Debye is 10^-18 statcoul cm
1 Debye is 3.335641x10^-30 A m s
1 Debye is 3.335641x10^-30 C m
1 Debye is 0.208194 e Å
1 Debye is 0.393430 e Bohr
1 Debye is 10^-18 esu cm

Mulliken and CHELPG charges are given in terms of e, the charge on an electron.

1 e is 1.602176462x10^-19 C
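A quick cross-check of the table (my own addition): starting from the SI value of the Debye and the electron charge quoted above, the e·Å and e·Bohr rows follow by division. The Bohr radius value is an assumption here, since the table does not list it:

```python
DEBYE_SI = 3.335641e-30      # C m, from the table above
E_CHARGE = 1.602176462e-19   # C, electron charge as quoted above
ANGSTROM = 1e-10             # m
BOHR     = 0.5291772e-10     # m (assumed Bohr radius, not listed in the table)

d_in_e_angstrom = DEBYE_SI / (E_CHARGE * ANGSTROM)
d_in_e_bohr     = DEBYE_SI / (E_CHARGE * BOHR)

print(round(d_in_e_angstrom, 6))  # ~0.208194, matching the table
print(round(d_in_e_bohr, 6))      # ~0.393430, matching the table
```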
Terminology for a cyclically ordered set of objects

If I have an ordered set of objects (for concreteness, say they're integers) $(x_1,\ldots,x_n)$, I might call it a tuple of integers. Perhaps, though, I have a set of integers $(x_1,\ldots,x_n)$ but the order is only defined up to cyclical permutation (imagine they're sitting at distinct points along a circle); so $(x_1,\ldots,x_n)$ is the same as $(x_2,\ldots,x_n,x_1)$. I thus can't call this a "tuple of integers" because that would imply there is a canonical ordering. Is there some standard term I can use, besides the unwieldy phrase "cyclically ordered set of integers"?

A necklace. – Gjergji Zaimi Jul 26 '12 at 2:22

Even with the term necklace one needs some care. It matters in some cases that the necklace not be worn upside down. I suggest a less common term like circlet, with a modifier in case reversals are NOT allowed. Gerhard "Doesn't Know The Standard Term" Paseman, 2012.07.26 – Gerhard Paseman Jul 26 '12 at 16:30

Perhaps clocklet? Gerhard "OK I'll Stop For Now" Paseman, 2012.07.26 – Gerhard Paseman Jul 26 '12 at 16:34

3 Answers

Cyclic orderings are somewhat rare beasts but they do show up here and there. A set with a cyclic ordering is a "cyclically ordered set", just as a set with a total (or partial) ordering is a "totally (or partially) ordered set". The formal definition of a cyclic ordering is found in http://en.wikipedia.org/wiki/Cyclic_order. It's just a certain kind of ternary relation on a set. I suppose if you wanted to mimic "poset" for "partially ordered set" you could say "coset", but I would not advise it :-) and for an exhaustive list of the most common alternatives see the link above.

It certainly depends on the context. I've seen 'cycle' or 'necklace' used before for this, but you had best define it first as these aren't universal.

If your objects that sit at distinct points along a circle are all elements of some set $L$, and repetitions are allowed, the term "cyclic words in the alphabet $L$" is fairly commonly used. If no repetitions are allowed, so that you just have an equivalence relation on total orderings of some set $A$, one usually talks about "cyclic orderings of $A$".
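One practical way to handle such objects in code (a common trick, not from this thread) is to pick a canonical representative: the lexicographically least rotation. Two nonempty cyclic sequences are then equal exactly when their canonical forms agree:

```python
def canonical(seq):
    """Lexicographically least rotation -- a canonical form for a nonempty cyclic word."""
    s = list(seq)
    n = len(s)
    return min(tuple(s[i:] + s[:i]) for i in range(n))

def cyclically_equal(a, b):
    return len(a) == len(b) and canonical(a) == canonical(b)

assert cyclically_equal([2, 3, 1], [1, 2, 3])      # rotations of each other
assert not cyclically_equal([1, 3, 2], [1, 2, 3])  # a reversal is NOT a rotation
```

If the necklace may also be worn upside down (the caveat raised in the comments), take the minimum over rotations of both the sequence and its reversal.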
Hybrid Manifold

Abstract: [Moderator's note: no responsibility for content. LM] This is an attempt to define a bulk which corresponds to both a string cosmological and a brane cosmological formalism, which disallows other branes that exist outside this universe. Our bulk is a homogeneous manifold similar to a Joyce manifold, which is then glued at every point to an AdS4 which forms a surface to the manifold. This manifold forms the background of everything that exists.

Definition of the Hybrid Manifold

We define our bulk as a continuous map of all points on an AdS4 manifold (which may or may not be periodic) and the Joyce manifold. We state that the observation of the propagation of point particles is actually a reflection of the current shape of the Joyce manifold. On our manifold, when the Joyce manifold bends or is warped, the AdS4 manifold's characteristics change as a result of the continuous mapping. In other words the AdS4 manifold shows the shape of the underlying substructure of the Joyce manifold. The Joyce manifold is bound to a certain set of shapes and transformations which must follow the laws of physics in the AdS4. But now we must extend the hybrid manifold to have properties that reflect current physical observations. We must have a manifold that is restricted to a subset of shapes that can exist on the manifold. We must also deal with the fact that energy is quantized.

A quantization of energy using hybrid manifolds

As a string moves through the part of our manifold that consists of the G2 manifold, it bends the manifold slightly, so that we can detect the point particle on the AdS4 space. The surface of our new manifold is glued, and because of this it can reflect properties of its bending. The reason we can detect different messenger particles is because as a string moves through our hybrid manifold it is bound at certain parts of the hybrid manifold due to the structure of the manifold.
There is only a specific set of paths that the string can move through, and that determines how the surface is bent. Another property is that due to quantization the vibrations of the string follow an ascending or descending chain of vibrational patterns as the strings gain or lose energy. Therefore each string follows a set of vibrations given the specific amount of energy that the string has.

Disallowing Violations of the Laws of Physics

Certain properties of physics can be disallowed by the Joyce manifold being unable to transform to that state because it is impossible. Certain holes that may occur may require a great deal of energy, which is prohibitive because they require a great deal of mass, energy, or mass/energy density. Therefore they would be difficult or maybe even impossible to accomplish.

Gravity Representation

There are two ways to represent gravity using this hybrid space. One way is to give it a formal messenger particle and have a given energy value for it. Another method is to say that gravity is a result of the specific shape of the Joyce manifold and requires no messenger particle whatsoever. This representation will be chosen once we have direct evidence for either outcome and may remain independent of the theory.

An interesting conclusion that we can get from this formalism is that we don't need a set of interacting branes to start the process of the formation of the universe. Instead some event must create a difference inside of the space and make it form into a stable state. Therefore we can confine the universe formation to just this brane. Another conclusion we might be able to draw is that the underlying substructure allows for the space to be improbably flat and bypasses all horizons by being glued to a singular space that influences all spacetime. That it is merely a property of the substructure and independent of an alternate form of matter.
Cambridge University Press

Heights in Diophantine Geometry

$141.99 (C). Replaced by 9780521712293; unavailable.

Diophantine geometry has been studied by number theorists for thousands of years, since the time of Pythagoras, and has continued to be a rich area of ideas such as Fermat's Last Theorem, and most recently the ABC conjecture. This monograph is a bridge between the classical theory and the modern approach via arithmetic geometry. The authors provide a clear path through the subject for graduate students and researchers. They have re-examined many results and much of the literature, and provide a thorough account of several topics at a level not seen before in book form. The treatment is largely self-contained, with proofs given in full detail.

□ The authors have re-examined many results and much of the literature, and give a thorough account of several topics at a level not seen before in book form
□ For graduate students and researchers, and is largely self-contained: proofs are given in full detail, and many results appear here for the first time
□ Destined to be a definitive reference on modern diophantine geometry, bringing a new standard of rigour and elegance to the field
□ Enrico Bombieri and Walter Gubler are winners of the American Mathematical Society Joseph Doob Prize for their book Heights in Diophantine Geometry

Reviews & endorsements

"Bombieri and Gubler have written an excellent introduction to some exciting mathematics... written with an excellent combination of clarity and rigor, with the authors highlighting which parts can be skipped on a first reading and which parts are particularly important for later material. The book also contains a glossary of notation, a good index, and a nice bibliography collecting many of the primary sources in this field." MAA Reviews, Darren Glass, Gettysburg College

"The quality of exposition is exemplary, which is not surprising, given the brilliant expository style of the elder author." Yuri Bilu, Mathematical Reviews

Product details

□ Date Published: February 2006
□ Format: Hardback
□ ISBN: 9780521846158
□ Length: 668 pages
□ Dimensions: 235 x 163 x 39 mm
□ Weight: 1.043kg
□ Availability: Replaced by 9780521712293

Table of Contents

I. Heights
II. Weil heights
III. Linear tori
IV. Small points
V. The unit equation
VI. Roth's theorem
VII. The subspace theorem
VIII. Abelian varieties
IX. Neron-Tate heights
X. The Mordell-Weil theorem
XI. Faltings theorem
XII. The ABC-conjecture
XIII. Nevanlinna theory
XIV. The Vojta conjectures
Appendix A. Algebraic geometry
Appendix B. Ramification
Appendix C. Geometry of numbers
Glossary of notation

Authors

Enrico Bombieri, Institute for Advanced Study, Princeton, New Jersey. Professor Enrico Bombieri is a Professor of Mathematics at the Institute for Advanced Study.

Walter Gubler, Universität Dortmund. Dr Walter Gubler is a Lecturer in Mathematics at the University of Dortmund.
Geometric Translations ( Read ) | Geometry

What if Lucy lived in San Francisco, $S$, her parents lived in Paso Robles, $P$, and Ukiah, $U$, was a third city on the same map?

a) Find the component form of $\stackrel{\rightharpoonup}{PS}$, $\stackrel{\rightharpoonup}{SU}$, and $\stackrel{\rightharpoonup}{PU}$.

b) Lucy's parents are considering moving to Fresno, $F$. Find the component form of $\stackrel{\rightharpoonup}{PF}$ and $\stackrel{\rightharpoonup}{UF}$.

c) Is Ukiah or Paso Robles closer to Fresno?

After completing this Concept, you'll be able to answer these questions.

Watch This

CK-12 Foundation: Chapter12TranslationsA

Learn more about translations by watching the video at this link.

A transformation is an operation that moves, flips, or changes a figure to create a new figure. A rigid transformation is a transformation that preserves size and shape. The rigid transformations are: translations (discussed here), reflections, and rotations. The new figure created by a transformation is called the image. The original figure is called the preimage. Another word for a rigid transformation is an isometry. Rigid transformations are also called congruence transformations. If the preimage is $A$, then the image is labeled $A'$; the image of $A'$ would be labeled $A''$.

A translation is a transformation that moves every point in a figure the same distance in the same direction. In the coordinate plane, we say that a translation moves a figure $x$ units and $y$ units. A vector is a quantity that has direction and size. In the graph below, the line from $A$ to $B$ is the vector $\stackrel{\rightharpoonup}{AB}$, with initial point $A$ and terminal point $B$. The terminal point always has the arrow pointing towards it and has the half-arrow over it in the label. The component form of $\stackrel{\rightharpoonup}{AB}$ combines the horizontal distance traveled and the vertical distance traveled; here the component form of $\stackrel{\rightharpoonup}{AB}$ is $\left \langle 3, 7 \right \rangle$, because $\stackrel{\rightharpoonup}{AB}$ travels 3 units horizontally and 7 units vertically.

Example A

Graph square $SQRE$ with $S(1, 2), Q(4, 1), R(5, 4)$, and $E(2, 5)$, and then find its image under the translation $(x, y) \rightarrow (x - 2, y + 3)$.

The translation notation tells us that we are going to move the square to the left 2 and up 3.

$(x, y) & \rightarrow (x - 2, y + 3)\\S(1,2) & \rightarrow S'(-1,5)\\Q(4,1) & \rightarrow Q'(2,4)\\R(5,4) & \rightarrow R'(3,7)\\E(2,5) & \rightarrow E'(0,8)$

Example B

Name the vector and write its component form.

The vector is $\stackrel{\rightharpoonup}{DC}$, with initial point $D$ and terminal point $C$. The component form of $\stackrel{\rightharpoonup}{DC}$ is $\left \langle -6, 4 \right \rangle$.

Example C

Name the vector and write its component form.

The vector is $\stackrel{\rightharpoonup}{EF}$; its component form is $\left \langle 4, 1 \right \rangle$.

Watch this video for help with the Examples above.

CK-12 Foundation: Chapter12TranslationsB

Concept Problem Revisited

a) $\stackrel{\rightharpoonup}{PS}= \left \langle -84, 187 \right \rangle, \stackrel{\rightharpoonup}{SU} = \left \langle -39, 108 \right \rangle, \stackrel{\rightharpoonup}{PU} = \left \langle -123, 295 \right \rangle$

b) $\stackrel{\rightharpoonup}{PF} = \left \langle 62, 91 \right \rangle, \stackrel{\rightharpoonup}{UF} = \left \langle 185, -204 \right \rangle$

c) You can plug the vector components into the Pythagorean Theorem to find the distances. Paso Robles is closer to Fresno than Ukiah. $UF = \sqrt{185^2 + (-204)^2} \cong 275.4 \ miles, PF = \sqrt{62^2 + 91^2} \cong 110.1 \ miles$

Vocabulary

A transformation is an operation that moves, flips, or otherwise changes a figure to create a new figure. A rigid transformation (also known as an isometry or congruence transformation) is a transformation that does not change the size or shape of a figure. The new figure created by a transformation is called the image. The original figure is called the preimage. A translation is a transformation that moves every point in a figure the same distance in the same direction. A vector is a quantity that has direction and size. The component form of a vector combines the horizontal distance traveled and the vertical distance traveled.

Guided Practice

1. Find the translation rule for $\triangle TRI$ to $\triangle T'R'I'$.

2. Draw the vector $\stackrel{\rightharpoonup}{ST}$ with component form $\left \langle 2, -5 \right \rangle$.

3. Triangle $\triangle ABC$ has coordinates $A(3, -1), B(7, -5)$ and $C(-2, -2)$. Translate $\triangle ABC$ using the vector $\left \langle -4, 5 \right \rangle$. Determine the coordinates of $\triangle A'B'C'$.

4. Write the translation rule for the vector translation from #3.

Answers:

1. Look at the movement from $T$ to $T'$. From $T$ to $T'$ the $x$-coordinate increases by 6 and the $y$-coordinate decreases by 4, so the rule is $(x,y) \rightarrow (x + 6, y - 4)$.

2. The graph is the vector $\stackrel{\rightharpoonup}{ST}$, drawn from initial point $S$.

3. It would be helpful to graph $\triangle ABC$. To translate it, add each component of the vector to each point to find $\triangle A'B'C'$:

$A(3, -1) + \left \langle -4, 5 \right \rangle & = A'(-1, 4)\\B(7, -5) + \left \langle -4, 5 \right \rangle & = B'(3,0)\\C(-2, -2) + \left \langle -4, 5 \right \rangle & = C'(-6, 3)$

4. To write $\left \langle -4, 5 \right \rangle$ as a translation rule: $(x, y) \rightarrow (x - 4, y + 5)$.

Practice

1. What is the difference between a vector and a ray?

Use the translation $(x, y) \rightarrow (x + 5, y - 9)$ for questions 2-8.

2. What is the image of $A(-6, 3)$?
3. What is the image of $B(4, 8)$?
4. What is the preimage of $C'(5, -3)$?
5. What is the image of $A'$?
6. What is the preimage of $D'(12, 7)$?
7. What is the image of $A''$?
8. Plot $A, A', A''$, and $A'''$ from the questions above.

The vertices of $\triangle ABC$ are $A(-6, -7), B(-3, -10)$ and $C(-5, 2)$. Find the vertices of $\triangle A'B'C'$ under the given translations.

9. $(x, y) \rightarrow (x - 2, y - 7)$
10. $(x, y) \rightarrow (x + 11, y + 4)$
11. $(x, y) \rightarrow (x, y - 3)$
12. $(x, y) \rightarrow (x - 5, y + 8)$

In questions 13-16, $\triangle A'B'C'$ is the image of $\triangle ABC$. Write the translation rule.

For questions 17-19, name each vector and find its component form.

20. The coordinates of $\triangle DEF$ are $D(4, -2), E(7, -4)$ and $F(5, 3)$. Translate $\triangle DEF$ using the vector $\left \langle 5, 11 \right \rangle$ and determine the coordinates of $\triangle D'E'F'$.

21. The coordinates of quadrilateral $QUAD$ are $Q(-6, 1), U(-3, 7), A(4, -2)$ and $D(1, -8)$. Translate $QUAD$ using the vector $\left \langle -3, -7 \right \rangle$ and determine the coordinates of $Q'U'A'D'$.
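The point-by-point arithmetic in Guided Practice #3, and the Pythagorean distance check from part c of the Concept Problem, are mechanical enough to script. This is my own sketch (the function names are not CK-12's):

```python
import math

def translate(points, vector):
    """Apply a translation vector <dx, dy> to each (x, y) point."""
    dx, dy = vector
    return [(x + dx, y + dy) for x, y in points]

triangle = [(3, -1), (7, -5), (-2, -2)]   # A, B, C from Guided Practice #3
image = translate(triangle, (-4, 5))
print(image)  # [(-1, 4), (3, 0), (-6, 3)] -> A', B', C'

def length(vector):
    """Vector length via the Pythagorean Theorem."""
    dx, dy = vector
    return math.hypot(dx, dy)

print(round(length((62, 91)), 1))     # PF ~ 110.1 (miles)
print(round(length((185, -204)), 1))  # UF ~ 275.4 (miles)
```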
Numerical comparison of Augmented Lagrangian algorithms for nonconvex problems (2004)

by E. G. Birgin, R. A. Castillo, J. M. Martínez
Venue: Computational Optimization and Applications
Citations: 29 - 1 self

author = {E. G. Birgin and R. A. Castillo and J. M. Martínez},
title = {Numerical comparison of Augmented Lagrangian algorithms for nonconvex problems},
journal = {Computational Optimization and Applications},
year = {2004},
volume = {31},
pages = {31--56}

Abstract: Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous Augmented Lagrangian algorithm for minimization with inequality constraints is known as the Powell-Hestenes-Rockafellar (PHR) method. The main drawback of PHR is that the objective function of the subproblems is not twice continuously differentiable. This is the main motivation for the introduction of many alternative Augmented Lagrangian methods.

Cited works (citation counts shown first):

1874 Numerical Optimization - Nocedal, Wright - 1999
1081 Practical methods of optimization - Fletcher - 2000
177 A method for nonlinear constraints in minimization problems. Optimization - Powell - 1969
133 Nonmonotone Spectral Projected Gradient Methods on Convex Sets - Birgin, Martínez, et al.
58 Algorithm 813: SPG software for convex-constrained optimization - Birgin, Martínez, et al.
42 On the convergence of the exponential multiplier method for convex programming - Tseng, Bertsekas - 1993
31 Augmented Lagrangian with adaptive precision control for quadratic programming with equality constraints - Dostal, Friedlander, et al. - 1999
19 Analysis and implementation of a dual algorithm for constrained optimization - Hager - 1993
18 Interior proximal and multiplier methods based on second order homogeneous kernels - Auslender, Teboulle, et al. - 1999
11 Benchmarking optimization software with performance profiles - Dolan, Moré - 2002
9 A class of exponential penalty functions - Murphy - 1974
7 An algorithm for solving nonlinear programming problems subject to nonlinear inequality constraints - Allran, Johnsen - 1970
7 Toint [1991], A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds - unknown authors
6 Large-scale active-set box-constrained optimization method with spectral projected gradients - Birgin, M - 2002
5 Métodos de Lagrangiano Aumentado usando penalidades generalizadas para programação não linear, Tese de Doutorado, COPPE, Universidade Federal do Rio de Janeiro, Rio de - Castillo - 1998
5 The Fritz-John necessary optimality conditions in presence of equality and inequality constraints - Mangasarian, Fromovitz - 1967
5 A generalized Lagrangian function and multiplier method - Samaya, Sawaragi - 1975
4 Bertsekas [1982], Constrained optimization and Lagrange multiplier methods - P
4 Hager [1987], Dual techniques for constrained optimization - W
4 Bertsekas [1973], Multiplier methods for convex programming - Kort, P
4 Teboulle [1997], Nonlinear rescaling and proximal-like methods in convex optimization - Polyak, M
3 Zibulevsky [1992], Penalty/barrier multiplier methods for minimax and constrained smooth convex programs - Tal, Yuzefovich, et al.
3 Zibulevsky [1997], Penalty/barrier multiplier methods for convex programming problems - Ben-Tal, M
3 Santos [2002], Solution of contact problems by FETI domain decomposition with natural coarse space projection - Dostal, Gomes, et al.
3 Hestenes [1969], Multiplier and gradient methods - R
3 Silva [2000], Strict convex regularizations, proximal points and Augmented - Humes, S
3 Iusem [1999], Augmented Lagrangian methods and proximal point methods for convex optimization - N
3 Kiwiel [1997], Proximal minimization methods with generalized Bregman functions - C
3 Bertsekas [1976], Combined primal-dual and penalty methods for convex programming - Kort, P
3 Matioli [2001], Uma nova metodologia para construção de funções de penalização para algoritmos de Lagrangiano Aumentado, Tese de Doutorado, Universidade Federal de Santa Catarina - C
3 Rockafellar [1973], The multiplier method of Hestenes and Powell applied to convex programming - T
3 Xavier [1992], Penalização hiperbólica, Tese de Doutorado, COPPE, Universidade Federal do Rio de Janeiro
2 Castillo [2003], A nonlinear programming algorithm based on non-coercive penalty functions - Gonzaga, A
2 Polyak [2001], Log-sigmoid multiplier method in constrained optimization - A
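As a concrete illustration of the PHR scheme named in the abstract (my own minimal sketch, not the authors' code): minimize $f(x)=x^2$ subject to $x \ge 1$, i.e. $g(x) = 1 - x \le 0$. Each outer iteration approximately minimizes the PHR augmented Lagrangian in $x$, then applies the first-order multiplier update. The `max(0, ...)` term is exactly why the subproblem objective is not twice continuously differentiable, the drawback the abstract points out:

```python
def phr_example(rho=10.0, outer=15, inner=500, lr=0.05):
    """PHR method on: min x^2  subject to  g(x) = 1 - x <= 0.
    L(x, lam) = x^2 + (1/(2*rho)) * (max(0, lam + rho*g(x))^2 - lam^2)."""
    x, lam = 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):                 # inner solve: plain gradient descent
            u = lam + rho * (1.0 - x)          # shifted multiplier term lam + rho*g(x)
            grad = 2.0 * x - max(0.0, u)       # dL/dx, using dg/dx = -1
            x -= lr * grad
        lam = max(0.0, lam + rho * (1.0 - x))  # first-order multiplier update
    return x, lam

x, lam = phr_example()
print(round(x, 3), round(lam, 3))  # converges to x = 1.0, lam = 2.0 (the KKT multiplier)
```

The outer iterates contract toward the KKT pair geometrically here; in practice the inner subproblems are solved with far more capable methods (e.g. the spectral projected gradient codes cited above).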
locally algebraic scheme

A scheme over a field $k$ is called a locally algebraic scheme if it has a covering $\{X_i\}$ of (open affine subfunctors which are) affine schemes and all $O(X_i)$ are finitely generated $k$-algebras, i.e. if it is locally of finite type over $Spec k$. A scheme is called an algebraic scheme if moreover there exists a finite such covering, i.e. if it is of finite type over $Spec k$.

Revised on March 6, 2013 19:23:27 by Zoran Škoda
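For concreteness (an illustration added here, not part of the original entry): affine $n$-space over $k$ is an algebraic scheme, while a countably infinite disjoint union of affine lines is locally algebraic but not algebraic, since it is locally of finite type but not quasi-compact:

```latex
\mathbb{A}^n_k = \operatorname{Spec}\, k[x_1,\dots,x_n]
\quad \text{(algebraic)},
\qquad
\coprod_{i \in \mathbb{N}} \mathbb{A}^1_k
\quad \text{(locally algebraic, not algebraic)}
```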
Sammamish Calculus Tutor ...My job is not just to get you through your current assignment (though I can certainly do that, too!) but to make sure you have the tools you need to get yourself through the next one. I also find that practice problems are incredibly helpful. For example, if a student is struggling with a math ... 35 Subjects: including calculus, English, writing, reading ...Cancellations should be made 6 hours in advance; any later cancellation can result in a late-cancellation fee. I have been using Windows since Windows 95, when I was 6. Growing up, I have been the "tech guy" for my entire family. 17 Subjects: including calculus, chemistry, physics, geometry ...I'm currently a Math and Science tutor at a school. In college, I completed math through Calculus 3 and am proficient in Advanced Trigonometry, Calculus 1 and 2. My favorite types of math are Trigonometry, Geometry, Algebra 1 and 2. 26 Subjects: including calculus, chemistry, physics, geometry ...I can also help with programming. I earned a Bachelor of Science in Computer Science and in Computer Engineering at UW in Tacoma. My primary programming language is currently Java. 16 Subjects: including calculus, chemistry, French, geometry ...I have excellent interpersonal and communication skills. I am an experienced professional with teaching experience in the fields of Mathematics (Pre-calculus, Calculus, Differential Equations) and Medical and Biological Physics in the position of associate professor. I have extensive experience in ... 9 Subjects: including calculus, physics, geometry, algebra 1
Auburn, MA Statistics Tutor Find an Auburn, MA Statistics Tutor ...I live in Gardner, MA, but anywhere in central Mass. is perfectly fine. A little about me...I have a Bachelor of Science degree in Mathematics, graduating Summa Cum Laude with a GPA of 3.92. I have over 8 years of experience tutoring math at colleges and on a private basis. 15 Subjects: including statistics, calculus, geometry, algebra 1 ...I was a Summit private tutor for SAT, both Math and English, and an SAT instructor for Princeton Review and Kaplan. 67 Subjects: including statistics, reading, English, calculus Hello students and parents, My name is Justin, and I am a tutor specializing in math and SAT prep. I enjoy working with students and sharing my passion for math. It is always my goal to not only help students understand the concepts, but also to make things fun by explaining real-life applications. 32 Subjects: including statistics, physics, calculus, geometry ...I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics. Seasonally I work with students on SAT preparation, which I love and excel at. 15 Subjects: including statistics, physics, calculus, geometry ...I help my students acquire real mastery by fitting the specific topics they study into a bigger picture. I'm especially familiar with calculus courses at many local high schools, as well as courses at Northeastern, BU, and Harvard. I earned a 5 on the AP Chemistry exam in high school, and as an... 47 Subjects: including statistics, English, reading, chemistry
What's Up In Finance? | Thirteen/WNET In this lesson, students will learn personal financial management strategies based on budgeting. Students will learn the theoretical concepts involved with budgeting and financial management, including income, expenses, savings, and debt; and will then watch a clip of What's Up in Finance? to see how a college student learned to manage his budget. Students will then complete hands-on activities to create three different saving and spending scenarios based on their own lives and expenses. The amounts of money spent and saved will differ in each scenario, with scenario 1 having some debt, scenario 2 having no debt but almost no savings, and scenario 3 having significant savings. After completing these activities, students will use the online interactivity "Bank It or Bust" to apply the financial management concepts and strategies they have learned. As a final activity, students will brainstorm ways to manage their own budgets while making room for investments, like classes, that will help their personal development in the long run. In this way, students examine ways to save money to "invest" in themselves for a long-term return. 3 classes at 45 minutes per class Math, Finance, Economics Students will be able to: • Understand the components of a budget • Compute savings • Compute debt • Learn financial management • Learn the nature of opportunity costs • Understand the importance of self-regulation 1. National Council of Teachers of Mathematics Principles and Standards for School Mathematics Number and Operations □ Understand numbers, ways of representing numbers, relationships among numbers, and number systems; □ Understand meanings of operations and how they relate to one another; □ Compute fluently and make reasonable estimates.
Problem Solving □ Build new mathematical knowledge through problem solving; □ Solve problems that arise in mathematics and in other contexts; □ Apply and adapt a variety of appropriate strategies to solve problems; □ Monitor and reflect on the process of mathematical problem solving. □ Create and use representations to organize, record, and communicate mathematical ideas; □ Select, apply, and translate among mathematical representations to solve problems; □ Use representations to model and interpret physical, social, and mathematical phenomena. 2. JumpStart Coalition for Personal Financial Literacy Money Management Students will be able to: 1. Explain how limited personal financial resources affect the choices people make. 2. Identify the opportunity cost of financial decisions. 3. Discuss the importance of taking responsibility for personal financial decisions. 4. Apply a decision-making process to personal financial choices. 5. Explain how inflation affects spending and investing decisions. 6. Describe how insurance and other risk-management strategies protect against financial loss. 7. Design a plan for earning, spending, saving, and investing. 8. Explain how to use money-management tools available from financial institutions 3. Mid-continent Research for Education and Learning (McREL) Benchmarks for Economics Standard 1 Understands that scarcity of productive resources requires choices that generate opportunity costs Standard 7 Understands savings, investment, and interest rates Video: Web site: BANK IT OR BUST http://www.thirteen.org/finance/games/bankorbust.html The goal of this game is to budget expenses with the aim of saving for a short-term purchase (a car), and to see the results of saving over a long-term for a larger purchase (a home). The player is a teen working a full-time retail job for the summer. The player will develop a weekly budget of expenses, modifying their behavior so as to create savings. 
Then, the player will move through the ten weeks of summer vacation, with each week presenting an opportunity for savings or spending. Teachers will need the following supplies: • Computer with connection to a screen or television on which to project the Web-based video clips, or computer stations where students can watch the clips • Board and/or chart paper • "Financial Management Terms" Teacher Organizer Students will need the following supplies: • Computers with internet access (for individuals or groups) • Notebook or journal • Pens/pencils • Calculator • "Dreams for the Future" Student Organizer • "Debt/Savings" Student Organizer 1. Bookmark the Web sites used in the lesson on each computer in your classroom, or upload all links to an online bookmarking utility such as www.portaportal.com. 2. Preview all of the video clips and Web sites used in the lesson to make certain that they are appropriate for your students, currently available, and accessible from your classroom. 3. Download the video clips used in this lesson onto your hard drive or a portable storage device, or prepare to stream the clips from your classroom. 4. Print out the "Financial Management Terms" Teacher Organizer, to copy the terms and definitions on the board. 5. Print out the Student Organizers: "Dreams for the Future" and "Debt/Savings," and make enough copies so that each student has one copy of each organizer. 6. When using media, provide students with a FOCUS FOR MEDIA INTERACTION, a specific task to complete and/or information to identify during or after viewing of video segments, Web sites, or other multimedia elements. 1. Open the discussion by asking if any of the students have saved money, or maintain a budget of any kind. 2. Write the following terms and their definitions on the board: budget, income, expense, savings, and debt. (see "Financial Management Terms" Teacher Organizer) 3. 
Discuss with the students what each term means, and how savings and debt are a result of different combinations of income and expenses. 4. Discuss with the students any dreams they have for the future, and what investments they could make now to achieve those dreams. For instance, if one person would like to play in a band, they may need guitar lessons now to achieve that dream in the future. Ask the students to explain how they might save enough money to pay for the current expense necessary to achieve that dream. 5. Pass out the "Dreams for the Future" Student Organizer to each student. Ask students to brainstorm four different dreams for the future, and the steps they need to take now to achieve that dream, along with the potential cost of the current steps. 1. Now ask if any of the students have thought about living on their own. Make a list of some of the costs that students might have to consider if they were living on their own (for example: rent, food, transportation, laundry, phone and internet, entertainment). 2. Explain to the class that they will be watching a short video where they will meet Eddie, a young person who is living on his own for the first time. Eddie has faced some challenges maintaining a budget and saving money. 3. Provide students with a FOCUS FOR MEDIA INTERACTION, asking them to think about the areas where Eddie could spend less money. 4. Play the "Moving Out" segment for the class. 5. Have a short discussion about the "Moving Out" segment that was viewed, stressing to students that Eddie had to re-work his budget in order to afford his dream: attending a four-year college. Review some of the ways Eddie adjusted his budget to spend less money. 6. Ask students to discuss why attending a four-year college might help Eddie achieve his future goals. 7. Hand out the "Debt/Savings" Student Organizer. Explain that this is a template to use for hypothetical budgets. 
The students will be filling them out based on their own expenses or projected expenses, but all the scenarios assume that the students are making $500 a month in income. 8. Ask students to complete the column titled "Scenario 1: Debt" with as much expense as they would like in each category. They need to spend more than $500. If the students think of additional expenses beyond those listed on the organizer, they can add them below "Books/Magazines" on the chart. 9. Students will then add up their expenses, and subtract them from the income figure of $500. The expenses should be higher than $500, so the total will be negative. This means the students have gone into debt and the debt figure should go in the "Debt" row. 10. Next, ask students to fill out the column titled "Scenario 2: Break-Even." The goal here is to spend exactly $500. This means that students will have to lower their expenses in certain areas from the "Debt" column. 11. Students will add up their expenses to confirm that their expense and income totals are equal. They will then have no savings or debt, so a "0" should go in the "Debt" and "Savings" rows. 12. Then, ask students to complete the column titled "Scenario 3: Savings." The goal with this column is to lower their costs below those of the "Break-even Scenario" in order to have savings. 13. Students will add up their expenses and subtract the total from the income figure of $500. This figure should go in the "Savings" row. 14. Discuss with students how they were able to achieve savings. Ask students how they set their spending priorities in order to lower some expenses, and ultimately save money. 15. Explain that the students will play the online interactivity "Bank It or Bust" to utilize the financial management concepts and strategies they have learned. In "Bank it or Bust," the students will create and modify a budget, and then stick to it for ten weeks with the goal of saving up to buy a car. 
Provide students with a FOCUS FOR MEDIA INTERACTION, asking them to play the interactivity and to be able to list some strategies for saving money. 16. After the students have finished playing "Bank it or Bust," review some of the savings strategies they used in the interactivity. Ask the students if they can name things that happened to them during the game that provided them with extra cash (answers will vary). Ask them to name some "emergencies" that occurred that required the students to use some of their savings (answers will vary). 1. Ask students to take out their "Dreams for the Future" Student Organizer and to think back to the initial discussion regarding their dreams. Have the students review the monthly cost of the current activities they brainstormed - the "personal investments" that would help them in the future. 2. Next, ask students to take out their "Debt/Savings" Student Organizer, and look at their figures for "Scenario 3: Savings." 3. Students should look at the monthly savings figure, and determine if they have saved enough for their "personal investment." 4. If students have not saved enough for their "personal investment," they should re-work their Savings Scenario on the Student Organizer one more time to come up with the necessary monthly savings to cover these investments in their personal development. Lesson plan written by Melissa Donohue
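The three budget scenarios in the lesson all reduce to one computation: subtract total expenses from the fixed $500 monthly income, yielding debt (negative balance), break-even (zero), or savings (positive). A minimal sketch of that computation follows; the expense categories and dollar amounts are illustrative examples only, not figures from the lesson plan:

```python
# Budget scenario check: income minus expenses yields savings (positive
# balance) or debt (negative balance). Amounts below are illustrative.

INCOME = 500  # fixed monthly income assumed by all three scenarios

def budget_result(expenses):
    """Return (label, amount) for a dict of expense categories."""
    balance = INCOME - sum(expenses.values())
    if balance < 0:
        return "debt", -balance
    if balance == 0:
        return "break-even", 0
    return "savings", balance

scenario_1 = {"rent": 300, "food": 150, "phone": 60, "fun": 40}  # overspends
scenario_2 = {"rent": 300, "food": 120, "phone": 50, "fun": 30}  # exactly 500
scenario_3 = {"rent": 280, "food": 100, "phone": 40, "fun": 20}  # saves

for name, s in [("Scenario 1", scenario_1), ("Scenario 2", scenario_2),
                ("Scenario 3", scenario_3)]:
    label, amount = budget_result(s)
    print(f"{name}: {label} of ${amount}")
# Scenario 1: debt of $50
# Scenario 2: break-even of $0
# Scenario 3: savings of $60
```

Students re-working a scenario are, in effect, adjusting the expense values until the balance moves from negative to positive.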
Slide 1 Slide 2 Terms & Concepts What is the IV flow rate? The speed at which intravenous fluid infuses into the body. What is the drop factor? The number of drops (abbreviated “gtt”) required to deliver 1mL of fluid. What are the tubing sizes? 10, 15, or 20 gtt/mL (Macrodrip tubing); 60 gtt/mL (Microdrip tubing). What determines gtt/mL? The size of the IV administration set (tubing). How do you calculate the IV flow rate? Using the IV flow rate formula. Slide 3 IV Flow Rate Formula The IV Flow Rate Formula is used when calculating an infusion by gravity (without an IV pump): Flow Rate (in gtt/min) = Volume to be Infused (in mL) x Drop Factor (in gtt/mL) ÷ Total Infusion Time (in minutes). The Electronic Flow Rate Formula is used when calculating an infusion by IV pump (electronic infusion device, or EID): Flow Rate (in mL/h) = Volume to be Infused (in mL) ÷ Total Infusion Time (in hours). *Round all rates to the nearest whole number. Slide 4 Dosage Calculations Complete necessary conversions (i.e. dosage per weight, mass, volume, etc.) prior to using either of the two previous flow rate formulas. Conversions may be calculated using: • Ratio/Proportion • Dimensional Analysis • Whatever method you are most comfortable with and use consistently/correctly Slide 5 Slide 6 Order: Infuse 800mL of NS in 10 hours. • This is a straightforward IV flow rate calculation, in which no conversion is required. • We know we will be using the Electronic Flow Rate Formula because we were not given the tubing size or drop factor, which would be required to calculate a gravity flow rate. • The correct calculation is: 800mL ÷ 10 hours = 80mL/h. Slide 7 Order: Infuse 150mL of D5W in 30 minutes using an administration set with a drop factor of 20gtt/mL. • This is also a straightforward IV flow rate calculation, in which no conversion is required. • We know we will be using the IV Flow Rate Formula because we were given the drop factor.
• The correct calculation is: 150mL x 20gtt/mL ÷ 30 minutes = 100gtt/min, using Flow Rate (in gtt/min) = Volume to be Infused (in mL) x Drop Factor ÷ Total Infusion Time (in minutes). Slide 8 Order: Infuse 100mL of LR by IV pump in 20 minutes. • We know we will be using the Electronic Flow Rate Formula because we were directed to use an IV pump, and because we were not given the tubing size or drop factor. • Before using our flow rate formula, we must complete the necessary conversion(s). □ To use our Electronic Flow Rate Formula, we need the total volume to be infused in mL (which we know), and the total infusion time in hours (which we don’t know). □ We must convert minutes to hours using our chosen dosage calculation method. Slide 9 Order: Infuse 100mL of LR by IV pump in 20 minutes. • Our calculation method of choice is ratio/proportion, which involves a 3-part process: 1) Set up the ratio, 2) Cross-multiply, 3) Isolate X by dividing both sides by its coefficient. Step 1: 20 min / 60 min = X hours / 1 hour. Step 2: 60(X) = 20(1), or 60X = 20. Step 3: X = 20 ÷ 60 ≈ 0.33 hours. Slide 10 Order: Infuse 100mL of LR by IV pump in 20 minutes. 100mL ÷ 0.33 hours ≈ 303mL/h (using the exact value 20/60 = 1/3 hour gives 300mL/h). Slide 11 Order: Infuse 200mL of D5W in 4 hours using a Microdrip tubing. • We know we will be using the IV Flow Rate Formula because we were given a tubing size (remember that a Microdrip administration set delivers 60gtt/mL). • Before using our flow rate formula, we must complete the necessary conversion(s). □ To use our IV Flow Rate Formula, we need the total volume to be infused in mL (which we know), the drop factor (which we also know), and the total infusion time in minutes (which we don’t know). □ We must convert hours to minutes using our chosen dosage calculation method. Slide 12 Order: Infuse 200mL of D5W in 4 hours using a Microdrip tubing. • Complete the conversion calculation.
Step 1: 4 hours / 1 hour = X minutes / 60 minutes. Step 2: 1(X) = 4(60), or X = 240. Step 3: X = 240 minutes. Slide 13 Order: Infuse 200mL of D5W in 4 hours using a Microdrip tubing. 200mL x 60gtt/mL ÷ 240 minutes = 50gtt/min. Slide 14 When preparing to tackle any type of IV flow rate calculation, be sure to determine what, if any, conversions need to take place first. Once you have completed all necessary conversions, you are ready to calculate the IV flow rate using one of the two IV flow rate formulas we’ve discussed in this tutorial. Remember, it is always best to be consistent with the dosage calculation method you choose to use when completing this type of problem. Slide 15 Slide 16 Order: Give 500mg of dopamine in 250mL of D5W to infuse at 20mg/h. Calculate the flow rate in mL/h. *In order to know how many mL we need to infuse in 1 hour per the IV pump, we need to convert our dosage needed (20mg) into its equivalence in mL. *Set up the ratio, and then cross-multiply: 500mg / 20mg = 250mL / X mL, so 500X = 250(20). *Isolate X and solve: 500X = 5000, so X = 10mL. *There are 20mg of dopamine in 10mL of solution, so we will program our IV pump at 10mL/h. Slide 17 Your patient has an order to receive 800U of heparin per hour by continuous intravenous infusion. If the pharmacy mixes the IV bag to contain a total of 5,000U of heparin in 500mL of D5W, how many mL per hour should the patient receive? *In order to calculate mL/hour, we need to convert our dosage needed (800U) into its equivalence in mL. *Set up the ratio, and then cross-multiply: 5000U / 800U = 500mL / X mL, so 5000X = 500(800). *Isolate X and solve: 5000X = 400,000, so X = 80mL. *There are 800U of heparin in 80mL of solution, so we will program our IV pump at 80mL/h. Slide 18 Order: 21.7mg of dopamine in 105mL of D5W to be infused at a rate of 9mg/h. Calculate the flow rate in mL/h. *In order to calculate mL/hour, we need to convert our dosage needed (9mg) into its equivalence in mL.
*Set up the ratio, and then cross-multiply: 21.7mg / 9mg = 105mL / X mL, so 21.7X = 9(105). *Isolate X and solve: 21.7X = 945, so X = 945 ÷ 21.7 = 43.55mL. *There are 9mg of dopamine in 43.55mL of solution, so we will program our IV pump at 44mL/h. Slide 19 Order: Aggrastat at 12.5mg in 250mL to be infused at a rate of 6 mcg/kg/hr in a patient who weighs 100kg. At what flow rate will you set the IV pump? *For problems that include a weight/mass, do that conversion 1st: 6mcg/kg x 100kg = 600mcg/hr. *Since our ordered dose is in mg, we need to convert mcg to mg: 600mcg ÷ 1000 = 0.6mg/hr. *In order to calculate mL/hour, we need to convert our dosage needed (0.6mg) into its equivalence in mL. *Set up the ratio, and then cross-multiply: 12.5mg / 0.6mg = 250mL / X mL, so 12.5X = 0.6(250). *Isolate X and solve: 12.5X = 150, so X = 150 ÷ 12.5 = 12mL. *There are 0.6mg of Aggrastat in 12mL of solution, so we will program our IV pump at 12mL/h. Slide 20 A 1000cc solution of D5NS with 20,000U of heparin is infusing at 20mL/h. The IV set delivers 60gtt/mL. How many units of heparin is the patient receiving each hour? *This is a reverse calculation, as we already know the electronic flow rate (mL/h). We will use our dosage calculation method to work through the problem. *Set up the ratio, and then cross-multiply: 20,000U / X U = 1000mL / 20mL, so 1000X = 20(20,000). *Isolate X and solve: 1000X = 400,000, so X = 400U. *There are 400U of heparin in 20mL of solution, which means the patient is receiving 400U of heparin per hour. The drop factor is simply a distracter, and is not used in this problem. Slide 21 The physician orders an IV infusion of D5W 1000mL to be infused over the next 8 hours. The IV tubing you are using delivers 15gtt/mL. What is the correct rate of flow? *In order to use the IV Flow Rate Formula, we need to know time in minutes instead of hours, which is the only conversion we will need to do to solve this problem.
*We can eliminate the extra steps of a time conversion by incorporating it into our formula: 1000mL x 15gtt/mL ÷ (8 h x 60 min) = 15,000 ÷ 480 = 31.25. *Remember our “mL” labels are cancelled out during the calculation process, and we must round all flow rates, leaving us with 31gtt/min as the correct rate of flow. Slide 22 Your patient has an order to infuse 10mEq of KCl in 100mL of D5½NS over the next 30 minutes. The set calibration is 10gtt/mL. What is the correct rate of flow? *We will be using the IV Flow Rate Formula, and no conversion calculation is needed: 100mL x 10gtt/mL ÷ 30 min = 1000 ÷ 30 = 33.33. *The correct rate of flow is 33gtt/min. *Do not be confused by extra numbers, such as are used in the name of a solution (i.e. D5W or ½NS). These have no bearing on your dosage calculation. *Be careful in determining which value(s) are pertinent in solving your problem. In this example, the 10mEq of KCl is not a necessary component in terms of calculating the correct rate of flow, so ignore it! Slide 23 The 0900 medications scheduled for your patient include Keflex 1.5g in 50mL of a 5% dextrose solution. According to the pharmacy, this preparation should be administered in 30 minutes. The IV tubing on your unit delivers 15gtt/mL. What is the correct rate of flow in gtt/min? *We will be using the IV Flow Rate Formula, and no conversion calculation is needed: 50mL x 15gtt/mL ÷ 30 min = 750 ÷ 30 = 25. *The correct rate of flow is 25gtt/min. *Again, remember the 5% dextrose is just describing the type of solution, and the Keflex 1.5g does not have anything to do with calculating the flow rate for this problem. Slide 24 On Wednesday afternoon, your patient returns from surgery with an IV fluid order for 1000mL every 8 hours. On Thursday morning at 0800, you note that 600mL of a 1L bag has been infused. The physician orders the remainder of the bag to infuse over the next 6 hours. The IV tubing used by your unit delivers 10gtt/mL. What is the correct rate of flow?
*There are 400mL remaining in the IV bag, which needs to be infused in 6 hours. *We need to know time in minutes instead of hours, which can be incorporated into our IV Flow Rate Formula calculation: 400mL x 10gtt/mL ÷ (6 h x 60 min) = 4000 ÷ 360 = 11.11. *The correct rate of flow is 11gtt/min. Slide 25 The physician orders 1.5L of Lactated Ringers solution to be administered intravenously to your patient over the next 12 hours. Calculate the rate of flow if the IV tubing delivers 20gtt/mL. *In order to utilize the IV Flow Rate Formula, we must convert liters to milliliters and hours to minutes, both of which can be completed as part of the formula calculation: (1.5L x 1000) x 20gtt/mL ÷ (12 h x 60 min) = 30,000 ÷ 720 = 41.67. *The correct rate of flow is 42gtt/min. Slide 26 Slide 27 #1: The physician orders 200mg of Rocephin to be taken by a 15.4 lb infant every 8 hours. The medication label shows that 75-150 mg/kg/day is the appropriate dosage range for this medication. Is the order within the desired range? Slide 28 Convert weight (lbs to kg): 15.4 lbs ÷ 2.2 = 7kg. Calculate minimum and maximum dosage: 75mg x 7kg = 525mg (minimum daily therapeutic dose); 150mg x 7kg = 1050mg (maximum daily therapeutic dose). Determine if the order is within range: 200mg every 8 hours (or 200mg x 3 times per day) = 600mg. Solve the problem: Yes, 600mg/day of Rocephin is within the desired daily range of 525mg to 1050mg for this patient. Slide 29 #2: Solumedrol 1.5mg/kg is ordered for a child weighing 34 kg. This medication is available as 125mg/2mL. How many mL will you administer? Slide 30 Calculate mg according to weight: 1.5mg/kg x 34 kg = 51mg. Convert mg to mL using your chosen dosage calculation method: 125mg / 51mg = 2mL / X mL, so 125X = 51(2); 125X = 102, so X = 0.82mL. Solve the problem: You will administer 0.82mL of Solumedrol to this patient. Slide 31 #3: You are to infuse 800mL of Lactated Ringers over 20 hours using an IV administration set that delivers 20gtt/mL. What is the drip rate?
Slide 32 Solve using the IV Flow Rate Formula: 800mL x 20gtt/mL ÷ (20 h x 60 min) = 16,000gtt ÷ 1200 min = 13gtt/min. Slide 33 Calculate the mL/h of the following orders via IV pump: #4. Administer 1500mL of 0.9 NS in 24 hours. #5. Administer 750mL of LR in 16 hours. #6. Administer 500mL of D5W in 12 hours. #7. Administer 2000mL of D5W in 24 hours. Slide 34 Solve the previous calculations using the Electronic Flow Rate Formula: 1500mL ÷ 24 hours = 63mL/h; 750mL ÷ 16 hours = 47mL/h; 500mL ÷ 12 hours = 42mL/h; 2000mL ÷ 24 hours = 83mL/h. Slide 35 #8 Order: Administer 30mL of Ancef in 0.9 NS over 20 minutes via intravenous infusion pump. Slide 36 Convert minutes to hours: 20 minutes / 60 minutes = X hours / 1 hour, so 60X = 20(1); X = 0.333… hours. Solve using the Electronic Flow Rate Formula: 30mL ÷ 0.333… hours = 90mL/h. Slide 37 #9 Order: Administer 100mL of ½NS in 45 minutes. Slide 38 Convert minutes to hours: 45 minutes / 60 minutes = X hours / 1 hour, so 60X = 45(1); X = 0.75 hours. Solve using the Electronic Flow Rate Formula: 100mL ÷ 0.75 hours = 133mL/h. Slide 39 #10 Order: Administer 150mL of D5W in 30 minutes. Slide 40 Convert minutes to hours: 30 minutes / 60 minutes = X hours / 1 hour, so 60X = 30(1); X = 0.5 hours. Solve using the Electronic Flow Rate Formula: 150mL ÷ 0.5 hours = 300mL/h. Slide 41 #11 Your patient is to receive 2000mL of D5W with a flow rate of 160mL/h. How long will this order take to infuse? Slide 42 Solve using your chosen dosage calculation formula: 2000mL / 160mL = X hours / 1 hour, so 160X = 2000(1); X = 12.5 hours. *It will take 12½ hours to infuse 2000mL of D5W to your patient under this order. Slide 43 #12 Your patient has been prescribed 1L of NS. The IV pump is set at 150mL/h. How long will this order take to infuse?
Slide 44 Solve using your chosen dosage calculation formula: 1L x 1000 = 1000mL; 1000mL / 150mL = X hours / 1 hour, so 150X = 1000(1); X = 6.7 hours. *It will take just under 7 hours to infuse 1000mL of NS to your patient under this order. Slide 45 #13 The physician orders D5W IV at 125mL/h. The infusion set is calibrated for a drop factor of 10gtt/mL. Calculate the IV flow rate in gtt/min. Slide 46 Solve using the IV Flow Rate Formula: 125mL x 10gtt/mL ÷ (1 h x 60 min) = 1250gtt ÷ 60 min = 21gtt/min. Slide 47 #14 Order: 150mL Lactated Ringers solution to infuse in 30 minutes. The drop factor is 15gtt/mL. Slide 48 Solve using the IV Flow Rate Formula: 150mL x 15gtt/mL ÷ 30 min = 75gtt/min. Slide 49 #15 Order: Cefazolin 0.5g in 100mL D5W IV piggyback to run over 30 minutes. The drop factor is 20gtt/mL. What is the drip rate? Slide 50 Solve using the IV Flow Rate Formula: 100mL x 20gtt/mL ÷ 30 min = 67gtt/min. Slide 51 #16 Order: Ampicillan 500mg IV in 100mL of NS to infuse over 45 minutes. How will you program the infusion pump? Slide 52 Convert minutes to hours: 45 minutes / 60 minutes = X hours / 1 hour, so 60X = 45(1); X = 0.75 hours. Solve using the Electronic Flow Rate Formula: 100mL ÷ 0.75 hours = 133mL/h. Slide 53 #17 Order: Bactrim 500mg IV in 50mL D5½NS in 30 minutes by IV pump. What is the mL/h? Slide 54 Convert minutes to hours: 30 minutes / 60 minutes = X hours / 1 hour, so 60X = 30(1); X = 0.5 hours. Solve using the Electronic Flow Rate Formula: 50mL ÷ 0.5 hours = 100mL/h. Slide 55 #18 Order: NS 1800mL IV to infuse in 15 hours by infusion pump. Calculate the flow rate. Slide 56 Solve using the Electronic Flow Rate Formula: 1800mL ÷ 15 hours = 120mL/h. Slide 57 #19 You receive a physician’s order for D5W 250mL IV over the next 2 hours by infusion pump. What is the mL/h? Slide 58 Solve using the Electronic Flow Rate Formula: 250mL ÷ 2 hours = 125mL/h. Slide 59 #20 You receive an order for D5W 500mL with heparin 25,000U IV at 850U per hour.
Calculate the flow rate in mL/h. Slide 60 Convert U/h to mL/h to solve this problem: 25,000U / 850U = 500mL / X mL, so 25,000X = 850(500); 25,000X = 425,000, so X = 425,000 ÷ 25,000 = 17mL/h. *The flow rate is 17mL/h. Slide 61 #21 You receive an order for D5W 1000mL IV to infuse at 50mL/h to begin at 0600. At what time will this IV be complete? Slide 62 Solve using your chosen dosage calculation formula: 1000mL / 50mL = X hours / 1 hour, so 50X = 1000(1); X = 20 hours. *This IV will be complete at 0200 the following morning. Slide 63 #22 You receive an order for LR solution 1000mL IV to run at 125mL/h. How long will this IV last? Slide 64 Solve using your chosen dosage calculation formula: 1000mL / 125mL = X hours / 1 hour, so 125X = 1000(1); X = 8 hours. *This IV will last for 8 hours. Slide 65 #23 Order: Dobutamine 250mg in 250mL D5W per IV to infuse at 5mcg/kg/min. The client’s weight is 80 kg. Calculate the flow rate using an infusion pump. Slide 66 Calculate mcg according to weight: 5mcg/kg x 80 kg = 400mcg/min. Convert mcg/min to mg/min: 400mcg ÷ 1000 = 0.4mg/min. Convert mg/min to mg/hour: 0.4mg x 60 minutes = 24mg/h. Convert mg to mL based on dosage on hand: 250mg / 24mg = 250mL / X mL, so 250X = 24(250); 250X = 6000, so X = 24mL. Solve using the Electronic Flow Rate Formula: 24mL/h via IV pump. Slide 67 #24 Order: Lidocaine 2g in 500mL D5W IV to run at 4 mg/min. What is the flow rate per IV pump? Slide 68 Convert g to mg: 2g x 1000 = 2000mg. Convert mg/min to mL/min based on dosage on hand: 2000mg / 4mg = 500mL / X mL, so 2000X = 4(500); 2000X = 2000, so X = 1mL/min. Convert mL/min to mL/h to solve the problem: 1mL x 60 minutes = 60mL/h. Slide 69 #25 Order: Ancef 1g in 100mL D5W IV piggyback to be infused over 45 minutes. You are using a Microdrip tubing. What is the flow rate in gtt/min? Slide 70 Solve using the IV Flow Rate Formula: 100mL x 60gtt/mL ÷ 45 min = 133gtt/min. Slide 71 #26 Order: 50mL Zofran solution IV piggyback to infuse over 30 minutes. The drop factor is 60gtt/mL.
What is the flow rate in gtt/min? Slide 72 Solve using the IV Flow Rate Formula: 50mL x 60gtt/mL = 100gtt/min 30 min Slide 73 #27 Order: 1000mL D5W IV q 24 hours. The drop factor is 60gtt/mL. What is the flow rate in gtt/min? Slide 74 Convert hours to minutes and solve using the IV Flow Rate Formula: 1000mL x 60gtt/mL = 60,000 = 42gtt/min 24 hours x 60 minutes 1440 Slide 75 #28 A dose strength of gr ¼ of an IV push medication is ordered. The available dosage is 15mg/mL. How many mL will you administer? Slide 76 Convert gr to mg: 1grain = 60mgso 1X = ¼(60) ¼ grain X mg 1X = 15so X = 15mg Solve the problem: 15mg = 1mL, so you will administer 1mL. Slide 77 #29 You have an order for epinephrine to be infused at 30mL/h. The solution available is 2mg of epinephrine in 250mL D5W. Calculate the mcg/min. Slide 78 Convert mL/h to mg/h based on dosage on hand: 2mg = 250mLso 250X = 2(30) X mg 30mL 250X = 60so X = 0.24mg/h Convert mg/h to mcg/h: 0.24mg x 1000 = 240mcg/h Convert mcg/h to mcg/min to solve the problem: 240mcg ÷ 60 minutes = 4mcg/min Slide 79 #30 Aminophyline 0.25g is added to 500mL D5W to infuse in 8 hours. Calculate the mg/h. Slide 80 Use the Electronic Flow Rate Formula to determine mL/h: 500mL ÷ 8h = 62.5mL/h Convert mL/h to g/h based on dosage on hand: 0.25g = 500mLso 500X = 0.25(62.5) X g 62.5mL 500X = 15.6so X = 0.03g/h Convert g/h to mg/h to solve the problem: 0.03g x 1000 = 30mg/h Slide 81 #31 You receive an order to infuse 500mL ½NS with 30,000U heparin at 600U/h. The drop factor is 60gtt/mL. What is the gtt/min? Slide 82 Convert U/h to mL/h per on dosage on hand: 30,000U = 500mLso 30,000X = 500(600) 600U XmL 30,000X = 300,000so X = 10mL/h 30,000 30,000 Convert hours to minutes and solve using the IV Flow Rate Formula: 10mL x 60gtt/mL = 10gtt/min 1 hour x 60 minutes Slide 83 #32 A patient is to receive Pitocin at 15microgtt/min. The solution contains 10U Pitocin in 1000mL D5W. Calculate the number of units of Pitocin the patient is receiving per hour. 
Slide 84 Convert mgtt/min to mgtt/h: 15mgtt x 60 minutes = 900mgtt/h Convert mgtt/h to mL/h (*remember Microdrip tubing delivers 60 drops in 1mL): 60mgtt = 1mLso 60X = 900(1) 900mgtt X mL 60X = 900so X = 15mL/h *Continued on next slide Slide 85 Convert mL/h to U/h: 10U = 1000mLso 1000X = 10(15) X U 15mL 1000X = 150soX = 0.15U Solve the problem: The patient is receiving 0.15U/hour of Pitocin. Slide 86 #33 You have an order for 3 mcg/kg/min of Nipride. You have available 50mg of Nipride in 250mL D5W. The client’s weight is 60 kg. Calculate the flow rate in mL/h that will deliver the correct dose. Slide 87 Calculate mcg according to weight: 3mcg/kg x 60 kg = 180mcg/min Convert mcg/min to mg/min: 180mcg ÷ 1000 = 0.18mg/min Convert mg/min to mg/hour: 0.18mg x 60 minutes = 10.8mg/h Convert mg to mL based on dosage on hand: 50mg = 250mLso 50X = 10.8(250) 10.8mg XmL 50X = 2700so X = 54mL Solve using the Electronic Flow Rate Formula: 54mL/h via IV pump Slide 88 #34 A 50mg nitroglycerin drip in 250mL D5W is infusing at 3mL/h. Calculate the mcg/min of nitroglycerin this patient is receiving. Slide 89 Convert mL/h to mg/h based on dosage on hand: 50mg = 250mLso 250X = 50(3) X mg 3mL 250X = 150so X = 0.6mg/h Convert mg/h to mcg/h: 0.6mg x 1000 = 600mcg/h Convert mcg/h to mcg/min to solve the problem: 600mcg ÷ 60 minutes = 10mcg/min Slide 90 #35 A dose strength of 0.3g of a medication has been ordered to infuse over 20 minutes. The available dosage is 0.4g in 1.5mL of solution. Calculate the mL/h. Slide 91 Convert g to mL based on dosage on hand: 0.4g = 1.5mLso 0.4X = 0.3(1.5) 0.3g XmL 0.4X = 0.45so X = 1.125mL 0.4 0.4 Convert minutes to hours: 20 minutes = X hoursso 60X = 20(1) 60 minutes 1 hour 60X = 20so X = 0.33∞ hours Use the Electronic Flow Rate Formula to solve: 1.125mL ÷ 0.33∞ hours = 3mL/h
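All of the drip-rate slides above apply one formula, gtt/min = (volume in mL x drop factor in gtt/mL) / time in minutes, and the pump slides simply divide volume by hours. A quick Python sketch (ours, not part of the slide deck) reproduces several of the answers:

```python
def drip_rate_gtt_per_min(volume_ml, drop_factor_gtt_per_ml, time_min):
    """IV Flow Rate Formula: (volume x drop factor) / time, rounded to whole drops."""
    return round(volume_ml * drop_factor_gtt_per_ml / time_min)

def pump_rate_ml_per_h(volume_ml, time_h):
    """Electronic Flow Rate Formula: volume / hours."""
    return round(volume_ml / time_h)

# Slide 46: 125 mL/h set with 10 gtt/mL tubing
print(drip_rate_gtt_per_min(125, 10, 60))        # 21
# Slide 50: 100 mL over 30 min at 20 gtt/mL
print(drip_rate_gtt_per_min(100, 20, 30))        # 67
# Slide 74: 1000 mL over 24 h (= 1440 min) at 60 gtt/mL
print(drip_rate_gtt_per_min(1000, 60, 24 * 60))  # 42
# Slide 56: 1800 mL over 15 h
print(pump_rate_ml_per_h(1800, 15))              # 120
```

Rounding to the nearest whole drop is what matches the slides' answers (e.g. 20.8 becomes 21gtt/min).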
A basic calculus question? The ratio of Y over X is tangent. Now I get some of your problems. No, the calculated number will not be "tangent" to the curve here, but it WILL give you the tangent value associated with the angle formed by the secant line and the horizontal line formed at the point (x,f(x)). You have a natural triangle here in the (x,y)-plane, with corners: (x,f(x)), (x+h,f(x)) and (x+h,f(x+h)). We usually call that ratio the slope of the line. Have you heard that expression?
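In symbols, the ratio under discussion is the difference quotient (f(x+h) - f(x))/h: the slope of the secant line through two corners of that triangle, which equals the tangent of the angle the secant makes with the horizontal. A small numeric sketch (the function x squared is our example, not from the thread):

```python
import math

def secant_slope(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x+h, f(x+h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2
m = secant_slope(f, 1.0, 0.5)    # rise over run across the triangle's legs
angle = math.atan(m)             # angle the secant makes with the horizontal
print(m, math.tan(angle))        # the slope IS tan(angle) by construction

# As h shrinks, the secant slope approaches the tangent slope f'(1) = 2:
for h in (0.5, 0.1, 0.001):
    print(secant_slope(f, 1.0, h))
```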
Archive of Mr Excel Message Board
Back to Forms in Excel VBA archive index
Back to archive home
Posted by J Davis on February 06, 2001 10:48 AM
I need to look up matching data in two columns with VLOOKUP:
22222 1 $25
22222 2 $77
22222 3 $16
44444 1 $18
44444 2 $100
44444 3 $17
77777 1 $16
77777 2 $80
77777 3 $42
CELL A1 CONTAINS 44444 AND CELL A2 CONTAINS 2. I.E., HOW DO I VLOOKUP (OR MATCH, INDEX) TO GET THE $100 IN ACCOUNT 44444 AND COST CENTER 2?
J Davis
Check out our Excel Resources
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by Dave Hawley on February 06, 2001 12:31 PM
Hi J Davis
You could use an array for this, but I wouldn't. Use DGET instead, much better. Copy your Column headings to any range. Below the appropriate copied column heading place: 44444 and 2.
Then use: =DGET(A1:C1000,3,E1:F2)
Where A1:C1000 is your entire table, including Column headings. 3 is the Column position in your table you want the result from. E1:F2 is your copied headings with the 44444 and 2 below the appropriate headings.
Hope this helps
Dave
OzGrid Business Applications
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by Tim Francis-Wright on February 06, 2001 1:51 PM
DGET, though, has two failings: if there are multiple matches, it returns an error. Also, using DGET requires filling in the criteria range, which makes it less useful if there are many DGETs to DGET. (Why Lotus's database functions work so much better than Excel's is a mystery.) It may work great in J Davis's case, but if there is the possibility of multiple hits, VLOOKUP may be better. One easy way to use VLOOKUP is to make another column [here, to the right of column B, so the database now has columns A:D]. Have C2 = A2&B2 and copy that down as far as necessary. Now, VLOOKUP("44444"&"2",C2:D10000,2) will return the [first] match for 44444 and 2.
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by Dave Hawley on February 06, 2001 2:44 PM
You are correct of course that DGET returns an error if there is more than one matching criteria.
But wrong in your assessment of having to fill in a criteria range. All you need is a drop-down list of both Columns in a Data Validation list; can't get much easier than that. Your method will of course work, but requires overloading the spreadsheet with many un-needed formulas and is not very easy to modify. If you want the best way of all, then I would use a Pivot Table with "Account" and "Center" in the Page Field and "Amount" in the Data and Row Field.
OzGrid Business Applications
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by Mark W. on February 06, 2001 2:45 PM
"J", let's say that your lookup table is in cells A4:C15, your ACCOUNT lookup value, 44444, is in cell A1, and your CENTER lookup value, 2, is in cell A2. Use the following formula to find the corresponding AMOUNT:
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by J Davis on February 06, 2001 8:15 PM
Jamal Davis
Thank you, gentlemen, for your help. However there is a catch. I had:
A (Account) B (Center) C (Amount)
22222 1 $17
22222 10 $1000
I used =vlookup(a1&b1,datarange,3). HOWEVER, I want 22222-10, i.e. $1,000. The problem is that I got the value for 22222-1 ($17, since it was the first "match"). The solution where you join the columns (a1&b1) seems to work best, but the program cannot differentiate between 222221 and 2222210. Doesn't the "&" convert things to text? How would I make these EXACTLY match?
Thank you, Jamal Davis
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by Aladin Akyurek on February 06, 2001 10:28 PM
Add a 0 (or FALSE) to the VLOOKUP that Tim suggested for an exact match:
=VLOOKUP(A1&B1, DATARANGE,3,0)
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by Jamal Davis on February 07, 2001 5:09 AM
22222 1 $17
22222 10 $1000
: I used =vlookup(a1&b1,datarange,3). HOWEVER,
Yes, I figured that out last night after looking at that. Thank you, gentlemen, VERY much.
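The pitfall discussed in this thread, a concatenated lookup key combined with VLOOKUP's default approximate match, is easy to reproduce outside Excel. A Python sketch of the same two-column lookup (variable names are ours, not from the thread):

```python
# The thread's problem table: (account, center) -> amount
rows = [
    (22222, 1, 17),
    (22222, 10, 1000),
    (44444, 1, 18),
    (44444, 2, 100),
]

# Concatenating the two keys as text works, but only with an EXACT match
# (the ",0" / FALSE argument Aladin added). Adding a delimiter also removes
# genuine ambiguity: "2222"&"21" and "22222"&"1" both produce "222221",
# while "2222-21" and "22222-1" stay distinct.
table = {str(a) + "-" + str(c): amt for a, c, amt in rows}
print(table["22222-10"])   # 1000 -- the 22222/10 row, not the 22222/1 row
print(table["44444-2"])    # 100

# Cleaner still: look up the pair directly, with no string key at all.
pairs = {(a, c): amt for a, c, amt in rows}
print(pairs[(22222, 10)])  # 1000
```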
Re: VLOOKUP & MULTIPLE COLUMNS
Posted by E Graham on April 18, 2001 8:56 AM
Just wanted to say a huge thank you to Tim Francis-Wright - your advice has just helped me solve a problem which has been bugging me for MONTHS!! I can't tell you what a relief it is!
E Graham.
This archive is from the original message board at www.MrExcel.com. All contents © 1998-2004 MrExcel.com. Visit our online store to buy searchable CDs with thousands of VBA and Excel answers. Microsoft Excel is a registered trademark of the Microsoft Corporation. MrExcel is a registered trademark of Tickling Keys, Inc.
Realistic Expectations When Betting on the NFL
Why is a 70% Win Rate Unrealistic When Betting on the NFL?
The sports betting industry is one that lacks regulation, meaning anyone can start a company, website, or used-car-salesman persona to start selling picks. Because there is no regulation, handicappers can tout false records and promises of unimaginable wealth in order to obtain business. While there are many legitimate and transparent handicappers in the industry, there is also an overwhelming number who use fake names, flashy cars, women of questionable clothing and morals (we're guessing), and unachievable records to convince new or uneducated bettors to buy their picks. While this may sound a bit over the top, we often get calls asking why we don't hit 70% of our games like many of the other services out there. Our response is always that a 70% win rate isn't attainable over the long haul. To explain this in more detail, we analyzed the probability that a sports bettor can win 70% of all wagers to illustrate just how unrealistic this is. For the purposes of this article, we chose the z-ratio (also known as z-score) to show how many standard deviations away from "expected" an event is.
Example 1: No Edge
This example assumes a handicapper who historically hits 50% of his games, meaning the handicapper does not have any edge when picking games. The data assumes 1,000 plays against the spread (with a vig of -110) over a calendar year, across all major US sports.
Desired Win Rate | Odds of Win Rate | Probability of Win Rate | Z-Ratio
50.0% | 1 in 2 | 50% | 0.0
51.6% | 1 in 6.3 | 15.9% | 1.0
53.2% | 1 in 44 | 2.3% | 2.0
54.8% | 1 in 700 | 0.1% | 3.0
70.0% | < 1 in a trillion | < .0000000001% | > 10.0
As you can see, a sports bettor with no edge has only a 2.3% chance of winning at least 53.2% of his games, which is just above the break-even point of 52.4%. That same bettor has less than a one-in-a-trillion chance of hitting 70% of his games over the course of 1,000 plays.
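The table's figures follow from the normal approximation to the binomial distribution: z = (wins - n*p) / sqrt(n*p*(1-p)) for n bets won with true probability p. A sketch (ours, standard library only) that reproduces the no-edge numbers for 1,000 plays:

```python
import math

def z_ratio(wins, n, p):
    """Standard deviations between an observed win count and its expectation."""
    return (wins - n * p) / math.sqrt(n * p * (1 - p))

def upper_tail(z):
    """P(Z >= z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

n, p = 1000, 0.50  # 1,000 plays, no edge
for rate in (0.516, 0.532, 0.548, 0.700):
    z = z_ratio(rate * n, n, p)
    print(f"{rate:.1%}: z = {z:.1f}, P = {upper_tail(z):.2%}")
# 51.6% lands near z = 1, 53.2% near z = 2, 54.8% near z = 3,
# and 70% sits more than 12 standard deviations out.

# Example 2's skilled handicapper (p = 0.55) still has a vanishing chance:
print(upper_tail(z_ratio(700, 1000, 0.55)))  # far below one in a billion
```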
Example 2: Good Handicapper
We ran the same analysis above, but this time assumed a skilled handicapper who historically hits 55% of his games. The data assumes 1,000 plays against the spread over a calendar year, across all major US sports. For the record, we do believe there are good handicappers out there that are able to achieve 55% over the long term, which is a very good win rate. However, even in the case of a handicapper with a long-term expected winning percentage of 55%, a 70% win rate over a whole season (with 1,000 plays) would still be a hugely unexpected event. In fact, this is still an almost-impossible "9 standard deviation event," with the odds of a 70% or better win rate occurring being less than one in a billion (.0000001%).
10 comments on "Why is a 70% Win Rate Unrealistic When Betting on the NFL?"
1. Finally, an honest voice out there - I've been betting sports since I moved to Vegas in 1980. I've heard every tout and every gambler boast about how well he does picking winners against the spread. Unfortunately, both groups live in a world of denial and delusional rationalization. What's funny is gamblers and touts think they can fool me because they claim to be hitting 75% of their plays, but forget to tell me it was only on a Tuesday between 4 and 7 pm by winning 3 of 4 games. Complete nonsense for the long haul!
2. Ok, but if you hit 70% then 70% is 0 standard deviations.
3. How do you calculate the "probability of win rate"? Am I missing something? They didn't just pull the 2.3% chance out of their ass, I am assuming. I agree with the premise of the article, just trying to understand how they got the 2.3% chance to hit the 53.2% number.
4. I disagree. I have been handicapping for 11 years now, and I have a record of every bet ever made. In NCAAF alone, I had 62% my first season, and after that every year ranged between 67%-78%. I bet from 10-30 plays per week, every week, in NCAAF season. Now, I don't do that well in other sports: NFL 55%, NCAAB 59%. MLB & NBA are losing propositions for me, so I stopped wagering on them 3 years ago. I bet spread only and totals. Now, I'm not the best in the world, but what I am stating is: if I can beat 70% in college football for the entire season, I'm sure there are people better than me that can do it for a whole year, especially if they have a friend or 2 helping them research and crunch numbers... just a thought
- Do you post your plays publicly? If so, where?
- Hi, you can see all of our Best Bet records here:
5. I've been betting sports for 35-plus years and have literally wagered MILLIONS! These guys are dead on!!! Sure, there may be an occasional, very, very rare exception, but just what are those odds??? If you want to have a fighting chance in hell of actually winning in the long run, LISTEN TO BRAD AND THESE GUYS!!! THIS IS YOUR GOLDEN OPPORTUNITY! SINCERELY Don (Been there and done it!!!)
6. Your article sort of glosses over your math and how you ran your regression. I know that might be a bit heavy for some people; from a statistical explanation point I find this to be a little light on details and proof.
7. All you guys got it wrong. Stats can be skewed any way you want them to look, so just look at the black and white. All that matters is performance, right? In sports betting all one needs is 52.4% or 53% to be profitable (taking into account the -110 vigorish, or eleven divided by 21). Now, what none of you seem to take into account is that this math does not account for 'cherry picking'. There are certainly better picks than others.
If one limits oneself to cherry-picking the games with the highest odds of winning, based on probability as a function of information, it's definitely possible to pick 70%. What it comes down to is endurance and stamina, keeping away from the distractions of Las Vegas. Simply put, your one-in-a-trillion statements are raw math, not taking into account intelligence and ever-changing information, which ultimately contributes to the outcome of each pick.
8. I agree that over the long run (1000+ bets) it would be highly unusual for anyone to win or lose at a rate of 70%, and that given Steve Stevens' record of 1-4 (counting Pirelli's recommendation that he says he got from Stevens) it is especially unlikely that Stevens has won 70%+ every year in the past. I say that not because it is true mathematically, or can be proven true mathematically, but because in 40 years of sports betting, twenty of them as a professional, I have never seen anyone even come close over 1000+ games. Mathematically, however, it is not impossible. From a probability standpoint, every individual combination of wins and losses is equally likely to occur even in random results. Thus, flipping a coin you are equally likely to have heads come up 1000 times in a row as you are to have heads and tails alternate perfectly for the entire 1000 flips, or to get any other predicted combination of heads and tails. Of course the odds against any particular combination appearing at random are immense (2 multiplied by itself 999 times). Try it on your calculator and see how far you get before your calculator goes on tilt. Nevertheless, some combination must come up every time someone gets 1000 betting results, and nothing makes the result they got more likely than 70% wins. In addition, it is much easier to get 70% wins than it is to get 1000 wins or even to get the result any of you actually got in your last 1000 games. The reason is that only one result will satisfy the requirement of any set combination.
To win 70%, however, you don't need any given 70% combination. There are thousands of combinations that will provide 700 wins and 300 losses. To figure out the odds of ending up with a 70% record in 1000 bets, you must first multiply 2 by itself 999 times. Then, calculate all the possible combinations that will provide 700 W's and 300 L's. Then divide the first number by the second number. But there is yet another fly in the soup. We don't even need exactly 700 wins and 300 losses. A result of 701 wins and 299 losses will do. In fact, any result that provides more than 700 wins, and even a few results slightly less than 700 wins, will do. Certainly, if Stevens won between 695 and 699 games the upward rounding would still get him honestly to 70%, and to say he wasn't at 70% would be nitpicking. That means we can add hundreds of thousands of possible combinations in the 1000 games that will get him to where he needs to be. Thus, the question is: How did you calculate any of the probabilities you provide in the article? Do you have some formula that does it for you? A computer program perhaps? A little transparency is necessary if you want to credibly prove something scientifically or mathematically. Without transparency, you are simply spouting propaganda and become as bad as the muck you claim to be raking. Also, don't forget, there is no reason to assume that the results will be completely random. Not every team is equally likely to cover the spread. Much has to do with the ability of the line maker to be accurate in predicting the game, and the line maker is not trying to pick the game accurately; he is trying to even out public sentiment, which is quite a different matter. Then the public comes in and moves the line further to correct for the errors of the line maker in predicting sentiment.
It is those errors in the line as a game predictor that account for the fact that year in and year out the Vegas casinos fail to hold their full 4.5% of the drop in sports betting, and that the numbers indicate that the general public consistently wins between 51% and 52% of the time, and sometimes as much as 53%, with all bookmaker profits coming from sucker bets such as teasers, parlays, point buying, and parlay cards. If we assume that a win is more likely than a loss, the odds against a final result providing between 700 and 1000 wins in 1000 diminish further. Finally, you are engaging in circular reasoning. You try to prove that nobody can win 70% in 1000 sports bets by showing that it is unlikely for it to happen to someone who can't win 70%. In other words, you are providing a self-fulfilling prophecy. What have you done to show that anyone claiming to win 70% has not actually done so? What sport are you using for the results? How good is the line maker? How much has the line been skewed by the public? What is the probability that any game will cover the given spread against any other team? I can prove to you all day long that a 70% result is highly likely depending on the betting proposition compared to the odds. The traveling con men used to win 70% of the time from the uneducated frontier suckers who made the incorrect assumption that the number 3 and the number 7 were equally likely to appear on a dice toss. If I bet laying 11-10 that a star NFL kicker will make a field goal, I will certainly be a big 70%-plus winner. If any handicapper waits and only bets when he can take advantage of some huge edge, such as a star player getting hurt in the pregame shoot-around before the bookmakers get wind of it, he can most likely win close to 70% with nothing more. If you bet Wichita State on College Baskets on the money line this season, you won 100%, and very few of those money lines fully reflected the very high odds that Wichita State would win straight up.
You didn't hear Stevens say the sport in which he hit 70%, did you? He didn't mention if that record was in money line games or in point spread games. You make the invalid assumption that every game had a true 50% probability and that the handicapper had no advantage at all and could only pick games at a random probability of 50%, and then told us that ending up with exactly a 70% result (as opposed to 70.4% or 71.2%) is a one-in-a-trillion shot. In other words, you told us that someone who cannot win 70% is highly unlikely to win 70%. To which I say, "Duh!" You haven't proven that it is highly unlikely to win 70% of one's bets for someone skilled enough to do it and who has historically done it. It is like telling me that a golfer who averages 150 strokes for 9 holes is unlikely to hit a hole in one. What's that got to do with Tiger Woods? There are many things that cannot be proven mathematically. In fact, mathematically people can and will win between 700 and 1000 games in 1000 even by just flipping a coin on spread games. That Stevens can't, or is unlikely to, do it can only be proven anecdotally. That is, we don't see it every day, or any day for that matter. We can set a computer to make random picks and note that we don't see 700 wins in several million pretend bets. We accept fingerprints as a means of identification even though the laws of probability tell us that there can and will be multiple people with the same fingerprints. We accept fingerprints simply because duplication is not something we can find in millions of examinations.
If you want to discuss Steve Stevens as a fraud, best to stick to things like the lack of reason for doubling up bets, the mathematically proven Kelly criterion and the proper amount to bet on a $50,000 bankroll with a 70% win probability and 11-10 odds, and the fact that he tells clients he is a great money-management adviser, and he keeps repeating that he is there to help you stay in control and then tells clients to make $99,000 in bets based on a $50,000 bankroll. Speak of the probability that a 70%+ handicapper will have a 1-4 result (counting the game given to the Pirelli client). Talk about his statement after the first loss that only the long run matters, and then doubling up into losses for no more reason than that there were two losses in the short term. Finally, criticize CNBC for allowing that con man to sit there and say, "I lost two and won one and my client made money. Don't tell me about percentages," while failing to disclose that the deal was a 50% commission on each winning game. The final bet was $66,000 and the client won $60,000. Stevens, happy as a con man in Vegas, proclaims "Pay me." The commission is $30,000. The prior loss is $33,000. The profit after the last bet is just $27,000, and the client needs to pay Stevens $3,000 out of his own pocket on a pay-only-after-you-win deal. Final Score: Client: MINUS $3,000. Steve Stevens: +$30,000, which, if his client had any brains at all, Stevens had to pay on medical bills to repair his broken jaw and nose sustained just after he asked for the full win plus $3,000 out of pocket for all that wonderful advice. It's time for a letter-writing campaign to CNBC. Then again, what can you expect from a station that gives us stock advice from Jim Cramer and bond advice from Rick Santelli? Betting advice from Steve Stevens fits right into the programming mix. Where's Stacy Keach when you need him?
Patent US20030220779 - Extracting semiconductor device model parameters
[0023] As shown in FIG. 1, system 100, according to one embodiment of the invention, comprises a central processing unit (CPU) 102, which includes a RAM, and a disk memory 110 coupled to the CPU 102 through a bus 108. The system 100 further comprises a set of input/output (I/O) devices 106, such as a keypad, a mouse, and a display device, also coupled to the CPU 102 through the bus 108. The system 100 may further include an input port 104 for receiving data from a measurement device (not shown), as explained in more detail below. The system 100 may also include other devices 122. An example of system 100 is a Pentium 233 PC/compatible computer having RAM larger than 64 MB and a hard disk larger than 1 GB.
[0024] Memory 110 has computer readable memory spaces, such as database 114 that stores data, memory space 112 that stores an operating system such as Windows 95/98/NT4.0/2000, which has instructions for communicating, processing, accessing, storing and searching data, and memory space 116 that stores program instructions (software) for carrying out the method of the present invention. Memory space 116 may be further subdivided as appropriate, for example to include memory portions 118 and 120 for storing modules and plug-in models, respectively, of the software.
[0025] A set of model parameters for a semiconductor device is often referred to as a model card for the device. Together with the model equations, the model card is used by a circuit simulator to emulate the behavior of the semiconductor device in an integrated circuit. A model card may be determined by process 200 as shown in FIG. 2. Process 200 includes step 210 in which one or more input files are loaded into the RAM of the CPU 102. The input files may include a model definition file and an object definition file. The object definition file provides information about the object (device) to be simulated.
The model definition file provides information associated with the device model for modeling the behavior of the object. These files are discussed in further detail below in connection with FIGS. 3A and 3B.
[0026] Process 200 further includes step 220 in which the measurement data is loaded from database 114. The measurement data includes physical measurements from a set of test devices, as will be explained in more detail below. Once the measurement data has been loaded, process 200 proceeds to extract in step 230 the model parameters. The parameter extraction step 230 is discussed in detail in connection with FIGS. 8, 9, 10 and 11 below.
[0027] After the parameters are extracted, binning may be performed in step 240. The binning step 240 is an optional step, and it may depend on whether the device model is binnable or not. Process 200 further includes step 250, in which the extracted model parameters are verified. Once verified, the extracted parameters are output in step 260 as a model card. An error report may be generated afterwards in step 270, and the process 200 is then complete. More detailed discussion about the binning step 240 and verification step 250 can be found in the BSIMPro+ User Manual—Basic Operation, by Celestry Design Technologies, released in September, 2001, which is incorporated by reference in its entirety herein.
[0028] Referring to FIG. 3A, the model definition file 300A comprises a general model information field 310, a parameter definition field 320, an intermediate variable definition field 330, and an operation point definition field 340. The general model information field 310 includes general information about the device model, such as a model name, a model version, a model type, compatible circuit simulators, and binning information. The parameter definition field 320 defines the parameters in the model. As an example, a list of the model parameters in the BSIMPD model is provided in Appendix A.
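The file layout described in paragraph [0028], general model information plus parameter, intermediate-variable, and operation-point definitions, maps naturally onto a nested record. A sketch of such a structure (class and field names are ours, as are the illustrative values; they are not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class ParameterDef:
    # One entry in the parameter definition field (320):
    # name, default value, unit, data type, optimization info.
    name: str
    default: float
    unit: str = ""
    data_type: str = "float"
    optimizable: bool = True

@dataclass
class ModelDefinition:
    # FIG. 3A: general info (310), parameters (320),
    # intermediate variables (330), operation points (340).
    model_name: str
    version: str
    model_type: str
    simulators: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)
    intermediate_vars: list = field(default_factory=list)
    operation_points: list = field(default_factory=list)

# Hypothetical instance for a BSIMPD-style model card:
bsimpd = ModelDefinition("BSIMPD", "2.2", "NMOS", simulators=["SPICE3"])
bsimpd.parameters["tox"] = ParameterDef("tox", 1e-8, unit="m")
bsimpd.operation_points.append("vth")  # e.g. threshold voltage
print(bsimpd.parameters["tox"].default)
```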
For each parameter, the model definition file also specifies information associated with the parameter, such as a parameter name, a default value, a parameter unit, a data type, and optimization information. The operation point definition section 340 defines operation point or output variables, such as device terminal currents, threshold voltage, etc., used by the model.
[0029] Referring to FIG. 3B, object definition file 300B defines object related information, including input variables 350, output variables 360, instance variables 370, and object and node information 380. Input variables 350 and output variables 360 are associated with the inputs and outputs, respectively, of the device in an integrated circuit. The instance variables 370 are associated with the geometric characteristics of the device to be modeled. The object node information 380 includes information regarding the nodes or terminals of the device to be modeled.
[0030] Process 200 can be used to generate model cards for models describing semiconductor devices such as BJTs, JFETs, MOSFETs, etc. Discussions about the use of some of these models can be found in the BSIMPro+ User Manual—Device Modeling Guide, by Celestry Design Technologies, released in September, 2001, which is incorporated by reference in its entirety herein. As an example, the BSIMPD model, which was developed by UC Berkeley to model silicon-on-insulator (SOI) MOSFET devices, is used here to illustrate the parameter extraction step 230 of the process 200. The model equations for the BSIMPD model are provided in Appendix B. More detailed discussion about the BSIMPD model can be found in the BSIMPD2.2 MOSFET Model Users' Manual by the Department of Electrical Engineering and Computer Sciences, UC Berkeley, Copyright 1999, which is incorporated herein by reference in its entirety.
[0031] As shown in FIG.
4, an SOI MOSFET device 400 may comprise a thin silicon on oxide (SOI) film 480, having a thickness T_si, on top of a layer of buried oxide 460, having a thickness T_box. The SOI film 480 has two doped regions, a source 430 and a drain 450, separated by a body region 440. The SOI MOSFET also comprises a gate 410 on top of the body region 440, separated from SOI film 480 by a thin layer of gate oxide 420. The SOI MOSFET 400 is formed on a semiconductor substrate 470.
[0032] The SOI MOSFET as described can be considered a five-terminal (node) device. The five terminals are the gate terminal (node g), the source terminal (node s), the drain terminal (node d), the body terminal (node p), and the substrate terminal (node e). Nodes g, s, d, and e can be connected to different voltage sources, while node p can be connected to a voltage source or left floating. In the floating body configuration there are four external biases: the gate voltage (V_g), the drain voltage (V_d), the source voltage (V_s), and the substrate bias (V_e). If body contact is applied, there will be an additional external bias, the body contact voltage (V_p).
[0033] For ease of further discussion, Table I below lists the symbols corresponding to the physical variables associated with the operation of SOI MOSFET device 400.
[0034] In order to model the behavior of the SOI MOSFET device 400 using the BSIMPD model, experimental data are used to extract model parameters associated with the model. These experimental data include terminal current data and capacitance data measured in test devices under various bias conditions. In one embodiment of the present invention, the measurement is done using a conventional semiconductor device measurement tool that is coupled to system 100 through input port 104. The measured data are thus organized by CPU 102 and stored in database 114.
The test devices are typically manufactured using the same or similar process technologies for fabricating the SOI MOSFET device. In one embodiment of the present invention, a set of test devices having different device sizes, that is, different channel widths and channel lengths, is used for the measurement. The device size requirement can vary with different applications. Ideally, as shown in FIG. 5, the set of devices includes: [0035] a. one largest device, meaning the device with the longest drawn channel length and widest drawn channel width that is available, as represented by dot 502; [0036] b. one smallest device, meaning the device with the shortest drawn channel length and smallest drawn channel width that is available, as represented by dot 516; [0037] c. one device with the smallest drawn channel width and longest drawn channel length, as represented by dot 510; [0038] d. one device with the widest drawn channel width and shortest drawn channel length, as represented by dot 520; [0039] e. three devices having the widest drawn channel width and different drawn channel lengths, as represented by dots 504, 506, and 508; [0040] f. two devices with the shortest drawn channel length and different drawn channel widths, as represented by dots 512 and 514; [0041] g. two devices with the longest drawn channel length and different drawn channel widths, as represented by dots 522 and 524; [0042] h. (optionally) up to three devices with the smallest drawn channel width and different drawn channel lengths, as represented by dots 532, 534, and 536; and [0043] i. (optionally) up to three devices with a medium drawn channel width (about halfway between the widest and smallest drawn channel width) and different drawn channel lengths, as represented by dots 538, 540, and 542. [0044] If, in practice, it is difficult to obtain measurements for all of the above device sizes, a smaller set of different sized devices can be used. For example, the different device sizes shown in FIG.
6 are sufficient in one embodiment of the present invention. The test devices as shown in FIG. 6 include: [0045] a. one largest device, meaning the device with the longest drawn channel length and widest drawn channel width, as represented by dot 602; [0046] b. one smallest device, meaning the device with the shortest drawn channel length and smallest drawn channel width, as represented by dot 616; [0047] c. (optional) one device with the smallest drawn channel width and longest drawn channel length, as represented by dot 610; [0048] d. one device with the widest drawn channel width and shortest drawn channel length, as represented by dot 620; [0049] e. one device and two optional devices having the widest drawn channel width and different drawn channel lengths, as represented by dots 604, 606, and 608, respectively; [0050] f. (optional) two devices with the shortest drawn channel length and different drawn channel widths, as represented by dots 612 and 614. [0051] For each test device, terminal currents are measured under different terminal bias conditions. These terminal current data are put together as I-V curves representing the I-V characteristics of the test device. In one embodiment of the present invention, for each test device, the following I-V curves are obtained: [0052] 1. Linear region I[d] vs. V[gs] curves for a set of V[p] values. These curves are obtained by grounding the s node and the e node, setting V[d] to a low value, such as 0.05V, and for each of the set of V[p] values, measuring I[d] while sweeping V[g] in step values across a range such as from 0 to V[DD]. [0053] 2. Saturation region I[d] vs. V[gs] curves for a set of V[p] values. These curves are obtained by grounding the s node and the e node, setting V[d] to a high value, such as V[DD], and for each of the set of V[p] values, measuring I[d] while sweeping V[g] in step values across a range such as from 0 to V[DD]. [0054] 3. I[d] vs.
V[gs] curves for different V[d], V[p] and V[e] values, obtained by grounding the s node, and for each combination of V[d], V[p] and V[e] values, measuring I[d] while sweeping V[g] in step values across a range such as from −V[DD] to V[DD]. [0055] 4. I[g] vs. V[gs] curves for different V[d], V[p] and V[e] values, obtained by grounding the s node, and for each combination of V[d], V[p] and V[e] values, measuring I[g] while sweeping V[g] in step values across a range such as from −V[DD] to V[DD]. [0056] 5. I[s] vs. V[ds] curves for different V[g], V[p] and V[e] values, obtained by grounding the s node, and for each combination of V[g], V[p] and V[e] values, measuring I[s] while sweeping V[d] in step values across a range such as from 0 to V[DD]. [0057] 6. I[p] vs. V[gs] curves for different V[d], V[p] and V[e] values, obtained by grounding the s node, and for each combination of V[d], V[p] and V[e] values, measuring I[p] while sweeping V[g] in step values across a range such as from −V[DD] to V[DD]. [0058] 7. I[d] vs. V[gs] curves for different V[d], V[p] and V[e] values, obtained by grounding the s node, and for each combination of V[p], V[d] and V[e] values, measuring I[d] while sweeping V[g] in step values across a range such as from −V[DD] to V[DD]. [0059] 8. I[d] vs. V[ps] curves for different V[d], V[g] and V[e] values, obtained by grounding the s node, and for each combination of V[g], V[d] and V[e] values, measuring I[d] while sweeping V[p] in step values across a range such as from −V[DD] to V[DD]. [0060] 9. Floating body I[d] vs. V[gs] curves for different V[d] and V[e] values, obtained by grounding the s node, floating the p node, and for each combination of V[d] and V[e] values, measuring I[d] while sweeping V[g] in step values across a range such as from 0 to V[DD]. [0061] 10. Floating body I[d] vs.
V[ds] curves for different V[g] and V[e] values, obtained by grounding the s node, floating the p node, and for each combination of V[g] and V[e] values, measuring I[d] while sweeping V[d] in step values across a range such as from 0 to V[DD]. [0062] As examples, FIG. 7A shows a set of linear region I[d] vs. V[gs] curves for different V[ps] values, FIG. 7B shows a set of saturation region I[d] vs. V[gs] curves for different V[ps] values, FIG. 7C shows a set of I[d] vs. V[ds] curves for different V[gs] values while V[ps]=0.5V and V[es]=0; FIG. 7D shows a set of I[d] vs. V[ds] curves for different V[gs] values while V[ps]=0.25V and V [0063] In addition to the terminal current data, for each test device, capacitance data are also collected from the test devices under various bias conditions. The capacitance data can be put together into capacitance-voltage (C-V) curves. In one embodiment of the present invention, the following C-V curves are obtained: [0064] a. C[ps] vs. V[ps] curve obtained by grounding the s node, setting I[e] and I[d] to zero, or to very small values, and measuring C[ps] while sweeping V[p] in step values across a range such as from −V[DD] to V[DD]. [0065] b. C[pd] vs. V[ps] curve obtained by grounding the s node, setting I[e] and I[s] to zero, or to very small values, and measuring C[pd] while sweeping V[p] in step values across a range such as from −V[DD] to V[DD]. [0066] As shown in FIG. 8, in one embodiment of the present invention, the parameter extraction step 230 comprises step 810 for extracting base parameters, step 820 for extracting other DC model parameters, step 830 for extracting temperature-dependent parameters, and step 840 for extracting AC parameters.
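The four-stage flow of FIG. 8 is an ordered pipeline in which the base parameters feed the DC stage. A minimal driver sketch follows; the stage functions are placeholders standing in for the patent's actual extraction routines, and the returned parameter values are illustrative only.

```python
def extract_base_parameters(data):
    """Step 810 (placeholder): base parameters from process data."""
    return {"Vth0": 0.45, "K1": 0.5, "K2": -0.02}

def extract_dc_parameters(data, base):
    """Step 820 (placeholder): DC parameters, using the base parameters."""
    return {"u0": 300.0, "Rdsw": 150.0}

def extract_temperature_parameters(data):
    """Step 830 (placeholder): temperature-dependent parameters."""
    return {"Kt1": -0.11}

def extract_ac_parameters(data):
    """Step 840 (placeholder): AC parameters."""
    return {"CLC": 1e-7, "moin": 15.0}

def run_extraction(data):
    """Run the four stages in the order of FIG. 8 and merge the results."""
    params = extract_base_parameters(data)           # step 810
    params.update(extract_dc_parameters(data, params))  # step 820 uses base params
    params.update(extract_temperature_parameters(data)) # step 830
    params.update(extract_ac_parameters(data))          # step 840
    return params

model_card = run_extraction(data={})
```

The merged dictionary plays the role of the model card assembled at the end of the extraction process.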
In step 810, base parameters, such as V[th] (the threshold voltage when V[bs]=0), K[1] (the first order body effect coefficient), and K[2] (the second order body effect coefficient), are extracted based on process parameters corresponding to the process technology used to fabricate the SOI MOSFET device to be modeled. The base parameters are then used to extract other DC model parameters at step 820, which is explained in more detail in connection with FIGS. 9, 10, and 11 below. [0067] The temperature-dependent parameters are parameters that may vary with the temperature of the device and include parameters such as Kt1 (the temperature coefficient for the threshold voltage), Ua1 (the temperature coefficient for U[a]), and Ub1 (the temperature coefficient for U[b]). These parameters can be extracted using a conventional parameter extraction method. [0068] The AC parameters are parameters associated with the AC characteristics of the SOI MOSFET device and include parameters such as CLC (the constant term for the short channel model) and moin (the coefficient for the gate-bias dependent surface potential). These parameters can also be extracted using a conventional parameter extraction method. [0069] As shown in FIG.
9, the DC parameter extraction step 820 further comprises: extracting I[diode]-related parameters (step 902); extracting I[bjt]-related parameters (step 904); extracting V[th]-related parameters (step 906); extracting I[dgidl]- and I[sgidl]-related parameters (step 908); extracting I[g] (or J[gb]) related parameters (step 910); extracting L[eff]-related parameters, R[d]-related parameters and R[s]-related parameters (step 912); extracting mobility-related parameters and W[eff]-related parameters (step 914); extracting V[th] geometry related parameters (step 916); extracting sub-threshold region related parameters (step 918); extracting parameters related to drain-induced barrier lowering (DIBL) (step 920); extracting I[dsat]-related parameters (step 922); extracting I[ii]-related parameters (step 924); and extracting junction parameters (step 926). [0070] The equation numbers below refer to the equations set forth in Appendix B. [0071] In step 902, parameters related to the calculation of the diode current I[diode] are extracted. These parameters include J[sbjt], n[dio], R[body], n[recf] and j[srec]. As shown in more detail in FIG. 10, step 902 comprises: extracting J[sbjt] and n[dio] (step 1010); extracting R[body] (step 1020); and extracting n[recf] and j[srec] (step 1030). [0072] Model parameters J[sbjt] and n[dio] are extracted in step 1010 from the recombination current in neutral body equations (Equations 14.5.a-14.5.f) using measured data in the middle part of the I[d] vs V[ps] curves taken from the largest test device (the test device having the longest L[drawn] and widest W[drawn]). By using the largest device, α[bjt]→0. Then, assuming A[hli]=0, E[hlid] will also equal zero. Therefore Equations 14.5.d-14.5.f can be eliminated. The set of equations is thus reduced to two equations (14.5.b and 14.5.c) with two unknowns, resulting in a quick solution for J[sbjt] and n[dio].
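Under the simplifications above, the remaining diode-like current is exponential in V[ps], so the two unknowns can be recovered from a straight-line fit of ln I[d] versus V[ps] over the middle part of the curve. The sketch below is an assumed simplification (a single-exponential model with saturation prefactor j0 and ideality factor n), not the patent's actual solver for Equations 14.5.b and 14.5.c.

```python
import numpy as np

def fit_jsbjt_ndio(v_ps, i_d, v_t=0.0259):
    """Log-linear fit of the middle part of an Id vs Vps curve.

    Assumed model: i_d = j0 * exp(v_ps / (n * v_t)), so
    ln(i_d) = ln(j0) + v_ps / (n * v_t) is a line in v_ps.
    Returns (j0, n): the saturation prefactor and ideality factor.
    """
    slope, intercept = np.polyfit(v_ps, np.log(i_d), 1)
    n = 1.0 / (slope * v_t)
    j0 = np.exp(intercept)
    return j0, n

# Synthetic check: generate a curve with known j0 and n, then recover them
v = np.linspace(0.4, 0.7, 31)          # a "middle part" voltage window
i = 1e-12 * np.exp(v / (1.2 * 0.0259))  # j0 = 1e-12 A, n = 1.2
j0_fit, n_fit = fit_jsbjt_ndio(v, i)
```

Restricting the fit to the middle of the curve, as the text prescribes, avoids the low-bias noise floor and the high-bias region where series resistance bends the exponential.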
In one embodiment of the present invention, the middle part of an I[d] vs V[ps] curve corresponds to the part of the I[d] vs V[ps] curve with V[ps] ranging from about 0.3V to about 0.8V. In another embodiment, the middle part of the I[d] vs V[ps] curve corresponds to V[ps] ranging from about 0.4V to about 0.7V. [0073] R[body] is extracted in step 1020 from the body contact current equation (Equations 13.1-13.3) using measured data in the high current part of the I[d] vs V[ps] curves. In one embodiment of the present invention, the high current part of an I[d] vs V[ps] curve corresponds to the part of the I[d] vs V[ps] curve with V[ps] ranging from about 0.8V to about 1V. [0074] The parameters n[recf] and j[srec] are extracted in step 1030 from the recombination/trap-assisted tunneling current in the depletion region equations (Equations 14.3.a and 14.3.b), also using the I[d] vs V[ps] curves taken from a shortest device. The remaining I[diode]-related parameters are second order parameters and may be neglected. [0075] Referring back to FIG. 9, the parasitic lateral bipolar junction transistor current (I[bjt]) related parameter L[n] is extracted in step 904. In this step, a set of I[c]/I[p] vs. V[ps] curves are constructed from the I[d] vs. V[ps] curves taken from a shortest device. Then the bipolar transport factor equations (Equation 14.1), wherein I[c]/I[b]=α[bjt]/(1−α[bjt]), are used to extract L[n]. [0076] In step 906, threshold voltage V[th] related parameters, such as V[th0], k1, k2, and Nch, are extracted by using the linear I[d] vs V[gs] curves measured from the largest device. [0077] In step 908, parameters related to the gate-induced drain leakage current at the drain (I[dgidl]) and the gate-induced drain leakage current at the source (I[sgidl]) are extracted. The I[dgidl]- and I[sgidl]-related parameters include parameters such as α[gidl] and β[gidl], and are extracted using the I[d] vs. V[gs] curves and Equations 12.1 and 12.2.
[0078] In step 910 the oxide tunneling current (I[g], also designated as J[gb]) related parameters are extracted. The I[g]-related parameters include parameters such as V[EvB], α[gb1], β[gb1], V[gb1], V[ECB], α[gb2], β[gb2], and V[gb2], and are extracted using the I[g] vs. V[gs] curves and Equations 17.1.a-f and 17.2.a-f. [0079] In step 912, parameters related to the effective channel length L[eff], the drain resistance R[d] and the source resistance R[s] are extracted. The L[eff]-, R[d]- and R[s]-related parameters include parameters such as L[int] and R[dsw], and are extracted using data from the linear I[d] vs V[gs] curves as well as the extracted V[th]-related parameters from step 906. [0080] In step 914, parameters related to the mobility and the effective channel width W[eff], such as μ[0], U[a], U[b], U[c], Wint, Wri, Prwb, Wr, Prwg, R[dsw], Dwg, and Dwb, are extracted, using the linear I[d] vs V[gs] curves and the extracted V[th]-related parameters, L[eff]-related parameters, R[d]-related parameters, and R[s]-related parameters from steps 906 and 912. [0081] Steps 906, 912, and 914 can be performed using a conventional BSIMPD model parameter extraction method. Discussion of extracting parameters involved in these steps can be found in the following articles: Terada K., Muta H., "A new method to determine effective MOSFET channel length," Japan J. Appl. Phys. 1979;18:953-9; Chern J., Chang P., Motta R., Godinho N., "A new method to determine MOSFET channel length," IEEE Trans. Electron Dev. 1980;ED-27:1846-8; and Hassan Md Rofigul, Liou J. J., et al., "Drain and source resistances of short-channel LDD MOSFETs," Solid-State Electron. 1997;41:778-80; which are incorporated by reference herein.
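The Terada-Muta method cited above extracts L[int] and the series resistance from linear-region data: for each gate overdrive, the total resistance R = V[ds]/I[d] is fit as a straight line in drawn channel length, and the fitted lines for different gate biases (ideally) all cross at one point whose coordinates give the length offset and the external resistance. A least-squares sketch of that intersection step follows; the line model R(L) = R_ext + a(Vgs)·(L − dL) is the standard form of the method, but the variable names are mine.

```python
import numpy as np

def terada_muta(l_drawn, r_tot_by_vgs):
    """Common-intersection step of the Terada-Muta method (sketch).

    l_drawn:       array of drawn channel lengths, one per test device.
    r_tot_by_vgs:  one array of total resistances per gate bias,
                   each sampled at the lengths in l_drawn.
    Fits R = m*L + b per gate bias, then solves the least-squares
    common crossing point. Returns (dL, R_ext).
    """
    fits = [np.polyfit(l_drawn, r, 1) for r in r_tot_by_vgs]
    # Each line satisfies m_i * x - y = -b_i at the crossing point (x, y)
    a_mat = np.array([[m, -1.0] for m, _ in fits])
    rhs = np.array([-b for _, b in fits])
    (dl, r_ext), *_ = np.linalg.lstsq(a_mat, rhs, rcond=None)
    return dl, r_ext

# Synthetic check: three lines through (dL = 0.1 um, R_ext = 200 ohm)
L = np.array([0.5, 1.0, 2.0, 5.0])
curves = [200.0 + a * (L - 0.1) for a in (100.0, 300.0, 900.0)]
dl, r_ext = terada_muta(L, curves)
```

With noisy data the lines no longer meet exactly, which is why the crossing point is posed here as an overdetermined least-squares system rather than a pairwise intersection.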
[0082] In step 916, the threshold voltage V[th] geometry related parameters, such as D[VT0], D[VT1], D[VT2], N[LX1], D[VT0W], D[VT1W], D[VT2W], k[3], and k[3b], are extracted, using the linear I[d] vs V[g] curve, the extracted V[th]-related parameters, L[eff]-related parameters, mobility-related parameters, and W[eff]-related parameters from steps 906, 912, and 914, and Equations 3.1 to 3.10. [0083] In step 918, sub-threshold region related parameters, such as C[it], Nfactor, V[off], C[dsc], and C[dscd], are extracted, using the linear I[d] vs V[gs] curves, the extracted V[th]-related parameters, L[eff]-related parameters, R[d]-related parameters, R[s]-related parameters, mobility-related parameters, and W[eff]-related parameters from steps 906, 912, and 914, and Equations 5.1 and [0084] In step 920, DIBL-related parameters, such as D[sub], Eta0, and Etab, are extracted, using the saturation I[d] vs V[gs] curves and the extracted V[th]-related parameters from step 906, and Equations 3.1 to 3.10. [0085] In step 922, the drain saturation current I[dsat] related parameters, such as B0, B1, A0, Keta, and A[gs], are extracted using the saturation I[d] vs V[d] curves, the extracted V[th]-related parameters, L[eff]-related parameters, R[d]-related parameters, R[s]-related parameters, mobility-related parameters, W[eff]-related parameters, V[th] geometry related parameters, sub-threshold region related parameters, and DIBL-related parameters from steps 906, 912, 914, 916, and 918, and Equations 9.1 to 9.10. [0086] In step 924, the impact ionization current I[ii] related parameters, such as α[0], β[0], β[1], β[2], V[dsatii], and L[ii], are extracted, as discussed in detail in connection with FIG. 11 below. [0087] FIG. 11 is a flow chart illustrating in further detail the extraction of the impact ionization current I[ii] related parameters (step 924).
In one embodiment of the present invention, data from the I[p] vs V[gs] and I[d] vs V[gs] curves measured from one or more shortest devices are used to construct the I[ii]/I[d] vs V[ds] curves for the one or more shortest devices (step 1110). This begins by identifying the point where V[gs] is equal to V[th] for each I[p] vs V[gs] curve, which point is found by setting V[gst]=0. When V[gst]=0, V[gsstep]=0. Then, using the impact ionization current equation, Equation 11.1, the I[ii]/I[d] vs V[ds] curve can be obtained. [0088] After the I[ii]/I[d] vs V[ds] curve is obtained, L[ii] is set to zero and V[dsatii0] is set to 0.8 (the default value). Using the I[ii]/I[d] vs V[ds] curve, β[1], α[0], β[2], and β[0] are extracted in step 1115 from the impact ionization current equation for I[ii], Equation 11.1. [0089] In step 1120, V[dsatii] is interpolated from a constructed I[ii]/I[d] vs V[ds] curve by identifying the point at which I[p]/I[d]=α[0]. [0090] Following the interpolation, using a conventional optimizer such as one based on the well-known Newton-Raphson algorithm, β[1], β[2], and β[0] are optimized in step 1125. [0091] Step 1120 is repeated for each constructed I[ii]/I[d] vs V[ds] curve. This results in an array of values for V[dsatii]. Using these values for V[dsatii], L[ii] is extracted in step 1135 from the V[dsatii] equation for the impact ionization current (Equation 11.3). [0092] The extracted β[1], α[0], β[2], β[0], L[ii], and V[dsatii0] are optimized in step 1140 by comparing calculated and measured I[ii]/I[d] vs V[ds] curves for the one or more shortest devices. [0093] In step 1145, the extracted parameters from the I[ii] and V[dsatii] equations are used to calculate V[gsstep] using Equation 11.4 for the largest device. Then in step 1150, S[ii1], S[ii2], and S[ii0] are determined using a local optimizer such as the Newton-Raphson algorithm and the V[gsstep] equation (Equation 11.4).
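Step 1120's interpolation, finding the V[ds] at which an I[ii]/I[d] curve crosses α[0], can be sketched as a plain linear interpolation between the two bracketing samples. The monotonicity assumption and the bracketing search are my own choices for illustration; the patent only specifies that the crossing point is interpolated.

```python
import numpy as np

def interpolate_crossing(v_ds, ratio, alpha0):
    """Return the V_ds at which the Iii/Id curve first reaches alpha0.

    Assumes the ratio samples increase monotonically through alpha0.
    Performs linear interpolation between the bracketing samples.
    """
    ratio = np.asarray(ratio, dtype=float)
    v_ds = np.asarray(v_ds, dtype=float)
    idx = int(np.searchsorted(ratio, alpha0))
    if idx == 0 or idx >= len(ratio):
        raise ValueError("alpha0 is not bracketed by the curve")
    # Fractional position of alpha0 between samples idx-1 and idx
    f = (alpha0 - ratio[idx - 1]) / (ratio[idx] - ratio[idx - 1])
    return v_ds[idx - 1] + f * (v_ds[idx] - v_ds[idx - 1])

# Example curve: alpha0 = 0.04 lies halfway between the 0.02 and 0.06 samples
v = np.array([0.0, 0.5, 1.0, 1.5])
r = np.array([0.0, 0.02, 0.06, 0.12])
v_dsatii = interpolate_crossing(v, r, alpha0=0.04)
```

Repeating this over every constructed curve yields the array of V[dsatii] values that step 1135 then fits against Equation 11.3.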
[0094] In the next step 1155, the last of the I[ii]-related parameters is extracted using the shortest device. In this step, E[satii] is solved for by using the V[gsstep] equation, Equation 11.4, and the I[ii]/I[d] vs V[ds] curve. The extraction of the I[ii]-related parameters is then complete. [0095] Referring back to FIG. 9, in step 926, the junction parameters, such as Cjswg, Pbswg, and Mjswg, are extracted using the C[ps] vs. V[ps] and C[pd] vs. V[ps] curves, and Equations 21.4.b.1 and [0096] In performing the DC parameter extraction steps (steps 902-926), it is preferred that after the I[diode]- and I[bjt]-related parameters are extracted in steps 902 and 904, I[diode] and I[bjt] are calculated based on these parameters and the model equations. This calculation is done for the bias condition of each data point in the measured I-V curves. The I-V curves are then modified for a first time based on the calculated I[diode] and I[bjt] values. In one embodiment of the present invention, the I-V curves are first modified by subtracting the calculated I[diode] and I[bjt] values from the respective I[s], I[d], and I[p] data values. For example, for a test device having drawn channel length L[T] and drawn channel width W[T], if under the bias condition where V[s]=V[s]^T, V[d]=V[d]^T, V[p]=V[p]^T, V[e]=V[e]^T, and V[g]=V[g]^T, the measured drain current is I[d]^T, then after the first modification, the drain current will be I[d]^first-modified=I[d]^T−I[diode]^T−I[bjt]^T, where I[diode]^T and I[bjt]^T are the calculated I[diode] and I[bjt] values, respectively, for the same test device under the same bias condition. The first-modified I-V curves are then used for additional DC parameter extraction. This results in a higher degree of accuracy in the extracted parameters.
In one embodiment, the I[diode]- and I[bjt]-related parameters are extracted before extracting other DC parameters, so that the I-V curve modification may be done for more accurate extraction of the other DC parameters. However, if such accuracy is not required, one can choose not to do the above modification, and the I[diode]- and I[bjt]-related parameters can be extracted at any point in the DC parameter extraction step 820. [0097] Similarly, after the I[dgidl]-, I[sgidl]- and I[g]-related parameters are extracted in steps 908 and 910, I[dgidl], I[sgidl] and I[g] are calculated based on these parameters and the model equations. This calculation is done for the bias condition of each data point in the measured I-V curves. The I-V curves or the first-modified I-V curves are then modified or further modified based on the calculated I[dgidl], I[sgidl] and I[g] values. In one embodiment of the present invention, the I-V curves or first-modified I-V curves are modified or further modified by subtracting the calculated I[dgidl], I[sgidl] and I[g] values from the respective measured or first-modified I[s], I[d], and I[p] data values. For example, for a test device having drawn channel length L[T] and drawn channel width W[T], if under the bias condition where V[s]=V[s]^T, V[d]=V[d]^T, V[p]=V[p]^T, V[e]=V[e]^T, and V[g]=V[g]^T, the measured drain current is I[d]^T, then after the above modification or further modification, the drain current will be I[d]^modified=I[d]^T−I[dgidl]−I[sgidl]−I[g], or I[d]^further-modified=I[d]^first-modified−I[dgidl]−I[sgidl]−I[g], where I[dgidl], I[sgidl] and I[g] are the calculated I[dgidl], I[sgidl] and I[g] values, respectively, for the same test device under the same bias condition. The modified or further modified I-V curves are then used for additional DC parameter extraction. This results in a higher degree of accuracy in the parameters extracted afterwards.
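Both modification stages amount to an element-wise subtraction of model-calculated current components from the measured currents at each bias point. A minimal numpy sketch, assuming the calculated-component arrays are aligned point-for-point with the measured sweep:

```python
import numpy as np

def modify_iv(i_measured, *calculated_components):
    """Subtract calculated current components from a measured I-V curve.

    First modification:   modify_iv(i_d, i_diode, i_bjt)
    Further modification: modify_iv(i_d_first, i_dgidl, i_sgidl, i_g)
    All arrays must be sampled at the same bias points.
    """
    out = np.asarray(i_measured, dtype=float).copy()
    for comp in calculated_components:
        out -= np.asarray(comp, dtype=float)
    return out

# First modification of a three-point drain-current sweep (values illustrative)
i_d = np.array([1.0e-6, 2.0e-6, 4.0e-6])
i_diode = np.array([1.0e-7, 1.0e-7, 1.0e-7])
i_bjt = np.array([2.0e-7, 2.0e-7, 2.0e-7])
i_d_first = modify_iv(i_d, i_diode, i_bjt)  # Id^first-modified
```

The same call then takes `i_d_first` together with the calculated GIDL and tunneling components to produce the further-modified curve used for the remaining DC extraction steps.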
In one embodiment, the I[dgidl]-, I[sgidl]- and I[g]-related parameters are extracted before extracting other DC parameters that can be affected by the modifications, so that the I-V curve modification may be done for more accurate extraction of these other DC parameters. However, if such accuracy is not required, one can choose not to do the above modification, and the I[dgidl]-, I[sgidl]- and I[g]-related parameters can be extracted at any point in the DC parameter extraction step 820. [0098] The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. Furthermore, the order of the steps in the method is not necessarily intended to occur in the sequence laid out. It is intended that the scope of the invention be defined by the following claims and their equivalents. [0011] FIG. 1 is a block diagram of a system according to an embodiment of the present invention; [0012] FIG. 2 is a flow chart illustrating a modeling process in accordance with an embodiment of the present invention; [0013] FIG. 3A is a block diagram of a model definition input file in accordance with an embodiment of the present invention; [0014] FIG. 3B is a block diagram of an object definition input file in accordance with an embodiment of the present invention; [0015] FIG.
4 is a diagrammatic cross-sectional view of a silicon-on-insulator MOSFET device for which model parameters are extracted in accordance with an embodiment of the present invention; [0016] FIG. 5 is a graph illustrating sizes of test devices used to obtain experimental data for model parameter extraction in accordance with an embodiment of the present invention; [0017] FIG. 6 is a graph illustrating sizes of test devices used to obtain experimental data for model parameter extraction in accordance with an alternative embodiment of the present invention; [0018] FIGS. 7A-7D are examples of current-voltage (I-V) curves representing some of the terminal current data for the test devices; [0019] FIG. 8 is a flow chart illustrating in further detail a parameter extraction process in accordance with an embodiment of the present invention; [0020] FIG. 9 is a flow chart illustrating in further detail a DC parameter extraction process in accordance with an embodiment of the present invention; [0021] FIG. 10 is a flow chart illustrating a process for extracting diode current related parameters in accordance with an embodiment of the present invention; and [0022] FIG. 11 is a flow chart illustrating a process for extracting impact ionization current related parameters in accordance with an embodiment of the present invention. [0002] The invention relates generally to computer-aided electronic circuit simulation, and more particularly, to a method of extracting semiconductor device model parameters for use in integrated circuit simulation. [0003] Computer aids for electronic circuit designers are becoming more prevalent and popular in the electronic industry. This move toward electronic circuit simulation was prompted by the increase in both the complexity and size of circuits. As circuits have become more complex, traditional breadboard methods have become burdensome and overly complicated.
With increased computing power and efficiency, electronic circuit simulation is now standard in the industry. Examples of electronic circuit simulators include the Simulation Program with Integrated Circuit Emphasis (SPICE) developed at the University of California, Berkeley (UC Berkeley), and various enhanced versions or derivatives of SPICE, such as SPICE2 or SPICE3, also developed at UC Berkeley; HSPICE, developed by Meta-Software and now owned by Avant!; PSPICE, developed by Micro-Sim; and SPECTRE, developed by Cadence. SPICE and its derivatives or enhanced versions will be referred to hereafter as SPICE circuit simulators. [0004] SPICE is a program widely used to simulate the performance of analog electronic systems and mixed mode analog and digital systems. SPICE solves sets of non-linear differential equations in the frequency domain, steady state and time domain, and can simulate the behavior of transistor and gate designs. In SPICE, any circuit is handled in a node/element fashion; it is a collection of various elements (resistors, capacitors, etc.). These elements are then connected at nodes. Thus, each element must be modeled to create the entire circuit. SPICE has built-in models for semiconductor devices, and is set up so that the user need only specify model parameter values. [0005] An electronic circuit may contain any variety of circuit elements such as resistors, capacitors, inductors, mutual inductors, transmission lines, diodes, bipolar junction transistors (BJT), junction field effect transistors (JFET), and metal-oxide-semiconductor field effect transistors (MOSFET). A SPICE circuit simulator makes use of built-in or plug-in models for semiconductor device elements such as diodes, BJTs, JFETs, and MOSFETs. If model parameter data is available, more sophisticated models can be invoked. Otherwise, a simpler model for each of these devices is used by default.
[0006] A model for a device mathematically represents the device characteristics under various bias conditions. For example, for a MOSFET device model, in DC and AC analysis, the inputs of the device model are the drain-to-source, gate-to-source, bulk-to-source voltages, and the device temperature. The outputs are the various terminal currents. A device model typically includes model equations and a set of model parameters. The model parameters, along with the model equations in the device model, directly affect the final outcome of the terminal currents. In order to represent actual device performance, a successful device model is tied to the actual fabrication process used to manufacture the device represented. This connection is represented by the model parameters, which are dependent on the fabrication process used to manufacture the device. [0007] SPICE has a variety of preset models. However, in modern device models, such as BSIM (Berkeley Short-Channel IGFET Model) and its derivatives, BSIM3, BSIM4, and BSIMPD (Berkeley Short-Channel IGFET Model Partial Depletion), all developed at UC Berkeley, only a few of the model parameters can be directly measured from actual devices. The rest of the model parameters are extracted using nonlinear equations with complex extraction methods. See Daniel Foty, “MOSFET Modeling with Spice—Principles and Practice,” Prentice Hall PTR, 1997. [0008] Since the sets of equations utilized in a modern semiconductor device model are complex with numerous unknowns, there is a need to extract the model parameters in the equations in an efficient and accurate manner so that using the extracted parameters, the model equations will closely emulate the actual process. [0009] The present invention includes a method for extracting semiconductor device model parameters for a device model such as the BSIMPD model. 
The device model parameters for the device model include a plurality of base parameters, DC model parameters, temperature-dependent parameters, and AC parameters. The method comprises obtaining terminal current data corresponding to various bias conditions in a set of test devices and extracting a first portion of the DC model parameters for the device model from the terminal current data. The terminal current data are then first modified based on the extracted first portion of the DC model parameters. The method may further comprise extracting a second portion of the DC model parameters and further modifying the first-modified terminal current data based on the extracted second portion of the DC model parameters. The method further comprises extracting additional DC model parameters based on the first-modified or the further-modified terminal current data. [0010] The present invention also includes novel methods for extracting the first portion of the DC model parameters, the second portion of the DC model parameters, and some of the additional DC model parameters, as explained in more detail below. [0001] This patent claims priority pursuant to 35 U.S.C. § 119(e)(1) to U.S. Provisional Patent Application Serial No. 60/368,599, filed Mar. 29, 2002.
{"url":"http://www.google.com/patents/US20030220779?dq=%235,519,867","timestamp":"2014-04-25T02:08:24Z","content_type":null,"content_length":"126443","record_id":"<urn:uuid:230ed70a-6945-4fe5-9c93-2504f720cfa3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenville, TX Math Tutor Find a Greenville, TX Math Tutor ...I received an A in both semesters of chemistry from the University of Texas at Dallas and have become exceedingly familiar with its basic concepts from valence electrons the the Henderson–Hasselbalch equation in the semesters of biochemistry and organic chemistry that followed. I am confident in... 15 Subjects: including algebra 1, algebra 2, chemistry, physics ...I love algebra and have tutored over 30 hours of Algebra 2. Algebra 2 is a key foundation for college-level mathematics and science courses, particularly for students interested in science or engineering majors. I used Algebra 2 concepts in many of my own undergraduate and graduate engineering courses. 15 Subjects: including algebra 1, algebra 2, grammar, geometry ...My focus is always on understanding concepts rather than simply knowing how to get the answers. I can help struggling students figure out the gaps in their skills and prevent future difficulties. I am currently teaching high school geometry in Plano, but I can tutor evenings and Saturdays in Plano, Allen and McKinney areas. 10 Subjects: including SAT math, algebra 1, geometry, prealgebra ...I live in Rockwall and would love to be able to help your child understand math better.I teach MST & Honors students and we use technology everyday in class. My students are actively engaged and they really enjoy being able to work with iPads and Macbooks. I will have each of these available during my tutoring sessions if your child needs to use them. 4 Subjects: including prealgebra, grammar, elementary math, vocabulary ...I have worked with a wide range of students who speak another language other than English for their first language. These students range in age from children to adult learners. Also, I have helped individuals whose first language was Russian, Korean, Mandarin, French and Spanish. 
25 Subjects: including algebra 1, ACT Math, ESL/ESOL, geometry
{"url":"http://www.purplemath.com/greenville_tx_math_tutors.php","timestamp":"2014-04-20T23:44:25Z","content_type":null,"content_length":"23922","record_id":"<urn:uuid:612e9931-0b0c-4166-a2e3-df3a40ee71c2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Journal of Porous Media

Heat Transfer with Propane Evaporation from a Porous Wick of Heat Pipe

Abstract
Heat transfer enhancement in the evaporator of a vapordynamic thermosyphon and loop heat pipe (LHP) filled with propane is discussed. The experimental data for heat transfer with propane pool boiling on horizontal smooth and porous tubes inside the vapordynamic thermosyphon and the LHP evaporator were recorded for saturation temperature Ts = 0−30°C and heat input q = 0.1−120 kW/m² (q = 0.02−30 kW/m² for the LHP evaporator). Heat transfer intensification of up to 3−5 times was obtained, with a heat transfer coefficient up to 30 × 10³ W/m².

Leonid L. Vasiliev, Jr., Byelorussian Academy of Sciences; and Luikov Heat & Mass Transfer Institute, Porous Media Laboratory, P. Brovka Str. 15, 220072 Minsk, Belarus
Alexander S. Zhuravlyov, Luikov Heat & Mass Transfer Institute, National Academy of Sciences of Belarus, Porous Media Laboratory, P. Brovka Str. 15, 220072 Minsk, Belarus
Mikhail N. Novikov, Luikov Heat & Mass Transfer Institute, National Academy of Sciences of Belarus, Porous Media Laboratory, P. Brovka Str. 15, 220072 Minsk, Belarus
{"url":"http://www.dl.begellhouse.com/ixml/Volume4_Issue2-5aea75fc654b44e1.xml","timestamp":"2014-04-24T07:08:31Z","content_type":null,"content_length":"15698","record_id":"<urn:uuid:aa8e678f-a6a5-4ca8-abaf-e8c58fc16d0b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Cambridge, MA Precalculus Tutor

...I'm available for one-on-one tutoring sessions or for groups up to 5 - please inquire about rates for groups. Feel free to contact me if you have any additional questions! In terms of my teaching experience, I spent the past year teaching AP Biology every Sunday to a group of high school students through the Delve program at MIT.
13 Subjects: including precalculus, biology, algebra 1, GRE

...I graduated with a Cognitive Science degree from MIT. I'm especially aware of how we learn. From the many concepts I have learned about memorization, categorization, and perception in my cognitive science classes, I can be creative with coming up with methods that are personalized to a stude...
26 Subjects: including precalculus, reading, physics, writing

...Not much of it actually. What is important is that I am incredibly patient and love to explain things to others. In my experiences, it's not just about how informed or knowledgeable a tutor is, but more importantly, how well they can explain and convey the material in a way that's effective and understandable to someone else.
23 Subjects: including precalculus, calculus, geometry, algebra 1

...I also periodically helped some of the other people in the class. I found the material very intuitive and still remember almost all of it. I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/discrete variety.
14 Subjects: including precalculus, calculus, geometry, algebra 1

...You will have the opportunity to choose the method you feel comfortable with: either conversation about daily life or a fun learning process with songs, games and stories. Algebra seems hard at first for students who just start learning it. Instead of using numbers algebra introduces letters as sym...
5 Subjects: including precalculus, physics, Chinese, algebra 2
{"url":"http://www.purplemath.com/Cambridge_MA_precalculus_tutors.php","timestamp":"2014-04-18T13:53:28Z","content_type":null,"content_length":"24348","record_id":"<urn:uuid:6a649148-8cb0-432c-a98c-b930d321f57d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Golden Age/FOM plans
Harvey Friedman friedman at math.ohio-state.edu
Wed Oct 22 12:15:53 EDT 1997

Looking at the growth of the list of subscribers to FOM (currently 62), there is evidently a great deal of interest in the foundations of mathematics and the vital intellectual issues surrounding it. As a consequence, I, for one, am poised to make a growing commitment to actively participate in FOM. Of course, everyone knows that I speak for no one other than myself, and there is never going to be any kind of standard FOM view. My intentions are to contribute what I can in the following ways:

1. I will write personal reviews of the major articles and books relating to the foundations of mathematics. This may include lesser ones if there are major provocative points in them. Two examples: Angus MacIntyre's "The Strength of Weak Systems", in P. Weingartner and G. Schurz (eds.) Proc. 11th Wittgenstein Symp. 1986, 1987, focusing on pp. 56-57; and Sol Feferman's "Does Mathematics need new Axioms?", preprint. Also a book by Penelope Maddy is coming out soon on Naturalist Foundations of Mathematics. If any of you have seen articles or books that you might want to see me review, please contact me at friedman at math.ohio-state.edu.

2. I will discuss, from my perspective, what has been accomplished in the foundations of mathematics; and also what has not yet been accomplished in the foundations of mathematics that I feel can realistically be accomplished. I am planning to discuss a wide range of problems and projects of significance for the foundations of mathematics, which are at various levels of formality. Many of you know that I have been doing this in person on an individual basis for many years.

3. I have been articulating a strong view that the general area of mathematical logic has suffered greatly by overspecialization and compartmentalization, with a gross overemphasis on pure technique without regard to any higher intellectual purpose.
On the other hand, I have always conceded that periods of great technocracy are sometimes essential for the proper development of a subject. I freely admit that a number of things that I do simply could not be done without a significant technical Nevertheless, we are now at a point in mathematical logic where the higher intellectual purposes can now be effectively addressed, at an unprecedented level, armed with a considerable amount of technical machinery and I will be reviewing what is happening in mathematical logic with emphasis on the issue of its connections or lack of connections with higher intellectual purposes. There will be a special focus on missed opportunities, and candid discussion of the issues of overspecialization and compartmentalization. I am convinced that overspecialization and compartmentalization will eventually destroy mathematical logic as a viable area within mathematics, and also threaten to destroy the support for pure mathematics as a whole.

4. I have been expressing concern that students are forced into overspecialization at an early stage in their career before they can grasp the higher intellectual issues. This is a direct result of the minimal role that higher intellectual issues have in the ongoing research of the faculty. There is a tendency that with each generation there is a decreasing awareness of the higher intellectual issues - and the proper connection between higher intellectual issues and research becomes more and more remote. Thus there is a vicious circle where overspecialized faculty advise overspecialized students who become even more overspecialized faculty who advise even more overspecialized students, etcetera. I want to talk about what can be done to break this vicious circle. Foundations of mathematics can be expected to play a leading role. In particular, there is a different way to teach mathematical logic, based on genuinely foundational organizational schemes for mathematical logic.

5.
There has been discussion of "fundamental concepts" of mathematics. I have my own view on this matter. I believe that there is a tremendously selective notion of fundamental concept - where the concept must speak clearly to everyone as intellectuals. However, most people would think that under such a strong criterion, nearly nothing of nontrivial substance in mathematics can be reconstructed. I claim that this is completely false. So I want to write about a radical reconstruction of at least classical mathematics from this point of view. This requires altered presentation of material, new theorems, even the development of new subareas. But all of this is highly motivated - to give a general intellectual's account of much of mathematics.

6. I want to discuss other major foundational programs. For instance, it is clear that in applications of mathematics to other subjects, one must ultimately confront our finiteness - as intellectuals, as physical beings, as observers and measurers - so that finite precision, as opposed to infinite precision, assumes paramount importance. This emphasis on finite precision should begin with mathematics itself. So there is the program of giving a finitary treatment of basic mathematics in a systematic, readily useful, and intelligible way. For instance, what does a treatment of functions of one complex variable look like in strictly finitary terms? Does this necessarily degenerate into an ugly morass? I don't think so. In fact, I believe this can be done appropriately, and that it may spawn a number of new subareas where new kinds of estimates have to be made - and the whole development will look quite interesting.

At the moment, I see three higher intellectual purposes that mathematical logicians should turn to.

a. Foundations of Mathematics.
b. Direct applications to mathematics.
c. Foundational Studies in general.
For practical reasons, I think that the FOM group needs to have some sort of limitation of scope, and so I, for one, will not focus on c. There may come a time when it is appropriate to have a group in Foundational Studies, and I hope to have a role in its setup. And as I have said before, I think the ultimate future of Foundations of Mathematics rests with Foundational Studies in general, which is a context in which Foundations of Mathematics has the special role of being the most highly developed Foundations - one that serves as a model of what can be accomplished. But for the FOM group, I, personally, would like to focus on a).

However, there is the contentious issue on the table that direct applications to mathematics - of the kind with which there have been recent successes - is somehow to be construed as genuine Foundations of Mathematics, in a legitimate use of this term. As long as this issue is on the table, it seems entirely appropriate for people to discuss direct applications to mathematics (as they are doing) as long as it is presented as foundations of mathematics, in perhaps some alternative sense. I would prefer a serious articulation of this nonstandard view of foundations of mathematics, rather than simply a declaration that there is such - in particular, I would prefer to hear what is not foundations of mathematics in order to make clarifying contrasts. Now I for one do not think that these direct applications to mathematics discussed by Anand, Dave, and Lou, and the mathematical logic that supports them - at least in their present form - constitute Foundations of Mathematics in any accepted sense of the term; i.e., as long as the phrase "Foundations of Mathematics" is used in remotely the same way as it has been consistently used for at least one hundred and fifty years.
And this normal way of using the phrase "Foundations of Mathematics" is entirely analogous to the way "Foundations of X" has also been used, for any field X, for at least one hundred and fifty years - and arguably for, say, two thousand years. Moreover, there are excellent ways to sharply distinguish genuine "Foundations of mathematics" from these other pieces of "Fundamental Mathematics."

***This will be the subject of another e-mailing called: Foundations??***

This episode - the controversy over the appropriate meaning of "Foundations of Mathematics" - is typical of how I personally want to handle controversy in FOM. E.g., I talked the matter through with someone I thought that Lou, Anand, and Dave would think would agree with or have considerable sympathy with their point of view. In this case I might have heard some real support for their viewpoint on the matter, but I detected none. In fact, I was explicitly told that the typical mathematician in their Department of Mathematics holds a view much closer to mine than to theirs on this issue. And that the level of respect for the work of Kurt Godel as science is immeasurably high - much higher than that of themselves or their colleagues.

We now appear to be in a Golden Age with regard to a) and b). In both cases, things are now possible because of this prior technical development. However, one should not even begin to think that a large percentage of the technical development has, or even will have any substantial bearing on a) or b) or any other higher intellectual purpose. It's simply not true. Nor should one think that any technical advance has an equal chance as any other technical advance of becoming relevant to a) or b) or any higher intellectual purpose. On the contrary, upon choosing a particular problem to work on, one should have its place clearly in mind in the general intellectual landscape. This kind of judgment is not arbitrary - although at this time it is not subject to precise formal characterization.
The quality of this kind of judgment is what separates the truly great thinkers from others. For the higher levels of intellectual achievement, one must go far beyond technique. The technique must be under such complete control, that it is second nature - that it is effortless, and always subservient to a higher intellectual purpose.

Many pure mathematicians are quite familiar with how this applies within pure mathematics, and how it separates the truly great mathematician from the rest. However, for people dedicated to Foundations of Mathematics, it is neither necessary nor sufficient to look for placement of ideas in the ***general MATHEMATICAL landscape***. It's the placement in the ***general INTELLECTUAL landscape*** that is important. In fact, this is the distinguishing feature of Foundational Studies in general - that one speaks to the general intellectual community - the general world of ideas.

There is no place for the slavish acceptance of the importance or significance of a particular development or kind of problem that is fashionable in a particular field. Its relationship to the whole intellectual community is to be continually evaluated and reevaluated on a case by case basis, in consultation with people in other fields - even with people in other walks of life. Normally what happens is that one or more aspects of the fashion do have connections with something globally fundamental - but not usually in the exact form in which specialists normally cast them (for each other). They have to be recast in more fundamental terms. Often this requires further developments that would not be conceived of by specialists. The problem with pure mathematics and the wider intellectual community at this time is that this recasting of the current fashions in globally fundamental terms has not taken place - or at least not systematically enough to create any real understanding on the part of the general intellectual community.
Instead, today there is great confusion among the scientific public - or even within the mathematical sciences - as to the appropriate placement of pure mathematics in the general intellectual landscape.

Which brings me back to the phrase "Golden Age." There can be no doubt that during the last part of the 19th century up through the 1930's, Foundations of Mathematics was in a period where great revelations regarding questions of the highest possible general intellectual significance were being successfully addressed on a continuing basis replete with stirring surprises. Although great advances were made in pure mathematics during this period, many thoughtful people would agree that this development was singular, in that its extraordinary meaning and significance in the general world of ideas was incomparably higher. For example, the applied mathematician and historian Morris Kline, Professor of Mathematics, Courant Institute of Mathematical Sciences, New York University, writes the following in his epic Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1972, Chapter 51, The Foundations of Mathematics, p. 1182: "By far the most profound activity of twentieth-century mathematics has been the research on the Foundations."

After a long period of primarily technical development, I think the time is now ripe for another Golden Age for Foundations of Mathematics. And by talking to applied model theorists like Anand, Dave, Lou, as well as some interested mathematicians, they may be poised for a first Golden Age for "Mathematical Applications of Mathematical Logic." We shall see what emerges. However one distinction is clear: "Mathematical Applications of Mathematical Logic" seeks to speak to the pure mathematicians. "Foundations of Mathematics" seeks to speak to everyone. As long as the former speaks primarily to pure mathematicians and the latter speaks to everyone, it is absurd to pretend that there is no genuine and fundamental distinction.
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-October/000061.html","timestamp":"2014-04-17T10:19:54Z","content_type":null,"content_length":"15957","record_id":"<urn:uuid:a230895e-3ba7-4b6c-94af-0876abc0321b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello! I'm working on a finance-related problem, and I'm not sure that I'm solving the ODE well! Here's the prompt: A certain college graduate borrows $8,000 to buy a car. The lender charges interest at an annual rate of \(10\%\). Assuming that interest is compounded \(\sf continuously\), and that the borrower makes payments \(\sf continuously\) at a constant annual rate \(k\), determine the payment rate \(k\) that is required to pay off the loan in \(3\) years. Also, determine how much interest is paid during the \(3\)-year period. The answers: \(k=$3086.64\ /\text{year}\); \($1259.92\)

Here's what I did. Let \(D\) be the debt in dollars. Note that \(\dfrac{dD}{dt}=D'=.10D-k\implies D'-.10D=-k\). I'll use \(C\) variables as arbitrary constants. Then \(\Large {D=e^{-\int -.10dt}\int (-k)e^{\int -.10dt}dt\\~\\~\\ =-k\ e^{\int .10dt}\int e^{-\int .10dt}dt\\~\\~\\ =-k\ e^{.10t+C_1}\int e^{-.10t+C_2}dt\\~\\~\\ =-k\ e^{.10t+C_1}(-10 e^{-.10t+C_2}+C_3)\\~\\~\\ =-k\ e^{.10t}e^{C_1}(-10 e^{-.10t}e^{C_2}+C_3)\\~\\~\\ =-k\ e^{.10t}C_4(-10 e^{-.10t}C_5+C_3)\\~\\~\\ =-k\ e^{.10t}C_4(-10) e^{-.10t}C_5-k\ e^{.10t}C_4C_3\\~\\~\\ =-k\ e^0C_4(-10) C_5-k\ e^{.10t}C_4C_3\\~\\~\\ =10k\ C_6-k\ e^{.10t}C_7\\~\\~\\ =k\ C_8-k\ e^{.10t}C_7}\)

Where might I be making a mistake? This doesn't look right... I see that I can simplify: \(\Large\qquad\qquad\qquad = k(C_8-e^{0.10t}C_7)\)

I forget the reason behind it, but I'm pretty sure you just ignore the constant of integration when you find the integrating factor.
I end up with \[D(t)=10k+Ce^{1/10~t}\]

Okay! I solved on Wolfram Alpha and I think I got something like that! I have to check! But thank you! I was just using a formula that was derived with the integrating factor, I think. I don't quite understand it! :P

You're welcome! And by the way, the reason (according to wikipedia) is that we only need *a* solution, not the general solution, to the integral.

Hmm! Thanks! I'll probably look into this tomorrow when I'm less tired! :) Thank you very much, @SithsAndGiggles !
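For a concrete check of the quoted answers: the general solution of \(D'=0.10D-k\) is \(D(t)=10k+(D(0)-10k)e^{0.10t}\), and imposing \(D(3)=0\) with \(D(0)=8000\) pins down \(k\). The short script below (illustrative, not part of the original thread) verifies this numerically:

```python
import math

r, D0, T = 0.10, 8000.0, 3.0   # annual rate, principal, horizon in years

# D(t) = k/r + (D0 - k/r) e^{rt}; impose D(T) = 0 and solve for k.
k = r * D0 * math.exp(r * T) / (math.exp(r * T) - 1.0)

# Everything paid beyond the principal is interest.
interest = k * T - D0

print(f"k ≈ {k:.2f} per year")      # ≈ 3086.64
print(f"interest ≈ {interest:.2f}")
```

This reproduces \(k\approx\$3086.64\)/year; the total interest comes out to about $1259.91 with full precision, and exactly the textbook's $1259.92 if \(k\) is first rounded to the cent.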
{"url":"http://openstudy.com/updates/5237cdbbe4b0b9d6b2d6cf92","timestamp":"2014-04-20T18:26:19Z","content_type":null,"content_length":"41208","record_id":"<urn:uuid:5e6863dd-ac3d-4579-b8ac-b8c11f2f7a80>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Stability of Exact and Discrete Energy for Non-Fickian Reaction-Diffusion Equations with a Variable Delay

Abstract and Applied Analysis, Volume 2014 (2014), Article ID 840573, 9 pages
Research Article

^1School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China
^2School of Computer Science and Engineering, Beihang University, Beijing 100191, China
^3School of Computer Science, McGill University, Montreal, QC, Canada H3A 2K6
^4Department of Mathematics and Statistics, McGill University, Montreal, QC, Canada H3A 2K6

Received 4 December 2013; Revised 28 December 2013; Accepted 11 January 2014; Published 5 March 2014
Academic Editor: Adem Kilicman

Copyright © 2014 Dongfang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper is concerned with the stability of non-Fickian reaction-diffusion equations with a variable delay. It is shown that the perturbation of the energy function of the continuous problems decays exponentially, which provides a more accurate and convenient way to express the rate of decay of energy. Then, we prove that the proposed numerical methods are sufficient to preserve energy stability of the continuous problems. We end the paper with some numerical experiments on a biological model to confirm the theoretical results.

1. Introduction

Reaction-diffusion equations with delay are widely proposed as models for biological, physical, and control systems [1–5]. Over the past several years, such equations have been extensively investigated and several important properties such as existence and stability of the travelling wavefront are well understood (e.g., [6–9]).
It is known that traveling waves describe transition processes from the physical point of view. If the reaction is very fast, the speed of propagation will become rather large (e.g., [10–12]). This pathologic behavior is not presented in the physical phenomena but introduced by some mathematical models (cf. [13]). In order to avoid the unphysical behavior, certain memory effects are taken into account in the mathematical models.

In [13–15], the time memory flux is proposed by assuming that a flux observed at some time should have something to do with the gradient of the solution at some past time; that is, The first order approximation of the above formula gives Its solution satisfies In fact, the memory term is presented to avoid the infinite propagation speed in flux definition.

These ideas yield the study on the following non-Fickian reaction-diffusion equation with a constant delay: where and .

Recently, many researchers have investigated these kinds of equations. The existence and uniqueness of the solutions were derived by Chang [16]. The well posedness of the integrodifferential models was studied by Araújo et al. in [17]. Later, they further studied the effect of the integrodifferential term in the qualitative behavior of the solution of the diffusion equations in [18]. Zhang and Vandewalle [19] studied the general linear methods for solving the nonlinear integrodifferential equations with memory. Wang et al. investigated the asymptotic stability of exact and discrete solutions of nonlinear integrodifferential equation in [20–22]. Wen et al. considered dissipativity of the functional differential equations [23]. For a more detailed description of this subject as well as its open problems, we refer the readers to the books [1–3, 24, 25], the papers [26–32], and the references therein.
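The three display equations in the memory-flux discussion above were lost in this extraction. As a reconstruction sketch only: in the standard non-Fickian (Cattaneo-type) derivation used in the literature this passage cites, with a relaxation parameter \(\tau\) and diffusion coefficient \(D\) (symbols assumed here, not taken from the paper), the delayed flux, its first-order approximation, and the resulting memory flux read:

```latex
% Flux tied to the gradient of the solution at a past time (delayed Fick's law):
J(x, t+\tau) = -D\,u_x(x,t),
% first-order Taylor approximation in \tau:
J(x,t) + \tau\,J_t(x,t) = -D\,u_x(x,t),
% whose solution is the memory (integro-differential) flux:
J(x,t) = -\frac{D}{\tau}\int_0^t e^{-(t-s)/\tau}\,u_x(x,s)\,ds.
```

Substituting the last expression into the conservation law is what produces an integro-differential reaction-diffusion equation of the kind studied here, though the paper's exact notation and numbering cannot be recovered from this copy.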
In particular, some continuous and discrete energy estimates, which are determined by the -norm of the solution and the estimates for the -norm of the past in time of its gradient, were derived by Branco et al. in [13]. However, the derived energy estimates rely on time. When time goes to infinity, the energy estimates seem to become boundless. Later, inspired by their instructive work, Li et al. further derived some asymptotic stability of problem in [33]. The stability result implies that the perturbation of the energy estimate tends to zero as approaches infinity. However, it does not imply anything about how long it takes to decay. Moreover, when referring to the impact of various factors (e.g., environment, temperature, and other potential effects) in some real-world problems, the model should be modified to an equation with a variable delay, which makes the numerical simulation and its analysis more complicated. Hence, some further stability results for the non-Fickian equations with a variable delay are much more significant and challenging from both the physical and mathematical points of view.

One aim of the paper is to study the exponential stability of some more general models. Consider the non-Fickian reaction-diffusion equation with a variable delay where , , the boundary conditions and initial condition We find that the perturbation of the energy estimate decays exponentially. This exponential stability result provides a more accurate and convenient way to express the rate of decay. Meanwhile, the stability result can also be applied to the equations investigated in [13–15]. The given results imply that the perturbations of the energy of the equations are controlled by the initial perturbations from the systems.

The other aim is to investigate the discrete energy stability of (5). Since the finite element method can be easily designed for high order of accuracy in space (see, e.g., [34, 35]), we introduce the method to solve the equation.
Exponential stability of the semidiscrete system is derived. Then, the implicit Euler method is further applied to discretize the equation. And the integral is approximated by the right rectangular rule and the delay term is approximated by a linear interpolation. It is proved that the proposed numerical methods are sufficient to preserve stability of the underlying systems. Besides, if we discretize the diffusion term by using the centered difference operator, the energy stability of full discrete non-Fickian equation with a variable delay can also be derived. The given result indicates that the perturbations of the numerical solutions vanish eventually. Finally, a numerical simulation on a biological model is presented to confirm the theoretical and numerical results.

The rest of the paper is organized as follows. In Section 2, we discuss energy stability of non-Fickian delay reaction-diffusion equations. Section 3 describes in detail the stability of the numerical methods for the equation. Section 4 shows experimental studies for verifying the proposed results. Finally, conclusions and discussions for this paper are summarized in Section 5.

2. Exponential Stability

The following lemma can be derived from Lemma 2.1 in [36], where it was used to investigate the dissipativity of delay differential equations. Here it will play a key role in discussing the energy stability of the problem (5).

Lemma 1. Suppose that Here and , are continuous functions such that , and for all with constants and . Then where and is defined as

For the stability analysis, we need to consider the perturbed problem, that is, non-Fickian delay reaction-diffusion equation (5) with a different initial condition . Its solution is denoted by and satisfies the following equation: Besides, let denote the inner product in and by we denote the corresponding induced norm. To simplify the notation, we represent by if is defined in . The energy function is defined by Then, we have the following result.
Theorem 2. Assume that for each If where and are continuous and there exists such that then where and is defined as

Proof. Let ; we can obtain the following equation: Multiplying each member of (18) by with respect to and integrating by parts, we find As in [15], we have where we have used the Poincaré inequality. Now, using assumption (14), we have Therefore, Then, we derive the following inequality: Now, applying Lemma 1, we finally have the stability result.

It is remarkable that formula (16) implies that Hence, the perturbation of the solution of the problem also decays exponentially. When the right-hand side function of the problem (5) does not possess the delay term, the problem degenerates into a non-Fickian reaction-diffusion equation: which is also discussed in [13]. We have the following conclusion.

Corollary 3. Assume that for each If where are continuous and , for . Then there exist constants and such that

When , the problem (5) degenerates into an integrodifferential delay equation, which is investigated in [14, 15]. We have the following conclusion.

Corollary 4. Assume that for each If where and are continuous and there exists such that then there exist constants and such that

If we assume that where and and are continuous functions, there may exist a bounded set and a time , such that for any given initial function , the corresponding energy of the problem is contained in for all . The analogous results and their conclusions for non-Fickian equations with a variable delay can be derived without any difficulty. We do not list them here for brevity.

3. Stability of the Numerical Approximation

In this section, we will concentrate on the stability of two kinds of numerical approximation. Here and below, for the discretization of system (5), we divide the interval with a mesh: with the space stepsize , where is a positive integer. And is a time stepsize, where is a positive integer. We will numerically solve the problem at time , .

3.1.
3.1. Stability of the Finite Element Approximation

Firstly, the finite element method is introduced to discretize the problem. Let be a finite dimensional subspace of ; then the semidiscrete finite element method is to find such that, for all test functions and , The exponential stability of the semidiscrete approximation is shown as follows.

Theorem 5. Assume that the conditions in Theorem 2 hold; then the difference satisfies the following formulae: where the parameters and are defined in Theorem 2 and with satisfying the semidiscrete finite element approximation to the perturbed problem.

Proof. The proof is similar to that of Theorem 2.

If we further apply the right rectangular rule to approximate the integral and the implicit Euler method to discretize (36), then the fully discrete numerical approximation is to find such that, for all test functions and , for Here denotes an approximation to , obtained by the following linear interpolation procedure at the point : where with integer and .

Now we can establish the discrete version of the energy: Then, we have the following result.

Theorem 6. Assume that conditions (14) and (15) hold. Then there exists a constant such that where and is the fully discrete approximation to the perturbed problem.

Proof. The difference satisfies the following equation: Taking in the above formula, we find that where we have used condition (14). As in [15], we have Substituting (44) into (43) yields Applying the following discrete Poincaré inequality and to (45), we arrive at where we have used the inequality . It follows from (47) that where . In fact, in view of condition (15), there exists a constant such that Hence, Together, (49) and (52) give the final conclusion.

Theorem 6 shows that, for any given initial perturbation, the perturbation of the corresponding discrete energy of the problem decreases in time. Meanwhile, noting that in the energy function, the difference tends to zero very quickly.
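The ingredients of the fully discrete scheme above — implicit Euler in time, the right rectangular rule for the memory integral, and linear interpolation for the variable-delay term — can be sketched on a scalar toy problem. Everything below (the model, its coefficients, and the delay function tau(t)) is a hypothetical stand-in, since the paper's actual equations are not reproduced in this extraction.

```python
import math

# Hypothetical scalar stand-in for the fully discrete scheme of Section 3.1:
#   u'(t) = -a*u(t) - kappa * int_0^t exp(-(t-s)) u(s) ds + b*u(t - tau(t)),
# discretized by implicit Euler in time; the memory integral uses the right
# rectangular rule and the variable-delay term uses linear interpolation.
a, b, kappa = 3.0, 0.5, 1.0
h, T = 0.01, 10.0
M = int(math.ceil(1.5 / h))                # history depth for tau_max = 1.5

def tau(t):
    return 1.0 + 0.5 * math.sin(t)         # variable delay in [0.5, 1.5]

u = [1.0] * (M + 1)                        # history u(t_j) = 1 on [-tau_max, 0]

def interp(s):
    """Linear interpolation of u at the off-grid time s."""
    j = math.floor(s / h)                  # s lies in the cell [t_j, t_{j+1}]
    theta = s / h - j
    return (1 - theta) * u[j + M] + theta * u[j + M + 1]

for n in range(int(T / h)):
    t_next = (n + 1) * h
    u_delay = interp(t_next - tau(t_next))
    # right rectangular rule over (0, t_{n+1}]; the node s = t_{n+1} carries the
    # implicit unknown and is absorbed into the h*h*kappa term of the denominator
    mem = h * sum(math.exp(-(t_next - j * h)) * u[j + M] for j in range(1, n + 1))
    u.append((u[-1] + h * (b * u_delay - kappa * mem)) / (1 + h * a + h * h * kappa))

print(abs(u[-1]))                          # the perturbation decays
```

Because the instantaneous damping a dominates the delayed gain b plus the memory contribution (whose kernel has unit mass), the computed perturbation decays, in line with conditions (14)–(15).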
3.2. Stability of the Finite Difference Approximation

In this subsection, we apply the central finite difference method to discretize the diffusion term and discuss the stability of the fully discrete problem. Let be the time stepsize and let denote the numerical approximation to with ; then the fully discrete finite difference approximation to (5) can be written as where , and the argument denotes an approximation to , obtained by the linear interpolation procedure at the point where with integer and . The discrete version of the energy of the finite difference approximation is defined by where is the backward finite difference operator:

Theorem 7. Assume that conditions (14) and (15) hold. Then there exists a constant such that where and is the fully discrete finite difference approximation to the perturbed problem.

Proof. The proof is similar to that of Theorem 6, using the following relation:

4. Application

In this section, we provide a numerical experiment to illustrate the given results. Consider the following non-Fickian Mackey-Glass equation [33]: where , , , , and are given parameters, is an even positive integer, and is a nonnegative function. It is straightforward to verify that condition (14) holds (). If there exists such that , the perturbation of the energy estimate decays exponentially.

For the numerical simulation of the model, we set the parameters , , , the space stepsize , and the time stepsize , and apply the -degree finite element method to solve the problem on with the following different initial conditions: Some statistics of the numerical results with different parameters and are shown in Tables 1 and 2, where the discrete energies and of the problems decrease in time very quickly. Moreover, the numerical differences with the parameters and are shown in Figure 1, where with . The numerical differences (the blue lines) are all bounded by the exponential function (the red line). Clearly, they all confirm the results in this paper.
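A minimal sketch of the subsection's scheme (centered differences in space plus implicit Euler in time) on a toy linear delay reaction-diffusion problem. The model u_t = u_xx - a*u + b*u(x, t - tau) with zero Dirichlet boundary conditions, and all coefficients, are hypothetical stand-ins for the nonlinear Mackey-Glass reaction above; the implicit tridiagonal step is solved with the Thomas algorithm.

```python
import math

# Fully discrete scheme: centered differences in space, implicit Euler in time,
# for the toy problem u_t = u_xx - a*u + b*u(x, t - tau) on (0,1), u = 0 at the
# boundary. The constant delay tau is an exact multiple of the time step k, so
# the delayed term needs no interpolation here.
a, b, tau = 4.0, 1.0, 0.5
J, k, T = 50, 0.01, 5.0               # J space intervals, time step k, final time T
hx = 1.0 / J
m = int(round(tau / k))               # delay measured in time steps

def solve_tridiag(low, diag, upp, rhs):
    """Thomas algorithm for a constant-coefficient tridiagonal system."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = upp / diag, rhs[0] / diag
    for i in range(1, n):
        den = diag - low * cp[i - 1]
        cp[i] = upp / den
        dp[i] = (rhs[i] - low * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def energy(v):
    """Discrete L2 norm, playing the role of the discrete energy."""
    return math.sqrt(hx * sum(x * x for x in v))

# history: u(x, t) = sin(pi*x) for t in [-tau, 0], stored at interior nodes
U = [[math.sin(math.pi * j * hx) for j in range(1, J)] for _ in range(m + 1)]
low = upp = -k / hx**2                # off-diagonal entries of the implicit matrix
diag = 1.0 + k * a + 2.0 * k / hx**2  # diagonal entries
E0 = energy(U[-1])
for n in range(int(T / k)):
    rhs = [U[-1][j] + k * b * U[-m][j] for j in range(J - 1)]
    U.append(solve_tridiag(low, diag, upp, rhs))

print(E0, energy(U[-1]))              # the discrete energy decays in time
```

Since the zero function solves the toy problem, the solution itself is the perturbation, and its discrete energy decaying toward zero mirrors the statement of Theorem 7.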
5. Conclusions

In this paper, we have investigated the stability of continuous and discrete non-Fickian delay reaction-diffusion equations with a variable delay. First, we showed that the perturbation of the energy estimate decays exponentially; the present stability results, covering a more general case, improve those of our previous paper. Second, the numerical analysis indicates that the finite element method or the central finite difference method, combined with the implicit Euler method and a linear interpolation procedure for the delay term, preserves the stability of the energy function of the equations. All the above findings are confirmed by applying the numerical methods to the non-Fickian Mackey-Glass equation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants 11201161, 11171125, 91130003, 61170296, and 61190125; the Fundamental Research Funds for the Central Universities under Grant no. 2012QN033; the Research Fund of the State Key Laboratory of Software Development Environment under Grant no. BUAA SKLSDE-2012ZX-17; the Program for New Century Excellent Talents in University under Grant no. NECT-09-0028; and the Natural Science Foundation of Beijing, China, under Grant no. 4123101.

References

1. R. P. Agarwal, L. Berezansky, E. Braverman, and A. Domoshnitsky, Nonoscillation Theory of Functional Differential Equations with Applications, Springer, New York, NY, USA, 2012.
2. H. Brunner, Collocation Methods for Volterra Integral and Related Functional Differential Equations, Cambridge University Press, Cambridge, UK, 2004.
3. J. H. Wu, Theory and Application of Partial Functional Differential Equations, vol. 119 of Applied Mathematical Sciences, Springer-Verlag, 1996.
4. L. Berezansky, E. Braverman, and L. Idels, “Nicholson's blowflies differential equations revisited: main results and open problems,” Applied Mathematical Modelling, vol. 34, no. 6, pp. 1405–1417, 2010.
5. D. Li, C. Zhang, and H. Qin, “LDG method for reaction-diffusion dynamical systems with time delay,” Applied Mathematics and Computation, vol. 217, no. 22, pp. 9173–9181, 2011.
6. M. Mei, C.-K. Lin, C.-T. Lin, and J. W.-H. So, “Traveling wavefronts for time-delayed reaction-diffusion equation. I. Local nonlinearity,” Journal of Differential Equations, vol. 247, no. 2, pp. 495–510, 2009.
7. M. Mei, C. Ou, and X.-Q. Zhao, “Global stability of monostable traveling waves for nonlocal time-delayed reaction-diffusion equations,” SIAM Journal on Mathematical Analysis, vol. 42, no. 6, pp. 2762–2790, 2010.
8. J. Wu, D. Wei, and M. Mei, “Analysis on the critical speed of traveling waves,” Applied Mathematics Letters, vol. 20, no. 6, pp. 712–718, 2007.
9. Z.-C. Wang, W.-T. Li, and S. Ruan, “Travelling wave fronts in reaction-diffusion systems with spatio-temporal delays,” Journal of Differential Equations, vol. 222, no. 1, pp. 185–232, 2006.
10. D. D. Joseph and L. Preziosi, “Heat waves,” Reviews of Modern Physics, vol. 61, no. 1, pp. 41–73, 1989.
11. S. Fedotov, “Traveling waves in a reaction-diffusion system: diffusion with finite velocity and Kolmogorov-Petrovskii-Piskunov kinetics,” Physical Review E, vol. 58, no. 4, pp. 5143–5145, 1998.
12. F. Andreu, V. Caselles, and J. M. Mazón, “A Fisher-Kolmogorov equation with finite speed of propagation,” Journal of Differential Equations, vol. 248, no. 10, pp. 2528–2561, 2010.
13. J. R. Branco, J. A. Ferreira, and P. da Silva, “Non-Fickian delay reaction-diffusion equations: theoretical and numerical study,” Applied Numerical Mathematics, vol. 60, no. 5, pp. 531–549, 2010.
14. A. Araújo, J. R. Branco, and J. A. Ferreira, “On the stability of a class of splitting methods for integro-differential equations,” Applied Numerical Mathematics, vol. 59, no. 3-4, pp. 436–453, 2009.
15. J. R. Branco, J. A. Ferreira, and P. de Oliveira, “Numerical methods for the generalized Fisher-Kolmogorov-Petrovskii-Piskunov equation,” Applied Numerical Mathematics, vol. 57, no. 1, pp. 89–102, 2007.
16. J.-C. Chang, “Local existence for retarded Volterra integrodifferential equations with Hille-Yosida operators,” Nonlinear Analysis: Theory, Methods & Applications, vol. 66, no. 12, pp. 2814–2832, 2007.
17. A. Araújo, J. A. Ferreira, and P. Oliveira, “Qualitative behavior of numerical traveling solutions for reaction-diffusion equations with memory,” Applicable Analysis, vol. 84, no. 12, pp. 1231–1246, 2005.
18. A. Araújo, J. A. Ferreira, and P. Oliveira, “The effect of memory terms in diffusion phenomena,” Journal of Computational Mathematics, vol. 24, no. 1, pp. 91–102, 2006.
19. C. Zhang and S. Vandewalle, “General linear methods for Volterra integro-differential equations with memory,” SIAM Journal on Scientific Computing, vol. 27, no. 6, pp. 2010–2031, 2006.
20. W. Wang, C. Zhang, and D. Li, “Asymptotic stability of exact and discrete solutions for neutral multidelay-integro-differential equations,” Applied Mathematical Modelling, vol. 35, no. 9, pp. 4490–4506, 2011.
21. W. Wang and C. Zhang, “Preserving stability implicit Euler method for nonlinear Volterra and neutral functional differential equations in Banach space,” Numerische Mathematik, vol. 115, no. 3, pp. 451–474, 2010.
22. W. Wang and C. Zhang, “Analytical and numerical dissipativity for nonlinear generalized pantograph equations,” Discrete and Continuous Dynamical Systems A, vol. 29, no. 3, pp. 1245–1260, 2011.
23. L. Wen, Y. Yu, and W. Wang, “Generalized Halanay inequalities for dissipativity of Volterra functional differential equations,” Journal of Mathematical Analysis and Applications, vol. 347, no. 1, pp. 169–178, 2008.
24. V. Méndez, S. Fedotov, and W. Horsthemke, Reaction-Transport Systems: Mesoscopic Foundations, Fronts, and Spatial Instabilities, Springer Series in Synergetics, Springer, 2010.
25. A. Bellen and M. Zennaro, Numerical Methods for Delay Differential Equations, Oxford University Press, Oxford, UK, 2003.
26. H. Brunner, “Recent advances in the numerical analysis of Volterra functional differential equations with variable delays,” Journal of Computational and Applied Mathematics, vol. 228, no. 2, pp. 524–537, 2009.
27. H. Brunner, “Current work and open problems in the numerical analysis of Volterra functional equations with vanishing delays,” Frontiers of Mathematics in China, vol. 4, no. 1, pp. 3–22, 2009.
28. A. Yadav, S. Fedotov, V. Méndez, and W. Horsthemke, “Propagating fronts in reaction-transport systems with memory,” Physics Letters A, vol. 371, pp. 374–378, 2007.
29. S. Fedotov, “Non-Markovian random walks and nonlinear reactions: subdiffusion and propagating fronts,” Physical Review E, vol. 81, no. 1, Article ID 011117, 7 pages, 2010.
30. S. Fedotov, A. Iomin, and L. Ryashko, “Non-Markovian models for migration-proliferation dichotomy of cancer cells: anomalous switching and spreading rate,” Physical Review E, vol. 84, no. 6, Article ID 061131, 8 pages, 2011.
31. C. Zhang and S. Zhou, “The asymptotic stability of theoretical and numerical solutions for systems of neutral multidelay-differential equations,” Science in China A, vol. 41, no. 11, pp. 1151–1157, 1998.
32. C.-J. Zhang and S.-F. Li, “Dissipativity and exponentially asymptotic stability of the solutions for nonlinear neutral functional-differential equations,” Applied Mathematics and Computation, vol. 119, no. 1, pp. 109–115, 2001.
33. D. Li, C. Zhang, and W. Wang, “Long time behavior of non-Fickian delay reaction-diffusion equations,” Nonlinear Analysis: Real World Applications, vol. 13, no. 3, pp. 1401–1415, 2012.
34. J. R. Cannon and Y. P. Lin, “A priori ${L}^{2}$ error estimates for finite-element methods for nonlinear diffusion equations with memory,” SIAM Journal on Numerical Analysis, vol. 27, no. 3, pp. 595–607, 1990.
35. J. R. Cannon and Y. Lin, “Error estimates for semidiscrete finite element methods for parabolic integro-differential equations,” Mathematics of Computation, vol. 53, no. 187, pp. 121–139, 1989.
36. H. Tian, “Numerical and analytic dissipativity of the $\theta$-method for delay differential equations with a bounded variable lag,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 14, no. 5, pp. 1839–1845, 2004.