Here's the question you clicked on: evaluate and simplify 22/(-x-11) when x = 11 • one year ago

Best Response: if you mean 22/(-x-11) and you set x = 11, you end up with 22/(-11-11), which is 22/(-22), i.e. -1.

Best Response: thank you

Best Response: you're welcome
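A one-line check in plain Python (the function name is mine):

```python
# Evaluate 22/(-x - 11) at x = 11, as in the answer above.
def f(x):
    return 22 / (-x - 11)

print(f(11))  # -1.0
```

Note that x = -11 would make the denominator zero, so the expression is undefined there.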
{"url":"http://openstudy.com/updates/4f8f17ffe4b000310fad612f","timestamp":"2014-04-16T19:45:05Z","content_type":null,"content_length":"32349","record_id":"<urn:uuid:041407b1-8c5d-4123-bfa7-b40811e8741c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Covariance question

1. The problem statement, all variables and given/known data
Let the random variables X and Y have the joint p.m.f.: f(x,y) = (x+y)/32, x = 1, 2, y = 1, 2, 3, 4. Find the means $\mu_x$ and $\mu_y$, the variances $\sigma_x^2$ and $\sigma_y^2$, and the correlation coefficient $\rho$.

2. Relevant equations

3. The attempt at a solution
I was able to find both means, $\mu_x = 25/16$ and $\mu_y = 45/16$, and both variances. But I can't seem to find how to get the covariance... I tried just using the 1 and 2 values for x and y, but it hasn't worked. I think I'm getting confused because there are more y values than x values. Any help would be much appreciated!
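For what it's worth, the table here is small enough to brute-force with exact fractions. The sketch below (variable and function names are mine) shows the step the attempt is missing: Cov(X, Y) = E[XY] − μ_x μ_y, where E[XY] is just Σ x·y·f(x,y) over all eight (x, y) cells — the uneven ranges of x and y don't matter, you simply sum over every cell.

```python
from fractions import Fraction
from math import sqrt

# Joint p.m.f. f(x, y) = (x + y)/32 on x in {1, 2}, y in {1, 2, 3, 4}.
pmf = {(x, y): Fraction(x + y, 32) for x in (1, 2) for y in (1, 2, 3, 4)}
assert sum(pmf.values()) == 1          # sanity: probabilities sum to 1

def E(g):
    """Expectation of g(X, Y) under the joint p.m.f."""
    return sum(p * g(x, y) for (x, y), p in pmf.items())

mu_x = E(lambda x, y: x)               # 25/16, matching the attempt
mu_y = E(lambda x, y: y)               # 45/16, matching the attempt
var_x = E(lambda x, y: x * x) - mu_x ** 2
var_y = E(lambda x, y: y * y) - mu_y ** 2
cov = E(lambda x, y: x * y) - mu_x * mu_y      # E[XY] - mu_x * mu_y
rho = float(cov) / sqrt(float(var_x) * float(var_y))
```

This gives Cov(X, Y) = −5/256 and ρ = −5/√18585 ≈ −0.037.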
{"url":"http://www.physicsforums.com/showthread.php?p=2661791","timestamp":"2014-04-17T12:31:49Z","content_type":null,"content_length":"23237","record_id":"<urn:uuid:fffd9c93-1d5f-43fe-a9bb-2516848463c8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
An Electric Field Is Described In Cylindrical Coordinates... | Chegg.com An electric field is described in cylindrical coordinates as: E = ρ̂(1/ρ) [V/m], where ρ̂ is the rho unit vector and ρ is rho. Find the voltage drop Vab, where a is the point (1, 0, 0) and b is (2, π/2, 1). In the description of these points, the cylindrical coordinate notation (ρ, φ, z) is used, where dimensions are in [m] and angles are in radians. Voltage drop is path independent. Don't need help solving the integrations, just would like a little help in setting up the problem. Electrical Engineering
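For the setup only (which is all that was asked): since E has only a ρ̂ component, the φ and z legs of any path contribute nothing to V_ab = V_a − V_b = −∫_b^a E · dl, leaving a one-dimensional integral in ρ. A sketch with a numeric cross-check (variable names are mine):

```python
from math import log

# a = (1, 0, 0), b = (2, π/2, 1) in (ρ, φ, z); only ρ matters here.
rho_a, rho_b = 1.0, 2.0

# V_ab = -∫_{ρ_b}^{ρ_a} (1/ρ) dρ = ln(ρ_b / ρ_a) = ln 2 ≈ 0.693 V
V_ab = log(rho_b / rho_a)

# Cross-check with a midpoint Riemann sum of the same integral.
n = 100_000
h = (rho_a - rho_b) / n
numeric = -sum(h / (rho_b + (i + 0.5) * h) for i in range(n))
```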
{"url":"http://www.chegg.com/homework-help/questions-and-answers/electric-field-described-cylindrical-coordinates-e-p-1-p-v-m-p-rho-vector-p-rho-find-volta-q2866696","timestamp":"2014-04-16T22:33:48Z","content_type":null,"content_length":"20371","record_id":"<urn:uuid:73b6f19b-ab20-40fb-9ad0-a811e200e70a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Cookbook: Interpolation of an N-D curve error Angus McMorland a.mcmorland at auckland.ac.nz Mon Jun 19 18:15:40 CDT 2006 Hi David et al, Thanks for your interest. David Huard wrote: > Hi Angus, > I updated scipy and numpy this morning, and the example ran fine, > except for the import Float64 statement that has to be changed to > float64. Yep. I got that one okay - in fact neither declaration seems to be required for the example to work now on my laptop. However, some problem still remains - see below. > You may want to turn pdb on in ipython and find out what > variable is triggering the exception. I'd like to help more but I > have no idea what is going on in your case. Did you modify the > example or ran it as is? > David > 2006/6/16, Angus McMorland <a.mcmorland at auckland.ac.nz > <mailto:a.mcmorland at auckland.ac.nz>>: > I'm getting a TypeError when trying the N-D curve cookbook example ( > http://www.scipy.org/Cookbook/Interpolation) with numpy 0.9.9.2630 > and scipy 0.5.0.1979. > The error is: > In [158]: tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) > exceptions.TypeError Traceback (most > recent call last) > [snip]<ipython console> > /usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in > splprep(x, w, u, ub, ue, k, task, s, t, full_output, nest, per, > quiet) 215 iwrk=_parcur_cache['iwrk'] 216 > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > --> 217 nest,wrk,iwrk,per) 218 > _parcur_cache['u']=o['u'] 219 _parcur_cache['ub']=o['ub'] > TypeError: array cannot be safely cast to required type After a bit more poking around, I've found that the routine runs fine on my i686 debian laptop, but not on my amd64 desktop machine, both running identical svn versions of numpy (0.9.9.2631) and scipy (0.5.0.1980) in python2.3 (and python2.4 also works on the i686 machine). I attach the script I ran - identical to the example in the cookbook except for the removal of the Float64. 
ipdb halts on line 22, then line 217 of fitpack.py in splprep. Using ipdb to look at the variables shows me that w, u and s are somehow ill-defined, but I don't know exactly what's going on. Here's the ipdb session print-out:

ipdb> whatis w
<type 'numpy.ndarray'>
ipdb> w.shape
     21 # find the knot points
---> 22 tckp,u = splprep([x,y,z],s=s,k=k,nest=-1)
     24 # evaluate spline, including interpolated points

/usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in splprep()
    215     iwrk=_parcur_cache['iwrk']
--> 217     nest,wrk,iwrk,per)
    218     _parcur_cache['u']=o['u']
    219     _parcur_cache['ub']=o['ub']

ipdb> whatis u
<type 'numpy.ndarray'>
ipdb> u.shape
     21 # find the knot points
---> 22 tckp,u = splprep([x,y,z],s=s,k=k,nest=-1)

ipdb> whatis s
<type 'float'>
ipdb> s
WARNING: Failure executing file: <spl.py>

In [23]:

Requesting s kicks me out of the debugger altogether. Other variables seem okay:
x: numpy.ndarray, shape = 3,100, dtype.type = <type 'float64scalar'>
ub: int, 0
ue: int, 1
k: int, 2
task: int, 0
ipar: bool, False
t: numpy.ndarray, array([], dtype=float64)
nest: int, 103
wrk: numpy.ndarray, array([], dtype=float64)
iwrk: numpy.ndarray, array([], dtype=int64)
per: int, 0

I hope that helps work out what's going on...

Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906
PhD Student, Neurophysiology / Multiphoton & Confocal Imaging
Physiology, University of Auckland phone +64-9-3737-599 x89707
Armourer, Auckland University Fencing Secretary, Fencing North Inc.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: spl.py
Type: text/x-python
Size: 1059 bytes
Desc: not available
Url : http://www.scipy.net/pipermail/scipy-user/attachments/20060620/bafb97ee/spl.py

More information about the SciPy-user mailing list
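The "array cannot be safely cast" failure fits the classic int64-vs-int32 pattern: note that `iwrk` above is `dtype=int64` on the amd64 box, while the compiled FITPACK routine expects 32-bit integers. A hedged sketch of the shape of the workaround (the proper fix belongs inside `fitpack.py` itself, not in user code):

```python
import numpy as np

# On a 64-bit platform the default integer dtype is 64-bit, which a
# Fortran routine compiled against 32-bit integers cannot "safely" accept.
iwrk = np.zeros(10, dtype=np.intp)   # platform-width integers (int64 on amd64)
iwrk32 = iwrk.astype(np.int32)       # explicit, lossless downcast for small values

assert iwrk32.dtype == np.int32
```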
{"url":"http://mail.scipy.org/pipermail/scipy-user/2006-June/008324.html","timestamp":"2014-04-17T04:04:27Z","content_type":null,"content_length":"7474","record_id":"<urn:uuid:6e71b710-8cc0-4470-b854-9ffa16f68a3d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Experimental Errors in the Physics Laboratory

Systematic errors usually cause the results of a measurement to be consistently too high or too low relative to the true value. These errors may be due to, for example, frictional effects that the idealized theory neglects. Consequently, there is always a significant discrepancy between the expected theoretical results and the experimental results obtained when these frictional effects cannot be ignored. An experimenter's skill is crucial in identifying, preventing, and minimizing any obvious systematic errors as much as possible. Unfortunately, it is very difficult to reliably identify and estimate systematic errors.

Personal errors (or Mistakes)
Personal errors arise from the mistakes of the experimenter. Observational mistakes may be due to the personal bias or carelessness of the experimenter while reading the scale of an instrument. Arithmetic mistakes usually occur while performing the needed calculations. This class of errors can be completely eliminated if the experimenter exercises utmost caution and skepticism while performing the experiment. If the scales are read incorrectly or if the calculations are wrongly carried out, the entire result will be wrong! Therefore, the experimenter is strongly encouraged to cross-check the data and calculations. In a lab group, each partner should independently read the data and check any calculations for accuracy.

Random errors
Random errors are usually due to unknown and unpredictable variations in the experimental conditions. The sources of these random errors cannot always be identified and can never be totally eliminated in any measurement. These random errors may be:
• Observational - unbiased inconsistency of an observer in determining the measurement readings of an instrument. This often occurs in the estimation of the last digit when reading the scale of a measuring device between the smallest divisions.
• Environmental - physical variations that may affect the equipment or the experiment setup such as fluctuations in the line voltage, temperature changes, or mechanical vibrations.

This class of errors usually causes about half of the measurements to be too high and the other half of the measurements to be too low. Fortunately, random errors can be determined by statistical analysis and are sometimes referred to as statistical errors. Due to the random nature of these errors, their effect on the experimental results can be reduced by repeating the measurements as many times as possible so that the erroneous results become statistically insignificant.

Accuracy and Precision
The objective in most physical science experiments is the measurement of the "accepted" or "true" values of well-known physical quantities (as stated in textbooks and physics handbooks). However, there always exists some difference between the "measured" value and the "true" value. The accuracy of a measurement is a measure of how close the measured value is to the true value. The accuracy depends on systematic errors and thus measures the correctness of the experimental measurement. The precision of a measurement is a measure of how reliable or how reproducible the results of the measurement are when repeated. The precision depends on random errors and thus measures the uncertainty in the measurement. It must be noted that sometimes a measurement appears to be highly accurate but with very poor precision. In such situations, the question arises whether or not such results should be considered as actually meaningful. Unless a measurement has a high precision, its accuracy cannot be considered as realistic. When a measurement has a high precision but poor accuracy, it is often an indication of the presence of systematic errors. Copyright © 2001. All rights reserved. Please contact Martin O. Okafor with any comments or questions about this site.
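The "repeat the measurements" advice can be made quantitative: the scatter of an n-reading average shrinks like σ/√n. A small simulation (the physical numbers are illustrative only):

```python
import random
from statistics import mean, stdev

random.seed(0)                      # reproducible run
true_value, sigma = 9.81, 0.05      # e.g. g, with random read-out noise

def measure(n):
    """Average of n noisy readings; its scatter shrinks like sigma/sqrt(n)."""
    return mean(random.gauss(true_value, sigma) for _ in range(n))

singles = [measure(1) for _ in range(200)]     # scatter ~ sigma
averaged = [measure(100) for _ in range(200)]  # scatter ~ sigma / 10
```

With 100 readings per average, the spread of the averages is roughly a tenth of the single-reading spread — while a systematic error would be completely untouched by this averaging.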
{"url":"http://facstaff.gpc.edu/~mokafor/gpcphysics/physicslabs/lab_errors.html","timestamp":"2014-04-18T13:18:40Z","content_type":null,"content_length":"10232","record_id":"<urn:uuid:d36edfaf-9c49-4b32-9445-6700f52a4129>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability and sample space - A few more questions

A group of six children are choosing colored pencils to draw a picture. Each child is allowed to select one color. The available colors are green, red, and blue. If the second child refuses to use red pencils, and the third child refuses to use blue pencils, then how many ways are there for the children to choose pencils? Assume there are 12 pencils for each color and different children are allowed to choose the same color.

If 20 tickets are sold and two (2) prizes are to be awarded, find the probability that one (1) person will win both prizes if that person buys exactly two (2) tickets.

X is a normally distributed random variable with a standard deviation of 4.00. Find the mean of x if 12.71% of the area under the distribution curve lies to the right of 14.56. *Please note - I can't get the curve on here, so if you can't help on this one it's ok. The answer choices are 13.3, 11.3, 10.0, and 9.5 (wasn't sure if that would help or not). I understand if there is not enough information to answer this.

A school club consists of 20 male students and 15 female students. If 4 students are selected to represent the club in the student government, what is the probability 2 will be female and 2 will be male?

Anything you guys could help with would be great. Thank you

Re: Probability and sample space - A few more questions
Hi Skyblast72. OK, glad you got the other question sorted. Let's take these one at a time because once you can do one, it may help you do another without any more help. That means we all know you have progressed with the topic. A group of six children are choosing colored pencils to draw a picture. Each child is allowed to select one color. The available colors are green, red, and blue. If the second child refuses to use red pencils, and the third child refuses to use blue pencils, then how many ways are there for the children to choose pencils?
Assume there are 12 pencils for each color and different children are allowed to choose the same color. There are enough of each colour so supplies won't run out even if all the children choose the same. That's a big bonus, as the question gets much harder if what one child chooses affects what the next may choose! The first child has 3 choices. {GRB} The next won't have red, so that child has just 2 choices. {GB} So far that is 3 x 2 = 6 possibilities {GG, GB, RG, RB, BG, BB}. Carry on like this, multiplying the choices together for all six children. What do you get?

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Probability and sample space - A few more questions
I get 324. Thank you. I was trying to add instead of multiply. Your forum is very helpful!!

Re: Probability and sample space - A few more questions
For 2), the only way this problem makes any sense, based on what the other questions are, is as a binomial distribution problem. For 4) I get

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Probability and sample space - A few more questions
324 is good. If 20 tickets are sold and two (2) prizes are to be awarded, find the probability that one (1) person will win both prizes if that person buys exactly two (2) tickets. So if there are 20 tickets in a hat and you have just one, what is the probability you'll win? Let's say that happens. Now you discover you have a second ticket. What's the probability that it gets chosen too? Now, once again multiply the answers, as you want both events to occur. ps. bobbym: I see where you have gone with this, but I don't think that was what the questioner intended. Poorly worded, either interpretation.
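Bob's multiplication gives 3 × 2 × 2 × 3 × 3 × 3 = 324, which a brute-force enumeration confirms:

```python
from itertools import product

colors = "GRB"
count = sum(
    1
    for choice in product(colors, repeat=6)
    # child 2 refuses red, child 3 refuses blue (0-indexed positions 1 and 2)
    if choice[1] != "R" and choice[2] != "B"
)
print(count)  # 324
```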
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Probability and sample space -A few more questions I went for the simplest type of question, I was incorrect in thinking that mine answered that. Your idea looks better to me. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Probability and sample space -A few more questions Sorry for the poor wording, it is exactly how the question is typed on the study guide. I came up with 20C2 = 1 in 190 chance. Is that correct? Re: Probability and sample space -A few more questions A probability is a ratio or fraction. What did you put in the denominator? I would have done it like this In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: Probability and sample space -A few more questions bobbym wrote: A probability is a ratio or fraction. What did you put in the denominator? I would have done it like this Ummm... I don't think that is correct. There are 2 prizes and 20 tickets... The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Probability and sample space -A few more questions First chance is 1 / 20 , second chance is 1 / 19. I am stuck on that. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. 
Re: Probability and sample space - A few more questions
You don't need 'combinations' at all. So if there are 20 tickets in a hat and you have just one, what is the probability you'll win? That'll be 1/20. Let's say that happens. Now you discover you have a second ticket. What's the probability that it gets chosen too? Only 19 tickets left and you have one more ticket to check, so that'll be 1/19. Now, once again multiply the answers, as you want both events to occur. So 1/20 times 1/19. Now for question 3. Would you like a quick reminder of normal distribution theory? I've got my diagrams all lined up and ready to roll.

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Probability and sample space - A few more questions
Hi anonimnystefy; You did not read my post. If we treat it as a combinatoric problem as he did, then what is the denominator? Where did he go?

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Probability and sample space - A few more questions
I think the clue that this is just a simple event 1 followed by event 2 probability is the clause "if that person buys exactly two (2) tickets." As we are told nothing about the other players, we cannot assume they all have two tickets (or one or three). Hence my interpretation. I think it should say "You have two tickets. What is your chance of winning with both?"

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Probability and sample space - A few more questions
Sorry, I'm in class. I don't have my notes with me. Will get back to you.

Re: Probability and sample space - A few more questions
I was incorrect; 1/380 is correct.

Re: Probability and sample space - A few more questions
bob bundy wrote: You don't need 'combinations' at all.
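Since the thread wavers between 1/190 and 1/380, an exhaustive enumeration of one natural reading — one person holds 2 of the 20 tickets and two distinct winning tickets are drawn in order — may be useful; under that reading either of the player's tickets can take the first prize:

```python
from itertools import permutations
from fractions import Fraction

tickets = range(20)
mine = {0, 1}                      # the player's two tickets (labels are mine)

draws = list(permutations(tickets, 2))     # 20 * 19 = 380 ordered outcomes
wins = [d for d in draws if set(d) == mine]
p = Fraction(len(wins), len(draws))        # 2/380 = 1/190
```

By contrast, 1/380 is the probability that a specific one of the two tickets wins the first prize and the other the second, i.e. 1/20 × 1/19 with the order fixed.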
So if there are 20 tickets in a hat and you have just one, what is the probability you'll win? That'll be 1/20. Let's say that happens. Now you discover you have a second ticket. What's the probability that it gets chosen too? Only 19 tickets left and you have one more ticket to check, so that'll be 1/19. Now, once again multiply the answers, as you want both events to occur. So 1/20 times 1/19. Now for question 3. Would you like a quick reminder of normal distribution theory? I've got my diagrams all lined up and ready to roll.

Still working on #2, I'm unsure of the answer now. Yes please, for question 3, a reminder of normal distribution.

Re: Probability and sample space - A few more questions
hi Skyblast72. Some variables are said to be normally distributed, e.g. height of people and IQ may be. One example which definitely is normal is a bit more complicated to describe, but it will help you to understand why it is used. A sugar packing factory produces 2 lb packs of sugar. In practice, they know that there will be some variation between packs and they don't want complaints from customers who buy a pack, weigh it themselves and find it is underweight. To avoid this they set the packing machine to put 2.1 lbs in each pack, hoping this reduces the chance of an underweight pack. To check this the quality control people take samples of 10 packs and check the mean weight. These samples will follow a normal distribution. So you can use the theory to check how likely it is that a sample mean will be under 2 lb. Managers have decided to pass the quality check if this probability is less than 0.001. If the calculation for a particular sample is above 0.001 they call in the engineers to check the machinery. Diagram 1 shows some typical normal distributions. All have the same symmetrical shape; they differ in two respects: (i) the mid point is the mean and this will be different for different situations; (ii) the spread of the curve will vary.
This is measured by the standard deviation (= sq rt of variance). The y axis is set to make the total area under the curve equal to 1. More to come. Sorry, got to go out now.

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Probability and sample space - A few more questions
Normal distribution part two. To work out a probability using the normal distribution you need the area under the curve. My second diagram shows the area between x=a and x=b shaded green. This area would give you the probability that x lies between a & b. It's very hard to get these areas by calculus, so computers have been used to generate a table of probabilities. But every mean and standard deviation would give rise to a differently shaped curve (same basic shape but maybe more squashed up and with the midpoint at a different place). So how do we get around the impossibility of making an infinite number of tables to cover every problem? Well, there's a simple solution. If you take the mean away from every x value, this shifts the whole curve so that it is centred on x = 0 (y axis). I've shown this in my next diagram with a red shift arrow. Then dividing by the standard deviation converts the amount of spread to a sd of 1 (green squash on the diagram). The normal distribution tables are for mean = 0 and sd = 1. Every problem can be converted to this, so one table will do all problems. In practice it is usual to only give half the x values, as the other half can be found by symmetry. Also you've got to stop somewhere (have a maximum x). In theory x could be anything right up to infinity, but in practice, values beyond about x = 3 are so unlikely they are left out. The version at http://www.mathsisfun.com/data/standard … table.html gives the probabilities from x = 0 up to x = 3.
P = 0 up to P = 0.4990. Your own table may give the probabilities in a different way, but you can always adapt a problem by using symmetry and the fact that each half of the graph adds up to P = 0.5.

Question 3. X is a normally distributed random variable with a standard deviation of 4.00. Find the mean of x if 12.71% of the area under the distribution curve lies to the right of 14.56. *Please note - I can't get the curve on here, so if you can't help on this one it's ok. The answer choices are 13.3, 11.3, 10.0, and 9.5 (wasn't sure if that would help or not). I understand if there is not enough information to answer this.

Let's call the mean 'm'. The x value in this problem is 14.56, so the standardized conversion is z = (14.56 − m)/4. My final diagram (hint: always draw one of these) shows the area to the right of this x value shaded green. We are told 12.71% of the area lies in this green area, so the probability of being there is P = 0.1271. Now I'm going to use the table mentioned above. That gives probabilities from x = 0 up to the line rather than beyond it to the right, so I need to do this sum: 0.5 − 0.1271 = 0.3729. It's a great page, as you can slide the vertical line across the graph and see the probability and standardized x displayed. You can even click the z onwards button and see the 12.71 directly! But without this aid: I need to hunt for this probability in the table. Got it at 1.14, so finally calculate m from this: (14.56 − m)/4 = 1.14. I'll leave you to do this and get m. It is one of the multi choice answers you gave in post 1. Let me know how you get on with this.

Re: Probability and sample space - A few more questions
I haven't forgotten about you, just been working on another class; will be back tomorrow. Thanks for all your help.

Re: Probability and sample space - A few more questions
Ok. I'll still be here.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Probability and sample space - A few more questions
I got m = 10.

Re: Probability and sample space - A few more questions
That's what I got too. Now for Q4. A school club consists of 20 male students and 15 female students. If 4 students are selected to represent the club in the student government, what is the probability 2 will be female and 2 will be male? There are formulas for this, but I think you will get a better understanding if you see a diagram. You've got to choose 4 students and at each choice there are 2 possibilities for the outcome, M or F. The diagram is called a Tree Diagram. For choice number one you draw two branches, label one M and the other F. From each end you draw another set of two branches, again marking them M and F, giving four outcomes so far {MM, MF, FM, FF}. Continue like this for two more choices. The final diagram now has 2 x 2 x 2 x 2 = 16 outcomes. Below I've given you a start with the diagram, but I've left some labels and outcomes for you to complete. You should be able to do the rest. Later edit: This isn't right! See my post 34 for the corrected version.

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei

Re: Probability and sample space - A few more questions
I get 16 using the tree diagram.

Re: Probability and sample space - A few more questions
.38 is what I got. Thanks

Re: Probability and sample space - A few more questions
I am not getting .38

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
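Both answers can be cross-checked with the standard library alone: `statistics.NormalDist` replaces the table lookup for question 3, and `math.comb` counts the 2-female/2-male selections for question 4 directly (a hypergeometric count rather than the tree diagram):

```python
from statistics import NormalDist
from math import comb

# Q3: P(X > 14.56) = 0.1271 with sigma = 4, so z = Phi^{-1}(1 - 0.1271).
z = NormalDist().inv_cdf(1 - 0.1271)   # ≈ 1.14, the table value in the thread
m = 14.56 - 4 * z                      # ≈ 10.0

# Q4: choose 4 of 35 students; exactly 2 of the 15 females and 2 of the 20 males.
p = comb(15, 2) * comb(20, 2) / comb(35, 4)   # ≈ 0.381
```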
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=233803","timestamp":"2014-04-19T22:22:05Z","content_type":null,"content_length":"52115","record_id":"<urn:uuid:cca6dfc1-0bdf-420f-86b4-bc425c93162b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Find derivative using Part 1 of the Fundamental Theorem of Calculus?

October 6th 2010, 10:16 AM #1 Sep 2008
Find derivative using Part 1 of the Fundamental Theorem of Calculus? I've never seen this problem before! I tried expanding and doing all that stuff, but I got the wrong answer! The function is (3+v^2)^10 with limits of integration [sin(x), cos(x)]. Fundamental theorem of calculus - Wikipedia, the free encyclopedia

Just in case a picture helps... ... differentiating downwards with respect to x (on the left) or with respect to the dashed balloon (right), the latter referring to chain rule, as below. You might like to use properties of definite integrals to split the integral around a constant a. For the half from a up to cos x... ... where (key in spoiler) ... So you have half of the derivative you seek, on the bottom row, and you'll want to subtract a similar result got from applying the same process but with sin in the dashed balloon. Don't integrate - balloontegrate! Balloon Calculus; standard integrals, derivatives and methods Balloon Calculus Drawing with LaTeX and Asymptote! Last edited by tom@ballooncalculus; October 6th 2010 at 11:00 AM.

October 6th 2010, 10:47 AM #2 MHF Contributor Oct 2008
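By FTC part 1 plus the chain rule, d/dx ∫_{sin x}^{cos x} (3 + v²)¹⁰ dv = (3 + cos²x)¹⁰ · (−sin x) − (3 + sin²x)¹⁰ · (cos x). A standard-library spot-check of that formula (composite Simpson's rule plus a central difference; all names are mine):

```python
from math import sin, cos

def f(v):
    return (3 + v * v) ** 10

def simpson(a, b, n=2000):
    """Composite Simpson's rule for f on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def F(x):
    """The integral with variable limits sin(x) .. cos(x)."""
    return simpson(sin(x), cos(x))

def dF_formula(x):
    """FTC part 1 + chain rule."""
    return f(cos(x)) * (-sin(x)) - f(sin(x)) * cos(x)

x0, h = 0.7, 1e-5
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)   # agrees with dF_formula(x0)
```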
{"url":"http://mathhelpforum.com/calculus/158615-find-derivative-using-part-1-fundamental-theorem-calculus.html","timestamp":"2014-04-18T06:18:21Z","content_type":null,"content_length":"34109","record_id":"<urn:uuid:d3f8afc8-3dd8-48c4-834b-95d4fd39424b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Combination Drawn Formats Yikes! If this has happened in your state, don't start calling the Lottery. It should happen, but not very often at all. On the other hand, if it has happened quite often in your state, go ahead, make the call, and make 'em squirm!... :) ....... Let me put you all at ease, it hasn't happened out of turn in any lottery game in the country. In this particular game it should happen once every 1175 drawings. At the time of this writing this game had been played 582 times and it hadn't happened yet. As you have just seen, a number combination using only the first 1/3 of the numbers in the game is highly unlikely to be drawn compared to other types. A more probable type of number combination would be one which is more evenly balanced across the range of numbers in the game, such as (1-2-2), (2-1-2) or (2-2-1) in a 5 number game or (2-2-2), (2-3-1), (3-2-1) etc. in a 6 number game. Certainly not a (5-0-0) or a (6-0-0)!! As an example, the (2-2-2) type of number combination is typically drawn over 100 times for each 1 time a (6-0-0) is drawn!! Which type would you rather be playing on any given night? The one more likely to be drawn, of course! And certainly, a Quick Pick does not differentiate between types! There are 21 different ways 5 numbers can be drawn from 3 different number groups (Formats). In a 6 number game there are 28 different ways. Very important to remember is that all number combinations grouped this way are based on their odds of being drawn as you have just seen in this mock drawing. Guess what. By calculating how many number combinations are possible for each of these different types, we can see precisely what the odds are for each type to be drawn on any given night. These Odds are shown in our exclusive Calculated Results Tables.
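These per-format odds are a multivariate hypergeometric count. The sketch below uses a hypothetical 5-of-39 game split into three groups of 13 — the article doesn't name its game, so these numbers are illustrative only and will not reproduce its 1-in-1175 figure:

```python
from math import comb
from fractions import Fraction

GROUPS = (13, 13, 13)          # hypothetical game: 39 numbers in three groups
DRAW = 5
TOTAL = comb(sum(GROUPS), DRAW)

def format_prob(fmt):
    """P(drawing exactly fmt[i] numbers from group i)."""
    ways = 1
    for g, k in zip(GROUPS, fmt):
        ways *= comb(g, k)
    return Fraction(ways, TOTAL)

balanced = format_prob((2, 2, 1))   # an evenly spread format
lopsided = format_prob((5, 0, 0))   # all five from the first group
ratio = balanced / lopsided         # balanced formats dominate
```

In this illustrative game the (2-2-1) format is about 61 times as likely as (5-0-0); the exact ratio depends on the game's size and how the groups are cut.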
{"url":"http://www.lotteryamerica.com/anatomy2.html","timestamp":"2014-04-17T00:58:31Z","content_type":null,"content_length":"18336","record_id":"<urn:uuid:5bfb8d53-ccb0-42b9-905a-fd2d91e4f954>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Zeroes of a Polynomial (and its Derivative)
January 23rd 2011, 06:03 PM #1

I am working on the following problem:

Let $P(z)$ be a holomorphic polynomial of degree at least 1 so that $P(z)\ne 0$ in the upper half plane $\mathbb{R}_+^2 = \{z\in \mathbb{C}: \text{Im }z>0\}$. Prove that $P'(z)\ne 0$ on $\mathbb{R}_+^2$.

I wrote $\displaystyle P'(z)=\sum_{j=1}^n \frac{n_j}{z-z_j}P(z)$ where the $z_j$ are the roots of the polynomial and the $n_j$ are their multiplicities. Then, taking the real and imaginary parts, I have

$\displaystyle \text{Re }P'(z)=\sum_{j=1}^n \frac{n_j(x-x_j)}{(x-x_j)^2+(y-y_j)^2}\,\text{Re }P(z)$

$\displaystyle \text{Im }P'(z)=\sum_{j=1}^n \frac{n_j(y_j-y)}{(x-x_j)^2+(y-y_j)^2}\,\text{Im }P(z)$

Since $y_j<0$ by assumption, if we input $z=x+iy$ where $y>0$, then all of the terms in the sum for the imaginary part will be negative, so $\text{Im }P'(z)\ne 0$ and therefore $P'(z)\ne 0$.

But there is a little bit of a problem with this: it is possible that $\text{Im }P(z)=0$. If this is the case, then we must have $\text{Re }P(z)\ne 0$. However, I do not see how to guarantee that $\text{Re }P'(z)\ne 0$. It almost seems possible to construct a counterexample to the problem with this in mind. Does anybody have any suggestions?

January 23rd 2011, 06:27 PM #2

An idea: use the Gauss-Lucas theorem, whose proof is pretty easy and beautiful.

January 23rd 2011, 06:41 PM #3

Thanks for the theorem. I was actually thinking about a geometric interpretation of the sums above, trying to locate $z=x+iy$ in relation to those roots. I had some idea that the expressions were about taking averages somehow, but it wasn't really clear. Now I see the connection. In any case, I would still like to know if my work above is salvageable. Any thoughts would be appreciated.
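The Gauss-Lucas theorem mentioned in the reply says that every root of $P'$ lies in the convex hull of the roots of $P$; since all roots of $P$ lie in the closed lower half plane, so does every critical point, which settles the question. A quick numerical sanity check with numpy (the specific roots below are made up for illustration):

```python
import numpy as np

# P has all of its roots in the closed lower half plane (Im <= 0), so by
# Gauss-Lucas every root of P' must also satisfy Im <= 0.
roots_P = np.array([-1.0 - 1.0j, 2.0 - 0.5j, -0.3 - 2.0j, 1.5 - 0.1j])
P = np.poly(roots_P)      # polynomial coefficients from its roots
dP = np.polyder(P)        # coefficients of P'
roots_dP = np.roots(dP)

print(roots_dP.imag)      # every entry is negative
max_imag = roots_dP.imag.max()
```

Any other choice of roots with non-positive imaginary parts behaves the same way.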
{"url":"http://mathhelpforum.com/differential-geometry/169146-zeroes-polynomial-its-derivative.html","timestamp":"2014-04-23T20:59:50Z","content_type":null,"content_length":"45335","record_id":"<urn:uuid:6d7b8dc4-f1f7-4c7e-bd4e-398d258409ab>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Tucker, GA Algebra Tutor Find a Tucker, GA Algebra Tutor ...I have been doing private math tutoring since I was a sophomore in high school. I believe in guiding students to the answers through prompt questions. This makes sure that when the student leaves he or she is equipped to answer the problems on their own for tests and quizzes. 9 Subjects: including algebra 1, algebra 2, geometry, precalculus ...I am also very knowledgeable on the subject of the human body, from years of studying the MCAT. I am a tutor who takes teaching seriously. I want to make the best out of the time spent with my 29 Subjects: including algebra 2, geometry, Microsoft Word, physics ...I have worked with students who have "learning differences" at Sophia Academy and Chrysalis Experiential Academy. I have found that presenting a concept from many perspectives increases retention. I have a constructivist approach which means that I ask lots of questions to bridge from concepts that they understand to the new material. 9 Subjects: including algebra 1, chemistry, physics, statistics Hello, my name is Britton. I am a certified Math teacher with a clear renewable certificate for middle grades math. I have spent the past eight years in and around education and social work. 2 Subjects: including algebra 1, prealgebra ...I graduated from Georgia State University with a major in Middle Grades Education with concentrations in Math and Science. I have worked with a variety of students including gifted, students with special needs, and English language learners. I am patient and caring. 
11 Subjects: including algebra 1, reading, ESL/ESOL, grammar
{"url":"http://www.purplemath.com/Tucker_GA_Algebra_tutors.php","timestamp":"2014-04-16T16:30:04Z","content_type":null,"content_length":"23601","record_id":"<urn:uuid:7ab2a95d-6297-4ae2-9d83-d74efb272e00>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Title page for ETD etd-0501102-110350

Mapping of mortality rates has been a valuable public health tool. We describe novel Bayesian methods for constructing maps which do not depend on a post stratification of the estimated rates. We also construct posterior modal maps rather than posterior mean maps. Our methods are illustrated using mortality data from chronic obstructive pulmonary diseases (COPD) in the continental United States.

Poisson regression models have attracted much attention in the scientific community for their superiority in modeling rare events (including mortality counts from COPD). Christiansen and Morris (JASA 1997) described a hierarchical Bayesian model for heterogeneous Poisson counts under the exchangeability assumption. We extend this model to include latent classes (groups of similar Poisson rates unknown to an investigator).

Also, it is standard practice to construct maps using quantiles (e.g., quintiles) of the estimated mortality rates. For example, based on quintiles, the mortality rates are cut into 5 equal size groups, each containing 20% of the data, and a different color is applied to each of them on the map. A potential problem is that this method assumes an equal number of data in each group, but this is often not the case. The latent class model produces a method to construct maps without using quantiles, providing a more natural representation of the colors.

Typically, for rare events, the posterior densities of the rates are skewed, making the posterior mean map inappropriate and inaccurate. Thus, although it is standard practice to present the posterior mean maps, we also develop a method to provide the joint posterior modal map (i.e., the map with the highest posterior probability over the ensemble). For the COPD data, collected 1988-1992 over 798 health service areas, we use Markov chain Monte Carlo methods to fit the model, and an output analysis is used to construct the new maps.
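The quantile pitfall described in the abstract is easy to reproduce: with tied rate values, quintile cut points need not give five equal-count groups. A small synthetic illustration (the rates below are invented, not the COPD data):

```python
import numpy as np

# Synthetic "mortality rates" with many ties at the low end.
rates = np.array([1, 1, 1, 1, 1, 2, 3, 4, 5, 6], dtype=float)

# Quintile cut points at the 20th, 40th, 60th and 80th percentiles.
cuts = np.percentile(rates, [20, 40, 60, 80])
classes = np.searchsorted(cuts, rates, side="right")   # color class 0..4
counts = np.bincount(classes, minlength=5)
print(counts)   # far from five groups of 2 each, because of the ties
```

Two of the five map colors would never appear here, while one color swallows most of the areas, which is the unequal-group problem the abstract describes.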
{"url":"http://www.wpi.edu/Pubs/ETD/Available/etd-0501102-110350/","timestamp":"2014-04-20T13:30:05Z","content_type":null,"content_length":"5156","record_id":"<urn:uuid:cf1c5b70-27ee-4d77-b0d5-ca0f80d94859>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
1st Fundamental Theorem of Calc
January 17th 2011, 10:54 AM #1

Ok I know the first fundamental theorem of Calc but this question is confusing me because both of the bounds are variables with exponents. Please help.

Find the derivative of $\displaystyle \int_{x^5}^{x^7}(2t-1)^3\,dt$

You can fix the problem that the lower limit of integration is a variable by breaking it into two integrals:

$\int_{x^5}^{x^7}(2t- 1)^3 dt= \int_a^{x^7}(2t- 1)^3 dt- \int_a^{x^5} (2t- 1)^3 dt$

where "a" is any fixed number- the lower limit won't be relevant to the derivative. To fix the problem that the upper limit is not just "x", make a change of variable. If we let $u= t^{1/7}$, then when $t= x^7$, $u$ will be equal to $x$. $t= u^7$, of course, and $dt= 7u^6 du$, so the first integral becomes $7\int_{a^{1/7}}^x(2u^7- 1)^3u^6 du$.

For the second integral let $v= t^{1/5}$ so that when $t= x^5$, $v= x$. $t= v^5$ and $dt= 5v^4 dv$, so the integral becomes $5\int_{a^{1/5}}^x (2v^5- 1)^3v^4 dv$.

That is, $\int_{x^5}^{x^7}(2t-1)^3 dt= 7\int_{a^{1/7}}^x (2u^7- 1)^3u^6 du- 5\int_{a^{1/5}}^x (2v^5- 1)^3v^4 dv$, and now you can easily apply the Fundamental Theorem of Calculus to the two integrals on the right. The derivative with respect to x is

$7(2x^7- 1)^3x^6- 5(2x^5-1)^3x^4$

In fact, that can be generalized to "Leibniz' formula":

$\frac{d}{dx}\left(\int_{\phi(x)}^{\psi(x)} f(x,t)\,dt\right)= f(x, \psi(x))\frac{d\psi(x)}{dx}- f(x,\phi(x))\frac{d\phi(x)}{dx}+ \int_{\phi(x)}^{\psi(x)}\frac{\partial f(x,t)}{\partial x}\,dt$

which is, I suspect, how FernandoRevilla got his answer so quickly!

I got it in the following way:

$F(x)=\displaystyle\int_{g(x)}^{h(x)}f(t)\,dt=\int_{0}^{h(x)}f(t)\,dt-\int_{0}^{g(x)}f(t)\,dt$

Using the Fundamental Theorem of Calculus and the Chain Rule we immediately obtain

$F'(x)=f(h(x))\,h'(x)-f(g(x))\,g'(x)$

(of course $g,h$ differentiable, etc.)

Fernando Revilla
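The derivative in this thread can be sanity-checked numerically. The antiderivative of $(2t-1)^3$ is $(2t-1)^4/8$, so the integral itself is available in closed form, and the chain-rule result being checked is $7(2x^7-1)^3x^6 - 5(2x^5-1)^3x^4$:

```python
def F(x):
    """F(x) = integral from x^5 to x^7 of (2t-1)^3 dt, via the antiderivative."""
    A = lambda t: (2 * t - 1) ** 4 / 8
    return A(x ** 7) - A(x ** 5)

def dF(x):
    """Derivative by the Fundamental Theorem of Calculus plus the Chain Rule."""
    return 7 * (2 * x ** 7 - 1) ** 3 * x ** 6 - 5 * (2 * x ** 5 - 1) ** 3 * x ** 4

x, h = 1.3, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central finite difference
print(numeric, dF(x))                        # the two agree to many digits
```

The test point 1.3 is arbitrary; any x away from the roundoff-sensitive region works.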
{"url":"http://mathhelpforum.com/calculus/168612-1st-fundamental-theorem-calc.html","timestamp":"2014-04-19T21:30:07Z","content_type":null,"content_length":"45414","record_id":"<urn:uuid:6702b616-63e6-4379-8f59-53a7cab69e44>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Adiabatic and Quantum Computers CS 301 Lecture, Dr. Lawlor This lecture is pure bonus material--it will NOT appear on any homeworks or the final exam. │ │ Ordinary Computer │ Adiabatic Circuits │ Quantum Computer │ │ Goal │ Get 'er done! │ Substantially lower power use, │ Speedups up to exponential: │ │ │ │ especially at low clockrate. │ e.g., search n values in sqrt(n) time │ │ Data storage │ 1's and 0's (bits) │ 1's and 0's (bits) │ Vector with axes 1 and 0 (qubits) │ │ │ │ │ Not just 1 or 0: both at once! │ │ Assignments? │ Yes │ No (uses energy = kT ln 2) │ No (violates laws of physics) │ │ Reversible? │ No │ Yes │ Yes, except for "collapse" operation │ │ Swap? │ Yes │ Yes │ Yes │ │ Logic gates │ AND, OR, NOT │ NOT, CNOT, CCNOT │ CNOT, Hadamard rotate 45 degrees │ │ Programming │ Instructions │ Reversible Instructions │ Reversible Quantum Operations, │ │ Model │ │ │ and Irreversible Collapse │ │ Clock │ Square wave │ Two trapezoidal waves │ Limited by coherence time │ │ When? │ Now │ Slowly, over next ten years │ ??? │ │ Limits │ Heat/power, │ Only helps at low clockrate │ How many bits can you keep coherent? │ │ │ hard problems │ │ │ Adiabatic circuits are based on a few interesting fundamental observations about modern circuit efficiency: • Slamming a wire from a 1 down to a 0 means dumping the 1's energy directly into the ground. It's more efficient to shuffle the 1 off somewhere else than to erase it entirely. • Writing a zero into a register actually reduces entropy: the universe has lost a disordered register value, and now has an orderly zero-filled register. You can't reduce entropy in one place without increasing it (even more) somewhere else, so writing a zero actually consumes energy, no matter how the circuitry manages the write. Saed Younis' 1994 MIT PhD Thesis outlines the basic adiabatic circuit model and its inherent power advantage, which is up to several hundredfold at sufficiently low clock rates. 
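The table's "uses energy = kT ln 2" entry and the erasure-cost bullet above can be put into numbers. A sketch, where room temperature of 300 K and the 10-billion-by-10-billion workload are assumed inputs matching the lecture's round figures (at 300 K the per-second cost comes out nearer 0.3 W than a full watt, the same order of magnitude):

```python
from math import log

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # assumed room temperature, K
erase_one_bit = k_B * T * log(2)   # Landauer limit per erased bit
print(erase_one_bit)               # ~2.9e-21 J per bit erased

bits = 10e9 * 10e9                 # 10 billion bits/clock * 10 billion clocks/s
power = erase_one_bit * bits
print(power)                       # ~0.3 W of irreducible "bit erasure cost"
```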
These design principles have slowly been trickling into CPU designs piece by piece; still, most circuits are non-reversible and non-adiabatic today. The path to a fully adiabatic computation, assuming we ever get there, will have to change the instruction set at some point, because tons of operations are destructive, like "mov" (which irrevocably destroys the destination), and will need to be replaced with "swap" (which doesn't destroy information). Some future fully-adiabatic CPU will need reversible instructions, and in fact the compiler will probably need to generate entire "antifunctions" to undo the operation of each function. To really be 100% reversible, the display would have to suck the final answer back out of your brain, so some degree of irreversibility is usually considered acceptable in designing real systems.

The energy kT ln 2 needed to erase one digit near room temperature is about 10^-20 joules, which is... small. If your machine is chewing through 10 billion bits per clock cycle, and does 10 billion clocks per second, this is one joule/second, or one watt of irreducible "bit erasure cost". A typical desktop CPU is much smaller than this, and uses 30-100 watts for other things, so this isn't a very big effect yet. But depending on how silicon fabrication scales up, it could become a show-stopper in the next ten years, and drive us toward reversible programs.

Once your program is fully reversible, you're actually halfway toward writing a quantum computer program!

Quantum Physics

Small things, like electrons, display several very odd mechanical properties with mystical sounding "quantum" names:
• Superposition: an electron can be in several places at once, or several states at once. The terminology here is that the electron's "wave function" has spread over space. The probability of observing an electron at any given location is the square magnitude of the wave's amplitude.
• Entanglement: electrons can interact.
Electrons in a superposition of states can interact. Interactions between superpositioned electrons always collapse to consistent values that would have made sense even without superposition. This can create surprisingly complex interactions, since the interaction of n binary unknowns represents 2^n total interactions. (See: ripple tank applet.)
• Collapse: if something big (e.g., a human being, a microscope, or any macroscopic measuring device) stares at the electron, it appears in exactly one place. This is called "wavefunction collapse", because the spread-out wave function bunches up again; or "state reduction", since you start with many states and end up with one state. Worse yet, it's looking less like collapse is somehow tied to a "big observer", which would be merely creepy. Delayed quantum erasure means it's probably entangling your wavefunction with the electron's, which would be freaky: just looking at the plots splits your own wavefunction into several pieces. This would mean collapse is just an illusion.

A quantum computer is based on "qubits", which you can think of as a dot in 2D space: the X axis means a "0", the Y axis means a "1". Normal bits must lie on one axis or another, but qubits can live between the axes. For example,
• (X,Y) coordinates (0.7,0.7) means "might be a 0, or might be a 1". Said more mystically, this is the "quantum superposition of 0 and 1". If you measure the qubit, it will randomly "collapse" to either 0 or 1.
• (0.0,1.0) means "definitely a 1". If you measure the qubit, you'll always get a 1.
• (0.3,0.95) means "it's probably a 1, but might still be 0". If you measure the qubit, you'll get a 0 10% of the time (10% ≈ 0.3*0.3), and a 1 90% of the time (90% ≈ 0.95*0.95).
Since coordinates (0,0) means "not a 0, and not a 1", we usually require the qubit to live on a unit sphere--it's either going to be a zero or going to be a one, so the probabilities (equal to the square of the amplitude) must add up to one.
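The squared-amplitude rule in these bullets is a one-liner to check. (Note the (0.3, 0.95) example is only approximately normalized: 0.09 + 0.9025 = 0.9925, which is why the 10%/90% figures are round numbers.)

```python
def measure_probs(a0, a1):
    """Probability of reading 0 or 1 = squared amplitude of each axis."""
    return a0 ** 2, a1 ** 2

print(measure_probs(0.7, 0.7))     # ~ (0.49, 0.49): a coin-flip qubit
print(measure_probs(0.0, 1.0))     # (0.0, 1.0): definitely a 1
p0, p1 = measure_probs(0.3, 0.95)  # ~ (0.09, 0.90): "probably a 1"
print(p0 + p1)                     # 0.9925 -- close to, but not exactly, 1
```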
Tons of research groups have built single-bit quantum computers, but the real trick is entangling lots of bits together without premature "collapse", and that seems to be a lot harder to actually pull off.

Quantum Programming

Just like a classical computer uses logic gates to manipulate bits, the basic instructions in a quantum computer will use quantum logic gates to manipulate individual wavefunctions. Because you don't want to destroy any aspect of the data (this causes decoherence), you can represent any quantum logic gate with a rotation matrix. Again, think of a qubit like a little vector. One qubit is a 2-float vector representing the amplitudes for zero and one:

    [ amplitude for a=0 ]
    [ amplitude for a=1 ]

A "Pauli-X" gate is represented by this 2x2 rotation matrix:

    [ 0 1 ]
    [ 1 0 ]

Plugging in the input and output probabilities, we have:

                 a=0  a=1
    output a=0    0    1
    output a=1    1    0

The amplitude for a=1 on the input becomes the amplitude for a=0 on the output, and vice versa--this is just a NOT gate!

CNOT gate

A controlled NOT takes two bits as input. Two qubits make a 2^2=4 float vector with these amplitudes:

    a=0 && b=0
    a=0 && b=1
    a=1 && b=0
    a=1 && b=1

The CNOT gate's matrix is basically a 2x2 identity, and a 2x2 NOT gate:

    [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 0 1 ]
    [ 0 0 1 0 ]

Again, putting in the input and output vectors, we can see what's going on:

                        a=0       a=1
                      b=0  b=1  b=0  b=1
    a=0 && b=0         1    0    0    0
    a=0 && b=1         0    1    0    0
    a=1 && b=0         0    0    0    1
    a=1 && b=1         0    0    1    0

If a=0, nothing happens--b's probabilities are exactly like before. If a=1, then b gets inverted, just like a NOT gate.

The basic programming model with a quantum computer is:
1. Initialize your quantum registers with a superposition of 0 and 1: these hence contain every possible answer.
2. Run a series of instructions to selectively amplify the answer you're looking for, or attenuate the answers you're not looking for. For example, you can arrange "wrong answers" so they cancel each other out.
Each instruction must be a reversible operation, but in theory can be arbitrarily complex and there's no theoretical limit on the number of instructions. However, in practice, the machine only works if you can keep the whole register entangled in a coherent superposition: accidental collapse or "decoherence" is currently the limiting factor in building big quantum computers.
3. Finally, look at the register. The act of looking will "collapse" to a particular set of 1's and 0's, hopefully representing the right answer. If there are several right answers, physics will pick one randomly.

People started to get really interested in Quantum Computers when in 1994 Peter Shor showed a quantum computer could factor large numbers in polynomial time. The stupid algorithm for factoring is exponential (just try all the factors!), and though there are smarter subexponential algorithms known, there aren't any non-quantum polynomial time algorithms known (yet). RSA encryption, which your web browser uses to exchange keys in "https", relies on the difficulty of factoring large numbers, so cryptographers are very interested in quantum computers. In 1996, Lov Grover showed an even weirder result, that a quantum search over n entries can be done in sqrt(n) time.

Quantum Hardware

At the moment, nobody has built a useful quantum computer, but there are lots of interesting experimental ones. The biggest number a quantum computer has factored is 15 (=3*5, woo hoo). The largest quantum computers actually built so far have only 8 bits, and only one register, which means the maximum theoretical speedup is 2^8=256 times faster than a normal machine. But *if* the hardware can be scaled up, a quantum computer could solve problems that are intractable on classical computers. Or perhaps there is some physical limitation on the scale of wavelike effects--the "wavefunction collapse"--and hence quantum computers will always be limited to a too-small number of bits or too-simple circuitry.
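The X and CNOT matrices above, and the amplify-the-answer loop of step 2, can both be simulated classically for small registers (the simulation is exponential in qubit count, which is exactly why real quantum hardware would be interesting). A sketch using plain state vectors; the iteration count (π/4)·√N for Grover-style search is the standard result, not something derived in these notes:

```python
import numpy as np

# Single-qubit NOT (Pauli-X) and two-qubit CNOT, as state-vector matrices.
X = np.array([[0, 1],
              [1, 0]], dtype=float)
CNOT = np.array([[1, 0, 0, 0],   # a=0: identity block, b unchanged
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],   # a=1: NOT block, b flipped
                 [0, 0, 1, 0]], dtype=float)

one = np.array([0.0, 1.0])                # qubit "definitely 1"
print(X @ one)                            # flips to "definitely 0"

ab_10 = np.array([0.0, 0.0, 1.0, 0.0])    # amplitude on a=1 && b=0
print(CNOT @ ab_10)                       # amplitude moves to a=1 && b=1

# Grover-style search over N = 2^6 = 64 entries in ~ (pi/4)*sqrt(N) steps.
def grover(n_qubits, target):
    N = 2 ** n_qubits
    s = np.full(N, 1.0 / np.sqrt(N))      # step 1: uniform superposition
    iters = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):                # step 2: amplify the marked answer
        s[target] = -s[target]            # oracle flips the answer's sign
        s = 2 * s.mean() - s              # inversion about the mean
    return s, iters

s, iters = grover(6, target=37)
print(iters, s[37] ** 2)   # 6 iterations; target probability ~0.997
```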
At the moment, nobody knows. Roger Penrose has a theory that the human brain is actually a quantum computer. This could explain how we're able to do some of the really amazing things we do, like recognize pictures of objects. The deal is that it's easy for a computer to *simulate* a picture of an object (that is, object->picture is easy), but to *recognize* a picture of an object means searching over all possible objects, orientations, lighting, and so on (that is, picture->object is hard). A quantum computer with sufficiently large registers (big enough to generate a complete image, so thousands of qubits) could in principle start with a superposition of all possible objects, and use a series of instructions to cancel out objects that are inconsistent with the current picture, finally collapsing out a plausible object that could have generated that picture.

There's a different and controversial "Adiabatic Quantum Computer" design used by British Columbia based quantum computer startup D-Wave that they hope will scale to solve large problems. It is not well accepted whether this new design is workable, and there are serious doubts whether the 128-bit superconducting niobium hardware the startup has built is "really" a quantum computer. They got a huge amount of press in 2007 and 2008, but have been quieter lately. They were panned in IEEE Spectrum as "Does Not Quantum Compute", but they recently got a big optimization contract.

The future of quantum computers is currently in a superposition between two outcomes:
• Y axis: Quantum computers will replace all other computers, reach and then exceed human-level intelligence, and then things will start to get really interesting.
• X axis: Quantum computers will remain an interesting laboratory curiosity. Indefinitely.

This superposition may collapse sometime in the next few years. Or maybe not.
{"url":"https://www.cs.uaf.edu/2011/fall/cs301/lecture/11_23_quantum.html","timestamp":"2014-04-18T00:20:03Z","content_type":null,"content_length":"22365","record_id":"<urn:uuid:ae65985e-f57d-4432-ae05-31503f9ca4f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Star-Delta and Delta-Star Conversion in Three-Phase AC Circuits

3.5 Star-Delta and Delta-Star Conversion in Three-Phase AC Circuits

In this book, the three-phase ac systems are considered as a balanced circuit, made up of a balanced three-phase source, a balanced line, and a balanced three-phase load. Therefore, a balanced system can be studied using only one-third of the system, which can be analyzed on a line to neutral basis. The star-delta (Y-Δ) or delta-star (Δ-Y) conversion (Fig. 3-15) is required in three-phase ac systems to simplify the circuits and ease their analysis. If a three-phase supply or a three-phase load is connected in delta, it can be transformed into an equivalent star-connected supply or load. After the analysis, the results are converted back into their original delta equivalent.

Impedance circuits that are equivalent in relationship to terminals a, b, and c: (a) star-connected and T-connected impedances, and (b) delta-connected and π-connected impedances.

The complex delta-star and star-delta conversion formulas, based on the electric circuits shown in Fig. 3-15, are:

Δ→Y: Za = (Zab·Zca)/(Zab + Zbc + Zca), Zb = (Zab·Zbc)/(Zab + Zbc + Zca), Zc = (Zbc·Zca)/(Zab + Zbc + Zca)

Y→Δ: Zab = (Za·Zb + Zb·Zc + Zc·Za)/Zc, Zbc = (Za·Zb + Zb·Zc + Zc·Za)/Za, Zca = (Za·Zb + Zb·Zc + Zc·Za)/Zb

where Z is the complex impedance, Z = R ± jX. Since the load is balanced, the impedance per phase of the star-connected load will be one-third of the impedance per phase of the delta-connected load. Hence the equivalent impedances can be given as Z_Y = Z_Δ/3.

One of the common uses of these transformations is in power system transmission line modeling and in three-phase transformer analysis. Circuit analysis involving three-phase transformers under balanced conditions can be performed on a per-phase basis. When Δ-Y or Y-Δ connections are present, the parameters refer to the Y side. In Δ-Δ connections, the Δ-connected impedances are converted to equivalent Y-connected impedances.
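A sketch of the Δ-Y and Y-Δ conversions with complex impedances. The function and variable names are mine, but the formulas are the standard ones this section describes, and the balanced case reduces to one-third of the delta impedance per phase, as stated above.

```python
def delta_to_star(Zab, Zbc, Zca):
    """Convert delta branch impedances to the equivalent star impedances."""
    Zsum = Zab + Zbc + Zca
    Za = Zab * Zca / Zsum     # star impedance at terminal a
    Zb = Zbc * Zab / Zsum     # star impedance at terminal b
    Zc = Zca * Zbc / Zsum     # star impedance at terminal c
    return Za, Zb, Zc

def star_to_delta(Za, Zb, Zc):
    """Inverse conversion: star impedances back to the delta branches."""
    num = Za * Zb + Zb * Zc + Zc * Za
    return num / Zc, num / Za, num / Zb   # Zab, Zbc, Zca

# Balanced load: each star impedance is one-third of the delta impedance.
Z = 3 + 3j
print(delta_to_star(Z, Z, Z))   # three copies of (1+1j)
```

The two functions are exact inverses, which is how the per-phase results get converted back to their original delta equivalent after analysis.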
3.5.1 Virtual Instrument Panel

The objective of the following VI is to study these transformation concepts and provide an easy calculation tool using the complex impedances. The front panel of Star Delta Transformations.vi is given in Fig. 3-16 and is capable of transforming balanced or unbalanced three-phase impedance loads.

Front panel and brief user guide of Star Delta Transformations.vi.

3.5.2 Self-Study Questions

Open and run the custom-written VI named Star Delta Transformations.vi in the Chapter 3 folder, and investigate the following questions.

1: Set all impedances equal and perform Δ-Y and Y-Δ transformations, then repeat the transformations for unequal impedances, and verify the results analytically.

2: The circuit shown in Fig. 3-17 is called an unbalanced Wheatstone Bridge. Find the equivalent resistance between terminals A and D, which then can be used to calculate the source current for a given supply voltage.

Sample circuit for question 2.

A2: Answer: 20.94 Ω Hint: Use Δ-Y transformation to simplify the circuit.
{"url":"http://www.informit.com/articles/article.aspx?p=101617&seqNum=5","timestamp":"2014-04-20T01:08:31Z","content_type":null,"content_length":"33231","record_id":"<urn:uuid:f5ed002c-b3e2-4e5e-a714-e1a8e382b442>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
About Assignment Scales Gradebook allows you to choose from a number of scales to score each assignment: Points: You can enter the number of points a student earned out of the total possible points. You can enter non-negative numbers up to five digits and with two decimal places, such as 9.25 (out of 10 Percentage: You can enter a percentage to indicate the student's achievement on the assignment, or number of items answered correctly. You can enter non-negative whole numbers, e.g. 85. Text: You can enter notes about student performance on that assignment, track necessary information (such as group # or project partners), or provide brief comments to students about an assignment. The maximum number of characters you can enter is 255. 4.0 Scale: You can set up a 4.0 scale for assignments that converts to percentage scores for the purposes of calculating total scores. Use a 4.0 scale when you want students to receive their scores as values on the 4.0 scale and you want GradeBook to calculate the total score. Custom: Custom scales allow you to define scores or descriptors that distinguish meaningfully between different levels of performance on an assignment. Choose a custom scale if you wish to use any of the following to describe student performance: • Rubric or rating scale (e.g. Excellent, Good, Fair, Poor) • Letter grade scale (e.g., A, B+, etc.) • Pass/Fail • Credit/No Credit When you choose a custom scale, you must create a conversion table. First, you define the scores or descriptors (such as "Pass") you will use to grade the assignment. The score or descriptor you choose will display to students when their scores are published. Then, you must enter an equivalent percentage for each score or descriptor. These equivalent percentages are used by GradeBook when calculating total scores. This conversion table must be "customized" by you because agreement does not exist on how these scores or descriptors should map meaningfully onto a percentage scale. 
Because GradeBook converts all scores to a percentage in order to calculate a student's total score for the class, the conversion table ensures that the meaning of your custom scale (what it communicates about student achievement) is preserved in this calculation. If a straight mathematical conversion is used instead of professional judgment, students (especially those with lower scores) may be unduly penalized. EXAMPLE: Professor Berg assigns challenging problem sets as homework. He is most interested in seeing how his students approach each problem, not whether they solve the problem correctly. He uses a 3-point rubric and a custom scale to communicate his emphasis on process over product. │ Rubric scores │ Custom scale │ │ 3 - Effective strategies and correct answer │ 3 = 100% │ │ 2 - Effective strategies, incorrect answer │ 2 = 95% │ │ 1 - Attempted, ineffective strategies │ 1 = 85% │
{"url":"http://cat-plone2.cac.washington.edu/lst/help/gradebook/scales","timestamp":"2014-04-17T09:34:33Z","content_type":null,"content_length":"25637","record_id":"<urn:uuid:0b6ec79c-b65d-42f2-a01c-5046f3c6404f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Superelasticity of Carbon Nanocoils from Atomistic Quantum Simulations A structural model of carbon nanocoils (CNCs) on the basis of carbon nanotubes (CNTs) was proposed. The Young’s moduli and spring constants of CNCs were computed and compared with those of CNTs. Upon elongation and compression, CNCs exhibit superelastic properties that are manifested by the nearly invariant average bond lengths and the large maximum elastic strain limit. Analysis of bond angle distributions shows that the three-dimensional spiral structures of CNCs mainly account for their unique superelasticity. Nanocoil; Nanotube; Superelasticity; Young’s modulus There is a large class of novel nanostructures with helical geometries including boron carbide [1], SiC [2] and ZnO [3,4] nanosprings, carbon [5] and ZnO [6] nanohelices, and carbon nanocoils [7,8]. Among them, carbon nanocoil (CNC) (also known as coiled carbon nanotube) has attracted particular attention due to its structural correlation with carbon nanotubes (CNTs). Intuitively, CNCs may inherit some of the fundamental properties of carbon nanotubes but exhibit other unique mechanical, electronic, and magnetic properties associated with their coiled geometries and the intrinsic distribution of five-membered and seven-membered rings. In early 1990s, Dunlap [9] and Ihara et al. [10-12] proposed several structural models for coiled carbon nanotubes and discussed the relationships between the geometric parameters (diameter, pitch length, rotational symmetry) and the energetic, elastic, and electronic properties. Molecular dynamics simulations and tight-binding calculations have demonstrated the structural stability of CNCs; they have higher cohesive energy (~7.4 eV/atom) than that of C[60] (7.29 eV/atom) [10,13]. 
Electronic properties of CNCs including band structures and density of states were investigated using a tight-binding model [11,14], and it was predicted that some carbon nanocoils could be semi-metals, in contrast to the conventionally semiconducting and metallic behavior known for the straight carbon Since Zhang et al. first fabricated carbon nanocoils (700 nm in pitch and ~20 nm in tubular diameter) via catalytic decomposition of acetylene in 1994 [7], there have been large experimental efforts in synthesizing CNCs of high quality. Production of CNCs by chemical vapor deposition (CVD) [15-19], laser evaporation of the fullerene/Ni particle mixture [20], and opposed flow flame combustion method [21] has been reported. Pan and coworkers realized diameter control of CNCs via tuning the particle size of the nanoscale catalysts [22]. In addition to the conventionally synthesized multi-walled CNCs with tubular diameters of 15–100 nm [7,16-19], evidence of ultrathin single-walled carbon nanocoils (with both tubular diameter and pitch length down to 1 nm) was found in the products of carbon nanotubes from catalytic decomposition of hydrocarbon molecules by Biró’s STM experiments [23]. With their unique three-dimensional (3D) helical structures, the CNCs are expected to exhibit spring-like behavior in their mechanical properties. In an experiment by Chen et al. [24], multi-walled CNCs with outer tubular diameter of ~126 nm have been elastically elongated to a maximum strain of ~42%. A spring constant of 0.12 N/m in the low strain region was obtained. According to the structural parameters of nanocoil given by Chen et al. [24] (tubular diameter of 120 nm, coil radius of 420 nm, and pitch of 2,000 nm), Fonseca et al. [25] computed the CNC’s Young’s modulus within the framework of the Kirchhoff rod model and obtained a value of 6.88 GPa. Using finite element analysis at the continuum level, Sanada et al. 
also predicted a similar result (about 4.5 GPa) for a carbon nanocoil with tubular radius of 240 nm, coil radius of 325 nm, and coil pitch of 1,080 nm [26]. However, the experimentally measured Young's modulus values are much higher than these theoretical predictions. Volodin et al. [27] reported a Young's modulus of ~0.7 TPa for CNCs with coil diameter >170 nm from AFM measurements. Using a manipulator-equipped SEM, Pan et al. determined the Young's modulus of CNCs to be up to 0.1 TPa for coil diameters ranging from 144 to 830 nm [28]. The large discrepancy between experiment and theory has been attributed to the use of mechanical parameters of bulk materials in the continuum mechanics simulations [25].

Despite the above efforts, our theoretical knowledge of CNCs is still limited. In particular, there have been no atomistic simulations of the mechanical properties of CNCs. In this paper, we proposed a new way of constructing structural models of carbon nanocoils and computed the Young's moduli and spring constants for a series of ultrathin CNCs. Most interestingly, we observed an unusual superelasticity in these CNCs owing to their 3D spiral geometries.

Structural Model and Computational Methods

We developed a simple way to construct atomistic models of single-walled carbon nanocoils based on nanotubes with given chirality. As shown in Fig. 1, a pair of pentagons and a pair of heptagons are first introduced on two sides of a piece of carbon nanotube by adjusting the local topological structures of the two pairs of originally hexagonal rings (see the highlighted parts in Fig. 1a) and the surrounding carbon network. Introducing pentagons forms a cone defect, while introducing heptagons results in a saddle-point defect (see the blue and red rings in Fig. 1b, respectively).

Figure 1.
(Color online) Procedures of constructing the structural model of a (6, 6) carbon nanocoil from a piece of (6, 6) carbon nanotube

Upon relaxation, the nanotube segment is bent around the defect site in order to release the strain energy induced by the pentagons and heptagons. The pentagon (heptagon) pair locates in the convex (concave) part of the segment (see Fig. 1b), passing through a bisector after we adjust the number of carbon atoms on the two ends to make the segment symmetric. Depending on how these basic structural segments are connected, either a nanocoil or a nanotorus [9,29,30] is formed. As shown in Fig. 1c, two segments are connected with a certain rotation angle to make the combined structure spiral and to form a seamless hexagonal carbon network. The structure in Fig. 1c can be further used as a building block to construct complete nanocoils with one-dimensional (1D) periodic boundary conditions (see Fig. 1d). By changing the tube length at the two ends of the basic segment (Fig. 1b) or varying the nanotube diameter, we can control the coil diameter, coil pitch, and tubular diameter of a carbon nanocoil. In this way, we built a series of single-walled carbon nanocoils, namely (5, 5), (6, 6), (7, 7), and (8, 8) CNCs. Here, the index (n, n) for a CNC means that the CNC is constructed from the straight (n, n) nanotube. As shown in Fig. 2, a typical nanocoil exhibits a polygonal shape from the top view, consistent with experimental observation [31]. The effective coil diameter d of a nanocoil is nearly proportional to its tubular diameter as well as to the side length of the basic segment (see Table 2), but there is no simple relationship between the coil pitch and the other geometry parameters. At present, for a given nanotube, we chose to construct nanocoils using the building blocks with the smallest side length (corresponding to the length of the straight nanotube on each basic segment).

Figure 2.
Geometry of the (6, 6) CNC from the side view (left plot) and top view (right plot); the latter is a hexagonal nanotorus. Here a is the side length of the hexagon and D is the diameter of the inner ring, from which the area of the cross section (from the top view) is computed

The structures and energetics of these CNCs were described by a nonorthogonal tight-binding (TB) model developed by our group previously [32]. This TB total energy model is based on the extended Hückel approximation and employs an exponential distance-dependent function for the hopping and overlap integrals. The TB parameters were especially developed for hydrocarbon molecules and nanostructures; the experimental or ab initio data on the geometries, binding energies, on-site charge transfer, and vibrational frequencies of a variety of hydrocarbon molecules have been well reproduced. In addition, a few test calculations on carbon fullerenes and nanotubes also showed satisfactory agreement between TB and DFT results. Within 1D periodic boundary conditions, the lattice parameter (pitch) of each nanocoil was carefully adjusted to minimize the total energy. Starting from the equilibrium 1D lattice, the CNCs were either compressed or elongated by gradually varying the lattice parameter to investigate the mechanical properties of these nanocoils. At any given lattice parameter, the atomic coordinates of the CNCs were fully relaxed without any symmetry constraint. To validate the results of the TB calculations, we performed all-electron density functional theory (DFT) calculations on the smaller (5, 5) CNC. In the DFT calculations, we adopted the generalized gradient approximation (GGA) with the PW91 parameterization [33] and the double-numerical plus d polarization (DND) basis set as implemented in the DMol^3 package [34].
Results and Discussion

Young's Modulus and Spring Constant

The mechanical properties of a carbon nanocoil can be characterized by the spring constant (k) and Young's modulus (E), which can be computed from the following two formulas:

k = (d²U/dL²)|_(L=L0),   E = (1/V0)(d²U/dε²)|_(ε=0),   with strain ε = (L − L0)/L0,

where U is the elastic potential energy of the system (total energy differences at different lengths), L is the length of the 1D unit cell, L0 is its equilibrium value, and V0 is the effective volume of the 1D structural unit in its equilibrium configuration. For a carbon nanocoil, V0 = S × L0, where S is the area of the cross section of the nanocoil from the top view (see Fig. 2). Similarly, for a single-walled carbon nanotube, V0 = 2πr × L0 × Δd, where Δd = 3.4 Å is the shell thickness of the tube wall and r is the tube radius [35,36].

Using DFT results as a benchmark, we first calculated the Young's moduli of a series of armchair carbon nanotubes to assess the validity of the present TB total energy model. Starting from the equilibrium 1D lattice length, we elongated different armchair CNTs along the axis direction with a strain step of 0.2% up to a maximum strain of 1%. The computational 1D supercells, 29.54 Å in length, include 12 unit cells of nanotube. The theoretical Young's moduli of CNTs from DFT and TB calculations are listed in Table 1. Both methods predicted that the Young's moduli of CNTs are around 1.0 TPa, nearly independent of tube diameter. Similar results were obtained in previous theoretical [36] and experimental [37] studies on CNTs. The agreement between the TB and DFT calculations and the coincidence with previous results indicate that the present TB model is reasonable for describing the mechanical properties of carbon nanostructures.

Table 1. Young's modulus (E) of different armchair carbon nanotubes from DFT (E_DFT) and TB (E_TB) calculations

Similarly, the Young's moduli and spring constants of the CNCs were calculated by stretching the system along the direction of the spiral axis.
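The two definitions above translate into a small post-processing routine: fit a parabola to the computed energy-versus-length points, take its curvature as k, and scale by L0/S to obtain E. The sketch below illustrates this step; the harmonic test data and the values of S, L0, and k in it are illustrative assumptions, not the TB results of Tables 1 and 2.

```python
import numpy as np

EV_PER_A2_TO_N_PER_M = 16.0218   # 1 eV/A^2 expressed in N/m
EV_PER_A3_TO_GPA = 160.218       # 1 eV/A^3 expressed in GPa

def modulus_and_spring_constant(L, U, S):
    """Fit U(L) with a parabola and return (E in GPa, k in N/m, L0 in A).

    L : 1D lattice lengths in Angstrom
    U : total energies in eV
    S : top-view cross-sectional area in Angstrom^2 (so that V0 = S * L0)
    """
    c = np.polyfit(L, U, 2)                 # U ~ c0*L^2 + c1*L + c2
    L0 = -c[1] / (2.0 * c[0])               # location of the energy minimum
    k_evA2 = 2.0 * c[0]                     # d2U/dL2 in eV/A^2
    # E = (1/V0) d2U/de2 = (L0^2 / V0) d2U/dL2 = k * L0 / S
    E_gpa = (k_evA2 * L0 / S) * EV_PER_A3_TO_GPA
    return E_gpa, k_evA2 * EV_PER_A2_TO_N_PER_M, L0

# Synthetic harmonic well around L0 = 25 A with k = 17 N/m and an
# assumed cross-section S = 300 A^2 (illustrative numbers only).
k_true = 17.0 / EV_PER_A2_TO_N_PER_M        # eV/A^2
L = np.linspace(0.95, 1.05, 11) * 25.0
U = 0.5 * k_true * (L - 25.0) ** 2
E, k, L0 = modulus_and_spring_constant(L, U, 300.0)
```

For actual TB or DFT data, the synthetic U array would simply be replaced by the computed total energies at each lattice parameter.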
Within a maximum strain of 5%, we gradually applied the elongation strain in steps of 1%. The Young's moduli of the CNCs from TB calculations are listed in Table 2. For all systems studied, the computed Young's moduli range between 3 and 6 GPa. For the smallest (5, 5) nanocoil considered, our DFT calculations yield a Young's modulus of E = 5.31 GPa, rather close to the TB value (5.4 GPa). Compared with those of carbon nanotubes, the Young's moduli of nanocoils are lower by two orders of magnitude, indicating that the CNCs are quite soft relative to CNTs owing to their unique spring-like geometry. The Young's moduli for the ultrathin nanocoils from our present atomistic calculations are comparable to previous theoretical results for mesoscale nanocoils. Unfortunately, there are no experimental data reported for ultrathin CNCs with diameters down to several nanometers.

Table 2. Young's modulus (E) and spring constant (k) of carbon nanocoils (CNCs) from TB and DFT (values in brackets) calculations

Although the computed Young's modulus of a nanocoil varies with the tubular diameter and coil diameter (see Table 2), there seems to be no clear diameter-dependent trend, in agreement with the experimental observations [27,28]. For carbon nanocoils with diameters between 144 and 830 nm, Hayashida et al. [28] found that the Young's modulus changes irregularly from 0.04 TPa to 0.10 TPa. Volodin's measurement of Young's modulus also revealed no apparent dependence on the coil diameter [27]. The spring constants of the CNCs were also computed using Eq. (1), and the results are listed in Table 2. For the (5, 5), (6, 6), and (7, 7) CNCs, the spring constants are around 15–19 N/m, whereas the (8, 8) CNC possesses a very large spring constant of 44.36 N/m. A previous experiment by Chen et al. [24] obtained k = 0.12 N/m for a mesoscale CNC (tubular diameter of 120 nm, coil radius of 420 nm, and pitch of 2,000 nm).
The discrepancy between the present theoretical values and the measured data may be attributed to the different length scales of the systems (nanometers in our model systems versus hundreds of nanometers in experimental CNCs).

For macroscopic materials, the superelastic (or pseudoelastic) effect in shape memory alloys enables a variety of useful industrial and medical applications [38]. In nanostructured materials, similar superelastic phenomena were recently revealed in nanocoils and microcoils. Gao et al. reported superelasticity in ZnO nanohelices (~560 nm in coil diameter) with an experimental maximum elongation of 69.8% measured by AFM and a theoretical maximum elongation of 72% calculated by classical elasticity theory [6]. A Si3N4 microcoil with coil diameter of 160 μm also exhibited good recovery under repeated load, corresponding to superelasticity [39]. In particular, even when stretched to a nearly straight shape for several cycles, the Si3N4 microcoil recovered its original state without damage after the load was released. As for coiled carbon structures, Motojima et al. revealed that carbon microcoils can be extended and contracted by 3–15 times [40] and 5–10 times [41] relative to the original coil length. Meanwhile, carbon nanocoils have also demonstrated superior elasticity, with a maximum relative elongation of ~42% [24].

In this work, we applied elongation (compressive) strains up to about 60% (20–35%) on different CNCs. Above such elastic limits, the CNCs undergo plastic deformation, which will not be discussed here. Within the elastic strain ranges considered, the CNCs hold their topological structures very well upon geometry relaxation. We further examined the changes of the average C–C bond lengths of the CNCs during elongation and compression. As shown in Fig. 3, the average C–C bond length is very robust under external strains in both directions.
With elongation strain up to 50%, the increase in average bond length is less than 0.6% for the (5, 5), 0.4% for the (6, 6), and 0.3% for the (7, 7) CNC, respectively. On the other hand, the average C–C bond length is slightly reduced under 1D compression. For a (5, 5) CNC, the magnitude of the average bond length reduction is about 0.4% up to a maximum compressive strain of 35%. In addition to the above TB results, DFT calculations were carried out on the (5, 5) CNC to further confirm the change of average bond length during elongation and compression. As shown in the inset plot of Fig. 3, up to an elongation strain of 50% (a compressive strain of 30%), the increase (decrease) in average bond length is 0.5% (0.14%) from DFT calculations, comparable to the TB values of 0.6% (0.08%). The excellent coincidence between DFT and TB results indicates that the present TB model is reliable, at least for describing the elastic properties of carbon nanocoils.

Figure 3. Variation of the average C–C bond length in carbon nanocoils (CNCs) under elongation (positive) and compressive (negative) strains. The inset plot compares the percentages of bond length variation with regard to the equilibrium state for a (5, 5) CNC from DFT and TB calculations

With increasing tubular diameter, the variation of the average bond length in the nanocoil is less sensitive to elongation strain (see Fig. 3), implying that the nanocoil can sustain higher strain. In contrast, the elastic limit of compression for a CNC decreases with increasing tubular diameter. For example, the maximum compressive strain is 35% for the (5, 5) CNC, 25% for the (6, 6) CNC, and 20% for the (7, 7) CNC. It is interesting to note that the carbon nanocoils can sustain higher elongation strain (up to ~60%) than compressive strain (up to 20–35%). The above computational results demonstrate the remarkable superelasticity of CNCs.
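The bond statistics underlying Figs. 3 and 4 (average C–C bond length and the width of the bond angle distribution) can be extracted from relaxed atomic coordinates with a few lines of post-processing. The sketch below assumes a simple 1.7 Å distance cutoff to identify C–C bonds and a histogram-based FWHM estimate; the hexagonal-ring test geometry and Gaussian angle sample are synthetic.

```python
import numpy as np

def bonds(coords, cutoff=1.7):
    """Lengths (A) of all atom pairs closer than `cutoff`, each pair
    counted once. 1.7 A separates first and second carbon neighbors
    in sp2 networks."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.triu_indices(len(coords), k=1)
    pair = d[i, j]
    return pair[pair < cutoff]

def angle_fwhm(angles_deg, bins=90, angle_range=(90.0, 150.0)):
    """Histogram-based full width at half maximum (degrees) of a
    bond-angle distribution."""
    h, edges = np.histogram(angles_deg, bins=bins, range=angle_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = np.where(h >= h.max() / 2.0)[0]
    return centers[above[-1]] - centers[above[0]]

# Toy checks: a perfect hexagonal carbon ring of side 1.42 A, and a
# Gaussian angle sample centered at 120 deg with sigma = 2 deg.
theta = np.arange(6) * np.pi / 3.0
ring = 1.42 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(6)])
b = bonds(ring)
rng = np.random.default_rng(0)
fwhm = angle_fwhm(rng.normal(120.0, 2.0, 200_000))
```

For a Gaussian distribution the FWHM is about 2.355σ, so the toy sample above should give roughly 4.7°; for real structures the angles would first be computed from bonded atom triplets.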
In particular, under elongation strain up to 60%, the topological structure of the carbon nanocoil is still retained, with the average bond length increased by less than 1%. This phenomenon can be partially understood by the 3D spiral structures of the CNCs, which offer enough flexibility to be stretched or squeezed. Due to the substantial strength of the C–C bond (with average bond energy over 2 eV), the relative orientations of neighboring C–C bonds (i.e., the bond angles) in a nanocoil alter during compression or elongation in order to avoid significant changes of the C–C bond lengths. As shown in Fig. 4, the full width at half maximum (FWHM) of the bond angle distribution for a (5, 5) CNC increases during elongation or compression. For example, the FWHM for the equilibrium (5, 5) CNC is 2.7°. It increases to 3.5° under an elongation strain of 50%, and to 5.6° for a compressive strain of 35%. The superelastic behavior predicted for CNCs may lead to applications in nanoscale materials and devices, for example, shape memory, elastic energy storage, buffers, nano-springs in NEMS, and so on.

Figure 4. Bond angle distribution of a (5, 5) carbon nanocoil under large elongation (50%) and compressive (35%) strain, compared to the equilibrium case

Conclusions

We have constructed a series of carbon nanocoils by periodically introducing pentagons and heptagons in segments of carbon nanotubes to make them coiled. The computed Young's moduli of carbon nanocoils (3–6 GPa) are much lower than those of carbon nanotubes (~1 TPa). Under large elongation/compressive strains, the average bond lengths of CNCs remain almost invariant, while the elastic energy is stored via bond angle redistribution, corresponding to the superelastic behavior. Compared to carbon nanotubes of the same chirality, nanocoils show much smaller Young's moduli and unusual superelasticity, which might lead to future nanotechnology applications.
This work is supported by the NCET Program provided by the Ministry of Education of China (NCET-060281), and the Scientific Research Foundation for the Returned Overseas Chinese Scholars.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Probabilistic and Fuzzy Arithmetic Approaches for the Treatment of Uncertainties in the Installation of Torpedo Piles

Mathematical Problems in Engineering, Volume 2008 (2008), Article ID 512343, 26 pages

Research Article

^1Laboratory of Computer Methods and Offshore Systems (LAMCSO), Civil Engineering Department, COPPE/UFRJ-Postgraduate Institute of the Federal University of Rio de Janeiro, 21945-970 Rio de Janeiro, RJ, Brazil
^2COPPE/UFRJ, Civil Engineering Department, Centro de Tecnologia Bloco B sala B-101, Cidade Universitária, Ilha do Fundão, Caixa Postal 68.506, 21945-970 Rio de Janeiro, RJ, Brazil

Received 2 December 2007; Accepted 27 March 2008

Academic Editor: Paulo Gonçalves

Copyright © 2008 Denise Margareth Kazue Nishimura Kunitaki et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The "torpedo" pile is a foundation system that has been recently considered to anchor mooring lines and risers of floating production systems for offshore oil exploitation. The pile is installed in a free-fall operation from a vessel. However, the soil parameters involved in the penetration model of the torpedo pile contain uncertainties that can affect the precision of analysis methods used to evaluate its final penetration depth. Therefore, this paper deals with methodologies for assessing the sensitivity of the response to the variation of the uncertain parameters and, mainly, for incorporating into the analysis method techniques for the formal treatment of the uncertainties. Probabilistic and "possibilistic" approaches are considered, involving, respectively, the Monte Carlo method (MC) and concepts of fuzzy arithmetic (FA).
The results and performance of both approaches are compared, stressing the ability of the latter approach to efficiently deal with the uncertainties of the model, with outstanding computational efficiency, and, therefore, to comprise an effective design tool.

1. Introduction

1.1. Context: Offshore Platforms, Mooring Systems, and Anchors

Petroleum companies around the world have been faced with the challenge of developing offshore oil production activities in deep and ultradeep waters. In shallow water, the traditional solution consists in employing platforms supported by fixed framed structures, such as jackets, where the foundation system consists of driven piles [1]. Presently, as oil fields have been identified in deeper waters such as the Campos Basin (southeastern Brazil), offshore platforms have included several types of floating units, such as semisubmersible platforms, tension-leg platforms (TLPs), and floating production, storage, and offloading (FPSO) units based on ships.

Floating platforms can be maintained in position by different types of mooring systems, which in turn may employ anchors based on different types of foundation elements. Semisubmersible platforms and FPSO units, for instance, may be kept in position by mooring lines in catenary or taut-leg configurations. Mooring lines in a free-hanging catenary configuration transmit essentially horizontal loads to the foundation system. This fact introduces greater flexibility in the selection of the appropriate anchor type. However, the mooring radius (the horizontal distance, measured at the sea bottom, from the center of the platform) is relatively large: typically, about two to three times the water depth.
Therefore, the application of catenary configurations may not be feasible in deep or ultradeep waters, due to the increased weight of the mooring lines, and also due to installation problems that may arise in congested scenarios with several platforms close together (as is the case of some oil fields in the Campos Basin). The taut-leg configuration has been proposed to tackle these constraints. This configuration, where the lines are not slack, allows the use of smaller line lengths. When associated with the use of new materials (such as polyester fiber ropes) [2], this leads to a considerable reduction in the weight of the mooring system. Moreover, since at the anchor point the lines are not in contact with the seabed, and may reach inclinations around 45°, the mooring radius is typically equal to the water depth, and therefore considerably shorter than in catenary configurations. However, taut-leg mooring systems transmit vertical loads to the foundation system. This is also the case with tension-leg platforms, which are moored by vertical tendons. Therefore, care should be taken in the selection of anchor types that can withstand vertical loads. Among the foundation elements that have been applied in deepwater systems, two types of anchors can be mentioned: the suction anchor and the vertically loaded anchor (VLA) [3]. However, some installation difficulties have been reported for suction anchors, due to added mass effects and a resonant period of the lifting system at the installation depth that may approach the dominant wave period at the site [4]. Vertically loaded anchors are easier to install, but require drag procedures that may hinder their correct positioning, mainly in congested areas with many other anchors nearby.

1.2. The Torpedo Pile

The torpedo pile (illustrated in Figure 1) was proposed [5] as a solution to withstand vertical loads while circumventing the problems associated with other types of anchors.
It consists simply of a metallic pipe with a closed tip, filled with scrap chain and concrete [6]. Its installation does not require drag procedures such as those employed for VLAs; the procedure is quite simple and is illustrated in Figure 2. First, the installation vessel hangs the pile (connected to the mooring line) at a specified drop height above the target point on the seabed. The design embedment is then reached by simply releasing the pile, letting it accelerate, fall freely, and then penetrate into the soil. More than one hanging configuration has been conceived; for instance, one of the alternatives (considered for the installation of torpedo piles to anchor flexible risers or mobile drilling units, MODUs) has a chain loop at the top of the installation line, as shown in Figure 3. As the pile falls, this loop is pulled and unfolded. Therefore, the torpedo pile presents not only a low cost of manufacture, but also a low cost of installation, since the same vessel can transport and install the pile. There is another configuration, for the permanent mooring of production units, which does not present the chain loop, but requires two vessels to hang, respectively, the installation line and the mooring line (to which the torpedo pile is connected). In this configuration, the bottom end of the installation line is connected to an intermediate point of the mooring line, thereby maintaining the pile suspended at the desired drop height. Above this connection point, there is a trigger that allows the mooring line and the pile to be released, causing the pile to fall (dragging with it the mooring line) and penetrate the soil. Another advantage of the torpedo pile concept is that, since it can withstand horizontal and vertical loads, it can be used with mooring lines in a taut-leg configuration which, as mentioned before, is the preferred alternative for semisubmersible platforms and FPSO units in deeper waters and congested scenarios.

1.3.
Objective of the Paper

The design of a torpedo pile should employ theoretical models to predict the pile penetration depth, such as the dynamic penetration model proposed by True [7]. This model relies on soil parameters whose values are assumed to be known, fixed, and deterministic. However, it is well known that soil properties present a significant degree of variability that, associated with imprecision in the determination of their design values, can affect the accuracy of the response given by the simulation method. The objective of this paper, therefore, is to study techniques to deal with the uncertainty of the soil parameters, and to associate these techniques with an analytical/numerical penetration model for the torpedo pile. Two different approaches are considered for the treatment of uncertainties of the penetration model. The first is a probabilistic approach, based on the classical Monte Carlo method. The second is a "possibilistic" approach, derived using concepts from fuzzy arithmetic and fuzzy sets.

The following sections of the paper begin by describing the theoretical model and solution procedure considered for the simulation of the pile penetration. First, the analytical formulation originally presented by True [7] is described; then a numerical solution procedure in the time domain is described, followed by an application where a pile dropped from a height of 200 m above the seabed is analyzed for deterministic, fixed values of the soil parameters. The paper then proceeds by describing the soil parameters that are considered uncertain. Methodologies to assess the sensitivity of the response to the variation of these uncertain parameters are then presented, based on the Monte Carlo method (MC) and fuzzy arithmetic (FA). More importantly, such methodologies allow the designer to incorporate, into the analysis method, techniques for the formal treatment of these uncertainties.
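As a concrete illustration of the "possibilistic" idea, the sketch below propagates two triangular fuzzy soil parameters (Su0 and Suk) through a simple, monotonically increasing response function using interval arithmetic on alpha-cuts. The response chosen here — a static tip resistance Nc·Su(z)·Af at a fixed depth — and all numerical values are illustrative assumptions; the paper applies the alpha-cut logic to the full penetration model.

```python
def alpha_cuts_tri(a, b, c, levels):
    """Alpha-cuts [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return [(a + lvl * (b - a), c - lvl * (c - b)) for lvl in levels]

def propagate(f, cuts_x, cuts_y):
    """Propagate two fuzzy inputs through a function f(x, y) that is
    monotonically increasing in both arguments: at each alpha level
    the output interval comes from the interval endpoints."""
    return [(f(xl, yl), f(xu, yu))
            for (xl, xu), (yl, yu) in zip(cuts_x, cuts_y)]

# Tip resistance at z = 10 m with Nc = 9 and Af = 0.13 m^2; the
# triangular fuzzy numbers for Su0 (kPa) and Suk (kPa/m) are assumed.
Nc, Af, z = 9.0, 0.13, 10.0
q = lambda su0, suk: Nc * (su0 + suk * z) * Af
levels = [0.0, 0.5, 1.0]
cuts = propagate(q,
                 alpha_cuts_tri(2.0, 3.0, 4.0, levels),   # Su0
                 alpha_cuts_tri(1.5, 2.0, 2.5, levels))   # Suk
```

At alpha = 1 the interval collapses to the deterministic answer, while alpha = 0 gives the widest plausible range. For a non-monotone response, each alpha-cut would instead require an optimization (or sampling over the input box, as in the Monte Carlo approach) to bound the output.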
Finally, results of the application of these concepts for the treatment of uncertainties are presented for an actual case study, beginning with results of deterministic parametric studies in order to assess the sensitivity of the response to the variation of the uncertain parameters. Results for the "probabilistic" MC approach are then presented, followed by the novel implementation and application of the approach based on FA. The results and performance of both approaches are compared, stressing the ability of the latter approach to efficiently deal with the uncertainties of the model, with outstanding computational efficiency, and therefore to comprise an effective design tool.

2. The Penetration Model

2.1. Original Formulation: Penetration of Projectiles

Studies on the penetration behavior of projectiles were initially intended for military applications [8] and were followed by studies on the prediction of the final embedment depth of projectiles into soils [9, 10] and on the estimation of undrained shear strength [11, 12]. The development of a dynamic penetration model by the US Navy was required to represent the penetration of propellant-embedded plate anchors into seafloor soils [13]. This kind of anchor is positioned directly on the mudline, and an explosion, caused by the propellant system, pushes the anchor fluke down. In order to fulfill this objective, True [7] took into account recommendations given by the authors of empirical models (such as Young [9]) and modified traditional bearing capacity formulations (for deep foundations in cohesive soils) to consider variations in penetration resistance with velocity and penetrator shape. The analytical model developed in [7] to simulate the dynamic penetration of plate anchors is based on Newton's second law.
Considering that the penetrator velocity can be expressed as v = dz/dt (where z stands for the soil depth), and that its acceleration can therefore be expressed as a = v·dv/dz, the governing equation can be written as follows:

m_eff · v · dv/dz = W_s + F_E − F_D − F_T − F_S,   (2.1)

where m_eff is the effective mass of the penetrator, given by

m_eff = M + ρ_s·V.   (2.2)

In this latter equation, M and V are, respectively, the structural mass and the volume of the penetrator, and ρ_s is the mass density of the soil. It can be seen that the term ρ_s·V is similar to the "added mass" term of the Morison equation [14], which has been traditionally employed to calculate hydrodynamic drag and inertia loads on cylinders immersed in fluid. In the present case, when multiplied by the acceleration on the left-hand side of (2.1), the term ρ_s·V introduces an additional inertia force that corresponds to the contribution of the soil in which the penetrator is immersed. The forces on the right-hand side of (2.1) are W_s (the submerged weight of the penetrator); F_D, F_T, and F_S (which are, respectively, the drag force, the tip resistance, and the side resistance); and F_E (the external driving force applied by the propellant system). The submerged weight W_s is defined in terms of the weight in air W, the volume V, and the unit weight of soil γ by the following expression:

W_s = W − γ·V.

The drag force F_D is similar to the longitudinal drag component given by Morison's equation [14], which is expressed as

F_D = (1/2)·ρ_s·C_D·A_f·v²,

where A_f is the frontal or cross-sectional area of the penetrator and C_D is the empirical drag coefficient, which can take the value of 0.7 as proposed by True according to [13]. The classical formulation for the static bearing capacity of deep pile foundations states that, for undrained conditions, the tip resistance Q_T and side resistance Q_S [15] are defined by

Q_T = Su·N_c·A_f,   Q_S = α·Su·A_s,

where Su is the undrained soil shear strength; N_c is the bearing capacity factor (assumed equal to 9 for homogeneous clay); α is the dimensionless side adhesion factor; and A_f and A_s are, respectively, the frontal and lateral areas of the pile.
The dynamic tip and side resistances F_T and F_S are obtained by including, in this classical static formulation, a side adhesion reduction factor δ, a soil strain rate factor S_e, and the soil sensitivity value St_i. The latter represents the loss of shear strength that clays suffer when remolded, and is defined as the ratio of the undisturbed and remolded strengths [16]. Thus, the tip resistance F_T and side resistance F_S are defined by the following expressions:

F_T = S_e·Su·N_c·A_f,   F_S = δ·S_e·Su·A_s / St_i.

Values for the side adhesion reduction factor δ were determined in [17] based on results of model tests. An expression for the strain rate factor S_e was also defined in [17], as a function of the velocity v, the diameter (or thickness) d of the penetrator, the undrained soil shear strength Su, and the empirical parameters: the maximum soil strain rate at high velocity values, the strain rate velocity factor C_e, and the strain rate constant C_0, whose values were also determined in [17] based on results of model tests. In the penetration model considered for offshore applications in the Campos Basin [5], the undrained shear strength Su of the soil is assumed to vary linearly with the depth z, according to the following expression:

Su(z) = Su_0 + Su_k·z,

where Su_0 is the undrained shear strength at the mudline and Su_k is its rate of increase with depth.
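The force terms above translate directly into code. The sketch below implements them as plain functions; the strain rate factor Se, adhesion reduction delta, and sensitivity St are passed in as given values (their empirical expressions from [17] are not reproduced here), and the numerical inputs in the example are assumptions for illustration only.

```python
def su(z, su0, suk):
    """Undrained shear strength, linear with depth (Pa)."""
    return su0 + suk * z

def drag_force(v, rho_s, af, cd=0.7):
    """Morison-type longitudinal drag; rho_s is the soil mass density."""
    return 0.5 * rho_s * cd * af * v * abs(v)

def tip_resistance(z, af, su0, suk, se=1.0, nc=9.0):
    """Dynamic tip resistance FT = Se * Nc * Su(z) * Af."""
    return se * nc * su(z, su0, suk) * af

def side_resistance(z, as_, su0, suk, delta=1.0, se=1.0, st=1.0):
    """Dynamic side resistance FS = delta * Se * Su(z) * As / St."""
    return delta * se * su(z, su0, suk) * as_ / st

# Example at 5 m depth and 10 m/s (all values illustrative):
ft = tip_resistance(5.0, af=0.13, su0=2.0e3, suk=1.5e3)
fs = side_resistance(5.0, as_=15.0, su0=2.0e3, suk=1.5e3, st=3.0)
fd = drag_force(10.0, rho_s=1.5e3, af=0.13)
```

Keeping each term as a separate function makes it straightforward to later replace any single input (for instance, Su0 and Suk) by a sampled or fuzzy quantity without touching the rest of the model.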
Original Solution Procedure

To solve (2.1), True [7] developed an incremental finite-difference algorithm in which the penetrator is treated as a point object at the ith depth increment, which allows the governing equation to be simplified to an incremental form (2.9). Substituting the expressions for m_eff, F_D, F_T, F_S, and W_s into (2.9), ignoring the external driving force F_E, and adopting the simplifying assumptions of [7], a recursive expression (2.10) is obtained that can be applied repeatedly to obtain the velocities of the penetrator at each depth increment Δz. The final penetration depth is then obtained as the product of the depth increment and the index of the increment at which the penetrator velocity drops to zero. It should be recalled that, as in any numerical solution procedure, the accuracy of the results also depends on the careful selection of the depth increment. This will depend on the particular example that is analyzed, and may also involve the use of different increment values to assess convergence toward an accurate solution.

2.2. Formulation for a Free-Falling Pile

A free-falling cylindrical penetrometer, dropped from a given height above the mudline, is studied in [12] for the prediction of penetration depth and undrained shear strength. Equation (2.1), with some modifications, is also applied to describe the movement of this penetrometer. First, since it is free falling and there is no external driving force, the term F_E is omitted. Also, the added mass in (2.2) is considered negligible for slender penetrometers moving along their long axis; thus, the effective mass is equal to the structural mass (m_eff = M). Therefore, also replacing the velocity v by dz/dt and considering that the acceleration a is equal to d²z/dt², (2.1) becomes

M·d²z/dt² = W_s − F_D − F_T − F_S.   (2.11)

Obviously, torpedo piles behave similarly to free-falling penetrometers; therefore, their motion can also be described by the True formulation [7], with the same considerations as employed in [12], resulting in (2.11).
Moreover, the traditional Morison hydrodynamic formulation can also be applied to describe the forces acting on the pile while it is still in the water, before reaching the seabed, and the same submerged weight can be assumed for both media. It should be emphasized that such semiempirical formulations incorporate some assumptions, which in turn lead to uncertainties in the model. However, as mentioned in the introduction, this work is focused on the influence of the uncertainties in selected soil parameters. Studies regarding uncertainties associated with the penetration model itself will be dealt with in future works.

Solution Procedure in the Time Domain

It can be observed that the procedure originally proposed by True [7], as described in (2.9) and (2.10), involved the spatial integration of (2.1) to obtain velocities as functions of depth z. This would indeed be the more natural solution procedure if one were concerned only with the representation of the isolated pile and the natural random variability of the soil parameters with depth. However, as will be discussed later, there are other sources of uncertainty to be considered as well. Moreover, as will be discussed in the final section of this paper, the final goal of the developments presented here is to incorporate the penetration model (including the techniques, described later and based on fuzzy arithmetic, for the formal treatment of uncertainties) into a finite element (FE) spatial discretization scheme, associated with a time-domain nonlinear dynamic solver. The idea is to model not only the isolated pile, but also all the other components of the system being installed (i.e., the mooring line itself and the other lines and chains involved in the installation procedure), in a complete 3D model subjected to other loadings such as marine current. With this scenario in mind, it is more convenient to integrate (2.11) in the time domain.
At the current stage, where the focus is on modeling an isolated pile and evaluating the uncertainties in the soil parameters, the added mass can still be disregarded, as assumed in [12] and in (2.11). In a subsequent implementation of the penetration model in a time-domain solver associated with the full FE model, the dynamic equations will also incorporate the added mass effects of the complete configuration of the pile with the installation and mooring lines. The solution in the time domain, in terms of the acceleration , velocity , and displacement , at a given time , can be accomplished by applying a time-integration algorithm such as the explicit method of Chung and Lee [18], which can be stated as: where and are algorithmic parameters defined as ; and is the time step, which should not exceed a critical time step () in order to maintain the stability of the numerical solution [18]. The full time-domain solution procedure is shown in Algorithm 1. It should also be recalled that displacements, velocities, and accelerations are positive in the downward direction.

The application of the penetration model described above is now illustrated for a problem also studied in [19], corresponding to a pile dropped from a height of 200m above the seabed. The pile and soil data are presented in Tables 1 and 2. It should be emphasized that the soil parameters and the sensitivity value of 3 are related to a specific deposit and may not necessarily be representative of general applications. This application is not intended to represent an actual installation procedure for a torpedo pile (such as the one depicted in Figure 2), since, as mentioned before, the current implementation of the penetration model represents only the pile.
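A minimal sketch of the time-domain marching follows. The algorithmic parameters of the Chung-Lee method are not reproduced above, so a simple semi-implicit Euler scheme is used here as a stand-in; the net-force callback is a placeholder for the weight, drag, and soil resistance terms of (2.11):

```python
def simulate(m_eff, f_net, t_total=15.0, dt=0.002):
    """Time-domain integration of m_eff * z'' = F_net(z, v).  A
    semi-implicit (symplectic) Euler scheme stands in for the Chung-Lee
    explicit method, whose parameters are not given here.  Downward is
    positive; the run stops when the velocity returns to zero in the soil."""
    z, v, t = 0.0, 0.0, 0.0
    while t < t_total:
        a = f_net(z, v) / m_eff  # acceleration at the current state
        v += a * dt
        z += v * dt
        t += dt
        if v <= 0.0:             # pile has stopped: penetration complete
            break
    return z, v, t
```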
Therefore, in order to take into account the increase in drag effects due to the mooring line and chain loop, which are not explicitly represented, the model employs a value for the drag coefficient C[D] equal to 2.7, larger than the value of 0.7 proposed by True according to [13]. In future works, which will consider the implementation of a coupled finite-element-based, time-domain simulation program, there will be no need to perform this "fudging" of the drag coefficient C[D], since the coupled 3D model will explicitly include the complete installation configuration (e.g., the mooring line and chain loop for the application in MODUs described earlier). The time-domain solution considered a total time of 15 seconds (enough, as will be seen, for the pile to fully penetrate the soil). The analysis is performed with a time increment of 0.002 seconds. The value considered for the algorithmic parameter of the time-integration algorithm is . The results are presented in Figure 4, in terms of a graph relating the vertical position of the pile to its velocity, and in Figure 5 in terms of time histories of depth and velocity. The origin of the graph of Figure 4 corresponds to the pile in its initial position, before being dropped (therefore, with velocity and displacement equal to zero). It is seen that, as the pile drops in the water, its velocity increases until it nearly reaches the so-called "terminal velocity" due to the water drag (of course, this requires the pile to be released from an appropriate height). As the pile reaches the seabed and begins to penetrate the soil, the velocity is reduced; when it returns to zero, the penetration is complete and the final depth of the pile tip is reached.
3. Uncertainties of the Soil Parameters

Selected Parameters

As stated before in (2.8), for cohesive soils in offshore applications in the Campos Basin [5], the undrained soil shear strength Su is assumed to vary linearly with depth z, in terms of Su[0] (the undrained shear strength at the mudline) and Su[k] (its rate of increase with depth). For the normally consolidated clay encountered offshore in the Campos Basin [5], typical values that may be considered for Su[0] and Su[k] are, respectively, kPa and kPa/m. Therefore, (2.8) could be written as Actual values for these parameters, which affect the undrained soil shear strength Su at offshore sites, may be obtained from in situ tests (such as CPT, i.e., cone penetration tests [20], or vane tests based on a torsion procedure), or from laboratory tests with undisturbed samples, such as triaxial and minivane tests. It should also be recalled that the soil sensitivity Sti represents the loss of shear strength that clays suffer when remolded, and is defined as the ratio of undisturbed to remolded strength [16]. Remolded strength values can be obtained from vane, triaxial, or minivane tests of disturbed soils. Thus, values for Su (undrained shear strength) and Sti (sensitivity) are obtained from testing. Traditionally, a deterministic procedure is employed to obtain design values for these parameters, by calculating the average of values obtained from several tests. However, it is known that the results of both in-situ and laboratory tests may be influenced by several factors. The latter can be affected by mechanical disturbance of the soil samples during extraction and remolding, by changes in the samples during storage, and so forth. In-situ tests can also be affected by mechanical interferences, inadequate execution, and so on.
Therefore, it can be intuitively understood that there is a high degree of local soil variability, and imprecision in the determination of the design values of these soil parameters. Large variations in the response of the torpedo pile, mainly in terms of the final penetration depth reached by the pile, may be expected due to these uncertainties. The main objective of this paper, then, is to present a methodology to take into account uncertainties and imprecision in the values of the input parameters that define the physical and numerical models involved in the design and analysis of torpedo piles. This work focuses on Su (specifically, the rate of increase with depth Su[k]) and Sti. Of course, other parameters (not necessarily related only to the soil) could be considered; those can be dealt with in future works.

Sources of Uncertainty

Before proceeding further, it is important to recall some basic concepts regarding sources of uncertainty. In soil profile modeling, they may be grouped into two types [21–23]: (1) noncognitive, random natural variability, usually referred to as aleatory uncertainty; and (2) cognitive or epistemic uncertainties, which involve abstraction or subjectivity. The first group comprises the inherent uncertainty due to natural heterogeneity or in-situ variability of the soil, such as varying depths of strata during soil formation, variation in mineral composition, and stress history [24]. This corresponds, for instance, to the natural variability of the soil strength from point to point vertically at the position where the pile is to be installed. The second group includes epistemic uncertainties due to lack of knowledge; in this case, little information about subsurface conditions is available, because soil profile characteristics must be inferred from field or laboratory investigation of a limited number of samples.
It also includes uncertainties generated by sample disturbance, test imperfections, and human factors, as well as uncertainties that arise when engineering properties are obtained through correlation with index properties, as in the case of CPT tests, where empirical models are used to calculate the undrained shear strength by applying correlation factors to the cone tip resistance [24]. According to this classification, two major approaches, probabilistic and "possibilistic", can be employed to deal with uncertainties [25, 26]. Therefore, the remainder of this paper will deal with methodologies based on these approaches, to assess the sensitivity of the response to the variation of the selected parameters and, mainly, to incorporate into the analysis method techniques for the formal treatment of uncertainties. Section 4 will describe a probabilistic approach based on the Monte Carlo method and an approach based on fuzzy arithmetic (FA). Before proceeding, some additional comments should be presented regarding these sources of uncertainty. Inherent or natural variability is random by nature and cannot be reduced by increasing the number of tests [27]. The cognitive, epistemic uncertainties are reducible; however, at offshore sites they will usually be present, since performing in-situ tests at such sites is very expensive. Such tests may not be performed for every installation site, and sometimes the values of the parameters are even estimated or extrapolated from previous tests made at other locations. Moreover, disturbances in these few samples are very common. As strange as it may seem to experienced geotechnical engineers not involved in deepwater offshore activities, this is precisely what has happened in soil investigations in the Campos Basin.
Those are the reasons why epistemic uncertainties are always added to the natural variability: the use of limited data, of data arising from disturbed soil samples, and of data from locations other than the one at which the torpedo pile is to be installed. In summary, the fact that there may be no knowledge of the exact local soil variability at the site is the very reason why (as presented in the next section) probabilistic approaches may fail, and is the motivation for the use of the approach based on FA.

4. Approaches for Treatment of Uncertainties

4.1. Probabilistic Approach: The Monte Carlo Method

As mentioned before, noncognitive sources of uncertainty involve parameters that can be treated as random variables, to which a probabilistic distribution can be associated based on statistical data. In such cases, the probabilistic approach is traditionally recommended. Probabilistic approaches for the treatment of uncertainties can be divided into two main categories. The first comprises statistical methods that involve simulation, such as the classical Monte Carlo simulation method and its variants. The second comprises nonstatistical methods such as those based on perturbation techniques. For instance, the stochastic finite element method [26] falls into this latter category; it is based on expanding the random parameters around their mean values via Taylor series, in order to formulate linear relationships between some random characteristics of the response and the random input parameters. In the implementation of classical Monte Carlo simulation, N samples of the uncertain parameters are randomly generated using a given joint probability density function. The deterministic analysis procedure is employed for each sample of the simulation process [28], then obtaining N responses that are statistically treated to yield the first two statistical moments of the response (mean and standard deviation).
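The MC loop just described can be sketched as follows (the model and sampler callbacks are placeholders for the deterministic penetration analysis and the joint-pdf sample generator):

```python
import statistics

def monte_carlo(model, sampler, n=1000):
    """Classical Monte Carlo: draw n parameter samples, run the
    deterministic model for each, and return the first two statistical
    moments of the response.  sampler() yields one (suk, sti) pair and
    model(suk, sti) returns, e.g., the final penetration; both are
    placeholders for the actual penetration analysis."""
    responses = [model(*sampler()) for _ in range(n)]
    return statistics.mean(responses), statistics.stdev(responses)
```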
The MC method is completely general, for linear or nonlinear analyses. However, the accuracy of the statistical response is in general only adequate when the number of samples N is sufficiently large; therefore, the method is usually considered too expensive in terms of computing time. This fact has motivated studies on variants of the classical method, involving for instance variance reduction techniques and the Neumann expansion [29]. Due to its robustness and ability to effectively treat the noncognitive, random uncertainties, the classical MC simulation method has been used to calibrate and validate all other probabilistic techniques. The studies presented in this paper will also employ this method as a benchmark against which to compare the performance of the approach based on fuzzy arithmetic, described in Section 4.2, in the representation of the random uncertainties.

4.2. Fuzzy Arithmetic (FA) Approach

It is important to recall that the cognitive sources of uncertainty are related not to chance, but rather to imprecise or vague information, involving subjectivity and/or dependence on expert judgment. The axioms of probability and statistics are not adequate to deal with such types of uncertainty, which can be more effectively treated by "possibilistic" approaches employing, for instance, the theory of fuzzy sets. The theory of fuzzy sets was introduced by Zadeh in [30] to define classes of objects with a continuum of grades of membership in the interval [0, 1]. A fuzzy set has vague limits, allowing graded transitions from one class to another, instead of the exact limits characteristic of ordinary or crisp sets. In classical Boolean algebra, the notion of true and false values is limited to 1 or 0. In fuzzy logic, values that are "more or less" true or false can be treated, defined by real numbers that vary continuously from 0 to 1.
The treatment of uncertainties that derive from imprecise information is then possible, avoiding the use of random information. Therefore, complex systems that would be hard to model with the theory of conventional sets can be easily modeled by fuzzy sets. Fuzzy set theory allows the representation of imprecise and uncertain measures as fuzzy numbers, defined as: where is the numeric support of the fuzzy number A, and is the membership function (MF). Fuzzy numbers are completely characterized by their MFs, which are built from the knowledge of an expert, who can assign "low," "probable," or "high" values for the desired parameters. Based on this subjective information, MFs can be constructed with either linear or nonlinear shapes. The shapes most usually employed in engineering problems are triangular, trapezoidal, and sinusoidal; the choice depends on the type of application and also follows the assessment of the expert. In this work, triangular fuzzy MFs are used, defined by estimating three values [31]:

(i) a most reliable value, m, to which is attributed a membership degree equal to 1;
(ii) an inferior value, a, that will most certainly be exceeded, to which is attributed a membership degree equal to 0;
(iii) a superior value, b, that will most certainly not be exceeded, to which is also attributed a membership degree equal to 0.

The membership function can then be defined as zero outside the interval of possible values; it is taken as linear within this range, increasing from a to m and decreasing from m to b. This function is triangular, not necessarily symmetric, and can be defined as a parameterized piecewise linear function, mu_A(x) = (x - a)/(m - a) for a <= x <= m, mu_A(x) = (b - x)/(b - m) for m <= x <= b, and mu_A(x) = 0 otherwise, where a and b are, respectively, the lower and upper bounds, and m is the dominant value, as illustrated in Figure 6. Fuzzy numbers can also be defined by L (left) and R (right) MFs, resulting in the so-called L-R fuzzy numbers.
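The triangular membership function just defined, and the linearized L-R operations described in the next paragraph, can be sketched as follows. Since (4.4)-(4.7) are not reproduced above, the arithmetic below uses the standard linearized L-R formulas for positive fuzzy numbers with small spreads, consistent with the description; the class and function names are arbitrary:

```python
from dataclasses import dataclass

def triangular_mf(x, a, m, b):
    """Triangular membership function with lower bound a, dominant
    value m (membership degree 1), and upper bound b: zero outside
    [a, b], rising linearly from a to m and falling from m to b."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)

@dataclass
class TFN:
    """Triangular fuzzy number in L-R form: dominant value m with left
    and right spreads alpha and beta (support [m - alpha, m + beta])."""
    m: float
    alpha: float
    beta: float

    def __add__(self, other):
        # addition is exact and preserves the triangular shape
        return TFN(self.m + other.m, self.alpha + other.alpha,
                   self.beta + other.beta)

    def __sub__(self, other):
        # subtraction swaps the spreads of the second operand
        return TFN(self.m - other.m, self.alpha + other.beta,
                   self.beta + other.alpha)

    def __mul__(self, other):
        # linearized product, valid for small spreads and positive m
        return TFN(self.m * other.m,
                   self.m * other.alpha + other.m * self.alpha,
                   self.m * other.beta + other.m * self.beta)

    def __truediv__(self, other):
        # linearized quotient, valid for small spreads and positive m
        return TFN(self.m / other.m,
                   (self.m * other.beta + other.m * self.alpha) / other.m**2,
                   (self.m * other.alpha + other.m * self.beta) / other.m**2)

    def scale(self, c):
        # multiplication by a positive crisp scalar c
        return TFN(c * self.m, c * self.alpha, c * self.beta)
```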
In this context, a two-parameter modification of an L-type MF applies to all , whereas the R-MF defines A for , thus yielding Therefore, the fuzzy number can also be identified by the notation , where and are the spreads of the number, which represent its uncertainty [32]. Fuzzy arithmetic (FA) operations, involving fuzzy numbers, can be used to propagate fuzziness from inputs to outputs. General operations can be deduced from the extension principle, which is used to transform fuzzy sets via functions [32], and plays a fundamental role in translating set-based concepts into their fuzzy-set counterparts. However, simplified formulae can be obtained considering the L-R formulation of fuzzy numbers and . The standard arithmetic operations are computed as follows. The addition of triangular fuzzy numbers results in another triangular fuzzy number; both addition and subtraction preserve the linearity of the numbers. These operations are expressed as The multiplication of two fuzzy numbers produces a quadratic number. However, a linear approximation can be assumed when the spreads α and β are small in comparison with the modal or dominant values m. Therefore, this operation can be approximated as The multiplication of a fuzzy number by a scalar a is defined as The division between two fuzzy numbers is computed as To apply FA in a given engineering problem, the uncertainty in each variable is modeled as a triangular fuzzy number; moreover, all operations involving these variables have their expressions replaced by the corresponding FA expressions, as shown above in operations (4.4) to (4.7).

5. Implementation and Case Studies

Before proceeding with the study of the approaches described above, this section will begin with deterministic studies to assess the sensitivity of the response of the penetration model to the variation of the selected soil parameters.
Later, in order to reach the goal of incorporating the formal treatment of uncertainties into the analysis of the penetration of torpedo piles, this section proceeds by presenting the application of the probabilistic approach based on the Monte Carlo method, followed by the implementation and application of the approach based on FA. Recalling that uncertainties related to the penetration model involve a combination of both noncognitive (random, natural variability of the soil parameters) and cognitive (epistemic, due to incomplete or imprecise information) sources, it will be seen that, while the MC method can effectively deal only with the random uncertainties, the implementation of the fuzzy approach presented here can represent all sources of uncertainty.

5.1. Deterministic Sensitivity Studies

In order to assess the sensitivity of the torpedo pile penetration to the uncertainty in the selected soil parameters, a parametric study is performed considering the same problem described in Tables 1 and 2. The penetration model is applied with deterministic, arbitrary variations of both uncertain parameters: the rate of increase with depth of the undrained shear strength (Su[k]) and the soil sensitivity (Sti). Recall that, according to Table 2, the fixed, "deterministic" values are kPa/m and . Initially, Su[k] and Sti are individually increased by 10, 20, and 30%. Then, their values are reduced, also by 10, 20, and 30%. The results of the analyses for the different values of the parameters are presented in the graphs of Figure 7, corresponding to analyses where they are increased and reduced. It should be noted that, since the drop height and the characteristics of the pile have not been changed, the behavior of the pile from the drop point until it reaches the seabed is the same as observed in Figure 4.
Therefore, the graphs of Figure 7 represent only the behavior of the pile as it penetrates the soil, beginning from the depth of 200m (which corresponds to the seabed) until it completes the penetration. A summary of the results of Figure 7 is presented in Table 3 and Figure 8, in terms of penetration values (displacement minus the drop height) for each variation of the parameters Su[k] and Sti. It can be verified that, as expected, reducing the undrained shear strength (and therefore the soil resistance) leads to an increase in the final depth values. On the other hand, decreasing the sensitivity values increases the soil resistance and, consequently, reduces the penetration of the pile.

5.2. Probabilistic Analysis Using the Monte Carlo Method

Statistical Treatment of the Soil Input Parameters

In the probabilistic analysis using the MC method, both uncertain parameters (the undrained shear strength increase rate Su[k] and the soil sensitivity Sti) are varied simultaneously. Their values are randomly simulated, following a statistical distribution and its associated mean and standard deviation, derived from a given set of soil data from laboratory and/or in situ tests (in this case, the data were acquired from many tests performed at different sites in a certain cluster of the Campos Basin). This is accomplished by performing a statistical treatment of the available data, representative of offshore fields in the Campos Basin. As a result, for each uncertain parameter, the mean (which in this case is the sample average) and standard deviation were estimated. These values are presented in Table 4. In order to determine an appropriate probability distribution function for the data, a normality verification is performed for each parameter. Figure 9 presents the results for Su[k] and Sti, respectively. It can be observed that, although the Sti data fit better than the Su[k] data, the normal pdf is not an ideal approximation for either.
Hence, other functions are fitted and verified, as presented in Figure 10. Observing this figure, it can be verified that the lognormal pdf provides a better fit for both sets of available data. Another advantage of this distribution is that it does not generate negative values for the soil parameters, which would have no physical meaning and could generate erroneous results. The number of simulations in an MC strategy is dictated by the convergence of the mean value of the considered parameter to the deterministic design value. In the present case, 1000 generations were needed to obtain satisfactory convergence. Figure 11 depicts the distribution of the 1000 randomly generated values of Su[k] and Sti, following the lognormal distribution. The probabilistic study then comprises a total of 1000 analyses with the penetration model, each taking a randomly generated pair of values for the soil parameters Su[k] and Sti, following the lognormal probability distribution with the expected values and standard deviations given in Table 4. The results of the 1000 analyses are then gathered for a statistical treatment, which represents the penetration value in terms of mean and standard deviation. These results are presented in Table 5. These values will be compared with the results obtained with the approach using FA, presented in Section 5.3. We recall that the mean value of 39.8m for the penetration cannot be directly compared to the "deterministic" value of 35.6m obtained in the previous section, since the fixed, "deterministic" values for the soil parameters were kPa/m and , and the mean probabilistic values gathered from the set of soil data considered are kPa/m and . The results are nevertheless consistent since, as could be observed in the results of the deterministic sensitivity studies summarized in Figure 8, lower values of Su[k] and higher values of Sti lead to higher penetration values.
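Drawing lognormal samples that match a prescribed mean and standard deviation (as in Table 4, whose values are not reproduced here) requires converting those moments into the parameters of the underlying normal distribution; a sketch using the standard moment-matching relations:

```python
import math
import random

def lognormal_params(mean, std):
    """Convert the mean and standard deviation of a lognormal variable
    into the (mu, sigma) parameters of the underlying normal distribution."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

def sample_lognormal(mean, std, n, rng=random):
    """Draw n lognormal samples with the given mean and standard
    deviation; samples are strictly positive, as required for Su_k and Sti."""
    mu, sigma = lognormal_params(mean, std)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]
```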
5.3. Fuzzy Arithmetic: Implementation and Application

In the computational implementation of the approach using FA, the uncertain variables Su[k] and Sti are represented as triangular fuzzy numbers. Therefore, the computational code is altered so that all operations performed with those parameters in the solution procedure (as described in Algorithm 1 and (2.2)–(2.6)) have the traditional arithmetic operators replaced by the fuzzy operators presented in (4.4) to (4.7). As mentioned in Section 4.2, the fuzzy operations of multiplication and division generate quadratic numbers; however, when their spreads are small, they can be approximated by linear ones. Therefore, although (4.5) to (4.7) are approximations for small spreads, they are adequate for this specific work, since the values of the final penetration (the dominant value and the lower and upper bounds) are more important than the shape of the membership function. Once these fuzzy operators are implemented in the computational code, it remains to determine the values that define the triangular membership functions representing Su[k] and Sti as fuzzy numbers. As illustrated in Figure 6, these values are the lower and upper bounds a and b, and the dominant value m; they can be derived by investigating the statistical distribution of the soil parameters, taking the lognormal distribution generated as described in the previous item. The lower and upper bounds a and b can be assumed to define a confidence interval of 75%, corresponding to one standard deviation below and above the mean. This criterion provides samples that have consistent values for the uncertain parameters (positive values, Sti greater than 1.0, etc.), and is illustrated in Figures 12 and 13 for the Su[k] and Sti distributions, respectively.
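The construction of the fuzzy numbers from the fitted lognormal distributions can be sketched as follows, taking the bounds one standard deviation below and above the mean, and the lognormal median as the dominant value (the choice of the median is discussed below); the function name is arbitrary:

```python
import math

def fuzzy_from_lognormal(mean, std):
    """Build the triangular fuzzy number of a soil parameter from its
    lognormal fit: bounds a and b one standard deviation below/above the
    mean, and the lognormal median exp(mu) as the dominant value m.
    Returned as an L-R triple (m, alpha, beta)."""
    a = mean - std
    b = mean + std
    # median of a lognormal distribution = exp(mu), mu by moment matching
    mu = math.log(mean) - 0.5 * math.log(1.0 + (std / mean) ** 2)
    m = math.exp(mu)
    return m, m - a, b - m  # dominant value and left/right spreads
```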
Regarding the "dominant" value m, the first choice could be to take the mean; however, since the most representative value of a sample with large dispersion is the median, the median was chosen as m for each parameter. The values thus obtained for a, b, and m, which define the membership functions for Su[k] and Sti, are presented in Table 6 and graphically represented in Figure 14. Finally, the evaluation of the uncertain response using this FA approach consists simply in performing one analysis with the penetration model. The uncertainties embedded in the fuzzy numbers that represent the parameters Su[k] and Sti are incorporated in the calculation of the terms F[T] and F[S] defined in (2.6), at the right-hand side of step b.1 of Algorithm 1, and therefore are updated and propagated at each time step of the solution procedure presented in that table. This fact points to the remarkable computational advantage of this approach, compared to the probabilistic MC method with conventional arithmetic, which required a total of 1000 analyses. The results of the fuzzy analyses, in terms of the lower, dominant, and upper bounds for the final penetration of the torpedo pile, are presented in Table 7.

5.4. Comparison of Results

This section compares the results of the analyses of the torpedo pile with the penetration model, considering both the MC and FA approaches. Before comparing the final pile penetration, Figure 15 presents the full behavior of the pile as it penetrates the soil, in terms of penetration versus velocity curves, beginning from the depth of 200m (which corresponds to the seabed) until it completes the penetration. Three curves are presented for each approach, corresponding to the "dominant" or "most probable" result and the lower and upper "bounds" of the response.
For the MC simulation, the "most probable" curve is obtained by taking the median values of penetration and velocity at each time step of the response; the lower and upper bounds are determined, respectively, by taking the mean value and subtracting or adding one standard deviation (similarly to the procedure applied to determine the bounds of the fuzzy input parameters). For the FA approach, the "dominant" curve is obtained by taking the "crisp" result, that is, the values corresponding to a degree of membership equal to one. The lower and upper bounds are determined by the support of the fuzzy set, defined by the values corresponding to a membership degree greater than zero. Table 8 summarizes and compares the results presented in Tables 5 and 7 for the MC and fuzzy approaches. Observing this table and also Figure 15, it can be seen that the "dominant" or "most probable" results for the final penetration are practically the same; the difference between the median value of the MC analyses and the dominant value of the FA analyses is insignificant. Regarding the dispersion of results, it should be recalled that the MC "lower" and "upper" results (defined by subtracting and adding one standard deviation to the mean value) cannot be directly compared to the spreads of the fuzzy results, where the lower and upper bounds define the interval within which a value can possibly represent the calculated penetration. In any case, it can be noted that the uncertainties in the soil parameters are quite significant and can have a decisive influence on the design of the torpedo pile. A better comparison of the final penetration can be made graphically, in terms of the probability distribution of the MC simulation and the membership function that characterizes the fuzzy number in the FA approach. Therefore, Figure 16 compares the results obtained with the MC and FA approaches, in terms of the probability distribution and the membership function.
For this example, while the assumed supports of the fuzzy input parameters Su[k] and Sti corresponded to a certainty interval of 75% of their lognormal distributions, the support of the fuzzy number that represents the final penetration corresponds to a certainty interval of around 80% of the MC distribution. Finally, the most remarkable comparison of the performance of the two methods can be stated in terms of the total CPU time required: while the MC probabilistic approach required 1000 analyses with the penetration model using the solution procedure of Algorithm 1, only one analysis was required for the approach employing FA.

6. Final Remarks and Conclusions

The torpedo pile has been acknowledged as a very promising alternative for anchoring mooring systems. It has recently been considered for use not only in mooring lines of floating production systems, but also for mobile offshore drilling units (MODUs) operating in deep and ultradeep waters. Therefore, oil exploitation companies are devoting intense research and design activities to delivering efficient mooring solutions using this concept. One of the main aspects concerning the design of foundation systems is the uncertainty involved in the determination of values for the soil parameters. For conventional onshore systems, this aspect has been tackled by performing a large number of tests with soil samples. However, at deepwater offshore sites the cost of performing such tests may be very high, if not prohibitive; therefore, tests may not be performed for every installation site, and sometimes results of tests made at other locations are used to estimate or extrapolate the values of the soil parameters. This fact can severely affect the effectiveness of the design and analysis of torpedo piles, leading to large discrepancies in the response of the torpedo pile, mainly in terms of the final depth reached by the pile.
Therefore, it is very important to develop and employ methodologies to properly assess the sensitivity of the response to the variation of these parameters, and to incorporate into the analysis method techniques for the formal treatment of the uncertainties. Classical probabilistic Monte Carlo simulation could be considered for this purpose, since it is a sound methodology to estimate the effect of random uncertainties. Nevertheless, in the problem described in this work there is a great amount of epistemic uncertainty in the model equations and parameters; therefore, MC simulation results provide only a rough estimation of the uncertainty. In addition, the application of MC simulation requires excessive computational cost, as has been confirmed in the case study considered in this work: more than 1000 simulations were needed to obtain the results. On the other hand, the computational efficiency of the fuzzy arithmetic approach is outstanding, around three orders of magnitude less. Therefore, the results of the application of the FA approach demonstrated its ability to provide low-cost approximations of the bounds of the uncertainties, and therefore to comprise an effective design tool for the practitioner.

Future Developments

In this study, only two soil parameters, the undrained shear strength and the soil sensitivity, were considered as uncertain. Extensions of the fuzzy methodology presented in this work could consider the treatment of other uncertain parameters, such as, for instance, the empirical maximum soil strain rate factor S[e], the empirical soil strain rate factor C[e], and the drag coefficient C[D] considered in the calculation of the soil drag force as the pile penetrates; this latter parameter can vary for different anchor or pile shapes. Also, this work did not consider the uncertainties associated with the penetration model itself.
These could also be considered, since it is a mainly empirical model and involves imprecision in its formulation. Finally, a promising approach for the design of offshore systems would be to incorporate the pile penetration model, associated with the fuzzy methodology, into a coupled finite-element-based, time-domain simulation program. In such an implementation, not only the isolated torpedo pile is considered, but also a full finite-element model of all components involved in the installation of the pile (i.e., the mooring line itself and the other lines and chains, illustrated in Figures 2 and 3). The result is a complete 3D model, also subjected to environmental loadings other than dead weight (such as marine current). In such a coupled model, there would be no need to "fudge" the drag coefficient C[D] to account for the presence of the mooring line and chain loop. Such a computational tool would therefore be an efficient aid for the design of mooring systems based on torpedo piles, and for the simulation of the procedures needed for the installation of such a complex offshore system.

The authors would like to acknowledge the help of Mr. Cláudio dos Santos Amaral and Dr. Álvaro Maia da Costa, experts in geotechnical engineering from CENPES-Petrobras (Research and Development Center of the Brazilian state oil company), for their invaluable collaboration regarding information and data essential for the completion of this work.
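The cost contrast between the two approaches can be sketched on a toy problem. The model below is a purely hypothetical stand-in for the penetration model (monotone in both soil parameters) and the parameter ranges are made up; the point is only that a Monte Carlo sweep needs many model evaluations, while an interval (alpha-cut) evaluation of a monotone model needs only the corner combinations:

```python
import random

# Toy stand-in for the penetration model: final depth falls with the
# undrained shear strength Su and rises with the soil sensitivity St.
# Purely illustrative; this is NOT the model from the paper.
def depth(su, st):
    return 100.0 / (su * (1.0 + 1.0 / st))

random.seed(0)

# Monte Carlo: ~1000 model evaluations over sampled soil parameters.
mc = [depth(random.uniform(8.0, 12.0), random.uniform(2.0, 4.0))
      for _ in range(1000)]
mc_lo, mc_hi = min(mc), max(mc)

# Interval evaluation at the support level: because the toy model is
# monotone in each input, the 4 corner combinations bound the output,
# i.e., 4 model evaluations instead of 1000.
corners = [depth(su, st) for su in (8.0, 12.0) for st in (2.0, 4.0)]
iv_lo, iv_hi = min(corners), max(corners)

assert iv_lo <= mc_lo and mc_hi <= iv_hi  # interval bounds enclose the MC spread
```

With a monotone model the interval bounds are guaranteed to enclose every Monte Carlo sample, which is the source of the orders-of-magnitude saving noted in the text.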
1 Introduction

Next: 2 The axiomatic approach Up: Axiomatic and Coordinate Geometry Previous: Axiomatic and Coordinate Geometry

At some point between high school and college we first make the transition between Euclidean (or synthetic) geometry and co-ordinate (or analytic) geometry. Later, during graduate studies, we are introduced to differential geometry of many dimensions. The justification given in the first instance is that coordinates are a natural outcome of the axioms of Euclidean geometry; and in the second case, because Riemannian geometry is much more general than axiomatic non-Euclidean geometry. In this expository account we examine these two justifications.

First of all, due to the existence of non-Euclidean geometries, one should take as the starting point not the usual axioms of Euclidean geometry but the ``local'' axioms proposed by Veblen [5], Hilbert [4] and others. These are satisfied by a small convex region within any of the axiomatic geometries--Euclidean or not. We will then follow (in Section 2) the exposition of Coxeter (see [2]) to show that any such geometry can be embedded within Euclidean space (of dimension 3). Thus there is a choice of coordinates for any such geometry--a justification for the first step above.

Now it is clear that there are many possible choices of coordinates. However, from the various physical and other applications it becomes clear that we must restrict our attention to those changes of coordinates which are differentiable with respect to one another. The search is then on for ``differential covariants'' or quantities that change in some systematic fashion with such a change of coordinates. An important such invariant is the Riemannian curvature and its other face, the sectional curvature (see Section 3). We now examine the geometry constructed by axiomatic means to see what the sectional curvatures of this geometry can be.
A combination of certain results of Schur, Cartan and Hadamard shows us that indeed we only obtain the ``classical'' geometries--flat Euclidean space, hyperbolic space (of Lobachevsky and Bolyai) and elliptic space (of Poncelet and Riemann)--the geometries of constant sectional curvature. Thus Riemannian geometry does lead to more general non-Euclidean geometries than those that can be constructed by axiomatic means. This gives a justification for the second step away from synthetic geometry. In a final summarising section we suggest some additional reading and also point out some other interesting ways of constructing geometries that have not been covered here.

Kapil Hari Paranjape 2002-11-21
simple integration (posted March 26th 2009)

Question: The gradient of a curve is dy/dx = kx, where k is a constant. y = 6x - 7 and y = mx + c are the equations of the tangent and the normal to the curve, respectively. Find the values of m and c. (m is solved easily, but I couldn't work out c. Given answer: c = -5/6.) Any help would be much appreciated!

Reply: If dy/dx = kx, then $y = \frac{k}{2}x^2 + C$. If y = 6x - 7 is tangent to that, what is C and where is the line tangent to the curve? What must c be in order that y = mx + c pass through that point?
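The reply stops at a hint; a sketch of the remaining steps follows. The value k = 6 is an assumption here (it is the value consistent with the quoted answer, presumably fixed in an earlier part of the exercise):

```latex
% At the point of tangency x_0 the curve's slope equals the tangent's:
%   y'(x_0) = k x_0 = 6.
% Assuming k = 6, this gives x_0 = 1 and y_0 = 6(1) - 7 = -1.
% The normal is perpendicular to the tangent, so
\[
  m = -\frac{1}{6},
\]
% and since the normal passes through the point of tangency (1, -1),
\[
  c = y_0 - m x_0 = -1 - \left(-\frac{1}{6}\right)(1) = -\frac{5}{6},
\]
% which matches the quoted answer.
```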
1. Describe how mutations lead to genetic variations.
2. Which appears to be more dangerous: the BRCA1 or BRCA2 mutation?
3. Analyze a woman's risk of dying of cancer if she carries a mutated BRCA1 gene.
4. How do heredity and inheritance relate to the data presented in these charts?
5. What data would you need to see in order to draw conclusions about the effectiveness of preventive surgeries?
6. What does the age at diagnosis tell you about the mutation?
7. Explain how breast-cancer genes are still present in the population, despite cancer-related surgeries and deaths.
NAG Library Chapter Introduction

f11 – Large Scale Linear Systems

1 Scope of the Chapter

This chapter provides functions for the solution of large sparse systems of simultaneous linear equations. These include iterative methods for real nonsymmetric and symmetric linear systems and direct methods for general real linear systems. Further direct methods are currently available in Chapters f01 and f04.

2 Background to the Problems

This section is only a brief introduction to the solution of sparse linear systems. For a more detailed discussion see, for example, Duff et al. (1986) and Demmel et al. (1999) for direct methods, or Barrett et al. (1994) for iterative methods.

2.1 Sparse Matrices and Their Storage

A matrix $A$ may be described as sparse if the number of zero elements is sufficiently large that it is worthwhile using algorithms which avoid computations involving zero elements. If $A$ is sparse, and the chosen algorithm requires the matrix coefficients to be stored, a significant saving in storage can often be made by storing only the nonzero elements. A number of different formats may be used to represent sparse matrices economically. These differ according to the amount of storage required, the amount of indirect addressing required for fundamental operations such as matrix-vector products, and their suitability for vector and/or parallel architectures. For a survey of some of these storage formats see Barrett et al. (1994).

The Black Box functions are based on fixed storage formats. Three fixed storage formats for sparse matrices are currently used. These are known as coordinate storage (CS) format, symmetric coordinate storage (SCS) format and compressed column storage (CCS) format.

2.1.1 Co-ordinate storage (CS) format

This storage format represents a sparse matrix $A$, with nnz nonzero elements, in terms of three one-dimensional arrays – a double array a and two Integer arrays irow and icol.
These arrays are all of dimension at least nnz. The array a contains the nonzero elements themselves, while irow and icol store the corresponding row and column indices respectively. For example, the matrix

$A = \begin{pmatrix} 1 & 2 & -1 & -1 & -3 \\ 0 & -1 & 0 & 0 & -4 \\ 3 & 0 & 0 & 0 & 2 \\ 2 & 0 & 4 & 1 & 1 \\ -2 & 0 & 0 & 0 & 1 \end{pmatrix}$

might be represented in the arrays
• $a=1,2,-1,-1,-3,-1,-4,3,2,2,4,1,1,-2,1$
• $irow=1,1,1,1,1,2,2,3,3,4,4,4,4,5,5$
• $icol=1,2,3,4,5,2,5,1,5,1,3,4,5,1,5$.

The general format specifies no ordering of the array elements, but some functions may impose a specific ordering. Note that:
(i) the nonzero elements may be required to be ordered by increasing row index and by increasing column index within each row, as in the example above; nag_sparse_nsym_sort (f11zac) is a utility function provided to order the elements appropriately (see Section 3.2);
(ii) with this storage format it is possible to enter duplicate elements; these may be interpreted in various ways (e.g., raising an error, ignoring all but the first entry, all but the last, or summing the duplicates).

2.1.2 Symmetric coordinate storage (SCS) format

This storage format is suitable for symmetric and Hermitian matrices, and is identical to the CS format described in Section 2.1.1, except that only the lower triangular nonzero elements are stored. Thus, for example, the matrix

$A = \begin{pmatrix} 4 & 1 & 0 & 0 & -1 & 2 \\ 1 & 5 & 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 1 & 0 & -1 \\ 0 & 2 & 1 & 3 & 1 & 0 \\ -1 & 0 & 0 & 1 & 4 & 0 \\ 2 & 0 & -1 & 0 & 0 & 3 \end{pmatrix}$

might be represented in the arrays
• $a=4,1,5,2,2,1,3,-1,1,4,2,-1,3$
• $irow=1,2,2,3,4,4,4,5,5,5,6,6,6$
• $icol=1,1,2,3,2,3,4,1,4,5,1,3,6$.

2.1.3 Compressed column storage (CCS) format

This storage format also uses three one-dimensional arrays – a double array a and two Integer arrays irowix and icolzp. The arrays a and irowix are of dimension at least nnz, while icolzp is of dimension at least n+1. a contains the nonzero elements, going down the first column, then the second and so on. For example, the matrix in Section 2.1.1 above will be represented by
• $a=1,3,2,-2,2,-1,-1,4,-1,1,-3,-4,2,1,1$.
The array irowix records the row index for each entry in a, so the same matrix will have
• $irowix=1,3,4,5,1,2,1,4,1,4,1,2,3,4,5$.
The array icolzp records the index into a at which each new column starts; the last entry of icolzp is equal to nnz+1. An empty column (one filled with zeros, that is) is signalled by an index that is the same as that of the next non-empty column, or nnz+1 if all subsequent columns are empty. The above example corresponds to
• $icolzp=1,5,7,9,11,16$.
The example in Section 2.1.2 above will be represented by
• $a=4,1,-1,2,1,5,2,2,1,-1,2,1,3,1,-1,1,4,2,-1,3$
• $irowix=1,2,5,6,1,2,4,3,4,6,2,3,4,5,1,4,5,1,3,6$
• $icolzp=1,5,8,11,15,18,21$

2.2 Direct Methods

Direct methods for the solution of the linear algebraic system $Ax=b$ aim to determine the solution vector $x$ in a fixed number of arithmetic operations, which is determined a priori by the number of unknowns. For example, an $LU$ factorization of $A$ followed by forward and backward substitution is a direct method for the solution of $Ax=b$. If the matrix $A$ is sparse it is possible to design methods which exploit the sparsity pattern and are therefore much more computationally efficient than the algorithms in Chapter f07, which in general take no account of sparsity. However, if the matrix is very large and sparse, then iterative methods, with an appropriate preconditioner (see Section 2.3), may be more efficient still.

This chapter provides a direct factorization method for sparse real systems. This method is based on special coding for supernodes, broadly defined as groups of consecutive columns with the same nonzero structure, which enables use of dense BLAS kernels. The algorithms contained here come from the SuperLU software suite (see Demmel et al. (1999)). An important requirement of sparse factorization is keeping the factors as sparse as possible. It is well known that certain column orderings can produce much sparser factorizations than the normal left-to-right ordering.
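The effect of ordering on the sparsity of the factors can be seen on a tiny "arrow" matrix, whose dense row and column fill the whole factor if eliminated first but cause no fill at all if ordered last. This is a pure-Python illustration of the general phenomenon, not the COLAMD algorithm or the NAG implementation:

```python
# Count nonzeros in the combined L\U factor from a dense-array LU without
# pivoting (the test matrices below are symmetric positive definite, so
# all pivots stay positive and pivoting is not needed here).
def lu_nonzeros(A, tol=1e-12):
    n = len(A)
    F = [row[:] for row in A]          # overwritten with L (below diag) and U
    for k in range(n):
        for i in range(k + 1, n):
            m = F[i][k] / F[k][k]
            F[i][k] = m
            for j in range(k + 1, n):
                F[i][j] -= m * F[k][j]
    return sum(abs(v) > tol for row in F for v in row)

n = 6
# Arrow matrix: dense first row/column, diagonal elsewhere.
arrow = [[4.0 if i == j else (1.0 if 0 in (i, j) else 0.0)
          for j in range(n)] for i in range(n)]
# The same matrix with the ordering reversed: dense row/column come last.
reversed_arrow = [[arrow[n - 1 - i][n - 1 - j] for j in range(n)]
                  for i in range(n)]

dense_first = lu_nonzeros(arrow)          # eliminating the dense column first
dense_last = lu_nonzeros(reversed_arrow)  # eliminating it last creates no fill
assert dense_first > dense_last
```

Here the natural ordering fills the factor completely (36 nonzeros), while the reversed ordering preserves the original pattern (16 nonzeros), which is exactly why a good column permutation is computed before the factorization.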
It is well worth the effort, then, to find such column orderings, since they reduce both the storage requirements of the factors, the time taken to compute them, and the time taken to solve the linear system. The row reorderings demanded by partial pivoting, in order to keep the factorization stable, can further complicate the choice of the column ordering, but quite good and fast algorithms have been developed to make possible a fairly reliable computation of an appropriate column ordering for any sparsity pattern. We provide one such algorithm (known in the literature as COLAMD) through one function in the suite.

Similarly to the case for dense matrices, functions are provided to compute the factorization with partial row pivoting for numerical stability, solve by performing the forward and backward substitutions for multiple right-hand side vectors, refine the solution, minimize the backward error and estimate the forward error of the solutions, compute norms, estimate condition numbers and perform diagnostics of the factorization. For more details see Section 3.4.

Further direct methods may be found in Chapters f01 and f04.

2.3 Iterative Methods

In contrast to the direct methods discussed in Section 2.2, iterative methods for the solution of $Ax=b$ approach the solution through a sequence of approximations until some user-specified termination criterion is met or until some predefined maximum number of iterations has been reached. The number of iterations required for convergence is not generally known in advance, as it depends on the accuracy required, and on the matrix $A$ – its sparsity pattern, conditioning and eigenvalue spectrum. Faster convergence can often be achieved using a preconditioner (see Golub and Van Loan (1996) and Barrett et al. (1994)). A preconditioner maps the original system of equations onto a different system which hopefully exhibits better convergence characteristics. For example, the condition number of the preconditioned matrix may be better than that of $A$, or it may have eigenvalues of greater multiplicity.
An unsuitable preconditioner, or no preconditioning at all, may result in a very slow rate of convergence or in a lack of convergence. However, preconditioning involves a trade-off between the reduction in the number of iterations required for convergence and the additional computational costs per iteration. Setting up a preconditioner may also involve non-negligible overheads. The application of preconditioners to real nonsymmetric and real symmetric systems of equations is further considered in Sections 2.4 and 2.5.

2.4 Iterative Methods for Real Nonsymmetric Linear Systems

Many of the most effective iterative methods for the solution of $Ax=b$ lie in the class of non-stationary Krylov subspace methods (see Barrett et al. (1994)). For real nonsymmetric matrices this class includes the restarted generalized minimum residual method (RGMRES), the conjugate gradient squared method (CGS), the stabilized bi-conjugate gradient method (Bi-CGSTAB(ℓ)) and the transpose-free quasi-minimal residual method (TFQMR). Here we just give a brief overview of these algorithms as implemented in this chapter.

RGMRES is based on the Arnoldi method, which explicitly generates an orthogonal basis for the Krylov subspace $\mathrm{span}\{A^k r_0\}$, $k=0,1,2,\dots$, where $r_0$ is the initial residual. The solution is then expanded onto the orthogonal basis so as to minimize the residual norm. For real nonsymmetric matrices the generation of the basis requires a 'long' recurrence relation, resulting in prohibitive computational and storage costs. RGMRES limits these costs by restarting the Arnoldi process from the latest available residual every $m$ iterations. The value of $m$ is chosen in advance and is fixed throughout the computation. Unfortunately, an optimum value of $m$ cannot easily be predicted.

CGS is a development of the bi-conjugate gradient method where the nonsymmetric Lanczos method is applied to reduce the coefficient matrix to tridiagonal form: two bi-orthogonal sequences of vectors are generated starting from the initial residual $r_0$ and from the shadow residual $\hat r_0$ corresponding to the arbitrary problem $A^H \hat x = \hat b$, where $\hat b$ is chosen so that $r_0 = \hat r_0$. In the course of the iteration, the residual and shadow residual $r_i = P_i(A)\,r_0$ and $\hat r_i = P_i(A^H)\,\hat r_0$ are generated, where $P_i$ is a polynomial of order $i$, and bi-orthogonality is exploited by computing the vector product
$\rho_i = (\hat r_i, r_i) = (P_i(A^H)\,\hat r_0, P_i(A)\,r_0) = (\hat r_0, P_i^2(A)\,r_0)$. Applying the 'contraction' operator $P_i(A)$ twice, the iteration coefficients can still be recovered without advancing the solution of the shadow problem, which is of no interest. The CGS method often provides fast convergence; however, there is no reason why the contraction operator should also reduce the once reduced vector $P_i(A)\,r_0$: this can lead to a highly irregular convergence.

Bi-CGSTAB(ℓ) is similar to the CGS method. However, instead of generating the sequence $\{P_i^2(A)\,r_0\}$, it generates the sequence $\{Q_i(A)\,P_i(A)\,r_0\}$, where the $Q_i$ are polynomials chosen to minimize the residual after the application of the contraction operator $P_i(A)$. Two main steps can be identified for each iteration: an OR (Orthogonal Residuals) step, where a basis of order ℓ is generated by a Bi-CG iteration, and an MR (Minimum Residuals) step, where the residual is minimized over the basis generated, by a method similar to GMRES. For ℓ = 1, the method corresponds to the Bi-CGSTAB method of Van der Vorst (1989). However, as ℓ increases, numerical instabilities may arise.

The transpose-free quasi-minimal residual method (TFQMR) (see Freund and Nachtigal (1991) and Freund (1993)) is conceptually derived from the CGS method. The residual is minimized over the space of the residual vectors generated by the CGS iterations under the simplifying assumption that residuals are almost orthogonal. In practice this is not the case, but theoretical analysis has proved the validity of the method. This has the effect of remedying the rather irregular convergence behaviour, with wild oscillations in the residual norm, that can degrade the numerical performance and robustness of the CGS method.
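The CGS recurrences sketched above can be written down in a few lines. The following is an illustrative dense-vector transcription of the standard CGS iteration (Sonneveld's method), not the NAG implementation, and the 3-by-3 test system is made up:

```python
# Minimal conjugate gradient squared (CGS) iteration on dense lists.
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def axpy(a, x, y):                       # returns a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def cgs(A, b, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                             # r0 = b - A*0
    rt = r[:]                            # arbitrary shadow residual, rt = r0
    rho_old, q, p = 1.0, [0.0] * n, [0.0] * n
    for _ in range(maxit):
        rho = dot(rt, r)                 # (rt, r) = (rt, P_i^2(A) r0)
        beta = rho / rho_old
        u = axpy(beta, q, r)             # u = r + beta*q
        p = axpy(beta, axpy(beta, p, q), u)   # p = u + beta*(q + beta*p)
        v = matvec(A, p)
        alpha = rho / dot(rt, v)
        q = axpy(-alpha, v, u)           # q = u - alpha*v
        w = axpy(1.0, u, q)              # w = u + q
        x = axpy(alpha, w, x)
        r = axpy(-alpha, matvec(A, w), r)
        rho_old = rho
        if dot(r, r) ** 0.5 < tol:
            break
    return x

# A small, well-conditioned nonsymmetric test system (made up).
A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
xtrue = [1.0, -2.0, 3.0]
b = matvec(A, xtrue)
x = cgs(A, b)
assert all(abs(xi - ti) < 1e-6 for xi, ti in zip(x, xtrue))
```

Note that only products with $A$ are required, never with $A^H$: that is the "transpose-free" property shared by CGS and TFQMR.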
In general, the TFQMR method can be expected to converge at least as fast as the CGS method, in terms of the number of iterations, although each iteration involves a higher operation count. When the CGS method exhibits irregular convergence, the TFQMR method can produce much smoother, almost monotonic convergence curves. However, the close relationship between the CGS and TFQMR methods implies that the speed of convergence is similar for both methods. In some cases, the TFQMR method may converge faster than the CGS method.

Faster convergence can usually be achieved by using a preconditioner. A left preconditioner $M^{-1}$ can be used by the RGMRES, CGS and TFQMR methods, such that $M^{-1}A \approx I_n$, where $I_n$ is the identity matrix of order $n$; a right preconditioner $M^{-1}$ can be used by the Bi-CGSTAB(ℓ) method, such that $AM^{-1} \approx I_n$. These are formal definitions, used only in the design of the algorithms; in practice, only the means to compute the matrix-vector products $v = Au$ and $v = A^T u$ (the latter only being required when an estimate of $\|A\|_1$ or $\|A\|_\infty$ is computed internally), and to solve the preconditioning equations $Mv = u$, are required; that is, explicit information about $M$, or its inverse, is not required at any stage.

Preconditioning matrices $M$ are typically based on incomplete factorizations (see Meijerink and Van der Vorst (1981)), or on the approximate inverses occurring in stationary iterative methods (see Young (1971)). A common example is the incomplete $LU$ factorization $A = PLDUQ + R$, where $L$ is lower triangular with unit diagonal elements, $D$ is diagonal, $U$ is upper triangular with unit diagonals, $P$ and $Q$ are permutation matrices, and $R$ is a remainder matrix; the preconditioning matrix is then $M = PLDUQ$. A zero-fill incomplete $LU$ factorization is one for which the matrix of factors has the same pattern of nonzero entries as $A$. This is obtained by discarding any fill elements (nonzero elements of the factors arising during the factorization in locations where $A$ has zero elements). Allowing some of these fill elements to be kept rather than discarded generally increases the accuracy of the factorization at the expense of some loss of sparsity. For further details see Barrett et al.
(1994).

2.5 Iterative Methods for Real Symmetric Linear Systems

Three of the best known iterative methods applicable to real symmetric linear systems are the conjugate gradient (CG) method (see Hestenes and Stiefel (1952) and Golub and Van Loan (1996)) and Lanczos type methods based on SYMMLQ and MINRES (see Paige and Saunders (1975)).

For the CG method the matrix $A$ should ideally be positive definite. The application of CG to indefinite matrices may lead to failure, or to lack of convergence. The SYMMLQ and MINRES methods are suitable for both positive definite and indefinite symmetric matrices. They are more robust than CG, but less efficient when $A$ is positive definite.

The methods start from the residual $r_0 = b - Ax_0$, where $x_0$ is an initial estimate for the solution (often $x_0 = 0$), and generate an orthogonal basis for the Krylov subspace $\mathrm{span}\{A^k r_0\}$, for $k = 0, 1, \dots$, by means of three-term recurrence relations (see Golub and Van Loan (1996)). A sequence of symmetric tridiagonal matrices $\{T_k\}$ is also generated. Here and in the following, the index $k$ denotes the iteration count. The resulting symmetric tridiagonal systems of equations are usually more easily solved than the original problem. A sequence of solution iterates $\{x_k\}$ is thus generated, such that the sequence of the norms of the residuals $\{\|r_k\|\}$ converges to a required tolerance. Note that, in general, the convergence is not monotonic.

In exact arithmetic, after $n$ iterations, this process is equivalent to an orthogonal reduction of $A$ to symmetric tridiagonal form, $T_n = Q^T A Q$; the solution $x_n$ would thus achieve exact convergence. In finite-precision arithmetic, cancellation and round-off errors accumulate, causing loss of orthogonality. These methods must therefore be viewed as genuinely iterative methods, able to converge to a solution within a prescribed tolerance.

The orthogonal basis is not formed explicitly in either method.
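As an illustration of the short recurrences involved, here is a bare dense-vector conjugate gradient iteration. It is a textbook sketch, not the NAG implementation, and the small SPD test matrix is made up:

```python
# Conjugate gradient for a symmetric positive definite system A x = b,
# driven entirely by short recurrences on x, r and the search direction p.
def cg(A, b, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # r0 = b - A x0 with x0 = 0
    p = r[:]
    rr = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(aij * pj for aij, pj in zip(row, p)) for row in A]
        alpha = rr / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new ** 0.5 < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]   # symmetric positive definite (made up)
b = [1.0, 2.0, 3.0]
x = cg(A, b)
resid = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]
assert max(abs(ri) for ri in resid) < 1e-8
```

In exact arithmetic the loop terminates in at most $n$ steps; in floating point it is run, as the text says, as a genuinely iterative method with a residual tolerance.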
The basic difference between the methods lies in the method of solution of the resulting symmetric tridiagonal systems of equations: the CG method is equivalent to carrying out an $LDL^T$ (Cholesky) factorization, whereas the Lanczos method (SYMMLQ) uses an $LQ$ factorization. The MINRES method, on the other hand, minimizes the residual in the 2-norm.

A preconditioner for these methods must be symmetric and positive definite, i.e., representable by $M = EE^T$, where $E$ is nonsingular, and such that $E^{-1}AE^{-T} \approx I_n$, where $I_n$ is the identity matrix of order $n$. These are formal definitions, used only in the design of the algorithms; in practice, only the means to compute the matrix-vector products $v = Au$ and to solve the preconditioning equations $Mv = u$ are required.

Preconditioning matrices $M$ are typically based on incomplete factorizations (see Meijerink and Van der Vorst (1977)), or on the approximate inverses occurring in stationary iterative methods (see Young (1971)). A common example is the incomplete Cholesky factorization $A = PLDL^TP^T + R$, where $P$ is a permutation matrix, $L$ is lower triangular with unit diagonal elements, $D$ is diagonal and $R$ is a remainder matrix; the preconditioning matrix is then $M = PLDL^TP^T$. A zero-fill incomplete Cholesky factorization is one for which the matrix of factors has the same pattern of nonzero entries as $A$. This is obtained by discarding any fill elements (nonzero elements of the factors arising during the factorization in locations where $A$ has zero elements). Allowing some of these fill elements to be kept rather than discarded generally increases the accuracy of the factorization at the expense of some loss of sparsity. For further details see Barrett et al. (1994).

3 Recommendations on Choice and Use of Available Functions

3.1 Types of Function Available

The direct method functions available in this chapter largely follow the LAPACK scheme in that four different functions separately handle the tasks of factorizing, solving, refining and condition number estimating.
See Section 3.4.

The iterative method functions available in this chapter divide essentially into two types: utility functions and Black Box functions. At present there are suites of basic functions for real symmetric and nonsymmetric systems, and for complex non-Hermitian systems. Utility functions perform such tasks as initializing the preconditioning matrix $M$, or computing matrix-vector products, for particular preconditioners and matrix storage formats. Black Box functions provide easy-to-use functions for particular preconditioners and sparse matrix storage formats.

3.2 Iterative Methods for Real Nonsymmetric Linear Systems

In general, it is not possible to recommend one of these methods (RGMRES, CGS, Bi-CGSTAB(ℓ) or TFQMR) in preference to another. RGMRES is popular, but requires the most storage, and can easily stagnate when the size of the orthogonal basis is too small, or the preconditioner is not good enough. CGS can be the fastest method, but the computed residuals can exhibit instability which may greatly affect the convergence and quality of the solution. Bi-CGSTAB(ℓ) seems robust and reliable, but it can be slower than the other methods. TFQMR can be viewed as a more robust variant of the CGS method: it shares the CGS method's speed but avoids the CGS fluctuations in the residual, which may give rise to instability. Some further discussion of the relative merits of these methods can be found in Barrett et al. (1994).

The utility functions provided for real nonsymmetric matrices use the coordinate storage (CS) format described in Section 2.1.1.

nag_sparse_nsym_fac (f11dac) computes a preconditioning matrix based on incomplete $LU$ factorization. The amount of fill-in occurring in the incomplete factorization can be controlled by specifying either the level of fill, or the drop tolerance. Partial or complete pivoting may optionally be employed, and the factorization can be modified to preserve row-sums.
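The zero-fill case (level of fill 0) can be sketched in a few lines. The function below runs Gaussian elimination on a dense array but discards every update outside the original sparsity pattern, which is the defining property of ILU(0); it is an illustration only, not the f11dac algorithm, and the 4-by-4 matrix is made up:

```python
# Zero-fill incomplete LU (ILU(0)) on a dense array with an explicit
# sparsity mask: updates are applied only where A already has a nonzero,
# so the combined factor L\U keeps A's pattern exactly.
def ilu0(A):
    n = len(A)
    F = [row[:] for row in A]                 # holds L (unit diag) and U
    nz = [[A[i][j] != 0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            if not nz[i][k]:
                continue
            F[i][k] /= F[k][k]
            for j in range(k + 1, n):
                if nz[i][j]:                  # discard fill outside the pattern
                    F[i][j] -= F[i][k] * F[k][j]
    return F

A = [[4.0, 1.0, 0.0, 1.0],
     [1.0, 5.0, 2.0, 0.0],
     [0.0, 2.0, 6.0, 1.0],
     [1.0, 0.0, 1.0, 4.0]]
F = ilu0(A)
# The factor has the same pattern of nonzero entries as A (zero fill).
assert all((F[i][j] == 0.0) == (A[i][j] == 0.0)
           for i in range(4) for j in range(4))
```

Allowing some levels of fill, or keeping discarded entries above a drop tolerance, interpolates between this sketch and a complete factorization, which is the trade-off the text describes.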
nag_sparse_nsym_sort (f11zac) orders the nonzero elements of a real sparse nonsymmetric matrix stored in general CS format.

The Black Box function nag_sparse_nsym_fac_sol (f11dcc) solves a real sparse nonsymmetric linear system, represented in CS format, using RGMRES, CGS, Bi-CGSTAB(ℓ) or TFQMR, with incomplete $LU$ preconditioning. nag_sparse_nsym_sol (f11dec) is similar, but has options for no preconditioning, Jacobi preconditioning or SSOR preconditioning.

3.3 Iterative Methods for Real Symmetric Linear Systems

The utility functions provided for real symmetric matrices use the symmetric coordinate storage (SCS) format described in Section 2.1.2.

nag_sparse_sym_chol_fac (f11jac) computes a preconditioning matrix based on incomplete Cholesky factorization. The amount of fill-in occurring in the incomplete factorization can be controlled by specifying either the level of fill, or the drop tolerance. Diagonal Markowitz pivoting may optionally be employed, and the factorization can be modified to preserve row-sums.

nag_sparse_sym_sort (f11zbc) orders the nonzero elements of a real sparse symmetric matrix stored in general SCS format.

The Black Box function nag_sparse_sym_chol_sol (f11jcc) solves a real sparse symmetric linear system, represented in SCS format, using a conjugate gradient or Lanczos method, with incomplete Cholesky preconditioning. nag_sparse_sym_sol (f11jec) is similar, but has options for no preconditioning, Jacobi preconditioning or SSOR preconditioning.

3.4 Direct Methods

The suite of functions nag_superlu_column_permutation (f11mdc), nag_superlu_lu_factorize (f11mec), nag_superlu_solve_lu (f11mfc), nag_superlu_condition_number_lu (f11mgc), nag_superlu_refine_lu (f11mhc), nag_superlu_matrix_product (f11mkc), nag_superlu_matrix_norm (f11mlc) and nag_superlu_diagnostic_lu (f11mmc) implement the COLAMD/SuperLU direct real sparse solver and associated utilities.
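The suite holds matrices in the compressed column storage (CCS) format of Section 2.1.3. A minimal sketch of how a CCS matrix is traversed, shown as a matrix-vector product over the chapter's own 5-by-5 example (illustrative Python, not the NAG C interface):

```python
# The 5-by-5 example matrix from Section 2.1 in CCS storage.
# Indices are one-based, matching the chapter's convention.
a      = [1, 3, 2, -2, 2, -1, -1, 4, -1, 1, -3, -4, 2, 1, 1]
irowix = [1, 3, 4, 5, 1, 2, 1, 4, 1, 4, 1, 2, 3, 4, 5]
icolzp = [1, 5, 7, 9, 11, 16]   # column j occupies entries icolzp[j-1]..icolzp[j]-1
n = 5

def ccs_matvec(a, irowix, icolzp, n, x):
    """Compute y = A @ x for a matrix held in CCS format."""
    y = [0.0] * n
    for j in range(n):                           # walk the columns
        for k in range(icolzp[j] - 1, icolzp[j + 1] - 1):
            y[irowix[k] - 1] += a[k] * x[j]
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = ccs_matvec(a, irowix, icolzp, n, x)
assert y == [-17.0, -22.0, 13.0, 23.0, 3.0]      # matches the dense product
```

Note how `icolzp` lets the product visit each column's entries contiguously; this column-wise layout is what supernodal kernels exploit.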
You are expected first to call nag_superlu_column_permutation (f11mdc) to compute a suitable column permutation for the subsequent factorization by nag_superlu_lu_factorize (f11mec). nag_superlu_solve_lu (f11mfc) then solves the system of equations. A solution can be further refined by nag_superlu_refine_lu (f11mhc), which also minimizes the backward error and estimates a bound for the forward error in the solution. Diagnostics are provided by nag_superlu_condition_number_lu (f11mgc), which computes an estimate of the condition number of the matrix using the factorization output by nag_superlu_lu_factorize (f11mec), and nag_superlu_diagnostic_lu (f11mmc), which computes the reciprocal pivot growth (a numerical stability measure) of the factorization. The two utility functions, nag_superlu_matrix_product (f11mkc), which computes matrix-matrix products in the particular storage scheme demanded by the suite, and nag_superlu_matrix_norm (f11mlc), which computes quantities relating to norms of a matrix in that particular storage scheme, complete the suite.

Some other functions specifically designed for direct solution of sparse linear systems can currently be found in Chapters f01 and f04. In particular, the following functions allow the direct solution of symmetric positive definite systems:

Variable band (skyline): nag_real_cholesky_skyline (f01mcc) and nag_real_cholesky_skyline_solve (f04mcc)

Functions for the solution of band and tridiagonal systems can be found in Chapter f04.

4 Decision Tree

Tree 1: Solvers

Do you have a real system and want to use a direct method?
  yes: f11mdc, f11mec and f11mfc
  no: Symmetric positive definite?
    yes: Incomplete Cholesky preconditioner?
      yes: f11jac and f11jcc
      no: f11jec
    no: Incomplete $LU$ preconditioner?
      yes: f11dac and f11dcc
      no: f11dec

5 Functionality Index

Apply iterative refinement to the solution and compute error estimates, after factorizing the matrix of coefficients,
  real sparse nonsymmetric matrix in CCS format: nag_superlu_refine_lu (f11mhc)
Basic functions for complex Hermitian linear systems,
  diagnostic function: nag_sparse_herm_basic_diagnostic (f11gtc)
  setup function: nag_sparse_herm_basic_setup (f11grc)
Basic functions for complex non-Hermitian linear systems,
  diagnostic function: nag_sparse_nherm_basic_diagnostic (f11btc)
  reverse communication RGMRES, CGS, Bi-CGSTAB(ℓ) or TFQMR solver function: nag_sparse_nherm_basic_solver (f11bsc)
  setup function: nag_sparse_nherm_basic_setup (f11brc)
Basic functions for real nonsymmetric linear systems,
  diagnostic function: nag_sparse_nsym_basic_diagnostic (f11bfc)
  reverse communication RGMRES, CGS, Bi-CGSTAB(ℓ) or TFQMR solver function: nag_sparse_nsym_basic_solver (f11bec)
  setup function: nag_sparse_nsym_basic_setup (f11bdc)
Basic functions for real symmetric linear systems,
  diagnostic function: nag_sparse_sym_basic_diagnostic (f11gfc)
  reverse communication CG or SYMMLQ solver: nag_sparse_sym_basic_solver (f11gec)
  setup function: nag_sparse_sym_basic_setup (f11gdc)
Basic routines for real sparse nonsymmetric linear systems
Matrix-matrix multiplier for real sparse nonsymmetric matrices in CCS format: nag_superlu_matrix_product (f11mkc)
Black Box functions for complex Hermitian linear systems,
  with incomplete Cholesky preconditioning: nag_sparse_herm_chol_sol (f11jqc)
  with no preconditioning, Jacobi or SSOR preconditioning: nag_sparse_herm_sol (f11jsc)
Black Box functions for complex non-Hermitian linear systems,
  RGMRES, CGS, Bi-CGSTAB(ℓ) or TFQMR solver with incomplete LU preconditioning: nag_sparse_nherm_fac_sol (f11dqc)
  with no preconditioning, Jacobi, or SSOR preconditioning: nag_sparse_nherm_sol (f11dsc)
Black Box functions for real nonsymmetric linear systems,
  RGMRES, CGS, Bi-CGSTAB(ℓ) or TFQMR solver with incomplete LU preconditioning:
nag_sparse_nsym_fac_sol (f11dcc) with no preconditioning, Jacobi, or SSOR preconditioning nag_sparse_nsym_sol (f11dec) Black Box functions for real symmetric linear systems, with incomplete Cholesky preconditioning nag_sparse_sym_chol_sol (f11jcc) with no preconditioning, Jacobi, or SSOR preconditioning nag_sparse_sym_sol (f11jec) Compute a norm or the element of largest absolute value, real sparse nonsymmetric matrix in CCS format nag_superlu_matrix_norm (f11mlc) Condition number estimation, after factorizing the matrix of coefficients, real sparse nonsymmetric matrix in CCS format nag_superlu_condition_number_lu (f11mgc) real sparse nonsymmetric matrix in CCS format nag_superlu_diagnostic_lu (f11mmc) real sparse nonsymmetric matrix in CCS format nag_superlu_lu_factorize (f11mec) real sparse nonsymmetric matrices in CCS format nag_superlu_column_permutation (f11mdc) matrix-vector multiplier for complex Hermitian matrices in SCS format nag_sparse_herm_matvec (f11xsc) reverse communication CG or SYMMLQ solver function nag_sparse_herm_basic_solver (f11gsc) Solution of simultaneous linear equations, after factorizing the matrix of coefficients, real sparse nonsymmetric matrix in CCS format nag_superlu_solve_lu (f11mfc) Utility function for complex Hermitian linear systems, incomplete Cholesky factorization nag_sparse_herm_chol_fac (f11jnc) solver for linear systems involving preconditioning matrix from nag_sparse_herm_chol_fac (f11jnc) nag_sparse_herm_precon_ichol_solve (f11jpc) solver for linear systems involving SSOR preconditioning matrix nag_sparse_herm_precon_ssor_solve (f11jrc) sort function for complex Hermitian matrices in SCS format nag_sparse_herm_sort (f11zpc) Utility function for complex non-Hermitian linear systems, incomplete LU factorization nag_sparse_nherm_fac (f11dnc) matrix-vector multiplier for complex non-Hermitian matrices in CS format nag_sparse_nherm_matvec (f11xnc) solver for linear systems involving iterated Jacobi method 
nag_sparse_nherm_jacobi (f11dxc) solver for linear systems involving preconditioning matrix from nag_sparse_nherm_fac (f11dnc) nag_sparse_nherm_precon_ilu_solve (f11dpc) solver for linear systems involving SSOR preconditioning matrix nag_sparse_nherm_precon_ssor_solve (f11drc) sort function for complex non-Hermitian matrices in CS format nag_sparse_nherm_sort (f11znc) Utility function for real nonsymmetric linear systems, incomplete LU factorization nag_sparse_nsym_fac (f11dac) matrix-vector multiplier for real nonsymmetric matrices in CS format nag_sparse_nsym_matvec (f11xac) solver for linear systems involving iterated Jacobi method nag_sparse_nsym_jacobi (f11dkc) solver for linear systems involving preconditioning matrix from nag_sparse_nsym_fac (f11dac) nag_sparse_nsym_precon_ilu_solve (f11dbc) solver for linear systems involving SSOR preconditioning matrix nag_sparse_nsym_precon_ssor_solve (f11ddc) sort function for real nonsymmetric matrices in CS format nag_sparse_nsym_sort (f11zac) Utility function for real symmetric linear systems, incomplete Cholesky factorization nag_sparse_sym_chol_fac (f11jac) matrix-vector multiplier for real symmetric matrices in SCS format nag_sparse_sym_matvec (f11xec) solver for linear systems involving preconditioning matrix from nag_sparse_sym_chol_fac (f11jac) nag_sparse_sym_precon_ichol_solve (f11jbc) solver for linear systems involving SSOR preconditioning matrix nag_sparse_sym_precon_ssor_solve (f11jdc) sort function for real symmetric matrices in SCS format nag_sparse_sym_sort (f11zbc) 6 Functions Withdrawn or Scheduled for Withdrawal 7 References Barrett R, Berry M, Chan T F, Demmel J, Donato J, Dongarra J, Eijkhout V, Pozo R, Romine C and Van der Vorst H (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods SIAM, Philadelphia Demmel J W, Eisenstat S C, Gilbert J R, Li X S and Li J W H (1999) A supernodal approach to sparse partial pivoting SIAM J. Matrix Anal. Appl. 
20 720–755 Duff I S, Erisman A M and Reid J K (1986) Direct Methods for Sparse Matrices Oxford University Press, London Freund R W (1993) A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems SIAM J. Sci. Comput. 14 470–482 Freund R W and Nachtigal N (1991) QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems Numer. Math. 60 315–339 Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore Hestenes M and Stiefel E (1952) Methods of conjugate gradients for solving linear systems J. Res. Nat. Bur. Stand. 49 409–436 Meijerink J and Van der Vorst H (1977) An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix Math. Comput. 31 148–162 Meijerink J and Van der Vorst H (1981) Guidelines for the usage of incomplete decompositions in solving sets of linear equations as they occur in practical problems J. Comput. Phys. 44 134–155 Paige C C and Saunders M A (1975) Solution of sparse indefinite systems of linear equations SIAM J. Numer. Anal. 12 617–629 Saad Y and Schultz M (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 7 856–869 Sleijpen G L G and Fokkema D R (1993) BiCGSTAB$ℓ$ for linear equations involving matrices with complex spectrum ETNA 1 11–32 Sonneveld P (1989) CGS, a fast Lanczos-type solver for nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 10 36–52 Van der Vorst H (1989) Bi-CGSTAB, a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 13 631–644 Young D (1971) Iterative Solution of Large Linear Systems Academic Press, New York
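The Hestenes and Stiefel (1952) conjugate gradient iteration cited in the references above is the basic method behind the real symmetric solvers in this chapter (f11gec and its Black Box wrappers). The following is a minimal, illustrative Python sketch, not NAG code; a plain dense matrix-vector product stands in for the sparse storage and preconditioning the library uses:

```python
def cg(matvec, b, x0=None, tol=1e-10, maxit=100):
    """Unpreconditioned conjugate gradients for A x = b,
    with A symmetric positive definite (Hestenes & Stiefel, 1952)."""
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(x))]   # residual r = b - A x
    p = list(r)                                     # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD test system: [[4, 1], [1, 3]] x = [1, 2]  has solution (1/11, 7/11)
A = [[4.0, 1.0], [1.0, 3.0]]
mv = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
x = cg(mv, [1.0, 2.0])
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system; the production routines add preconditioning (incomplete Cholesky, Jacobi, SSOR) precisely to cut that iteration count for large sparse problems.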
Calendar Magic Calendar Magic Calendar Magic is an easy-to-use program that is entertaining, informative, educational and of equal applicability in the home and in the office. Calendar Magic has been tested on Windows 95, 98, Me, XP, Vista (32-bit), Windows 7 (32-bit and 64-bit) and Windows 8 & 8.1 (64-bit), and has also been reported to run without problems on other versions of Windows. The main features of Calendar Magic are given below (a translation to Serbo-Croatian by Anja Skrba may be found here). • Full year and individual month Gregorian, Afghan, Armenian, Baha'i, Balinese Pawukon (full year only), Balinese Saka, Bangla, Chinese, Coptic, Egyptian, Ethiopic, French Revolutionary, Hebrew, Hindu Lunisolar (3 variants), Hindu Solar, Indian National, Islamic Arithmetical (8 variants), Javanese Pawukon/Pasaran, Julian, Revised Julian, Parsi Fasli, Parsi Kadmi, Parsi Shenshai, Persian (2 variants), Sikh Nanakshahi and Vietnamese calendars. A user option is provided to choose between displaying/printing calendars showing each week starting on a Monday (in line with the ISO 8601 international standard), on a Sunday for North American users, or on a Saturday for Middle East users. • Various types of planning calendars. • “Dual Calendars” – full year calendars in various calendar systems which show, not only the months and days for a year in any one of the calendar systems, but also the corresponding Gregorian • A month-by-month, side-by-side comparison of any two of the 26 calendar systems listed above. The display remains synchronised as you change day, month and year values in either calendar system being viewed. Again, users may choose between displaying each month with weeks starting on a Saturday, Sunday or Monday. • A "Calendar Collector" – how long will it take to collect all 14 possible Gregorian Calendars? 
• Date conversions among the 26 calendar systems listed above, plus conversions to Aztec Tonalpohualli, Aztec Xiuhpohualli, Balinese Pawukon, Thai solar, old Hindu Solar, old Hindu Lunisolar and Mayan date formats. Julian day value, day of week and day of year information is also displayed. For Gregorian dates, many other facts are displayed, such as modified Julian day value, Lilian day value and Rata Die day value, and year related information including Roman numeral form, Dominical Letter(s), Dionysian Period, Julian Period, Golden Number, Solar Number, Roman Indiction and Epact. Various special days are also recognised (e.g. Halloween), as are modern Olympic years, Commonwealth Games years, European Athletics Championship years, and World Athletic Championship • Conversion of British sovereign regnal dates to historical Julian (years beginning on Jan. 1) or Gregorian dates, as appropriate. • Conversion between ancient Greek Olympiad numbering and calendar years. • Lists of Western Christian festivals, Eastern Orthodox festivals, Hebrew festivals and Islamic festivals for any (Gregorian) year. In addition, Hindu festivals may be listed for any year in the range 2000 to 2043, Baha'i festivals from 1845 onwards, Balinese Hindu festivals from 1816 onwards, Buddhist and Chinese festivals from 1645 to 3000, and Sikh Nanakshahi festivals from 1999 • “Observed Days” for any year from 1990 for over 230 countries and dependencies worldwide. • “Date Detective” command button to tabulate the weekday on which a specified Gregorian date d/m occurs for each of the years in the specified range y1 to y2. • “In Which Months?” command button to list, over a range of years, the months in which a specified day of the month falls on a specified weekday. • The ability to create, display, update and delete reminders for events (birthdays, anniversaries, meetings etc.) for “this year”, “next year” or “every year”. 
When Calendar Magic is started up, both visual and audible warnings are given for imminent events (those occurring within the next seven days) for which reminders have been set. In addition, a calendar for any month, in this year or next year, may be displayed with day numbers highlighted in red for those days in the month for which reminders have been set. Left-clicking on any “red day” will cause the reminder(s) set for that day to be listed. Out-of-date reminders are also automatically purged by Calendar Magic and appended to a text file, purged.dat, for later reference, if needed. • A multi-sheet “Quick Notes” facility for holding miscellaneous plain text items. • An alarm clock facility for defining an alarm for a given time on a given date. A user may specify the duration of the alarm which may be repeated, after a specified “quiet” time, up to five more times. A separate “stopwatch” function is also provided. • A “countdown timer” for counting down, second by second, any specified time period to zero. • “What Time is it in?” to calculate the current time and date for a world-wide location and to identify Time Zone abbreviations. • “World Clocks” to display simultaneously the local times at any 12 world-wide locations. • Conversion between normal and French Revolutionary time. • “This is your life” information including the day of the week on which you were born, number of days you have lived, your Zodiac sign and the day of the week on which your next birthday falls. Your “Chinese age” and your date of birth in many other calendar systems are also displayed. • Continuously updated display of date, time and Julian day. • Number of days between any two dates in the Gregorian calendar (and number of working days). • Calculation of the date n days, weeks, months or years before or after a specified Gregorian date, where n is a whole number. • An analysis of the Gregorian 400-year cycle, after which the Gregorian calendar repeats itself. 
• Special Julian to Gregorian change-over calendars for Bulgaria, Czechoslovakia, Denmark, Estonia, France, Great Britain, Hungary, Ireland, Italy, Luxembourg, Norway, Poland, Portugal, Romania, Russia, Spain and Sweden. • Dates and times of equinoxes, solstices and Moon phases for any year from 1582 to 3000. • Solar and lunar eclipse data for any year up to 3000. • Sunrise and sunset information for any date up to the end of 2200 for 18000 locations across the world. • Moonrise and moonset information for any date up to the end of 2200 for these 18000 locations across the world. • Great circle distances between any two of these 18000 locations across the world. • Current local time and date in any of these 18000 locations, plus interpretation of time zone abbreviations. • A Unit Converter for converting among 1722 units of measurement in 83 different categories including length, area, volume, mass, temperature, time, velocity, energy, power, pressure, computer storage etc. • A Time Calculator for performing simple arithmetic on times. • A Geometry Calculator for evaluating key attributes (area, perimeter, volume, surface area etc.) of various 2D and 3D geometric objects. • A Prime Calculator for investigating various aspects of prime numbers. • A Factor Calculator for factorising numbers with up to 100 digits and for evaluating the HCFs and LCMs of lists of numbers. • A stack based Scientific Calculator with a visible stack. • An Expression Calculator for calculating the values of arithmetic expressions entered in normal (infix) form. • An Interval Arithmetic Calculator for performing arithmetic on approximate values specified as numeric intervals. • A Statistics Calculator for performing various statistical procedures. • A Fraction Calculator for evaluating exactly expressions containing integers and fractions. • A Continued Fractions Calculator for evaluating continued fractions and for converting arithmetic expressions of various types to continued fractions. 
The solution of Pell’s equation is also • An Egyptian Fraction Calculator for writing fractions x/y in the Egyptian form 1/a + 1/b + 1/c ... • A Big Numbers Calculator for performing arithmetic operations on very large numbers. • A Number Base Converter for converting numeric values between different number bases. • An implementation of a method for solving the Travelling Salesman problem. • A Financial Calculator for performing various financial calculations. • A Currency Converter. • Average speed based calculations. • A Fuel Consumption Calculator. • An Ovulation Calculator for predicting the dates of maximum fertility days. • A Pregnancy Calculator for calculating the due date of a pregnancy and other pregnancy related dates. • A Blood Alcohol Content (BAC) calculator. • A Body Mass Index (BMI) Calculator. • A Body Shape Index (ABSI) Calculator. • A Biorhythm Calculator. • A Paper Weight Converter for converting between metric paper weights and American paper basis weights. • A Magic Square Generator. • A Reaction Timer. • Colour customisation of screen backgrounds, non-button text and button backgrounds. • Support for printing any output displayed, and/or copying it to another program via the Windows clip-board using the usual Ctrl+A , Ctrl+C, Ctrl+P, Ctrl+X and Ctrl+V keyboard commands.
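Several of the features above ("Number of days between any two dates", the "This is your life" weekday lookup, the date n days before/after a given date) come down to standard Gregorian date arithmetic. The sketch below uses Python's standard library purely for illustration; it is not Calendar Magic's actual code:

```python
from datetime import date, timedelta

def days_between(d1, d2):
    """Number of days between two Gregorian dates (the date-difference feature)."""
    return abs((d2 - d1).days)

def shift(d, days=0, weeks=0):
    """Date n days/weeks before or after a given date (negative values go back)."""
    return d + timedelta(days=days, weeks=weeks)

# "This is your life": day of the week on which a given date of birth fell.
born = date(1970, 1, 1)                # 1 Jan 1970 was a Thursday
weekday = born.strftime("%A")

# Leap-year aware difference: 2000 is a leap year, so Jan 1 -> Mar 1 is 60 days.
gap = days_between(date(2000, 1, 1), date(2000, 3, 1))
```

The "working days" variant mentioned above would additionally skip weekends and a per-country holiday table, which is where the program's "Observed Days" data comes in.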
Can RNH Score 100 Points in a Season? Doug MacLean made quite the assertion on Sportsnet’s coverage of the Oilers game last night, saying that Ryan Nugent-Hopkins is guaranteed to get 100 points in a season in his career. At the time it struck me as an uninformed blurt of Pidgin English typical of MacLean, but I thought I’d do some research to find out how outlandish it actually is. So how rare are 100 point seasons? Well, since the 1994 lockout only 14 different centres have managed to post just 24 of them. Here’s a list of the 14 players: You can see that Sidney Crosby leads the pack with 4, but Malkin, Sakic and Thornton are all close behind with 3 each. Gretzky, Francis and Lemieux all managed the feat in 95-96 over the age of 30 (35,32,30 respectively), and so weren’t in their primes during my post-1994 timeframe — I will leave them out of most of my analysis, but it is fun to list them here. The average draft position of the 14 players who accomplished the feat is 6.2 Overall (excluding Gretzky’s immaculate conception onto the Oilers). In fact, only 3 of the 14 were chosen outside the top 4, and only one out of the first round (DOUG WEIGHT 95-96 FOR LIFE). So this feat is obviously the purview of rarefied prospects, which RNH just so happens to be as a #1 overall himself. Just for fun, if you weight the draft position by the number of times that player scored 100 points, the weighted average draft position is 5.5. For age of first season, I looked for the player’s first year in the NHL in which he played more than 60 games. Then I noted his point per game average in that first year, and then his point per game average in his second year. You can see that the average PPG for these players in their first years was 0.81 — RNH is again sitting pretty with a 0.84 in his first year. However, all but one (Forsberg, a freak of nature), substantially increased their production in their second years, resulting in an average PPG of 1.10. 
This translates into about a 90 point season over 82 games. This suggests that 100 point players all progress very quickly and begin to near the scoring rate required for 100 points at a young age. This is obvious when you look at a histogram of the frequency of 100 point seasons by age: This graph is heavily weighted towards the younger age brackets, with the majority of 100 point seasons being scored by players 25 years of age or younger. Again, I think this would fit RNH’s profile of opportunity here, as the Oilers will likely run into salary cap constraints in RNH’s age 23+ seasons and may need to jettison good players at that point, but RNH will be surrounded by good talent in his 20-23 yo seasons just entering their primes. Have another quick look at the list — is there a pattern of relying on wingers? Generally they were all on good teams with varying degrees of support, and I’d qualitatively say that a minority of 5 of these centres had to do it without great wingers (Crosby, Lindros, Malkin, Staal, Weight). RNH does have a potentially strong group of castmates to help supplement his totals, like the majority of these men. So what about first overalls — is there a particular advantage that they have in attaining 100 points in a season? I compiled a list of first overall centremen between 1988 (so they can still be a relatively in-prime 25 during the first NHL lockout) and Crosby’s 2005 draft and checked to see if they ever got a 100 point season: Out of 8 centremen who went first overall in this 18 year timeframe, 4 of them (or 50%) eventually went on to score 100 points in a season after 1994. The column labeled ‘Best Points’ shows the most points they ever got in a season after 1994 and then shows the age they accomplished that in. The average peak age is 24.7 (Daigle scored 51 twice 7 years apart), but this is skewed by Mike Modano’s anomalous 32 year old peak. 
Both Sundin and Modano had better years before 1994 (Sundin even had a 114 pt season pre-Leafs), but I cannot bring myself to allow them in this study as the early 90s were like the WWI of hockey history — old tactics combined with amazing new advances in technology, resulting in an offensive slaughter.

So what's the conclusion? I'd say giving 1st Overalls a 50% chance of getting a 100 point season in their careers is fair. RNH is certainly tracking ahead of busts like Stefan, and is ahead of the average career that resulted in a 100 point season down the line based on first year results. I think this year is absolutely key in answering this question. If RNH puts up, say, 1.0 or higher PPG this year, I think it's safe to think he could hit 100 within the 3 years after that. If he regresses below 0.84 PPG, I think it's safe to say he'll never get there. 100 points is one of the smallest clubs these days in the NHL, and the ones who gain membership are the absolute best of the best for at least one season. These kinds of players generally show this potential early with constant progression upwards into their prime years. We shall see.
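The pace arithmetic quoted in the post (a 1.10 PPG second-year average translating to about a 90 point season over 82 games) is easy to check, along with the pace a 100 point season actually requires:

```python
games = 82
required_ppg = 100 / games          # points per game needed for a 100 point season
second_year_avg = 1.10              # average second-year PPG from the table above
projected = second_year_avg * games # about 90 points, as stated in the post
```

So even the strong second-year average of the eventual 100-point scorers sits roughly 0.12 PPG short of a 100-point pace, which is why the jump usually comes in the age 20-25 prime seasons the histogram shows.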
[Scipy-tickets] [SciPy] #1231: use of uninitialized variable in SLSQP when n equality constraints
SciPy Trac scipy-tickets@scipy....
Mon Jul 12 17:51:07 CDT 2010

#1231: use of uninitialized variable in SLSQP when n equality constraints
Reporter: stevenj | Owner: somebody
Type: defect | Status: new
Priority: normal | Milestone: 0.9.0
Component: scipy.optimize | Version: 0.7.0
Keywords: |

I was adapting the SLSQP code for use in another library (NLopt, ab-initio.mit.edu/nlopt), and I noticed a bug that you might want to fix. In the case where the number of equality constraints equals the dimension of the problem, valgrind reported the use of an uninitialized variable in a conditional statement. I tracked the source of the uninitialized value down to line 813 of slsqp_optmz.f in SUBROUTINE lsei:

CALL dcopy_ (mg-mc,w(mc1),0,w(mc1),1)

I believe that this should be:

CALL dcopy_ (mg,w(mc1),0,w(mc1),1)

This initializes part of the work array w to 0. If the number of equality constraints differs from the number of dimensions (mc.NE.n in lsei), then this statement doesn't matter because w(mc1...mc1+mg-1) get overwritten anyway by the Lagrange multipliers when lsi is called a little later. But if mc.EQ.n, then the GOTO statement on line 815 jumps to the end, where w(mc1...mc1+mg-1) is used in a dot product. I've verified that, with the above fix, it correctly solves a test problem with various numbers of equality and/or inequality constraints, and valgrind no longer complains.

PS. I should also mention that, because of rounding errors, SLSQP will occasionally evaluate the objective/constraints slightly outside of the bounding box. I'm not sure if you care about this; my NLopt library guarantees that the bound constraints are strictly honored (unlike nonlinear constraints), so I had to tweak SLSQP in a couple of places to check that rounding errors don't push the solution out of bounds.
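To see why the mc.EQ.n branch is special: when the number of equality constraints equals the problem dimension (and the constraint matrix is nonsingular), the equality constraints alone determine x, so lsei can skip the inequality-constrained least-squares step (lsi) entirely — which is exactly the path on which the uninitialized part of w was read. A small pure-Python illustration of that square case (not the Fortran code, and only a sketch of the idea), solving C x = d by Gaussian elimination:

```python
def solve_square(C, d):
    """Solve C x = d for square, nonsingular C by Gaussian elimination with
    partial pivoting. In lsei terms this is the mc == n case, where the
    equality constraints C x = d fully determine the solution on their own."""
    n = len(d)
    # augmented matrix [C | d]
    M = [row[:] + [di] for row, di in zip(C, d)]
    for k in range(n):
        # partial pivoting: bring the largest remaining entry in column k up
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back substitution
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# two equality constraints in two unknowns: x + y = 1 and x - y = 0
x = solve_square([[1.0, 1.0], [1.0, -1.0]], [1.0, 0.0])
```

With mc < n there are leftover degrees of freedom, the inequality-constrained step runs, and the multipliers it writes into w(mc1...mc1+mg-1) mask the missing initialization — which is why the bug only surfaced in the square case.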
Ticket URL: <http://projects.scipy.org/scipy/ticket/1231>
Physics Forums - View Single Post - Analysis help - Continuous function that is differentiable at all points except c

It seems important that lim x -> c f' exists; this means f' is bounded on I. If I am thinking correctly, f' is uniformly continuous, since it has a continuous extension to I.
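The standard way to make this precise goes through the Mean Value Theorem rather than boundedness of $f'$. A sketch, under the usual assumptions of this kind of problem (that $f$ is continuous on $I$ and differentiable on $I \setminus \{c\}$):

```latex
% Claim: if f is continuous at c, differentiable on I \setminus \{c\},
% and L = \lim_{x \to c} f'(x) exists, then f'(c) exists and equals L.
%
% By the Mean Value Theorem applied to f on the interval between c and x:
\[
\frac{f(x) - f(c)}{x - c} = f'(\xi_x),
\qquad \xi_x \text{ strictly between } c \text{ and } x .
\]
% As x \to c we are forced to have \xi_x \to c, hence f'(\xi_x) \to L, so
\[
f'(c) \;=\; \lim_{x \to c} \frac{f(x) - f(c)}{x - c} \;=\; L .
\]
```

This shows $f'$ extends continuously to all of $I$; if $I$ is a closed bounded interval, that continuous extension is automatically uniformly continuous, which is the claim made in the post.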
When quoting this document, please refer to the following URN: urn:nbn:de:0030-drops-13808
URL: http://drops.dagstuhl.de/opus/volltexte/2008/1380/

Cussens, James
Model equivalence of PRISM programs

The problem of deciding the probability model equivalence of two PRISM programs is addressed. In the finite case this problem can be solved (albeit slowly) using techniques from algebraic statistics, specifically the computation of elimination ideals and Gröbner bases. A very brief introduction to algebraic statistics is given. Consideration is given to cases where shortcuts to proving/disproving model equivalence are available.

BibTeX - Entry

author = {James Cussens},
title = {Model equivalence of PRISM programs},
booktitle = {Probabilistic, Logical and Relational Learning - A Further Synthesis},
year = {2008},
editor = {Luc de Raedt and Thomas Dietterich and Lise Getoor and Kristian Kersting and Stephen H. Muggleton},
number = {07161},
series = {Dagstuhl Seminar Proceedings},
ISSN = {1862-4405},
publisher = {Internationales Begegnungs- und Forschungszentrum f{\"u}r Informatik (IBFI), Schloss Dagstuhl, Germany},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2008/1380},
annote = {Keywords: PRISM programs, model equivalence, model inclusion, algebraic statistics, algebraic geometry, ideals, varieties, Gr\"{o}bner bases, polynomials}

Keywords: PRISM programs, model equivalence, model inclusion, algebraic statistics, algebraic geometry, ideals, varieties, Gröbner bases, polynomials
Seminar: 07161 - Probabilistic, Logical and Relational Learning - A Further Synthesis
Issue Date: 2008
Date of publication: 06.03.2008
Minimal compactification

In their famous book, Faltings and Chai constructed, among other things, minimal and toroidal compactifications of the Siegel moduli space. The reason for the use of the word toroidal is clear, since toroidal geometry is used to define such compactifications, but it is not clear to me why the word minimal is used. So the question is: in which sense is the minimal compactification minimal?

2 Answers

Given the symmetric space $D$ of non-compact type for the real semi-simple Lie group $G$, there are $2^r-1$ Satake compactifications $\bar{D}$ of $D$ up to homeomorphism, where $r$ is the real rank of $G$. These compactifications correspond to the non-empty subsets of a set $S$ of simple roots, and as such they form a semi-lattice: if $S_1 \subset S_2$, then the identity of $D$ extends to a continuous mapping $\bar{D}_{S_2} \to \bar{D}_{S_1}$.

The Satake-Baily-Borel compactification is one of the minimal (in the semi-lattice sense) Satake compactifications in the case where $D$ is Hermitian, i.e., has a $G$-invariant complex structure; see this entry in the Encyclopedia of Mathematics.

According to [Faltings-Chai, Degeneration of Abelian Varieties, p. 136], the construction of the minimal compactification of $\mathcal{A}_g$ mimics the construction of the Satake-Baily-Borel compactification of a symmetric space, and this is why the word "minimal" is used.

I see. Thank you! – user32948 Apr 9 '13 at 13:00
Actually $A_g$ is a hermitian symmetric space, so the minimal compactification doesn't just mimic the construction of Satake-Baily-Borel: it is a special case of it. – Dan Petersen Apr 10 '13 at 18:37

There is another reason. If $X$ is a Shimura variety (e.g. the Siegel modular variety) and $X^*$ is its minimal (or Baily-Borel, or Baily-Borel-Satake) compactification, then it has the following property: for every other open embedding with dense image $X\rightarrow \overline{X}$ with $\overline{X}-X$ a divisor with normal crossings, there exists a unique map (of algebraic varieties) $\overline{X}\rightarrow X^*$ compatible with the embeddings of $X$ in $X^*$ and $\overline{X}$. Cf. the remark after corollary 3.16 of Milne's "Introduction to Shimura varieties" (http://jmilne.org/math/xnotes/svi.pdf).

(Note that $X^*-X$ itself is not a divisor with normal crossings, except in very particular cases like modular curves.)
Hackensack, NJ SAT Math Tutor

Find a Hackensack, NJ SAT Math Tutor

I'm exclusively an SAT/ACT tutor, and I can help you maximize your score! I tutor all sections of the SAT and ACT, and I provide value for parents by consistently outperforming the highest priced tutors. I draw on both years of experience as a course instructor for a premier test prep company and a strong background in cognitive psychology to personalize my approach for each of my
14 Subjects: including SAT math, writing, statistics, GRE

...I have also assisted them in preparing for such exams as the NY State Regents exam and the SATs. I have tutored dozens of students to help them improve in the math section of the SAT exam. To accomplish this, I review test taking strategies, as well as the basic math rules that help them solve the required problems.
8 Subjects: including SAT math, geometry, algebra 2, ACT Math

I have an easy-going style which seems to work well with students who are anxious about science and math. I have been a chemistry teacher for 17 years at very well-regarded Ridgewood High School. I have tutored all levels of chemistry and physics, including AP level.
7 Subjects: including SAT math, chemistry, physics, algebra 1

I have been tutoring students in math, physics, chemistry, biology and physical science for the last 3 years. I have also coordinated and taught SAT Math classes. I have experience working with students ranging from 4th-12th grades, including students diagnosed with ADHD.
24 Subjects: including SAT math, chemistry, physics, calculus

...I have experience tutoring students of all ages (elementary through graduate school) in many subject areas - although my real passion is math. I have a BA in statistics from Harvard and will be starting nursing school shortly. As someone who is not a typical "math person", I can relate to those struggling to understand material - I get it.
18 Subjects: including SAT math, chemistry, geometry, statistics
Is round a shape? (logic question)

March 7th 2007, 08:23 PM
Richard Rahl

Hey all, I'm trying to prove whether or not round is a shape. I have:

StatementA: round is a shape
ConverseA: a shape is round
InverseA: if it is not round, then it is not a shape
ContrapositiveA: if it is not a shape, then it is not round

StatementB: round is not a shape
ConverseB: a shape is not round
InverseB: if it is not a shape, then it is not round
ContrapositiveB: if it is a shape, then it is not round.

I either need to show:
A) round is a shape, by showing ContrapositiveA is true or ContrapositiveB is false, or
B) round is not a shape, by showing ContrapositiveB is true or ContrapositiveA is false.

However, I am not sure if I've formed either contrapositive correctly, since they refer to objects as being round or shapes, and not to roundness itself as being a shape (do you see what I mean?). If that is incorrect, what would be the proper contrapositives? I could also use a proof by contradiction, so maybe my contrapositive approach isn't the best. Any thoughts?

March 8th 2007, 09:27 AM
Richard Rahl

Summary so far: the original argument demonstrated the converse error:

"Shape is an outline"
"An outline can be round"
A shape can be round

This states that a shape can be round. We were trying to infer either that round can be a shape or that round cannot be a shape. In other words, you cannot show something is true/false based on the truth/falsity of its converse: the converse error.

Define "shape": a shape is an appearance/outline. Note: appearance/outline means appearance or outline or configuration (if it's an "or" clause, only one condition has to be correct). (Problem: if it's an OR clause, colors are also shapes.) Possible definition: appearance/outline -> (appearance AND outline) OR

Note: Point raised: round is not a shape because round is an adjective that describes a shape.
For this to be true, the argument follows the form:

"Round is an adjective that describes a shape"
"An adjective that describes a shape is not a shape"
"Round is not a shape"

Is this a valid argument? Sure. Is it sound? No. The reason it is not sound is that the second premise can be shown not to be universally true: "An adjective that describes a shape is not a shape".

Counter-example: Square is an adjective that describes a shape. Square is a shape (it is an appearance/outline). Square is an adjective (a square corner). Therefore, being an adjective that describes a shape does not mean that the adjective is not a shape.

This is what I've come up with so far: is an outline/appearance necessarily a shape? If all appearances/outlines are shapes, round is a shape:

"Round is an appearance/outline"
"An appearance/outline is a shape"
Round is a shape.

For this to be a sound argument, both premises have to be true. We have agreed (at this point) that #2 is true. Is #1 true, "Round is an appearance/outline"?

round - Wiktionary
round -> a circular object
A circular object is an appearance of round; round is an appearance.
round -> a circular object
A circular object (abstraction: circle) is an outline; round is an outline.

If not all appearances/outlines are shapes, this does not force "round is not a shape" (at least some appearances/outlines are shapes); to conclude so would be the converse error. Does round lack another condition that is necessary for being a shape? That is, if there is a property that a shape must have, but round does not, then round is not a shape. Can we find such a property?

However, if we can agree that being an appearance/outline is a necessary and sufficient condition for being a shape, then round is a shape (by the proof above). So, is my reasoning for round being an appearance/outline okay? Does another condition exist that a shape must have that round does not?

March 8th 2007, 03:22 PM
First, what do you understand about proof? In formal systems, sentences are true or false. “Round is a shape.” is a sentence. Its truth or falsity depends on the system of axioms. What are the axioms of your formal system? Secondly, you have given several different forms of that sentence. However, each of those forms applies only to implications. Is your sentence an implication? Perhaps: “If something is round then it is a shape.”? Now again, sentences are true or false whereas arguments are valid or invalid. Are you saying that based on some axiom system, we can deduce the theorem “If something is round then it is a shape.” by some valid argument? March 8th 2007, 04:12 PM Richard Rahl Everything I learned about proofs in Math 277 (2nd year university discrete math course) and Philosophy 100 (first year). I know about Proposistions (true/value), Sets of Propositions (consistent/inconsisten), arguments (vadid/sound), validitiy, soundness, proof by contradition (reductio ad absurdum), contrapostive, math induction, and direct proofs. I know all the logical connectors and theorms (demorgans theorm, distribution theorm), and well as boolean algebra, truth tables, etc. Also The converse and inverse errors, as well as modus ponus and modus tollens, that's what I can think of off the top of my head but i have my phil and math books and notes to refer back too. The definitions I gave for round and shape. To make things simple, they are basically the same as the ones in Wiktionary. I'm not trying to figure out if something which is round is also a shape, but rather if round, in and of itself, is (i.e. qualifies) as what we have defined to be a shape. Once again, not trying to argue if something that is round is a shape. I want to use the definitions and a logical proof technique to either assert or deny A) Round is a shape, or B) Round is not a shape Where those statements are the conclusion to a valid, and sound argument. 
March 8th 2007, 05:21 PM

In my view you are simply confused. You quote two different courses: a mathematics course and a philosophy course. I majored in philosophy as an undergraduate. That drove me to mathematics as a graduate student. Have you had a solid course on the foundations of mathematics?

March 8th 2007, 05:45 PM
Richard Rahl

Quote: "In my view you are simply confused. You quote two different courses: a mathematics course and a philosophy course. I majored in philosophy as an undergraduate. That drove me to mathematics as a graduate student. Have you had a solid course on the foundations of mathematics?"

Haha, I suppose I may be confused, but I'm not sure how. Isn't logic still logic regardless of whether it's in philosophy or mathematics? When I took my discrete math course in second year university, I found that much of it paralleled what I learned about logic in first-year philosophy, and likewise in my Digital Electronics/Digital Logic course (and indeed, what I had intuitively reasoned for most of my life). There were some differences for sure (in discrete, we didn't really talk about soundness of an argument, just its validity; I asked my math prof about soundness, and he said that in math it's considered "valid" if it's both "valid" and "sound" as those would be defined in philosophy; and we didn't really do truth tables or Boolean algebra in philosophy, but the concepts still apply, which I discussed with my prof), but all the basics were still there; logic was still logic. I'm not sure what you mean by a "solid course on the foundations of mathematics".
I suppose the closest thing would be my Discrete Math course, which is defined in my school's academic calendar as:

MATH 277 Discrete Structures: An introduction to sets, binary relations and operations; induction and recursion; partially ordered sets; simple combinations; truth tables; Boolean algebras and elementary group theory, with applications to logic networks, trees and languages; binary coding theory and finite-state machines.

The first chapter/few weeks or so in Discrete was solely on logic and logical forms (everything I mentioned in the list is from Discrete). My school does have a 3rd year math course titled "MATH 391 Mathematical Logic" and a 3rd year philosophy course titled "PHIL 340 Logic", but I have not taken them yet (still in second year). BTW: I've also taken Calculus 1 and 2 and Matrix Algebra, and I'm currently in Linear Algebra and Statistics (second year).

March 12th 2007, 03:29 PM
Richard Rahl

So am I completely wrong here or what?

March 12th 2007, 06:00 PM

Well, to start off with, I would have as a postulate that a noun is not an adjective and vice versa. That immediately disqualifies round from being a shape, as round is an adjective and shape is a noun. Just because both words are concerned with the same idea does not make the one the other. Neither would I say that big is a size, for the same reason.

Let's look at a couple of statements:

The tabletop is round. (Here round is an adjective, acting as a subject complement modifying the subject, tabletop.)

Now let's force the word "shape" into the statement:

The tabletop's shape is round. (i.e. round is still an adjective, acting as a subject complement modifying the subject, shape.)

Some adjectives can change to a different part of speech, becoming nouns while still retaining the same idea. For example, it is perfectly okay to say that red is a color, because red in its noun form still has the same idea as color does.
However, if you check the dictionary, you will see that "round" loses all connotation to shape when used in its noun form. So, I would say that round is not a shape. Instead, it is a qualifier that describes a shape. Or, to be more precise, round is a qualifier that describes the shape of something.

March 12th 2007, 06:18 PM
Richard Rahl

Yea, that's the same argument I heard from everyone else. I still don't get it; perhaps I'll just have to concede that I (maybe) understand logic but obviously don't have a clue about grammar.

March 12th 2007, 07:29 PM

As far as logic goes, I'd say that you don't have to have an overall mastery of grammar, just of the pattern of statements that especially use forms of the verb "be", because many arguments will contain a premise in the form "A is B". For example, using round again:

The table is round. (Here we are asserting something about the subject. Grammatically, table is the subject and we are predicating that its shape is round. Notice, however, that we are not renaming the subject, only qualifying its shape.)

On the other hand:

Socrates is a man. (Here we are still asserting something about the subject, but now we are renaming it, calling Socrates a man. This is a true statement because Socrates has all the qualities that a man has. However, the statement is not reflexive, as we cannot say that a man is Socrates, as there are other men besides Socrates. A true statement of this form is transitive when going from the specific to the more general. For example, we can say that Socrates is a mammal, because a man is a mammal. We can say that Socrates is an animal, because Socrates is a man and a man is a mammal and a mammal is an animal.)
The logic in use here is really just Aristotle's classification of things which exist, which is a hierarchical structure that proceeds from general abstractions down through levels of narrowing abstractions, finally arriving at primary substances, the things that actually exist, from which no further declassifications can occur to the subject without making the subject cease to exist. i.e. in the hierarchy of animals, Socrates is a primary substance, because there is no further declassification that Socrates can undergo and still function as Socrates.

In this scheme, only things that possess mass and dimension actually exist, and that is why words that are qualifiers or behaviors only have meaning when they are expressed within the context of some substance of matter. When dealing with words like shape and round, which are abstractions, looking at the parts of speech can resolve the identity crisis. If round in its noun form had a connotation that unequivocally expressed the same idea as shape, then you would have an argument.

March 13th 2007, 08:43 AM
Richard Rahl

Yes, I understand your examples on Socrates; they are similar to the ones we used in class to demonstrate the differences between true premises and conclusion but an invalid argument; false premises and conclusion but a valid argument; false premises and a true conclusion but a valid argument; and true premises and conclusion plus a valid argument = a sound argument; as well as going from the generic to the particular.

Would you mind reworking the argument for me a little? What I mean is, I understand that such an argument is invalid:

Shape is an outline and appearance (wikipedia)
Round is an outline and appearance (btw, I don't know if this is true)
Round is a shape.

But what of an argument of the following form:

A necessary and sufficient condition for something to be a shape is that it is an outline and appearance (or configuration, according to the definition).
Round is an outline and appearance.
Round is a shape.
Perhaps a looser condition for P1 is "a necessary and sufficient condition for something to be a shape is that it fits one of the definitions of shape".

First off, is this argument valid? I think so, since whenever the premises are true, the conclusion must also be true (deductively). If it's not valid, please feel free to show me why. Assuming it is, is it sound? In order to show it's sound, we must demonstrate both premises to be, in fact, true. So, can you show me that being an outline and appearance is not a necessary or sufficient condition for something to be a shape (i.e. is there a property that something must have to be a shape which round lacks)? Second to that, can you demonstrate that round is not an appearance and outline? (In my above posts, I demonstrated a possible reason that round is, based on definitions from Wiktionary, but I am unsure if that is correct.)

Could you please rework the grammar argument to demonstrate that this argument is not sound (or please show me why it's invalid)? If you take the looser form of P1, it would have to be demonstrated that round does not qualify for any possible definition of shape (argument by exhaustion). To show it does, we only have to find one definition that round qualifies for (which, at the moment, I'm thinking is the appearance-and-outline definition, which round may very well not satisfy).

March 13th 2007, 12:14 PM

Some premises don't have to be proven. The premise that a shape is an outline and appearance is a definition, and so that premise you don't have to prove. For an argument I would get a little more specific with the premise:

If a thing exists, then it is a shape.

Here I mean shape to be a very general concept, and the concept from which matter derives.
From the concept of matter, the levels would branch out to lateral concepts of inanimate and living, and then down through the branching levels, and finally to things that actually exist as individual specimens, which obey the chemistry definitions of possessing mass and taking up space. This is the only way a statement could be made that makes a thing be a shape. Shape must be in the hierarchy of matter, and above it. Notice, in that premise, that the converse is also true, which is a property of all precise definitions:

If a thing is a shape, then it exists.

When the next premise is added, however, with round as a shape, the argument stumbles before it can begin:

If a thing exists, then it is a shape.
Round exists.
Therefore, round is a shape.

But round does not exist. Round has no mass and takes up no space. You cannot point at anything that exists and say that that is a round. If you try to put round in the hierarchy just proposed, it would come under shape, no? But then would the classification of matter belong in that structure any longer? I think not. We would need to take matter and put it into a different hierarchy. The only way to put round in that hierarchy without changing the hierarchy that branches down from matter would be to insert it inside shape as a descriptor, to describe a type of shape. If you tried to take all the different types of shape and put them into the structure as independent classifications, then you would end up with an unworkable concept of existence, with types of shape driving the hierarchy as opposed to types of matter.

Now Plato would have no problem with saying that round exists, because of his idea of Forms existing in a transcendent realm that we cannot see. However, it was precisely because of the problems associated with that sort of thinking that Aristotle created his classification system. The transcendental things like round and big and so on were brought back to earth and inserted as qualities that existent things possess.
So I would never make the argument that if a thing exists, it is a shape. I would instead say that if a thing exists, it has a shape, which is a big difference. Since round is one of the properties that shape has, it is okay to say that some things have roundness.

March 13th 2007, 01:03 PM
Richard Rahl

The claim isn't that "if a thing exists, it is a shape" but rather that any thing which is an outline and appearance is a shape. Of course, you are completely right that round is not a thing at all. So, in other words, in order for something to be a shape it must also physically exist, not just as a subjective concept?

March 13th 2007, 02:03 PM

Quote: "So, in other words, in order for something to be a shape it must also physically exist, not just as a subjective concept?"

That hierarchy that allowed the premise "If a thing exists, then it is a shape" was used only to support the premise. If I were to compose an argument, I would use a different hierarchy, and have shape as an abstract concept from which other abstract shapes can descend. For example, from shape would come a lower level of abstract shapes, i.e. round shape, square shape, etc., but not simply round or square. This is to stay consistent: nouns can only be nouns. You can't have a word that is a noun in meaning be equal to a word that is an adjective in meaning. There needs to be consistency there. And I would not try to insert this shape hierarchy into the matter hierarchy, because I think the result would be unworkable.

Also, I would say that shapes can only be perceived as specific qualities within the context of actual physical things. In all cases, it is the thing that exists, not the qualities. Color, for example: if I asked you to show me red, you might show me a red piece of paper. I will say no, you are showing me a piece of paper. You then might get a red ink pen and draw a line on a white piece of paper. I will say no, you are showing me ink. Forever and a day, you will never show me red.
Such is the same with any abstract concept, say mass for example. Even the SI standard for mass is defined in terms of a specific piece of matter. Once you get this concept down and trust its truth, it actually gets very easy to classify things in a way that is both logical and commonsensical. The two are not incommensurable.
A note on least squares fitting of signal waveforms

Mishra, SK (2007): A note on least squares fitting of signal waveforms.

Abstract: Signal waveforms are very fast dampening oscillatory time series composed of exponential functions. Regular least squares fitting techniques are often unstable when used to fit exponential functions to such signal waveforms, since such functions are highly correlated. Of late, some attempts have been made to estimate the parameters of such functions by Monte Carlo based search/random walk algorithms. In this study we use the Differential Evolution based method of least squares to fit the exponential functions and obtain much more accurate results.

Item Type: MPRA Paper
Institution: North-Eastern Hill University, Shillong (India)
Original Title: A note on least squares fitting of signal waveforms
Language: English
Keywords: Signal waveform; exponential functions; Differential Evolution; Global optimization; Nonlinear Least Squares; Monte Carlo; Curve fitting; parameter estimation; Random Walk; Search methods; Fortran
Subjects (JEL): C13 - Estimation: General; C15 - Statistical Simulation Methods: General; C22 - Time-Series Models; C61 - Optimization Techniques, Programming Models, Dynamic Analysis; C63 - Computational Techniques, Simulation Modeling
Item ID: 4705
Depositing User: Sudhanshu Kumar Mishra
Date Deposited: 04 Sep 2007
Last Modified: 15 Feb 2013 23:00

References:
- Han, XL, Pozdin, V, Haridass, C and Misra, P (2006) "Monte Carlo Least-Squares Fitting of Experimental Signal Waveforms", Journal of Information & Computational Science, 3(4), pp. 1-7. http://www.physics1.howard.edu/~pmisra/publications/137_ISICS06.pdf
- Mishra, SK (2007) "Performance of Differential Evolution Method in Least Squares Fitting of Some Typical Nonlinear Curves", SSRN. http://ssrn.com/abstract=1010508

URI: http://mpra.ub.uni-muenchen.de/id/eprint/4705
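The approach described in the abstract can be sketched in a few lines: a minimal Differential Evolution loop (the classic DE/rand/1/bin scheme) minimizing the sum of squared errors for a sum of two decaying exponentials. This is an illustrative reconstruction, not the author's Fortran code; the test signal, the parameter bounds, and the DE settings (F = 0.8, CR = 0.9, population 40) are all assumptions made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a "signal waveform": a sum of two decaying
# exponentials, whose components are highly correlated (hence hard for
# ordinary least squares). True parameters are only for generating data.
t = np.linspace(0.0, 10.0, 200)
true_params = np.array([2.0, 1.5, 1.0, 0.3])   # a1, b1, a2, b2

def model(p, t):
    a1, b1, a2, b2 = p
    return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

y = model(true_params, t)

def sse(p):
    r = y - model(p, t)
    return float(r @ r)

# Minimal DE/rand/1/bin: mutation with factor F, binomial crossover CR,
# greedy one-to-one selection, box constraints [lo, hi] on every parameter.
lo, hi = 0.0, 5.0
NP, D, F, CR = 40, 4, 0.8, 0.9
pop = rng.uniform(lo, hi, (NP, D))
cost = np.array([sse(p) for p in pop])

for gen in range(400):
    for i in range(NP):
        idx = [j for j in range(NP) if j != i]
        r1, r2, r3 = rng.choice(idx, 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True          # guarantee at least one gene
        trial = np.where(cross, mutant, pop[i])
        c = sse(trial)
        if c <= cost[i]:                       # keep trial only if no worse
            pop[i], cost[i] = trial, c

best = pop[np.argmin(cost)]
print(cost.min())
```

With the fixed seed the run is deterministic and the residual sum of squares shrinks to a small fraction of the signal's total energy, which is the behaviour the abstract claims for DE-based least squares.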
Nematode population dynamics, threshold levels and estimation of crop losses

D.L. Trudgill and M.S. Phillips

Contents: Nematode damage; Control measures; Population dynamics

Yield losses are influenced by the pathogenicity of the species of nematode involved, by the nematode population density at planting, by the susceptibility and tolerance of the host, and by a range of environmental factors. Because of this, available models only estimate yield losses as proportions of the nematode-free yield. Estimating threshold levels further involves various economic calculations. Consequently, predicting yield losses and calculating economic thresholds for most nematode/crop problems is not yet possible. What is needed is more field-based information on the relationship between nematode population densities and crop performance, and various approaches to obtaining such data are described. Measuring the population density, especially of Meloidogyne species, is a major problem which needs addressing.

Nematode population dynamics are also density dependent and are influenced by host growth, by the reproductive potential of the species and by various environmental factors. Consequently, modelling nematode population dynamics is an equally imprecise science. Again, good field data are required, but the complicating effects of biological control agents, host susceptibility differences and environmental factors, and errors associated with measuring initial population densities, may mean it is practically impossible to predict reliably the multiplication rates of most nematodes, especially those with several generations per season.

The nematode, the host and the environment are the three interacting variables influencing the extent of yield loss in infested soils. An understanding of the mechanisms and principles involved in these interacting relationships is basic to being able to predict yield reductions from estimates of pre-planting nematode population densities (Pi).
Damage models

When modelling the damage caused to plants by root-feeding nematodes, certain basic principles apply. These are:

· Damage is proportional to the nematode population density.
· The degree of damage is influenced by environmental factors.
· The yield harvested is determined by the amount of light intercepted by the crop, by how efficiently the intercepted light is converted into dry matter, and finally by how that dry matter is partitioned into non-harvested and harvested yield. For some crops, significant variations in moisture content will also affect final yield.

The above principles can be simply stated but are more complex in practice. Damage may be proportional to the nematode population density, but there are several qualifications of this statement. The relationship is usually curvilinear, increasing numbers of nematodes having proportionally diminishing effects. There is some evidence that at low densities the host plant can repair the damage and that growth may even be slightly stimulated. Seinhorst (1965) termed the population density (Pi) at which damage first becomes apparent the tolerance limit (T). Equally, at very high values of Pi, increasing numbers of nematodes may not further reduce dry matter productivity. Seinhorst termed this the minimum yield (m). There are various reasons why m may occur; there may be some growth before attack starts or after it finishes, and a significant biomass may be planted (e.g. potato tubers). However, m applies to total dry matter, and because of effects on partitioning, the harvest value of m may be greater or less than that for total dry matter. The third parameter in the Seinhorst equation is z, a constant slightly less than one. The equation is:

y = m + (1 - m) × z^(Pi - T)  for Pi > T
y = 1  for Pi ≤ T

where y is the yield. An important qualification is that y is expressed as a proportion of the nematode-free yield.
Hence, according to Seinhorst, the greater the yield potential the greater the loss in tonnes per hectare for any value of Pi. The Seinhorst equation is usually plotted with Pi on a logarithmic scale, producing a sigmoidal curve (Fig. 1). In practice T is usually small and the Pi value at which m is reached is so large that it is only the central part of the curve that is of practical use. Oostenbrink (1966) suggested that this approximated to a straight line. The equation for such a line is:

y = y(max) - slope constant × log Pi

Even the simplified Oostenbrink relationship is not very helpful. Yield is still expressed in proportional rather than real (tonnes per hectare) terms. Also, there is no way of applying the relationship without considerable experimentation to determine the slope of the regression.

The slope of the regression varies for several reasons. These include differences in pathogenicity (capacity to cause damage) between species, e.g. Meloidogyne spp. may be inherently more damaging than Tylenchus, but we have no measure of their relative pathogenicities. Different plant species, and varieties within species, differ in their tolerance (capacity to withstand nematode damage). Also, there are large environmental influences on the damage suffered, and particularly on how that damage is translated into effects on final yield.

An important consideration, often overlooked, is the basis of measuring Pi. Usually it is given as numbers per gram of soil. A more appropriate measure is per unit volume of soil, as this allows for bulk density differences. Numbers per gram of root is probably the most appropriate, but is difficult to measure because it is always changing. This latter aspect becomes important when trying to relate results from experiments where root densities are very different, e.g. pot and field trials. A further problem is encountered when considering damage by nematodes that have two or more generations in the lifetime of a crop.
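The two models just described can be put side by side numerically. In the sketch below, the parameter values (T, m, z and the Oostenbrink slope) are arbitrary illustrations chosen for demonstration, not estimates for any real nematode/crop combination:

```python
import numpy as np

def seinhorst_yield(Pi, T=2.0, m=0.4, z=0.995):
    """Proportional yield (0-1) under Seinhorst's (1965) model.

    T - tolerance limit: no measurable damage at or below this density.
    m - minimum relative yield approached at very high densities.
    z - constant slightly less than one.
    All parameter values here are illustrative only.
    """
    Pi = np.asarray(Pi, dtype=float)
    # y = 1 for Pi <= T; y = m + (1 - m) * z**(Pi - T) for Pi > T
    return np.where(Pi <= T, 1.0, m + (1.0 - m) * z ** (Pi - T))

def oostenbrink_yield(Pi, slope=0.15):
    """Oostenbrink's straight-line simplification: y = 1 - slope * log10(Pi)."""
    return 1.0 - slope * np.log10(np.asarray(Pi, dtype=float))

# Yield is unaffected up to T, then declines towards the minimum yield m,
# while the Oostenbrink line tracks only the central part of that curve.
densities = np.array([1.0, 2.0, 10.0, 100.0, 1000.0])
print(seinhorst_yield(densities))
print(oostenbrink_yield(densities))
```

Plotting both against log Pi reproduces the picture in the text: a sigmoid flattening at 1 below T and at m above, with the straight line usable only between those extremes.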
Usually Pi is measured at planting, but on a good host, populations of, for example, Meloidogyne spp. can increase from below the value of T to a level in mid-season where they cause significant damage. Even so, it is a race between increasing Pi and increasing plant size, which brings with it increasing tolerance (in Seinhorst terms, increasing m). In such situations, suitability as a host (susceptibility) and tolerance can have a marked effect on the degree of damage. In summary, both the Seinhorst and Oostenbrink equations are, without the addition of a substantial amount of additional information, purely descriptive and cannot be used to predict actual yield.

Mechanisms of damage and environmental effects on damage

Damage is proportional to the intensity of attack; this is often proportionally greater in sandy soils, where nematodes can move more freely, than in heavier soils, where movement is impeded. Adequate soil moisture is essential for free movement, so attack is often limited as soils dry out later in the season. Temperature also influences the rate of nematode movement, but plant growth is usually equally affected.

FIGURE 1: The relationship between proportional yield loss and initial population density as modelled by Seinhorst (1965)

Primary damage to the attacked roots can be attributed to mechanical damage associated with feeding or invasion, to withdrawal of nutrients, and/or to more subtle physiological effects. Generally, damage reduces the rate of root extension. This reduces the rate of uptake of nutrients and water and, if any become limiting (and they usually do, even for crops without nematode damage), top growth rates are reduced. This reduces the rate of increase in light interception and carbohydrate synthesis, and hence the capacity of the plant to generate more roots to overcome the limitations imposed by nematode damage. Such appears to be the main mechanism of damage by potato-cyst nematodes (Globodera spp.)
whose effect is further increased by reductions in root efficiency, revealed in a decrease in root:shoot ratio. Further damage is associated with withdrawal of nutrients by the developing females (resistant cultivars of potato are often less damaged than susceptible cultivars) and by secondary pathogens such as Verticillium dahliae. The central role of nutrient uptake is revealed, however, by the substantial ameliorating effect on damage of additional fertilizer. With Meloidogyne spp., impaired water relations appear to contribute substantially to reduced rates of top growth. This is probably because the developing giant cell systems interfere with and disrupt the developing xylem. Clearly, with such damage, effects on growth and yield are likely to be greater where the plants are on the threshold of becoming moisture stressed. Other effects include reduced photosynthetic efficiency, and these are reviewed in Trudgill (1992).

Effects on light interception and utilization

There is a good correlation in many crops between percent ground cover (i.e. the percentage of ground occupied by a plant or a crop, when viewed from above, that is covered by green leaves) and percent light interception. Most annual crops start as individual, separate plants, and a reduction in growth rate is directly reflected in ground cover and hence light interception. As they grow, the leaves of neighbouring plants merge to form a continuous canopy. Nematode damage that only delays the production of a continuous canopy, and hence 100 percent light interception, will have a smaller effect on final yield than damage which prevents the crop achieving such full cover. Premature crop death will also proportionally reduce yield.

Environmental effects

Several environmental interactions have already been mentioned. Soil type clearly has an effect because it influences nematode movement as well as being a nutrient and water supply to the host.
It can also influence nematode survival during periods of stress and will certainly influence the species composition of nematode communities. The effect of fertilizer practice and of water availability has also been mentioned, but these in turn will interact with host genotype and husbandry factors such as spacing and time of planting. Recent studies of potato-cyst nematodes illustrate some of the interactions and are briefly summarized below: · The interaction between two potato cultivars of different tolerance and rates of compound fertilizer and the nematicide aldicarb was studied at a site with a sandy soil (Trudgill, 1987). In this trial, the site was uniformly heavily infested with Globodera pallida and the tolerant cv. Cara produced tops that were generally twice the size of intolerant cv. Pentland Dell. Consequently, Cara tended to produce many more leaves than were required to give 100 percent ground cover. The yield of the Cara was increased equally by a half and a full rate of aldicarb whereas that of the Pentland Dell was increased more by the full rate. Similarly, increasing rates of fertilizer proportionally increased the yield of Pentland Dell untreated with nematicide more than it did that of treated Pentland Dell or untreated Cara. This trial and several others showed that initially the G. pallida proportionally decreased the top growth of both cultivars to the same degree, supporting the basic proportional model proposed by Seinhorst. · A series of five trials on different soil types tested the same five potato genotypes in plots with a wide range of initial populations (Pi) of G. pallida. Excellent regressions between Pi and tuber yields were produced (Fig. 2) revealing differences in tolerance between genotypes and in overall rates of yield reduction at the different sites. Further analysis showed that variations from a basic model similar to a simplified Seinhorst curve (without T or m) could be partitioned into genotype and site effects. 
The former were common across sites and the latter across genotypes.

FIGURE 2: The relationship between initial population density of Globodera pallida and tuber yield for tolerant, moderately tolerant and intolerant genotypes at two sites with contrasting yields

This information provides the basis for predicting the effects of G. pallida on the tuber yields of different cultivars classified on their degree of tolerance and of sites classified by their soil type. However, the losses are still predicted as a proportion of the nematode-free yield. To have a prediction of the actual loss in tonnes per hectare requires an estimate of the yield potential of the cultivar and site, which requires yet further modelling. Only with this information can yield losses be accurately quantified in financial terms and the tolerance limit identified. The alternative is to extrapolate from the available trial data and make allowances, on the basis of experience, for the obvious possible environmental influences. Methods of estimating yield loss are therefore of central importance and are considered below.

Methods of estimating yield losses

Pot studies can be used to determine some of the basic information on yield-loss relationships, but because of environmental differences and interactions, field studies are also needed. There are two approaches: one is to use nematicides at relatively uniformly infested sites; the other is to work at sites with a range of population densities but which are uniform in other respects. A combination of both approaches is often a happy compromise. The former gives practical information on the effectiveness and potential value of a particular treatment but tells little about the nature of the relationship. It also suffers from the criticism that nematicides have a range of side-effects. The latter has the benefit of producing information on the relationship between Pi and yield, but it requires experimental errors to be minimized.
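One way to extract the Pi–yield relationship from such plot data is a least-squares fit. The sketch below assumes the simplified curve mentioned above (yield proportional to z^Pi, without T or m), which becomes linear after taking logarithms; the data points are synthetic, invented purely for illustration.

```python
import math

def fit_simplified_seinhorst(pis, yields):
    """Fit y = Y0 * z**P by linear regression of ln(y) on P.

    Returns (Y0, z): nematode-free yield and the per-unit-density
    decline factor. Yields must be positive.
    """
    n = len(pis)
    logs = [math.log(y) for y in yields]
    xbar = sum(pis) / n
    ybar = sum(logs) / n
    sxx = sum((x - xbar) ** 2 for x in pis)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(pis, logs))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return math.exp(intercept), math.exp(slope)

# Synthetic plot data: nematode-free yield 40 t/ha, z = 0.9
pis = [0.0, 5.0, 10.0, 20.0, 40.0]
obs = [40.0 * 0.9 ** p for p in pis]
Y0, z = fit_simplified_seinhorst(pis, obs)
```

With noisy field data the averaging within error bands described in the text plays the same variance-reducing role before a fit like this is attempted.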
Because Pi estimates have large errors, accuracy is improved by reducing plot size and by taking and processing multiple samples from each plot. However, plot size must be large enough to obtain a realistic yield and adequate guard plants are essential. Another option is to establish many small plots in large but otherwise uniform fields. These can be at random, in a grid pattern or along known trends in Pi. The plots can be split and a nematicide applied to one half. For each plot the Pi and yield are determined. The results will produce a scatter of points, hopefully with yield decreasing as Pi increases. Much of the scatter is due to errors in estimating Pi and yield, and it can be minimized by taking the average of all the results within each error band. Such an approach needs: i) a wide range of initial populations; ii) a uniform field; iii) a large number of plots (100 or more); and iv) the plots to be part of an otherwise uniform crop.

Control measures aim to protect the treated crop from damage, and to prevent nematode multiplication and so reduce the threat to the next susceptible crop in the rotation. Most cost-effective and successful is the growing of resistant varieties. However, while these will prevent nematode multiplication, they are often as vulnerable to damage as a susceptible variety. In yield-loss studies resistant varieties can be a very useful tool for preparing plots with reduced populations without the side-effects associated with other treatments. Between vulnerable crops, rotations involving non-hosts are almost essential. Nematicides, whether natural or artificial, are a last resort and should not be used as a crutch to compensate for poor management. They are always costly and frequently toxic and environmentally damaging.
However, their side-effects can make them attractive in some situations; the oxime carbamates control a broad range of pests, until they develop resistance, while the fumigant nematicides release nitrogen, further increasing yields.

Nematodes have various reproductive strategies. Some grow large and have long life cycles with low rates of population increase (K strategists); others are relatively small, have short life cycles and potentially higher reproductive rates (r strategists). An endoparasitic habit with induction of giant cells or other rich and continuously available food sources reduces exposure to predation and other stresses and further increases reproductive potential. A reduction in the number of active juvenile stages further decreases development time, thereby reducing generation time and increasing the potential for multiple generations in a season. A wide host range completes the adaptation of pathogens such as some Meloidogyne spp., which can be regarded as the ultimate plant-parasitic nematode r strategists. Many Longidorus spp. are examples of K strategists. It is a characteristic of K strategists that they do best in stable environments where populations are usually close to the equilibrium density (the population density that can be sustained). In contrast, r strategists increase rapidly where the environment is favourable, often overshooting the equilibrium density. Severe damage to the host occurs and the population crashes. This can occur with repeated cropping of hosts: as Pi increases, environmental influences on sex determination reduce multiplication, parasites of the nematode increase in number, and increasing damage to the host and competition for feeding sites progressively reduce multiplication. Consequently, nematode multiplication rates are strongly density dependent. Again, the question of how density is defined arises.
Usually it is expressed as the number of nematodes per gram or ml of soil, but the units that directly affect the nematode are those that are root related, e.g. number of root tips and/or root length or weight. Hence, a cultivar with twice the root mass of another will, except at low densities where the multiplication rate is the maximum, support a higher multiplication rate. Similarly, tolerant cultivars that maintain a greater root mass as Pi increases than intolerant cultivars will have a greater equilibrium density and maintain a greater multiplication rate at high pre-planting population densities. Overall multiplication rates are determined by the intrinsic maximum rate of multiplication, which is influenced by nematode species, the susceptibility (defined as all those qualities favouring the nematode) of the host, and the various environmental factors that influence both the nematode and the host. Nematode multiplication can be modelled in different ways. For migratory nematode species that multiply continuously, Seinhorst (1966) proposed the following formula derived from a logistic equation:

Pf = aE·Pi / ((a − 1)Pi + E)

where a is the maximum rate of increase and E is the equilibrium density at which Pf = Pi. For sedentary nematodes with one generation at a time, e.g. potato-cyst nematodes, Seinhorst (1967) proposed an alternative model based on the competition model of Nicholson (1933), where a is again the maximum rate of multiplication, and 1 − q is the proportion of the available space which is exploited for food at a density of Pi = 1. Jones and Perry (1978) also proposed a model for sedentary nematodes with a logistic basis derived from the observation that sex determination is density dependent. Their model includes parameters that reflect fecundity and the proportion of the population that does not hatch. All three models, in their most basic form, show maximum rates of multiplication at low initial densities.
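The Seinhorst (1966) formula for migratory species can be checked numerically: at Pi = E the population just replaces itself, and at very low densities the multiplication rate approaches a. Parameter values below are illustrative only.

```python
def seinhorst_multiplication(Pi, a=20.0, E=100.0):
    """Final population Pf for migratory nematodes (Seinhorst, 1966).

    a: maximum rate of increase at low density
    E: equilibrium density, at which Pf = Pi
    Parameter values are illustrative, not measured.
    """
    return a * E * Pi / ((a - 1.0) * Pi + E)

# Pf/Pi approaches a at low density and the population saturates at high density.
assert abs(seinhorst_multiplication(100.0) - 100.0) < 1e-9
```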
As Pi increases the rate of multiplication is reduced as an upper asymptote is reached (Fig. 3). In reality, the shape of this curve is modified as Pi increases, owing to the increasing damage inflicted and the loss of roots. With the Jones and Perry model this is exacerbated, as space lost as a result of root damage increases the competition between invading nematodes, resulting in an even greater shift in the sex ratio towards male production than would otherwise be the case. Thus the approach to the asymptote is slower and indeed the asymptote is reduced below the theoretical level. Further increases in Pi can inflict so much root damage that the population increase becomes negative and the population size is ultimately reduced. All the equations mentioned require modifying by including a damage function such as that of Seinhorst, which also allows the differences in tolerance between cultivars to be taken into account. The damage functions used model proportional differences, and further modification may be required to account for absolute differences in plant size. Another plant characteristic that affects population increase is the host status of the plant. Differences can be modelled in terms of the maximum multiplication rate or the space required for successful multiplication (Seinhorst models), or in terms of fecundity or effects on the sex ratio (Jones and Perry model). An important method of expressing and comparing the effects of different cultivars or cropping regimes is to consider the equilibrium density, i.e. the point at which Pf = Pi. This density is usually observed at a Pi which is larger than that which gives the largest Pf (Fig. 4). In practice the equilibrium density is reached after a period of oscillation about it. The size of the oscillations will be determined by the tolerance and resistance of the host.
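Iterating any Pf = f(Pi) model over successive susceptible crops shows how the density settles towards the equilibrium E. The sketch below uses the plain Seinhorst (1966) logistic model, which approaches E monotonically; the oscillations described in the text arise only once a damage function is included, which is not modelled here.

```python
def iterate_population(Pi0, f, generations=50):
    """Iterate Pf = f(Pi) over successive susceptible crops,
    returning the trajectory of pre-planting densities."""
    traj = [Pi0]
    for _ in range(generations):
        traj.append(f(traj[-1]))
    return traj

# Plain logistic model with illustrative a = 20, E = 100:
f = lambda P: 20.0 * 100.0 * P / (19.0 * P + 100.0)
traj = iterate_population(1.0, f)
# Starting from a low density, the population climbs to E = 100 and stays there.
```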
Tolerance and resistance will produce small oscillations, while susceptibility and intolerance can result in large oscillations. Indeed, these two factors can interact to the extent that a tolerant but partially resistant cultivar can produce a higher equilibrium density than an intolerant susceptible cultivar (Fig. 5).

FIGURE 3: The theoretical logistic relationship between initial population density and final population density, and the relationship when roots are damaged

FIGURE 4: The relationship between Pf and Pi when a tolerant and an intolerant host are grown

FIGURE 5: The relationship between Pf and Pi contrasting the response when an intolerant susceptible host is grown with that which occurs when a tolerant but partially resistant host is grown

Care needs to be taken in devising management strategies for the control of nematodes to balance the benefits of tolerance against the benefits of resistance, to ensure that while yields are maximized, nematode populations are not raised to levels that are damaging to other cultivars. Models can be used to examine and explore nematode management strategies, but need to take into account the effective population if this is less than the actual population, and the decline in the numbers of nematodes in the absence of a host crop.

References

Jones, F.G.W. & Perry, J.N. 1978. Modelling populations of cyst nematodes (Nematoda: Heteroderidae). J. Appl. Ecology, 15: 349-371.
Nicholson, A.J. 1933. The balance of animal populations. J. Animal Ecology, 2: 132-178.
Oostenbrink, M. 1966. Major characteristics of the relation between nematodes and plants. Meded. Landbou. Wageningen, 66: 1-46.
Seinhorst, J.W. 1965. The relation between nematode density and damage to plants. Nematologica, 11: 137-154.
Seinhorst, J.W. 1966. The relationships between population increase and population density in plant-parasitic nematodes. I. Introduction and migratory nematodes. Nematologica, 12: 157-169.
Seinhorst, J.W. 1967.
The relationships between population increase and population density in plant-parasitic nematodes. II. Sedentary nematodes. Nematologica, 13: 157-171.
Trudgill, D.L. 1987. Effects of rates of nematicide and of fertiliser on growth and yield of cultivars of potato which differ in their tolerance of damage by potato-cyst nematodes (G. rostochiensis and G. pallida). Plant & Soil, 104: 185-193.
Trudgill, D.L. 1992. Resistance to and tolerance of plant-parasitic nematodes in plants. Ann. Rev. Phytopathol., 29: 167-192.
Nonmonotone Spectral Projected Gradient Methods on Convex Sets

Results 11 - 20 of 91

, 2001 "... A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face a strategy that deals efficiently with negative curvature is employed. Global convergence results are given. ..." Cited by 28 (5 self) Add to MetaCart A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face a strategy that deals efficiently with negative curvature is employed. Global convergence results are given. Numerical results are presented. Keywords: box constrained minimization, active set methods, spectral projected gradients, dogleg path methods. AMS Subject Classification: 49M07, 49M10, 65K, 90C06, 90C20. - SIAM Journal on Optimization, 2006 "... Abstract. An active set algorithm (ASA) for box constrained optimization is developed. The algorithm consists of a nonmonotone gradient projection step, an unconstrained optimization step, and a set of rules for branching between the two steps. Global convergence to a stationary point is established ..." Cited by 26 (6 self) Add to MetaCart Abstract. An active set algorithm (ASA) for box constrained optimization is developed. The algorithm consists of a nonmonotone gradient projection step, an unconstrained optimization step, and a set of rules for branching between the two steps. Global convergence to a stationary point is established. For a nondegenerate stationary point, the algorithm eventually reduces to unconstrained optimization without restarts. Similarly, for a degenerate stationary point, where the strong second-order sufficient optimality condition holds, the algorithm eventually reduces to unconstrained optimization without restarts.
A specific implementation of the ASA is given which exploits the recently developed cyclic Barzilai–Borwein (CBB) algorithm for the gradient projection step and the recently developed conjugate gradient algorithm CG DESCENT for unconstrained optimization. Numerical experiments are presented using box constrained problems in the CUTEr and MINPACK-2 test problem libraries. Key words. nonmonotone gradient projection, box constrained optimization, active set algorithm, , 2001 "... A review is given of the underlying theory and recent developments in regard to the Barzilai-Borwein steepest descent method for large scale unconstrained optimization. One aim is to assess why the method seems to be comparable in practical efficiency to conjugate gradient methods. The importance of ..." Cited by 21 (1 self) Add to MetaCart A review is given of the underlying theory and recent developments in regard to the Barzilai-Borwein steepest descent method for large scale unconstrained optimization. One aim is to assess why the method seems to be comparable in practical efficiency to conjugate gradient methods. The importance of using a non-monotone line search is stressed, although some suggestions are made as to why the modification proposed by Raydan [22] often does not perform well for an ill-conditioned problem. Extensions for box constraints are discussed. A number of interesting open questions are put forward. Keywords Barzilai-Borwein method, steepest descent, elliptic systems, unconstrained optimization. - JOURNAL OF APPLIED GEOPHYSICS, 1999 "... For a fixed, central ray in an isotropic elastic or acoustic media, traveltime moveouts of rays in its vicinity can be described in terms of a certain number of parameters that refer to the central ray only. The determination of these parameters out of multicoverage data leads to very powerful al ..."
Cited by 21 (8 self) Add to MetaCart For a fixed, central ray in an isotropic elastic or acoustic media, traveltime moveouts of rays in its vicinity can be described in terms of a certain number of parameters that refer to the central ray only. The determination of these parameters out of multicoverage data leads to very powerful algorithms that can be used for several imaging and inversion processes. Assuming two-dimensional propagation, the traveltime expressions depend on three parameters directly related to the geometry of the unknown model in the vicinity of the central ray. We present a new method to extract these parameters out of coherency analysis applied directly to the data. It uses (a) fast one-parameter searches on different sections extracted from the multi-coverage data to derive initial values of the sections parameters, and (b) the application of a recently introduced Spectral Projected Gradient optimization algorithm for the final parameter estimation. Application of the method on a synthetic example shows an excellent performance of the algorithm both in accuracy and efficiency. The results obtained so far indicate that the algorithm may be a feasible option to solve the corresponding, harder, full three-dimensional problem, in which eight parameters, instead of three, are required. - Optim. Methods Softw , 2005 "... Gradient projection methods based on the Barzilai-Borwein spectral steplength choices are considered for quadratic programming problems with simple constraints. Well-known nonmonotone spectral projected gradient methods and variable projection methods are discussed. For both approaches the behavior ..." Cited by 20 (4 self) Add to MetaCart Gradient projection methods based on the Barzilai-Borwein spectral steplength choices are considered for quadratic programming problems with simple constraints. Well-known nonmonotone spectral projected gradient methods and variable projection methods are discussed. 
For both approaches the behavior of different combinations of the two spectral steplengths is investigated. A new adaptive steplength alternating rule is proposed that becomes the basis for a generalized version of the variable projection method (GVPM). Convergence results are given for the proposed approach and its effectiveness is shown by means of an extensive computational study on several test problems, including the special quadratic programs arising in training support vector machines. Finally, the GVPM behavior as inner QP solver in decomposition techniques for large-scale support vector machines is also evaluated. , 2003 "... The container loading problem has important industrial and commercial applications. An increase in the number of items in a container leads to a decrease in cost. For this reason the related optimization problem is of economic importance. In this work, a procedure based on a nonlinear decision pr ..." Cited by 19 (1 self) Add to MetaCart The container loading problem has important industrial and commercial applications. An increase in the number of items in a container leads to a decrease in cost. For this reason the related optimization problem is of economic importance. In this work, a procedure based on a nonlinear decision problem to solve the cylinder packing problem with identical diameters is presented. This formulation is based on the fact that the centers of the cylinders have to be inside the rectangular box defined by the base of the container (a radius far from the frontier) and at least one diameter from each other. With this basic premise the procedure tries to find the maximum number of cylinder centers that satisfy these restrictions. The continuous nature of the problem is one of the reasons that motivated this study. A comparative study with other methods of the literature is presented and better results are achieved. "...
This paper addresses exact learning of Bayesian network structure from data and expert’s knowledge based on score functions that are decomposable. First, it describes useful properties that strongly reduce the time and memory costs of many known methods such as hill-climbing, dynamic programming and ..." Cited by 18 (1 self) Add to MetaCart This paper addresses exact learning of Bayesian network structure from data and expert’s knowledge based on score functions that are decomposable. First, it describes useful properties that strongly reduce the time and memory costs of many known methods such as hill-climbing, dynamic programming and sampling variable orderings. Secondly, a branch and bound algorithm is presented that integrates parameter and structural constraints with data in a way to guarantee global optimality with respect to the score function. It is an any-time procedure because, if stopped, it provides the best current solution and an estimation about how far it is from the global solution. We show empirically the advantages of the properties and the constraints, and the applicability of the algorithm to large data sets (up to one hundred variables) that cannot be handled by other current methods (limited to around 30 variables). 1. - Mathematical Programming , 2003 "... The Barzilai-Borwein (BB) gradient method, and some other new gradient methods have shown themselves to be competitive with conjugate gradient methods for solving large dimension nonlinear unconstrained optimization problems. Little is known about the asymptotic behaviour, even when applied to n ..." Cited by 15 (3 self) Add to MetaCart The Barzilai-Borwein (BB) gradient method, and some other new gradient methods have shown themselves to be competitive with conjugate gradient methods for solving large dimension nonlinear unconstrained optimization problems. 
Little is known about the asymptotic behaviour, even when applied to n dimensional quadratic functions, except in the case that n = 2. We show in the quadratic case how it is possible to compute this asymptotic behaviour, and observe that as n increases there is a transition from superlinear to linear convergence at some value of n ≥ 4, depending on the method. By neglecting certain terms in the recurrence relations we define simplified versions of the methods, which are able to predict this transition. The simplified methods also predict that for larger values of n, the eigencomponents of the gradient vectors converge in modulus to a common value, which is similar to a property observed to hold in the real methods. Some unusual and interesting recurrence relations are analysed in the course of the study. - SIAM Journal on Optimization, 2005 "... A practical active-set method for bound-constrained minimization is introduced. Within the current face the classical Euclidean trust-region method is employed. Spectral projected gradient directions are used to abandon faces. Numerical results are presented. Key words: Bound-constrained optimizatio ..." Cited by 14 (2 self) Add to MetaCart A practical active-set method for bound-constrained minimization is introduced. Within the current face the classical Euclidean trust-region method is employed. Spectral projected gradient directions are used to abandon faces. Numerical results are presented. Key words: Bound-constrained optimization, projected gradient, spectral gradient, trust regions. - SIAM J. Optim, 2004 "... Abstract. A new nonmonotone line search algorithm is proposed and analyzed. In our scheme, we require that an average of the successive function values decreases, while the traditional nonmonotone approach of Grippo, Lampariello, and Lucidi [SIAM J. Numer. Anal., 23 (1986), pp. 707–716] requires tha ..." Cited by 14 (2 self) Add to MetaCart Abstract.
A new nonmonotone line search algorithm is proposed and analyzed. In our scheme, we require that an average of the successive function values decreases, while the traditional nonmonotone approach of Grippo, Lampariello, and Lucidi [SIAM J. Numer. Anal., 23 (1986), pp. 707–716] requires that a maximum of recent function values decreases. We prove global convergence for nonconvex, smooth functions, and R-linear convergence for strongly convex functions. For the L-BFGS method and the unconstrained optimization problems in the CUTE library, the new nonmonotone line search algorithm used fewer function and gradient evaluations, on average, than either the monotone or the traditional nonmonotone scheme.
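Several of the abstracts above build on the Barzilai–Borwein steplength, α_k = (sᵀs)/(sᵀy) with s = x_k − x_{k−1} and y = g_k − g_{k−1}. The sketch below is a generic illustration of that step on a small quadratic; it is not code from any of the cited papers, and the matrix and iteration count are invented.

```python
def bb_gradient_quadratic(A, b, x0, iters=60):
    """Barzilai-Borwein (BB1) gradient method for f(x) = 0.5 x'Ax - b'x,
    with A symmetric positive definite, stored as a list of rows."""
    def matvec(M, v):
        return [sum(m * u for m, u in zip(row, v)) for row in M]

    x = list(x0)
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]
    alpha = 1.0  # first iteration: plain gradient step
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x_new), b)]
        s = [a - c for a, c in zip(x_new, x)]
        y = [a - c for a, c in zip(g_new, g)]
        sy = sum(a * c for a, c in zip(s, y))
        if sy > 0:  # BB1 steplength: (s's)/(s'y)
            alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# Minimize 0.5 x'Ax - b'x with solution [1, 1]
A = [[2.0, 0.0], [0.0, 10.0]]
b = [2.0, 10.0]
sol = bb_gradient_quadratic(A, b, [0.0, 0.0])
```

Note that the iteration is deliberately run without a line search: the BB step is typically nonmonotone in f, which is exactly why the nonmonotone line searches discussed above pair so naturally with it.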
Yippee, my first animations in Mathematica! Let a circle roll around a circle twice as big. The shape traced by a point on the outer circle is a cardioid. Now consider a third circle rolling around the second one as well (again half as big, and at the same speed); its trace is already less familiar. The more circles, the more fractal-ish the resulting curve will be. In the limit, the traced curve can be described with this parametric formula: (Source of inspiration: http://www.mathrecreation.com/2013/12/brain-curve.html) Deeply nifty.
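The limiting formula in the post is an image that did not survive extraction, so it is not reproduced here. The single rolling circle of the opening step does have a standard closed form (an epicycloid), sketched below with illustrative radii; the nested-circle limit curve is left to the linked source.

```python
import math

def epicycloid_point(t, R=2.0, r=1.0):
    """Point traced by a marked point on a circle of radius r rolling
    without slipping around the outside of a fixed circle of radius R.
    Radii here are illustrative, not taken from the post."""
    x = (R + r) * math.cos(t) - r * math.cos((R + r) / r * t)
    y = (R + r) * math.sin(t) - r * math.sin((R + r) / r * t)
    return x, y
```

Because R/r is an integer here, the curve closes after one revolution of t.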
Baldwin, John T. - Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago • Half-way round but still 40 days Being a sequel to the earlier epic: Around the world in 40 days. Also posted on this website. • BEYOND FIRST ORDER LOGIC I John T. Baldwin • Lecture 12: Excellence implies stability transfers John T. Baldwin • TRANSFERING SATURATION, THE FINITE COVER PROPERTY, AND • Constructing !stable Structures: Computing Rank • M121 Fall 95 Exam I Sept. 15, 1995 Name (Print) SSN • MATH 121 FINAL EXAM FALL 1996 The exam consists of 15 problems. Each problem is worth 12 points except for 2 (16 • Amalgamation, Absoluteness, and Categoricity John T. Baldwin • Classification of ffiinvariant amalgamation classes Roman D. Aref'ev • Math 121. Hour Test II 1. (15 points) a) Find remainder when 2x 3 \Gamma 3x 2 + 4x \Gamma 7 is divided by x \Gamma 2. • John T. Baldwin October 12, 2005 • Generalized Quantifiers • Math 503: 5th problem set John Baldwin • [11] S. Shelah. Zeroone laws with probability varying with decaying dis tance. Shelah 467, 199x. • Geometry and High School • Mathematical modeling (October 5): MTHT 400 Methods of Teaching Secondary Mathematics I • for a contradiction. This raises two questions. A positive answer to the first would show none • On the classifiability of Cellular Automata John T. Baldwin \Lambda • DOP and FCP in Generic Structures John T. Baldwin # • VARIABLES: SYNTAX, SEMANTICS AND SITUATIONS JOHN T. BALDWIN • Math 300 Writing for Mathematics Spring 2003 Proportionality • This is part catalog and part encyclopedia. I have listed most of the papers written about the Hrushovski construction and some background material. I • Integers and polynomials Homework due Sept. 28. • Categoricity Asian Logic • Considerations Semi-abelian • Models in 1 John T. Baldwin • Spectrum in Amalgamation • Geometry and High School • LOGIC ACROSS THE HIGH SCHOOL CURRICULUM JOHN T. 
BALDWIN • M IS NOT MARY: VARIABLES FROM GRADE 3 TO 13 JOHN T. BALDWIN, HYUNG SOOK LEE, AND ALEXANDER RADOSAVLJEVIC • P is not Pizza: Variables from grade 3 to 13 We expand the usual mathematical treatment of the syntax and semantics of variable to in- • Beyond First Order Logic: From number of structures to structure of numbers • Model Theory The impact on • Model Theoretic Perspectives on the Philosophy of Mathematics John T. Baldwin • The Monster Model John Baldwin • Amalgamation, Absoluteness, and Categoricity John T. Baldwin • The Stability spectrum for classes of atomic John T. Baldwin • REVIEW OF THE BIRTH OF MODEL THEORY BY CALIXTO BADESA • The amalgamation spectrum John T. Baldwin • CAYLEY'S THEOREM FOR ORDERED GROUPS: O-MINIMALITY August 10, 2006 • The complex numbers and complex exponentiation • THE METAMATHEMATICS OF RANDOM John T. Baldwin • Nonsplitting extensions John T. Baldwin • I. Lavrov and L. Maksimova. Problems in Set Theory, Mathematical Logic, and the Theory of Algorithms. • Fair, Accurate, and Accountable Voting Systems John Baldwin, Department of Mathematics, Statistics, and Computer • Essay 3: Aluminum Crystal Task Math 300 Spring 2003 1 • Average Grain Intercept (AGI) Method The average grain intercept (AGI) method is a technique used to quantify the • AGI vrs Average Area Math 300 Spring 2003 1 • Averaging Slopes It is 210 miles from Chicago to Ann Arbor. Frank doesn't want to pay a toll • When are equations equivalent? John T. Baldwin • The water in the wine John T. Baldwin • Midterm Exam: MTHT 400 Methods of Teaching Secondary Mathematics • Lecture 4: Categoricity implies Completeness John T. Baldwin • Lecture 5: Abstract Elementary Classes John T. Baldwin • Lecture 6.5 Saturation and homogeneity John T. Baldwin • Lecture 7: Galois stability John T. Baldwin • Lecture 8: Morley's method for Galois Types: Downward categoricity John T. Baldwin • Lecture 10: Covers of the multiplicative group of C John T. 
Baldwin • Lecture 11: Excellence implies Categoricity: A John T. Baldwin • Eliminating Exchange John T. Baldwin • This is page i Printer: Opaque this • Notes on Quasiminimality and Excellence John T. Baldwin • Math 300 Writing for Mathematics Comment Codes • Essay 3: Aluminum Crystal Task Math 300 Spring 2003 1 • Internships Available Second Derivatives Hedge Fund Inc. • Mathematic 300, Spring 2003 In Class Writing • Math 300 Writing for Mathematics Spring 2003 Comments on first drafts: Essay 1 • Math 300 Writing for Mathematics Spring 2003 Definitions, Procedures, and Explanations • This is part catalog and part encyclopedia. I have listed most of the papers written about the Hrushovski construction and some background material. I • Perspectives Expansions • Model Theory: The `relevant' • First Order and Infinitary • Necessity of the VWGCH ? • Directions for Abstract Elementary Classes • The Vaught Conjecture Do uncountable models count? • A MODEL IN 2 John T. Baldwin • An innovative rum/Algebra • The Math Forum PEMDAS and FOIL • Around the world in 40 days John T. Baldwin • Trigonometry Scoring: 3 points for each part of 1), 2 points for each of 3,4,5. • Abstract Elementary Classes Abelian Groups • Stability, the finite cover property and 01 laws John T. Baldwin \Lambda • M121 Fall 95 Exam III Nov., 1995 J4 is the only new problem for test 3 so far. • Proof. By Theorem 2.30 and the choice of K + 0 , every model of T ff is • Finite and Infinite Model Theory A Historical Perspective • Finite Model Theory, Spring 1997 Assignment 2. Due approximately Feb.24 • Constructing !stable Structures: Rank 2 Fields John T. Baldwin • Model Theory: Infinitary Model Theory • Generalized Quantifiers, Infinitary Logics, and Abstract Elementary Classes • Lecture 3: Abstract Quasiminimality John T. Baldwin • WHAT IS AN EXTENSION AXIOM? John T. Baldwin • Stephen Wolfram. A New Kind of Science. Wolfram Media, Inc., Champaign, IL, 2002, xiv + 1197 pp. 
• Forking and Multiplicity in First Order Theories John T. Baldwin • STABILITY AND EMBEDDED FINITE John T. Baldwin • STABLE AMALGAMATION John T. Baldwin • Stable Generic Structures John T. Baldwin • eral mathematicians have consulted during the summer with the teachers of the juniorsenior level course (the only one taught at the university rather • Abstract Elementary Classes: Some Answers, More Questions • An American Example of High SchoolUniversity Cooperation \Lambda • MATH 121 EXAM III FALL 1996 Show all your work to obtain credit for the problems. • Expansions of Models Predicates and Automorphisms • 0.2 Definition. 1. The language L is decided by machine M in time f(n) if for every input string w, hS; I; wi M • Problems on `Pathological' Structures John T. Baldwin \Lambda • Math 512: Finite Model Theory February 27, 1997 • Encouraging cooperative solution of mathematics problems • Math 300 Writing for Mathematics Spring 2003 Mathematics for Essay 2 • BEYOND FIRST ORDER LOGIC II John T. Baldwin • Constructing #stable Structures: Rank kfields • THEORIES IN FINITE MODEL John T. Baldwin • Rank and Homogeneous Structures John T. Baldwin • DOP and FCP in Generic Structures John T. Baldwin \Lambda • Lecture 2: Combinatorial Geometries John T. Baldwin • THE METAMATHEMATICS OF RANDOM John T. Baldwin • MATH 121 SYLLABUS FALL 1997 Lecture Time Room Instructor Office Phone Call No. • STABILITY AND EMBEDDED FINITE John T. Baldwin • [Adl08] H. Adler. Introduction to theories without the independence prop-erty. preprint, 2008. • Methods of Teaching Secondary Mathematics I John T. Baldwin • Midterm Exam: MTHT 400 Methods of Teaching Secondary Mathematics • A week in Tunisia November 20-28, 2006 • What is the type-space? 
• Special Issue Winter 2006 The UIC Algebra Symposium • Day 18 Plan (October 12): MTHT 400 Methods of Teaching Secondary Mathematics I • Syllabus Mtht 400: Methods of Teaching Secondary Mathematics I • Computation versus Simulation Is `compute' a transitive or intransitive verb? That is, must one compute • MATH 121 HOUR TEST I September 25,1996 Name (print) • Perspectives Connections • Categoricity John T. Baldwin, Department of Mathematics, Statistics • M121 Fall 95 Exam I Sept. 15, 1995 Name (Print) SSN • Math 503: Some open problems from FMT John Baldwin • Math 300 Writing for Mathematics Spring 2003 Compton's encyclopedia • Lecture 6: Galois types and saturation John T. Baldwin • MATH 121 SAMPLE FINAL PARTIAL SOLUTIONS 12/4/1996 1. Set f(x) = x 4 + x 3 \Gamma 2x 2 \Gamma 4x \Gamma 8. The rational solutions to f(x) = 0 are • i) P and Q partition the universe. ii) The symbols of L apply only to elements of P . • Constructing !stable Structures: Rank 2 fields • Why the weak GCH is true! • Motivations and Directions • Math 300 Writing for Mathematics Spring 2003 Comments on Job Applications • Subsets of superstable structures are weakly benign Bektur Baizhanov • Mathematics as language • AGI vrs Average Area Math 300 Spring 2003 1 • Math 300, Spring 2003, Essay 2 Improper Integrals and sums of series • from grade 3 Pizza problem • Geometry and Proof John T. Baldwin • Notes on Quasiminimality and Excellence John T. Baldwin • Integers and polynomials Homework due Sept. 14. • !stable fields with infinite definable subsets \Lambda John Baldwin • [BH00][BH01][BH04] [Zil02] [Zil00] [CZ01] [BB00] [Poi83] [Bou89] [Pol05][Las] [Hol99] [Hol][Bal][Bal02][BH03] [Bal04][VZ05] • Writing Exam Questions Nov. 17, 2005 • MCS 261, May 3, 2002 Name: There are nine (9) problems on this exam. • Examples of Non-locality John T. 
Baldwin • Local Homogeneity Bektur Baizhanov • Lecture 1: Why and What August 26, 2003 • Final Exam: MTHT 400 Methods of Teaching Secondary Mathematics • Math 300, Spring 2003 Writing for Mathematics Monday sections The Mathematics for Essay 1 • THEORIES IN FINITE MODEL THEORY • STABLE AMALGAMATION John T. Baldwin • Math 512: Finite Model Theory Complexity background • M121 Fall 95 Exam I Sept. 15, 1* Name (Print) _________________________________________SSN: ____________________* • Math 512: Finite Model Theory k-variable logic • MATH 121 HOUR TEST I September 25,1996 Name (print) _________________________________________ • Finite and Infinite Model Theory A Historical Perspective • M121 Fall 95 Exam III Nov., 1995 J4 is the only new problem for test 3 so far. * • MATH 121 EXAM III FALL 1996 Show all your work to obtain credit for the problems. • Math 512: Finite Model Theory RPC and r.e. February 3, 1997 • MATH 121 FINAL EXAM FALL 1996 Name:_________________________________________________SSN:_____________________* • Math 512: Finite Model Theory Ptime = Fixed point axiomatizable • MATH 121 SAMPLE FINAL PARTIAL SOLUTIONS 12/4/1996 D. Radford • Finite Model Theory, Spring 1997 Assignment 2. Due approximately Feb.24 • MATH 121 SYLLABUS FALL 1997 Lecture Time Room Instructor Office Phone Call No. • Math 512: Finite Model Theory John Baldwin • Math 502: SUBSTITUTION John Baldwin • Three Mathematical Cultures John T. Baldwin • M121 Fall 95 Exam I Sept. 15, 1995 Name (Print) SSN • Math 300 Writing for Mathematics Spring 2003 Short Writing Exercise 1 • What is a minus sign anyway? John Baldwin • EM Models and Downward Categoricity Transfer John T. Baldwin • The Vaught Conjecture Do uncountable models count? • AMALGAMATION PROPERTIES AND FINITE MODELS IN L n THEORIES • Notes on the philosophy of mathematics John T. Baldwin • Logic Publications of John T. 
Baldwin Department of Mathematics, Statistics and Computer Science (M/C 249) • Model Companions of T Aut for stable T John T. Baldwin # • Expansions of Geometries John T. Baldwin # • Math 300, Spring 2003, J. Baldwin Some writing and mathematic exercises • Math 300, Spring 2003, J. Baldwin General Comments on Essay 2 • Morley's Proof Mathematical • Math 512: Finite Model Theory k-variable logic • Constructing !-stable Structures: Computing Rank • Rank and Homogeneous Structures John T. Baldwin • Constructing !-stable Structures: Rank 2 fields • Math 512: Finite Model Theory 1 • Math 512: Finite Model Theory Games • Computation versus Simulation Is `compute' a transitive or intransitive verb? That is, must one compute • Stability, the finite cover property and 0-1 laws John T. Baldwin * • Expansions of Geometries John T. Baldwin * • Forking and Multiplicity in First Order Theories John T. Baldwin • Math 512: Finite Model Theory 0-1 Laws • Constructing !-stable Structures: Rank 2 fields • Constructing -stable Structures: Model Completeness • THEORIES IN FINITE MODEL John T. Baldwin • Math 512: Finite Model Theory Monadic \Sigma 1 • Stability theory, Permutations of Indiscernibles, and Embedded Finite Models \Lambda • Finite and Infinite Model Theory A Historical Perspective • MODEL THEORY: FINITE, COUNTABLE, UNCOUNTABLE • Homogeneity and Saturation John T. Baldwin • Perspectives Model Theory • What is the type-space? • Russell's Paradox John T. Baldwin and Olivier Lessmann • Constructing !stable Structures: Rank 2 fields • STABLE AMALGAMATION John T. Baldwin • Classes 2007 Connections • Model Theoretic Perspectives on the Philosophy of Mathematics John T. Baldwin • Contemporary Mathematics Ehrenfeucht-Mostowski models in Abstract Elementary • Assignments Mtht 400: Methods of Teaching Secondary Mathematics I • Model Companions of TAut for stable T John T. Baldwin * • M121 Fall 95 Exam I Sept. 
15, 1* Name (Print) _________________________________________SSN: ____________________* • Subsets of superstable structures are weakly benign Bektur Baizhanov • Math 121. Hour Test II 10/23/1996 • Constructing !-stable Structures: Model Completeness • Constructing !-stable Structures: Rank k-fields • Stability theory, Permutations of Indiscernibles, and Embedded Finite Models* • M121 Fall 95 Exam I Sept. 15, 1* Name (Print) _________________________________________SSN: ____________________* • Math 512: Finite Model Theory Fixed Point logics • Math 512: Finite Model Theory Generalized Quantifiers • A Hanf number for saturation and omission John T. Baldwin • Bulletin of the Iranian Mathematical Society Vol. XX No. X (201X), pp XX-XX. BEYOND FIRST ORDER LOGIC: FROM NUMBER OF • Iterated elementary embeddings and the model theory of infinitary logic • Bulletin of the Iranian Mathematical Society Vol. XX No. X (201X), pp XX-XX. BEYOND FIRST ORDER LOGIC: FROM NUMBER OF • Formalization, Primitive Concepts, and Purity John T. Baldwin • Almost Galois -Stable classes John T. Baldwin • Absoluteness Complexity of • Iterated elementary embeddings and the model theory of infinitary logic • Calculating Hanf Numbers • model theory Boston 2012 • Geometry and Categoricity
HowStuffWorks: "How much coal is required to run a 100-watt light bulb 24 hours a day for a year?"

We'll start by figuring out how much energy in kilowatt-hours the light bulb uses per year. We multiply how much power it uses in kilowatts by the number of hours in a year. That gives 0.1 kW x 8,760 hours, or 876 kWh.

The thermal energy content of coal is 6,150 kWh/ton. Although coal-fired power generators are very efficient, they are still limited by the laws of thermodynamics: only about 40 percent of the thermal energy in coal is converted to electricity. So the electricity generated per ton of coal is 0.4 x 6,150 kWh, or 2,460 kWh/ton.

To find out how many tons of coal were burned for our light bulb, we divide 876 kWh by 2,460 kWh/ton. That equals 0.357 tons. Multiplying by 2,000 pounds/ton, we get 714 pounds (325 kg) of coal. That is a pretty big pile of coal, but let's look at what else was produced to power that light bulb.

A typical 500-megawatt coal power plant produces 3.5 billion kWh per year. That is enough energy for 4 million of our light bulbs to operate year round. To produce this amount of electrical energy, the plant burns 1.43 million tons of coal. It also produces:

│ Pollutant │ Total for Power Plant │ One Light Bulb-Year's Worth │
│ Sulfur Dioxide - Main cause of acid rain │ 10,000 Tons │ 5 pounds │
│ Nitrogen Oxides - Causes smog and acid rain │ 10,200 Tons │ 5.1 pounds │
│ Carbon Dioxide - Greenhouse gas suspected of causing global warming │ 3,700,000 Tons │ 1,852 pounds │

It also produces smaller amounts of just about every element on the periodic table, including the radioactive ones. In fact, a coal-burning power plant emits more radiation than a (properly functioning) nuclear power plant!
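The arithmetic above can be replayed in a few lines. This is just a check of the article's own figures, using the constants it quotes:

```python
# Recompute the article's coal-per-bulb figures from its stated constants.
bulb_kw = 0.1                    # a 100-watt bulb
hours_per_year = 8760
coal_thermal_kwh_per_ton = 6150  # thermal energy content of coal
plant_efficiency = 0.40          # fraction of thermal energy converted to electricity

energy_kwh = bulb_kw * hours_per_year                               # 876 kWh per bulb-year
electric_kwh_per_ton = plant_efficiency * coal_thermal_kwh_per_ton  # 2460 kWh/ton
tons = energy_kwh / electric_kwh_per_ton                            # ~0.356 tons
pounds = tons * 2000

print(energy_kwh, electric_kwh_per_ton, round(pounds, 1))
```

Carrying full precision gives about 712 pounds; the article's 714 pounds comes from rounding the tonnage to 0.357 before multiplying by 2,000.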
Teaching Computer Programming to First-Year Undergraduates with a MATLAB Based Robot Simulator First-year undergraduates often find computer programming difficult because of the abstract nature of the processes and concepts involved. Students who are adept at learning complex historical facts, scientific theories, or mathematical principles may struggle with programming because it requires them to elucidate the process of solving a problem—develop an algorithm—and not just "crunch out" an One approach to making introductory computer programming more tangible to students is to use robotics. Robots show students their code in action, making the abstractions and algorithms concrete and the consequences immediate. For engineering majors, learning how to control real devices and work within the limitations of sensors and actuating hardware is an added benefit. Despite these advantages, robots are rarely used at the introductory level. The expense of purchasing and maintaining robots for a large class can be prohibitive, while sharing one robot among several students leads to lab-scheduling conflicts and student frustration. In an introductory programming course at Cornell University, we use a simulator for the iRobot Create. Each of the 300 or so students in CS1112: Introduction to Computing Using MATLAB has direct access to the simulator, which was developed in MATLAB^® at Cornell. Designed to emulate the behavior and control characteristics of a real programmable robot, the simulator provides a low-cost way to bring the appeal and excitement of robotics to a large class while improving student comprehension of core concepts such as approximation and errors. Teaching Introductory Programming with MATLAB The primary focus of CS1112 is programming and problem solving, not robotics. 
We use the robot simulator to supplement and reinforce concepts traditionally taught in first-year computer science courses, including iteration, functions, arrays, randomness, and error handling. There are several advantages to using MATLAB to teach a course such as this. First, most students enjoy working with the robotic simulator, whatever their background. Second, MATLAB makes even our non-simulator homework very visual, which helps motivate the students to complete the more basic assignments. With MATLAB, students rapidly learn to build programs that plot graphs or display other simple graphics. In another high-level language such as Java™, that would require significantly more infrastructure code to be handed out by the instructor. Third, it is not uncommon for first-year students to become frustrated in introductory programming courses as they wrestle with algorithms, data structures, and new tools. MATLAB provides a friendlier environment than traditional programming languages because it enables students to get further into programming faster. Students see the results of their efforts earlier than they would with other languages, which reduces frustration and gives students confidence that they are capable of doing the work. Fourth, learning MATLAB in the first year benefits the engineering students who make up more than 75% of the class and who will continue to use MATLAB throughout their studies at Cornell and beyond. Developing the Simulator To develop the iRobot Create simulator, I collaborated with a student, who did most of the coding, and my colleague Hadas Kress-Gazit, an assistant professor at Cornell's Sibley School of Mechanical and Aerospace Engineering, who teaches the junior-level course on autonomous mobile robots (see sidebar). We defined classes, applied object-oriented design patterns, and made full use of the object-oriented programming capabilities of the MATLAB language to build the simulator. 
The simulator incorporates a library from the MATLAB Toolbox for the iRobot Create, developed by Professor Joel Esposito and Owen Barton at the United States Naval Academy. The library translates MATLAB code into the low-level numerical commands used by iRobot Create, enabling programs written in the MATLAB language to control the robot. Because the simulator uses this library, the same MATLAB code can be used to control a real robot and a virtual robot via the simulator. In addition to its core simulation capabilities, the simulator has four graphical interfaces. The most important one for the course is SimulatorGUI, which is used to visualize the movement of the Create robot on a map as well as the range of its sensors (Figure 1). Other interfaces include MapMakerGUI, for designing environment maps; ConfigMakerGUI, for modeling noise on the sensors and communication delays; and ReplayGUI, for analyzing and playing back autonomous navigation sequences. Development of the simulator was funded by a grant from MathWorks. The MATLAB code for the simulator is available on SourceForge, and it has been downloaded more than 2500 times. The related course material is also freely available; colleagues at the University of Vermont are already using the simulator to teach a computer science course. I encourage educators at all levels to consider using it in their coursework. Autonomous Mobile Robots MAE 4180/5180 is a senior- and master’s-level robotics course in which students write MATLAB code that is first verified via the simulator and then used to control actual iRobot Create robots. By debugging their code with the simulator before coming to the lab, students can focus their lab time on the actual hardware and on the challenges of dealing with physical systems and real-world communication constraints. 
In the junior-level course, students learn concepts such as localization (how the robot knows where it is in the world), mapping (how the robot finds out what its environment looks like), and motion planning (how the robot figures out where to move and how to get there). They then apply these concepts in hands-on projects with the simulator and real robots. The course concludes with a competition in which the students program the robot to navigate within a map, an exercise requiring both localization and motion planning. The course imposes a heavy workload, but students report that the load is manageable—and more enjoyable—because MATLAB enables them to concentrate on applying new techniques as they develop their algorithms instead of on low-level programming details. The course also serves as a springboard for careers in engineering that require reasoning about tradeoffs and constraints. Programming Exercises Using the Simulator In CS1112, after students have learned the basics of computer programming and the MATLAB language, they complete four exercises using the robot simulator. In the first exercise, the students manipulate the robot using manual control keys and then run a simple program that drives the robot forward, rotates it 90 degrees, and then drives it forward again. When the students are comfortable with these tasks, we ask them to extend the control program to make the robot complete a square. At this point, the concept of approximation error becomes much less theoretical, because the virtual robot does not make a precise 90-degree turn when commanded to do so. The students see that the apparently simple task of getting the robot to complete a square requires more thought—and more code—than they expected. In the second exercise, students write a MATLAB control program to make the robot wander randomly, advancing more often than retreating, until it bumps into a wall (as detected by the robot's bump sensor and infrared wall sensor). 
Here, the students apply several programming concepts, including user-defined functions, pseudo-random numbers, and indefinite iteration. In the third exercise, the students program the robot to systematically scan the floor of a rectangular room using a sensor that measures reflectivity. When the sensor detects a dark marking on the white floor, the program registers the location using coordinates obtained from the robot's global positioning sensor. When the robot has completed its reconnaissance, which ideally uses parallel traverses of the floor area, the students produce a MATLAB scatter plot of the gathered data (Figure 2). This exercise teaches the classic design tradeoff between efficiency and reliability. Students learn that the most efficient path, which would include no overlap of the parallel traverses, leads to poor results because of the limited accuracy of the sensors and the limited precision of the motors used to turn and position the robot. To ensure more reliable floor coverage they must adjust the algorithms, adding redundancy by increasing the overlap between traverses. In the final exercise, students build upon the random walk and reconnaissance exercise and incorporate several programming concepts, including two-dimensional arrays and file input and output. The students' programs record the robot's travel during a random walk and save this data to a file. The students write MATLAB routines to analyze the recorded data and produce a color map of the floor, using color intensity to indicate how long the robot spent in each area of the environment (Figure 3). Results and Next Steps At the end of the semester, students complete a survey on their experience in the course and with the simulator. 
In CS1112, where the majority of the students are engineering freshmen who have not declared their majors, the simulator has increased student interest in computing: On a recent survey, more than 40% reported greater interest in computing skills from assignments that involved the simulator compared with assignments that did not. About two-thirds of all students reported that the simulator helped increase their understanding of the concepts of approximation and errors. On the final exam, one question tested the students' understanding of this concept. There was a strong correlation between the students' overall exam score and their use of simulator-based exercises as examples in answering this exam question. The students also reported that they liked seeing their code in action via the simulator. They requested an enhancement to the simulator that would enable the robot to move faster than a real Create robot. Accordingly, we've adjusted the simulator to enable faster than real-time simulations. We continue to develop and expand the material covered in CS1112: Introduction to Computing Using MATLAB. Although the simulator was developed using object-oriented design principles, until now these principles have not been taught in the course. We will incorporate object-oriented programming with MATLAB into the next CS1112 class, and will continue to develop the simulator-based exercises and the simulator itself, adding new sensors as they become available in hardware.
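The approximation-error lesson from the square-driving exercise can be seen even without the simulator. The sketch below is in Python rather than the course's MATLAB, and the +/-2-degree turn noise is an illustrative assumption, not the simulator's actual error model:

```python
import math
import random

random.seed(1)

x, y, heading = 0.0, 0.0, 0.0          # start at the origin, facing +x
path = [(x, y)]
for _ in range(4):
    x += math.cos(heading)             # drive forward one unit
    y += math.sin(heading)
    path.append((x, y))
    # a commanded 90-degree turn lands within +/-2 degrees of the target
    heading += math.radians(90 + random.uniform(-2.0, 2.0))

gap = math.hypot(path[-1][0], path[-1][1])   # distance from start after the "square"
print(f"closure gap: {gap:.3f}")
```

With perfect 90-degree turns the gap is zero; with even a small turn error it is not, which is the same reason the reconnaissance exercise trades path efficiency for overlapping traverses.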
[Numpy-discussion] Definitions of pv, fv, nper, pmt, and rate Skipper Seabold jsseabold@gmail.... Tue Jun 9 08:45:43 CDT 2009 On Tue, Jun 9, 2009 at 1:14 AM, <josef.pktd@gmail.com> wrote: > On Tue, Jun 9, 2009 at 12:51 AM, <d_l_goldsmith@yahoo.com> wrote: >> --- On Mon, 6/8/09, Skipper Seabold <jsseabold@gmail.com> wrote: >>> I forgot the last payment (which doesn't earn any >>> interest), so one more 100. >> So in fact they're not in agreement? >>> pretty soon. I don't have a more permanent reference >>> for fv offhand, >>> but it should be in any corporate finance text etc. >>> Most of these >>> type of "formulas" use basic results of geometric series to >>> simplify. >> Let me be more specific about the difference between what we have and what I'm finding in print. Essentially, it boils down to this: in every source I've found, two "different" present/future values are discussed, that for a single amount, and that for a constant (i.e., not even the first "payment" is allowed to be different) periodic payment. I have not been able to find a single printed reference that gives a formula for (or even discusses, for that matter) the combination of these two, which is clearly what we have implemented (and which is, just as clearly, actually seen in practice). These are the two most basic building blocks of time value problems, discounting one cash flow and an annuity. There are *plenty* of examples and use cases for uneven cash flows or for providing a given pv or fv. Without even getting into actual financial contracts, suppose I have an investment account that already has $10,000 and I plan to add $500 every month and earn 4%. Then we would need something like fv to tell me how much this will be worth after 180 months. I don't necessarily need a reference to tell me this would be useful to know. >> Now, my lazy side simply hopes that my stridency will finally cause someone to pipe up and say "look, dummy, it's in Schmoe, Joe, 2005. "Advanced Financial Practice." 
Financial Press, NY NY. There's your reference; find it and look it up if you don't trust me" and then I'll feel like we've at least covered our communal rear-end. But my more conscientious side worries that, if I've had so much trouble finding our more "advanced" definition (and I have tried, believe me), then I'm concerned that what your typical student (for example) is most likely to encounter is one of those simpler definitions, and thus get confused (at best) if they look at our help doc and find quite a different (at least superficially) definition (or worse, don't look at the help doc, and either can't get the function to work because the required number of inputs doesn't match what they're expecting from their text, or somehow manage to get it to work, but get an answer very >> different from that given in other sources, e.g., the answers in the back of their text.) I don't know that these are "formulas" per se, rather than convenience functions for typical use cases. That's why they're in spreadsheets in the first place. They also follow the behavior of financial calculators, where you typically have to input a N, I/Y, PMT, PV and FV (even if one of these last two values is zero). If you need a textbook reference, as I said before you could literally pick up any corporate finance text and derive these functions from the basics. Try having a look at some end of chapter questions (or financial calculator handbook) to get an idea of when and how they'd actually be >> One obvious answer to this dilemma is to explain this discrepancy in the help doc, but then we have to explain - clearly and lucidly, mind you - how one uses our functions for the two simpler cases, how/why the formula we use is the combination of the other two, etc. (it's rather hard to anticipate, for me at least, all the possible confusions this discrepancy might create) and in any event, somehow I don't really think something so necessarily elaborate is appropriate in this case. 
So, again, given that fv and pv (and by extension, nper, pmt, and rate) have multiple definitions floating around out there, I sincerely think we should "punt" (my apologies to those unfamiliar w/ the American "football" metaphor), i.e., rid ourselves of this nightmare, esp. in light of what I feel are compelling, independent arguments against the inclusion of these functions in this library in the first place. >> Sorry for my stridency, and thank you for your time and patience. I don't think that there are multiple definitions of these (very simple) functions floating around, but rather different assumptions/implementations that lead to ever so slightly different results. My plan for the additions and when checking the existing ones is to derive the result, so that we know what's going on. Once you state your assumptions, the result will be clearly one way or another. This would be my way of "covering" our functions. I derived the result, so here's what's going on, here's a use case to have a look at as an example. Then we should be fine. It's not that I don't appreciate your concern for being correct. I guess it's just that I don't share it (the concern that is) in this More information about the Numpy-discussion mailing list
Research Blogging Do you write about peer-reviewed research in your blog? Use ResearchBlogging.org to make it easy for your readers — and others from around the world — to find your serious posts about academic If you don't have a blog, you can still use our site to learn about fascinating developments in cutting-edge research from around the world.
The geometry of turbo-decoding dynamics Results 1 - 10 of 40 - In Proceedings of Uncertainty in AI , 1999 "... Recently, researchers have demonstrated that "loopy belief propagation" --- the use of Pearl's polytree algorithm in a Bayesian network with loops --- can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo ..." Cited by 466 (18 self) Add to MetaCart Recently, researchers have demonstrated that "loopy belief propagation" --- the use of Pearl's polytree algorithm in a Bayesian network with loops --- can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes" --- codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme in a more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find that the loopy beliefs often converge and when they do, they give a good approximation to the correct marginals. However, on the QMR network, the loopy beliefs oscillated and had no obvious relationship ... - IN NIPS 13 , 2000 "... Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs ..." Cited by 400 (9 self) Add to MetaCart Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. 
However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that , 2000 "... Since the invention of "turbo codes" by Berrou et al. in 1993, the "turbo principle" has been adapted to several communication problems such as "turbo equalization", "turbo trellis coded modulation", and iterative multi user detection. In this paper we study the "turbo equalization" approach, which ..." Cited by 172 (19 self) Add to MetaCart Since the invention of "turbo codes" by Berrou et al. in 1993, the "turbo principle" has been adapted to several communication problems such as "turbo equalization", "turbo trellis coded modulation", and iterative multi user detection. In this paper we study the "turbo equalization" approach, which can be applied to coded data transmission over channels with intersymbol interference (ISI). In the original system invented by Douillard et al., the data is protected by a convolutional code and a receiver consisting of two trellis-based detectors is used, one for the channel (the equalizer) and one for the code (the decoder). It has been shown that iterating equalization and decoding tasks can yield tremendous improvements in bit error rate (BER). We introduce new approaches to combining equalization based on linear filtering with the decoding. The result is a receiver that is capable of improving BER performance through iterations of equalization and decoding in a manner similar to turbo ... - Proceedings of the IEEE , 2002 "... This paper reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coheren ..." Cited by 122 (18 self) Add to MetaCart This paper reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing.
These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts–in particular making ties to topics such as wavelets and multigrid methods. A third is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principle focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice. We discuss the construction of MR models on trees and show how questions that arise in this context make contact with wavelets, state space modeling of time series, system and parameter identification, and hidden , 2001 "... We present a tree-based reparameterization framework that provides a new conceptual view of a large class of algorithms for computing approximate marginals in graphs with cycles. This class includes the belief propagation or sum-product algorithm [39, 36], as well as a rich set of variations and ext ..." Cited by 102 (22 self) Add to MetaCart We present a tree-based reparameterization framework that provides a new conceptual view of a large class of algorithms for computing approximate marginals in graphs with cycles. This class includes the belief propagation or sum-product algorithm [39, 36], as well as a rich set of variations and extensions of belief propagation. 
Algorithms in this class can be formulated as a sequence of reparameterization updates, each of which entails re-factorizing a portion of the distribution corresponding to an acyclic subgraph (i.e., a tree). The ultimate goal is to obtain an alternative but equivalent factorization using functions that represent (exact or approximate) marginal distributions on cliques of the graph. Our framework highlights an important property of BP and the entire class of reparameterization algorithms: the distribution on the full graph is not changed. The perspective of tree-based updates gives rise to a simple and intuitive characterization of the fixed points in terms of tree consistency. We develop interpretations of these results in terms of information geometry. The invariance of the distribution, in conjunction with the fixed point characterization, enables us to derive an exact relation between the exact marginals on an arbitrary graph with cycles, and the approximations provided by belief propagation, and more broadly, any algorithm that minimizes the Bethe free energy. We also develop bounds on this approximation error, which illuminate the conditions that govern their accuracy. Finally, we show how the reparameterization perspective extends naturally to more structured approximations (e.g., Kikuchi and variants [52, 37]) that operate over higher order cliques. , 2000 "... Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. However, there has been little understanding of the algorithm or the nature of the solutions it nds for general graphs. ..." Cited by 70 (2 self) Add to MetaCart Belief propagation (BP) was only supposed to work for tree-like networks but works surprisingly well in many applications involving networks with loops, including turbo codes. 
However, there has been little understanding of the algorithm or the nature of the solutions it finds for general graphs. We show that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics. This result characterizes BP fixed-points and makes connections with variational approaches to approximate inference. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe's approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe's approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP. - Advances in Neural Information Processing Systems (NIPS), 2001 "... We present a tree-based reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of reparameterization. Mor ..." Cited by 49 (4 self) Add to MetaCart We present a tree-based reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of reparameterization. More generally, we consider algorithms that perform exact computations over spanning trees of the full graph.
On the practical side, we find that such tree reparameterization (TRP) algorithms typically converge more quickly than BP with lower cost per iteration; moreover, TRP often converges on problems for which BP fails. The reparameterization perspective also provides theoretical insight into approximate estimation, including a new probabilistic characterization of fixed points; and an invariance intrinsic to TRP/BP. These two properties in conjunction enable us to analyze and bound the approximation error that arises in applying these techniques. Our results also have natural extensions to approximations (e.g., Kikuchi) that involve clustering nodes. 1 - IEEE Transactions on Information Theory , 2000 "... Motivated by its success in decoding turbo codes, we provide an analysis of the belief propagation algorithm on the turbo decoding graph with Gaussian densities. In this context, we are able to show that, under certain conditions, the algorithm converges and that -- somewhat surprisingly -- though t ..." Cited by 43 (8 self) Add to MetaCart Motivated by its success in decoding turbo codes, we provide an analysis of the belief propagation algorithm on the turbo decoding graph with Gaussian densities. In this context, we are able to show that, under certain conditions, the algorithm converges and that -- somewhat surprisingly -- though the density generated by belief propagation may differ significantly from the desired posterior density, the means of these two densities coincide. Since computation of posterior distributions is tractable when densities are Gaussian, use of belief propagation in such a setting may appear unwarranted. Indeed, our primary motivation for studying belief propagation in this context stems from a desire to enhance our understanding of the algorithm's dynamics in non-Gaussian settings, and to gain insights into its excellent performance in turbo codes.
Nevertheless, even when the densities are Gaussian, belief propagation may sometimes provide a more efficient alternative to traditional inference metho... - Neural Computation , 2004 "... Belief propagation (BP) is a universal method of stochastic reasoning. It gives exact inference for stochastic models with tree interactions, and works surprisingly well even if the models have loopy interactions. Its performance has been analyzed separately in many fields, such as AI, statistical ..." Cited by 20 (2 self) Add to MetaCart Belief propagation (BP) is a universal method of stochastic reasoning. It gives exact inference for stochastic models with tree interactions, and works surprisingly well even if the models have loopy interactions. Its performance has been analyzed separately in many fields, such as AI, statistical physics, information theory, and information geometry. The present paper gives a unified framework to understand BP and related methods, and to summarize the results obtained in many fields. In particular, BP and its variants including tree reparameterization (TRP) and the concave-convex procedure (CCCP) are reformulated with information geometrical terms, and their relations to the free energy function are elucidated from an information geometrical viewpoint. We then propose a family of new algorithms. The stabilities of the algorithms are analyzed, and methods to accelerate them are investigated. 1 - IEEE COMM. LETTERS , 2003 "... In this letter, we express the Cramer-Rao Bound (CRB) for carrier phase estimation from a noisy linearly modulated signal with encoded data symbols, in terms of the marginal a posteriori probabilities (APPs) of the coded symbols. For a wide range of classical codes (block codes, convolutional codes, ..."
Cited by 12 (8 self) Add to MetaCart In this letter, we express the Cramer-Rao Bound (CRB) for carrier phase estimation from a noisy linearly modulated signal with encoded data symbols, in terms of the marginal a posteriori probabilities (APPs) of the coded symbols. For a wide range of classical codes (block codes, convolutional codes, and trellis-coded modulation), these marginal APPs can be computed efficiently by means of the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm, whereas for codes that involve interleaving (turbo codes and bit interleaved coded modulation), iterated application of the BCJR algorithm is required. Our numerical results show that when the BER of the coded system is less than about 10^-3, the resulting CRB is essentially the same as when transmitting a training sequence.
cylindrical pipe holding up a screen (real world project)

Hi Shawn. Welcome to the board.

For a tubular structure, moment of inertia I = [itex]\pi[/itex]/64 (D_o^4 - D_i^4), where D_o and D_i are the outer and inner diameters.

For a beam, uniformly loaded with SIMPLY SUPPORTED ends, the equations for stress and maximum deflection are:

The maximum moment is m = w L^2 / 8

where w = linearly distributed load in units of force per unit length. Make sure to add ALL contributions to weight including the pipe/tube and the stuff it's supporting.

Stress = m D / (2 I)
Deflection = 5 w L^4 / (384 E I)

where E = modulus of elasticity of the pipe/tube material.

For a beam, uniformly loaded with FIXED ends, the equations for stress and maximum deflection are:

The maximum moment is m = w L^2 / 12
Stress = m D / (2 I)
Deflection = w L^4 / (384 E I)

Obviously, fixing the ends so they can't deflect will reduce the maximum deflection at the center of the span by 80%, so that's a lot better than simply supported ends. I'd suggest making up a spreadsheet to see how changing various inputs changes the output. If you need to reduce the deflection of the span further and make it essentially flat, you could use one of the equations above and make a beam that's bent to that curve, fix the ends as per the equation and then when it's loaded, it will flatten out.* Of course, if this bent bar were to rotate, it would be horrible, so you would want to keep the bar steady and have some way of having the screen rotate on the bar such that the bar doesn't need to rotate. One last option would be to take a pipe/tube and deflect the ends slightly so that you put a moment on the ends of the bar so as to help reduce the sag in the middle. The bar could then rotate if you wish, unlike the other option above. In other words, imagine holding a thin plastic bar horizontally out in front of you and watching it sag in the middle, then twist your hands so the sag comes out of it.
You could do the same here and allow the pipe/tube to rotate on bearings but the bearings would be canted slightly so the sag is reduced. Not sure if that would completely eliminate the deflection, I'd have to think about the equations, but that's another possibility. Additional information here: *For example, you may have seen 18 wheelers driving down the road pulling a flatbed with no load and noticed they often have a bow to them. The flatbed trailer will bend under load so that it ends up nearly flat.
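The "make up a spreadsheet" suggestion can just as easily be a few lines of Python. The formulas below are the standard uniformly-loaded beam results for a hollow circular section; the numeric inputs at the bottom are made-up illustrative values, not figures from the original thread:

```python
import math

def tube_moment_of_inertia(d_outer, d_inner):
    """I = pi/64 * (Do^4 - Di^4) for a hollow circular section."""
    return math.pi / 64.0 * (d_outer**4 - d_inner**4)

def beam_results(w, L, E, I, D, fixed_ends=False):
    """Max bending stress and deflection for a uniformly loaded beam.
    w = distributed load (force per unit length), L = span,
    E = modulus of elasticity, I = second moment of area, D = outer diameter."""
    if fixed_ends:
        m = w * L**2 / 12.0                      # max moment at the ends
        deflection = w * L**4 / (384.0 * E * I)
    else:                                        # simply supported
        m = w * L**2 / 8.0                       # max moment at midspan
        deflection = 5.0 * w * L**4 / (384.0 * E * I)
    stress = m * D / (2.0 * I)                   # bending stress at outer fiber
    return stress, deflection

# Illustrative numbers only: 50 mm OD / 44 mm ID steel tube, 2 m span, 60 N/m.
I = tube_moment_of_inertia(0.050, 0.044)
_, d_ss = beam_results(60, 2.0, 200e9, I, 0.050)
_, d_fx = beam_results(60, 2.0, 200e9, I, 0.050, fixed_ends=True)
print(d_fx / d_ss)  # ~0.2 -- fixing the ends removes 80% of the sag
```

Changing the diameters, span, or load in one place and re-running gives the same "what if" view a spreadsheet would.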
Washington Calculus Tutor Find a Washington Calculus Tutor ...Depending on the course, many teachers also include trigonometry. Algebra 2 is one of the most challenging courses students will take, honors or non-honors. It is important that students keep up with the work level and keep up with practicing. 24 Subjects: including calculus, reading, geometry, ASVAB ...I also know that exploring many methods is the best way to build conceptual understanding of math. In many math classrooms today, teachers show their students one way to solve a problem, and then the students simply mimic a series of steps. This approach does not promote conceptual understanding! 16 Subjects: including calculus, English, writing, geometry ...As a private tutor, I have accumulated over 750 hours assisting high school, undergraduate, and returning adult students. And as a research scientist, I am a published author and have conducted research in nonlinear dynamics and ocean acoustics. My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving. 9 Subjects: including calculus, physics, geometry, algebra 1 ...The student must become comfortable with both the graphical and algebraic representations of straight line functions. The good student knows both of these, and can go back and forth between them with ease. Part of my tutoring approach is to help the student gain confidence in doing this. 13 Subjects: including calculus, chemistry, physics, algebra 1 ...My job is to successfully work one-on-one with those styles to help them achieve success, and the proof is in the fact that many students rise from a C/D level grade to A/B+. I have several letters of recommendation that support this claim and am happy to provide this information upon request. ... 17 Subjects: including calculus, chemistry, geometry, ASVAB
Enchanted Learning

│ Figure    │ Area  │
│ Square    │ a * a │
│ Rectangle │ a * b │
│ Circle    │ πr^2  │

area
The area of a region is the number of square units contained within the region. For example, the area of a square with sides of length a is A = a^2. The area of a rectangle is A = length*width. The area of a parallelogram is A = base*height. The area of a triangle is A = (1/2)base*height. The area of a circle is A = πr^2.

arithmetic
Arithmetic is the study of addition, subtraction, multiplication, and division.

arithmetic mean
The arithmetic mean of a set of numbers (also called the average) is equal to the sum of the numbers divided by the number of numbers. For example, for the data set {1, 2, 3, 6}, the mean is (1+2+3+6)/4 = 12/4 = 3.

arithmetic sequence (or arithmetic progression)
An arithmetic sequence, also called an arithmetic progression, is an ordered list of numbers where each term is obtained by adding (or subtracting) a constant amount to the previous term. For example, the arithmetic sequence 0, 2, 4, 6, 8, 10, ... is the sequence of numbers starting with 0 where the terms increase by 2 in turn.
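The definitions above translate directly into code. A quick sketch, using the glossary's own worked examples as inputs:

```python
import math

def area_circle(r):
    """A = pi * r^2."""
    return math.pi * r ** 2

def arithmetic_mean(values):
    """Sum of the numbers divided by the number of numbers."""
    return sum(values) / len(values)

def arithmetic_sequence(start, step, n):
    """First n terms, each obtained by adding a constant to the previous term."""
    return [start + step * i for i in range(n)]

print(arithmetic_mean([1, 2, 3, 6]))   # 3.0, as in the glossary's example
print(arithmetic_sequence(0, 2, 6))    # [0, 2, 4, 6, 8, 10]
```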
Absolute Value on the Graphing Calculator

Example 4: Solve:

Boolean Check: (The inequality symbols are under the TEST Menu - 2nd MATH.)

Answer: x < -1; x > 5

(You could also enter using Y1 and Y2, found under VARS → Y-VARS, Function.)

Where the inequality is true, y-values on the graph will be a 1. If you look at the table, 0's will be listed where the inequality is false and 1's will be listed where the inequality is true. Determine exact cut-off points by using the intersection option. Remember that the calculator cannot draw an open or closed circle on the intervals. You will have to determine which circle is needed based upon whether the inequality includes "equal to". Find the endpoints by using the intersect option (2nd TRACE #5 intersect). If you turn off the axes (FORMAT - 2nd ZOOM), you will be able to see the graphing of the 0's and 1's more clearly. Notice that the small vertical segment connecting the 0's to the 1's is simply the calculator being set in "connected" mode. Change to "dot" mode to remove this segment.
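The Boolean-check trick (true x-values plot as 1, false ones as 0) is easy to mimic off-calculator. The specific inequality for Example 4 is not reproduced above, but the stated answer (x < -1; x > 5) is consistent with an inequality like |x - 2| > 3, which is assumed here purely for illustration:

```python
def boolean_check(x):
    """1 where the (assumed) inequality |x - 2| > 3 holds, else 0,
    mirroring the calculator's TEST-menu behavior."""
    return 1 if abs(x - 2) > 3 else 0

# A small "table" like the calculator's: 0's where false, 1's where true.
for x in range(-3, 8):
    print(x, boolean_check(x))
```

The check is true exactly for x < -1 or x > 5; the endpoints -1 and 5 give 0 because the assumed inequality is strict, which is why the graph would get open circles there.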
Posts from November 23, 2010 on The Unapologetic Mathematician

Two of the most interesting constructions involving group representations are restriction and induction. For our discussion of both of them, we let $H\subseteq G$ be a subgroup; it doesn't have to be normal.

Now, given a representation $\rho:G\to\mathrm{End}(V)$, it's easy to "restrict" it to just apply to elements of $H$. In other words, we can compose the representing homomorphism $\rho$ with the inclusion $\iota:H\to G$: $\rho\circ\iota:H\to\mathrm{End}(V)$. We write this restricted representation as $\rho\!\!\downarrow^G_H$; if we are focused on the representing space $V$, we can write $V\!\!\downarrow^G_H$; if we pick a basis for $V$ to get a matrix representation $X$ we can write $X\!\!\downarrow^G_H$. Sometimes, if the original group $G$ is clear from the context we omit it. For instance, we may write $V\!\!\downarrow_H$.

It should be clear that restriction is transitive. That is, if $K\subseteq H\subseteq G$ is a chain of subgroups, then the inclusion mapping $\iota_{K,G}:K\hookrightarrow G$ is exactly the composition of the inclusion arrows $\iota_{K,H}:K\hookrightarrow H$ and $\iota_{H,G}:H\hookrightarrow G$. And so we conclude that

$\displaystyle\left(\rho\!\!\downarrow^G_H\right)\!\!\downarrow^H_K=\rho\!\!\downarrow^G_K$

So whether we restrict from $G$ directly to $K$, or we restrict from $G$ to $H$ and from there to $K$, we get the same representation in the end.

Induction is a somewhat more mysterious process. If $V$ is a left $H$-module, we want to use it to construct a left $G$-module, which we will write $V\!\!\uparrow_H^G$, or simply $V\!\!\uparrow^G$ if the first group $H$ is clear from the context. To get this representation, we will take the tensor product over $H$ with the group algebra of $G$. To be more explicit, remember that the group algebra $\mathbb{C}[G]$ carries an action of $G$ on both the left and the right. We leave the left action alone, but we restrict the right action down to $H$.
So we have a $G\times H$-module ${}_G\mathbb{C}[G]_H$, and we take the tensor product over $H$ with ${}_HV$. We get the space $V\!\!\uparrow_H^G=\mathbb{C}[G]\otimes_HV$; in the process the tensor product over $H$ "eats up" the right action of $H$ on $\mathbb{C}[G]$ and the left action of $H$ on $V$. The extra left action of $G$ on $\mathbb{C}[G]$ leaves a residual left action on the tensor product, and this is the left action we seek.

Again, induction is transitive. If $K\subseteq H\subseteq G$ is a chain of subgroups, and if $V$ is a left $K$-module, then

$\displaystyle\left(V\!\!\uparrow_K^H\right)\!\!\uparrow_H^G\cong V\!\!\uparrow_K^G$

The key step here is that $\mathbb{C}[G]\otimes_H\mathbb{C}[H]\cong\mathbb{C}[G]$. But if we have any simple tensor $g\otimes h\in\mathbb{C}[G]\otimes_H\mathbb{C}[H]$, we can use the relation that lets us pull elements of $H$ across the tensor product. We get $gh\otimes1\in\mathbb{C}[G]\otimes_H\mathbb{C}[H]$. That is, we can specify any tensor by an element in $\mathbb{C}[G]$ alone.
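As a sanity check on the construction, here is a standard worked example (not from the post itself): inducing the trivial representation of $H$ recovers the permutation representation on cosets.

```latex
% Inducing the trivial H-module up to G:
\mathbf{1}\!\uparrow_H^G
  \;=\; \mathbb{C}[G]\otimes_H\mathbb{C}
  \;\cong\; \mathbb{C}[G/H],
% since the simple tensor g \otimes 1 depends only on the coset gH.
% More generally, C[G] is free as a right H-module on a set of coset
% representatives, so
\dim\!\left(V\!\uparrow_H^G\right) \;=\; [G:H]\,\dim V .
```

This dimension count is a quick way to see that induction and restriction are genuinely different operations: restriction preserves dimension, while induction multiplies it by the index.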
Help me understand this please!!

September 30th 2007, 11:06 AM
I don't want any answers, I just don't understand the question and would like it explained in common terms for my benefit please!!!!!!

The wheels on a bike have a circumference of 2 meters. Each time the front wheel makes one complete revolution, the bicycle moves forward 2 meters. In your own words describe how to work out the distance travelled by the bicycle if you know how many revolutions the bicycle has made. Hence write down a word formula for the distance travelled. The formula should start: distance travelled in m = ........

I'm so :confused:

September 30th 2007, 11:38 AM
If $n$ is the number of revolutions, then the distance is $d=2n$ meters.

September 30th 2007, 11:41 AM
this is a triple post! Please do not make multiple posts of the same question, it wastes people's time. case in point, your question has now been answered by 3 users, all saying the exact same thing. so you wasted 2 users' time
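The word formula is trivial, but writing it as code makes it concrete: distance travelled in m = circumference (2 m) times the number of revolutions.

```python
def distance_travelled_m(revolutions, circumference_m=2):
    """distance travelled in m = circumference * number of revolutions."""
    return circumference_m * revolutions

print(distance_travelled_m(5))  # 10 metres after five revolutions
```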
CDS chart of the day, Portugal edition Many thanks to my colleague Eric Burroughs for sending over this chart, showing how Portugal’s CDS curve has evolved over the course of this year: The black curve is how Portugal looked in April: a pretty standard upward-sloping curve, with default more likely the longer you go out. By June, however, with the onset of the Greece crisis, things looked very different. (This is the green curve.) Obviously default probabilities were higher across the board. But they were highest at the short end of the curve: 6 months to a year out. If Portugal could make it that far, markets were saying, then it would become steadily less likely to default thereafter. Today, with the red curve, it’s very different yet again. The contrast from just a few months ago is striking: while the 1-year CDS showed the highest default probability back then, today it’s the lowest. The EU bailout of Ireland confirms that Portugal will probably not be allowed to default any time soon. But then look at where Portugal’s CDS curve goes after that: straight up, to the point at which the country is now considered more likely to default at 3 years out, and on from there. The implication is clear: any bailout now only serves to make a future default more likely. Which is not, I’m pretty sure, the message that the EU is really intending to send. So a CDS at 5yr doesn’t payout if the default happens at 3yrs? That seems silly. @tedtwong, the 5Y does pay out on credit events any time in the 5Y period. But that does not mean credit spreads cannot be downward-sloping. If you hold recovery rates fixed (the convention), then you can take the credit spread as qualitative proxy for term (integrated) hazard rates, analogous to term zero-coupon rates in an IR curve. A downward-sloping credit spread would then imply declining forward hazard rates, just as a declining zero curve implies declining forward rates. 
The limitation is on the degree of the slope; too steep a slope would imply negative hazard rates, just as too steeply declining a zero curve implies negative forward rates. Oops. Perhaps a definition is wanted. "Hazard rate" means "probability of default given prior survival." greycap: given the yields on Eric Burroughs' graph what would you calc today's hazard rates on the 1Y, 2Y, 3Y, and 10Y Portugal CDS? @greycap, I think I mistook this as the price of the CDS instead of the yield on the CDS. Thanks for the clarification. I'm not sure I understand how to read the yield curve and get the implied probability. If the yield is 3%, the risk-free rate is 1% and I'm a risk neutral investor, that tells me that the probability of default is [p(1.03)=1.01] — only around 2%. Am I reading this correctly? Excuse me as I am totally ignorant of this CDS stuff, but isn't there a problem with the amount of potential liability and the means of insurers to compensate the insured? And if this were not accurate then how are the insurers' assets currently held? Could those assets be liquidated for anywhere near the book value in a global sovereign debt meltdown? "I'm not sure I understand how to read the yield curve and get the implied probability." That is not easy to do. The qualitative shape of the curve is easy to read (Felix has already done so in the post), but backing out a hazard curve involves some assumptions and some calculation. I already mentioned recovery rate. In addition, you need to assume something about the form of the hazard curve, e.g. piecewise-flat, with the points at the quoted CDS tenors. Then at each successive tenor, you solve for the current additional piece of the hazard curve that makes the expected value of the credit event payment equal to the expected value of the swap coupons (plus the upfront amount, in the case of a standardized post-"big bang" quote.)
@tedtwong: “I think I mistook this as the price of the CDS instead of the yield on the CDS.” They are the same. “Spread” is probably a better word than yield in this context. Risk-neutral default probabilities are typically derived from assuming a constant recovery rate (e.g. 40 cents on the dollar instantly upon default) and a piecewise flat term structure of hazard rates (instantaneous annualized rate of default, conditional on survival). A cheap (though inexact) calc for a hazard rate is CDSSpread/(1-RecoveryRate). The link between hazard rate and risk-neutral cumulative default probability at time T is simply [1-e^(-haz*T)]. @tonydd: “isn’t there a problem with the amount of potential liability and the means of insurers to compensate the insured?” Yes, and there’s even a name for it: counterparty risk, and it’s extensively studied and managed–which isn’t to suggest that it’s managed *well* by protection buyers, or sellers for that matter. “Could those assets be liquidated for anywhere near the book value in a global sovreign debt melt down?” Good question. Again, there’s a name for that: wrong-way risk (see also: Armageddon insurance). But it may be worth asking whether a “global sovereign debt meltdown” is the *only* scenario in which a particular sovereign might default, and if so, whether, in such scenario, one’s CDS payoff would be the most important thing on one’s mind (or even anywhere near the top). This is why it is sometimes joked that CDS on the US treasury might more appropriately be denominated in (take your pick:) bullets/gasoline/spam/tinfoil (the latter for hat-making). How about the CDS charts of Spain, Italy, and France? Are they all like this one? @Sandrew: “I think I mistook this as the price of the CDS instead of the yield on the CDS.” They are the same. I don’t think they would be. Why would you sell me a 10 year CDS for less than a 5 yr one? If it’s a graph of price, that’s what the graph would be saying. 
@tedtwong I meant only that the conceptual price of bearing credit risk can be reflected by a spread. You are correct that there is a relevant distinction between an up-front cash payment and a running spread. As greycap alluded to, the CDS market conventions changed last year (a shift dubbed the CDS “Big Bang”) s.t. single-name CDS are now quoted in a combination of up-front cash payment (points) and a standardized rolling spread (of either 100 or 500 bps, depending on the credit). Sorry if my explanation obfuscated rather than clarified. Post Your Comment
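Sandrew's rough numbers above can be put into a few lines of code. This is only a sketch of the approximation he describes (the "credit triangle": spread ≈ hazard × (1 − recovery), and cumulative default probability 1 − e^(−hazard·T) under a flat hazard rate); the function names and defaults are mine, and none of this replaces the tenor-by-tenor bootstrap greycap outlines:

```python
import math

def hazard_rate(spread, recovery=0.40):
    # rough "credit triangle": spread ~ hazard * (1 - recovery)
    return spread / (1.0 - recovery)

def cumulative_default_prob(spread, T, recovery=0.40):
    # risk-neutral probability of default by time T under a flat hazard rate
    return 1.0 - math.exp(-hazard_rate(spread, recovery) * T)

# A 300 bp spread with 40% recovery implies roughly a 5% annual hazard rate,
# and roughly a 22% risk-neutral default probability within 5 years.
print(hazard_rate(0.03))                   # ~0.05
print(cumulative_default_prob(0.03, 5.0))  # ~0.2212
```

(Spread as a decimal, T in years; real quotes also involve coupon timing, upfront points, and a non-flat hazard curve.)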
{"url":"http://blogs.reuters.com/felix-salmon/2010/11/22/cds-chart-of-the-day-portugal-edition/","timestamp":"2014-04-20T16:22:59Z","content_type":null,"content_length":"68097","record_id":"<urn:uuid:fd894353-e4d3-4952-bde1-9e4042c59765>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Lombard Calculus Tutor Find a Lombard Calculus Tutor ...I have a PhD. in experimental nuclear physics. I have completed undergraduate coursework in the following math subjects - differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential equations with applications, and complex analysis. I have a PhD. in experimental nuclear physics. 10 Subjects: including calculus, physics, geometry, algebra 1 ...I have 4 years of teaching experience: 2 years as a middle school math teacher and 2 years as a high school math teacher. During this time I created my own curriculum every year, wrote my own instructional lessons, and designed practice and homework for every lesson. These are skills I bring to all my tutoring engagements. 17 Subjects: including calculus, physics, geometry, GRE ...I think that I can persuade any student that math is not only interesting and fun, but beautiful as well. I have taught Algebra 2 in a high school and at Indiana University. I think that I can persuade any student that math is not only interesting and fun, but beautiful as well. 24 Subjects: including calculus, physics, geometry, GRE ...I am professionally employed and calculus is part of everyday life for me. Trigonometry is an essential building block in the advanced math I have to do daily as a professional engineer. I have a master of science in mechanical engineering. 20 Subjects: including calculus, physics, statistics, geometry ...First, I learn where they are, and then I ask questions until they see what to do! The goal is for students to understand, so that they can deal with test questions after they have done the homework. I have taught Pre-algebra, Elementary Algebra, and Intermediate Algebra at various colleges. 25 Subjects: including calculus, writing, GRE, geometry
{"url":"http://www.purplemath.com/Lombard_calculus_tutors.php","timestamp":"2014-04-18T08:29:51Z","content_type":null,"content_length":"23909","record_id":"<urn:uuid:1bd2b400-f3bc-4ed8-91e6-72cca1bcdba8>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by Joe Total # Posts: 2,001 Simplify. Please help me fractions hate me:) (1/25-1/x^2)divided by (1/5+1/x) physical science What is the mass of a 10.0 cm3 cube of copper? physical science How many hours are required for a radio signal from a space probe near the dwarf planet Pluto, 6.00 x 10 (to the 9th) km away, to reach Earth? Assume that the radio signal travels at the speed of light, 3.00 10 (to the 8th) m/s. The car's displacement 0.7 s after leaving the dock has a magnitude of 14.0 m. What is the car's speed at the instant it drives off the edge of the dock? HELP!You are handed a 8.0 cm stack of new one-dollar bills. Assume the thickness of a dollar bill is 1.4 times thicker than your textbook paper (textbook paper = 63 um). (that's "micrometers") How many dollars are in your stack? Imagine for a moment that treble sounds could travel at 400 m/s, but bass sounds at only 200 m/s. If sound behaved that way, and you were listening to the CU band in Folsom stadium from a distance of 110 m, how much delay would you hear between the trumpet and tuba notes when ... A spherical balloon has an area of 7.1076 m2.How to you calculate the volume? Is the equation 344+0.6x(t-20)? I'm not getting this at all? How long does it take for sound to arrive at the back of a big auditorium if the temperature is 33 oC and the stage is 29 m away? answer in seconds Medical & Billing Can any one let me know how long it will take to complete Medical & billing certificate i am doing from Penn Foster and i am doing 4th assignement & 4 moths are over,it really sucks with projects kindly help me with ur comments what should i do continue or quit ? there's a typo. it should be R+h not R=h. my bad. What is the speed of a communications satellite orbiting the Moon at an altitude of 290 km above the Moon s surface? I know that G=6.67e-11 and h=290. I plugged inthese values, 7.35e24 for the mass and 1.738e6 for the radius into v=sqrtGM/R=h but I didn't get the righ... 
Would it be better to pay off a $1000 loan at 4% over 15 years or at 10% over five years? A novice golfer on the green takes three strokes to sink the ball. The successive displacements are 4.00m north, 2.00m northeast, and 1.00m at 30.0 degrees west of south starting at the same initial point, a golfer could make the hole in what single displacement? which division sentence will give an answer that is not in equal groups? 26/4 35/7 42/6 or 45/5 You have the following data. A monopolist produces 1000 units of output per month, and sells it at the price of 10 each. You know that the monopolist does not do any price discrimination, and you also know that the price-cost margin of this firm (P-MC)/P is evaluated at 0.2. E... if the measure of angle A = 70 degrees & the measure of angle B = 25 degrees, find the measure for angle C in the triangle ABC Tarzan (m = 75 kg) tries to cross a river by swinging from a 15.0 m long vine. His speed at the bottom of the swing, just as he clears the water, is 9.0 m/s. What is the total force Tarzan exerts on the vine at that point? Fnet = ma 6.5 = 3.3a 6.5/3.3 = a 1.97 = a 4th math Sammy ate 3/8 of a pizza, and Betty ate 1/8 of the same pizza. What fraction of the pizza did Sammy and Betty eat? Is a gaseous mixture able to contain isolated atoms and molecules, and if so, do the mixtures have to contain them? unscramble Spanish words integrate: (square root of (6x+4)) - (2x) dx if a rectangular prism has a height of h, the width of the prism is six less than h and the length of the prism is one fourth of the length, find an equation for the volume of the rectangular prism in terms of h statistics probability Each time that Ed charges an expense to his credit card, he omits the cents and records only the dollar value. If this month he has charged his credit card 20 times what can be said about the probability that the record shows at least $15 less than the actual amount?
9+n=-2 solve 5= D+8 Solve 7th Grade Math I Think it is 21 Because: 2 doubled is 4 4 + 5 is 9 a quarter of 9 is 3 3+9 is 12 12 plus two is 14 and 1/2 of 14 is 7 7 + 14 is 21. so 21 is the answer Calculus II Suppose that y = f (t) satisfies the differential equation dy/dt=y(2−y) and initial condition f (0) = 1. Find an explicit expression for f (t). The solubility of nickel (II) carbonate at 25 degrees C is 0.047 g/L. Calculate Ksp for nickel (II) carbonate. I am writing a paper and need some help getting started. I was asked to, Compare in 750 to 1,050 words the relationship of status and age to gender in the U.S. with the relationship of status and age to gender in another country. I am confused about how to begin this paper. Compare in 750 to 1,050 words the relationship of status and age to gender in the U.S. with the relationship of status and age to gender in another country. When determining whether an online source is credible, it is important to use trusted news agencies, governmental websites, or educational, websites for information. Additionally, most credible websites have an "about us" section in which one can find information reg... b/4 >-5 solve inequality -56+y>113 solve the inequality algebra 1 Solve the equation 7z+30=-5 yes i made a mistake its 3.38 x 10-4 m3 A full can of black cherry soda has a mass of 0.406 kg. It contains 3.38 m3 of liquid. Assuming that the soda has the same density as water, find the volume of aluminum used to make the can. What is a 3 dimensional ecosystem? What is a 2 dimensional ecosystem? #1.) A solar photovoltaic panel has an area of 3 m^2. The panel's maximum power output is 595 W @ 1100 W/m^2. a) What is its efficiency? b) If the panel receives 1300 W/m^2 of irradiance, what is its new power output? #2.) A solar voltaic module has a maximum output of 200... A 2.2 kg purse is dropped from 55 m above before reaching the ground with a velocity of 25 m/s. What was the average air resistance. 
harry takes a space voyage and returns to find his twin sister has aged more than he has. This is evidence that they have been in different? social studies The goal of America's containment policy was to 8 th grade math 2a+ab=c solve for a If -T = 5, what is T ? if 8 g of vinegar is added to baking soda to form 24 g of c02 and other products, how much baking soda did you need? The solubility of calcium sulfate at 30 degrees C is .209 g/100mL solution. Calculate its Ksp i calculated it and got 1.5 x 10 ^-2 but the program said that it was wrong if you could help explain how to do it i would appreciate it The solubility of calcium sulfate at 30 degrees C is .209 g/100mL solution. Calculate its Ksp i calculated it and got 1.5 x 10 ^-2 but the program said that it was wrong if you could help explain how to do it i would appreciate it What force does work on a ball rotating down an inclined plane? Explain why the other forces the ball experiences do not do work. algebra 2 whats 1+1 I was thinking on doing Dellcomputers or even Geico insurance, but I must point out the facts on their website on one or the other. Havent trouble on how the company uses direct and indirect The Internet is forcing businesses away from traditional functions, such as distribution. The Internet effectively closes the gap between the buyer and the seller and has slowly been eliminating middlemen or intermediaries. Select a company from one of the following industries... Estimation is best defined as: 1. a process of inferring the values of unknown population parameters from those of known sample statistics 2. a process of inferring the values of unknown samples statistics from those of known population parameters 3. any procedure that views t... Physics-Linear momentum A block of mass m = 2.60 kg slides down a 30.0° incline which is 3.60 m high. At the bottom, it strikes a block of mass M = 6.80 kg which is at rest on a horizontal surface, Fig. 7-46. 
(Assume a smooth transition at the bottom of the incline, an elastic collision, and igno... A radioactive nucleus at rest decays into a second nucleus, an electron, and a neutrino. The electron and neutrino are emitted at right angles and have momenta of 9.80 10-23 kg·m/s, and 5.80 10-23 kg·m/s, respectively. What is the magnitude and direction of the m... According to the standard bell curve and the empirical rule, the lowest 5% would be 2 standard deviations below the mean making the operating cost of the lowest 5% of airplanes equal to $1785 Economics help i need a specific article I had a hard time looking for one Economics help Article dealing with monopoly power, either regulated or illegal attempts at monopolization. What would be an expression that can be used to show how many lengths of 5/8 inch lengths of string can be cut from a 15 inch length of string Tee received six $25 gift cards for his birthday. If he spends 1/4 of his money from the gift cards on Saturday, how much did he save to spend later? Because everyone should be equal. Everyone should have their voice and opinions heard. I don't know how else to expand on this. Why is it important to fight against inequality and promote human rights? Give an example of what you have determined is a bougus analysis using-quasi-scientific jargon, but has little or no objectivity. A. ) A goodyear blimp typically contains 5400 m cubed of helium at an absolute pressure of 1.1*10^5 Pa. The temperature of the helium is 280 K. What is the mass (in kg) of the helium of the blimp? B. ) Estimate the spacing between the centers of neighboring atoms in a piece of... A. ) A goodyear blimp typically contains 5400 m cubed of helium at an absolute pressure of 1.1*10^5 Pa. The temperature of the helium is 280 K. What is the mass (in kg) of the helium of the blimp? B. ) Estimate the spacing between the centers of neighboring atoms in a piece of... There are 25 students in math class. 15 students are female. 
What percent of the class is female A 260 kg piano slides 4.1 m down a 30° incline and is kept from accelerating by a man who is pushing back on it parallel to the incline (Fig. 6-36). The effective coefficient of kinetic friction is 0.40. (a) Calculate the force exerted by the man. (b) Calculate the work do... A vertical spring (ignore its mass), whose spring stiffness constant is 880 N/m, is attached to a table and is compressed down 0.160 m. (a) What upward speed can it give to a 0.300 kg ball when released? (b) How high above its original position (spring compressed) will the bal... Western civilization Describe and Explain Mussolini's rise to power. What volume of 0.200 N H2SO4 is required to neutralize a solution containing 8.00 equivalents of NaOH? Math, Algebra A=1/2h(b1=b2) A=200 , b1=24 , h=10 multiply and simplify 5Ib. 6oz. * 5 explain why communicationis described as a process multiply and simplify 5ib.6oz. * 5 A 2.38 × 103 kg car requires 5.4 kJ of work to move from rest to some final speed. During this time, the car moves 28.1 m. Neglecting friction, find a) the final speed. Answer in units of m/s. I solved the following question: "A ball is launched from the top of a 320 m high building at 50 m/s at an angle of 37 degrees to the horizontal. For the first 10 seconds of flight, make a table of the vertical position from the ground versus time similar to the one below... The mailing list of an agency that markets scuba-diving trips to the Florida Keys contains 60% males and 40% females. What is the probability that 20 of the 30 are men? The difference of 89,000-58,000 divided by 58,000 Estimate the force a person must exert on a string attached to a 0.160 kg ball to make the ball revolve in a circle when the length of the string is 0.600 m. The ball makes 1.80 revolutions per second. Do not ignore the weight of the ball. In particular, find the magnitude of ... 
The mailing list of an agency that markets scuba-diving trips to the Florida Keys contains 60% males and 40% females. What is the probability that 20 of the 30 are men? In a "Rotor-ride" at a carnival, people are rotated in a cylindrically walled "room." (See Fig. 5-35.) The room radius is 5.4 m, and the rotation frequency is 0.8 revolutions per second when the floor drops out. What is the minimum coefficient of static fri... can someone help to indentify fallacies and rhetorical devices in this article? The government's 2008 banking bailout is a fraud perpetrated on the American people. The government has abandoned sound money practices in favor of allowing the Federal Reserve a banking c... axia crt/205 can someone help me to identify the fallacies and rhetorical devices in this article? While the U.S. government's 2008 bailout of failing banks flies in the face of the nation's capitalist principles, it is a necessary evil. Without shoring up these financial instituti... 9.8 m/s^2? what is the efficiency of a pulley that requires a worker to do 250 J of work to lift an object that requires 170 J of work 4 th grade math input outout box how do you get from 6 to 40 4th grade math input output tables how do you get from 6 to 40 I incline in the wrong direction a voice cries faint as in a dream from the belly of the wall Is "A voice cries faint as in a dream" an example of a metaphor? if so, why? Thank you Can someone explain what is the difference between a connotation and denotation? In this stanza: where there's a wall there are words to whisper by a loose brick wailing prayers to utter special codes to tap birds to carry messages taped to their feet there are letters to be written novels even Is it talking about God? If possible may i get an explanation of what it means? on this side of the wall I am standing staring at the top lost in the clouds I hear every sound you make but cannot see you "battering ram"-- means that in every problem, there is a solution? 
where there's a wall there's a way around, over, or through there's a gate maybe a ladder a door a sentinel who sometimes sleeps there are secret passwords you can overhear there are methods of torture for extracting clues to maps of underground passageways there a... College mechanic vibration A half-car model of an automobile suspension system is shown below. You are to design its suspension systems (values of spring constants and damping constants). Generally, for ride comfort, it is desired to have the bounce motion at low frequency and the pitch motion at relati... Pages: <<Prev | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | Next>>
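One of the physics questions in this listing (the radio signal from a probe near Pluto) reduces to a single division, t = d / c. A worked sketch (my own, not a posted answer):

```python
distance_m = 6.00e9 * 1_000   # 6.00e9 km from Earth, converted to meters
c = 3.00e8                    # speed of light in m/s

travel_time_s = distance_m / c
travel_time_h = travel_time_s / 3600

print(travel_time_s)  # 20000.0 seconds
print(travel_time_h)  # about 5.56 hours
```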
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Joe&page=11","timestamp":"2014-04-20T08:41:56Z","content_type":null,"content_length":"27869","record_id":"<urn:uuid:9602a20d-ac10-4052-8782-64f328db3f37>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
User’s Guide to PARI-GP, by Results 1 - 10 of 23 - SIAM Journal on Scientific Computing , 1999 "... . A new class of quadrature rules for the integration of both regular and singular functions is constructed and analyzed. For each rule the quadrature weights are positive and the class includes rules of arbitrarily high-order convergence. The quadratures result from alterations to the trapezoidal r ..." Cited by 24 (1 self) Add to MetaCart . A new class of quadrature rules for the integration of both regular and singular functions is constructed and analyzed. For each rule the quadrature weights are positive and the class includes rules of arbitrarily high-order convergence. The quadratures result from alterations to the trapezoidal rule, in which a small number of nodes and weights at the ends of the integration interval are replaced. The new nodes and weights are determined so that the asymptotic expansion of the resulting rule, provided by a generalization of the Euler--Maclaurin summation formula, has a prescribed number of vanishing terms. The superior performance of the rules is demonstrated with numerical examples and application to several problems is discussed. Key words. Euler--Maclaurin formula, Gaussian quadrature, high-order convergence, numerical integration, positive weights, singularity AMS subject classifications. 41A55, 41A60, 65B15, 65D32 PII. S1064827597325141 1. Introduction. Recent advances in algor... , 1992 "... Abstract. Symmetric function theory provides a basis for computing Galois groups which is largely independent of the coefficient ring. An exact algorithm has been implemented over Q (t1,t2,...,tm) in Maple for degree up to 8. A table of polynomials realizing each transitive permutation group of degre ..." Cited by 9 (0 self) Add to MetaCart Abstract. Symmetric function theory provides a basis for computing Galois groups which is largely independent of the coefficient ring. 
An exact algorithm has been implemented over Q(t1,t2,...,tm) in Maple for degree up to 8. A table of polynomials realizing each transitive permutation group of degree 8 as a Galois group over the rationals is included. - Proceedings of the 4th Annual Symposium on Parallel Algorithms and Architectures , 1992 "... A practical version of a parallel algorithm that approximates the roots of a polynomial whose roots are all real is developed using the ideas of an existing NC algorithm. An new elementary proof of correctness is provided and the complexity of the algorithm is analyzed. A particular implementation o ..." Cited by 8 (0 self) Add to MetaCart A practical version of a parallel algorithm that approximates the roots of a polynomial whose roots are all real is developed using the ideas of an existing NC algorithm. An new elementary proof of correctness is provided and the complexity of the algorithm is analyzed. A particular implementation of the algorithm that performs well in practice is described and its run-time behaviour is compared with the analytical predictions. 1 Introduction In this paper we describe and analyze the behaviour of an implementation of a parallel algorithm that approximates the roots of a polynomial which has only real roots. The polynomial root approximation problem we consider can be defined as follows. Given a positive integer ¯, and a polynomial p 0 (x) of degree n, whose coefficients are m-bit integers and whose roots x 1 ; x 2 ; . . . ; x n are all real, we wish to compute ¯-approximations ~ x 1 ; ~ x 2 ; . . . ; ~ x n respectively to these roots, where the ¯-approximation ~ x i to the root x i i... - MapleTech , 1994 "... This article explains how to define a class of decomposable combinatorial structures with Gaia, how to count the number of structures of a given size, how to generate a random structure and how to use it. Details about the algorithms used will be found in [5] and [6]. ..." 
Cited by 8 (3 self) Add to MetaCart This article explains how to define a class of decomposable combinatorial structures with Gaia, how to count the number of structures of a given size, how to generate a random structure and how to use it. Details about the algorithms used will be found in [5] and [6]. - J. Symbolic Comput , 1998 "... this paper appeared in "Design and Implementation of Symbolic Computation Systems," A. Miola (ed.), Springer Lect. Notes Comput. Science, 722, 66--80 (1993). ..." "... We propose a new algorithm to find worst cases for correct rounding of an analytic function. We first reduce this problem to the real small value problem — i.e. for polynomials with real coefficients. Then we show that this second problem can be solved efficiently, by extending Coppersmith’s work on ..." Cited by 7 (3 self) Add to MetaCart We propose a new algorithm to find worst cases for correct rounding of an analytic function. We first reduce this problem to the real small value problem — i.e. for polynomials with real coefficients. Then we show that this second problem can be solved efficiently, by extending Coppersmith’s work on the integer small value problem — for polynomials with integer coefficients — using lattice reduction [4, 5, 6]. For floating-point numbers with a mantissa less than, and a polynomial approximation of ¡ degree, our al-gorithm finds all worst cases ¢ at distance a machine number �� � § ¥�©������� � in time ¡��¤ �. For, this improves �� � �� � � on the complexity from Lefèvre’s algorithm , 1996 "... odd, then the computation of Uk does not require the computation of U l j (j 1). Proof : Since k is odd (i.e. k0 = 1), Uk(= U l 0 ) = Uh 1 V l 1 l 1 . Thus, only the value of Uh 1 is needed. We only need to show that the value of Uh j-1 can be derived from Uh j . By Eq. (5) and depending on ..." Cited by 4 (1 self) Add to MetaCart odd, then the computation of Uk does not require the computation of U l j (j 1). Proof : Since k is odd (i.e. 
k0 = 1), Uk(= U l 0 ) = Uh 1 V l 1 l 1 . Thus, only the value of Uh 1 is needed. We only need to show that the value of Uh j-1 can be derived from Uh j . By Eq. (5) and depending on the value of k j-1 , we have the following cases: . if k j-1 = 0, then (l j-1 , h j-1 ) = (2l j , l j + h j ); . if k j-1 = 1, then (l j-1 , h j-1 ) = (l j + h j , 2h j ). Hence, if k j-1 = 0, then h j-1(= h j + l j = 2l j + 1) is odd and Uh j-1 = Uh j V l j l j ; otherwise, h j-1(= 2h j ) is even and Uh j-1 = Uh j Vh j . We now are ready to give the algorithm that we shall extend to the case where k is even. Inputs: k = 2 s i=s k i 2 i-s , (ks = 1) P, Q Outputs: (Uk , Vk ) Uh = 1; V l = 2; Vh = P ; Q l = 1; Qh = 1; for j from n 1 to s + 1 by -1 if k[j] == 1 then Qh = Q l Vh ; Vh Qh else Qh = Q l ; Q l fi Qh ; Qh = - Experiment. Math "... Abstract. Let M(P (z1,..., zn)) denote Mahler’s measure of the polynomial P (z1,..., zn). Measures of polynomials in n variables arise naturally as limiting values of measures of polynomials in fewer variables. We describe several methods for searching for polynomials in two variables with integer c ..." Cited by 4 (3 self) Add to MetaCart Abstract. Let M(P (z1,..., zn)) denote Mahler’s measure of the polynomial P (z1,..., zn). Measures of polynomials in n variables arise naturally as limiting values of measures of polynomials in fewer variables. We describe several methods for searching for polynomials in two variables with integer coefficients having small measure, demonstrate effective methods for computing these measures, and identify 48 polynomials P (x, y) with integer coefficients, irreducible over Q, for which 1 < M(P (x, y)) < 1.37. 1. - Notices of the AMS , 1993 "... We describe the primality testing algorithms in use in some popular computer algebra systems, and give some examples where they break down in practice. 
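The 1996 entry above has lost its subscripts; read in standard Lucas-sequence notation (U_0 = 0, U_1 = 1, V_0 = 2, V_1 = P, and X_n = P·X_{n−1} − Q·X_{n−2}), the identity it appears to rely on, for index pairs with h = l + 1, is U_{l+h} = U_h·V_l − Q^l. That reading can be checked numerically with a naive linear-time sketch (my own illustration, not the paper's algorithm):

```python
def lucas_UV(n, P, Q):
    # naive evaluation of the Lucas sequences U_n, V_n for parameters P, Q,
    # from U_0 = 0, U_1 = 1, V_0 = 2, V_1 = P and X_n = P*X_{n-1} - Q*X_{n-2}
    U, V = [0, 1], [2, P]
    for i in range(2, n + 1):
        U.append(P * U[i - 1] - Q * U[i - 2])
        V.append(P * V[i - 1] - Q * V[i - 2])
    return U[n], V[n]

# Fibonacci/Lucas numbers (P = 1, Q = -1), with l = 3, h = 4:
# U_7 should equal U_4 * V_3 - Q**3, i.e. 13 = 3 * 4 - (-1).
P, Q, l, h = 1, -1, 3, 4
U_lh = lucas_UV(l + h, P, Q)[0]
U_h = lucas_UV(h, P, Q)[0]
V_l = lucas_UV(l, P, Q)[1]
print(U_lh, U_h * V_l - Q**l)  # 13 13
```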
1 Introduction In recent years, fast primality testing algorithms have been a popular subject of research and some of the modern methods are now i ..." Cited by 3 (0 self) Add to MetaCart We describe the primality testing algorithms in use in some popular computer algebra systems, and give some examples where they break down in practice. 1 Introduction In recent years, fast primality testing algorithms have been a popular subject of research and some of the modern methods are now incorporated in computer algebra systems (CAS) as standard. In this review I give some details of the implementations of these algorithms and a number of examples where the algorithms prove inadequate. The algebra systems reviewed are Mathematica, Maple V, Axiom and Pari/GP. The versions we were able to use were Mathematica 2.1 for Sparc, copyright dates 1988-1992; Maple V Release 2, copyright dates 1981-1993; Axiom Release 1.2 (version of February 18, 1993); Pari/GP 1.37.3 (Sparc version, dated November 23, 1992). The tests were performed on Sparc workstations. Primality testing is a large and growing area of research. For further reading and comprehensive bibliographies, the interested - In Proceedings of FPSAC'98 , 1998 "... We present a new computer algebra package which permits to count and to generate combinatorial structures of various types, provided that these structures can be described by a speci cation, as de ned in [7]. Resume Nous presentons un nouveau module de calcul formel dedie audenombrement etala genera ..." Cited by 3 (0 self) Add to MetaCart We present a new computer algebra package which permits to count and to generate combinatorial structures of various types, provided that these structures can be described by a speci cation, as de ned in [7]. Resume Nous presentons un nouveau module de calcul formel dedie audenombrement etala generation aleatoire uniforme de structures combinatoires decomposables. 1 What is CS? 
CS is a computer algebra package devoted to the handling of combinatorial structures. Its main features are the following: given a combinatorial speci cation of a class of decomposable structures (in the sense of [7]), CS is able to count and uniformly draw at random the structures of any given size n. It can also give some properties of the associated generating series, like recurrences and di erential equations. A speci cation of a class of combinatorial structures, as de ned in [7], is a set of productions made from basic objects (atoms) (Epsilon and Z of size 0 and 1 respectively) and
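As a tiny concrete illustration of what counting from such a specification amounts to (my own sketch; this is not CS/Gaia syntax), the specification B = Epsilon + Z × B × B for binary trees turns directly into a convolution recurrence on the counting sequence:

```python
def count_from_spec(n_max):
    # b[n] = number of structures of size n for B = Epsilon + Z*B*B:
    # a structure is either Epsilon (size 0) or an atom Z (size 1)
    # together with an ordered pair of B-structures.
    b = [0] * (n_max + 1)
    b[0] = 1  # Epsilon
    for n in range(1, n_max + 1):
        b[n] = sum(b[k] * b[n - 1 - k] for k in range(n))
    return b

print(count_from_spec(6))  # [1, 1, 2, 5, 14, 42, 132] -- the Catalan numbers
```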
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1032589","timestamp":"2014-04-21T08:56:08Z","content_type":null,"content_length":"37301","record_id":"<urn:uuid:69a61e89-7b37-4431-b268-05c72e38cd42>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Root Test Proof A selection of articles related to root test proof. Original articles from our library related to the Root Test Proof. See Table of Contents for further available material (downloadable resources) on Root Test Proof. Root Test Proof is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Root Test Proof books, and related discussion. Suggested Pdf Resources Suggested Web Resources Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, the realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site. Root Test Proof Topics Related books
{"url":"http://www.realmagick.com/root-test-proof/","timestamp":"2014-04-17T22:01:55Z","content_type":null,"content_length":"25044","record_id":"<urn:uuid:5133f2a5-65ee-45bb-95aa-3b0e925b580b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Tanya Khovanova’s Math Blog Many years ago at Gelfand’s seminar in Moscow, USSR, someone pointed out a young girl and told me: “This is Natalia Grinberg. In her year in the math Olympiads, she was the best in the country. She is the next you.” We were never introduced to each other and our paths never crossed until very recently. Several years ago I became interested in the fate of the girls of the IMO (International Math Olympiad). So, I remembered Natalia and started looking for her. If she was the best in the USSR in her year, she would have been a gold medalist at the IMO. But I couldn’t find her in the records! The only Grinberg I found was Darij Grinberg from Germany who went to the IMO three times (2004, 2005, and 2006) and won two silver medals and one gold. That was clearly not Natalia. I started doubting my memory and forgot about the whole story. Later I met Darij at MIT and someone told me that he was Natalia’s son. I was really excited when I received an email from Natalia commenting on one of my blog posts. We immediately connected, and I asked her about past events. Natalia participated in the All-Soviet Math Olympiads three times. In 1979 as an 8th grader she won a silver medal, and in 1980 and 1981 she won gold. That indeed was by far the best result in her year. So she was invited to join the IMO team. That year the IMO was being held in the USA, which made Soviet authorities very nervous. At the very last moment four members of the team did not get permission to travel abroad. Natalia was one of them. The picture below, which Natalia sent to me, was taken during the Soviet training camp before the Olympiad. These four students were not allowed to travel to the IMO: Natalia Grinberg, Taras Malanyuk, Misha Epiktetov, and Lenya Lapshin. Because of the authorities’ paranoia, the Soviet team wasn’t full-sized. The team originally contained eight people, but as they rejected four, only six traveled to the USA, including two alternates. 
I have written before how at that time the only way for a Jewish student to get to study mathematics at Moscow State University was to get to the IMO. I wrote a story about my friend Sasha Reznikov who trained himself to get to the IMO, but because of some official machinations, still was not accepted at MSU. Natalia’s story surprised me in another way. She didn’t get to the IMO, but she was accepted at MSU. It appears that she was accepted at MSU as a member of the IMO team, because that decision was made before her travel documents were rejected. Natalia became a rare exception to the rule that the only way for a Jewish person to attend MSU was to participate in the IMO. It was a crack in the system. They had to block visas at the last moment, so that people wouldn’t have time to make a fuss and do something about it. Natalia slipped through the crack and got to study at the best university in the Soviet Union. Unfortunately, the world lost another gold IMO girl. Three Soviet team members won gold medals that year. Natalia, being better than all of them, would have also won the gold medal. 6 Comments 1. George R.: Good morning Tanya! If I may ask a somewhat indiscreet question (?) does Prof. Gelfand have any relation to the world famous chess player Boris Gelfand from Belarus? (now representing Israel at international chess competitions) 17 July 2013, 5:14 am 2. Tanya Khovanova: George R, Israel Gelfand lived in Moscow for many years, so the chess player can’t be a close relative. Otherwise, I do not know. 17 July 2013, 6:39 am 3. Faibsz: in the mid-1970s Moscow, Volodya Grinberg was a star of every maths competition I went to. He didn’t make it to Moscow University because of anti-Semitism, but is now a full maths professor at UCLA. Do you know if he is related to Natalia? 19 July 2013, 4:29 am 4. Tanya Khovanova: I do not know who you are talking about. 20 July 2013, 4:34 pm 5.
alex: I was interested in the history of the IMO and learned by accident that in 2007 a girl took the gold. From an interview with her I also learned that there was a particularly hard problem that year. It was problem no. 3, and it was solved by only four participants, including that girl. I took a look at the problem and it appeared to me that the solution was obvious, and I wrote it down in the next 5 minutes. I never participated in math olympiads nor any other olympiads, and my background is not related to math. I suspect that there is an error in the solution that I am missing, so I will present the solution below so that someone might check it. If it doesn't contain an error, then I would say that the problem was fairly obvious.

In a mathematical competition some competitors are friends. Friendship is always mutual. Call a group of competitors a clique if each two of them are friends. (In particular, any group of fewer than two competitors is a clique.) The number of members of a clique is called its size. Given that, in this competition, the largest size of a clique is even, prove that the competitors can be arranged in two rooms such that the largest size of a clique contained in one room is the same as the largest size of a clique contained in the other room.

Let the largest clique C have the size 2n. We can divide this clique in two parts and send one part to room A and another part to room B. Let's call these cliques CA and CB. The rest of the participants will go to room A. In room A there might be a clique V of the size s>n. There are participants in clique V that do not belong to clique CB. (If that's not the case, then send the rest of the participants to room B and apply the same reasoning. This subset of the participants can't belong to both CA and CB.) Send one participant from room A belonging to clique L to room B. The size of L decreased by one. The size of the largest clique in B remains unchanged. Proceed until the size of L becomes n+1.
Send one more participant to B. The size of the largest clique in B will not exceed n, since the size of L was less than or equal to 2n. Therefore in both rooms the largest clique will have the size n. QED. 27 July 2013, 7:44 am

6. alex: correction: By “L” I mean “V”. 27 July 2013, 7:48 am
How to calculate a DEA number?

To confirm the validity of a DEA registration number, the pharmacist should add the first, third, and fifth digits together, then add the second, fourth, and sixth digits, multiplying that sum by 2. The right-most digit of the sum of these two calculations will correspond with the final, or seventh, digit of a valid number.
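The rule above is easy to mechanize; a minimal sketch (taking the seven digits of the registration number as a string):

```python
def dea_digits_valid(digits):
    """Check the 7-digit part of a DEA number: add the 1st, 3rd and
    5th digits; add the 2nd, 4th and 6th digits and double that sum;
    the last digit of the total must equal the 7th (check) digit."""
    d = [int(c) for c in digits]
    total = (d[0] + d[2] + d[4]) + 2 * (d[1] + d[3] + d[5])
    return total % 10 == d[6]

# "1234563": 1+3+5 = 9, 2*(2+4+6) = 24, total 33 -> check digit 3
```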
An elementary number theory question

Can anyone suggest a reference or a simple proof of the following? For every sequence of integers $a_1,...,a_n$, there exists a non-empty subsequence whose sum is divisible by $n.$

Here is a more general problem: Prove that for every $r$ and $n$ there is $k$ such that for every sequence of vectors $v_1,...,v_k \in {\mathbb Z}^r$ there exists a subsequence whose sum belongs to $(n{\mathbb Z})^r$. (I would think that $k=n^r$ should be enough...)

This looks like fodder for stackexchange, or better yet artofproblemsolving (I think the latter is a sufficient hint). Yes, your conjectured generalization works, even in an arbitrary finite group with $k$ being the group order. – Noam D. Elkies Jul 28 '12 at 1:50

In case you need more than the bound you mention for the higher-dimensional problem, the keyword to look for is the Davenport constant (of a finite abelian group). Indeed, the exact value guaranteeing this is open (for general n) if r >= 3; but for prime power n or r=2 it is known that the simple lower bound 1 + r(n-1) is in fact the exact value. And no larger example is known for any pair (r,n). However, for general finite abelian groups (i.e. different moduli in different coordinates) there are examples known that are larger than the 'obvious' lower bound. – quid Jul 28 '12 at 13:11

Thanks quid! The Davenport constant is indeed what I was looking for. As for the requested proof (in case somebody wonders), I realized that it is indeed very easy with my suggested bound, by contradiction: if $S_k=\sum_{i=1}^k v_i \ne 0$ mod $n$ for $k=1,...,n^r$ then $S_k=S_l$ for some $k<l$, and $S_l-S_k$ is the desired subsequence.
– Adam S Sikora Jul 28 '12 at 13:36

closed as too localized by Steven Landsburg, Andreas Blass, Anthony Quas, Joe Silverman, Douglas Zare Jul 28 '12 at 2:20
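The pigeonhole argument in the last comment is constructive; for the one-dimensional statement it even produces a contiguous block (a particular kind of subsequence). A quick sketch of the resulting algorithm, written here for illustration:

```python
def divisible_subsequence(a):
    """Return a non-empty contiguous block of `a` whose sum is
    divisible by n = len(a).  Among the n+1 prefix sums S_0 = 0,
    S_1, ..., S_n taken mod n, two must coincide (pigeonhole), and
    the elements between them sum to a multiple of n."""
    n = len(a)
    seen = {0: 0}                 # residue -> index of prefix sum
    s = 0
    for i, x in enumerate(a, start=1):
        s = (s + x) % n
        if s in seen:
            return a[seen[s]:i]   # sum of this slice is divisible by n
        seen[s] = i
    # unreachable: n+1 prefix sums but only n possible residues
```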
Mack <(E-Mail Removed)> wrote:
> Here is my problem I am trying to solve just for fun.
> The input is a number in the range 1-30000.
> The output is one line of text for each number in the range 2 to the number input.
> For each number it must print either "n is prime" or "n is not prime".
> Speed is not important because this is running on a very fast computer.
> I am not sure how to do this. Any ideas?

Sure, I'd be happy to help. Here you go. Enjoy! I know you said that speed is not important, but this program implements a blindingly fast O(n^n) algorithm to find prime numbers, using an elegant recursive method.

#include <stdio.h>

int _(int n, int m, int d)
{
    int r = m != n;
    for(int i = 0; d && (i < n); i++)
        r *= _(n, (m <= n) ? i*m : 0, d-1) | !_(i, 1, i);
    return r;
}

/* Print primes up to the requested value */
int main(int argc, char* argv[])
{
    int m;
    scanf("%d", &m);
    for(int n = 2; n <= m; n++)
        printf("%d is%s prime\n", n, _(n, 1, n) ? "" : " not");
    return 0;
}
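Joke aside, the original question only needs trial division up to the square root — sketched here in Python rather than C, purely for brevity:

```python
def is_prime(n):
    """Trial division up to sqrt(n); plenty fast for n <= 30000."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def report(limit):
    """One line of output per number in 2..limit, as the poster asked."""
    return ["%d is%s prime" % (n, "" if is_prime(n) else " not")
            for n in range(2, limit + 1)]
```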
Challenge: When @lgbasallote left OpenStudy, he left us with 44 theoretical dollars. Nine users fight over the money. Lord @shadowfiend decides that all the people will get a distinct amount of money based on a lottery and it must be a natural number. In how many ways can we distribute this money amongst these 9 users?
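For the curious, the count is small enough to compute exactly. This sketch assumes "natural number" includes 0; if every user must receive at least 1 dollar, the nine distinct amounts would sum to at least 1+2+...+9 = 45 > 44, and the answer is 0:

```python
from math import factorial

def count_distinct_distributions(total, people):
    """Count ordered hand-outs of `total` dollars to `people`
    distinguishable users with pairwise-distinct amounts >= 0.
    Distinct sorted amounts a_1 < ... < a_k correspond, after
    subtracting 0,1,...,k-1, to a partition of the leftover into
    at most k parts; multiply by k! to order the users."""
    base = people * (people - 1) // 2        # 0 + 1 + ... + (people-1)
    leftover = total - base
    if leftover < 0:
        return 0

    def parts(n, k, largest):
        # partitions of n into at most k parts, each <= largest
        if n == 0:
            return 1
        if k == 0:
            return 0
        return sum(parts(n - p, k - 1, p)
                   for p in range(1, min(n, largest) + 1))

    return parts(leftover, people, leftover) * factorial(people)
```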
The Rabin-Karp Algorithm

Next: The Knuth-Morris-Pratt Algorithm Up: String Searching Previous: The General Problem

We can use the technique of hashing (explored earlier in the semester) as a way of testing whether two strings are the same- if two strings hash to the same value, then they might be the same string, but if they hash to different values then they are definitely not the same. This property can be used in a string searching algorithm as shown in algorithm 8.2. However, at first analysis, this seems like a terrible idea: good hash functions need to look at every character in the string. If the length of the pattern is M, and the hash function looks at each character in the substring, then computing a hash of a substring of length M takes (at least) O(M) time. If the hashing function looks at every character in the string, then this algorithm may need to perform an O(M) calculation at each of the N-M possible positions in the text, giving an algorithm that is O(M(N-M)). If you recall from section 8.1.1, O(M(N-M)) is the time complexity of the worst case of the naive algorithm! This is certainly a bad sign- it would seem that by using hashing, we're dooming ourselves to always do the amount of work that would otherwise be done by the worst case of the brute-force algorithm. As you might guess, however, there is a way around this problem. Although there is no general way to reduce the amount of work required to perform a single hash, there exist hashing functions that can be computed very efficiently for a group of strings that are related to each other. In this algorithm, each substring is closely related to the next substring: the next substring simply has one character chopped off of one end and a new character added to the other end. Can we devise a good hashing function that takes advantage of this fact? There are several, but one that is commonly used is shown in figure 8.1.
This hashing function treats the string as a large number, where each character in the array represents a digit in a number in base b. The modulo of this number is then taken by p to give the hash value. In order to get good results, this p should be chosen so that it and b are relatively prime; this is easily accomplished by having b or p be prime.

Figure 8.1: A hashing function to compute the hash of a string a[0..n-1]:

h(a) = (a[0]*b^(n-1) + a[1]*b^(n-2) + ... + a[n-1]) mod p

where the size of the character set is b and p is relatively prime to b.

The hash function given in figure 8.1, as stated above, is clumsy (or impossible) to implement in C. The problem is that integers in C have a fixed number of bits (usually 32 or 64, though 16-bit machines are not unusual). Depending on the values of b and n, it may be inevitable that the term b^(n-1) will be too large to fit into an int (or any ordinary C type). For example, if b = 256, then n must be 4 or less in order for the result of the summation to fit into a 32-bit integer. Luckily, we can exploit an important property of the modulo operation- taking the modulus of intermediate steps of an addition, multiplication, or subtraction calculation has no effect on the outcome of the calculation. For example, for any x and y:

(x + y) mod b = ((x mod b) + (y mod b)) mod b
(x * y) mod b = ((x mod b) * (y mod b)) mod b

This means that by performing the modulo operation early and often, we can compute the modulus of the results of operations on very large numbers without actually ever worrying about the size of the numbers, as long as the modulus we are using is not very large. For example, imagine that we want to compute (x * y) mod b, where x and y are very large numbers, but b is a small number. The product x * y will be an extremely large number- but we need not ever deal with it, because (x * y) mod b = ((x mod b) * (y mod b)) mod b, and by definition, the result must always be somewhere in the range 0 to b - 1. By aggressively taking the modulus during intermediate steps, we can avoid using very large numbers at all. Imagine that we want to compute 4548 mod 7.
It is useful to first compute several powers of 10, modulo 7:

10^0 mod 7 = 1
10^1 mod 7 = 3
10^2 mod 7 = 2
10^3 mod 7 = 6

Given these values, it is easy to compute 4548 mod 7:

4548 mod 7 = (4*(10^3 mod 7) + 5*(10^2 mod 7) + 4*(10^1 mod 7) + 8*(10^0 mod 7)) mod 7
           = (24 + 10 + 12 + 8) mod 7 = 54 mod 7 = 5

The trick that makes the Rabin-Karp string searching algorithm efficient is the fact that we have chosen a hash function which is easy to compute for adjacent substrings- once we know what the hash value of the first substring is, we can compute the hash value of the next in time O(1). This is accomplished by taking advantage, once again, of the properties of the modulo operation. For example, imagine that the text and pattern string are drawn from the alphabet consisting of only the characters 0 through 9. The text is ``234591'', and we are searching for a pattern of length 4. We have already computed the hash of ``2345'' (2345 mod 7 = 0). How can we use this to help us compute the hash of ``3459''? Observe that 2345 can be transformed into 3459 by the following steps:

1. Multiply 2345 by 10, producing 23450.
2. Subtract 20000 from 23450, leaving 3450. This erases the effect of the ``2'' digit, which has ``shifted'' off to the left.
3. Add 9, giving 3459. This adds in the ``9'' digit.

Using this method results in the following calculation:

3459 mod 7 = ((2345 mod 7)*10 - 2*(10^4 mod 7) + 9) mod 7 = (0*10 - 2*4 + 9) mod 7 = 1

Similarly, once we know 3459 mod 7, it is easy to use this knowledge to compute 4591 mod 7, in precisely the same manner as above:

4591 mod 7 = ((3459 mod 7)*10 - 3*(10^4 mod 7) + 1) mod 7 = (1*10 - 3*4 + 1) mod 7 = -1 mod 7 = 6

We will now explore how to implement this algorithm in C, for strings consisting of C chars. A hashing function based on the function given in figure 8.1 is shown in figure 8.2. Also note that the modulus m is not defined in the function; it is a parameter that can be specified at runtime.

Figure 8.2: A hashing function.

The actual string matching algorithm is given in figure 8.3.

Figure 8.3: The Rabin-Karp String Searching Algorithm

The first action taken is to compute b^(M-1) mod m, which will be used later to figure out what amount to subtract from the hash value as characters are ``shifted off'' to the left. To gracefully deal with the case where M is large, the divide-and-conquer algorithm for computing exponentials is used.
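The constant-time update just described can be sketched in a few lines (Python for brevity; base 10 with the small modulus 7 used in the digit example — any modulus coprime to the base works the same way):

```python
def rolling_hashes(text, m_len, base=10, mod=7):
    """Yield hash(text[i:i+m_len]) for every window of a digit
    string, computing each new hash from the previous one in O(1):
    drop the outgoing digit's contribution, multiply by the base,
    add the incoming digit."""
    lead = pow(base, m_len - 1, mod)        # weight of the outgoing digit
    h = 0
    for c in text[:m_len]:                  # hash the first window directly
        h = (h * base + int(c)) % mod
    yield h
    for i in range(m_len, len(text)):
        h = ((h - int(text[i - m_len]) * lead) * base + int(text[i])) % mod
        yield h
```

Each yielded value agrees with hashing the window from scratch, which is the whole point of the rolling formulation.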
This is the same as the algorithm you explored in the first assignment, except that the modulus is taken early and often to prevent the result from becoming too large. The code for the resulting function is shown in figure 8.4.

Figure 8.4: An exp function that computes b^e mod m.

The next action is to laboriously hash the pattern and the substring of length M at shift zero of the text. This is the only time we'll hash a substring of the text using the hashing function; all subsequent hashes will be computed by the method described previously. Before we begin working our way down the string, however, we must check whether shift zero itself is a match. If it is, then we are finished. Otherwise, we loop through the rest of the string, computing each hash based on the previous (using the nextHash function, shown in figure 8.5).

Figure 8.5: The nextHash function.
The implementation of the Rabin-Karp algorithm shown in the previous section is faithful to the original algorithm, but has a flaw that makes it somewhat suboptimal on many modern computers- it relies heavily on the modulo operation, which is usually relatively slow. However, the modulus operation can be eliminated from the algorithm by choosing a different hashing function that has the same property that it is easy to compute from the previous string, yet does not require so many modulo operations. One way to generate such a function is to keep the same basic function, but always choose a modulus that is a power of 2, so that bit masks can be used instead of the more expensive % operation. Of course, using a modulus that is a power of 2 will have disastrous consequences for the hashing function if the base b used by the function is also a power of 2- which it often will be. However, we have some freedom here as well- all that we need to do is to choose a b that is relatively prime with respect to m- any odd number larger than the number of characters in the alphabet will do. In this case, using a b of 257 is a safe bet.

A function that implements this change is shown in figure 8.6. Note that this function completely ignores its b and m parameters and instead always uses b = 257 and m = 1024. (These parameters are retained only so this function can be used as a drop-in replacement for stringSearchRK.)

Figure 8.6: A faster implementation of the Rabin-Karp algorithm, which avoids using the mod operation.

The Rabin-Karp algorithm, as presented here, can be very slow if the text contains a lot of ``false matches'': substrings that hash to the same number as the pattern and therefore cause an expensive memcmp to be performed. An intelligent (or lucky) adversary who knows b and m can create texts that make this algorithm perform terribly. One solution is to choose m and b randomly at runtime, and whenever the number of false matches is high, randomly choose a new m and/or b.

Dan Ellard, Mon Jul 21 22:30:59 EDT 1997
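The same power-of-two trick is easy to sketch outside of C as well. The function below is an illustration in Python, not the book's figure 8.6; the base 257 and mask 1023 mirror the b = 257, m = 1024 choice described above, so every reduction is a single AND:

```python
def search_rk(text, pattern, b=257, mask=1023):
    """Rabin-Karp search with a power-of-two modulus: `h & mask`
    replaces `h % m`.  Hash collisions are possible, so a matching
    hash is confirmed with a direct string comparison.
    Returns the first match index, or -1 if there is none."""
    m = len(pattern)
    if m == 0 or m > len(text):
        return -1
    lead = pow(b, m - 1, mask + 1)          # weight of the outgoing char
    hp = ht = 0
    for i in range(m):                      # hash pattern and first window
        hp = (hp * b + ord(pattern[i])) & mask
        ht = (ht * b + ord(text[i])) & mask
    for s in range(len(text) - m + 1):
        if hp == ht and text[s:s + m] == pattern:
            return s
        if s + m < len(text):               # roll the window forward
            ht = ((ht - ord(text[s]) * lead) * b + ord(text[s + m])) & mask
    return -1
```

Because equal strings always hash equally and every hash hit is verified, the result matches a naive search; the mask only speeds up the arithmetic.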
Finding an inverse function

If you draw the line y = x, the inverse of a function should be mirrored across that line. That is, a point (x, y) lies on the graph of f exactly when (y, x) lies on the graph of f⁻¹. But I still don't understand the use of the quadratic equation with y as a variable and not y set to zero.

y isn't the variable. x is:

yx² - Sx - SL = 0
a = y, b = -S, c = -SL
ax² + bx + c = 0

There are two ways to find the inverse. Take y = f(x), and swap y with x, then solve for y. This is more natural to most students because they are used to having y as the dependent variable and solving for it. I prefer to take the other route. That is, keep x and y in the same place and just solve for x. You will find the same exact inverse, only x will be the dependent variable instead of y.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
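The "solve for x" route can be checked numerically. This sketch assumes the quadratic in the exchange came from a curve of the form y = S(x + L)/x² — consistent with a = y, b = −S, c = −SL — and the sample constants S and L below are chosen purely for illustration:

```python
import math

def solve_for_x(y, S, L):
    """Treat y*x^2 - S*x - S*L = 0 as a quadratic in x
    (a = y, b = -S, c = -S*L) and return the positive root,
    which is the inverse branch for positive x, y, S, L."""
    disc = S * S + 4 * y * S * L            # b^2 - 4ac
    return (S + math.sqrt(disc)) / (2 * y)

# round trip: f(x) = S*(x + L)/x^2, then recover x with the formula
S, L = 2.0, 3.0
f = lambda x: S * (x + L) / x**2
```

Feeding f(x) back through solve_for_x returns the original x, which is exactly the mirrored-graph picture from the first post.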
Translational to rotational energy transfer in molecule-surface collisions

FIG. 1. Final rotational energy resolved intensity at specular scattering angles for several incident energies and surface temperatures. (a) Data taken from Ref. 9. (b) Data taken from Ref. 10. Curves are calculations.

FIG. 2. Normal incident energy dependence of the final rotational temperature for specular scattering. Solid and long-dash curves are theory; the dotted curve is a least-squares fit to the experimental data. Data are taken from Ref. 9.

FIG. 3. Final rotational energy resolved intensity for specular scattering geometry. (a) Data are from Ref. 11. (b) NO scattering from graphite; data are from Ref. 12.

FIG. 4. Final rotational energy resolved intensity for specular scattering of NO from a Pt(111) substrate covered with 0.5 ML of CO. Data are from Ref. 13.

FIG. 5. The final rotational energy distribution under specular geometry conditions for several incident energies and surface temperatures. Symbols are experimental data from Ref. 14; curves are theory.

FIG. 6. The rotational temperature as a function of incident energy. Symbols are experimental data from Ref. 14; the curve is theory.

FIG. 7. The final rotational temperature as a function of surface temperature. Symbols are experimental data from Ref. 14; curves are theory.

FIG. 8. The final rotational energy distribution, with theory curves for several initial rotational states. Symbols are experimental data from Ref. 14.

FIG. 9. Final average translational energy as a function of final rotational energy for several incident energies and angles, taken at the specular scattering angle, for NO scattering from Ag(111). In each panel the experiment and theory for a given incident energy and angle are compared. Data are taken from Ref. 10.

FIG. 10. The normalized most probable translational energy, and the sum of the most probable and the rotational energy, as functions of the final rotational energy for two incident energies. Symbols are experimental data from Ref. 14; lines are theory.

FIG. 11. Same as Fig. 10, except with calculations for an effective surface mass.
Looking for hard and rare logic puzzles

Hi everyone,

For a very long time I have looked for hard puzzles on the internet. The kind of puzzle I like is actually very difficult to find. I'll explain: I like puzzles that can be explained easily, that seem impossible to solve a priori, and whose answers make you think intelligence or logic is awesome. I'll give you examples of the only four enigmas I know that satisfy those criteria (each one can be found on this forum):

1. Ten people are placed in independent boxes at t=0. After a random time interval, a machine takes a random person and puts him in a room where there is just a switch (nothing else). Then, after a random time interval, it puts the person back in his box. At the beginning the switch is off. These people must elaborate a strategy, before being put in the boxes, so that it is possible for one of them to say, at some given moment, "now I know that the nine others have been put in the room". Is it possible?

2. A monastery is full of monks who have made a vow of silence (there is more than one monk). One day some monks become ill. The only symptom is a blue spot on the forehead, so a monk doesn't know when he is ill, but he can see the other ones who are. When a monk knows he is ill, he will immediately leave the monastery and go to the hospital. The monks can see everyone else once a day during the great dinner. After a year has passed, every sick monk has gone to the hospital, without having broken their vow of silence. How many monks were ill at the beginning?

3. 100 mathematicians are imprisoned by an evil dictator. One day he imagines a silly game: he puts the mathematicians in a row so that the last one can see the 99 that are in front of him, and so on. He puts a colored hat on the head of each mathematician. There are three colors: red, yellow and green. And then he says: "Each one of you, starting from the last one (the one who can see all the others), will try to guess the color of his hat.
If his guess is right he will be freed; if he is wrong he will be executed." What strategy should the mathematicians choose to save as many of them as possible?

4. (This one requires a real grounding in mathematics, but I still found the result amazing.) 500 students are gathered in a room. Their headteacher starts speaking: "In the following room there are 500 numbered lockers. Each locker contains the name of a student. After your name is called you will go into this room, and you will open 250 lockers. If any one of you can't find his name among those lockers, the game stops. You will go back to your room and we will start again tomorrow, after I have rearranged the names in the lockers randomly. You will be able to go on vacation only when everyone has found his name." Can the students adopt a method that will probably get them on vacation within a week?

This topic is addressed to people who already know those puzzles and who can give me links to similar ones. It is not a topic for answering the puzzles I gave.

PS: I read the puzzle about the mathematicians in a circular prison: but, though it is a really nice puzzle, I found the answer much too complex to be explained orally.

Re: Looking for hard and rare logic puzzles

I really like this one. It works for any number of people, so, for instance, you could simplify it to: Two people are standing face to face, each wearing a hat that is either black or white. They must guess, in secret, what colour hat they are wearing. One of them must get the answer right. What is their strategy?

One guesses the same colour as the other person is wearing; the other guesses the opposite colour from the other person.

Re: Looking for hard and rare logic puzzles

Can anyone suggest where I might find a well-written version of puzzle 4 in the original post? I tried searching for it but got a 'locker puzzle' that was just some numbers thing with factors.
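The two-person version in the reply above can be verified exhaustively — there are only four hat assignments, so a quick sketch suffices:

```python
from itertools import product

def someone_is_right(hats):
    """hats = (colour of A, colour of B), colours coded 0 or 1.
    A guesses the colour she sees on B; B guesses the opposite of
    the colour he sees on A.  Exactly one of them is always right."""
    a_guess = hats[1]          # A sees B's hat
    b_guess = 1 - hats[0]      # B sees A's hat and guesses the opposite
    return a_guess == hats[0] or b_guess == hats[1]

# every assignment of black/white to the two heads works
assert all(someone_is_right(h) for h in product((0, 1), repeat=2))
```

The strategy works because A is right exactly when the hats match and B exactly when they differ, and one of those two cases always holds.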
Chuff wrote: I write most of my letters from the bottom

Re: Looking for hard and rare logic puzzles

I have a very good version in French, because it was an exercise in logic at the École Polytechnique that a friend of mine showed to me. I could try to translate it if you want.

Re: Looking for hard and rare logic puzzles

Goldstein wrote: Can anyone suggest where I might find a well-written version of puzzle 4 in the original post? I tried searching for it but got a 'locker puzzle' that was just some numbers thing with factors.

It appears at and has a separate, linked, solution thread.

Re: Looking for hard and rare logic puzzles

Here is the link to the French version, with a detailed solution: http://shkdee.info/fichiers/EnigmeCombinatoire.pdf. However I doubt a Google translation would give a good result...

Edit: here is my translation. I am not a native English speaker, so please excuse my mistakes...

You are part of a group of 500 people, gathered in a big room. Each one is given a different number between 1 and 500 (so no two people have the same number); let's assume you got the number 37, and you are the only one to have it. In another room next to the one where your group is gathered, there are 500 lockers. Each locker contains a number between 1 and 500, written on a piece of paper, without any duplicate: no two lockers contain the same number. So we have 500 people holding 500 different numbers, and 500 lockers containing 500 different numbers. One at a time, each of the 500 people from your group will go to the room with the lockers, and will open and then close 250 lockers to see the numbers hidden within them. If he sees in one of these lockers the number that corresponds to his, he wins; otherwise he loses. In either case he returns to the first room and is not allowed in any manner to communicate with the other people from his group (so that, entering the room with the lockers, no one can have a clue about which locker contains his number).
For example, let's assume it is your turn to go: you open a first locker, and you find the number 412. It is not your number (remember, your number is 37). You close the locker and open a second one; you find the number 125. This is still wrong, so you close it, open a third one, etc., and you repeat this 250 times. If at any moment you have found the number 37 in a locker you won; otherwise you lost. Let's assume that, during one day, everyone finds the time to open their 250 lockers. Your aim, as a group of 500 people, is that everyone wins during the same day: you need every one of the 500 people to have opened, at least once, the locker containing his number, among the 250 that he opened. If this aim is reached, you all win a trip to Australia and the game is over; otherwise everyone has lost and you start over the next day, after the numbers in the lockers have been randomly shuffled during the night (so that, once again, no one knows a priori which locker contains his number, even if they have a very good memory). You were about to despair, realising how impossible this task was, when suddenly one of the people in your group (probably a polytechnician...) says: "If everyone follows my method, we will have won a trip to Australia within a week, with 9 chances out of 10!"

What is his method?

Edit edit: However, let's go back to the topic. Does anyone know puzzles corresponding to the ones I gave?

@jestingrabbit: I'm trying to solve the puzzle you gave; I think it is exactly what I was looking for! (unless the solution is disappointing)

Edit^3: @jestingrabbit: OK, I found the solution (only because I already knew the solution to the other hat problem) and I confirm: it is exactly the kind of puzzle I was looking for :p
However I doubt a google translation would have a good result... Edit: here is my translation. I am not a native English speaker, so please excuse my mistakes...

You are part of a group of 500 people, gathered in a big room. Each one is given a different number between 1 and 500 (so no two people have the same number); let's assume you got the number 37, and you are the only one to have it. In another room next to the one where your group is gathered, there are 500 lockers. Each locker contains a number between 1 and 500, written on a piece of paper, without any duplicate: no two lockers contain the same number. So we have 500 people holding 500 different numbers, and 500 lockers containing 500 different numbers. One after another, each of the 500 people from your group will go to the room with the lockers and will open and then close 250 lockers to see the numbers hidden within them. If he sees in one of these lockers the number corresponding to his own, he wins; otherwise he loses. In every case he returns to the first room and is not allowed to communicate in any manner with the other people from his group (so that, on entering the room with the lockers, no one can have a clue about which locker contains his number). [...] "If everyone follows my method, we will win a trip to Australia within a week with 9 out of 10 chances!" What is his method?

This is impossible as written. For the first person, the probability of finding their name regardless of method used is 1/2. That means that no method can guarantee a chance of winning above 50%, which can't approximate the 90% given in the puzzle.

Re: Looking for hard and rare logic puzzles

Well, there was one that made the xkcd newsblag a while back: You are presented with two indistinguishable envelopes and allowed to choose one. Each envelope contains a real number, and the numbers are different. You open your envelope and see the number inside. Now you must guess whether the number in the other envelope is greater or less than the number in your envelope. What strategy can you use to have a greater than 50% chance of being right?

And there are a few that invoke the axiom of choice, such as: I am thinking of a function f:ℝ→ℝ. You pick a real number c and I tell you the value of my function for all x except for c. You must guess f(c). What strategy can you use to maximize your probability of being right, and what is that maximal probability?

Small Government Liberal

Re: Looking for hard and rare logic puzzles

Qaanol wrote: I am thinking of a function f:ℝ→ℝ. You pick a real number c and I tell you the value of my function for all x except for c. You must guess f(c).
What strategy can you use to maximize your probability of being right, and what is that maximal probability?

Depends on the distribution that you chose f from. There is no strategy here that works for any distribution; if you independently choose f(x) uniformly from [0,1] for each x in ℝ, then my probability of being right will always be 0.

Re: Looking for hard and rare logic puzzles

I rather like this one.

Re: Looking for hard and rare logic puzzles

LSK wrote: This is impossible as written. For the first person, the probability of finding their name regardless of method used is 1/2.

Is it? (Disclaimer: I think I've seen this puzzle before.) The very first locker you open is effectively like rolling a 500-sided die. But you're not required to open lockers "blindly". You're not required, at the beginning, to choose a random collection of 250 lockers.

Re: Looking for hard and rare logic puzzles

LSK wrote: Aro wrote: [...] "If everyone follows my method, we will win a trip to Australia within a week with 9 out of 10 chances!" What is his method?

This is impossible as written. For the first person, the probability of finding their name regardless of method used is 1/2. That means that no method can guarantee a chance of winning above 50%, which can't approximate the 90% given in the puzzle.

LSK, they get 7 tries, not just 1.

I'm looking forward to the day when the SNES emulator on my computer works by emulating the elementary particles in an actual, physical box with Nintendo stamped on the side. "With math, all things are possible."
—Rebecca Watson

Re: Looking for hard and rare logic puzzles

skullturf wrote: LSK wrote: This is impossible as written. For the first person, the probability of finding their name regardless of method used is 1/2.

Is it? (Disclaimer: I think I've seen this puzzle before.) The very first locker you open is effectively like rolling a 500-sided die. But you're not required to open lockers "blindly". You're not required, at the beginning, to choose a random collection of 250 lockers.

This doesn't matter. Every individual still has a 1/2 probability of finding his or her own number. The trick to maximizing the probability of success induces a correlation between the various participants: if anyone fails, then more than half of them will fail. The probability of success with the solution I've heard is approximately 1 - ln(2), or 30.7%. Looking at the puzzle more closely, this is done every day, and the group member quoted a 90% probability of succeeding within a week. The probability of winning at least once in 7 trials is about 1 - (ln 2)^7 ≈ 92%.

Re: Looking for hard and rare logic puzzles

@Qaanol: I don't quite understand your puzzle. How can you elaborate a strategy when you can just say "more" or "less"? Is knowing the axiom of choice a necessary condition to find the answer to the other one? @swfc: this puzzle seems interesting; however, I find the description too complex (it requires basic notions of computer science like transition functions, which makes the puzzle difficult to explain to everyone). I'll try to find the solution.

Re: Looking for hard and rare logic puzzles

Aro wrote: @Qaanol: I don't quite understand your puzzle. How can you elaborate a strategy when you can just say "more" or "less"?

There are more strategies you could use than just "always say the other number is greater" and "always say the other number is less". (Once you find the most general way to describe a strategy, you've almost solved the problem.)
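The ≈ 1 - ln 2 per-day success rate quoted above is what the well-known cycle-following strategy achieves: assuming the group agrees on a fixed numbering of the lockers beforehand, person k first opens locker k, then the locker labelled by the number just found, and so on; the day succeeds exactly when every cycle of the random permutation has length at most 250. The Python below is my own Monte Carlo sketch of that claim, not something posted in the thread:

```python
import random

def day_succeeds(n=500, tries=250):
    # lockers[i] = the number hidden in locker i (a uniform random permutation).
    lockers = list(range(n))
    random.shuffle(lockers)
    # Under the cycle-following strategy, person k finds their number iff the
    # permutation cycle through position k has length <= tries, so the whole
    # day succeeds iff every cycle is short enough.
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        length, x = 0, start
        while not seen[x]:
            seen[x] = True
            x = lockers[x]
            length += 1
        if length > tries:
            return False
    return True

random.seed(1)
days = 5_000
rate = sum(day_succeeds() for _ in range(days)) / days
print(rate)  # close to 1 - ln 2 ≈ 0.31 per day; over 7 days that is ~92%
```

Note that if everyone instead opened 250 lockers independently at random, the success probability per day would be astronomically small (2^-500), which is exactly the point of correlating everyone's failures.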
Re: Looking for hard and rare logic puzzles

Goplat wrote: Qaanol wrote: I am thinking of a function f:ℝ→ℝ. You pick a real number c and I tell you the value of my function for all x except for c. You must guess f(c). What strategy can you use to maximize your probability of being right, and what is that maximal probability?

Depends on the distribution that you chose f from. There is no strategy here that works for any distribution; if you independently choose f(x) uniformly from [0,1] for each x in ℝ, then my probability of being right will always be 0.

You have no information about the distribution from which I chose f. The optimal strategy has probability greater than 0 of success.

Aro wrote: Is knowing the axiom of choice a necessary condition to find the answer to the other one?

Personally I reject the axiom of choice, but I recognize that if one accepts the axiom of choice then the puzzle has a solution. Well, to actually implement the strategy you would need to be able to find an appropriate choice function, not just be satisfied that one exists, but all we need to do is describe the strategy, not actually utilize it. I do not actually know if the AoC is necessary to solve this puzzle, but it is sufficient.

Re: Looking for hard and rare logic puzzles

Aro wrote: Is knowing the axiom of choice a necessary condition to find the answer to the other one?

If you don't know the axiom of choice, you would be very unlikely to come up with the correct answer. To even prove that there is such a strategy, you need something like the axiom of choice. In any case, the intended strategy requires deciding infinitely much information in advance. The axiom of choice says: Given a collection of sets {X[i] : i in I} indexed by a set I, there is a function f from I to the union of the collection, such that for each i in I, the value f(i) is an element of X[i].
Or to put it more colloquially (but less precisely): given a collection of sets, you can simultaneously choose one element from each set.

Re: Looking for hard and rare logic puzzles

Qaanol wrote: I am thinking of a function f:ℝ→ℝ. You pick a real number c and I tell you the value of my function for all x except for c. You must guess f(c). What strategy can you use to maximize your probability of being right, and what is that maximal probability?

Ugh, this hurts my head, but I'm pretty sure the answer is 1, and in fact I think my solution works to leave out countably many points and guess them all with probability 1... which REALLY hurts my head.

addams wrote: This forum has some very well educated people typing away in loops with Sourmilk. He is a lucky Sourmilk.

Re: Looking for hard and rare logic puzzles

mike-l wrote: Qaanol wrote: I am thinking of a function f:ℝ→ℝ. You pick a real number c and I tell you the value of my function for all x except for c. You must guess f(c). What strategy can you use to maximize your probability of being right, and what is that maximal probability?
Ugh, this hurts my head, but I'm pretty sure the answer is 1, and in fact I think my solution works to leave out countably many points and guess them all with probability 1... which REALLY hurts my head.

Only countably many?

Re: Looking for hard and rare logic puzzles

Qaanol wrote: mike-l wrote: [...]

Only countably many?

I'm not sure how to pick a random set of measure 0 in a nice way. I guess I could pick the Cantor set shifted/scaled by random factors and do it with uncountably many points as well.

Re: Looking for hard and rare logic puzzles

Well done. Wait, your method involves picking a random set? Maybe it's different from my method. Or did you just mean for the set of points to be omitted, which need not be random? Edit 2.

Re: Looking for hard and rare logic puzzles

mike-l wrote: I'm not sure how to pick a random set of measure 0 in a nice way.

It works for any probability measure on the space of negligible sets such that, given a negligible set, the measure of the class of negligible sets intersecting that set is 0.

Qaanol wrote: Now I'm going to think of a function g:ℝ→ℝ and not give you its value anywhere. You come up with a bijection h from an uncountable null set K ⊂ ℝ, for instance the Cantor set, to ℝ. You then ask me to extend my function g to a function f:ℝ→ℝ such that for all x in K, f(x) = g(h(x)), and I can define f however I like outside K. I oblige. Maybe I choose values for f uniformly at random from [0, 1] (or perhaps even {0, 1}) for each x outside K. In any case, I then tell you the values of f(x) for all x outside K. Do you now know g with probability 1?

No. You can just pick f so that f(x)=1 for x outside K, regardless of the values on K. Telling me the values of f outside K doesn't do anything.

Qaanol wrote: Wait, your method involves picking a random set? Maybe it's different from my method. Or did you just mean for the set of points to be omitted, which need not be random?

Your method better also...

Re: Looking for hard and rare logic puzzles

skeptical scientist wrote: Your method better also

Thanks, I needed that. Or I needed sleep. But we'll go with "I needed that".

Re: Looking for hard and rare logic puzzles

Qaanol wrote: You are presented with two indistinguishable envelopes and allowed to choose one. [...] What strategy can you use to have a greater than 50% chance of being right?

I'm lost here... I've come to some conclusions, but I'm still blocked. I think that the only "strategy" would be: I will say "more" or "less" after comparing the number I saw with a previously decided number a. The value of a could be easily decided if we knew the probability distribution used to choose the two real numbers, which is not the case...

Re: Looking for hard and rare logic puzzles

Here's a site I've been frequenting for some years. At the very least, your first 3 examples are there in some form or another, though I don't recognize the 500 students riddle. I would recommend going through the posted riddles (which haven't been updated in years) in at least the easy/medium/hard sections, as there are some "easy" riddles that demonstrate some interesting concepts. I wouldn't say you'll find many riddles that meet your standards, but I wouldn't presume to try to narrow the list down for you. With somewhat less frequency, regular members have been able to pose good riddles in the forums, but those will be even harder to find. One of my favorites from this set is the "Gods of Gibberland", which is a simple variation on the Truth/Lie/Random puzzle, but unambiguously solvable. Your #1 riddle below has over 50 pages of discussion spanning multiple years and various refinements of the methods used to solve it (though their version uses 100 prisoners and asks for an optimal solution rather than a "possible" solution), and is one of the most popular riddles on the site.

Re: Looking for hard and rare logic puzzles

Thank you for your website Trebla, I didn't know it. I'll try to solve the "Gods of Gibberland" puzzle.

Qaanol wrote: I am thinking of a function f:ℝ→ℝ. [...] What strategy can you use to maximize your probability of being right, and what is that maximal probability?

Maybe I lack the necessary mathematical knowledge to solve this one. Whatever, here is my try:

Re: Looking for hard and rare logic puzzles

Aro wrote: Qaanol wrote: You are presented with two indistinguishable envelopes and allowed to choose one. [...] I'm lost here... [...] The value of a could be easily decided if we knew the probability distribution used to choose the two real numbers, which is not the case...

Big hint: Use a nondeterministic strategy.

Re: Looking for hard and rare logic puzzles

Aro wrote: [...] I think that the only "strategy" would be: I will say "more" or "less" after comparing the number I saw with a previously decided number a. [...]

There are strategies that work even if the person putting the numbers in the envelopes already knows ahead of time what your strategy will be and chooses numbers in a deliberate attempt to foil it.

Re: Looking for hard and rare logic puzzles

Qaanol wrote: There are strategies that work even if the person putting the numbers in the envelopes already knows ahead of time what your strategy will be and chooses numbers in a deliberate attempt to foil it.

However, if the opponent knows your strategy in advance, he can make your odds of picking the greater number less than 50%+ε, for any positive ε he chooses.

Re: Looking for hard and rare logic puzzles

Qaanol wrote: I am thinking of a function f:ℝ→ℝ. You pick a real number c and I tell you the value of my function for all x except for c. You must guess f(c). What strategy can you use to maximize your probability of being right, and what is that maximal probability?

Everyone so far has been somewhat restrained on this, which gave me the pleasure of working it out for myself (an all-too-rare feeling now that my full-time job has nothing to do with maths). I think this is the method you're all hinting at: This is probably the funniest bit of maths I've seen in a long time! It's enough to turn me into a constructivist (well, not really). Thanks for posting.

Re: Looking for hard and rare logic puzzles

rhino wrote: [...] I think this is the method you're all hinting at: This is probably the funniest bit of maths I've seen in a long time! [...] Thanks for posting.

Nicely done!

Re: Looking for hard and rare logic puzzles

Aro wrote: 10 people are placed in independent boxes at t=0. After a random time interval, a machine takes a random person and puts him in a room where there is just a switch (nothing else). Then, after a random time interval, it puts the person back in his box. At the beginning the switch is off. These people must elaborate a strategy, before being put in the boxes, so that it is possible for one of them to say, at any given moment, "now I know that the nine others have been put in the room". Is it possible?

A while ago, I posted that in another forum - it turned out that for large numbers of prisoners there are a lot of different strategies to speed that up compared to the conventional solution. To make things even more complicated, you can add a second light switch (or any other number of possible states). Too bad that your list is nearly what I collected in the past. Maybe too easy: n prisoners get a hat which is black (B) or white (W); they cannot see their own hat. They are not allowed to communicate in any way and get the task of lining up in a row with separated colors (something like WWWWWBBBBBBBBB). The only action they can take is "get in the line" (or "create the line" for the first). How do they do that? A door with several locks is guarded by 9 guards. They must be able to open all the locks if and only if 5 or more guards (with their own keys) are present. How many locks and keys are necessary?

Re: Looking for hard and rare logic puzzles

This thread reminds me of this. This puzzle and this one are both classics as well (I like to give the second one along with the challenge of finding a way to trick the devil despite apparently being in a losing position).

Re: Looking for hard and rare logic puzzles

Goplat wrote:

That only tells us that the value at a random point need not be independent of the other values, even though the value at each point is. AoC is weird, especially when you're doing things like switching the order you do things in (here we're picking uncountably many random numbers, then randomly picking from them, which is different than picking a random point and then randomly picking numbers). I feel like it's akin to the AoC guaranteeing immeasurable sets.

Re: Looking for hard and rare logic puzzles

Goplat wrote:

Re: Looking for hard and rare logic puzzles

Qaanol wrote: Well, there was one that made the xkcd newsblag a while back: You are presented with two indistinguishable envelopes and allowed to choose one. Each envelope contains a real number, and the numbers are different. You open your envelope and see the number inside. Now you must guess whether the number in the other envelope is greater or less than the number in your envelope. What strategy can you use to have a greater than 50% chance of being right?
Am I missing something? There is no uniform distribution over the real numbers. That does not mean that the puzzle is impossible, it only means that your solution does not work. 0 is not special in any way. I think we already had a correct solution here in the thread - if not, the blag should link to one somewhere. There is a real solution to this. Re: Looking for hard and rare logic puzzles Solution to the envelopes problem: Re: Looking for hard and rare logic puzzles But your probability is just larger than 0.5 on average or if you look at specific numbers. There is a way to get a probability larger than 0.5 for every pair of numbers.
Solving Equation Worksheets

Free equation worksheets for algebra learners help to practice with equations involving one step, two steps and multiple steps. One-step, two-step, and systems-of-equations worksheets are provided on distinct web pages; multi-step equation worksheets are listed here. Also find pages related to equations at the bottom of this page.

1-Step and 2-Step Equations Worksheets

One-Step Equation Worksheets: Practice the skill of balancing an equation by adding, subtracting, multiplying or dividing.
Two-Step Equation Worksheets: Solving equations in two steps involves two math operators, addition/subtraction and multiplication/division, with equations on both sides.

Multi-Step Equations Worksheets

The next level after one-step and two-step is multi-step equation worksheets. Here you can find multi-step equation worksheets with different number coefficients such as integers, fractions and decimals. This is good practice for algebra learners and for those who seek an advanced level of solving equations.

Linear and Quadratic Equations Worksheets

Yet another set of useful worksheets for algebra students, based on linear equations and quadratic equations.
Linear Equation Worksheets: Different types of worksheets based on linear equations are provided here. You may find it interesting when you check out the above link.
Quadratic Equation Worksheets: Quadratic equations can be solved in so many ways, such as the formula method, factoring method, perfect square method, graphing method, etc.

Related Equations Worksheets
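A two-step equation of the form ax + b = c, the kind these worksheets drill, is solved by undoing each operation in turn: subtract b from both sides, then divide both sides by a. The snippet below is an illustration with a made-up example, not taken from the worksheets themselves:

```python
def solve_two_step(a, b, c):
    # Solve a*x + b = c for x: undo the addition, then undo the multiplication.
    return (c - b) / a

x = solve_two_step(3, 5, 20)  # the two-step equation 3x + 5 = 20
print(x)  # 5.0
```

Substituting the answer back in (3·5 + 5 = 20) is the same balance check the worksheets teach.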
Here's the question you clicked on:

SOMEONE PLEASE HELP ME! Jorge stands 63 feet from the base of a flagpole. The top of the flagpole's shadow falls on Jorge's shoe. The flagpole is 60 feet high. Exactly how far from the flagpole's top is Jorge standing?

Best Response:

Pythagorean theorem. The flagpole, assuming it is perpendicular to the ground, should form a right triangle such that Jorge's distance to the top can be represented by \[d=\sqrt{h^{2}+L^{2}}\] where d is the said distance, h is the pole's height, and L is how far Jorge is from the base of the pole.
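Plugging the given numbers into the responder's formula shows the distance comes out exact; the quick check below is my own addition, not part of the original answer:

```python
import math

h, L = 60, 63             # pole height and distance from the base, in feet
d = math.sqrt(h**2 + L**2)
print(d)  # 87.0 -- 60^2 + 63^2 = 3600 + 3969 = 7569 = 87^2, so exactly 87 feet
```

The legs 60 and 63 happen to form a Pythagorean triple with 87 (a multiple of the 20-21-29 triple), which is why the question can ask for the distance "exactly".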
Some NP-complete Problems on Graphs

Results 1 - 10 of 30

, 1994 · Cited by 130 (2 self)
Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodness-of-fit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et al. (1994) introduced a Bayesian metric, called the BDe metric, that computes the relative posterior probability of a network structure given data. They show that the metric has a property desirable for inferring causal structure from data. In this paper, we show that the problem of deciding whether there is a Bayesian network---among those where each node has at most k parents---that has a relative posterior probability greater than a given constant is NP-complete, when the BDe metric is used. 1 Introduction Recently, many researchers have begun to investigate methods for learning Bayesian networks, including Bayesian methods [Cooper and Herskovits, 1991, Buntine, 1991, York 1992, Spiegel...

- In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 1996
Cited by 80 (2 self)
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on the runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.

- HANDBOOK OF COMBINATORIAL OPTIMIZATION, 1999
"... ABSTRACT. This paper is a short survey of feedback set problems. It will be published in ..."

- , 1997 · Cited by 14 (4 self)
The bandwidth problem is the problem of numbering the vertices of a given graph G such that the maximum difference between two numbers of adjacent vertices is minimal. The problem is known to be NP-complete [Pa 76] and there are only few algorithms for rather special cases of the problem [HMM 91] [Kr 87] [Sa 80] [Sm 95]. In this paper we present a randomized 3-approximation algorithm for the bandwidth problem restricted to dense graphs and a randomized 2-approximation algorithm for the same problem on directed dense graphs.
Research partially supported by DFG Grant KA 673/4-1, by the ESPRIT BR Grants 7097 and EC-US 030. Email: marek@cs.bonn.edu. -- Dept. of Computer Science, University of Bonn, 53117 Bonn. Research partially supported by the ESPRIT BR Grants 7097 and EC-US 030. Email: wirtgen@cs.bonn.edu k Dept. of Computer Science, University of Bonn, 53117 Bonn. Visiting from Dept. of Computer Science, Thornton Hall, U... - Inf. Process. Lett , 2003 "... Given a weighted directed graph G = (V, A), the minimum feedback arc set problem consists of finding a minimum weight set of arcs A # A such that the directed graph A # ) is acyclic. Similarly, the minimum feedback vertex set problem consists of finding a minimum weight set of vertices containi ..." Cited by 8 (1 self) Add to MetaCart Given a weighted directed graph G = (V, A), the minimum feedback arc set problem consists of finding a minimum weight set of arcs A # A such that the directed graph A # ) is acyclic. Similarly, the minimum feedback vertex set problem consists of finding a minimum weight set of vertices containing at least one vertex for each directed cycle. Both problems are NP-complete. We present simple combinatorial algorithms for these problems that achieve an approximation ratio bounded by the length, in terms of number of arcs, of a longest simple cycle of the digraph. - GRAPH THEORETIC CONCEPTS IN COMPUTER SCIENCE , 1997 "... In random geometric graphs, vertices are randomly distributed on [0, 1]² and pairs of vertices are connected by edges whenever they are sufficiently close together. Layout problems seek a linear ordering of the vertices of a graph such that a certain measure is minimized. In this paper, we study sev ..." Cited by 7 (4 self) Add to MetaCart In random geometric graphs, vertices are randomly distributed on [0, 1]² and pairs of vertices are connected by edges whenever they are sufficiently close together. 
Layout problems seek a linear ordering of the vertices of a graph such that a certain measure is minimized. In this paper, we study several layout problems on random geometric graphs: Bandwidth, Minimum Linear Arrangement, Minimum Cut, Minimum Sum Cut, Vertex Separation and Bisection. We first prove that some of these problems remain NP-complete even for geometric graphs. Afterwards, we compute lower bounds that hold with high probability on random geometric graphs. Finally, we characterize the probabilistic behavior of the lexicographic ordering for our layout problems on the class of random geometric graphs.
- in grids, Ars Combinatoria, 1994 Cited by 7 (0 self) The link length of a walk in a multidimensional grid is the number of straight line segments constituting the walk. Alternatively, it is the number of turns that a mobile unit needs to perform in traversing the walk. A rectilinear walk consists of straight line segments which are parallel to the main axis. We wish to construct rectilinear walks with minimal link length traversing grids. If G denotes the multidimensional grid, let s(G) be the minimal link length of a rectilinear walk traversing all the vertices of G. In this paper we develop an asymptotically optimal algorithm for constructing rectilinear walks traversing all the vertices of complete multidimensional grids and analyze the worst-case behavior of s(G), when G is a multidimensional grid.
, 1999
Cited by 7 (0 self) Several graph parameters such as induced width, minimum maximum clique size of a chordal completion, k-tree number, bandwidth, front length or minimum pseudo-tree height are available in the CSP community to bound the complexity of specific CSP instances using dedicated algorithms. After an introduction to the main algorithms that can exploit these parameters, we try to exhaustively review existing parameters and the relations that may exist between them. In the process we exhibit some missing relations. Several existing results, both old results and recent results from graph theory and Cholesky matrix factorization technology [BGHK95], allow us to give a very dense map of relations between these parameters. These results strongly relate several existing algorithms and answer some questions which were considered as open in the CSP community. Warning: this document is a working paper. Some sections may be incomplete or currently being worked out ([GJC94] degree of cyclicity not ...
Cited by 6 (2 self) The bandwidth and the cutwidth are fundamental parameters which can give indications on the complexity of many problems described in terms of graphs. In this paper, we present a method for finding general upper bounds for the bandwidth and the cutwidth of a given graph from those of any of its quotient graphs.
Moreover, general lower bounds are obtained by using vertex- and edge-bisection notions. These results are then used to study various interconnection networks: by choosing convenient vertex partitions and judicious internal numberings for the vertices of the partition subsets, we show that bounds previously known for hypercubes can be easily re-proven, and we give original bounds for 2D-mesh, binary de Bruijn, Shuffle-Exchange, FFT, Butterfly, and CCC graphs. 1 Introduction Throughout this paper, we will denote by V(G) and E(G) the vertex- and edge-sets of an n-vertex graph G. When studying problems described in terms of graphs, it is often useful to have a good knowl...
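The bandwidth definition quoted in these abstracts (number the vertices so that the maximum difference between the numbers of adjacent vertices is minimal) is compact enough to check by brute force on tiny graphs. The sketch below is illustrative only: the function name is mine, and the exhaustive search over all labelings is exponential, which is exactly why the cited papers pursue approximation algorithms for restricted graph classes.

```python
from itertools import permutations

def bandwidth(n, edges):
    """Smallest, over all labelings of the vertices with 0..n-1, of the
    largest |label(u) - label(v)| across the edges. Exponential: toy graphs only."""
    return min(
        max(abs(perm[u] - perm[v]) for u, v in edges)
        for perm in permutations(range(n))
    )

# A path on 4 vertices can be numbered along its length (bandwidth 1);
# a 4-cycle cannot, so its bandwidth is 2.
path = [(0, 1), (1, 2), (2, 3)]
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

For the random dense graphs the second abstract targets, this brute force is hopeless, which motivates the randomized 3-approximation it describes.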
Check that a signal satisfies step response bounds during simulation: ● If all bounds are satisfied, the block does nothing. ● If a bound is not satisfied, the block asserts, and a warning message appears at the MATLAB® prompt. You can also specify that the block: ○ Evaluate a MATLAB expression. ○ Stop the simulation and bring that block into focus. During simulation, the block can also output a logical assertion signal: ● If all bounds are satisfied, the signal is true (1). ● If a bound is not satisfied, the signal is false (0). You can add Check Step Response Characteristics blocks on multiple signals to check that they satisfy the bounds. You can also plot the bounds on a time plot to graphically verify that the signal satisfies the bounds. This block and the other blocks in the Model Verification library test that a signal remains within specified time-domain characteristic bounds. When a model does not violate any bound, you can disable the block by clearing the assertion option. If you modify the model, you can re-enable assertion to ensure that your changes do not cause the model to violate a bound. If the signal does not satisfy the bounds, you can optimize the model parameters to satisfy the bounds. If you have Simulink® Control Design™ software, you can add frequency-domain bounds such as Bode magnitude and optimize the model response to satisfy both time- and frequency-domain requirements. The block can be used in all simulation modes for signal monitoring but only in Normal or Accelerator simulation mode for response optimization.
Task │ Parameters
Specify step response bounds to: ● Assert that a signal satisfies the bounds ● Optimize model response so that a signal satisfies the bounds │ Include step response bound in assertion in the Bounds tab.
Specify assertion options (only when you specify step response bounds). │ In the Assertion tab.
Open Design Optimization tool to optimize model response │ Click Response Optimization.
Plot step response │ Click Show Plot.
Display plot window instead of Block Parameters dialog box on double-clicking the block. │ Show plot on block open.
Include step response bound in assertion Check that the step response satisfies all the characteristics specified below. The software displays a warning if the signal violates the specified step response characteristics. This parameter is used for assertion only if Enable assertion in the Assertion tab is selected. The bounds also appear on the step response plot if you click Show Plot, as shown in the next figure. By default, the line segments represent the following step response requirements: ● Amplitude less than or equal to –0.01 up to the rise time of 5 seconds for 1% undershoot ● Amplitude between 0.9 and 1.2 up to the settling time of 15 seconds ● Amplitude equal to 1.2 for 20% overshoot up to the settling time of 15 seconds ● Amplitude between 0.99 and 1.01 beyond the settling time for 2% settling If you clear Enable assertion, the bounds are not used for assertion but continue to appear on the plot. Default: On ● Clearing this parameter disables the step response bounds and the software stops checking that the bounds are satisfied during simulation. The bound segments are also greyed out on the plot. ● To only view the bounds on the plot, clear Enable assertion. Command-Line Information Parameter: EnableStepResponseBound Type: string Value: 'on' | 'off' Default: 'on' Step time (seconds) Time, in seconds, when the step response starts. Default: 0 Minimum: 0 Finite real nonnegative scalar. ● To assert that step time value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the step time value from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit.
Specify the new value in Step time. You must click Update Block before simulating the model. Command-Line Information Parameter: StepTime Type: string Value: 0 | finite real nonnegative scalar. Must be specified inside single quotes (''). Default: 0 Initial value Value of the signal level before the step response starts. Default: 0 Finite real scalar not equal to the final value. ● To assert that initial value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the initial value from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit. Specify the new value in Initial value. You must click Update Block before simulating the model. Command-Line Information Parameter: InitialValue Type: string Value: 0 | finite real scalar not equal to final value. Must be specified inside single quotes (''). Default: 0 Final value Final value of the step response. Default: 1 Finite real scalar not equal to the initial value. ● To assert that final value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the final value from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit. Specify the new value in Final value. You must click Update Block before simulating the model. Command-Line Information Parameter: FinalValue Type: string Value: 1 | finite real scalar not equal to the initial value. Must be specified inside single quotes (''). Default: 1 Rise time (seconds) Time taken, in seconds, for the signal to reach a percentage of the final value specified in % Rise. Default: 5 Minimum: 0 Finite positive real scalar, less than the settling time. ● To assert that rise time value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the rise time from the plot window, drag the corresponding bound segment. 
Alternatively, right-click the segment, and select Edit. Specify the new value in Rise time. You must click Update Block before simulating the model. Command-Line Information Parameter: RiseTime Type: string Value: 5 | finite positive real scalar. Must be specified inside single quotes (''). Default: 5 % Rise The percentage of final value used with the Rise time to define the overall rise time characteristics. Default: 80 Minimum: 0 Maximum: 100 Positive real scalar, less than (100 – % settling). ● To assert that percent rise value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the percent rise from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit. Specify the new value in % Rise. You must click Update Block before simulating the model. Command-Line Information Parameter: PercentRise Type: string Value: 80 | positive scalar less than (100 – % settling). Must be specified inside single quotes (''). Default: 80 Settling time (seconds) The time, in seconds, taken for the signal to settle within a specified range around the final value. This settling range is defined as the final value plus or minus the percentage of the final value, specified in % Settling. Default: 7 Finite positive real scalar, greater than rise time. ● To assert that final value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the settling time from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit. Specify the new value in Settling time. You must click Update Block before simulating the model. Command-Line Information Parameter: SettlingTime Type: string Value: 7 | positive finite real scalar greater than rise time. Must be specified inside single quotes (''). 
Default: 7 % Settling The percentage of the final value that defines the settling range of the Settling time characteristic. Default: 1 Minimum: 0 Maximum: 100 Real positive finite scalar, less than (100 – % rise) and less than % overshoot. ● To assert that percent settling value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the percent settling from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit. Specify the new value in % Settling. You must click Update Block before simulating the model. Command-Line Information Parameter: PercentSettling Type: string Value: 1 | Real positive finite scalar less than (100 – % rise) and less than % overshoot. Must be specified inside single quotes (''). Default: 1 % Overshoot The amount by which the signal can exceed the final value before settling, specified as a percentage. Default: 10 Minimum: 0 Maximum: 100 Positive real scalar, greater than % settling. ● To assert that percent overshoot value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the percent overshoot from the plot window, drag the corresponding bound segment. Alternatively, right-click the segment, and select Edit. Specify the new value in % Overshoot. You must click Update Block before simulating the model. Command-Line Information Parameter: PercentOvershoot Type: string Value: 10 | Positive real scalar greater than % settling. Must be specified inside single quotes (''). Default: 10 % Undershoot: The amount by which the signal can undershoot the initial value, specified as a percentage. Default: 1 Minimum: 0 Maximum: 100 Positive finite real scalar. ● To assert that percent undershoot value is satisfied, select both Include step response bound in assertion and Enable assertion. ● To modify the percent undershoot from the plot window, drag the corresponding bound segment. 
Alternatively, right-click the segment, and select Edit. Specify the new value in % Undershoot. You must click Update Block before simulating the model. Command-Line Information Parameter: PercentUndershoot Type: string Value: 1 | positive finite real scalar between 0 and 100. Must be specified inside single quotes (''). Default: 1 Enable zero-crossing detection Ensure that the software simulates the model to produce output at the bound edges. Simulating the model at the bound edges prevents the simulation solver from missing a bound edge without asserting that the signal satisfies that bound. For more information on zero-crossing detection, see Zero-Crossing Detection in the Simulink User Guide. Default: On Command-Line Information Parameter: ZeroCross Type: string Value: 'on' | 'off' Default: 'on' Enable assertion Enable the block to check that bounds specified and included for assertion in the Bounds tab are satisfied during simulation. Assertion fails if a bound is not satisfied. A warning, reporting the assertion failure, appears at the MATLAB prompt. If the assertion fails, you can optionally specify that the block evaluate a MATLAB expression or stop the simulation. This parameter has no effect if you do not specify any bounds. Clearing this parameter disables assertion, i.e., the block no longer checks that specified bounds are satisfied. The block icon also updates to indicate that assertion is disabled. In the Configuration Parameters dialog box of the Simulink model, the Model Verification block enabling option in the Debugging area of the Data Validity node lets you enable or disable all model verification blocks in a model, regardless of the setting of this option. Default: On This parameter enables: ● Simulation callback when assertion fails (optional) ● Stop simulation when assertion fails Command-Line Information Parameter: enabled Type: string Value: 'on' | 'off' Default: 'on' Simulation callback when assertion fails (optional) MATLAB expression to execute when assertion fails.
Because the expression is evaluated in the MATLAB workspace, define all variables used in the expression in that workspace. Default: [] A MATLAB expression. Enable assertion enables this parameter. Command-Line Information Parameter: callback Type: string Value: '' | MATLAB expression Default: '' Stop simulation when assertion fails Stop the simulation when a bound specified in the Bounds tab is violated during simulation, i.e., assertion fails. If you run the simulation from a Simulink model window, the Simulation Diagnostics window opens to display an error message. The block where the bound violation occurs is highlighted in the model. Default: Off ● Because selecting this option stops the simulation as soon as the assertion fails, assertion failures that might occur later during the simulation are not reported. If you want all assertion failures to be reported, do not select this option. Enable assertion enables this parameter. Command-Line Information Parameter: stopWhenAssertionFail Type: string Value: 'on' | 'off' Default: 'off' Output assertion signal Output a Boolean signal that, at each time step, is: ● True (1) if assertion succeeds, i.e., all bounds are satisfied ● False (0) if assertion fails, i.e., a bound is violated. The output signal data type is Boolean only if the Implement logic signals as Boolean data option in the Optimization pane of the Configuration Parameters dialog box of the Simulink model is selected. Otherwise, the data type of the output signal is double. Selecting this parameter adds an output port to the block that you can connect to any block in the model. Command-Line Information Parameter: export Type: string Value: 'on' | 'off' Default: 'off' Show plot on block open Open the plot window instead of the Block Parameters dialog box when you double-click the block in the Simulink model. 
Use this parameter if you prefer to open and perform tasks, such as adding or modifying bounds, in the plot window instead of the Block Parameters dialog box. If you want to access the block parameters from the plot window, select Edit. For more information on the plot, see Show Plot. Default: Off Command-Line Information Parameter: LaunchViewOnOpen Type: string Value: 'on' | 'off' Default: 'off'
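The time-domain characteristics this block enforces (rise time with % Rise, settling time with % Settling, % Overshoot, % Undershoot) can also be sketched outside Simulink. The checker below is a minimal Python approximation of those checks on a sampled signal, not the block's actual algorithm; the function name and the handling of thresholds are my own, and it assumes a step from initial value 0 to final value 1.

```python
import numpy as np

def check_step_response(t, y, initial=0.0, final=1.0,
                        rise_time=5.0, pct_rise=80.0,
                        settling_time=7.0, pct_settling=1.0,
                        pct_overshoot=10.0, pct_undershoot=1.0):
    """Rough analogue of the block's bound checks, applied to a sampled signal."""
    span = abs(final - initial)
    # Rise: signal must reach pct_rise% of the step by rise_time.
    risen = t[y >= initial + pct_rise / 100.0 * span]
    # Settling: beyond settling_time, stay within pct_settling% of the final value.
    settled = np.abs(y[t >= settling_time] - final) <= pct_settling / 100.0 * span
    return {
        "rise": risen.size > 0 and risen[0] <= rise_time,
        "settling": bool(settled.all()),
        "overshoot": y.max() <= final + pct_overshoot / 100.0 * span,
        "undershoot": y.min() >= initial - pct_undershoot / 100.0 * span,
    }

t = np.linspace(0.0, 20.0, 2001)
fast = 1.0 - np.exp(-t / 1.5)   # first-order response, tau = 1.5 s: meets all defaults
slow = 1.0 - np.exp(-t / 2.0)   # tau = 2.0 s: settles within 1% only after ~9.2 s
```

With the default bounds above, the tau = 1.5 s response passes every check, while the tau = 2.0 s response rises in time but misses the 7-second settling bound, which is the kind of violation that makes the real block assert.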
Re: st: RE: bootstrapping standard errors with several estimated regressors + 1st line: Obstreperous line
From: Steven Samuels <ssamuels@albany.edu>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: bootstrapping standard errors with several estimated regressors + 1st line: Obstreperous line
Date: Mon, 9 Jul 2007 17:09:22 -0400

The last post with all lines was:

Erasmo, What was the basis for your original thought "that bootstrapping would cause statistical significance for all regressors to go down"? I've not seen this in the bootstrap literature. Indeed, your example, and that of Maarten, suggest that there is no order relation between model-based estimated standard errors and those estimated by the bootstrap. You might be thinking that bootstrapping should cause p-values to rise because regressors, as well as responses, are being sampled. This is not so. Assume the classical multiple regression model. If the X variables are random and independent of the error terms, then in the usual formula for V(b), (X'X)^(-1) is replaced by its expectation (WH Greene, Econometrics, McMillan). You might also be thinking that the use of estimated regressors should lead to higher p-values, compared to having the "true" regressors. This sounds right, although I am not an expert in this area, but it is irrelevant. Both original and bootstrapped standard errors are based on the estimated regressors. Perhaps you are confusing the estimates of coefficients with estimates of standard errors of coefficients. If model assumptions are right, then both the model-based estimate of standard error and the bootstrap estimate of standard error are "good" estimates of the same quantity, the "true" standard error. However, the model-based estimate benefits from knowing that the model is true.
In OLS, for example, the key assumption is that there is a constant SD. The model-based standard error estimate is therefore a function of one quantity besides the X'X matrix, namely the residual SD. The bootstrap estimate is valid even if the residual SD is not constant, as long as the observations are uncorrelated. The price for this greater validity is that, if the model is right, the bootstrap estimate of standard error will be more variable than the model-based estimate. See Efron & Tibshirani, An Introduction to the Bootstrap, Chapman & Hall, 1994.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
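Samuels' point that the bootstrap and the model-based formula estimate the same quantity, with neither systematically larger, is easy to see numerically. The sketch below is my own illustration, not from the thread, and uses Python rather than Stata's bootstrap prefix: it fits OLS on simulated homoskedastic data and compares the classical standard errors with pairs-bootstrap (case-resampling) standard errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)      # classical model: constant error SD
X = np.column_stack([np.ones(n), x])

# Model-based (classical OLS) standard errors
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])       # residual variance estimate
se_model = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

# Pairs bootstrap: resample (x, y) rows with replacement, refit each time
B = 1000
boot = np.array([
    np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    for idx in (rng.integers(0, n, size=n) for _ in range(B))
])
se_boot = boot.std(axis=0, ddof=1)
```

When the model assumptions hold, the two sets of standard errors agree to within simulation noise; under heteroskedasticity the pairs bootstrap remains valid while the classical formula does not, which is the "price for greater validity" trade-off described above.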
Bench measurements under 110dBc 3rd order intermodulation distortion | EE Times Design How-To
Emerging low power fully differential amplifiers (FDAs) are intended to support IF and ADC interface requirements with exceptional linearity. Offering intercepts exceeding 50dBm on very low power, they can provide an attractive option to the more typical Class A RF amplifiers for applications below 500MHz. An immediate practical issue is encountered in attempting to measure the IM3 when the spurious are below -110dBc relative to the carriers. Typical approaches of projecting from -1dB compression points do not apply to FDA type devices. Other projection techniques can certainly help, but at the end of the day generating a -120dBc clean input and measuring a -110dBc dynamic range are both useful capabilities in these types of measurements. Extremely high 3rd order intercept amplifiers Communications channels have always needed a mix of low noise figure, high intercept, and manageable quiescent power to deliver leading edge systems. The 3rd order intermodulation intercept is particularly important as it describes how low the spurious powers will be at the output of a stage receiving two closely spaced carriers; these spurious tones were not present in the original input signal. These are particularly troublesome since they will fall "close-in" to the carriers and cannot be filtered out. The classic definition of the 3rd order intercept is shown in figure 1. Also shown is the spacing around a center frequency where the resulting spurious will be. Essentially, for carriers spaced +/-Δf around f0, the 3rd order spurious will be +/-3Δf around f0, where f0 is the average (or center) frequency of the two carriers. Figure 1. 3rd order intercept definition For amplifiers that show an intercept characteristic, this simple approach gives an easy way to predict SFDR for different output carrier levels. From fig.
1, the intercept for equal carrier power (P0) is given by eq. 1:

OIP3 = P0 + ΔdBc/2 (eq. 1, where ΔdBc is the measured spacing, in dB, from each carrier down to the 3rd order spurious)

From this single number, an estimate of the 3rd order SFDR may be made as eq. 2:

SFDR = 2(OIP3 - P0) (eq. 2)

The intercept is often constant over frequency for class A type RF amplifiers, but never so for high open loop gain, voltage or current feedback based, fully differential amplifier (FDA) type devices. These lower power devices offer a frequency dependent loop gain and lower full power bandwidth (slew rate) that reduce the performance as the frequency increases. The easy measurement is when the test power levels are at 0dBm. In the example drawing of fig. 1, which is drawn with a -60dBc to the 3rd order spurious at 0dBm output, the intercept is 30dBm from equation 1. Then, at say a 10dBm output level (2Vpp on each tone for a sine wave test, 4Vpp output envelope), equation 2 would predict 40dB SFDR, which also can be seen in fig. 1. The name "intercept point" comes from the intersection of the 2 curves in fig. 1. That also equals 30dBm and is a projection of where the output spurious would equal the test powers. That 30dBm output power is of course not intended here and the model is only used to project the 3rd order spurious at output powers far below this "intercept" point. Not all amplifiers show a strict intercept performance, so it is also common to just see a 3rd order spurious level vs. frequency and/or output power level plot. This is particularly common when the loads are not intended to be 50ohm loads, such as driving ADC inputs. For example, a very low IM3 device like the ISL55210 (ref. 1) shows a data sheet plot such as fig. 2 (figure 9, ref. 2). Figure 2. Swept frequency, fixed gain, 200Ω load IM2/IM3 SFDR plot for the ISL55210 This is showing the ΔdBc from equal test tone powers to the spurious levels for different fixed output 2-tone envelopes swept up in frequency using the 15dB gain test circuit of fig. 3. The output network of fig.
3 maps from a 200Ω differential load to a single ended 50ohm measurement path. The 2Vpp curve is two 1Vpp test tones at the output pins (Vo) spaced +/-100kHz around the x-axis frequency. Figure 3. Test board for IM2/IM3 test of fig. 2 (ref. 3) Above 150MHz, it is starting to look like it might have an intercept characteristic, but the question here is how to generate and test these <-100dBc levels in a lab environment. While the IM2 is not nearly so low as the IM3 for the 115mW ISL55210, the intent was that a bandpass filter would filter those off. This is to follow the stage when it is the <100dBc 3rd order terms that are of interest in the application. Developing the input test signal Testing for OIP3 starts with summing two signal generators together and eventually ends with using a spectrum analyzer to measure very low spurious levels. To allow the spectrum analyzer some chance of making this measurement, it helps to use very low phase noise sources locked to the spectrum analyzer to zoom in on a narrow span knowing exactly where to look with no phase noise smearing of the measured power. Those synthesized sources are readily available (e.g. HP8662, HP8664, Gigatronics 6080A, R&S SMA100A, etc.) and the fact they have very poor harmonic distortion (typically in the -50dBc to -60dBc range) is inconsequential to amplifier IM3 testing. Those straight harmonic distortion terms do matter to ADC testing and the test signal needs to be run through a bandpass filter in that case. No passive filtering is required in testing IM2/IM3 for amplifiers as none of the individual source harmonics creates terms at the intermodulation locations.
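The two intercept relations the article works through (intercept from a measured spur level, then SFDR predicted at another power) are one-liners in code. The helper names below are my own, and, as the article cautions, they only apply where the device actually shows an intercept characteristic; the numbers reproduce the article's fig. 1 example, where -60dBc spurs at 0dBm give a 30dBm intercept, which in turn predicts 40dB SFDR at 10dBm per tone.

```python
def oip3_dbm(tone_power_dbm, spur_dbc):
    """Output 3rd-order intercept from the equal-tone output power and the
    measured carrier-to-spur spacing (spur_dbc is positive dB below the carrier)."""
    return tone_power_dbm + spur_dbc / 2.0

def sfdr_db(oip3, tone_power_dbm):
    """Predicted 2-tone 3rd-order SFDR at a given output power per tone."""
    return 2.0 * (oip3 - tone_power_dbm)

ip3 = oip3_dbm(0.0, 60.0)   # -60dBc spurs measured with 0dBm tones -> 30dBm intercept
```

Note the 2:1 slope these formulas encode: every 1dB drop in tone power buys 2dB of 3rd-order SFDR, which is why measuring at low tone powers quickly pushes the spurs below a spectrum analyzer's noise floor.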
Milton, MA Math Tutor Find a Milton, MA Math Tutor ...All of this is helped by heavy doses of encouragement; by identifying, tracking, and celebrating tangible progress towards goals; and by constant subtle and/or explicit reminders of why the work at hand is, in fact, worth doing (which it invariably is). I have enjoyed this work a great deal and ... 26 Subjects: including algebra 2, reading, precalculus, prealgebra ...The courses I've taught and tutored required differential equations, so I have experience working with them in a teaching context. In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school. I continue to use undergraduate level linear algebra in my physics research. 16 Subjects: including algebra 1, geometry, precalculus, trigonometry ...I am trained in the Wilson Reading Program, a progam specifically designed for teaching dyslexic children to read. I attended their week-end seminar and then spent one year meeting regularly with a Wilson-approved trainer to go over my method and technique. I had to model my teaching techniques during these training sessions, with my student every time I met with my trainer. 25 Subjects: including calculus, discrete math, algebra 1, algebra 2 ...I have always enjoyed helping others with schoolwork through demonstrating and explaining concepts to whomever needed it. I am passionate about using the knowledge I have gained to help others either improve or excel in academics, particularly in Math. I will look to you for feedback after each... 15 Subjects: including algebra 2, calculus, chemistry, physics ...I was nominated to play in the NYSSMA Festival (New York State School Music Association), the All-County Orchestra, and the Long Island String Festival. I was nominated and accepted based on my performance on state-wide playing tests and teacher recommendations. 
I have scored 99%-100% on the highest level state-wide performance tests for 10 years. 11 Subjects: including algebra 1, SAT math, algebra 2, Spanish
Http://session.masteringphysics.com/problemAsset/1072829/2/4.4.jpg... | Chegg.com
A box with weight w = 530 N is on a rough surface, inclined at an angle of 37 degrees. The box is kept from sliding down (in equilibrium) by means of an external force F. The other forces acting on the box are the normal and friction forces, denoted by n and f. A force diagram, showing the four forces that act on the box, is shown in Fig. 4.4. The magnitude of f is 110 N. The external force F is removed and the box accelerates. The magnitudes of the other forces are unchanged. The acceleration of the box is closest to:
2.3 m/s²
2.9 m/s²
4.8 m/s²
3.8 m/s²
5.8 m/s²
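A quick way to settle the multiple choice is to work the numbers directly: with F removed, gravity's component along the incline (w sin 37°) pulls the box down the slope while the unchanged 110 N friction now acts up the slope, opposing the sliding. The check below assumes g = 9.8 m/s².

```python
import math

w = 530.0                            # weight of the box (N)
f = 110.0                            # friction magnitude (N), acting up the incline
theta = math.radians(37.0)
g = 9.8                              # gravitational acceleration (m/s^2)

m = w / g                            # mass of the box (kg)
a = (w * math.sin(theta) - f) / m    # net force along the incline / mass

choices = [2.3, 2.9, 4.8, 3.8, 5.8]
closest = min(choices, key=lambda c: abs(c - a))
```

The computed acceleration is about 3.9 m/s², so the closest listed answer is 3.8 m/s².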
Riverside, RI Algebra 1 Tutor Find a Riverside, RI Algebra 1 Tutor ...I am well traveled and an enthusiastic lifelong learner. This past summer I traveled to Iceland and learned about its unique geology as well as to Peru to study its birds and history. In 2010, I joined a Norwegian trekking group and hiked through two Norwegian National Parks. 16 Subjects: including algebra 1, chemistry, physics, biology ...The lessons we teach ourselves are the ones we remember best. Once I understand what concept a student needs to be taught or clarified, I devise a series of problems or logic steps that the student can solve in succession. Ultimately this will allow the student to start from a place of confiden... 12 Subjects: including algebra 1, chemistry, physics, calculus ...As a retired Special Education Resource teacher, I have many years of experience teaching writing skills. I am well able to help students complete their student essays that will "WOW" college admissions offices! I can help them generate lists of impressive topics, as well as assist with revising and editing their essays. 30 Subjects: including algebra 1, reading, writing, SAT math ...I am very confident that each student will significantly benefit from their tutoring encounter(s) with me.Having had numerous responsibilities in my professional career, I clearly recognize the importance of organization and good study skills. I always carried a notepad and pens and pencils with... 14 Subjects: including algebra 1, reading, geometry, ASVAB ...I am happy to focus on grammar, conversation, writing, or any other skills you seek to improve. I'm happy to work with newcomers to the language who are prepping for a trip! Through my studies I have accrued a general knowledge of modern Italian history and media and would also love to share cultural knowledge if it would be of interest to a student. 
10 Subjects: including algebra 1, geometry, Italian, algebra 2
{"url":"http://www.purplemath.com/Riverside_RI_algebra_1_tutors.php","timestamp":"2014-04-19T02:11:46Z","content_type":null,"content_length":"24271","record_id":"<urn:uuid:4c27996d-54f7-4aa4-a609-b6811f606482>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Religion, Sets, and Politics

I've had the recent pleasure of reading Jason Rosenhouse's "The Monty Hall Problem." Rosenhouse's book is a comprehensive investigation into the eponymous Monty Hall problem, variations of the problem, and the larger implications of the problem. The original Monty Hall problem is named after a game played on an old television game show "Let's Make a Deal" with host Monty Hall. Rosenhouse describes the problem as:

You are shown three identical doors. Behind one of them is a car. The other two conceal goats. You are asked to choose, but not open, one of the doors. After doing so, Monty, who knows where the car is, opens one of the two remaining doors. He always opens a door he knows to be incorrect, and randomly chooses which door to open when he has more than one option (which happens on those occasions where your initial choice conceals the car). After opening an incorrect door, Monty gives you the option of either switching to the other unopened door or sticking with your original choice. You then receive whatever is behind the door you choose. What should you do? (Presumably you are attempting to maximize your chance of winning the car.)

Most people conclude that there's no benefit from switching. The general logic against switching is that after the elimination of a door there are two doors remaining, so each should now have a 1/2 chance of containing the car. This logic is incorrect: switching wins the car 2/3rds of the time. Many people find this claim extremely counterintuitive. To see quickly the correctness of this claim, note that if one adopts a strategy of always switching, then one will switch to the correct car-containing door exactly when the original door was not the car door. This will occur 2/3rds of the time. Many people have great difficulty accepting the correct solution to the Monty Hall problem.
This includes not just laypeople, but also professional mathematicians, including most famously Paul Erdős, who initially did not accept the answer. The problem, and variants thereof, not only raise interesting questions of probability but also give insight into how humans think about probability. Rosenhouse's book is very well done. He looks not just at the math, but also at the history of the problem and the philosophical and psychological implications of the problem. For example, he discusses studies which show that cross-culturally the vast majority of people, when given the problem, will not switch. I was unaware until I read this book how much cross-disciplinary work there had been surrounding the Monty Hall problem. Not all of this work has been that impressive, and Rosenhouse correctly points out where much of the philosophical argumentation over the problem simply breaks down. Along the way, Rosenhouse explains such important concepts as Bayes' Theorem (where he uses the simple discrete case), the different approaches to what probabilities mean (classical, frequentist, and Bayesian), and their philosophical implications. The book could easily be used as supplementary reading for an undergraduate course in probability or reading for an interested high school student. By far the most interesting parts of the book were the chapters focusing on the psychological aspects of the problem. Systematic investigation of the common failure of people to correctly analyze the Monty Hall problem has led to much insight about how humans reason about probability. This analysis strongly suggests that humans use a variety of heuristics which generally work well for many circumstances humans run into but break down in extreme cases. In a short blog post I can't do justice to the clever, sophisticated experimental set-ups used to test the nature and extent of these heuristics, so I'll simply recommend that people read the book.
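Before moving on, the 2/3rds claim itself is easy to check with a short Monte Carlo simulation. This sketch is mine, not Rosenhouse's; the trial count and seed are arbitrary:

```python
import random

def play(switch, trials=100_000, seed=42):
    """Estimate the win rate of the stay or switch strategy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty opens a goat door other than the player's pick
        opened = rng.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

stay_rate, switch_rate = play(switch=False), play(switch=True)
```

Because both runs use the same seed, they see identical games, and the switcher wins exactly the games the stayer loses: the stay rate comes out near 1/3 and the switch rate near 2/3, matching the analysis above.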
For my own part, I'd like to use this as an opportunity to propose two continuous versions of the Monty Hall problem that to my knowledge have not been previously discussed. Consider a circle of circumference 1. A point is randomly picked as the target point on the circle (and not revealed to you). You then pick a random interval of length 1/3rd on the circle. Monty knows where the target point is. If you picked an interval that contains the target point, Monty picks a random 1/3rd interval that doesn't overlap your interval and reveals that interval as not containing the target point. If your interval does not contain the target point, Monty instead picks uniformly a 1/3rd interval that doesn't include the target point and doesn't overlap with your interval. At the end of this process you are left, with probability one, with three possible intervals that might contain the target point: your original interval, or the intervals on either side of Monty's revealed interval. You are given the option to switch to one of these new intervals. Should you switch, and if so, to which interval?

I'm pretty sure that the answer in this modified form is also to switch, in this case switching to the larger of the two new intervals. However, the situation becomes a bit trickier if we modify it a bit. Consider the following situation, identical to the above, but instead of Monty cutting out a single interval of length 1/3rd, he picks k intervals, each of length 1/(3k) (thus the initial case above is k=1). Monty places these intervals one at a time, picking each uniformly from the valid positions, and reveals the locations of all his intervals at the end. The remaining choices for an interval for you to pick are your original interval or any of the smaller intervals created in between Monty's choices. You get an option to stay or to switch to one of these intervals.
It seems clear that even for k=2, sometimes you should switch and sometimes you should not switch, depending on the locations of Monty's intervals. However, it isn't clear to me when to stay and when to switch. Thoughts are welcome.
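The k=1 variant is easy to simulate. The setup below is my own sketch, not the author's: your interval is fixed at [0, 1/3) without loss of generality, and Monty's starting point is drawn by rejection sampling from the uniform distribution over valid placements.

```python
import random

rng = random.Random(1)
N = 100_000
wins = {"stay": 0, "larger": 0, "smaller": 0}
for _ in range(N):
    t = rng.random()                   # the target point; your interval is [0, 1/3)
    while True:                        # rejection sampling for Monty's start s
        s = rng.uniform(1/3, 2/3)      # Monty's interval [s, s + 1/3) avoids yours
        if not (s <= t < s + 1/3):     # ...and must avoid the target
            break
    gaps = [(1/3, s), (s + 1/3, 1.0)]  # intervals on either side of Monty's reveal
    big, small = sorted(gaps, key=lambda g: g[1] - g[0], reverse=True)
    wins["stay"] += (t < 1/3)
    wins["larger"] += (big[0] <= t < big[1])
    wins["smaller"] += (small[0] <= t < small[1])

rates = {k: v / N for k, v in wins.items()}
```

In my runs the stay strategy wins about 1/3 of the time, switching to the larger gap wins well over 1/3, and switching to the smaller gap wins the remainder, which is consistent with the author's guess that switching to the larger interval is the right move.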
{"url":"http://religionsetspolitics.blogspot.com/2011_01_01_archive.html","timestamp":"2014-04-19T22:53:03Z","content_type":null,"content_length":"71012","record_id":"<urn:uuid:5f25b818-634f-4caf-b23b-e53ef8d73f7a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Dividing Shapes into Equal Parts Partition circles and rectangles into two and four equal shares, describe the shares using the words halves, fourths, and quarters, and use the phrases half of, fourth of, and quarter of. Describe the whole as two of, or four of the shares. Understand for these examples that decomposing into more equal shares creates smaller shares.
{"url":"https://goalbookapp.com/toolkit/goal/dividing-shapes-into-equal-parts","timestamp":"2014-04-20T20:54:53Z","content_type":null,"content_length":"41930","record_id":"<urn:uuid:937bec74-bdcb-4909-bce1-21bb608667bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Oak Ridge North, TX Algebra 1 Tutor Find an Oak Ridge North, TX Algebra 1 Tutor ...I helped him get up to a B and a 4 out of 5 on the AP Calculus exam. I taught him basic test taking skills. He and his mother were very happy with the results. 24 Subjects: including algebra 1, chemistry, physics, calculus ...I have been a player of chess for many years in Argentina, playing at tournament level (2nd category in Argentina). I have taught chess in elementary schools in Texas. I can be a good beginner and intermediate level teacher of chess. I hold an MBA from the University of Pittsburgh. 16 Subjects: including algebra 1, English, reading, TOEFL ...I am fascinated by the study of humans and their interaction. Things such as culture, language, religion, and how they interact to help form and determine the success of a people group are all incredibly interesting to me. The hands-on field research we were required to do included a three-tiered observation of how people respond to stimuli. 20 Subjects: including algebra 1, reading, piano, ESL/ESOL I am a certified teacher Early Childhood through 6th grade. I specialize in helping struggling reading and math students. At the first session, I test each student to understand what skills they are lacking and develop customized programs to address those gaps. 21 Subjects: including algebra 1, English, elementary (k-6th), dyslexia ...I love helping others and consider it my calling in this life. My Goal is to help all grow, develop and mature to be productive, independent citizens in society. I look forward to having the opportunity to assist with comprehension of math concepts required for success. 
4 Subjects: including algebra 1, geometry, prealgebra, linear algebra
{"url":"http://www.purplemath.com/oak_ridge_north_tx_algebra_1_tutors.php","timestamp":"2014-04-16T10:28:07Z","content_type":null,"content_length":"24331","record_id":"<urn:uuid:96115ae4-f37e-4f70-b2f8-d2abfada8a50>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2002

[00253] [Date Index] [Thread Index] [Author Index]

Re: A Learning TableForm Problem

• To: mathgroup at smc.vnet.net
• Subject: [mg32860] Re: A Learning TableForm Problem
• From: "Allan Hayes" <hay at haystack.demon.co.uk>
• Date: Sat, 16 Feb 2002 04:35:16 -0500 (EST)
• References: <a4ielo$9ng$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

If you use -> for the right arrow everything works:

f[x_] := x^3 - 4x^2 - 2x + 8
TableForm[Table[{i, f[i]}, {i, 1, 2, 0.1}],
  TableHeadings -> {None, {"x", "f(x)"}}]

This is the universal "InputForm" that you can input from any keyboard (like, for example, a^b for a to the power b). You would find it helpful to read the first part of the Mathematica book - you can find this in the Help menu.

Remember that the table form is for display. If you want to use the list generated by Table[...] in later calculations, please be aware of the following:

f[x_] := x^3 - 4x^2 - 2x + 8
tbf = TableForm[lst = Table[{i, f[i]}, {i, 1, 2, 0.1}],
  TableHeadings -> {None, {"x", "f(x)"}}]

% (*previous output - this is a comment, not part of the input*)

We see below that tbf is % (or lst) with the "Wrapper" TableForm[...] wrapped round it to specify how it is to be displayed. Of course we can extract lst from tbf:

Allan Hayes
Mathematica Training and Consulting Leicester UK
hay at haystack.demon.co.uk
Voice: +44 (0)116 271 4198
Fax: +44 (0)870 164 0565
f(x)=x^3-4x^2-2x+8 > On a graph the estimated zero is between (1,2) and it asks us to divide > the interval into tenths > f[x_]:=x^3-4x^2-2x+8 > TableForm[Table[{i,f[i]},{i,1,2,0.1}], > TableHeadings>{None,{"x","f(x)"}}] > The > is as close as I can get to a right arrow sign > The example I have shows a table with 2 columns and appropriate numbers, > but I cannot get this to display on Mathematica for me. Any ideas? > Thanks > Carlos
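For readers without Mathematica, the zero-bracketing tabulation from this thread translates to a few lines of Python. This is my own translation; the bracketing step at the end is just one way to read off where the zero lies:

```python
def f(x):
    return x**3 - 4*x**2 - 2*x + 8

# tabulate x and f(x) over [1, 2] in steps of 0.1, as in the thread
xs = [round(1 + i / 10, 1) for i in range(11)]
rows = [(x, f(x)) for x in xs]
for x, fx in rows:
    print(f"{x:4.1f}  {fx:8.3f}")

# a sign change between consecutive rows brackets the zero
brackets = [(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
```

The single sign change in the table shows the zero lies between 1.4 and 1.5.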
{"url":"http://forums.wolfram.com/mathgroup/archive/2002/Feb/msg00253.html","timestamp":"2014-04-20T00:56:00Z","content_type":null,"content_length":"36272","record_id":"<urn:uuid:e2844b2f-571e-4550-93f9-ff63b9f88efd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Pottstown Algebra 2 Tutor Find a Pottstown Algebra 2 Tutor I am a well rounded and very well experienced tutor. I have been a teacher for 9 years, most of which have been teaching mathematics on all levels. I have seen all types of standardized tests and am very good at helping people understand the tests and gain valuable skills not only to pass the tests but to also master the content. 11 Subjects: including algebra 2, geometry, algebra 1, public speaking ...I graduated from Wheaton College with a degree in Interdisciplinary Studies (Music, English, and Philosophy). I received a 2360 on the SAT (800 math, 800 writing, 760 verbal); on the GRE, I scored 800 on math, 720 on verbal, and 6.0/6.0 on writing. I received Wheaton College's highest music comp... 38 Subjects: including algebra 2, English, reading, physics I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 9 Subjects: including algebra 2, geometry, GRE, algebra 1 I am a youthful high school Latin teacher. I have been tutoring both Latin & Math to high school students for the past six years. I hold a teaching certificate for Latin, Mathematics, and English, and I am in the finishing stages of my master's program at Villanova. 7 Subjects: including algebra 2, geometry, algebra 1, Latin ...This approach allows me to adapt my tutoring to suit the students' needs and level of understanding. I focus on encouraging students to think critically through each Math/Science problem for themselves, which allows me to gain an understanding of how they visualize and approach a question, and th... 14 Subjects: including algebra 2, calculus, algebra 1, precalculus
{"url":"http://www.purplemath.com/Pottstown_Algebra_2_tutors.php","timestamp":"2014-04-19T17:05:18Z","content_type":null,"content_length":"24152","record_id":"<urn:uuid:3230808c-15a3-43ba-a99c-9742fd20374d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Mathematics for Engineers and Scientists/Introduction and Classifications

Introduction and Classifications

The intent of the prior chapters was to provide a shallow introduction to PDEs and their solution without scaring anyone away. A lot of fundamentals and very important details were left out. After this point, we are going to proceed with a little more rigor; however, knowledge past one undergraduate ODE class alongside some set theory and countless hours on Wikipedia should be enough.

Some Definitions and Results

An equation of the form $f(u) = C\,$ is called a partial differential equation if $u$ is unknown and the function $f$ involves partial differentiation. More concisely, $f$ is an operator or a map which results in (among other things) the partial differentiation of $u$. $u$ is called the dependent variable; the choice of this letter is common in this context. Examples of partial differential equations (referring to the definition above):

$\frac{\partial^2 u}{\partial y^2} + u \frac{\partial^2 u}{\partial x^2} + 2 = 0 \qquad \mbox{where} \quad f(u) = \frac{\partial^2 u}{\partial y^2} + u \frac{\partial^2 u}{\partial x^2} \ , \quad C = -2$

$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial y^2} \qquad \mbox{where} \quad f(u) = \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial y^2} \ , \quad C = 0$

$\frac{\partial^4 u}{\partial x^4} = 0 \qquad \mbox{where} \quad f(u) = \frac{\partial^4 u}{\partial x^4} \ , \quad C = 0$

Note that what exactly $u$ is made of is unspecified; it could be a function, several functions bundled into a vector, or something else; but if $u$ satisfies the partial differential equation, it is called a solution. Another thing to observe is the seeming redundancy of $C$; its utility draws from the study of linear equations.

The order of a PDE is the order of the highest derivative appearing, but often a distinction is made between variables. For example the equation

$\frac{\partial^2}{\partial x^2}\left(EI \frac{\partial^2 u}{\partial x^2}\right) = -\mu \frac{\partial^2 u}{\partial t^2}\,$

is second order in $t$ and fourth order in $x$ (fourth derivatives will result regardless of the form of $EI$).
If $C = 0$, the equation is called homogeneous, otherwise it's nonhomogeneous or inhomogeneous.

It's worth mentioning now that the terms "function", "operator", and "map" are loosely interchangeable, and that functions can involve differentiation, or any operation. This text will favor, not exclusively, the term function.

Linear Partial Differential Equations

Suppose that $f(u) = L(u)$, and that $L$ satisfies the following properties:

□ $L(u + v) = L(u) + L(v)\,$
□ $L(\alpha u) = \alpha L(u)\,$ for any scalar $\alpha$.

The first property is called additivity, and the second one is called homogeneity. If $L$ is additive and homogeneous, it is called a linear function; additionally, if it involves partial differentiation and $L(u) = C\,$, then the equation above is a linear partial differential equation. This is where the importance of $C$ shows up. Consider the equation

$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + A$

where $A$ is not a function of $u$. Now, if we represent the equation through

$L(u) = 0\quad \mbox{where} \quad L(u) = \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} - A\,$

then $L$ fails both additivity and homogeneity and so is nonlinear (Note: the equation defining the condition is 'homogeneous', but in a distinct usage of the term). If instead

$L(u) = A\quad \mbox{where} \quad L(u) = \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2}\,$

then $L$ is now linear. Note then that the choice of $L$ and $C$ is generally not unique, but if an equation could be written in a linear form it is called a linear equation.
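Both properties can be checked numerically for a concrete operator. The sketch below is my own: it discretizes $L(u) = \partial u/\partial t - \partial^2 u/\partial x^2$ with finite differences on random grids and verifies $L(a u + b v) = a L(u) + b L(v)$, which combines additivity and homogeneity:

```python
import numpy as np

def L(u, dx=0.1, dt=0.01):
    """Discrete L(u) = du/dt - d^2u/dx^2 (forward difference in t, central in x)."""
    ut = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    uxx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    return ut - uxx

rng = np.random.default_rng(0)
u = rng.standard_normal((20, 30))   # axis 0 plays the role of t, axis 1 of x
v = rng.standard_normal((20, 30))
a, b = 2.5, -1.3

lhs = L(a * u + b * v)
rhs = a * L(u) + b * L(v)
```

The two sides agree to floating-point precision, as they must for any finite-difference discretization, since differencing is itself a linear operation.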
Linear equations are very popular. One of the reasons for this popularity is a little piece of magic called the superposition principle. Suppose that both $u_1$ and $u_2$ are solutions of a linear, homogeneous equation (here onwards, $L$ will denote a linear function), i.e.

$L(u_1) = 0 \quad \mbox{and} \quad L(u_2) = 0\,$

for the same $L$. We can feed a combination of $u_1$ and $u_2$ into the PDE and, recalling the definition of a linear function, see that

$L(a_1 u_1 + a_2 u_2) = 0\,$
$a_1 L(u_1) + a_2 L(u_2) = 0\,$

for some constants $a_1$ and $a_2$. As stated previously, both $u_1$ and $u_2$ are solutions, which means that
An extension of superposition is observed by, say, the specific combination $u_p + a_1 u_1 + a_2 u_2$: $L(u_p + a_1 u_1 + a_2 u_2) = C\,$ $L(u_p) + a_1 L(u_1) + a_2 L(u_2) = C\,$ $C + a_1 \cdot 0 + a_2 \cdot 0 = C\,$ $C = C\,$ More generally, The Extended Superposition Principle Suppose that in the nonhomogeneous equation $L(u) = C\,$ the function $L$ is linear. Suppose that this equation is solved by some $u_p$, and that the associated homogeneous problem $L(u) = 0\,$ is solved by a sequence $u_i$. That is, $L(u_p) = C \ ; \ L(u_0) = 0 \ , \ L(u_1) = 0 \ , \ L(u_2) = 0 \ , \ \dots \,$ Then $u_p$ plus any linear combination of the sequence $u_i$ satisfies the original (nonhomogeneous) equation: $L\left(u_p + \sum a_i u_i\right) = C\,$ where $a_i$ is a sequence of constants and the sum is arbitrary. The possibility of combining solutions in an arbitrary linear combination is precious, as it allows the solutions of complicated problems be expressed in terms of solutions of much simpler problems. This part of is why even modestly nonlinear equations pose such difficulties: in almost no case is there anything like a superposition principle. Classification of Linear EquationsEdit A linear second order PDE in two variables has the general form $A \frac{\partial^2 u}{\partial x^2} + 2 B \frac{\partial^2 u}{\partial x \partial y} + C \frac{\partial^2 u}{\partial y^2} + D \frac{\partial u}{\partial x} + E \frac{\partial u}{\partial y} + F = 0$ If the capital letter coefficients are constants, the equation is called linear with constant coefficients, otherwise linear with variable coefficients, and again, if $F$ = 0 the equation is homogeneous. The letters $x$ and $y$ are used as generic independent variables, they need not represent space. Equations are further classified by their coefficients; the quantity $B^2 - A C\,$ is called the discriminant. 
Equations are classified as follows:

$B^2 - A C < 0 \ \Rightarrow \ \mathrm{The \ PDE \ is \ \underline{elliptic}.}$
$B^2 - A C = 0 \ \Rightarrow \ \mathrm{The \ PDE \ is \ \underline{parabolic}.}$
$B^2 - A C > 0 \ \Rightarrow \ \mathrm{The \ PDE \ is \ \underline{hyperbolic}.}$

Note that if coefficients vary, an equation can belong to one classification in one domain and another classification in another domain. Note also that all first order equations are parabolic. Smoothness of solutions is interestingly affected by equation type: elliptic equations produce solutions that are smooth (up to the smoothness of coefficients) even if boundary values aren't, parabolic equations will cause the smoothness of solutions to increase along the low order variable, and hyperbolic equations preserve lack of smoothness.

Generalizing classifications to more variables, especially when one is always treated temporally (i.e., associated with initial conditions, but we haven't discussed such conditions yet), is not too obvious and the definitions can vary from context to context and source to source. A common way to classify is with what's called an elliptic operator.
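Before moving on, the discriminant test above is mechanical enough to write down directly; the function name and return strings below are my own:

```python
def classify(A, B, C):
    """Classify A u_xx + 2B u_xy + C u_yy + (lower-order terms) = 0."""
    d = B * B - A * C  # the discriminant
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"
```

For example, Laplace's equation (A = C = 1, B = 0) comes out elliptic, the heat equation written with A = 1, B = C = 0 comes out parabolic, and the wave equation u_xx - u_tt = 0 (A = 1, B = 0, C = -1) comes out hyperbolic.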
The negative of the Laplacian, $-abla^2 u$, is elliptic with $A_{k j} = -\delta_{k, j}$. The definition for the second order case is separately provided because second order operators are by a large margin the most common. Classifications for the equations are then given as $E(u) = 0 \ \Rightarrow \ \mathrm{The \ equation \ is \ \underline{elliptic}.}$ $E(u) + k \frac{\partial u}{\partial t} = 0 \ \Rightarrow \ \mathrm{The \ equation \ is \ \underline{parabolic}.}$ $E(u) + k \frac{\partial^2 u}{\partial t^2} = 0 \ \Rightarrow \ \mathrm{The \ equation \ is \ \underline{hyperbolic}.}$ for some constant k. The most classic examples of these equations are obtained when the elliptic operator is the Laplacian: Laplace's equation, linear diffusion, and the wave equation are respectively elliptic, parabolic, and hyperbolic and are all defined in an arbitrary number of spatial dimensions. Other classificationsEdit The linear form $A \frac{\partial^2 u}{\partial x^2} + 2 B \frac{\partial^2 u}{\partial x \partial y} + C \frac{\partial^2 u}{\partial y^2} + D \frac{\partial u}{\partial x} + E \frac{\partial u}{\partial y} + F = 0$ was considered previously with the possibility of the capital letter coefficients being functions of the independent variables. If these coefficients are additionally functions of $u$ which do not produce or otherwise involve derivatives, the equation is called quasilinear. It must be emphasized that quasilinear equations are not linear, no superposition or other such blessing; however these equations receive special attention. They are better understood and are easier to examine analytically, qualitatively, and numerically than general nonlinear equations. A common quasilinear equation that'll probably be studied for eternity is the advection equation $\frac{\partial u}{\partial t} + abla \cdot (u \mathbf{v}) = 0$ which describes the conservative transport (advection) of the quantity $u$ in a velocity field $\mathbf{v}$. 
The equation is quasilinear when the velocity field depends on $u$, as it usually does. A specific example would be a traffic flow formulation which would result in $\frac{\partial u}{\partial t} + 2 u \frac{\partial u}{\partial x} = 0$ Despite resemblance, this equation is not parabolic since it is not linear. Unlike its parabolic counterparts, this equation can produce discontinuities even with continuous initial conditions. General NonlinearEdit Some equations defy classification because they're too abnormal. A good example of an equation is the one that defines a minimal surface expressible as $u = u(x, y)$: $\left(1 + \left(\frac{\partial u}{\partial y}\right)^2\right) \frac{\partial^2 u}{\partial x^2} - 2 \frac{\partial u}{\partial x} \frac{\partial u}{\partial y} \frac{\partial^2 u}{\partial x \ partial y} + \left(1 + \left(\frac{\partial u}{\partial x}\right)^2\right) \frac{\partial^2 u}{\partial y^2} = 0$ where $u$ is the height of the surface. Last modified on 26 November 2013, at 20:01
{"url":"http://en.m.wikibooks.org/wiki/Partial_Differential_Equations/Introduction_and_Classifications","timestamp":"2014-04-18T10:57:59Z","content_type":null,"content_length":"41331","record_id":"<urn:uuid:24f2205a-7b54-4ecd-9d0d-28249e3c2e26>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
How can we compare the results of classifying according to several categories?

Contingency Table approach

When items are classified according to two or more criteria, it is often of interest to decide whether these criteria act independently of one another. For example, suppose we wish to classify defects found in wafers produced in a manufacturing plant, first according to the type of defect and, second, according to the production shift during which the wafers were produced. If the proportions of the various types of defects are constant from shift to shift, then classification by defects is independent of the classification by production shift. On the other hand, if the proportions of the various defects vary from shift to shift, then the classification by defects depends upon or is contingent upon the shift classification and the classifications are dependent. In the process of investigating whether one method of classification is contingent upon another, it is customary to display the data by using a cross classification in an array consisting of r rows and c columns called a contingency table. A contingency table consists of r x c cells representing the r x c possible outcomes in the classification process. Let us construct an industrial case:

Industrial example

A total of 309 wafer defects were recorded and the defects were classified as being one of four types, A, B, C, or D. At the same time each wafer was identified according to the production shift in which it was manufactured, 1, 2, or 3. These counts are presented in the following contingency table, classifying defects in wafers according to type and production shift:

                     Type of Defect
Shift        A           B           C           D       Total
  1      15 (22.51)  21 (20.99)  45 (38.94)  13 (11.56)     94
  2      26 (22.99)  31 (21.44)  34 (39.77)   5 (11.81)     96
  3      33 (28.50)  17 (26.57)  49 (49.29)  20 (14.63)    119
Total        74          69         128          38        309

(Note: the numbers in parentheses are the expected cell frequencies.)

Column probabilities: Let p[A] be the probability that a defect will be of type A. Likewise, define p[B], p[C], and p[D] as the probabilities of observing the other three types of defects. These probabilities, which are called the column probabilities, will satisfy the requirement

p[A] + p[B] + p[C] + p[D] = 1

Row probabilities: By the same token, let p[i] (i = 1, 2, or 3) be the row probability that a defect will have occurred during shift i, where

p[1] + p[2] + p[3] = 1

Then if the two classifications are independent of each other, a cell probability will equal the product of its respective row and column probabilities in accordance with the Multiplicative Law of Probability. For example, the probability that a particular defect will occur in shift 1 and is of type A is (p[1])(p[A]). While the numerical values of the cell probabilities are unspecified, the null hypothesis states that each cell probability will equal the product of its respective row and column probabilities. This condition implies independence of the two classifications. The alternative hypothesis is that this equality does not hold for at least one cell. In other words, we state the null hypothesis as H[0]: the two classifications are independent, while the alternative hypothesis is H[a]: the classifications are dependent.

To obtain the observed column probability, divide the column total by the grand total, n.
Denoting the total of column j as c[j], we get the estimated column probabilities

    p-hat[j] = c[j] / n

Similarly, the row probabilities p[1], p[2], and p[3] are estimated by dividing the row totals r[1], r[2], and r[3] by the grand total n, respectively:

    p-hat[i] = r[i] / n

Expected cell frequencies

Denote the observed frequency of the cell in row i and column j of the contingency table by n[ij]. Then, when H[0] is true, the estimated expected cell frequency is

    E-hat(n[ij]) = n (p-hat[i]) (p-hat[j]) = r[i] c[j] / n

In other words, when the row and column classifications are independent, the estimated expected value of the observed cell frequency n[ij] in an r x c contingency table is equal to the product of its respective row and column totals divided by the total frequency. The estimated cell frequencies are shown in parentheses in the contingency table above.

Test statistic

From here we use the expected and observed frequencies shown in the table to calculate the value of the test statistic

    X^2 = SUM[i] SUM[j] (n[ij] - E-hat(n[ij]))^2 / E-hat(n[ij])

df = (r-1)(c-1)

The next step is to find the appropriate number of degrees of freedom associated with the test statistic. Leaving out the details of the derivation, we state the result: The number of degrees of freedom associated with a contingency table consisting of r rows and c columns is (r-1)(c-1). So for our example we have (3-1)(4-1) = 6 d.f.

Testing the null hypothesis

In order to test the null hypothesis, we compare the test statistic with the critical value of X^2[1-α] at a selected value of α. Let us use α = 0.05. Then the critical value is X^2[0.95,6] = 12.5916 (see the chi-square table in Chapter 1). Since the test statistic of 19.18 exceeds the critical value, we reject the null hypothesis and conclude that there is significant evidence that the proportions of the different defect types vary from shift to shift. In this case, the p-value of the test statistic is 0.00387.
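The computation above can be reproduced in a short, self-contained Python sketch. The observed counts come from the wafer-defect table; the 12.5916 cutoff is the tabulated chi-square(0.95, 6) quantile quoted in the text, and the variable names are my own.

```python
# Chi-square test of independence for the wafer-defect contingency table.
# Observed counts: rows = shifts 1-3, columns = defect types A-D.
observed = [
    [15, 21, 45, 13],
    [26, 31, 34, 5],
    [33, 17, 49, 20],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected frequency under H0: E[ij] = (row total * column total) / n
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Test statistic: sum over all cells of (O - E)^2 / E
chi2 = sum(
    (o - e) ** 2 / e
    for obs_row, exp_row in zip(observed, expected)
    for o, e in zip(obs_row, exp_row)
)

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (r-1)(c-1) = 6
critical = 12.5916                                  # chi-square(0.95, 6) from the table

print(round(chi2, 2), df, chi2 > critical)          # 19.18 6 True
```

A library routine such as `scipy.stats.chi2_contingency` would return the same statistic along with the p-value; the hand-rolled version above is just the handbook's formula transcribed directly.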
Bench measurements under 110dBc 3rd order intermodulation distortion | EE Times

Design How-To

Bench measurements under 110dBc 3rd order intermodulation distortion

Emerging low power fully differential amplifiers (FDAs) are intended to support IF and ADC interface requirements with exceptional linearity. Offering intercepts exceeding 50dBm at very low power, they can provide an attractive option to the more typical Class A RF amplifiers for applications below 500MHz. An immediate practical issue is encountered in attempting to measure the IM3 when the spurious are more than 110dB below the carriers. Typical approaches of projecting from -1dB compression points do not apply to FDA type devices. Other projection techniques can certainly help, but at the end of the day generating a -120dBc clean input and measuring a -110dBc dynamic range are both useful capabilities in these types of measurements.

Extremely high 3rd order intercept amplifiers

Communications channels have always needed a mix of low noise figure, high intercept, and manageable quiescent power to deliver leading edge systems. The 3rd order intermodulation intercept is particularly important as it describes how low the spurious powers, which were not in the original input signal, will be at the output of a stage receiving two closely spaced carriers. These are particularly troublesome since they fall "close-in" to the carriers and cannot be filtered out. The classic definition of the 3rd order intercept is shown in figure 1. Also shown is the spacing around a center frequency where the resulting spurious will be. Essentially, for carriers spaced +/-Δf around f0, the 3rd order spurious will be at +/-3Δf around f0, where f0 is the average (or center) frequency of the two carriers.

Figure 1. 3rd order intercept definition

For amplifiers that show an intercept characteristic, this simple approach gives an easy way to predict SFDR for different output carrier levels. From fig.
1, the intercept for equal carrier power (P0), is given by eq. 1:

    OIP3 = P0 + ΔdBc/2    (eq. 1)

where P0 is each test tone power in dBm and ΔdBc is the separation in dB from the test tones down to the 3rd order spurious. From this single number, an estimate of the 3rd order SFDR may be made as eq. 2:

    SFDR (dBc) = 2 x (OIP3 - P0)    (eq. 2)

The intercept is often constant over frequency for class A type RF amplifiers, but never so for the higher open loop gain, voltage or current feedback based, fully differential amplifier (FDA) type devices. These lower power devices have a frequency dependent loop gain and a lower full power bandwidth (slew rate) that reduce the performance as the frequency increases. The easy measurement is when the test power levels are at 0dBm. The example drawing of fig. 1 is drawn with -60dBc 3rd order spurious at 0dBm output, so the intercept is 30dBm from equation 1. Then, at say a 10dBm output level (2Vpp on each tone for a sine wave test, 4Vpp output envelope), equation 2 would predict 40dB SFDR, which can also be seen in fig. 1. The name "intercept point" comes from the intersection of the 2 curves in fig. 1. That intersection also equals 30dBm and is a projection of where the output spurious would equal the test powers. That 30dBm output power is of course not intended here and the model is only used to project the 3rd order spurious at output powers far below this "intercept" point.

Not all amplifiers show a strictly intercept performance, so it is also common to just see a 3rd order spurious level vs. frequency and/or output power level plot. This is particularly common when the loads are not intended to be 50 ohm loads, such as when driving ADC inputs. For example, a very low IM3 device like the ISL55210 (ref. 1) shows a data sheet plot such as fig. 2 (figure 9, ref. 2).

Figure 2. Swept frequency, fixed gain, 200 ohm load IM2/IM3 SFDR plot for the ISL55210

This is showing the ΔdBc from equal test tone powers down to the spurious levels for different fixed output 2-tone envelopes swept up in frequency using the 15dB gain test circuit of fig. 3. The output network of fig.
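The intercept arithmetic of eq. 1 and eq. 2 reduces to two one-line functions; this Python sketch (function names are mine) reproduces the worked numbers in the text.

```python
# 3rd order intercept arithmetic in dBm/dB, following eq. 1 and eq. 2.
def oip3_from_spur(p0_dbm, spur_dbc):
    """eq. 1: intercept = per-tone power + half the carrier-to-spur spacing."""
    return p0_dbm + spur_dbc / 2.0

def sfdr_at(oip3_dbm, p0_dbm):
    """eq. 2: 3rd order SFDR (dBc) at a given per-tone output power."""
    return 2.0 * (oip3_dbm - p0_dbm)

# Worked numbers from figure 1: -60 dBc spurs at 0 dBm tones -> 30 dBm OIP3,
# which in turn predicts 40 dB SFDR when each tone is raised to +10 dBm.
ip3 = oip3_from_spur(0.0, 60.0)   # 30.0
sfdr = sfdr_at(ip3, 10.0)         # 40.0
```

As the article stresses, this projection only applies to devices that actually exhibit an intercept characteristic; for FDAs the spur level must be measured, not projected.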
3 maps from a 200Ω differential load to a single ended 50 ohm measurement path. The 2Vpp curve is 2, 1Vpp test tones at the output pins (Vo) spaced +/-100kHz around the x-axis frequency.

Figure 3. Test board for IM2/IM3 test of fig. 2 (ref. 3)

Above 150MHz, it is starting to look like it might have an intercept characteristic, but the question here is how to generate and test these <-100dBc levels in a lab environment. While the IM2 is not nearly so low as the IM3 for the 115mW ISL55210, the intent was that a bandpass filter following the stage would filter those off when it is the <-100dBc 3rd order terms that are of interest in the application.

Developing the input test signal

Testing for OIP3 starts with summing two signal generators together and eventually ends with using a spectrum analyzer to measure very low spurious levels. To give the spectrum analyzer some chance of making this measurement, it helps to use very low phase noise sources locked to the spectrum analyzer, so that it can zoom in on a narrow span knowing exactly where to look, with no phase noise smearing of the measured power. Those synthesized sources are readily available (e.g. HP8662, HP8664, Gigatronics 6080A, R&S SMA100A, etc.), and the fact that they have very poor harmonic distortion (typically in the -50dBc to -60dBc range) is inconsequential to amplifier IM3 testing. That straight harmonic distortion does matter to ADC testing, and the test signal needs to be run through a bandpass filter in that case. No passive filtering is required in testing IM2/IM3 for amplifiers, as none of the individual source harmonics create terms at the intermodulation locations.
GAP 3 to GAP 4

This is a page on GAP 3, which is still available, but no longer supported. The present version is GAP 4 (See Status of GAP 3).

For interactive use or simple programming, GAP 4 and GAP 3 look alike; in particular, the GAP language has not changed much. Few commands have changed names (only where there were urgent reasons) and some tasks are performed more efficiently in GAP 4. Nevertheless, the GAP 4 kernel has been rebuilt from the ground up (Martin Schönert, Frank Celler). It now has more efficient memory management, faster function calling, save/load workspace facilities, streams, and fast vector arithmetic for finite fields. It is easier to extend and 64 bit clean. There is now a GAP compiler that produces human-readable C code. This C code can be compiled and loaded dynamically (UNIX only) or compiled into a kernel. Compiled code is automatically loaded when it exists.

In GAP 3, there is a clear-cut distinction between kernel objects, such as permutations, words and elements in polycyclic groups, and objects that can be represented in the library via records. The user has no access to the internals of kernel objects but full access to library objects. In GAP 4, this distinction has been mellowed. Namely, there are new kinds of objects that can on one hand be designed by the user, similar to the records of GAP 3, but which can on the other hand be made immutable in order to be as well protected as the former kernel objects. These new features are intended to make the introduction of new data types much easier than in GAP 3. Examples of new data structures available in GAP 4 are enumerators (special kinds of lists) and iterators (which admit looping over virtual lists). Also, the representation of algebraic structures via records in GAP 3 has been replaced in GAP 4 by one that uses these new objects. The operations records of GAP 3 have been replaced by a more flexible system.
Every GAP 4 object has a type, which is used in the choice of methods for an upcoming computation. Part of this type is known information about the object. For example, when GAP is asked to compute the conjugacy classes of a group, different methods are available. One of these is a method for solvable groups. This method can be chosen if the group is known to be solvable, which would be part of its type. In particular this mechanism is used to utilize mathematical implications. Those users who want to convert existing GAP 3 code into GAP 4 code will find some advice in the document Migrating to GAP 4.
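As a loose illustration of that dispatch idea (in Python, not GAP code, and with invented names), the known properties stored with an object can steer which algorithm gets run:

```python
# Sketch of property-based method selection, loosely analogous to GAP 4's
# type system: facts known about an object are part of its "type" and are
# consulted when choosing among available methods.
def conjugacy_classes(group):
    # Prefer the specialized algorithm when the group is known to be solvable.
    if group.get("is_solvable"):
        return "used solvable-group method"
    return "used generic method"

g1 = {"name": "S4", "is_solvable": True}
g2 = {"name": "A5", "is_solvable": False}
print(conjugacy_classes(g1))   # used solvable-group method
print(conjugacy_classes(g2))   # used generic method
```

In real GAP 4, this bookkeeping is done by filters and attributes attached to the object's type rather than by a hand-written `if`, so new methods can be installed without touching existing code.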
art theory 101 ~ The LUFE Matrix: distilling SI quantities into ST dimensions

The LUFE Matrix: The distillation of System International (SI) units into more fundamentally base units of Space-Time (ST) dimensions

Short Title: The Multi-dimensional LUFE Matrix

© 1985, 1991, 2003, Reginald Brooks. All rights reserved

Introduction-Geometry (Layout)-Page 6a

NOTE: The LUFE Matrix was developed and published by the author originally in 1985 and was subsequently presented in other papers culminating in the 1991-92 work on which the title and substance of this digitized version is based. These drawings are from that paper.

This digitized version of The LUFE Matrix supplements "Dark matter=Dark energy (the inversion of): The Conservation of SpaceTime by the Conservation of Force" by the author -see Page 5 of Art Theory (original extended 1995)

Apples to oranges. Dimensional analysis of physical entities (quantities) and the units used to express them has led to the formation of a clear and visually powerful ordering Space-Time (ST) Matrix, The L.U.F.E. (LUFE) Matrix. The international system of units (SI)…SI base, derived and supplementary…has been systematically organized into the more fundamentally base units of space and time dimensions. This distillation of all physical entities and the units used to express them into common ST interactive dimensional units allows for a direct, visual display of the conceptual and mathematical operations which inform so much of scientific thought. A useful and dynamic tool for teaching, research and theory has been formed by reducing apples and oranges to ST units. The conservation laws, in relation to the multidimensional LUFE Matrix, are examined in terms of the symmetry of spatial and temporal dimensional units. Conservation of spacetime is proposed to be the fundamental symmetry of Nature.
The LUFE Matrix's reduction of SI units into ST dimensions coincides with the translation of the abstract mathematical symbolism into a readily visualized geometry of areas…areas differentiated by common graphic means of outline, color, shading, etc…which are manipulated in the LUFE Matrix by simple rules of addition and subtraction. It is a new math (dynamic geometry) that is easy to learn, easy to see and easy to utilize as a tool to teach pure mathematics. Once combined with standard and theoretical concepts in physics, the LUFE Matrix offers a new window forward and mirror back into the cosmos. A valuable supplement containing over 200 examples and proofs (some 97 pages, 216 figures on paper) of the LUFE Matrix reviewing classical and modern physics is available.(Digital version coming.) Abstract extension (added July, 2003) Any grand unification of the gravitational force (interaction) with the strong-electro-weak force (interaction) will require coming to terms with our most basic, fundamental understanding of the nature of mass, charge, spin and space-time itself, and how these identifying properties of matter are formed, interact and are conserved by the very space-time dimensions of which they are inseparably generated from. Relativistic quantum theory will be clarified as we recast our eclectically derived notions of physical law into the base, fundamental units of space and time, or space-time dimensions. The LUFE Matrix is the rosetta stone of physics. It visually translates our knowledge of the universe, and the mathematics used to describe it, into a simpler, more beautiful form. The simplicity of that form belies its amazing dynamics. And while the body of the LUFE Matrix carries the fixed, reference components, it is in the wings that the dynamical lift is to be found. Here, in flight, is where the LUFE Matrix can visually take one to both the past and the future. 
The LUFE Matrix Supplement: Examples and Proofs will dynamically fly through the history of classical and modern physics, giving proof to its validity by example. It is a short course in its own right. Dark-Dark-Light: Dark Matter=Dark Energy (the inversion of): The Conservation of SpaceTime by the Conservation of Force (and Energy) dynamically flies one into the future. Here is a new theory which uses the LUFE Matrix to reinterpret the Standard Model Theory of the cosmos in terms of a more fundamental conservation law: the Conservation of Space-Time. The LUFE Matrix, as a rosetta stone…a visual theory of space-time dimension…is given wings that no sun can melt. In closing, I have included five new constants from my own work in this realm…LUFE, The Layman's Unified Field Expose.

Page 6a- Geometry- Layout

The evolution of our understanding of the physical world has been anything but a linear plot. Progress has occurred in leaps and bounds. The authenticity with which partial truths gain a stranglehold on the true progression of an idea is both bewildering and unfortunate. But fortunately, it is in our nature to inquire, to search, and to go deeper and deeper into a question to reach for the solution to that idea. The idea is nothing short of understanding the universe and our place in it. From the concepts of fire, water, earth, and air as the fundamental interactive elements defining our world, we have traveled the winding road of scientific thought to the present, accumulating an enormous physical context in which our physical entities and their expressions exist, and have now come to realize the fundamental role that space and time themselves play in the actual formation of the physical world.
From Newtonian three-dimensional and absolute space and absolute time we have traveled, and not without enormous effort, to the Einsteinian four-dimensional, relative space-time and hence to our present and most far-reaching efforts. Post-Einsteinian thought increasingly favors a multi-dimensional universe consisting of a number of spatial dimensions (nine or more) and a temporal dimension to give a nonabsolute 10+dimensional space-time fabric that includes all matter and all fields. Inherent in this approach is the integral roles that space and time play in the actual formation and subsequent expression of all physical phenomena. Such dimensional analysis may be enhanced and clarified once we have found the proper relationship(s) between these multiple dimensions of space and time. The LUFE Matrix is the tool. The LUFE Matrix (L.U.F.E. stands for Layman's Unified Field Expose, a work published by the author in 1985, which describes in detail the development of the concepts which resulted in the formation of the LUFE Matrix) was developed in response to a need to compare apples with oranges. The International System of units (SI), which is used in the LUFE Matrix, itself is a step toward unifying the conceptual basis of physical thought by standardizing the units of physical expression. Dimensional analysis in the SI system allowed numerous and widely differing physical entities to be expressed into a small number (7) of SI base units (length, mass, time, electric current, temperature, molecular amounts and luminous intensity) and from these a much longer list of SI derived and supplementary units. The LUFE Matrix is a further distillation of this process. All SI quantities and units are expressed in fundamental space-time (ST) units (dimensions). 
This results in a great simplification of an enormous body of units (a simplification not unlike Mendeleev's Periodic Table of the Elements) into which a remarkable visual order is given to such an eclectic array of physical concepts, quantities and units. The LUFE Matrix is the rosetta stone of Nature. Use it to advance peace and harmony.

The LUFE Matrix Layout

We start with the familiar x-, y-axis of the Cartesian coordinate system in which four quadrants are formed, designated as: ST, S/T, 1/(ST), and T/S, clockwise. More on this in a bit. The horizontal x-axis designates the pure spatial dimensions: Space (S) as S[I], S[II], S[III],…and so on. The vertical y-axis designates the pure temporal dimensions: Time (T) as T[I], T[II], T[III],…and so on. From the origin, the spatial axis is positive to the right, negative to the left. The temporal axis is positive up, negative down. Some entities have purely spatial dimensions, e.g., displacement, length, radius, area, volume, wavelength, etc. These are located on the horizontal "space" axis. Some entities have purely temporal dimensions such as elapsed time or frequency, the two of which are reciprocally related in that frequency is cycles per second. (The inverse of time, 1/T = ν, is frequency.) These are located on the vertical "time" axis. However, the vast majority of physical entities are expressed in the interactive ST dimensions in which a combination of horizontal spatial dimension(s) and vertical temporal dimension(s) gives the physical expression. This appears on the LUFE Matrix in one of the four quadrants, usually in the S/T lower right quadrant, as most entities have a net dimensional expression of so many S dimensions per so many T dimensions.

For example, the velocity of light, c, equals so much displacement per unit of time = meters per second = space/time = S/T = wavelength (λ) times frequency (ν) = λν. When an item is per something spatial, or per something temporal, that is effectively dividing that item by the spatial or temporal dimension(s). Meters per second translate as m/s, or positive space meters and negative time seconds on the LUFE Matrix. This would place velocity, or m/s, in the lower right quadrant as S[I]/T[I] (it is equivalent to placing an item on the x-, y- coordinate system at x=+1, y=-1). Another common way of expressing division, or the denominator in a fraction, is with the negative exponential, i.e. m/s = m s^-1 and m/s^2 = m s^-2, and so on. That's it, that's just about as tough as it gets. We are going to refine this process and lay down some "rules of the matrix", but the idea is no more complicated than this. To avoid confusion of so much information, it is helpful to focus on the S/T quadrant, realizing that the whole matrix is symmetric. Notice that we have used the word dimension in three ways: (a) as spatial dimension, S; (b) as temporal dimension, T; and (c) as space-time dimension, ST. From the origin, the number of spatial dimensions increases sequentially as S[I], S[II], S[III],…and so on; each S unit represents one unit of spatial dimension. And yes, while we can readily assign any one unit of S (S[I]) as linear space, any two units of S (S[II]) as area, and any three units of S (S[III]) as volume, we must accept Nature's design in which four (S[IV]) and five (S[V]) or more spatial dimensions are required. The same for the temporal dimension: each increases sequentially out from the origin as 1/T[I], 1/T[II], 1/T[III],…and so on.
Here again, one unit of time may be thought of as per second, and two units as per second second, or per second^2, and so on. Each pure space or pure time dimension (i.e., S[I], S[II], S[III],… and 1/T[I], 1/T[II], 1/T[III],…etc.) is to be thought of as extending linearly to infinity (like a beam of light) and in a direction perpendicular to its axial location. Thus S[I] extends vertically to infinity in both the positive and negative direction and 1/T[I] extends horizontally to infinity in both the negative and positive direction. It is on the S/T quadrant of the matrix that the pure spatial and pure temporal dimensions overlap…crossover…forming an area of ST, here as S/T, that defines dynamic, interactional ST dimension. Here is where most of the physical entities express themselves. In the ongoing example, the linear S[I] dimension dynamically interacts with the linear 1/T[I] dimension to give the S[I]/T[I] area which designates the velocity of light, c = S[I]/T[I] = meters/second = λν.

The LUFE Matrix Operational Rules (See the graphic below.)

1. The area on the matrix defines the physical entity and its units of expression. Interactional ST dimensions have areas within a quadrant, while purely spatial or temporal dimensions have linear "spaces" along their respective axis. The identity of a physical entity, be it a quantity or the units to describe/measure it, in terms of net amounts of so much spatial dimension and/or temporal dimension is constant and does not change regardless of its operation location on the matrix.

2. All net areas and spaces are counted from the origin, 0, where the x- (space) and y- (time) axial coordinates cross.

3. Pure spatial (like length) or temporal (like time, frequency or temperature) dimensions never appear by themselves in a quadrant. They are confined to their static, linear space on the axis. These are referred to as Space Or Time Areas (SOTA).
Pure spatial dimensions always go to infinity in both the up and down directions for each spatial dimension. Pure temporal dimensions always go to infinity in both the left and right direction for each temporal dimension. Each is like a bipolar laser. 4. It is only in perpendicular combination(s) that space and time dimensions dynamically interact to form ST interactional dimensions (STID). 5. Once you are on the matrix proper, that is on the STID area, then the multiplication (by addition) or division (by subtraction) of any other dimensions, be they ST interactional dimensions or pure spatial or temporal dimensions, is begun from that area already defined on the matrix at that point (not from the origin, 0), using its usual ST designation. For example, acceleration (S[I]/ T[II]) is velocity (S[I]/T[I]) per unit of time (T[I]), so if velocity is first located on the matrix at the S[I]/T[I] ST interactional area of the S/T quadrant, then dividing this per unit of time (T[I]), which is the same as multiplying it by 1/T [I], requires in this case that we add one unit of pure temporal dimension (1/T[I] ) to the existing S[I]/T[I] STID area to give acceleration (S[I] /T[I] · 1/T[I] = S[I]/T[II]). Tip: Get on the matrix proper first with the larger STID areas, then combine the smaller STID units and/or pure SOTA units. Remember to keep the distinction between SOTA and STID areas clear in mind during all operations. Each is composed of pure dimensions that run to infinity, but only STID areas have both space and time dimensions. Once we get into the mathematical equations, all of which can be simplified and solved on the matrix, we will then begin to add and/or subtract various SOTA and STID areas to others already on the matrix. This entails building out (for multiplication we add areas) or in (for division we subtract areas) as the equations are solved. 
SOTAs are added or subtracted to the grid row or column to which their bipolar, laser-light-like influence extends; thus they act next to the sides of an existing area. On the other hand, STID areas are added or subtracted diagonally to other STID areas. This is only natural, as the STID areas have their dual, perpendicular, bipolar, laser-light-like influence going both horizontally and vertically.

6. Only physical entities that have net dimensional quantities appear on the matrix or are involved in any of the operations of the matrix. Dimensionless units include any pure numbers, integers, fractions, geometric ratios, radians, trigonometric, logarithmic, or other functions, and the like. In short, if an entity's SI units, when converted to ST units, leave no net ST units, then that entity is a dimensionless unit. Examples include Newton's gravitational constant, Coulomb's constant, the permittivity of free space, the dielectric constant and the fine structure constant.

7. The LUFE Matrix readily displays the dynamics of mathematical operations involving the multiplication or division of physical entities. Addition or subtraction does not affect the matrix, nor does the order in which mathematical operations take place. It is only the net, remaining area (or linear space) that counts.

8. Multiplication in the matrix is akin to exponential multiplication…the product of two or more dimensional quantities is found by the addition of their dimensional designations. Two examples: S[I] · S[IV] = S[V], and S[I]/T[I] · S[IV]/T[III] = S[V]/T[IV].

9. Division is similar to exponential division…the quotient of two or more dimensional quantities is found by the subtraction of their dimensional designations. Two examples: S[V]/T[IV] ÷ 1/T[I] = S[V]/T[III], and (S[III]/T[II])^2 ÷ (S[I])^2 = S[IV]/T[IV].

10.
The LUFE Matrix is equally valid for the MKS, CGS and other less universal systems of units, as the idea is to reduce and distill these systems to one in which there are only two fundamental base units, space and time.

The LUFE Matrix Graphic Dynamics

Digital version for the computer: grays-inactive, color-active

In the following graphics, hover your cursor over the text and the graphic will change to reflect that text. (You must allow the computer to load the images for the first time; thereafter it will respond more quickly.) We start with the horizontal x-axis, graded in grays, to represent the pure "space" axis. Then we add the vertical y-axis for the "time" axis. Although we will in general keep all inactive areas in grayscale, we do find the "space", "time" axes in simple blues and greens so much more appealing. In #4, we add the grid and in #5 the simple ST text. Note the origin, 0, is in red. This matrix graphic represents the null, net 0 state when there are no net SOTA or STID areas.

[Graphic: Geometry: Layout of The LUFE Matrix Axes]

Quadrants are where most of the action is. Here is where a horizontal space dimension perpendicularly interacts, by crossing over, with a vertical time dimension, forming a STID area. Graphics #6-10 take us on a tour of each of the separate quadrants, ending with all four quadrants lit up in color.

[Graphic: Geometry: Layout of The LUFE Matrix Quadrants]

Working Color Template

To make the color work for us a little better, we tried adding the blue-green axes in #11, yet found it more workable to break the quadrants into more discrete colors, as in #12.
The text was added in #13 to complete our working color template.

[Graphic: Geometry: Layout of The LUFE Matrix Working Color Template]

On to Page 6b- Geometry- Space Or Time Area (SOTA)
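The exponent bookkeeping of rules 8 and 9 above can be sketched directly. In this Python sketch (the pair encoding is my own shorthand, not part of the original work), a quantity is a pair of net exponents (space, time), with negative time exponents standing for "per unit time"; multiplying adds exponents and dividing subtracts them.

```python
# Sketch of rules 8 and 9: a quantity is a pair of net exponents
# (space, time); multiplying adds exponents, dividing subtracts them.
def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

def div(a, b):
    return (a[0] - b[0], a[1] - b[1])

S = (1, 0)   # one unit of spatial dimension, S[I]
T = (0, 1)   # one unit of temporal dimension, T[I]

velocity = div(S, T)             # S[I]/T[I]  -> (1, -1)
acceleration = div(velocity, T)  # S[I]/T[II] -> (1, -2)

# Rule 8 examples: S[I] * S[IV] = S[V];  S[I]/T[I] * S[IV]/T[III] = S[V]/T[IV]
assert mul(S, (4, 0)) == (5, 0)
assert mul((1, -1), (4, -3)) == (5, -4)

# Rule 9 examples: S[V]/T[IV] / (1/T[I]) = S[V]/T[III];
# (S[III]/T[II])^2 / (S[I])^2 = S[IV]/T[IV]
assert div((5, -4), (0, -1)) == (5, -3)
assert div((6, -4), (2, 0)) == (4, -4)
```

The assertions reproduce the worked examples of rules 8 and 9 exactly, which is the point: the matrix operations are ordinary exponent arithmetic on the two base dimensions.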
{"url":"http://www.brooksdesign-ps.com/Code/Html/LM/LMGintro.htm","timestamp":"2014-04-21T12:08:15Z","content_type":null,"content_length":"46521","record_id":"<urn:uuid:f437744f-a51f-4229-a805-4c1ab783e250>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Parking in a student lot cost $1 for the first half hour and $1.25 for each hour thereafter. A partial hour is charge the same as a full hour is charge the same as a full hour. What is the longest time that a student can park in this lot for $9. vocabulary studies Equal Rights vocabulary studies Acceptance, Perception and Conclusion A man who is 5ft tall weights 140 lbs. What is his height in centimeters and his mass in kilograms? 2. Analyze the following scenario: Duncombe Village Golf Course is considering the purchase of new equipment that will cost $1,200,000 if purchased today and will generate the following cash disbursements and receipts. Should Duncombe pursue the investment if the cost of capi... Draw a venn diagram to illustrate the following. In a shop 50 cars were inspected and it found that 23 needed new brakes and 34 needed new tires. 3 needed no services at all how many needed new brakes and new tires? How many only needed brakes? Please help The length of a rectangle is one more than seven times the width. The perimeter of the rectangle is 26 7/8 square inches. Find eight times the width. In an electric motor, at the moment that the half-turn of the rotor (central electromagnet) completes, the field of this electromagnet Find the dy/dt for each pair of functions y=x^2-5x , x=t^2+3 Your answer's wrong, btw, sorry :( What is the "u" variable standing for? Suppose that a car starts from rest at position -3.31m and accelerates with a constant acceleration of 4.15m/s^2. At what time t is the velocity of the car 19.2m/s? cyber security Prior to the discovery of any specific public-key schemes such as RSA, an existence proof was developed whose purpose was to demonstrate that public-key encryption is possible in theory. Consider the following tables, as shown below: M1= 5 4 2 3 1 M2= 5 2 3 4 1 4 2 5 1 3 1 3 2... A clock with a pendulum made of steel has a period of 1.000 s at 20.0°C. The temperature is decreased to 4°C. By how much does the period change? 
How much time does the clock gain or lose in one when will train b catch up with train b if train a is traveling 60 mph and passes station at 1:25pm While train b is traveling 80 mph and passes staion at 1:40pm? the weight of an object changed by =25.0% the object's final weight was 2.5 grams. What is it's inital weight A straight ladder is leaning against the wall of a house. The ladder has rails 4.80 m long, joined by rungs 5.100 m long. Its bottom end is on solid but sloping ground so that the top of the ladder is 3.100 m to the left of where it should be, and the ladder is unsafe to climb... Columnist Dave Barry poked fun at the name "The Grand Cities" adopted by Grand Forks, North Dakota, and East Grand Forks, Minnesota. Residents of the prairie towns then named their next municipal building for him. At the Dave Barry Lift Station No. 16, untreated sewa... A 9.00 kg object starting from rest falls through a viscous medium and experiences a resistive force = -b , where is the velocity of the object. The object reaches one half its terminal speed in 5.54 s. (a) Determine the terminal speed. (b) At what time is the speed of the obj... why isnt acceleration = 9.81m/s^2 due to gravitational force? You stand on the seat of a chair and then hop off. During the time you are in flight down to the floor, the Earth is lurching up toward you with an acceleration a. If your mass is 52 kg, what is the value of a? Visualize the Earth as a perfectly solid object. A landscape architect is planning an artificial waterfall in a city park. Water flowing at 1.40 m/s will leave the end of a horizontal channel at the top of a vertical wall 3.90 m high, and from there the water falls into a pool. To sell her plan to the city council, the archi... A boy can throw a ball a maximum horizontal distance of R on a level field. How far can he throw the same ball vertically upward? Assume that his muscles give the ball the same speed in each case. 
(Use R and g as appropriate in your equation.) One strategy in a snowball fight is to throw a snowball at a high angle over level ground. While your opponent is watching the first one, a second snowball is thrown at a low angle timed to arrive before or at the same time as the first one. Assume both snowballs are thrown wi... A boy can throw a ball a maximum horizontal distance of R on a level field. How far can he throw the same ball vertically upward? Assume that his muscles give the ball the same speed in each case. (Use R and g as appropriate in your equation.) What i have with me is vi*sin(theta) - (g*sqrt(2*h/g)) however it is not correct. As their booster rockets separate, Space Shuttle astronauts typically feel accelerations up to 3g, where g = 9.80 m/s2. In their training, astronauts ride in a device where they experience such an acceleration as a centripetal acceleration. Specifically, the astronaut is faste... As their booster rockets separate, Space Shuttle astronauts typically feel accelerations up to 3g, where g = 9.80 m/s2. In their training, astronauts ride in a device where they experience such an acceleration as a centripetal acceleration. Specifically, the astronaut is faste... the reference frames are 1) the car, 2) the earth. I've got 3.54 m/s for part (1). but it is not correct. Thanks a million A car travels due east with a speed of 35.0 km/h. Raindrops are falling at a constant speed vertically with respect to the Earth. The traces of the rain on the side windows of the car make an angle of 70.0° with the vertical. Find the velocity of the rain with respect to t... A science student is riding on a flatcar of a train traveling along a straight horizontal track at a constant speed of 12.0 m/s. The student throws a ball into the air along a path that he judges to make an initial angle of 60.0° with the horizontal and to be in line with ... 
A Coast Guard cutter detects an unidentified ship at a distance of 16.7 km in the direction 13.8° east of north. The ship is traveling at 24.9 km/h on a course at 39.2° east of north. The Coast Guard wishes to send a speedboat to intercept the vessel and investigate it... In zero-gravity astronaut training and equipment testing, NASA flies a KC135A aircraft along a parabolic flight path. As shown in the figure, the aircraft climbs from 24,000 ft to 32000 ft, where it enters the zero-g parabola with a velocity of 158 m/s at 45.0° nose high a... One strategy in a snowball fight is to throw a snowball at a high angle over level ground. While your opponent is watching the first one, a second snowball is thrown at a low angle timed to arrive before or at the same time as the first one. Assume both snowballs are thrown wi... An astronaut on a strange planet finds that she can jump a maximum horizontal distance of 17.0 m if her initial speed is 3.80 m/s. What is the free-fall acceleration on the planet? The small archerfish (length 20 to 25 cm) lives in brackish waters of Southeast Asia from India to the Philippines. This aptly named creature captures its prey by shooting a stream of water drops at an insect, either flying or at rest. The bug falls into the water and the fish... If a block has a mass of 16.2 grams and a volume of 14 cc. What is the density if calculated? Laura is driving to Seattle. Suppose that the remaining distance to drive (in miles) is a linear function of her driving time (in minutes). When graphed, the function gives a line with a slope of -0.75. Laura has 51 miles remaining after 33 minutes of driving. How many miles w... wha tis the exact value of sin 9pi/4 Algebra 1 To make this sentence true i have to put <,>,or = in between ( ) the given fractions or Decimals? 7/20 ( ) 2/5 0.15 ( ) 1/8 Please explain? Given a rectangular prism with dimensions w = 3, l = 4, and h = 6. 
If you created a second rectangular prism with the length doubled but the height halved (and the width stays the same), which would be the relation of the second volume to the first volume? 740 mm Hg How would you calculate the concentration of an aqueous solution of Ca(OH2)that has a pH of 12.57. poetry, part 1 Which one of the following lines best illustrates personification? A. A narrow wind complains all day. B. The fog comes on little cat feet. C. She floated graceful as a dove. D. Spring is a dream business law Wilma's arm is broken when Paula knocks her down during an agrument. If Wilma sues Paula for battery, what damages is Wilma likely to receive? Well I saying both. I have describe two cultrals and their views on health. i shows african and caucaisians american. I know that african american An excessive impact on minority populations is chronic diseases. Chronic diseases that are consider in African American are AIDS, ... I need some help with what are some of the implications to health care providers in African American and Caucasian? Considering cultural views on the health as organic, health as harmony and disease as a curse or stigma I need some help with listing at two pros and cons for each of the given patient and caregiver roles as a -Mechanics and machines -Parents and children -Spiritualists and believers -Providers and consumers -Partners This is so I can right my paper Thank you I am sorry it doesn't say specfic. I believe whomever the patient communicates with so that's either the doctor and etc. Communication to and from the doctor. Thanks I need some help with Describing at least two factors that influence patient communication. Chopsticks, Damon or Ms. Sue!!!! PLEASE HELP!!!!!! Damon thanks and you " Big D" watch watcha say okay big guy. im out! how can i graph 2x-2y+5=0? i think the confusing part is how to solve it and then graph it. hahaha. math is confusing. i need help on my homework. how can i graph this equation? 
y-3=0 I need some help with find information on what are the implications for patients? What types of reform efforts are being implemented to improve caregiver s socialization skills? I am having a hard time trying to answer these questions and searching the web for what are the pros and cons of managed care from a consumer s perspective? A caregiver s perspective? From the perspective of a caregiver, would you prefer traditional insurance or one ... I am not asking no one to do my homework. This is what questions being asked. I trying to get an idea of what they mean as far as the major components of health communication. I need some help with a list ofidentifying the major components of health communication. Who is involved in each component? How does each component promote health communication? If not utilized, how would it reduce health communication? re-SraJMcgin geography thankyou for your help that link was very useful and i found alot of information that can help me. happy new year! Causes of deforestation- i found theses causes i just need the meaning or explanation on each one i need to write a detailed explanation on each one commercial logging , forest fires , farming , mining , roads and railways , disadvantages of deforestation - i found these wildl... Environemtal science I think the Dubai awards Environemtal science I need help with creating a 7-10 slide powerpoint presentation that describes environmental benefits and challanges of urbanization. I have no idea what i am suppose to be doing. I have to include descriptions of two 1966 awards winners, dicussing how they overcome their chall... Environemtal science They would all die Environemtal science I believe they are birth rates, death rates, immigration and emigration Environemtal science I need help explaining the four factors that produce changes in poplution and what could happen to the nutria population after all the land is depleted of the nutrias' food resources? 
HOW DO YOU SAY IT'S 2:45 ON THE DOT language arts language arts identify the following sets of words as subjects,predicates,or complete sentences. 1.The apple 2.broke the window 3.Telephones ring 4.Ball bounce 5.The noisy trucks 6.Fell off the chair 7.A funny clown 8.The big comfortable couch 9.Ran down the street 10.A boy ate a hot wing yes it is B 5th Grade Math How do you do the number five in a rectangular array Exploring Quadratic Graphs Exploring Quadratic Graphs if i am 3 times as old as my sister and last year i was 4 times as old as her how old am i now and how old is she well how old is your sister You are 9 and she is 3 (3x3=9) Last year you were 8 and she was 2 (2x4=8) your sister is 3 and you are 9 let x = sister's age and y... us history Ellis focuses more intensively on the plight of the slaves than that of the Indians, but he does point out that Washington addressed their situation with the suggestion that they abandon their hunter-gatherer way of life and assimilate themselves into the general population a...
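Several of the word problems in the listing above reduce to small linear systems. As a worked check for the motorboat question (22 mph with the current, 8 mph against it), here is a quick sketch; the variable names are mine, not the original poster's:

```python
# Still-water speed b and current speed c satisfy:
#   b + c = 22  (with the current),  b - c = 8  (against it)
with_current, against_current = 22.0, 8.0
boat = (with_current + against_current) / 2     # still-water speed
current = (with_current - against_current) / 2  # current speed
print(boat, current)  # 15.0 7.0
```

The same add-and-halve trick solves any "with/against" rate problem of this shape.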
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Kia","timestamp":"2014-04-16T08:47:38Z","content_type":null,"content_length":"26295","record_id":"<urn:uuid:b4530305-9f58-4555-a59c-688e4e37890c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating Organic Textures with Controlled Anisotropy and Directionality
Takayuki Itoh, Kazunori Miyata, Kenji Shimada
IEEE Computer Graphics and Applications, vol. 23, no. 3, pp. 38-45, May/June 2003

This article presents a computational method for generating organic textures. The method first tessellates a region into a set of pseudo-Voronoi polygons using a particle model and then generates the detailed geometry of each of the polygons using Loop's subdivision surface with fractal noise. Unlike previous particle models, which are designed for creating hexagonal cell arrangements, this particle model can also create rectangular cell arrangements, often observed in organic textures. In either cell arrangement, the method lets a user control the anisotropy of the cell geometry and the directionality of the cell arrangements.
A detailed 3D cell geometry is then created by adjusting a set of parameters that controls the cells' height and degree of skewing and tapering. A user can create various types of realistic looking organic textures by choosing a cell arrangement type, anisotropy, and directionality, along with the geometry control parameters.

Index Terms: texture synthesis, texture mapping, rendering, anisotropic meshing, Voronoi tessellation, subdivision surface

Takayuki Itoh, Kazunori Miyata, Kenji Shimada, "Generating Organic Textures with Controlled Anisotropy and Directionality," IEEE Computer Graphics and Applications, vol. 23, no. 3, pp. 38-45, May-June 2003, doi:10.1109/MCG.2003.1198261
{"url":"http://www.computer.org/csdl/mags/cg/2003/03/mcg2003030038-abs.html","timestamp":"2014-04-24T06:50:46Z","content_type":null,"content_length":"56695","record_id":"<urn:uuid:22178937-cf98-4026-bfaa-aa750b4eb2f6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
hmatrix-0.1.1.0: Linear algebra and numerical computations

Numeric.LinearAlgebra.LAPACK

Portability: portable (uses FFI)
Stability: provisional
Maintainer: Alberto Ruiz (aruiz at um dot es)

Wrappers for a few LAPACK functions (http://www.netlib.org/lapack).

svdR :: Matrix Double -> (Matrix Double, Vector Double, Matrix Double)
svdRdd :: Matrix Double -> (Matrix Double, Vector Double, Matrix Double)
svdC :: Matrix (Complex Double) -> (Matrix (Complex Double), Vector Double, Matrix (Complex Double))
eigC :: Matrix (Complex Double) -> (Vector (Complex Double), Matrix (Complex Double))
eigR :: Matrix Double -> (Vector (Complex Double), Matrix (Complex Double))
eigS :: Matrix Double -> (Vector Double, Matrix Double)
eigH :: Matrix (Complex Double) -> (Vector Double, Matrix (Complex Double))
linearSolveR :: Matrix Double -> Matrix Double -> Matrix Double
linearSolveC :: Matrix (Complex Double) -> Matrix (Complex Double) -> Matrix (Complex Double)
linearSolveLSR :: Matrix Double -> Matrix Double -> Matrix Double
linearSolveLSC :: Matrix (Complex Double) -> Matrix (Complex Double) -> Matrix (Complex Double)
linearSolveSVDR :: Maybe Double -> Matrix Double -> Matrix Double -> Matrix Double
linearSolveSVDC :: Maybe Double -> Matrix (Complex Double) -> Matrix (Complex Double) -> Matrix (Complex Double)
luR :: Matrix Double -> (Matrix Double, [Int])
luC :: Matrix (Complex Double) -> (Matrix (Complex Double), [Int])
cholS :: Matrix Double -> Matrix Double
cholH :: Matrix (Complex Double) -> Matrix (Complex Double)
qrR :: Matrix Double -> (Matrix Double, Vector Double)
qrC :: Matrix (Complex Double) -> (Matrix (Complex Double), Vector (Complex Double))
hessR :: Matrix Double -> (Matrix Double, Vector Double)
hessC :: Matrix (Complex Double) -> (Matrix (Complex Double), Vector (Complex Double))
schurR :: Matrix Double -> (Matrix Double, Matrix Double)
schurC :: Matrix (Complex Double) -> (Matrix (Complex Double), Matrix (Complex Double))

svdR :: Matrix Double -> (Matrix Double, Vector Double, Matrix Double)

Wrapper for LAPACK's dgesvd, which computes the full svd decomposition of a real matrix. (u,s,v)=full svdR m so that m=u <> s <> trans v.

svdRdd :: Matrix Double -> (Matrix Double, Vector Double, Matrix Double)

Wrapper for LAPACK's dgesdd, which computes the full svd decomposition of a real matrix by a faster divide-and-conquer method. (u,s,v)=full svdRdd m so that m=u <> s <> trans v.

svdC :: Matrix (Complex Double) -> (Matrix (Complex Double), Vector Double, Matrix (Complex Double))

Wrapper for LAPACK's zgesvd, which computes the full svd decomposition of a complex matrix. (u,s,v)=full svdC m so that m=u <> comp s <> trans v.

eigC :: Matrix (Complex Double) -> (Vector (Complex Double), Matrix (Complex Double))

Wrapper for LAPACK's zgeev, which computes the eigenvalues and right eigenvectors of a general complex matrix: if (l,v)=eigC m then m <> v = v <> diag l. The eigenvectors are the columns of v. The eigenvalues are not sorted.

eigR :: Matrix Double -> (Vector (Complex Double), Matrix (Complex Double))

Wrapper for LAPACK's dgeev, which computes the eigenvalues and right eigenvectors of a general real matrix: if (l,v)=eigR m then m <> v = v <> diag l. The eigenvectors are the columns of v. The eigenvalues are not sorted.

eigS :: Matrix Double -> (Vector Double, Matrix Double)

Wrapper for LAPACK's dsyev, which computes the eigenvalues and right eigenvectors of a symmetric real matrix: if (l,v)=eigS m then m <> v = v <> diag l. The eigenvectors are the columns of v. The eigenvalues are sorted in descending order (use eigS' for ascending order).

eigH :: Matrix (Complex Double) -> (Vector Double, Matrix (Complex Double))

Wrapper for LAPACK's zheev, which computes the eigenvalues and right eigenvectors of a hermitian complex matrix: if (l,v)=eigH m then m <> v = v <> diag (comp l). The eigenvectors are the columns of v. The eigenvalues are sorted in descending order (use eigH' for ascending order).
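As a point of reference for what eigS returns, the dominant eigenpair of a small symmetric matrix can be found by plain power iteration. This helper is not part of hmatrix and is only an illustration; LAPACK's dsyev computes all eigenpairs with a far more robust algorithm:

```python
def power_iteration(a, iters=200):
    """Dominant eigenpair of a small symmetric matrix (list of rows)
    by power iteration: repeatedly apply the matrix and normalize."""
    n = len(a)
    v = [1.0] + [0.0] * (n - 1)  # any start with a component along e1
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient v' A v gives the eigenvalue estimate
    lam = sum(v[i] * sum(a[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
```

For [[2,1],[1,2]] the iteration converges to the eigenvalue 3 with eigenvector proportional to (1,1), matching the largest entry that eigS would report first (its eigenvalues are sorted in descending order).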
linearSolveR :: Matrix Double -> Matrix Double -> Matrix Double

Wrapper for LAPACK's dgesv, which solves a general real linear system (for several right-hand sides) internally using the lu decomposition.

linearSolveC :: Matrix (Complex Double) -> Matrix (Complex Double) -> Matrix (Complex Double)

Wrapper for LAPACK's zgesv, which solves a general complex linear system (for several right-hand sides) internally using the lu decomposition.

linearSolveLSR :: Matrix Double -> Matrix Double -> Matrix Double

Wrapper for LAPACK's dgels, which obtains the least squared error solution of an overconstrained real linear system or the minimum norm solution of an underdetermined system, for several right-hand sides. For rank deficient systems use linearSolveSVDR.

linearSolveLSC :: Matrix (Complex Double) -> Matrix (Complex Double) -> Matrix (Complex Double)

Wrapper for LAPACK's zgels, which obtains the least squared error solution of an overconstrained complex linear system or the minimum norm solution of an underdetermined system, for several right-hand sides. For rank deficient systems use linearSolveSVDC.

linearSolveSVDR
  :: Maybe Double   -- rcond
  -> Matrix Double  -- coefficient matrix
  -> Matrix Double  -- right hand sides (as columns)
  -> Matrix Double  -- solution vectors (as columns)

Wrapper for LAPACK's dgelss, which obtains the minimum norm solution to a real linear least squares problem Ax=B using the svd, for several right-hand sides. Admits rank deficient systems but it is slower than linearSolveLSR. The effective rank of A is determined by treating as zero those singular values which are less than rcond times the largest singular value. If rcond == Nothing machine precision is used.
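The rcond thresholding rule described for linearSolveSVDR is easy to state concretely. A hypothetical helper, not part of hmatrix, mirroring that rule in Python:

```python
import sys

def effective_rank(singular_values, rcond=None):
    """Effective rank as *gelss determines it: singular values below
    rcond * s_max are treated as zero.  rcond=None plays the role of
    the library's Nothing and falls back to machine precision
    (an assumption mirroring the documented behaviour)."""
    if rcond is None:
        rcond = sys.float_info.epsilon
    s_max = max(singular_values)
    return sum(1 for s in singular_values if s > rcond * s_max)

r = effective_rank([5.0, 1.0, 1e-12], rcond=1e-10)  # 1e-12 < 1e-10 * 5.0
```

With an explicit rcond of 1e-10 the tiny third singular value is discarded; with the machine-precision default it would still count toward the rank.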
linearSolveSVDC
  :: Maybe Double            -- rcond
  -> Matrix (Complex Double) -- coefficient matrix
  -> Matrix (Complex Double) -- right hand sides (as columns)
  -> Matrix (Complex Double) -- solution vectors (as columns)

Wrapper for LAPACK's zgelss, which obtains the minimum norm solution to a complex linear least squares problem Ax=B using the svd, for several right-hand sides. Admits rank deficient systems but it is slower than linearSolveLSC. The effective rank of A is determined by treating as zero those singular values which are less than rcond times the largest singular value. If rcond == Nothing machine precision is used.

luR :: Matrix Double -> (Matrix Double, [Int])

Wrapper for LAPACK's dgetrf, which computes an LU factorization of a general real matrix.

luC :: Matrix (Complex Double) -> (Matrix (Complex Double), [Int])

Wrapper for LAPACK's zgetrf, which computes an LU factorization of a general complex matrix.

cholS :: Matrix Double -> Matrix Double

Wrapper for LAPACK's dpotrf, which computes the Cholesky factorization of a real symmetric positive definite matrix.

cholH :: Matrix (Complex Double) -> Matrix (Complex Double)

Wrapper for LAPACK's zpotrf, which computes the Cholesky factorization of a complex Hermitian positive definite matrix.

qrR :: Matrix Double -> (Matrix Double, Vector Double)

Wrapper for LAPACK's dgeqr2, which computes a QR factorization of a real matrix.

qrC :: Matrix (Complex Double) -> (Matrix (Complex Double), Vector (Complex Double))

Wrapper for LAPACK's zgeqr2, which computes a QR factorization of a complex matrix.

hessR :: Matrix Double -> (Matrix Double, Vector Double)

Wrapper for LAPACK's dgehrd, which computes a Hessenberg factorization of a square real matrix.

hessC :: Matrix (Complex Double) -> (Matrix (Complex Double), Vector (Complex Double))

Wrapper for LAPACK's zgehrd, which computes a Hessenberg factorization of a square complex matrix.
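Since luR and luC return the packed factors together with a pivot list, it may help to see the shape of the computation they wrap. A plain-Python sketch of LU factorization with partial pivoting, illustrative only (LAPACK's dgetrf/zgetrf are blocked and far more careful numerically):

```python
def lu_decompose(a):
    """Plain-Python LU factorization with partial pivoting, PA = LU.
    Returns (lu, piv) packed in the *getrf style: L strictly below the
    diagonal (unit diagonal implied), U on and above it; piv[i] is the
    original row now sitting in position i."""
    n = len(a)
    lu = [row[:] for row in a]  # work on a copy
    piv = list(range(n))
    for k in range(n):
        # pivot: bring the largest |entry| in column k to the diagonal
        p = max(range(k, n), key=lambda i: abs(lu[i][k]))
        if p != k:
            lu[k], lu[p] = lu[p], lu[k]
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            lu[i][k] /= lu[k][k]           # multiplier, stored in L
            for j in range(k + 1, n):
                lu[i][j] -= lu[i][k] * lu[k][j]
    return lu, piv

a = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
lu, piv = lu_decompose(a)
```

Unpacking L and U from `lu` and permuting the rows of `a` by `piv` reconstructs the input, which is the same contract the Haskell wrappers expose.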
schurR :: Matrix Double -> (Matrix Double, Matrix Double)

Wrapper for LAPACK's dgees, which computes a Schur factorization of a square real matrix.

schurC :: Matrix (Complex Double) -> (Matrix (Complex Double), Matrix (Complex Double))

Wrapper for LAPACK's zgees, which computes a Schur factorization of a square complex matrix.

Produced by Haddock version 2.4.2
{"url":"http://hackage.haskell.org/package/hmatrix-0.1.1.0/docs/Numeric-LinearAlgebra-LAPACK.html","timestamp":"2014-04-19T18:09:29Z","content_type":null,"content_length":"53014","record_id":"<urn:uuid:8d15f61e-2b76-4069-975d-fd20b6c6f89c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to Groups, Rings, and Fields

The MAA Guide series — a subset of the Dolciani Mathematical Expositions — is rapidly becoming one of my favorite series of books. I like expository books that provide a quick and interesting entrée into an area of mathematics, or a useful source of examples, and that is precisely what these are. They are also, thanks to careful selection of authors, generally very well-written, informative and particularly useful as a resource for a varied audience. This book, the most recent one in the series (number 8, following books on complex variables, advanced real analysis, real variables, topology, elementary number theory, advanced linear algebra and plane algebraic curves), continues this tradition. Each volume in this series is addressed to readers who, although mathematically sophisticated, are not experts in the subject matter of the book. The canonical example, I would think, would be graduate students seeking an efficient way to prepare for qualifying exams. However, faculty members who haven't had occasion to work extensively in a given area and who want a quick overview of the basic ideas and how they hang together would also find these books valuable. The emphasis in most of the Guides that I have read (this one most definitely included) is on providing a survey of the subject in a reasonably small number of pages: a book that is accessible and informative but likely does not contain the kind of technical detail that, although obviously necessary for complete mastery of the material, may serve as an impediment to a person who just wants to know "what's what" in an area. So, for example, this book, like many in the Guide series (one possible exception is Weintraub's Advanced Linear Algebra), is not really intended as a text. There are no exercises, and most proofs are omitted; some that are fairly easy are provided, though never in the rigid theorem/proof format of most textbooks.
Instead of proofs, Gouvêa provides discussions of the results and, quite often, a helpful sort of intuition as to why something should be true. (The author uses the phrase “shadows of proofs” in this connection.) To compensate for the lack of proofs, there is an excellent bibliography, to which the author makes frequent specific references throughout the text. There are also lots of nice examples. A professional algebraist may be able to immediately give an example of a projective module that is not free, or a ring that does not have the invariance of basis number property, but people who don’t work with algebra all the time may not have such examples on the tip of their tongues. The reader will find such examples here (along with, in connection with the latter, a succinct explanation of why such an example must be noncommutative). The reader will also find some examples that involve completely different branches of mathematics; there is, for example, a nice little one-page discussion of how modular forms arise from group actions, and the author also makes occasional remarks about topics such as topology and elliptic curves. The discussions here are not deep or technical, just brief overviews that give the reader some idea of what the terms mean; perfect for a student or non-specialist faculty member who may wind up hearing the phrase in a talk somewhere. In conformity with the intended readership, examples are not necessarily set off with big margins and the word EXAMPLE in large letters, but are often incorporated directly into the text. The book is divided into six chapters, the first three of which are largely prefatory to the last three, which in turn comprise the meat of the book. 
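On the invariance-of-basis-number remark: the standard argument that such an example must be noncommutative is short enough to record here (my reconstruction; I am not claiming this is verbatim how Gouvêa presents it):

```latex
\textbf{Claim.} Every nonzero commutative ring $R$ has the invariance of basis
number property; hence any ring without it must be noncommutative.

\textbf{Sketch.} Choose a maximal ideal $\mathfrak{m} \subset R$ and set
$k = R/\mathfrak{m}$, a field. If $R^m \cong R^n$ as $R$-modules, then applying
$k \otimes_R (-)$ gives
\[
  k^m \;\cong\; k \otimes_R R^m \;\cong\; k \otimes_R R^n \;\cong\; k^n
\]
as $k$-vector spaces, and since dimension over a field is well defined, $m = n$.
```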
Chapter 1 provides a succinct, interesting historical look at algebra, in which the author briefly tracks the development of algebra from its classical origins through its modern period (i.e., the axiomatic approach of Artin and Noether) up to its "ultramodern" period of category theory. Chapter 2 continues the study of categories; not being a huge fan of what Serge Lang once famously referred to as "abstract nonsense", I feared, when I saw this early chapter on the subject, that the entire book would be filled with commutative diagrams and exact sequences, but was pleased to discover, as I read on, that Gouvêa does not overdo this; these things generally don't appear unless their appearance really does enhance the discussion. Chapter 3 is a bestiary of algebraic terms, some of which are re-defined later and discussed in more detail. The remaining three chapters discuss, in order, the three algebraic structures mentioned in the title of the text: groups, rings and fields (including skew fields). Chapter 4 on groups starts with the definition and then proceeds to discuss all of the general topics that one would expect to encounter in a first year graduate course, and perhaps somewhat more: the chapter talks about Sylow theory, nilpotence and solvability, the word problem, group representation theory (in characteristic 0) and more. The discussion, even of elementary concepts, is done at a mathematically mature, but nonetheless accessible, level (for example, cosets of a subgroup H of a group G are defined as orbits under a certain group action), which I think is entirely appropriate, given the intended readership, and which also has the advantage of letting the reader see how these ideas really fit into the "big picture" (for example, the fact that distinct cosets partition the group is now seen to be just a special case of the more general result about orbits).
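The cosets-as-orbits viewpoint praised above can even be watched in action with a few lines of code. A toy computation (mine, not the book's): take G = S3 and let a two-element subgroup H act by right translation; the orbits are exactly the left cosets, and they visibly partition G:

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(i) = p[q[i]]: composition of permutations as tuples."""
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))   # the symmetric group S3, |G| = 6
H = [(0, 1, 2), (1, 0, 2)]         # subgroup {id, (0 1)}, |H| = 2

def orbit(g):
    """Orbit of g under H acting by right translation: {g.h : h in H}.
    These orbits are exactly the left cosets gH."""
    return frozenset(compose(g, h) for h in H)

cosets = {orbit(g) for g in G}     # three cosets of size two
```

Three pairwise-disjoint orbits of size two covering all six elements: the partition-into-cosets fact, recovered as a statement about orbits.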
The next chapter is on rings and modules, and here, too, we are treated to an excellent survey of that area of mathematics: basic definitions, followed by discussions of topics such as localization, Wedderburn-Artin theory, the Jacobson radical, factorization theory, Dedekind domains (with a look at algebraic numbers), and various kinds of modules (free, projective, injective, etc.). As in the earlier chapter on group theory, the discussion here is at a mature level, with the author frequently stating things at a somewhat greater level of generality than might usually be encountered. (Examples: a quite general statement of Nakayama’s lemma is given, and the usual results about modules over PIDs are deduced as a special case of the more general situation of modules over Dedekind domains.) Notwithstanding this, however, Gouvêa also keeps the needs of students firmly in mind; for example, there is a section titled “Traps”, in which he points out, with simple specific examples, some of the ways in which modules can differ from vector spaces. (He tells of a friend who once described modules as “vector spaces with traps”.) The final chapter is on field theory. Galois theory is covered, of course (in a considerably general way, including infinite Galois groups and their topologies) but the chapter also contains material on such topics as algebras over a field, function fields, central simple algebras and the Brauer group. Because the author is writing for people who already have some mathematical sophistication, including some prior exposure to abstract algebra, he does not feel obliged to follow a strictly linear order of presentation.
So, for example, the chapter on groups, which precedes the chapters on rings and fields, nonetheless contains references to things like finite fields, semisimple rings and algebraic numbers; as another example, Nakayama’s Lemma in ring theory is stated in a form involving tensor products, which are not formally discussed until a few sections later. This provides a certain freedom that an author of a strictly introductory text does not have, and helps, I think, enhance one’s overall understanding of the subject by providing a broader point of view than might otherwise be possible. Likewise, even within a chapter, the level of difficulty is not necessarily monotonically increasing, and sometimes fairly sophisticated topics (e.g., profinite groups) are discussed before much more elementary ones (e.g., permutation groups). So, if you find a certain section to be fairly heavy going, just keep reading, and chances are, within a page or two, you will find things more comfortable. The writing style throughout the book is of uniformly high quality. The author is one of those rare people who has the ability to write like people talk, with a nice, conversational tone that sometimes elicits a smile as well as a nod of understanding. Here, for example, is how he ends his discussion of groups of small order: “The next interesting case is order 16, which is, alas, a bit too interesting. There are five different abelian groups (easy to describe) and there are nine different nonabelian ones (most of them not easy to describe). So we will stop here.” And see also page 160 for a cute little comment that will appeal to fans (of a certain age) that remember Tom Lehrer. It should be apparent from the preceding discussion that I liked this book — a lot. Nevertheless, it seems inevitable that any reviewer will find some nits to pick, just because no two people will ever write the same book. 
The ones I have, though, are neither numerous nor particularly significant, and basically just reflect my personal preferences. I would have liked, for example, to have seen an example of non-isomorphic groups with the same character table (Everybody’s Favorite Example is the pair D_4 and the quaternion group), as well as a specific example of a rational polynomial of degree 5 that is not solvable (the author states that the “generic” polynomial of degree at least five is not solvable and also states that an irreducible polynomial of prime degree with precisely two non-real roots is not solvable by radicals, but does not give an actual fifth-degree polynomial meeting these conditions). I think the phrase “special linear group” should have been introduced when the group SL(n,K) was first defined on page 33, rather than fifty pages later, and also think that discussing unique factorization without at least mentioning Fermat’s Last Theorem can only be described as a lost opportunity. Additionally, one of my favorite cute applications of transcendence bases has always been the proof that the field of complex numbers has infinitely many automorphisms (a fact that I think is insufficiently well known); the author develops all the machinery necessary to establish this, but doesn’t say so explicitly. Finally, in connection with the definition of algebraically closed fields, the author states the Fundamental Theorem of Algebra (that the field of complex numbers is algebraically closed) and says that all proofs “depend on the topology of the complex field”. This statement, though true, may lead students to believe that all proofs are very analytic or topological in nature; in fact, there is at least one proof that uses Sylow and Galois theory and only two simple facts from analysis, namely (a) that any real polynomial of odd degree has at least one real root, and (b) that any quadratic polynomial with complex coefficients has a complex root. But these are quibbles.
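For the curious reader, a standard example meeting these conditions — offered here as textbook folklore, not as anything taken from the book under review — is

```latex
f(x) = x^{5} - 4x + 2 \quad\text{(irreducible over } \mathbb{Q} \text{ by Eisenstein at } p = 2\text{)},
\qquad \operatorname{Gal}(f/\mathbb{Q}) \cong S_{5}.
```

Since $f(-2) = -22$, $f(0) = 2$, $f(1) = -1$, and $f(2) = 26$, while $f'(x) = 5x^{4} - 4$ has only two real zeros, $f$ has exactly three real roots and hence exactly two non-real ones; complex conjugation then acts as a transposition on the roots, and together with the 5-cycle forced by the prime degree this generates $S_{5}$, which is not solvable.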
Overall, this is a valuable book — a pleasure to read, and packed with interesting results. It should be very helpful to graduate students and non-specialists wanting a succinct summary of the subject, and even professional algebraists may find something new and interesting here. It is a splendid addition to an excellent series. One final comment: in the interest of full disclosure, I should mention that, as faithful readers of this column probably already know, the author of this book is also the editor of this column. This raises, I suppose, at least the question of a conflict of interest. This same issue arose when another of the author’s books, p-adic Numbers, was favorably reviewed in this column by Darren Glass more than two years ago, and since I don’t think that I can improve on the way Professor Glass addressed it, I will simply quote him verbatim: “[T]he reader can rest assured that this reviewer would have said equally flattering things about the book even if it wasn’t written by his editor. Besides, I couldn’t think of anything that an editor could use to bribe his volunteer reviewers with (More prominent placing on the site? First crack at the new Keith Devlin?) so I didn’t even bother asking.” Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.
Reading a Ruler

Date: 09/18/97 at 10:48:42
From: Kyle Reynolds
Subject: Ruler

I need to read a ruler, and I can't. Can you help me?

Date: 09/24/97 at 13:35:23
From: Doctor Rob
Subject: Re: Ruler

I hope so. There are two main kinds of rulers in general use, and other, more obscure kinds. We will ignore the obscure ones. First there is the ruler marked in inches, and each inch is subdivided into 16 parts. The lines on it look something like this sketch of a part of a one-foot ruler:

   |                               |
   |               |               |
   |       |       |       |       |
   |   |   |   |   |   |   |   |   |
   | | | | | | | | | | | | | | | | |
   8 D C D B D C D A D C D B D C D 9

The lines have different lengths to help you figure out what lengths they represent. The shortest lines (D) represent an odd number of sixteenths of an inch. The next shortest lines (C) represent an odd number of eighths of an inch. The next shortest lines (B) represent an odd number of quarters of an inch. The next shortest lines (A) represent an odd number of halves of an inch. The longest lines (8 or 9) represent whole inches, and are labeled with numbers.

The lines labeled 8 and 9 above mark points on the edge of the ruler that are eight and nine inches from the left-hand end of the ruler. All distances are measured from that same left-hand end of the ruler, which could be, but probably isn't, marked "0". The line halfway between them labeled A above marks a point on the edge of the ruler which is 8 1/2 inches from the end. That makes sense because 8 1/2 is halfway between 8 and 9. The next shorter lines labeled B above are halfway between 8 and 8 1/2, and halfway between 8 1/2 and 9. The former one marks 8 1/4 inches, and the latter marks 8 3/4 inches. These make sense because 8 1/4 is halfway between 8 and 8 1/2, and 8 3/4 is halfway between 8 1/2 and 9. In the words of arithmetic,

   8 + ([8+1/2] - 8)*(1/2) = [8+1/4]        (half the distance from 8 to [8+1/2])

and

   [8+1/2] + (9 - [8+1/2])*(1/2) = [8+3/4]  (half the distance from [8+1/2] to 9)

Likewise, the next shorter lines labeled C above are halfway between 8 and [8+1/4], between [8+1/4] and [8+1/2], between [8+1/2] and [8+3/4], and between [8+3/4] and 9. They must therefore mark the distances [8+1/8], [8+3/8], [8+5/8], and [8+7/8], respectively. Finally, the shortest lines labeled D above are halfway between adjacent pairs of longer lines, and mark [8+1/16], [8+3/16], ..., [8+15/16].

When I measure a distance, I put the "0" end of the ruler at one end, and then pick the mark on the ruler which is closest to the other end of the distance. The nearest inch line to the left gives me the number of whole inches. I then figure out whether this line is a 1/16 line (shortest), a 1/8 line, a 1/4 line, a 1/2 line, or an inch line. That tells me what the denominator of the fraction of an inch will be. From the inch line I count the lines the same length as my chosen one using odd numbers, "1, 3, 5, 7, ...", until I find my line. That tells me what the numerator of the fraction of an inch will be. I then combine the number of whole inches with the fraction to get the answer.

The other common kind of ruler measures centimeters instead of inches. Each centimeter is divided into 10 parts (each called a millimeter). The lines on it look something like this sketch of a part of such a ruler:

  13                  14
   |                   |
   |         |         |
   | | | | | | | | | | |
   C A A A A B A A A A C

The longest lines labeled C represent whole centimeters. The next longest line labeled B represents a half centimeter. The shortest lines labeled A represent tenths of centimeters, or millimeters. Since 1/2 = 5/10, the B line also represents 5 millimeters. To measure a length, put the left end of the ruler, which could be labeled "0" but probably isn't, at one end, and pick the closest mark on the ruler to the other end. Find the closest centimeter mark to the left of your mark. That will tell you the number of whole centimeters (13 in the above example). The denominator of the fraction of a centimeter is fixed at 10.
The numerator is found by counting from the whole centimeter mark you found above, and the medium-length lines B help you count by showing you where 5 tenths or half a centimeter is. If you are close to the "13" mark, you count up as you move to the right, starting with "0" for the "13" mark itself. If you are close to the "B" mark, you can count up as you move to the right or down as you move to the left, starting with "5" for the B mark itself. If you are close to the "14" mark, you can count down as you move to the left, starting with "10" for the "14" mark itself. This will tell you the numerator of the fraction of a centimeter. When you have the fraction, you may find that it is not in lowest terms: 4/10, for example, is not, and can be reduced to 2/5; and 5/10 can be reduced to 1/2. When you have reduced the fraction, put it together with the whole number of centimeters (13 in this case), and you will have your answer.

I hope this explanation has helped. It sounds more complicated when I write out all the little steps than it feels in practice. With some experience, I think you will find that it all works very naturally and easily.

-Doctor Rob, The Math Forum
 Check out our web site! http://mathforum.org/dr.math/
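The counting procedure Doctor Rob describes boils down to "round to the nearest sixteenth (or tenth), then reduce the fraction." It can be sketched in a few lines of Python; the function name and defaults here are just illustrative, not part of the original answer.

```python
from fractions import Fraction

def nearest_ruler_mark(length, denominator=16):
    """Snap a length to the nearest ruler mark (default: sixteenths of an
    inch) and return it as whole units plus a reduced fraction."""
    # Round to the nearest 1/denominator, then let Fraction reduce it,
    # e.g. 10/16 -> 5/8, 134/10 -> 67/5.
    frac = Fraction(round(length * denominator), denominator)
    whole, part = divmod(frac, 1)
    return int(whole), part

print(nearest_ruler_mark(8.56))        # (8, Fraction(9, 16))
print(nearest_ruler_mark(13.4, 10))    # metric ruler: (13, Fraction(2, 5))
```

The centimeter case falls out for free by passing `denominator=10`, since reducing 4/10 to 2/5 is exactly the "lowest terms" step in the answer above.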
Time-Domain Physical Optics Method for the Analysis of Wide-Band EM Scattering from Two-Dimensional Conducting Rough Surface

International Journal of Antennas and Propagation, Volume 2013 (2013), Article ID 584260, 9 pages

Research Article
School of Science, Xidian University, Xi'an 710071, China

Received 2 March 2013; Accepted 18 August 2013
Academic Editor: Daniel S. Weile

Copyright © 2013 Jia Chungang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The time-domain physical optics (TDPO) method is extended to investigate electromagnetic (EM) scattering from a two-dimensional (2D) perfectly electrically conducting (PEC) rough surface in both the time domain and the frequency domain. The scheme requires relatively small amounts of computer memory and CPU time, and it has an advantage over the Kirchhoff approximation (KA) method in that it obtains the transient response of the rough surface in a single program run. The 2D Gaussian randomly rough surface is generated by the Monte Carlo method and is then partitioned into small triangular facets in a meshing preprocess. The accuracy of TDPO is validated by comparing the numerical results with those obtained by the KA method in both the backward and the specular directions. The transient response of the rough surface and the frequency distribution of its radar cross section (RCS) are shown. Scattering results in the specular direction are given for rough surfaces of different sizes. The influence of the root mean square height (σ) and the correlation length on electromagnetic scattering from the PEC rough surface is discussed in detail. Finally, comparisons of backscattering results at different incident angles are presented and analyzed.

1. Introduction

Nowadays, problems of EM scattering from randomly rough surfaces arise widely in remote sensing, target identification, radar detection, and so on. Many investigations have been carried out to deal with this random problem in terms of the roughness and slope of the rough surface. Among the approximate analytical methods are the KA method [1], which is valid when the rough surface is relatively smooth; the small-perturbation method (SPM), which requires the standard deviation of the surface height to be small compared with the wavelength [2]; the small-slope approximation (SSA), which applies to both small and large surface roughness [3]; and the two-scale method (TSM) [4], which regards the rough surface as separable into large- and small-scale surfaces, allowing use of physical optics (PO) and SPM, respectively. These techniques are limited by the roughness, the incident angle, and so forth. Numerical methods are also employed, including the Monte Carlo method [5], the finite-difference time-domain (FDTD) method [6, 7], the method of moments (MOM) [8, 9], the finite element method (FEM) [10], the forward-backward method (FBM) [11], and the fast multipole method (FMM) [12]. However, these approaches and their accelerated variants can hardly handle models of very large electrical size because of the computational burden. High-frequency techniques remain well suited to electrically large rough surfaces, but high-frequency methods formulated in the time domain have rarely appeared in the literature on scattering from randomly rough surfaces, even though transient scattering is of considerable importance in synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging. Up to now, to our knowledge, few works have been reported on the wide-band scattering of rough surfaces by TDPO.
In this paper, emphasis is placed on treating this problem directly in the time domain. The TDPO method is the time-domain version of physical optics; it retains the advantages of fast computation and low memory demand, and it yields the wide-band response in a single program run. This paper is devoted to the wide-band scattering characteristics of 2D rough surfaces computed by the TDPO method, which can handle electrically large rough-surface problems in the time domain. Earlier "time-domain" results were often not obtained in the time domain directly: each frequency component was computed by a frequency-domain method, and the time-domain response to the original signal was then recovered by an inverse Fourier transform (IFT). TDPO was first presented by En-Yuan Sun, who used it to analyze the scattered fields of paraboloidal and hyperboloidal reflectors [13]. In [14], the scheme was employed to treat large combinative structures. In this paper, we employ TDPO to investigate scattering from the 2D PEC randomly rough surface. The method is used to obtain the transient response reflected from the rough surface, and the wide-band frequency behavior is then obtained by the fast Fourier transform (FFT). The formulation of TDPO is presented in [13], where the far field is derived by transforming the equivalent electric current density and the far-field expression from the frequency domain to the time domain. The model of the 2D rough surface is generated by the Monte Carlo method. The comparison between the wide-band RCS obtained by TDPO and that calculated by the sweep-frequency KA method verifies the validity of TDPO. Finally, the presented scheme is utilized to analyze the scattering properties of 2D randomly rough surfaces. The remainder of the paper is organized as follows. In Section 2, the formulation of TDPO is presented, and the theoretical formulae for the 2D randomly rough surface, which is of Gaussian type, are given.
Several examples of 2D rough surfaces generated with different values of the root mean square (rms) height and the correlation length are demonstrated. Section 3 establishes the validity of TDPO by comparing its numerical results with those obtained by sweep-frequency KA; the influence of the surface size, the correlation length, the rms height, and the incident angle on the scattering from the rough surface is then discussed. Finally, some concluding remarks are given in Section 4.

2. Theory and Formulation

2.1. Randomly Rough Surface Generation

Firstly, the 2D randomly rough surface is modeled in order to determine its scattering characteristics. Using the spectral density, the rough surface is simulated by the Monte Carlo method [15], in which the power spectrum is filtered in the frequency domain and the height of the rough surface is then obtained by an inverse fast Fourier transform (IFFT). The height distribution function of the rough surface is written as

  f(x_m, y_n) = (1 / (L_x L_y)) \sum_{m_k} \sum_{n_k} b(k_{m_k}, k_{n_k}) \exp[ j (k_{m_k} x_m + k_{n_k} y_n) ],     (1)

where L_x and L_y are the lengths of the rough surface along the x-axis and the y-axis, respectively. The numbers of discrete points are M and N. k_{m_k} = 2\pi m_k / L_x and k_{n_k} = 2\pi n_k / L_y are the spatial frequencies at the corresponding point. The spectral amplitudes are built from random numbers,

  b(k_{m_k}, k_{n_k}) = 2\pi \sqrt{ L_x L_y W(k_{m_k}, k_{n_k}) } \; N(0, 1),     (2)

where N(0, 1) is a random number following the normal distribution and W(k_x, k_y) is the power spectral density function, which is of Gaussian type in this paper and is given as follows:

  W(k_x, k_y) = ( \sigma^2 l_x l_y / 4\pi ) \exp( - k_x^2 l_x^2 / 4 - k_y^2 l_y^2 / 4 ),     (3)

where \sigma is the rms height, and l_x and l_y are the correlation lengths along the x- and y-directions, by which the profile of the rough surface is determined. Examples of 2D randomly rough surfaces are simulated and depicted for different correlation lengths and rms heights in Figures 1(a)-1(d).

2.2. Time-Domain Physical Optics Method for 2D PEC Rough Surface

Based on the generated rough surface profiles, the surface is divided into small triangular facets through mesh processing. For the conducting rough surface, the surface current density given by the physical optics method in the frequency domain is approximately [16]

  J_s(r') = 2 \hat{n} \times H^i(r'),     (4)

where H^i is the incident magnetic field and \hat{n} is the unit vector normal to the surface.
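The spectral-filtering recipe of Section 2.1 can be sketched in a few lines of NumPy. This is only an illustrative sketch: the discrete normalization shown (which makes the sample variance come out near σ²) is my own bookkeeping for the FFT conventions, not something spelled out in the paper.

```python
import numpy as np

def gaussian_spectrum(kx, ky, sigma, lx, ly):
    # Gaussian power spectral density W(kx, ky) for rms height sigma and
    # correlation lengths lx, ly
    return (sigma**2 * lx * ly / (4.0 * np.pi)) * np.exp(
        -(kx**2 * lx**2 + ky**2 * ly**2) / 4.0)

def gaussian_rough_surface(Lx, Ly, M, N, sigma, lx, ly, seed=0):
    """Monte Carlo spectral method: filter white Gaussian noise with sqrt(W)
    in the spatial-frequency domain, then inverse-FFT back to space."""
    rng = np.random.default_rng(seed)
    kx = 2.0 * np.pi * np.fft.fftfreq(M, d=Lx / M)   # spatial frequencies
    ky = 2.0 * np.pi * np.fft.fftfreq(N, d=Ly / N)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    W = gaussian_spectrum(KX, KY, sigma, lx, ly)
    noise = rng.standard_normal((M, N))              # real white noise
    spec = np.fft.fft2(noise) * np.sqrt(W)           # spectral filtering
    # discrete normalization so that the height variance equals sigma**2
    return np.real(np.fft.ifft2(spec)) * 2.0 * np.pi * np.sqrt(M * N / (Lx * Ly))

h = gaussian_rough_surface(Lx=25.6, Ly=25.6, M=256, N=256,
                           sigma=0.1, lx=1.0, ly=1.0)
print(h.shape, float(h.std()))   # sample rms height should be close to sigma
```

Because the noise is real, its FFT is Hermitian and the filtered field comes back real automatically; `np.real` only discards numerical round-off. Meshing such a height grid into triangular facets is then a standard preprocessing step.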
The far scattering field in the frequency domain is approximately

  E^s(r) = ( j k \eta / 4\pi ) ( e^{-jkr} / r ) \iint_{S_1} \hat{r} \times [ \hat{r} \times J_s(r') ] \, e^{ j k \hat{r} \cdot r' } \, dS'.     (5)

In (5), \eta is the characteristic impedance of free space, r is the distant observation point, r' is the scatterer integration point at which J_s(r') is the surface current density, and S_1 is the surface area of the lit region. In order to derive the scattering field in the time domain, the IFT is applied to (5), and the electric field in the time domain is obtained as follows:

  E^s(r, t) = ( \eta / 4\pi r c ) \iint_{S_1} \hat{r} \times [ \hat{r} \times \partial J_s(r', t - \tau) / \partial t ] \, dS',     (6)

where \tau = ( r - \hat{r} \cdot r' ) / c is the time delay from the integration point to the distant observation point. The relation of the surface current density between the time domain and the frequency domain satisfies the Fourier transform:

  J_s(r', t) = ( 1 / 2\pi ) \int J_s(r', \omega) e^{ j \omega t } \, d\omega.     (7)

Substituting (4) into (7), J_s(r', t) can also be written as

  J_s(r', t) = 2 \hat{n} \times H^i(r', t).     (8)

By taking the IFT of the incident magnetic field, one obtains its formulation in the time domain:

  H^i(r', t) = h^i( t - \tau_i ), \qquad \tau_i = \hat{k}_i \cdot r' / c,     (9)

where h^i(t) is the time waveform of the incident magnetic field. Based on (6), (8), and (9), the scattering field in the time domain is derived as

  E^s(r, t) = ( \eta / 2\pi r c ) \iint_{S_1} \hat{r} \times \{ \hat{r} \times [ \hat{n} \times \partial h^i( t - \tau_i - \tau ) / \partial t ] \} \, dS',     (10)

where \tau_i is the time delay in the incident direction. For the derivation in detail, one can refer to [13]. From the equation above, the integration over the whole scattering area refers only to the incident field with total delay (\tau_i + \tau) in the lit region and does not involve interactions between points on the surface. For each small triangular facet in the lit region, (10) is implemented to calculate the scattered electric field in the time domain at the observation point r. The wide-band RCS is obtained by carrying out the FFT on the transient response, and the normalized RCS in the far zone is defined as [17]

  \sigma = 4\pi r^2 |E^s|^2 / ( A |E^i|^2 ),     (11)

where A is the illuminated area of the 2D rough surface.

3. Numerical Results and Discussions

3.1. Validation of TDPO

In this section, the TDPO method is utilized to investigate the problem of transient scattering from the 2D Gaussian randomly rough surface. Note that a rough surface of finite length is considered; similarly to [18], no tapering or windowing is introduced. Here, a Gaussian pulse is chosen as the incident source.
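Per facet, eq. (10) reduces to "delay, differentiate, double cross product, sum over lit facets." The Python sketch below is a deliberately simplified, scalar-time-step illustration of that facet sum — all names are illustrative, the lit/shadow decision is the simple n̂·k̂_i test, and (as in the paper) no multiple scattering is included.

```python
import numpy as np

C = 3.0e8      # speed of light (m/s)
ETA = 376.73   # free-space wave impedance (ohm)

def tdpo_scattered_field(centers, normals, areas, h_pol, dh_dt, t_obs,
                         k_inc, r_hat, r_obs):
    """Sum the delayed, time-differentiated PO contributions of lit facets.
    `dh_dt` is a callable giving the time derivative of the incident
    magnetic-field waveform; `h_pol` is its (unit) polarization vector."""
    E = np.zeros((t_obs.size, 3))
    for rc, n, dS in zip(centers, normals, areas):
        if np.dot(n, k_inc) >= 0.0:            # shadowed facet: PO current is zero
            continue
        tau_i = np.dot(k_inc, rc) / C          # delay along the incident direction
        tau = (r_obs - np.dot(r_hat, rc)) / C  # delay toward the observer
        # n x dH/dt at the retarded time, then the double cross product with r_hat
        v = np.cross(n, h_pol)[None, :] * dh_dt(t_obs - tau_i - tau)[:, None]
        E += np.cross(r_hat, np.cross(r_hat, v)) * dS
    return ETA / (2.0 * np.pi * r_obs * C) * E

# one lit facet at the origin, normal incidence, observer at the zenith
t = np.linspace(0.0, 30.0e-9, 600)
dh = lambda x: -2.0 * (x - 2.0e-9) / (0.5e-9)**2 * np.exp(-((x - 2.0e-9) / 0.5e-9)**2)
E = tdpo_scattered_field(np.zeros((1, 3)), np.array([[0.0, 0.0, 1.0]]),
                         np.array([1.0e-4]), np.array([0.0, 1.0, 0.0]), dh, t,
                         np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), 3.0)
print(E.shape)  # (600, 3); only the x-component is nonzero in this geometry
```

Because each facet contribution depends only on the (delayed) incident waveform, the loop body is embarrassingly parallel, which is what keeps the memory and CPU cost low compared with full-wave methods.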
The proposed method is employed to calculate the wide-band scattering from the rough surface in the specular and backward directions at different incident angles, and its accuracy against KA is verified by comparing the results for both polarizations. In this paper, the TDPO results are obtained by averaging 15 Monte Carlo realizations. The geometry of the scattering problem is illustrated and defined in Figure 2: an incident wave impinges on the surface along the direction \hat{k}_i, which makes the incident angle \theta_i with the z-axis and the azimuth \varphi_i with the x-axis; the scattering angle is \theta_s, and the scattering azimuth is \varphi_s. The polarization angle is also defined in Figure 2. Because TDPO is valid only within the high-frequency approximation, the incident source is a modulated Gaussian pulse, written as

  E^i(t) = \exp[ -4\pi ( t - t_0 )^2 / T^2 ] \cos( \omega_0 t ), \qquad \omega_0 = 2\pi f_0,     (12)

where \omega_0 is the modulation circular frequency, f_0 is the modulation frequency, t_0 is the time delay, and T is the pulse width.

In order to ensure the validity of the algorithm presented in this paper, we first calculate the RCS of the 2D PEC rough surface using the TDPO method and KA, respectively. The size of the generated rough surface is , the rms height is , and the correlation length is . The frequency band and the pulse width of the modulated Gaussian pulse are 1~4 GHz and , respectively. In Figure 3(a), the incident angle is , the azimuth angle is , and the polarization angle is . In Figure 3(b), the incident parameters are set to , , and . Figure 3(a) compares the results obtained by TDPO with those of KA in the backward direction; Figure 3(b) presents the results in the specular direction using the two methods. It is evident that the TDPO results coincide with those of the KA method, which verifies the validity of the proposed TDPO. In addition, the RCS curve of the rough surface in the specular direction is much smoother than that in the backward direction.

3.2.
Discussion of the Results of EM Scattering from Rough Surfaces

In this section, the proposed TDPO method is employed to analyze rough surfaces of different sizes and different scales, in terms of correlation length and rms height, in both the time domain and the frequency domain. Moreover, the backscattering results for different incident angles are discussed.

In Figure 4, the scattering results from the 2D PEC rough surface in the specular direction are shown for different surface sizes, where the rms height and the correlation length are and , respectively. The incident angle is , the incident azimuth angle is , and the incident frequency band is 1~4 GHz. Figures 4(a)-4(b) examine the transient response of the rough surface, where it is obvious that the magnitude of the electric field gets larger as the size of the rough surface increases. We attribute this to the fact that scattering in the specular direction becomes stronger when the surface is larger. Figure 4(c) plots the wide-band RCS from the rough surface, where one finds that the RCS increases with increasing surface size over the whole frequency band. In addition, the variation tendency of the curve with one size parameter is similar to that with the other; the reason is that the two kinds of rough surface share the same correlation length and rms height, which determine the profile of the surface.

Figures 5(a)-5(c) plot the polarized electric fields in the time domain from the rough surface in the specular direction, where the parameters of the rough surface are , , and for the cases of , and , respectively. The incident angle is , and the incident azimuth angle is . It is found that the first pulse signal shows little difference, but it is clearly observed that the second pulse increases with increasing correlation length.
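The route from a transient response to a wide-band RCS curve is simple in code: FFT both the scattered and the incident waveforms, divide out the pulse spectrum, and apply the far-zone normalization 4πr²|E^s(f)|²/(A|E^i(f)|²) cited to [17]. The sketch below is self-contained; the pulse constants are illustrative choices for a 1–4 GHz band, since the paper's actual parameter values are elided in this copy.

```python
import numpy as np

# Modulated Gaussian pulse covering roughly 1-4 GHz (illustrative constants)
f0, T, t0 = 2.5e9, 1.0e-9, 3.0e-9
dt = 2.0e-11
t = np.arange(0.0, 16.0e-9, dt)
ei = np.exp(-4.0 * np.pi * (t - t0)**2 / T**2) * np.cos(2.0 * np.pi * f0 * (t - t0))

def wideband_rcs(es_t, ei_t, dt, r, A, fmin=1.0e9, fmax=4.0e9):
    """Normalized wide-band RCS from transient waveforms:
    sigma(f) = 4*pi*r^2 |Es(f)|^2 / (A |Ei(f)|^2).
    Dividing by the incident spectrum removes the pulse shape."""
    f = np.fft.rfftfreq(es_t.size, d=dt)
    Es, Ei = np.fft.rfft(es_t), np.fft.rfft(ei_t)
    band = (f >= fmin) & (f <= fmax)
    return f[band], 4.0 * np.pi * r**2 * np.abs(Es[band])**2 / (A * np.abs(Ei[band])**2)

# Sanity checks: the pulse spectrum peaks at f0, and a scattered field that is
# simply half the incident field gives a flat RCS of pi for r = A = 1.
spec = np.abs(np.fft.rfft(ei))
freq = np.fft.rfftfreq(t.size, d=dt)
print(freq[np.argmax(spec)] / 1e9)             # peak near 2.5 GHz
f_band, sigma = wideband_rcs(0.5 * ei, ei, dt, r=1.0, A=1.0)
print(float(sigma.min()), float(sigma.max()))  # both near pi
```

In practice `es_t` would be the TDPO transient response of one Monte Carlo realization, and the resulting sigma(f) curves are averaged over realizations before plotting.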
Figure 5(d) shows the wide-band behavior obtained by transforming the transient response into the frequency domain. It is obvious that the RCS in the specular direction increases with increasing correlation length, especially in the high-frequency band. The reason is that, keeping the rms height constant while increasing the correlation length, the electromagnetic roughness stays constant but the rms slope decreases, which increases the scattered energy in the coherent (specular) direction.

The transient responses and wide-band RCS from the 2D rough surface in the specular direction for different rms heights (, , , and ) are presented in Figure 6, where the size of the rough surface is , the incident angle is , and the azimuth angle is . The polarization is discussed. Figures 6(a)-6(c) show the transient response from the rough surface, where the magnitude of the second signal decreases with increasing rms height. In Figure 6(d), the wide-band RCS decreases markedly with larger rms height over the whole frequency range; the primary reason is that the roughness increases with the rms height, and scattering in the specular direction decreases as the surface roughness increases.

To further explore the scattering characteristics of the rough surface, the backscattering results for different incident angles are compared in Figure 7, where the size of the Gaussian rough surface is and the rms height and the correlation length are and , respectively. The frequency band of the Gaussian pulse is 1~4 GHz. Figures 7(a)-7(b) illustrate the backscattered electric fields in the time domain, where the magnitude of the electric field is visibly smaller at larger incident angles. The wide-band RCS of the 2D PEC rough surface is depicted in Figure 7(c).
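The rms-slope argument above can be made quantitative. For a Gaussian correlation function C(ρ) = σ² exp(−ρ²/l²) — the one consistent with the Gaussian spectrum used in this paper — the rms slope along one axis is √2·σ/l; this closed form is a standard result, not stated explicitly in the paper. So doubling l at fixed σ halves the slope, which is exactly why the specular return grows:

```python
import numpy as np

def rms_slope(sigma, l):
    # one-axis rms slope of a Gaussian-correlated surface:
    # slope variance = -C''(0) = 2 * sigma**2 / l**2
    return np.sqrt(2.0) * sigma / l

# Fixed sigma, growing correlation length: electromagnetic roughness
# unchanged, slope falls, more energy stays in the specular direction.
for l in (0.5, 1.0, 2.0):
    print(l, rms_slope(0.1, l))
```

The same formula explains the Figure 6 trend in reverse: growing σ at fixed l raises both the roughness and the slope, pulling energy out of the specular direction.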
One can observe that the RCS becomes smaller as the incident angle increases over almost the whole 1-4 GHz band, which is caused by the fact that smaller angles lead to stronger scattering in the backward direction.

4. Conclusion

In this paper, a time-domain high-frequency TDPO method is presented and adopted to investigate the scattering characteristics of the 2D PEC Gaussian rough surface. Firstly, the wide-band RCS of the 2D rough surface in both the backward and the specular directions calculated by the TDPO method is compared with the results obtained by the KA method; good agreement is achieved in both cases, which verifies the validity of the presented TDPO method. Then, the scattering characteristics of rough surfaces of different sizes are analyzed. The transient response in the time domain and the wide-band RCS obtained by FFT in the frequency domain are presented to examine in detail the effect of the correlation length and the rms height on the scattering properties. Furthermore, the backscattering from the rough surface is presented and analyzed for different incident angles. Similarly to the KA method [20], the TDPO method presented in this paper is invalid for larger incident angles or larger roughness because multiple scattering is neglected. Future investigations on this topic will include scattering from 2D conducting rough surfaces at low grazing angle illumination with larger roughness, and scattering from dielectric rough surfaces; in particular, we will focus on dielectric and lossy surfaces by employing MECA [19] in the time domain (TDMECA).

Acknowledgments

This work was supported by the National Science Foundation for Distinguished Young Scholars of China (Grant no. 61225002), the Specialized Research Fund for the Doctoral Program of Higher Education (Grant no. 20100203110016), and the Fundamental Research Funds for the Central Universities (Grant no. K50510070001).

References

1. D.
Holliday, “Resolution of a controversy surrounding the Kirchhoff approach and the small perturbation method in rough surface scattering theory,” IEEE Transactions on Antennas and Propagation, vol. 35, no. 1, pp. 120–122, 1987. View at Scopus 2. S. O. Rice, “Reflection of electromagnetic waves from slightly rough surfaces,” in Theory of Electromagnetic Waves, M. Kline, Ed., pp. 351–378, Wiley, New York, NY, USA, 1951. 3. G. Berginc, “Small-slope approximation method: a further study of vector wave scattering from two-dimensional surfaces and comparison with experimental data,” Progress in Electromagnetics Research, vol. 37, pp. 251–287, 2002. 4. S. L. Durden and J. F. Vesecky, “Numerical study of the separation wavenumber in the two-scale scattering approximation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 28, no. 2, pp. 271–272, 1990. View at Publisher · View at Google Scholar · View at Scopus 5. R. M. Axline and A. K. Fung, “Numerical computations of scattering from a perfectly conducting random surface,” IEEE Transactions on Antennas and Propagation, vol. 26, no. 3, pp. 482–488, 1978. View at Scopus 6. C. H. Chan, S. H. Lou, L. Tsang, and J. A. Kong, “Electromagnetic scattering of waves by random rough surface: a finite-difference time-domain approach,” Microwave and Optical Technology Letters, vol. 4, no. 9, pp. 355–359, 1991. View at Scopus 7. J. Li, L.-X. Guo, and H. Zeng, “FDTD investigation on bistatic scattering from two-dimensional rough surface with UPML absorbing condition,” Waves in Random and Complex Media, vol. 19, no. 3, pp. 418–429, 2009. View at Publisher · View at Google Scholar · View at Scopus 8. R. R. Lentz, “A numerical study of electromagnetic scattering from ocean-like surfaces,” Radio Science, vol. 9, no. 12, pp. 1139–1146, 1974. View at Scopus 9. R. T. Marchand, “On the use of finite surfaces in the numerical prediction of rough surface scattering,” IEEE Transactions on Antennas and Propagation, vol. 47, no. 4, pp. 
600–604, 1999. View at Publisher · View at Google Scholar · View at Scopus 10. S. H. Lou, L. Tsang, and C. H. Chan, “Application of the finite element method to Monte Carlo simulations of scattering of waves by random rough surfaces: penetrable case,” Waves in Random Media, vol. 1, no. 4, article 006, pp. 287–307, 1991. View at Publisher · View at Google Scholar · View at Scopus 11. A. Iodice, “Forward-backward method for scattering from dielectric rough surfaces,” IEEE Transactions on Antennas and Propagation, vol. 50, no. 7, pp. 901–911, 2002. View at Publisher · View at Google Scholar · View at Scopus 12. V. Jandhyala, E. Michielssen, S. Balasubramaniam, and W. C. Chew, “A combined steepest descent-fast multipole algorithm for the fast analysis of three-dimensional scattering by rough surfaces,” IEEE Transactions on Geoscience and Remote Sensing, vol. 36, no. 3, pp. 738–748, 1998. View at Publisher · View at Google Scholar · View at Scopus 13. E.-Y. Sun and W. V. T. Rusch, “Time-domain physical-optics,” IEEE Transactions on Antennas and Propagation, vol. 42, no. 1, pp. 9–15, 1994. View at Publisher · View at Google Scholar · View at 14. L.-X. Yang, D.-B. Ge, and B. Wei, “FDTD/TDPO hybrid approach for analysis of the EM scattering of combinative objects,” Progress in Electromagnetics Research, vol. 76, pp. 275–284, 2007. View at Publisher · View at Google Scholar · View at Scopus 15. L. Tsang and J. A. Kong, Scattering of Electromagnetic Waves- Numerical Simulations, Wiley, New York, NY, USA, 2000. 16. C. Scott, Modern Methods of Reflector Antenna Analysis and Design, Artech H, Boston, Mass, USA, 1990. 17. J. Li, L.-X. Guo, and H. Zeng, “FDTD method investigation on the polarimetric scattering from 2-D rough surface,” Progress in Electromagnetics Research, vol. 101, pp. 173–188, 2010. View at 18. J. Li, B. Wei, Q. He, L. Guo, and D. 
Ge, “Time-domain iterative physical optics method for analysis of EM scattering from the target half buried in rough surface: PEC case,” Progress in Electromagnetics Research, vol. 121, pp. 391–408, 2011. View at Scopus 19. J. G. Meana, J. Á. Martínez-Lorenzo, F. Las-Heras, and C. Rappaport, “Wave scattering by dielectric and lossy materials using the Modified Equivalent Current Approximation (MECA),” IEEE Transactions on Antennas and Propagation, vol. 58, no. 11, pp. 3757–3761, 2010. View at Publisher · View at Google Scholar · View at Scopus 20. A. Collaro, G. Franceschetti, M. Migliaccio, and D. Riccio, “Gaussian rough surfaces and Kirchhoff approximation,” IEEE Transactions on Antennas and Propagation, vol. 47, no. 2, pp. 392–398, 1999. View at Publisher · View at Google Scholar · View at Scopus
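A key post-processing step in the method summarized above is obtaining the wide-band RCS from a single time-domain run by taking an FFT of the transient response. The sketch below is my own illustration of that step on a synthetic signal, not the authors' code; the sampling rate, pulse shape, and centre frequency are invented stand-ins.

```python
import numpy as np

# Assumed sampling (not from the paper): 1024 samples at 16 GS/s,
# giving a spectrum from 0 to 8 GHz in steps of fs/n = 15.625 MHz.
fs, n = 16e9, 1024
t = np.arange(n) / fs

# Stand-in "transient response": a Gaussian-modulated pulse centred on 2.5 GHz.
f0, t0, tau = 2.5e9, 32e-9, 2e-9
x = np.cos(2 * np.pi * f0 * (t - t0)) * np.exp(-(((t - t0) / tau) ** 2))

# One FFT of the time-domain record yields the whole wide-band spectrum.
freqs = np.fft.rfftfreq(n, d=1 / fs)
spectrum = np.abs(np.fft.rfft(x))
peak = freqs[np.argmax(spectrum)]
print(f"spectral peak near {peak / 1e9:.2f} GHz")  # close to 2.5 GHz
```

In the paper's setting the signal would be the computed scattered far field, and the spectrum would be normalized against that of the incident pulse to give RCS versus frequency; those normalization details are omitted here.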
{"url":"http://www.hindawi.com/journals/ijap/2013/584260/","timestamp":"2014-04-19T08:25:57Z","content_type":null,"content_length":"203712","record_id":"<urn:uuid:e2db3e07-6a6e-45d8-b5f7-a194fe69af4d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply

Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °

You are not logged in.

• Index • » Help Me ! • » The Remainder Theorem

Topic review (newest first)

bob bundy 2012-10-19 04:55:39

hi danthemaths

Welcome to the forum.

"Am I on the right lines?" Not quite. When you put x = 3 you get a remainder: f(3). And when you put x = -1 you get the other remainder: f(-1). You are also told that f(3) = 2 × f(-1). So make an equation from this and solve for a.

Hope that helps. ?? But I'm not getting a = 4 ??

danthemaths 2012-10-19 04:03:33

hey guys, I am stuck on this question: "The remainder when x^3 - 2x^2 + ax + 5 is divided by x - 3 is twice the remainder when the same expression is divided by x + 1. Find the value of the constant a." I know the answer is 4 but I don't know how to get there. I have started it off but don't know what to do from then on... I worked out f(3) to be -14=3a ...and f(-1) to be a=2. Am I on the right lines? And what do I do from then on? This will really help me out. Thanks in advance
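bob bundy's hint can be checked mechanically. The sketch below is my own illustration (not from the thread): since f(x) = x^3 - 2x^2 + ax + 5 is linear in a, evaluate the constant part and the coefficient of a at x = 3 and x = -1, then solve f(3) = 2·f(-1).

```python
def f_parts(x):
    # f(x) = x**3 - 2*x**2 + a*x + 5, split into (part without a, coefficient of a)
    return x**3 - 2 * x**2 + 5, x

# Remainder theorem: the remainder on division by (x - r) is f(r).
c3, k3 = f_parts(3)    # f(3)  = c3 + k3*a = 14 + 3a
c1, k1 = f_parts(-1)   # f(-1) = c1 + k1*a = 2 - a

# f(3) = 2*f(-1)  =>  (c3 - 2*c1) + (k3 - 2*k1)*a = 0
a = -(c3 - 2 * c1) / (k3 - 2 * k1)
print(a)  # -2.0

def f(x, a):
    return x**3 - 2 * x**2 + a * x + 5

assert f(3, a) == 2 * f(-1, a)  # 8 == 2 * 4
```

So the condition actually forces a = -2 (giving remainders 8 and 4), which is consistent with bob bundy's "But I'm not getting a = 4".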
{"url":"http://www.mathisfunforum.com/post.php?tid=18258&qid=235993","timestamp":"2014-04-18T03:31:29Z","content_type":null,"content_length":"17217","record_id":"<urn:uuid:97a2c4d8-a76d-44be-9a3a-cc663203bdaa>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Causal inference Theme Co-ordinators: Rhian Daniel, Simon Cousens, Bianca De Stavola, George Ploubidis, Neil Pearce, David Cox (University of Oxford) “Comparing apples and oranges is the only endeavor worthy of true scientists; comparing apples to apples is trivial.” Gene V Glass, Arizona State University Causal inference is a central aim of most medical and epidemiological investigation. We would like to know ‘does this treatment work?’ or ‘is that exposure harmful?’, and if so ‘to what extent?’. The gold standard approach to answering such questions is to conduct a controlled experiment in which treatments/exposures are allocated at random, all subjects are perfectly compliant, and all the relevant data are collected and measured without error. Provided that we can then discount ‘chance’ alone as an explanation, any observed effects can be interpreted as causal. In the real world, however, such experiments rarely attain this ideal status, and for most important questions, such an experiment would not even be ethically, practically, or economically feasible; in these situations, causal inference must be based instead on observational data. Despite this, the role of statistics is often seen as quantifying the extent to which ‘chance’ could explain the results, with concerns over systematic biases due to the non-ideal nature of the data relegated to the qualitative discussion of the results. Over the last thirty years, however, a formal statistical language has been developed in which causal effects can be unambiguously defined, and the assumptions needed for their estimation clearly stated. This clarity has led to increased awareness of causal pitfalls (such as the ‘birthweight paradox’ – see Hernández-Díaz et al, 2006) and the building of a new and extensive toolbox of statistical methods especially designed for making causal inferences from non-ideal data under transparent, less restrictive and more plausible assumptions than were hitherto required. 
Of course this does not mean that all causal questions can be answered, but at least they can be formally addressed in a quantitative fashion. Considerations of causality are not new. Neyman used potential outcomes (cornerstones of this ‘new’ causal language – see Rubin, 1978) in his PhD thesis in the 1920s, and who could forget Bradford Hill’s much-cited guidelines published in 1965? The last few decades, however, have seen the focus move towards developing solutions, as well as acknowledging limitations. Indeed, not all reliable causal inference requires novel methodology. As Philip Dawid once said “a causal model is just an ambitious associational model”. A carefully-considered regression model, with an appropriate set of potential confounders (possibly identified using a causal diagram – see below) measured and included as covariates, is the most appropriate causal model in many simple settings. But how do we decide whether such an approach is suitable? A ubiquitous feature of methods for estimating causal effects from non-ideal data is the need for untestable assumptions regarding the causal structure of the variables being analysed (such as ‘there are no common causes of A and B’, or ‘Z is an instrumental variable’ – see below). Such assumptions are often represented in a causal diagram or graph, with variables identified by nodes and the relationships between them by edges. The simplest and most commonly-used class of causal diagram is the directed acyclic graph (DAG), in which all edges are arrows, and there are no cycles, i.e. no variable explains itself (Greenland et al, 1999). These are used not only to represent assumptions but also to inform the choice of a causally-interpretable analysis.
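As a toy numerical illustration of how a DAG guides the analysis (my own sketch, not part of the LSHTM page; the effect sizes and sample size are invented): in the three-node graph L → A, L → Y, A → Y, the backdoor path through the common cause L confounds the crude A–Y contrast, while adjusting for L, the parent of the exposure, recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# DAG: L -> A, L -> Y, A -> Y; the true causal effect of A on Y is 1.0
L = rng.normal(size=n)                           # common cause (confounder)
A = (L + rng.normal(size=n) > 0).astype(float)   # exposure depends on L
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)       # outcome

# Crude contrast is confounded (biased well above 1.0)
crude = Y[A == 1].mean() - Y[A == 0].mean()

# Adjusting for L (OLS of Y on [1, A, L]) recovers roughly 1.0
X = np.column_stack([np.ones(n), A, L])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
print(round(crude, 2), round(beta[1], 2))
```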
Another common feature of causal inference methods is that, as we move further from the ideal experimental setting, more aspects of the joint distribution of the variables must be modelled, which would have been ancillary had the data arisen from a perfect experiment. Structural equation modelling (SEM) (Kline, 2011) is a fully-parametric approach, in which the relationship between each node in the graph and its parents is specified parametrically. This approach offers a very elegant treatment of measurement error when this affects any variable for which validation or replication data are available. The true variable is included in the graph as a latent (unobserved) variable and the joint distribution of manifest and latent variables is estimated within a single likelihood framework. Missing values can be similarly dealt with within the same framework by including missing value indicators for which specific mechanisms are specified. Concerns over the potential impact of model misspecification in the SEM approach have led to the development of alternative semiparametric approaches to causal inference, in which the number of additional aspects to be modelled is reduced. These include methods based on inverse probability weighting, g-estimation, and the so-called doubly-robust estimation proposed by Robins, Rotnitzky and colleagues. These newer causal inference methods are particularly relevant for studying the causal effect of a time-varying exposure on an outcome, because standard methods fail to give causally-interpretable estimators when there exist time-varying confounders of the exposure and outcome that are themselves affected by previous levels of the exposure. Methods developed to deal with this problem include the fully-parametric g-computation formula (Robins, 1986), and two semiparametric approaches: g-estimation of structural nested models (Robins et al, 1992), and inverse probability weighted estimation of marginal structural models (Robins et al, 2000).
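For a point-treatment version of the inverse probability weighting just mentioned, here is a minimal sketch (mine, not LSHTM code; the data-generating model is invented). Each subject is weighted by the inverse of the probability of the treatment actually received given the confounder, and the weighted outcome means are contrasted; in the longitudinal, marginal-structural-model setting the weight becomes a product of such probabilities over time.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

L = rng.normal(size=n)                       # confounder
p = 1 / (1 + np.exp(-L))                     # true propensity P(A = 1 | L)
A = rng.binomial(1, p)
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)   # true causal effect of A is 1.0

# Inverse probability weights; the true propensity is used for simplicity,
# in practice it would be estimated (e.g. by logistic regression)
w = A / p + (1 - A) / (1 - p)

mu1 = np.sum(w * A * Y) / np.sum(w * A)              # weighted mean under A = 1
mu0 = np.sum(w * (1 - A) * Y) / np.sum(w * (1 - A))  # weighted mean under A = 0
print(round(mu1 - mu0, 2))  # close to the true effect, 1.0
```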
Related to this longitudinal setting is the identification of optimal treatment regimes, for example in HIV/AIDS research where questions such as ‘at what level of CD4 should HAART (highly active antiretroviral therapy) be initiated?’ are often asked. These can be addressed using the methods listed above, and other related methods (see Moodie et al, 2007, for a review). It is important to appreciate that non-ideal experimental data (e.g. suffering from noncompliance, missing data or measurement error) are not on a par with data arising from observational studies (as may be inferred from what is written above). Randomisation can be used as a tool to aid causal inference even when the randomised experiment is ‘broken’, for example as a result of non-compliance to randomised treatment. Such methods make use of randomisation as an instrumental variable (Angrist and Pischke, 2009). Instrumental variables have even been used with observational data, in particular when the instrument is a variable that holds genetic information (in which case it is known as Mendelian randomisation; see Davey-Smith and Ebrahim, 2003) with genotype used in place of randomisation. This is motivated by the idea that genes are ‘randomly’ passed down from parents to offspring in the same way that treatment is allocated in double-blind randomised trials. Although this assumption is generally untestable (Hernán and Robins, 2006), there are situations in which it may be deemed more plausible than the other candidate set of untestable assumptions, namely that of ‘no unmeasured confounding’. Approaches (such as SEM) amenable to complex causal structures have opened the way to looking beyond the causal effect of an exposure on an outcome as a black box, and to asking ‘how does this exposure act?’. For example, if income has a positive effect on certain health outcomes, does this act simply by increasing access to health care, or are there other important pathways? 
Addressing such questions is the goal of mediation analysis and the estimation of direct/indirect effects (see Ten Have and Joffe, in press, for a review). This area has seen an explosion of new methodology in recent years, with several semiparametric alternatives to SEM introduced. In conclusion, causal inference is an important, exciting and fast-moving area of methodological research. The discussion above gives an overview of some of the topics that exist beneath its ever-growing umbrella, but of course, there are many more. Causal Inference at LSHTM Most statisticians and epidemiologists at the School are engaged in causal inference. In the interest of space, we include here only the names of those with a particular interest in methodological issues relating to the topics discussed above. Jonathan Bartlett; James Carpenter; Simon Cousens; Rhian Daniel; Bianca De Stavola; Frank Dudbridge; Chris Frost; Richard Grieve; Mike Kenward; Noemi Kreif; Neil Pearce; Costanza Pizzi; George Ploubidis; Rosalba Radice; Zia Sadique; Anders Skrondal (honorary); Stijn Vansteelandt (honorary); Michael Wallace; Symon Wandiembe Short courses Two short courses relating to causal inference are run each year at LSHTM. One is entitled Causal Inference in Epidemiology: recent methodological developments and runs for one week each November; the other is a three-day course in February, entitled Factor Analysis and Structural Equation Modelling: an introduction using Stata and MPlus. Discussion group The causal inference discussion group at LSHTM meets once or twice a month, with sessions usually taking the form of a seminar followed by extended discussion. Past speakers include Philip Dawid, Vanessa Didelez, Richard Emsley, Miguel Hernán, Erica Moodie and Anders Skrondal. Details of upcoming meetings can also be found here. Suggested introductory reading Angrist JD, Pischke J (2009) Mostly harmless econometrics: an empiricist’s companion. Princeton University Press. 
Greenland S, Pearl J, Robins JM (1999) Causal diagrams for epidemiologic research. Epidemiology. 10(1):37–48.
Greenland S, Brumback B (2002) An overview of relations among causal modelling methods. International Journal of Epidemiology. 31:1030–1037.
Hernán MA, Hernández-Díaz S, Werler MM, Mitchell AA (2002) Causal knowledge as a prerequisite for confounding evaluation: an application to birth defects epidemiology. American Journal of Epidemiology. 155:176–184.
Hernán MA, Robins JM (to appear, 2011) Causal Inference. Chapman & Hall/CRC. [First ten chapters available for download.]
{"url":"http://csm.lshtm.ac.uk/themes/causal-inference/","timestamp":"2014-04-19T11:57:10Z","content_type":null,"content_length":"40566","record_id":"<urn:uuid:6113441a-90aa-4537-8324-753c716d081e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
EAN/ISBN13 check digit verification

New to the CF scene Join Date Jan 2009 Thanked 0 Times in 0 Posts

Hi again, As noted in my last post I've been trying to teach myself javascript over the last few weeks in order to build a script that can perform check digit verification. Also again, my apologies up front for any glaring newbie mistakes. Any help is VERY MUCH appreciated!!!

How I'd like it to work: When a user has entered an EAN number (13 digits, all numbers) in a text field (called "InsertRecordEAN") and leaves this field (onblur, I believe) the script will run, checking its last digit/check digit against what I've calculated it should be. If the check digit is incorrect an alert box will pop up saying so, and when they click OK it highlights the incorrect value, not letting them leave that field until it's corrected.

I've learned quite a lot from your posts in the last couple days, and everything I have below SEEMS like it should work, but I get nothing. Any ideas, at all? Thanks in advance!!!!!!
<script type="text/javascript"> <script type="text/javascript"> function ErrorAlert() document.getElementByName("InsertRecordEAN").onblur = alert("The EAN you entered is incorrect, please re-enter it.") <script type="javascript"> //**seems to be held up by UserEAN not getting passed to function** function EANCheck(UserEAN) //2 parse EAN and converted to num except m (var m is check digit supplied by user) var a = parseInt(UserEAN.charAt(UserEAN.length-13)); //digit1 var b = parseInt(UserEAN.charAt(UserEAN.length-12)); //digit2 var c = parseInt(UserEAN.charAt(UserEAN.length-11)); //digit3 var d = parseInt(UserEAN.charAt(UserEAN.length-10)); //digit4 var e = parseInt(UserEAN.charAt(UserEAN.length-9)); //digit5 var f = parseInt(UserEAN.charAt(UserEAN.length-8)); //digit6 var g = parseInt(UserEAN.charAt(UserEAN.length-7)); //digit7 var h = parseInt(UserEAN.charAt(UserEAN.length-6)); //digit8 var i = parseInt(UserEAN.charAt(UserEAN.length-5)); //digit9 var j = parseInt(UserEAN.charAt(UserEAN.length-4)); //digit10 var k = parseInt(UserEAN.charAt(UserEAN.length-3)); //digit11 var l = parseInt(UserEAN.charAt(UserEAN.length-2)); //digit12 var m = parseInt(UserEAN.charAt(UserEAN.length-1)); //digit13 //3 multiply each digit against weighting factors to get extended value var a1 =(a * 1); var b1 =(b * 3); var c1 =(c * 1); var d1 =(d * 3); var e1 =(e * 1); var f1 =(f * 3); var g1 =(g * 1); var h1 =(h * 3); var i1 =(i * 1); var j1 =(i * 3); var k1 =(i * 1); var l1 =(i * 3); //4 sum extended values var iSum=(a1+b1+c1+d1+e1+f1+g1+h1+i1+j1+k1+l1); //5&6 divide sum by 10, returning the remainder var iDiv=(iSum % 10); //7 multiply remainder by 10 var By10=(iDiv * 10); //8 subtract remainder from step 7 from "10" (this is check digit -if remainder=0 check=0) var CheckD=(10 - By10); //"CheckD" is the calculated check digit to compare to user input. if (CheckD != m) //If EANCheck is true allow them to tab to next field. 
//If EANCheck is false, execute ErrorAlert() & set the focus back on the EAN field. <form id="caspioform"> EAN:<input type="text" name="InsertRecordEAN" id="InsertRecordEAN" size="30" onblur="EANCheck(this.value)"> Age: <input type="text" id="age" size="30"><br><br> <script type="text/javascript"> //test to show initial variables are working, or not -guess not, this doesn't seem to work. Senior Coder Join Date Jun 2002 Between DC and Baltimore In a Cave Thanked 94 Times in 88 Posts Tech Author [Ajax In Action, JavaScript: Visual Blueprint] New to the CF scene Join Date Jan 2009 Thanked 0 Times in 0 Posts This one works a bit differently, and should be easier -and no one had responded to the other one. Since I thought this would be easier to troubleshoot I was hoping I'd get some help and then apply the fixes to the ISBN verification -once I figured out how to handle the check digit that could be numerical or text -EAN's are all numerical. Any idea why this one wouldn't work? Thanks for asking!
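For reference, the standard EAN-13 rule differs from steps 7–8 above: after taking the weighted sum modulo 10, the check digit is (10 - remainder) mod 10; there is no "multiply the remainder by 10" step, and the final mod 10 maps a remainder of 0 to a check digit of 0. Below is a compact sketch of just the arithmetic, in Python rather than the thread's JavaScript purely for illustration (it is not a drop-in fix for the script above, which also has typos such as j1, k1, and l1 all reusing the variable i):

```python
def ean13_check_digit(ean12: str) -> int:
    """Check digit for the first 12 digits of an EAN-13 code."""
    assert len(ean12) == 12 and ean12.isdigit()
    # Weights alternate 1, 3, 1, 3, ... starting from the leftmost digit.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(ean12))
    return (10 - total % 10) % 10

def ean13_is_valid(ean13: str) -> bool:
    return (len(ean13) == 13 and ean13.isdigit()
            and int(ean13[-1]) == ean13_check_digit(ean13[:12]))

print(ean13_is_valid("4006381333931"))  # True  (a commonly cited valid EAN-13)
print(ean13_is_valid("4006381333932"))  # False (wrong check digit)
```

Since ISBN-13s are EAN-13s, the same function validates them as well (e.g. 9780306406157 passes).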
{"url":"http://www.codingforums.com/dom-json-scripting/158398-ean-isbn13-check-digit-verification.html","timestamp":"2014-04-16T15:07:06Z","content_type":null,"content_length":"69359","record_id":"<urn:uuid:27b87094-34d2-4c9e-a24e-2d1793ebce52>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
The solution to the following puzzle is unique; in some cases the knowledge that the solution is unique may actually give you a short-cut to finding the answer to a particular question, but it's possible to find the unique solution even without making use of the fact that the solution is unique.

1. The first question whose answer is B is question (A) 1 (B) 2 (C) 3 (D) 4 (E) 5
2. The only two consecutive questions with identical answers are questions (A) 6 and 7 (B) 7 and 8 (C) 8 and 9 (D) 9 and 10 (E) 10 and 11
3. The number of questions with the answer E is (A) 0 (B) 1 (C) 2 (D) 3 (E) 4
4. The number of questions with the answer A is (A) 4 (B) 5 (C) 6 (D) 7 (E) 8
5. The answer to this question is the same as the answer to question (A) 1 (B) 2 (C) 3 (D) 4 (E) 5
6. The answer to question 17 is (A) C (B) D (C) E (D) none of the above (E) all of the above
7. Alphabetically, the answer to this question and the answer to the following question are (A) 4 apart (B) 3 apart (C) 2 apart (D) 1 apart (E) the same
8. The number of questions whose answers are vowels is (A) 4 (B) 5 (C) 6 (D) 7 (E) 8
9. The next question with the same answer as this one is question (A) 10 (B) 11 (C) 12 (D) 13 (E) 14
10. The answer to question 16 is (A) D (B) A (C) E (D) B (E) C
11. The number of questions preceding this one with the answer B is (A) 0 (B) 1 (C) 2 (D) 3 (E) 4
12. The number of questions whose answer is a consonant is (A) an even number (B) an odd number (C) a perfect square (D) a prime (E) divisible by 5
13. The only odd-numbered problem with answer A is (A) 9 (B) 11 (C) 13 (D) 15 (E) 17
14. The number of questions with answer D is (A) 6 (B) 7 (C) 8 (D) 9 (E) 10
15. The answer to question 12 is (A) A (B) B (C) C (D) D (E) E
16. The answer to question 10 is (A) D (B) C (C) B (D) A (E) E
17. The answer to question 6 is (A) C (B) D (C) E (D) none of the above (E) all of the above
18. The number of questions with answer A equals the number of questions with answer (A) B (B) C (C) D (D) E (E) none of the above
19. The answer to this question is (A) A (B) B (C) C (D) D (E) E
20. Standardized test is to intelligence as barometer is to (A) temperature (only) (B) wind-velocity (only) (C) latitude (only) (D) longitude (only) (E) temperature, wind-velocity, latitude, and longitude
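Puzzles like this can be attacked programmatically: encode each question as a predicate on a candidate 20-letter answer string, then search with backtracking (brute force over all 5^20 strings is infeasible) for the string satisfying every predicate; the puzzle's claim of uniqueness means the search should return exactly one string. Here is my own sketch of the predicate style for questions 1, 3, and 4; the example strings are artificial and are not the puzzle's solution.

```python
LETTERS = "ABCDE"

def q1_ok(ans):
    # Q1: the first question whose answer is B is question (A)1 ... (E)5
    first_b = ans.find("B") + 1               # 1-based position; 0 means no B
    return 1 <= first_b <= 5 and ans[0] == LETTERS[first_b - 1]

def q3_ok(ans):
    # Q3: the number of questions with the answer E is (A)0 ... (E)4
    n_e = ans.count("E")
    return n_e <= 4 and ans[2] == LETTERS[n_e]

def q4_ok(ans):
    # Q4: the number of questions with the answer A is (A)4 ... (E)8
    n_a = ans.count("A")
    return 4 <= n_a <= 8 and ans[3] == LETTERS[n_a - 4]

# Artificial examples, not the solution:
print(q1_ok("CABAA" + "A" * 15))  # True: first B is at question 3, and Q1 = C
print(q3_ok("E" * 20))            # False: twenty E's is out of range
```

The full solver simply conjoins twenty such predicates inside the search.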
{"url":"http://www.brandonbray.com/fun/test.html","timestamp":"2014-04-19T11:56:56Z","content_type":null,"content_length":"6924","record_id":"<urn:uuid:738adcc9-2d63-4b9d-9d7b-f2397f0db239>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] checking element types in array Anne Archibald peridot.faceted@gmail.... Sat May 17 13:58:20 CDT 2008 2008/5/17 Zoho Vignochi <zoho.vignochi@gmail.com>: > hello: > I am writing my own version of a dot product. Simple enough this way: > def dot_r(a, b): > return sum( x*y for (x,y) in izip(a, b) ) > However if both a and b are complex we need: > def dot_c(a, b): > return sum( x*y for (x,y) in izip(a.conjugate(), b) ).real As you probably realize, this will be vastly slower (ten to a hundred times) than the built-in function dot() in numpy. > I would like to combine these so that I need only one function which > detects which formula based on argument types. So I thought that > something like: > def dot(a,b): > if isinstance(a.any(), complex) and isinstance(b.any(), complex): > return sum( x*y for (x,y) in izip(a.conjugate(), b) ).real > else: > return sum( x*y for (x,y) in izip(a, b) ) > And it doesn't work because I obviously have the syntax for checking > element types incorrect. So my real question is: What is the best way to > check if any of the elements in an array are complex? numpy arrays are efficient, among other reasons, because they have homogeneous types. So all the elements in an array are the same type. (Yes, this means if you have an array of numbers only one of which happens to be complex, you have to represent them all as complex numbers whose imaginary part happens to be zero.) So if A is an array A.dtype is the type of its elements. numpy provides two convenience functions for checking whether an array is complex, depending on what you want: iscomplex checks whether each element has a nonzero imaginary part and returns an array representing the element-by-element answer; so any(iscomplex(A)) will be true if any element of A has a nonzero imaginary part. iscomplexobj checks whether the array has a complex data type. 
This is much much faster, but of course it may happen that all the imaginary parts happen to be zero; if you want to treat this array as real, you must use iscomplex. More information about the Numpy-discussion mailing list
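Putting the reply's suggestion together with the original goal, a combined dot might look like the sketch below; this is my own paraphrase, not code from the thread. It dispatches on dtype with iscomplexobj, as suggested, and delegates to numpy's fast built-in dot; whether to trigger the complex branch with `or` (as here) or only when both arguments are complex, as in the original `and`, is left as a design choice.

```python
import numpy as np

def dot(a, b):
    """Dot product that conjugates the first argument when complex."""
    a = np.asarray(a)
    b = np.asarray(b)
    if np.iscomplexobj(a) or np.iscomplexobj(b):
        # cheap dtype check -- no per-element scan of the data
        return np.real(np.dot(np.conjugate(a), b))
    return np.dot(a, b)

print(dot([1, 2], [3, 4]))            # 11
print(dot([1 + 1j, 0], [1 + 1j, 0]))  # 2.0
```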
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-May/033944.html","timestamp":"2014-04-18T16:16:42Z","content_type":null,"content_length":"4970","record_id":"<urn:uuid:2e845d2d-aa27-4a8f-9492-b51e391ddacf>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00471-ip-10-147-4-33.ec2.internal.warc.gz"}
Kendall Park Trigonometry Tutor Find a Kendall Park Trigonometry Tutor ...I also work with my students to develop the expertise of answering "open-ended" questions. As an experienced teacher, I know the concepts and skills that are necessary to get the students on the right track and I am able to pinpoint where they are having difficulty. As an experienced teacher, I know the skills and concepts that are necessary for success in a pre-calculus class. 10 Subjects: including trigonometry, geometry, algebra 1, GED I am a fun, helpful, and experienced tutor for the Sciences (biology and chemistry), Math (geometry, pre-algebra, algebra, and pre-calulus), English/Grammar, and the SATs. For the SAT, I implement a results driven and rigorous 7 week strategy. PLEASE NOTE: I only take serious SAT students who have... 26 Subjects: including trigonometry, reading, chemistry, English ...I feel very comfortable in teaching these topics. During my extensive college math teaching experience, from 1973 through 2007, I have taught several math courses that included the topic of logic. These courses were titled "Discrete Math" or "Math for Elementary School Teachers." I have also written logic questions for publishing firms. 21 Subjects: including trigonometry, calculus, geometry, statistics ...I love playing sports, specifically football. Feel free to contact me. I scored 740 on my Math SAT, and since then excelled in various math courses through my tenure at Rutgers University. 14 Subjects: including trigonometry, calculus, physics, algebra 2 I have always believed in helping others whether it be within my family, my school or my community. As a former elementary school tutor and undergraduate chemistry teaching assistant, I have helped various students of all ages understand concepts that would otherwise be difficult. I realize that every student is different therefore their learning abilities are also different. 13 Subjects: including trigonometry, chemistry, reading, geometry
{"url":"http://www.purplemath.com/Kendall_Park_Trigonometry_tutors.php","timestamp":"2014-04-16T07:59:31Z","content_type":null,"content_length":"24466","record_id":"<urn:uuid:a3676275-a81d-4b1a-ac82-4f79a063f05f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Livermore, CA Trigonometry Tutor Find a Livermore, CA Trigonometry Tutor ...So a large part of my task is to help instill an "I can do it' attitude so that the student will put in the needed effort. I have a B.S. Degree in Mathematical Sciences from Stanford University and work as an independent software consultant. 5 Subjects: including trigonometry, geometry, algebra 2, prealgebra ...I offer tutoring for all levels of math and science as well as test preparation. I will also proofread and help with technical writing, as I believe good communication skills are very important. I always want to ensure good service, and so I offer a free consultation session as a starting point. 27 Subjects: including trigonometry, chemistry, calculus, physics ...I can help you with various aspects of Chinese language, including reading, writing, listening, and speaking. I got my BS and PhD both in Chemistry. I have taken one year organic chemistry in college and spent two years working in an organic chemistry lab. 15 Subjects: including trigonometry, chemistry, calculus, Chinese ...I graduated from the University of California, San Diego, where I majored in quantitative economics, and political science. I also wrote a dissertation, which was about 100 pages in math/ economics/political science to graduate with highest honors (both majors), in 3 years. I have been a tutor to numerous students over several years, ranging from middle school to graduate 29 Subjects: including trigonometry, calculus, statistics, geometry ...My teaching spans areas including the US, Europe and Latin America. This has expanded my ability to work with a diverse learning group of students. I have created new curriculum, always keeping in mind that the student comes first. 
13 Subjects: including trigonometry, calculus, ASVAB, geometry
{"url":"http://www.purplemath.com/livermore_ca_trigonometry_tutors.php","timestamp":"2014-04-18T19:12:50Z","content_type":null,"content_length":"24259","record_id":"<urn:uuid:0223feb7-770f-4cc8-b0f7-98c3915c3387>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Baby steps to regression

What do you see when you look at a regression analysis? Because me, all I see is a bunch of numbers and I have no idea where to look first or what's important. Could you start me off with regression in some baby steps? What is it that you are looking at when you stare at this stuff?

Never one to shy away from a student's request, here you go. I had data from 104 people aged 16–71 living on an American Indian reservation. All but 4 of them were over 18. I thought that there would be a NEGATIVE relationship between educational achievement and age given that the older people would have had fewer opportunities, for a lot of reasons. When I ran the regression, this was the first table I looked at. This tells me that the correlation between years of education and age was .268. Since it is positive, I can already see my hypothesis is not supported. The R-square is the amount of explained variance – so, 7.2% of the variance in educational attainment in this adult population is explained by age. The ANOVA table tells me that my F-value is 7.94 and the probability of an F-value that large is less than .01 – in fact, it is .006. So, age is positively related, it explains about 7.2 % of the variance and this is statistically significant. If you divide the regression sum of squares by the total sum of squares you will find the quotient is .072. This is not coincidence. The intercept tells me what the value of education would be if age was zero, which is where the regression line intercepts the Y axis. The constant is 9.993. Children on the reservation really aren't born with almost 10 years of education, which gives you some insight into the fact that you really shouldn't interpret the intercept in cases where an X of 0 is not really feasible. I'm interested in educational attainment of ADULTS. A more useful statistic is the standardized beta coefficient.
In the case where you only have one predictor, this will always equal the correlation between the dependent and the independent. Of course, it is significant and at the same level as the overall model, since it is the only variable in the model. If you square the t-value of 2.818, you’ll see it equals 7.94. This isn’t a coincidence, either. Okay, so we have a model that is significant, there is a positive relationship between age and education, with age explaining about 7% of the variance. I always want to do some checks for possible outliers, so I graph the data like this: It’s a pretty skewed distribution, with that one person at the far right being four standard deviations above the mean for education. I also see, when I plot age by years of education that our one highly educated person is also over 60, so extreme in both ends. I re-run the analysis without this one individual to see what happens. In fact, the regression is still significant, still positive but by dropping this one person the explained variance has dropped from 7.2% to 5%. (I could have looked at all of the same tables again, but you asked about a “quick and dirty” look, and I’d probably just glance at that one.) You might think if I dropped one outlier and it made that much difference, maybe dropping the handful who were under 18 years of age would make a difference also. I did that, ran the regression again, and this time with 99 of my original 104 people the explained variance had dropped to 2.6% — so, by dropping out just five people, less than 5%, the explained variance is now one-third of what it was and my model is non-significant. So …. hopefully this gives you a bit of insight into the first glances at a regression model and also the importance of not jumping up and running off as soon as you find a model with a significant F-value. 
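The "not a coincidence" identities above are easy to verify numerically: with a single predictor the standardized beta equals the correlation, R-square equals both the squared correlation and regression SS over total SS, and F equals the squared t. A quick sketch on invented toy data (not the reservation dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104
age = rng.uniform(16, 71, n)
educ = 10 + 0.05 * age + rng.normal(0, 2, n)   # invented data

r = np.corrcoef(age, educ)[0, 1]

# One-predictor OLS by hand
b = r * educ.std() / age.std()                 # slope
a = educ.mean() - b * age.mean()               # intercept
resid = educ - (a + b * age)
ss_res = (resid ** 2).sum()
ss_tot = ((educ - educ.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                       # = regression SS / total SS

t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))   # t for the slope
F = (r2 / 1) / ((1 - r2) / (n - 2))            # ANOVA F

print(np.isclose(r2, r ** 2), np.isclose(F, t_stat ** 2))  # True True
```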
Try to consider significance, explained variance, the standardized regression coefficient, and the potential effect of outliers for your first few baby steps.

One Response to "Baby steps to regression"

1. Monica on December 1st, 2012 2:35 am

This is so much easier to understand than our text book!!!
Author's abstract: "We study the stability of homomorphisms between topological (abelian) groups. Inspired by the 'singular' case in the stability of Cauchy's equation and the technique of quasi-linear maps, we introduce quasi-homomorphisms between topological groups, that is, maps $\omega: \mathcal{G} \to \mathcal{H}$ such that $\omega(0)=0$ and $\omega(x+y)-\omega(x)-\omega(y)\to 0$ (in $\mathcal{H}$) as $x,y\to 0$ in $\mathcal{G}$. The basic question here is whether $\omega$ is approximable by a true homomorphism $a$ in the sense that $\omega(x)-a(x)\to 0$ in $\mathcal{H}$ as $x\to 0$ in $\mathcal{G}$. Our main result is that quasi-homomorphisms $\omega: \mathcal{G}\to \mathcal{H}$ are approximable in the following two cases:

- $\mathcal{G}$ is a product of locally compact abelian groups and $\mathcal{H}$ is either $\mathbb{R}$ or the circle group $\mathbb{T}$.
- $\mathcal{G}$ is either $\mathbb{R}$ or $\mathbb{T}$ and $\mathcal{H}$ is a Banach space.

This is proved by adapting a classical procedure in the theory of twisted sums of Banach spaces. As an application, we show that every abelian extension of a quasi-Banach space by a Banach space is a topological vector space. This implies that most classical quasi-Banach spaces have only approximable (real-valued) quasi-additive functions."

A reference for both main concepts and results in the subject is the book of D. H. Hyers, G. Isac and Th. M. Rassias [Stability of functional equations in several variables (Progress in Nonlinear Differential Equations and their Applications 34, Boston, Birkhäuser) (1998; Zbl 0907.39025)].

39B82 Stability, separation, extension, and related topics
39B52 Functional equations for functions with more general domains and/or ranges
Ashland, MA Science Tutor

Find an Ashland, MA Science Tutor

...I will then analyze the test results and develop an individualized study plan for the student. I taught for four years at The Willow Hill School, Sudbury, MA, where I worked exclusively with students with learning disabilities. The majority of these students were diagnosed with ADD or ADHD. In t...
31 Subjects: including chemistry, reading, English, writing

...I love the enthusiasm of elementary science students. I encourage them to ask questions, put information together to draw conclusions, create and read graphs and charts, and consider what it means, what happens next, and why that matters. I have not taught elementary school, but I have tutored elementary students K-6 in all subjects, mostly reading, spelling and writing.
33 Subjects: including ACT Science, biology, psychology, English

...In addition to conducting and publishing research, I enjoy working individually with children and adults to help them achieve their potential--it keeps my scholarship "real". I hold a master's in Early Childhood Education and a PhD in Developmental Psychology. I've worked as a classroom teacher, reading specialist, literacy coach, and educational consultant for over 15 years.
19 Subjects: including psychology, Spanish, reading, writing

...Everyone has a different, personalized way in which they learn best. Some are visual learners, while others might be better listeners. Regardless of their preferred learning methods, the first things I always tackle with any new student are the basics.
20 Subjects: including chemistry, biology, reading, English

I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.
13 Subjects: including biology, chemistry, calculus, geometry
st: Re: infix problem

From: Kit Baum <baum@bc.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: infix problem
Date: Thu, 15 May 2008 08:43:23 -0400

-findit precision- will point you to several useful FAQs. Yes, a double will allow you to represent numbers up to 10^307. But that does not mean that you can have an arbitrary number of digits of precision in the representation of that number, as the FAQs discuss. You cannot represent integers with more than a certain number of binary digits of precision, which then translates into a certain number of decimal digits which can be held exactly: about 7 for a float, about 15 for a double. But to avoid worrying about these problems, just store integer IDs as strings of appropriate length.

Kit Baum, Boston College Economics and DIW Berlin
An Introduction to Modern Econometrics Using Stata:

On May 15, 2008, at 02:33, statalist-digest wrote:

> If double can't deal with 16 digits number, what does the help mean by saying it can store up to 8.9884656743*10^307?

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
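The distinction between range and precision can be demonstrated outside Stata: Python's built-in float is the same IEEE 754 double discussed in the post, so a short Python session illustrates why a 16-digit integer ID is not safe in a double even though the magnitude limit is about 8.99*10^307. (This is a Python illustration of the concept, not Stata code.)

```python
# A double stores only 53 binary digits of significand, so integers
# are exact only up to 2**53 (about 9.0e15, i.e. ~15-16 decimal digits),
# even though the largest representable magnitude is ~1.8e308.
ok = 2 ** 53 - 1                 # still exactly representable
assert int(float(ok)) == ok

big_id = 10 ** 16 + 1            # a 17-digit "ID"
assert int(float(big_id)) != big_id   # silently rounds -- precision lost

# Storing the ID as a string, as the post recommends, keeps every digit.
as_string = str(big_id)
assert as_string == "10000000000000001"
```

The same arithmetic explains the "about 7 for a float" figure: a single-precision float has a 24-bit significand, and 2**24 is about 1.7e7.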
Math Degrees - Online Degrees

Salaries For Degrees in Math

Graduates with online bachelor's math degrees can be employed in a variety of jobs. It is impossible to say what you personally will do with a degree in Math; our survey panel picked a number of occupations as likely options. Students with online master's math degrees are considered well prepared for becoming Logisticians.

The median salary for people with a degree in Math is $64,333.80. The lifetime value of this degree is approximately $1,245,499.00. Salaries are highly dependent on individual negotiating skill, years of related experience, your employer, location, and a host of other factors. The estimates we show on these pages are just that: estimates. Your individual experience will likely vary.

Where does this come from? The Bureau of Labor Statistics, a unit of the US government, classifies all workers into some 800-odd occupational categories. We paid a team of freelancers to get their view on what type of degree a holder of each type of job would likely have majored in. For pairs which had a high degree of consensus, we created a link between the degree and the job. From this, we calculated the average salary for Math degrees and converted it into a lifetime value. We then compared it against other degrees at the same level of schooling (such as online associate math degrees), so that you can make informed educational and employment decisions.

What Can a Math Student Expect to Learn?

When learning math, what you can expect to learn depends on the specific field of math you choose. If you enter theoretical mathematics, you will learn how to advance mathematical knowledge through the development of novel principles and through the ability to recognize new relationships between existing mathematical principles. You will typically be employed as faculty in universities and will spend your time teaching and conducting research.
If you enter applied mathematics, you will learn to use a variety of techniques and theories, such as computational models and mathematical modeling, to create and solve a variety of problems across engineering, government, business, physical, life, and social science settings. As an example, when studying to be an applied mathematician, you might learn how to analyze the effects and integrity of novel drugs, the wind resistance factors in a concept car, or the most efficient way to plan shipping routes between countries. You will learn to work in development, industrial research, and other settings.

Degree Requirements

If you are interested in becoming a primary or a secondary school mathematics teacher, you will not need more than a bachelor's degree in many states, as long as you meet the certification requirements of the state in which you intend to work. Similarly, if you plan to work for the federal government, you may not need more than a bachelor's degree in mathematics or a related field to find suitable employment. However, if you plan to work in private industry, you will typically need a doctoral degree (a Ph.D.) to be eligible for employment.

As a mathematician, you will learn to think critically and to apply a variety of mathematical concepts and skills to solve new and complex problems or to find more efficient solutions to already solved problems. You will be expected to collaborate with colleagues in a variety of settings and disciplines, and as a result will need to learn how to communicate your ideas effectively in person and in print, and how to work persuasively and cooperatively with others.

Online Schools Offering Accredited Math Degree Programs

A number of online schools are well known for offering courses in Math. Many are geared toward education, as there are many opportunities to teach with advanced degrees in mathematics.
Perhaps the most famous is the University of Phoenix, which offers a master of arts degree in education, curriculum, and instruction with a focus on mathematics education. The program offers a graduate degree for people interested in enhancing and developing their curriculum and instructional skills.

Walden University

Walden University offers a master of science degree in education with a specialty in mathematics for teachers interested in teaching a variety of grades. There is a kindergarten to 5th grade mathematics degree to help elementary teachers interested in achieving a deeper understanding of foundational mathematical concepts. There is an elementary reading and mathematics degree for elementary school teachers interested in teaching both reading and math skills, and there is a 6th to 8th grade mathematics degree for middle school teachers interested in improving their skills in foundational mathematical concepts.

Nova Southeastern University

Nova Southeastern University offers a master of arts in teaching and learning degree with a focus in elementary math. The degree is designed to assist classroom practitioners by connecting mathematical and educational theories to the most effective strategies to help students in the classroom.

University of Cincinnati

The University of Cincinnati offers a master of education degree in science, technology, engineering, and mathematics, which is a concentration designed to help educators who work in a variety of settings improve their knowledge of foundational topics and concepts. The university also offers a master of education math specialist degree focused on pre-kindergarten to the 6th grade. This degree is described as focused on transforming teachers into leaders by offering them a strong background and preparation in instructional strategies, mathematics content, and school leadership.
Top Colleges & Universities Offering Campus-based Math Degrees

Some highly ranked offline schools that offer courses in mathematics include:

• Massachusetts Institute of Technology
• Harvard University
• Princeton University
• Stanford University
• University of California at Berkeley
• University of Chicago
• California Institute of Technology
• University of California at Los Angeles
• University of Michigan at Ann Arbor
• New York University
• Yale University

Each of these schools offers undergraduate and graduate degrees in mathematics at the bachelor's, master's, and doctoral levels, which encourages further study for students who are interested in the topic. It is important to note that, of the schools listed, the only public schools are the University of California at Berkeley, the University of California at Los Angeles, and the University of Michigan at Ann Arbor; this consideration may be important to keep in mind for students looking for ways to save money on tuition.

Famous Students of Math

A number of different figures throughout history have become famous, popular, or well known for their study of and contributions to mathematics. Perhaps the most famous of these figures is Isaac Newton, who is commonly regarded as the father of what is now known as calculus (although he referred to it as fluxions in his time). Both he and Leibniz are credited with the creation of the fundamental theorem of calculus (that differentiation and integration are simply inverse operations of one another, much in the way multiplication and division are inverses of each other).

Carl Gauss is also regarded as one of the legendary figures of mathematics, and was referred to as the prince of mathematics. He is reputed to have corrected the arithmetic his father performed before he reached the age of three.
His contributions to mathematics include the concept of monogenic functions, which appear everywhere today in mathematical physics, as well as his work with the theory of complex numbers and his introduction of the fundamental theorem of algebra. He is also regarded as the most important number theoretician in all of history.

Archimedes is widely acknowledged to be the most prominent of the ancient mathematicians. His work led to advances in algebra, analysis, and number theory, but today he is most famous for the numerous theorems he created in relation to solid and planar geometry. His methods anticipated both differential and integral calculus. He also discovered formulas that account for both the surface areas and volumes of spheres.

Leonhard Euler is also considered to be one of the most influential mathematicians in history. Many of his notations and methods in mathematics are still being used today; he was also the most prolific mathematician known to history. Much of modern trigonometric notation comes from the work of Euler. He also did great work in number theory, proved e to be irrational, and proved a number of notable theorems of geometry. He was regarded as the best algorist in all of history, and introduced or popularized the symbols for the constant e, the ratio π, the imaginary unit i, and the summation sign Σ.
AE Expressions: a Basic Count-Down and Hold - Tutorials

This came from a question on another forum: how to have a type layer count from 100 to 0. Here's the expression:

    t = (1 / thisComp.frameDuration) * time;
    count = 100 - t;
    if (count < 0){
    }else{

If we add this to the Source Text property of a layer, this will do a simple countdown from 100 down to 0 for each frame that passes. How? Let's look!

thisComp.frameDuration is equal to the time it takes for one frame to elapse, in seconds. time is the current time in seconds. Therefore,

    t = (1 / thisComp.frameDuration) * time;

declares 't' to be equal to the current time in frames, regardless of fps.

Then we say

    count = 100 - t;

So, instead of going 0 - 100, now we have a value going from 100 to 0. There's probably a mathematical term for doing this flip-around, but I don't know what it is.

Now, thinking ahead, if we were making a countdown to zero, we want our countdown to stop at zero. So, we need to do a simple test:

    if (count < 0){

If 'count' is less than zero, then pass the value of 0. Therefore we will never see a negative number. If it is not less than zero, then do this:

    Math.floor(count);

We are handing this value to be our source text. First, let's just say we had "count" here. Our source text would display a 100 - 0 countdown, occasionally including a non-integer number like 4.9999999. Why? Because NTSC sucks. Our math is dealing with this multiple of 29.97, which yields some crazy numbers occasionally. If you are using PAL, then you might not even need this. BUT, we should always make our code bullet-proof. So, instead we need to round our number to the integer closest to, and not greater than, 'count'. Hence, Math.floor(count). Make sense?

Edited by graymachine, 21 June 2006 - 07:55 PM.
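For anyone who wants to sanity-check the arithmetic outside After Effects, here is the same logic as a small Python function. This is only an illustrative sketch: in AE, the host supplies time and thisComp.frameDuration for you, and the function name and the 29.97 fps value below are just example choices.

```python
import math

def countdown_value(time_s, frame_duration, start=100):
    """Mirror of the AE expression: count down one unit per frame
    from `start`, clamping at zero and flooring to an integer."""
    t = (1 / frame_duration) * time_s   # current time in frames
    count = start - t
    if count < 0:
        return 0
    return math.floor(count)

# At 29.97 fps (NTSC), frameDuration is 1/29.97 seconds.
fd = 1 / 29.97
print(countdown_value(0, fd))          # frame 0    -> 100
print(countdown_value(50.5 * fd, fd))  # frame 50.5 -> 49
print(countdown_value(200 * fd, fd))   # past frame 100 -> clamped to 0
```

Note how the floor (rather than round) matters: halfway through frame 50, the raw count is 49.5, and flooring keeps the display from ticking down early or showing 49.9999999-style values.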