content
stringlengths
86
994k
meta
stringlengths
288
619
Learning Algebra books Hey, I wanted to know if anyone knew of any good books for self teaching Algebra. I am currently using Khan academy and The complete idiots guide to Algebra (The sources are not bad when combined) I was wondering if anyone had any more books or websites such as those to reaffirm my grasp on algebra.
{"url":"http://www.physicsforums.com/showthread.php?s=0062868ca59b17772da8b114874e57b6&p=4593981","timestamp":"2014-04-20T14:17:54Z","content_type":null,"content_length":"29144","record_id":"<urn:uuid:c544bd11-6ca2-42a9-a0fe-adc168a24ad9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help July 16th 2008, 05:25 AM #1 Nov 2006 I was wondering if someone could show me how to do prove these in a Euclidean way: I need to show that the following three statements are equivalent for the triangle ABC, gamma (the angle opposite AB) = 90 degrees C is the orthocenter of ABC the midpoint of AB is the circumcenter of ABC I think I can prove this if I show the first implies the second, the second implies the third, and the third implies the first. I drew out diagrams and I see that it makes sense, but I'm having trouble writing it out in a more rigorous way. Any help would be appreciated. Thanks in advance. I was wondering if someone could show me how to do prove these in a Euclidean way: I need to show that the following three statements are equivalent for the triangle ABC, (1) gamma (the angle opposite AB) = 90 degrees (2) C is the orthocenter of ABC (3) the midpoint of AB is the circumcenter of ABC I think I can prove this if I show the first implies the second, the second implies the third, and the third implies the first. I drew out diagrams and I see that it makes sense, but I'm having trouble writing it out in a more rigorous way. Any help would be appreciated. Thanks in advance. (1) => (2) since $\angle C=\gamma=90^0$, then AC (BC) is the altitude w/ respect to vertex A (B) [which both contain C]. The altitude with respect to C passes also C. The three altitudes intersect at C, therefore, C is the orthocenter of $\Delta ABC$. (2) => (3) i'll try to work on this tomorrow. (3) => (1) Let D, the midpoint of AB, be the circumcenter of ABC. Let E be on CB such that DE is the perpendicular bisector of CB, and let F be on CA such that DF is the perpendicular bisector of CA. Then, CFDE forms a quadrilateral. In particular, since angles CDF and DEC forms right angles, then CFDE is a rectangle. therefore, angle C is a right angle. Last edited by kalagota; July 16th 2008 at 11:59 PM. (2) => (3) Let C be the orthocenter of ABC. then the altitude from A to BC is AC which means AC forms a right angle with BC. Let $M_{AB}$ be the midpoint of AB and consider a point outside the triangle, say D, such that ADBC forms a rectangle. Then, AB is a diagonal of the rectangle. Note that the diagonals of a rectangle bisect each other. Thus, $CM_{AB}$ is in the diagonal CD. Therefore $CM_{AB}$ is congruent to $M_{AB}B$. Let $M_{CB}$ be on CB such that $M_{AB}M_{CB}$ perpendicular to CB. now, $M_{AB}M_{CB}=M_{AB}M_{CB}$; and $CM_{AB}$ is congruent to $M_{AB}B$. Therefore, by HL theorem, $\Delta CM_{CB}M_{AB} \cong \Delta BM_{CB}M_{AB}$ (noting that the triangles are right.). By CPCTC Theorem, $CM_{CB} \cong BM_{CB}$; therefore, $M_{AB}M_{CB}$ is the perpendicular bisector of CB. similarly, if $M_{CA}$ be on CA such that $M_{AB}M_{CA}$ perpendicular to CA, it can be shown that $M_{AB}M_{CA}$ is the perpendicular bisector of CA. The perpendicular bisector of AB passes $M_{AB}$. Since all the perpendicular bisectors of each side intersect at $M_{AB}$, therefore $M_{AB}$ is the orthocenter of ABC.. PS: drawing will help you in the visualization of the proof.. Last edited by kalagota; July 17th 2008 at 12:50 AM. this is the drawing.. July 16th 2008, 07:19 AM #2 July 17th 2008, 12:22 AM #3 July 17th 2008, 12:36 AM #4
{"url":"http://mathhelpforum.com/geometry/43821-geometry.html","timestamp":"2014-04-16T12:11:06Z","content_type":null,"content_length":"43747","record_id":"<urn:uuid:d9ca7f96-7aa2-40b3-836c-1d4fe5e36e6e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the difference between DTFT and DFT? Suppose you have discrete signal $x[n], n=0..N-1$ and you need to decompose it in the sum of $N$sine waves (different amplitudes and phase) (again discrete and again having $N$ points) with periods from $N$ points up to 2 points + constant discrete value signal (because our original signal could have a shift in value relative to 0) Think for this only as a mathematical problem. This task is solved by DFT. This is what FFT (algorithm for DFT calculation) solves in the computers. $X[k]=\Bigsum_{n=0}^{N-1}x[n] e^{\frac{-j 2 \pi n k}{N}}$ Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function $\tilde{x(t)}=\Bigsum_{k=-\infty}^{+\infty}x(t)\delta(t-kT)$ where $T$ is a time sampling step and $\delta(t)$ is the infinity high and infinity short peak at the origin (Don’t confuse it with $\delta[n]$ which is 1 at the origin). Then you take the normal (analog) Fourier transform of the sampled version as it is shown below. That is all about DTFT. $\tilde{X(\omega)} =\Bigint_{-\infty}^{+\infty} \tilde{x(t)} e^{-j \omega t} \, dt$ Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. Answered by: AnanthKK On: 04-Apr-2011 Suppose you have discrete signal $x[n], n=0..N-1$ and you need to decompose it in the sum of $N$sine waves (different amplitudes and phase) (again discrete and again having $N$ points) with periods from $N$ points up to 2 points + constant discrete value signal (because our original signal could have a shift in value relative to 0) Think for this only as a mathematical problem. This task is solved by DFT. This is what FFT (algorithm for DFT calculation) solves in the computers. $X[k]=\Bigsum_{n=0}^{N-1}x[n] e^{\frac{-j 2 \pi n k}{N}}$ Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function $\tilde{x(t)}=\Bigsum_{k=-\infty}^{+\infty}x(t)\delta(t-kT)$ where $T$ is a time sampling step and $\delta(t)$ is the infinity high and infinity short peak at the origin (Don’t confuse it with $\delta[n]$ which is 1 at the origin). Then you take the normal (analog) Fourier transform of the sampled version as it is shown below. That is all about DTFT. $\tilde{X(\omega)} =\Bigint_{-\infty}^{+\infty} \tilde{x(t)} e^{-j \omega t} \, dt$ Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function $\tilde{x(t)}=\Bigsum_{k=-\infty}^{+\infty}x(t)\delta(t-kT)$ where $T$ is a time sampling step and $\delta(t)$ is the infinity high and infinity short peak at the origin (Don’t confuse it with $\delta[n]$ which is 1 at the origin). Then you take the normal (analog) Fourier transform of the sampled version as it is shown below. That is all about DTFT. $\tilde{X(\omega)} =\Bigint_{-\infty}^{+\infty} \tilde{x(t)} e^{-j \omega t} \, dt$ Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. $X[k]=\Bigsum_{n=0}^{N-1}x[n] e^{\frac{-j 2 \pi n k}{N}}$ Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function $\tilde{x(t)}=\Bigsum_{k=-\infty}^{+\infty}x(t)\delta(t-kT)$ where $T$ is a time sampling step and $\delta(t)$ is the infinity high and infinity short peak at the origin (Don’t confuse it with $\delta[n]$ which is 1 at the origin). Then you take the normal (analog) Fourier transform of the sampled version as it is shown below. That is all about DTFT. $\tilde{X(\omega)} =\Bigint_{-\infty}^{+\infty} \tilde{x(t)} e^{-j \omega t} \, dt$ Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. Suppose you have discrete signal $x[n], n=0..N-1$ and you need to decompose it in the sum of $N$sine waves (different amplitudes and phase) (again discrete and again having $N$ points) with periods from $N$ points up to 2 points + constant discrete value signal (because our original signal could have a shift in value relative to 0) Think for this only as a mathematical problem.This task is solved by DFT. This is what FFT (algorithm for DFT calculation) solves in the computers. $X[k]=\Bigsum_{n=0}^{N-1}x[n] e^{\frac{-j 2 \pi n k}{N}}$ Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function $\tilde{x(t)}=\Bigsum_{k=-\infty}^{+\infty}x(t)\delta(t-kT)$ where $T$ is a time sampling step and $\delta(t)$ is the infinity high and infinity short peak at the origin (Don’t confuse it with $\delta[n]$ which is 1 at the origin). Then you take the normal (analog) Fourier transform of the sampled version as it is shown below. That is all about DTFT. $\tilde{X(\omega)} =\Bigint_{-\infty}^{+\infty} \tilde{x(t)} e^{-j \omega t} \, dt$ Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. Suppose you have discrete signal (different amplitudes and phase) (again discrete and again having with periods from Think for this only as a mathematical problem.This task is solved by DFT. This is what FFT (algorithm for DFT calculation) solves in the computers. Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function $T$ is a time sampling step and transform of the sampled version as it is shown below. That is all about DTFT. Note that there is relation between DFT and DTFT if we made some assumption about analog signal above. Which actually makes the DFT (FFT actually) useful for calculation (with computer) of the real world (analog) problems. Answered by: Tarapriya On: 14-Mar-2012 Suppose you have discrete signal (different amplitudes and phase) (again discrete and again having with periods from Think for this only as a mathematical problem.This task is solved by DFT. This is what FFT (algorithm for DFT calculation) solves in the computers. Now about DTFT. Suppose you have analog real world signal. All the signals in real world are infinite in time. Then you sample this signal with analog sampling function Answered by: Nithy09cs12 On: 27-Jun-2012 Dear Guest, Spend a minute to in a few simple steps, for complete access to the Social Learning Platform with Community Learning Features and Learning Resources. If you are part of the Learning Community already,
{"url":"https://www.classle.net/faq/what-difference-between-dtft-and-dft","timestamp":"2014-04-20T21:08:10Z","content_type":null,"content_length":"65135","record_id":"<urn:uuid:1b43e15d-eb36-4557-904f-834db1fe717e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: use Logarthmic Differentiation to solve: y=(sin(3x))^ln(x) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4e9a31410b8ba77642a9c5ee","timestamp":"2014-04-20T10:57:12Z","content_type":null,"content_length":"42096","record_id":"<urn:uuid:916992b8-8714-4a18-98f0-92d3c3d750ec>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Does the Bible Contain a Mathematically Incorrect Value for "Pi"? by John D. Morris, Ph.D. Does the Bible contain errors in math? If it does, this calls into question its moral and spiritual authority. Much is at stake. Let's carefully examine one of the most frequent charges of error. When describing Solomon's Temple and its fixtures, Scripture tells of a great basin cast of molten brass "ten cubits from the one brim to the other: it was round all about, . . . and a line of thirty cubits did compass it round about" (I Kings 7:23). The circumference, c, of a circle is related to its diameter, d, by the ratio "pi" or "P" according to the equation c = Pd. Mathematical derivatives have calculated the precise value of P to many decimal places, but for most applications the approximation 3.14 is sufficient. Inserting the value of circumference and diameter given by Scripture into the equation yields a value of P to be 3, and it is this apparent error which gives Bible detractors such glee. Construction techniques in those days were surprisingly advanced. We can assume that their mathematics was precise and measurements handled with care. Notice that the basin "was an hand breadth thick, and the brim thereof was wrought like the brim of a cup, with flowers of lilies" (v.26). A "hand breadth" is an inexact distance of about four inches, but sufficient for this general description. The whole basin flared out at the top, much like a lily. So, exactly what do the dimensions given really represent? The diameter of the basin would be the inside diameter, measured from side to side. But the circumference would be measured by placing a cord around the outside, then measuring the length of the cord. Furthermore, at what elevation along the tapered basin was the measurement taken? Obviously, these are not intended to be precise, but to give the overall impression of great size and beauty. Engineers have adopted a technique to insure that reported measurements are properly understood. To do this they use the convention called "significant figures." The number 10 is quite different from the number 10.0 or 10.00 in the precision it implies. To an engineer the number 10 can actually mean anything between 9.5 and 10.5. Likewise, the number 30 can actually mean anything between 29.5 and While the number P is accurate to many decimal places, the other two numbers do not have this precision. When one precise number is multiplied by an imprecise number, the product should be reported with no more precision than the least precise factor. Multiplying the diameter, 10 (i.e., 9.5 to 10.5) by P, is properly understood as implying a circumference somewhere between 29.8 and 33.0. When constructing an object for which extremely high precision is needed (e.g., the space shuttle), numbers are designed, reported, and fabricated to several decimal places, but to expect such precision in a lay description of this huge basin cast from molten brass is not only improper, it shows lack of understanding of basic engineering concepts. Properly understood, the Bible is not only correct, it foreshadows modern engineering truth. Cite this article: John D. Morris, Ph.D. 2003. Does the Bible Contain a Mathematically Incorrect Value for "Pi"?. Acts & Facts. 32 (5).
{"url":"http://www.icr.org/article/524/2/","timestamp":"2014-04-19T09:24:28Z","content_type":null,"content_length":"21170","record_id":"<urn:uuid:354f8f13-923a-4ab6-b707-1b2eb5262d00>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Pot Odds 101 - an introduction For some poker players the term ‘pot odds’ may appear intimidating, and for the uninitiated it can sound like you need a degree in mathematics to understand the basic principles. Anyone who regularly watches poker on TV may have heard the commentators talking about, ‘implied odds’, ‘game theory’, ‘being priced out’, or heard players like Roland de Wolfe declaring that they are an ‘88.1% favourite’ post flop… but does this mean that pot odds are difficult and only for the advanced player? Thankfully, the answer is no and once you grasp the basic principles you may be surprised how your game opens up: you may well find yourself playing hands that you previously mucked on a regular basis. Conversely, you will be able to identify more situations where the stakes are too high and it is not worth committing any more chips to the pot. Before we start, there are two pieces of good news: The mathematics involved is not complicated. If you can mentally multiply and divide with small numbers then you should be well set. As with any discipline, the more you practice the more likely you are to improve and, over time, your calculation speed (and poker instincts) will get better. It's easier than you think! Being vaguely right with your maths is often all you need – you don’t need to be a human calculator! For example, let’s say you were mid-hand and reckoned that you had a 40% chance of improving post flop when the exact maths said that your chances were actually 37%. Is the small 3% difference really likely to affect your next decision? Rough maths is fine as long as you can get to within a few percent of the correct figure. This first article is going to concentrate on the basic principles and mathematical notations we need to use by looking at simple, non-poker related examples. Later on we will look at how to work out your pot odds in different poker situations and how to apply this information to your game. So, what are pot odds and what do we hope to achieve with them? Well, pot odds are a guide we can use to work out how likely our hand is to improve at any stage of play (pre and post flop, turn and river), and then identify if we are getting good value to play those hands. They can even be used sometimes after the river to help you decide whether or not to make marginal calls. The main principle involved is one of comparison. Most of the time we are looking to work out the odds of our hand improving to the point where we think we will have the best hand over our opponent (s), and then compare that against how many more chips we are going to have to spend to continue with play and how much we can win from the pot if we do so. In short, we are always looking to be spending chips when we can identify good value bets. Sound complicated? Well, let’s look at a couple of simple, non-poker examples to illustrate the point: 1) Tossing a coin – heads or tails. The chances of calling heads or tails and winning (or losing) is equal, or 50 – 50. If I offer to pay you $5.10 for every time you win, and you pay me $4.90 every time I win, then you would be daft not to take me up every time because you have identified a good value bet: your chances of winning or losing are equal, but you will make more cash by winning than you will have to spend by losing. Now, you could quite easily play and lose three or four times in a row and be out of pocket, but the important point here is that in the long run you will always expect to profit, even if you happen to lose in the short term. Whatever the stakes, big or small, once you identify a good value bet then you should almost always take it because if you continue to make similar decisions in similar situations then, over time, the law of averages will play out and you will profit. Remember, luck is only a short term phenomenon! 2) Rolling a die and getting a 6. Let’s say that I’ll give you $10 if you roll a 6, and you give me $1 if you don’t. Is this a good value bet? So what are the odds telling us here? Well, let’s introduce some maths. What are the chances of you winning? There are obviously six numbers on a die and only one of them is a six, so your chances of winning are 1 in 6. If we like we can convert this into a percentage: 1 divided by 6, multiplied by 100 = 17% (to the nearest percent). But, there is one more useful way of expressing these odds, as 5 to 1 (or 5:1). What this means is there are 5 numbers on the die that will cause you to lose, compared to the 1 number on the die whereby you will win. So, 1 in 6, 5:1 and 17% are three different ways of expressing the same probability. Okay, we know you are 5:1 to win. The last stage is to compare the ‘cash odds’ (in poker we would be saying ‘pot odds’) and see what they are telling us. We know that if you win you will get $10 and if you lose you will pay $1, so the ‘cash odds’ are simply 10 to 1, or 10:1. Now we have two sets of odds that we can compare. We know you are 5:1 to actually win, so if the ‘cash odds’ are any better than 5:1 then you are getting a good value bet… and the ‘cash odds’ are much better at 10:1 so you should always take that bet even though you only have a 17% chance of winning. I.e. you are very likely to lose in the short term, but if you play this situation repeatedly you will make money in the long term. To prove this, let’s just look at six rolls of the die and assume that the law of averages is being fair, so you will expect to lose 5 times and win just the once: You will lose five times and have to pay $1 each time, meaning a loss of $5. You will win once and receive $10. So by playing the game just six times you have won $10 and spent $5 so overall you have a profit of $5, and that’s by playing a game with only a 17% chance to win! Even without looking at a poker hand, that is an introduction to the principles and mathematics that we will be applying to our poker situations. In the next lesson we will be getting used to working out and converting between the three mathematical notations introduced here, e.g. the ability to calculate probabilities as, say: 1 in 5, 4:1, or 20%. The good news is that if you can master the next lesson then you will have a grasp of most of the mathematics you will ever need… …and if you recognize that 1 in 5, 4:1 and 20% are actually different ways of expressing the same probability then you are already ahead of the game! JW – July 2011
{"url":"http://www.pokereverything.com/articles/mathematics/pot-odds-101-an-introduction.html","timestamp":"2014-04-18T20:43:40Z","content_type":null,"content_length":"94118","record_id":"<urn:uuid:003c9983-6861-44de-b6e3-1dcfd741d868>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Barto Math Tutor Find a Barto Math Tutor ...After spending four years assisting teachers in the Algebra 1 classroom, I am confident in my knowledge of the subject and of my ability to help your student succeed. I know how frustrating in can be to struggle with a subject matter, and would love to help ease your student's anxiety and suppor... 3 Subjects: including algebra 1, prealgebra, elementary (k-6th) ...I am CompTIA Network + Certified. I have received extensive computer networking training from New Horizons Computer Learning Center, and I received a master of applied science in information technology. I have several years of experience troubleshooting and fixing networks and computer issues. 17 Subjects: including prealgebra, algebra 1, algebra 2, geometry ...I firmly believe students learn and retain knowledge better by studying in short, frequent sessions with no distractions. I highly encourage students to study each subject a little bit each day, even if the class does not meet daily. Regular homework, which includes review problems, is a cornerstone of effective learning. 3 Subjects: including calculus, chemistry, physics ...As a professional actress as well as a college student, I am always studying. Whether it be scripts or Calculus, I have always had a knack at keeping myself organized. Studying takes patience and organization. 28 Subjects: including trigonometry, public speaking, writing, precalculus ...I have 7 publications and have authored a book chapter. My GPA in both undergraduate and graduate school was a 3.6/4.0. I have done volunteer tutoring in the past and really enjoyed both the interaction with the students as well as the look on their face when they finally "got it". 16 Subjects: including calculus, chemistry, elementary (k-6th), physics
{"url":"http://www.purplemath.com/Barto_Math_tutors.php","timestamp":"2014-04-19T17:41:04Z","content_type":null,"content_length":"23399","record_id":"<urn:uuid:4dec109c-6b52-43ea-9dc2-501258ccfcbd>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Matlab question Hey guys, I was just wondering if anyone knows how to set the initial conditions for ode45() if you know f(1.5) but NOT f(0) Currently I have >> ode45(f, [0 1 1.8 2.1], [1.5 .5]) But this creates the following error: ??? Error using ==> funfun/private/odearguments @(T,Y) (T-EXP(-T))/(Y+EXP(Y)) must return a column vector.
{"url":"http://www.physicsforums.com/showthread.php?t=218659","timestamp":"2014-04-17T09:57:26Z","content_type":null,"content_length":"31977","record_id":"<urn:uuid:a0f81daf-1303-49c1-a336-ae7dff971e8c>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
— Functions creating iterators for efficient looping def take(n, iterable): "Return first n items of the iterable as a list" return list(islice(iterable, n)) def tabulate(function, start=0): "Return function(0), function(1), ..." return map(function, count(start)) def consume(iterator, n): "Advance the iterator n-steps ahead. If n is none, consume entirely." # Use functions that consume iterators at C speed. if n is None: # feed the entire iterator into a zero-length deque collections.deque(iterator, maxlen=0) # advance to the empty slice starting at position n next(islice(iterator, n, n), None) def nth(iterable, n, default=None): "Returns the nth item or a default value" return next(islice(iterable, n, None), default) def quantify(iterable, pred=bool): "Count how many times the predicate is true" return sum(map(pred, iterable)) def padnone(iterable): """Returns the sequence elements and then returns None indefinitely. Useful for emulating the behavior of the built-in map() function. return chain(iterable, repeat(None)) def ncycles(iterable, n): "Returns the sequence elements n times" return chain.from_iterable(repeat(tuple(iterable), n)) def dotproduct(vec1, vec2): return sum(map(operator.mul, vec1, vec2)) def flatten(listOfLists): "Flatten one level of nesting" return chain.from_iterable(listOfLists) def repeatfunc(func, times=None, *args): """Repeat calls to func with specified arguments. Example: repeatfunc(random.random) if times is None: return starmap(func, repeat(args)) return starmap(func, repeat(args, times)) def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip(a, b) def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return zip_longest(*args, fillvalue=fillvalue) def roundrobin(*iterables): "roundrobin('ABC', 'D', 'EF') --> A D E B F C" # Recipe credited to George Sakkis pending = len(iterables) nexts = cycle(iter(it).__next__ for it in iterables) while pending: for next in nexts: yield next() except StopIteration: pending -= 1 nexts = cycle(islice(nexts, pending)) def partition(pred, iterable): 'Use a predicate to partition entries into false entries and true entries' # partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9 t1, t2 = tee(iterable) return filterfalse(pred, t1), filter(pred, t2) def powerset(iterable): "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)" s = list(iterable) return chain.from_iterable(combinations(s, r) for r in range(len(s)+1)) def unique_everseen(iterable, key=None): "List unique elements, preserving order. Remember all elements ever seen." # unique_everseen('AAAABBBCCDAABBB') --> A B C D # unique_everseen('ABBCcAD', str.lower) --> A B C D seen = set() seen_add = seen.add if key is None: for element in filterfalse(seen.__contains__, iterable): yield element for element in iterable: k = key(element) if k not in seen: yield element def unique_justseen(iterable, key=None): "List unique elements, preserving order. Remember only the element just seen." # unique_justseen('AAAABBBCCDAABBB') --> A B C D A B # unique_justseen('ABBCcAD', str.lower) --> A B C A D return map(next, map(itemgetter(1), groupby(iterable, key))) def iter_except(func, exception, first=None): """ Call a function repeatedly until an exception is raised. Converts a call-until-exception interface to an iterator interface. Like builtins.iter(func, sentinel) but uses an exception instead of a sentinel to end the loop. iter_except(functools.partial(heappop, h), IndexError) # priority queue iterator iter_except(d.popitem, KeyError) # non-blocking dict iterator iter_except(d.popleft, IndexError) # non-blocking deque iterator iter_except(q.get_nowait, Queue.Empty) # loop over a producer Queue iter_except(s.pop, KeyError) # non-blocking set iterator if first is not None: yield first() # For database APIs needing an initial cast to db.first() while 1: yield func() except exception: def random_product(*args, repeat=1): "Random selection from itertools.product(*args, **kwds)" pools = [tuple(pool) for pool in args] * repeat return tuple(random.choice(pool) for pool in pools) def random_permutation(iterable, r=None): "Random selection from itertools.permutations(iterable, r)" pool = tuple(iterable) r = len(pool) if r is None else r return tuple(random.sample(pool, r)) def random_combination(iterable, r): "Random selection from itertools.combinations(iterable, r)" pool = tuple(iterable) n = len(pool) indices = sorted(random.sample(range(n), r)) return tuple(pool[i] for i in indices) def random_combination_with_replacement(iterable, r): "Random selection from itertools.combinations_with_replacement(iterable, r)" pool = tuple(iterable) n = len(pool) indices = sorted(random.randrange(n) for i in range(r)) return tuple(pool[i] for i in indices)
{"url":"http://www.wingware.com/psupport/python-manual/3.4/library/itertools.html","timestamp":"2014-04-19T02:02:36Z","content_type":null,"content_length":"117859","record_id":"<urn:uuid:53b9c202-938e-4c36-8502-2a8ad591e7d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Pyramid scheme formula in PHP? up vote 3 down vote favorite I'm working on a referral system - the formula is exactly like the pyramid/ponzi scheme. The system works like this: The initial user signs up (tier 1) The initial user refers 3 friends (tier 2) Each of those 3 friends refer another 3, (tier3) What would be the mathematical formula for that? How could I code up something in PHP where I could enter a number and it will then give me the number of tiers it has gone down and a semi-visual. ie: I enter 13 - it displays the text "3 tiers" and then displays / | \ php n-tier formula What have you tried and thought so far, how did you fail? People here like to help but almost nobody is going to do your homework for you! – markus Mar 20 '11 at 15:41 @markus: Hi, I've researched into the formula, but have no idea how I could code that up in PHP. mathmotivation.com/money/pyramid-scheme.html (the 2-up). Happy to give it a shot though, then come back here for help if needed :) – DT85 Mar 20 '11 at 15:44 add comment 1 Answer active oldest votes This is a geometric progression (GP), each tier's count is multiplied by a constant number (3) i.e. 1, 3, 9, 27 etc. You are concerned with the sum of the progression. up vote 2 down vote Read a simplified explanation about GP here. http://www.intmath.com/series-binomial-theorem/2-geometric-progressions.php Once you get your way around GPs and the formula for calculating the count of each tier in the GP, making PHP print something on each tier - that number of times - will be a piece of cake. Cheers! – Cogicero Mar 20 '11 at 16:16 That's what it's called! Great - thank you so much. I'll get working on things now :) – DT85 Mar 20 '11 at 16:26 @DT85 Glad to be of help. You're welcome, and I wish you all the best with this! :) – Cogicero Mar 20 '11 at 16:29 Cheers @Cogicero – DT85 Mar 20 '11 at 16:33 add comment Not the answer you're looking for? Browse other questions tagged php n-tier formula or ask your own question.
{"url":"http://stackoverflow.com/questions/5369269/pyramid-scheme-formula-in-php?answertab=votes","timestamp":"2014-04-24T08:32:47Z","content_type":null,"content_length":"69691","record_id":"<urn:uuid:7ae04f88-e0a0-43aa-abd5-84f39e888d1e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Neural Networks NCGIA Core Curriculum in Geographic Information Science URL: "http://www.ncgia.ucsb.edu/giscc/units/u188/u188.html" Unit 188 - Artificial Neural Networks for Spatial Data Analysis Written by Sucharita Gopal Department of Geography and Centre for Remote Sensing Boston University, Boston MA 02215 DRAFT - comments invited This unit is part of the NCGIA Core Curriculum in Geographic Information Science. These materials may be used for study, research, and education, but please credit the author, Sucharita Gopal, and the project, NCGIA Core Curriculum in GIScience. All commercial rights reserved. Copyright 1998 by Sucharita Gopal. Your comments on these materials are welcome. A link to an evaluation form is provided at the end of this document. Advanced Organizer Topics covered in this unit Intended Learning Outcomes After learning the material in this unit, students should be able to: • Define ANN and describe different types and some applications of ANN • Explain the applications of ANN in geography and spatial analysis • Explain the differences between ANN and AI, and between ANN and statistics • Demonstrate a broad understanding of methodology in using ANN • Apply a supervise ANN model in a classification problem • Apply a supervised ANN in a function estimation problem Unit 188 - Artificial Neural Networks for Spatial Data Analysis 1. Introduction 1.1. What are Artificial Neural Networks (ANN)? • provide the potential of an alternative information processing paradigm that involves □ large interconnected networks of processing units (PE) □ units relatively simple and typically non-linear □ units connected to each other by communication channels or "connections" □ connections carry numeric (as opposed to symbolic) data; encoded by any of various means □ units operate only on their local data and on the inputs they receive via the connections 1.2. Some Definitions of ANN • According to the DARPA Neural Network Study (1988, AFCEA International Press, p. 60): □ a neural network is a system composed of many simple processing elements operating in parallel whose function is determined by network structure, connection strengths, and the processing performed at computing elements or nodes. • According to Haykin, S. (1994),Neural Networks: A Comprehensive Foundation, NY: Macmillan, p. 2: □ A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain processor in two respects: 1. Knowledge is acquired by the network through a learning process. 2. Interneuron connection strengths known as synaptic weights are used to store the knowledge. 1.3. Brief History of ANN • ANN were inspired by models of biological neural networks since much of the motivation came from the desire to produce artificial systems capable of sophisticated, perhaps "intelligent", computations similar to those that the human brain routinely performs, and thereby possibly to enhance our understanding of the human brain. 1.4. Applications of ANN • ANN is a multi-disciplinary field and as such its applications are numerous including □ finance □ industry □ agriculture □ business □ physics □ statistics □ cognitive science □ neuroscience □ weather forecasting □ computer science and engineering □ spatial analysis and geography 1.5. Differences between ANN and AI approaches: • Several features distinguish this paradigm from conventional computing and traditional artificial intelligence approaches. In ANN □ information processing is inherently parallel. □ knowledge distributed throughout the system □ ANNs are extremely fault tolerant □ adaptive model free function estimation, non-algorithmic strategy 1.6. ANN in Apatial Analysis and Geography • Fischer (1992) outlines the role of ANN in both exploratory and explanatory modeling. • Key candidate application areas in exploratory geographic information processing are considered to include: □ exploratory spatial data and image analysis (pattern detection and completion, classification of very large data sets), especially in remote sensing and data rich GIS environments (Carpenter et al., 1997 ) □ Regional taxonomy including functional problems and homogenous problems (See Openshaw, 1993) • Key candidate application areas in explanatory geographic information processing □ spatial interaction modeling including spatial interaction analysis and choice analysis (e.g. Fischer and Gopal, 1995) □ optimization problems such as classical traveling salesman problem and shortest-path problem in networks ( Hopfield and Tank, 1985) □ space-time statistical modeling (Gopal and Woodcock, 1996 ). 1.7. Relationship between Statistics and ANN • Major points of differences worth noting are: □ While statistics is concerned with data analysis, supervised ANN emphasize statistical inference . □ Some neural networks are not concerned with data analysis (e.g., those intended to model biological systems) □ Some neural networks do not learn (e.g., Hopfield nets) and therefore have little to do with statistics. □ Some neural networks can learn successfully only from noise-free data (e.g., ART or the perceptron rule) and therefore would not be considered statistical methods • Most neural networks that can learn to generalize effectively from noisy data are similar or identical to statistical methods. • Major points of similarities worth noting are: □ Feedforward nets with no hidden layer (including functional-link neural nets and higher-order neural nets) are basically generalized linear models. □ Probabilistic neural nets are identical to kernel discriminant analysis. □ Kohonen nets for adaptive vector quantization are very similar to k-means cluster analysis. □ Hebbian learning is closely related to principal component analysis. • Some neural network areas that appear to have no close relatives in the existing statistical literature are: □ Kohonen's self-organizing maps. □ Reinforcement learning (although this is treated in the operations research literature on Markov decision processes 2. Types of ANN • There are many types of ANNs. □ Many new ones are being developed (or at least variations of existing ones). 2.1. Networks based on supervised and unsupervised learning 2.1.1. Supervised Learning • the network is supplied with a sequence of both input data and desired (target) output data network is thus told precisely by a "teacher" what should be emitted as output. • The teacher can during the learning phase "tell" the network how well it performs ("reinforcement learning") or what the correct behavior would have been ("fully supervised learning"). 2.1.2. Self-Organization or Unsupervised Learning • a training scheme in which the network is given only input data, network finds out about some of the properties of the data set , learns to reflect these properties in its output. e.g. the network learns some compressed representation of the data. This type of learning presents a biologically more plausible model of learning. • what exactly these properties are, that the network can learn to recognise, depends on the particular network model and learning method. 2.2. Networks based on Feedback and Feedforward connections • The following shows some types in each category • Unsupervised Learning Feedback Networks: ○ Binary Adaptive Resonance Theory (ART1) ○ Analog Adaptive Resonance Theory (ART2, ART2a) ○ Discrete Hopfield (DH) ○ Continuous Hopfield (CH) ○ Discrete Bidirectional Associative Memory (BAM) ○ Kohonen Self-organizing Map/Topology-preserving map (SOM/TPM) Feedforward-only Networks: ○ Learning Matrix (LM) ○ Sparse Distributed Associative Memory (SDM) ○ Fuzzy Associative Memory (FAM) ○ Counterprogation (CPN) • Supervised Learning Feedback Networks: ☆ Brain-State-in-a-Box (BSB) ☆ Fuzzy Congitive Map (FCM) ☆ Boltzmann Machine (BM) ☆ Backpropagation through time (BPTT) Feedforward-only Networks: ☆ Perceptron ☆ Adaline, Madaline ☆ Backpropagation (BP) ☆ Artmap ☆ Learning Vector Quantization (LVQ) ☆ Probabilistic Neural Network (PNN) ☆ General Regression Neural Network (GRNN) 3. Methodology: Training, Testing and Validation Datasets • In the ANN methodology, the sample data is often subdivided into training, validation, and test sets. • The distinctions among these subsets are crucial. • Ripley (1996) defines the following (p.354): □ Training set: A set of examples used for learning, that is to fit the parameters [weights] of the classifier. □ Validation set: A set of examples used to tune the parameters of a classifier, for example to choose the number of hidden units in a neural network. □ Test set: A set of examples used only to assess the performance [generalization] of a fully-specified classifier. 4. Application of a Supervised ANN for a Classification Problem • In this section, we describe how two neural networks to classify data and estimate unknown functions. Multi-Layer Perceptron (MLP) and fuzzy ARTMAP networks. 4.1. Multi-Layer Perceptron (MLP) Using Backpropagation • A popular ANN classifier is the Multi-Layer Perceptron (MLP) architecture trained using the backpropagation algorithm. • In overview, a MLP is composed of layers of processing units that are interconnected through weighted connections. □ The first layer consists of the input vector □ The last layer consists of the output vector representing the output class. □ Intermediate layers called `hidden` layers receive the entire input pattern that is modified by the passage through the weighted connections. The hidden layer provides the internal representation of neural pathways. • The network is trained using backpropagation with three major phases. □ First phase: an input vector is presented to the network which leads via the forward pass to the activation of the network as a whole. This generates a difference (error) between the output of the network and the desired output. □ Second phase: computethe error factor (signal) for the output unit and propagates this factor successively back through the network (error backward pass). □ Third phase: compute the changes for the connection weights by feeding the summed squared errors from the output layer back through the hidden layers to the input layer. • Continue this process until the connection weights in the network have been adjusted so that the network output has converged, to an acceptable level, with the desired output. • Assign "unseen" or new data □ The trained network is then given the new data and processing and flow of information through the activated network should lead to the assignment of the input data to the output class. • For the basic equations relevant to the backpropagation model based on generalized delta rule, the training algorithm that was popularized by Rumelhart, Hinton, and Williams, see chapter 8 of Rumelhart and McClelland (1986). 4.1.1. Things to note while using the backpropagation algorithm • Learning rate: □ Standard backprop can be used for incremental (on-line) training (in which the weights are updated after processing each case) but it does not converge to a stationary point of the error surface. To obtain convergence, the learning rate must be slowly reduced. This methodology is called "stochastic approximation." □ In standard backprop, too low a learning rate makes the network learn very slowly. Too high a learning rate makes the weights and error function diverge, so there is no learning at all. □ Trying to train a NN using a constant learning rate is usually a tedious process requiring much trial and error. There are many variations proposed to improve the standard backpropagation as well as other learning algorithms that do not suffer from these limitations. For example, stabilized Newton and Gauss-Newton algorithms, including various Levenberg-Marquardt and trust-region • Output Representation: □ use 1-of-C coding or dummy variables. □ For example, if the categories are Water, Forest and Urban, then the output data would look like this: Category Dummy variables -------- --------------- Water 1 0 0 Forest 0 1 0 Urban 0 0 1 • Input Data: □ Normalize or transform the data into [0,1] range. This can help for various reasons. • Number of Hidden Units: □ simply try many networks with different numbers of hidden units, estimate the generalization error for each one, and choose the network withthe minimum estimated generalization error. • Activation functions □ for the hidden units, are needed to introduce nonlinearity into the network. □ Without nonlinearity, hidden units would not make nets more powerful than just plain perceptrons (which do not have any hidden units, just input and output units). □ The sigmoidal functions such as logistic and tanh and the Gaussian function are the most common choices. 4.2. Fuzzy ARTMAP • This is a supervised neural network architecture that is based on "Adaptive Resonance Theory", proposed by Stephen Grossberg in 1976. • ART encompasses a wide variety of neural networks based explicitly on human information processing and neurophysiology. • ART networks are defined algorithmically in terms of detailed differential equations intended as plausible models of biological neurons. • In practice, ART networks are implemented using analytical solutions or approximations to these differential equations. • ART is capable of developing stable clusterings of arbitrary sequences of input patterns by self-organisation. • Fuzzy ARTMAP is based on ART. □ Fuzzy ARTMAP's internal control mechanisms create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error during on-line learning. □ Fuzzy ARTMAP incorporates fuzzy logic in its ART modules □ Fuzzy ARTMAP has fuzzy set-theoretic operations instead of binary set-theoretic operations. □ It learns to classify inputs by a fuzzy set of features (or a pattern of fuzzy membership values between 0 and 1) 4.2.1. Basic architecture of fuzzy ARTMAP • A pair of fuzzy ART modules, ART_a and ART_b, connected by an associative learning network called a map field □ the map field makes the association between ART_a and ART_b categories. • A mismatch between the actual and predicted value of output causes a memory search in ART_a, a mechanism called match tracking • Vigilance, a parameter (0-1) in ART_a, is raised by the minimum amount necessary to trigger a memory search. □ This can lead to a selection of a new ART_a category that is a better predictor of output. • Fast learning and match tracking enable fuzzy ARTMAP to learn to predict novel events while maximizing code compression and preserving code stability. • Carpenter (1997) gives a complete description of the algorithm and description of fuzzy ARTMAP for remote sensing applications. 4.3. Software • There are many commercial and free software packages for running backpropagation. • For fuzzy ARTMAP, use ftp://cns-ftp.bu.edu/pub/fuzzy-artmap.tar.Z □ The exercise at the end of this chapter uses a classification example using this package. It uses data from remote sensing data set. 5. Application Exercises: Backpropagation algorithm X Fuzzy ARTMAP for Classification of Landcover Classes 5.1. Data Set 1 • NOTE: please contact the author to obtain a copy of this data • This data set has 6 inputs (Landsat TM Spectral Bands) and 8 output classes represented as a single number (1-8) for each pixel. • Train the neural network with 80% of the data and test it on the remaining 20% of the data. 1. Compare the performance of backpropagation and fuzzy ARTMAP. 2. Use different settings of crucial parameters such as learning (in backpropagation) and vigilance (In fuzzy ARTMAP). Are results different? 3. Use a conventional statistical models to benchmark the performance of neural networks? 5.2. Data Set 2 • This data is found in http://lib.stat.cmu.edu/datasets/boston. 1. Use MLP with backpropagation to estimate the Median value of owner-occupied homes in Boston. 2. Use a conventional regression model to compare your results? 6. Summary This unit has introduced some definitions and types of neural networks □ It has examined the differences between ANN and statistics □ It has given an overview of application domains □ It has provided the use of MLP and Fuzzy ARTMAP neural networks for classification problems □ Sample data sets are provided along with information on free software sources to enable users to learn the applications of ANN. 7. Review and Study Questions 8. References 8.1. References in the text of this unit • Fischer, M.M. Expert systems and artificial neural networks for spatial analysis and modeling. Essential components for knowledge-based geographic information systems, Paper presented at the Specialist Meeting on `GIS and Spatial Analysis' organized by the NCGIA, San Diego, April 15-18, 1992. • Fischer, M., and Gopal, S. Neural network models and interregional telephone traffic: comparative performances between multilayer feedforward networks and the conventional spatial interaction model, Journal of Regional Science, 34,4, 503-527, 1995. • Carpenter, G., Gjaja, M., Gopal, S., and Woodcock, C. ART networks in Remote Sensing, IEEE Transactions on Geoscience and Remote Sensing, 35(2), 308-325, 1997. • Gopal, S. and Fischer, M. Learning in single hidden layer feedforward neural network models: backpropagation in a spatial interaction modeling context, Geographical Analysis, 28 (1), 38-55, 1996. • Gopal, S. and Woodcock, C. E. Remote sensing of forest change using artificial Neural Networks, IEEE Transactions on Geoscience and Remote Sensing, 34 (2), 398-404, 1996. • Hopfield, J.J.and Tank, D.W. (1985): Neural computation of decisions in optimization problems, Biological Cybernetics 52, pp. 141-152. • Openshaw, S. (1993). Modelling spatial interaction using a neural net, in M. M. Fischer and P. Nijkamp (eds) GIS Spatial Modeling and Policy, Springer, Berlin, pp. 147-164. 8.2. Books • For the interested reader, I have selected some books out of a plethora of publications. This list is not exhaustive but a good starting point. • Bishop, C.M. (1995). Neural Networks for Pattern Recognition, Oxford: Oxford University Press. ISBN 0-19-853849-9 (hardback) or 0-19-853864-2 (paperback), xvii+482 pages. • Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the Theory of Neural Computation. Addison-Wesley: Redwood City, California. ISBN 0-201-50395-6 (hardbound) and 0-201-51560-1 • Ripley, B.D. (1996) Pattern Recognition and Neural Networks, Cambridge: Cambridge University Press, ISBN 0-521-46086-7 (hardback), xii+403 pages. • Weigend, A.S. and Gershenfeld, N.A., eds. (1994) Time Series Prediction: Forecasting the Future and Understanding the Past, Addison-Wesley:Reading, MA. • Masters, Timothy (1994). Practical Neural Network Recipes in C++, Academic Press, ISBN 0-12-479040-2, US $45 incl. disks. • Fausett, L. (1994), Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Englewood Cliffs, NJ: Prentice Hall, ISBN 0-13-334186-0. Also published as a Prentice Hall International Edition, ISBN 0-13-042250-9. Sample software (source code listings in C and Fortran) is included in an Instructor's Manual. • Michie, D., Spiegelhalter, D.J. and Taylor, C.C. (1994), Machine Learning, Neural and Statistical Classification, Ellis Horwood. • Aleksander, I. and Morton, H. (1990). An Introduction to Neural Computing. Chapman and Hall. (ISBN 0-412-37780-2). 8.3. Classics • Kohonen, T. (1984). Self-organization and Associative Memory. Springer-Verlag: New York. (2nd Edition: 1988; 3rd edition: 1989). • Rumelhart, D. E. and McClelland, J. L. (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition (volumes 1 & 2). The MIT Press. We are very interested in your comments and suggestions for improving this material. Please follow the link above to the evaluation form if you would like to contribute in this manner to this evolving project. To reference this material use the appropriate variation of the following format: Sucharita Gopal. (1998) Artificial Neural Networks for Spatial Data Analysis, NCGIA Core Curriculum in GIScience, http://www.ncgia.ucsb.edu/giscc/units/u188/u188.html, posted December 22, 1998. The correct URL for this page is: http://www.ncgia.ucsb.edu/giscc/units/u188/u188.html Created: November 23, 1998. Last revised: December 22, 1998. To the Core Curriculum Outline
{"url":"http://www.ncgia.ucsb.edu/giscc/units/u188/u188.html","timestamp":"2014-04-18T15:59:24Z","content_type":null,"content_length":"30349","record_id":"<urn:uuid:57ec9c56-ffed-4339-9860-c88d2bd36143>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
Aharonov-Bohm topological explanation So we have a situation for the AB experimental setup in which something is switched off, where the EM-field is zero for the electrons and there's no A-field, and a situation in which something is switched on where the EM field is still zero for the electrons but the A-field is nonzero and produces a phase shift. We can relate that nonzero A-field with the switching on of something, I'll let you call that something however you want, but I think it's called an EM-field. In that case of the experimental setup it's nothing else but the electric current in the solenoid. The take away point is that the A-field is related with switching it on, fine so far? What I want to underline is that I'm not doubting the effect, I'm only concerned about the usual explanation of it because it seems to mix two incompatible scenarios in a contradictory way, the R^3 scenario and the R^3/R scenario. Hm, I'm curious, what's next. The situation before the switching on is compatible with both spaces, ... No current, no A-field, yes, that's allowed in both cases. ... but the situation after is compatible with the R^3/R space only, ... I don't understand. In the case of the experimental setup with a solenoid, switching on the current produces an A-field outside the solenoid. In the R³/R case we don't care where the A-field comes from; it's there - end-of-story. The funny thing is that the math is the same, so the R³/R case is an idealization. There are two ways you can look at the experiment: 1) you constructed an apparatus with the solenoid and you can switch the current on and off. In that case you know what you are doing. You can solve the Maxwell equations for the current; you find the non-vanishing EM-field inside the solenoid and the vanishing EM-field but non-vanishing A outside; you can calculate the phase shift of the wave function and you'll find that it agrees with 2) you haven't constructed the apparatus and you can't switch anything on and off. All there is is an interference pattern. You observe that this pattern deviates from the usual expectation, so it's not symmetric w.r.t. the symmetry axis of the experimental setup. OK, now you may that the apparatus has been constructed as described above (1) and that there's a current inside which produces the A-field. Fine. But it could be as well that nothing is inside, except for a singularity, a one-dim. line removed from R³ - and that due to some unknown reason there is an A-field which is pure-gauge, locally flat, w/o any EM-field, w/o any energy stored in the A-field etc. w/o looking into the solenoid you can't distinguish between 1) the solenoid with a current and 2) the solenoid wrapping vacuum w/o any current, a one-dim. singularity, and a source-less A-field. The interference patterns are identical. Physically (2) seems to be unacceptable, but as I said: w/o looking into the solenoid and inspecting the apparatus in detail there is no way to distinguish between (1) and (2). This is fine for me: mathematical I can do it either way, and physically I know what the clever guys in the lab have constructed. No problem for me. I wonder why in the quantum physics forum nobody has mentioned the non-locality of the effect. Probably that's all my quibble amounts to. I mentioned it a couple of times; the loop integral = the holonomy related to the winding number is nothing else but a non-local observable due to the non-trivial topology of the vector bundle. In #22 I wrote "The A-field ... is pure-gauge locally, but not globally; that's what's measured by the loop integral"; in #77: "in other words A is pure gauge locally i.e. A ~ A' = 0 but not globally"; in #82 you wrote "... this topological non-triviality, which can be expressed as a number , say, is a global topological invariant and so is not expressible by a local formula"; #83: "... F=dA is granted locally but not globally, so F and A may required patching, cutting out singularities etc. In the case of the A-field as described above one has to remove r=0. On this R³ / R the relation F=dA=0 is valid, so #not globally' means 'not on R³ but on R³ / R'".
{"url":"http://www.physicsforums.com/showthread.php?p=4186686","timestamp":"2014-04-20T23:39:23Z","content_type":null,"content_length":"91814","record_id":"<urn:uuid:28ee7eae-0cac-490e-93b5-c0f9443ac447>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH...somewhat urgent Posted by Lisa on Sunday, November 22, 2009 at 5:36pm. I have three problems I would like checked please.1: I need to graph and find the y intercept of 6y+5x=-18 First I want to solve for y therefore I will let x be 0: 6y+5(0)=-18 6y=-18...I will now divide y=-3 Therfore the first set to plot would be (0-3) Now I solve for x, so I will let y be 0: 6(0)+5x=-18 My second pair to plot is (-18/5,0) the y intercept is (0,-3) The second problem is the same concept as one, so I will not work it out as much herethe problem is 4x-12=3y the y intercept is (0,-4) The third problem is finding slope: x+4y=8 Did I do these right? • MATH...somewhat urgent - Reiny, Sunday, November 22, 2009 at 5:46pm first one is correct second one: in 4x-12+12=3(0)-12 you added 12 on one side, but subtracted on the other. should have been 4x = 12 x = 3, so x-intercept is (3,0) last one: change x+4y=8 to 4y = -x + 8 y = (-1/4)x + 2 comparing it to y = mx + b we can see the slope is -1/4 your intercepts for the last one are correct. • MATH...somewhat urgent - Lisa willsomeone look at this please, Sunday, November 22, 2009 at 5:51pm Thank you • MATH...somewhat urgent - ashleigh, Friday, February 5, 2010 at 3:23pm y= 2-1/3x Related Questions Math-due today, need a check - Please help me with this, below is a question I ... Math/Algebra - Please help me with this, below is a question I have to answer…... Algebra 2 - would some one please check these for me 1. divide x^2-36y^2 over 5x... College Algebra - okay so using y = (-5/6)x + 3 how do i find the corresponding ... College algebra - Okay theres another part to this equation and I'm not sure if ... Algebra - Question 1 Solve using the multiplication principle then graph. -15&gt... Math(Please help) - Write an equation of the line satisfying the given condition... Quadratic Equations and Functions - I started having trouble on part (c), and I ... Is this right? - I have to determine the slope and y- intercept of this ... Math - Write an equation of the line satisfying the given condition. a) x ...
{"url":"http://www.jiskha.com/display.cgi?id=1258929371","timestamp":"2014-04-18T18:56:46Z","content_type":null,"content_length":"9928","record_id":"<urn:uuid:ded45bdf-4781-45a6-a71c-d8f781e4c047>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Geochemistry quick tips: Activity, Molality, Normality calculations for most common water quality parameters. Geochemistry quick tips: Activity, Molality, Normality calculations for most common water quality parameters. Simple thing that might be useful! Just my tips to all young geochemist starting their job and looking at lots of water quality data with different water analysis!! Unit conversion – Have a cheat – sheet ready for all common unit conversions! You will need it for almost any type of calculations. Example: Say our example species is Fe+2. Fe+2 is reported as 4.25 mg/L in laboratory reports. Now lets calculate the activity of Fe+2. Activity of Fe+2 = (weight in mg/L) / (Molecular weight x 1000) = 4.25 / (55.847*1000) = .0000761 = 10^ (-4.119) always remember to convert your results to log units. It will make your calculations tremendously easier!!! Also remember, activity is unitless. Well, here I am attaching a spreadsheet to calculate “activity”, “molality”, “normality”, “moles” for most important water quality parameters : Fe+2, Fe+3, Na+, K+, Mg++, HCO3-, CO3–, SO4–, Cl-. Just change the “Example value” to the actual number you have in your lab analysis sheet. For our purpose we used example concentration of 1 mg/L for all parameters. So, change 1 to the exact value of each parameter. Also note that we did not use “activity co-efficient” in our calculations assuming very dilute solution at low temperatures. Activity, molality, normality calculations-Download Spreadsheet This is just a screenshot of the spreadsheet I am attaching here. You only need to change the “orange” shade area to the actual numbers. Well hope you can use the spreadsheet. Now lets go to the next step. Note: Please let me know if you have any comments. Use the spreadsheet for free. 3 thoughts on “Geochemistry quick tips: Activity, Molality, Normality calculations for most common water quality parameters.” 1. Your spreadsheet has “morality” as one of the column headers! Didn’t realise geochemists had an algorithm for assessing moral values! 2. how do you convert to log units? 3. on a related but separate subject….whats the best way to graph a variety of parameters with a wide range of concentrations (for example converting parameters from mg/L to ug/L results in a range of numbers too wide to graph). also how do you convert concentration differences that result in a negative value into a log unit?
{"url":"http://www.coalgeology.com/geochemistry-quick-tips-activity-molality-normality-calculations-for-most-common-water-quality-parameters/30/","timestamp":"2014-04-18T08:03:39Z","content_type":null,"content_length":"39706","record_id":"<urn:uuid:270be927-bbda-4c76-80de-2cecbd010c70>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Comment on I'm using some because the parent used sum. And s/he's using sum because s/he is emulating tye's addition of booleans. You're right that !! screams boolean. However, that's not what is happening here. We are adding numbers here. Let me ask you what "true + false + true" is. That's just nonsensical. Programmers have been getting away with it for years because computers only deal with numbers - whether that's boolean, character, or, oddly enough, numerical. But, abstractly, it makes no sense. You can't add booleans. Boolean algebra consists of operators such as not, and, or, xor, nand, and nor. (I may have missed some - it's been a few years since I've dealt that directly with boolean algebra in the theoretical sense.) Addition, subtraction, multiplication, division - all of these are nonsensical in the boolean world. In fact, the multiplication symbol (the middle-dot, ·) was reused for the "and" operator, and the addition symbol (+) was reused for the "or" operator, IIRC (which I may not recall correctly...). And, as in mathematical algebra, the multiplication symbol is optional between terms, further reusing the sigils. Therefore, adding boolean values is just confusing. The answer to the question above is that "true + false + true" equals "true" (numerically: 1 || 0 || 1) - and the OP wants this to be a count of 2. Which, if my memory serves, the OP wants "a nor b nor c" Unfortunately, the nor solution doesn't scale past three. grep, unlike sum, uses boolean context. So the choices are to either sum numbers (using the ternary operator, for example), or to grep a list in scalar context. Adding booleans makes me question whether the author knows what s/he's doing, or is just copying what someone else is doing - whether s/he's too clever (in taking advantage of undefined behaviour) or not clever enough by half (in just copying something that works without understanding that it's undefined behaviour). Posts are HTML formatted. Put <p> </p> tags around your paragraphs. Put <code> </code> tags around your code and data! Read Where should I post X? if you're not absolutely sure you're posting in the right place. Please read these before you post! — Posts may use any of the Perl Monks Approved HTML tags: a, abbr, b, big, blockquote, br, caption, center, col, colgroup, dd, del, div, dl, dt, em, font, h1, h2, h3, h4, h5, h6, hr, i, ins, li, ol, p, pre, readmore, small, span, spoiler, strike, strong, sub, sup, table, tbody, td, tfoot, th, thead, tr, tt, u, ul, wbr Outside of code tags, you may need to use entities for some characters: For: Use: & &amp; < &lt; > &gt; [ &#91; ] &#93; Link using PerlMonks shortcuts! What shortcuts can I use for linking? See Writeup Formatting Tips and other pages linked from there for more info. Log In^? How do I use this? | Other CB clients Other Users^? Others cooling their heels in the Monastery: (9) As of 2014-04-17 11:27 GMT Find Nodes^? Voting Booth^? April first is: Results (446 votes), past polls
{"url":"http://www.perlmonks.org/?parent=502316;node_id=3333","timestamp":"2014-04-17T11:27:53Z","content_type":null,"content_length":"21667","record_id":"<urn:uuid:a0ce123a-534d-436a-8652-d5d9ea20aba6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
7 Utilities Signals and Systems provides a variety of operations that are useful for signal processing, but are extensions of the included functionality that are difficult to classify. These utilities provide features such as connectivity with other packages, enhancements to TeX formatting, polygon manipulation, and more. 7.1 Polygon Manipulation The utilities in Signals and Systems for manipulating polygons are most useful in the context of two-dimensional signal processing. Many of these routines are used internally by decimation system design routines and analysis of downsampling. Most of the routines can only be used with two-dimensional convex polygons. Computations with polygons. Several simple geometric manipulations with polygons are included with these utilities. These determine whether a polygon is convex, whether a point is inside a given polygon, and the area of a given polygon. InsidePolygonQ includes the option BoundaryInside to determine whether a point on the boundary is considered to be inside the polygon or not. A function for finding antipodal pairs also exists. An antipodal pair is an edge of a polygon and a corresponding vertex, for which a line through the edge and another line through the vertex that is parallel to the edge do not cross through any part of the polygon, but remain tangent. Any given edge may have more than one corresponding antipodal vertex, but the algorithm returns only one (the first found by proceeding clockwise about the polygon from the vertex of the previous antipodal pair). The polygon must be convex; this guarantees that an antipodal pair exists for each edge. First, verify that the signal processing functions are accessible. Is the polygon convex? Polygon[{{-1, -1}, {0,0}, {0,1}, {1, -1}}] Is the point Polygon[{{-1, -1}, {0, 1}, {1, -1}}], Point[{0, 0}] The result is easily verified. {Polygon[{{-1, -1}, {0, 1}, {1, -1}}], GrayLevel[0.8], PointSize[.05], Point[{0, 0}]} This defines a polygon we will use in some computations. In[5]:=apoly = Polygon[ {{-1, 0}, {0, -.5}, {1, 0}, {1,1}, {.5, 1.5}, {-.5, 1.5}, {-1, 1}} Here is the area of the polygon we just created. Here are the antipodal pairs generated from the polygon we created. In[7]:=appairs = AntipodalPairs[apoly] Here we see two lines generated from one of the antipodal pairs. Note how the polygon lies completely between the two lines. One of the lines runs through an edge of the polygon, while the other passes through the corresponding antipodal vertex. These lines are parallel by definition. Hue[0], Thickness[0.02], Function[{vec, p1, p2}, {Line[{p1 - 2 vec, p1 + vec}], Line[{p2 - vec, p2 + vec}]} ] @@ {#[[1,1,1]] - #[[1,1,2]], #[[1,1,1]], #[[2,1]]}&[First[appairs]] Manipulations with polygons that return polygons. Several other of the polygon operations return new polygons. PolygonToTriangles breaks up the polygon in question into triangles, which is useful for a number of computations. This operation can be performed on polygons in three dimensions (though not general polyhedra). Note that the triangulation is not the same as that performed by some Mathematica graphics routines. The triangulation is defined around the "center of mass" of the polygon. ConvexPolygonIntersection determines the mutual intersection of the input convex polygons. The input can contain lists of polygons. The minimum rectangle surrounding a given polygon can be found by MinimalEnclosingRectangle. Note that this rectangle is not necessarily unique. The minimum arbitrary parallelogram circumscribing a polygon can be found via MinimalEnclosingParallelogram. Where MinimalEnclosingRectangle uses a quadratic-time algorithm, MinimalEnclosingParallelogram uses a linear-time method. A plane can be mapped so that each point on the plane is mapped into a rectangle. This is equivalent to tiling the rectangle to cover the plane. It is used in two-dimensional signal processing to represent the fundamental frequency tile. The function ConvexPolygonModRectangle transforms a convex polygon (or list of such polygons) into a list of polygons that lie within the bounds of the fundamental tile. The tile is represented by a pair of points giving the lower-left and upper-right corners of the rectangle, as in {{xmin, ymin}, {xmax, ymax}}. If it is not specified, it is assumed to be over the range {{-Pi, -Pi}, {Pi, Pi}}. Here is a set of triangles defining the polygon we gave earlier. When visualized, we note that the triangles are formed by joining the vertices of the polygon to its centroid. Hence, a convex polygon is required. {GrayLevel[.8], apoly, GrayLevel[0], Thickness[.02], %/.Polygon[{a__, b_}] :> Line[{b, a, b}] We now find the intersection of the input polygons. In[11]:=intersect = ConvexPolygonIntersection[ Polygon[{{-.5, -.5}, {-.5,.5}, {.5, .5}, {.5, -.5}}], Polygon[{{0,0}, {.8, 1}, {1, .8}}] We can examine the intersection graphically. GrayLevel[.4], Polygon[{{0,0},{1,0},{0,1}}], GrayLevel[.6], Polygon[{{-.5, -.5}, {-.5,.5}, {.5, .5}, {.5, -.5}}], GrayLevel[.8], Polygon[{{0,0}, {.8, 1}, {1, .8}}], GrayLevel[0], intersect Create another polygon. In[13]:=poly = Polygon[{{0,0}, {0.5, 0.7}, {0.7, 0.8}, {1, 0.8}, {0.4, 0.2}}]; This is the minimum rectangle around the input polygon. In[14]:=mrect = MinimalEnclosingRectangle[ This is the minimum paralellogram around the polygon. In[15]:=mpara = MinimalEnclosingParallelogram[ We can compare visualizations of the polygons. Graphics[{mrect, GrayLevel[.8], poly}, AspectRatio -> Automatic, PlotRange -> {{-0.2, 1.2}, {-0.2, 1.2}}], Graphics[{mpara, GrayLevel[.8], poly}, AspectRatio -> Automatic, PlotRange -> {{-0.2, 1.2}, {-0.2, 1.2}}] Comparison of the areas shows that the minimal parallelogram is smaller and will lead to better compression ratios than the minimal rectangle. In[17]:={PolygonArea[mrect], PolygonArea[mpara]} This polygon does not lie entirely within the specified rectangle. Hence, it is broken into pieces that are shifted to lie within the rectangle. Polygon[{{0.3, 0.6}, {0.5, 1.4}, {0.6, 0.6}}], {{0,0}, {1,1}} This visualization shows the rectangle, the wrapped parts of the triangle, and the outline of the original shape. GrayLevel[0.8], Rectangle[{0,0}, {1,1}], GrayLevel[0.3], %, GrayLevel[0], Line[{{0.3, 0.6}, {0.5, 1.4}, {0.6, 0.6}, {0.3, 0.6}}] }], AspectRatio -> Automatic] Functions for generating distinct colors. Two additional functions are included to provide utility in the creation of graphics. These create lists of distinct colors of the specified type. The colors are not random, but instead are spaced as far apart as possible in the color space, while also remaining distinct from black and white. The colors can be used in plot styles. Here are five different RGB colors. This creates eight different CMYK colors. 7.2 Linking to Ptolemy Ptolemy is a freely distributable signal processing design and simulation package from the University of California at Berkeley. It includes a graphical programming environment, hierarchical dataflow representations of signal processing systems, and DSP assembly code generation among many other features. Signals and Systems provides functionality to export a system description to Ptolemy, so that it may be used in various sorts of simulation and code generation. Commands to write the Ptolemy form of a system. PtolemyForm is a formatting command that represents the signal processing expression in the Ptolemy language. This expression can be inserted into Ptolemy code for use in simulations. The command WritePtolemySimulation creates an entire ready-to-run simulation, with specified variables used as inputs that will increment over their specified ranges during the simulation. Here is a simple system, formatted for use in Ptolemy. Shift[5, n][ Upsample[3, n][ Interleave[n][Sin[n], Cos[n]] WritePtolemySimulation will take a filename in the form of a string or an already opened stream as its first argument. The PtolemyForm of the expression is written to the stream with additional information, such as a header, settings for inputs and outputs, etc., added to the stream. Several options can be used to control this further. Options for WritePtolemySimulation. If the PtolemyHeader option is specified, a particular header is loaded. This should be the filename of the desired header file. If it is not given, then the header for the specified version is loaded in the simulation. Currently, only a header for use with Ptolemy 0.5 is included with Signals and Systems. The Justification option acts as it does in many other functions. You can use the Plot option to specify that the generated program contain code to direct the simulation output to a Ptolemy plotting routine. For more information about Ptolemy, examine the FTP site ptolemy.eecs.berkeley.edu, use the World Wide Web at http://ptolemy.eecs.berkeley.edu, or read the comp.soft-sys.ptolemy news group. 7.3 TeX Formatting Signals and Systems has enhanced the standard Mathematica facility for TeX formatting. Formatting commands for all the included signal processing operators allow you to easily place the results of computations in publication-quality documents. Generating TeX formatting for signal processing expressions. This is the TeX form of a signal processing expression. ZTransform[n, z][Shift[1, n][ Upsample[3, n][Sin[2 n]]]] Out[23]//TeXForm={\cal Z}_{{n\rightarrow z}}(\Muserfunction{Shift}_{1,n}( \uparrow_{3,n}(\sin (2\,n)))) When placed in a TeX document, the preceding example will look like the following formula. This conversion can be used by hand or in conjunction with functions like Splice to quickly prepare visually appealing documents from your computations. You may need to take advantage of the TeX style sheet notebook.sty for the TeX macros to render properly. This style sheet is included with all Mathematica distributions. 7.4 Miscellanea There are a variety of functions in Signals and Systems that are not easily classified as being in a particular category. These functions are individually documented in this section. A signal simplification function. SimplifySignal applies a variety of rules and the standard Mathematica function Simplify to attempt to reduce the expression to primitive signals and systems. The expression is transformed by these rules until it stops changing. Note that unlike Simplify, it does not attempt to determine a signal with minimal leaf count, but instead applies heuristic rules in an attempt to attain a more convenient form, which may not actually have a smaller leaf count. This is particularly true when rewriting expressions with the assumption that some symbols are real-valued. It is tuned to simplify the output of transforms. Here is a signal that can be simplified considerably. Cos[t] Sinc[t] - Sinc[2 t] + a ContinuousPulse[12, t - 2] * Periodic[8, t][ContinuousPulse[2,t]] Options to SimplifySignal. Two options can be employed that are specific to SimplifySignal. Any options valid for Simplify can also be passed through. The Justification option allows some information about intermediate steps in the manipulation of the signal to be printed out. You can use the RealVariables option to note that certain symbols are real-valued; special manipulation rules are then brought into play that handle this particular case. Here is a signal being simplified and displayed with the steps taken in the process. Downsample[3, n][ Upsample[5, n][5 DiscreteDelta[n]] Justification -> All Rules specific to a real variable are used here. Note that the resulting expression is not actually simpler than the original expression, but may be more useful for certain manipulations. Sin[a t]^3/t^2 + 2 Sign[3 t] + 2, RealVariables -> {t} Programmer's utilities. IntegerValued is very similar to IntegerQ. It returns True if the input expression is an integer, and False if it is a number that is not an integer. Unlike IntegerQ, it will remain unevaluated if the input is a symbolic expression. This is an integer. This may or may not be an integer, so the expression remains unevaluated. IntegerVectorQ is a simple enhancement to the standard Mathematica test functions. It returns True if its argument is an integer vector, and False otherwise. Note that IntegerVectorQ is functionally equivalent to VectorQ[expr, IntegerQ]. This input does not contain an integer vector. {a, 5, 3, 2.3, 1} Here is a valid integer vector. Table[Random[Integer, {1, 10}], {10}] Formatting exponentials. In Mathematica, negative exponentials that are factors in a product are generally formatted so they appear in the denominator of the expression. This is contrary to the formatting found in many engineering texts. As a convenience, Signals and Systems provides formatting for expressions of this type in the more usual engineering form. This is a global state set by EngineeringExponentialFormat[On]. Formatting can be returned to the usual Mathematica form by EngineeringExponentialFormat[Off]. The current state of the switch can be determined by giving an empty Here is the current state of exponential formatting. Here is an expression with negative exponents in products. In[32]:=expr = Plus @@ Table[n 10^(a n), {a, -3, 3}] This activates the formatting code. We see that the exponentials are now given as negatives. Comparison of expressions. To assist in forming an interactive experience with courseware based on Signals and Systems, we find the ExpressionMatch function. It is designed to enhance automated checking of results and determine, in some structural sense, how far a result given by a student diverges from the desired answer. It is given two expressions: the student's answer, and the target expected output. It returns the student's answer with False[n] substituted for any part of the input that differs from the expected Let us suppose that this is the input a student gives for a particular problem. In[35]:=ans = ZTransform[(n1 + n2)! * (4/5)^n1 (3/7)^n2 * DiscreteStep[n1, n2]/ (n1! n2!), {n1, n2}, {z1, z2} Here is a comparison with an expected result. In a realistic situation, the target result would probably be acquired from a precomputed collection of expressions. ZTransform[(n1 + n2)! * (4/5)^n1 (2/7)^n2 * DiscreteStep[n1, n2]/ (n1! n2!), {n1, n2}, {z1, z2} We can employ various techniques for examining the result. Here we count the instances of False that appear in the result, and we determine their positions. In[37]:={Count[%, _False, Infinity], Position[%, _False]} Options to ExpressionMatch. There are two options to ExpressionMatch. Tolerance lets you specify how close two numbers must be to be treated as identical for the purposes of this comparison. The value given is the percentage difference between the values. The default is 0.05. The Variables option allows certain symbols in the target expression to be matched by any symbol in the same position in the input expression. This allows the student to use different variable names than the instructor. This expression does not match the input. Exp[.36 x], Exp[.34 x] Now the numeric value is within tolerance. Exp[.36 x], Exp[.34 x], Tolerance -> 0.1 Here the student used the symbol y instead of z. Exp[3 y I], Exp[3 z I], Variables -> {z} The Variables option only allows symbols to be matched, not arbitrary expressions. Exp[3 Sin[z] I], Exp[3 z I], Variables -> {z}
{"url":"http://reference.wolfram.com/legacy/applications/signals/Utilities.html","timestamp":"2014-04-20T16:27:28Z","content_type":null,"content_length":"60416","record_id":"<urn:uuid:31c8d912-e6ee-4028-9ed6-7184d0a75b47>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
The komplex plane The e^x joke The cocky exponential function e^x is strolling along the road insulting the functions he sees walking by. He scoffs at a wandering polynomial for the shortness of its Taylor series. He snickers at a passing smooth function of compact support and its glaring lack of a convergent power series about many of its points. He positively laughs as he passes |x| for being nondifferentiable at the origin. He smiles, thinking to himself, "Damn, it's great to be e^x. I'm real analytic everywhere. I'm my own derivative. I blow up faster than anybody and shrink faster too. All the other functions suck." Lost in his own egomania, he collides with the constant function 3, who is running in terror in the opposite direction. "What's wrong with you? Why don't you look where you're going?" demands e^x. He then sees the fear in 3's eyes and says "You look terrified!" "I am!" says the panicky 3. "There's a differential operator just around the corner. If he differentiates me, I'll be reduced to nothing! I've got to get away!" With that, 3 continues to dash off. "Stupid constant," thinks e^x. "I've got nothing to fear from a differential operator. He can keep differentiating me as long as he wants, and I'll still be there." So he scouts off to find the operator and gloat in his smooth glory. He rounds the corner and defiantly introduces himself to the operator. "Hi. I'm e^x." "Hi. I'm d / dy."
{"url":"http://www.komplexify.com/math/jokes/TheEToTheXJoke.html","timestamp":"2014-04-20T20:55:26Z","content_type":null,"content_length":"3945","record_id":"<urn:uuid:af13d76a-9376-4eae-9188-6ca3378d1f55>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Formal Solution to Systems of Interval Linear or Non-Linear Equations - Reliable Computing "... Modal interval theory is an extension of classical interval theory which provides richer interpretations (including in particular inner and outer approximations of the ranges of real functions). In spite of its promising potential, modal interval theory is not widely used today because of its origin ..." Cited by 1 (1 self) Add to MetaCart Modal interval theory is an extension of classical interval theory which provides richer interpretations (including in particular inner and outer approximations of the ranges of real functions). In spite of its promising potential, modal interval theory is not widely used today because of its original and complicated construction. The present paper proposes a new formulation of modal interval theory. New extensions of continuous real functions to generalized intervals (intervals whose bounds are not constrained to be ordered) are defined. They are called AE-extensions. These AE-extensions provide the same interpretations as the ones provided by modal interval theory, thus enhancing the interpretation of the classical interval extensions. The construction of AE-extensions strictly follows the model of classical interval theory: starting from a generalization of the definition of the extensions to classical intervals, the minimal AE-extensions of the elementary operations "... We investigate some abstract algebraic properties of the system of intervals with respect to the arithmetic operations and the relation inclusion and derive certain practical consequences from these properties. In particular, we discuss the use of improper intervals (in addition to proper ones) and ..." Add to MetaCart We investigate some abstract algebraic properties of the system of intervals with respect to the arithmetic operations and the relation inclusion and derive certain practical consequences from these properties. In particular, we discuss the use of improper intervals (in addition to proper ones) and of midpoint-radius presentation of intervals. This work is a theoretical introduction to interval arithmetic involving improper intervals. We especially stress on the existence of special “quasi”-multiplications in interval arithmetic and their role in relevant symbolic computations. 1. "... In order to describe possible applications of modal interval analysis in early engineering design, let us start by briefly explaining what is interval computation and what is modal interval analysis. Direct and indirect measurements: uncertainty is ubiquitous. In practice, how do we obtain the numer ..." Add to MetaCart In order to describe possible applications of modal interval analysis in early engineering design, let us start by briefly explaining what is interval computation and what is modal interval analysis. Direct and indirect measurements: uncertainty is ubiquitous. In practice, how do we obtain the numerical values of different physical quantities? For some quantities, we can obtain these values directly, either by performing a measurement or by eliciting the value from an expert. Measurements are never absolutely accurate; as a result, the result ˜x of the measurement is somewhat different from the actual (unknown) value x of the desired physical quantity: ˜x ̸= x. In other words, from measurements, we can only determine the value x with uncertainty: the approximation error ∆x def = ˜x − x is, in general, different from 0. Expert estimates are usually even less accurate than measurements, so the values ˜x obtained from the experts also always contain uncertainty. "... In Modal Intervals Revisited Part 1, new extensions to generalized intervals (intervals whose bounds are not constrained to be ordered), called AE-extensions, have been defined. They provide the same interpretations as modal intervals and therefore enhance the interpretations of classical interval e ..." Add to MetaCart In Modal Intervals Revisited Part 1, new extensions to generalized intervals (intervals whose bounds are not constrained to be ordered), called AE-extensions, have been defined. They provide the same interpretations as modal intervals and therefore enhance the interpretations of classical interval extensions (for example, both inner and outer approximations of function ranges are in the scope of AE-extensions). The construction of AE-extensions is similar to the cnstruction of classical interval extensions. In particular, a natural AE-extension has been defined from Kaucher arithmetic which simplified some central results of modal interval theory. Starting from this framework, the mean-value AE-extension is now defined. It represents a new way to linearize a real function, which is compatible with both inner and outer approximations of its range. With a quadratic order of convergence for real-valued functions, it allows one to overcome some difficulties which were encountered using a preconditioning process together with the natural AE-extensions. Some application examples are finally presented, displaying the application potential of the mean-value AE-extension. "... A semantic tolerance modeling scheme based on generalized intervals was recently proposed to allow for embedding more tolerancing intents in specifications with a combination of numerical intervals and logical quantifiers. By differentiating a priori and a posteriori tolerances, the logic relationsh ..." Add to MetaCart A semantic tolerance modeling scheme based on generalized intervals was recently proposed to allow for embedding more tolerancing intents in specifications with a combination of numerical intervals and logical quantifiers. By differentiating a priori and a posteriori tolerances, the logic relationships among variables can be interpreted, which is useful to verify completeness and soundness of numerical estimations in tolerance analysis. In this paper, we present a semantic tolerance analysis approach to estimate size and geometric tolerance stack-ups based on closed loops of interval vectors. An interpretable linear system solver is constructed to ensure interpretability of numerical results. A direct linearization method for nonlinear systems is also developed. This new approach enhances traditional numerical analysis methods by preserving logical information during computation such that more semantics can be derived from variation estimations. 1. "... A significant amount of research efforts has been given to explore the mathematical basis for 3D dimensional and geometric tolerance representation, analysis, and synthesis. However, engineering semantics is not maintained in these mathematic models. It is hard to interpret calculated numerical resu ..." Add to MetaCart A significant amount of research efforts has been given to explore the mathematical basis for 3D dimensional and geometric tolerance representation, analysis, and synthesis. However, engineering semantics is not maintained in these mathematic models. It is hard to interpret calculated numerical results in a meaningful way. In this paper, a new semantic tolerance modeling scheme based on modal interval is introduced to improve interpretability of tolerance modeling. With logical quantifiers, semantic relations between tolerance specifications and implications of tolerance stacking are embedded in the mathematic model. The model captures the semantics of physical property difference between rigid and flexible materials as well as tolerancing intents such as sequence of specification, measurement, and assembly. Compared to traditional methods, the semantic tolerancing allows us to estimate true variation ranges such that feasible and complete solutions can be obtained. 1. "... A semantic tolerance modeling scheme based on generalized intervals was recently proposed to allow for embedding more tolerancing intents in specifications with a combination of numerical intervals and logical quantifiers. By differentiating a priori and a posteriori tolerances, the logic relationsh ..." Add to MetaCart A semantic tolerance modeling scheme based on generalized intervals was recently proposed to allow for embedding more tolerancing intents in specifications with a combination of numerical intervals and logical quantifiers. By differentiating a priori and a posteriori tolerances, the logic relationships among variables can be interpreted, which is useful to verify completeness and soundness of numerical estimations in tolerance analysis. In this paper, we present a semantic tolerance analysis approach to estimate tolerance stack-ups. New interpretable linear and nonlinear constraint solvers are developed to ensure interpretability of variation estimations. This new approach enhances traditional numerical analysis methods by preserving logical information during computation such that more semantics can be derived from numerical results. , 2010 "... This paper deals with systems of parametric equations over the reals, in the framework of interval constraint programming. As parameters vary within intervals, the solution set of a problem may have a non null volume. In these cases, an inner box (i.e., a box included in the solution set) instead of ..." Add to MetaCart This paper deals with systems of parametric equations over the reals, in the framework of interval constraint programming. As parameters vary within intervals, the solution set of a problem may have a non null volume. In these cases, an inner box (i.e., a box included in the solution set) instead of a single punctual solution is of particular interest, because it gives greater freedom for choosing a solution. Our approach is able to build an inner box for the problem starting with a single point solution, by consistently extending the domain of every variable. The key point is a new method called generalized projection. The requirements are that each parameter must occur only once in the system, variable domains must be bounded, and each variable must occur only once in each constraint. Our extension is based on an extended algebraic structure of intervals called generalized intervals, where improper intervals are allowed (e.g. [1,0]). 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3982456","timestamp":"2014-04-17T07:34:14Z","content_type":null,"content_length":"32575","record_id":"<urn:uuid:82a4990b-33e2-4579-8588-8cc4a59f13ec>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Sequences A geometric sequence is a sequence of numbers in which the ratio between consecutive terms is constant. We can write a formula for the n^th term of a geometric sequence in the form a[n] = cr^n, where r is the common ratio between successive terms. Example 1: {2, 6, 18, 54, 162, 486, 1458, ...} is a geometric sequence where each term is 3 times the previous term. A formula for the n^th term of the sequence is Example 2: is a geometric series where each term is –1/2 times the previous term. A formula for the n^th term of this sequence is Example 3: {1, 2, 6, 24, 120, 720, 5040, ...} is not a geometric sequence. The first ratio is 2/1 = 2, but the second ratio is 6/2 = 3. No formula of the form a[n] = cr^n can be written for this sequence. See also arithmetic sequences.
{"url":"http://hotmath.com/hotmath_help/topics/geometric-sequences.html","timestamp":"2014-04-19T09:46:54Z","content_type":null,"content_length":"5145","record_id":"<urn:uuid:5c2a1b6b-dfeb-4415-8d08-e2e7033ee984>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the support of a flat sheaf flat? up vote 6 down vote favorite Note: in the following, all scheme/algebra morphisms should be assumed essentially of finite type. Geometric version: Let $X$ be a scheme flat over $S$ (both noetherian), and let $\mathscr{F}$ be a coherent sheaf on $X$, also flat over $S$. The scheme-theoretic support $\mathfrak{X}$ for $\ mathscr{F}$ is a closed subscheme of $X$. Is it necessarily true that $\mathfrak{X}$ is flat over $S$? Algebraic version: Let $B$ be a flat $A$-algebra (both noetherian), and let $M$ be a finitely generated $B$-module, also flat over $A$. Is it necessarily true that $B/\operatorname{Ann}(M)$ is flat over $A$? Motivation: the only way I know how to visualize a coherent sheaf is to visualize its support, which is a closed subscheme. I justify this by the fact that many of the properties of a coherent sheaf are shared by (the structure sheaf of) its scheme-theoretic support. [S:For instance, they have the same associated points. In case $A$ is a DVR, this even provides a proof for the algebraic version above, since a module is flat over a DVR iff all its associated points map to the generic point.:S] (see Angelo's comment below) This general "visual intuition" tells me that the two (equivalent) statements above should be true. However, I cannot think of a good argument for this. Although it is not really essential to anything I am doing, it is bothering the heck out of me not to know whether this actually works, and distracting me from my other, more "essential" work. Thus, I would appreciate some help here. A positive answer will help me sleep at night (figuratively speaking); a negative answer will, hopefully, give me a useful counterexample against which to test my intuition in the future. Second motivation: If the statement is true, then it provides evidence for a morphism from the Quot scheme to the Hilbert scheme, that--loosely speaking--takes a coherent sheaf to its support. (Thinking about it in these terms may also suggest solutions to mathematicians who--unlike me--have a great deal of experience with Quot and Hilbert schemes.) ag.algebraic-geometry ac.commutative-algebra flatness 1 Since your question is really about intuition. Let me suggest that another way to visualize $\mathcal{F}$ or $M$ is in terms of its associated primes. These are prime ideals $p_i$ for which $M$ can be built up by a successive extension of $B/p_i$. These define the components of the support plus so called embedded components. It is easy to see, by writing out $Tor$'s, that if the $B/ p_i$'s are $A$-flat, then so is $M$. Hope this helps. – Donu Arapura Feb 10 '12 at 21:10 Surely you mean to say "let $M$ be a finitely generated $B$-module" in the algebraic version? – Konstantin Ardakov Feb 10 '12 at 21:15 Konstantin: Yes, of course. It's now corrected--thanks for pointing out the error. – Charles Staats Feb 10 '12 at 21:22 Donu Arapura: Thanks for your comment; it does help me relate my intuition to rigorous math. If the converse to your last sentence (of more than three words) were also true, then it would provide an easy positive answer to my question. Unfortunately, this converse is not true, since--for instance--$\Bbbk[x,y]/(y)$ is not flat over $\Bbbk[x,y]/(xy)$. – Charles Staats Feb 10 '12 at 21:29 1 By the way, it is not true that a coherent sheaf and its scheme-theoretic support have the same associated points. For example, take $A = k[x]$ and $M = A \oplus k[x]/(x)$. – Angelo Feb 11 '12 at add comment 2 Answers active oldest votes There are many counterexamples to this. Suppose that $S$ is a smooth surface over $\mathbb C$. Let $T \to S$ be a finite morphism from another smooth surface $T$, and consider a up vote 7 factorization $T \to V \to S$, where $V$ is obtained by gluing two points on a fiber. Then $T \to S$ is flat, $V \to S$ is not. Embed $V$ into the product $X$ of a projective space with down vote $S$; as a sheaf $F$ on $X$ take the direct image of the structure sheaf of $T$. Then $F$ and $X$ are flat over $S$, but the scheme-theoretic support is $V$, which is not flat. In case anyone is wondering why $V$ is not flat over $S$: If $V$ were flat over $S$, then $V$ would be Cohen-Macaulay, since the flat pullback of a regular sequence is regular. Since Cohen-Macaulay is equivalent to ($S_n$ for all $n$), it would in particular be $S_2$, and thus satisfy the Hartogs condition. But there exist regular functions defined in a punctured neighborhood of the singular point that cannot be extended over the singular point. (These correspond to local regular functions on $T$ that do not agree on the two points lying over the singular point.) – Charles Staats Feb 12 '12 at 15:03 add comment Here is an algebraic construction. The way I think about it is based on these two facts: 1) when $A$ is regular domain, a module-finite A-algebra is flat iff it is Cohen-Macaulay (CM) of same dimension and 2) there are non-CM domains which admit a CM module of same dimension. Now the concrete construction. Let $B = k[x,y,u,v]$, and we map $B$ onto $C = k[a^4, a^3b, ab^3, b^4]$ which is not CM (so $x$ maps to $a^4$ and $v$ to $b^4$). Let $P \subset B$ be the up vote 7 down vote Let $A = k[x,v]$ and $M = \bar C$, the integral closure of $C$. $M$ is actually $k[a,b]$ which is flat over $A=k[a^4, b^4]$. However, the annihilator of $M$ over $C$ is zero, so the annihilator of $M$ over $B$ is $P$. But if $B/P \cong C$ is flat over $A$, it would be Cohen-Macaulay, contradicting our choice. add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry ac.commutative-algebra flatness or ask your own question.
{"url":"http://mathoverflow.net/questions/88138/is-the-support-of-a-flat-sheaf-flat","timestamp":"2014-04-18T00:27:05Z","content_type":null,"content_length":"64704","record_id":"<urn:uuid:d5f60968-7ba7-44c0-ac5c-36bf594d84b3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Re:Book: symbol table, type system, code generation? anton@mips.complang.tuwien.ac.at (Anton Ertl) 30 May 1998 11:57:00 -0400 From comp.compilers Related articles Book: symbol table, type system, code generation? polymer@drschulz.em.uunet.de (Dr. Oliver Schulz) (1998-05-04) Re:Book: symbol table, type system, code generation? james.grosbach@Microchip.COM (1998-05-07) Re: Re:Book: symbol table, type system, code generation? anton@mips.complang.tuwien.ac.at (1998-05-12) Re: Re:Book: symbol table, type system, code generation? Peter.Damron@Eng.Sun.COM (1998-05-26) Re: Re:Book: symbol table, type system, code generation? anton@mips.complang.tuwien.ac.at (1998-05-30) | List of all articles for this month | From: anton@mips.complang.tuwien.ac.at (Anton Ertl) Newsgroups: comp.compilers Date: 30 May 1998 11:57:00 -0400 Organization: Institut fuer Computersprachen, Technische Universitaet Wien References: 98-05-005 98-05-046 98-05-066 98-05-121 Keywords: books, tools anton@mips.complang.tuwien.ac.at (Anton Ertl) writes: > >In Section 9.2 (on basic block dependence graphs for instruction > >scheduling) the algorithm presented for building the graph compares > >every instruction with every other instruction, resulting in quadratic > >complexity and redundant dependence edges. A linear method that > >introduces fewer redundant edges is known (and even if you don't know > >it, it's not that hard to figure out), but it is not even mentioned. Peter.Damron@Eng.Sun.COM (Peter C. Damron) writes: > The algorithm presented in the book is not very fast, but the problem > is quadratic in general. If you limit the number of resources, and > consider only register type resources, then you can build the DAG in > linear time. But if you introduce either virtual registers (unbounded > resources) or memory resources (and load/store disambiguation), then > the problem becomes quadratic, though approx. linear time in practice. 1) The algorithm in Muchnick's book is always quadratic. And the redundant edges it introduces make the scheduler slower, and (in extreme cases) may cause worse schedules. 2) Given a very realistic restriction on the machine architecture, the use of an unlimited number of registers (or other resources) does not make the problem quadratic. The restriction is that every instruction can only read and write a limited number of registers/resources. This limits the amount of work done per instruction to a constant amount (if you attribute the work of adding flow and antidependences to the reader of the resource), making the algorithm linear in the number of > E.g. to do load/store disambiguation, you may have to compare every > store with every load, since the aliasing relation (or points-to) is > not transitive. That depends on the kind of disambiguation you perform. E.g., if you just divide the references into disjoint classes and don't do any disambiguation within classes (as you may do for a simple type-based disambiguation), you do not have to compare every store with every load: just introduce dependences between a load of a class and the last and the next store of the same class; you also have to introduce a dependence between a store and the last store of the same class. With a little phantasy you can extend this to ANSI C type-based Even if you use a disambiguation, where you have to compare every store to every other memory reference in the basic block, the quadratic component typically has a much lower constant factor (my guess: a factor of 10-100 lower for typical instruction distributions) than the algorithm presented in the book, and the time for building the graphs is probably dominated by the linear component in practice. > Here's a couple of fairly recent references on tree pattern matching. > (I can't find the iburg one at the moment) > %T Simple and Efficient BURS Table Generation The journal version is: author = "Todd A. Proebsting", title = "BURS Automata Generation", journal = toplas, year = "1995", volume = "17", number = "3", pages = "461--486", month = may > %T Engineering a Simple, Efficient Code-Generator Generator This is the iburg paper. - anton M. Anton Ertl Some things have to be seen to be believed anton@mips.complang.tuwien.ac.at Most things have to be believed to be seen Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/98-05-140","timestamp":"2014-04-19T17:11:08Z","content_type":null,"content_length":"8895","record_id":"<urn:uuid:3b2d1bad-0b86-4682-b050-3f207a408b47>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Stat attack- Fastest Laps - Racing Comments Archive If anyone is interested: Fastest laps this year with two races to go are split between 10 different drivers... From looking at previous seasons I can only see that 1982 equals this with 10 different drivers picking up fastest laps also. I haven't looked further back accept for checking the competitive 1975 season which also equals it at 10. Anyone else feel free to prove otherwise (I havent got time!) I think we could have an all-time record by the end of this year if Kimi steps up to his usual Fastest lap scoring... F-laps so far in 2009: Button (2) Barrichello (2) Vettel (2) Alonso (2) Webber (2)
{"url":"http://forums.autosport.com/topic/116502-stat-attack-fastest-laps/","timestamp":"2014-04-17T07:37:44Z","content_type":null,"content_length":"219156","record_id":"<urn:uuid:e98d1b9f-d962-4c36-bef5-3464ed80170b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
[BioC] Multiple test question in micrarray- FDR Naomi Altman naomi at stat.psu.edu Mon Dec 15 01:20:44 CET 2008 I want to reiterate - a q-value is related to FDR. This is the expected percentage of false positives in your gene list if you reject at this p-value or less. It is not the statistical significance of any gene on your gene list. There is however, a Bayesian interpretation that says it is the posterior probability that the gene really differentially expressed. If you are Bayesian, you will always update your posterior with any new information that arises, such as the results of your PCR study. At 02:38 PM 12/14/2008, Wayne Xu wrote: >Dear Naomi, >I may have a silly question. I read a few papers on microarray >multiple test, I understood what points they were trying to make. >But I still have doubts about it. Since now many journal reviewers >require the FDR for microarray differential expresses genes in >manuscripts, I really want to clear my doubts. >1). The mathematics model is different from the biology model: >The typical math model to bring up the multiple test issue is >following example: 20 balls in a box with 1 in red and 19 in blue. >The possibility of picking up the red ball from the box each time is >1/20, i.e 0.05. If draw 20 times, the chance is 0.05 multiplied by 20 is 1. >Suppose the red represents false positive, if draw one time the FDR >is 0.05, if 20 times then FDR is 1. People bring this multiple test >issue into microarray data analysis. But in microarray, at least two >aspects are different from this math model: >a). The raw P values are determined by the expression values of >samples, not affected by the total number of genes. So it is >different from above example of 1 out of 20 is 0.05. >b). Pick up a ball and then put it back to the box, you have chance >to pick up the exactly same ball twice or more. But in microarray, >each genes are tested individually at the same time, and each gene >only tested exactly once. >They are obviously different. If this math model is the only reason >that brought up the multiple test issue in microarray, it may be a >misleading (I may be silly, since no one else doubts about multiple >test in microarray?) >2). Not make biological sense: >Suppose a gene called XYZ has a raw P value of 0.00001 in two group >T test, and it was validated by biological test, e.g. RT-PCR. If the >micoarray chip has 40,000 genes, then by whatever adjustment FDR >method, the adj P-value may be 0.4 or lower or higher. If I use FDR >cutoff 0.1, this XYZ gene has higher FDR and is not in my interest >positive gene list. >OK, now I play a math game, filter gene by variance or other, shrink >the gene list to 5000 (since XYZ gene has low P value, suppose it is >within the 5000). Then the XYZ has low FDR and in my interest >differential gene list. But this is just a math game! >The biological reality is XYZ is positive, this positive is >determined by, for example 4 control samples and 4 treatment >samples, the mean may be big different, and within group variance is >very small. and RT-PCR validated. This reality can not be changed by >whatever number of genes to be tested. The raw P value is close the >biological reality, and it is good to represent the biological >reality. The multiple test here just make you feel happier but not a >biological sense. >FDR is a very useful term in many biological cases. But it seems >not a good example here for microarray? >Please help to clear it up. >Thank you, >Naomi Altman wrote: >>Remember that FDR is a rate - i.e. the expected false discovery rate. >>If the set of genes is changeds, FDR will change because the >>comparison set is different. This is NOT the same as a p-value, >>which depends only on the value of the current test statistic. >>The same thing happens with FWER, because these methods control the >>probability of making at least one mistake, which clearly depends >>on which set of tests are performed. >>At 03:11 PM 12/13/2008, Sean Davis wrote: >>>On Sat, Dec 13, 2008 at 12:36 PM, Wayne Xu <wxu at msi.umn.edu> wrote: >>> > Hello, >>> > I am not sure this is a right place to ask this question, but it is about >>> > micrarray data analysis: >>> > >>> > In two group t test, the multiple test Q values are depending >>> on the total >>> > number of genes in the test. If I filter the gene list first, >>> for example, I >>> > only use those genes that have1.2 fold changes for T test and >>> multiple test, >>> > this gene list is much smaller than the total gene list, then >>> the multiple >>> > test q values are much smaller. >>> > >>> > Do you think above is a correct way? People who do not do that way may >>> > consider the statistical power may be lost? But how much power >>> lost and how >>> > to calculate the power in this case? >>>No, you cannot filter based on fold change. However, you can filter >>>based on variance or some other measure that does not depend on the >>>two groups being compared. Anything that filters genes based on >>>"knowing" the two groups will lead to a biased test. Remember that >>>filtering removes genes from consideration from further analysis. >>>For further details, there are MANY discussions of this topic in the >>>mailing list. >>> > When people report multiple test Q values, they usually do not >>> mention how >>> > many genes are used in this multiple test. You can get different Q values >>> > (even use the same method, e.g. Benjamin and Holm adjust >>> method) in the same >>> > dataset. Then how can it make sense if the same genes have different Q >>> > values? >>>A good manuscript should describe in detail the preprocessing and >>>filtering steps, the statistical tests used, and the methods for >>>correcting for multiple testing. You are correct that many papers do >>>not do so. >>>As for different q-values in the same dataset using different methods, >>>it is important to note that one should not do an analysis, get a >>>result, and then, based on that result, go back and redo the analysis >>>with different parameters to get a "better" result. It is very >>>important that each step of an analysis (preprocessing, filtering, >>>testing, multiple-testing correction) be justifiable independent of >>>the other steps in order for the results to be interpretable. >>>Bioconductor mailing list >>>Bioconductor at stat.math.ethz.ch >>>Search the archives: >>Naomi S. Altman 814-865-3791 (voice) >>Associate Professor >>Dept. of Statistics 814-863-7114 (fax) >>Penn State University 814-865-1348 (Statistics) >>University Park, PA 16802-2111 >Bioconductor mailing list >Bioconductor at stat.math.ethz.ch >Search the archives: Naomi S. Altman 814-865-3791 (voice) Associate Professor Dept. of Statistics 814-863-7114 (fax) Penn State University 814-865-1348 (Statistics) University Park, PA 16802-2111 More information about the Bioconductor mailing list
{"url":"https://stat.ethz.ch/pipermail/bioconductor/2008-December/025557.html","timestamp":"2014-04-19T10:02:50Z","content_type":null,"content_length":"12542","record_id":"<urn:uuid:f8d603ce-6b1e-438a-999e-c3f4479130de>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 214 Thank you very much. This helps a lot. Marcus, age 10 Why is Venus hotter than Mercury even though Mercury is closest to the Sun? How would day and night be different if Earth's axis were not tilted? Math: Geometry The circumference of the base of a cylinder is 16 cm. The height of the cylinder is 10cm. a. What is the surface area of the cylinder? b. What is the volume of the cylinder? Need help on this one. I have the formula but I'm a little confused because I don't know what t... Kaplan University The Linear equation y=6x-5 graphs as a horizontal/vertical/diagonal line ( choose the correct label and type it below) What is the relationship between shapes and shapes? What is the relationship between shapes and other shapes media analysis Just some sources of info, maybe ideas, i have to do an in-class essay and im having a bit of trouble locating information media analysis Who is a jounalist responsible to, they're employer or the viewing public? I also need particular examples of when news has been withheld for the good of the public or vice-versa for profit/ratings. if you can help then thanks :-) 7th grade math Thank you DrBob222 I am having lots of problems with this homework 7th grade math place the operation signs (+,-,x, divide) in the order that makes each sentence true 18__6__2=15 social studies I need names of twenty famous people in world history, who are not americans. Here is web site that will not get 20 but will get close. The disadvantage is that all of them are from or lived in a small town AND it was years ago. You may want newer faces. You can add the names ... 4th Grade math Solve. I have no digits the same. My thousands digit is less than my tens digit. My hundreds digit is twice my ones digit. If you add my digits, the sum would be 25. Who could I be? Could I be 2 numbers? We can not figure this out. Can someone help please. Let's find out M... mis padres y yo ____ de carldo. whats the correct verb Pages: <<Prev | 1 | 2 | 3
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=marcus&page=3","timestamp":"2014-04-16T10:55:37Z","content_type":null,"content_length":"8835","record_id":"<urn:uuid:519c53bf-fb59-489f-a2c2-c086a1e73c3b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Wave-based numerical methods for acoustics Astley, R.J., Gamallo, P. and Gabard, G. (2008) Wave-based numerical methods for acoustics. In, 8th.World Congress on Computational Mechanics (WCCM8): 5th European Congress on Computational Methods in Applied Sciences and Engineeering (ECCOMAS 2008), Venice, Italy, 30 Jun - 05 Jul 2008. PDF (Abstract) Download (73Kb) Recent developments in wave-based numerical methods are reviewed in application to problems in acoustics where small perturbations of pressure and velocity propagate on a steady compressible mean flow. In the most general case, such wave fields are represented by the solution of the linearized Euler equations. In the case of steady time-harmonic solutions in a homogeneous medium and in the absence of mean flow, these reduce to the solution of the Helmholtz equation. When irrotational mean flow is present, they can be represented by a convected Helmholtz-like equation. When rotational mean flow effects are significant, the time harmonic, coupled, first order linearized Euler equations must be solved. Wave-based numerical methods have been applied to all three categories of problem. The solution of the time-harmonic acoustic equations, particularly for large three-dimensional domains, is computationally challenging. When traditional finite element methods are used with polynomial basis functions within each element, approximability arguments alone require the use of many nodes per wavelength to resolve the resulting spatially harmonic solutions. It is moreover well known that these requirements are exacerbated when the computational domain spans many wavelengths of the solution. In such cases the ’pollution’ effect [1] means that the global error increases with frequency, irrespective of the number of nodes per wavelength that are used to resolve the solution. This effect is particularly acute for problems which involve exterior scattering by objects whose geometric lengthscale is large compared to a typical wavelength. A problem this type which is of particular interest to the authors is acoustic radiation from aero-engine nacelles where the length scale of the acoustic disturbance at peak frequencies is an order of magnitude smaller than the diameter of the nacelle. Another problem which exhibits the same disparity of lengthscales and where wave-based methods have been recently been applied is in the calculation of Head Related Transfer Functions for the human head and torso. Here a similar relationship holds between the wavelength of the disturbance and the dimensions of the torso over much of the audible range. To resolve either of these problems in three dimensions by using conventional numerical methods requires many millions of node or grid points. The use of non-polynomial, wave-like bases to represent such solutions more effectively with fewer degrees of freedom, can be traced back to infinite element schemes developed over several decades, in which a single outwardly propagating wave direction is used [2,3]. Such methods are now routinely implemented in a number of commercial codes. More recently, the Partition of Unity method [4,5,6] has been applied to a more general class of problem where no a priori wave direction is known. Here a continuous wave-like trial solution is generated from a conventional Finite Element mesh by multiplying the shape functions by a discrete set of wave solutions. A number of discontinuous formulations have also been explored. In such methods, continuity at element boundaries must be imposed on wavelike trial solutions defined within each element [7,8]. In the case of the Helmholtz problem, such methods can also be regarded as Trefftz solutions [9], and have been shown to embrace also the more esoteric Ultra-Weak Variational Approach [10]. All of the above methods suffer from poor conditioning as the number of wave directions increases. The extent to which this is an impediment to their practical implementation is significant in assessing the utility of each method, as are potential pre-conditioning strategies to reduce condition number. While most of the methods noted above, have been developed for the homogeneous Helmholtz problem in the absence of flow, some have been applied also to the case with non-zero mean flow [8]. Particular attention will be given in the current review to the effectiveness of these formulations for the flow case. Actions (login required)
{"url":"http://eprints.soton.ac.uk/63665/","timestamp":"2014-04-21T14:59:46Z","content_type":null,"content_length":"32561","record_id":"<urn:uuid:fa9c8803-6e4c-42ac-a65d-9601c3043ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 1 Genome assembly graph complexity is reduced as sequence length increases. Three de Bruijn graphs for E. coli K12 are shown for of 50, 1,000, and 5,000. The graphs are constructed from the reference and are error-free following the methodology of Kingsford et al. ]. Non-branching paths have been collapsed, so each node can be thought of as a contig with edges indicating adjacency relationships that cannot be resolved, leaving a repeat-induced gap in the = 50, the graph is tangled with hundreds of contigs. Increasing the k-mer size to = 1,000 significantly simplifies the graph, but unresolved repeats remain. At k = 5,000, the graph is fully resolved into a single contig. The single contig is self-adjacent, reflecting the circular chromosome of the bacterium.
{"url":"http://genomebiology.com/2013/14/9/R101/figure/F1","timestamp":"2014-04-20T00:51:09Z","content_type":null,"content_length":"12432","record_id":"<urn:uuid:edcd18e9-4fea-4ff9-9ec3-a3e1b9d0baad>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Inplace Run­Length 2d Compressed Search Amihood Amir \Lambda Gad M. Landau y Dina Sokol z 1 Introduction Recent developments in multimedia have led to a vast increase in the amount of stored data. This increase has made it critically important to store and transmit files in a compressed form. The need to quickly access this data has given rise to a new paradigm in searching, that of compressed matching [1, 5, 6]. In traditional pattern matching, the pattern (P ) and text (T ) are explicitly given, and all occurrences of P in T are sought. In compressed pattern matching the goal is the same, however, the pattern and text are given in compressed form. Let c be a compression algorithm, and let c(D) be the result of c compressing data D. A compressed matching algorithm is optimal if its time complexity is O(c(T ) + c(P )). Although optimality in terms of time is always impor­ tant, when dealing with compression, the criterion of extra space is equally important. In many applica­
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/806/3750337.html","timestamp":"2014-04-17T22:16:52Z","content_type":null,"content_length":"8089","record_id":"<urn:uuid:da768857-f245-445c-acdf-f3ed17a25493>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximate Construction of Regular Polygons: Two Renaissance Artists - Introduction Regular polygons and polyhedra have interested mathematicians at least since Euclid (c. 300 BCE), who dedicated Books IV and XIII of his Elements to them. (In fact, much of the knowledge contained in Book XIII had come down to him from the disciples of Pythagoras (c. 569-470 BCE).) The interest of these figures lies in their beauty and in the challenge posed by their construction; it is not easy to construct a polygon of, say, five sides, all of them equal! Well, since the sides of such a polygon are all equal, they may be viewed as chords of a circle, all equidistant from its center. This circle will, then, be circumscribed about the polygon. We may, if we wish, take as our initial datum either the side of the n-gon (polygon of n sides), l[n], or the radius R of the circumscribed circle, for there is a unique relation between them: l[n]/2 = R sin((2p/n)/2) (1) here, q [n] = 2p/n is the central angle subtended by one side l[n], obviously equal to one n -th of the complete circle. But the sine in Eq.(1) makes things hard if you want to construct with ruler and compass (besides, trigonometric functions were not invented until much later). So the Greek mathematicians relied on ordinary geometry for their constructions. They did well; they found how to construct regular polygons of 3, 4, 5, 6, 8, 10 and 15 sides. As is well known, Greek mathematics was forgotten in Europe during the Middle Ages, but was rediscovered in the Renaissance (1400-1600 CE). At this time, artists with a mathematical ability became interested in problems of perspective, in drafting, and in the amazing, quasi-mystical, properties of the golden section --- which appears prominently in connection with the regular pentagon. For these reasons, some artists looked for procedures to construct regular polygons. It did not matter whether these procedures were old or new; it did not matter either if they were not exact, but only approximate. They were intended for use by painters, architects and draftsmen, not for contemplation by pure geometers.
{"url":"http://www.maa.org/publications/periodicals/convergence/approximate-construction-of-regular-polygons-two-renaissance-artists-introduction","timestamp":"2014-04-19T06:04:34Z","content_type":null,"content_length":"100773","record_id":"<urn:uuid:f3f501f3-8168-4b75-966c-a54271335308>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with a homework problem June 18th, 2013, 04:46 AM x 0i Need help with a homework problem Having trouble with this one... Write a Java program that will distribute the total amount of money found in a piggy bank into the following currencies: half dollars, quarters, dimes, nickels and pennies. The distribution will begin with the largest currency and move downward to the smallest. The process will follow these steps: 1.) The program is to prompt the user for the currency found in the bank; 2a.) find the largest equivalent number of half dollars present in the total amount; 2b.) reduce the total amount of money by the value of the half dollar amount; proceed by repeated the above process (steps 2a and 2b, using the new remaining dollar amount) to find the largest equivalent number of quarters, then dimes, then nickels, and finally, the remaining pennies 3.) the program will then display the equivalent number of half dollars, quarters, then dimes, then nickels, and finally, the remaining pennies. Sample program: Please enter the total dollar amount in the piggy bank: $16.72 In $16.72 there are: 33 half dollar(s), 0 quarter(s), 2 dime(s), 0 nickel(s), and 2 cent(s). The output must be exactly the same as that June 18th, 2013, 02:52 PM Re: Need help with a homework problem Can you post some code to see what you have so far? June 18th, 2013, 04:21 PM x 0i Re: Need help with a homework problem import java.util.Scanner; import java.text.DecimalFormat; public class ProgramIII public static void main(String [] args) DecimalFormat f = new DecimalFormat("$0.00"); double total = 0; double hf = 0; double q = 0; double d = 0; double n = 0; double c = 0; Scanner kbd = new Scanner(System.in); System.out.println("Enter amount of money found in piggy bank: "); total = kbd.nextInt(); hf = total / .50; q = d = n = c = System.out.println("In " + (f.format(total)) + " there are: "); System.out.println(" " +
{"url":"http://forums.codeguru.com/printthread.php?t=537863&pp=15&page=1","timestamp":"2014-04-24T09:34:00Z","content_type":null,"content_length":"8247","record_id":"<urn:uuid:ff39c951-19ab-4995-bd9d-7b908af6be79>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Random turtles in the hyperbolic plane My eldest daughter Lisa recently brought home a note from her school from her computer class teacher. Apparently, the 5th grade kids have been learning to program in Logo, in the MicroWorlds programming environment. I have very pleasant memories of learning to program in Logo back when I was in middle school. If you’re not familiar with Logo, it’s a simple variant of Lisp designed by Seymour Papert, whereby the programmer directs a turtle cursor to move about the screen, moving forward some distance, turning left or right, etc. The turtle can also be directed to raise or lower a pen, and one can draw very pretty pictures in Logo as the track of the turtle’s motion. Let’s restrict our turtle’s movements to alternating between taking a step of a fixed size S, and turning either left or right through some fixed angle A. Then a (compiled) “program” is just a finite string in the two letter alphabet L and R, indicating the direction of turning at each step. A “random turtle” is one for which the choice of L or R at each step is made randomly, say with equal probability, and choices made independently at each step. The motion of a Euclidean random turtle on a small scale is determined by its turning angle A, but on a large scale “looks like” Brownian motion. Here are two examples of Euclidean random turtles for A=45 degrees and A=60 degrees respectively. The purpose of this blog post is to describe the behavior of a random turtle in the hyperbolic plane, and the appearance of an interesting phase transition at $\sin(A/2) = \tanh^{-1}(S)$. This example illustrates nicely some themes in probability and group dynamics, and lends itself easily to visualization. Let’s work in the Poincaré unit disk model of hyperbolic geometry. In this model, the hyperbolic plane is thought of as the interior of the unit disk in the Euclidean plane, and the hyperbolic metric is related to the Euclidean metric by multiplying distances infinitesimally by $2/(1-r^2)$ at a point whose (Euclidean) distance from the origin is $r$. In this model, the hyperbolic distance between a point at the origin and a point at Euclidean distance $r$ away is $2\tanh^{-1}(r)$. So, at the risk of being slightly confusing, let me say that a hyperbolic random turtle has “step size S” if the first step, starting at the origin, lands on the Euclidean circle of radius S. I wrote a little program called turtle to illustrate the motion of a random turtle for various values of S and A; it can be downloaded from my github repository if you want to play with it. The figures below are all produced with it. Let’s look at a few examples. The phase transition alluded to earlier is very evident in these pictures: for large S and small A, the turtle zooms off in an almost straight line to the boundary, with very little wiggling along the way. For small S and large A, the turtle meanders around aimlessly, filling up lots of space, intersecting its path many times, until eventually wandering off to the boundary in a more or less random direction. For a given length, what is the critical turning angle? The “worst case” scenario is a turtle which always turns left (or always turns right). For such a turtle there is a critical angle (for a given length) such that the trajectory of the turtle just fails to close up. Technically, the hyperbolic isometry describing the turtle’s motion at each step is parabolic, and fixes a unique point at infinity. The segments of the turtle’s trajectory will then osculate an invariant horocycle for the parabolic isometry, when the (discrete) atoms of positive turning curvature at the vertices exactly balance the negative curvature of the hyperbolic plane. A critical turtle trajectory osculates a horocycle The critical relationship is precisely that $\sin(A/2) = S$, with our convention about the relationship between S and the hyperbolic length of the segments. For angles smaller than this value, the trajectory is a quasigeodesic — i.e. it stays within a bounded (hyperbolic) distance of an honest geodesic, and does not wind around at all. For angles bigger than this value, there is a definite probability at every stage that the trajectory will undergo some number of complete full turns, and it might return to some region it has visited before. The trajectory still converges to a point at infinity with probability one (this is a very robust feature of random walk in negatively curved spaces) but it makes deviation of order $\log(n)$ from this geodesic in the first $n$ steps. One interesting statistic for an immersed path $\gamma$ in the plane is the winding number. If we trivialize the unit tangent bundle, the derivative $\gamma'$ can be thought of as a map to the circle, and we can ask how many times it winds around. In the Euclidean plane there is a natural trivialization of the unit tangent bundle via parallel transport, because of the flatness; technically there is a flat orthogonal connection. In the hyperbolic plane any orthogonal connection must have curvature, but there is a flat connection with structure group equal to the group of (hyperbolic) isometries, by identifying the unit circle in each tangent bundle with the circle at infinity. Explicitly: every tangent vector $v$ is tangent to a unique oriented geodesic $\gamma$ which limits to a unique point $\gamma^+$ in the circle at infinity. This identification is global, and respected by the natural action of the isometry group. For a random turtle in the Euclidean plane, the trajectory turns left or right through angle A at every step, and the winding number after some number of steps is distributed like simple random walk on the integers. That is, if $W_n$ denotes the winding number after $n$ steps, then the random variable $n^{-1/2} W_n$ converges to a normal distribution with mean zero and standard deviation A. The point is that the increments at every stage are independent and identically distributed. On the other hand, for a random turtle in the hyperbolic plane, each step induces an isometry of the hyperbolic plane, and thereby a projective transformation of the boundary circle. There is no natural invariant metric on this boundary circle, and therefore it is more subtle to compute winding number from this action. Let’s abstract the discussion somewhat. Suppose we are given a finite collection $X$ of (orientation-preserving) homeomorphisms of the circle. The circle is covered by the line, and the group $\text {Homeo}^+(S^1)$ of orientation-preserving homeomorphisms of the circle is covered by the group of orientation-preserving homeomorphisms of the line that commute with integer translation. Call this covering group $\text{Homeo}^+(S^1)^\sim$, where the tilde denotes central extension. Poincaré’s rotation number is a function from $\text{Homeo}^+(S^1)^\sim$ to the real numbers, whose reduction mod the integers is the usual rotation number for a circle homeomorphism. Thinking of our turtle as turning left or turning right continuously implicitly determines a lift of the motion to the universal covering group, so we can suppose that we are given a finite collection $X^\sim$ of lifts of $X$. Now we consider some random walk $x_0 x_1 x_2 \cdots$ where each $x_i$ is drawn independently and uniformly from $X^\sim$, and we ask about the distribution of the random variable $W_n$, which is defined to be the (real valued) rotation number of the composition $x_0 x_1 \cdots x_n$. Now, although there is typically no metric/measure on the circle left invariant by $X$ there is a natural measure — the so-called harmonic measure — which is invariant on average. If $\mu$ is a probability measure on the circle, we can define $X_* \mu: = \frac {1}{|X|} \sum_{x\in X} x_*\mu$, and then let $\mu_n: = \frac 1 n \sum_{i=0}^{n-1} X_*^i \mu$. The $\mu_n$ have a subsequence converging to a fixed point for the operator $X_*$; such a fixed point $\mu_\infty$ is a harmonic measure. Note that such a harmonic measure is quasi-invariant under every $x \in X$. The measure $\ mu_\infty$ pulls back to a locally finite measure $\mu_\infty^\sim$ on the real line, and this pullback is harmonic for the action of $X^\sim$. We can define a function $M:\mathbb{R} \to \mathbb{R}$ as follows. For each $t$ choose some $T\ll t$ and define $M(t) = \mu_\infty([T,t]) - \mu_\infty([T,0])$. Then $M$ is monotone nondecreasing, and $M(t+n) = M(t) + n$ for any $t$ and any integer $n$. In particular, the winding number $W_n$ satisfies $|W_n - M(x_0x_1\cdots x_n(0))| < 1$ for any $n$. Now, by the definition of a harmonic measure, for any $s,t$ and for random $x\in X^\sim$, there is an equality $\mathbb{E}(M(x(t)) - M(x(s))) = M(t) - M(s)$ (here the notation $\mathbb{E}(\cdot)$ means the expectation of a random function). In particular, $\mathbb{E}(M(x(t))) - M(t)$ is constant independent of $t$. We call this constant quantity the drift and denote it by $D$. Define a sequence of random variables $W'_n$ by $W'_n:=M(x_0x_1\cdots x_n(0)) - nD$. By the calculation above we see that for each $n$, the expectation of $W'_n$ conditioned on a particular value of $W'_{n-1} $ is equal to the given value of $W'_{n-1}$. More informally, we could just write $\mathbb{E}(W'_n) = W'_{n-1}$ and say that at every step, the expected change in the value of $W'$ is zero. This is a familiar object in probability theory, and is known as a martingale. One can think of the values of the martingale as the wealth of a gambler who makes a succession of fair bets. The wealth of such a gambler over time looks roughly like a simple random walk, after reparameterizing time proportional to the rate at which the gambler takes risks (as measured by the variance of the outcomes of each bet). For our random product of homeomorphisms, there are two possibilities: either the martingale converges, as successive “bets” become smaller and smaller, and the winding number converges to some final value (this happens in the case that the length of the turtle’s steps are big compared to the turning angle), or else the position of the point $x_0x_1\cdots x_n(0)$ is equidistributed in the circle with respect to $\mu_\infty$, and there is a central limit theorem: $n^{-1/2}W'_n$ converges to a Gaussian. Returning to our original setup, the left-right symmetry forces the drift $D$ to equal zero, and we can identify $W'_n$ with the winding number $W_n$ up to a constant. How does the variance of $n^{-1 /2}W_n$ depend on the variables S and A? The following figure shows a graph of the variance as a function of S and A. The red line marks the phase transition from zero variance (i.e. quasigeodesic turtle trajectories) to strictly positive variance. As one sees from the figure, the phase transition is not something sharp that can be easily seen experimentally, and in fact, the graph looks completely smooth along the phase locus (although we know it can’t be real analytic there). This experimental observation can be theoretically confirmed, as follows. Consider the behavior of a random turtle, with fixed stepsize, for some turning angle A’ just marginally bigger than the critical angle A. The critical turtle trajectory bounds an infinite polygon with edges of length $2\tanh^{-1}(S)$ and external angles A; this polygon can be decomposed into semi-ideal triangles with internal angles $(\pi-A)/2, \pi/2, 0$ and finite side length $S':=\tanh^{-1} (S)$. As we deform the angle $A$ we get a new triangle with angles $\alpha, \pi/2,\epsilon$ where $\alpha = (\pi-A')/2$, and the angle $\epsilon$ is opposite a side of fixed length $S'$. The hyperbolic law of cosines says in this context that $\cos(\epsilon) = \sin(\alpha)\cosh(S')$. Since $S'$ is fixed, and $\epsilon$ is small, we can approximate $\cos(\epsilon) \sim 1-\epsilon^2/2$; in other words, the angle $\epsilon$ is of polynomial (actually, quadratic) order in the difference $A'-A$. Now, suppose $\epsilon = 1/N$ for some very large $N$. A turtle trajectory with the property that there is at least one left and at least one right turn in every $N$ steps will be quasigeodesic; the only full twists will occur when there is a sequence of at least $N$ left turns or right turns in a row. This is a very rare occurrence — it will typically only happen twice in a sequence of $2^N$ steps. Hence the variance of the winding number $W_n$ is of order $2^{-N}$. In particular, the graph of the variance is tangent to zero to infinite order along the phase locus, as claimed. (Update:) At Dylan’s request I’ve added a slice of the variance graph, at $S=0.05$ with angle varying from 0 to 0.2. The vertical axis has been stretched (relative to the 3d graph above) for legibility. The phase transition is at angle 0.1000417 and I must say the graph looks pretty flat there. Recent Comments Ian Agol on Cube complexes, Reidemeister 3… Danny Calegari on kleinian, a tool for visualizi… Quod est Absurdum |… on kleinian, a tool for visualizi… dipankar on kleinian, a tool for visualizi… Ludwig Bach on Liouville illiouminated 5 comments Dylan Thurston What makes you sure the graph is not real analytic at the phase transition? The graph is pretty hard to read in the 3-d form like that; a 2-d slice would help. Also, I must be confused about the definition of quasi-geodesic, because I didn’t think self-intersection was a barrier to being a quasi-geodesic. Danny Calegari Hey Dylan – the random function $W_n$ is bounded on one side of the phase transition locus, so the variance of $n^{-1/2}W_n$ (the function whose graph is being plotted) is identically zero there. On the other hand, the variance is strictly positive on the other side. Hence the function is not real analytic at any point on the phase transition locus. (But: it is infinitely tangent to the identity there) Also: I didn’t mean to give the impression that the quasigeodesity was certified by the turtle path being embedded. It just so happens that below the phase transition the paths are embedded and are quasigeodesics, and above the phase transition the paths are not embedded and are not quasigeodesics, with probability one. I haven’t used turtle graphics/logo since 1986 or so (in a computer “class” which also covered Transylvania and lemonade stand), but I must say I remember it fondly. Lemonade stand is awesome! […] explicitly known. One example concerns the random turtles in the hyperbolic plane, discussed in a previous post. One fixes a distance D and an angle A and considers a “turtle” in the hyperbolic Leave a Reply Cancel reply
{"url":"http://lamington.wordpress.com/2012/12/15/random-turtles-in-the-hyperbolic-plane/","timestamp":"2014-04-17T07:22:40Z","content_type":null,"content_length":"97135","record_id":"<urn:uuid:471195d6-c98e-4ca9-b668-c7d39294cfeb>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
User-Defined Methods July 25th, 2011, 11:13 PM User-Defined Methods I am writing a program for my Java class that takes the length of two sides of a right triangle, and calculates the hypotenuse, as well as the sine, cosine, and tangent of each non-right angle of the right triangle. The calculations for the hypotenuse, sine, cosine, and tangent should be placed in separate methods. I keep getting two errors when I try to declare my methods, 'class' expected and ')' expected. I cannot figure out what I am doing wrong, any help would be appreciated. Code Java: import java.util.*; import static java.lang.Math.*; public class RightTriangle public static void main(String[] args) double a, b, c; System.out.println("Please enter first length" + " of side of the triangle", a); a = console.nextDouble(); System.out.println("The opposite side of the" + "triangle is ", a); System.out.println("Please enter second length" + " of side of the triangle", b); b = console.nextDouble(); System.out.println("The adjacent side of the" + "trianlge is ", b); c =((a*a) + (b*b)); hypotenuse = c*c; System.out.println("The hypotenuse of the triangle" + "is ", hypotenuse); sine(double a, double c); cosine(double b, double c); tangent(double b, double c); public static double getSine(double a, double c) double opp; double hyp; opp = a.getNum(); hyp = c.getNum(); sine = opp/hyp; System.out.println("The sine of the triangle" + "is ", sine); public static double getCosine(double b, double c) double adj; double hyp; adj = b.getNum(); hyp = c.getNum(); cosine = adj/hyp; System.out.println("The cosine of the triangle" + "is ", cosine); public static double getTangent(double a, double b) double opp; double adj; opp = a.getNum(); adj = b.getNum(); tangent = a/b; System.out.println("The tangent of the triangle" + "is ", tangent); July 25th, 2011, 11:28 PM Re: User-Defined Methods Code java: String s = "hello world"; System.out.println(String s); What is wrong with the above code? July 25th, 2011, 11:30 PM Re: User-Defined Methods Once you solve that riddle you have numerous other problems with your code. July 28th, 2011, 10:02 PM Re: User-Defined Methods Code java: String s = "hello world"; System.out.println(String s); What is wrong with the above code? July 28th, 2011, 10:23 PM Re: User-Defined Methods Congratulations but the question was aimed at ZippyShannon so they can work out for themselves what is wrong with their code.
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/10084-user-defined-methods-printingthethread.html","timestamp":"2014-04-19T20:48:30Z","content_type":null,"content_length":"20615","record_id":"<urn:uuid:ccc59c5c-a854-4fa3-8dfa-c13e00f1f5e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Alviso Precalculus Tutor Find an Alviso Precalculus Tutor ...I have tutored many students over the years, including my son, who is now in the twelfth grade. I can offer tutoring in many subjects, but my specialties are science, math, and computer programming (I favor Python). I hope to teach you more than just what you need for tomorrow's test. I hope yo... 17 Subjects: including precalculus, chemistry, writing, geometry My years as a UCLA student were great memorable years. I worked as TA and RA at UCLA, while attending classes. I tutored my classmates as well as middle school, high school and college students outside UCLA, in math (prealgebra to calculus at all levels, including multivariable, and more), and also physics at all levels including college and university, supporting myself financially that 9 Subjects: including precalculus, physics, calculus, geometry ...I will begin by leading you in a simple, logical way to discover for yourself that the underlying concepts make sense. Once you are totally convinced, the best way to become proficient is through practice, and I will provide you with an unlimited supply of practice problems to work on, with me o... 12 Subjects: including precalculus, chemistry, calculus, physics ...I show them the meaning behind each of exercises. What they are all about. How we should tackle the problems. 13 Subjects: including precalculus, calculus, algebra 1, algebra 2 I am a math teacher, currently grades 6-12, with several years of experience helping students succeed in their math courses. I worked for four years at Cosumnes River College in the Math Learning Center and also worked at Mathnasium, an after school tutoring center. I graduated from California Sta... 12 Subjects: including precalculus, calculus, geometry, statistics
{"url":"http://www.purplemath.com/Alviso_precalculus_tutors.php","timestamp":"2014-04-16T10:48:44Z","content_type":null,"content_length":"23881","record_id":"<urn:uuid:a1f3ee2c-4fbb-4d40-a7cb-1171dcf8ba3f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysing the benefits of Dodge/Parry and Mastery(Editing) 12-29-2010, 12:07 AM Analysing the benefits of Dodge/Parry and Mastery(Editing) First of all,we must know how to calculate the Dodge/Parry/Mastery on your character sheet Parry = 5%+ parry(after DR) Dodge = 3.9% +dodge(after DR) Block = 5%+15%(Sentinel)+Mastery*1.5%+0.44%(shield enchant) DR math : 1/y = 1/c +k/x x= Dodge/Parry Rating adds Dodge/Parry chance before DR eg: Dodge rating 1384 of adds 7.84% Dodge chance,so x = 7.84 c=65.63 k=0.956 In order to calculate easily,i ignore the 2 factors 1 Hold the Line in the talent 2 ShieldBlock in the skill Attachment 2513 some explanations for this table Parry = parry on your character sheet after add 176.72 parry rating Before DR = increase 176.72 parry rating adds parry chance before DR DR = increase 176.72 parry rating adds parry chance after DR 1%parry = the increase reduction when add 176.72 parry rating(1%parry =1reduction) eg:the row of Parry = 15.58, 1%parry=0.75% it's mean that 14.79%→15.58%,add 176.72 parry rating can take the 0.75% reduction Mastery = mastery on your character sheet after add 179.28% mastery rating Mitigation = the matery you had gain the reduction,In fact, it's a Mathematical expectation math = block*(critBlock*0.6+(1-critBlock)*0.3) 1%Mastery =the increase reduction when add 179.28 mastery rating 1%Mastery(actual) = add 176.72 converse 0.9852 mastery' effect in actual becase 176.72 = 1% Dodge/Parry(before DR) =0.9852 Mastery it's easy to find that Parry:13.23%→14.02% 1%Parry =0.79% Mastery : in the same row 17→18 Mastery 1%mastery(actual)=0.77% Parry:14.02%→14.79% 1%Parry =0.77% Mastery : in the same row 18→19 Mastery 1%mastery(actual)=0.78% so,it's mean when your character sheet parry=14.79 mastery=19 Regardless add the points from gems or refoged add the same points, the reduction from parry < mastery however, when parry<15,mastery<19 you should choose the parry, because parry's reduction >mastery Attachment 2514 from this picture ,wo can know the rate of change from this 2 attributes X-axis is mean adding X% parry before DR in the picture,the cross-point is close to 11(before DR),so 5%+after DR(11) =14.59 At the last, for the easy to remember,the conclusion is that 1.if your character sheet(buffed) Dodge/Parry<14/15 and Mastery<19,you should reforged or gems to Dodge/Parry is better than Mastery It's gain the better reduction that maintaining the blance between Dodge and Parry as soon as possible 2.if your character sheet(buffed) Dodge/Parry>14/15 and Mastery>20 , stack the mastery util your unhittable =102.4% note : parry14.59≈ 15 because 15 is more remmbered dodge have the similar relationship when parry-1.1 ,so dodge =14http://www.tankspot.com/images/misc/pencil.png 12-29-2010, 12:22 AM my English is pool ,so i'm srroy to let everyone watch my article hardly 12-29-2010, 12:39 AM e.... i forget the title cann't be edited in this forum 12-29-2010, 01:32 AM But I love some anal zing!! Jokes aside, I don't think you can just cast aside Shield Block. It GREATLY improves Mastery rating. Also, you don't really explain what I should do with Dodge/Parry > 14-15 & Mastery < 19 or the other way around. Its probably not a case that will happen that often, but just for the people that like to know everything ;) 12-29-2010, 02:10 AM because Shield Block can be casted whenever parry15/mastery20 or parry 16/mastery19 and we change variable only = 1% ,△Reduction Expectation is too small, but it let me construct the math model become complex so,for this reason, i decide to ignore the ShieldBlock factor,and same to Hold the line 12-29-2010, 02:12 AM Am I interpreting this properly? Each parry step is 176.72 parry rating and each mastery step is 179.28 mastery rating - then you are comparing the values directly across... So when you are at 10 pre-DR parry and 18 mastery, you are actually comparing: 1767.2 parry rating against 1792.8 mastery rating? Sure it's only 1.4% difference, but your conclusions are also based on percentage point differences... Surely you should be comparing ratings directly. 12-29-2010, 02:29 AM But I love some anal zing!! Jokes aside, I don't think you can just cast aside Shield Block. It GREATLY improves Mastery rating. Also, you don't really explain what I should do with Dodge/Parry > 14-15 & Mastery < 19 or the other way around. Its probably not a case that will happen that often, but just for the people that like to know everything ;) for this model: parry =14.02,mastery =18 if you get the 176.72 2nd Attribute's points from reforged 1.you converse to the parry , then your parry increase to 14.79, =your reduction +0.77% = the Boss Damage -0.77% (+1%parry =-1%damage) 2 you converse to the mastery ,then your mastery increase to 18.9852(176.72 =0.9852mastery) = your reduction +0.79% = the Boss Damage -0.79% so, it is easy to see, in the situation ,mastery is better then the parry On the contrary , we find the excel table, when parry14.02/mastery18,1%parry>1%mastery(actual) So,conclusion is deduced for these 12-29-2010, 02:40 AM Am I interpreting this properly? Each parry step is 176.72 parry rating and each mastery step is 179.28 mastery rating - then you are comparing the values directly across... So when you are at 10 pre-DR parry and 18 mastery, you are actually comparing: 1767.2 parry rating against 1792.8 mastery rating? Sure it's only 1.4% difference, but your conclusions are also based on percentage point differences... Surely you should be comparing ratings directly. i edited the size of picture , now ,the picture is normal now,you look the colunm K 1% Mastery(actual) 1% Mastery(Actual) = 176.72*0.9852 12-29-2010, 03:09 AM Unless I'm reading this wrong, you are looking solely at the pure damage reduction benefits. Since mastery prevents less damage but over a larger percentage of hits, it also functions to flatten out damage graphs. While this may not be the intent of your analysis, it is worth noting, as it (in my opinion) gives mastery slightly more value. It's the same reasoning why armor was valued so much higher than avoidance pre 4.0.1 - since it was easier to predict, it was better. 12-29-2010, 05:36 AM Unless I'm reading this wrong, you are looking solely at the pure damage reduction benefits. Since mastery prevents less damage but over a larger percentage of hits, it also functions to flatten out damage graphs. While this may not be the intent of your analysis, it is worth noting, as it (in my opinion) gives mastery slightly more value. It's the same reasoning why armor was valued so much higher than avoidance pre 4.0.1 - since it was easier to predict, it was better. yes,you r right,but we also know that ,in the low parry,the DR effect is also low parry 14% +176.72 = 0.77%parry =0.9852*mastery parry 30% +176.72 = 0.33%parry =0.9852*mastery In defferent time,in the same rating converse rate are different the conclusion is only dodge/parry = 14/15 , we only need few rating from gems or reforged can be achieved the Dodge/Parry to these Criital points if 3parry vs 4mastery, i think i could choose 3parry,because parry is avoidance,but block is not for the smooth out the damage , if Boss attack =60K, it have 30% chance direct hit you +3% vs 6%block continual direct double hit 3%parry =9% chance,and the 6%block =7% the former have 2% less than the latter,it is mean latter have relative 2% smooth more than former,but parry is aviodance,block only mitigation 12-29-2010, 07:07 AM I like the graph and the table but to make it realistic you should include shieldblock and Hold the Line. Hold the Line is only complicated if you want to do it accurately. 1% parry increases the uptime for HtL around 2-3% depending on the swingtimer of the boss and current amount of parry. Average amount of block is between 55-60% including shieldblock so HtL will turn an additional 5,5-6% blocks into crit blocks for an additional 30% damage reduction. 2,5%*6%*30%= 0,045% I would just add a static 0,035-0,045% into the 1%parry column to account for HtL. Not sure how you calculate the mitigation from mastery, should be %block*average block. Where average block is 0,3*(1+crit block). Mitigation from 1 mastery: 1,5% crit block*0,3*old block percentage + 1,5% block*average block if you rewrite this you get: 1,5% crit block*0,3*old block percentage + 1,5% block*(0,3*(1+critblock) 0,45%*old block percentage+0,45%*(1+critblock) 0,45%*old block percentage+0,45%+ 0,45%*critblock 0,45% + 0,45%*(old block percentage + crit block) In the last formula its pretty easy to add in shieldblock just add 50/3=16,67 for 25% critblock + 25% block divided by 1/3 uptime 0,45% + 0,45%*(old block percentage + crit block + 0,1667) One last thing that changes the value of mastery a little is the 10% crit block from Hold the Line. When i did some calculations a while back i just added average 5% crit block for a 50% uptime on HtL. Hope this helps :) 12-29-2010, 09:34 AM I am still opposed to averaging in shield block for damage reduction in stat comparisons. You don't die with shield block up. You are unhittable and crit block is over 50%; It just doesn't happen. The marginal difference between stat priorities with shield block up, isn't worth the cost of an effective comparison while you are at your weakest. 12-29-2010, 09:42 AM Impact of shieldblock on the value of mastery is easy to seperate if you dont want to average it. 0,5*0,45%=0,225% extra damage reduction during shieldblock for 1 point of mastery. You can divide it by 3 and add it up to the total or just keep it separate. its significant enough that it shouldn't be left out. 12-29-2010, 09:53 AM Impact of shieldblock on the value of mastery is easy to seperate if you dont want to average it. 0,5*0,45%=0,225% extra damage reduction during shieldblock for 1 point of mastery. You can divide it by 3 and add it up to the total or just keep it separate. its significant enough that it shouldn't be left out. Absolutely, I have no issues with it's separation - just trying to avoid failures we've run into in other threads. 12-29-2010, 07:40 PM Why i ignore the Hold the Line(HtL) and ShieldBlock(SB) if your sheet parry10,mastery19, they can be casted if your sheet parry11,mastery18, they can be casted now,△parry/mastery = 1, the △benefit=1masteryBenefit(cause SB)-1parryBenefit(cause HtL) is too small if i import them,my math will become complex,but △result is too small,and you know,more complex more may go to wrong i wander to get the dim value like 14、15 etc, not 14.897、15.999 because nobody likes to remember 14.xxx%、15.xxx 12-30-2010, 04:16 AM Sure, math will become more complex with them, but they do actually change the outcome. Especially SB. It's up either 1/3 of the time or when needed most. (You have other big CDs as well that can be used at SB downtime if you go low there, so it's not totally wrong to assume, that one of the two above assumptions is true for a prot warrior who knows her class and the encounter.) SB pushes both, survivability and dmg/threat. I doubt many prot warriors will allow SB to have a much longer downtime than 2/3 of the encouter. If you want to be on the save side, just calculate it's uptime around 1/4 of the time. That would be far off of what it's for most tanks but it's much better than to ignore it completely. Same for HtL. Well you can chose to not skill it, but if you are interested in best survivability you should actually consider it. Everything else does not yield relevant results. It's just not important if the values are 14 or 10 or 19 IF you don't take HtL and never use SB. Anyone not doing this either does not have any problems with survivability or does not know the class. If you want to give such easy to remember values - which I appreciate - you should at least make some approach to include them. It does not have to be accurate since you just want to give some easy to remember numbers. But if you want to have those numbers taken serious you have to include such important things like HtL and SB.
{"url":"http://www.tankspot.com/printthread.php?t=72913&pp=20&page=1","timestamp":"2014-04-19T03:20:20Z","content_type":null,"content_length":"22586","record_id":"<urn:uuid:38f0d0c5-8cfb-43f0-927d-a12b1d5b9890>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Disagreement with the Duocylinder I was thinking about the rotachora, trying to get a picture in my head about how they're formed when I noticed that I found out how to get every single rotachoron except the duocylinder. Hmm... I finally reached this conclusion after some thought: On the rotatopes page they say that a rotatope is an n-dimensional figure formed by extending or rotating a figure of the dimension before it. And then you find out that the five rotachora are the Hypercube, Cubinder, Duocylinder, Spherinder, and the Glome. And then these processes went through my head - you get the hypercube by extending the cube, you get the cubinder by extending the cylinder or rotating the cube, you get the spherinder by extending the sphere or rotating the cylinder, and you get the glome by rotating the sphere. Where does the duocylinder come into play? We already rotated and extended everything we could. I believe the duocylinder isn't a rotachoron at all. My depiction of a duocylinder (which I also call the cubispherinder) is a rotatetron (5D rotatope) and results when you extend the spherinder or rotate the cubinder. Furthermore, I believe those cross-sections that you see are in fact cross-sections of the cubinder tilted at a 45-degree The duocylinder is [(w,x),(y,z)], which means that you can get it from lesser elements in this way: [(w,x),(y,z)] = rotation of cylinder [x,(y,z)] in the wx plane. In every case, you need to make it out so that the top and bottom circles of a cylinder arise from the same thing, by rotating it in the hedrix made from the height and the added dimension. The dream you dream alone is only a dream the dream we dream together is reality. Your enumeration is wrong because of one little fact: Every rotatope can be extended in one way, BUT they can be rotated in several non-identical ways. When you rotate a 3D body through 4D, you rotate it around a plane, and in this case, the plane is one of the three coordinate planes. Cube and sphere can be rotated in only one way (to cubinder, resp. glome), as all three coordinate planes are symmetrical. However, the cylinder has two kinds of coordinate planes, which cut it in either a square or a circle. Rotating around these planes will give you different results. (If I see it correctly, rotation around "square" plane should give spherinder, and rotation around "circle" should give duocylinder) A way to see the duocylinder is that it's Cartesian product of two circles. BTW - the rotation of cubinder to 5D is, once again, non-unique: rotating it around a cubic cross-section will indeed lead to the shape you describe (called spherisquare in my system), but if you rotate it around its cylindrical cross-section, you will get a different rotatope called "dual cylinder", which can be also gotten by extending duocylinger. So what you're saying is that if you rotate the two connected circles of the cylinder the "ordinary" way in the same direction, and have them remain connected, you get the spherinder, and if you rotate them the "other" way (like drawing a circle on a piece of paper and turning the piece of paper around on the table), then you get the duocylinder. I only have one question. What 3D rotatope do you get when you rotate a single circle the same way? Neues Kinder wrote:So what you're saying is that if you rotate the two connected circles of the cylinder the "ordinary" way in the same direction, and have them remain connected, you get the spherinder, and if you rotate them the "other" way (like drawing a circle on a piece of paper and turning the piece of paper around on the table), then you get the duocylinder. I only have one question. What 3D rotatope do you get when you rotate a single circle the same way? This sounds a bit more complex that I would put it. If you rotate a single circle around any of its axes, you get a sphere, of course. Both coordinate axes of a circle are symmetrical. You only get two different results for a cylinder because a cylinder can be cut by coordinate planes in two very different ways. You do realize that in 4D, the rotation happens around a plane, not around an axis as in 3D case, right? The simplest way of constructing product-rototopes, is to use the notion that any axis can devolve into a circle or a square. What happens is that one replaces the simple axis x with either [w,x] for prism, and (w,x) for circle. So we could construct the duo-cylinder as: x line [x,y] square [x,(y,z)] cylinder [(w,x),(y,z)] duocylinder You see that regardless of whether we add w, x, y, z last, the previous 3d state is a cylinder. Many of these figures have the symmetry construction in the group r. There are other figures in other symmetries, such as h, hr, hh, f. For example, one could take a {3,3,5}, and a {5,3,3}. If one draws say the {5,3,3} on a glome, and then sets the {3,3,5} and {5,3,3} to cross at the edges of the {5,3,3}, one gets a rototope with 1200 faces, each a triangle tegum (bipyramid), where the axis is bent around the curve of the glome. The dream you dream alone is only a dream the dream we dream together is reality. Marek14 wrote:If you rotate a single circle around any of its axes, you get a sphere, of course. Both coordinate axes of a circle are symmetrical. You only get two different results for a cylinder because a cylinder can be cut by coordinate planes in two very different ways. Yes, you can rotate a circle around the x and y axes to get a sphere. And you can rotate a cylinder around the xy, xz, and yz planes to get - as you say - either a spherinder or a duocylinder. But, as wendy points out (inderectly), you can also rotate a cylinder around the wx, wy, and wz planes. If that is true you should also be able to rotate a circle around the z axis. And what I'm asking is what rotahedron do you get when you rotate a circle around the z axis? The method that i use to increase dimension, is to replace one dimension with two. That is, you can replace x with (x,y) or [x,y]. What arises out of this is a nested prismic / spheric product. You can remove any set of brackets that are identical to the enclosing brackets. eg as (3+(2+2)) = (3+2+2) [x,[y,z]] = [x,y,z] cube = square prism (x,(y,z)) = (x,y,z) sphere = circular spheric Note also that you can remove "non-adjacent" letters, eg cylinder = [x,(y,z)] When one wants to look down any given set of axies, one simply removes what ever letters are not needed, and simplifies the brackets. duocylinder = [(w,x),(y,z)] in the wy axis = [(w),(y)] = [w,y] = square. Marek 14 further discovers that you can for a set of w,x,y,z freely decide what the wx, wy, wz, xy, xz, yz axies ought hold. That is, you can freely populate these with square or circle sections. Of course, the 64 possibilities here reduce to 11, after orientations are taken to account, but one of the 11 (longdome, wx=xy=yz = (), and yw=wz=zx=[]), can not be expressed in a direct product of [] and () around w,x,y,z. The dream you dream alone is only a dream the dream we dream together is reality. Neues Kinder wrote: Marek14 wrote:If you rotate a single circle around any of its axes, you get a sphere, of course. Both coordinate axes of a circle are symmetrical. You only get two different results for a cylinder because a cylinder can be cut by coordinate planes in two very different ways. Yes, you can rotate a circle around the x and y axes to get a sphere. And you can rotate a cylinder around the xy, xz, and yz planes to get - as you say - either a spherinder or a duocylinder. But, as wendy points out (inderectly), you can also rotate a cylinder around the wx, wy, and wz planes. If that is true you should also be able to rotate a circle around the z axis. And what I'm asking is what rotahedron do you get when you rotate a circle around the z axis? When you rotate the circle around the z axis, you get the same figure as if you rotated it around its center in 2D - in other words, since the axis of rotation doesn't lie within the plane of the figure, it will stay 2D figure, as none of its points will ever leave the plane while it rotates. In this particular case, the circle will stay unchanged, but this is, of course, not the general Ahh, I get it. Getting the duocylinder is like rotating all the lines in the cylinder and not the circles. So the circles, when revolving around the center and not rotating, will form a 4D torus. And another 4D torus will fill in the gap in the surface, like a 3D tube forms the outside of the cylinder, and you need two circles to fill in the gaps in the surface. You can visualize getting a cylinder as taking a line and extending it around in a circle. As such, you can also get the duocylinder by taking a circle and extending it around in a circle, hence the (2,2) identifier. I call it the torinder, because it is made up of two tori. So there are 5 rotatopes in tetraspace, and there are 7 rotatopes in pentaspace: Pentacube (1,1,1,1,1), tetracubinder (1,1,1,2), cubispherinder - my duocylinder - (1,1,3), cubitorinder (1,2,2), spheritorinder (2,3), glominder (1,4), and the pentome (5). And I came up with the 9 rotahexxa (6D rotatopes) just two minutes ago: (1,1,1,1,1,1) Hexacube (1,1,1,1,2) Pentacubinder (1,1,1,3) Tetraspherinder (1,1,2,2) Tetracubitorinder (1,2,3) Cylitorinder (2,4) Glomitorinder (1,1,4) Tetracubiglominder (1,5) Cubipentominder (6) Hexome I named the (1,2,3) the Cylitorinder because just like you extend the line and then rotate it, you extend the Torinder and then rotate it to get the Cylitorinder. Neues Kinder wrote:Ahh, I get it. Getting the duocylinder is like rotating all the lines in the cylinder and not the circles. So the circles, when revolving around the center and not rotating, will form a 4D torus. And another 4D torus will fill in the gap in the surface, like a 3D tube forms the outside of the cylinder, and you need two circles to fill in the gaps in the surface. You can visualize getting a cylinder as taking a line and extending it around in a circle. As such, you can also get the duocylinder by taking a circle and extending it around in a circle, hence the (2,2) identifier. I call it the torinder, because it is made up of two tori. So there are 5 rotatopes in tetraspace, and there are 7 rotatopes in pentaspace: Pentacube (1,1,1,1,1), tetracubinder (1,1,1,2), cubispherinder - my duocylinder - (1,1,3), cubitorinder (1,2,2), spheritorinder (2,3), glominder (1,4), and the pentome (5). And I came up with the 9 rotahexxa (6D rotatopes) just two minutes ago: (1,1,1,1,1,1) Hexacube (1,1,1,1,2) Pentacubinder (1,1,1,3) Tetraspherinder (1,1,2,2) Tetracubitorinder (1,2,3) Cylitorinder (2,4) Glomitorinder (1,1,4) Tetracubiglominder (1,5) Cubipentominder (6) Hexome I named the (1,2,3) the Cylitorinder because just like you extend the line and then rotate it, you extend the Torinder and then rotate it to get the Cylitorinder. You are pretty much correct, although your labels don't match with mine. I have, in 5 and 6D (I did it even further) (1,1,1,1,1) - penteract (1,1,1,2) - cubicircle (1,2,2) - dual cylinder (1,1,3) - spherisquare (2,3) - sphericircle (1,4) - glominder (5) - petaglome (1,1,1,1,1,1) - hexeract (1,1,1,1,2,) - tesseracticircle (1,1,2,2) - duocubinder (2,2,2) - tricilynder (you missed this one) (1,1,1,3) - sphericube (1,2,3) - sphericylinder (3,3) - duosphere (you missed this one) (1,1,4) - glomosquare (2,4) - glomocircle (1,5) - petaglominder (6) - exaglome I also looked through all the "extended" rotatopes in 3 to 5 dimensions. Those are shapes which are given by arbitrarily dividing the set of coordinate planes in those that cut them in circles, and those that cut them in squares. In 3D, you have cube (three squares), cylinder (two squares, one circle), and sphere (three circles). There is also a fourth, extended rotatope which has two circles and one square as its cross-sections. Can you find it? Marek14 wrote:There is also a fourth, extended rotatope which has two circles and one square as its cross-sections. Can you find it? The crind. Intersection of two perpendicular cylinders. Yeah, I did think about the (3,3) and the (2,2,2) earlier today. I call the (3,3) the Duospheritorinder and the (2,2,2) the Duotorinder (I originally was going to call it the Sphericubitorinder, but it can easily be misinterpreted as the Cylitorinder) I also came up with the rotahexxa (7D rotatopes - I meant to say rotapenta when I was doing the 6D ones) by extending or rotating the ones before them the "original" way: (1,1,1,1,1,1,1) Heptacube (1,1,1,1,1,2) Hexacubinder (1,1,1,1,3) Pentacubispherinder (1,1,1,4) Tetracubiglominder (1,1,5) Pentomicubinder (1,6) Hexominder (7) Heptome And by rotating the "other" way: (1,1,1,2,2) Pentacubitorinder (1,1,2,3) Cubicylitorinder (1,2,2,2) Cubiduotorinder (1,2,4) Sphericylitorinder (1,3,3) Cubiduospheritorinder (2,5) Pentomitorinder (3,4) Glomispheritorinder (Spheriduospheritorinder) And to better organize them and make sure you didn't miss any, you can sort them like this: So that you can do even higher ones like this: (1,8} (regular ")" after the eight makes a smiley) There's a total of 30 rotaocta (9D rotatopes) CHALLENGE: Who can name them all? (must be logical names - names like tetrahedronicubicone or Bob aren't acceptable) Unfortunately, one can't just use partitions to find rototopes. For example, the same partition 2,2 gives distinct figures ([w,x],[y,z]) and [(w,x),(y,z)]. Even so, one can have something like [(1,[2,3]),(4,5)], which isn't a partition at all. It ceartianly isn't [1, (2,3), (4,5)] or anything. In any case, the rototopes on group r correspond to the free marking of simplex edges (N vertices for N dimensions), such that they are either square or circular. One might also note that the intersection of three cylinders gives a circle in the xy, yz and zx direction. This is the cyclotegmated octahedron, there are examples of this for every regular figure, and in every dimension. For example, the o3m3o5o yields two different rototopes, co3m3o5o and o3m3o5oc. The example listed here is co3m4o, which is a o3m4o rhombic dodecahedron, cyclated on the octahedron end. There is also a different rototope o3m4oc. The dream you dream alone is only a dream the dream we dream together is reality. Neues Kinder wrote:There's a total of 30 rotaocta (9D rotatopes) CHALLENGE: Who can name them all? (must be logical names - names like tetrahedronicubicone or Bob aren't acceptable) Well, I can, for one. I ended with 6D, didn't I? I think you will be able to deduce my system from these: (1,1,1,1,1,1,1) - hepteract (1,1,1,1,1,2) - penteracticircle (1,1,1,2,2) - cubiduocylinder (1,2,2,2) - triple cylinder (1,1,1,1,3) - tesseractisphere (1,1,2,3) - sphericubinder (2,2,3) - spheriduocylinder (1,1,1,4) - glomocube (1,2,4) - glomocylinder (3,4) - glomosphere (1,1,5) - petaglomosquare (2,5) - petaglomocircle (1,6) - exaglominder (7) - zettaglome (1,1,1,1,1,1,1,1) - octaract (1,1,1,1,1,1,2) - hexeracticircle (1,1,1,1,2,2) - tesseractiduocylinder (1,1,2,2,2) - tricubinder (2,2,2,2) - tetracylinder (1,1,1,1,1,3) - penteractisphere (1,1,1,2,3) - sphericubicircle (1,2,2,3) - dual sphericylinder (1,1,3,3) - duospherisquare (2,3,3) - duosphericircle (1,1,1,1,4) - glomotesseract (1,1,2,4) - glomocubinder (2,2,4) - glomoduocylinder (1,3,4) - glomospherinder (4,4) - duoglome (1,1,1,5) - petaglomocube (1,2,5) - petaglomocylinder (3,5) - petaglomosphere (1,1,6) - exaglomosquare (2,6) - exaglomocircle (1,7) - zettaglominder (8) - yottaglome (1,1,1,1,1,1,1,1,1) - ennearact (1,1,1,1,1,1,1,2) - hepteracticircle (1,1,1,1,1,2,2) - penteractiduocylinder (1,1,1,2,2,2) - spheritricylinder (1,2,2,2,2) - quadruple cylinder (1,1,1,1,1,1,3) - hexeractisphere (1,1,1,1,2,3) - tesseractisphericircle (1,1,2,2,3) - spheriduocubinder (2,2,2,3) - spheritricylinder (1,1,1,3,3) - duosphericube (1,2,3,3) - duosphericylinder (3,3,3) - trisphere (1,1,1,1,1,4) - penteractiglome (1,1,1,2,4) - glomocubicircle (1,2,2,4) - dual glomocylinder (1,1,3,4) - glomospherisquare (2,3,4) - glomosphericircle (1,4,4) - duoglominder (1,1,1,1,5) - petaglomotesseract (1,1,2,5) - petaglomocubinder (2,2,5) - petaglomoduocylinder (1,3,5) - petaglomospherinder (4,5) - petaglomoglome (1,1,1,6) - exaglomocube (1,2,6) - exaglomocylinder (3,6) - exaglomosphere (1,1,7) - zettaglomosquare (2,7) - zettaglomocircle (1,8) - yottaglominder (9) - xennaglome And here are 10D, for good measure: (1,1,1,1,1,1,1,1,1,1) - decaract (1,1,1,1,1,1,1,1,2) - octaracticircle (1,1,1,1,1,1,2,2) - hexeractiduocylinder (1,1,1,1,2,2,2) - tesseractitricylinder (1,1,2,2,2,2) - tetracubinder (2,2,2,2,2) - pentacylinder (1,1,1,1,1,1,1,3) - hepteractisphere (1,1,1,1,1,2,3) - penteractisphericircle (1,1,1,2,2,3) - cubispheriduocylinder (1,2,2,2,3) - triple sphericylinder (1,1,1,1,3,3) - tesseractiduosphere (1,1,2,3,3) - duosphericubinder (2,2,3,3) - duospheriduocylinder (1,3,3,3) - trispherinder (1,1,1,1,1,1,4) - hexeractiglome (1,1,1,1,2,4) - tesseractiglomocircle (1,1,2,2,4) - glomoduocubinder (2,2,2,4) - glomotricylinder (1,1,1,3,4) - glomosphericube (1,2,3,4) - glomosphericylinder (3,3,4) - glomoduosphere (1,1,4,4) - duoglomosquare (2,4,4) - duoglomocircle (1,1,1,1,1,5) - petaglomopenteract (1,1,1,2,5) - petaglomocubicircle (1,2,2,5) - dual petaglomocylinder (1,1,3,5) - petaglomospherisquare (2,3,5) - petaglomosphericircle (1,4,5) - petaglomoglominder (5,5) - duopetaglome (1,1,1,1,6) - exaglomotesseract (1,1,2,6) - exaglomocubinder (2,2,6) - exaglomoduocylinder (1,3,6) - exaglomospherinder (4,6) - exaglomoglome (1,1,1,7) - zettaglomocube (1,2,7) - zettaglomocylinder (3,7) - zettaglomosphere (1,1,8) - yottaglomosquare (2,8) - yottaglomocircle (1,9) - xennaglominder (10) - dakaglome By the way, here are the general rules for extending and rotating: Extending - just add "1" to the list. Rotating - n-dimensional figure is rotated around (n-1)-dimensional hyperplane into (n+1) dimensions. If you have the list, (n-1) dimensions mean that there is exactly one dimension of the figure it will be rotated in (not around). This dimension falls in exactly one number of the list. To produce the rotation, increase this number by 1. Example: Let's take the glomosphericylinder (1,2,3,4). It can be extended to (1,1,2,3,4) - glomosphericubinder, or rotated in four different ways: 1. Around (2,3,4) - glomosphericircle, to (2,2,3,4) - glomospheriduocylinder 2. Around (1,1,3,4) - glomospherisquare, to (1,3,3,4) - glomoduospherinder 3. Around (1,2,2,4) - dual glomocylinder, to (1,2,4,4) - duoglomocylinder 4. Around (1,2,3,3) - duosphericylinder, to (1,2,3,5) - petaglomosphericylinder And, still, somewhere along the way, we forget that the spheric product is a coherent, radial product, and that you can treat it akin to the prism or tegum products, viz p-gon () q-gon spherial. Wherever one has a prism one can have a spherion, eg pentagon-enneagon spherion. By the time one hits nine dimensions, one can have such ungamely things as a dodecahedron-icosahedron-prism by rhombododecahedron spherion. ( x5o3o #* x3o5o ) ø (o3m4o) Maybe i'm amazed The dream you dream alone is only a dream the dream we dream together is reality. OK, I sort of understand what you're saying, but I sort of don't. Probably because I'm still in High School and I haven't been exposed to those types of equations and terms yet... Here is what I named all 30 rotaocta: (1,1,1,1,1,1,1,1,1) Nentacube (1,1,1,1,1,1,1,2) Octacubinder (1,1,1,1,1,1,3) Heptacubispherinder (1,1,1,1,1,2,2) Heptacubitorinder (1,1,1,1,1,4) Hexacubiglominder (1,1,1,1,2,3) Hexacubispheritorinder (1,1,1,2,2,2) Pentacubiduotorinder (1,1,1,1,5) Pentacubipentominder (1,1,1,2,4) Pentacubiglomitorinder (1,1,1,3,3) Pentacubiduospheritorinder (1,1,2,2,3) Tetracubispheriduotorinder (1,2,2,2,2) Cubitritorinder (1,1,1,6) Hexomitetracubinder (1,1,2,5) Tetracubipentomitorinder (1,1,3,4) Tetracubiglomispheritorinder (1,2,2,4) Cubiglomiduotorinder (1,2,3,3) Cubiduospheriduotorinder (2,2,2,3) Spheritritorinder (1,1,7) Heptomicubinder (1,2,6) Cubihexomitorinder (1,3,5) Cubipentomispheritorinder (2,2,5) Pentomiduotorinder (1,4,4) Duoglomicubitorinder (2,3,4) Glomispheriduotorinder (3,3,3) Trispheriduotorinder (1,8} Octominder (2,7) Heptomitorinder (3,6) Hexomispheritorinder (4,5) Pentomiglomitorinder (9) Nentome Here are my rules for naming the rotatopes: 1. When doing the cube part of the name, always name it first if it's the highest dimension part not characterizing the torinder part, and last if it's the lowest dimension part not characterizing the torinder part. For example, (1,1,4) would be named Glomicubinder, while (1,1,1,3) would be named Tetracubispherinder, and the (1,2,5) would be named Cubipentomitorinder. 2. Also when doing the cube part of the name, always name the cube part according to the number of numbers in the identifier. If the rotatope is a torinder of some sort, then according to the number of numbers up to the second number greater than one, like (1,2,2,2) would be named the Cubitritorinder, since the third number is the second number greater than one. 3. When doing the sphere part, name every sphere part in the rotatope as an individual sphere part and according to all the dimension parts greater than 2, like the (1,1,1,3) would be named the Tetracubispherinder, the (3,4,5) would be named the Pentomiglomispheriduotorinder, despite how long the name is, and the (1,1,1,1,2) would just be named the Pentacubinder. Always place a sphere part after the cube part if it's dimension is higher and it doesn't characterize the torinder part, and always place the cube part right before the sphere part if the sphere characterizes the torinder. 4. When doing the torinder part, put the word "torinder" at the end of the name always and only if there is more than one number greater than one in the identifier, according to how many of such numbers there are minus one. Like (2,2) is named the torinder, the (2,2,2) is named the duotorinder, the (2,2,2,2) is named the tritorinder, the (2,2,2,2,2) is named the tetratorinder, etc. Also put all sphere parts characterizing the torinder part directly before n-torinder, and placed in order from highest dimension to lowest dimension. (1,1,2,3) is named Tetracubispheritorinder, and (1,1,3,6,7) would be named Tetracubiheptomihexomispheriduotorinder (please don't get into explanations about the 18th dimension and up, this is just for demonstrational purposes). To make it so that you don't have to think, just name the spheres backwards (if you label the rotatope from lowest to highest dimensional parts). 5. For an n-dimensional rotatope, if there are n ones in the identifier, or there is just the number n, then you leave "inder" out of the name, for all else put "inder" at the end. OK, I just need to brush up on these numbers. I see that an n-dimensional object that is all ones is that dimension's orthotope (hypercube), and an object just denoted by n in itself is the hypersphere (or apeirotope/achanetope in my nomenclature). So I take it "1" represents linear extension into the next highest dimension. 2 is supposed to be rotations, right? 3 and above represents the number of axes of rotation? A circle is one rotation of a line, and to rotate a circle into a sphere would be two rotations. Or is the number of rotations just the total dimensions of the object formed by the rotation? It is interesting that you all have named all of these higher dimensional things, because now we can know the name of the shape of the universe in string theory. In addition to the three extended dimensions we are familiar with, there are supposed to be six tiny dimensions (forming what Michio Kaku calls "a twisted 6D torus"). That would be (1, 1, 1, 6) exaglomocube The way I had named these objects was simply by their bases or number of "curved" versus "straight" dimensions of the surface, so in 4D you could have a spherical hypercylinder, (2 curved, one straight) or a cylindrical hypercylinder (2 straight, one curved). Of course, in higher dimensions, it would get messy, as sperical hypercylindrical hyper-hypercylinder, etc. (I never even bothered with the 1,1,1,6 system). Then, I found you all simpler naming system. Actually, the digits in the numbers represent the n-dimensional spherical parts. Like 1 represents a 1D sphere, or a line, 2 represents a 2D sphere - circle - 3 represents a sphere, 4 a glome, 5 pentome, etc. The numbers are like a set of instructions on how to get the rotatope starting with a point. Take the cylinder for instance. Its number is (1,2), meaning to take a point, extend it linearly, then extend it circularly. Or you could take a point, extend it circularly, then extend it linearly - you can do it in any order. The spherinder (1,3) tells you to take a point, extend it linearly, then extend it spherically - or take a point, extend it spherically, then extend it linearly. The duocylinder (2,2) tells you to take a point, extend it circularly twice. Speaking of duocylinders, I found something out. When you take a circle and extend it around in a circle, you don't get a 4D torus, but you get what I call a near four-dimensional cylindrical tube. A near-dimensional object is an n-dimensional object curved or folded (n+1)-dimensionally. Say you have a rectangle and you roll it up into a tube. The resultant shape isn't 2D, because it curves 3D, and it isn't 3D either, because, being a rectangle curved in 3D space, it has no depth, so it's neither 2D nor 3D, so it's an N3D rectangular tube. To make it a cylinder, you need to attach circles at the two ends and fill in the empty space inside. If you don't fill in the empty space, then it's just an N3D cylindrical surface. it still doesn't have any volume, because it's just made up of a curved rectangle and two circles. When you talk about the volume of a hollow 3D object, you're actually talking about the volume of space inside it. Here's another thing - how are we going to use numbers to identify prisms? I came up with a system that might work. Rotatopes could be said to be round prisms, so when naming the other prisms, let's keep the ( , ) format. To show a polytope part, surround it in brackets []. So a triangular prisminder's identifier would look like this: ([3], 2). You can put more than one number in the brackets, so if you're talking about a Tripentagonal Prisminder (cartesian product of a triangle, pentagon, and circle), then you write ([3, 5], 2). A Triangular Spheriprisminder would look like ([3], 3), and a Tripentagonal Glomiprisminder would look like ([3, 5], 4). When you're talking about what Marek14 calls duoprisms, what I call polyprisms, both parts are polytopes, so you put both parts in brackets. A duotriangular prism would be written as ([3], [3]). A tripentagonal prism would be written as ([3], [5]). A Trirectangular Pentagonal Prism could either be written as ([3, 4], [5]) or ([3], [4, 5]). But what about prisms with Archimedean parts? Like the octahedric triangular prism or the hexacosichoral tetrahedric prism? Instead of putting those parts in brackets, put their Schläfli symbols in curly braces. The tetrahedric prism would be ({3,3}, 1). The octahedric pentagonal prism would look like ({3,4}, [5]). The dodecahedric heptagonal spheriprisminder would look like ([{3,5}, 7], 3). And you can have shapes with 4D+ Archimedean parts, like the Hexidecachoral Tripentagonal Pentomiprisminder, which is written as ([{3,3,3}, [3, 5]], 5). What about a duotetrahedric pentaheptagonal duopentagonal duospheriprisminder? That could be written out as ([[{3,3}, {3,3}], [[5, 7], [5, 5]]], (3, 3)), but that's a lot of brackets and braces to keep up with, and you could get easily confused. There is another format that you can use instead of the double-part format. You can use the multi-part format, so now the duotetrahedric pentaheptagonal duopentagonal duospheriprisminder can be written out as ({3,3}, {3,3}, [5, 7], [5, 5], 3, 3). You can also group them all together in brackets, like this: [{3,3}, {3,3}, 5, 7, 5, 5, (3, 3)]. Say you have a rectangle and you roll it up into a tube. The resultant shape isn't 2D, because it curves 3D, and it isn't 3D either, because, being a rectangle curved in 3D space, it has no depth, so it's neither 2D nor 3D, so it's an N3D rectangular tube. To make it a cylinder, you need to attach circles at the two ends and fill in the empty space inside. In a notation I came up with in another thread, the object you're describing is one "cell" of the 2D form of the cylinder. The 1D form is a pair of hollow circles. The 2D form has two cells: a curved rectangle, and a pair of solid circles. The 3D form is simply a solid cylinder. The general notation that i use for the various cylinders and prism products are as follows. 1. Mirror-edge (ie most of the uniform figures) figures are given by the lining notation, eg truncated cube = x4x3o. 2. Circles and cylinders are treated as the polytope group {O}, {O,O}, with the letter O. This allows xOo = circle, xOoOo = sphere, etc. 3. When multiple x appear in the circle/sphere group, this gives rise to an ellipsoid, the further x increases the size of the axies, ie xOo = circle xOx = ellipse xOoOo = sphere, xOxOo = oblate ellipsoid, xOoOx = prolate ellipsoid. ie you can write aObOcOd, and x means <, o as =, and start from zero, so xOoOxOo means 0 < a = b < c = d. 4. & generates the prism product, so xOo&xOo = duocylinder xOo&x3o5x = circle . rhombododecaicosahedron &c, &c. The dream you dream alone is only a dream the dream we dream together is reality. Actually, the digits in the numbers represent the n-dimensional spherical parts. Like 1 represents a 1D sphere, or a line, 2 represents a 2D sphere - circle - 3 represents a sphere, 4 a glome, 5 pentome, etc. The numbers are like a set of instructions on how to get the rotatope starting with a point. Take the cylinder for instance. Its number is (1,2), meaning to take a point, extend it linearly, then extend it circularly. Or you could take a point, extend it circularly, then extend it linearly - you can do it in any order. The spherinder (1,3) tells you to take a point, extend it linearly, then extend it spherically - or take a point, extend it spherically, then extend it linearly. The duocylinder (2,2) tells you to take a point, extend it circularly twice. Thanks, though I'm still trying to understand "extending a point spherically". Speaking of duocylinders, I found something out. When you take a circle and extend it around in a circle, you don't get a 4D torus, but you get what I call a near four-dimensional cylindrical tube. A near-dimensional object is an n-dimensional object curved or folded (n+1)-dimensionally. Say you have a rectangle and you roll it up into a tube. The resultant shape isn't 2D, because it curves 3D, and it isn't 3D either, because, being a rectangle curved in 3D space, it has no depth, so it's neither 2D nor 3D, so it's an N3D rectangular tube. To make it a cylinder, you need to attach circles at the two ends and fill in the empty space inside. If you don't fill in the empty space, then it's just an N3D cylindrical surface. it still doesn't have any volume, because it's just made up of a curved rectangle and two circles. When you talk about the volume of a hollow 3D object, you're actually talking about the volume of space inside it. Well, as I was discussing a while ago, if you take the two ends of that hollow tube, and join them together in 4-space, that is the true shape of the Asteroids screen, and all you have to do is fill the ends in with torii, and that is the duocylinder. But when dealing with curved spaces, the dimensionality does not change into a "near n+1". Our space may be curved, but it is not consideed "near 4", it is normal 3-space. Dealing with the surface of the brane is not the same as dealing with the overall shape (n+1) it curves into. Thanks, though I'm still trying to understand "extending a point spherically". That means to replace the point with a sphere. But it's important to consider what dimensions the sphere is in. Sometimes it's possible to "spherate" without changing the dimension at all. For instance, the duocylinder is a 2D object in 4D space. This means it has two leftover dimensions, so you can replace each point with a circle and still get a 4D object (the tiger). I've found that the bracket notation is probably the simplest way to describe rotatopes and toratopes. You can either use xyz, or simply 111 (i.e. the same as Marek14's notation without the plusses). The 4D rotatopes are xyzw = 1111 (xy)zw = (11)11 = 211 (xyz)w = (111)1 = 31 (xy)(zw) = 22 (xyzw) = 4 and the other 4D shapes are: ((xy)z)w = ((11)1)1 = (21)1 ((xy)zw) = ((11)11) = (211) ((xyz)w) = ((111)1) = (31) ((xy)z)w) = (((11)1)1) = ((21)1) ((xy)(zw)) = ((11)(11)) = (22) If you want to include polygons and stuff, you need to define several kinds of products. It's no good to just make up names for them, you need to work out exactly what they mean. That way we can take the product of a "duotetrahedric pentaheptagonal duopentagonal duospheriprisminder" and a catenoid, or a klein bottle and a cantor set. Last edited by PWrong on Wed Jan 11, 2006 3:23 pm, edited 1 time in total. OK, I think I get it. Though that word "extend", while easy to understans when speaking "linearly" (a point extended once linearly is a line) or even circularly (a point rotated once circularly is a circle), when you talk of a sphere (which is not nearly as simple to be swept out by a single point rotating), and then say "replace", it throws you off. I understand it all better in terms of numbers of "straight dimensions" versus "curved" (circular) dimensions. So I take it in the bracket notations, the numbers in parentheses are the "curved" dimensions and the ones outside are the straight ones. Now, I've heard of the torinder. (But for some reason have missed a full description on what exactly it is. I imagine it is a cylinder rotated in 4D somehow). But what are these other things you mention? A "tiger"? "circles phere, sphere circle, and circle3"? I see you have brackets within brackets, there. 22 vs. (22), etc. I know, for instance, the duocylinder is "circular" in two perpendicular (i.e. "straight") dimensions. So is this (22) denoting something perpendicular in circular dimensions, or something like that? Then you have two that come out as (31). Are they the same thing, but arrived at different ways? Eric B wrote:Now, I've heard of the torinder. (But for some reason have missed a full description on what exactly it is. I imagine it is a cylinder rotated in 4D somehow). Torinder is a torus/line prism. I.e. what you get when you take a torus and drag it along a line into 4th dimension. I think its symbol in this notation should be ONLY (21)1, definitely not 31, which is spherinder. But what are these other things you mention? A "tiger"? These require some background. We discussed these shapes thoroughly before on previous threads - you might want to read them so we wouldjn't need to repost everything in here. Basically, tiger started its existence as a 4D surface with anomalous parametric equations: x = r1*cos a + r3*cos a*cos c y = r1*sin a + r3*sin a*cos c z = r2*cos b + r3*cos b*sin c w = r3*sin b + r3*sin b*sin c This is what we got from pondering about parametric equations of various kinds of torii. "circles phere, sphere circle, and circle3"? I see you have brackets within brackets, there. 22 vs. (22), etc. I know, for instance, the duocylinder is "circular" in two perpendicular (i.e. "straight") dimensions. So is this (22) denoting something perpendicular in circular dimensions, or something like that? Then you have two that come out as (31). Are they the same thing, but arrived at different ways? Every toratope is either a number or something enclosed in parenthesis. If multiple toratopes are linked together without being enclosed in parenthesis, they are combined in prism product. (22) is a tiger, and you will probably have to read the other threads to find out what it really is. (31) is sphere*circle which can be imagined in this way: 1. have a 3D sphere and put it into 4-space. 2. replace each point of this sphere with a circle whose one dimension is the radial dimension (line from the point to the center of the sphere) and whose second dimension is the 4th dimension, perpendicular to the 3-space where the sphere lies. In effect, it's a set of all points which have a specific distance from the sphere in 4D. This is analogical of one way how a torus is constructed. Analogically, circle*sphere is a set of points in 4D with specific distance from a circle and circle^3 is a set of points in 4D with specific distance from a torus. I think its symbol in this notation should be ONLY (21)1, definitely not 31, which is spherinder. You're right, that was misleading. I'll edit it. However, I'm working on a efficient way to count all of these shapes without writing them all down, since I can't find a formula. Under this system, (21)1 and 31 would be equivalent, because they can both be "bracketed" in only one way i.e. (31). Torinder is a torus/line prism. I.e. what you get when you take a torus and drag it along a line into 4th dimension. I should also mention that I sometimes use "torinder" (for want of a better name) to refer to anything that isn't a rotope or a toratope, i.e. a prism product of toratopes. It turns out the formula I found on mathworld (in another thread) was right after all! The rotopes can easily be divided into two categories: those completely enclosed by brackets (toratopes), and those that aren't (rotatopes and torinders). Each rototope/torinder can be turned into a toratope. Just put brackets around it. 1111 -> 4 211 -> (211) 22 -> (22) 31 -> (31) In the old thread, I described a method for counting rotopes. You take a rotatope, and replace each n-sphere with any nD toratope (toratopes also include spheres). The only problem was I didn't allow you to replace a sphere with a beast. I've just fixed that problem, and even written a program in mathematica to list the rotopes in any dimension (see the programming forum). Marek14 wrote:Torinder is a torus/line prism. I.e. what you get when you take a torus and drag it along a line into 4th dimension. I think its symbol in this notation should be ONLY (21)1, definitely not 31, which is spherinder. Actually, when I said "torinder" I meant "duocylinder". It was a name I came up with for that object to make the names for classifying higher-dimensional rotatopes easier. But the figure was incorrectly named. I called it the "torinder" because when you extend a circle in a circular path you get a figure which I thought was a 4D torus. But it's actually - what I call - a near-4D cylindrical tube (like a hollow cylinder is a rectangular tube). Eric B wrote:But when dealing with curved spaces, the dimensionality does not change into a "near n+1". Our space may be curved, but it is not consideed "near 4", it is normal 3-space. Dealing with the surface of the brane is not the same as dealing with the overall shape (n+1) it curves into. That is kinda true. If our 3D space is curved, it still is 3D space, but it is curved in 4D space to form the surface of a 4D figure. It is still 3D in the sense that there are 3 perpendicular dimensions in it, but it is also 4D in the sense that it is contained in a 4D hyperplane. It is both at the same time, but it isn't entirely 3D, because it isn't contained in a 3D hyperplane, and it isn't entirely 4D, because it has no size in 4D space. Now I remember that during a boring history class, I wrote the cartesian for the duocylinder, as I think it is (x**2+y**2<=1, z**2+w**2<=1) and, by projecting it on the four mutually perpendicular hyperplanes Oxyz, Oxyw, Oxzw and Oyzw, I got only cylinders, which seemed a little odd to me. Is it correct?
{"url":"http://hddb.teamikaria.com/forum/viewtopic.php?f=24&t=409","timestamp":"2014-04-17T00:56:28Z","content_type":null,"content_length":"104175","record_id":"<urn:uuid:a74f9189-1d4f-452e-89b3-c5fd5740cb69>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
a Barlow This page last modified 2003 Aug 04 Barlow Lenses What is a Barlow Lens? A Barlow is a negative (diverging) lens that is placed between the objective lens (or primary mirror — from now on these words will be used interchangeably) and the eyepiece of a telescope. It increases the effective focal length of an objective lens, thereby increasing the magnification. The idea is that 2 eyepieces and a Barlow will give you the flexibility of magnification of 4 eyepieces, and will give higher magnifications with less powerful eyepieces. What are its Advantages and Disadvantages? Assuming that the Barlow is a good one, the only disadvantage is a slight loss of light throughput — this is of the order of 3%. The advantages are numerous: • Higher magnifications can be attained with longer focal-length eyepieces than would be possible without the Barlow. Short focal length eyepieces necessarily have optical surfaces that are more curved and therefore are likely to introduce more aberrations. • A Barlow increases the effective focal ratio of the objective. This gives a more acute light cone, which is less demanding of eyepiece quality because: 1. Rays at the periphery of the cone are closer to being paraxial and thus are less subject to aberration. 2. A smaller area of the field lens is used. • Many eyepieces have an eye relief (distance of exit pupil from eye lens) that is directly related to its focal length. For example, the eye relief of a Plössl is 0.73 × its focal length. Thus, with these eyepieces, for a given magnification there will be greater eye relief with a barlow than without. • Many eyepiece types do not work well with short focal-ratio objectives. The Barlow effectively increases the focal ratio, allowing the eyepiece to work well. How does a Barlow work? Barlow Amplification The amplification factor of a Barlow is a function of its position in relation to the eyepiece and the objective lens (or primary mirror). For any given eyepiece and objective, the Barlow-eyepiece separation and the Barlow-objective separation are related because the focal plane of the eyepiece is the same as the focal plane of the objective-Barlow combination; as the separation between the eyepiece and the Barlow increases, the separation of the Barlow and objective decreases. The amplification factor of a Barlow can be increased by increasing its separation from the eyepiece using an extension tube — it must simultaneously be brought closer to the objective. One thing that you need to watch for with Barlows used outside their design amplification factor is spherical aberration. SA will be minimised at the design factor, but will almost certainly be present outside this, although it may not be discernible. (But visually, using the old trick of shifting the Barlow to the "other" side of the star diagonal or of using extension tubes, this may be compensated by reduced SA in the eyepiece, as a consequence of a more acute light cone.) Eyepiece Choice If you use a Barlow with fixed-focus eyepieces, you need to give some thought to a suitable choice. If, for example, you have a x2 Barlow and a 25mm eyepiece, there is little point in acquiring a 12.5mm; it will mimic the 25mm + Barlow. A suitable choice might be 32mm, 18mm, 12mm. Stop here unless you fancy some basic high school physics & maths. Barlow Maths Calculating Barlow magnification: F = focal length of objective or primary f = focal length of Barlow [1] J = joint focal length (effective focal length) d = separation of Barlow and original focal plane (objective focal plane) x = separation of barlow and new focal plane (eyepiece focal plane) M = amplification of Barlow J = (F×f)/(f-d) ...(1) (combined lens formula) M = J/F ...(2) (by definition) = f/(f-d) The separation of the Barlow and the new focal plane can be calculated from M and f: x = f×(M-1) ...(3) ...from which we get : M = 1 + (x/f) One of the connotations of all this is that a Barlow that is its own focal length inside the original focal plane (d) will produce a collimated (i.e. parallel) beam. Another is that d only needs to change slightly to bring about significant variations in x (play with the formulae — or your telescope — to see this) [2]. Finding the approximate Focal Length of a X2 Barlow The simplest way to do this is as follows: 1. Locate the location of the field stop inside an eyepiece. 2. Mark this position on the outside of the eyepiece barrel. 3. Locate the position of the middle of the lens grouping in the Barlow. 4. Mark this position on the outside of the Barlow barrel. 5. Insert the eyepiece into the Barlow. 6. Measure the distance between the two marks. This is the approximate focal length of the Barlow. Note: This can only be approximate as the distance of the field stop from the "shoulder" of the eyepiece barrel varies from eyepiece to eyepiece. This is why the marked amplification factor of a Barlow can only be nominal. Worked examples: 1. Based on Separation of Eyepiece Focal Plane and Barlow Let us take a 75mm focal length x2 (nominal) Barlow used at its designed amplification. (f = 75mm, M = 2) M = 1 + (x/f) δx = f(M - 1) = 75(2 - 1) mm = 75mm This relationship (the separation of Barlow and the new focal plane is equal to the focal length of the Barlow) holds for any x2 Barlow. Let us now use the old trick of increasing Barlow amplification by inserting a star diagonal between the eyepiece and Barlow. Assume that the star diagonal adds 80mm to the optical path. M = 1 + (x/f) = 1 + (75 + 80)/75 = 3.07 i.e. a nominal x2 Barlow has become an (approximate) x3 Barlow. Similarly, the introduction of a 150mm extension tube instead of the diagonal will give an amplification factor of x4. 2. Based on Separation of Objective Focal Plane and Barlow. Let's take a 150mm f/10 objective (F = 1500mm) with a 75mm focal length Barlow (f) placed 50mm inside focus (d). Substituting in equation (1): J = (F×f)/(f-d) = (1500 × 75)/(75-50) mm = 4500mm Substituting in equation (2): M = J/F = 4500/1500 = 3 Hence we have an amplification factor of ×3. Substituting in equation (3): x = f×(M-1) = 75 × (3 - 1) mm = 150mm Using the same objective with the Barlow 37.5mm inside the original focus, equation (1) gives J = 3000, equation (2) gives M = ×2, and equation (3) gives x = 75mm. [2] [1] For the purposes of these equations, the focal length of the Barlow is signed positive. Although I generally use RIP, in this context I prefer this way of doing things because the introduction of a negative f tends to lead to more errors. If you wish to use the RIP convention, f is negative and the equations must be modified accordingly. I will leave that as an exercise for the interested [2] Note, from the numerical examples, how a 12.5mm shift in the Barlow has resulted in a 75mm change in x. This also explains why, when you use a zoom eyepiece (zoom is essentially a moveable Barlow), only slight refocusing is required when you change the effective focal length of the eyepiece.
{"url":"http://astunit.com/astunit_tutorial.php?topic=barlow","timestamp":"2014-04-19T17:31:52Z","content_type":null,"content_length":"21669","record_id":"<urn:uuid:f0e49d86-4c9a-4b55-bd4a-d78c80ddc6e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
hemisphere flux for exams June 12th 2013, 03:02 AM #1 May 2013 hemisphere flux for exams I require help with this q.thanks. find the flux of F= (xz²+y )i+( x²y-z)j +(xy²+y²z)k outwards across the entire surface of the hemispherial region bounded by z=(1-x²-y²)^½ and z=0 Re: hemisphere flux for exams Hey n22. It's been a while since I did these integrals but you need to find the normal vector to start off with. Remember that the normal vector is the cross product of the derivative in one direction with the derivative in another (i.e. with respect to x and y). June 12th 2013, 03:36 AM #2 MHF Contributor Sep 2012
{"url":"http://mathhelpforum.com/calculus/219763-hemisphere-flux-exams.html","timestamp":"2014-04-17T19:21:38Z","content_type":null,"content_length":"31059","record_id":"<urn:uuid:611aef32-2724-495d-8f72-347b6acecc89>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
An Isaac Newton Institute Workshop First-Passage and Extreme Value Problems in Random Processes The statistics of occupation times 29th June 2006 Author: Godreche, C (CEA Saclay) I will present a mini-review of some results on the statistics of occupation times for coarsening systems (spin systems, diffusion equation), or for simpler stochastic models (e.g., renewal I will present a mini-review of some results on the statistics of occupation times for coarsening systems (spin systems, diffusion equation), or for simpler stochastic models (e.g., renewal
{"url":"http://www.newton.ac.uk/programmes/PDS/Abstract3/godreche.html","timestamp":"2014-04-18T15:43:36Z","content_type":null,"content_length":"2291","record_id":"<urn:uuid:a5b39084-0d29-47e7-9b34-fdfa784a8299>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
May 20th 2010, 12:00 PM #1 Nov 2008 Let K be a field, and (x) -> K[x] be an injective map where (x) is the principal ideal generated by x. (i) Prove that the quotient K[x]-module, K[x]/(x), is isomorphic to K as an abelian group and also as a K-vector space. Hence there is an exact sequence of K[x]-modules : 0 -> (x) -> K[x] -> K -> 0 (ii) Is this sequence split as a sequence of K[x]-modules ? (iii) Is this sequence split as a sequence of K-modules ? (i) is clear. But I can't make sense nor find a solution for (ii) and (iii). Which is why I require your help ! Let K be a field, and (x) -> K[x] be an injective map where (x) is the principal ideal generated by x. (i) Prove that the quotient K[x]-module, K[x]/(x), is isomorphic to K as an abelian group and also as a K-vector space. Hence there is an exact sequence of K[x]-modules : 0 -> (x) -> K[x] -> K -> 0 (ii) Is this sequence split as a sequence of K[x]-modules ? (iii) Is this sequence split as a sequence of K-modules ? (i) is clear. But I can't make sense nor find a solution for (ii) and (iii). Which is why I require your help ! the answer to (ii) is negative and to (iii) is positive. to see this, let $I=\langle x \rangle$ and $\pi: K[x] \longrightarrow K[x]/I$ be the natural homomorphism. let $\text{id}$ be the identity map over $K[x]/I.$ suppose that $\alpha: K[x]/I \longrightarrow K[x]$ is a $K[x]$ homomorhism such that $\pi \alpha = \text{id}.$ now, for any $p(x) \in K[x]$ we have $xp(x) \in I$ and thus $0=\alpha(xp(x)+I)=x \ which means $\alpha(p(x)+I)=0,$ i.e. $\alpha = 0$ and thus $\text{id}=\pi \alpha = 0,$ which is non-sense. for (ii) just define $\beta : K[x]/I \longrightarrow K[x]$ by $\beta(p(x)+I)=p(0).$ see that $\beta$ is a well-defined $K$ homomorphism and $\pi \beta = \text{id}.$ (note that $p(0)+I=p(x)+I.$) May 20th 2010, 03:03 PM #2 MHF Contributor May 2008
{"url":"http://mathhelpforum.com/advanced-algebra/145736-k-x-module.html","timestamp":"2014-04-17T22:40:14Z","content_type":null,"content_length":"37349","record_id":"<urn:uuid:69c9f8f0-eaf4-4e2e-8b89-446b5df95ffb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematician Fills in a Blank for a Fresh Insight on Art Titled "Print Gallery," it provides a glimpse through a row of arching windows into an art gallery, where a man is gazing at a picture on the wall. The picture depicts a row of Mediterranean-style buildings with turrets and balconies, fronting a quay on the island of Malta. As the viewer's eye follows the line of buildings to the right, it begins to bulge outward and twist downward, until it sweeps around to include the art gallery itself. In the center of the dizzying whorl of buildings, ships and sky, is a large, circular patch that Escher left blank. His signature is scrawled across it. As Dr. Lenstra studied the print he found his attention returning again and again to that central patch, puzzling over the reason Escher had not filled it in. "I wondered whether if you continue the lines inward, if there's a mathematical problem that cannot be solved," he said. "More generally, I also wondered what the structure is behind the picture: how would I, as a mathematician, make a picture like that?" Most people, having thought this far, might have turned the page, content to leave the puzzle unsolved. But to Dr. Lenstra, a professor at the University of California at Berkeley and the University of Leiden in the Netherlands, solving mathematical puzzles is as natural as breathing. He has been known, when walking to a friend's house, to factor the street address into prime numbers in order to better fix it in his mind. So Dr. Lenstra continued to mull over the mystery and, within a few days of his arrival, was able to answer the questions he had posed. Then, with students and colleagues in Leiden, he began a two-year side project, resulting in a precise mathematical version of the concept Escher seemed to be intuitively expressing in his picture. Maurits Cornelis Escher, who died in 1972, had only a high school education in mathematics and little interest in its formalities. Still, he was fascinated by visual mathematical concepts and often featured them in his art. One well-known print, for instance, shows a line of ants, crawling around a Moebius strip, a mathematical object with only one side. Another shows people marching around a circle of stairs that manage, through a trick of geometry, to always go up. The goal of his art, Escher once wrote in a letter, is not to create something beautiful, but to inspire wonder in his audience. Seeking insight into Escher's creative process, Dr. Lenstra turned to "The Magic Mirror of M. C. Escher," a book written (under the pen name of Bruno Ernst) by Hans de Rijk, a friend of Escher's, who visited the artist as he created "Print Gallery." Escher's goal, wrote Mr. de Rijk, was to create a cyclic bulge "having neither beginning nor end." To achieve this, Escher first created the desired distortion with a grid of crisscrossing lines, arranging them so that, moving clockwise around the center, they gradually spread farther apart. But the trick didn't quite work with straight lines, so he curved them. Then, starting with an undistorted rendition of the quayside scene, he used this curved grid to distort the scene one tiny square at a time. After examining the grid, Dr. Lenstra realized that carried to its logical extent, the process would have generated an image that continually repeats itself, a picture inside a picture and so on, like a set of nested Russian wooden dolls. Thus, the logical extension of the undistorted picture Escher started with would have shown a man in an art gallery looking at print on the wall of a quayside scene containing a smaller copy of the art gallery with the man looking at a print on the wall, and so on. The logical extension of "Print Gallery," too, would repeat itself, but in a more complicated way. As the viewer zooms in, the picture bulges outward and twists around onto itself before it repeats. Once Dr. Lenstra understood this basic structure, the task was clear: If he could find an exact mathematical formula for the repetitive pattern, he would have a recipe for making such a picture with the missing spot filled in. Measuring with a ruler and protractor, he was able to estimate the bulging and twisting. But to compute the distortion exactly, he resorted to elliptic curves, the hot topic of mathematical research that was behind the proof of Fermat's last theorem. Dr. Lenstra knew he could apply elliptic curve theory only after reading a crucial sentence in Mr. de Rijk's book. For esthetic reasons, Mr. de Rijk explains, Escher fashioned his grid in such a way that "the original small squares could better retain their square appearance." Otherwise, the distortion of the picture would become too extreme, smearing individual elements like windows and people to the point that they were no longer recognizable. "At first, I followed many false leads, but that sentence was the key," Dr. Lenstra said. "After I read that, I knew exactly what was happening." Escher was creating a distortion with a well-known mathematical property: if you look at small regions of the distorted picture, the angles between lines have been preserved. "Conformal maps," as such distortions are known, have been extensively studied by mathematicians. In practice, they are used in Mercator projection maps, which spread the rounded surface of the earth onto a piece of paper in such a way that although land masses are enlarged near the poles, compass directions are preserved. Conformal principles are also used to map the surface of the human brain with all the folds flattened out. Knowing that Escher's distortion followed this principle, Dr. Lenstra was able to use elliptic curves to convert his rough approximation of the distortion into an exact mathematical recipe. He then enlisted a Leiden colleague, Bart de Smit, to manage the project and several students to help him. First, the mathematicians had to unravel Escher's distortion to obtain the picture he started with. A student, Joost Batenburg, wrote a computer program that took Escher's picture and grid as input and reversed Escher's tedious procedure. Once the distortion was undone, the resulting picture was incomplete. Some of the blank patch in the center of "Print Gallery" translated into a blurred swath spiraling across the top of the picture. So, the researchers hired an artist to fill in the swath with buildings, pavement and water in the spirit of Escher. Starting with this completed picture, Dr. de Smit and Mr. Batenburg then used their computer program in a different way, to apply Dr. Lenstra's formula for generating the distortion. Finally, they achieved their goal: a completed, idealized version of Escher's "Print Gallery." In the center of the mathematician's version, the mysterious blank patch is filled with another, smaller copy of the distorted quayside scene, turned almost upside-down. Within that is a still smaller copy of the scene, and so on, with the remaining infinity of tiny copies disappearing into the center. Since Escher's distortion was not perfectly conformal, the mathematician's rendition differs slightly from his in other ways as well. Away from the center, for example, the lines of some of the buildings curve the opposite way. The researchers also used their program to create variations on Escher's idea: one in which the center bulges in the opposite direction, and even an animated version that corkscrews outward as the viewer seemingly falls into the center. After a recent talk Dr. Lenstra gave at Berkeley, the audience remained seated for several minutes, mesmerized by the spiraling scene. While Dr. Lenstra has solved the mystery of the blank patch and more, one question remains. Did Escher know what belonged in the center and choose not to represent it, or did he leave it blank because he didn't know what to put there? As a man of science, Dr. Lenstra said he found it impossible to put himself inside Escher's mind. "I find it most useful to identify Escher with nature," he said, "and myself with a physicist that tries to model nature." Mr. de Rijk, now in his 70's, said he believed Escher knew his picture could continue toward the center, but did not understand precisely what should go there. "He would be astonished to experience that his print was still much more interesting than was his intention," Mr. de Rijk said. He added that while he knew of another effort to fill in Escher's picture, it was not based on an understanding of the mathematics behind it. "He was always interested when somebody used his prints as a base for further study and applications," Mr. de Rijk said. "When they were too mathematical, he didn't understand them, but he was always proud when mathematicians did something with his work."
{"url":"http://www.msri.org/people/members/sara/articles/escher.html","timestamp":"2014-04-20T23:28:51Z","content_type":null,"content_length":"64839","record_id":"<urn:uuid:c54aa7be-dab1-4d82-a039-19a56cf68d4d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Loading the player ... Download list: This podcast is part of the series: Alabama Quality Teaching Standards Demonstration Lessons Suzanne Culbreath Governors Commission on Quality Teaching in cooperation with Spain Park High School, Hoover City Schools Part 1 of 2: This video begins with a thought jogger exercise to reinforce recent learning about triangles and segues to an interactive discussion of how to determine the area of an equilateral triangle. That information is used to begin instruction about finding the area of a regular polygon. The lesson utilizes manipulatives. Length: 24:16 Content Areas: Math, Professional Development Alabama Course of Study Alignments and/or Professional Development Standard Alignments: AQTS_1.A.2: Academic Discipline(s) [Knowledge of ways to organize and present content so that it is meaningful and engaging to all learners whom they teach (pedagogical content knowledge).] AQTS_1.A.3: Academic Discipline(s) [Ability to use students' prior knowledge and experiences to introduce new subject-area related content.] AQTS_2.A.5: Human Development [Ability to teach explicit cognitive, metacognitive, and other learning strategies to support students in becoming more successful learners.] AQTS_2.C.1: Learning Environment [Knowledge of norms and structures that contribute to a safe and stimulating learning environment.] AQTS_2.D.3: Instructional Strategies [Knowledge of strategies that promote retention as well as transfer of learning and the relationship between these two learning outcomes.] AQTS_2.D.9: Instructional Strategies [Ability to use questions and questioning to assist all students in developing skills and strategies in critical and high order thinking and problem solving.] AQTS_2.E.11: Assessment [Ability to engage all students in assessing and understanding their own learning and behavior.] AQTS_3.A.4: Oral and Written Communications [Ability to model appropriate oral and written communications.] AQTS_3.A.5: Oral and Written Communications [Ability to demonstrate appropriate communication strategies that include questioning and active and reflective listening.] AQTS_3.C.1: Mathematics [Knowledge of the role that mathematics plays in everyday life.] AQTS_3.C.2: Mathematics [Knowledge of the concepts and relationships in number systems.] AQTS_3.C.3: Mathematics [Knowledge of the appropriate use of various types of reasoning, including inductive, deductive, spatial and proportional, and understanding of valid and invalid forms of AQTS_3.C.6: Mathematics [Ability to communicate with others about mathematical concepts, processes, and symbols.] AQTS_3.D.2: Technology [Knowledge of the wide range of technologies that support and enhance instruction, including classroom and school resources as well as distance learning and online learning AQTS_3.D.3: Technology [Ability to integrate technology into the teaching of all content areas.]
{"url":"http://alex.state.al.us/podcast_view.php?podcast_id=458","timestamp":"2014-04-16T04:11:16Z","content_type":null,"content_length":"11892","record_id":"<urn:uuid:10d6d6f9-8fcf-4650-98c6-fe859ca457d0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Some Random Questions.... October 28th 2006, 07:58 AM Some Random Questions.... Hello! I'm having some difficulty with these questions. Thanks for the help! If a = 2j - 3k b = 3i - 3j + ck abs(a+b) = root35 Then c is equal? The angle between vectors a and b is 120 and abs of a = abs of b. The angle b/w a and a + b is? A woman of mass 50kg stands in a lift which is moving upwards with acceleration of am/s^2. THe floor of the lift exerts a force of magnitude 75g Newtons on the women. Then value of a is? A body of mass 15kg slides down a straight slide which is inclined at an angle of 30 degrees to horizontal. The normal force, in Newtons of the slide acting on the boy is? October 28th 2006, 08:44 AM a+b=3i + (2-3)j + (-3+c)k = 3i -j+(c-3)k |a+b|^2 = 3^2 + 1 + (c-3)^2 = c^2 - 6c + 19 so if |a+b|=sqrt(35) then: c^2 - 6c + 19 = 35, c^2 - 6c -16 = 0 which factorises by inspection to: so c=8 or -2. October 28th 2006, 09:19 AM Hello, classicstrings! I drew a sketch for #2 . . . and saw an "eyeball" solution! $\text{2. The angle between vectors }\vec{a}\text{ and }\vec{b}\text{ is }120^o,\;\text{ and }|a| = |b|.$ $\text{Find the angle between }\vec{a}\text{ and }\overrightarrow{a + b}$ * * * * b * * a+b * * * 120* * * * * * * * * * * * We are given: The angle between $\vec{a}$ and $\vec{b}$ is $120^o$ and $|a| = |b|.$ We have an isosceles triangle with equal sides $|a|$ and vertex angle $120^o.$ Therefore, the base angles are $30^o.$ October 28th 2006, 11:16 AM When in doubt, use Newton's 2nd. I have a Free-Body Diagram for the woman. There is a normal force, N, acting upward, she has a weight, w = mg, acting downward. The lift the woman is standing in is accelerating upward at a m/s^2, so she is as well. I have a +y direction in the upward direction. The floor of the lift is exerting a force of 75g N on her. This is the normal force, so N = 75g N. Thus Newton's 2nd on the woman reads: $\sum F_y = N - w = ma$ $75g - mg = ma$ $a = \frac{1}{m} (75g - mg)$ where m = 50 kg. Thus a = 4.9 m/s^2. October 28th 2006, 11:21 AM I have a Free-Body Diagram of the boy. There is a normal force, N, acting perpendicularlly out of the slide, there is a weight, w = mg, acting straight down. Friction is not mentioned in the problem so I will ignore it. I am going to set a +x direction down the slide and a +y direction in the direction of the normal force. Newton's 2nd in the +y direction says: $\sum F_y = N - w \cdot cos(30) = ma_y$ <--Make sure you understand why this is cosine, not sine! There is no acceleration in the y direction so $a_y = 0$. $N - mg \cdot cos(30) = 0$ $N = mg \cdot cos(30) = 127.306 \, N$ or N = 130 N (keeping 2 significant figures.)
{"url":"http://mathhelpforum.com/math-topics/6952-some-random-questions-print.html","timestamp":"2014-04-18T09:53:32Z","content_type":null,"content_length":"12600","record_id":"<urn:uuid:689f18b2-5562-46e2-9d99-2423c2da7608>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
How the Concept of Average Might be Tested on the ACT or SAT Figuring out the average of a group of numbers is easy. We have learned since middle school to add up the numbers and divide by however many numbers we have. This gives us the average. SAT and ACT math tests often try to be tricky and ask us to use this average formula in reverse. Knowing how to solve these types of questions will come in handy when you take the test. Here is an example: Ex 1: The average of 4 different numbers is 808. What is the sum of the numbers? This is an example of a simple average problem in reverse. If they gave us the four numbers and wanted the average, this problem would be easy. Instead, they gave us the average and want us to find the sum. Let's take a look at how the average formula works: THE SUM OF THE NUMBERS = 808 If we want to solve for "the sum", then we just want to get "the sum" all by itself on the left side of the equation. We can do this by simply multiplying both sides by four: 4 x THE SUM OF THE NUMBERS = 808 x 4 We are left with: THE SUM OF THE NUMBERS = 3,232. So the answer to Example 1 is 3,232. This method will work for any average problem where you need to find the sum. Let's look at a typical problem that is related but is a few steps harder: Ex 2: The average of four different integers is 808. One of the integers is 107 and one of the integers is 800. If all of the integers are positive, what is the largest that one of the other two numbers could be? On any problem that involves figuring out a group of four or five numbers, it is helpful to draw lines on your test booklet like this: ___ ___ ___ ___ Then fill in the numbers that you already know: 107 800 ___ ___ Doing this allows us to see what we have left to figure out. We want to figure out the greatest possible value of one of the numbers. We know that the numbers must add up to 3,232 because we already did this problem in the above example. If we know that a group of numbers has a certain sum and we want one of the numbers to be as large as possible, then we want the other unknown number to be as small as possible. Since the problem tells us that all of the numbers have to be positive, the smallest that a number could be is 1. We can now set up a problem like this: 107 + 800 + 1 + x = 3,232 Solving for x will give us our answer, which is 2,324. Currently, there are no comments. Be the first to post one!
{"url":"http://info.methodtestprep.com/blog/bid/63424/How-the-Concept-of-Average-Might-be-Tested-on-the-ACT-or-SAT","timestamp":"2014-04-19T11:57:56Z","content_type":null,"content_length":"47927","record_id":"<urn:uuid:6395ee18-30e7-4dc7-9f86-7a73931408cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this cool with you? Re: Is this cool with you? Yes, that book has a good collection of problems! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? How did you do on your exams? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Exams were okay... "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Good! I am glad you had no problems. What made you choose that probability book and is it the third edition? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? That probability book is the fourth edition. I wanted a book on probability, it looked better among the books there. Do you know of any other books which you enjoyed? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? The Feller books, V1,2,3. The Tucker book Vilenkins Combinatorics Nivens the art of counting. Rosens Enumerative Combinatorics. Parts of A=B and Generatingfunctionology In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? I'll check them too... Thanks for telling. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? There is one more that I have been using Lectures on generating functions by S.K. Lando In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Okay, thank you. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? I know that is a big list but looking through a lot of books helps to understand difficult concepts. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Yes, that's right! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Trying to learn from a book is hard. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Yes, but I like to learn the hard way! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? It is rewarding to learn like that and very private. Sometimes I wish there was a faster way. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Learning at our own pace is the way to learn! I wish that there's a way to find whether a particular problem has been solved already! Current search engines are bad at that task! "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? There are two compendiums. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? What are they? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? CRC concise encyclopedia of mathematics V1,2,3,4 Eric Weisstein Index to mathematical problems 1975-1979 Stanley Rabinowitz In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Oh, okay. Thanks. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Also another: Index to mathematical problems 1980-1984 Stanley Rabinowitz In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? They are not exhaustive but they do contain lots of problems. Also Guy has a book on open number theory problems. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? Yes, I don't know whether I'll be able to go through all of them. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." Re: Is this cool with you? Who could? I like the OEIS best. Generate a sequence and then find the generating function already done for you. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Is this cool with you? I too frequently use OEIS for problems involving sequences. "Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha? "Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=12832&p=50","timestamp":"2014-04-20T21:26:43Z","content_type":null,"content_length":"37407","record_id":"<urn:uuid:b0e1419c-bb03-48a7-a93e-ff73bcd967bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Antonio Gulli's coding playground Search engines mantains a word index made up by a dictionary of words and, for each work a list of documents containing the. For instance impossible -> 343, 5, 63459, 4, ....., 32 mission -> 3449, 558, ...., 49 Suppose the two lists are very long. For instance, the word 'impossible' can be contained in 100*10^6 documents, while the word mission can be contained in 50*10^6 documents. 1) return the documents containing the words 'impossible mission' (documents must contain both the words) 2) what is the complexity? 3) can you accellerate the computation? 4) how to compute the size of the lists intersection? 5) can you estimate this size in a fast way?
{"url":"http://codingplayground.blogspot.com/2010_03_01_archive.html","timestamp":"2014-04-16T19:20:36Z","content_type":null,"content_length":"279175","record_id":"<urn:uuid:df18885d-6559-4159-9123-8c9f73c5006b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Opinions and PCAs Released the Week of June 25, 2001 5D99-826 Nelson v. Nelson corrected October 30, 2001 5D00-1852 Archambault v. State 5D00-2025 & 5D00-3233Amercian Equity v. Ginhoven 5D00-1721 Mack v. State 5D00-2043 Sebulski v. State 5D00-2511 Happy v. DCF 5D00-2851 W.P. v. DCF 5D00-2930 Blue v. State 5D00-3184 D.M. v. DCF 5D00-3297 J.R.J. v. State 5D00-3305 Ring v. State 5D00-3343 Merchant v. State 5D00-3510 Welbes v. State 5D00-3533 Nelmes v. State 5D00-3561 K.M. v. DCF 5D00-3563 Paul v. State 5D00-3570 Botello v. State 5D00-3582 Weaver v. State 5D01-60 Forkner v. State 5D01-151 Bailey v. State 5D01-183 Howard v. State 5D01-193 Thompson v. State 5D01-206 Gantt v. State 5D01-227 Roberts v. State 5D01-352 Barnes v. State 5D01-427 King v. State 5D01-621 Manning v. State 5D01-690 Johnson v. State 5D01-839 Selway v. State 5D01-1317 Ward v. State 5D01-1319 Saez v. State 5D01-1481 Debose v. State 5D01-1500 Eberhart v. State 5D01-1522 Smith v. State 5D01-1582 Billue v. State 5D01-1637 Lott v. State 5D01-1645 Smith v. State 5D01-1650 Carson v . State 5D01-1651 Hill v. State 5D01-1657 Post v. State 5D01-1676 Walker v. State 5D01-1686 Rose v. State 5D01-1730 Dumas v. State
{"url":"http://www.5dca.org/Opinions/Opin2001/062501/Filings062501.htm","timestamp":"2014-04-21T01:59:12Z","content_type":null,"content_length":"4959","record_id":"<urn:uuid:b62cb003-9bae-4f2f-899c-752b1a293564>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
st: question: how to collapse data fast for simplified, binned scatter p Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: question: how to collapse data fast for simplified, binned scatter plots From László Sándor <sandorl@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: question: how to collapse data fast for simplified, binned scatter plots Date Mon, 26 Mar 2012 18:12:13 -0400 Hi all, I have a relatively simple goal, but I am not sure which is the most efficient way to achieve it. Let me describe what it aims to be and how I currently do it under Stata 10.1 for Windows, and then please comment on whether it could be faster. Basically, I want to clarify scatter plots, as in vast datasets it is more informative to plot means (or some quantiles) of y against "bins" of x, where actually it is informative to use some quantiles to bin x (i.e. have even frequencies in the bins instead of, say, even raw distances between the bins). Basically, the graphs could like the second graph here: Yes, it would be great if I could add a plot of linear fit later on, or perhaps plot multiple y variables against the same x, or a single y broken down by a categorical z, or two different quantiles of the same y. Also, for some applications I would want to plot only a residual after some linear fit (including an -areg- absorbing for some averages in some categories). I am not aware of anything built in for this. But once one has the bins of x, it is not that hard to collect the y against it. However, -collapse- is surprisingly slow in this regard (at least with millions or tens of millions of observations), and I had to use a workaround with tabulate and more. I am puzzled that this could be faster than -collapse-, but so it seems. Basically: if -collapse- is not the fastest tool for this (with the fast option), then what is? What does -twoway bar- use underneath, for example? What does -tabulate, summarize- use behind the scenes? Would you suggest an alternative route? Something more efficient? Something built-in? Some polished user-written tool? Thank you very much, * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-03/msg01139.html","timestamp":"2014-04-19T12:12:45Z","content_type":null,"content_length":"9329","record_id":"<urn:uuid:20744876-3b16-47fc-a215-4e158943d7b1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: RES: treatreg model with binary outcomes [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: RES: treatreg model with binary outcomes From Kim Manturuk <manturuk@email.unc.edu> To statalist@hsphsun2.harvard.edu Subject Re: st: RES: treatreg model with binary outcomes Date Tue, 27 Nov 2007 10:09:32 -0500 Thanks - this is exactly what I was looking for! nicola.baldini2@unibo.it wrote: -cmp- from ssc may also help in this cases At 02.33 21/11/2007 -0500, you wrote: I do not know if I understand your question well. I think that I had a similar problem: estimating the effect of a social policy (a government program to combat poverty) on an indicator of food security. Participation in the program is defined as a dummy variable (P) and Y is the indicator of food security which is also a dummy. The first equation is a logit model: Ln (p / (1-p)) = beta0 + beta1 * P + beta2 * X1 + ... + betap * Xp + u (1) Where p is the probability of being in a situation of food security. P is a dummy of selection in the program Clearly here P is endogenous because participation in the program is determined by the situation of food security of the household. Then I used a second equation: Ln (q / (1-q)) = gamma0 + gamma1 * M1 + ... + gammap * Mp + v (2) Where q is the selection probability in the program and M1 ,..., Mp are explanatory factors for this participation. This equation is the first stage of an estimation by instrumental variables in two stages. I replaced the estimate of q obtained through the logit model (2) in place of P variable in the logit model (1). I wonder if this helps you and also on the part of any list member if there is any flaw in this econometric method. Henrique Neder - -----Mensagem original----- De: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Em nome de Enviada em: segunda-feira, 19 de novembro de 2007 23:04 Para: statalist@hsphsun2.harvard.edu Assunto: st: treatreg model with binary outcomes Hi all! I want to run a treatment effects model. I have binary outcome variables for both parts of a two-stage model. Essentially, I have this: W=Y+x1...xi+e where Y=Z+x1...x1+e My outcome of interest is W (binary) and it is predicted by Y (binary) and control variables x1...xi. Y is predicted by an instrumental variable Z (continuous, unrelated to W) and the the same set of control variables x1...xi (predictive of both W and Y). It is my understanding that the treatreg command is appropriate only for a continuous outcome variable. I have tried looking at the biprobit command but I have not been able to figure out if that is the appropriate model to use. If so, which model gets the x1---xi variables? - -- Kim Manturuk * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-11/msg00869.html","timestamp":"2014-04-20T01:10:45Z","content_type":null,"content_length":"8236","record_id":"<urn:uuid:7bd606fa-d53f-4ff0-99c1-0132a9a0c4d1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
"borrowing" energy I'm taking a course on particle and nuclear physics at the moment and one of the first things my lecturer, who has worked in the field for many years, said was that nature can "cheat" by violating energy conservation for a very short length of time. It can borrow energy (apparently not from anywhere in particular) during an interaction to create a virtual particle then return it in the next "step" of said interaction. My textbook on the subject (Particle Physics by Martin & Shaw) says the same. Though I thought this was weird I assumed it was just one of those crazy things you're supposed to accept, but then, while reading up on QM, to my surprise I read in Griffith's that the "borrowing energy" interpretation of the energy-time uncertainty relation wasn't valid and was a common misconception. Wikipedia seems somewhat divided on this with the uncertainty principle article stating the same but the virtual particle article (which at least says conservation of energy isn't violated) uses the same interpretation of borrowing energy. I could understand if this was just one of those numerous misconceptions only found in pop science texts, but this has got to be the widest misconception I've ever come about, evidently common even among people in the field. Is the borrowing energy interpretation just accepted as extremely sloppy language or is this myth so widespread that there are actually Ph.d.s in HEP who doesn't know that it's wrong? Last edited by some_dude on Fri Oct 07, 2011 9:12 pm UTC, edited 1 time in total. Re: "burrowing" energy I think it's like quantum tunneling. So you can do things that, classically, would have required violation of conservation of energy, but the probability decreases exponentially with the amount of energy * the amount of time. "Uncertainty relation" usually refers to the useful fact about fourier transforms, so though it's a real effect, it's doesn't have to do with limits on measurement. Some people tell me I laugh too much. To them I say, "ha ha ha!" Re: "burrowing" energy By any chance were they referring to "nature" as in plants and photosyntheses or "nature" as in just "the natural laws of physics" in general? As I know there are quantum effects with photosyntheses. http://lmgtfy.com/?q=photosynthesis+use ... vest+light I've no idea if it's the same system your describing though, it seems to be quantum coherence. Quickly skimming the article suggests it's not using virtual particles. Wiki links to this paper... http://pubs.acs.org/doi/abs/10.1021/jz900062f. Again, not read it myself, sorry. Last edited by Technical Ben on Tue Oct 04, 2011 11:20 pm UTC, edited 1 time in total. It's all physics and stamp collecting. It's not a particle or a wave. It's just an exchange. Re: "burrowing" energy Is burrowed energy measured in moles? But seriously, I remember reading about it possibly in The Little Book of String Theory by... err... I can't remember. Look it up. Short & sweet read. Explosion, WAH! Re: "burrowing" energy It's a heuristic which is valid for virtual particles and QED calculations, but feels like quantum cosmology when applied to Schroedinger's cat scales. You'd do best to really understand Griffith's take on it by going over his general derivation of uncertainty relations of quantum operators, because that's where it all comes from. It's not that the heuristic is wrong really. Just, as he says, when someone tries to invoke an uncertainty argument to appeal to something like this it's best to hold onto your wallet. This includes claims like "you can have a universe pop into existence from nothing so long as it's only around for a very small amount of time." Although the charming thing about that argument and also Boltzmann brains is that they require that this near-instantaneous randomized glimpse of a universe just happens to have derived the theories which predict those phenomena. Just a very tidy self-contained system of unthinkable odds. Technical Ben wrote:PS, doogly, way to miss the point. Re: "burrowing" energy I can accept it as a heuristic, but it really bothers me that it's presented as a valid interpretation of what's really happening. In general I really don't like virtual particles and all the handwaving surrounding them. Isn't it fundamentally nonsensical to try and describe what happens in between measurements of a QM system, other than the evolution of the wavefunction? Re: "burrowing" energy Oh yes, no one should ever be talking about particles seriously. Of course we all know everything is really a field. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: "burrowing" energy At first glance I thought this was about depositing macroscopic energy reserves in the Earth's crust. EvanED wrote:be aware that when most people say "regular expression" they really mean "something that is almost, but not quite, entirely unlike a regular expression" Re: "burrowing" energy Malconstant, interesting that you don't like the "popping into existence" ideas on cosmology. I've missed if you've made those comments before in the other QM threads. Which is a pity, because I've never liked those "thought experiments" or Boltzmann brains*. But never know if there are any fundamental laws or calculations that disallow for them. [*The article in New Scientist about 5 years ago on BBs made me stop buying it and rename it "New Age Mysticism".] It's all physics and stamp collecting. It's not a particle or a wave. It's just an exchange. Re: "burrowing" energy Nobody *likes* Boltzman brains. If you are proposing a model for cosmology in which the vast majority of observers are BBs, that's a sign that you need to fix your model. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: "burrowing" energy MHD wrote:At first glance I thought this was about depositing macroscopic energy reserves in the Earth's crust. Well that is pretty embarrassing, I normally consider myself quite fluent in English even though it isn't my first language, but I didn't even know the word "burrow" and thought that's how you're supposed to spell "borrow". To put an end to the confusion, I've edited my first post. Re: "borrowing" energy I appreciate the point where you need to change your model, not because of any measurable reason why it might be wrong, but because its implications are unbearably depressing. And the implication for society becomes too destructive. There must be a preserved element of social utility in our theories on cosmology. Occam's opium. Technical Ben wrote:PS, doogly, way to miss the point. Re: "borrowing" energy That's it? The "ridiculousness" of it would have done it for me. I'd suggest a failure in the system of describing "complexity" or "brains" if you think BB are more likely to exist than the current universe. IE your models for probability might be weighted in the wrong places. It's all physics and stamp collecting. It's not a particle or a wave. It's just an exchange. Re: "borrowing" energy Heisenberg's Uncertainty principle. That allows it and it's consistent. I think that's the basis of all ideas of virtual particles and quantum fluctuations and all that. But the product of uncertainty in time and energy of the particle is greater than or equal to Planck's reduced constant on two. So it's a quantified inequality that allows temporary violations. I have it tattooed on my right arm. Awww yeee. Re: "borrowing" energy Yeah, that is the topic of discussion. Have you been following it at all? Because t is not an operator, delta t delta H is a completely different beast than every other operator uncertainty inequality. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: "borrowing" energy Technical Ben wrote:That's it? The "ridiculousness" of it would have done it for me. Mere "ridiculousness" isn't enough to discount an idea in science, otherwise we would still be using pre-Copernican cosmology. Technical Ben wrote:I'd suggest a failure in the system of describing "complexity" or "brains" if you think BB are more likely to exist than the current universe. IE your models for probability might be weighted in the wrong places. One brain is a whole lot less complex than a whole universe, even a universe devoid of brains. doogly wrote:Nobody *likes* Boltzman brains. Apart from thermodynamic zombies. Re: "borrowing" energy PS PM 2Ring, thanks for the corrections. However, the problem still lies "define complexity" The entropy of a brain may be much greater than the universe etc. However, I'd guess those doing the maths have checked that already. The idea I heard given for photosynthesis is that the particles take the route that uses the least energy each time. The question "how does the particle know which route is the best" was answered by stating it's in a quantum super position and takes every route, then the least energetic wins out. The reason I was reminded of this, is that each super position is in effect another virtual particle. Like rolling 6 quantum dice till you get the number you need. Then only keeping the one that's correct. It's all physics and stamp collecting. It's not a particle or a wave. It's just an exchange. Re: "borrowing" energy Technical Ben wrote:PS PM 2Ring, thanks for the corrections. However, the problem still lies "define complexity" The entropy of a brain may be much greater than the universe etc. However, I'd guess those doing the maths have checked that already. The entropy of a subset of the universe is not going to be more than the whole universe. And there are very well defined mathematical definitions of complexity. Re: "borrowing" energy Kolmogorov is your best bet. See scottaaronson.com/blog for some near-technical discussion of things. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: "borrowing" energy BlackSails wrote: Technical Ben wrote:PS PM 2Ring, thanks for the corrections. However, the problem still lies "define complexity" The entropy of a brain may be much greater than the universe etc. However, I'd guess those doing the maths have checked that already. The entropy of a subset of the universe is not going to be more than the whole universe. And there are very well defined mathematical definitions of complexity. So is that a mathematical reason why BB are not probable? The system with more entropy is the most probable one right? So a brain has too little entropy to appear "randomly"? So the BB thought experiment is redundant? It's all physics and stamp collecting. It's not a particle or a wave. It's just an exchange. Re: "borrowing" energy It's actually more the reverse. If you have a model in which BBs are more probable, then you decide that this wasn't a good model. Of course this is in a really strange subset of physics, eternal inflation with string theory landscape. I would not play with this area of research until full comfort has been reached with things like relativity and quantum mechanics. Then we can get your basics of inflationary cosmology. Then we can speculate on multiverses. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: "borrowing" energy I have another question which doesn't seem to require a new thread, since it's kind of in the vain of the general discussion in here. So "phonons" were mentioned today in my particle physics course. Apparently they're "quasiparticles", that is the vibrational energy is exchanged between particles as though it was carried by bosons, but we know it's really not. This made me wonder how come virtual particles aren't classified as quasiparticles if most agree they're just a heuristic as well. Also the book introduced phonons as "quantas of vibrational energy in the same way that photons are quanta of EM energy", which further confuses me as to when and how exactly you can tell if a particle is real, vitual or quasi as clearly photons can be real. Re: "borrowing" energy If your particle physics comrades tell you there is a difference, tell them they are just being chauvinists. You have an effective field theory, you quantize it, boom, you call it a particle. LE4dGOLEM: What's a Doug? Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood. Keep waggling your butt brows Brothers. Or; Is that your eye butthairs? Re: "borrowing" energy PM 2Ring wrote: doogly wrote:Nobody *likes* Boltzman brains. Apart from thermodynamic zombies. We have those now, too‽ Between Maxwell's Demon and thermodynamic zombies, I'm starting to think it's time to call the clergy. And by clergy, I mean battle priests armed with mothereffin' tachyon Re: "borrowing" energy doogly wrote:It's actually more the reverse. If you have a model in which BBs are more probable, then you decide that this wasn't a good model. Of course this is in a really strange subset of physics, eternal inflation with string theory landscape. I would not play with this area of research until full comfort has been reached with things like relativity and quantum mechanics. Then we can get your basics of inflationary cosmology. Then we can speculate on multiverses. I thought you were suggesting everyone move onto string theory after studying GR... It's all physics and stamp collecting. It's not a particle or a wave. It's just an exchange. Re: "borrowing" energy doogly wrote:If your particle physics comrades tell you there is a difference, tell them they are just being chauvinists. You have an effective field theory, you quantize it, boom, you call it a particle. I imagine this is why Feynmann never brought up virtual particles in the famous "magnets" clip--as the inventor of QED, he knew full well that "the magnets exchange virtual photons" is essentially a very roundabout way of saying "the magnets repel", and he was determined not to cop out like that.
{"url":"http://forums.xkcd.com/viewtopic.php?f=18&t=74983","timestamp":"2014-04-20T05:57:00Z","content_type":null,"content_length":"67531","record_id":"<urn:uuid:b1c47986-cb39-42ae-90f0-7b7f5fcf0996>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Open PID Tuner for PID tuning pidtool(sys,type) launches the PID Tuner GUI and designs a controller of type type for plant sys. pidtool(sys,Cbase) launches the GUI with a baseline controller Cbase so that you can compare performance between the designed controller and the baseline controller. If Cbase is a pid or pidstd controller object, the PID Tuner designs a controller of the same form, type, and discrete integrator formulas as Cbase. pidtool(sys) designs a parallel-form PI controller. pidtool launches the GUI with default plant of 1 and proportional (P) controller of 1. Input Arguments sys Plant model for controller design. sys can be: ● Any SISO LTI system (such as ss, tf, zpk, or frd). ● Any System Identification Toolbox™ SISO linear model (idarx, idfrd, idgrey, idpoly, idproc, or idss). ● A continuous- or discrete-time model. ● Stable, unstable, or integrating. However, you might not be able to stabilize a plant with unstable poles under PID control. ● A model that includes any type of time delay. A plant with long time delays, however, might not achieve adequate performance under PID control. If the plant has unstable poles, and sys is either: then you must specify the number of unstable poles in the plant. To do this, After launching the PID Tuner GUI, click the button to open the Import Linear System dialog box. In that dialog box, you can reimport sys, specifying the number of unstable poles where prompted. type Controller type (actions) of the controller you are designing, specified as one of the following strings: ┃ String │ Type │ Continuous-Time Controller Formula │ Discrete-Time Controller Formula (parallel form, ForwardEuler ┃ ┃ │ │ (parallel form) │ integration method) ┃ ┃ 'p' │ proportional only │ K[p] │ K[p] ┃ ┃ 'i' │ integral only │ │ ┃ ┃ 'pi' │ proportional and integral │ │ ┃ ┃ 'pd' │ proportional and derivative │ │ ┃ ┃ 'pdf' │ proportional and derivative with first-order filter on derivative │ │ ┃ ┃ │ term │ │ ┃ ┃ 'pid' │ proportional, integral, and derivative │ │ ┃ ┃ 'pidf' │ proportional, integral, and derivative with first-order filter on │ │ ┃ ┃ │ derivative term │ │ ┃ When you use the type input, the PID Tuner designs a controller in parallel form. If you want to design a controller in standard form, Use the input Cbase instead of type, or select Standard from the Form menu. For more information about parallel and standard forms, see the pid and pidstd reference pages. If sys is a discrete-time model with sampling time Ts, the PID Tuner designs a discrete-time pid controller using the ForwardEuler discrete integrator formula. If you want to design a controller having a different discrete integrator formula, use the input Cbase instead of type or the Preferences dialog box. For more information about discrete integrator formulas, see the pid and pidstd reference pages. Cbase A dynamic system representing a baseline controller, permitting comparison of the performance of the designed controller to the performance of Cbase. If Cbase is a pid or pidstd object, the PID Tuner also uses it to configure the type, form, and discrete integrator formulas of the designed controller. The designed controller: ● Is the type represented by Cbase. ● Is a parallel-form controller, if Cbase is a pid controller object. ● Is a standard-form controller, if Cbase is a pidstd controller object. ● Has the same Iformula and Dformula values as Cbase. For more information about Iformula and Dformula, see the pid and pidstd reference pages . If Cbase is any other dynamic system, the PID Tuner designs a parallel-form PI controller. You can change the controller form and type using the Form and Type menus after launching the PID Interactive PID Tuning of Parallel-Form Controller Launch the PID Tuner to design a parallel-form PIDF controller for a discrete-time plant: Gc = zpk([],[-1 -1 -1],1); Gd = c2d(Gc,0.1); % Create discrete-time plant pidtool(Gd,'pidf') % Launch PID Tuner Interactive PID Tuning of Standard-Form Controller Using Integrator Discretization Method Design a standard-form PIDF controller using BackwardEuler discrete integrator formula: Gc = zpk([],[-1 -1 -1],1); Gd = c2d(Gc,0.1); % Create discrete-time plant % Create baseline controller. Cbase = pidstd(1,2,3,4,'Ts',0.1,... pidtool(Gd,Cbase) % Launch PID Tuner The PID Tuner designs a controller for Gd having the same form, type, and discrete integrator formulas as Cbase. For comparison, you can display the response plots of Cbase with the response plots of the designed controller by clicking the Show baseline checkbox on the PID Tuner GUI. For PID tuning at the command line, use pidtune. pidtune can design controllers for multiple plants at once. More About ● The PID Tuner designs a controller in the feedforward path of a single control loop with unit feedback: ● The PID Tuner has a default target phase margin of 60 degrees and automatically tunes the PID gains to balance performance (response time) and robustness (stability margins). Use the Response time or Bandwidth and Phase Margin sliders to tune the controller's performance to your requirements. Increasing performance typically decreases robustness, and vice versa. ● Select response plots from the Response menu to analyze the controller's performance. ● If you provide Cbase, check Show baseline to display the response of the baseline controller. ● For more detailed information about using the PID Tuner, see Designing PID Controllers with the PID Tuner. Typical PID tuning objectives include: ● Closed-loop stability — The closed-loop system output remains bounded for bounded input. ● Adequate performance — The closed-loop system tracks reference changes and suppresses disturbances as rapidly as possible. The larger the loop bandwidth (the first frequency at which the open-loop gain is unity), the faster the controller responds to changes in the reference or disturbances in the loop. ● Adequate robustness — The loop design has enough phase margin and gain margin to allow for modeling errors or variations in system dynamics. The MathWorks algorithm for tuning PID controllers helps you meet these objectives by automatically tuning the PID gains to balance performance (response time) and robustness (stability margins). By default, the algorithm chooses a crossover frequency (loop bandwidth) based upon the plant dynamics, and designs for a target phase margin of 60°. If you change the bandwidth or phase margin using the sliders in the PID Tuner GUI, the algorithm computes PID gains that best meet those targets. Åström, K. J. and Hägglund, T. Advanced PID Control, Research Triangle Park, NC: Instrumentation, Systems, and Automation Society, 2006. See Also pid | pidstd | pidtune
{"url":"http://www.mathworks.com/help/control/ref/pidtool.html?nocookie=true","timestamp":"2014-04-19T09:47:40Z","content_type":null,"content_length":"51574","record_id":"<urn:uuid:6de29b9f-f0ff-4564-a4e7-4e1c64628436>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
formula getting first two numbers in another cell formula getting first two numbers in another cell in finding the right formula getting only the first two numbers of a date. Cell a1 Cell a2 Cell a1 Cell a2 I need a formula in cell a2 that only gets first two numbers of another cell. Visit publisher's web-site: formula getting first two numbers in another cell Related Tutorials a newbie here but I wonder if anyone could help me please. I would like a formula that will be able to count in between odd/even numbers... So for example: The numbers in between 1-9 odd, (including both 1 and 9) is 5 so in Excel I would like to put in Cell A1 the number 1 and then in Cell A2 the number 9 and the formula in Cell A3 would automatically give the number 5. Hope this all makes sense and any help would be HIGHLY appreciated! I want are numbers automatically in yellow cell if was a value in red cell below it not equal zero and the numbers be sequential and start with numbers from (1-50) and if was there empty cell for example (E4) not numbers (E2) and numbers the followed cell (G4) if (G4) was contained a value if the numbers reach to (50) then restart the numbers from number (1- 50) for more explanation , there are attached below Want to advance numbers automatically for invoice numbers. Needs to be for single cell in Excel. Need formula. the freest one fell apart when it sent. I am trying again. In col A through F there are numbers about 150 rows down. In G1 through L1 there are numbers and these will change from time to time. G2 through L2 and down I want a formula that will find the numbers that are in A through F and put YES in the corresponding cell under G to L. So any cell in A:F that has one of the numbers in G:L the formula will put yes in the corresponding cell under G:L if the numbers is in G1:L1. A B C D E F G H I J 32 21 5 23 50 Yes 9 43 21 18 22 Yes 2 34 1 54 8 Yes Yes I am new to use MS Office Excel 2010. I have 10 Lac Numbers but all numbers are mixed with each other like fax or land line numbers but I need only cell numbers except other numbers how can I do it. cell numbers starts from "03". Like: "0300-1234567" but some cells have data like this (042-1234567 0300-1234567) How can I separate it. All numbers are paste in A Column. How can I do it. Related Applications & Scripts This display allows you to show any number with style. It is easily and fully customizable: change dimensions. choose how many numbers visualize. change the background color of the numbers. change the numbers color. change the distance between numbers. The KeyGenerator is a .NET component for .NET applications that allows to developers generate random values using the combination of numbers, letters, characters and custom values. Principal features: - Get random values using Numbers, Letters, Numbers & Letters, Numbers-Letters-Characters or custom values defined by you - Include source code so you can add custom elements - It is compatible with C# and VB projects (include demo apps for both) - It is compatible with different types of .NET projects (web, desktop, console) The Counter Preloader is a simple, percentage-based preloader that displays the loading progress of your SWF in text. As the loading progresses, the numbers in the counter increment with a blur effect. The numbers are given a glow and set upon a lighted surface. The entire setup is placed slightly off the center of the stage, and adjusts itself automatically to stage resizing and fullscreen Block Text with pixel cubes creating letters and numbers. Capital letters, numeric digits plus extra exclamation mark. Easy to use and customize with component parameters. Among them: - 2 types of numbers, see below; - Delay between letters; - Spacing between letters; - Speed/ duration of blocks movements; - Horizontal and vertical blocks movement distances; - Direction of in/ out blocks; - Scale and Fade in/ out option for blocks; - Shake blocks; - Shake movement distance. Aurora Password Manager is a full-featured solution for storing website and e-mail access passwords, credit card numbers and other sensitive data. Multi-user and privilege management support makes it an ideal solution for both corporate and individual users. A built-in advanced password generator helps you to create 100% secure passwords, and the Internet Explorer and FireFox toolbar plug-in provides automatic web-form filling functionality.
{"url":"http://tutorialsources.com/excel/tutorials/formula-getting-first-two-numbers-in-another-cell.htm","timestamp":"2014-04-17T13:35:03Z","content_type":null,"content_length":"19137","record_id":"<urn:uuid:a217e6b4-595e-4c72-bec0-1c296c55ccc8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
LTPP Computed Parameter: Dynamic Modulus LTPP Computed Parameter: Dynamic Modulus APPENDIX C: AMPT VERSUS TP-62 C.1 EXPERIMENTAL VERIFICATION OF AMPT AND TP-62 DIFFERENCES To assess differences in the measured moduli determined from the AMPT and TP-62 protocols, a joint study was carried out between researchers at the Turner-Fairbank Highway Research Center (TFHRC) and NCSU. For this study, TFHRC performed dynamic modulus testing on a mixture following the AMPT TP, and NCSU performed testing on the same mixture using the TP-62 protocol.^(8) In both cases, three replicates have been tested. To reduce any variability not related to the equipment and protocols, all specimens were fabricated at NCSU and randomly sampled for either AMPT testing or TP-62 testing. The details of each testing protocol are summarized in table 43. Table 43. TP summary. Factor AMPT TP-62 Temperature (°F) 40, 70, 100, and 130 14, 40, 70, 100, and 130 Frequency (Hz) 20, 10, 5, 1, 0.5, and 0.1 25, 10, 5, 1, 0.5, and 0.1 Microstrain target 75–125 50–75 LVDT gauge length (mm) 70 100 Load direction Bottom loading Top loading End treatment Teflon^® Greased double latex membranes Conditioning External temperature chamber, then equalize in AMPT for 3 min Equalize for 2.5–3.0 h in test machine Rest period between frequencies (s) 0 300 Calculations NCHRP 09-29 final 10 cycles^(50) NCHRP 09-29 final five cycles^(50) °C = (°F−32)/1.8 1 inch = 25.4 mm The mixture used for this purpose is a 0.371-inch (9.5-mm) Superpave™ mixture typically used in North Carolina for surface courses. The gradation of this mixture is given in figure 133, and the relevant volumetric properties are summarized in table 44. All tests were conducted at 5.9 percent ±0.1 percent air void levels. Figure 133. Graph. Test mixture gradation. Table 44. Test mixture volumetric properties. Volumetric Property Mix Design Test Samples V[a] (percent) 3.8 5.9 VMA (percent) 15.6 17.5 VFA (percent) 75.7 66.2 Asphalt content (percent) 5.2 5.2 Percent effective binder content 4.9 4.9 Dust percentage 1.2 1.2 G[mm] 2.616 2.616 Bulk specific gravity of the aggregate 2.828 2.828 Effective specific gravity of the aggregate 2.855 2.855 G[b] 1.035 1.035 Results from the experimental study are summarized in figure 134 and figure 135, where the average dynamic moduli from the TP-62 protocol are plotted against the average moduli from the AMPT protocol. Error bars in these figures represent a single standard deviation from the mean. From these figures, it is observed that the AMPT test results are systematically lower than those from the TP-62 protocol; the difference between the two datasets is approximately 13 percent. Statistical analysis of these values using the step-down bootstrap method has also been performed. This method is used in lieu of multiple paired t-tests due to the effect of experimentwise error rates, which results in statistical errors when making multiple comparisons. Specifically, failing to account for this error rate increases the probability of finding significance when none is present. The statistical analysis results are shown by temperature and frequency in table 45. Note that in this table, the conditions under which the means are statistically similar are bold. Figure 134. Graph. Comparison of |E*| measured via AMPT and TP-62 protocols in arithmetic scale. Figure 135. Graph. Comparison of |E*| measured via AMPT and TP-62 protocols in logarithmic scale. Table 45. Statistical summary of AMPT and TP-62 test results. Temperature (°C) Frequency (Hz) |E*| AMPT (psi) |E*| TP-62 (psi) p-Value 4 25.00 2,145,226 2,420,540 0.032 4 10.00 1,989,606 2,284,746 0.020 4 5.00 1,838,144 2,111,129 0.030 4 1.00 1,503,747 1,726,774 0.026 4 0.50 1,359,729 1,601,117 0.019 4 0.10 1,050,375 1,234,431 0.023 21 25.00 1,030,409 1,237,696 0.020 21 10.00 899,831 1,022,446 0.023 21 5.00 785,545 881,347 0.025 21 1.00 550,882 628,569 0.033 21 0.50 468,842 524,847 0.068 21 0.10 306,841 358,120 0.057 37 25.00 385,448 464,233 0.008 37 10.00 318,540 384,219 0.010 37 5.00 263,476 330,282 0.002 37 1.00 160,938 198,110 0.008 37 0.50 130,346 167,580 0.005 37 0.10 75,190 96,587 0.011 54 25.00 153,735 177,050 0.003 54 10.00 127,039 128,097 0.801 54 5.00 102,669 101,164 0.672 54 1.00 58,086 59,737 0.377 54 0.50 42,997 48,863 0.022 54 0.10 23,863 33,547 0.005 °C = (°F−32)/1.8 1 psi = 6.86 kPa Note: Bold text indicates conditions where means are statistically similar. C.2 COMPARISON OF AMPT AND TP-62 PROTOCOLS WITH THE AVAILABLE DATABASE To assess the differences observed between the two |E*| measurement protocols, a more comprehensive analysis was performed using the databases available in this study. The two AMPT and TP-62 databases were segregated based on the temperatures at which the |E*| values were measured. Because these two databases cover different ranges of parameters, it is useful to examine the distribution of the relevant parameters for the two databases. Figure 136 through figure 157 present the distribution and range of each parameter in the two databases. In figure 158 through figure 162, the measured |E*| data points available for some specific temperatures for each type of database are shown by frequency. Based on observations from these figures and the difference equation shown in equation 100, differences between the databases containing AMPT and TP-62 measurements are evident, as can be seen in table 46. Based on this description, the following differences are observed at each temperature: • 40 °F (4.4 °C): 43.63 percent. • 70 °F (21.1 °C): 57.84 percent. • 100 °F (37.8 °C): 61.95 percent. • 129 °F (53.9 °C): 46.26 percent. • 130 °F (54.4 °C): 57.83 percent. Figure 136. Graph. Frequency distribution of temperature in AMPT versus TP-62 databases. Figure 137. Graph. Range of temperature in AMPT versus TP-62 databases. Figure 138. Graph. Frequency distribution of frequency in AMPT versus TP-62 databases. Figure 139. Graph. Range of loading frequency in AMPT versus TP-62 databases. Figure 140. Graph. Frequency distribution of percentage retained on ¾-inch (19.05-mm) sieve (ρ[34]) in AMPT versus TP-62 databases. Figure 141. Graph. Range of percentage retained on ¾-inch (19.05-mm) sieve (ρ[34]) in AMPT versus TP-62 databases. Figure 142. Graph. Frequency distribution of percentage retained on ^3/[8]-inch (9.56-mm) sieve (ρ[38]) in AMPT versus TP-62 databases. Figure 143. Graph. Range of percentage retained on ^3/[8]-inch (9.56-mm) sieve (ρ[38]) in AMPT versus TP-62 databases. Figure 144. Graph. Frequency distribution of percentage retained on #4 sieve (ρ[4]) in AMPT versus TP-62 databases. Figure 145. Graph. Range of percentage retained on #4 sieve (ρ[4]) in AMPT versus TP-62 databases. Figure 146. Graph. Frequency distribution of percentage passing #200 sieve (ρ[200]) in AMPT versus TP-62 databases. Figure 147. Graph. Range of percentage passing #200 sieve (ρ[200]) in AMPT versus TP-62 databases. Figure 148. Graph. Frequency distribution of specimen air voids in AMPT versus TP-62 databases. Figure 149. Graph. Range of specimen air voids in AMPT versus TP-62 databases. Figure 150. Graph. Frequency distribution of effective binder volume in AMPT versus TP-62 databases. Figure 151. Graph. Range of effective binder volume in AMPT versus TP-62 databases. Figure 152. Graph. Frequency distribution of VMA in AMPT versus TP-62 databases. Figure 153. Graph. Range of VMA in AMPT versus TP-62 databases. Figure 154. Graph. Frequency distribution of VFA in AMPT versus TP-62 databases. Figure 155. Graph. Range of VFA in AMPT versus TP-62 databases. Figure 156. Graph. Frequency distribution of |G*| n AMPT versus TP-62 databases. Figure 157. Range of |G*| in AMPT versus TP-62 databases. Figure 158. Graph. Percentage of difference between AMPT versus TP-62 databases based on similar ranges of different variables at 39.9 °F (4.4 °C). Figure 159. Graph. Percentage of difference between AMPT versus TP-62 databases based on similar ranges of different variables at 69.9 °F (21.1 °C). Figure 160. Graph. Percentage of difference between AMPT versus TP-62 databases based on similar ranges of different variables at 100 °F (37.8 °C). Figure 161. Graph. Percentage of difference between AMPT versus TP-62 databases based on similar ranges of different variables at 129.2 °F (54.0 °C). Figure 162. Graph. Percentage of difference between AMPT versus TP-62 databases based on similar ranges of different variables at 129.9 °F (54.4 °C). Table 46. Percentage of difference between AMPT versus TP-62 database based on similar ranges of different variables. Temp (°F) 0 ≤ ρ[34] ≤ 15 5 ≤ ρ[38] ≤ 50 30 ≤ ρ[4] ≤ 70 3 ≤ ρ[200] ≤ 7 5 ≤ V[a] ≤ 9 8 ≤ V[beff] ≤ 14 12 ≤ VMA ≤ 20 50 ≤ VFA ≤ 80 1e-2 ≤ |G*| ≤ 1e5 40 46.08 39.80 41.14 43.54 42.75 45.65 44.81 44.29 43.60 70 59.39 47.54 51.74 57.66 57.67 60.23 59.91 58.32 57.84 100 62.63 49.53 51.35 61.61 63.38 64.49 64.36 62.66 61.95 129 45.60 51.02 49.65 46.26 N.A 63.01 46.16 52.09 46.26 130 57.55 40.76 44.14 57.46 60.50 59.99 60.54 57.93 57.83 °C = (°F−32)/1.8 Similar ranges of each variable have been considered for each temperature, and the percentage of error has been calculated based on the difference of average TP-62 versus AMPT |E*| measurements for the corresponding temperature. C.3 EVALUATION OF AMPT VERSUS TP-62 PROTOCOLS USING ANN MODEL A preliminary study was conducted to determine the feasibility and predictability of the ANN modeling technique relative to the existing models. This feasibility study was first conducted based on | G*| because more existing closed-form models use this parameter as their primary input parameter. The ANN models used in this preliminary study are not the final models suggested by the research team, but they are similar in form and validation. To ensure full coverage of the expected conditions, the most recent Witczak database with available measured |G*| data and a portion of the dataset obtained at NCSU with support from the NCDOT were utilized as the TP-62 training database. Also, appropriate portions of the FHWA mobile trailer database and the WRI database (from Kansas and Nevada sites) were considered as the AMPT training database (see table 47).^(51,52) New parameters were not identified through this study. Instead, only those that have been used in the modified Witczak model are incorporated. For verification purposes, three different sets of independent databases were used (see table 48). As a corollary to this study, an additional ANN model was trained that uses the Hirsch model input parameters. The results from this model are given in this section, as well. Table 47. Summary of database used for training ANN models. Type of Database AMPT TP-62 Total FHWA I WRI Witczak NCDOT I Number of mixtures 409 24 106 24 563 Number of data points 7,827 500 3,180 644 12,151 Number of binders 13 8 17 5 43 Number of gradation variations 13 12 13 19 57 Number of volumetric variations 256 13 98 24 391 Note: FHWA I consists of the mixtures from 12 States Table 48. Summary of the database used for verification of ANN models. Type of Database AMPT TP-62 Total FHWA II Citgo NCDOT II Number of mixtures 84 8 12 104 Number of data points 1,652 168 338 2,158 Number of binders 3 2 3 8 Number of gradation variations 3 1 12 16 Number of volumetric variations 75 1 12 88 Note: FHWA II consists of the mixtures from three States in the FHWA mobile trailer database with the following site IDs: 1-IA0358, 2-WA0463, and 3-KS464. It should be noted that the two TPs, AMPT and TP-62, were used to measure the |E*| values in the various databases. To illustrate any possible differences between the two protocols, three different ANNs were developed using the Witczak-based input parameters, as shown in table 49. G-GR pANN was trained using data from both the AMPT and TP-62 protocols, whereas AMPT pANN and TP-62 pANN models were trained using the data from AMPT only and TP-62 only. Table 49 summarizes the databases used to train and verify the ANNs. Table 49. Description of the developed ANN Models and their validation statistics. Data Used in ANN Reference Statistical Parameters for Training Statistical Parameters for Model Training Description Scale Data Verification Data AMPT TP-62 FHWA II NCDOT II Citgo FHWA I Witczak Arithmetic Se/Sy = 0.29 R^2 = 0.92 Se/Sy = 0.38 R^2 = Se/Sy = 0.33 R^2 = Se/Sy = 0.52 R^2 = G-GR pANN 0.86 0.97 0.94 WRI NCDOT I Log Se/Sy = 0.15 R^2 = 0.98 Se/Sy = 0.35 R^2 = Se/Sy = 0.27 R^2 = Se/Sy = 0.59 R^2 = 0.91 0.96 0.96 FHWA I Arithmetic Se/Sy = 0.24 R^2 = 0.94 Se/Sy = 0.36 R^2 = Se/Sy = 0.63 R^2 = Se/Sy = 0.37 R^2 = AMPT pANN ANNs trained with modified Witczak 0.91 0.87 0.88 WRI parameters Log Se/Sy = 0.16 R^2 = 0.97 Se/Sy = 0.38 R^2 = Se/Sy = 0.60 R^2 = Se/Sy = 0.48 R^2 = 0.90 0.89 0.91 Witczak Arithmetic Se/Sy = 0.34 R^2 = 0.88 Se/Sy = 2.08 R^2 = Se/Sy = 0.24 R^2 = Se/Sy = 1.20 R^2 = TP-62 pANN 0.77 0.95 0.97 NCDOT I Log Se/Sy = 0.18 R^2 = 0.97 Se/Sy = 0.99 R^2 = Se/Sy = 0.27 R^2 = Se/Sy = 0.53 R^2 = 0.82 0.93 0.99 Arithmetic Se/Sy = 0.92 R^2 = Se/Sy = 0.71 R^2 = Se/Sy = 0.64 R^2 = Modified Witczak 0.91 0.91 0.98 Model Log Se/Sy = 0.58 R^2 = Se/Sy = 0.19 R^2 = Se/Sy = 0.26 R^2 = 0.92 0.98 0.99 Arithmetic Se/Sy = 0.30 R^2 = Se/Sy = 0.47 R^2 = Se/Sy = 0.11 R^2 = Hirsch Model 0.92 0.97 0.99 Log Se/Sy = 0.39 R^2 = Se/Sy = 0.26 R^2 = Se/Sy = 0.09 R^2 = 0.92 0.97 0.99 Arithmetic Se/Sy = 0.48 R^2 = Se/Sy = 0.55 R^2 = Se/Sy = 0.36 R^2 = Al-Khateeb Model 0.89 0.93 0.93 Log Se/Sy = 0.43 R^2 = Se/Sy = 0.40 R^2 = Se/Sy = 0.17 R^2 = 0.92 0.93 0.97 Note: Blank cells indicate information is not applicable. The ANN models perform well, as shown in figure 163 to figure 180, which display the prediction accuracies of the different models for the combined AMPT and TP-62 data (figure 163 to figure 168), TP-62 data only (figure 169 to figure 174), and AMPT data only (figure 175 to figure 180). Also, these three groups of figures show the prediction accuracies of the ANNs separately. In these three figures, the type of data (i.e., AMPT versus TP-62) used in the ANN training matches the type of data used in the verification (e.g., figure 163 shows the prediction accuracy of the G-GR pANN model trained with the combined AMPT and TP-62 data on the combined AMPT and TP-62 data, etc.). It is noted that the data used in these figures were not included in the ANN training. Figure 181 through figure 204 further demonstrate the differences between the AMPT and the TP-62 data and their effect on the prediction accuracies of the different ANNs. FHWA II data used in figure 181 through figure 188 are obtained using the AMPT protocol. The TP-62 pANN model trained with the TP-62 data and the modified Witczak model overpredict the measured |E*| values. Figure 189 through figure 196 present the prediction results for the NCDOT II data, which were measured using the TP-62 protocol. These figures illustrate the opposite effect on the prediction bias, that is, the effect of using the TP-62 data in the ANN training and predicting the AMPT data. In this case, the AMPT pANN model, trained using the AMPT data, underpredicts the |E*| values. The G-GR pANN model provides a promising ANN-based |E*| model, and the TP-62 pANN model shows good predictions without any significant bias. With the exception of the Citgo dataset, the G-GR pANN model provides high goodness of fit and correlation, as seen in table 49. The promising feature of the G-GR pANN model is that it improves the bias of |E*| predictions, particularly at high and low temperatures. This new ANN model is more sensitive to, and thus more likely to capture, the changes in volumetric parameters than all the other existing predictive models. The findings from figure 163 to figure 204 are summarized as follows: • The |E*| values measured by the AMPT protocol seem to be slightly different from those measured by the TP-62 protocol. The |E*| predictive models developed using the |E*| values measured by the TP-62 overpredict the |E*| values measured by the AMPT. • Overall, the G-GR pANN model, trained with the combination of the AMPT and TP-62 data, shows excellent statistics in terms of high accuracy and low bias, especially at extremely high and low Figure 163. Graph. Prediction of the combination of AMPT and TP-62 data using the modified Witczak and G-GR pANN models in arithmetic scale. Figure 164. Graph. Prediction of the combination of AMPT and TP-62 data using the modified Witczak and G-GR pANN models in logarithmic scale. Figure 165. Graph. Prediction of the combination of AMPT and TP-62 data using the Hirsch model in arithmetic scale. Figure 166. Graph. Prediction of the combination of AMPT and TP-62 data using the Hirsch model in logarithmic scale. Figure 167. Graph. Prediction of the combination of AMPT and TP-62 data using the Al-Khateeb model in arithmetic scale. Figure 168. Graph. Prediction of the combination of AMPT and TP-62 data using the Al-Khateeb model in logarithmic scale. Figure 169. Graph. Prediction of the AMPT data using the modified Witczak and AMPT pANN models in arithmetic scale. Figure 170. Graph. Prediction of the AMPT data using the modified Witczak and AMPT pANN models in logarithmic scale. Figure 171. Graph. Prediction of the AMPT data using the Hirsch model in arithmetic scale. Figure 172. Graph. Prediction of the AMPT data using the Hirsch model in logarithmic scale. Figure 173. Graph. Prediction of the AMPT data using the Al-Khateeb model in arithmetic scale. Figure 174. Graph. Prediction of the AMPT data using the Al-Khateeb model in logarithmic scale. Figure 175. Graph. Prediction of the TP-62 data using the modified Witczak and TP-62 pANN models in arithmetic scale. Figure 176. Graph. Prediction of the TP-62 data using the modified Witczak and TP-62 pANN models in logarithmic scale. Figure 177. Graph. Prediction of the TP-62 data using the Hirsch model in arithmetic scale. Figure 178. Graph. Prediction of the TP-62 data using the Hirsch model in logarithmic scale. Figure 179. Graph. Prediction of the TP-62 data using the Al-Khateeb model in arithmetic scale. Figure 180. Graph. Prediction of the TP-62 data using the Al-Khateeb model in logarithmic scale. Figure 181. Graph. Prediction of the FHWA II data using the modified Witczak and G-GR pANN models in arithmetic scale. Figure 182. Graph. Prediction of the FHWA II data using the modified Witczak and G-GR pANN models in logarithmic scale. Figure 183. Graph. Prediction of the FHWA II data using the AMPT pANN and TP-62 pANN models in arithmetic scale. Figure 184. Graph. Prediction of the FHWA II data using the AMPT pANN and TP-62 pANN models in logarithmic scale. Figure 185. Graph. Prediction of the FHWA II data using the Hirsch model in arithmetic scale. Figure 186. Graph. Prediction of the FHWA II data using the Hirsch model in logarithmic scale. Figure 187. Graph. Prediction of the FHWA II data using the Al-Khateeb model in arithmetic scale. Figure 188. Graph. Prediction of the FHWA II data using the Al-Khateeb model in logarithmic scale. Figure 189. Graph. Prediction of the NCDOT II data using the modified Witczak and G-GR pANN models in arithmetic scale. Figure 190. Graph. Prediction of the NCDOT II data using the modified Witczak and G-GR pANN models in logarithmic scale. Figure 191. Graph. Prediction of the NCDOT II data using the AMPT pANN and TP-62 pANN models in arithmetic scale. Figure 192. Graph. Prediction of the NCDOT II data using the AMPT pANN and TP-62 pANN models in logarithmic scale. Figure 193. Graph. Prediction of the NCDOT II data using the Hirsch model in arithmetic scale. Figure 194. Graph. Prediction of the NCDOT II data using the Hirsch model in logarithmic scale. Figure 195. Graph. Prediction of the NCDOT II data using the Al-Khateeb model in arithmetic scale. Figure 196. Graph. Prediction of the NCDOT II data using the Al-Khateeb model in logarithmic scale. Figure 197. Graph. Prediction of the Citgo data using the modified Witczak and G-GR pANN models in arithmetic scale. Figure 198. Graph. Prediction of the Citgo data using the modified Witczak and G-GR pANN models in logarithmic scale. Figure 199. Graph. Prediction of the Citgo data using the AMPT pANN and TP-62 pANN models in arithmetic scale. Figure 200. Graph. Prediction of the Citgo data using the AMPT pANN and TP-62 pANN models in logarithmic scale. Figure 201. Graph. Prediction of the Citgo data using the Hirsch model in arithmetic scale. Figure 202. Graph. Prediction of the Citgo data using the Hirsch model in logarithmic scale. Figure 203. Graph. Prediction of the Citgo data using the Al-Khateeb model in arithmetic scale. Figure 204. Graph. Prediction of the Citgo data using the Al-Khateeb model in logarithmic scale.
{"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/ltpp/10035/010.cfm","timestamp":"2014-04-21T15:15:43Z","content_type":null,"content_length":"129718","record_id":"<urn:uuid:9876543b-f250-4a9f-b2ee-a8914a15f402>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Bodega Statistics Tutors ...I help the students to efficiently develop a system to categorize, memorize, and repeat the information. I teach all subjects with the same fundamental process. I evaluate a student's baseline ability and identify the types of thinking skills that can be developed. 37 Subjects: including statistics, chemistry, algebra 1, algebra 2 ...He enjoys music, hiking and geocaching.Dr. Andrew G. has a Ph.D. from Caltech in environmental engineering science with a minor in numerical methods. In addition he has over 30 years experience as a practicing atmospheric scientist and dispersion modeler. 13 Subjects: including statistics, calculus, physics, algebra 2 ...My undergraduate degree is in mathematics, and I have worked as a computer professional, as well as a math tutor. My doctoral degree is in psychology. I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply unders... 20 Subjects: including statistics, calculus, geometry, biology ...Math is much easier than you think it is. What a handful of recent/current clients have said: "Ben has been tremendous to work with. He is positive, punctual and makes things very easy. 21 Subjects: including statistics, English, reading, geometry ...I worked a number of years as a data analyst and computer programmer and am well versed in communicating with people who have a variety of mathematical and technical skills.I have years of experience in discrete math. I took a number of courses in the subject. I've used the concepts during my years as a programmer and have tutored many students in the subject. 49 Subjects: including statistics, calculus, physics, geometry
{"url":"http://www.algebrahelp.com/Bodega_statistics_tutors.jsp","timestamp":"2014-04-21T15:17:47Z","content_type":null,"content_length":"24826","record_id":"<urn:uuid:52fc846c-9191-4369-865b-128604ff93b5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
If A and B are 4x4 matrices, det(A) = -4, det(B) = 3, then (b) det(3A) = ????, - Homework Help - eNotes.com If A and B are 4x4 matrices, det(A) = -4, det(B) = 3, then (b) det(3A) = ????, One property of determinants says for `nxxn` matrix `A` and some scalar `c` the following holds `det(c cdot A)=c^ndet(A)` Hence we have `det(3A)=3^4det(A)=81cdot(-4)=-324` <-- Solution Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/b-4x4-matrices-det-4-det-b-3-then-b-det-3a-431094","timestamp":"2014-04-17T05:24:51Z","content_type":null,"content_length":"24662","record_id":"<urn:uuid:931d19e3-532d-4f24-a74a-4634213b5800>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
North Bergen Algebra 2 Tutor Find a North Bergen Algebra 2 Tutor ...I'm a college graduate in Physics. 1 year of calculus in high school, 2 years of calculus/analysis in university. In-depth familiarity with all aspects of the subject and an intuitive feel for it that I do my best to transmit to students. I too thought it was incredibly hard the first time around, it gets better! 17 Subjects: including algebra 2, chemistry, Spanish, calculus I obtained my BSc in Applied Mathematics and BA in Economics dual-degree from the University of Rochester (NY) in 2013. I am a part-time tutor in New York City and want to help those students who need exam preparation support or language training. I used to work at the Department of Mathematics on campus as Teaching Assistance for two years and I know how to help you improve your skills. 7 Subjects: including algebra 2, calculus, algebra 1, actuarial science ...I have also worked with middle and high school students. Over the years, I have gained experience working with students who have a wide variety of learning styles. For something to ‘click’ it must be presented in a way that makes sense to you based on what you already understand and how you process information. 10 Subjects: including algebra 2, calculus, statistics, geometry Hi, my name is Christian Alfred and I've been a tutor for 2 years. I'm 23 years old and I recently graduated from FIU with a Bachelors of Arts in Psychology. I specialize in elementary math, algebra, geometry, trigonometry, and pre-calculus. 11 Subjects: including algebra 2, geometry, trigonometry, elementary (k-6th) ...I love the process of teaching and learning, and would look forward to tutoring you if you are the student, or your child if you are the parent. I wanted to start out tutoring things more outside of my direct area of expertise, because I wanted the challenge of learning to teach. This is more d... 11 Subjects: including algebra 2, chemistry, physics, biology
{"url":"http://www.purplemath.com/north_bergen_algebra_2_tutors.php","timestamp":"2014-04-17T13:46:19Z","content_type":null,"content_length":"24300","record_id":"<urn:uuid:2aa8dc64-ab63-4221-901b-a08faa399b33>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 1997 [00274] [Date Index] [Thread Index] [Author Index] Re: Horse Race Puzzle • To: mathgroup at smc.vnet.net • Subject: [mg9238] Re: [mg9162] Horse Race Puzzle • From: Robert Pratt <rpratt at math.unc.edu> • Date: Fri, 24 Oct 1997 01:00:52 -0400 • Sender: owner-wri-mathgroup at wolfram.com The solutions can be computed recursively using pattern matching as Unfortunately, Mathematica seems to ignore the Flatten[{y,n}] command, returning {y,n} unflattened. However, this only gives some unambiguous extra nesting in the solutions. Also, the number of solutions a[n] for n horses is given by a[n_]:=a[n]=Sum[Binomial[n,k] a[n-k],{k,n}] Rob Pratt Department of Mathematics The University of North Carolina at Chapel Hill CB# 3250, 331 Phillips Chapel Hill, NC 27599-3250 rpratt at math.unc.edu On Thu, 16 Oct 1997, Seth Chandler wrote: > Here's a mathematics problem that might be well suited to some elegant > Mathematica programming. > N horses enter a race. Given the possibility of ties, how many different > finishes to the horse race exist. Write a Mathematica program that > shows all the possibilities. > By way of example: here is the solution (13) by brute force for N=3. The > horses are creatively named a, b and c. The expression {{b,c},a} > denotes a finish in which b and c tie for first and a comes in next. > {a, b, c}, {a, c, b}, {b, a, c}, {b, c, a}, {c, b, a}, {c, a, b}, > {a,{b,c}}, {{b,c},a}, {b,{a,c}}, > {{a,c},b},{{c,{a,b}},{{a,b},c},{{a,b,c}} > P.S. I have a solution to the problem, I think, but it seems unduly > complex and relies on the package DiscreteMath`Combinatorica` > Seth J. Chandler > Associate Professor of Law > University of Houston Law Center
{"url":"http://forums.wolfram.com/mathgroup/archive/1997/Oct/msg00274.html","timestamp":"2014-04-21T07:17:47Z","content_type":null,"content_length":"35771","record_id":"<urn:uuid:8bc3f109-f49c-4139-a177-8a805377298e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: Re: lookup-table thoughts (was Re: matching multip [XSL-LIST Mailing List Archive Home] [By Thread] [By Date] [Recent Entries] [Reply To This Message] RE: Re: lookup-table thoughts (was Re: matching multip Play the video Subject: RE: Re: lookup-table thoughts (was Re: matching multiple times, outputting once? From: Tom Myers <tommy@xxxxxxxxxxxxxx> Date: Thu, 08 Nov 2001 13:59:37 -0500 At 02:27 PM 11/8/2001 +0000, Michael Kay wrote: >The reason the tail-recursive version is taking longer is that each time >round the loop, it makes a copy of the complete tree built so far. It's >therefore not surprising that it has O(n^n) performance (not at all the same >thing as increasing exponentially, by the way!). The non-tail-recursive >solution creates a single temporary tree, adding nodes to it at each level >of recursion, and never copying the tree until its final iteration. >As for the divide-and-conquer algorithm, it looks interesting and performs >well, but as it produces completely different output from the other two, I >can't quite see the relevance. I assume that "O(n^n)" above is a typo for "O(n^2)" = "O(n*n)", right? Quadratic. Let me just check that I understand this from Saxon's point of view...I'm trying to come up with a simple moral to the tale, something like "recursion inside constructors should not usually be made into tail recursions, but other recursions should be", to go along with "accumulations of associative functions can usually be made into divide-and-conquer templates" and other such rules that I often find helpful. Simplifying Jeni's templates slightly, I do a constructor recursion like so: consRec(N) = [], if N=0 cons("X", consRec(N-1)) otherwise <xsl:template name="consRec"> <xsl:param name="N" select="100"/> <xsl:if test="$N"> <xsl:call-template name="consRec"> <xsl:with-param name="N" select="$N - 1"/> and if called by <xsl:call-template name="consRec"/> it will generate 100 nested X-tags, the innermost being empty, and Saxon will do this in O(n) time, O(n) stack space, and O(n) heap space -- whereas a lazy implementation would still need O(n) time and O(n) heap space but O(1) stack space, since it would return the top of the tree without waiting for the bottom. This is not, however, an argument for a lazy evaluator, since actual tree-depth is rarely a limiting factor in recursion. Anyway, to do this tail-recursively PURCHASE STYLUS STUDIO ONLINE TODAY! we do need an accumulator: Purchasing Stylus Studio from our online shop is Easy, Secure and Value Priced! tailRec(N,A) = A, if N=0 = tailRec(N-1,cons("X",A)) otherwise <xsl:template name="tailRec"> Download The World's Best XML IDE! <xsl:param name="N" select="100"/> <xsl:param name="A" select="/.."/> Accelerate XML development with our award-winning XML IDE - Download a free trial today! <xsl:when test="not($N)"><xsl:copy-of select="$A"/></xsl:when> Subscribe in XML format <xsl:call-template name="tailRec"> <xsl:with-param name="N" select="$N - 1"/> <xsl:with-param name="A"> <X><xsl:copy-of select="$A"/></X> And Saxon will do this in O(n^2) time, O(1) stack space, and O(n) max heap requirement but O(n^2) total heap allocation, whereas if you could somehow replace "copy-of" with "reference-to", using the fact that the accumulator $A is only used once on any computation path, then you'd have O(n) time, O(1) stack, O(n) max heap. My main question is "hey, is this right for Saxon?" but I'd like to know also -- is such a replacement of "copy-of" with something like "reference-to" possible in principle, or does it violate some XSLT principle of which trees go where, or...? As a distinctly stranger thought, to which you will probably not feel like responding but Dimitre might: You note that Jeni's divconq version doesn't generate the same output. A divide-and-conquer equivalent is possible in languages with higher-order functions; if we think of consX(a) = cons("X",a) as a function, then tailRec(N) = divConq(N) = consX(consX(consX(... ([]) ...))) and we get divConq(N) = dC(N)([]) dC(0)(a) = a dC(1)(a) = consX(a) = cons("X",a) and so on, in other words dC(0) = IdentityFunction dC(2*N) = compose(dC(N),dC(N)) dC(2*N + 1) = compose(dC(N),compose(dC(N),consX)); and divConq works just like foldL and foldR (because function composition is associative). I admit I'm thinking back to Backus' FP systems, wherein function composition was one of the primitives (but application was not.) I think that some of the massively-parallel proposals for implementations of FP (or FFP) might in fact construct the N-deep tree in logarithmic time, using linearly many processors, just about as readily as they'd construct an N-long list the same way. But I'm not sure, not sure at all. Tom Myers XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list
{"url":"http://www.stylusstudio.com/xsllist/200111/post90330.html","timestamp":"2014-04-17T18:40:26Z","content_type":null,"content_length":"28443","record_id":"<urn:uuid:cb97164b-f5e1-4226-a6f2-fa17c59927cb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Introductory Biological Statistics ISBN: 9781577663805 | 1577663802 Edition: 2nd Format: Paperback Publisher: Waveland Pr Inc Pub. Date: 8/22/2005 Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/introductory-biological-statistics-2nd/bk/9781577663805","timestamp":"2014-04-16T13:58:01Z","content_type":null,"content_length":"51253","record_id":"<urn:uuid:5c703df5-5c2e-4bb2-ba1f-efe7630fe9bc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
exponential equation You might be able to find a specific solution by trial and error (I couldn't but someone might) but there is no general solution method for this. You are going to have to find an approximate solution. Lambert's W function is the usual advice for something like this but I can't think of any immediate way to use that either. -Dan The graph doesn't cross the x axis, so there are no real solutions. Where did this problem come from? -Dan
{"url":"http://mathhelpforum.com/algebra/218448-exponential-equation.html","timestamp":"2014-04-20T03:49:05Z","content_type":null,"content_length":"38404","record_id":"<urn:uuid:34b53bc6-b765-47c8-b848-29fbfd137f71>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Third Base Third Base Martha Stewart's File Cabinet Some weeks ago, rooting around in files of old clippings and correspondence, I made a discovery of astonishing obviousness and triviality. What I found had nothing to do with the content of the files; it was about their arrangement in the drawer. Imagine a fastidious office worker—a Martha Stewart of filing—who insists that no file folder lurk in the shadow of another. The protruding tabs on the folders must be arranged so that adjacent folders always have tabs in different positions. Achieving this staggered arrangement is easy if you're setting up a new file, but it gets messy when folders are added or deleted at random. A drawer filled with "half-cut" folders, which have just two tab positions, might initially alternate left-right-left-right. The pattern is spoiled, however, as soon as you insert a folder in the middle of the drawer. No matter which type of folder you choose and no matter where you put it (except at the very ends of the sequence), every such insertion generates a conflict. Removing a folder has the same effect. Translated into a binary numeral with left=0 and right=1, the pristine file is the alternating sequence ...0101010101.... An insertion or deletion creates either a 00 or a 11—a flaw much like a dislocation in a crystal. Although in principle the flaw could be repaired—either by introducing a second flaw of the opposite polarity or by flipping all the bits between the site of the flaw and the end of the sequence—even the most maniacally tidy record-keeper is unlikely to adopt such practices in a real file drawer. In my own files I use third-cut rather than half-cut folders; the tabs appear in three positions, left, middle and right. Nevertheless, I had long thought—or rather I had assumed without bothering to think—that a similar analysis would apply, and that I couldn't be sure of avoiding conflicts between adjacent folders unless I was willing to shift files to new folders after every insertion. Then came my Epiphany of the File Cabinet a few weeks ago: Suddenly I understood that going from half-cut to third-cut folders makes all the difference. It's easy to see why; just interpret the drawerful of third-cut folders as a sequence of ternary digits. At any position in any such sequence, you can always insert a new digit that differs from both of its neighbors. Base 3 is the smallest base that has this property. Moreover, if you build up a ternary sequence by consistently inserting digits that avoid conflicts, then the choice of which symbol to insert is always a forced one; you never have to make an arbitrary selection among two or more legal possibilities. Thus, as a file drawer fills up, it is not only possible to maintain perfect Martha Stewart order; it's actually quite easy. Deletions, regrettably, are more troublesome than insertions. There is no way to remove arbitrary elements from either a binary or a ternary sequence with a guarantee that two identical digits won't be brought together. (On the other hand, if you're fussy enough to fret about the positions of tabs on file folders, you probably never throw anything away anyhow.) The protocol for avoiding conflicts between third-cut file folders is so obvious that I assume it must be known to file clerks everywhere. But in half a dozen textbooks on filing—admittedly a small sample of a surprisingly extensive literature—I found no clear statement of the principle. Strangely enough, my trifling observation about arranging folders in file drawers leads to some mathematics of wider interest. Suppose you seek an arrangement of folders in which you not only avoid putting any two identical tabs next to each other, but you also avoid repeating any longer patterns. This would rule out not only 00 and 11 but also 0101 and 021021. Sequences that have no adjacent repeated patterns of any length are said to be "square free," by analogy to numbers that have no duplicated prime factors. In binary notation, the one-digit sequences 0 and 1 are obviously square free, and so are 01 and 10 (but not 00 or 11); then among sequences three bits long there are 010 and 101, but none of the other six possibilities is square free. If you now try to create a four-digit square-free binary sequence, you'll find that you're stuck. No such sequences exist. What about square-free ternary sequences? Try to grow one digit by digit, and you're likely to find your path blocked at some point. For example, you might stumble onto the sequence 0102010, which is square free but cannot be extended without creating a square. Many other ternary sequences also lead to such dead ends. Nevertheless, the Norwegian mathematician Axel Thue proved almost a century ago that unbounded square-free ternary sequences exist, and he gave a method for constructing one. The heart of the algorithm is a set of digit replacement rules: 0 → 12, 1 → 102, 2 → 0. At each stage in the construction of the sequence, the appropriate rule is applied to each digit, and the result becomes the starting point for the next stage. Figure 4 shows a few iterations of this process. Thue showed that if you start with a square-free sequence and keep applying the rules, the sequence will grow without bound and will never contain a square. More recently, attention has turned to the question of how many ternary sequences are square free. Doron Zeilberger of Rutgers University, in a paper co-authored with his computer Shalosh B. Ekhad, established that among the 3^n n-digit ternary sequences at least 2^n/17 are square free. Uwe Grimm of the Universiteit van Amsterdam has tightened this lower bound somewhat; he has also found an upper bound and has counted all the n-digit sequences up to n=110. It turns out there are 50,499,301,907,904 ways of arranging 110 ternary digits that avoid all repeated patterns. I'll have to choose one of them when I set up my square-free file drawer. © Brian Hayes
{"url":"http://www.americanscientist.org/issues/pub/2001/11/third-base/6","timestamp":"2014-04-19T09:26:29Z","content_type":null,"content_length":"132710","record_id":"<urn:uuid:c9fa376a-21d0-40ab-b06b-d361d263fd46>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert a Distance Estimate to a Mesh I had a need to create a simple mandelbulb mesh so I made an app to shoot rays from a bounding box to the origin and deform the box to the shape of the mandelbulb. It worked "ok" so I added user scripting so any distance estimation function could be used. Attached is the .NET executable and project source (only 65k!) This certainly isn't an optimal mesh and lots of geometry is missed (occluded from the ray cast) but it served the purpose I had. If anyone has ideas on an algorithm to fully capture a fractal as a mesh I'd love to hear it! UPDATE - Thanks Fractower for the algorithm! The attachment now converts the full geometry. UPDATE 2 - Now with texture creation and .obj export.
{"url":"http://www.fractalforums.com/general-discussion/convert-a-distance-estimate-to-a-mesh/","timestamp":"2014-04-20T08:14:35Z","content_type":null,"content_length":"82169","record_id":"<urn:uuid:5fd3ed13-789e-4f47-9028-a4983186abf7>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Derive expression based on Cantor SFC September 22nd 2010, 02:14 AM #1 Junior Member Mar 2009 Derive expression based on Cantor SFC I’m trying to compare one system against another and form some type of expression to compare the cost to update a server. System One can update a server in log (N) overlay hops (the overlay can be visualized as a circle) where N is the number of nodes in the system. However because these overlay devices are pick arbitrarily one “overlay hop” on the circle could result in multiple physical hops in the underlying network. This means it would basically cost the same to update your server whether you’re close or far from it. Thus the expression I’m using can be seen below where the second number represents the distance between two randomly chosen points in a square, sourced from here: http:// Log(N) * 0.521405433. Now System two works in a similar way. The difference is that the overlay hops are not randomly chosen. A Cantor SFC is used in an attempt to improve the way overlay nodes are chosen so that nodes that are physically close in the network are also somewhat close on the overlay circle. Therefore if you are close to your server, there should be less physical hops to update it. Can someone please help me to form an expression for this, based on a Cantor SFC? Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/differential-geometry/157043-derive-expression-based-cantor-sfc.html","timestamp":"2014-04-16T16:04:28Z","content_type":null,"content_length":"30944","record_id":"<urn:uuid:06a7299d-9b63-4f3a-82ed-2c8f0192311b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Checking consistency of a system of linear equations and inequalities up vote 1 down vote favorite I have a lot of systems of equations and inequalities of the following form: $$ a_{1,1}x+a_{1,2}y+a_{1,3}z+a_{1,4}w = 2 $$ $$ \ldots $$ $$ 0 < x < 2 $$ $$ 0 < y < 2 $$ $$ 0 < z < 2 $$ $$ 0 < w < 2 $$ There are always at least two equations, and I probably won't consider cases with more than twenty equations. All coefficients $a_{i,j}$ are positive integers and some can be zero. We also have the property that $\sum_{j=1}^4a_{i,j}\geq3$ for all $i$. The solutions are real numbers. I don't need to solve these systems, but I need to be able to tell whether there exists a solutions. If it isn't possible to tell for each system whether it is consistent or not, any method which identifies as many inconsistent systems as possible is greatly appreciated. I have a few hundred millions of these systems, so I'm specifically looking for things that can easily be turned into a program. (I know the basic techniques to do this by hand, and am looking for some handy tricks that can be done by a computer. I have some programming experience, but not really with programming this kind of problems.) linear-algebra algorithms 1 Its difficult to give an appropriate answer here but a very simple approach would be to use view this as a convex feasibility problem: You try to decide if the intersection of the solution space of the linear equation with the cube defined by the inequalities is not empty. Since your linear systems seem to be small and it looks like projecting a point onto the polytope formed by the linear constraints is also easy, alternating projections should be easy and fast. – Dirk Mar 4 '12 at 10:43 1 Have a look at constraint-logic programming (in particular, over the reals -- CLP(R)) which is designed for just such problems. Typically these are implemented as Prolog libraries (Sicstus, SWI), but I believe that stand-alone versions are available too. – J.J. Green Mar 4 '12 at 11:21 For these systems solving them is no harder than detecting feasibility. So just add an objective function, e.g., x+y+w+z, and now you have a linear program. Stuff this into an LP solver and it will tell if your feasible region is empty. If you read the manual, there may a flag you can set to avoid dding the objective function. This question would be just as well answered at math.stackexchange, and is not appropriate for this site. – Chris Godsil Mar 4 '12 at 15:10 Chris, the inequalities there are strict, and these need a bit of work. A solution to your LP would likely violate the feasibility of these strict inequalities. – Dima Pasechnik Mar 4 '12 at 15:28 Dima: you're right, of course. – Chris Godsil Mar 4 '12 at 17:02 add comment 2 Answers active oldest votes There is a criterion for solvability of a system of strict inequalities $Mt\lt b$ due to Carver (cf. A.Schrijver "Theory of linear and integer programming", Sect. 3.7.8). It says that $Mt\lt b$ is solvable if and only if $v=0$ is the only solution of the system $$\label{eee} v\geq 0,\ M^\top v=0,\ v^\top b\leq 0.\qquad \qquad (*)$$ Let us see how to get $Mt\lt b$. To do this, let $\zeta=\frac{1}{2}(x,y,z,w)$, and write your linear equations as $A\zeta=e$, where $e$ denotes all-1 vector. If this system has no solution, done. If it has only one solution, you can check it with your inequalities directly. If there are several solutions, linear algebra software will be able to rewrite your system in the form $(I\ B)\zeta'=d$, where $I$ is the identity matrix of size 1, 2, or 3, $B$ is a matrix of the appropriate size, and $\zeta'$ is a permutation of the original variables $\ up vote 3 zeta$. In other words it gives you expressions $\zeta'_k=d_k-\sum_{j\neq k} B_{kj}\zeta'_j$, for $1\leq k\leq m$, and $m$ being 1, 2, or 3, depending upon $A$. down vote accepted This reduces your original system to the system of strict inequalities in the remaining unexpressed $\zeta'$. (i.e. in $4-m$ variables). Finally, apply Carver's criterion by solving a linear programming problem: $\max\sum_{i} v_i$ subject to $(*)$. If this maximum is strictly bigger than 0 then the original system $Mt\lt b$ has no solution, otherwise it does have one. add comment To simplify the notation, let $A$ be the coefficient matrix in a given instance of your problem, let $\xi = ((x,y,z,w)^T)/2$, and let $O = (0,0,0,0)^T$, $e = (1,1,1,1,...)^T$, $e_4 = (1,1,1,1)^T$. The problem then can be written in shorthand as $$ A \xi = e, O < \xi < e_4 $$ where $O < \xi < e_4$ is understood component wise. To check if there is a feasible solution, proceed in two steps: 1. Check if there is a feasible solution of the linear system $A \xi = e$. If there is one, it can be found as $\xi = (A^TA)A^Te$ and therefore $A(A^TA)^{-1}A^Te = e$ must hold. In practice, compute the $QR$ decomposition of $A$, $A = QR$ where $R$ is square and upper triangular and $Q$ has the same dimensions as $A$ and satisfies $Q^TQ = I$ (identity matrix) and look at the system $R \xi = Q^T e$. If $R$ has full rank, you can find $\xi$ and compare $A \xi $ to $e$. If $R$ does not have full rank (i.e. there are zero rows at the bottom), this up vote 0 also tells you if there is a solution. down vote 2. Suppose now there is a nontrivial solution of $A \xi = e$. Then choose a small number $\epsilon$, e.g. $\epsilon = 10^{-8}$, and solve the linear program $$ A \xi = e, \epsilon e_4 \le \xi \le (1 - \epsilon) e_4, c^T \xi \to \max $$ with any vector $c$. If there is a feasible solution to the full problem, it will show up as the optimum (and some of its components may be equal to $\epsilon$ or $1 - \epsilon$). By varying $\epsilon$ for these problems, you may be able to find solutions which are further in the interior of the four-dimensional cube in which your solution is supposed to be. The entire method should be easily implementable in e.g. R (package lpSolve) and it is obvious how to parallelize it. add comment Not the answer you're looking for? Browse other questions tagged linear-algebra algorithms or ask your own question.
{"url":"http://mathoverflow.net/questions/90189/checking-consistency-of-a-system-of-linear-equations-and-inequalities?sort=votes","timestamp":"2014-04-19T02:39:33Z","content_type":null,"content_length":"64002","record_id":"<urn:uuid:9ca91ead-3856-4de5-bbd8-cdd3b8a7c74b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Fresh O'Caml examples hm.ml A naive implementation of Hindley-Milner type inference for a mini ML. minimetaml.ml A semantics-based interpreter for a miniature, untyped MetaML. nbe.ml Normalization by evaluation for untyped lambda terms. nomst.ml Normalization by evaluation for Nominal System T. pi-caclulator.ml Program to calculate the possible labelled transitions from a process expression in the Milner-Parrow-Walker Pi-Calculus. plc.ml Type checking and syntactic normalization for Polymorphic Lambda Calculus (PLC). plc-nbe.ml Type checking and normalisation by evaluation for Polymorphic Lambda Calculus (PLC). stlc.ml Type-checking simply typed lambda calculus as an example of using general abstraction types. Last modified: $Date: 2007/02/06 12:59:55 $
{"url":"http://www.cl.cam.ac.uk/~amp12/fresh-ocaml/examples.html","timestamp":"2014-04-21T04:54:54Z","content_type":null,"content_length":"1974","record_id":"<urn:uuid:17253956-3061-44c4-9cbd-d30d37fa497f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50b3e243e4b09749ccad074d","timestamp":"2014-04-18T11:04:23Z","content_type":null,"content_length":"51370","record_id":"<urn:uuid:aedd8d3c-2c6e-4704-a6fc-4ee0930c9a83>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Pan Balance – Expressions This interactive pan balance allows numeric or algebraic expressions to be entered and compared. You can "weigh" the expressions you want to compare by entering them on either side of the balance. Using this interactive tool, you can practice arithmetic and algebraic skills, and investigate the important concept of equivalence. Two other tools, Pan Balance – Numbers and Pan Balance – Shapes, are natural extensions. Place an algebraic expression in each of the red and blue pans. These expressions may or may not include the variable x. Enter a value for x, or adjust the value of x by moving the slider. As the value of x changes, the results will be graphed. Use the Zoom In and Zoom Out buttons, or adjust the values for the x- and y‑axes with the sliders, to change the portion of the graph that is The Reset Balance button removes the expressions from the pans and clears the graph. Explore algebraic equivalence with the following investigation. 1. Enter the expression 2x into the red pan, and enter the expression x + 4 into the blue pan. 2. Enter the value x = -5 into the box near the top. What happens? Change the value of x to 0 and then to 5. How does this change the relationship between the pans? 3. Find a value of x such that the red pan equals 0. Where is the red dot when the red pan has a value of 0? 4. Find a value of x such that the blue pan equals 0. Where is the blue dot when the blue pan has a value of 0? 5. Move the slider to adjust the value of x. For what value of x do the red and blue pans have equal values? What happens in the graph when the values of the pans are equal? 6. What other observations can you make about the relationship between the values of the pans and the graph?
{"url":"http://illuminations.nctm.org/Activity.aspx?id=3529","timestamp":"2014-04-17T21:23:38Z","content_type":null,"content_length":"35456","record_id":"<urn:uuid:e557a037-7f9b-480b-9592-f2db4fa3b2e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
The evolution of reversible switches in the presence of irreversible mimics • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information Evolution. Author manuscript; available in PMC Sep 1, 2010. Published in final edited form as: PMCID: PMC2770902 NIHMSID: NIHMS125946 The evolution of reversible switches in the presence of irreversible mimics Reversible phenotypic switching can be caused by a number of different mechanisms including epigenetic inheritance systems and DNA-based contingency loci. Previous work has shown that reversible switching systems may be favored by natural selection. Many switches can be characterized as “on/off” where the “off” state constitutes a temporary and reversible loss of function. Loss of function phenotypes corresponding to the “off” state can be produced in many different ways, all yielding an identical fitness in the short term. In the long term, however, a switch-induced loss of function can be reversed, while many loss of function mutations, especially deletions, cannot. We refer to these loss of function mutations as “irreversible mimics” of the reversible switch. Here we develop a model where a reversible switch evolves in the presence of both irreversible mimics and metapopulation structure. We calculate that when the rate of appearance of irreversible mimics exceeds the migration rate, the evolved reversible switching rate will exceed the bet-hedging rate predicted by panmictic models. 1 Introduction A variety of mechanisms allow for heritable, reversible phenotypic switching. These include most epigenetic inheritance systems (Rando and Verstrepen 2007), in which no change in DNA sequence occurs, as well DNA-based “contingency loci” (Moxon et al. 1994) that may be readily reversible through repeat contractions and expansions. These switches differ in three ways from classical mutations such as DNA point mutations and deletions. First, switching is easily reversible. Second, switching frequencies are typically higher than average mutation frequencies for most taxa (Drake 1999; Rando and Verstrepen 2007). Finally, switching may be preferentially induced at times when it is most likely to be beneficial. Here we neglect environmental induction and restrict our analysis to random reversible switching mechanisms. This simplifying assumption is conservative with respect to the evolution of switching mechanisms (Jablonka et al. 1995; Kussell and Leibler 2005; Wolf et al. 2005). Phenotypic switching may sometimes be beneficial by producing adaptive phenotypes, and at other times be costly by producing phenotypes that are not adaptive. Several models have concluded that for organisms living in a fluctuating environment, mechanisms that enable reversible switching can evolve, with the optimal switching rate (m[opt]) predicted to be equal to the frequency of environmental change events that make switching adaptive (Ω) (Lachmann and Jablonka 1996; Kussell and Leibler 2005; Kussell et al. 2005; Wolf et al. 2005; King and Masel 2007). Natural selection for reversible switching is strong enough to overcome genetic drift so long as Ns NΩ s is the selective advantage of phenotypic switching when the environment changes and N is the effective population size (King and Masel 2007). Previous models have not, however, considered the possibility that the same phenotypes generated through evolved reversible switching mechanisms may also be generated by a loss of function mutation such as a deletion. Many phenotypic switches have a simple “on/off” form, where the “off” state represents a temporary and reversible loss of function. Phenotypic mimicry may be a common phenomenon, based on the prevalence of well-documented phenocopies and genocopies in a wide range of taxa (reviewed in West-Eberhard 2003), but the mechanism and inevitability of mimicry are particularly obvious when a phenotype is based on loss of function. Here we investigate the effect of the existence of loss of function irreversible mimics on the evolution of a reversible phenotypic switching system. If, in a new environment, mutation alone gives rise to adaptations at a rate less than the optimal Ω, then previous models predict that a modifier allele facilitating more rapid phenotypic switching will invade the population. Indeed, many switching mechanisms found in nature have switching rates around 10^−1 to 10^−6 per generation (Jablonka and Lamb 1998; Rando and Verstrepen 2007), higher than typical mutation rates. However, for at least for one switching system, the yeast prion [PSI +], irreversible mimics of [PSI +] appear spontaneously even more often than [PSI +] (Lund and Cox 1981; Lancaster, Bardill, True, and Masel, in prep). Previous models predict how the evolution of phenotypic switching mechanisms is driven by the advantages of rapid switching. This approach cannot explain the evolution of the [PSI +] system, which switches less often than its mimics. Our model focuses on whether the property of reversibility rather than rapidity can explain the evolution of phenotypic switching in such cases. 2 Model overview To capture the key elements of the biology of reversible and irreversible switches we employed a two level model: (1) The benefits of reversibly-induced variation when adaptive and costs when maladaptive are represented in a model of evolution at a modifier locus with two alleles, M[1] and M[2], that cause reversible switching at rates m[1] and m[2], respectively (Figure 1a). This stochastic model assumes an asexual haploid population and is based upon the finite population approach developed by King and Masel (2007). (2) This model is then nested within a deterministic metapopulation island model (similar to Levins 1969, 1970) to introduce the long-term threat from irreversible mimics (appearing at rate μ[irr]). Irreversible mimics initially mediate adaptation, but in the long term cause demes fixed for the mimic to go extinct, since they are unable to switch back their phenotype when the environment switches back. Even if we relax the assumption of complete irreversibility, a delay in reversibility can be sufficient to cause such a population to be outcompeted by a rival reversibly switched population that is not handicapped by such a delay, again leading to the longterm extinction of the handicapped lineage. These two levels of the model constitute selection at the individual and deme levels, respectively. In order to avert extinction events caused by the appearance of mimics, evolution in a metapopulation may favor a higher rate of reversible switching than if mimics are not considered. The extent of the risk from mimics is captured in our model by their rate of appearance μ[irr]. Summary of model: M[1] and M[2] are haploid genotypes at the reversible switching modifier locus; circles indicate phenotype A, adaptive in environment E, and squares indicate phenotype B, adaptive in environment F. Colored squares are irreversibly switched. ... Environmental change from environment E to environment F always leads to adaptation mediated by phenotype B. For this simplifying assumption to hold, we restrict our parameter space to Ns s is the selective advantage of B in environment E, and (m + μ[irr])N ≥1 where m + μ[irr] is the total rate of appearance of the B phenotype. Each environmental change event then leads to fixation of either the reversible B[r] or the irreversible B[i] within the deme (Figure 1b). If it is the B[i] mimic that becomes fixed, the deme incurs a permanent handicap either by its inability to reverse its adaptation back to environment E, or for other long-term reasons related to loss of function. For mathematical simplicity, we assume that demes fixed for B[i] go instantaneously extinct (Figure 1b). We make the simplifying assumption that on the timescale of the metapopulation model, a deme is dominated either by an M[1] or M[2] allele, or it is empty. Random environmental change events within a deme occur independently with respect to other demes. Individuals in demes can (1) migrate and colonize empty demes (2) migrate and replace occupied demes of the opposite allele type, (3) go extinct, together with the rest of their deme, due to the fixation of an irreversible mimic phenotype. An empty deme is colonized when a migrant arrives (at rate Nm[k]) from an occupied deme. Demes go extinct when environmental change (at rate Ω) leads to adaptation via an irreversible mimic. These give colonization and extinction rates conforming to the Levins model. In addition, demes switch type when a migrant allele becomes fixed and replaces the resident allele, a process that is not part of the original Levins model. The probability that such migration leads to replacement of the resident allele is computed by the population genetic model described in Section 2.1 and the replacement is approximated as instantaneous (see Equation (1) and Figure 2). A single M[2] individual migrates to a deme fixed for M[1]. The deme becomes fixed for M[2] with probability p[fix][2]. The p[fix][2] box is a visual representation of Equation (1) that includes the probability q(1, i) that there are i M[2] alleles when the environment ... For a given pair of switching rates, we can compute the equilibrium between colonization, invasion and extinction to determine which of the two modifier alleles dominates more demes. Our analytical work assumes an infinite number of demes for mathematical tractability, but we also examine the effect of a finite number of demes for a limited number of test cases, as illustrated in Figure 7. Our model then explores a range of reversible switching rates to determine which rate (m[evolved]) we expect natural selection to favor. We then investigate how m[evolved] shifts in response to changes in the rate of appearance of irreversible mimics (μ[irr]). The model is fully general and applies to any system that includes both evolved reversible phenotypic switching and intrinsic irreversible m[2] = 0.000398761 is the optimum switching rate in both infinite and finite numbers of demes for N = 10^5, Ω = 10^−4, m[k] = 10^−7, s = 0.01, μ[irr] = 2×10^−6. We competed this m[2] against a variety of alternative ... 2.1 Population genetic model within each deme We use a modified version of the within-deme model introduced by King and Masel (2007) and shown in schematic form in Figure 1 and Figure 2. For the most part we follow their simplifying assumptions and notation (with the notable exception of using Ω to represent the rate of environmental change rather than Θ). Within each independent deme, the environment switches from state E to state F at rate Ω. The deme is of constant size N and consists of haploid individuals with one of two possible alleles M[1], M[2] at a modifier locus. M[1] and M[2] alleles cause reversible switching from A to B[r] with rates m[1] and m[2], respectively. (We can ignore backswitching from phenotype B[r] to A for the purposes of the within-deme model, since in environment E B[r] individuals do not persist long enough to switch back, and in environment F reverse switching is initially both rare and unfavorable. If genetic assimilation is rapid, reverse switching may start to become relevant in environment F before M allele fixation is complete, but by this stage the relevant M allele will already have derived most of its benefit, and so we approximate fixation as complete before genetic assimilation.) In addition, alleles at one or more loci that cause the irreversible B[i] phenotype are assumed to appear at rate μ[irr]. Note that phenotype B[i] is functionally identical to B[r] according to the within-deme model; its long-term disadvantage is captured at the level of deme persistence within the metapopulation. We assume that environmental change is rare relative to the timescale of fixation of B[i] or B[r] in response to each environmental change. In this way we can consider only the environment change from E to F and associated phenotypic switching from A to B[r]. The reverse direction from F to E is implicit in extinction of B[i] demes at the metapopulation level and in the repetitive nature of E to F environmental switching events. We use a Moran model for the evolutionary process. At each time step one individual is chosen uniformly at random to die, and one individual is chosen to reproduce according to its fitness. Reversible or irreversible switching may occur at the moment of reproduction. We ignore rare cases where both occur simultaneously. At each time step the environment changes with probability e^−Ω^/N. This represents environmental change at rate Ω per generation, corrected for the fact that one generation corresponds to N time steps in the Moran model. Again following King and Masel (2007) we assume that phenotypes B[r] and B[i] have fitness zero in the original environment E, but a selective advantage in the new environment. Relative fitnesses are f[AE] = 1 and f[BE] = 0 in the old environment, and f[AF] = 1 and f[BF] = 1 + s in the new environment, where s is the selective advantage. Newborn B individuals in environment E are immediately replaced. The population in E therefore contains no B individuals, but two types (m[1] and m[2]) of A individuals. In environment E, immediate replacement of B individuals means that the fitnesses of the M[1] and M[2] individuals are (1 − m[1] − μ[irr]) and (1 − m[2] − μ[irr]), respectively. The probability that a single M[2] allele will fix in a population of M[1] alleles is given by (Figure 2; King and Masel 2007): where q(1, i) is the probability that there are i M[2] alleles at the time of the next environment change event from E to F, given that there is initially one M[2] allele; and p(i) is the probability that an individual bearing the M[2] allele, which has not undergone irreversible switching, becomes fixed given that there i M[2] individuals at the moment of environmental change. Intuitively, q can be seen as representing the disadvantages of frequent switching in the old environment (E), while p represents the advantages of frequent switching in the new environment (F). The equations are based on the approach of King and Masel (2007), suitably modified to take into account the irreversible switching rate μ[irr]. For details of the calculations of q and p, see Appendix A. 2.2 Metapopulation model Our metapopulation model is based on those by Levins (1969, 1970) and assumes uniform migration among an infinite number of demes. Following migration between demes fixed for different M alleles, Equation (1) from Section 2.1 determines the probability that the single migrant will displace the resident (see Figure 2). The fixation process is approximated as instantaneous. Let the fraction of empty demes, demes dominated by the M[1] allele and demes dominated by the M[2] allele be given by P[0], P[1] and P[2], respectively. The model can now be represented by the coupled differential equations: where M[1] and M[2] demes go extinct at rates e[1] and e[2], empty demes are colonized by M[1] and M[2] at rates cP[1] and cP[2], and M[1] demes switch genotypes to M[2] at rate g[12]P[2] and M[2] demes to M[1] at rate g[21]P[1]. An overview is shown in Figure 3. Schematic of metapopulation model showing the transition rates between demes of each type. Demes may be empty (frequency P[0]), fixed for M[1](P[1]), fixed for M[2](P[2]), or fixed for an irreversible mimic. We approximate the extinction of a deme fixed for the ... Environmental change events occur independently in each deme in the metapopulation. Adaptation to the new environment F can be mediated either by B[i] or by B[r]. However, demes fixed for B[i] will eventually go extinct, because irreversibility is now a liability. In our parameter range of interest and given enough time, one or the other B lineage will eventually fix. Since B[r] and B[i] are initially selectively neutral relative to each other, fixation probabilities are proportional to their appearance rates. Since extinction corresponds to B[i] fixation, extinction rates for demes of types M[1] and M[2] are given by: The rate of migration is equal to the probability that an individual migrates (m[k]) multiplied by the number of individuals that might migrate, which is the size of each deme (N). Hence the rate that an empty deme is colonized is given by: For a deme to change type, an individual must migrate (m[k]N ) from a deme of opposite type (P[1] or P[2]) and take over, i.e. fix, once it arrives (computed from equation (1)). The probability that a single M[1] migrant fixes in an M[2] deme is given by p[fix][1] =[fix](m[2], m[1], μ[irr], N, Ω, s), and the probability that a single M[2] migrant fixes in an M[1] deme is given by p[fix][2] = [fix](m[1], m[2], μ[irr], N, Ω, s), hence M[1] and M[2] demes change types at rates: The total fraction of demes must be unity (P[o] + P[1] + P[2] = 1) and so the system can be reduced to the two dimensional system: Equilibrium solutions to these equations can be found by standard techniques, details are in Appendix B. We look at max[[1], [2]] to determine the “winner”. 2.3 Finding the “evolved” switching rate We are interested in identifying the reversible switching rate that is favored by evolution at the modifier locus. A common approach is to define optimality as that which maximizes some measure of fitness, such as geometric mean fitness (Seger and Brockman 1987). A drawback of this technique is that it assumes infinite population sizes and does not deal with the case of weak selection that may exist in real populations (Philippi 1993; King and Masel 2007). An alternative approach is to focus on pairwise comparisons, e.g., evolutionary stable strategies (ESS) (Maynard Smith and Price 1973; Maynard Smith 1982) or fixation versus counter-fixation probabilities (Masel 2005; King and Masel 2007). In this approach, the optimal strategy is defined as that which beats all others in pairwise competition. When pairwise comparisons are nontransitive, this definition sometimes fails to imply a unique optimum, and unfortunately this problem arises for our model: see Appendix C for examples. An alternative to pairwise comparisons is to consider K possible allele types with transition probabilities defined for each pair, based on the products of mutation rates and fixation probabilities (see, e.g., section 4.1 of King and Masel 2007). We can then calculate the stationary distribution of the system (Claussen and Traulsen 2005; Fudenberg et al. 2006) and summarize it according to an average long-term evolved switching rate, m[evolved]. By analogy to this approach, we assumed a mutational model where the reversible switching rate m is treated as a quantitative trait. A Monte Carlo simulation was then used. In each step, a single mutant was selected from a normal distribution (on the logarithmic scale), centered on m with variance σ^2 = 0.1, and compared to the resident using our deterministic metapopulation model. The winner according to this deterministic comparison was retained. The final “evolved” switching rate (m[evolved]) was computed as the average switching rate over time. Details of the algorithm can be found in Appendix C. 2.4 Parameter restrictions We consider only biologically realistic and interesting parameter ranges, leading to four restrictions on the parameters. First, natural selection for switching is too weak to overcome genetic drift within a single finite deme unless Ω > 1/N (King and Masel 2007). Second, the population genetic equations of the within-deme dynamics are based on the assumption of a single founder allele being introduced to the deme at the time of environmental change. This sets an upper limit to migration m[k]N < 1 in order to maintain the accuracy of our approximation. Higher levels of migration would in any case lead to a loss of the population structure that is of interest in the current model. Third, the metapopulation will go extinct if demes die at a greater rate than they colonize. Since the deme “death rate” is proportional to the rate of environmental change Ω and the “birth rate” is proportional to m[k]N, we restrict to m[k]N > Ω. Finally, as discussed in the model overview, since we assume all environmental change events lead to fixation of the B phenotype, we restrict our parameter space to Ns m + μ[irr])N ≥1. 3 Results We computed the evolved switching rate, m[evolved], for a metapopulation with a migration m[k], deme size N, selection strength s, environmental change rate Ω, and irreversible mimic mutation rate μ [irr], using the algorithm from Appendix C. Within the parameter range restrictions described above, we found that s and N played little role (Figure 4), and we consequently focus on N = 10^6 and s = 0.001. Here we examine how curves of m[evolved] versus μ[irr] depend on the model parameters Ω and m[k]. (a) (NΩ= 1000, Nm[k]=0.1, N = 10^6) The strength of selection, s, has no effect within the parameter range Ns s such that Ns=1000 throughout the rest of this paper. (b) (Ω= 0.001, m ... Environmental change rate Ω When irreversible mimics are rare (μ[irr] small), m[evolved] ≈ Ω (Figure 5). This is the classic bet-hedging result described by Cohen (1966) in the absence of irreversible mimics. As μ[irr] increases and the mimic appears more frequently, the reversible switching rate increases until it reaches a peak, then descends before the metapopulation goes extinct at very high irreversible mimic appearance rates. The curves for different values of Ω appear to parallel each other for most of the range, however lower Ω curves reach the peak slightly earlier than higher Ω. m[evolved] increases as the appearance of irreversible mimics at rate μ[irr] becomes significant. Ns =1000, Nm[k]=0.1 and N = 10^6. This increase in m[evolved] can be interpreted as selection for demes with higher m that are better able to avoid extinction. Once μ[irr] is sufficiently high, there is no m[evolved] that can avoid demes being dominated by B[i] and therefore the entire metapopulation eventually goes extinct. This result is shown as white space at the right of the figure. The drop-off observed at high μ[irr], just before extinction, is a result of a high degree of non-transitivity, and as a result m[evolved] is not well-defined in this region (see Appendix C for details). m[evolved] increases with population structure A low migration rate m[k] indicates increased population structure in our model. With more population structure, reversible switching m[evolved] both rises above Ω for lower levels of μ[irr], and exceeds Ω by a larger margin for a given value of μ[irr] (Figure 6). This is expected, since selection to avoid mimic-driven extinction acts at the deme level while individual-level selection favors m[evolved] ≈ Ω, and the extent of population structure affects the balance between the two. With high levels of migration, m[evolved] ≈ Ω alleles can “outrun” extinction by continuing to colonize new demes, even as these demes suffer from frequent extinction. In the limit, when gene flow is very high (Nm[k] m[evolved] ≈ Ω. This single population is of course highly vulnerable to one large extinction event. Population structure, indicated by the migration rate m[k], causes reversible switching to evolve to higher levels in order to outcompete mimics by a larger margin. N = 10^6, NΩ = 10 and Ns=1000. Modeling a finite number of demes is likely to weaken selection at the deme level by introducing random effects. We developed a finite deme version of the model (Appendix D) and in the limited number of test cases we examined, found similar results to the infinite deme model. In Figure 7 we show representative results for a transitive test case. Finite demes do not change the value of the optimum, and introduce only a modest amount of noise into the solution. 4 Discussion Our model shows that a reversible switching system can evolve in the absence of environmental sensing despite the presence of irreversible mimics. Although mimics initially share the same adaptive phenotype, their irreversibility dooms them to extinction at the next environmental change event, allowing a long-term advantage that can be exploited by a reversible switching mechanism. In contrast to previous work that neglects mimics (Lachmann and Jablonka 1996; Wolf et al. 2005; Kussell et al. 2005; Kussell and Leibler 2005; King and Masel 2007), we find that the evolved reversible switching rate (m[evolved]) is not necessarily equal to the rate of environmental change (Ω). m[evolved] increases significantly away from Ω as the irreversible mimic rate μ[irr] increases. The critical μ[irr] at which this departure from Ω occurs depends on the amount of gene flow between the demes in the metapopulation, captured in our model by the product Nm[k]. Our model considers the parameter range Nm[k] < 1 for which significant population structure exists. Modeling assumptions We have assumed a separation of timescales such that within-deme dynamics are instantaneous and so for the purposes of the metapopulation model, each deme is always dominated by one of the two possible genotypes. When the rate of environmental change Ω is small, this assumption is warranted as the transient dynamics of fixation and extinction will have completed by the time of the next environmental change. We used the simple island model approximation of metapopulation dynamics to simplify migration patterns. However, we saw similar qualitative effects so long as some population structure existed, with the exact quantity of gene flow (Nm[k]) affecting the magnitude. A second assumption of our island model is that there are infinite demes. Modeling a finite number of demes is likely to weaken selection at the deme level by introducing random effects. We therefore also examined a finite deme version of the model and found no appreciable change in our results. Note that from the perspective of the metapopulation model, each within-deme fixation event is instantaneous. This approximation might change the “effective” migration rate, perhaps even making it slightly different between the metapopulation and within-deme models. We assumed that reversible and irreversible switches are equally able to meet the challenge of environmental change in the short term. A previous model by Masel and Bergman (2003) addressed the presence of irreversible mimics indirectly by defining environmental change as that leading to extinction unless reversible switching occurred. This implicitly assumes that if both reversible and irreversible mimic phenotypes appear in the population, then the mimics always lose in direct competition even in the short-term. Here we allow each an equal chance of taking over the population in the short-term. The disadvantage associated with mimics is instead captured indirectly through longterm extinction at the deme level. This approach therefore captures one of the key advantages of reversibility. Note that it is also possible that mimics do better than reversible phenotypes in the short term. For example, reversible phenotypes may suffer a cost from prematurely switching back. This could be captured through an extension of our model, and would lead to a higher “effective” irreversible mutation rate μ[irr] Note that if reversible switching is not random but induced at an elevated rate by the environment when it is most likely to be needed, then the evolution of reversible switching mechanisms becomes even more likely (Jablonka et al. 1995; Kussell and Leibler 2005; Wolf et al. 2005). Our assumption that switching is random is therefore conservative with respect to the evolution of reversible switching. Metabolic requirements to maintain environmental sensors may mean, however, that induced switching also has a cost (Wolf et al. 2005; Kussell and Leibler 2005) and random switching can be favored over direct sensing of the environment when environmental change rates are low (Kussell and Leibler 2005; Wolf et al. 2005). Our modeling approach has three chief strengths. First, we represent all the dynamics occurring within a deme stochastically: this allows us to model both finite deme sizes and rare stochastic events. Second, all computation is done without recourse to individual-level simulation, drastically reducing the amount of computation time needed for a given set of parameters. Third, our model examines group-level benefits that reversible switching mechanisms can confer on a metapopulation. We thank Christine Lamanna for her early work on this project, Oliver D. King for C code, Cortland Griswold, Oliver King, Grant Peterson and Jessica Garb for helpful discussions, and the National Institutes of Health for funding (R01 GM076041). J.M. is a Pew Scholar in the Biomedical Sciences and an Alfred P. Sloan Research Fellow. A Within-deme model equations Both q(1, i), representing the model before the environmental change, and p(i), representing the model after the environmental change, can be computed by solving tridiagonal systems of linear equations using standard techniques. Model before environmental change q(1, i) is the probability that there are i M[2] alleles at the time of an environment change event, assuming that there is initially one M[2] allele appearing through mutation. It is given in section 2.4 of King and Masel (2007) by the following tri-diagonal system of equations: where α[i] and β[i] are the probabilities that the number of M[2] alleles increase and decrease from i to i + 1 and i − 1, respectively (α and β replace the λ and μ symbols from King and Masel). To incorporate the effect of irreversible mimics, the computations of α[i] and β[i] need to be modified from King and Masel (2007). In the old environment E irreversible mimics increase the rate at which phenotype A switches to phenotype B. This means that M[1] and M[2] individuals in E now switch to B at rates m[1] + μ[irr] and m[2] + μ[irr], respectively. As described in the main text, as some individuals immediately switch to the zero-fitness B phenotype, the fitness of phenotype A is reduced to (1 − m[1] − μ[irr]) and (1 − m[2] − μ[irr]) for the M[1] and M[2] genotypes, respectively. Following the first equation in section 2.4 of King and Masel (2007) (modified by the substitutions m[1] → m[1] + μ[irr] and m[2] → m[2] + μ[irr]), the probability that we transition from i → i + 1 is given by the probability that an M[2] individual is chosen to reproduce while an M[1] individual is chosen to die: Similarly, following the second equation in section 2.4 of King and Masel (2007), the probability that we transition from i → i − 1 is given by the probability that an M[1] individual is chosen to reproduce while an M[2] individual is chosen to die: Model after environmental change p(i) is the probability that a genotype with the M[2] allele but no irreversible mimic becomes fixed, given that there are currently i M[2] individuals in environmental F. Both M[1] fixation and deme extinction are captured by the state i=0. M[2] fixation may be due either to an M[2] lineage with the adaptive phenotype B[r] sweeping the population or to an M[2] lineage with the A phenotype taking over by drift, possibly before the environment ever changes. We modify p(i) from section 2.5 of King and Masel (2007) to the following tridiagonal system of equations: Equation (15) explicitly shows all the transition probabilities multiplied by the subsequent fixation probabilities. This includes those transitions which do not lead to M[2] fixation, and which accordingly are multiplied by zero. We assume that the processes of fixation (after a B[r] destined for fixation appears) and extinction (after a B[i] destined for fixation appears) are instantaneous and therefore model both processes by introducing “jump” moves into the Markov chain. Each of these processes thus becomes a single step in the Markov chain (see King and Masel (2007) section 2.5 for details). In the first term, r[i] represents the probability that an M[2] with adaptive phenotype B[r] sweeps the population, hence jumps to the p(N ) = 1 absorbing state. In the second term, $ri′$ represents that an M[2] with the irreversible mimic phenotype B[i] sweeps the population, and hence jumps to a state where the deme eventually goes extinct. The probability that an M[1] allele with either B[r] or B[i] phenotype sweeps the population, and hences jumps to either M[1] fixation or deme extinction, is given by d[i]. The approximation of “jump” moves was numerically tested by King and Masel (2007) and found not to affect results. Note that in the current work this approximation also means that an adaptive B[r] lineage does not subsequently acquire a B[i] mutation. The dynamics of such mutational degradation were explored by Masel et al. (2007), and this phenomenon is not a problem for the parameters considered here. Noting that the transition probability b[i] of remaining in the p(i) state is given the sum of all other transition probabilities subtracted from 1, the coefficients a[i], b[i], c[i], d[i], r[i], $ri′$ in the above equations can be suitably modified from King and Masel (2007) to give where y = (1 − (1 + s) ^−1)/(1 − (1 + s) ^−^N) is the probability that a B individual is destined for fixation (see King and Masel (2007) section 2.5). Substituting in the values of the p(N) and p(0) reduces the system to B Metapopulation model equations Using standard techniques, four possible equilibrium solutions of equations (3) and (4) can be found. Equation (18) represents extinction of all demes, Equation (19) and Equation (20) represent dominance of all occupied demes by the M[1] or M[2] alleles respectively, and Equation (21) represents co-existence, where each allele dominates a fraction of demes. Note that not all solutions apply to all parameter values. For a given set of parameter values, the first constraint we applied is that both P[1] and P[2], and their sum P[1] + P[2], must be bounded within the [0, 1] interval since they represent fractions of the total number of demes in the metapopulation model. After this constraint was used to eliminate potential solutions, we evaluated the stability of the remaining solution(s) by checking the signs of the derivatives about the equilibrium point. If none of the solutions in Equations (19), (20) or (21) were appropriately bounded or stable, then Equation (18), which represents extinction of the entire metapopulation, was assumed to be in effect. C Algorithm for computing m[evolved] To compute m[evolved], we employed a Monte Carlo approach by competing pairs of switching rates in a series of rounds. In each round, the switching rate that “won” the pairwise comparison by the criterion in equation (22) would progress to the next round. Another randomly chosen switching rate close to the original winner would then be competed against that previous winner. We then found the evolved switching rate by computing a running average of the winning switching rate. Pseudocode describing the computation of a single replicate of m[evolved] is found in Algorithm 1. For each data point in our figures, we then averaged m[evolved] over 10 replicates of this algorithm to minimize noise introduced through the Monte Carlo sampling process. Note that in a typical Monte Carlo simulation, moves that decrease fitness are accepted with some low probability, in order to escape local optima and sample the entire parameter space. In our simulations, we suffered the opposite problem of lack of stability, and so this part of the classic algorithm was not done. Algorithm 1 1. m[opt:]= use a modified Golden ratio search over the [0, 1] interval. 2. m[evolved]:= 0 (initialize running average) 3. for t:= 1, S + S[burnin] 1. x:= sample from the normal distribution, N(0, σ^2), with mean 0 and variance σ^2 (mutation step) 2. m:= e^xm[opt] (mutation steps are normally distributed on the log scale) 3. if P[m[opt]] P[m] (determine whether m or m[opt] “wins” according to equation (22)) ☆ m[opt:]= m[opt](keep current m[opt]) ☆ m[opt:]= m (new optimum is found) 4. if t > S[burnin] (don’t start recording average until burn in complete) ☆ m[evolved:]= m[evolved] + (m[opt] − m [evolved])/(t − S[burnin])(compute running average) Similar to an ESS or to the criteria used by King and Masel (2007), an optimal reversible switching rate m[opt] could be defined as that in which the corresponding M[opt] allele outcompetes any other possible allele M. In our model, this corresponds to the condition where P[m[opt]] and P[m] are the fraction of demes which are fixed for the allele of the respective switching rates. If there is transitivity, there exists a unique solution to (22). If not, there may be no solution. Transitivity means if P[m[a]](m[b], m[a]) > P[m[b]](m[b], m[a]) and P[m[b]](m[c], m[b]) > P[m[c]](m [c], m[b]) then P[m[a]](m[c], m[a]) > P[m[c]](m[c], m[a]) must hold. Typically high degrees of non-transitivity are found for switching rates that are close together and for higher values of μ[irr] ( Figure 8). When μ[irr] is very high, there is no single well-defined optimum, or fitness, and Algorithm 1 results in a final m[evolved] that exhibits a “drop-off” from the peak value. (a) Drop-off of m[evolved] at high μ[irr] results from non-transitive relationships. (b) Transitivity holds for μ[irr] = 10^−6 as shown in a grid of values where m[2] beating m[1] is depicted as a black box and m[1] beating m[2], an “x” ... D Algorithm for finite deme model Algorithm 2 2. initialize the discrete deme type D[0], D[1], D[2] based on the infinite solution 3. initialize the current proportions: P[1:]= D[1]/D, P[2:]= D[2]/D, P[0:]= 1 − (P [1] + P [2] ) 4. $P2total:=0$(initialize weighted total) 5. t:= 0.0 (initialize time) 6. for s:= 1, S + S[burnin] 1. r[1:]= m[k]NP[1]P[0]; r[2]:= m[k]NP[2]P[0] (colonization by M[1]and M[2], respectively) r[3:]= Ωμ[irr]P[1]/(μ[irr] + m[1]); r[4:]= Ωμ[irr] P[2]/(μ[irr] + m[2]) (extinction of M[1] and M[2], respectively) r[5:]= m[k]NP[2]p[fix][2]P[1;] r[6:]= m[k]NP[1]p[fix][1]P[2](deme switch M[1] → M[2] and M[2] → M[1], respectively) 2. $R:=∑i=16ri$(compute the total rate) 3. Δt:= 1/R (choose a new timestep based on the total rate) 4. u:=sample from the uniform distribution [0, R] interval 5. choose event i if u falls within the r[i] proportion of the total rate interval, R 6. if event 1 then D[1:]= D[1] + 1, D[o:]= D [o] − 1 (an empty deme is colonized with M[1]) else if event 2 then D[2:]= D[2] + 1, D[o:]= D[o] − 1 (an empty deme is colonized with M[2]) else if event 3 then D[1:]= D[1] − 1, D[o:]= D[o] + 1 (deme of type M[1] goes extinct) else if event 4 then D[2:]= D[2] − 1, D[o:]= D[o] + 1 (deme of type M[2] goes extinct) else if event 5 then D[1:]= D[1] − 1, D[2:]= D[2] + 1 (deme switches from M[1] to M[2] ) else if event 6 then D[1:]= D[1] + 1, D[2:]= D[2] − 1 (deme switches from M[2] to M[1] ) 7. P[1:]= D[1]/D, P[2:]= D[2]/D, P[0:]= D[o]/D (recompute proportions) 8. if s > S[burnin] (don’t start recording average until burn in complete, save this time as t[burnin]) ☆ $P2total:=P2total+ΔtP2$(update the weighted total, weighting the current P[2] by the length of the timestep) ☆ $P¯2:=P2total/(t−tburnin+Δt)$(compute new time-weighted average) 9. t:= t + Δt (update current time) Literature Cited • Claussen JC, Traulsen A. Non-Gaussian fluctuations arising from finite populations: Exact results for the evolutionary Moran process. Phys Rev E Stat Nonlin Soft Matter Phys. 2005;71:025101. [ • Cohen D. Optimizing reproduction in a randomly varying environment. J Theor Biol. 1966;12:119–29. [PubMed] • Drake JW. The distribution of rates of spontaneous mutation over viruses, prokaryotes, and eukaryotes. Ann N Y Acad Sci. 1999;870:100–7. [PubMed] • Fudenberg D, Nowak MA, Taylor C, Imhof LA. Evolutionary game dynamics in finite populations with strong selection and weak mutation. Theor Popul Biol. 2006;70:352–63. [PMC free article] [PubMed] • Jablonka E, Lamb MJ. Epigenetic inheritance in evolution. J Evol Biol. 1998;11:159–183. • Jablonka E, Oborny B, Molnar I, Kisdi E, Hofbauer J, Czaran T. The adaptive advantage of phenotypic memory in changing environments. Philos Trans R Soc Lond B Biol Sci. 1995;350:133–41. [PubMed] • King OD, Masel J. The evolution of bet-hedging adaptations to rare scenarios. Theor Popul Biol. 2007 Dec;72(4):560–575. [PMC free article] [PubMed] • Kussell E, Kishony R, Balaban NQ, Leibler S. Bacterial persistence: a model of survival in changing environments. Genetics. 2005;169:1807–14. [PMC free article] [PubMed] • Kussell E, Leibler S. Phenotypic diversity, population growth, and information in fluctuating environments. Science. 2005;309:2075–8. [PubMed] • Lachmann M, Jablonka E. The inheritance of phenotypes: an adaptation to fluctuating environments. J Theor Biol. 1996;181:1–9. [PubMed] • Lancaster AK, Bardill JP, True HL, Masel J. The spontaneous appearance rates of both the yeast prion [PSI+] and of other [PSI+]-like phenotypes in prep. • Levins R. Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the Entomological Society of America. 1969;15:237–240. • Levins R. Extinctions. Lect Notes Math. 1970;2:77–107. • Lund PM, Cox BS. Reversion analysis of [psi-] mutations in Saccharomyces cerevisiae. Genet Res. 1981;37:173–82. [PubMed] • Masel J. Evolutionary capacitance may be favored by natural selection. Genetics. 2005;170:1359–71. [PMC free article] [PubMed] • Masel J, Bergman A. The evolution of the evolvability properties of the yeast prion [PSI+] Evolution. 2003;57:1498–512. [PubMed] • Masel J, King OD, Maughan H. The loss of adaptive plasticity during long periods of environmental stasis. Am Nat. 2007;169:38–46. [PMC free article] [PubMed] • Maynard Smith J. Evolution and the Theory of Games. Cambridge: Cambridge University Press; 1982. • Maynard Smith J, Price G. The logic of animal conflict. Nature. 1973;246:15–18. • Moxon ER, Rainey PB, Nowak MA, Lenski RE. Adaptive evolution of highly mutable loci in pathogenic bacteria. Curr Biol. 1994;4:24–33. [PubMed] • Philippi T. Bet-hedging germination of desert annuals: beyond the first year. Am Nat. 1993;142:474–487. [PubMed] • Press W, Teukolsky S, Vetterling W, Flannery B. Numerical Recipes in C: The Art of Scientific Computing. 2. Cambridge University Press; Cambridge, UK: 1992. • Rando OJ, Verstrepen KJ. Timescales of genetic and epigenetic inheritance. Cell. 2007;128:655–68. [PubMed] • Seger J, Brockman HJ. What is bet-hedging? In: Harvey PH, Partridge L, editors. Oxford Surveys in Evolutionary Biology. Oxford University Press; 1987. pp. 182–211. • West-Eberhard MJ. Developmental Plasticity and Evolution. Oxford University Press; 2003. • Wolf DM, V, Vazirani V, Arkin AP. Diversity in times of adversity: probabilistic strategies in microbial survival games. J Theor Biol. 2005;234:227–53. [PubMed] Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2770902/?tool=pubmed","timestamp":"2014-04-17T19:01:59Z","content_type":null,"content_length":"154854","record_id":"<urn:uuid:78f981d8-ae63-4cee-b190-9f4fe87713ea>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Get TF and IDF of all the terms of an index [Lucene 4.3] Let's assume that you have indexed a number of documents with Lucene 4.3. The database created by Lucene is a "flat" database that has a number of fields for every document. Each field contains the terms of a document and their respective frequencies in a termVector . For those who tried to migrate from older versions of Lucene extracting statistics like TF and IDF in Lucene 4.3 can seem a bit more tricky. Newer versions of Lucene are indeed less intuitive but on the other hand they are more flexible. Firstly, a reader must be initiated in order to access the index, and also a TFIDFSimilarity class that will help us calculate the frequencies( ), and a HashMap that will hold the scores (tf*idf). IndexReader reader = DirectoryReader.open(FSDirectory.open(new File(index))); TFIDFSimilarity tfidfSIM = new DefaultSimilarity(); Map<String, Float> tf_Idf_Weights = new HashMap<>(); Map<String, Float> termFrequencies = new HashMap<>(); Secondly, in order to get the terms of every document we must iterate through the enumeration of the terms and documents, respectively for every indexed document. Practically we iterate the enumerations for every document in the index : *Pay attention ! During indexing the termVectors must be stored. *The terms are stored in the index as Bytes* Calculating the Inverse Document Frequencies: Firstly, we create a Map for adding the idf values: Map<String, Float> docFrequencies = new HashMap<>(); The function below is field-specific and the value is calculated while looping through the termsEnum: /*** GET ALL THE IDFs ***/ Map<String, Float> getIdfs(IndexReader reader, String field) throws IOException /** GET FIELDS **/ Fields fields = MultiFields.getFields(reader); //Get the Fields of the index TFIDFSimilarity tfidfSIM = new DefaultSimilarity(); for (String field: fields) TermsEnum termEnum = MultiFields.getTerms(reader, field).iterator(null); BytesRef bytesRef; while ((bytesRef = termEnum.next()) != null) if (termEnum.seekExact(bytesRef, true)) String term = bytesRef.utf8ToString(); float idf = tfidfSIM.idf( termEnum.docFreq(), reader.numDocs() ); docFrequencies.put(term, idf); return docFrequencies; In particular the Lucene function that we use to get the inverse document frequency is: tfidfSIM.idf(termEnum.docFreq(), reader.numDocs()) It practically computes a score factor based on a term's document frequency (the number of documents which contain the term). This value is multiplied by the tf(int) factor for each term in the Calculating the Term Frequencies: for (int docID=0; docID< reader.maxdoc(); docID++) TermsEnum termsEnum = MultiFields.getTerms(reader, field).iterator(null); DocsEnum docsEnum = null; Terms vector = reader.getTermVector(docId, CONTENT); termsEnum = vector.iterator(termsEnum); catch (NullPointerException e) BytesRef bytesRef = null; while ((bytesRef = termsEnum.next()) != null) if (termsEnum.seekExact(bytesRef, true)) String term = bytesRef.utf8ToString(); float tf = 0; docsEnum = termsEnum.docs(null, null, DocsEnum.FLAG_FREQS); while (docsEnum.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) tf = tfidfSIM.tf(docsEnum.freq()); termFrequencies.put(term, tf); float idf = docFrequencies.get(term); float w = tf * idf; tf_Idf_Weights.put(term, w); return tf_Idf_Weights; Lucene has an inverted index data structure which means that the process to find the term frequencies for every document is not as direct as one can think. The reason for that is that the inverted index stores a list of fields(for every docId) containing each term and its frequency throughout the documents. This means that we can easily retrieve the number of matching documents for a certain term, but in order for us to get the tf we proceed by iterating through DocEnum and calling tf = tfidfSIM.tf(docsEnum.freq()); for every term. * The function freq() returns the term frequency in the current document * After calculating and adding the of every term, we can get the weight ( w = tf * idf ) and store it. This way we can create a vector for each document that will contain the respective weights of its terms, and therefore we can calculate the distance between vectors. If you want to take a look at a complete java class implementation of this functionality check out this 3 comments: 1. in your code there is variabel "CONTENT"(Calculating the Term Frequencies) is it a parameter ?, could you write complete your method ? thank you for your sharing 1. CONTENT is the name of the field that we are interested in. Note that MultiFields.getTerms(reader, field).iterator(null) is used to enable us to iterate the terms for all fields. So dont be troubled with this line : Terms vector = reader.getTermVector(docId, CONTENT); This was originally inside another loop in order to get the vectors from all fields: Terms vector = reader.getTermVector(docId, field); 2. Thanks for the post. I am getting a null pointer exception at line: TermsEnum termsEnum = MultiFields.getTerms(reader, field).iterator(null); How could I resolve? Thanks in advance
{"url":"http://filotechnologia.blogspot.it/2013/11/get-tf-and-idf-of-all-terms-of-index.html","timestamp":"2014-04-21T07:04:37Z","content_type":null,"content_length":"87217","record_id":"<urn:uuid:cf7b0e2b-1432-4a6f-8d12-e0896f6ccd3f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
2012-2013 Catalogue Mathematics (Bachelor of Science) The mathematics curriculum is quite flexible. It is designed to provide a sound basic training in mathematics that allows a student to experience the broad sweep of mathematical ideas and techniques, to utilize the computer in mathematics, and to develop an area of special interest in the mathematical sciences. A Bachelor of Arts with a major in Mathematics is offered and supervised by the College of Arts and Sciences. Students opting for this degree require an advisor from the Department of Mathematics and Statistics. Refer to the CAS section of this catalogue for more information. Concentrations that provide ideal preparation for a student’s career plans are listed in the next section, along with the courses recommended for each concentration. General Requirements Specific Requirements Recommendations for Major Courses In consultation with their advisor, students should choose an area of interest within the mathematics major and plan a coherent program that addresses their interests in mathematics and its applications. This area might be one of those listed below, or it might be another area suggested by the student. As a guide, students interested in one of the areas would typically take at least three courses in that area, including all of the courses marked with an asterisk (*). In addition, students should take courses from at least two other areas. Because of its centrality in mathematics, students should make sure that they take at least one course listed under Classical Mathematics. In following these recommendations, a course listed in more than one area is meant to be counted only once. 1. Classical Mathematics Classical mathematics encompasses those areas having their roots in the great traditions of mathematical thought, such as geometry and topology, mathematical analysis, algebra and number theory, and discrete mathematics. Courses in this area include the following: MATH 141, MATH 151, MATH 173, MATH 236, MATH 240, MATH 241*, MATH 242, MATH 251*, MATH 252, MATH 255, MATH 257, MATH 260, MATH 264, MATH 273, MATH 331, MATH 353. 2. Applied Mathematics Applied mathematics involves the use of mathematical methods to investigate problems originating in the physical, biological, and social sciences, and engineering. Mathematical modeling, coupled with the development of mathematical and computational solution techniques, illuminates mechanisms which govern a problem and allows predictions to be made about an actual physical situation. Current research interests of the faculty include biomedical mathematics, fluid mechanics and hydrodynamic stability, asymptotics, and singular perturbation theory. Courses in this area include the following: MATH 230*, MATH 236, MATH 237*, MATH 238, MATH 240, MATH 272, MATH 273, MATH 274. 3. Computational Mathematics Computational mathematics involves both the development of new computational techniques and the innovative modification and application of existing computational strategies to new contexts where they have not been previously employed. Intensive computation is central to the solution of many problems in areas such as applied mathematics, number theory, engineering, and the physical, biological and natural sciences. Computational mathematics is often interdisciplinary in nature, with algorithm development and implementation forming a bridge between underlying mathematical results and the solution to the physical problem of interest. Courses in this area include the following: MATH 173, MATH 230, MATH 237*, MATH 238, MATH 274, STAT 201. 4. Theory of Computing The mathematical theory of computing deals with the mathematical underpinnings allowing effective use of the computer as a tool in problem solving. Aspects of the theory of computing include: designing parallel computing strategies (graph theory), analyzing strengths and effectiveness of competing algorithms (analysis of algorithms), examining conditions which ensure that a problem can be solved by computational means (automata theory and computability), and rigorous analysis of run times (complexity theory). Courses in this area include the following: MATH 173, MATH 223, MATH 224*, MATH 243, MATH 273, MATH 325, CS 346, CS 353. 5. Mathematics of Management Mathematics of Management involves the quantitative description and study of problems particularly concerned with the making of decisions in an organization. Problems are usually encountered in business, government, service industries, etc., and typically involve the allocation of resources, inventory control, product transportation, traffic control, assignment of personnel, and investment diversification. Courses in this area include the following: MATH 173, MATH 221*, MATH 222, MATH 230, MATH 236, MATH 273, STAT 141 or STAT 211, STAT 151 or MATH 207, STAT 224, STAT 241 , STAT 253. 6. Actuarial Mathematics Actuaries use quantitative skills to address a variety of risk related problems within financial environments. A unique feature of the actuarial profession is that a considerable amount of the formal training is typically completed after graduation "on-the-job." The Society of Actuaries is an international organization that regulates education and advancement within the profession. Candidates may earn designation as an Associate of the Society of Actuaries (ASA) by satisfying three general requirements. These are: (1) Preliminary Education Requirements, PE; (2) the Fundamentals of Actuarial Practice Course, FAP; and (3) the Associateship Professionalism Course, APC. The multiple component FAP is based on an e-learning format, and can be pursued independently. After completing the PE and at least one of the FAP components, candidates are eligible to register for the one-half day APC. The Preliminary Education Requirements consist of (1) prerequisites (2) subjects to be validated by educational experience (VEE), and (3) four examinations. While at the university, students can satisfy the prerequisites, the VEE courses, and the first two preliminary examinations. The following courses are recommended as preparation for the specific requirements. Prerequisites. Calculus MATH 021, MATH 022, and MATH 121, Linear Algebra MATH 124, Introductory Accounting (BSAD 060, BSAD 061), Business Law (BSAD 017, BSAD 018), and Mathematical Statistics ( STAT 261, STAT 262). These are topics that will assist candidates in their exam progress and work life but will not be directly tested or validated. Subjects Validated by Educational Experience. Economics (EC 011, EC 012), Corporate Finance (BSAD 180, BSAD 181), and Applied Statistical Methods (STAT 221, STAT 253). Candidates will demonstrate proficiency in these subjects by submitting transcripts. Preliminary Examinations. Exam P - Probability (STAT 151, STAT 251), Exam FM - Mathematics of Finance (BSAD 180,BSAD 181). Other applicable departmental courses include: Statistics for Business STAT 195, Statistical Analysis via Computers STAT 201, Applied Regression Analysis STAT 225, Survival Analysis STAT 229, Categorical Data Analysis STAT 235, Nonparametric methods STAT 237, Combinatorics MATH 173, and Operations Research MATH 221, MATH 222. 7. Probability and Statistical Theory Probabilistic reasoning is often a critical component of practical mathematical analysis or risk analysis and can usefully extend classical deterministic analysis to provide stochastic models. It also provides a basis for statistical theory, which is concerned with how inferences can be drawn from real data in any of the social or physical sciences. Courses in this area include the following: MATH 222, MATH 241, MATH 242, (STAT 151 or MATH 207), STAT 241, STAT 252a, STAT 252b, STAT 261, STAT 262, STAT 270. Recommendations for Allied Field Courses Students should discuss Allied Field courses with their advisor and choose ones which complement their mathematical interests. Students with certain mathematical interests are advised to emphasize an appropriate Allied Field as indicated below and take at least six credits in courses numbered 100 or above in that field. Applied Mathematics: Allied Field (1), (2), (3), (4), (6) or (9). Computational Mathematics: Allied Field (4) or (5). Mathematics of Management: Allied Field (7). Students interested in Mathematics of Management are advised to include economics EC 011 and EC 012 in their choice of Humanities and Social Sciences courses, and to include business administration BSAD 060 and BSAD 061 in their choice of Allied Field courses. Those wishing to minor in business administration should contact the School of Business Administration and also take business administration BSAD 173 and two other courses chosen from Allied Field courses.
{"url":"http://www.uvm.edu/academics/catalogue2012-13/?Page=read.php&p=/Colleges_and_Schools/College_of_Engineering_and_Mathematical_Sciences/Academic_Offerings/Mathematics_(B.S.)&SM=collegemenu.html","timestamp":"2014-04-19T10:14:48Z","content_type":null,"content_length":"34256","record_id":"<urn:uuid:78204944-ce8d-4b75-99ed-63cb1f5d0242>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
P.rex: An Interactive Proof Explainer Results 1 - 10 of 19 , 2003 "... Current Description Logic reasoning systems provide only limited support for debugging logically erroneous knowledge bases. In this paper we propose new non-standard reasoning services which we designed and implemented to pinpoint logical contradictions when developing the medical terminology DICE. ..." Cited by 106 (5 self) Add to MetaCart Current Description Logic reasoning systems provide only limited support for debugging logically erroneous knowledge bases. In this paper we propose new non-standard reasoning services which we designed and implemented to pinpoint logical contradictions when developing the medical terminology DICE. We provide complete algorithms for unfoldable ACC-TBoxes based on minimisation of axioms using Boolean methods for minimal unsatisfiability-presening sub-TBoxes, and an incomplete bottom-up method for generalised incoherence-preserving terminologies. 1 , 2006 "... With the advent of Semantic Web languages such as OWL (Web Ontology Language), the expressive Description Logic SHOIN is exposed to a wider audience of ontology users and developers. As an increasingly large number of OWL ontologies become available on the Semantic Web and the descriptions in the on ..." Cited by 38 (0 self) Add to MetaCart With the advent of Semantic Web languages such as OWL (Web Ontology Language), the expressive Description Logic SHOIN is exposed to a wider audience of ontology users and developers. As an increasingly large number of OWL ontologies become available on the Semantic Web and the descriptions in the ontologies become more complicated, finding the cause of errors becomes an extremely hard task even for experts. The problem is worse for newcomers to OWL who have little or no experience with DL-based knowledge representation. Existing ontology development environments, in conjunction with a reasoner, provide some limited debugging support, however this is restricted to merely reporting errors in the ontology, whereas bug diagnosis and resolution is usually left to the user. In this thesis, I present a complete end-to-end framework for explaining, pinpointing and repairing semantic defects in OWL-DL ontologies (or in other words, a SHOIN knowledge base). Semantic defects are logical contradictions that manifest as either inconsistent ontologies or unsatisfiable concepts. Where possible, I show extensions to handle related defects such as unsatisfiable roles, unintended entailments and nonentailments, - In F. Baader, ed, CADE-19, LNAI 2741 , 2003 "... Abstract. We describe the integration of permutation group algorithms with proof planning. We consider eight basic questions arising in computational permutation group theory, for which our code provides both answers and a set of certificates enabling a user, or an intelligent software system, to pr ..." Cited by 13 (0 self) Add to MetaCart Abstract. We describe the integration of permutation group algorithms with proof planning. We consider eight basic questions arising in computational permutation group theory, for which our code provides both answers and a set of certificates enabling a user, or an intelligent software system, to provide a full proof of correctness of the answer. To guarantee correctness we use proof planning techniques, which construct proofs in a human-oriented reasoning style. This gives the human mathematician the necessary insight into the computed solution, as well as making it feasible to check the solution for relatively large groups. 1 - In In Proc. of KI 2001, volume 2174 of LNAI , 2001 "... Abstract. This paper discusses experiments with an agent oriented approach to automated and interactive reasoning. The approach combines ideas from two subfields of AI (theorem proving/proof planning and multi-agent systems) and makes use of state of the art distribution techniques to decentralise a ..." Cited by 12 (8 self) Add to MetaCart Abstract. This paper discusses experiments with an agent oriented approach to automated and interactive reasoning. The approach combines ideas from two subfields of AI (theorem proving/proof planning and multi-agent systems) and makes use of state of the art distribution techniques to decentralise and spread its reasoning agents over the internet. It particularly supports cooperative proofs between reasoning systems which are strong in different application areas, e.g., higher-order and first-order theorem provers and computer algebra systems. 1 - PROCEEDINGS OF THE 18TH CONFERENCE ON AUTOMATED DEDUCTION (CADE–18), VOLUME 2392 OF LNAI , 2002 "... ..." - L. J. of the IGPL , 2002 "... Our research interests in this project are in exploring how automated reasoning systems can learn theorem proving strategies. In particular, we are looking into how a proof planning system (Bundy, 1988) can automatically learn ..." Cited by 8 (4 self) Add to MetaCart Our research interests in this project are in exploring how automated reasoning systems can learn theorem proving strategies. In particular, we are looking into how a proof planning system (Bundy, 1988) can automatically learn - Second International Joint Conference on Automated Reasoning — Workshop on Computer-Supported Mathematical Theory Development , 2004 "... Automated theorem proving is becoming more important as the volume of applications in industrial and practical research areas increases. Due to the formalism of theorem provers and the massive amount of information included in machine-oriented proofs, formal proofs are difficult to understand withou ..." Cited by 4 (0 self) Add to MetaCart Automated theorem proving is becoming more important as the volume of applications in industrial and practical research areas increases. Due to the formalism of theorem provers and the massive amount of information included in machine-oriented proofs, formal proofs are difficult to understand without specific training. A verbalisation system, ClamNL, was developed to generate English text from formal representations of inductive proofs, as produced by the Clam proof planner. The aim was to generate natural language proofs that resemble the presentation of proofs found in mathematical textbooks and that contain only the mathematically interesting parts of the proof. 1 - IN: PROCEEDINGS OF THE 27TH GERMAN CONFERENCE ON ARTIFICIAL INTELLIGENCE (KI 2004) , 2004 "... The year 2004 marks the fiftieth birthday of the first computer generated proof of a mathematical theorem: “the sum of two even numbers is again an even number” (with Martin Davis’ implementation of Presburger Arithmetic in 1954). While Martin Davis and later the research community of automated dedu ..." Cited by 3 (3 self) Add to MetaCart The year 2004 marks the fiftieth birthday of the first computer generated proof of a mathematical theorem: “the sum of two even numbers is again an even number” (with Martin Davis’ implementation of Presburger Arithmetic in 1954). While Martin Davis and later the research community of automated deduction used machine oriented calculi to find the proof for a theorem by automatic means, the Automath project of N.G. de Bruijn – more modest in its aims with respect to automation – showed in the late 1960s and early 70s that a complete mathematical textbook could be coded and proof-checked by a computer. Classical theorem proving procedures of today are based on ingenious search techniques to find a proof for a given theorem in very large search spaces – often in the range of several billion clauses. But in spite of many successful attempts to prove even open mathematical problems automatically, their use in everyday mathematical practice is still limited. The shift "... Abstract. When mathematicians present proofs they usually adapt their explanations to their didactic goals and to the (assumed) knowledge of their addressees. Modern automated theorem provers, in contrast, present proofs usually at a fixed level of detail (also called granularity). Often these prese ..." Cited by 3 (2 self) Add to MetaCart Abstract. When mathematicians present proofs they usually adapt their explanations to their didactic goals and to the (assumed) knowledge of their addressees. Modern automated theorem provers, in contrast, present proofs usually at a fixed level of detail (also called granularity). Often these presentations are neither intended nor suitable for human use. A challenge therefore is to develop user- and goal-adaptive proof presentation techniques that obey common mathematical practice. We present a flexible and adaptive approach to proof presentation based on classification. Expert knowledge for the classification task can be handauthored or extracted from annotated proof examples via machine learning techniques. The obtained models are employed for the automated generation of further proofs at an adapted level of granularity.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=407396","timestamp":"2014-04-16T12:08:22Z","content_type":null,"content_length":"36710","record_id":"<urn:uuid:218c3554-aec2-4a26-9bcc-45c5f4b46f87>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
sphere, in geometry, the three-dimensional analogue of a circle. The term is applied to the spherical surface, every point of which is the same distance (the radius) from a certain fixed point (the center), and also to the volume enclosed by such a surface. The curve formed by a plane cutting a sphere is a circle. If the plane goes through the center of the sphere, the circle is called a great circle of the sphere. It is the largest circle that can be drawn upon the sphere, and all great circles of the same or equal spheres are of equal size. The shortest distance between two points on a spherical surface, measured on the surface, is the distance along the great circle through those points. A plane cutting a sphere in a great circle divides the sphere into two equal segments called hemispheres. The diameter of a sphere is the diameter of one of its great circles. The formula for the area of the surface of a sphere is S = 4π r ^2, and for the volume it is V = 4/3 π r ^3, where r is the radius of the sphere. Spherical geometry and spherical trigonometry are methods of determining magnitudes and figures on a spherical surface. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. More on sphere from Fact Monster: See more Encyclopedia articles on: Mathematics
{"url":"http://www.factmonster.com/encyclopedia/science/sphere.html","timestamp":"2014-04-17T06:58:21Z","content_type":null,"content_length":"21020","record_id":"<urn:uuid:89c392bb-2ac3-4e4c-820c-d4b993a6630b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
A result about Fredholm operator up vote 0 down vote favorite When I read the article "Index Theory" in Handbook of global analysis, I meet a result as below(Corollary 2.13): If every $F_0\in \mathcal {F}(H_1,H_2)$, there is an open neighborhood $U_0\subseteq \mathcal {B}(H_1,H_2)$, such that $F\in U_0$ implies $F((KerF_0)^\perp)\oplus F_0(H_1)^\perp =H_2$ I didn't find this result in other books. I can't understand the proof about it. $Fv+w=F(v-f_0)+w$? Why? Edit: $H_1$ and $H_2$ are separable Hilbert spaces. $\mathcal {F}(H_1,H_2)$ is the spaces of Fredholm operators. $\mathcal {B}(H_1,H_2)$ is the spaces of bounded operators. In the proof, construct a $\overline{F}:H_1\oplus F_0(H_1)^\perp \to H_2\oplus kerF_0$ by $\overline{F}(v,w)=(Fv-w,\pi_{KerF_0}v)$, this is a isomorphism. Since $\overline{F}$ is onto, for any $(u,\ f_0)\in H_2\oplus kerF_0$, there is $(v,w)\in H_1\oplus F_0(H_1)^\perp$, with $u=Fv-w$ and $\pi_{KerF_0}v=f_0$. $\pi_{KerF_0}: H_1\to KerF_0$ oa.operator-algebras fa.functional-analysis While it is probably obvious for people in the field, you should really define all your notations why asking a question on MO. Just so that readers don't have to guess that you meant $\mathcal{F}$ to be the Fredholm operators and $\mathcal{B}$ to be the bounded operators. – Willie Wong Aug 26 '10 at 12:59 ... I meant "when" instead of "why" above, of course. – Willie Wong Aug 26 '10 at 13:00 I agree with Willie. The question should be reformulated so as to clarify all the notations used. – André Henriques Aug 26 '10 at 13:14 Thanks, Willie Wong and André Henriques, I edit my question. – Chen Aug 26 '10 at 13:39 And what is it that you don't understand? The only slightly non-trivial step I see in your sketch of the proof is the fact that $\bar{F}$ is an isomorphism. Once you have that, picking $f_0 = 0$ you see that every element of $H_2$ can be written as $Fv + w$ where $v\in H_1$ and $w\in F_0(H_1)^\perp$. On the other hand, by definition $\bar{F} |_{(ker F_0)^\perp\oplus F(H_1)^\perp}$ clearly has range contained in $H_2$. Hence your corollary. – Willie Wong Aug 26 '10 at 14:46 show 2 more comments Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged oa.operator-algebras fa.functional-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/36733/a-result-about-fredholm-operator","timestamp":"2014-04-18T11:08:48Z","content_type":null,"content_length":"52679","record_id":"<urn:uuid:41caebcc-8bcf-46df-a7c9-a28be173ae97>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
JEE Advanced Syllabus Mathematics | Entranceindia JEE Advanced Syllabus Mathematics Algebra: Algebra of complex numbers, addition, multiplication, conjugation, polar representation, properties of modulus and principal argument, triangle inequality, cube roots of unity, geometric Quadratic equations with real coefficients, relations between roots and coefficients, formation of quadratic equations with given roots, symmetric functions of roots. Arithmetic, geometric and harmonic progressions, arithmetic, geometric and harmonic means, sums of finite arithmetic and geometric progressions, infinite geometric series, sums of squares and cubes of the first n natural numbers. Logarithms and their properties. Permutations and combinations, Binomial theorem for a positive integral index, properties of binomial coefficients. Matrices as a rectangular array of real numbers, equality of matrices, addition, multiplication by a scalar and product of matrices, transpose of a matrix, determinant of a square matrix of order up to three, inverse of a square matrix of order up to three, properties of these matrix operations, diagonal, symmetric and skew-symmetric matrices and their properties, solutions of simultaneous linear equations in two or three variables. Addition and multiplication rules of probability, conditional probability, Bayes Theorem, independence of events, computation of probability of events using permutations and combinations. Trigonometry: Trigonometric functions, their periodicity and graphs, addition and subtraction formulae, formulae involving multiple and sub-multiple angles, general solution of trigonometric Relations between sides and angles of a triangle, sine rule, cosine rule, half-angle formula and the area of a triangle, inverse trigonometric functions (principal value only). Analytical geometry: Two dimensions: Cartesian coordinates, distance between two points, section formulae, shift of origin. Equation of a straight line in various forms, angle between two lines, distance of a point from a line; Lines through the point of intersection of two given lines, equation of the bisector of the angle between two lines, concurrency of lines; Centroid, orthocentre, incentre and circumcentre of a triangle. Equation of a circle in various forms, equations of tangent, normal and chord. Parametric equations of a circle, intersection of a circle with a straight line or a circle, equation of a circle through the points of intersection of two circles and those of a circle and a straight line. Equations of a parabola, ellipse and hyperbola in standard form, their foci, directrices and eccentricity, parametric equations, equations of tangent and normal. Locus Problems. Three dimensions: Direction cosines and direction ratios, equation of a straight line in space, equation of a plane, distance of a point from a plane. Differential calculus: Real valued functions of a real variable, into, onto and one-to-one functions, sum, difference, product and quotient of two functions, composite functions, absolute value, polynomial, rational, trigonometric, exponential and logarithmic functions. Limit and continuity of a function, limit and continuity of the sum, difference, product and quotient of two functions, L’Hospital rule of evaluation of limits of functions. Even and odd functions, inverse of a function, continuity of composite functions, intermediate value property of continuous functions. Derivative of a function, derivative of the sum, difference, product and quotient of two functions, chain rule, derivatives of polynomial, rational, trigonometric, inverse trigonometric, exponential and logarithmic functions. Derivatives of implicit functions, derivatives up to order two, geometrical interpretation of the derivative, tangents and normals, increasing and decreasing functions, maximum and minimum values of a function, Rolle’s Theorem and Lagrange’s Mean Value Theorem. Integral calculus: Integration as the inverse process of differentiation, indefinite integrals of standard functions, definite integrals and their properties, Fundamental Theorem of Integral Integration by parts, integration by the methods of substitution and partial fractions, application of definite integrals to the determination of areas involving simple curves. Formation of ordinary differential equations, solution of homogeneous differential equations, separation of variables method, linear first order differential equations. Vectors: Addition of vectors, scalar multiplication, dot and cross products, scalar triple products and their geometrical interpretations. JEE Advanced JEE Advanced Syllabus JEE Advanced Syllabus Mathematics
{"url":"http://www.entranceindia.com/jee-advanced/jee-advanced-syllabus-mathematics/","timestamp":"2014-04-17T06:52:53Z","content_type":null,"content_length":"36619","record_id":"<urn:uuid:3f3f357d-3d2c-4502-a105-32c2dff90847>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Any math teachers in the house? Really need help with teaching my kids math :( 02-05-2013, 03:11 PM Any math teachers in the house? Really need help with teaching my kids math :( So my 9-year old is struggling big time with math. Dad is a buffoon who hates math, and is also afraid & concerned that I am confusing my daughter even more. Just looking for some advise on any websites or apps we can try to help her out. Right now she's in 4th grade and in the midst of double-digit multiplication and just starting division. Homework has become an absolute nightmare, could really use some insight if anyone has experiences similar issues. :( 02-05-2013, 03:17 PM what do you need to know? some teachers want the kids to show their work the way they were taught (which is not necessarily the way we were taught) 02-05-2013, 03:18 PM Went through it with my son to a degree (HS now) and my daughter soon (1st grade) Problem for me was that they teach it differently now. My wife bought a few books and she's good at math, so I pushed it off on her. 02-05-2013, 03:20 PM 02-05-2013, 03:24 PM Big L Is there a text book for 4th grade math? Not sure what they teach now, but, as with any complex problem, break it down in to smaller, solvable problems. Finding the answer to a double digit multiplication (complex) problem involves several single digit (smaller) multiplications and then addtitions. 02-05-2013, 03:26 PM Fish, definitely take a look at Kahn Academy - an excellent free resource started by a father pretty much in your boat (except with a PhD:) 02-05-2013, 03:32 PM I don't like this trick because I think it avoids memorizing times tables which I think ultimately will hinder students. But for the context of this question the below should help. I would make vertical lines first with your highest (in this case tens) digit being on the left and your next lowest to the right and so on (in this case the ones digit). Then the second number (factor) as horizontal lines with highest place digit on top and lowest on bottom. Count the separated number of intersecting points from top left to bottom right. Hippie stuff if you ask me. The other option would be the old(ish) school version where you use a zero in the second (and subsequent) lines of product and add up when through. The justification of the zero in the ones' column is that in a way you are multiplying by a multiple of 10 (i.e. 26X48 is really 6x48 + 20x48. Since 0xanything is 0, the 0 will go in the ones column when multiplying by the tens, two zeors when multiplying by hundreds, so on and so on). Sorry if I made it worse. 02-05-2013, 03:33 PM Just back from my son's high school for a meeting. You would not believe his English teacher. Young, leggy blonde. Beautiful and had to be 22-23 tops. I just about asked her out. 02-05-2013, 03:34 PM I teach 5th grade pm me. 02-05-2013, 03:35 PM 02-05-2013, 03:37 PM Fish, definitely take a look at Kahn Academy - an excellent free resource started by a father pretty much in your boat (except with a PhD:) That is a good start. This old man is up at 4am each day, trundles home at 5pm, and to watch me sit there and try (intelligently) explain math concepts to an unblinking child is comical, if not sad. When Im halfway through explaining a problem Ive found that her mind just wanders off, cant say I blame her but I need something to grab her attention with this. 02-05-2013, 03:37 PM The old days in Police Academy math class... "If each donut costs 50 cents, how much is a dozen donuts?" "Trick question... cops don't actually pay for the donuts." 02-05-2013, 03:41 PM Bonhomme Richard I don't like this trick because I think it avoids memorizing times tables which I think ultimately will hinder students. But for the context of this question the below should help. I would make vertical lines first with your highest (in this case tens) digit being on the left and your next lowest to the right and so on (in this case the ones digit). Then the second number (factor) as horizontal lines with highest place digit on top and lowest on bottom. Count the separated number of intersecting points from top left to bottom right. Hippie stuff if you ask me. The other option would be the old(ish) school version where you use a zero in the second (and subsequent) lines of product and add up when through. The justification of the zero in the ones' column is that in a way you are multiplying by a multiple of 10 (i.e. 26X48 is really 6x48 + 20x48. Since 0xanything is 0, the 0 will go in the ones column when multiplying by the tens, two zeors when multiplying by hundreds, so on and so on). Sorry if I made it worse. What the fck is this sorcery? 02-05-2013, 03:43 PM 02-05-2013, 03:44 PM Big L 02-05-2013, 03:46 PM 02-05-2013, 03:48 PM 02-05-2013, 03:50 PM Fish, definitely take a look at Kahn Academy - an excellent free resource started by a father pretty much in your boat (except with a PhD:) Also while on the subject of 'khan' this is what I look like doing math at home 02-05-2013, 03:54 PM The old days in Police Academy math class... "If each donut costs 50 cents, how much is a dozen donuts?" "Trick question... cops don't actually pay for the donuts." Dimitri can compute anything that has donuts in it. In fact, world renown physicists call upon him to solve advanced physics quandaries. They just change "Joules" or "kilo pascals" to donuts. 2 + 2 without donuts, he can't do. 02-05-2013, 03:54 PM Lone Star Lady Hey, Fish, I used to be a high school math teacher (go ahead, make your girl geek jokes :D ). I now work for a textbook publisher making math books/digital content. I'm swamped at work right now, but I'll get back to you with some websites. I know there are a bunch of youtube videos about math -- don't know how good any of them are. EDIT: If you have specific math questions, feel free to PM me.
{"url":"http://www.jetsinsider.com/forums/printthread.php?t=253828","timestamp":"2014-04-20T22:04:26Z","content_type":null,"content_length":"24917","record_id":"<urn:uuid:9f30124a-137c-4989-b21a-493a3a72d0ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
North Miami Beach ACT Tutor Find a North Miami Beach ACT Tutor ...By the end of 3L, I was already looking for a full-time job as a teacher. I dabbled in public school teaching (as a middle school sub) and then spent a year as a full-time intern at a private school in the Boston area. After a few months in a 4th grade classroom, I'd found my niche. 61 Subjects: including ACT Math, English, reading, Spanish ...Many of the skills required in chemical engineering are cross disciplinary. I feel confident I could help a mechanical engineering student learn how to use resources and better understand principles of mathematics, physics, materials science, project planning, and engineering design. I passed t... 27 Subjects: including ACT Math, chemistry, calculus, GRE ...The Montessori Method has certainly trained me to be prepared with hands on materials and fun activities to motivate and advance the child to the next level. I hold myself to an excellent standard. For that reason, I often ask for some feedback, and will never bill a lesson in which the parent (or child) is not completely satisfied with my tutoring. 16 Subjects: including ACT Math, English, ESL/ESOL, SPSS ...If this subject is mastered, students can learn easier higher levels of math. With the experience I have teaching Algebra 2, in high school and also one on one with my students, I am able to see exactly where the student is at. I am also able to show simple ways for students to understand the material. 48 Subjects: including ACT Math, reading, calculus, chemistry ...Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receive... 14 Subjects: including ACT Math, chemistry, geometry, calculus
{"url":"http://www.purplemath.com/North_Miami_Beach_ACT_tutors.php","timestamp":"2014-04-16T07:57:51Z","content_type":null,"content_length":"24240","record_id":"<urn:uuid:3d028ee6-828d-4da1-8978-169eaa3a732c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Can You Explain What Each Of These Graphs Mean? ... | Chegg.com Can you explain what each of these graphs mean? Thepeaks, and the different values on the x axis. I know how to get them but I can't explain them, especiallywhat the difference between the two is. The t's have the same value in both I know the top one is due to aliasing and the LPF filter I putit trough. I think the differences at least with the values on thex axis is that the first one samples faster than the second. Idon't know what the peak values mean though in respect to sampling.I defiinitely don't understand what having that squared off partdoes in the first. Thank you so much it is symmetrical. sorry bad diagram the other is Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/explain-graphs-mean-thepeaks-different-values-x-axis-know-get-t-explain-especiallywhat-dif-q409413","timestamp":"2014-04-20T22:12:11Z","content_type":null,"content_length":"19628","record_id":"<urn:uuid:2c3bdade-09e7-49a3-9aca-a7a99d77a5d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
drag coefficient Definitions for drag coefficient This page provides all possible meanings and translations of the word drag coefficient Princeton's WordNet 1. drag coefficient, coefficient of drag(noun) the ratio of the drag on a body moving through air to the product of the velocity and the surface area of the body 1. Drag coefficient In fluid dynamics, the drag coefficient is a dimensionless quantity that is used to quantify the drag or resistance of an object in a fluid environment such as air or water. It is used in the drag equation, where a lower drag coefficient indicates the object will have less aerodynamic or hydrodynamic drag. The drag coefficient is always associated with a particular surface area. The drag coefficient of any object comprises the effects of the two basic contributors to fluid dynamic drag: skin friction and form drag. The drag coefficient of a lifting airfoil or hydrofoil also includes the effects of lift-induced drag. The drag coefficient of a complete structure such as an aircraft also includes the effects of interference drag. Find a translation for the drag coefficient definition in other languages: Use the citation below to add this definition to your bibliography: Are we missing a good definition for drag coefficient?
{"url":"http://www.definitions.net/definition/drag%20coefficient","timestamp":"2014-04-19T02:16:08Z","content_type":null,"content_length":"24954","record_id":"<urn:uuid:16f9176c-9792-46ec-b5c4-05dcfb3e11d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimals videos Here are some of my recent additions to Math Mammoth Youtube channel - videos about decimal arithmetic. Add and subtract decimals I explain the main principle in adding or subtracting decimals: we can add or subtract "as if" there was no decimal point IF the decimals have the same kind of parts--either tenths, hundredths, or thousandths. Many students have a misconception of thinking of the "part" after the decimal point as "plain numbers." Such students will calculate 0.7 + 0.05 = 0.12, which is wrong, and I explain why in the video. Multiply decimals by whole numbers I explain how to multiply decimals by whole numbers: think of your decimal as so many "tenths", "hundredths", or "thousandths", and simply multiply as if there was no decimal point. Compare to multiplying so many "apples". For example, 5 x 0.06 is five copies of six "hundredths". Multiply 5 x 6 = 30. The answer has to be 30 hundredths (hundredths corresponding to apples), or 0.30, which simplifies to 0.3. Divide decimals using mental math I explain two basic situations where you can use mental math to divide decimals: 1) Think of "stuff" (which is tenths, hundredths, or thousandths) shared evenly between so many people; OR 2) Think how many times the divisor fits into the dividend. Long division with decimals When the dividend is a decimal, and the divisor is a whole number, long division is easy: just divide as if there was no decimal point, and then put a decimal point in the answer in the same place as it is in the dividend. I also show an example where we add decimal zeros to the dividend, in order to get an even division. Lastly I show how the fraction 3/7 is converted into a decimal using long
{"url":"http://homeschoolmath.blogspot.com/2011/04/decimals-videos.html","timestamp":"2014-04-17T04:35:55Z","content_type":null,"content_length":"110646","record_id":"<urn:uuid:153d1674-93e2-4dda-8abb-539b53c57c52>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
The force F of attraction between two bodies varies jointly as the weights of the two bodies and inversely as the square of the distance between them. Express this fact as a variation using c as a constant. Use M1 and M2 for the weights of the two bodies. The force F of attraction between two bodies varies jointly as the weights of the two bodies and inversely as the square of the distance between them. Express this fact as a variation using c as a constant. Use M1 and M2 for the weights of the two bodies. That's Newton's Law of Gravitation. Since it varies directly with M1 and M2 and inversely with the square of the distance, we have: F \alpha \frac{M_1 M_2}{d^2} Where alpha means that Force is proportional to that. [ But we need a constant, known as proportionality constant, to make the equation true. So: F = c \frac{M_1 M_2}{d^2} ] Expert answered| |Points 7554| Question|Rated bad Asked 10/29/2012 7:11:02 AM 0 Answers/Comments Not a good answer? Get an answer now. (FREE) There are no new answers.
{"url":"http://www.weegy.com/?ConversationId=2546F971","timestamp":"2014-04-20T05:45:54Z","content_type":null,"content_length":"33013","record_id":"<urn:uuid:6064b7c6-3940-47d9-b3b5-6e0203fdd85e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
multivariable calc question August 28th 2009, 12:58 PM #1 Jul 2009 Describe the given set with a single equation or with a pair of equations: The set of points in space that lie 2 units from the point (0,0,1) and, at the same time, 2 units from the point (0,0,-1). I got the circle x^2 +y^2=3, z=0 is this correct? I agree 100% It would be a circle in $R^2$ with radius $\sqrt{3}$ which is what u have. August 28th 2009, 01:02 PM #2
{"url":"http://mathhelpforum.com/calculus/99592-multivariable-calc-question.html","timestamp":"2014-04-18T13:58:19Z","content_type":null,"content_length":"31481","record_id":"<urn:uuid:a72c738f-256c-4918-bcb2-c2cbf1e07f9d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
How to convert a number to binary and then seperate into digits in pbasic? 08-29-2009, 03:44 AM #1 I want to use the potentiometer to provide input to a neural network, so I want to convert the rctime number to a binary number, and then separate it into 1 digit lengths. For example, say the potmeter gives the rctime value of 323. In binary it would be 0101000011. Then in a format understandable by the neural net, it would be split, literally, into 10 separate digits, i.e. 0, 1, 0, 1, 0, 0, 0, 0, 1, and 1. Each of these would be stored as a variable (or a constant?). Another example would be the number 700 (around the max of the potmeter). In binary it would be 1010111100. So again, split into 10 digits this would be: 1, 0, 1, 0, 1, 1, 1, 1, 0, and 0. So the steps pbasic would have to perform would be as follows: 1. Take the number of the current rctime and convert it to binary. 2. Take that binary number and split it into 10 separate digits. 3. Store each of those digits as a variable. I've been trying to look through the documentation, trying to figure out how to do this. Can anybody help me? Last edited by ForumTools; 10-02-2010 at 11:05 PM. Reason: Forum Migration Hi , converting to binary is not a problem as the Stamp stores all numbers as binary anyway, decimal format is just a way of displaying the value so that it's easier for you and I to understand. You could create an array for each bit of the number and read the bits into the array something like the following x VAR Bit(10) ' ten bit array idx VAR Nib ' variable use for the count loop value VAR Word ' value that contains the ten bits of interest FOR idx =0 TO 9 ' ten iterations of the loop x(idx)=value.BIT0 ' store bit value=value>>1 ' shift value right one bit Jeff T. Last edited by ForumTools; 10-02-2010 at 11:05 PM. Reason: Forum Migration If you're sending the data serially to a device this could be dine simply with the BIN formatter of the SEROUT command. It's not clear to me where the data is originating and where it is being stored though. Chris Savage Parallax Engineering 50 72 6F 6A 65 63 74 20 53 69 74 65 Last edited by ForumTools; 10-02-2010 at 11:05 PM. Reason: Forum Migration I'm writing a neural net in pbasic to control the servo, and on the input side of the neural net, I need 10 1-bit values, which I want to get from a potentiometer, also hooked up to the stamp Thanks Unsoundcode, I'm not sure what the heck you explained to me, but I may be able to figure it out =) Apologies, I am still new at programming. Last edited by ForumTools; 10-02-2010 at 11:05 PM. Reason: Forum Migration @ Chris , is that new or have I been to blind to see it , neat link. Jeff T. Last edited by ForumTools; 10-02-2010 at 11:05 PM. Reason: Forum Migration Jeff, not new...just hasn't been updated in forever. I've been too busy, but it seems recently there has been a need to get information up fast and so I am using the tool again. Chris Savage Parallax Engineering 50 72 6F 6A 65 63 74 20 53 69 74 65 Last edited by ForumTools; 10-02-2010 at 11:05 PM. Reason: Forum Migration 08-29-2009, 04:15 AM #2 08-29-2009, 04:42 AM #3 Parallax Engineering Rocklin, CA Blog Entries 08-29-2009, 05:23 AM #4 08-29-2009, 05:44 AM #5 08-29-2009, 05:47 AM #6 Parallax Engineering Rocklin, CA Blog Entries
{"url":"http://forums.parallax.com/showthread.php/115582-How-to-convert-a-number-to-binary-and-then-seperate-into-digits-in-pbasic","timestamp":"2014-04-19T22:16:24Z","content_type":null,"content_length":"61088","record_id":"<urn:uuid:ddbe6273-41e8-4919-b0c5-a96b16b1331c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
PR70 versus PF70 Does anybody know the difference between a PR70 and a PF70? I know 70 is the highest grade a coin can get, but is there a PR proof and a PF proof? Thanks for your help. Terribly think we be talkin' about the same thing old fellow..... stainless ANTONINIVS they're both to same - proof 70 mpcusa Online Dealer of Mpc Yep!, just another way of saying the same thing borgovan Supporter** Absolutely no difference. It was PR-70 when I was growing up, but that was a couple of decades ago. Either PR or PF is an acceptable prefix; they both mean "proof." raider34 WINS Member Yep, they both mean proof 70. PCGS uses PR and NGC uses PF. And for a second there I was having my doubts (Olympic Figurer Skating). Guess it's not the booze? :mouth: mikenoodle The Village Idiot Supporter They are exactly the same and are interchangeable. Just like XF and EF. They mean the same, just some graders use one, some the other. Thank you all for your reply. "If you're going to collect something, it might as well be money" ~ Mike Mezack
{"url":"http://www.cointalk.com/threads/pr70-versus-pf70.94477/","timestamp":"2014-04-20T05:43:44Z","content_type":null,"content_length":"50036","record_id":"<urn:uuid:bc33d935-9255-48a6-a007-658765543e1c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
11.2.4 Bit-Value Type - The BIT data type is used to store bit-field values. A type of BIT(M) enables storage of M-bit values. M can range from 1 to 64. To specify bit values, b'value' notation can be used. value is a binary value written using zeros and ones. For example, b'111' and b'10000000' represent 7 and 128, respectively. See Section 9.1.6, “Bit-Field Literals”. If you assign a value to a BIT(M) column that is less than M bits long, the value is padded on the left with zeros. For example, assigning a value of b'101' to a BIT(6) column is, in effect, the same as assigning b'000101'.
{"url":"http://docs.oracle.com/cd/E17952_01/refman-5.6-en/bit-type.html","timestamp":"2014-04-17T14:01:55Z","content_type":null,"content_length":"4977","record_id":"<urn:uuid:1092041a-be1c-4624-ba59-7c185505ec8e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Palisade, NJ Math Tutor Find a Palisade, NJ Math Tutor ...Whether a student needs to learn addition or upper level algebra, basic reading skills or SAT level English, I can help. I will methodically and patiently work step by step to make the material easy. I instruct students ranging from pre-K to adult. 30 Subjects: including prealgebra, geometry, reading, statistics ...I have countless hours of experience as a peer tutor throughout high school and college, and have also worked with a variety of age groups (K-12). I specialize in biology/biochemistry (any subfield/level), math (geometry, pre-algebra, algebra I, algebra II, and elementary math), English (litera... 24 Subjects: including geometry, algebra 1, algebra 2, prealgebra ...By teaching students the art of self-learning, their confidence in their learning ability will improve while they develop and enhance their analytical and problem-solving skills. These skills are important, because they are applicable in any endeavor, such as being a doctor, engineer, attorney, or an entrepreneur. TEACHING STYLEMy teaching style is also very simple: teach to be taught. 13 Subjects: including linear algebra, algebra 1, algebra 2, calculus I’m patient and effective. I encourage questions since learning is always an interactive process. I’ve taught both high school and college including remedial courses at both levels. 9 Subjects: including precalculus, trigonometry, statistics, algebra 1 ...His skills, professionalism and dedication are outstanding." - Lauren D., Assistant Director, Monroe College Writing CenterIn regards to study skills, I believe that I am qualified to teach this subject for two main reasons. The first reason is my educational attainment. I obtained a 4.0 GPA in... 50 Subjects: including algebra 2, elementary (k-6th), music history, religion
{"url":"http://www.purplemath.com/palisade_nj_math_tutors.php","timestamp":"2014-04-20T16:36:28Z","content_type":null,"content_length":"23872","record_id":"<urn:uuid:39cb2c34-c4d7-4226-b729-10d3ac8b60cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Topology Seminars Spring 2014 Seminars will begin this semester on Monday 3 February. They will be held at 4.00pm in Frank Adams 2, preceded by high-class tea, coffee and biscuits from 3.30pm on the Atrium Bridge. We will visit Sandbar after each meeting, for refreshments and further discussion... • Monday 03 February 2014 The topology of Stein fillable contact manifolds in higher dimensions Diarmuid Crowley (Max Planck Institute, Bonn) 4pm FRANK ADAMS 2 Abstract (click to view) An almost contact manifold M is a closed oriented (2q+1)-manifold with a reduction of its structure group to U(q). It is an open question in dimensions 7 and higher whether every almost contact manifold admits an actual contact structure. A special class of contact structures arise when M is the boundary of a Stein domain and Eliashberg's h-principle, a deep result in the subject, characterises Stein domains. In this talk I will report on a joint project with Jonathan Bowden and Andras Stipsicz where we organise Eliashberg's h-principle in the setting of Kreck's modified surgery. As a consequence, we obtain a bordism-theoretic characterisation of which almost contact manifolds admit Stein fillings. As an application, we show that every simply connected almost contact 7-manifold with torsion free second homotopy admits a Stein filling. • Monday 10 February 2014 Towards the Grothendieck-Teichmuller Group Goran Malic (University of Manchester) 4pm FRANK ADAMS 2 Abstract (click to view) One of the most mysterious objects in mathematics is the absolute Galois group G:=Gal(Q/Q), the group of automorphisms of Q which fix Q pointwise. This group encodes Galois theory over Q. From 1972 to about 1984 Grothendieck sketched out an approach to studying this group via its action on Dehn twists and connected bipartite graphs cellularly embedded onto compact and closed Riemann surfaces. In 1990 Drinfeld constructed a certain group called the Grothendieck-Teichmuller group, denoted by GT, which acts on a certain braided tensor category. It is conjectured that G and GT are isomorphic. In this talk I will describe some elementary aspects of the action of G on bipartite graphs cellularly embededd onto Riemann surfaces, and demonstrate an elementary construction of GT. • Monday 17 February 2014 No Seminar Room in use for President's visit. 4pm FRANK ADAMS 2 • Monday 24 February 2014 An introduction to stunted weighted projective space Beverley O'Neill (University of Manchester) 4pm FRANK ADAMS 2 Abstract (click to view) Weighted projective spaces provide the most basic examples of toric orbifolds. Although they belong to such a restricted class of algebraic varieties, their simplicity has made them attractive objects to study in algebraic and differential geometry as well as theoretical physics. Their integral cohomology ring was computed in the pioneering work of Kawasaki in 1973 but has a chaotic feel. For many years since then algebraic topologists have paid little attention to weighted projective spaces. By bringing some order to Kawasaki's work, I will introduce stunted weighted projective space, a generalisation of stunted complex projective space, and extend his results to compute their integral cohomology rings. It is possible to impose a CW-structure and identify the corresponding homology generators in terms of cellular cycles, but this is usually rather complicated. If time permits, I shall give a brief overview of how it may be achieved. • Monday 2014 Seminar Cancelled, owing to unforseen circumstances 4pm in FRANK ADAMS 2 • Monday 2014 Thom spaces and Thom isomorphisms; Spin, Spin^C and all that Nigel Ray (University of Manchester) 4pm in FRANK ADAMS 2 Abstract (click to view) A remarkable number of spaces that arise in complex and quaternionic geometry are related in some way or other to Thom spaces of vector bundles. Such relationships often give excellent insight into the deeper topological structure of the spaces in question; for example, they may facilitate computations of their E*(-) cohomology rings. Here, E ranges from the basic examples of singular cohomology with Z/2 or Z coefficients, via real and complex K theory, through to complex and quaternionic cobordism. The key ingredient in all these cases is the Thom isomorphism --- whose existence depends on the orientability of the bundle with respect to E*(-). Conditions for orientability vary from theory to theory, and lead us naturally to the Lie groups Spin and SpinC. Rather than focus on the details of these ideas, I shall try to interest the audience in more general aspects of key examples (which can be difficult to extract from the literature). • Monday 2014 The toric structure of (2n,k)-manifolds Victor M Buchstaber (Moscow State University and The Steklov Institute) 4pm in FRANK ADAMS 2 Abstract (click to view) This talk is based on recent results obtained with Svjetlana Terzic. It aims to axiomatise a notion of generalised quasitoric manifold, so as to include rich and combinatorially beautiful examples such as complex Grassmannians. We apply and develop methods and results from the algebraic geometry of homogeneous spaces, and from toric topology • Monday 2014 Moduli spaces of labelled graphs James Griffin (University of Glasgow) 4pm in FRANK ADAMS 2 Abstract (click to view) Although Aut(Fr) has a finite simplicial set as a classifying space, its homology is extremely difficult to calculate and the problem just gets worse as the rank r increases. However by a result of Hatcher and Vogtmann the homology is known to be stable, and Galatius computed this stable homology to be that of the sphere spectrum. More generally Hatcher and Wahl conjectured that automorphism groups Aut(H*G*...*G) of free products of groups are homologically stable. I'll prove this via a moduli space of labelled graphs and a little category theory. • Monday 2014 Topology of spaces of symmetric loops James Montaldi (University of Manchester) 4pm in FRANK ADAMS 2 Abstract (click to view) There is a classical result that the set of connected components of the loop space of a manifold is in 1-1 correspondence with the conjugacy classes of the fundamental group of the manifold. Motivated by the study of symmetries of planar choreographies, I will describe the extension to symmetric loops (periodic orbits) of this result, when the manifold has a group action. This is joint work with my PhD student Katie Steckles. • Monday 2014 TBA James Cranch (University of Sheffield) 4pm in FRANK ADAMS 2 • Monday 2014 No Seminar - Bank Holiday 4pm in FRANK ADAMS 2 • Monday 2014 Adjunctions as homotopy Amit Kuber and David Wilding (Double act - University of Manchester) 4pm in FRANK ADAMS 2 Abstract (click to view) When treating (small) categories as topological spaces (obtained by realizing the nerve of the category), adjunctions yield homotopy equivalence. Since homotopy is an equivalence relation, one forgets about which adjoint is the left one. In this talk, we will explain some ideas from our joint work over the past year about an ordered version of homotopy theory. This definition of homotopy uses adjoints in a way that distinguishes between left and right adjoints. We will describe a way to assign a sequence of posets to a (small) category, and we will explain their significance using several examples from the (2-) category of posets. We will also discuss the classes of maps on which this assignment is functorial. These constructions do not generalise the topological homotopy theory and are not based on any model structure; they are supposed to capture a different notion of deformation of a map into another. Further information For further information please contact the seminar organiser. Spring 2008 Autumn 2008 Spring 2010 Summer 2010 Spring 2012
{"url":"http://www.mims.manchester.ac.uk/events/seminars/topology.php","timestamp":"2014-04-18T10:50:54Z","content_type":null,"content_length":"17700","record_id":"<urn:uuid:68c81ffe-8fa2-4335-9adc-d4fed7e99e2f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/hoblos/answered/1","timestamp":"2014-04-18T13:51:57Z","content_type":null,"content_length":"121439","record_id":"<urn:uuid:79eb0963-47de-4b2a-8dc8-554f1bd6c8c5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem of a function January 26th 2007, 09:39 AM #1 Super Member Mar 2006 Problem of a function Suppose that $f:\mathbb{R}\mapsto\mathbb{R}$ is a continuous, bounded, strictly increasing function. (a) Show that there is a point $a_{1}\epsilon\mathbb{R}$ such that $f(a_{1})>a_{1}$ (b) For each $n\epsilon\mathbb{N}$, define $a_{n+1}=f(a_{n})$. Explain why $a_{2}>a_{1}$. Then explain why $a_{n+1}>a_{n}$ for all $n\epsilon\mathbb{N}$ (c) Explain why the sequence ${a_{n}}$ is bounded above. (d) Explain why the sequence ${a_{n}}$ converges to some number, L. (e) Explain why f(L) = L. My work so far: (a) f is bounded and strictly increasing, therefore f'(x) > 0, so there is a point $x\epsilon\mathbb{R}$ such that x < f(x). I know this reason is not enough, still working on the proper proof. (b) Now $a_{2}=a_{1+1}=f(a_{1})>a_{1}$ as defined by part (a). Let $S = {n \epsilon \mathbb{N}:a_{n+1}>a_{n}}$ $1 \epsilon S$ as $a_{1+1}=a_{2} > a_{1}$ let $k \epsilon S$, then $a_{k+1} > a_{k}$ Now $a_{k+1+1} = f(a_{k+1}) > a_{k+1}$, thus proved $k+1 \epsilon S$ Therefore S is inductive, and proved $a_{n+1}>a_{n}$ (c) Because f is continuous and bounded. (d) Because f is monotonic, implies ${a_{n}}$ is monotonic, thus ${a_{n}}$ is bounded. All bounded monotonic sequence converges, thus the sequence converges to L. (e) ${a_{n}}$converges to L, then $f({a_{n}})$ converges to L because f is continuous. I know I made some mistakes up there, please check, thank you. Last edited by tttcomrader; January 26th 2007 at 10:04 AM. Let us assume that $f(x)\geq x$ for all $x\in \mathbb{R}$. Then $f(x)$ is not bounded because the function $x$ is not bounded. Which is a contradiction. Now follow the logic.... FOR ALL $x\in \mathbb{R}$ we have $f(x)\geq x$. The negation of that (which must be because by contradiction) is, FOR SOME $x\in \mathbb{R}$ we have $f(x)<x$. Because the negation of a universal quantifier is an existencial quantifier. And the negation of $\geq$ is $<$. The problem with that is we do not know whether $f$ is differenciable or not. (b) Now $a_{2}=a_{1+1}=f(a_{1})>a_{1}$ as defined by part (a). Let $S = {n \epsilon \mathbb{N}:a_{n+1}>a_{n}}$ $1 \epsilon S$ as $a_{1+1}=a_{2} > a_{1}$ let $k \epsilon S$, then $a_{k+1} > a_{k}$ Now $a_{k+1+1} = f(a_{k+1}) > a_{k+1}$, thus proved $k+1 \epsilon S$ Therefore S is inductive, and proved $a_{n+1}>a_{n}$ I got the same thing. It is inductive because $a_{n}>a_{n-1}$ then $a_{n+1}=f(a_n)>f(a_{n-1})=a_n$ because it is strictly increasing. I'm sorry, I don't really understand what you were doing. Did you proved f(x) < x by contradiction? But I'm trying to prove f(x) > x. What is negation? Sorry about my lack of understanding. January 26th 2007, 09:55 AM #2 Global Moderator Nov 2005 New York City January 26th 2007, 10:04 AM #3 Global Moderator Nov 2005 New York City January 26th 2007, 10:07 AM #4 Super Member Mar 2006 January 26th 2007, 10:10 AM #5 Global Moderator Nov 2005 New York City January 26th 2007, 10:12 AM #6 Super Member Mar 2006 January 26th 2007, 10:15 AM #7 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/calculus/10678-problem-function.html","timestamp":"2014-04-17T04:07:26Z","content_type":null,"content_length":"64067","record_id":"<urn:uuid:e46a6b2a-59a5-48de-9529-c785cae02912>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
The influence of network rigidity on the electrical switching behaviour of Ge–Te–Si glasses suitable for phase change memory applications Anbarasu, M and Asokan, S (2007) The influence of network rigidity on the electrical switching behaviour of Ge–Te–Si glasses suitable for phase change memory applications. In: Journal of Physics D: Applied Physics, 40 (23). pp. 7515-7518. Restricted to Registered users only Download (654Kb) | Request a copy $Ge_{15}Te_{85-x}Si_x (2 \leq x \leq 12)$ glasses of a wide range of compositions have been found to exhibit electrical switching at threshold voltages in the range 100–600V, for a sample thickness of 0.3 mm. The samples become latched to the ON state (memory behaviour) at higher ON state currents (>1 mA).However, the switching is found to be reversible (threshold behaviour) if the ON state current is limited to lower values ( \leq 0.7 mA). While $Ge_{15}Te_{85-x}Si_x$ glasses with x \leq 5 exhibit a normal electrical switching, an unstable behaviour is seen in the I–V characteristics of $Ge_{15}Te_{85-x}Si_x$ glasses with x > 5 during the transition to the ON state. Further, a sparking in the electrode region and the splashing of the active material is observed in $Ge_{15}Te_ {85-x}Si_x$ glasses with x > 5. It is also interesting to note that the switching voltage $(V_T)$ and initial resistance (R) of $Ge_{15}Te_{85-x}Si_x$ glasses increase with addition of Si, exhibiting a change in slope at a composition x = 5 ( \langle r \rangle = 2.4). The observed electrical switching behaviour of $Ge_{15}Te_{85-x}Si_x$ glasses has been understood on the basis that the composition x = 5 is the rigidity percolation threshold of the $Ge_{15}Te_{85-x}Si_x$ system. It is also proposed that the $Ge_{15}Te_{85-x}Si_x$ glasses with x < 5 are likely to be more suitable for phase change memory applications. Actions (login required)
{"url":"http://eprints.iisc.ernet.in/13015/","timestamp":"2014-04-23T16:03:22Z","content_type":null,"content_length":"24255","record_id":"<urn:uuid:4137c9ea-1aeb-494d-9ef4-9f5c01338750>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2009 [00132] [Date Index] [Thread Index] [Author Index] Re: Animation = Translation + Vibration, But How? • To: mathgroup at smc.vnet.net • Subject: [mg95192] Re: Animation = Translation + Vibration, But How? • From: Alexei Boulbitch <Alexei.Boulbitch at iee.lu> • Date: Fri, 9 Jan 2009 06:24:39 -0500 (EST) Hi, Gidil, there was a nice joke (though I would not take a risk of repeating it here) ending up with a dialog: "But why?" "But how?". Did you mean it? :-) To be serious, why not to take a well-known solution of a cantilever and animate it? It will then at least behave as it should. See for instance, Landau, L. D. & Lifshitz, E. M. Theory of Elasticity (Pergamon Press, Oxford, 1986) the chapter on bending of rods. Take the one with one end clumped, another loaded by a point force. You will easily find it in problems to the chapter on small bending. Just to give an example: (* Begin of the example 1 *) (* This is the solution I mean for a cantilever, length 1, clumped at its left end and *) (* loaded by a force 0.1*Sin[t] at its another end *) z[x_, t_] := x^2*(3 - x)*0.1*Sin[t]; (* This shows vibration of its free end. Play here with the option Thickness[n] *) (* to display the rod as a rod, rather than a line *) Animate[Plot[z[x, t], {x, 0, 1}, PlotStyle -> Thickness[0.02], PlotRange -> {-1, 1}, Frame -> False, Ticks -> None, Axes -> None], {t, 0, 10 Pi, 0.1}, Paneled -> False] (* End of the example 1 *) Try this. If you need to show a more complex motion you may again play with equation of motion of its left end for which you can give any function. For example, try this: (* Begin of example 2 *) (* this is the law of motion of the left end *) x0[t_] := Cos[t/5]; (* This we already had in the previous example, but now the left end moves *) z[x_, t_] := (x - x0[t])^2*(3 - (x - x0[t]))*0.1*Sin[t]; Animate[Plot[z[x, t], {x, x0[t], 1 + x0[t]}, PlotStyle -> Thickness[0.02], PlotRange -> {{-1, 2}, {-1, 1}}, Frame -> False, Ticks -> None, Axes -> None], {t, 0, 10 Pi, 0.1}, Paneled -> False] (* End of Example 2 *) You may want to attach any object to the oscillating cantilever end. Try this: (* Begin of the example 3 *) x0[t_] := Cos[t/5]; z[x_, t_] := (x - x0[t])^2*(3 - (x - x0[t]))*0.1*Sin[t]; Animate[Show[{Plot[z[x, t], {x, x0[t], 1 + x0[t]}, PlotStyle -> Thickness[0.02], PlotRange -> {{-1, 2.3}, {-1, 1}}, Frame -> False, Ticks -> None, Axes -> None], Graphics[Disk[{x0[t] + 1, z[x0[t] + 1, t]}, 0.1]]}], {t, 0, 10 Pi, 0.1}, Paneled -> False] (* End of the example 3 *) .... and so on. Finally, if you need to to have a self-standing movie, have a look into the today-posted discussion Re: [mg95123] Manipulate, Export, .avi, forward run without the slider in the... Have fun :-) , Alexei GidiL wrote: > Dear All! > I created a cantilever in Mathematica (nothing fancy, a Graphics 3D > object created with Polygon). > The only thing that I want now is to simulate its movement. I thought > it would be easy, but it's proving to be diabolically difficult. > Boundary conditions: the cantilever should be fixed in one end, and > allowed to oscillate in the other (the oscillations are predetermined > by some simple trigonometric function). > This system should be allowed to translate in space (a moving beam, so > to speak). > So it should be allowed to move in the X-Y plane and oscillate along > the Z- axis. > Moving it in the X-Y plane is accomplished with the Translate > function. But how can I make it oscillate in a specific manner? How > can I combine in one animation both movements? > Any help would be greatly apprerciated, > Gideon Alexei Boulbitch, Dr., Habil. Senior Scientist IEE S.A. ZAE Weiergewan 11, rue Edmond Reuter L-5326 Contern Phone: +352 2454 2566 Fax: +352 2454 3566 Website: www.iee.lu This e-mail may contain trade secrets or privileged, undisclosed or otherwise confidential information. If you are not the intended recipient and have received this e-mail in error, you are hereby notified that any review, copying or distribution of it is strictly prohibited. Please inform us immediately and destroy the original transmittal from your system. Thank you for your co-operation.
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jan/msg00132.html","timestamp":"2014-04-18T13:20:52Z","content_type":null,"content_length":"29123","record_id":"<urn:uuid:dc8be6f5-e2ec-4a71-9a60-aa3ffeed93ce>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
The Finance Professionals' Post A Primer on Value at Risk Click to Print This Page Value at risk, or VaR, is viewed by some as a massively important measure. It is unique in how it characterizes risk. Most measures show risk either as a percentage (as standard deviation and tracking error do) or in units (as the Sharpe and Treynor risk-adjusted measures do). VaR shows risk in terms of money—that is, the money that might be lost. The main purpose of VaR is to assess market risks that result from changes to market prices. VaR assesses risk by using standard statistical techniques that are routinely used in other technical fields. It can be viewed formally as measuring the worst expected loss over a given horizon at a given confidence level (Jorion 2001). We will explain more about this shortly. To many, VaR sounds like a complex method to evaluate risk. This is due, perhaps, to the approaches that are used, which often sound complicated; this article hopes to make these less perplexing. In addition, how VaR is expressed is unique, which perhaps is why some find it difficult to understand. There are three key characteristics of any VaR statistic: • Money amount • Time frame • Confidence level For example, we might say that a portfolio’s VaR is $1 million, over the next week, at a confidence of 95%. This means that there is a 95% chance that the most we can lose over the next week is $1 million. Can we lose more? Yes, because we are only confident at the 95% level. There is a 5% chance we can lose more. Might we lose less? Yes, of course. We might not lose anything; this is just a worst-case scenario. It is not truly the worst case, since we could lose more, but it is the worst case within a certain level of confidence. Now you can see why this is confusing. If we increase our confidence level (to 98%, for example), our VaR increases because we are taking into consideration even more bad events that might occur. As we increase our confidence level, the potential loss increases. If we increase the time horizon, our VaR will also increase, because we have extended the time in which bad things may occur. Jorion (2009) offers a few caveats regarding VaR: First, it does not describe the worst loss given the employment of a confidence level, meaning we have an expectation that there will be times when the loss will be greater. Second, VaR does not describe the loss in the left tail but rather indicates the probability of such a value occurring. Finally, VaR is measured with some degree of error as it is subject to normal sampling VaRiation. Given the options available when employing VaR—confidence interval, length of sample period—different results can be obtained. There are three approaches used to calculate VaR: • VARIANCE COVARIANCE (VCV): This method uses the variance and covariance of our assets as parameters and assumes that our distribution of returns is normal. In reality the distribution is probably not normal. In spite of this conflict, the VCV remains a commonly used measure and is the one to which we will devote most of our time. • HISTORICAL: This approach looks at our returns over some prior period (e.g., for the last 1,000 days) and ranks them from worst to best. We then pick a level we are interested in (e.g., 95%) and use that return as our prediction of what is the worst that can happen. We then apply this return to our portfolio to determine what the impact would be. There is no requirement to assume a normal distribution. This approach involves applying the portfolio’s current weights to a time series of historical returns. Jorion (2009) describes this as replaying a “tape” of history with current weights. While an advantage of this method is that it makes no assumptions regarding the distribution, it relies on a short historical window, which may not contain likely market moves, and therefore may miss certain risks. This approach is based on a window of recent historical data that employs an assumption that this window reflects the range of future outcomes. When this is not the case, the results can be misleading. • MONTE CARLO (MC): This method does not require our returns to be normally distributed but does make certain assumptions about the distribution (for example, that it might be leptokurtic, with a higher peak around the mean and fatter tails than in a normal distribution). Random numbers are created and a simulation is run to try to estimate what might occur; from this we derive the VaR. This method is similar to the historical method, except that random drawings from a prespecified distribution are used to predict market movements. A random number generator produces a distribution from which the returns are drawn and from which the VaR is derived. This method has a significant computational requirement and entails assumptions about the stochastic process. There are also sampling issues because different random numbers will result in different results. The key benefit of the MC VaR is its ability to deal with exotics, such as path-dependent Variance-Covariance Method We will step through an example of how to derive the VaR for a portfolio (with two securities at a combined market value of $2 million) using the VCV method. Everything we show can be replicated in Excel, though the math might get a tad more complex with examples involving many more securities. Although we refer to this approach as variance covariance, we begin by calculating standard deviation. But, as you will see later, we convert standard deviation into variance by squaring it. We will also use correlation rather than covariance. While covariance can also be used, I was shown this approach using correlation and so employ it here. (Note that there is no advantage to one measure over the other.) We begin by deriving the standard deviation for each of the securities in our portfolio. We take the holdings as of today and find their returns for a prior period. Here we have some flexibility in deciding how far back we wish to go. We would expect to use daily returns and need to go back at least 30 trading days. (To have a normal distribution we need to have at least 30 elements in our sample size.) The standard deviation formula is quite simple: r[i] = the return for period i r = average return for the period n = number of discrete periods over which standard deviation is being measured The STDEV Excel function can be used to replicate this formula. The n-1 is used because we are dealing with a sample rather than the entire population. (An alternative formula is the STDEVP function, which replaces n-1 with n, or the total number of returns in the period. There is no consensus as to which approach to employ, though STDEV is a bit more conservative.) │ │ │ │ Standard │ │ │ │ Market │ │ Deviation │ Correlation │ │ Stocks │ Value │ Weight │ (σ Annual) │ (ρ) │ │ a │ 750,000 │ 37.5% │ 25% │ 20% │ │ b │ 1,250,000 │ 62.5% │ 40% │ │ │ Total │ 2,000,000 │ 100% │ │ │ We have to measure the correlation between each possible pairing of securities in our portfolio. The Excel function for correlation is CORREL. In our example we will use two securities to keep our math simple, though later we will discuss what is involved when you have more than two. Table 1 provides the details we will use for our VaR calculation. Note that we assume a zero mean, which is a conservative assumption. We next need to derive the standard deviation for the entire portfolio. While you might think this would be a simple or weighted average, it is not. The formula is a tad more complicated: w[a] = stock a’s weight w[b] = stock b’s weight σ[a] = stock a’s annual standard deviation σ[b] = stock b’s annual standard deviation ρ[a,b] = correlation of a and b While this formula might look challenging, it is really quite simple to employ. As noted above, the square of the standard deviation is the variance. When we apply this formula to our values we obtain 28.401%. This is an annualized value. Let us say we want to derive the VaR for the next day, which is a common requirement: we will need to convert this value into its daily equivalent. This is done by dividing it by the square root of 252. (There are roughly 252 trading days in a year. One might argue that it should be 250, 251, 253, which is fine; the difference will be negligible.) The one-day standard deviation is 1.7891%. We are now ready to calculate our value at risk. But first, we need to decide what our confidence level will be. Let us say we want to be 95% confident, what do we use? Well, recall that for a normal distribution, plus or minus one standard deviation covers roughly 68% of the distribution. So how many standard deviations do we have to move from our mean to cover 95% of the distribution? To determine this we can use the Excel NORMSINV function. (The result the function provides us is for the confidence of just one side or tail of our distribution. Here we are only interested in potential losses or the downside of the distribution.) We simply key in =NORMSINV(0.95) to obtain this value, which is 1.645. The VaR formula is: VaR[95%Confidence] = P × 1.645 × σ P = the portfolio value σ = the portfolio’s standard deviation (which we just calculated to be 1.7891%) And so, if we carry out this math we find our value at risk to be $58,862. And so, the “most” we can lose over the next day, at a confidence of 95%, is $58,862. Again, there is a 5% chance we could lose more, but we decided to evaluate this at the 95% confidence level. This example was done with only two securities, but if we have 50 securities in our portfolio, is it much harder? The basic math is the same: the challenge is deriving the correlation values. We measure correlations between two securities at a time, meaning that we would have to compare a lot of relationships with 50 securities in our portfolio. How many? The formula to derive the number of correlations is: n (n - 1) n = the number of securities Therefore, if we have 50 securities, the result is 1,225, meaning we would have to derive 1,225 individual correlations. What if we had 100 positions? Then we would need to derive 4,950. Imagine a scenario in which we are measuring VaR for all of our clients, and we have 1,000 portfolios with an average of 50 to 100 securities in each; we would be doing a lot of work. We would get pretty tired of doing this with Excel, which is the reason most firms employ software packages. Also, mapping procedures allow firms to avoid estimating and managing millions of correlations. Is this all there is to VaR? Well, no. As we discussed above, there are two other methods plus a variety of alternative approaches to VaR, such as conditional VaR (CVaR), which is the expected loss if events fall outside of the confidence level. But what we have described here is a typical way to derive a basic VaR for a portfolio. VaR, like many risk measures, has its supporters and detractors. For example, in a recent Wall Street Journal article, Eleanor Laise (2009) pointed out that the Monte Carlo approach to VaR is used to provide a more accurate assessment of risk, as it does not rely on a normal distribution. Writing for the Los Angeles Times, Morgen Witzel (2009) discussed how VaR falls short and referenced a new book by Pablo Triana, Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets?, which details the model’s shortcomings. It is not the intent of this article to discuss these issues but rather to simply provide an explanation of the measure itself. To gain further insight into this topic, may we suggest a recent article by Neil A. O’Hara (2009) that appeared in the Investment Professional. Regardless of the criticism, this measure is not going to go away. Therefore, the more you know, the better. This article provided a brief explanation of the concepts. In the next article, “Proceed with Caution: The Pitfalls of Value at Risk,” I provide further elaboration on the controversies surrounding this topic. Jorion, Philippe. 2001. Value at Risk: The New Benchmark for Managing Financial Risk. New York, NY. McGraw-Hill. ———. 2009. Financial Risk Manager Handbook. Hoboken, NJ. John Wiley & Sons. Laise, Eleanor. September 8, 2009. “Some Funds Stop Grading on the Curve.” Wall Street Journal. O’Hara, Neil A. Winter 2009. “The Greater Fool Theory.” The Investment Professional, vol. 2, no. 1. 28–33.. Witzel, Morgen. September 7, 2009. “Financial Crisis Has Deep Roots in Academia.” Los Angeles Times. –David Spaulding, CIPM, is an internationally recognized authority on investment performance measurement and president of the Spaulding Group Inc. Based in Somerset, New Jersey, the Spaulding Group is a provider of investment performance products and services, including the Journal of Performance Measurement. Spaulding is the author, contributing author, and coeditor of several books on performance measurement. The author thanks Pace University professor Aron Gottesman for discussion related to the methodologies and for reviewing an earlier draft. To learn more about overlooked Excel formulas that are useful for financial professionals, check out Advanced Excel for Data Analysis. How are long/short assets treated in an excel VAR model? Thanks. I tend to use the modified Value at Risk, which accounts for skew and kurtosis in the returns distribution (which are required to correctly model non-normally distributed returns). There's an explanation and an Excel spreadsheet here: http://optimizeyourportfolio.blogspot.com/2011/07/modified-value-at-risk.html There's a summary of several methods to calculate Value at Risk at http://investexcel.net/1506/value-at-risk-methods-spreadsheets/ This is only a preview. Your comment has not yet been posted. Your comment could not be posted. Error type: Your comment has been saved. Comments are moderated and will not appear until approved by the author. Post another comment As a final step before posting your comment, enter the letters and numbers you see in the image below. This prevents automated programs from posting comments. Having trouble reading this image? View an alternate. Recent Comments
{"url":"http://post.nyssa.org/nyssa-news/2010/05/a-primer-on-value-at-risk.html","timestamp":"2014-04-16T07:13:28Z","content_type":null,"content_length":"76666","record_id":"<urn:uuid:a10c0785-52eb-41cf-a977-c2f8ac7d7794>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordering labellings of a fixed poset. up vote 0 down vote favorite Let $\{A_1,\ldots, A_m\}$ be a family of sets and $I=\{1, \ldots, m\}$. Assume for any $J\subset I$, $B_J=\bigcap_{i\in J}A_j$ satisfies $1\leq |B_J| \leq m-1$ as long as $|J|>1$. We define a labelling of $J\subseteq I$ as follows. $l(J)=|B_J|$ if $|J|>1$ and $l(J)=m-1$ otherwise. Then we have the labelled poset $(2^I, \subseteq, l)$ (or labelled lattice). Observe that $l\equiv m-1$ if $l(I)=m-1$. If $l(I)=m-2$, then there are two possibilities: 1. $l(J)=m-2$ for all $J$ with $|J|=m-1$ or 2. For a fixed $J_0$ with $|J_0|=m-1$, $l(J_0)=m-1$ and for all other $J$ with $|J|=m-1$, $l(J)=m-2$. My question is this: How can I order(partially is fine) this fixed lattice with respect to different labellings so that I will have the labels distributed "nicely"? As you see I also am not sure of the type of the order. I like to see m-1's close to the top but also many of them. I also like to see large labels more than small labels... any idea???? 1. If $J,K \subset I$ with $|J\cap K|>1$ and $l(J)=l(K)=m-1$, then $l(J\cup K)= m-1$. 2. If $K\subset J$, then $l(K)\geq l(J)$. 3. If $J_1, \ldots, J_s$ are maximal such that $l(J_i)=m-1$, then $|J_1|+\ldots+|J_s|=m$. posets lattices Does i different from j mean A_i is different from A_j? I note there are nonconstant labellings l which satisfy l(I) = m-1. I suspect there are more than two possibilities if I gets label m-2. Finally, it is not clear what nicely is. Do you want an order different from the lattice order which "does what" with respect to a labelling? Or do you want an order on the labellings themselves? More clarity is needed before I will think about this further. Gerhard "Ask Me About System Design" Paseman, 2011.05.10 – Gerhard Paseman May 10 '11 at 18:56 Sorry! I made a little mistake while defining $B_J$... It is fixed now. In the fixed version, if there is $B_J$ and $B_K$ with $|J|=|K|=m-1$ and $|B_K|=|B_J|=m-1$ then $|B_I|=m-1$. Sorry for the mistake.... – Kurt May 10 '11 at 19:51 Say I have two different sets $\{A_1,A_2,…,A_m\}$ and $\{C_1,C_2,…,C_m\}$ satisfying the same property. Let $l_A$ and $l_B$ be the labellings of $(2^I,\subseteq)$ induced by A's and C's. I want to be able to compare them. Hey! This tells me that I need an equivalence relation, not a partial order relation. But after the equivalence relation, I should be able to order the equivalence classes.... – Kurt May 10 '11 at 19:59 I am trying to order labellings of the same poset $(2^I,\subseteq)$ where $I=\{1,\ldots, r}$ and $l$ is a label induced by a family of sets $\{A_1, \ldots, A_{m-1}\}$ as in the original posting. – Kurt May 10 '11 at 20:03 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged posets lattices or ask your own question.
{"url":"http://mathoverflow.net/questions/64518/ordering-labellings-of-a-fixed-poset","timestamp":"2014-04-20T18:37:59Z","content_type":null,"content_length":"51183","record_id":"<urn:uuid:ed56f9b9-6828-4de7-b1fa-15180c2bc5e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Relation between Hecke Operator and Hecke Algebra up vote 9 down vote favorite In the study of number theory (and in other branches of mathematics) presence of Hecke Algebra and Hecke Operator is very prominent. One of the many ways to define the Hecke Operator $T(p)$ is in terms of double coset operator corresponding to the matrix $ \begin{bmatrix} 1 & 0 \\ 0 & p \end{bmatrix}$ . On the other hand Hecke Algebra $\mathcal{H}(G,K)$ associated to a group $G$ of td-type ( topological group, such that every neighborhood of unity contains a compact open subgroup), where $K$ is a compact open subgroup of $G$ is defined as the space of locally constant compactly supported $K$ bi-invariant functions on $G$. Convolution product makes it an associative algebra. I was told that the hecke algebra $\mathcal{H}(Gl(2,\mathbb{Q}_p) , Gl(2,\mathbb(Z)_p))$ corresponds to the classical algebra of hecke operators attached to $p$ via Satake Isomorphism Theorem. Using Satake Isomorphism theorem I can show $\mathcal{H}(Gl(2,\mathbb{Q}_p) ,Gl(2,\mathbb(Z)_p))$ is commutative and finitely generated over $\mathbb{C}$. So my question is how one uses Satake Isomorphism Theorem (or otherwise) to see this? And secondly in general what is the relation between hecke operators and hecke algebra? nt.number-theory modular-forms algebraic-groups add comment 3 Answers active oldest votes The fact that Hecke operators (double coset stuff coming from $SL_2(\mathbf{Z})$ acting on modular forms) and Hecke algebras (locally constant functions on $GL_2(\mathbf{Q}_p)$) are related has nothing really to do with the Satake isomorphism. The crucial observation is that instead of thinking of modular forms as functions on the upper half plane, you can think of them as functions on $GL_2(\mathbf{R})$ which transform in a certain way under a subgroup of $GL_2(\mathbf{Z})$, and then as functions on $GL_2(\mathbf{A})$ ($\mathbf{A}$ the adeles) which are left invariant under $GL_2(\mathbf{Q})$ and right invariant under some compact open subgroup of $GL_2(\widehat{\mathbf{Z}})$. Now there's just some general algebra yoga which says that if $H$ is a subgroup of $G$ and $f$ is a function on $G/H$, and $g\in G$ such that the $HgH$ is a finite union of cosets $g_iH$, then you can define a Hecke operator $T=[HgH]$ acting on the functions on $G/H$, by $Tf(g)=\sum_i f(gg_i)$; the lemma is that this is still $H$-invariant. up vote 15 Next you do the tedious but entirely elementary check that if you consider modular forms not as functions on the upper half plane but as functions on $GL_2(\mathbf{A})$, then the down vote classical Hecke operators have interpretations as operators $T=[HgH]$ as above, with $T_p$ corresponding to the function supported at $p$ and with $g=(p,0;0,1)$. Because the action is accepted "all going on locally" you may as well compute the double coset space locally, that is, if $H=H^pH_p$ with $H_p$ a compact open subgroup of $GL_2(\mathbf{Q}_p)$, then you can do all your coset decompositions and actions locally at $p$. Now finally you have your link, because you can think of $T$ as being the characteristic function of the double coset space $HgH$ which is precisely the sort of Hecke operator in your Hecke algebra of locally constant functions. Furthermore the sum $f(gg_i)$ is just an explicit way of writing convolution, so everything is consistent. I don't know a book that explains how to get from the classical to the adelic point of view in a nice low-level way, but I am sure there will be some out there by now. Oh---maybe Bump? This explanation is very useful.The place where I read about how to think of hecke operators adelicly was the book Automorphic forms on Adele groups by Gelbert. I recently came up with a version of Satake Isomorphism Theorem which says $\mathcal(H)(Gl(2,\mathbb(Q)_p),Gl(2,\mathbb(Z)_p)) \cong \mathbb(C)[T,S,S^-1] $, I was told that $T$ corresponds to the hecke operator $T(p)$ when thought as an operator on automorphic form.I am not familiar with this form of Satake isomorphism theorem, so was hoping someone can explain this.But your answer clearly shows there is a much elementary connection. – Dipramit Majumdar Mar 29 '10 at 19:33 There are also some very nice notes of W. Gan explaining the passage from classical to adelic. They are actually slides for some talks (see the bottom of math.ucsd.edu/~wgan), but are clearer than most other sources I have seen! – David Hansen Mar 29 '10 at 19:37 Right: if G=GL_2(Q_p) and K is GL_2(Z_p) then H(G//K) is C[T,S,S^{-1}] and this is Satake. Unravelling the explicit isomorphism isn't too hard (see e.g. Cartier's notes in Corvallis?), and you can check that T is the char fn of K(p 0;0 1)K and S the char fn of K(p 0;0 p)K. As I say, the lemma now is to write down the explicit dictionary which starts with a modular form and produces a function on GL_2(adeles) and check that the usual T_p on modular forms induces the operator [K(p 0;0 1)K] on functions on GL_2(adeles). But this isn't Satake so I am a bit confused about what you are asking. – Kevin Buzzard Mar 29 '10 at 20:13 Your explanation of the relationship between Hecke Algebra and Hecke Operator is great. In fact it shows the relationship is much more basic than the Satake Isomorphism. My understanding of Statake Isomorphism was H(G//K) is the symmetric polynomials in 2 variables, rather than C[T,S,S^{-1}], as Weyl group is S_2. Probably I misunderstood,I will go back to Cartier article and unravell the definitions and isomorphisms .Thanks for this valuable comment and the answer. – Dipramit Majumdar Mar 29 '10 at 20:44 H(G//K)=H(T//(T intersect K))^{S_2} (T the torus) but the map is quite explicit. H(T//(T intersect K)) is C[X,Y,X^{-1},Y^{-1}] and S_2 acts by switching X and Y. One sets T_p=X+Y and 1 S_p=XY. The exercise is to check that the characteristic function of K(p 0;0 1)K becomes identified with X+Y (possibly up to a power of p^{1/2}) via the map. The map is quite explicit: it's "restrict to B, average over N, and renormalise by 1/2 the sum of the positive roots". You can easily work it out in this case (in the sense that I could once so it can't be hard!) – Kevin Buzzard Mar 30 '10 at 6:51 show 1 more comment Sorry, the first edition of this answer was shamefully incoherent. We'll see if this attempt is any better. Any double coset KgK (for G and K as given) has a unique representative in elementary divisor form $\binom{a0}{0d}$ where a and d are (possibly negative) powers of p and a/d is a p-adic integer (i.e., a positive power of p). The Hecke operator $T(p^n)$ is given by a sum over convolutions with KgK as g ranges over elementary divisor matrices with p-adic integer entries with determinant $p^n$. In particular, T(p) is given by convolving with the double coset corresponding to $\binom{p0}{01}$. In the notation of Buzzard's answer, the operators $T(p^n)$ generate the subalgebra of the Hecke algebra generated by $S = \binom{p0}{0p}$ and $T = \binom{p0}{01}$, and it coincides with the subalgebra generated by those double cosets whose elementary divisor representative has p-adic integer entries. up vote 3 down You can find a non-adelic treatment in terms of Hecke operators acting on modular forms on the upper half plane in section 1.4 of Bump's Automorphic Forms and Representations, where he vote introduces operators $T_\alpha$ for diagonal matrices $\alpha$ in elementary divisor form, and shows how $T(n)$ is given as a sum over double cosets with determinant n. Decomposing these double cosets into left cosets for $\Gamma(1)$ yields the usual set of representatives $\{ a,b,d| ad=n, 0 \leq b < d \}$ over which one sums when evaluating a Hecke operator. The Satake isomorphism gives an isomorphism with the representation ring of $GL_2(\mathbb{C})$, which is commutative and finitely generated. This implies the Hecke algebra here is commutative and finitely generated, but this can be seen without invoking such machinery. In the case of $GL_2$, the isomorphism can be made very explicit. $T(p)$ corresponds to the standard 2 dimensional representation, the scalar matrices give powers of determinant, and $T(p^n)$ corresponds to the $n$th symmetric power. add comment Regarding your second question, the relationship between Hecke operators and algebras was discussed in Baez's This Week's Finds. For instance, take a look at David Ben-Zvi's up vote 2 down comment on Week 254. add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory modular-forms algebraic-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/19684/relation-between-hecke-operator-and-hecke-algebra/19690","timestamp":"2014-04-18T13:44:01Z","content_type":null,"content_length":"68683","record_id":"<urn:uuid:c08e8a73-0ad2-4ac5-a25e-fa234fd56c43>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Department of Biomathematics • Krzysztof Argasiński (Assistant Professor) Division in Katowice Phd Students: • Przemysław PaĽdziorek About the Department The Katowice Branch of the Institute of Mathematics was founded in 1966. Jan Mikusiński was the head of the Branch until his retirement in 1984. From 1985 to 1994 the Branch was headed by Piotr Antosik, and then by Ryszard Rudnicki. The following mathematicians worked in the Branch: B. Aniszczyk, P. Antosik, J. Burzyk, T. Dłotko, C. Ferens, P. Hallala, A. Kamiński, W. Kierat, C. Kli¶, S. Krasińska, M. Kuczma, A. Lasota, S. Lewandowska, Z. Lipecki, K. Łoskot, J. Mikusiński, P. Mikusiński, J. Mioduszewski, J. Pochciał, R. Rudnicki, Z. Sadlok, K. Skórnik, W. Smajdor, T. Szarek, Z. Tyc, J. Uryga and P. Uss. The main line of research has been closely related to Prof. Mikusiński's interests. The dominating topics of investigations are sequential theory of distributions, Mikusiński operational calculus and convergence theory. The main results obtained in this area are: introduction of regular and irregular operations on distributions and local derivatives, functional description of the convergence in the field of Mikusiński operators, axiomatic theory of convergence, diagonal theorems and Paley-Wiener type theorems for regular operators. Moreover, several results concerning applications of operational calculus to differential equations, theory of controllability and special functions have been obtained. Numerous results obtained by Mikusiński's team are presented in five books written by Mikusiński, Antosik, Sikorski and Boehme. Mikusiński's books have been translated into various languages, for example "Operational Calculus'' was published in Polish, English, Russian, German, Hungarian and Japanese. In the early nineties a group of scientists connected with Prof. Andrzej Lasota began to work in the Branch. Their main research interests are in probability theory, partial differential equations and biomathematics. The main results obtained are: sufficient conditions for asymptotic stability of Markov operators and semigroups, asymptotic behaviour of solutions of generalized Fokker-Planck equations, constructions of semifractals and global properties of nonlinear models of population dynamics. Selected publications 1. J. Mikusiński, Operational Calculus, Pergamon Press and PWN, 1967; 1983. 2. J. Mikusiński and T.K. Boehme, Operational Calculus, Volume II, PWN and Pergamon Press, 1987. 3. P. Antosik, J. Mikusiński and R. Sikorski, Theory of Distributions, The Sequential Approach, Elsevier-PWN, 1973 (Russian edition 1976). 4. J. Mikusiński, The Bochner Integral, Birkhäuser, 1987; Academic Press, 1978. 5. P. Antosik and C. Swartz, Matrix Methods in Analysis, Springer, 1985. 1. P. Antosik, On the Mikusiński diagonal theorem, Bull. Acad. Polon. Sci. 20 (1972), 373-377. 2. J. Burzyk, On convergence in the Mikusiński operational calculus, Studia Math. 75 (1983), 313-333. 3. J. Burzyk, A Paley-Wiener type theorem for regular operators, Studia Math. 93 (1989), 187-200. 4. H. Gacki, T. Szarek and S. Wędrychowicz, On existence and stability of solutions of stochastic integral equations with applications to control system, Indian J. Pure Appl. Math. 29 (1998), 5. A. Kamiński, On the Rényi theory of conditional probabilities, Studia Math. 79 (1984), 151-191. 6. A. Kamiński, D. Kova?ević and S. Pilipović, The equivalence of various definitions of the convolution of ultradistributions, Trudy Mat. Inst. Steklov. 203 (1994) 307-322. 7. C. Kli¶, An example of a non-complete normed (K) space, Bull. Acad. Polon. Sci. 26 (1978), 415-420. 8. A. Lasota and J. Myjak, Semifractals, Bull. Polish Acad. Sci. Math. 44 (1996), 5-21. 9. A. Lasota and J. A. Yorke, When the long time behavior is independent of the initial density, SIAM J. Math. Anal. 27 (1996), 221-240. 10. K. Łoskot and R. Rudnicki, Limit theorems for stochastically perturbed dynamical systems, J. Appl. Probab. 32 (1995), 459-469. 11. J. Łuczka and R. Rudnicki, Randomly flashing diffusion: asymptotic properties, J. Statist. Phys. 83 (1996), 1149-1164. 12. M. C. Mackey and R. Rudnicki, Asymptotic similarity and Malthusian growth in autonomous and nonautonomous populations, J. Math. Anal. Appl. 187 (1994), 548-566. 13. M. C. Mackey and R. Rudnicki, Global stability in a delayed partial differential equation describing cellular replication, J. Math. Biol. 33 (1994), 89-109. 14. J. Mikusiński, Sequential theory of the convolution of distributions, Studia Math. 29 (1968), 151-160. 15. J. Mikusiński, A theorem on vector matrices and its applications in measure theory and functional analysis, Bull. Acad. Polon. Sci. 18 (1970), 151-155. 16. J. Mikusiński, On full derivatives and on the integral substitution formula, Accad. Naz. Lincei Probl. Atti Sci. Cult. 217 (1975), 377-390. 17. J. Mikusiński and P. Mikusiński, Quotients de suites et leurs applications dans l'analyse fonctionnelle, C. R. Acad. Sci. Paris Sér. I Math. 293 (1981), 463-464. 18. K. Pichór and R. Rudnicki, Stability of Markov semigroups and applications to parabolic systems, J. Math. Anal. Appl. 215 (1997), 56-74. 19. J. Pochciał, Sequential characterizations of metrizability, Czech. Math. J. 41 (1991), 203-215. 20. R. Rudnicki, Asymptotical stability in L1 of parabolic equations, J. Differential Equations 102 (1993), 391-401. 21. R. Rudnicki, On asymptotic stability and sweeping for Markov operators, Bull. Polish Acad. Sci. Math. 43 (1995), 245-262. 22. K. Skórnik, On fractional integrals and derivatives of a class of generalized functions, Soviet Math. Dokl. 22 (1980), 541-543. 23. K. Skórnik and J. Wloka, m-reduction of ordinary differential equations, Colloq. Math. 78 (1998), 195-212.
{"url":"http://www.impan.pl/EN/Depts/genf.html","timestamp":"2014-04-17T12:33:05Z","content_type":null,"content_length":"10052","record_id":"<urn:uuid:f7f21e71-25bd-4b9d-85e8-e0189260d717>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: 52 MATHEMATICS MAGAZINE Plausible and Genuine Extensions of L'Hospital's Rule DePaul University Chicago, IL 60614 DePaul University Chicago, IL 60614 DePaul University Chicago, IL 60614 A plausible extension Roughly speaking, L'Hospital's Rule says that if f (x)/g(x) is indeterminate at infinity and if x is large, then f (x)/g(x) is approximately equal to f (x)/g (x). Also, the limit comparison test says that if an is approximately equal to bn then an converges if and only if bn converges. The best thing that one could possibly hope for in trying to
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/580/5344909.html","timestamp":"2014-04-18T18:44:23Z","content_type":null,"content_length":"7740","record_id":"<urn:uuid:e81b04f5-02f9-41dc-a7db-f2ea091f2ef9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Noncombinatorial proofs of Ramsey's Theorem? up vote 9 down vote favorite I know of 2(.5) proofs of Ramsey's theorem, which states (in its simplest form) that for all $k, l\in \mathbb{N}$ there exists an integer $R(k, l)$ with the following property: for any $n>R(k, l)$, any $2$-coloring of the edges of $K_n$ contains either a red $K_k$ or a blue $K_l$. Both the finite and the infinite versions (the latter being--a 2-coloring of the edges of $K_\mathbb{N}$ contains an infinite monochrome $K_\mathbb{N}$) are proven on Wikipedia, and one may deduce the finite version from the infinite one by compactness, or equivalently using Konig's lemma. The infinitary proof does not give effective bounds on $R(k, l)$, but can be converted to one that does as follows (this is the .5 proof): Consider a $2$-coloring of the edges of a complete graph on $N=2^{k+l}$ vertices, $v_1, ..., v_N$. Let $V_0$ be the set of all vertices, and let $V_i$ be the largest subset of $V_{i-1}$ connected to $v_i$ by edges of a single color, $c_i$. After $k+l$ steps, at least $k$ of the $c_i$ are read or $l$ of the $c_i$ are blue by pigeonhole; let the set of indices for which this happens be denoted $S$. Then $(v_i)_{i\in S}$ is the desired subgraph. My question is: Does anyone have a fundamentally different proof of this theorem? In particular, I am curious to know if there are any of a less combinatorial flavor. co.combinatorics ramsey-theory slick-proof add comment 2 Answers active oldest votes I hope this is close to what you are asking. The following compactness principle turns out to be useful in certain construction in dynamical systems and in probability (in particular, in the theory of exchangeable random variables), and it may be seen as a topological version of the infinitary Ramsey theorem. Lemma. Let $X,d$ a compact metric space, and $u:\mathbb{N}^2\to X$ a double sequence in $X$. Then, there exists a strictly increasing $\sigma:\mathbb{N}\to\mathbb{N}$ such that, denoting $v(i,j):=u(\sigma(i),\sigma(\\, j)),$ the following limits exist, and coincide: $$\lim_{i\to\infty} \lim_{j\to\infty} > v(i,j) \\ = \lim_{ i< j,\\, (i,j)\to\infty} > v(i,j). $$ The proof is just routine (iterated) application of the usual diagonal argument for sequences. How does it implies the infinitary Ramsey theorem? Take $X$ a discrete space of colors and $u$ an $X$-coloring of the complete graph with vertices set $\mathbb{N}$. Then the existence of the limit in the RHS means that the set $\{\sigma(i):i> c\}$ for some c is a monocromatic complete subgraph. up vote 14 down One can even state a more general version for multi-sequences $u(i)$ indicized on increasing $n$-ples $i:=(i_1< i_2\dots < i_n)$ of natural numbers; the game is that any vote accepted parenthesization produces a different iterated way of letting $i$ go to infinity (like when making the beads sliding from left to right in an abacus: in small clusters, one at a time, or all together ). The corresponding limits for u(i) may or may not exist and/or coincide; but, up to a selection of indices via a strictly increasing $\sigma:\mathbb{N}\to\mathbb{N}$, all these iterated limits do exist and coincide. Actually, it may be argued whether it really gives a fundamentally different proof of Ramsey theorem as you are asking. Nevertheless, if you try submitting it to an analyst or to a geometer, I think you are much more likely to obtain an immediate proof of it than with the original set theoretic version (please confirm my guess). On the other hand, it may sound quite weird to a pure set theorist (do not necessarily confirm this statement). This is interesting! – Daniel Litt Jun 24 '10 at 22:08 3 +1! Speaking as a set theorist, I sometimes give this as an application of Ramsey's Theorem - mathoverflow.net/questions/12211/… - Speaking as a reverse mathematician, I don't think of it as an alternate proof. – François G. Dorais♦ Jun 24 '10 at 22:55 add comment One can prove Ramsey's theorem by using a minimal ultrafilter on the infinite permutation group to make a graph monochromatic, see up vote 8 (This proof was first discovered by Neil Hindman.) But this proof is somewhat idiosyncratic and might not be to most people's taste. (Also, on some level it is equivalent to the usual down vote iterated pigeonhole proof, though heavily disguised through several applications of the axiom of infinity and axiom of choice.) It is also closely related to the proof that Pietro These "special ultrafilter" proofs are always beautiful but are somehow unsatisfying to me; they always feel very mysterious (perhaps because we can't get our hands on any non-principal ultrafilters). The ultrafilte proof of Hindman's theorem is amazing though. – Daniel Litt Jun 25 '10 at 17:14 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics ramsey-theory slick-proof or ask your own question.
{"url":"http://mathoverflow.net/questions/29427/noncombinatorial-proofs-of-ramseys-theorem","timestamp":"2014-04-17T15:41:31Z","content_type":null,"content_length":"60211","record_id":"<urn:uuid:3b44b3a5-83e9-4531-bd13-c9b559d03179>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Mapping class group and CAT(0) spaces up vote 10 down vote favorite I hope the questions are not too vague. 1. Is the mapping class group of an orientable punctured surface $CAT(0)$ ? 2. Is any of the remarkable simplicial complexes (curve complex, arc complex...) built on a punctured surface $CAT(0)$? 3. Is there any "nice" action (say, proper or cocompact) of the mapping class group on a $CAT(0)$ space? gt.geometric-topology mg.metric-geometry geometric-group-theory mapping-class-groups add comment 1 Answer active oldest votes (1) Bridson showed that if a mapping class group of a surface (of genus at least 3) acts on a CAT(0) space, then Dehn twists act as elliptic or parabolic elements. This implies that the mapping class groups of genus $\geq 3$ are not CAT(0) (Edit: as pointed out by Misha in the comments, this was originally proved by Kapovich and Leeb, based on an observation of Mess that there is a non-product surface-by-$\mathbb{Z}$ subgroup of the mapping class group of a genus $\geq 3$ surface). On the other hand, the mapping class group of a genus 2 surface acts properly on a CAT(0) space (this is not surprising, since it is linear). I think it's unresolved whether the mapping class group of genus 2 is CAT(0) though (this is essentially equivalent to the same question for the 5-strand braid group). up vote 13 (2) The curve complex cannot admit a CAT(0) metric, since it is homotopy equivalent to a wedge of spheres. down vote accepted (3) The mapping class group acts cocompactly by isometries on the completion of the Weil-Petersson metric on Teichmuller space, which is CAT(0). However, this metric is not proper (although as Bridson shows above, the action is semisimple, Dehn twists acting by elliptic isometries). So I guess it's unresolved whether there is a proper action of the mapping class groups of genus $\geq 3$ on a CAT(0) space (where the Dehn twists act as parabolics). This is unsurprising, since it is unknown whether these groups are linear (a finitely generated linear group acts properly on a CAT(0) space which is a product of symmetric spaces and buildings). 4 Ian: This is theorem 4.2 of M. Kapovich, B. Leeb, Actions of discrete groups on Hadamard spaces, Math. Annalen, Bd. 306 (1996) p. 341-352. – Misha Nov 11 '13 at 10:28 @Misha: The published title is a bit different, "Actions of discrete groups on nonpositively curved spaces" – Lee Mosher Nov 11 '13 at 15:59 @Misha: Right, I thought of this proof (you use the observation of Geoff Mess that there's a non-trivial circle bundle over a surface?), but I thought it was only the case of smooth non-positively curved manifolds. – Ian Agol Nov 11 '13 at 19:20 Ian: This is a general argument based on product decomposition of parallel sets in CAT(0) spaces. – Misha Nov 11 '13 at 19:31 1 Brady and MacCammond showed in arxiv.org/abs/0909.4778 that the 5-strand braid group is CAT(0). – Luc Nov 14 '13 at 13:41 show 1 more comment Not the answer you're looking for? Browse other questions tagged gt.geometric-topology mg.metric-geometry geometric-group-theory mapping-class-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/148536/mapping-class-group-and-cat0-spaces/148543","timestamp":"2014-04-19T23:01:54Z","content_type":null,"content_length":"59341","record_id":"<urn:uuid:766c34dd-4558-47ed-8bc2-ffea7381b284>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
TI-89 Differential Equations February 16th 2011, 08:15 PM #1 Oct 2010 TI-89 Differential Equations Is there a way to transform a function on the TI-89 in this manner? 1. The function is f(x,y) 2. I want to transform it to f(v) where v=y/x For example, the function is f(x,y)=y/x and I want to write it as f(v)=v where v=y/x (although the function will be more complicated) any help would be greatly appreciated! Is there a way to transform a function on the TI-89 in this manner? 1. The function is f(x,y) 2. I want to transform it to f(v) where v=y/x For example, the function is f(x,y)=y/x and I want to write it as f(v)=v where v=y/x (although the function will be more complicated) any help would be greatly appreciated! No. Once you have definded a function of two variables you cannot reduce it to one. You can recast it as a function of x and v by f(x, y)|y = xv, but that's all. February 16th 2011, 08:26 PM #2
{"url":"http://mathhelpforum.com/calculators/171571-ti-89-differential-equations.html","timestamp":"2014-04-18T17:43:26Z","content_type":null,"content_length":"34279","record_id":"<urn:uuid:5b05bcc9-c90e-4aa1-a336-bfd4f4b82589>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
quadrant iv Find sinΦ, given that cosΦ=4/5 and Φ is in quadrant IV. Use idenies to solve. How do you find cos(arcsin(-2/3)) without using a calculator? let θ be an angle in quadrant IV, such that cosθ= 1/4. Find cscθ and tanθ. Basic Trig question about cot and other values? In which quadrant must the terminal side of lie if sin and tan to have the different sign? For θ = arcsin x, if x is negative, then θ is in what quadrant(s)? Multiple choice? given cos A=4/5 where A is in quadrant IV, find the exact value of sin 2A? Algebra subset of plane in terms of quadrants? Algebra Help Multiple Choice? If sin(x) = -3/5 with x in quadrant IV, find the sec(x) help please? How do you find cosine and tangent from sine? Find the exact value of tan(a+b) given that sina= -(5/13) nd cosb= (4/5). (Both a and b in Quadrant IV.)? when represented graphically, in which quadrant does the sum of -4-i and 3+4i lie? Find tan2θ if θ terminates in Quadrant IV and cosθ = 3/5? If sin A= -3/5 and angle A is in quadrant IV, find cos A. . Angle(theta) lies in quadrant IV with point P on the terminal arm and tan(theta) = -3/5. Whats sine(theta)? If x is an angle in quadrant IV and tan x = -12/5, find the value of cos 2x. If sinθ = -1/3 in quadrant IV, find: a) cosθ b) cotθ? If cot(x) = -(16/15), where x is in Quadrant IV, what is the exact value of sin(2x)? How do you find cosine and tangent from sine? pre-calculus question (name the quadrant)? Clify 725 ° by quadrant, and state the positive angle with measure less than 360 ° that is coterminal with? If the point P= (-4/5, y) is on the unit quadrant II then y=? Describe the location of the point having the following coordinates? If a point in Quadrant IV is reflected in the y-axis, its image will lie in which Quadrant? Help with beginner Trig problem? How do you solve this pre-calculus problem? Find sin (α), given that cos (α) = 4/7 and α in quadrant IV. In the Cartesian Coordinate System describe the coordinates of ANY point found in Quadrant IV? If the point P = (4/5,y) is on the unit circle in quadrant IV, then what is y? Use an integer to represent 32F below Zero? If sin(x) = -3/5 with x in quadrant IV, find the sec(x? If θ is in quadrant IV and cot(θR) = 3, what is cot(θ)? Find sinΦ, given that cosΦ=4/5 and Φ is in quadrant IV. Use idenies to solve. If the point P = (4/5,y) is on the unit circle in quadrant IV, then y =___? please help! If the point (4/5,y) is on the unit circle in quadrant IV, then y is? What is the exact value of tan θ if sec θ = 9/8 and θ is in quadrant IV? If sin(x) = -3/5 with x in quadrant IV, find the sec(x)? Does anyone know precalculus? If θ is an angle in standard position that terminates in Quadrant IV such that cosθ = 3/5, then cos2θ = _____. Let θ be an angle in the standard position. Name the quadrant in which θ lies, if sin θ < 0, cos θ > 0? If theta is an angle in quadrant IV, such that sec theta=5/3. what would the cot theta and sin theta be? Trigonometry help please :)? Math help asap easy ? Let (theta) be an angle in quadrant IV such that sin(theta) = -1/3, Find sec(theta) and cot(theta)? Trigonometry Problem best answer to helper? What would be an equation of a line that could bisect the quadrants II and IV? Can someone help in Trigonometry? How to determine in what quadrant an angle is in? Write first expression in terms of the second if the terminal point determined by t is in the given quadrant? CAN SOMEONE HELP me solve this? thanks? If tanθ= -5/7 and is in QII, then cot(θ + 1/2 pi) =? How to determine in what quadrant an angle is in? In which quadrant does point P(-5, 5) lie? If sin(x) = -3/5 with x in quadrant IV, find the sec(x) help please? in which quadrant is the angle 3 pie divided by 4 ? I, II. III, or IV? If the point P=(4/5,y) is on the unit circle in quadrant IV, then whats Y? please answer math. Find the exact value of under the following conditions:? Is it possible to write in math quadrant IIII instead of quadrant IV? Find sin s if cos s = 2/3 and s is in quadrant IV. amazing hard math problem? can u help please? Evaluate the expression: cos(θ − x); cos θ = 8/17, θ in Quadrant IV, tan x = -√3, x in Quadrant II? What percentage of all angles terminating in quadrants I and IV have tangents less than or equal to 1? In which quadrant is (-1/9,11/5) located? I, II, II, or IV? use the given information to find the exact value of a.sin 20 b.cos 20 and c. tan 20? Given that (3/2)π<0<2π (quadrant IV) and tan @= -4/3, find cos@. Write the equation of an ellipse that is contained within Quadrant IV? From the information given, find the quadrant in which the terminal point determined by t lies.Input I, II, IV? In which quadrant is sin negative? Which quandrant is tan positive? what the exact values of secant theta and cotangent theta when sine theta=-(4/5)? Find the value of the sin of alpha/2 given that cos alpha= 5/13 and alpha is in Quadrant IV? How do you solve this pre-calculus problem? Which quadrant is the terminal side of a -191° angle in? Math Question. First one with correct answer gets best answer! Is it possible to write in math quadrant IIII instead of quadrant IV? Find all 6 trig functions? the graph of ax+by=c ped only through quadrants I and IV. What is known about the constants A,B,C? The point P(x,y) is on the unit circle in quadrant IV. If x = [sqrt(11) / 6], how do you find y? Find value of sin(α − β) given that sin(α) = -8/17 & cos(β) = 4/5 with α in quadrant III and β in quadrant IV? given cos A=4/5 where A is in quadrant IV, find the exact value of sin 2A? Find the indicated trigonometric value in the specified quadrant. The given angle is in standard position. Determine the quadrant in which the angle lies: -163 degrees? Math question....................... Given sin u = -5/9, and u is in quadrant IV, find sin 2u, cos 2u and tan 2u? Trig Question: What quadrant will angle lie if: sin -1/2 , and cos>0 ? In which quadrant must the terminal side of lie if sin and tan to have the different sign? in which quadrant is the angle 3 pie divided by 4 ? I, II. III, or IV? 3)If sinθ>0 and secθ<0, in which quadrant does the terminal side of angle θ lie? In which quadrant would the image of point (5,-3) fall after a dilation using a factor of -3? Sketch the angle in standard position, indicating its rotation by a curved arrow. Choose the quadrant where... TRIG PROBLEM PLEASE HELP: if sin 2 alpha= - 4/5 and 2 alpha is in quadrant IV, find sin 4 alpha? MATH graphing problems! And I dont get it! 1)1912 degree then 2)599 degree then 3) 26 degree in which quadrant does the terminal side lie? Trig math problem please help me :if csc (t) = - 17/8 and P(t) is in quadrant IV, find tan 2t? In which quadrant is the point (–3, –7) located? Confused with a quadrants problem? Which quadrant is completely shaded in the graph of y ≤ 9x+5? Find sin 2(theta). cos(theta) = 24/25, theta lies in quadrant IV? what is the value of theta, if cos(theta) = 3/5, and theta is in quadrant IV? find sin 2x, cos 2x, and tan 2x from the given information? Name the quadrant or quadrants that satisfy the given condition? , 1 question, multiple choice? what is QuadrantIII and Quadrant IV? given cos(alpha)=7/8 and alpha is in quadrant IV, find the exact value of cos(alpha/2). if cot t = -0.6 and t is in quadrant IV, then how the heck do I find... Given that (3/2)π<0<2π (quadrant IV) and tan @= -4/3, find cos@. Which quadrant is the terminal side of a -191° angle in? HELLLLP! does anyone know how to calculate angles for sine, cosine, and tan? Help with beginner trig problem? Geometric Transformations- translation? How do you solve for tan 2theta? Can you help me with another difficult math question? Name the quadrant or axis in which the point lies. a question involving sin, cos, and tan... please help! find the exact value of the expression using the provided information. SHOW ALL WORK. Suppose cot t = −0.9 and t is in quadrant IV. find:? If sin A = -4/5 and A terminates in quadrant IV, then find sin 2A? Advance Algebra (Home work help)? Theta in quadrant IV and I have cos theta, what is csc and tan? Find the exact value of cos(a-B) if sin a=-7/25 and sin B=8/17, with a in quadrant IV and B in quadrant II? Can anyone help me with solving these trig functions? Which quadrant is the given point? HELP ME PLEASE? If cot(x) = −11/10 , where x is in Quadrant IV, what is the exact value of sin(2x)? What the answers to these math questions? How to determine in what quadrant an angle is in? find the exact value of the remaining trig functions? Given that tan t=-1 and that the terminal sind of t lies in Quadrant IV, find sin t. The given angle is in standard position. Determine the quadrant in which the angle lies: -163 degrees? Trigonometry istance desperately needed? Trigonometric Functions help? I need help with this math problem? if P(squ root of 11/4,y)is on a unit circle and P lies on quadrant IV, what is the value of y? Find the values of the six trigonometric functions of θ by finding a point on the line. Point P(-3, 6) is reflected in the line whose equation is y = x. In which quadrant does the image P' lie? Please help with pre cal, quick please? From the information given, find the quadrant in which the terminal point determined by t lies. If sin(x) = -3/5 with x in quadrant IV, find the sec(x)? 29. Will the graph of y = |x| ever have points in Quadrants III and IV? Explain. Theta in quadrant IV and I have cos theta, what is csc and tan? What is IV sedation or G.A. like for getting 4 teeth out? find missing coordinate of p using the fact that P lies on unit circle in quadrant IV (?, -3/5)? In the system below, What the coordinates of the solution that lies in quadrant IV? Other trigonometric value for sinθ=(-3/5) in quadrant IV... what is QuadrantIII and Quadrant IV? If cos(theta) =1/3, theta in quadrant IV, find the exact value of tan(theta +pi/4). Need to use sum formula? Very confused? :[ how do i find this? PHYSIC Help....10) Assuming the objects have equal mes, the direction of the average force on object B? what is the equationfor a line that pes through the point (-1,8)and only quadrants I,II,and IV on a graph? A line through (0, -15) is 12 units from the origin quadrant IV. Find its equation. How do you find sin 2θ, given cos θ = 5/13 and θ lies in Quadrant IV. When the sum of 4 + 5i and -3 - 7 is represented graphically, in which quadrant does the sum lie? Find the exact value of: sin (arctan 1/2). How does one know to draw the triangle in Quadrant I? Which quadrant (I, II, III or IV) is this angle in? Which type of transformation was done to move polygon ABCDE from Quadrant II to Quadrant IV? How to find the sin of angle when given cos 24/25 in the IV quadrant? If θ is an angle in standard position that terminates in Quadrant IV such that cosθ = 3/5, then cos2θ = _____. Tanx is 1/3 and x is in quadrant IV, what is the value of cosx? Let x be an angle in quadrant IV such that cos x=5/13? In which quadrant would you find the point A (-3, -3)? In which quadrant does the image of (4, -7) lie after the translation that shifts (x, y) to (x - 6, y + 3)? trig values? Algebra and Trig question. Please show work. Serious Replies only! Thankyou very much!! If secant(angle)=3 in quadrant IV find cos(2angle)? Find (x,y) when (x,y) is squ root 10 from the origin? The point (-2, 3) is in what quadrant? is it II, III or IV PLEASE HELP! THANK U? alg 2 help!!!! Need some urance with this math problem? Let theta be an angle in quadrant IV,such that cos theta=1/3.Find csc theta and cot theta? If cos θ = 12/13 and angle θ is in quadrant IV, determine some co ordinates for point p on terminal arm angle? Find sin( A-B) given that cos Cos A=1/3, with A in quadrant 1, Sin B=-1/2, with B in quadrant iv? illustrations & explanation of quadratic functions, when the equation in standard form into i and iv quadrant? I need help on a math problem? Let (theta) be an angle in quadrant IV such that sin(theta) = -1/3, Find sec(theta) and cot(theta)? Use idenies to solve each of the following? Use idenies to solve each of the following? In which quadrant(s) do coordinates with x = y lie? Give the quadrant and one trigonometric function value of 0 in standard position, find the exact value? can someone please help me n this question? Please help with math :)? 2 questions? Find an equation for the line that bisects the quadrants II and IV. Identify the quadrant at which the angle terminates.. Find the exact value of: sin (arctan 1/2). How does one know to draw the triangle in Quadrant I? Use idenies to find indicated function value. How do I find the values of the six trigonometric functions for this problem? Identify the quadrant at which the angle terminates.. I need help on a math problem? In which quadrant is the point (5.2, –1.7) located? ? i need help in algebra please help! How do I find the sin 2x, cos 2x, and tan 2x? How do we know which octant is which? Find the required value? If cos x = 3/4 the which quadrant could x belong to? Determine the equation of this circle. (Show work)? I need help with this trig problem? Can someone help with this math problem? If sin B < 0 and cos B > 0, in which quadrant does angle B terminate? In which quadrant is the point located? 1st coordinate is positive, and the 2nd coordinate is not zero. When the sum of 4 + 5i and -3 - 7 is represented graphically, in which quadrant does the sum lie? Name the quadrant or axis in which the point lies. If secx=5/2, x is in quadrant IV, what is tanx? Which ordered pair is located in Quadrant IV? (3, –4) (–2, –5) (5, 3) (–6, 2) (0, 2) ? In the graph of y≤x, which quadrant is copleteley shaded? (I, II, III, IV))? In the system below, What the coordinates of the solution that lies in quadrant IV? Multiple choice math problem please help? When plotted on the rectagular coodinate system in which quadrant would the following point be located for ? Which quadrant is the given point????? Please help me? Write an equation for f? What the values of all trigonometric functions? of t? In which quadrant might the point lie on? Algebra 2 questions. Best answer will be given. please. can u check my multiple choice math questions with angles? Find, to the nest degree, the solution set of 20 cot x - 13 = 0 over the following domain: FOUR problems:? sine of the difference of two angle measures help? in which quadrant cotangent and cosecant both negative? Suppose tan t = -3 and t is in quadrant IV. Find each of the following:? Determine the quadrant in which the terminal side of an angle of 5π/6 lies? Pathgorean Idenes? how to solve this math problem? Double-Angle, Power-Reducing, and Half-Angle Formulas Question? Write the equation of an ellipse that is contained within Quadrant IV? How do I solve for a point on the Unit Circle given the other? Matheactics Algebra One Help? How to find the sin of angle when given cos 24/25 in the IV quadrant? Math cos/tan question please help ? In the Cartesian Coordinate System describe the coordinates of ANY point found in Quadrant IV? How to solve this kind of trig function problem? (Sin, cos, tan, cot ....)? Use idenies to solve each of the following? if P(squ root of 11/4,y)is on a unit circle and P lies on quadrant IV, what is the value of y? 10 pts to first answer..can someone pleeease help me with these 2 pre calc- problems?..Half Angle Idenies? if tan A < 0 and cos A > 0,in which quadrant does angle A terminate? Find the eact value, given that SinA = -4/5 in Quadrant IV.... How do you find the exact value of sin2x given that cosx=5/13 and x is in quadrant IV? value of sin2x given that cosx=5/13 and x is in quadrant IV? HELP!!! TRIG PROBLEM! just one quick problem!! From the information given, find the quadrant in which the terminal point determined by t lies.Input I, II, IV? Can anyone help me with solving these trig functions? Trigonometry Problem Help? illustrations & explanation of quadratic functions, when the equation in standard form into i and iv quadrant? Find the exact value of the indicated trig. function of θ from the given information? The point (-2, 3) is in what quadrant? is it II, III or IV PLEASE HELP! THANK U? Trigonometry help please :)? Can someone help me with my hw for math? If cos θ > 0 and csc θ < 0, in which quadrant does the terminal side of θ lie? How do you find cos(arcsin(-2/3)) without using a calculator? If a circle is split into 360 degrees, with Quadrants I, II, II, and IV? What percentage of all angles terminating in quadrants I and IV have tangents less than or equal to 1? describe the signs of the coordinates of a point in. Pre calculus quick question? Write the first expression in terms of the second if the terminal point given by 't' is in the given quadrant Some trigonometry help? Identify the quadrant at which the angle terminates.. If the degree exceeds 360 degrees how can i find out which quadrant the number is in. Someone please help! If a point in Quadrant IV is reflected in the y-axis, its image will lie in which Quadrant? If sec (Theta) is 12/5 with theta in Quadrant IV, find tan (Theta)? let theta be an angle in quadrant iv such that sintheta = -15/17? In the graph of y≤x, which quadrant is copleteley shaded? (I, II, III, IV))? if sec (Theta) is 12/5 with theta in Quadrant IV, find tan (Theta)? If the point p= (4/5, y) is on the unit circle in quadrant IV, then y=? ned a lot of serious help with trigonometry paper.. Suppose sec θ = 3 and the terminal sideof the angle lies in quadrant IV? I got a math question that I am stuck on... Can anyone help me please? If the point P = (4/5,y) is on the unit circle in quadrant IV, then what is y? Find the missing coordinate of P, using the fact that P lies on the unit circle in the given quadrant? If secant(angle)=3 in quadrant IV find cos(2angle)? i need help with this problem (indicate the quadrant location)? Can you help me with another difficult math question? find 2x, cos 2x, and tan 2x from the given info. How do you find cosine and tangent from sine? why is this sign positive i thought it was supposed to be negative? How to find the exact value for csc ø and tan ø? Given that tan t=-1 and that the terminal sind of t lies in Quadrant IV, find sin t. Find sin 2(theta). cos(theta) = 24/25, theta lies in quadrant IV? How do I find the sin 2x, cos 2x, and tan 2x? Name the trigonometric function(s) which have a negative value when their corresponding angle is located in Qu? The coordinates of two vertices of a squ (1, 2) and (1, 0). If the point P(6/7,y) is on the unit circle in quadrant IV, then y=? Please help me rightnow rightnow? Let 0 be an angle in standard position. Name the quadrant in which the angle O lies. Basic Trig question about cot and other values? In which quadrant will the image lie, if CD is reflected in the x-axis? Need help with a trig problem? what the answer to this question? Trigonometry help please :)? In which quadrant will the image lie, if CD is reflected in the x-axis? Need help with a trig problem? find cos 2x if cos(x) =3/5 and x is in quadrant IV? Suppose tan t = -3 and t is in quadrant IV. Find each of the following:? In which quadrant is (-1/9,11/5) located? I, II, II, or IV? Find the values of the six trigonometric functions of θ by finding a point on the line. When the sum of 4 + 5i and –3 - 7i is represented graphically, in which quadrant does the sum lie? Will you help me with 3 quick Trigonometry questions on exact values? Sin theta= -5/3 with the angle theta terminating in quadrant IV, find sin2theta. Need help with trig homework? let sin s= 1/4 with s in quadrant IV and let cos t= -2/5, with t in quadrant II. find each of the following:? i need help with this problem (indicate the quadrant location)? Tan x is -1/3 and x is in quadrant IV, what is the value of cos x? I would like some help in my math problems. If sin(x) = -3/5 with x in quadrant IV, find the sec(x). math... nott my bestt subjectt... help ?! Find the exact value of the expression? Trig homework help please? How do you find the values of trig functions IF.... Algebra 2: Need help with word problems involving quadratic equations and functions? indicated trigonometric value in the specified quadrant? if Ɵ is in quadrant iv and sin Ɵ = -2/3 find the exact value of cos(Ɵ/2)? Find tan2θ if θ terminates in Quadrant IV and cosθ = 4/5? Suppose that is a point in quadrant lying on the unit circle. Find . Write the exact value, not a decimal? If cot(x) = -(16/15), where x is in Quadrant IV, what is the exact value of sin(2x)? Some confusion...trig and reference angles? In which quadrant does the image of (4, -7) lie after the translation that shifts (x, y) to (x - 6, y + 3)? which of these ordered pairs would be found in quadrant iv of the coordinate plane (-8,-8) (-8,8) (8,-8) (8,8)? Find the indicated trigonometric value in the specified quadrant. Function? If sin(x) = -3/5 with x in quadrant IV, find the sec(x). Help on a math problem please? Will you help me with 3 quick Trigonometry questions on exact values? Math Question. First with correct answer will get best answer. Which quadrant is the given point? HELP ME PLEASE? please help! If the point (4/5,y) is on the unit circle in quadrant IV, then y is? the point (-2 and -4) is in which quadrant of a coordinate grid? Find the exact value of sin(2α) given that tan(α) = -8/15 and α is in quadrant IV? If sin(x) = -3/5 with x in quadrant IV, find the sec(x? i NEED HELP PLEASE i AM TOTALLY LOST? Let Θ be an angle in quadrant IV, such that cosΘ=(2/3). Find cscΘ and cotΘ. I am having problems with this!! What would be an equation of a line that could bisect the quadrants II and IV? What is IV sedation or G.A. like for getting 4 teeth out? The point P(x,y) is on the unit circle in quadrant IV. If x = [sqrt(11) / 6], how do you find y? In which quadrant is the coordinate point A(-4, 3)? Sin theta= -5/3 with the angle theta terminating in quadrant IV, find sin2theta. let theta be an angle in quadrant iv such that sintheta = -15/17? I really need help with this trig. question, steps and answer please!!! I really need a step by step and answer for this trig question. Let theta be an angle in quadrant IV,such that cos theta=1/3.Find csc theta and cot theta? I am trying to find sin θ/2 for sin θ = - 3/5 and θ lies in Quadrant IV. On my xbox 360 there is one red light in the lower right hand quadrant and it is a hw failure. How can fix it Given that tan t = -1 and that the terminal side of t lies in Quadrant IV, find sin t? the graph of ax+by=c ped only through quadrants I and IV. What is known about the constants A,B,C? Clify 725 ° by quadrant, and state the positive angle with measure less than 360 ° that is coterminal with? Which quadrant is completely shaded in the graph of y ≤ 9x+5? An angle is 145 Degrees. It belongs in which Quadrant? Find the exact value of the indicated trig. function of θ from the given information? Pre-calc question please help? Which quadrant would (0, -1) be? Find the value of the sin of alpha/2 given that cos alpha= 5/13 and alpha is in Quadrant IV? If the point P = (4/5,y) is on the unit circle in quadrant IV, then y=? Precalculus, based on trig functions? Find sin (2x), cos(2x), and tan(2x) from the given information... if cos(2Θ)=3/8, and 2Θ is in quadrant IV... Find value of sin(α − β) given that sin(α) = -8/17 & cos(β) = 4/5 with α in quadrant III and β in quadrant IV? Cosθ = 2/5, θ in quadrant IV? I need to find the exact value of the indicated trig function of θ, Find tan? how do i find: if the cos theda= 5/13, find csc theda if theda is in quadrant IV? sin (theta) = -21/29, theta lies in quadrant IV. Find cos (theta/2)? Addition and Subtraction of Trig Problems. Find the missing coordinate of P, using the fact that P lies on the unit circle in the given quadrant? How do you this and help me to find it? value of sin2x given that cosx=5/13 and x is in quadrant IV? If Tan(α) = -3/4 and α is in Quadrant II, what is α in radians? In which quadrant would the image of point (5,-3) fall after a dilation using a factor of -3? sin t in terms of sec t in quadrant IV? Find tan2θ if θ terminates in Quadrant IV and cosθ = 4/5? Pre-Calculus Help!!...*Please explain how you reached your answer? How do you solve this circle equation problem? In which quadrant does the graph of the sum of (3 5i) and (2 + 4i) lie? If sec (Theta) is 12/5 with theta in Quadrant IV, find tan (Theta)? Find the exact value of cos(u-v) given that sin u= - 5/13 and cos v= 4/5. (Both u and v in Quadrant IV.)? Give the quadrant and one trigonometric function value of 0 in standard position, find the exact value? If cos x = 3/4 the which quadrant could x belong to? How do I solve for a point on the Unit Circle given the other? Is what quadrant is point (3, -4) located? On what axis is point (0, -7) located? Any help with this homework for trig? 3)If sinθ>0 and secθ<0, in which quadrant does the terminal side of angle θ lie? I really need a step by step and answer for this trig question. How do you solve for tan 2theta? If the point P = (4/5,y) is on the unit circle in quadrant IV, then y =___? I am trying to find sin θ/2 for sin θ = - 3/5 and θ lies in Quadrant IV. Which quadrant (I, II, III or IV) is this angle in? I need some help with my trigonometry? If theta is an angle in quadrant IV, such that sec theta=5/3. what would the cot theta and sin theta be? Given that SinΘ = -24/25 and Θ is quadrant IV, find Sin2Θ. The function f(x)=-7/x^4 will always be graphed in quadrants III and IV why? Desperate for Pre Calculus Help? What quadrant the trig functions in? Someone help me with 1 math question I stuck on it! :/? A line through (0, -15) is 12 units from the origin quadrant IV. Find its equation. question involving quadrants, sin, cos, and tan.. help please! Find the exact values of sinθ and secθ? find cos 2x if cos(x) =3/5 and x is in quadrant IV? Find the required value? If the degree exceeds 360 degrees how can i find out which quadrant the number is in. Someone please help! I need help with this math problem? if Ɵ is in quadrant iv and sin Ɵ = -2/3 find the exact value of cos(Ɵ/2)? If sin(x) = -3/5 with x in quadrant IV, find the sec(x)? if cos(2Θ)=3/8, and 2Θ is in quadrant IV... let θ be an angle in quadrant IV, such that cosθ= 1/4. Find cscθ and tanθ. Find the values of the six trigonometric functions of θ by finding a point on the line. Name the trigonometric function(s) which have a negative value when their corresponding angle is located in Q4? What is the exact value of tan θ if sec θ = 9/8 and θ is in quadrant IV? Can someone help with this math problem? Evaluate the expression: cos(θ − x); cos θ = 8/17, θ in Quadrant IV, tan x = -√3, x in Quadrant II? Trigonometry help please :)? in which quadrant will the angle -305degrees lie in the standard position? Determine the quadrant in which the angles lies. (The angle measure is given in radians.)? The graph of 3x + 4y > 1 has no points in which quadrant? 29. Will the graph of y = |x| ever have points in Quadrants III and IV? Explain. Find the indicated trigonometric value in the specified quadrant. Function? What is DEEP IV sedation like for getting 4 teeth extracted? What is the signifigance of Range in inverse trig? cosθ = 2/5, θ in quadrant IV? If sin A = -4/5 and A terminates in quadrant IV, then find sin 2A? Find the trigonometric function: cos θ = 2/9 and θ in quadrant IV. Find sin θ? Pre Calculus Help DESPERATELY Needed? An angle that measures 5/6pi radians is drawn in standard position. which quadrant does the terminal side lie? Let x be an angle in quadrant IV such that cos x=5/13? I need help on a math problem? How do I find the sin 2x, cos 2x, and tan 2x? In which quadrant is the point (–3, –7) located? How do you solve this circle equation problem? The point (52, 6) is in what quadrant? is it II IV or I? TRIG PROBLEM PLEASE HELP: if sin 2 alpha= - 4/5 and 2 alpha is in quadrant IV, find sin 4 alpha? If a circle is split into 360 degrees, with Quadrants I, II, II, and IV? Math B/Precalculus question: If z1 = 3 + 2i and z2 = 4 - 3i, in which quadrant does the graph (z2 - z1) lie? Trigonometric function help? If θ is in quadrant IV and cot(θR) = 3, what is cot(θ)? given that sin a =-8/17, a is in QIV, find sin(a/2), and cos(a/2)? sin θ = -4/5, θ in quadrant IV, find sin2θ? How to find the exact value for csc 0 and tan 0? Name the quadrant in which the angle θ lies. On my xbox 360 there is one red light in the lower right hand quadrant and it is a hw failure. How can fix it Describe the location of the point having the following coordinates? when the sum of 4+6i and 6-8i is graphed, in which quadrant does it lie? Write the first expression in terms of the second if the terminal point given by 't' is in the given quadrant If the angle -1500 degrees is in standard position what is the quadrant in which its terminal side lies? please help algebra? Math help plz easy math help? Find the trigonometric function: cos θ = 2/9 and θ in quadrant IV. Find sin θ? Find the exact value of cos(a-B) if sin a=-7/25 and sin B=8/17, with a in quadrant IV and B in quadrant II? how to identify the quadrant in which thea lies in? cosθ = 2/5, θ in quadrant IV? When plotted on the rectagular coodinate system in which quadrant would the following point be located for ? How to determine in what quadrant an angle is in? How do you find sin 2θ, given cos θ = 5/13 and θ lies in Quadrant IV. Show all work please? If x is an angle in quadrant IV and tan x = -12/5, find the value of cos 2x. why the line 2x-y=0 that lies on the Quadrant III has a sin of (-2sqrt(5))/5? math: TRIG? need help showing work, have answers? 7. In which quadrant will the angle -305 degrees lie in standard position? (-3, 5) is located in _____? which coordinates of the point that is in the given quadrant or on the given axis. In which quadrant does it lie ? algebra help 1 question? Find the exact value of cosθ if sinθ=-2/3 , and θ is.... An angle that measures 5π/6 radians is drawn in standard position.In which quadrant does the terminal side lie? In which quadrant is (-1/9,11/5) located? I, II, II, or IV? What is the exact value of tan θ if sec θ = 9/8 and θ is in quadrant IV? Does anyone have any insight? Find the exact value of sin(2α) given that tan(α) = -8/15 and α is in quadrant IV? Find Sin2x, Cos2x, & Tan2x, Given that Sec x=6 and x is in Quadrant IV. Trigonometric functions help on my homework? If the point P=(4/5,y) is on the unit circle in quadrant IV, then whats Y? Trig Question: What quadrant will angle lie if: sin -1/2 , and cos>0 ? Tanx is 1/3 and x is in quadrant IV, what is the value of cosx? Trigonometry Help. I need help with this problem. An angle is 145 Degrees. It belongs in which Quadrant? How do you know in what Quadrant x is in if... more math questions, help again please! thank you so much. Other trigonometric value for sinθ=(-3/5) in quadrant IV... Trigonemetric Idenies Problem. Find COT & SIN. Help Please!! for answer to this trig question! The point (52, 6) is in what quadrant? is it II IV or I? Triangle ABC is drawn in Quadrant III. If ΔABC is reflected in the y-axis, its image will lie in Quadrant? If cot(x) = −11/10 , where x is in Quadrant IV, what is the exact value of sin(2x)? If sinθ = -1/3 in quadrant IV, find: a) cosθ b) cotθ? slope fields (calculus) true and false? find the exact value of TAN theta? Suppose a parabola has a vertex... one problem I need help!! Quadrant?I need help on finding the answer? when represented graphically, in which quadrant does the sum of -4-i and 3+4i lie? Given that tan t = -1 and that the terminal side of t lies in Quadrant IV, find sin t? Trigonometry istance desperately needed? if cot t = -0.6 and t is in quadrant IV, then how the heck do I find... If sin(x) = -3/5 with x in quadrant IV, find the sec(x)? Name the quadrant or axis in which the point lies. I have a math question? Translation and Quadrant question? sin θ = -4/5, θ in quadrant IV, find sin2θ? why is this sign positive i thought it was supposed to be negative? What is IV sedation or G.A. like for getting 4 teeth out? If the point P = (4/5,y) is on the unit circle in quadrant IV, then y=? Find the eact value, given that SinA = -4/5 in Quadrant IV.... sin (theta) = -21/29, theta lies in quadrant IV. Find cos (theta/2)? Will you help me with 3 quick Trigonometry questions on exact values? How to find the exact value for csc 0 and tan 0? 18. What quadrant is the vertex of Y = f(X + 4) -4 located in? How do I find the sin 2x, cos 2x, and tan 2x? If sin A = -1/2, A in quadrant IV, what is tan 2A? Find Sin2x, Cos2x, & Tan2x, Given that Sec x=6 and x is in Quadrant IV. Find the trigonometric function value of angle @? Find the exact value of tan(a+b) given that sina= -(5/13) nd cosb= (4/5). (Both a and b in Quadrant IV.)? Not sure how to find this answer (trig idenies)? In which quadrant would you find the point A (-3, -3)? Which ordered pair is located in Quadrant IV? (3, –4) (–2, –5) (5, 3) (–6, 2) (0, 2) ? Geometry Help!! Reflecting a tzoid in a quadrant. Write first expression in terms of the second if the terminal point determined by t is in the given quadrant? How to find the quadrant of a negative number? (math help please?) Find the exact value, given that sin A= -4/5 with A in quadrant IV? Find the exact value of cos(u-v) given that sin u= - 5/13 and cos v= 4/5. (Both u and v in Quadrant IV.)? Given sin u = -5/9, and u is in quadrant IV, find sin 2u, cos 2u and tan 2u? which of these ordered pairs would be found in quadrant iv of the coordinate plane (-8,-8) (-8,8) (8,-8) (8,8)? Can you help me with another difficult math question? 7. In which quadrant will the angle -305 degrees lie in standard position? The vertex of the graph of y= -x^2-16x-62 lies in which quadrant? How do you find the exact value of sin2x given that cosx=5/13 and x is in quadrant IV? (math help please?) Find the exact value, given that sin A= -4/5 with A in quadrant IV? Trig homework help please? How do you this and help me to find it? indicated trigonometric value in the specified quadrant? I really need a step by step and answer for this trig question. can somebody help me with this trig please. quadrant questions? Unit Circle trigonometry? math help plz need answers badly? What would be an equation of a line that could bisect the quadrants II and IV? Find the indicated trigonometric value in the specified quadrant. Find sin s if cos s = 2/3 and s is in quadrant IV. In which quadrant might the point lie on? Find sin 2x, cos 2x and tan 2x if cos x = 3 / sqrt(10) and x terminates in quadrant IV. two really easy math multiple choice questions?! (You dont even need a pen and paper)? how do i find: if the cos theda= 5/13, find csc theda if theda is in quadrant IV? how to identify the quadrant in which thea lies in? Suppose cot t = −0.9 and t is in quadrant IV. find:? which coordinates of the point that is in the given quadrant or on the given axis. I have a SAT Math Question? If sin A= -3/5 and angle A is in quadrant IV, find cos A. . What quadrant does -168 degrees lie in? Can someone help with this math problem? 5. In which quadrant will the angle 100 degrees lie in the standard position? find the exact value of the expression using the provided information...SHOW ALL WORK. Some MORE trigonometry problems (a and B actually mathmatical symbols I don't know how to type)? Sum and Difference Formulas Question? Find the Exact value of trig functions? 21 is anyone good at math that can help me? If cos θ = 12/13 and angle θ is in quadrant IV, determine some co ordinates for point p on terminal arm angle? Find sin 2x, cos 2x and tan 2x if cos x = 3 / sqrt(10) and x terminates in quadrant IV. quadrant questions? Angle(theta) lies in quadrant IV with point P on the terminal arm and tan(theta) = -3/5. Whats sine(theta)? How do I find 2x, cos2x, tan2x? What quadrant is angle x if -- Cos x < 0 and Csc x < 0? can you help to solve these questions... Find tan2θ if θ terminates in Quadrant IV and cosθ = 3/5? If secx=5/2, x is in quadrant IV, what is tanx? what is the value of theta, if cos(theta) = 3/5, and theta is in quadrant IV? find missing coordinate of p using the fact that P lies on unit circle in quadrant IV (?, -3/5)? Tan x is -1/3 and x is in quadrant IV, what is the value of cos x? 5. In which quadrant will the angle 100 degrees lie in the standard position? Given that tan t =1 and the terminal of t lis in quadrant IV find sin t? Find the exact value of the expression using the provided information. SHOW ALL WORK. Trig Homework please help explain? If cos(theta) =1/3, theta in quadrant IV, find the exact value of tan(theta +pi/4). Need to use sum formula? How i can solve this (trigonometry)? How to find the exact value for csc ø and tan ø? Name the quadrant or axis in which the point lies. Let Θ be an angle in quadrant IV, such that cosΘ=(2/3). Find cscΘ and cotΘ. I am having problems with this!! If sin A = -1/2, A in quadrant IV, what is tan 2A? In which quadrant does it lie ? algebra help 1 question? Algebra Question in graphing? 10 easy points!!! im a junior in hs..I need urgent help trigonometry and quadrants im frustrated!! plz help me Use an integer to represent 32F below Zero? if sec (Theta) is 12/5 with theta in Quadrant IV, find tan (Theta)? Trigonometry help please :)? Find Trig function values by finding a point on the line? I need help on this math problem? If the point P(6/7,y) is on the unit circle in quadrant IV, then y=? How do you know in what Quadrant x is in if... sin t in terms of sec t in quadrant IV? Which type of transformation was done to move polygon ABCDE from Quadrant II to Quadrant IV? If the point p= (4/5, y) is on the unit circle in quadrant IV, then y=? let sin s= 1/4 with s in quadrant IV and let cos t= -2/5, with t in quadrant II. find each of the following:? Let 0 be an angle in standard position. Name the quadrant in which the angle O lies. given cos(alpha)=7/8 and alpha is in quadrant IV, find the exact value of cos(alpha/2). Find Sin2x, Cos2x, and Tan2x if Tan x equals -15/8 and x terminates in Quadrant IV? How do I find the values of the six trigonometric functions for this problem? Algebra 2: Need help with word problems involving quadratic equations and functions? Find Sin2x, Cos2x, and Tan2x if Tan x equals -15/8 and x terminates in Quadrant IV? Find sin( A-B) given that cos Cos A=1/3, with A in quadrant 1, Sin B=-1/2, with B in quadrant iv? Which quadrant is the terminal side of a -191° angle in? In which quadrant does point P(-5, 5) lie? What is the exact value of tan θ if sec θ = 9/8 and θ is in quadrant IV? Does anyone have any insight? In which quadrant is (-1/9,11/5) located? I, II, II, or IV? Helwith 8th grade math please? Trig math problem please help me :if csc (t) = - 17/8 and P(t) is in quadrant IV, find tan 2t? Given that tan t =1 and the terminal of t lis in quadrant IV find sin t? Which quadrant is the terminal side of a -191° angle in? From the information given, find the quadrant in which the terminal point determined by t lies. Given that SinΘ = -24/25 and Θ is quadrant IV, find Sin2Θ. Anyone know how to solve this? Suppose sec θ = 3 and the terminal sideof the angle lies in quadrant IV? exact value of expression sin 2theta? Two really easy math multiple choice questions?! (You dont even need a pen and paper)? Which of the following points lies on the circle whose center is at the origin and whose radius is 13? What quadrant the trig functions in? TRIG PROBLEM PLEASE HELP: if sin 2 alpha= - 4/5 and 2 alpha is in quadrant IV, find sin 4 alpha?
{"url":"http://okay.so/en/list/quadrant+iv/0/","timestamp":"2014-04-18T20:48:25Z","content_type":null,"content_length":"86189","record_id":"<urn:uuid:f237c4b3-da38-44df-8e21-5ba4b76c7e6d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Best Response You've already chosen the best response. Question 2. What if Randall goes first? Question 3. What if it's an m×n grid? \[\color{gray}\Huge{\text{___________________________________________________________________________}}\] Source: Best Response You've already chosen the best response. can mike darken only the squares he has written numbers into or all the squares with the largest number in any row? Best Response You've already chosen the best response. The ones with largest number I think. Best Response You've already chosen the best response. For question 1 and 2, the second player has the winning strategy. For any turn, if Player 1 puts a number in row i, Player 2 needs to put a number in the same row. In each row, make the first three spots group 1, and the second 3 spots group 2. Whichever group Player 1 puts the number in, Player 2 puts his number in the other group. Now, in the odd rows, Player 2 can force the black square to end up in Group 1 since: 1) If player 1 puts numbers in group 1, player 2 responds with smaller numbers in group 2 2) If player 1 puts numbers in group 2, player 2 responds with larger numbers in group 1. In the even rows, do the opposite (switch the words smaller and larger), and this will result in the black squares ending up in group 2. If player 2 plays like this, the only possible way player 1 can win is if the grid looks like this:|dw:1335510717037:dw|So to counter this, all player 2 has to do is change his strategy in the sixth row, and make sure the black square doesnt end up in column 2, 3, or 4. to accomplish this task, make group 1 columns 2, 3, and 4, and group 2 columns 1, 5, and 6. If player 2 plays like he did in the other even rows, this will result in his victory. Best Response You've already chosen the best response. To be a little more specific about what size numbers player two should be placing, they should be the largest in the row if he's putting a larger number in, and the smallest in the row if hes putting a small number in. Best Response You've already chosen the best response. |dw:1335512114034:dw|for anyone who wants to play lol, use the strategy :P Best Response You've already chosen the best response. @Ishaan94 How did you make that <hr>? Best Response You've already chosen the best response. lol, it's a stupid way using \(\LaTeX\). Best Response You've already chosen the best response. Oh, that's LaTeX. :P Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f7fa46fe4b0bfe8930bdd5c","timestamp":"2014-04-21T02:22:59Z","content_type":null,"content_length":"114473","record_id":"<urn:uuid:0534e813-e620-4f49-89a5-24c09b7a937e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD AND APPARATUS FOR A LOCAL COMPETITIVE LEARNING RULE THAT LEADS TO SPARSE CONNECTIVITY Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Certain aspects of the present disclosure support a local competitive learning rule applied in a computational network that leads to sparse connectivity among processing units of the network. The present disclosure provides a modification to the Oja learning rule, modifying the constraint on the sum of squared weights in the Oja rule. This constraining can be intrinsic and local as opposed to the commonly used multiplicative and subtractive normalizations, which are explicit and require the knowledge of all input weights of a processing unit to update each one of them individually. The presented rule provides convergence to a weight vector that is sparser (i.e., has more zero elements) than the weight vector learned by the original Oja rule. Such sparse connectivity can lead to a higher selectivity of processing units to specific features, and it may require less memory to store the network configuration and less energy to operate it. A method of training a computational network, comprising: computing an output of a processing unit in the computational network based at least in part on at least one existing weight; and changing the at least one weight of the processing unit using a local training rule, wherein the local training rule creates sparse connectivity between processing units of the computational network. The method of claim 1, wherein changing the at least one weight using the local training rule comprises updating the at least one weight based on at least one of: one or more inputs in the processing unit, the output, or the at least one existing weight before the change. The method of claim 2, wherein the sparse connectivity is created after multiple updating of the at least one weight. The method of claim 1, wherein: the computational network comprises a neural network, and the processing unit comprises a neuron circuit. The method of claim 1, wherein the local training rule imposes a constraint on a sum of absolute values of the weights. The method of claim 5, wherein the constrained weights are associated with input connections to the processing unit. The method of claim 1, wherein the local training rule constrains an input weight vector associated with input connections to the processing unit at an equilibrium point. The method of claim 1, wherein the local training rule constrains a number of nonzero weights. The method of claim 8, wherein the constrained weights are associated with input connections to the processing unit. The method of claim 1, wherein the processing unit utilizes one or more non-linear operations. The method of claim 1, wherein the local training rule imposes bounds on individual weights associated with input connections to the processing unit. The method of claim 11, wherein a maximum value of the individual weights is bounded by an upper bound. The method of claim 11, wherein a minimum value of the individual weights is bounded by a lower bound. The method of claim 11, wherein both maximum and minimum values of the individual weights are bounded. The method of claim 11, wherein the bounds vary for the individual weights. An apparatus of a computational network, comprising: a first circuit configured to compute an output of the apparatus in the computational network based at least in part on at least one existing weight; and a second circuit configured to change the at least one weight of the apparatus using a local training rule, wherein the local training rule creates sparse connectivity between apparatuses of the computational network. The apparatus of claim 16, wherein the second circuit is also configured to update the at least one weight based on at least one of: one or more inputs in the apparatus, the output, or the at least one existing weight before the change. The apparatus of claim 17, wherein the sparse connectivity is created after multiple updating of the at least one weight. The apparatus of claim 16, wherein: the computational network comprises a neural network, and the apparatus comprises a neuron circuit. The apparatus of claim 16, wherein the local training rule imposes a constraint on a sum of absolute values of the weights. The apparatus of claim 20, wherein the constrained weights are associated with input connections to the apparatus. The apparatus of claim 16, wherein the local training rule constrains an input weight vector associated with input connections to the apparatus at an equilibrium point. The apparatus of claim 16, wherein the local training rule constrains a number of nonzero weights. The apparatus of claim 23, wherein the constrained weights are associated with input connections to the apparatus. The apparatus of claim 16, wherein the apparatus utilizes one or more non-linear operations. The apparatus of claim 16, wherein the local training rule imposes bounds on individual weights associated with input connections to the apparatus. The apparatus of claim 26, wherein a maximum value of the individual weights is bounded by an upper bound. The apparatus of claim 26, wherein a minimum value of the individual weights is bounded by a lower bound. The apparatus of claim 26, wherein both maximum and minimum values of the individual weights are bounded. The apparatus of claim 26, wherein the bounds vary for the individual weights. An apparatus of a computational network, comprising: means for computing an output of the apparatus in the computational network based at least in part on at least one existing weight; and means for changing the at least one weight of the processing unit using a local training rule, wherein the local training rule creates sparse connectivity between apparatuses of the computational network. The apparatus of claim 31, wherein the means for changing the at least one weight using the local training rule comprises means for updating the at least one weight based on at least one of: one or more inputs in the apparatus, the output, or the at least one existing weight before the change. The apparatus of claim 32, wherein the sparse connectivity is created after multiple updating of the at least one weight. The apparatus of claim 31, wherein: the computational network comprises a neural network, and the apparatus comprises a neuron circuit. The apparatus of claim 31, wherein the local training rule imposes a constraint on a sum of absolute values of the weights. The apparatus of claim 35, wherein the constrained weights are associated with input connections to the apparatus. The apparatus of claim 31, wherein the local training rule constrains an input weight vector associated with input connections to the apparatus at an equilibrium point. The apparatus of claim 31, wherein the local training rule constrains a number of nonzero weights. The apparatus of claim 38, wherein the constrained weights are associated with input connections to the apparatus. The apparatus of claim 31, wherein the apparatus utilizes one or more non-linear operations. The apparatus of claim 31, wherein the local training rule imposes bounds on individual weights associated with input connections to the apparatus. The apparatus of claim 41, wherein a maximum value of the individual weights is bounded by an upper bound. The apparatus of claim 41, wherein a minimum value of the individual weights is bounded by a lower bound. The apparatus of claim 41, wherein both maximum and minimum values of the individual weights are bounded. The apparatus of claim 41, wherein the bounds vary for the individual weights. A computer program product for training a computational network, comprising a computer-readable medium comprising code for: computing an output of a processing unit in the computational network based at least in part on at least one existing weight; and changing the at least one weight of the processing unit using a local training rule, wherein the local training rule creates sparse connectivity between processing units of the computational network. The computer program product of claim 46, wherein the computer-readable medium further comprising code for updating the at least one weight based on at least one of: one or more inputs in the processing unit, the output, or the at least one existing weight before the change. The computer program product of claim 47, wherein the sparse connectivity is created after multiple updating of the at least one weight. The computer program product of claim 46, wherein: the computational network comprises a neural network, and the processing unit comprises a neuron circuit. The computer program product of claim 46, wherein the local training rule imposes a constraint on a sum of absolute values of the weights. The computer program product of claim 50, wherein the constrained weights are associated with input connections to the processing unit. The computer program product of claim 46, wherein the local training rule constrains an input weight vector associated with input connections to the processing unit at an equilibrium point. The computer program product of claim 46, wherein the local training rule constrains a number of nonzero weights. The computer program product of claim 53, wherein the constrained weights are associated with input connections to the processing unit. The computer program product of claim 46, wherein the processing unit utilizes one or more non-linear operations. The computer program product of claim 46, wherein the local training rule imposes bounds on individual weights associated with input connections to the processing unit. The computer program product of claim 56, wherein a maximum value of the individual weights is bounded by an upper bound. The computer program product of claim 56, wherein a minimum value of the individual weights is bounded by a lower bound. The computer program product of claim 56, wherein both maximum and minimum values of the individual weights are bounded. The computer program product of claim 56, wherein the bounds vary for the individual weights. BACKGROUND [0001] 1. Field Certain aspects of the present disclosure generally relate to neural system engineering and, more particularly, to a method and apparatus for training a computational network using a local training rule that creates sparse connectivity. 2. Background A developing brain of humans and animals undergoes a synaptic growth spurt in early childhood followed by a massive synaptic pruning, which removes about half of the synapses by adulthood. Synaptic rewiring (structural plasticity) continues in mature brain but at a slower rate. The synaptic pruning is found to be activity dependent and to remove weaker synapses. Because of that, it may be explained by a synaptic plasticity, in which synapses compete for finite resources such as neurotrophic factors. Synaptic pruning helps to increase the brain efficiency, which can be generally defined as the same functionality with fewer synapses. Since transmission of signals through synapses requires energy, a higher efficiency also means a lower energy. Existing unsupervised learning rules model the synaptic competition for limited resources either explicitly, by the multiplicative or subtractive normalization, or implicitly. However, the explicit normalizations may be nonlocal, i.e., they require the knowledge of all input weights of a neuron to update each one of them individually. However, this may not be biologically plausible. The Oja rule, on the other hand, uses only local information available to a synapse to compute its weight update, but it asymptotically constrains the sum of squared weights, which does not have a biological SUMMARY [0006] Certain aspects of the present disclosure provide a method of training a computational network. The method generally includes computing an output of a processing unit in the computational network based at least in part on at least one existing weight, and changing the at least one weight of the processing unit using a local training rule, wherein the local training rule creates sparse connectivity between processing units of the computational network. Certain aspects of the present disclosure provide an apparatus of a computational network. The apparatus generally includes a first circuit configured to compute an output of the apparatus in the computational network based at least in part on at least one existing weight, and a second circuit configured to change the at least one weight of the apparatus using a local training rule, wherein the local training rule creates sparse connectivity between apparatuses of the computational network. Certain aspects of the present disclosure provide an apparatus of a computational network. The apparatus generally includes means for computing an output of the apparatus in the computational network based at least in part on at least one existing weight, and means for changing the at least one weight of the processing unit using a local training rule, wherein the local training rule creates sparse connectivity between apparatuses of the computational network. Certain aspects of the present disclosure provide a computer program product for training a computational network. The computer program product generally includes a computer-readable medium comprising code for computing an output of a processing unit in the computational network based at least in part on at least one existing weight, and changing the at least one weight of the processing unit using a local training rule, wherein the local training rule creates sparse connectivity between processing units of the computational network. BRIEF DESCRIPTION OF THE DRAWINGS [0010] So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects. FIG. 1 illustrates an example processing unit of a neural system in accordance with certain aspects of the present disclosure. FIGS. 2A-2D illustrate example geometric analysis of asymptotic solutions in accordance with certain aspects of the present disclosure. FIG. 3 illustrates operations for updating synapse weights of the neural system using a local training rule in accordance with certain aspects of the present disclosure. FIG. 3A illustrates example components capable of performing the operations illustrated in FIG. 3. FIGS. 4A-4D illustrate afferent receptive fields of a simple cell trained by four different rules in accordance with certain aspects of the present disclosure. FIGS. 5A-5D illustrate distributions of Retinal Ganglion Cell to simple-cell (RGC-to-S1) weights in accordance with certain aspects of the present disclosure. FIGS. 6A-6D illustrate a simple-cell orientation map with connections from a pool of simple cells to the same complex cell in accordance with certain aspects of the present disclosure. FIGS. 7A-7D illustrate a distribution of simple-cell to complex-cell (S1-to-C1) weights trained by four different rules in accordance with certain aspects of the present disclosure. FIG. 8 illustrates an example software implementation of a local training rule using a general-purpose processor in accordance with certain aspects of the present disclosure. FIG. 9 illustrates an example implementation of a local training rule where a weight memory is interfaced with individual distributed processing units in accordance with certain aspects of the present disclosure. FIG. 10 illustrates an example implementation of a local training rule based on distributed weight memories and distributed processing units in accordance with certain aspects of the present DETAILED DESCRIPTION [0022] Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof. An Example Neural System [0025] FIG. 1 illustrates an example 100 of a processing unit (e.g., a neuron) 102 of a computational network (e.g., a neural system) in accordance with certain aspects of the present disclosure. The neuron 102 may receive multiple input signals 104 ), which may be signals external to the neural system, or signals generated by other neurons of the same neural system, or both. The input signal may be a current or a voltage, real-valued or complex-valued. The input signal may comprise a numerical value with a fixed-point or a floating-point representation. These input signals may be delivered to the neuron 102 through synaptic connections that scale the signals according to adjustable synaptic weights 106 ), where N may be a total number of input connections of the neuron 102. The neuron 102 may combine the scaled input signals and use the combined scaled inputs to generate an output signal 108 (i.e., a signal y). The output signal 108 may be a current, or a voltage, real-valued or complex-valued. The output signal may comprise a numerical value with a fixed-point or a floating-point representation. The output signal 108 may be then transferred as an input signal to other neurons of the same neural system, or as an input signal to the same neuron 102, or as an output of the neural system. The processing unit (neuron) 102 may be emulated by an electrical circuit, and its input and output connections may be emulated by wires with synaptic circuits. The processing unit 102, its input and output connections may also be emulated by a software code. The processing unit 102 may also be emulated by an electric circuit, whereas its input and output connections may be emulated by a software code. In one aspect of the present disclosure, the processing unit 102 in the computational network may comprise an analog electrical circuit. In another aspect, the processing unit 102 may comprise a digital electrical circuit. In yet another aspect, the processing unit 102 may comprise a mixed-signal electrical circuit with both analog and digital components. The computational network may comprise processing units in any of the aforementioned forms. The computational network (neural system) using such processing units may be utilized in a large range of applications, such as image and pattern recognition, machine learning, motor control, and alike. Local Competitive Learning Rule with L 1 Constraint Certain aspects of the present disclosure support a local competitive learning rule for updating weights associated with one or more processing units (neurons) of a computational network (a neural system), such as the weights 106 illustrated in FIG. 1). The applied local competitive learning rule may lead to sparse connectivity between the processing units, i.e., some of the weights may be equal to zero, or be below a certain threshold value, once the learning process in finished. In an aspect, the general Hebb's learning rule of synaptic weights which may be expressed as . (1) is the change in the ith synaptic weight w , η is a learning rate, x is the ith input (presynaptic response), and y is the neuron output (postsynaptic response). The rule defined by equation (1) may cause unbounded weight growth, thus failing to account for the brain's limited resources for synaptic potentiation and the resulting competition between synapses for these resources. 100281 Several modifications to the Hebb's rule may help to overcome its drawbacks. For example, a passive weight decay term may be added to equation (1) to restrict the weight growth: , 0<γ<1. (2) This rule may prune connections with low activity , and it may prune all connections if γ is not chosen carefully. Further, the so-called "instar rule", in which the decay term may be gated with the postsynaptic activity y, may circumvent this problem as given by: )y. (3) A similar rule as the one defined by equation (3) may utilized in the self-organizing maps. It should be noted that this rule may converge to w A covariance rule can be proposed, which may remove the bias of the Hebb's rule due to nonzero means of x, and y and, at the same time, may add the synaptic depression as given by: ) (y-y), (4) where x[i] and y are the average pre- and postsynaptic activities, respectively. Just like the Hebb's rule, the rule defined by equation (4) may not limit the weight growth and may not force a synaptic To achieve a synaptic competition, a postsynaptic threshold that grows faster than linearly with the average postsynaptic activity y may be used. The resulting learning rule, called the BCM rule (Bienenstock-Cooper-Munro rule), may be written as: Δ w i = μ x i [ y - y ( y y 0 ) p ] . ( 5 ) ##EQU00001## where y[0] represents an asymptotic target for y, and p>1 is a constant. To prevent the unbounded growth of weights, the weights may be divided by their sum to keep them constant as given by: w i ( t ) = α w i ' ( t ) i w i ' ( t ) , ( 6 ) ##EQU00002## where w , α is a target for Σ (t), and t is the time index. This type of weight bounding can be called the multiplicative normalization. In its original form, the multiplicative normalization may be applied to unipolar weights. However, it may be expanded to bipolar weights by changing the denominator in equation (6) to the L -norm Σ (t)|. It can also be modified to limit the weight vector length (the L -norm) by changing the denominator to {square root over (Σ . Because the weights in equation (6) may be trained by the Hebb's rule and then scaled by a common factor, both learning rules defined by equations (1) and (6) may converge to the weight vectors pointing in the same direction, but having different lengths. 100321 One may also subtract an equal amount from each weight after they are modified by the learning rule defined by equation (1), with the amount chosen so that the total sum of the weights may remain constant: w i ( t ) = w i ' ( t ) - 1 N ( i w i ' ( t ) - α ) , ( 7 ) ##EQU00003## where N is a number of inputs . This type of weight bounding can be called the subtractive normalization. Substituting w' into equation (7) and taking into account Σ (t-1)=α, then the learning rule defined by (7) may reduce to Δ w i = η ( x i - 1 N i x i ) y . ( 8 ) ##EQU00004## The subtractive normalization may be typically applied to unipolar weights and, thus, may require a zero bound to prevent weights from changing their sign. With the zero bound, all inputs weights of a neuron trained by equation (7) may asymptotically converge to zero except one weight. To prevent a single nonzero weight, an upper bound on the weight magnitude may also be imposed. The main drawback of both multiplicative and subtractive normalizations may be that they are nonlocal, i.e., they may require the knowledge of all input weights or inputs of a neuron to compute each weight individually. A local learning rule known as the Oja learning rule may constrain the L norm of an input weight vector at the equilibrium point. In a general form, the Oja rule may be written as: Δ w i = η ( x i y - w i y 2 α ) . ( 9 ) ##EQU00005## α is a target for Σ at the equilibrium point. While this rule may create a competition between synaptic weights for limited resources, modeling these resources as a sum of the squared weights may not be biologically 100341 The aforementioned learning rules may be typically applied to unipolar weights to obey a principle according to which connections from excitatory neurons may need to have positive weights and connections from inhibitory neurons may need to have negative weights. In an aspect, weights may not be allowed to change their sign by using a zero bound. If a rule can segregate afferents, the zero bound may often lead to weight vectors with many zero elements (sparse vectors). However, if weights are allowed to change their sign, then the aforementioned rules may converge to weight vectors with few zero elements (non-sparse vectors). According to certain aspects of the present disclosure, a modification to the Oja rule defined by equation (9) is proposed as given by: Δ w i = η ( x i y - sgn ( w i ) y 2 α ) , ( 10 ) ##EQU00006## is the change in the ith synaptic weight w , η is a learning rate, x is the ith input (presynaptic response), y is the neuron output (postsynaptic response), α is a target for Σ |, and sgn( ) is the sign function. In order to prove that the proposed rule given by equation (10) constrains Σ | to α at the equilibrium point, it can be assumed that the output y is generated as the weighted sum of the neuron's inputs, i.e.: = k w k , r k . ( 11 ) ##EQU00007## Substituting equation (11) into equation (10) and taking the time average of the result with an assumption that the weight changes are slow relative to the time over which the input patterns are presented, may result Δ w i η = k w k x i x k - sgn ( w i ) α j , k w j x j w k x k = k C ik w k - sgn ( w i ) α j , k w j C jk w k , ( 12 ) or , in the matrix form , Δ w η = Cw - sgn ( w ) α [ w T Cw ] , ( 13 ) ## where w is the input weight vector , T in the superscript means transpose, and the matrix C with an element C is a correlation matrix of the inputs. At the equilibrium point, the average weight change should be equal to zero, i.e.: 0 = Cw - sgn ( w ) α [ w T Cw ] . ( 14 ) ##EQU00009## Multiplying both sides of equation (14) by w from the left, dividing the resulting equation by the scalar [w Cw], and rearranging the terms, may result into: =α. (15) .e., the L -norm of the weight vector w may be equal to a at the equilibrium point. In a similar manner, it can be proved that the following rule may constrain the L -norm of the weight vector at the equilibrium point: Δ w i = { η ( x i - y α w i ) y if w i ≠ 0 , η ( x i - y α β ) y if w i = 0 , ( 16 ) ##EQU00010## β is a constant (for example, β=1 or β=0), and a is a target for the count of nonzero elements in w. Because of the division by w , the rule defined by equation (16) may create large Δw updates when w is close to 0, making it oscillate around 0 and never reaching the target unless the zero bound is used. On the other hand, the learning rule defined by equation (10) may not show such behavior and may converge to a sparse w with or without the zero bound, as will be shown in greater detail below. As a simple example, a linear neuron with two inputs x and x and the corresponding weights w and w can be considered. Then, the neuron output may be given by: . (17) where all quantities may be either positive , negative, or zero. If the inputs are zero mean, then the output y may also be zero mean, and the covariance rule defined by equation (4) may reduce to the Hebb's rule defined by equation (1). The Hebb's rule can be viewed as an optimization step in the direction of the gradient of a cost function E: Δ w i = η E w i . ( 18 ) ##EQU00011## It can be shown that E /2, i.e., the Hebb's rule may maximize the neuron energy, thus the unbounded growth of the weight magnitudes. There may be two possible solution paths of the gradient ascent: along the left (y<0) and right (y>0) sides of the parabola y /2, depending on the initial value of y. For simplicity, this initial value may be assumed being positive, such that the learning rule defined by equation (18) moves along the right side of the parabola y /2. In this case, maximization of y /2 may be equivalent to maximization of y. To prevent the unbounded weight growth, a constraint may be imposed on the weight magnitudes: |w |≦α and |w |≦α. This constraint may draw a square 202 on the (w ) plane, as illustrated in FIG. 2A. A straight line 204 may draw all possible (w ; w ) solutions for given y, x , and x . The slope of the line 204 may be determined by -x , and its position relative to the center may be determined by y/x . Maximization of y may move the line 204 away from the center (up if x >0 or down if x <0). The asymptotic solution (w (∞)) may be found by moving the line 204 in the direction of increasingly until it touches the square 202 at just one point, which may be always one of the corners unless x =0 or x =0. As it can be observed in FIG. 2A, for the vast majority of inputs, the Hebb's rule with the specified bounds may lead to a solution in which all weights have the maximum magnitude, i.e., |w The Hebb's rule with the subtractive normalization defined by equation (7) may maintain the total sum of the weights constant, i.e., w =α. This constraint may draw a straight line 206 passing through (α, 0) and (0, α) on the (w ) plane as illustrated in FIG. 2B. Two constraints are illustrated: the line 206 may be associated with nonnegative weights, and another line 208 may be associated with bipolar weights. Possible asymptotic solutions are marked with 210. The subtractive normalization may be typically applied to nonnegative weights, in which case α>0 and the weights are bounded at zero, i.e., w ≧0 and w ≧0. The asymptotic solutions may be (α, 0) and (0, α), which are both sparse. If weights are allowed to change their sign, then the asymptotic solutions may be unbounded unless bounds may be forced. If the maximum weight magnitude is constrained at α, then the asymptotic solutions may be (-α, α) and (α, -α), which are both non-sparse. To the first-order approximation, the Oja rule defined by equation (9) may be broken into the Hebbian term (the first term in the parentheses of equation (9)) and the constraint term (the second term in the parentheses of equation (9)). The Hebbian term may maximize the output y assuming that the initial output y is positive, and the second term may impose the constraint w =α on the asymptotic solution. This constraint may draw a circle 212 with the radius {square root over (α)} on the (w ) plane, as illustrated in FIG. 2C. The asymptotic solution (w (∞)) may be found as a point, at which a solution line 214 defined by equation (17) is tangent to the circle 212. As it can be observed in FIG. 2C, it may be impossible to obtain a sparse solution with the Oja rule unless x =0 or x =0. A more rigorous analysis can show that the Oja rule may converge to the principal eigenvector of the data covariance matrix C with C Certain aspects of the present disclosure support the local learning rule defined by equation (10), which may impose the asymptotic constraint |w |=α. This constraint may draws a rhombus 216 with all sides equal to {square root over (2α)} on the (w ) plane, as illustrated in FIG. 2D. The asymptotic solution (w (∞)) may be found by moving a solution line 218 defined by equation (17) in the direction of increasing output y until it touches the rhombus 216 at just one point, which may always be one of the vertices unless |x |. Therefore, for the vast majority of inputs, the proposed rule may provide a sparse solution (i.e., one of the two weights may be zero). In a general case of N input weights, the rule defined by equation (10) theoretically may converge to a solution with only one nonzero weight of magnitude α. It may be desirable to allow the weight vector to have more than one nonzero element. To achieve that, an upper limit on each weight magnitude, w , may be imposed such that w <α, where α/w may be a target for the count of nonzero elements in w. In an aspect, the choice of α may be arbitrary. However, if all inputs and outputs in the network are desired to be within the same bounds (e.g., x .di-elect cons. [-1; 1] and y .di-elect cons. [-1; 1]), then the appropriate value for α may be one. In this case, the only input parameters required for the learning rule may be the learning rate q and the weight magnitude limit w The L -constraint rule defined by equation (16) may also be forced to keep the network inputs and outputs within the same bounds by limiting the maximum weight magnitude to w =1/α, where α is a number of nonzero elements in each weight vector. FIG. 3 illustrates example operations 300 for training a computational network (neural network) in accordance with aspects of the present disclosure. At 302 an output of a processing unit (neuron) in the computational network may be computed based at least in part on at least one existing weight. At 304, the at least one weight of the processing unit may be changed using a local training rule, wherein the local training rule may create sparse connectivity between processing units of the computational network. In accordance with certain aspects of the present disclosure, changing the at least one weight using the local training rule may comprise updating the at least one weight based on at least one of: one or more inputs in the processing unit, the output, or the at least on existing weight before the change. According to certain embodiments, sparse connectivity may be created after multiple updating of the at least one weight. To demonstrate certain embodiments of the present disclosure, the learning rule defined by equation (10) and its difference from other rules, may be used to train the feed-forward connection weights in a primary visual cortex (V1) neural network model. The network may consists of four two-dimensional layers: photoreceptors, retinal ganglion cells (RGCs), V1 simple cells (S1s), and V1 complex cells (C1s). The photoreceptors may be mapped 1:1 to the pixels of an input image. Each photoreceptor may code the luminosity of the corresponding pixel in the range [-1, 1]. The photoreceptor outputs may be fed to the retinotopically mapped RGCs through fixed-weight connections performing a spatial filtering of the input image with a Difference of Gaussians (DoG). The output of each RGC may be calculated as a linear sum of the weighted inputs. It may be either positive, negative, or zero. Such RGC may combine ON and OFF cells with the same inputs and opposite-polarity input weights. Its output may be equal to the difference of the corresponding ON- and OFF-cell outputs. The RGC outputs may be fed to the simple cells through adaptive bipolar weights, which may model the difference between the weights from the corresponding ON and OFF cells. These RGC-to-S1 weights may determine the receptive fields of the simple cells. The S1 layer may also have lateral connections with a short-range excitation and a long-range inhibition. These lateral connections may help the simple cells to self-organize into the orientation map with pinwheels and linear zones. Each simple cell S1 may be modeled as a sum of weighted inputs passed through a half-wave rectifier, which may preserve the positive part of the output and clip the negative part to zero. The positive outputs of S1s may be fed to the C1s through adaptive positive weights. First, the RGC-to-S1 connections were trained using four rules: the Hebb's rule with the subtractive normalization defined by equation (7), the Oja rule defined by equation (9), the proposed local learning rule defined by equation (10), and the modified local learning rule defined by (16). All four rules may result into weight bounding, wherein weights being learned may be bounded to the range of [-w , w FIGS. 4A-4D illustrate examples of the emerged RGC-to-S1 weight matrices, in which the filled circles represent positive weights (the ON region), and the hollow circles represent negative weights (the OFF region). The circle diameter may be proportional to the weight magnitude. The weights may be trained by four aforementioned rules with a [w , w ] bounding. FIG. 4A illustrates weights trained by the Hebb's rule with subtractive normalization. FIG. 4B illustrates weights trained by the Oja rule. FIG. 4C illustrates weights trained by the rule defined by equation (10) with L constraint. FIG. 4D illustrates weights trained by the rule with L constraint defined by equation (16). FIGS. 5A-5D illustrate the corresponding distributions of all RGC-to-S1 weights. As illustrated in FIG. 5A, the Hebb's rule with the subtractive normalization may converge to the weights of maximum magnitude. The Oja rule, illustrated in FIG. 5B, may converge to graded weights, some of which may have small but nonzero values. The proposed rule defined by equation (10), illustrated in FIG. 5C, may converge to a weight matrix with well-defined ON and OFF regions and many close-to-zero elements. The rule defined by equation (16), illustrated in FIG. 5D, may fail to converge to a sparse weight matrix because of the division by w , which may make small weights to oscillate around zero. It may be impossible to obtain exactly-zero weights without the zero bound in any of the rules. Therefore, to estimate the sparsity of weight matrices, the weights that are zero within a chosen rounding error may be counted. With the rounding error of 0:01 w , approximately 54% of RGC-to-S1 weights trained by the proposed rule defined by equation (10) may be zero, whereas less than 3% of weights trained by the other three rules may be zero. FIGS. 6A-6D illustrate the same four rules used to train the S1-to-C1 connections. This time, each rule may have an added weight bounding to [0, w ]. A fragment of the S1 layer is illustrated in FIGS. 6A-6D, which shows a two-dimensional arrangement of simple cells as an iso-orientation contour plot of their preferred orientations (values over the contours represent preferred orientations of the simple cells located under these contours). Boxes 602, 604, 606, 608 may outline the pool of S1 cells, whose outputs may be fed to the same complex cell. The hollow circles 612, 614, 616, 618 inside the boxes may indicate the connection strengths from these simple cells to the chosen complex cell: the larger the circle, the larger the weight. FIG. 6A illustrates weights trained by the Hebb's rule with subtractive normalization. FIG. 6B illustrates weights trained by the Oja rule. FIG. 6C illustrates weights trained by the rule defined by equation (10) with L constraint. FIG. 6D illustrates weights trained by the rule defined by equation (16) with L FIGS. 7A-7D illustrate the corresponding distributions of all S1-to-C1 weights. It can be observed in FIG. 7A that the Hebb's rule with the subtractive normalization may creates a sparse S1-to-C1 connectivity due to to the zero lower bound. FIG. 7B illustrates that the Oja rule creates connections of variable strength to all simple cells within the box, even to those with orthogonal orientations. According to certain aspects of the present disclosure, as illustrated in FIG. 7C, the proposed local learning rule defined by equation (10) may create strong connections to the simple cells of similar orientations and the zero-strength connections to the simple cells of other orientations, which may be consistent with biological data of orientation-selective and shift-invariant complex cells. The learning rule defined by equation (16), as illustrated in FIG. 7D, may also create a sparse S1-to-C1 connectivity due to clipping of the negative weights to zero. FIG. 8 illustrates an example software implementation 800 of the aforementioned local training rule using a general-purpose processor 802 in accordance with certain aspects of the present disclosure. Existing weights associated with each processing unit (neuron) of a computational network (neural network) may be stored in a memory block 804, while instructions related to the local training rule being executed at the general-purpose processor 802 may be loaded from a program memory 806. According to certain aspects of the present disclosure, the loaded instructions may comprise code for computing an output of each processing unit in the computational network based at least in part on at least one existing weight stored in the memory block 804. Further, the loaded instructions may comprise code for changing the at least one weight of that processing unit according to the local training rule, wherein the local training rule may create sparse connectivity between processing units of the computational network. In an aspect of the present disclosure, the code for changing the at least one weight of that processing unit may comprise code for updating the at least one weight based on at least one of: one or more inputs in that processing unit, the previously computed output, or the at least one existing weight before the change. The updated weights may be stored in the memory block 804 replacing old weights. FIG. 9 illustrates an example implementation 900 of the aforementioned local training rule where a weight memory 902 is interfaced via an interconnection network 904 with individual (distributed) processing units (neurons) 906 of a computational network (neural network) in accordance with certain aspects of the present disclosure. At least one existing weight associated with a processing unit 906 may be loaded from the memory 902 via connection(s) of the interconnection network 904 into that processing unit 906. The processing unit 906 may be configured to compute its output based at least in part on the at least one existing weight. Further, the processing unit 906 may be configured to change the at least one weight associated with that processing unit according to the local training rule, wherein the local training rule may create sparse connectivity between the processing units 906 of the computational network. In an aspect of the present disclosure, changing the at least one weight according to the local learning rule may further comprise updating the at least one weight based on at least one of: one or more inputs in the processing unit 906, the previously computed output of the processing unit 906, or the at least one existing weight before the change. The updated weights may be stored in the memory 904 replacing old weights associated with that processing unit 906. FIG. 10 illustrates an example implementation 1000 of the aforementioned local training rule based on distributed weight memories 1002 and distributed processing units 1004 in accordance with certain aspects of the present disclosure. As illustrated in FIG. 10, one weight memory bank 1002 may be directly interfaced with one processing unit (neuron) 1004 of a computational network (neural network), wherein that memory bank 1002 may store at least one existing weight associated with that processing unit 1004. The processing unit 1004 may be configured to compute its output based at least in part on the at least one existing weight loaded from the corresponding weight memory bank 1002. Further, the processing unit 1004 may be configured to change the at least one weight associated with that processing unit according to the local training rule, wherein the local training rule may create sparse connectivity between the processing units 1004 of the computational network. In an aspect of the present disclosure, changing the at least one weight by the processing unit 1004 according to the local learning rule may further comprise updating the at least one weight based on at least one of: one or more inputs in the processing unit 1004, the previously computed output of the processing unit 1004, or the at least one existing weight before the change. The updated weights may be stored in the corresponding memory bank 1002 replacing old weights. According to aspects of the present disclosure, the proposed learning rule may constrain the L1-norm of the input weight vector of a neuron at the equilibrium point. The learning rule may be local and intrinsic, which may make software and hardware implementations simpler. This rule may converge to a sparser weight vector than that learned by the original Oja rule with or without the zero bound. Such sparse connectivity may lead to higher selectivity of neurons to specific features, which may be found in many biological studies. Another advantage of constraining the L1-norm instead of the L2- norm may be the simplicity of keeping the inputs and outputs in the network within the same bounds by choosing α=1. The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrate circuit (ASIC), or processor. Generally, where there are operations illustrated in Figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, operations 300 illustrated in FIG. 3 correspond to components 300A illustrated in FIG. 3A. As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like. As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c. The various operations of methods described above may be performed by any suitable means capable of performing the operations, such as various hardware and/or software component(s), circuits, and/or module(s). Generally, any operations illustrated in the Figures may be performed by corresponding functional means capable of performing the operations. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array signal (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memorry, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material. Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims. While the foregoing is directed to aspects of the present disclosure, other and further aspects of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. Patent applications by Vladimir Aparin, San Diego, CA US Patent applications by QUALCOMM INCORPORATED Patent applications in class Constraint optimization problem solving Patent applications in all subclasses Constraint optimization problem solving User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120330870","timestamp":"2014-04-20T17:47:45Z","content_type":null,"content_length":"90862","record_id":"<urn:uuid:2a3db6d1-cc11-46e0-aabe-f14accff3a3b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
El Segundo Prealgebra Tutor Find an El Segundo Prealgebra Tutor ...All I expect is a hard-working student who will put in the time necessary for the level of success they want to reach. I will travel to any location within a 25 mile radius of Manhattan Beach. I am mostly available only on weekends, but can do long hours and some weekday nights as well. 22 Subjects: including prealgebra, reading, English, writing ...I also trained in a molecular biology lab as a post-doctorate fellow. Currently, I study how genes are turned on and off, and how this process becomes altered in human cancers. I enjoy teaching and helping others learn the material. 27 Subjects: including prealgebra, chemistry, physics, GRE ...I have tutored SAT Math and the SAT Math II subject test professionally for the past year. Three of my SAT Math II students were boarding school students in Russia; I tutored them online via Skype-esque video conferencing, advanced document mark-up & screen-sharing, and other technological tools... 60 Subjects: including prealgebra, reading, Spanish, English ...My second specialty is math. I started tutoring math voluntarily when I was a sophomore in high school (oh my gosh, about 8 years ago now!) and have been teaching it professionally for the last 3+ years. I can teach all levels and ages; I have taught students from the 3rd grade all the way up to college. 18 Subjects: including prealgebra, reading, geometry, statistics ...I understand how to construct and use truth tables in order to find the answer to logic and false logic puzzles. From my computer science background, I have an understanding of how the 'and', 'or', 'nand', 'nor', and 'not' operators work, and their role these operators play in formal logic. Us... 44 Subjects: including prealgebra, reading, chemistry, Spanish
{"url":"http://www.purplemath.com/El_Segundo_Prealgebra_tutors.php","timestamp":"2014-04-21T11:07:48Z","content_type":null,"content_length":"24193","record_id":"<urn:uuid:20ab6cc1-52e5-4658-9634-f76dcb0d85e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"}