Lauderdale Isles, FL Math Tutor Find a Lauderdale Isles, FL Math Tutor ...I have over 1000 in-service points and course work related to my field. I have taught students in center schools, hospital/home-bound students, and students in self-contained classrooms, and have worked as a support facilitator for students in general education classes who need extra assistance. As a certified ESE teacher, I work with students with ADD/ADHD on a daily basis. 33 Subjects: including algebra 1, special needs, elementary (k-6th), grammar ...I have over six years' experience teaching reading skills. There are certain things good readers do. Once an individual learns those strategies, they can read anything. 14 Subjects: including prealgebra, English, reading, writing ...When the children enrolled, they did not speak English at all. I have taught children how to speak English at a very young age, as young as two-and-a-half years old. In a month, the child learns many vocabulary words in English, and by four months the child is speaking in sentences. 16 Subjects: including SPSS, ACT Math, ESL/ESOL, English ...I have a Bachelor of Science in Family and Child Sciences, with a minor in Psychology. I am also working on a Nursing Degree. I have a very broad base of knowledge in the arts and sciences, so I can assist students in many different subjects. 14 Subjects: including geometry, economics, algebra 1, ACT Math ...I was very active in High School in percussive performance and love to play the drum set. I am a multiple-award member of Who's Who in American High School Scholars, have an Associates Degree in General Engineering and qualify for (but do not have) an Associates Degree in Liberal Arts. I graduated 14th in my High School in 1998 and have a current GPA of .366 at Broward College. 36 Subjects: including prealgebra, chemistry, geometry, grammar Related Lauderdale Isles, FL Tutors Lauderdale Isles, FL Accounting Tutors Lauderdale Isles, FL ACT Tutors Lauderdale Isles, FL Algebra Tutors Lauderdale Isles, FL Algebra 2 Tutors Lauderdale Isles, FL Calculus Tutors Lauderdale Isles, FL Geometry Tutors Lauderdale Isles, FL Math Tutors Lauderdale Isles, FL Prealgebra Tutors Lauderdale Isles, FL Precalculus Tutors Lauderdale Isles, FL SAT Tutors Lauderdale Isles, FL SAT Math Tutors Lauderdale Isles, FL Science Tutors Lauderdale Isles, FL Statistics Tutors Lauderdale Isles, FL Trigonometry Tutors Nearby Cities With Math Tutor Brickell, FL Math Tutors Carl Fisher, FL Math Tutors Flamingo Lodge, FL Math Tutors Golden Isles, FL Math Tutors Hallandale Beach, FL Math Tutors Inverrary, FL Math Tutors Keystone Islands, FL Math Tutors Ludlam, FL Math Tutors Melrose Vista, FL Math Tutors Modello, FL Math Tutors Port Everglades, FL Math Tutors South Florida, FL Math Tutors Sunset Island, FL Math Tutors West Dade, FL Math Tutors West Hollywood, FL Math Tutors
{"url":"http://www.purplemath.com/Lauderdale_Isles_FL_Math_tutors.php","timestamp":"2014-04-21T02:05:51Z","content_type":null,"content_length":"24509","record_id":"<urn:uuid:2450c04e-c897-4577-92c3-4d8e176125d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

In triangle ABC, CD bisects the angle BCA and D lies on side AB. Angle C = 60 degrees, AC = 8 cm and BC = 6 cm.
Calculate the length of AB, giving your answer correct to one decimal place.

Are you allowed to use the law of cosines?

Use the law of cosines.... \(\large (AB)^2=(AC)^2+(BC)^2-2(AC)(BC)cosC \)

I'm not sure what DC has to do with anything...put there to confuse, I guess? I can't think of a better way to solve it.

@dpaInc I tried to use the cosine rule, but i don't think that is how you do it cause you get a whole no. as an answer and the question asks you to round off to 1 d.p. Furthermore i checked the solution booklet and it says the answer is 7.2 cm.

...I didn't get a whole answer, but I also didn't get 7.2. Hmm...

Oh, wait. Hold on.

No, I did get 7.2, never mind. c^2 = 6^2 + 8^2 - 2(6)(8)cos(60)

oh i must have made a mistake then! THANKS

I get 13.8 for some reason

put the calculator in degrees measure.
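As a quick numerical check of the law-of-cosines calculation in this thread, here is a short Python sketch (my own illustration; the function name and argument order are arbitrary):

```python
import math

def third_side(a, b, gamma_degrees):
    """Law of cosines: the side opposite angle gamma, given the two adjacent sides a and b."""
    gamma = math.radians(gamma_degrees)            # cos() expects radians
    return math.sqrt(a**2 + b**2 - 2*a*b*math.cos(gamma))

# AC = 8 cm, BC = 6 cm, angle C = 60 degrees
print(round(third_side(8, 6, 60), 1))              # 7.2, matching the solution booklet

# Forgetting the conversion (treating 60 as radians) reproduces the 13.8 reported above.
print(round(math.sqrt(8**2 + 6**2 - 2*8*6*math.cos(60)), 1))   # 13.8
```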
{"url":"http://openstudy.com/updates/4fffc67de4b0848ddd64d7a2","timestamp":"2014-04-17T09:59:43Z","content_type":null,"content_length":"68922","record_id":"<urn:uuid:61fdaa88-efa9-4a8d-bfa0-066f94d3e2c4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Prime Beef Prime Beef To the Editors: I have two remarks regarding Sarah Glaz’s interesting column “Ode to Prime Numbers” (Macroscope, July–August). First, it is true that Leonhard Euler was the first to study the zeta function in depth, as Glaz says, but it wasn’t “in the early 1800s”—the great mathematician died in 1783 (a prime number!). Second, contrary to the author’s statement, there are ingenious formulas that yield the full set of prime numbers. On the other hand, as far as I know, none is even close to being computationally practical. Here is one that has been called “prime beef,” based on Wilson’s Theorem: Let m and n be nonnegative integers, and let k = n! – m(n + 1) + 1. Then, f(m, n) = 2 + ½ (n – 1) [|k^2 – 1| – (k^ 2 – 1)] yields all and only primes. Don’t be surprised if the equation yields only 2 for a long time as you move through increasing integers. But eventually, for example, f(329,891, 10) = 11. I suppose even a computer would go batty calculating primes this way. Luis F. Moreno Binghamton, NY Dr. Glaz responds: Euler indeed died in 1783. He gave the infinite product formula in 1737—“early in the 18th century” and not “in the early 1800s.” I am happy for the opportunity to correct this typo and also thank Michael Rochester (Professor Emeritus at Memorial University) for letting me know of it in a private email. On the other hand, my statement about prime numbers, “no formula has been found that covers them all,” is correct. There exist a number of formulas supposed to generate all the prime numbers, but in practice all of them are deficient in some way. An informative essay about formulas for primes may be found at Wolfram Math World (http://mathworld.wolfram.com/PrimeFormulas.html). The essay begins as follows: “There exist a variety of formulas for either producing the nth prime as a function of n or taking on only prime values. However, all such formulas require either extremely accurate knowledge of some unknown constant, or effectively require knowledge of the primes ahead of time in order to use the formula.” Although formulas for primes seem to exist, for all practical purposes, they do not. I hope the situation will be remedied in the future. Meanwhile, it is an interesting topic for a poem.
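The Wilson's-theorem formula quoted in the letter is easy to test directly. Below is a small Python sketch of it (my own transcription of the letter's f(m, n); as the letter suggests, it is hopelessly impractical as a way of producing primes):

```python
from math import factorial

def f(m, n):
    """The letter's prime-generating formula: k = n! - m(n+1) + 1, then
    f(m, n) = 2 + (1/2)(n - 1)[|k^2 - 1| - (k^2 - 1)]."""
    k = factorial(n) - m * (n + 1) + 1
    bracket = abs(k**2 - 1) - (k**2 - 1)     # equals 2 when k = 0, otherwise 0
    return 2 + (n - 1) * bracket // 2

# By Wilson's theorem, k = 0 exactly when n + 1 is prime and m = (n! + 1)/(n + 1),
# in which case f returns n + 1; for every other input it simply returns 2.
print(f(329891, 10))   # 11, the example given in the letter
print(f(5, 4))         # 5   (since (4! + 1)/5 = 5)
print(f(7, 4))         # 2   (the "default" output)
```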
{"url":"http://www.americanscientist.org/issues/pub/2013/6/prime-beef","timestamp":"2014-04-18T01:40:22Z","content_type":null,"content_length":"85379","record_id":"<urn:uuid:81538e41-fe6c-49e8-8f00-a43316230d05>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
A constant associated to the character table of a finite group

Following on from some of my previous MO questions on finite group theory... $\newcommand{\Irr}{\operatorname{Irr}}\newcommand{\Conj}{\operatorname{Conj}}\newcommand{\AMZL}{{\rm AM}_{\rm Z}}$ Let $G$ be a finite group, $\Irr$ the set of irreducible characters (working over $\mathbb C$) and $\Conj$ the set of conjugacy classes. Consider the following quantity, which seems to have first been introduced in work of Azimifard, Samei and Spronk: $$\alpha_G = \frac{1}{|G|^2} \sum_{\phi\in\Irr} \sum_{C\in\Conj} \phi(e)^2 |\phi(C)|^2 |C|^2 . $$ It is not hard to show, just using basic character theory, that $\alpha_G \geq 1$ with equality if and only if $G$ is abelian. This constant/invariant arises as a minorant for a more interesting invariant of $G$ which was studied in the aforementioned paper. Call this "more interesting invariant" $\AMZL(G)$. It turns out that $\AMZL(G)\geq \AMZL(G/N)$ for every normal subgroup $N$, a property which is vital to some work I am writing up that obtains lower bounds on $\AMZL(G)$. One branch of the case-by-case attack I'm using works by getting lower bounds on $\alpha_G$ when $G$ has trivial centre, and so if $\alpha_G$ never increases when we replace $G$ with a quotient then I could in fact get general lower bounds on $\alpha_G$ by inductively modding out by the centre.

Question. Do there exist a finite group G and a normal subgroup N for which $\alpha_{(G/N)} > \alpha_G$ ?

I did try for a while to show this can never happen but rapidly got stuck, so I'm hoping that MO readers who are more skilled/experienced with finite groups can either suggest counterexamples to try, or give some evidence that no such counterexamples exist. Please bear in mind that I have next to no experience with GAP or similar, so I don't mind concrete suggestions of code, but responses of the form "get GAP to look through all non-simple non-abelian examples of order < 100" may not be as helpful as their authors imagine.

Just in case it helps to rule out certain avenues of attack: if $G$ is a group with two character degrees (i.e. there is an integer $m$ such that $\phi(e)\in \{1,m\}$ for all $\phi\in\Irr$) then one can evaluate $\alpha_G$ explicitly, using ideas similar to this paper: one obtains $$ \alpha_G - 1 = (m^2-1) \left( 1- \frac{|L|}{|G|^2} \sum_{C\in\Conj} |C|^2 \right) $$ where $|L|$ is the number of linear characters on $G$. In particular, suppose $G=H \rtimes C_2$ is a generalized dihedral group with $|H|=2n+1$ ($n$ a positive integer). Back of the envelope scribbling gives me $m=2$, $|L|=2$ and the conjugacy classes have sizes $1$, $2$ repeated $n$ times, and $2n+1$, so that $$ \frac{\alpha_G -1}{3} = 1 - \frac{1}{2(2n+1)^2}(1+ 4n + (2n+1)^2) = \frac{1}{2}\left(1-\frac{1}{2n+1}\right)^2 $$ It also seems to me that proper quotients of $G$ must either be abelian or generalized dihedral with smaller order, in which case we would have $\alpha_G \geq \alpha_{(G/N)}$ for every $N \lhd G$.

Tags: finite-groups, character-theory

Something must be wrong here, because if $\alpha$ has the properties you describe then examples of $\alpha_G > \alpha_{G/N}$ must abound: given that $\alpha_G \geq 1$ with equality iff $G$ is abelian, it's enough to use any $G,N$ where $G/N$ is abelian but $G$ is not. For example, $G=S_n$ and $N=A_n$ for any $n\geq 3$, or $G$ the quaternion group and $N$ its center. – Noam D. Elkies Aug 14 '13 at 23:22

@NoamD.Elkies Did I mistype something? I'm looking for examples where alpha of the quotient is strictly greater than alpha of the original G. – Yemon Choi Aug 14 '13 at 23:54

Sorry, my fault: I misread the inequality; you're right. – Noam D. Elkies Aug 15 '13 at 1:34
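For readers who want to experiment, the displayed sum for $\alpha_G$ is easy to evaluate from a character table. Here is a small Python sketch (my own illustration, not from the question or comments), hard-coding the character table of $S_3$:

```python
from fractions import Fraction

def alpha(group_order, class_sizes, char_table):
    """alpha_G = (1/|G|^2) * sum_{phi, C} phi(e)^2 * |phi(C)|^2 * |C|^2,
    computed from a character table given as one row per irreducible character."""
    total = 0
    for row in char_table:
        deg = row[0]                               # phi(e): the value on the identity class
        for value, size in zip(row, class_sizes):
            total += deg**2 * abs(value)**2 * size**2
    return Fraction(total, group_order**2)

# S_3: classes ordered (identity, 3 transpositions, 2 three-cycles)
class_sizes = [1, 3, 2]
char_table = [
    [1,  1,  1],    # trivial character
    [1, -1,  1],    # sign character
    [2,  0, -1],    # 2-dimensional character
]
print(alpha(6, class_sizes, char_table))   # 5/3, consistent with alpha_G > 1 for nonabelian G
```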
{"url":"http://mathoverflow.net/questions/139472/a-constant-associated-to-the-character-table-of-a-finite-group","timestamp":"2014-04-19T15:42:19Z","content_type":null,"content_length":"52333","record_id":"<urn:uuid:6622a1ef-3994-4032-8324-3bc58c89fa41>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Color algebra: How do I pick a color F that will make known color C against known background B at opacity N? February 18, 2008 4:39 PM

Color algebra: Given a color C, how do I pick a color F that will make color C when against background color B at opacity N? How would I do the same thing for a color multiplication? I'm guessing one can start by treating each color as an RGB or CMYK vector, but I'm more than a little fuzzy on exactly what these operations mean and therefore how to invert them to pick F. If context matters in defining the operations, practically I'm concerned with whatever Illustrator CS and Fireworks 8 mean by the terms, but I'm interested in general observations as well.
posted by weston to Computers & Internet (18 answers total) 5 users marked this as a favorite

Assuming opacity is defined in terms of a linear blending of the two layers, you can just use algebra. For some values of C, B, and N, F might be outside the color space.
C = B * (1 - N) + F * N
F * N = B * (1 - N) - C
F = (B * (1 - N) - C) / N
posted by 0xFCAF at 5:01 PM on February 18, 2008

0xFCAF: Obviously a computer isn't going to produce colors outside of the color space by any means. The method used by computers to determine the output color of two colors with an alpha channel is Alpha Blending. The color values used in that formula are normalized from 0 - 1.
posted by delmoi at 5:20 PM on February 18, 2008

I'm reasonably sure this can't be done. Certainly not universally for any opacity and any background/foreground color combination, though you might be able to make it work for very slight color differences. Even if opacity didn't enter into the mix, determining what the color will "be" is tricky. Two of the classics of color theory, The Elements of Color and Interaction of Color, describe perceptual color illusions. The effect is better in print, but surrounding colors influence our perception. That's why the two identical blue squares look different and the two identical brown squares look different, even though they're not. But putting aside perceptual problems, you're still dealing with aspects of Hue, Saturation and Brightness. Once you throw opacity into the mix, any background color that differs noticeably along more than one of those axes is going to be practically impossible to compensate for.
posted by Jeff Howard at 5:32 PM on February 18, 2008

Normalization isn't relevant - it's still possible that F is not in the range of a given color space. Set N = 0.00001, B = black and C = white --> F will be 'brighter' than any color that is normally representable.
posted by 0xFCAF at 5:42 PM on February 18, 2008

0xFCAF: eponysterical answerer?
posted by jpdoane at 7:07 PM on February 18, 2008

Disregarding transparency: Find F such that F on background color B looks like color C. This is underdetermined, because color F also depends on the size and shape of the region (and also on the size and shape of the surround) and also on whether the F region is moving.
posted by hexatron at 7:07 PM on February 18, 2008

F = (B * (1 - N) - C) / N
So what's the multiplication (and associated division) operation here? Is it multiplication inside each position of an RGB or CMYK tuple?
But putting aside perceptual problems, you're still dealing with aspects of Hue, Saturation and Brightness. Once you throw opacity into the mix, any background color that differs noticeably along more than one of those axes is going to be practically impossible to compensate for.
This is underdetermined, because color F also depends on the size and shape of the region (and also on the size and shape of the surround) and also on whether the F region is moving.
I think it's safe to put aside perceptual issues for the practical application I'm interested in. But if there is a mechanic involved in the interaction between an arbitrary background and a transparent foreground that makes combining the two to yield a given color difficult or impossible, I'm interested.
posted by weston at 8:28 PM on February 18, 2008

I'm not knowledgeable enough with the workings of CMYK tuples to opine on them, but if you work in RGB space, then it's a vector equation that works for each color element independently. e.g. F_red = [B_red * (1 - N) - C_red] / N and the same equation works for the green and blue components respectively. And as 0xFCAF said, there are some values for this equation that don't work, because no color combined with B at opacity N could possibly result in C. If you try to use this equation in such a circumstance, it will give you a color F that is nonsensical (i.e. negative or impossibly bright).
posted by Humanzee at 8:52 PM on February 18, 2008

For theoretical applications, the existence of an equation is interesting, but for practical applications it seems less helpful. Humanzee's illustration of negative or impossibly bright colors is a good example. Those colors don't actually exist in a way that Fireworks or Illustrator can utilize. There is a narrow range of circumstances where what you're suggesting is possible. If the background and the color to match differ only in saturation then F is pretty easy to find, especially if the color to match is in the middle of the spectrum. Likewise for two colors that differ only in brightness. If you want to do this with colors that differ in hue, C needs to be a hue that incorporates B as a component. So if B is yellow, then this only works if C happens to be either green or orange; otherwise there's no practical F that can mix with B to get C. (And if B isn't a primary color you're out of luck.) Once you add variations in saturation and brightness to variations in hue, this becomes almost impossible as a practical exercise.
posted by Jeff Howard at 10:43 PM on February 18, 2008

Normalization isn't relevant - it's still possible that F is not in the range of a given color space. Set N = 0.00001, B = black and C = white --> F will be 'brighter' than any color that is normally representable.
Um, no, according to your own formula "F = (B * (1 - N) - C) / N." Then F = (0 - 1) / 10^-5 = -10^5, which is actually 'darker' than any color expressed. Unfortunately, your formula is wrong. The actual formula (for C_a 'over' C_b), as per wikipedia, would be C_o = C_a*α_a + C_b*α_b*(1 - α_a). Doing variable substitution, that works out to: F = C*N + B*1*(1 - N). Doing numerical substitution we get F = 1*10^-5 + 0*1*(1 - 10^-5), which of course works out to a positive number between 0 and 1.
posted by delmoi at 10:08 AM on February 19, 2008

I'm reasonably sure this can't be done. Certainly not universally for any opacity and any background/foreground color combination, though you might be able to make it work for very slight color differences.
This is totally absurd. If it were not possible, then there would be no way to do layer opacity in photoshop, for example. Alpha blending is a mathematical formula applied to binary data. The result is always the same, and weston's goal here is to produce the same output as various art programs.
posted by delmoi at 10:12 AM on February 19, 2008

For theoretical applications, the existence of an equation is interesting, but for practical applications it seems less helpful. Humanzee's illustration of negative or impossibly bright colors is a good example.
No, it's a fucking mistake.
posted by delmoi at 10:13 AM on February 19, 2008

(sorry for getting upset, it bugs me when someone will just say something in the beginning of a thread and everyone just goes along with it. I posted a link to the correct formulas in the second comment.)
posted by delmoi at 10:19 AM on February 19, 2008

Delmoi, I think you're misreading my post. I'm not arguing that layer blending is impossible. I'm only arguing that there are limits to the range of colors that are possible to mimic using layer blending. As I understand it, the OP's goal is to match colors precisely using layer blending techniques common to Illustrator and Fireworks. He essentially wants to "game" the blending to arrive at a pre-determined color. But it only works under a narrow range of circumstances. Say you want to place a 50% transparent color (F) over bright green (B). You're trying to produce bright red (C). There's no color that will mix with bright green at 50% and get a bright red. You can certainly blend any two colors, but not to arrive at any arbitrary result.
posted by Jeff Howard at 10:32 AM on February 19, 2008

I think you're misreading my post. I'm not arguing that layer blending is impossible. I'm only arguing that there are limits to the range of colors that are possible to mimic using layer blending.
Ugh, crap, you're right. I somehow managed to read this post wrong today. Ugh.
posted by delmoi at 10:37 AM on February 19, 2008

Yes, it turns out I am retarded. Sorry.
posted by delmoi at 10:39 AM on February 19, 2008

Say you want to place a 50% transparent color (F) over bright green (B). You're trying to produce bright red (C). There's no color that will mix with bright green at 50% and get a bright red. You can certainly blend any two colors, but not to arrive at any arbitrary result.
Right of course. However you could still store the 'impossible color' in memory, and display a clipped version on the screen (so 'superbright' colors would appear as 100%, and 'superdark' colors would appear as 0%). The impossible color needed in your example could be a color that would always have a maxed-out red channel, but use the normal alpha blending function on the green and blue channels. But you would need to write the composing function yourself, and I'm not sure if you can do things like that in Illustrator or Fireworks. Also, I'm still a total moron. Apologies to everyone :P
posted by delmoi at 11:01 AM on February 19, 2008

A day late and a dollar short -- I misunderstood the problem in my first comment. But in fact, the first solution
F = (B * (1 - N) - C) / N
posted by 0xFCAF at 8:01 PM on February 18
is correct, except that it disregards gamma. When arithmetic operations are performed on color values, the values are assumed to be linear wrt brightness (so that if A=2*B, then A emits twice as many photons/second as B). But the numbers given for RGB or CMYK are not linear wrt brightness -- they are roughly exponential. And so we assume they are actually exponential, with exponent (γ), the letter long used in photographic science and taken over into computer graphics. For most monitors, γ is between 2 & 3. And the formula above turns into the following mess:
Let f = (pow(B,γ)*(1-N) - pow(C,γ)) / N (applying gamma)
Then if f>0, F = pow(f, 1/γ) (converting from linear back to pixel values)
where pow(a,b) is the power function a to the b-th power.
Note that f can be negative. This means that F is not just out-of-gamut, it is actually an unrealizable color. As noted above, there is NO 50% transparent color you can put over a pure green source that will result in pure red. The green source may have absolutely no red in it. Since the filter is not luminous itself, where would the red photons come from? And this has nothing to do with the limitations of a CRT or color space. It's fizzicks.
posted by hexatron at 5:50 PM on February 20, 2008
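To make the algebra in the thread concrete, here is a rough Python sketch (my own, not any commenter's code) of the linear, gamma-ignoring case: it solves C = F*N + B*(1 - N) for F channel by channel and reports whether the required F is actually displayable.

```python
def solve_foreground(C, B, N):
    """Solve C = F*N + B*(1 - N) for F per RGB channel (components in [0, 1])."""
    F = tuple((c - b * (1 - N)) / N for c, b in zip(C, B))
    displayable = all(0.0 <= f <= 1.0 for f in F)
    return F, displayable

# Mid-gray target over a white background at 50% opacity: needs pure black -- feasible.
print(solve_foreground((0.5, 0.5, 0.5), (1, 1, 1), 0.5))   # ((0.0, 0.0, 0.0), True)

# Bright red target over bright green at 50% opacity: impossible, as Jeff Howard argues.
print(solve_foreground((1, 0, 0), (0, 1, 0), 0.5))          # ((2.0, -1.0, 0.0), False)
```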
{"url":"http://ask.metafilter.com/83972/Color-algebra-How-do-I-pick-a-color-F-will-make-known-color-C-against-known-background-B-at-opacity-N","timestamp":"2014-04-18T23:24:49Z","content_type":null,"content_length":"38582","record_id":"<urn:uuid:117c7d7e-de71-443b-b60b-157511abb1fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] How do I make a diagonal matrix?
Alan G Isaac aisaac at american.edu
Fri Jun 23 16:00:29 CDT 2006

> Alan G Isaac wrote:
>> Why is a.flat not the same as a.A.flat?

On Fri, 23 Jun 2006, Travis Oliphant apparently wrote:
> It is the same object except for the pointer to the underlying array.
> When asarray(a.flat) gets called it looks to the underlying array to get
> the sub-class and constructs that sub-class (and matrices can never be 1-d).
> Thus, it's a "feature"

I doubt I will prove the only one to stumble over this. I can roughly understand why a.ravel() returns a matrix; but is there a good reason to forbid truly flattening the matrix? My instincts are that a flatiter object should not have this hidden "feature": flatiter objects should produce a consistent behavior in all settings, regardless of the underlying array. Anything else will prove too surprising.

More information about the Numpy-discussion mailing list
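For anyone retracing this old thread on a current NumPy, a brief sketch of the matrix/ndarray flattening behaviour being discussed (np.matrix still exists but is discouraged nowadays, and details have shifted somewhat across versions):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])

print(m.ravel().shape)     # (1, 4): ravel() on a matrix stays 2-D, since matrices are never 1-D
print(m.A.ravel().shape)   # (4,): .A gives the underlying ndarray, which flattens to a true 1-D array
print(list(m.flat))        # [1, 2, 3, 4]: iterating the flatiter itself walks the elements either way
```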
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-June/008793.html","timestamp":"2014-04-16T19:33:10Z","content_type":null,"content_length":"3483","record_id":"<urn:uuid:f79cbe08-ad3e-4440-8322-d0ff456b7da5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - User Profile for: sam_x_@_otmail.com
User Profile: sam_x_@_otmail.com
UserID: 877749
Name: Sam
Registered: 5/15/13
Total Posts: 6
{"url":"http://mathforum.org/kb/profile.jspa?userID=877749","timestamp":"2014-04-19T05:26:26Z","content_type":null,"content_length":"12677","record_id":"<urn:uuid:c59f971e-c960-4f5d-939e-65e9ffe0e1de>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
characterization of cofibrations in CW-complexes with G-action

Is there a condition for a $G$-equivariant map $X \to Y$ to be a cofibration of $G$-spaces? Here $X$ and $Y$ are CW complexes, the group $G$ is finite, and acts by cellular maps. I am using the model structure on CW-complexes with G-action where the fibrations and weak equivalences are those maps which are fibrations, respectively weak equivalences, when we forget the $G$-action.

There is no model structure on the category of CW-complexes since it is not complete or cocomplete, but I presume you mean the model structure on all G-spaces. – Mike Shulman Feb 20 '10 at 1:37

Since this was already on the front page, I decided to retag with model-categories. – David White Aug 18 '11 at 22:41

1 Answer

In the model structure you describe, the cofibrations should be the retracts of the free relative G-cell maps: i.e., retracts of maps obtained by attaching cells of the form $G \times S^{n} \to G \times D^{n+1}$. One way to see this is via the following general machine: There is an adjoint pair $$ G \times -: \mathbf{Top} \leftrightarrow \mathbf{GTop}: forget $$ $\mathbf{Top}$ is a cofibrantly generated model category and one can check that this adjoint pair satisfies the conditions of the standard Lemma for transporting cofibrantly generated model structures across adjoint pairs (see e.g., Hirschhorn's "Model Categories and Their Localizations", Theorem 11.3.2). Thus, it gives rise to a model structure on $\mathbf{GTop}$ such that a map in $\mathbf{GTop}$ is an equivalence (resp. fibration) iff its image under the right adjoint (forget) is so. Moreover, the generating (acyclic) cofibrations are precisely the images under the left adjoint ($G \times -$) of the generating (acyclic) cofibrations in $\mathbf{Top}$. This yields the description of the cofibrations as retracts of (free G-)"cellular" maps.

Also, some context: The model structure you describe (which I'd like to call "Spaces over BG") is a localization of a model structure "G-Spaces" (where the weak equivalences are maps inducing weak equivalences on all fixed point sets). An argument along the lines of the above constructs this other model structure and identifies its cofibrations with retracts of (arbitrary) relative G-cell maps: i.e., retracts of maps obtained by attaching cells of the form $G/H \times S^n \to G/H \times D^{n+1}$ for $H$ a closed subgroup of $G$.
{"url":"http://mathoverflow.net/questions/15850/characterization-of-cofibrations-in-cw-complexes-with-g-action?sort=newest","timestamp":"2014-04-21T09:41:57Z","content_type":null,"content_length":"55296","record_id":"<urn:uuid:cc33d0e7-97ca-4de7-97c8-397a5e9df7f1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEX Lesson Plans Subject: Mathematics (9 - 12), or Technology Education (9 - 12) Title: Family Ties: Parabolas Description: This lesson allows students to manipulate the parameters while using the vertex form of the equation of a parabola to see the effects on the graph. The spreadsheet can be altered for other functions.This lesson plan was created as a result of the Girls Engaged in Math and Science University, GEMS-U Project.
{"url":"http://alex.state.al.us/all.php?std_id=54369","timestamp":"2014-04-17T00:55:34Z","content_type":null,"content_length":"26332","record_id":"<urn:uuid:d39337fc-2afb-47b0-8989-17c3108853f2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Stokes Theorem

October 27th 2012, 03:00 AM  #1  (Junior Member, Sep 2012)
Stokes Theorem
I am attempting to solve a question on Stokes' theorem, but i have got stuck with this question in which i have to first find the equation of a line passing through three points. The points are (3,0,0), (0,3/2,0), (0,0,3). I remember doing something like this in calculus class but i dont remember how exactly to go about such problems. Someone please help me out with this. Help will be highly appreciated.

October 27th 2012, 03:46 AM  #2  (Super Member, Sep 2012, Washington DC USA)
Re: Stokes Theorem
Two distinct points determine a line. When you have three distinct points, they *might* all be on the same line (in which case those points would be called "collinear"). So, do it for two points, then see if the 3rd point is also on that line. So, pick two of those points (say (3, 0, 0) and (0, 0, 3) since they're the simplest) and find the line between them. There are a few ways to do this. I prefer vectors.
$\text{Let } \vec{v} = 3\hat{i} - 3\hat{k} \ ( \ = (3, 0, 0) - (0, 0, 3) = (3, 0, -3) \ ).$
$\text{Let } \vec{w} = 3\hat{i} \ ( \ = (3, 0, 0) \ ).$
$\text{Then } \vec{L}(t) = \vec{v} \ t + \vec{w} \text{ is the equation of the line through those two points.}$
$\text{Note that } \vec{L}(t) = \vec{v} \ t + \vec{w} = (3\hat{i} - 3\hat{k}) \ t + (3\hat{i}) \in \text{LinearSpan} \{ \hat{i}, \hat{k} \},$
$\text{and so } \vec{L}(t) \ne (3/2)\hat{j} \text{ for any value of } t.$
Thus those three points are not collinear - there is no single line going through all three of them.
Last edited by johnsomeone; October 27th 2012 at 03:49 AM.

October 27th 2012, 06:20 AM  #3  (Junior Member, Sep 2012)
Re: Stokes Theorem
Ok i got that, but what if i want to find the equation for the triangle that passes through these three points? How do i go about that?

October 27th 2012, 09:48 AM  #4  (Super Member, Sep 2012, Washington DC USA)
Re: Stokes Theorem
"The equation of a triangle"? What do you mean?
1) the equations for the 3 line segments that comprise the triangle (3 different equations, since the points are not collinear).
2) the equation of the plane passing through those three points.
3) the formula for any point in the convex hull formed by those three points (i.e. all points in the filled-in triangle).

October 27th 2012, 05:47 PM  #5  (Junior Member, Sep 2012)
Re: Stokes Theorem
Sorry for not being clear, but what i meant was, how do i find the equation of the plane passing through these three points, which actually is a triangle in the 3D space? Is it going to be x+2y+z=3?
Last edited by mrmaaza123; October 27th 2012 at 05:53 PM.

October 27th 2012, 10:55 PM  #6  (Super Member, Sep 2012, Washington DC USA)
Re: Stokes Theorem
That's correct. The equation of the plane containing those 3 points is x+2y+z = 3. Since three distinct non-collinear points determine a unique plane, and since those three points each satisfy that equation for a plane, you can be sure that that equation is the equation of the plane containing those three points.
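A quick way to double-check the plane found at the end of the thread is to build its normal vector with a cross product. A small NumPy sketch (my own check, not part of the forum posts):

```python
import numpy as np

p1, p2, p3 = np.array([3.0, 0, 0]), np.array([0, 1.5, 0]), np.array([0, 0, 3.0])

# Two edge vectors lie in the plane, so their cross product is a normal vector n,
# and the plane is n . x = n . p1.
n = np.cross(p2 - p1, p3 - p1)
d = np.dot(n, p1)

print(n, d)                                        # [4.5 9.  4.5] 13.5  ->  x + 2y + z = 3 after dividing by 4.5
print([float(np.dot(n, p)) for p in (p1, p2, p3)]) # [13.5, 13.5, 13.5]: all three points satisfy it
```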
{"url":"http://mathhelpforum.com/calculus/206169-stokes-theorm.html","timestamp":"2014-04-18T03:08:55Z","content_type":null,"content_length":"44636","record_id":"<urn:uuid:b47bf09f-3886-40c5-bf60-9c41071e5cd5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Development and Implementation of a Performance-Related Specification for I-65 Tennessee: Final Report

Chapter 4. Implementation of the Performance-Related Specification

The evaluated I-65 construction work was completed between May and October 2004. It included two outside lanes in both the northbound and southbound directions. Fourteen lots (24 ft [7.3 m] wide) were placed, ranging in length from 1,180 to 2,380 ft (359.7 to 725.4 m). Photos of the PCC pavement placement are shown in figures 5 and 6.

Figure 5. General view of concrete pavement construction on northbound I-65.
Figure 6. General view of concrete pavement construction on northbound I-65.

Layout of Lots and Sublots
The layout and sampling of typical sublots within a lot are shown in figure 7. The width of the lot is the width of paving: one, two, or more traffic lanes, typically. Sampling is random within each sublot.

Figure 7. Layout and sampling of typical sublots.

Pay Adjustment Computation Equations
The lot composite (overall) pay factor is computed as follows.

PF[composite] = (PF[PI] * PF[strength] * PF[thickness]) / 10000    (5)

• PF[composite] = Composite (overall) pay factor, %
• PF[strength] = Compressive strength pay factor (obtain from table 9), %
• PF[thickness] = Slab thickness pay factor (obtain from table 10), %
• PF[PI] = Initial PI pay factor (obtain from table 11), %

Averaging of pay factors from each AQC could have also been used; however, the multiplicative model is believed to more closely approximate actual performance and LCC analysis. The actual pay adjustment for an as-constructed lot is computed using the lot composite pay factor as follows. Pay adjustments will be made only on the individual lots.

PAYADJ[Lot] = BID * AREA[Lot] * (PF[composite] – 100)/100    (6)

• PAYADJ[Lot] = Pay increase (+) or decrease (-), $
• BID = Contractor bid price for pay item (31.95, $/yd^2)
• AREA[Lot] = Measured actual area of the as-constructed lot, yd^2
• PF[composite] = Composite pay factor (from equation 5), percent (e.g., 101 percent is expressed as 101.0)

PAY[Lot] = BID * AREA[Lot] + PAYADJ[Lot]    (7)

PAY[Lot] = Adjusted payment for the as-constructed lot, $

The absolute minimum value of the Composite Pay Adjustment Factor for a given lot was limited to 80 percent, and the absolute maximum value was limited to 110 percent.

Testing and Calculations of Pay Factors
Samples were collected and tests were run, as required, for each sublot and lot. The results were recorded in a spreadsheet. The example shown in figure 8 contains results for a typical lot with four sublots. The pay factors were calculated for thickness, strength, and smoothness separately. The overall lot pay factor was then determined, and the contractor pay for the lot was calculated as shown. Results from all 14 lots are provided in appendix B. A set of expected pay charts is provided in appendix C.

Figure 8. Spreadsheet used for calculating pay factors for thickness, strength, and smoothness for each sublot; overall lot pay factor; and contractor pay.
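Equations (5) through (7) translate directly into code. The following Python sketch uses made-up pay factors and lot dimensions purely for illustration; the table lookups that produce PF[PI], PF[strength], and PF[thickness] are assumed to have been done already.

```python
def composite_pay_factor(pf_pi, pf_strength, pf_thickness):
    """Equation 5. Pay factors are percentages (101% is written 101.0);
    the composite factor is limited to the 80-110 percent range per the spec."""
    pf = pf_pi * pf_strength * pf_thickness / 10000.0
    return min(max(pf, 80.0), 110.0)

def lot_payment(bid, area, pf_composite):
    """Equations 6 and 7: pay adjustment ($) and adjusted total payment ($) for one lot."""
    adjustment = bid * area * (pf_composite - 100.0) / 100.0
    return adjustment, bid * area + adjustment

# Hypothetical lot: factors of 102, 101 and 99 percent, 10,000 yd^2 at the $31.95/yd^2 bid price.
pf = composite_pay_factor(102.0, 101.0, 99.0)
adj, pay = lot_payment(31.95, 10_000, pf)
print(round(pf, 2), round(adj, 2), round(pay, 2))   # 101.99, and roughly a $6,357 incentive
```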
{"url":"http://www.fhwa.dot.gov/pavement/concrete/pubs/if06008/ch4.cfm","timestamp":"2014-04-17T02:20:22Z","content_type":null,"content_length":"9949","record_id":"<urn:uuid:a6df386a-bce4-4156-9bef-d5bcb9f5ffb4>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler's totient function (pronounced to rhyme with 'quotient') is a unary function on positive integers. It is written as a lowercase phi letter, φ. φ(n) is the number of positive integers less than n and relatively prime to n (including 1).

The totient of a prime p is always (p - 1), because every positive integer less than p is prime relative to p. The totient of the power of a prime, n = p^i, is (n - p^(i - 1)) = (p^i - p^(i - 1)) = (n * (1 - (1 / p))), because there are p^(i - 1) positive integers less than n which have a factor of p, and so are not relatively prime to n.

The totient of the product of a set of relatively prime numbers is equal to the product of the totients of each element. This can be seen by taking any pair of relatively prime numbers, x and y. There is a set of positive integers, x[1], x[2], …, x[a], which are all prime relative to x, and φ(x) = a. Likewise there is a set of numbers y[1], y[2], …, y[b], which are all prime relative to y, and φ(y) = b. Consider the set of numbers {(x[i] * y) + (y[j] * x) : 0 < i ≤ φ(x); 0 < j ≤ φ(y)}. It is easy to see that the size of the set is (φ(x) * φ(y)), and that each element of the set is less than (x * y). It is not so easy to see that each element is prime relative to (x * y), and that every positive integer less than (x * y) and prime relative to it is included in the set, but it is true.

So the general formula for φ, using Eindhoven notation, is

φ(n) = n * (* : (p is prime) ∧ (p is a factor of n) : 1 - (1 / p))
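The closing product formula turns directly into code. A small Python sketch (trial-division factoring, fine for modest n, and exact because it uses rational arithmetic):

```python
from fractions import Fraction

def totient(n):
    """Euler's totient via phi(n) = n * product over prime factors p of (1 - 1/p)."""
    result = Fraction(n)
    p, remaining = 2, n
    while p * p <= remaining:
        if remaining % p == 0:
            result *= 1 - Fraction(1, p)
            while remaining % p == 0:
                remaining //= p
        p += 1
    if remaining > 1:                      # whatever is left is itself a prime factor
        result *= 1 - Fraction(1, remaining)
    return int(result)

print([totient(n) for n in (1, 7, 9, 12, 36)])   # [1, 6, 6, 4, 12]
```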
{"url":"http://everything2.com/title/totient","timestamp":"2014-04-16T05:05:10Z","content_type":null,"content_length":"21488","record_id":"<urn:uuid:b1cfddf9-5296-48cd-a40f-97d6541ebd13>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Quasiparticles do the twist One of the most memorable lessons of an undergraduate course in quantum mechanics is that identical particles can have two types of “statistics.” An exchange of two identical bosons leaves the many-particle wave function unchanged, while an exchange of two identical fermions introduces a minus sign. The consequences of this minus sign are profound in all areas of physics. Moreover, it has long been known that two-dimensional systems can have statistics that are neither bosonic nor fermionic. Robert Laughlin’s explanation of the fractional quantum Hall effect introduced quasiparticles with fractional charges such as $e/3$. These are thought to possess fractional statistics; if you exchange two identical quasiparticles, the wave function picks up a factor $exp(iϕ)$. In more complex quantum Hall states, different quasiparticle exchanges, represented by matrices, do not always commute: the quasiparticles obey non-Abelian statistics. In a theoretical paper in Physical Review B [1], Waheb Bishara and coauthors analyze recent experimental evidence [2] for this remarkable property and discuss how further experiments might probe non-Abelian statistics—an essential ingredient for the “topological quantum computer”—in more detail. Interest in such interferometry experiments has exploded in recent years, driven by the possibility of building a quantum computer that would perform operations by manipulating non-Abelian quasiparticles. The action of a quantum computer can be described by a unitary transformation, and in a topological quantum computer, that unitary transformation is built up of the matrices that describe “braiding” of quasiparticles (Fig. 1, left). These quasiparticle manipulations can be performed while keeping the quasiparticles spatially separated and also maintaining the energy gap between the low-energy states, on which the braiding acts, and others. The resulting advantage over other proposals for quantum computation is that unavoidable small errors in the braiding process, which do not change the topology of the braid, do not degrade the computation. The existence of fractional statistics is a basic consequence of the “topological order” that defines quantum Hall states in the same way that symmetry breaking defines more conventional states such as superfluids and magnets. Another consequence illustrates better why this type of order is called topological: in many quantum Hall states, the number of degenerate ground states depends on the topology of the system (whether it is a sphere or torus, for example) but not on its geometry. The essence of the quantum Hall effect is that plateaus in the Hall conductance are observed in a two-dimensional (2D) electron gas at certain densities where electrons form a quantum liquid, as first explained by Laughlin [3]. However, the nature of a few quantum Hall plateaus could not be explained by the first generalizations of the Laughlin wave function. The location of a plateau is typically described by the filling factor, which is the density in units of Landau levels (Landau levels are the highly degenerate eigenstates of a single 2D electron in a magnetic field). One unexplained plateau was at filling factor $ν=5/2$ (i.e., two filled Landau levels, plus a half-filled one) where there is no plateau at intermediate temperatures. In high-mobility samples at rather low temperatures, a clear plateau in the Hall conductance does appear at $ν=5/2$. 
Gregory Moore and Nicholas Read constructed a remarkable quantum Hall state [4] that Martin Greiter, Xiao-Gang Wen, and Frank Wilczek proposed as an explanation of the plateau at $ν=5/2$ (see Ref. [5]). The Moore-Read state can be viewed as a superconducting state obtained by pair formation from the composite-fermion metal that exists at slightly higher temperatures. The Moore-Read state was initially appealing on aesthetic grounds and received important numerical support a few years later [6]. Indirect experimental evidence has been accumulating [7], and earlier this year an interferometry experiment by Robert Willett and collaborators observed a remarkable property of the Moore-Read state: its quasiparticles obey statistics that are not just fractional, but non-Abelian. Non-Abelian statistics are possible in a system with degenerate ground states. Moving one quasiparticle around another does not simply multiply the ground state by a phase factor, but acts as a matrix on the whole space of ground states, and the matrices of different quasiparticle “braiding” operations (Fig. 1, left) need not commute. (Here we follow convention in using “ground state” to denote one of the degenerate lowest-energy states with a particular quasiparticle configuration, not the absolute ground state with no quasiparticles.) The first part of the paper by Bishara et al. critically reviews alternate explanations of the data of this interferometry experiment. All forms of fractional statistics, even the simpler Abelian variety, have been surprisingly difficult to confirm experimentally. While fractional charge can be probed relatively simply, by noise measurements of edge currents [8], for example, a measurement of statistics requires a nonlocal process in which one quasiparticle moves either around another or around a suitable defect. One experiment of this type is a Fabry-Pérot interferometer that coherently combines two paths of a quasiparticle moving along the edge around a bulk region that itself contains quasiparticles (Fig. 1, right). The actual interferometry experiment is somewhat complicated. As a side gate changes the area of the quantum Hall liquid, one sees oscillations in the total conductance of the two point contacts encircling the liquid. In Abelian quantum Hall states, the oscillations can be understood from the Aharonov-Bohm effect, which now involves the fractional quasiparticle charge, as may have been observed in one of the early quantum Hall interferometry experiments [9]. At $ν=5/2$, an additional oscillation appears for even values of the number of bulk quasiparticles. This number modifies the point-contact conductance through the braiding effect. One competing explanation for the observed signal in such an interferometer is the Coulomb blockade effect. When electrons are confined, Coulomb repulsion can give rise to oscillatory features in the tunneling conductance that mimic the Aharonov-Bohm effect [10]. The Coulomb blockade effect would not generally show the same magnetic-field dependence as the Aharonov-Bohm effect, but the field-dependent change in the area of the quantum Hall droplet might cause the effects to appear similar. Bishara et al. consider this and several other scenarios and conclude that current data strongly support the non-Abelian statistics explanation. Future experiments are suggested to distinguish between the Moore-Read state and other proposed states at $ν=5/2$. 
In particular, two alternatives that are consistent with the existing experiments are the “anti-Pfaffian” [11] state, which is essentially obtained by “subtracting” the Moore-Read state from filled Landau levels, and the $SU(2)2$ state [12]. Two recent works begin to investigate how interferometry measurements are modified when bulk quasiparticles interact with the edge strongly (i.e., not just through statistics), which may be necessary for a detailed understanding of the experiments [13, 14]. Bishara et al. also outline which interferometry experiments would address more complex states than those at $ν=5/2$. Two families of such states were introduced by Read and Rezayi [15] and by Bonderson and Slingerland [16]. There are at least two motivations for looking at these states. First, observation of the Read-Rezayi candidate state at $ν=12/5$, for example, would be the first step toward a truly “universal” topological quantum computer. Every operation needed for a quantum computer can be encoded as a braiding of quasiparticles in this state, which is not the case for any of the $ν=5/2$ candidates, nor for the Bonderson-Slingerland candidate at $ν=12/5$. In the Moore-Read state, for example, the topological braiding operations need to be supplemented by one nontopological operation in order to make a universal computer. Second, observation of the Read-Rezayi state would also be important purely on scientific grounds because its structure is more complex than that of the Moore-Read state. Most of the basic properties of the Moore-Read state appear in any weak-coupling two-dimensional superconductor with “$p+ip$” symmetry of the pair wave function, and the Bogoliubov-de Gennes description of this type of superconductor gives a simple, useful approach, based upon free particles, to the Moore-Read state. The Read-Rezayi state does not seem to have any comparably simple representation. As these rather complex fractional quantum Hall states are being probed, the topological ideas that came to condensed matter physics via the quantum Hall effect are finding wider application, e.g., in the “topological insulator” materials discovered recently in two [17] and three [18] dimensions. While these discoveries mean that topological phases of electrons can now be studied at room temperature in bulk materials, the most profound examples of how electrons are ordered continue to be found in the physics of the two-dimensional electron gas in a strong magnetic field. The experiments proposed by Bishara and collaborators to separate candidate quantum Hall states at $ν=5/2$ and other fractions will either indicate which existing theory is correct or show that, despite the many aspects of the fractional quantum Hall effect that are understood, there remain mysteries in the statistical interactions of quasiparticles.
{"url":"http://physics.aps.org/articles/v2/82","timestamp":"2014-04-17T21:34:42Z","content_type":null,"content_length":"34794","record_id":"<urn:uuid:90b788ea-b5ea-49ac-bedd-e870b2be86bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
2011 New Year - New Way of Looking at Infinite-Recurring 0.9 GeniusIsBack wrote: ( THE ROOT IS WHAT YOU THINK! Infinite-Recurring 0.9 Is ) I just Gave this to a Goldfish! and it gave a Better Answer than what I have seen So Far! I think, and one of the definitions you provided stated, that ROOT is simply an informal way of shouting "square root of." Thus, ROOT 100 = √(100) = 10. If you are using ROOT in a different manner, please provide a rigorous definition. If you don't explain your non-standard terminology to your audience, then, however genius your claim may be, it is indistinguishable from gibberish. Last edited by All_Is_Number (2011-01-27 04:55:52)
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=163797","timestamp":"2014-04-20T06:13:32Z","content_type":null,"content_length":"17329","record_id":"<urn:uuid:3baac8b7-96ad-4fcc-abb5-3144a0bd297b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: divergence-free vector fields in the plane and on torus
Replies: 0
Posted: Sep 16, 1991 6:28 AM

DIVERGENCE-FREE VECTOR FIELDS IN R^2 AND ON TORUS
Wlodzimierz Holsztynski

R - the field of reals
Z - the ring of (rational) integers
Pi - the smallest positive real for which sin(Pi)=0
T = (R / 2*Pi*Z) ^ 2 - the torus
q : R^2 --> T - the canonical quotient hom. (of ab. gp's)
Div(V) = D1(V_1) + D2(V_2) is the divergence of vector field V(x,y) = (V_1(x,y), V_2(x,y)) in R^2 or on torus (D1 & D2 are the partial derivatives).
Q = R^2 \ { (0,0) }
S = { (x,y) in R^2 | x^2 + y^2 = 1 } - the unit circle
u : Q --> S - the normalizing map; u(x,y) = (x,y)/(x^2 + y^2)^(1/2)
A notion named N introduced by a definition appears in that definition as *N* (so that it's easy to see what is defined).
o - the symbol of composition of functions

For homotopy classes of nowhere vanishing vector fields on torus it is well known that:
[T, Q] = [T, S] = H^1(T) = Z^2

Definition. W : R^2 --> Q is *diperiodic* if there exists V : T --> S such that V o q = u o W.

We may consider the homotopy equivalence within the diperiodic vector fields in R^2. The homotopy classes of diperiodic vector fields, [[R^2, Q]], are in a bijective correspondence (induced by q and u) with [T, S] = Z^2.

We want to study the divergence-free vector fields, i.e. fields W for which Div(W) = 0. The following divergence-free diperiodic vector fields W : R^2 --> Q represent all classes [[R^2, Q]] (one per class):
W(x,y) = exp(-k*x + n*y) * ( cos(n*x + k*y), sin(n*x + k*y) )
for every (x,y) in R^2, where k and n are arbitrary integers; parameters (k,n) associate the above examples W with Z^2 = [T, S] = [[R^2, Q]].

The diperiodic divergence-free vector fields are the best thing next to divergence-free vector fields on torus. We think that:

CONJECTURE. All (everywhere non-vanishing) divergence-free vector fields on torus are homotopically trivial.

I didn't just "guess" the above examples. If there is still an interest in this topic I may "derive" my examples and provide my motivation behind the conjecture.

Kenton Yee has asked about "topologically non-trivial" (:-) divergence-free vector fields on torus in an article on sci.math (without requiring that they are nowhere vanishing). For this I am grateful to him. However I didn't appreciate his "mathematical" style nor his arrogant and egoistic attitude toward sci.math (because of that I didn't feel like contributing to this otherwise interesting thread; only the creation of sci.math.research somehow has caused me to change my mind).

PS. I'd be grateful for related references and quotations (since I have difficulties reaching any math library).
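The claim that the displayed fields W are divergence-free is easy to verify symbolically. A short SymPy sketch (my own check, not part of the original post):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
k, n = sp.symbols('k n', integer=True)

# The example field from the post: W = exp(-k*x + n*y) * (cos(n*x + k*y), sin(n*x + k*y))
W1 = sp.exp(-k*x + n*y) * sp.cos(n*x + k*y)
W2 = sp.exp(-k*x + n*y) * sp.sin(n*x + k*y)

divergence = sp.diff(W1, x) + sp.diff(W2, y)
print(sp.simplify(divergence))   # 0: the -k*cos and -n*sin terms cancel the n*sin and k*cos terms

# W is also nowhere zero: the exponential factor is positive and cos, sin never vanish together.
```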
{"url":"http://mathforum.org/kb/thread.jspa?threadID=563772","timestamp":"2014-04-21T10:38:53Z","content_type":null,"content_length":"16575","record_id":"<urn:uuid:419e502a-66fb-4860-ac2b-8a6994131f55>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove square root of 3 is irrational

What is interesting is that the proof seems to originate with the Greeks, with their proof that the square root of 2 was irrational and, as they thought, "unutterable." But it seems that, historically, the proof was not followed up immediately for other integers: it took some time to arrive at the irrationality of the square root of 17! I quote what Plato tells us in the Theaetetus:

Theaetetus: Theodorus was proving to us a certain thing about square roots, I mean the square roots of three square feet and five square feet, namely, that these roots are not commensurable in length with the foot-length, and he proceeded in this way, taking each case in turn up to the root of seventeen square feet; at this point for some reason he stopped. Now it occurred to us, since the number of square roots appeared to be unlimited, to try to gather them into one class, by which we could henceforth describe all the roots.

Socrates: And did you find such a class?

Theaetetus: I think we did.
{"url":"http://www.physicsforums.com/showthread.php?t=389105","timestamp":"2014-04-19T22:57:49Z","content_type":null,"content_length":"48839","record_id":"<urn:uuid:4df17cef-7e6a-4baa-a14a-afbc4ee96dd8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Re-writing integral of x^x Hello and thanks to those reading my question. I don't know if this can be done, but I'm trying to re-write the integral of x^x in the form of an integral of R(x) e^g(x) where R(x) and g(x) are rational functions. Starting from x^x = e^(x ln(x)) I have tried various substitutions (eg. u = ln(x), u = e^x etc.) and also integration by parts but have had no luck. As I said, I'm not sure if what I'm trying to do can be done, but if it can I'm hoping someone can see how to do it and suggest a productive approach. Thanks in advance to all who have read this.
{"url":"http://mathhelpforum.com/calculus/177949-re-writing-integral-x-x.html","timestamp":"2014-04-17T11:39:34Z","content_type":null,"content_length":"40671","record_id":"<urn:uuid:edbc9d8f-7734-4e0f-a702-85ee8c995cd9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Types of Quadrilaterals

Types of quadrilaterals are discussed here:

1. Parallelogram: A quadrilateral whose opposite sides are parallel and equal is called a parallelogram. Its opposite angles are equal. Here LMNO is a parallelogram.
(i) LM is parallel to ON. LO is parallel to MN.
(ii) LO = MN, LM = ON
(iii) ∠L = ∠N, ∠M = ∠O

2. Rectangle: A rectangle is a parallelogram in which all the angles are right angles. Here ABCD is a rectangle.
(i) AB is parallel to DC. AD is parallel to BC.
(ii) AD = BC, AB = DC
(iii) ∠A = ∠B = ∠C = ∠D = 90°

3. Square: A square is a rectangle in which all sides are equal. Here ABCD is a square.
(i) AB is parallel to DC. BC is parallel to AD.
(ii) AB = BC = CD = DA
(iii) ∠A = ∠B = ∠C = ∠D = 90°

4. Rhombus: A rhombus is a parallelogram in which all sides are equal, opposite sides are parallel and opposite angles are equal. Here LMNO is a rhombus.
(i) LM = MN = NO = OL
(ii) ∠L = ∠N, ∠O = ∠M
(iii) LM is parallel to ON. OL is parallel to MN.

5. Trapezium: A quadrilateral is called a trapezium if a pair of its opposite sides is parallel. Here ABCD is a trapezium, with AB parallel to DC. If a trapezium has its non-parallel sides equal, it is called an isosceles trapezium.

The table below compares the sides and angles of the different types of quadrilaterals.

Quadrilateral | Sides                                    | Angles
Parallelogram | Opposite sides parallel and equal        | Opposite angles equal
Rectangle     | Opposite sides parallel and equal        | All angles 90°
Square        | All sides equal, opposite sides parallel | All angles 90°
Rhombus       | All sides equal, opposite sides parallel | Opposite angles equal
Trapezium     | One pair of opposite sides parallel      | No angle condition
{"url":"http://www.math-only-math.com/types-of-quadrilaterals.html","timestamp":"2014-04-17T01:02:38Z","content_type":null,"content_length":"15326","record_id":"<urn:uuid:8b007cc3-40ef-43d5-a842-166b9074d977>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
What's the chance of rain? Everyday probability barely shows up in weather forecasts these days. For example, Yahoo's weather will say something like "a few showers" to mean that there is a chance of showers. However, if you are a purist, go to the National Weather Service (NWS) site, where they still make predictions using probabilities. See the Chicago forecast for this week at the NWS site (the online NY times can barely be bothered to give any information at all, unless you can interpret their icons). When it comes to data, I've always felt more is more, and so if I am really interested in the weather, I go to the NWS site where I'll get more than just "chance of rain" or "a few showers." But what does it mean when the weather forecast says there is a 30% chance of rain Wednesday and a 50% chance of rain Thursday? If we focus on a single time period, say Thursday, the conclusion is pretty clear: there's a 50-50 chance of rain. Put another way, when encountering conditions like this in the past, the NWS model data shows rain half the time and no rain half the time. The inference becomes more difficult when we want to ask a more complex question. For example, suppose I'm going to Chicago Wednesday and returning Thursday night. I want to know whether to bring an umbrella. Since I hate lugging an umbrella along, I only want to bring one if there is at least a 75% chance of rain at some point while I'm there. It turns out that the answer to this question cannot be determined with the information given (don't you just love that choice on multiple choice tests?). Before we explain why, though, we need some definitions and notation. To do the math for this, we generally define each possible outcome as an event. In this case, we have the following events: Event A: Rains Wednesday. Event B: Rains Thursday. We are interested in the chance that either Event A or Event B occurs. We have a shorthand for expressing the probability of Event A: "P(A)". There is a simple probability formula that is very useful here: P(A or B) = P(A) + P(B) - P(A and B) This formula says that the probability of Event A or Event B happening is the probability of A plus the probability of B minus the probability that A and B both happened (the event that A and B both occurred is called the intersection of Events A and B). This makes sense because if we just added them (as you might intuitively do) we are double counting the times both events occur, and thus we need to subtract out the intersection once at the end. In some cases P(A and B)=0. In other words, Events A and B never occur together. You may have noticed this comes up when you toss a coin: it is never both heads and tails at the same time (except for that time I got the coin to stand on its side). Events like these are called mutually exclusive. In other cases P(A and B)=P(A)*P(B). This means the probability of A and B is the product of the probabilities of A and B. In this case, the two events can both occur, but they have nothing to do with each other. Events like these are called independent events. In still other cases, P(A and B) is neither 0 nor P(A)*P(B). If we assume the events A and B are mutually exclusive, then there's an 80% chance (50+30) of rain either Wednesday or Thursday. This seems unlikely though, because most storms could last more than an evening. If we assume the events A and B are independent, then there's a 65% chance of rain either Wednesday or Thursday. 
This is a little more complicated to calculate, because we need to figure out the chances of it raining both Wed. and thursday, which we assume is independent and thus is P(A)*P(B)=30%*50%=15%. Thus: P(A or B)=P(A) + P(B) - P(A and B)=50% + 30% -15%=65%. [we could also figure out the chances of not raining either night. Since the chance of rain is independent, the chance of no rain is also independent. Also, the chance of rain plus the chance of no rain must be one. Thus P(no rain Wednesday)=1-P(A)=100%-30%=70%. Similary, P(not B)=100%-50%=50%. Then, the chance of no rain either time period = P(not A and not B)=70%*50%=35%. Thus, there is a 35% chance it will not rain either night, and we can conclude there would be a 65% chance of rain one of the nights, of course all hinging on the independence assumption]. okay. So finally, we have two probabilities 80% and 65%, based on two different and rather extreme assumptions. On the high side, 80% is the most extreme. We can see this by seeing that in order to get a larger number, we'd have to plug in a negative probability for the value of P(A and B) in the general formula (which does not assume independence or anything else): P(A or B) = P(A) + P(B) - P(A and B) Since probabilities must always be at least 0 and at most 100%, we cannot have a negative number for P(A and B). So at most, the chance of rain Wednesday or Thursday is 80%. But what about the least the chance might be? Independence seems a pretty extreme assumption in the other direction, but in fact it is not. What would lead to the smallest probability is if the two events A and B were highly related--in fact so related that P(B)=1 if A occurs. This would mean the P(A and B)=30% (the smaller of P(A) and P(B)). This would lead to a probability that it rains either Wednesday or Thursday of just 50%: P(A or B) = P(A) + P(B) - P(A and B) = 30% + 50% - 30% = 50% So now that we've got rain down, let me go back to the original impetus for this blog: it is easy to make the wrong inference when given information about the chances of a series of events. The recent Freakonomics blog about chances of birth defects addresses this issue. In it, Steven Levitt describes a couple who was told that there was a 1 in 10 chance that a test, which showed an embryo was not vailable, was wrong. The test was done twice on each of two embryos, and all four times the outcome was that the embryos were not viable. Thus, the lab told them that the chances of having healthy twins from two such embryos was 1 in 10,000. Of course, after reading about rain above, you recognize this as the application of the independence assumption (1/10 times 1/10 times 1/10 times 1/10 equals 1 in 10,000). The couple didn't listen to the lab though, and, nine months later, 2 very vaiable and very healthy babies were born. Post hoc, it seems the lab should have (at least) said the chances were somewhere between 1 in 10 and 1 in 10,000. In addition, the 1 in 10 seems like an awfully round number--could it have been rounded down from some larger probability (1 in 8, 1 in 5, 1 in 3, who knows?). Levitt wonders whether the whole test is just nonsense in the first place. So what do you do when confronted with a critical medical problem and a slew of probabilities? There's no easy answer, of course, but I believe gathering as much hard data as possible is important. 
Then make sure you distinguish between the inferences made by your doctor, nurse, or lab technician (which are more subject to error) and the underlying probabilities associated with the drug, the test, or the procedure (which are less subject to error). 2 comments: I began my academic career studying Meteorology, and I was surprised to find that I was mistaken on what the percentage chance of rain really means. I too thought it was the likelihood of rain. What it actually designates is how much of the area covered by the prediction is expected to get precipitation. For example, here in the Chicago DMA when Tom Skilling says there is a 50% chance of rain what he means is that 50% of the broadcast area is expected to see rain. For me this has meant a simple check of the streaming radar feed and figuring out if it will rain where I will be is really not that hard, it's just too difficult for big predictors to get so focused. One other thing to consider is if you live on the side of a mountain that gets 2 times the rain of the other side, and the forecast of 50% rain includes an area of BOTH sides of the mountain of equal area, then you should plan on your side having all the rain! Maybe a good day for a picnic on the other side of the mountain.
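For readers who want to check the arithmetic, here is the Wednesday/Thursday example from the post redone as a small Python sketch; the bounds are the standard ones, and nothing here is specific to NWS data:

# Chance of rain Wednesday and Thursday from the example above.
p_wed, p_thu = 0.30, 0.50

# If the two days were independent:
p_both_indep = p_wed * p_thu                    # 0.15
p_either_indep = p_wed + p_thu - p_both_indep   # 0.65

# Upper bound: P(A and B) can be at most 0 (mutually exclusive),
# and the total can never exceed 1.
p_either_max = min(1.0, p_wed + p_thu)          # 0.80

# Lower bound: the days overlap as much as possible,
# so P(A and B) = min(P(A), P(B)).
p_either_min = max(p_wed, p_thu)                # 0.50

print(p_either_min, p_either_indep, p_either_max)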
{"url":"http://what-are-the-chances.blogspot.com/2008/04/whats-chance-of-rain.html","timestamp":"2014-04-17T10:32:13Z","content_type":null,"content_length":"52973","record_id":"<urn:uuid:3e7d6475-be6d-4fab-aedf-1751b3bd646e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler's totient function (pronounced to rhyme with 'quotient') is a unary function on positive integers. It is written as a lowercase phi letter, φ. φ(n) is the number of positive integers less than n and relatively prime to n (including 1). The totient of a prime p is always (p - 1), because every positive integer less than p is prime relative to p. The totient of the power of a prime, n = p^i, is (n - p^(i - 1)) = (p^i - p^(i - 1)) = (n * (1 - (1 / p))) because there are p^(i - 1) positive integers less than n which have a factor of p, and so are not relatively prime to n. The totient of the product of a set of relatively prime numbers is equal to the product of the totients of each element. This can be seen by taking any pair of relatively prime numbers, x and y. There is a set of positive integers less than x, x[1], x[2], …, x[a], which are all prime relative to x, and φ(x) = a. Likewise there is a set of numbers less than y, y[1], y[2], …, y[b], which are all prime relative to y, and φ(y) = b. Consider the set of positive integers m less than (x * y) such that m ≡ x[i] (mod x) and m ≡ y[j] (mod y) for some pair with 0 < i ≤ φ(x) and 0 < j ≤ φ(y). By the Chinese remainder theorem each such pair of residues corresponds to exactly one m, so the size of the set is (φ(x) * φ(y)), and each element of the set is less than (x * y). Each element is prime relative to (x * y), because it is congruent to x[i] modulo x and to y[j] modulo y; conversely, every positive integer less than (x * y) and prime relative to it is included in the set, since its residues modulo x and modulo y are again relatively prime to x and y respectively. So the general formula for φ, using Eindhoven notation, is φ(n) = n * (* : (p is prime) ∧ (p is a factor of n) : 1 - (1 / p))
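As a quick illustration of the closing product formula, here is a small Python version; the factorisation is naive trial division and the snippet is purely illustrative:

def totient(n):
    # phi(n) = n * product over distinct prime factors p of (1 - 1/p),
    # computed with integer arithmetic as result -= result // p.
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:                  # leftover prime factor
        result -= result // n
    return result

print([totient(k) for k in [1, 7, 9, 10, 12]])   # [1, 6, 6, 4, 4]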
{"url":"http://everything2.com/title/totient","timestamp":"2014-04-16T05:05:10Z","content_type":null,"content_length":"21488","record_id":"<urn:uuid:b1cfddf9-5296-48cd-a40f-97d6541ebd13>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
A simple math expression solver
Posted Monday, November 21, 2011 11:24 AM
Your earlier post had a comment about the function only handling one expression at a time. That was kind of the point. In a select statement the function could be called for multiple expressions. That is actually the case in my situation. A table column contains a simple math expression. So using a function would allow me to get the answer to as many lines as would get returned in the select statement. Thanks for your posts. Ben
Posted Monday, November 21, 2011 12:17 PM
The method for evaluating algebraic equations is properly done through the use of reverse Polish notation, more formally known as postfix notation. (It's referred to as reverse Polish notation because a Polish fellow figured it out.) I've written these programs in COBOL, never thought about doing it in T-SQL, maybe VB.NET. Could be a challenge. Through the use of cursors it might not be that difficult. I'll see if I can come up with something in a few days. You can use this method with a few modifications to evaluate formal logic as well, i.e. is (A > B) or (C > B & D = B) a true statement. That can get very complex, trust me. So can algebra.
Posted Monday, November 21, 2011 12:26 PM
My earlier post was used in a software package to evaluate formulas in a table. The table was comprised of three "columns". The first column had the user friendly version of the formula, the second column had the postfix form of that equation, the third was a key field used by an evaluation program to find the proper formula. Then the evaluator program would read the table based on the key column and evaluate the second column straight away. Pretty slick, really.
Group: General Forum Members Last Login: Monday, January 07, 2013 2:28 PM Points: 51, Visits: Posted Monday, November 21, 2011 1:28 PM Forum Newbie I wonder if it would be fair to call this code a simplified RPN processor. Essentially you are taking infix notation and putting into a form that used to be required on some Group: General Forum Members Last Login: Thursday, March 13, 2014 4:18 PM Points: 3, Visits: Posted Monday, November 21, 2011 1:33 PM Valued Member Sort of. The code for evaluating a simple formula like mentioned early in this thread could be called a simplified RPN evaluator, I guess. I really didn't spend much time on it because I know intemately what an RPN process is all about. A real RPN processor can evaluate any depth algebraic equation. Group: General Forum Members Last Login: Monday, January 07, 2013 2:28 PM Points: 51, Visits: Posted Monday, November 21, 2011 1:53 PM Forum Newbie ------considering execution for many rows----------------- DECLARE @ExprCollection Table(expr VARCHAR(100) ) Group: General INSERT INTO @ExprCollection(expr) Forum Members SELECT '2+1' Last Login: UNION ALL Thursday, December SELECT '5*2' 12, 2013 11:25 AM UNION ALL Points: 5, Visits: SELECT '6/3' DECLARE @genSql AS VARCHAR(MAX) CREATE TABLE #ExpValueTable ([value] INT,[Id_Exp] [VARCHAR](100) PRIMARY KEY) SELECT @genSql = CASE WHEN @genSql Is Null THEN ' INSERT INTO #ExpValueTable ([value],[Id_Exp]) ' + ' SELECT ' + expr + ' AS ExpValue,''' + expr +''' As Exp' ELSE @genSql + ' UNION SELECT ' + expr + ' AS ExpValue,''' + expr +''' As Exp' END FROM @ExprCollection GROUP BY expr SELECT * FROM #ExpValueTable --- OR JOIN YOUR TABLE TO THE TEMPORARY TABLE... AND BETTER USING THE TABLE PRIMARY KEY DROP TABLE #ExpValueTable Posted Monday, November 21, 2011 2:20 PM Valued Member In this example the size of the table of literals is unlimited. However aditional coding is needed tu use variable names i.e. A, B, C, etc. A little fooling around with this thing and it might serve the most elementary uses. DECLARE @ExprCollection Table(expr VARCHAR(100) ) Group: General Forum Members INSERT INTO @ExprCollection(expr) Last Login: Monday, SELECT '2+1' January 07, 2013 UNION ALL 2:28 PM SELECT '5*2' Points: 51, Visits: UNION ALL 158 SELECT '6/3' drop table #ExpValueTable CREATE TABLE #ExpValueTable ([value] decimal(18,4),[Id_Exp] [VARCHAR](100) PRIMARY KEY) declare ExprCursor cursor select * from @ExprCollection declare @Expr as varchar(100), @SQL varchar(max) open ExprCursor fetch next from ExprCursor into @Expr while @@Fetch_Status = 0 set @SQL = ' INSERT INTO #ExpValueTable ([value],[Id_Exp]) SELECT ' + @expr + ' AS ExpValue,''' + @expr +''' As Exp' fetch next from ExprCursor into @Expr exec (@SQL) close ExprCursor deallocate ExprCursor select * from #ExpValueTable Posted Monday, November 21, 2011 4:46 PM Grasshopper When I ran Ben's code, I had some problems if I didn't leave spaces between the operators and the values. i.e., fn_simplemath('3 - 4') worked while fn_simplemath('3-4') failed... I suspect this would require a minor tweaking of some of the substring arguments. But trying to debug a recursive function can be a bit troublesome. It also occurred to me that since were were not concerned with operator precedence and simply calculating from left to right, we should be able to simply loop through the string Group: General without needing to be recursive. 
My approach was to create a string that duplicates the input except for replacing all the "acceptable" operator values with a single unique character, in my case I used a tilde (~). This second string can then be used to identify the position of all the operators in the original string and make it easier to parse out the individual operand values.
Here is my code:
CREATE Function [dbo].[fn_simplemath2] ( @Expression1 varchar(255)) returns numeric(18,6)
as
begin
-- @Expression2 will duplicate @Expression1 with all operators replaced with ~
Declare @Expression2 varchar(255)
Set @Expression2 = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@Expression1,'+','~'),'-','~'),'*','~'),'/','~'),'%','~')
-- Local variables
Declare @PrevOpLoc int -- Location of previous operator
Declare @NextOpLoc int -- Location of next operator
Declare @OpChar Char(1) -- Current operator character
Declare @Result numeric(18,6) -- Hold running calculation and final return value
Declare @NextVal numeric(18,6) -- the next substring value to be used to modify result based on operator
-- Find the first operator
Set @NextOpLoc = CHARINDEX('~',@Expression2)
-- Initialize @Result to the first substring. If there are no operators, move entire string to @Result
Set @Result =
Case
When @NextOpLoc = 0 then CAST(@Expression1 as numeric(18,6))
Else CAST(LEFT(@Expression1,@NextOpLoc-1) as numeric(18,6))
End
-- Now we will loop until we run out of operators
While @NextOpLoc <> 0
begin
-- Get the actual operator from @Expression1, then pull out the next substring value
Set @OpChar = SUBSTRING(@Expression1,@NextOpLoc,1)
Set @PrevOpLoc = @NextOpLoc
Set @NextOpLoc = CHARINDEX('~',@Expression2, @NextOpLoc + 1)
Set @NextVal = Cast(SUBSTRING(@Expression1, @PrevOpLoc + 1,
Case When @NextOpLoc = 0 then LEN(@Expression1) Else @NextOpLoc - 1 End - @PrevOpLoc) as numeric(18,6))
-- Perform the appropriate operation
Set @Result =
Case @OpChar
When '-' then @Result - @NextVal
When '+' then @Result + @NextVal
When '*' then @Result * @NextVal
When '/' then @Result / @NextVal
When '%' then @Result % @NextVal
Else null
End
end
Return @Result
end
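For comparison with the T-SQL above, the stack-based postfix evaluation discussed earlier in the thread looks roughly like this in Python; it is only an illustrative sketch (tokens assumed pre-split, no operator precedence), not a replacement for any of the posted functions:

def eval_postfix(tokens):
    # Evaluate a postfix (RPN) expression, e.g. ['3', '4', '+', '2', '*'] -> 14.
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()        # right operand comes off the stack first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_postfix('3 4 + 2 *'.split()))   # 14.0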
{"url":"http://www.sqlservercentral.com/Forums/Topic1209004-2611-2.aspx","timestamp":"2014-04-17T04:40:07Z","content_type":null,"content_length":"176424","record_id":"<urn:uuid:e771b747-f436-43cf-ae67-c0fb725c147b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Division of Epidemiology and Biostatistics questions, short answers, essay-type question-and-answer ... questions in the take-home exam, one will require students to ... Division of Epidemiology and Biostatistics ... Biostatistics I (BIOS 806) Syllabus Biostatistics I (BIOS 806) Syllabus Fall 2008 (3 credits) This course is designed to prepare the graduate student to understand and apply biostatistical methods ... BIOSTATISTICS 600 PRINCIPLES OF STATISTICAL INFERENCE Fall 2009 ... test facility to submit your answers, some additional questions will be ... Page 5 of 8 instructor after the exam due ... bios600w@bios.unc.edu if you have questions. Biostatistics 600 ... ACCP Clinical Pharmacy Challenge Sample Questions with Answers Microsoft Word - Sample Questions with Answers for Web site.doc Practice Exam Questions and Solutions for Midterm 1 Statistics 301 ... Practice Exam Questions and Solutions for Midterm 1 Statistics 301, Professor Wardrop 1. Sarah performsa CRD with a dichotomous response. She obtains the sampling ... BIOSTATISTICS 164/STATISTICS 104 2 CLASS MEETINGS: All class meetings will begin promptly at 10:00am on Mondays and Wednesdays. Since these meetings are intended to be a forum for discussion, questions and ... Final Exam-Biostatistics 1 Final Exam-Biostatistics Name: Please, provide the answer to the following questions. Your answers should be complete and placed in the provided space. EPIDEMIOLOGY-BIOSTATISTICS EXAM Exam 1, 2000 EPIDEMIOLOGY-BIOSTATISTICS EXAM ... For questions 11 - 20, write your answers in the spaces provided. Submit the exam and your answer sheet as directed ... BIOSTATISTICS DESCRIBING DATA, THE NORMAL DISTRIBUTION 1. The duration of time from first ... You will need the following information to answer questions 6 through 8: There were ... STA 102: Introduction to Biostatistics STA 102: Introduction to Biostatistics Course Professor: Dr. Heidi Ashih Required Textbook: Bernard Rosner Fundamentals of Biostatistics, 5 th edition Course Homepage ... Statistics Exam Statistics Exam NAME:_____ Part I - Multiple Choice. Each problem is worth 4 points. 1. Ten pairs of chicks were selected to test the ... DRUG LITERATURE EVALUATION AND BIOSTATISTICS (PHR 394R) Tuesday ... 1 DRUG LITERATURE EVALUATION AND BIOSTATISTICS (PHR 394R) Tuesday and Thursday (9:30-11:00 AM) San Antonio: McDermott Room 2.108 Austin Room: PHR 3.106 El Paso Room ... Final Exam-Biostatistics 1 Final Exam-Biostatistics Name: Please, provide the answer to the following questions. Your answers should be complete and placed in the provided space. Biostatistics I (BIOS 806) Syllabus Required Text: Principles of Biostatistics, ... Answers to homework questions should include relevant SPSS output ... answers to be copied, signaling answers or taking an exam ... Final Exam-Biostatistics Solutions 1 Final Exam-Biostatistics Solutions Name: Please, provide the answer to the following questions. Your answers should be complete and placed in the provided space. AP Statistics 2010 Free-Response Questions 2010 Free-Response Questions The College Board The College Board is a not-for-profit ... Spend about 65 minutes on this part of the exam. Percent of Section II score75 BIOSTATISTICS 600 - PRINCIPLES OF STATISTICAL ... your answers, allow time to submit the answers to the questions you ... tests; the last long test is the final exam ... Epidemiology-Biostatistics Exam Exam 2, 2001 Epidemiology-Biostatistics Exam Exam 2, 2001 PRINT YOUR LEGAL NAME: _____ ... 
STATISTICS 7 CHAPTERS 1 TO 6, SAMPLE MULTIPLE CHOICE QUESTIONS STATISTICS 7 CHAPTERS 1 TO 6, SAMPLE MULTIPLE CHOICE QUESTIONS Correct answers are in bold italics. . ... to score 5 points higher than the other student on the final exam. d ... BIOSTATISTICS 164/STATISTICS 104 2 CLASS MEETINGS: Most class meetings on Mondays and Wednesdays will consist of a single 75-minute segment. Since these meetings are intended to be a forum for discussion ... GD Star Rating EPIDEMIOLOGY-BIOSTATISTICS EXAM Exam 1, 2000 EPIDEMIOLOGY-BIOSTATISTICS EXAM Exam 1, 2000 PRINT YOUR LEGAL NAME: _____ Instructions: This ... BIOSTATISTICS 600 Principles of Statistical Inference Spring 2008 Instructor: Dr. Jane Monaco Instructor Email: bios600w@bios.unc.edu , or jmonaco@bios.unc.edu Office: 3101A ...
{"url":"http://www.cawnet.org/docid/biostatistics+exam+questions+and+answers/","timestamp":"2014-04-19T05:39:43Z","content_type":null,"content_length":"54680","record_id":"<urn:uuid:cccc1166-c120-4132-b3ec-b248f3b779cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Scalable Datasets: Bloom Filters in Ruby When you're working with large datasets it's always nice to have a few algorithmic tricks up your sleeve, and Bloom Filters are exactly that - often overlooked, but an extremely powerful tool when used in the right context. A Bloom Filter is a probabilistic data structure that is used to test whether an element is a member of a set, or more simply, it's an incredibly space efficient hash table that is often used as a first line of defense in high performance caches. Database queries too expensive? Then a Bloom Filter might help. As an example, Google's Bigtable uses a bloom filter as a first lookup to avoid unnecessary disk accesses. Bloom Filter theory and Applications Instead of storing the key-value pairs, as a regular hash table would, a Bloom filter will give you only one piece of information: true or false based on the presence of a key in the hash table (equivalent to Enumerable's include?() function in Ruby). This relaxation allows the filter to be represented with a much smaller piece of memory: instead of storing each value, a bloom filter is simply an array of bits indicating the presence of that key in the filter. Trying to build a fast spellchecker? Bloom filters are your best friend. The 'probabilistic' part of the filter comes from the fact that, as with any hash table, there is a probability of collision where two keys may map into the same bucket. Hence, false positives are possible (a filter with a single entry ('word') may indicate that ('word2') is part of the set, but the reverse (false-negatives) are not possible). The possibility of a false-positive is something you'll have to deal with (for most caches it doesn't matter, since Bloom filters are an optimization technique to guard against the expensive operations), but as with any probability we can optimize it for each use case (speed vs error rate). For this exact reason, Bloom filters often use multiple hash functions for each key. Let's see what that means... Understanding the Math Wikipedia has a great explanation of the math behind Bloom Filters, which I would encourage you to walk through, but the takeaway is that the probability of a false-positive (where k is the number of hashing functions, n is the size of the set, and m is the size of the bloom filter in bits) is approximately (1 - e^(-kn/m))^k. Hence, we can pick a desired error rate and optimize the size of the filter to match our requirements. In fact, if you do the math, you'll find an interesting rule of thumb: a Bloom filter with a 1% error rate and an optimal value for k only needs 9.6 bits per key, and each time we add 4.8 bits per element we decrease the error rate by ten times. Let's see what that means in practice for a 10,000 word dictionary: roughly 12kB at a 1% error rate, 18kB at 0.1%, and 24kB at 0.01%. 0.1% error rate for a 10,000 word dictionary in 18kB of memory? Not bad! Now let's do it in Ruby. Working with Bloom Filters in Ruby After some poking around I've resurrected Tatsuya Mori's sbloomfilter, fixed a few bugs, and extended the library.
To create the filter, you'll have to specify the size of the filter (m), the number of hash functions (k), and a random seed: #!/usr/bin/env ruby require 'bloomfilter' WORDS = %w(duck penguin bear panda) TEST = %w(penguin moose racooon) # m = 100, k = 4, seed = 1 bf = BloomFilter.new(100, 4, 1) WORDS.each { |w| bf.insert(w) } TEST.each do |w| puts "#{w}: #{bf.include?(w)}" # penguin: true # moose: false # racooon: false # Number of filter bits (m): 100 # Number of filter elements (n): 4 # Number of filter hashes (k) : 4 # Predicted false positive rate = 0.05% Once the filter is populated you can easily view the stats and expected error rate. Current limitations are: the hash function is fixed as CRC32 and seeded with k different values, and currently you cannot delete entries from a Bloom Filter - for that, a counting Bloom filter must be implemented. Go forth and conquer! Ilya Grigorik is a web performance engineer and developer advocate at Google, where his focus is on making the web fast and driving adoption of performance best practices at Google and beyond. Follow @igrigorik
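If you would rather size the filter from a target error rate than eyeball it, the standard formulas m = -n·ln(p)/(ln 2)^2 and k = (m/n)·ln 2 are easy to script. This helper is a generic sketch of that math and is not part of the sbloomfilter gem:

import math

def bloom_size(n, p):
    # Bits (m) and hash count (k) for n elements at false-positive rate p.
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = max(1, round(m / n * math.log(2)))
    return m, k

for p in (0.01, 0.001, 0.0001):
    m, k = bloom_size(10000, p)
    print(p, m, k, round(m / 8 / 1000.0, 1), "kB")
# roughly 9.6, 14.4 and 19.2 bits per key -> about 12, 18 and 24 kB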
{"url":"http://www.igvita.com/2008/12/27/scalable-datasets-bloom-filters-in-ruby/","timestamp":"2014-04-19T01:58:53Z","content_type":null,"content_length":"17004","record_id":"<urn:uuid:9a8b8a87-bfd2-46cf-b8ab-4b74979d1001>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
The Asymptotic Behavior of a Network Multiplexer with Multiple Time Scale and Subexponential Arrivals - IEEE TRANSACTIONS ON INFORMATION THEORY , 1999 "... We develop a network calculus for processes whose burstiness is stochastically bounded by general decreasing functions. This calculus enables us to prove the stability of feedforward networks and obtain statistical upper bounds on interesting performance measures such as delay, at each buffer in the ..." Cited by 57 (4 self) Add to MetaCart We develop a network calculus for processes whose burstiness is stochastically bounded by general decreasing functions. This calculus enables us to prove the stability of feedforward networks and obtain statistical upper bounds on interesting performance measures such as delay, at each buffer in the network. Our bounding methodology is useful for a large class of input processes, including important processes exhibiting "subexponentially bounded burstiness" such as fractional Brownian motion. Moreover, it generalizes previous approaches and provides much better bounds for common models of real-time traffic, like Markov modulated processes and other multiple time-scale processes. We expect that this new calculus will be of particular interest in the implementation of services providing statistical guarantees. - PH.D. DISSERTATION , 1999 "... ..." , 2003 "... this paper, we further extend our statistical delay analysis to cover the entire network. We first show that the superposition of two self-similar processes remains self-similar. Then we show that the self-similar properties will not be altered by any server mechanism (e.g. switch with different ..." Add to MetaCart this paper, we further extend our statistical delay analysis to cover the entire network. We first show that the superposition of two self-similar processes remains self-similar. Then we show that the self-similar properties will not be altered by any server mechanism (e.g. switch with different scheduling policies). With the above, we can derive the statistical end-to-end delay guarantee for a switched network "... Real-time communication requires performance guarantee from the underlying network. In order to analyse the network performance, we must find the traffic characterization in every server of the network. Due to the strong experimental evidence that network traffic is self-similar in nature, it is imp ..." Add to MetaCart Real-time communication requires performance guarantee from the underlying network. In order to analyse the network performance, we must find the traffic characterization in every server of the network. Due to the strong experimental evidence that network traffic is self-similar in nature, it is important to study the problems to see whether the superposition of two self- similar processes retains the property of self-similarity and whether the service of a server changes the self-similarity property of the input traffic. In this paper, we first discusses some definitions and superposition properties of self-similar processes. Then we gives a model of a single server with infinite buffer and prove that when the queue length has finite second-order moment, the input process being strong asymptotically second-order self-similar(sas-s) is equivalent to the output process also bearing the sas-s property. 
Given the method for determining the worst case cell delay for an ATM switch with self-similar input traffic, we can determine the end-to-end delay for such real-time communications in an ATM network by summing the cell delay experienced by each of the ATM switches along each connection.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1356525","timestamp":"2014-04-21T08:48:00Z","content_type":null,"content_length":"20196","record_id":"<urn:uuid:0e90c5bf-c69b-418b-9182-227a50857d80>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US4819193 - Gradation processing method for color images The present invention relates to gradation processing for color images in which gradation image data including a plurality of colors is recorded by a color printer by way of example, and more particularly to forming of an effective screen angle when a plurality of picture elements are used to represent one gradation. The present invention also relates to an information converting method for use in converting color component gradation information read by such as a color scanner to color component recording gradation information applied to information recording equipment such as a color printer. Certain digital recording apparatus can only record a unit recording picture element in a binary manner; "recording" or "non-recording". When using this type of digital recording apparatus, gradation cannot be represented for each unit of picture element, but it can effectively be represented by making a plurality of recording picture elements correspondent to one gradation information. A dither method or a density pattern method has been frequently utilized to represent such gradation. When gradation is converted to binary data, there must be made judgment on "recording:1" or "non-recording:0" with a certain threshold set as the border level. Variation in this threshold permits representation gradation. In other words, by allocating different thresholds from each other in their values to respective elements of a two-dimensional matrix in X rows and Y columns and then comparing the thresholds of the respective elements with one or different gradation picture element data for conversion of gradation data to binary recording data, it is possible to change the number of picture elements above the "recording" level, i.e., recording density, in accordance with gradation of the gradation picture element data for each plurality of recording picture elements corresponding to the two-dimensional matrix of thresholds. In the dither method, one threshold in the matrix is allocated with respect to the data for one gradation picture element, and location of the selected threshold is updated every time the matrix everywhen location of the gradation picture element is changed. In the density pattern method, all the elements of the matrix are made correspondent to the data of one gradation picture element and, when one gradation picture element data is input, the thresholds are selected in sequence to produce recording picture element information in accordance with the number of elementsof the matrix. Meanwhile, when color recording is effected using ink of plural colors, such as Y (yellow), M (magenta) and C (cyan) for example, there may occur a moire due to interference between recording images of different colors. Also, because the threshold matrix utilized in converting gradation data to binary data is repeatedly used for each of the predetermined picture elements, the array pattern of thresholds may be remarked in the recording image. Further, when recording locations of ink of different colors are shifted from each other due to mechanical positioning errors, such a shift appears in a certain direction throughout the entire image, and the resulting color shift will appear continuous, loud and marked image in the form of line, for example. Moreover, depending on picture elements, ink of different colors may be recorded on a single picture element in superimposed relation. 
Theoretically, mixing of the three primary colors Y, M and C can reproduce any desired colors. In practice, however, depending on whether or not ink of different colors are superimposed, and the order of superimposition thereof, the reproduced color is changed and hence turbidity is caused in the reproduced color in relation to such factors as the transparency of the ink. In the field of printing, the above-mentioned disadvantages in color recording have been heretofore solved by inclining a reticulated plate with respect to recording paper for each color by a predetermined angle, to such an extent that the adverse recording will not be practically appreciated by eyes of human beings. This inclination is generally known as a screen angle. However, such a screen angle cannot be set in digital image recording of dot matrix system. Accordingly, there has been proposed a method of practically forming a screen angle by causing an array of thresholds in the threshold matrix to have an inclination. But, the conventional method has a small degree of freedom in setting a screen angle, and a memory of very large capacity must be prepared for the threshold value in order to finely set the screen angle. Meanwhile, color component information read by a color scanner consists of red (R), green (G) and blue (B) and, in case of gradation processing, the information is rendered to a signal level or digital data corresponding to color components of an image. For color recording on the basis of such color component information, because the main recording colors are yellow (Y), magenta (M) and cyan (C), the read color component gradation information R, G and B are converted to recording color component gradation information Y, M and C. In case of color image reproduction due to color ink recording, for example, such information conversion has been heretofore performed following the equations below, which are known as masking equations, to correct a spectroscopic characteristic of the ink:
Y=a11R+a12G+a13B
M=a21R+a22G+a23B
C=a31R+a32G+a33B
To compensate failure of proportion and addition rules of ink, it can be also envisaged to perform color correction using such as the following masking equations including terms of higher orders:
Finally, with the method of (d), if 8 bits are allocated to each of R, G and B by way of example, there are needed memory addresses of 2^24 and the memory capacity of 2^24 (abut 16M) words, thus resulting in the enormous memory capacity to be required. As mentioned above, for conventional color information conversion, it is difficult to perform the masking equations for second or higher degrees. A first object of the present invention is to enable the setting of any desired screen angle and improve the recording quality in gradation color recording without the need of a memory of large A second ofject of the present invention is to perform higher accurate color information conversion with a relatively simple circuit configuration, and a third object thereof is to perform color information conversion following the masking equations of second or higher degrees with the aid of a memory of relatively small capacity. To achieve the first object, according to the present invention, the screen angle is set through arithmetic operation of coordinate conversion. For example, while leaving an array of the threshold matrix as conventional, the coordinate values of picture element data to be processed are rendered to new ones through coordinate conversion in accordance with the screen angle and then referred to the threshold matrix, whereby the screen angle is effectively produced. Now referring to FIG. 3, consider an arbitrary cell Cxy on the x, y coordinates. Assuming that Uuv indicates the coordinate axis inclined with respect to the x, y coordinate axis Axy by a predtermined angle O, when the cell Cxy is rotated through the angle O to coincide with the coordinate axis Auv, the resulting coordinate values u, v of the new cell Cuv on the coordinate axis Axy are represented as follows, respectively: ##EQU1## where S1, S2, S3 and S4 are signs (+, -) determined in response to the angle More specifically, by setting the above parameters a1, a2, a3 and a4 in accordance with the screen angle and then performing arithmetic operations of the above equations (1) and (2), the new coordinate values after coordinate conversion are determined which are then utilized to refer the threshold matrix (table), whereby the screen angle is produced. Accordingly, with the parameters a1-a4 set to different values for each color, the screen angle is changed for each color to solve the above-mentioned disadvantages similarly to the result effected in the field of printing. In this connection, when performing calculations of coordinate conversion, the results are usually not in the form of integers, while the coordinate values are integers. When the threshold matrix is referred using only the integer part of the calculated result, there is a possibility of causing a relatively large error. In a preferred embodiment of the present invention, therefore, interpolation is carried out to reduce the size of the errors. Further, to achieve the above second and third objects, in a preferred embodiment of the present invention, the recording gradation information for each color, is determined by the following equations below; where f1(R), f2(G), . . . , f9(B) are functions of second or higher degrees calculation values of the functions f1(R), f2(G), . . . 
, f9(B) for respective values in predicted ranges of parameters R, G and B are calculated in advance individually, and stored in memory means corresponding to the respective parameter values separately as to functions; and at the time of converting color information, the memory means is accessed with R, G and B as parameters to read the corresponding calculated values to thereby obtain Y from addition of the read calculated values of f1(R), f2(G) and f3(B), M from addition of the read calculated values of f4(R), f5(G) and f6(B), and C from addition of the read calculated values of f7(R), f8(G) and f9(B). With this embodiment, if 8 bits are allocated to each of R, G and B, the resulting memory capacity is in order of 3×2^8 words which is remarkably smaller than the memory capacity of 2^24 (aout 16M) to be necessary in the table look-up memory-digital system that has been envisaged in the past. Nevertheless, in additiion to the above memory, the embodiment requires only an addition using an arithmetic unit such as an adder or microprocessor. The adder is relatively cheap and does not so complicate the circuit configuration, while additive operation using the microprocessor will not provide a significant delay in the processing speed. It is, therefore, possible to perform high-accurate conversion of color information using the masking equations of second or higher degrees. FIG. 1 is a block diagram showing an image processing apparatus of one type embodying the present invention; FIG. 2 is a block diagram showin a conversion unit 200 in FIG. 1; FIG. 3 and FIGS. 4a and 4b are plan views showing the processing of coordinate convertion; FIGS. 5 and 7 are block diagrams showing other embodiments of the conversion unit 200, respectively; FIGS. 6, 8 and 9 are plan view showing the processing of interporation; FIG. 10 is a block diagram showing the apparatus constitution embodying the present invention in one aspect; FIG. 11 is a block diagram showing the apparatus constitution embodying the present invention in another aspect; FIG. 12 is a plan view showing the content of color information data applied to ROM's 12[1] -12[3] in FIG. 11; FIG. 13a is a block diagram showing the apparatus constitution embodying the present invention in still another aspect; FIG. 13b is a flow chart showing the conversion processing operation of color gradation information effected by a microprocessor 18 in FIG. 13a; and FIG. 13c is a time chart showing the operation timings of respective parts of the apparatus during the conversion processing in FIG. 13b. Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. FIG. 1 shows an image processing apparatus of one type embodying the present invention. Description will now be made by referring to FIG. 1. A control unit CPU comprises a microprocessor, a RAM (read /write memory), a ROM (read only memory:program memory), an I/O port, an oscillator, etc. A scanner 100 is connected to the I/O port of the control unit CPU through certain signal lines. The scanner reads information on a predetermined document for each of three primary colors (R, G, B) and outputs digital multi-value data in response to a gradation level for each color. The control unit CPU reads image data from the scanner 100 for each color and, after carrying out the predetermined processing, conducts the recording processing operation on a color printer 500. 
Between the control unit CPU and the color printer 500, there are provided conversion units 200, 300 and 400 for converting gradation data to binary data. These conversion units 200, 300 and 400 process color information of Y (yellow), M (magenta) and C (cyan), respectively. FIG. 2 shows an embodiment of the conversion unit 200 in FIG. 1 (other conversion units 300 and 400 have the same elements). First explaining briefly, in this embodiment, coordinate conversion is performed using a circuit composed of multipliers MP1, MP2, MP3 and MP4 as well as adders AD5 and AD6. A memory ROM stores therein a table in which predetermined threshold data are allocated to individual elements of a two-dimensional matrix one to one. In this embodiment, the matrix consists of the elements 4×4. The remaining circuit elements are provided to perform the processing of The principles of the interpolation processing in this embodiment will now be described. In the illustrated example, interpolation is performed on the basis of a so-called area allocation method. Referring to FIG. 4, a cell Cuv after coordinate conversion is projected in parts over a plurality of cells having the coordinates (1,2), (1,3), (2,2), (2,2) and (2,3) on the original x, y coordinate system. Assuming that the areas projected over those cells are S2, S3, S4, S5 and S6, respectively, and the thresholds allocated to those cells are C(1,2), C(1,3), C(2,1), C(2,2) and C(2,3), respectively, the threshold C(u,v) of the cell Cuv after coordinate conversion is represented by the following equation: C(u,v)=[S2 C(1,2)+S3·C(1,3)+S4·C(2,1)+S5·C(2,2)+S6.multidot.C(2,3)]/(S2+S3+S4+S5+S6) (3) In other words, according to this interpolation method, the thresholds of all the cells (Cxy) over which is projected the cell Cuv are weighed in accordance with the respective projected areas, and the total sum of the weighed values is divided by the number of cells. The resulting average value is regarded as the result of interpolation (i.e., threshold). However, such area calculation and so on takes a very long time in case of resorting to a software, while the apparatus constitution is very complicated in case of resorting to a hardware. In this embodiment, therefore, interporation of the area allocation is approximately performed as follows. More specifically, as shown in FIG. 4b, the cell Cuv is divided into 16 parts equal to one another and it is judged over which cell (Cxy) is most largely projected each of the divided minute cells (to which cell (Cxy) belongs the center of each minute cells. Then, the number of corresponding minute cells is assumed to represent the projected area. In FIG. 4b, for example, the magnitudes (or the numbers of minute cells) of a region Ca projected over C(a,2), a region Cb projected over C(2,3) and a region Cc projected over C(2,2) are 2, 1 and 13, respectively. Accordingly, the threshold C(u,v) in this case is determined as follows: Such division into 16 parts equal to one another practically means that a minute coordinate shift (Δ×x,Δy) below 1, and corresponding to 16-division is applied to each coordinate Cuv determined through coordinate conversion. Then, the above processing can be implemented by using the integer parts of the results to refer to thresholds allocated to the respective integer coordinates, and by dividing the addition value of those thresholds by 16. Description will be continued by returning back to FIG. 2. 
In this embodiment, 16 clock pulses are applied to a signal line CLK for each input gradation data (DATA). CNT designates a 4-bit counter for counting those clock pulses. Lower two bits of an output from the counter CNT are connected to lower two bits of an input terminal B of each of the multipliers MP1 and MP3, while upper two bits of the output from the counter CNT are connected to lower two bits of an input terminal B of each of the multipliers MP2 and MP4. It is to be noted that, in this embodiment, each of the multipliers MP1, MP2, MP3 and MP4 performs multiplication of 4 bits (A)×6 bits (B) and outputs data of 10-bit. The coordinate values x and y updated upon incoming of each gradation data are applied to upper four bits of input terminals B of the multipliers MP1, MP3 and MP2, MP4, respectively. To four bits of input terminals A of the multipliers MP1, MP2, MP3 and MP4, there apply data of constants a1, a2, a3 and a4 in accordance with the screen angle, respectively. Output terminals of the multipliers MP1 and MP2 are respectively connected to input terminals A and B of the adder AD5, while output terminals of the multipliers MP3 and MP4 are respectively connected to input terminals A and B of the adder AD6. Therefore, the following coordinate values u and v appear at output terminals of the adders AD5 and AD6. u=a1(x+Δx)+a2(y+Δy) (4) v=a3(x+Δx)+a4(y+Δy) (5) where Δx and Δy are minute coordinate shifts respectively given by the lower two bits and the upper two bits of the counter CNT. Lower two bits of each of the adders AD5 and AD6 correspond to a decimal part of the coordinate value. Accordingly, these lower two bits are deleted and, since the threshold matrix is in the form of 4×4 in this embodiment, only those respective upper two bits (Au, Av) higher than the deleted bits are sampled. The total four bits are applied to address lines of the memory ROM. The memory ROM includes sixteen 4-bit memories corresponding to 4-bit addresses, and each memory stores therein preset threshold data. A data output terminal of the memory ROM is connected to lower four bits of an input terminal A of the adder AD7, while upper four bits of the input terminal A of the adder AD7 are set to zero at all times. An output terminal of a latch LT is connected to an input terminal B of the adder AD7. To a latch pulse input terminal CP of the latch LT is applied a signal on the signal line CLK through an inverter INV. Thus, every time u and v given by the equations (4) and (5), i.e., address information applied to the memory ROM, are updated, the latch LT serves to latch output data from the adder AD7 in synchronous relation with clock pulses a half clock cycle behind the update. The data latched by the latch LT is applied to an input terminal B of the adder AD7. This operation is repeated sixteen times for each gradation data so that, after the completion of sixteen operations, the total sum of sixteen threshold data appears at the output terminal of the adder AD7. Among 8-bit signal lines output from the adder AD7, upper four bits are connected to an input terminal A of a digital comparator CMP. In other words, deletion of lower four bits of the data and sampling of upper four bits thereof are equivalent to the fact that the data is shifted toward lower digits by four bits, and mean in a binary code that the data is divided by 16. 
More specifically, the 4-bit data (threshold) applied to the input terminal A of the digital comparator CMP represents the value which is obtained by applying different minute coordinate shifts of 0, 1/4, 2/4 or 182 to the coordinate values x and y of the input gradation data (DATA) in sixteen times, referring to the threshold matrix table with sixteen coordinates given by the equations (4) and (5), and then dividing the total sum of the referred values by 16. The resulting value corresponds to a threshold which is obrained from the values after coordinate conversion through interporation on the basis of the method that is effectively regarded as an area allocation method. To an input terminal B of the digital comparator CMP is input the 4-bit gradation data DATA. The digital comparator CMP serves to compare the threshold applied to its input terminal A with the gradation level applied to its input terminal B, so that it is issues a high level H (recording level) at an output terminal OUT if A<B is met, and sets a low level L (non-recording level) at the output terminal under other conditions. Consequently, binary data is obtained at the output terminal of the digital comparator CMP. Referring again to FIG. 1, there are provided three conversion units 200, 300 and 400 in this embodiment. Coordinate information x, y of the input gradation data are commonly applied to those three conversion units. The control unit CPU is connected to the conversion unit 200 through a data line PSY of 20 bits comprising 16-bit constant data (a1-a4) in response to a yellow screen angle and 4-bit gradation data (DATA) of a yellow component, the control unit CPU is connected to thre conversion unit 300 through a data line PSM of 20 bits comprising 16-bit constant data in response to a magenta screen angle and 4-bit gradation data of a magent component, and the control unit CPU is connected to the conversion unit 400 through a data line of 20 bits comprising 16-bit constant data in response to a cyan screen angle and 4-bit gradation data of a cyan component. Clock pulses from the oscillator and a clear signal (CLR) from the I/O port are commonly applied to the conversion units 200, 300 and 400. Output lines SY, SM and SC of the conversion units 200, 300 and 400 are connected to the color printer 500. A timing signal indicating effectiveness of a signal level on each of the signal lines SY, SM and SC (or indicating that the total sum of sixteen threshold data is appearing at the output terminal of the adder AD7) is applied from the control unit CPU to the color printer 500 through a control line CON2. FIG. 5 shows a conversion unit embodying the present invention in another aspect. In this embodiment, similarly to the foregoing embodiment, coordinate conversion is performed using four multipliers MP1, MP2, MP3, MP4 and two adders AD1, AD2. In the illustrated example, each of the multipliers MP1-MP4 performs calculation of 4 bits×4 bits and outputs the calculated result of 8 bits. In the embodiment of FIG. 5, the integer coordinate nearest to the coordinate after coordinate conversion is used to refer to the threshold matrix table. By obtaining the nearest integer coordinate, a coordinate error is restrained to be 0.5 at maximum with respect to each coordinate axis. This processing can be implemented by adding 0.5 to the values after coordinate conversion and then sampling only the integer parts of the resulting sums. More specifically, assume that, as shown in FIG. 
6 by way of example, arbitrary coordinate cells P1a, P2a, P3a and P4a have the cells C(1,0), C(0,0), C(1,1) and C(1,0) to be selected as the nearest integer coordinate cells, respectively. A shift of +0.5 is added to the coordinate values x and y of each of those arbitray coordinate cells, as a result of which coordinate values (x, y) of the cells P1b, P2b, P3b and P4b after shifting are given as (0+α1, 1+β1), (0+α2, 0+β2), (1+α3, 1+β3) and (1+α4, 0+β4), respectively, where α1 to α4<0.5 and β1 to β4<0.5. By sampling only the integer parts of the resulting values, the coordinate values (x, y) after shifting become (0,1), (0,0), (1,1) and (1,0) which are apparently coincident with the coordinate values of the cells C(0,1), C(0,0), C(1,1) and C(1,0) to be selected, respectively. The adders AD1 and AD2 shown in FIG. 5 are each an adder of 9 bits+9 bits, and 8-bit output lines of the multipliers MP1, MP2 and MP3, MP4 are connected to upper eight bits of the adders AD1 and AD2. An input terminal A of each of the adders AD1, AD2 has a least significant bit (LSB) fixed at zero (low level L), while an input terminal B thereof has a least significant bit fixed at one (high level H). Thus, the value given by adding 0.5 to an output value of the multiplier MP2 is applied to the input terminal B of the adder AD1, while the value given by adding 0.5 to an output value of the multiplier MP4 is applied to the input terminal B of the adder AD2. Because the least significant bit of an output from each of the adders AD1 and AD2 represents the decimal part, the predetermined lower bits (corresponding to an array of the matrix) other than the least significant bit are connected to an address terminal of a memory ROM. In this embodiment, the ROM directly stores therein as data the thresholds in response to the array coordinates output from the adders AD1 and AD2, and the rsults of comparison of their values with the gradation data values. Therefore, the gradation data is also applied to a predetermined address terminal of the memory ROM. Binary data is obtained at a data output terminal of the memory ROM. FIG. 7 shows a conversion unit embodying the present invention in still another aspect. In this embodiment, similarly to the foregoing embodiments, coordinate conversion is performed using four multipliers MP1, MP2, MP3, MP4 and two adders AD1, AD2. In the illustrated example, an integer part and a decimal part of the coordinate value obtained at an output terminal of each of the adders AD1 and AD2 are both applied to address terminals of a memory ROM. The memory ROM has also previously stored the threshold data after interporation on the decimal coordinate system in the form of a table in the predetermined addresses. Accordingly, with only addresses applied to the memory ROM, the threshold data after interporation is issued from a data output terminal of the memory ROM. This threshold data is compared with the gradation data DATA in a digital comparator CMP, so that binary data corresponding to the comparison result is issued from an output terminal of the CMP. A part of the table to be stored in the memory ROM in this case is given by the following Table 1. 
TABLE 1
______________________________________
             Upper addresses
             x - 1/4   x      x + 1/4   x + 2/4   x + 3/4   x + 1
______________________________________
Lower addresses
  y - 3/4    D1        D9     D17       D25       D33       D41
  y - 2/4    D2        D10    D18       D26       D34       D42
  y - 1/4    D3        D11    D19       D27       D35       D43
  y          D4        D12    D20       D28       D36       D44
  y + 1/4    D5        D13    D21       D29       D37       D45
  y + 2/4    D6        D14    D22       D30       D38       D46
  y + 3/4    D7        D15    D23       D31       D39       D47
  y + 1      D8        D16    D24       D32       D40       D48
______________________________________
(Note) D8 and D9 (and others in similar relation) are not on successive addresses.
In the Table 1, data are stored in 1/4 coordinate units such that the data located on integer coordinates are D12, D16, D44 and D48 and the remaining data are those produced through interpolation. When the thresholds after interpolation are stored in the form of a table as in this embodiment, the stored thresholds are variable in accordance with the method of interpolation. In other words, any desired interpolation method can be used in a constitution of this type. For example, the data interpolated on the basis of the area allocation method shown in the equation (3) or the method shown in FIG. 5 may be stored in the form of a table. Herein, there will be described the cases of using three other types of interpolation methods. As shown in FIG. 8, consider an arbitrary coordinate cell Cuv and four integer coordinate cells C(x,y), C(x+1,y), C(x,y+1) and C(x+1,y+1) surrounding the cell Cuv. Decimal coordinate values of the arbitrary coordinate cell are Δx and Δy. At this time, the threshold at each of the integer coordinate cells is weighted with the product of (the complement of) a distance along the x-axis and (the complement of) a distance along the y-axis between the relevant cell and the arbitrary coordinate cell. The total sum of the weighted values is regarded as a threshold C(u,v) of the arbitrary coordinate cell Cuv. Thus, the threshold is determined from the following equation:
C(u,v) = C(x,y)·(1-Δx)·(1-Δy) + C(x+1,y)·Δx·(1-Δy) + C(x,y+1)·(1-Δx)·Δy + C(x+1,y+1)·Δx·Δy   (6)
By way of example, consider application of this method to the data D22 in the case of producing the data as shown in the Table 1. The decimal coordinate values Δx and Δy in this case are 1/4 and 2/4, respectively, and the four integer coordinate data surrounding the data D22 are D12, D16, D44 and D48. Accordingly, the following is obtained from the equation (6): ##EQU2## By substituting 5, 6, 7 and 8 for D12, D44, D16 and D48, respectively, by way of example, as specific numeral values: ##EQU3## As shown in FIG. 8, consider an arbitrary coordinate cell Cuv and four integer coordinate cells C(x,y), C(x+1,y), C(x,y+1) and C(x+1,y+1) surrounding the cell Cuv. At this time, a distance r between the arbitrary coordinate cell Cuv and the integer coordinate cell Cxy is given by [(Δx)^2 + (Δy)^2]^α (where α equals 1/2). Supposing that the respective integer coordinate cells C(x,y), C(x+1,y), C(x,y+1) and C(x+1,y+1) would influence the arbitrary coordinate cell Cuv in inverse proportion to the distances therebetween, a threshold C(u,v) of the arbitrary coordinate cell Cuv can be obtained from the following equation on the assumption that those distances are r1, r2, r3 and r4:
C(u,v) = [C(x,y)/r1 + C(x+1,y)/r2 + C(x,y+1)/r3 + C(x+1,y+1)/r4] / [(1/r1) + (1/r2) + (1/r3) + (1/r4)]   (7)
Herein, consider again application of this method to the data D22 in the Table 1. The decimal coordinate values Δx and Δy in this case are 1/4 and 2/4, respectively.
First, the distances r1 to r4 are determined as follows:
r1 = [(1/4)^2 + (2/4)^2]^α = (5/16)^α
r2 = [(1 - 1/4)^2 + (2/4)^2]^α = (13/16)^α
r3 = [(1/4)^2 + (1 - 2/4)^2]^α = (5/16)^α
r4 = [(1 - 1/4)^2 + (1 - 2/4)^2]^α = (13/16)^α
By substituting these distances into the equation (7):
D22 = [D12/(5/16)^α + D44/(13/16)^α + D16/(5/16)^α + D48/(13/16)^α] × 2/[(5)^α + (13)^α]
Herein, by substituting 5, 6, 7 and 8 for D12, D44, D16 and D48, respectively, by way of example, as specific numeral values: ##EQU4## Supposing that, as shown in FIG. 9, the sixteen integer coordinate cells Cxy surrounding an arbitrary coordinate cell Cuv would influence the cell Cuv in accordance with the term sin(πd)/(πd) (where d is a distance along each axis), interpolation is performed through approximation using a cubic polynomial expression of sin(πd)/(πd). More specifically, the objective threshold can be obtained as follows on the assumption that the respective integer coordinates are represented by i, j and the thresholds at the integer coordinates are given by C(i,j): ##EQU5## where g(d) is given as follows: ##EQU6## Herein, determine the threshold C(u,v) in the case where u and v are 1/4 and 2/4, respectively, supposing that the thresholds of the coordinates corresponding to the sixteen surrounding cells are represented by the following Table 2.
TABLE 2
______________________________________
               Coordinate i
               -1     0      1      2
______________________________________
Coordinate j
  -1            2     10     17     44
   0            3      7     21     48
   1           11     14     29     52
   2           18     26     33     56
______________________________________
The following Table 3 represents the distances |i - u| between the sixteen coordinates and the arbitrary coordinate Cuv, and Table 4 represents the distances |j - v|.
TABLE 3
______________________________________
               Coordinate i
|i - u|        -1     0      1      2
______________________________________
Coordinate j
  -1           5/4    1/4    3/4    7/4
   0           5/4    1/4    3/4    7/4
   1           5/4    1/4    3/4    7/4
   2           5/4    1/4    3/4    7/4
______________________________________
TABLE 4
______________________________________
               Coordinate i
|j - v|        -1     0      1      2
______________________________________
Coordinate j
  -1           6/4    6/4    6/4    6/4
   0           2/4    2/4    2/4    2/4
   1           2/4    2/4    2/4    2/4
   2           6/4    6/4    6/4    6/4
______________________________________
From the values in the Tables 3 and 4, g(i - u) and g(j - v) in the equation (8) are determined as in the following Tables 5 and 6, respectively.
TABLE 5
______________________________________
               Coordinate i
g(i - u)       -1      0       1       2
______________________________________
Coordinate j
  -1           -9/64   57/64   19/64   -3/64
   0           -9/64   57/64   19/64   -3/64
   1           -9/64   57/64   19/64   -3/64
   2           -9/64   57/64   19/64   -3/64
______________________________________
TABLE 6
______________________________________
               Coordinate i
g(j - v)       -1      0       1       2
______________________________________
Coordinate j
  -1           -8/64   -8/64   -8/64   -8/64
   0           40/64   40/64   40/64   40/64
   1           40/64   40/64   40/64   40/64
   2           -8/64   -8/64   -8/64   -8/64
______________________________________
Accordingly, the influences Cij due to the respective integer coordinate cells Cij are given as follows from the Tables 2, 5 and 6: ##EQU7## where a = -1, b = 0, c = 1, d = 2 ##EQU8## As described in the above, according to this embodiment, it becomes possible to set any desired screen angle through calculation of coordinate conversion. Also, an error caused by coordinate conversion can be made small with the processing of interpolation. Next, still other embodiments of the present invention will be described. It is to be noted that, in any of the later-described embodiments, color information is converted following the masking equations below: ##EQU9## FIG.
10 shows the apparatus constitution embodying the present invention in one aspect. In this constitution, color image information on a color document 1 is read by a scanner 3 of an image reading unit 2 in separate color components, and the color component gradation data R, G and B are applied to address data input terminals of ROM's 6[1] to 6[9]. The ROM 6[1] has previously memorized therein the calculated values of f1(R) for respective R values in a predicted range of R with the R values as addresses. Six bits are allocated to R to provide 64 gradations (or 65 gradations including the colorless one 0), and the memory data consists of 2^8 words. The memory data is obtained by experimentally setting the correction coefficients a11, a12 and a13 of f1(R) suitable for the system (the combination of the image reading unit 2 and an image recording unit 8) using the method of least squares, and then calculating the value of f1(R) for each of R=1 to 64. Likewise, the ROM 6[2] has previously memorized therein the calculated values of f2(G) for respective G values in a predicted range of G with the G values as addresses. In like manner, the ROM 6[3] has previously memorized therein the calculated values of f3(B) for respective B values in a predicted range of B with the B values as addresses, the memory ROM 6[4] has previously memorized therein the calculated values of f4(R) for respective R values in a predicted range of R, the memory ROM 6[5] has previously memorized therein the calculated values of f5(G) for respective G values in a predicted range of G, the memory 6[6] has previously memorized therein the calculated values of f6(B) for respective B values in a predicted range of B, the memory ROM 6[7] has previously memorized therein the calculated values of f7(R) for respective R values in a predicted range of R, the memory 6[8] has previously memorized therein the calculated values of f8(G) for respective G values in a predicted range of G, and the memory 6[9] has previously memorized therein the calculated values of f9(B) for respective B values in a predicted range of B. Therefore, when the output gradation data from the image reading unit 2 are given by Ri, Gi and Bi, the ROM's 6[1] to 6[9] issue outputs as follows: ##EQU10## An adder 7y receives the outputs from the ROM's 6[1] to 6[3] and applies data indicating the sum of those outputs, i.e., Yi = f1(Ri) + f2(Gi) + f3(Bi), to the image recording unit 8. The image recording unit 8 stores Yi in a yellow recording gradation data buffer. An adder 7m receives the outputs from the ROM's 6[4] to 6[6] and applies data indicating the sum of those outputs, i.e., Mi = f4(Ri) + f5(Gi) + f6(Bi), to the image recording unit 8. The image recording unit 8 stores Mi in a magenta recording gradation data buffer. An adder 7c receives the outputs from the ROM's 6[7] to 6[9] and applies data indicating the sum of those outputs, i.e., Ci = f7(Ri) + f8(Gi) + f9(Bi), to the image recording unit 8. The image recording unit 8 stores Ci in a cyan recording gradation data buffer. To the image reading unit 2 are applied a line synchronization signal, an element synchronization signal and other control signals from a timing controller 10, so that in synchronous relation with the element synchronization signal, the color component gradation data Ri, Gi and Bi for each element are selectively applied to the ROM's 6[1] to 6[9] through separate lines.
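The arithmetic carried out by the ROM's 6[1] to 6[9] and the adders 7y, 7m and 7c can be pictured with a small software model. The sketch below is purely illustrative and is not the patent's hardware; the masking functions and coefficient values in it are invented placeholders. Each list plays the role of one ROM, holding the precomputed values of one function over the 64 input gradations, and each output component is just the sum of three table reads.

```python
# Illustrative software model of the lookup-table masking stage (FIG. 10).
# The functions f1-f3 and their coefficients are hypothetical examples.

def build_rom(f, levels=64):
    """Precompute f for every input gradation; this plays the role of the ROM contents."""
    return [f(v) for v in range(1, levels + 1)]

f1 = lambda R: 0.90 * R - 0.002 * R * R   # placeholder second-degree term
f2 = lambda G: -0.20 * G                  # placeholder
f3 = lambda B: -0.05 * B                  # placeholder

rom_y_r, rom_y_g, rom_y_b = build_rom(f1), build_rom(f2), build_rom(f3)

def yellow(Ri, Gi, Bi):
    """Adder 7y: Yi = f1(Ri) + f2(Gi) + f3(Bi), computed as three table reads."""
    return rom_y_r[Ri - 1] + rom_y_g[Gi - 1] + rom_y_b[Bi - 1]

print(yellow(40, 30, 20))
```

Storing nine such one-dimensional tables needs on the order of 2^6 × 9 entries, whereas a single three-dimensional table indexed directly by (R, G, B) would need 2^18, which is the memory saving the embodiment describes.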
The ROM's 6[1] to 6[9] update and latch the address data (Ri, Gi and Bi) in synchronous relation with the element synchronization signal, and output the corresponding memory data f1(Ri) to f9(Bi) in the addresses indicated by the latched data. Each of the adders 7y to 7c implements addition of the input data and then outputs and latches the calculated result while updating the same in synchronous relation with the element synchronization signal. The image recording unit 8 transfers the data in the input buffer memory into another buffer memory in synchronous relation with the line synchronization signal, and reads the input data Yi, Mi and Ci into the input buffer memory in synchronous relation with the element synchronization signal. As described in the above, in this embodiment, the ROM's are accessed with the document reading color component gradation data Ri, Gi and Bi to read the functional values f1(Ri), f2(Gi), f3(Bi), f4(Ri), f5(Gi), f6(Bi), f7(Ri), f8(Gi) and f9(Bi) corresponding to the gradation data Ri, Gi and Bi out of the ROM's, so that the sum of f1(Ri), f2(Gi) and f3(Bi), the sum of f4(Ri), f5(Gi) and f6(Bi), and the sum of f7(Ri), f8(Gi) and f9(Bi) are calculated by the adders to obtain the recording gradation data Yi, Mi and Ci. As a result, the processing speed becomes high and the memory capacity necessary for the ROM's is reduced down to the order of 2^6 × 9 words for three components each having 64 gradations. Nevertheless, conversion information can be obtained using the masking equations of second or higher degrees, whereby it is possible to convert the color information with high color reproducibility in conformity with the characteristics of the image reading unit 2 and the image recording unit 8 (e.g., a color gradation reading characteristic of the scanner and a color expression characteristic of the printer). It is to be noted that, if the table look-up system envisaged in the past is implemented similarly for three colors each having 64 gradations, a memory capacity of 2^18 would be needed. Thus, in this embodiment, the memory capacity corresponding to the difference of 2^18 - 2^6 × 9 can be saved. FIG. 11 shows the apparatus constitution embodying the present invention in another aspect. In this constitution, color component gradation data R, G and B from an image reading unit 2, each for one line, are stored into buffer memories 11r, 11g and 11b, respectively. At the time when the data for one line has been stored, the color component gradation data R, G and B for each element are read in sequence and applied to address data input terminals of ROM's 12[1], 12[2] and 12[3]. The ROM 12[1] has previously memorized therein the calculated values of a function f1(R) for respective R values, the calculated values of a function f4(R) for respective R values, and the calculated values of a function f7(R) for respective R values with both the color information data (2 bits) and the R values as addresses. As shown in FIG. 12, the color information data consists of 01, 10 and 11 allocated to designation of yellow Y, magenta M and cyan C, respectively.
Specifically, the calculated values of the function f1(R) for respective R values have been previously stored in those addresses specified by both the yellow Y designation data of 2 bits 01 and the R value of 6 bits, the calculated values of the function f4(R) for respective R values have been previously stored in those addresses specified by both the magenta M designation data of 2 bits 10 and the R value of 6 bits, and the calculated values of the function f7(R) for respective R values have been previously stored in those addresses specified by both the cyan C designation data of 2 bits 11 and the R value of 6 bits. The ROM 12[2] has previously memorized therein the calculated values of a function f2(G) for respective G values, the calculated values of a function f5(G) for respective G values, and the calculated values of a function f8(G) for respective G values with both the color information data (2 bits) and the G values as addresses. More specifically, the calculated values of the function f2(G) for respective G values have been previously stored in those addresses specified by both the yellow Y designation data of 2 bits 01 and the G value of 6 bits, the calculated values of the function f5(G) for respective G values have been previously stored in those addresses specified by both the magenta M designation data of 2 bits 10 and the G value of 6 bits, and the calculated values of the function f8(G) for respective G values have been previously stored in those addresses specified by both the cyan C designation data of 2 bits 11 and the G value of 6 bits. The ROM 12[3] has previously memorized therein the calculated values of a function f3(B) for respective B values, the calculated values of a function f6(B) for respective B values, and the calculated values of a function f9(B) for respective B values with both the color information data (2 bits) and the B values as addresses. More specifically, the calculated values of the function f3(B) for respective B values have been previously stored in those addresses specified by both the yellow Y designation data of 2 bits 01 and the B value of 6 bits, the calculated values of the function f6(B) for respective B values have been previously stored in those addresses specified by both the magenta M designation data of 2 bits 10 and the B value of 6 bits, and the calculated values of the function f9(B) for respective B values have been previously stored in those addresses specified by both the cyan C designation data of 2 bits 11 and the B value of 6 bits. In this embodiment, when the color gradation data for one line has been stored into each of the buffer memories 11r, 11g and 11b, a timing controller 10 reads the data for each element sequentially from the leading one in the one-line data and, after each update of reading for an element, first sets a distributor 14 so as to cause an output from the adder 13 to be applied to a yellow data buffer memory (Y) of an image recording unit 8, applies the color information data 01 (yellow: Y) to the ROM's 12[1]-12[3], instructs the adder 13 to perform an addition, and then instructs the image recording unit 8 to read the added result into the buffer memory (Y).
Subsequently, the timing controller 10 sets the distributor 14 so as to cause an output from the adder 13 to be applied to a magenta data buffer memory (M) of the image recording unit 8, applies the color information data 10 (magenta: M) to the ROM's 12[1]-12[3], instructs the adder 13 to perform an addition, and then instructs the image recording unit 8 to read the added result into the buffer memory (M). After that, the timing controller 10 sets the distributor 14 so as to cause an output from the adder 13 to be applied to a cyan data buffer memory (C) of the image recording unit 8, applies the color information data 11 (cyan: C) to the ROM's 12[1]-12[3], instructs the adder 13 to perform an addition, and then instructs the image recording unit 8 to read the added result into the buffer memory (C) of the image recording unit 8. Thereafter, reading of the buffer memories 11r-11b is updated to the gradation data for the next element. The above process is repeated until the data for one line has been read out of each of the buffer memories 11r-11b and the conversion data for one line has been sent to the image recording unit 8. Upon this, the timing controller 10 requests the color component gradation data for the next line from the image reading unit 2, thus causing the data to be read into the buffer memories 11r-11b. Then, similar processing is continued for the subsequent lines. Also in this embodiment, the ROM's 12[1] to 12[3] require a total memory capacity of only 2^6 × 9 words, as in the embodiment of FIG. 10. Since the reading addresses of each ROM are set to be 8 bits including the color information data of 2 bits, if all the addresses represented by 8 bits are to be stored in a memory area, the required memory capacity is as much as 2^8 × 3 words. In case of similarly implementing the table look-up system envisaged in the past for three colors each having 64 gradations, a memory capacity of 2^18 would be needed. Thus, in this embodiment, the memory capacity corresponding to the difference of 2^18 - 2^8 × 3 or above can be saved. FIG. 13a shows the apparatus constitution embodying the present invention in still another aspect. In this constitution, a ROM 22 as a processing control data memory stores therein a calculation/write program which is used to specify the correction coefficients in association with the correction characteristic data (a color reading characteristic of a scanner 3, a color reproduction characteristic of recording paper and a color expression characteristic of ink) applied to an input/output port 23, to calculate the values of the functions f1(R) to f9(B) (17) for respective values in predicted ranges of R, G and B, and to memorize the calculated values of f1(R), f4(R) and f7(R) into a RAM 16[1], the calculated values of f2(G), f5(G) and f8(G) into a RAM 16[2], and the calculated values of f3(B), f6(B) and f9(B) into a RAM 16[3]. It is to be noted that, in FIG. 13a, a block 17 designates the calculated values of the functions in separate groups to explain the allocation of those calculated values to the respective RAM's 16[1]-16[3] and, therefore, the block 17 does not represent a constitutional element of hardware.
The above correction coefficient data are given as follows: ##EQU11## Before starting to read, reproduce and record a color image of one page, a microprocessor 18 reads the correction characteristic instruction data at the input/output port 23, specifies the respective correction coefficients a11-a39 and hence the respective functions f1(R)-f9(B) corresponding to that data, calculates the functional values for respective values in the ranges of R=1-64, G=1-64 and B=1-64, and then writes the calculated values into the corresponding RAM's on the basis of the relationship between the block 17 and the RAM's 16[1]-16[3]. Stated differently, the correction characteristic instruction data represents a color reading characteristic of the scanner 3, a color reproduction characteristic of recording paper and a color expression characteristic of recording ink to specify the correction coefficients (i.e., the functions; the masking equations) which provide a color reproduction characteristic most coincident with the foregoing characteristics. Based on the specified correction coefficients, the calculated values of the functions for respective values in a predicted range of 1-64 of the parameters R, G and B are memorized into the corresponding RAM's 16[1]-16[3]. After setting the optimum (coefficients of the) masking equations in accordance with the correction characteristic instruction data and then memorizing the calculated values of the functions constituting those equations into the RAM's 16[1]-16[3] as mentioned above, processing control can proceed in a similar logical manner to that in the embodiment of FIG. 11. The difference therebetween is only in that the ROM's are replaced by the RAM's. In this embodiment, however, the Y conversion data for one line is obtained and stored into a line buffer memory 15y while the reading gradation data for one line is obtained. Then, the M conversion data for one line is obtained and stored into a line buffer memory 15m, and the C conversion data for one line is obtained and stored into a line buffer memory 15c. After that, the data in the line buffer memories 15y-15c are transferred to line buffer memories 16y-16c in an image recording unit 8 and, subsequently, reading of the color information for the next line is started. The microprocessor 18 serves as a main controller in the above processing, under control of which a timing signal generator 19 and a system controller 20 apply a timing signal and a control signal to the respective circuit elements. Incidentally, a data selector 15 applies the address buses of the microprocessor 18 system to the corresponding RAM's when the functional calculated data (17) are written into the RAM's 16[1]-16[3], applies the read data directly to the address input terminals of the RAM's 16[1]-16[3] (and the line buffers 11r-11b) while the read data for one line is obtained from the image reading unit 2, and then serves as a switching gate for applying the read data to the address input terminals of the RAM's 16[1]-16[3] two times after the read data for one line has been stored in the line buffers 11r-11b. FIG. 13b shows the processing control operation for conversion of color information mainly effected by the microprocessor 18, and FIG. 13c shows the processing timings. The processing control will now be described with reference to these figures. When powered on, the microprocessor 18 initializes the input/output port 23, internal registers, counters, timers, etc. (step 2 in FIG.
13b; hereinafter the word 'step' will be omitted in the parentheses), and reads the input port 23 (3). In accordance with the correction characteristic instruction data applied to the input port 23, the microprocessor specifies the correction coefficients and hence the functions f1(R) to f9(B), calculates the functional values for respective values in the ranges of R=1-64, G=1-64 and B=1-64, and then writes the calculated data into the RAM's 16[1]-16[3] based on the relationship between the block 17 and the RAM's 16[1]-16[3] shown in FIG. 13a (4[11] to 4[33]). Next, input reading is continued and the microprocessor waits for an instruction of starting (5). If the content of the correction characteristic instruction data is changed while waiting, the steps 4[11] to 4[33] are executed correspondingly to calculate the new functional values and write the results into the RAM's 16[1]-16[3] for update. Upon an instruction of starting (5), the color information applied to (the address input terminals of) the RAM's 16[1]-16[3] is set to 01 indicative of yellow Y, the data selector 15 is set into such connection as to apply the output data from the reading unit 2 to the address input terminals of the RAM's 16[1]-16[3], and the distributor 14 is set into such connection as to apply the output data from the adder 13 to the line buffer 15y, thereby instructing the reading unit 2 to read the data for one line. The gradation data output from the reading unit 2 are memorized in the line buffers 11r-11b, and the sum of the data read out of the RAM's 16[1]-16[3] accessed with the output data from the reading unit 2 (i.e., the output of the adder 13) is memorized in the line buffer 15y (7). After receiving the gradation data for one line from the reading unit 2, the microprocessor 18 now processes such that the color information applied to (the address input terminals of) the RAM's 16[1]-16[3] is set to 10 indicative of magenta M (8), the data selector 15 is set into such connection as to apply the read data from the line buffers 11r, 11g and 11b to the address input terminals of the RAM's 16[1]-16[3], the distributor 14 is set into such connection as to apply the output data from the adder 13 to the line buffer 15m to thereby read the gradation data out of the line buffers 11r-11b, and the sum of the data read out of the RAM's 16[1]-16[3] accessed with the above read data (i.e., the output of the adder 13) is memorized in the line buffer 15m (9). Subsequently, the color information applied to (the address input terminals of) the RAM's 16[1]-16[3] is set to 11 indicative of cyan C (10), the data selector 15 is left set into such connection as to apply the read data from the line buffers 11r, 11g and 11b to the address input terminals of the RAM's 16[1]-16[3], the distributor 14 is set into such connection as to apply the output data from the adder 13 to the line buffer 15c to thereby read the gradation data out of the line buffers 11r-11b, and the sum of the data read out of the RAM's 16[1]-16[3] accessed with the above read data (i.e., the output of the adder 13) is memorized in the line buffer 15c (11). Then, the data in the line buffers 15y, 15m and 15c are transferred to the image recording unit 8. After the completion of this transfer, the content of the line counter register is incremented by one (13), and whether or not the end of the page has been reached is judged from the content of the register (14). If the end of the page has not been reached, the process subsequent to step 6 is repeated in like manner as above.
If the end of the page has been reached, the microprocessor proceeds to the step of reading the input (3) and executes the processing operation in like manner as above in response to an instruction from the exterior. FIG. 13c shows the time relationship of reading and writing of the data in the foregoing conversion processing, from reading of the color gradation information to recording. Also in this embodiment, the memory capacity required for the RAM's 16[1]-16[3] is as much as that required for the ROM's 12[1]-12[3] shown in FIG. 11, and hence relatively small. As described in the above, in this embodiment, since the coefficients of the masking equations are set in response to the correction characteristic instruction data, it becomes possible to perform appropriate conversion of color information in conformity with the recording paper to be set, as well as the characteristics of the ink and the scanner. According to the foregoing embodiments as described hereinabove, the color reproducibility is improved through conversion of color information using equations of second or higher degree. In addition, such conversion requires only a relatively small memory capacity, and the needed hardware is mainly composed of a memory and an adder (which may be replaced by a microprocessor). As a result, the constitution is not especially complicated and the processing speed is relatively high. In particular, the embodiment shown in FIG. 13a permits the masking equations to be set selectively depending on a color reading characteristic of a scanner, a color reproduction characteristic of recording paper, and/or a color reproduction characteristic of a recording medium (such as ink or toner), etc., thus making it possible to utilize the conversion of color information in still wider fields.
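Stepping back to the interpolation embodiments described earlier: equation (6) is ordinary bilinear weighting of the four integer-coordinate thresholds around a rotated (screen-angled) sampling point. The following is a purely illustrative software sketch of that arithmetic, not the patent's multiplier/adder circuitry; the rotation angle and the 4 × 4 threshold matrix are invented example values, and the matrix is treated as repeating periodically.

```python
import math

def rotate(x, y, theta):
    """Coordinate conversion for a screen angle theta (radians)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def bilinear_threshold(matrix, u, v):
    """Equation (6): weight the four surrounding integer-cell thresholds."""
    n = len(matrix)
    x, y = math.floor(u), math.floor(v)
    dx, dy = u - x, v - y
    c = lambda i, j: matrix[j % n][i % n]   # periodic threshold matrix
    return (c(x, y) * (1 - dx) * (1 - dy) +
            c(x + 1, y) * dx * (1 - dy) +
            c(x, y + 1) * (1 - dx) * dy +
            c(x + 1, y + 1) * dx * dy)

dither = [[ 5,  9,  6, 10],   # example 4x4 threshold matrix (not from the patent)
          [13,  1, 14,  2],
          [ 7, 11,  4,  8],
          [15,  3, 12,  0]]

u, v = rotate(3, 2, math.radians(15))   # example pixel coordinate and screen angle
threshold = bilinear_threshold(dither, u, v)
print(threshold, threshold < 7)         # record if threshold < gradation level, as the comparator CMP does
```

With the nearest-integer variant of FIG. 5, the same sketch would reduce to rounding u and v before the table lookup instead of interpolating.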
{"url":"http://www.google.com.au/patents/US4819193","timestamp":"2014-04-17T18:35:25Z","content_type":null,"content_length":"132164","record_id":"<urn:uuid:20aea5d7-b683-4012-9f3c-0ab01017663e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Escondido, San Diego, CA Carlsbad, CA 92009 Recent Cal Poly graduate who loves tutoring math! I am a recent graduate from Cal Poly looking for students in need of a math tutor. I majored in Physics and minored in with a 3.5 GPA. I tutor Pre-Algebra, Algebra, Geometry, Trigonometry, Calculus, and Physics for all grade levels. I have a year of math... Offering 8 subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/Escondido_San_Diego_CA_mathematics_tutors.aspx","timestamp":"2014-04-16T07:49:26Z","content_type":null,"content_length":"61554","record_id":"<urn:uuid:f78cd7e8-6470-4257-81dc-0dd19de78dac>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplication tables 1 to 10 math lessons for elementary school (Preschool, Kindergarten, Grades 1-5) math - Multiplication tables 1 to 10 - lesson How to play: The screen is filled with bouncing balls with numbers on them. The monkey tells you which multiplication times table you have to solve. Click on the correct numbers in the right order. Controls: mouse. What we learn: Multiplication tables 1 to 10. Ready for classrooms with a SMARTboard or Interactive Whiteboard. Educational math lessons for elementary school (Preschool, Kindergarten, Grades 1-5) online homeschool curriculum.
{"url":"http://www.cyberkidzgames.com/cyberkidz/game.php?spelUrl=library/rekenen/groep5/rekenen3/&spelNaam=Multiplication%20tables%201%20to%2010&groep=5&vak=rekenen","timestamp":"2014-04-17T01:01:18Z","content_type":null,"content_length":"10783","record_id":"<urn:uuid:7d5bec37-07e3-4ddb-9f1f-0d2f1b43a761>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Limiting XYZ Geometry Drift I just want to move my hard work from a supplied Shapefile to a File Geodatabase and then into the Enterprise Geodatabase. However, the geography is moving around like it's in the wind. My cadastre now has all sorts of sliver polygons and the attribute table reports different areas and distances to the original data sets. Never fear: there are a few basic things you can do to begin to mitigate these drift issues. Each spatial data type stores the geometry differently and each storage method can have vastly different limitations. In addition, the projection and datum defined on the spatial data impact significantly on geometry drift. Assumption and ignorance can be the stepmother of many issues. Drift is specifically caused by the underlying spatial data types (their XY resolution and XY tolerance) and by projections. To mitigate drift: standardise your projection and datum, standardise your transformation equations, standardise your XY resolution and understand your data's geometry. Vector geometries typically come in three major classes: points, consisting of XYZ coordinates; lines, many XYZ coordinates linked together with mathematical equations; and polygons, which are akin to a line except that the last mathematical equation links back to the first XYZ in the sequence. These XYZs and their mathematics are stored differently between all vector formats. The common factor is that an XYZ coordinate must be located at the intersection of a coordinate grid (figure 1). When an attempt is made to place an XYZ at any other location it is moved to the grid's intersection (figure 2). This movement is like a magnetic force sucking the XYZ into the correct location; this concept is called "tolerance". Coordinate grids can be fine like figure 3 or coarse like figure 4; this is called "XY resolution". When a coarse XY resolution dataset is moved into a fine one (figure 1 moved into figure 3) there is insignificant physical adjustment. However, when a fine XY dataset is moved into a coarse one (figure 3 to figure 4) things move. The XY tolerance moves the XYZs to the intersections of the coarser coordinate grid and error is introduced. Each data type (for example Shapefile, CAD, MIF and Geodatabase) has different XY resolution abilities; in addition, the XY resolution capabilities change depending on the projection and datum. In the scenario of importing shapefile data into a File Geodatabase and then into SDO_Geometry (a class of Enterprise Geodatabase): Esri Shapefiles store XYZs in a "Shape field" of a double type. This format has no controls over the XY resolution or tolerances. A shapefile's geometry is impacted primarily by the projection and then by the double field. Shapefiles are better than Esri Coverages, which have a single-precision "shape field". The precision of these geometry fields affects the way XYZs are stored. In an ArcGIS 9.2 File Geodatabase and above, when you create a new feature class you can specify the projection, the XY resolution and the XY tolerance. The default XY resolution for GDA 1994 zones is 1 mm. This resolution is not suitable for some GIS purposes (roads, geology, rivers, cadastre, and so on). Oracle Spatial (Enterprise Geodatabase) stores its XYZs in an SDO_Geometry type field. This data type cannot typically store an XYZ resolution finer than 5 cm. Therefore bringing a sub-millimetre file geodatabase feature class or CAD file into Oracle will result in drift.
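To make the grid idea concrete, snapping a coordinate to an XY resolution grid can be pictured as rounding to the nearest grid intersection; the gap between the original and snapped values is the drift. The snippet below is purely illustrative (Python) and is not how any particular GIS package implements its coordinate storage:

```python
def snap(value, resolution):
    """Move a coordinate to the nearest intersection of the coordinate grid."""
    return round(value / resolution) * resolution

x = 512345.6789   # an easting captured at sub-millimetre precision
for res in (0.0001, 0.005, 0.05):          # 0.1 mm, 5 mm and 5 cm grids
    snapped = snap(x, res)
    print(res, snapped, abs(snapped - x))  # the error grows as the grid gets coarser
```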
If you constantly go between Shapefiles, File Geodatabase and Oracle then you should set up your geodatabase feature classes to cater to the lowest common denominator. Prescribe an XY resolution of 5 cm to feature classes. Projections and datums influence the XYZ resolution; therefore it is wise to standardise the geographic and Cartesian systems. Transformation equations that take one data set and re-project it to another system impact on the XY resolution. For example, the NTv2 transformation transforms AGD 66 to GDA 94. This transformation has an XY resolution of 10 cm; therefore there is drift. It is prudent to research the transformation equations that you use, then standardise their use. Interesting ideas from the Esri help: "To keep coordinate movement small, keep the XY tolerance small. However, an XY tolerance that is too small (such as 2 * XY Resolution or less) may not properly integrate the line work of shared boundaries." (So for us, let's not make a cadastre in the sub-millimetres.) "Conversely, if your XY tolerance is too large, feature coordinates may collapse on one another. This can compromise the accuracy of feature boundary representations." (So let's not make a cadastre in the metres.) "Your XY tolerance should never approach your data capture resolution. For example, at a map scale of 1:12,000, one inch equals 1,000 feet, and 1/50 of an inch still equals 20 feet – a data capture accuracy that would be hard to meet during digitizing and scan conversion. You'll want to keep the coordinate movement using the XY tolerance well under these numbers. Remember, the default XY tolerance in this case would be 0.003281 feet."
File Geodatabase XY resolution workflow:
1. Run a repair geometry geoprocess on the shapefile (repeat until no errors are reported)
2. Create a new File Geodatabase (not personal)
3. Create either a new feature dataset or a new feature class
4. Import or select the projection (in my case: GDA 94)
5. On the XY tolerance window, change the defaults, i.e. 0.001 Meter to 0.05 Meter (these should always be a magnitude above the resolution)
6. Untick "Accept default resolution and domain extent"
7. On the XY resolution box, change the defaults, i.e. 0.0001 Meter to 0.005 Meter
8. Finish the rest of the dialog
9. Import the repaired shapefile into the new feature class (try the simple data loader)
Alternatively you can directly import the shapefile (or other formats) into the file geodatabase; however, ensure the environment settings specify the XY resolution and tolerance of 5 cm.
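The numbered workflow above can also be scripted. The outline below is a hedged ArcPy (Python) sketch only: tool names, environment settings and the example spatial reference (EPSG 28355, GDA94 / MGA Zone 55) should be checked against your own ArcGIS version and data, and the paths are placeholders.

```python
import arcpy

# Session defaults: 5 cm XY resolution with a tolerance one magnitude above it.
arcpy.env.XYResolution = "0.005 Meters"
arcpy.env.XYTolerance = "0.05 Meters"
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(28355)  # example: GDA94 / MGA Zone 55

shp = r"C:\data\cadastre.shp"

# Step 1: repair the shapefile geometry (re-run until no errors are reported).
arcpy.RepairGeometry_management(shp)

# Steps 2-9: create the File Geodatabase and import the repaired shapefile;
# the environment settings above supply the resolution and tolerance.
arcpy.CreateFileGDB_management(r"C:\data", "cadastre.gdb")
arcpy.FeatureClassToFeatureClass_conversion(shp, r"C:\data\cadastre.gdb", "cadastre")
```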
{"url":"https://gisintelligence.wordpress.com/2011/04/19/limiting-xyz-geometry-drift/","timestamp":"2014-04-21T01:59:04Z","content_type":null,"content_length":"65058","record_id":"<urn:uuid:746ea2bb-6097-468b-b1a9-8d01d1ea55d2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Unveiling the Mysteries of GLL Part 2: The Problem Space In the previous article, we skimmed the surface of automated text parsing and set the stage for our impending exploration of the GLL algorithm itself. However, before we can move ahead and do just that, we should first build up some idea of what the requirements are for truly generalized parsing and what sort of problems we are likely to encounter. I’m going to assume you already have a working understanding of context-free grammars and how to read them. If you don’t, then I recommend you to the Wikipedia page on CFGs. Specifically, the examples are quite instructive. S ::= '(' S ')' | '(' ')' In this grammar, the S non-terminal is recursive because one of its productions refers back to itself. Specifically, the first rule corresponding to the S non-terminal is of the form α S β, where α and β stand for some arbitrary rule fragments (in this case, '(' and ')', respectively). When a non-terminal maps to a production which is recursive in its first token, we say that rule is left-recursive. For example: E ::= E '+' N | E '-' N | N N ::= '1' | '2' | '3' | ... In this grammar, the E non-terminal is left-recursive in two of its three productions. Left-recursion is a particularly significant property of a grammar because it means that any left-to-right parse process would need to parse E by first parsing E itself, and then parsing '+' and finally N (assuming that the parser is using the first production). As you can imagine, it would be very easy for a naïve parsing algorithm to get into an infinite loop, trying to parse E by first parsing E, which requires parsing E, which requires parsing E, etc. Mathematically, left-recursive productions are always of the form α E β where α —> ε. In plain-English, this means that a production is left recursive if the part of the production preceding the recursive token represents the empty string (ε). This is a very nice way of defining left-recursion, because it allows for a specific type of left-recursion known as hidden left-recursion. For A ::= B A '.' | '.' B ::= ',' Notice how the second production for B is empty? This means that B can map to ε, and thus A exhibits hidden left-recursion. The difference between hidden and direct left-recursion is that hidden left-recursion is obscured by other rules in the grammar. If we didn’t know that B had the potential to produce the empty string, then we would never have realized that A is left-recursive. LR parsing algorithms (such as tabular LALR or recursive-ascent) can handle direct left-recursion without a problem. However, not even Tomita’s GLR can handle hidden left-recursion (which technically means that the GLR algorithm isn’t fully general). Hidden left-recursion is a perfectly valid property for a context-free grammar to exhibit, and so in order to be fully general, a parsing algorithm must be able to handle it. As it turns out, this is just a little bit troublesome, and many papers on parsing algorithms spend a large majority of their time trying to explain how they handle hidden It’s worth noting that left-recursion cannot be handled by top-down algorithms (such as tabular LL(k) or recursive-descent) without fairly significant contortions. However, such algorithms have no trouble at all with other forms of recursion (such as our original recursive example with S). 
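To see concretely why a left-to-right, top-down parser gets stuck, here is a deliberately naive recursive-descent sketch (Python, purely illustrative; it is not taken from any parser library) for the left-recursive grammar E ::= E '+' N | N. Parsing E begins by parsing E without consuming any input, so the first alternative recurses forever:

```python
def parse_N(tokens, pos):
    """N ::= '1' | '2' | ... : consume one digit token if present."""
    if pos < len(tokens) and tokens[pos].isdigit():
        return pos + 1
    return None

def parse_E(tokens, pos):
    # Alternative 1: E ::= E '+' N
    # The very first step is to parse E again at the same position,
    # so this call never makes progress and blows the stack.
    after_e = parse_E(tokens, pos)
    if after_e is not None and after_e < len(tokens) and tokens[after_e] == '+':
        return parse_N(tokens, after_e + 1)
    # Alternative 2: E ::= N
    return parse_N(tokens, pos)

# parse_E(['1', '+', '2'], 0)   # RecursionError: maximum recursion depth exceeded
```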
Left-recursion arises very naturally in many grammars (particularly involving binary forms such as object-oriented method dispatch or mathematical operators) and is one of the primary reasons why many people prefer algorithms in the LR family over LL algorithms. It is perhaps surprising that context-free grammars are not required to be unambiguous. This means that a grammar is allowed to accept a particular input by using more than one possible sequence of rules. The classic example of this is arithmetic associativity: E ::= E '+' E | E '-' E | '1' | '2' | '3' | '4' | ... This is an extremely natural way to encode the grammar for mathematical plus and minus. After all, when we mentally think about the + operator, we imagine the structure as two expressions separated by +, where an expression may be a primitive number, or a complex expression like another addition or a subtraction operation. Unfortunately, this particular encoding has a rather problematic ambiguity. Consider the following expression: 4 + 5 + 2 Clearly this is a valid expression, and a parser for the example grammar will certainly accept it as input. However, if we try to generate a parse tree for this expression, we’re going to run into two possible outcomes: Literally, the question is whether or not we first expand the left or the right E in the top-level + expression. Expanding the left E will give us the tree to the left, where the first two operands (4 and 5) are added together with the final result being added to 2. Expanding the right E gives us the tree to the right, where we add 5 and 2 together, adding that to 4. Of course, in the case of addition, associativity doesn’t matter too much, we get the same answer either way. However, if this were division, then associativity could make all the difference in the world. (4 / 5) / 2 = 0.4, but 4 / (5 / 2) = 1.6. The point is that we can follow all of the rules set forth by the grammar and arrive at two very different answers. This is the essence of ambiguity, and it poses endless problems for most parsing algorithms. If you think about it, in order to correctly handle ambiguity, a parser would need to return not just one parse tree for a particular input, but all possible parse trees. The parser’s execution would have to keep track of all of these possibilities at the same time, somehow following each one to its conclusion maintaining its state to the very end. This is not an easy problem, particularly in the face of grammars like the following: S ::= S S S | S S | 'a' Clearly, this is a contrived grammar. However, it’s still a valid CFG that a generalized parsing algorithm would need to be able to handle. The problem is that there are an exponential number of possible parse trees for any given (valid) input. If the parser were to naïvely follow each and every one of these possibilities one at a time, even on a short string, the parse process would take more time than is left in the age of the universe as we know it. Obviously, that’s not an option. As you can see, generalized parsing has some very thorny problems to solve. It’s really no wonder that the algorithms tend to be cryptic and difficult to understand. However, this is not to say that the problems are insurmountable. There are some very elegant and easy-to-understand algorithms for solving these problems, and GLL is one of them. In the next article, we will start looking at the GLL algorithm itself, along with that chronically under-documented data structure at its core, the graph-structured stack (GSS). 1. 
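Before moving on, the combinatorial explosion mentioned above is easy to quantify: for the ambiguous E ::= E '+' E style grammar, an expression with n operators has a number of distinct parse trees equal to the nth Catalan number. The small counter below (Python, illustrative; it counts only binary bracketings, so it actually understates the growth for grammars like S ::= S S S | S S | 'a') shows the exponential trend:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parse_trees(n_ops):
    """Number of distinct binary parse trees for an expression with n_ops operators
    (the Catalan numbers); e.g. 4 + 5 + 2 has 2, and a 10-operator chain has 16796."""
    if n_ops == 0:
        return 1
    return sum(parse_trees(k) * parse_trees(n_ops - 1 - k) for k in range(n_ops))

print([parse_trees(n) for n in range(11)])
# [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796]
```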
Dan, I'm learning about some newer parsing techniques and I just wanted to ask, is GLL the same or similar to Earley parsing? If it's different, how so? 2. I've recently begun to read Aho et al.'s Compilers, and I'm happy I did so before reading your series. Makes it far easier to follow. Just a technical question from blogger to blogger: what tool did you use to generate those graphs? I intend to write a series of blog posts about the thermal simulation of buildings, which also makes heavy use of graphs and nodes. I'd very much appreciate it if you would share. 3. What do you think about generalized parsing vs PEG parsing with left recursion? 4. This post helped me to track down a problem in my parser. Turns out I had some rules that had an exponential amount of possible parse trees. I'm using a parser generator that has automatic backtracking and it was causing the backtracking algorithm to appear to hang. I couldn't figure out what was going on but when I read this post I realized that had to be the problem. Thanks! 5. Really great articles!! But please continue them soon – you've left us on a bit of a cliffhanger here and I'm pretty eager to know about the GLL algorithm… 6. "It is perhaps surprising that context-free grammars are not required to be unambiguous. This means that a grammar is allowed to accept a particular input by using more than one possible sequence of rules." Technically speaking, CFGs are generating grammars, not parsing grammars. From that perspective, it's quite natural that they're not required to be unambiguous. You don't speak of generating grammars as "accepting input". 7. I am eagerly awaiting Part 3. 8. Jules, it is a hack that frankly doesn't work right, often generating the wrong parse. 9. Hi, I would also be happy to read the next article of the series – or to see performance improvements for your GLL combinators. In my research group we considered using them last year (IIRC when working on the OOPSLA paper for TypeChef), but we ran away because of the comment "performance at the moment is non-existant" on the project README. Afterwards you seem to say "but for non-ambiguous grammars it's not so bad, actually we're faster", so I'm really confused. 10. Thanks for the articles in the series so far; it's a fascinating topic. I too would be greatly interested in the next one!
{"url":"http://www.codecommit.com/blog/scala/unveiling-the-mysteries-of-gll-part-2","timestamp":"2014-04-16T18:57:27Z","content_type":null,"content_length":"27942","record_id":"<urn:uuid:b74d4e46-1ae6-4f3a-b741-fd8599a61660>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about statistics on Serious Stats Multicollinearity tutoral I just posted brief multicollinearity tutorial on my other blog (loosely based on the material from the Serious Stats book). You can read it here. There are many good resources online for learning R. However, I recently discovered Try R from Code school – which is interactive, goes at a very gentle pace and also looks very pretty: It is now increasingly common for experimental psychologists (among others) to use multilevel models (also known as linear mixed models) to analyze data that used to be shoe-horned into a repeated measures ANOVA design. Chapter 18 of Serious Stats introduces multilevel models by considering them as an extension of repeated measures ANOVA models that can cope with missing outcomes, time-varying covariates and can relax the sphericity assumption of conventional repeated measures ANOVA. They can also deal with other – less well known – problems such as having stimuli that are random factor (e.g., see this post on my Psychological Statistics blog). Last, but not least, multilevel generalised linear models allow you to have discrete and bounded outcomes (e.g., dichotomous, ordinal or count data) rather than be constrained by as assuming a continuous response with normal errors. There are two main practical problems to bear in mind when switching to the multilevel approach. First, the additional complexity of the approach can be daunting at first – though it is possible to built up gently to more complex models. Recent improvements in availability of software and support (textbooks, papers and online resources) also help. The second is that as soon as a model departs markedly from a conventional repeated measures ANOVA, correct inferences (notably significance tests and interval estimates such as confidence intervals) can be difficult to obtain. If the usual ANOVA assumptions hold in a nested, balanced design then there is a known equivalence between the multilevel model inferences using t or F tests and the familiar ANOVA tests (and this case the expected output of the tests is the same). The main culprits are boundary effects (which effect inferences about variances and hence most tests of random effects) and working out the correct degrees of freedom (df) to use for your test statistic. Both these problems are discussed in Chapter 18 of the book. If you have very large samples an asymptotic approach (using Wald z or chi-square statistics) is probably just fine. However, the further you depart from conventional repeated measures ANOVA assumptions the harder it is to know how large a sample news to be before the asymptotics kick in. In other words, the more attractive the multilevel approach the less you can rely on the Wald tests (or indeed the Wald-style t or F tests). The solution I advocate in Serious Stats is either to use parametric bootstrapping or Markov chain Monte Carlo (MCMC) approaches. Another approach is to use some form of correction to the df or test statistic such as the Welch-Satterthwaite correction. For multilevel models with factorial type designs the recommended correction is generally the Kenward-Roger approximation. This is implemented in SAS, but (until recently) not available in R. Judd, Westfall and Kenny (2012) describe how to use the Kenward-Roger approximation to get more accurate significance tests from a multilevel model using R. Their examples use the newly developed pbkrtest package (Halekoh & Højsgaard, 2012) – which also has functions for parametric bootstrapping. 
My purpose here is to contrast the the MCMC and Kenward-Roger correction (ignoring the parametric bootstrap for the moment). To do that I’ll go through a worked example – looking to obtain a significance test and a 95% confidence interval (CI) for a single effect. The pitch data example The example I’ll use is for the pitch data from from Chapter 18 of the book. This experiment (from a collaboration with Tim Wells and Andrew Dunn) involves looking at the at pitch of male voices making attractiveness ratings with respect to female faces. The effect of interest (for this example) is whether average pitch goes up or done for higher ratings (and if so, by how much). A conventional ANOVA is problematic because this is a design with two fully crossed random factors – each participant (n = 30) sees each face (n = 32) and any conclusions ought to generalise both to other participants and (crucially) to other faces. Furthermore, there is a time-varying covariate – the baseline pitch of the numerical rating when no face is presented. The significance tests or CIs reported by most multilevel modelling packages with also be suspect. Running the analysis in the R package lme4 gives parameter estimates and t statistics for the fixed effects but no p values or CIs. The following R code loads the pitch data, checks the first few cases, loads lme4 and runs the model of interest. (You should install lme4 using the command install.packages(‘lme4′) if you haven’t done so already). pitch.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/pitch.csv') pitch.me <- lmer(pitch ~ base + attract + (1|Face) + (1|Participant), data=pitch.dat) Note the lack of df and p values. This is deliberate policy by the lme4 authors; they are not keen on giving users output that has a good chance of being very wrong. The Kenward-Roger approximation This approximation involves adjusting both the F statistic and its df so that the p value comes out approximately correct (see references below for further information). It won’t hurt too much to think of it as turbocharged Welch-Satterthwaite correction. To get the corrected p value from this approach first install the pbkrtest package and then load it. The approximation is computed using the KRmodcomp() function. This takes the model of interest (with the focal effect) and a reduced model (one without the focal effect). The code below installs and loads everything, runs the reduced model and then uses KRmodcomp() to get the corrected p value. Note that it may take a while to run (it took about 30 seconds on my laptop). pitch.red <- lmer(pitch ~ base + (1|Face) + (1|Participant), data=pitch.dat) KRmodcomp(pitch.me, pitch.red) The corrected p value is .0001024. The result could reported as a Kenward-Roger corrected test with F(1, 118.5) = 16.17, p = .0001024. In this case the Wald z test would have given a p value of around .0000435. Here the effect is sufficiently large that the difference in approaches doesn’t matter – but that won’t always be true. The MCMC approach The MCMC approach (discussed in Chapter 18) can be run in several ways – with the lme4 functions or those in MCMCglmm being fairly easy to implement. Here I’ll stick with lme4 (but for more complex models MCMCglmm is likely to be better). First you need to obtain a large number of Monte Carlo simulations from the model of interest. I’ll use 25,000 here (but I often start with 1,000 and work up to a bigger sample). Again this may take a while (about 30 or 40 seconds on my laptop). 
pitch.mcmc <- mcmcsamp(pitch.me, n = 25000) For MCMC approaches it is useful to check the estimates from the simulations. Here I’ll take a quick look at the trace plot (though a density plot is also sensible – see chapter 18). This produces the following plot (or something close to it): The trace for the fixed effect of attractiveness looks pretty healthy – the thich black central portion indicating that it doesn’t jump around too much. Now we can look at the 95% confidence interval (strictly a Bayesian highest posterior density or HPD interval – but for present purposes it approximates to a 95% CI). This gives the interval estimate [0.2227276, 0.6578456]. This excludes zero so it is statistically significant (and MCMCglmm would have given an us MCMC-derived estimate of the p value). Comparison and reccomendation Although the Kenward-Roger approach is well-regarded, for the moment I would reccomend the MCMC approach. The pbkrtest package is still under development and I could not always get the approximation or the parametric bootstrap to work (but the parametric bootstrap can also be obtained in other ways – see Chapter 18). The MCMC approach is also preferable in that it should generalize safely to models where the performance of the Kenward-Roger approximation is unknown (or poor) such as for discrete or ordinal outcomes. It also provides interval estimates rather than just p values. The main downside is that you need to familiarize yourself with some basic MCMC diagnostics (e.g., trace and density plots at the very least) and be willing to re-run the simulations to check that the interval estimates are stable. Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103, 54-69. Halekoh, U., & Højsgaard, S. (2012) A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models – the R package pbkrtest. Submitted to Journal of Statistical Ben Bolker pointed out that future versions of lme4 may well drop the MCMC functions (which are limited, at present, to fairly basic models). In the book I mainly used MCMCglmm – which is rather good at fitting fully crossed factorial models. Here is the R code for the pitch data. Using 50,000 simulations seems to give decent estimates of the attractiveness effect. Plotting the model object gives both MCMC trace plots and kernel density plots of the MCMC estimates (hit return in the console to see all the plots). nsims <- 50000 pitch.mcmcglmm <- MCMCglmm(pitch ~ base + attract, random= ~ Participant + Face, nitt=nsims, data=pitch.dat) Last but not least, any one interested in the topic should keep an eye on the draft r-sig-mixed-modelling FAQ for a summary of the challenges and latest available solutions for multilevel inference in R (and other packages). One of the main attractions of R (for me) is the ability to produce high quality graphics that look just the way you want them to. The basic plot functions are generally excellent for exploratory work and for getting to know your data. Most packages have additional functions for appropriate exploratory work or for summarizing and communicating inferences. Generally the default plots are at least as good as other (e.g., commercial packages) but with the added advantage of being fairly easy to customize once you understand basic plotting functions and parameters. 
Even so, getting a plot looking just right for a presentation or publication often takes a lot of work using basic plotting functions. One reason for this is that constructing a good graphic is an inherently difficult enterprise, one that balances aesthetic factors and statistical factors and that requires a good understanding of who will look at the graphic, what they know, what they want to know and how they will interpret it. It can takes hours – maybe days – to get a graphic right. In Serious Stats I focused on exploratory plots and how to use basic plotting functions to customize them. I think this was important to include, but one of my regrets was not having enough space to cover a different approach to plotting in R. This is Hadley Wickham’s ggplot2 package (inspired by Leland Wilkinson’s grammar of graphics approach). In this blog post I’ll quickly demonstrate a few ways that ggplot2 can be used to quickly produce amazing graphics for presentations or publication. I’ll finish by mentioning some pros and cons of the approach. The main attraction of ggplot2 for newcomers to R is the qplot() quick plot function. Like the R plot() function it will recognize certain types and combinations of R objects and produce an appropriate plot (in most cases). Unlike the basic R plots the output tends to be both functional and pretty. Thus you may be able to generate the graph you need for your talk or paper almost A good place to start is the vanilla scatter plot. Here is the R default: Compare it with the ggplot2 default: Below is the R code for comparison. (The data here are from hov.csv file used in Chapter 10 Example 10.2 of Serious Stats). # install and then load the package # get the data hov.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/hov.csv') # using plot() with(hov.dat, plot(x, y)) # using qplot() qplot(x, y, data=hov.dat) R code formatted by Pretty R at inside-R.org Adding a line of best fit The ggplot2 version is (in my view) rather prettier, but a big advantage is being able to add a range of different model fits very easily. The common choice of model fit is that of a straight line (usually the least squares regression line). Doing this in ggplot2 is easier than with basic plot functions (and you also get 95% confidence bands by default). Here is the straight line fit from a linear model: qplot(x, y, data=hov.dat, geom=c(‘point’, ‘smooth’), method=’lm’) The geom specifies the type of plot (one with points and a smoothed line in this case) while the method specifies the model for obtaining the smoothed line. A formula can also be added (but the formula defaults to y as a simple linear function of x). Loess, polynomial fits or splines Mind you, the linear model fit has some disadvantages. Even if you are working with a related statistical model (e.g., a Pearson’s r or least squares simple or multiple regression) you might want to have a more data driven plot. A good choice here is to use a local regression approach such as loess. This lets the data speak for themselves – effectively fitting a complex curve driven by the local properties of the data. If this is reasonably linear then your audience should be able to see the quality of the straight-line fit themselves. The local regression also gives approximate 95% confidence bands. These may support informal inference without having to make strong assumptions about the model. 
Here is the loess plot: Here is the code for the loess plot: qplot(x, y, data=hov.dat, geom=c(‘point’, ‘smooth’), method=’loess’) I like the loess approach here because its fairly obvious that the linear fit does quite well. showing the straight line fit has the appearance of imposing the pattern on the data, whereas a local regression approach illustrates the pattern while allowing departures from the straight line fit to show through. In Serious Stats I mention loess only in passing (as an alternative to polynomial regression). Loess is generally superior as an exploratory tool – whereas polynomial regression (particularly quadratic and cubic fits) are more useful for inference. Here is an example of a cubic polynomial fit (followed by R code): qplot(x, y, data=hov.dat, geom=c(‘point’, ‘smooth’), method=’lm’, formula= y ~ poly(x, 2)) Also available are fits using robust linear regression or splines. Robust linear regression (see section 10.5.2 of Serious Stats for a brief introduction) changes the loss function least squares in order to reduce impact of extreme points. Sample R code (graph not shown): qplot(x, y, data=hov.dat, geom=c(‘point’, ‘smooth’), method=’rlm’) One slight problem here is that the approximate confidence bands assume normality and thus are probably too narrow. Splines are an alternative to loess that fits sections of simpler curves together. Here is a spline with three degrees of freedom: qplot(x, y, data=hov.dat, geom=c(‘point’, ‘smooth’), method=’lm’, formula=y ~ ns(x, 3)) A few final thoughts If you want to know more the best place to start is with Hadley Wickham’s book. Chapter 2 covers qplot() and is available free online. The immediate pros of the ggplot2 approach are fairly obvious – quick, good-looking graphs. There is, however, much more to the package and there is almost no limit to what you can produce. The output of the ggplot2 functions is itself an R object that can be stored and edited to create new graphs. You can use qplot() to create many other graphs – notably kernel density plots, bar charts, box plots and histograms. You can get these by changing the geom (or by default with certain object types an input). The cons are less obvious. First, it takes some time investment to get to grips with the grammar of graphics approach (though this is very minimal if you stick with the quick plot function). Second, you may not like the default look of the ggplot2 output (though you can tweak it fairly easily). For instance, I prefer the default kernel density and histogram plots from the R base package to the default ggplot2 ones. I like to take a bare bones plot and build it up … trying to keep visual clutter to a minimum. I also tend to want black and white images for publication (whereas I would use grey and colour images more often in presentations). This is mostly to do with personal taste. In a previous post I showed how to plot difference-adjusted CIs for between-subjects (independent measures) ANOVA designs (see here). The rationale behind this kind of graphical display is introduced in Chapter 3 of Serious stats (and summarized in my earlier blog post). In a between-subjects – or in indeed in a within-subjects (repeated measures) – design you or your audience will not always be interested only in the differences between the means. Rarely, the main focus may even be on the individual estimates themselves. A CI for each of the individual means might be informative for several reasons. 
First, it may be important to know that the interval excludes an important parameter value (e.g., zero). The example in Chapter 3 of Serious Stats involved a task in which participants had to decide which of two diagrams matched a description they had just read. Chance performance is 50% matching accuracy, so a graphical display that showed that the 95% CI for each mean excludes 50% suggests that participants in each group were performing above chance. Second the CI for an individual mean gives you an idea of the relative precision with which that quantity is measured. This may be particularly important in an applied domain. For example, you may want to be fairly sure that performance on a task is high in some conditions as well as being sure that there are differences between conditions. Third, the CIs for the individual means are revealing about changes in the precision between conditions. If the sample sizes are equal (or nearly equal) they are also revealing about patterns in the variances. This is because the precision of the individual means is a function of the standard error and n. This may be obscured when difference-adjusted CIs are plotted – though mainly for within-subjects (repeated measures) designs which have to allow for the correlation between the samples. In any case, it may be desirable to display CIs for individual means and difference-adjusted means on the same plot. This could be accomplished in several ways but I have proposed using a two-tiered CI plot (see here for a brief summary of my BRM paper on this or see Chapter 16 of Serious stats). A common approach (for either individual means or difference-adjusted CIs) is to adopt a pooled error term. This results in a more accurate CI if the homogeneity of variance assumption is met. For the purposes of a graphical display I would generally avoid pooled error terms (even if you use a pooled error term in your ANOVA). A graphical display of means is useful as an exploratory aid and supports informal inference. You want to be able to see any patterns in the precision (or variances) of the means. Sometimes these patterns are clear enough to be convincing without further (formal) inference or modeling. If they aren’t completely convincing it usually better to show the noisy graphic and supplement it with formal inference if necessary. Experienced researchers understand that real data are noisy and may (indeed should!) get suspicious if data are too clean. (I’m perhaps being optimistic here – but we really ought to have more tolerance for noisy data, as this should reduce the pressure on honest researchers to ‘optimize’ their analyses – e.g., see here). My earlier post on this blog provided functions for the single tier difference-adjusted CIs. 
Here is the two-tiered function (for a oneway design):

plot.bsci.tiered <- function(data.frame, group.var=1, dv.var=2, var.equal=FALSE, conf.level = 0.95, xlab = NULL, ylab = NULL, level.labels = NULL, main = NULL, pch = 19, pch.cex = 1.3, text.cex = 1.2, ylim = c(min.y, max.y), line.width = c(1.5, 1.5), tier.width=0, grid=TRUE) {
    data <- subset(data.frame, select=c(group.var, dv.var))
    fact <- factor(data[[1]])
    dv <- data[[2]]
    J <- nlevels(fact)
    ci.outer <- bsci(data.frame=data.frame, group.var=group.var, dv.var=dv.var, difference=FALSE, var.equal=var.equal, conf.level=conf.level)
    ci.inner <- bsci(data.frame=data.frame, group.var=group.var, dv.var=dv.var, difference=TRUE, var.equal=var.equal, conf.level=conf.level)
    moe.y <- max(ci.outer) - min(ci.outer)
    min.y <- min(ci.outer) - moe.y/3
    max.y <- max(ci.outer) + moe.y/3
    if (missing(xlab)) xlab <- "Groups"
    if (missing(ylab)) ylab <- "Confidence interval for mean"
    plot(0, 0, ylim = ylim, xaxt = "n", xlim = c(0.7, J + 0.3), xlab = xlab, ylab = ylab, main = main, cex.lab = text.cex)
    if (grid == TRUE) grid()
    points(ci.outer[,2], pch = pch, bg = "black", cex = pch.cex)
    index <- 1:J
    segments(index, ci.outer[, 1], index, ci.outer[, 3], lwd = line.width[1])
    axis(1, index, labels = level.labels)
    if(tier.width==0) {
        segments(index - 0.025, ci.inner[, 1], index + 0.025, ci.inner[, 1], lwd = line.width[2])
        segments(index - 0.025, ci.inner[, 3], index + 0.025, ci.inner[, 3], lwd = line.width[2])
    } else segments(index, ci.inner[, 1], index, ci.inner[, 3], lwd = line.width[1]*(1 + abs(tier.width)))
}

The following example uses the diagram data from the book:

diag.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/diagram.csv')
plot.bsci.tiered(diag.dat, group.var=2, dv.var=4, ylab='Mean description quality', main = 'Two-tiered CIs for the Diagram data', tier.width=1)

The result is a plot that looks something like this (though I should probably have reordered the groups and labeled them):

For these data the group sizes are equal and thus the width of the outer tier reflects differences in variances between the groups. The variances are not very unequal, but neither are they particularly homogeneous. The inner tier suggests group three is different from groups 2 and 4 (but not from group 1). This is a pretty decent summary of what’s going on and could be supplemented by formal inference (see Chapter 13 for a comparison of several formal approaches also using this data set).

N.B. R code formatted via Pretty R at inside-R.org

Footnote: The aesthetics of error bar plots

A major difference between the plot shown here and that in my BRM paper or in the book is that I have changed the method of plotting the tiers. The change is mainly aesthetic, but also reflects the desire not to emphasize the extremes of the error bar. The most plausible values of the parameter (e.g., mean) are towards the center of the interval – not at the extremes. I have discussed the reasons for my change of heart in a bit more detail elsewhere. To this end I have also updated all my plotting functions. They still use the crossbar style from the book by default, but this is controlled by a tier width argument. If tier.width=0 the crossbar style is used; otherwise the tier.width value controls the additional thickness of the difference-adjusted lines. In general, tier.width=1 seems to work well (but the crossbar style may be necessary for some unusual within-subject CIs where the difference-adjusted CI is wider than the CI for the individual means).
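If you want the numeric limits behind the two tiers rather than (or as well as) the plot, they can be obtained directly from bsci() – the between-subjects CI function from the earlier post (reproduced further down this page), which plot.bsci.tiered() calls internally. Assuming that function is loaded, something along these lines should work:

# outer tier: conventional 95% CIs for the individual means
bsci(diag.dat, group.var=2, dv.var=4, difference=FALSE)
# inner tier: difference-adjusted 95% CIs
bsci(diag.dat, group.var=2, dv.var=4, difference=TRUE)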
UPDATE: Some problems arose with my previous host so I have now updated the links here and elsewhere on the blog. The companion web site for Serious StatsR scripts for each chapter. This contains examples of R code and and all my functions from the book (and a few extras). This is a convenient form for working through the examples. However, if you just want to access the functions it is more convenient to load them all in at once. The functions can be downloaded as a text file from: More conveniently, you can load them directly into R with the following call: In addition to the Serious Stats functions, a number of other functions are contained in the text file. These include functions published on this blog for comparing correlations or confidence intervals for independent measures ANOVA and functions my paper on confidence intervals for repeated measures ANOVA. The companion web site for Serious stats is now live: It includes a sample chapter (Chapter 15: Contrasts), data sets, R scripts for all the examples and supplementary material. In Chapter 2 (Confidence Intervals) of Serious stats I consider the problem of displaying confidence intervals (CIs) of a set of means (which I illustrate with the simple case of two independent means). Later, in Chapter 16 (Repeated Measures ANOVA), I consider the trickier problem of displaying of two or more means from paired or repeated measures. The example in Chapter 16 uses R functions from my recent paper reviewing different methods for displaying means for repeated measures (within-subjects) ANOVA designs (Baguley, 2012b). For further details and links see a brief summary on my psychological statistics blog. The R functions included a version for independent measures (between-subject) designs, but this was a rather limited designed for comparison purposes (and not for actual use). The independent measures case is relatively straight-forward to implement and I hadn’t originally planned to write functions for it. Since then, however, I have decided that it is worth doing. Setting up the plots can be quite fiddly and it may be useful to go over the key points for the independent case before you move on to the repeated measures case. This post therefore adapts my code for independent measures (between-subjects) designs. The approach I propose is inspired by Goldstein and Healy (1995) – though other authors have made similar suggestions over the years (see Baguley, 2012b). Their aim was to provide a simple method for displaying a large collection of independent means (or other independent statistics). At its simplest the method reduces to plotting each statistic with error bars equal to ±1.39 standard errors of the mean. This result is a normal approximation that can be refined in various ways (e.g., by using the t distribution or by extending it to take account of correlations between conditions). Using a Goldstein-Healy plot two means are considered different with 95% confidence if their two intervals do not overlap. In other words non-overlapping CIs are (in this form of plot) approximately equivalent to a statistically significant difference between the two means with α = .05. For convenience I will refer to CIs that have this property as difference-adjusted CIs (to distinguish them from conventional CIs). It is important to realize that conventional 95% CIs constructed around each mean won’t have this property. 
For independent means they are usually around 40% too wide and thus will often overlap even if the usual t test of their difference is statistically significant at p < .05. This happens because the variance of a difference is (in independent samples) equal to the sum of the variances of the individual samples. Thus the standard error of the difference is around $\sqrt 2$ times too large (assuming equal variances). For a more comprehensive explanation see Chapter 3 of Serious stats or Baguley (2012b). What to plot If you have only two means there are at least three basic options: 1) plot the individual means with conventional 95% CIs around each mean 2) plot the difference between means and a 95% CI for the difference 3) plot some form of difference-adjusted CI Which option is best? It depends on what you are trying to do. A good place to start is with your reasons for constructing a graphical display in the first place. Graphs are not particularly good for formal inference and other options (e.g., significance tests, reporting point estimates CIs in text, likelihood ratios, Bayes factors and so forth) exist for reporting the outcome of formal hypothesis tests. Graphs are appropriate for informal inference. This includes exploratory data analysis, to aid the interpretation of complex patterns or to summarize a number of simple patterns in a single display. If the patterns are very clear, informal inference might be sufficient. In other cases it can be supplemented with formal inference. What patterns do the three basic options above reveal? Option 1) shows the precision around individual means. This readily supports inference about the individual means (but not their difference). For example, a true population outside the 95% CI is considered implausible (and the observed mean would be different from that hypothesized value with p < .05 using a one sample t test). Option 2) makes for a rather dull plot because it just involves a single point estimate for the difference in means and the 95% CI for the difference. If this is the only quantity of interest you’d be better off just reporting the mean and 95% CI in the text. This has advantage of being more compact and more accurate than trying to read the numbers off a graph. [This is one reason that graphs aren't optimal for formal inference; it can be hard, for instance, to tell whether a line includes zero or excludes zero when the difference is just statistically significant or just statistically non-significant. With informal inference you shouldn't care where p = .049 or p = .051, but whether there are any clear patterns in the data] Option 3) shows you the individual means but calibrates the CIs so that you can tell if it is plausible that the sample means differ (using 95% confidence in the difference as a standard). Thus it seems like a good choice for graphical display if you are primarily interested in the differences between means. For formal inference it can be supplemented by reporting a hypothesis test in the text (or possibly a Figure caption). It is worth noting that option 3) becomes even more attractive if you have more than two means to plot. It allows you to see patterns that emerge over the set of means (e.g., linear or non-linear trends or – if n per sample is similar – changes in variances) and to compare pairs of means to see whether it is plausible that they are different. In contrast, option 2) is rather unattractive with more than two means. 
First, with J means there are J(J-1)/2 differences and thus an unnecessarily cluttered graphical display (e.g., with J = 5 means there are 10 CIs to plot). Second, plotting only the differences can obscure important patterns in the data (e.g., an increasing or decreasing trend in the means or variances would be difficult to identify).

Difference-adjusted CIs using the t distribution

Where only a few means are to be plotted (as is common in ANOVA) it makes sense to take a slightly more accurate approach than the approximation originally proposed by Goldstein and Healy for large collections of means. This approach uses the t distribution. A similar approach is advocated by Afshartous and Preston (2010), who also provide R code for calculating multipliers for the standard errors using the t distribution (and an extension for repeated measures). My approach is similar, but involves calculating the margin of error (half width of the error bars) directly rather than computing a multiplier to apply to the standard error. Difference-adjusted CIs for the mean of each sample from an independent measures (between-subjects) ANOVA design are given by Equation 3.31 of Serious stats:

$\hat{\mu}_j \pm t_{n_j - 1,\; 1 - \alpha/2} \times \frac{\sqrt{2}}{2} \times \hat{\sigma}_{\hat{\mu}_j}$

The $\hat{\mu}_j$ term is the mean of the jth sample (where samples are labeled j = 1 to J) and $\hat{\sigma}_{\hat{\mu}_j}$ is the standard error of that sample. The $t_{n_j - 1,\; 1 - \alpha/2}$ term is the quantile of the t distribution with $n_j - 1$ degrees of freedom (where $n_j$ is the size of the jth sample) that includes 100(1 - α)% of the distribution. Thus, apart from the $\sqrt{2}/2$ term, this equation is identical to that for a 95% CI around the individual means, with the proviso that the standard error here is computed separately for each sample. This differs from the usual approach to plotting CIs for independent measures ANOVA designs – where it is common to use a pooled standard error computed from a pooled standard deviation (the root mean square error of the ANOVA). While a pooled error term is sometimes appropriate, it is generally a bad idea for graphical display of the CIs because it will obscure any patterns in the variability of the samples. [Nevertheless, where $n_j$ is very small it may make sense to use a pooled error term on the grounds that each sample provides an exceptionally poor estimate of its population standard deviation.]

However, the most important change is the $\sqrt{2}/2$ term. It creates a difference-adjusted CI by ensuring that the joint width of the margin of error around any two means is $\sqrt{2}$ times larger than for a single mean. The division by 2 arises merely as a consequence of dealing jointly with two error bars. Their total has to be $\sqrt{2}$ times larger and therefore each one needs only to be $\sqrt{2}/2$ times its conventional value (for an unadjusted CI). This is discussed in more detail by Baguley (2012a; 2012b). This equation should perform well (e.g., providing fairly accurate coverage) as long as variances are not very unequal and the samples are approximately normal.
Even when these conditions are not met, remember the aim is not to support formal inference. In addition, the approach is likely to be slightly more robust than ANOVA (at least to violations of homogeneity of variance and to unequal sample sizes). So this method is likely to be a good choice whenever ANOVA is appropriate.

R functions for independent measures (between-subjects) ANOVA designs

Two R functions for difference-adjusted CIs in independent measures ANOVA designs are provided here. The first function bsci() calculates conventional or difference-adjusted CIs for a one-way ANOVA:

bsci <- function(data.frame, group.var=1, dv.var=2, difference=FALSE, pooled.error=FALSE, conf.level=0.95) {
    data <- subset(data.frame, select=c(group.var, dv.var))
    fact <- factor(data[[1]])
    dv <- data[[2]]
    J <- nlevels(fact)
    N <- length(dv)
    ci.mat <- matrix(, J, 3, dimnames=list(levels(fact), c('lower', 'mean', 'upper')))
    ci.mat[,2] <- tapply(dv, fact, mean)
    n.per.group <- tapply(dv, fact, length)
    if(difference==TRUE) diff.factor <- 2^0.5/2 else diff.factor <- 1
    if(pooled.error==TRUE) {
        for(i in 1:J) {
            moe <- summary(lm(dv ~ 0 + fact))$sigma/(n.per.group[[i]])^0.5 * qt(1-(1-conf.level)/2, N-J) * diff.factor
            ci.mat[i,1] <- ci.mat[i,2] - moe
            ci.mat[i,3] <- ci.mat[i,2] + moe
        }
    }
    if(pooled.error==FALSE) {
        for(i in 1:J) {
            group.dat <- subset(data, data[1]==levels(fact)[i])[[2]]
            moe <- sd(group.dat)/sqrt(n.per.group[[i]]) * qt(1-(1-conf.level)/2, n.per.group[[i]]-1) * diff.factor
            ci.mat[i,1] <- ci.mat[i,2] - moe
            ci.mat[i,3] <- ci.mat[i,2] + moe
        }
    }
    ci.mat
}

plot.bsci <- function(data.frame, group.var=1, dv.var=2, difference=TRUE, pooled.error=FALSE, conf.level=0.95, xlab=NULL, ylab=NULL, level.labels=NULL, main=NULL, pch=21, ylim=c(min.y, max.y), line.width=c(1.5, 0), grid=TRUE) {
    data <- subset(data.frame, select=c(group.var, dv.var))
    if (is.factor(data[[1]])==FALSE) data[[1]] <- factor(data[[1]])
    if (missing(level.labels)) level.labels <- levels(data[[1]])
    dv <- data[[2]]
    J <- nlevels(data[[1]])
    ci.mat <- bsci(data.frame=data.frame, group.var=group.var, dv.var=dv.var, difference=difference, pooled.error=pooled.error, conf.level=conf.level)
    moe.y <- max(ci.mat) - min(ci.mat)
    min.y <- min(ci.mat) - moe.y/3
    max.y <- max(ci.mat) + moe.y/3
    if (missing(xlab)) xlab <- "Groups"
    if (missing(ylab)) ylab <- "Confidence interval for mean"
    plot(0, 0, ylim = ylim, xaxt = "n", xlim = c(0.7, J + 0.3), xlab = xlab, ylab = ylab, main = main)
    if (grid == TRUE) grid()
    points(ci.mat[,2], pch = pch, bg = "black")
    index <- 1:J
    segments(index, ci.mat[, 1], index, ci.mat[, 3], lwd = line.width[1])
    segments(index - 0.02, ci.mat[, 1], index + 0.02, ci.mat[, 1], lwd = line.width[2])
    segments(index - 0.02, ci.mat[, 3], index + 0.02, ci.mat[, 3], lwd = line.width[2])
    axis(1, index, labels=level.labels)
}

The default is difference=FALSE (on the basis that these are the CIs most likely to be reported in text or tables). The second function plot.bsci() uses the former function to plot the means and CIs; the default here is difference=TRUE (on the basis that the difference-adjusted CIs are likely to be more useful for graphical display). For both functions the default is a separate (unpooled) error term for each group (pooled.error=FALSE) and a 95% confidence level (conf.level=0.95). Each function also takes input as a data frame and assumes that the grouping variable is the first column and the dependent variable the second column. If the appropriate variables are in different columns, the correct columns can be specified with the arguments group.var and dv.var.
The plotting function also takes some standard graphical parameters (e.g., for labels and so forth). The following examples use the diagram data set from Serious stats. The first line loads the data set (if you have a live internet connection). The second line generated the difference-adjusted CIs. The third line plots the difference adjusted CIs. Note that the grouping variable (factor) is in the second column and the DV is in the fourth column. diag.dat <- read.csv('http://www2.ntupsychology.net/seriousstats/diagram.csv') bsci(diag.dat, group.var=2, dv.var=4, difference=TRUE) plot.bsci(diag.dat, group.var=2, dv.var=4, ylab='Mean description quality', main = 'Difference-adjusted 95% CIs for the Diagram data') In this case the graph looks like this: It should be immediately clear that while the segmented diagram condition (S) tends to have higher scores than the text (T) or picture (P) conditions, but the full diagram (F) condition is somewhere in between. This matches the uncorrected pairwise comparisons where S > P = T, S = F, and F = P = T. At some point I will also add a function to plot two-tiered error bars (combining option 1 and 3). For details of the extension to repeated measures designs see Baguley (2012b). The code and date sets are available here. Afshartous D., & Preston R. A. (2010). Confidence intervals for dependent data: equating nonoverlap with statistical significance. Computational Statistics and Data Analysis. 54, 2296-2305. Baguley, T. (2012a, in press). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave. Baguley, T. (2012b). Calculating and graphing within-subject confidence intervals for ANOVA. Behavior Research Methods, 44, 158-175. Goldstein, H., & Healy, M. J. R. (1995). Journal of the Royal Statistical Society. Series A (Statistics in Society), 158, 175-177. Schenker, N., & Gentleman, J. F. (2001). On judging the significance of differences by examining the overlap between confidence intervals. The American Statistician, 55, 182-186. In Chapter 6 (correlation and covariance) I consider how to construct a confidence interval (CI) for the difference between two independent correlations. The standard approach uses the Fisher z transformation to deal with boundary effects (the squashing of the distribution and increasing asymmetry as r approaches -1 or 1). As z[r] is approximately normally distributed (which r is decidedly not) you can create a standard error for the difference by summing the sampling variances according to the variance sum law (see chapter 3). This works well for the CI around a single correlation (assuming the main assumptions – bivariate normality and homogeneity of variance – broadly hold) or for differences between means, but can perform badly when looking at the difference between two correlations. Zou (2007) proposed modification to the standard approach that uses the upper and lower bounds of the CIs for individual correlations to calculate a CI for their difference. He considered three cases: independent correlations and two types of dependent correlations (overlapping and non-overlapping). He also considered differences in R^2 (not relevant here). Independent correlations In section 6.6.2 (p. 224) I illustrate Zou’s approach for independent correlations and provide R code in sections 6.7.5 and 6.7.6 to automate the calculations. Section 6.7.5 shows how to write a simple R function and illustrates it with a function to calculate a CI for Pearson’s r using the Fisher z transformation. 
Whilst writing the book I encountered several functions that do exactly this. The cor.test() function in the base package does this for raw data (along with computing the correlation and usual NHST). A number of functions compute it using the usual text book formula. My function relies on R primitive hyperbolic functions (as the Fisher z transformation is related to the geometry of hyperbolas), which may be useful if you need to use it intensively:

rz.ci <- function(r, N, conf.level = 0.95) {
    zr.se <- 1/(N - 3)^0.5
    moe <- qnorm(1 - (1 - conf.level)/2) * zr.se
    zu <- atanh(r) + moe
    zl <- atanh(r) - moe
    tanh(c(zl, zu))
}

The function in 6.7.6 uses the rz.ci() function to construct a CI for the difference between two independent correlations. See section 6.6.2 of Serious stats or Zou (2007) for further details and a worked example. My function from section 6.7.6 is reproduced here:

r.ind.ci <- function(r1, r2, n1, n2=n1, conf.level = 0.95) {
    L1 <- rz.ci(r1, n1, conf.level = conf.level)[1]
    U1 <- rz.ci(r1, n1, conf.level = conf.level)[2]
    L2 <- rz.ci(r2, n2, conf.level = conf.level)[1]
    U2 <- rz.ci(r2, n2, conf.level = conf.level)[2]
    lower <- r1 - r2 - ((r1 - L1)^2 + (U2 - r2)^2)^0.5
    upper <- r1 - r2 + ((U1 - r1)^2 + (r2 - L2)^2)^0.5
    c(lower, upper)
}

The call to the function uses the two correlation coefficients and sample sizes as input (the default is to assume equal n and a 95% CI).

A caveat

As I point out in chapter 6, just because you can compare two correlation coefficients doesn’t mean it is a good idea. Correlations are standardized simple linear regression coefficients and even if the two regression coefficients measure the same effect, it doesn’t follow that their standardized counterparts do. This is not merely the problem that it may be meaningless to compare, say, a correlation between height and weight with a correlation between anxiety and neuroticism. Two correlations between the same variables in different samples might not be meaningfully comparable (e.g., because of differences in reliability, range restriction and so forth).

Dependent overlapping correlations

In many cases the correlations you want to compare aren’t independent. One reason for this is that the correlations share a common variable. For example, if you correlate X with Y and X with Z you might be interested in whether the correlation r[XY] is larger than r[XZ]. As X is common to both correlations, the correlations are not independent. Zou (2007) describes how to adjust the interval to account for this correlation. In essence the sampling variances of the correlations are tweaked using a version of the variance sum law (again see chapter 3). The following functions (not in the book) compute the correlation between the correlations and use it to adjust the CI for the difference in correlations to account for overlap (a shared predictor). Note that both functions and rz.ci() must be loaded into R. Also included is a call to the main function that reproduces the output from example 2 of Zou (2007).
rho.rxy.rxz <- function(rxy, rxz, ryz) {
    num <- (ryz - 1/2*rxy*rxz) * (1 - rxy^2 - rxz^2 - ryz^2) + ryz^3
    den <- (1 - rxy^2) * (1 - rxz^2)
    num/den
}

r.dol.ci <- function(r12, r13, r23, n, conf.level = 0.95) {
    L1 <- rz.ci(r12, n, conf.level = conf.level)[1]
    U1 <- rz.ci(r12, n, conf.level = conf.level)[2]
    L2 <- rz.ci(r13, n, conf.level = conf.level)[1]
    U2 <- rz.ci(r13, n, conf.level = conf.level)[2]
    rho.r12.r13 <- rho.rxy.rxz(r12, r13, r23)
    lower <- r12 - r13 - ((r12-L1)^2 + (U2-r13)^2 - 2*rho.r12.r13*(r12-L1)*(U2-r13))^0.5
    upper <- r12 - r13 + ((U1-r12)^2 + (r13-L2)^2 - 2*rho.r12.r13*(U1-r12)*(r13-L2))^0.5
    c(lower, upper)
}

# input from example 2 of Zou (2007, p.409)
r.dol.ci(.396, .179, .088, 66)

The r.dol.ci() function takes three correlations as input – the two correlations of interest (e.g., r[XY] and r[XZ]) and the correlation between the non-overlapping variables (e.g., r[YZ]). Also required is the sample size (often identical for both correlations).

Dependent non-overlapping correlations

Overlapping correlations are not the only cause of dependency between correlations. The samples themselves could be correlated. Zou (2007) gives the example of a correlation between two variables for a sample of mothers. The same correlation could be computed for their children. As the children and mothers have correlated scores on each variable, the correlation between the same two variables will be correlated (but not overlapping in the sense used earlier). The following functions compute the CI for the difference in correlations between dependent non-overlapping correlations. Also included is a call to the main function that reproduces Zou (2007) example 3.

rho.rab.rcd <- function(rab, rac, rad, rbc, rbd, rcd) {
    num <- 1/2*rab*rcd * (rac^2 + rad^2 + rbc^2 + rbd^2) + rac*rbd + rad*rbc -
        (rab*rac*rad + rab*rbc*rbd + rac*rbc*rcd + rad*rbd*rcd)
    den <- (1 - rab^2) * (1 - rcd^2)
    num/den
}

r.dnol.ci <- function(r12, r13, r14, r23, r24, r34, n12, n34=n12, conf.level=0.95) {
    L1 <- rz.ci(r12, n12, conf.level = conf.level)[1]
    U1 <- rz.ci(r12, n12, conf.level = conf.level)[2]
    L2 <- rz.ci(r34, n34, conf.level = conf.level)[1]
    U2 <- rz.ci(r34, n34, conf.level = conf.level)[2]
    rho.r12.r34 <- rho.rab.rcd(r12, r13, r14, r23, r24, r34)
    lower <- r12 - r34 - ((r12 - L1)^2 + (U2 - r34)^2 - 2 * rho.r12.r34 * (r12 - L1) * (U2 - r34))^0.5
    upper <- r12 - r34 + ((U1 - r12)^2 + (r34 - L2)^2 - 2 * rho.r12.r34 * (U1 - r12) * (r34 - L2))^0.5
    c(lower, upper)
}

# from example 3 of Zou (2007, p.409-10)
r.dnol.ci(.396, .208, .143, .023, .423, .189, 66)

Although this call reproduces the final output for example 3, it produces slightly different intermediate results (0.0891 vs. 0.0917) for the correlation between correlations. Zou (personal communication) confirms that this is either a typo or rounding error (e.g., arising from hand calculation) in example 3 and that the function here produces accurate output. The input here requires all of the pairwise correlations among the four variables being compared (and the relevant sample size for the correlations being compared). The easiest way to get these is from a correlation matrix of the four variables.

Robust alternatives

Wilcox (2009) describes a robust alternative to these methods for independent correlations and modifications to Zou’s method that make the dependent correlation methods robust to violations of bivariate normality and (in particular) homogeneity of variance assumptions. Wilcox provides R functions for these approaches on his web pages.
His functions take raw data as input and are computationally intensive. For instance, the dependent correlation methods use Zou’s approach but take bootstrap CIs for the individual correlations as input (rather than the simpler Fisher z transformed versions). The relevant functions are twopcor() for the independent case, TWOpov() for the dependent overlapping case and TWOpNOV() for the non-overlapping case.

Zou’s modified asymptotic method is easy enough that you can run it in Excel. I’ve added an Excel spreadsheet to the blog resources that should implement the methods (and matches the output from R fairly closely). As it uses Excel it may not cope gracefully with some calculations (e.g., with extremely small or large values of r or other extreme cases) – and I have more confidence in the R functions.

Baguley, T. (2012, in press). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave.

Zou, G. Y. (2007). Toward using confidence intervals to compare correlations. Psychological Methods, 12, 399-413.

Wilcox, R. R. (2009). Comparing Pearson correlations: Dealing with heteroscedasticity and non-normality. Communications in Statistics – Simulation & Computation, 38, 2220-2234.

N.B. R code formatted via Pretty R at inside-R.org
{"url":"http://seriousstats.wordpress.com/tag/statistics/","timestamp":"2014-04-16T04:18:38Z","content_type":null,"content_length":"190496","record_id":"<urn:uuid:504f8452-b2b9-45b0-a9e0-eaa9b6234791>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Struggling with a rational exponent

March 5th 2009, 08:26 PM

Struggling with a rational exponent.

I am trying to figure out what $(-3)^{\frac{2}{3}}$ is. I calculated it, possibly incorrectly, as $\approx 2.08$ by $\sqrt[3]{(-3)^2}$. I would calculate $-\sqrt[3]{3} \approx -1.4422$ and then square that to get $\approx 2.08$.

All the maths applications I have calculate it as -1.0400 + 1.8014i, which is $\left(\sqrt[3]{3}\cdot\frac{1+i\sqrt{3}}{2}\right)^2$. This complex fraction is courtesy of my HP.

My questions are twofold. Why, if cube roots are defined for all values on the number line, do the calculators and MATLAB produce an answer with an imaginary part? I thought that all even rational exponents produce positive real results, and not imaginary results. If the imaginary result above is correct, and I can trust the calculator, how on earth did it get to that complex fraction?

March 5th 2009, 09:57 PM

I have only used MATLAB once before (in the past week). I don't know if this works for only integers, but have you tried the "nthroot" function? I found MATLAB gave an imaginary solution to a cube root as well, rather than the real root, strangely.

March 5th 2009, 10:01 PM

Right answer?

I know, it's very strange. Would you also calculate it as $\approx 2.08$?

March 5th 2009, 10:11 PM

Yes, that is the real cube root, correct. MATLAB has given you a correct complex solution but it may not be much use in many cases!
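A short numerical check (not part of the original thread, and in Python rather than MATLAB) makes the distinction between the real cube root and the principal complex root explicit:

import cmath
import numpy as np

# Principal value exp((2/3)*Log(-3)) -- what most calculators/CAS return
principal = complex(-3, 0) ** (2 / 3)
print(principal)                          # approx (-1.0400 + 1.8014j)

# Real-branch interpretation: cube root of (-3)^2 = 9
print(np.cbrt((-3.0) ** 2))               # approx 2.0801

# Same modulus, rotated by 120 degrees (2*pi/3) in the complex plane
print(abs(principal), cmath.phase(principal))   # approx 2.0801  2.0944

Both answers are "correct": they differ only in which cube root of -3 is used. This is also why nthroot (mentioned above) behaves differently – it returns the real root, whereas raising to a fractional power returns the principal complex root.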
{"url":"http://mathhelpforum.com/pre-calculus/77180-stuggling-rational-exponent-print.html","timestamp":"2014-04-16T22:09:46Z","content_type":null,"content_length":"6431","record_id":"<urn:uuid:4f015286-fdfc-47e8-8564-c8e2f9f85201>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Coding on countably infinite alphabets Stephane Boucheron, Aurelien Garivier and Elisabeth Gassiat IEEE Transactions on Information Theory Volume 55, Number 1, , 2009. ISSN 0018-9448 This paper describes universal lossless coding strategies for compressing sources on countably infinite alphabets. Classes of memoryless sources defined by an envelope condition on the marginal distribution provide benchmarks for coding techniques originating from the theory of universal coding over finite alphabets. We prove general upperbounds on minimax regret and lower-bounds on minimax redundancy for such source classes. The general upper bounds emphasize the role of the Normalized Maximum Likelihood codes with respect to minimax regret in the infinite alphabet context. Lower bounds are derived by tailoring sharp bounds on the redundancy of Krichevsky- Trofimov coders for sources over finite alphabets. Up to logarithmic (resp. constant) factors the bounds are matching for source classes defined by algebraically declining (resp. exponentially vanishing) envelopes. Effective and (almost) adaptive coding techniques are described for the collection of source classes defined by algebraically vanishing envelopes. Those results extend our knowledge concerning universal coding to contexts where the key tools from parametric inference (Bernstein-Von Mises theorem, Wilks theorem) are known to fail. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00003626/","timestamp":"2014-04-17T01:06:28Z","content_type":null,"content_length":"8654","record_id":"<urn:uuid:ac451cea-8aeb-4a90-9dd7-63f9c247246d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors Atlanta, GA 30308 Math, microeconomics or criminal justice ...I have taught in community colleges, about ten years, in math, economics, philosophy and legal writing. I took the GRE in January 2014, with a 93 percentile on the writing portion, or a 5.0. I taught trigonometry, brief , finite mathematics which included... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Tucker_GA_calculus_tutors.aspx?g=3OHW","timestamp":"2014-04-25T07:20:44Z","content_type":null,"content_length":"60326","record_id":"<urn:uuid:e061e4d1-f57f-452f-b3ec-b6126f63417f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Spring 2001 LACC Math Contest - Problem 10

Problem 10. SOME MONKEY BUSINESS: Hanging over a pulley is a rope. At one end of the rope is a sand bag and at the other end is a monkey. The weight of the monkey is the same as the weight of the sand bag. The rope weighs 4 ounces per foot. The sum of the ages of the monkey and the monkey's mother is 4 years. The monkey's weight in pounds is the same as the mother's age in years. The mother is twice as old as the monkey was when the mother was half as old as the monkey will be when the monkey is three times as old as the mother was when she was three times as old as the monkey was. The weight of the rope and the sand bag is one and a half times the weight of the monkey. How long is the rope?

[Problem submitted by Roger Wolf, LACC Professor of Mathematics and Chairman of the Department of Mathematics and Computer Science.]

a) Let t be the age in years of the monkey when the monkey's mother was three times as old as the monkey. Then at that time the mother was 3t years old. Note that the difference between the mother's age and the monkey's age is 2t years, which remains the constant difference between their ages at any time.

b) When the monkey is three times the mother's age in a), the monkey will be 9t years old.

c) When the mother was half the monkey's age in b), the mother was 4.5t years old; since the age difference is always 2t years, the monkey was then 4.5t − 2t = 2.5t years old.

d) The mother's current age is twice the age of the monkey in c). So, the mother is 5t years old. Then the monkey is 3t years old.

e) The sum of their ages is 4 years. So, 5t + 3t = 4. Therefore, t = 1/2, and the mother is 2.5 years old.

f) The monkey's weight in pounds is the same as the mother's age in years; so the monkey weighs 2.5 pounds, that is, 40 ounces.

g) The sandbag weighs the same as the monkey, 40 ounces.

h) Let r be the weight of the rope in ounces and l the length of the rope in feet. The rope weighs 4 ounces per foot: 4l = r.

i) The weight of the rope and the sand bag is one and one half times the monkey's weight. So r + 40 = (3/2)(40) = 60, and r = 20 ounces.

j) Substituting 4l for r: 4l + 40 = 60, so 4l = 20 and l = 5. The rope is 5 feet long.
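As a quick check on the arithmetic (not part of the original solution), the whole chain can be verified with a few lines of Python using exact fractions:

from fractions import Fraction

t = Fraction(1, 2)                       # from 5t + 3t = 4
mother, monkey = 5 * t, 3 * t            # current ages in years: 2.5 and 1.5
monkey_weight_oz = mother * 16           # weight in pounds = mother's age, converted to ounces
rope_plus_bag_oz = Fraction(3, 2) * monkey_weight_oz
rope_oz = rope_plus_bag_oz - monkey_weight_oz   # sandbag weighs the same as the monkey
rope_length_ft = rope_oz / 4                    # rope weighs 4 ounces per foot
print(rope_length_ft)                    # -> 5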
{"url":"http://lacitycollege.edu/academic/departments/mathdept/samplequestions/2001solution10.html","timestamp":"2014-04-19T04:20:47Z","content_type":null,"content_length":"4976","record_id":"<urn:uuid:28f2bc39-df64-4b42-a7e6-90bb93545663>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Vectorization of the product of several matrices?

oc-spam66 oc-spam66@laposte....
Sun Sep 28 10:47:27 CDT 2008

I have two lists of numpy matrices: LM = [M_i, i=1..N] and LN = [N_i, i=1..N] and I would like to compute the list of the products: LP = [M_i * N_i, i=1..N].

I can do:

for i in range(N):
    LP.append(LM[i] * LN[i])

But this is not vectorized. Is there a faster solution? Can I vectorize this operation? How? Will it be more efficient than the code above?

Thank you for your help,
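One standard way to vectorize this is sketched below (it uses a modern NumPy API – np.stack and the @ operator post-date this 2008 thread – but the same idea works with older equivalents such as np.array and np.einsum):

import numpy as np

# Hypothetical example data: N pairs of 3x3 arrays
N = 5
LM = [np.random.rand(3, 3) for _ in range(N)]
LN = [np.random.rand(3, 3) for _ in range(N)]

# Stack each list into an (N, 3, 3) array; matrix multiplication then
# broadcasts over the leading axis, giving all N products in one call.
M = np.stack(LM)
Narr = np.stack(LN)
LP = M @ Narr                     # LP[i] equals LM[i] @ LN[i]

# Equivalent formulation with einsum:
LP2 = np.einsum('nij,njk->nik', M, Narr)

Whether this beats the Python loop depends on N and the matrix sizes, but it removes the per-iteration Python overhead.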
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-September/037718.html","timestamp":"2014-04-16T22:18:34Z","content_type":null,"content_length":"3626","record_id":"<urn:uuid:85d95eb2-988a-4a2c-9c5c-98771000f179>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric Identity - All that Good Stuff! November 8th 2009, 09:35 AM Trigonometric Identity - All that Good Stuff! Well guys, I really would appreciate it were someone able to prove to me this: cos^2x-sin^2x = (1-tan^2x)/(1+tan^2x) I think the left side equals cos(2x), but I'm not even sure of that! Any help is much appreciated! (Happy) November 8th 2009, 09:41 AM You're correct about the left side of the equation, but that isn't going to help you any. Just use the identity $tan(x)=\frac{sin(x)}{cos(x)}$ on the right side of the equation. If you simplify the fraction, you'll obtain the left side: $\frac{1-\frac{sin^2(x)}{cos^2(x)}}{1+\frac{sin^2(x)}{cos^2 (x)}}=\frac{\frac{cos^2(x)-sin^2(x)}{cos^2(x)}}{\frac{cos^2(x)+sin^2(x)}{cos^ 2(x)}}=\frac{cos^2(x)-sin^2(x)}{cos^2(x)+sin^2(x)}=cos^2 November 8th 2009, 09:43 AM Yes the left side does equal cos(2x) but I'm going to start with the right side and get the left. note that $tan^2(x) = \frac{sin^2(x)}{cos^2(x)}$ and $1+tan^2(x) = sec^2(x)$ Rewrite in terms of sin and cos: Get the same denominator for both terms: Combine the two again $\frac{cos^2(x)-sin^2(x)}{cos^2(x)} \times \frac{1}{sec^2(x)} = \frac{cos^2(x)-sin^2(x)}{cos^2(x)} \times cos^2(x)$ $= cos^2(x)-sin^2(x) = LHS$ November 8th 2009, 10:00 AM Thank you so much, both of you! I really understand it now! :) Guess I need to work on the algebraic skills a bit more.
{"url":"http://mathhelpforum.com/trigonometry/113204-trigonometric-identity-all-good-stuff-print.html","timestamp":"2014-04-21T08:30:39Z","content_type":null,"content_length":"10121","record_id":"<urn:uuid:01c174ee-7b67-4ee6-9bab-e978b2ddbaa4>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Sequential screening with elementary effects

Seminar Room 1, Newton Institute

The Elementary Effects (EE) method (Morris, 1991) is a simple but effective screening strategy. Starting from a number of initial points, the method creates random trajectories to then estimate factor effects. In turn, those estimates are used for factor screening. Recent research advances (Campolongo et al., 2004, 2006) have enhanced the performance of the elementary effects method and the projections of the resulting design (Pujol, 2008). The presentation concentrates on a proposal (Boukouvalas et al., 2011) which turns the elementary effects method into a sequential design strategy. After describing the methodology, some examples are given and compared against the traditional EE method.
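To make the idea concrete, here is a minimal sketch (in Python, under simplifying assumptions) of the basic one-at-a-time elementary-effects calculation for a function on the unit hypercube. It uses independent random base points rather than the randomised trajectories of Morris (1991) or the sequential refinements discussed in the talk, so it only illustrates the quantities being estimated:

import numpy as np

def elementary_effects(f, k, r=10, delta=0.5, seed=0):
    # f: function of a length-k vector on [0, 1]^k; r: number of base points
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(k):
            x_step = x.copy()
            x_step[i] += delta
            ee[j, i] = (f(x_step) - fx) / delta      # elementary effect of factor i
    # mu* (mean absolute effect) and sigma are the usual screening summaries
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy test function: factor 0 strong and linear, factor 1 weak, factor 2 inactive
mu_star, sigma = elementary_effects(lambda x: 10 * x[0] + x[1] ** 2, k=3)
print(mu_star, sigma)

Factors with small mu* can be screened out, while a large sigma relative to mu* flags nonlinearity or interactions.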
{"url":"http://www.newton.ac.uk/programmes/DAE/seminars/2011090612001.html","timestamp":"2014-04-21T13:30:39Z","content_type":null,"content_length":"6510","record_id":"<urn:uuid:7e158a33-fe82-454c-8a51-93bfbd1ce362>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Zariski Topology

Prior to the Grothendieck revolution of the late 1950s and 1960s, the Zariski topology was defined in the following way.
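The classical definition alluded to here is standard and can be stated briefly (this summary is added for completeness and is not taken from the page itself). Working over an algebraically closed field $k$, for any set of polynomials $S \subseteq k[x_1,\dots,x_n]$ let

\[ V(S) \;=\; \{\, p \in k^{n} : f(p) = 0 \ \text{for all } f \in S \,\}. \]

The Zariski topology on affine $n$-space $k^{n}$ is the topology whose closed sets are exactly the sets $V(S)$; open sets are their complements, and a variety $X \subseteq k^{n}$ carries the subspace topology. Grothendieck's later formulation replaces the points of $k^{n}$ by the prime ideals of a ring, giving the Zariski topology on its spectrum.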
{"url":"http://www.realmagick.com/zariski-topology/","timestamp":"2014-04-17T09:54:31Z","content_type":null,"content_length":"19801","record_id":"<urn:uuid:b856f229-e7a5-44e3-84fc-6b37ce3aa11f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Richmond, CA Statistics Tutor Find a Richmond, CA Statistics Tutor ...My style is personable, engaging, and easygoing. I have extensive knowledge of pretty much all of math through the end of college, and of statistics well beyond that. I'm good at zeroing in on precisely what's giving you trouble, and will break each problem into pieces you can understand and learn. 14 Subjects: including statistics, geometry, ASVAB, algebra 1 ...I also teach calculus to individuals outside of any class. I supervised small groups with aspiring math teachers learning calculus and I taught for several years my own university courses that built heavily on calculus. More than one third of my tutored hours covers calculus. "Great Tutor" - Alan K. 41 Subjects: including statistics, calculus, geometry, algebra 1 I am a Naperville North High School and University of Illinois Urbana-Champaign graduate with a degree in Mathematics. I have extensive problem-solving experience including winning 1st place at the ICTM State Math Contest in 2007 and placing in the top 500 in the national Putnam competition. My tu... 17 Subjects: including statistics, chemistry, calculus, physics ...Regards.I have >= 7 years of experience with SAS. Over these years, I have worked extensively on data manipulation (cleaning, recoding, merging), linear and nonlinear regression analysis, analysis of variance, generalized linear models, bootstrap, Monte Carlo, survival analysis, trading strategi... 9 Subjects: including statistics, finance, SPSS, MATLAB ...In the MAST seminars I have had the privilege to help tutor a 3rd grade class, a 6th grade GATE (Gifted and Talented) class, and two 9th grade geometry classes. During the internship I was required to keep detailed logs of my tutoring experience as well as write and read case studies. I have tutored college, high school, middle school, and elementary students outside of MAST. 34 Subjects: including statistics, chemistry, writing, calculus
{"url":"http://www.purplemath.com/richmond_ca_statistics_tutors.php","timestamp":"2014-04-19T07:16:42Z","content_type":null,"content_length":"24243","record_id":"<urn:uuid:e20652d9-39e5-4bbb-8d5c-cb63b9321deb>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
RSS Matters The All-encompassing SAS 8 (1/2) By Dr.Karl Ho, Research and Statistical Support Services Manager Last year, when I wrote an evaluation note on SAS 7, (which was a transitional release from SAS' first windows generation 6.x to the current version SAS 8), I fell short of giving a full coverage because of SAS' enormity, it is composed of numerous modules and procedures. Another reason was that SAS 7 was still in a developer's release (a "post-beta" beta version.) After one year, when I decided to evaluate the new SAS version 8, I have to say I am still shy of giving a satisfactory report: The best I can do is split the evaluation into two articles, just to introduce the new features that are included in version 8 alone. The New SAS The new SAS not only demonstrates a higher level of stability in the MS Windows operating system (geared for Windows 2000)*, it introduces a wave of new functionalities and features that give the software a facelift from its previous mainframe-adapted outlook. Windows users may still refrain from choosing SAS 7 in lieu of other GUI-based packages such as SPSS or Statistica since SAS is known for its syntax-based operation. With the three new add-on modules (SAS/Analyst, SAS/LAB, SAS/INSIGHT) plus the 3-D graphic PROC G3D procedure, I would declare SAS is now fully gooey (GUI). For instance, with Analyst (Solutions--> Analysis --> Analyst), users can simply import data in various formats and start analyzing in the spreadsheet-like, explorer interface. A wide variety of procedures are ready-to-use in Analyst, such as performing bivariate analyses (e.g. T-test, correlations, ANOVA) and multivariate analyses (GLM, Regression, Power analysis, Principal Components and Survival models). Users can also easily select samples out of an existing data set and create charts by point-and-clicking. However, comparative advantages of SAS are still on its advancement in research and development, that is exemplified in the new data analysis procedures. In the following I will briefly introduce these procedures new to the release 8.1 with some sample outputs. 1. Survey Sampling When starting a survey, particularly a large-scale or national survey, researchers are concerned how to extract samples from the population and if and how weighting should be applied to certain under-represented (certain social-economic status group in some geographic areas) or over-represented groups (e.g. upper-middle class among email recipients). SAS 8 introduces a new series of SAS procedures enables survey researchers to select their survey samples using different designs: □ simple random □ stratified □ clustering □ unequal weighting PROC SURVEYSELECT selects samples via a variety of methods ranging from simple random to complex multi-stage design sampling. With another two new procedures, SURVEYMEANS and SURVEYREG, researchers can easily estimate sample and population means, variances, confidence limits, and other descriptive statistics, sampling errors and regression models, taking into account the sampling design and weighting scheme introduced in the sample selection process. (sample output) 2. Nonparametric Modeling SAS incorporates in the newest version 8.1 one of the latest techniques in modeling non-linear models: nonparametric regression. It encompasses a suite of nonparametric techniques including kernel density estimation and loess smoothing. 
The PROC KDE procedure computes nonparametric estimates using the method of kernel density estimation, saving the estimate for subsequent plotting and analysis. The PROC LOESS and PROC TPSPLINE procedures provide various smoothing methods to conduct exploratory data analysis and fit nonparametric or semiparametric models. Sample output:
3. Spatial Prediction
Variogram and 2-dimensional Kriging (spatial analyses in geology, petroleum exploration, mining, and water pollution analysis): PROC VARIOGRAM and PROC KRIGE2D implement the spatial prediction of unsampled locations using two-dimensional data based on spatial continuity. Sample plots:
4. Qualitative and Limited Dependent Variable Models
Researchers are very often faced with dependent variables that are not continuous. These discrete variables (sometimes called categorical choices) include the choice of political parties or presidential candidates and the decision to take a bus or a train. One of the most renowned examples is what the 2000 Nobel prize laureate, Daniel L. McFadden, has been studying since 1974: commuters' choice of transportation mode(**). Multinomial logit and probit models estimate the probability of the limited dependent variable, such as a commuter's choice between taking a bus and driving a car. A new procedure in SAS/ETS is introduced to estimate this family of discrete choice models. PROC QLIM can analyze not only the regular binary (two-choice) probit and logit models, but also:
□ ordinal probit
□ nested logit
□ multinomial logit (more than two categories)
□ tobit
□ endogenous switching regression
□ simultaneous equations
5. Other new tests/features include:
• Exact Logistic Regression (sample output)
• Exact tests: generating direct exact p-values, or using Monte Carlo simulation (10,000 samples) to estimate exact p-values.
• Numerically Precise Regression (PROC ORTHOREG***): The new procedure produces more numerically accurate estimates than other regression procedures (e.g. REG, GLM) when data are ill conditioned or badly scaled.
In the next article, I will introduce the following new features:
1. Partial Least Squares
2. IML workshop
3. Multiple Imputation for Missing Data
4. Distribution analysis
5. Robust regression
* I should have mentioned that SAS for UNIX (version 8) delivers at least as much as its Windows version. Given the limit in space, I only focus on the latter.
** McFadden, D. 1974. "The Measurement of Urban Travel Demand" Journal of Public Economics, 3:303-28. Another laureate, James Heckman, also an econometrician, is known for the selection bias model, also called the Heckman model.
*** Orthogonal regression minimizes the distance between the X/Y points taken together and the regression line, but PROC ORTHOREG uses least squares.
An, Anthony and Donna Watts. 1998. "New SAS Procedures for Analysis of Sample Survey Data" SUGI Proceedings
What's New in Data Analysis on SAS Research and Development communities Web (http://www.sas.com/rnd/app/da/danew.html)
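The article names the procedures without showing what they compute, so here is a minimal sketch of the kernel-density idea behind PROC KDE, written in Python/NumPy purely for illustration; the sample data and the bandwidth of 0.4 are arbitrary assumptions, not anything taken from SAS:

import numpy as np

def gaussian_kde(sample, grid, bandwidth):
    # Average a Gaussian "bump" centred on each data point, then rescale by the bandwidth.
    z = (grid[:, None] - sample[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 300)])  # toy data
grid = np.linspace(-4, 4, 81)
density = gaussian_kde(sample, grid, bandwidth=0.4)
print(round(float(density.sum() * (grid[1] - grid[0])), 3))  # the estimate sums to about 1

A smaller bandwidth gives a spikier estimate, a larger one a smoother estimate; choosing it well is exactly the kind of decision a procedure like PROC KDE helps automate.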
{"url":"http://www.unt.edu/benchmarks/archives/2000/october00/rss.htm","timestamp":"2014-04-19T19:33:42Z","content_type":null,"content_length":"15549","record_id":"<urn:uuid:80c71456-3cad-4d8f-82ee-14dad1568637>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Fibonacci Numbers and The Golden Section in Art, Architecture and Music This section introduces you to some of the occurrences of the Fibonacci series and the Golden Ratio in architecture, art and music. Contents of this page The Things to do investigation at the end of the section. 1·61803 39887 49894 84820 45868 34365 63811 77203 09179 80576 ..More.. The Golden section in architecture The Parthenon and Greek Architecture The ancient Greeks knew of a rectangle whose sides are in the golden proportion (1 : 1.618 which is the same as 0.618 : 1). It occurs naturally in some of the proportions of the Five Platonic Solids (as we have already seen). A construction for the golden section point is found in Euclid's Elements. The golden rectangle is supposed to appear in many of the proportions of that famous ancient Greek temple, the Parthenon, in the Acropolis in Athens, Greece but there is no original documentary evidence that this was deliberately designed in. (There is a replica of the original building (accurate to one-eighth of an inch!) at Nashville which calls itself "The Athens of South USA".) Greek Temples (HC) Stuart Revett Buy This Art Print At AllPosters.com The Acropolis (see a plan diagram or Roy George's plan of the Parthenon with active spots to click on to view photographs), in the centre of Athens, is an outcrop of rock that dominates the ancient city. Its most famous monument is the Parthenon, a temple to the goddess Athena built around 430 or 440 BC. It is largely in ruins but is now undergoing some restoration (see the photos at Roy George's site in the link above). Again there are no original plans of the Parthenon itself. It appears to be built on a design of golden rectangles and root-5 rectangles: However, due to the top part being missing and the base being curved to counteract an optical illusion of level lines appearing bowed, these are only an approximate measures but reasonably good ones. The Panthenon image here shows clear golden sections in the placing of the three horizontal lines but the overall shape and the other prominent features are not golden section ratios. Pantheon, Libero There is a wonderful collection of pictures of the Parthenon and the Acropolis at Indiana University's web site. Dr Ann M Nicgorski of the Department of Art and Art History at Williamette University in the USA has a large collection of links to Parthenon pictures with many details of the building. David Silverman's page on the Parthenon has lots of information. Look at the plan of the Parthenon. The dividing partition in the inner temple seems to be on the golden section both of the main temple and the inner temple. Apart from that, I cannot see any other clear golden sections - can you? Allan T Kohl's Art Images for College Teaching has a lot of images on ancient art and architecture. Modern Architecture The Eden Project's new Education Building The Eden Project in St. Austell, between Plymouth and Penzance in SW England and 50 miles from Land's End, has some wonderfully impressive greenhouses based on geodesic domes (called biomes) built in an old quarry. It marks the Millenium in the year 2000 and is now one of the most popular tourist attractions in the SW of England. A new £15 million Education Centre called The Core has been designed using Fibonacci Numbers and plant spirals to reflect the nature of the site - plants from all over the world. The logo shows the pattern of panels on the roof. 
What is 300 million years old, weights 70 tonnes and is the largest of its type in the world? It is the new sculpture called The Seed at the centre of The Core which was unveiled on Midsummer's Day 2007 (June 23). Peter Randall-Paige's design is based on the spirals found in seeds and sunflowers and pinecones. California Polytechnic Engineering Plaza California Polytechnic State University have plans for a new Engineering Plaza based on the Fibonacci numbers and several geometric diagrams you will also have seen on other pages here. There is also a page of images of the new building. The designer of the Plaza and former student of Cal Poly, Jeffrey Gordon Smith, says As a guiding element, we selected the Fibonacci series spiral, or golden mean, as the representation of engineering knowledge. The start of construction is currently planned for late 2005 or early in 2006. The United Nations Building in New York The architect Le Corbusier deliberately incorporated some golden rectangles as the shapes of windows or other aspects of buildings he designed. One of these (not designed by Le Corbusier) is the United Nations building in New York which is L-shaped. Although you will read in some books that "the upright part of the L has sides in the golden ratio, and there are distinctive marks on this taller part which divide the height by the golden ratio", when I looked at photos of the building, I could not find these measurements. Can you? [With thanks to Bjorn Smestad of Finnmark College, Norway for mentioning these links.] More Architecture links University of Wisconsin's Library of Art History images is an excellent source of architecture images and well worth checking out! It has many images of the Parthenon, pictures of its friezes and other details. Use their searcher selecting the Period Ancient Greece: Classical and the Site Athens. Note: the images cannot be copied or even made into links, only viewed on their page! June Komisar's page of architectural links from the University of Michigan. She points to the Great Building Collection which has some excellent photo images on their Parthenon page. Do check this out as they have a FREE 3D viewer to download and lots of buildings in 3D to view. You can take your own virtual walk through the Parthenon! The Kings Tomb in Egypt and the golden section. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987 ..More.. The Golden Section and Art Luca Pacioli (1445-1517) in his Divina proportione (On Divine Proportion) wrote about the golden section also called the golden mean or the divine proportion: The line AB is divided at point M so that the ratio of the two parts, the smaller MB to the larger AM is the same as the ratio of the larger part AM to the whole AB. We have seen on earlier pages at this site that this gives two ratios, AM:AB which is also BM:AM and is 0.618... which we call phi (beginning with a small p). The other ratio is AB:AM = AM:MB = 1 /phi= 1.618... or Phi (note the capital P). Both of these are variously called the golden number or golden ratio, golden section, golden mean or the divine proportion. Other pages at this site explain a lot more about it and its amazing mathematical properties and it relation to the Fibonacci Numbers. Pacioli's work influenced Leonardo da Vinci (1452-1519) and Albrecht Durer (1471-1528) and is seen in some of the work of Georges Seurat, Paul Signac and Mondrian, for instance. 
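As a quick numerical check of the definition just given (this snippet is an added illustration, not part of the original page), the two ratios can be computed directly; with AB = 1 and AM = phi, the defining property MB : AM = AM : AB does hold:

from math import sqrt

Phi = (1 + sqrt(5)) / 2   # 1.6180339887...  (capital P)
phi = (sqrt(5) - 1) / 2   # 0.6180339887...  (small p), equal to 1/Phi

AB = 1.0          # the whole line
AM = phi * AB     # the larger part
MB = AB - AM      # the smaller part

print(round(MB / AM, 10), round(AM / AB, 10))   # both 0.6180339887, so the two ratios agree
print(round(Phi - phi, 10))                     # 1.0, since Phi and phi differ by exactly 1
print(round(Phi * phi, 10))                     # 1.0, since phi is the reciprocal of Phi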
The Uffizi Gallery's Web site in Florence, Italy, has a virtual room of some of Leonardo da Vinci's paintings and drawings. I suggest the following two of Leonardo Da Vinci's paintings to analyse for is a picture that looks like it is in a frame of 1:sqrt(5) shape (a root-5 rectangle). Print it and measure it - is it a root-5 rectangle? Divide it into a square on the left and another on the right. (If it is a root-5 rectangle, these lines mark out two golden-section rectangles as the parts remaining after a square has been removed). Also mark in the lines across the picture which are 0·618 of the way up and 0·618 of the way down it. Also mark in the vertical lines which are 0·618 of the way along from both ends. You will see that these lines mark out significant parts of the picture or go through important objects. You can then try marking lines that divide these parts into their golden sections too. Leonardo's Madonna with Child and Saints is in a square frame. Look at the golden section lines (0·618 of the way down and up the frame and 0·618 of the way across from the left and from the right) and see if these lines mark out significant parts of the picture. Do other sub-divisions look like further golden sections? Graham Sutherland's (1903-1980) huge tapestry of Christ The King behind the altar in Coventry Cathedral here in a picture taken by Rob Orland. It seems to have been designed with some clear golden sections as I've shown on Rob's picture: • The figure of Christ is framed by an oval with a flattened top. At the golden section point vertically is the navel indicated at the narrowest part of the waist and also the lower edge of the girdle (belt or waist-band), shown by blue arrows. • The bottom the the girdle (waist-band) is also at a golden section point for the whole figure from the top of the head to the soles of the feet, shown by purple arrows. Since this is also the position of the navel in the human body, this indicates the figure is standing. • The top of the girdle and the line of the chest are at golden sections between the base of the girdle and the top of the garment (the shoulders) shown by red arrows. • The face also has several golden sections in it, the line of the eyes and the nostrils being at the major golden sections, shown by yellow lines. • The two ovals forming the apron and the face are positioned vertically at golden section points apart and at golden sections in size as shown by the green arrows. • The other two ovals, the sleeves, have a width that is 0.618 of the distance between the sleeves, shown by grey arrows. Can you find any more golden sections? More information on the tapestry. Take a virtual tour of the Cathedral. Rob Orland's Photos website Links specifically related to the Fibonacci numbers or the golden section (Phi): A ray traced image based on Fibonacci spirals and rectangles the Web Museum pages on Durer, Famous Painting Virtual Exhibition. their long list of famous artists and their works. There is a very useful set of mathematical links to Art and Music web resources from Mathematics Archives that is worth looking at. Links to major sources of Art on the Web: Top9.com's List of the top art sources on the web is an excellent place for links to good art sources on the web. Highly recommended! The Metropolitan Museum of Art in New York houses more than 2 million works of art. The Fine Arts Museums of San Francisco site has an Image base of 65,000 works of art. 
It includes art from Ancient to Modern, from paintings to ceramics and textiles, from all over the world as well as America. A Guide to Art Collections in the UK Michelangelo is famous for his paintings (such as the ceiling in the Sistine Chapel) and his sculptures (for instance David). This site has links to several sources and images of his works and some links to sites on the golden section. Using the picture of his David sculpture, measure it and see if he has used Phi - eg is the navel ("belly button") 0·618 of the David's height? Why not visit the Leonardo Museum in the town of Vinci (Italy) itself from which town Leonardo is named, of course. There are many sketches and paintings of Leonardo's at The WebMuseum, Paris too. The work of modern artists using the Golden Section When I was giving a talk at The Eden Project in Cornwall in July 2007, Patricia Bennetts and Mary Miller of Falmouth introduced me to using Fibonacci Numbers in Quilt design. (Let your mouse rest on their names to see their email addresses.) Their two designs are based on the pattern in the middle where the strips in the lower half are of widths 1, 2, 3, 5, 8 and 13 in brown which are alternated with lighter strips of the same widths but in decreasing order. Woolly Thoughts is Steve Plummer and Pat Ashforth's web site with many maths inspired knitting and crochet projects, including designs based on Fibonacci numbers, the golden spiral, pythagorean triangles, flexagons and much much more. They have worked for many years in schools giving a new twist to mathematics with their hands-on approach to design using school maths. An excellent resource for teachers who want to get students involved in maths in a new way and also for mathematicians interested in knitting and crochet. Billie Ruth Sudduth is a North American artist specialising in basket work that is now internationally known. Her designs are based on the Fibonacci Numbers and the golden section - see her web page JABOBs (Just A Bunch Of Baskets). Mathematics Teaching in the Middle School has a good online introduction to her work (January 1999). Kees van Prooijen of California has used a similar series to the Fibonacci series - one made from adding the previous three terms, as a basis for his art. Fibonacci and Phi for fashioning Furniture Pietro Malusardi and Karen Wallace have a web page showing some elegant applications of the golden section in furniture design. Custom Furniture Solutions have a Media cabinet designed using golden section proportions. A recent edition (Jan/Feb 2003) of the Ancient Egypt Magazine contained an article on Woodworking in Ancient Egypt where the author, Geoffrey Killen, explains how a box (chest) exhibits the golden section in its design but is not sure if this is coincidence or design. Fletcher Cox is a craftsman in wood who has used the golden section in his birds-eye maple wooden plate. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987 ..More.. The Russian Sergie Eisenstein directed the classic silent film of 1925 The Battleship Potemkin (a DVD or video version of this 75 minute film is now available, both in PAL format). He divided the film up using golden section points to start important scenes in the film, measuring these by length on the celluloid film. Jonathan Berger of Stanford University's Center for Computer Research in Music and Acoustics used this as an illustration of Fibonacci numbers in a lecture course. 
Dénes Nagy, in a fascinating article entitled Golden Section(ism): From mathematics to the theory of art and musicology, Part 1 in Symmetry, Culture and Science, volume 7, number 4, 1996, pages 337-448 talks about whether we can percieve a golden section point in time without being initially aware of the whole time interval. He gives a reference to his own work on golden section perception in video art too (page 418 of the above article). 1·61803 39887 49894 84820 45868 34365 63811 77203 09179 80576 ..More.. The first section here is inspired by Dr Rachel Hall's Multicultural mathematics course syllabus at St Joseph's University in Philadelphia, USA. (Read more about it with some nice maths puzzles in this pdf document.) Stress, Meter and Sanskrit Poetry In English, we tend to think of poetry as lines of text that rhyme, that is, lines that end with similar sounds as in this children's song: Twinkle twinkle little star How I wonder what you are. Up above the world, so high Like a diamond in the sky Also we have the rhythm of the separate sounds (called syllables). Words like twinkle have two syllables: twin- and -kle whereas words such as star have just one. Some syllables are emphasized or stressed more than others so that they sound louder (such as TWIN- in twinkle), whereas others are unstressed and quieter (such as -kle in twinkle). Dictionaries will often show how to pronounce a word by separating it into syllables, the stressed parts shown in capital as we have done here, e.g. DIC-tion-ar-y to show it has 4 syllables with the first one only being stressed. If we let S stand for a stressed syllable and s an unstressed one, then the stress-pattern of each line of the song or poem is its meter (rhythm). In the song above each line has the meter SsSsSsS. In Sanskrit poetry syllables are are either long or short. In English we notice this in some words but not generally - all the syllables in the song above take about the same length of time to say whether they are stressed or not, so all the lines take the same amount of time to say. However cloudy sky has two words and three syllables CLOW-dee SKY, but the first and third syllables are stressed and take a longer to say then the other syllable. Let's assume that long syllables take just twice as long to say as short ones. So we can ask the question: in Sanskrit poetry, if all lines take the same amount of time to say, what combinations of short (S) and long (L) syllables can we have? This is just another puzzle of the same kind as on the Simple Fibonacci Puzzles page at this site. For one time unit, we have only one short syllable to say: S = 1 way For two time units, we can have two short or one long syllable: SS and L = 2 ways For three units, we can have: SSS, SL or LS = 3 ways Any guesses for lines of 4 time units? Four would seem reasonable - but wrong! It's five! SSSS, SSL, SLS, LSS and LL; the general answer is that lines that take n time units to say can be formed in Fib(n) ways. This was noticed by Acarya Hemacandra about 1150 AD or 70 years before Fibonacci published his first edition of Liber Abaci in 1202! Acarya Hemacandra and the (so-called) Fibonacci Numbers Int. J. of Mathematical Education vol 20 (1986) pages 28-30. Virgil's Aeneid Martin Gardner, in the chapter "Fibonacci and Lucas Numbers" in Mathematical Circus (Penguin books, 1979 or Mathematical Assoc. 
of America 1996) mentions Prof George Eckel Duckworth's book Structural patterns and proportions in Virgil's Aeneid: a study in mathematical composition (University of Michigan Press, 1962). Duckworth argues that Virgil consciously used Fibonacci numbers to structure his poetry and so did other Roman poets of the time.
1·61803 39887 49894 84820 45868 34365 63811 77203 09179 80576 ..More..
Fibonacci and Music
Trudi H Garland [see below] points out that the Fibonacci numbers occur on the 5-tone scale (the black notes on the piano), the 8-tone scale (the white notes on the piano) and the 13-note scale (a complete octave in semitones, with the two notes an octave apart included). However, this is bending the truth a little, since to get both 8 and 13, we have to count the same note twice (C...C in both cases). Yes, it is called an octave, because we usually sing or play the 8th note which completes the cycle by repeating the starting note "an octave higher" and perhaps sounds more pleasing to the ear. But there are really only 12 different notes in our octave, not 13!
Various composers have used the Fibonacci numbers when composing music, and some authors find the golden section as far back as the Middle Ages (10th century) (see, for instance, The Golden Section In The Earliest Notated Western Music, P Larson, Fibonacci Quarterly 16 (1978) pages 513-515).
Golden sections in Violin construction
The section on "The Violin" in The New Oxford Companion to Music, Volume 2, shows how Stradivari was aware of the golden section and used it to place the f-holes in his famous violins. Baginsky's method of constructing violins is also based on golden sections.
Did Mozart use the Golden mean?
This is the title of an article in the American Scientist of March/April 1996 by Mike May. He reports on John Putz's analysis of many of Mozart's sonatas. John Putz found that there was considerable deviation from golden section division and that any proximity to golden sections can be explained by constraints of the sonata form itself, rather than purposeful adherence to the golden section. The Mathematics Magazine Vol 68 No. 4, pages 275-282, October 1995 has an article by Putz on Mozart and the Golden section in his music.
Phi in Beethoven's Fifth Symphony?
In Mathematics Teaching volume 84 in 1978, Derek Haylock writes about The Golden Section in Beethoven's Fifth on pages 56-57. He claims that the famous opening "motto" (click on the music to hear it) occurs exactly at the golden mean point 0·618 in bar 372 of 601, and again at bar 228 which is the other golden section point (0·618034 from the end of the piece), but he has to use 601 bars to get these figures. This he does by ignoring the final 20 bars that occur after the final appearance of the motto and also ignoring bar 387. Have a look at the full score for yourself at The Hector Berlioz website on the Berlioz: Predecessors and Contemporaries page, if you follow the Scores Available link. A browser plug-in enables you to hear it also. Note that the repeated 124 bars at the beginning are not included in the bar counts on the musical score.
However, Tim Benjamin points out that there are 626 bars and not 601! Therefore the golden section points actually occur at bars 239 (shown as bar 115 as the counts do not include the repeat) and 387 (similarly marked as bar 263).
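Since the disagreement between the two bar counts is purely arithmetical, it is easy to check both of them. The short calculation below is an added illustration that takes no position on which count is musically right; it simply reproduces the bar numbers quoted on each side:

phi = (5 ** 0.5 - 1) / 2   # 0.618...

for total_bars in (601, 626):          # Haylock's count, then Tim Benjamin's count
    lower = total_bars * (1 - phi)     # golden section point measured from the start
    upper = total_bars * phi           # the other golden section point
    print(total_bars, round(lower, 1), round(upper, 1))

# 601 bars -> roughly 229.6 and 371.4  (Haylock's bars 228 and 372)
# 626 bars -> roughly 239.1 and 386.9  (bars 239 and 387 in the corrected count)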
As UK composer Tim Benjamin points out: The 626 bars are comprised of a repeated section of 124 bars - so that's the first 248 bars in the repeated section, the "exposition" - followed by 354 of "development" section, then a 24 bar "recapitulation" (standard "first movement form"). Therefore there can't really be anything significant at 239, because that moment happens twice. However at 387, there is something pretty odd - this inversion of the main motto. You have some big orchestral activity, then silence, then this quiet inversion of the motto, then silence, then big activity again. Also you have to bear in mind that bar numbers start at 1, and not 0, so you would need to look for something happening at 387.9 (rounding to 1dp) and not 386.9. This is in fact what happens - the strange inversion runs from 387.25 to 388.5. But bar 387 is precisely one that Haylock singles out to ignore! So is it Beethoven's "phi-fth" or just plain old "Fifth"? Bartók, Debussy, Schubert, Bach and Satie There are some fascinating articles and books which explain how these composers may have deliberately used the golden section in their music: Duality and Synthesis in the Music of Bela Bartók by E Lendvai on pages 174-193 of Module, Proportion, Symmetry, Rhythm G Kepes (editor), George Brazille, 1966; Some striking Proportions in the Music of Bela Bartók in Fibonacci Quarterly Vol 9, part 5, 1971, pages 527-528 and 536-537. Bela Bartók: an analysis of his music by Erno Lendvai, published by Kahn & Averill, 1971; has a more detailed look at Bartók's use of the golden mean. Debussy in Proportion - a musical analysis by Roy Howat, Cambridge Univ. Press,1983, ISBN = 0 521 23282 1. Concert pianist Roy Howat's Web site has more information on his Debussy in Proportion book and others works and links. Adams, Coutney S. Erik Satie and Golden Section Analysis. in Music and Letters, Oxford University Press,ISSN 0227-4224, Volume 77, Number 2 (May 1996), pages 242-252 Schubert Studies, (editor Brian Newbould) London: Ashgate Press, 1998 has a chapter Architecture as drama in late Schubert by Roy Howat, pages 168 - 192, about Schubert's golden sections in his late A major sonata (D.959). The Proportional Design of J.S. Bach's Two Italian Cantatas, Tushaar Power, Musical Praxis, Vol.1, No.2. Autumn 1994, pp.35-46. This is part of the author's Ph D Thesis J.S. Bach and the Divine Proportion presented at Duke University's Music Department in March 2000. Proportions in Music by Hugo Norden in Fibonacci Quarterly vol 2 (1964) pages 219-222 talks about the first fugue in J S Bach's The Art of Fugue and shows how both the Fibonacci and Lucas numbers appear in its organization. Per Nørgård's 'Canon' by Hugo Norden in Fibonacci Quarterly vol 14 (1976), pages 126-128 says the title piece is an "example of music based entirely and to the minutest detail on the Fibonacci The Fibonacci Series in Twentieth Century Music J Kramer, Journal of Music Theory 17 (1973), pages 110-148 Art and Music web resources from Mathematics Archives that is worth looking at. The Golden String as Music The Golden String is a fractal string of 0s and 1s that grows in a Fibonacci-like way as follows: After the first two lines, all the others are made from the two latest lines in a similar way to each Fibonacci numbers being a sum of the two before it. Each string (list of 0s and 1s) here is a copy of the one above it followed by the one above that. The resulting infintely long string is the Golden String or Fibonacci Word or Rabbit Sequence. 
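The example lines of 0s and 1s have not survived in this copy of the page, so here is one common way of generating them (the starting pair "1", "10" is one convention; other descriptions begin differently), just to make the construction concrete:

def golden_string_lines(n_steps):
    lines = ["1", "10"]                      # the first two lines
    for _ in range(n_steps):
        lines.append(lines[-1] + lines[-2])  # the latest line followed by the one before it
    return lines

for line in golden_string_lines(5):
    print(line)
# 1
# 10
# 101
# 10110
# 10110101
# 1011010110110
# 101101011011010110101
# The line lengths 1, 2, 3, 5, 8, 13, 21, ... are the Fibonacci numbers.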
It is interesting to hear it in musical form and I give two ways in the section Hear the Golden sequence on that page. In that same section I mention the London based group Perfect Fifth who have used it in a piece called Fibonacci that you can hear online too . Other Fibonacci and Phi related music John Biles, a computer scientist at Rochester university in New York State used the series which is the number of sets of Fibonacci numbers whose sum is n to make a piece of music. He wrote about it and has a link to hear the piece online. The series looks like this: It has some fractal properties in that the graph can be seen in sections, each beginning and ending when the graph dips down to lowest points on the y=1 line. Each section begins and ends with a copy of the section two before it (and moved up a bit), and in between them is a copy of the previous section again moved up. I've written more about this series in a section called Sumthing about Fibonacci Numbers on the Fibonacci Bases and other ways of representing integers. 1·61803 39887 49894 84820 45868 34365 63811 77203 09179 80576 ..More.. Miscellaneous, Amusing and Odd places to find Phi and the Fibonacci Numbers TV Stations in Halifax, Canada In Halifax, Nova Scotia, there are 4 non-cable TV channels and they are numbered 3, 5, 8 and 13! Prof. Karl Dilcher reported this coincidence at the Eighth International Conference on Fibonacci Numbers and their Applications in summer 1998. Turku Power Station, Finland Joerg Wiegels of Duesseldorf told me that he was astonished to see the Fibonacci numbers glowing brightly in the night sky on a visit to Turku in Finland. The chimney of the Turku power station has the Fibonacci numbers on it in 2 metre high neon lights! It was the first commission of the Turku City Environmental Art Project in 1994. The artist, Mario Merz (Italy) calls it Fibonacci Sequence 1-55 and says "it is a metaphor of the human quest for order and harmony among chaos." The picture here was taken by Dr. Ching-Kuang Shene of Michigan Technological University and is reproduced here with his kind permission from his page of photos of his Finland trip. Designed in? • If you measure a credit card, you'll find it is a perfect golden rectangle. • The golden rectangle icon of National Geographic also seems to be a golden-section rectangle too. • Brian Agron of Fairfax, California, found the golden section in the design of his mountain bike, a Trek Fuel 90 shown above with golden sections marked. • Brian also says the shape of the large doors in hospitals seem to be a golden rectangle. • John Harrison MA has found a golden rectangle in the shape of a Kit-Kat chocolate wafer - the larger 4 finger bar in its older wrapping as shown above. Two myths about clocks and golden ratio time ten to two have their hands positioned so as to form a golden rectangle and that this is "aesthetically pleasing". But it is easy to calculate that the angle between the hands at this time is 0.3238 of a turn (or, the larger angle is 0.6762 of a turn) both of which are nowhere near the golden ratio angles of 0.618 and 0.382 (= 1–0.618) of a turn. There are eleven distinct times in any 12 hour period when the hands of a clock mark out a golden ratio on the circumference. What times are they? Which is the most symmetrical arrangement? Which is the easiest to remember? Which is closest to a multiple of 5 minutes? Other authors say the hands at 1:50 or 10:08 form a golden rectangle using the points on the rim. 
This also is not true even if one could imagine them projected on to the rim and then making a rectangle - not an easy visual exercise! Here are the clocks with hands extended to the rim and a golden rectangle superimposed on the clocks. When the hour hand points at the right place, it is about 10:04 and when the minute hand gets to the correct position, it is about 10h 9m 35s but then the hour hand does not point to the right place. The time when the hands are exactly symmetrical is 10 hours 9 minutes and 13.8462... seconds and also 1 hours, 50 minutes and 46.1538 seconds. So 10:09 and 1:51 are both reasonably close, but even with the visual gymnastics, it seems unlikely that the eye recognizes such a golden rectangle construction at those times, in my mathematical opinion! 1. What other logos can you find that are golden rectangles? 2. Where else have you found the golden rectangle? Email me with any answers to these questions and I'll try to include them on this page. 1·61803 39887 49894 84820 45868 34365 63811 77203 09179 80576 ..More.. There are many books and articles that say that the golden rectangle is the most pleasing shape and point out how it was used in the shapes of famous buildings, in the structure of some music and in the design of some famous works of art. Indeed, people such as Corbusier and Bartók have deliberately and consciously used the golden section in their designs. However, the "most pleasing shape" idea is open to criticism. The golden section as a concept was studied by the Greek geometers several hundred years before Christ, as mentioned on earlier pages at this site, But the concept of it as a pleasing or beautiful shape only originated in the late 1800's and does not seem to have any written texts (ancient Greek, Egyptian or Babylonian) as supporting hard evidence. At best, the golden section used in design is just one of several possible "theory of design" methods which help people structure what they are creating. At worst, some people have tried to elevate the golden section beyond what we can verify scientifically. Did the ancient Egyptians really use it as the main "number" for the shapes of the Pyramids? We do not know. Usually the shapes of such buildings are not truly square and perhaps, as with the pyramids and the Parthenon, parts of the buildings have been eroded or fallen into ruin and so we do not know what the original lengths were. Indeed, if you look at where I have drawn the lines on the Parthenon picture above, you can see that they can hardly be called precise so any measurements quoted by authors are fairly rough! So this page has lots of speculative material on it and would make a good Project for a Science Fair perhaps, investigating if the golden section does account for some major design features in important works of art, whether architecture, paintings, sculpture, music or poetry. It's over to you on this one! George Markowsky's Misconceptions about the Golden ratio in The College Mathematics Journal Vol 23, January 1992, pages 2-19 is an important article that points out the weaknesses in parts of "the golden-section is the most pleasing shape" theory. This is readable and well presented. Perhaps too many people just take the (unsupportable?) remarks of others and incorporate them in their works? You may or may not agree with all that Markowsky says, but this is a good article which tries to debunk a simplistic and unscientific "cult" status being attached to Phi, seeing it where it really is not! 
This is not to deny that Phi certainly is genuinely present in much of botany and the mathematical reasons for this are explained on earlier pages at this site. How to Find the "Golden Number" without really trying Roger Fischler, Fibonacci Quarterly, 1981, Vol 19, pages 406 - 410. Another important paper that points out how taking measurements and averaging them will almost always produce an average near Phi. Case studies are data about the Great Pyramid of Cheops and the various theories propounded to explain its dimensions, the golden section in architecture, its use by Le Corbusier and Seurat and in the visual arts. He concludes that several of the works that purport to show Phi was used are, in fact, fallacious and "without any foundation whatever". The Fibonacci Drawing Board Design of the Great Pyramid of Gizeh Col. R S Beard in Fibonacci Quarterly vol 6, 1968, pages 85 - 87; has three separate theories (only one of which involves the golden section) which agree quite well with the dimensions as measured in 1880. Golden Section(ism): From mathematics to the theory of art and musicology, Part 1, Dénes Nagy in Symmetry, Culture and Science, volume 7, number 4, 1996, pages 337-448 Section 2.1 says there are at least nine different theories about the shape of the Great Pyramid of Pharoah Khufu (the Great Pyramid of Cheops), two of which refer to the golden section: The angle of the slope of the faces is • the angle whose cosine is 0·618... which is about 51·82° • the angle whose tangent is twice 0·618... which is about 51·027° although a better fit is provided by a mathematical problem in the Rhind Papyrus which, in our notation is • the angle whose tangent is 28/22 which is about 51·84° All of the material at this site is about Mathematics so this page is definitely the odd one out! All the other material is scientifically (mathematically) verifiable and this page (and the final part of the Links page) is the only speculative material on these Fibonacci and Phi pages. References and Links on the golden section in Music and Art │ Key: │ │ │ a book │ │ │ │ an article in a magazine or │ │ │ │ a paper in an academic journal │ │ │ │ a website │ │ Fascinating Fibonaccis by Trudi Hammel Garland, Dale Seymours publications, 1987 is an excellent introduction to the Fibonacci series with lots of useful ideas for the classroom. Includes a section on Music. An example of Fibonacci Numbers used to Generate Rhythmic Values in Modern Music in Fibonacci Quarterly Vol 9, part 4, 1971, pages 423-426; Links to other Music Web sites Gamelan music is the percussion oriented music of Indonesia. The American Gamelan Institute has lots of information including a Gongcast recorded online music so you can hear Gamelan music for yourself. New music from David Canright of the Maths Dept at the Naval Postgraduate School in Monterey, USA; combining the Fibonacci series with Indonesian Gamelan musical forms. Some CDs on Gamelan music of Central Java (the Indonesian island not the software!). Other music Martin Morgenstern has a large and interesting list of books and articles on the golden section and music with abstracts, some of which is in German. The Fibonacci Sequence is the name of a classical music ensemble of internationally famous soloists, who are the musicians in residence at Kingston University (Kingston-upon-Thames, Surrey, UK). Based in the London (UK) area, their current programme of events is on the Web site link above. 
Casey Mongoven is a composer who has used Fibonacci numbers and golden sections in his own musical compositions. You can hear them and read more on his web site. Casey has an impressively large collection of pieces, most of them a few seconds only in length but they are fascinating to listen to and very different from conventional music. The pitches of his notes are often based on powers of Phi and their order is fixed by a number sequence, such as the Fibonacci numbers, or R(n) - the number of Fibonacci representations of n or on many other sequences that are described here on my Fibonacci site. His scores too are images that illustrate many of the series you will have seen here. You can experiment for yourself with the Fibonacci Sequence Visualiser that was designed specifically for Casey's works. Ted Froberg explains how he used the Fibonacci numbers "mod 7" (that is the remainders when we divide each Fibonacci number by 7) to make a "theme" which he then harmonizes and has made into a Fibonacci waltz. A Mathematical History of the Golden Number by Roger Herz-Fischler, Dover 1998, ISBN 0486400077. A scholarly study of all major references in an attempt to trace the earliest references to the "golden section", its names, etc. Education through Art (3rd edition) H Read, Pantheon books,1956, pages 14-22; The New Landscape in Art and Science G Kepes P Theobald and Co, 1956, pages 329 and 294; H E Huntley's, The Divine Proportion: A study in mathematical beauty, is a 1970 Dover reprint of an old classic. C. F. Linn, The Golden Mean: Mathematics and the Fine Arts, Doubleday 1974. Gyorgy Doczi, The Power of Limits: Proportional Harmonies in Nature, Art, and Architecture Shambala Press, (new edition 1994). M. Boles, The Golden Relationship: Art, Math, Nature, 2nd ed., Pythagorean Press 1987. Fibonacci Home Page General Fibonacci Series The next topics... Who was Fibonacci? Fibonacci, Phi and Lucas numbers Formulae WHERE TO NOW??? Links and References This is the last page on More Applications of the Fibonacci Numbers and Phi. © 1996-2010 Dr Ron Knott updated 22 September 2010
{"url":"http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibInArt.html","timestamp":"2014-04-16T04:11:58Z","content_type":null,"content_length":"73365","record_id":"<urn:uuid:c5dc8712-5d3c-4a2b-a1ba-3b9b4d431ece>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
[50.04] On stochastic forces in circumplanetary dust dynamics
DPS 2001 meeting, November 2001
Session 50. Solar System Dust
Oral, Chair: F. Spahn, Friday, November 30, 2001, 5:50-6:40pm, Regency GH
F. Spahn, A.V. Krivov, Miodrag Sremcevic, U. Schwarz, J. Kurths (U. Potsdam, Germany)
Dust particles in orbit around planets are affected by stochastic perturbations beyond numerous deterministic forces. There are, for instance, fluctuations of the magnetic field or the grain charge. Here we investigate the dynamics of a dust stream perturbed by a stochastic magnetic field $\vec B'$, which is modeled by a Gaussian white noise. Without an electric field the Lorentz force does no work, and the velocity is a stationary stochastic variable: $\langle \Delta \vec v^{\,2} \rangle = \mathrm{constant} \propto D$ (brackets denote an ensemble average, $D$ is a diffusion constant), like for Brownian particles in equilibrium. This leads to a normal diffusion in the configuration space: $L^2 = \langle \Delta \vec r^{\,2} \rangle \propto t$ ($L$ is a random walk distance, $t$ is time). To check whether this behavior holds true in a planetary environment, numerical experiments have been performed. We have chosen dust grains (0.3 micrometer in radius) escaping from Jupiter's satellite Europa and numerically integrated their trajectories over their typical lifetime (100 years). In one set of runs, the grains experienced a "deterministic" corotating dipole magnetic field $\vec B_0$. In another one, the same grains were additionally exposed to a Gaussian magnetic field $\vec B'$ such that $\langle \vec B' \rangle = 0$ and $\langle B'_i(t_1)\, B'_j(t_2) \rangle = B_0^2\, \delta_{ij}\, \delta(t_1 - t_2)$. We confirmed that $L^2 \propto t$, leading to a spread in orbital element space by almost 200% over 100 years, which directly translates to the dimensions of the ring formed by the grains. Our results show a potential importance of stochasticity effects. Analyses of the magnetic field data measured by the Galileo magnetometer at Jupiter, providing the statistical properties of $\vec B'$, are in progress. This work was funded by Deutsches Zentrum für Luft- und Raumfahrt (DLR).
The author(s) of this abstract have provided an email address for comments about the abstract: frank@agnld.uni-potsdam.de
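The abstract's central claim, that uncorrelated Gaussian kicks produce normal diffusion with the mean squared displacement growing linearly in time, can be illustrated with a toy random walk. The sketch below is not the authors' orbit integration around Jupiter; it is only a minimal Python demonstration of the $L^2 \propto t$ scaling they refer to, in arbitrary units:

import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 2000, 1000, 1.0

kicks = rng.normal(0.0, 1.0, size=(n_steps, n_particles, 2))   # Gaussian white-noise kicks
positions = np.cumsum(kicks, axis=0) * np.sqrt(dt)             # random-walk trajectories
msd = (positions ** 2).sum(axis=2).mean(axis=1)                # mean squared displacement

t = np.arange(1, n_steps + 1) * dt
slope = np.polyfit(t, msd, 1)[0]
print(round(float(slope), 2))   # close to 2.0: <dr^2> grows linearly with t (two components)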
{"url":"http://aas.org/archives/BAAS/v33n3/dps2001/309.htm","timestamp":"2014-04-20T18:55:49Z","content_type":null,"content_length":"3621","record_id":"<urn:uuid:3069b9d7-7d3b-4d68-bad8-73026e67c57f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEX Lesson Plans Subject: Mathematics (9 - 12) Title: Conic Sections: Discovering the Degenerates Description: Through a mixture of online exploration, and teacher instruction, students will discover how the degenerate forms of the conic sections are formed and will be able to identify the degenerate case of each conic section. Subject: Mathematics (9 - 12) Title: Conic Sections: Playing With Parabolas Description: Through a mixture of online exploration, and teacher instruction, students will discover how parabolas are formed and will be able to use the key components from a graph (vertex, focus and directrix,) to generate the equation of a graph. Subject: Mathematics (9 - 12) Title: Conic Sections: Playing with Hyperbolas Description: Through a mixture of online exploration, and teacher instruction, students will discover how Hyberbolas are formed and will be able to use the key components (center, vertices and the asymptotes) to generate the equation of a graph. Subject: Mathematics (9 - 12) Title: Conic Sections: Playing with Ellipses Description: Through a mixture of online exploration, and teacher instruction, students will discover how ellipses are formed and will be able to use the key components (center, foci, directrix, vertices, major axis, minor axis) to generate the equation of a graph. Subject: Arts Education (7 - 12), or Mathematics (9 - 12) Title: Graphing at all levels: It’s a beautiful thing! Description: This lesson addresses the societal issue of the arts being eliminated in many public schools be employing graphs (at any level) as an artistic media. Review of all types of graphs is included through various interactive websites.This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation. Subject: Mathematics (9 - 12), or Science (9 - 12) Title: Ellipse Description: This lesson is designed for an Algebra II through Pre-Calculus classes and introduces the conic section- an ellipse. In this lesson students explore an ellipse, the set of points in a plane such that the sum of the distances from each point to two fixed points is constant. The students will investigate the construction of an ellipse and be able to recognize its major axis, semi-major axis and focal points and to be able to compute its eccentricity. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation. Subject: Mathematics (9 - 12), or Technology Education (9 - 12) Title: Going the Distance for Circles Description: This lesson is designed for an Advanced Mathematics class and introduces the first conic section - a circle. The definition and important information related to circles is presented in a multimedia presentation. Students are then required to document what they have learned in a brochure. Thinkfinity Lesson Plans Subject: Mathematics,Science Title: Analyze the Data Add Bookmark Description: In this lesson, one of a multi-part unit from Illuminations, students use rational functions to investigate the feeding behavior of Northwestern Crows. Biologists have observed that northwestern crows consistently drop a type of mollusk called a whelk from a mean height of about 5 meters. By analyzing a data set, students investigate and create a model for the relationship between the height of the drop and the number of drops. Thinkfinity Partner: Illuminations Grade Span: 9,10,11,12
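For readers following the ellipse lesson described above, the defining property (a constant sum of distances to the two foci) and the eccentricity are easy to verify numerically. The semi-axes used below are an arbitrary example, not values taken from any of the lesson plans:

import math

a, b = 5.0, 3.0                     # semi-major and semi-minor axes (example values)
c = math.sqrt(a ** 2 - b ** 2)      # distance from the centre to each focus
f1, f2 = (-c, 0.0), (c, 0.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for t in (0.0, 0.7, 1.3, 2.0, 3.0):                 # a few points on the ellipse
    p = (a * math.cos(t), b * math.sin(t))
    print(round(dist(p, f1) + dist(p, f2), 10))     # always 2a = 10.0

print(round(c / a, 4))   # eccentricity e = c/a = 0.8 here; 0 would be a circle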
{"url":"http://alex.state.al.us/plans2.php?std_id=54550","timestamp":"2014-04-18T05:34:01Z","content_type":null,"content_length":"24130","record_id":"<urn:uuid:d27195da-2835-4add-ab39-f24232321798>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting Numbers Into Words Got a version of Excel that uses the ribbon interface (Excel 2007 or later)? This site is for you! If you use an earlier version of Excel, visit our ExcelTips site focusing on the menu interface. With more than 50 non-fiction books and numerous magazine articles to his credit, Allen Wyatt is an internationally recognized author. He is president of Sharon Parq Associates, a computer and publishing services company. Learn more about Allen... Please Note: This article is written for users of the following Microsoft Excel versions: 2007 and 2010. If you are using an earlier version (Excel 2003 or earlier), this tip may not work for you. For a version of this tip written specifically for earlier versions of Excel, click here: Converting Numbers Into Words. There are times when it is beneficial, or even mandatory, to spell numbers out. For instance, you may want to spell out "1234" as "one thousand two hundred thirty four." The following macro, NumberToWords, does just that. It is rather long, but it has to do a lot of checking to put together the proper string. There are actually five macros in the set; the four besides NumberToWords are called by NumberToWords to do the actual conversion. NumberToWords will convert any number between 0 and 999,999. To use it, simply select the cell (or cells) whose contents you want to convert, then run it. You should note that the cells must contain whole number values, not formulas that result in whole number values. The actual contents of the compliant cells are changed from the original number to a text representation of that number. In other words, this is not a format change, but a value change for those cells. Sub NumberToWords() Dim rngSrc As Range Dim lMax As Long Dim bNCFlag As Boolean Dim sTitle As String, sMsg As String Dim vCVal As Variant Dim lNumber As Long, sWords As String Set rngSrc = ActiveSheet.Range(ActiveWindow.Selection.Address) lMax = rngSrc.Cells.Count bNCFlag = False For lCtr = 1 To lMax vCVal = rngSrc.Cells(lCtr).Value sWords = "" If IsNumeric(vCVal) Then If vCVal <> CLng (vCVal) Then bNCFlag = True Else lNumber = CLng(vCVal) Select Case lNumber Case 0 sWords = "Zero" Case 1 To 999999 sWords = SetThousands(lNumber) Case Else bNCFlag = True End Select End If Else bNCFlag = True End If If sWords > "" Then rngSrc.Cells(lCtr) = sWords End If Next lCtr If bNCFlag Then sTitle = "lNumberToWords Macro" sMsg = "Not all cells converted. May not be whole number or may be too large." 
MsgBox sMsg, vbExclamation, sTitle End If End Sub Private Function SetOnes(ByVal lNumber As Integer) As String Dim OnesArray(9) As String OnesArray(1) = "One" OnesArray(2) = "Two" OnesArray(3) = "Three" OnesArray(4) = "Four" OnesArray(5) = "Five" OnesArray(6) = "Six" OnesArray(7) = "Seven" OnesArray(8) = "Eight" OnesArray(9) = "Nine" SetOnes = OnesArray(lNumber) End Function Private Function SetTens(ByVal lNumber As Integer) As String Dim TensArray(9) As String TensArray(1) = "Ten" TensArray(2) = "Twenty" TensArray(3) = "Thirty" TensArray(4) = "Forty" TensArray(5) = "Fifty" TensArray(6) = "Sixty" TensArray(7) = "Seventy" TensArray(8) = "Eighty" TensArray(9) = "Ninety" Dim TeensArray(9) As String TeensArray(1) = "Eleven" TeensArray(2) = "Twelve" TeensArray(3) = "Thirteen" TeensArray(4) = "Fourteen" TeensArray(5) = "Fifteen" TeensArray(6) = "Sixteen" TeensArray(7) = "Seventeen" TeensArray(8) = "Eighteen" TeensArray(9) = "Nineteen" Dim iTemp1 As Integer Dim iTemp2 As Integer Dim sTemp As String iTemp1 = Int(lNumber / 10) iTemp2 = lNumber Mod 10 sTemp = TensArray(iTemp1) If (iTemp1 = 1 And iTemp2 > 0) Then sTemp = TeensArray(iTemp2) Else If (iTemp1 > 1 And iTemp2 > 0) Then sTemp = sTemp + " " + SetOnes(iTemp2) End If End If SetTens = sTemp End Function Private Function SetHundreds(ByVal lNumber As Integer) As String Dim iTemp1 As Integer Dim iTemp2 As Integer Dim sTemp As String iTemp1 = Int(lNumber / 100) iTemp2 = lNumber Mod 100 If iTemp1 > 0 Then sTemp = SetOnes(iTemp1) + " Hundred" If iTemp2 > 0 Then If sTemp > "" Then sTemp = sTemp + " " If iTemp2 < 10 Then sTemp = sTemp + SetOnes(iTemp2) If iTemp2 > 9 Then sTemp = sTemp + SetTens (iTemp2) End If SetHundreds = sTemp End Function Private Function SetThousands(ByVal lNumber As Long) As String Dim iTemp1 As Integer Dim iTemp2 As Integer Dim sTemp As String iTemp1 = Int(lNumber / 1000) iTemp2 = lNumber Mod 1000 If iTemp1 > 0 Then sTemp = SetHundreds(iTemp1) + " Thousand" If iTemp2 > 0 Then If sTemp > "" Then sTemp = sTemp + " " sTemp = sTemp + SetHundreds(iTemp2) End If SetThousands = sTemp End Function ExcelTips is your source for cost-effective Microsoft Excel training. This tip (8351) applies to Microsoft Excel 2007 and 2010. You can find a version of this tip for the older menu interface of Excel here: Converting Numbers Into Words. A Picture is Worth Thousands! Your worksheets are not limited to holding numbers and text. You can also add graphics or easily create charts based on your data. Excel Graphics and Charts, available in two versions, helps you make your graphics and charts their absolute best. Check out Excel Graphics and Charts today! In the For Next cycle lCtr is not defined so a new variable needs to be set as Dim lCtr As Byte. Of course you can only face this "small" error if the "Option Explicit" is defined before the sub. Option Explicit checks whether all of the variables are defined with a dim statement or not.
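For readers who want to see the same hundreds/tens/ones decomposition outside of VBA, here is a short Python sketch. It is an added illustration, not a translation supplied by the article or its comments, and it covers the same whole-number range (0 to 999,999) as the macro:

ONES = ["", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine"]
TEENS = ["Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen",
         "Sixteen", "Seventeen", "Eighteen", "Nineteen"]
TENS = ["", "Ten", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"]

def under_hundred(n):
    if n < 10:
        return ONES[n]
    if n < 20:
        return TEENS[n - 10]
    tens, ones = divmod(n, 10)
    return TENS[tens] + (" " + ONES[ones] if ones else "")

def under_thousand(n):
    hundreds, rest = divmod(n, 100)
    parts = []
    if hundreds:
        parts.append(ONES[hundreds] + " Hundred")
    if rest:
        parts.append(under_hundred(rest))
    return " ".join(parts)

def number_to_words(n):            # whole numbers from 0 to 999,999, like the macro
    if n == 0:
        return "Zero"
    thousands, rest = divmod(n, 1000)
    parts = []
    if thousands:
        parts.append(under_thousand(thousands) + " Thousand")
    if rest:
        parts.append(under_thousand(rest))
    return " ".join(parts)

print(number_to_words(1234))   # One Thousand Two Hundred Thirty Four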
{"url":"http://excelribbon.tips.net/T008351_Converting_Numbers_Into_Words.html","timestamp":"2014-04-18T08:20:31Z","content_type":null,"content_length":"45282","record_id":"<urn:uuid:680a7205-f305-485f-8c4d-10a288fa1ff1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Help with these:/ I don't understand • 6 months ago • 6 months ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. can you find slope of any given line ? Best Response You've already chosen the best response. like say , x-5y = -10 is a line. can you find slope of this line first ? Best Response You've already chosen the best response. Uh no not really... with a quick review I probably can Best Response You've already chosen the best response. good, so i have x-5y =-10 i will try to isolate y and bring it in the form y=mx+c where m=slope so, x=5y-10 5y = x+10 y= (1/5)x +2 got this ? Best Response You've already chosen the best response. Yeah I got it Best Response You've already chosen the best response. cool, so can you find the slope of x -3y=9 line ? in same way Best Response You've already chosen the best response. oh and i forgot to mention, y= (1/5)x +2 comparing with y=mx+c gives, m=slope = 1/5 Best Response You've already chosen the best response. x-37=9 -3y=x+9 Y=(1/3)x+3 Best Response You've already chosen the best response. So slope would be 1/3 I assume? Best Response You've already chosen the best response. correct! :) now try to find slopes of other 3 lines also! and tell me, so that i will verify :) Best Response You've already chosen the best response. 4x+2y=5 2y=-4x+5 Y-(-1/2)x+2.5 ? Best Response You've already chosen the best response. 2y=-4x+5 is correct, but then divide both sides by 2 what u get ? Best Response You've already chosen the best response. 10x-5y=8 -5y=10x+8 Y=-2x+1.6? Best Response You've already chosen the best response. 10x-5y=8 -5y=10x+8 <<<<<<NO! -5y = -10x +8 Best Response You've already chosen the best response. and you would get y=-2+2.5 Best Response You've already chosen the best response. Ahh okay -10 so the problem would come out with a positive 2x Best Response You've already chosen the best response. yes, so the slope will be m= 2 , right ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. 4x+2y=5 slope = -2 make a note of all slopes, we'll need them Best Response You've already chosen the best response. and what about last line ? Best Response You've already chosen the best response. 4x+y=-1 Y=-4x-1 Best Response You've already chosen the best response. so, slope =? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Good! now here's the rule : slope of parallel lines are equal! so, for the last one, slope of line parallel to 4x+y=-1 will also be m=-4 :) so, for last, the correct option will be `C) m=-4` got this ? in same way, can you match the 2nd last and 3rd last ? Best Response You've already chosen the best response. Yeah I got it, Thanks a ton man. You need to be my teacher xD Best Response You've already chosen the best response. haha, :) but we still require to do the 1st 2 ! Best Response You've already chosen the best response. another rule, `product of slopes of perpendicular lines = -1 ` so, for 1st one we had slope = 1/5 let slope of perpendicular line be m. so, \(\Large m\times (1/5)=-1\) find m from here :) Best Response You've already chosen the best response. 1st one is m=1/5? 2cnd one is m=1/3? 
It seems like these would be the answers but they are not possible to be selected. Best Response You've already chosen the best response. Ahhhh okay Best Response You've already chosen the best response. because perpendicular lines follow different rule :) Best Response You've already chosen the best response. Well I still don't exactly get how it goes from (1/5) to -1... Best Response You've already chosen the best response. product of slopes = -1 is the rule so, m (1/5) =-1 multiplying by 5, m = -5 got this ? Best Response You've already chosen the best response. multiplying by 5 on both sides. Best Response You've already chosen the best response. I'm confused on the product of slopes part, I get the multiplying on both sides. So like can you explain the product of slopes rule.. I tried looking it up on google but nothing I can comprehend really comes up. Best Response You've already chosen the best response. if say line 1 has slope M1 and line 2 has slope M2 now if these 2 lines are perpendicular to each other, then M1*M2 =-1 Best Response You've already chosen the best response. another way to remember the same thing is, the slope of line is negative reciprocal of the perpendicular line, so, slope of line perpendicular to line with slope 1/5 will be - (1/(1/5)) = -5 Best Response You've already chosen the best response. so, thats your 1st one, m =-5 for 2nd one, we had slope = 1/3, so m *(1/3) = -1 so, m=-3 is the answer to 2nd one :) Best Response You've already chosen the best response. Okay, so basically whenever there is a fraction just take (1/_) blank space number lol Best Response You've already chosen the best response. you forgot the negative :P -1/ stuff Best Response You've already chosen the best response. but this only applies to perpendicular lines Best Response You've already chosen the best response. for parallel lines, slope are equal Best Response You've already chosen the best response. Oh I see says the blind man Best Response You've already chosen the best response. what who ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. i hope you got the entire problem clearly ? Best Response You've already chosen the best response. ask if any doubts anywhere Best Response You've already chosen the best response. Yeah I get it now man thanks a ton Best Response You've already chosen the best response. welcome ^_^ Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
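To make the two rules from this tutoring exchange concrete (parallel lines share a slope, and perpendicular slopes multiply to -1), here is a small check. It is an added sketch, not part of the thread; the helper function name is made up for illustration. It uses the same five lines discussed above, each written as ax + by = c:

from fractions import Fraction

def slope(a, b):
    # Slope of ax + by = c (for b != 0): rearranging gives y = (-a/b)x + c/b.
    return Fraction(-a, b)

print(slope(1, -5))    # x - 5y = -10   ->  1/5
print(slope(1, -3))    # x - 3y = 9     ->  1/3
print(slope(4, 2))     # 4x + 2y = 5    ->  -2
print(slope(10, -5))   # 10x - 5y = 8   ->  2
print(slope(4, 1))     # 4x + y = -1    ->  -4

# Parallel to 4x + y = -1: the same slope, -4.
# Perpendicular to x - 5y = -10: the negative reciprocal of 1/5, which is -5,
# because the product of perpendicular slopes is -1:
print(slope(1, -5) * Fraction(-5))   # -1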
{"url":"http://openstudy.com/updates/525d3461e4b002bdb090eac2","timestamp":"2014-04-18T10:56:18Z","content_type":null,"content_length":"143986","record_id":"<urn:uuid:48490920-bb4f-44e4-84d2-4f8aec94dc85>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Trouble Determining Moment of Inertia Formula it is if the rightmost vertex A is the tip of the wedge. so you'd have | .B | | .__________| h A C where the | above/below A represent the axis of rotation. with a line connecting AB and L=AC. How could I do this using similar triangles? so you have the sum of the 2 wedges that make up the rod: [tex]I_r=I_{BW}+I_{TW}[/tex] where BW=bottom wedge, TW=top wedge. but they are both not equal so the bottom may be 80% of the total I_r but I have no knowledge of that so I can't go that way right?
{"url":"http://www.physicsforums.com/showthread.php?t=195636","timestamp":"2014-04-20T21:22:09Z","content_type":null,"content_length":"65976","record_id":"<urn:uuid:d887f1c2-d57a-493c-b65b-a2d5229c4975>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
BRAINTENANCE: Train, Strain And Improve Your Brain. Expand Your Mind. Share this ARTICLE with your colleagues on LinkedIn . Dear Friends: The questions for today are quite simple. Assume (although we should not assume too often) that we have two men and two women whom we wish to seat in a row: 1) How many ways can these four people be arranged? 2) How many ways can these people be arranged if they must be seated with people of the opposite sex seated next to eachother? 3) How many ways can these people be arranged if they must be positioned with people of the same sex next to each other? Douglas Castle Answers to yesterday's quiz follow: A bucket contains 50 painted golf balls. 30 are painted blue, and the balance are painted red. Assume that the balls have been well shuffled around, and that you are blindfolded (in a non-hostile situation, and not for the purposes of doing anything that your sainted parents wouldn't approve of). Here are three questions: 1. What is the percentage likelihood (probability) that a ball that you choose will be red? 40%...found by subtracting 30 blue balls (ahem) from 50 in total (which leaves 20 red balls), and dividing the result by 50. 20/50= 40% 2. What is the probability that, if the first ball was red and eliminated from the game, that the next ball that you selected would be blue? In this case, we now have 49 balls in total, with 19 red and 30 blue. Dividing 30 by 49 gives us our answer, which is 61.22%. 3. What is the probability that, if the first ball's color was not identified and eliminated from the game, that the next ball that you selected would blue? Yep. A trick question. The answer would be the same as in question 1, above, because you are merely performing the same operation over again with the same number of balls. However...if you read the question thinking that a ball with an unidentified color was eliminated and not replaced, the answer is more complicated. It is what my math tutor used to call a "conditional probability problem". It poses a challenge. If the eliminated ball were red, the number of blue remaining would be 30, with 19 red remaining; but if the eliminated ball were blue, the number of blue remaining would be 29, with 20 red left. In either case, the number of balls left over would be 49. What to do? If a blue had been eliminated, the new probability of a blue being chosen would be 29/49; if a red had been eliminated, the new probability of blue being chosen would be 30/49. Stay tuned...we'll have to save this third answer for next time. With New Year's Day fast approaching, this gives you another two days to solve this one. Invest the time wisely! Share this ARTICLE with your colleagues on LinkedIn . Dear Friends: Today's quiz is certain to make you think. But then, that's the objective, isn't it? A bucket contains 50 painted golf balls. 30 are painted blue, and the balance are painted red. Assume that the balls have been well shuffled around, and that you are blindfolded (in a non-hostile situation, and not for the purposes of doing anything that your sainted parents wouldn't approve of). Here are three questions: 1. What is the percentage likelihood (probability) that a ball that you choose will be red? 2. What is the probability that, if the first ball was red and eliminated from the game, that the next ball that you selected would be blue? 3. What is the probability that, if the first ball's color was not identified and eliminated from the game, that the next ball that you selected would blue? Good luck! 
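A quick, hedged Python check of the bucket answers worked above (my own snippet — the blog posts contain no code). The last line also runs the "unknown colour removed" case by total probability, which comes out equal to the fresh-draw chance of blue:

```python
from fractions import Fraction

print(Fraction(20, 50))        # 2/5  -> a 40% chance the first ball is red (Q1)
print(Fraction(30, 49))        # 30/49 -> about 61.22% chance of blue after a red is removed (Q2)

# Q3, the deferred conditional case: a ball of unknown colour is removed first.
p_blue = Fraction(30, 50) * Fraction(29, 49) + Fraction(20, 50) * Fraction(30, 49)
print(p_blue)                  # 3/5 — the same 60% chance of blue as a draw from the full bucket
```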
Douglas Castle

The solution to yesterday's problem follows: The sum of all of the integers from 1 to 1,000, inclusive, will be 500,500. If you tried to obtain this answer by plugging numbers into your calculator, it would have taken you (if you're an average sort of person, but focused only on doing this repetitive operation) in excess of one hour, with a high probability of making at least one error. The formula for adding any series of consecutive numbers from 1 to n (where n is the highest number) is simply: n(n+1)/2. In this particular problem, the computation (which would have saved you the better part of an hour) would have been 1,000(1,001)/2.

Share this ARTICLE with your colleagues on LinkedIn.

Dear Friends: Visit these sites, and make each one a favorite to visit every day. Invest a bit of time to transform every aspect of your life for the better!

THE INTERNAL ENERGY PLUS WEBSITE (http://www.internalenergyplus.com/) - For self-help, personal growth and professional success tools superior to any others in the marketplace. You have to visit!

BRAINTENANCE: MINDBUILDERS (http://braintenance.blogspot.com) - Build your intelligence, sharpen your senses and increase the quality and length of your life. You receive a quiz daily (with answers the following day) to keep you razor sharp.

THE INTERNAL ENERGY PLUS FORUM (http://theinternalenergyplusforum.blogspot.com/) - Learn the latest about developments and modalities in IEP.

THE NATIONAL NETWORKER (http://thenationalnetworkerweblog.blogspot.com/) - Expand your contacts, relationships and possibilities for friendship and business during the new year! Get a subscription for their free NEWSLETTER, too.

CASTLE's BLOG (http://aboutdouglascastle.blogspot.com/) - Receive an interesting thought or idea every day. Stimulate your mind, initiate productive conversations with colleagues, and have fun. This will also feed my insatiable ego and lust for recognition.

Happy, healthy and prosperous New Year to all!

Douglas Castle

Share this ARTICLE with your colleagues on LinkedIn.

Dear Friends: Today's quiz is a very straightforward one. What is the sum of all of the consecutive whole numbers (integers) from 1 to 1,000, inclusive? [Hint: Manual addition will take you far too long, and introduces a high probability of error -- there might be a formula to help solve this one...]

Solutions to 12/23/08 Quiz: What is the next number in each of the following sequences?

a) 1, 2, 4, 8, 16, 32, _____? The next number is 64. Each number can be found by doubling the one which precedes it. Alternately, each number in the series represents 2 raised to a power. For example: 2 to the 0 power is 1; 2 to the 1st power is 2; 2 to the 2nd power is 4, etc.

b) 1, 4, 27, 256, 3125, _____? The next number is 46,656. Each number in this series is a number raised to its own power. For example: 1 to the first power is 1; 2 to the second power is 4; 3 to the third power is 27, etc., so the next term is 6 to the sixth power.

c) 123, 234, 345, 456, 567, ______? The next number in this series is 678. Each number is generated by taking the middle digit of the preceding number and using it to start a new three-digit run of consecutive digits. You will also find that the sum of the three digits in each number is 3 greater than in the number which preceded it.

d) 10, 1011, 1011000, 10110001111, 101000111100000, ______? The next number is 101000111100000111111.
This series is built on simply taking each number and adding successive integers to it (either zero or one, alternating) in an amount greater than the amount in the number which preceded it. For example, the first number is 10, the second is 1011 (adding 1 twice to the end of the number), the next number is 1011000 (adding 0 three times to the end of the number). e) 1, 2, 6, 24, 120, _______? The next number is 720. The numbers in this series are each produced by taking the previous number and multiplying it by a number which is one greater than that which was used in arriving at the preceding number. For example: 1x2 =2; 2x3 =6; 6x4 = 24; 24x5 = 120, etc. Solution to 12/24/08 Quiz: Seven gentlemen are getting ready to leave a business meeting. They have all just met for the first time, and each of them will want to shake hands with each of the others. How many handshakes will be exchanged? (Remember: If two gents shake hands together, that only counts as one handshake. Also: no gentleman shakes hands with himself, unless he is praying, very cold, or addle-witted). There is a formula for combinations which gives us the answer: If n is the number of persons, and k is the amount of each combination, we divide n! by the product of (n-k)! x (k!) In this case, n=7 gents, k= a combination of 2, and ! means factorial (which means that number multiplied by each integer that precedes it. 4! would equal 4x3x2x1). In our case, this formula, with its blanks filled in would be 7!/(7-2)! x (2!), or 7!/5! x 2!, or (7 x 6)/2...which equals 21 handshakes in total. Douglas Castle p.s. Visit the new INTERNAL ENERGY PLUS Website at http://www.internalenergyplus.com/ . Share this ARTICLE with your colleagues on LinkedIn . Dear Friends: Here's the question - Seven gentlemen are getting ready to leave a business meeting. They have all just met for the first time, and each of them will want to shake hands with each of the others. How many handshakes will be exchanged? (Remember: If two gents shake hands together, that only counts as one handshake. Also: no gentleman shakes hands with himself, unless he is praying, very cold, or addle-witted). Good luck! Douglas Castle Share this ARTICLE with your colleagues on LinkedIn . Dear Friends: Today's problems are all of the same format. Solutions will be posted on Friday, right after Christmas. What is the next number in each of the following sequences? a) 1, 2, 4, 8, 16, 32, _____? b) 1, 4, 27, 256, 3125, _____? c) 123, 234, 345, 456, 567, ______? d) 10, 1011, 1011000, 10110001111, 101000111100000, ______? e) 1, 2, 6, 24, 120, _______? Good luck! Douglas Castle Share this ARTICLE with your colleagues on LinkedIn . Dear Friends: I cannot overemphasize just how important it is to stimulate and exercise your mind every day. Your mind is a muscle that atrophies if your fail to use it -- if you force it to stretch and strain, it grows stronger, and serves you longer. There is substantial evidence that frequent Braintenance (a mental workout) can not only sharpen your senses, your memory and heighten your mood...it can actually help hold off (or possibly even prevent) the onset of senile dementia and Alzheimer's disease. Invest ten minutes to an hour each day in the BRAIN GYM. It is a pat of IEP. Just click on BRAINTENANCE (http://braintenance.blogspot.com/) . Make it a favorite. Tell your friends and associates. Starting soon, we'll be featuring the Quiz Of The Day, so go to the site, sign up to receive emails (or even get the RSS feed), and be guaranteed a puzzle a day! 
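For anyone who likes to double-check this sort of arithmetic by machine, here is a tiny Python verification of the formulas used in these solutions (my own snippet, not part of the original posts):

```python
from math import comb, factorial

print(comb(7, 2))                                         # 21 handshakes among 7 people
print(factorial(7) // (factorial(7 - 2) * factorial(2)))  # same result via n!/((n-k)! k!)
print(sum(range(1, 1001)), 1000 * 1001 // 2)              # 500500 500500 — brute force vs n(n+1)/2
```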
Douglas Castle
{"url":"http://braintenance.blogspot.com/2008_12_01_archive.html","timestamp":"2014-04-19T09:28:13Z","content_type":null,"content_length":"173395","record_id":"<urn:uuid:844cdf44-4563-436c-afe1-cf3a2760f33b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Contextual Effects and Read Quality in a Probabilistic Aligner In my last post on this topic, Sequence Alignment with Conditional Random Fields, I introduced a properly normalized Smith-Waterman-like global aligner (which required the entire read to be aligned to a subsequence of the reference sequence). The motivation was modeling the probability of a read being observed given a reference sequence from which it originated. Context Plus Read Quality Today I want to generalize the model in two ways. First, I’ll add contextual effects like affine gap scores and poly-base errors on Illumina. Second, I’ll take into consideration per-base read quality, as produced by Illumina’s prb files (with simple fastq format, where you only get the probability of the best base, we can either treat the others as uniformly likely or use knowledge of the confusion error profile of the sequencer). We’ll still take start position into account, for issues like modeling hexamer priming or fractionation effects in sample prep. The dynamic programming algorithms work as before with finer-grained cells. Further, you can infer what the actual read was, though I believe a hierarchical model will be better for this, because the sequence polymorphism can be separated from the read errors generatively. Modeling Alignments What we’re interested in for our work on RNA-Seq based mRNA expression is modeling is $p(y|x)$, the probability of a read $y$ given a reference sequence $x$. Rather than approaching this directly, we’ll take the usual route of modeling a (latent) alignment $A$ at a (latent) start position $i$, $p(y,A,i|x)$, and then marginalizing, $p(y|x) = \sum_{A,i} p(y,A,i|x)$. The best alignment will be computable via a standard sequence max algorithm, and the marginalization through the standard product-sum algorithm. Both run in time proportional to the read length times reference sequence length (of course, the latter will be trimmed down to potential matches in a region). Both can be sped up with upper bounds on distance. The Model We assume there is a score $\mbox{\sc start}(i,x) \in \mathbb{R}$ for each start position $i$ in reference sequence $x$. For instance, this may be a log probability of starting at a position given the lab fractionation and priming/amplification protocol. We will also assume there is a quality score on a (natural) log probability scale $\mbox{\sc logprb}(y,b,j) \in \mathbb{R}$ for the probability of base $b$ at position $j$ in read $y$. For instance, these may be the Phred quality scores reported in Illumina’s prb files converted to (natural) log probability of match. Our definition only normalizes for reads of fixed length, which we’ll write $\mbox{\sc readlen}$. We will recursively define a score on an additive scale and then exponentiate to normalize. 
In symbols,

$p(y,A,i|x) \propto \exp\big(\mbox{\sc start}(i,x) + f(A,i,0,x,y,\mbox{\rm start})\big)$

where we define the alignment score function $f(A,i,j,x,y,a)$, for alignment sequence $A$, reference sequence $x$ and reference position $i$, read $y$ and read position $j$, and previous alignment operation $a$, recursively by

$\begin{array}{rcl} f([],i,\mbox{\sc readlen},x,y,a) & = & 0.0 \\[18pt] f(\mbox{\rm sub}(x_i,b)+A,i,j,x,y,a) & = & \mbox{\sc subst}(x,y,i,j,a) \\ & + & \mbox{\sc logprb}(y,b,j) \\ & + & f(A,i+1,j+1,x,y,\mbox{\rm sub}(x_i,b)) \\[12pt] f(\mbox{\rm del}(x_i)+A,i,j,x,y,a) & = & \mbox{\sc del}(x,y,i,j,a) \\ & + & f(A,i+1,j,x,y,\mbox{\rm del}(x_i)) \\[12pt] f(\mbox{\rm ins}(b)+A,i,j,x,y,a) & = & \mbox{\sc ins}(x,y,i,j,a) \\ & + & \mbox{\sc logprb}(y,b,j) \\ & + & f(A,i,j+1,x,y,\mbox{\rm ins}(b)) \end{array}$

Reminds me of my Prolog programming days.

The Edit Operations

The notation $a + A$ is meant to be the edit operation $a$ followed by edit sequence $A$. The notation $[]$ is for the empty edit sequence. The notation $\mbox{\rm sub}(x_i,b)$ is for the substitution of an observed base $b$ from the read for base $x_i$ of the reference sequence; if $x_i \neq b$ it's a particular mismatch score, and if $x_i = b$, it's a match score. The notation $\mbox{\rm ins}(b)$ is for the insertion of observed base $b$ and $\mbox{\rm del}(x_i)$ for the deletion of base $x_i$ in the reference sequence. Insertion and deletion are separated because gaps in the reference sequence and read may not be equally likely given the sequencer and sample being sequenced. Note that the base case requires the entire read to be consumed by requiring $j = \mbox{\sc readlen}$.

Scoring Functions

Note that the score functions, $\mbox{\sc subst}(x,y,i,j,a) \in \mathbb{R}$, $\mbox{\sc del}(x,y,i,j,a) \in \mathbb{R}$, and $\mbox{\sc ins}(x,y,i,j,a) \in \mathbb{R}$, have access to the read $y$, the reference sequence $x$, the present position in the read $j$ and reference sequence $i$, as well as the previous edit operation $a$.

Enough Context

This context is enough to provide affine gap penalties, because the second delete will have a context indicating the previous operation was a delete, so the score can be higher (remember, bigger is better here). It's also enough for dealing with poly-deletion, because you have the identity of the previous base on which the operation took place. Further note that by defining the start position score by a function $\mbox{\sc start}(x,i)$, which includes a reference to not only the position $i$ but the entire reference sequence $x$, it is possible to take not only position but also information in the bases of $x$ at positions around $i$ into account.

Dynamic Programming Same as Before

The context is also restricted enough so that the obvious dynamic programming algorithms for first-best and marginalization apply. The only tricky part is computing the normalization for $p(y,A,i|x)$, which has only been defined up to a constant. As it turns out, the same dynamic programming algorithm works to compute the normalizing constant as for a single probabilistic read. In practice, you have the usual dynamic programming table, only it keeps track of the last base matched in the read, the last base matched in the reference sequence, and whether the last operation was an insertion, deletion, or substitution.
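To make the recursion concrete, here is a minimal sketch in Python — mine, not LingPipe's implementation. It assumes the caller supplies the log-scale scoring functions (`subst`, `delete`, `insert`, `start` and `logprb` below are stand-ins for SUBST, DEL, INS, START and LOGPRB above), and it sums over the four emitted bases at each step because the read is probabilistic; replacing the log-sum-exp with a max would give the first-best alignment score instead.

```python
import math
from functools import lru_cache

BASES = "ACGT"

def log_unnormalized(x, read_len, scores):
    """Hedged sketch of the recursion above.

    x: reference string; read_len: READLEN; scores: assumed object supplying
    log-scale functions start, subst, delete, insert, logprb.  Returns the log
    of the unnormalized marginal over start positions, alignments and bases.
    """
    def log_sum_exp(terms):
        m = max(terms)
        return m + math.log(sum(math.exp(t - m) for t in terms))

    @lru_cache(maxsize=None)
    def f(i, j, prev):
        # i: position in reference x, j: position in the read, prev: last edit op
        if j == read_len:                 # base case: the whole read is consumed
            return 0.0
        terms = []
        if i < len(x):
            for b in BASES:               # substitution/match emitting read base b
                terms.append(scores.subst(x, i, j, b, prev)
                             + scores.logprb(b, j)
                             + f(i + 1, j + 1, ("sub", x[i], b)))
            # deletion of reference base x[i]: consumes no read position
            terms.append(scores.delete(x, i, j, prev)
                         + f(i + 1, j, ("del", x[i])))
        for b in BASES:                   # insertion of read base b
            terms.append(scores.insert(x, i, j, b, prev)
                         + scores.logprb(b, j)
                         + f(i, j + 1, ("ins", b)))
        return log_sum_exp(terms)         # sum-product; swap in max() for first-best

    starts = [scores.start(i, x) + f(i, 0, "start") for i in range(len(x))]
    return log_sum_exp(starts)
```

Memoization plays the role of the dynamic programming table here, and the `prev` argument carries exactly the context described above: the last operation and the bases it touched.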
To compute the normalizer, you treat each base log prob from a pseudoread of the given read length $\mbox{\sc readlen}$ as having score 0 and accumulate all the matches in the usual way with the sum-product algorithm. (You’ll need to work on the log scale and use the log-sum-of-exponentials operation at each stage to prevent underflow.) In general, these dynamic programming algorithms will work anywhere you have this kind of tail-recursive definition over a finite set of operations. Inferring the Actual Read Sequence for SNP, Etc. Calling Because of variation among individuals, the sequence of bases in a read won’t necessarily match the reference sequence. We can marginalize out the probability that the read sequence is $z$ given that it was generated from reference sequence $x$ with read $y$ (remember the read’s non-deterministic here), $p(z|x,y) = \sum_{i,\mbox{yield}(A) = z} p(A,i,x|y)$, where $\mbox{yield}(A)$ is the read produced from alignment $A$, which itself is defined recursively by $\mbox{yield}([]) = \epsilon$, $\mbox{\rm yield}(\mbox{\rm del}(x) + A) = \mbox{\rm yield}(A)$, $\mbox{\rm yield}(\mbox{subst}(b,x_i) + A) = b \cdot \mbox{\rm yield}(A)$, and $\mbox{\rm yield}(\mbox{ins}(b)+A) = b \cdot \mbox{\rm yield}(A)$, where $\epsilon$ is the empty string and $\cdot$ string concatenation. I believe this is roughly what the GNUMAP mapper does (here’s the pay-per-view paper). Reference Sequences with Uncertainty We could extend the algorithm even further if we have a non-deterministic reference sequence. For instance, we can use data from the HapMap project to infer single nucleotide polymorphisms (SNPs). We just do the same thing for reference side non-determinsm as on the read side, assuming there’s some score associated with each choice and adding the choice into the context mix.
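As a small illustration of the recursive yield() definition above, here is one way it might look in Python (my sketch; the tuple encoding of edit operations is an assumption, not LingPipe's representation):

```python
def alignment_yield(alignment):
    """Read string produced by an alignment: substitutions and insertions emit
    their read base, deletions emit nothing (mirrors the recursive definition)."""
    out = []
    for op in alignment:
        if op[0] in ("sub", "ins"):   # e.g. ("sub", ref_base, read_base) or ("ins", read_base)
            out.append(op[-1])        # the emitted read base is the last element
    return "".join(out)

print(alignment_yield([("sub", "A", "A"), ("del", "C"), ("ins", "G"), ("sub", "T", "C")]))  # "AGC"
```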
{"url":"http://lingpipe-blog.com/2010/06/14/contextual-effects-and-read-quality-in-a-probabilistic-aligner/","timestamp":"2014-04-20T11:54:54Z","content_type":null,"content_length":"56344","record_id":"<urn:uuid:152ebe11-382d-4add-b46f-ef472c258ffa>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Help simplifying expression How do you solve this expression for T? 2=e^(.003t) The answer is t=ln2/.003 In general, to "solve" an equation for t, you do, to both sides, the "opposite" of what is done to t, in the reverse order. Here, $y= e^{.003t}$. To evaluate that, you would do two things: first multiply t by 0.003, then take the exponential. We need to do the "opposite" (inverse) of the exponential, then the "opposite" of "multiply by 0.003". The inverse of the exponential function is the natural logarithm and the inverse of "multiply by 0.003" is "divide by 0.003". So we would start with $y= e^{.003t}$ and first take the logarithm of both sides: $ln(y)= .003t$. Now, divide both sides by .003: $\frac{ln(y)}{.003}= t$.
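A quick numerical sanity check of that answer (my own snippet, not from the thread):

```python
import math

t = math.log(2) / 0.003                          # ln 2 / 0.003
print(round(t, 2))                               # 231.05
print(math.isclose(math.exp(0.003 * t), 2.0))    # True: plugging t back in recovers 2
```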
{"url":"http://mathhelpforum.com/algebra/185419-help-simplifying-expression-print.html","timestamp":"2014-04-16T15:03:05Z","content_type":null,"content_length":"5856","record_id":"<urn:uuid:1253582e-3143-4dbe-a027-f895253f448d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
General Instructions for Simulations using Spreadsheets These instructions are general. They describe how to create random numbers for use in simulations, how to organize the spreadsheet and some about how to process the output of the simulation. The functions discussed here are: RAND, TRUNC, IF, AND, OR, and FREQUENCY. Random Numbers There are at least three ways to create random numbers with Excel. 1. The RAND() function returns a random number between zero and one. You may notice that the numbers in any cell with the RAND function change each time you do anything in the spreadsheet. This is because the spreadsheet is "recalculating" automatically each time you do something. You can turn this off if you wish from the Preferences page. What if we don't want zero to one? If you want numbers between zero and five, multiply by five: =5*RAND(). If you want numbers between 3 and 8, use =3+5*RAND(). If you want only INTEGERS, you need to round off the decimals somehow. The function to do this is TRUNC(). TRUNC() truncates all decimals from a number. So, TRUNC(2*RAND()) returns zero half the time, and one the other half of the time [Note: this is the same result as IF(RAND()<0.5,0,1)]. If you want an equal chance for the numbers 1, 2 or 3, use =1+TRUNC(3*RAND()). 2. The RANDBETWEEN(lo,hi) function returns a random integer between lo and hi. (for example, RANDBETWEEN(1,5) returns an integer between 1 and 5. As with RAND() this function recalculates each time you do anything in the spreadsheet. This is probably the one you will want to use the most. 3. In the Data Analysis Toolpak (under the Tools menu--see the "histogram" instructions if it does not appear in the Tools menu) there is a tool called "Random Number Generation". You can specify how many cells you want, what distribution they should follow, and many other features of the data. More than what we need, but you might try it out. Logic Operators You can interpret the answers you get from a simulation by using logical operators. For example, to see if cells A3 and B3 are the same, use =IF(A3=B3,1,0). The first argument to IF() is an expression to be evaluated as TRUE or FALSE. The second argument is the number to return if TRUE and the third is the number to return if FALSE. These do not have to be numbers; they can also be expressions themselves: =IF(A3=B3,IF(A3=1,1,A1),RAND()) would give the value 1 if the entries in A3 and B3 are both 1, would give the value in A1 if A3 and B3 were equal but not 1, and would give an entirely random number if the entries in A3 and B3 were unequal. Sometimes you will want to use AND() and OR() to build up your expression for IF(). For example, =IF(AND(A3=B3,C3>2000),1,0). The function AND() returns TRUE is both expressions are TRUE. The function OR() returns TRUE is either or both of the expressions are TRUE. Counting and Frequency If you want to know how many cells have values greater or less than some cutoff values, you can use the FREQUENCY() function. For example, if you want to know how many simulations ended up less than -100 and greater than zero and greater than 100, then FREQUENCY() is the function for you! To use FREQUENCY(), you need to have a range of data cells and a range of cutoffs which are the borderlines for the "bins" we wish to sort into. The list of cutoffs is called the "bin array". The data array and the bin array must be entered somewhere. For the example above the bin array would be three cells with the values -100,0,100 respectively. 
Notice that the number of bins is actually one more than the number of cells in the "bin array" because the numbers are really cutoffs between bins. The extra bin is labelled MORE, because it includes the numbers with values more than the largest bin cutoff (in this case, 100). The command looks like =FREQUENCY(A101:G101,A102:C102) where A101:G101 is the data array and A102:C102 is the bin array. But don't use the "function wizard" (the button with the fx symbol on it), or the "Insert-->Function" menu item! For some reason they don't work right. The way to get the desired output is to select (highlight) the cells into which the output is to go (one more than the bin array, remember), type the command as above, and then type Control-Shift-Enter (i.e., hold down the control and shift keys and type enter). The output of the FREQUENCY() function shows the number of cells between the previous bin value and the current bin value. For example, with the bin array -100,0,100, the first output cell contains the number of cells in the data range with value less than (or equal to) -100; the next cell contains the number between -100 and 0; the next between 0 and 100; and the last cell contains the number of cells in the data array that have a value greater than the last cutoff, 100. Organization of simulations Try to organize your spreadsheet so that you can follow what you have done even if you look at it a year from now. Keep all cells corresponding to a single trial in either a column or a row. Make summary statistics appear above, below or to the right of the corresponding trial with labels for what value that is (for example, the average or SD). Use one spreadsheet as a template for other similar simulations. You can copy it, change a few things and read off the new results. This is especially helpful when doing the same experiment with 10 then 40 then 160 flips of a coin. All you need to do is copy the first 10 rows and recalculate. Copyright 1999 © Colgate University. All rights reserved.
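For readers who would rather script the same workflow than build it in a spreadsheet, here is a hedged Python sketch of the ideas above — a RANDBETWEEN-style integer generator and a FREQUENCY-style binning function. This is my own illustration, not part of the original handout.

```python
import random

def randbetween(lo, hi):
    """Like Excel's RANDBETWEEN(lo, hi): a uniform random integer from lo to hi inclusive."""
    return random.randint(lo, hi)

def frequency(data, bins):
    """Like Excel's FREQUENCY(data_array, bin_array): counts of values at or below each
    cutoff (cutoffs assumed ascending, as in the -100, 0, 100 example), plus a final
    'MORE' bucket for values above the last cutoff."""
    counts = [0] * (len(bins) + 1)
    for value in data:
        for k, cutoff in enumerate(bins):
            if value <= cutoff:
                counts[k] += 1
                break
        else:
            counts[-1] += 1
    return counts

# Example: 1,000 trials, each trial the sum of two dice, binned at cutoffs 4, 7 and 10
trials = [randbetween(1, 6) + randbetween(1, 6) for _ in range(1000)]
print(frequency(trials, [4, 7, 10]))
```

As in Excel, the output has one more bucket than the bin array: the final entry counts the values above the last cutoff (the "MORE" bin).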
{"url":"http://math.colgate.edu/math102/Common/excel/generalsim.html","timestamp":"2014-04-18T05:29:36Z","content_type":null,"content_length":"6280","record_id":"<urn:uuid:d2bb7db5-80fb-49e7-873d-dcda35e0aa70>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Exact conditional tests for cross-classifications: approximation of attained significance levels. Psychometrika Results 1 - 10 of 27 "... The strong Markov random field (MRF) model is a sub-model of the more general MRF-Gibbs model. The strong-MRF model defines a system whereby not only is the field Markovian with respect to a defined neighbourhood, but all sub-neighbourhoods also define a Markovian system. A checkerboard pattern is a ..." Cited by 8 (0 self) Add to MetaCart The strong Markov random field (MRF) model is a sub-model of the more general MRF-Gibbs model. The strong-MRF model defines a system whereby not only is the field Markovian with respect to a defined neighbourhood, but all sub-neighbourhoods also define a Markovian system. A checkerboard pattern is a perfect example of a strong Markovian system. Although the strong Markovian system requires a more stringent assumption about the field, it does have some very nice mathematical properties. One mathematical property, is the ability to define the strong Markov random field model with respect to its marginal distributions over the cliques. This property allows a direct equivalence to the ANOVA loglinear construction to be proved. From this proof, the general ANOVA log-linear construction formula is derived. - In Proceedings of COMPSTAT , 1994 "... this paper, we propose improvements to a naive simulate and reject procedure for generating r \Theta c tables under quasi-independence for an arbitrary pattern of fixed cells. Although some of the algorithmic improvements are described for generating under QI for the off-diagonal cells of a square t ..." Cited by 3 (1 self) Add to MetaCart this paper, we propose improvements to a naive simulate and reject procedure for generating r \Theta c tables under quasi-independence for an arbitrary pattern of fixed cells. Although some of the algorithmic improvements are described for generating under QI for the off-diagonal cells of a square table, the ideas are applicable to other patterns of fixed cells. Apart from complete enumeration, which is only viable for small tables, the simulate and reject procedure is currently the only method for generating independent tables from the exact null distribution under QI. Our improvements to the naive procedure greatly increase its efficiency. Smith, McDonald and Forster (1994) discuss another method for generating tables under QI using a Gibbs sampling approach, based on theoretical results in Forster, McDonald and Smith (1994). However, the generated tables are not necessarily independent and are only realizations from an approximation to the exact null distribution. When using a single Markov chain, the observed table is the obvious starting value. For multiple chains, obtaining other starting values with the same sufficient statistics for the nuisance parameters as the observed data is problematic. A possible solution is to generate a small number of independent starting values using the simulate and reject algorithms proposed. "... this paper, we review exact tests and the computing problems involved. We propose new recursive algorithms for exact goodness-of-fit tests of quasi-independence, quasi-symmetry, linear-bylinear association and some related models. We propose that all computations be carried out using symbolic comput ..." Cited by 2 (0 self) Add to MetaCart this paper, we review exact tests and the computing problems involved. 
We propose new recursive algorithms for exact goodness-of-fit tests of quasi-independence, quasi-symmetry, linear-bylinear association and some related models. We propose that all computations be carried out using symbolic computation and rational arithmetic in order to calculate the exact p-values accurately and describe how we implemented our proposals. Two examples are presented. , 1997 "... The analysis of categorical data often leads to the analysis of a contingency table. For large samples, asymptotic approximations are sufficient when calculating p-values, but for small samples the tests can be unreliable. In these situations an exact test should be considered. This bases the test o ..." Cited by 1 (0 self) Add to MetaCart The analysis of categorical data often leads to the analysis of a contingency table. For large samples, asymptotic approximations are sufficient when calculating p-values, but for small samples the tests can be unreliable. In these situations an exact test should be considered. This bases the test on the exact distribution of the test statistic. Sampling techniques can be used to estimate the distribution. Alternatively, the distribution can be found by complete enumeration. This thesis develops a number of new algorithms for complete enumeration of various models. Recursive algorithms are developed to test for independence in r × c tables. The algorithm is extended for multi-dimensional tables. One algorithm is extended to enumerate tables under the model of quasi-independence, and a rejection stage enables testing of models such as quasi-symmetry and uniform association. A new algorithm is developed that enables a model to be defined by a model matrix, and all tables that satisfy the model are found. This provides a more efficient enumera-tion mechanism for complex models and extends the range of models that can be tested. The technique can lead to large calculations and a distributed version of the algorithm is developed that enables a number of machines to work efficiently on the same problem. , 1994 "... this paper, is the hypothesis of QI for the off-diagonal cells of a r \Theta r square table, where the sufficient statistics for the nuisance parameters are x i+ ; x +j and x ii , for i; j = 1; : : : ; r. ..." Cited by 1 (1 self) Add to MetaCart this paper, is the hypothesis of QI for the off-diagonal cells of a r \Theta r square table, where the sufficient statistics for the nuisance parameters are x i+ ; x +j and x ii , for i; j = 1; : : : ; r. "... this document is subject to change without notice. VISUAL NUMERICS, INC., MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Visual Numerics, Inc., shall not be liable for errors c ..." Add to MetaCart this document is subject to change without notice. VISUAL NUMERICS, INC., MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Visual Numerics, Inc., shall not be liable for errors contained herein or for incidental, consequential, or other indirect damages in connection with the furnishing, performance, or use of this material. All rights are reserved.No part of this document may be photocopied or reproduced without the prior written consent of Visual Numerics, Inc. "... p. 329, Mar. 1993. [398] P. 
Zinzindohoue, "Use of a new original concept for improving texture discrim- ination and image segmentation in presence of noise," Optik, vol. 90, p. 97, May 1992. [377] D. Wang, V. Haese-Coat, and J. Ronsin, "A nonlinear decomposition algo- rithm for noisy texture segme ..." Add to MetaCart p. 329, Mar. 1993. [398] P. Zinzindohoue, "Use of a new original concept for improving texture discrim- ination and image segmentation in presence of noise," Optik, vol. 90, p. 97, May 1992. [377] D. Wang, V. Haese-Coat, and J. Ronsin, "A nonlinear decomposition algo- rithm for noisy texture segmentation," Conference publication, vol. 354, p. 319, 1992. [378] H. Wechsler, "Texture analysis a survey," Signal Processing, vol. 2, pp. 271 282, 1980. [379] L. Wei, "Deterministic texture analysis and synthesis using tree structure vec- tor quantization," in Xll Brazilian Symposium on Computer Graphics and Image Processing, pp. 20213, Oct 1999. [380] J. S. Weszka, C. R. Dyer, and A. Rosenfeld, "A comparative study of texture measures for terrain classification," IEEE Transactions on Systems, Man and Cybernetics, vol. 6, pp. 269 285, Apr. 1976. [381] R. G. White and C. J. Oliver, "Data driven texture segmentation of SAR imagery," Conference publication, vol. 365, p. 415, 1992. [3 "... this document is governed by a Visual Numerics Software License Agreement. This document contains confidential and proprietary information constituting valuable trade secrets. No part of this document may be reproduced or transmitted in any form without the prior written consent of Visual Numerics. ..." Add to MetaCart this document is governed by a Visual Numerics Software License Agreement. This document contains confidential and proprietary information constituting valuable trade secrets. No part of this document may be reproduced or transmitted in any form without the prior written consent of Visual Numerics. RESTRICTED RIGHTS LEGEND: This documentation is provided with RESTRICTED RIGHTS. Use, duplication, or disclosure by the U.S. Government is subject to the restrictions set forth in subparagraph (c)(1)(ll) of the Rights in Technical Data and Computer Software clause at DFAR 252.227-7013, and in subparagraphs (a) through (d) of the Commercial Computer Software - Restricted Rights clause at FAR 52.227-19, and in similar clauses in the NASA FAR Supplement, when applicable. Contractor/Manufacturer is Visual Numerics, Inc., 2500 Wilcrest Drive, Ste 200, Houston, Texas 77042 , 1989 "... The traditional practice in the analysis of contingency table has been one of the goodness-of-fit tests (e.g. Pearson's X 2 or the likelihood ratio 0 2) which lean heavily on asymptotic machinery. It is well known that these methods can give spurious results when the table is sparse or when the samp ..." Add to MetaCart The traditional practice in the analysis of contingency table has been one of the goodness-of-fit tests (e.g. Pearson's X 2 or the likelihood ratio 0 2) which lean heavily on asymptotic machinery. It is well known that these methods can give spurious results when the table is sparse or when the sample size of the table is small. Although various algorithms have been proposed for exact tests in two-dimensional tables, exact tests in three- and higher-dimensional tables is almost an untouched area. In fact, even the minimal extension from two-dimension to three-dimension poses conceptually new problems. 
Viewing the Pvvalue as the expectation of a indicator function on a Markov chain whose equilibrium distribution is the same as the distribution of contingency tables with same margins, we propose a method to estimate the P-value by simulating the Markov chain which is constructed by the Metropolis algorithm. Probability functions for multidimensional contigency tables are develeped in the process. The method considerably extends the bounds of computational feasibility of the exact test for contingency table of virtually any dimension. It is fast and can be modified to calculate other statistics of the table. It also has the advantage that it can be to parallel processing. Numerical examples are given to illustrate the method.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=693513","timestamp":"2014-04-18T06:59:07Z","content_type":null,"content_length":"36547","record_id":"<urn:uuid:d4b155fc-d208-4f29-bb7c-a55a45a4cadf>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Where can I find Michael Artin Algebra solutions? I can't find them anywhere. No solution manual, no courses that used the textbook and then posted solutions, nothing. At the same time, the author leaves some important results in the exercises. I like the writing style and so far I'm not having inordinate trouble with the problems, but I'm now pretty unhappy with the person who recommended this book as "good for self-study". I am no stranger to the problem where one produces a wrong answer and doesn't know enough to realize that the answer is wrong. I'm surprised that I cannot find any courses that use this highly-regarded textbook.
{"url":"http://www.physicsforums.com/showthread.php?t=495660","timestamp":"2014-04-20T05:56:38Z","content_type":null,"content_length":"27668","record_id":"<urn:uuid:a63c464c-cd27-49f8-a4b5-08c6017cea69>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Groups of Hodge type, Hodge structure on Lie algebra

Let $W$ be a real algebraic group, and $G$ the associated complex group. Then $W$ is of Hodge type if there is a $\mathbb{C}^*$ action on $G$ such that $U(1)$ preserves $W$ and the action of $-1$ on $G$ is a Cartan involution. I have trouble understanding the definition, I guess because I don't understand very well the definition of a Cartan involution: it should be something like the complex conjugation relative to a compact real form of $G$.

Examples: $SU(p,q)$, $SO(2n)$ ($n\geq 3$), $Sp(n)$, $Sp(p,q)$, $SO(p,2q)$ ($q\geq 2$) and some other classical Lie groups are apparently of Hodge type. But I don't see the action of $U(1)$, the compact real form etc. Also, I was told that a group is of Hodge type if the Lie algebra has a Hodge structure. Is it easy to see the Hodge structure in the examples given by Simpson? Thanks for your help.

lie-groups lie-algebras hodge-theory

Comment: Not exactly an answer, but you might like Gross's article "A Remark on Tube Domains". Although it doesn't cover all cases you're interested in, it presents some details you're interested in, in some very illustrative cases. Since it seems like you're looking for more background/examples, perhaps this reading will help. – Marty Sep 30 '10 at 17:55

1 Answer

That $-1$ is a Cartan involution is the same as saying that the subgroup $K$ of points in $G$ fixed under the action of the adjoint of this copy of $-1$ is a maximal compact subgroup of $G$. For example, in $W=U(p,q)$ the subgroup $K=U(p)\times U(q)$ is the centraliser of the diagonal matrix $k$ whose first $p$ entries are $-1$ and whose last $q$ entries are $+1$. This element $k$ may be viewed as an element of the diagonal ${\mathbb C}^*$ embedded in $GL_{p+q}({\mathbb C})$ (the latter is the complexification of $W$), where $z\in {\mathbb C}^*$ is embedded in $G$ as the diagonal matrix whose first $p$ entries are $z$ and the last $q$ entries are $1$. It is clear that $K$ is a maximal compact subgroup of $W$.

Similar constructions exist for $Sp(2n)$ with $K=U(n)$. In general, you may like to replace your semi-simple group by a reductive group, to see the ${\mathbb C}^*$ action more clearly; for example, $U(p,q)$ was more convenient than $SU(p,q)$ in the above example. You can look up Milne's articles on "Shimura varieties" for detailed descriptions of these Hodge groups.
{"url":"http://mathoverflow.net/questions/40634/groups-of-hodge-type-hodge-structure-on-lie-algebra","timestamp":"2014-04-21T07:56:59Z","content_type":null,"content_length":"51578","record_id":"<urn:uuid:29105866-9a96-4315-bd78-6ff1225c2175>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Universal law of growth January 13th 2011, 09:18 AM #1 Nov 2010 Universal law of growth I need a serious help please Let Y be the metabolic rate of an organism, Yc the metabolic rate of a single cell, Nc(t) the total number of cells at time t, mc the mass of a cell, and Ec the energy required to create a new cell. The cell properties, Ec, mc, and Yc, are assumed to be constant and invariant with respect to the size of the organism. Thus Y = YcNc + Ec(dNc/dt). Let m be the total body mass of the organism at time t, and m = mcNc.(Note that Nc is the total number of cells in a body and is proportional to mass m, while the total number of capillaries Nn is proportional to 3/4 power of m.) Y=Y0(m)^(3/4) a.show that the above equation can be written as dm/dt = am^(3/4) - bm with a = Y0mc / Ec and b = Yc/Ec b.Let m = M be the mass of a matured organism, when it stops growing (dm/dt = 0). Find M, and show that the above equation can be rewritten as dm/dt = am^(3/4)[1-(m/M)^(1/4)]. c.Let r = (m/M)^(1/4), and R = 1-r. Then the above equation becomes dR/dt = -(a/4M^(1/4))R. Solve this simple ordinary differential equation and show that a plot of ln(R(t)/R(0)) vs. the no-dimensinal time at/(4M^(1/4)) should yield a straight line with a slope -1 for any organism regardless of its size. d. Based on this scaling for time t, argue that, for a mammal, the interval between heartbeats should scale with its size as M^(1/4). I am completely lost,,,, please help ,,,, Thank you I need a serious help please Let Y be the metabolic rate of an organism, Yc the metabolic rate of a single cell, Nc(t) the total number of cells at time t, mc the mass of a cell, and Ec the energy required to create a new cell. The cell properties, Ec, mc, and Yc, are assumed to be constant and invariant with respect to the size of the organism. Thus Y = YcNc + Ec(dNc/dt). Let m be the total body mass of the organism at time t, and m = mcNc.(Note that Nc is the total number of cells in a body and is proportional to mass m, while the total number of capillaries Nn is proportional to 3/4 power of m.) Y=Y0(m)^(3/4) a.show that the above equation can be written as dm/dt = am^(3/4) - bm with a = Y0mc / Ec and b = Yc/Ec b.Let m = M be the mass of a matured organism, when it stops growing (dm/dt = 0). Find M, and show that the above equation can be rewritten as dm/dt = am^(3/4)[1-(m/M)^(1/4)]. c.Let r = (m/M)^(1/4), and R = 1-r. Then the above equation becomes dR/dt = -(a/4M^(1/4))R. Solve this simple ordinary differential equation and show that a plot of ln(R(t)/R(0)) vs. the no-dimensinal time at/(4M^(1/4)) should yield a straight line with a slope -1 for any organism regardless of its size. d. Based on this scaling for time t, argue that, for a mammal, the interval between heartbeats should scale with its size as M^(1/4). I am completely lost,,,, please help ,,,, Thank you $\displaystyle Y0m^{3/4}=YcNc+Ec\frac{dNc}{dt}\Rightarrow \frac{Y0m^{3/4}}{Ec}=\frac{YcNc+Ec\frac{dNc}{dt}}{Ec}\Rightarro w \frac{Y0m^{3/4}}{Ec}=bNc+\frac{dNc}{dt}\Rightarrow\cdots$ Since every question is related to the above equation, finishing this should allow you to do the rest. Is it possible to go little further for me ??? because I still have some difficulties dealing with derivatives Multiply by mc $\displaystyle\Rightarrow am^{3/4}=b(Ncmc=m)+\frac{d(Ncmc=m)}{dt}\Rightarrow am^{3/4}-bm=\frac{dm}{dt}$ Last edited by dwsmith; January 13th 2011 at 12:16 PM. for d, how can I interpret my equations??? I got up to C. dR/dt = -(a/4M^(1/4))R. 
Solve this simple ordinary differential equation and show that a plot of ln(R(t)/R(0)) vs. the no-dimensinal time at/(4M^(1/4)) should yield a straight line with a slope -1 for any organism regardless of its size. I solved this................... ln(R(t)/R(0)) = -(a/4M^(1/4))t thus yield a straight line with a slope -1 vs the no-dimensinal time at/(4M^(1/4)) dR/dt = -(a/4M^(1/4))R. Solve this simple ordinary differential equation and show that a plot of ln(R(t)/R(0)) vs. the no-dimensinal time at/(4M^(1/4)) should yield a straight line with a slope -1 for any organism regardless of its size. I solved this................... ln(R(t)/R(0)) = -(a/4M^(1/4))t thus yield a straight line with a slope -1 vs the no-dimensinal time at/(4M^(1/4)) What is capital M? mass of a matured organism, oh so since it is all matured it should scale with its size as M^1/4 huh ??
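For part (c), the claim can also be checked numerically. The sketch below is mine, not from the thread, and the values of a and b are made up purely for illustration: it integrates dm/dt = a m^(3/4) - b m with a crude Euler step, forms R = 1 - (m/M)^(1/4), and fits ln(R(t)/R(0)) against the dimensionless time a t/(4 M^(1/4)); the fitted slope comes out at about -1, as the problem predicts.

```python
import numpy as np

a, b = 1.0, 0.25                       # illustrative values only, not from the problem
M = (a / b) ** 4                       # dm/dt = 0  =>  m = M = (a/b)^4
m, t, dt = 1e-3 * M, 0.0, 1e-3
ts, Rs = [], []
for _ in range(200_000):
    m += dt * (a * m ** 0.75 - b * m)  # Euler step of dm/dt = a m^(3/4) - b m
    t += dt
    ts.append(t)
    Rs.append(1.0 - (m / M) ** 0.25)   # R = 1 - (m/M)^(1/4)

tau = a * np.array(ts) / (4.0 * M ** 0.25)                  # dimensionless time a t / (4 M^(1/4))
slope = np.polyfit(tau, np.log(np.array(Rs) / Rs[0]), 1)[0]
print(round(slope, 3))                 # ≈ -1.0, for any choice of a and b (i.e. any organism size)
```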
{"url":"http://mathhelpforum.com/differential-equations/168235-universal-law-growth.html","timestamp":"2014-04-20T19:12:36Z","content_type":null,"content_length":"59396","record_id":"<urn:uuid:71248900-3723-4558-975f-9ccd6a482c67>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
John Wishart Born: 28 November 1898 in Montrose, Scotland Died: 14 July 1956 in Acapulco, Mexico Click the picture above to see a larger version Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index John Wishart's father was John Wishart (born in Montrose, Forfarshire about 1867) who was a 'boot cutter and closer'. His mother was Elizabeth Wishart (born in Edinburgh about 1866). He had an older brother Scott (born about 1897) and a younger brother James (born about 1900). Wishart's family moved from Montrose to Perth in Scotland when John was two years old. He attended Perth Academy and then, in 1916, entered the University of Edinburgh. There he was taught mathematics by E T Whittaker. World War I meant that Wishart's university career was disrupted. He spent two years from 1917 to 1919 in the Black Watch regiment and served in France in 1918. He completed his university course in 1922, graduating with a First Class degree in mathematics and physics. He had taken a teacher training course at Moray House as part of his degree and, after graduating, he moved to Leeds accepting a post as mathematics teacher at West Leeds High School. In 1924, after a recommendation from Whittaker, Wishart was offered a post in University College, London, as assistant to Pearson. Pearson had a project for Wishart to work on and, given that Whittaker had set up his mathematical laboratory in Edinburgh, it was clear why Whittaker's advice on a possible assistant had been sought. Pearson had published his Tables of the Incomplete Gamma Function in 1922 and now he was looking for computational help in his next 'tables' project Tables of the Incomplete Beta Function. Wishart learned a great deal of statistics during his three years with Pearson. He attended Pearson's lectures and learnt how to go about statistical research. After a few months as a Mathematical Demonstrator at Imperial College, Wishart accepted an offer from R A Fisher to be his statistical assistant at Rothamsted. When Yule left his Cambridge lectureship in Statistics in 1931 there was a reorganisation of statistics teaching at Cambridge. A Readership in Statistics was created in the Faculty of Agriculture to teach courses in that Faculty and courses in Mathematics. A separate lectureship in Economic Statistics was also created. Wishart was appointed to the Readership in the Faculty of Agriculture. A laboratory was set up by Wishart at Cambridge for his postgraduate students. Cochran was one of these postgraduates and he described his time studying with Wishart:- In those days he believed in his students keeping office hours. When he assigned me a desk in the Laboratory, he told me that he expected me to be sitting at the desk most of the day when not in class. He instructed me to do three hours computing a day on a table of the 1% level of z to 7 decimal places ... Having anticipated a free and easy life as a graduate student, punctuated of course by periods of esoteric thinking when the spirit moved me, I didn't much like either the office hours or the computing, but I don't think they did me any harm. There were two aspects to Wishart's teaching at Cambridge since he taught both mathematics students and agriculture students. This suited him well since he had both a flair for mathematical statistics and a flair for very practical applications of experimental design. The arrangement did not suit other academics at Cambridge, however, and Wishart had to fight many academic battles. 
The problem that Wishart's position caused at Cambridge was that he was too high powered a statistician for those in Agriculture but the mathematicians were also unhappy to send their students to the Faculty of Agriculture for statistics courses, and they would have much preferred to have statistics completely within Mathematics. If World War II had not come along it is unclear how the problems would have resolved themselves. As it was, Wishart worked in army Intelligence from 1940 to 1942 and then on statistical work for the Admiralty from 1942 to 1946. The problems he had been having at Cambridge before the War made him think long and hard about whether to return, but his love of teaching, more than anything else, took him back. At Cambridge more statisticians were taken on within Mathematics and a Statistical Laboratory was set up within the Mathematics Faculty in 1949. Wishart became Head of the Statistical Laboratory in In [2] Wishart's international connections are summed up:- There are probably few statisticians who have had more friends scattered across the world that had John Wishart. Many of these friendships were made in the course of his work as a teacher of statistical method to practical agriculturists overseas. ... A pioneering visit to Nanking University in 1934 had been followed after the war by visits to Spain in 1947, to the United States in 1949, to India in 1954 and then, in his last year, to Mexico, where he was taking a leading part in the work of the Training Centre in Experimental Design arranged by the United Nations Food and Agricultural Organisation. Some of Wishart's most important publications were in the 1928-32 period before he became so involved with teaching at Cambridge. In 1928 he derived the generalised product-moment distribution which is now named the Wishart distribution. This distribution is described in [1] as:- ... fundamental to multivariate statistical analysis ... and is fully described on pages 154 to 163 of [2]. As well as further papers on the Wishart distribution, he also studied properties of the distribution of the multiple correlation coefficient which Fisher had considered earlier. In addition he wrote many papers on agricultural applications of statistics such as fertiliser trials, sugar beet experiments, crop experimentation and pig nutrition. Wishart was also much involved with the work of the Royal Statistical Society. He was one of the Fellows who formed the organising committee of the Agriculture research section in 1933. In 1945 he became chairman of the Royal Statistical Society's Research Section. He also sat on two committees of the Royal Statistical Society on the Teaching of Statistics: the reports of these committees appearing in the Journal of the Royal Statistical Society in 1948 and 1955. One other service that Wishart performed for statistics was his editorial work for Biometrika. He served as Assistant editor from 1937 and Associate Editor from 1948. Wishart died in a bathing accident in Acapulco, Mexico, which he was visiting as a representative of the United Nations Food and Agriculture Organisation to arrange setting up a research centre to apply statistical techniques in agricultural research. 
Article by: J J O'Connor and E F Robertson. List of References (2 books/articles). Honours awarded to John Wishart: Fellow of the Royal Society of Edinburgh, 1931. JOC/EFR © October 2003, School of Mathematics and Statistics, University of St Andrews, Scotland.
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Wishart.html","timestamp":"2014-04-21T04:40:08Z","content_type":null,"content_length":"17543","record_id":"<urn:uuid:d312af6f-7286-41c3-95d2-1e2e762628d2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
What is a Pip? “PIP” stands for Point In Percentage. More simply though, a pip is what we in the FX would consider a “point” for calculating profits and losses. When trading a mini lot (10k units of currency), each pip is worth roughly one unit of the currency in which your account is denominated. If your account is denominated in USD, for example, each pip (depending on the currency pair) is worth about $1. In a micro lot, or 1k trade, each pip is worth roughly 1/10th the amount it would be worth in a mini lot -- so about $0.10. In all pairs involving the Japanese Yen (JPY), a pip is the 1/100th place -- 2 places to the right of the decimal. In all other currency pairs, a pip is the 1/10,000 the place -- 4 places to the right of the decimal. (Created using FXCM’s Trading Station II Desktop) You’ll see that in the Trading Station the digits for pips are in a larger font. This makes them easier to see. At FXCM, we provide additional transparency through the electronic platform and quote each currency pair with precision to 1/10th of a pip. This fraction of a pip allows price providers to bring spreads down even further as they are not restricted to quoting in full pip increments. This is beneficial to you, the trader, because the spread is a component of your transaction cost. You’ll notice that earlier in this post, we mentioned that the value of a pip for a 10,000 unit trade is roughly equal to 1 unit of your denominated currency (or $1 if you have a USD account). Now, let’s identify what the actual value per pip is. There are two simple methods to determine this. First, in the dealing rates of the FXCM Trading Station II platform, you will find the value per pip of the smallest trade size. In the account above, the minimum trade size is 1k. Therefore, the pip value listed in the advanced dealing rates window is based on a 1k trade size. For the EURUSD, that means every pip is $0.10. So the spread in the image above was 2.6 pips x 0.10 = $0.26 was the transaction cost to get into that trade. How do I know that 1k is the smallest trade size for the account? Simple! When you open a trade, a pop up box appears. In the Amount(K) field, open up the drop down box and what is the smallest figure you see? In the example above, 1 appears meaning it is a 1k minimum trade size for the USD/JPY. For the second method of determining the value per pip, open the “Market Order” pop up box by left clicking on the dealing rate like you are going to place a trade. You’ll notice, that when you change the Amount(K) field, the “Per Pip” value changes accordingly. Therefore, just before you get ready to enter the trade, simply double check that the “per pip” field is what you are comfortable with. If you are not sure what cost per pip you are interested in, no worries. Later on, we will show you how to use support and resistance to help you determine trade size. If you are already familiar with support and resistance, use this three step guide to determine trade size. For those who wish to determine the calculation by hand, follow this method below (if you are not interested in the mathematics involved, then proceed to the next article). First you start with the size of your trade. Micro lots are 1k, so if you want the value of a pip for a micro lot you start with 1,000. If you want the value of a pip for a mini lot, you start with 10,000. You then multiply your trade size by one pip for the pair that you are trading. In this example we are going to calculate the value of a pip for one 10k lot of EUR/USD. 
So since I am using 10k mini-lot, I’m starting with 10,000. I multiply 10,000 by .0001 since 1/10,000th is a pip for all pairs (except JPY pairs). That gets me a value of 1. That will be valued in the “counter currency” (second currency) of the pair that I am trading. In this example, I am trading EUR/USD, so USD is the counter currency of the pair. One pip is worth 1 USD dollar for one 10k lot of EUR/USD. If my FXCM account is based in US Dollars, then I will see $1 of profit or loss on my account for every 1 pip move that the EUR/USD makes in the market. Now, if my FXCM account is based in Euros (EUR), I would have to convert that $1 USD into Euros. To do so, I just divide by the current EUR/USD exchange rate which at the time of writing is 1.3797. I’m dividing here because a Euro is worth more than a USD, so I know my answer should be less than 1. 1 divided by 1.3797 is 0.7248 Euros. So now I know that if I have a Euro based account, and profit or lose one pip on 1 10k lot of EUR/USD, I will earn or lose 0.7248 Euros. Let’s do another example of GBP/JPY. Again we’ll go with a one 10k lot trade. This time a pip is .01 because it is a JPY pair. 10,000 times .01 is 100. Again, that “100” is in terms of the counter currency, so it is 100 Japanese Yen (JPY). Now we need to convert that 100 Yen to the denomination of your account. If you have a USD based account, then you take the 100 Yen and divide it by the USD/JPY spot rate, which at the time of this writing was 105.11. That gets you an answer of $0.95 per pip. comments powered by Disqus
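The pip arithmetic above is easy to check mechanically. Below is a minimal illustrative sketch in Python (it is not FXCM or Trading Station code; the helper name is invented and the example rates 1.3797 and 105.11 are simply the ones quoted in the text): one pip is 0.01 for JPY-quoted pairs and 0.0001 otherwise, the per-pip value in the counter currency is trade size times pip size, and a final conversion rate maps that into the account currency.

def pip_value(trade_size, pair, counter_to_account_rate=1.0):
    """Value of one pip, in the account currency, for `trade_size` units of `pair`.

    `counter_to_account_rate` converts the pair's counter (second) currency into
    the account currency; use 1.0 when they are the same currency.
    """
    base, counter = pair.split("/")
    pip_size = 0.01 if counter == "JPY" else 0.0001   # JPY pairs quote pips at the 2nd decimal
    value_in_counter = trade_size * pip_size          # e.g. 10,000 * 0.0001 = 1 unit of counter ccy
    return value_in_counter * counter_to_account_rate

print(pip_value(10_000, "EUR/USD"))                # 1.0  -> $1 per pip in a USD account
print(pip_value(10_000, "EUR/USD", 1 / 1.3797))    # ~0.7248 EUR per pip in a EUR account
print(pip_value(10_000, "GBP/JPY", 1 / 105.11))    # ~0.95 USD per pip in a USD account

The three calls reproduce the three worked examples in the article.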
{"url":"http://www.investopedia.com/forex/news/dailyfx/what-pip.aspx","timestamp":"2014-04-24T04:29:48Z","content_type":null,"content_length":"77972","record_id":"<urn:uuid:9e2a0aab-ba44-4691-87f8-04062ac28f8e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Wikipedia, the free encyclopedia Splay tree From Wikipedia, the free encyclopedia A splay tree is a self-balancing binary search tree with the additional unusual property that recently accessed elements are quick to access again. It performs basic operations such as insertion, look-up and removal in O(log(n)) amortized time. For many non-uniform sequences of operations, splay trees perform better than other search trees, even when the specific pattern of the sequence is unknown. The splay tree was invented by Daniel Sleator and Robert Tarjan. All normal operations on a binary search tree are combined with one basic operation, called splaying. Splaying the tree for a certain element rearranges the tree so that the element is placed at the root of the tree. One way to do this is to first perform a standard binary tree search for the element in question, and then use tree rotations in a specific fashion to bring the element to the top. Alternatively, a bottom-up algorithm can combine the search and the tree reorganization. Advantages and disadvantages Good performance for a splay tree depends on the fact that it is self-balancing, and indeed self optimising, in that frequently accessed nodes will move nearer to the root where they can be accessed more quickly. This is an advantage for nearly all practical applications, and is particularly useful for implementing caches; however it is important to note that for uniform access, a splay tree's performance will be considerably (although not asymptotically) worse than a somewhat balanced simple binary search tree. Splay trees also have the advantage of being considerably simpler to implement than other self-balancing binary search trees, such as red-black trees or AVL trees, while their average-case performance is just as efficient. Also, splay trees don't need to store any bookkeeping data, thus minimizing memory requirements. However, these other data structures provide worst-case time guarantees, and can be more efficient in practice for uniform access. One worst case issue with the basic splay tree algorithm is that of sequentially accessing all the elements of the tree in the sort order. This leaves the tree completely unbalanced (this takes n accesses- each an O(1) operation). Reaccessing the first item triggers an operation that takes O(n) operations to rebalance the tree before returning the first item. This is a significant delay for that final operation, although the amortised performance over the entire sequence is actually O(1). However, recent research shows that randomly rebalancing the tree can avoid this unbalancing effect and give similar performance to the other self-balancing algorithms. It is possible to create a persistent version of splay trees which allows access to both the previous and new versions after an update. This requires amortized O(log n) space per update. The splay operation When a node x is accessed, a splay operation is performed on x to move it to the root. To perform a splay operation we carry out a sequence of splay steps, each of which moves x closer to the root. As long as x has a grandparent, each particular step depends on two factors: • Whether x is the left or right child of its parent node, p, • Whether p is the left or right child of its parent, g (the grandparent of x). Thus, there are four cases when x has a grandparent. They fall into two types of splay steps. Zig-zag Step: One zig-zag case is when x is the right child of p and p is the left child of g (shown above). 
p is the new left child of x, g is the new right child of x, and the subtrees A, B, C, and D of x, p, and g are rearranged as necessary. The other zig-zag case is the mirror image of this, i.e. when x is the left child of p and p is the right child of g. Note that a zig-zag step is equivalent to doing a rotation on the edge between x and p, then doing a rotation on the edge between p and g. Zig-zig Step: One zig-zig case is when x is the left child of p and p is the left child of g (shown above). p is the new right child of x, g is the new right child of p, and the subtrees A, B, C, and D of x, p, and g are rearranged as necessary. The other zig-zig case is the mirror image of this, i.e. when x is the right child of p and p is the right child of g. Note that zig-zig steps are the only thing that differentiate splay trees from the rotate to root method indroduced by Allen and Munro prior to the introduction of splay trees. Zig Step: There is also a third kind of splay step that is done when x has a parent p but no grandparent. This is called a zig step and is simply a rotation on the edge between x and p. Zig steps exist to deal with the parity issue and will be done only as the last step in a splay operation and only when x has odd depth at the beginning of the operation. By performing a splay operation on the node of interest after every access, we keep recently accessed nodes near the root and keep the tree roughly balanced, so that we achieve the desired amortized time bounds. Performance theorems There are several theorems and conjectures regarding the worst-case runtime for performing a sequence S of m accesses in a splay tree containing n elements. Balance Theorem:^[1] The cost of performing the sequence S is O(m(logn + 1) + nlogn). In other words, splay trees perform as well as static balanced binary search trees on sequences of at least n Static Optimality Theorem:^[1] Let q[i] be the number of times element i is accessed in S. The cost of performing S is $O\left(m+\sum_{i=1}^n q_i\log\frac{m}{q_i}\right)$. In other words, splay trees perform as well as optimum static binary search trees on sequences of at least n accesses. Static Finger Theorem:^[1] Let i[j] be the element accessed in the j^th access of S and let f be any fixed element (the finger). The cost of performing S is $O\left(m+n\log n+\sum_{j=1}^m \log(|i_j-f |+1) \right)$. Working Set Theorem:^[1] Let t(j) be the number of distinct elements accessed between access j and the previous time element i[j] was accessed. The cost of performing S is $O\left(m+n\log n+\sum_{j= 1}^m \log(t(j)+1) \right)$. Dynamic Finger Theorem:^[2]^[3] The cost of performing S is $O\left(m+n\log n+\sum_{j=1}^m \log(|i_{j+1}-i_j|+1) \right)$. Scanning Theorem:^[4] Also known as the Sequential Access Theorem. Accessing the n elements of a splay tree in symmetric order takes Θ(n) time, regardles of the initial structure of the splay tree. The tightest upper bound proven so far is 4.5n.^[5] Dynamic optimality conjecture In addition to the proven performance guarantees for splay trees there is an unproven conjecture of great interest from the original Sleator and Tarjan paper. This conjecture is known as the dynamic optimality conjecture and it basically claims that splay trees perform as well as any other binary search tree algorithm up to a constant factor. 
Dynamic Optimality Conjecture:^[1] Let A be any binary search tree algorithm that accesses an element x by traversing the path from the root to x at a cost of d(x) + 1, and that between accesses can make any rotations in the tree at a cost of 1 per rotation. Let A(S) be the cost for A to perform the sequence S of accesses. Then the cost for a splay tree to perform the same accesses is O( n + A(S)). There is a special case of the dynamic optimality conjecture known as the traversal conjecture that is also unproven. Traversal Conjecture:^[1] Let T[1] and T[2] be two splay trees containing the same elements. Let S be the sequence obtained by visiting the elements in T[2] in preorder (i.e. depth first search order). The total cost of performing the sequence S of acesses on T[1] is O(n). See also • Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Page 478 of section 6.2.3. 1. ^ ^a ^b ^c ^d ^e ^f D.D. Sleator and R.E. Tarjan. Self-Adjusting Binary Search Trees. Journal of the ACM 32:3, pages 652-686, 1985. 2. ^ R. Cole, B. Mishra, J. Schmidt, A. Siegel. On the Dynamic Finger Conjecture for Splay Trees. Part I: Splay Sorting log n-Block Sequences. SIAM Journal on Computing 30, pages 1-43, 2000. 3. ^ R. Cole. On the Dynamic Finger Conjecture for Splay Trees. Part II: The Proof. SIAM Journal on Computing 30, pages 44-85, 2000. 4. ^ R.E. Tarjan. Sequential access in splay trees takes linear time. Combinatorica 5, pages 367-378, 1985. 5. ^ Amr Elmasry. On the sequential access theorem and deque conjecture for splay trees. Theor. Comput. Sci. 314(3), pages 459-466, 2004. External links
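To make the zig, zig-zig and zig-zag steps concrete, here is a compact, illustrative splay-tree sketch in Python. It is not taken from the article or from any reference implementation: it uses the common recursive "splay to root" formulation, supports only insertion and access (no deletion), and favours brevity over efficiency.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def rotate_right(x):      # "zig": the left child rises
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):       # "zag": the right child rises
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    # Move the node holding `key` (or the last node reached while searching
    # for it) to the root using zig, zig-zig and zig-zag steps.
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                          # zig-zig (left-left)
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                        # zig-zag (left-right)
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)    # final zig
    else:
        if root.right is None:
            return root
        if key > root.right.key:                         # zig-zig (right-right)
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                       # zig-zag (right-left)
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)    # final zig

def insert(root, key):
    # Ordinary binary-search-tree insertion followed by a splay of the new key.
    if root is None:
        return Node(key)
    node = root
    while True:
        if key < node.key:
            if node.left is None:
                node.left = Node(key)
                break
            node = node.left
        elif key > node.key:
            if node.right is None:
                node.right = Node(key)
                break
            node = node.right
        else:
            break                      # key already present
    return splay(root, key)

root = None
for k in (5, 2, 8, 1, 9):
    root = insert(root, k)
root = splay(root, 2)                  # an "access": 2 is brought to the root
print(root.key)                        # 2

Every access restructures the tree so the accessed key ends up at the root, which is what yields the amortized O(log n) bounds discussed above.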
{"url":"http://www.cs.waikato.ac.nz/Teaching/COMP317B/Week_6/Splay_tree.html","timestamp":"2014-04-20T00:39:18Z","content_type":null,"content_length":"29448","record_id":"<urn:uuid:ff4e9eac-4bf8-4f91-93d3-631001a1e8e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Whitman, MA Calculus Tutor Find a Whitman, MA Calculus Tutor ...I've helped teacher candidates prepare for the Massachusetts Tests for Education Licensure--the MTEL--in Biology, Chemistry, General Science, and Physics. I've also worked with new teachers to develop their course material and understand the finer points of their science discipline during their ... 23 Subjects: including calculus, chemistry, writing, physics ...While a NASA employee on the Apollo Project, I made extensive use of algebra and calculus in the development of orbital rendezvous techniques. Later on I participated in the design of the space shuttle, and the development of the first GPS operating software. Thus I have an informed perspective regarding both teaching and application of these disciplines. 7 Subjects: including calculus, physics, algebra 1, algebra 2 ...Not only am I familiar with the content, but I have acquired many helpful tricks over the years that I can pass on to help students study for the AP exam as well as the SAT II Chemistry. In college I studied chemical engineering, so I have taken advanced math courses. I have tutored students in algebra I and II and seen improvements of two letter grades.I got an A in Genetics in 9 Subjects: including calculus, chemistry, biology, algebra 1 My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including calculus, English, reading, chemistry ...Even if you aren't shooting for 800, the SAT math exam is definitely one where you can markedly improve your score with even just a limited amount of studying and practice. Fortunately, the SAT math exam covers a specific, well-defined set of topics, and tends to ask the same types of questions ... 44 Subjects: including calculus, English, chemistry, reading Related Whitman, MA Tutors Whitman, MA Accounting Tutors Whitman, MA ACT Tutors Whitman, MA Algebra Tutors Whitman, MA Algebra 2 Tutors Whitman, MA Calculus Tutors Whitman, MA Geometry Tutors Whitman, MA Math Tutors Whitman, MA Prealgebra Tutors Whitman, MA Precalculus Tutors Whitman, MA SAT Tutors Whitman, MA SAT Math Tutors Whitman, MA Science Tutors Whitman, MA Statistics Tutors Whitman, MA Trigonometry Tutors Nearby Cities With calculus Tutor Abington, MA calculus Tutors Bridgewater, MA calculus Tutors Brockton, MA calculus Tutors East Bridgewater calculus Tutors Halifax, MA calculus Tutors Hanover, MA calculus Tutors Hanson, MA calculus Tutors Holbrook, MA calculus Tutors Kingston, MA calculus Tutors Norwell calculus Tutors Pembroke, MA calculus Tutors Randolph, MA calculus Tutors Rockland, MA calculus Tutors South Weymouth calculus Tutors West Bridgewater calculus Tutors
{"url":"http://www.purplemath.com/Whitman_MA_Calculus_tutors.php","timestamp":"2014-04-16T16:36:35Z","content_type":null,"content_length":"24117","record_id":"<urn:uuid:6acca115-d9e7-40ba-8e66-1cf71f72fec7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
November 2 Any frequent reader of will have come across several posts concerning the optimization of code - in particular, the avoidance of loops. Here's another aspect of the same issue. If you have experience programming in other languages besides R, this is probably a no-brainer, but for laymen, like myself, the following example was a total surprise. Basically, every time you redefine the size of an object in R, you are also redefining the allotted memory - and this takes some time. It's not necessarily a lot of time, but if you are having to do it during every iteration of a loop, it can really slow things down. The following example shows three versions of a loop that creates random numbers and stores those numbers in a results object. The first example (Ex. 1) demonstrates the wrong approach, which is to concatenate the results onto the results object ("x") , thereby continually changing the size of x after each loop. The second approach (Ex. 2) is about 150x faster - x is defined as an empty matrix containing NAs, which is gradually filled (by row) during each loop. The third example (Ex. 3) shows another possibility if one does not know what the size of the results from each loop will be. An empty list is created of length equaling the number of loops. The elements of the list are then gradually filled with the loop results. Again, this is at least 150x faster than Ex. 1 (and I'm actually surprised to see that it may even be faster than Ex.2). The following function, color.palette(), is a wrapper for colorRampPalette() and allows some increased flexibility in defining the spacing between main color levels. One defines both the main color levels (as with colorRampPalette) and an optional vector containing the number of color levels that should be put in between at equal distances. The above figure shows the effect on a color scale (see image.scale) containing 5 main colors (blue, cyan, white, yellow, and red). The result of colorRampPalette (upper) produces an equal number of levels between the main colors. By increasing the number of intermediate colors between blue-cyan and yellow-red (lower), the number of color levels in the near white range is reduced. The resulting palette, for example, was better in highlighting the positive and negative values of an Emprical Orthogonal Function (EOF) mode. [Update]: The following approach has serious shortcomings, which I have recently become aware of. In a comparison of gappy EOF approaches Taylor et al. (2013) [pdf] show that this traditional approach is not as accurate as others. Specifically, the approach of DINEOF (Data Interpolating Empirical Orthogonal Functions) proved to be the most accurate. I have outlined the DINEOF algorithm in another post [link]. The following is a function for the calculation of Empirical Orthogonal Functions (EOF). For those coming from a more biologically-oriented background and are familiar with Principal Component Analysis (PCA), the methods are similar. In the climate sciences the method is usually used for the decomposition of a data field into dominant spatial-temporal modes. At the onset, this was strictly an excercise of my own curiosity and I didn't imagine writing this down in any form at all. As someone who has done some modelling work in the past, I'm embarrassed to say that I had never fully grasped how one can gauge the error of a model output without having to do some sort of Monte Carlo simulation whereby the model parameters are repeatedly randomized within a given confidence interval. 
Its relatively easy to imagine that a model containing many parameters, each with an associated error, will tend to propagate these errors throughout. Without getting to far over my head here, I will just say that there are defined methods for calculating the error of a variable if one knows the underlying error of the functions that define them (and I have tried out only a very simple one here!). In the example below, I have three main variables (x, y, and z) and two functions that define the relationships y~x and z~y. The question is, given these functions, what would be the error of a predicted z value given an initial x value? The most general rule seems to be: error(z~x)^2 = error(y~x)^2 + error(z~y)^2 However, correlated errors require additional terms (see Wikipedia: Propagation of uncertainty). The following example does just that by simulating correlated error terms using the MASS package's function mvrnorm().
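As a rough numerical illustration of the quadrature rule quoted above, the sketch below simulates the same kind of chained model in Python/NumPy rather than R (the post's own R examples, including the mvrnorm() simulation, are not reproduced in this excerpt). The slopes, intercepts and error standard deviations are made up, the z~y slope is set to 1 so the simple uncorrelated rule applies directly, and the two error terms are generated independently.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sd_yx, sd_zy = 0.8, 1.3                      # residual errors of y~x and z~y

x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, sd_yx, n)  # y ~ x with independent error
z = 1.0 + 1.0 * y + rng.normal(0, sd_zy, n)  # z ~ y (slope 1) with independent error

z_hat = 1.0 + 1.0 * (2.0 + 0.5 * x)          # predict z from x by chaining the two relationships

print(np.std(z - z_hat))                     # observed error of z ~ x, about 1.53
print(np.sqrt(sd_yx**2 + sd_zy**2))          # propagated: sqrt(0.8^2 + 1.3^2) ~ 1.526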
{"url":"http://menugget.blogspot.com/2011_11_01_archive.html","timestamp":"2014-04-21T04:53:18Z","content_type":null,"content_length":"111243","record_id":"<urn:uuid:543c9629-5bc2-413d-b62f-a85ca4da80e7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of volt unit of electrical potential, potential difference and electromotive force in the metre-kilogram-second system (SI); it is equal to the difference in potential between two points in a conductor carrying one ampere current when the power dissipated between the points is one watt. An equivalent is the potential difference across a resistance of one ohm when one ampere is flowing through it. The volt is named in honour of the 18th-19th-century Italian physicist Alessandro Volta. These units are defined in accordance with Ohm's law, that resistance equals the ratio of potential to current, and the respective units of ohm, volt, and ampere are used universally for expressing electrical quantities. See also electric potential; electromotive force. Learn more about volt with a free trial on Britannica.com.
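A quick worked check of the two equivalent statements in the entry (the numbers are just the defining one-ampere, one-ohm, one-watt case):

V = I R = (1\,\mathrm{A})(1\,\Omega) = 1\,\mathrm{V},
\qquad
V = \frac{P}{I} = \frac{1\,\mathrm{W}}{1\,\mathrm{A}} = 1\,\mathrm{V}.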
{"url":"http://dictionary.reference.com/browse/volt","timestamp":"2014-04-16T22:41:54Z","content_type":null,"content_length":"107008","record_id":"<urn:uuid:ec1e51c5-2f58-45b7-9875-3e6815f0d9c1>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Lucerne, CO SAT Math Tutor Find a Lucerne, CO SAT Math Tutor ...After working in healthcare for a while, I have decided to split my time between helping the body and helping the mind. I love math and science and was at the top of my class in many courses. I tutored in high school and helped my friends get through classes in college, so I have a proven track record. 13 Subjects: including SAT math, geometry, statistics, differential equations ...Following my graduation from UCSD (BS in Physiology and Neuroscience) with top honors of Summa Cum Laude (top 1% with a GPA of 3.948), I gained extensive knowledge in the fields of biology and math. During my undergraduate years in college I mastered math and biology classes receiving more A+ th... 26 Subjects: including SAT math, reading, geometry, biology ...I'd like to generate some extra income to help support their school and other activities by applying my knowledge in math and/or science in helping others interested in and/or struggling in either of these subject areas.I was horribly afraid of giving speeches growing up until I attended Toastmas... 14 Subjects: including SAT math, reading, algebra 1, grammar ...I have a PhD in experimental particle physics and even with a thorough understanding of many of the ideas surrounding and backing Quantum Mechanics, I still find trigonometry and its uses are important traits. Learning about how trigonometry pervades into other areas of mathematics as well as ho... 47 Subjects: including SAT math, chemistry, physics, calculus ...As a teacher I have taught for 4 years, and tutor privately for 3 years prior to my teaching career. During my educational career I have worked with a wide variety of mathematics classes ranging from 7th Grade Math to Pre-Calculus, Secondary English electives, and 6-12th grade physical education... 11 Subjects: including SAT math, geometry, algebra 1, algebra 2 Related Lucerne, CO Tutors Lucerne, CO Accounting Tutors Lucerne, CO ACT Tutors Lucerne, CO Algebra Tutors Lucerne, CO Algebra 2 Tutors Lucerne, CO Calculus Tutors Lucerne, CO Geometry Tutors Lucerne, CO Math Tutors Lucerne, CO Prealgebra Tutors Lucerne, CO Precalculus Tutors Lucerne, CO SAT Tutors Lucerne, CO SAT Math Tutors Lucerne, CO Science Tutors Lucerne, CO Statistics Tutors Lucerne, CO Trigonometry Tutors
{"url":"http://www.purplemath.com/Lucerne_CO_SAT_Math_tutors.php","timestamp":"2014-04-16T10:14:21Z","content_type":null,"content_length":"23992","record_id":"<urn:uuid:c7d62df6-03d5-4b0d-9320-48af9bac76cb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems with a simple dependency algorithm up vote 9 down vote favorite In my webapp, we have many fields that sum up other fields, and those fields sum up more fields. I know that this is a directed acyclic graph. When the page loads, I calculate values for all of the fields. What I'm really trying to do is to convert my DAG into a one-dimensional list which would contain an efficient order to calculate the fields in. For example: A = B + D, D = B + C, B = C + E Efficient calculation order: E -> C -> B -> D -> A Right now my algorithm just does simple inserts into a List iteratively, but I've run into some situations where that starts to break. I'm thinking what would be needed instead would be to work out all the dependencies into a tree structure, and from there convert that into the one dimensional form? Is there a simple algorithm for converting such a tree into an efficient ordering? add comment 2 Answers active oldest votes Are you looking for topological sort? This imposes an ordering (a sequence or list) on a DAG. It's used by, for example, spreadsheets, to figure out dependencies between up vote 13 down vote cells for calculations. Thanks very much, this is exactly the term that I was after. – Coxy Jul 28 '09 at 8:03 add comment What you want is a depth-first search. function ExamineField(Field F) if (F.already_in_list) foreach C child of F call ExamineField(C) Then just call ExamineField() on each field in turn, and the list will be populated in an optimal ordering according to your spec. Note that if the fields are cyclic (that is, you have something like A = B + C, B = A + D) then the algorithm must be modified so that it doesn't go into an endless loop. For your example, the calls would go: up vote 4 down vote ExamineField(C) (already in list, nothing happens) (already in list, nothing happens) (already in list, nothing happens) (already in list, nothing happens) (already in list, nothing happens) (already in list, nothing happens) And the list would end up C, E, B, D, A. Thanks very much for the example! That's exactly what I wanted to do, although I ended up going with the iterative algorithm. – Coxy Jul 28 '09 at 8:04 add comment Not the answer you're looking for? Browse other questions tagged algorithm sorting tree dependencies directed-acyclic-graphs or ask your own question.
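For reference, here is a runnable Python rendering of the depth-first idea in the second answer. It is only an illustration, not the asker's web-app code: the Field class is invented here, the example formulas A = B + D, D = B + C, B = C + E are the ones from the question, and cyclic dependencies are not handled, as that answer itself warns.

class Field:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # the fields this field's value depends on

def calculation_order(fields):
    """Return the fields so that each one appears after everything it
    depends on (a topological sort via depth-first search)."""
    order, seen = [], set()

    def visit(field):
        if field.name in seen:
            return
        seen.add(field.name)
        for child in field.children:
            visit(child)
        order.append(field)

    for f in fields:
        visit(f)
    return order

# Example from the question:  A = B + D,  D = B + C,  B = C + E
e, c = Field("E"), Field("C")
b = Field("B", [c, e])
d = Field("D", [b, c])
a = Field("A", [b, d])

print([f.name for f in calculation_order([a, b, c, d, e])])
# ['C', 'E', 'B', 'D', 'A'] -- one valid ordering; C and E may come in either order
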
{"url":"http://stackoverflow.com/questions/1192200/problems-with-a-simple-dependency-algorithm","timestamp":"2014-04-18T04:07:15Z","content_type":null,"content_length":"70354","record_id":"<urn:uuid:89ef222a-97f5-48e4-affe-67479bcc7996>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Stepping down from an advanced calculator If you have a calculator that won't track order of operations for you, and I have one that let's you type it out on the screen to confirm it's correct, I would just go with the latter and restrict myself to only using it for basic calculations. As for your guy, I would start with .04, raise it to the power of 4, multiply by pi, divide by 2, hit the "1/x" key, then multiply by all the stuff in the numerator consecutively. The thing I would be worried about when entering this expression into the calculator is if I try to do divide by, then enter .04, then raise to the 4th power, that it would divide by .04 before raising everything I have so far to the 4th power (some calculators will recognize a Pemdas violation I believe, but some won't).
{"url":"http://www.physicsforums.com/showthread.php?s=e49d27fbf4e036357e1bcd0950eb9cdb&p=4525210","timestamp":"2014-04-19T22:47:44Z","content_type":null,"content_length":"28163","record_id":"<urn:uuid:aeb4e26c-29be-4e1a-ba8b-7a718ad6fcda>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Items where Author is " Number of items: 10. Levinson, Ralph and Kent, Phillip and Pratt, David and Kapadia, Ramesh and Yogui, Cristina (2012) Risk-based decision making in a scientific issue : a study of teachers discussing a dilemma through a microworld. Science Education, 96 (2). pp. 212-233. ISSN 0036-8326 Levinson, Ralph and Kent, Phillip and Pratt, David and Kapadia, Ramesh and Yogui, Cristina (2012) Risk-based decision-making in a scientific issue: a study of teachers discussing a dilemma through a microworld. Science Education, 96 (2). pp. 212-233. ISSN 0036-8326 Pratt, David and Levinson, Ralph and Kent, Phillip and Yogui, Cristina and Kapadia, Ramesh (2012) A pedagogic appraisal of the Priority Heuristic. International Journal on Mathematics Education (ZDM), 44 (7). pp. 927-940. ISSN 1863-9690 Levinson, Ralph and Kent, Phillip and Pratt, David and Kapadia, Ramesh and Yogui, Cristina (2011) Developing a pedagogy of risk in socio-scientific issues. In: Authenticity in biology education. UNSPECIFIED, University of Minho, Braga, Portugal, pp. 177-186. ISBN 9789728952198 Pratt, Dave and Ainley, Janet and Kent, Phillip and Levinson, Ralph and Yogui, Cristina and Kapadia, Ramesh (2011) Role of context in risk-based reasoning. Mathematical Thinking and Learning, 13 (4). pp. 322-345. ISSN 1098-6065 (print); 1532-7833 (electronic) Kapadia, Ramesh and Kent, Phillip and Levinson, Ralph and Pratt, David and Yogui, Cristina (2010) Promoting a cross-curricular pedagogy of risk in mathematics and science classrooms. In: Proceedings of the British Society for Research into Learning Mathematics. BSRLM. Borovcnik, Manfred and Kapadia, Ramesh (2010) Research and developments in probability education internationally. In: Proceedings of the British Society for Research into Learning Mathematics. BSRLM. Kapadia, Ramesh (2009) Chance encounters - 20 years later: fundamental ideas in teaching probability at school level. International Electronic Journal of Mathematics Education, 4 (3). pp. 371-386. ISSN 1306-3030 Borovcnik, Manfred (ed.) and Kapadia, Ramesh (2009) Research and developments in probability education. International Electronic Journal of Mathematics Education, 4 (3), special issue. International Electronic Journal of Mathematics Education, 4 (3). ISSN 1306-3030 Pratt, Dave and Kapadia, Ramesh (2009) Shaping the experience of young and naïve probabilists. International Electronic Journal of Mathematics Education, 4 (3). pp. 323-338. ISSN 1306-3030
{"url":"http://eprints.ioe.ac.uk/view/creators/Kapadia=3ARamesh=3A=3A.default.html","timestamp":"2014-04-18T20:51:45Z","content_type":null,"content_length":"13016","record_id":"<urn:uuid:e23d5943-0d72-4b4e-974d-5b5c6435b5a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Lathrop, CA Math Tutor Find a Lathrop, CA Math Tutor ...I find this dual approach to be very effective in helping people of any age to meet their goals in regards to test preparation. **Specifically in regard to the ASVAB - I scored at 95th percentile and above when I took the exam my senior year of high school. I have an excellent command of the En... 42 Subjects: including algebra 1, algebra 2, ACT Math, geometry ...To me, being a teacher meant having fun while holding students, and myself accountable, thus allowing us the opportunity to celebrate improved learning, grades, and scores. After retirement, for the past four years, I have been tutoring students who range in grade from kindergarten through grade ten. In addition, I have tutored elementary students in reading and writing. 7 Subjects: including algebra 1, geometry, prealgebra, reading ...I provide extra practice and drill problems. I have substantial experience tutoring Calculus (including AP, AB/BC) and Multivariate Calculus. I have taught Calculus II at San Jose St. 15 Subjects: including prealgebra, SQL, differential equations, algebra 1 I am an MBA graduate with a bachelor's degree in business as well. I can teach accounting and finance courses, but above all I can teach mathematics more effectively. I try to be at the level of student so that I can make him understand the matter easily. 8 Subjects: including geometry, business, finance, linear algebra I have been teaching 4th - 8th grade math in a private school for the past ten years. I have been instrumental in developing our math curriculum, which is aligned to the Common Core State Standards. My students receive strong foundational math skills in addition to opportunities to develop critical thinking and problem solving. 3 Subjects: including algebra 1, prealgebra, elementary math Related Lathrop, CA Tutors Lathrop, CA Accounting Tutors Lathrop, CA ACT Tutors Lathrop, CA Algebra Tutors Lathrop, CA Algebra 2 Tutors Lathrop, CA Calculus Tutors Lathrop, CA Geometry Tutors Lathrop, CA Math Tutors Lathrop, CA Prealgebra Tutors Lathrop, CA Precalculus Tutors Lathrop, CA SAT Tutors Lathrop, CA SAT Math Tutors Lathrop, CA Science Tutors Lathrop, CA Statistics Tutors Lathrop, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/lathrop_ca_math_tutors.php","timestamp":"2014-04-19T14:46:55Z","content_type":null,"content_length":"23685","record_id":"<urn:uuid:1a79ab74-d51d-4774-b0a4-3f0f6898236f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
L'Hopitals Rule September 27th 2009, 08:16 AM L'Hopitals Rule here's the problem: Find the limit of lim (e^t)-1 / t^3 I thought to use L'Hopital's Rule 3 times to get lim e^t / 6 Then it would no longer be indeterminate (0/0) and I would have 1/6 as my answer. But my book says the answer is infinity. What did I do wrong? September 27th 2009, 08:22 AM here's the problem: Find the limit of lim (e^t)-1 / t^3 I thought to use L'Hopital's Rule 3 times to get lim e^t / 6 Then it would no longer be indeterminate (0/0) and I would have 1/6 as my answer. But my book says the answer is infinity. What did I do wrong? You need to check the hypothesis each time you use L.H's rule. After one application you get $\lim_{t \to 0}\frac{e^{t}-1}{t^3}=\lim_{t \to 0}\frac{e^{t}}{3t^2}$ This limit is not of the form $\frac{0}{0}$ or $\frac{\infty}{\infty}$ so we CANNOT use L.H's rule. Now if we take the limit the numerator goes to 1 and the denominator goes to zero from both sides so $\lim_{t \to 0}\frac{e^{t}-1}{t^3}=\lim_{t \to 0}\frac{e^{t}}{3t^2}=\infty$ September 27th 2009, 08:30 AM wait I thought dividing by zero doesn't exist. you're saying 1/0 is infinity? September 27th 2009, 08:31 AM Prove It September 27th 2009, 08:35 AM okay I graphed it on my calculator so now it makes sense. is it always a good idea to check like that? September 27th 2009, 08:37 AM Prove It September 27th 2009, 08:40 AM thank you both for your help
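For anyone who wants to confirm the thread's conclusion symbolically, a short SymPy check (not part of the original exchange) gives +infinity from both sides, since after one application of L'Hopital's rule the numerator tends to 1 while 3t^2 tends to 0 through positive values:

import sympy as sp

t = sp.symbols('t')
print(sp.limit((sp.exp(t) - 1) / t**3, t, 0, '+'))   # oo
print(sp.limit((sp.exp(t) - 1) / t**3, t, 0, '-'))   # oo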
{"url":"http://mathhelpforum.com/calculus/104570-lhopitals-rule-print.html","timestamp":"2014-04-20T20:07:10Z","content_type":null,"content_length":"8423","record_id":"<urn:uuid:4a144d94-d887-42b1-ae11-f49f963ca002>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Three Issues in Sample Size Estimates for Multilevel Models November 30, 2012 By Karen Grace-Martin (This article was originally published at The Analysis Factor, and syndicated at StatsBlogs.) If you’ve ever worked with multilevel models, you know that they are an extension of linear models. For a researcher learning them, this is both good and bad news. The good side is that many of the concepts, calculations, and results are familiar. The down side of the extension is that everything is more complicated in multilevel models. This includes power and sample size calculations. If you’re not familiar with them, multilevel models are required when data are clustered. The basic idea is that each observation in the sample is not independent–those from the same cluster are associated, while observations from different clusters are not. There are many designs with multiple observations in a cluster. Repeated measures data have multiple observations from the same subject. Randomized block studies have multiple plant measurements nested within a farm. An evaluation may have social workers clustered within an agency. Because of the clustering, there are a few issues that come up when conducting sample size calculations for multilevel models that don’t usually come up when running calculations for simpler models. Issue 1: Choosing an Effect The first step in any sample size calculation is always to choose a hypothesis test. Any model tests many effects–each main effect and interaction in an ANOVA is a separate hypothesis test. Although the point of some multilevel studies is to test random effects, usually in multilevel models the effect of interest is a fixed effect–the overall regression coefficients or mean differences. Let’s use the example of testing the mean difference between an intervention group and a control group for our social workers. Issue 2: Sample sizes at each level Another issue is that there are multiple sample sizes. In planning this kind of study, you need to select a sample size at each level: how many social workers do you need per agency, and how many An overall sample of 300 workers will have different implications for power if it is made up of 5 workers each at 60 agencies or 20 workers each at 15 agencies. As a general rule, the sample size that matters most is the sample size at the level the effect is measured. For example, if we can randomly assign each social worker to one of the intervention groups, so the effect of interest is at the social worker level, the most important sample size is the overall number of social workers in the sample–the 300. It doesn’t matter much how many agencies they came from. However, depending on the nature of the intervention, there are often design and practical issues with assigning people from the same agency to different conditions. From a design perspective, it may be impossible to assign people from the same agency to different conditions if they will influence each other. From a practical perspective, it may be necessary to apply the condition to the entire group at once (as in a training). In either case, it may be necessary to assign groups at the agency level, making our effect of interest, group comparison, at the agency level. This means that the number of agencies has more of an effect of the power of this test than the number of workers per agency. So having 60 agencies with only 5 people each will give you more power than 20 agencies, even if the total number of people in the sample are the same. 
The difference can have large time and cost implications. In many studies, adding more social workers per agency has a marginal cost to the time and budget. The big cost is recruiting and administering the training for each agency. Issue 3: Estimate more parameters The fourth step in any sample size calculation is to obtain reasonably accurate measures of the other parameters that are used in the statistical test. This always includes standard deviation, but can also include others, like the correlation among multiple predictors. These estimates need to come from previous research or a pilot study. In multilevel models, you need to also estimate the Intra-Class Correlation, or ICC. The ICC is a measure of how correlated observations are within a cluster. You can think of it as a measure of how much non-unique information there is in each observation. If the social workers at each agency respond in similar ways (high ICC), adding another worker from an agency doesn’t add a lot of new information about the effect you’re testing. On the other hand, if the clustering isn’t having a big effect on responses, so workers from the same agency aren’t very similar, then adding more workers to your sample from a single agency has a bigger impact on power. So although there are more pieces of information to include, the steps and the ways of thinking about the issues are exactly the same as they are in any sample size estimate. If you’d like to learn more about power and sample size estimates, take a look at our online workshop: Calculating Power and Sample Size. We’ll go over the logic, the info you need, where to get it, how to do the steps, and how to use power software to get good estimates. We’ll also go over what these estimates really tell you, and what they don’t. Please comment on the article here: The Analysis Factor
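One way to see the interplay between cluster count, cluster size and the ICC is through the standard design-effect approximation deff = 1 + (m - 1) x ICC. This formula is not from the post itself, just the usual back-of-the-envelope tool for cluster-assigned designs, and the ICC of 0.10 below is invented purely for illustration.

def effective_n(n_clusters, per_cluster, icc):
    """Approximate effective sample size for a design assigned at the cluster level."""
    n_total = n_clusters * per_cluster
    deff = 1 + (per_cluster - 1) * icc        # variance inflation from clustering
    return n_total / deff

icc = 0.10
print(round(effective_n(60, 5, icc)))    # 300 workers across 60 agencies -> ~214 effective
print(round(effective_n(15, 20, icc)))   # 300 workers across 15 agencies -> ~103 effective

With the same 300 workers, the design with more agencies behaves like a substantially larger simple random sample, which is the point made in the text.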
{"url":"http://www.statsblogs.com/2012/11/30/three-issues-in-sample-size-estimates-for-multilevel-models/","timestamp":"2014-04-21T12:19:41Z","content_type":null,"content_length":"41009","record_id":"<urn:uuid:69cb5d44-86f2-48de-b189-b6b573bbf472>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Roller Coasters 1. ______ hill. The highest hill in a roller coaster. It gives it potential energy. 3. _________ G force. A force greater than the normal force of gravity in which a person feels heavier than normal. 4. __________ induction motor. A type of motor used in roller coasters to move cars up a hill by use of electromagnets. 7. The change in level from horizontal to pitched on race tracks to reduce centrifugal inertia or outward force 9. ___________ acceleration. The acceleration of an object as it goes around a curve. [equals velocity squared divided by the length of the radius). 10. Newton's ____________ Law of Motion. Force equals mass times acceleration 14. A change in velocity over time or a change in direction 15. __________ velocity. The total distance a coaster travels divided by the total time it takes to travel that distance. 18. Free _____ diagram. A vector drawing which shows the forces on an object and their resolution into a final resultant force. 19. The inward directed force which keeps objects moving in a circle (example, a ball on the end of string is kept in a circle by the string) 20. Distance divided by time 21. A device which measures angles and is used to determine the height or distance of remote objects.
{"url":"http://www.armoredpenguin.com/crossword/Data/2011.05/2304/23044113.375.html","timestamp":"2014-04-18T23:28:25Z","content_type":null,"content_length":"69775","record_id":"<urn:uuid:d61813d7-d404-450e-aaaa-db1bf50853c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
A company bought for its 7 offices 2 computers of brand N Question Stats: 80% / 20% (05:16), based on 5 sessions. Hi, there. I'm happy to help with this. This problem has to do with combinations. Here's the general idea: if you have a set of n elements, and you are going to choose r of them (r < n), then the number of combinations of size r one could choose from this total set of n is: # of combinations = nCr = (n!)/[(r!)((n-r)!)] where n! is the factorial symbol, which means the product of every integer from n down to 1. BTW, nCr is read "n choose r." In this problem, let's consider first the three computers of brand M. How many ways can three computers be distributed to seven offices? # of combinations = 7C3 = (7!)/[(3!)(4!)] = (7*6*5*4*3*2*1)/[(3*2*1)(4*3*2*1)] = (7*6*5)/(3*2*1) = (7*6*5)/(6) = 7*5 = 35 There are 35 different ways to distribute three computers to 7 offices. (The massive amount of cancelling that occurred there is very much typical of what happens in the nCr formula.) Once we have distributed those three M computers, we have to distribute 2 N computers to the remaining four offices. How many ways can two computers be distributed to four offices? # of combinations = 4C2 = (4!)/[(2!)(2!)] = (4*3*2*1)/[(2*1)(2*1)] = (4*3)/(2*1) = 12/2 = 6 For each of the 35 configurations of distributing the M computers, we have 6 ways of distributing the N computers to the remaining offices. Thus, the total number of configurations is 35*6 = 210. Answer choice = Notice, we would get the same answer if we distributed the N computers first. Number of ways to distribute 2 N computers to seven offices = 7C2 = (7!)/[(2!)(5!)] = (7*6)/(2*1) = 21 Number of ways to distribute the 3 M computers to the remaining five offices = 5C3 = (5!)/[(3!)(2!)] = (5*4)/(2*1) = 20/2 = 10 Product of ways = 21*10 = 210. Same answer. Here's a blog article I wrote about this topic, with some practice questions. http://magoosh.com/gmat/2012/gmat-permu ... binations/ Does this make sense? Please let me know if you have any questions on what I've said. Mike McGarry Magoosh Test Prep
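The counting can be verified in one line with Python's math.comb (an illustrative check, not part of the original reply):

from math import comb

print(comb(7, 3) * comb(4, 2))   # 35 * 6  = 210  (brand M placed first)
print(comb(7, 2) * comb(5, 3))   # 21 * 10 = 210  (brand N placed first)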
{"url":"http://gmatclub.com/forum/a-company-bought-for-its-7-offices-2-computers-of-brand-n-126208.html?fl=similar","timestamp":"2014-04-17T01:13:17Z","content_type":null,"content_length":"182939","record_id":"<urn:uuid:7e56a818-b31f-434f-be9f-a27b310ea1fb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Tiger—A Fast New Hash Function "... We expand a previous result of Dean [Dea99] to provide a second preimage attack on all n-bit iterated hash functions with Damgård-Merkle strengthening and n-bit intermediate states, allowing a second preimage to be found for a 2 k-message-block message with about k × 2 n/2+1 +2 n−k+1 work. Using RI ..." Cited by 15 (3 self) Add to MetaCart We expand a previous result of Dean [Dea99] to provide a second preimage attack on all n-bit iterated hash functions with Damgård-Merkle strengthening and n-bit intermediate states, allowing a second preimage to be found for a 2 k-message-block message with about k × 2 n/2+1 +2 n−k+1 work. Using RIPEMD-160 as an example, our attack can find a second preimage for a 2^60 byte message in about 2^106 work, rather than the previously expected 2^160 work. We also provide slightly cheaper ways to find multicollisions than the method of Joux [Jou04]. Both of these results are based on expandable messages–patterns for producing messages of varying length, which all collide on the intermediate hash result immediately after processing the message. We provide an algorithm for finding expandable messages for any n-bit hash function built using the Damgård-Merkle construction, which requires only a small multiple of the work done to find a single collision in the hash function.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3241656","timestamp":"2014-04-21T08:41:50Z","content_type":null,"content_length":"12343","record_id":"<urn:uuid:84b7eadd-0180-4f79-b5c0-00e0b2635bb9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Practice Surface Area and Volume of Pyramids What if you wanted to know the volume of an Egyptian Pyramid? The Khafre Pyramid is the second largest pyramid of the Ancient Egyptian Pyramids in Giza. It is a square pyramid with a base edge of 706 feet and an original height of 407.5 feet. What was the original volume of the Khafre Pyramid? After completing this Concept, you'll be able to answer this question. Watch This: CK-12 Foundation: Chapter11PyramidsA; Brightstorm: Surface Area of Pyramids; Brightstorm: Volume of Pyramids. A pyramid has one base and all the lateral faces meet at a common vertex. The edges between the lateral faces are lateral edges. The edges between the base and the lateral faces are called base edges. If we were to draw the height of the pyramid to the right, it would be off to the left side. When a pyramid has a height that is directly in the center of the base, the pyramid is said to be regular. These pyramids have a regular polygon as the base. All regular pyramids also have a slant height that is the height of a lateral face. Because of the nature of regular pyramids, all slant heights are congruent. A non-regular pyramid does not have a slant height. Surface Area: Using the slant height, which is usually labeled $l$, the area of each triangular face is $A=\frac{1}{2}bl$. Surface Area of a Regular Pyramid: If $B$ is the area of the base, $P$ is the perimeter of the base, and $l$ is the slant height, then $SA=B+\frac{1}{2}Pl$. If you ever forget this formula, use the net. Each triangular face is congruent, plus the area of the base. This way, you do not have to remember a formula, just a process, which is the same as finding the area of a prism. Recall that the volume of a prism is $Bh$, where $B$ is the area of the base. Investigation: Finding the Volume of a Pyramid. Tools needed: pencil, paper, scissors, tape, ruler, dry rice or sand. 1. Make an open net (omit one base) of a cube, with 2 inch sides. 2. Cut out the net and tape up the sides to form an open cube. 3. Make an open net (no base) of a square pyramid, with lateral edges of 2.45 inches and base edges of 2 inches. This will make the overall height 2 inches. 4. Cut out the net and tape up the sides to form an open pyramid. 5. Fill the pyramid with dry rice. Then, dump the rice into the open cube. How many times do you have to repeat this to fill the cube? Volume of a Pyramid: If $B$ is the area of the base and $h$ is the height, then $V=\frac{1}{3}Bh$. The investigation showed us that you would need to repeat this process three times to fill the cube. This means that the pyramid is one-third the volume of a prism with the same base. Example A: Find the slant height of the square pyramid. Notice that the slant height is the hypotenuse of a right triangle formed by the height and half the base length. Use the Pythagorean Theorem: $8^2+24^2=l^2$, so $l^2=64+576=640$ and $l=\sqrt{640}=8\sqrt{10}$. Example B: Find the surface area of the pyramid from Example A. The surface area of the four triangular faces are $4\left(\frac{1}{2}bl\right)=2(16)\left(8\sqrt{10}\right)=256\sqrt{10}$, the area of the base is $16^2=256$, and the total surface area is $256\sqrt{10}+256 \approx 1065.54$. Example C: Find the volume of the pyramid. $V=\frac{1}{3}(12^2)(12)=576 \ units^3$. Watch this video for help with the Examples above: CK-12 Foundation: Chapter11PyramidsB. Concept Problem Revisited: The original volume of the pyramid is $\frac{1}{3}(706^2)(407.5)\approx 67,704,223.33 \ ft^3$. A pyramid is a solid with one base and lateral faces that meet at a common vertex. The edges between the lateral faces are lateral edges. The edges between the base and the lateral faces are base edges. A regular pyramid is a pyramid where the base is a regular polygon.
All regular pyramids also have a slant height, which is the height of a lateral face. Surface area is a two-dimensional measurement that is the total area of all surfaces that bound a solid. Volume is a three-dimensional measurement that is a measure of how much three-dimensional space a solid occupies. Guided Practice 1. Find the area of the regular triangular pyramid. 2. If the lateral surface area of a square pyramid is $72 \ ft^2$ and the base edges are equal to the slant height, find the length of the base edges. 3. Find the area of the regular hexagonal pyramid below. 4. Find the volume of the pyramid. 5. Find the volume of the pyramid. 6. A rectangular pyramid has a base area of $56 \ cm^2$ and a volume of $224 \ cm^3$. What is the height of the pyramid? Answers: 1. The area of the base is $A=\frac{1}{4}s^2\sqrt{3}$, so $B=\frac{1}{4}(8^2)\sqrt{3}=16\sqrt{3}$. Then $SA=16\sqrt{3}+\frac{1}{2}(24)(18)=16\sqrt{3}+216 \approx 243.71$. 2. In the formula for surface area, the lateral surface area is $\frac{1}{2}Pl$, or $\frac{1}{2}nbl$. We know that $n = 4$ and $b = l$, so we can solve for $b$: $\frac{1}{2}nbl = 72 \ ft^2$, so $\frac{1}{2}(4)b^2 = 72$, $2b^2 = 72$, $b^2 = 36$, and $b = 6$. Therefore, the base edges are all 6 units and the slant height is also 6 units. 3. To find the area of the base, we need to find the apothem. If the base edges are 10 units, then the apothem is $5\sqrt{3}$, so the area of the base is $\frac{1}{2}asn = \frac{1}{2}\left(5\sqrt{3}\right)(10)(6)=150\sqrt{3}$. Then $SA = 150\sqrt{3}+\frac{1}{2}(6)(10)(22) = 150\sqrt{3}+660 \approx 919.81 \ units^2$. 4. In this example, we are given the slant height. For volume, we need the height, so we need to use the Pythagorean Theorem to find it. Using the height, the volume is $\frac{1}{3}(14^2)(24)=1568 \ units^3$. 5. The base of this pyramid is a right triangle. So, the area of the base is $\frac{1}{2}(14)(8)=56 \ units^2$ and $V=\frac{1}{3}(56)(17) \approx 317.33 \ units^3$. 6. The formula for the volume of a pyramid works for any pyramid, as long as you can find the area of the base: $224=56h$, so $h = 4$. Fill in the blanks about the diagram below. 1. $x$ 2. The slant height is ________. 3. $y$ 4. The height is ________. 5. The base is _______. 6. The base edge is ________. Find the area of a lateral face and the volume of the regular pyramid. Leave your answer in simplest radical form. Find the surface area and volume of the regular pyramids. Round your answers to 2 decimal places. 11. A regular tetrahedron has four equilateral triangles as its faces. Find the surface area of a regular tetrahedron with edge length of 6 units. 12. Using the formula for the area of an equilateral triangle, what is the surface area of a regular tetrahedron, with edge length $s$? For questions 13-15 consider a square with diagonal length $10\sqrt{2} \ in$. 13. What is the length of a side of the square? 14. If this square is the base of a right pyramid with height 12, what is the slant height of the pyramid? 15. What is the surface area of the pyramid? A regular tetrahedron has four equilateral triangles as its faces. Use the diagram to answer questions 16-19. 16. What is the area of the base of this regular tetrahedron? 17. What is the height of this figure? Be careful! 18. Find the volume. Leave your answer in simplest radical form. 19. Challenge If the sides are length $s$ A regular octahedron has eight equilateral triangles as its faces. Use the diagram to answer questions 20-22. 20. Describe how you would find the volume of this figure. 21. Find the volume. Leave your answer in simplest radical form. 22. Challenge If the sides are length $s$
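The square-pyramid formulas above are easy to sanity-check numerically. The short Python sketch below is illustrative only (it is not part of the CK-12 lesson); it reproduces Examples A and B, Example C, and the Khafre "Concept Problem" figure.

from math import sqrt

def square_pyramid(base_edge, height):
    slant = sqrt(height**2 + (base_edge / 2) ** 2)   # Pythagorean Theorem
    B = base_edge ** 2                               # area of the square base
    SA = B + 0.5 * (4 * base_edge) * slant           # SA = B + (1/2) P l
    V = B * height / 3                               # V = (1/3) B h
    return slant, SA, V

print(square_pyramid(16, 24))         # slant = 8*sqrt(10) ~ 25.30, SA ~ 1065.54 (Examples A and B)
print(square_pyramid(12, 12)[2])      # 576.0 (Example C)
print(square_pyramid(706, 407.5)[2])  # ~67,704,223.3 ft^3 (Khafre Pyramid)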
{"url":"http://www.ck12.org/book/CK-12-Geometry-Concepts/r2/section/11.5/","timestamp":"2014-04-19T15:43:10Z","content_type":null,"content_length":"172410","record_id":"<urn:uuid:60566849-a7b1-4983-b5f5-9cdee73bfc72>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Haldane, J.B.S. 1964. A defense of beanbag genetics. Perspectives in Biology and Medicine 7: 343-359. Mayr, E. 1959. Where Are We? Cold Spring Harbor Symp. Quant. Biol. 24: 1-14. Introduction for students This exercise illustrates the phenomenon of sampling error, which is the mechanism whereby genetic drift changes allele frequencies in populations. In this example a haploid model is used for simplicity. This exercise is designed to accompany an in-class discussion of genetic drift.
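A short simulation makes the idea of sampling error concrete. The Python sketch below is an illustrative add-on, not part of the original exercise; the function name and the parameter values are hypothetical. It follows a single allele frequency in a haploid population of fixed size: each generation is formed by random sampling from the previous one, so the frequency wanders purely by chance, and it wanders much more in small populations.

```python
import random

def drift(freq, pop_size, generations):
    """Track an allele frequency in a haploid population under pure sampling error."""
    trajectory = [freq]
    for _ in range(generations):
        # Each of the pop_size offspring independently inherits the allele
        # with probability equal to the current frequency.
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        trajectory.append(freq)
    return trajectory

print(drift(0.5, 20, 10))     # small population: large random swings
print(drift(0.5, 2000, 10))   # large population: frequency stays near 0.5
```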
{"url":"http://www.faculty.virginia.edu/evolutionlabs/genetic_drift_exercise.html","timestamp":"2014-04-17T08:04:04Z","content_type":null,"content_length":"6348","record_id":"<urn:uuid:b25d40a8-afb3-41bf-bf39-4308c25115d0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Zeno's paradoxes From Wikipedia, the free encyclopedia Zeno's paradoxes are a set of philosophical problems generally thought to have been devised by Greek philosopher Zeno of Elea (ca. 490–430 BC) to support Parmenides's doctrine that contrary to the evidence of one's senses, the belief in plurality and change is mistaken, and in particular that motion is nothing but an illusion. It is usually assumed, based on Plato's Parmenides (128a-d), that Zeno took on the project of creating these paradoxes because other philosophers had created paradoxes against Parmenides's view. Thus Plato has Zeno say the purpose of the paradoxes "is to show that their hypothesis that existences are many, if properly followed up, leads to still more absurd results than the hypothesis that they are one." (Parmenides 128d). Plato has Socrates claim that Zeno and Parmenides were essentially arguing exactly the same point (Parmenides 128a-b). Some of Zeno's nine surviving paradoxes (preserved in Aristotle's Physics^1^2 and Simplicius's commentary thereon) are essentially equivalent to one another. Aristotle offered a refutation of some of them.^1 Three of the strongest and most famous—that of Achilles and the tortoise, the Dichotomy argument, and that of an arrow in flight—are presented in detail below. Zeno's arguments are perhaps the first examples of a method of proof called reductio ad absurdum also known as proof by contradiction. They are also credited as a source of the dialectic method used by Socrates.^3 Some mathematicians and historians, such as Carl Boyer, hold that Zeno's paradoxes are simply mathematical problems, for which modern calculus provides a mathematical solution.^4 Some philosophers, however, say that Zeno's paradoxes and their variations (see Thomson's lamp) remain relevant metaphysical problems.^5^6^7 The origins of the paradoxes are somewhat unclear. Diogenes Laertius, a fourth source for information about Zeno and his teachings, citing Favorinus, says that Zeno's teacher Parmenides was the first to introduce the Achilles and the tortoise paradox. But in a later passage, Laertius attributes the origin of the paradox to Zeno, explaining that Favorinus disagrees.^8 Paradoxes of motion Achilles and the tortoise In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead. – as recounted by Aristotle, Physics VI:9, 239b15 In the paradox of Achilles and the Tortoise, Achilles is in a footrace with the tortoise. Achilles allows the tortoise a head start of 100 metres, for example. If we suppose that each racer starts running at some constant speed (one very fast and one very slow), then after some finite time, Achilles will have run 100 metres, bringing him to the tortoise's starting point. During this time, the tortoise has run a much shorter distance, say, 10 metres. It will then take Achilles some further time to run that distance, by which time the tortoise will have advanced farther; and then more time still to reach this third point, while the tortoise moves ahead. Thus, whenever Achilles reaches somewhere the tortoise has been, he still has farther to go. 
Therefore, because there are an infinite number of points Achilles must reach where the tortoise has already been, he can never overtake the tortoise.^9^10 Dichotomy paradox That which is in locomotion must arrive at the half-way stage before it arrives at the goal.– as recounted by Aristotle, Physics VI:9, 239b10 Suppose Homer wants to catch a stationary bus. Before he can get there, he must get halfway there. Before he can get halfway there, he must get a quarter of the way there. Before traveling a quarter, he must travel one-eighth; before an eighth, one-sixteenth; and so on. The resulting sequence can be represented as: $\left\{ \cdots, \frac{1}{16}, \frac{1}{8}, \frac{1}{4}, \frac{1}{2}, 1 \right\}$ This description requires one to complete an infinite number of tasks, which Zeno maintains is an impossibility. This sequence also presents a second problem in that it contains no first distance to run, for any possible (finite) first distance could be divided in half, and hence would not be first after all. Hence, the trip cannot even begin. The paradoxical conclusion then would be that travel over any finite distance can neither be completed nor begun, and so all motion must be an illusion. An equally valid conclusion, as Henri Bergson proposed, is that motion (time and distance) is not actually divisible. This argument is called the Dichotomy because it involves repeatedly splitting a distance into two parts. It contains some of the same elements as the Achilles and the Tortoise paradox, but with a more apparent conclusion of motionlessness. It is also known as the Race Course paradox. Some, like Aristotle, regard the Dichotomy as really just another version of Achilles and the Tortoise.^11 There are two versions of the dichotomy paradox. In the other version, before Homer could reach the stationary bus, he must reach half of the distance to it. Before reaching the last half, he must complete the next quarter of the distance. Reaching the next quarter, he must then cover the next eighth of the distance, then the next sixteenth, and so on. There are thus an infinite number of steps that must first be accomplished before he could reach the bus, with no way to establish the size of any "last" step. Expressed this way, the dichotomy paradox is very much analogous to that of Achilles and the tortoise. Arrow paradox If everything when it occupies an equal space is at rest, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless.^12 – as recounted by Aristotle, Physics VI:9, 239b5 In the arrow paradox (also known as the fletcher's paradox), Zeno states that for motion to occur, an object must change the position which it occupies. He gives an example of an arrow in flight. He states that in any one (durationless) instant of time, the arrow is neither moving to where it is, nor to where it is not.^13 It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible. 
Whereas the first two paradoxes divide space, this paradox starts by dividing time—and not into segments, but into points.^14 Three other paradoxes as given by Aristotle Paradox of Place From Aristotle: if everything that exists has a place, place too will have a place, and so on ad infinitum.^15 Paradox of the Grain of Millet From Aristotle: there is no part of the millet that does not make a sound: for there is no reason why any such part should not in any length of time fail to move the air that the whole bushel moves in falling. In fact it does not of itself move even such a quantity of the air as it would move if this part were by itself: for no part even exists otherwise than potentially.^16 Description of the paradox from the Routledge Dictionary of Philosophy: The argument is that a single grain of millet makes no sound upon falling, but a thousand grains make a sound. Hence a thousand nothings become something, which is absurd.^17 Description from Nick Huggett: This a Parmenidean argument that one cannot trust one's sense of hearing. Aristotle's response seems to be that even inaudible sounds can add to an audible sound.^18 The Moving Rows (or Stadium) From Aristotle: concerning the two rows of bodies, each row being composed of an equal number of bodies of equal size, passing each other on a race-course as they proceed with equal velocity in opposite directions, the one row originally occupying the space between the goal and the middle point of the course and the other that between the middle point and the starting-post. This...involves the conclusion that half a given time is equal to double that time.^19 For an expanded account of Zeno's arguments as presented by Aristotle, see Simplicius' commentary On Aristotle's Physics. Proposed solutions Simplicius of Cilicia According to Simplicius, Diogenes the Cynic said nothing upon hearing Zeno's arguments, but stood up and walked, in order to demonstrate the falsity of Zeno's conclusions. To fully solve any of the paradoxes, however, one needs to show what is wrong with the argument, not just the conclusions. Through history, several solutions have been proposed, among the earliest recorded being those of Aristotle and Archimedes. Aristotle (384 BC−322 BC) remarked that as the distance decreases, the time needed to cover those distances also decreases, so that the time needed also becomes increasingly small.^20^21 Aristotle also distinguished "things infinite in respect of divisibility" (such as a unit of space that can be mentally divided into ever smaller units while remaining spatially the same) from things (or distances) that are infinite in extension ("with respect to their extremities").^22 Aristotle's objection to the arrow paradox was that "Time is not composed of indivisible nows any more than any other magnitude is composed of indivisibles."^23 Saint Thomas Aquinas Saint Thomas Aquinas, commenting on Aristotle's objection, wrote "Instants are not parts of time, for time is not made up of instants any more than a magnitude is made of points, as we have already proved. Hence it does not follow that a thing is not in motion in a given time, just because it is not in motion in any instant of that time."^24 Before 212 BC, Archimedes had developed a method to derive a finite answer for the sum of infinitely many terms that get progressively smaller. (See: Geometric series, 1/4 + 1/16 + 1/64 + 1/256 + · · ·, The Quadrature of the Parabola.) 
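As a quick numerical illustration of the kind of finite answer Archimedes' method assigns to infinitely many shrinking terms, the short Python sketch below (not part of the article, and only a sketch) prints partial sums of the series 1/4 + 1/16 + 1/64 + ... (which approach 1/3) and of the Dichotomy distances 1/2 + 1/4 + 1/8 + ... (which approach 1).

```python
def partial_sum(ratio, terms):
    """Sum of ratio + ratio^2 + ... + ratio^terms (a finite geometric series)."""
    return sum(ratio ** k for k in range(1, terms + 1))

for n in (5, 10, 20, 50):
    # Archimedes' series tends to 1/3; the Dichotomy series tends to 1.
    print(n, partial_sum(1 / 4, n), partial_sum(1 / 2, n))
```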
Modern calculus achieves the same result, using more rigorous methods (see convergent series, where the "reciprocals of powers of 2" series, equivalent to the Dichotomy Paradox, is listed as convergent). These methods allow the construction of solutions based on the conditions stipulated by Zeno, i.e. the amount of time taken at each step is geometrically decreasing. Bertrand Russell Bertrand Russell offered what is known as the "at-at theory of motion". It agrees that there can be no motion "during" a durationless instant, and contends that all that is required for motion is that the arrow be at one point at one time, at another point another time, and at appropriate points between those two points for intervening times. In this view motion is a function of position with respect to time.^26^27 Nick Huggett Nick Huggett argues that Zeno is begging the question when he says that objects that occupy the same space as they do at rest must be at rest.^14 Peter Lynds Peter Lynds has argued that all of Zeno's motion paradoxes are resolved by the conclusion that instants in time and instantaneous magnitudes do not physically exist.^28^29^30 Lynds argues that an object in relative motion cannot have an instantaneous or determined relative position (for if it did, it could not be in motion), and so cannot have its motion fractionally dissected as if it does, as is assumed by the paradoxes. For more about the inability to know both speed and location, see Heisenberg uncertainty principle. Hermann Weyl Another proposed solution is to question one of the assumptions Zeno used in his paradoxes (particularly the Dichotomy), which is that between any two different points in space (or time), there is always another point. Without this assumption there are only a finite number of distances between two points, hence there is no infinite sequence of movements, and the paradox is resolved. The ideas of Planck length and Planck time in modern physics place a limit on the measurement of time and space, if not on time and space themselves. According to Hermann Weyl, the assumption that space is made of finite and discrete units is subject to a further problem, given by the "tile argument" or "distance function problem".^31^32 According to this, the length of the hypotenuse of a right angled triangle in discretized space is always equal to the length of one of the two sides, in contradiction to geometry. Jean Paul Van Bendegem has argued that the Tile Argument can be resolved, and that discretization can therefore remove the paradox.^4^33 Hans Reichenbach Hans Reichenbach has proposed that the paradox may arise from considering space and time as separate entities. In a theory like general relativity, which presumes a single space-time continuum, the paradox may be blocked.^34 The paradoxes in modern times Infinite processes remained theoretically troublesome in mathematics until the late 19th century. The epsilon-delta version of Weierstrass and Cauchy developed a rigorous formulation of the logic and calculus involved. These works resolved the mathematics involving infinite processes.^35 While mathematics can be used to calculate where and when the moving Achilles will overtake the Tortoise of Zeno's paradox, philosophers such as Brown and Moorcroft^5^6 claim that mathematics does not address the central point in Zeno's argument, and that solving the mathematical issues does not solve every issue the paradoxes raise. Zeno's arguments are often misrepresented in the popular literature.
That is, Zeno is often said to have argued that the sum of an infinite number of terms must itself be infinite–with the result that not only the time, but also the distance to be travelled, become infinite.^36 However, none of the original ancient sources has Zeno discussing the sum of any infinite series. Simplicius has Zeno saying "it is impossible to traverse an infinite number of things in a finite time". This presents Zeno's problem not with finding the sum, but rather with finishing a task with an infinite number of steps: how can one ever get from A to B, if an infinite number of (non-instantaneous) events can be identified that need to precede the arrival at B, and one cannot reach even the beginning of a "last event"?^5^6^7^37 Today there is still a debate on the question of whether or not Zeno's paradoxes have been resolved. In The History of Mathematics, Burton writes, "Although Zeno's argument confounded his contemporaries, a satisfactory explanation incorporates a now-familiar idea, the notion of a 'convergent infinite series.'".^38 Bertrand Russell offered a "solution" to the paradoxes based on modern physics,^citation needed but Brown concludes "Given the history of 'final resolutions', from Aristotle onwards, it's probably foolhardy to think we've reached the end. It may be that Zeno's arguments on motion, because of their simplicity and universality, will always serve as a kind of 'Rorschach image' onto which people can project their most fundamental phenomenological concerns (if they have any)."^5 Pat Corvini offers a solution to the paradox of Achilles and the tortoise by first distinguishing the physical world from the abstract mathematics used to describe it.^39 She claims the paradox arises from a subtle but fatal switch between the physical and abstract. Zeno's syllogism is as follows: P1: Achilles must first traverse an infinite number of divisions in order to reach the tortoise; P2: it is impossible for Achilles to traverse an infinite number of divisions; C: therefore, Achilles can never surpass the tortoise. Corvini shows that P1 is a mathematical abstraction which cannot be applied directly to P2 which is a statement regarding the physical world. The physical world requires a resolution amount used to distinguish distance while mathematics can use any Quantum Zeno effect In 1977,^40 physicists E. C. G. Sudarshan and B. Misra studying quantum mechanics discovered that the dynamical evolution (motion) of a quantum system can be hindered (or even inhibited) through observation of the system.^41 This effect is usually called the "quantum Zeno effect" as it is strongly reminiscent of Zeno's arrow paradox. This effect was first theorized in 1958.^42 Zeno behaviour In the field of verification and design of timed and hybrid systems, the system behaviour is called Zeno if it includes an infinite number of discrete steps in a finite amount of time.^43 Some formal verification techniques exclude these behaviours from analysis, if they are not equivalent to non-Zeno behaviour.^44^45 In systems design these behaviours will also often be excluded from system models, since they cannot be implemented with a digital controller.^46 A simple example of a system showing Zeno behaviour is a bouncing ball coming to rest. The physics of a bouncing ball, ignoring factors other than rebound, can be mathematically analyzed to predict an infinite number of bounces. See also 1. ^ ^a ^b Aristotle's Physics "Physics" by Aristotle translated by R. P. Hardie and R. K. Gaye 2. 
^ "Greek text of "Physics" by Aristotle (refer to §4 at the top of the visible screen area)". Archived from the original on 2008-05-16. 3. ^ ([fragment 65], Diogenes Laertius. IX 25ff and VIII 57). 4. ^ ^a ^b ^c Boyer, Carl (1959). The History of the Calculus and Its Conceptual Development. Dover Publications. p. 295. ISBN 978-0-486-60509-8. Retrieved 2010-02-26. "If the paradoxes are thus stated in the precise mathematical terminology of continuous variables (...) the seeming contradictions resolve themselves." 5. ^ ^a ^b ^c ^d Brown, Kevin. "Zeno and the Paradox of Motion". Reflections on Relativity. Retrieved 2010-06-06. 6. ^ ^a ^b ^c Moorcroft, Francis. "Zeno's Paradox". Archived from the original on 2010-04-18. 7. ^ ^a ^b Papa-Grimaldi, Alba (1996). "Why Mathematical Solutions of Zeno's Paradoxes Miss the Point: Zeno's One and Many Relation and Parmenides' Prohibition" (PDF). The Review of Metaphysics 50: 8. ^ Diogenes Laertius, Lives, 9.23 and 9.29. 9. ^ "Math Forum"., mathforum.org 10. ^ Huggett, Nick (2010). "Zeno's Paradoxes: 3.2 Achilles and the Tortoise". Stanford Encyclopedia of Philosophy. Retrieved 2011-03-07. 11. ^ Huggett, Nick (2010). "Zeno's Paradoxes: 3.1 The Dichotomy". Stanford Encyclopedia of Philosophy. Retrieved 2011-03-07. 12. ^ Aristotle. "Physics". The Internet Classics Archive. "Zeno's reasoning, however, is fallacious, when he says that if everything when it occupies an equal space is at rest, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless. This is false, for time is not composed of indivisible moments any more than any other magnitude is composed of indivisibles." 13. ^ Laertius, Diogenes (about 230 CE). "Pyrrho". Lives and Opinions of Eminent Philosophers IX. passage 72. ISBN 1-116-71900-2. 14. ^ ^a ^b Huggett, Nick (2010). "Zeno's Paradoxes: 3.3 The Arrow". Stanford Encyclopedia of Philosophy. Retrieved 2011-03-07. 15. ^ Aristotle Physics IV:1, 209a25 16. ^ Aristotle Physics VII:5, 250a20 17. ^ The Michael Proudfoot, A.R. Lace. Routledge Dictionary of Philosophy. Routledge 2009, p. 445 18. ^ Huggett, Nick, "Zeno's Paradoxes", The Stanford Encyclopedia of Philosophy (Winter 2010 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/entries/paradox-zeno/#GraMil 19. ^ Aristotle Physics VI:9, 239b33 20. ^ Aristotle. Physics 6.9 21. ^ Aristotle's observation that the fractional times also get shorter does not guarantee, in every case, that the task can be completed. One case in which it does not hold is that in which the fractional times decrease in a harmonic series, while the distances decrease geometrically, such as: 1/2 s for 1/2 m gain, 1/3 s for next 1/4 m gain, 1/4 s for next 1/8 m gain, 1/5 s for next 1/ 16 m gain, 1/6 s for next 1/32 m gain, etc. In this case, the distances form a convergent series, but the times form a divergent series, the sum of which has no limit. Archimedes developed a more explicitly mathematical approach than Aristotle. 22. ^ Aristotle. Physics 6.9; 6.2, 233a21-31 23. ^ Aristotle. Physics VI. Part 9 verse: 239b5. ISBN 0-585-09205-2. 24. ^ Aquinas. Commentary on Aristotle's Physics, Book 6.861 25. ^ George B. Thomas, Calculus and Analytic Geometry, Addison Wesley, 1951 26. ^ Huggett, Nick (1999). Space From Zeno to Einstein. ISBN 0-262-08271-3. 27. ^ Salmon, Wesley C. (1998). Causality and Explanation. p. 198. ISBN 978-0-19-510864-4. 28. ^ Lynds, Peter. Time and Classical and Quantum Mechanics: Indeterminacy vs. Discontinuity. 
Foundations of Physics Letter s (Vol. 16, Issue 4, 2003). doi:10.1023/A:1025361725408 29. ^ Time’s Up Einstein, Josh McHugh, Wired Magazine, June 2005 30. ^ Van Bendegem, Jean Paul (17 March 2010). "Finitism in Geometry". Stanford Encyclopedia of Philosophy. Retrieved 2012-01-03. 31. ^ Cohen, Marc (11 December 2000). "ATOMISM". History of Ancient Philosophy, University of Washington. Retrieved 2012-01-03. 32. ^ van Bendegem, Jean Paul (1987). "Discussion:Zeno's Paradoxes and the Tile Argument". Philosophy of Science (Belgium) 54 (2): 295–302. doi:10.1086/289379. JSTOR 187807. 33. ^ Hans Reichenbach (1958) The Philosophy of Space and Time. Dover 34. ^ Lee, Harold (1965). "Are Zeno's Paradoxes Based on a Mistake?". Mind (Oxford University Press) 74 (296): 563–570. doi:10.1093/mind/LXXIV.296.563. JSTOR 2251675. 35. ^ Benson, Donald C. (1999). The Moment of Proof : Mathematical Epiphanies. New York: Oxford University Press. p. 14. ISBN 978-0195117219. 36. ^ Huggett, Nick (2010). "Zeno's Paradoxes: 5. Zeno's Influence on Philosophy". Stanford Encyclopedia of Philosophy. Retrieved 2011-03-07. 37. ^ Burton, David, A History of Mathematics: An Introduction, McGraw Hill, 2010, ISBN 978-0-07-338315-6 38. ^ Sudarshan, E. C. G.; Misra, B. (1977). "The Zeno's paradox in quantum theory". Journal of Mathematical Physics 18 (4): 756–763. Bibcode:1977JMP....18..756M. doi:10.1063/1.523304. 39. ^ W.M.Itano; D.J.Heinsen, J.J.Bokkinger, D.J.Wineland (1990). "Quantum Zeno effect" (PDF). PRA 41 (5): 2295–2300. Bibcode:1990PhRvA..41.2295I. doi:10.1103/PhysRevA.41.2295. 40. ^ Khalfin, L.A. (1958). "Contribution to the Decay Theory of a Quasi-Stationary State". Soviet Phys. JETP 6: 1053. Bibcode:1958JETP....6.1053K. 41. ^ Paul A. Fishwick, ed. (1 June 2007). "15.6 "Pathological Behavior Classes" in chapter 15 "Hybrid Dynamic Systems: Modeling and Execution" by Pieter J. Mosterman, The Mathworks, Inc.". Handbook of dynamic system modeling. Chapman & Hall/CRC Computer and Information Science (hardcover ed.). Boca Raton, Florida, USA: CRC Press. pp. 15–22 to 15–23. ISBN 978-1-58488-565-8. Retrieved 42. ^ Lamport, Leslie (2002). Specifying Systems (PDF). Addison-Wesley. p. 128. ISBN 0-321-14306-X. Retrieved 2010-03-06. 43. ^ Zhang, Jun; Johansson, Karl; Lygeros, John; Sastry, Shankar (2001). "Zeno hybrid systems". International Journal for Robust and Nonlinear control 11 (5): 435. doi:10.1002/rnc.592. Retrieved 44. ^ Franck, Cassez; Henzinger, Thomas; Raskin, Jean-Francois (2002). A Comparison of Control Problems for Timed and Hybrid Systems. Retrieved 2010-03-02. External links Wikisource has original text related to this article: Zeno of Elea • Dowden, Bradley. "Zeno’s Paradoxes." Entry in the Internet Encyclopedia of Philosophy. • Hazewinkel, Michiel, ed. (2001), "Antinomy", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 • Introduction to Mathematical Philosophy, Ludwig-Maximilians-Universität München • Silagadze, Z. K. "Zeno meets modern science," • Zeno's Paradox: Achilles and the Tortoise by Jon McLoone, Wolfram Demonstrations Project. • Palmer, John (2008). "Zeno of Elea". Stanford Encyclopedia of Philosophy. • This article incorporates material from Zeno's paradox on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. • Grime, James. "Zeno's Paradox". Numberphile. Brady Haran.
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Zeno's_paradoxes","timestamp":"2014-04-19T11:58:20Z","content_type":null,"content_length":"151565","record_id":"<urn:uuid:72d57bc4-434a-4642-a7a3-450a8d5e6cec>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Flashcards - Mathematics - Addition & Subtraction Relationship Algorithm | StudyBlue a set of rules for solving a problem in a finite number of steps A characteristic used to describe an object, usually within a pattern. Usually refers to shape, size, or color Numbering system in common use, in which each place to the left or right represents a power of 10. Determining between 2 numbers which is greater, smaller, or equal Has at least one other factor besides one and itself Learning builds upon knowledge students know. Teacher avoids most direct instruction and attempts to lead students through questions and activities. Finding the number of elements of a finite set of objects Expressions written with an equal sign in between; the two entities on either side are equal to each other. Examples: The number written in terms of digits multiplied by their columns A number may be made by multiplying two or more other numbers together. The numbers multiplied are the factors. The process or the result of determining the magnitude of a quantity, such as: Numbers that can be divided evenly by 2 and numbers that can't be divided evenly by 2 An action or procedure which produces a new value from one or more input values The chance that a particular event (or set of events) will occur A comparison between two different things. And ________ are built from ratios One and only one number name is assigned to each object in a group, and the last number name is understood to name the quantity of the group. The amount "left over" after the division of two integers which cannot be expressed with an integer quotient Finite or infinite collection of objects in which order has no significance
{"url":"http://www.studyblue.com/notes/note/n/deck/1004408","timestamp":"2014-04-19T06:55:35Z","content_type":null,"content_length":"63537","record_id":"<urn:uuid:7c3a3e3f-7307-4cef-946f-63d20d1dfd0c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Derandomized Vector Sorting Derandomized Vector Sorting (1998) Derandomized Vector Sorting. Technical Report ncstrl.vatech_cs//TR-98-19, Computer Science, Virginia Polytechnic Institute and State University. Full text available as: Postscript - Requires a viewer, such as GhostView TR-98-19.ps (164553) An instance of the vector sorting problem is a sequence of k-dimensional vectors of length n. A solution to the problem is a permutation of the vectors such that in each dimension the length of the longest decreasing subsequence is O(sqrt(n)). A random permutation solves the problem. Here we derandomize the obvious probabilistic algorithm and obtain a deterministic O(kn^3.5) time algorithm that solves the vector sorting problem. We also apply the algorithm to a book embedding problem.
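To make the abstract concrete, the "obvious probabilistic algorithm" mentioned above can be sketched in a few lines of Python. The code below is illustrative only; it is not the derandomized O(kn^3.5) algorithm of the report, and the function names are made up. It shuffles a set of k-dimensional vectors and then measures, in each dimension, the length of the longest decreasing subsequence, which for a random permutation is on the order of sqrt(n).

```python
import random

def longest_decreasing_subsequence(seq):
    """Length of the longest strictly decreasing subsequence (simple O(n^2) DP)."""
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] > seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

def random_vector_sort(vectors):
    """The probabilistic solution: a uniformly random permutation of the vectors."""
    arranged = vectors[:]
    random.shuffle(arranged)
    return arranged

n, k = 400, 3
vectors = [[random.random() for _ in range(k)] for _ in range(n)]
arranged = random_vector_sort(vectors)
for d in range(k):
    lds = longest_decreasing_subsequence([v[d] for v in arranged])
    print(f"dimension {d}: LDS = {lds}, sqrt(n) = {n ** 0.5:.1f}")
```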
{"url":"http://eprints.cs.vt.edu/archive/00000498/","timestamp":"2014-04-19T02:29:08Z","content_type":null,"content_length":"5973","record_id":"<urn:uuid:e6120f19-4e7c-4819-8863-470444e946ed>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Prove that \(x\) is both rational and an integer.\[\lim_{n\to\infty}\left(\ln n-\frac{n}{\pi\left(n\right)}\right)=x\]The function \(\pi(n)\) counts the number of primes less than or equal to \(n\). I've seen this equation somewhere before, but I can't remember where. IIRC, \(x\) evaluates to \(1\). Mysterious! • one year ago
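The limit can at least be explored numerically. The Python sketch below is my own illustration, not part of the original thread. It tabulates ln n − n/π(n) using a simple sieve; the values creep toward 1 very slowly, which is consistent with the prime number theorem in the form π(n) ≈ n/(ln n − 1).

```python
import math

def prime_counts(limit):
    """pi(n) for every n <= limit, via a sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    counts, running = [0] * (limit + 1), 0
    for n in range(limit + 1):
        running += is_prime[n]
        counts[n] = running
    return counts

pi = prime_counts(10 ** 6)
for n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    print(n, math.log(n) - n / pi[n])   # slowly approaches 1
```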
{"url":"http://openstudy.com/updates/50332a4be4b0d8ed9b49a909","timestamp":"2014-04-16T08:08:41Z","content_type":null,"content_length":"49948","record_id":"<urn:uuid:052d954a-94e7-4ac4-837b-d001a2c3b41b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Another Array Robert Kern robert.kern@gmail.... Fri Apr 10 02:01:10 CDT 2009 On Fri, Apr 10, 2009 at 01:58, Ian Mallett <geometrian@gmail.com> wrote: > On Thu, Apr 9, 2009 at 11:46 PM, Robert Kern <robert.kern@gmail.com> wrote: >> Parabolic? They should be spherical. > The particle system in the last screenshot was affected by gravity. In the > absence of gravity, the results should be spherical, yes. All the vectors > are a unit length, which produces a perfectly smooth surface (unrealistic > for such an effect). >> No, it's not obvious. Exactly what code did you try? What results did >> you get? What results were you expecting? > It crashed. > I have this code: > vecs = Numeric.random.standard_normal(size=(self.size[0],self.size[1],3)) > magnitudes = Numeric.sqrt((vecs*vecs).sum(axis=-1)) > uvecs = vecs / magnitudes[...,Numeric.newaxis] > randlen = Numeric.random.random((self.size[0],self.size[1])) > randuvecs = uvecs*randlen #It crashes here with a dimension mismatch > rgb = ((randvecs+1.0)/2.0)*255.0 > I also tried randlen = Numeric.random.random((self.size[0],self.size[1],3)), > but this does not scale each of the vector's components equally, producing > artifacts again. Each needs to be scaled by the same random value for it to > make sense. See how I did magnitudes[...,numpy.newaxis]? You have to do the same. >> Let's take a step back. What kind of distribution are you trying to >> achieve? You asked for uniformly distributed unit vectors. Now you are >> asking for something else, but I'm really not sure what. What standard >> are you comparing against when you say that the unit vectors look >> "unrealistic"? > The vectors are used to "jitter" each particle's initial speed, so that the > particles go in different directions instead of moving all as one. Using > the unit vector causes the particles to make the smooth parabolic shape. > The jitter vectors much then be of a random length, so that the particles go > in all different directions at all different speeds, instead of just all in > different directions. Ah, okay. That makes sense. Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco More information about the Numpy-discussion mailing list
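Putting Robert Kern's suggestion together, a working version of the snippet might look like the sketch below. It uses the modern numpy module name rather than the post's Numeric alias, and the grid size is a made-up stand-in for self.size. The key point is that the per-particle random length gets a trailing axis added with newaxis, exactly as was done for the magnitudes, so all three components of each vector are scaled by the same value.

```python
import numpy as np

size = (8, 8)  # hypothetical stand-in for self.size

# Uniformly distributed directions: normalize Gaussian samples to unit length.
vecs = np.random.standard_normal(size=(size[0], size[1], 3))
magnitudes = np.sqrt((vecs * vecs).sum(axis=-1))
uvecs = vecs / magnitudes[..., np.newaxis]

# One random length per particle, broadcast over the x/y/z components.
randlen = np.random.random((size[0], size[1]))
randuvecs = uvecs * randlen[..., np.newaxis]

rgb = ((randuvecs + 1.0) / 2.0) * 255.0
print(rgb.shape)   # (8, 8, 3), no dimension mismatch
```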
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-April/041763.html","timestamp":"2014-04-21T05:23:25Z","content_type":null,"content_length":"5362","record_id":"<urn:uuid:3e2c1f78-988c-45f9-a579-62cfa03653f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Gloria Olive Born: 8 June 1923 in New York City, USA Died: 17 April 2006 in Dunedin, New Zealand Gloria Olive completed her school education at Abraham Lincoln High School in Brooklyn, New York, graduating in 1940. She then entered Brooklyn College in New York where she studied mathematics, graduating with a B.A. in 1944. Perhaps the most famous of her lecturers was Jesse Douglas, who had been awarded the Fields Medal at the International Congress of Mathematicians at Oslo in 1936. After graduating, Olive was appointed as a Graduate Assistant at the University of Wisconsin where she spent the two academic years 1944-46. In addition to teaching she studied for her Master's Degree during these two years and was awarded the degree in 1946. She had taken courses at Wisconsin from R H Bruck, H P Evans, R E Langer, and C C MacDuffee. From Wisconsin, Olive moved to the University of Arizona in 1946 where she was appointed as an instructor. After two years she was appointed to Idaho State University where again she spent two years teaching as an instructor. Her next appointment, at Oregon State University in 1950, was as a Graduate Assistant, and after this one-year post she left the academic world for a short time, taking a job as a cryptographer in the U.S. Department of Defense in Washington, D.C. After a year in Washington, Olive returned to an academic position being appointed to Anderson College in 1952. Anderson Bible Training School had been founded in 1917 as an educational establishment to train leaders and workers for a life in the church. It rapidly developed a broader, more general, education program, and changed its name first to Anderson College and Theological Seminary, and then to Anderson College. At Anderson College, Olive built up the mathematics department and began to become interested in mathematical research, in particular studying generalised powers. C C MacDuffee, who had taught Olive at the University of Wisconsin, agreed to accept a visiting professorship at Oregon State University so that he could supervise her doctoral thesis. Sadly he died in 1961 and Olive was left without a thesis advisor. However she was awarded a Ph.D. for her thesis Generalized Powers in 1963. Olive continued to teach at Anderson College until 1968 when she accepted a professorship at the University of Wisconsin-Superior. She left the mathematics department in Anderson College, of which she had been the chair, with three members of staff who were her former students. The educational establishment she joined had been accredited as Superior Normal School in 1916, then ten years later became Superior State Teachers College. By 1951 it had changed its name to Wisconsin State College-Superior, and then in 1964 it became a university, although at this stage it was not part of the University of Wisconsin. Olive stayed at the University of Wisconsin-Superior until 1972, the year after it joined the University of Wisconsin, and went to New Zealand where she was appointed as a senior lecturer at the University of Otago. She continued in this post until she retired in 1989. Mac Lane and Rayner wrote on her retirement [1]:- For all of her time with the Mathematics and Statistics Department of Otago University, Gloria has been the only female on the staff with tenure, and as such has been a shining example to both staff and students. 
She has fought hard for the issues she championed, and contributed to several worthwhile changes (such as the current internal assessment policy applauded by both staff and students). Her colleagues will miss her lively contributions to the debates in departmental meetings. Much of Olive's research was on applications of generalised powers. She published papers such as Binomial functions and combinatorial mathematics (1979), A combinatorial approach to generalized powers (1980), Binomial functions with the Stirling property (1981), Some functions that count (1983), Taylor series revisited (1984), Catalan numbers revisited (1985), A special class of infinite matrices (1987), and The ballot problem revisited (1988). Mac Lane and Rayner write [1]:- Some of her work on binomial functions overlaps that of Gian-Carlo Rota's "polynomials of binomial type". She has had a special interest in the polynomials which are generated by her generalised powers, and hopes that someone will prove or disprove her conjecture, now about 30 years old, that all their zeros lie on the unit circle. This conjecture has now been verified for infinitely many special cases. During her years in New Zealand, she served on the Council of the New Zealand Mathematical Society and was the convener of the New Zealand National Committee for Mathematics. Article by: J J O'Connor and E F Robertson
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Olive.html","timestamp":"2014-04-17T00:52:22Z","content_type":null,"content_length":"13402","record_id":"<urn:uuid:c45942bb-570f-4f81-9f8a-2a4c2e6ee233>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
CS 154: Intro. to Functional Programming—Midterm Review Notes click on the triangles to the left to expand sections! use the text size feature in your browser to make this text larger and more readable! Information about the exam The exam will be held on Monday 25 October at the usual lecture time (1:50-2:50) in the usual lecture room (Ford 204). The exam will consist of several kinds of questions; typically: 10-15 True-or-False questions "warm-up" questions at 1-2 points each 8-10 short answer questions at 4-6 points each 3-4 longer answer questions at 8-15 points each The emphasis is more on conceptual issues than is typical in labs (which emphasize programming practice). In any case, you won't be asked to write long programs, but perhaps to read or complete shorter-length ones (2-4 lines of Haskell). You might also be asked to find certain kinds of errors in a program you read. You should review your lecture notes to study, as well as the textbook (chapters and sections as marked below). You might also want to review the labs and your solutions to them, as well as various samples on the class web page. Any definitions you need from the Prelude will be supplied for you in an "appendix" to the exam! Functional programming—background (see Preface and Chapter 1) programming compared with math like math, programming involves definition, calculation, abstraction and formal reasoning in both fields, ideas are in some sense exact, rigorous and objective both tend to be seen (superficially) as concerned with numbers and operations on them ... but both are actually (more deeply) concerned with larger-scale structures, their properties and relationships programming contrasted with math math is communicated between humans, and so may be somewhat informal; in addition, programs are also communicated to computers, and so must be expressed precisely, in formal languages in math, abstract properties and "values" are considered most important; in programming, we must consider concrete representations as well, for feasibility and efficiency some typical topics treated differently in math and programming: numbers are usually treated abstractly in math, divorced from their numeral representations: in computing we consider both sets form the foundation of math: as abstractions, they have only logical properties (which elements?) lists are representations in computing, so they naturally involve order and repetition of elements we also want sets in computing, but must explicitly abstract away from representational issues (e.g., the order and repetition) how does pure functional programming in Haskell compare to other approaches and languages? most languages express programs as sequences of stepwise instructions which change stored data over time; Haskell programs are sets of definitions concerning "timeless" values (much like math) Haskell programs use recursion where most languages "loop" instructions through time since Haskell values don't change over time, we can write algebraic or equational proofs about them (in most languages, we can't guarantee that f(x) = f(x) or that 2 * f(x) = f(x) + f(x), since x might change along the way!) 
Haskell programs tend to be much shorter than in most languages (hopefully without being too cryptic) most Haskell implementations allow an interactive, exploratory style of development, using an expression "calculator" most languages require a longer process of program compilation, and a greater separation of testing from writing phases The "mechanics" of using the Hugs (or WinHugs) implementations (see Chapter 2) the Hugs (or WinHugs) program, by default, evaluates expressions that the user types in a basic module (i.e., a set of definitions) called the Prelude is loaded by default (others can also be loaded) the expression $$ always refers to the last expression evaluated (and "up arrow" gives access to the history) if evaluation of an expression takes too long, it can be interrupted with the "stop" button (or the "control-C" keystrokes) a number of different commands to the interpreter can be given by prefixing with a colon (:) commands can be abbreviated to just their first letter (or to any unique prefix) :type to determine the type of an expression (short form :t) :load to load a module of definitions (short form :l) :browse to show the names of defined values available :info to get information on things other than values (types, type classes, modules, etc.) :find to look up (in the editor) the definition of a value :set +s to show statistics about the time and storage used by each evaluation a number of other system options can be set either by using commands (:set) or through the WinHugs options menu writing and loading scripts we leave the interpreter and use an editor to write sets of definitions called modules inside a module, each definition must be complete (i.e., not interspersed with others) definitions may optionally include a type assertion (e.g., "f :: a -> [a]") definitions may involve sub-definition, using "where" clauses or the "let" expression: "let x = ... in ..." where clauses must be indented under their main clause if there is an error in a module, none of the definitions will load (it's all or nothing ...) Basic types and literals: numbers, characters, and booleans (see Sections 3.1-3.2; and 3.9) kinds of numbers in Haskell Int — the type of "small" integers (must be greater than -2^31 and less than 2^31 - 1) can use base 16 with a "0x" prefix: 0x101 = 257 represented internally using bit fields of width 32 (one bit for the sign) Integer — may be positive or negative and of any magnitude (but they are less efficient) Float, Double — approximations to real numbers, in "scientific notation", with size limitations examples: 2.5, -36.7, 3.141592635, 2.0E25, -37.456e-20 == -3.7456e-19 represented internally using bit fields for mantissa and exponent, plus signs (IEEE standard representation) Num a — a type class that includes all the numeric types (supports addition, multiplication, etc.) by default, Haskell will try to defer judgements about what specific Num class type a literal constant is various other type classes support natural operations on fractions, integral types (Int and Integer), etc.
individual letters, digits, punctuation, symbols are called characters and are written in single quotes their common type is Char values of Char type are represented internally as bot patterns or numbers, using the ASCII code longer texts, called strings, are written with double-quote marks examples: "hello", "a", "123@ $5" these values are all of type String, which is really just an abbreviation for list of Char (i.e., type String = [Char]) when possible, Hugs will write out lists of characters in this notation ... but you can input them as lists: ['a', 'b', 'c', 'd'] == "abcd") some characters are "written" as special sequences, starting with "\" examples include tabs, newline characters, and some other unprintable ones when these strings are printed (e.g., using putStr), they will appear as actual tabs, line separators, etc.) Booleans (= "truth values") there are only two truth values, True and False, of type Bool they are computed as the results of relational operators (<, ==, etc.) and can be combined with logical operators (&&, ||, etc.) booleans can also be used in conditional expressions: if <boolean expression> then ... else ... in many cases, computing boolean results using conditionals can be re-expressed more concisely (if E then True else False) is the same as E (if E then False else True) is the same as not E Applying functions and operators to their arguments (see Sections 3.5-3.6) syntax of function application see also the “chunking exercises” www—chunking.pdf functions apply by juxtaposition to their arguments (i.e., written to the left) unlike in math, no parentheses are necessary in general: f x rather than f(x) but, if the argument is itself a "structured" expression, then parentheses are needed: f (x+y) several arguments can be supplied one after the other, with parens when structured: g a b (x+y) 3 in fact, application to "several arguments" really is successive application of function results g a b (x+y) 3 == (((g a) b) (x+y)) 3 operators and infix application operators are names written using non-letter "symbols": + - * # @ <= ==> operators take two (successive) arguments and are written in between them: x+y , "abc" !! 2 when we want to refer to an operator without using it, we put it in parentheses: foldr (+) "normal" functions (of two arguments) can be used infix by wrapping them in "back ticks": x `f` y we can apply an operator to one or the other argument by wrapping the two in parentheses: (+3) (^2) ('a':) operators have a defined precedence and association, so that we can elide (leave out) some parentheses from complex expressions these rules generally follow the usual mathematical ones, so that, e.g., multiplication "precedes" addition function application always precedes infix operators: f x + g y == (f x) + (g y) (+) — addition (or plus, or sum) (*) — multiplication (or product) (-) — subtraction (minus) (&&) — logical conjunction ("and") (||) — logical disjunction ("or") (<) — "less than" (relational: takes Ord class values, but returns a Bool) (>=) — "greater than or equal to" (relational) (==) — comparison for equality (relational over Eq class values) (/=) — comparison for inequality (negation of above) (!!) — indexing: selects an element from a list by numbered position, starting at 0 (:) — "cons": builds a list from an element and the rest of the list (asymmetric) (++) — append: builds a list from two lists Pairs, tuples, Maybe values, etc. 
(see Sections 2.4 and 3.4) pairs and n-tuples are written in parentheses, separated by commas: (2,3) ("abc", True, 5) the type of a pair is also written in the same way: (2,'a') :: (Int, Char) elements of pairs and tuples may be any type (unlike lists, whose elements must be all the same type) the functions fst and snd (first and second) may be used to select out the elements of a pair pairs of variables (or other patterns, see below) may be used as parameters in function definitions, or in definition of values (left, right) = foo x y (here foo is presumed to be a function returning pairs) the unit type has just a single value; both are written as "empty" parentheses: () :: () this type is useful when we need a "token" type with just one value (e.g., in the Notches example) the Maybe type has two kinds of values, Nothing and Just x, where x is some value the type "under" the Maybe is the type of the value if it's of the Just form: Just 3 : Maybe Int this type is useful when we want to allow, for example, that the result of a function is undefined lookup 2 [(1,"foo"), (3,"goo")] = Nothing List literals and Prelude functions (see Section 2.2, 3.3, the Appendix and also the labs) several different forms for writing lists [1,2,3] — this is the usual convenient form [] — the empty list (pronounced "nil") 1 : 2 : 3 : [] — this form makes the application of constructors explicit [1..10] — this form allows ranges of values to be easily specified [1..] — an infinite list, starting at 1 [1,3..21] — an arithmetic progression (every odd number from 1 to 21) some handy Prelude functions on lists (see also the Appendix to the exam) head — the first element of a list tail — the rest of a list, after the first element init — all but the last element of a list, as a list last — the last element of a list, as a single element length — the length of a list (number of elements) reverse — a list with elements in the opposite order (last is first, first is last) filter — a list with only those elements matching some predicate take — the first n elements of a list, as a list (taking "too many" just returns those available) drop — the remaining elements, after n are dropped (dropping too many just returns nil) map — apply a function to every element of a list, returning a list of results remember, when we apply a function to a list, and get another list as a result, the original list does not change the wonderful! 
fold(r) function foldr :: (a -> b -> b) -> b -> [a] -> b foldr takes a function and a "final value" and folds up a list using the function foldr "replaces" all the cons operators (:) with the function and the nil list ( [] ) with the final value foldr (:) [] is just the identity function on lists (replace cons with cons, and nil with nil) a picture of the internal structure of lists: Higher-order functions (see Sections 3.5-3.6 and Chapter 7) functions which take other functions as arguments, or return other functions as results, are called higher-order higher-order functions allow us to generalize many useful patterns of programming filter, takeWhile, dropWhile: allow a predicate to choose elements from a list map: allows a function to be applied to all elements of a list (.): function composition, combines functions in a "pipeline" curry, uncurry: convert between successive arguments and arguments-as-pairs foldr, foldl: allow functions to be applied "between" list elements functions of several "successive" arguments are actually higher-order functions (of their first argument) which return functions (of their second argument) a function which returns a boolean result (a -> Bool) is called a predicate filter even [1..10] == [2,4,6,8,10] takeWhile (<5) [1..10] == [1,2,3,4] see also below regarding the types of higher-order functions Defining functions (and other values) in Haskell (see Chapter 4) in general, Haskell definitions involve giving a name to a value, by writing the name and an expression describing the value, on the left- and right-hand sides of an 'equals sign', respectively functions may be defined outright by using an expression which results in a function on the right-hand side functions may also be defined relative to some parameter variable, which may then be used in the expression when a function is applied to an actual argument, the argument is substituted for the parameter in the defining expression dub (3*y-7) ==> (3*y-7) * 2 functions may also be defined by cases, using patterns length (x:xs) = 1 + length xs patterns are matched against the structure of actual arguments in order of the definition, from top to bottom be careful: this means that a pattern might be redundant and never used! a special parameter or pattern, written with the underscore, matches any value (like a variable), but binds no name this can be used to emphasize that a definition is completely independent of the corresponding argument value variables in Haskell have a natural scope of definition: variables defined at the "top level" of a module (far left side) are available in the whole module (or when loaded) variables used in the (patterns of) function parameters are available only in that clause of the function definition variables defined in a "where clause" are available only in the dominating or outer definition variables defined in a let expression are available only in the expression after the let variables from an outer scope may be "shadowed" or over-ridden by an inner scope definition x = 3 f x = x + 2 f y = g x y where x = 18 here there are three different definitions of the variable x Programming strategies (see also Section 6.6) try defining a function by using Prelude functions or previously-defined functions as "helpers" count p xs = length (filter p xs) while p = until (not . p) try defining a function over lists using patterns and recursion; for starters, use a recursive call on the tail of the list f (x:xs) = ... x ... (f xs) ... 
don’t try to thin about the “how” of the recursive call, only the “what” think in terms of a “contract”: what should it give for the recursive result? try defining a function you want by translating it into terms you can work in, then translating back (this will usually involve a function and its inverse) title = unwords . map cap . words “round the block” diagram Recursive definitions (see Chapter 6) definitions are recursive when the name being defined is used in the right-hand side of the definition recursive definitions run the risk of being ill-defined always consider including a "base case" (a non-recursive clause) and ensure that recursive clause(s) make progress for lists, we often use nil for a base case and a simple head/tail pattern for the recursive clause f (x:xs) = ... (f xs) ... for natural numbers (non-negative integers), we can use a similar structure with 0 and (n+1) for functions of two arguments, consider using all combinations of structural patterns (sometimes this isn't necessary) sometimes "deeper" patterns might be necessary, e,g, in order to pick out a pair of values from a list of pairs, or several values at the front of a list Evaluation, convergence, lazy evaluation and infinite data structures Haskell evaluation mimics paper-and-pencil algebraic or equational calculation (see the Lab 1 written exercises given an expression, we replace defined names with their definitions, being careful to respect rules of scope it is possible that the process of evaluation does not converge or terminate technically, Haskell implementations alway try to resolve the leftmost/outermost function definition first: this guarantees termination when possible Haskell implementations never evaluate more than they absolutely have to in order to, e.g., print results or supply arguments to primitive functions (like addition and multiplication) this means that we can define infinite structures, and use them, as long as we don't try to use all of them take 7 [1,3..] == [1,3,5,7,9,11,13] consider these definitions of the Fibonacci numbers and the factorials as lists: fibs = 0 : 1 : zipWith (+) fibs (tail fibs) facs = 1 : zipWith (*) facs [1..] Types and type-checking (see Chapter 3) all Haskell values have a type, and all valid Haskell expressions (and definitions) must be well-typed we express the fact that an expression has a certain type by placing a double-colon between them types help "sort out" values and prevent expressions that don't make sense (e.g., applying a number as if it were a function, or multiplying a String by a number) if a module contains an expression which cannot be given a valid type, it will not load and you cannot use its definitions (until you correct the error) among the rules are, for example, that if a function has type X -> Y, then its argument must have type X, and the result will be of type Y Haskell includes type variables (like "a" or "b") when any type is possible; this is called type polymorphism as mentioned above, higher-order function types can be read in two ways: map :: (a -> b) -> [a] -> [b] read as "map takes two arguments, a function and a list, and returns a list" map :: (a -> b) -> ([a] -> [b]) read as "map takes a function and returns, a function from lists to lists" these two types are equivalent, but writing the parentheses in explicitly emphasizes the second reading
{"url":"http://www.willamette.edu/~fruehr/154/examnotes/MidtermNotes.html/","timestamp":"2014-04-20T16:23:02Z","content_type":null,"content_length":"222447","record_id":"<urn:uuid:5563a333-3ba4-45ee-8432-3b363974dc8c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Multivariable function notations So z=f(x,y) ...... is a function of three variables with x and y being independent Nope, it's an equation in 3 variables. We really don't care too much about what's a function of what here. u=F(x,y,z)=f(x,y)-z . . . all of a sudden it seems as though z has suddenly become independent and there are now four variables. Some clearing up should be done by explaining what exactly we mean by a graph. A graph of some equation (not always z=f(x,y) for some f) is the set of all points (x,y,z) that satisfy the equation. Once we clear that up, it shouldn't really matter what's dependent on what. It's just convention that we usually try to write z=f(x,y). EDIT: Reason why I raised this question. I have a function z=ln(xy^2). 1. First consideration . . . If z=f(x,y) then : F(x,y,z)=f(x,y)-z Hence grad(F)=<1/x,2/y,-1> . . . At point(1,1,0) . . . grad(F)=<1,2,-1> . . . where grad(F) is a vector normal to the surface at that point 2. Second consideration . . . If z=f(x,y) then I can also do . . grad(z)=<1/x,2/y> . . . At point(1,1,0) . . . grad(F)=<1,2> . . . which is also normal to the surface with a rising rate of modulus(<1,2>) The second is [itex]\nabla f[/itex] in two-dimensional space. It's normal to the curve [itex]f\left(x,y\right)=C[/itex], where C is a constant, not to the surface. The first is [itex]\nabla\left(f\ left(x,y\right)-z\right)[/itex], where z is some new variable, independent of x and y (unless we're only looking at the gradient when we're on the surface,) which is our third variable if we're taking the gradient in three-dimensional space.
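To make the distinction concrete, here is a small symbolic check of the two gradients from the example above (an illustrative sketch using SymPy; the variable names and the use of SymPy are my own additions, not part of the thread):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    z = sp.symbols('z')
    f = sp.log(x * y**2)      # the surface z = f(x, y) from the example
    F = f - z                 # F(x, y, z) = f(x, y) - z, the level-set form

    grad_F = [sp.diff(F, v) for v in (x, y, z)]   # 3-D gradient, normal to the surface F = 0
    grad_f = [sp.diff(f, v) for v in (x, y)]      # 2-D gradient, normal to the level curves f = C

    at_point = {x: 1, y: 1, z: 0}
    print([g.subs(at_point) for g in grad_F])     # [1, 2, -1]
    print([g.subs(at_point) for g in grad_f])     # [1, 2]

The first vector is a normal to the surface in three-dimensional space; the second lives in the xy-plane and is normal to the level curves of f, which is exactly the point made in the last reply.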
{"url":"http://www.physicsforums.com/showthread.php?p=4231519","timestamp":"2014-04-20T05:54:47Z","content_type":null,"content_length":"32528","record_id":"<urn:uuid:85ef3531-d8b9-4754-a2dc-a008251584d4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverdale, GA Prealgebra Tutor Find a Riverdale, GA Prealgebra Tutor ...Awarded Mason Gold Standard Award for contributing to the academic achievement of my peers. Passed the CFA Level 1 exam in December 2011 on first try. Successfully tutored co-worker on math portion of the GRE. 28 Subjects: including prealgebra, calculus, finance, economics ...Hence I asked Mrs. M to take a few students aside to start a new group and she graciously allowed me to do so in her own classroom. It wasn't until a week later when Mrs. 8 Subjects: including prealgebra, chemistry, physics, biology ...I'm not here to force students to do things a certain way - that's what a lot of math teachers get wrong about teaching. I'm one of the best. I have taught all levels of math here in Georgia - except for AP calculus BC, only because I haven't had any students who desired to take it. 10 Subjects: including prealgebra, calculus, geometry, algebra 1 ...I am currently certified to teach Microsoft Outlook through my National Guard unit. I've taught it to soldiers on a few occasions, which is how I earned my certification. Outside of teaching the course in my unit, I've also done courses in my regular IT positions where I've had to teach users different functionality within Microsoft Outlook. 8 Subjects: including prealgebra, elementary (k-6th), elementary math, general computer ...I also taught high school math. I have a BA from Emmanuel College. In order to receive that BA, I had to take many Christian education classes. 17 Subjects: including prealgebra, English, reading, geometry Related Riverdale, GA Tutors Riverdale, GA Accounting Tutors Riverdale, GA ACT Tutors Riverdale, GA Algebra Tutors Riverdale, GA Algebra 2 Tutors Riverdale, GA Calculus Tutors Riverdale, GA Geometry Tutors Riverdale, GA Math Tutors Riverdale, GA Prealgebra Tutors Riverdale, GA Precalculus Tutors Riverdale, GA SAT Tutors Riverdale, GA SAT Math Tutors Riverdale, GA Science Tutors Riverdale, GA Statistics Tutors Riverdale, GA Trigonometry Tutors Nearby Cities With prealgebra Tutor College Park, GA prealgebra Tutors Conley prealgebra Tutors East Point, GA prealgebra Tutors Fayetteville, GA prealgebra Tutors Forest Park, GA prealgebra Tutors Hapeville, GA prealgebra Tutors Jonesboro, GA prealgebra Tutors Lake City, GA prealgebra Tutors Mableton prealgebra Tutors Morrow, GA prealgebra Tutors Norcross, GA prealgebra Tutors Peachtree City prealgebra Tutors Red Oak, GA prealgebra Tutors Tucker, GA prealgebra Tutors Union City, GA prealgebra Tutors
{"url":"http://www.purplemath.com/Riverdale_GA_prealgebra_tutors.php","timestamp":"2014-04-20T13:23:59Z","content_type":null,"content_length":"23960","record_id":"<urn:uuid:f2c495a1-a04b-45ad-90a8-4e369a84f06a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Giantspod | Sabermetrics For Beginners Part III: The Plus Sign

For our newest installment of SFB, I'd like to introduce you to two of the most deceptively simple and useful statistics, Adjusted On-Base Plus Slugging (OPS+) and Adjusted Earned Run Average (ERA+). These two stats show a player's performance, in hitting or pitching, compared to the league average in each stat while compensating for that player's ballpark. The following assumes a basic knowledge of the underlying statistics, so if you don't remember what OBP and SLG are, check out SFB Part 1, or here is Wikipedia's dissertation on ERA.

Adjusted OPS (OPS+) is a tool for examining a player's offensive production, adjusted for the ballpark he played in and the average statistics from the rest of the league. Here is the formula for OPS+:

OPS+ = [OBP/(league OBP) + SLG/(league SLG) - 1] * 100

This is actually not as complicated as it looks. The league On-Base Percentage and league Slugging are exactly what they sound like, and are then both adjusted to the batter's ballpark. This means that if a batter is playing in a stadium that is better for hitters, it slightly discounts their batting performance, and vice versa. Take AT&T Park, for example, which favors pitchers over hitters (a little bit) because of its big outfields and damp climate, a fact that Triples Alley (among others) has examined in detail. Given that it helps pitchers, a ballpark adjustment would help a batter's stats, but hurt a pitcher's. So essentially you're taking the two components that make up OPS and comparing them to the league average. Because there are two components, you subtract 1 after that to get it back to the original scale. Then you multiply by 100 to get rid of the decimals. If you've followed this so far, we end up with a number in and around 100, which is the league average. Above 100 is good, and below 100 is bad, as compared to the league averages.

Here are the OPS+ values from the 2010 Giants:

Rk  Player                OBP    SLG    OPS    OPS+
1   Aubrey Huff*          .385   .506   .891   138
2   Pat Burrell           .364   .509   .872   132
3   Buster Posey          .357   .505   .862   129
4   Andres Torres#        .343   .479   .823   119
5   Juan Uribe            .310   .440   .749    99
6   Freddy Sanchez        .342   .397   .739    98
7   Pablo Sandoval#       .323   .409   .732    95
8   Aaron Rowand          .281   .378   .659    75
    Team Totals           .321   .408   .729    95
    Rank in 16 NL teams     9      6      8

This is not particularly surprising, as we remember who was good and who wasn't (and yes, Pat the Bat did have better OBP and SLG than Posey). It's interesting to see that, after everything, Uribe and Freddy were basically league average hitters, and Panda was just a smidge below average. Keep in mind, the OPS+ number again does not mean anything by itself, but is rather a descriptive number that you can compare with others.

Let's pretend that the league average (ballpark-adjusted) OBP is .300, and the league average SLG is .400. Then let's pretend that Johnny McBaseball bats 15% better than average in each statistic.

OPS+ = [.345/.300 + .460/.400 - 1]*100
OPS+ = [1.15 + 1.15 - 1]*100
OPS+ = 1.3*100
OPS+ = 130

An OPS+ of 130 is not 30% better than league average, given that it comes from two statistics, but is rather 15% better than league average in each category. The top-3 single-season OPS+ scores are all held by Barry Bonds (268, 263, 259), which were all MVP years for the big man.
Babe Ruth holds the career OPS+ record with 206, with Bonds at third with 181 and Albert Pujols tied with Mickey Mantle for sixth with 172 (kind of puts Pujols’ contract negotiations in context, eh?). It’s a useful tool, particularly because of the built-in historical context, as you can see just how good somebody was for his time. Again, not the be-all-end-all of offensive statistics, but just another useful metric. Adjusted Earned Run Average (ERA+) works along the same lines, comparing a pitcher’s ERA to the park-adjusted league average ERA from the same year. Here is the formula: ERA+ = [(league ERA)/ERA]*100 Again, we end up with a number roughly around 100, with above being good and below being bad. Note that this is the opposite of ERA, where you obviously value low numbers. Given that the formula is constructed this way, higher is better. Here are the Giants pitchers from 2010: Again, not particularly surprising. We know that Brian Wilson was good, and Todd Wellemeyer was not so good. Interesting to note again that, despite everything, Barry Zito was essentially a league-average pitcher, at least ERA-wise. Also, note that ERA+ (and OPS+) do not take into account anything involving playing time, so take all averages with a grain of salt, as with relief pitchers you’re dealing with a much smaller sample size than starters. Even though Santiago Casilla kicked some major ERA+ butt, other metrics (like WAR) would show that given his limited playing time, he may not have been as valuable as somebody like Matt Cain, who had a worse ERA+ but played a lot more. Mariano Rivera, the Yankees’ future-Hall of Fame closer, has the best career ERA+ with 205, which means that he has had an ERA somewhere around half of the league average for his entire career (the next highest career ERA+, Pedro Martinez, is 154, which just shows you how amazing Rivera really is). The highest single-season ERA+ is from 1880, but Martinez again has the highest in the modern era, with a 291 ERA+ in the year 2000. He had an ERA of 1.74, won his third Cy Young in four years, and made a lot of Red Sox fans really happy. I hope that these help explain a bit of which we’re talking about. I’ll keep writing things like this, so please let me know if there is something specific you’d like me to help explain. So now, comment starter: what’s your favorite, most descriptive statistic? How do you feel ballparks really affect hitters? Who the hell is Tim Keefe? Go Giants!
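Both formulas are easy to turn into code. The sketch below is a minimal Python illustration (the function names are mine, and the league figures are treated as already park-adjusted, as in the post's toy example):

    def ops_plus(obp, slg, lg_obp, lg_slg):
        # Compare OBP and SLG separately to the (park-adjusted) league averages.
        return round((obp / lg_obp + slg / lg_slg - 1) * 100)

    def era_plus(era, lg_era):
        # Higher is better: the (park-adjusted) league ERA divided by the pitcher's ERA.
        return round((lg_era / era) * 100)

    # Johnny McBaseball, 15% better than league in both components:
    print(ops_plus(0.345, 0.460, 0.300, 0.400))   # 130
    # Backing out Pedro Martinez's 2000 season: a 1.74 ERA and an ERA+ of 291
    # imply a park-adjusted league ERA of roughly 5.07.
    print(era_plus(1.74, 5.07))                   # 291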
{"url":"http://giantspod.net/2011/02/22/sabermetrics-for-beginners-part-iii-the-plus-sign/","timestamp":"2014-04-19T22:07:28Z","content_type":null,"content_length":"153898","record_id":"<urn:uuid:7696ec80-5ed8-4550-b3c8-0a7033b6eedf>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
[Need help] Math-098 Graphs/Absolute Value

2011-11-29, 01:38 PM #1 Field Marshal Join Date Feb 2011
I missed one of my lectures and have nobody else to ask for help. I have a homework assignment due at the beginning of my next class along with an exam right at the beginning of class. I am stuck on 3 different problems. I'm not asking for somebody to do them for me, but I am asking for an explanation on how to solve them. Feel free to change the problem up so you don't feel like you're just helping me with my homework. Here are the 3 problems I have.
1. Solve the absolute value equation |5x+3| = |2x-7|.
2. Graph (shade on true side): 3x-2y >= 6
3. Graph the solution to the system of linear inequalities (shade on true side): 2x+3y =< 6 and 3x-2y =< 6

Since no-one has helped you out yet, I guess I could : )
1. Even though I assume you're familiar with the | | operator, I'll briefly summarize it: if x is a real number, then | x | = x if x >= 0, and | x | = -x if x < 0. So basically | x | is always non-negative for any given number; the only thing it does is change the sign of said number if it's negative. So in your case we'll have 2 equations: (5x+3)=(2x-7) or (5x+3)=-(2x-7). All that's left is to solve both equations separately, which should give you 2 solutions: x = -10/3 or x = 4/7. These 2 are both solutions of your original equation (test it! ^^)
For your next questions, I'm not in the mood to draw some graphs in paint, so I'll try to explain it without them.
2. When you're asked to graph an inequality, it's best to put it in the standard form: y =< (3/2)x - 3 (remember to switch the inequality sign around if you multiply both sides by -1!). Now for the graph, start with the graph for the equation (which will be a straight line): y = (3/2)x - 3. The next part might be tricky for you to understand, but you'll see that the line cuts your plane in 2. One of these 2 parts will be the true section, the other one the false one. To verify this you take a random point in one of the sections (not a point on the line itself, of course) and check whether it satisfies the inequality or not; if it does, then the section of that point is the true side, and if it doesn't, it's the false side. Let's say (x,y) = (2,-2): we get -2 =< (3/2)*(2) - 3 = 0, and since -2 =< 0 this is true. So the section with the point (2,-2) is the true one (that's the one you have to shade). You could also test that the point (1,1) does not satisfy the inequality, which is what we would expect.
3. These are the same as (2), so try it yourself ^^ I hope this'll help. I'm currently in my second bachelor year in mathematics, and it's been a while since I saw this in high school, so I've tried to make it as understandable as possible.

Okay thank you, I get it now. I made it more difficult than it really was, haha. I figured that's how you would do it, but the two absolute values threw me off. Same with the true side/false side. Once again thanks for taking the time to explain this to me.

This type of thread I believe is against the rules, too bad.
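A quick way to check the answers above is to replay the same two-case split in SymPy (illustrative only; the library choice is mine, not the poster's):

    import sympy as sp

    x = sp.symbols('x')
    case1 = sp.solve(sp.Eq(5*x + 3, 2*x - 7), x)       # [-10/3]
    case2 = sp.solve(sp.Eq(5*x + 3, -(2*x - 7)), x)    # [4/7]
    print(case1 + case2)                                # [-10/3, 4/7]
    for sol in case1 + case2:
        # both really satisfy |5x+3| = |2x-7|
        print(sp.Abs(5*sol + 3) == sp.Abs(2*sol - 7))   # True, True

    # Problem 2: picking the "true side" of 3x - 2y >= 6 by testing points.
    print(3*2 - 2*(-2) >= 6)   # (2, -2): True  -> shade this side
    print(3*1 - 2*1 >= 6)      # (1, 1):  False -> the other side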
{"url":"http://www.mmo-champion.com/threads/1031572-Need-help-Math-098-Graphs-Absolute-Value?p=14425395","timestamp":"2014-04-16T08:43:01Z","content_type":null,"content_length":"66543","record_id":"<urn:uuid:4152d34f-b29b-4a9e-8b3b-9a3c162dea7a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2010 [00314] [Date Index] [Thread Index] [Author Index] Re: beginner question about syntax • To: mathgroup at smc.vnet.net • Subject: [mg109105] Re: beginner question about syntax • From: telefunkenvf14 <rgorka at gmail.com> • Date: Mon, 12 Apr 2010 23:02:51 -0400 (EDT) • References: <hputp3$lh7$1@smc.vnet.net> On Apr 12, 5:48 am, AK <aaa... at googlemail.com> wrote: > Hi, > I'm a fairly seasoned user of another system who's just started using > Mathematica. Although at the moment I'm just playing around with > Mathematica (without any specific task at hand), trying to figure out > the Mathematica way of doing things from the documentation > (particularly the examples) there are some things I can't seem to wrap > my head around. For example, can anyone explain the outputs for the > inputs below: > In[1]:= Map[Function[x, x^2], a] > Out[1]:= a > In[2]:=Map[Function[x, x^2], a + b + c] > Out[2]:= a^2 + b^2 + c^2 > If I enclose the second argument of Map[] inside a list, I get the > expected output, but I don't understand what the operations given in > the example above represent and why the outputs are what they are. > Would appreciate an explanation for what's going here... thank you in > advance. Pure functions are easier to understand using the following notation, where the #1 stands for the first variable in your function. Evaluating it just returns the same thing. In[1]:= #1^2& Out[1]= #1^2& If you want to apply the pure function to 'a', one way to do this is: In[2]:= #1^2&[a] Out[2]= a^2 Note that if I add another variable, I get the same thing: In[3]:= #1^2&[a,b] Out[3]= a^2 Map[] can also be used to apply the pure function. (Note: (1) Mathematica applies Map at level 1, by default. (2) Most people use the /@ notation rather than wrapping with Map[] all the time.) In[4]:= #1^2&/@{a} Out[4]= {a^2} Also note that the following: In[5]:= #1^2&/@(a) Out[5]= a And for your second example, I'm not sure if you wanted: In[6]:= #^2&/@(a+b+c) Out[6]= a^2+b^2+c^2 In[7]:= #^2&/@{a+b+c} Out[7]= {(a+b+c)^2} I'm sure someone else will chime in with a better/clearer answer and a lecture on levels of expressions in Mathematica...
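For readers more at home in Python, the level-1 behaviour being described has a rough analogue in SymPy, where an expression exposes its head and its top-level parts (this is only an analogy added for illustration, not something from the original thread):

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    expr = a + b + c
    # like Map[#^2 &, a + b + c]: rebuild the expression from its squared level-1 parts
    print(expr.func(*[arg**2 for arg in expr.args]))    # a**2 + b**2 + c**2

    atom = a
    # like Map[#^2 &, a]: an atom has no level-1 parts, so it comes back unchanged
    print(atom.func(*[arg**2 for arg in atom.args]) if atom.args else atom)   # a

    # like Map[#^2 &, {a}]: mapping over a list squares each element
    print([t**2 for t in [a]])                          # [a**2]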
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Apr/msg00314.html","timestamp":"2014-04-19T00:05:31Z","content_type":null,"content_length":"26974","record_id":"<urn:uuid:c38ee506-1285-4de5-9f8b-f839c31f13ca>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Subspace problem. December 28th 2010, 05:15 AM #1 Oct 2010 Linear Subspace problem. I am very sorry if I am bothering you guys a lot but I am trying to understand some of this math and its very hard to do it alone So I have this problem that says: Find which of the following sets are linear subspaces of the corresponding linear spaces: I am only going to show the first one because maybe if I understand it I can do the rest(I have the answers but I have no idea how it comes out). This is the problem: $X_{0}=\left \{ (x1,...\: xn)/ x1+...+xn=0 \right.\left. \right \} \subset \mathbb{R}^{n}, (\mathbb{R}^{n}, \mathbb{R}, \cdot )$ Now I understand to be a linear subspace the set must contain the null vector, be closed under multiplication and addition. My problems begin from the step that I have no idea what $(\mathbb{R}^ {n} , \mathbb{R}, \cdot )$ is. Also are these the vectors within $(x1,...\: xn)$ the set X and if so what does this do: $/ x1+...+xn=0$ $\mathbb{R}$ is the set of real numbers. $\mathbb{R}^n$ is the set of n-tuples of real numbers. I would assume that the notation $(\mathbb{R}^n, \mathbb{R}, \cdot )$ means that you're looking at the vector space of n-tuples of real numbers over the field of reals where the operation is normal multiplication. Now, the null vector is in the subset because $0+\cdots +0 = 0$ If $(x_1,\ldots ,x_n), (y_1,\ldots ,y_n)$ are in the subset, then $x_1+y_1+\ldots x_n+y_n = x_1+\ldots +x_n+ y_1+\ldots +y_n=0+0=0$. So $(x_1,\ldots ,x_n) + (y_1,\ldots ,y_n)$ is in the subset. Scalar multiplication is similar. Yes, it's a subspace. Ok I think am getting it but whats confusing is that my book talked only about "internal binary operation on V(I guess in this case would be ${R}^n$), called addition and denoted by +, such that (V,+) is a commutative group". So you can only imagine when they decided to throw in a * how confused I got. But am I correct to assume that is the same thing but instead of + its *? To be honest, that notation is quite confusing. I'm not sure why they chose those 3 things to emphasize (as opposed to addition, for example). I wouldn't worry too much about it though - I think the question is pretty clear. Maybe ask your teacher why this notation is being used. It is somewhat surprising to lean that mathematical notation is anything but standard. There is a series of probability books all by the same author in which notation varies from book to book. That is why is almost pointless to ask about notation is a forum such as this. Sometimes if you give the title and author it is possible that someone may know that particular textbook. That said, I think Steve is correct in his guess. $\left( {\mathbb{R}^n ,\mathbb{R}, \cdot } \right)$ tells us that the v-space is the set of real n-tuples with vector addition, the scalar field is $\mathbb{R}$ and $\cdot$ tells us the scalar multiplication is that of the real numbers. But that is surely explained in the text material. December 28th 2010, 07:11 AM #2 Senior Member Nov 2010 Staten Island, NY December 28th 2010, 07:22 AM #3 Oct 2010 December 28th 2010, 07:34 AM #4 Senior Member Nov 2010 Staten Island, NY December 28th 2010, 08:03 AM #5
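For completeness, the scalar-multiplication check that the replies leave to the reader goes through the same way (written in the thread's notation): if $(x_1,\ldots,x_n)$ satisfies $x_1+\cdots+x_n=0$ and $\lambda\in\mathbb{R}$, then $\lambda x_1+\cdots+\lambda x_n=\lambda(x_1+\cdots+x_n)=\lambda\cdot 0=0$, so $\lambda(x_1,\ldots,x_n)$ stays in $X_0$. Together with the zero vector and the closure under addition shown above, this confirms that $X_0$ is a linear subspace of $\mathbb{R}^n$.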
{"url":"http://mathhelpforum.com/advanced-algebra/167009-linear-subspace-problem.html","timestamp":"2014-04-17T05:13:41Z","content_type":null,"content_length":"45036","record_id":"<urn:uuid:10415a94-b880-458b-9561-6da9b4747b8f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Unconditional vs. conditional GMM

socrates posted on Saturday, November 11, 2006 - 2:30 am
Dear Dr. Muthén, With an unconditional GMM I identified five latent classes in a longitudinal dataset. These trajectories agree with theoretical expectations. Subsequently, I entered time-invariant covariates to check if these variables allow one to predict growth parameter variance within these latent classes. While I found some significant predictors with this procedure, some of the resulting trajectories look quite different compared to the ones of the unconditional GMM. How do I have to interpret this? Thank you very much for your help!

Linda K. Muthen posted on Saturday, November 11, 2006 - 8:32 am
The following paper discusses this issue. It can be downloaded from the website. See Recent Papers. Muthén, B. (2004). Latent variable analysis: Growth mixture modeling and related techniques for longitudinal data. In D. Kaplan (ed.), Handbook of quantitative methodology for the social sciences (pp. 345-368). Newbury Park, CA: Sage Publications.

Tracie B posted on Monday, May 11, 2009 - 9:37 am
I have 2 questions on this topic: 1. I am using unconditional GMM to generate classes, exporting them, and then examining covariates as well as outcomes, i.e. I am treating 'class' like any other categorical variable. Many posts talk about 'leading to distorted results' but intuitively this is what makes sense to me - using the naturally occurring patterns (regardless of who populates them), then exploring who is in what class, and what the consequence of being in that class is. Is there a problem with this approach, i.e. estimating unconditionally and then dealing with covariates subsequently? 2. Unrelated to above: when I use GMM (linear), I get the expected number of classes (4) and patterns, in line with a priori theory; if I fix the variance, it is still similar. If I fix the variance AND the intercept, I no longer get the patterns I anticipated - just more and more parallel lines. I know that there is a lot of within-person variation, and in my data I have 20 time points. So it works well with GMM but not with LCGA. Is this a problem? My feeling is that I have to be able to allow within-class variation in order to have them emerge.

Tracie B posted on Monday, May 11, 2009 - 9:42 am
To clarify the above: I meant if I fix the variance of the slope, and then of both the slope and the intercept.

Michael Spaeth posted on Tuesday, May 12, 2009 - 8:25 am
1.) If you export to another program, class membership is based on "most likely" and fuzzy boundaries (that would be: class membership based on posterior probabilities) are not taken into account in further analyses. By contrast, if you model the covariates in your LGMM (within Mplus), effects of covariates are controlled for this class uncertainty. Another point is that in the latter kind of model you can reduce the potential of a misspecified model (often you need direct effects of covariates on the indicators). However, if your classification quality is very good (entropy above .90, average posterior probabilities also) you can probably stick with saving class membership based on most likely (because class uncertainty is very low in this case). I also often check whether there are very few "borderline cases", i.e. individuals with nearly the same posterior probability to be assigned to each class. 2.) Interesting issue.
However, I always do a LCGA first and then free growth factor variances in a stepwise fashion (first intercept, then slope) and finally I try to let these variances differ across classes. If your growth factor variances are significant within classes, I would leave them in the model, because this is closer to reality. Addtionally, out of my experiences, I get the impression that one overestimates the number of classes with restricted growth factor variances. Amir Sariaslan posted on Tuesday, May 12, 2009 - 8:39 am For the first part of your post, you should read the following paper: Clark, S. & Muthén, B. (2009). Relating latent class analysis results to variables not included in the analysis. Submitted for publication. Linda K. Muthen posted on Tuesday, May 12, 2009 - 8:56 am For the second part, if GMM makes more sense both statistically and substantively because you have variation within classes, I would use GMM. Tracie B posted on Tuesday, May 12, 2009 - 7:46 pm Thank you very much, this was a great help! Youngoh Jo posted on Thursday, December 15, 2011 - 8:51 pm Using unconditional models I found 3 groups. When I use conditional models, I put the following commands: data: file is "E:\data\w1-w6.csv"; variable: NAMES ARE ID SEX sc1-sc6 pa1-pa5 mo1-mo5 ab1-ab5 ta1-ta5 dp1-dp5 ne2-ne5; USEVARIABLES ARE SEX sc2-sc6; MISSING ARE ALL (999); classes = c (3); ANALYSIS: TYPE = MIXTURE; starts = 20 2; model: %overall% i s | sc2@0 sc3@1 sc4@2 sc5@3 sc6@4; i s on sex; c on sex; OUTPUT: tech1 tech8; and I got the following error message: *** ERROR in Model command Unknown variable(s) in an ON statement: C What's wrong with this? Thanks in advance. Linda K. Muthen posted on Friday, December 16, 2011 - 6:14 am Please send the full output and your license number to support@statmodel.com. Back to top
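The classification-quality checks mentioned in this thread (entropy, average posterior probabilities per assigned class, borderline cases) are easy to compute once the posterior class probabilities have been saved. A rough Python sketch, not Mplus syntax, assuming a saved n-by-K matrix of posteriors:

    import numpy as np

    def classification_quality(post):
        # post: (n_subjects, n_classes) posterior class probabilities, rows summing to 1
        n, k = post.shape
        p = np.clip(post, 1e-12, 1.0)
        entropy = 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))   # 1.0 = perfect separation
        assigned = post.argmax(axis=1)                               # "most likely" class
        avg_post = [post[assigned == c, c].mean() for c in range(k)]
        sorted_p = np.sort(post, axis=1)
        borderline = int((sorted_p[:, -1] - sorted_p[:, -2] < 0.10).sum())
        return entropy, avg_post, borderline

Entropy close to 1, high average posteriors for each assigned class, and few borderline cases describe the situation in which exporting most-likely class membership loses little information.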
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=14&page=1788","timestamp":"2014-04-20T20:01:06Z","content_type":null,"content_length":"30684","record_id":"<urn:uuid:49ace99b-6263-4fd9-b6a9-1107f6fb5b57>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Hex Win Proof? Date: Mar 21, 2004 8:14 AM Author: Steven Meyers Subject: Re: Hex Win Proof? gcrhoads@yahoo.com (Glenn C. Rhoads) wrote in message news:<3396efc6.0403201232.221c2e9d@posting.google.com>... > gcrhoads@yahoo.com (Glenn C. Rhoads) wrote in message news:<3396efc6.0403200010.aab22a4@posting.google.com>... > > w.taylor@math.canterbury.ac.nz (Bill Taylor) wrote in message news:<716e06f5.0403181938.72a82f90@posting.google.com>... > > > > > > It is an old theorem that in Hex, once the board has been completely > > > filled in with two colours, there *must* be a winning path for one > > > or other of them. > > > > > > Now, I can prove this easily enough mathematically, but I'm wondering if > > > there is a simple proof, or proof outline, that would be understandable > > > and reasonably convincing to the intelligent layman. > > > > > > Can anyone help out please? > > > > Here's a pretty proof involving a closely related game called Y. > > (Don't hesitate about the length of this post; it consists mostly > > of descriptions of Y, the relation to Hex, and the definition > > of "Y-reduction". The final proof is very short.) > > > > A Y board consists of a triangular configuration of hexagons. Play is > > the same as Hex except that the goal is to build a chain that connects > > all three sides (as in Hex, corner hexes belong to both sides). > > > > Y is a generalization of Hex as illustrated in the following ascii diagram. > > (view in a fixed font) > > > > _ > > _/*> > _/*\_/ > > _/*\_/*> > _/ \_/*\_/ > > _/ \_/ \_/*> > _/ \_/ \_/ \_/ > > / \_/ \_/ \_/ > > \_/ \_/ \_/ \_/ > > \_/ \_/ \_/+> > \_/ \_/+\_/ > > \_/+\_/+> > \_/+\_/ > > \_/+> > \_/ > > > > > > * are pieces of one player and + are pieces of the other player. > > > > Playing Y from the above legal position is equivalent to playing > > Hex in the unoccupied hexes of this position because a chains wins > > for Y if and only if the chain restricted to originally unoccupied > > hexes wins for Hex (in the Hex version, player * is connecting the > > southwest border up to the northeast). Thus, to prove a completely > > filled-in Hex board has a winning chain for one of the two players, > > it suffices to prove that a filled-in Y board has a winning chain > > for one of the two players. > > > > The proof makes use of "Y-reduction." > > > > Y-reduction: From a completely filled-in order n Y-board,construct > > a completely filled-in order n-1 Y-board as follows. For each > > triangle of three mutually adjacent hexes oriented the same way as > > the board, replace it with a single hexagon whose piece color is > > that of the majority of the colors in the triangle. > > Ex. > > _ > > _/4\ > > _/3\1/ _/3> > _/2\1/3\ _/2\1/ > > /1\1/2\2/ /1\1/2> > \1/1\2/2\ \1/1\2/ > > \2/1\3/ \2/1> > \3/1\ \3/ > > \4/ > > > > > > The triangle of hexes with coordinates > > x x+1 x > > y y y+1 > > > > on the order-n board corresponds to the hexagon > > x > > y on the order n-1 board > > > > > > Lemma: If a player has a winning chain on the reduced board, then > > that player also has a winning chain on the original board. > > > > It's easy to verify (but ugly to write up rigorously) that if a player > > has a winning chain on the reduced board, then he also has a winning > > chain through the corresponding triangles on the original board. > Consider two adjacent hexes along the winning chain on the reduced > board. Each of corresponding adjacent/overlapping triangles must > have at least two pieces of the winning color. 
> Case 1: The unique hex in the overlap is of the winning color. > Then both of the other disjoint "truncated triangles" has some > piece of the winning color. Both pieces are adjacent to the > piece in the overlap and hence must form a chain connecting the > corresponding edges. > Case 2: The unique hex in the overlap is not of the winning color. > Then all four of the remaining hexes are occupied by the winning > color. Again these hexes form a chain connecting the corresponding > edges. (it helps to draw a diagram). > A triangle corresponding to a reduced hex, contains at least two > pieces of the same color and thus, each edge of the triangle must > contain at least one piece (since each edge contains all but one > of the hexes in the triangle). This observation completes the proof. > (you need this fact to establish that for any piece connected to the > edge of the reduced board, that color must also be connected to the > corresponding edge in the original board. This fact also acts as a > base case for induction.) > > [note: the converse of the lemma is also true but we don't need it.] > > Thm. One of the player's has a winning chain on a completely filled-in > > Y board. > > > > Proof: > > From any completely filled-in Y board, perform a sequence of > > Y-reductions to reduce the board to a filled-in order 1 board. > > The filled-in order 1 board obviously has a winning chain and > > hence by the lemma, each Y board in the sequence has a winning > > chain. > > > > > > I wish I could take credit for this proof but it is due to > > Ea Ea (his former name was Craige Schensted). > Steve Meyers posted a link to a web page containing essentially > this same argument. However, this page claims *only* the converse > of the above lemma which is not sufficient for the proof; the > other direction is what is needed. Hi Glenn, I can well believe you're right. I'm not a math professional, so sometimes the finer things escape me:( I'm delighted to see that you and others have taken an interest in this result, and that you seem to appreciate its beauty. For a while there I feared almost nobody cared about it. But since there is apparently some interest now, I'll go ahead and post how this proof came to surface. In 2000 I got very interested in the game of Hex. I played against the computer and against myself hundreds of times, and I began to suspect there might be a "pattern" in the game. In fact I thought that I was going to be able to solve it. At the time I didn't know that Hex was PSPACE-complete, or I probably would never have attempted I got nowhere with Hex and in late 2001 switched to Y after realizing that Y was more fundamental than Hex; I had always thought the reverse. It was already known to some people that Hex was a special case of Y, but it came as a surprise to me! In January 2002 I had a much bigger surprise: I discovered that with the game of Y, an n game (where n is the number of points/cells per side of the board) consists of three n-1 games, at least two of which are necessary and sufficient to win in order to win the n game; each n-1 game, in turn, consists of three n-2 games, at least two of which are necessary and sufficient to win in order to win the n-1 game; and so on until n=1, where each point/cell is itself a "game." I immediately notified Mark Thompson of the discovery. He replied the same day, saying that the property was very interesting, and he suggested that Ea Ea (who was one of Y's co-inventors) be contacted about it. 
Ea Ea subsequently sent an e-mail to me, in which he said that my discovery was very closely related to a discovery he'd made some thirty years before. He then proceeded to describe that discovery, which is what you called "Y-reduction" in your post. The terminology I use for it is "micro-reduction," since it takes the small triangles as its starting point. (Unfortunately Ea Ea never published his result, so I went ahead and named it.) I called what I found "macro-reduction," since it starts at the big triangles. These very closely related processes together comprise what I called the "n-1 property" of the game of Y. Mark suspected that Y-reduction could not be used to solve a moderately complicated Hex position because of the vast number of steps required to perform the calculation. I didn't understand this at first, but he was right. I sent the information along to Hex programmer Jack van Ryswyck, who made the same observation as Mark. (It was then that I learned about Hex's PSPACE-completeness.) Jack came up with the idea to perform the reduction probabilistically in order to get a rough but fast estimate of which player had the advantage in a given position. (That is, his idea was to use Y-reduction not to solve a position but rather as the basis of an evaluation function.) Jack's method made use of micro-reduction rather than macro-reduction because the former avoids unnecessary repetition of steps --- this is also why micro-reduction makes for a cleaner no-draw proof than macro-reduction. For more information on Jack's programming idea, check his paper "Search and Evaluation in Hex," available on his website. Sorry to be long-winded,
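The reduction described in this thread is short enough to implement directly. Here is an illustrative Python version (my own code, using the same coordinates as the quoted message: the triangle at (x, y), (x+1, y), (x, y+1) of an order-n board maps to cell (x, y) of the order n-1 board):

    def majority(a, b, c):
        return a if a in (b, c) else b

    def reduce_once(board):
        # board[x][y] holds 1 or 2; row x has len(board) - x entries.
        n = len(board)
        return [[majority(board[x][y], board[x + 1][y], board[x][y + 1])
                 for y in range(n - 1 - x)]
                for x in range(n - 1)]

    def y_winner(board):
        # Repeatedly Y-reduce a completely filled-in board down to a single hex.
        while len(board) > 1:
            board = reduce_once(board)
        return board[0][0]

    example = [[1, 2, 1],   # order-3 board: cell (x, y) is example[x][y]
               [1, 2],
               [2]]
    print(y_winner(example))   # 2 -- player 2's chain (0,1)-(1,1)-(2,0) touches all three sides

Because each reduction preserves a winner (the lemma quoted above), the single surviving colour is the winner of the original filled-in board, and, via the embedding described earlier, of the corresponding Hex position.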
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=558884","timestamp":"2014-04-20T01:33:14Z","content_type":null,"content_length":"11494","record_id":"<urn:uuid:aab1d8ef-aec4-4431-ac6d-497c5eae1f71>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Thermal conductivity and thermal boundary resistance of nanostructures We present a fabrication process of low-cost superlattices and simulations related with the heat dissipation on them. The influence of the interfacial roughness on the thermal conductivity of semiconductor/semiconductor superlattices was studied by equilibrium and non-equilibrium molecular dynamics and on the Kapitza resistance of superlattice's interfaces by equilibrium molecular dynamics. The non-equilibrium method was the tool used for the prediction of the Kapitza resistance for a binary semiconductor/metal system. Physical explanations are provided for rationalizing the simulation results. 68.65.Cd, 66.70.Df, 81.16.-c, 65.80.-g, 31.12.xv Understanding and controlling the thermal properties of nanostructures and nanostructured materials are of great interest in a broad scope of contexts and applications. Indeed, nanostructures and nanomaterials are getting more and more commonly used in various industrial sectors like cosmetics, aerospace, communication and computer electronics. In addition to the associated technological problems, there are plenty of unresolved scientific issues that need to be properly addressed. As a matter of fact, the behaviour and reliability of these devices strongly depend on the way the system evacuates heat, as excessive temperatures or temperature gradients result in the failure of the system. This issue is crucial for thermoelectric energy-harvesting devices. Energy transport in micro and nanostructures generally differs significantly from the one in macro-structures, because the energy carriers are subjected to ballistic heat transfer instead of the classical Fourier's law, and quantum effects have to be taken into account. In particular, the correlation between grain boundaries, interfaces and surfaces and the thermal transport properties is a key point to design materials with preferred thermal properties and systems with a controlled behaviour. In this article, the prediction tools used for studying heat transfer in low-cost superlattices for thermoelectric conversion are presented. The technology used in the fabrication of these superlattices is based on the method developed by Marty et al. [1,2] to manufacture deep silicon trenches with submicron feature sizes (Figure 1). The height and periodicity of the wavelike shape of the surfaces can be monitored. When the trenches are filled in with another material, they give rise to superlattices with rough interfaces. This was the motivation for studying both the thermal conductivity and the Kapitza resistance [3] of superlattices with rough interfaces. We focus mostly at the influence of interfacial width of the superlattices made of two semiconductor-like materials, with simple Lennard-Jones potential for the description of interatomic forces. Simulations of the Kapitza resistance for binary system of silicon with metal are also presented. These interfaces are difficult to be modelled, first of all because of the phonon-electron coupling that occurs at these interfaces and secondly because of the plethora of potentials which can be used. The choice of potential is based in a comparison of their performance to predict in a correct manner, the harmonic and anharmonic properties of the material. Results on the Kapitza resistance of a silver /silicon interfaces are also presented. Figure 1. SEM pictures obtained by the group ESYCOM and ESIEE at Marne-la-Vallee, France, showing two submicron trenches in a silicon wafer. 
Fabrication process of superlattices To reduce the processing time and the manufacturing costs, vertical build superlattices are proposed as opposed to conventional planar superlattices. In Figure 2, a schematic representation of the two types of superlattices is given comparing their geometries. With this process, silicon/metal superlattices can be fabricated. Although final device will have material layers in the tens of nanometre range, 5- and 15-μm width superlattices are fabricated using typical UV lithography. These thick layer superlattices are necessary to develop an accurate model of thermal resistance at the metal/semiconductor interfaces. Figure 2. Structural comparison between conventional superlattices and vertical superlattices. Vertical superlattices were obtained by patterning and then etching the silicon by deep reactive ion etching (DRIE). The trenches were filled using electrodeposition on a thin metallic seed layer. In Figure 3, a scanning electron microscope (SEM) image of a processed silicon wafer with micro-superlattices is given. There are voids at the bottom of the trenches which are explained by the absence of the seed layer at the bottom, and the fact that they prevent any copper growth. These voids were successfully eliminated by increasing the amount of seed layer sputtered in subsequent trials. The excess copper on top, resulting from the trenches being shorted to facilitate electroplating, was polished away using chemical-mechanical polishing. This is done to electrically isolate the trenches from one another so as to allow thermo-electrical conversion. Figure 3. SEM image of copper-filled 5-μm-wide trenches. We aim to fabricate optimized vertical nano-superlattices (with layers ranging <100 nm each) with high thermoelectric efficiency. High thermoelectric efficiency occurs for high electrical conductivity and low thermal conductivity. The electronic conductivity will be controlled though the Si doping and the use of metal to fill in the trenches. The film thickness needs to be decreased, to decrease the individual layer thermal conductivity and increase the influence of the interfacial thermal resistance. To obtain such dimension on a large area at low cost, we are developing a process based on the transfer by DRIE of 30-nm line patterns made of di-block copolymers [4]. For this purpose, it is required to characterize them to the best possible degree of accuracy. Measurements at this scale will possibly be plagued by quantum effects [5,6]. That is the reason why we fabricated first micro-scale superlattices, to make thermoelectric measurements free from quantum effects and then applied the method to characterize the final nano-superlattice thermoelectric devices. Simulations: thermal conductivity of superlattices When the layer thickness of the superlattices is comparable to the phonon mean free path (PMFP), the heat transport remains no longer diffusive, but ballistic within the layers. Furthermore, decreasing the dimensions of a structure increases the effects of strong inhomogeneity of the interfaces. Interfaces, atomically flat or rough, impact the selection rules, the phonon density of states and consequently the hierarchy or relative strengths of their interactions with phonons and electrons. Thus, it is important to study and predict the heat transfer and especially the influence of the height of superlattice's interfaces on the cross and in-plane thermal conductivities. 
This is a formidable task, from a theoretical point of view, as one needs to account for the ballistic motion of the phonons and their scattering at interfaces. Molecular dynamics is a relatively simple tool which accounts for these phenomena, and it has been applied successfully to predict heat-transfer properties of superlattices. Two routes can be adopted to compute the thermal conductivity, namely, the non-equilibrium (NEMD) [7] and the equilibrium molecular dynamics (EMD) [8]. In this article, we have considered both methods to characterize the thermal anisotropy of the superlattices. In the widely used direct method (NEMD), the structure is coupled to a heat source and a heat sink, and the resulting heat flux is measured to obtain the thermal conductivity of the material [9,10]. Simulations are held for several systems of increasing size and finally thermal conductivity is extrapolated for a system of infinite size [11,12]. The NEMD method is often the method of choice for studies of nanomaterials, while for bulk thermal conductivity, particularly that of high conductivity materials, the equilibrium method is typically preferred because of less severe size effects. Comparisons between the two methods have been done previously, concluding that the two methods can give consistent results [13,14]. Green-Kubo method for nanostructures is proven to have greater uncertainties than those of NEMD, but a correct description of thermal conductivity with EMD is achieved by establishing statistics from several results, starting from different initial conditions. The superlattice system under study is made of superposition of Lennard-Jones crystals and fcc structures, oriented along the [001] direction. The molecular dynamics code LAMMPS [15-17] is used in all the NEMD and EMD simulations. The mass ratio of the two materials of the superlattice is taken as equal to 2, and this ratio reproduces approximately the same acoustic impedance difference as that between Si and Ge. Periodic boundary conditions are used in all the three directions. Superlattices with period of 40a[0 ]are discussed, where a[0 ]is the lattice constant. The shape of the roughness is chosen as a right isosceles triangle. The roughness height was varied from one atomic layer (1 ML = 1/2a[0]) to 24a[0]. For each roughness, heat transfer simulations with NEMD were performed for several system sizes in the heat flux direction to extrapolate the thermal conductivity for a system of infinite size [11]. For EMD simulations, the size of the system is smaller than with NEMD simulations and only one size is considered 20a[0 ]× 10a[0 ]× 40a[0], where the last dimension is perpendicular to interfaces. In Figure 4, we gathered the results for the in-plane and cross-plane thermal conductivities obtained by the two methods. The thermal conductivity is measured here in Lennard-Jones units (LJU), which correspond in real units typically to W/mK. At the low temperatures considered (T = 0.15 LJU), the period of the superlattice is comparable to that of the PMFP. The qualitative interpretation of the results shows that the thermal contact resistance of the interface has a strong influence on the superlattice thermal conductivity. The results previously obtained by NEMD method [12], and, in particular, the existence of a minimum for the in-plane thermal conductivity are now confirmed using the EMD method. The evolution of the TC as a function of the interfacial roughness is found to be non-monotonous. 
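As a concrete illustration of the post-processing behind the direct method, the two steps, Fourier's law on the steady-state temperature profile and then an extrapolation in inverse length, can be sketched as follows (schematic Python, not the scripts actually used in this study):

    import numpy as np

    def conductivity_from_profile(z, temperature, heat_flux):
        # k = -J / (dT/dz); the gradient comes from a linear fit of the profile.
        dT_dz = np.polyfit(z, temperature, 1)[0]
        return -heat_flux / dT_dz

    def extrapolate_to_bulk(lengths, conductivities):
        # Fit 1/k = a * (1/L) + b; the infinite-size conductivity is 1/b.
        a, b = np.polyfit(1.0 / np.asarray(lengths), 1.0 / np.asarray(conductivities), 1)
        return 1.0 / b

The size extrapolation in the second function is what is meant above by obtaining the thermal conductivity for a system of infinite size.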
When the roughness of the interfaces is smaller than the superlattice's period, the in-plane thermal conductivity first decreases with increasing roughness. It reaches a minimum value which is lower by 35-40% compared to the thermal conductivity of the superlattice with smooth interfaces. For larger roughness, the thermal conductivity increases. The initial decrease of the in-plane thermal conductivity is quite intuitive if one considers the behaviour of phonons at the interfaces, which may be described by two different models. In the acoustic mismatch model [18,19], the energy carriers are modelled as waves propagating in continuous media, and phonons at the interfaces are either transmitted or specularly reflected. For atomically smooth interfaces, it is assumed that phonons experience mainly specular scattering. The roughness enhances diffuse scattering at the interface in all space direction. Figure 4. Cross-plane and in-plane thermal conductivity functions of the height of interfaces calculated by EMD and NEMD methods. In the diffuse-mismatch model, on the other hand, phonons are diffusively scattered at interfaces, and their energy is redistributed in all the directions [20]. In practice, the acoustic model describes the physics of interfacial heat transfer at low temperatures, for phonons having large wavelengths, while the diffuse model is relevant for small wavelengths phonons. At the considered temperature in the current study, we are most probably in an intermediate situation where the physics is not captured by one single model. Nevertheless, both models predict that a moderate amount of interfacial roughness will tend to decrease the in-plane TC, because rough interfaces will increase specular reflection and diffusive scattering of phonons travelling in the in plane direction. However, if the roughness is large enough, then locally, the phonons encounter smooth-like interfaces, and the partial group of phonons that are diffusely scattered in all space direction decreases. This might explain the further increase of the thermal conductivity when the roughness is large enough. The behaviour of the cross-plane thermal conductivity is different: it increases monotonously with the interfacial roughness. For smooth interfaces, the cross-plane thermal conductivity is 50% lower than the in-plane thermal conductivity. This anisotropy has to be taken into account for thermal behaviour of systems made of sub-micronic solid layers. Invoking again the acoustic mismatch model, we conclude that the transmission coefficient of the solid/solid interface is smaller than the reflection coefficient, which is not surprising if we consider the acoustic impedance ratio of the two materials. Roughness increases the transmission coefficient as it increases the diffused scattering at the interface [12]. The same qualitative trend regarding the influence of the roughness on the thermal conductivity of superlattices has been reported previously for materials with diffusive behaviour, without thermal contact resistance [21]. In this case, the variation of the in-plane and cross-plane conductivities with the interfacial roughness is due to the heat flux line deviation that minimizes the heat flux path in the material that has the lower thermal conductivity. This tends to increase the cross-plane thermal conductivity. On the other hand, the increase of the roughness leads to the heat flux constrictions that decrease the in-plane thermal conductivity. 
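To give the acoustic-mismatch picture a number: for normal incidence the standard textbook expression for the energy transmission coefficient between two media is t = 4 Z_1 Z_2 / (Z_1 + Z_2)^2, with Z_i the acoustic impedance (density times sound velocity) of medium i. This expression is quoted here only for orientation and is not taken from the article; it makes explicit why a finite acoustic impedance mismatch, such as the Si/Ge-like mass ratio of 2 used above, leaves part of the phonon flux reflected at every smooth interface.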
The qualitative interpretation of the results shows that the thermal contact resistance of the interface has a strong influence on the superlattice thermal conductivity. Simulations: Kapitza resistance Superlattices with rough interfaces The discussion above shows that obviously the phononic nature of the energy carriers has to be taken into account to understand heat transfer in superlattices, and that the evolution of the superlattice TC may be qualitatively understood in terms of interfacial or Kapitza resistance. At a more quantitative level, the Kapitza resistance is defined by and thus quantifies the temperature jump ΔT across an interface subject to a constant flowing heat flux J. In general, the Kapitza resistance may be computed using NEMD simulations by measuring the temperature jump across the considered interface. For superlattices, however, the direct method can be used only to measure easily the Kapitza resistance only for smooth surfaces, because of the difficulty involved in measuring locally the temperature jump for non-planar interfaces. To compute the Kapitza resistance for superlattices with rough interfaces, we have used EMD simulations, and the relation between R[K ]and the auto-correlation of the total flux q(t) flowing across an interface: where S is the interface area. The latter formula expresses the fact that the resistance is controlled by the transmission of all the phonons travelling across the interface. In the situation of interest to us here, the transmission of phonons is expected to be strongly anisotropic, and thus the resistance developed by an interface should depend on the main direction of the heat flux. To measure this anisotropy, we have generalised the previous equation and introduced the concept of directional resistance, by considering the heat flux q[θ](t) in the direction θ in (0,π/2) with the normal of the interface. The resistance in the direction θ may be then quantified by the generalised Kapitza resistance: This angular Kapitza resistance quantifies the transmission of the heat flux in the direction making an angle θ with the normal of the interface. Figure 5 displays the generalised Kapitza resistances measured with MD for superlattices with rough interfaces having variable roughnesses. Again, the results are displayed in LJU, which correspond to a resistance of 10 × 10^-9 m^2K/W in SI units. The period of the superlattice considered is larger than the PMFP, which here is estimated to be around 20a[0]. We have focused on two peculiar orientations θ = 0 and θ = π/2 which correspond, respectively, to the cross-plane and in-plane directions of the superlattices. It is striking that, for a given interfacial roughness, the computed resistance depends on the orientation θ. We have found that for almost all the systems analysed, the Kapitza resistance is larger in the cross-plane direction than in the direction parallel to the interfaces. This is consistent with the observation that the thermal conductivity is the largest in the in-plane direction (Figure 4). Again, this reinforces the message that the heat transfer properties of superlattices are explained by the phononic nature of the energy carriers, and that theses energy carriers feel less friction in the in-plane direction than that in the direction normal to the interfaces. Measuring the directional Kapitza resistance is a first step towards a quantitative measurement of the transmission factor of phonons depending on their direction of propagation across an interface. Figure 5. 
Kapitza resistance function of the height of superlattice's interfaces. Silver/silicon interfaces The Cu and Ag films on Si-oriented substrates are the principal combinations in large-scale integrate circuits. Furthermore, with the fabrication process of vertical-built superlattices described in previous section, we are interested in the heat transfer phenomena related to the metal/semiconductor interfaces. The prediction of heat transfer in these systems becomes challenging when the thickness of the layers reaches the same order of magnitude as the PMFP. For heat transfer studies, MD is well suited for dielectrics since only phonons carry heat. For metals, coupling between phonons and electrons can be modelled with the two-temperature model [22]. For the above systems, it has been proven that the Kapitza resistance is mainly due to phonon energy transmission through the interfaces [23,24]. The interfacial thermal resistance, known as the Kapitza resistance [25,26] is important to be studied as it might become of the same order of magnitude than the film thermal resistance. In this section, interatomic potentials for Ag and Si are discussed. Using NEMD simulations, for an average temperature of 300 K, the Kapitza resistance of Si/Ag systems is determined. Modified embedded-atom method (MEAM) is the only appropriate potential that can be used for metal/semiconductor systems. The first nearest-neighbour MEAM (1NN MEAM) potential by Baskes et al. [27] and the second nearest-neighbour MEAM (2NN MEAM) by Lee [28] are examined in the current study. The general MEAM potential is a good candidate for simulating the dynamics of a binary system with a single type of potential. For example, it can be applied for both fcc and bcc structures. Furthermore, this potential includes directional bonding, and thus can be applied for Si systems. In dielectric materials heat transfer depends mainly on phonons' propagation and their interactions. To make the best choice among a great number of potentials for calculating thermal conductivity, the dispersion curves and the lattice expansion coefficient were studied. Electron transport predominates at the heat transfer in metals. MD cannot simulate electron movement, although some models are suggested in the literature to include the interactions between electron and phonons but without yet a satisfying results for investigating heat transfer. As it is not possible to test the quality of electronic interactions, only the lattice properties are commented to determine the correct potential for simulating Ag. The dispersion curves in the [ξ, 0, 0], [ξ, ξ, 0] and [ξ, ξ, ξ] directions are determined and compared with the experimental dispersion curves of Ag [29] for the 1NN MEAM and 2NN MEAM (Figure 6). Figure 6. Phonon dispersion curves using the potentials of 1NN MEAM, and 2NN MEAM for Ag. To compare the anharmonic properties of Ag, the equilibrium lattice parameter is simulated for different temperatures using the 1NN MEAM, and 2NN MEAM potentials. This is modelled with an fcc slab consisting of 108 atoms of silver with periodic boundary conditions in all the directions. Initially, the temperature of the crystal was 0 K. For each temperature the simulations are performed with a 20 ps constant-pressure simulation (NPT) during which the volume of the box occupied by the atoms for each temperature is stored. The mean value of the volumes of the equilibrated energy is used to calculate the linear expansion coefficient. 
For each constant temperature, the volume of the simulation box is divided by the volume at 0 K. This ratio is directly proportional to the expansion coefficient. The expansion coefficients of Ag, obtained for the two potentials are compared to the experimental values [30] in Figure 7. The uncertainties on the linear expansion coefficient variation are less than 5% compared with the experimental values. Figure 7. Linear thermal expansion for Ag using 1NN MEAM and 2NN MEAM potentials. The 2NN MEAM potential allows recovering the expansion coefficient for Ag quite accurately while the 1NN MEAM potential significantly underestimates it. For Ag, the two potentials provide a good description for the more basic properties, such as cohesive energy, lattice parameters and bulk modulus [31]. Even if the 1NN MEAM potential gives results closer to the experimental values for dispersion curves, the values obtained for the linear thermal expansion are not reasonable. Therefore, the 1NN MEAM potential cannot be considered appropriate for simulating heat transfer for silver. Regarding the investigation of heat-transfer temperature, the 2NN MEAM gives the best results for harmonic and anharmonic properties for silver and for silicon using the previous results of the literature [32]. Kapitza resistance is predicted for the 2NN MEAM Si/Ag potential. The interface thermal resistance, also known as Kapitza resistance, R[K], creates a barrier to heat flux and leads to a discontinuous temperature, ΔT, drop across the interfaces. The interactions between silicon and silver are described thanks to the 2NN MEAM potential in which the set of parameters has been determined to produce a realistic atomic configuration of interfaces. The model structure consists of two slabs in contact: one of Si with a diamond structure, and one of Ag. The periodic boundary conditions are used in all the directions and the Si crystal is composed of 7200 atoms, while the Ag crystal is composed of 2560 atoms. In the first stage of MD simulation, the system is equilibrated at a constant temperature of 300 K for 20 ps using an integration time step of 5 fs. The heat sources are placed in the extremes of the structure, and one layer of Si and Ag is frozen to block the movement of Si atoms in the z-direction. The temperature gradient is formed in the z-direction, imposing hot and cold temperatures above and below the fixed atoms in z-direction. Using an integration time step of 5 fs, the simulation is run for 5.0 ns, with an average system temperature of 300 K. In Figure 8, the temperature profile for the Si/Ag system is shown. Figure 8. Temperature profile for the Si/Ag system. The Kapitza resistance obtained with NEMD is 4.9 × 10^-9 m^2K/W. The temperature profile for Si is almost flat due its high thermal conductivity. With MD simulations, it is not possible to simulate heat transfer due to the electrons, and thus the steep slope of Ag is due to its low lattice thermal conductivity. The value R[KT ]is in the range 1.4-125 × 10^-9 m^2K/W which also includes the Kapitza conductance for dielectric/metal systems [33,34]. Conclusions - Discussion A new fabrication method for superlattices is used, reducing the time and fabrication costs. With the fabrication of vertical superlattices, several questions a rose for the influence of the roughness' height of the superlattices and the quality of interface on the thermal transport. 
Conclusions - Discussion

A new fabrication method for superlattices is used, reducing the fabrication time and cost. With the fabrication of vertical superlattices, several questions arose concerning the influence of the roughness height of the superlattice interfaces and of the interface quality on thermal transport. When the length of the superlattice's period is comparable to the phonon mean free path, the heat transfer becomes ballistic. The cross-plane and in-plane thermal conductivities of a dielectric/dielectric superlattice (representing Si-Ge systems) are predicted using EMD and NEMD simulations. Both methods give the same tendencies for the anisotropic heat transfer in superlattices with rough interfaces. The in-plane thermal conductivity exhibits a minimum for a certain interfacial width, while the cross-plane thermal conductivity increases modestly with increasing interface width. The Kapitza resistance of these interfaces is also studied with a methodology proposed in this article, introducing the concept of directional thermal resistance. The values presented here are coherent with the difference between the in-plane and cross-plane thermal conductivities. Molecular dynamics simulations are also used to study metal/semiconductor interfaces. Among all the interatomic potentials that are available, the MEAM potential is a good alternative to work with, since it can be used for different materials. At 300 K, the 2NN MEAM potential gives the best results for the fundamental properties associated with the heat transfer of silicon and silver. Previous results [23,24,32] suggest that the interfacial thermal conductance depends predominantly on the phonon coupling between the silicon and metal lattices, so that Si/Ag can be simulated without considering the contribution of electron heat transfer. The magnitude of the Kapitza resistance obtained for the Si/Ag system is within the range of Kapitza resistances reported in the literature. This study shows that making rough instead of smooth interfaces in superlattices is a useful way to decrease the thermal conductivity and, finally, to design materials with desired thermal properties. Furthermore, when more interfaces are added (rough or smooth), i.e. when the superlattice's period decreases, the interfacial thermal resistance becomes comparable to the thermal resistance of the superlattice's layers. With these two parameters, namely the introduction of rough interfaces and the decrease of the superlattice's period, systems with controlled values of the thermal conductivity can be created.

Abbreviations

DRIE: deep reactive ion etching; SEM: scanning electron microscope; PMFP: phonon mean free path; NEMD: non-equilibrium molecular dynamics; EMD: equilibrium molecular dynamics; LJ: Lennard-Jones

Authors' contributions

KT: Calculated the theoretical values for the thermal conductivity of superlattices with NEMD, participated in the calculations of the Kapitza resistance of the semiconductor superlattices with the EMD method, and drafted and revised the manuscript. JP: Participated in the design and fabrication (all steps) of the superlattices with micro- and nano-scale layers. CC: Calculated the Kapitza resistance of metal/semiconductor interfaces. SM: Calculated the Kapitza resistance of the semiconductor superlattices with the EMD method and drafted the manuscript. DA: Participated in the development of the patterning of the "nano" superlattices using di-block copolymer. FM: Participated in the development of the high aspect ratio plasma etching of silicon for the "micro" and "nano" superlattices. TB: Participated in the development of the high aspect ratio plasma etching of silicon for the "micro" and "nano" superlattices. XK: Participated in the coordination. PC: Participated in the coordination and drafted and revised the manuscript.
PB: Conceived and coordinated the COFISIS project, and also participated in the design of the superlattices and drafted and revised the manuscript. This study has been conducted within the framework of the projects ANR-COFISIS (ANR-07-NANO-047-03). COFISIS (Collective Fabrication of Inexpensive Superlattices in Silicon) is a project with collaboration between theoretical and experimental groups in ESIEE Paris, CETHIL and MATEIS at INSa of Lyon. The project COFISIS intends to develop integrated silicon-based and low-cost 1. Marty F, Rousseau L, Saadany B, Mercier B, Francais O, Mita Y, Bourouina T: Advanced etching of silicon based on deep reactive ion etching for silicon high aspect ratio microstructures and three-dimensional micro- and nanostructures. Microelectronics Journal 2005, 36(Issue 7):673-677. Publisher Full Text 2. Mita I, Kubota M, Sugiyama M, Marty F, Bourouina T, Shibata T: Aspect Ratio Dependent Scalloping Attenuation in DRIE and an Application to Low-Loss Fiber-Optical Switch. In Proc. of IEEE International Conference on MicroElectroMechanical Systems (MEMS 2006). Istanbul, Turkey; 2006:114-117. 3. Register RA, Angelescu D, Pelletier V, Asakawa K, Wu MW, Adamson DH, Chaikin PM: Shear-Aligned Block Copolymer Thin Films as Nanofabrication Templates. Journal of Photopolymer Science and Technology 2007, 20:493. Publisher Full Text 4. Radkowski P III, Sands PD: Quantum Effects in Nanoscale Transport: Simulating Coupled Electron and Phonon Systems in Quantum Wires and Superlattices. 5. Kotake S, Wakuri S: Molecular dynamics study of heat conduction in solid materials. 6. Frenkel D, Smit B: Understanding Molecular Simulation: From Algorithms to Applications. San Diego: Academic Press Inc; 1996. 7. Chantrenne P, Barrat JL: Analytical model for the thermal conductivity of nanostructures. Superlattices and Microstructures 2004, 35:173. Publisher Full Text 8. Chantrenne P, Barrat JL: Finite size effects in determination of thermal conductivities: Comparing molecular dynamics results with simple models. J Heat Transfer - Transactions ASME 2004, 126:577. Publisher Full Text 9. Schelling PK, Phillpot SR, Keblinski P: Comparison of atomic-level simulation methods for computing thermal conductivity. Physical Review B 2002, 65:144306. Publisher Full Text 10. Termentzidis K, Chantrenne P, Keblinski P: Nonequilibrium molecular dynamics simulation of the in-plane thermal conductivity of superlattices with rough interfaces. Physical Review B 2009, 79:214307. Publisher Full Text 11. Poetzsch R, Böttger H: Interplay of disorder and anharmonicity in heat conduction: Molecular-dynamics study. Physical Review B 1994, 50:15757. Publisher Full Text 12. Landry ES, McGaughey AJH, Hussein MI: Molecular dynamics prediction of the thermal conductivity of Si/Ge superlattices. Proc. ASME/JSME Thermal Engineering summer Heat Transfer Conf 2007, 2:779. 13. LAMMPS Molecular Dynamics Simulator [http://lammps.sandia.gov] webcite 14. Plimpton S: Fast Parallel Algorithms for Short-range Molecular Dynamics. J Computational Physics 1995, 117:1. Publisher Full Text 15. Plimpton S, Pollock P, Stevens M: Particle-Mesh Ewald and rRESPA for Parallel Molecular Dynamics Simulations. In Proc. 8th SIAM Conf. on Parallel Processing for Scientific Computing. Minneapolis, MN; 1997. 16. Swartz ET, Pohl RO: Thermal boundary resistance. Reviews of Modern Physics 1989, 61:605. Publisher Full Text 17. Reddy P, Castelino K, Majumdar A: Diffuse mismatch model of thermal boundary conductance using exact phonon dispersion. 
Applied Physics Letters 2005, 87:211908. Publisher Full Text 18. Ladd AJC, Moran B, Hoover WG: Lattice thermal conductivity - a comparison of molecular dynamics and anharmonic lattice dynamics. Physical Review B 1986, 34:5058. Publisher Full Text 19. Rutherford AM, Duffy DM: The effect of electron-ion interactions on radiation damage simulations. Journal of Physics - Condensed Matter 2007, 19:496201. Publisher Full Text 20. Mahan GD: Kapitza thermal resistance between a metal and a nonmetal. Physical Review B 2009, 79:075408. Publisher Full Text 21. Lyeo HK, Cahill DG: Thermal conductance of interfaces between highly dissimilar materials. Physical Review B 2006, 73:144301. Publisher Full Text 22. Hu M, Keblinski P, Schelling PK: Kapitza conductance of silicon--amorphous polyethylene interfaces by molecular dynamics simulations. Physical Review B 2009, 79:104305. Publisher Full Text 23. Luo TF, Lloyd JR: Non-equilibrium molecular dynamics study of thermal energy transport in Au-SAM-Au junctions. J Heat and Mass Trasfer 2010, 53:1. Publisher Full Text 24. Baskes MI, Nelson JS, Wright AF: Semiempirical modified embedded-atom potentials for silicon and germanium. Physical Review B 1989, 40:6085. Publisher Full Text 25. Lee BJ, Baskes MI: Second nearest-neighbor modified embedded-atom-method potential. Physical Review B 2000, 62:8564. Publisher Full Text 26. Lynn JW, Smith HG, Nicklow RM: Lattice Dynamics of Gold. Physical Review B 1973, 8:3493. Publisher Full Text 27. Touloukian YS, Taylor RE, Desai PD: Thermal Expansion-Metallic Elements and Alloys. Volume 12. New York: Plenum; 1975. 28. Lee BJ, Shim JH, Baskes MI: Semiempirical atomic potentials for the fcc metals Cu, Ag, Au, Ni, Pd, Pt, Al, and Pb based on first and second nearest-neighbor modified embedded atom method. Physical Review B 2003, 68:144112. Publisher Full Text 29. Da Cruz CA, Chantrenne P, Kleber X: Molecular Dynamics simulations and Kapitza conductance prediction of Si/Au systems using the new full 2NN MEAM Si/Au cross-potential. In Proc ASME/JSME. Honolulu, Hawaii, USA; 2011. 8th Thermal Engineering Joint Conference AJTEC2011, March 13-17, 2011 30. Smith AN, Hostetler JL, Norris PM: Thermal boundary resistance measurements using a transient thermoreflectance technique. Microscale Thermophysical Engineering 2000, 4:51. Publisher Full Text 31. Stoner RJ, Maris HJ: Kapitza conductance and heat flow between solids at temperatures from 50 to 300 K. Physical Review B 1993, 48:16373. Publisher Full Text Sign up to receive new article alerts from Nanoscale Research Letters
{"url":"http://www.nanoscalereslett.com/content/6/1/288?fmt_view=classic","timestamp":"2014-04-20T21:44:57Z","content_type":null,"content_length":"118783","record_id":"<urn:uuid:b494290b-9e02-4352-998d-90590e38142a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
When forcing with a poset, why do we order the poset in the order that we do?

In forcing, we take a collection of forcing conditions and impose a partial order on them. The convention is that if $p$ is stronger than $q$, then we say $p < q$. This is perfectly fine, but it seems intuitively backwards to me. If I were designing the notation for forcing, I would want the stronger condition to be larger. (Something I read says, I think, that Shelah uses the opposite convention that I find more intuitive. Is this so?) Further, if we are forcing with a collection of partial functions (as we often do), we want the stronger condition to be the partial function with the larger domain. This leads us to a definition of the poset order whereby $f < g$ iff $f \supset g$. This seems notationally awkward. Nonetheless, Cohen must have had some good reasons for choosing the order that he did. What is/was the rationale for Cohen's notational convention? Does it have benefits today, or is it just an artifact of an older approach to forcing?

lo.logic forcing notation soft-question

2 Answers

The reason is that in the corresponding Boolean algebra, 0 is less than 1. That is, stronger conditions correspond to lower Boolean values in the Boolean algebra. The trivial condition (which is often the empty function in the cases you mention) corresponds to the element 1 in the Boolean algebra. We definitely want to regard lower elements of the Boolean algebra as stronger, since they have more implications in the Boolean algebra sense. (After all, 0 = false is surely the strongest assumption you could make, right?) Meanwhile, you can be comforted by the fact that Shelah and many researchers surrounding him (and a few others) use the alternative forcing-upwards notation. Nevertheless, the forcing-downwards notation is otherwise nearly universal. This difference in culture sometimes causes some funny problems when authors from opposing camps collaborate. Sometimes a compromise is struck never officially to use the order explicitly, and to write "stronger than" or "weaker than" in words, rather than take sides. Another alternative is to use the forcing turnstile symbol itself as the order, but this solution suffers from the fact that it only works when the order is separative.

Edit. I looked at Cohen's PNAS 1963 article, and in that article, he does not use the forcing-downwards notation at all. Rather, he uses the containment symbol $\supset$ explicitly. Thus, the assumption in the question that Cohen did indeed use the downward-oriented relation may be unwarranted. (Perhaps this view is a little softened by the observation that he consistently uses $\supset$ rather than $\subset$.) Here is my theory. In logic and set theory there has been a long-standing tradition of consistently using the relation ≤ in preference to ≥, presumably to avoid the problems associated with mixing up the greater-than/less-than order. Perhaps this goes back to Cantor? Now, in the case of forcing, it is usually the case that you have a condition P already, and you want to ask whether there is Q stronger than P with a certain property (one rarely asks for weaker conditions this way). Thus, if you have the downward-oriented relation, you can economically say "there is Q ≤ P such that..." This is just how Cohen's text reads, since he says "there is Q \supset P such that ...".
Generalizing Cohen's containment order to an arbitrary partial order, one then wants to interpret containment as ≤. Further support for this convention arrived a few years later with the fact that it agrees with the Boolean algebra order, so it became standard (except for the Shelah school and a few others).

The Boolean algebra approach to forcing is a later development though, isn't it? So this can't have been Cohen's rationale. – Oliver Mar 11 '10 at 19:25

I think it wasn't that much later (mid/late 60s). So the Boolean algebras were there when forcing was promoted by researchers such as Solovay. But the up/down controversy has been there from the start, with down currently predominating. – Joel David Hamkins Mar 11 '10 at 19:42

The following is how I made peace with the notation. The set of realizations of a stronger condition is a smaller set, so $p\leq q$ iff $S_p\subseteq S_q$. $\leq$ is suggestive of $\subseteq$.

This idea has some affinity with the Boolean algebra approach, since the canonical map of the poset P into its Boolean algebra completion B as the regular open algebra of P (the collection of all regular open subsets of P) takes a condition p essentially to the lower cone { q | q $\leq$ p } of all conditions below p. (One should actually use the interior of the closure of this set.) And since stronger conditions have smaller lower cones, the order turns into subset as you mention. – Joel David Hamkins Apr 15 '10 at 12:37
{"url":"http://mathoverflow.net/questions/17892/when-forcing-with-a-poset-why-do-we-order-the-poset-in-the-order-that-we-do?sort=votes","timestamp":"2014-04-19T02:09:05Z","content_type":null,"content_length":"60980","record_id":"<urn:uuid:1ca1e396-8515-47e0-972f-ec97682580c3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHODS AND SYSTEMS FOR INTENSITY MODELING INCLUDING POLARIZATION Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Embodiments of the present invention provide computer readable media encoded with executable instructions for modeling an intensity profile at a surface illuminated by an illumination source through a mask. Further embodiments provide methods for correcting a mask pattern and methods for selecting an illumination source. Still further embodiments provide masks and integrated circuits produced using a model of the illumination source. Embodiments of the present invention take into account the polarization of the illumination source and are able to model the effect of polarization on the resultant intensity profile. A computer readable medium encoded with computer executable instructions for modeling an intensity profile at a surface resulting from illuminating a mask with an illumination system having an illumination source, including instructions for:receiving polarization data describing the illumination source;generating a polarization data model of the illumination source using the polarization data by evaluating a plurality of two-dimensional functions at a plurality of points in a pupil plane associated with the illumination system;generating a matrix operator using the polarization data model;generating a mask function based on the mask; andconvoluting the matrix operator with the mask function to generate intensity profile data representing the intensity profile at the surface. The computer readable medium of claim 1 further including instructions for altering the mask function based on the intensity profile. The computer readable medium of claim 1 wherein the instructions for receiving polarization data include instructions for receiving data analytically describing the polarization of the illumination The computer readable medium of claim 1 wherein the instructions for receiving polarization data include instructions for receiving data empirically describing the polarization of the illumination The computer readable medium of claim 1 wherein the plurality of two-dimensional functions includes a first function describing polarization of the illumination source along a first axis, a second function describing polarization of the illumination source along a second axis, and a third and a fourth function each describing a coupling component of the illumination source between the first and second axes. The computer readable medium of claim 5 wherein the first and second axes are orthogonal. The computer readable medium of claim 5 wherein the plurality of two-dimensional functions further includes a fifth function describing a non-polarization component. The computer readable medium of claim 1 wherein the plurality of two-dimensional functions include:K al- pha.(f,g)*DoP(f,g);K_cos XY=sin α(f,g)*cos α(f,g)*cos φ(f,g)*DoP(f,g);K_sin XY=sin α(f,g)*cos α(f,g)*sin φ(f,g)*DoP(f,g); andK_non pol=1-DoP(f,g);where f and g represent pupil variables in a spatial frequency domain, such that coordinates in the pupil plane are defined by an f value and a g value, α(f,g) represents a polarization angle as a first function in the pupil plane; φ(f,g) represents a phase angle between a first and a second dimension polarization as a second function in the pupil plane, and DoP(f,g) represents a degree of polarization as a third function in the pupil plane. 
The computer readable medium of claim 8 wherein the matrix operator is given by:Koptics=norm(cvr(J@K YY)*KpupilY- +2*cvr(J@K_cos i+c- vr pol)*KpupilY);wher- e norm includes a normalization operation; cvr includes a covariant operation; J represents a first pupil function, KpupilX and KpupilY represent second and third pupil functions in a first and second dimension, respectively, Kpcross and Kpcross_i represent third and fourth pupil functions capturing a real and an imaginary coupling between dimensions, respectively, and (includes a convolution operation. A method of selecting at least one component of an illumination system including an illumination source for illuminating a surface through a mask, the method comprising:generating a polarization data model of the illumination source by evaluating a plurality of two-dimensional functions at a plurality of points in a pupil plane associated with the illumination system;modeling an intensity profile at the surface based in part on the polarization data model and a mask function; andaltering at least one of the mask function and the illumination source based on the modeled intensity profile. The method of claim 10 further comprising generating a mask corresponding to the altered mask function. The method of claim 10 further comprising:receiving polarization data describing the illumination source; andwherein the act of generating a polarization data model uses the polarization data. The method of claim 10 wherein the act of modeling the intensity profile further comprises:generating a matrix operator using the polarization data model;generating a mask function based on the mask; andconvoluting the operator with the mask function to generate a modeled intensity profile. The method of claim 10 wherein the act of altering the mask function includes describing additional features on the mask. The method of claim 10 wherein the act of altering the mask function includes removing description of portions of features from the mask. The method of claim 10 wherein the plurality of two-dimensional functions includes a first function describing polarization of the illumination source along a first axis, a second function describing polarization of the illumination source along a second axis, and a third and a fourth function each describing a coupling component of the illumination source between the first and second axes. The method of claim 10 wherein the plurality of two-dimensional functions include:K YY=si- n.sup. XY=sin α(f,g)*cos α(f,g)*cos φ(f,g)*DoP(f,g);K_sin XY=sin α(f,g)*cos α(f,g)*sin φ(f,g)*DoP(f,g); andK_non pol=1-DoP(f,g);where f and g represent pupil variables in a spatial frequency domain, such that coordinates in the pupil plane are defined by an f value and a g value, α(f,g) represents a polarization angle as a first function in the pupil plane; φ(f,g) represents a phase angle between a first and a second dimension polarization as a second function in the pupil plane, and DoP(f,g) represents a degree of polarization as a third function in the pupil plane. 
A mask for use in an illumination system having an illumination source, to reproduce a predetermined feature at a surface, the mask comprising:a pattern of at least partially opaque and at least partially transparent features, the pattern selected based on the predetermined feature and based on a modeled intensity profile at the surface generated by the illumination source, the modeled intensity determined by acts including:receiving polarization data describing the illumination source;receiving initial mask profile data based on the predetermined feature;generating a polarization model of the illumination source by evaluating a plurality of two-dimensional functions at a plurality of points in a pupil plane associated with the illumination system; andgenerating a matrix operator using the polarization model;generating a mask function based on the initial mask profile data; andconvoluting the operator with the mask function to generate the intensity profile. A mask according to claim 18 wherein the plurality of two-dimensional functions includes a first function describing polarization of the illumination source along a first axis, a second function describing polarization of the illumination source along a second axis, and a third and a fourth function each describing a coupling component of the illumination source between the first and second An integrated circuit having features constructed, at least in part, by illuminating a surface with an illumination system having an illumination source through a mask, the mask comprising:a pattern of at least partially opaque and at least partially transparent features, the pattern selected based on the predetermined feature and based on a modeled intensity profile at the surface generated by the illumination source, the modeled intensity determined by acts including:receiving polarization data describing the illumination source;receiving initial mask profile data based on the predetermined feature;generating a polarization model of the illumination source by evaluating a plurality of two-dimensional functions at a plurality of points in a pupil plane associated with the illumination system; andgenerating a matrix operator using the polarization model;generating a mask function based on the initial mask profile data; andconvoluting the operator with the mask function to generate the intensity profile. A method of selecting an illumination source for an illumination system to produce at least one predetermined feature on a surface by illuminating a mask, the method comprising:receiving mask data; modeling the intensity profile generated at the surface when the mask is illuminated using the illumination source, the act of modeling including:receiving polarization data describing an initial illumination source;generating a polarization model of the illumination source by evaluating a plurality of two-dimensional functions at a plurality of points in a pupil plane associated with the illumination system;generating a matrix operator using the polarization model;generating a mask function based on the mask data; andconvoluting the operator with the mask function to generate an intensity profile; andselecting a desired polarization for the illumination source based on the intensity profile. 
The method of claim 21 wherein the plurality of two-dimensional functions includes a first function describing polarization of the illumination source along a first axis, a second function describing polarization of the illumination source along a second axis, and a third and a fourth function each describing a coupling component of the illumination source between the first and second axes. The method of claim 22 wherein the first and second axes are orthogonal. The method of claim 22 wherein the plurality of two-dimensional functions further includes a fifth function describing a non-polarization component. The method of claim 21 wherein the plurality of two-dimensional functions include:K YY=si- n.sup. XY=sin α(f,g)*cos α(f,g)*cos φ(f,g)*DoP(f,g);K_sin XY=sin α(f,g)*cos α(f,g)*sin φ(f,g)*DoP(f,g); andK_non pol=1-DoP(f,g);where f and g represent pupil variables in a spatial frequency domain, such that coordinates in the pupil plane are defined by an f value and a g value, α(f,g) represents a polarization angle as a first function in the pupil plane; φ(f,g) represents a phase angle between a first and a second dimension polarization as a second function in the pupil plane, and DoP(f,g) represents a degree of polarization as a third function in the pupil plane. The method of claim 25 wherein the matrix operator is given by:Koptics=norm(cvr(J@K YY)*KpupilY+2*cvr- (J@K_cos 5*- (J@K_non pol)*KpupilY);where norm includes a normalization operation; cvr includes a covariant operation; J represents a first pupil function, KpupilX and KpupilY represent second and third pupil functions in a first and second dimension, respectively, Kpcross and Kpcross_i represent third and fourth pupil functions capturing a real and an imaginary coupling between dimensions, respectively, and @ includes a convolution operation. TECHNICAL FIELD [0001] This invention relates to intensity profile modeling and illumination systems for photolithography. BACKGROUND OF THE INVENTION [0002] A general schematic diagram of a photolithography system is shown in FIG. 1 . Energy from an illumination source 100 is passed through a mask 110 and focused onto a photo-sensitive surface 120. The mask contains patterned regions, such as regions 130, 131, 132. The goal of the photolithography system is generally to reproduce the pattern on the mask 110 on the photo-sensitive surface 120. One or more optical components--such as lenses 140 and 142--may be used to focus and otherwise manipulate the energy from the illumination source 100 through the mask 110 and onto the surface 120. The resulting image on the photo-sensitive surface allows the surface 120, and ultimately underlying layers, to be patterned. Photolithography is widely used in typical semiconductor processing facilities to create intricate features on various layers forming integrated circuits or other micromachined structures. As the feature sizes desired for reproduction on the photosensitive surface shrink, it is increasingly challenging to accurately reproduce a desired pattern on the surface. Numerous optical challenges are presented, including those posed by diffraction and other optical effects or process variations as light is passed from an illumination source, through a system of lenses and the mask to finally illuminate the surface. Optical proximity correction tools, such as Progen marketed by Synopsys, are available to assist in developing mask patterns that will reflect optical non-idealities and better reproduce a desired feature on a desired surface. 
For example, "dog-ears" or "hammer head" shapes may be added to the end of linewidth patterns on the mask to ensure the line is reproduced on the surface completely, without shrinking at either end or rounding off relative to the desired form. For example, FIG. 2 depicts an initial mask pattern 200 designed to reproduce rectangle 201. The actual feature reproduced on surface 230, after the lithography, may look something like feature 220, considerably shorter and rounder than the desired rectangle 201. An optical proximity correction system, however, could generate a modified mask pattern 250. The modified pattern 250 yields, after lithography, the feature 260, considerably closer to the initial desired feature 201. Optical proximity correction tools, used to generate the modified mask pattern 250, for example, generate models of the intensity profile at the photo-sensitive surface after illumination of a mask with an illumination source. Intensity is typically represented by a scalar value. The intensity at a surface illuminated through a mask in a lithography system can be calculated generally by taking the convolution of a function representing the mask with a set of functions representing the lithography system that includes the illumination source. The set of functions representing the lithography system are eigenfunctions of a matrix operator. Hopkins imaging theory provides the rigorous mathematical foundation for intensity calculations. The theory provides that intensity, in the spatial domain, is given by: )- dx [2] where x and y are coordinates in the spatial domain . O represents a mask pattern, H is a lens pupil function and J is a source pupil intensity function. A Fourier transform yields intensity in the frequency domain, given by: )H*(f- +f i2π[(f.- sup.1 [2] where f and g are coordinates in the frequency domain . As described further below, the frequency domain is also representative of the pupil plane in an illumination system. This comprehensive theory provides for calculations of a complete intensity profile. To be useful, however, an optical proximity correction tool should generate an intensity profile within a reasonable amount of time to practically alter the mask design. Accordingly, the optical proximity correction tools make various simplifications and approximations of actual optical effects. In particular, optical proximity correction tools generally do not take into account polarization of an illumination source, or variation of that polarization across the illumination pupil. The polarization of an electromagnetic wave is generally the angle of oscillation. For example, in FIG. 3 , wave 310 is shown propagating in direction 300. The oscillations, however, may occur at any angle perpendicular to the direction of propagation, shown by circle 320. The polarization angle of energy emitted by an illumination source may alter the diffraction effects experienced by the energy, and therefore ultimately, the pattern generated at the photo-sensitive surface. BRIEF DESCRIPTION OF THE DRAWINGS [0010]FIG. 1 is a schematic diagram of a lithography system in accordance as known in the art. [0011]FIG. 2 is a schematic diagram of the operation of optical proximity correction as known in the art. [0012]FIG. 3 is a schematic diagram showing a polarization of an electromagnetic wave as known in the art. [0013]FIG. 4 is a flowchart of a method of generating intensity profile data according to an embodiment of the present invention. [0014]FIG. 
5 is a schematic diagram of a system according to an embodiment of the present invention. [0015]FIG. 6 is a schematic diagram of a corrected mask generator according to an embodiment of the present invention. [0016]FIG. 7 is a schematic diagram of a desired polarization data generator according to an embodiment of the present invention. DETAILED DESCRIPTION [0017] Embodiments of the present invention take into account the polarization of an illumination source and are able to model the effect of polarization on the resultant intensity profile. Computationally, embodiments of the invention decompose a polarization pupil into a plurality of two-dimensional functions, also referred to as kernels. The plurality of two-dimensional functions are derived from Hopkins imaging theory. Methods of the present invention proceed by evaluating the plurality of two-dimensional functions at a plurality of points in a pupil plane of the illumination system to generate a polarization data model for the illumination source. This polarization data model is used to generate a matrix operator according to embodiments of the present invention, and the matrix operator is diagonalized to yield a set of eigenfunctions, which are convoluted with a function representative of the mask to generate the intensity profile. It will be clear to one skilled in the art that embodiments of the invention may be practiced without various details discussed below. In some instances, well-known optical and other lithography system components, controllers, control signals, and software operations have not been shown in detail in order to avoid unnecessarily obscuring the described embodiments of the invention. An embodiment of a method and an embodiment of a set of instructions encoded on computer-readable media according to the present invention is shown in FIG. 4 . Polarization data about the illumination source is received 400. The polarization data may include one or more analytical expressions describing polarization of the illumination source. This allows a designer to theoretically define a polarization of an illumination source. In some embodiments, the polarization data includes empirical data gathered by experimentally measuring an illumination source. This may allow for input of experimental polarization data gathered from an illumination source, in specific operating conditions in some embodiments. Any source of electromagnetic energy may be modeled according to embodiments of the present invention including conventional light sources, or ultraviolet laser sources. A complete polarization pupil describing the illumination source is decomposed into a plurality of two-dimensional functions for the purposes of modeling the illumination source, including the effects of the source's polarization. The plurality of two-dimensional functions are evaluated 410 to generate a polarization model from the received polarization data. The polarization model may allow modeling of a resultant intensity profile in a reasonable amount of computational time. In some embodiments of the present invention, the plurality of two-dimensional functions includes a first function describing polarization of the illumination source along a first axis, a second function describing polarization of the illumination source along a second axis, and a third and fourth function each describing a coupling component of the illumination source between the first and second axes. In some embodiments, the first and second axes are orthogonal axes. 
There are two coupling functions in some embodiments because a first coupling function describes a real portion of the coupling, and a second coupling function describes an imaginary portion of the coupling. In some embodiments, a fifth two-dimensional function describes a non-polarization component of the illumination source. The plurality of two-dimensional functions are evaluated in a pupil plane of the illumination system in some embodiments of the present invention. Referring back to the diagram of an illumination system in FIG. 1 , the pupil plane 150 is a location within the system where an image of the Fourier transform of the mask 110 is generated. At the pupil plane 150, the image can be described and modeled in the frequency domain in some embodiments. Accordingly, in some embodiments, the five two-dimensional functions used to decompose a complete polarization pupil can be given as: XY=sin α(f,g)*cos α(f,g)*cos φ(f,g)*DoP(f,g); XY=sin α(f,g)*cos α(f,g)*sin φ(f,g)*DoP(f,g); and where f and g represent pupil variables in frequency domain, such that coordinates in the pupil plane are defined by an f value and a g value, α(f,g) represents a polarization angle as a first function in the pupil plane; φ(f,g) represents a phase angle between a first and a second dimension polarization as a second function in the pupil plane, and DoP(f,g) represents a degree of polarization as a third function in the pupil plane. The derivation of these functions is now described. The theory provided below is provided to enable those skilled in the art to understand the origin of the five two-dimensional equations used in the embodiment described above and is not intended to limit embodiments of the invention to those five equations or to derivation in this manner. Recall Hopkins equation for intensity, expressed in vector form: ( x , y ) = ∫ ∫ ∫ ∫ ∫ ∫ J ( f g ) H ( f + f 1 , g + g 1 ) H * ( f + f 2 , g + g 2 ) i = x , y j = x , y k = x , y , z M ik ( f + f 1 , g + g 1 ) M jk * ( f + f 2 , g + g 2 ) E i E j * O ( f 1 , g 1 ) O * ( f 2 , g 2 ) - 2π [ ( f - f 2 ) x + ( g 1 - g 2 ) y ] f g f 1 g 1 f 2 g 2 ##EQU00001## The summation in Hopkins equation above is expressing the electric field squared, where M is the matrix mapping the electric field from an object to an image. The summation can then be expressed as: - ,g Now, polarization degree, angle (α) and phase shift (φ) can be represented in a vector function E (f,g) given as: where E(f,g) is the square root of the degree of polarization. Assuming a simple case, where the degree of polarization=1 and φ=0, we can write E =(cos(α(f,g)), sin(α(f,g))); and define functions K_sxx=cos α (f,g); K_syy=sin α (f,g) and K_sxy=sin α(f,g)*cos α(f,g) Expanding the ∥E∥ equation above for this case, we have: 2 = k = x , y , z M xk ( f + f 1 , g + g 1 ) M xk * ( f + f 2 , g + g 2 ) K_sxx ( f , g ) + k = x , y , z M yk ( f + f 1 , g + g 1 ) M yk * ( f + f 2 , g + g 2 ) K_syy ( f , g ) + k = x , y , z ( M xk ( f + f 1 , g + g 1 ) M yk * ( f + f 2 , g + g 2 ) + M yk ( f + f 1 , g + g 1 ) M xk * ( f + f 2 , g + g 2 ) ) K_sxy ( f , g ) ##EQU00002## An identity is used to put the cross term in a bilinear form. 
The identity is given as: M xk ( f + f 1 , g + g 1 ) M yk * ( f + f 2 , g + g 2 ) + M yk ( f + f 1 , g + g 1 ) M xk * ( f + f 2 , g + g 2 ) = 1 2 [ ( M xk ( f + f 1 , g + g 1 ) + M yk ( f + f 1 , g + g 1 ) ) ( M xk * ( f + f 2 , g + g 2 ) + M yk * ( f + f 2 , g + g 2 ) ) - ( M xk ( f + f 1 , g + g 1 ) - M yk ( f + f 1 , g + g 1 ) ) ( M xk * ( f + f 2 , g + g 2 ) - M yk * ( f + f 2 , g + g 2 ) ) ] ##EQU00003## Using this identity, and the ∥E∥ expression above, Hopkins equation becomes: ( x , y ) = ∫ ∫ ∫ ∫ ∫ ∫ J ( f , g ) H ( f + f 1 , g + g 1 ) H * ( f + f 2 , g + g 2 ) k = x , y , z M xk ( f + f 1 , g + g 1 ) M xk * ( f + f 2 , g + g 2 ) K_sxx ( f , g ) + k = x , y , z M yk ( f + f 1 , g + g 1 ) M yk * ( f + f 2 , g + g 2 ) K_syy ( f , g ) + k = x , y , z 2 [ 0.5 ( M xk ( f + f 1 , g + g 1 ) + M yk ( f + f 1 , g + g 1 ) ) 0.5 ( M xk * ( f + f 2 , g + g 2 ) + M yk * ( f + f 2 , g + g 2 ) ) - 0.5 ( M xk ( f + f 1 , g + g 1 ) - M yk ( f + f 1 , g + g 1 ) ) 0.5 ( M xk * ( f + f 2 , g + g 2 ) - M yk * ( f + f 2 , g + g 2 ) ) ] K_sxy ( f , g ) O ( f 1 , g 1 ) O * ( f 2 , g 2 ) - 2 [ ( f 1 - f 2 ) x + ( g 1 - g 2 ) y ] f g f 1 g 1 f 2 g 2 ##EQU00004## Writing this in the space domain, Hopkins equation takes the form: ( x , y ) = ∫ ∫ ∫ ∫ [ ( cvr ( J @ K_sxx ) k = x , y , z b ln ( H @ M xk ) ) + ( cvr ( J @ K_syy ) k = x , y , z b ln ( H @ M yk ) ) + ( 2 * cvr ( J @ K_sxy ) k = x , y , z [ b ln ( 0.5 ( H @ M xk + H @ M yk ) ) - b ln ( 0.5 ( H @ M xk - H @ M yk ) ) ] ) ] ##EQU00005## O ( x + x 1 , y + y 1 ) O * ( x + x 2 , y + y 2 ) x 1 y 1 x 2 y 2 ##EQU00005.2## bln is a bilinear operation function, cvr is a covariant function and @ represents a convolution operation. The first summation term, summing bln(H@M ) is represented as a pupil function, KpupilX, in optical proximity correction systems. The second summation term, summing bln(H@M ) is represented as a second pupil function, KpupilY and the third by a third function Kpcross. These functions are used to generate a matrix operator given as: syy)*Kpu- pilY+2*cvr(Ksource@K where norm is a normalization, cvr is a covariant operation, @ is a convolution operation and * is a multiplication. Recall that this solution is for a simple case. For an arbitrary polarization angle and phase shift, Hopkins equation in the space domain takes the form: ( x , y ) = ∫ ∫ ∫ ∫ [ ( cvr ( J @ K_sxx ) k = x , y , z b ln ( H @ M xk ) ) + ( cvr ( J @ K_syy ) k = x , y , z b ln ( H @ M yk ) ) + ( 2 * cvr ( J @ K_sxy ) k = x , y , z [ b ln ( 0.5 ( H @ M xk + H @ M yk ) ) - b ln ( 0.5 ( H @ M xk - H @ M yk ) ) ] ) ] ##EQU00006## O ( x + x 1 , y + y 1 ) O * ( x + x 2 , y + y 2 ) x 1 y 1 x 2 y 2 ##EQU00006.2## where K cos_xy=cos α(f,g) sin α(f,g)cos φ(f,g) K sin_xy=cos α(f,g)sin α(f,g)sin φ(f,g) and K_i=i (a constant function of 90 degree phase shift) The first three summation terms, as before, represent KpupilX, KpupilY, and Kpcross, however, for this more generic case, there is a fourth summation term, which can be called Kpcross_i. For a general polarization filter, therefore, in some embodiments five two-dimensional functions are used to generate a polarization model and, ultimately, to generate a matrix operator. These five functions, derived above are: XY=sin α(f,g)*cos α(f,g)*cos θ(f,g)*DoP(f,g); XY=sin α(f,g)*cos α(f,g)*sin θ(f,g)*DoP(f,g); and Referring back to FIG. 4 , a matrix operator is generated 420 using the polarization model. 
The matrix operator in embodiments of the present invention is generated using the above five functions and can be expressed as: Koptics=norm(cvr(J@K_XX)*KpupilX+cvr(J@K_YY)*KpupilY+2*cvr(J@K_cos XY)*Kpcross+2*cvr(J@K_sin XY)*Kpcross_i+cvr(0.5*(J@K_non pol))*KpupilX+cvr(0.5*(J@K_non pol))*KpupilY); where norm includes a normalization operation; cvr includes a covariant operation; J represents a first pupil function; KpupilX and KpupilY represent second and third pupil functions in a first and second dimension, respectively; Kpcross and Kpcross_i represent third and fourth pupil functions capturing a real and an imaginary coupling between dimensions, respectively; and @ includes a convolution operation. This matrix operator can be generated in a reasonable amount of time using available computational systems in embodiments of the invention, and takes polarization of an illumination source into account. Referring again to FIG. 4, a mask function is received 430 describing the mask, such as mask 110 in FIG. 1. The mask generally contains a pattern of at least partially opaque and at least partially transparent features such that energy passed through the mask is filtered by the mask. In some embodiments, the mask includes opaque features patterned on a transparent substrate, such as glass. Substantially any mask may be used in embodiments of the present invention, manufactured in any way known or developed in the art of mask fabrication. In some embodiments, the structure and materials used to form the mask are determined by the needs of the illumination system and the illumination source.
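The kernel evaluation that feeds this operator can be illustrated numerically. The sketch below is not part of the patent: the grid size, the example polarization pupil (the functions α, φ and DoP of f and g) and the source function J are assumptions, and the first two kernels are written as the cos²/sin² terms implied by the ∥E∥² expansion in the derivation above.

```python
import numpy as np

# Pupil-plane grid in the spatial-frequency variables f and g (assumed normalized to the pupil radius)
n = 129
f, g = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
inside = f**2 + g**2 <= 1.0

# Example polarization description of the source (placeholders, not measured data)
alpha = np.arctan2(g, f)        # polarization angle alpha(f, g): here, azimuthally varying
phi = np.zeros_like(f)          # phase angle phi(f, g) between the two polarization components
dop = 0.9 * np.ones_like(f)     # degree of polarization DoP(f, g)

# The five two-dimensional kernels evaluated over the pupil plane
K_XX     = np.cos(alpha)**2 * dop
K_YY     = np.sin(alpha)**2 * dop
K_cosXY  = np.sin(alpha) * np.cos(alpha) * np.cos(phi) * dop
K_sinXY  = np.sin(alpha) * np.cos(alpha) * np.sin(phi) * dop
K_nonpol = 1.0 - dop

# Restrict to the pupil and weight by a source intensity J(f, g); a top-hat source is assumed here
J = inside.astype(float)
kernels = {name: J * K * inside for name, K in
           [("K_XX", K_XX), ("K_YY", K_YY), ("K_cosXY", K_cosXY),
            ("K_sinXY", K_sinXY), ("K_nonpol", K_nonpol)]}

for name, K in kernels.items():
    print(name, "integrated weight:", K.sum())
```

These weighted pupil maps play the role of the J@K terms in the operator above; assembling the full operator and diagonalizing it into eigenfunctions for the convolution with the mask function is the step described next.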
It is to be understood that any intensity profile data, polarization data, mask function data, and functions and equations described herein can be encoded as data or instructions on one or more computer readable media, and transmitted using one or more transmission mediums. One or more output devices, such as output device 540, may be in communication with the processor 520 to receive data from the processor, such as intensity profile data. The output devices may include, for example, a display to display the intensity profile data or another memory to store the intensity data. In some embodiments, a display provides a graphical or numerical display of the intensity profile data. In other embodiments, the intensity profile data is stored in a physical memory. In other embodiments, the intensity profile data may not be specifically output but may be used by the processor to create other or different data, such as a mask or illumination source alteration. Some embodiments of the present invention provide methods and systems for correcting a mask pattern. An embodiment of a mask correcting system 600 is shown in FIG. 6 . As described above with reference to FIGS. 4 and 5, polarization data 610, which may represent data stored on a storage media or transmitted over a communication medium, is received by polarization model generator 620. The polarization model generator evaluates a plurality of two-dimensional functions at a plurality of points in the pupil plane, as described above, to generate a polarization model of an illumination source. The polarization model is provided to an intensity profile generator 630 that receives data 640 encoding an initial mask pattern, such as feature 645. The resultant intensity profile data is provided to a corrected mask data generator 650. The mask pattern may be corrected through any known methodologies for altering mask features to improve the reproduction of a desired feature. The intensity profile data provided to the corrected mask data generator 650, however, includes effects from polarization of the illumination source in accordance with embodiments of the present invention. In some embodiments, as understood in the art, features are added to the initial mask pattern to form the corrected mask, and in other embodiments, features may be removed from the initial mask pattern. Generally, a corrected mask pattern is desired that more accurately reproduces a desired feature at the surface. That is, a comparison of desired features with the intensity profile is conducted and the mask pattern may be altered based on the comparison. The corrected mask data generator 650 generates corrected mask data describing corrected features, and a corrected mask 660 may be generated according to the corrected mask data, for example containing corrected feature 665. The polarization model generator, intensity profile generator, and corrected mask data generator may all be implemented in software, hardware, or combinations thereof. One or all of the components may be implemented using a processing device coupled to computer readable media encoding appropriate instructions, as generally illustrated in FIG. 5 . Multiple components may be executed by the same processing system in some embodiments. The masks produced by embodiments of the present invention may be used in lithography systems of generally any type. 
These lithography systems may be used in various semiconductor or other micromachining fabrication facilities to create various products including integrated circuit chips having features patterned using the mask. The masks and products made using embodiments of the present invention may have improved feature size or more accurate reproduction of features than those made without use of polarization data during the intensity modeling process. This may ultimately decrease failure rate of these final products or make smaller or more complicated feature arrangements possible. Some embodiments of the present invention also provide systems and methods for selecting an illumination source for an illumination system. An embodiment of a system 700 for generating desired polarization data is shown in FIG. 7 . As described above with reference to FIGS. 4, 5 and 6, polarization data 710 describing an initial illumination source is received. A polarization model generator 720 generates a polarization model by evaluating a plurality of two-dimensional functions at a plurality of locations within a pupil plane, and an intensity profile generator 730 receives the polarization data model and mask data 740 to generate an intensity profile. A desired polarization data generator 750 may then generate, based on a comparison of the intensity profile with desired features described by the mask data 740, desired polarization data describing a polarization that may better reproduce a feature on a mask at a surface. In this manner, a polarization of an illumination source may be selected to reproduce a feature. The polarization model generator, intensity profile generator, and desired polarization data generator may all be implemented in software, hardware, or combinations thereof. One or all of the components may be implemented using a processing device coupled to computer readable media encoding appropriate instructions, as generally illustrated in FIG. 5 . Multiple components may be executed by the same processing system in some embodiments. In some embodiments, both the mask and the polarization of the illumination source may be altered based on the modeled intensity profile data. From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Embodiments of the present invention can be implemented in software, hardware, or combinations thereof. One or more general or special purpose computers may be programmed to carry out methods in accordance with embodiments of the present invention. Patent applications by Chung-Yi Lee, Boise, ID US Patent applications by Fei Wang, Boise, ID US Patent applications by MICRON TECHNOLOGY, INC. Patent applications in class MODELING BY MATHEMATICAL EXPRESSION Patent applications in all subclasses MODELING BY MATHEMATICAL EXPRESSION User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20090287461","timestamp":"2014-04-16T05:30:49Z","content_type":null,"content_length":"73085","record_id":"<urn:uuid:97fba2dc-5519-4804-aada-1cc41d6a10d3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Brighton, CO ACT Tutor Find a Brighton, CO ACT Tutor ...Because of that experience, I am able to work with students who are far behind in their school curriculum or have trouble grasping the material. I am very passionate about math and I believe that everyone can "get it." I have my B.S. in Mechanical Engineering and am now a grad student at DU. I can help students with more than just math! 27 Subjects: including ACT Math, reading, writing, geometry ...I am fluent in several computer languages, including Java, Octave, Groovy and Python. I have worked in small start-ups and large corporations, including Apple Computer. I have implemented systems across many domains, including, security, medical, government, retail, scientific modeling, among others. 17 Subjects: including ACT Math, geometry, algebra 1, statistics ...In my sessions, we follow an agenda: diagnose the problem, test the student's learning, review the answer, and repeat. We spend more time applying skills and less time doing boring stuff that doesn't enhance learning. In the past, I have typically worked with high school students and college freshmen but can work with any level if it is within my knowledge area. 41 Subjects: including ACT Math, Spanish, English, chemistry I have taught ESL for many years in Japan, Taiwan and China. I have experience teaching in schools, language institutes and to private students one-on-one, with students of all ages. I'm happy to adjust my teaching style and curriculum to meet your needs. 21 Subjects: including ACT Math, English, reading, writing ...I graduated from Dartmouth College (class of ’04) in the top 5% of my class with a degree in mathematics, and received my J.D. from Harvard Law School (class of ’07). I first began tutoring math while in college, and earlier this year, after leaving the practice of law, I began tutoring full-time... 12 Subjects: including ACT Math, algebra 1, algebra 2, SAT math
{"url":"http://www.purplemath.com/Brighton_CO_ACT_tutors.php","timestamp":"2014-04-18T11:03:56Z","content_type":null,"content_length":"23897","record_id":"<urn:uuid:24bea7fd-17ae-4c30-87f7-1ae989ddc7ee>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
Cauchy sequence

Don't need a proof, just need confirmation. Every Cauchy sequence converges. This is true, right?

If you are working in the real number field then that is correct. It is correct in any complete metric space.

Actually you are correct to 'call my hand' on that. In most treatments a complete space is one in which every Cauchy sequence converges. Thus, in some way my answer is circular. In my own defense, in some texts that is taken to be equivalent to the statement that every infinite bounded set has a limit point.
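A standard illustration of why the ambient space matters (an example added for clarity, not part of the original thread): in $\mathbb{Q}$ with the usual metric, the sequence of decimal truncations of $\sqrt{2}$,

$$x_n = \frac{\lfloor 10^n \sqrt{2} \rfloor}{10^n} = 1,\ 1.4,\ 1.41,\ 1.414,\ \dots,$$

is Cauchy (since $|x_m - x_n| \le 10^{-\min(m,n)}$) but has no limit in $\mathbb{Q}$; the same sequence converges to $\sqrt{2}$ in the complete space $\mathbb{R}$.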
{"url":"http://mathhelpforum.com/calculus/6824-cauchy-sequence.html","timestamp":"2014-04-17T22:46:51Z","content_type":null,"content_length":"40202","record_id":"<urn:uuid:8aaa1cb6-f397-48e9-98c7-b1be665ad9cb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
How many years will it take for a $50 investment to grow to $1000 if it is compounded continuously at a rate of 5%? Round to two decimal places. Do not include units in your answer. A(t) = P * e^(rt)

i need help!!!!!!

what about?

How many years will it take for a $50 investment to grow to $1000 if it is compounded continuously at a rate of 5%? Round to two decimal places. Do not include units in your answer. A(t) = P * e^(rt)

is really hard

is it 15?

i have studied Engg Economics so it is right

:( it was wrong !
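Working the stated model through (a worked check, not part of the original thread): 1000 = 50 * e^(0.05t), so e^(0.05t) = 20 and t = ln(20)/0.05 ≈ 59.91. The guess of 15 was indeed wrong; to two decimal places the answer is 59.91.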
{"url":"http://openstudy.com/updates/4fabeec5e4b059b524f79eaa","timestamp":"2014-04-17T12:40:48Z","content_type":null,"content_length":"44182","record_id":"<urn:uuid:e0afec42-c7d7-422f-b291-b970510074d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization, Partial Derivatives October 25th 2009, 01:20 AM Optimization, Partial Derivatives Find three positive numbers x,y,z whose sum is 100 such that (x^a)(y^b)(z^c) is a maximum. At first, I thought a, b, and c needed constant values, but later realized that the numbers are in terms of a, b, and c. Can someone help? October 25th 2009, 05:07 AM I think this exercise is (almost) impossible to solve without first knowing what relation you say exists between the variables x,y,z and the powers a,b,c. October 25th 2009, 05:58 AM Scott H The relations involved are $x+y+z=100$ with $x>0$, $y>0$, $z>0$. Our task is to maximize the two-variable function $u=x^a y^b (100-x-y)^c$ subject to these conditions. In order to do this, we find all critical points, including boundary points and singular points (none, in this case), and points at which $\nabla u=\mathbf{0},$ i.e., $\frac{\partial u}{\partial x}=\frac{\partial u}{\partial y}=0.$ To begin, we first find the partial derivative of $u$ with respect to $x$: \begin{aligned} \frac{\partial u}{\partial x}&=ax^{a-1}y^b(100-x-y)^c+x^a y^b\cdot c(100-x-y)^{c-1}\cdot(-1)\\ &= ax^{a-1}y^b(100-x-y)^c-cx^a y^b(100-x-y)^{c-1}. \end{aligned} Because $x$ and $y$ are symmetric in the function (with $a$ and $b$ respectively), the partial derivative of $u$ with respect to $y$ is therefore $\frac{\partial u}{\partial y}=bx^a y^{b-1}(100-x-y)^c-cx^a y^b(100-x-y)^{c-1}.$ The critical points are thus all those points at which \begin{aligned} ax^{a-1}y^b(100-x-y)^c-cx^a y^b(100-x-y)^{c-1}&=0\\ bx^a y^{b-1}(100-x-y)^c-cx^a y^b(100-x-y)^{c-1}&=0. \end{aligned} We may simplify these equations a bit by dividing by $x^{a-1}y^{b-1}(100-x-y)^{c-1}$, remembering that $x$, $y$, and $z$ are positive: \begin{aligned} ay(100-x-y)-cxy&=0\\ bx(100-x-y)-cxy&=0. \end{aligned} Now all that remains is to solve for $x$ and $y$. October 25th 2009, 06:38 AM Wow! What a different looking question with so much more information, and nice work. How did you know all this? The OP didn't write it. October 25th 2009, 06:49 AM Scott H The OP wrote, "Find three positive numbers x,y,z whose sum is 100 such that (x^a)(y^b)(z^c) is a maximum."
This translates into: maximize $x^a y^b z^c$ subject to $x+y+z=100$ with $x,y,z>0$. October 25th 2009, 06:55 AM True, but now that I re-read your answer carefully, we still don't have any idea what relation the OP is talking about between a,b,c and the variables x,y,z. This matters because we have to take it into account when differentiating. In fact, the OP writes "later realized that the numbers are in terms of a, b, and c. Can someone help?", and I assume "the numbers" he talks about are x,y,z. October 25th 2009, 07:02 AM Actually, that is exactly what the OP wrote: maximize $x^ay^bz^c$ subject to the conditions x+y+ z= 100, x> 0, y> 0, z> 0 for fixed a, b, c. I confess that I do not understand what he meant by "At first, I thought a, b, and c needed constant values, but later realized that the numbers are in terms of a, b, and c." Yes, the result will depend upon a, b, and c, but that does not mean they are not constants. Perhaps he meant that he thought, at first, that he was to put specific values in for a, b, and c. Because the conditions x> 0, y> 0, z> 0 make the "feasible region" an open set, I would NOT have done it by looking for extrema on the boundaries. The boundaries are not included in the set (and there may not be a maximum). Instead of replacing z with 100- x- y, I think I would use the "Lagrange multiplier method". Let $F(x,y,z)= x^ay^bz^c$. Then $\nabla F= ax^{a-1}y^bz^c\vec{i}+ bx^ay^{b-1}z^c\vec{j}+ cx^ay^bz^{c-1}\vec{k}$. Let $G(x,y,z)= x+ y+ z= 100$. Then $\nabla G= \vec{i}+ \vec{j}+\vec{k}$. At extrema of F, satisfying G= constant, those two vectors must be parallel; one is a multiple of the other: $\nabla F= \lambda \nabla G$, or $ax^{a-1}y^bz^c\vec{i}+ bx^ay^{b-1}z^c\vec{j}+ cx^ay^bz^{c-1}\vec{k}= \lambda(\vec{i}+ \vec{j}+\vec{k})$. Equating the components, we have $ax^{a-1}y^bz^c= \lambda$, $bx^ay^{b-1}z^c= \lambda$, and $cx^ay^bz^{c-1}= \lambda$. Dividing the first equation by the second we get $\frac{a}{b}\frac{y}{x}= 1$ or $y= \frac{b}{a}x$. Dividing the first equation by the third gives $\frac{a}{c}\frac{z}{x}= 1$ or $z= \frac{c}{a}x$. Now put those into x+ y+ z= 100 to get $x+ \frac{b}{a}x+ \frac{c}{a}x= \frac{a+ b+ c}{a} x= 100$, so $x= \frac{100a}{a+ b+ c}$. You could put that into $y= \frac{b}{a}x$ and $z= \frac{c}{a}x$ to get that $y= \frac{100b}{a+ b+ c}$ and $z= \frac{100c}{a+ b+ c}$, which also follow from the symmetry.
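As a quick numerical sanity check of the closed-form answer above (an illustrative addition; the exponents a = 2, b = 3, c = 5 and the coarse unit grid are arbitrary choices), a brute-force search over the constraint triangle should land on x = 20, y = 30, z = 50:

# Brute-force check of x = 100a/(a+b+c), y = 100b/(a+b+c), z = 100c/(a+b+c)
# for the sample exponents a=2, b=3, c=5 (arbitrary illustrative values).
a, b, c = 2.0, 3.0, 5.0

def f(x, y):
    z = 100.0 - x - y
    return x**a * y**b * z**c   # objective with z eliminated via the constraint

best = max(
    ((f(x, y), x, y) for x in range(1, 100) for y in range(1, 100 - x)),
    key=lambda t: t[0],
)
print("grid maximum at x, y, z  =", best[1], best[2], 100 - best[1] - best[2])
print("closed form     x, y, z  =", 100*a/(a+b+c), 100*b/(a+b+c), 100*c/(a+b+c))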
{"url":"http://mathhelpforum.com/calculus/110251-optimization-partial-derivatives-print.html","timestamp":"2014-04-18T00:39:48Z","content_type":null,"content_length":"27325","record_id":"<urn:uuid:f4a79540-990d-4ddc-8490-d678fbcf1d64>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
NASA - Problem 26, Astronomy as a Career The information in this document was accurate as of the original publication date. This problem looks at some of the statistics of working in a field like astronomy. Students read graphs and answer questions about the number of astronomers, the rate of increase in the population size and the number of advanced degrees. A one-page teacher guide accompanies the one-page assignment. Problem 26, Astronomy as a Career [118KB PDF file] This mathematics problem is part of Space Math III
{"url":"http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/SMIII_Problem26.html","timestamp":"2014-04-20T05:58:15Z","content_type":null,"content_length":"17357","record_id":"<urn:uuid:07d84a7e-2a73-42af-87c7-42f7b09692fe>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 29 - IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING , 2004 "... The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques are now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based ..." Cited by 56 (2 self) Add to MetaCart The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques are now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based methods, such as Markov reward models, and detailed, discreteevent simulation. The use of quantitative techniques for security evaluation is much less common, and has typically taken the form of formal analysis of small parts of an overall design, or experimental red team-based approaches. Alone, neither of these approaches is fully satisfactory, and we argue that there is much to be gained through the development of a sound model-based methodology for quantifying the security one can expect from a particular design. In this work, we survey existing model-based techniques for evaluating system dependability, and summarize how they are now being extended to evaluate system security. We find that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation, and the intentional, human nature of cyber attacks. , 1991 "... Methods for evaluating system performance, dependability, and performability are becoming increasingly more important, particularly in the case of critical applications. Central to the evaluation process is the definition of specific measures of system behavior that are of interest to a user. This p ..." Cited by 54 (7 self) Add to MetaCart Methods for evaluating system performance, dependability, and performability are becoming increasingly more important, particularly in the case of critical applications. Central to the evaluation process is the definition of specific measures of system behavior that are of interest to a user. This paper presents a unified approach to the specification of measures of performance, dependability, and performability. The unification is achieved by 1) using a model class well suited for representation of all three aspects of system behavior, and 2) system behavior. The resulting approach permits the specification of many non-traditional as well as traditional measures of system performance, dependability, and performability in a unified manner. Example instantiations of variables within this class are given and their relationships to variables used in traditional performance and dependability evaluations are illustrated. , 1994 "... This thesis has been submitted in partial fulfillment of requirements for an advanced ..." - Microelectronics and Reliability , 1994 "... Abstract--Three methods for numerical transient analysis of Markov chains, the modified Jensen's method (Jensen's method with steady-state detection of the underlying DTMC and computation of Poisson probabilities using the method of Fox and Glynn [1]), a third-order L-stable implicit Runge-Kutta met ..." 
Cited by 24 (7 self) Add to MetaCart Abstract--Three methods for numerical transient analysis of Markov chains, the modified Jensen's method (Jensen's method with steady-state detection of the underlying DTMC and computation of Poisson probabilities using the method of Fox and Glynn [1]), a third-order L-stable implicit Runge-Kutta method, and a second-order L-stable method, TR-BDF2, are compared. These methods are evaluated on the basis of their performance (accuracy of the solution and computational cost) on stiff Markov chains. Steady-state detection in Jensen's method results in large savings of computation time for Markov chains when mission time extends beyond the steady-state point. For stiff models, computation of Poisson probabilities using traditional methods runs into underflow problems. Fox and Glynn's method for computing Poisson probabilities avoids underflow problems for all practical problems and yields highly accurate solutions. We conclude that for mildly stiff Markov chains, the modified Jensen's method is the method of choice. For stiff Markov chains, we recommend the use of the L-stable ODE methods. If low accuracy (upto eight decimal places) is acceptable, then TR-BDF2 method should be used. If higher accuracy is desired, then we recommend third-order implicit Runge-Kutta method. 1. , 1998 "... Analytical modeling plays a crucial role in the analysis and design of computer systems. Stochastic Petri Nets represent a powerful paradigm, widely used for such modeling in the context of dependability, performance and performability. Many structural and stochastic extensions have been proposed in ..." Cited by 18 (4 self) Add to MetaCart Analytical modeling plays a crucial role in the analysis and design of computer systems. Stochastic Petri Nets represent a powerful paradigm, widely used for such modeling in the context of dependability, performance and performability. Many structural and stochastic extensions have been proposed in recent years to increase their modeling power, or their capability to handle large systems. This paper reviews recent developments by providing the theoretical background and the possible areas of application. Markovian Petri nets are first considered together with very well established extensions known as Generalized Stochastic Petri nets and Stochastic Reward Nets. Key ideas for coping with large state spaces are then discussed. The challenging area of non-Markovian Petri nets is considered, and the related analysis techniques are surveyed together with the detailed elaboration of an example. Finally new models based on Continuous or Fluid Stochastic Petri Nets are briefly discussed. - Commun. in Statist. – Stochastic Models , 1998 "... Markov reward models have been widely used to solve a variety of problems. In these models, reward rates are associated to the states of a continuous time Markov chain, and impulse rewards are associated to transitions of the chain. Reward rates are gained per unit time in the associated state, and ..." Cited by 18 (2 self) Add to MetaCart Markov reward models have been widely used to solve a variety of problems. In these models, reward rates are associated to the states of a continuous time Markov chain, and impulse rewards are associated to transitions of the chain. Reward rates are gained per unit time in the associated state, and impulse rewards are instantaneous values that are gained each time certain transitions occur. 
We develop an efficient algorithm to calculate the distribution of the total accumulated reward over a given interval of time when both rate and impulse rewards are present. As special cases, we obtain an algorithm which is used when only rate rewards occur and another algorithm to handle the case of models for which only impulse rewards are present. The development is based purely on probabilistic arguments, and the recursions obtained are simple and have a low computational cost. 1 This work was done while E. de Souza e Silva was on leave from the Federal University of Rio de Janeiro partially su... - IEEE Transactions on Parallel and Distributed Systems , 1993 "... Abstract-Distributed systems frequently have large numbers of idle computers and workstations. If we could make use of these, then considerable computing power could be harnessed at low cost. We analyze such systems using Brownian motion with drift to model the execution of a program distributed over ..." Cited by 15 (0 self) Add to MetaCart Abstract-Distributed systems frequently have large numbers of idle computers and workstations. If we could make use of these, then considerable computing power could be harnessed at low cost. We analyze such systems using Brownian motion with drift to model the execution of a program distributed over the idle computers in a network of idle and busy processors, determining how the use of these “transient” processors affects a program’s execution time. We find the probability density of a program’s finishing time on both single and multiple transient processors, explore these results for qualitative insight, and suggest some approximations for the finishing time probability density that may be useful. Index Terms- Brownian motion, distributed processing, idle processors, performance analysis, transient processors. - Performance Evaluation , 2003 "... In this paper, we show how to utilize the expectation-maximization (EM) algorithm for efficient and numerically stable parameter estimation of the batch Markovian arrival process (BMAP). In fact, effective computational formulas for the E-step of the EM algorithm are presented, which utilize the we ..."
In this paper, we present a methodology that can (i) bound the system steady state availability and at the same time, (ii) drastically reduce the state space of the model that must be solved. The bounding algorithm is iterative and generates a part of the transition matrix at each step. At each step, tighter bounds on system availability are obtained. The algorithm also allows the size of the submodel, to be solved at each step, to be chosen so a... - Perf. Ev , 1996 "... Over the last decade considerable effort has been put in the development of techniques to assess the performance and the dependability of computer and communication systems in an integrated way. This so-called performability modelling becomes especially useful when the system under study can operate ..." Cited by 12 (5 self) Add to MetaCart Over the last decade considerable effort has been put in the development of techniques to assess the performance and the dependability of computer and communication systems in an integrated way. This so-called performability modelling becomes especially useful when the system under study can operate partially, which is for instance the case for fault-tolerant computer systems and distributed systems. Modelling techniques are a fundamental prerequisite for actually doing performability analysis. A prerequisite of a more practical but not less important nature is the availability of software tools to support the modelling techniques and to allow system designers to incorporate the new techniques in the design process of systems. Since performability modelling requires many aspects of a system to be specified, high requirements should be posed on performability modelling tools. Moreover, these tools should be structured such that the models can be specified at a level that is easy to unde...
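Several of the abstracts above refer to Jensen's method (uniformization) for transient analysis of continuous-time Markov chains. The sketch below is only a minimal illustration of that idea; it omits the steady-state detection and the Fox-Glynn Poisson computation the papers discuss (the naive e^(-Lambda*t) weight can underflow for stiff models), and the two-state generator is an arbitrary example, not taken from any of the cited papers.

# Minimal sketch of uniformization (Jensen's method) for CTMC transient analysis.
import math
import numpy as np

def transient_distribution(Q, pi0, t, tol=1e-12, max_terms=100000):
    """Approximate pi(t) = pi0 * exp(Q t) for a CTMC with generator matrix Q."""
    Q = np.asarray(Q, dtype=float)
    pi = np.asarray(pi0, dtype=float)
    lam = float(max(-Q.diagonal()))        # uniformization rate Lambda >= max |q_ii|
    if lam == 0.0:
        return pi.copy()
    P = np.eye(Q.shape[0]) + Q / lam       # DTMC transition matrix of the uniformized chain
    weight = math.exp(-lam * t)            # Poisson(0; Lambda*t); may underflow for very stiff models
    term = pi.copy()
    result = weight * term
    accumulated = weight
    k = 0
    while accumulated < 1.0 - tol and k < max_terms:
        k += 1
        term = term @ P                    # pi0 * P^k
        weight *= lam * t / k              # Poisson(k) from Poisson(k-1)
        result += weight * term
        accumulated += weight
    return result

# Two-state failure/repair model: failure rate 0.1/h, repair rate 1.0/h (illustrative numbers).
Q = [[-0.1, 0.1],
     [ 1.0, -1.0]]
print(transient_distribution(Q, [1.0, 0.0], t=5.0))   # state distribution after 5 hours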
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=356131","timestamp":"2014-04-18T12:31:42Z","content_type":null,"content_length":"39980","record_id":"<urn:uuid:87da53d0-eafd-4b2d-b2f2-13bd526b1d4a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
1/f Noise and RTS (Random Telegraph Signal) Errors in Comparators and Sense Amplifiers Nanotech 2007 Vol. 1 Technical Proceedings of the 2007 NSTI Nanotechnology Conference and Trade Show, Volume 1 Chapter 2: Nano Electronics & Photonics 1/f Noise and RTS (Random Telegraph Signal) Errors in Comparators and Sense Amplifiers Authors: L. Forbes, D.A. Miller and P. Poocharoen Affiliation: Oregon State University, US Pages: 197 - 200 Keywords: RTS noise, l/f noise, nanoscale devices, sense amplifiers Abstract: An analysis has previously been made of the increasing portion of the threshold voltage being occupied by thermal noise levels and the bit error rates in digital logic [1] and memory circuits [2-4]. This analysis has led to the prediction that there are fundamental limits imposed in digital circuits by thermal noise and that the scaling predicted by Moore’s law cannot continue into the future [1]. No consideration was, however, given to the errors that might be caused by l/f noise or random telegraph signals. In small transistors such as those used in read sense amplifiers, the l/f noise is caused by and can be characterized by random telegraph signals (RTS). These random signals can cause errors in the sense amplifiers and limit the ability to read the data stored in memories. In the case of RTS and l/f noise in the time domain representation, the small but finite probability of an error will result from all traps happening to capture or emit electrons at the same time. If all traps in the sense amplifier transistor capture or emit electrons at the same time there will be an erroneous sense amplifier signal; the probability of such an error is small but it will occur at random times. This will result in an error or give the appearance of there being a “variable retention time” in a particular memory cell. The representation or modeling of the RTS or l/f noise of nanoscale devices that is easiest to understand is that done in the time domain. The capture and emission of a single electron in a nanoscale NMOS transistor of size W/L will be equivalent to a change in threshold voltage, VT, of ΔVT = q / (Cox W L), where q is the electronic charge and Cox is the gate capacitance. The probability that there will be a coincidence of occurrence of a number of electrons contributing to a large change in threshold voltage and causing an error has been found to be described by a lognormal distribution. In a lognormal distribution the probability of a large value is of the order exp(-x). Published results show a 0.1% probability of a value twenty times the minimal RTS step of 10mV on a minimum size device in a 90nm technology with 9nm gate oxides [5]. This would be 200mV, corresponding to fifty traps changing charge state. For a sense amplifier in 50nm technology with 2nm gate oxides and a transistor size of W/L = 2.5u/0.5u, this translates into a large threshold voltage distribution if we assume the l/f noise varies according to the NLEV=0 SPICE model. If a DRAM sense amplifier is upset by a threshold voltage mismatch of ΔVT = 20mV then the calculated error rate could be as high as exp(-11). In reality there is not an error or variable retention time in the memory cell but rather a random and variable error occurring in the sense amplifier due to RTS or l/f noise, which could happen nearly as often as every time a Gbit DRAM is read. [1] L.B. Kish, Physics Letters, A 305, pp. 144-149, 2002. [2] L. Forbes, M. Mudrow and W. Wanalertlak, IEE Electronics Letters, Vol. 42, No. 5, pp. 279-280, 2 March 2006. [3] M. Mudrow, W.
Wanalertlak and L. Forbes, IEEE Workshop on Microelectronics and Electron Devices, Boise, 14 April 2006, pp. 39-40. [4] L. Forbes, M. Mudrow and W. Wanalertlak, NanoTech, Boston, 7-11 May 2006, Vol. 3, pp. 78-81. [5] N. Tega, et al., IEEE Int. Electron Device Meeting, San Francisco, Dec. 2006, paper 18.4. ISBN: 1-4200-6182-8 Pages: 726
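For a sense of scale, the single-trap threshold step ΔVT = q/(Cox W L) quoted in the abstract can be evaluated directly. The snippet below is an illustrative addition: the SiO2 relative permittivity (3.9) and the assumption that the "minimum size device" is 90 nm x 90 nm are mine, not values taken from the paper; the oxide thicknesses and sense-amplifier geometry are the ones quoted above.

# Single-trap threshold-voltage step Delta_VT = q / (Cox * W * L).
q = 1.602e-19            # electron charge, C
eps0 = 8.854e-12         # vacuum permittivity, F/m
eps_ox = 3.9 * eps0      # SiO2 permittivity (assumed textbook value)

def delta_vt(tox_m, W_m, L_m):
    cox = eps_ox / tox_m             # gate-oxide capacitance per unit area, F/m^2
    return q / (cox * W_m * L_m)     # threshold shift per trapped electron, V

print(delta_vt(9e-9, 90e-9, 90e-9))      # 90 nm x 90 nm device, 9 nm oxide: ~5 mV, same order as the ~10 mV RTS step
print(delta_vt(2e-9, 2.5e-6, 0.5e-6))    # 2.5 um x 0.5 um sense-amp device, 2 nm oxide: only a few microvolts per electron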
{"url":"http://www.nsti.org/procs/Nanotech2007v1/2/W78.707","timestamp":"2014-04-20T08:46:47Z","content_type":null,"content_length":"16435","record_id":"<urn:uuid:2062301f-46f6-4fe8-8876-12318501ec4b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Port length - multiple ports - diyAudio diyAudio Member Join Date: Feb 2004 Location: Chinook Country, Alberta Svante, an oversimplified view, as an example: use 2 ports of 2" diameter. The equivalent single-port diameter is dt = sqrt(2^2 + 2^2) = 2*sqrt(2) = 2.83", not 4". The total port area S is 2 * pi * r^2 = 2*pi = 6.2832 square inches, so the area doubles, and therefore the length needs to be expanded. The Helmholtz equation is f = (c/(2*pi)) * sqrt(S/(V*L)), so if f is constant, V is constant (the enclosure), and c is a constant as is 2*pi, then f is proportional to sqrt(S/L). With the total area doubled, L must also double to keep the same tuning; with three ports of the same diameter, L would have to be tripled, and so on (ignoring end corrections). stew ☮ -"A sane man in an insane world appears insane."
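Taking the simplified relation above at face value (a rough sketch that ignores end corrections; the 50-litre box, 30 Hz tuning, and metric port diameter are arbitrary example numbers, not from the thread), the required port length scales linearly with total port area:

# Simplified port-length scaling from the Helmholtz relation:
# f = (c / (2*pi)) * sqrt(S / (V * L))  =>  L = S * c^2 / (V * (2*pi*f)^2).
import math

def port_length(n_ports, port_diameter_m, box_volume_m3, tuning_hz, c=343.0):
    area = n_ports * math.pi * (port_diameter_m / 2.0) ** 2   # total port area S, m^2
    return area * c**2 / (box_volume_m3 * (2.0 * math.pi * tuning_hz) ** 2)

v = 0.05          # 50 litre box, m^3
f = 30.0          # 30 Hz tuning
d = 0.0508        # 2 inch port diameter in metres
print(port_length(1, d, v, f))   # single 2" port, ~0.13 m
print(port_length(2, d, v, f))   # two 2" ports -> roughly twice the length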
{"url":"http://www.diyaudio.com/forums/multi-way/83048-port-length-multiple-ports.html","timestamp":"2014-04-17T03:19:45Z","content_type":null,"content_length":"80344","record_id":"<urn:uuid:5c0e7e8b-f0d5-49ee-800c-fc5a842b8cca>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
[Ma191a - Fall 12 - 13] - Stochastic Analysis This course will describe the basic tools of stochastic analysis and their applications in probability, while at the same time discussing their analogues in quantum physics. With the proper background the probabilistic description of these tools is relatively transparent and simple. The main topics to be discussed include Gaussian Hilbert Spaces, Wiener chaos, Wick products, hypercontractivity, and Cameron-Martin shifts. The ultimate goal of the course is to gain a working understanding of the Malliavin calculus, which is usually described as the stochastic calculus of variations (i.e. variational formulas over functions in Wiener space). A strong background in analysis will be assumed. A good background in probability would be helpful but a working knowledge of multi-dimensional Gaussians will probably be sufficient.
{"url":"http://www.math.caltech.edu/~2012-13/1term/ma191a-sec1/","timestamp":"2014-04-20T15:52:35Z","content_type":null,"content_length":"12823","record_id":"<urn:uuid:96149df2-f928-4b37-9e4e-dc31b7bdf856>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00652-ip-10-147-4-33.ec2.internal.warc.gz"}