sampling distribution May 27th 2010, 07:07 PM #1 May 2010 sampling distribution Can anyone help me understand how to do this problem? Based on past experience, a bank believes 12% of people who receive loans do not pay on time. The bank recently approved 400 new loans. What is the mean and standard deviation of the proportion of clients who may not pay on time? What is the probability that over 14% of these clients will not make timely payments? May 27th 2010, 07:50 PM #2 X ~ Binomial(n = 400, p = 0.12). For the second part, you're probably expected to use the normal approximation to the Binomial distribution. Note: 14% of 400 = 56.
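The reply's method can be carried out numerically. A minimal sketch in Python (standard library only; the normal approximation is applied without a continuity correction, which is presumably what the course expects):

```python
import math

n, p = 400, 0.12                    # loans approved; believed late-payment rate
mean = p                            # mean of the sample proportion p-hat
sd = math.sqrt(p * (1 - p) / n)     # sd of p-hat: sqrt(p(1-p)/n), about 0.0162

def phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

z = (0.14 - mean) / sd              # z-score for a sample proportion of 14%
prob = 1 - phi(z)                   # P(p-hat > 0.14), roughly 0.11

print(round(mean, 2), round(sd, 4), round(prob, 4))
```

So about an 11% chance that more than 14% of the 400 clients pay late, consistent with the hint that 14% of 400 = 56 clients.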
{"url":"http://mathhelpforum.com/statistics/146696-sampling-distibution.html","timestamp":"2014-04-23T17:07:12Z","content_type":null,"content_length":"33766","record_id":"<urn:uuid:fa3a8e61-62af-4991-ab07-4c24b51b8e75>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Precalculus: A Prelude to Calculus, 2nd Edition November 2012, ©2013 Sheldon Axler's Precalculus focuses only on topics that students actually need to succeed in calculus. Because of this, Precalculus is a very manageable size even though it includes a student solutions manual. The book is geared toward courses with intermediate algebra prerequisites, and it does not assume that students remember any trigonometry. It covers topics such as inverse functions, logarithms, half-life and exponential growth, area, e, the exponential function, the natural logarithm, and trigonometry. The Student Solutions Manual is integrated at the end of every section. The proximity of the solutions encourages students to go back and read the main text as they are working through the problems and exercises. The inclusion of the manual also saves students money. Axler’s Precalculus is available with WileyPLUS, a research-based online environment for effective teaching and learning. WileyPLUS is sold separately from the text. Chapter 0: The Real Numbers; Chapter 1: Functions and Their Graphs; Chapter 2: Linear, Quadratic, Polynomial, and Rational Functions; Chapter 3: Exponential Functions, Logarithms, and e; Chapter 4: Trigonometric Functions; Chapter 5: Trigonometric Algebra and Geometry; Chapter 6: Applications of Trigonometry; Chapter 7: Sequences, Series, and Limits; Chapter 8: Systems of Linear Equations. • Numerous examples, exercises, and problems have been added to the text, including many that are applications oriented. • Multiple significant improvements have been made throughout the text to enhance clarity and understanding for student readers. • Conics are now covered in more depth. Rather than being scattered throughout the book, this topic has been consolidated into one section. • A subsection on the Binomial Theorem has been added to Chapter 7 (Sequences, Series, and Limits). 
• Trigonometry content has been rearranged from two to three chapters. This change allows for more thought-provoking applications. • A section on parametric curves has been added. • Coverage of vectors and the complex plane has been expanded from one section to two in the new edition, allowing for better coverage of each of these topics. • WolframAlpha launched after the first edition published. To help students utilize this free new resource, scattered examples using WolframAlpha have been incorporated into the Second Edition. • Systems of linear equations and matrices have been moved from Section 2.7 to Chapter 8. These topics, which have been expanded, can be easily skipped by instructors focusing only on material needed for first-semester calculus. See More • Manageable Size: Even with a student solutions manual included, the text is shorter and more concise than other Precalculus books. It is also cost-effective for students because they do not have to purchase a separate solutions manual. • Flexible and Plentiful Topics: The text is not overloaded with extraneous topics. • Designed to be Read: The writing style and layout are meant to encourage students to read and understand the material. Explanations are plentiful with descriptions of concepts making the ideas concrete whenever possible. • Technology Optional: To aid instructors in presenting the kind of course they want, an icon appears next to exercises and problems that require students to use a calculator. Some exercises and problems that require a calculator are intentionally designed to make students realize that by understanding the material, they can overcome the limitations of calculators. • Worked-Out Solutions to Odd-Numbered Exercises: These solutions are written exclusively by the author. Therefore students can expect a consistent approach to the material. 
Purchase Options: Precalculus: A Prelude to Calculus, 2nd Edition, ISBN 978-1-118-54607-9, 672 pages, October 2012, ©2013. Binder Ready Version, ISBN 978-1-118-08792-3, 672 pages, November 2012, ©2013. ISBN 978-0-470-64804-9, 672 pages, November 2012, ©2013. ISBN 978-1-118-08376-5, 672 pages, November 2012, ©2013. Information about Wiley E-Texts: • Wiley E-Texts are powered by VitalSource Technologies e-book software. • With Wiley E-Texts you can access your e-book how and where you want to study: online, download, and mobile. • Wiley E-Texts are non-returnable and non-refundable. • WileyPLUS registration codes are NOT included with the Wiley E-Text. For information on WileyPLUS, click here. • To learn more about Wiley E-Texts, please refer to our FAQ. Information about e-books: • E-books are offered as EPUBs or PDFs. To download and read them, users must install Adobe Digital Editions (ADE) on their PC. • E-books have DRM protection, which means only the person who purchases and downloads the e-book can access it. • E-books are non-returnable and non-refundable. • To learn more about our e-books, please refer to our FAQ.
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP002454.html?filter=TEXTBOOK","timestamp":"2014-04-20T08:18:21Z","content_type":null,"content_length":"53776","record_id":"<urn:uuid:79558b1c-7301-4d4d-af48-4e50528059ed>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
53-XX DIFFERENTIAL GEOMETRY (For differential topology, see 57Rxx. For foundational questions of differentiable manifolds, see 58Axx) • 53-00 General reference works (handbooks, dictionaries, bibliographies, etc.) • 53-01 Instructional exposition (textbooks, tutorial papers, etc.) • 53-02 Research exposition (monographs, survey articles) • 53-03 Historical (must also be assigned at least one classification number from Section 01) • 53-04 Explicit machine computation and programs (not the theory of computation or programming) • 53-06 Proceedings, conferences, collections, etc. • 53Axx Classical differential geometry • 53Bxx Local differential geometry • 53Cxx Global differential geometry [See also 51H25, 58-XX; for related bundle theory, see 55Rxx, 57Rxx] • 53Dxx Symplectic geometry, contact geometry [See also 37Jxx, 70Gxx, 70Hxx] • 53Zxx Applications to physics
{"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/collection/id/11464","timestamp":"2014-04-18T11:00:23Z","content_type":null,"content_length":"11821","record_id":"<urn:uuid:a5daa158-758c-4e28-a4b4-b7b0f0068f22>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Baseball Simulator Program 09-05-2013, 09:23 PM Baseball Simulator Program Hey guys! First time post. I have a question. First, I'm "semi" new to Java. I've written a number of programs, mainly simple ones to get totally comfortable with the console before I move on to JFrame and GUI stuff. I want to just have the console print out the outcome of at-bats, but with real players, say Ryan Howard at the plate and Cliff Lee pitching. Weighted randomness, I guess, is what I'm after. Is there a way to add weight to the Math.random() function so that Ryan Howard doesn't hit .400 with 2 home runs? Thanks guys! 09-05-2013, 09:38 PM Re: Baseball Simulator Program But two times at bat with two home runs is .400. However, you could make it more meaningful by working in the number of times at bat. 09-05-2013, 09:43 PM Re: Baseball Simulator Program I think my main question is, how do the baseball simulators like OOTP work? I was trying to program just a simple one to get a more sophisticated win-loss record. But I can't see how they're programmed, or what kind of math they're using. Is there a way to get a random number that will be influenced by a player's overall and past/recent performance? 09-05-2013, 10:11 PM Re: Baseball Simulator Program I don't know about OOTP. If you want to calculate what a hitter's average is, it is essentially number of hits divided by number of times at bat. If you have a fictitious batter in a game and you want to see if the batter gets a hit or not, then just generate a number between 0 and 1. If the number is less than or equal to the batter's average, he gets a hit. If greater, he doesn't. You could also work in home run stats and RBI stats too. 09-06-2013, 05:36 AM Re: Baseball Simulator Program I don't know about OOTP. If you want to calculate what a hitter's average is, it is essentially number of hits divided by number of times at bat. 
If you have a fictitious batter in a game and you want to see if the batter gets a hit or not. Then just generate a number between 0 and 1. If the number is less than or equal to the batter's average, he gets a hit. If greater than he doesn't. You could also work in home run stats and rbi stats too. I already know how to calculate the statistics but you helped me out TONS with the last part. That's working beautifully! Thanks a bunch
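The scheme the replies describe (draw a uniform number and compare it to the batting average) is easy to sketch. The thread is about Java, but the same idea in Python, with a made-up average, looks like this:

```python
import random

def at_bat(batting_average):
    """One weighted at-bat: a hit when the uniform draw is at most the average."""
    return random.random() <= batting_average

random.seed(42)                  # make the simulation reproducible
avg = 0.250                      # illustrative batting average
at_bats = 10_000
hits = sum(at_bat(avg) for _ in range(at_bats))
simulated_avg = hits / at_bats
print(round(simulated_avg, 3))   # converges toward 0.250 as at-bats grow
```

Weighting Math.random() in Java works the same way: `Math.random() <= average` is true with probability equal to the average, so a .250 hitter stops hitting .400 once you simulate enough at-bats.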
{"url":"http://www.java-forums.org/new-java/81362-baseball-simulator-program-print.html","timestamp":"2014-04-19T10:41:34Z","content_type":null,"content_length":"7991","record_id":"<urn:uuid:deedd92a-e227-4d44-a429-abf0c87c5de0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
The Atlantic: Breaking News, Analysis and Opinion on politics, business, culture, international, science, technology, national Incentive works. Want people to do something better? Offer to give them something they want, like money. This simple theory has driven the American economic engine for centuries. In fact, the financial crisis was partially triggered by incentives becoming misaligned with long-term risk. They matter. One industry where incentive is clearly absent, however, is education. Recently, people have begun realizing this, and merit pay for teachers has become a heated debate. Teachers unions are vehemently against incentive-based pay. One problem, they say, is that test scores aren't a fair indicator of how much a child has learned. There's also the worry that students will just be taught how to do well on the test, instead of receiving a broader education. Another problem is that different students have different inherent learning curves, so teachers can't be evaluated fairly against each other, since their students may have different aptitudes. These are fair criticisms. So why not devise an evaluation methodology that answers such complaints? It might not be as impossible as it seems. First, you need to set the baseline for each student. You could do this through IQ tests, determining reading and math comprehension levels, identifying learning disabilities like dyslexia, attention deficit disorder, etc. Such information would be on record, so these variances could be taken into consideration when a teacher's students are evaluated. Then, rather than just standardized tests, you could have a more robust way to evaluate students. Each semester, school administrators could meet with a random sample of each teacher's students for a short time. They could ask the students questions that would qualitatively and quantitatively demonstrate the teacher's performance. 
For example, on the qualitative level, students could be asked what the most memorable history lesson was from the term. Or maybe to explain five things they've learned that semester in biology. The administrator could have a fifth-grader walk them through a long-division problem step-by-step. You could even ask questions that help determine parental involvement, to help establish the student's baseline. Then, you could have a more quantitative testing component. On some level pencil and paper are unavoidable. If students expect to succeed in life, they need to be able to perform adequately on a test -- but the exams don't have to be standardized. They can include items like a short essay, a math problem where you can see their work, etc. This pencil and paper testing portion can count for as little as 50% of the teacher's overall evaluation. And remember, a student's aptitude is already taken into account, so the test would be graded on a curve accordingly. Obviously, such a methodology wouldn't be perfect -- but neither is any incentive scheme in any industry whatsoever. Even if you're in sales, and paid on commission, you might have better luck with certain clients by the luck of a draw. There are times when some employees appear better than they should and others don't appear as stellar as they are. But on an aggregate basis, such irregularities should even out. And incentive pay would likely be based on a sort of bell curve anyway, so the teachers who really shine and those who are really terrible should be relatively easy to determine if the evaluations come out consistently for a few years. Of course, there is cost to consider. This would likely involve the hiring of new administrators -- but maybe not as many as you think. Let's do a little math. Imagine a school with 2000 students with a student-teacher ratio of 20, meaning 100 teachers. If each student has seven courses, and each teacher has a planning period, that comes out to around 140 students per teacher. 
Take a random sample of those -- say 20 per semester (40 per year). The evaluator spends 30 minutes with each student in the qualitative section and uses another 30 minutes to grade the written test and compile the full results. That would require 2000 working hours per semester for the school's evaluation process. If you add 5 employees for this evaluation role -- that's 400 hours each, or 10 weeks. This would increase a school's labor costs by just a few percent. The evaluation period would begin in the latter part of the semester. The evaluators could use the earlier part of the term to get the student profiles right and devise testing strategies -- which should not be known to the teachers. This is just one potential solution to the educator incentive problem. There are infinite variations on this plan, but the point here should be clear. It is possible to determine a teacher's performance by using broader methods of evaluation and taking into account differences in student aptitude. It isn't easy to come up with a good solution, but it is possible. And working to devise a fair evaluation process is better than the alternative: an education system where subpar teachers are treated the same way as their superior counterparts. This article available online at:
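The staffing arithmetic in the last two paragraphs can be reproduced directly (all numbers are the article's illustrative figures):

```python
students = 2000
student_teacher_ratio = 20
teachers = students // student_teacher_ratio     # 100 teachers

sampled_per_semester = 20                        # students evaluated per teacher
hours_per_student = 0.5 + 0.5                    # 30 min interview + 30 min grading

total_hours = teachers * sampled_per_semester * hours_per_student
evaluators = 5
hours_each = total_hours / evaluators            # per-evaluator load per semester
weeks_each = hours_each / 40                     # assuming a 40-hour work week

print(teachers, total_hours, hours_each, weeks_each)   # 100 2000.0 400.0 10.0
```

This matches the article's figure of 2000 working hours per semester, or 10 weeks apiece for five evaluators.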
{"url":"http://www.theatlantic.com/business/print/2010/06/making-merit-pay-for-teachers-work/57937/","timestamp":"2014-04-19T17:40:17Z","content_type":null,"content_length":"17836","record_id":"<urn:uuid:df7afaf6-c3bb-4955-b7b5-e7eec4ec928c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Overview / review of commutative rings For: Math 421 at Northern Illinois University From: Beachy/Blair, Abstract Algebra, Second Edition Covering: Sections 5.1 through 5.4 Commutative rings, in general The examples you should have in mind are these: the set of integers Z; the set Z[n] of integers modulo n; any field F (in particular the set Q of rational numbers and the set R of real numbers); the set F[x] of all polynomials with coefficients in a field F. The axioms we will use are the same as those for a field, with two crucial exceptions. We have dropped the requirement that each nonzero element has a multiplicative inverse, in order to include integers and polynomials in the class of objects we want to study. Example 5.1.1. (Z[n]) The rings Z[n] form a class of commutative rings that is a good source of examples and counterexamples. Definition 5.1.2. Let S be a commutative ring. A nonempty subset R of S is called a subring of S if it is a commutative ring under the addition and multiplication of S. Proposition 5.1.3. Let S be a commutative ring, and let R be a nonempty subset of S. Then R is a subring of S if and only if (i) R is closed under addition and multiplication; and (ii) if a is in R, then -a is in R. Definition 5.1.4. Let R be a commutative ring with identity element 1. An element a in R is said to be invertible if there exists an element b in R such that ab = 1. The element a is also called a unit of R, and its multiplicative inverse is usually denoted by a^-1. Proposition 5.1.5. Let R be a commutative ring with identity. Then the set of units of R is an abelian group under the multiplication of R. An element e of a commutative ring R is said to be idempotent if e^2 = e. An element a is said to be nilpotent if there exists a positive integer n with a^n = 0. Note that exercises in Section 1.4 contain information about idempotent and nilpotent elements in Z[n]. The group of units of Z[n] is also studied in Section 1.4. Definition 5.2.1. 
Let R and S be commutative rings. A function f : R -> S is called a ring homomorphism if f(a+b) = f(a) + f(b) and f(ab) = f(a) f(b) for all a,b in R. A ring homomorphism that is one-to-one and onto is called an isomorphism. If there is an isomorphism from R onto S, we say that R is isomorphic to S. An isomorphism from the commutative ring R onto itself is called an automorphism of R. Proposition 5.2.2. The inverse of a ring isomorphism is a ring isomorphism; the composition of two ring isomorphisms is a ring isomorphism. Proposition 5.2.3. Let f : R -> S be a ring homomorphism. Then (a) f(0) = 0; (b) f(-a) = -f(a) for all a in R; (c) if R has an identity 1, then f(1) is idempotent; (d) f(R) is a subring of S. Definition 5.2.4. Let f : R -> S be a ring homomorphism. The set {a in R | f(a) = 0 } is called the kernel of f, denoted by ker(f). Proposition 5.2.5. Let f : R -> S be a ring homomorphism. (a) If a,b are in ker(f) and r is in R, then a+b, a-b, and ra belong to ker(f). (b) The homomorphism f is an isomorphism if and only if ker(f) = {0} and f(R) = S. Example 5.2.5. Let R and S be commutative rings, let f : R -> S be a ring homomorphism, and let s be any element of S. Then there exists a unique ring homomorphism f# : R[x] -> S such that f# (r) = f(r) for all r in R and f# (x) = s, defined by f#(a[0] + a[1]x + ... + a[m]x^m) = f(a[0]) + f(a[1]) s + ... + f(a[m]) s^m . Proposition 5.2.7. Let R and S be commutative rings. The set of ordered pairs (r,s) such that r is in R and s is in S is a commutative ring under componentwise addition and multiplication. Definition 5.2.8. Let R and S be commutative rings. The set of ordered pairs (r,s) such that r is in R and s is in S is called the direct sum of R and S. Example 5.2.10. The ring Z /nZ is isomorphic to the direct sum of the rings Z /kZ that arise in the prime factorization of n. 
This describes the structure of Z /nZ in terms of simpler rings, and is the first example of what is usually called a ``structure theorem.'' This structure theorem can be used to determine the invertible, idempotent, and nilpotent elements of Z /nZ, and provides an easy proof of our earlier formula for the Euler phi-function in terms of the prime factors of n. Definition 5.2.9. Let R be a commutative ring with identity. The smallest positive integer n such that (n)(1) = 0 is called the characteristic of R, denoted by char(R). If no such positive integer exists, then R is said to have characteristic zero. Ideals and factor rings Definition 5.3.1. Let R be a commutative ring. A nonempty subset I of R is called an ideal of R if (i) I is a subgroup of R (under addition), (ii) ra is in I, for all a in I and r in R. Proposition 5.3.2. Let R be a commutative ring with identity. Then R is a field if and only if it has no proper nontrivial ideals. Theorem 5.3.6. If I is an ideal of the commutative ring R, then R/I is a commutative ring, under the operations (a+I)+(b+I) = (a+b)+I and (a+I)(b+I) = ab+I, for all a,b in R. Proposition 5.3.7. Let I be an ideal of the commutative ring R. (a) The natural projection mapping p : R -> R/I defined by p(a) = a+I for all a in R is a ring homomorphism, and ker(p) = I. (b) There is a one-to-one correspondence between the ideals of R/I and ideals of R that contain I. Theorem 5.2.6. [Fundamental Homomorphism Theorem for Rings] Let f : R -> S be a ring homomorphism. Then R/ker(f) is isomorphic to f(R). Integral domains Definition 5.1.6. A commutative ring R with identity is called an integral domain if for all a, b in R, ab = 0 implies a = 0 or b = 0. The ring of integers Z is the most fundamental example of an integral domain. The ring of all polynomials with real coefficients is also an integral domain, but the larger ring of all real valued functions is not an integral domain. 
The cancellation law for multiplication holds in R if and only if R has no nonzero divisors of zero. One way in which the cancellation law holds in R is if nonzero elements have inverses in a larger ring; the next two results characterize integral domains as subrings of fields (that contain 1). Theorem 5.1.7. Let F be a field with identity 1. Any subring of F that contains 1 is an integral domain. Theorem 5.4.4. Let D be an integral domain. Then there exists a field F that contains a subring isomorphic to D. Theorem 5.1.8. Any finite integral domain must be a field. Proposition 5.2.10. An integral domain has characteristic 0 or p, for some prime number p. Definition 5.3.8. Let I be a proper ideal of the commutative ring R. Then I is said to be a prime ideal of R if for all a,b in R it is true that ab in I implies a in I or b in I. The ideal I is said to be a maximal ideal of R if for all ideals J of R such that I is a subset of J and J is a subset of R, either J = I or J = R. Proposition 5.3.9. Let I be a proper ideal of the commutative ring R with identity. (a) The factor ring R/I is a field if and only if I is a maximal ideal of R. (b) The factor ring R/I is an integral domain if and only if I is a prime ideal of R. (c) If I is maximal, then it is a prime ideal. Definition 5.3.3. Let R be a commutative ring with identity, and let a in R. The ideal Ra = { x in R | x = ra for some r in R } is called the principal ideal generated by a. An integral domain in which every ideal is a principal ideal is called a principal ideal domain. Example 5.3.1. (Z is a principal ideal domain) Theorem 1.1.4 shows that the ring of integers Z is a principal ideal domain. Moreover, given any nonzero ideal I of Z, the smallest positive integer in I is a generator for the ideal. Theorem 5.3.10. Every nonzero prime ideal of a principal ideal domain is maximal. Example 5.3.7. (Ideals of F[x]) Let F be any field. 
Then F[x] is a principal ideal domain, since the ideals of F[x] have the form I = ( f(x) ), where f(x) is the unique monic polynomial of minimal degree in the ideal. The ideal I is prime (and hence maximal) if and only if f(x) is irreducible. If p(x) is irreducible, then the factor ring F[x]/( p(x) ) is a field. Example 5.3.8. (Evaluation mapping) Let F be a subfield of E, and for any element u in E define the evaluation mapping f[u] : F[x] -> E by f[u] (g(x)) = g(u), for all g(x) in F[x]. Since f[u] (F[x]) is a subring of E that contains 1, it is an integral domain, and so the kernel of f[u] is a prime ideal. Thus if the kernel is nonzero, then it is a maximal ideal, so F[x]/ker(f[u]) is a field, and the image of f[u] is a subfield of E.

Rings vs. groups

Examples. Groups: the symmetric group, the general linear group. Rings: all n by n matrices, polynomial rings.

Operations. Groups: one binary operation, which is associative, with an identity element and inverses. Rings: two binary operations + and ., forming an abelian group under +, with associative multiplication and the distributive laws.

Homomorphisms. Group homomorphisms satisfy f(g)f(h) = f(gh). Ring homomorphisms satisfy f(r)+f(s) = f(r+s) and f(r)f(s) = f(rs).

Kernels. Kernels of group homomorphisms are the normal subgroups: gNg^-1 contained in N. Kernels of ring homomorphisms are the ideals: subgroups with rI and Ir contained in I.

Factor objects. Factor groups: cosets gN, where N is normal, with (gN)(hN) = (gh)N. Factor rings: cosets r+I, where I is an ideal, with (r+I)+(s+I) = (r+s)+I and (r+I)(s+I) = rs+I.

Corresponding classes:
1. Abelian groups (gh = hg for all g,h) correspond to integral domains (rs = sr for all r,s, and rs = 0 implies r = 0 or s = 0).
2. Cyclic groups (Z; Z/nZ) correspond to principal ideal domains (Z; F[x], F a field).
3. Simple abelian groups (Z/pZ) correspond to fields (Q; R; C; Z/pZ).
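The structure theorem for Z/nZ (Example 5.2.10) can be checked computationally. A Python sketch, working directly from the definitions rather than the theorem, for n = 12 (so Z/12Z is isomorphic to Z/4Z direct sum Z/3Z):

```python
from math import gcd

def units(n):
    # Invertible elements of Z/nZ: a with gcd(a, n) = 1 (Section 1.4)
    return [a for a in range(1, n) if gcd(a, n) == 1]

def idempotents(n):
    # Elements with e^2 = e in Z/nZ
    return [a for a in range(n) if (a * a) % n == a]

def nilpotents(n):
    # Elements with a^k = 0 in Z/nZ for some positive k
    return [a for a in range(n) if any(pow(a, k, n) == 0 for k in range(1, n + 1))]

print(units(12))        # [1, 5, 7, 11]
print(idempotents(12))  # [0, 1, 4, 9]
print(nilpotents(12))   # [0, 6]
```

The counts match the decomposition: phi(12) = phi(4) phi(3) = 4 units, and 2^2 = 4 idempotents for the two distinct prime factors of 12.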
{"url":"http://www.math.niu.edu/~beachy/courses/algebra/rings_review.html","timestamp":"2014-04-18T00:25:25Z","content_type":null,"content_length":"14874","record_id":"<urn:uuid:a2b740e9-a25b-4684-875c-616d5cf9b6f5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Tutorials: Books in Finance M. Baxter Financial Calculus D. Brigo Interest Rate Models J. Dewynne The Mathematics of Financial Derivatives M.U. Dothan Prices in Financial Markets D. Duffie Dynamic Asset Pricing Theory J.P. Fouque Derivatives in Financial Markets with Stochastic Vol. E. Gaardner Haug The Complete Guide to Option Pricing Formulas P. Glasserman Monte Carlo Methods in Financial Engineering S. Howison The Mathematics of Financial Derivatives J.C. Hull Options, Futures, and Other Derivatives P.J. Hunt Financial Derivatives in Theory and Practice J.E. Kennedy Financial Derivatives in Theory and Practice D. Lamberton Stochastic Calculus Applied to Finance B. Lapeyre Stochastic Calculus Applied to Finance A. Lipton Mathematical Methods for Foreign Exchange F. Mercurio Interest Rate Models R.C. Merton Continuous Time Finance A. Meucci Risk and Asset Allocation M. Meyer Continuous Stochastic Calculus with App. to Finance M. Musiela Martingale Methods in Financial Modelling S.N. Neftci Intro. to the Mathematics of Financial Derivatives G. Papanicolaou Derivatives in Financial Markets with Stochastic Vol C. Randall Pricing Financial Instruments: The Finite Diff. Method R. Rebonato Interest Rate Option Models R. Rebonato Modern Pricing of Interest Rate Derivatives A. Rennie Financial Calculus M. Rutkowski Martingale Methods in Financial Modelling S.E. Shreve Stochastic Calculus Models for Finance K.R. Sircar Derivatives in Financial Markets with Stochastic Vol J.M. Steele Stochastic Calculus and Financial Applications N.N. Taleb Dynamic Hedging D. Tavella Pricing Financial Instruments: The Finite Diff. Method P. Wilmott The Mathematics of Financial Derivatives
{"url":"http://www.probability.net/finance.html","timestamp":"2014-04-20T23:44:04Z","content_type":null,"content_length":"21429","record_id":"<urn:uuid:89724a0c-04de-4b0b-a875-174a4c5890f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Canyon, CA Geometry Tutor ...On one side it is for many students a first encounter with mathematical abstraction and on the other side it is a topic that occurs in many scientific applications like in numerical or economical models. At different universities I taught my own courses that built on linear algebra. I also taug... 41 Subjects: including geometry, calculus, statistics, algebra 1 ...A strong foundation in Algebra 1 will serve them well for the remainder of their math education. My years of experience tutoring students (and teaching Algebra 2 in public schools) has given me an excellent understanding of the connections throughout the secondary curriculum. I am able to expla... 10 Subjects: including geometry, calculus, algebra 1, algebra 2 ...I've sung in choral performances. Whether I'm explaining the very basics of using a computer or trading shortcuts and new ways to use tools in design software, using computers and the internet is a prime example of continuous shared learning and teaching. Study skills and organization are criti... 34 Subjects: including geometry, Spanish, reading, English ...Before transferring to CAL Berkeley I studied professional cooking, viticulture, and critical thinking at the Santa Rosa Junior College. I have played the oboe for over a decade in numerous Bay Area youth, amateur and semi-professional orchestras, and studied with three of the four current membe... 28 Subjects: including geometry, English, reading, calculus ...During my undergraduate education I took two classes on organic synthesis for chemistry majors, two classes for chemistry majors on the analysis of organic molecules, and two semesters of organic synthesis research. During graduate school I also took graduate level organic analysis, physical che... 
19 Subjects: including geometry, chemistry, physics, calculus
{"url":"http://www.purplemath.com/canyon_ca_geometry_tutors.php","timestamp":"2014-04-18T23:57:19Z","content_type":null,"content_length":"23935","record_id":"<urn:uuid:e2483345-73cb-4d90-ab6d-9d762afbbebe>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
- Newton's Laws of Motion illustrated with 3D Animations (Grade: 6 - 12). Learn about Isaac Newton's three laws of motion with this animated lesson.
- Newton's Laws of Motion (Grade: 6 - 12). An introduction to Newton's Laws of Motion.
- Newton's Three Laws of Motion (Grade: 6 - 12). Lesson on Newton's three laws of motion. It describes the first law, which relates to inertia; the second law, which relates to mass and acceleration; and the third law, which allows a rocket to launch.
- Demonstration of Newton's First Law of Motion (Grade: 6 - 12). Newton's first law of motion is pretty important if you are trying to get a rocket up into space.
- Demonstration of Newton's Second Law of Motion (Grade: 6 - 12). The video demonstrates Newton's Second Law of Motion with high-powered air cannons.
- Demonstration of Newton's Third Law of Motion (Grade: 6 - 12). A fun demonstration of Newton's third law of motion. How do we launch rockets and shuttles into space? Of course, with Sir Isaac Newton and his laws of motion.
- What is Friction? (Grade: 6 - 12). Lesson on the force of friction. The force of friction can be classified as static or kinetic depending on whether the body is at rest or in motion. It explains that the frictional force is directly proportional to the weight of the body.
- Circular Motion (Grade: 6 - 12). Learn about the science of spinning, circular motion and the centripetal force. Find the connection between merry-go-rounds and artificial gravity.
- Centripetal Force (Grade: 6 - 12). Lesson on centripetal force. Cool science experiments in the video demonstrate inertia and centripetal force (not centrifugal).
- Angular Momentum (Grade: 6 - 12). Learn about the law of conservation of angular momentum. See it in action when ice skaters spin faster by hugging themselves tight. Watch a few more angular momentum examples.
- Lesson on Inertia (Grade: 6 - 12). The lesson introduces the concept of inertia, the first law of physics: things like to keep on doing what they're already doing.
- Lesson on Speed (Grade: 6 - 12). The lesson adds the concept of speed to the inertia-mass relationship. It explains that force varies with mass and the rate of change of speed.
- Lesson on Acceleration, part 1 (Grade: 6 - 12). Learn about the concept Force = mass x acceleration. The lesson illustrates this important rule of physics with the examples of a bicycle and a baseball player.
- Lesson on Acceleration, part 2 (Grade: 6 - 12). The lesson explains how acceleration works, and how to calculate it with the help of an animated locomotive. It stresses the importance of reasonable units, and that acceleration is measured in m/s².
- Lesson on Mass (Grade: 6 - 12). Building on the idea of inertia, this lesson introduces the concept of mass, tells how it's measured, and shows how it differs from size. It explains that inertia increases with mass.
- Lesson on Gravity (Grade: 6 - 12). The lesson explains the force of gravity with the help of Isaac Newton's celebrated falling apple. It also explains the unit with which the force of gravity is measured.
- Forces - Mass and Weight (Grade: 6 - 12). What is the difference between mass and weight? Get the answer in this physics lesson.
- Newton's First Law of Motion (Grade: 6 - 12). A lesson on Newton's First Law (Galileo's Law of Inertia).
- Newton's Second Law of Motion (Grade: 6 - 12). A lesson on Newton's Second Law of Motion: F = ma.
- Newton's Third Law of Motion (Grade: 6 - 12). A lesson on Newton's Third Law: every action has an equal and opposite reaction.
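Several of the lessons above turn on the same relation, Force = mass x acceleration, with acceleration measured in m/s². As a quick numerical illustration (the values here are invented for the example, not taken from any of the videos):

```python
# Newton's second law: F = m * a, so a = F / m.
# Example values below are invented for illustration.
def acceleration(force_newtons, mass_kg):
    """Return acceleration in m/s^2 for a net force applied to a mass."""
    return force_newtons / mass_kg

# A 1000 kg car pushed by a 3000 N net force accelerates at 3 m/s^2.
a = acceleration(3000.0, 1000.0)
print(a)  # 3.0
```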
Question: A right triangle has a hypotenuse of 39 and a leg of length 15. What is the length of the other leg? If necessary, round your answer to two decimal places.

Best Response: Pythagorean's theorem tells us that for right triangles

\[H^2 = A^2 + B^2\]

where H is the hypotenuse, A is one leg, and B is the other.

Reply: Thanks, for some reason I just spaced the theorem.
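Plugging the given numbers into the theorem finishes the problem; this worked computation is my addition, not part of the original thread:

```python
import math

# Solve for the unknown leg: B = sqrt(H^2 - A^2).
hypotenuse = 39.0
leg_a = 15.0
leg_b = math.sqrt(hypotenuse**2 - leg_a**2)  # sqrt(1521 - 225) = sqrt(1296)
print(round(leg_b, 2))  # 36.0
```

Here the answer happens to come out as a whole number, so no rounding is actually needed.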
Re: Intersection of regular expression languages

From: Chris F Clark <cfc@shell01.TheWorld.com>
Newsgroups: comp.compilers
Date: Sun, 21 Oct 2007 19:31:34 -0400
Organization: The World Public Access UNIX, Brookline, MA
References: 07-10-063
Keywords: lex, theory
Posted-Date: 22 Oct 2007 00:23:37 EDT

haberg@math.su.se (Hans Aberg) writes:

> Can the intersection of two regular expression languages be
> constructed as a regular expression language?

I'm going to say something which I hope isn't stupid in response to this. The class of regular languages forms a Boolean algebra with respect to the union and intersection operators, which means they are closed with respect to both of them. I believe one can even use the class of regular languages as a topology.

So, in case this is a homework problem (since it is a commonly asked hw problem in automata theory): Identify what the identity languages are for the two operators. Show that idempotence holds. Find the union and intersection inverses of a given language. Is there something interesting about the two inverses? Does DeMorgan's theorem hold?

Hope this helps,
Chris Clark                  Internet : compres@world.std.com
Compiler Resources, Inc.     Web Site : http://world.std.com/~compres
23 Bailey Rd                 voice    : (508) 435-5016
Berlin, MA 01503 USA         fax      : (978) 838-0263 (24 hours)
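The closure fact being asked about (regular languages are closed under intersection) is usually proved with the product-automaton construction: run a DFA for each language in lockstep and accept exactly when both accept. Here is a minimal sketch; the DFA encoding and the two example languages are my own illustration, not from the post:

```python
def accepts(dfa, s):
    """dfa = (start_state, transition_dict, accepting_set)."""
    start, delta, accepting = dfa
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

def intersect(dfa1, dfa2):
    """Product construction: states are pairs, accept iff both accept."""
    s1, d1, a1 = dfa1
    s2, d2, a2 = dfa2
    delta = {}
    for (q1, ch) in d1:
        for (q2, ch2) in d2:
            if ch == ch2:
                delta[((q1, q2), ch)] = (d1[(q1, ch)], d2[(q2, ch)])
    accepting = {(q1, q2) for q1 in a1 for q2 in a2}
    return ((s1, s2), delta, accepting)

# L1: strings over {a, b} with an even number of a's  (regex: (b*ab*a)*b*)
even_a = ("even", {("even", "a"): "odd", ("even", "b"): "even",
                   ("odd", "a"): "even", ("odd", "b"): "odd"}, {"even"})
# L2: strings ending in b  (regex: (a|b)*b)
ends_b = (0, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}, {1})

both = intersect(even_a, ends_b)
print(accepts(both, "aab"))  # True: two a's, ends in b
print(accepts(both, "ab"))   # False: odd number of a's
```

The product DFA has at most |Q1| x |Q2| states, which is the standard bound for this construction.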
Diffraction–attenuation resistant beams: their higher-order versions and finite-aperture generations

Recently, a method for obtaining diffraction–attenuation resistant beams in absorbing media has been developed in terms of suitable superposition of ideal zero-order Bessel beams. In this work, we show that such beams keep their resistance to diffraction and absorption even when generated by finite apertures. Moreover, we shall extend the original method to allow a higher control over the transverse intensity profile of the beams. Although the method is developed for scalar fields, it can be applied to paraxial vector wave fields, as well. These new beams have many potential applications, such as in free-space optics, medical apparatus, remote sensing, and optical tweezers. © 2010 Optical Society of America

OCIS codes: (140.3300) Lasers and laser optics: Laser beam shaping; (260.1960) Physical optics: Diffraction theory; (350.7420) Other areas of optics: Waves

ToC Category: Lasers and Laser Optics
Original Manuscript: July 16, 2010
Manuscript Accepted: August 19, 2010
Published: October 18, 2010

Citation: Michel Zamboni-Rached, Leonardo A. Ambrósio, and Hugo E. Hernández-Figueroa, "Diffraction–attenuation resistant beams: their higher-order versions and finite-aperture generations," Appl. Opt. 49, 5861-5869 (2010)
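For context, the "suitable superposition of ideal zero-order Bessel beams" that the abstract refers to is, in the earlier Frozen Wave papers by the same group, a discrete sum of equal-frequency Bessel beams with different longitudinal wavenumbers. Schematically (this rendering is reconstructed from that earlier line of work, not quoted from the present article):

```latex
% Frozen-Wave superposition: 2N+1 zero-order Bessel beams, all at the same
% angular frequency \omega, with longitudinal wavenumbers \beta_m chosen so
% that the on-axis intensity approximates a desired pattern |F(z)|^2.
\Psi(\rho,z,t) = e^{-i\omega t}\sum_{m=-N}^{N} A_m\, J_0\!\left(k_{\rho m}\rho\right) e^{i\beta_m z},
\qquad k_{\rho m}^2 = \frac{\omega^2}{c^2} - \beta_m^2 .
```

In an absorbing medium the wavenumbers become complex, and choosing the coefficients $A_m$ appropriately is what lets the superposition compensate the attenuation that each individual Bessel beam would suffer.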
Nonlinear matrix equation

Question: Solve the following nonlinear equations for $v$ and $w$, where $\lambda_1, \lambda_2, \lambda_3$ are real and $A$ is a symmetric matrix. How would you generalize to the case where both $A$ and $B$ are symmetric? Would it help if they are also similar and each of them has exactly $n/2$ eigenvalues equal to $+1$ and $n/2$ eigenvalues equal to $-1$?

Comment (Federico Poloni, Oct 16 '12): Where does this problem come from? Could you provide more context, also to convince us that this is no homework? Also, why are there a $v^Tv$ and a $w^Tw$ in the equations when you know that they are both equal to $1$?

Comment (Minh Tran, Oct 18 '12): Hi, sorry for the typo. It should be $vv^T$ and $ww^T$. The problem may look simple, as if it is homework, but it's not, and I think it's not trivial, at least to me. This is part of my attempt to minimize $\sum_{\sigma}|v^{\dagger}\sigma w|^2$ with a Lagrange multiplier. Here $\{\sigma\}$ are tensor products of some Pauli matrices, and $v$, $w$ are two orthonormal pure states. It is needed to prove another conjecture for my research project in quantum entanglement. I don't even know if it holds, although random tests suggest it does.

Answer (accepted): First of all, note that $w^TAv=v^TAw$ is a scalar. Here is an idea that should greatly simplify the equation: your equations say that $Aw$ and $Av$ are both contained in $U=\operatorname{span}(v,w)$, therefore $U$ is an invariant subspace of $A$. You can get all two-dimensional invariant subspaces by taking $U=\operatorname{span}(x_1,x_2)$, where $x_1$ and $x_2$ are eigenvectors of $A$ (proof: consider $A$ restricted to the subspace $U$; it is a symmetric linear operator, so it has two eigenvalues which are also eigenvalues of $A$). So all solutions must be of the form $v=\alpha x_1 +\beta x_2$ and $w=\gamma x_1+\delta x_2$, where $x_1$ and $x_2$ are two eigenvectors of $A$. Making this ansatz, the problem becomes a $2\times 2$ one in $\alpha,\beta,\gamma,\delta$ and should be easy to solve explicitly.

Comment (Minh Tran, Oct 18 '12): Thank you for your answer. It is not so clear to me why the two eigenvalues of $A$ restricted to $U=\operatorname{span}\{v,w\}$ are also eigenvalues of $A$. Can you explain a bit more?

Comment (Federico Poloni, Oct 18 '12): It's the same operator, just seen on a smaller vector subspace. The relation $Au=\lambda u$ still holds; it does not depend on the ambient space.

Comment (Minh Tran, Oct 25 '12): Thank you for your great idea. I would like to ask you some more. How would you extend your idea to the case where you have two matrices instead of one as above? I have spent some time thinking about it, but I really don't see a way.

Comment (Federico Poloni, Oct 25 '12): You'd better ask a new question with this second problem. Is the text correct? It is unpleasingly asymmetric.

Comment (Minh Tran, Oct 27 '12): Thank you for your advice. I will start a new question. There was indeed some typo anyway.

Tagged: linear-algebra, multilinear-algebra, numerical-linear-algebra
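The invariant-subspace step in the accepted answer is easy to sanity-check numerically. The sketch below is my own illustration with a made-up diagonal matrix (every diagonal matrix is symmetric, and its eigenvectors are the standard basis vectors): vectors $v$, $w$ built from two eigenvectors stay inside their span after applying $A$.

```python
# A = diag(1, 2, 3) is symmetric; e1, e2 are two of its eigenvectors.
# Take v, w in span{e1, e2}; then A v and A w have no e3 component,
# i.e. they remain in span{e1, e2} = span{v, w}.
A = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

v = [0.6, 0.8, 0.0]   # alpha*e1 + beta*e2, unit norm
w = [0.8, -0.6, 0.0]  # gamma*e1 + delta*e2, orthogonal to v

Av, Aw = matvec(A, v), matvec(A, w)
print(Av[2], Aw[2])  # both 0.0: Av and Aw lie in span{e1, e2}
```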
Pacoima Prealgebra Tutor

...I prefer meeting with students in environments where they feel comfortable, such as their home or a school library, but am flexible with this as well. I thank you for your consideration and look forward to helping you or your student succeed! Best wishes, Marisa S. I am qualified to tutor Genetics because I got an 'A' in Genetics when I took it for my undergraduate Biology degree.
11 Subjects: including prealgebra, chemistry, biology, algebra 2

...I have many years of teaching experience and I understand that students are unique individuals who learn in different ways. I will work with each student to achieve the best possible results. I know the material extremely well, and I have many years of experience teaching Algebra I.
16 Subjects: including prealgebra, French, calculus, geometry

...I am very patient and make sure that I personalize all my lessons based on you or your child's needs. I successfully exited two start-up companies (logistics & technology) in the past. My Education: I earned a B.S. in Psychology at University of Washington (focus on perception and memory), pursu...
25 Subjects: including prealgebra, English, reading, writing

...Though I have not continued with my own education, I have remained firmly planted in academics by assisting both my wife (completing her bachelor's degree in Business Administration) and my stepson (recently completed the 9th grade). My stepson is diagnosed with Asperger's Syndrome (a form of hig...
14 Subjects: including prealgebra, reading, public speaking, SAT math

...I have taught in the Los Angeles Unified School District (LAUSD) for the last 15 years at one of the top rated middle schools in the district. Our students consistently score well in the state tests in math, as well as in the other academic subjects. There are two reasons why I tutor--1) I woul...
3 Subjects: including prealgebra, algebra 1, elementary math
How we look to religion professors From this eminently enjoyable interview with U Chicago professor JZ Smith: But it would be terrible if we did everything in the unambiguous world of mathematics. Here’s a speech designed not to have any of these problems, to be international, to have no ambiguity of any of that. I mean, it has its uses, but what an awful way to go around all day. I can’t imagine. It would be a very odd conversation. I’m sure we wouldn’t laugh once. They’re very funny people, mathematicians, but always when they stop being mathematicians they’re funny. I guess they have to be, having spent all day talking like that. 6 thoughts on “How we look to religion professors” 1. What’s funny is that I’ve lately found that I completely agree with that quote. Standing around the water cooler I’ve tried to explain this to my colleagues and grad students, but all I get is blank stares. Weird! Of course, my conclusion is not that it _is_ terrible, rather that it _should_ be terrible as humans are not made for that kind of thing, whence that it is amazing that we/I enjoy doing and talking mathematics so much. Or something… OK, I’ll go and have some more coffee now. 2. Never mind what Johan or anyone else says. You make math hilarious. Your talk at the JMM was one part math per part stand-up. 3. As someone who knew a lot of logicians, decision theorists, and theoretical computer scientists before knowing a lot of mathematicians, I am always struck by how ‘wet’ (like, opposite of dry?) the discourse of mathematicians is compared to the discourse of *those* disciplines. 4. A) in what way are logicians and theoretical computer scientists not mathematicians? and B) tell us more about what “wet” means — you mean, like, shot through with analogy, allusion, motivation, and anecdote? 5. “Shot through with analogy, allusion, motivation, and anecdote” is exactly the phrasing I failed to come up with. 
About ‘B)’, I think it’s common to assert that math-department mathematics has a different ‘flavor’ from logic and theoretical computer science, no? (And, I guess, that this difference in flavor is stronger than the one within subfield of math-department mathematics.) 6. (I’m saying this as someone with a good head for theoretical computer science and logic and zero talent for math-department math, by the way.)
Cayley's Theorem and the Yoneda Lemma

Question: Hi, from wiki, I know that the Yoneda lemma is the generalization of Cayley's theorem. But I do not quite understand the intuition behind that. Can anyone help me with that? Cheers!

(Tagged: ct.category-theory)

Answer (score 10):

The way I view Yoneda's lemma as a generalization of Cayley's theorem is by viewing a group as a category with one object $A$. To see how a category with one element is a group, let the points of the group be morphisms $f:A\rightarrow A$. By the definition of a category, we must have an identity morphism $id_A:A\rightarrow A$, and this is the identity of the group. Similarly, morphisms are associative, so $f\circ (g\circ h) = (f\circ g)\circ h$. So composition is the group operation. As for invertibility, let's just force our morphisms to be invertible by assuming they are all isomorphisms. Ok, so now the collection of morphisms forms a group in the sense we learn in undergrad.

Cayley's Theorem says we can embed a group $G$ into $Sym(G)$, the set of bijections $f:G\rightarrow G$. This means we have an isomorphism $G\cong H$ for some $H\leq Sym(G)$. If we're going to generalize this to category theory we need to understand maps that take morphisms to morphisms (those are our group elements, after all). Functors do this, so a generalization of Cayley's Theorem needs to say something about embedding a category $\mathcal{C}$ into a category of functors going out of $\mathcal{C}$. If I want a category of functors, I need to know what maps are between functors. They are natural transformations.

Now, Yoneda's Lemma says we have $\mathrm{Nat}(h^A,F) \cong F(A)$ where $h^A = Hom(A, − )$ and $F$ is any functor from $\mathcal{C}$ to the category of Sets. If we set $F$ to be the functor $h^A$ then Yoneda is telling us $\mathrm{Nat}(h^A,h^A) \cong Hom(A,A)$. But $Hom(A,A)$ is exactly our group (elements of our group are exactly morphisms from $A$ to $A$), so now we see our group as isomorphic to $\mathrm{Nat}(h^A,h^A)$ sitting in a larger functor category (the category of all functors from $\mathcal{C}$ to Set).

Some notational points: this category which $\mathcal{C}$ is equivalent to is called the category of representable functors with maps between functors given by natural transformations. Yoneda's lemma also discusses how it relates to the larger functor category it sits in, because a natural transformation $Φ: h^A \rightarrow F$ is sent to $Φ_A(id_A)$ in $F(A)$. I've ignored an issue of covariance vs. contravariance because all I need is $\mathrm{Nat}(h^A,h^A) \cong Hom(A,A)$, although the more general fact is that $\mathrm{Nat}(h^A,h^B) \cong Hom(B,A)$.

Answer (score 7):

Let $G$ be a finite group of order $n$ and consider the category $\mathscr G$ with a single object $\{ \bullet \}$ and whose morphisms consist of the elements of $G$, i.e., $\mathrm{Hom}_{\mathscr G}(\bullet, \bullet)=G$. Let $h^{\bullet}=\mathrm{Hom}_{\mathscr G}(\bullet , {\_} )$ be the Yoneda functor. Then according to Yoneda's lemma $$\mathrm{Nat}(h^{\bullet},h^\bullet)\simeq \mathrm{Hom}_{\mathscr G}(\bullet,\bullet),$$ which implies that $h^\bullet$ is a faithful functor. In other words, $$ h^{\bullet} : \mathscr G \to \mathscr Set$$ maps $\bullet$ to the set $\mathrm{Hom}_{\mathscr G}(\bullet,\bullet)\simeq_{\mathscr Set} |G|$ (where $|G|$ is the set of elements of $G$) and also embeds the group (!) $\mathrm{Hom}_{\mathscr G}(\bullet,\bullet)\simeq_{\mathscr Groups} G$ into $\mathrm{Hom}_{\mathscr Set}(|G|,|G|)\simeq S_n$, which gives you Cayley's Theorem.

Answer (score 1):

One can define the notion of a right category action on a set. This involves assigning a domain (an object of the category) to each element of the set, and a partial multiplication of elements by arrows defined whenever the domain of an element is the codomain of an arrow. The prototypical example is a category acting by composition on its set of arrows. The category of right $\mathbf{C}$-sets winds up being equivalent to the category of functors $\mathbf{C} \to \mathbf{Set}$. The reverse equivalence applied to Yoneda of an object $A$ is essentially the set of arrows with codomain $A$, with $\mathbf{C}$ acting by composition.

I believe there is also an equivalence of left-right $\mathbf{C}$-sets to the category of functors $\mathbf{C}^\circ \times \mathbf{C} \to \mathbf{Set}$. The action of $\mathbf{C}$ on itself corresponds to $\text{Hom}_{\mathbf{C}}(-,-)$, which in turn is the adjoint transpose of the Yoneda embedding.
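Cayley's theorem itself is easy to see concretely in code. The sketch below is my own illustration, using the cyclic group Z/4: it maps each group element $g$ to the permutation $x \mapsto g + x \pmod 4$ and checks that this map is an injective homomorphism into $Sym(G)$.

```python
# Cayley embedding for Z/4: each element g becomes the permutation
# "left-translation by g", represented as a tuple (images of 0, 1, 2, 3).
n = 4

def translation(g):
    return tuple((g + x) % n for x in range(n))

def compose(p, q):
    """Permutation composition: (p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(n))

perms = {g: translation(g) for g in range(n)}

# Injectivity: distinct group elements give distinct permutations.
assert len(set(perms.values())) == n

# Homomorphism: translation(g + h) == translation(g) o translation(h).
for g in range(n):
    for h in range(n):
        assert perms[(g + h) % n] == compose(perms[g], perms[h])

print("Z/4 embeds into Sym(4):", sorted(perms.values()))
```

In the categorical language above, this is exactly the Yoneda functor of the one-object category: each morphism acts on the hom-set by composition.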
{"url":"https://mathoverflow.net/questions/63654/cayleys-theorem-and-the-yoneda-lemma/63662","timestamp":"2014-04-18T14:11:19Z","content_type":null,"content_length":"62229","record_id":"<urn:uuid:552a1d24-1d9e-44e2-b052-8225c0fe95b6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone explain why (x+y)^(-n) is a infinite rather than finite series?

May 6th 2012, 10:41 PM #1, May 2012, United States

Hi, first post here. So, we have the binomial theorem, which works great for (x+y)^n with n being a natural number. But what I can't figure out is: why does the binomial expansion go on forever if n is negative? Makes no sense to me; we should be able to expand it with the same number of terms as when n>0. Is this like an "unsolved" area of math, or is there a reason for the infinite terms?

Re: Can someone explain why (x+y)^(-n) is a infinite rather than finite series?

The binomial coefficients contain the factors n, n-1, n-2 and so on. If n is a positive integer, one of these factors will eventually be zero, hence wiping out that term and any that come after. If n is fractional or negative, none of these factors is ever zero.

May 6th 2012, 11:02 PM #2, Senior Member, Nov 2011, Crna Gora
May 6th 2012, 11:10 PM #3, Senior Member, Mar 2012, Sheffield England
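The reply's point is easy to see numerically. (A sketch of my own; the function name is mine.) The generalized binomial coefficient n(n-1)...(n-k+1)/k! makes sense for any exponent n, and it vanishes for all large k exactly when one of the factors n, n-1, n-2, ... hits zero, i.e. when n is a nonnegative integer:

```python
from fractions import Fraction

def binom(n, k):
    """Generalized binomial coefficient n(n-1)...(n-k+1) / k!,
    valid for any (possibly negative or fractional) exponent n."""
    c = Fraction(1)
    for i in range(k):
        c = c * Fraction(n - i, i + 1)
    return c

# For a positive integer exponent the coefficients terminate:
print([int(binom(3, k)) for k in range(6)])    # [1, 3, 3, 1, 0, 0]

# For a negative exponent no factor is ever zero, so they never do:
print([int(binom(-1, k)) for k in range(6)])   # [1, -1, 1, -1, 1, -1]
```

The second list is the familiar infinite expansion 1/(1+x) = 1 - x + x^2 - x^3 + ...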
{"url":"http://mathhelpforum.com/number-theory/198474-can-someone-explain-why-x-y-n-infinite-rather-than-finite-series.html","timestamp":"2014-04-19T19:07:09Z","content_type":null,"content_length":"34726","record_id":"<urn:uuid:2642243c-0de6-4ef5-ba07-a9d92f6cfe93>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Tricky sin equation

April 23rd 2010, 12:32 AM #1, May 2008

This is the equation I'm stuck with:

$2sin (0.5x) = 3/3x+2$

I'm not sure where to start, because I can't isolate any of the x values. Should I let 0.5x be a completely new value like 'y'? But then I will still be stuck. Thank you!

Last edited by mr fantastic; April 23rd 2010 at 02:47 PM. Reason: Edited post title

April 23rd 2010, 12:51 AM #2, Super Member, Jun 2009

What is required in the problem? Do you want to find the value of x? If yes, how many decimal places in x are required? As an infinite series, sinθ can be written as sinθ = θ - θ^3/3! + θ^5/5! - ... and so on.

April 23rd 2010, 01:01 AM #3, May 2008

Thank you so much for the reply!!! Yes, I am asked to find the value of x, and the answer is required to 3 s.f. It might be better if I give you the context of the equation: "There are two equations g(x) and f(x). There are two values of x for which the gradient of f is equal to the gradient of g. Find both these values of x." So I've already found the derivative of the two equations and checked that it's correct, and then I equated the two derivatives, but I don't know how to find x. I was thinking that maybe it could be done by trig identities, but I don't know how. Thank you!

April 23rd 2010, 11:35 AM #4

Hello appleseed. There are infinitely many solutions to the equation $2\sin(\tfrac12x)=\frac{3}{3x+2}$, which is what I assume you meant. (You should have written 3/(3x+2) if you don't know how to write it using LaTeX.) Have a look at the diagram I've attached, where I've plotted the graph of each side separately. You won't be able to solve an equation like this to get exact answers. You'll have to use a numerical method - e.g. Newton-Raphson. Do you know how?

April 24th 2010, 10:14 PM #5, May 2008

Hi Grandad!!! Thank you for the reply. I think I've heard of Newton-Raphson before, but I can't remember exactly how to find the answer using it. Could you briefly go over the steps please?

April 24th 2010, 10:44 PM #6

Newton-Raphson method

Hello appleseed. If you've not had any practice with this method, this is not the easiest example to start with. If you want to get a solution to the equation $f(x) = 0$ and you have a first approximation, $x = a_1$, to the answer, then a second approximation, $a_2$, is given by

$a_2 = a_1 - \frac{f(a_1)}{f'(a_1)}$

In this case you'll have to let

$f(x) = 2\sin(\tfrac12x)-\frac{3}{3x+2}$

$f'(x) = \cos(\tfrac12x) +\frac{9}{(3x+2)^2}$

From the graph, there's a solution close to $x = 1$. So with $a_1 = 1$ we get

$a_2 = 1-\frac{f(1)}{f'(1)} \approx 0.7$

... and so on. The next approximation, $a_3$, uses $0.7$ as the starting value:

$a_3 = 0.7 -\frac{f(0.7)}{f'(0.7)} \approx 0.73$ (to 2 d.p.)

Repeating the process, to 4 d.p., I make the answer $0.7314$. The next positive solution is around $x = 6$. To 4 d.p., the solution converges very quickly to $6.1361$.
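The iteration Grandad describes is easy to check numerically. (A sketch of my own; the function names are mine, but f and f' are exactly the ones given in the post.)

```python
import math

def f(x):
    return 2 * math.sin(0.5 * x) - 3 / (3 * x + 2)

def fprime(x):
    return math.cos(0.5 * x) + 9 / (3 * x + 2) ** 2

def newton(x, steps=20):
    """Newton-Raphson: repeatedly replace x by x - f(x)/f'(x)."""
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

print(round(newton(1.0), 4))   # 0.7314, matching the post
print(round(newton(6.0), 3))   # 6.136 (the post gives 6.1361 to 4 d.p.)
```

Starting values 1 and 6 come from reading off the graph, as in the post; a bad starting value near a flat part of f could send the iteration elsewhere.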
{"url":"http://mathhelpforum.com/trigonometry/140871-tricky-sin-equation.html","timestamp":"2014-04-19T12:31:34Z","content_type":null,"content_length":"54179","record_id":"<urn:uuid:682daaf8-4b1c-48ae-b553-8aa2380a2891>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
On Moduli of Regular Surfaces with $K^2=8$ $p_g=4$

Paola Supino
Dipartimento di Matematica, Univ. di Ancona, via Brecce Bianche, 60131 Ancona - ITALY
E-mail: Supino@dipmat.unian.it

Abstract: Let $S$ be a surface of general type with not birational bicanonical map and that does not contain a pencil of genus 2 curves. If $K^2_S=8$, $p_g(S)=4$ and $q(S)=0$ then $S$ can be given as double cover of a quadric surface. We show that its moduli space is generically smooth of dimension $38$, and single out an open subset. Note that for these surfaces $h^2(S,T_S)$ is not zero.

Full text of the article: Electronic version published on: 9 Feb 2006. This page was last modified: 27 Nov 2007.
© 2003 Sociedade Portuguesa de Matemática
© 2003–2007 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
{"url":"http://www.emis.de/journals/PM/60f3/7.html","timestamp":"2014-04-19T01:53:23Z","content_type":null,"content_length":"3390","record_id":"<urn:uuid:fffa0833-29e0-4ba4-a0a8-e7cc58c974eb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Morrow, GA Prealgebra Tutor Find a Morrow, GA Prealgebra Tutor ...I also tutored fellow students at college and I love the demonstrative approach (hands on where the student and I sit at a computer and program together, correcting errors and making adjustments). In pursuing my Bachelor's Degree in Computer Science, I was able to harness the power of my professo... 21 Subjects: including prealgebra, calculus, algebra 1, algebra 2 ...I am an advanced 3.5 player. I have played PeachTennis and Ultimate Tennis with great results in playoffs. I usually play in small tournaments around Atlanta. 18 Subjects: including prealgebra, reading, writing, accounting ...I am uniquely-qualified to tutor students for the MCAT. I received a B.A. degree in Chemistry and Mathematics. I have taken courses in General Chemistry, Organic Chemistry, Biochemistry, Biochemical Preparations, Personal Health and Introduction to Research. 57 Subjects: including prealgebra, reading, chemistry, GRE ...I am interested in bringing out the best in the student, first by finding out how the student learns best, then by teaching the student in that manner (verbal, visual, etc.) I have a natural ability to work through problems and love explaining how I derive the answer. I have an undergraduate degree in this subject. I do still study the topics to keep the information fresh in my head. 29 Subjects: including prealgebra, chemistry, reading, physics ...If you are looking for quick preparation for a standardized test, I teach testing methods and techniques to help prepare you for the test. If you are looking for an increase in your child's performance average, I use information from their current curriculum to tailor a tutoring program that wil... 
10 Subjects: including prealgebra, algebra 1, Microsoft Word, Microsoft PowerPoint
{"url":"http://www.purplemath.com/morrow_ga_prealgebra_tutors.php","timestamp":"2014-04-17T21:38:23Z","content_type":null,"content_length":"24041","record_id":"<urn:uuid:5ac9673c-4389-4543-8f69-5faed360dc92>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Ballista Equation

Date: 09/30/2002 at 22:46:10
From: Jessi
Subject: Geometry/Physics

Dear Dr. Math, I'm writing a report about the ballista, an ancient weapon in the form of a giant crossbow, which fires bolts and grapeshot; see The Ballista. I have to create an equation for how ballistas would accurately hit their targets. I have figured that since they used giant crossbow bolts, the process to solve for arc length would help find the distance between the weapon and its target, but I do not know how to add the velocity of the bolt and how many degrees they would have to adjust it up and down to hit its target. Any help would be nice.

Date: 10/01/2002 at 11:20:32
From: Doctor Ian
Subject: Re: Geometry/Physics

Hi Jessi, here's something that you can read to get you started thinking about the issues involved: Real Life Uses of Quadratic Equations. Note that it talks about holdover (aiming for a point a certain distance above the target) rather than angular correction.

Let's say that a projectile leaves with an initial speed of V[i], at an angle (relative to the horizontal) of A degrees. For simplicity, let's also ignore air resistance, because once you start correcting for that, the real-world solution is to start collecting tables of information and using those to interpolate to nearby cases. (This is, in fact, what long-distance rifle shooters do. They have one table that compiles expected bullet drops at various distances, and another that compiles expected deflections for various wind speeds and distances. These are compiled empirically, rather than computed, because they need to account for things like bullet shape and

Anyway, the motion can be decomposed into two separate motions, occurring simultaneously. One of the motions is horizontal, with a constant speed of

  V[i]cos(A)

The other motion is vertical, with a speed given by

  V[i]sin(A) - gt

where g is about 32 feet per second per second. The height of the projectile above the launch device is given by

  h = V[i]sin(A)t - (1/2)gt^2

Let's say that the target and the launch device are at the same height. The projectile will rise for a while, and then fall back to the same height when h is zero:

  0 = V[i]sin(A)t - (1/2)gt^2

so, dividing through by t (the flight time is not zero),

  0 = V[i]sin(A) - (1/2)gt
  (1/2)gt = V[i]sin(A)
  t = 2V[i]sin(A)/g

This tells you the time of flight for the projectile, as a function of the initial velocity and angle. How far does it go during that time? The horizontal distance is given by

  d = V[i]cos(A)t

(This is just the old familiar distance-equals-rate-times-time.) We can solve this for t to get

  t = d/(V[i]cos(A))

And now we have two things that are equal to the same thing (t), so they must be equal to each other:

  2V[i]sin(A)/g = d/(V[i]cos(A))

Solving for d, we get

      2V[i]^2 sin(A)cos(A)   V[i]^2 sin(2A)
  d = -------------------- = --------------
               g                   g

Does that look reasonable? Let's do some simple checks.

1. Suppose the angle is 90 degrees. Then cos(A) is zero, so d is always zero. So you can't shoot anything forward by shooting straight up. That makes sense.

2. Suppose the angle is 0 degrees. Then sin(A) is zero, so d is always zero. This makes sense, since we assume that the launcher and target are at the same height; and the projectile will start dropping immediately under the influence of gravity, so it can never hit the target.

3. Suppose we take the derivative of d with respect to A (leaving V constant). We get

  d/dA d = 2V[i]^2/g d/dA sin(A)cos(A)

  d/dA sin(A)cos(A) = sin(A)*(-sin(A)) + cos(A)cos(A) = cos^2(A) - sin^2(A)

which is zero when sin(A) = cos(A), or A = 45 degrees. So regardless of initial velocity, an angle of 45 degrees should give us the maximum possible distance.

So you should probably go through the derivation again on your own to make sure that this formula is correct, but it seems pretty reasonable.

Does this help?

- Doctor Ian, The Math Forum
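A quick numerical check of the range formula d = 2V[i]^2 sin(A)cos(A)/g derived above. (A sketch of my own; names are mine, and it uses the same simplifying assumptions: no air resistance, launcher and target at equal height.)

```python
import math

G = 32.0  # ft/s^2, as in the text

def flight_range(v, angle_deg):
    """Horizontal range d = 2 v^2 sin(A) cos(A) / g for launch speed v
    and launch angle A (degrees above the horizontal)."""
    a = math.radians(angle_deg)
    return 2 * v**2 * math.sin(a) * math.cos(a) / G

# Extreme angles give zero range, as in checks 1 and 2:
print(flight_range(100, 0))    # 0.0
print(flight_range(100, 90))   # numerically ~0 (floating-point cos(90 deg))

# 45 degrees maximizes the range for a fixed launch speed, as in check 3:
ranges = {a: flight_range(100, a) for a in range(0, 91, 5)}
print(max(ranges, key=ranges.get))   # 45
```

Sweeping the angle in 5-degree steps is enough here because the range V^2 sin(2A)/g has its unique maximum exactly at A = 45.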
{"url":"http://mathforum.org/library/drmath/view/61324.html","timestamp":"2014-04-19T23:50:11Z","content_type":null,"content_length":"9536","record_id":"<urn:uuid:83f8e377-bee5-49ae-8c79-838a4cc37579>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Something funny happening with operator overloading

04-13-2007 #1, Registered User, Join Date Aug 2006

The arithmetic of the program is incorrect.

    #include <cmath>

    #ifndef _MVECTOR_H
    #define _MVECTOR_H

    namespace math
    {
        class Vector
        {
        public:
            float x;
            float y;
            float z;

            Vector(float a = 0, float b = 0, float c = 0)
            {
            }

            Vector operator+(Vector &v1)
            {
                Vector tmp;
                return tmp;
            }

            Vector operator-(Vector v1)
            {
                Vector tmp;
                return tmp;
            }

            float VecLength(Vector v)
            {
                return sqrt(pow(v.x,2) + pow(v.y,2) + pow(v.z,2));
            }

            float DotProduct(Vector v1, Vector v2)
            {
                return (v1.x*v2.x + v1.y*v2.y + v1.z*v2.z);
            }

            Vector CrossProduct(Vector v1, Vector v2)
            {
                Vector temp;
                temp.x = v1.y*v2.z - v1.z*v2.y;
                temp.y = v1.z*v2.x - v1.x*v2.z;
                temp.z = v1.x*v2.y - v1.y*v2.x;
                return temp;
            }
        };
    }
    #endif

    #include <iostream>
    #include <cmath>
    #include "Mvector.h"

    using namespace math;
    using namespace std;

    Vector a = Vector(1, -8, -3);
    Vector b = Vector(2, 2, 3);
    Vector c = a - b;

    int main()
    {
        std::cout << c.x << std::endl;
        std::cout << c.y << std::endl;
        std::cout << c.z << std::endl;
        return 0;
    }

for some reason it outputs:

whats wrong?

There's an error here, but I'm not telling you what.

All the buzzt! "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Flon's Law

Your class design is not very object oriented. A vector object should be able to calculate its own length; there doesn't have to be a function to do that for them, if you know what I mean.

This is a common fallacy. "Object oriented" is not the same thing as "all functions to calculate some property of an object must be member functions of the class". For discussion of cases, in C++, where implementing operations as member functions can actually reduce benefits associated with encapsulation of a class, have a look here.

So instead of v.VecLength(v) I should have v.VecLength?

No, not at all. What grumpy is saying is that just because a free function is used instead of a member function does not mean the code is any less object oriented.

C + C++ Compiler: MinGW port of GCC. Version Control System: Bazaar. Look up a C++ Reference and learn How To Ask Questions The Smart Way.

I don't disagree that it's sometimes more practical to do things that way, but it is not, by definition, OOP. It's unfortunate that C++ doesn't allow us to define non-member functions and yet have the option to call them as members, i.e.:

    bar(foo const &);
    foo foo;

That would allow you to do a lot of neat things, if you think about it.

if( numeric_limits< byte >::digits != bits_per_byte ) error( "program requires bits_per_byte-bit bytes" );

I'd be very interested to see an accepted definition of OOP that specifically requires that all operations on a class be members of a class.

If you really need to do that, provide both the member and non-member versions, and have one call the other.

04-13-2007 #2
04-14-2007 #3, Registered User, Join Date Aug 2006
04-14-2007 #4
04-14-2007 #5, Registered User, Join Date Jun 2005
04-14-2007 #6, Registered User, Join Date Aug 2006
04-14-2007 #7
04-14-2007 #8
04-14-2007 #9, Registered User, Join Date Jun 2005
{"url":"http://cboard.cprogramming.com/cplusplus-programming/88630-something-funny-happening-operator-overloading.html","timestamp":"2014-04-19T18:07:41Z","content_type":null,"content_length":"78781","record_id":"<urn:uuid:c0502a34-5e1c-46d1-add4-79a3f00bc2e9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Stephen Wolfram's recent blog

Date: Feb 17, 2013 4:08 AM
Author: Murray Eisenberg
Subject: Re: Stephen Wolfram's recent blog

To the contrary, I find the new (by default) auto-completion a significant improvement over having to use Ctrl+K. E.g., say I want to type ChiDistribution. I type 'Chi' and get the drop-down list of 3 possible completions. Only one of these 3 has 'D' as the next letter. So I just type 'D' and press Return. Mathematica completes the rest.

That's a fairly short example. I've found that I can type long expressions, with many names in them (`System or `Global) much faster. This helps make up for Mathematica's verbose descriptive language, in contrast with languages where names are short but often cryptic.

On Feb 16, 2013, at 1:10 AM, "djmpark" <djmpark@comcast.net> wrote:
> ...All the doo-dads are to me a nuisance. The one thing I liked was the Ctrl+K
> (which I could type faster than you can say Jack Rabbit) command
> But now Ctrl+K is gone and what replaces it is ill-conceived and

Murray Eisenberg murray@math.umass.edu
Mathematics & Statistics Dept. Lederle Graduate Research Tower phone 413 549-1020 (H)
University of Massachusetts 413 545-2838 (W)
710 North Pleasant Street fax 413 545-1801
Amherst, MA 01003-9305
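The narrowing behaviour described above is just prefix filtering. (A toy sketch of my own; the candidate list is illustrative, containing three Mathematica symbols that begin with "Chi".)

```python
def completions(prefix, names):
    """All names starting with the given prefix, as an auto-completion
    drop-down would list them."""
    return [n for n in names if n.startswith(prefix)]

# Illustrative candidates: three symbols beginning with "Chi".
symbols = ["Chi", "ChiDistribution", "ChiSquareDistribution"]

print(completions("Chi", symbols))    # ['Chi', 'ChiDistribution', 'ChiSquareDistribution']
print(completions("ChiD", symbols))   # ['ChiDistribution'] - one keystroke disambiguates
```

With a long descriptive vocabulary, each extra character cuts the candidate list sharply, which is why the scheme beats typing names in full.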
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8351879","timestamp":"2014-04-16T22:00:50Z","content_type":null,"content_length":"2464","record_id":"<urn:uuid:791657e2-b742-4053-ba44-77156ee9f505>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
The stable-homotopy-homology-theory

Is there a way to stabilise relative homotopy groups into giving the stable-homotopy-homology-functor? The fact that the homotopy excision theorem holds for exactly the same kind of pair that occurs in the Eilenberg-Steenrod axioms seems to indicate that this should be possible, doesn't it?

For CW-pairs $(X,A)$ simply applying unreduced suspension repeatedly, taking homotopy groups accordingly (basepoints become unnecessary) and going to the limit works fine, I think. (Though I may have overlooked something once more.) For a while I thought this approach might just work for arbitrary pairs; however, Tom Goodwillie (thankfully) set me straight by pointing out that this is rubbish: the suspension of a subspace need not even be a subspace of the suspension ( Suspension of an excisive pair ), whence one doesn't even end up with a pair of spaces after suspension.

Has this approach been studied in the literature? My standard textbooks only construct reduced stable homotopy groups and I've been wondering why, ever since first learning about stable homotopy groups. So I would be very happy with a reference and somewhat content with an answer as to why this can't possibly work.

at.algebraic-topology homotopy-theory stable-homotopy

The stable homotopy groups of the mapping cone of the map of spectra $\Sigma^{\infty}A\rightarrow\Sigma^{\infty}X$ induced by the inclusion $A\subset X$ – Fernando Muro Jan 15 '12 at 16:29

I don't think so – Fernando Muro Jan 15 '12 at 17:09

it replaces relative groups by absolute groups of the cone and then stabilises them instead. How is that not circumventing? – old account Jan 17 '12 at 10:10

Because classical relative homotopy groups are ordinary homotopy groups of the homotopy fiber of the inclusion – Fernando Muro Jan 17 '12 at 11:15

Whatever approach you like, what I say is not circumventing, it's simply true! :-) – Fernando Muro Jan 17 '12 at 14:08
{"url":"http://mathoverflow.net/questions/85747/the-stable-homotopy-homology-theory","timestamp":"2014-04-21T15:59:07Z","content_type":null,"content_length":"52913","record_id":"<urn:uuid:c92c3352-61d5-4a35-9fa7-63ba6220a9e1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Relations + higher-order functions = hardware descriptions

- Proc. BCS FACS Workshop on Refinement, Workshops in Computing, 1991. Cited by 21 (1 self).

A language of relations and combining forms is presented in which to describe both the behaviour of circuits and the specifications which they must meet. We illustrate a design method that starts by selecting representations for the values on which a circuit operates, and derives the circuit from these representations by a process of refinement entirely within the language. Formal methods have always been used in circuit design. It would be unthinkable to attempt to design combinational circuits without using Boolean algebra. This means that circuit designers, unlike programmers, already use mathematical tools as a matter of course. It also means that we have a good basis on which to build higher level formal design methods. Encouraged by these observations, we have been investigating the application of formal program development techniques to circuit design. We view circuit design as the transformation of a program describing the required behaviour into an equivalent program that is s...

- in Systolic Array Processors, 1989. Cited by 17 (9 self).

We present an overview of a prototype system based on a functional language for developing regular array circuits. The features of a simulator, floorplanner and expression transformer are discussed and illustrated. INTRODUCTION Implementing algorithms on a regular array of processors has many advantages. Besides offering an efficient realisation of parallel structures, regular patterns of interconnections also provide an opportunity for simplifying their description and their development. Various approaches for regular array design have been proposed; examples include methods based on dependence graphs [5], recurrence equations [14], and algebraic techniques [16]. This paper presents an overview of a prototype system for regular array development. The system is based on µFP [15], a functional language with mechanisms for abstracting spatial and temporal iteration. These abstractions result in a succinct and precise notation for specifying designs. Moreover, the explicit ...

- 1992. Cited by 15 (4 self).

This thesis is about the calculational approach to programming, in which one derives programs from specifications. One such calculational paradigm is Ruby, the relational calculus developed by Jones and Sheeran for describing and designing circuits. We identify two shortcomings with derivations made using Ruby. The first is that the notion of a program being an implementation of a specification has never been made precise. The second is to do with types. Fundamental to the use of type information in deriving programs is the idea of having types as special kinds of programs. In Ruby, types are partial equivalence relations (pers). Unfortunately, manipulating some formulae involving types has proved difficult within Ruby. In particular, the preconditions of the `induction' laws that are much used within program derivation often work out to be assertions about types; such assertions have typically been verified either by informal arguments or by using predicate calculus, rather than by ap...

- In Functional Programming, Glasgow 1991, Workshops in Computing, 1992. Cited by 7 (1 self).

The notion of functionality is not cast in stone, but depends upon what we have as types in our language. With partial equivalence relations (pers) as types we show that the functional relations are precisely those satisfying the simple equation f = f ∘ f∪ ∘ f, where "∪" is the relation converse operator. This article forms part of "A calculational theory of pers as types" [1]. 1 Introduction In calculational programming, programs are derived from specifications by a process of algebraic manipulation. Perhaps the best known calculational paradigm is the Bird-Meertens formalism, or to use its more colloquial name, Squiggol [2]. Programs in the Squiggol style work upon trees, lists, bags and sets, the so-called Boom hierarchy. The framework was uniformly extended to cover arbitrary recursive types by Malcolm in [3], by means of the F-algebra paradigm of type definition, and resulting catamorphic programming style. More recently, Backhouse et al [4] have made a further ...

- 1992. Cited by 5 (2 self).

We present a programming paradigm based upon the notion of binary relations as programs, and partial equivalence relations (pers) as types. Our method is calculational, in that programs are derived from specifications by algebraic manipulation. Working with relations as programs generalises the functional paradigm, admitting non-determinism and the use of relation converse. Working with pers as types, we have a more general notion than normal of what constitutes an element of a type; this leads to a more general class of functional relations, the so-called difunctional relations. Our basic method of defining types is to take the fixpoint of a relator, a simple strengthening of the categorical notion of a functor. Further new types can be made by imposing laws and restrictions on the constructors of other types. Having pers as types is fundamental to our treatment of types with laws. Contents: 1 Introduction; 2 Relational calculus; 2.1 Powerset lattice structure; ...

- 1995. Cited by 2 (0 self).

Ruby is a relational language for describing hardware circuits. In the past, programming tools existed which only catered for the execution of functional Ruby expressions rather than the complete set of relational ones. In this paper, we develop an implementation of Ruby in Prolog - a higher-order logic programming language - allowing the execution of arbitrary, relational Ruby programs. 1 Introduction Programming problems can be tackled by specifying a program's behaviour in an abstract mathematical specification and then, through the application of some appropriate calculus, converting this into an efficient and implementable program. Until recently, the art of deriving computer programs from specification has been performed equationally in a functional calculus [Bir87]. However, it has become evident that a relational calculus affords us a greater degree of expression and flexibility in both specification and proof since a relational calculus naturally captures the notions of non-d...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3075292","timestamp":"2014-04-19T00:34:45Z","content_type":null,"content_length":"27804","record_id":"<urn:uuid:10754aac-2aa6-426d-9a13-33955a55ba4b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
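The closure equation f = f ∘ f∪ ∘ f from the relational-calculus abstracts above can be tested concretely. (A sketch of my own, not taken from any of the papers; finite relations are modelled as sets of pairs, and f∪ denotes the converse of f.)

```python
def converse(r):
    """The converse r∪ of a relation given as a set of (a, b) pairs."""
    return {(b, a) for (a, b) in r}

def compose(r, s):
    """Relational composition r ∘ s: first apply s, then r."""
    return {(a, c) for (a, b1) in s for (b2, c) in r if b1 == b2}

def is_difunctional(f):
    """Check the closure equation f = f ∘ f∪ ∘ f."""
    return compose(f, compose(converse(f), f)) == f

# The graph of a function satisfies the equation:
square = {(x, x * x) for x in range(-3, 4)}
print(is_difunctional(square))                            # True

# A relation sending one source to two different targets need not:
print(is_difunctional({(1, 'a'), (1, 'b'), (2, 'a')}))    # False
```

In general the relations satisfying this equation are exactly the difunctional ones mentioned in the abstracts; every (graph of a) function is among them.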
Core of divergence form operator with unbounded coefficient up vote 2 down vote favorite Consider the unbounded operator $L$ on $L^2(\mathbb{R^d})$ to be the self-adjoint extension of $$Lf := \nabla \cdot \left(a(x) \nabla f(x) \right)$$ on $C^2_c(\mathbb{R^d})$. I also assume that $a(x) > 0$ for all $x$ and $a(x)$ is differentiable. However, I make no assumptions on the boundedness of $a$. Does this operator have a core? If so, can it be identified explicitly? elliptic-pde ap.analysis-of-pdes fa.functional-analysis unbounded-operators I don't think your operator is defined on $C^2_c$ without assuming more on $a$ like differentiability. Writing down the core for the quadratic form $q(f) = \int a(x) |\nabla f(x)|^2 dx is easy, since it's all $f$ such that $a^{1/2} |\nabla f| \in L^2(\mathbb{R}^d)$. Something similar works for $L$ (it's the obvious condition of everything being defined, so $a \nabla f$ being in $H^1$. – Helge Jan 12 '12 at 2:11 I can't delete the previous comment for some reasons. Here's the scrambled part. since it's all $f$ such that $a^{1/2} |\nabla f| \in L^2(\mathbb{R}^d)$. Something similar works for $L$. It should just be the $f$ such that $a \nabla f \in H^1$. – Helge Jan 12 '12 at 2:13 Yes, you're right, $a(x)$ should be differentiable. I'll edit the question. Another question: So if a set C is a core for the Dirichlet form wouldn't this imply C is a core for L? – RadonNikodym Jan 12 '12 at 8:39 What's a core? A reference would suffice. – Deane Yang Jan 12 '12 at 18:28 Let $L$ be a closed operator on $L^2(D)$. Then $\mathcal{D} \subset D(L)$ is said to be a core of $L$ if $\mathcal{D}$ is dense in $D(L)$ with respect to the graph norm $||u||_{L^2} + ||Au||_{L^2} $. Ethier and Kurtz is a good reference and contains various sufficient condition for a set of functions to be a core. – RadonNikodym Jan 12 '12 at 21:00 show 1 more comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. 
Computational Geometry: Class Outline

COMPUTATIONAL GEOMETRY is the study of the representation and storage of geometric data and relationships, and the design, implementation and analysis of computational algorithms that operate on geometric data to answer questions of practical interest. Some characteristic problems of computational geometry include:

• analysis: can we decompose a large geometric object into a collection of smaller objects of the same kind? Every soccer ball has 12 pentagons; why is it impossible to cover a ball using only six-sided figures?
• traversal: given a set of cities and an airline flight schedule, is a tour possible which visits all the cities exactly once?
• search: suppose we split each square of a checkerboard into two triangles, and then place a penny in one triangle. Theoretically, we might need to check all 128 triangles to find the penny. Is it possible to search in such a way that no more than 8 triangles must be checked?
• projection: given a description of a 3D object, how do we make a 2D image of it? Our 2D image will usually depend on the point from which we view the object. What happens if our viewpoint is actually inside the object?
• sampling: what does it mean to pick a random point from a circle? What is different about the random pattern formed by throwing darts at a bull's-eye?
• interpolation: is it possible to find a formula for your signature? For your face?

Computational Geometry has applications throughout computational science, most naturally in areas which have a strong geometric component. However, even in abstract computations involving multidimensional data, insights and algorithms originally developed for "physical" (2D or 3D) problems can be extended to the high dimensional case.

Computational Geometry Schedule

1. The fundamental objects: points, lines and curves, planes and surfaces, spaces
2. The fundamental relations: inclusion, containment, perpendicularity, intersection
3. The fundamental measures: distance, angle, length, area, volume, projected length
4. The fundamental operations: interior, exterior, intersection, normal vector
5. Other basic ideas: convexity, change of coordinate system, equal spacing
6. The circle and the disk; polar coordinates
7. The sphere and the ball; spherical coordinates
8. The Triangle: area, orientation, angles, aspect ratio; containment, barycentric coordinates; distance from a point to a triangle
9. Polygons; partitioning a polygon
10. The Simplex
11. Polyhedrons: vertices, edges, faces; orientation
12. The description and approximation of 2D curves: y = f(x); polynomial interpolation; piecewise interpolations; least squares approximations; f(x,y) = 0; finite element approximation
13. Sampling and Random Selection; uniform and nonuniform densities; rejection methods; transformation methods; sampling inside or on a sphere
14. The Convex Hull
15. Triangulation; generating a triangulation; measures of uniformity; Delaunay triangulations; searching a (Delaunay) triangulation
16. Adaptive meshing; binary trees for adaptive interval meshing; quadtrees for 2D; octrees for 3D meshing; locating a point in a mesh defined by a tree
17. Voronoi Diagrams in the plane; Voronoi diagrams restricted to a region; Voronoi diagrams on a sphere; Voronoi diagrams in higher dimensions
18. The Nearest Neighbor Problem
19. Storing, retrieving, displaying geometric information
20. Quadrature; derivatives
21. Software: OpenGL, Triangle, DISTMESH
22. Surface refinement, simplification, modification

You can return to the HTML web page. Last revised on 25 September 2008.
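Several items on this schedule (triangle area and orientation, convex hulls, Delaunay tests) reduce to one primitive: the signed area of a triangle. A minimal sketch in Python (the function names are my own, not from the course materials):

```python
def signed_area(a, b, c):
    """Signed area of triangle abc: positive when a, b, c run counterclockwise."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def orientation(a, b, c):
    """+1 for a counterclockwise turn, -1 for clockwise, 0 for collinear points."""
    s = signed_area(a, b, c)
    return (s > 0) - (s < 0)

# Example: the unit right triangle is counterclockwise with area 1/2.
print(signed_area((0, 0), (1, 0), (0, 1)), orientation((0, 0), (1, 0), (0, 1)))  # → 0.5 1
```

In exact arithmetic this predicate settles point-in-triangle and convex-hull turn tests; with floating point, robust implementations add an error bound or exact fallback.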
Fundamental groupoid

Definition 1. Given a topological space $X$, the fundamental groupoid $\Pi_{1}(X)$ of $X$ is defined as follows:

• The objects of $\Pi_{1}(X)$ are the points of $X$.
• Morphisms are homotopy classes of paths “rel endpoints”, that is,
$$\operatorname{Hom}(x,y)=\{\alpha\colon I\to X \mid \alpha(0)=x,\ \alpha(1)=y\}/\sim,$$
where $\sim$ denotes homotopy rel endpoints, and composition is induced by concatenation of paths.

It is easily checked that the above defined category is indeed a groupoid, with the inverse of (a morphism represented by) a path being (the homotopy class of) the “reverse” path. Notice that for $x\in X$, the group of automorphisms of $x$ is the fundamental group of $X$ with basepoint $x$, $\operatorname{Aut}(x)=\pi_{1}(X,x)$.

Definition 2. Let $f\colon\thinspace X\to Y$ be a continuous function between two topological spaces. Then there is an induced functor $\Pi_{1}(f)\colon\thinspace\Pi_{1}(X)\to\Pi_{1}(Y)$ defined as follows:

• on objects, $\Pi_{1}(f)$ is just $f$;
• on morphisms, $\Pi_{1}(f)$ is given by “composing with $f$”; that is, if $\alpha\colon\thinspace I\to X$ is a path representing the morphism $[\alpha]\colon\thinspace x\to y$, then a representative of $\Pi_{1}(f)([\alpha])\colon\thinspace f(x)\to f(y)$ is determined by the following commutative diagram
$\xymatrix{{I}\ar[d]_{{\alpha}}\ar@{-->}[dr]^{{\Pi_{1}(f)(\alpha)}}\\ {X}\ar[r]_{f}&{Y}}$

It is straightforward to check that the above indeed defines a functor. Therefore $\Pi_{1}$ can (and should) be regarded as a functor from the category of topological spaces to the category of groupoids. This functor is not really homotopy invariant, but it is “homotopy invariant up to homotopy” in the sense that the following holds.

Theorem 3. A homotopy between two continuous maps induces a natural transformation between the corresponding functors.

A reader who understands the meaning of the statement should be able to give an explicit construction and supply the proof without much trouble.
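For the record, here is one standard way to write down the construction hinted at after Theorem 3; this is my reconstruction, not part of the original entry.

```latex
% Let H : X \times I \to Y be a homotopy from f to g.
% For each object x \in X, the track of x under H is a path from f(x) to g(x):
\eta_x \;=\; \bigl[\, t \mapsto H(x,t) \,\bigr] \colon\; f(x) \longrightarrow g(x).
% Naturality of \eta : \Pi_1(f) \Rightarrow \Pi_1(g) is the commuting square
\eta_y \circ \Pi_1(f)([\alpha]) \;=\; \Pi_1(g)([\alpha]) \circ \eta_x
% for every morphism [\alpha] : x \to y, witnessed by the homotopy
% (s,t) \mapsto H(\alpha(s), t) between the two composite paths.
```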
Related entries: FundamentalGroupoidFunctor, FundamentalGroupoid2, HomotopyDoubleGroupoidOfAHausdorffSpace, QuantumFundamentalGroupoids, HomotopyCategory, GroupoidCategory. Added: 2003-01-29.
Redwood City Algebra 2 Tutor Find a Redwood City Algebra 2 Tutor ...My students feel comfortable telling me what they didn't get in class in school or from their textbook, and so we are able to address those issues right away. I like to first establish the knowledge base of my student. We then work together to build on it, with lots of practice, to expand knowledge and mastery. 17 Subjects: including algebra 2, physics, calculus, GRE ...There is no wonder many of my students improve on their understanding of their math subjects after having been tutored by the teacher. Many of them, after achieving their goal of passing a test, became more confident and motivated to set a higher goal of a higher grade. I hold a single subject teaching credential in math. 5 Subjects: including algebra 2, geometry, Chinese, prealgebra ...Please e-mail me and I will be happy to talk to you. I am passionate about helping students dramatically improve their academic performance. Students deserve a curriculum that focuses on their individual needs, passions, and are applicable to real life. 26 Subjects: including algebra 2, reading, writing, statistics ...Today, I continue to speak and present at business, nonprofit, marketing and education events. I would love to work with students to help them to improve their public speaking skills so they become more successful at public speaking. College counseling usually involves college selection, college application preparation (including essay development) and financial aid application 52 Subjects: including algebra 2, English, chemistry, reading ...I worked more than 10 years in research using different forms of differential equations covering stiff ODEs and multidimensional equations in Computational Fluid Dynamics applications. I tutor on a regular basis university students in calculus and differential equations. I worked for several y... 41 Subjects: including algebra 2, calculus, geometry, statistics
Finding radius of circle with 2 Perpendicular Chords June 15th 2011, 09:33 PM #1 Sep 2010 Finding radius of circle with 2 Perpendicular Chords Two chords KV and QR of circle O are perpendicular at point P, with PQ=6 and PR=8 If radius of circle is sq root of 65. find KP and PV. The answer is KP=12 and PV=4 My ques is how did they get KP? i understand how they get PV since QP*PR=KP*PV But i dont know what to do with the radius to get KP? Re: Finding radius of circle with 2 Perpendicular Chords Two chords KV and QR of circle O are perpendicular at point P, with PQ=6 and PR=8 If radius of circle is sq root of 65. find KP and PV. The answer is KP=12 and PV=4 My ques is how did they get KP? i understand how they get PV since QP*PR=KP*PV But i dont know what to do with the radius to get KP? 1. Draw a sketch. 2. Let x denote $|\overline{VP}|$ and y denotes $|\overline{PK}|$. According to the intersecting chord theorem (google for it!)[1st equation] and Pythagorean theorem[2nd equation] you'll get the system of equations: $\left|\begin{array}{rcl}x \cdot y &=& 6 \cdot 8 \\ \left(\dfrac{x+y}{2}\right)^2+1^2 &=& 65 \end{array}\right.$ 3. Solve for x and y. Re: Finding radius of circle with 2 Perpendicular Chords I understand the intersecting chord theorem. But how did you derive the 2nd equation? Re: Finding radius of circle with 2 Perpendicular Chords The centre lies on the perpendicular bisector of QR, which is parallel to KV and lies 1 unit below KV (from the sketch). Therefore Pythagoras' Theorem applied to the right-angled triangle OKM or OVM, where O is the centre and M is the midpoint of KV yields Earboth's 2nd equation. Re: Finding radius of circle with 2 Perpendicular Chords Thanks. i get it now. i was making it way to complicated Re: Finding radius of circle with 2 Perpendicular Chords You could also have answered exclusively with Pythagoras' Theorem. Label the circle centre O, the midpoint of KV = M and the midpoint of QR = N. 
Then the circle centreline containing O and N lies 1 unit below [KV], as it is 7 units from Q and 7 units from R. M is the midpoint of [KV] $\Rightarrow\ |KM|^2+1=65\Rightarrow\ |KM|=8$
Label the point of intersection of [QR] and the horizontal centreline as S. $|OS|^2+7^2=65\Rightarrow\ |OS|=4$, so $|MP|=|OS|=4$, giving $|KP|=|KM|+|MP|=8+4=12$ and $|PV|=|KM|-|MP|=8-4=4$.
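The two relations used in this thread — the intersecting-chord product and the Pythagorean constraint from the radius — pin down KP and PV numerically. A quick sanity check in Python (my own script, not part of the thread):

```python
import math

PQ, PR = 6, 8                 # the two segments of chord QR at P
r_sq = 65                     # radius squared
prod = PQ * PR                # intersecting chords: KP * PV = PQ * PR = 48

# The midpoint of QR is (PR - PQ)/2 = 1 unit from P, so the centre
# lies 1 unit from chord KV; Pythagoras gives (KP + PV)/2.
offset = (PR - PQ) / 2
half_sum = math.sqrt(r_sq - offset**2)    # (KP + PV)/2 = 8
s = 2 * half_sum                          # KP + PV = 16

# KP and PV are the roots of t^2 - s*t + prod = 0.
disc = math.sqrt(s * s - 4 * prod)
KP, PV = (s + disc) / 2, (s - disc) / 2
print(KP, PV)  # → 12.0 4.0
```

This matches the stated answers KP = 12 and PV = 4.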
Large Data in MATLAB: A Seismic Data Processing Case Study

Do you have data that is too large to fit into available memory? Or perhaps you would like to speed up data analysis tasks using additional hardware such as additional CPUs or GPUs? In this webinar, you will learn techniques for working with large data in MATLAB® and approaches to speeding up your analyses using parallel computing and GPUs. Through an example seismic analysis case study we will show you how to:

• Work with data that is too large to fit in available memory on a single machine
• Perform large data analysis computations on a computer cluster (we will use a cluster running 64 MATLAB Distributed Computing Server workers)
• Introduce GPU computing for speeding up solutions of the wave equation for seismic analysis

About the Presenter: Stuart Kozola is a product manager at MathWorks and focuses on MATLAB® and add-on products for data analysis, mathematical modeling, and computational finance. Prior to joining MathWorks in 2006, Stuart worked at Pratt & Whitney (United Technologies) as a design engineer working on combustion systems for gas turbine engines. Stuart earned a B.S. in Chemical Engineering from the University of Wyoming, M.S. in Chemical Engineering from Arizona State University, M.S. in Electrical Engineering from Rensselaer Polytechnic Institute, and an M.B.A. from Carnegie Mellon.
Got Homework? Connect with other students for help. It's a free community.

Here's the question you clicked on:
hey do u have the python book by john guttag in pdf form • one year ago
Natural log problem

November 13th 2012, 09:00 PM #1
Differentiate: h(t) = ln((t^2+1)/(t+1))
I'm having some trouble with this one. Not sure if I am doing too much work for it. Also this one: y = x^(x^2)
Do I have to use logarithmic differentiation for the second?
Last edited by ~berserk; November 13th 2012 at 09:09 PM.

Re: Natural log problem
For the second one, another option besides logarithmic differentiation is exponential differentiation: $y=x^{x^2}=e^{\ln\left(x^{x^2} \right)}=e^{x^2\ln(x)}$. Now use the exponential, chain and product rules to find the derivative.

Re: Natural log problem
Would you be able to show me how to differentiate it logarithmically?

Re: Natural log problem
We are given to differentiate: $y=x^{x^2}$. Take the natural log of both sides: $\ln(y)=\ln\left(x^{x^2} \right)=x^2\ln(x)$. Now, implicitly differentiate both sides with respect to $x$. I don't want to just work out the whole thing, but I will help guide you if you get stuck.
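Both derivatives in this thread are easy to sanity-check numerically. Below is a small Python script (my own, not from the forum) comparing the analytic answers — h'(t) = 2t/(t²+1) − 1/(t+1), and, from logarithmic differentiation, y' = x^(x²)·(2x ln x + x) — against a central finite difference:

```python
import math

def h(t):
    """h(t) = ln((t^2 + 1)/(t + 1))"""
    return math.log((t * t + 1) / (t + 1))

def h_prime(t):
    """Analytic derivative via log rules: h'(t) = 2t/(t^2 + 1) - 1/(t + 1)."""
    return 2 * t / (t * t + 1) - 1 / (t + 1)

def y(x):
    """y = x^(x^2)"""
    return x ** (x * x)

def y_prime(x):
    """From ln(y) = x^2 ln(x): y'/y = 2x ln(x) + x, so y' = x^(x^2) (2x ln x + x)."""
    return y(x) * (2 * x * math.log(x) + x)

def central_diff(fn, x, step=1e-6):
    """Two-sided finite difference, accurate to O(step^2)."""
    return (fn(x + step) - fn(x - step)) / (2 * step)

# Both differences should be far below 1e-5.
print(abs(h_prime(2.0) - central_diff(h, 2.0)))
print(abs(y_prime(1.5) - central_diff(y, 1.5)))
```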
Proposition 19

If a straight line touches a circle, and from the point of contact a straight line is drawn at right angles to the tangent, the center of the circle will be on the straight line so drawn.

For let a straight line DE touch the circle ABC at the point C. Draw CA from C at right angles to DE. I say that the center of the circle is on AC.

For suppose it is not, but, if possible, let F be the center, and join CF. Since a straight line DE touches the circle ABC, and FC has been joined from the center to the point of contact, FC is perpendicular to DE. Therefore the angle FCE is right. But the angle ACE is also right, therefore the angle FCE equals the angle ACE, the less equals the greater, which is impossible. Therefore F is not the center of the circle ABC. Similarly we can prove that neither is any other point except a point on AC.

Therefore if a straight line touches a circle, and from the point of contact a straight line is drawn at right angles to the tangent, the center of the circle will be on the straight line so drawn.
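The proposition is also easy to illustrate numerically. In this small Python sketch (mine, not part of the source text), for randomly generated circles the displacement from the point of contact to the center has zero component along the tangent — that is, the center lies on the line drawn at right angles to the tangent through the contact point:

```python
import math
import random

random.seed(0)
for _ in range(100):
    # random circle and a random point of contact on it
    cx, cy = random.uniform(-5, 5), random.uniform(-5, 5)
    r = random.uniform(0.5, 3.0)
    th = random.uniform(0, 2 * math.pi)
    px, py = cx + r * math.cos(th), cy + r * math.sin(th)
    # tangent direction at the contact point (perpendicular to the radius)
    tx, ty = -math.sin(th), math.cos(th)
    # the center's displacement from the contact point has no tangential
    # component, so the center lies on the perpendicular to the tangent there
    dot = (cx - px) * tx + (cy - py) * ty
    assert abs(dot) < 1e-9
print("center lies on the normal at the contact point for all samples")
```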
Programs using the PBC library should include the file pbc.h:

    #include <pbc.h>

and be linked against the PBC library and the GMP library, e.g.

    $ gcc program.c -L. -lpbc -lgmp

The file pbc.h already includes gmp.h.

PBC follows GMP in several respects:

• Output arguments generally precede input arguments.
• The same variable can be used as input and output in one call.
• Before a variable may be used it must be initialized exactly once. When no longer needed it must be cleared. For efficiency, unnecessary initializing and clearing should be avoided.
• PBC variables ending with _t behave the same as GMP variables in function calls: effectively as call-by-reference. In other words, as in GMP, if a function modifies an input variable, that variable remains modified when control is returned to the caller.
• Like GMP, variables automatically allocate memory when needed. By default, malloc() and friends are called, but this can be changed.
• PBC functions are mostly reentrant.

Since the PBC library is built on top of GMP, the GMP types are available. PBC types are similar to GMP types. The following example is paraphrased from an example in the GMP manual, and shows how to declare the PBC data type element_t.

    element_t sum;
    struct foo { element_t x, y; };
    element_t vec[20];

GMP has the mpz_t type for integers, mpq_t for rationals and so on. In contrast, PBC uses the element_t data type for elements of different algebraic structures, such as elliptic curve groups, polynomial rings and finite fields. Functions assume their inputs come from appropriate algebraic structures.

PBC data types and functions can be categorized as follows. The first two alone suffice for a range of applications.

• element_t: elements of an algebraic structure.
• pairing_t: pairings where elements belong; can be initialized from the sample pairing parameters bundled with PBC in the param subdirectory.
• pbc_param_t: used to generate pairing parameters.
• pbc_cm_t: parameters for constructing curves via the CM method; sometimes required by pbc_param_t.
• field_t: algebraic structures: groups, rings and fields; used internally by pairing_t.
• a few miscellaneous functions, such as ones controlling how random bits are generated.

Functions operating on a given data type usually have the same prefix, e.g. those involving element_t objects begin with element_.
J-Self-Adjoint Extensions for a Class of Discrete Linear Hamiltonian Systems

Abstract and Applied Analysis, Volume 2013 (2013), Article ID 904976, 19 pages

Research Article

J-Self-Adjoint Extensions for a Class of Discrete Linear Hamiltonian Systems

^1School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
^2Department of Mathematics, Shandong University at Weihai, Weihai, Shandong 264209, China

Received 15 January 2013; Accepted 18 March 2013

Academic Editor: Michiel Bertsch

Copyright © 2013 Guojing Ren and Huaqing Sun. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper is concerned with formally J-self-adjoint discrete linear Hamiltonian systems on finite or infinite intervals. The minimal and maximal subspaces are characterized, and the defect indices of the minimal subspaces are discussed. All the J-self-adjoint subspace extensions of the minimal subspace are completely characterized in terms of the square summable solutions and boundary conditions. As a consequence, characterizations of all the J-self-adjoint subspace extensions are given in the limit point and limit circle cases.

1. Introduction

Consider the following discrete linear Hamiltonian system: where , is a finite integer or , is a finite integer or , and ; is the forward difference operator, that is, ; is the canonical symplectic matrix, that is, where is the unit matrix; the weighted function is a real symmetric matrix with for , and it is of the block diagonal form, where is a complex symmetric matrix, that is, . The partial right shift operator with and ; is a complex spectral parameter. For briefness, denote in the case where and are finite integers; in the case where is finite and ; in the case where and is finite; in the case where and .
Since is symmetric, it can be blocked as where , , and are complex-valued matrices with and . Then, can be rewritten as To ensure the existence and uniqueness of the solution of any initial value problem for , we always assume in the present paper that is invertible in . It can be easily verified that contains the following complex coefficients vector difference equation of order : where are complex-valued matrices with , ; is invertible in ; is an real-valued with . In fact, by letting with , , and for , can be converted into , as well as , with It is obvious that is satisfied for . The spectral theory of self-adjoint operators and self-adjoint extensions of symmetric operators (i.e., densely defined Hermitian operators) in Hilbert spaces has been well developed (cf. [1–4]). In general, under certain definiteness conditions, a formally self-adjoint differential expression can generate a minimal operator which is symmetric, and the defect index of the minimal operator is equal to the number of linearly independent square integrable solutions. All the characterizations of self-adjoint extensions of differential equation are obtained [5–8]. However, for difference equations, it was found in [9] that the minimal operator defined in [10] may be neither densely defined nor single-valued even if the definiteness condition is satisfied. This is an important difference between the differential and difference equations. In order to study the self-adjoint extensions of nondensely defined or multivalued Hermitian operators, some scholars tried to extend the concepts and theory for densely defined Hermitian operators to Hermitian subspaces [11–15]. Recently, Shi extended the Glazman-Krein-Naimark (GKN) theory for symmetric operators to Hermitian subspaces [9]. 
Applying this GKN theory, the first author, with Shi and Sun, gave complete characterizations of self-adjoint extensions for second-order formally self-adjoint difference equations and general linear discrete Hamiltonian systems, separately [16, 17]. We note that when the coefficient in is not a Hermitian matrix, that is, , system is not formally self-adjoint, and the minimal subspace generated by is not Hermitian. Hence the spectral theory of self-adjoint operators or self-adjoint subspaces is not applicable. To solve this problem, Glazman introduced a concept of J-symmetric operators in [3, 18], where J is an operator. The minimal operators generated by certain differential expressions are J-symmetric operators in the related Hilbert spaces [19, 20]. Monaquel and Schmidt [21] discussed the M-functions of the following discrete Hamiltonian system: where is the backward difference operator, that is, , and weighted function . By letting , , can be converted into with

In [22], the result that every J-Hermitian subspace has a J-self-adjoint subspace extension has been given. Furthermore, a result about J-self-adjoint subspace extensions was obtained [22], which can be regarded as a GKN theorem for J-Hermitian subspaces. In the present paper, enlightened by the methods used in the study of self-adjoint subspace extensions of Hermitian subspaces, we will study the J-self-adjoint subspace extensions of the minimal operator corresponding to system . A complete characterization of them in terms of boundary conditions is given by employing the GKN theorem for J-Hermitian subspaces.

The rest of this paper is organized as follows. In Section 2, some basic concepts and useful results about subspaces are briefly recalled. In Section 3, a conjugation operator is defined in the corresponding Hilbert space, and the maximal and minimal subspaces are discussed.
In Section 4, the description of the minimal subspaces is given by the properties of their elements at the endpoints of the discussed intervals, the defect indices of minimal subspaces are discussed, and characterizations of the maximal subspaces are established. Section 5 pays attention to two characterizations of all the self-adjoint subspace extensions of the minimal subspace in terms of boundary conditions via linearly independent square summable solutions of . As a consequence, characterizations of all the self-adjoint subspace extensions are given in two special cases: the limit point and limit circle cases. 2. Fundamental Results on Subspaces In this section, we recall some basic concepts and useful results about subspaces. For more results about nondensely defined -Hermitian operators or -Hermitian subspaces, we refer to [17–19, 22] and some references cited therein. In addition, some properties of solutions of and a result about matrices are given at the end of this section. By and we denote the sets of the real and the complex numbers, respectively. Let be a complex Hilbert space equipped with inner product , and two linear subspaces (briefly, subspace) in , and . Denote If , we write which is denoted by in the case that and are orthogonal. Denote It can be easily verified that if and only if can determine a unique linear operator from into whose graph is just . For convenience, we will identify a linear operator in with a subspace in via its graph. Definition 1 (see [11]). Let be a subspace in . (1) is said to be a Hermitian subspace if . Furthermore, is said to be a Hermitian operator if it is an operator, that is, . (2) is said to be a self-adjoint subspace if . Furthermore, is said to be a self-adjoint operator if it is an operator, that is, . (3) Let be a Hermitian subspace. is said to be a self-adjoint subspace extension (briefly, SSE) of if and is a self-adjoint subspace. (4) Let be a Hermitian operator. 
is said to be a self-adjoint operator extension (briefly, SOE) of if and is a self-adjoint operator.

Lemma 2 (see [11]). Let be a subspace in . Then (1) is a closed subspace in ; (2) and , where is the closure of ; (3).

In [19], an operator defined in is said to be a conjugation operator if for all ,

Definition 3. Let be a subspace in and be a conjugation operator. (1) The J-adjoint of is defined by (2) is said to be a J-Hermitian subspace if . Furthermore, is said to be a J-Hermitian operator if it is an operator, that is, . (3) is said to be a J-self-adjoint subspace if . Furthermore, is said to be a J-self-adjoint operator if it is an operator, that is, . (4) Let be a J-Hermitian subspace. is said to be a J-self-adjoint subspace extension (briefly, J-SSE) of if and is a J-self-adjoint subspace. (5) Let be a J-Hermitian operator. is said to be a J-self-adjoint operator extension (briefly, J-SOE) of if and is a J-self-adjoint operator.

Remark 4. (1) It can be easily verified that is a closed subspace. Consequently, a J-self-adjoint subspace is a closed subspace since . In addition, if . (2) From the definition, we have that holds for all and , and that is a J-Hermitian subspace if and only if for all .

Lemma 5 (see [22]). Let be a subspace in . Then (1); (2).

It follows from Lemmas 2 and 5 that , and is J-Hermitian if is J-Hermitian.

Lemma 6 (see [22]). Every J-Hermitian subspace has a J-SSE.

Definition 7. Let T be a J-Hermitian subspace. Then is called the defect index of .

Next, we introduce a form on by

Lemma 8 (see [22]). Let be a J-Hermitian subspace. Then

Lemma 9 (see [22]). Let be a closed J-Hermitian subspace in and satisfy . Then a subspace is a J-SSE of if and only if and there exists such that (1) are linearly independent in (modulo ); (2), , ; (3).

Lemma 9 can be regarded as a GKN theorem for J-Hermitian subspaces. A set satisfying (1) and (2) in Lemma 9 is called a GKN set of .

Definition 10.
Let be a subspace in .(1) The set is called the resolvent set of .(2) The set is called the spectrum of . (3) The set is called to be the regularity field of . It is evident that for any subspace in . Lemma 11 (see [22]). Let be a -Hermitian subspace in with , and . Then The following is a well-known result on the rank of matrices. Lemma 12. Let be an matrix and an matrix. Then In particular, if , then 3. Relationship between the Maximal and Minimal Subspaces This section is divided into three subsections. In the first subsection, we define a conjugation operator in a Hilbert space. In the second subsection, we define maximal and minimal subspaces generated by and discuss relationship between them. In the last subsection, we discuss the definiteness condition corresponding to . 3.1. Conjugation Operator In this subsection, we define a conjugation operator in a Hilbert space and then discuss its properties. Since and may be finite or infinite, we introduce the following conventions for briefness: means in the case of and means in the case of . Denote For any Hermitian matrix defined in , we define with the semiscalar product Furthermore, denote for . Since the weighted function may be singular in , is a seminorm. Introduce the quotient space Then is a Hilbert space with the inner product . For a function , denote by the corresponding class in . And for any , denote by a representative of . It is evident that for any . For any , denote by the conjugation of ; that is, It can be easily verified that if and only if . Here is the conjugation of matrix . Since each is an equivalent class, we define a operator defined on by The following result is obtained. Lemma 13. defined by (24) is a conjugation operator defined on if and only if is real and symmetric in . Proof. The sufficiency is evident. Next, we consider the necessity. Assume that defined by (24) is a conjugation operator in . Then for any , it follows from that By the arbitrariness of , one has that . 
This, together with , yields that is real. The proof is complete. For any , we denote where is the canonical symplectic matrix given in Section 1. In the case of , if exists and is finite, then its limit is denoted by . In the case of , if exists and is finite, then its limit is denoted by . Denote where and are called the natural difference operators corresponding to system . The following result can be easily verified, and so we omit the proof. Lemma 14. Assume that holds. Let . (1). (2) For any , (3) For any , , and any two solutions and of , it follows that Moreover, let be a fundamental solution of , then 3.2. Relationship between the Maximal and Minimal Subspaces In this subsection, we first introduce the maximal and minimal subspaces corresponding to and then show that the minimal subspace is -Hermitian, and its -adjoint subspace is just the maximal Denote and define It can be easily verified that and are both linear subspaces in . Here, and are called the maximal and preminimal subspaces corresponding to or in , and is called the minimal subspace corresponding to or in . Since the end points and may be finite or infinite, we need to divide into two subintervals in order to characterize the maximal and minimal subspaces in a unified form. Choose and fix it. Denote and denote by , and , the inner products and norms of , , respectively. Let and be defined by (31) with replaced by and , respectively. Furthermore, let and be the left maximal and preminimal subspaces defined by (32) with replaced by , respectively, and and the right maximal and preminimal subspaces defined by (32) with replaced by , respectively. The subspaces and are called the left and right minimal subspaces corresponding to system in and , respectively. Similarly, we can define , , and ; , , and ; , and . The following result is directly derived from (1) of Lemma 14. Lemma 15. Assume that holds. Then if and only if . 
In order to study properties of the above subspaces, we first make some preparations. Let be the fundamental solution matrix of with . For any finite subinterval with , denote It is evident that is a positive semidefinite matrix and dependent on . By the same method used in [23, Lemma 3.2], it follows that there exists a finite subinterval with such that for any finite subinterval with . In the present paper, we will always denote and define whenever is finite or infinite. In the case that is finite, can be taken as . In the case that is finite, we define It is evident that is a bounded linear map and its range is a closed subset in . In the case that is infinite, that is, or or , where , are finite integers, we introduce the following subspaces of , respectively: It can be easily shown that is dense in . In this case, we define By the method used in [23, Lemma 3.3], one has the following properties of .

Lemma 16. Assume that holds. (1) . (2) In the case that is finite, in the case that is infinite, let . Then there exist linearly independent elements , , such that (3) . The following is the main result of this section.

Theorem 17. Assume that holds. Then , , and .

Proof. Since the method of the proofs is similar, we only show the first assertion. By , it suffices to show . We first show that . Let . Then for any , there exists with such that in . So, it follows from (2) of Lemma 14 that This implies that . Next, we show . Fix any . It suffices to show that there exists such that in . Let be a solution of on . For any , there exists with such that in . Thus, it follows from (2) of Lemma 14 that In addition, it is clear that Combining (45) and (46), one has that for all , By (2) and (3) of Lemma 16, we get that for any and any The following discussion is divided into two parts.

Case 1. is finite. It is evident that . Then, from (2) of Lemma 16, there exists such that . This, together with (48), implies that . This is equivalent to . Let . Then and satisfies Hence, .
Since is arbitrary, we have .

Case 2. is infinite. We only consider the case that . The other two cases can be proved with similar arguments. Let . With a similar argument to that in Case 2 of the proof of [23, Theorem 3.1], it can be shown that there exist linearly independent elements , and such that , Combining (48)–(50), one has that for any This implies that and consequently is a representative of such that . So . By the arbitrariness of one has . The entire proof is complete.

The following result is directly derived from Lemmas 5 and 15, and Theorem 17.

Theorem 18. Assume that holds. Then , , and .

3.3. Definiteness Condition

In this subsection, we introduce the definiteness condition for and give some important results on it. Since the proofs are similar to those given in [23], we omit them. The definiteness condition for or is given by the following. There exists a finite subinterval such that for any and for any nontrivial solution of , the following always holds: In particular, the definiteness condition for can be described as follows: there exists a finite subinterval such that for any and for any nontrivial solution of , the following always holds:

Lemma 19. Assume that holds. Then holds if and only if there exists a finite subinterval such that one of the following holds: (1) ; (2) for some , every nontrivial solution of satisfies

By Lemma 19, if (52) (or (53)) holds for some , then it holds for every . In addition, if holds on some finite interval , then it holds on . The following is another necessary and sufficient condition for the definiteness condition.

Lemma 20. Assume that holds. Then holds if and only if for any , there exists a unique such that for .

Remark 21. (1) It can be easily verified that the definiteness condition for holds if and only if that for holds. (2) In the remainder of the present paper, we always assume that holds. In this case, we can write instead of in the rest of the paper.
(3) Denote by and the definiteness conditions for in and on the corresponding intervals, respectively. It is evident that one of and implies . But cannot imply that there exists such that both and hold. (4) Several sufficient conditions for the definiteness condition can be given. The reader is referred to [23, Section 4]. For convenience, denote

Lemma 22. Assume that holds. For any , if and only if holds.

4. Characterizations of Minimal and Maximal Subspaces and Defect Indices of Minimal Subspaces

This section is divided into three subsections. In the first subsection, we give all the characterizations of the minimal subspaces generated by in , , and . In the second subsection, we study the defect indices of the minimal subspaces. In the third subsection, characterizations of the maximal subspaces are established.

4.1. Characterizations of the Minimal Subspaces

In this subsection, we study characterizations of the minimal subspaces generated by in , , and . The following result is a direct consequence of Theorem 17.

Theorem 23. Assume that holds. Then , , and are closed -Hermitian subspaces in , , and , respectively.

Now, we introduce boundary forms on , , and by

Lemma 24. Assume that holds. (1) If holds, then for any , (2) If holds, then for any , (3) If holds, then for any ,

Proof. Since the proofs of (1)–(3) are similar, we only show that assertion (1) holds. For any , we have from (2) of Lemma 14 that for any . This yields that exists and is finite for any . Similarly, it can be shown that exists and is finite for any . Hence, assertion (1) holds. The proof is complete.

Lemma 25. Assume that and hold. Then for any given finite subset with and for any given , there exists such that the following boundary value problem: has a solution .

Proof. Set Let , be the linearly independent solutions of system . Then we have In fact, the linear algebraic system where , can be written as which yields Since is a solution of system , it follows from that .
Then ; that is, (65) has only a zero solution. Consequently, (64) holds. Let , be any given vectors in . By (64), the linear algebraic system has a unique solution . Set . It follows from (68) that Let be a solution of the following initial value problem: Since for and , we get by (70) and (2) of Lemma 14 that Since are linearly independent in , we get from (69) and (71) that . So, is a solution of the following boundary value problem: On the other hand, the linear algebraic system has a unique solution by (64). Set . Then, by (73) Let be a solution of the following initial value problem: Since for and , we get by (2) of Lemma 14 and (75) that which, together with (74), implies that . So, is a solution of the following boundary value problem: Set and . Then is a solution of the boundary value problem (62). The proof is complete.

Remark 26. Lemma 25 is called a patch lemma. Based on Lemma 25, any two elements of (, , resp.) can be patched up to construct a new element of (, , resp.). In particular, (1) if holds, we can take , , and , . Then there exist satisfying (2) if holds, we take , , and . Then there exist satisfying (3) if both and hold, then there exist satisfying The above auxiliary elements , , and will be very useful in the subsequent discussions.

Theorem 27. Assume that holds. (1) If holds, then In particular, if , then (2) If holds, then (3) If holds, then

Proof. We first show that assertion (1) holds. By Lemmas 8 and 24, and Theorem 17, one has For convenience, denote Clearly, . We now show that . Fix any . It follows from (85) that for all , For any given , by Remark 26 there exists such that Thus, it follows from (87) that , and consequently for all . In the case that , it is clear that So it remains to show that . It suffices to show that for any . Fix any , and let , , , and . Then by Lemma 25, there exist with and . Inserting these into (87) one has that . Similarly, one can show that . Thus . Therefore, assertion (1) has been shown.
With similar arguments, one can show that assertions (2) and (3) hold by using (78) and (79), respectively. This completes the proof.

4.2. Defect Indices of Minimal Subspaces

In this subsection, we first give a range of values for the defect indices of and and then discuss the relationship among the defect indices of , , and . For brevity, denote For any , let , , , and be defined as in (56) with replaced by and , respectively. The following results are obtained.

Theorem 28. Assume that holds. (1) If holds and , then for any , and . (2) If holds and , then for any , and .

Proof. Since the method of the proofs is the same, we only give the proof of assertion (1). For any , it follows from Lemma 11 and Theorem 18 that
ADMB Files

Code: nbmm.tpl (NOTE: this model does not compile under the current demo version of ADMB-RE, but will do with the next version.)
Data: nbmm.dat
Initial values: nbmm.pin
All required files (DOS): nbmm.zip
Results: nbmm.par

Calling ADMB-RE from R

AD Model Builder has existed for years as a program to produce stand-alone executables on Windows and Linux. Modifying it to seamlessly produce shared libraries for R can be expected to produce a few wrinkles, so please be patient and give us feedback. To run the examples in R you need to download (and source into R) the file glmmadmb.s, which defines the driver function glmm.admb() and a modified version "epil2" of the epil dataset from MASS. You also need to download the library file (nbmm.dll if you run Windows, or nbmm.so if you run Linux) and save it in the directory where you run R. You should note that the ADMB-RE executables create temporary files (sometimes large), so you should probably start R in a specially dedicated directory. Standard deviations of parameter estimates can be found in the file "nbmm.std".

Model description

The negative binomial distribution can be used instead of the Poisson distribution to investigate whether there is overdispersion in the data, that is, whether the variance of the observations is greater than that which would be expected for a Poisson distribution. Parameter estimation for such models is generally claimed to be difficult; see for example the R-help mailing list archives of the statistical modeling language R. The data used in this example are the epilepsy data considered in Venables and Ripley, Modern Applied Statistics with S, 4th edition, and by Booth et al., Negative Binomial Loglinear Mixed Models.

Implementation in ADMB-RE callable from R

We coded up the model in ADMB-RE (nbmm.tpl) with flexible linear predictors for both fixed and random effects. The program was then compiled into a DLL that can be called from R via the R function glmm.admb().
Examples of how to use this function are given below. Currently glmm.admb() only allows the negative binomial, but implementing other distributions like the Bernoulli and Poisson is just a question of adding a few lines of code to nbmm.tpl.

Comparison with SAS NLMIXED

Booth et al. attempt to fit two negative binomial loglinear mixed models to the data. They refer to these models as the full model and a simpler model. For the full model they report: "The fit of the full negative binomial model using NLMIXED was very unstable. Different starting values led to different estimates and very different standard errors." Booth et al. also apply a Monte Carlo EM algorithm (MCEM) to the full model and report: "Application of the MCEM algorithm in this problem suggests that the random slope is 0. The MCEM algorithm was run for a large number of iterations with all of the estimates except for the slope variance and the covariance converging quickly. These latter two estimates appear to be slowly converging toward 0."

The full model of Booth et al. is specified as: This model converges quickly (30 seconds), with the ML estimate of the variance of the random slope being equal to zero (or extremely small), and as a consequence of this there is very little information about the correlation parameter (between the random intercept and the slope). The standard deviations of the parameter estimates, including those of the random effects, are found in the file nbmm.std. We also fitted the simpler model of Booth et al.: This model converged quickly (15 seconds) to the ML estimates. We used different starting values to investigate the stability of the model and found that it converged to the same values each time (provided that the chosen initial values exceeded a minimum level of overdispersion). Thus it appears that the performance of ADMB-RE is superior to SAS NLMIXED for this problem.
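The overdispersion that motivates the negative binomial model can be illustrated without ADMB-RE or R at all. The following Python sketch (our own illustration, not part of the ADMB example; the parameter values are arbitrary) simulates negative binomial counts as a gamma-mixed Poisson and shows that the variance exceeds the mean, which a pure Poisson model cannot accommodate:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, k = 5.0, 2.0      # mean and negative binomial size (dispersion) parameter
n = 200_000

# A negative binomial variate is a Poisson variate whose rate is gamma-distributed:
# lambda ~ Gamma(shape=k, scale=mu/k), then y | lambda ~ Poisson(lambda)
lam = rng.gamma(shape=k, scale=mu / k, size=n)
y = rng.poisson(lam)

print(y.mean())  # close to mu = 5.0
print(y.var())   # close to mu + mu**2 / k = 17.5, well above the Poisson value of mu
```

For a Poisson model the variance would equal the mean; the gap between the two sample moments is exactly the overdispersion the negative binomial's extra parameter absorbs.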
Results 1 - 10 of 69

- Bull. Am. Math. Soc., New Ser., 1993. Cited by 94 (0 self).
  "In this article we shall give an account of certain developments in knot theory which followed upon the discovery of the Jones polynomial [Jo3] in 1984. The focus of our account will be recent glimmerings of understanding of the topological meaning of the new invariants. A second theme will be the central role that braid ..."

- 2002. Cited by 60 (16 self).
  "In [23] we introduced a knot invariant for a null-homologous knot K in an oriented three-manifold Y, which is closely related to the Heegaard Floer homology of Y (cf. [21]). In this paper we investigate some properties of these knot homology groups for knots in the three-sphere. We give a combinatorial description for the generators of the chain complex and their gradings. With the help of this description, we determine the knot homology for alternating knots, showing that in this special case, it depends only on the signature and the Alexander polynomial of the knot (compare [24]). Applications include new restrictions on the Alexander polynomial of alternating knots."

- J. ACM, 1999. Cited by 55 (6 self).
  "We consider the problem of deciding whether a polygonal knot in 3-dimensional Euclidean space is unknotted, capable of being continuously deformed without self-intersection so that it lies in a plane. We show that this problem, the unknotting problem, is in NP. We also consider the problem, the splitting problem, of determining whether two or more such polygons can be split, or continuously deformed without self-intersection so that they occupy both sides of a plane without intersecting it. We show that it also is in NP. Finally, we show that the problem of determining the genus of a polygonal knot (a generalization of the problem of determining whether it is unknotted) is in PSPACE. We also give exponential worst-case running time bounds for deterministic algorithms to solve each of these problems. These algorithms are based on the use of normal surfaces and decision procedures due to W. Haken, with recent extensions by W. Jaco and J. L. Tollefson."

- Ann. Sci. École Norm. Sup., 2001. Cited by 53 (3 self).
  "Let M be a connected, compact, orientable 3-manifold with b1(M) > 1, whose boundary (if any) is a union of tori. Our main result is the inequality ||·||_A <= ||·||_T between the Alexander norm on H^1(M; Z), defined in terms of the Alexander polynomial, and the Thurston norm, defined in terms of the Euler characteristic of embedded surfaces. (A similar result holds when b1(M) = 1.) Using this inequality we determine the Thurston norm for most links with 9 or fewer crossings."

- Kobe J. Math., 1999. Cited by 20 (6 self).
  "Skein modules are the main objects of an algebraic topology based on knots (or position). In the same spirit as Leibniz we would call our approach algebra situs. When looking at the panorama of skein modules, we see, past the rolling hills of homologies and homotopies, distant mountains - the Kauffman bracket skein module, and farther off in the distance skein modules based on other quantum invariants. We concentrate here on the basic properties of the Kauffman bracket skein module; properties fundamental in further development of the theory. In particular we consider the relative Kauffman bracket skein module, and we analyze skein modules of I-bundles over surfaces. History of skein modules from my personal perspective: I would like to use this opportunity, of informal presentation, to give my personal history of algebraic topology based on knots (a more formal account was given in [Pr-7]). In July 1986 I left Poland invited by Dale Rolfsen for a visiting position at UBC. In January of 1987, Jim Hoste gave a talk at the first Cascade Mountains Conference (in Vancouver) and described his work on a multivariable generalization of the Jones-Conway ([HOMFLY][PT]) polynomial of links in S^3. He was convinced that his construction works for 2 colors when the first color is represented only by a trivial component. He had already succeeded in the case of 2-component 2-bridge links. His method, following Nakanishi, was to analyze link diagrams in an annulus (the trivial component being the z axis). We immediately noticed (with Jim) that the analogous construction for the Kauffman bracket polynomial has an easy solution [H-P-1]. In March ..."

- J. Knot Theory Ram. Cited by 18 (14 self).
  "We introduce and study in detail an invariant of (1,1) tangles. This invariant, derived from a family of four-dimensional representations of the quantum superalgebra Uq[gl(2|1)], will be referred to as the Links-Gould invariant. We find that our invariant is distinct from the Jones, HOMFLY and Kauffman polynomials (detecting chirality of some links where these invariants fail), and that it does not distinguish mutants or inverses. The method of evaluation is based on an abstract tensor state model for the invariant that is quite useful for computation as well as theoretical exploration."

- 1998. Cited by 18 (1 self).
  "The research presented here examines topological drawing, a new mode of constructing and interacting with mathematical objects in three-dimensional space. In topological drawing, issues such as adjacency and connectedness, which are topological in nature, take precedence over purely geometric issues. Because the domain of application is mathematics, topological drawing is also concerned with the correct representation and display of these objects on a computer. By correctness we mean that the essential topological features of objects are maintained during interaction. We have chosen to limit the scope of topological drawing to knot theory, a domain that consists essentially of one class of object (embedded circles in three-dimensional space) yet is rich enough to contain a wide variety of difficult problems of research interest. In knot theory, two embedded circles (knots) are considered equivalent if one may be smoothly deformed into the other without any cuts or self-intersections. This notion of equivalence may be thought of as the heart of knot theory. We present methods for the computer construction and interactive manipulation of a ..."

- Journal of Knot Theory and its Ramifications, 2000. Cited by 8 (8 self).
  "This paper describes a method for the automatic evaluation of the Links-Gould two-variable polynomial link invariant (LG) for any link, given only a braid presentation. This method is currently feasible for the evaluation of LG for links for which we have a braid presentation of string index at most 5. Data are presented for the invariant, for all prime knots of up to 10 crossings and various other links. LG distinguishes between these links, and also detects the chirality of those that are chiral. In this sense, it is more sensitive than the well-known two-variable HOMFLY and Kauffman polynomials. When applied to examples which defeat the HOMFLY invariant, interestingly, LG 'almost' fails. The automatic method is in fact applicable to the evaluation of any such state sum invariant for which an appropriate R matrix and cap and cup matrices have been determined."
Survival Output Tab

Every session has an Output tab, on which you can customize miscellaneous settings that affect the appearance of the results matrix. In a Survival session, you can use the Output tab to edit the following settings.

Display Standard Life

Set Statistic Type and Number of Decimal Places for Survival

These options let you specify whether survival in the results matrix will be displayed as percents or proportions, and to how many decimal places they will be rounded. Select the statistic type from the Display Statistics As drop-down list, and the precision from the Number of Decimal Places drop-down list. For example, if you select "Proportions" and "0.0001", the number 0.55555 will be displayed as 0.5556 in the results matrix. If you select "Percents" and "0.01%", it will be displayed as "55.56%". Once you have set the statistic type and precision, you may click the Set Defaults button if you want to use these settings automatically each time you create a new Survival Session.

Flag High Relative Cumulative Standard Errors

Select this option to flag all cumulative relative survival standard errors greater than the specified percent. This flag affects all cumulative summary and survival life pages. If the standard error is not displayed as a percentage, then you must multiply it by 100 in order to compare it to the flag value.

Suppress Pages with Fewer than n Cases Alive

This option allows you to suppress the display of statistics on survival life pages that are based on fewer than a specified minimum number of cases entering the first interval. The affected survival life pages will appear in the survival matrix but will be empty. The empty pages are left in the final matrix in order to properly document which pages were suppressed.

Adjust Relative Survival Over 1.0 (100%)

When calculating relative survival, the survival statistics can be calculated properly and yet be greater than 1.00. This occurs when the actual observed survival for the case cohort is higher than the expected survival for that same age, race, sex, and date at which age was coded. When this box is checked, any calculated relative survival which exceeds 1.00 will be adjusted down to 1.00 on the output tables.

Adjust Increasing Relative Survival

Sometimes the cumulative relative survival can be calculated properly and yet be increasing over time, making it appear as if people are rising from the dead. This occurs when the actual observed survival for the case cohort decreases more slowly than the expected survival for that same age, race, sex, and year group. When this box is checked, any cumulative relative survival which exceeds the survival in the previous interval will be adjusted down to equal the previous survival.
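The two adjustment options amount to a cap followed by a running minimum. The Python sketch below is our own illustration of that arithmetic (it is not SEER*Stat code, and the function name is ours), applied to a hypothetical cumulative relative survival series:

```python
import numpy as np

def adjust_relative_survival(cum_rel, cap=True, monotone=True):
    """Illustrative sketch of the two Output tab adjustments (not SEER*Stat's code).

    cap:      "Adjust Relative Survival Over 1.0 (100%)" - clip values above 1.00
    monotone: "Adjust Increasing Relative Survival" - never exceed the previous interval
    """
    out = np.asarray(cum_rel, dtype=float).copy()
    if cap:
        out = np.minimum(out, 1.0)               # clip to 1.00
    if monotone:
        out = np.minimum.accumulate(out)         # running minimum over intervals
    return out

print(adjust_relative_survival([0.98, 1.03, 0.97, 0.99]))  # [0.98 0.98 0.97 0.97]
```

The 1.03 value is first capped to 1.00 and then pulled down to the previous interval's 0.98; the final 0.99 is pulled down to 0.97.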
Technical Info - Anti-Aliasing

This page describes the problem of aliasing in rendering fractal images and examines some ways to mitigate its effects. This always involves doing things that dramatically increase the rendering time of the fractal, so for many deep-zoom animations, it is not practical to use any of these methods unless frame interpolation is used. The related subject stochastic supersampling is discussed on a separate page.

Digital signal processing is a branch of electrical engineering and mathematics that deals with signals that have been digitally sampled and are processed as discrete lists or arrays of numbers, as opposed to analog signals. Most of this field was originally developed to understand the processing of audio signals that are digitally sampled in time, but it turns out that essentially the same mathematics applies to images that are digitally sampled in a two-dimensional plane. Obviously all fractal images rendered by software that scans a region of the complex number plane are digitally sampled, so DSP has a lot of important insights regarding this business. The most important result to understand is the Nyquist-Shannon sampling theorem:

If a signal y(t) contains no frequencies higher than F[0], then it can be completely reconstructed by sampling at a frequency F > 2F[0], i.e. by a grid of sample points spaced at intervals of dt = 1/F < 1/(2F[0]).

This can be restated slightly differently for two-dimensional images:

If an image A{x,y} contains no spatial detail on scales smaller than size ds, then it can be completely reconstructed by sampling on a grid of points spaced at intervals smaller than dx = ds/2 and dy = ds/2.

The frequency 2F[0] is called the Nyquist frequency. The requirement that a signal have no energy at frequencies higher than F[0] is called the Nyquist criterion. Note that the Nyquist frequency is twice the limiting frequency F[0]. As important as this theorem is for providing guidance on how to reconstruct sampled signals, it is also important for fractal animations because it tells us what we cannot do: we cannot hope to reconstruct the fractal by sampling on a grid because the fractal always has spatial detail smaller than any distance scale ds that we can choose.
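The fold-back behavior implied by the theorem is easy to demonstrate numerically. In this Python sketch (our own illustration, with arbitrarily chosen rates), a tone above the Nyquist frequency of a 100 Hz sampling grid reappears in the sampled data at a lower, aliased frequency:

```python
import numpy as np

F = 100.0           # sampling rate (Hz); Nyquist-limited content is F/2 = 50 Hz
N = 1000            # number of samples (10 seconds of signal)
t = np.arange(N) / F

def apparent_freq(f):
    """Sample sin(2*pi*f*t) at rate F and return the peak frequency in its spectrum."""
    x = np.sin(2 * np.pi * f * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(N, d=1.0 / F)
    return freqs[np.argmax(spectrum)]

print(apparent_freq(30.0))  # below Nyquist: reproduced faithfully -> 30.0
print(apparent_freq(70.0))  # above Nyquist: folded back to 100 - 70   -> 30.0
```

A 70 Hz input becomes indistinguishable from a 30 Hz one once sampled, which is exactly the "new identity" the word aliasing refers to.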
As important as this theorem is for providing guidance on how to reconstruct sampled signals, it is also important for fractal animations because it tells us what we cannot do: we cannot hope to reconstruct the fractal by sampling on a grid because the fractal always has spatial detail smaller than any distance scale ds that we can choose. This is a fundamentally different situation than electrical engineers face when sampling analog signals. Normally, when an analog signal is digitally sampled, it is first run through a low-pass filter that removes energy in the signal above the frequency F[0]. Although it may not be easy, it is at least possible in principle to filter an analog signal well enough to achieve any desired level of attenuation of the signals at frequencies higher than F[0], and the Nyquist criterion can be satisfied well enough for most practical purposes (even enough to make the fussiest audiophiles happy). The same principle applies to images. Digital cameras, for example, have to employ the optical equivalent of a low-pass filter before the image gets to the digital image sensor. In this case, the low-pass filter is an optical element that blurs the image very slightly to ensure that there is no detail on scales smaller than twice the image sensor's pixel spacing. Unfortunately, there is no way to do something analogous when we try to draw a fractal. No matter what we do, we are always beginning with the sampling operation -- there is no way to insert a low-pass filter in the image before we sample it. The phenomenon that happens when we digitally sample a signal without first filtering out the energy above the Nyquist frequency is called aliasing. If we go ahead and sample a signal with significant energy above F[0], it turns out that all the energy above F[0] gets folded back into the frequency range below F[0]. 
This is called "aliasing" because when the digitally sampled signal is reconstructed from its samples, high-frequency input signals will be converted to some other, lower frequency; it is as if the high frequency took on a new identity as a lower frequency. This is the same phenomenon that makes it look like car wheels are moving slowly (or backwards) when you watch them on TV or in a movie. In fractal animations, this effect is most obvious in areas with lots of extremely fine spatial detail, and manifests as the formation of moire patterns. An example can be seen in a frame from the video Centanimus, at a size of about 1e-100: when the spatial frequency of the fine radial spokes converging on a mini-set's edge becomes higher than the Nyquist frequency determined by the spacing of the sampling grid, aliasing starts to happen. The high frequencies get aliased to low frequencies, which manifest as the moire patterns we see. These generally don't show up except at extreme deep-zooms like this where there is a tremendous amount of detail on exquisitely small scales compared to the pixel size. In such an image, when we look at the area right next to the set, all we see is noise. This is because the spatial frequencies of the converging radial lines are so high that they get folded back many times into the spatial frequency band below F[0]. That multiple folding effectively scrambles all the information in this region, and what we see is just noise. So we can think of each mini-set (and remember there are infinitely many of them) as surrounded by a little cloud of noise with very high spatial frequency. Furthermore, since the count numbers rise arbitrarily high as we get closer to the boundary of a set, and are indeed infinite within the set itself, the amplitude of the noise in this cloud can be arbitrarily large. This is what causes the "sparkle" noise effect that can be seen to some degree in all fractal animations.
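The claim that count numbers rise without bound near the boundary can be checked directly. This sketch (our own, using the standard escape-time iteration; the boundary point c = 1/4 of the Mandelbrot set is chosen purely for convenience) shows iteration counts blowing up as sample points approach the set:

```python
def escape_count(c, max_iter=100_000):
    """Iterate z -> z*z + c from z = 0; return the step at which |z| first exceeds 2."""
    z = 0.0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter  # treated as "inside the set" at this resolution

# c = 0.25 lies on the boundary of the Mandelbrot set; approach it from outside:
counts = [escape_count(0.25 + eps) for eps in (1e-1, 1e-2, 1e-3, 1e-4)]
print(counts)  # strictly increasing as eps shrinks
```

Since the counts determine the rendered color, unboundedly large values packed into unboundedly small regions give the noise cloud its arbitrarily large amplitude.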
Although it is impossible to eliminate the effects of aliasing altogether, there are ways to reduce them or make them less visible.

Fundamental Rules

A very deeply-rooted fact (not a theoretical proposition) is that there are only two ways to deal with aliasing:

1. Prefilter the signal to remove as much of the energy above the Nyquist limit as necessary
2. Oversample the signal at a rate well above the Nyquist frequency

Maybe the more laid-back people would add another choice:

3. Ignore it (or even enjoy it!)

Fractal images (and indeed, all digital video images that are constructed from mathematical models, including things like Shrek and Nemo) do not exist until they are digitally sampled; there is nothing analogous to prefiltering an analog signal in the world of computer graphic imaging, so option (1) is not available. Therefore, all anti-aliasing techniques for fractal images, whether still or animated, are variations on one fundamental idea, which is oversampling (another synonymous term is "supersampling", which I like because it sounds more cool). Oversampling refers to sampling a signal at a rate much higher than the maximum frequency that needs to be reproduced. In the case of fractal images, it means sampling the fractal on a grid with many more pixels than the final image contains. For example, we could use a 3x3 grid, or a 5-point cross, or even just two points next to each other (or the tricky technique of stochastic supersampling). What this does is increase the sampling frequency, which means the same amount of aliased energy will be spread over a much larger signal bandwidth. That means the energy density of the aliased energy will be lower in the signal band. Another way of thinking of this is that the aliased energy is spread into some discrete frequency bins, whose number is half the number of sample points.
Increasing the number of sample points increases the number of frequency bins, but the amount of out-of-band energy that will be aliased is the same, so the aliased energy per bin is lower. Any sampling also establishes an absolute limit to the frequency content of the signal -- once any signal is sampled at a rate F, it cannot reproduce any frequency content above F/2. So if the oversampled signal is low-pass filtered and then sampled again -- this time with the Nyquist criterion more closely satisfied, since the oversampled signal can be low-pass filtered -- the amount of aliased noise in the final reproduced signal is lower. What that translates to in practical terms is that we calculate the fractal on a much larger grid than we really need, with the pixels on that larger grid spaced more closely than the pixels will be in the final image. The finer-spaced pixels give a higher sampling frequency. We then apply a low-pass filter of some sort (which type of filter is a whole subject in itself) to the oversampled grid and resample the output of that filter at the spatial resolution of the final video image.

Bottom Line

So all techniques for reducing noise in computer-generated fractal images are basically variations of oversampling and low-pass filtering. The most obvious questions now are:

• What kind of filter do we use?
• How much oversampling do we need?

There is also a more subtle question about how to oversample...more on that later. The second question turns out to be the more important one, and its answer is easy: the more the better. Well, up to a point. The effects of oversampling follow one of the general rules of life, the Law of Diminishing Returns. A few extra points of oversampling, like 2X in each direction (for a total of 4X the number of points), give a significant effect, while going to 3X gives a little more effect but not as much, and going to 4X gives an even smaller improvement.
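In code, the whole oversample / low-pass / resample pipeline described above can be sketched in a few lines. This is illustrative Python (not HPDZ.NET's actual renderer), and the "low-pass filter" here is just a k x k box average:

```python
import numpy as np

def render_oversampled(f, width, height, k):
    """Sample f on a grid oversampled k-fold in each direction, then
    low-pass filter and resample by averaging each k x k block of fine
    samples down to one output pixel."""
    ys, xs = np.mgrid[0:height * k, 0:width * k]
    fine = f(xs / k, ys / k)                      # the finely spaced samples
    return fine.reshape(height, k, width, k).mean(axis=(1, 3))

# example: render a toy test function at 64x64 with 3x3 oversampling
img = render_oversampled(lambda x, y: np.sin(x * y), 64, 64, 3)
print(img.shape)   # (64, 64)
```

In a real fractal renderer, f would be the iteration-count function evaluated at each fine-grid point of the complex plane.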
Generally, oversampling beyond 5X (25 samples per output pixel) doesn't give enough of a noticeable improvement to justify the 25X increase in rendering time. The histograms and sample images below -- some of which are drawn with 15x15 oversampling -- show this quite clearly. The first question above -- what type of filter to use -- is more difficult. There are dozens of different kinds of digital filters, many of which have dozens of variations, and many of which allow some sort of multi-parameter kernel to be specified... Broadly speaking, they are divided into two major groups: linear filters and nonlinear filters. We are not going to go into any details of this, but rather consider one simple linear filter, the mean filter, and one simple nonlinear filter, the median filter. The median filter turns out to be superior for our purpose, as we will see with the test images below. Still, the difference between these filters is often quite subtle, especially when compared to the impact of having more oversampling points. But it is possible that there are other more sophisticated filters that we could use. This is an ongoing area of interest in the research department of HPDZ.NET, and some developments may come in the future.

Mean Filter

One obvious thing to do when we have a bunch of data samples and want to reduce them to a single number is simply to take their average. If we just take each image pixel and divide it into a small grid for oversampling, we can average those oversampled points together and use the average in the final video frame. This turns out to work fairly well, as shown by the test images below. Click on the thumbnails to download the 1024x768 JPG files. The JPG compression is set very low on all these files to preserve detail as much as possible, so they are all pretty big, around 700-900KB each.
[Thumbnail grid: mean filter examples -- unfiltered versus 3x3, 5x5, and 15x15 oversampling, including 2-cycle palette versions]

About the Example Images

• The first row was used to make some of the example histogram plots below. The images are drawn with max counts = 100,000 and rank-order colorizing is used.
• The second row is taken from the examples comparing histogram versus rank-order colorizing and uses histogram colorizing.
• The third row is an area chosen for its high level of detail. It was drawn with rank-order colorizing with max counts = 100,000 and includes examples with a 2-cycle color palette.

The results are pretty good, but this is not the optimal filter for this application. That is because the mean of a set of data gives a lot of weight to outlier values and doesn't always produce the result we want when the data has a single large, spurious value. Say we have a 3x3 oversampling grid that has fractal count values as shown below in set S. This kind of data is typical of what we see in the Mandelbrot set, with occasional huge spikes that we want to remove from the count values.

S = { 90, 101, 110, 120, 126, 150, 173, 182, 10000 }

The average of the values in S is 1228. This value is not really representative of the data set in this case. Furthermore, the average is not one of the members of the original input data set. That means that this kind of filter will introduce count values into the fractal image that were not calculated at any point in the supersampling grid. The mean filter is very good -- optimal, in fact -- at removing a certain type of noise, namely Gaussian-distributed noise, but it is not so good at removing the kind of spiky, impulsive noise that we have in this application.

Median Filter

Another way to reduce a set of data to a single number is to take the median. The median of a set is the number in the set which half the data is above and half the data is below.
If the set has an odd number of points, the median is the data point in the middle when the data is sorted. If the set has an even number of points then the median is the average of the two center data values. Consider the previous data set on the hypothetical 3x3 oversampling grid.

S = { 90, 101, 110, 120, 126, 150, 173, 182, 10000 }

The median of this set is 126, which is more representative of the data than the mean of 1228. The median of a set is better able to reject large outlier values than the mean is. Notice that the median will be the same no matter how large the largest value gets, while the mean will grow proportionally to the largest value. Also, notice that the median is one of the data set members, 126 in this case, while the mean value of 1228 is not an element of S. As long as the set S has an odd number of elements, the median will always be a member of S and therefore will always be a value that was actually calculated from the fractal formula, not a value that is an artifact of the filter. In terms of the performance on images, the median filter is a much better choice for this application because median filters remove impulsive, spiky noise much more effectively than any kind of filter based on averaging. Note that we are only using the filter on the supersampling pixels; we are not filtering the final image itself. Here are some examples of its effect on the previous test images.

[Thumbnail grid: median filter examples -- unfiltered versus 3x3, 5x5, and 15x15 oversampling, including 2-cycle palette versions]

Here are a couple more examples that I made a long time ago and just had to keep. These are at a size of 1.2e-15 in the complex number plane. They show how much sparkle noise can be eliminated by supersampling and also show some subtle moire in the upper half that is diminished in the filtered version. Note that the filtered JPG file is significantly smaller than the unfiltered file since it has much less noise.
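The two filters are easy to compare on the sample grid S from the text, using nothing but the Python standard library:

```python
import statistics

# the hypothetical 3x3 oversampling grid from the text: eight ordinary
# iteration counts plus one huge spike near the set boundary
S = [90, 101, 110, 120, 126, 150, 173, 182, 10000]

print(sum(S) / len(S))        # 1228.0 -- dragged far above the typical values
print(statistics.median(S))   # 126 -- a value actually present in the grid
```

The spike moves the mean arbitrarily far, while the median stays put no matter how large the outlier gets.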
We can gain some insight into the differences between the mean filter and median filter, as well as the effect of additional oversampling, by looking at the count histograms for images with different types of oversampling. The first set of histograms comes from the image in the first row of the sample images above, the one that looks like this:

The first histogram was generated from images that were made with the maximum count number set to 500 and rendered with a resolution of 500x500 pixels. The following five configurations were used. In case the legend on the graph is not clear, the colors of each group of data are listed below.

• Unfiltered data (black)
• Median filter, 5x5 oversampling (red)
• Median filter, 15x15 oversampling (violet)
• Mean filter, 5x5 oversampling (green)
• Mean filter, 15x15 oversampling (blue)

The unfiltered data is black; the green/blue points are the mean filtered data, and the red/violet points are the median filtered data. The histogram, which is shown in log-log scale, is a plot of the frequency of each fractal data value in the image. The horizontal axis indicates the fractal count number, on a scale from 50 to 500 (the maximum possible for this set of images). Each minor log division represents a multiple of 10, so the minor ticks correspond to 50, 100, 150, 200, etc. up to 500. The vertical axis is proportional to the number of times each number occurs in the image, scaled by an arbitrary factor to make the range look nice. It is essentially the probability density function for the counts. The first thing to notice about this graph is that the mean-filtered points (green and blue) have a prominent hump around counts 200-250. This hump is not present in the original unfiltered data (black points) and is due to the effect described above -- the mean of a set of data with an outlier will give a value intermediate between the outlier and the true central value of the data.
This hump is not present in the median-filtered (red and violet) data. Next, notice that the mean-filtered points retain a slightly higher maximum count value than the median-filtered points. This is also because of the fact that the mean will give significant weight to outlier points and is less able to eliminate them from the data than the median is. Finally, compare the difference between the 15x15 median filtered values (violet) and the 5x5 median-filtered values (red) to the corresponding difference for the mean filtered values. The median filter at 5x5 performs much closer to how it does at 15x15 than the mean filter does. The median filter is better able to filter out noise with fewer oversampling points than the mean filter is -- for the particular type of noise we have here. This makes a huge difference in rendering fractal images, where the difference between 5x5=25 and 15x15=225 can mean the difference between a project taking a month versus a project taking a year. Now let's look at what happens when we raise the max count to 100,000. Remember the vertical axis has an arbitrary scale, so don't compare it with the previous graph. This histogram has the horizontal (count) axis extended to 1000 to show the higher count numbers that occur now that the max count is 100,000 instead of 500, but there are actually a few additional points with counts well above 1000, extending to the 10,000 to 100K range. Of the several hundred thousand total points in this data set, only about ten had counts above 1000, so we've chopped them off here since they have a negligible effect on the main conclusions below. But if you are scaling based on the maximum count, these outliers can be devastating, which is why trimming (ignoring the few highest and lowest counts) is essential for colorizing. Two important things seem to jump out:

• The humps in the mean-filtered data are more prominent.
This is because there are more high-valued outliers that are artificially pulling the means up to the 200-250 range. • The difference between the 5x5 mean and the 15x15 mean is larger, while the difference for the median filtered data is no different. Again, the median filter is much more robust, even for smaller oversampling factors, than the mean filter. The median filter seems to be performing more like what we would want, at least based on this set of data from this particular image. Here is another set of data, this time from the image that looks like this: Here again we see the median filter is performing better, although the difference is subtle. First notice, as in the previous graphs, that there are fewer points at higher count numbers with the median filter. Next notice that in the range of counts around 10,000, the median filter data show much more oscillation than the mean filter data, indicating it is better able to reproduce the high-count dwell bands, rather than washing them out like the mean filter does. If we magnify the portion of the graph from counts 6000 to 12000 and add some guide lines to each data series we can see the difference more clearly. In the graph below, the black line is a fast moving average applied to the median filter data, while the blue line is the same moving average applied to the mean filter. The spikes are a real part of the image, corresponding to different dwell bands. The median filter is better able to reproduce them at higher count numbers than the mean filter. Supersampling is very expensive -- it increases rendering time by at least a factor of two, and generally more like a factor of 9 or 16. One way of reducing the time needed to achieve the benefit of anti-aliasing is to select only the noisiest points for supersampling. There are many variations on this idea, mostly based on different strategies for selecting which points to supersample. 
The basic plan is to measure how different each pixel is from its neighbors, and if the difference exceeds some threshold, the pixel gets supersampled in a second rendering pass. Choosing how to put a number to this difference is where the art of selective supersampling lies. Future updates to this page will have more detail on this.

Here is a quick summary of the main conclusions.

• The most important thing by far for effective anti-aliasing is to use a lot of oversampling points. The biggest gains are up to the first 9 to 16 (3x3 to 4x4) samples, with only a little additional benefit from going even as high as 15x15 for most images.
• The median filter is superior to the mean filter in several fundamental theoretical ways, although the differences are often subtle in actual fractal images.
• The difference between the median filter and the mean filter becomes more noticeable as the maximum count increases and in images that have many points close to a mini-set. High max-counts cause more outliers, which the median filter is better able to remove. The mean filter turns these into artifactual mid-range count values.
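Going back to the selective-supersampling idea above, the selection step might look like this. This is an illustration only: the 4-neighbor difference measure and the threshold are arbitrary choices for the sketch, not the method actually used here.

```python
import numpy as np

def supersample_mask(img, threshold):
    """Flag pixels whose count differs from any 4-neighbor by more than
    `threshold`. Flagged pixels would be re-rendered with supersampling
    in a second pass."""
    diff = np.zeros_like(img, dtype=float)
    diff[1:, :] = np.maximum(diff[1:, :], np.abs(img[1:, :] - img[:-1, :]))
    diff[:-1, :] = np.maximum(diff[:-1, :], np.abs(img[:-1, :] - img[1:, :]))
    diff[:, 1:] = np.maximum(diff[:, 1:], np.abs(img[:, 1:] - img[:, :-1]))
    diff[:, :-1] = np.maximum(diff[:, :-1], np.abs(img[:, :-1] - img[:, 1:]))
    return diff > threshold

# a smooth region with one noise spike: only the spike and its
# four neighbors get flagged for the second rendering pass
counts = np.array([[100, 101, 102],
                   [101, 9000, 103],
                   [102, 103, 104]], dtype=float)
mask = supersample_mask(counts, 50)
print(mask)
```

The payoff is that the expensive 9X-25X supersampling work is spent only on the small fraction of pixels near mini-sets and their noise clouds.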
This Week’s Finds in Mathematical Physics (Week 289) Posted by John Baez In week289 of This Week’s Finds, hear the latest news about $E_8$. Then, continue exploring the grand analogy between different kinds of physics. We’ll get into a bit of thermodynamics — and chemistry, too! Finally, learn more about rational homotopy theory, this time entering the world of “differential graded Lie algebras”, which lets us use Lie theory to study topological spaces. Posted at January 8, 2010 10:48 PM UTC Re: This Week’s Finds in Mathematical Physics (Week 289) You say: Unlike elementary particles or rocks, people are complicated systems who don’t necessarily obey simple differential equations. However, some economists have used the above analogy to model economic systems. And I can’t help but find that interesting – even if intellectually dubious when taken too seriously. But do elementary particles really obey such simple differential equations? We come up with, say, the Klein-Gordon equation only by making all sorts of simplifying assumptions. We find that its predictions hold only when those assumptions aren’t violated too badly, like if interactions are “weak enough”. It seems to me that the difference isn’t so much that people are so much more complicated as that the simplifying assumptions become significant so much sooner. Posted by: John Armstrong on January 8, 2010 11:40 PM | Permalink | Reply to this
Differential equations for particles and rocks are themselves somewhat intellectually dubious when taken too seriously, but at a higher level of mathematical sophistication :) $\text{Mathematical Sophistication} \ne \text{Intellectual Non-Dubiousness}.$ Having said that, I also have fun occasionally relating physics and finance (or phynance): Posted by: Eric on January 9, 2010 2:12 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I’m suspicious of any attempt to make economics seem like physics. Perhaps then More heat than light: economics as social physics, physics as nature’s economics by Philip Mirowski is for you: More Heat Than Light is a history of how physics has drawn some inspiration from economics and also how economics has sought to emulate physics, especially with regard to the theory of value. It traces the development of the energy concept in Western physics and its subsequent effect upon the invention and promulgation of neoclassical economics. Any discussion of the standing of economics as a science must include the historical symbiosis between the two disciplines. Starting with the philosopher Emile Meyerson’s discussion of the relationship between notions of invariance and causality in the history of science, the book surveys the history of conservation principles in the Western discussion of motion. Recourse to the metaphors of the economy are frequent in physics, and the concepts of value, motion, and body reinforced each other throughout the development of both disciplines, especially with regard to practices of mathematical formalisation. However, in economics subsequent misuse of conservation principles led to serious blunders in the mathematical formalisation of economic theory. The book attempts to provide the reader with sufficient background in the history of physics in order to appreciate its theses. The discussion is technically detailed and complex, and familiarity with calculus is required.
Posted by: David Corfield on January 9, 2010 3:59 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) John A. wrote: It seems to me that the difference isn’t so much that people are so much more complicated as that the simplifying assumptions become significant so much sooner. Good point. I’ve tried to improve the wording a bit. Posted by: John Baez on January 9, 2010 6:39 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Hi John, I remember seeing in some book what you wanted to know about that lattice with e8 symmetry. I am still looking for it, but I guess these articles should be interesting for the subject. I googled for “E8” + “Ising” : Paul A. Pearce, PHASE TRANSITIONS, CRITICAL PHENOMENA AND EXACTLY SOLVABLE LATTICE MODELS. V. BAZHANOV, B. NIENHUIS AND S. O. WARNAAR, LATTICE ISING MODEL IN A FIELD: E8 SCATTERING THEORY, this one cites Zamolodchikov. Posted by: Daniel de Franca MTd2 on January 9, 2010 3:56 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Perhaps the book you’re thinking of is G. Mussardo, Statistical Field Theory, Oxford, 2010? Posted by: Will Orrick on January 9, 2010 4:09 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) 2010? Hmmm, I don’t think that is the one. It is older. But take a look at this. Posted by: Daniel de França MTd2 on January 9, 2010 6:50 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) The A-D-E diagrams show up over and over again in conformal field theory and 2-D statistical mechanics. The critical Ising model is (A[2] , A[3]) in the A-D-E classification described in the Scholarpedia article. That E[8] shows up when the critical model is perturbed away from criticality is, I believe, an extra surprise. 
Posted by: Will Orrick on January 9, 2010 8:13 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I found it! Philippe Di Francesco,Pierre Mathieu,David Sénéchal,”Conformal Field Theory”,Springer, 1997. The stuff around page 814. If that still doesn’t help, I will use peg-leg magic to make the article appear. Posted by: Daniel de França MTd2 on January 9, 2010 9:22 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) The paper by Pearce is a very pleasant introduction to critical points in statistical mechanics and how an ADE classification shows up in this subject. The paper by Bazhanov et al seems more relevant. It mentions the work of Zamolodchikov — precisely the work that I’d like to understand. It helps to know that the ferromagnetic system being studied is known mathematically as an ‘Ising model’. The paper begins: Since the work by A.B. Zamolodchikov [1] it is known that certain perturbations of conformal field theories (CFT’s) lead to completely integrable models of massive quantum field theory (QFT). The existence of non-trivial higher integrals of motion and other dynamical symmetries [2-6] in such a QFT allows to compute the spectrum of the particles and their $S$-matrix explicitly. At the same time, these QFT models can be obtained as the scaling limit of appropriate non-critical solvable lattice models in statistical mechanics (see [7] for an introduction and references on solvable lattice models). In the latter approach the spectrum and the S-matrices can be calculated from the Bethe Ansatz equations for the corresponding lattice model [8–10]. The natural problem arising in this connection is to find lattice counterparts for all known integrable perturbed CFT’s and vice versa. A description of known results of such correspondence lies outside the scope of this letter and we refer the interested reader to [1-10] and references therein. 
Here we consider one particularly important example of this correspondence associated with the Ising model at its critical temperature in a magnetic field, hereafter referred to as the magnetic Ising model. A.B. Zamolodchikov [11] has shown that the $c = 1/2$ CFT (corresponding to the critical Ising model) perturbed with the spin operator $\phi_{1,2} = \phi_{2,2}$ of dimension $(1/16, 1/16)$ describes an exactly integrable QFT containing eight massive particles with a reflectionless factorised $S$-matrix. Up to normalisation the masses of these particles coincide with the components $S_i$ of the Perron-Frobenius vector of the Cartan matrix of the Lie algebra E8. The aim of this letter is to show that the above QFT describes the scaling limit of the dilute A3 model of Warnaar, Nienhuis and Seaton [12,13] in the appropriate regime. I would love to understand how the magnetic Ising model gets to have 8 massive particles whose masses are related to the $E_8$ Cartan matrix — that’s what the new experiment is studying! But the above paper goes on to talk about the ‘dilute $A_3$ model’, whatever that is, rather than explaining Zamolodchikov’s work in more detail. The paper by Pearce hints at the relation between the dilute $A_3$ model and $E_8$, because in section 4.2 it says that the $E_8$ Rogers-Ramanujan identity can be proved by studying the dilute $A_3$ Even the ‘ordinary’ Rogers-Ramanujan identities are pretty terrifying. The $E_8$ version, which you can see on page 10 of Pearce’s paper, is even scarier. Clearly there’s quite a big web of deep mathematics at work here, and I probably don’t have the energy to penetrate it — at least not very quickly! But I’d love to hear from anyone who understands anything about this. 
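One piece of this that is at least easy to check numerically is the statement quoted above that the particle masses are proportional to the components of the Perron-Frobenius vector of the $E_8$ Cartan matrix. Here is a quick sketch of my own (plain NumPy, not from any of these papers); the ratio of the two lightest masses comes out as the golden ratio, which matches the mass ratio reported in the experiment:

```python
import numpy as np

# E8 Dynkin diagram: a chain of seven nodes with an eighth node
# attached to the fifth node of the chain (arms of lengths 4, 2, 1)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]
A = np.zeros((8, 8))
for i, j in edges:
    A[i, j] = A[j, i] = 1
cartan = 2 * np.eye(8) - A

# the Perron-Frobenius vector of the Cartan matrix is the eigenvector
# for its smallest eigenvalue; its components can all be taken positive
vals, vecs = np.linalg.eigh(cartan)     # eigenvalues in ascending order
pf = np.abs(vecs[:, 0])

masses = np.sort(pf) / np.sort(pf).min()   # normalize the lightest mass to 1
print(masses)
# masses[1] is the golden ratio, 1.618...
```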
Posted by: John Baez on January 9, 2010 4:57 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) The following review article, as well as the book mentioned above, describes Zamolodchikov’s work: Giuseppe Mussardo, Off-critical statistical models: Factorized scattering theories and bootstrap program. Physics Reports 218 (1992). Pages 215-379. Abstract. An expansion on your remarks concerning the experimental situation: The E[8] structure is not seen when the transverse magnetic field is scanned through its critical value - one must fix the transverse field at its critical value, and turn on a longitudinal field as well. The transverse-field-only situation is shown in Figure 2e of the Science paper, where only a single peak is seen. What happens when the longitudinal field is added is shown in figure 4, where the first two of the eight particles show up as peaks with the correct mass ratio. The transverse magnetic field in the spin chain model corresponds to the temperature variable in the Ising model; the longitudinal field in the spin chain model corresponds to the external magnetic field in the Ising model. The critical point of the Ising model occurs when T=T[c], H=0, and corresponds to the M(3,4) minimal model of conformal field theory. It turns out that there are two integrable massive perturbations of this theory: One corresponds to moving T away from T[c]; the other corresponds to turning on an external magnetic field. That the magnetic perturbation results in an E[8] structure is something that I believe is still not fully understood. The bootstrap approach described in the Mussardo article is one way to see how it emerges. It is presumably also related to the fact that M(3,4) can be obtained from a coset construction involving E[8]. Finally, it should be noted that the integrable models referred to above are continuum models, related to the scaling theory of the 2-D Ising model.
Neither the Ising model on the lattice nor the XX spin chain with a transverse field remains integrable when the longitudinal field is turned on. The dilute A[3] model is an integrable lattice model in the same universality class as the Ising model, but which does contain a field that breaks the up-down symmetry of the spins. Posted by: Will Orrick on January 9, 2010 5:53 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Will: thanks a lot for your corrections and explanations! I’ve massively rewritten week289. Do you see more mistakes? One basic thing I want to check: when you speak of a ‘longitudinal’ magnetic field, do you mean one pointing along the same axis the spins like to point along before any magnetic field at all is applied? I think so, from my reading of the Coldea paper. As you can see, I’m engaged in one of my favorite hobbies: learning stuff by making mistakes in public and letting experts correct them! Posted by: John Baez on January 9, 2010 9:10 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Looks good to me! In regard to your question about longitudinal field, it is indeed oriented along the direction in which the spins prefer to point before to the application of the transverse field. The spin chain with only a transverse field has Hamiltonian $H = \sum_i (-J S^z_i S^z_{i+1} - h S^x_i)$. The critical value of the transverse field is $h_c=J/2$. The model with $E_8$ structure, $H_ {E_8}$, is obtained by fixing $h=h_c$ and adding a small longitudinal field, which means adding the term $-\sum_i h_z S^z_i$ to $H$. The unperturbed Hamiltonian $H$ defines an integrable model, in the sense that there is an infinite set of operators that commute with $H$. The connection with the 2-D zero-field Ising model solved by Onsager is that $H$ commutes with the transfer matrices of that model. Unfortunately, the perturbed Hamiltonian, $H_{E_8}$ is not integrable in the same sense. 
The perturbed conformal field theory to which it corresponds is, however, integrable. This means that the $E_8$ symmetry manifests itself only approximately in the lattice model, whereas it is exact in the perturbed conformal field theory. Luckily, there is a different lattice model, the dilute $A_3$ model, in which the $E_8$ symmetry is exact. Posted by: Will Orrick on January 10, 2010 7:37 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Hi John. I haven’t had time to read your new post carefully, which I look forward to doing, but I did notice one mistake that I think I should mention right away: the normalized chains on a topological group is NOT a cocommutative dg Hopf algebra. Even with rational coefficients, the cocommutativity holds only up to homotopy, actually up to an infinite family of homotopies: the normalized chain complex of any space admits an E_\infty-coalgebra structure. This is the source of the Steenrod algebra action on the cohomology of the space. One can also say that the comultiplication on the normalized chain complex is a strongly homotopy comultiplicative map, i.e., a morphism of chain coalgebras up to strong homotopy. This is somewhat less strong than saying that there is a full E_\infty-coalgebra structure, but is often sufficient for building interesting algebraic models of topological spaces. I’ve enjoyed your introduction to rational homotopy theory! Posted by: Kathryn Hess on January 9, 2010 2:05 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Oh, right. I should have realized this. I’ll read some more and fix this part. Posted by: John Baez on January 9, 2010 4:07 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Back from Saturday shopping now, so I’ve had time to read more carefully what you wrote and to think a bit more.
The key references for understanding when the chains on a topological group are weakly equivalent to the universal enveloping algebra of a dg Lie algebra are: –David Anick’s article on Hopf algebras up to homotopy, which was in the Journal of the AMS in 1989; –Steve Halperin’s article on universal enveloping algebras in JPAA 83 (1992); –Jonathan Scott’s article on Hopf algebras up to homotopy in AGT 5 (2005). Posted by: Kathryn Hess on January 9, 2010 4:19 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I’m wondering if something like this works: Problem: The rational chains on a topological space form a dg coalgebra that’s not cocommutative — it’s only an $E_\infty$-coalgebra. Sullivan was able to fix the corresponding problem for rational cochains: Problem: The rational cochains on a topological space form a dg algebra that’s not commutative — it’s only an $E_\infty$-algebra. Namely, he found a commutative dg algebra $A(X)$ of ‘rational differential forms’ for a space $X$, which is presumably equivalent to the $E_\infty$-algebra of rational cochains on $X$. So, can we dualize $A(X)$ and get a dg coalgebra that’s equivalent to the $E_\infty$-coalgebra of rational chains on $X$? Here we should heed the whispered warnings of the ancestors: never dualize unless you absolutely need to. The dual of an infinite-dimensional Hopf algebra isn’t usually a Hopf algebra. $A(X)$ is infinite-dimensional, so we probably don’t want to take its dual. But there’s a standard solution: the restricted dual of a Hopf algebra is again a Hopf algebra. The idea is to use the largest subspace of $A(X)^*$ such that $m^* : A(X)^* \to (A(X) \otimes A(X))^*$ actually maps this subspace into $A(X)^* \otimes A(X)^*$. This subspace — people call it $A(X)^\circ$ — will be a cocommutative dg coalgebra. Is this quasi-isomorphic to the rational chains on $X$? Are they equivalent as $E_\infty$-coalgebras? Or does some other trick like this work?
Posted by: John Baez on January 9, 2010 6:32 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I don’t know if the trick you suggest of considering the restricted dual of A(X) would work. I’m a bit skeptical, but that may just be my indoctrination by John Moore speaking. If you want to see a clear and complete explanation of the relationship between the Sullivan and Quillen models in rational homotopy theory, I strongly suggest you take a look at “Rational homotopy models and uniqueness” by Martin Majewski, which was published as an AMS Memoir in 2000. His argument proceeds (very) roughly as follows. Let X be a 1-connected space (resp., a 1-reduced simplicial set). By Anick’s theorem (from the paper I cited in my earlier comment), the rational cubical chain complex of the Moore loops on X (resp., the normalized rational chain complex of the Kan loop group of X) can be rigidified to a strictly cocommutative dg Hopf algebra, of which one can take the primitives. Majewski proves that the dg Lie algebra thus obtained is weakly equivalent both to Quillen’s dg Lie algebra model of X and to the dg Lie algebra canonically associated to Sullivan’s minimal model of X. (There’s a formula in Halperin’s paper, for example, for this associated dg Lie algebra.) The key concept operating here is Koszul duality between the commutative operad and Lie operad, over the rationals… Posted by: Kathryn Hess on January 9, 2010 8:28 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Thanks again, Kathryn. I think an argument roughly like the one you sketch may also be lurking somewhere in the Félix–Halperin–Thomas book. 
For example, in Section 26 they say (I’ll paraphrase it): The main objective of this section is to show that the isomorphism (26.1) is induced from a chain algebra quasi-isomorphism between a free Lie model for a rational homotopy type $X$ and $C_*(\Omega(X))$ (that is, the rational chains on the based loop space of $X$). This is a result of Majewski. Its significance is due to a theorem of Anick, who showed directly the existence of a unique quasi-isomorphism class of free chain Lie algebras admitting such a quasi-isomorphism. However, these were potentially different from the Lie models constructed via Sullivan’s functor $A_{PL}$ as described here. Majewski’s result shows that they coincide. I’m willing to learn all this stuff, but it seems a bit elaborate and hard to explain, so I’m still hoping there’s a quick way to get ahold of a cocommutative dg Hopf algebra that will substitute for the chains on a topological group. If Sullivan could solve the ‘commutative cochain problem’ over the rationals, why can’t we solve the ‘commutative chain problem’ in an equally elegant way? Posted by: John Baez on January 9, 2010 9:28 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) You’re right, John, that FHT are referring to the same result by Majewski that I sketched above. The only direct – and, in my opinion, very elegant – solution of the “cocommutative chain problem” of which I am aware is in Quillen’s landmark 1969 Annals paper on rational homotopy theory. There he considers a sequence of six (!) pairs of adjoint functors that link the category of 1-connected spaces (where weak equivalences are rational homotopy equivalences) to the category of 1-connected, cocommutative rational dg coalgebras and shows that all of the pairs are Quillen equivalences. When I was a grad student, this sequence of Quillen equivalences looked very intimidating, but I realize now that taken one by one, they’re not so bad.
Given a 1-connected space X, you start by taking its singular simplicial set and throwing away all the simplices except the basepoint in degrees 0 and 1. You then apply the Kan loop group functor (the simplicial analogue of the based loop space functor) to S(X), obtaining an honest simplicial group GS(X). The next step is still somewhat mysterious to me: you take the group ring and complete it with respect to powers of its augmentation ideal, obtaining a “reduced, complete simplicial Hopf algebra”, \hat Q[GS(X)], which happens to be cocommutative, since the group ring is cocommutative. Taking degreewise primitives, you then get a reduced simplicial Lie algebra Prim(\hat Q[GS(X)]). We’re getting close now! At the next stage, we finally apply the normalized chains functor, to get Quillen’s dg Lie model of X: N(Prim(\hat Q[GS(X)])). (So it seems that the key is to wait until the very end of the process to pass from the simplicial world to the chain world…) Finally, to get a cocommutative dg coalgebra model for X, we use a slight generalization of a functor first defined by Koszul for computing the homology of a Lie algebra, which always gives rise to a cocommutative dg coalgebra. Posted by: Kathryn Hess on January 10, 2010 11:13 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Thanks yet again, Kathryn! By the way, in case anyone is interested in having a look, Quillen’s Annals paper is here, and the chain of functors listed by Kathryn is on top of page 211. Posted by: John Baez on January 13, 2010 8:13 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) If Sullivan could solve the “commutative cochain problem” over the rationals, why can’t we solve the “commutative chain problem” in an equally elegant way? Maybe he was well advised not to worry so much about the whispered warnings of the ancestors: never dualize unless you absolutely need to. Kathryn Hess kindly sketched the dual Quillen route.
Thanks! I hadn’t been aware of that. Despite the claim that this is very elegant, I come away with the impression that this route loses the conceptual transparency of what’s going on. Taken at face value, it looks like black magic. John’s general strategy in the TWF is to go to dg-Lie algebras, and is advertised as a massive generalization of Lie theory. But in fact dg-Lie algebras are the strictification of general $\infty$-Lie algebras aka $L_\infty$-algebras. My feeling is that this extra strictification demand makes the construction more involved than it naturally is. We may think of $dgAlg^{op}$ (in non-negative degree) with its standard model structure as being already a presentation for $\infty LieAlgebroids$: the fibrant-cofibrant objects are precisely the $L_\infty$-algebroids, and the fibrant-cofibrant objects with the point in degree 0 are precisely the $L_\infty$-algebras (at least if everything is degreewise finite, otherwise one has to say this more carefully). From this perspective Sullivan’s construction is immediate and elegant in that it directly produces an object in there. The further steps of fibrantly replacing that object and then further finding strictified equivalents may be done if desired (for computations), but is not what the gods asked us to do. Posted by: Urs Schreiber on January 13, 2010 11:11 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Maybe I’m missing the point of contention, but I think Quillen’s homological construction is completely intuitive: we are simply taking the Lie algebra of the “group” which is the loop space of X. In other words, starting from a pointed space X we take the corresponding group $\Omega X$. From a group we pass to the enveloping algebra, ie distributions supported at the identity, completed.
The topological analog of distributions is chains (dual to functions=cochains), so Quillen’s completed chains construction is exactly the completed enveloping algebra. From the (completed) enveloping algebra we recover the Lie algebra as its primitive elements. In the $\infty$-context this is all we’re doing - ie we internalize the identifications of simplicial sets and spaces and Dold-Kan. Again maybe this latter part is the point of the discussion, in which case sorry to barge in. But the overall scheme of Quillen’s argument, seen from a modern POV, is simply classical Lie theory (hence its intuitive appeal). Posted by: David Ben-Zvi on January 13, 2010 4:55 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Maybe I’m missing the point of contention, but I think Quillen’s homological construction is completely intuitive: Thanks, David, I see, that’s helpful. So the strictification is all in the first step, of course, where $\Omega X$ is realized as a simplicial group. The topological analog of distributions is chains (dual to functions=cochains), so Quillen’s completed chains construction is exactly the completed enveloping algebra. I see, thanks. Posted by: Urs Schreiber on January 13, 2010 5:44 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Urs wrote: John’s general strategy in the TWF is to go to dg-Lie algebras, and is advertised as a massive generalization of Lie theory. But in fact dg-Lie algebras are the strictification of general $\infty$-Lie algebras aka $L_\infty$-algebras. My feeling is that this extra strictification demand makes the construction more involved than it naturally is. Of course I will talk about $L_\infty$-algebras in future weeks. But my plan here is to explain things gently and slowly. My goal is to get everyone in the world to understand this stuff. And I don’t think most people can grok $L_\infty$-algebras until they’re pretty comfortable with dg Lie algebras.
So, I wanted to give an easy explanation of why the loop group of a space has a dg Lie algebra. Ultimately of course you’re right: it’s good to ‘accept weakness’ — not try to get equations to hold on the nose. Trying to always work with strict algebraic structures is constantly flirting with danger, and ultimately it’s quite stupid. I’ve spent years trying to convince everyone in the world that equations are evil. So it may seem inconsistent to take the opposite approach. But it’s a fun expository challenge to see how far one can go with strict structures! There will come a point at which it’s either impossible, or so inconvenient that everyone can see the need for weakness. Indeed, this question makes a nice test case: what’s the most natural way to get a dg Lie algebra from a rational homotopy type? Or is it just a bad idea? My original attempt, currently still uncorrected in week289, ran afoul of strictness issues. I wanted rational chains on a topological group $G$ to form a dg cocommutative Hopf algebra. For this, I needed a model of rational chains on a space that forms a cocommutative coalgebra. But the rational simplicial chains only form an $E_\infty$ coalgebra. I still hope there’s a dual version of Sullivan’s ‘rational differential forms’ construction that does the job. I even think I see roughly how it goes. If someone knows it can’t work, I’d love to know why! But if it does work and someone already knows how, I’d love that even more! If this approach doesn’t work, the approach that Kathryn sketched here also sounds nice. As she points out, the trick here is to pass from the simplicial world to the chain complex world at the very last minute, after you’ve turned your cocommutative Hopf algebra object into a Lie algebra object. Hopf algebras involve both operations and co-operations. Lie algebras involve only operations. So, the normalized chains functor treats Lie algebra objects better. At least that’s my impression as to why this works.
Posted by: John Baez on January 13, 2010 7:51 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I briefly collected some of the material about the Quillen model mentioned above at $n$Lab:rational homotopy theory. Posted by: Urs Schreiber on January 14, 2010 8:52 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) heat it up or cool it down “adiabatically” - that is, while keeping it in thermal equilibrium all along. That’s not “adiabatically”, that’s “reversibly”! “Adiabatically” means “without heat flow across the boundary between the system and its environment”. Posted by: Tim Silverman on January 9, 2010 4:57 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Posted by: John Baez on January 9, 2010 5:05 PM | Permalink | Reply to this Martian life forms Candy Hansen writes: “The channels carved by the escaping gas are often radially organized and are known informally as “spiders.” When I saw the picture I thought the Martian south pole was inhabited by giant creatures similar to basket sea stars. Posted by: RodMcGuire on January 9, 2010 8:17 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) John wrote: Okay, back down to earth. Last week I began to sketch an analogy between various kinds of physical systems, based on general concepts of “displacement” and “momentum”, and their time derivatives, called “flow” and “effort”: I was so busy writing about global energy that I almost missed this thread about my one-time favorite subject. Try my website on this subject. Anyway, what are some other examples of physical systems where we have a notion of “effort” and a notion of “flow”, such that effort times flow equals power? Here are two: Thermodynamics: entropy // temperature A disadvantage of this approach is that entropy flow is not a conserved current, unlike electric current.
Electric circuit equivalents for heat conduction are quite useful in engineering; I use them a lot myself. Like you say: There are also weaker analogies to subjects where effort times flow doesn’t have dimensions of power. The two most popular are these: Heat Flow: heat flow // temperature An alternative, in which we have entropy production as the generating functional: Thermodynamics: heat flow // Negcitemp where “Negcitemp” = -1/kT. “Negcitemp”, a word invented by Zemansky, is just a transformed temperature, that in many contexts is actually nicer than temperature. The equations for electrical circuits can be derived from the “Principle of Least Dissipation”. This is a bit like the Principle of Least Action. You minimise the sum of all dissipations in the resistors, with the voltages as variables. This solves the circuit. If your circuit is using the Principle of Least Dissipation, then the pairs of quantities all have dissipation, or power, as their product. You can rewrite things so that the circuit is defined in terms of the Principle of Least Action, by making what I call a “space time circuit”. I explain this on my website. In a space time circuit, the flow/effort variables become displacement/momentum variables. Resistances in the time direction become negative. The pairs of quantities are now the canonical conjugates. Another subtlety: If you refine your discretization, the Voltages generally remain the same: They are “0-chunks”. Voltage differences across edges are “1-chunks”. Currents are “[D-1] chunks”, so that the dissipation in the resistor is a D-chunk: It scales as a D-volume, where D is the dimension of space. As you say, there is a huge amount of fun connected with this. Looking forward to it! Posted by: Gerard Westendorp on January 9, 2010 11:22 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Hi, Gerard! It’s nice to hear from you again.
I was so busy writing about global energy that I almost missed this thread about my one-time favorite subject. What are you writing about, and why are you doing it? I can’t tell if you mean ‘the problem of finding a globally well-defined notion of energy’, which is a subtle issue in general relativity, or ‘the problem of getting enough energy for our civilization without destroying the globe’. An alternative, in which we have entropy production as the generating functional: Thermodynamics: heat flow // Negcitemp where “Negcitemp” = -1/kT. The concept of negcitemp seems to show up in that article about the Legendre transform that I mentioned. This is why I wrote “But it seems to be using a slightly different analogy than the one I was just explaining… so my confusion is not eliminated.” What’s going on with these different heat flow analogies? Why are there so many — and how are they related? The equations for electrical circuits can be derived from the “Principle of Least Dissipation”. This is a bit like the Principle of Least Action. I definitely plan to talk about this in future Weeks! If you refine your discretization, the Voltages generally remain the same: They are “0-chunks”. Voltage differences across edges are “1-chunks”. Currents are “[D-1] chunks”, so that the dissipation in the resistor is a D-chunk: It scales as a D-volume, where D is the dimension of space. I think what you’re calling a $D$-chunk is what mathematicians would call a $D$-cochain, or possibly a $D$-chain. (These are two different but closely related concepts.) Indeed, some of the pictures on your website suggest that you share an interest in $D$-cochains with Eric Forgy and Robert Kotiuga, whose work on discrete versions of electromagnetism featured strongly in week288 and the ensuing blog discussion! And, I definitely plan to talk about this side of things as well!
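By the way, your ‘Principle of Least Dissipation’ is easy to check in a toy example. Here’s a rough Python sketch with a made-up two-resistor voltage divider: minimizing the total dissipated power over the single free node voltage reproduces the answer from Kirchhoff’s current law.

```python
def dissipation(V_B):
    """Total power in a two-resistor divider with one free node B."""
    V_A, V_C = 1.0, 0.0          # fixed boundary voltages
    R_AB, R_BC = 1.0, 2.0        # illustrative resistances
    return (V_A - V_B) ** 2 / R_AB + (V_B - V_C) ** 2 / R_BC

def minimize_1d(f, lo, hi, iters=100):
    """Crude ternary search for the minimum of a convex function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

V_B = minimize_1d(dissipation, 0.0, 1.0)

# Kirchhoff's current law for the same divider: (V_A - V_B)/R_AB = V_B/R_BC.
V_B_kcl = 1.0 * 2.0 / (1.0 + 2.0)
print(V_B, V_B_kcl)   # both ≈ 2/3
```

Of course for one free node this is just a convex quadratic; the same minimization with one variable per free node solves any resistor network.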
Posted by: John Baez on January 10, 2010 3:20 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) About World energy: I was writing this. The reason I made this was that I wanted to check certain things for myself. Thinking about dQ = TdS - pdV: In this case, S’ seems like a flow, just like V’. But like I wrote, if S is not conserved, Kirchhoff’s current law doesn’t work, so circuits don’t either. But in the reversible case, it is different: S is conserved. In acoustics, temperature as well as pressure fluctuate together, dQ = TdS - pdV = 0 applies. However, they are coupled by a state equation, like pV=RT. The state equation is used to eliminate S and T, so that the dynamic equations end up using just p and V. I can’t figure out yet if there is some general principle behind this. Legendre transform and thermodynamics: Still thinking… Chain complexes: I haven’t caught up with the terminology yet (cup product, graded commutative etc.) but I’ll assume for the moment that D-chunks are D-cochains. I think an example of Eric’s “Cochain problem” is: How do you define the Poynting vector in a 2-complex? Or in the acoustic case, the Power flux. I’ve thought about this before (I remember a correspondence with Eric about it some years ago) but haven’t written anything about it yet. I call it “Cut functions”, but I guess a nicer name should be possible. Start with the acoustic case. Think of an acoustic circuit, that has a certain solution. Any solution has SUM((p_i-p_j)V_ij) = d/dt (SUM(p_i^2 + V_ij^2)) = 0. (Assume unit impedance for simplicity) Now, cut the circuit in 2 parts, along a certain line (I’ll use a 2D circuit here). The 2 half circuits now do not individually satisfy d/dt (SUM(p_i^2 + V_ij^2)) = 0. It would be nice if some quantity (W) integrated over the “cut” has the property: d/dt (SUM(p_i^2 + V_ij^2)) = - SUM (W). Let’s say an edge that is cut in 2 has an “inner” vertex, and an “outer” vertex.
The vertices that are not connected to a cut are either “left” or “right” depending on which half of the original circuit they belong to. I believe you can prove: d/dt (SUM_left(p_i^2 + V_ij^2)) + SUM_inner(p_i V_ij) = 0 So (Assuming j is outer), W_ij_inner = p_i V_ij, W_ij_outer = p_j V_ij So the required power flux quantity is situated not on a vertex, not on an edge, but on a vertex-edge pair. Perhaps confusing is that for each point in space, there are 2 times more components of the Power flux than components of flow. Which one you need depends on which side of a potential cut you are. Another subtlety is that W is not “Gauge invariant”: It depends on the absolute value of p. It is still OK to add a constant p to all vertices, as long as we do it to both sides of the cut. In the electromagnetic case, you get a Poynting vector component on an edge-loop pair. Again, there are more of these than components of other vectors or n-forms. The one you need again depends on how you choose to cut the circuit. hmm… It starts to make sense… Posted by: Gerard Westendorp on January 10, 2010 11:40 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Another thing: I would really like it if I could understand angular momentum in circuits. Acoustic circuits describe a spin 0 particle (You can make an analog of the Klein Gordon equation with mass too, as I did on my web site). The Maxwell equations, with their 2-complex “circuit”, describe spin 1 particles. I don’t see how this fits in the circuits. So: Is there a meaningful analog of angular momentum in circuits? I don’t know the answer, but I’ll try to write down the direction I’ve been thinking in so far.
Here is the acoustic circuit: Because the voltage integrated along any closed loop is zero, d/dt (SUM_loop (Momentum)) = 0 Also, if you integrate along a line from a boundary to a boundary, d/dt (SUM_line (Momentum)) = p_start - p_finish Intuitively, I feel momentum in a loop is related to angular momentum. In a circuit, we do not have an “r” to put in the (r × p) for angular momentum. But the “r” is not very nice anyway, it is frame dependent, and is very hard to generalise to curved space. Think about the circuit analog of a pirouette: You start with momentum along a certain loop. Although in acoustics, this momentum can never leave the loop, let’s assume there is some mechanism that reversibly can transfer it to a loop that is “inside” the first loop. This second loop is smaller than the first, so you can imagine that the momentum per edge will be larger than along the first loop, so that the momentum integrated along it is equal to the momentum along the first loop. So perhaps the “r” gets hidden in the “loop length”. But wait: In a pirouette the momentum *increases* when the skater folds in her arms and legs. Hmm… where does the energy come from? Actually, I’m off to go skating myself. (Took a day off) I’ll think about pirouettes, but I will refrain from doing them myself. Posted by: Gerard Westendorp on January 12, 2010 9:10 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) In week289 I asked: I’m also curious about lots of other things. For example: in classical mechanics it’s really important that we can define “Poisson brackets” of smooth real-valued functions on the cotangent bundle. So: how about in thermodynamics? Does anyone talk about the Poisson bracket of temperature and entropy, for example? In response, someone suggested that maybe Beris and Edwards use something of this sort in Thermodynamics of Flowing Systems to get the Navier–Stokes equation.
I’ll try to remember to take a look — but I am still hoping someone can help me out some more here. Posted by: John Baez on January 9, 2010 11:35 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Since the overall universe cannot be in equilibrium, I googled the phrase “nonequilibrium Poisson” and found this for you: Constraints in Nonequilibrium Thermodynamics by Hans C. Ottinger in J. Chem. Phys. 130, 114904 (2009): “We elaborate how holonomic constraints can be incorporated into the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) framework of nonequilibrium thermodynamics. Dirac’s ideas for constructing constrained Poisson brackets are extended to dissipative brackets. The construction is presented such that it can be put into practice most readily. We illustrate the procedure by developing a symmetric thermodynamic description of diffusion in multicomponent systems and, as a further example, we impose an incompressibility constraint. As a consequence of its more elaborate and restrictive structure, GENERIC removes the ambiguities occurring in the classical thermodynamics of irreversible processes when one works with redundant variables.” Posted by: Charlie Stromeyer on January 10, 2010 12:05 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) “We elaborate how holonomic constraints can be incorporated into the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) framework of nonequilibrium thermodynamics.” All this is collected into an interesting and comprehensive, Poisson-bracket-based theory in Oettinger’s book: Hans C. Oettinger, Beyond Equilibrium Thermodynamics, Wiley 2005. Posted by: Arnold Neumaier on January 13, 2010 8:37 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) In week289 John Baez asked: “So: how about in thermodynamics?
Does anyone talk about the Poisson bracket of temperature and entropy, for example?” Entropy is a canonical variable in the Hamiltonian description of fluid dynamics, see, e.g., R. Salmon, “Hamiltonian Fluid Dynamics”, Ann. Rev. Fluid Mech. 20 (1988) 225. “But if anyone knows a clear, detailed treatment of the analogy between classical mechanics and thermodynamics, focusing on the Legendre transform, please let me know!” A recent article that you might find interesting (although perhaps not exactly what you are seeking in terms of the Legendre transform) is S.G. Rajeev, “Quantization of Contact Manifolds and Thermodynamics”, Annals Phys. 323 (2008) 768-782, http://arxiv.org/abs/math-ph/0703061. Posted by: Klaus Bering on January 9, 2010 11:51 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Thanks for the references, Klaus. The paper by Rajeev definitely proves I’m not the only one who had the crazy idea of trying to ‘quantize’ the conjugate variables in thermodynamics, or who wondered what the relevant uncertainty principle might then be. Quoting: In classical thermodynamics, as in classical mechanics, observables come in canonically conjugate pairs: pressure is conjugate to volume, temperature to entropy, magnetic field to magnetization, chemical potential to the number of particles etc. An important difference is that the thermodynamic state space is odd dimensional. Instead of the phase space forming a symplectic manifold (necessarily even dimensional) the thermodynamic state space is a contact manifold, its odd dimensional analogue. Upon passing to the quantum theory, observables of mechanics become operators; canonically conjugate observables cannot be simultaneously measured and satisfy the uncertainty principle $\Delta p \Delta q \ge \hbar.$ Is there an analogue to this uncertainty principle for thermodynamically conjugate variables?
Is there such a thing as ‘quantum thermodynamics’ where pressure or volume are represented as operators? The product of thermodynamic conjugates such as $\Delta P \Delta V$ has the units of energy rather than action. So if there is an uncertainty relation $\Delta P \Delta V \ge \hbar_1$, it is clear that $\hbar_1$ cannot be Planck’s constant as in quantum mechanics. And he notes: “There is already an uncertainty relation for statistical rather than quantum fluctuations of thermodynamical quantities, where the analogue of $\hbar$ is $k T$”. But there are a couple of strange things here, even before I read the rest of the paper! First, the variables $P$ and $V$ are of the type that in general systems theory are called ‘effort’ and ‘displacement’, while $p$ and $q$ are of the type ‘momentum’ and ‘displacement’. It’s the latter sort where the product has units of action, and where $[p, q] = -i \hbar$ in the quantum theory. The analogous quantities would not be pressure and volume, but rather ‘pressure momentum’ and volume — as I explained in week288. Secondly, it’s not only thermodynamics where contact geometry becomes important. It’s also important in classical mechanics! Anyway, this paper looks very interesting — I’m just mentioning two places where I think it might be interesting to follow the analogies more religiously than Rajeev seems to be doing. There’s a lot more to say. But I should read the paper. Posted by: John Baez on January 10, 2010 7:43 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Another piece for your chart can be found in algorithmic information theory. Let $x[n]$ be the first $n$ bits of $x$. Let $H(x)$ be the length of the smallest program whose output is $x$. Let $h(p) = 1$ if the program $p$ halts, 0 otherwise. Let $Z(T) = \sum_p \exp\left(\frac{-|p|}{T \ln 2}\right)$.
Then the expectation value of $h$ is $\langle h \rangle(T) = \frac{\sum_p h(p) \exp\left(\frac{-|p|}{T \ln 2}\right)}{Z(T)};$ Chaitin’s Omega number is $\langle h \rangle(1)$. It’s also true that $\Delta H(\langle h \rangle(T)[n]) = T \cdot \Delta n.$ This is like the chemical potential idea: $T$ is the number of bits of information it takes to produce $\Delta n$ more bits of the expansion of $\langle h \rangle(T)$. Posted by: Mike Stay on January 10, 2010 1:01 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Replacing ‘energy’ by ‘money’ in your economic example is bold. So far there is no empirical evidence that money is invariant over time. On the contrary, as long as money can simply be printed, it is not invariant. However, I think your intuition still remains true. Christian Schwarz from Duisburg University argues that ‘displacement’ should be ‘price’ and from the empirically justified ‘demand invariance under price-scaling’ one gets that the ‘momentum’ (as the space shift invariant) becomes ‘demand’ (as the invariant for the above symmetry). One can even derive commutation relations and uncertainty equations with this approach. More can be found in my mathematics-blog filed under microeconomics. Posted by: Uwe Stroinski on January 11, 2010 10:09 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I wonder if the number ought really to be the purchasing power of the Global economy or something; after all, money isn’t worth the paper it’s printed on (or the gold it’s pressed into); it’s only as good as what it will buy. It’s still a tricky idea — cash is coupled to many fields not directly interchangeable, in ways that vary across space and time.
Of course, in thermodynamics, Gibbs’ free energy is often a more useful number than the raw total energy of a system; and that’s not an invariant either, but it tells you what is reversible and what is not. Posted by: Jesse McKeown on January 12, 2010 6:05 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) One approach to comparing economies across nations is that used by the Economist mag - it’s called the Big Mac index - no kidding! Posted by: jim stasheff on January 12, 2010 1:49 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Posted by: Jonathan Vos Post on January 13, 2010 12:32 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Hmmm… I am always excited when I see cool analogies like these developed! But I feel like there should be more columns in the table - to include explicitly dissipative terms. I realize that non-conservative terms may not fit so well into a program directed toward symplectic geometry, but, at least in the thermal and chemical systems, they seem unavoidable to me because in these systems dissipative effects predominate over “inertial” effects. In all but the most exotic chemical reactions, the approach to equilibrium is characterized by concentrations whose differences from their equilibrium values decay (often exponentially) without oscillating (overshooting equilibrium like a damped oscillator would). In fact, the damped mechanical harmonic oscillator (with displacement coordinate x, mass m, spring constant k, and a frictional force F = -b x’ proportional to velocity) serves as a good reference point for what I mean. (In the spirit of analogy, you could use a circuit having inductance, capacitance, and resistance, of course!) As you know, a classical equation of motion for this system, when unforced, is m x” + b x’ + k x = 0. When m k < (b/2)^2, the system undergoes no oscillations, and we say it is overdamped.
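The overdamped condition mentioned here, m k < (b/2)^2, is easy to check numerically. The sketch below (with arbitrary illustrative parameters, not taken from the post) integrates the unforced equation of motion m x″ + b x′ + k x = 0 and tests whether the displacement ever overshoots equilibrium:

```python
def simulate(m, b, k, x0=1.0, v0=0.0, dt=1e-3, t_max=20.0):
    """Integrate m x'' + b x' + k x = 0 with semi-implicit Euler."""
    x, v = x0, v0
    xs = []
    for _ in range(int(t_max / dt)):
        a = -(b * v + k * x) / m   # acceleration from the equation of motion
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def oscillates(xs):
    # True if the displacement ever crosses zero (overshoots equilibrium).
    return any(x1 * x2 < 0 for x1, x2 in zip(xs, xs[1:]))

# Underdamped: m k > (b/2)^2, so the mass overshoots and rings down.
under = simulate(m=1.0, b=0.2, k=1.0)
# Overdamped: m k < (b/2)^2, so the displacement decays without crossing zero.
over = simulate(m=1.0, b=4.0, k=1.0)
```

The underdamped run crosses zero repeatedly, while the overdamped run decays monotonically toward equilibrium, matching the chemical-relaxation behavior described above.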
In fact, when this inequality is pronounced, the restoring force F = -k x is proportional to a VELOCITY rather than an ACCELERATION, and the mass plays essentially no role in the solution except when the initial velocity is very large. Same thing goes for electrical circuits, where it is often appropriate to ignore inductances as negligible. I feel like this is more the case in thermal systems (with temperature gradients) or chemical systems (with concentration or chemical potential gradients). In fact, it is hard for me to see how a “temperature momentum” or “chemical momentum” would arise, as these would give rise to an overshooting of the natural equilibrium. There are “oscillating reactions” in which the chemical potential of a substance undergoes oscillations, but as I said, these are very rare and often the result of something crazy like autocatalysis, where a product of the reaction actually catalyses the reaction itself and speeds it up in proportion to the product’s concentration. Having said all that, there are really cool analogies to be made with dissipative systems! Ohm’s Law: J = - G dV/dx, in which a current density is proportional to a gradient of electric potential (an E field), Fick’s Law: J = - D dC/dx, in which a material flux is proportional to a concentration gradient, Fourier’s Law: 1/A dQ/dt = - k dT/dx, which relates heat flux through an area to a temperature gradient (and defines the thermal conductivity), and a law (Newton’s?) for viscosity, which relates a frictional force to a transverse velocity gradient: 1/A F[x] = - η dv[x]/dy. Here, the “flux” is a flux of transverse momentum through the area A. I’m sure there are others, but these are the only ones I can think of right now. All of these laws relate a type of “flux,” a type of “gradient,” and a type of “conductivity.” Posted by: Garett Leskowitz on January 13, 2010 8:40 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Hi, Garett! 
Remember telling me about the Y-$\Delta$ transformation for electrical circuits? I still don’t understand how this is related to the Yang–Baxter equation. But I plan to say a lot about the category-theoretic approach to electrical circuits and their analogues in other branches of physics and engineering. And the Y-$\Delta$ transformation is part of the story I’ll tell. So, thanks for telling me about it! I realize that non-conservative terms may not fit so well into a program directed toward symplectic geometry… Not so! I know what you mean: Hamiltonian mechanics and symplectic geometry are typically aimed at understanding systems without dissipation. But I’ll definitely be talking a lot about dissipative systems in the Weeks to come — indeed I already brought in RLC circuits and the damped harmonic oscillator back in week288. And it turns out that purely dissipative systems have their own different relation to symplectic geometry! So, thanks for listing a bunch of purely dissipative systems! As you may know, circuits built from resistors and capacitors are described by systems of first-order ordinary differential equations… but if you build a big grid of resistors and capacitors, its behavior can approximate that of the heat equation. So far, I mainly see symplectic geometry raising its pretty head in an even simpler situation: DC circuits made entirely of resistors — or in the continuum limit, Laplace’s equation. Systems like this can be described by a variational principle — the ‘principle of least power dissipation’. And that’s how symplectic geometry gets into the game. There’s a lot more to say… and I’ll try to say it all in future Weeks! Posted by: John Baez on January 13, 2010 5:40 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) I look forward to hearing much more! 
Yes, I remember discussing the Y-Delta transformation with you in the context of “duals.” There was something I forgot to mention to you that I noticed around that time that might be relevant. There’s a very common bit of circuitry around called a Wheatstone bridge. (I’m sure it is on Wikipedia). I’ve never seen it drawn this way, but the six circuit elements in this circuit are connected in the same way as the edges of a tetrahedron! Also around that time I remember your introducing me to your idea of an “S-connector,” and I said “hey, that’s a transistor!” In an email, Peter Selinger raised some justified skepticism about this analogy. What I should have said was, “hey, that’s a passive linear current amplifier!” - which is one of the very important models of transistor behavior in common regimes of operation. On the subject of analogies, I remember a picture I saw in an electronics book from my youth that draws a parallel between current flow in a transistor and water flowing in connected channels that might be of some interest. There’s a similar sort of picture here (under “How do Transistors work?”): Posted by: Garett Leskowitz on January 13, 2010 7:05 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) That picture with the voltmeter and square grid of resistors reminds me of an interesting problem: What is the resistance between 2 points separated by (n,m)? You may think this is a function with a strong “rectangular” character, i.e. strongly correlated to (m+n). But it is surprisingly well correlated to sqrt(m^2+n^2), what you would expect for a continuous resistance sheet. One way to see this is using the fact that random walks solve networks. Computationally highly inefficient, but nice conceptually. A random walk of N steps in the n-direction gives a binomial distribution N!/(n! (N-n)!). But this tends to a Gaussian for high N.
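The Gaussian limit claimed here is quick to verify. The sketch below compares the fair-coin binomial distribution with the de Moivre-Laplace normal approximation; the exponent written in the comment is schematic, so the conventional normalization (mean N/2, variance N/4) is used instead:

```python
import math

def binom_pmf(N, n):
    # Probability of n right-steps in N fair steps.
    return math.comb(N, n) / 2.0 ** N

def gauss_approx(N, n):
    # de Moivre-Laplace: normal with mean N/2 and variance N/4.
    mu, var = N / 2.0, N / 4.0
    return math.exp(-(n - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

N = 200
max_err = max(abs(binom_pmf(N, n) - gauss_approx(N, n)) for n in range(N + 1))
```

Already at N = 200 the pointwise error is tiny compared with the peak probability of about 0.056, which is why the large-N walk looks like a continuous diffusion.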
If you include a second direction, you get approximately a product of 2 Gaussians, P ~ exp(-(n/N)^2)*exp(-(m/N)^2) = exp(-(n^2+m^2)/N^2). So combinatorics induce isotropy! Posted by: Gerard Westendorp on January 15, 2010 1:49 PM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Garett wrote: But I feel like there should be more columns in the table - to include explicitly dissipative terms. By the way, we’ll see we don’t need more columns to describe dissipative terms: even equations that include dissipation, e.g. those for an RLC circuit, can be described in terms of the variables $q, \dot{q}, p,$ and $\dot{p}$. There’s a very common bit of circuitry around called a Wheatstone bridge. (I’m sure it is on Wikipedia). I’ve never seen it drawn this way, but the six circuit elements in this circuit are connected in the same way as the edges of a tetrahedron! Good point! So, the Wheatstone bridge is a special case of the $6j$ symbols, or ‘tet net’, in the monoidal category whose morphisms are circuits. I’ll describe that monoidal category in a while. Also around that time I remember your introducing me to your idea of an “S-connector,” and I said “hey, that’s a transistor!” In an email, Peter Selinger raised some justified skepticism about this analogy. What I should have said was, “hey, that’s a passive linear current amplifier!” Thanks! If I ever finish that paper with Peter Selinger, I’ll include that information. Posted by: John Baez on January 14, 2010 3:18 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Bad link. (And no, the desired URI is not hidden anywhere in the HTML source of the page.) Posted by: Toby Bartels on January 14, 2010 5:17 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Fixed, thanks!
This link should be a trip down memory lane for you, Toby… By the way, I think I was wrong to claim the Wheatstone bridge is a special case of a tet net. The tet net is a tetrahedron with edges labelled by identity morphisms of objects. The Wheatstone bridge has edges labelled by nonidentity morphisms. But there still may be something going on here! Posted by: John Baez on January 14, 2010 7:16 AM | Permalink | Reply to this Re: This Week’s Finds in Mathematical Physics (Week 289) Kostant is giving a talk about the new appearance of $E_8$ in condensed matter physics. If anyone can listen to it, please give us a report! • Bertram Kostant, Experimental evidence for the occurrence of $E_8$ in nature and the radii of the Gossett circles, Tuesday February 23, 3:00 at APM 6402, Department of Mathematics, U.C. San Diego. Abstract: A recent experimental discovery involving the spin structure of electrons in a cold one dimensional magnet points to a model involving the exceptional Lie group $E_8$. The model predicts 8 particles the ratio of whose masses are the same as the ratios of the radii of the circles in the famous Gossett diagram (going back to 1900) of what is now understood to be a 2 dimensional projection of the 240 roots of $E_8$ arranged in 8 concentric circles. The ratio of the radii of the two smallest circles (read 2 smallest masses) is the golden number. This beautifully has been found experimentally. The ratio of the radii of the other masses has been written down conjecturally by Zamolodchikov. This again agrees with the analogous statement for the radii of the Gossett circles. Some time ago we found an operator $A$ (very easily defined and reexpressed by Vogan as an element of the group algebra of the Weyl group) on 8-space whose spectrum is exactly the squares of the radii of the Gossett circles. The operator $A$ is written in terms of the coefficients $n_i$ of the highest root.
In McKay theory the $n_i$ are the dimensions of the irreducible representations of the binary icosahedral group. Our result works for any simple Lie group not just $E_8$. Posted by: John Baez on February 17, 2010 7:22 PM | Permalink | Reply to this
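The golden-ratio statement for the two smallest masses can be checked directly from Zamolodchikov's mass spectrum, in which $m_2 = 2 m_1 \cos(\pi/5)$. The snippet below only verifies this trigonometric identity; it does not compute anything from the $E_8$ root system itself:

```python
import math

# Ratio of the two smallest masses in Zamolodchikov's E8 spectrum:
# m2 / m1 = 2 cos(pi/5), which equals the golden ratio (1 + sqrt(5)) / 2.
mass_ratio = 2 * math.cos(math.pi / 5)
golden = (1 + math.sqrt(5)) / 2
print(mass_ratio, golden)
```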
Best-first Model Merging for Hidden Markov Model Induction
Results 1 - 10 of 71

- Machine Learning, 1999. Cited by 803 (17 self).
  "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve ..."

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997. Cited by 554 (14 self).
  "We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word ..."

- Advances in Neural Information Processing Systems 5, 1993. Cited by 135 (2 self).
  "This paper describes a technique for learning both the number of states and the topology of Hidden Markov Models from examples. The induction process starts with the most specific model consistent with the training data and generalizes by successively merging states. Both the choice of states to merge and the stopping criterion are guided by the Bayesian posterior probability. We compare our algorithm with the Baum-Welch method of estimating fixed-size models, and find that it can induce minimal HMMs from data in cases where fixed estimation does not converge or requires redundant parameters to converge."

- In: Int. Conf. Grammatical Inference, 1994. URL: citeseer.nj.nec.com/stolcke94inducing.html. Cited by 130 (0 self).
  "We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models ('Occam's Razor'). The general scheme is illustrated using three types of probabilistic grammars: Hidden Markov models, class-based n-grams, and stochastic context-free grammars."

- Cited by 114 (1 self).
  "Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and (d) learning complex stochastic ..."

- 2001. Cited by 107 (2 self).
  "Recent research has demonstrated the strong performance of hidden Markov models applied to information extraction -- the task of populating database slots with corresponding phrases from text documents. A remaining problem, however, is the selection of state-transition structure for the model. This paper demonstrates that extraction accuracy strongly depends on the selection of structure, and presents an algorithm for automatically finding good structures by stochastic optimization. Our algorithm begins with a simple model and then performs hill-climbing in the space of possible structures by splitting states and gauging performance on a validation set. Experimental results show that this technique finds HMM models that almost always out-perform a fixed model, and have superior average performance across tasks."

- Corpus-Based Methods in Language and Speech, 1996. Cited by 96 (0 self).
  "...we can carve off next. 'Partial parsing' is a cover term for a range of different techniques for recovering some but not all of the information contained in a traditional syntactic analysis. Partial parsing techniques, like tagging techniques, aim for reliability and robustness in the face of the vagaries of natural text, by sacrificing completeness of analysis and accepting a low but non-zero error rate. 1 Tagging. The earliest taggers [35, 51] had large sets of hand-constructed rules for assigning tags on the basis of words' character patterns and on the basis of the tags assigned to preceding or following words, but they had only small lexica, primarily for exceptions to the rules. TAGGIT [35] was used to generate an initial tagging of the Brown corpus, which was then hand-edited. (Thus it provided the data that has since been used to train other taggers [20].) The tagger described by Garside [56, 34], CLAWS, was a probabilistic version of TAGGIT, and the DeRose tagger improved on ..."

- 1996. Cited by 84 (2 self).
  "Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many machine learning applications, especially for speech recognition. Furthermore, in the last few years, many new and promising probabilistic models related to HMMs have been proposed. We first summarize the basics of HMMs, and then review several recent related learning algorithms and extensions of HMMs, including in particular hybrids of HMMs with artificial neural networks, Input-Output HMMs (which are conditional HMMs using neural networks to compute probabilities), weighted transducers, variable-length Markov models and Markov switching state-space models. Finally, we discuss some of the challenges of future research in this very active area."

- 1998.
  "We introduce an entropic prior for multinomial parameter estimation problems and solve for its maximum ..."
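The EM procedure described in the first abstract above (train naive Bayes on the labeled documents, probabilistically label the unlabeled ones, retrain, iterate) can be sketched in a few lines. Everything below is a toy illustration with an invented four-word vocabulary and made-up documents, not code from the paper:

```python
import math

V = 4  # toy vocabulary, e.g. ["ball", "score", "vote", "law"]

def train(docs, weights):
    """M-step: Laplace-smoothed class priors and per-class word probabilities
    from fractionally labeled documents (weights[i][c] = P(class c | doc i))."""
    totals = [sum(w[c] for w in weights) for c in (0, 1)]
    priors = [(totals[c] + 1) / (len(docs) + 2) for c in (0, 1)]
    probs = [[(sum(w[c] * d[j] for d, w in zip(docs, weights)) + 1) / (totals[c] + 2)
              for j in range(V)] for c in (0, 1)]
    return priors, probs

def posterior(doc, priors, probs):
    """E-step for one document: P(class | doc) under Bernoulli naive Bayes."""
    logp = []
    for c in (0, 1):
        lp = math.log(priors[c])
        for j, x in enumerate(doc):
            lp += math.log(probs[c][j] if x else 1 - probs[c][j])
        logp.append(lp)
    m = max(logp)
    z = [math.exp(l - m) for l in logp]
    return [v / sum(z) for v in z]

# Two labeled documents per class, plus four unlabeled ones.
labeled = [([1, 1, 0, 0], 0), ([0, 1, 0, 0], 0), ([0, 0, 1, 1], 1), ([0, 0, 1, 0], 1)]
unlabeled = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]]

docs = [d for d, _ in labeled] + unlabeled
weights = [[1.0, 0.0] if y == 0 else [0.0, 1.0] for _, y in labeled] \
          + [[0.5, 0.5] for _ in unlabeled]

for _ in range(10):  # alternate M and E steps to convergence
    priors, probs = train(docs, weights)
    for i, d in enumerate(unlabeled):
        weights[len(labeled) + i] = posterior(d, priors, probs)
```

After a few iterations the unlabeled documents pull the class-conditional word probabilities toward the right clusters, so held-out documents sharing words with either class are classified with increasing confidence.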
Central subgroup

Dear all,

Let N be abelian normal subgroup of finite group G and G/N be simple group. Why N is ceteral subgroup of G?

All the best

Believing (on the basis of the title) that "ceteral" means "central" and believing that cyclic groups of prime order count as simple, I believe that $G=S_3$ and $N=A_3$ give a counterexample. If you prefer that simple groups not be abelian, then there should be similar examples, obtained as semidirect products of a simple group (to serve as $G/N$) acting non-trivially on an abelian group (to serve as $N$). – Andreas Blass, Mar 15 '12

Just as a concrete example, one can take the holomorph of $C_2^3$, which is a semidirect product $N\rtimes H$. Here $N$ is elementary abelian of order $8$, and $H$ is the simple group $SL(3,2)$. – Steve D, Mar 15 '12

Closed as not a real question by Alain Valette, Chris Godsil, HJRW, Ian Agol, Bill Johnson, Mar 19 '12.
The Practical Impact of Set-Theoretic Axioms on Measure Theory

The set-theoretic evidence is that we could probably safely add axioms to make many more sets measurable. For example, we could add axioms that would make projective sets measurable. I'm curious what would be the implications for working analysts of such a move. I can see two potential ways in which it could potentially have an impact:

• Currently, proving measurability of sets is a somewhat fussy activity. With the additional freedom provided by extra constructions, the existing theory would become much simpler.
• There are existing theories that are already straining at the limits of what can be proved measurable in ZFC. These theories could be usefully extended.

I could also see it potentially having no real impact. I'd be curious to hear which, if any, of these possibilities actually holds.

Tags: descriptive-set-theory, measure-theory

A standard application of Martin's axiom en.wikipedia.org/wiki/Martin%27s_axiom is the existence of Banach limits satisfying some measurability conditions (medial means in the sense of Mokobodzki). See my answer to math.stackexchange.com/q/54554 for some details and references and part 4 of that answer for a basic sample application that might illustrate their power. – Theo Buehler, Jan 1 '13

Answer:

Solovay's model already shows that the axiom of dependent choice (DC) is compatible with the assumption that all sets are Lebesgue measurable. As far as I am aware, DC suffices for essentially all applications that "working analysts" care about. If this is true, then the only practical impact of assuming that all sets are Lebesgue measurable is that you exchange slightly fussy proofs of measurability with slightly fussy proofs of results that currently invoke Hahn–Banach or other manifestations of AC, replacing AC with DC.

If I'm wrong and there are cases where DC isn't enough for "working analysts," I'd be curious to hear about it.

And, 40 years later, functional analysts have not begun using solovayan analysis. – Gerald Edgar, Dec 31 '12

See my answer mathoverflow.net/questions/34863 for speculations about why analysis hasn't become Solovayian. – Andreas Blass, Dec 31 '12

Note that even if working analysts resist opting for DC over AC in practice, Solovay's result still shows that making more sets measurable won't let you "extend" to new theorems unless those extended theorems aren't provable using DC. So for arsmath's question it is still relevant to consider what results in analysis aren't provable with DC. – Timothy Chow, Dec 31 '12

I don't wish to turn my ignorance into undue vehemence, but apropos of Todd's remark, I have always felt that the category of Banach spaces becomes much nastier if the dual of $L^\infty$ is $L^1$. Closed subspaces of reflexive spaces are no longer guaranteed to be reflexive; something odd must be happening with Hahn-Banach (meaning that duals of certain classes of short exact sequences are now no longer short exact), one loses automatic recourse to the "embed into the double dual to take advantage of compactness in the weak-star topology" technique, and so on. – Yemon Choi, Jan 1 '13

Nevertheless, as Timothy points out, perhaps everything Proper People Care About can be reproved using mildly fussier arguments instead of breezy invocations of Hahn-Banach and Tychonoff. So maybe this is a question of convenience and habit, although I remain to be completely convinced of this. – Yemon Choi, Jan 1 '13
The Scorpio Miser with its special high-efficiency engine
Posted by crackgmat3, 14 Sep 2004, 13:02

15. The Scorpio Miser with its special high-efficiency engine costs more to buy than the standard Scorpio sports car. At current fuel prices, a buyer choosing the Miser would have to drive it 60,000 miles to make up the difference in purchase price through savings on fuel. It follows that, if fuel prices fell, it would take fewer miles to reach the break-even point.

Which one of the following arguments contains an error of reasoning similar to that in the argument above?

(A) The true annual rate of earnings on an interest-bearing account is the annual rate of interest less the annual rate of inflation. If the rate of inflation drops, the rate of interest can be reduced by an equal amount without there being a change in the true rate of earnings.

(B) For retail food stores, the Polar freezer, unlike the Arctic freezer, provides a consistent temperature that allows the store to carry premium frozen foods. Thus, if electricity rates fell, a lower volume of premium-food sales could justify choosing the Polar freezer.

(C) With the Roadmaker, a crew can repave a mile of decayed road in less time than with the competing model, which is, however, much less expensive. Reduced staffing levels made possible by the Roadmaker eventually compensate for its higher price. Therefore, the Roadmaker is especially advantageous where average wages are low.

(D) The improved strain of the Northland apple tree bears fruit younger and lives longer than the standard strain. The standard strain does grow larger at maturity, but to allow for this, standard trees must be spaced farther apart. Therefore, new plantings should all be of the improved strain.
(E) Stocks pay dividends, which vary from year to year depending on profits made. Bonds pay interest, which remains constant from year to year. Therefore, since the interest earned on bonds does not decrease when economic conditions decline, investors interested in a reliable income should choose bonds.

Reply (from a Director-ranked member, joined Jun 2004):

New Miser - fuel efficient engine - but costly. Savings on fuel over 60K miles make it a good buy. => If fuel prices fall, MORE miles need to be added. However, the stimulus states that fewer miles is enough, and that is the flaw.

A has a similar logical flaw. Given: annual rate of earnings = annual rate of interest - inflation. Assuming some numbers, we have 5 = 6 - 1. If 1 becomes 2 (a 2-point drop, i.e. an additional one-point drop in inflation) and 6 becomes 5 (dropping by the same one point), we are told that the annual rate of earnings will remain at 5. However, we see that 5 - 2 = 3. Hence A.

Oh yes. I really missed on this one. I was always inclined to C, but surely missed it.
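The break-even arithmetic behind the flaw identified in the reply can be made concrete. All figures below are invented so that the current-price break-even comes out to 60,000 miles; only the direction of the change matters:

```python
def breakeven_miles(price_premium, fuel_price, gal_per_mile_standard, gal_per_mile_miser):
    # Fuel savings per mile = price of fuel * gallons saved per mile.
    savings_per_mile = fuel_price * (gal_per_mile_standard - gal_per_mile_miser)
    return price_premium / savings_per_mile

# Hypothetical figures: $1200 price premium; the standard car uses 0.05 gal/mile,
# the Miser 0.04 gal/mile.
at_high_price = breakeven_miles(1200.0, 2.00, 0.05, 0.04)  # fuel at $2.00/gal
at_low_price = breakeven_miles(1200.0, 1.50, 0.05, 0.04)   # fuel falls to $1.50/gal
print(at_high_price, at_low_price)
```

Cheaper fuel means each mile saves less money, so the break-even mileage rises (here from roughly 60,000 to 80,000 miles), the opposite of what the stimulus concludes.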
Seth Long

My blog is now located at http://www.WhoIsSethLong.com

A good friend of mine called me up the other day and said he was getting back into programming after being away from it for many years. He was having a hard time understanding the concept of inheritance and asked me for a really easy to understand example. So here it is…

The easiest way to understand inheritance is to think about your parents. Ok, I know you're wondering what thinking about your parents has to do with programming but let me explain. You are a unique individual yet you share many of the same features as your parents. You inherited those features from your parents and combined them with your own features, creating a new and unique individual. Well, inheritance in programming isn't much different. You create a new class that inherits the properties and methods of another class.

For this example, we will have a class named AlcoholicBeverage. In case you didn't know, all alcoholic beverages contain alcohol and have a "proof". So, we'll add the properties "Proof" and "PercentAlcohol" to the AlcoholicBeverage class. This class will act as the "parent" for our future beer and wine classes.

class AlcoholicBeverage
{
    protected double _PercentAlcohol;

    public double Proof
    {
        get { return (_PercentAlcohol * 2); }
        set { _PercentAlcohol = (value / 2); }
    }

    public double PercentAlcohol
    {
        get { return _PercentAlcohol; }
        set { _PercentAlcohol = value; }
    }
}

Next, we will create our beer and wine classes, which will inherit from the AlcoholicBeverage class. A beer and a glass of wine both have a percentage of alcohol and a proof, but we will not need to add them to our beer and wine classes since they are already defined in the base AlcoholicBeverage class.

class Beer : AlcoholicBeverage
{
    public Beer()
    {
        PercentAlcohol = 2.5;
    }
}

class Wine : AlcoholicBeverage
{
    public Wine()
    {
        PercentAlcohol = 12;
    }
}

Now we can create our new beer and wine classes and check their Proof.
Beer bottleOfBeer = new Beer();
Wine glassOfWine = new Wine();

Console.WriteLine(bottleOfBeer.Proof);
Console.WriteLine(glassOfWine.Proof);

When run, it gives the following output (a 2.5% beer is 5 proof and a 12% wine is 24 proof):

5
24

This is a really simple example of inheritance. Hopefully it's simple enough for everyone to understand. Well, that's all for now! - Seth Long

My blog is now located at http://www.WhoIsSethLong.com

One of my biggest pet peeves as a programmer is other programmers who create giant "do all" general classes rather than creating a few smaller specialized classes. So, I thought it would be a good idea to write a little bit about the Object Oriented principles of Coupling and Cohesion that deal specifically with this issue.

Coupling refers to how tightly different classes are connected to one another. Tightly coupled classes contain a high number of interactions and dependencies. Loosely coupled classes are the opposite in that their dependencies on one another are kept to a minimum and they instead rely on the well defined public interfaces of each other.

Ok, I know it's hard to understand coupling if you've never heard about it before, so I think the best way to explain it is with an example. I read a great description of coupling a while back that I'm going to share here. I think it will really help people better understand the concept of coupling. It was in a reply post on the Sun Developer's Network forums and just does a great job of describing coupling in easy to understand terms.

Legos, the toys that SNAP together, would be considered loosely coupled because you can just snap the pieces together and build whatever system you want to. However, a jigsaw puzzle has pieces that are TIGHTLY coupled. You can't take a piece from one jigsaw puzzle (system) and snap it into a different puzzle, because the system (puzzle) is very dependent on the very specific pieces that were built specific to that particular "design". The legos are built in a more generic fashion so that they can be used in your Lego House, or in my Lego Alien Man.
The code example below shows two very simple classes that are tightly coupled to one another. It was inspired by actual code from the Ektron CMS my previous employer so dearly loved. It is not the actual code but is in the exact same format.

class TightCoupling
{
    public void ShowWelcomeMsg(string type)
    {
        switch (type)
        {
            case "GM":
                Console.WriteLine("Good Morning");
                break;
            case "GE":
                Console.WriteLine("Good Evening");
                break;
            case "GN":
                Console.WriteLine("Good Night");
                break;
        }
    }
}

class TightCoupling2
{
    public TightCoupling2()
    {
        TightCoupling example = new TightCoupling();
        example.ShowWelcomeMsg("GE");
    }
}

In the above example, the ShowWelcomeMsg function cannot be called without knowing the inner workings of that function, making it nearly useless for other systems. Secondly, if another developer in the future decides to change the switch statement in the ShowWelcomeMsg function, it could inadvertently affect other classes that call that function.

It shouldn't be too hard to see that loosely coupled classes offer much greater code reuse and flexibility than tightly coupled classes. Any changes to tightly coupled classes run the high risk of inadvertently affecting other classes, and their dependence on other specific classes makes them worthless outside the system they are in. With loosely coupled classes we use well defined public interfaces. This gives us the flexibility to modify the internals of the class in the future without affecting other classes and allows us to reuse the code in other systems. So, next time you are writing a class, think to yourself: Am I writing a Lego or am I writing a puzzle piece?

Cohesion is often mentioned with Coupling since they usually go hand-in-hand. Cohesion refers to how closely related the methods and class level variables in a class are. A class with high cohesion would be one where all the methods and class level variables are used together to accomplish a specific task.
On the other end, a class with low cohesion is one where functions are randomly inserted into a class and used to accomplish a variety of different tasks. Generally, tight coupling gives low cohesion and loose coupling gives high cohesion.

The code below is of an EmailMessage class that has high cohesion. All of the methods and class level variables are very closely related and work together to accomplish a single task.

class EmailMessage
{
    private string sendTo;
    private string subject;
    private string message;

    public EmailMessage(string to, string subject, string message)
    {
        this.sendTo = to;
        this.subject = subject;
        this.message = message;
    }

    public void SendMessage()
    {
        // send message using sendTo, subject and message
    }
}

Now here is an example of the same class, but this time with low cohesion. This class was originally designed to send an email message, but sometime in the future the user needed to be logged in to send an email, so the Login method was added to the EmailMessage class.

class EmailMessage
{
    private string sendTo;
    private string subject;
    private string message;
    private string username;

    public EmailMessage(string to, string subject, string message)
    {
        this.sendTo = to;
        this.subject = subject;
        this.message = message;
    }

    public void SendMessage()
    {
        // send message using sendTo, subject and message
    }

    public void Login(string username, string password)
    {
        this.username = username;
        // code to login
    }
}

The Login method and username class variable really have nothing to do with the EmailMessage class and its main purpose. This class now has low cohesion and is probably not a good example to follow.

So that's a brief overview of Coupling and Cohesion. Remember, if you want easy to maintain and reusable code, stick to loose coupling and high cohesion in your objects. I should also mention that there are times where tight coupling is desirable, but it's rare; I'll leave that for you to read more about on your own.
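To contrast with the tightly coupled ShowWelcomeMsg example earlier, here is a sketch of what a loosely coupled version might look like. The Greeting enum and Greeter class are my own names for illustration; they are not from the original post:

```csharp
using System;

// Callers depend only on this small public contract, not on "magic"
// strings or on the internals of the switch statement below.
public enum Greeting { GoodMorning, GoodEvening, GoodNight }

public class Greeter
{
    // Returning the message instead of writing to the console keeps
    // the class reusable in systems that have no console at all.
    public string GetWelcomeMsg(Greeting greeting)
    {
        switch (greeting)
        {
            case Greeting.GoodMorning: return "Good Morning";
            case Greeting.GoodEvening: return "Good Evening";
            default: return "Good Night";
        }
    }
}
```

Because the switch statement now sits behind a well defined public interface, its internals can change later without breaking any caller.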
Also, I really recommend reading up on Cohesion and Coupling on Wikipedia. They have a lot more detail and information there than what I've written here. - Seth Long

The Haversine Formula class listed below is used to calculate the distance between two latitude/longitude points in either Kilometers or Miles. I've seen the formula written in a few other languages but didn't see any in C#. So, here is the Haversine Formula in C#.

Before I show the class, let me give an example of how the class is called and used. To start we will need 2 latitude/longitude points. I created a struct to hold the latitude/longitude points. Once we have the points, we create the Haversine class and call the Distance method, passing in the points and an enum specifying whether to return the results in Kilometers or Miles. We end up with the following:

Position pos1 = new Position();
pos1.Latitude = 40.7486;
pos1.Longitude = -73.9864;

Position pos2 = new Position();
pos2.Latitude = 24.7486;
pos2.Longitude = -72.9864;

Haversine hv = new Haversine();
double result = hv.Distance(pos1, pos2, DistanceType.Kilometers);

Here is the code for the class:

using System;

namespace HaversineFormula
{
    /// <summary>
    /// The distance type to return the results in.
    /// </summary>
    public enum DistanceType { Miles, Kilometers };

    /// <summary>
    /// Specifies a Latitude / Longitude point.
    /// </summary>
    public struct Position
    {
        public double Latitude;
        public double Longitude;
    }

    class Haversine
    {
        /// <summary>
        /// Returns the distance in miles or kilometers of any two
        /// latitude / longitude points.
        /// </summary>
        /// <param name="pos1"></param>
        /// <param name="pos2"></param>
        /// <param name="type"></param>
        /// <returns></returns>
        public double Distance(Position pos1, Position pos2, DistanceType type)
        {
            double R = (type == DistanceType.Miles) ?
                3960 : 6371;

            double dLat = this.toRadian(pos2.Latitude - pos1.Latitude);
            double dLon = this.toRadian(pos2.Longitude - pos1.Longitude);

            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                Math.Cos(this.toRadian(pos1.Latitude)) * Math.Cos(this.toRadian(pos2.Latitude)) *
                Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            double c = 2 * Math.Asin(Math.Min(1, Math.Sqrt(a)));
            double d = R * c;

            return d;
        }

        /// <summary>
        /// Convert to Radians.
        /// </summary>
        /// <param name="val"></param>
        /// <returns></returns>
        private double toRadian(double val)
        {
            return (Math.PI / 180) * val;
        }
    }
}

Here is the same formula as a SQL function. I used Microsoft SQL Server for this example.

CREATE FUNCTION [dbo].[GetDistance]
(
    @lat1 Float(8),
    @long1 Float(8),
    @lat2 Float(8),
    @long2 Float(8)
)
RETURNS Float(8)
AS
BEGIN
    DECLARE @R Float(8);
    DECLARE @dLat Float(8);
    DECLARE @dLon Float(8);
    DECLARE @a Float(8);
    DECLARE @c Float(8);
    DECLARE @d Float(8);

    SET @R = 3960;
    SET @dLat = RADIANS(@lat2 - @lat1);
    SET @dLon = RADIANS(@long2 - @long1);
    SET @a = SIN(@dLat / 2) * SIN(@dLat / 2) + COS(RADIANS(@lat1)) * COS(RADIANS(@lat2)) * SIN(@dLon / 2) * SIN(@dLon / 2);
    -- MIN() is an aggregate in T-SQL, so clamp the value with CASE instead
    SET @c = 2 * ASIN(CASE WHEN SQRT(@a) > 1 THEN 1 ELSE SQRT(@a) END);
    SET @d = @R * @c;

    RETURN @d;
END
GO

- Seth Long

My blog is now located at http://www.WhoIsSethLong.com

I recently had the urge to create a Ternary Search Tree in C# (doesn't everyone get these urges?). My first stop was at http://www.ddj.com/windows/184410528 for a GREAT article on how Ternary Search Trees work and how to create them. If you're really unfamiliar with Ternary Search Trees, I'd recommend starting with the Wikipedia article on them before reading this.

For this project, I began by creating a class called TSTree and another called TSNode. The TSNode class is a very simple class that simply contains properties for the split char, low node, equal node and hi node (later we will add a value).
Here is an example of the TSNode class:

class TSNode
{
    private char splitChar;
    private TSNode lowNode;
    private TSNode eqNode;
    private TSNode hiNode;
    private object value;

    public char SplitChar
    {
        get { return this.splitChar; }
        set { this.splitChar = value; }
    }

    public TSNode LowNode
    {
        get { return this.lowNode; }
        set { this.lowNode = value; }
    }

    public TSNode EqNode
    {
        get { return this.eqNode; }
        set { this.eqNode = value; }
    }

    public TSNode HiNode
    {
        get { return this.hiNode; }
        set { this.hiNode = value; }
    }

    public object Value
    {
        get { return this.value; }
        set { this.value = value; }
    }
}

Later on we can add more to the class, but for now that is a good start. So let's briefly go over what it all does. The first property, SplitChar, holds the char that this node represents. LowNode holds a reference to a node with a SplitChar that is less than the value of this node, HiNode holds a reference to a node with a SplitChar that is greater than the value of this node and of course EqNode holds a reference to a node that has a SplitChar that is equal. Pretty much like a binary tree node but with a reference to an equal node added (Binary = 2, Ternary = 3).

The next step is to add an Add method to the TSTree class. What the Add method is going to do is add a node for each char in a string to the tree. For example, if we have an empty tree and add the key "cat" to it, it will add nodes for 'c', 'a', and 't' to the tree. Since our word is "cat", the root node will have a SplitChar value of 'c'. In its EqNode property it will contain a reference to another node with a SplitChar value of 'a'. This node will also contain an EqNode reference to another node with a SplitChar value of 't'. So we have three nodes in our tree whose collective SplitChar values spell out "cat".
Here is the code for the Add method:

public TSNode Add(TSNode p, string s, int pos, object value)
{
    if (p == null)
    {
        p = new TSNode();
        p.SplitChar = s[pos];
    }

    if (s[pos] < p.SplitChar)
    {
        p.LowNode = Add(p.LowNode, s, pos, value);
    }
    else if (s[pos] == p.SplitChar)
    {
        if (pos < s.Length - 1)
        {
            p.EqNode = Add(p.EqNode, s, ++pos, value);
        }
        else
        {
            p.Value = value;
        }
    }
    else
    {
        p.HiNode = Add(p.HiNode, s, pos, value);
    }

    return p;
}

Now, when we want to add another word things get interesting. So, let's add the word "cab" to our tree and see what happens next. We start by looking at the root node in our tree and see that 'c' matches the first letter in our word "cab". Since it matches, we follow the EqNode reference and check the next SplitChar value. In this case it is 'a', which again matches the next char in our word "cab". Again, we follow the EqNode reference and check the next SplitChar value. This time, the value of the SplitChar does not equal the last value of our word. So we have to check if the SplitChar value is greater than or less than the value of 'b'. 'b' is less than the value of 't' so we will follow the path of the LowNode. In this example the LowNode value is null so we create another node and set its SplitChar value to 'b'. That's how we add items to the tree.

Searching for a string in our tree is going to work a lot like adding an item to it. If we were searching for the word "cab" in the previous tree we created, we would start by checking the root node's SplitChar value and comparing it to the value of the first char in our search word. In this case the 'c' in our search word matches the SplitChar value of the root node. So we follow the reference in the EqNode property to the next node. The next node contains a SplitChar value of 'a' and again matches the second char in our search word of "cab". Again, we follow the EqNode reference to the next node and check the SplitChar value.
This time they are not equal, so we then determine if 'b' is greater than or less than the 't' that is stored in the SplitChar value. 'b' is less than 't' so we take the LowNode to the next node. This node contains a SplitChar value of 'b' so we have found our match! Here is the code for the search:

public object Search(string s)
{
    TSNode p = root;
    int x = 0;

    while (p != null)
    {
        if (s[x] < p.SplitChar)
        {
            p = p.LowNode;
        }
        else if (s[x] == p.SplitChar)
        {
            x++;
            if (x == s.Length)
            {
                // end of the search string; return the value if this
                // node ends a stored key (null otherwise)
                return p.Value;
            }
            p = p.EqNode;
        }
        else
        {
            p = p.HiNode;
        }
    }

    return null;
}

So those are the major methods of a Ternary Search Tree. I will post the complete code up here shortly. It's also important to remember that even though a Ternary Search Tree is similar in use to other key-value type data structures, it does have some major differences. First, the key MUST be a string. Second, the Ternary Search Tree has the ability to return partial matches and even similar matches. Very Cool!

- Seth Long

My blog is now located at http://www.WhoIsSethLong.com

The Quicksort is one of the fastest sorting algorithms for sorting large lists of data. The Insertion sort is a fast sorting algorithm for sorting very small lists that are already somewhat sorted. We combine these two algorithms to come up with a very simple and effective method for sorting large lists.

We start our sort by using a Quicksort. The Quicksort on average runs in O(n log(n)) time but can be as slow as O(n^2) if we try to sort a list that is already sorted and use the left-most element as our pivot. If you are unfamiliar with the quicksort here is a Wikipedia article describing the algorithm: http://en.wikipedia.org/wiki/Quicksort

As the Quicksort works, it breaks down our list into smaller and smaller lists. Once the lists become small enough that an Insertion sort becomes more efficient than the Quicksort, we will switch to the Insertion sort to finish off the job.
Think of it as grinding away with the Quicksort then polishing it off with the Insertion sort once most of the work has been done. Here is a code sample of a combined Quicksort and Insertion sort algorithm in C#.

class Quicksort
{
    public void Sort(int[] list, int start, int end)
    {
        if (start < end)
        {
            // This is where we switch to Insertion Sort!
            if ((end - start) < 9)
            {
                this.InsertionSort(list, start, end + 1);
            }
            else
            {
                int part = this.partition(list, start, end);
                this.Sort(list, start, part - 1);
                this.Sort(list, part + 1, end);
            }
        }
    }

    public void InsertionSort(int[] list, int start, int end)
    {
        for (int x = start + 1; x < end; x++)
        {
            int val = list[x];
            int j = x - 1;
            while (j >= 0 && val < list[j])
            {
                list[j + 1] = list[j];
                j--;
            }
            list[j + 1] = val;
        }
    }

    private int partition(int[] list, int leftIndex, int rightIndex)
    {
        int left = leftIndex;
        int right = rightIndex;
        int pivot = list[leftIndex];

        while (left < right)
        {
            if (list[left] < pivot)
            {
                left++;
                continue;
            }

            if (list[right] > pivot)
            {
                right--;
                continue;
            }

            int tmp = list[left];
            list[left] = list[right];
            list[right] = tmp;
            left++;
        }

        return left;
    }
}

My blog is now located at http://www.WhoIsSethLong.com

I was recently asked about a SQL junction table and thought I'd share my thoughts about them with the few people who read this. First off, I am NOT a DBA. So, if something is not correct or accurate please feel free to correct me.

Junction tables are used when dealing with many-to-many relationships in a SQL database. If you're wondering what exactly a many-to-many relationship is, let me try to briefly explain. Suppose we are working at a school and have a table full of student names and another table full of classrooms. Each of the students can belong to multiple classrooms or none at all. Likewise, each classroom can have multiple students or none at all. This is an example of a many-to-many relationship.
A junction table will allow us to create the many-to-many relationship and, most importantly, let us keep from adding duplicate entries, as you'll soon see. To start, let's create a student table and a classroom table.

CREATE TABLE Students
(
    StudentID int IDENTITY(1,1) PRIMARY KEY,
    StudentName nchar(50) NOT NULL
)

CREATE TABLE Classrooms
(
    ClassroomID int IDENTITY(1,1) PRIMARY KEY,
    RoomNumber int NOT NULL
)

Now that we have our two tables created we need to create the junction table that will link them together. The junction table is created by using the primary keys from the Classrooms and Students tables.

CREATE TABLE StudentClassroom
(
    StudentID int NOT NULL,
    ClassroomID int NOT NULL,
    CONSTRAINT PK_StudentClassroom PRIMARY KEY (StudentID, ClassroomID),
    FOREIGN KEY (StudentID) REFERENCES Students (StudentID),
    FOREIGN KEY (ClassroomID) REFERENCES Classrooms (ClassroomID)
)

We have now created a table with columns for the StudentID and the ClassroomID. This table also uses a combination of these two columns as the primary key. This means that each student-classroom pair is unique. Each student can belong to many classrooms, each classroom can belong to many students, but each pair can only occur once.

You should also note that the columns in the junction table are set up as foreign keys to the Students and Classrooms tables. This is important as it keeps us from adding students to a classroom that doesn't exist or deleting a classroom from the database if there are still students belonging to it.

To see what students belong to what classrooms we can use the junction table and the following query:

SELECT StudentName, RoomNumber
FROM StudentClassroom
JOIN Students ON Students.StudentID = StudentClassroom.StudentID
JOIN Classrooms ON Classrooms.ClassroomID = StudentClassroom.ClassroomID

So, that's a junction table in a nutshell. - Seth Long

My blog is now located at http://www.WhoIsSethLong.com

A binary search is used to find if a value exists, or what its position is, in a sorted list.
You may have noticed that I put "sorted" in italics. That's because in order for the binary search to work, it must have a sorted list. To better explain how the binary search works, let's walk through a simple example. We will start off with a sorted list of numbers.

3, 5, 6, 9, 11, 12, 14

Next, we'll need a number to search for. For this example we'll choose the number 12. The binary search starts by selecting the middle position of the array and getting its value. In our example we selected 9.

3, 5, 6, 9, 11, 12, 14

The algorithm then must decide if the number we are searching for is higher or lower than 9. Since we are searching for 12, the number is higher than 9. We must search to the right of 9, since we know that if there is a 12 in the list it must be to the right of 9. We have now reduced the number of items to compare in half with just 1 comparison! Pretty cool! This leaves us with the following numbers to search through.

11, 12, 14

We again select the middle position and compare it to the number we are searching for. In this case the middle position returns a 12, which happens to be the number we are searching for. Using the binary search we found our number in an array of 7 items using only 2 comparisons!

The key things to remember about a binary search are that it must have a sorted list and that it reduces the number of items it has to search through every time it does a comparison.

Coding a Binary Search

Coding a binary search is fairly straightforward. We take the middle position of the array and compare it to our number. If our number is higher than the middle position we then use the top half of the list. If our number is lower we use the lower half of the list. We then repeat the process till we find the number or run out of numbers to compare. Here's an example of a binary search using C#.
public int binarySearch(int[] array, int lower, int upper, int target)
{
    // set -1 to the location in case we don't find it
    int location = -1;

    // keep looping till we run out of positions or we find it
    while (lower <= upper)
    {
        // get the mid position
        int mid = (lower + upper) / 2;

        if (array[mid] == target)
        {
            // found it
            location = mid;
            break;
        }
        else if (target > array[mid])
        {
            // the mid position is less than our target
            lower = mid + 1;
        }
        else
        {
            // the mid position is greater than our target
            upper = mid - 1;
        }
    }

    return location;
}

One thing you may have noticed is that you could have written the algorithm using recursion instead of using an iterative approach. So, here is another example of the same algorithm using a recursive method rather than an iterative one.

public int binarySearch(int[] array, int lower, int upper, int target)
{
    // does not exist
    if (lower > upper)
        return -1;

    // get the mid position
    int mid = (lower + upper) / 2;

    if (array[mid] == target)
    {
        // mid position equals the target
        return mid;
    }
    else if (target < array[mid])
    {
        // target is less than mid
        return binarySearch(array, lower, mid - 1, target);
    }
    else
    {
        // target is greater than mid
        return binarySearch(array, mid + 1, upper, target);
    }
}

One of the greatest things about a binary search is the speed at which it can search through large lists. For example, if we had a list with 1,000,000 items in it, a binary search could find the item we are searching for in under 20 comparisons! Of course there is one drawback to the binary search: the list must be sorted. Since every comparison the binary search performs cuts the number of items we have to search through in half, the binary search runs in logarithmic time, or O(log N).

Well, that's all for now.

- Seth Long

My blog is now located at http://www.WhoIsSethLong.com

I went to an interview recently and was asked to arrange an array of integers from 1 to 52 in random order. As soon as they said 52 I thought about a deck of cards, then instantly thought about the Fisher-Yates Shuffle Algorithm. The Fisher-Yates Shuffle Algorithm is a simple algorithm that runs in linear time and is used to shuffle an array in random order.
The algorithm walks the array from the end and, at each position, generates a random index between 0 and the current position, then swaps the current item with the item at the randomly generated position. This is a simple algorithm and is probably easiest understood with an example. For this example I'll use C#.

public void FisherYates(int[] ary)
{
    Random rnd = new Random();

    for (int i = ary.Length - 1; i > 0; i--)
    {
        // Next's upper bound is exclusive, so i + 1 lets the
        // element at position i stay in place
        int r = rnd.Next(0, i + 1);
        int tmp = ary[i];
        ary[i] = ary[r];
        ary[r] = tmp;
    }
}

There are other ways to shuffle the array besides the Fisher-Yates method. If this had been a deck of cards, another method would be to assign a randomly generated 64-bit number to each card then sort by their randomly assigned number.

- Seth Long

My blog is now located at http://www.WhoIsSethLong.com

This code challenge was inspired by a book I recently read. The book is called Programming Interviews Exposed: Secrets To Landing Your Next Job. It's a great book full of those fun technical questions you get asked during programming job interviews.

For this challenge, you will need to create a function/class/algorithm that takes a string and finds the first non-repeated character. For example, in the word reaper the first non-repeated character is 'a'. The character 'p' is also not repeated but we are looking for the first one. There are multiple ways to accomplish this but I'm going to only post 1 here.

To start, I first created a hash table with all the chars in the string and incremented each char as additional matches are found. So for example, in the string reaper I will start with the char 'r' and add that to the hashtable and give it a value of 1. Next I will add 'e' to the hash table and give it a value of 1. The chars 'a' and 'p' will each get a value of 1, but the next char, 'e', already exists in the hashtable so we will increment it and its value becomes 2. This is done for all the chars in the string.
Once this is done, we then read back over the string, comparing each char in the string to the hashtable till we find the first char that has a value of 1. I know, it sounds a little confusing, but if you read over the code sample it will make a lot more sense. Here is a code sample in C#:

public char NonRepeatedChar(string s)
{
    Hashtable ht = new Hashtable();

    // loop through each char in string
    // and add to hashtable
    for (int x = 0; x < s.Length; x++)
    {
        char c = s[x];
        if (ht.ContainsKey(c))
        {
            // already exists in hashtable so
            // give it a value of 2.
            ht[c] = 2;
        }
        else
        {
            // does not already exist so give
            // an initial value of 1
            ht.Add(c, 1);
        }
    }

    // loop through the string again and
    // find the FIRST match in the hashtable
    // that has a value of 1 and return it.
    for (int x = 0; x < s.Length; x++)
    {
        char c = s[x];

        // check if the char's value is 1
        if ((int)ht[c] == 1)
        {
            return c;
        }
    }

    // no match found, return a null char
    return '\0';
}

Well, that's all for now!

- Seth Long

My blog is now located at http://www.WhoIsSethLong.com

Well, this is my first time writing online so I thought I'd start off with something simple, and a bubble sort was the first thing that came to mind. If you're not familiar with what a bubble sort is, let me give you a quick rundown. A bubble sort algorithm is a very simple way to sort a collection but also not a very efficient one. The algorithm works by comparing 2 items in the collection and swapping them if they are not in the right position.

For example, we are going to sort the following array of numbers from smallest to largest.

6, 3, 8, 2, 5

The algorithm would start by comparing the first two numbers and moving the larger number to the right.

6, 3, 8, 2, 5 -> 3, 6, 8, 2, 5

The algorithm would then move over 1 place and compare the numbers there.
3, 6, 8, 2, 5

Since the larger number is already on the right, the algorithm doesn't need to make any changes and again moves over to the right 1 position.

3, 6, 8, 2, 5

Now we have the larger number on the left so it will need to be swapped with the smaller number on the right.

3, 6, 8, 2, 5 -> 3, 6, 2, 8, 5

The larger number has been swapped to the right so the algorithm will now move over again and check the last two numbers.

3, 6, 2, 8, 5

Again, the larger number is on the left so it will need to be swapped. This gives us:

3, 6, 2, 8, 5 -> 3, 6, 2, 5, 8

After the first pass through, we now have the largest number (8) in its proper position. So now we will start at the beginning of the array again, work our way towards the end and swap numbers as needed.

3, 6, 2, 5, 8
3, 6, 2, 5, 8 -> 3, 2, 6, 5, 8
3, 2, 6, 5, 8 -> 3, 2, 5, 6, 8

Now, you can see that we don't compare the last number this time. That's because we know the last number is already in its proper place. Each run through the array we compare one less number on the right. The algorithm gets its name because the largest number will bubble up to the last position after each pass. Our third run through the array would look like this:

3, 2, 5, 6, 8 -> 2, 3, 5, 6, 8
2, 3, 5, 6, 8

Fourth run:

2, 3, 5, 6, 8

That's definitely a lot of work just to sort 5 numbers. There are other algorithms out there that are much more efficient than the bubble sort.

Coding the Bubble Sort Algorithm

Now that we hopefully understand how the bubble sort algorithm works, let's look at the code needed to implement it. All we really need are 2 loops and an if statement to create a bubble sort algorithm. The first and outer loop will start from the end of the array and work its way to the beginning. The second and inner loop will start from the beginning of the array and work its way to the value of the outer loop. Inside the inner loop is an if statement that compares the 2 numbers and swaps them if necessary.
Here is the code using C# syntax:

// the array to sort
int[] ary = new int[] { 6, 3, 8, 2, 5 };

// outer loop
for (int outer = ary.Length - 1; outer > 0; outer--)
{
    // inner loop
    for (int inner = 0; inner < outer; inner++)
    {
        if (ary[inner] > ary[inner + 1])
        {
            // swap them
            int tmp = ary[inner];
            ary[inner] = ary[inner + 1];
            ary[inner + 1] = tmp;
        }
    }
}

There are other variations of this algorithm but this is probably the most common way of coding it. As I've mentioned before, the bubble sort algorithm is one of the worst ways to sort a collection. To understand why, we need to look at the worst-case complexity of the algorithm. Since the algorithm uses 2 loops to sort the array, there are a LOT of comparisons and swaps being made. With an array of n items, the algorithm will make n - 1 comparisons on the first pass. The next pass will require n - 2 comparisons, and so on. This gives us n(n - 1) / 2 comparisons in total, or O(n^2).

Well, that's all for now. I'll be posting some more sorting algorithms soon so keep checking back!

- Seth Long
{"url":"http://megocode3.wordpress.com/author/megocode3/","timestamp":"2014-04-16T13:09:35Z","content_type":null,"content_length":"107388","record_id":"<urn:uuid:b536636e-4181-46f0-ab6d-ec99d211415d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Convergence of sequence using the definition

August 2nd 2010, 04:55 AM #1

Hi, I know how to show that the following sequence goes to its limit, but i need to do it using the definition, which i am having trouble with. i've started how i should, though, i think.

$x_{n} = \sqrt{n+1} - \sqrt{n}$; prove $x_{n} \rightarrow 0$.

Let $\epsilon > 0$ be given. Let $N=N(\epsilon)$ be an integer greater than ...(answer goes here)... Then for all $n>N$ we have

$|x_{n} - l| = |\sqrt{n+1}-\sqrt{n} - 0| = |\sqrt{n+1} - \sqrt{n}|$

$\sqrt{n+1} - \sqrt{n} = \dfrac{(\sqrt{n+1} - \sqrt{n})(\sqrt{n+1} + \sqrt{n})}{\sqrt{n+1} + \sqrt{n}}$

$|x_{n} - l| = \dfrac{1}{\sqrt{n+1} + \sqrt{n}}$

and that is where i get stuck. I'm not sure if this definition method of proof is well known, so if you would like an example of a completed question just ask. Thanks.

Last edited by Rapid_W; August 2nd 2010 at 06:43 AM. Reason: dfrac instead of frac

The first thing to note is $x_n=\dfrac{1}{\sqrt{n+1}+\sqrt{n}}$. Then $\left|\dfrac{1}{\sqrt{n+1}+\sqrt{n}} - 0\right|\le\dfrac{1}{2\sqrt{n}}$. Can you finish?

Yes, that is the missing link, thanks very much. i did think of doubling it, but i did this instead: $\dfrac{1}{\sqrt{2n}}$, which isn't true lol. Do i have to be able to prove that though, or state a theorem?

Prove what? That $\sqrt{n+1}- \sqrt{n}= \frac{1}{\sqrt{n+1}+ \sqrt{n}}$? Rationalize the numerator: multiply both numerator and denominator of $\frac{\sqrt{n+1}- \sqrt{n}}{1}$ by $\sqrt{n+1}+ \sqrt{n}$.

no sorry, perhaps i am being silly, but: prove that $\dfrac{1}{\sqrt{n+1} + \sqrt{n}} \le \dfrac{1}{2\sqrt{n}}$. also, the answer is $N=N(\epsilon) =$ an integer $> \dfrac{1}{4\epsilon^{2}}$?

Last edited by Rapid_W; August 2nd 2010 at 06:53 AM.
so 1 over the whole thing is the same, ok got that. So if i finish off my work i get

$|x_{n}-l| \le \dfrac{1}{2\sqrt{n}}$

Since $n > N$,

$|x_{n}-l| \le \dfrac{1}{2\sqrt{N}}$

and $\dfrac{1}{2\sqrt{N}} < \epsilon$ if $N > \dfrac{1}{4\epsilon^{2}}$, so

$|x_{n}-l| < \epsilon$

and then i write $\dfrac{1}{4\epsilon^{2}}$ in the gap at the top. Is this right?
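For what it's worth, the bound in the last post can be sanity-checked numerically (a Python sketch of my own, not part of the thread): the condition $\dfrac{1}{2\sqrt{N}} < \epsilon$ rearranges to $N > \dfrac{1}{4\epsilon^{2}}$, and any $n$ beyond such an $N$ should satisfy $|x_n - 0| < \epsilon$.

```python
import math

def x(n):
    # x_n = sqrt(n+1) - sqrt(n) = 1 / (sqrt(n+1) + sqrt(n))
    return math.sqrt(n + 1) - math.sqrt(n)

eps = 0.01
N = math.ceil(1 / (4 * eps ** 2))  # smallest integer above 1/(4*eps^2)
print(N)  # 2500

# every n > N should land within eps of the limit 0
for n in range(N + 1, N + 1001):
    assert abs(x(n) - 0) < eps
```

The check passes for this (and any) choice of eps, which is exactly what the epsilon-N definition of convergence demands.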
{"url":"http://mathhelpforum.com/differential-geometry/152573-convergence-sequence-using-definition.html","timestamp":"2014-04-17T08:46:03Z","content_type":null,"content_length":"55331","record_id":"<urn:uuid:d4fe69be-73cd-4d9a-b3b0-3cf811137edb>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Rank-0 arrays - reprise Nathaniel Smith njs@pobox.... Sun Jan 6 10:52:13 CST 2013 On Sun, Jan 6, 2013 at 10:35 AM, Dag Sverre Seljebotn <d.s.seljebotn@astro.uio.no> wrote: > I should have been more precise: I like the proposal, but also believe > the additional complexity introduced have significant costs that must be > considered. > a) Making += behave differently for readonly arrays should be > carefully considered. If I have a 10 GB read-only array, I prefer an > error to a copy for +=. (One could use an ISSCALAR flag instead that > only affected +=...) Yes, definitely we would need to nail down the exact semantics here. My feeling is that we should see start by seeing if we can come up with a set of coherent rules for read-only arrays that does what we want before we add an ACT_LIKE_OLD_SCALARS flag, but either way is viable. (Or we could start with a PRETEND_TO_BE_SCALAR flag and then gradually migrate away from it.) > b) Things seems simpler since "indexing away the last index" is no > longer a special case, it is always true for a.ndim > 0 that "a[i]" is a > new array such that > a[i].ndim == a.ndim - 1 > But in exchange, a new special-case is introduced since READONLY is only > set when ndim becomes 0, so it doesn't really help with the learning > curve IMO. Yes, indexing with a scalar (as opposed to slicing or fancy-indexing) remains a special case just like now. And not just because the result is read-only -- it also returns a copy, not a view. I don't think the comparison to the a[i] special-case is very useful, really. Scalar indexing and the wacky one-dimensional indexing thing where a[i] -> a[i, ..] (unless a is one-dimensional) would still be different in general, even aside from the READONLY part, because the one-dimensional indexing thing only applies to one-dimensional indexes. For a 3-d array, a[i, j] gives an error; it's not the same as a[i, j, ...]. 
And while I understand why numpy does what it does for len() and __getitem__(int) on multi-dimensional arrays (it's to make multi-dimensional arrays act more like list-of-lists), this is IMO a confusing special case that we might be better off without, and in any case shouldn't be used as a guide for how to make the rest of the indexing system work. > In some ways I believe the "scalar-indexing" special case is simpler for > newcomers to understand, and is what people already assume, and that a > "readonly-indexing" special case is more complicated. It's dangerous to > have a library which people only use correctly by accident, so to speak, > it's much better if what people think they see is how things are. This is all true, but current scalars *are* readonly arrays, just weird ones with some limitations and that people don't realize are Heck, you can even reshape scalars: In [10]: a = np.float64(0) In [11]: a.reshape((1, 1)) Out[11]: array([[ 0.]]) And resizing is allowed... but silently does nothing: In [12]: a.resize((1, 1)) In [13]: a Out[13]: 0.0 > (With respect to arr[5] returning a good old Python scalar for floats > and ints -- Travis' example from 2002 is division, and at least that > example is much less serious now with the introduction of the // > operator in Python.) I thought Travis's example was (in current numpy terms): In [1]: a = np.array([-1.0, 1.0]) # Pretend that np.sum() returns a float, which uses Python's arithmetic: In [2]: 1 / float(np.sum(a)) ZeroDivisionError: float division by zero # It actually returns a numpy scalar, which uses numpy's arithmetic: In [3]: 1 / np.sum(a) /home/njs/.user-python2.7-64bit/bin/ipython:1: RuntimeWarning: divide by zero encountered in double_scalars Out[3]: inf Anyway, you still need to return some sort of special object for anything that's not part of python's type system (structured arrays, custom dtypes like enumerated values, etc.). So returning good-old Python scalars (GOPS?) 
for floats/ints/bools actually introduces a new special case.

>> One could argue about structured datatypes, but maybe then it should be a datatype property whether its mutable or not, and even then the element should probably be a copy (though I did not check what happens here right now).

> Elements from arrays with structured dtypes are already mutable (*and*, at least until recently, could still be used as dict keys...). This was discussed on the list a couple of months back I think.

Yeah, this is another weird wart we could fix up in the process...
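[As an illustrative aside, not part of the original email: the copy-versus-view distinction for scalar indexing discussed above can be seen directly. Scalar indexing hands back a new scalar object, while slicing returns a view into the same buffer.]

```python
import numpy as np

a = np.array([1.0, 2.0])

s = a[0]      # scalar indexing: returns a new (immutable) scalar, not a view
a[0] = 99.0
print(s)      # still 1.0 -- later writes to `a` don't affect it

v = a[0:1]    # slicing: returns a view into the same buffer
a[0] = 7.0
print(v[0])   # 7.0 -- the view sees the write
```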
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2013-January/065013.html","timestamp":"2014-04-18T00:36:58Z","content_type":null,"content_length":"7477","record_id":"<urn:uuid:2ad3163c-af90-4cd3-932f-c07f20ea535e>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra 2 Tutors Palos Verdes Peninsula, CA 90274 Always applying a holistic approach when tutoring! ...I just graduated from USC Marshall School of Business with an MBA degree with concentration in Marketing. My bachelor's degree is in Civil Engineering. I am currently tutoring Statics which is one of the most important courses in this career. For students to understand... Offering 10+ subjects including algebra 2
{"url":"http://www.wyzant.com/Torrance_CA_algebra_2_tutors.aspx","timestamp":"2014-04-20T17:44:41Z","content_type":null,"content_length":"61762","record_id":"<urn:uuid:a4cf99ea-00dd-4133-a07f-042d2a1c5a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
20Cxx Representation theory of groups [See also 19A22 (for representation rings and Burnside rings)] • 20C05 Group rings of finite groups and their modules [See also 16S34] • 20C07 Group rings of infinite groups and their modules [See also 16S34] • 20C08 Hecke algebras and their representations • 20C10 Integral representations of finite groups • 20C11 p-adic representations of finite groups • 20C12 Integral representations of infinite groups • 20C15 Ordinary representations and characters (1) • 20C20 Modular representations and characters • 20C25 Projective representations and multipliers • 20C30 Representations of finite symmetric groups • 20C32 Representations of infinite symmetric groups • 20C33 Representations of finite groups of Lie type (2) • 20C34 Representations of sporadic groups • 20C35 Applications of group representations to physics • 20C40 Computational methods • 20C99 None of the above, but in this section
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/9177","timestamp":"2014-04-24T10:04:07Z","content_type":null,"content_length":"18714","record_id":"<urn:uuid:0ede0ada-dae3-41f3-8fec-75cb8b20b60a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Winston, GA Math Tutor Find a Winston, GA Math Tutor ...I am a grammar guru. I think diagramming sentences is as fun as solving a puzzle. However, as an English teacher, I also have the ability to take a complex, abstract concept (grammar), break it down for understanding, and, more importantly, show students how to use it to construct meaningful writing and speaking. 15 Subjects: including ACT Math, English, Spanish, grammar ...Chemistry is my favorite class of all time!I have taught Physical Science which incorporates chemistry and Physics. Have taken MANY college Chemistry courses. I am highly qualified, and have taught Physical Science for four years. 11 Subjects: including algebra 1, algebra 2, biology, chemistry ...She then went on to graduate from the Georgia Tech with a degree in Applied Mathematics and a minor in Economics. She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship. She even studied abroad in Ireland during those three years! 22 Subjects: including algebra 2, reading, differential equations, ACT Math ...I earned a BS in math and physics from the University of Alabama in Huntsville and a MS in physics from Georgia Tech. I am currenly working on a PhD in physics at Georgia Tech. I have tutored several students in math in physics including trigonometry and have seen very positive results. 11 Subjects: including calculus, trigonometry, precalculus, algebra 1 ...I am flexible on the weekends, and have a library nearby that we can use for tutoring purposes, as well as a business center in my apartment. I am also a volunteer for the Junior Achievement Financial Literacy program here in Atlanta, GA, and have gone to several schools in the area to teach mid... 
5 Subjects: including algebra 1, algebra 2, prealgebra, precalculus
{"url":"http://www.purplemath.com/winston_ga_math_tutors.php","timestamp":"2014-04-16T04:13:36Z","content_type":null,"content_length":"23761","record_id":"<urn:uuid:f638fb72-9324-4953-9213-57af07ea9ecc>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
from Wiktionary, Creative Commons Attribution/Share-Alike License

• n. The doctrine that mathematics is a branch of logic in that some or all mathematics is reducible to logic.

from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.

• n. (philosophy) the philosophical theory that all of mathematics can be derived from formal logic

• The rejoinder to this is that the similarity that Frascolla, Black and Savitt recognize does not make Wittgenstein's theory a “kind of logicism” in Frege's or Russell's sense, because Wittgenstein does not define numbers “logically” in either Frege's way or Russell's way, and the similarity (or analogy) between tautologies and true mathematical equations is neither an identity nor a relation of reducibility.

• “the philosophy of arithmetic of the Tractatus … as a kind of logicism” (Frascolla, 1994, 37).

• This attitude allowed him to accommodate various ideas coming from rival foundational directions, that is, logicism, formalism and intuitionism.

• A principled demarcation of logical constants might offer an answer to this question, thereby clarifying what is at stake in philosophical controversies for which it matters what counts as logic (for example, logicism and structuralism in the philosophy of mathematics).

• Tarski also showed new perspectives for logicism by defining logical concepts as invariants under one-to-one transformations.

• No. To see why it is not, notice that the ascription of limitations and confusions to his logical theory depends almost entirely on taking a special point of view on the nature of logic, namely the viewpoint of Fregean and Russellian logicism, which posits the reducibility of mathematics (or at least arithmetic) to some version of second-order logic.

• Fregean logicism is just one way in which this template can be developed; some other ways will be mentioned below.
• The slide towards this sort of formalist attitude to axioms can also be traced through Frege's logicism.

• Pragmatism's injunction to abandon metaphysics might then be thought of as setting the stage for the radically different idioms of Heidegger's ontology and Carnap's logicism, or what is sometimes called the Continental/analytic divide.

• PFO alone does not have sufficient expressive power to accommodate the needs of neo-logicism.
{"url":"https://www.wordnik.com/words/logicism","timestamp":"2014-04-16T19:52:43Z","content_type":null,"content_length":"31171","record_id":"<urn:uuid:d1613af8-e555-433a-8a9e-c6e8be6b3931>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
PIRSA - Perimeter Institute Recorded Seminar Archive

Pedagogical Introduction: Tensor Networks and Geometry, the Renormalization Group and AdS/CFT

Abstract: One might be confused by the proliferation of tensor network states, such as MPS, PEPS, tree tensor networks [TTN], MERA, etc. What is the main difference between them? In this talk I will argue that the geometry of a tensor network determines several properties of the state that is being represented, such as the asymptotic scaling of correlations and of entanglement entropy. I will also describe the relation between the MERA and the Renormalization Group, and will review

Brian Swingle
Date: 25/10/2011 - 9:00 am
{"url":"http://pirsa.org/10100098","timestamp":"2014-04-19T09:36:27Z","content_type":null,"content_length":"8034","record_id":"<urn:uuid:f1946a5d-780e-4318-a2a4-fff61421d531>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra Tutors Collierville, TN 38017

Math, Physics, and Test Prep Tutor

...Throughout my academic career, I was privileged to learn new and challenging ways to explore our world through the lens of Math, Physics, and Engineering. As a result, I developed a strong background in Algebra, Calculus, Linear Algebra, and Differential Equations...

Offering 10+ subjects including algebra 1 and algebra 2
{"url":"http://www.wyzant.com/Red_Banks_algebra_tutors.aspx","timestamp":"2014-04-17T10:30:06Z","content_type":null,"content_length":"55543","record_id":"<urn:uuid:b2dfa0b0-d160-4749-81e8-00b33f7a5bcc>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

If a, b and c are positive integers and if a=2b and a^2+b^2=c, which of the following cannot equal c?
a) 5  b) 20  c) 50  d) 125  e) 500

• 9 months ago

[drawing: a = 2b, so c = a^2 + b^2 = (2b)^2 + b^2 = 5b^2]

c is 5 times a square number. Divide each of the possible answers by 5 and see which one isn't a square number.

man really lol

Jack seriously I am not a smart person xD stahp

@.Sam. please save time and help

You have a relation between two of the three variables, so use it. You will be eliminating one of them in the process. Finally you get a simple expression, as solved above by a member. Try out all the given options and get to the correct answer.

DDCamp gave a really nice method, why am i being tagged?

Please stop tagging me. That was 12 notifications when I logged on JUST from you.

im so sorry
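The hint in the thread is easy to check mechanically. A short Python script of my own (not from the thread) that divides each option by 5 and tests whether the quotient is a perfect square:

```python
import math

def can_be_c(value):
    """c = a^2 + b^2 with a = 2b means c = 5 * b^2, so value / 5 must be a perfect square."""
    if value % 5 != 0:
        return False
    q = value // 5
    return math.isqrt(q) ** 2 == q

for option in (5, 20, 50, 125, 500):
    print(option, can_be_c(option))
# only 50 prints False: 50 / 5 = 10 is not a perfect square, so the answer is c)
```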
{"url":"http://openstudy.com/updates/51e18b1de4b076f7da421e24","timestamp":"2014-04-18T03:35:14Z","content_type":null,"content_length":"82726","record_id":"<urn:uuid:f1d16030-e5fb-4671-a98a-358e92523c21>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Program & Workshop Chairs

S. Autexier (DFKI Bremen, Germany)
B. Beckert (Karlsruhe Institute of Technology, Germany)

W. Ahrendt (Chalmers University of Technology)
J. (Middlesex University)
I. Cervesato (Carnegie Mellon University)
J. Fleuriot (University of Edinburgh)
M. Huisman (University of Twente)
D. Hutter (DFKI GmbH)
R. Hähnle (Technical University of Darmstadt)
D. Kapur (University of New Mexico)
G. Klein (NICTA and UNSW)
J. Leslie-Hurd (Intel Corporation)
F. Martinelli (IIT-CNR)
C. Meadows (NRL)
S. Merz (Inria Nancy)
T. Nipkow (TU München)
L. Paulson (University of Cambridge)
J. Schumann (SGT, Inc/NASA Ames)
K. Stenzel (University of Augsburg)

Call for papers: PDF – ASCII

Important dates

Abstract Submission Deadline: April 17th, 2014
Paper Submission Deadline: April 25th, 2014
Notification of acceptance: May 20, 2014
Final version due: May 27, 2014
Workshop date: July 23–24, 2014

If you need further information, do not hesitate to contact the workshop chairs by sending an e-mail to mailto:verify2014@informatik.uni-bremen.de?subject=Verify-2014.

The formal verification of critical information systems has a long tradition as one of the main areas of application for automated theorem proving. Nevertheless, the area is still of growing importance, as the number of computers affecting everyday life and the complexity of these systems are both increasing. The purpose of the VERIFY workshop series is to discuss problems arising during the formal modeling and verification of information systems and to investigate suitable solutions. Possible perspectives include those of automated theorem proving, tool support, system engineering, and applications.

For automated theorem proving, each verification project is the source of numerous deduction problems that are not only interesting and challenging, but also of practical relevance. On the one hand, such proof obligations can serve as examples for experimenting with general-purpose deduction techniques and tools.
On the other hand, deduction techniques can be tailored to typical classes of verification problems.

Tool support is essential in order to deal with the numerous proof obligations arising in practical verification. In particular, powerful theorem provers are required to provide a high degree of automation. Moreover, tool support is also necessary for making the development of large specifications feasible, for keeping ongoing developments in a consistent state, and for supporting the reuse of previously constructed specifications and proofs. Often, satisfactory tool support can only be achieved by combining different systems.

Engineering techniques are needed for making the formal modeling and analysis of complex information systems feasible. Specifications become more manageable when being developed in a modular fashion and on different levels of abstraction. When a well-defined engineering process is applied, verification techniques can be tailored to the deduction problems that typically originate from this process.

Applications include the verification of functional properties, of safety properties, of security properties, and of fault tolerance. Evaluation criteria like the Common Criteria, for instance, require the construction of formal security models that constitute a basis for a formal verification. Verification case studies are necessary for evaluating the feasibility of verification techniques in practice.

The VERIFY workshop series aims at bringing together people who are interested in the development of safety-critical and security-critical systems, in formal methods, in the development of automated theorem proving techniques, and in the development of tool support. Practical experiences gained in realistic verifications are of interest to the automated theorem proving community, and new theorem proving techniques should be transferred into practice.
The overall objective of the VERIFY workshops is to identify open problems and to discuss possible solutions under the theme

  What are the verification problems? What are the deduction techniques?

The 2014 edition of VERIFY aims for extending the verification methods for processes implemented in hard- and software to processes that may well include computer-assistance, but have a large part or a frequent interaction with non-computer-based process steps. Hence the 2014 edition will run under the focus theme

  Verification Beyond IT Systems

A non-exclusive list of application areas with these characteristics is:

• Ambient assisted living
• Diagnostics and repair processes
• Business systems and processes
• Production logistics systems and processes
• Clinical processes
• Transportation logistics
• Intelligent home systems and processes
• Social systems and processes (e.g., voting systems)

Relevant issues in these areas are safety and security, but especially also fault-tolerance, flexibilization, run-time adaptation, etc.

The scope of VERIFY includes topics such as:

+ ATP techniques in verification
+ Integration of ATPs and CASE-tools
+ Case studies (specification and verification)
+ Management of change
+ Combination of verification tools
+ Refinement and decomposition
+ Compositional and modular reasoning
+ Reliability of mobile computing
+ Experience reports on using formal methods
+ Reuse of specifications and proofs
+ Formal methods for fault tolerance
+ Safety-critical systems
+ Gaps between problems and techniques
+ Security models
+ Information-flow security
+ Tool support for formal methods

Submissions are encouraged in one of the following two categories:

• A. Regular paper: Submissions in this category should describe previously unpublished work (completed or in progress), including descriptions of research, tools, and applications. Papers must be 5–14 pages long (in EasyChair style) or 6–15 pages long (in Springer LNCS style).

• B.
Discussion paper: Submissions in this category are intended to initiate discussions and hence should address controversial issues, and may include provocative statements. Papers must be 3–14 pages long (in EasyChair style) or 3–15 pages long (in Springer LNCS style).

Submission of papers is via EasyChair at http://www.easychair.org/conferences/?conf=verify2014. Final versions of accepted papers have to be prepared with LaTeX using the EasyChair class. Each accepted paper shall be presented at the workshop and at least one author of each paper must attend the workshop. For each presentation there will be 40 minutes (including about 10 minutes discussion). For discussion papers, 15–20 minutes shall be reserved for discussion. The workshop program also includes invited talks.

Workshop Proceedings

In addition to informal proceedings, a special issue in a journal on the topic of the workshop is envisaged. Participants of VERIFY-2014 are particularly encouraged to submit a paper to the special issue, but other submissions will also be welcome.
{"url":"http://www.informatik.uni-bremen.de/~autexier/VERIFY-2014/","timestamp":"2014-04-19T14:28:43Z","content_type":null,"content_length":"12569","record_id":"<urn:uuid:003db07a-bed9-4ad4-93b6-a790482c3e9f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding lots of discrete vectors in fairly general position

How many vectors can there be in $\mathbb{F}_2^{2n}$ such that no $n$ of them form a linearly dependent set?

The bounds I have so far are embarrassingly far apart, though that probably means I should have thought about the question for longer before posting it.

To get an upper bound, observe that you can partition $\mathbb{F}_2^{2n}$ into $2^{n+2}$ translates of an $(n-2)$-dimensional subspace. If you choose more than $(n-1)2^{n+2}$ vectors, then $n$ of them must lie in one of those translates, and therefore in an $(n-1)$-dimensional subspace. So you definitely can't choose more than $Cn2^n$ vectors with the required property.

In the other direction, if you choose $M$ vectors randomly, then the probability that some fixed set of $n$ of them lives in an $(n-1)$-dimensional subspace is at most $n2^{-n}$ (since one of them must lie in the linear span of the others). So the expected number of problematic sets of size $n$ is at most $\binom Mn n2^{-n}$. If this is at most $M/2$, then we can get rid of a vector from each problematic set and we end up with no such sets. But for $n\binom Mn$ to be less than $2^n$ we basically need $M$ to be proportional to $n$, so this gives a lower bound of something like $2n$, which is pathetic as we could have just taken $2n$ linearly independent vectors. I end up with a similarly pathetic bound if I try to pick vectors one by one, always avoiding the subspaces that the previous vectors require me to avoid.

I think I'm slightly more convinced by the lower bound, pathetic as it is. My rough reason is that the difficulty I run into feels pretty robust, and also that the result I prove in the upper bound is much stronger than it needs to be (since the subspace I obtain is essentially a translate of some fixed subspace). But basically I can't at the time of writing see even roughly what the bound should be.
co.combinatorics linear-algebra

I think I can do the smallest cases. When $n=2$, no $n$ (distinct, non-zero) vectors form a linearly dependent set, so you can take all 15 non-zero elements of ${\bf F}_2^4$. When $n=3$, you are looking for the largest sum-free subset of ${\bf F}_2^6$, and that's the set of odd weight elements, of which there are 32. – Gerry Myerson Apr 30 '12 at 23:36

For $n=4$, if your set has $k$ members, there are ${k \choose 2}$ pairs which must all have distinct sums in ${\mathbb F}_2^8$, so ${k \choose 2} \le 2^8-1$, and $k \le 23$. The best solution I've been able to find so far has 17 members. – Robert Israel May 1 '12 at 0:14

Consider a matrix whose columns are a collection of vectors that do have the required property, say there are $N$. (We can assume the vectors span the full space.) Take the linear code $C$ with this check matrix. Its length is $N$. No $n$ vectors are dependent, so the minimal distance $D$ of $C$ is strictly greater than $n$. Moreover, the dimension $K$ of $C$ is $N-2n$. The question is for which $N$ there can (still) exist an $[N,N-2n,n+1]$ binary linear code. Combining this with results from coding theory should yield something. Sorry, I cannot check right now if this something will be useful. – quid May 1 '12 at 0:41

@Robert: That bound should generalize to $\binom{k}{n/2}\leq 2^{2n}-1$, no? – Will Sawin May 1 '12 at 1:37
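[The counting bounds in the comments are easy to machine-check. A quick Python sketch of my own, using the interpretation that all $\binom{M}{n/2}$ subset sums must be distinct and non-zero; the helper `largest_M` is mine, not from the thread:]

```python
from math import comb

# Robert Israel's n = 4 case: C(k, 2) <= 2^8 - 1 forces k <= 23
assert comb(23, 2) <= 2 ** 8 - 1  # 253 <= 255
assert comb(24, 2) > 2 ** 8 - 1   # 276 >  255

# Will Sawin's generalization: C(M, n/2) <= 2^(2n) - 1 (for even n).
# largest_M returns the biggest M still consistent with that inequality.
def largest_M(n):
    M = n  # start from a trivially small size and grow
    while comb(M + 1, n // 2) <= 2 ** (2 * n) - 1:
        M += 1
    return M

print(largest_M(4))  # 23, matching the comment
print(largest_M(8))  # 36, the bound this argument gives for n = 8
```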
The approach is as detailed below, but instead of (or in some sense in addition to) the Hamming bound we use McEliece--Rodemich--Rumsey--Welch bound which gives $$R \le h (1/2 - \sqrt{\delta (1 - \delta)}) +o(1)$$ with notation as below. Now, since $R = 1 - 2 \delta$ and ignoring the $o(1)$ one gets that the only positive $\delta \le 0.5$ for which this holds is in fact $0.5$. (I did not 'prove' this, but the functions seems nice enough to rely on computer aid and it should also be not too hard and is likely even written somewhere that this is so; for an illustration see for example the figure on page 2 in lecture notes of Atri Ruda; the Plotkin bound illustrates the $1 - 2 \delta$, the Elias--Bassalygo bound stays below it, and MRRW would at least where it is relevant be still better/smaller; one also sees nicely that the Hamming bound only works until $0.3...$; the GV is a lower bound so not relevant here). Thus, the ratio $n/N$ needs to approach $0.5$ (we know already it cannot go to zero), and the claim follows. While there are already two nice answers, I thought I still sketch how one also could get the linear bound by the approach given in my comment. Suppose there is a collection of $N$ such vectors. We seek an upper bound for $N$, so one can assume they generate the space as otherwise on could take add some vectors. up vote Now let $H$ be the matrix whose columns are these vectors. The code with this check matrix will be an $[N,N-2n]$ binary lineary code with minimal distance greater $n$. 14 down vote Write $R = (N-2n)/N$ and $\delta = n/N$. By the Hamming bound one knows that $R \le 1- h(\delta/2) + o(1)$, as $N$ tends to infinity, where $h$ is the binary entropy function. Since $R = 1 - 2 \delta $ one gets that $h(\delta/2) \ le 2 \delta + o(1)$. Since the derivative of $h$ at $0$ is $+\infty$ one gets that $\delta$ needs to stay away from $0$ showing the ratio $n/N$ stays bounded away from $0$. 
Moreover, 'solving' the equation for $\delta$ would yield an actual constant. Possibly using more sophisticated bounds from coding theory could yield a better constant; I will try when I have better access to a CAS and report if I find something.

Some small values, up to 20 and then selected others; the first two match those given in comments by Gerry Myerson and Robert Israel. I retrieved and typed them quickly, so small errors are possible. Format: (n, max number of vectors); if the latter is not just a number, it is to be interpreted as bounds in the obvious way.

(3,32), (4,17), (5,24), (6,24), (7,28), (8,23), (9,28), (10,31), (11,34), (12,32), (13,35-38), (14,37), (15,40), (16,39), (17,43-45), (18,44-48), (19,48-50), (20,48), (25,59-60), (30,70), (40,88-89), (50,109-110), (100,209-211)

Comment: Perhaps I should add that the constant one gets with the Hamming bound is (essentially or even completely) the one David Speyer got. (Indeed, in some sense looking at the proof of the Hamming bound, the argument is 'the same'.) – quid May 1 '12 at 2:32

Answer:

It seems to me that Robert Israel's argument generalizes. The sum of any $\frac{n}{2}$ vectors in your collection has to be distinct, in order for any $n$ to be linearly independent. From this one gets the inequality $$\binom{M}{n/2}\le 2^{2n}-1$$ which in particular implies $M\le O(n)$.

Answer (David Speyer):

$\def\FF{\mathbb{F}_2}$ Indeed, the linear bound is close to right. I can show that we can't beat about $3.197 n$. For convenience, set $m=n/2$. Fix a constant $c$ and suppose that we can find $\geq cn$ vectors for infinitely many $n$. Notice that every $m$ element subset of our $cn$ vectors must have a different sum, or we could find a relation with at most $n$ terms. So $$\binom{cn}{m} \leq 2^{2n}$$ or $$(2cm)(2cm-1) \cdots (2cm-m+1) \leq 2^{2n} m! \leq 16^m m^m (e+o(1))^{-m},$$ so $$(2c)(2c-1/m)(2c-2/m) \cdots (2c-1+1/m) \leq (16 e^{-1} (1+o(1)))^m.$$ Taking logs, $$\frac{1}{m} \sum_{i=0}^{m-1} \log (2c-i/m) \leq \log 16 - 1 + o(1),$$ or, sending $m \to \infty$, $$\int_{2c-1}^{2c} \log t \, dt \leq \log 16 - 1.$$ I get that this forces $c \leq 3.1965677$.

Comment: I take it this means we can't beat about $3.197n$, provided that $n$ is sufficiently large. We can certainly beat it for $n=2,3,4$. – Gerry Myerson May 1 '12 at 5:42
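Two of the claims above are easy to sanity-check numerically. The short Python sketch below (my addition, not part of the original thread) verifies Gerry Myerson's $n=3$ construction — the 32 odd-weight vectors of ${\bf F}_2^6$ are sum-free, hence no three are linearly dependent — and solves David Speyer's final equation $\int_{2c-1}^{2c} \log t \, dt = \log 16 - 1$ for $c$ by bisection, using the antiderivative $t \log t - t$.

```python
import math
from itertools import combinations

# (a) Gerry Myerson's n = 3 construction: vectors of F_2^6 are encoded
# as 6-bit integers, and a + b over F_2 is bitwise XOR.  The set of
# odd-weight vectors is sum-free, since a ^ b always has even weight.
odd_weight = {v for v in range(64) if bin(v).count("1") % 2 == 1}
sum_free = all((a ^ b) not in odd_weight
               for a, b in combinations(odd_weight, 2))

# (b) David Speyer's constant: solve (natural logs)
#     2c*log(2c) - (2c - 1)*log(2c - 1) = log 16,
# which is the integral condition after applying t*log(t) - t.
def gap(c):
    return 2 * c * math.log(2 * c) - (2 * c - 1) * math.log(2 * c - 1) - math.log(16)

lo, hi = 1.0, 10.0            # gap(1) < 0 < gap(10), and gap is increasing in c
for _ in range(200):          # plain bisection
    mid = (lo + hi) / 2
    if gap(mid) < 0:
        lo = mid
    else:
        hi = mid
c_star = (lo + hi) / 2        # should be close to the 3.1965677 quoted in the answer
```

The bisection direction relies on the integrand $\log t$ being increasing, so the integral over $[2c-1,2c]$ grows with $c$.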
Unstable attractors in manifolds

Sánchez-Gabites, J. J. (2010) Unstable attractors in manifolds. Transactions of the American Mathematical Society, Vol. 362 (No. 7), pp. 3563-3589. ISSN 0002-9947

Abstract: Assume that K is a compact attractor with basin of attraction A(K) for some continuous flow phi in a space M. Stable attractors are very well known, but otherwise (without the stability assumption) the situation can be extremely wild. In this paper we consider the class of attractors with no external explosions, where a mild form of instability is allowed. After obtaining a simple description of the trajectories in A(K) - K we study how K sits in A(K) by performing an analysis of the Poincare polynomial of the pair (A(K), K). In case M is a surface we obtain a nice geometric characterization of attractors with no external explosions, as well as a converse to the well known fact that the inclusion of a stable attractor in its basin of attraction is a shape equivalence. Finally, we explore the strong relations which exist between the shape (in the sense of Borsuk) of K and the shape (in the intuitive sense) of the whole phase space M, much in the spirit of the Morse-Conley theory.

Item Type: Journal Article
Subjects: Q Science > QA Mathematics
Divisions: Faculty of Science > Mathematics
Library of Congress Subject Headings (LCSH): Attractors (Mathematics), Manifolds (Mathematics)
Journal: Transactions of the American Mathematical Society
Publisher: American Mathematical Society
ISSN: 0002-9947
Date: July 2010
Volume: Vol. 362
Number: No. 7
Number of Pages: 27
Page Range: pp. 3563-3589
Status: Peer Reviewed
Publication Status: Published
Access rights: Open Access
Funder: Spain. Dirección General de Investigación Científica y Técnica (DGICYT)
URI: http://wrap.warwick.ac.uk/id/eprint/5699
An Introduction to Riemann Surfaces, Algebraic Curves and Moduli Spaces
by Martin Schlichenmaier

Type: eBook
Released: 1989
Publisher: Springer
Page Count: 161
Format: djvu
Language: English
ISBN-10: 354050124X
ISBN-13: 9783540501244

This lecture is intended as an introduction to the mathematical concepts of algebraic and analytic geometry. It is addressed primarily to theoretical physicists, in particular those working in string theories. The author gives a very clear exposition of the main theorems, introducing the necessary concepts by lucid examples, and shows how to work with the methods of algebraic geometry. As an example he presents the Krichever-Novikov construction of algebras of Virasoro type. The book will be welcomed by many researchers as an overview of an important branch of mathematics, a collection of useful formulae and an excellent guide to the more extensive mathematical literature.
Scatter Table using Open Addressing
Data Structures and Algorithms with Object-Oriented Design Patterns in C++

An alternative method of dealing with collisions which entirely does away with the need for links and chaining is called open addressing. The basic idea is to define a probe sequence for every key which, when followed, always leads to the key in question. The probe sequence is essentially a sequence of functions h_0, h_1, ..., h_{M-1}, each of which maps keys to array positions. When inserting an item x into the scatter table, we examine array locations h_0(x), h_1(x), h_2(x), ... until an empty cell is found. Similarly, when looking for x in the scatter table we examine the same sequence of locations in the same order.

The most common probe sequences are of the form

    h_i(x) = (h(x) + c(i)) mod M,

where h(x) is the same hash function that we have seen before. I.e., the function h maps keys into integers in the range from zero to M-1. The function c(i) represents the collision resolution strategy. It is required to have the following two properties:

Property 1: c(0) = 0. This ensures that the first probe in the sequence is h_0(x) = h(x).

Property 2: The set of values {c(0) mod M, c(1) mod M, ..., c(M-1) mod M} must contain every integer between 0 and M-1. This second property ensures that the probe sequence eventually probes every possible array position.
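The scheme above translates directly into code. Below is a minimal Python sketch (my own illustration — the book itself works in C++) of open addressing with linear probing, i.e. c(i) = i, which satisfies both properties: c(0) = 0, and c(0), ..., c(M-1) taken mod M hit every array position.

```python
class OpenScatterTable:
    """Minimal open-addressing table with linear probing: h_i(x) = (h(x) + i) mod M."""

    def __init__(self, m):
        self.m = m
        self.slots = [None] * m          # None marks an empty cell

    def _probe(self, key):
        h = hash(key) % self.m
        for i in range(self.m):          # c(i) = i visits every cell exactly once
            yield (h + i) % self.m

    def insert(self, key):
        for j in self._probe(key):
            if self.slots[j] is None or self.slots[j] == key:
                self.slots[j] = key
                return j
        raise OverflowError("table is full")

    def contains(self, key):
        for j in self._probe(key):
            if self.slots[j] is None:    # an empty cell ends the probe sequence
                return False
            if self.slots[j] == key:
                return True
        return False
```

Deletion is deliberately omitted: with open addressing it requires tombstone markers, since simply clearing a cell would break probe sequences that pass through it.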
Real Analysis Help

March 4th 2009, 10:54 AM
Let X be a topological space and let $\{f_n\}$ be a sequence of continuous functions from X to the real numbers. If $f(x)=\lim_{n\to\infty} f_n(x)$ for all x in X, show that the set of points where f is continuous is a $G_\delta$ set.

March 4th 2009, 11:29 AM
Sorry, I have more info I left off. Hint: Let $V_n(\epsilon)=\{x \in X : |f_n(x)-f(x)|<\epsilon\}$ and define $O(\epsilon)=\bigcup_{n=1}^\infty \mathrm{Int}(V_n(\epsilon))$. Then f is continuous on the $G_\delta$ given by $\bigcap_{k=1}^\infty O(1/k)$.

March 5th 2009, 12:18 AM
Let $U_n = \{x\in X : \exists\text{ open }V_x\ni x \text{ such that }v,w\in V_x\,\Rightarrow\,|f(v)-f(w)|<1/n\}.$ If $x\in U_n$ then $V_x\subseteq U_n$, so $U_n$ is open. Also, f is continuous at x if and only if $x\in\bigcap_n U_n$. Thus the set of points of continuity of f is a $G_\delta$.

What bothers me about that argument is that it seems to apply to an arbitrary function f. (It doesn't assume that f is the pointwise limit of a sequence of continuous functions.) Am I missing something?

March 6th 2009, 02:29 PM
I think you're right. But this problem (and the suggested solution) also reminds me of something else, which is somewhat more interesting: under the additional hypothesis that $X$ is a complete metric space (or any Baire space), $f$ is continuous on a dense $G_\delta$ set. This can be proved by considering $V_n(\varepsilon)=\{x\in X \mid \forall m\geq n, \forall p\geq n, |f_m(x)-f_p(x)|\leq \varepsilon\}$.
This is a closed subset of $X$. Then let $O(\varepsilon)=\bigcup_{n=1}^\infty {\rm Int}(V_n(\varepsilon))$. This is an open subset, and it is dense because of Baire's theorem: if not, then there would be $x\in X$ and a closed ball $B=\overline{B}(x,\delta)$ such that $O(\varepsilon)\cap B=\emptyset$, which means ${\rm Int}(V_n(\varepsilon))\cap B=\emptyset$ for all $n$, hence ${\rm Int}(V_n(\varepsilon)\cap B)=\emptyset$, and Baire's theorem would say ${\rm Int}\left(\bigcup_{n=1}^\infty (V_n(\varepsilon)\cap B)\right)=\emptyset$, in contradiction with $\bigcup_{n=1}^\infty (V_n(\varepsilon)\cap B)=B$ (which holds because the pointwise convergence gives $\bigcup_n V_n(\varepsilon)=X$). Finally, Baire's theorem shows that $O=\bigcap_{k=1}^\infty O(1/k)$ is dense. And one can see that $f$ is continuous on $O$.
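A concrete example illustrating both statements (my addition, not part of the original thread):

On $X=[0,1]$, take $f_n(x)=x^n$. Each $f_n$ is continuous and $f_n\to f$ pointwise, where $f(x)=0$ for $0\le x<1$ and $f(1)=1$. The set of points of continuity of $f$ is exactly $[0,1)$: this is open in $[0,1]$ and hence trivially a $G_\delta$, in agreement with the first result; and it is dense in $[0,1]$, in agreement with the Baire-category refinement, since $[0,1]$ is a complete metric space.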
Fircrest, WA ACT Tutor

Find a Fircrest, WA ACT Tutor

...Be prepared to work hard and score high! Again, I'm willing to set up a free trial session so you can see how I work. This will also give me a chance to see where you are at, and what's holding you back.
16 Subjects: including ACT Math, geometry, Chinese, algebra 1

...During that time, I worked in 3 schools in 3 different cities, all with a very different student population. So I have experience teaching Geometry to students at every level. Before becoming a middle school and high school teacher, I worked as a para-educator in elementary schools for 3 years.
16 Subjects: including ACT Math, geometry, algebra 2, algebra 1

...I have used this application since 1995. It is one of my favorites. If you need help getting it to do what you need, let me know.
46 Subjects: including ACT Math, English, reading, algebra 1

I have worked at Tacoma Community College for six years; when asked by a student to describe what I tutor, he responded by saying, "he DOESN'T tutor higher level biology or business classes if he can help it, but he does just about everything else." I have held a Master Tutor Certification with the ...
27 Subjects: including ACT Math, chemistry, geometry, statistics

With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I cannot promise a quick fix, but I will not stop working if you make the effort.
-Bill
16 Subjects: including ACT Math, calculus, algebra 1, algebra 2
Fun Problem

April 21st 2005, 10:34 AM #1
Consider a member of the Olympic shooting team who is shooting clay pigeons and a member of the Olympic diving team. Each is competing for a score. Who is conducting a sequence of Bernoulli trials?
a. neither
b. the diver
c. the shooter
d. both

May 1st 2005, 10:42 AM #2
That depends on what outcome you are interested in. For Bernoulli trials, the only possible outcomes are "success" and "failure". In addition, the success probability must be the same for all trials.

So the clay pigeon shooter is conducting Bernoulli trials. The diver ... not if she is actually interested in the numerical score, which is more complicated than just a "success/failure".
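To make the "two outcomes, constant success probability" point concrete, here is a quick simulation (my illustration, not part of the original thread; the shooter's hit probability p = 0.7 is made up for the example).

```python
import random

random.seed(0)
p = 0.7  # hypothetical hit probability for the shooter

# The shooter: each shot is a Bernoulli trial -- exactly two outcomes
# (hit = 1, miss = 0), with the same success probability every time.
shots = [1 if random.random() < p else 0 for _ in range(10_000)]
hit_rate = sum(shots) / len(shots)

# The diver: each dive produces a numerical score, not success/failure,
# so the sequence of scores is not a sequence of Bernoulli trials.
dive_scores = [round(random.uniform(0.0, 10.0), 1) for _ in range(5)]
```

With 10,000 trials the empirical hit rate lands very close to p, as the law of large numbers predicts.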
Faculty news.

• Joseph Richards joined our department in 2013, after completing his Ph.D. in Statistics at Carnegie Mellon, and a postdoctoral fellowship at UC Berkeley. His main area of interest is astrostatistics.
• Mariel Vazquez - PECASE Award 2012 (Presidential Early Career Award for Scientists and Engineers).
• Matt Beck - Mathematical Association of America Haimo Award for Distinguished College or University Teaching of Mathematics.
• Federico Ardila - NSF CAREER Award, 2010-2015.
• Yitwah Cheung - NSF CAREER Award, 2010-2015.
• Eric Hsu - NSF CAREER Award, 2004-2009.
• Mariel Vazquez - NSF CAREER Grant, 2011-2016.
• Federico Ardila - Diverse Issues in Higher Education, Emerging Scholar Award, 11.
• Mariel Vazquez - Diverse Issues in Higher Education, Emerging Scholar Award, 12.
• Federico Ardila - NSF Research Grant, 2008-2011. Combinatorics in geometry.
• Javier Arsuaga - NIH RO1, 2013-2017. Reconstruction of 3D Genome Architecture from Chromatin Conformation Capture Data.
• Javier Arsuaga - NSF RUI Grant, 2012-2015. Using computational homology to detect DNA Copy Number Aberrations in Breast Cancer.
• Javier Arsuaga - NIH Research Grant, 2007-2010. Modeling of DNA repair.
• Javier Arsuaga and Mariel Vazquez - NIH Research Grant, 2008-2013. Multiscale analysis of CGH arrays from breast cancer patients using computational algebraic topology.
• Javier Arsuaga and Mariel Vazquez - NSF Research Grant, 2009-2013. Topological characterization of DNA organization in bacteriophages.
• Matthias Beck - NSF Research Grant, 2012-2015. Applications to Ehrhart theory.
• Matthias Beck - NSF Research Grant, 2008-2012. Computations in Ehrhart theory.
• Matthias Beck - NSF GK-12 Grant, 2009-2014. Creating Momentum through Communicating Mathematics.
• Yitwah Cheung - NSF Research Grant, 2007-2010. Interactions between number theory and ergodic theory.
• Joseph Gubeladze - NSF Research Grant, 2013-2016. Algebraic combinatorics.
• Joseph Gubeladze - NSF Research Grant, 2010-2013. Four problems in polytopal algebraic combinatorics.
• Joseph Gubeladze - NSF Research Grant, 2006-2009. Convex point configurations in algebraic combinatorics.
• Eric Hsu, Judy Kysh, Diane Resek - NSF Math Science Partnership, 2003-2009. Revitalizing algebra.
• Shidong Li - AFOSR Research Grant, 2011-2014. Frames and Compressed Sensing.
• Shidong Li - NSF Research Grant, 2010-2013. Development of Nonorthogonal Fusion Frames and Reflective Sensing with Applications.
• Shidong Li - NSF Research Grant, 2007-2010. Development of frame extensions and applications.
• Alex Schuster - NSF Research Grant, 2006-2009. Sampling and interpolation on Riemann surfaces and in several complex variables.
• Mariel Vazquez - NIH Research Grant, 2007-2010. Simulation of unknotting by Type II topoisomerases.
• Federico Ardila - Keynote Speaker, NSF Mathematics Institutes' Modern Math Workshop, 2013.
• Federico Ardila - Plenary Speaker, Colombian Mathematical Congress, 2011.
• Federico Ardila - Plenary Speaker, Mexican Mathematical Congress, 2011.
• Federico Ardila - Minicourse, NSF Mathematics Institutes' Modern Math Workshop, 2011.
• Javier Arsuaga - Plenary Speaker, Conference on Computational Physics, 2012.
• Matthias Beck - Invited Speaker, Triangle Lectures in Combinatorics, 2013.
• Matthias Beck - Invited Speaker, Golden Section, MAA Meeting 2012.
• Matthias Beck - Invited Speaker, Southern California-Nevada Section, MAA Meeting 2011.
• Joseph Gubeladze - Plenary Speaker, International FPSAC Conference on Formal Power Series and Algebraic Combinatorics, 2010.
• Mariel Vazquez - Keynote Speaker, NSF Mathematics Institutes' Modern Math Workshop, 2011.

Department news.

The mathematics department at San Francisco State University hosted the following events:
• Bay Area Discrete Math Day 2011.
• FPSAC 2010, the 22nd Annual International Conference on Formal Power Series and Algebraic Combinatorics, in July 2010.
• The International Workshop on Optimal Frames and Operator Theory on Jan 17-19, 2010.
• The 1st San Francisco International Meeting on DNA Topology on April 22-26, 2009.
• The 2009 Spring Western Section Meeting of the American Mathematical Society on April 25-26, 2009.
• The Pamela Fong symposium, featuring De Witt Sumners, on March 18-19, 2009.
[Numpy-discussion] Confused with qr decomposition function

Charles R Harris
charlesr.harris@gmail...
Mon Nov 19 17:30:53 CST 2012

On Mon, Nov 19, 2012 at 4:17 PM, Virgil Stokes <vs@it.uu.se> wrote:

> I am using the latest versions of numpy (from
> numpy-1.7.0b2-win32-superpack-python2.7.exe) and scipy (from
> scipy-0.11.0-win32-superpack-python2.7.exe) on a windows 7 (32-bit)
> platform.
> I have used
> import numpy as np
> q,r = np.linalg.qr(A)
> and compared the results obtained from MATLAB (R2010B)
> [q,r] = qr(A)
> The q,r returned from numpy are both the negative of the q,r returned
> from MATLAB for the same matrix A. I believe that the q,r returned from
> MATLAB are correct. Why am I getting their negative from numpy?
> Note, I have tried this on several different matrices --- numpy always
> gives the negative of MATLAB's q,r values.
> [I mistakenly have already sent a similar email to the scipy list ---
> please excuse this mistake.]

They are both correct, the decomposition isn't unique. In particular, if both
algorithms use Householder reflections there are two possible reflection
planes at each step, one of which is more numerically stable than the other,
and the two choices lead to different signs at each step. That said, MATLAB
may be normalizing the result in some way or using some other algorithm.
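Both sign conventions are easy to see in code. The sketch below (mine, not part of the original thread) checks that numpy's factors reproduce A either way, then flips signs to the common convention diag(R) >= 0 — I am not claiming this is exactly what MATLAB does, only that it removes the sign ambiguity.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4))
q, r = np.linalg.qr(A)

# Either sign choice is a valid QR factorization of A:
# q @ r reconstructs A regardless of the per-column signs.

# Normalize to the convention diag(R) >= 0 by flipping matching
# signs in Q's columns and R's rows; the product Q @ R is unchanged
# because the sign matrix squares to the identity.
signs = np.sign(np.diag(r))
signs[signs == 0] = 1.0
q2 = q * signs              # scales column j of Q by signs[j]
r2 = signs[:, None] * r     # scales row j of R by signs[j]
```

After this normalization, two QR implementations that differ only by column signs will agree.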
Transformations

December 1st 2008, 09:17 AM
For each of the following linear transformations of R2, determine the 2 × 2 matrix representing that transformation (simplify your matrix entries as much as possible):

(i) a reflection in the x-axis;
(ii) a rotation about the origin through a counterclockwise angle of pi/2;
(iii) the transformation made by performing a reflection in the x-axis followed by a rotation about the origin through a counterclockwise angle of pi/2;
(iv) the transformation made by performing a rotation about the origin through a counterclockwise angle of pi/2 followed by a reflection in the x-axis.
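For checking answers numerically (my addition, not part of the original thread): with points as column vectors, reflection in the x-axis is F = [[1, 0], [0, -1]], a counterclockwise rotation by pi/2 is R = [[cos t, -sin t], [sin t, cos t]] with t = pi/2, and "apply F first, then R" corresponds to the matrix product R @ F, since matrices compose right to left.

```python
import numpy as np

theta = np.pi / 2
F = np.array([[1.0, 0.0], [0.0, -1.0]])            # (i) reflection in the x-axis
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # (ii) rotation by pi/2

RF = R @ F   # (iii) reflect first, then rotate
FR = F @ R   # (iv) rotate first, then reflect

# Sanity check on a point: reflecting (1, 2) to (1, -2) and then
# rotating by pi/2 should give (2, 1).
p = np.array([1.0, 2.0])
image = RF @ p
```

Note that (iii) and (iv) give different matrices, which is the point of the exercise: reflections and rotations do not commute.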
Non-negative least-squares

Non-negative least-squares (NNLS), introduced by Lawson & Hanson (1974), also solves the basic matrix equation algebraically, but subject to the added constraint that S contains no negative elements. In principle, the algorithm has the merit that, given sufficient time, it will satisfy well-defined termination conditions, and thus requires no arbitrary cutoff parameter. This makes it a `hands-off' algorithm whose output is not susceptible to mis-tuning by unfortunate choice of the input parameters. In practice, however, the computation time and memory usage can be impossibly large if the number of non-zero pixels exceeds about 6000-8000. The point source model output by NNLS is again smoothed with a Gaussian beam and added to any residual emission when making the final image.

NNLS distinguishes itself on bright, compact sources that neither `CLEAN' nor MEM can process adequately. Briggs showed that on such sources, both CLEAN and MEM produce artifacts that resemble calibration errors and that limit dynamic range. NNLS has no difficulty imaging such sources. It also has no difficulty with sharp edges, such as those of planets or of strong shocks, and can be very advantageous in producing models for self-calibration for both types of sources. Briggs (1995) showed that NNLS deconvolution can reach the thermal noise limit in VLBA images for which `CLEAN' produces demonstrably worse solutions. NNLS is therefore a powerful deconvolution algorithm for making high dynamic range images of compact sources for which strong finite support constraints are applicable.

1996 November 4 10:52:31 EST
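The Lawson & Hanson algorithm is available in standard libraries, e.g. scipy.optimize.nnls, so the idea is easy to try on a toy problem. The 1-D sketch below (my illustration — real interferometric deconvolution is of course far more involved) blurs a non-negative "sky" of point sources with a Gaussian beam and recovers a non-negative model with NNLS.

```python
import numpy as np
from scipy.optimize import nnls

# A 1-D toy: a non-negative "sky" of point sources, blurred by a beam.
n = 40
beam = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)   # Gaussian dirty beam
sky = np.zeros(n)
sky[[8, 20, 21]] = [3.0, 1.0, 2.0]                    # compact sources

# Express convolution as a matrix equation  dirty = B @ sky,
# where column i of B is the beam centered at pixel i.
B = np.array([np.convolve(np.eye(n)[i], beam, mode="same")
              for i in range(n)]).T
dirty = B @ sky

# Lawson-Hanson NNLS: least squares subject to model >= 0.
model, rnorm = nnls(B, dirty)
```

On this noiseless toy the data are exactly representable, so the NNLS residual is essentially zero and the non-negativity constraint is satisfied by construction.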
Mccullom Lake, IL

Find a Mccullom Lake, IL Calculus Tutor

Hi!! My name is Harry O. I have been tutoring high school and college students for the past six years. Previously I taught at Georgia Institute of Technology, from which I received a Bachelor's in Electrical Engineering and a Master's in Applied Mathematics.
18 Subjects: including calculus, physics, geometry, GRE

...I first began my journey helping others in college when I realized that I had a strong work ethic that allowed me to teach myself even if the material presented in class was inadequate. I often found myself mentoring colleagues through Chemistry classes, Philosophy classes, Spanish classes, and ...
26 Subjects: including calculus, Spanish, chemistry, writing

...One of the great difficulties many students face is the self-fulfilling prophecy that "I have always been poor at math and I will always be poor at math." The key to success is for the student to believe that they can become better at math. Therefore I have the student experience earned success in order to achieve some confidence.
12 Subjects: including calculus, geometry, algebra 1, algebra 2

...During my MBA, I took 20 graduate-level MBA courses in economics, finance, accounting, statistics and operations. In addition to the MBA, I have a PhD in Engineering, so I believe I am well qualified to teach mathematics courses as well as any business courses (undergraduate and MBA level). I have se...
22 Subjects: including calculus, physics, geometry, statistics

...Parametric Equations and Graphs. SEQUENCES AND SERIES. Sequences and Series.
17 Subjects: including calculus, reading, geometry, statistics
Here's the question you clicked on:
Throughout Lewis Carroll's book Alice's Adventures in Wonderland, Alice's size changes. Her normal height was about 50 inches. She came across a door, about 15 inches high, that led to a garden. Alice's height changed to 10 inches so she could visit the garden.
Responder: What is your question?
Brandon Lam: Sorry, I forgot the question. Here it is: find the ratio of the height of the door to Alice's height in Wonderland.
Brandon Lam: Are you there? Can you help? Is someone there?
Responder: When you read something with a ratio and they say find x to y (I'm using x and y as examples), just put it in the order they tell you: x/y. If they say find y to x, then y/x. Here it is 15/10, which reduces to 3/2. Decimal form is 1.5.
Primal Conundrum: Alright, it's a simple question but I suck at chemistry so I can't figure out how to run the numbers on this one. I know it can be figured out using the ideal gas law, but I have no idea how to use that. Help would be appreciated! I'm trying to figure out the increase in volume when boiling mercury, changing it from a liquid to a gas. For the purposes of this, the temperature can be assumed to be right around the boiling point of mercury (356.7 °C). To clarify, I know that water, for example, increases to about 1600 times its previous volume when going from a liquid to a gas. I'm trying to figure out what the number is for mercury. Any help would be great, thank you!

Use the ideal gas law. 1 N m^-2 = 1 pascal. PV = nRT, so at the boiling point (356.7 °C = 629.85 K) and atmospheric pressure: V = nRT/P = (187.8 g / 200.6 g Hg/mol) × (8.314472 m^3·Pa/(K·mol)) × (629.85 K) / (101325 Pa) = 0.0484 m^3. So 187.8 g of mercury has a volume of about 0.0484 m^3 in vapor form. You can do the math from there to work out the ratio.

When you say 0.0484 m^3, does the m stand for meters? Sorry, I'm trash at this stuff.

Assuming that's meters: 187.8 g of liquid mercury occupies only about 13.9 cm^3, so the vapor is roughly 3,500 times the liquid volume, give or take.
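As a quick check (not part of the original thread), the same ideal-gas estimate can be scripted per mole of mercury. The liquid density used below is an assumed room-temperature value; at the boiling point the liquid is somewhat less dense, so the true ratio is a bit smaller.

```python
# Ideal-gas estimate of mercury's liquid-to-vapor expansion ratio at its
# normal boiling point. The liquid density is an assumed value near 20 °C.
R = 8.314462          # J/(mol*K)
T = 356.7 + 273.15    # boiling point of mercury, K
P = 101325.0          # 1 atm, Pa
M = 200.59            # molar mass of Hg, g/mol
rho_liquid = 13.534   # g/cm^3 (assumption: room-temperature liquid density)

v_gas = R * T / P                   # molar volume of the vapor, m^3/mol
v_liq = (M / rho_liquid) * 1e-6     # molar volume of the liquid, m^3/mol
ratio = v_gas / v_liq
print(round(ratio))                 # roughly 3500
```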
Satellites and Dartboards Date: 02/21/2000 at 11:10:44 From: Brian Borden Subject: Probability of Accidentally Hitting a Satellite Assume we model our satellites in two groups. One group is at 500 km altitude, and the other group at geosynchronous altitude. Consequently, I can determine the surface area of the two spheres. I also assume a certain number of satellites in each sphere with a given average surface area for each. This yields a total surface area of satellites in each sphere. I also assume the satellites move randomly on each sphere for the sake of modeling simplicity. I realize this assumption is more valid for the lower satellites than for those at geosynchronous altitude. If I fire a laser beam with a known divergence, I can calculate the area it will cover on each of the two spheres. If I fire an instantaneous burst, what is the probability of hitting a satellite? I figure the probability must be proportional to the area of the satellites, proportional to the area of the beam, and inversely proportional to the area of the spherical cap above the local horizon. As the area of the beam approaches the area of the spherical cap, the probability should go to 1. Similarly, as the area covered by the satellites approaches the area of the cap, the probability should also go to 1. Obviously the probability can never exceed 1, thus as the area covered by the satellites and the area covered by the beam approach the area of the cap, the probability still must approach only 1. Also, when the area of the beam is greater than the area of the cap minus the area of the satellites, the probability of hitting a satellite must equal 1. I've tried (Asat + Abeam)/Acap, and (Asat/Acap)*(Abeam/Acap), but neither of these quite fit all the criteria that must apply. Also, I'm not certain if this is a conditional or an unconditional probability. 
The satellites could be anywhere, and the beam could be fired anywhere, but given that the beam is in a given spot, I need to know the probability that a satellite will also be there at the same time. Can you help? Date: 02/21/2000 at 22:57:53 From: Doctor Peterson Subject: Re: Probability of Accidentally Hitting a Satellite Hi, Brian. Since no expert on probability has taken your question, I'll try it. I think you have a couple of red herrings here. The "sky area" (your spherical cap) is irrelevant; if you think of a fixed laser and ask the probability that a satellite will be within your beam at a given time, the area of sky that you can aim at doesn't matter. Also, I suspect that the total surface area of the satellites will not be the determining factor; you could imagine a given total area either being covered just by one enormous (but still smaller than the moon) satellite, giving a small probability, or at the other extreme being divided into billions of micro-satellites scattered as dust, so that one will be hit with probability one! The actual sizes and number of satellites will be important. I'm going to proceed on the assumption that there are N satellites of about the same size, which is sufficiently smaller than the laser beam so that they can be thought of as points. (If their size is significant, I think you could adjust your measure of the beam diameter to make a larger effective diameter that would take into account that the center of the satellite could be some distance outside the beam and it would still count as a strike.) Now the probability that any one particular satellite will be hit is P_1 = Abeam / Asky where Asky is the area of the ENTIRE SPHERE on which the satellite is assumed to reside. 
The probability that ONE of N satellites will be hit is one minus the probability that NO satellite will be hit, or P_N = 1 - (1 - P_1)^N which may be approximated when N*P_1 is small as P_N = 1 - (1 - N*P_1) = N*P_1 = N * Abeam / Asky Whether this is a good approximation depends strongly on the values of N and P_1. Is Abeam much larger or smaller than Asky/N, the area assigned to any one satellite? I can't really proceed without knowing that. - Doctor Peterson, The Math Forum Date: 02/28/2000 at 13:56:08 From: Brian Borden Subject: Re: Probability of Accidentally Hitting a Satellite Dr. Peterson, I think a similar way to approach the problem would be to determine the probability of hitting a particular sector on a dartboard given three things: 1) Your dart will definitely hit SOMEWHERE on the board, but your aim isn't great so your chances of hitting any one spot on the board are the same as hitting any other spot. 2) You know the sizes of the sector and the board. 3) You're playing suction cup darts, and you know the size of the suction cup - you score a hit as long as any part of the suction cup is inside the sector. What's the probability of scoring a hit if you throw one dart? Brian Borden Date: 02/28/2000 at 16:58:44 From: Doctor Peterson Subject: Re: Probability of Accidentally Hitting a Satellite Hi, Brian. Thanks for writing back. If there's one sector we're trying to hit, this will certainly be equivalent to my N = 1 case (with one, possibly huge, satellite in the sky). We'll have to extend it if we want to model a large number of satellites. Suppose the target "sector" is a circle with radius r_t, and the dart has radius r_d, while the whole dart board has radius r_b. (If I used an actual sector, then as it became smaller, it would approach a line segment rather than a point, which doesn't seem realistic for a satellite.) Then we score a hit as long as the dart is within (r_t + r_d) of the center of the target spot.
Therefore the probability of a hit is the ratio of the "hit" area to the total area: P_1 = pi (r_t + r_d)^2 / (pi r_b^2) = (r_t + r_d)^2 / r_b^2 Now let's move on to the N satellite case, by imagining N identical target spots, and assuming that the "hit" regions don't overlap. Then the probability of a hit will again be the ratio of the total area of all the "hit" regions to the total area: P_N = N pi (r_t + r_d)^2 / (pi r_b^2) = N pi r_t^2 (1 + r_d/r_t)^2 / (pi r_b^2) Notice that N pi r_t^2 is the total area A_t of all the target spots, which you originally felt would determine the probability. If I let N increase and r_t decrease so as to keep this area constant, the r_d/r_t term will increase without bounds; the only limit on the probability of a hit will be that we assumed the hit regions would not overlap, and since they have a minimum area of pi r_d^2, N can't really increase forever without changing the conditions of the problem. As I said last time, if N is large enough, making the satellites "dust," the probability of a hit will be essentially one, because almost certainly there will be no part of the sky (dart board) that is not within r_d of a satellite (target spot). Rather than work with the area of the targets, we can focus on the area of the dart: P_N = N pi (r_t + r_d)^2 / (pi r_b^2) = N pi r_d^2 (r_t/r_d + 1)^2 / (pi r_b^2) This way, when N gets large, r_t/r_d gets small and we are left with P_N = N A_d/A_b which is what I got last time (though I still haven't dealt with the overlap problem, so I can't really take r_t to zero).
You can repeat the P_N = 1 - (1 - P_1)^N analysis I did last time if you wish, to remove the non-overlap assumption. - Doctor Peterson, The Math Forum
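These formulas are easy to sanity-check numerically. The sketch below uses made-up radii (not values from the exchange) to compare the exact probability 1 - (1 - P_1)^N with the linear approximation N*P_1:

```python
# Numeric check of Dr. Peterson's hit-probability formulas, with made-up
# radii: target spots r_t, dart r_d, board r_b, N non-overlapping targets.
r_t, r_d, r_b = 0.5, 0.2, 10.0
N = 20

p1 = (r_t + r_d) ** 2 / r_b ** 2    # single-target hit probability
p_exact = 1 - (1 - p1) ** N         # at least one of N targets hit
p_approx = N * p1                   # linear approximation, good when N*p1 is small
print(p1, p_exact, p_approx)
```

With these numbers N*p1 is about 0.1, so the approximation overshoots the exact value by only a few parts in a hundred; as N*p1 grows toward 1 the gap widens.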
Center of Mass for an Arch
April 1st 2012
A slender metal arch, denser at the bottom than the top, lies along the semicircle y^2+z^2=1, z>=0, in the yz plane, and I'm trying to find its center of mass. Density: D(x,y,z) = 2 - z. I already found the total mass of the metal arch to be 2pi - 2. I found the total mass by parameterizing the curve as y = cos(t), z = sin(t) for 0 <= t <= pi and integrating (2 - sin(t)) dt from 0 to pi. I'm not sure how to find the center of mass, especially if it's not on the semicircle. All help is greatly appreciated!
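Not part of the thread, but the poster's parameterization extends directly to the center of mass: the first moments of the density along the arc give y-bar and z-bar, and (as the poster suspected) the result lies inside the semicircle rather than on it. A quick numerical check:

```python
import math

def integrate(f, a, b, n=20000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

density = lambda t: 2 - math.sin(t)   # D = 2 - z along the arc; ds = dt here
mass = integrate(density, 0, math.pi)                               # = 2*pi - 2
y_bar = integrate(lambda t: math.cos(t) * density(t), 0, math.pi) / mass
z_bar = integrate(lambda t: math.sin(t) * density(t), 0, math.pi) / mass
print(mass, y_bar, z_bar)   # mass ~ 4.283, center of mass ~ (0, 0.567)
```

Analytically, y-bar = 0 by symmetry of the moment integral, and z-bar = (4 - pi/2) / (2*pi - 2), about 0.567, so the center of mass sits on the z axis well below the arc.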
Analytical modeling of trilayer graphene nanoribbon Schottky-barrier FET for high-speed switching applications Recent development of trilayer graphene nanoribbon Schottky-barrier field-effect transistors (FETs) will be governed by transistor electrostatics and quantum effects that impose scaling limits like those of Si metal-oxide-semiconductor field-effect transistors. The current–voltage characteristic of a Schottky-barrier FET has been studied as a function of physical parameters such as effective mass, graphene nanoribbon length, gate insulator thickness, and electrical parameters such as Schottky barrier height and applied bias voltage. In this paper, the scaling behaviors of a Schottky-barrier FET using trilayer graphene nanoribbon are studied and analytically modeled. A novel analytical method is also presented for describing a switch in a Schottky-contact double-gate trilayer graphene nanoribbon FET. In the proposed model, different stacking arrangements of trilayer graphene nanoribbon are assumed as metal and semiconductor contacts to form a Schottky transistor. Based on this assumption, an analytical model and numerical solution of the junction current–voltage are presented in which the applied bias voltage and channel length dependence characteristics are highlighted. The model is then compared with other types of transistors. The developed model can assist in comprehending experiments involving graphene nanoribbon Schottky-barrier FETs. It is demonstrated that the proposed structure exhibits negligible short-channel effects, an improved on-current, realistic threshold voltage, and opposite subthreshold slope and meets the International Technology Roadmap for Semiconductors near-term guidelines. Finally, the results showed that there is a fast transient between on-off states. In other words, the suggested model can be used as a high-speed switch where the value of subthreshold slope is small and thus leads to less power consumption. 
Keywords: Trilayer graphene nanoribbon (TGN); ABA and ABC stacking; TGN Schottky-barrier FET; High-speed switch
Graphene, as a single layer of carbon atoms with hexagonal symmetry and different forms such as monolayer, bilayer, trilayer, and multilayer, has attracted new research attention. Very high carrier mobility can be achieved in graphene-based materials, which makes them promising candidates for nanoelectronic devices [1,2]. Recently, electron and hole mobilities of suspended graphene have reached as high as 2 × 10^5 cm^2/V·s [3]. Also, ballistic transport has been observed at room temperature in these materials [3]. Layers of graphene can be stacked differently depending on the horizontal shift of the graphene planes [4,5]. Each multilayer graphene stacking sequence behaves like a new material, and different stackings of graphene sheets lead to different electronic properties [3,6,7]. In addition, the configuration of graphene layers plays a significant role in realizing either metallic or semiconducting electronic behavior [4,8,9]. Trilayer graphene nanoribbon (TGN), as a one-dimensional (1D) material, is the focus of this study. The quantum confinement effect will be assumed in two directions. In other words, only one Cartesian direction is greater than the de Broglie wavelength (10 nm). As shown in Figure 1a, because of the quantum confinement effect, the energy spectrum is discrete ('digital') in the y and z directions and continuous ('analog') in the x direction. It is also remarkable that the electrical properties of TGN are a strong function of the interlayer stacking sequence [10]. Two well-known forms of TGN with different stacking manners are known as ABA (Bernal) and ABC (rhombohedral) [11]. The simplest crystallographic structure is hexagonal or AA stacking, where each layer is placed directly on top of another; however, it is unstable. AB (Bernal) stacking is the distinct stacking structure for bilayers.
For trilayers, the stacking can be either ABA, as shown in Figure 1, or ABC (rhombohedral) [1,12]. Bernal stacking (ABA) is a common hexagonal structure which has been found in graphite. However, some parts of graphite can also have a rhombohedral structure (the ABC stacking) [6,13]. The band structure of ABA-stacked TGNs can be assumed to be a hybrid of the monolayer and bilayer graphene band structures. Perpendicular external applied electric or magnetic fields are expected to induce band crossing variation in Bernal-stacked TGNs [14-16]. Figure 1 indicates that the graphene plane, a two-dimensional (2D) honeycomb lattice with two non-equivalent sublattices A and B, is the origin of the stacking order in multilayer graphene. Figure 1. TGN. (a) As a one-dimensional material with quantum confinement effect in two Cartesian directions. (b) ABA-stacked [17]. As shown in Figure 1, a TGN with ABA stacking has been modeled in the form of three honeycomb lattices with pairs of equivalent sites {A[1],B[1]}, {A[2],B[2]}, and {A[3],B[3]} located in the top, center, and bottom layers, respectively [11]. An effective-mass model utilizing the Slonczewski-Weiss-McClure parameterization [17] has been adopted, where every parameter can be compared with a relevant parameter in the tight-binding model. The stacking order is related to the electronic low-energy structure of 3D graphite-based materials [18,19]. Interlayer coupling has also been found to affect the device performance; it can be decreased by mismatching the A-B stacking of the graphene layers or by increasing the interlayer distance. A weaker interlayer coupling may lead to reduced energy spacing between the subbands and increased availability of subbands for transport in the low-energy range.
Graphene nanoribbon (GNR) has been incorporated in different nanoscale devices such as interconnects, electromechanical switches, Schottky diodes, tunnel transistors, and field-effect transistors (FETs) [20-24]. The characteristics of the electron and hole energy spectra in graphene create unique features in graphene-based Schottky transistors. Recently, fabrication and experimental studies as well as a hypothetical model of G-Schottky transistors have been presented [25]. Studies have focused on the properties of TGN, and a tunable three-layer graphene single-electron transistor has been experimentally realized [6,26]. In this paper, a model for a TGN Schottky-barrier (SB) FET is analyzed; it can be assumed to be a 1D device with width and thickness less than the de Broglie wavelength. The presented analytical model involves a range of nanoribbons placed between a highly conducting substrate with the back gate and the top gate controlling the source-drain current. The Schottky barrier is defined as an electron or hole barrier caused by an electric dipole charge distribution associated with the contact and the difference created between a metal and a semiconductor under equilibrium conditions. The barrier is found to be very abrupt at the top of the metal because the charge resides mostly on the surface [27-31]. TGNs with different stacking sequences (ABA and ABC) exhibit different electrical properties, which can be exploited in the SB structure. This means that by engineering the stacking of TGN, Schottky contacts can be designed, as shown in Figure 2. Between the two different arrangements of TGN, the semiconducting behavior of the ABA stacking structure makes it a useful and competent channel material for Schottky transistors [32]. In fact, the TGN with ABC stacking shows a semimetallic behavior, while the ABA-stacked TGN shows a semiconducting property [32].
A schematic view of the TGN SB FET is illustrated in Figure 3, in which ABA-stacked TGN forms the channel between the source and drain contacts. The contact size has a smaller effect on the double-gate (DG) GNR FET compared to the single-gate (SG) FET. Figure 3. Schematic representation of TGN SB FET. Because the GNR channel is sandwiched or wrapped by the gate, the field lines from the source and drain contacts are properly screened by the gate electrodes, and therefore the source and drain contact geometry has a lower impact. The operation of the TGN SB FET follows from the creation of the lateral semimetal-semiconductor-semimetal junction under the controlling top gate and the associated energy barrier.
TGN SB FET model
The scaling behaviors of the TGN SB FET are studied by self-consistently solving the energy band structure equation in an atomistic basis set. In order to calculate the energy band structure of ABA-stacked TGN, the spectrum of the full tight-binding Hamiltonian technique has been adopted [33-37]. The presence of electrostatic fields breaks the symmetry between the three layers. Using perturbation theory [38] in the limit of υ[F]|k| « V « t[⊥] gives the electronic band structure of TGN [35,39], where k is the wave vector in the x direction, t[⊥] is the hopping energy, ν[f] is the Fermi velocity, and V is the applied voltage. The response of ABA-stacked TGN to an external electric field is different from that of mono- or bilayer graphene: rather than opening a gap as in bilayer graphene, the field tunes the magnitude of the band overlap in TGN. Based on the energy dispersion of biased TGN, the wave vector relation with the energy (E-k relation) shows an overlap between the conduction and valence band structures, which can be controlled by a perpendicular external electric field [6,39]. The band overlap increases with increasing external electric field, independent of the electric field polarity.
Moreover, it is shown that the effective mass remains constant when the external electric field is increased [3,33]. As an essential parameter of TGNs, the density of states (DOS) reveals the availability of energy states and is defined as in [40,41]. To obtain it, the derivative of the energy with respect to the wave vector is required. Since the DOS shows the number of available states at each energy level that can be occupied, the DOS as a function of wave vector can be modeled as in [39], where E is the energy band structure and the coefficients are defined in terms of α and β, with A = −6.2832α, B = 14.3849α^2β, and D = −9β^2. As shown in Figure 4, the DOS for ABA-stacked TGN at room temperature is plotted. As illustrated, the low-DOS spectrum exposes two prominent peaks around the Fermi energy [39]. Figure 4. The DOS of the TGN with ABA stacking. The electron concentration is calculated by integrating the Fermi probability distribution function over the energy as in [42]. The carrier concentration of biased ABA-stacked TGN is modified accordingly [43], in terms of the normalized Fermi energy η and coefficients M and N. Based on this model, the ABA-stacked TGN carrier concentration is a function of the normalized Fermi energy (η). The conductance of graphene at the Dirac point indicates a minimum conductance at the charge neutrality point which depends on temperature. For a 1D TGN FET, the GNR channel is assumed to be ballistic. The current from source to drain can be given by the Boltzmann transport equation in which the Landauer formula has been adopted [44,45]. The number of modes, incorporated into the Landauer formula, gives the conductance of TGN as written in [32], where the momentum (k) can be derived by using Cardano's solution for cubic equations [46]. Equation 4 can be assumed in the form G = N[1]G[1] + N[2]G[2], where N[1] = 2αq^2/lh and N[2] = −6βq^2/lh. Since G[1] is an odd function, its contribution is zero.
Therefore, G = N[2]G[2] [32]. This equation can be numerically solved by employing the partial integration method and using the substitutions x = (E − Δ)/k[B]T and η = (E[F] − Δ)/k[B]T. Thus, the general conductance model of TGN is obtained [32]. It can be seen that the conductivity of TGN increases by raising the magnitude of the gate voltage. In the Schottky contact, electrons can be injected directly from the metal into the empty states in the semiconductor. When electrons flow from the valence band of the semiconductor into the metal, the result is equivalent to holes being injected into the semiconductor, so an excess minority-carrier hole concentration is established in the vicinity of the junction [28]. The current moves mainly from the drain to the source and consists of both drift and diffusion components. The 2D framework developed here yields an explicit analytical current equation in the subthreshold regime. In the weak-inversion region, the diffusion current dominates and is proportional to the electron concentration at the virtual cathode [47]. A GNR FET is a voltage-controlled tunnel barrier device for both Schottky and doped contacts. The drain current through the barrier consists of thermionic and tunneling components [48]. If the effects of quantum tunneling and electrostatic short channels are not treated, it is difficult to study the scaling behaviors and ultimate scaling limits of a GNR SB FET, where the tunneling effect cannot be ignored [20]. The tunneling current is the main component of the whole current, which requires the use of quantum transport. Close to the source, within the band gap, carriers are injected into the channel from the source [49]. In fact, the tunneling current plays a very important role in a Schottky contact device.
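The exact integrand of G[2] is not reproduced above, but the thermally broadened Landauer picture it belongs to can be sketched with a generic numerical example: conductance as the transmission weighted by the Fermi window, G = (2q^2/h) ∫ T(E) (−∂f/∂E) dE. Everything below — the step transmission, the barrier height, the energy range — is an illustrative assumption, not the paper's model.

```python
import math

# Generic Landauer conductance with thermal broadening:
#   G = (2 q^2 / h) * integral of T(E) * (-df/dE) dE
# Illustrative sketch only; T(E) and all energies are assumed toy values.
q = 1.602176634e-19   # C
h = 6.62607015e-34    # J*s
kT = 0.0259           # eV at room temperature

def fermi_window(E, Ef):
    # -df/dE of the Fermi-Dirac distribution, in 1/eV
    x = (E - Ef) / kT
    return 1.0 / (kT * (2.0 + math.exp(x) + math.exp(-x)))

def conductance(Ef, transmission, Emin=-1.0, Emax=1.0, n=4000):
    # midpoint-rule integration over the Fermi window
    dE = (Emax - Emin) / n
    s = 0.0
    for i in range(n):
        E = Emin + (i + 0.5) * dE
        s += transmission(E) * fermi_window(E, Ef) * dE
    return (2.0 * q * q / h) * s

T_step = lambda E: 1.0 if E > 0.2 else 0.0   # one channel above a 0.2 eV barrier
G_on = conductance(0.4, T_step)    # Fermi level above the barrier: near 2q^2/h
G_off = conductance(0.0, T_step)   # Fermi level below the barrier: suppressed
print(G_on, G_off)
```

Sweeping the Fermi level across the barrier reproduces, qualitatively, the gate-voltage dependence of the conductance discussed in the text.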
The proposed model includes the tunneling current through the SB at the contact interfaces, appropriately capturing the impact of arbitrary electrical and physical factors. The behavior of the proposed transistor above the threshold region is obtained by modulating the tunneling current through the SBs at the two ends of the channel [20]. The effect of charges close to the source of a SB FET is more severe because they have a significant effect on the SB and the tunneling probability. When a charge impurity is situated at the center of the channel of a SB FET, electrons are trapped by the positive charge and the source-drain current is decreased. If the charges are situated close to the drain, the electrons will collect near the drain. In this situation, low charge density near the source decreases the potential barrier at the beginning of the channel, which opens up the energy gap more for the flow of electrons from the source to the channel [50]. Electrons moving from the metal into the semiconductor can be described by the electron current density J[m→s], whereas the electron current density J[s→m] refers to the movement of electrons from the semiconductor into the metal. The subscripts of the current density thus indicate the direction of electron flow; the conventional current direction is opposite to the electron flow.
J[s→m] is related to the concentration of electrons with sufficient velocity in the transport direction to surmount the barrier [28], where e is the magnitude of the electronic charge and ν[x] is the carrier velocity in the direction of transport. The high carrier mobility reported from experiments on graphene allows us to assume fully ballistic carrier transport in the TGN, which means that the average probability that an electron injected at one end is transmitted to the other end is approximately equal to 1. Kinetic energy, as a main parameter, is considered above the Fermi level, and the current density-voltage response of the TGN SB FET device is determined with respect to the carrier density and its kinetic energy, where V[A] is the applied bias voltage and V[T] is the thermal voltage [51]. The dependence of the drain current on the drain-source voltage is associated with the dependence of η on this voltage, where V[GT] = V[GS] − V[T] and V(y) is the channel voltage in the y direction. By solving Equation 11, the normalized Fermi energy can be obtained. In order to obtain an analytical relation for the contact current, an explicit analytical equation for the electric potential distribution along the TGN is presented. The channel current is analytically derived as a function of various physical and electrical parameters of the device, including effective mass, length, temperature, and applied bias voltage. According to the relationship between a current and its density, the current–voltage response of a TGN SB FET, as its main characteristic, is modeled in terms of the channel length l.
Results and discussion
In this section, the performance of the Schottky-contact double-gate TGN FET is studied. A novel analytical method is introduced to achieve a better understanding of TGN SB switch devices. The results are applied to identify how various device geometries provide different degrees of control over the transient between on and off states.
The numerical solution of the analytical model presented in the preceding section was employed, and the rectifying current–voltage characteristic of the TGN SB FET is plotted in Figure 5. Figure 5. Simulated I[D] (μA) versus V[DS] (V) plots of the TGN Schottky-barrier FET (L = 25 nm, V[GS] = 0.5 V). The results further reveal that engineering the SB height does not alter the qualitative ambipolar feature of the current–voltage characteristic whenever the gate oxide is thin. The reason is that the gate electrode perfectly screens the field from the drain and source for a thin gate oxide (less than 10 nm). A SB whose thickness is comparable to the gate insulator thickness is almost transparent. Thus, the ambipolar current–voltage (I-V) characteristic cannot be concealed by engineering the SB height when the gate insulator is thin. Lowering the gate insulator thickness and the contact size leads to thinner SBs and also greater on-current. Since the SB height is half of the band gap, the minimum current exists at the gate voltage V[G,min] = 1/2 V[D], at which the conduction band bending at the source end of the channel is symmetric to the valence band bending at the drain end of the channel, and the electron current equals the hole current. The consequence of attaining the least leakage current is the same as for a TGN SB FET with mid-gap SBs [23]. Raising the drain voltage leads to an exponential increase of the minimal leakage current, which shows the importance of proper design of the power supply voltage to ensure a small leakage current. As depicted in Figure 6, the proposed model points out a strong gate-source voltage dependence of the current–voltage characteristic, showing that a V[GS] increment will influence the drain current. In other words, as V[GS] increases, a greater value of I[D] results.
As the drain voltage rises, the voltage drop across the oxide close to the drain terminal decreases, and so the induced inversion charge density close to the drain also decreases [28]. The slope of the I[D] versus V[DS] curve is therefore reduced, reflecting the decrease in the incremental conductance of the channel at the drain. This effect is visible in the I[D]-V[DS] curves in Figure 6. If V[DS] increases to the point that the potential drop across the oxide at the drain terminal is equal to V[T], the induced inversion charge density becomes zero at the drain terminal. At that point, the incremental conductance at the drain is zero, so the slope of the I[D]-V[DS] curve is zero. We can write V[DS] (sat) = V[GS] − V[T], where V[DS] (sat) is the drain-to-source voltage that produces zero inversion charge density at the drain terminal. When V[DS] exceeds V[DS] (sat), the point in the channel where the inversion charge is zero moves closer to the source terminal [28]. In this case, electrons enter the channel at the source and travel through the channel towards the drain; at the point where the charge goes to zero, they are injected into the space-charge region, where they are swept by the electric field to the drain contact. Since the change in channel length ΔL is small compared to the original length L, the drain current remains essentially constant for V[DS] > V[DS] (sat). This region of the I[D]-V[DS] characteristic is referred to as the saturation region. When V[GS] is changed, the I[D]-V[DS] curve changes as well; if V[GS] increases, the initial slope of I[D]-V[DS] rises. We can also infer from Equation 14 that the value of V[DS] (sat) is a function of V[GS]. A family of curves is created for this n-channel enhancement-mode TGN SB FET, as shown in Figure 6.

Figure 6. I[D] (μA)-V[DS] (V) characteristic of the TGN SB FET at different values of V[GS] for L = 100 nm.
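The triode/saturation behaviour described above follows the familiar long-channel picture. As an illustration only (this is the textbook square-law MOSFET expression, not the TGN SB FET model derived in this paper, and the parameter values k and V_T are invented for the sketch), the family of curves can be reproduced as:

```python
def square_law_id(vgs, vds, vt=0.3, k=1e-3):
    """Drain current (A) of an ideal long-channel n-MOSFET (square law)."""
    vov = vgs - vt                 # overdrive voltage V_GS - V_T
    if vov <= 0:
        return 0.0                 # cut-off
    if vds < vov:                  # triode: V_DS < V_DS(sat) = V_GS - V_T
        return k * (vov * vds - 0.5 * vds ** 2)
    return 0.5 * k * vov ** 2      # saturation: I_D independent of V_DS

# Family of curves: a larger V_GS gives a larger saturation current
for vgs in (0.4, 0.5, 0.6):
    print(vgs, square_law_id(vgs, vds=1.0))
```

The printed currents grow with V_GS, mirroring the family of curves in Figure 6.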
It can also be seen that, by increasing V[GS], the saturation current increases, reflecting the larger voltage drop between the gate and the source contact and the higher energy available for carrier injection from the source contact into the channel [20]. Scaling up the power supply decreases the SB width at the drain side, making the barrier more transparent and allowing more on-current to flow. Therefore, an acceptable performance comparable to the conventional behavior of a Schottky transistor is obtained. Scaling the channel length improves the gate electrostatic control, creating a larger transconductance and a smaller subthreshold swing. The effect of channel length scaling on the I-V characteristic of the TGN SB FET is investigated in Figure 7. It shows a similar trend when the gate-source voltage is changed. It can be seen that the drain current rises substantially as the length of the channel is increased from 5 to 50 nm.

Figure 7. Impact of the channel length scaling on the transfer characteristic for V[GS] = 0.5 V.

To gain greater insight into why the drain current increases with channel length, two important factors play a significant role: the transparency of the SB and the extension of the energy window for carrier concentration [49,50]. For the first factor, since the SB height and the tunneling current are affected significantly by the charges close to the source of the SB FET, the channel length effect on the drain current through the SB contact is taken into account in our proposed model. Moreover, when the center of the channel of the SB FET is unoccupied by charged impurities, the drain-source current increases because free electrons are not affected by positive charges [49]. The effect of the second factor appears at the beginning of the channel, where the barrier potential decreases as a result of the low charge density near the source.
This phenomenon widens the energy window and eases electron flow from the source into the channel [50]. Furthermore, owing to the long mean free path of GNRs [52-55], the scattering effect is not dominant; therefore, increasing the channel length results in a larger drain current. For a channel length of 5 nm, direct tunneling from source to drain results in a large leakage current, and the gate voltage can barely modulate the current; the transistor is too leaky to show a considerable difference between the on and off states. For a channel length of 10 nm, the drain current improves to about 1.3 mA. The rise in the drain current is found to be more significant for channel lengths above 20 nm; that is, by increasing the channel length, there is a dramatic rise in the initial slope of I[D] versus V[DS]. Also, based on the subthreshold slope model and the simulated results that follow, a faster device with a steeper subthreshold slope and a high on/off current ratio is expected; in other words, a fast transition between the on and off states can be concluded. Increasing the channel length to 50 nm raises the drain current to about 6.6 mA. The operation of the state-of-the-art short-channel TGN SB FET is found to be near the ballistic limit. Increasing the channel length further hardly changes the on-current, the off-current, or the on/off current ratio. However, for a conventional metal-oxide-semiconductor field-effect transistor (MOSFET), raising the channel length causes the channel resistance to increase proportionally. Therefore, in this case, down-scaling the channel length results in a significant loss of the on/off current ratio as compared to the SG device. Figure 8 shows a comparative study of the presented model and the typical I-V characteristics of other types of transistors [49,50].
As depicted in Figure 8, the proposed model yields a larger drain current than those transistors over a range of drain-source voltages. The resulting characteristics of the presented model shown in Figure 8 are in close agreement with published results [49,50]. In Figure 8, the DG geometry is assumed for the simulations instead of the SG geometry.

Figure 8. Comparison between the proposed model and typical I-V characteristics of other types of transistors. (a) MOSFET with a SiO[2] gate insulator [50] (V[GS] = 0.5 V), (b) TGN MOSFET with an ionic liquid gate, C[ins] >> C[q] [49] (V[GS] = 0.5 V), (c) TGN MOSFET with a 3-nm ZrO[2] wrap-around gate, C[ins] ~ C[q] [49] (V[GS] = 0.37 V), (d) TGN MOSFET with a 3-nm ZrO[2] wrap-around gate, C[ins] ~ C[q] [49] (V[GS] = 0.38 V).

The proposed model is intended to aid in the design of such devices and to provide a deeper quantitative understanding of experiments involving GNR FETs. The SiO[2] gate insulator is 1.5 nm thick with a relative dielectric constant K = 3.9 [50] (Figure 8a). Furthermore, the gate-to-channel capacitance C[g] is a series combination of the insulator capacitance C[ins] and the quantum capacitance C[q] (equivalent to the semiconductor capacitance in conventional MOSFETs). Figure 8b shows a comparative study of the presented model and the typical I-V characteristic of a TGN MOSFET with an ionic liquid gate. Ionic liquid gating [49], which can be modeled as a wrap-around gate with an equivalent oxide thickness of 1 nm and a dielectric constant ε[r] = 80, results in C[ins] >> C[q], so the MOSFET operates close to the quantum capacitance limit, i.e., C[g] ≈ C[q] [49]. As depicted in Figure 8c,d, the proposed model is also compared with a TGN MOSFET with a 3-nm ZrO[2] wrap-around gate for two different values of V[GS].
A 3-nm ZrO[2] (ε[r] = 25) wrap-around gate has C[ins] comparable to C[q] for solid-state high-κ gating, an intermediate regime between the MOSFET limit and the C[q] limit. Recently, a performance comparison between GNR SB FETs and MOSFET-like devices with doped source-drain contacts has been carried out using self-consistent atomistic simulations [20,21,48-50,56,57]. The MOSFET demonstrates improved performance in terms of larger on-current, larger on/off current ratio, larger cutoff frequency, smaller intrinsic delay, and better saturation behavior [21,50]. Disorders such as edge roughness, lattice vacancies, and ionized impurities have an important effect on device performance and variability, because the sensitivity to the channel atomistic structure and the electrostatic environment is strong [50]. However, the intrinsic switching speed of the GNR SB FET is several times faster than that of Si MOSFETs, which could enable high-speed electronics applications in which the large leakage of the GNR SB FET is of less concern [20]. Efficient transistor operation with a doped nanoribbon has been reported in terms of on/off current ratio, intrinsic switching delay, and intrinsic cutoff frequency [48]. Based on the presented model, and in agreement with other experimental and analytical models, the on-state current of the MOSFET-like GNR FET is one order of magnitude higher than that of the TGN SB FET. This is because, beyond the source-channel flat-band condition, the gate voltage modulates both the thermal and tunnel current components in the on-state of the MOSFET-like GNR FET, whereas in the metal Schottky-contact TGN FET it modulates only the tunnel barrier, which limits the on-state current. Furthermore, TGN SB FET device performance can be affected by interlayer coupling, which can be reduced by increasing the interlayer distance or by mismatching the A-B stacking of the graphene layers.
It is also noteworthy that MOSFETs operate in the subthreshold (weak inversion) region when the magnitude of V[GS] is smaller than the threshold voltage. In the weak inversion mode, the subthreshold leakage current is principally a result of carrier diffusion [58,59]. The off-state current of the transistor (I[OFF]) is the drain current at V[GS] = 0. The off-state current is affected by parameters such as channel length, channel width, depletion width of the channel, gate oxide thickness, threshold voltage, channel-source doping profiles, drain-source junction depths, supply voltage, and junction temperature [59]. Short-channel effects refer to the impact of channel length scaling on the subthreshold leakage current and the threshold voltage. The threshold voltage is reduced by decreasing the channel length and the drain-source voltage [58-61]. In the subthreshold region, log(I[D]) is approximately linear in the gate voltage [58,59]. It has been shown that decreasing the channel length and the drain-source voltage shifts the characteristics to the left, and it is evident that as the channel length drops below 10 nm, the subthreshold current increases dramatically [62]. Based on the International Technology Roadmap for Semiconductors (ITRS) near-term guideline for low-standby-power technology, the value of the threshold voltage should be close to 0.3 V [59]. Figure 9 illustrates the subthreshold regime of the TGN SB FET at different values of the drain-source voltage. As shown in this figure, for lower values of the drain-source voltage, the threshold voltage decreases and meets the ITRS guideline.

Figure 9. Subthreshold regime of TGN SB FET at different values of V[DS] (V) for L = 25 nm.

The subthreshold slope, S (mV/decade), is evaluated by selecting two points in the subthreshold region of an I[D]-V[GS] graph between which the subthreshold leakage current changes by a factor of 10.
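This two-point extraction can be sketched in a few lines of Python. The data points below are invented for illustration; they are not the simulated TGN values.

```python
import math

def subthreshold_slope(vgs1, id1, vgs2, id2):
    """Subthreshold slope S in mV/decade from two (V_GS, I_D) points
    taken in the subthreshold region of an I_D-V_GS curve."""
    decades = math.log10(id2) - math.log10(id1)   # change in log10(I_D)
    return (vgs2 - vgs1) * 1e3 / decades          # mV per decade of current

# Illustrative points: the current rises by one decade over 90 mV of gate bias
print(subthreshold_slope(0.10, 1e-9, 0.19, 1e-8))   # ≈ 90 mV/decade
```

A smaller S means a steeper log(I[D])-V[GS] slope and hence a sharper on/off transition.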
It has been noted that self-consistent electrostatics and the gate bias-dependent electronic structure play an essential role in determining the intrinsic limits of the subthreshold slope in a TGN SB FET, which stays well above the ideal Boltzmann limit of 60 mV/decade while remaining below 85 mV/decade [58,63]. The subthreshold slope, one of the key issues for deep-submicrometer devices, is defined as [59] S = (V[t] − V[off]) / log(I[vt]/I[off]), where V[t] is the threshold voltage, V[off] is the off voltage of the device, I[vt] is the drain current at threshold, and I[off] is the current at which the device is off. In other words, the subthreshold slope is the inverse slope of the log(I[D]) versus V[GS] graph, as illustrated in Figure 10.

Figure 10. I[D] (μA)-V[GS] (V) characteristic of TGN SB FET at different values of V[DS].

The average subthreshold swing is a fundamental parameter that influences the performance of the device as a switch. According to Figure 10, the subthreshold slope for l = 100 nm is obtained as shown in Table 1.

Table 1. Subthreshold slope of TGN SB FET at different values of V[DS]

Based on data from [64], for effective channel lengths down to 100 nm, the calculated and simulated subthreshold slope values are close to the classical value of approximately 60 mV/decade. The subthreshold slope can be improved by decreasing the buried oxide capacitance C[BOX] or by increasing the gate oxide capacitance C[GOX] [64]. Based on the simulated results, it can be concluded that when the channel material is replaced by TGN, the subthreshold swing improves further. A comparison of the presented model with data from [62,64] showed that, due to the quantum confinement effect [39,43], the subthreshold slope of the TGN SB FET is smaller than those of double-gate metal-oxide-semiconductor and vertical silicon-on-nothing FETs [62,64] for some values of the drain-source voltage.
A nanoelectronic device characterized by a steep subthreshold slope switches faster between the on and off states. A small value of S means that a small change in the input bias can modulate the output current, which leads to lower power consumption; in other words, a transistor can be used as a high-speed switch when S is small. As a result, the proposed model can be applied as a useful tool to optimize the performance of TGN SB FET-based devices. It also shows that shortening the top gate may lead to a considerable modification of the TGN SB FET current–voltage properties, and it paves a path for the future design of TGN SB devices.

Conclusions

TGN with different stacking arrangements is used as the metal and semiconductor contacts in a Schottky transistor junction. The ABA-stacked TGN in the presence of an external electric field is considered. Based on this configuration, an analytical model of the junction current–voltage characteristic of the TGN SB FET is presented. The dependence of the drain current on the drain-source voltage of the TGN SB FET, as well as on the back-gate and top-gate voltages, is calculated for different values of the gate-source voltage and geometric parameters such as the channel length. In particular, we conclude that increasing the applied voltage and the channel length increases the drain current, showing better performance in comparison with the typical behavior of other kinds of transistors. Finally, a comparative study of the presented model with a MOSFET with a SiO[2] gate insulator, a TGN MOSFET with an ionic liquid gate, and a TGN MOSFET with a ZrO[2] wrap-around gate was presented. The proposed model is also characterized by a steep subthreshold slope, clearly illustrating that the TGN SB FET performs better in terms of switching between the off and on states.
The obtained results show that, owing to the superior electrical properties of TGN such as high mobility, quantum transport, 1D behavior, and ease of fabrication, the suggested model can deliver better performance as a high-speed switch with a low value of the subthreshold slope.

Authors' contributions

MR wrote the manuscript, contributed to the design of the study, performed all the data analysis, and participated in the MATLAB simulation of the proposed device. Prof. RI and Dr. MTA participated in the conception of the project, improved the manuscript, and coordinated between all the participants. HK, MS, and EA organized the final version of the cover letter. All authors read and approved the final manuscript.

Acknowledgments

The authors would like to acknowledge the financial support from a Research University grant of the Ministry of Higher Education (MOHE), Malaysia, under Projects Q.J130000.7123.02H24, PY/2012/00168, and Q.J130000.7123.02H04. Also, thanks to the Research Management Center (RMC) of Universiti Teknologi Malaysia (UTM) for providing an excellent research environment in which to complete this work.

References

1. Mak KF, Shan J, Heinz TF: Electronic structure of few-layer graphene: experimental demonstration of strong dependence on stacking sequence. Phys Rev Lett 2010, 104:176404.
2. Rahmani M, Ahmadi MT, Kiani MJ, Ismail R: Monolayer graphene nanoribbon p-n junction.
3. Craciun MF, Russo S, Yamamoto M, Oostinga JB, Morpurgo AF, Tarucha S: Trilayer graphene is a semimetal with a gate-tunable band overlap. Nat Nanotechnol 2009, 4:383-388.
4. Berger C, Song Z, Li T, Li X, Ogbazghi AY, Feng R, Dai Z, Marchenkov AN, Conrad EH, First PN, de Heer WA: Ultrathin epitaxial graphite: 2D electron gas properties and a route toward graphene-based nanoelectronics. J Phys Chem B 2004, 108:19912-19916.
5. Nirmalraj PN, Lutz T, Kumar S, Duesberg GS, Boland JJ: Nanoscale mapping of electrical resistivity and connectivity in graphene strips and networks. Nano Lett 2011, 11:16-22.
6. Avetisyan AA, Partoens B, Peeters FM: Stacking order dependent electric field tuning of the band gap in graphene multilayers.
7. Warner JH: The influence of the number of graphene layers on the atomic resolution images obtained from aberration-corrected high resolution transmission electron microscopy. Nanotechnology 2010, 21:255707.
8. Zhu W, Perebeinos V, Freitag M, Avouris P: Carrier scattering, mobilities, and electrostatic potential in monolayer, bilayer, and trilayer graphene.
9. Sutter P, Hybertsen MS, Sadowski JT, Sutter E: Electronic structure of few-layer epitaxial graphene on Ru(0001). Nano Lett 2009, 9:2654-2660.
10. Shengjun Y, Raedt HD, Katsnelson MI: Electronic transport in disordered bilayer and trilayer graphene.
11. Koshino M: Interlayer screening effect in graphene multilayers with ABA and ABC stacking.
12. Zhang F, Sahu B, Min H, MacDonald AH: Band structure of ABC-stacked graphene trilayers.
13. Lu CL, Lin HC, Hwang CC, Wang J, Lin MF, Chang CP: Absorption spectra of trilayer rhombohedral graphite. Appl Phys Lett 2006, 89:221910.
14. Xiao YM, Xu W, Zhang YY, Peeters FM: Optoelectronic properties of ABC-stacked trilayer graphene.
15. Rutter GM, Crain J, Guisinger N, First PN, Stroscio JA: Optoelectronic properties of ABC-stacked trilayer graphene. J Vac Sci Technol A 2008, 26:938-943.
16. Russo S, Craciun MF, Yamamoto M, Tarucha S, Morpurgo AF: Double-gated graphene-based devices.
17. Koshino M, McCann E: Gate-induced interlayer asymmetry in ABA-stacked trilayer graphene.
18. Craciun MF, Russo S, Yamamoto M, Tarucha S: Tuneable electronic properties in graphene. Nano Today 2011, 6:42-60.
19. Appenzeller J, Sui Y, Chen Z: Graphene nanostructures for device applications. In Digest of Technical Papers, 2009 Symposium on VLSI Technology: June 16–18 2009; Honolulu. Piscataway: IEEE.
20. Ouyang Y, Yoon Y, Guo J: Scaling behaviors of graphene nanoribbon FETs: a three-dimensional quantum simulation study.
21. Yoon Y, Fiori G, Hong S, Iannaccone G, Guo J: Performance comparison of graphene nanoribbon FETs with Schottky contacts and doped reservoirs.
22. Zhang Q, Fang T, Xing H, Seabaugh A, Jena D: Graphene nanoribbon tunnel transistors.
23. Naeemi A, Meindl JD: Conductance modeling for graphene nanoribbon (GNR) interconnects.
24. Liang Q, Dong J: Superconducting switch made of graphene–nanoribbon junctions. Nanotechnology 2008, 19:355706.
25. Zhu J: A novel graphene channel field effect transistor with Schottky tunneling source and drain. In Proceedings of the ESSDERC 2007: 37th European Solid State Device Research Conference: September 11–13 2007; Munich. Piscataway: IEEE; 2007:243-246.
26. Guettinger J, Stampfer C, Molitor F, Graf D, Ihn T, Ensslin K: Coulomb oscillations in three-layer graphene nanostructures. New J Phys 2008, 10:125029.
27. Rahmani M, Ahmadi MT, Ismail R, Ghadiry MH: Performance of bilayer graphene nanoribbon Schottky diode in comparison with conventional diodes. J Comput Theor Nanosci 2013, 10:1-5.
28. Kargar A, Lee C: Graphene nanoribbon Schottky diodes using asymmetric contacts. In Proceedings of the IEEE-NANO 2009: 9th Conference on Nanotechnology: July 26–30 2009; Genoa. Piscataway: IEEE; 2009:243-245.
29. Jimenez D: A current–voltage model for Schottky-barrier graphene based transistors. Nanotechnology 2008, 19:345204.
30. Ahmadi MT, Rahmani M, Ghadiry MH, Ismail R: Monolayer graphene nanoribbon homojunction characteristics. Sci Adv Mater 2012, 4:753-756.
31. Sadeghi H, Ahmadi MT, Mousavi M, Ismail R: Channel conductance of ABA stacking trilayer graphene field effect transistor. Mod Phys Lett B 2012, 26:1250047.
32. Avetisyan AA, Partoens B, Peeters FM: Electric-field control of the band gap and Fermi energy in graphene multilayers by top and back gates.
33. McCann E, Koshino M: Spin-orbit coupling and broken spin degeneracy in multilayer graphene.
34. Guinea F, Castro Neto AH, Peres NMR: Electronic states and Landau levels in graphene stacks.
35. Latil S, Meunier V, Henrard L: Massless fermions in multilayer graphitic systems with misoriented layers: ab initio calculations and experimental fingerprints.
36. Castro EV, Novoselov KS, Morozov SV, Peres NMR, Lopes dos Santos JMB, Nilsson J, Guinea F, Geim AK, Castro Neto AH: Electronic properties of a biased graphene bilayer. J Phys Condens Matter 2010, 22:175503.
37. Rahmani M, Ahmadi MT, Ghadiry MH, Anwar S, Ismail R: The effect of applied voltage on the carrier effective mass in ABA trilayer graphene nanoribbon. J Comput Theor Nanosci 2012, 9:1-4.
38. Guinea F, Castro Neto AH, Peres NMR: Interaction effects in single layer and multi-layer graphene. Eur Phys J Spec Top 2007, 148:117-125.
39. Krompiewski S: Ab initio studies of Ni-Cu-Ni trilayers: layer-projected densities of states and spin-resolved photoemission spectra. J Phys Condens Matter 1998, 10:9663.
40. Arora VK: Failure of Ohm's law: its implications on the design of nanoelectronic devices and circuits. In Proceedings of the 2006 25th IEEE International Conference on Microelectronics: May 14–17 2006; Belgrade. Piscataway: IEEE; 2006:15-22.
41. Rahmani M, Ahmadi MT, Ismail R, Ghadiry MH: Quantum confinement effect on trilayer graphene nanoribbon carrier concentration. J Exp Nanosci, in press.
42. Kumar SB, Guo J: Chiral tunneling in trilayer graphene. Appl Phys Lett 2012, 100:163102.
43. Cubic equation. http://eqworld.ipmnet.ru/en/solutions/ae/ae0103.pdf
44. Choi B: Improvement of drain leakage current characteristics in metal-oxide-semiconductor-field-effect-transistor by asymmetric source-drain structure. In Proceedings of the 2012 IEEE International Meeting for Future of Electron Devices, Kansai (IMFEDK): May 9–12 2012; Osaka. Piscataway: IEEE; 2012:1-2.
45. Alam K: Transport and performance of a zero-Schottky barrier and doped contacts graphene nanoribbon transistors. Semicond Sci Technol 2009, 24:015007.
46. Ouyang Y, Dai H, Guo J: Multilayer graphene nanoribbon for 3D stacking of the transistor channel. In Proceedings of the IEDM 2009: IEEE International Electron Devices Meeting: December 7–9 2009; Baltimore. Piscataway: IEEE; 2009:1-4.
47. Fiori G, Yoon Y, Hong S, Iannaccone G, Guo J: Performance comparison of graphene nanoribbon Schottky barrier and MOS FETs. In Proceedings of the IEDM 2007: IEEE International Electron Devices Meeting: December 10–12 2007; Washington, D.C. Piscataway: IEEE; 2007:757-760.
48. Mayorov AS, Gorbachev RV, Morozov SV, Britnell L, Jalil R, Ponomarenko LA, Blake P, Novoselov KS, Watanabe K, Taniguchi T, Geim AK: Micrometer-scale ballistic transport in encapsulated graphene at room temperature. Nano Lett 2011, 11:2396-2399.
49. Berger C, Song Z, Li X, Wu X, Brown N, Naud C, Mayou D, Li T, Hass J, Marchenkov AN, Conrad EH, First PN, De Heer WA: Electronic confinement and coherence in patterned epitaxial graphene. Science 2006, 312:1191-1196.
50. Novoselov KS, Geim AK, Morozov SV, Jiang D, Zhang Y, Dubonos SV, Grigorieva IV, Firsov AA: Electric field effect in atomically thin carbon films. Science 2004, 306:666-669.
51. Gunlycke D, Lawler HM, White CT: Room temperature ballistic transport in narrow graphene strips.
52. Jiménez D: A current–voltage model for Schottky-barrier graphene-based transistors. Nanotechnology 2008, 19:345204-345208.
53. Liao L, Bai J, Cheng R, Lin Y, Jiang S, Qu Y, Huang Y, Duan X: Sub-100 nm channel length graphene transistors. Nano Lett 2010, 10:3952-3956.
54. Thompson S, Packan P, Bohr M: MOS scaling: transistor challenges for the 21st century.
55. Saurabh S, Kumar MJ: Impact of strain on drain current and threshold voltage of nanoscale double gate tunnel field effect transistor: theoretical investigation and analysis. Jpn J Appl Phys 2009, 48:064503-064510.
56. Jin L, Hong-Xia L, Bin L, Lei C, Bo Y: Study on two-dimensional analytical models for symmetrical gate stack dual gate strained silicon MOSFETs. Chin Phys B 2010, 19:107302.
57. Ray B, Mahapatra S: Modeling of channel potential and subthreshold slope of symmetric double-gate transistor.
58. Rechem D, Latreche S, Gontrand C: Channel length scaling and the impact of metal gate work function on the performance of double gate-metal oxide semiconductor field-effect transistors.
59. Majumdar K, Murali Kota VRM, Bhat N, Lin Y-M: Intrinsic limits of subthreshold slope in biased bilayer graphene transistor. Appl Phys Lett 2010, 96:123504.
Quadratic Equation Hidden, ax^2+bx+c=0

November 6th 2009, 01:50 AM

Hey all, I need help with this equation. It is a hidden equation which I have tried to solve in my maths booklet. The steps that must be followed are:

ax^2 + bx + c = 0

I have to transpose the equation into this form and then use the quadratic formula. Here is my question:

1/R + R = 3 + 7/R

I need to get the a, b, c values and plug them into the quadratic formula, but the issue is I don't get the right answer, which is 4.37. I believe I get the wrong a, b, c values at the start. If I need to be more specific, please tell me.

November 6th 2009, 02:02 AM

Take your equation and multiply each side by R to get rid of fractions. This will give you 1 + R^2 = 3R + 7. Move everything over to the left to get: R^2 - 3R - 6 = 0. So a = 1, b = -3, c = -6. Then use the quadratic formula!

November 6th 2009, 02:04 AM

thank you so much it works.

November 6th 2009, 02:24 AM

Glad to help!

May 17th 2010, 09:26 AM

4.37 is approximate, plus there is a 2nd solution:
[3 + sqrt(33)] / 2 = 4.37228...
[3 - sqrt(33)] / 2 = -1.37228...
Make sure you get that straight: it'll make your future life easier (Evilgrin)
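If anyone wants to double-check the numbers, here's a quick Python sanity check (purely illustrative, obviously not required for the coursework):

```python
import math

# R^2 - 3R - 6 = 0, i.e. a = 1, b = -3, c = -6
a, b, c = 1, -3, -6
disc = b**2 - 4*a*c                     # discriminant: 9 + 24 = 33
r1 = (-b + math.sqrt(disc)) / (2*a)
r2 = (-b - math.sqrt(disc)) / (2*a)
print(r1, r2)                           # ≈ 4.3723 and -1.3723
```

Both roots satisfy the original equation 1/R + R = 3 + 7/R (try substituting them back in).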
Invertible matrix

From Exampleproblems

In mathematics and especially linear algebra, an n-by-n (square) matrix A is called invertible, non-singular, or regular if there exists another n-by-n matrix B such that AB = BA = I[n], where I[n] denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A^−1. A square matrix that is not invertible is called singular. While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any ring.

As a rule of thumb, almost all matrices are invertible. Over the field of real numbers, this can be made precise as follows: the set of singular n-by-n matrices, considered as a subset of R^n×n, is a null set, i.e., has Lebesgue measure zero. Intuitively, this means that if you pick a random square matrix over the reals, the probability that it is singular is zero. This is true because singular matrices can be thought of as the roots of the polynomial function given by the determinant.

Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.

Properties of invertible matrices

If A is a square n-by-n matrix over a field K (for example the field R of real numbers), the following statements are equivalent:

• A is invertible.
• A is row-equivalent to the n-by-n identity matrix I[n].
• A has n pivot positions.
• det A ≠ 0.
• rank A = n.
• The equation Ax = 0 has only the trivial solution x = 0 (i.e. Null A = {0}).
• The equation Ax = b has exactly one solution for each b in K^n.
• The columns of A are linearly independent.
• The columns of A span K^n (i.e. Col A = K^n).
• The columns of A form a basis of K^n.
• The linear transformation mapping x to Ax is a bijection from K^n to K^n.
• There is an n-by-n matrix B such that BA = I[n].
• There is an n-by-n matrix B such that AB = I[n].
• The transpose A^T is an invertible matrix.
• The product A^TA is an invertible matrix.
• The number 0 is not an eigenvalue of A.

In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit in that ring.

The inverse of an invertible matrix A is itself invertible, with

(A^−1)^−1 = A

The inverse of an invertible matrix A multiplied by a nonzero scalar k yields the product of the inverses of the scalar and the matrix:

(kA)^−1 = k^−1A^−1

The product of two invertible matrices A and B of the same size is again invertible, with the inverse given by

(AB)^−1 = B^−1A^−1

(note that the order of the factors is reversed). As a consequence, the set of invertible n-by-n matrices forms a group, known as the general linear group Gl(n).

Proof for matrix product rule

If A[1], A[2], ..., A[n] are nonsingular square matrices over a field, then

$(A_1A_2\cdots A_n)^{-1} = A_n^{-1}A_{n-1}^{-1}\cdots A_1^{-1}$

It becomes evident why this is the case if one attempts to find an inverse for the product of the A[i]s from first principles, that is, if we wish to determine B such that

$(A_1A_2\cdots A_n)B=I$

where B is some matrix, in terms of the A[i]s. To remove A[n] from the product, we can write

$(A_1A_2\cdots A_n)A_n^{-1}B'=I$

where B' is some matrix, which reduces the equation to

$(A_1A_2\cdots A_{n-1})B'=I$

Likewise, then, from

$(A_1A_2\cdots A_n)A_n^{-1}B'=I$

we use the same technique, removing A[n − 1] from the equation, yielding

$(A_1A_2\cdots A_{n-1}A_n)A_n^{-1}A_{n-1}^{-1}B''=I$

where B'' is some matrix, which, when simplified, gives

$(A_1A_2\cdots A_{n-2})B''=I$

If one repeats the process up to A[1], the above property is established.

Methods of matrix inversion

Gauss-Jordan elimination

Gauss-Jordan elimination is an algorithm that can be used to determine whether a given matrix is invertible and to find the inverse.
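A minimal pure-Python sketch of the Gauss-Jordan procedure follows. It is illustrative only: a crude 1e-12 pivot tolerance stands in for a proper singularity test, and partial pivoting is included for numerical stability.

```python
def gauss_jordan_inverse(a):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].
    Raises ValueError if the matrix is (numerically) singular."""
    n = len(a)
    # Build the augmented matrix [A | I]
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring the largest entry of the column up
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Normalize the pivot row, then clear the column elsewhere
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The left half is now I, so the right half of [I | A^-1] is the inverse
    return [row[n:] for row in aug]

inv = gauss_jordan_inverse([[4, 7], [2, 6]])
print(inv)   # approximately [[0.6, -0.7], [-0.2, 0.4]]
```

Feeding it a singular matrix such as [[1, 2], [2, 4]] raises the ValueError, which is the algorithmic counterpart of the det A ≠ 0 criterion above.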
An alternative is the LU decomposition, which generates upper and lower triangular matrices, which are easier to invert. For special purposes, it may be convenient to invert matrices by treating mn-by-mn matrices as m-by-m matrices of n-by-n matrices, and applying one or another formula recursively (other sized matrices can be padded out with dummy rows and columns). For other purposes, a variant of Newton's method may be convenient (particularly when dealing with families of related matrices, so inverses of earlier matrices can be used to seed the generation of inverses of later matrices).

Analytic solution

Writing another special matrix of cofactors, known as an adjugate matrix, can also be an efficient way to calculate the inverse of small matrices (since this method is essentially recursive, it becomes inefficient for large matrices). To determine the inverse, we calculate a matrix of cofactors:

$A^{-1}={1 \over \begin{vmatrix}A\end{vmatrix}}\left(C_{ij}\right)^{T}={1 \over \begin{vmatrix}A\end{vmatrix}} \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & \ddots & & C_{n2} \\ \vdots & & \ddots & \vdots \\ C_{1n} & \cdots & \cdots & C_{nn} \\ \end{pmatrix}$

where |A| is the determinant of A, C_{ij} is the matrix cofactor, and ^T represents the matrix transpose.

In most practical applications, it is in fact not necessary to invert a matrix to solve a system of linear equations. This can instead be done using decomposition techniques like LU decomposition, which are much faster than inversion. Various fast algorithms for special classes of linear systems have also been developed.

Inversion of 2 x 2 matrices

The cofactor equation listed above yields the following result for 2 x 2 matrices.
Inversion of these matrices can be done easily as follows:

$A^{-1} = \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \\ \end{bmatrix}$

Inversion of 3 x 3 matrices

The cofactor equation listed above yields the following result for 3 x 3 matrices. Inversion of these matrices can be done quite easily as follows:

$A^{-1} = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \\ \end{bmatrix}^{-1} = \frac{1}{|A|} \begin{bmatrix} ei - fh & ch - bi & bf - ce \\ fg - di & ai - cg & cd - af \\ dh - eg & bg - ah & ae - bd \end{bmatrix}$

| A | = a(ei − fh) − b(di − fg) + c(dh − eg)

Blockwise inversion

Matrices can also be inverted blockwise by using the following analytic inversion formula:

$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1}+A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1} & -A^{-1}B(D-CA^{-1}B)^{-1} \\ -(D-CA^{-1}B)^{-1}CA^{-1} & (D-CA^{-1}B)^{-1} \end{bmatrix}$

where A, B, C and D are matrix sub-blocks of arbitrary size (A itself must be square and invertible). This strategy is particularly advantageous if A is diagonal and (D − CA^{−1}B) (the Schur complement of A) is a small matrix, since they are the only matrices that need to be inverted. This technique was invented by Volker Strassen, who also invented the Strassen algorithm for fast(er) matrix multiplication.

The Moore-Penrose pseudoinverse

Some of the properties of inverse matrices are shared by (Moore-Penrose) pseudoinverses, which can be defined for any m-by-n matrix.
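The 2 x 2 and 3 x 3 closed-form formulas above are straightforward to check numerically. A small self-contained sketch (pure Python, helper names are mine):

```python
def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix")
    return [[d / det, -b / det], [-c / det, a / det]]

def inv3(m):
    """Inverse of a 3x3 matrix via the cofactor (adjugate) formula."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    if det == 0:
        raise ValueError("singular matrix")
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def matmul(p, q):
    """Plain matrix product, used to check that A times inv(A) gives I."""
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

i2 = inv2([[4.0, 7.0], [2.0, 6.0]])          # det = 10
a3 = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 3.0, 1.0]]  # det = 5
identity_check = matmul(a3, inv3(a3))         # should be the 3x3 identity
```

The entries of `adj` follow, term by term, the 3 x 3 matrix displayed above.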
Aston Algebra 1 Tutor

Find an Aston Algebra 1 Tutor

...If selected to be your tutor, my goal will be not only to assist you with your current math classes, but also to help identify your areas of need and provide supplemental material to enhance these areas. After working with students with various learning differences, I fully understand that not everybody learns the same way. In math, there are various ways to understand a...

12 Subjects: including algebra 1, geometry, ASVAB, SAT math

...I have tutored math and sciences in many volunteer and job opportunities. I have experience with after-school tutoring from 2003-2006. I was an Enon Tabernacle after-school ministry tutor for elementary and high school students 2011-2012.

13 Subjects: including algebra 1, chemistry, geometry, biology

I have taught middle school and high school mathematics in northern Virginia for 8 years. I have tutored privately most of that time as well. I know that everyone learns in a different way and I try to use real world objects, models and examples to help students understand abstract concepts with which they may be struggling.

28 Subjects: including algebra 1, calculus, Microsoft Excel, Microsoft Word

...I took AP Calculus BC and scored a 5. Science: I am available to tutor chemistry, physics and any electrical related topics. I have taken AP physics and AP chemistry.

15 Subjects: including algebra 1, chemistry, calculus, physics

...My formal teaching experiences include student teaching Honors and AP Biology at Lower Merion High School, substitute teaching at various charter schools in the Philadelphia region, and teaching Biology, Anatomy & Physiology, and Chemistry in the School District of Philadelphia. My other teachin...
12 Subjects: including algebra 1, chemistry, geometry, biology
CVRL functions

The functions provided below result from our research into human cone spectral sensitivities and luminous efficiency. They provide a consistent set of colorimetric and photometric data that can be used to model and predict normal and dichromatic colour vision for standard target sizes of 2-deg or 10-deg in diameter. The data are provided either as ASCII CSV (comma separated values) files, as ASCII XML (extensible markup language) files, as HTML tables (which appear in your browser window), or as dynamic graphical plots. Select the data format, the data stepsize and the data units and click the "Submit" button. For further details and references click here.

These versions are useful for avoiding rounding errors in calculations; they are used in particular for calculating the linear transformation between the physiologically-relevant CIE LMS CMFs and the new CIE XYZ CMFs.

• 2-deg fundamentals based on the Stiles and Burch 10-deg CMFs adjusted to 2-deg
• 10-deg fundamentals based on the Stiles and Burch 10-deg CMFs
• 2-deg functions
• 10-deg functions
• Macular pigment
A Review of Nonlinear Oscillatory Shear Tests: Analysis and Application of Large Amplitude Oscillatory Shear (LAOS)

By Hyun, K., Wilhelm, M., Klein, C.O., Cho, K.S., Nam, J.G., Ahn, K.H., Lee, S.J., Ewoldt, R.H. and McKinley, G.H.

Dynamic oscillatory shear tests are common in rheology and have been used to investigate a wide range of soft matter and complex fluids including polymer melts and solutions, block copolymers, biological macromolecules, polyelectrolytes, surfactants, suspensions, emulsions and beyond. More specifically, Small Amplitude Oscillatory Shear (SAOS) tests have become the canonical method for probing the linear viscoelastic properties of these complex fluids because of the firm theoretical background [1-4] and the ease of implementing suitable test protocols. However, in most processing operations the deformations can be large and rapid: it is therefore the nonlinear material properties that control the system response. A full sample characterization thus requires well-defined nonlinear test protocols. Consequently there has been a recent renewal of interest in exploiting Large Amplitude Oscillatory Shear (LAOS) tests to investigate and quantify the nonlinear viscoelastic behavior of complex fluids. In terms of the experimental input, both LAOS and SAOS require the user to select appropriate ranges of strain amplitude (γ_0) and frequency (ω). However, there is a distinct difference in the analysis of the experimental output, i.e. the material response. At sufficiently large strain amplitude, the material response will become nonlinear in LAOS tests and the familiar material functions used to quantify the linear behavior in SAOS tests are no longer sufficient. For example, the definitions of the linear viscoelastic moduli G′(ω) and G″(ω) are based inherently on the assumption that the stress response is purely sinusoidal (linear).
However, a nonlinear stress response is not a perfect sinusoid and therefore the viscoelastic moduli are not uniquely defined; other methods are needed for quantifying the nonlinear material response under LAOS deformation. In the present review article, we first summarize the typical nonlinear responses observed with complex fluids under LAOS deformations. We then introduce and critically compare several methods that quantify the nonlinear oscillatory stress response. We illustrate the utility and sensitivity of these protocols by investigating the nonlinear response of various complex fluids over a wide range of frequency and amplitude of deformation, and show that LAOS characterization is a rigorous test for rheological models and advanced quality control. Keywords: LAOS (Large amplitude oscillatory shear), nonlinear response, FT-Rheology, Stress Decomposition
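As a rough illustration of the FT-Rheology idea mentioned above (the signal and its 10% third harmonic are invented for this demo, not data from the paper), the higher harmonics of a periodic stress response can be extracted by direct Fourier projection; the ratio of the third to the first harmonic is a standard measure of nonlinearity:

```python
import math

def harmonic_magnitudes(stress, periods, max_harmonic):
    """Magnitudes of the odd harmonics of a periodic stress signal, obtained
    by direct Fourier projection (the essence of FT-Rheology). `periods` is
    the number of whole fundamental periods covered by the samples."""
    n_samples = len(stress)
    mags = {}
    for n in range(1, max_harmonic + 1, 2):   # odd harmonics only
        re = im = 0.0
        for k in range(n_samples):
            t = periods * k / n_samples       # time in units of the period
            re += stress[k] * math.cos(2 * math.pi * n * t)
            im += stress[k] * math.sin(2 * math.pi * n * t)
        mags[n] = 2.0 * math.hypot(re, im) / n_samples
    return mags

# Hypothetical nonlinear stress response: fundamental plus a 10% third
# harmonic, qualitatively like a LAOS response of a complex fluid.
N, P = 4096, 8
sigma = [math.sin(2 * math.pi * P * k / N) + 0.1 * math.sin(6 * math.pi * P * k / N)
         for k in range(N)]
mags = harmonic_magnitudes(sigma, P, 5)
i3_over_i1 = mags[3] / mags[1]   # the usual FT-Rheology nonlinearity measure
```

For this synthetic signal the projection recovers the fundamental amplitude 1.0, the third harmonic 0.1, and a vanishing fifth harmonic, so I3/I1 = 0.1.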
Re: st: RE: 'movestay' command

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

From: Clemence Berson <Clemence.Berson@univ-paris1.fr>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: RE: 'movestay' command
Date: Thu, 09 Jul 2009 10:47:43 +0200

I tried all these alternatives before bothering the statalist, but I am not able to program in Stata, so I am not really able to read the code. I sent an e-mail to M. Lokshin and he said he added this option in the last version of the command. I updated my version but I cannot find this option in the help file. Moreover, he did not answer me when I asked for clarification. Could anyone help me?

Thanks for your attention,
Clémence Berson

Nick Cox <n.j.cox@durham.ac.uk> wrote:

You can answer this kind of question yourself:
1. Look at the help to see what options are documented.
2. Look at the code using -viewsource- to see if any options are undocumented.

Clemence Berson wrote:

My current working paper compares the discrimination between the private and public sectors. I am using the Stata command 'movestay' developed by Lokshin and Sajaia. I would like to know whether it is possible to use control variables which are not present in the probit equation of sector choice with this Stata command.

lnw_1i = beta_1 (age age2 reg contract nbemployees) + u_1i
lnw_2i = beta_2 (age age2 reg contract nbemployees) + u_2i
I = delta(lnw_1i - lnw_2i) + gamma(age age2 reg maritalstatus csp_parents) + mu_i

where the type of contract and the number of employees in the firm do not enter in the probability of working in a particular sector. Is there an option to avoid adding these variables to the estimation of I and getting the value of delta?

*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

This message has been checked by MailScanner for viruses or spam, and nothing suspicious was found.
How to Account for a Cash Out Due to Reverse Stock Splits

Original post by Nola Moore of Demand Media

A reverse stock split is when a company reduces the number of its outstanding shares without changing the total value of the shares. For example, if a company enacts a 2-for-3 reverse stock split, then the shareholders would end up with two shares for every three that they had owned prior to the split. At the same time, the price per share increases by the same ratio, so the value to the shareholders stays the same. However, the math doesn't work out evenly, and shareholders can end up with fractional shares. Companies often give cash in lieu (CIL) of fractional shares. Account for this as if you had sold shares on the open market.

Calculate the Cost Basis

Step 1
Add up the total cost basis for your holdings prior to the reverse stock split. This includes the share cost of each purchase plus any fees or commissions related to those trades. It also includes the fair market value of any reinvested dividends. It does not include the cost of any shares you sold prior to the split. So if you bought 100 shares at $15 each, plus $10 in commissions, your total cost basis is $1,510.

Step 2
Divide the total cost basis by the total number of shares you received in the reverse split, including fractions. This is your cost basis per share. If the 100 shares underwent a 1:3 reverse split, you would have 33.333 new shares. Divide the total cost basis of $1,510 by 33.333 to get a per share basis of $45.30.

Step 3
Multiply the per share cost basis by the fractional portion to find the cost basis for the fractional shares. This should be smaller than the cost basis per share. In the example from the previous step, 0.333 fractional shares multiplied by $45.30 is $15.08.

Calculating and Reporting Gain or Loss

Step 1
Subtract the cost basis of the fractional shares from the value of the cash you received. If the result is positive, you have a capital gain.
If it is negative, you have a loss. If you had CIL of $25.00 for your 0.333 shares at a cost basis of $15.08, your total gain is $9.92.

Step 2
Determine whether the gain is short or long term. If you held the shares for more than one year prior to the split, you have a long-term gain. If you held them for one year or less, then you have a short-term gain.

Step 3
Consolidate your capital gain or loss with other similar gains and losses at tax time. Add your gain or loss as a line item on IRS Schedule D -- short-term gains and losses go in the top section of the form, while long-term gains and losses go on the bottom.

Tips & Warnings
• Your brokerage firm may calculate your gain or loss for you on your annual statement and tax forms.

About the Author
Nola Moore has been writing articles since 1999. Based in Santa Monica, Calif., Moore writes and blogs about taxes, trading and trusts for a variety of publications including BankShout, CreditShout and various other websites. She holds a Bachelor of Science in retail merchandising and spent nearly a decade in trust and investment services before leaving Minnesota for the beach.
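The arithmetic in the steps above can be captured in a few lines. A sketch (the function name and rounding choices are mine; note that keeping the 1/3 fraction at full precision yields a $9.90 gain, versus the $9.92 obtained in the example by rounding the fraction to 0.333):

```python
def cil_gain(total_cost_basis, new_shares, cash_in_lieu):
    """Capital gain (positive) or loss (negative) on cash received in lieu
    of fractional shares after a reverse split, following the steps above."""
    per_share_basis = total_cost_basis / new_shares        # Step 2
    fraction = new_shares - int(new_shares)                # fractional portion
    fractional_basis = per_share_basis * fraction          # Step 3
    return cash_in_lieu - fractional_basis                 # gain if positive

# The article's example: $1,510 basis, 1:3 reverse split of 100 shares
# -> 33.333... new shares, $25.00 cash in lieu of the 0.333... fraction.
gain = cil_gain(1510.0, 100.0 / 3.0, 25.00)
```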
Chapter 9

This chapter provides an introduction to OFDM concepts. It first introduces simple signal transmission concepts and orthogonal subcarrier properties. It then illustrates how standards like Wi-Fi, WiMAX, and LTE make use of OFDM properties. We present a cursory overview that allows the reader to understand the fundamentals of OFDM and OFDMA. For more details, refer for instance to [10], [104], [119].

We've seen important and popular standards that use direct spreading sequences on user data, thus implementing a CDMA scheme. We now review another technique called Orthogonal Frequency Division Multiplex (OFDM), which is increasingly popular and adopted by standards like Wi-Fi, WiMAX, and LTE. OFDM techniques consist in splitting a user data stream into several sub-streams, which are sent in parallel on several subcarriers. These sub-streams and subcarriers benefit from a number of properties that we now review in detail.

Recall the classic example of a continuous wave used to encode information: the carrier frequency in itself is not capable of encoding information. The quantity of information s(t) is encoded by changes or modulation of the wave, and affects the amount of spectrum required Δf_c, as shown on figure 9.1.

One can of course use several carriers f_i, i ∈ {1, 2, …, N_c}, and filter them separately. That is a common approach and is used extensively in FDMA systems: in particular, multiple network operators who own licenses over a same area must take care not to exceed allowed levels of adjacent channel interference into one another's bands. OFDM improves on the idea by using orthogonal properties of functions to increase spectral efficiency by choosing a specific interval Δf = f_{i+1} − f_i between subcarriers.

Multiple parallel signal streams are used: s_i(t) = exp(jω_i t) (where ω_i = 2πf_i), and in the frequency domain: S_i(f) = δ(f − f_i). In fact time signals are limited in a time window, and a user information symbol has a time interval for transmission: [0, T_s], so (where u_i is a user information symbol)

$s_i(t) = u_i\, e^{j \omega_i t}, \quad t \in [0, T_s] \qquad (9.1)$

and the frequency domain representation of the signal is modified from a perfect Dirac function δ(f − f_i) to a sinc function:

$S_i(f) = u_i\, T_s\, e^{-j \pi (f - f_i) T_s}\, \frac{\sin\!\big(\pi (f - f_i) T_s\big)}{\pi (f - f_i) T_s} \qquad (9.2)$
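This sinc-shaped spectrum is easy to verify numerically. A sketch (T_s = 3.2 μs is the 802.11a/g value used later in the chapter; the Riemann-sum evaluation is for illustration only):

```python
import cmath

def windowed_carrier_spectrum(f, f_i, t_s, n=2000):
    """Numerically evaluate S(f) = integral over [0, T_s] of
    e^{j2π f_i t} e^{-j2π f t} dt by a midpoint Riemann sum:
    the spectrum of a carrier truncated to one symbol interval."""
    dt = t_s / n
    return sum(cmath.exp(2j * cmath.pi * (f_i - f) * (k + 0.5) * dt)
               for k in range(n)) * dt

t_s = 3.2e-6        # OFDM useful-symbol time (802.11a/g)
f_i = 0.0           # look at the baseband subcarrier for simplicity
peak = abs(windowed_carrier_spectrum(f_i, f_i, t_s))            # |S| at the carrier: T_s
null = abs(windowed_carrier_spectrum(f_i + 1 / t_s, f_i, t_s))  # first zero, 1/T_s away
half = abs(windowed_carrier_spectrum(f_i + 0.5 / t_s, f_i, t_s))  # T_s * |sinc(pi/2)| = 2*T_s/pi
```

The magnitude peaks at T_s on the carrier and vanishes at offsets that are integer multiples of 1/T_s, which is exactly the sinc shape of (9.2).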
In fact time signals are limited in a time window, and a user information symbol has a time interval for transmission: [0, T[s]], so (where u[i] is a user information symbol) and the frequency domain representation of the signal is modified from a perfect Dirac function δ(f - f[i]) to a sinc function: This last expression is derived from Fourier transform, using definitions from the next section. Now recall the duality between time domain and frequency domain, with Fourier transform (and inverse Fourier transform) to switch from one domain to the other. Several definitions exist; let us the following definition of Fourier transform and inverse Fourier transform (Other definitions exist, with different signs under the exponent and different 2π factors, so it is important to always specify what definitions are used.) With this definition, the reader can readily derive formula (9.2): OFDM is a multicarrier modulation in which a user bit stream (of rate R[u]) is transmitted over N[c] subcarriers, each having a symbol rate R[s] = T[s] = Each symbol stream is multiplied by a function φ[k] from a family of orthogonal functions {φ[k]},k 0,…,N[c] - 1}. In CDMA, these functions were Walsh codes, in OFDM, they are windowed complex exponentials (or possibly cosine functions): So in a similar manner to the CDMA forward link presented in section 8.1, multiple channels are multiplexed and combined, using exponential functions instead of Walsh code sequences: (where u[i] = s[i]g[i]; s[i] represents the information symbol (+1 or -1), g[i] is the individual channel gain) or for many successive bits m = 0,1,2,..., etc. That sequence is manipulated further and sent over the air; on the receiver side, that sequence may decoded by using orthogonal properties of {φ[k]}. where δ[k-l] is the Kronecker symbol (i.e. 0 if k = l, 1 otherwise), and the asterisk (*) denotes the complex conjugate of the expression. 
Equations (9.6) and (9.9) gives us the orthogonal condition for subcarriers’ spacing: the right hand side equals zero if and only if (ω[k] -ω[l])T[s] = 2πn, for n non-zero integer. This leads to the condition Δf = n∕T[s], and 1∕T[s] is the smallest separation between two subcarriers. On the receiving side, the sequence may be decoded by simply integrating for each channel; for channel k, the information bit is retrieved from the sign of the integral: Although in this case an additional trick is used, and direct and inverse Fourier transforms are used for decoding. If we then examine the Fourier transform of our functions given in equation (9.6), we obtain a sinc function of pseudo-period T[s], which means that in the frequency domain subcarriers are spaced exactly such that the peak of the next one corresponds to the previous one’s first zero – see figure 9.3. The overall envelope looks a bit like a spread spectrum signal, and may be tapered further to reduce out of band spectral power density. The above choice of orthogonal basis functions has another useful property, relating to Fourier transform. Indeed sampling S[tot] turns the above expressions into the usual discrete Fourier Transform (DFT), and therefore, instead of multiplying, summing, and then integrating for decoding, OFDM allows to simply carry out a DFT and its inverse (IFT), which are very efficient operations. Looking back at the Fourier transform (9.3), and sampling the time function and its Fourier transform (with N samples), one may define the following notations: u[k] = u(t[k]),t[k] = kτ,k {0,1,…,N - 1}, and U[n] = U(f[n]),f[n] = ,n {-N∕2,…,N∕2}. And one obtains the discrete Fourier transform and the inverse discrete Fourier transform Now comparing these discrete transforms to above S′[tot] with the particular OFDM orthogonal functions, one sees that the s[k,m] ⋅g[k,m] coefficients are Fourier transforms of the complex amplitude of the subcarriers. 
Consequently, encoding and decoding of an OFDM signal are in practice not done with an integration like (9.10), but by a simple FFT. The transmitter builds {S(ω_n)} = DFT{s_{k,m} · g_{k,m}}, and the receiver decodes the received spectral signal: S_tot = IDFT{S(ω_n)}. If the sampling is made as a power of 2 (N = 2^p), DFT (and IDFT) algorithms are in O(N log_2 N), referred to as fast Fourier transform (FFT), and are very efficient to implement. OFDM schemes are therefore based on the nearest power of two, and when fewer subcarriers are used, the same order N = 2^p is kept for the FFT algorithms, but with zero entries for a subset N_z = N − N_c.

Note that increasing the number of subcarriers in a given band of spectrum does not increase capacity, but provides a useful parameter to optimize: there is an interesting tradeoff between the number of subcarriers N_c and the subcarrier symbol time T_s. The more subcarriers are used, the longer their symbol time is, which means that the overall rate of information remains the same, but a longer symbol time is useful for multipath mitigation (recall the conditions under which equalizers are required). Consequently subcarrier spacing is a fundamental parameter to choose for an OFDM standard like Wi-Fi or WiMAX.

A few more standard techniques are used in combination with the above OFDM definition in practical radio systems. [104]

• Guard time limits inter-symbol interference (ISI): added guard time allows for larger delay spread and limits multipath interference from one symbol to the next.
• The cyclic prefix limits inter-carrier interference (ICI): by transmitting a cyclical replica of the signal as a cyclic prefix, frequency orthogonality is improved between carriers.
• Data scrambling, FEC encoding, interleaving, puncturing, and even MIMO are also typically used, as in other modern radio systems.

OFDM systems are therefore well suited to resolve rich multipath situations and slowly time-varying channels, which explains their popularity for standards like Wi-Fi. They are however not ideal for Doppler shift and phase noise.

Wi-Fi is a standard for interoperable equipment, certified by the Wi-Fi Alliance, and based on various iterations of IEEE 802.11, which uses OFDM for its highest throughput profiles. Wi-Fi has been the most successful local area network standard, and it is worth spending some time examining some of its OFDM parameters. Details of the 802.11 air interface can be found in a number of references, and recent books have good overviews of the latest efforts [105], including a good overview of 802.11n [106]. We only examine here some aspects of 802.11 as they relate to OFDM, in order to provide some insight on performance goals and limitations.

General parameters: 802.11a/g uses an N = 64 point FFT in a 20 MHz channel, δf = 1/T_s = 312.5 kHz, T_s = 3.2 μs; a 4 μs time block is used, with a 0.8 μs cyclic prefix. 52 of the 64 subcarriers are populated, 4 are pilots (for phase and frequency training and tracking), and 48 actually carry data.

The 802.11g packet structure includes the following:

Short training field (STF): 8 μs; 10x repetition of a 0.8 μs symbol. Uses 12 subcarriers; good autocorrelation property and low peak-to-average ratio; also used for automatic gain control (AGC).

Long training field (LTF): 8 μs, composed of two 3.2 μs training symbols prepended by a 1.6 μs cyclic prefix. Used for time acquisition and channel estimation.

Signal field (SIG): 4 μs (3.2 + 0.8 cyclic prefix), contains 24 bits of BPSK describing transmit rate, modulation, coding, and length. Forms together with the training fields the preamble (totaling 20 μs).

Data field: includes the service field (16 bits: 7 used to synchronize the descrambler, 9 reserved for future use), data bits, tail bits, and optionally padding bits. The data field consists of a stream of symbols, each 4 μs (3.2 + 0.8 cyclic prefix), transmitted over 48 subcarriers, plus 4 pilots.

802.11n is a high throughput amendment to 802.11 containing improvements over 802.11a/g.
They are however not ideal for Doppler shift and phase noise. Wi-Fi is a standard for interoperable equipment, certified by the Wi-Fi alliance, and based on various iterations of IEEE 802.11, which uses OFDM for its highest throughput profiles. Wi-Fi has been the most successful local area network standard, and it is worth spending some time examining some of its OFDM parameters. Details of the 802.11 air interface can be found in a number of references, and recent books have good overview of the latest efforts [105], including good overview of 802.11n [106]. We only examine here some aspects of 802.11 as they relate to OFDM in order to provide some insight on performance goals and limitations. General Parameters: 802.11a/g uses an N = 64 point FFT in a 20MHz channel, δ[f] = 1∕T[s] = 312.5kHz, T[s] = 3.2μs, 4 μs time block is used, with cyclic, 52 of the 64 subcarriers are populated, 4 are pilots (for phase and frequency training and tracking), 48 actually carry data. The 802.11g packet structure includes the following: Short training field (STF): 8 μs; 10x repetition of 0.8 μs symbol. Uses 12 subcarriers; good autocorrelation property and low peak to average ratio; also used for automatic gain control (AGC) Long training field (LTF): 8 μs, composed of two 3.2μs training symbols and prepended by a 1.6μs cyclic prefix. Used for time acquisition and channel estimation. Signal field (SiG): 4 μs (3.2 + 0.8 cyclic prefix), contains 24 bits BPSK describing transmit rate, modulation, coding, length. Forms together with the training field the preamble (totaling 20 μs). Data Field: includes service field (16 bits: 7 used to synchronize descrambler, 9 reserved for future use), data bits, and tail bits, and optionally padding bits. The Data field consists of a stream of symbols, each 4 μs (3.2 + 0.8 cyclic prefix), transmitted over 48 subcarriers, and 4 pilots. 802.11n is a high throughput amendment to 802.11 containing improvements over 802.11a/g. 
So what exactly does improve in this high throughput amendment? We review its major improvements in the physical and MAC layers. A number of improvements in the physical (PHY) layer were designed to increase throughput in some situations, although these improvements may come at a cost, which will be pointed out. Modul. bits/ Hz bits/ subcx 48cx 11g rates 40MHz 11n rates 40MHz 4x4MIMO BPSK1/2 1 1/2 24 6 13.5 54 BPSK3/4 1 3/4 36 9 QPSK1/2 2 1 48 12 27 108 QPSK3/4 2 1.5 72 18 40.5 162 16QAM1/2 4 2 96 24 54 216 16QAM3/4 4 3 144 36 81 324 64QAM2/3 6 4 192 48 108 432 64QAM3/4 6 4.5 216 54 121.5 486 64QAM5/6 6 5 135 540 Table 9.1: Throughput rates for 802.11a/g/n calculated for different modulations, given the number of data subcarriers used (48 for 11g, and 52 for 11n, 108 in 40MHz) and the symbol time of 3.2μs hence a data rate per subcarrier of 312.5kbps ×3.2∕4 = 250kbps, because only 3.2 of 4 μs carry actual data. OFDM carriers: 48 data subcarriers (+4 pilots) for 11g, 52 (+4 pilots) for 11n, and 108 (+6 pilots) for 40MHz operations. That increase in data subcarriers brings the maximum throughput up from 54Mbps to 58.5Mbps. Tradeoff: higher cost of mitigating interference in adjacent channel. Maximum FEC rate is increased from 3/4 to 56, hence reaching maximum data rate of 65Mbps, or 135Mbps in 40MHz channels. Tradeoff: more errors may occur, in which case the system can revert to lower modulation. Guard interval: The guard interval, or cyclic prefix may be shortened from 0.8μs to 0.4μs, thus increasing actual data rate to 72.25Mbps, or 150Mbps in 40MHz. Tradeoff: more ISI. Multiple spatial streams: MIMO offers 2, 3, or even 4 times the above rates, reaching 270/300, 540/600Mbps rates. Tradeoff: system complexity and cost. Greenfield Preamble: The 802.11n preamble is modified for higher throughput, adding a high throughput training field (series of 4 μs fields). 
A legacy mode appends this new preamble to the 11g preamble for backward compatibility, whereas the shorter “greenfield” 802.11n only preamble increase throughput by 10 to 15 percent, at the cost of backward compatibility. Low density parity check (LDPC): increases FEC efficiency and throughput in some cases. The media access control (MAC) layer deals with multiple element addressing, channel access prioritization, and control. It transmits among other things beacons with regulatory and management information (such as country code, allowed channels, max power), and scans channels for beacons. Scanning is usually done passively but when regulations allow it, active probe requests can be sent for specific SSIDs or BSSIDs. MAC improvements for 802.11n include: large frame may see considerable channel variations over time (especially in poor condition, thus low bitrates). Consequently frames can be fragmented, and only an erred fragment needs to be resent. MAC service data units (MSDU) above a certain settable threshold are broken into several fragments sent over different MAC protocol data units (MPDU). Aggregated MSDU or MPDU: A-MSDU is an efficient MAC frame format that aggregates multiple MSDUs in a single MPDU which maximum size is extended to 4 KBytes and optionally 8 KBytes. A-MPDU is another form of aggregation that aggregates multiple MPDUs in a single MPDU, which maximum size is extended to 64 KBytes. Enhanced Block Ack: a new scheme in which the sender requests to enter block acknowledgement (BA) request session, in which BA are requested periodically instead of having ongoing BA. Optional features: Other optional features are standardized as part of the 802.11n MAC: □ a reduced inter-frame space (RIFS<SIFS) during data bursts improves burst efficiency, □ a reverse direction protocol allows clients to let peers use potential unused transmit slots, □ fast link adaptation optimizes throughput vs. 
loss in fast changing channel conditions, □ transmit beamforming (TxBF) control, □ power save multi-poll (PSMP): a new channel scheduling scheme with very few idle mode transmissions for handheld devices to save power. A number of further development are in the work for 802.11. They produce new amendments to the specification with the following goals: is another high throughput amendment to 802.11 containing improvements over 802.11a/g/n. It aims at providing very high throughput at 5-6GHz, it makes use of higher modulation schemes, bonding multiple channels, and multiple spatial streams (up to 8x8 MIMO) to reach several Gbps throughput. is also a high throughput amendment to 802.11, at 60GHz rather that the typical 2.4GHz and 5GHz of 802.11a/g/n. 802.11ad uses higher modulation schemes, wider channels and multi user MIMO techniques. The standard aims at reaching multi Gbps throughput. This is the Wi-Fi version (based on 802.11ac physical layer) for use in several contiguous channels of TV White Space spectrum (TVWS), 5,10,20,40MHz channels, 8-16μs symbol time for LAN application, unlike the larger symbol times of 802.22 for wider areas. Planned March 2013. In addition, a similar standard was recently produced to deal with longer links in TV white space. 802.22 addresses Wireless Regional Area Networks (WRAN), PHY MAC, policies and procedures for operations in TV white spaces (TVWS); the standard was published 7/2011, and was widely reported on in the press, nicknamed super Wi-Fi. Given the TVWS spectrum landscape, 802.22 defines 6, 7, or8 MHz channels, it uses 2048FFT, up to 64QAM, 200-300μs symbol time, which adapts well to wider area delay spreads. IEEE 802.16 is a standard for wide area wireless networks [107]. The group focuses from the beginning on important service providers’ requirements for service reliability. Consequently 802.16 standardizes important features such as quality of service (QoS), security, flexible and scalable operations in many RF bands. 
WiMAX goes one step further and narrows down some implementation choices of 802.16 in order to achieve interoperation between equipment manufacturers. WiMAX still standardizes several air interfaces and several profiles in different frequency bands. Of course, performance varies with frequency, channel bandwidth, and other profile characteristics; and conformance between products and suppliers exists only within a given profile. [108]

Two very different families of WiMAX systems exist: fixed and mobile WiMAX. In addition, a regional initiative, WiBro, which resembles mobile WiMAX, has been standardized in Korea.

Fixed WiMAX (802.16-2004 [109]) is a standard for fixed broadband access. Several profiles exist for fixed WiMAX, including different bandwidths, carrier frequencies, and duplexing schemes. Its air interface is based on Orthogonal Frequency Division Multiplexing (OFDM), and access between multiple users within a sector is managed by time-division multiple access (TDMA). While equipment has been available since 2004, true conformance testing [110] led to the first WiMAX equipment being certified in January 2006. We will examine in this chapter profiles with 3.5 MHz channels (TDD and FDD) at 3.5 GHz, and 10 MHz TDD channels at 5.8 GHz.

Mobile WiMAX (802.16e-2005 [111]) defines a different standard with considerations such as location registers, paging, handoff, battery-saving modes, and other network functions to manage mobility. Its air interface is based on Orthogonal Frequency Division Multiple Access (OFDMA), with 5, 7, 8.75, and 10 MHz channel widths for operations in the 2.3 GHz, 2.5 GHz, 3.3 GHz, and 3.5 GHz frequency bands.

WiBro is a Korean initiative for Wireless Broadband. Similar in many ways to mobile WiMAX, WiBro includes mobility and handoff, and has been commercially available in Korea since mid-2006. WiBro operates in 10 MHz TDD channels at 2.3 GHz and uses OFDMA. It targets mobile usage up to 60 mph.
Although the standards community is focusing on mobile WiMAX, fixed WiMAX applications still have a small role to play, especially in less dense areas. Small and large service providers worldwide have conducted over 200 fixed WiMAX trials, and analysts once estimated some growth potential for the fixed wireless market. All in all, fixed wireless access usually remains a fairly small-scale offering, led by small carriers, and does not achieve the order of magnitude of mobile wireless carriers. (Recall figure 1.4 from chapter 1.)

Fixed WiMAX is based on the 802.16d standard and has the following properties:
• OFDM air interface with 256 subcarriers in 1.75 or 3.5 MHz channels, TDMA
• TDD, FDD, or hybrid FDD (H-FDD) operations
• Adaptive modulation (BPSK 1/2 to 64QAM 3/4)
• Power control: uplink open-loop and closed-loop (up to 30 dB/second)
• Forward error correction (concatenated Reed-Solomon convolutional coding, convolutional turbo encoding, or block turbo encoding)
• Puncturing for rate variability
• Optional STS transmit diversity
• 802.16d flexible channel bandwidth from 1.75 MHz to 20 MHz, but only a subset (1.75, 3.5 MHz) for WiMAX profiles
• MAC supports Automatic Repeat Request (ARQ) to remedy imperfect link adaptation; bandwidth allocated differently for different physical modes; contention-based access
• Standard QoS profiles (real time to best effort)
• Optional mesh-mode topology for license-exempt operation
• Product interoperability and conformance certified at 3.5 GHz [112]

OFDM is primarily used for fixed access. For mobility, WiMAX uses a method for providing multiple-user access in different simultaneous OFDM subchannels. This Orthogonal Frequency Division Multiple Access (OFDMA) is the true focus of the 802.16 and WiMAX standards. Figure 9.4 shows how groups of subcarriers form subchannels, which are allocated to different users (as well as pilot and control channels).
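The subcarrier arithmetic behind these OFDM profiles is easy to check. The sketch below assumes the sampling rate equals the nominal channel bandwidth (a simplification; the actual standards use slightly different sampling factors), and derives the subcarrier spacing and useful symbol time for three of the profiles discussed in this chapter:

```python
# Subcarrier spacing and useful symbol time for a few OFDM profiles.
# Assumption: sampling rate = nominal channel bandwidth (a simplification).
profiles = {
    "802.11a/g (64-pt FFT, 20 MHz)": (64, 20e6),
    "802.16d (256-pt FFT, 3.5 MHz)": (256, 3.5e6),
    "802.22 (2048-pt FFT, 8 MHz)":   (2048, 8e6),
}
for name, (n_fft, bw) in profiles.items():
    delta_f = bw / n_fft        # subcarrier spacing in Hz
    t_useful = 1.0 / delta_f    # useful symbol duration in seconds
    print("%s: df = %.2f kHz, Tu = %.1f us"
          % (name, delta_f / 1e3, t_useful * 1e6))
```

The 256-subcarrier, 3.5 MHz fixed-WiMAX profile comes out near 13.7 kHz spacing and a 73 μs useful symbol, an order of magnitude longer than Wi-Fi's 3.2 μs, which is why it tolerates larger outdoor delay spreads; the 2048-point 802.22 profile is longer still, consistent with the 200-300 μs symbol times quoted above.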
Mobile WiMAX is based on the 802.16e standard [113][114] and has the following properties:
• 802.16e OFDMA air interface with a scalable number of subcarriers (128 to 2048) in different channel widths
• TDD, FDD, or hybrid FDD (H-FDD) operations
• Adaptive modulation (BPSK 1/2 to 64QAM 3/4)
• Power control: uplink open-loop and closed-loop (up to 30 dB/second)
• Forward error correction (concatenated Reed-Solomon convolutional coding, convolutional turbo encoding, or block turbo encoding)
• Puncturing for rate variability
• Optional Adaptive Antenna Systems (AAS) for MIMO diversity
• 802.16e flexible channel bandwidth from 1.75 MHz to 20 MHz, but only a subset for WiMAX profiles
• MAC supports Automatic Repeat Request (ARQ) to remedy imperfect link adaptation; bandwidth allocated differently for different physical modes; contention-based access
• Elaborate QoS profiles
• Handoff and soft handoff for mobility
• Optional mesh-mode topology in the works
• Product interoperability and conformance efforts at 2.3-2.5 GHz [112]

Figure 9.4: OFDMA subcarriers, as used by WiMAX: at a given time, certain subgroups of subcarriers are dedicated to specific subscribers.

Figure 9.5: WiMAX subframes are flexible in allocating subcarriers to different subscribers according to demand. (Source: www.wimaxforum.org white paper)

WiMAX frame structures are flexible in their use of subcarriers, which can be allocated to different subscriber units according to their needs (figure 9.5). The number of subcarriers is used as a means to establish frequency reuse schemes. Recall from §2.1.1 that the reuse factor has a strong impact on spectrum efficiency, and that one of the strengths of CDMA is to allow a reuse factor of one, whereas TDMA schemes needed higher reuse factors.
Mobile WiMAX and OFDMA use fractional reuse to optimize spectrum in different areas: the concept is simple: use all subcarriers near the center of the cell (full use of subcarriers, or FUSC), but make only partial use of subcarriers (PUSC) in areas where they would interfere.

Figure 9.6: Fractional reuse of subcarriers: some areas only use subgroups of subcarriers (F1, F2, or F3) to avoid interference where they overlap. Areas near the center can make full use of all subcarriers.

Further work in 802.16m will provide the 4G evolution in a backward-compatible way (including MIMO and OFDMA); 4G improvements are inserted in reserved fields that can be ignored by legacy 802.16e gear but utilized by future 802.16m equipment.

The goal of LTE is to provide 3GPP with further evolutions, improving its architecture, throughput, and spectrum efficiency. LTE can:
• provide throughput up to 100 Mbps downlink and 50 Mbps uplink in 20 MHz (2x20 MHz FDD)
• achieve spectral efficiencies of 5 bps/Hz downlink and 2.5 bps/Hz uplink, while maintaining coding rates exceeding 1/2
• LTE-Advanced (rel 10) further increases these goals to 1 Gbps/500 Mbps, and 30/15 bps/Hz for downlink/uplink
• be optimized for user speeds around 15 km/h, while supporting high performance up to 120 km/h, and supporting even higher speeds
• offer scalable capacity with 1.4 MHz to 20 MHz RF channels (FDD)

LTE's air interface, like other 4G standards, revolves around OFDMA. MIMO is used to either enhance data rates or increase data integrity (diversity and MRC). The other usual tools are used as well: convolutional and turbo codes, and adaptive modulation (QPSK, 16QAM, 64QAM). LTE offers a flexible range of channel bandwidths (1.4, 3, 5, 10, 15, or 20 MHz), which is well adapted to the current cellular and PCS bands.

Figure 9.7: 4G throughput goals as they apply to LTE were represented in the standards community by this picture, nicknamed 'the van' for its shape: it shows throughput evolution goals as a function of mobility speed.
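The spectral-efficiency targets in the LTE list above follow directly from the peak rates and the channel width; a one-line check of the rel-8 numbers:

```python
# Rel-8 LTE targets: peak rate divided by bandwidth = spectral efficiency.
dl_rate, ul_rate, bandwidth = 100e6, 50e6, 20e6   # bps, bps, Hz
print(dl_rate / bandwidth)   # 5.0 bps/Hz downlink
print(ul_rate / bandwidth)   # 2.5 bps/Hz uplink
```

The LTE-Advanced goals of 30/15 bps/Hz cannot be reached in a single 20 MHz carrier with these modulations; they rely on carrier aggregation and higher-order MIMO, discussed below.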
Subsequent releases of 3GPP LTE have been published:
• Release 8, published December 2008, is the first release for LTE.
• Release 9, published December 2009, added location services, MBMS support, multi-standard support, and regional requirements for North America.
• Release 10, published March 2011, is the first release for LTE-Advanced, and includes carrier aggregation and enhanced MIMO.
• Release 11, on which work started in 2010, continues improvements on LTE-Advanced.

LTE uses OFDMA for the downlink, with a fairly simple frame structure, and SC-FDMA for the uplink. LTE FDD uses 10 ms frames, divided into 20 slots of 0.5 ms each (two slots per 1 ms sub-frame). Each slot uses 7 OFDM symbols, each with a cyclic prefix. Subcarrier separation is Δf = 1∕T[u] = 15 kHz, where T[u] is the useful OFDM symbol period. (For multimedia broadcast multicast service (MBMS) dedicated cells, a reduced carrier spacing of Δf = 7.5 kHz can be used in the downlink.) A cyclic prefix (CP) is used to duplicate part of the symbol: total symbol duration T[s] = T[u] + T[cp]. For the normal 15 kHz subcarrier spacing, the normal CP yields 7 OFDM symbols per slot, which works well in typical urban multipath (T[u] = 66.7 μs, and T[cp] = 5.21 μs for the first symbol, 4.7 μs for the following symbols). An extended CP for larger cells or heavy multipath is available: T[cp] = 16.67 μs.

This splits radio resources into time and frequency elements, called resource blocks. On the frequency scale a resource block is 12 subcarriers wide (180 kHz); on the time scale it is one slot (0.5 ms).

                          OFDMA symbols/slot   Subcarriers/RB   CP (samples)           CP (μs)
Δf=15 kHz, normal CP      7                    12               160 first, 144 after   5.2 first, 4.7 after
Δf=15 kHz, extended CP    6                    12               512                    16.7

Table 9.2: LTE cyclic prefix lengths in number of symbols, subcarriers, and time.

There are three downlink channels in the physical layer: shared, control, and common control. And there are two uplink channels: the shared and the control channel. The modulation techniques used for uplink and downlink are QPSK, 16QAM, and 64QAM, while the broadcast channel uses only QPSK.
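The cyclic-prefix sample counts in table 9.2 can be sanity-checked against the slot length. Assuming the 3GPP baseline for 20 MHz LTE of a 30.72 MHz sampling rate and a 2048-point FFT (these two values are not stated in the text above, but are standard), both CP configurations tile a 0.5 ms slot exactly:

```python
# Verify that normal-CP and extended-CP symbols both fill a 0.5 ms slot.
fs = 30.72e6                                 # baseline sampling rate (20 MHz LTE)
n_fft = 2048                                 # FFT size; 1/(fs/n_fft) = 66.7 us
normal = (n_fft + 160) + 6 * (n_fft + 144)   # 1 long-CP symbol + 6 short-CP symbols
extended = 6 * (n_fft + 512)                 # 6 extended-CP symbols
print(normal / fs, extended / fs)            # both print 0.0005 (one slot)
```

Both sums come to 15360 samples, i.e. exactly one 0.5 ms slot, which is why the normal CP fits 7 symbols per slot and the extended CP only 6.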
Figure 9.8: The LTE physical layer uses multiple OFDMA subcarriers and symbols separated by guard intervals.

The uplink standard departs from the usual OFDMA approach: it uses single-carrier FDMA (SC-FDMA). SC-FDMA is a type of frequency domain equalization (FDE). In SC-FDMA, a bit stream is converted into single-carrier symbols, then a discrete Fourier transform (DFT) is applied to it; the DFT outputs are mapped to subcarriers, and an inverse DFT (IDFT) is performed to convert back to the time domain for transmission. Much like in OFDMA, the signal has a cyclic prefix to limit ICI, and pulse shaping is used to limit ISI.

Parameters similar to the downlink's are used: subcarrier spacing 15 kHz, CP normal or extended. (Note that the CP is the same for all UEs in a cell, and the same as the downlink's.) The uplink uses the same symbol period and resource elements as the downlink. Resource blocks are defined in the same manner, with N[SC]^RB = 12 subcarriers, and N[RB] depends on the bandwidth: 6, 15, 25, 50, 75, or 100. The reasons for preferring SC-FDMA over OFDMA are mainly that transmitting mobile units have strict limitations on transmit power, and that peak-to-average power ratios (PAPR) are high for OFDMA.

Figure 9.11: CDF PAPR comparison for OFDMA used in the LTE downlink, and SC-FDMA localized mode (LFDMA) used in the LTE uplink – 256 total subcarriers, 64 subcarriers per user, 0.5 roll-off factor, (a) QPSK, (b) 16QAM.

LTE physical layer throughput calculations are easily derived from the 3GPP specifications: 1 radio frame has 10 sub-frames, each sub-frame has 2 time-slots, each time-slot is 0.5 ms long, and 1 time-slot has 7 modulation symbols or OFDMA symbols (when the normal CP length is used). Each modulation symbol carries 6 bits at 64QAM (note that these are physical layer bits, not actual user information). A resource block (RB) uses 12 subcarriers. Assume a 20 MHz channel bandwidth (100 RBs) and normal CP.
The number of bits in a 1 ms sub-frame is 100 RBs x 12 subcarriers x 2 slots x 7 modulation symbols x 6 bits = 100800 bits. So the data rate is 100.8 Mbps. For 4x4 MIMO the peak data rate is simply four times that, or 403 Mbps. (Of course, more robust FEC coding lowers the bit rate: to 336 Mbps at 64QAM 5/6, or 302 Mbps at 64QAM 3/4.)

Note that the above accounts for every resource block, which has to carry overhead signaling, reference signals, etc. Practically, looking at resource elements in a resource block for one (1 ms) subframe, some resource elements are reserved (for instance with control frame indicator CFI=2).

Figure 9.12: Some LTE resource elements are reserved for control channels and reference signals; only a subset are used for user data, thus lowering actual throughput.

Out of the 12x14 REs, 36 are used for control (PDCCH) and reference signals, so only 132 can carry data. (See figure 9.12.) So roughly 20% of the physical layer data rate is reserved, and the maximum physical layer data rate is 80.64 Mbps (or 322.56 Mbps in 4x4 MIMO). Commonly cited numbers for LTE are 75 Mbps uplink and 300 Mbps downlink; this is because layer 2 has additional transport block size (TBS) restrictions and frame overhead (typically around 9-10%), leading to the 75 Mbps and 300 Mbps rates (for 4x4 MIMO in 20 MHz).

Figure 9.13: A comparison table between various OFDM standards is a good starting point for comparison between standards; it clearly outlines the advantages of certain standards.

Many other wireless standards exist and are continually in development for a wide range of applications. Figure 9.14 shows a summary of the most popular ones with their typical throughput, range, and domains of application. Such a table is difficult to keep up to date, as standards work focuses on new needs and new opportunities and incorporates the latest technology innovations as needed.
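The LTE peak-rate arithmetic above can be replayed in a few lines (the 20% overhead figure is the text's rounding of the 36/168 reserved resource elements):

```python
# Raw physical-layer bits per 1 ms subframe: 20 MHz, normal CP, 64QAM.
rbs, subcarriers, slots, symbols, bits = 100, 12, 2, 7, 6
raw_bits = rbs * subcarriers * slots * symbols * bits
print(raw_bits)                         # 100800 bits per ms -> 100.8 Mbps

reserved = 36 / (12 * 14)               # ~21.4%, rounded to 20% in the text
print(raw_bits * (1 - 0.20) / 1e3)      # 80.64 Mbps after control/reference overhead
print(4 * raw_bits * (1 - 0.20) / 1e3)  # 322.56 Mbps with 4x4 MIMO
```

Dividing bits-per-millisecond by 1000 gives Mbps directly, which is why the single-stream figure prints as 80.64.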
Some of them become extremely successful and nearly omnipresent, while others miss their window of opportunity and nearly die on the vine. They span a wide range of industry interests, from wireless cellular and LANs to smart grids, RFID tagging, entertainment and consumer electronics, and much more.

Figure 9.14: A comparison table between many wireless standards shows their approximate throughput, range, and applications.

1. Derive equation (9.2) using the definition of the Fourier transform in §9.1.2.

2. An 802.11a system uses N[c] = 48 subcarriers for data and 4 more for pilots.
   1. What is the nearest power of two N = 2^p?
   2. 802.11a uses 20 MHz channels; what is Δf between subcarriers in such a channel?
   3. What is each user's information bit rate? (Assume BPSK modulation, i.e. only one bit transmitted per symbol.)
   4. Compare to the spectral efficiency of WCDMA, where 3.84 Mc/s are transmitted in 5 MHz. (Again assume 1 chip is 1 symbol for that comparison.)

   Other non-standard solutions are becoming popular, such as one by Flarion (now owned by Qualcomm). The proposal was the basis of work for another IEEE group to be created: 802.20. The proposal initially used 113 subcarriers, 17 of which are used for pilots. (The next four questions refer to this solution.)
   5. What is the nearest power of two N = 2^p?
   6. This uses 1.25 MHz channels (convenient for current CDMA providers). What is Δf between subcarriers in such a channel?
   7. What is each user's information bit rate? (Assume BPSK modulation, i.e. only one bit transmitted per symbol.)
   8. Compare to the spectral efficiency of 802.11a above.

3. Besides spectral efficiency, what advantages and disadvantages can you think of between the solutions presented in the previous problem? (Hint: think of fading characteristics and §9.1.5.)

4. The following 14-bit sequence 00001111001101 is to be encoded on an OFDM system. Represent each bit by a BPSK symbol, ±1. Ignore any pilot signal, i.e. every subcarrier is for data transmission.
   1.
Implement a system within 1 MHz of spectrum bandwidth. Specify how many subcarriers are used and their frequency separation.
   2. Compute complex coefficients for each subcarrier by FFT. (Zero out the N[z] trailing points.) Use MATLAB, for instance, to compute the FFT.
   3. Show the approximate spectral shape, i.e. the modulus of the sum of all subcarriers with their associated coefficients. (Use MATLAB or any other graphics software, or approximate by a hand drawing; in any case, show the details of your method.)
   4. Truncate the result to the first 14 bits, again fill in the remaining bits with zeros, compute the IFFT, and explain how to retrieve the data from the original bit stream.

5. We consider a Wi-Fi 802.11g system where approximately 64 subcarriers are used in a 20 MHz channel.
   1. Assuming that symbol periods must be greater than 10 times the delay spread (T[s] > 10σ[τ]), what is the maximum delay spread in which this system performs well? (For simplicity, ignore guard bands, cyclic prefix, etc., and assume that the entire symbol duration is for user data.)
   2. What happens if the delay spread is much greater?
   3. Searching for typical delay spreads in various sources, is the Wi-Fi subcarrier spacing adequate for most indoor environments?
   4. We now consider a WiMAX 802.16d system with 256 subcarriers over a 3.5 MHz channel. Searching again for typical delay spreads in various sources, in what environment would this system be appropriate? (Indoors? In rural areas? In major cities?)
   5. 802.16e now standardizes 512 subcarriers for 3.5 MHz channels. In what environment might this be an improvement?
   6. Explain why WiMAX is better suited for providing wireless access throughout a city than Wi-Fi access points.

6. A city is interested in a system providing coverage for its citizens city-wide.
   1. Suggest a few state-of-the-art wireless systems for the city's consideration and justify your choices.
   2.
Make a table showing the advantages of these technologies for a carrier to provide extensive wireless coverage for a city. (Include at least considerations around spectrum, cell sizes (consider power allowed, propagation, and delay spread), indoor service availability, and mobility (Doppler, handoff).)
   3. The city is also interested in having its police, fire department, and other first-response emergency services communicate on that system. Are there any additional important arguments to consider for this type of use? Would they preclude any of your above systems, and why?
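As a starting point for problem 4, the FFT round trip can be sketched in a few lines of Python, with NumPy standing in for MATLAB. The 16-point transform size and the conventional IFFT-at-transmitter/FFT-at-receiver arrangement are assumptions for illustration, not part of the problem statement:

```python
import numpy as np

bits = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1])  # problem 4 sequence
bpsk = 2 * bits - 1                     # BPSK mapping: 0 -> -1, 1 -> +1

N = 16                                  # nearest power of two >= 14 subcarriers
X = np.zeros(N)
X[:bits.size] = bpsk                    # one symbol per subcarrier, 2 zeroed tones
x = np.fft.ifft(X)                      # transmitted time-domain OFDM symbol

X_hat = np.fft.fft(x).real[:bits.size]  # receiver: FFT back, keep the 14 data tones
recovered = (X_hat > 0).astype(int)     # BPSK decision
print(np.array_equal(recovered, bits))  # True
```

Because the IFFT/FFT pair is exact up to numerical precision, the threshold decision recovers the original sequence; a plot of |fft(x, many_points)| would show the approximate spectral shape asked for in part 3.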
Univariate GARCH(P,Q) parameter estimation with Gaussian innovations

[Kappa, Alpha, Beta] = ugarch(U, P, Q)

U: Single column vector of random disturbances, that is, the residuals or innovations (ε_t) of an econometric model representing a mean-zero, discrete-time stochastic process. The innovations time series U is assumed to follow a GARCH(P,Q) process.

Note: The latest value of residuals is the last element of vector U.

P: Nonnegative, scalar integer representing a model order of the GARCH process. P is the number of lags of the conditional variance. P can be zero; when P = 0, a GARCH(0,Q) process is actually an ARCH(Q) process.

Q: Positive, scalar integer representing a model order of the GARCH process. Q is the number of lags of the squared innovations.

[Kappa, Alpha, Beta] = ugarch(U, P, Q) computes estimated univariate GARCH(P,Q) parameters with Gaussian innovations. Kappa is the estimated scalar constant term (κ) of the GARCH process. Alpha is a P-by-1 vector of estimated coefficients, where P is the number of lags of the conditional variance included in the GARCH process. Beta is a Q-by-1 vector of estimated coefficients, where Q is the number of lags of the squared innovations included in the GARCH process.

The time-conditional variance, σ²_t, of a GARCH(P,Q) process is modeled as

    σ²_t = κ + Σ_{i=1..P} α_i σ²_{t-i} + Σ_{j=1..Q} β_j ε²_{t-j}

where α represents the argument Alpha, β represents Beta, and the GARCH(P,Q) coefficients {κ, α, β} are subject to the following constraints: κ > 0, α_i ≥ 0, β_j ≥ 0, and Σ_i α_i + Σ_j β_j < 1.

Note that U is a vector of residuals or innovations (ε_t) of an econometric model, representing a mean-zero, discrete-time stochastic process. Although σ²_t is generated using the equation above, ε_t and σ²_t are related as

    ε_t = σ_t ν_t

where {ν_t} is an independent, identically distributed (iid) sequence ~ N(0,1).

Note: ugarch corresponds generally to the Econometrics Toolbox™ function garchfit. The Econometrics Toolbox software provides a comprehensive and integrated computing environment for the analysis of volatility in time series. For information, see the Econometrics Toolbox documentation or the financial products Web page at http://www.mathworks.com/products/finprod/.

See ugarchsim for an example of a GARCH(P,Q) process.

See Also: garchfit | ugarchpred | ugarchsim
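To make the recursion concrete, here is a minimal GARCH(1,1) simulation in the convention used above (Alpha on the lagged conditional variance, Beta on the lagged squared innovation). It mirrors what ugarchsim does, not the ugarch estimator itself, and the parameter values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Recursion: sigma2[t] = kappa + alpha * sigma2[t-1] + beta * eps[t-1]**2
kappa, alpha, beta = 0.1, 0.8, 0.1      # alpha + beta < 1 -> covariance stationary
T = 5000
eps = np.zeros(T)
sigma2 = np.empty(T)
sigma2[0] = kappa / (1 - alpha - beta)  # start at the unconditional variance
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = kappa + alpha * sigma2[t - 1] + beta * eps[t - 1] ** 2
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()   # eps_t = sigma_t * nu_t

print(eps.var())   # close to the unconditional variance kappa/(1-alpha-beta) = 1.0
```

Feeding a series like eps back into an estimator such as garchfit should recover parameters near the ones used to generate it, given a long enough sample.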
Figure 4. Positions of reflection, transmission, and adsorption events for the quantum-classical calculations, in a representative graphene hexagon, using SCC-DFTB. Adsorption (left) shows clustering of hydrogen atoms around the lattice carbons. Reflection (center) is most probable at the perimeter of the hexagon, where interactions are strongest. Transmission (right) can occur at most points in the lattice for high energies but tends to occur at the hexagon center due to the low barrier.

Ehemann et al. Nanoscale Research Letters 2012 7:198 doi:10.1186/1556-276X-7-198
Hiii can someone help me with a Physics problem? Please and thank ya! :)

oh boy, I looked at it! too late to get into though. you should get good help with that one, though it is tricky again..

OK, the work done is basically the force that is influencing the movement times the distance. So we need to find the projection of the force in the direction of the movement, and it must be equal to the force of gravity summed with the friction, because it cannot have any acceleration for the speed to remain constant, and force = ma.

Considering that, we get: \[-mg-\mu N+F\sin \theta=0\]

The normal force is the force balancing the force that's pulling the block towards the wall, so: \[N=F\cos \theta\]

So the work done is given by: \[W=\Delta x F\sin \theta=(mg+\mu F\cos \theta)\Delta x\]

how do we find the change in x?

Now, what's missing is F. To find it we use the fact that the total force on the object must be zero. \[F\sin \theta=mg+\mu N=mg+\mu F\cos \theta\] Isolating F we get: \[F\sin \theta-F \mu \cos \theta =mg\] \[F=\frac{mg}{\sin \theta-\mu \cos \theta}\] Plug that into the formula for the work, and you have all the numbers now to calculate it. The change in x is given, it is 3 m.
so instead of N should i plug in Fcosθ

This is the work done by F; now to get the work done by gravity it is easy, we have the change in x and we also have the magnitude of the force, the only tricky part is that the work must be negative, because the force is going against the movement and not favoring it.

and what equation would we use for that?

Yes, for the first, and \[W_G=-\Delta x mg\] for the second. Are you understanding what I'm doing here? Because this formula should be natural for you now.

a little, not really because i havent used it before

or at least i dont believe i have

It is just gravitational potential in disguise natasha! (GPE = mgh) the work done is just the potential energy 'gained' (but negative)

i understand the first part though, not really how you got the equation but how it makes sense

but then where is the h @furnessj

Yes, thats it, the h is x, just a different name.

h is the height gained, or just x here, anyway, leaving Ivan to it!

oh okay, so why cant we keep it as h?

oh okay!

If you understood everything so far, the only thing left is the normal force, which you already have the formula for, and all the numbers involved.
Just as a complementary comment to what furnessj said, the work done is also the change in the kinetic energy, and that is always true. However, the work is minus the change in the potential energy related to that force only if it is conservative; friction for example does work, but has no potential energy concept related to it.

Any doubts?

no, but how can i find the normal force formula?

@ivanmlerner ?

Oh, the normal force is the force that balances the force that's pulling the object towards the wall, so that it keeps still and doesn't "enter" the wall. It is the 3rd law of Newton, that any force must have an opposite force balancing it. In this case what is pulling the object to the wall is the horizontal component of F, so its magnitude is Fcos theta.

Oh, it would be that simple?

Yes, that simple

of course @natasha.aries , that simple :)

haha! okay thanks! can i try plugging it in a few minutes, and would you be able to tell me if its right?

ok... ur a good learner

haha thank you!

To give a medal for asking do I just click on best response by the asker?

and when i did the first part i got 204 does that sound about right?

I got approximately 312 using g=10.
the mass would be 5 correct?

Yes.

idk what i did wrong :/

Are you talking about part (a)?

Yes.

Did you use a calculator? If you did, check for missing parentheses. Also, did you use w = xFsin theta? with that formula for F?

yes, it would be (5)(9.81) / (3sin30-.3cos30) right?

That would be F, but you got an extra 3 in the denominator in front of sin(30)

You still need to multiply that by 3, and by sin(30)

wait, im confused. so it would be (5)(9.81) / (sin30-.3cos30) but why do we need to multiply it by 3sin30 ?

Sorry, I had to go in a rush; look at the formula for work at the beginning of my posts: \[W=\Delta xF\sin \theta\] So the one you are using is the one for F alone, and 3sin(30) is the rest: change in x = 3 and sin theta = sin(30). Take a look again at the thinking I made to get to the first answer.

its all good! and now i get 306 :)

and so for the last one it would be 306(cos)30?

Using 9.81 as g that is correct, and for the second thats almost it.

306 is correct, but this is the work done by the force.
On the last it is asking for the normal, which is a force, not work.

It is given by Fcos(30), and F is what you were calculating before.

F = that 204 you were calculating.

but Fcos(30) is the normal force?

i got 204 by mistake though

Yes, read what I wrote at the beginning. You got 204 calculating the force alone; you had forgotten the other terms, but here the force is exactly what you need. Take a look at the beginning and try to understand that, you don't seem to have understood it well.

i tried it again, and still didnt get that :/

im sorry :(

Wait, I said 204 is the force, not the answer; the answer is 204cos(30).

but i dont get 204 at all

The formulas are: \[N=F\cos \theta\] \[F=\frac{mg}{\sin \theta-\mu \cos \theta}\]

The explanation for them is in the beginning.

i did that

Were you using that?

Show the numbers again.

(5)(9.81) / (sin30-.3cos30)

And you need to multiply all of that by cos(30)

i finally got it! thats what ive been doing! i just separated it a few more times now. thank you so much.
so the first one you would multiply it by 3sin30 and the last one cos30? Best Response You've already chosen the best response. Yes, but remember what those numbers mean. Best Response You've already chosen the best response. i will, im just unsure with how u knew which equations to use Best Response You've already chosen the best response. I didn't I foun out wich one I needed to use, and I posted wverything I did, project the force to find the work, see what force is causing the normal, do the math and you get to the formulas. Best Response You've already chosen the best response. when i checked the answers, the right answers were 310J... -150J...180 N Best Response You've already chosen the best response. 310 for a -150 for b and 180 for c Best Response You've already chosen the best response. i got a similar answer for the first 2 parts, but not for the third! Best Response You've already chosen the best response. Remember that on the last one you only need to multiply the force F (204N) by cos(30) (sqrt(3)/2) because we need the projection of the force at the direction perpendicular to the wall. If you use those numbers you'll get the answer. Best Response You've already chosen the best response. why sqrt of 3/2 ? Best Response You've already chosen the best response. i think you just mean cos30 because those two are the same things :) Best Response You've already chosen the best response. What?? wait, thats what i meant:\[\cos 30°=\frac{\sqrt{3}}{2}\] Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
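For anyone checking these numbers later, here is a quick sketch of the arithmetic using the two formulas from the thread, with the values used above (m = 5 kg, g = 9.81 m/s², θ = 30°, μ = 0.3, Δx = 3 m):

```python
import math

# Values used in the thread: m = 5 kg, g = 9.81 m/s^2,
# theta = 30 degrees, mu = 0.3, displacement dx = 3 m.
m, g, mu, dx = 5.0, 9.81, 0.3, 3.0
theta = math.radians(30)

# F = mg / (sin(theta) - mu*cos(theta)), the applied force
F = m * g / (math.sin(theta) - mu * math.cos(theta))

# W = dx * F * sin(theta), the work done by that force
W = dx * F * math.sin(theta)

# N = F * cos(theta), the normal force
N = F * math.cos(theta)

print(round(F), round(W), round(N))  # 204 306 177
```

With g = 10 instead of 9.81, the same formulas give roughly 312 J and 180 N, which matches the answer key's 310 J and 180 N up to rounding.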
Wolfram Demonstrations Project

Kermack-McKendrick Epidemic Model with Time Delay

This Demonstration solves a system of three differential equations with time delays, corresponding to a Kermack–McKendrick epidemic model. The Kermack–McKendrick model simulates the number of people infected with a contagious illness in a closed population over time. It assumes that the population size is fixed, that the incubation period of the infectious agent is instantaneous, and that the duration of infectivity is the same as the duration of the disease. This model is modified by incorporating a delay time $\tau_1$ representing the period for incubation, which is the time during which infectious agents develop in the vector (only after that time does the infected vector itself become infective), and a delay time $\tau_2$ for the duration of the infectivity. The model consists of three coupled delay ordinary differential equations and three initial history functions. Here $t$ is time, $S$ is the number of susceptible people, $I$ is the number of infected people, $R$ is the number of people who have recovered and developed immunity to the infection, $\tau_1$ is the incubation period, and $\tau_2$ is the duration of infectivity. The infection and recovery rates are assumed to be equal to 1. The delay equations are solved using Mathematica's built-in function NDSolve, and the results are shown as plots of the number of people in each group versus time and in a three-dimensional parametric plot of the three groups of people. You can change $\tau_1$, the period of incubation, and $\tau_2$, the duration of infectivity, to follow the trajectory of the solution.
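For readers without Mathematica, the system can also be integrated with a fixed-step Euler scheme, using history buffers for the delayed terms. The exact equations of the Demonstration are not reproduced above, so the particular delayed form below (with both rates equal to 1, as stated) is an assumption, chosen mainly because it conserves the total population:

```python
# Euler integration of an ASSUMED delayed Kermack-McKendrick model
# (infection and recovery rates both 1, as stated in the text):
#   S'(t) = -S(t) * I(t - tau1)
#   I'(t) =  S(t) * I(t - tau1) - I(t - tau2)
#   R'(t) =  I(t - tau2)
def simulate(tau1=1.0, tau2=2.0, S0=0.99, I0=0.01, n_steps=20000, dt=0.001):
    d1 = int(round(tau1 / dt))
    d2 = int(round(tau2 / dt))
    S, I, R = [S0], [I0], [0.0]
    for k in range(n_steps):
        # Constant history functions: before t = 0, hold the initial values.
        I_d1 = I[k - d1] if k >= d1 else I0
        I_d2 = I[k - d2] if k >= d2 else I0
        S.append(S[k] + dt * (-S[k] * I_d1))
        I.append(I[k] + dt * (S[k] * I_d1 - I_d2))
        R.append(R[k] + dt * I_d2)
    return S, I, R

S, I, R = simulate()
# The three increments cancel exactly, so S + I + R stays constant.
print(S[-1] + I[-1] + R[-1])  # ≈ 1.0
```

The conservation check is a useful sanity test for any variant of the equations: whatever leaves one compartment must enter another.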
Note on Maximal Bisection above Tight Lower Bound

G. Gutin and A. Yeo

Information Processing Letters, to appear, 2010.

In a graph $G=(V,E)$, a bisection $(X,Y)$ is a partition of $V$ into sets $X$ and $Y$ such that $|X|\le |Y|\le |X|+1$. The size of $(X,Y)$ is the number of edges between $X$ and $Y$. In the Max Bisection problem we are given a graph $G=(V,E)$ and are required to find a bisection of maximum size. It is not hard to see that $\lceil |E|/2 \rceil$ is a tight lower bound on the maximum size of a bisection of $G$. We study the parameterized complexity of the following parameterized problem, called Max Bisection above Tight Lower Bound (Max-Bisec-ATLB): decide whether a graph $G=(V,E)$ has a bisection of size at least $\lceil |E|/2 \rceil+k,$ where $k$ is the parameter. We show that this parameterized problem has a kernel with $O(k^2)$ vertices and $O(k^3)$ edges, i.e., every instance of Max-Bisec-ATLB is equivalent to an instance of Max-Bisec-ATLB on a graph with at most $O(k^2)$ vertices and $O(k^3)$ edges.
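The lower bound $\lceil |E|/2 \rceil$ is easy to check exhaustively on small graphs; a brute-force sketch (the helper name and example graph are illustrative, not from the paper):

```python
from itertools import combinations
from math import ceil

def max_bisection(n, edges):
    """Brute-force maximum bisection of a graph on vertices 0..n-1:
    try every X with |X| = n // 2 (so |X| <= |Y| <= |X| + 1)."""
    best = 0
    for X in combinations(range(n), n // 2):
        Xs = set(X)
        cut = sum(1 for u, v in edges if (u in Xs) != (v in Xs))
        best = max(best, cut)
    return best

# Example: the 5-cycle. |E| = 5, so the bound promises a bisection
# of size at least ceil(5 / 2) = 3; brute force finds one of size 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
b = max_bisection(5, edges)
assert b >= ceil(len(edges) / 2)
print(b)  # → 4
```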
investigating an operator

January 15th 2012, 06:18 AM #1

I have a problem that comes in two parts.

(1) $H$ is a Hilbert space with orthonormal basis $\{e_k\}$. We are given a positive strictly increasing sequence $\{\alpha_k\}$ with the following property: $\lim_{k\rightarrow \infty}\frac{\alpha_k}{\alpha_{k+1}}=1$. Our task is to show that there exists a unique operator $M\in B(H)$ such that $Me_k=\frac{\alpha_k}{\alpha_{k+1}}e_{k+1}$.

(2) Determine $\|M\|$ and the spectrum of $M$ (hint: use the eigenvalues of the adjoint $M^*$).

My attempt

(1) For the first one I was thinking that I could simply find the operator by computing $Mx$ for $x\in H$. Here it goes: $x\in H$ gives me $x=\sum_{k=1}^\infty \langle x,e_k\rangle e_k=\lim_{n\rightarrow \infty}\sum_{k=1}^n \langle x,e_k\rangle e_k$. Then

$Mx=M\lim_{n\rightarrow \infty}\sum_{k=1}^n \langle x,e_k\rangle e_k=\lim_{n\rightarrow \infty}\sum_{k=1}^n \langle x,e_k\rangle Me_k = \lim_{n\rightarrow \infty}\sum_{k=1}^n \langle x,e_k\rangle\frac{\alpha_k}{\alpha_{k+1}}e_{k+1}=\sum_{k=1}^\infty \langle x,e_k\rangle\frac{\alpha_k}{\alpha_{k+1}}e_{k+1}$

(the last equality by Parseval's formula and the second by continuity (boundedness) of $M$). Well, I now know what a bounded linear operator with the above stated property looks like. But is it unique? Furthermore, its "look" complicates things in the second part.

(2) I know the definition $\|M\|=\sup_{\|x\|=1}\|Mx\|$. But this doesn't seem all that easy to compute using what I know about my operator. And what will the eigenvalues of $M^*$ tell me? I know that $\sigma(M)=\overline{\sigma(M^*)}$ (where by $\sigma(M)$ I denote the spectrum of $M$) and this does not help me unless I can use the eigenvalues of $M^*$ to find $\sigma(M^*)$. This would be easy if $M^*$ were compact, since then I would know that $\sigma(M^*)=\{0\}\cup\{\text{eigenvalues of } M^*\}$.

Re: investigating an operator

The operator M is a weighted shift (check that link for some useful guidance).
It shifts each basis vector $e_k$ to the next one $e_{k+1}$, multiplying it by the weight $\alpha_k/\alpha_{k+1}$. Since its value at each basis vector is specified, it must be unique (you know its value at each finite linear combination of basis vectors and hence, by continuity, at every vector). The adjoint operator is a backward weighted shift, taking each $e_k$ to a multiple of $e_{k-1}$ and sending $e_1$ to 0. The advantage of looking at the adjoint is that it has many eigenvalues, whereas M itself does not. For convenience, write $\beta_k = \alpha_k/\alpha_{k+1}$, and let $B = \sup\{\beta_k:k\in\mathbb{N}\}$. If $x = \textstyle\sum \xi_ke_k$ then $Mx = \textstyle\sum \beta_k\xi_ke_{k+1}$, and $\|Mx\|^2 = \sum|\beta_k\xi_k|^2\leqslant B^2\sum|\xi_k|^2 = B^2\|x\|^2.$ That shows that $\|M\|\leqslant B$. By looking at $\|Me_k\|$ for each k, you should be able to show the reverse inequality.

Re: investigating an operator

Maybe I'm missing something trivial, but how do the eigenvalues of the adjoint help me find the spectrum of it? I know that I have the spectrum of a compact operator if I have the eigenvalues. Is my $M^*$ compact?

Re: investigating an operator

The idea is that M* has so many eigenvalues that they force the spectrum to be as big as it could possibly be. To see how that might work, here is a slightly simplified example. Let S be the unilateral shift operator, defined on the basis vectors by $Se_k = e_{k+1}$ (so S is like your operator M except that it does not have the weights $\alpha_k/\alpha_{k+1}$). Its adjoint S* is the backwards shift, defined by $S^*e_1=0$ and $S^*e_k = e_{k-1}$ for k>1.
Let $\lambda$ be a fixed complex number with $0<|\lambda|<1$ and let x be the vector given by $\textstyle x=\sum\lambda^ke_k.$ (Notice that that sum converges in H because the sum of the squares of the absolute values of the coefficients is $\textstyle\sum|\lambda|^{2k}<\infty.$) Then $S^*x = \textstyle \sum\lambda^ke_{k-1} = \lambda x.$ Thus $\lambda$ is an eigenvalue of S*, with eigenvector x. That holds for every nonzero $\lambda$ in the open unit ball. In fact, 0 is also an eigenvalue, because $S^*e_1=0.$ So the spectrum of S* contains the entire open unit ball. Since the spectrum is always closed, it contains the closed unit ball. But $\|S^*\| = 1$, and the absolute value of an element of the spectrum can never be greater than the norm of the operator. Conclusion: the spectrum of S* is the closed unit ball. Therefore the spectrum of S is also the closed unit ball (though S has no eigenvalues). Notice that although S and S* are not compact, S* has a huge number of eigenvalues (uncountably many, which a compact operator never could have). Finding the spectrum of the operator M* is a similar exercise, though you have to decide how to deal with the weights.
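The norm argument above is easy to check numerically on a finite truncation. The choice $\alpha_k = \sqrt{k}$ below is just one illustrative sequence satisfying the hypotheses (positive, strictly increasing, $\alpha_k/\alpha_{k+1} \to 1$); for it, $\beta_k = \sqrt{k/(k+1)}$ increases to $\sup_k \beta_k = 1$:

```python
import math
import random

# One concrete choice satisfying the hypotheses: alpha_k = sqrt(k),
# so beta_k = alpha_k / alpha_{k+1} = sqrt(k / (k + 1)) -> 1.
n = 2000
beta = [math.sqrt(k / (k + 1)) for k in range(1, n + 1)]
B = max(beta)  # sup over the truncation; tends to 1 as n grows

def apply_M(x):
    """(Mx)_{k+1} = beta_k * x_k for the truncated weighted shift."""
    y = [0.0] * (n + 1)
    for k in range(n):
        y[k + 1] = beta[k] * x[k]
    return y

def norm(x):
    return math.sqrt(sum(v * v for v in x))

# ||M e_k|| = beta_k, so sup_k ||M e_k|| = B; and ||Mx|| <= B ||x||
# for arbitrary vectors, checked here on a few random ones.
random.seed(0)
for _ in range(5):
    x = [random.gauss(0, 1) for _ in range(n + 1)]
    assert norm(apply_M(x)) <= B * norm(x) + 1e-9
print(round(B, 4))  # close to 1, in agreement with ||M|| = sup beta_k
```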
Differential Equations

A study of analytical and numerical solution methods for ordinary and partial differential equations. Includes series solutions and special functions for the solution of ODEs and the use of Fourier series to solve PDEs. Transform and numerical methods for solving ODEs and PDEs are introduced. Prerequisite: MATH 231. Fall.
Prediction of tissue-specific cis-regulatory modules using Bayesian networks and regression trees

BMC Bioinformatics. 2007; 8(Suppl 10): S2.

In vertebrates, a large part of gene transcriptional regulation is operated by cis-regulatory modules. These modules are believed to be regulating much of the tissue-specificity of gene expression. We develop a Bayesian network approach for identifying cis-regulatory modules likely to regulate tissue-specific expression. The network integrates predicted transcription factor binding site information, transcription factor expression data, and target gene expression data. At its core is a regression tree modeling the effect of combinations of transcription factors bound to a module. A new unsupervised EM-like algorithm is developed to learn the parameters of the network, including the regression tree structure. Our approach is shown to accurately identify known human liver and erythroid-specific modules. When applied to the prediction of tissue-specific modules in 10 different tissues, the network predicts a number of important transcription factor combinations whose concerted binding is associated to specific expression.

A cis-regulatory module (CRM) is a DNA region of a few hundred base pairs consisting of a cluster of transcription factor (TF) binding sites [1]. By binding CRMs, transcription factors either enhance or repress the transcription of one or more nearby genes. Coordinated binding of several transcription factors to the same CRM is often required for transcriptional activation, thus allowing a very specific regulatory control. High-throughput experimental identification of CRMs remains inaccessible, especially for distal enhancers.
Methods like genomic localization assays (also known as ChIP-chip) using whole-genome tiling arrays may soon improve the situation, but the cost of such extremely large arrays will limit their utilization. Because of this, several computational approaches have been developed for predicting cis-regulatory modules. Some attempt to identify regulatory modules with a particular function (e.g. muscle [2] or liver [3] specific CRMs, and many others [4-6]) by building or learning a model of the binding site content of such modules, based on a set of known modules. These methods generally obtain a reasonable specificity, but their applicability is limited to the few tissues, cell types, or conditions for which sufficiently many experimentally verified modules can be used for training. Others seek more generic signatures of cis-regulatory regions, like inter-species sequence conservation [7], sequence composition [8], or homotypic and heterotypic binding site clustering [9,10]. These methods are more widely applicable, but their predictions may be of lesser accuracy, because they do not rely on any prior knowledge. Furthermore, the predictions made by these algorithms are not accompanied by any annotation regarding the putative function of the modules. The PReMod database [11] contains more than 100,000 human CRM computational predictions, mostly consisting of putative distal enhancers. By adjoining other types of information to the predicted module information, additional insights can be gained into the function of specific modules. For example, in yeast, Beer and Tavazoie have used gene expression data to train an algorithm to predict expression data based on sequence information. In human, Blanchette et al. [12] and Pennacchio et al. [13] have used tissue-specific gene expression data from the GNF Atlas2 [14] to identify certain transcription factors involved in tissue-specific regulation, and Pennacchio et al.
[13] have further developed models to predict the tissue-specificity of regulatory modules based on their binding site content. In this paper, we propose a new approach to the detection of tissue-specific cis-regulatory modules. Our algorithm uses a Bayesian network to combine binding site predictions and tissue-specific expression data for both transcription factors and target genes. It identifies the transcription factors, and combinations thereof, whose binding to a module appears to result in tissue-specific expression. Our approach takes advantage of the facts that tissue-specific CRMs are likely (1) to be located next to genes expressed in that same tissue, (2) to contain many binding sites for TFs that are also expressed in that tissue, and (3) to contain binding sites whose presence in other modules also appears to be associated to tissue-specific expression. Our approach falls under the category of unsupervised learning, as it does not rely on any labeled training set or any type of prior knowledge regarding the TFs that may be important for a given tissue. Importantly, the Bayesian network contains at its core a regression tree to represent the dependence between the regulatory activity of a CRM and the set of TFs predicted to bind it. A new unsupervised Expectation-Maximization-like algorithm is developed to infer the parameters of the network, including the structure of the regression tree. Our approach is related to that of Segal et al. [15,16] but differs in that it takes advantage of available TF position weight matrices and TF expression data to allow tissue-specificity predictions. Moreover, based on the candidate modules predicted by PReMod, our approach is able to detect distal enhancers that are involved in tissue-specific expression.
We show that our method is able to accurately discriminate between known liver and erythroid-specific modules, even in the presence of a large fraction of modules with neither function, by discovering important combinations of transcription factors associated to these tissues. When applied to a larger set of putative modules and tissues, several known tissue-specific TFs were recovered, and many interesting new TF combinations were predicted to be linked to tissue-specific expression.

The goal of the method developed in this paper is to predict whether a given putative cis-regulatory module is responsible (at least in part) for the expression of a given gene in a particular tissue. Since the problem of predicting regulatory modules has already been studied extensively, we assume that a set of candidate CRMs $\mathcal{M} = \{M^1, \ldots, M^{|\mathcal{M}|}\}$ has been identified in the genome under consideration and we focus on determining their tissue-specificity. We emphasize that many of these predicted CRMs are likely to be false-positives (i.e. they have no regulatory function whatsoever), and most are probably not specific to any tissue; our goal is to identify those that are. Given a putative CRM $M^m$, a gene $G$, and a tissue (or cell type) $T$, we want to determine whether module $M^m$ up-regulates gene $G$ in tissue $T$. (We focus only on the identification of enhancers, rather than repressors, because it is difficult to distinguish between repressed genes and genes that are not expressed due to the lack of activators.) To this end, we define a Bayesian network that is used to combine various types of evidence, including the putative transcription factor binding sites contained in $M^m$, the expression levels of the set of transcription factors predicted to bind $M^m$, and the expression level of gene $G$. Importantly, and perhaps counter-intuitively, we train a single Bayesian network that will be applicable to predicting tissue-specific regulatory modules in all the tissues considered.
This stems from the hypothesis that the enhancer activity of a module should depend only on its binding site content and on the expression levels of the transcription factors binding it, and not directly on the tissue considered. By allowing regulatory mechanisms to be shared across tissues, we hope to improve our sensitivity to subtle regulatory mechanisms. One obvious drawback of this method is that unobserved entities, like the presence or absence of tissue-specific transcriptional co-activators, may affect the regulatory effect of a given module in different tissues even if the set of TFs bound to it does not change.

Typically, a Bayesian network consists of a set of observed variables, a set of unobserved variables, and an acyclic directed graph describing the direct dependencies between these. In this section, we first introduce the set of variables present in our network, which is depicted in Figure 1. We then describe the dependencies between these variables and the algorithms used to learn the parameters of the network.

Figure 1: The Bayesian network used for predicting tissue-specific regulatory modules. See section 'Bayesian network variables' for a description of the variables, and section 'Bayesian network architecture' for a description of their dependencies.

Bayesian network variables

Let $\Phi = \{\Phi_1, \ldots, \Phi_{|\Phi|}\}$ be a set of transcription factors, let $\mathcal{T} = \{{}^1T, \ldots, {}^{|\mathcal{T}|}T\}$ be a set of tissue (or cell) types, let $\mathcal{G} = \{{}_1G, \ldots, {}_{|\mathcal{G}|}G\}$ be the set of all known human protein-coding genes, and let $\mathcal{M} = \{M^1, \ldots, M^{|\mathcal{M}|}\}$ be a set of predicted cis-regulatory modules. Since the notation describing the network requires many types of subscripts, we adopt the following convention: right-subscripts refer to transcription factor indices; right-superscripts refer to module indices; left-superscripts refer to tissue indices; left-subscripts refer to gene indices (for example, ${}^{tissue}_{gene}X^{module}_{factor}$).
We start by defining the observed variables for our network, shown in unshaded ovals in Figure 1. More detailed definitions pertaining to the specific data set analyzed in this paper will be found in Section 'Data sets'. Consider the following domains of index variables: $1 \le m \le |\mathcal{M}|$, $1 \le f \le |\Phi|$, $1 \le g \le |\mathcal{G}|$, and $1 \le t \le |\mathcal{T}|$.

• $A_f^m$ is the real-number predicted affinity of transcription factor $\Phi_f$ for module $M^m$. It should reflect our confidence that, provided factor $\Phi_f$ is expressed, it will bind module $M^m$. It is a function of the number and the quality of $\Phi_f$'s predicted binding sites in $M^m$.

• ${}^tF_f$ is a boolean variable describing whether transcription factor $\Phi_f$ is expressed in tissue ${}^tT$.

• ${}^t_g E$ is a boolean variable describing whether gene ${}_gG$ is expressed in tissue ${}^tT$.

To model the relationships between the observed variables, it is necessary to introduce a set of hidden variables.

• ${}^t\hat{F}_f$ is the actual state (active or inactive) of transcription factor $\Phi_f$ in tissue ${}^tT$. State ${}^t\hat{F}_f$ may not equal the observed expression level ${}^tF_f$ because of post-transcriptional regulation (e.g. activation due to external stimuli for nuclear receptors) or errors in the measurements of mRNA abundance.

• ${}^t_g\hat{E}$ is the actual transcriptional status (transcribed or not transcribed) of gene ${}_gG$ in tissue ${}^tT$, which could be different from the observed mRNA abundance ${}^t_g E$ because of mRNA degradation or errors in the measurements of mRNA abundance.

• ${}^tB_f^m$ is a boolean variable indicating whether, in tissue ${}^tT$, module $M^m$ is bound by sufficiently many copies of factor $\Phi_f$ for this factor to achieve its function.

• The fact that a module is bound by a transcription factor does not necessarily translate into this module being regulatorily active. Indeed, the presence of other transcription factors may be required for the module to become active.
We represent the regulatory activity of module $M^m$ in tissue ${}^tT$ by a boolean variable ${}^tR^m$, which takes the value 1 when the module $M^m$ actively (and positively) regulates its gene. This is the variable whose value is of the most interest for predicting tissue-specific regulatory modules.

We acknowledge that using binary variables to represent expression levels and regulatory activity is a very crude approximation. Although all these variables should in theory be continuous, the quantitative relations between transcription factor expression levels, their binding affinity to a module, and the contribution of that module to the expression of the target gene remain poorly understood, so a more qualitative approach is preferable. Furthermore, due to the computational complexity of network inference, such a simplification was necessary. In fact, by reducing the size of the parameter search space, this simplification might actually be improving generalization from small data sets.

Bayesian network architecture

In a Bayesian network, dependencies between variables are modeled as directed edges connecting the cause to the effect. The conditional probability of a node given the value of its parent(s) is described by a set of parameters that are either fixed or learned from the data. When the variables at hand have a finite domain, these conditional probabilities can be represented by a conditional probability table (CPT).

Conditional distributions of E and F

The observed expression levels $E$ and $F$ depend on the true expression levels $\hat{E}$ and $\hat{F}$ respectively. Since all variables are boolean, the conditional probability tables are the following:

$\Pr[E \mid \hat{E}] = \begin{array}{c|cc} & E=0 & E=1 \\ \hline \hat{E}=0 & 1-\alpha_E & \alpha_E \\ \hat{E}=1 & \beta_E & 1-\beta_E \end{array} \qquad \Pr[F \mid \hat{F}] = \begin{array}{c|cc} & F=0 & F=1 \\ \hline \hat{F}=0 & 1-\alpha_F & \alpha_F \\ \hat{F}=1 & \beta_F & 1-\beta_F \end{array}$

Here, $\alpha_E$ and $\beta_E$ are the probabilities of false-positive and false-negative in the discretized gene expression data, respectively.
We assume that these parameters are shared among all genes, i.e. expression measurement errors are equally likely for all genes. Similarly, $\alpha_F$ and $\beta_F$ are the probabilities that the discretized expression measurement for a given factor does not reflect its actual regulatory potency. Again, these parameters are shared among all transcription factors, although this might be inaccurate for factors like nuclear receptors, which require external signals for activation.

Conditional distribution of B

The probability of ${}^tB_f^m$, the random variable that describes whether module $M^m$ is bound by factor $\Phi_f$ in tissue ${}^tT$, depends on whether the factor is expressed in that tissue, and on the affinity $A_f^m$ of the factor for that module. We assume that the parameters describing this conditional probability are the same for all $m$ and $t$, so we drop some subscripts and superscripts to write $\Pr[B_f \mid A_f, \hat{F}_f]$. We model this conditional probability indirectly, by instead modeling $\Pr[A_f \mid B_f = 1]$, the distribution of binding site affinities for a module that is bound, using a normal distribution with parameters $\mu_f$ and $\sigma_f^2$ that will be estimated during training. Since the mathematical derivation is tedious (but relatively simple), it is left in Appendix 1.

Conditional distribution of R using regression trees

The most challenging set of conditional probabilities to represent is that of ${}^tR^m$, which depends on the values of ${}^tB_1^m, \ldots, {}^tB_{|\mathcal{F}|}^m$. Again, we assume the parameters that describe this dependency are the same for all tissues ${}^tT$ and all modules $M^m$, so we drop these indices. This assumption is equivalent to saying that the regulatory effect of the binding of a certain set of transcription factors does not depend on the module bound, the gene being regulated, or the tissue type. How should we represent the probability that a module is regulatorily active, given the set of transcription factors bound to it, i.e. $\Pr[R \mid B_1, \ldots, B_{|\mathcal{F}|}]$?
Given that all variables are boolean, this conditional probability can be represented by a $2^{|\mathcal{F}|} \times 2$ CPT containing $2^{|\mathcal{F}|}$ parameters. In our application, where $\mathcal{F}$ contains several hundred transcription factors, this is obviously not practical, because (1) the CPT would be too large to store, and (2) we would need a huge amount of training data to learn the parameters. We thus use a more compact representation for this CPT, based on regression trees [17]. A regression tree is a rooted tree whose internal nodes are labeled with tests on the value of some variable $B_f$. See Figure 2 for a small example. For boolean variables (our case here), each node $N$ tests whether some variable $B_{i_N}$ takes the value true or false. Each leaf $l$ of the tree is associated with a probability distribution $\Pr[R \mid l]$. Let $\pi(l) = \{B_{i_1} = b_{i_1}, B_{i_2} = b_{i_2}, \ldots\}$ be the set of variable assignments obtained by following the path from the root to $l$. Let $l(b_1, \ldots, b_{|\mathcal{F}|})$ be the leaf reached when $B_1 = b_1, \ldots, B_{|\mathcal{F}|} = b_{|\mathcal{F}|}$. Then, the regression tree defines a complete conditional probability distribution: $\Pr[R \mid B_1 = b_1, \ldots, B_{|\mathcal{F}|} = b_{|\mathcal{F}|}] = p(l(b_1, \ldots, b_{|\mathcal{F}|}))$. When many of the $B_i$'s are irrelevant to $R$, the representation is much more compact than the standard CPT and can be estimated from less data. We will jointly refer to the tree topology, the node labelings, and the probability distributions at the leaves as the meta-parameter $\Psi$. Inferring $\Psi$ will be the most significant difficulty of this approach.

Figure 2: Example of a regression tree representing a small 2-variable conditional probability table.

Conditional distribution of $\hat{E}$

The last set of dependencies is that of a gene's transcriptional activity ${}^t_g\hat{E}$ on the regulatory activity of the neighboring regulatory modules. This raises the difficult question of determining which gene is being regulated by each module.
This is relatively straightforward when the module is located in the promoter region of a gene, but much less so when it is located 100 kb away from any gene. Here, for lack of more accurate information, we assume that a module $M^m$ only has the potential of regulating the gene ${}_gG$ whose transcription start site is the closest to it, denoted $\mathrm{closest}(M^m)$. Then the expression level of gene ${}_gG$ depends on $\mathrm{regulators}({}_gG) = \{m \mid \mathrm{closest}(M^m) = {}_gG\} = \{r_1, r_2, \ldots\}$. We will assume that the expression level of ${}_gG$ only depends on the number of its modules that are active, through a sigmoid function:

$\Pr[{}^t_g\hat{E} = 1 \mid {}^tR^{r_1}, {}^tR^{r_2}, \ldots] = \frac{1}{1 + e^{-b \cdot \left( \sum_{r \in \mathrm{regulators}({}_gG)} {}^tR^r - a \right)}},$

where $a$ and $b$ are user-defined parameters (see Appendix 3).

Learning the network's parameters

Our Bayesian network contains a number of parameters whose values are not known a priori. We collectively refer to these parameters as $\Theta = \{\mu_1, \ldots, \mu_{|\mathcal{F}|}, \sigma_1^2, \ldots, \sigma_{|\mathcal{F}|}^2, \Psi\}$. The network will be trained using the set of all pairs (module, tissue). Let $A$, $E$, and $F$ be the sets of all TF affinity data, all gene expression data, and all TF expression data, respectively, over all tissues considered. A typical approach to estimating the network's parameters is to seek the value $\Theta^*$ that maximizes the joint likelihood of the observed variables, i.e. $\Theta^* = \operatorname{argmax}_{\Theta} \Pr[A, E, F \mid \Theta]$. An Expectation-Maximization algorithm can be used to learn the parameters $\Theta$ of the Bayesian network [18], whereby a local maximum of the likelihood function is reached by alternately estimating the expected value of the hidden variables given the observed variables and the current estimate of $\Theta$, and then re-estimating the maximum likelihood values for the parameters $\Theta$. However, since $\Theta$ contains the tree structure, we cannot apply the standard EM algorithm for learning Bayesian networks, as this algorithm relies on the ability to analytically derive a maximum likelihood estimate for the parameters (see however [18]).
Instead, a new EM-like algorithm with regression tree learning is developed to infer the tree within the network.

Estimating posterior probabilities for hidden variables

Our first step is to calculate the expectation (or equivalently, the probability of taking the value 1, since all hidden variables are binary) of all hidden variables, given the value of the observed variables. These posterior probabilities can be calculated using the formulas given in Appendix 2. The derivation of most of these formulas is fairly straightforward, except for the calculations involving the regression tree. Computing $\Pr[R = r \mid A, E, F] = \sum_{b \in \{0,1\}^{|\mathcal{F}|}} \Pr[R = r, B = b \mid A, E, F]$ can be done efficiently thanks to the regression tree representation.

Maximum likelihood parameter estimation

Once the posterior probabilities of the hidden variables are computed, maximum likelihood estimators for the parameters of the network can be derived as given in Appendix 3. Again, the regression tree representing the dependence of $R$ on $B_1, \ldots, B_{|\mathcal{F}|}$ poses a significant challenge, as no efficient algorithm exists to choose the tree topology. Instead, we developed a new tree learning algorithm, which adapts ideas from standard decision tree algorithms (e.g. C4.5 [19], J48 [20]). The problem at hand is novel and challenging for several reasons:

1. Soft attributes: The input variables ${}^tB_f^m$ are binary variables, but their values remain unknown at any given iteration of the EM-like algorithm. Only their distribution $\Pr[{}^tB_f^m \mid A, E, F]$ is known for each $m$, $f$ and $t$, given the current estimate of the parameters $\Theta$.

2. Soft labels: The values of the target variables ${}^tR^m$ are also unknown, but their distribution $\Pr[{}^tR^m \mid A, E, F]$ is known.

Learning regression trees from probabilistic instances

Most decision tree learning algorithms are based on a greedy tree-growing approach trying to find the tree that minimizes the number of misclassifications [21].
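Before turning to the learning step, it may help to see how such a tree encodes the CPT in code. This is only an illustrative sketch: the factors, split order, and leaf probabilities below are fabricated, not taken from the paper.

```python
# A regression tree stored as a nested dict. Internal nodes test one
# binary attribute B_i; each leaf stores p = Pr[R = 1 | path to leaf].
# Factors and probabilities here are made up for illustration.
tree = {
    "test": 0,                # split on B_0
    "0": {"p": 0.05},         # B_0 = 0: module rarely active
    "1": {                    # B_0 = 1: also test B_1
        "test": 1,
        "0": {"p": 0.30},
        "1": {"p": 0.90},     # both factors bound: likely active
    },
}

def prob_active(tree, b):
    """Return Pr[R = 1 | B_1 = b[0], B_2 = b[1], ...] by walking
    from the root to a leaf."""
    node = tree
    while "p" not in node:
        node = node[str(b[node["test"]])]
    return node["p"]

# Only two attributes matter; all others are ignored, which is exactly
# why the tree is more compact than a full 2^|F| x 2 CPT.
print(prob_active(tree, [1, 1, 0]))  # → 0.9
```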
Our tree learning algorithm is an adaptation of the standard approach that uses information gain to select the attribute on which to split a node. Consider a node $N$ that is currently a leaf and that we are considering splitting on some attribute $B_i$. The weight of a probabilistic instance $x = ({}^tB^m_1, \dots, {}^tB^m_{|\mathcal{F}|})$ is the probability of the path from the root to $N$, under the attribute probability distributions given by $x$. More precisely,

$$weight_N(m, t) = \prod_{\text{assignments } \Lambda \text{ on the path from the root to } N} \Pr[\Lambda \mid A, E, F].$$

We can now define the weighted entropy at node $N$ as

$$weightedEntropy(N) = -\sum_{r=0,1} p_r \log_2 p_r,$$

where $p_r = \frac{\sum_{t=1}^{|\mathcal{T}|} \sum_{m=1}^{|\mathcal{M}|} weight_N(m, t) \cdot \Pr[{}^tR^m = r \mid A, E, F]}{\sum_{t=1}^{|\mathcal{T}|} \sum_{m=1}^{|\mathcal{M}|} weight_N(m, t)}$, and $totalWeight(N) = \sum_t \sum_m weight_N(m, t)$. The information gain obtained by splitting a leaf $N$ on attribute $B_i$ to obtain two new leaves $N'$ and $N''$ is then defined as

$$infoGain(N, B_i) = weightedEntropy(N) - \sum_{n \in \{N', N''\}} \frac{totalWeight(n)}{totalWeight(N)} \cdot weightedEntropy(n).$$

The attribute $B_i$ with the largest weighted information gain is chosen as the label for $N$, and the corresponding children $N'$ and $N''$ are added. The tree grows this way until no pair of node and attribute yields a positive information gain. This is a very loose stopping criterion, and trees learned this way tend to be very large. To avoid overfitting, a method called reduced-error pruning is used [21]. It uses a separate validation data set to prune the tree, and each split node in the tree is considered a candidate for pruning. Pruning a node consists of an operation called subtree replacement, which removes the subtree rooted at that node and replaces it with a single leaf. Whether pruning is performed depends on the classification accuracy obtained by the unpruned tree and by the pruned tree over the validation set.
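The weighted entropy and information gain defined above translate directly into code. The sketch below is ours, not the authors' implementation; the representation of instances (a dict of soft attribute values plus a soft label) and of a path (a list of attribute/value assignments) is hypothetical:

```python
import math

def weighted_entropy(pairs):
    """pairs: list of (weight, p_r1), where p_r1 = Pr[R = 1 | A,E,F] for one
    (module, tissue) instance. Returns the weighted entropy at a node."""
    total = sum(w for w, _ in pairs)
    if total == 0:
        return 0.0
    p1 = sum(w * p for w, p in pairs) / total   # weighted Pr[R = 1]
    entropy = 0.0
    for p in (p1, 1.0 - p1):
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

def info_gain(instances, path, attr):
    """instances: list of (attr_probs, p_r1); attr_probs maps each attribute
    to Pr[B_attr = 1 | A,E,F].  path: list of (attribute, value) assignments
    from the root to node N.  Returns infoGain(N, attr)."""
    def weight(attr_probs, assignments):
        w = 1.0
        for a, v in assignments:    # product over the path's assignments
            w *= attr_probs[a] if v == 1 else 1.0 - attr_probs[a]
        return w

    at_n   = [(weight(ap, path), p) for ap, p in instances]
    child1 = [(weight(ap, path + [(attr, 1)]), p) for ap, p in instances]
    child0 = [(weight(ap, path + [(attr, 0)]), p) for ap, p in instances]

    total_n = sum(w for w, _ in at_n)
    gain = weighted_entropy(at_n)
    for child in (child1, child0):
        total_c = sum(w for w, _ in child)
        gain -= (total_c / total_n) * weighted_entropy(child)
    return gain
```

An attribute whose soft value is 0.5 for every instance splits each instance's weight evenly between the two children, leaving the children with the same label distribution as the parent and hence zero gain, which is the desired behavior for completely uncertain bindings.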
Pruning will cause the accuracy over the training data set to decrease, but it may increase the accuracy over the test data set.

Results

Our approach was used to identify tissue-specific CRMs in human. First, we show, using a small set of experimentally verified tissue-specific CRMs, that our approach is able to discriminate between modules involved in different tissues. Then, we apply our method to a larger data set consisting of more than 6000 putative CRMs associated with genes specifically expressed in one of ten tissues, and show that interesting combinations of transcription factors can be linked to tissue-specific expression.

Data sets

We used a set of cis-regulatory modules predicted in the human genome by Blanchette et al. [12], based on a set of 481 position weight matrices from Transfac 7.2 [22]. The modules are available from the PReMod database [11]. Criteria used for the PReMod predictions include inter-species conservation of binding site predictions and homotypic clustering of binding sites. The complete data set consists of more than 100,000 predicted CRMs, but only subsets of those were used (see below). For each predicted module $M^m$, the predicted binding affinity $A^m_f$ is represented by the negative logarithm of the p-value of the weighted binding site density for factor $\Phi_f$ in module $M^m$, as reported in PReMod. Gene expression data came from the GNF Atlas 2 data set [14], downloaded from the UCSC Genome Browser [23]. A gene $G^g$ was identified as "expressed" in tissue $t$ (i.e. ${}^tE^g = 1$) if and only if its expression level was at least two standard deviations above its mean expression level over the 79 tissues for which data was available. Only 231 of the 481 Transfac PWMs were confidently linked to transcription factors for which GNF expression data is available. Only these $|\mathcal{F}| = 231$ PWMs were considered in our analysis.
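The "expressed" call described above (at least two standard deviations above the gene's own mean across tissues) can be sketched as follows; this is a minimal illustration with our own variable names, not the authors' code:

```python
import statistics

def discretize_expression(levels, n_std=2.0):
    """levels: raw expression values for one gene across all tissues.
    Returns a parallel list of 0/1 calls: 1 iff the tissue's level is at
    least n_std standard deviations above the gene's mean level."""
    mu = statistics.mean(levels)
    sigma = statistics.pstdev(levels)       # population SD over the tissues
    return [1 if x >= mu + n_std * sigma else 0 for x in levels]
```

For instance, a gene with a flat profile except for one strongly elevated tissue is called expressed only in that tissue; note that with very few tissues a single outlier inflates the standard deviation enough that nothing passes the 2-SD cutoff, so a large tissue panel (79 here) matters.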
Some transcription factors are actually linked to several different PWMs, but our approach actually seems to take advantage of this to improve the quality of the predictions (see below).

Validation experiments

We first use a set of experimentally verified tissue-specific CRMs, together with a set of negative control regions, to validate our algorithm. To further evaluate the performance of our approach, we compare our results with those obtained with several simpler classifiers.

Validation data sets

To demonstrate the ability of our approach to identify tissue-specific regulatory modules, we used it to discriminate between known liver-specific CRMs, known erythroid-specific CRMs, and other modules unlikely to be involved in these two cell types. Each validation data set was composed of five subsets:

1. knownLiver: 11 experimentally verified liver-specific modules [3].

2. knownErythroid: 22 experimentally verified erythroid-specific modules [24].

3. putativeLiver: A set of 31 PReMod modules located in the vicinity of the genes associated with the knownLiver modules. These modules are possibly involved in liver-specific regulation and are included only to help the Bayesian network learn the association between a module's binding site composition and the tissue-specificity of the target gene.

4. putativeErythroid: A set of 46 PReMod modules, similar to (3) but for erythroid.

5. negative: For each knownErythroid or knownLiver module with associated closest gene $g$, a set of $r_{neg}$ (see below) PReMod modules associated with genes that are expressed in neither erythroid nor liver is randomly selected and artificially associated with gene $g$. These are modules that, if placed in the vicinity of gene $g$, would be unlikely to cause liver- or erythroid-specific expression. The ratio $r_{neg}$ of the number of negative modules to the number of known modules determines in part the difficulty of the classification task.
Two types of validation data sets were thus created: in our 1X experiment (see below), we used $r_{neg} = 1$, whereas in our 2X data set, we used $r_{neg} = 2$. Each 1X data set thus contains 143 modules, each of which was considered as possibly liver- or erythroid-specific. The complete data set consists of 2 × 143 = 286 module-tissue pairs, of which 11 + 22 = 33 are positive examples and 99 are negative examples (all the knownLiver modules when considered in the erythroid cell type, all the knownErythroid modules when considered in liver, and all the negative modules in both tissues). The 2X data sets are similar, except that they are noisier because they contain 165 negative examples.

Three simple classifiers

To assess the quality of our method, we compare it to three other, simpler approaches. The first, called the ExpressionOnly classifier, simply predicts that any module located next to a gene that is expressed in a given tissue is a tissue-specific module for that tissue. That is, the binding site content of the module is ignored, and only the expression $E^g$ is used to make the prediction.

The second simple classifier, called SupervisedNaiveBayes, is a classical supervised Naive Bayes approach that takes as input a simplified, observable version of the $B$ variables, where we set $B^m_f = F_f \cdot A^m_f$, as well as the expression of the target gene ${}^tE^g$, and is trained to distinguish between labeled positive and negative examples (see Appendix 4 for the complete details). Finally, the third simple classifier, called NaiveBayesInNet, is a version of our Bayesian network classifier in which the regression tree representing the conditional probability of $R$ is replaced by a Naive Bayes classifier, but where the rest of the structure is preserved. See Appendix 5 for more details.

Validation results

One hundred different runs of our EM-like algorithm were done on the 1X and 2X data sets, each time with a different sample of negative modules.
Each run used 100 EM-like iterations (taking approximately 10 minutes of running time), which was sufficient to achieve convergence, although different runs converge to slightly different likelihoods and regression trees (see Additional File 1). Since we do not know which of the putativeLiver and putativeErythroid CRMs are actually tissue-specific modules, we evaluate the performance of our algorithm based only on the positive and the negative modules. For each run, the network with the best likelihood over the 100 EM-like iterations is used to compute $\Pr[{}^tR^m \mid A, E, F]$ for all examples, and a module-tissue pair is predicted positive if this probability exceeds some threshold. The resulting precision-recall curve, averaged over all 100 runs, is shown in Figure 3, for both the 1X and 2X data sets.

Figure 3. The precision vs. recall curve for the 1X (left) and 2X (right) data sets, where precision = TP/(TP + FP) and recall = TP/(TP + FN). The blue curve (diamond markers) is generated from the results of our approach, the brown curve (× markers) is ...

Since 13 of the 33 known CRMs have target genes expressed neither in liver nor in erythroid (based on our discretization of the expression data), the ExpressionOnly classifier yields a recall of 60.6% and a precision of 50% on the 1X data set, but only a precision of 33% on the 2X data set. As seen from the curves, our method significantly outperforms both Naive Bayes-based approaches for mid- to high-precision predictions. Our method can improve the precision to 72% for the 1X data sets and 66.2% for the 2X data sets. Notice that the highest precision for the 2X data sets remains close to that for the 1X data sets, although almost twice as many negative examples are considered. This indicates that our approach provides a way to improve the precision of prediction by combining sequence data and expression data.

Regression trees

Figure 4 shows the regression trees generated from one run each on the 1X and 2X data sets.
Each internal node tests the value of an attribute $B_f$, which indicates whether factor $\Phi_f$ is predicted to bind the module in the tissue under consideration. Each leaf shows the conditional probability predicted, i.e. the probability that $R = 1$ under the conditions specified by the path from the root to that leaf.

Figure 4. The regression tree generated by the iteration with the best likelihood for the 1X (top) and 2X (bottom) data sets. Internal nodes corresponding to liver-specific transcription factors are colored yellow, and those corresponding to erythroid-specific factors ...

The tree structure indicates which TFs or combinations of TFs are most important for explaining liver-specific and erythroid-specific expression. Our algorithm successfully detects most known liver-specific TFs and combinations thereof, such as HNF1 + HNF4, HNF1 + C/EBP, and HNF4 + C/EBP, which are reported in the literature [3]. The erythroid-specific TF GATA1 is also reported in the trees. The trees do not contain many erythroid-specific nodes, firstly because there are only two TFs (GATA1 and NF-E2) that are erythroid-specific based on our expression data, and secondly because NF-E2 has very few predicted binding sites in the genome. We observe from the trees that the leaves associated with TF combinations usually have higher regulatory probabilities than the leaves associated with individual TFs. This indicates that the ability to identify TF combinations is key to being able to identify cis-regulatory modules. We emphasize that the trees were obtained without any prior information about which of the 231 PWMs used are involved in liver- or erythroid-specific expression. Notice that the TF PPAR is reported in our trees. PPAR is indeed an important factor regulating expression in liver [25], but was absent from Krivan and Wasserman's paper [3], from which we obtained the known liver-specific CRMs.
Most importantly, the expression of PPAR is low in both liver and erythroid, so ${}^{erythroid}F_{PPAR} = {}^{liver}F_{PPAR} = 0$. This shows that our approach is robust to noise in the expression data of TFs, provided the association between the binding sites in modules and the target gene's expression is sufficiently high. Finally, we note the unexpected selection of several different matrices for the same transcription factor along the same path in the tree (for example, C/EBP M770 and M190 in the tree obtained for the 1X data set in Figure 4). This is caused by the fact that these matrices are actually quite different from each other, and the presence of sites for both matrices increases the association to the target gene's expression.

Genome-wide CRM prediction in ten tissues

We next extended our analysis to ten different tissues from the GNF Atlas 2: $\mathcal{T}$ = {brain, erythroid, thyroid, pancreatic islets, heart, skeletal muscle, uterus, lung, kidney, liver}. 923 genes are specifically expressed (i.e. ${}^tE^g = 1$) in at least one of these tissues, and a total of 6278 modules are associated with these genes. We thus trained our Bayesian network on a set of 10 × 6,278 = 62,780 (module, tissue) pairs. Ten parallel runs of 100 EM-like iterations were performed from different random initializations, each taking approximately 24 hours. The regression tree obtained from the best run is shown in Figure 5. We can clearly observe from the tree that the positive assignments along each path leading to a leaf typically consist of TFs expressed in the same tissue. Several known tissue-specific combinations of TFs are recovered in the tree, such as C/EBP + HNF1 and C/EBP + HNF4 in liver. Also, many new and potentially meaningful TF combinations are predicted, such as C/EBP + AR in liver and Tax/CREB + GATA1 in erythroid.

Figure 5. Regression tree obtained from the best of ten runs on the set of 6,278 modules and 10 tissues.
Nodes are colored based on the tissue in which a particular factor is expressed.

The tree only contains TFs expressed in four tissues: liver, erythroid, heart, and skeletal muscle. The other six tissues are not represented in the tree for one of the following reasons: (1) the TFs that regulate the genes expressed in those tissues have low expression levels; (2) these TFs do not have strict sequence affinity requirements, so the binding scores of their matrices are low (it is also possible that no PWMs exist for such TFs); (3) the expression of genes in those tissues is controlled by post-transcriptional regulation instead of tissue-specific TFs. The complete set of tissue-specificity predictions is available at http://www.mcb.mcgill.ca/~xiaoyu/tissue-specificModule.

Statistical analysis of TF combinations

The regression trees obtained in the 10 runs vary substantially in their structure but share many of their factors and combinations of factors. The frequency with which factors or combinations of factors are found in these trees is an indication of their role in regulating tissue-specific expression. A pair of factors is said to co-occur in a regression tree if the tree contains a path along which both factors take the value 1. As seen in Tables 1 and 2, several factors and pairs of factors are consistently identified as part of the tree. Most TFs found are either known to be directly involved in tissue-specific regulation (in bold in Table 1), or known to be essential for the expression of certain genes in the given tissues while also having other, non-tissue-specific roles (normal font in Table 1).

Table 1. Significant TFs selected in the 10-tissue experiment.

Table 2. Significant TF pairs selected in the 10-tissue experiment.
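The pair co-occurrence statistic used above can be sketched as follows. This is our own illustration, not the authors' code, and it assumes a hypothetical tree representation: a leaf is `None`, and an internal node is a tuple `(factor, subtree_if_0, subtree_if_1)`:

```python
from itertools import combinations
from collections import Counter

def cooccurring_pairs(tree):
    """tree: None for a leaf, or (factor, subtree_if_0, subtree_if_1).
    Returns the set of factor pairs that co-occur, i.e. both take value 1
    along some root-to-leaf path of the tree."""
    pairs = set()

    def walk(node, positives):
        if node is None:   # reached a leaf: record all pairs of positive factors
            pairs.update(frozenset(p) for p in combinations(sorted(positives), 2))
            return
        factor, if0, if1 = node
        walk(if0, positives)              # factor takes value 0 on this branch
        walk(if1, positives | {factor})   # factor takes value 1 on this branch

    walk(tree, set())
    return pairs

def pair_frequencies(trees):
    """Count, over a collection of regression trees (e.g. the 10 runs),
    in how many trees each pair of factors co-occurs."""
    counts = Counter()
    for t in trees:
        counts.update(cooccurring_pairs(t))
    return counts
```

Running `pair_frequencies` over the trees from the 10 runs gives exactly the kind of pair counts that such a co-occurrence table summarizes.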
Predicting gene tissue-specificity

To further validate our module tissue-specificity predictions, we investigated whether a gene's fine-grained tissue-specific expression level could be predicted from the modules regulating it. To this end, for each tissue $t$, we separated genes into highly expressed (${}^tE^g = 1$) and lowly expressed (${}^tE^g = 0$). Let ${}^tP^g$ be the maximum of the predicted regulatory activities ${}^tR^m$ of the modules associated with gene $g$. We asked whether ${}^tP^g$ is predictive of the raw, non-thresholded expression level of gene $g$. In the case of genes with ${}^tE^g = 0$, such a correlation would show that we are able to detect tissue-specific genes even if their expression level is below the threshold. For genes with ${}^tE^g = 1$, this correlation would show that genes with very high tissue-specific expression levels are associated with stronger module predictions than those that barely meet our threshold. We note that in both cases, such a correlation could not be explained by any kind of training artifact, since the raw expression data is not part of the input. Considering genes showing tissue-specific expression (${}^tE^g = 1$), we find that eight of the ten tissues considered (all but whole brain and erythroid) exhibit a positive correlation between ${}^tP^g$ and the raw gene expression. Somewhat surprisingly, the correlation is strongest for thyroid (p-value = 0.028) and skeletal muscle (p-value = 0.015), two tissues that were relatively poorly represented in our regression tree. Among genes with ${}^tE^g = 0$, the correlation is weaker but is positive in seven of the ten tissues (all except heart, skeletal muscle, and liver). These results indicate that our predictions yield a weak predictor of gene tissue-specificity. Clearly, it is easier to predict the modules responsible for a gene's observed tissue-specificity than to predict the tissue-specificity of a gene from its modules.
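The gene-level predictor described above (take the maximum module activity per gene, then correlate it with raw expression) can be sketched as follows. The correlation statistic used in the paper is not specified, so a plain Pearson coefficient is used here for illustration; the input dictionaries are hypothetical:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def gene_scores(module_probs, modules_of_gene):
    """module_probs: module -> predicted Pr[R = 1] in the tissue of interest.
    modules_of_gene: gene -> list of modules whose closest gene it is.
    Returns gene -> maximum predicted regulatory activity over its modules."""
    return {g: max(module_probs[m] for m in mods)
            for g, mods in modules_of_gene.items()}

def score_vs_expression(module_probs, modules_of_gene, raw_expression):
    """Correlate the per-gene maximum module activity with raw expression."""
    scores = gene_scores(module_probs, modules_of_gene)
    genes = sorted(scores)
    return pearson([scores[g] for g in genes],
                   [raw_expression[g] for g in genes])
```

Taking the maximum rather than, say, the mean reflects the assumption that a single strongly active module is enough to drive tissue-specific expression.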
Discussion and conclusion

The approach we introduced here is the first to integrate binding site predictions with the tissue-specificity of expression of both transcription factors and target genes to predict cis-regulatory modules involved in regulating tissue-specific gene expression. By introducing a regression tree at the heart of the network and deriving practical algorithms to train it, we are able to accurately identify important combinations of transcription factors regulating gene expression in a tissue-specific manner. The algorithms derived for learning this type of network will undoubtedly be applicable to a wide range of problems. Many of the choices made in the design of the Bayesian network were made for reasons of computational practicality. As we improve the learning algorithm, it will become possible to use real-valued expression measurements. Furthermore, our network could easily be extended by introducing additional sources of information as observed variables. For example, ChIP-chip and other binding assay data, when available, can be used to update our belief in ${}^tB^m_f$. Reporter assays and DNA accessibility assays could be used to update our belief in ${}^tR^m$. If modeled correctly, these types of experimental data may greatly increase the accuracy of our predictions, not only for the modules or factors for which data is available, but also for other regions or factors associated with similar functions. The approach we described is potentially applicable to a wide range of data sets. While the relative inefficiency of the current learning algorithm prevented us from analyzing the complete set of tissue-specific expression data from GNF, it is clear that this analysis, involving 79 tissues, would yield a wealth of information. Another possible application is to identify and characterize cis-regulatory modules involved in time- and tissue-specific regulation during fish development.
The large body of in situ hybridization data available in zebrafish [26] would provide an excellent basis for this analysis.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

XC designed and implemented the prediction algorithms, obtained all the results presented, and participated in writing the manuscript. MB contributed the original idea and participated in the mathematical formulation and the writing. All authors read and approved the final manuscript.

Appendix 1. Calculation of $\Pr[B_f \mid A_f, F_f]$

$\Pr[B_f \mid A_f, F_f]$ is the probability of TF $\Phi_f$ binding a genomic region, given its observed expression $F_f$ and its binding affinity $A_f$ for the region. Modeling this relationship is challenging because it is unclear how $B_f$, a binary variable, should depend on $A_f$, a continuous variable, in the presence of the observed expression $F_f$. For this reason, we derive this probability from a set of other probability distributions that are easier to model, specifically $\Pr[A_f \mid B_f = 1]$, the affinity score distribution of sites that are bound. Recall that $\hat{F}_f$ is defined as the actual expression of factor $\Phi_f$. Note first that

$$\Pr[B_f \mid F_f, A_f] = \sum_{e' \in \{0,1\}} \Pr[B_f \mid F_f, A_f, \hat{F}_f = e'] \cdot \Pr[\hat{F}_f = e' \mid F_f, A_f]$$
$$= \sum_{e' \in \{0,1\}} \Pr[B_f \mid A_f, \hat{F}_f = e'] \cdot \Pr[\hat{F}_f = e' \mid F_f]$$
$$= \sum_{e' \in \{0,1\}} \Pr[B_f \mid A_f, \hat{F}_f = e'] \cdot \Pr[F_f \mid \hat{F}_f = e'] \cdot \Pr[\hat{F}_f = e'] / Z$$
$$= \sum_{e' \in \{0,1\}} \Pr[A_f \mid B_f, \hat{F}_f = e'] \cdot \Pr[B_f \mid \hat{F}_f = e'] \cdot \Pr[F_f \mid \hat{F}_f = e'] \cdot \Pr[\hat{F}_f = e'] / Z'$$

for appropriately chosen normalization constants $Z$ and $Z'$. The distribution $\Pr[F_f \mid \hat{F}_f]$ is described in Section 'Conditional distributions of E and F', and the prior probability $\Pr[\hat{F}_f]$ is approximated by the prior probability of the observed variable, $\Pr[F_f]$. So all that remains is to define $\Pr[A_f \mid B_f, \hat{F}_f]$ and $\Pr[B_f \mid \hat{F}_f]$.
Because a TF can only bind if it is expressed, we have

$$\Pr[A_f \mid B_f = 1, \hat{F}_f = 0] = \Pr[A_f \mid B_f = 1, \hat{F}_f = 1] = \Pr[A_f \mid B_f = 1].$$

When $\hat{F}_f = 0$, the event $B_f = 0$ yields no information on $A_f$, so

$$\Pr[A_f \mid B_f = 0, \hat{F}_f = 0] = \Pr[A_f],$$

where the prior probability $\Pr[A_f]$ is estimated from the data using a histogram approach. Notice that

$$\Pr[A_f] = \Pr[A_f, B_f = 0, \hat{F}_f = 0] + \Pr[A_f, B_f = 0, \hat{F}_f = 1] + \Pr[A_f, B_f = 1, \hat{F}_f = 0] + \Pr[A_f, B_f = 1, \hat{F}_f = 1].$$

We thus obtain

$$\Pr[A_f \mid B_f = 0, \hat{F}_f = 1] = \frac{A - B}{\Pr[B_f = 0, \hat{F}_f = 1]},$$

where

$$A = \Pr[A_f] \cdot (1 - \Pr[B_f = 0, \hat{F}_f = 0]),$$
$$B = \Pr[A_f \mid B_f = 1] \cdot (\Pr[B_f = 1, \hat{F}_f = 0] + \Pr[B_f = 1, \hat{F}_f = 1]),$$

and where $\Pr[B_f = x, \hat{F}_f = y] = \Pr[B_f = x \mid \hat{F}_f = y] \cdot \Pr[\hat{F}_f = y]$.

We assume that $\Pr[A_f \mid B_f = 1]$ follows a normal distribution with parameters $\mu_f$ and $\sigma_f^2$ that are optimized during the EM-like algorithm (see Appendix 3). $\Pr[F_f \mid \hat{F}_f]$ and $\Pr[\hat{F}_f]$ have both been defined previously. Finally, $\Pr[B_f \mid \hat{F}_f]$ is given by a fixed CPT:

$$\Pr[B_f \mid \hat{F}_f = 0] = \begin{cases} 1 & \text{if } B_f = 0 \\ 0 & \text{if } B_f = 1 \end{cases} \qquad \Pr[B_f \mid \hat{F}_f = 1] = \begin{cases} 1 - \gamma & \text{if } B_f = 0 \\ \gamma & \text{if } B_f = 1 \end{cases}$$

where $\gamma = 0.01$ is a parameter giving the prior probability that an expressed TF binds a generic genomic region.

Appendix 2. Formulas for the E-step

Calculation of $\Pr[R^m \mid A, E, F]$

Let $X$ be the set of modules associated with the same gene $g$, and let $S = \sum_{r \in X} R^r$. We have

$$\Pr[R^m \mid A, E, F] = 1/Z \cdot \sum_{b, e, s} \Pr[R^m, S = s, B = b, \hat{E}^g = e, A, E, F]$$
$$= 1/Z \cdot \left( \sum_b \Pr[R^m \mid B^m = b] \prod_f \Pr[B^m_f = b_f \mid A^m_f, F_f] \right) \cdot \left( \sum_e \Pr[E^g \mid \hat{E}^g = e] \sum_s \Pr[\hat{E}^g = e \mid S = s] \cdot \Pr[S = s \mid R^m, A^{X-m}, F] \right)$$

• The regression tree allows an efficient computation of the first sum: $\sum_b \Pr[R^m \mid B^m = b] \prod_f \Pr[B^m_f = b_f \mid A^m_f, F_f] = \sum_{\text{leaf } l} P(R \mid l) \cdot \prod_{\text{assignments } \Lambda \text{ in } \pi(l)} \Pr[\Lambda \mid A^m_f, F_f]$;

• $\Pr[B^m_f \mid A^m_f, F_f]$ has been defined in Appendix 1;

• $\Pr[E^g \mid \hat{E}^g]$ is represented by the CPT described in Section 'Conditional distributions of E and F';

• $\Pr[\hat{E}^g \mid S = s]$ is defined by the sigmoid function $1/(1 + e^{-b(s-a)})$.

Further noting that $\Pr[S = s \mid R^m, A^{X-m}, F] = \Pr[\sum_{r \in X-m} R^r = s - R^m \mid A^{X-m}, F]$, we can calculate $\Pr[S = s \mid R^m, A^{X-m}, F]$ using simple dynamic programming.

Calculation of $\Pr[B^m_f \mid A, E, F]$

$$\Pr[B^m_f \mid A, E, F] = \sum_r \Pr[B^m_f \mid A^m, E, F, R^m = r] \cdot \Pr[R^m = r \mid A, E, F]$$
$$\Pr[B^m_f \mid A^m, E, F, R^m] = \frac{\Pr[B^m_f \mid A^m_f, F_f] \cdot \Pr[R^m \mid B^m_f]}{Z}$$

Note that $\Pr[B^m_f \mid A^m_f, F_f]$ has been defined in Appendix 1. Furthermore, we can estimate $\Pr[R^m \mid B^m_f]$ from the data:

$$\Pr[R^m \mid B^m_f] = \frac{\sum_{m,t} \Pr[{}^tR^m \mid A, E, F] \cdot \Pr[{}^tB^m_f \mid A, E, F]}{\sum_{m,t} \Pr[{}^tB^m_f \mid A, E, F]}$$

where $\Pr[{}^tB^m_f \mid A, E, F]$ takes the values calculated at the previous iteration. We thus get

$$\Pr[B^m_f \mid A, E, F] = 1/Z \cdot \sum_r \Pr[A^m_f \mid B^m_f, F_f] \cdot \Pr[B^m_f \mid F_f] \cdot \Pr[R^m = r \mid B^m_f] \cdot \Pr[R^m = r \mid A, E, F]$$

where $\Pr[A^m_f \mid B^m_f, F_f]$ is obtained as in Appendix 1 and $\Pr[B^m_f \mid F_f] = \sum_{e'} \Pr[B^m_f \mid F_f, \hat{F}_f = e'] \cdot \Pr[\hat{F}_f = e' \mid F_f] / Z$.

Calculation of $\Pr[\hat{E} \mid A, F, E]$ and $\Pr[\hat{F} \mid A, F, E]$

Although $\hat{E}$ is a hidden variable, its posterior probability distribution does not need to be estimated, because we sum over all its possible values when computing $\Pr[R^m \mid A, F, E]$. The same holds for $\hat{F}$ in $\Pr[B_f \mid A_f, F_f]$.

Appendix 3. Parameter re-estimation (M-step)

$\Pr[A_f \mid B_f = 1]$ is assumed to follow a normal distribution $N(\mu_f, \sigma_f^2)$.
Parameters $\mu_f$ and $\sigma_f$ are re-estimated as follows:

$$\mu_f \leftarrow \frac{\sum_{m,t} \Pr[{}^tB^m_f = 1 \mid A, E, F] \cdot A^m_f}{\sum_{m,t} \Pr[{}^tB^m_f = 1 \mid A, E, F]} \qquad \sigma_f^2 \leftarrow \frac{\sum_{m,t} \Pr[{}^tB^m_f = 1 \mid A, E, F] \cdot (A^m_f)^2}{\sum_{m,t} \Pr[{}^tB^m_f = 1 \mid A, E, F]} - \mu_f^2 \qquad (1)$$

To avoid overstepping the local maximum, we use small steps when updating the values of $\mu_f$ and $\sigma_f$. Instead of replacing the old values with the new values calculated from Equation 1, we use a mixture of the old and new values, weighted according to the step size:

$$\mu_f \leftarrow (1 - \alpha) \cdot \mu_f^{old} + \alpha \cdot \mu_f^{new} \qquad \sigma_f^2 \leftarrow (1 - \alpha) \cdot \sigma_f^{2,old} + \alpha \cdot \sigma_f^{2,new}$$

where $\alpha = 0.1$ is the step size. The following parameters have values that remain fixed throughout the execution of the EM-like algorithm. Their values were chosen empirically to optimize the quality of the results.

1. Parameters for $\Pr[E \mid \hat{E}]$ and $\Pr[F \mid \hat{F}]$: $\alpha_E = \beta_E = \alpha_F = \beta_F = 0.1$.

2. Parameters for $\Pr[\hat{E}^g \mid R^{r_1}, R^{r_2}, \dots]$: $a = 0.8$, $b = 10$ in the validation experiments (small data sets), and $a = 0.4$, $b = 10$ in the discovery experiments (large data set).

3. Parameter for $\Pr[B_f \mid \hat{F}_f]$: $\gamma = 0.01$.

Appendix 4. The SupervisedNaiveBayes classifier

A Naive Bayes classifier was trained to discriminate between positive and negative (module, tissue) pairs. First, the affinity $A^m_i$ is discretized to 1 if and only if its value is at least one standard deviation above the mean of $A_i$ over all 100,000 putative modules from PReMod. The Naive Bayes network takes as input the following set of attributes: $F_1 \cdot A^m_1, \dots, F_{|\mathcal{F}|} \cdot A^m_{|\mathcal{F}|}$, and $E^g$. The precision-recall curves of Figure 3 were the result of an 11-fold cross-validation experiment.

Appendix 5. The NaiveBayesInNet classifier

The NaiveBayesInNet classifier is a Bayesian network identical to the main classifier presented in this paper, except that a Naive-Bayes-like approach replaces the probability tree representing $\Pr[R \mid B_1, \dots, B_{|\mathcal{F}|}]$.
More specifically, it assumes $\Pr[R \mid B_1, \dots, B_{|\mathcal{F}|}] = \prod_{f=1}^{|\mathcal{F}|} \Pr[B_f \mid R] / Z$. At each iteration of the EM-like algorithm, $\Pr[B_f \mid R]$ is estimated as

$$\Pr[B_f = a \mid R = b] = \frac{\sum_{t=1}^{|\mathcal{T}|} \sum_{m=1}^{|\mathcal{M}|} \Pr[{}^tB^m_f = a \mid A, E, F] \cdot \Pr[{}^tR^m = b \mid A, E, F]}{\sum_{t=1}^{|\mathcal{T}|} \sum_{m=1}^{|\mathcal{M}|} \Pr[{}^tR^m = b \mid A, E, F]}.$$

Then, estimating $\Pr[R \mid A, F, E]$ requires a summation over all $2^{|\mathcal{F}|}$ possible values of the $B$ variables (the simplification afforded by the regression tree cannot be applied here). To make the computation practical, we instead fix the values of the $B$ variables to their maximum likelihood estimates and use these fixed values to estimate $\Pr[R \mid A, F, E]$. The approach was trained and evaluated using exactly the same methodology as for the Bayesian network approach using regression trees.

Supplementary Material

Additional file 1: The logarithms of the likelihoods for the 2X validation experiments in three different randomly selected runs. Different colors represent different runs.

Acknowledgements

We wish to thank Doina Precup, Eric Blais, Emmanuel Mongin, Francois Pepin, and two anonymous reviewers for their useful comments. XC was funded by Genome Quebec Comparative and Integrative Genomics. This article has been published as part of BMC Bioinformatics Volume 8 Supplement 10, 2007: Neural Information Processing Systems (NIPS) workshop on New Problems and Methods in Computational Biology. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2105/8?issue=S10.

References

• Davidson EH. Genomic regulatory systems: development and evolution. Academic Press; 2001.

• Wasserman W, Fickett J. Identification of regulatory regions which confer muscle-specific gene expression. J Mol Biol. 1998;278:167–81. doi: 10.1006/jmbi.1998.1700.

• Krivan W, Wasserman W. A predictive model for regulatory sequences directing liver-specific transcription. Genome Res. 2001;11:1559–66. doi: 10.1101/gr.180601.

• Aerts S, Loo PV, Thijs G, Moreau Y, Moor BD. Computational detection of cis-regulatory modules.
Bioinformatics. 2003;19:II5–II14.

• Bailey TL, Noble WS. Searching for statistically significant regulatory modules. Bioinformatics. 2003;19:II16–II25.

• Sinha S, van Nimwegen E, Siggia ED. A probabilistic method to detect regulatory modules. Bioinformatics. 2003;19:i292–301. doi: 10.1093/bioinformatics/btg1040.

• Prabhakar S, Poulin F, Shoukry M, Afzal V, Rubin E, Couronne O, Pennacchio L. Close sequence comparisons are sufficient to identify human cis-regulatory elements. Genome Res. 2006;16:855–863. doi: 10.1101/gr.4717506.

• Taylor J, Tyekucheva S, King D, Hardison R, Miller W, Chiaromonte F. ESPERR: Learning strong and weak signals in genomic sequence alignments to identify functional elements. Genome Res. 2006;16:1596–1604. doi: 10.1101/gr.4537706.

• Philippakis AA, He FS, Bulyk ML. Modulefinder: a tool for computational discovery of cis regulatory modules. Pac Symp Biocomput. 2005:519–30.

• Johansson O, Alkema W, Wasserman W, Lagergren J. Identification of functional clusters of transcription factor binding motifs in genome sequences: the MSCAN algorithm. Bioinformatics. 2003;19:i169–76. doi: 10.1093/bioinformatics/btg1021.

• Ferretti V, Poitras C, Bergeron D, Coulombe B, Robert F, Blanchette M. PReMod: a database of genome-wide mammalian cis-regulatory module predictions. Nucleic Acids Res. 2007:D122–6. doi: 10.1093/nar/gkl879.

• Blanchette M, Bataille AR, Chen X, Poitras C, Laganiere J, Lefebvre C, Deblois G, Giguere V, Ferretti V, Bergeron D, Coulombe B, Robert F. Genome-wide computational prediction of transcriptional regulatory modules reveals new insights into human gene expression. Genome Research. 2006;16:656–668. doi: 10.1101/gr.4866006.

• Pennacchio L, Loots G, Nobrega M, Ovcharenko I. Predicting tissue-specific enhancers in the human genome. Genome Res. 2007;17:201–211. doi: 10.1101/gr.5972507.

• Su AI, Wiltshire T, Batalov S, Lapp H, Ching KA, Block D, Zhang J, Soden R, Hayakawa M, Kreiman G, Cooke MP, Walker JR, Hogenesch JB. A gene atlas of the mouse and human protein-encoding transcriptomes. Proc Natl Acad Sci USA. 2004;101:6062–7. doi: 10.1073/pnas.0400782101.

• Segal E, Yelensky R, Koller D. Genome-wide discovery of transcriptional modules from DNA sequence and gene expression. Bioinformatics. 2003;19:i273–82. doi: 10.1093/bioinformatics/btg1038.

• Segal E, Barash Y, Simon I, Friedman N, Koller D. From Promoter Sequence to Expression: A Probabilistic Framework. Proc 6th Inter Conf on Research in Computational Molecular Biology (RECOMB); 2002.

• Boutilier C, Friedman N, Goldszmidt M, Koller D. Context-specific independence in Bayesian networks. Proc Twelfth Conf on Uncertainty in Artificial Intelligence (UAI-96); 1996.

• Dempster A, Laird N, Rubin D. Maximum likelihood from incomplete data via the EM algorithm. J of the Royal Statistical Society, Series B. 1977;39:1–38.

• Quinlan J. C4.5: Programs for Machine Learning. Morgan Kaufmann; 1993.

• Witten I, Frank E. Data Mining: practical machine learning tools with Java implementations. Morgan Kaufmann; 2000.

• Mitchell TM. Machine Learning. McGraw-Hill; 1997.

• Matys V, Fricke E, Geffers R, Gössling E, Haubrock M, Hehl R, Hornischer K, Karas D, Kel A, Kel-Margoulis O, Kloos DU, Land S, Lewicki-Potapov B, Michael H, Münch R, Reuter I, Rotert S, Saxel H, Scheer M, Thiele S, Wingender E. TRANSFAC: transcriptional regulation, from patterns to profiles. Nucleic Acids Res. 2003;31:374–8. doi: 10.1093/nar/gkg108.

• Karolchik D, Baertsch R, Diekhans M, Furey T, Hinrichs A, Lu Y, Roskin K, Schwartz M, Sugnet C, Thomas D, Weber R, Haussler D, Kent W. The UCSC Genome Browser Database. Nucleic Acids Res. 2003;31:51–4. doi: 10.1093/nar/gkg129.

• Podkolodnaya OA, Stepanenko IL. The ESRG-TRRD: database of genes with specific transcription regulation in erythroid cells. 1998. http://wwwmgs.bionet.nsc.ru/mgs/papers/podkolodnaya/esg-trrd

• Yoshikawa T, Ide T, Shimano H, Yahagi N, Amemiya-Kudo M, Matsuzaka T, Yatoh S, Kitamine T, Okazaki H, Tamura Y, Sekiya M, Takahashi A, Hasty AH, Sato R, Sone H, Osuga JI, Ishibashi S, Yamada N. Cross-talk between peroxisome proliferator-activated receptor (PPAR) alpha and liver X receptor (LXR) in nutritional regulation of fatty acid metabolism. I. PPARs suppress sterol regulatory element binding protein-1c promoter through inhibition of LXR signaling. Mol Endocrinol. 2003;17:1240–54. doi: 10.1210/me.2002-0190.

• Sprague J, Bayraktaroglu L, Clements D, Conlin T, Fashena D, Frazer K, Haendel M, Howe D, Mani P, Ramachandran S, Schaper K, Segerdell E, Song P, Sprunger B, Taylor S, Slyke CV, Westerfield M. The Zebrafish Information Network: the zebrafish model organism database. Nucleic Acids Res. 2006:D581–5. doi: 10.1093/nar/gkj086.

• Krivan W, Wasserman W. A predictive model for regulatory sequences directing liver-specific transcription. Genome Research. 2001;11:1559–1566. doi: 10.1101/gr.180601.

• Eagon P, Elm M, Stafford E, Porter L. Androgen receptor in human liver: characterization and quantitation in normal and diseased liver. Hepatology. 1994;19:92–100.

• Lecointe O, Bernard K, Naert V, Joulin C, Larsen P, Romeo PH, Mathieu-Mahul D. GATA- and SP1-binding sites are required for the full activity of the tissue-specific promoter of the tal-1 gene. Oncogene. 1994;9:2623–2632.

• Humbert P, Rogers C, Ganiatsas S, Landsberg R, Trimarchi J, Dandapani S, Brugnara C, Erdman S, Schrenzel M, Bronson R, Lees J. E2F4 is essential for normal erythrocyte maturation and neonatal viability. Mol Cell. 2000;6:281–91. doi: 10.1016/S1097-2765(00)00029-0.

• Bockamp E, McLaughlin F, Gottgens B, Murrell A, Elefanty A, Green A. Distinct Mechanisms Direct SCL/tal-1 Expression in Erythroid Cells and CD34 Positive Primitive Myeloid Cells. Journal of Biological Chemistry. 1997;272:8781–8790. doi: 10.1074/jbc.272.13.8781.

• Blobel G, Nakajima T, Eckner R, Montminy M, Orkin S. CREB-binding protein cooperates with transcription factor GATA-1 and is required for erythroid differentiation. Proc Natl Acad Sci USA. 1998;95:2061–2066. doi: 10.1073/pnas.95.5.2061.

• Welch J, Watts J, Vakoc C, Yao Y, Wang H, Hardison R, Blobel G, Chodosh L, Weiss M. Global regulation of erythroid gene expression by transcription factor GATA-1. Blood. 2004;104:3136–3147. doi: 10.1182/blood-2004-04-1603.

• Dufour C, Wilson B, Huss J, Kelly D, Alaynick W, Downes M, Evans R, Blanchette M, Giguere V. Genome-wide orchestration of cardiac functions by the orphan nuclear receptors ERRalpha and gamma. Cell Metabolism. 2007;5:345–56. doi: 10.1016/j.cmet.2007.03.007.

• Zhu W, TomHon C, Mason M, Campbell T, Shelden E, Richards N, Goodman M, Gumucio D. Analysis of Linked Human epsilon and gamma Transgenes: Effect of Locus Control Region Hypersensitive Sites 2 and 3 or a Distal YY1 Mutation on Stage-Specific Expression Patterns. Blood. 1999;93:3540–9.

• Crestani M, De Fabiani E, Caruso D, Mitro N, Gilardi F, Vigil Chacon A, Patelli R, Godio C, Galli G. LXR (liver X receptor) and HNF-4 (hepatocyte nuclear factor-4): key regulators in reverse cholesterol transport. Biochem Soc Trans. 2004;32:92–6. doi: 10.1042/BST0320092.

• Peterkin T, Gibson A, Loose M, Patient R. The roles of GATA-4, -5 and -6 in vertebrate heart development. Semin Cell Dev Biol. 2005;16:83–94.
doi: 10.1016/j.semcdb.2004.10.003. [PubMed] [Cross • Reimold A, Etkin A, Clauss I, Perkins A, Friend D, Zhang J, Horton H, Scott A, Orkin A, Byrne M, Grusby M, Glimcher L. An essential role in liver development for transcription factor XBP-1. Genes Dev. 2000;14:152–157. [PMC free article] [PubMed] • Charron J, Malynn B, Fisher P, Stewart V, Jeannotte L, Goff S, Robertson E, Alt F. Embryonic lethality in mice homozygous for a targeted disruption of the N-myc gene. Genes Dev. 1992;6:2248–2257. doi: 10.1101/gad.6.12a.2248. [PubMed] [Cross Ref] Articles from BMC Bioinformatics are provided here courtesy of BioMed Central • MedGen Related information in MedGen • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2230503/?tool=pubmed","timestamp":"2014-04-16T20:11:55Z","content_type":null,"content_length":"281922","record_id":"<urn:uuid:b1cd89db-604f-4e35-a436-66ef3c680baf>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Can anyone help me with this? AxB = AB

Hi, need help with this... Is it possible to have a number AB whereby A multiplied by B = AB? How do I work this out?

Note: AB is a number whereby the front portion is made up of A and the rear is made up of B... if you get what I mean. For e.g., if X = 2 and Y = 3, then the number XY is 23.

Re: Can anyone help me with this? AxB = AB

Your question is (if I get you right!): is there a two-digit number AB such that A x B = AB, correct? That is, AB should be equal to 10A + B, so A x B = 10A + B, which gives B(A - 1) = 10A, i.e. B = 10A/(A - 1).

Value of A | Value of B
    1      | Infinity (undefined)
    2      | 20
    3      | 15
    4      | 40/3
    5      | 25/2
    6      | 12
    7      | 35/3
    8      | 80/7
    9      | 45/4

It can be seen that for single-digit values of A we don't get single-digit values of B, with the trivial exception of A = 0, B = 0. Hence, there can be no such number.

Character is who you are when no one is looking.

Re: Can anyone help me with this? AxB = AB

Thanks for the reply :) Yup, I tried that. What if AB, A and B are numbers with more than 1 digit? Any possibility? For example, X = 123, Y = 12, XY = 12312.

Re: Can anyone help me with this? AxB = AB

If A x B x C = ABC, then 100A + 10B + C = ABC. Therefore, C = (100A + 10B)/(AB - 1). We would have to try out 9 x 9 = 81 combinations for A = 1, 2, ..., 9 and B = 1, 2, ..., 9. None of the numbers A, B or C can be zero, as the product would then be zero. Let us explore other possibilities to know whether such numbers exist or not. Just give me some time.

Re: Can anyone help me with this? AxB = AB

I have tried, and now I have a proof that there cannot be a number of any length ABC... such that ABC... = A x B x C x ... I shall first prove it for three-digit numbers. If A x B x C = ABC, consider the smallest of the three digits. The smallest cannot be zero, since then the product would be zero.
The smallest cannot be one, since then, even if the other two digits take the maximum possible value of 9, the product is only 81, not a three-digit number. The smallest cannot be 2, since even if the other two digits are 9 and 9, the product is only 162, and the number itself would be at least 200. The smallest cannot be 3, since even then the product is at most 243, less than 300. Similarly, the smallest cannot be 4, 5, 6, 7 or 8; in each case the largest possible product falls short of the smallest possible number. Finally, if the smallest digit is 9, then all three digits are 9, and the product is only 729, less than 900. Thus the smallest digit cannot be 0, 1, 2, 3, 4, 5, 6, 7, 8 or 9, and therefore there can be no such three-digit number. The same argument extends to numbers with any number of digits: there can be no number ABCD... such that A x B x C x D x ... = ABCD...

Re: Can anyone help me with this? AxB = AB

But what I really have a problem figuring out is... what if A or B is > 9? I.e., A and B can be numbers with more than 1 digit... Is A x B = AB still possible?

Re: Can anyone help me with this? AxB = AB

I think the proof that it is not possible for single-digit numbers can be extended to multiple-digit numbers also. I will post my reply after examining all the possibilities.
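For what it's worth, the conclusion can also be checked by brute force. Here is a quick Python sketch (my own addition, not from the original posters) that searches every number up to five digits for one that equals the product of its own digits:

```python
# Brute-force search: is any multi-digit number equal to the product of
# its digits?  (Single-digit numbers satisfy this trivially, so start at 10.)

def digit_product(n):
    p = 1
    while n:
        p *= n % 10   # peel off the last digit
        n //= 10
    return p

hits = [n for n in range(10, 100000) if digit_product(n) == n]
print(hits)  # [] -- no such number with 2 to 5 digits
```

The search comes up empty, in agreement with the bounding argument above: an n-digit number with leading digit A is at least A x 10^(n-1), while its digit product is at most A x 9^(n-1).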
Modeling Subsurface Petroleum Hydrocarbon Transport

Modeling Framework

It's inevitable that in a modeling course the transport equations are discussed. One of the purposes of the course is to make the equations less intimidating and to show how to read these equations to learn what is included in a specific model. Various calculations that can be made from the equations are implemented in JavaScript and Java applets. Here's a JavaScript calculator for determining the retardation factor.

Retardation Factor Calculator

Retardation factor: R = 1 + ρb·Kd / θ

where
  R    = retardation factor
  ρb   = bulk density = ρs(1 − θ)
  ρs   = solids density (try 2.65 g/cm^3)
  θ    = porosity (try 0.30)
  Kd   = (soil) distribution coefficient = foc·Koc
  foc  = fraction organic carbon (try 0.0001)
  Koc  = organic carbon/water partition coefficient

The calculator takes the solids density, porosity, fraction organic carbon, and a Koc value, and reports the resulting bulk density ρb and retardation factor R.

Other calculators are used throughout the course for calculation of various quantities and unit conversions. The calculators form an on-line site assessment tool called OnSite. For more complex tutorials, a Java applet is being developed.
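The same calculation is easy to reproduce outside the browser. The sketch below is my own illustration, not part of the EPA calculator; it uses the page's suggested inputs together with a made-up partition coefficient of Koc = 100 L/kg, chosen only for illustration:

```python
def retardation_factor(rho_s, theta, f_oc, K_oc):
    """R = 1 + rho_b * K_d / theta, with rho_b = rho_s*(1 - theta) and K_d = f_oc*K_oc."""
    rho_b = rho_s * (1.0 - theta)   # bulk density, g/cm^3 (numerically = kg/L)
    K_d = f_oc * K_oc               # soil distribution coefficient, L/kg
    return 1.0 + rho_b * K_d / theta

# Suggested inputs from the page; K_oc = 100 L/kg is illustrative only.
R = retardation_factor(rho_s=2.65, theta=0.30, f_oc=0.0001, K_oc=100.0)
print(round(R, 4))  # 1.0618
```

With such a low fraction of organic carbon the retardation factor stays close to 1, meaning the dissolved contaminant would move at nearly the groundwater velocity.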
Bapchule Trigonometry Tutor

Find a Bapchule Trigonometry Tutor

...Many of the students were Hispanic and their parents, like mine, couldn't help with homework because they didn't speak English. So I shifted some of my attention to those students because I didn't want to improve their test scores, but improve their use of the English language to establish a strong...
67 Subjects: including trigonometry, English, Spanish, writing

...Overall I took 15 credit hours of physiology and 12 credit hours in pathophysiology while in medical school. My expertise is condensing the information and helping my students understand what it all means and why it is important. I have taken over 20 credit hours of nutrition in medical school, so I am very qualified to tutor in nutrition.
14 Subjects: including trigonometry, chemistry, physics, calculus

...Computer literacy is quickly becoming as important as language or analytical literacy. I have taken several courses in computer programming and have used many in my professional work. I can tutor basic computer programming, Matlab/Simulink, LabVIEW, Python, Lisp/Scheme, Perl, Java, and C.
62 Subjects: including trigonometry, English, reading, writing

...My educational philosophy is that learning is fun, or should be. My experience includes Middle School, High School, and College. My specialty is education in the classics: Latin, Classical and Biblical Greek, and the history and literature of the period.
43 Subjects: including trigonometry, English, Spanish, chemistry

...I would help him with adding and subtracting algebraic expressions and fractions. The algebraic expressions confused him at first because he was intimidated by letters being mixed in with numbers. I told him don't worry, just think of one of these letters as apples and the other set as oranges.
7 Subjects: including trigonometry, calculus, geometry, algebra 1
Find the extremal of this functional with given b.c.s

November 6th 2011, 04:39 AM

I've used Euler-Lagrange and can't seem to get the right answer; please help if you can.

Determine the extremal for the functional $\int_{0}^{1}(xy+y^2-2y^2y')\, dx$ with $y(0)=0,\ y(1)=2$.

Using Euler-Lagrange I get

$\frac{\partial f}{\partial y}+\frac{\mathrm{d}}{\mathrm{d} x}\left(\frac{\partial f}{\partial y'}\right)=x+2y-4yy'+4yy'=x+2y=0,$

but that isn't consistent with the given b.c.s. Please help!

November 7th 2011, 04:11 AM
{"url":"http://mathhelpforum.com/advanced-applied-math/191275-find-extremal-functional-given-b-c-s-print.html","timestamp":"2014-04-24T09:53:27Z","content_type":null,"content_length":"9576","record_id":"<urn:uuid:8a122ce4-0ea3-451e-869d-f42e6bcdf4de>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
34: Heisenberg's Uncertainty Principle

All physics of the 19th century and earlier is called classical physics. Examples are Newtonian mechanics, which we dealt with this whole term, and electricity and magnetism, which you will encounter the next term. In the early part of this century, when we learned about the composition of atoms, it became clear that classical physics did not work on the very small scale of the atoms. The size of an atom is only ten to the minus ten meters. If you take 250 million of them and you line them up, that's only one inch. In 1911, the English physicist Rutherford demonstrated that almost all the mass of an atom is concentrated in an extremely small volume at the center of the atom. We call that the nucleus; it's positively charged. And there are electrons, which are negatively charged, which are in orbits around the nucleus, and the typical distances from the nucleus to the electrons are about 100,000 times larger than the size of the nucleus itself. As early as 1920, Rutherford named the proton, and Chadwick discovered the neutron in 1932, for which he received the Nobel Prize. Now, let us imagine that this lecture hall is an atom. And the size of an atom is defined by the orbits, the outer orbits of the electrons. If I scale it properly, now, in this ratio 100,000 to 1, then the size of the nucleus would be even smaller than a grain of sand. And it just so happens that yesterday I went to Plum Island, I walked for three hours on the beach and I ended up with some sand in my pockets. And so I will donate to you one proton; make sure you hold onto it... Ooh, this is two protons, that's too generous. So keep it there-- this is one proton. And there would be an electron, then, anywhere there, near the walls, going around like mad in orbit, and that would then be a hydrogen atom. Just think about what an atom is. An atom is all vacuum. You and I are all vacuum. You think of yourself as being something, but we are nothing.
You can ask yourself the question, If you are all vacuum, why is it, then, that I cannot move my hand through the other hand, the way a ghost can walk through a wall? That's not so easy to answer, and in fact, you cannot answer it with classical physics and I will not return to that today. But you are all vacuum. According to Maxwell's equations, Maxwell's law of electricity and magnetism, an electron, because of the attractive force of the proton, would spiral into the proton in a minute fraction of a second, and so atoms could not exist. Now, we know that's not true. We know that atoms do exist. And so that created a problem for physics and it was the Danish physicist Niels Bohr who in 1913 postulated that electrons move around the nucleus in well-defined orbits which are distinctly separated from each other, and that the spiraling-in of the electrons into the nucleus does not occur, for the reason that an electron cannot exist in between these allowed orbits. It can jump from one orbit to another, but it cannot exist in between. Now, Bohr's suggestion was earth-shaking, because it would also imply that a planet that goes around the sun cannot orbit the sun just at any distance. You couldn't move it just a trifle in or a trifle farther out. It would also require discrete orbits. It would also mean that if you had a tennis ball and you would bounce the tennis ball up and down, that the tennis ball could not reach just any level above the ground, but it would only be discrete levels, and that is very much against our intuition. We'd like to think that when you bounce a tennis ball, that it can reach any level that you want to. You give it just a little bit more energy and it will go a little higher. That, according to quantum mechanics, would not be possible.
Now, all this seems rather bizarre, as it goes against our daily experiences, but before we dismiss the idea of quantization-- see, the quantization comes in when you talk about discrete orbits-- you have to realize that the differences in the allowed heights of the tennis ball and the differences between the allowed orbits of the planets around the sun would be so infinitesimally small that we may never be able to measure it. In other words, quantum mechanics really plays no role in our macroscopic world. Now, atoms are very, very small compared to tennis balls, and the quantization effects are much larger in the sub-microscopic world of electrons and atoms than in our familiar world of baseballs, pots and pans, and planets. So before we continue, I would like to repeat to you one of the cornerstones of quantum mechanics. And it says that the electrons in atoms can only exist at well-defined energy levels-- think of them as being orbits-- around the nucleus, and they cannot exist in between. Now, when I heat a substance, the electrons in the atoms can jump from inner orbits to allowed outer orbits, and when they do so, they can leave a hole, an opening, an empty space in the inner But later on, they can fall back to fill that opening. They can occupy that place again. And when I keep heating this substance, there is some kind of a musical chair game going on. The electrons will go to outer orbits, they may spend there some time and then they may fall to lower orbits, to inner orbits. You see here a vase, a very precious vase, and when I pick up this vase, I have to do work. I bring it further away from the center of the Earth. Now, is that energy lost? No. I could drop the vase, and it would pick up kinetic energy. I will get that energy back. Gravitational potential energy will be converted to kinetic energy. It will crash to pieces, and it will generate some heat. In fact, the breaking itself of this vase would take some energy. 
In a similar way, the energy that you put into electrons when you bring them to outer orbits is retrieved when the electrons fall back. So there is a parallel-- dropping this vase and getting your work back that I put in. It wouldn't be a nice thing to do to this 500-year-old vase, but as far as I'm concerned, perfectly reasonable to do it with Ohanian, so we can let that go, and the energy will come out in the form of heat and also in the form of, perhaps, some noise. When electrons fall from an outer orbit back to an inner orbit, it's not kinetic energy that is released, but it comes out often in the form of light, electromagnetic radiation. Light has energy. Einstein formulated that a light photon, the energy of a light photon, is h times the frequency, and h is Planck's constant-- named after Max Planck-- and h is about 6.6 times 10 to the minus 34 joule-seconds. Now we've also seen in 8.01 that lambda, the wavelength of light, equals the speed of light divided by the frequency. And so if I eliminate the frequency, I also can write that the energy of a light photon equals hc divided by lambda. And so you see, the more energy there is available, the smaller the wavelength. And the less energy there is available, the longer the wavelength. And so if the jump from an outer orbit to an inner orbit is very high, then the wavelength will be shorter than when the jump is relatively small. I can make you some kind of an energy diagram of these jumps. And these are energy levels, so energy goes in this direction, but if you want to, you can think of these as the position of how far the electrons are away from the nucleus, if you like that, if it helps you, so this will be the electron that will be the closest to the nucleus. So these would be allowed energy levels, allowed orbits. And if this electron had jumped all the way here, then it could fall back at a later moment in time and the energy could be so much that you couldn't even see the light. 
It could be ultraviolet, and this jump may still be ultraviolet, but now this jump, which is a little less energy, that may be in the blue part of our spectrum. So we may see this as blue light. And this one, which is a little less than this, this energy may generate, this jump may generate green light. And the jump from here to here, which is even less, may generate red light. And a jump from here to here, which is even less, may again be invisible, so this may be infrared. And so as the electrons fall from outer orbits to inner orbits, you expect very discrete energies to come out, very discrete wavelengths, and these wavelengths that you would see correspond, then, to these allowed transitions between these energy levels. So if we could look at that light and sort it out by color, we would, in a way, see these energy levels. Now, you have in your little envelope a piece of plastic, which we call a grating, and the grating has the ability to decompose the light in colors, which we call a spectrum, and we're going to shortly use that grating to look at light from helium and light from neon. But before we do that, I'd like to hand out-- as a souvenir to a few people, randomly picked-- something that they can also use. It's not as good as your grating, though, but it's also nice. You will see a more spectacular result, but not as clean. It's not as clean. All right, one for you, one for you, one for you and one for you. And you want one-- I can tell that-- and you want one. And here, for you, for you. Oh, no, this side hasn't had anything. I've got to walk all the way over now. So this is really for children's parties, which I'm handing out. Oh, George Costa, you want one, of course. Professor Costa wants one-- I couldn't bypass him. And you want one, okay, and you want one. So, by all means, use your grating, but then, at the very end, you can always use these little spectacles, which don't work nearly as well, but, uh... this kind of thing. 
I'm going to light here this bulb, this light, which has helium in it, and what you're going to see with your grating, if you hold your grating properly-- you may have to rotate it 90 degrees; you will see how that works when you try it-- you're going to see very, very sharp, narrow lines at various colors. I want you to realize that the reason why you see very sharp, narrow lines is only because my light source is a very sharp, narrow line. If you use it on something that is not a very sharp, narrow line, then you're not going to see through that grating very sharp, narrow lines. So don't confuse the lines that are on the grating with the line source that I have here. Now, when you look through your grating very shortly, you will see, on both sides, the wonderful lines. It's a mirror image, and we will discuss it in a little bit more detail, but before you look through your grating, I first want you to simply look at it without the grating, because then it is even more spectacular when you use the grating. Because you have no clue, when you don't use the grating, what kind of colors are hidden there. And the colors that you are going to see are these electron levels. So I am going to make it dark. And I will turn this on. And this one, I believe, is helium. I have a grating here. So we have to rotate it so that you see vertical lines on either side. You may have to rotate it 90 degrees, no more. And if you look closely-- for instance, look on the right side of the light-- you'll see a distinct blue line, a few blue lines, green, a very nice bright yellow one, and you see red. And if you go further to the right, you see a repeat. It's a little fainter, but you see a repeat of that.
That's not important right now; I just want you to see that this light, which you would otherwise have no idea about, comes out at very discrete wavelengths, very discrete frequencies, and they correspond to these jumps from allowed energy levels to other allowed energy levels, but there is nothing in between. And when you look on the left side, you'll see a mirror image of what you see on the right side. Now, neon... excuse me, helium has only two electrons. I'm now going to put in the neon bulb, and that makes it richer, for the reason that neon has ten electrons, so you have many more orbits, so many more ways that the electrons can play musical chairs. A lot of lines in the red-- I'm not blocking you, I hope-- a lot of lines in the red, and some beautiful lines in the yellow. I see some in the green, I don't see much in the blue... a little bit in the blue. But the key thing is, I want you to see that these lines are discrete. It is not just any wavelength that can be generated; it's only the allowed orbits, the musical-chairs game when the electrons jump from one orbit to another, and that gives you this unique discrete spectrum. Now, these light spectra were known long before Bohr came with his daring ideas, but before quantum mechanics, these lines were a great mystery, but they no longer are. I suggest you use this grating and use it when you are outside at night; look at some streetlights, particularly sodium lamps and mercury lamps. And, of course, the neon lamps are quite spectacular, but keep in mind, you will not see very nice straight lines unless your light source itself is a very nice straight, narrow light source. Now, quantum mechanics took a big leap in the '20s, and it would be impossible for me in the available amount of time to do justice to all the basic concepts. However, I will discuss some consequences that are rather nonintuitive. Prior to quantum mechanics, there was a long-standing battle between physicists over whether light consists of particles or whether they are waves.
Newton believed strongly that they're particles, and the Dutchman Huygens believed that they were waves. And it seemed like, in 1801, that a conclusive experiment was done by Young, which demonstrated unambiguously that light was waves; Huygens was right. But as time went on, discomfort was growing, as there were also experiments that showed rather conclusively that light really was particles. And it was one of the great victories of quantum mechanics that it showed that light is both. At times it behaves like waves and at other times, it behaves like particles; it all depends on how you do your experiment. In 1923, Louis de Broglie made the daring suggestion that a particle can behave like a wave, and he specified, he was very specific, that the wavelength-- which nowadays is called de Broglie wavelength-- is h, Max Planck's constant, divided by the momentum of that particle and the momentum is the mass of the particle times the velocity, as we have seen in 8.01. If the momentum is higher, then the wavelength is shorter. A baseball will have a very high momentum, with a ridiculously low... short wavelength. Now, one of the startling consequences is that protons and electrons, which everyone of that time considered particles, can then also be considered as being waves. And in 1926, the Austrian physicist Schrodinger drove the nail in the coffin with his famous equation-- Schrodinger's equation, it's called now-- which is the ground pillar of quantum mechanics and it unifies the wave and the particle character of matter. Returning to my baseball, take a mass of the baseball of, say, half a kilogram and give it a speed of 100 miles per hour. Calculate the wavelength that you would find, according to quantum mechanics. That wavelength is so absurdly small, it is 20 orders of magnitude smaller than the radius of an electron, so it is completely meaningless. So quantum mechanics plays no role in our macroscopic world of pots and pans and baseballs. 
But now take an electron. You take the mass of the electron, 10 to the minus 30 kilograms. And you give the electron a speed of, say, 1,000 meters per second. Now you get a wavelength which is comparable to the wavelength of visible light, red light. And now it's something that becomes very meaningful, something that can be measured. Now, you may argue, "Gee, what difference does it make? Who cares whether something is a wave or whether something is a particle?" Well, it makes a huge difference, because waves have crests and they have valleys, and so if you take two sources of waves, either water waves-- two sources, tapping up and down on the water-- or you can take two sound sources, then there are certain locations on the surface of the water where the crest of one wave arrives at the same time as the valley of the other, and so they cancel each other out. There is nothing, there is no motion of the water. We call that destructive interference. Of course, there are other places where there is constructive interference, where they support each other. Now, if particles can do that, too... That is very hard to imagine-- how can one particle interfere with another particle and vanish, so that the two particles no longer exist? So if, indeed, particles are waves, you should be able to demonstrate that by having the interference pattern of two particles, like the water waves, and make-- at certain locations in space-- those particles disappear, which turns out to be possible. But that's a very nonintuitive idea. So we think of it too classically when we say, "Well, two particles cannot disappear." But in quantum mechanics, you can think in waves if you want to, and then you have no problems with the interference pattern and the destructive interference at certain locations.
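The two wavelengths just quoted are one-line computations from lambda = h / (m v). Here is a quick sketch (my own; 100 miles per hour is taken as roughly 45 meters per second):

```python
h = 6.626e-34  # Planck's constant, J s

def de_broglie(m, v):
    """de Broglie wavelength lambda = h / (m * v), in meters."""
    return h / (m * v)

print(de_broglie(0.5, 45.0))      # baseball: ~2.9e-35 m, absurdly small
print(de_broglie(1e-30, 1000.0))  # electron: ~6.6e-7 m, like red light
```

The baseball's wavelength really is some twenty orders of magnitude below the size of an electron, while the electron's wavelength lands squarely in the visible range.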
If you and I are clever enough, you would think that we should be able to determine the position of an object to any accuracy that we require, and at the same time determine also its momentum to any accuracy that we require. It's just a matter of how clever we are. Simultaneously, the object is right there and that is its mass and that is its speed. However, the German physicist Heisenberg realized in 1927 that a consequence of quantum mechanics is that this is not possible. Strange as it may sound to you, Heisenberg stated that the position and the momentum of an object cannot be measured very accurately at the same time. And I will read to you Heisenberg's uncertainty principle, the way we know it. It says, "The very concept of exact position of an object and its exact momentum, together, have no meaning in nature." It's a profound nonclassical idea, and it is hard for any one of us-- you and me included-- to comprehend. But it is consistent with all experiments that we can do to date. I want to repeat it, because it's going to be important for what follows. "The very concept of exact position of an object and its exact momentum, together, have no meaning in nature." What does it mean? First, let me write down Heisenberg's uncertainty principle. Delta p, which is the uncertainty in the momentum, multiplied by delta x, which is an uncertainty in the position of that particle, is larger than or approximately equal to Planck's constant divided by two pi-- which, in physics, we call "h-bar"-- and h-bar is approximately 10 to the minus 34 joule-seconds. You see, h is 6.6 times 10 to the minus 34. If you divide that by two pi, you get about 10 to the minus 34. What does this mean, now? What it means is that if the position is known to an accuracy delta x-- we'll give you some examples-- then the momentum is ill-determined to an amount delta p larger than or approximately equal to h-bar divided by delta x. That's what it means. 
And I'll give you an example which I've chosen from a book of George Gamow. Gamow wrote a book which he called Mr. Tompkins in Wonderland. It's about dreams. Mr. Tompkins wants to understand the quantum world, and there is a professor-- you will see a picture of the professor-- who takes him, in his dreams, along the various remarkable nonintuitive effects of quantum mechanics. And in one of these dreams, the professor suggests that we make h-bar one. And the professor takes a triangle in the pool table and he puts the triangle over one billiard ball, so the billiard ball is constrained in its position and that delta x is roughly... say, 30 centimeters, 0.3 meters. That means that the momentum is undetermined to an approximate value of one divided by 0.3, which is about 3 kilogram-meters per second. Now, if we give the billiard ball a mass of one kilogram, then delta p is m delta v, and so if m is one kilogram, then the speed of that billiard ball is undetermined, according to Heisenberg's uncertainty principle, by at least approximately three meters per second. Three meters per second-- that means seven miles per hour, and so that billiard ball will go around like crazy in that triangle, and that's exactly what happens in the dream. And I will show you here a picture from that book. Mr. Tompkins is always in pajamas, just to remind you that it is a dream. And needless to say, the professor is a very old man and has a very nice beard; it adds to the prestige. And I will read you from this book. I will read you a very short paragraph that deals with this. So the professor says, 'Look here, I'm going to put definite limits on the position of this ball by putting it inside a wooden triangle.' As soon as the ball was placed in the enclosure, the whole inside of the triangle became filled up with the glittering of ivory. 'You see,' said the professor, 'I defined the position of the ball to the extent of the dimensions of the triangle. This results in considerable uncertainty in the velocity and the ball is moving rapidly inside the boundary.' 
'Can't you stop it?' asked Mr. Tompkins. 'No, it is physically impossible. Any body in an enclosed space possesses a certain motion. We physicists call it zero-point motion, such as, for example, the motion of electrons in any atom.' So here you see quantum mechanics at work when h-bar is one. This is a very nonclassical idea, because you and I would think-- and we've always dealt with that in 8.01-- that you can take an object and place it at location "a," and we say at time t zero it is at "a" and it has no speed and we know the mass, so we know both the momentum and the position to infinite accuracy. But according to quantum mechanics, that's not possible. So let's now return to the real world, where h-bar is not one, but where h-bar is 10 to the minus 34, and let's now put a billiard ball inside this triangle. Now, delta x is the same, but since h-bar is 10 to the minus 34, delta p is, of course, 10 to the 34 times smaller, and so the velocity is 10 to the 34 times smaller. This undeterminedness-- the degree to which the velocity is now undetermined-- is so ridiculously small-- it is 3 times 10 to the minus 34 meters per second-- that if you allowed that ball to move with that speed, in 100 billion years, it would move only 1/100 of a diameter of an electron, so it's meaningless again. And so again, you see that quantum mechanics plays no role in our daily macroscopic world of baseballs and basketballs and billiards and pots and pans. And therefore, it is completely okay for us to say, "I have a billiard ball which is at point 'a,' and its mass is one kilogram and it has no speed." That is completely kosher, completely acceptable, and quantum mechanics has no problems with that. Let's now turn to an atom. Take a hydrogen atom. The diameter of a hydrogen atom is about 10 to the minus 10 meters. 
So the electron is confined to a delta x of about 10 to the minus 10 meters. That means the momentum of that electron becomes undetermined-- according to Heisenberg's uncertainty principle-- to about 10 to the minus 34, divided by 10 to the minus 10, is about 10 to the minus 24 kilogram-meters per second. What is the mass of an electron? That's about 10 to the minus 30 kilograms. So this, delta p, is also m delta v. So it means that delta v-- that means the velocity of the electron-- is undetermined, according to Heisenberg's principle, by an amount which is at least 10 to the minus 24, which is this delta p divided by the mass of the electron, which is 10 to the minus 30. And that is about 10 to the six meters per second-- that is one-third of a percent of the speed of light. So the electron is moving only because of the fact that it is confined. That's what quantum mechanics is all about. The electron's motion is dictated exclusively by quantum mechanics. I'm going to show you an experiment in which I want to convey to you how nonintuitive Heisenberg's uncertainty principle is. I have here a laser beam, and this laser beam is going to be aimed through a narrow slit-- I'll make a drawing, I'll turn this light off-- and that slit, which is a vertical slit, can be made narrow and can be made wider. Here is this light beam and here is this opening, this slit. It's only going to be confined in this direction, not in this direction. And so the light will come out here, and then, on a screen, which is going to be that screen, at large distance capital L, we're going to see that light spot, due to the light beam going through the slit and this separation, capital L. I start off with the slit all the way open and so you're going to see this light spot like this. And then I'm going to make the slit narrower and narrower, and as I'm going to cut into the light beam, what you're going to see is exactly what you expect. 
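The three confinement estimates above-- the billiard ball in Mr. Tompkins' dream, the same ball in the real world, and the electron in a hydrogen atom-- all come from the same relation, delta v >= h-bar / (m * delta x). A quick check with the lecture's rounded numbers:

```python
def min_speed(hbar, m, dx):
    """Minimum velocity uncertainty for a mass m confined to a region of size dx."""
    return hbar / (m * dx)

# dream world: hbar = 1 J*s, a 1 kg ball confined to a 0.3 m triangle
print(f"dream billiard:    dv >= {min_speed(1.0, 1.0, 0.3):.1f} m/s")
# real world: hbar ~ 1e-34 J*s, same ball
print(f"real billiard:     dv >= {min_speed(1e-34, 1.0, 0.3):.1e} m/s")
# hydrogen: electron mass ~1e-30 kg confined to ~1e-10 m
dv_e = min_speed(1e-34, 1e-30, 1e-10)
print(f"hydrogen electron: dv >= {dv_e:.0e} m/s, {100*dv_e/3e8:.1f}% of c")
```

In the dream the ball must rattle around at roughly 3 m/s; in reality the minimum speed is about 3 x 10^-34 m/s, which adds up to only a femtometer-scale drift over 100 billion years. The confined electron, however, must move at about a million meters per second.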
You expect that this light disappears, and when I cut in further, you see exactly what you expect, that this light disappears. And so the light spot there on that screen will become narrower and narrower and narrower. But then there comes a point that Heisenberg says, "Uh-uh, careful now, because your delta x, your knowledge, the accuracy in this direction where the light goes through, is now so high that I'm going to introduce an uncertainty in the momentum of that light. The momentum of that light is now no longer determined to infinite accuracy." And what that means is that if you start fooling around with the momentum of that light in the x direction, it no longer goes straight through but it goes off at an angle, and I will make you a more quantitative calculation for that. So let's look at this slit from above. Here's the slit, and the slit has an opening, delta x. And this delta x we're going to make smaller and smaller, and let us start with a delta x of about 1/10 of a millimeter, which is 10 to the minus 4 meters. I have light, I know the wavelength of the light, and I know that lambda equals h divided by p, according to de Broglie. I know the wavelength, I know h, and so I can calculate the momentum of that light. I have done that, take my word for it. It is about 10 to the minus 27 kilogram-meters per second. That's the momentum of the individual light photons. Think of them as particles, which you can do, according to de Broglie. So now I have a delta p-- the degree to which the momentum is undetermined, according to Heisenberg-- and it is going to be 10 to the minus 34 divided by delta x, which is 10 to the minus 4, so that is 10 to the minus 30, very small. But the momentum itself is 10 to the minus 27, so it's only one part in a thousand. So what will happen? If the light comes through here... And I now make a classical argument. I say, "This is the momentum of the light as it comes straight in." 
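The "take my word for it" step can be reproduced. The wavelength below is an assumption-- the lecture never states it-- but any visible wavelength gives a photon momentum of order 10^-27 kilogram-meters per second:

```python
# photon momentum from the de Broglie relation: p = h / lambda
h = 6.626e-34           # Planck's constant, J*s
lam = 650e-9            # m -- assumed red/visible wavelength (not given in the lecture)
p = h / lam
print(f"photon momentum: p = {p:.1e} kg*m/s")
```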
When it has to be squeezed through this narrow opening, Heisenberg's uncertainty principle demands that it is going to be undetermined, the momentum in this direction by roughly 10 to the minus 30, or more. Remember, it is always larger or equal. In other words, if I introduce, for instance, in this direction or in this direction, delta p, then I would expect that some of that light goes off in this direction. It is this change in momentum, this undeterminedness in momentum, that makes it go off at an angle, only in the x direction. If I have the slit like this, don't expect this to happen in this direction, because the uncertainty in the y direction, that's not the problem. Delta y is not very small, it's delta x that is very small, so it's this direction that's going to give you trouble. It's only in this direction that you know precisely where that light goes through. This direction is not the issue. So this angle theta can now be calculated very roughly. Theta is obviously delta p divided by p, so theta is very roughly 10 to the minus 3 radians, which is a fifteenth of a degree, and if you have at a distance L-- if this distance here is L-- if you have here a screen, then the spot on this screen... if I call that x at location L, then x at location L is obviously theta times L. And if theta is 10 to the minus 3-- and let's assume this is about 10 meters away from us, so L is about 10 meters-- then you get 10 to the minus 2 meters. That is one centimeter. One centimeter in this direction and one centimeter in that direction-- two centimeters. But when I make the slit width 10 times smaller, if I make the slit width only 1/100 of a millimeter, then this becomes 10 centimeters, because now I know delta x 10 times better, and so delta p is 10 times more uncertain. So now I expect to see here at least a smear of 20 centimeters and at least a smear of 20 centimeters there. 
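The two slit widths discussed above give exactly these smears; here is the arithmetic with the lecture's rounded values:

```python
hbar = 1e-34            # J*s (rounded)
p = 1e-27               # kg*m/s, photon momentum from the earlier estimate
L = 10.0                # m, distance from slit to screen

smears = {}
for dx in (1e-4, 1e-5):             # slit widths: 0.1 mm and 0.01 mm
    dp = hbar / dx                  # minimum momentum spread across the slit
    theta = dp / p                  # deflection angle in radians
    smears[dx] = theta * L          # one-sided smear on the screen
    print(f"slit {dx*1e3:.2f} mm: theta ~ {theta:.0e} rad, "
          f"smear ~ {smears[dx]*100:.0f} cm per side")
```

The 0.1 mm slit spreads the spot by about a centimeter on each side; narrowing it by a factor of ten widens the spread by the same factor, to about ten centimeters per side.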
So the absurdity is that a teeny-weeny little light source which in the beginning you will see as a very small spot... When I make this slit narrower and narrower, indeed, you will see that you will lose photons, and you will see this getting narrower and narrower, and then all of a sudden, it begins to spread out, and it begins to spread out, and by the time I'm close to a tenth of a millimeter, the light spot will be yay big. Very nonintuitive. You make the slit smaller, and the photons spread out. And I want to show that to you now. I have to make it very dark. And I need my flashlight, turn on the laser beam. There you see it. The slit is now all the way open. Yeah, it's all the way open, and I'm going to close the slit now slowly. And if you look closely, you will see that the... Let me also get my red laser, then I can point something out. You will see that the light will get squeezed in the horizontal direction. You can see already that the left side has a very sharp vertical cut-off, and the right side also. It's getting narrower, it's getting narrower. Getting narrower, but I'm nowhere near a tenth of a millimeter yet. It's getting clearly narrower. You see, it's getting narrower, it's getting narrower. If I look here... oh, I'm not yet at the tenth of a millimeter, but I'm getting there. I'm going slowly, squeezing it. I'm squeezing those photons. Those photons now are forced to go through an extremely narrow opening and Heisenberg is very shortly going to jump in and say, "You are going to pay a price for that. You know too well where those photons are in the x direction. The price you pay is that nature will now make the momentum undetermined in the x direction." And you begin to... you see it now. You really begin to see that the center portion is widening. Even photons appear. Here, you see some dark lines, which I will not further discuss today, but notice that the light is spreading. 
Of course, when I squeeze this slit, when I make it narrower, it's obvious that I lose light, because the light that hits the side of the slit is not going through, so the light intensity will go down. That's just inevitable. I used fewer photons. But look at this. There are photons here, there are photons there. It's at least 10 centimeters, this portion. From here to here is at least one foot. I squeeze more-- this is more than half a meter now. I squeeze more-- this is about one meter already. I squeeze even more. I close the slit now, and I will open it slowly. I'm opening it very slowly, and at the moment that it opens... Look at this! You see this? You see this wonderful streak? It looks more like a comet. From here to here is at least a meter. That's that center portion of the light. It has spread out, since the poor light was forced to go through this very narrow opening. Now I'm opening it more and more. I'm opening it more, and now, of course, the reverse is happening. Extremely nonintuitive. Now, not only have you seen quantum mechanics at work, in terms of electrons jumping between orbits, but you now have also seen one other very interesting consequence of quantum mechanics, which is Heisenberg's uncertainty principle. Now, the spreading of this light can very easily be explained without Heisenberg's uncertainty principle. In fact, it was known, even in the previous century, to a high degree of accuracy, why this happens, and the dark lines were very accurately explained. All I wanted to show is that the spreading of the light is entirely consistent with Heisenberg's uncertainty principle, and it better be, because it would not be possible, it would be inconceivable that you could do any experiment that would violate Heisenberg's uncertainty principle. 
And if this light that you would see on the screen there, if that light spot would get narrower and narrower and narrower and narrower all the time, as we would think classically, that would have been a violation of Heisenberg's uncertainty principle, and that is not possible. Now, there is no way in advance to predict which photons end up where. All you can do with quantum mechanics is to do the experiment with lots of photons and then you will get a certain distribution and the distribution will be exactly as you saw there. Quantum mechanics can never predict, on an individual photon, where it will end up. We saw that bright spot in the center. So if you did this experiment with one photon per day-- one photon per day going through this slit-- and you had a photographic plate there, and you would keep it there for months, and you would develop it, you would see the same pattern that you see there. This photon arrives today. Here arrives one tomorrow. Here arrives one the day after tomorrow. Here one the day after that, the day after that, the day after that, the day after that, the day after that, the day after that, and slowly you are beginning to see that pattern that you saw. So don't think that this interference pattern that you saw is the result of two photons going through the slit simultaneously-- not at all. You can do it with one photon at a time and you would see exactly the same thing. Now, this idea-- that you cannot in advance predict what a particular photon will do-- is a very nonclassical idea, and it rubs us all the wrong way because our classical way of thinking is-- and you are no different from my own feeling in this respect-- that if you do an experiment a hundred times in a controlled way, you should get a hundred times exactly the same result. Not so, says quantum mechanics. All that quantum mechanics will tell you is what the probability is that something will happen. No guarantees, but it is very good at predicting probabilities. 
Now, Einstein had great problems with this idea of not knowing precisely what would happen, and he had endless discussions with Bohr and others in which he tried to convince them that, because you couldn't predict what happened, something had to be wrong with quantum mechanics, and Einstein's famous words were, "God does not throw dice." This was his way of saying, "It is ridiculous that the outcome of a well-controlled experiment is uncertain." Now, almost nine decades have gone by since the beginning of quantum mechanics, and we now know that God-- if there is one-- does throw dice. However, God is bound to the rules of quantum mechanics and cannot violate Heisenberg's uncertainty principle. The light could not go straight through without spreading when I made the slit as narrow as I did. So quantum mechanics is a bizarre world that we rarely experience in our daily lives, because we are used to basketballs, baseballs, tennis balls. But yet it is the way the world ticks, and atoms and molecules can only exist because of quantum mechanics. That means you and I can only exist because of quantum mechanics. I hope that this will give you something to think about, but I warn you in advance, because if you start thinking about this, it will give you headaches and it will give you sleepless nights. And it has given me countless sleepless nights in the past, and even today, when I think about the consequences-- the bizarre consequences of quantum mechanics-- I still cannot comprehend it, I still cannot digest it and I still have headaches and sleepless nights. But it may be necessary to go through these sleepless nights if you want to eventually evolve as an independent thinking scientist, and I hope that someday all of you will. Thank you. [class applauds]
Derived Functors in Categories Enriched Over Abelian Monoids

EDIT: Are there references to the literature that work out derived functors for categories enriched over abelian monoids? (I narrowed down the question in the hope for an answer.)

Tags: reference-request, ct.category-theory

In my book, semigroups have associative multiplication (or let us say addition in the abelian case). Since he assumes a neutral element, I assume he means commutative monoid, in which case I find this question to be a very natural one. If Colin doesn't mean to include associativity as an axiom, I find the question far less natural. – Todd Trimble♦ Sep 18 '11 at 15:07
(Semigroups -- abelian or not -- are defined to be associative!) – Qfwfq Sep 18 '11 at 15:10
Yeah, I was confused because Colin forgot to list associativity as one of the axioms (unless he really meant the non-associative analogue!). – Harry Gindi Sep 18 '11 at 15:20
(In case my initial comment seems confusing, it was in response to a comment of Harry which he later deleted.) – Todd Trimble♦ Sep 18 '11 at 20:23
Yes, I do mean associative. Usually I find the nomenclature "monoid" confusing. I find the analogy between ring and semiring, and group and semigroup. – Colin Tan Sep 19 '11 at 2:47
8th EOG Math Vocabulary Goal 1 -- match each term to its definition:

- descending — we ____ to make numbers easier to work with
- absolute value — combination of more than one inequality
- simplify — comparison between different things
- ratio — ratio of circumference to length of the diameter
- square — name of math symbol =
- subsets — 1, 2, 3, 4, ... a, b, c, d, ... from lowest to highest
- scientific notation — integers, whole, and rational are ____ of real numbers
- rational numbers — 3.141592... is an example
- greater than — 3, 2, 1, ... order from highest to lowest
- denominator — end result of subtraction
- sum — name of math symbol >
- place value — name of math symbol <
- irrational numbers — one of two equal factors of a number, 16 = 4 or -4
- repeating decimal — distance of a number from zero on a number line
- ascending — depends on place or position of a digit
- estimation — top number of a fraction
- prime factorization — bottom number of a fraction
- difference — sign placed before another to denote a root is to be extracted
- integers — numbers in a/b form
- product — approximation of a quantity
- pi — end result of division
- quotient — decimal that never ends
- compound inequality — short-hand for writing very large or very small numbers
- odd — number raised to the second power
- equal to — includes rational and irrational number
- number line — end result of multiplication
- radical sign — whole number not divisible by two
- real numbers — line with equal distance marked off to represent numbers
- square root — examples are 25, 49, 144
- nonterminating decimal — number written as a product of primes
- less than — examples are 0.757575..., 0.333...
- perfect square — includes positives, zero, and negatives
- even — end result of addition
- numerator — whole number divisible by two
How to show that an entire function is a constant function using Liouville's Theorem?

November 29th 2011, 12:43 PM #1
Can anyone help me with understanding how to do this problem? We have: f is entire such that f(z) = f(z + 2Pi) and f(z) = f(z + 2*i*Pi) for all z in the complex plane. Need to show that f is constant. By Liouville's theorem, we only need to show f is bounded. There's a hint which says to restrict f to the square S = {z = x + iy : 0 <= x <= 2Pi, 0 <= y <= 2Pi}. My working so far: to show f is bounded, maybe look at x = 0, y = 0, x = 2Pi, y = 2Pi? I'm not sure what this would yield though :(

November 29th 2011, 12:54 PM #2
Re: How to show that an entire function is a constant function using Liouville's Theorem
Hint: The square S is a compact subset of the complex plane. What can we say about analytic functions on compact sets?
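A sketch of where the hint leads -- an outline rather than a polished proof:

```latex
Since $f(z)=f(z+2\pi)=f(z+2\pi i)$ for all $z$, every $w\in\mathbb{C}$ can be
written as $w = z + 2\pi m + 2\pi i\, n$ with $z$ in the square
$S=\{x+iy : 0\le x\le 2\pi,\ 0\le y\le 2\pi\}$ and $m,n\in\mathbb{Z}$,
so $f(w)=f(z)$: the values of $f$ on $\mathbb{C}$ are exactly its values on $S$.
The square $S$ is compact and $f$ is continuous (being entire), hence
$|f|\le M$ on $S$ for some $M$, and therefore $|f|\le M$ on all of $\mathbb{C}$.
By Liouville's theorem, a bounded entire function is constant.
```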
The Geometry/Topology Group

The members of the Geometry/Topology group pursue matters concerning the structures and properties of space in a general and abstract form. The main tools used are called (co)homology and homotopy. Symmetries of spaces form a structure called a Lie group, named after the Norwegian mathematician Sophus Lie. Manifolds are an important class of such spaces and can be thought of as higher-dimensional surfaces. Many important problems in physics and engineering sciences lead to differential equations residing in such spaces. The number of solutions of these equations depends on the topological and geometrical structure of the space. This is called index theory, which was the theme for the Abel prize awarded in 2004. The classification of 3-dimensional manifolds is connected with the Poincaré conjecture, recently solved by the Russian mathematician Perelman, earning him the Fields medal in 2006. Differentiable structures on manifolds were studied by J. Milnor, who in 2011 received the Abel prize for his contributions in this area. The Geometry/Topology group conducts research within the areas of analysis on loop spaces, algebraic topology, dynamical and complex systems, Lie theory and many-body problems, algebraic geometry and topological measures, and topology and data. Specific examples include the construction of elliptic cohomology, the use of higher order categories in topology and hyperstructures, integrability of many-body problems, moduli spaces and topological measures, and genomic data.