Physical Treatment - Sedimentation

Settling Basin Theoretical Design

For any clay-sized or larger particle in suspension in a fluid, the settling rate is a function of the gravitational force (downward) and the frictional resistance (opposite). Because the mass of a particle increases with the cube of the radius, but drag surface area increases only with the square of the radius, larger particles settle more quickly than small particles. Very small particles (such as the colloidal particles in milk) can be kept in suspension indefinitely by static charges and Brownian motion, so settling basins are ineffective at removing these particles. The essence of the design process is to determine a specific residence time, dependent on a particle-size removal goal. The sediment removed will include all particles with a settling velocity greater than Vc, plus that fraction of the slower (smaller) particles that enter low enough in the column to also settle to the sludge layer before passing out of the basin. In order to theoretically calculate the critical velocity of the smallest consistently removed particle using Stokes' law (valid for small Reynolds numbers), we must know the particle density, fluid viscosity, and the drag coefficient. In practice, a settling column experiment is often used to determine the settling velocity of the different fractions of a suspension, along with the mass of sediment in each fraction. Details on this theory, additional readings, and problems related to sedimentation basin design are available. The settling basin is sized for the smallest particle to be removed, using the following equation:

Vc = Q / A

where Q = flow rate, A = surface area of the basin, and Vc is the terminal settling velocity of the smallest particle for which settling is desired. Note that the size of basin required is a function of area but not depth, so shallow systems are most efficient (and are therefore sometimes stacked in municipal and industrial applications). In practice, we must adjust the resulting Q/A (= Vc) for the effects of inlet and outlet turbulence, non-uniform fluid flow, and sludge storage. Most settling basins are designed to achieve a 5- to 10-minute residence time, which should settle 50 to 75% of the solids from open feedlot runoff. At 6 minutes residence time, the nominal size of the smallest consistently separated particle is 35 microns, with a calculated terminal velocity Vc of 0.84 cm/sec (Lorimor, J., 1993, Iowa State Univ.). An empirical approach to settling basin design is outlined in Iowa State University's Livestock/Environment Home Study Series - Open Feedlot Runoff publication. Practical Notes on Settling Basin Design were developed to complement that approach, and discuss conversion to metric units among other issues.
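For readers who want to see the arithmetic, below is a minimal sketch of Stokes' law and the Q/A sizing rule. All numerical values (quartz particle density, water properties, flow rate) are illustrative assumptions, not figures from this page.

# Stokes' law settling velocity and basin sizing, SI units (assumed values).
g = 9.81          # gravitational acceleration, m/s^2
rho_p = 2650.0    # particle density, kg/m^3 (quartz sand, assumed)
rho_f = 1000.0    # fluid (water) density, kg/m^3
mu = 1.0e-3       # dynamic viscosity of water, Pa*s
d = 35e-6         # particle diameter, m (35 microns)

# Stokes' law terminal settling velocity (valid for small Reynolds numbers)
v_c = g * (rho_p - rho_f) * d**2 / (18 * mu)

# Size the basin from Q/A = Vc: particles settling faster than Vc are
# removed, regardless of basin depth.
Q = 0.05          # design flow rate, m^3/s (assumed)
A = Q / v_c       # required basin surface area, m^2

print(f"Vc = {v_c*100:.3f} cm/s, required area = {A:.1f} m^2")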
{"url":"http://www3.abe.iastate.edu/Ae573_ast475/Settling_Basin_Notes.htm","timestamp":"2014-04-16T16:00:09Z","content_type":null,"content_length":"7277","record_id":"<urn:uuid:e0b72495-f5c9-4696-95db-68f65a47d83f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Mammals Contain A Diploid Genome Consisting Of ... | Chegg.com

Mammals contain a diploid genome consisting of at least 10^9 bp. Each group of 200 bp of DNA is combined with 9 histones into a nucleosome. Each group of 6 nucleosomes is combined into a solenoid, achieving a final packing ratio of 50.

Part A What is the total number of nucleosomes in all fibers? Enter your answer as coefficient * 10^exponent (e.g., 2 * 10^3).

Part B What is the total number of histone molecules in this diploid genome? Enter your answer as coefficient * 10^exponent (e.g., 2 * 10^3).

Part C If the length of each base pair is 3.4 Å, and the packing ratio is 50, what is the combined length of all fibers? Enter your answer as coefficient * 10^exponent (e.g., 2 * 10^3).
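The arithmetic the three parts ask for can be sketched directly from the stated figures (not part of the original posting; it takes the diploid genome as 10^9 bp, as given):

genome_bp = 1e9                 # diploid genome size given in the problem
bp_per_nucleosome = 200
histones_per_nucleosome = 9
angstroms_per_bp = 3.4
packing_ratio = 50

nucleosomes = genome_bp / bp_per_nucleosome            # 5 * 10^6
histones = nucleosomes * histones_per_nucleosome       # 4.5 * 10^7
fiber_length = genome_bp * angstroms_per_bp / packing_ratio  # 6.8 * 10^7 Angstroms

print(nucleosomes, histones, fiber_length)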
{"url":"http://www.chegg.com/homework-help/questions-and-answers/mammals-contain-diploid-genome-consisting-least-109-bp-group-200-bp-dna-combined-9-histone-q3799875","timestamp":"2014-04-20T19:02:52Z","content_type":null,"content_length":"20588","record_id":"<urn:uuid:de9ca52a-f042-41e7-b996-d7632a6d763d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Straight Line Charge of Finite Length

September 21st 2011, 04:31 PM

Straight Line Charge of Finite Length

1. The problem statement, all variables and given/known data
Find the expression for the E field at an arbitrary point in space due to a straight line of length l uniformly charged with total charge Q. The ambient medium is air.

2. Relevant equations

3. The attempt at a solution
I am following this example with the solution given in my textbook and I am confused about one part. He somehow makes the following transition,

$\vec{E} = \frac{1}{4 \pi \epsilon_{0}} \int_{l} \frac{Q'\,dl}{R^{2}}\hat{R} = \frac{Q}{4\pi \epsilon_{0} l d} \int_{\theta_{1}}^{\theta_{2}} \left( \cos\theta\, \hat{i} - \sin\theta\, \hat{k} \right) d\theta$

Where does the ld in the denominator come from? From trig relationships it can be shown that

$\frac{1}{R^{2}} = \frac{d\theta}{dz} \frac{1}{d}$

which accounts for the d in the denominator, but where does the l come from? Is it that $l = \frac{dz}{d\theta}$? And if so, why? (That doesn't make any sense to me.)

Also, where does he get $\hat{R} = \cos\theta\, \hat{i} - \sin\theta\, \hat{k}$? Where does this come from?

Also, he mentions that $\theta$ ranges from $\theta_{1}$ to $\theta_{2}$, but $\theta_{1}$ is measured moving in a counterclockwise fashion, so shouldn't we conclude $\theta_{1} > 0$, and for $\theta_{2}$, moving in a clockwise fashion, $\theta_{2} < 0$? In his solution he has this reversed, so what am I mixing up?
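A likely resolution of the first question, assuming the usual setup (line along the z-axis, field point at perpendicular distance $d$, so $dl = dz$): the primed quantity is the linear charge density, $Q' = Q/l$. Then

$\frac{Q'\,dl}{R^{2}} = \frac{Q}{l}\cdot\frac{dz}{R^{2}} = \frac{Q}{l}\cdot\frac{d\theta}{d} = \frac{Q}{l\,d}\,d\theta$

so the $l$ comes from the charge density, not from $dz/d\theta$.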
{"url":"http://mathhelpforum.com/calculus/188531-straight-line-charge-finite-length-print.html","timestamp":"2014-04-18T03:30:17Z","content_type":null,"content_length":"6790","record_id":"<urn:uuid:d56ab9a9-ec7b-4c6a-9589-c7d5c4e08f9a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Twelve?

By Koenraad Elst

The present paper deals with a question of symbolism: what is so special about the number 12? Historically, the preference for the number 12 goes back to the Zodiac. Thus, the twelve-star flag of the European Union was designed, in a public contest, by a devotee of the Virgin Mary, who thought of the Apocalypse passage where a celestial virgin appears in a circle of twelve stars; and these "twelve stars", in Hebrew mazzalot (whence mazzel!, "good luck", originally "lucky star", "beneficial stellar configuration"), were a standard expression referring to the Zodiac, the division of the Ecliptic in twelve equal parts, each one of them represented by a symbol: Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, Pisces. Though we acknowledge the intimate connection between astrology and the symbolic structure of the Zodiac, it is outside the scope of this paper to comment on the merits claimed for astrology. Indeed, we assume that stellar lore including the Zodiac precedes its use as a tool for divination, and that it is worth analyzing purely as a symbolic construct, regardless of its use by diviners. Contrary to what some astrologers claim, astronomy is very much older than astrology. But unlike astrology, the natural tendency to read "faces in the clouds", or in this case images in the stellar groupings, is probably as old as stargazing itself.

Not-so-special properties of twelve

The relationship of the number 12 with other numbers is interesting, but not really unique. Thus, it is said that 12 = 3 x 4, with the added explanation that "3 represents time" while "4 represents space". All very good, but then 10 = 2 x 5, which is not bad either and just as pregnant with number symbolism. And note that in both cases, the factors when added (rather than multiplied) yield 7, that mystical number. So, for a unique property, we must look elsewhere. In number theory, we do meet 12 in intriguing places. It is the sum of the first three natural numbers satisfying Pythagoras's (actually Baudhayana's) theorem, 3² + 4² = 5², and it also figures in the next Pythagorean threesome: 5² + 12² = 13². In Fibonacci's series, the 12th number happens to be 12², or 144; it is the only number to have this property except for 1 (for the first power, the property is shared by the numbers 1 and 5, which stands at the 5th place; for the third power, there is none). There are twelve multiplications of natural numbers equalling 360 (1x360, 2x180, 3x120, 4x90, 5x72, 6x60, 8x45, 9x40, 10x36, 12x30, 15x24, 18x20). All very interesting, but less telling and unique than the properties of 12 conceived as a geometrical entity, viz. as the division of the circle into 12 equal parts.

How to divide the Ecliptic?

The Ecliptic can be divided into any number of zones. Well-known is the division into 27 or 28 moon stations of about 13° each, marking the angular distance covered daily by the moon. The division in lunar mansions links an astronomical phenomenon, the moon's movement, with a division of space. The same principle probably underlies the division in twelve: it seems to be based on the approximately twelve lunation cycles in the solar year (whose quarter-periods of roughly seven days may also be related to the division in weeks). But why should immutable space be subjected to divisions suggested by the coincidental and highly impermanent data of the moon's motion?
There could well have been no moon at all (as is the case for Venusians), or mankind could have come into existence and designed a Zodiac millions of years ago, when the moon was closer to the earth and its cycle as expressed in earthly days or fractions of earthly years much shorter. Freeing ourselves from the suggestions emanating from accidental circumstances, we want to construct a division of the circle based on nothing but the abstract circle itself, considered as a geometrical figure, hence part of a continuum of geometrical constructions. Which division of the circle is intrinsically most meaningful to the whole project of symbolically representing the diverse aspects of the universe with the sections of the ecliptical circle?

World models

The Zodiac is devised and understood as a world model, a simplification of the infinite complexity of the phenomenal world to a scheme with a finite number of elements, which nonetheless approximates the structure of the real world in that it embodies all worldly oppositions. In a rational world model (e.g. the four/five elements), if we have an element meaning "big" or "cold", then we must have one which means "small" c.q. "hot", just as in real-life natural cycles, a sunrise is counterbalanced with a sunset. The symmetry of a circle and of its rational divisions is already a good metaphor for this general symmetry requirement of credible world models. A world model replaces the practically infinite multiplicity of phenomena with a finite set, just as a regular polygon inscribed in a circle replaces the infinite division of the circle into infinitely small sections with a finite division into discrete and finite sections. If we study the surface of these polygons, we find that practically all of them, just like the circle itself (surface = pi, assuming radius = 1), have a surface numerically represented by a number reaching decimally into infinity (in practice represented by a finite sum involving at least one root), although the surface values of the polygons, unlike those of the circle, are not transcendental numbers (meaning numbers which can only be analyzed into infinite sums, e.g. pi = 4/1 - 4/3 + 4/5 - 4/7 ... ad infinitum).

For our project of replacing unmanageable infinity with more manageable finiteness, we find polygons with surface values consisting of a finite combination of roots and rational numbers a great improvement vis-à-vis transcendental numbers, but we would prefer polygons with even more finite and manageable measurements, viz. those which have a rational surface value. Best of all are those with a natural number as the magnitude of their surface. There are three of them: the bronze medal is for the inscribed square with surface = 2, silver for the circumscribed square with surface = 4, and gold for the inscribed dodecagon with surface = 3, the natural surface value most closely approaching pi. This way, the division into twelve is not just one in a series, it is quite special and corresponds in a neat metaphorical way with the whole project of devising a world model.

Squaring the circle

The second unique property of the division into twelve is that it somehow "squares the circle". At least, it bridges the gap between straight and circular, radius and circumference. As anyone who has studied trigonometry knows, the sine of 30° is ½. This means that, alone among the angles into which a circle can be divided, the angle of 30° combines a rational division of the circle (into 12, or of the quarter-circle into 3) with a rational division of the radius (into 2).
This is a truly unique intrinsic property of the division into 12.

Effortlessly dividing the circle

A third special property of the division in 12 is that it is the most natural division of the circle, i.e. the one which does not require any other data (c.q. geometrical instruments and magnitudes) to get constructed except those already used in the construction of the circle itself, viz. a compass width equal to the radius. If one constructs a same-size circle with any point of the first circle's perimeter as the centre, one obtains a perimeter passing through the first circle's centre and intersecting its perimeter twice at 60° of the new circle's centre. Next, these two intersection points become the centres of new same-size circles, and so on. The result is a set of six same-size circles symmetrically distributed around the original circle, with 13 intersection points: one in the original centre, six on the original perimeter at 60° intervals, and six outside the circle. The straight lines connecting the latter six with the original centre intersect the original perimeter exactly halfway in the said 60° intervals. This way, the circle is neatly divided in 12 x 30°. Moreover, this entirely natural construction reveals a specific structure: the division in alternating "positive" and "negative" signs, being the intersection points on c.q. outside the original perimeter. The twelve intersection points can also be connected to form the pattern known as Sri Cakra or Magen David, i.e. a straight-standing triangle intertwined with an inverted triangle. Possibly more special geometrical facts can be mustered to show that the division into 12 (which coincidence may be credited with suggesting, viz. through the moon's motion) has a more profound, more stable and more universal mathematical basis.
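The polygon-area claims above are easy to check numerically (a quick sketch; the area of a regular n-gon inscribed in a circle of radius r is (n/2) r² sin(2π/n)):

import math

def inscribed_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r."""
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

print(inscribed_area(4))             # 2.0  -- inscribed square
print(inscribed_area(12))            # ~3.0 -- inscribed dodecagon
print(2.0**2)                        # 4.0  -- circumscribed square (side 2r)
print(math.sin(math.radians(30)))    # 0.5  -- the sin 30 deg = 1/2 property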
{"url":"http://koenraadelst.bharatvani.org/articles/misc/whytweve.html","timestamp":"2014-04-18T20:43:36Z","content_type":null,"content_length":"22240","record_id":"<urn:uuid:335040ca-6f35-4b36-bfb4-16a59a45cd43>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Class Size2DSyntax is an abstract base class providing the common implementation of all attributes denoting a size in two dimensions. A two-dimensional size attribute's value consists of two items, the X dimension and the Y dimension. A two-dimensional size attribute may be constructed by supplying the two values and indicating the units in which the values are measured. Methods are provided to return a two-dimensional size attribute's values, indicating the units in which the values are to be returned. The two most common size units are inches (in) and millimeters (mm), and exported constants INCH and MM are provided for indicating those units. Once constructed, a two-dimensional size attribute's value is immutable. A two-dimensional size attribute's X and Y dimension values are stored internally as integers in units of micrometers (µm), where 1 micrometer = 10^-6 meter = 1/1000 millimeter = 1/25400 inch. This permits dimensions to be represented exactly to a precision of 1/1000 mm (= 1 µm) or 1/100 inch (= 254 µm). If fractional inches are expressed in negative powers of two, this permits dimensions to be represented exactly to a precision of 1/8 inch (= 3175 µm) but not 1/16 inch (because 1/16 inch does not equal an integral number of µm). Storing the dimensions internally in common units of µm lets two size attributes be compared without regard to the units in which they were created; for example, 8.5 in will compare equal to 215.9 mm, as they both are stored as 215900 µm. For example, a lookup service can match resolution attributes based on equality of their serialized representations regardless of the units in which they were created. Using integers for internal storage allows precise equality comparisons to be done, which would not be guaranteed if an internal floating point representation were used. Note that if you're looking for U.S. letter sized media in metric units, you have to search for a media size of 215.9 x 279.4 mm; rounding off to an integral 216 x 279 mm will not match. The exported constant INCH is actually the conversion factor by which to multiply a value in inches to get the value in µm. Likewise, the exported constant MM is the conversion factor by which to multiply a value in mm to get the value in µm. A client can specify a resolution value in units other than inches or mm by supplying its own conversion factor. However, since the internal units of µm was chosen with supporting only the external units of inch and mm in mind, there is no guarantee that the conversion factor for the client's units will be an exact integer. If the conversion factor isn't an exact integer, resolution values in the client's units won't be stored precisely.
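To illustrate the storage scheme described above (this is just a sketch of the arithmetic, not the actual Java class):

INCH = 25400   # micrometers per inch
MM = 1000      # micrometers per millimeter

def to_um(value, units):
    """Store a dimension as an integer count of micrometers."""
    return round(value * units)

# 8.5 in and 215.9 mm become the same integer, so they compare equal:
assert to_um(8.5, INCH) == to_um(215.9, MM) == 215900

# 1/16 inch is not an integral number of micrometers, so it cannot be
# represented exactly:
print((1/16) * INCH)   # 1587.5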
{"url":"http://docs.oracle.com/javase/7/docs/api/javax/print/attribute/Size2DSyntax.html","timestamp":"2014-04-20T11:11:08Z","content_type":null,"content_length":"31513","record_id":"<urn:uuid:302ee479-9a3b-4983-869e-8dfb5ff896b2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Value of an Inverse Trig Function

Date: 05/08/2000 at 20:01:16
From: Annette Granados
Subject: Inverse Trigonometric Functions

Find the value for cot(arcsin(4/5)).

I don't understand how to use the formulas for this function.

Date: 05/09/2000 at 08:01:08
From: Doctor Jerry
Subject: Re: Inverse Trigonometric Functions

Hi Annette,

I'll use arcsin(x) for the inverse sine function. We want the cotangent of arcsin(4/5), right? Well, first, arcsin(4/5) lies between 0 and pi/2. We know this from the definition of the arcsine function.

cot(y) = cos(y)/sin(y).

sin(arcsin(4/5)) = 4/5

cos(arcsin(4/5)) = sqrt(1 - (4/5)^2) = 3/5, taking the positive root since the angle lies in the first quadrant.

cot(arcsin(4/5)) = (3/5)/(4/5) = 3/4.

You could also draw a small right triangle ABC, with a right angle at C. Think of A as the angle whose sine is 4/5. We can put a 5 on the hypotenuse AB and a 4 on BC, because the sine of A is side opposite over hypotenuse. We see that AC is 3. So, cot(A) = 3/4.

- Doctor Jerry, The Math Forum
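A quick numerical check of the answer (not part of the original exchange):

import math

theta = math.asin(4/5)                      # lies between 0 and pi/2
print(math.cos(theta) / math.sin(theta))    # 0.75, i.e. 3/4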
{"url":"http://mathforum.org/library/drmath/view/54144.html","timestamp":"2014-04-16T06:11:02Z","content_type":null,"content_length":"5842","record_id":"<urn:uuid:fca49790-b898-44db-914f-7b9605a53319>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Re: how to generate piecewise function from two lists
Replies: 1    Last Post: Apr 30, 2013 4:17 AM

Re: how to generate piecewise function from two lists
Posted: Apr 28, 2013 5:17 AM by Ray Koopman

Piecewise[{#1, #2[[1]] < x < #2[[2]]}& @@@ Transpose@{f, Partition[r, 2, 1]}]

On Sat, 27 Apr 2013 at 10:00 PM, S <dsalman96@gmail.com> wrote:

> Hello
> Suppose we have two lists
> f={a,b}
> r={0,1,2}
> In general, the lengths of f and r can vary, but the length of f is always 1 less than the length of r.
> Using the lists f and r, I want to define a piecewise function of the form:
> Piecewise[{{a,0<x<1},{b,1<x<2}}] i.e., having the form
> Piecewise[{{f[[1]],r[[1]]<x<r[[2]]},{f[[2]],r[[2]]<x<r[[3]]}}]
> Is there a way to define a piecewise function using f and r for the general case, as described above?
> Thanks
> S
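For readers outside Mathematica, the same pairing idea — walk the breakpoints in overlapping pairs and attach each value to its interval — looks like this in Python (a sketch, not part of the original thread):

f = ["a", "b"]
r = [0, 1, 2]

# zip(r, r[1:]) plays the role of Partition[r, 2, 1]: overlapping pairs
# of consecutive breakpoints.
pieces = list(zip(f, zip(r, r[1:])))
# -> [("a", (0, 1)), ("b", (1, 2))]

def piecewise(x):
    for value, (lo, hi) in pieces:
        if lo < x < hi:
            return value
    return None  # outside every interval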
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2450228","timestamp":"2014-04-21T02:26:05Z","content_type":null,"content_length":"17815","record_id":"<urn:uuid:b353f3be-e82b-4acf-b4ac-13149b1a85af>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
About Profits and Lots

Author: Terry Trotter
Description: Algebra, difficulty level 1. A person buys some land, keeps some for himself, divides the rest into lots, and sells them, thus making a profit.

Please Note: Use of the following materials requires membership. Please see the Problem of the Week membership page for more information.

Problem page: /library/go.html?destination=1216
Solution page: Problem #1216
{"url":"http://mathforum.org/library/problems/more_info/15733","timestamp":"2014-04-18T13:58:12Z","content_type":null,"content_length":"5558","record_id":"<urn:uuid:011a5187-db01-4ed9-badb-fa01237a119a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Rockefeller and the Internationalization of Mathematics Between the Two World Wars: Documents and Studies for the Social History of Mathematics in the 20th Century

The Rockefeller Foundation was established in 1913 with a broad philanthropic mission that did not explicitly include mathematics. As the Foundation evolved in the 1920s, however, mathematics made inroads into what was becoming the modern world of foundation — and later governmental — support for scientific research. These inroads stemmed from the Foundation's International Education Board (IEB), created in 1923 with a mission both to provide fellowships for young scientists internationally and to support the infrastructure of science through capital grants for building and maintaining research institutes. As mathematicians like George David Birkhoff and Oswald Veblen came increasingly to advise IEB officials, mathematics began to benefit from Rockefeller philanthropy. Moreover, given the international focus of the Board, this philanthropy contributed in complex ways to the internationalization of science in general and of mathematics in particular. It is precisely this thorny historical problem of the Foundation's role in the internationalization of mathematics between the two World Wars that Reinhard Siegmund-Schultze confronts in his meticulously researched and abundantly illustrated book. The story begins in 1923 when Wickliffe Rose, originally a professor of philosophy and history but after 1910 a Rockefeller functionary, assumed the presidency of Rockefeller's General Education Board and insisted on the simultaneous foundation of an International Education Board. His wish was granted; he became head of both new boards; and he began the process of setting the agenda for the IEB. His white paper of April 1923, "Scheme for the Promotion of Science on an International Scale," focused first and foremost on fellowships and secondarily on institutional grants, but Rose recognized that ideas on paper were one thing, the actual needs of science internationally another. To inform himself more fully of the situation in Europe and to apprise the Europeans of the new Rockefeller initiative, Rose traveled to nineteen different countries between December 1923 and April 1924, talking with scientists of note and generally observing the state of science. Among the mathematicians he consulted at Birkhoff's suggestion were Émile Borel in France, G. H. Hardy in England, Tullio Levi-Civita in Italy, and Gösta Mittag-Leffler in Sweden; Rose also spoke with Hermann Weyl, who was then in Zürich, about the state of mathematical affairs in Germany. His discussions with these and other mathematicians helped him not only formulate an international slate of potential fellowship candidates but also determine viable places for American mathematicians to further their studies abroad. In his ongoing efforts to assess the international scientific scene, moreover, Rose continued to consult with specialists in the various fields. In mathematics those consultants were men like E. H. Moore at the University of Chicago and his two students, Veblen at Princeton and Birkhoff at Harvard. Rose asked them, among other things, to identify the leaders in the field internationally. From these lists submitted early in 1926, the IEB drafted a map of the "Relative Standing of Mathematical Centers of Europe and Numbers of Outstanding Men at Each," which showed Göttingen, Paris, and Rome of roughly equal strength [p. 44].
When Birkhoff toured Europe from February through September 1926 as a "traveling professor" funded by and reporting to the IEB, he submitted a somewhat different assessment: the top countries in mathematics internationally were, first, Germany, followed by the United States, France, Italy, and England, while the most important mathematical center was Paris followed by Rome and Göttingen. As Siegmund-Schultze remarks, "Birkhoff's report of September 1926 on his trip, as well as his assessments of European mathematics as of the mid-twenties reveal the growing independence and self-confidence of American mathematics" [p. 56]. The report — in addition to over a dozen revealing and previously unpublished archives — is reproduced in full in one of the book's seventeen appendices. After setting the stage in Chapter 2 with the early history of the IEB and with an account of the involvement of mathematicians in setting its agenda, Siegmund-Schultze moves on to look more closely in Chapter 3 at the "General Ideological and Political Positions Underlying the IEB's Activities." A key Rockefeller operative in shaping these positions was the physicist, Augustus Trowbridge, the head of the IEB office in Paris that oversaw the IEB's activities in Europe. Trowbridge conceived of Europe in terms of scientifically advanced and scientifically backward countries and often found the IEB caught between the objectives of supporting the best science and helping the scientifically backward. Moreover, Trowbridge and other IEB officials also saw their mission as one of spreading, through their fellowship program, American values like energy, hard work, the equal treatment of workers, and the decentralization of science at the same time that they implemented new policies like the concept of matching funds. This sociological component of foundation support, Siegmund-Schultze argues, was one of several factors contributing to the internationalization — or perhaps to the Americanization — of mathematics between the wars. Up to this point, the narrative focuses primarily on the Rockefeller institutions, their development and philosophy. In Chapter 4, the book's longest and most archivally driven chapter, the emphasis shifts to the people who actually held the fellowships. Through a painstaking reading of fellowship files, Siegmund-Schultze teases out the largely unwritten practice of the Rockefeller philanthropies in their support for mathematics. What were the selection criteria for choosing fellows? How did (or didn't) they differ for applicants from different countries? Were applicants from certain countries favored over applicants from other countries and, if so, why? What, if any, strings came attached to Rockefeller support? What, if any, mathematical areas or styles of mathematical research were favored in the selection process? How did (or didn't) the granting of Rockefeller fellowships in particular fields shape mathematics from a technical, cognitive point of view? How did (or didn't) their trips abroad affect the fellows' subsequent careers? What sorts of attitudes did the fellows encounter during their fellowship periods? What were the particular challenges faced by women fellows? These and other questions are explored in this rich but somewhat fragmented chapter, a chapter that might have benefitted from a more fully developed, more synthetic conclusion. Although they formed the major focus of the IEB's activities, fellowships were not the only type of foundation funding.
Chapter 5 looks closely at Rockefeller involvement in the late 1920s both in the building of the Mathematics Institute in Göttingen and in the construction of the Institut Henri Poincaré in Paris. These two projects were undertaken with different objectives in mind. In contributing to the institute in Germany, the IEB aimed to maintain (especially in the aftermath of World War I) the high mathematical standards that had been achieved there by the turn of the twentieth century, while it supported the Paris institute project in an effort to bring French mathematics into what was becoming the international mainstream. Both of these institute projects reached completion just a few years prior to the seizure of power by the Nazis in Germany in 1933. The radically altered political situation in Europe reflected itself in waves of émigrés seeking asylum elsewhere. In his sixth and final chapter, Siegmund-Schultze examines the impact on mathematics of the Rockefeller Foundation's Emergency Program for placing displaced scholars. Much has been written on the topic of scientific and mathematical refugees, but Siegmund-Schultze keeps his focus on the Foundation, its attitudes, its policies, and its cooperation with the American mathematical community and leaders like Veblen and Roland G. D. Richardson. In particular, he uses the cases of the German statistician, Emil Gumbel, and of the French mathematician, André Weil, to explore some of the conflicting attitudes within the Rockefeller Foundation and within American society at large toward Jews and political dissenters. The book closes with a mere three-page "Epilogue" that could rather have been a true concluding chapter to a book that raises so many fascinating and complex issues. Still, Siegmund-Schultze has provided us with a wealth of data, a bounty of archival material, and much to think about as we continue to grapple with the social history of mathematics in the twentieth century. Karen Hunger Parshall is Professor of History and Mathematics at the University of Virginia. Together with Adrian C. Rice, she has recently coedited the book, Mathematics Unbound: The Evolution of an International Mathematical Research Community, 1800-1945 (Providence: American Mathematical Society and London: London Mathematical Society, 2002).
{"url":"http://www.maa.org/publications/maa-reviews/rockefeller-and-the-internationalization-of-mathematics-between-the-two-world-wars-documents-and","timestamp":"2014-04-16T18:32:40Z","content_type":null,"content_length":"104486","record_id":"<urn:uuid:66c7a350-09cf-4e3d-8119-8a568779ae49>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Paramus Science Tutor

...EDUCATION BACKGROUND/TUTORING EXPERIENCE I hold a Juris Doctor from DePaul University College of Law and a Bachelor of Science in Chemical Engineering from the University of Illinois at Chicago (UIC). I have intensive knowledge in Physics, Chemistry, and Mathematics and enormous teaching and tuto...
13 Subjects: including physics, algebra 1, algebra 2, calculus

...Good luck with the studying! I studied Physics with Astronomy at undergraduate level, gaining a master's degree at upper 2nd class honors level (approx. 3.67 GPA equivalent). I then proceeded to complete a PhD in Astrophysics, writing a thesis on Massive star formation in the Milky Way Galaxy usin...
8 Subjects: including physics, astronomy, geometry, algebra 1

...I also have extensive background in the psychology of learning, moral development and human development. Let me know if I can be of help. Best, Rhonda Sarrazin I have over 23 years of successful teaching experience in elementary education, teaching grades K-12.
32 Subjects: including biology, vocabulary, grammar, geometry

I have experience as a writing tutor and will help with paper writing, grammar and all elements of the writing process. I am also certified to teach elementary school grades K-5 so I am able to help with all subjects. I have a degree in English literature and am well versed in American literature as well.
18 Subjects: including archaeology, reading, writing, English

...I shall help you answer questions quickly on standardized exam questions about Organic Chemistry and General Chemistry and work around your schedule. Students enjoy my friendly personality and teaching style. I have a Ph.D. from Cambridge University, UK.
2 Subjects: including chemistry, organic chemistry
{"url":"http://www.purplemath.com/paramus_nj_science_tutors.php","timestamp":"2014-04-20T16:18:09Z","content_type":null,"content_length":"23589","record_id":"<urn:uuid:0953cb58-d21b-4b91-ab35-b7308fdd46be>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Sequential, machine-independent characterizations of the parallel complexity classes ALOGTIME

- COMPUTATIONAL COMPLEXITY, 1992
"We give a recursion-theoretic characterization of FP which describes polynomial time computation independently of any externally imposed resource bounds. In particular, this syntactic characterization avoids the explicit size bounds on recursion (and the initial function 2^{|x||y|}) of Cobham."
Cited by 179 (7 self)

, 1992
"The purpose of this thesis is to give a "foundational" characterization of some common complexity classes. Such a characterization is distinguished by the fact that no explicit resource bounds are used. For example, we characterize the polynomial time computable functions without making any direct reference to polynomials, time, or even computation. Complexity classes characterized in this way include polynomial time, the functional polytime hierarchy, the logspace decidable problems, and NC. After developing these "resource free" definitions, we apply them to redeveloping the feasible logical system of Cook and Urquhart, and show how this first-order system relates to the second-order system of Leivant. The connection is an interesting one since the systems were defined independently and have what appear to be very different rules for the principle of induction. Furthermore it is interesting to see, albeit in a very specific context, how to retract a second order statement, ("inducti..."
Cited by 45 (3 self)

- Computational Complexity, 1994
"Abstract. The main results of this paper are recursion-theoretic characterizations of two parallel complexity classes: the functions computable by uniform bounded fan-in circuit families of log and polylog depth (or equivalently, the functions bitwise computable by alternating Turing machines in log and polylog time). The present characterizations avoid the complex base functions, function constructors, and a priori size or depth bounds typical of previous work on these classes. This simplicity is achieved by extending the "tiered recursion" techniques of Leivant and Bellantoni & Cook. Key words. Circuit complexity; subrecursion. Subject classifications. 68Q15, 03D20, 94C99."
Cited by 14 (4 self)

, 1996
"We define the sharply bounded hierarchy, SBH(QL), a hierarchy of classes within P, using quasilinear-time computation and quantification over values of length log n. It generalizes the limited nondeterminism hierarchy introduced by Buss and Goldsmith, while retaining the invariance properties. The new hierarchy has several alternative characterizations. We define both SBH(QL) and its corresponding hierarchy of function classes, FSBH(QL), and present a variety of problems in these classes, including ql-m-complete problems for each class in SBH(QL). We discuss the structure of the hierarchy, and show that certain simple structural conditions on it would imply P ≠ PSPACE. We present characterizations of SBH(QL) relations based on alternating Turing machines and on first-order definability, as well as recursion-theoretic characterizations of function classes corresponding to SBH(QL)."
Cited by 5 (3 self)

- Theory of Computing Systems, 1998
"We define the sharply bounded hierarchy, SBH(QL), a hierarchy of classes within P, using quasilinear-time computation and quantification over strings of length log n. It generalizes the limited nondeterminism hierarchy introduced by Buss and Goldsmith, while retaining the invariance properties. The new hierarchy has several alternative characterizations. We define both SBH(QL) and its corresponding hierarchy of function classes, and present a variety of problems in these classes, including ql-m-complete problems for each class in SBH(QL). We discuss the structure of the hierarchy, and show that determining its precise relationship to deterministic time classes can imply P ≠ PSPACE. We present characterizations of SBH(QL) relations based on alternating Turing machines and on first-order definability, as well as recursion-theoretic characterizations of function classes corresponding to SBH(QL)."
Cited by 4 (0 self)

"In this note we characterize iterated log depth circuit classes LD^i and ND^i by Cobham-like bounded recursion schemata. We also give alternative characterizations which utilize the safe recursion method developed by Bellantoni and Cook. 1. Introduction. The search for recursion-theoretic characterizations of various complexity classes was begun by A. Cobham [Cob], who characterized the class of polynomial time computable functions by a scheme now called bounded recursion on notation. (See also [Ro] for the proof.) The essence of this recursion scheme is twofold: firstly, on input x the recursive call is made |x|^{O(1)} times, where |x| is the length of x, and secondly, the growth rate is bounded by a previously defined polynomial time function. The second condition is crucial for the characterization of resource bounded computations, since the computation on each recursive call takes the value of the function as an argument, so the number of steps that each recursive ..."
Cited by 1 (1 self)
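The "bounded recursion on notation" scheme the last abstract describes can be rendered as a toy sketch (names and structure are illustrative, not Cobham's formal definition):

def bounded_rec_on_notation(base, step, bound):
    """Define f by recursion on the binary notation of x.

    f(0) = base; f(x) = min(step(x, f(x // 2)), bound(x)) for x > 0.
    There is one recursive call per bit of x, so about |x| calls, and the
    value is clamped by a previously defined bound, as the scheme requires.
    """
    def f(x):
        if x == 0:
            return base
        return min(step(x, f(x // 2)), bound(x))
    return f

# Example: length of x in bits, trivially bounded by x + 1.
length = bounded_rec_on_notation(0, lambda x, prev: prev + 1, lambda x: x + 1)
print(length(12))  # 4, since 12 = 0b1100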
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2883720","timestamp":"2014-04-17T22:57:15Z","content_type":null,"content_length":"28136","record_id":"<urn:uuid:266c960b-8856-4318-9acd-c16f271ea86d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Plainfield, NJ Geometry Tutor Find a Plainfield, NJ Geometry Tutor ...I believe that learning is an ongoing process in one’s life, and I would like to share my experiences with you and help you avoid unnecessary detours, to archive a maximum result in a minimum amount of time. Second, learning should be a fun and rewarding experience. If you need some help to lea... 9 Subjects: including geometry, algebra 1, algebra 2, SAT math ...I think I'm pretty friendly and outgoing. You should find my rate fair, my approach to learning effective, and my knowledge of the material to be exceptional.Calculus is one of my favorite subjects to teach. It's the gateway to more advanced mathematics and the foundation for many other courses in mathematics and science. 28 Subjects: including geometry, physics, GRE, calculus ...Currently, I teach at the college level at a prestigious NYC College in midtown. My primary areas of focus were computers and history. As a history teacher, I am familiar with the requirements for good structured, coherent writing and reading. 21 Subjects: including geometry, reading, writing, algebra 2 ...I've been playing the saxophone for 10+ years. I began studying in fifth grade, via the alto saxophone. By seventh grade, I was playing both the tenor and baritone sax and participating in an advanced jazz ensemble. 11 Subjects: including geometry, reading, writing, algebra 1 ...I absolutely love teaching. I am currently student teaching in Lawrence High School this semester. I believe in helping students learn mathematical concepts and develop relational understandings, rather than just memorize procedure. 27 Subjects: including geometry, reading, Spanish, statistics Related Plainfield, NJ Tutors Plainfield, NJ Accounting Tutors Plainfield, NJ ACT Tutors Plainfield, NJ Algebra Tutors Plainfield, NJ Algebra 2 Tutors Plainfield, NJ Calculus Tutors Plainfield, NJ Geometry Tutors Plainfield, NJ Math Tutors Plainfield, NJ Prealgebra Tutors Plainfield, NJ Precalculus Tutors Plainfield, NJ SAT Tutors Plainfield, NJ SAT Math Tutors Plainfield, NJ Science Tutors Plainfield, NJ Statistics Tutors Plainfield, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Plainfield_NJ_Geometry_tutors.php","timestamp":"2014-04-20T23:41:08Z","content_type":null,"content_length":"24039","record_id":"<urn:uuid:187294ae-1ddb-4a30-8a9f-860eab0cec29>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Identity Crisis

Some Are Less Equal Than Others

Numbers aren't the only things that can be equal or unequal. Most programming languages also have equality operators for other simple data objects, such as alphabetic characters; thus a = a but a ≠ b. (Whether a = A is a matter up for debate.) Sequences of characters (usually called strings) are also easy to compare. Two strings are equal if they consist of the same characters in the same sequence, which implies the strings also have the same length. Hence an equality operator for strings simply marches through the two strings in parallel, matching up the characters one by one. Certain other data structures, such as arrays, are handled in much the same way. But one important kind of data structure can be problematic. The most flexible way of organizing data elements is with links, or pointers, from one item to another. For example, the symbols a, b and c might be linked into the list a → b → c → nil, where nil is a special value that marks the end of a chain of pointers. Comparing two such structures for equality is straightforward: Just trace the two chains of pointers, and if both reach nil at the same time without having encountered any discrepancies along the way, they are identical. The pointer-following algorithm works well enough in most cases, but consider a structure such as a → b → c → a, where the last pointer loops back to the first node. An algorithm that attempts to trace the chain of pointers until reaching nil will never terminate, and so structural equality will never be decided. This problem can be solved—the workaround is to lay down a trail of breadcrumbs as you go, and stop following the pointers as soon as you recognize a site you've already visited—but the technique is messy. There's something else inside the computer that's remarkably hard to test for equality: programs. Even in the simplest cases, where the program is the computational equivalent of a mathematical function, proving equality is a challenge. A function is a program that accepts inputs (called the arguments of the function) and computes a value, but does nothing else to alter the state of the computer. The value returned by the function depends only on the arguments, so that if you apply the function to the same arguments repeatedly, it always returns the same value. For example, f(x) = x^2 is a function of the single argument x, and its returned value is the square of x. A given function could be written as a computer program in many different ways. At the most trivial level, f(x) = x^2 might be replaced by f(y) = y^2, where the only change is to the name of the variable. Another alternative might be f(x) = x · x, or perhaps f(x) = exp(2 log(x)). It seems reasonable to say that two such functions are identical if they return the same value when applied to the same argument. But if that criterion were to serve as a test of function equality, you would have to test all possible arguments within the domain of the function. Even when the domain is not infinite, it is often inconveniently large. The alternative to such an "extensional" test of equality is an "intensional" test, which tries to prove that the texts of the two programs have the same meaning. Fabricating such proofs is not impossible—optimizing compilers do it all the time when they substitute a faster sequence of machine instructions for a slower one—but it is hardly a straightforward task.
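The breadcrumb workaround mentioned above can be made concrete with a small sketch (the Node type here is hypothetical, not from the article):

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def equal(a, b, seen=None):
    """Structural equality for linked lists that may contain cycles."""
    if seen is None:
        seen = set()
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    if (id(a), id(b)) in seen:   # breadcrumb: this pair is already under
        return True              # comparison, so stop following pointers
    seen.add((id(a), id(b)))
    return a.value == b.value and equal(a.next, b.next, seen)

# Two cyclic lists a -> b -> c -> a compare equal without looping forever.
x = Node("a"); x.next = Node("b", Node("c", x))
y = Node("a"); y.next = Node("b", Node("c", y))
print(equal(x, y))   # True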
When you go beyond programs that model mathematical functions to those that can modify the state of the machine, proving the equality of programs is not just hard but undecidable. That is, there is no algorithm that will always yield the right answer when asked whether two arbitrary programs are equivalent. (For a thorough discussion of program equivalence, see Richard Bird's book Programs and Machines.)
{"url":"http://www.americanscientist.org/issues/pub/identity-crisis/4","timestamp":"2014-04-21T10:28:57Z","content_type":null,"content_length":"128259","record_id":"<urn:uuid:13315e76-cdbd-480e-9e8f-b3c8d0137d81>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: May 2000 [00308]

AW: ContourPlots, DensityPlots

• To: mathgroup at smc.vnet.net
• Subject: [mg23612] AW: [mg23558] ContourPlots, DensityPlots
• From: Wolf Hartmut <hwolf at debis.com>
• Date: Wed, 24 May 2000 02:16:12 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com

-----Original Message-----
From: Paul Hoke [SMTP:hokepaul at pilot.msu.edu]
Sent: Saturday, May 20, 2000 09:11
To: mathgroup at smc.vnet.net
Subject: [mg23558] ContourPlots, DensityPlots

Anybody have a lot of experience with ListContourPlot and ListDensityPlot?

I have a matrix of data I want to plot and show for a presentation. The problems I am having are as follows: with ColorFunctionScaling->True, it doesn't seem that the colors have anything to do with the actual values; if I use a legend and use ColorFunction->Hue, the scale on the plot doesn't match the legend, since the data is truncated to fit 0-1. I'm trying to divide by the largest value, since all my data is positive, and then the legend color scheme should fit the data plot, except I need a zero in my data to peg the lower end. I hate to add a zero in my matrix just to fix the lower end of the color scheme; is that the only way?

I can delineate which contours I want, but I can't label them. Is there any way to print the values of the contours? That is, on the plot, have each contour marked so that it isn't just a bunch of lines?

Dear Paul,

I needed some guessing . . . but perhaps this example might help you:

Let's define some data

data = Table[Sin[x y] Cos[x] + 2, {y, 0, Pi, 0.2}, {x, 0, 2 Pi, 0.2}];

{Min[data], Max[data]}
{1.00674, 2.9862} (roughly between 1 and 3)

We color the density plot in a certain way

p = ListDensityPlot[data, MeshRange -> {{0, 2 Pi}, {0, Pi}},
  ColorFunction -> (Hue[#/3] &), ColorFunctionScaling -> False]

So 1 corresponds to Hue[1/3] (green) and 3 corresponds to Hue[1] (red); all other values are in between (blue, violet, no yellow or orange). This is reflected by the legend:

<< Graphics`Legend`
ShowLegend[p, {Hue[(2 # + 1)/3] &, 5, " 1", "3", LegendPosition -> {1.1, -.4}}]

Why that (seemingly) different color function? Within the legend the color function is probed between 0 and 1 (in 5 steps here). So all we have to do is to linearly map the interval {0, 1} to {min, max} of our applied color function (when ColorFunctionScaling -> False). If you prefer to have the color scale at the legend reversed, just do

ShowLegend[p, {Hue[(3 - 2 #)/3] &, 7, " 3", "1", LegendPosition -> {1.1, -.4}}]

Kind regards, Hartmut
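The same fix translates outside Mathematica too: pin the color scale to the known data range instead of letting the library rescale it to 0-1. A Python/matplotlib analogue of ColorFunctionScaling -> False (not part of the original thread):

import numpy as np
import matplotlib.pyplot as plt

y, x = np.mgrid[0:np.pi:0.2, 0:2*np.pi:0.2]
data = np.sin(x * y) * np.cos(x) + 2          # roughly between 1 and 3

# vmin/vmax fix the mapping value -> color, so the colorbar (the "legend")
# shows true data values instead of a rescaled 0-1 range.
im = plt.imshow(data, origin="lower", extent=(0, 2*np.pi, 0, np.pi),
                vmin=1.0, vmax=3.0)
plt.colorbar(im, label="data value")
plt.show()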
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/May/msg00308.html","timestamp":"2014-04-19T19:41:36Z","content_type":null,"content_length":"36636","record_id":"<urn:uuid:2cbeb5e5-6588-4965-bcac-3b7d60eae8dd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Model for Microcirculation Transportation Network Design

Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 379867, 11 pages

Research Article

School of Traffic and Transportation Engineering, Central South University, Changsha 410075, China

Received 2 August 2012; Revised 8 November 2012; Accepted 21 November 2012

Academic Editor: Wuhong Wang

Copyright © 2012 Qun Chen and Feng Shi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The idea of microcirculation transportation was proposed to shunt heavy traffic on arterial roads through branch roads. An optimization model for designing a microcirculation transportation network was developed to pick out branch roads as traffic-shunting channels and determine their required capacity, trying to minimize the total reconstruction expense and land occupancy subject to saturation and reconstruction space constraints, while accounting for the route choice behaviour of network users. Since the microcirculation transportation network design problem includes both discrete and continuous variables, a discretization method was developed to convert the two groups of variables (discrete variables and continuous variables) into one group of new discrete variables, transforming the mixed network design problem into a new kind of discrete network design problem with multiple values. A genetic algorithm was proposed to solve the new discrete network design problem. Finally, a numerical example demonstrates the efficiency of the model and algorithm.

1. Introduction

Urban microcirculation transportation is a term borrowed from the human blood circulation system [1]. In the blood microcirculation system, blood flows from arterioles into microcirculation vessels and then from the microcirculation vessels back into venules. Similarly, traffic microcirculation can be defined as traffic flowing from arterial roads onto branch roads (microcirculation roads) and then from branch roads back onto arterial roads. In general, most vehicles run on the arterial roads, so arterial roads usually become very congested at peak hours. If a microcirculation transportation network is designed around the "jam points" of arterial roads, traffic on the arterial roads can be shunted, with some vehicles travelling through the microcirculation roads (branch roads). In reality, owing to narrow road surfaces and complicated functions, some branch roads are primarily for in-area traffic (e.g., pedestrians, bicycles, and a few vehicles) [2, 3] and do not have the ability to shunt traffic from arterials. For the purpose of traffic shunting, the microcirculation road system needs to be designed: some branch roads with good conditions need to be picked out for reconstruction. So there are two problems to solve: one is to determine which branch roads are picked out as traffic-shunting channels; the other is to determine the required capacity of these selected roads after reconstruction. Microcirculation transportation presents an effective and economical way of reducing traffic congestion because it does not add new roads but utilizes existing branch roads to shunt traffic. Recently, in China, big cities like Beijing and Kunming have established microcirculation transportation systems around some congested segments.
The microcirculation transportation network design problem belongs to the family of network design problems (NDPs). The NDP is normally formulated as a mathematical program with equilibrium constraints (MPEC) in which the planner aims to define modifications to a network so as to optimize an objective function, whilst considering the response of travellers to the changes following an equilibrium condition. Often, the travellers' responses are assumed to follow Wardrop's user equilibrium condition (deterministic UE). Typical models for the NDP under DUE have been developed by Tobin and Friesz [4], Yang et al. [5], and Chiou [6]. Users' route choice behaviour is also often characterized by the stochastic user equilibrium (SUE) [7]. Davis [8] and Uchida et al. [9] extended the NDP under the DUE to the SUE case. The NDP is usually classified into three categories: the discrete network design problem (DNDP), the continuous network design problem (CNDP), and the mixed network design problem (MNDP) that combines both CNDP and DNDP in one network. The DNDP deals with the selection of the optimal locations of new links to be added and is normally applied in the design of new road systems. LeBlanc [10], Chen and Alfa [11], Gao et al. [12], and Jeon et al. [13] researched the DNDP and developed mathematical models and solution algorithms. The CNDP determines the optimal capacity enhancement for a subset of existing links and is especially suitable for the design of widening existing roads. Abdulaal and LeBlanc [14], Friesz et al. [15], Meng et al. [16], Chiou [17], and Wang and Lo [18] researched the CNDP and developed mathematical models and solution algorithms. Yang and Bell [19] provided a comprehensive review of the models and algorithms for the NDP, in which the MNDP was mentioned. The MNDP is normally formulated as a nonlinear mixed-integer bilevel programming problem that is very hard to solve. Luathep et al. [20] developed a mixed-integer linear programming approach for solving the MNDP. Shi et al. [21] modelled one-way traffic organization in a microcirculation transportation network. In addition, Shi et al. [22] presented a model for the reconstruction of urban branch roads, but it only considered the cost objective and optimized improvements of all branch roads. In fact, the microcirculation network design is a two-stage problem: the first stage is to determine which branch roads are picked out as traffic-shunting channels (0-1 variables); the second is to determine the required capacity of these selected roads (continuous variables). The microcirculation transportation network design problem includes both 0-1 discrete variables and continuous variables, so it can also be considered one of the MNDPs. But it is different from previous MNDPs. Conventional MNDPs combine both the DNDP and the CNDP in one network; the discrete variables (for new road links) and continuous variables (for modified road links) are independent and serve separate subproblems. However, in the microcirculation transportation network design problem, the road links to be reconstructed must first be selected, and only then can the required capacity of these selected links be determined. So it is a two-stage planning problem, in which the determination of the discrete variables' values precedes the determination of the continuous variables' values. It is more difficult to solve than conventional MNDPs.
This paper presents a discretization method to convert the two groups of variables (discrete variables and continuous variables) into one group of new discrete variables, whereby the MNDP is transformed into a new kind of DNDP. The new DNDP is different from the conventional 0-1 DNDP because each variable of the new DNDP can take multiple values. A genetic algorithm is proposed to solve the new DNDP. Moreover, compared with the conventional NDPs, the microcirculation transportation network design problem has different objectives. A microcirculation transportation network is a small local network whose purpose is to shunt traffic from arterial roads. Because the network is small, the passing time of vehicles is very short as long as the network is not congested, so the travel-time factor, which the conventional NDPs usually take into account, may be ignored in the model. The main objective of the microcirculation network design problem is to minimize the total reconstruction expense under a saturation constraint. The objective of minimizing land occupancy is also taken into account, to minimize interference with in-area residents. In addition, the microcirculation transportation network design problem considers some other constraints, such as a reconstruction space constraint and a restriction on the number of cross-points of microcirculation roads and arterial roads.
The remainder of the paper is organized as follows. Section 2 presents the optimization model for designing the microcirculation transportation network. Section 3 introduces a discretization approach to solving the model. In Section 4, a numerical example is given to demonstrate the application of the model and algorithm. The final section concludes the paper.
2. Optimization Model for Designing Microcirculation Transportation Network
In Figure 1, the road network is $G = (N, A)$, where $N$ is the set of all nodes, $A_1 \subset A$ is the set of arterial roads, and $A_2 \subset A$ is the set of candidate branch roads. $Q = \{q_{rs}\}$ is the traffic distribution between origins and destinations. For branch road $i \in A_2$, $x_i$ equals 1 if it is selected and 0 if it is not selected. All the selected branch roads together constitute the microcirculation transportation network. The existing capacity of each road is $c_i^0$, $i \in A_1 \cup A_2$. For a selected branch road $i$, its required capacity after reconstruction is $c_i$, $i \in A_2$; apparently $c_i \ge c_i^0$. $x_i$ and $c_i$, $i \in A_2$, are the optimization variables. In general, there are two road links with opposite directions between two adjacent nodes, and their capacities are usually the same.
Before reconstruction the branch roads do not have the ability to shunt traffic from arterial roads. They are for in-area traffic, often crowded with pedestrians and bicycles and even occupied by temporary facilities, and so would not provide their original designed capacity unless they are cleaned up or reconstructed. The main optimization objective is to minimize the total reconstruction cost, which depends on the length and capacity of the reconstructed roads: the more the capacity is improved, the greater the reconstruction expense. In addition, the objective of minimizing land occupancy (expressed as a land use cost) should be taken into account to reduce interference with the area. Although microcirculation transportation can shunt arterial traffic, the shunted traffic interferes with residents' life inside the area and may cause environmental pollution; reducing the land occupancy (road length and width) of microcirculation transportation reduces the scope of this interference.
So, for those unselected branch roads, some management measures need to be taken to bar traversing traffic, restricting the traffic shunting to the selected roads. From the previous analysis, the cost function of candidate branch road $i$ can be expressed as

$$F_i = l_i\,g_i(c_i) + l_i\,h_i(c_i). \qquad (2.1)$$

In (2.1), item 1 of the right side is the reconstruction expense and item 2 is the land use cost of microcirculation transportation. The optimization goal is to minimize the total cost over all selected roads, namely

$$\min Z = \sum_{i \in A_2} x_i\,l_i\big[g_i(c_i) + h_i(c_i)\big], \qquad (2.2)$$

where $l_i$ is the length of candidate branch road $i$, $g_i$ is the unit reconstruction expense, and $h_i$ is the unit land use cost:

$$g_i = g_i(c_i), \quad \frac{dg_i}{dc_i} > 0. \qquad (2.3)$$

In (2.3), for branch road $i$, $g_i$ is an increasing function of $c_i$; namely, the greater the required capacity, the higher the reconstruction expense. Similarly,

$$h_i = h_i(c_i), \quad \frac{dh_i}{dc_i} > 0. \qquad (2.4)$$

Equation (2.4) implies that $h_i$ is an increasing function of $c_i$, because the land use of microcirculation roads depends on their length and width, while the land use cost per unit length is decided by the road width, which corresponds to the capacity after reconstruction ($c_i$). In general, for branch road $i$, if the capacity after reconstruction is greater, the road width should be greater, and so $h_i$ becomes larger.

The constraints are as follows.
(1) Saturation constraint of arterial roads: since the function of the microcirculation transportation network is to shunt traffic from arterial roads, the first target is to make the saturation of arterial roads less than an allowed value. But the saturation of arterial roads should not be too small either; otherwise, their capacity cannot be brought into full play. The key is to ensure that they are no longer heavily congested:

$$s_a = \frac{v_a}{c_a^0} \le s_a^{\max}, \quad a \in A_1, \qquad (2.6)$$

where $s_a$ and $v_a$ are, respectively, the saturation and flow of arterial road $a$, and $s_a^{\max}$ is the allowed saturation of arterial road $a$.
(2) Saturation constraint of branch roads: the saturation of microcirculation branch roads should also be kept under an allowed value, to avoid traffic congestion on the branch roads and to ensure the safety of pedestrians and bicycles there:

$$s_i = \frac{v_i}{c_i} \le s_i^{\max}, \quad i \in A_2, \qquad (2.7)$$

where $s_i$ and $v_i$ are, respectively, the saturation and flow of branch road $i$, and $s_i^{\max}$ is the allowed saturation of branch road $i$.
(3) Capacity constraint of branch roads: capacity enhancement of branch roads is limited by actual conditions, such as land use restrictions, building restrictions, and geological conditions:

$$c_i \le c_i^{\max}, \quad i \in A_2, \qquad (2.8)$$

where $c_i^{\max}$ is the available maximal capacity of branch road $i$ after reconstruction.
(4) Restriction on the number of cross-points of microcirculation and arterial roads: reducing the number of cross-points of microcirculation and arterial roads reduces interference with arterial traffic. Microcirculation roads are for shunting arterial traffic and so generally carry a relatively large traffic flow; signal controls normally need to be installed where they cross arterial roads, and more signal-controlled intersections imply more traffic delay on the arterial roads (waiting time and the time needed for starting and braking of vehicles). Let $n_b$ denote the number of cross-points of arterial road $b$ ($b \in A_1$) and the microcirculation roads; $n_b$ should not exceed the maximal allowed value $n_b^{\max}$:

$$n_b \le n_b^{\max}, \quad b \in A_1. \qquad (2.9)$$

The link flows $v_a$ are calculated via the user equilibrium (UE) traffic assignment model:

$$\min \sum_a \int_0^{v_a} t_a(w)\,dw \qquad (2.10)$$

s.t.

$$\sum_{k=1}^{K_{rs}} f_k^{rs} = q_{rs}, \qquad f_k^{rs} \ge 0, \qquad v_a = \sum_{rs}\sum_k f_k^{rs}\,\delta_{a,k}^{rs},$$

where $f_k^{rs}$ is the flow of path $k$ between origin-destination (OD) pair $rs$, $K_{rs}$ is the number of paths between OD pair $rs$, and $q_{rs}$ is the traffic demand between OD pair $rs$. $v_a$ is the flow of link $a$. $\delta_{a,k}^{rs}$ equals 1 if link $a$ is on path $k$ between OD pair $rs$, otherwise 0. $t_a$ is the travel time on link $a$. Here the BPR (Bureau of Public Roads) link impedance function is applied:

$$t_a = t_a^0\left[1 + \alpha\left(\frac{v_a}{C_a}\right)^{\beta}\right]. \qquad (2.11)$$

In (2.11), $C_a$ is the link capacity; for arterial roads it is $c_a^0$, and for branch roads it is $c_i$, the capacity after reconstruction. $\alpha$ and $\beta$ are parameters, and the BPR suggested $\alpha = 0.15$, $\beta = 4$. $t_a^0$ is the free-flow travel time of link $a$.
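To make the link impedance (2.11) and the saturation checks (2.6)-(2.7) concrete, here is a minimal sketch; it is an illustration only, with the standard BPR parameters α = 0.15 and β = 4 assumed, and the function and variable names are ours, not the paper's.

```python
def bpr_travel_time(t0, flow, capacity, alpha=0.15, beta=4.0):
    """BPR link impedance: t = t0 * (1 + alpha * (v / C) ** beta)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

def saturation_ok(flow, capacity, s_max=1.0):
    """Saturation constraint: v / C must not exceed the allowed value."""
    return flow / capacity <= s_max

# Example: an arterial link with t0 = 1 min, capacity 3000 veh/h, flow 2400 veh/h
print(bpr_travel_time(1.0, 2400, 3000))  # ~1.061 min
print(saturation_ok(2400, 3000))         # True (saturation 0.8 <= 1)
```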
3. Solution Algorithms

There are two groups of variables ($x_i$, $c_i$) in the above model, so the solution is very hard. But if the two groups of variables can be converted into one, the solution becomes much easier. For branch road $i$ (with existing capacity $c_i^0$), its capacity enhancement via reconstruction can be discretized if it is selected. Let the capacity enhancements be $0, \Delta, 2\Delta, 3\Delta, 4\Delta, \dots$, where $\Delta$ denotes one added unit of capacity and 0 denotes that the capacity enhancement is zero. This discretization accords with the real case: on the one hand, capacity enhancements via reconstruction always take discrete values in reality rather than continuous ones; on the other hand, using many discrete values is also able to reach the required precision. One group of new discrete variables $y_i$ can be defined to convert the two groups of variables ($x_i$, $c_i$) into one group ($y_i$): $y_i = -1$ if branch road $i$ is not selected, and $y_i = k$ ($k = 0, 1, 2, \dots$) if it is selected with capacity enhancement $k\Delta$, so that $x_i = 1$ and $c_i = c_i^0 + k\Delta$. $y_i$ is the optimization variable; once $y_i$ is calculated, $x_i$ and $c_i$ can be obtained.

A real-coded genetic algorithm is applied to solve the optimization model. The chromosome is made up of the $y_i$. The steps of solving the model using the genetic algorithm are as follows.

Step 1. Initialization: set the population size $M$, chromosome length $L$, iteration number $G$, probability of crossover $p_c$, and probability of mutation $p_m$.

Step 2. Construct a fitness function $f_j = Z_{\max} - Z_j$, where $f_j$ is the fitness of individual $j$, $Z_j$ is the objective function value of individual $j$, and $Z_{\max}$ is the estimated maximal function value. Randomly produce the initial population and set the generation counter $g = 1$.

Step 3. Calculate link flows via the UE traffic assignment model, and then calculate the fitness and the excess over constraints of each individual. If $g = G$, output the best individual; otherwise, turn to Step 4.

Step 4. Use the roulette wheel selection operator based on ranking [23] to select the population of the next generation. Feasible solutions rank from high to low by fitness, and then infeasible solutions rank from small to large by excess over constraints.

Step 5. According to the probability of crossover $p_c$, make multi-point crossovers. Crossover points are randomly selected without repetition; variables between crossover points interchange alternately to produce two new individuals.

Step 6. According to the probability of mutation $p_m$, make single-point mutations. Randomly produce an integer in $[-1, K_i]$ (where $K_i$ is the maximal value of $y_i$) to replace the current value of the variable. Set $g = g + 1$ and return to Step 3.

4. A Numerical Example

In Figure 2, the thick lines around the area denote arterial roads and the thin lines inside the area denote candidate branch roads. Each line includes two links with opposite directions and equal capacity. The traffic distribution during peak hours is given in Table 1. The capacity of each arterial road is 3000 veh/h; the existing capacity of each candidate branch road is 500 veh/h. The length of each link is 1 km. The unit reconstruction cost of branch roads is a given increasing function of capacity (in $/km), as is the unit land use cost (in $/km). Road saturation should not exceed 1. The available maximal capacity of each branch road after reconstruction is 1000 veh/h. The free-flow travel time $t^0$ of arterial roads is 1 min and that of branch roads is 1.1 min. Let $\Delta = 100$ veh/h; the solution set of each $y_i$ is then $\{-1, 0, 1, 2, 3, 4, 5\}$, since the available maximal capacity is 1000 and the existing capacity is 500. The selected branch roads are shown in Figure 3; the total cost is 8000 × 10^4 $. The saturations of arterial and branch roads are all less than 1. Flows and saturations of the arterial roads are given in Table 2; the capacities, flows, and saturations of the selected branch roads constituting the microcirculation network are given in Table 3.
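To illustrate the Section 3 encoding on this example's numbers, here is a small sketch; the unit increment Δ = 100 veh/h and the names are our assumptions, chosen so that an enhancement from 500 up to the allowed 1000 veh/h corresponds to y ∈ {−1, 0, …, 5}.

```python
DELTA = 100   # assumed unit of capacity enhancement (veh/h)
C0 = 500      # existing branch-road capacity (veh/h)
C_MAX = 1000  # maximal capacity after reconstruction (veh/h)

def decode(y):
    """Map one discrete gene y to the pair (x, c):
    y = -1      -> road not selected (x = 0, capacity unchanged)
    y = k >= 0  -> road selected (x = 1) with capacity c0 + k * DELTA."""
    if y == -1:
        return 0, C0
    return 1, C0 + y * DELTA

# Feasible gene values for this example: {-1, 0, 1, 2, 3, 4, 5}
for y in range(-1, (C_MAX - C0) // DELTA + 1):
    print(y, decode(y))
```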
Comparatively, if only the arterial roads exist (without the microcirculation road network), the saturation of the arterial roads goes beyond 1 (Table 4).

5. Conclusions

This paper defined the concept of urban microcirculation transportation. A microcirculation transportation network is a small local network that can shunt traffic from arterial roads. Through the microcirculation transportation network design model in this paper, the branch roads to serve as traffic-shunting channels and their required capacity after reconstruction can be decided. Since the microcirculation transportation network design problem includes both discrete variables and continuous variables, this paper developed a discretization method to convert the two groups of variables (discrete variables and continuous variables) into one group of new discrete variables, transforming the solution of the MNDP into the solution of a new kind of DNDP with multiple values, and a genetic algorithm was proposed to solve the new DNDP. A numerical example demonstrated the application of the model and algorithm and compared the results with and without the microcirculation transportation network. The method and model proposed in this paper provide an effective new way of relieving urban traffic congestion.

This research is supported by the National Natural Science Foundation of China (Grant no. 50908235) and the Hunan Province Social Science Fund (Grant no. 12YBB274).

1. F. Shi, E. H. Huang, and Y. Z. Wang, "Study on the functional characteristics of urban transportation micro-circulation system," Urban Studies, vol. 15, no. 3, pp. 34–36, 2008.
2. W. H. Wang, Q. Cao, K. Ikeuchi, and H. Bubb, "Reliability and safety analysis methodology for identification of drivers' erroneous actions," International Journal of Automotive Technology, vol. 11, no. 6, pp. 873–881, 2010.
3. W. H. Wang, H. W. Guo, K. Ikeuchi, and H. Bubb, "Numerical simulation and analysis procedure for model-based digital driving dependability in intelligent transport system," KSCE Journal of Civil Engineering, vol. 15, no. 5, pp. 891–898, 2011.
4. R. L. Tobin and T. L. Friesz, "Sensitivity analysis for equilibrium network flow," Transportation Science, vol. 22, no. 4, pp. 242–250, 1988.
5. H. Yang, Q. Meng, and M. G. H. Bell, "Simultaneous estimation of the origin-destination matrices and travel-cost coefficient for congested networks in a stochastic user equilibrium," Transportation Science, vol. 35, no. 2, pp. 107–123, 2001.
6. S. W. Chiou, "TRANSYT derivatives for area traffic control optimisation with network equilibrium flows," Transportation Research B, vol. 37, no. 3, pp. 263–290, 2003.
7. Y. Sheffi, Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods, Prentice-Hall, Englewood Cliffs, NJ, USA, 1985.
8. G. A. Davis, "Exact local solution of the continuous network design problem via stochastic user equilibrium assignment," Transportation Research B, vol. 28, no. 1, pp. 61–75, 1994.
9. K. Uchida, A. Sumalee, D. Watling, and R. Connors, "A study on network design problems for multi-modal networks by probit-based stochastic user equilibrium," Networks and Spatial Economics, vol. 7, no. 3, pp. 213–240, 2007.
10. L. J. LeBlanc, "An algorithm for the discrete network design problem," Transportation Science, vol. 9, no. 3, pp. 183–199, 1975.
11. M. Chen and A. S. Alfa, "Network design algorithm using a stochastic incremental traffic assignment approach," Transportation Science, vol. 25, no. 3, pp. 215–224, 1991.
12. Z. Y. Gao, J. J. Wu, and H. J. Sun, "Solution algorithm for the bi-level discrete network design problem," Transportation Research B, vol. 39, no. 6, pp. 479–495, 2005.
13. K. Jeon, J. S. Lee, S. Ukkusuri, and S. T. Waller, "Selectorecombinative genetic algorithm to relax computational complexity of discrete network design problem," Transportation Research Record, no. 1964, pp. 91–103, 2006.
14. M. Abdulaal and L. J. LeBlanc, "Continuous equilibrium network design models," Transportation Research B, vol. 13, no. 1, pp. 19–32, 1979.
15. T. L. Friesz, H. J. Cho, N. J. Mehta, R. L. Tobin, and G. Anandalingam, "Simulated annealing approach to the network design problem with variational inequality constraints," Transportation Science, vol. 26, no. 1, pp. 18–26, 1992.
16. Q. Meng, H. Yang, and M. G. H. Bell, "An equivalent continuously differentiable model and a locally convergent algorithm for the continuous network design problem," Transportation Research B, vol. 35, no. 1, pp. 83–105, 2001.
17. S. W. Chiou, "Bilevel programming for the continuous transport network design problem," Transportation Research B, vol. 39, no. 4, pp. 361–383, 2005.
18. D. Z. W. Wang and H. K. Lo, "Global optimum of the linearized network design problem with equilibrium flows," Transportation Research B, vol. 44, no. 4, pp. 482–492, 2010.
19. H. Yang and M. G. H. Bell, "Models and algorithms for road network design: a review and some new developments," Transport Reviews, vol. 18, no. 3, pp. 257–278, 1998.
20. P. Luathep, A. Sumalee, W. H. K. Lam, Z. C. Li, and H. K. Lo, "Global optimization method for mixed transportation network design problem: a mixed-integer linear programming approach," Transportation Research B, vol. 45, no. 5, pp. 808–827, 2011.
21. F. Shi, E. H. Huang, Q. Chen, and Y. Z. Wang, "Optimization of one-way traffic organization for urban micro-circulation transportation network," Journal of Transportation Systems Engineering and Information Technology, vol. 9, no. 4, pp. 30–35, 2009.
22. F. Shi, E. H. Huang, Q. Chen, and Y. Z. Wang, "Bi-level programming model for reconstruction of urban branch road network," Journal of Central South University of Technology, vol. 16, no. 1, pp. 172–176, 2009.
23. X. P. Wang and L. M. Cao, Genetic Algorithm: Theory, Application and Software Implementation, Xi'an Jiaotong University Press, Xi'an, China, 2002.
{"url":"http://www.hindawi.com/journals/mpe/2012/379867/","timestamp":"2014-04-17T06:10:42Z","content_type":null,"content_length":"217110","record_id":"<urn:uuid:92583adb-5bf3-48a6-8b77-2b814dfd8f5d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume of Solid
July 4th 2010
Looking for the volume of a solid bounded by the plane z = 0 and three parabolic cylinders.

Does my triple integral look ok?

$\int\limits_{-1}^{1} {dy}\int\limits_{y^{2}-1}^{1-y^{2}} {dx}\int\limits_{0}^{1-x^{2}} {dz}$
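For what it's worth, if the three cylinders are $z = 1 - x^2$, $x = 1 - y^2$, and $x = y^2 - 1$ (which is what the stated limits suggest), the setup looks right and evaluates as follows:

$$V = \int_{-1}^{1}\!\int_{y^2-1}^{1-y^2} (1 - x^2)\,dx\,dy
= \int_{-1}^{1}\Big[2(1-y^2) - \tfrac{2}{3}(1-y^2)^3\Big]\,dy
= \frac{8}{3} - \frac{64}{105} = \frac{72}{35} \approx 2.057.$$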
{"url":"http://mathhelpforum.com/calculus/150085-volume-solid.html","timestamp":"2014-04-17T19:29:39Z","content_type":null,"content_length":"28330","record_id":"<urn:uuid:1f83319b-118a-4d29-9ff7-55ba379697ce>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics - Kinematics

If an object is moving at a constant acceleration (as it would be if acted on by a constant force, see forces), its acceleration is given by:

a = dv/dt

or, integrating:

v = ∫a dt

In the more general case of translation in 3 dimensions, these become vector equations.

Constant Acceleration (constant force)

Acceleration is constant when the force is constant, which is a common occurrence, for example when a body is falling in a gravity field. In that case the equations of motion can be solved analytically to give the following equations, which relate p, p[0], v, v[0], t and a in different ways:

• v = v[0] + a*t
• p = p[0] + v[0]*t + ½*a*t^2
• v^2 = v[0]^2 + 2*a*(p − p[0])

where:
│symbol│description          │type  │units│
│v[0]  │initial velocity     │vector│m/s  │
│v     │final velocity       │vector│m/s  │
│a     │acceleration constant│vector│m/s^2│
│t     │time                 │scalar│s    │
│p     │final position       │vector│m    │
│p[0]  │position at t=0      │vector│m    │

These equations can be derived using calculus as follows. Acceleration is the rate of change of velocity:

a = dv/dt

So integrating both sides gives:

v = ∫a dt

so if a is constant:

v = v[0] + a*t

I seem to remember that when I was at school this was written as: v = u + a*t

Integrating the acceleration once gives the velocity; to get the position we need to integrate again:

p = ∫v dt
p = ∫(v[0] + a*t) dt
p = p[0] + v[0]*t + ½*a*t^2

Variable Acceleration - approximate methods

If we have an equation for the acceleration as a function of time, we can apply integration to find the velocity and position; if we don't, then we can use approximate methods such as the finite difference method, Euler's method or the Runge-Kutta method. If we are animating a computer simulation, this can be a very good approach, because we need to generate the position for each frame anyway, so it is much easier to generate the next frame from the frame before it:

v[n+1] = v[n] + a * dt

where:
│symbol│description                       │type  │units│
│v[n+1]│velocity at frame n+1             │vector│m/s  │
│v[n]  │velocity at frame n               │vector│m/s  │
│a     │acceleration                      │vector│m/s^2│
│dt    │time between frame n and frame n+1│scalar│s    │

and summing again:

p[n+1] = p[n] + v[n] * dt

where:
│symbol│description                       │type  │units│
│p[n+1]│position at frame n+1             │vector│m    │
│p[n]  │position at frame n               │vector│m    │
│v[n]  │velocity at frame n               │vector│m/s  │
│dt    │time between frame n and frame n+1│scalar│s    │

These approximations can be made more accurate by using higher-order schemes such as the Runge-Kutta method.

Representing Acceleration in a program

Acceleration in 3D space can be held in a 3D vector (see class sfvec3f). For an example of how this might be used in a scenegraph node, see here.
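As a quick illustration of the frame-by-frame update against the closed-form result, here is a small sketch (the free-fall value a = −9.81 m/s² and the initial conditions are assumed for the example; the names are ours):

```python
A = -9.81             # constant acceleration (m/s^2), e.g. gravity
V0, P0 = 0.0, 100.0   # assumed initial velocity and position
DT = 0.01             # time step between frames (s)

def exact(t):
    """Closed-form solution for constant acceleration."""
    return P0 + V0 * t + 0.5 * A * t * t

# Forward update per frame: p[n+1] = p[n] + v[n]*dt, then v[n+1] = v[n] + a*dt
v, p, t = V0, P0, 0.0
for _ in range(200):  # simulate 2 seconds
    p += v * DT
    v += A * DT
    t += DT

print(p, exact(t))  # the two agree to within the discretization error
```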
{"url":"http://www.euclideanspace.com/physics/kinematics/acceleration/index.htm","timestamp":"2014-04-19T22:05:34Z","content_type":null,"content_length":"22126","record_id":"<urn:uuid:6d4d366d-7ffe-4fd7-9c69-2bf1988ec4ae>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Algebra help please:
October 17th 2007, 07:15 PM
Given that the system Ax = b has the solution x^T = [3-2s+3t - t-3s - s - 2t-2s-5 - 4-3t - t] (dashes only included for distinction between components):
Write this solution as a linear combination of the particular solution and the homogeneous solution (clearly label each), and identify the basic solution vectors of the corresponding system Ax = 0. Does this system have an invertible matrix? Why or why not?
Any suggestions would be appreciated.

October 17th 2007, 07:21 PM
Wow, that is really complex. Is that a lot further ahead than a question like: "Solve the system of equations by use of determinants"? That's the math I'm on currently, nowhere close to the stuff you're on. Sorry if this is an inappropriate post, but I'm just intrigued by that question; it made me want to see how far away I am from that level of mathematics. Good luck on solving it, btw.
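Whatever the exact entries (the signs in the posted vector are ambiguous), the requested decomposition follows one pattern. If the general solution has two free parameters $s$ and $t$, it can be written as

$$\mathbf{x}(s,t) = \mathbf{x}_p + s\,\mathbf{u} + t\,\mathbf{w},$$

where $\mathbf{x}_p = \mathbf{x}(0,0)$ is the particular solution (set $s = t = 0$ in each component), and $\mathbf{u}$, $\mathbf{w}$ collect the coefficients of $s$ and $t$ respectively. Then $A\mathbf{x}_p = \mathbf{b}$, while $s\,\mathbf{u} + t\,\mathbf{w}$ is the homogeneous solution, with $\mathbf{u}$ and $\mathbf{w}$ the basic solution vectors of $A\mathbf{x} = \mathbf{0}$. Because $A\mathbf{x} = \mathbf{0}$ has nontrivial solutions (the free parameters), the solution is not unique, so $A$ cannot be invertible.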
{"url":"http://mathhelpforum.com/advanced-algebra/20805-matrix-algebra-help-please.html","timestamp":"2014-04-19T20:22:30Z","content_type":null,"content_length":"31800","record_id":"<urn:uuid:c23b898f-0e4b-44f9-be6e-6437dcf6f282>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] binomial theorem =(
February 17th 2009, 01:03 AM
hey, ive got a question here in which ive been pondering over for hours, hope someone could help me ^^
If the coefficients of $x^k$ and $x^{k+1}$ in the expansion of $(2 + 3x)^{10}$ are equal, find k.
thanks in advance =D

February 17th 2009, 01:48 AM
$T_n$ is the coefficient of the nth term.
$T_k = T_{k+1}$
$^nC_k 2^k 3^{n-k} = \,^nC_{k+1} 2^{k+1} 3^{n-k-1}$
n = 10. Now find k.

February 17th 2009, 03:18 AM
yea, thats where i got lost. btw sorry, the 10 is supposed to be 19, so its $(2 + 3x)^{19}$. i dont think it would make any difference in the working u gave me anyway. lols, anyway, my working is
$^{19}C_k 2^k 3^{n-k} = \,^{19}C_{k+1} 2^{k+1} 3^{n-k-1}$
$\frac{^{19}C_k}{^{19}C_{k+1}} = 2 \times 3$
$\frac{^{19}C_k}{^{19}C_{k+1}} = 6$
and i got stuck here...

February 17th 2009, 03:54 AM
${19 \choose 9} = {19 \choose 10}$

February 17th 2009, 03:58 AM
i dont get it... i know what ${19 \choose 9} = {19 \choose 10}$ is, but how did u get 9 and 10?

February 17th 2009, 04:15 AM
didn't mean to post.

February 17th 2009, 05:07 AM
Binomial Coefficients
Hello teddybear67
You're OK to start with. But it should be
$\frac{^{19}C_k}{^{19}C_{k+1}} = \frac{2}{3}$
$\Rightarrow \frac{19!(k+1)!(19-[k+1])!}{k!(19-k)!19!} = \frac{2}{3}$
$\Rightarrow \frac{k+1}{19-k}=\frac{2}{3}$, if you cancel carefully
$\Rightarrow 3k+3 = 38 - 2k$
$\Rightarrow k = 7$

February 17th 2009, 05:37 AM
thanks alot grandad, but i dont get this part:
$\Rightarrow \frac{19!(k+1)!(19-[k+1])!}{k!(19-k)!19!} = \frac{2}{3}$
sorry, im confused over the factorial and the "choose". its the one part of the binomial theorem which i never get ><
oh ya, and the answer at the back of the book which i got the question from is 11 and not 7. Do you know why? or is the book wrong? because there are times when the book gives wrong answers.

February 17th 2009, 05:47 AM
Use the fact that ${n \choose r}=\frac{n!}{(n-r)!r!}$

February 17th 2009, 05:59 AM
Binomial Coefficients
Hello teddybear67
Yes, it does look rather a mess, doesn't it? The working goes like this:
$\frac{^{19}C_k}{^{19}C_{k+1}}=\frac{2}{3}$ (I assume you agree that this is correct!)
$\Rightarrow \frac{19!}{k!(19-k)!}\div \frac{19!}{(k+1)!(19-[k+1])!} =\frac{2}{3}$
$\Rightarrow \frac{19!}{k!(19-k)!}\times \frac{(k+1)!(19-[k+1])!}{19!} =\frac{2}{3}$
Now the $19!$ on the top and bottom cancel immediately. Then notice that the $(k+1)!$ term on the top is $(k+1) \times k!$, so it cancels with the $k!$ term on the bottom to leave $(k+1)$ on the top.
Similarly the $(19-k)!$ term on the bottom is $(19-k) \times (19-[k+1])!$, so it cancels with the term on the top to leave $(19-k)$ on the bottom.
Hence $\frac{k+1}{19-k}=\frac{2}{3}$ ... etc
I have checked my answer using an Excel spreadsheet (see the attached) and it is correct! So the answer in the book is wrong (nothing new there!).

February 17th 2009, 06:05 AM
lol, i assumed that your $\frac{^{19}C_k}{^{19}C_{k+1}}=\frac{2}{3}$ is correct, but i double checked and its supposed to be 3/2. sorry!! but yea, i understand your workings and managed to get the hang of it. thanks alot grandad!!! ^^

February 17th 2009, 06:16 AM
Binomial Coefficients
Hello teddybear67
Ah, I see where $k = 11$ comes from now. Of course, it depends which way round you're expanding the binomial. Your original question has the expression as $(2 + 3x)^{19}$, and for this the answer is $k = 7$, with the equal coefficients being the 7th and 8th. But if you expand it the other way round, $(3x+2)^{19}$, then the equal coefficients are the (19 - 7)th and (19 - 8)th, i.e. the 12th and 11th.
It's all so simple really.

February 17th 2009, 06:23 AM
yepyeps, thanks much much again grandad! (Rofl)
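A quick numeric check of the whole thread (the script and its names are ours, not from the posts): listing the coefficients of $x^k$ in $(2+3x)^{19}$ directly shows that, under the usual convention where the coefficient of $x^k$ is $\binom{19}{k}2^{19-k}3^k$, the equal pair sits at $k = 11$ and $k = 12$ — consistent with the $3/2$ ratio teddybear found and the book's $k = 11$ — while $k = 7$ corresponds to reading the expansion the other way round, exactly as Grandad reconciles above.

```python
from math import comb

n = 19
# coefficient of x^k in (2 + 3x)^19 is C(19, k) * 2^(19 - k) * 3^k
coeff = [comb(n, k) * 2 ** (n - k) * 3 ** k for k in range(n + 1)]

for k in range(n):
    if coeff[k] == coeff[k + 1]:
        print("equal coefficients at k =", k, "and", k + 1)
# prints: equal coefficients at k = 11 and 12
```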
{"url":"http://mathhelpforum.com/discrete-math/74035-solved-binomial-theorem-print.html","timestamp":"2014-04-21T02:47:33Z","content_type":null,"content_length":"23358","record_id":"<urn:uuid:49e6ede7-56e4-4a1c-b9a9-2da34e793462>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
CGTalk - sort vectors
05-10-2006, 03:37 AM
I am storing particle positions in a series of vectors. I am then calculating the angle of each position relative to the origin {0,0,0}. I want to use the angle to sort the vectors in ascending order. I'm not sure how to structure the script. I know MEL has a sort command, but if I use it, it will sort the vectors themselves, and the vectors can only store three values as far as I know, so there is nowhere to keep the angle. Any information would be a huge help. I'm kind of stuck on this.
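Not MEL, but a language-neutral sketch of one way to structure it (Python here, assuming the "angle" means the planar angle of each position about the origin via atan2): compute the angle for each position and sort the positions by that key, rather than trying to stuff the angle into the vector itself.

```python
import math

positions = [(1.0, 2.0, 0.0), (-1.0, 0.5, 0.0), (0.3, -2.0, 0.0)]

def angle(p):
    # planar angle of the position about the origin, in radians
    return math.atan2(p[1], p[0])

positions.sort(key=angle)  # ascending by angle; the vectors stay intact
print(positions)
```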
{"url":"http://forums.cgsociety.org/archive/index.php/t-355774.html","timestamp":"2014-04-19T02:03:55Z","content_type":null,"content_length":"19075","record_id":"<urn:uuid:23cb7f89-2047-4f59-aa80-19631f8df18d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Sphere inscribed in a regular tetrahedron
Find the radius of a sphere inscribed in a regular tetrahedron which has a height of 8. Thank you.

Hello spred. The attached diagram shows a vertical cross-section through the tetrahedron, along one of the medians, $AM$, of the base. $V$ is the vertex of the tetrahedron, and $O$ the centre of the inscribed sphere. $G$ is the centroid of the base; $H$ is the centroid of one of the sloping faces. By symmetry, the sphere touches the base and this face at $G$ and $H$. The height of the tetrahedron is $h$, so $VG = AH = h$. (In your question $h = 8$.)
Also $GM = HM = \tfrac13AM$, using the well-known property of the centroid of a triangle. So if $\angle HAM = \theta$,
$\sin \theta = \frac{HM}{AM}= \frac13$
$\Rightarrow \tan\theta = \frac{1}{\sqrt8}$ ...(1)
$\Rightarrow GM = HM = h\tan\theta$
$\Rightarrow AG = 2GM = 2h\tan\theta$
$\Rightarrow r = OG = AG \tan\theta = 2h\tan^2\theta = \tfrac14h$, from (1)
So the radius of the inscribed sphere is one-quarter of the height of the regular tetrahedron. In your question, then, the radius is $2$ units.
Grandad
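A quick cross-check of Grandad's answer, using the fact that for any tetrahedron the insphere radius satisfies $r = 3V/S$, where $V$ is the volume and $S$ the total surface area: a regular tetrahedron has four faces of equal area $F$ and volume $V = \tfrac13 F h$, so

$$r = \frac{3V}{S} = \frac{3 \cdot \tfrac13 F h}{4F} = \frac{h}{4},$$

and with $h = 8$ this again gives $r = 2$.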
{"url":"http://mathhelpforum.com/geometry/132987-sphere-inscribed-regular-tetrahedron.html","timestamp":"2014-04-16T06:02:10Z","content_type":null,"content_length":"38787","record_id":"<urn:uuid:dfc9afe0-6072-42fa-a400-34517b45e8cb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof Help
Ok. Let's start. We want to prove that: the sum of the squares of 3 consecutive even numbers always has 3 divisors.
Lemma 1: Every even number p can be written in the form p = 2k, with k∈ℕ.
0 is even : 0 = 2k ; k=0
2 is even : 2 = 2k ; k=1
p is even : p = 2k ; k=p/2
Since p is even, p is a multiple of 2; therefore p = 2k with k∈ℕ.
Theorem: If p is an even number, then p²+(p+2)²+(p+4)² has 3 divisors.
Here p, p+2, p+4 are the 3 consecutive even numbers starting with p.
Let z = p²+(p+2)²+(p+4)².
I will write d|z if d is a divisor of z.
We want to find {k1, k2, k3}∈ℕ³ so that: k1|z, k2|z and k3|z.
It is known from number theory that every integer λ always has 2 trivial divisors: 1 and λ itself. So 1|z and z|z. Therefore k1=1 and k2=z. We have found 2 divisors; we only need to find one more.
Let's get back to the expression.
z = p²+(p+2)²+(p+4)²
Since p is even, by our Lemma 1 we can write p = 2k, k∈ℕ:
z = (2k)²+(2k+2)²+(2k+4)²
z = (2k)²+(2(k+1))²+(2(k+2))²
z = 2²·k²+2²·(k+1)²+2²·(k+2)²
z = 2²·(k²+(k+1)²+(k+2)²)
We can see that 2² is a factor of z, and since (k²+(k+1)²+(k+2)²) is an integer, 4|z; so k3 = 4 is the final divisor we needed.
Moreover z = p²+(p+2)²+(p+4)² ≥ 0+4+16 = 20 > 4, so the divisors 1, 4 and z are distinct.
So {1, 4, z} ⊆ D(z), with #D(z) ≥ 3, and z always has (at least) 3 divisors.
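A quick brute-force check of the claim for the first few even starting values (the script and its names are ours):

```python
def divisors(z):
    """All positive divisors of z, by trial division."""
    return [d for d in range(1, z + 1) if z % d == 0]

for p in range(0, 20, 2):
    z = p**2 + (p + 2)**2 + (p + 4)**2
    d = divisors(z)
    assert {1, 4, z} <= set(d)   # the three divisors from the proof
    print(p, z, len(d))          # every z has at least 3 divisors
```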
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=18544","timestamp":"2014-04-19T02:09:06Z","content_type":null,"content_length":"13941","record_id":"<urn:uuid:8e92398f-1f74-4310-9ab9-86592ed29314>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Use Arithmetic Vector Operations in R
One set of arithmetic functions in R consists of functions in which the outcome is dependent on more than one value in the vector. Summing a vector with the sum() function is such an operation. Here are some others:
sum(x): Calculates the sum of all values in x
prod(x): Calculates the product of all values in x
min(x): Gives the minimum of all values in x
max(x): Gives the maximum of all values in x
cumsum(x): Gives the cumulative sum of all values in x
cumprod(x): Gives the cumulative product of all values in x
cummin(x): Gives the minimum for all values in x from the start of the vector until the position of that value
cummax(x): Gives the maximum for all values in x from the start of the vector until the position of that value
diff(x): Gives for every value the difference between that value and the next value in the vector
How to summarize a vector in R
You can tell quite a few things about a set of values with one number. To illustrate, let's assume you have two vectors containing the number of baskets that Granny and her friend Geraldine scored in the six games of this basketball season:
> baskets.of.Granny <- c(12,4,5,6,9,3)
> baskets.of.Geraldine <- c(5,3,2,2,12,9)
If you want to know the minimum and maximum number of baskets Granny made, for example, you use the functions min() and max():
> min(baskets.of.Granny)
[1] 3
> max(baskets.of.Granny)
[1] 12
To calculate the sum and the product of all values in the vector, use the functions sum() and prod(), respectively. These functions also can take more than one vector as arguments. If you want to calculate the sum of all the baskets made by Granny and Geraldine, you can use the following code:
> sum(baskets.of.Granny,baskets.of.Geraldine)
[1] 72
Missing values always return NA as a result. The same is true for vector operations as well. R, however, gives you a way to simply discard the missing values by setting the argument na.rm to TRUE. Take a look at the following example:
> x <- c(3,6,2,NA,1)
> sum(x)
[1] NA
> sum(x,na.rm=TRUE)
[1] 12
This argument works in sum(), prod(), min(), and max().
How to cumulate operations in R
Suppose that after every game, you want to update the total number of baskets that Granny made during the season. After the second game, that's the total of the first two games; after the third game, it's the total of the first three games; and so on. You can make this calculation easily by using the cumulative sum function, cumsum(), as in the following example:
> cumsum(baskets.of.Granny)
[1] 12 16 21 27 36 39
In a similar way, cumprod() gives you the cumulative product. You also can get the cumulative minimum and maximum with the related functions cummin() and cummax().
How to calculate differences in R
You can calculate the difference in the number of baskets between every two games Granny played by using the following code:
> diff(baskets.of.Granny)
[1] -8 1 1 3 -6
You get five numbers back. The first one is the difference between the first and the second game, the second is the difference between the second and the third game, and so on. The vector returned by diff() is always one element shorter than the original vector you gave as an argument.
{"url":"http://www.dummies.com/how-to/content/how-to-use-arithmetic-vector-operations-in-r.navId-812016.html","timestamp":"2014-04-16T20:00:47Z","content_type":null,"content_length":"55508","record_id":"<urn:uuid:c1faf128-7ae9-45ba-80dd-2eddf08f2386>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Show that Z[i] is a PID.
March 29th 2011, 07:04 PM
Show that $\mathbb{Z}[i]$ is a principal ideal domain. I have already shown that $\mathbb{Z}[i]$ is an integral domain. How do I show that all ideals are principal ( $I=(r), r\in \mathbb{Z}[i]$)?

March 29th 2011, 07:08 PM
If you know, or can show by means of the norm, that that ring is a Euclidean one, then you're done; otherwise I can't see how to show directly that it is a PID...

March 29th 2011, 07:14 PM
Well, then you still can do something that is actually equivalent: show that the norm $N(a+bi):=a^2+b^2$ in $\mathbb{Z}[i]$ permits you to carry out "division with remainder" just as the absolute value allows us to do in the integers. Once you have this, proceed as with the integers to show the ring is a PID.
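For the record, the division-with-remainder step looks like this. Given $\alpha, \beta \in \mathbb{Z}[i]$ with $\beta \neq 0$, write $\alpha/\beta = u + vi$ with $u, v \in \mathbb{Q}$ and pick $q = m + ni \in \mathbb{Z}[i]$ with $|u - m| \le \tfrac12$ and $|v - n| \le \tfrac12$. Then, with $r = \alpha - q\beta$,

$$N(r) = N(\beta)\,N\!\left(\frac{\alpha}{\beta} - q\right) \le N(\beta)\left(\tfrac14 + \tfrac14\right) = \tfrac12 N(\beta) < N(\beta),$$

so the norm is a Euclidean function. Then, given any nonzero ideal $I$, take $r_0 \in I$ of minimal nonzero norm; dividing any $\alpha \in I$ by $r_0$ leaves a remainder in $I$ of smaller norm, which must be 0, so $I = (r_0)$ and $\mathbb{Z}[i]$ is a PID.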
{"url":"http://mathhelpforum.com/advanced-algebra/176265-show-z-i-pid-print.html","timestamp":"2014-04-23T08:55:36Z","content_type":null,"content_length":"7665","record_id":"<urn:uuid:25fdc87e-046e-44aa-9da0-273bec009ef4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Roslindale SAT Math Tutor
Find a Roslindale SAT Math Tutor
...In addition to private tutoring, I have taught summer courses, provided tutoring in Pilot schools, assisted in classrooms, and run test preparation classes (MCAS and SAT). Students tell me I'm awesome; parents tell me that I am easy to work with. My style is easy-going; my expectations are real...
8 Subjects: including SAT math, statistics, algebra 1, geometry
...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters, also for many years. I enjoy helping my students to understand and realize that they can not only do the work, they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences.
11 Subjects: including SAT math, geometry, algebra 1, algebra 2
I am a retired university math lecturer looking for students who need an experienced tutor. Relying on more than 30 years' experience in teaching and tutoring, I strongly believe that my profile is a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including SAT math, calculus, geometry, statistics
...Also, I may not see a cancellation message in time. For late cancellations, I charge half of what would have been charged for the full lesson. I have tutored students of various ages in elementary school. I also have worked as a teacher's aide in Sunday school for Pre-K and grades 1, 2, 5, and 6.
18 Subjects: including SAT math, English, reading, writing
...I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location, unless driving is more than 30 minutes.
8 Subjects: including SAT math, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Roslindale_SAT_math_tutors.php","timestamp":"2014-04-19T05:21:17Z","content_type":null,"content_length":"24044","record_id":"<urn:uuid:2b62e74a-e10c-470f-863d-d7ce5ae59423>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
digitalmars.D - std.math.TAU James Fisher <jameshfisher gmail.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Hopefully this won't be taken as frivolous. I (and possibly some of you) have been convinced by the argument at http://tauday.com/. It's very convincing, and I won't rehash it here. The use of =CF=84 instead of =CF=80 will only become really convenient when= one does not have to preface everything with "let =CF=84 =3D 2=CF=80". For example, in D, in order to think in terms of =CF=84 instead of =CF=80, = one must define `enum real TAU =3D std.math.PI * 2;`, and possibly also TAU_2, TAU_4= As well as being a typing inconvenience, I also think things are not that easy due to loss of precision (though I'm far from an expert on intricacies of floating point). There is an initiative to add TAU to the Python standard library: To this end, I suggest adding the constant TAU to std.math, and possibly also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI_4. In any case, I'd like to know what's necessary in order for me to define these constants without loss of precision. Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Hopefully this won&#39;t be taken as frivolous. =C2=A0I (and possibly some = of you) have been convinced by the argument at=C2=A0<meta http-equiv=3D"con= tent-type" content=3D"text/html; charset=3Dutf-8"><a href=3D"http://tauday.= com/">http://tauday.com/</a>. =C2=A0It&#39;s very convincing, and I won&#39= ;t rehash it here.<div> <br></div>The use of =CF=84 instead of=C2=A0=CF=80 will only become really = convenient when one does not have to preface everything with &quot;let=C2= =A0<meta http-equiv=3D"content-type" content=3D"text/html; charset=3Dutf-8"= =CF=84 =3D 2<meta http-equiv=3D"content-type" content=3D"text/html; charse= <br></div><div>For example, in D, in order to think in terms of=C2=A0<meta = http-equiv=3D"content-type" content=3D"text/html; charset=3Dutf-8">=CF=84 i= nstead of=C2=A0<meta http-equiv= 3D"content-type" content=3D"text/html; char= set=3Dutf-8">=CF=80, one must define `enum real TAU =3D std.math.PI * 2;`, = and possibly also TAU_2, TAU_4, etc.<div> <br></div><div>As well as being a typing inconvenience, I also think things= are not that easy due to loss of precision (though I&#39;m far from an exp= ert on intricacies of floating point).<br><div><div><br></div><div>There is= an initiative to add TAU to the Python standard library:=C2=A0<a href=3D"h= ttp://www.python.org/dev/peps/pep-0628/">http://www.python.org/dev/peps/pep= -0628/</a></div> <meta http-equiv= 3D"content-type" content=3D"text/html; charset=3Dutf-8"><d= iv><br></div><div>To this end, I suggest adding the constant TAU to std.mat= h, and possibly also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, = TAU_8 as PI_4.</div> <div><br></div><div>In any case, I&#39;d like to know what&#39;s necessary = in order for me to define these constants without loss of precision.</div><= div> <div><span class=3D"Apple-style-span" style=3D"color: rgb(7, 7, 7); lin= e-height: 24px; "><span class=3D"MathJax" style=3D"margin-top: 0px; margin-= right: 0px; margin-bottom: 0px; margin-left: 0px; padding-top: 0px; padding= -right: 0px; padding-bottom: 0px; padding-left: 0px; border-top-width: 0px;= border-right-width: 0px; border-bottom-width: 0px; border-left-width: 0px;= border-style: initial; border-color: initial; font-weight: normal; vertica= l-align: baseline; display: inline; line-height: normal; text-indent: 
0px; = text-align: left; text-transform: none; letter-spacing: normal; word-spacin= g: normal; word-wrap: normal; white-space: nowrap; float: none; direction: = ltr; border-style: initial; border-color: initial; "><span class=3D"math" i= d=3D"MathJax-Span-85" style=3D"margin-top: 0px; margin-right: 0px; margin-b= ottom: 0px; margin-left: 0px; padding-top: 0px; padding-right: 0px; padding= -bottom: 0px; padding-left: 0px; border-top-width: 0px; border-right-width:= 0px; border-bottom-width: 0px; border-left-width: 0px; border-style: initi= al; border-color: initial; font-weight: inherit; vertical-align: 0px; displ= ay: inline; position: static; border-style: initial; border-color: initial;= line-height: normal; text-decoration: none; "><span style=3D"margin-top: 0= px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; padding-top: 0= px; padding-right: 0px; padding-bottom: 0px; padding-left: 0px; border-top-= width: 0px; border-right-width: 0px; border-bottom-width: 0px; border-left-= width: 0em; border-style: initial; border-color: initial; font-weight: inhe= rit; font-style: inherit; font-size: 16px; font-family: MathJax_Main, MathJ= ax_Size1, MathJax_AMS; vertical-align: -0.012em; display: inline-block; pos= ition: static; border-style: initial; border-color: initial; line-height: n= ormal; text-decoration: none; border-left-style: solid; border-left-color: = initial; overflow-x: hidden; overflow-y: hidden; width: 0px; height: 0.468e= m; ">d</span></span></span></span><meta http-equiv=3D"content-type" content= =3D"text/html; charset=3Dutf-8"></div> </div></div></div></div> Jul 05 2011 "Steven Schveighoffer" <schveiguy yahoo.com> On Tue, 05 Jul 2011 04:31:09 -0400, James Fisher <jameshfisher gmail.com= Hopefully this won't be taken as frivolous. I (and possibly some of y= have been convinced by the argument at http://tauday.com/. It's very convincing, and I won't rehash it here. The use of =CF=84 instead of =CF=80 will only become really convenient= not have to preface everything with "let =CF=84 =3D 2=CF=80". For example, in D, in order to think in terms of =CF=84 instead of =CF= define `enum real TAU =3D std.math.PI * 2;`, and possibly also TAU_2, = As well as being a typing inconvenience, I also think things are not t= easy due to loss of precision (though I'm far from an expert on = of floating point). There is an initiative to add TAU to the Python standard library: To this end, I suggest adding the constant TAU to std.math, and possib= also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI= In any case, I'd like to know what's necessary in order for me to defi= these constants without loss of precision. I read an article about this recently, it's definitely interesting. The= = one place where I haven't seen it mentioned is what happens when you wan= t = the area of a circle, since that necessarily involves the radius. I'd = guess you'd have to use =CF=84/2 * r^2, but even then, that's one formul= a vs. = the rest. It's probably a good tradeoff. I can definitely see the = advantage when using radians. Never thought I'd have to re-learn trig = again... One thing I like about Pi vs Tau is that it cannot be mistaken for a = normal character. I'm not a floating point expert, but I would expect since floating point= = is stored in binary, dividing or multiplying by 2 loses no precision at = = all. But I could be wrong... 
-Steve Jul 05 2011 Don <nospam nospam.com> James Fisher wrote: On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher gmail.com <mailto:jameshfisher gmail.com>> wrote: Sorry, I didn't state this very clearly. Multiplying the approximation of PI in std.math should yield the exact double of that approximation, as it should just involve increasing the exponent by 1. However, [double the approximation of the constant] is not necessarily equal to [the approximation of double the constant]. Does that make sense? I understand what you're getting at, but actually multiplication by powers of 2 is always exact for binary floating point numbers. The reason is that the rounding is based on the values after the lowest bit of the _significand_. The exponent plays no role. Multiplication or division by two doesn't change the significand at all, only the exponent, so if the rounding was correct before, it is still correct after the multiplication. Or to put it another way: PI in binary is a infinitely long string of 1s and zeros. Multiplying it by two only shifts the string left and right, it doesn't change any of the 1s to 0s, etc, so the approximation doesn't change either. (I think this is why the constants in math.d are each defined separately rather than in terms of each other.) Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in terms of PI. Probably should fix that. Jul 05 2011 Don <nospam nospam.com> James Fisher wrote: On Tue, Jul 5, 2011 at 8:49 PM, Don <nospam nospam.com <mailto:nospam nospam.com>> wrote: James Fisher wrote: On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher gmail.com <mailto:jameshfisher gmail.com> <mailto:jameshfisher gmail.com <mailto:jameshfisher gmail.com>__>> wrote: Sorry, I didn't state this very clearly. Multiplying the approximation of PI in std.math should yield the exact double of that approximation, as it should just involve increasing the exponent by 1. However, [double the approximation of the is not necessarily equal to [the approximation of double the constant]. Does that make sense? I understand what you're getting at, but actually multiplication by powers of 2 is always exact for binary floating point numbers. The reason is that the rounding is based on the values after the lowest bit of the _significand_. The exponent plays no role. Multiplication or division by two doesn't change the significand at all, only the exponent, so if the rounding was correct before, it is still correct after the multiplication. Or to put it another way: PI in binary is a infinitely long string of 1s and zeros. Multiplying it by two only shifts the string left and right, it doesn't change any of the 1s to 0s, etc, so the approximation doesn't change either. Great explanation, thanks. (I think this is why the constants in math.d are each defined separately rather than in terms of each other.) Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in terms of PI. Probably should fix that. Another thing -- why are some constants defined in decimal, others in hex, and one (E) with the long 'L' suffix? The ones defined in decimal are obsolete, they haven't had a conversion to hex yet. And is there a significance to the number of decimal/hexadecimal places -- e.g., is this the minimum places required to ensure the closest floating point value for all common hardware accuracies? Yes, it's 80 bit. Currently there's a problem with DMC's floating-point parser, all those numbers should really be 128 bit (we should be ready for 128 bit quads). 
Jul 05 2011
Walter Bright <newshound2 digitalmars.com>
On 7/5/2011 3:45 PM, Don wrote:
> Another thing -- why are some constants defined in decimal, others in hex, and one (E) with the long 'L' suffix?
The ones defined in decimal are obsolete, they haven't had a conversion to hex yet. The ones in hex I got out of a book that helpfully printed them as octal values. I wanted exact bit patterns, not decimal conversions that might suffer if there's a flaw in the lexer. It's hard to come by textbook values for some of these that are high precision. It's definitely not good enough to just write some simple fp program to generate them.

Jul 05 2011
KennyTM~ <kennytm gmail.com>
On Jul 6, 11 06:59, Walter Bright wrote:
> On 7/5/2011 3:45 PM, Don wrote:
>> Another thing -- why are some constants defined in decimal, others in hex, and one (E) with the long 'L' suffix?
> The ones defined in decimal are obsolete, they haven't had a conversion to hex yet. The ones in hex I got out of a book that helpfully printed them as octal values. I wanted exact bit patterns, not decimal conversions that might suffer if there's a flaw in the lexer. It's hard to come by textbook values for some of these that are high precision. It's definitely not good enough to just write some simple fp program to generate them.

Jul 05 2011
On 7/5/2011 11:12 PM, KennyTM~ wrote:
> On Jul 6, 11 06:59, Walter Bright wrote:
>> It's definitely not good enough to just write some simple fp program to generate them.

Jul 05 2011
On Tue, Jul 5, 2011 at 12:15 PM, Steven Schveighoffer <schveiguy yahoo.com> wrote:
> On Tue, 05 Jul 2011 04:31:09 -0400, James Fisher <jameshfisher gmail.com> wrote:
>> Hopefully this won't be taken as frivolous. I (and possibly some of you) have been convinced by the argument at http://tauday.com/. It's very convincing, and I won't rehash it here.
>> The use of τ instead of π will only become really convenient when one does not have to preface everything with "let τ = 2π".
>> For example, in D, in order to think in terms of τ instead of π, one must define `enum real TAU = std.math.PI * 2;`, and possibly also TAU_2, TAU_4, etc.
>> As well as being a typing inconvenience, I also think things are not that easy due to loss of precision (though I'm far from an expert on intricacies of floating point).
>> There is an initiative to add TAU to the Python standard library: http://www.python.org/dev/peps/pep-0628/
>> To this end, I suggest adding the constant TAU to std.math, and possibly also TAU_2 as an alias for PI, TAU_4 as an alias for PI_2, TAU_8 as PI_4, etc.
>> In any case, I'd like to know what's necessary in order for me to define these constants without loss of precision.
> I read an article about this recently, it's definitely interesting. The one place where I haven't seen it mentioned is what happens when you want the area of a circle, since that necessarily involves the radius. I'd guess you'd have to use τ/2 * r^2, but even then, that's one formula vs. the rest. It's probably a good tradeoff. I can definitely see the advantage when using radians. Never thought I'd have to re-learn trig again...
> One thing I like about Pi vs Tau is that it cannot be mistaken for a normal character.
> I'm not a floating point expert, but I would expect since floating point is stored in binary, dividing or multiplying by 2 loses no precision at all. But I could be wrong...
Sorry, I didn't state this very clearly. Multiplying the approximation of PI in std.math should yield the exact double of that approximation, as it should just involve increasing the exponent by 1. However, [double the approximation of the constant] is not necessarily equal to [the approximation of double the constant]. Does that make sense?

Jul 05 2011
On Tue, Jul 5, 2011 at 12:31 PM, James Fisher <jameshfisher gmail.com> wrote:
> Sorry, I didn't state this very clearly. Multiplying the approximation of PI in std.math should yield the exact double of that approximation, as it should just involve increasing the exponent by 1. However, [double the approximation of the constant] is not necessarily equal to [the approximation of double the constant]. Does that make sense?
(I think this is why the constants in math.d <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206> are each defined separately rather than in terms of each other.)

Jul 05 2011
"James Fisher" <jameshfisher gmail.com> wrote in message news:mailman.1426.1309854678.14074.digitalmars-d puremagic.com...
> Hopefully this won't be taken as frivolous. I (and possibly some of you) have been convinced by the argument at http://tauday.com/. It's very convincing, and I won't rehash it here.
He had me at "TAU == 2PI". I'm sold.

Jul 05 2011
On Tue, Jul 5, 2011 at 12:15 PM, Steven Schveighoffer <schveiguy yahoo.com> wrote:
> I read an article about this recently, it's definitely interesting. The one place where I haven't seen it mentioned is what happens when you want the area of a circle, since that necessarily involves the radius. I'd guess you'd have to use τ/2 * r^2, but even then, that's one formula vs. the rest. It's probably a good tradeoff. I can definitely see the advantage when using radians. Never thought I'd have to re-learn trig again...
It embarrasses me to say that, after many years, working with radians and pi still makes my head hurt. "So I have to multiply -- no wait, divide -- no wait, multiply that by 2 ..."
Jul 05 2011
On Tue, Jul 5, 2011 at 8:49 PM, Don <nospam nospam.com> wrote:
> James Fisher wrote:
>> Sorry, I didn't state this very clearly. Multiplying the approximation of PI in std.math should yield the exact double of that approximation, as it should just involve increasing the exponent by 1. However, [double the approximation of the constant] is not necessarily equal to [the approximation of double the constant]. Does that make sense?
> I understand what you're getting at, but actually multiplication by powers of 2 is always exact for binary floating point numbers. The reason is that the rounding is based on the values after the lowest bit of the _significand_. The exponent plays no role. Multiplication or division by two doesn't change the significand at all, only the exponent, so if the rounding was correct before, it is still correct after the multiplication. Or to put it another way: PI in binary is an infinitely long string of 1s and zeros. Multiplying it by two only shifts the string left and right, it doesn't change any of the 1s to 0s, etc, so the approximation doesn't change either.
Great explanation, thanks.
>> (I think this is why the constants in math.d <https://github.com/D-Programming-Language/phobos/blob/master/std/math.d#L206> are each defined separately rather than in terms of each other.)
> Hmm. I'm not sure why PI_2 and PI_4 are there. They should be defined in terms of PI. Probably should fix that.
Another thing -- why are some constants defined in decimal, others in hex, and one (E) with the long 'L' suffix? And is there a significance to the number of decimal/hexadecimal places -- e.g., is this the minimum places required to ensure the closest floating point value for all common hardware accuracies?
Jul 05 2011
{"url":"http://www.digitalmars.com/d/archives/digitalmars/D/std.math.TAU_140035.html","timestamp":"2014-04-21T15:04:57Z","content_type":null,"content_length":"49845","record_id":"<urn:uuid:11ba99a6-6bd4-4b73-a5e3-ea414118ede1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
I am in desperate need of help with various math problems.
July 1st 2012, 10:11 AM #1
I am in desperate need of help with various math problems. Please help me: I am taking Praxis II and I need some help with a few math problems (mid-level). I am sure that these are simple enough, but I want to make sure that I understand them. I took a practice test and I thought that I had done fairly well on it, but I missed 8 out of 30! I have a lot riding on this test: I have one chance to take it before the fall, and if I do not pass, it will delay my graduation for one year. I feel really dumb; I have A's in all of my math classes, but this test seems to be hard for me. I will think that I have a grasp on it, but I have already failed it once. All help is appreciated, and please make sure to explain your answer. Thank you!
1.) The population of country X is 0.36 billion persons, and the population of country Z is 12 million persons. The population of country X is how many times that of country Z?
2.) The 40 eggs in a box weigh a total of x pounds. What is the average weight per egg for the eggs in the box, in ounces?
a) x/640 b) 56/x c) 56x d) 2x/5
3.) Shows a right triangle: side a = x, side b = 4, side c = w. What is x^2 in terms of w?
4.) Shows a graph with a line segment with the coordinates A(1,1) and B(4,4). If line segment AB is flipped over the x-axis such that point A becomes point C and point B becomes point D, what are the coordinates for C and D?
5.) It shows three circles with one triangle that goes through all three of them, arranged in such a way that the sides of the triangle are made up of the radii of the circles. The problem: Each circle is tangent to the other two circles. The points X, Y, and Z are the centers of the circles. If the circles have radii of 3, 4, and 5, respectively, what is the perimeter of triangle XYZ?
6.) P = XY. What would happen to the value of P in the equation above if the value of X was increased by 10% and the value of Y was decreased by 10%?
7.) A 4-inch by 6-inch photograph is to be enlarged proportionally so that the 4-inch side expands to 5 inches. What would be the new length, in inches, of the 6-inch side of the photograph?
8.) A set consists of the letters A, B, C, D, E, and F. If order is not important, how many two-letter subsets are possible?
I know this is a lot of questions, but I really need help to make sure that I understand how to work them. I really do appreciate your time and help. Thank you in advance.

July 1st 2012, 10:55 AM #2
Re: I am in desperate need of help with various math problems.
Since you didn't tell us where and what your difficulties are, I can give you only hints about what you should and can do.
> 1.) The population of country X is 0.36 billion persons, and the population of country Z is 12 million persons. The population of country X is how many times that of country Z?
Convert billions into millions and then divide popX by popZ.
> 2.) The 40 eggs in a box weigh a total of x pounds. What is the average weight per egg for the eggs in the box, in ounces?
Convert x pounds into ounces and divide this value by 40.
> 3.) Shows a right triangle: side a = x, side b = 4, side c = w. What is x^2 in terms of w?
Use the Pythagorean theorem. Make sure that you know which side of the triangle is the hypotenuse. (That's the longest side of a right triangle.)
> 4.) ... If line segment AB is flipped over the x-axis such that point A becomes point C and point B becomes point D, what are the coordinates for C and D?
Draw a sketch. Read the new coordinates from your drawing.
> 5.) ... If the circles have radii of 3, 4, and 5, respectively, what is the perimeter of triangle XYZ?
Draw a sketch. You should notice that one side of the triangle is the sum of two of the three radii.
> 6.) P = XY. What would happen to the value of P if the value of X was increased by 10% and the value of Y was decreased by 10%?
There are three possible cases: X > Y, X = Y, X < Y. Choose values for X and Y (for instance: X = 100, Y = 50; X = 100, Y = 100; X = 50, Y = 100), do the required calculations and see what happens to P.
> 7.) A 4-inch by 6-inch photograph is to be enlarged proportionally so that the 4-inch side expands to 5 inches. What would be the new length, in inches, of the 6-inch side of the photograph?
Determine the $enlargement\ factor = \frac{\text{length of new side}}{\text{length of old side}}$. Use this factor on the 2nd side.
> 8.) A set consists of the letters A, B, C, D, E, and F. If order is not important, how many two-letter subsets are possible?
This question is unclear to me: Is AA a valid subset?
> I know this is a lot of questions, <--- this is the greatest understatement of the whole thread
> but I really need help to make sure that I understand how to work them. I really do appreciate your time and help. Thank you in advance.

July 2nd 2012, 08:05 AM #3
Re: I am in desperate need of help with various math problems.
This has already been answered well, but as an extra point: for 1) make sure you take care in checking what your definition of billion is (i.e. 10^9 or 10^12).
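Two of the hints, worked through for concreteness (added here; not part of the original thread). For question 2, with 16 ounces to the pound,
$$\frac{16x\ \text{oz}}{40\ \text{eggs}} = \frac{2x}{5}\ \text{oz per egg (choice d).}$$
For question 5, each side of the triangle is a sum of two radii, so the perimeter is $(3+4)+(4+5)+(5+3)=24$.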
{"url":"http://mathhelpforum.com/math-topics/200541-i-am-desperate-need-help-various-math-problems.html","timestamp":"2014-04-17T10:56:18Z","content_type":null,"content_length":"43677","record_id":"<urn:uuid:30348c27-6a97-4caf-9bf8-234f7d1aa574>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Nested sums (sigma notation)
February 4th 2008, 08:11 PM #1
I am having difficulty with converting nested sigma notations into closed forms.
sum [from i=3 to n] ( i * sum [from j=i+1 to n] (3) )
I apologize for the notation but I hope I portrayed the question. For the inner one, I thought that it would be i(n - (i+1) + 1)3 = 3i(n-i) {which I thought is the closed form for a constant}, then substitute into the outer one. Any help or explanation much appreciated.

February 4th 2008, 09:20 PM #2
> I am having difficulty with converting nested sigma notations into closed forms.
> sum [from i=3 to n] ( i * sum [from j=i+1 to n] (3) )
$3i(n - i) = 3ni - 3i^2$
$\sum_{i=3}^n (3ni - 3i^2) = 3n \sum_{i=3}^n i - 3 \sum_{i=3}^n i^2$
......

February 4th 2008, 09:40 PM #3
The inner summation is multiplied by i. Do I multiply it through before I calculate the outer? So therefore 3n sum(i^2) - 3 sum(i^3)? Then use the closed forms for ^2 and ^3 respectively? Thanks for what you have given so far, and once again I apologize for the notation.
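Carrying the reply's decomposition through (an added worked continuation, not from the thread), using the standard closed forms $\sum_{i=1}^n i = \frac{n(n+1)}{2}$ and $\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$ and subtracting the $i=1,2$ terms:
$$\sum_{i=3}^{n}\bigl(3ni-3i^2\bigr)=3n\left(\frac{n(n+1)}{2}-3\right)-3\left(\frac{n(n+1)(2n+1)}{6}-5\right)=\frac{n(n+1)(n-1)}{2}-9n+15.$$
As a check, at $n=3$ the inner sum is empty and the formula gives $\frac{3\cdot 4\cdot 2}{2}-27+15=0$.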
{"url":"http://mathhelpforum.com/algebra/27488-nested-sums-sigma-notation.html","timestamp":"2014-04-19T00:42:48Z","content_type":null,"content_length":"36217","record_id":"<urn:uuid:13505595-f001-40e7-b191-619a96769f31>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Nash Equilibrium in Mixed Strategies
Mixed Strategy Equilibrium
In many games players choose unique actions from the set of available actions. These are called pure strategies. In some situations, though, a player may want to randomise over several actions. If a player is choosing which action to play randomly, we say that the player is using a "mixed strategy" as opposed to a pure strategy. In a pure strategy a player chooses an action for sure, whereas in a mixed strategy, he chooses a probability distribution over the set of actions available to him.
Consider the following game of matching pennies. (The payoff matrix is missing from this copy of the page; the standard matching-pennies payoffs, consistent with the calculations below and with Player 1's payoff listed first, are: HH and TT give (1, -1), while HT and TH give (-1, 1).) Note that this game does not have a pure strategy Nash equilibrium: for any pair of pure strategies that the two players choose, one player will receive a negative payoff and hence want to change her strategy choice. So game theorists allow players to have mixed strategies. In particular, let each player play H and T with one-half probability each. We claim then that this choice of strategies constitutes an equilibrium, in the sense that if each player predicts that the other player will play in this manner, then he has no reason not to play in the specified manner.
Since player 2 plays H with probability ½, the expected payoff of player 1 if he plays H is (1/2)(1) + (1/2)(-1) = 0. Similarly the expected payoff to action T is 0. Therefore player 1 has no reason to deviate from playing H and T with probability ½ each. Similarly, if player 2 predicts that player 1 will play H and T with one-half probability each, he has no reason to deviate from doing the same. We say that Player 1 and Player 2 each playing H and T with probabilities ½ and ½ constitutes a mixed strategy equilibrium of the game.
If we assume that players repeatedly play this game and forecast each other's action on the basis of past play, then each player actually has an incentive to adopt a mixed strategy with these probabilities. If, for example, player 1 plays H constantly rather than the above mixed strategy, then it is reasonable that player 2 will come to expect him to play H again and play his best response, which is T. This will result in player 1 getting -1 as long as he continues playing H. Therefore he should try to be unpredictable, for as soon as his opponent is able to predict his actions he will be able to take advantage of the situation. Similarly player 2 must be unpredictable in order to avoid losing while playing this game.
A mixed strategy for player i is a probability distribution over his set of available actions. In other words, if player i has m actions available, a mixed strategy is an m-dimensional vector $(\alpha_1^i, \alpha_2^i, \ldots, \alpha_m^i)$ such that $\alpha_k^i \ge 0$ for all $k = 1, 2, \ldots, m$, and $\sum_{k=1}^{m} \alpha_k^i = 1$.
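The equilibrium logic above can be checked in general (a short addition, not from the original page): if player 2 plays H with probability $q$, player 1's expected payoffs are
$$E[\text{play H}] = q(1) + (1-q)(-1) = 2q-1, \qquad E[\text{play T}] = q(-1) + (1-q)(1) = 1-2q,$$
and these are equal only at $q = \tfrac{1}{2}$, which is exactly the mixing probability derived above.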
{"url":"http://www.econport.org/content/handbook/gametheory/useful/equilibrium/nashmixed.html","timestamp":"2014-04-18T03:07:32Z","content_type":null,"content_length":"10340","record_id":"<urn:uuid:219e581c-6c02-4950-98c2-711c07efe261>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Six identical capacitors with capacitance C are connected as shown...
Introduction: Physics (Electricity) Capacitance Question
More Details: Six identical capacitors with capacitance C are connected as shown in the figure below (the figure is not reproduced in this copy). What is the equivalent capacitance of these six capacitors?
A. 3/2 C
B. 4/3 C
C. 2/3 C
D. 3 C
E. 6 C
{"url":"http://www.thephysics.org/43574/six-identical-capacitors-with-capacitance-connected-shown","timestamp":"2014-04-16T04:10:46Z","content_type":null,"content_length":"107281","record_id":"<urn:uuid:27533efc-4df3-4b62-8713-cbb68d85eff7>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Below is the C code for most of the programs that appear in HD, with test drivers. To save this code on your machine using Internet Explorer, right-click a line below, click Save Target As..., set directory, delete ".txt" from the file name (optional), and click Save. On Firefox it's the same except click Save Link As... . Page and figure numbers are for the second edition.
Fig. 2-1. Next higher number with same number of 1-bits.
Fig. 2-2. Determination of overflow of unsigned multiplication.
P. 36. Test for overflow of signed "long division."
---. Determines which of the 256 Boolean functions of three variables can be implemented with three binary Boolean instructions.
Fig. 3-1. Greatest power of 2 less than or equal to x, branch free.
Fig. 3-3. Least power of 2 greater than or equal to x.
Fig. 4-1. Propagating unsigned bounds through addition operations.
Fig. 4-1. Propagating unsigned bounds through subtraction operations.
Fig. 4-2. Propagating signed bounds through an addition operation.
Fig. 4-3. Minimum value of x | y with bounds on x and y.
Fig. 4-4. Maximum value of x | y with bounds on x and y.
Fig. 4-5. Minimum value of x & y with bounds on x and y.
Fig. 4-6. Maximum value of x & y with bounds on x and y.
P. 77. Minimum value of x ^ y with bounds on x and y.
P. 77. Maximum value of x ^ y with bounds on x and y.
P. 78. Minimum value of x | y with bounds on x and y (signed).
P. 78. Maximum value of x | y with bounds on x and y (signed).
P. 82-87. Population count algorithms.
Fig. 5-5. Computes pop(x) − pop(y).
Fig. 5-6. Determines which word has the larger population count.
Fig. 5-5 p. 73 of HD 1st ed. Counting 1-bits in an array.
Figs. 5-7 and 5-9. Counting 1-bits in an array.
P. 95. Indexing a moderately sparse array.
P. 96-97. Various parity algorithms.
P. 99-106. Number of leading zeros algorithms.
P. 107-113. Number of trailing zeros algorithms.
Fig. 5-28. Gosper's loop-detection algorithm.
P. 117-122. Find first 0-byte or first value in a given range.
---. Find rightmost 0-byte.
P. 123 and Fig. 6-5. Find first string of 1-bits of a given length.
Figs. 6-6 and 6-7. Find longest string of 1's in a word.
Fig. 6-8. Find shortest string of 1's in a word (optionally, at least as long as a given integer n).
P. 129-134. Reversing bits. New code here.
P. 138. Incrementing a reversed integer.
P. 140-141. Shuffling bits.
P. 143-145. Transposing an 8x8 bit matrix.
P. 147-150. Transposing a 32x32 bit matrix.
P. 151-153. Compress, or generalized extract.
P. 156. Compress left.
Fig. 7-12. Inverse of the compress (right) function.
P. 162. Permuting by sheep and goats operation, little-endian.
P. 162. Permuting by sheep and goats operation, big-endian.
Fig. 8-1. Multiword integer multiplication, signed.
Fig. 8-1 top portion. Multiword integer multiplication, unsigned.
Fig. 8-2. Multiply high signed.
P. 174. Multiply high unsigned.
---. Multiply, 32x32 ==> 64, signed.
---. Multiply, 32x32 ==> 64, unsigned.
---. Multiply, 64x64 ==> 128, unsigned.
Fig. 9-1. Multiword integer division, unsigned, for a 32-bit machine.
Like Fig. 9-1. Multiword integer division, unsigned, for a 64-bit machine.
Fig. 9-2. Divide long unsigned, shift-and-subtract algorithm.
Fig. 9-3. Divide long unsigned, using fullword division instruction.
Fig. 9-4. Divide long signed, using divide long unsigned.
Fig. 9-5. 64/64 ==> 64 division, unsigned and signed.
Fig. 10-1. Computing the magic number for signed division.
Figs. 10-2 and 10-3. Computing the magic number for unsigned division.
---. Computing the magic number for signed division (Python).
Fig. 10-4. Computing the magic number for unsigned division (Python).
Figs. 10-4 and 10-5. Multiplicative inverse modulo 2^32.
Figs. 10-7 to 10-15. Unsigned division by constants.
Figs. 10-16 to 10-22. Signed division by constants.
Figs. 10-23 to 10-29 and 10-34 to 10-41. Unsigned remainder of division by constants.
Figs. 10-30 to 10-33 and 10-42 to 10-46. Signed remainder of division by constants.
Figs. 10-47 to 10-49. Exact division method of division by constants.
Figs. 11-1 to 11-4. Integer square root.
Fig. 11-5. Integer cube root, hardware algorithm.
P. 434. Integer cube root of a 64-bit integer.
Fig. 11-6. Computing x^n by binary decomposition of n.
Figs. 11-7 to 11-13. Integer log base 10.
Fig. 12-1. Division in base −2.
P. 436. Convert a base −1 + i integer to a + bi, a and b real integers (Python).
Figs. 14-5 to 14-7. Cyclic Redundancy Check (CRC-32).
Figs. 15-1 and 15-2. SEC-DED Hamming code for 32 information bits.
Fig. 16-3. Hilbert curve generator.
Fig. 16-4. Driver program for Hilbert curve generator.
Fig. 16-5. Program for computing (x, y) from s.
Fig. 16-6. Lam and Shapiro method for computing (x, y) from s.
P. 363. Variation of Fig. 14-6 that avoids a branch.
Fig. 16-7. Code to verify the logic circuit of Fig. 14-7. NB error in first edition, first and second printings (Fig. 14-7): The title of this figure should be "Logic circuit for computing (x, y) from s."
Fig. 16-8. Parallel prefix method for computing (x, y) from s.
Fig. 16-9. Program for computing s from (x, y).
Fig. 16-10. Lam and Shapiro method for computing s from (x, y).
---. Variation of Fig. 16-10 that avoids a branch.
Fig. 16-11. Program for taking one step on the Hilbert curve.
Fig. 16-12. Code to verify the logic circuit of Fig. 14-12.
---. Right-to-left algorithm for computing s from (x, y).
Some Hilbert curves (not a program).
Fig. 17-1. Approximate reciprocal square root of an IEEE float.
P. 447. Approximate square root of an IEEE float.
P. 448. Approximate cube root of an IEEE float.
---. Approximate reciprocal of an IEEE double.
---. Montgomery multiplication.
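To give a flavor of the population-count material listed above, here is a widely known divide-and-conquer counting routine (a common textbook variant; not code copied from the book's files):

```c
#include <stdint.h>

/* Count the 1-bits in x by summing bit pairs, then nibbles, then bytes.
   Textbook-style variant, not copied from Hacker's Delight. */
uint32_t pop(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555);                /* 2-bit field sums */
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333); /* 4-bit field sums */
    x = (x + (x >> 4)) & 0x0F0F0F0F;                /* 8-bit field sums */
    return (x * 0x01010101) >> 24;                  /* add the four bytes */
}
```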
{"url":"http://www.hackersdelight.org/hdcode.htm","timestamp":"2014-04-21T15:56:41Z","content_type":null,"content_length":"10842","record_id":"<urn:uuid:e359821b-7077-46fb-9376-4c70c3e3cbf5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Recursion Help
I am reading about recursion and I understand that there are two versions of it: indirect and direct, and I understand that part. However, when I try to apply recursion to a problem (involving fractions) I get confused and start getting ........ed off at how I cannot wrap my mind around it... Any ideas on how to learn this?

Write your answer without recursion to begin with. There really isn't anything mysterious, it is just a function call disguised as a loop. Eg, two strlen functions.

    int mylen ( const char *s )
    {
        int i;
        for ( i = 0 ; *s++ ; i++ );
        return i;
    }

    int mylen ( const char *s )
    {
        if ( *s == '\0' )
            return 0;
        else
            return 1 + mylen(s+1);
    }

In the case of strlen, the base case is when the length is zero, and the answer is therefore "obvious". Otherwise, the recursive step is to solve part of the problem (string is at least 1 char long), then call recursively to work out the length of the tail of the string.
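Since the original poster mentions fractions, one plausible exercise of that kind (an added sketch, not from the thread; the connection to the poster's actual assignment is an assumption) is reducing a fraction with a recursive greatest-common-divisor routine:

```c
#include <stdio.h>

/* Hypothetical example; the thread never shows the poster's problem.
   Euclid's algorithm. Base case: gcd(a, 0) = a.
   Recursive step: gcd(a, b) = gcd(b, a mod b). */
int gcd(int a, int b)
{
    if (b == 0)
        return a;
    return gcd(b, a % b);
}

int main(void)
{
    int num = 8, den = 12, g = gcd(num, den);
    printf("%d/%d reduces to %d/%d\n", num, den, num / g, den / g);
    return 0;
}
```

The shape matches the strlen example above: a base case with an obvious answer, and a recursive step that shrinks the problem.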
{"url":"http://cboard.cprogramming.com/cplusplus-programming/140064-recursion-help-printable-thread.html","timestamp":"2014-04-18T20:05:27Z","content_type":null,"content_length":"7168","record_id":"<urn:uuid:278021ac-dc9c-4a92-b470-049f59c0645d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
DFM/UNITAR e-Learning for Debt and Finance Managers
This online course will involve a mix of self-study and online interaction, culminating in a practical understanding of the derivative markets through online group work. Throughout the duration of the course, participants will go through theoretical and conceptual material prepared by UNITAR and will have an opportunity to relate it to real-life situations through online discussions and peer-to-peer interaction. There will be a quiz/assignment at the end of the course, which is a requirement for obtaining a course certificate.

Module 1 Derivative Markets: Context
A description (including illustrations) of the elements that constitute the financial system is presented. This is the setting of the derivative markets. The underlying instruments of the financial derivative markets, as well as their prices / indices, are found in the various financial markets.

Module 2 Derivative Markets: Forwards
This module presents detailed descriptions of the characteristics of the various types of forward contracts. Forward contracts are found in the debt, equity, forex and commodity markets. The organisational structure of the forward markets is elucidated, as well as the underlying pricing principles.

Module 3 Derivative Markets: Futures
This module is given much attention because of the significance of futures markets around the world. It dissects the definition of futures, and describes all aspects of the futures market, including types of futures contracts (found in all markets), organisational structure, margining, cash versus physical settlement, the pricing of futures (which differs according to the underlying instrument), the participants, the economic significance of futures markets and so on.

Module 4 Derivative Markets: Swaps
In a swap contract certain cash flows are exchanged for other cash flows (for example fixed for floating) based on a notional amount that is not exchanged. In the case of a hedger the notional amount will mirror a cash market position. This module covers all swaps: interest rate swaps, currency swaps, equity swaps and commodity swaps. It also covers the organisational structure of the generic swap market.

Module 5 Derivative Markets: Options
An option, as the name suggests, is a right, without the obligation, to buy or sell the underlying instrument. This is done (called exercise) if it is profitable for the holder (buyer) to do so. Otherwise the holder lets the option lapse. The price paid for the option is called a premium because it is akin to an insurance policy; this, i.e. option pricing, is given much attention. The organisational structure of the generic options market is also covered, as are the various options markets: on debt, forex, currencies, commodities and other derivatives (futures, swaps and so on).

Module 6 Derivative Markets: Other Derivatives
There are a number of other derivative instruments, such as securitisation assets, credit derivatives, weather derivatives, insurance derivatives, electricity derivatives and so on. The main ones are the first three. They are covered in some detail.
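A hypothetical numerical illustration of Module 4's fixed-for-floating exchange (the figures are invented for illustration and are not from the course): on a notional of 1,000,000 with a 5% fixed leg against a floating leg that sets at 4% for the period, only the net difference changes hands:
$$(5\% - 4\%) \times 1{,}000{,}000 = 10{,}000,$$
paid by the fixed-rate payer for that period; the notional itself is never exchanged.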
{"url":"http://www2.unitar.org/dfm/DFMelearning/Courses/FDM/FDMCourseInfo2.htm","timestamp":"2014-04-18T00:15:06Z","content_type":null,"content_length":"26986","record_id":"<urn:uuid:422b3d47-06bf-45c2-940a-bdeb62dc0e36>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 2005
Re: Grassmann Calculus / Indexed Objects / Simplify
• To: mathgroup at smc.vnet.net
• Subject: [mg60839] Re: Grassmann Calculus / Indexed Objects / Simplify
• From: Robert Schoefbeck <schoefbeck at hep.itp.tuwien.ac.at>
• Date: Fri, 30 Sep 2005 03:57:08 -0400 (EDT)
• References: <dhdank$8ae$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

Sorry for being rather unclear in my initial post. David Park told me about his tensorial package to do index calculations; I hope I can learn how to handle dummies properly in simplifications from his code. But I'd also like to rephrase my problem in a more understandable way:
(Using Einstein conventions all along.) For a short version of the actual Mathematica stuff scroll down to (XXX).
I have indexed objects theta^a, lambda^a, ... where ^ means an upper index and _ means a lower index. The (two-valued) indices a, b, ... are pulled with an anti(!)-symmetric matrix eps^{a,b}:
eps^{1,1} = eps^{2,2} = 0, eps^{1,2} = -eps^{2,1} = 1
theta^a = eps^{a,b} theta_b
theta_a = eps_{a,b} theta^b
and by consistency eps_{a,b} eps^{b,c} = delta_a^c.
Furthermore, theta^1 and theta^2 anti-commute, that is, any element squares to zero:
theta^1*theta^1 = 0, ...
theta^1*theta^2 = -theta^2*theta^1
(theta could be replaced by lambda, ...)
Note that theta^a theta_a = theta^1 theta_1 + theta^2 theta_2 = -2 theta^1 theta^2.
A Superfield is an object
SF = phi + lambda^a theta_a + F theta^a theta_a
where phi, lambda and F are x-dependent fields and theta is constant. Furthermore, theta and lambda are odd as above (that is, exchanging two odd objects produces a minus, i.e. theta_1 lambda_2 = -lambda_2 theta_1 and so on).
Since any two thetas (with the same index in the same position) square to zero, one finds that powers of Superfields always have a finite decomposition in the theta variables. Due to the limited index range (a = 1,2) one has theta^a theta^b theta^c = 0.
My problem is as follows: I want to compute arbitrary powers of superfields (that already works, it is MyTimes[CSF,CSF] in my code in the initial post) and I want to simplify them with all due care on the dummies. For simplifications I have to tell Mathematica, for example, that X^a Y_a = -X_a Y^a for any two indices on any objects X and Y (due to the antisymmetry of the metric), and that in X^a Y_a U^b V_b it is IMportant that X, Y as well as U, V have the same index, that it is UNimportant that they are called a and b, but that it IS important that a and b are different.
In my code I use Unique[] to generate the indices because it must not happen that an index occurs more than twice in an expression. Unfortunately there is then a problem with products of objects generated in this way: it frequently happens that terms like
theta^a$1 theta_a$1 + theta_a$2 theta^a$2
appear. Due to the rule above, this expression equals zero. Mathematica would for example have to take out an eps^{a$2, a$3} in theta^a$2, flip the indices of eps, thereby acquire a minus, contract the eps with theta_a$2 and then realize that
theta^a$1 theta_a$1 - theta^a$2 theta_a$2
is zero since the sums are the same, never mind the dummy.
In terms of formulas:
theta^a theta_a + theta^b theta_b
Mathematica: "let's try this"
theta^a theta_a + theta^b delta_b^c theta_c
Mathematica: "now that was not enough, let's split delta in two eps's"
theta^a theta_a + theta^b eps_b^c eps_c^d theta_d
Mathematica: "that reminds me of something..."
theta^a theta_a - eps^c_b theta^b eps_c^d theta_d
theta^a theta_a - theta^c eps_c^d theta_d
Mathematica: "My brain hurts..."
theta^a theta_a - theta^c theta_c
Mathematica: "If only c was a... dammit. Wait! I WAS TOLD HOW TO TREAT
Out[1]: 0
I don't yet know how to do that, though this problem is addressed in David Park's package; although there seems (I'm not sure) to be a problem with that package, 'tensorial 3', as index flipping always comes with a plus sign. I'd hence also be interested to know if that can be changed in the code.
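For reference, the identity the poster wants applied can be written out once (an added note, using the poster's antisymmetric metric and nothing else):
$$X^a Y_a = \varepsilon^{ab} X_b Y_a = -\varepsilon^{ba} X_b Y_a = -X_b Y^b,$$
so $\theta^{a}\theta_{a} + \theta_{b}\theta^{b} = \theta^{a}\theta_{a} - \theta^{b}\theta_{b} = 0$, whatever the dummy indices are called.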
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Sep/msg00768.html","timestamp":"2014-04-20T03:28:08Z","content_type":null,"content_length":"38099","record_id":"<urn:uuid:5b6f7f9e-05e2-465e-b660-13bb89290cd6>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
What Will You Learn? What Skills Will You Acquire?
You will learn how to reason your way through unfamiliar territory, find familiar structures, make predictions and answer important questions. Mathematics is central to science, engineering, finance, insurance and computing for precisely these reasons. You will see many mathematical structures and understand how they apply to many pursuits. In your senior research project, you will apply your knowledge to a single challenging problem. These projects have included choosing optimal immunization strategies for a heterogeneous network, proving a matrix decomposition resulting from a multiplicative identity, using statistical mechanics to properly value financial options, and proving a relationship between offensive and defensive performance and the eventual outcome of a baseball game.
From Western's Calculus 3 course: Students built parametric curves describing ski slopes and their likely paths.
Beyond the Classroom
The Math department has one of the most active student communities on campus. As a math student, you will enjoy a pre-built learning community for your schoolwork, as well as many social and scholarly activities organized by the faculty. In the fall, we host a welcome-back barbecue with a kickball game. Each fall the department has a barbecue to welcome our returning students back to campus and to help our new students become part of our community. Both of these events are open to all students and faculty interested in mathematics or computer science, their friends and family. In the winter, students in the department attend the Pikes Peak Regional Undergraduate Mathematics Conference, where our seniors present their research projects. Some years, we have an ice-climbing outing. Others, it is a skating party. The high point of our spring social calendar is the annual MCIS banquet. The faculty caters this affair, so you will see we are not only excellent scholars and kickball players, but we can also make a mean pan of mac 'n cheese.
But life is not all kickball and casseroles. For professional advancement, we help show you what we do and point you to opportunities. The math seminar meets at noon Monday, Wednesday or Friday. At these gatherings, faculty members show what they are working on and seniors present their research projects. In February, we load up a couple of vans and travel to the Pikes Peak Regional Undergraduate Mathematics Conference, both to show off what our seniors have done and to see what others are doing.
Many of our students use the summer months to pursue internships or other advanced training. Our students have successfully landed positions in summer workshops offered by the Institute for Advanced Study and in Research Experiences for Undergraduates (REUs) offered by the National Science Foundation.
After Graduation
As a Western Mathematics graduate, you will find opportunities both within the field and in other pursuits. Our graduates have earned advanced degrees in math, engineering, geology and architecture. Our graduates are working toward master's degrees and PhDs with full financial support. If you want to teach, you will be in great demand. There are many programs that will allow you to begin teaching and complete your licensure requirements online. According to a recent survey, even our graduates who are not working in the field said they used math "almost every day" in their chosen professions. Common wisdom suggests that a math degree pays off commensurately with the level you rise to in your profession.
{"url":"http://www.western.edu/academics/undergraduate-programs/mathematics","timestamp":"2014-04-21T10:00:55Z","content_type":null,"content_length":"95286","record_id":"<urn:uuid:9fc850a1-1d1b-4f39-9cb5-1f16ac275a3c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Chevy Chase
Find a Chevy Chase Calculus Tutor
...I have been using Windows based computers for the past 20 years. I am familiar with the Windows UI. I have 5 years of experience working with Mathematica.
27 Subjects: including calculus, physics, geometry, algebra 1
...When needed, I use real-life examples or create my own problems on the side, which I feel illustrate the academic concept being discussed more clearly. And when discussing complex concepts, I start by using simple examples that the student understands easily. I am very patient and persistent.
14 Subjects: including calculus, chemistry, physics, geometry
...When I was teaching the course, I stressed conceptual understanding in addition to general approaches to solving problems. When I was in high school, I was introduced to Book 1 of Euclid's Elements during a class. In addition, I took a class in geometry with practice in proofs.
15 Subjects: including calculus, geometry, statistics, algebra 1
...I offer tutoring for any high school math subject up to and including AP Calculus AB and BC. I also help students improve their scores for the quantitative portions of the SAT and ACT. I did very well on the math portion of the SAT, scoring 720.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...Golf: I was on my high school varsity team for three years. I cover swing mechanics (I encourage an easy, smooth swing), course etiquette, strategies, and short game, customized to your needs. Tennis: I have taken six years of tennis lessons and played in junior tournaments when I was younger.
13 Subjects: including calculus, writing, algebra 1, GRE
{"url":"http://www.purplemath.com/chevy_chase_calculus_tutors.php","timestamp":"2014-04-18T21:19:38Z","content_type":null,"content_length":"24032","record_id":"<urn:uuid:9c6a3631-e789-41d2-90f2-7b0e0e87b356>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Wyandotte, MI Precalculus Tutor
Find a Wyandotte, MI Precalculus Tutor
...My tutoring philosophy is focused on strengthening comprehension and application of fundamental concepts and problem solving approaches. While math problems can seem complicated, they all follow basic rules and patterns. By building and practicing foundational principles and rules, my students can break down complicated problems into smaller, simpler steps.
12 Subjects: including precalculus, calculus, physics, algebra 1
...I am also a grade 8 soccer referee. My methods vary depending upon the student's needs. I believe my high level of success is due to my ability to adapt to an individual's strengths and
16 Subjects: including precalculus, chemistry, physics, Spanish
...There are two math portions on this test: Arithmetic Reasoning and Mathematics Knowledge. The Arithmetic Reasoning portion contains word problems, while the Mathematics Knowledge portion contains algebra and geometry problems, which may or may not start off as a word problem. Currently, no calculator is allowed, so mental calculations and tricks for simplifying are very useful.
13 Subjects: including precalculus, calculus, geometry, GRE
I tutored math for 22 years as a teacher in public school and as a home tutor in my country, and I also have 3 years of experience in Utah as an aide in a high school, helping students with math and language in English as a second language (ESL) classes, grades 7-12. I also have experience tutoring whoever n...
7 Subjects: including precalculus, calculus, geometry, algebra 1
...I often use a visual chart with them for their schedule and let them know a few minutes before transitioning to the next activity. I tutor a student with autism and at the beginning we go over what we will be doing for the session. Every week I keep the same structure so he is not surprised when I come.
37 Subjects: including precalculus, reading, chemistry, English
{"url":"http://www.purplemath.com/wyandotte_mi_precalculus_tutors.php","timestamp":"2014-04-19T23:28:50Z","content_type":null,"content_length":"24400","record_id":"<urn:uuid:49a5886f-95a7-4234-b9fc-7b0603e74257>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - For which positive real numbers a does the series converge
1. The problem statement, all variables and given/known data
For which positive real numbers $a$ does the series $\sum_{n=1}^{\infty} a^{\log(n)}$ converge? Here logarithms are to the base $e$.
2. Relevant equations
I'm afraid I'm not sure where to start; I'm not sure which topics would be applicable to this question. If someone could point me in the right direction or give me a clue, it would be much appreciated. Thanks.
3. The attempt at a solution
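One standard way in (an added hint, not part of the original post): since $a^{\log n} = e^{\log a \cdot \log n} = n^{\log a}$, the series is a $p$-series in disguise:
$$\sum_{n=1}^{\infty} a^{\log n} = \sum_{n=1}^{\infty} n^{\log a},$$
which converges exactly when $\log a < -1$, i.e. for $0 < a < 1/e$.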
{"url":"http://www.physicsforums.com/showpost.php?p=3728597&postcount=1","timestamp":"2014-04-17T07:32:16Z","content_type":null,"content_length":"8950","record_id":"<urn:uuid:216f17c7-c140-4135-9db1-631a4b194510>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Tarzana Algebra 2 Tutor
...I studied Dyslexia's causes, symptoms and treatments during graduate school. I have a doctoral degree in Child and Adolescent Clinical Psychology (Psy.D.) from a program fully accredited by the American Psychological Association. I have ample experience tutoring students with a variety of learning differences.
44 Subjects: including algebra 2, English, reading, writing
...I was a straight-A student myself in math in grades K-12. I have been playing for 11 years and my peers come to me for advice on a deeper understanding of hand analysis, strategy, and odds. Poker is a balance of skill, mathematics and cards.
22 Subjects: including algebra 2, English, reading, writing
Teaching is my passion! I love to help others learn, but I don't believe in large classroom settings because they simply don't work. The absolute best way to learn any material is to have someone teach it to you one on one - that's why I prefer to tutor.
31 Subjects: including algebra 2, English, chemistry, reading
...Content understanding is not the key to doing well on the MCAT. Also, because I'm a small operation I spare the time to talk you through the purposes of each facet of strategy instead of just giving you a laundry list of techniques to memorize without explaining why they're useful. This allows e...
18 Subjects: including algebra 2, chemistry, reading, English
...Then I teach students to decode the words on the printed page by sounding them out, or in phonics terms, blending the sound-spelling patterns. I do believe that the "whole language" approach to reading is important, and I practice reading good literature with students while stressing meaning. The whole language approach emphasizes identifying new words using context.
72 Subjects: including algebra 2, reading, English, geometry
{"url":"http://www.purplemath.com/tarzana_algebra_2_tutors.php","timestamp":"2014-04-17T21:37:22Z","content_type":null,"content_length":"23883","record_id":"<urn:uuid:5152c73a-4221-4892-a637-b692e18c3bd2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: How hard is this?
Replies: 0
How hard is this?
Posted: Jun 13, 1996 11:09 PM
Hello all,
Here is a decision problem: Given a (large) natural number $N$, a (small) natural number $d$ and bounds $C$ and $M$, are there natural numbers $m < M$ and $c_i < C$ (for each $0 \le i \le d$) such that
$$N = c_d m^d + c_{d-1} m^{d-1} + \cdots + c_0\,?$$
In other words, is there a polynomial of degree $d$ with coefficients less than $C$ that takes value $N$ at a point less than $M$? (For example one might take $M = C = O(N^{1/(d+1)})$.)
Does anyone know anything about the complexity of this or any even vaguely similar problems?
{"url":"http://mathforum.org/kb/thread.jspa?threadID=11830","timestamp":"2014-04-17T01:47:31Z","content_type":null,"content_length":"14075","record_id":"<urn:uuid:6144eaae-3c4b-447b-bc0a-e58f93b8f3ab>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Laurys Station Calculus Tutor
Find a Laurys Station Calculus Tutor
...I can help students that want extra help for a high school or college level course, SAT prep, GRE prep, or those who just want to learn more! Mostly, I will tutor in physics, mathematics and even chemistry. Contact me if you have any questions at all and I will be more than happy to answer them! I have a Bachelor of Science degree in Physics with a minor in mathematics.
16 Subjects: including calculus, chemistry, GRE, physics
...I am patient and understand that all students learn differently, and do my best to accommodate different learning styles. I have received a bachelor's degree in mechanical engineering from the University of Delaware in 2013. While attending the University of Delaware for mechanical engineering, I...
9 Subjects: including calculus, physics, geometry, algebra 1
...I am a very patient person, and I love helping people through difficult material. I am looking forward to working with you! Thank you for your time. I have personally taught several classes in Calculus AB and BC, where differential equations are a single part of the course.
35 Subjects: including calculus, chemistry, geometry, biology
...It's like I'm getting paid to solve crossword puzzles, as that's how math problems feel for me. I have experience with every type of student imaginable. Some of those I have tutored include students from middle school all the way up to college, ranging in age from 12 to 60.
11 Subjects: including calculus, Spanish, geometry, algebra 1
...Last year I published my first novel. Creative and analytical writing appeal to me especially, not to mention fiction. My favorite subject.
34 Subjects: including calculus, English, writing, physics
{"url":"http://www.purplemath.com/laurys_station_calculus_tutors.php","timestamp":"2014-04-19T02:26:00Z","content_type":null,"content_length":"24083","record_id":"<urn:uuid:5dd25cbc-2e1e-4cb7-a35b-d4958dc7c3fe>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Some syntactic and categorical constructions of lambda calculus models. Papport de Recherche 80, Institute National de Recherche en Informatique et en Automatique (INRIA - Information and Computation , 1996 "... E-mail: howe research.att.com We give a method for proving congruence of bisimulation-like equivalences in functional programming languages. The method applies to languages that can be presented as a set of expressions together with an evaluation relation. We use this method to show that some genera ..." Cited by 109 (1 self) Add to MetaCart E-mail: howe research.att.com We give a method for proving congruence of bisimulation-like equivalences in functional programming languages. The method applies to languages that can be presented as a set of expressions together with an evaluation relation. We use this method to show that some generalizations of Abramsky's applicative bisimulation are congruences whenever evaluation can be specified by a certain natural form of structured operational semantics. One of the generalizations handles nondeterminism and diverging computations.] 1996 Academic Press, Inc. 1. , 1998 "... In this paper we introduce a general class of lazy computation systems and define a natural program equivalence for them. We prove that if an extensionality condition holds of each of the operators of a computation system, then the equivalence relation is a congruence, so that the usual kinds of equ ..." Cited by 95 (6 self) Add to MetaCart In this paper we introduce a general class of lazy computation systems and define a natural program equivalence for them. We prove that if an extensionality condition holds of each of the operators of a computation system, then the equivalence relation is a congruence, so that the usual kinds of equality reasoning are valid for it. This condition is a simple syntactic one, and is easy to verify for the various lazy computation systems we have considered so far. We also give conditions under which the equivalence coincides with observational congruence. These results have some important consequences for type theories like those of Martin-Löf and Nuprl. - Proceedings of the 17th International Colloquium on Automata Languages and Programming , 1996 "... In this paper we study a higher-order process calculus, a restriction of one due to Boudol, and develop an abstract, model for it. By abstract we mean that the model is constructed domain-theoretically and reflects a certain conceptual viewpoint about observability. It is not constructed from the sy ..." Cited by 10 (2 self) Add to MetaCart In this paper we study a higher-order process calculus, a restriction of one due to Boudol, and develop an abstract, model for it. By abstract we mean that the model is constructed domain-theoretically and reflects a certain conceptual viewpoint about observability. It is not constructed from the syntax of the calculus or from computation sequences. We describe a new powerdomain construction that can be given additional algebraic structure that allows one to model concurrent composition, in the same sense that Plotkin's powerdomain can have a continuous binary operation defined on it to model choice. We show that the model constructed this way is adequate with respect to the operational semantics. The model that we develop and our analysis of it is closely related to the work of Abramsky and Ong on the lazy lambda calculus. 
1 Introduction A fundamental problem in the semantics of parallel programming languages is integrating concurrency with abstraction. Kahn's pioneering work on stat... "... We study the observational theory of Thielecke’s CPS-calculus, a distillation of the target language of Continuation-Passing Style transforms. We define a labelled transition system for the CPS-calculus from which we derive a (weak) labelled bisimilarity that completely characterises Morris ’ contex ..." Cited by 2 (0 self) Add to MetaCart We study the observational theory of Thielecke’s CPS-calculus, a distillation of the target language of Continuation-Passing Style transforms. We define a labelled transition system for the CPS-calculus from which we derive a (weak) labelled bisimilarity that completely characterises Morris ’ context-equivalence. We prove a context lemma showing that Morris ’ context-equivalence coincides with a simpler context-equivalence closed under a smaller class of contexts. Then we profit of the determinism of the CPS-calculus to give a simpler labelled characterisation of Morris ’ equivalence, in the style of Abramsky’s applicative bisimilarity. We enhance our bisimulation proof-methods with up-to bisimilarity and up-to context proof techniques. We use our bisimulation proof techniques to investigate a few algebraic properties on diverging terms that cannot be proved using the original axiomatic semantics of the CPS-calculus. Finally, we prove the full abstraction of Thielecke’s encoding of the CPScalculus into a fragment of Fournet and Gonthier’s Join-calculus with single pattern definitions. 1 , 1999 "... This paper is concerned with the relationship between-calculus and ��-calculus. The-calculus talks about functions and their applicative behaviour. This contrasts with the ��-calculus, that talks about processes and their interactive behaviour. Application is a special form of interaction, and there ..." Cited by 1 (0 self) Add to MetaCart This paper is concerned with the relationship between-calculus and ��-calculus. The-calculus talks about functions and their applicative behaviour. This contrasts with the ��-calculus, that talks about processes and their interactive behaviour. Application is a special form of interaction, and therefore functions can be seen as a special form of processes. We study how the functions of the-calculus (the computable functions) can be represented as ��-calculus processes. The ��-calculus semantics of a language induces a notion of equality on the terms of that language. We therefore also analyse the equality among functions that is induced by their representation as ��-calculus processes. This paper is intended as a tutorial. It however contains some original contributions. The main ones are: the use of well-known Continuation Passing Style transforms to derive the encodings into ��-calculus and prove their correctness; the encoding of typed-calculi. "... Recursive functions are representable as lambda terms, and de nability in the calculus may be regarded as a de nition of computability. This forms part of the standard foundations of computer science. Lambda calculus is the commonly accepted basis of functional programming languages � and it is folk ..." Cited by 1 (0 self) Add to MetaCart Recursive functions are representable as lambda terms, and de nability in the calculus may be regarded as a de nition of computability. This forms part of the standard foundations of computer science. 
Lambda calculus is the commonly accepted basis of functional programming languages, and it is folklore that the calculus is the prototypical functional language in purified form. The course investigates the syntax and semantics of lambda calculus both as a theory of functions from a foundational point of view, and as a minimal programming language.

Synopsis: Formal theory; fixed point theorems; combinatory logic: combinatory completeness, translations between lambda calculus and combinatory logic; reduction: Church-Rosser theorem; Böhm's theorem and applications; basic recursion theory; lambda calculi considered as programming languages; simple type theory and PCF: correspondence between operational and denotational semantics; current developments.

Relationship with other courses: Basic knowledge of logic and computability in paper B1 is assumed.
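The synopsis mentions fixed point theorems and lambda calculi considered as programming languages. As a small illustration (my addition, not part of the course listing), here is the call-by-value fixed-point combinator transcribed into Python, using single-argument lambdas only:

    # Z combinator: a call-by-value fixed-point combinator.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Factorial defined as the fixed point of a non-recursive functional.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    assert fact(5) == 120

The eta-expansion (lambda v: x(x)(v)) is what makes the combinator terminate under strict evaluation; the classical Y combinator would loop forever here.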
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=750025","timestamp":"2014-04-17T13:32:27Z","content_type":null,"content_length":"27235","record_id":"<urn:uuid:9a9d66a6-06d6-4403-8a49-c2e03857a9f4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Our impact on others

A nice story about the possibly unknown effects of our actions, from a review by mathematician Marjorie Senechal of a book about German Jewish mathematicians:

Fritz John (1910–1994) [pictured in 1953], Jewish on his father's side, left Germany in 1933 for England; in 1935 he was appointed assistant professor of mathematics at the University of Kentucky in Lexington. Back in the 1930s, the University of Kentucky was small and isolated but, except for two years of war-related work, John stayed there until 1946, when he moved permanently to New York University. Surely he was glad to rejoin his mentor, [mathematician Richard] Courant. But he made a difference in Lexington; I don't know if he ever knew it. I grew up near Lexington and took piano lessons from a teacher in town named Helen Lipscomb. Helen was a polio victim, confined to a wheelchair; her brother, Bill, was a chemist at the University of Minnesota. I met Bill Lipscomb for the first time in 2009, two years before he died at the age of ninety-two. By then he'd taught at Harvard for forty years and earned a Nobel prize (1976) [in Chemistry] for his work on boranes. Unlike me, Bill had attended the University of Kentucky after a Lexington public high school; he'd had a music scholarship and studied chemistry on the side. "Why did you decide to become a chemist instead of a musician?" I asked him. "What changed your mind?" "A math class," he told me. "A math class taught by a German named Fritz John." (page 213)

Marjorie Senechal [2013]: Review of: Birgit Bergmann, Moritz Epple, and Ruti Ungar (Editors): Transcending Tradition: Jewish Mathematicians in German-Speaking Academic Culture. Springer Verlag, 2012. Notices of the American Mathematical Society, 60 (2): 209-213. Available here.
{"url":"http://www.vukutu.com/blog/2013/03/our-impact-on-others/","timestamp":"2014-04-16T10:10:10Z","content_type":null,"content_length":"17490","record_id":"<urn:uuid:c3b885f3-6ec1-4c21-b3e1-e92fb147ed24>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimising Compilers (lecture slides page)

Principal lecturer: Prof Alan Mycroft
These slides due to (and copyright holder): Tom Stuart
Taken by: Part II

The slides for each lecture are available as individual downloads below; alternatively, download them all as one giant document (3.5M PDF, 644 pages), which is also available in a tree-preserving 8-up format for printing (3.5M PDF, 81 pages).

• Lecture 1: Introduction (352k PDF, 37 pages)
  □ Structure of an optimising compiler
  □ Why optimise?
  □ Optimisation = Analysis + Transformation
  □ 3-address code
  □ Flowgraphs
  □ Basic blocks
  □ Types of analysis
  □ Locating basic blocks
• Lecture 2: Unreachable-code & -procedure elimination (263k PDF, 53 pages)
  □ Control-flow analysis operates on the control structure of a program (flowgraphs and call graphs)
  □ Unreachable-code elimination is an intra-procedural optimisation which reduces code size
  □ Unreachable-procedure elimination is a similar, interprocedural optimisation making use of the program's call graph
  □ Analyses for both optimisations must be imprecise in order to guarantee safety
• Lecture 3: Live variable analysis (497k PDF, 45 pages)
  □ Data-flow analysis collects information about how data moves through a program
  □ Variable liveness is a data-flow property
  □ Live variable analysis (LVA) is a backwards data-flow analysis for determining variable liveness
  □ LVA may be expressed as a pair of complementary data-flow equations, which can be combined
  □ A simple iterative algorithm can be used to find the smallest solution to the LVA data-flow equations (a sketch of this iteration appears after this list)
• Lecture 4: Available expression analysis (453k PDF, 58 pages)
  □ Expression availability is a data-flow property
  □ Available expression analysis (AVAIL) is a forwards data-flow analysis for determining expression availability
  □ AVAIL may be expressed as a pair of complementary data-flow equations, which may be combined
  □ A simple iterative algorithm can be used to find the largest solution to the AVAIL data-flow equations
  □ AVAIL and LVA are both instances (among others) of the same data-flow analysis framework
• Lecture 5: Data-flow anomalies and clash graphs (162k PDF, 40 pages)
  □ Data-flow analysis is helpful in locating (and sometimes correcting) data-flow anomalies
  □ LVA allows us to identify dead code and possible uses of uninitialised variables
  □ Write-write anomalies can be identified with a similar analysis
  □ Imprecision may lead to overzealous warnings
  □ LVA allows us to construct a clash graph
• Lecture 6: Register allocation (477k PDF, 45 pages)
  □ A register allocation phase is required to assign each virtual register to a physical one during compilation
  □ Registers may be allocated by colouring the vertices of a clash graph
  □ When the number of physical registers is limited, some virtual registers may be spilled to memory
  □ Non-orthogonal instructions may be handled with additional MOVs and new edges on the clash graph
  □ Procedure calling standards are also handled this way
• Lecture 7: Redundancy elimination (352k PDF, 37 pages)
  □ Some optimisations exist to reduce or remove redundancy in programs
  □ One such optimisation, common-subexpression elimination, is enabled by AVAIL
  □ Copy propagation makes CSE practical
  □ Other code motion optimisations can also help to reduce redundancy
  □ The optimisations work together to improve code
• Lecture 8: Static single-assignment; strength reduction (189k PDF, 35 pages)
  □ Live range splitting reduces register pressure
  □ In SSA form, each variable is assigned to only once
  □ SSA uses Φ-functions to handle control-flow merges
  □ SSA aids register allocation and many optimisations
  □ Optimal ordering of compiler phases is difficult
  □ Algebraic identities enable code improvements
  □ Strength reduction uses them to improve loops
• Lecture 9: Abstract interpretation (101k PDF, 27 pages)
  □ Abstractions are manageably simple models of unmanageably complex reality
  □ Abstract interpretation is a general technique for executing simplified versions of computations
  □ For example, the sign of an arithmetic result can sometimes be determined without doing any arithmetic
  □ Abstractions are approximate, but must be safe
  □ Data-flow analysis is a form of abstract interpretation
• Lecture 10: Strictness analysis (215k PDF, 44 pages)
  □ Functional languages can use CBV or CBN evaluation
  □ CBV is more efficient but can only be used in place of CBN if termination behaviour is unaffected
  □ Strictness shows dependencies of termination
  □ Abstract interpretation may be used to perform strictness analysis of user-defined functions
  □ The resulting strictness functions tell us when it is safe to use CBV in place of CBN
• Lecture 11: Constraint-based analysis (168k PDF, 42 pages)
  □ Many analyses can be formulated using constraints
  □ 0CFA is a constraint-based analysis
  □ Inequality constraints are generated from the syntax of a program
  □ A minimal solution to the constraints provides a safe approximation to dynamic control-flow behaviour
  □ Polyvariant (as in 1CFA) and polymorphic approaches may improve precision
• Lecture 12: Inference-based analysis (173k PDF, 21 pages)
  □ Inference-based analysis is another useful framework
  □ Inference rules are used to produce judgements about programs and their properties
  □ Type systems are the best-known example
  □ Richer properties give more detailed information
  □ An inference system used for analysis has an associated safety condition
• Lecture 13: Effect systems (277k PDF, 37 pages)
  □ Effect systems are a form of inference-based analysis
  □ Side-effects occur when expressions are evaluated
  □ Function types must be annotated to account for latent effects
  □ A type system may be modified to produce judgements about both types and effects
  □ Subtyping may be required to handle annotated types
  □ Different effect structures may give more information
• Lecture 14: Instruction scheduling (364k PDF, 45 pages)
  □ Instruction pipelines allow a processor to work on executing several instructions at once
  □ Pipeline hazards cause stalls and impede optimal throughput, even when feed-forwarding is used
  □ Instructions may be reordered to avoid stalls
  □ Dependencies between instructions limit reordering
  □ Static scheduling heuristics may be used to achieve near-optimal scheduling with an O(n²) algorithm
• Lecture 15: Register allocation vs. instruction scheduling; legality of reverse engineering (216k PDF, 29 pages)
  □ Register allocation makes scheduling harder by creating extra dependencies between instructions
  □ Less aggressive register allocation may be desirable
  □ Some processors allocate and schedule dynamically
  □ Reverse engineering is used to extract source code and specifications from executable code
  □ Existing copyright legislation may permit limited reverse engineering for interoperability purposes
• Lecture 16: Decompilation (270k PDF, 35 pages)
  □ Decompilation is another application of program analysis and transformation
  □ Compilation discards lots of information about programs, some of which can be recovered
  □ Loops can be identified by using dominator trees
  □ Other control structure can also be recovered
  □ Types can be partially reconstructed with constraint-based analysis
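As referenced in the Lecture 3 summary, here is a minimal, hypothetical sketch of the iterative algorithm for live variable analysis (my illustration, not taken from the course slides); the flowgraph and the use/def sets are made-up example inputs:

    # Backwards data-flow iteration for LVA:
    #   out[n] = union of in[s] over successors s of n
    #   in[n]  = use[n] | (out[n] - def[n])
    # Starting from empty sets and iterating to a fixed point yields
    # the smallest solution of the data-flow equations.
    def live_variables(nodes, succ, use, defs):
        live_in = {n: set() for n in nodes}
        live_out = {n: set() for n in nodes}
        changed = True
        while changed:
            changed = False
            for n in nodes:
                out_n = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
                in_n = use[n] | (out_n - defs[n])
                if in_n != live_in[n] or out_n != live_out[n]:
                    live_in[n], live_out[n] = in_n, out_n
                    changed = True
        return live_in, live_out

    # Tiny example: n1: x=1 ; n2: y=x+1 ; n3: return y
    nodes = ["n1", "n2", "n3"]
    succ = {"n1": ["n2"], "n2": ["n3"], "n3": []}
    use  = {"n1": set(), "n2": {"x"}, "n3": {"y"}}
    defs = {"n1": {"x"}, "n2": {"y"}, "n3": set()}
    live_in, live_out = live_variables(nodes, succ, use, defs)
    assert live_in["n2"] == {"x"} and live_out["n1"] == {"x"}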
{"url":"http://www.cl.cam.ac.uk/teaching/0910/OptComp/slides/","timestamp":"2014-04-19T04:23:58Z","content_type":null,"content_length":"16012","record_id":"<urn:uuid:bdec3299-5a46-4762-98f9-9a90ca004584>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
To avoid economic collapse, Russia must increase its GNP by 20%. However, due to the structure of its economy, if the 20% threshold is reached, then a 40% increase in GNP is achievable.

Assuming that the above statements are true, which one of the following must also be true?

(A) If ethnic strife continues in Russia, then a 20% increase in GNP will be unattainable.
(B) If a 40% increase in Russia's GNP is impossible, its economy will collapse.
(C) If Russia's GNP increases by 40%, its economy will not collapse.
(D) If the 20% threshold is reached, then a 40% increase in GNP is achievable and a 60% increase is probable.
(E) If Russia's economy collapses, then it will not have increased its GNP by 40%.

chunjuwu: Go for E. I think it's an 'only if' pattern: if the economy doesn't collapse, it must have increased its GNP by 20% (and a 40% increase is achievable); if the economy collapses, it must not have increased its GNP by 40%. Oh, I must remember my mistake: "if A then B" is not the same as "if no A, then no B"; "if A then B" is the same as "if no B, then no A".

ywilfred: The stem says (1) to avoid economic collapse, Russia MUST increase GNP by 20%, and (2) due to the structure of its economy, if the 20% threshold is reached, then a 40% increase in GNP is achievable.
(A) Not sure how ethnic strife comes into the picture. A is out.
(B) Not necessarily. We're told a 40% increase in GNP is achievable; nothing about whether it's possible or not.
(C) This is the one. To increase by 40%, it must have reached the 20% threshold, and so the economy will not collapse.
(D) Out. We do not know what's probable; it could be any percentage.
(E) No. It could have reached the 20% threshold but not achieved the 40% mark.
C it is.

chunjuwu: Hi, I think it's a bit weird. The passage says that to avoid economic collapse, Russia must increase its GNP by 20%; but if Russia increases its GNP by 20%, its economy may or may not collapse. In other words, even if it increases its GNP by 40%, we still cannot reach the absolute result.

Another member: I think "increase in GNP" is a necessary but not sufficient condition. So E prevails.

ywilfred: What the passage is saying: to avoid economic collapse, Russia's GNP must be raised by 20%. So if their GNP is raised only by 19.9%, the economy will collapse. Then it says that if the GNP does manage to be raised to 20%, then there is a possibility that the GNP could go up and hit 40%. In C, we're told that Russia's GNP increases by 40%. To be able to even achieve this, it must first reach the 20% threshold, and this is the only condition that will prevent an economic collapse.

Another member: C for me... If it reaches 20% it would avoid collapse, and it may even stretch the percentage to 40%....

banerjeea_98: From the stem: if 20% is not achieved (no X), then Y (economy collapses) --- (1). Also, if 20% is achieved (X), then 40% is possible (Z); so if no Z (40% not possible), then no X (20% is not achieved) --- (2). From (1) and (2): if no Z (40% not possible), then Y (economy collapses)... that's what "B" is saying. "C" is out because 20% is a necessary but may or may not be a sufficient condition to avoid collapse.

christoph: Go with E)... B) should be out because the economy can reach the 20% and NOT the 40%, and the economy will not collapse. C) should be out because it still could collapse in the future.

anandnk: If A then B; if not A then not B. The minimum requirement is a 20% increase. If the economy collapses then it must be true that the GNP didn't increase by 20% (which means it didn't increase by more than 40%). Definitely (E).

banerjeea_98: I disagree with u here Anand.... I think the 20% increase is the required condition but not a sufficient one..... the economy can still collapse even with a 20% increase.

christoph: Baner, how can B) be true when the economy does not pass the 40% but rests at 20%? Then the economy won't collapse and the increase of 40% was not reached ("impossible"). The situation (no collapse) lasts with or without the passing of 40%.

banerjeea_98: Christoph, we need to realize that "B" is not talking abt if 40% is not reached, it is talking abt if 40% is impossible.... two different things.... as the stem says, if the 20% threshold is reached then 40% is possible.... that means if 40% is impossible then the 20% threshold is not reached (if X, then Y concept), hence the collapse.

christoph: You are right... it is B).

christoph: My first vote was wrong because E) cannot be true: the economy could still collapse for some other reason...

Another member: MA - what's the OA?? I seem to be lost in the jungle of words.....

anandnk: I agree with you, banerjeea: the 20% increase is required but not sufficient. (B) should be it.

MA: I also agree with you guys. OA is B.

jpv: Agree with Banerjee.. It is (B). Let A = avoid economic collapse, B = 20% increase, C = 40% increase. The question stem gives: if A then B, and if B then C; hence no C -> no B -> no A.
(A) Irrelevant.
(B) No C -> No A. Correct logic.
(C) C -> A. Wrong logic.
(D) B -> C -> D. Wrong.
(E) Not A -> Not C. Wrong logic.
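To make the thread's final contrapositive argument concrete, here is a small brute-force entailment check (my addition, not part of the original thread). It reads choices (C) and (E) loosely as statements about the 40% increase being achievable; collapse, hit20 and can40 are made-up propositional names:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    entails_B = entails_C = entails_E = True
    # collapse: economy collapses; hit20: GNP rises 20%; can40: a 40% rise is achievable
    for collapse, hit20, can40 in product([False, True], repeat=3):
        premise1 = implies(not collapse, hit20)  # avoiding collapse requires the 20%
        premise2 = implies(hit20, can40)         # 20% reached => 40% achievable
        if premise1 and premise2:
            entails_B = entails_B and implies(not can40, collapse)
            entails_C = entails_C and implies(can40, not collapse)
            entails_E = entails_E and implies(collapse, not can40)

    print(entails_B, entails_C, entails_E)  # True False False: only (B) must be true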
{"url":"http://gmatclub.com/forum/to-avoid-economic-collapse-russia-must-increase-its-gnp-by-15754.html","timestamp":"2014-04-16T16:08:41Z","content_type":null,"content_length":"222287","record_id":"<urn:uuid:c352516c-6c96-4399-b3f9-317817c03311>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Terence Tao, Fields Medal Winner Terence Tao is UCLA’s first mathematician to receive the prestigious Fields Medal, often described as the "Nobel Prize in Mathematics.” Tao, 31, was presented the prize today (August 22) at the International Congress of Mathematicians in Madrid. The Fields Medal is awarded by the International Mathematical Union every fourth year. Tao's capture of the Fields Medal surprised few at UCLA. "Terry is like Mozart; mathematics just flows out of him, except without Mozart’s personality problems,” said John Garnett, professor and former chair of the mathematics department. "People all over the world say, ‘UCLA’s so lucky to have Terry Tao,'” said Tony Chan, dean of physical sciences and professor of mathematics. "The way he crosses areas would be like the best heart surgeon also being exceptional in brain surgery.” A math prodigy from Adelaide, Australia, Tao started learning calculus as a 7-year-old high school student. By 9, he had progressed to university-level calculus; by 11, he was already burnishing his reputation at international math competitions. Tao was 20 when he earned his Ph.D. from Princeton University and joined UCLA’s faculty. By 24, he had become a full professor. "The best students in the world in number theory all want to study with Terry,” Chan said. Graduate students have come to UCLA from as far as Romania and China. One area in which Tao specializes is harmonic analysis, an advanced form of calculus that uses equations from physics. Some of his work involves "geometrical constructions that almost no one understands,” Garnett said. Tao is also regarded as the world’s expert on the "Kakeya conjecture,” a perplexing set of five problems in harmonic analysis. And his work with Ben Green of the University of Bristol, England - proving that prime numbers contain infinitely many progressions of all finite lengths - was lauded by Discover magazine as one of the 100 most important scientific discoveries in 2004. "I don’t have any magical ability,” Tao said. "I look at a problem, and it looks something like one I’ve done before. I think maybe the idea that worked before will work here. . . . After awhile, I figure out what’s going on.” Video for UCLA by Peter Rothenberg
{"url":"http://www.spotlight.ucla.edu/faculty/terence-tao_fields-medal/","timestamp":"2014-04-17T18:23:36Z","content_type":null,"content_length":"10854","record_id":"<urn:uuid:73c3472d-0dce-4172-b667-942143eedb9f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Heath on Friday, February 2, 2007 at 12:29pm.

Roll 3 dice. What is the probability of getting a sum of:
A) sum of 7: 15/216
B) sum of 8: 21/216
C) sum of 11: 27/216
D) sum of 12: 25/216
Are these correct?

How many ways are there to choose 3 numbers ranging from 1 to 6 such that the sum is N? Consider the function:

F(X) = Sum over n1 from 1 to 6; Sum over n2 from 1 to 6; Sum over n3 from 1 to 6 of X^(n1 + n2 + n3)

Clearly the coefficient of X^N will give you the answer. We can calculate F(X) as follows. Note that:

X^(n1 + n2 + n3) = X^(n1)*X^(n2)*X^(n3)

When summing over n3 from 1 to 6, the factors X^(n1)*X^(n2) don't change; you can take them outside that summation. So, what you see is that the summation factorizes into three summations, each of which is:

Sum over n from 1 to 6 of X^n = X(1 - X^6)/(1 - X)

Therefore:

F(X) = X^3 (1 - X^6)^3 / (1 - X)^3

We now need to perform a series expansion. The series expansion of 1/(1-X)^3 is given by:

A(X) = Sum from k = 0 to infinity of (k+1)(k+2)/2 X^k

We must multiply this by:

B(X) = X^3 (1 - X^6)^3 = X^3 (1 - 3X^6 + 3X^12 - X^18) = X^3 - 3X^9 + 3X^15 - X^21

And we have: F(X) = A(X)*B(X)

The coefficient of X^7 of F(X) is clearly the coefficient of X^4 of A(X), which is 6!/(4!2!) = 15. So, answer A is correct!

The coefficient of X^8 of F(X) is clearly the coefficient of X^5 of A(X), which is 7!/(5!2!) = 21. Answer B is also correct!

The coefficient of X^11 of F(X) is clearly the coefficient of X^8 of A(X) minus 3 times the coefficient of X^2 of A(X), which is 10!/(8!2!) - 3*4!/(2!2!) = 27. Answer C is also correct!

The coefficient of X^12 of F(X) is clearly the coefficient of X^9 of A(X) minus 3 times the coefficient of X^3 of A(X), which is 11!/(9!2!) - 3*5!/(3!2!) = 25. Answer D is also correct!
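As a sanity check (my addition, not part of the original answer), a brute-force enumeration over all 6^3 = 216 ordered rolls reproduces the four counts:

    from itertools import product

    # Count ordered rolls of three dice for each target sum and compare
    # with the generating-function coefficients computed above.
    counts = {s: 0 for s in (7, 8, 11, 12)}
    for roll in product(range(1, 7), repeat=3):
        if sum(roll) in counts:
            counts[sum(roll)] += 1

    assert counts == {7: 15, 8: 21, 11: 27, 12: 25}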
{"url":"http://www.jiskha.com/display.cgi?id=1170437374","timestamp":"2014-04-21T13:49:01Z","content_type":null,"content_length":"10083","record_id":"<urn:uuid:c551ca1f-b9e6-4753-b0be-6a75de828191>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational Complexity

Thursday, April 29, 2004
Karp Symposium Posted by Lance

[A report from weblog correspondent Bill Gasarch. Link to Allender's talk added 5/7]

On Wednesday April 28 there was a SYMPOSIUM HONORING DR. RICHARD M. KARP at Drexel University in Philadelphia. They were honoring him for winning the BEN FRANKLIN MEDAL IN COMPUTER AND COGNITIVE SCIENCE. (There are Ben Franklin Medals for Physics, Chemistry, Life Sciences, Earth Science, Computer and Cognitive Science, and Engineering.) There were three talks:

ERIC ALLENDER: The Audacity of Computational Complexity. This talk described the basics of complexity theory and mostly focused on reductions. A nice contrast that it made:
1. In the year 2004 we have good reason to think that many problems (e.g., SAT, 3-COL) are hard, except factoring which is still hard to classify.
2. In the year 1970 most problems (including SAT, 3-COL) were hard to classify.
The talk also pointed out some of the problems with Computational Complexity (e.g., "How can you call an n^100000 algorithm feasible?") and answered them nicely (e.g., "we want to show problems are hard, so showing it's not in P does that"). The talk both began and ended on the topic of CHECKERS and GO being computationally hard problems.

AVI WIGDERSON: The Power and Weakness of Randomness (When you are short on time). This talk showed several examples of problems where randomness helps (hence randomized algorithms are powerful) but also indicated why there may be reason to think that you can always replace a randomized algorithm with a polynomial time algorithm (hence randomization adds no power). The problems it helped on involved sampling, routing in networks, and mazes.

RICHARD KARP: Even Approximation Solutions can be Hard to Compute. This talk was about certain problems that can be approximated and certain ones that (it seems) cannot be. A nice contrast was variants of TSP, which ranged from what can be approximated very well, to what can be approximated some, to what can't be approximated. He also brought in randomized rounding as a technique for approximation. The talk ended on PCP (done informally) and how it can be used to show lower bounds for approximation.

The talks were all well presented and quite understandable. The point of the talks was to expose our area to people outside of theory and perhaps even outside of computer science. As such the theorists in the audience did not learn much new; however, it is still interesting to see someone else's perspective on material that you are familiar with. 9:56 AM # 0 comments

Wednesday, April 28, 2004
Conferences versus Journals Posted by Lance

A reader asks why Gafni and Borowski did not publish their paper in a journal and become eligible for the Gödel Prize. I wish this was an isolated incident but it reflects on a sad state of affairs in computer science and theoretical computer science in particular. Too many papers in our field, including many great ones, do not get submitted to refereed journals. In an extreme case, Steve Cook received the Turing Award mostly for a STOC paper.

In most cases, conferences in computer science are more selective than journals. Your reputation in theoretical computer science is measured more by the conferences your papers appear in than the journals. In other fields like mathematics, physics and biology, journals have a much greater reputation and most of their papers do appear in refereed form.
I believe the reason is historical: computer science started as a quickly changing field and journals could not keep up with the rapidly emerging ideas. Conference program committees cannot and do not produce full referee reports on conference submissions. Proofs are not verified. Papers are not proofread carefully for mistakes and suggested improvements of presentation. Computer science suffers by not having the permanency and stamp of approval of a journal publication on many of its best papers. The founders of the Gödel Prize put in the journal requirement to encourage potential award winning papers to go through the full refereeing process.

Many papers in our field do appear in journals and some researchers are extremely diligent in making sure all of their work appears in refereed form. Also I know of no computer scientist who purposely avoids sending their papers to a journal. But when we have a system that does not value journal publications, a computer scientist pressed for time often will not make the effort to take their papers past the conference version. 8:53 AM # 0 comments

Monday, April 26, 2004
Is Disney World NP-complete? Posted by Lance

The Unofficial Guide to Walt Disney World 2004 gives a lesson on complexity by describing the optimal tour of the Magic Kingdom as a traveling salesman problem. Some excerpts:

As we add more attractions to our list, the number of possible touring plans grows rapidly...The 21 attractions in the Magic Kingdom One-Day Touring Plan for Adults has a staggering 51,090,942,171,709,440,000 possible touring plans...roughly six times more than the estimated number of grains of sand in the whole world...Fortunately, scientists have been hard at work on similar problems for many years..finding good ways to visit many places with minimum effort is such a common problem that it has its own nickname: the traveling salesman problem.

The book goes on to describe the computer program they use to approximate the optimal tour. Read more here (which I found by searching within the book for "traveling salesman" on the Amazon site). You'll need to be a registered user of Amazon.com to read it. 9:17 AM #

Sunday, April 25, 2004
Gödel Prize Posted by Lance

From the PODC (distributed computing) mailing list via Harry Buhrman. Usually the winners are kept secret until the ICALP or STOC conference but the PODC mailing list has already broken the news.

It has been recently announced that this year's winners of the Gödel Prize are Maurice Herlihy and Nir Shavit, and Michael Saks and Fotios Zaharoglou. As we all know, the result was initially published simultaneously in STOC 1993 also by Eli Gafni and Liz Borowski, but the Gödel Prize is awarded only to journal articles. Congratulations to the winners! Note that for the second time, the Gödel Prize honors a core PODC topic (in 1997, Joe Halpern and Yoram Moses won the prize). This is a sign both of the scientific quality of the PODC community, as well as the respect it wins in the theoretical CS world at large.

In case you are counting, that's Complexity 5, PODC 2. 6:53 AM # 0 comments

Friday, April 23, 2004
Theory Girl Posted by Lance

From Bill Gasarch: There are some more novelty songs about theory (aside from THE LONGEST PATH) from the Washington CSE Band. The best one is THEORY GIRL. 2:53 PM # 1 comments

Thursday, April 22, 2004
A Few Short Announcements Posted by Lance

Alan Kay will receive the 2004 Turing award. It can't always be a theorist. Registration is open for the 2004 Conference on Computational Complexity. The final schedule will be posted soon.
Also keep in mind STOC 2004 right here in Chicago. The list of accepted papers for ICALP is up. Finally, next Wednesday the 28th in Philadelphia, Drexel is hosting a symposium on computational complexity honoring Richard Karp. 10:36 AM # 0 comments

Wednesday, April 21, 2004
Are There #P Functions Equivalent to SAT? Posted by Lance

Help me solve this problem, write the paper with me, get an Erdős number of 3 and it won't cost you a cent. We can have #P functions hard for the polynomial-time hierarchy (Toda) or very easy, but can they capture exactly the power of NP?

Conjecture: There exists an f in #P such that P^f = P^SAT.

There is some precedent: counting the number of graph isomorphisms is equivalent to solving graph isomorphism. The conjecture is true if NP=UP, NP=PP or if GI is NP-complete. I don't believe any of these. Does the conjecture follow from some believable assumption or perhaps no assumption at all? We don't know if there exists a relativized world where the conjecture does not hold. Even the following weaker conjecture is open:

There exists an f in #P such that NP⊆P^f⊆PH.

A good solution to these conjectures might help us settle the checkability of SAT. 8:51 AM # 0 comments

Monday, April 19, 2004
Asian Food for Thought? Posted by Lance

Many years ago, an Israeli graduate student made the rounds and gave talks at several US universities. When he arrived in Chicago, he asked me if Americans only eat Chinese food. I told him he hadn't seen a random sample of Americans and took him out for some good Chicago ribs. Afterwards he told me he preferred the Chinese food.

At a logic conference at Notre Dame, I ate dinner with a small group at one of the few Chinese restaurants in South Bend. Surprisingly no other mathematicians were eating in the restaurant. Just as we noticed this, the waiters started putting tables together and about five minutes later in walked about 20 logicians for dinner.

Why do mathematicians and computer scientists eat so much Asian food? Not just Chinese but Japanese, Thai, Korean, Vietnamese, Indonesian, Ethiopian (not Asian but close enough) and of course Indian (northern and southern). Not that I don't enjoy Asian food but what's wrong with a good hamburger? 8:40 AM # 0 comments

Tuesday, April 13, 2004
Favorite Theorems: Primality Posted by Lance

March Edition

"Primality is a problem hanging onto a cliff above P with its grip continuing to loosen each day." - Paraphrased from a talk given by Juris Hartmanis in 1986.

It took sixteen more years but the primality problem did fall: PRIMES is in P, by Manindra Agrawal, Neeraj Kayal and Nitin Saxena. This paper gave the first provably deterministic polynomial-time algorithm that could determine whether n is a prime given n in binary. The theoretical importance cannot be overstated. But why do I consider the paper a complexity result instead of just an algorithmic result? Manindra Agrawal already had a strong reputation as a complexity theorist. The proof involves a derandomization technique for a probabilistic algorithm for primality. But more importantly, primality had a long history in complexity.

Primality is in co-NP almost by definition. In 1975, Vaughan Pratt showed that PRIMES is in NP. In 1977, Solovay and Strassen showed that PRIMES is in co-RP and testing primality became the standard example of a probabilistic algorithm. In 1987, Adleman and Huang, building on work of Goldwasser and Kilian, showed that PRIMES is in RP and thus in ZPP. In 1992, Fellows and Koblitz showed that PRIMES is in UP∩co-UP.
Finally in 2002 came AKS putting PRIMES in P. A runner-up in this area is the division problem recently shown to be in logarithmic space and below. 10:28 AM #

Sunday, April 11, 2004
The Cost of Textbooks Posted by Lance

The University of Chicago Bookstore has asked for textbook requests for the fall quarter by the middle of next month instead of during the summer as in past years. The reasoning: a burgeoning used textbook market. If the bookstore knows what books faculty will use in the fall, they can offer higher prices to pay for used books at the end of the spring quarter.

This is just an indication of the problems of higher textbook costs. CALPIRG has a recent extensive report on this topic. Textbook costs add to already spiraling increases in tuition and other college expenses. In addition, I hear more griping than usual from students in my class about buying the textbook, even though the book, Homer and Selman's Computability and Complexity Theory, lists new for $50, under even the average used price mentioned in the CALPIRG report.

What should I do as a faculty member? Should professors strive to reuse the same textbook each year so students can buy and sell used versions to keep their costs down? That can lead to courses getting stale very fast. Or should I even forgo textbooks completely and rely on less organized material freely available on the internet? I already do this for graduate courses where strong up-to-date textbooks simply do not exist. 11:47 AM # 0 comments

Tuesday, April 06, 2004
The View of a Science Writer Posted by Lance

A friend of mine from college became a science writer for various newspapers and magazines. Once he told me about his two biggest complaints about scientists.
1. Scientists want everyone who works on a project to be named in an article.
2. Scientists want every detail in an article to be complete and correct.

You might initially take the side of the scientists. But the science writer does not write for the scientists but for the general public. Put yourself in the position of the reader. The reader doesn't want to read through a long list of names that they won't remember anyway. The average reader also just wants an overview of the research and its importance. If removing some technical caveats and slightly oversimplifying the research achieves a better level of understanding for the reader, so be it.

Remember next time you read a science article in the popular press or get interviewed for such an article, the goal of the article is not to pass a serious referee review but to give the general public some glimpse into an important research area. 9:01 PM # 0 comments

Monday, April 05, 2004
Blum Complexity Measures Posted by Lance

The Blum speed-up theorem states that there exists a computable language L such that if L is in time t(n) then L is in time log(t(n)). The log function can be replaced by any arbitrarily slowly growing computable function. Instead of time one can use space or any other measure Φ that fulfills these properties:
1. Φ(M,x) is finite if and only if M(x) halts, and
2. There is a computable procedure that given (M,x,r) can decide if Φ(M,x)=r.

These are known as the Blum axioms and measures that fulfill them are known as Blum complexity measures. They were developed by Manuel Blum in the late 1960's.

The Borodin-Trakhtenbrot Gap Theorem states that given any computable function g(n) (e.g.
g(n)=2^n), there exists a function t(n) such that every language computable in time g(t(n)) is also computable in time t(n), i.e., there exists a gap between these time classes. Once again the theorem holds for any Blum complexity measure.

We don't see much of the Blum complexity measures these days for a few reasons.
1. The only truly interesting Blum measures are time and space.
2. The functions and languages that one gets out of the speed-up, gap and related theorems are usually quite large and artificial.
3. Many measures that we are interested in today, like the number of random coins used by a probabilistic Turing machine, do not fulfill the Blum axioms.

In 1991 I saw Manuel Blum give a talk discussing a new complexity measure, something about mind changes, that did not fulfill his axioms. So we had a Blum complexity measure that was not a Blum complexity measure and as Douglas Adams would say Manuel Blum "promptly vanishes in a puff of logic." [Just kidding-we like Manuel] 8:06 AM # 0 comments

Friday, April 02, 2004
More News from Dagstuhl Posted by Lance

Another Guest Post from Dieter van Melkebeek

Thursday morning, Shuki Bruck gave the first talk at the workshop that dealt with actual Boolean circuits. He pointed out that cyclic circuits can be combinational and may allow us to realize Boolean functions with fewer gates and/or less delay. Consider the following circuit with inputs x1, x2, x3 and outputs f1, f2, f3, f4. (The post's ASCII diagram did not survive extraction; it appears to show a single feedback cycle of four gates fed by x1, x2, -x1, x3 in order, which reads as f1 = x1 OR f4, f2 = x2 AND f1, f3 = (-x1) OR f2, f4 = x3 AND f3. A brute-force check of this reading appears at the end of this post.)

Although the circuit is topologically cyclic, the outputs are well-defined and only depend on the inputs. (Look at the cases x1=0 and x1=1 separately.) A careful analysis shows that every acyclic circuit that outputs f1, f2, f3, and f4 needs at least 5 nonunary gates. Thus, circuits with feedback allow us to gain a factor of 4/5 in terms of the number of gates needed to compute these functions. (As usual, we do not count negations.)

Shuki presented a sequence of Boolean functions for which the reduction in the number of nonunary gates asymptotically reaches 1/2 if we only allow gates of fanin at most 2. He raised the question how significant the reduction can be if we allow larger fanin.

Thomas Thierauf presented an NC^2 algorithm for unique perfect matching. A perfect matching in a graph is a collection of disjoint edges that cover all vertices. It has been known for some time how to decide the existence of a perfect matching and how to construct one in randomized NC^2:
1. Assign random weights from a small range of integers to the edges of the graph such that with high probability there is at most one minimum weight perfect matching. If we are in the situation with a unique minimum weight matching M, we can decide whether a given edge belongs to M by evaluating two determinants of matrices with integer entries that are exponential in the weights. Since the weights are small, we can do the latter in NC^2.
2. Run the NC^2 algorithm on all edges in parallel and verify that the result is a perfect matching M.

It is open whether perfect matchings can be constructed deterministically in NC. To decide whether a graph G has a unique perfect matching, Thomas first runs step 2 above (with unit weights). If that test fails, the algorithm rejects since G either has no perfect matching or has more than one. If the test is passed, the algorithm additionally verifies that G has no perfect matching M' other than M.
Such an M' exists iff G contains a cycle that alternates between edges from M and edges in G-M. The latter can be cast as a reachability problem in a graph that is roughly a concatenation of directed copies of M and G-M. Since directed graph reachability can be computed in NC^2 and the input to the reachability problem can be computed in NC^2 by step 2 above, the additional test runs in NC^2, as does the entire algorithm.

On Friday, Oded Lachish discussed the current records on unrestricted circuit lower bounds for explicit functions in n Boolean variables. For circuits that can use any binary gate, the record dates back to 1984 and stands at 3n. For circuits that can use any binary gate except parity and its negation, the record has recently been improved from 4n - O(1) to 5n - o(n). Both records use the technique of gate elimination, and Oded conjectured that the 3n result can be improved along the lines of the recent 5n - o(n) result.

The workshop ended at noon on Friday. One statistic: among the 33 talks, 3 were blackboard only, 5 used handwritten slides, 1 printed slides, and 24 were computer presentations.

Finally, I have one suggestion for those readers who have attended a Dagstuhl seminar in the past. In response to changes in financial support, the Dagstuhl office is requesting information about research publications that grew out of or have otherwise been significantly influenced by a Dagstuhl seminar. If you are an author of such a publication, please send the information to office@dagstuhl.de. Let's try to keep the wonderful tradition of Dagstuhl alive! 2:39 PM # 0 comments
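Returning to the cyclic-circuit example in the Dagstuhl post: under the feedback reading recovered above (an assumption inferred from the garbled diagram, not taken from the original post), a brute-force check confirms that the outputs are well-defined for every input:

    from itertools import product

    # Solve the feedback system f1 = x1 or f4, f2 = x2 and f1,
    # f3 = (not x1) or f2, f4 = x3 and f3 by iterating from both
    # possible seeds for the feedback wire f4.
    for x1, x2, x3 in product([False, True], repeat=3):
        solutions = set()
        for f4 in (False, True):
            for _ in range(4):  # a few rounds suffice to stabilize
                f1 = x1 or f4
                f2 = x2 and f1
                f3 = (not x1) or f2
                f4 = x3 and f3
            solutions.add((f1, f2, f3, f4))
        assert len(solutions) == 1  # combinational: unique and input-determined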
{"url":"http://oldblog.computationalcomplexity.org/archive/2004_04_01_archive.html","timestamp":"2014-04-17T18:23:42Z","content_type":null,"content_length":"56759","record_id":"<urn:uuid:16e4b4cf-cfd9-49ef-86b5-c6a7ca24b44d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Anti-self-dual connections on CP^2

I'm learning Yang-Mills theory and its applications on 4-manifolds. I want to know whether someone has computed all the anti-self-dual connections on principal $SU(2)$ bundles over the complex projective space $CP^2$. Where can I find the original paper if someone has calculated it?

dg.differential-geometry connections

Donaldson-Kronheimer's book The Geometry of Four-Manifolds computes the moduli space for small 2nd Chern class (it's empty for $c_2=1$); check the examples in chapter 4. But it's not computed by explicitly writing down connections (compared to the $S^4$ scenario). You should be able to find appropriate references there. – Chris Gerig Jun 30 '13 at 18:26

You also have the Atiyah-Drinfeld-Hitchin-Manin construction en.wikipedia.org/wiki/ADHM_construction, on one side, and the twistor space of Atiyah-Hitchin-Singer, or actually and originally R Penrose, on the other, which leads to complex algebraic geometry and results of Horrocks, Barth and Hartshorne. The twistor space of CP2 is the flag manifold F(C3)... – Al-burcas Dec 16 '13 at 18:23
{"url":"http://mathoverflow.net/questions/135339/anti-self-dual-connections-on-cp2","timestamp":"2014-04-19T02:41:26Z","content_type":null,"content_length":"48794","record_id":"<urn:uuid:10977026-d65d-4b31-bdcc-8b6099638d13>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
James Colliander's Blog Galina Perelman: 2 soliton collision in NLS i \psi_t = – \psi_{xx} + F(|\psi|^2) \psi, ~ x \in R where $F(\xi) = -2 \xi + O (\xi^2), ~ \xi \rightarrow 0.$ This family of equations has solitary wave solutions e^{i \theta(x,t) \phi (x – b(t), E)} where $\theta(x,t) = \omega t + \gamma + v \frac{x}{2}, ~b(t) =vt + c$ (all reall parameters). The profile $\phi$ is the associated ground state, which is $C^2$, decays exponentially, is even, … If I set $\epsilon^2 = E$ and write $\phi(y, \epsilon^2) = \epsilon \hat{\phi}(\epsilon, \epsilon).$ We have then that $\hat{\phi}(z, \epsilon) = \phi_0 (z) + O(\epsilon^2)$ where $\phi_0$ is the standard soliton for cubic NLS. A calculation shows that $$\| \phi(\cdot, \epsilon^2) \|_{H^1} = O(\epsilon^{1/2}).$$ Let’s collect the parameters $\sigma = (\beta, E, b, v) \in R^4.$ The question I’d like to address: Question: As $t \rightarrow -\infty$, suppose that $\psi(t) = w(\cdot, \sigma_0 (t)) + w(\cdot, \sigma_1 (t)) + o_{H^1} (1)$. Because of the galilean invariance we can arrange so that $\sigma_0$ does not move and we assume that $v_1 > 0$. So, we can arrange this data to have completely decoupled solitons as $t \rightarrow – \infty$. The question is then to understand the soliton collision and also what happens afterwards. Perturbative regime: $$\epsilon^2 = E_1 \ll 1, E_0 \thicksim 1, v_1 \thicksim 1.$$ Collision Scenario: 1. $w(\cdot, \sigma_0 (t))$ is ‘preserved’. 2. $w(\cdot, \sigma_1 (t))$ splits into two outgoing waves of the cubic NLS. The splitting is controlled by the linearized operator associated to the large soliton $w_{\sigma_0}$. Collision: $|t| \lesssim \epsilon^{-1-\delta}, ~ \delta > 0$. pre-interaction: $t leq – \epsilon^{-1-\delta}$ post-interaction: $t leq – \epsilon^{-1-\delta}$ She draws a picutre: Long wide soliton to the left of a big soliton at the origin before the collision. After the collision the small soliton splits into two waves, one moving left and one moving right. The big soliton at the origin is drawn not centered at the origin. $s = s(\frac{v_1}{2}), r = r (\frac{v_1}{2})$ where $s(k), r(k)$ are the translation and reflection coefficients of the linearized operator corresponding to $w(\cdot, \sigma_0 (t))$. Here we have $|s |^2(k) + |r|^2 (k) =1$. The only trace of nonlinearity appears in the phase. This phenomena has been observed before by Holmer-Mazuola-Zworski and earlier by physicists. H-M-Z conisdered the cubic NLS with an external delta potential. For small incoming solitons, they have observed the small soliton splitting caused by the Dirac function potential. (H0): $F \in C^\infty, F(\xi) = – 2 \xi + O(\xi^2), \xi \rightarrow 0.$ $F(\xi ) \geq – C\xi^q, C>0, q<2, \xi \geq 1$. (GWP in $H^1$) $\exists !$ ground state. Linearization around $w(x, \sigma(t)) = e^{i\theta} \phi(x – b(t), E)$. We substitute $\psi = w + f$ and expand to obtain the following equation for $f$: i {\bf{f}}_t = L(E) {\bf{f}}. Here ${\bf{f}}$ is a (column) vector $(f, \overline{f})$. L(E)= (-\partial_y^2 + E) \sigma_3 + V(E). Here $\sigma_3$ is the Pauli matrix and $V$ is a certain matrix involving $V_1 = F(\phi^2) + F’ (\phi^2) \phi^2$ and $V_2 = F’ (\phi^2) \phi^2$. She draws a spectral plane. Essential spectrum along real line in region $|x| > E$ and some eigenvalues drawn as x’s inside the gap and one above and below the real line on the imaginary axis. 0 is an eigenvalue. We have two explicit eigenfunctions $\xi_0$ and $\xi_1$. $M(E)$ is the generalzied null space of $L(E)$. 
We have the following equivalence: $$\sigma(L(E)) \subset R, {\mbox{dim}} M(E) = 4 \iff \frac{d}{dE} \| \phi(E) \|_2^2 > 0.$$ These conditions imply the orbital stability of $\Phi$. $Lf = \lambda f, ~\lambda \geq E, \lambda = E + k^2, ~ k \in R$. If $k^2 + I \notin \sigma_p (L(E))$ then $\exists ~! ~ f(x,k) = s(k) e^{i k x} (1, 0)^t + O(e^{-\gamma x})$ as $ x\rightarrow + \ infty, ~ \gamma > 0$ and $f(x,k) = e^{ikx} (1,0)^t + r(k) e^{-ikx}(1,0)^t + O(e^{\gamma x}), x \rightarrow – \infty$. $w(x,\sigma, t), ~ j=0,1$ normalized as before. $$\frac{d}{dE} \| \phi(E) \|2^2 |{E=E_0} > 0$$ (H2): $\epsilon^2 = E_1$ sufficiently small (H3): $M(E + \frac{v_1^2}{4}) \notin \sigma_p (L(E_0))$ (Nobody knows how to prove no embedded eignevalues.) Proposition: $\exists ~! ~ \psi \in C(R, H^1)$ such that …. Theorem: For $\epsilon^{-1-\delta} \leq t \leq \delta \epsilon^{-2} | \ln \epsilon |$ $$ \psi (t) = w (\cdot, \sigma(t)) + \psi_+ (t) + \psi_{-} (t) + h(t)$$ 1. $\sigma(t) = (\beta(t), E_0, b(t), v_0), ~V_0 = \epsilon \kappa$ where $\kappa$ is an explicit constant and |\beta(t) – \beta_0 (t)|, |b(t) – v_0 t| \leq C \epsilon^2 t. 2. $\Psi_{\pm} (x,t) = ….ack too fast to type… is expressed as an explicit phase times a function $S^{\pm}$ which solves cubic NLS emerging from data built using thre reflection, transmission coefficients and $\phi_0 (y)$. 3. error estimates in terms of $\epsilon.$ Edriss Titi: Loss of smoothness in 3d Euler Equations (joint work with Claude Bardos) 1. Background □ Euler □ Classical □ Nonuniqueness: De Lellis – Sh… 2. Shear flow □ DiPerna Majda example: weak limit of Euler solutions whose limit is not a solution □ Illposedness of Euler in C^{0,\alpha} 3. Vortex sheets induced by 3d shear flows □ Examples □ Differences between 2d and 3d Kelvin-Helmholtz problems □ Comments on numerics Euler equations Euler equations on the 3-torus. $\omega$ is the vorticity. Recast using Biot-Savart. Vorticity stretching term distinguishes 2d and 3d. Classical Wellposedness: • global existence and uniquenes for initial data $\omega_0 \in L^\infty$. This result is due to Yudovich (1963). Some extension…. • For data in $C^{1,\alpha}$, Euler equations are short time well-posed and the solution conserves energy. [Lictenstein (1925)] • The same result holds the context of Sobolev spaces $H^s, ~ s > \frac{5}{2}$. (Basically same result in more modern spaces) Question: Does there exist a regular solution (say in $C^{1,\alpha}$) of the 3d Euler equation that becomes singular in a finite time (blows up problem)? This is in osome sense as difficult as the millenium problem. There are different opinions…. “I spoke with Necas about this…near end of his life…on Wendesday’s he thinks it blows up and on Thursdays he thinks no…so he has bad dreams about it…” DeLellis-Szekelyhidi: There exists a set of initial data $u_0 \in L^2 (\Omega)$ (not explicitly constructed, Baire argument) for which the Cauchy problem has, for the same inital data, an infinite family of weak solutiosn of the 3d Euler equations: a residual set in the space $C(R; L^2_{weak} (\Omega))$. These are also in $L^\infty$ so they have finite energy. (Built on Shnirelman and others….). This is a breakthrough…but it is not so physical. Maybe a selection mechanism….for NS we don’t have such a result. Leray solutions are not known to be unique. Any result like this for NS would be extremely important….connect it with turbulence. 
The lack of uniqueness, according to Leray, relates to shear flows: $$u(x,t) = (u_1(x_2), 0, u_3(x_1 - t u_1(x_2))).$$ For $u_1, u_3 \in C^1$, the above shear flow is a classical solution of the Euler equations with pressure $p=0$. Yudovich used these to show the existence of solutions with exponentially growing high regularity norms. This example is due to DiPerna-Majda (1987).

Theorem (DiPerna-Lions): Norm explosion in $W^{1,p}$ for Euler, for any $p \geq 1$. Idea of the proof: $\partial_{x_2} u_3(x_1 - t u_1(x_2)) = \dots$

Theorem: The shear flow is a weak solution of the Euler equations in the sense of distributions in $R^3$, provided $u_1, u_3 \in L^2_{loc}(R^3)$. On the periodic box, we can do the same thing and in this case we have finite energy. Why do I stress the finite energy? This relates to the Onsager conjecture.

Theorem [Ill-posedness of the Euler equations in $C^{0,\alpha}$]: The shear flow with $C^{1,\alpha}$ components $u_1, u_3$ gives a classical solution. However, for $u_1, u_3 \in C^{0,\alpha}$, the above shear flow is always in $C^{0,\alpha^2}$, which is a much larger space; we instantly lose $C^{0,\alpha}$. There exists a shear flow which starts in $C^{0,\alpha}$ which, at any positive time, is not in $C^{0,\beta}$ for any $\beta > \alpha^2$. This family of solutions is compactly supported in space and time.

Other spaces and optimal spaces: there are many layers of spaces between these Hölder spaces. He writes a tower of inclusions between $C^{1,\alpha}$ and $C^{0,\alpha}$. In fact, there is well-posedness [Pak and Park] vs. failure of well-posedness in $B^1_{\infty,\infty}$ (Zygmund class), and failure in certain Triebel-Lizorkin spaces.

Weak limit of oscillating initial data: DiPerna-Majda example… Shear flow with vorticity interface. Vortex sheet flows are irrotational off an interface. To build such solutions he takes $u_1, u_3$ as (parametrized) Heaviside functions. …wow…this talk is coming pretty fast, slides are changing…I stop typing and start to just try to keep up.

Numerical investigation of blowup for the 3d Euler: John Gibbon gave a talk a few years ago on the history of these investigations. Tom Hou and Bob Kerr are competing and disagreeing in this direction….is there a singularity…maybe not?

Question: Does the solution of the following PDE blow up?

\partial_t u - \nu \Delta u = |\nabla u|^4?

What would you try numerically to determine if it blows up or not? You can even collapse it to the corresponding 1d problem.

Postlude Discussion: Yudovich explored the DiPerna-Lions shear flow examples to see that norms measuring high regularity can grow exponentially in time. Chemin has studied the vortex patch and shown some measures of regularity of the boundary of the patch grow doubly exponentially fast. It was not explicitly clear to me yet how to relate Chemin's rough patch boundary example to the growth of norms measuring regularity of the solution. Also, Chemin's examples emerge from non-smooth initial data. I remain interested in the question: Does there exist nice data for 2D Euler which evolves with high regularity norms growing doubly exponentially?

Benoit Grébert: Hamiltonian Interpolation for Approximation of PDEs (joint work [Grébert-Faou] with Erwan Faou)

Take a PDE with solution $u$. Consider a numerical approximation $u^n$ built with a symplectic integrator which approximates $u(nh)$. We build a Hamiltonian $H_h$ such that $$u^n = \Phi_{H_h}^{nh}(u_0) + \text{very small}.$$ I am concerned with the long time behavior of the numerical trajectory; a toy illustration of what "long time behavior" means here follows.
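(A minimal numerical sketch, not from the talk: symplectic vs. non-symplectic integration of the pendulum. The parameter choices are mine; the point is that the symplectic scheme nearly conserves a modified Hamiltonian, so its energy error stays bounded over very long times while the non-symplectic scheme drifts.)

```python
# Toy backward-error-analysis demo (illustrative; not from the talk).
# Pendulum H(p, q) = p^2/2 - cos(q).  Symplectic Euler nearly conserves a
# "modified" Hamiltonian H_h, so its energy error stays bounded for very
# long times; explicit Euler shows a steady drift instead.
import numpy as np

def max_energy_error(step, p, q, h, steps):
    E0 = 0.5 * p**2 - np.cos(q)
    worst = 0.0
    for _ in range(steps):
        p, q = step(p, q, h)
        worst = max(worst, abs(0.5 * p**2 - np.cos(q) - E0))
    return worst

def symplectic_euler(p, q, h):
    p = p - h * np.sin(q)      # kick first ...
    return p, q + h * p        # ... then drift with the *updated* momentum

def explicit_euler(p, q, h):
    return p - h * np.sin(q), q + h * p   # both updates use the old values

h, steps = 0.05, 200_000       # integrate to time 10^4
print("symplectic Euler:", max_energy_error(symplectic_euler, 0.0, 1.0, h, steps))
print("explicit Euler:  ", max_energy_error(explicit_euler, 0.0, 1.0, h, steps))
```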
My concern right now is not in estimating the quality of the approximation. Instead, I want to understand the numerical flow.

1. Finite dimensional Context (ODE)
2. PDE Context
3. Ideas of the proof (time permitting)

Finite Dimensional Context

We go back to Moser's theorem: a discrete symplectic map close to the identity can be approximated by a Hamiltonian flow. Consider an analytic symplectic map $R^{2n} \ni (p,q) \longmapsto \Psi(p,q) \in R^{2n}$ with $\Psi = Id + O(\epsilon)$. Then $\exists ~ H_\epsilon$ such that $$\Psi = \Phi_{H_\epsilon}^\epsilon + O(e^{-\frac{1}{c\epsilon}}).$$ ([Moser 1968], [Benettin-Giorgilli 1994])

Numerical Context: Suppose I have a Hamiltonian ODE system $(\dot{p}, \dot{q}) = X_H(p,q)$ and an associated numerical discrete-time-step symplectic integrator $(p_n, q_n) = \Psi_h^n(p_0, q_0)$. We then have that $\Psi_h = \Phi_{H_h}^h + O(e^{-1/ch})$. We obtain that $H_h(p_n, q_n) = H_h(p_0, q_0) + O(n e^{-1/ch})$. So, we are observing that the modified energy is essentially conserved for exponentially long times. Backward Error Analysis.

PDE Context

$$H = H_0 + P$$

Here we imagine $H_0$ is the linear part and $P$ is the nonlinear part. As an example, consider the cubic NLS on $T^d$. We can treat other equations as well. Let's recall the Hamiltonian formalism in the Fourier variables: expand $u$ to get $u = \sum \xi_j e^{ijx}, ~ \overline{u} = \sum \eta_j e^{-ijx}$. We can then write, for each $j \in Z^d$,

\dot{\xi}_j = -i \frac{\partial H}{\partial \eta_j}, \quad \dot{\eta}_j = i \frac{\partial H}{\partial \xi_j}.

For the cubic NLS case, we obtain

H = \sum |j|^2 \xi_j \eta_j + \sum^* \xi_{k_1} \xi_{k_2} \eta_{l_1} \eta_{l_2}

where $\sum^*$ is the sum over all the parameters subject to the constraint $k_1 + k_2 = l_1 + l_2$. The problems we face here, in passing from the ODE to the PDE context, are that the linear part is unbounded and that we have infinitely many dimensions.

Splitting Method: $$\Phi_{P+H_0}^h \thicksim \Phi_P^h \circ \Phi_{H_0}^h \stackrel{?}{=} \Phi_{H_h}^h.$$

First naive idea: use the Baker-Campbell-Hausdorff formula. We can then expand as a Lie series… to write $$\Phi_P^h \circ \Phi_{H_0}^h = e^{h\mathcal{L}_P} e^{h\mathcal{L}_{H_0}} = e^{h\mathcal{L}_{H_h}}$$ with $H_h = H_0 + P + \frac{h}{2}\{P, H_0\} + \dots$. To proceed, we would need conditions of the form $\text{small} = h^N C(N, \|\text{num. sol.}\|_{H^N})$. NOT FAIR! So we need to work harder.

First Idea: Replace $hH_0$ by $A_0$ by cutting off to low frequencies. We can split and impose the CFL condition. Midpoint + split. He considers different cutoffs. We then consider $\Phi_P^h \circ \Phi_{A_0}^1$.

Second Idea: Use the Wiener algebra, the space of functions with Fourier coefficients in $l^1$.

Theorem (Grébert-Faou): For the approximation scheme $\Phi_P^h \circ \Phi_{A_0}^1$ there exists a (polynomial) modified energy $H_h$ such that

\| \Phi_P^h \circ \Phi_{A_0}^1 (\xi, \eta) - \Phi_{H_h}^h(\xi, \eta) \|_{l^1} \leq h^{N+1} (cN)^N

uniformly for $\|(\xi, \eta)\|_{l^1} \leq M$. So, assuming that the numerical trajectory is bounded in $l^1$ (as opposed to the stronger claim that it is bounded in $H^k$ for $k$ large), then

H_h(u^n) = H_h(u_0) + C n h^{N+1}.

Of course, I have to explain: what is $N$? This is related to a regularization condition. We know that $N = \frac{r-2}{r_0 - 2}$ where $r_0$ is the degree of $P$ (so 4 for cubic NLS).
The parameter $r$ is determined by the condition: $\forall ~ j = 1, \dots, r$ and for any $j$-tuple of integers $(k_1, \dots, k_j) \in Z^d$, we have $$|\lambda_{k_1} \pm \lambda_{k_2} \pm \dots \pm \lambda_{k_j}| \leq 2\pi.$$ CFL: $|\lambda_k| \leq C$. He describes some examples where $N = 3, 4$ and $N = 7$. For cubic NLS, we end up obtaining $$H_h = \frac{1}{h} A_0 + Z_1 + h Z_2 + \dots$$ with

Z_1 = \sum^* \frac{e^{i(\lambda_{k_1} \pm \lambda_{k_2} \pm \dots \pm \lambda_{k_j})}}{e^{i(\lambda_{k_1} \pm \lambda_{k_2} \pm \dots \pm \lambda_{k_j})} - 1}.

You can now see how the zero divisor issue emerges and is resolved.

Sergei Kuksin (École Polytechnique): Nonlinear Schrödinger Equation

We consider Hamiltonian PDE. This is of course very interesting. In physics, there is a class of PDEs which is also of interest: Hamiltonian PDE + small damping + small forcing. Why is it so important?

1. This class contains a very important equation: Navier-Stokes.

\dot{u} + (u \cdot \nabla) u + \nabla p = \epsilon \Delta u + \text{force}; ~ \nabla \cdot u = 0.

We are interested in cases $d = 2, 3$. For $d = 3$, this problem seems impossible. So, let's collapse to the 2d case.

2. Nonlinear Schrödinger equation with some damping and forcing:

\dot{u} + i \Delta u - i |u|^2 u = \epsilon \Delta u + \text{force}.

Similarly, we might want to study the PKdV equation

\dot{u} + u_{xxx} + u u_x = \epsilon u_{xx} + \text{force}.

We are interested in the small viscosity $\epsilon \ll 1$ and $t \rightarrow \infty$ extremes. At least we want to study $t \gtrsim \epsilon^{-1}$. Two papers on my web page. We introduce the slow time $\tau = \epsilon t$.

Perturbations of linear Hamiltonian PDEs:

\frac{\partial u}{\partial \tau} + i \epsilon^{-1} (-\Delta u + V(x) u) = \Delta u - \gamma_R |u|^{2p} u - i \gamma_I |u|^{2q} u + (\text{random force}).

Both parameters $\gamma_R, \gamma_I$ are positive and satisfy $\gamma_R^2 + \gamma_I^2 = 1$. The parameters $p, q$ are natural numbers, possibly 0. We will look at the case $d = 1$ on $x \in [0, \pi]$ with Dirichlet boundary conditions. Some more information about the random force:

(\text{random force}) = \frac{d}{d\tau} \sum_{j=1}^\infty b_j \beta_j(\tau) e_j(x).

Here the $\beta_j$ are standard, independent, complex-valued Brownian motions. We will work in the Sobolev space $H^2$.

Theorem 1: If $u_0 \in H^1$ then $\exists ~!~ u^\epsilon(\tau, x)$ such that

\mathbf{E} \Big( \|u(\tau)\|_1^2 + \int_0^\tau \|u(s)\|_2^2 \, ds \Big) < \infty.

Let $u_0^\omega \in H^1$ be random.
• Let $\mathcal{P}(u_0^\omega) = \mu$ denote the measure in $H^1$.
• Calculate $u^\omega(\tau)$.

Definition: A measure $\mu$ is called a stationary measure if $\forall ~ \tau$ we have $\mathcal{P}(u(\tau)) = \mu$.

Bogolyubov-Krylov: A stationary measure almost always exists.

Theorem 2 (Hairer, Odasso, AS): If $b_j \neq 0 ~ \forall ~ j$ then $\exists ~!$ stationary measure $\mu^\epsilon$. For any solution $u(\tau)$, we have

\mbox{dist}(\mathcal{P}(u(\tau)), \mu^\epsilon) \rightarrow 0 ~ \mbox{as} ~ \tau \rightarrow \infty.

The measure $\mu^\epsilon$ depends upon the force but not on the data.

Fourier Transform: For the operator $A = -\Delta + V(x)$ consider the eigenfunctions $\phi_1, \phi_2, \dots$ with associated eigenvalues $\lambda_1, \lambda_2, \dots$. Assume that

1. $\lambda_1 > 0$
2. $\lambda \cdot s \neq 0 ~ \forall s \in {\mathbb{Z}}^\infty, ~ 0 < |s| < \infty$.

For any $u \in H^1$, we can expand $u$ w.r.t. the basis and denote the associated coefficients by $v_1, v_2, \dots$.
The Fourier transform is the map $u \longmapsto v$ and the inverse goes the other way. We can pass from $v_j$ to polar coordinates $I_j, \phi_j$. He recasts the dynamics w.r.t. the polar coordinate variables and starts speaking about averaging lemmas.

Effective Equations: These objects are somehow analogs of the kinetic equations in the theory of weak turbulence….some notation….I want to understand this better….an average of the nonlinear potential energy term. This is a semilinear heat equation with a nonlocal term. The term proportional to $\gamma_I$ does not influence the effective equation. This equation really takes complete control when $\epsilon$ is very small. The advance obtained here uses randomness in the forcing. "I expect that the effective equation is relevant even without the randomness but I don't know how to prove it."

J. Colliander: Numerical Simulations of Radial Supercritical Defocusing Waves

F. Bouchet (ENS-Lyon): Invariant measures (with A. Venaille, E. Simonnet, H. Morita, M. Corvellec)

Physical phenomena. I am interested in self-organization in turbulent flows. Examples: stripes and spots on Jupiter. Ocean currents. Height differences in the ocean surface. Stable jets. I will mainly speak about the 2d Navier-Stokes equation with random forcing. This is not such a good model for these phenomena. There are others that are quite similar that might be better to describe the phenomena listed above, like the quasigeostrophic and shallow water layer models. Equilibrium will be related to 2D Euler. For 2D, we have the vorticity-stream formulation. Steady solutions to the Euler equation satisfying $\omega = f(\psi)$ or, equivalently, ${\bf{u}} \cdot \nabla \omega = 0$, play a crucial role in describing the dynamics. Degeneracy: what is the selection mechanism leading to $f$? The main advance is that $f$ can be predicted using equilibrium statistical mechanics ideas.

1. Invariant measures of the 2D Euler equation □ Equilibrium stat mech □ applications of equilibrium stat mech □ invariant measures of the 2d Euler equation
2. Irreversible relaxation of the 2D Euler equations □ irreversibility in fluid mechanics □ …..slide switched….ack
3. 2D stochastic Navier-Stokes equation: non-equilibrium phase transitions

Statistical mechanics for 2d and geophysical flows. Statistical equilibrium: a very old idea. Famous contributions:
• Onsager 1949
• Joyce-Montgomery 1970
• Caglioti-Marchioro-Pulvirenti-Lions 1990
• Robert-Sommeria 1991
• Miller 1991
• Eyink-Spohn 1994

Robert-Sommeria-Miller (RSM) theory: the most probable vorticity field. We want to measure the number of microscopic fields $\omega$ which correspond to a probability $\rho$. The number of such configurations is quantified by the Boltzmann-Gibbs entropy. This is the mixing entropy. Microcanonical RSM variational problem: critical points are stationary flows of the QG model.

Microcanonical measures for Hamiltonian systems:
• Hamilton's equations
• Liouville theorem
• Define the microcanonical measures, which are the natural invariant measures taking into account the constraints in the dynamics.

Detailed Liouville theorem for 2D Euler: Lee 1952, Kraichnan JFM 1975, Robert 2000. We want to take into account the Casimirs and the constraints. He describes a limiting process based on Galerkin approximations. Mean field behavior? Large deviations? Sanov theorem? ……lots of discussion…..ideas vs. proofs…..nontrivial…what's going on? Audience is confusing me…speaker seems clear.
Young measures….entropy… The claim is that the theory he and his collaborators have developed explains the emergence and stability of coherent structures like the Great Red Spot on Jupiter. Similar statements about the ocean.

Are microcanonical measures invariant measures for the 2D Euler dynamics? Is the set of invariant Young measures for the 2D Euler dynamics larger than the set of microcanonical measures?

Two conjectures:
• Weak perturbations of the 2D Euler equations close to steady states converge to invariant Young measures.
• The 2D Euler equations converge to invariant Young measures.

Wave breaking is an irreversible mechanism in fluids that does not require viscosity.

Sebastian Reich: Data Assimilation

(Diagram: Nature; Physical Laws; Measurements; Model; Optimal prediction.) He drew arrows between these frameworks of understanding and highlights the assembly of processing at the data assimilation level. Sequential data assimilation in a nutshell: Model + Observations $\longmapsto$ Prediction.

Ingredients of Data Assimilation:
1. Mathematical and numerical model: solutions and their uncertainties caused by approximation errors as well as state and parameter uncertainties.
2. Data/observations with measurement as well as approximation (forward operator) errors -> inverse problems.
3. Numerical approximations to the data assimilation problem within a statistical (Bayesian) framework; assessment of the induced predictions and their uncertainties.

Mathematical problem statement: Consider an evolution problem for which the initial state is treated as a random variable with some given probability density function. For simplicity assume a finite-dimensional phase space. The uncertainty in the initial conditions will generally lead to unpredictability over long time intervals. Weather prediction is a nice example. To counterbalance this increase in uncertainty, we collect observations at discrete times, subject to some random measurement errors. We wish to find a trajectory that makes optimal use of the available information in terms of initial data, observations and model dynamics. The task of data assimilation is to combine the model and the measurements to make the optimal prediction.

Theoretical solution:
i) Model dynamics: lift the dynamics to the level of the Liouville equation on the probability distribution function.
ii) Data assimilation: assimilate data using Bayes' theorem,

\pi(x|y) \thicksim \pi(y|x) \times \rho_{pr}(x).

Here $\pi(y|x)$ is the known conditional PDF (likelihood) for observing $y$ given a state $x$. Given an actual measurement, we can correct and proceed. Under Bayes' theorem, we always reduce uncertainty.

Ensemble prediction… ack….slides are changing fast. Particle filter: we give better weight to points that are closer to the observed data. If we repeat this a few times, there will be very few particles contributing to the final answer.

Assimilation as a continuous deformation of probability: McKean-Vlasov. We can think of Bayes' theorem as an optimal transportation problem. Crisan-Xiong 2010 did something similar in the context of the continuous-time filtering problem; Otto 2001 for an application in gradient flow dynamics. We started with an ODE, spoon-fed the measurement data to update the dynamics, and encounter a more complicated dynamical description of the system: a McKean-Vlasov system, a modified Liouville equation, which is closed by an elliptic PDE. Numerical filter implementations will now rely on appropriate approximations to the elliptic PDE.
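(To make the particle-filter reweighting step concrete, here is a minimal bootstrap filter sketch for a toy one-dimensional model. The model, noise levels and resampling rule are illustrative assumptions of mine, not the speaker's.)

```python
# Minimal bootstrap particle filter (illustrative sketch; toy 1-d model).
# Predict with the model (a Monte Carlo stand-in for Liouville transport),
# reweight particles by the Gaussian likelihood of the observation (Bayes),
# then resample so that low-weight particles die out.
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 50
sig_model, sig_obs = 0.1, 0.3

def model_step(x):
    return 0.9 * x + sig_model * rng.standard_normal(x.shape)

truth = 1.0
particles = rng.standard_normal(N)                  # prior ensemble
for t in range(T):
    truth = 0.9 * truth + sig_model * rng.standard_normal()
    y = truth + sig_obs * rng.standard_normal()     # noisy observation

    particles = model_step(particles)               # prediction step
    w = np.exp(-0.5 * ((y - particles) / sig_obs) ** 2)   # Bayes reweighting
    w /= w.sum()
    # resample: particles far from the data contribute almost nothing
    particles = particles[rng.choice(N, size=N, p=w)]

print("truth:", truth, "filter mean:", particles.mean())
```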
We use the ensemble of solutions to define an appropriate statistical model and then solve via numerics or by quadrature. Obvious choices for the numerical version of $\rho$ include a Gaussian PDF parameterized by the ensemble mean and covariance matrix (ensemble Kalman filter) or Gaussian mixture models.

N. Faou: 2d Submarines

2D Euler equation on the 2-torus….I was a bit tired and did not type notes during this talk.

These are notes from a meeting entitled Advanced Numerical Studies in Nonlinear PDEs in Edinburgh, Scotland.

Walter Craig (McMaster): Water Wave Interactions

I'm an analyst but I'm going to talk about numerics and experiments as well as analysis. We will discuss the problem of water waves and then I'll talk about two specific settings in which the theory has led to good and quite elegant numerics, and the numerics have started to answer some questions. (joint work with P. Guyenne and C. Sulem)

• Free surface water waves
• Hamiltonian PDEs
• Periodic traveling wave patterns
• Solitary wave interactions
• The KdV scaling limit

Free surface water waves: Euler's equations of hydrodynamics, incompressible and irrotational flow. This is therefore given as a potential flow. The irrotational assumption is really an oceanographer's assumption. Of course, there is vorticity, but we follow the models of oceanographers. The fluid domain is $-h < y < \eta(x,t)$. So, the domain is changing. Free surface boundary conditions hold on $y = \eta(x,t)$.

Zakharov's Hamiltonian
• The energy functional $$H = K + P$$

K = \int_x \int_{-h}^{\eta(x)} \frac{1}{2} |\nabla \phi|^2 \, dy \, dx, \quad P = \int_x \frac{g}{2} \eta^2 \, dx.

This could also include surface tension effects.
• Zakharov's choice of variables, $z = (\eta(x), \xi(x) = \phi(x, \eta(x)))$, for which we consider $\phi = \phi[\eta, \xi](x,y)$.
• Express the energy in terms of $\xi$ and $\eta$. This involves the Dirichlet-Neumann operator $G(\eta)$.

Dirichlet-Neumann operator
• Laplace's equation on the fluid domain: $\Delta \phi = 0$ subject to a bottom Neumann boundary condition and free surface boundary data $\phi(x, \eta(x)) = \xi(x)$, for which the D-N operator is given by

\xi(x) \longmapsto \phi(x,y) \longmapsto (1 + |\nabla_x \eta|^2)^{1/2} \, N \cdot \nabla \phi =: G(\eta) \xi(x).

• In these coordinates, we can rewrite the boundary conditions in a new (and nicer) form. This reexpresses the water wave problem as a Hamiltonian system in Darboux coordinates.

Hamiltonian PDEs
• KdV is a Hamiltonian PDE with a different symplectic structure.
• Other Hamiltonian PDEs: □ shallow water equations □ Boussinesq □ KP □ NLS □ Dysthe equation
Many of these problems arise in scaling limits of the water wave problem.

Lemma (Properties of the D-N operator):
1. $G(\eta) \geq 0$ and $G(\eta) 1 = 0$.
2. $G(\eta)^* = G(\eta)$, Hermitian symmetric.
3. $G(\eta): H^1_\xi \rightarrow L^2_\xi$ is analytic in $\eta$ for $\eta \in C^1$. There is an operator-valued power series expansion of $G(\eta)$ (using a theorem of Christ-Journé 1987).
4. Some explicit calculations of the Taylor expansion (I couldn't keep up….)
5. Conservation laws: □ Mass: $M = \int \eta \, dx$ (he shows the calculation using properties of $G$) □ Momentum: similar calculation □ Energy: easy, since the Poisson bracket of $H$ with itself vanishes.
6. Taylor expansion of the Hamiltonian
7. Linearized equations; comparison with the harmonic oscillator.

Periodic traveling wave patterns
• Can I find traveling wave solutions? $$\eta(x,t) = \eta(x - tc); \quad \xi(x,t) = \xi(x - tc)$$
• Spatially periodic, $\Gamma \subset {\mathbb{R}^{d-1}}$:
\eta(x + \gamma, \cdot) = \eta(x, \cdot), \quad \xi(x + \gamma, \cdot) = \xi(x, \cdot), ~ \forall \gamma \in \Gamma.

On such domains, we can use the Fourier transform. Rk: Notice this is a mathematician imposing a period rather than the physics making that selection. More can be said in this direction, but let's proceed this way. Rk: These (time independent) traveling wave patterns can be imagined to emerge in transient interactions in seas. The nonlinear actions create large amplitudes and this might be related to the phenomenon of freak waves.

Equations for traveling waves: periodic traveling wave patterns are critical points of the Hamiltonian on the variety $I = const$, with Lagrange multiplier $c \in {\mathbb{R}^{d-1}}$. This leads to a bifurcation problem.

Brief history (dimension $d=2$):
• Levi-Civita 1925; existence of traveling waves
• Struik 1926; traveling waves case
• Zeidler 1971
• Beale 1979
• Jones-Toland 1985

Brief history (dimension $d=3$):
• Reeder-Shinbrot 1981
• Sun 1986
• Craig-Nicholls 2000
• Iooss-Plotnikov-Toland 2000 (small divisor problem)

He shows a picture from the wave tank at Penn State. He then shows some numerics which are trying to model those observations and they look beautiful.

Kuksin Question: Stability of these patterns? Craig Answer: This is a very good question. I don't know results like that. This is related to Benjamin-Feir. McLean showed instability for $d = 3$. Some further discussion….We need the Bloch theory of stability for these wave patterns. This appears to be difficult analytically so might need some numerical studies at first. There are instability zones….

Solitary wave interactions: solitary waves in 2 dimensions (Friedrichs-Hyers 1954, Amick-Fraenkel-Toland 1980s).
• Head-on collisions of solitons. The numerics reveal some inelasticity in the collision. We'd like to understand those. If we make the amplitude of the solitons bigger, the dispersive ripples are more visible.

The KdV scaling limit. Titi's Question: Can we reduce to the surface equations including rotation? Craig's Answer: Yes and no. You can make a rotation depending purely on $y$ and impose that. Then it is reducible. But this is rather artificial. There is stuff that happens in the middle which is not a surface effect. Therefore, this problem requires a more complete analysis of the Euler equation and will not collapse to a system on the surface.

Sergey Nazarenko (Warwick): Assumptions, Techniques, Challenges in Wave Turbulence

This is not so much about new results. Instead, this is an attempt by a physicist to explain wave turbulence ideas being explored by physicists to mathematicians. My view is that there is a lot of interesting work to be done. Lots of open problems….

What is wave turbulence? He shows a picture of a relatively calm seashore from Nice. He emphasizes there is a wide range of scales in these problems. WT is a statistical system of nonlinear waves.
• Water waves
• Waves in rotating and stratified fluids (internal and inertial waves, Rossby waves)
• Plasma waves
• Waves in Bose-Einstein condensates
• Kelvin waves on quantized vortex filaments
• MHD turbulence in interstellar turbulence and solar wind
• Nonlinear optics
• Solids: phonons, spin waves. Kinetics of phonons in weakly anharmonic crystals is a first example of study in this direction (1920s). I didn't catch the name….

He shows a picture of a wave tank of Lukaschuk. Waves in fusion plasmas. Shows a picture of a Tokamak.
Drift wave turbulence causes anomalous heat and particle loss, a major problem for fusion. The devices have grown larger and larger basically to carry out the confinement for a longer period of time. MHD turbulence in astrophysics: he shows some data from the Ulysses/Swoops (Los Alamos) solar wind studies.

Bose-Einstein Condensates, Nazarenko-Onorato 2006:
• Inverse cascade - condensation
• Condensate strongly affects WT

Quantum Turbulence (see Lvov et al. 2007) (superfluid turbulence):
• Kelvin waves on quantized vortex filaments
• Interaction with hydro eddies (vortex bundles) is important

Optical Turbulence
• This project studies nonlinear corrections (coming from the optical physics) which are included beyond the 1d NLS model. Kuksin Question: Which corrections? Can you write them down? Nazarenko: Something like a DNLS correction…not so clear.

Ingredients in the approach: He writes $NLS_3^\pm (T^d)$ and comments that this is a physically reasonable model but we are really interested in the study in infinite space with finite energy density. He reexpresses the NLS equation in Fourier language. Set of wave modes: amplitudes and phases. N-mode joint probability density function. Some notation….probability…sectors in the wave modes setting.

Random Phase (RP) and Random Phase Amplitude (RPA) systems. RP: all phases are independent random variables, uniformly distributed on $S^1$. RPA:
1. All amplitudes and all phases are independent random variables.
2. All phases are uniformly distributed on $S^1$.

Note: RPA does not mean Gaussian. Nevertheless, we have obtained successful closures without assuming Gaussian statistics.

Frog Jumps!
• expanding in small nonlinearity
• assuming RP at $t=0$
• taking the limit of a large box followed by the limit of small nonlinearity. (The order of these steps is important.)

Evolution of the joint PDF? We can derive the evolution equation under these assumptions. The derivation is rather systematic; in fact it is perhaps rigorous.

Mathematical Challenges:
• WT is formally derived for $t=0$.
• Does it work at the long time of nonlinear evolution?
• Does RPA survive over this time?
• Adding forcing and dissipation: will WT describe the steady state?

Hmmmm….This RPA condition at $t=0$ reminds me a bit of the assumption of product wave function in the QMB theory. The dynamics in the Hartree derivation might drive the multiparticle wave function away from the product case. Here we have a dynamic that might drive us away from the RPA condition.

Evolution of 1-mode PDF. Kinetic equation (Hasselmann 1962). Kolmogorov-Zakharov state:
• Explained a steady state spectrum corresponding to energy cascade.
• Exact solution of the asymptotic closure.

Numerics and Analysis of KE:
• What is the role of KZ solutions with respect to the thermodynamic Rayleigh-Jeans state?
• Similar issues for the classical Boltzmann equation.

Zakharov was awarded the 2003 Dirac Medal for "putting the theory of wave turbulence on a firm mathematical ground"! What is it that we want to do?

Gregor Tanner (Nottingham): A wave chaos approach towards describing the vibro-acoustic response of engineering structures (joint work with D. Chappell, Stefano Giani, Hanya Ben Hamdin, Dmitrii Maksimov)

This talk is more directed toward engineering applications. inuTech is an industrial collaborator.
• Introduction - the need for numerical short wavelength methods in vibroacoustics
• From wave equations to the Liouville equation
• Solving the Liouville equation - a boundary integral approach (Dynamical Energy Analysis - DEA)
• Tackling the mid-frequency problem - hybrid methods
• Numerical results

Aim: predicting wave intensity distributions for the vibro-acoustical response of mechanical structures. Think of a car. Companies like Bombardier and Airbus use these methods. It is a difficult problem. You want these structures to be quiet, with no noise in the interior.

Where is the problem?
• Low frequencies - wavelength around the size of the object
• Finite element method
• Boundary element method
• plane wave methods

High frequencies:
• Ray tracing
• Statistical energy analysis
• …

Mid-frequency problems:
• Structures with large variations in the local wavelength. (Large variations in the stiffness of components, i.e. body frame and side panels.)
• Hybrid methods. Try to connect exact numerical methods with the statistical methods.

Short wavelength approximations - from wave chaos to statistical methods:
• Wave chaos - short wavelength asymptotics: □ Keller □ Gutzwiller □ Berry □ Bogomolny □ Smilansky
• Nonlinear dynamics - thermodynamic formalism: □ Ruelle □ Arnold □ Sinai □ Eckmann □ Cvitanovic - chaosbook.org
• Wave transport - statistical methods in vibro-acoustics: □ Lyon - SEA (1967 paper) □ Langley - WIA □ Heron □ Weaver - diffusion equation □ Le Bot - radiative transfer

Linear wave equation. WKB ansatz. Hamiltonian equations for the amplitude and phase. Characteristics of HJ; nonlinear ODE; Liouville equation (linear). Linear wave -> WKB -> HJ equation -> Liouville equation.

Think of polygonal billiards, not necessarily convex. We want to understand the influence of a source (transmitting at frequency $\omega$) at one location on the wave amplitude at another point. He writes this as a Green's function $G(r, r_0, \omega)$. Small wavelength limit, so high frequency waves. Write things as sums over all paths. Perron-Frobenius operator… I started wondering about connections between these ideas and quantum ergodicity…

I had a nice conversation with Gregor after the break. I learned from him about microlasers. The idea is to build a circular region out of a lasing material. We energize the material somehow with hopes to excite the whispering gallery mode. The laser light propagates near the boundary but can be arranged to exit the medium by raising the curvature at a specific location. These appear to be rather hard to control to create a unidirectional beam. Since the losses take place all along the boundary, there is very little power in the output beam. Some web searching revealed an advance made by the Capasso group at Harvard.

David Dritschel (St. Andrews): CLAM, The Combined Lagrangian Advection Method (many, many collaborators)

I'll be speaking a bit about a numerical method. I'll focus mostly on the results we've obtained to understand the large scale atmospheres, like Jupiter and perhaps also the ocean. The numerical method (CLAM) emerges from a Lagrangian method from the 50s for studying fluid dynamics. Zabusky then built from these developments to develop new methods in plasmas. We've been extending these ideas to treat certain geophysical fluid flows. The atmosphere and the oceans are extremely complex, turbulent flows. Accurate computer simulation is immensely difficult to achieve.
However, much of this difficulty is inherent in the computational methods employed:
• None take direct advantage of the natural inherent Lagrangian advection of dynamical, chemical and biological tracers. (Exploit Lagrangian descriptions.)
• None seek to separate slow vortical (eddying) and fast wave-like motions and use appropriate, optimal numerical methods for each. (Slow Rossby waves interacting with fast inertial-gravity waves.)

We can build the mathematical theory of the separation into the numerical methods and this will lead to better predictions. Contour Advection (CASL), Dritschel & Ambaum 1997. Geostrophic and hydrostatic balances are basic features for describing atmospheric wave dynamics. This talk reminds me somehow of Bourgain's high/low method for proving low regularity GWP. The idea is to use the advection of the vorticity to resolve some (especially relevant) sub-grid scales.

Dugald Duncan: IDE equation
• the full IDE equation, what it looks like and where it arises
• linear part of the IDE - behaviour and approximation
• the full problem - behaviour and approximation
• examples

u_t = \sigma \int_\Omega J(x-y) [u(y,t) - u(x,t)] \, dy + f(u) \quad \forall x \in \Omega, ~ t > 0.

Typically, $f(u) = u - u^3$. This should be contrasted with the Allen-Cahn equation

u_t = \sigma \Delta u + f(u) \quad \forall x \in \Omega, ~ t > 0.

There are no spatial derivatives. Therefore, there are no boundary conditions. Instead, this is some kind of integral dynamical equation. It is similar to the Allen-Cahn equation. This equation is also related to sandpiles, neurons, phase transitions. Other variations recently: Rossi, Perez-Llanos, Andreu, Mazon, Toledo et al. They study a nonlocal version of the $p$-Laplacian.

Linear IDE:
• Ignore the nonlinear reaction term for now and take $\sigma \geq 0$ and $\Omega \subset {\mathbb{R}}$: $$u_t = Lu.$$
• $L$ is a linear operator - partly a convolution:

Lu = \int_\Omega J(x-y) [u(y,t) - u(x,t)] \, dy = J * u - \dots

…ack slide changed…. Discontinuities don't move. The solution collapses to the average value. There is a comparison principle. Snapshots of linear behavior. He does a Fourier analysis of the behavior of plane waves. Instead of having an $\omega^2$, we have

\hat{J}(\omega) - \hat{J}(0) \thicksim \frac{\omega^2}{2} \frac{d^2}{d\omega^2} \hat{J}(0).

Peter Bates and Paul Fife did some of the earliest analysis on this equation.
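(A minimal sketch of the nonlocal evolution described above, on a periodic 1-d grid. The kernel, grid, and time step are illustrative choices of mine, not from the talk; note that $Lu = J*u - u\int_\Omega J$, so the convolution can be done with an FFT.)

```python
# Sketch: u_t = sigma * (J*u - u * int(J)) + f(u) on a periodic 1-d grid.
import numpy as np

n, L = 256, 20.0
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
J = np.exp(-np.minimum(x, L - x) ** 2)   # periodic Gaussian-type kernel
Jhat = np.fft.fft(J) * dx                # convolution weights via FFT
J_total = J.sum() * dx                   # int_Omega J

sigma, dt = 1.0, 0.01
f = lambda u: u - u ** 3                 # bistable reaction term
u = 0.1 * np.cos(2 * np.pi * x / L)      # smooth initial data

for _ in range(1000):
    conv = np.real(np.fft.ifft(Jhat * np.fft.fft(u)))    # (J*u)(x)
    u = u + dt * (sigma * (conv - J_total * u) + f(u))   # explicit Euler step

print("u range after t = 10:", u.min(), u.max())
```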
{"url":"http://blog.math.toronto.edu/colliand/tag/edinburgh/","timestamp":"2014-04-21T09:39:41Z","content_type":null,"content_length":"74126","record_id":"<urn:uuid:90a9d476-04db-4598-82f8-27629ccd17c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
math prob: someone plz. answer Author math prob: someone plz. answer Ranch Hand Joined: Sep 02, 2001 I am stuck to one problem... Posts: 756 To enroll 300 students for 10 courses, 4 hrs is required. How mucg time is required to enroll X students for Y courses Count the flowers of your garden, NOT the leafs which falls away! Prepare IBM Exam 340 by joining http://groups.yahoo.com/group/IBM340Exam/ Joined: Nov 09, 2000 Moved to Programming Diversions. Posts: 6450 Ranch Hand Originally posted by Vikrama Sanjeeva: Joined: Feb 18, To enroll 300 students for 10 courses, 4 hrs is required. How mucg time is required to enroll X students for Y courses Posts: 988 1 Assumption: Registering one student for one course is the basic unit of work. So registering one student for ten courses takes the same time as registering ten students for one course each. 300 students x 10 courses for each = 3000 studentCourses 3000 studentCourse / 4 hours = 750 studentCourse/hour For any number of courses and students, just divide by this work rate. Time needed = X x Y / 750 studentCourses/hour Ranch Hand Joined: Feb 28, I got 2*X*Y/25 minutes. Posts: 986 Namma Suvarna Karnataka Ranch Hand Originally posted by Arjunkumar Shastry: Joined: Feb 18, I got 2*X*Y/25 minutes. Posts: 988 1 That's what I said. (2XY/25)minutes == (XY/750)hours And that would have clearer if I used more parentheses. I shoudl have said... Time needed = X x Y / (750 studentCourses/hour) ...which reduces to (XY hours)/(750 Student*Courses) ...or if you leave out the student and course units... (XY/750) hours [ April 14, 2005: Message edited by: Ryan McGuire ] Ranch Hand Joined: Sep 02, 2001 Thank you guyz Posts: 756 subject: math prob: someone plz. answer
{"url":"http://www.coderanch.com/t/35345/Programming/math-prob-plz-answer","timestamp":"2014-04-19T12:46:47Z","content_type":null,"content_length":"28957","record_id":"<urn:uuid:784b8949-ce65-4822-9fe3-b5fff277a5a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help Forum: A limit

Post #1 (November 1st 2006, 07:33 PM): Need serious help on a proof: Show that if $\lim (a_n/n) = L$ where $L > 0$, then $\lim_{n \to \infty} a_n = \infty$. Thanks guys.

Post #2 (November 1st 2006, 09:25 PM): Informally this is because if

\lim_{n \to \infty} a_n/n = L > 0 \quad (1)

then for large $n$, $a_n \sim nL$, and as the RHS goes to infinity, $a_n$ does as well. Now all you have to do is write out what (1) means in full and rearrange it to show that $a_n$ differs from $nL$ by arbitrarily small amounts for sufficiently large $n$.

Post #3 (November 2nd 2006, 11:26 AM): I don't understand what you mean when you say "rearrange" it by "arbitrarily small amounts."

Post #4 (November 2nd 2006, 12:15 PM):

L > 0 \quad \Rightarrow \quad (\exists N)\left[n \geq N \Rightarrow \left|\frac{a_n}{n} - L\right| < \frac{L}{2}\right] \quad \Rightarrow \quad \frac{L}{2} < \frac{a_n}{n} \quad \Rightarrow \quad n\frac{L}{2} < a_n

Post #5 (November 2nd 2006, 01:34 PM): Of course you are right, I don't mean what I say at all, do I. What I do mean is that for every $\epsilon > 0$, there exists an $N$ such that for all $n > N$:

-\epsilon < a_n/n - L < \epsilon, \quad nL - n\epsilon < a_n < nL + n\epsilon, \quad n(L - \epsilon) < a_n < n(L + \epsilon),

so if we choose a small enough $\epsilon$, as $n \to \infty$, $a_n$ is trapped between two sequences both of which go to infinity.
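(Consolidating the two replies into one clean statement; my wording, not from the thread.)

```latex
\textbf{Claim.} If $\lim_{n\to\infty} a_n/n = L$ with $L > 0$, then $\lim_{n\to\infty} a_n = \infty$.

\textbf{Proof.} Apply the definition of the limit with $\varepsilon = L/2$: there is an $N$
such that $|a_n/n - L| < L/2$ for all $n \ge N$. In particular $a_n/n > L/2$, so
$a_n > nL/2 \to \infty$ as $n \to \infty$. \qquad $\blacksquare$
```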
{"url":"http://mathhelpforum.com/calculus/7109-limit.html","timestamp":"2014-04-17T22:33:04Z","content_type":null,"content_length":"44540","record_id":"<urn:uuid:b0b377dd-d2d3-4e28-8c6e-5f4d8ddc4c94>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
My Biased Coin

While people are all talking about , another new conference was also being whispered about in the hallways at STOC. I'm happy to say that SLOGN now has a call for papers up (sent to me by Ryan O'Donnell and Rocco Servedio). It promises to be something different entirely.

8 comments:

Excellent !! This sounded just right for the paper I am finishing up, all about shaving off a sublogarithmic factor. The location seems a bit inaccessible, though. :(

It seems like you guys forgot one thing. If there is any mention of an experiment or implementation in the submission, then the submission would be immediately rejected. Of course, if they need to implement their algorithm or do any experiments, then the theory is not strong enough!

I do not understand this sentiment against writing thirty-page papers which use deep mathematics. It should now be obvious to everyone that any paper settling or even making non-zero progress on the central questions of TCS (namely, those around the P/NP problem) will be considerably longer than thirty pages and will involve rather deep mathematics. While I understand that not everyone will want to work on such questions, I think it has suddenly become fashionable to uphold simplicity as a prime virtue, and technical mastery as some kind of showmanship which should be exposed and vilified. Short simple-minded papers in TCS which contain little or no mathematics are usually mostly drivel.

To Anon 7:18pm, if any paper proposes an _algorithm_ then isn't that clearly a ground for its rejection? After all, can algorithms be deep mathematics?

Previous winners of the SLOGN "Best Paper" award:

Proof Verification and the Hardness of Approximation Problems, by S. Arora, C. Lund, R. Motwani, M. Sudan, M. Szegedy, 54 pages. "For shaving a sqrt(log n) factor off the query complexity in …"

Expander Flows, Geometric Embeddings and Graph Partitioning, by S. Arora, S. Rao, U. Vazirani, 30 pages. "For improving on the 16-year-old approximation ratio for Sparsest-Cut by a sqrt(log n) factor."

Triangulating a Simple Polygon in Linear Time, by B. Chazelle, 40 pages. "For shaving a log^* n factor off the previous best running time."

Free Bits, PCPs, and Non-Approximability -- Towards Tight Results, by M. Bellare, O. Goldreich, M. Sudan, 113 pages. "For the bound of 0.55218507 in Case 2.2.1.3 on page 66."

A Theory of Alternating Paths and Blossoms for Proving Correctness of the O(sqrt{V} E) General Graph Matching Algorithm, by V. Vazirani, 57 pages. "For providing full details of the seminal fast matching algorithm of Micali and Vazirani."

The Complexity of Computing a Nash Equilibrium, by C. Daskalakis, P. Goldberg, C. Papadimitriou, 70 pages. "For pushing PPAD-completeness down from 4- to 3- to 2-player games."

How are we supposed to submit? Does anyone know if they'll use EasyChair?

Anon #5: I think if the algorithm is so unnatural that no one would consider implementing it, or has prohibitively high O(1) constants, it would be considered.

"I think if the algorithm is so unnatural that no one would consider implementing it or has prohibitively high O(1) constants it would be considered."

I think though that amongst algorithmic results, only those which prove the existence of an algorithm with a certain complexity, but are unable to explicitly describe it, would have a chance to be accepted. A proof that having an explicit description would imply that either the Hodge conjecture or the GRH is false will seal the deal.
{"url":"http://mybiasedcoin.blogspot.co.il/2009/06/slogn.html","timestamp":"2014-04-21T07:36:16Z","content_type":null,"content_length":"82842","record_id":"<urn:uuid:237b2d77-a625-4b12-a791-bec02fef096a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2008

[00177] [Date Index] [Thread Index] [Author Index]

Re: Re: Eliminating common factors?

• To: mathgroup at smc.vnet.net
• Subject: [mg93409] Re: [mg93371] Re: Eliminating common factors?
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Thu, 6 Nov 2008 04:07:38 -0500 (EST)
• References: <gemjk2$4v5$1@smc.vnet.net> <gepb1g$sg6$1@smc.vnet.net> <200811050954.EAA28041@smc.vnet.net>

On 5 Nov 2008, at 18:54, AES wrote:

> In article <gepb1g$sg6$1 at smc.vnet.net>, SigmundV <sigmundv at gmail.com> wrote:
>> Well, any of Expand, Simplify, Together, Apart ... will do (at least with the actual expression you posted). However, why did you post the expression in TeX syntax? Wouldn't it be easier to type 4(a/4+b/4 c)?
>
> Since this has been asked twice: I just wanted to emphasize that the output was appearing as two *display fractions*, i.e. something like
>
>          a       b
>    4 ( ----- + ------- )
>          4      4 c
>
> in case it made any difference; and I wanted to do this without messing with typing something like the above, and worrying about whether it would display properly with different fonts that were or were not monospaced.
>
> [The math and computer gurus posting on this group sometimes seem to assume everyone will know their arcane lingo; I was assuming they would know at least some basic TeX syntax, since it's a lingua franca for every colleague (and grad student) that I know.]

This argument might carry some weight if your question concerned some general mathematical or computer science problem, for which purpose it is not unreasonable to assume that TeX remains a lingua franca. But your question concerned specific Mathematica input, so anybody who might want to answer it would have to type in your example starting from scratch (with all the possible problems of misinterpretation or typos that this entails) instead of simply copying your input from your post and pasting it into Mathematica. I don't think anyone here (or almost anyone) answers Mathematica-related questions without actually running posted examples in Mathematica (and if they do, then I think they would actually be more helpful if they refrained from posting any answers), so the issue of

> whether it would display properly with different fonts that were or were not monospaced

is completely irrelevant.

Andrzej Kozlowski
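(For readers without Mathematica, here is a rough sympy analogue of the simplification being discussed; this is my illustration, not part of the thread, and sympy's expand/together only loosely correspond to Mathematica's Expand/Together.)

```python
# Illustrative sympy analogue of "eliminating common factors" in
# the expression 4(a/4 + b/(4 c)) from the thread above.
from sympy import symbols, expand, together

a, b, c = symbols('a b c')
expr = 4 * (a/4 + b/(4*c))

print(expand(expr))    # -> a + b/c : the common factor 4 cancels
print(together(expr))  # -> (a*c + b)/c : single-fraction form
```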
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Nov/msg00177.html","timestamp":"2014-04-16T04:24:14Z","content_type":null,"content_length":"27621","record_id":"<urn:uuid:bddce507-9416-409f-bed3-a90411b8a4b9>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluation of Hybrid Distributed Least Squares for Improved Localization via Algorithm Fusion in Wireless Sensor Networks

Publication: Sensors & Transducers
Author: Behnke, Ralf
Date published: March 1, 2012

(ProQuest: ... denotes formulae omitted.)

1. Introduction

Recent technological advances enabled the development of tiny wireless devices which sense environmental phenomena, compute simple tasks and exchange data among each other via wireless communication. Interconnected assemblies of such devices, called Wireless Sensor Networks (WSNs), are commonly used to observe large inaccessible areas. In many applications of WSNs, knowledge of nodes' locations is mandatory for a meaningful interpretation of measured data. In addition, location-awareness is also necessary for applications using geographic routing [1, 2] or location based clustering [3]. Due to existing limitations in terms of size, financial cost and energy consumption, local positioning within the network is preferred over utilizing Global Navigation Satellite Systems (GNSSs) like GPS [4]. Therefore, the presence of location-aware sensor nodes, referred to as beacon nodes, is typically assumed. The remaining nodes, which we refer to as blind nodes, are assumed to use communication and any kind of distance estimation or neighborhood information to estimate their positions with the help of beacon nodes.

Localization algorithms can be divided into centralized or decentralized on the one hand, and fine-grained or coarse-grained on the other hand. Coarse-grained approaches like Centroid Localization (CL) [5], Weighted Centroid Localization (WCL) [6] and Adaptive Weighted Centroid Localization (AWCL) [7] often abstain from exact distances, require less communication and computation, and provide lower precision estimates. In contrast, fine-grained approaches use costly computations and distance estimations to achieve localization with high precision. High precision and low complexity were first combined by Distributed Least Squares (DLS) [8], which splits the costly localization calculation into precalculation and postcalculation. Independent from a specific blind node, the complex precalculation is performed on a high performance sink. The remaining postcalculation is less complex and performed locally on the resource-constrained blind nodes. The concept of DLS has been adapted by scalable DLS (sDLS) [9], which enables the idea of DLS to be used in large WSNs. In contrast to DLS, sDLS provides costs of computation and communication, incurred on blind nodes, which are independent from network size, i.e. independent from the total number of beacon nodes. This is achieved by use of individual precalculations instead of one global precalculation. A fundamental enhancement is given by sDLS with normal equation (sDLSne) [10], which significantly reduces the cost of computation by circumventing costly updates, introduced with sDLS.

Using sDLS, blind nodes are assumed to choose one precalculation out of several precalculations provided by neighboring beacon nodes, according to their distances. Commonly, the set of beacon nodes included in the chosen precalculation differs from the set of beacon nodes within a blind node's communication range. This causes suboptimal localization accuracy and offers possibilities for further improvements. The present work combines multiple position estimates, based on sDLSne, by use of coarse grained localization techniques, to improve localization accuracy.

The remainder of the paper is organized as follows.
Section 2 covers basic information about sDLS algorithms. In section 3, the new hybrid localization approach is presented in various variants, using several optimizing parameters. Section 4 covers performed simulations. Simulation results are presented in section 5. Finally, the presented work is summarized in section 6.

2. Related Work

The DLS algorithm was developed to diminish the tradeoff between precision and cost of localization; it provides localization with high precision and low cost [8]. The basic idea of splitting the calculation into precalculation and postcalculation was adapted by sDLS and its successor sDLSne to support large WSNs with network size independent cost for blind nodes. Both approaches are briefly described in this section.

The system of equations which has to be solved for localization of a blind node is originally built from distance equations as given in equation (1).

... (1)

Here the coordinates x and y represent the unknown position of a blind node. The known position of a beacon node is denoted as x_i and y_i, while the distance between both nodes is denoted as r_i. The number of beacon nodes utilizable for localization is given as m. This system of equations is linearized by use of a linearization tool [11], using one beacon node as linearizer, denoted with index L. After restructuring, the system of equations consists of equations as given in equation (2), where r_L denotes the distance between blind node and linearizer, r_i is the distance between blind node and beacon node, and d_iL denotes the distance between linearizer and beacon node.

... (2)

After further restructuring, the system of equations matches a matrix form, using elements as given in equation (3).

... (3)

Here, the beacon nodes used for localization are denoted with indices K = {k_1, k_2, ..., k_n} with K = {I\L}. The coefficient matrix of equation (3) consists only of beacon position data, while the right-hand side contains distances between beacon nodes and blind nodes. Therefore, calculations on the coefficient matrix are to be performed as part of the precalculation at a powerful sink outside the WSN. The localization will be finalized on each blind node by performing the remaining part of the calculation. To solve the linear system of equations using normal equations, equations (4) are used. While (4a) shows the entire equation, equation (4b) presents the precalculation, performed on the sink, and equation (4c) presents the postcalculation, performed locally on blind nodes.

... (4a)
... (4b)
... (4c)
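(The formulae themselves are omitted in this extraction. Under the standard normal-equation reading described in the text, the split of equations (4a)-(4c) can be sketched as follows; here A and b stand for the linearized system of equation (3), whose exact entries are not reproduced.)

```python
# Minimal sketch of the DLS split (4a)-(4c): the sink precomputes the
# beacon-only pseudoinverse, and the blind node finishes with one small
# matrix-vector product.
import numpy as np

def precalculation(A):
    """Run once at the sink: depends only on beacon positions (cf. 4b)."""
    return np.linalg.inv(A.T @ A) @ A.T           # (A^T A)^{-1} A^T

def postcalculation(A_pre, b):
    """Run on the blind node: b holds its measured distances (cf. 4c)."""
    return A_pre @ b                              # position estimate (x, y)

# Toy example: 4 beacons, synthetic linearized system.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]])
b = A @ np.array([3.0, 4.0]) + 0.01 * np.random.randn(4)  # near-consistent
print(postcalculation(precalculation(A), b))      # approximately [3, 4]
```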
The main difference between DLS and sDLSne is given by the number and size of precalculations. Regarding beacon nodes in a WSN, H is considered as the global set of all beacon nodes and Γ_i ⊆ H denotes a local set of beacon nodes within the communication range of beacon node i. While DLS uses only one precalculation, including all beacon nodes, i.e. equation (3) with conditions K = U\L and L = I, sDLSne uses individual precalculations for all beacon nodes, i.e., |H| precalculations using K = Γ_i\L, L = i, ∀ i ∈ H. Therefore, the sDLSne algorithm starts with an additional discovery phase to find other beacon nodes in one hop distance, as illustrated in Fig. 1. Furthermore, DLS needs an explicit communication with all beacon nodes during the communication phase for distance estimation. Using sDLSne, this is an implicit process, as each blind node receives precalculations from beacon nodes in its own communication range.

Using sDLSne, each beacon node provides its own precalculation, which would perfectly fit a blind node at the same position. From all offered precalculations, blind nodes are expected to choose the one of the closest beacon node.

3. Hybrid Localization Approach

The original intention of sDLS was to use exactly those beacon nodes which are located within the communication range of the blind node attempting to estimate its own position. To achieve this goal, a blind node is expected to choose the precalculation provided by the beacon node closest to its own position. Consequently, this precalculation includes most of the beacon nodes within the blind node's communication range. Nevertheless, in most cases some beacon nodes included in this precalculation are outside the communication range of the blind node, and vice versa. While sDLS locally updates this precalculation, using the matrix updates, to achieve the initial intention, sDLSne estimates the unknown position with this imprecise precalculation. Due to this displaced set of beacon nodes as well as the high influence of the node geometry, especially the given choice of the linearizing beacon node, the resulting position estimation tends to be drawn in the direction of this beacon node. In addition, the used distance estimation also causes an impairment of the position estimation. Furthermore, defective distance estimation may cause the blind node to spuriously choose a precalculation of a beacon node which is not the closest.

The aim of Hybrid Distributed Least Squares (HDLS) is to use multiple precalculations of nearby beacon nodes. The resulting position estimates, according to each chosen precalculation, serve as tentative results. These results can be seen as virtual beacon nodes. They will be combined to a final position estimate using coarse grained localization techniques. For that aim, various approaches have been studied in this work. The choice of coarse grained localization approach is only one of several parameters that influence the resulting accuracy. The following parameters, studied in our work, are to be further explained in this section:

Strategy - Variation of the number of virtual beacon nodes.
Technique - Variation of different coarse grained localization techniques.
Weightage - Variation of the weight factors.
Reduction - Variation of the reduction part, used by AWCL.
Approximation - Variation of the distance approximation of inaccessible beacon nodes.

3.1. Virtual Beacon Strategy

To control the number of virtual beacon nodes (VBN) that are to be created using sDLSne, the following strategies have been investigated (a selection sketch follows this list):

Closest Two - VBN are created from precalculations of the two closest beacon nodes.
Closest Three - VBN are created from precalculations of the three closest beacon nodes.
Great Deal - VBN are to be created using precalculations of all beacon nodes in range.
Range Based - Beacon nodes in a range, given as a multiple of the distance to the closest beacon node, are used for creation of VBNs. This strategy extends the before mentioned strategies, which serve as upper bound. Within our investigations, this range has been varied from 125% up to 250% of the distance to the closest beacon node.
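(A minimal sketch of these selection rules; the data layout, with d[i] the estimated blind-to-beacon distances, is my assumption.)

```python
# Sketch of the virtual-beacon selection strategies of Section 3.1.
def select_closest(d, k):            # "Closest Two"/"Closest Three": k = 2, 3
    return sorted(range(len(d)), key=lambda i: d[i])[:k]

def select_range_based(d, factor):   # beacons within factor * d_min
    d_min = min(d)
    return [i for i in range(len(d)) if d[i] <= factor * d_min]

d = [1.0, 1.2, 1.9, 2.6]
print(select_closest(d, 3))          # [0, 1, 2]
print(select_range_based(d, 1.25))   # [0, 1]  ("Great Deal" = all in range)
```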
3.2. Coarse Grained Estimation Technique

Created virtual beacon nodes are combined into a resulting position estimate P_b using coarse grained localization techniques. The following techniques have been studied:

CL - The plain Centroid Localization (CL) approach is used to combine the virtual beacon nodes, i.e. the unweighted arithmetic mean is used as given in equation (5). Here, V indicates a set of given virtual beacon nodes and P indicates a position:

... (5)

WCL - Virtual beacon nodes are combined using Weighted Centroid Localization (WCL) as given in equation (6). Suitable substitutions for the weight ω_i are to be presented subsequently. Common weights rely on measured distances or the received signal strength (RSS).

... (6)

AWCL - Virtual beacon nodes are combined by use of Adaptive Weighted Centroid Localization (AWCL). While WCL simply gives more influence to closer beacon nodes, i.e. beacon nodes with higher weight, the idea of AWCL is to give more influence to the difference of the given weights. Therefore, if the weights, e.g. RSS, of beacon nodes in range are similar to each other, they are to be reduced by a reduction part q of the smallest weight, with q ∈ ℝ, 0 < q < 1, as illustrated in Fig. 2. Otherwise, i.e. in case of high differences within the weights, AWCL inherently acts as WCL. Various reduction parts, referred to as q in equation (7), have been investigated, as described below.

... (7)

3.3. Weightage

Except for the plain CL algorithm, the presented coarse grained estimation techniques utilize weighting factors. The aim of weights is to give higher influence to more important (virtual) beacon nodes. In the given case a precalculation is defined as more important if the corresponding beacon node, and therefore the linearizer, is closer to the blind node. In the same way, it is more important if the number of beacon nodes included both in the precalculation and in the blind node's communication range is high. Consequently the following weights have been studied:

Signal Strength - Virtual beacon nodes are weighted according to the RSS of the beacon node that provided the precalculation that was used to create the virtual beacon node. On average, the RSS is expected to be higher the closer the beacon node is. Although variations of shadowing and fading may compromise this relation, it has been investigated as a possible weightage. Equation (8) illustrates this weight, with i indicating the linearizer of the corresponding precalculation as well as the resulting virtual beacon node.

ω_i = RSS_i (8)

Similarity - Virtual beacon nodes are weighted according to the rate of beacon nodes, included in the precalculation, that are located within the communication range of the blind node. This weight is given in equation (9), where Π_i indicates the set of beacon nodes included in the precalculation of beacon node i, and B indicates the set of beacon nodes within the communication range of the blind node. This is applied to the WCL approach, which is then called Similarity based WCL (SWCL).

... (9)

3.4. Reduction Part

AWCL has been shown to be more accurate than the original WCL. In advance of an included WCL estimation, AWCL reduces all given weights by a certain portion of the minimum weight, as given in equation (7). This leads to the behavior that in case of nearby weights the remaining small differences get more importance. For our investigations, the used reduction part q has been varied from 15 % to 65 %. (A sketch of the three fusion rules follows.)
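(A minimal sketch of the three fusion rules of Sections 3.2-3.4, applied to virtual beacon positions P with weights w such as RSS values. The exact forms of equations (5)-(7) are omitted in this extraction; the code follows the prose descriptions, so treat it as a reconstruction.)

```python
# Fusion of virtual beacon nodes: CL, WCL, and AWCL (reconstructed).
import numpy as np

def cl(P):                              # eq. (5): plain centroid
    return P.mean(axis=0)

def wcl(P, w):                          # eq. (6): weighted centroid
    return (w[:, None] * P).sum(axis=0) / w.sum()

def awcl(P, w, q=0.55):                 # eq. (7): reduce by q * min(w),
    return wcl(P, w - q * w.min())      # then proceed as WCL

P = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # virtual beacons
w = np.array([0.9, 0.8, 0.85])                        # nearly equal weights
print(cl(P), wcl(P, w), awcl(P, w))   # AWCL amplifies the small differences
```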
3.5. Distance Approximation

To enable a blind node to use beacon nodes outside its own communication range, sDLSne introduced a distance approximation, illustrated in Fig. 3, that utilizes the given distance between linearizer and inaccessible beacon node (d_iL), and the estimated distance between blind node and linearizer (r_L), which was assumed to be as small as possible, due to the prior choice of the blind node. The sum of both distances is used as approximation of the unknown distance. Now, using not only the closest beacon node, but up to all beacon nodes within the communication range, this approximation tends to become more and more inaccurate. Therefore, two variants of this distance approximation have been investigated.

Independent Approximation - For each precalculation, distances to inaccessible beacon nodes are estimated as given in Fig. 3. All data used is either directly estimated by use of measurements or provided by the precalculation itself.

Dependent Approximation - Most inaccessible beacon nodes are included in multiple precalculations provided to the blind node. As illustrated in Fig. 4, distance approximations towards such an inaccessible beacon node will differ according to the used precalculation, due to the different linearizer nodes used in different precalculations. To provide the most precise distance estimation, the shortest distance that can be estimated from the given precalculations has to be selected for the calculation of virtual beacon nodes. To achieve this, one possibility is to first determine all possible estimates for inaccessible beacon nodes, i.e. one for each precalculation which includes the inaccessible node, and to subsequently calculate the minimum distance estimations. In most cases, a more efficient solution can be applied. Fig. 5 illustrates such a solution, compared along with independent approximation in the context of the overall position estimation, given on the left hand side. The illustrated approach processes precalculations individually but in ascending order of their distances towards the blind node, as illustrated on the left side. For this purpose, the distance between blind node and linearizer acts as the distance towards its according precalculation. Once a distance towards an inaccessible beacon node has been approximated by a close precalculation, this distance will be marked as known, as illustrated in the last but one box on the right side of Fig. 5. If the same beacon node occurs in a further precalculation, the previously calculated distance will be taken and the beacon node will not be treated as inaccessible. Fig. 4 illustrates the presented strategies by giving a worst case example for both approaches. Using independent approximation for the precalculation illustrated in Fig. 4(a) highly overrates the distance towards the inaccessible beacon node. By use of dependent approximation instead, the better approximation provided by the closer precalculation illustrated in Fig. 4(b) would be used.
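The dependent approximation lends itself to a single pass over the precalculations, sorted by the blind node's estimated distance to their linearizers. The sketch below is one possible reading of that procedure; the types and names are hypothetical:

    // For every inaccessible beacon id, keep the approximation d ~ r_L + d_iL
    // produced by the closest precalculation that contains it; later (farther)
    // precalculations reuse the already-known value via putIfAbsent.
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class DependentApproximation {
        static Map<Integer, Double> approximate(List<Precalc> precalcs) {
            precalcs.sort(Comparator.comparingDouble((Precalc pc) -> pc.rL));
            Map<Integer, Double> known = new HashMap<>();
            for (Precalc pc : precalcs) {
                for (Map.Entry<Integer, Double> e : pc.dIL.entrySet()) {
                    known.putIfAbsent(e.getKey(), pc.rL + e.getValue());
                }
            }
            return known;
        }

        static final class Precalc {
            final double rL;                // estimated distance blind node -> linearizer
            final Map<Integer, Double> dIL; // beacon id -> distance linearizer -> beacon
            Precalc(double rL, Map<Integer, Double> dIL) { this.rL = rL; this.dIL = dIL; }
        }
    }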
4. Simulations

To verify the performance of the introduced HDLS approaches, the MATLAB® based network simulator Rmase is used [12]. The simulator provides a realistic radio communication model, including spatially and temporally normally distributed fading. A static bidirectional spanning-tree routing was used to send data packets from nodes to sink and vice versa. Distance estimations performed by blind nodes rely on the simulator's radio model. A random deployment of n nodes within a field of f × f arbitrary distance units (adus) was utilized. The first node was always used as sink, while the remaining nodes have been randomly chosen as blind nodes (50 %) or beacon nodes (50 %). Note that the low number of blind nodes has been shown to have no significant influence on the presented results but speeds up the simulation dramatically. The field size parameter f was varied from 5 to 30. The average communication range, given by the radio model, was 3 adus. For each field size the average over 100 simulations has been determined. In each simulated network all presented localization approaches have been performed concurrently.

5. Results

As described in Section 3, various parameters influence the accuracy of HDLS. To distinguish between the different approaches resulting from these parameters, a naming scheme is used, illustrated as a syntax diagram in Fig. 6. This diagram also shows the more than 350 combinations which have been investigated by simulations. In Fig. 6, "S" denotes similarity based weightage, applied to WCL. The reduction part of AWCL has been varied from 15 % to 65 %. The range based strategy, indicated with an "R", also denotes a percentage of the distance towards the closest beacon node, which limits the catchment area of further beacon nodes. It is used in addition to the fixed upper bound of virtual beacon nodes.

First, the used coarse grained techniques are analyzed along with different virtual beacon strategies, i.e. the number of virtual beacon nodes. Fig. 7 shows the mean localization error over the number of deployed nodes, using the basic CL approach. It is illustrated that the hybrid approaches perform significantly better than the underlying sDLSne, but in most cases are not as accurate as the original sDLS approach with costly matrix updates. Furthermore, it is shown that the hybrid approach with two virtual beacon nodes is outperformed by the one using three virtual beacon nodes. In contrast, using as many virtual beacon nodes as possible does not further increase localization accuracy.

Similar results have been found for HDLS based on WCL with traditional RSS based weighting, shown in Fig. 8. It is shown that this approach performs better the more virtual beacon nodes are used. Furthermore, it outperforms sDLS and therefore also outperforms the CL based approach. In Section 3, SWCL has been introduced as an alternative to WCL, using similarity instead of signal strength. Both WCL based approaches are compared in Fig. 9. On the one hand, the illustration shows that, similar to the CL based approach, the SWCL approach performs best when three virtual beacon nodes are used. On the other hand, it is shown that this approach is outperformed by the RSS based approach.

The third coarse grained technique investigated for use with HDLS is AWCL. The performance of AWCL depends on the reduction part defined by AWCL. The best reduction part is said to be 55 %. Therefore, this factor is also used for the results given in Fig. 10. The presented results show that this approach also outperforms the costly sDLS approach and performs better the more virtual beacon nodes are used. Further investigations using different reduction factors showed that also in the given context a reduction factor of 55 % performs best in most cases. Nevertheless, the achieved accuracy is often only marginally influenced by the reduction factor. As an intermediate result, HDLS provides best accuracy using as many virtual beacon nodes as possible, combined by AWCL with a reduction part of 55 %. While the previous results used virtual beacon strategies with a fixed number of virtual beacon nodes, the following results investigate the range based virtual beacon strategy.
The range within which precalculations of beacon nodes are used to create virtual beacon nodes was varied from 125 % to 250 % of the distance between blind node and closest beacon node. The range based approach is combined with a fixed upper bound as presented before. Fig. 11 shows the resulting localization accuracy for CL based HDLS, using various ranges and an upper bound of two, i.e. HDLS falls back to sDLSne if the closest beacon node is significantly closer than all other beacon nodes. On the one hand, it is shown that even a small range of 125 % outperforms sDLSne. On the other hand, the graph shows that only in few cases does this spatial limit outperform the unlimited version. It also shows that in most cases a spatial limitation of 175 % performs very close to the unlimited counterpart. Similar results have been found for the use of WCL, SWCL or AWCL.

As AWCL turned out to be the most promising approach, it is selected to compare the range based strategy with various upper limits of virtual beacon nodes. Fig. 12 shows the results for the previously introduced upper bounds in combination with the spatial limits of 125 % and 250 %. On the one hand, the results show that the range based strategy also works for limits higher than two. On the other hand, it is shown that the higher the spatial limit, the lower the mean localization error. Although the unlimited approaches perform better than the corresponding limited approaches, it turns out that a spatial limit of 250 % achieves good results. To evaluate the range based strategy as an alternative to the before mentioned strategies of fixed limits, spatial limits have been explored which are equivalent to the numeric limits. As illustrated in Fig. 13, a spatial range of 150 % can be put on a level with the upper bound of two virtual beacon nodes. A spatial limit of 200 %, instead, can be equated with the strategy of using three virtual beacon nodes. Once again, using as many beacon nodes as available is proven to provide the lowest localization error. Nevertheless, a spatial limit of 250 % also provides good results. Using a spatial limitation instead of a fixed number of virtual beacon nodes can only be seen as an alternative if it is more cost efficient. Therefore, the number of arithmetic operations used for each localization approach has been investigated. Fig. 14 illustrates this cost for the HDLS approaches presented in Fig. 13. It clearly shows that the two range based approaches which have been pointed out as equivalents need slightly more computations than the corresponding approaches.

Up to this point, the presented results are based on independent approximation of distances towards inaccessible beacon nodes. The remaining part of this section presents the results achieved by use of dependent approximation. As shown in Fig. 15, use of this approximation significantly improves the localization accuracy of CL based HDLS. It outperforms sDLS as well as the best CL approach with independent approximation, even if only two virtual beacon nodes are used. It is also shown that there is only a small gain which distinguishes the all beacon strategy from the three beacon strategy. Similar results have been found using WCL, SWCL and AWCL. In all cases, each approach using dependent distance approximation outperforms the corresponding HDLS approach based on independent distance approximation using as many virtual beacon nodes as possible.
To sum up the before mentioned results and to figure out the best HDLS approach, for each coarse grained technique the best performing approach is presented in Fig. 16. Noticeable but not surprising, the best results are achieved using as many virtual beacon nodes as possible. Furthermore, Fig. 16 shows impressively that use of dependent distance approximation outperforms independent distance approximation. Using dependent approximation, the AWCL based approach performs best, closely followed by WCL. The same order is depicted for independent distance approximation. Furthermore, it is shown that the range based virtual beacon strategy is useful in combination with CL and SWCL. In all cases a high spatial limit is used. Regarding the strong impact of the number of virtual beacon nodes, the same analyses have been performed taking only results with an upper limit of three or two virtual beacon nodes into account, respectively. In both cases, dependent distance approximation outperforms independent distance approximation. Also, the internal order of the presented approaches is similar to the one presented in Fig. 16. For each upper limit of virtual beacon nodes, the best HDLS approach is presented in Fig. 17. As illustrated, in two cases AWCL based approaches provide the best results, while in the third case WCL performs best. All given HDLS approaches perform better than the original sDLS with costly update operations. Even though using as many virtual beacon nodes as possible results in the highest accuracy, high accuracy can also be achieved using two or three virtual beacon nodes.

The achieved improvements in localization accuracy are mainly caused by an increased number of beacon nodes used for localization. Due to the fact that different virtual beacons, based on different precalculations, use different sets of beacon nodes, the cardinality of the resulting unions is commonly higher than the cardinality of the individual sets. Since the number of used beacon nodes only depends on the applied virtual beacon strategy, Fig. 18, by way of example, shows the number of used beacon nodes for the best cases presented in Fig. 17. It is shown that using two virtual beacon nodes increases the number of beacon nodes used by about 40 %, compared to sDLSne. Using three virtual beacon nodes leads to an increase of about 67 %, while using as many virtual beacon nodes as possible leads to an increase of about 245 %. As a matter of course, using more beacon nodes and thereby increasing localization accuracy comes along with increased computational cost. The mean number of operations performed on each blind node for one localization is given in Fig. 19. The number of operations is mainly determined by the number of virtual beacon nodes or the number of individual precalculations, respectively. The most important result is that all presented approaches need fewer computations than the original sDLS with matrix updates, while all of these approaches, given in Fig. 19, provide higher accuracy than sDLS. The additional cost for each additional virtual beacon node is about 80 % of the cost of sDLSne.

6. Conclusions

In this work, the efficient localization approach sDLSne has been combined with various coarse grained approaches to improve the accuracy of localization. As shown in Figs. 8, 10, 15, and 17, the new HDLS approach provides higher accuracy than sDLSne and even outperforms the initial sDLS approach. Using the newly introduced dependent distance approximation, even the use of only two virtual beacon nodes, i.e.
two precalculations, dramatically increases localization accuracy. Although HDLS needs more computations than sDLSne, it needs far fewer computations than sDLS. It further provides the possibility to choose between various variants with different cost. Using a small range, the presented range based virtual beacon strategy provides a very cost efficient way to improve sDLSne.

Acknowledgements

This work was supported by the German Research Foundation under grant number BI467/17-2 (keyword: Geosens2).

References

[1]. K. Akkaya and M. F. Younis, A survey on routing protocols for wireless sensor networks, Ad Hoc Networks, Vol. 3, No. 3, 2005, pp. 325-349.
[2]. J. Al-Karaki and A. Kamal, Routing techniques in wireless sensor networks: a survey, Wireless Communications, IEEE, Vol. 11, No. 6, Dec. 2004, pp. 6-28.
[3]. J. Salzmann, R. Behnke, M. Gag, and D. Timmermann, 4-MASCLE - Improved Coverage Aware Clustering with Self Healing Abilities, in Proceedings of the International Symposium on Multidisciplinary Autonomous Networks and Systems (MANS '09), Jul. 2009, pp. 537-543.
[4]. J. D. Gibson, The Mobile Communications Handbook, CRC Press, Boca Raton FL, USA, 1996.
[5]. N. Bulusu, J. Heidemann, and D. Estrin, GPS-less low cost outdoor localization for very small devices, IEEE Personal Communications Magazine, Vol. 7, No. 5, Oct. 2000, pp. 28-34.
[6]. J. Blumenthal, R. Grossmann, F. Golatowski, and D. Timmermann, Weighted Centroid Localization in Zigbee-based Sensor Networks, in Proceedings of the IEEE International Symposium on Intelligent Signal Processing (WISP '07), Madrid, Spain, October 2007.
[7]. R. Behnke and D. Timmermann, AWCL: Adaptive Weighted Centroid Localization as an efficient Improvement of Coarse Grained Localization, in Proceedings of the 5th Workshop on Positioning, Navigation and Communication (WPNC '08), March 2008, pp. 243-250.
[8]. F. Reichenbach, A. Born, D. Timmermann, and R. Bill, A distributed linear least squares method for precise localization with low complexity in wireless sensor networks, Distributed Computing in Sensor Systems, 2006, pp. 514-528.
[9]. R. Behnke, J. Salzmann, D. Lieckfeldt, and D. Timmermann, sDLS - Distributed Least Squares Localization for Large Wireless Sensor Networks, in Proceedings of the International Workshop on Sensing and Acting in Ubiquitous Environments, Oct. 2009.
[10]. R. Behnke, J. Salzmann, and D. Timmermann, sDLSne - Improved Scalable Distributed Least Squares Localization with minimized Communication, in Proceedings of the 21st Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '10), September 2010.
[11]. W. S. Murphy and W. Hereman, Determination of a position in three dimensions using trilateration and approximate distances, Tech. Rep., 1999.
[12]. Y. Zhang, M. Fromherz, and L. Kuhn, Rmase: Routing modeling application simulation environment, 2009, http://www2.parc.com/isl/groups/era/nest/Rmase/

Author affiliation: Ralf BEHNKE, Jakob SALZMANN, Philipp GORSKI, Dirk TIMMERMANN, Institute of Applied Microelectronics and Computer Engineering, University of Rostock, Richard-Wagner-Straße 31, 18119 Rostock, Germany. Tel.: +49 381-4987251. E-mail: {ralf.behnke, jakob.salzmann, philipp.gorski2, dirk.timmermann}@uni-rostock.de

Received: 2 November 2011 / Accepted: 20 December 2011 / Published: 12 March 2012
Mathematical Thinking through Web Programming

Ideas in Mathematics is a liberal-arts mathematics course at the University of Pennsylvania, taken by many humanities, nursing, and elementary education majors to satisfy a distribution requirement. Over the last several semesters, I have been teaching two versions of the course. The first is a regular, "in-class" version for Penn undergraduates. The other is an online "distance learning" course for a varied audience consisting of high school students, adult continuing education students, Penn alumni, and a few regular Penn students. The online version involves weekly webcasts in place of regular class sessions and features chatroom-based office hours. Both versions rely on a suite of web-based courseware for student collaboration, communication, and assessment. For the in-class version I have relied on the Blackboard system, and for the distance learning version I have been using software developed by eCollege.

The goal of the course is to get every student excited about and involved in at least one aspect of the mathematics that we do. The curriculum of the course is loosely based on the COMAP textbook, For All Practical Purposes, involving strands that include
• graph theory,
• number theory,
• combinatorics,
• probability and statistics,
• geometry and fractals,
• game theory,
• voting and apportionment,
• data encoding, and
• encryption.
Often two or more of the strands are running simultaneously, in an effort to appeal to most of the students at least some of the time -- and to address the related issues of impatience and short attention spans! For example, the course begins with graph theory -- which is covered in the COMAP book -- and number theory -- which is not.

The point of this article is not to describe the distance learning or e-communication aspects of the course, but rather to describe one of the strands that runs through and supports the entire course, simultaneously providing a medium for experimentation with various concepts and, more importantly, a framework for mathematical thinking. The advertised purpose of this strand is to teach a little web-based programming in HTML and Javascript. The deeper purpose is to involve students in the highly mathematical activity of programming and -- as a subconscious byproduct -- proving.

Throughout the course, students engage in activities that simultaneously
• prepare them to write their own programs,
• teach the mechanics of web-based programming, and
• force them to think algorithmically and/or reinforce or extend mathematical concepts from other strands of the course.
This is done in several "phases" in the course of the semester. In the following sections, I describe each phase, together with the corresponding activities.

Dennis DeTurck is Professor of Mathematics at the University of Pennsylvania.

Published July 2001
© 2001 by Dennis DeTurck
Getting XYZ of a point along a line (vector) - Java-Gaming.org

I know this may seem like a dumb question, but I'm having some odd problems when trying to calculate the position of a point in space along a vector. My objective is to grab the XYZ coordinates of a point in space directly in front of the camera with a set distance. I know the pitch, roll and yaw angles of the camera already, and my existing calculation looks like the following (angles in this example are originally stored as degrees, then converted into radians):

    float addX = 5 * (float)Math.sin(Math.toRadians(camera.yaw));
    float addZ = 5 * (float)Math.cos(Math.toRadians(camera.yaw));
    float addY = 5 * (float)Math.sin(Math.toRadians(camera.pitch));

    float newX = camera.x + addX;
    float newZ = camera.z + addZ;
    float newY = camera.y + addY;

The problem I have is that the new point - which does project in front of the camera as intended - doesn't seem to work along the Y axis correctly. It 'clamps' between the angles of -45° to +45°. So, rotating the camera around the Y axis (yawing) seems to work great - the point 'fires' off from the camera in the right direction - but the plotting of the point around the camera's pitch arc is out of whack! For reference, this code is intended to be used as a line-of-sight calculation for detecting what the player is looking at. Any help would be greatly appreciated. Thank you!
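One plausible explanation and fix, offered here as a sketch rather than taken from the thread's replies: the horizontal components above ignore pitch, so the vertical-to-horizontal ratio is sin(pitch)/1, whose angle can never exceed 45° - which matches the clamping described. Scaling the horizontal terms by cos(pitch) keeps the direction vector unit length at every pitch. Assuming yaw is measured about the Y axis and pitch upward from the horizontal plane (camera here is the poster's own object):

    float dist = 5f;
    double yaw   = Math.toRadians(camera.yaw);
    double pitch = Math.toRadians(camera.pitch);

    // cos(pitch) shrinks the horizontal footprint as the camera looks up or down,
    // so (addX, addY, addZ) stays a vector of length 'dist' for any pitch.
    float addX = dist * (float)(Math.cos(pitch) * Math.sin(yaw));
    float addZ = dist * (float)(Math.cos(pitch) * Math.cos(yaw));
    float addY = dist * (float)Math.sin(pitch);

    float newX = camera.x + addX;
    float newY = camera.y + addY;
    float newZ = camera.z + addZ;

Roll does not affect the forward direction, so it can be ignored for this particular calculation.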
Tuckahoe, NY ACT Tutor

Find a Tuckahoe, NY ACT Tutor

...In 2012 I graduated with a B.S. in mechanical engineering from Columbia University. During 2011, 2012, and 2013 I've worked at a joint research group between Mt. Sinai Medical School and Columbia Mechanical Engineering.
32 Subjects: including ACT Math, reading, calculus, physics

...Throughout the years I have tutored various students in SAT math. Most of these students come to me having scored around a 500 on the math section on their first try, although a few were a little lower, and a few a little higher. After working with me each student increased their score by at least 100 points, but usually more.
11 Subjects: including ACT Math, Spanish, algebra 2, geometry

...I'm currently taking a break to pursue dance and theater in the Big Apple. I'm a lifelong learner, always seeking an opportunity to discover new aspects of the world and people. I love science but also excel in math and English.
26 Subjects: including ACT Math, reading, English, biology

...I referred to the main idea at hand, made certain observations, used specific examples to support my observations, and summarized my position in the final paragraph. This approach was more than sufficient to satisfy my critical analysis writing. My strengths in English focus on grammar and sentence structure.
41 Subjects: including ACT Math, reading, chemistry, physics

...In Mount Sinai I worked with AP, SAT, and ACT students for their general chemistry coursework. I have experience with all ages of students! In short, I prefer a "hands-on" approach method while doing tutoring.
37 Subjects: including ACT Math, chemistry, English, reading
Resolving controversies in the application of the method of multiple scales and the generalized method of averaging. (English) Zbl 1142.34356

Summary: I compare application of the method of multiple scales with reconstitution and the generalized method of averaging for determining higher-order approximations of three single-degree-of-freedom systems and a two-degree-of-freedom system. Three implementations of the method of multiple scales are considered, namely, application of the method to the system equations expressed as second-order equations, as first-order equations, and in complex-variable form. I show that all of these methods produce the same modulation equations. I address the problem of determining higher-order approximate solutions of the Duffing equation in the case of primary resonance. I show that the conclusions of Rahman and Burton, namely that the method of multiple scales, the generalized method of averaging, and Lie series and transforms might lead to incorrect results, in that spurious solutions occur and the obtained frequency-response curves bear little resemblance to the actual response, are the result of their using parameter values for which the neglected terms are the same order as the retained terms. I show also that spurious solutions cannot be avoided, in general, in any consistent expansion and their presence does not constitute a limitation of the methods. In particular, I show that, for the Duffing equation, the second-order frequency-response equation does not possess spurious solutions for the case of hardening nonlinearity, but possesses spurious solutions for the case of softening nonlinearity. For sufficiently small nonlinearity, the spurious solutions are far removed from the actual response. But as the strength of the nonlinearity increases, these solutions move closer to the backbone and eventually distort it. This is not a drawback of the perturbation methods but an indication of an application of the analysis for parameter values outside the range of validity of the expansion. Also, I address the problem of obtaining non-Hamiltonian modulation equations in the application of the method of multiple scales to multi-degree-of-freedom Hamiltonian systems written as second-order equations in time and how this problem can be overcome by attacking the state-space form of the governing equations. Moreover, I show that application of a variation of the method of Rahman and Burton to multi-degree-of-freedom systems leads to results that do not agree with those obtained with the generalized method of averaging.
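For concreteness, the primary-resonance setting discussed in the summary can be written in a standard normalized form. The following display is a common textbook formulation (cf. Nayfeh and Mook [11]) supplied here for orientation, not quoted from the paper under review:

\[
\ddot{u} + \omega_0^2 u = \varepsilon\left(-2\mu\dot{u} - \alpha u^3 + f\cos\Omega t\right),
\qquad \Omega = \omega_0 + \varepsilon\sigma ,
\]

for which a first-order multiple-scales (or averaging) analysis yields the steady-state amplitude \(a\) from the frequency-response equation

\[
\left[\mu^2 + \left(\sigma - \frac{3\alpha a^2}{8\omega_0}\right)^{2}\right] a^2 = \frac{f^2}{4\omega_0^2}.
\]

Because this relation is cubic in \(a^2\), additional solution branches can appear; a hardening nonlinearity (\(\alpha > 0\)) bends the backbone curve to the right, a softening one (\(\alpha < 0\)) to the left, the latter being the setting in which the spurious higher-order solutions discussed above arise.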
MSC:
34E13 Multiple scale methods (ODE)
34C29 Averaging method
70K65 Averaging of perturbations (nonlinear dynamics)

References:
[1] Nayfeh, A. H., "The response of single-degree-of-freedom systems with quadratic and cubic nonlinearity to a subharmonic excitation", Journal of Sound and Vibration 89, 1983, 457-470. · Zbl 0544.70034 · doi:10.1016/0022-460X(83)90347-4
[2] Nayfeh, A. H., "Combination resonances in the nonlinear response of bowed structures to a harmonic excitation", Journal of Sound and Vibration 90, 1983, 457-470. · Zbl 0527.73051 · doi:10.1016/
[3] Nayfeh, A. H., "Combination tones in the response of single-degree-of-freedom systems with quadratic and cubic nonlinearities", Journal of Sound and Vibration 92, 1984, 379-386. · Zbl 0538.70026 · doi:10.1016/0022-460X(84)90386-9
[4] Nayfeh, A. H., "Quenching of primary resonances by a superharmonic resonance", Journal of Sound and Vibration 92, 1984, 363-377. · Zbl 0544.70032 · doi:10.1016/0022-460X(84)90385-7
[5] Nayfeh, A. H., "Quenching of a primary resonance by a combination resonance of the additive or difference type", Journal of Sound and Vibration 97, 1984, 65-73. · doi:10.1016/0022-460X(84)
[6] Nayfeh, A. H., "Topical course on nonlinear dynamics", in Perturbation Methods in Nonlinear Dynamics, Societa Italiana di Fisica, Santa Margherita di Pula, Sardinia, 1985.
[7] Nayfeh, A. H., Perturbation Methods, Wiley, New York, 1973.
[8] Nayfeh, A. H., Introduction to Perturbation Techniques, Wiley, New York, 1981.
[9] Nayfeh, A. H. and Khdeir, A. A., "Nonlinear rolling of ships in regular beam seas", International Shipbuilding Progress 33(379), 1986, 40-49.
[10] Nayfeh, A. H. and Khdeir, A. A., "Nonlinear rolling of biased ships in regular beam waves", International Shipbuilding Progress 33(381), 1986, 84-93.
[11] Nayfeh, A. H. and Mook, D. T., Nonlinear Oscillations, Wiley, New York, 1979.
[12] Nayfeh, A. H. and Chin, C.-M., Perturbation Methods with Mathematica, Dynamics Press, Virginia, 1999; http://www.esm.vt.edu/~anayfeh/.
[13] Nayfeh, A. H. and Chin, C.-M., Perturbation Methods with Maple, Dynamics Press, Virginia, 1999; http://www.esm.vt.edu/~anayfeh/.
[14] Rahman, Z. and Burton, T. D., "Large amplitude primary and superharmonic resonances in the Duffing oscillator", Journal of Sound and Vibration 110, 1986, 363-380. · Zbl 1235.70100 · doi:10.1016/
[15] Rahman, Z. and Burton, T. D., "On higher order method of multiple scales in nonlinear oscillations - periodic steady state response", Journal of Sound and Vibration 133, 1989, 369-379. · Zbl 1235.70099 · doi:10.1016/0022-460X(89)90605-6
[16] Luongo, A., Rega, G., and Vestroni, F., "On nonlinear dynamics of planar shear indeformable beams", Journal of Applied Mechanics 53, 1986, 619. · Zbl 0597.73061 · doi:10.1115/1.3171821
[17] Hassan, A., "Use of transformations with the higher order method of multiple scales to determine the steady state periodic response of harmonically excited nonlinear oscillations. Part I. Transformation of derivative", Journal of Sound and Vibration 178, 1994, 21-40. · Zbl 1237.70076 · doi:10.1006/jsvi.1994.1465
[18] Hassan, A., "Use of transformations with the higher order method of multiple scales to determine the steady state periodic response of harmonically excited nonlinear oscillations. Part II. Transformation of detuning", Journal of Sound and Vibration 178, 1994, 1-19. · Zbl 1237.70075 · doi:10.1006/jsvi.1994.1464
[19] Lee, C. L. and Lee, C. T., "A higher order method of multiple scales", Journal of Sound and Vibration 202, 1997, 284-287. · Zbl 1235.70093 · doi:10.1006/jsvi.1996.0736
[20] Lee, C. L. and Perkins, N. C., "Nonlinear oscillations of suspended cables containing a two-to-one internal resonance", Nonlinear Dynamics 3, 1992, 465-490.
[21] Benedettini, F., Rega, G., and Alaggio, R., "Nonlinear oscillations of a four-degree-of-freedom model of a suspended cable under multiple internal resonance conditions", Journal of Sound and Vibration 182, 1995, 775-798. · doi:10.1006/jsvi.1995.0232
[22] Pan, R. and Davies, H. G., "Responses of a nonlinearly coupled pitch-roll ship model under harmonic excitation", Nonlinear Dynamics 9, 1996, 349-368. · doi:10.1007/BF01833361
[23] Boyaci, H. and Pakdemirli, M., "A comparison of different versions of the method of multiple scales for partial differential equations", Journal of Sound and Vibration 204, 1997, 595-607. · Zbl 1235.74350 · doi:10.1006/jsvi.1997.0951
[24] Luongo, A. and Paolone, A., "On the reconstitution problem in the multiple time-scale method", Nonlinear Dynamics 19, 1999, 133-156. · Zbl 0966.70015 · doi:10.1023/A:1008330423238
[25] Cartmell, M. P., Ziegler, S. W., Khanin, R., and Forehand, D. I. M., "Multiple scales analyses of the dynamics of weakly nonlinear mechanical systems", Applied Mechanics Reviews 56, 2003, 155-492. · doi:10.1115/1.1581884
[26] Rega, G., Lacarbonara, W., Nayfeh, A. H., and Chin, C. M., "Multiple resonances in suspended cables: Direct versus reduced-order models", International Journal of Non-Linear Mechanics 34, 1999, 901-924. · Zbl 1068.74562 · doi:10.1016/S0020-7462(98)00065-1
[27] Nayfeh, A. H., Nonlinear Interactions, Wiley, New York, 2000.
[28] Nayfeh, A. H., Arafat, H. N., Chin, C.-M., and Lacarbonara, W., "Multimode interactions in suspended cables", Journal of Vibration and Control 8(3), 2002, 337-387. · Zbl 1107.74314 · doi:10.1177
[R] Plotting the complex fft in 3D?

Uwe Ligges ligges at statistik.tu-dortmund.de
Sat Sep 6 16:10:50 CEST 2008

Oliver Bandel wrote:
> Hello Martin,
> Zitat von Martin Maechler <maechler at stat.math.ethz.ch>:
>> Just another remark on this thread.
>> If you have time series and think about its fourier transform
>> (EE language) then you should know that the statistical language
>> of that is "spectral analysis" or maybe
>> "frequency domain time-series analysis"
>> and the R function to consider should definitely be
> [...]
> Yes, I'm coming from the EE world, but I also know the other terms.
> The term "spectral analysis" is also used in EE, maybe not that often.
> Also "frequency domain time-series analysis" is used in EE, but
> maybe only at the university; later at the job, the short terms
> "spectral analysis" or "fft" are used. (But it may differ from country
> to country.)
> I tried "spectrum" now on my example data, and it looks quite different
> to the result of fft().
> It looks very close to what one gets as output from a spectrum analyzer
> (measurement hardware).
> So it's quite nice to use this. :-)
>> spectrum() which is a wrapper (among others) to spec.pgram()
>> -- which calls fft() -- for computing the so-called
>> "periodogram".
>> If you learn more about the topic, you will learn that in almost
>> all cases you'd consider a *smoothed* version of the
>> periodogram, etc etc (because the so-called *raw* periodogram is
>> *in*consistent as an estimate of the underlying true spectrum).
> Well, what do you mean with inconsistent?
> And why is spectrum() better?
> Do you talk about things like windowing for obtaining more appropriate
> results?
> Even if the output from spectrum() looks more like what I know from
> measurement hardware, it might not always be better.
> Can you explain why it is better to use this?
> The FFT only creates coefficients for certain separated
> frequencies. It depends on the number of samples how accurate the
> result is. And if the samples aren't an integer multiple of the
> frequency in the measured signal, this creates errors in the results.
> Possibly this is what you are talking about?
> Why is spectrum() better? Would be nice to have an explanation
> of how its results are created, so that I can understand
> when which kind of analysis is better.
> For the non-EE analysis, why is there fft() used and not
> spectrum()?
> For what kind of analysis is what function better?
> I will look at it in more detail, when the new edition will be published > (sept/oct). > But maybe you can recommend other readings as well? > Thanks, > Oliver > ______________________________________________ > R-help at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. More information about the R-help mailing list
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 3 THE PREPARATION OF TEACHERS OF MATHEMATICS: CONSIDERATIONS AND CHALLENGES: A LETTER REPORT entering teaching at these different levels bring different expectations, experiences, and professional goals (Ball, 1988). The trend is that elementary teachers tend to enter the profession more deeply committed to children, and less committed to particular content areas. CHALLENGES IN THE PREPARATION OF K-12 TEACHERS OF MATHEMATICS We identify five challenges to improving the preparation of teachers of K-12 mathematics. Both through applied work in mathematics teacher preparation, and through research, there have been efforts to better understand each. Many practical problems and theoretical dilemmas remain, however. We discuss “where we are now” -- highlighting what is happening in practice, what is known from research, and where there is consensus. We then address “what is needed” by pointing out where consensus does not exist, where movement toward consensus might be possible, and where informed debate among researchers, practitioners and policymakers might be especially productive. Issue I: What Mathematics Should Teachers Know? The mathematics community has a long history of supporting strong mathematics content preparation for prospective teachers. Current publications of the professional societies continue to make this case, emphasizing that the new K-12 reforms require teachers to have increased mathematical breadth. A Call for Change (Leitzel, 1991, preface) notes: “The content of collegiate level courses must reflect the changes in emphases and content of the emerging school curriculum and the rapidly broadening scope of mathematics itself. In general, current requirements for certification of teachers of school mathematics, particularly at the elementary and middle school levels, and the learning experiences of prospective teachers within college mathematics classes fall far short of these goals.” With the publication of the NCTM Curriculum and Evaluation Standards for School Mathematics (NCTM, 1989) and the ensuing development of curriculum materials reflecting the mathematical emphases of the Standards, teachers face subjects, such as data analysis and discrete mathematics, not traditionally included in preservice preparation programs. Sources which describe contemporary mathematics (Peterson, 1988; Steen, 1990) should be considered in re-thinking content issues. New curriculum materials and standards also raise issues about the depth of mathematical understanding needed by teachers. The NCTM Professional Standards, for example, suggest that teachers “orchestrate discourse by deciding what to pursue in depth from among the ideas that students bring up in a discussion” (NCTM, 1991, p. 35). Teachers may also need deeper mathematical understanding in order to promote mathematical sense-making, problem solving, reasoning, and justification. 
Ball observes “elementary teachers, most of whom experienced school knowledge as given—and who acquired facts and memorized rules—must invent a teaching that engages students in complex reasoning in authentic contexts” (Ball, in press, p. 14). Lampert foreshadowed the current content need in her choice of the single indicator of an ideal mathematics teacher: “whether that teacher could give students at the grade level he or she is teaching a mathematically legitimate and comprehensible explanation for why the procedures students are using are appropriate or not, or why the answers they are giving are correct or not” OCR for page 3 THE PREPARATION OF TEACHERS OF MATHEMATICS: CONSIDERATIONS AND CHALLENGES: A LETTER REPORT (Lampert, 1987, p. 37). The inservice project Teaching to the Big Ideas seeks to identify the “central organizing principles of mathematics with which students must wrestle as they confront the limitations of their existing conceptions ” (Schifter, Russell, & Bastable, in press, p. 3). Cooney, Brown, Dossey, Schrage, and Whittman (in press) are producing materials for prospective secondary school teachers to bridge the abstraction of typical college mathematics courses with the realities of the secondary curriculum and pedagogy. Differences in need among elementary, middle grades, and secondary teachers may be especially great relative to mathematical depth. Despite strong mathematics community consensus about the importance of subject matter knowledge, only in recent years has the teacher preparation research community been able to assemble a convincing case that subject knowledge matters in teaching. Begle and Geeslin (1972), for example, found that teachers' mathematical preparation did not seem to affect students' test performance. More recent work has been able to demonstrate a relationship (Chaney, 1995). McDiarmid, Ball, and Anderson (1989) conclude: “Recent research highlights the critical influence of teachers' subject matter understanding on their pedagogical orientations and decisions. . . . Teachers' capacity to pose questions, select tasks, evaluate their pupil's understanding, and make curricular choices all depend on how they themselves understand the subject matter” (pp. 195-196). Research perspectives and methodologies are increasingly useful in helping to confirm and illuminate impressions and beliefs about depth and breadth issues. What Is Needed Detailed standards about mathematics content knowledge for prospective K-12 teachers do not exist. Would standards be useful? Who should lead such an effort? What philosophical base (that of mathematics, that of student learning, or both) might ground such an endeavor? How might these balances differ for elementary, middle, and secondary school teachers? Should university mathematics departments offer credit (sometimes graduate credit) for courses about the mathematics content of the K-12 curriculum? How does understanding of advanced mathematical subject matter influence understanding of elementary mathematical concepts? What are the most fruitful ways to think about the integration of mathematics and science? How could such work be informed by practices in other countries? Many mathematics teacher educators contend that teachers, in addition to knowing mathematics, also need to know and experience mathematical inquiry and the “practice” of mathematics (Copes, 1996; Ernest, 1994; NCTM, 1991). 
Where do prospective teachers acquire mathematical “habits of mind” (Brown, Collins, & Duguid, 1989; Cuoco, Goldenberg, & Mark, in press)? How can study of the history and philosophy of mathematics be a meaningful and appropriate part of the mathematical preparation of teachers? How can teachers learn to appreciate the coherence of mathematics, so that it informs their selection of curriculum materials and their lesson planning? What is meant by mathematical “practice”? What aspects of mathematical practice are most defensibly connected to teaching and learning? Preservice programs for elementary teachers are very crowded and allow little time for study of mathematics. Can preservice teachers learn mathematics outside of their mathematics content courses, such as in their clinical experiences, from their cooperating teachers and supervisors, or OCR for page 3 THE PREPARATION OF TEACHERS OF MATHEMATICS: CONSIDERATIONS AND CHALLENGES: A LETTER REPORT from mathematics methods courses? Can studying the practice of mathematics teaching lead to deeper knowledge of mathematics? Can pre- and post-undergraduate experiences be viewed as part of a teacher's mathematical preparation? How can graduate credit be given for elementary mathematics? In what ways do teachers need to “own” knowledge before teaching? There are hard questions concerning whether prospective teachers should learn mathematics in courses specially designed for teachers. Should we imagine “every student a teacher” (Goroff, in press), and thus provide experience appropriate for prospective teachers through regular departmental mathematics course offerings? Or, do prospective teachers need to “come to know” particular mathematics in particular ways that will be most influential for their subsequent practice in classrooms? Who has the proper authority and expertise to make such judgments? Summary. Subject matter matters. Deciding what subject matter, for whom, and in what depth, is a substantial challenge for mathematicians and mathematics educators. Issue II: How Should Teachers Come to Know Mathematics? The MSEB (NRC, 1995) has argued that “To prepare teachers to implement the new vision of mathematics education, colleges and universities need to reflect the same principles in their programs for the preparation of teachers.” Postsecondary institutions provide a variety of structures in which prospective teachers learn mathematics. Influenced by 1983 Mathematical Association of America recommendations (MAA, 1983), many institutions offer special courses with titles such as “Number Systems for Elementary Teachers” and “Geometry for Elementary Teachers” for prospective elementary teachers. Judging from the textbooks, the approach often involves teaching these topics at an elementary level while modeling instructional methods appropriate for use in the elementary classroom. Such experiences can therefore be considered both methods and content courses. In other situations, elementary teachers learn mathematics in general education offerings. Secondary candidates typically elect to major in mathematics, taking a number of standard department offerings. At advanced levels prospective secondary teachers sometimes take courses in abstract algebra, linear algebra, geometry, and real analysis along with other mathematics majors, and sometimes take courses specially designed for prospective teachers. Reform documents promote learning through active engagement with subject matter, both for students and for prospective teachers. 
There are various innovative attempts at providing mathematics content preparation for prospective teachers. Projects such as the Middle School Mathematics Project (Stake, 1993) and the Middle School Mathematics Program (Sullivan, 1993) have experimented extensively with the use of “hands-on” materials and discovery approaches toward mathematics instruction for teachers. However, not only is there little national documentation about attempts of these types, little is known from research about how such experiences might differentially affect student learning of mathematics and use of mathematics in teaching practice later on. Studies which describe, synthesize, and compare college and university-based programs of teacher preparation are important to the improvement of teacher preparation (Ball & Wilson, 1990; Goodlad, 1991; Howey & Zimpher, 1989; Raizen OCR for page 3 THE PREPARATION OF TEACHERS OF MATHEMATICS: CONSIDERATIONS AND CHALLENGES: A LETTER REPORT & Michelsohn, 1994; Stake et al., 1993). Additional work specific to mathematics would be useful. There is growing consensus among mathematics teacher educators and researchers (Ball, in press; Even & Lappan, 1994; Schifter, in press; Thompson & Thompson, 1994) that preparing future teachers to be effective in the standards-based reform climate depends in part upon teachers ' experience of “qualitatively different and significantly richer understanding of mathematics than most teachers currently possess” (Schifter & Bastable, 1995, p. 1). But Carpenter (1995) argues that, even when teachers are taught additional content in their undergraduate programs, they do not necessarily apply that knowledge to their teaching, or even retain that knowledge. He claims that the way in which teachers come to understand the content is critical, and its relationship to future teaching practice is not well understood; “ . . . teachers need to understand how their content knowledge applies to their teaching . . . [so] that the content is learned in a context that provides some links with how that knowledge is used in teaching” (Carpenter, 1995, p. 23). What Is Needed Research presents little evidence about the connections between how teachers come to know mathematics and their own practice in the mathematics classroom. How can we learn from programs, both inservice and preservice, which are experimenting with helping teachers come to know mathematics in new ways? How can we articulate these approaches and share them within the mathematics teacher education community? Despite recurring calls for blending content and pedagogy in teacher preparation, research tells us little about this area. Are there effective ways of integrating mathematics content and pedagogy in teacher preparation? How do these differ across levels? What skills and knowledge do teacher educators need to enact these approaches? What are the effects of “some content, some pedagogy”? Summary. It's not just the mathematics. Knowing mathematics does not ensure the effectiveness of prospective teachers. How they come to know their mathematics matters as well. Issue III: How Do Teachers Learn about Teaching Mathematics? Most would probably agree that scientists learn to do science by working in laboratories, and that students learn to do mathematics in part by solving problems. What is the appropriate “laboratory” in which the prospective teacher of mathematics learns about teaching mathematics? 
Many mathematics education methods courses across the country address pedagogical content knowledge, defined by Shulman to include “. . . for the most regularly taught topics in one's subject area, the most useful forms of representation of those ideas, the most powerful analogies, examples, explanations, and demonstrations . . . an understanding of what makes the learning of specific topics easy or difficult . . .” (Shulman, 1986, p. 9). Prospective mathematics teachers learn about pedagogical content knowledge when their instructors model activities, introduce tools such as manipulatives and technology, and discuss literature about how students learn certain OCR for page 3 THE PREPARATION OF TEACHERS OF MATHEMATICS: CONSIDERATIONS AND CHALLENGES: A LETTER REPORT mathematical concepts and about student misconceptions. Increasingly, prospective K-12 teachers are learning about what is known from research about children's learning of mathematics. There is increased evidence that prospective teachers can learn about teaching mathematics from studying the “practice of mathematics teaching. ” Several projects are underway in which actual mathematics classrooms, or suitable proxies, become fruitful sites for learning about mathematics teaching. Some preservice and inservice teacher education projects are beginning to draw heavily on video excerpts from classrooms as the material from which students can learn about mathematics teaching (Ball, Lampert, & Rosenberg, 1991). The current expanding pool of material in the form of vignettes, scenarios, case studies, teaching cases, and sample student work is consistent with this trend (Merseth 1991; Barnett, Goldenstein, & Jackson, 1994). Related approaches involve teacher reflection and writing about practice (Schifter, 1996), and, more generally, action research (Oja & Ham, 1994) and inquiry into student learning (Raizen & Michelsohn, 1994). In the case of new and innovative K-12 curriculum, many developers are taking quite seriously their responsibility to educate teachers through the curriculum materials (Russell, 1994). Other possibilities for extending the opportunities to learn about teaching mathematics might be found by studying other countries' inservice mentoring practices. Little (1993) points out that the reforms in subject matter teaching “represent, on the whole, a substantial departure from teachers' prior experience, established beliefs, and present practice. . . . they hold out an image of conditions of learning for children that their teachers have themselves rarely experienced” (p. 130). Lampert (1985) and Ball (1993) have characterized teaching as the management of dilemmas, and posit that helping teachers prepare to teach is mainly about preparing teachers for handling the uncertainty of their work. It is a great challenge to prepare people for a kind of mathematics teaching that is so unfamiliar, invisible, and unpredictable. Cases, vignettes, and video tapes are beginning to help. Clinical experiences, such as practica, student teaching, and internships, have long been a part of the preparation of teachers of mathematics. A substantial general literature on field experience examines the role of practice in developing teaching (Buchmann & Schwille, 1983; Feiman-Nemser, 1983; Floden, Buchmann, & Schwille, 1987), giving it mixed reviews. 
Raizen and Michelsohn (1994) point out that schools of education are requiring increased involvement of faculty in students ' practica in schools, but the level of involvement of content-specific faculty in clinical experiences varies widely. A number of studies have found that prospective teachers often have insufficient knowledge of content and pedagogy when they enter their student teaching and internship experiences (Brown, 1985), and in their initial teaching positions (Brown & Borko, 1992), and that there is a tendency for beginning teachers to retreat to teaching styles that conform with the setting in which they are working. The notion of teachers learning from one another is more thoroughly developed in other countries, especially Japan (Stevenson & Stigler, 1992). The NCTM Professional Teaching Standards (NCTM, 1991) make the case that the preservice education of teachers should develop teachers' knowledge of “the influence of students' linguistic, ethnic, racial, and socioeconomic backgrounds and gender on learning mathematics” (p. 144). Preparing teachers to make mathematics instruction work for student groups for whom it is currently highly ineffective, if not failing altogether, is a difficult challenge for OCR for page 3 THE PREPARATION OF TEACHERS OF MATHEMATICS: CONSIDERATIONS AND CHALLENGES: A LETTER REPORT teacher educators. A number of researchers and practitioners have addressed this area (Secada, Fennema, & Adajian, 1995), consistently making the point that consideration of equity issues is critical in teacher development. What Is Needed There are several well-known inservice efforts which are based on learning through mathematics teaching practice. What is the potential of such efforts for preservice teachers? What kinds of adaptations are necessary? What can preservice teachers who lack classroom experience still learn from the “practice of mathematics teaching”? How promising are classroom-like sites as places to learn about mathematics? How can preservice teachers best learn about how children learn mathematics? The induction years and transition into the profession are critical parts of the teacher preparation experience. What happens when new teachers work in settings where the methods other teachers use are inconsistent with what the new teachers have learned? What is the responsibility of postsecondary institutions in shaping and supporting the induction experience of beginning teachers who are embarking on reformed teaching practices? How can mathematicians be involved? Summary. It's not just “some mathematics and some pedagogy.” There is much to be learned about mathematics teaching by examining the practice of mathematics teaching. Issue IV: How Can We Build Capacity Among Those Who Educate Mathematics Teachers? Not only are prospective K-12 teachers faced with teaching mathematics in ways they have never experienced in the reform climate, but mathematics teacher educators are faced with helping teachers learn to teach in a way that they themselves have probably neither experienced nor used much. Often the mathematics faculty members who teach content courses for elementary school teachers are isolated in their departments, without colleagues to consult about new trends and materials. Sometimes mathematics methods courses are taught by education faculty with little expertise or knowledge of current reform trends in mathematics education. Networking and interaction among mathematics teacher education community is only at a fledgling stage. 
The Association of Mathematics Teacher Educators, founded in the late 1980s, is one example. The professional societies are beginning to take up the issue of professional development for those who teach prospective teachers. There are occasional faculty enhancement opportunities for mathematics teacher educators, many NSF-sponsored, where faculty can learn about trends, materials, and research directions in mathematics teacher preparation. Examples include Project RADIATE, Project PROMPT, Project NEXT, and the East Carolina University Middle Math project. Additional professional support, access to emerging research developments, and resources for use in the preparation of teachers could be helpful as well. Aside from commercial textbooks used either in methods courses or in content courses especially designed for teachers, few curricular resources are available for mathematics teacher educators to use and adapt in their instruction. A number of innovative projects and programs are generating materials which might prove useful (Cooney et al., in press; Graeber & Johnson, 1991; Schifter, in press; Merseth, in preparation). Too often, promising materials are implemented in specific projects only, without being produced or packaged in more widely accessible forms. “It is difficult to prepare elementary school teachers to teach science (or any subject) well without having them practice with excellent clinical teachers in classrooms.” (Raizen & Michelsohn, 1994). Despite the critical role of the internship and clinical experience, there are few models of mathematics-specific professional development for cooperating and supervising teachers coming from disparate backgrounds. Little mathematics-specific literature or activity exists in this area.

What Is Needed
Mathematicians, mathematics educators, teacher educators, cooperating and supervising teachers all are involved in mathematics teacher preparation. What kind of diversity of training and experience do they represent? What professional development experiences for mathematics teacher educators are needed? What are some exemplary opportunities and structures, and how do they work? How can a “community” be encouraged among those involved in the preparation of teachers of mathematics? Could standards be developed for this community? Mathematics teacher education faculty need materials to use in the preparation of teachers of mathematics, as well as opportunities to learn about these new materials and their effective use. How can the development of alternative materials, resources and technological tools for use in the preparation of teachers of mathematics be encouraged? How can such materials be disseminated so that faculty can learn to use them? How can school-university professional development programs for faculty and school personnel involved in mathematics teacher preparation be funded and continued? How can the effectiveness of teacher preparation curriculum materials be assessed?

Summary. Teaching teachers is substantive work. We can learn from one another and from research.

Issue V: How Can Mathematics Teacher Preparation Be a Coherent Process?
Learning to teach mathematics occurs through a variety of experiences: study of mathematics content, pedagogical preparation, formal clinical preparation, and experiences as a student and learner of mathematics. Thus “. . .
the dispersion of the teacher preparation effort has resulted in teacher education being nobody's clearly defined responsibility” (Goodlad, 1991, p. 6). Rather, it is influenced by a collection of stakeholders with differing expectations, values, and assumptions about what is important in mathematics teacher preparation. These stakeholders include mathematics teacher education faculty, school of education faculty, mathematics faculty, faculty in community colleges, supervisors of clinical experiences, cooperating teachers in clinical experiences, and administrators in clinical settings. State certification and licensure agencies, national accreditation agencies, professional societies, authors of content and methods textbooks, and, of course, all teachers at all levels who have taught the prospective teachers also play important roles. There is virtually no research that helps clarify the array of structural arrangements, or the impact of various types of coordination, in promoting effective mathematics teacher preparation. There are policy shifts and trends that could possibly influence preparation programs for mathematics teachers. For example, the American Association of Colleges for Teacher Education's Teacher Education Policy in the States Survey (AACTE, 1994) indicates that states are considering portfolio assessments, adaptations for multicultural considerations, and state-developed tests as measures of competency in subject areas as requirements for regular licensure. There is also much discussion of demonstrated ability to teach as part of state and national teacher licensure and credentialling processes. The Interstate New Teacher Assessment and Support Consortium (INTASC, 1992) has developed a new portfolio assessment system for initial certification which is quite ambitious in its demands for high quality mathematics teaching. Faculty with mathematics-specific content and pedagogy interests should be involved in shaping and learning about these trends. Some states now require instructors of methods courses to be certified, or to have recent in-school experience. The National Council for Accreditation of Teacher Education (NCATE) also expects K-12 expertise among teacher educators, thus impacting the participation of mathematicians in the teacher preparation enterprise. Decisions and policies made in schools of education, state agencies, and national organizations concerned with licensure, accreditation, and standards-setting efforts for teacher credentialling will specifically affect the mathematics preparation of teachers. Because mathematics teacher educators face very challenging problems in the more limited domain of mathematics, there is little research or codified practice to inform our thinking about how infrastructure shifts might either facilitate or impede mathematics teacher preparation, and about how the mathematics community might engage effectively in policy-related work. There are alternative programs at the state level (AACTE, 1993) and innovative programs such as Teach for America (Kopp, 1994) and Troops to Teachers (Keltner, 1994). How these relate to the growing base of knowledge about research and practice in mathematics teacher preparation is unclear. Additional information about the nature, extent, and effectiveness of mathematics teacher preparation through all these programs is needed.
It may be important to acquire much deeper understanding of a number of important models for teacher development, and to analyze the implications for the improvement of mathematics teacher preparation. Studies of professional development school models (Abdal-Haqq, 1995), for example, which focus particularly on the preparation of teachers of mathematics might be beneficial. The National Board for Professional Teaching Standards is developing assessments that recognize “accomplished practice” in mathematics teaching of adolescents and young adults (NBPTS, 1994). The Salish I Research Project is a “national collaborative, working with a coherent research design, to establish a national data base on science teacher preparation programs and a study of the teaching abilities as reflected by the national science education standards” (Brunkhorst, 1994, p. 3). The NSF Collaboratives for Excellence in Teacher Preparation should provide a rich source of evidence about how changes in infrastructure might influence policy shifts, as well as effective mathematics teacher preparation, provided there is adequate support for the longitudinal research that this requires. Although research does exist concerning successful non-mathematics teacher preparation and development programs, such as the National Writing Project (Smith & Ylvisaker, 1993), little of that work has been analyzed or used by the mathematics education community. Comparative study of mathematics teacher preparation activity in other countries might also be rewarding, especially as the results of the Third International Mathematics and Science Study (US National Research Center, 1995) become available.

What Is Needed
Prospective teachers experience contradictions and inconsistencies among the many components of their programs. How can prospective teachers make sense of contradictions and differences they might find among their content, mathematics pedagogy, and clinical experiences? Can they incorporate these inconsistencies in some productive way into their own learning, and their own efforts at building working models of mathematics teaching and learning? Should school teachers be equal participants in the planning and redesign of teacher preparation programs? (Certainly accrediting agencies such as NCATE encourage their involvement.) How can the differences in expectations and views between mathematics content faculty and school personnel be mediated? The lack of coherence among program elements presents a great challenge for mathematics teacher preparation. How can cross-department and cross-college arrangements facilitate more coherence? In what ways should mathematics faculty become more involved in the clinical experiences of prospective teachers? How can mathematics department missions and reward structures reflect commitment to such involvement? What administrative structures within colleges and universities will support and encourage collaboration among the many stakeholders in teacher preparation? Are mathematicians sufficiently committed to teacher preparation to provide encouraging advice to promising candidates? A significant percentage of the prospective teachers in this country begin their education in the community and two-year college system. How can the community college system, with its base of experience and commitment to teaching-related matters, become a full partner in the teacher preparation debate?
Much of the preparation of teachers of mathematics is significantly influenced by state, regional, and national entities. How can the mathematics community learn more about these processes? How might mathematicians interact with them? How can effective input from the mathematical community to the state and national teacher preparation infrastructure be assured? Districts and states have great discretion over the hiring, promotion, and development of teachers. Can the mathematics community become involved in these policies in order to support teachers' ongoing professional development? Are there organized intervention strategies which might be appropriate?

Summary. Mathematics teacher preparation is situated in a much larger arena. Engagement with the larger teacher preparation infrastructure is critical.

CONCLUSION
We reiterate the summary statements:
Subject matter matters. Deciding what subject matter, for whom, and in what depth, is a substantial challenge for mathematicians and mathematics educators.
It's not just the mathematics. Knowing mathematics does not ensure the effectiveness of prospective teachers. How they come to know their mathematics matters as well.
It's not just “some mathematics and some pedagogy.” There is much to be learned about mathematics teaching by examining the practice of mathematics teaching.
Teaching teachers is substantive work. We can learn from one another and from research.
Mathematics teacher preparation is situated in a much larger arena. Engagement with the larger teacher preparation infrastructure is critical.
The MSEB believes that the issues raised here must be addressed by those concerned with postsecondary education, jointly with those concerned about mathematics education research, K-12 curriculum, K-12 schools, continuing teacher education, and policy development. Specifically, we recommend continued attention to the research knowledge base in mathematics teacher preparation, a relationship between teacher preparation research and practice which enables both enterprises to be more effective, the creation of diverse sets of resources for mathematics teacher preparation, and heightened emphasis on providing faculty development. We see a need for efforts that will join the many parts of the mathematics teacher preparation enterprise in productive collaborations to address these challenges. The MSEB looks forward to being involved in continued work on this important topic.
{"url":"http://www.nap.edu/openbook.php?record_id=10055&page=3","timestamp":"2014-04-19T11:58:35Z","content_type":null,"content_length":"68458","record_id":"<urn:uuid:da41b939-b883-4a23-8333-0bf48aba704d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
A001423 - OEIS
A. de Vries, Formal Languages: An Introduction, http://haegar.fh-swf.de/Seminare/Genome/Archiv/languages.pdf
Andreas Distler and Tom Kelsey, The Monoids of Order Eight and Nine, in Intelligent Computer Mathematics, Lecture Notes in Computer Science, Volume 5144/2008, Springer-Verlag. [From N. J. A. Sloane, Jul 10 2009]
G. E. Forsythe, SWAC computes 126 distinct semigroups of order 4, Proc. Amer. Math. Soc. 6 (1955), 443-447.
H. Juergensen and P. Wick, Die Halbgruppen von Ordnungen <= 7, Semigroup Forum, 14 (1977), 69-79.
D. J. Kleitman, B. L. Rothschild and J. H. Spencer, The number of semigroups of order n, Proc. Amer. Math. Soc., 55 (1976), 227-232.
R. J. Plemmons, There are 15973 semigroups of order 6, Math. Algor., 2 (1967), 2-17; 3 (1968), 23.
Satoh, S.; Yama, K.; and Tokizawa, M., Semigroups of order 8, Semigroup Forum 49 (1994), 7-29.
N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence).
N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence).
{"url":"https://oeis.org/A001423","timestamp":"2014-04-17T16:16:15Z","content_type":null,"content_length":"18074","record_id":"<urn:uuid:c395f144-c688-4a62-a6e3-9179dcec0579>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: Ranged Weapons and Ammo Relationship

Well, since Ranged weapon damage and PTH is based on both the weapon and the ammo, how does that work exactly? For PTH, it seems likely to me that the PTH is added together, but what about the damage? Is it multiplied, or added? Or does it go through some deviously tricky formula? Multiplication would make sense, but that would make it impossibly overpowered, wouldn't it? I dunno. So I'm hoping someone could make this clear to me.

Weapon X (yes I said it) adds a fixed amount of damage per X. Ammo X increases overall damage by a single percentage point.

Ooh, thanks. However, what is the value? Of the absolute value increase of the weapon's X.

So if I understand you correctly, a Bolt [10x6] (+0) would increase my damage by 16%? Or do you mean a Bolt [10x6] (+0) would increase damage by 60%?

I think he means: Bolt [10x6] (+0) increases damage by 6%.

If that was the case, with multipliers being the only difference, then why do the different ammos have different base values? Surely there must be some sort of effect, no?

Bolt [10x1] would do 1% less damage than Bolt [10x2] is what he means. The base damage is exactly that; the base damage. Much like weapons have a base damage.

"For PTH, it seems likely to me that the PTH is added together" This is not entirely correct. Ammo PTH is three times as effective as weapon PTH. So a bolt +6 will add 18% to your PTH, while a crossbow +6 will only add 6%.

How does the x on weapons work exactly then? Is it a percentage? I always thought of it like this: an Elven Longbow 6x1000 +20 added 6,000 damage and 20% plus to hit. I guess I was wrong... I'm so slow.

You're right on the PTH part. As for damage, all you can really say is that relative increases are linear. Therefore:
[6xY] +20 = X damage
[6xNY] +20 = NX damage
or something to that effect. I think when Jon explained it, there was a third variable in there.

"if damage for weapon stats X and strength Y is N, then damage for 10X and 10Y is now 10N, for any X, Y, and N"

There ya go.

"if damage for weapon stats X and strength Y is N, then damage for 10X and 10Y is now 10N, for any X, Y, and N"
Y in this equation is the strength of the minion using the weapon, correct? In which case it would be
Minion with X ST and weapon [6xY] +20 = X damage
Minion with NX ST and weapon [6xNY] +20 = NX damage
Meaning that simply equipping a weapon with twice the X on a minion won't double damage, you would have to double strength as well. I haven't been required to do algebra in a while.. If I continue to look at it, it may make some sense to me... /me scrunches forehead in concentration
Or should I say
Minion with X ST and weapon [6xY] +20 = N damage
Minion with ZX ST and weapon [6xZY] +20 = ZN damage
Basically, double strength and the weapon's X multiplier and you double damage is what Jon seems to be saying (assuming Y is the strength of the minion using weapon X in his equation).

Correct. And nice use of the overloaded variable =)

=) Wow. I *think* I understand: if both ST and weapon damage multiplier were increased by the same percentage, damage would likewise increase by the same percentage. Also, the PTH is added together, with ammo having a 300% increase in effect. However, how important is the ST compared to the multiplier on the weapon damage, in reference to the damage? Also, it was not mentioned how the damages of the weapon and the ammo stack together? Thanks a whole lot!

Draugluin, yes you are correct.
In my experience you get a similar damage increase from doubling your ST as you do from doubling your X, so they are of similar importance. Ammo X will increase your Ranged damage by about 1% per X, so if you have x10 ammo your damage will be near enough to 10% higher than x1 ammo.

I see. But does the base damage of the ammo make a difference?

I would say so, but I don't know by how much they would increase it. You could try buying some x1 slayer arrows and see how much more damage they do than x1 regular arrows.

I'm sorry to be a little slow, but returning to the subject of the weapon damage and ST... When you add in the additional % damage from the multiplier forged of the ammo, is that based on the final damage after applying all other factors? Or does it apply to the weapon damage?

Final damage I believe - you should experience a 1% increase to damage you inflict in Ranged when you upgrade ammo by a point.
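Pulling the thread together: here is a rough back-of-the-envelope sketch in Python of the rules worked out above. To be clear, this is just one reading of what was said, not the actual server-side formula; the function names are made up, and the square-root form is my own guess (any formula that doubles when you double both ST and weapon X would fit).

import math

# Assumptions, all taken from the posts above (none officially confirmed):
#   - PTH is additive, but ammo PTH counts triple compared to weapon PTH
#   - damage doubles when you double BOTH minion ST and the weapon's X
#   - each point of ammo X adds roughly 1% to final damage

def ranged_pth(weapon_plus, ammo_plus):
    # a crossbow +6 adds 6, while a bolt +6 adds 18
    return weapon_plus + 3 * ammo_plus

def ranged_damage(st, weapon_x, ammo_x, base=1.0):
    # sqrt(ST * X) is degree-1 in (ST, X) jointly, so scaling both by Z
    # scales damage by Z, matching "10X and 10Y gives 10N"; it also makes
    # doubling ST and doubling X equally valuable, as reported above
    raw = base * math.sqrt(st * weapon_x)
    return raw * (1 + 0.01 * ammo_x)  # x10 ammo => ~10% more than x1 ammo

# sanity check of the "double ST and X => double damage" claim
d1 = ranged_damage(st=100, weapon_x=6, ammo_x=10)
d2 = ranged_damage(st=200, weapon_x=12, ammo_x=10)
assert abs(d2 - 2 * d1) < 1e-9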
{"url":"http://www.carnageblender.com/bboard/q-and-a-fetch-msg.tcl?msg_id=0022yc","timestamp":"2014-04-17T00:54:46Z","content_type":null,"content_length":"19251","record_id":"<urn:uuid:26604367-1134-4944-8326-027dab893be7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2000

Simplifying Problems

• To: mathgroup at smc.vnet.net
• Subject: [mg22392] Simplifying Problems
• From: "Jordan Rosenthal" <jr at ece.gatech.edu>
• Date: Sun, 27 Feb 2000 18:55:32 -0500 (EST)
• Organization: Georgia Institute of Technology, Atlanta GA, USA
• Sender: owner-wri-mathgroup at wolfram.com

Hi all,

Two questions:

First question: I have an expression which has a sum of a number of sinc-like terms. For

f[k] = Sin[k Pi] / k

if I try using Simplify with the assumption that k is an integer, I get 0:

Simplify[f[k], k \[Element] Integers]

Although this is true for most integers, it is incorrect for the integer k==0, since f[0] = Pi. So why is this happening? I would have expected it to either leave the expression untouched or to give me an If expression. What I would like is to be able to convert the expression to

If[ k==0, Pi, 0]

What is the best way to do this? I can set up a rule like:

f[k] /. Sin[k_*Pi]/k_ -> If[k == 0, Pi, 0]

but my problem is that this does not account for the fact that the pattern k_ must be an integer. How do I include that information? (See my second question for why I can't just use k_?IntegerQ.)

Second question: Let's say I declare a variable to be an Integer with

j \[Element] Integers

Now I set up a function which should only work on integers:

f[x_?IntegerQ] = x+2

This, however, does not recognize that the variable j has been declared an Integer. Is there a way I can get the function to work for variables declared as integers with the Element function?

Any help is appreciated.

Thanks,
{"url":"http://forums.wolfram.com/mathgroup/archive/2000/Feb/msg00530.html","timestamp":"2014-04-16T19:18:56Z","content_type":null,"content_length":"35638","record_id":"<urn:uuid:9bb52b6b-d434-4f19-9b70-9080f180ec0d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: New to the group.
Replies: 1   Last Post: Aug 22, 1997 5:30 PM

Re: New to the group.
Posted: Aug 22, 1997 5:30 PM

At 08:46 PM 7/26/97 -0400, you wrote:
>Read your posting this morning but don't have the time to write a
>lengthy response. I've been using graphing calculators in class for the
>past several years. I find them helpful in exploring limits, in
>graphing the more interesting functions one can look at with a graphing
>calculator, in finding extreme points, and - especially - finding the
>values of definite integrals.
>Graphing calculators have made it possible for beginning calculus
>students to go far beyond the polynomial functions that made up a huge
>portion of first semester calculus in the past.
>I do "expect" all my students to take the AP exam; it's part of the
>course. I have paid the fee out of my own pocket for the occasional
>student in financial straits. I have only had three students out of
>nearly 300 in 10 years of teaching calculus AP convince me that there
>was no reason for them to take the test because of the college they were
>planning to attend. Two of them subsequently transferred and regretted
>not taking the test.
>I use the Finney, Thomas, Demana & Waits text published by
>Addison-Wesley because of the large number of problems with an
>engineering and science basis. This text also incorporates graphing
>calculators throughout, although I have found that the students tend to
>get "lost" in pursuing many of the "Explorations". This may be more my
>lack of experience in clearly defining my expectations in setting up
>these activities than fundamental flaws in the authors' writing.
>I teach at an inner city magnet school in Fresno, CA. We have about 80
>students each year take AB calculus and 25 take my BC course.
>Brad Huff

Thank you very much for your answer, I think it will help me a lot. I could not respond earlier because my system was down for a month. My computer also broke down but now everything is Ok. Thank you for your kind response.

Adolfo Gonzalez Z.
Lincoln International School
{"url":"http://mathforum.org/kb/message.jspa?messageID=653290","timestamp":"2014-04-17T05:35:09Z","content_type":null,"content_length":"16126","record_id":"<urn:uuid:70211847-840d-44d3-a4de-1756ffc75a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairless Hills Algebra 2 Tutor
Find a Fairless Hills Algebra 2 Tutor

...We can work with your school curriculum and on other topics. I try my best to be flexible and available the days that work best for you. I am easily accessible outside of our sessions with questions and will make myself available for your success.
14 Subjects: including algebra 2, calculus, geometry, ASVAB

I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including algebra 2, calculus, physics, ACT Math

...It can afford the student an opportunity to ask questions they might be too shy or embarrassed to ask in front of all their peers. Or it can afford them the opportunity to grasp concepts at a pace more in line with their learning abilities. But sometimes tutoring can be an individualized supplement to a student's classroom education.
10 Subjects: including algebra 2, geometry, precalculus, algebra 1

...I have had many successful students during this time so long as they are willing to be flexible and think a bit abstractly. My approach tends to be to break down abstract ideas into bite-sized chunks and have a student reconstruct that information to create a coherent thought. Proceeding throug...
5 Subjects: including algebra 2, calculus, precalculus, linear algebra

...I have also taught Archaeology to adult students at the Abington High School Evening School. Having accumulated this knowledge over the last 4 decades, I now wish to share it with those seeking tutoring. I am the recipient of an Honors B.A. degree from Trent University in Peterborough, Ontario, Canada.
17 Subjects: including algebra 2, chemistry, geometry, algebra 1
{"url":"http://www.purplemath.com/Fairless_Hills_Algebra_2_tutors.php","timestamp":"2014-04-16T07:36:21Z","content_type":null,"content_length":"24361","record_id":"<urn:uuid:eafae09b-4cae-46e6-8e21-99bf3f53378f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
show that events are independent

Show that if events A and B are independent, then events A' and B are independent.

If events $A,B$ are independent, then we know that $P(A \cap B) = P(A) P(B)$ by definition. That's our key. For the proof, we will use the following lemma: $A^C \cap B = B \setminus (A \cap B)$.

Proof of the lemma: $A^C \cap B = (\Omega \setminus A) \cap B = (\Omega \cap B) \setminus (A \cap B) = B \setminus (A \cap B)$

So now we know that the lemma is true, we can use it to conclude that: $P(A^C \cap B) = P(B) - P(A \cap B)$.

Looking at the right hand side, we have:
$= P(B) - P(A \cap B)$
$= P(B) - P(A)P(B)$ (using our key assumption)
$= (1 - P(A))P(B)$
$= P(A^C)P(B)$

So we conclude that $P(A^C \cap B) = P(A^C)P(B)$ as desired. $QED$

$\begin{array}{rcl} P\left( {A^c \cap B} \right) & = & P\left( B \right) - P\left( {A \cap B} \right) \\ & = & P\left( B \right) - P\left( A \right)P\left( B \right) \\ & = & P(B)\left( {1 - P(A)} \right) \\ & = & P(B)P\left( {A^c } \right) \end{array}$
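A quick concrete check (my own example, not from the posters above): roll a fair die, let $A = \{2,4,6\}$ and $B = \{1,2,3,4\}$. Then $P(A) = 1/2$, $P(B) = 2/3$, and $P(A \cap B) = P(\{2,4\}) = 1/3 = P(A)P(B)$, so $A$ and $B$ are independent. And indeed $P(A^c \cap B) = P(\{1,3\}) = 1/3 = (1/2)(2/3) = P(A^c)P(B)$, exactly as the proof predicts.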
{"url":"http://mathhelpforum.com/advanced-statistics/67735-show-events-independent-print.html","timestamp":"2014-04-18T17:01:23Z","content_type":null,"content_length":"9646","record_id":"<urn:uuid:b073a360-512d-415d-b5f3-e1f7dc19e94f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Given a number of source and a number of target nodes, enumerate the k lightest paths connecting any of the source nodes with any of the target nodes, in the order of their weight. Paths are simple, that is, no node can occur more than once in the path.

Pathfinder is a Java wrapper around REA, written by Jimenez and Marzal in C [1]. It has been inspired by a Python wrapper written by Pierre Schaus and Jean-Noël Monette at UCL (INGI). See the Cytoscape plugins by Pierre Schaus and Jean-Noël Monette for their k shortest paths tool. Note that Pierre Schaus and Jean-Noël Monette also modified the REA code. The multiple-to-multiple end path finding relies on a graph transformation suggested by Olivier Hubaut (former aMAZE team member) and also described in [2]. Pathfinder makes use of a graph library developed by the former aMAZE team.

Input graph

The input graph can be either in tab-delimited or gml format.

graph file
A file containing a graph in either tab-delimited or gml format.

graph id
The identifier of a graph that has been submitted and stored on the server. This allows you to avoid re-submitting large graphs. The identifier is assigned by Pathfinder. For storage of graphs on the server side, see below. Warning: The weight of already submitted graphs cannot be changed!

directed
Check the "directed" option to signal that the input graph is directed. This option is important for the tab-delimited format, because this format, in contrast to gml, does not specify whether a graph is directed or undirected. Results may greatly vary between path finding done on the same input graph treated as undirected or as directed!

Storage of input graphs on server
Submitting large input graphs can take time. In addition, Pathfinder generates temp files each time it is called. Re-using a graph and its associated temp files saves time. This is why input graphs can be stored on the server. Select this option and save the graph identifier that Pathfinder returns. The next time you want to work on the same graph, enter its identifier in the graph id field. Although the input graphs themselves are stored indefinitely when this option has been set, result files generated from them are only available for three days.

Input nodes

Sources and targets
If several source and/or target node identifiers are given, they are separated by '/'. Example:
Source nodes: a/b/c
Target nodes: d

Batchfile
A batchfile allows you to run several path finding tasks in a row. It is a collection of sources and targets, which are specified in a tab-delimited file. Source nodes are assigned to a group containing START in its name and target nodes are assigned to a group containing END. The START and END groups are assigned to experiment groups, whose names can be chosen freely. Thus, each experiment is defined by a start group and an end group, which in turn consist of start nodes and end nodes. Comments in the batchfile are preceded by #. A parsing sketch is given below the example.

# example for batchfile describing two path finding experiments
# the first experiment has source R04511 and target C00191
# the second experiment has source R04198 and targets C00080 and C00681
R04511	START1
C00191	END1
R04198	START2
C00080	END2
C00681	END2
START1	EXP1
END1	EXP1
START2	EXP2
END2	EXP2
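To make that batchfile grammar concrete, here is a rough Python parsing sketch. This is only an illustration of the format described above; Pathfinder itself is written in Java, and the function and variable names below are invented.

# Parse a Pathfinder-style batchfile into experiments.
# Each non-comment line reads "<name>\t<group>": groups whose names contain
# START or END collect nodes, and those group names are in turn assigned
# to freely named experiment groups.
def parse_batchfile(lines):
    members = {}  # group name -> names assigned to it (nodes or groups)
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, group = line.split()[:2]
        members.setdefault(group, []).append(name)

    experiments = {}
    for group, names in members.items():
        # an experiment group contains one START group and one END group
        # (simplification: assumes node names themselves never contain START/END)
        starts = [n for n in names if "START" in n]
        ends = [n for n in names if "END" in n]
        if starts and ends:
            experiments[group] = {
                "sources": [x for s in starts for x in members[s]],
                "targets": [x for e in ends for x in members[e]],
            }
    return experiments

# With the example above this yields:
# {'EXP1': {'sources': ['R04511'], 'targets': ['C00191']},
#  'EXP2': {'sources': ['R04198'], 'targets': ['C00080', 'C00681']}}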
Graph formats

The tab-delimited format is simpler and therefore faster to transfer than the gml format. Warning: For the tab-delimited format, the user needs to specify whether or not the graph is directed. See the option described above to do this. Warning: The tab-delimited format with a node part is restricted to Pathfinder and cannot be read in by other NeAT tools.

1. The tab-delimited format in its most simple form is a list of arcs. Optionally, weights can be set as a third column. Example:

; example tab-delimited graph as arc list with weights on arcs
a	b	1.1
b	c	2.4
c	a	3

2. If one wants to give nodes (i.e. to include orphan nodes or to set node attributes), the node identifiers can be set as one column, along with their weights. Arcs are separated from nodes by a line starting with ;ARCS. Example:

; example tab-delimited graph with nodes
a
b
c
d
;ARCS
a	b	1.1
b	c	2.4
c	a	3

3. Additional node or arc attribute values, along with the attribute name, can be set via tab-delimited node or arc attribute headers. Example:

; example tab-delimited graph with nodes having values for a color attribute
;NODES	color
a	yellow
b	blue
c	blue
d	yellow
;ARCS
a	b	1.1
b	c	2.4
c	a	3

Attributes can also be set on the arcs. Example:

; example tab-delimited graph with nodes having values for a color attribute
; and arcs having values for a probability attribute
;NODES	color
a	yellow
b	blue
c	blue
d	yellow
;ARCS	probability
a	b	0.9	1.1
b	c	0.5	2.4
c	a	0.1	3

# is an alternative comment symbol. If not specified otherwise, the last node or arc column is always treated as the weight column. Warning: The symbol -- is NOT treated as a comment symbol!

gml
The gml format is a generic format allowing data to be stored as a tree of objects consisting of attribute and value pairs. It is widely used for describing graphs. It allows attributes to be set on nodes, arcs and the graph itself. See here for more information on this format.

rank
This option specifies how many paths should be found in terms of weight levels. If, for example, rank is set to three, Pathfinder attempts to find three increasing weight levels, where each weight level may contain several paths.

weighting
Three options are available.
1. unit: This option sets a weight of one on each node.
2. degree: This option sets a weight equal to its degree on each node.
3. as given in input graph: This option should be set to use weights given in the graph. Warning: Weights should be set on the arcs/edges as values of the attribute "Weight". Example for an edge in tab-delimited format, either with the weight as the last column:
1	2	3.22
or with an explicit attribute header:
;ARCS	Weight
1	2	3.22
Example for an edge in gml format:
edge [ source 1 target 2 Weight 3.22 ]

Node weights are transformed into edge/arc weights by taking the mean of the head and tail node weights. The k shortest paths algorithm (see Credits) expects arc weights as input.
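As a concrete illustration of that node-to-arc conversion rule, here is a small Python sketch (my own illustration, not Pathfinder's actual Java code; the names are invented):

# Each arc receives the mean of its tail and head node weights, since the
# k shortest paths algorithm works with weights on arcs, not on nodes.
def node_weights_to_arc_weights(arcs, node_weight):
    """arcs: iterable of (tail, head) pairs; node_weight: dict node -> weight."""
    return {
        (tail, head): (node_weight[tail] + node_weight[head]) / 2.0
        for (tail, head) in arcs
    }

# Example with the 'degree' weighting scheme (each node weighs its degree):
arcs = [("a", "b"), ("b", "c"), ("c", "a")]
degree = {"a": 2, "b": 2, "c": 2}
print(node_weights_to_arc_weights(arcs, degree))  # every arc gets weight 2.0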
The "ObjectType" attribute has two values: one for compound nodes ("Compound") and one for reaction nodes ("Reaction"). In addition, the graph is directed. This definition follows from our experience that metabolic graphs should be represented as directed, bipartite graphs, where forward and reverse direction of a reaction are mutually exclusive (see publications). You can choose between REA (the default, see [1]) or backtracking developed by Fabian Couche and Didier Croes (see publications). You can use backtracking only on metabolic graphs and only with a pre-defined weighting scheme. Backtracking treats the input graph as directed and sets weights as follows: compound nodes receive their degree as weight and reaction nodes receive a weight of one. Only one start and one end node can be given. In general, result files are stored no longer than three days on the server, so make sure you download them on time, if you need them. Pathfinder unifies all paths of equal weight into one pathway. Thus, a pathway does not need to be linear. Because Pathfinder treats paths of equal weight as one pathway, the option "rank" is interpreted as the requested maximal weight level. Pathfinder attempts to find as many weight levels (with any number of paths belonging to a given weight level) as ranks have been specified. Of course, if less than the requested number of paths is present in the graph, Pathfinder cannot return these paths. If a graph is requested as Output type, the nodes have values for the "Path_Rank" attribute, to distinguish different paths in the graph. Nodes and edges/arcs belonging to a path are colored in gold (that is they have a value "gold" for attribute color and value "#FFD700" for attribute rgb_color). The following output types are available: 1. table of paths Pathfinder returns the requested number of lightest paths ranked according to their weight in a table. The table can be converted into a graph. 2. input graph with paths highlighted Pathfinder returns the input graph with paths highlighted in another color. The modified input graph can be input of further analysis steps. Its format can be specified by setting output format either to "tab-delimited" or to "gml". 3. each path as a separated component of a single graph Pathfinder returns the requested number of weight levels (see rank) as separated components of one graph in the desired output format (gml or tab-delimited). The graph can be input of further analsysis steps. Each component merges all paths of equal weight. 4. paths unified into one graph Pathfinder returns the paths unified into one graph, which is returned in the desired output format ("gml" or "tab-delimited"). The graph can be input of further analsysis steps. You can optionally specify an email address, so Pathfinder results can be sent to you by email. This is recommended for very large graphs or batch files, which might require a computation time above the server timeout. If you leave the email field empty, results will be displayed in the browser. The next steps panel allows you to input the result of Pathfinder into one of the listed tools without copy-pasting the result into that tool. The next steps panel will only appear if you request a graph output. Format limitations: 1. Pathfinder cannot deal with multi-edges (several edges between two nodes) or hyper-edges (edges between more than two nodes). 2. Pathfinder does not process correctly mixed graphs (mixtures of undirected and directed graphs). Algorithmic limitations: 1. 
You can choose between REA (the default, see [1]) or backtracking, developed by Fabian Couche and Didier Croes (see publications). You can use backtracking only on metabolic graphs and only with a pre-defined weighting scheme. Backtracking treats the input graph as directed and sets weights as follows: compound nodes receive their degree as weight and reaction nodes receive a weight of one. Only one start and one end node can be given.

In general, result files are stored no longer than three days on the server, so make sure you download them in time if you need them.

Pathfinder unifies all paths of equal weight into one pathway. Thus, a pathway does not need to be linear. Because Pathfinder treats paths of equal weight as one pathway, the option "rank" is interpreted as the requested maximal weight level. Pathfinder attempts to find as many weight levels (with any number of paths belonging to a given weight level) as ranks have been specified. Of course, if fewer than the requested number of paths are present in the graph, Pathfinder cannot return them. If a graph is requested as output type, the nodes have values for the "Path_Rank" attribute, to distinguish different paths in the graph. Nodes and edges/arcs belonging to a path are colored in gold (that is, they have the value "gold" for the attribute color and the value "#FFD700" for the attribute rgb_color).

The following output types are available:
1. table of paths. Pathfinder returns the requested number of lightest paths, ranked according to their weight, in a table. The table can be converted into a graph.
2. input graph with paths highlighted. Pathfinder returns the input graph with paths highlighted in another color. The modified input graph can serve as input for further analysis steps. Its format can be specified by setting the output format either to "tab-delimited" or to "gml".
3. each path as a separated component of a single graph. Pathfinder returns the requested number of weight levels (see rank) as separated components of one graph in the desired output format (gml or tab-delimited). The graph can serve as input for further analysis steps. Each component merges all paths of equal weight.
4. paths unified into one graph. Pathfinder returns the paths unified into one graph, which is returned in the desired output format ("gml" or "tab-delimited"). The graph can serve as input for further analysis steps.

You can optionally specify an email address, so Pathfinder results can be sent to you by email. This is recommended for very large graphs or batch files, which might require a computation time above the server timeout. If you leave the email field empty, results will be displayed in the browser.

The next steps panel allows you to feed the result of Pathfinder into one of the listed tools without copy-pasting the result into that tool. The next steps panel will only appear if you request a graph output.

Format limitations:
1. Pathfinder cannot deal with multi-edges (several edges between two nodes) or hyper-edges (edges between more than two nodes).
2. Pathfinder does not correctly process mixed graphs (mixtures of undirected and directed graphs).

Algorithmic limitations:
1. REA does not allow a graph with negative-length cycles reachable from the start nodes. Therefore, negative weights on arcs should be used carefully, if at all.

Runtime depends strongly on the size of the graph and the number of paths requested. A timeout of 10 minutes has been set. We tested REA on metabolic graphs of up to 44,000 arcs. With the degree weighting scheme and paths requested up to the second rank, the algorithm usually completed within one minute. Be aware that for the unit weighting scheme, runtime can be much longer, because many more paths of equal weight may exist. In this case, the runtime may exceed the server timeout. Whenever a timeout occurs, you might try to repeat the search with your email address set. This will circumvent the server timeout.

Pathfinder does not use the usual RSAT Web Service address! Check the NeAT web services page.

References:
1. Jimenez, V.M., and Marzal, A. (1999). "Computing the K Shortest Paths: a New Algorithm and an Experimental Comparison", Proc. 3rd Int. Worksh. Algorithm Engineering (WAE 1999).
2. Duin, C.W., Volgenant, A., and Voß, S. (2004). "Solving group Steiner problems as Steiner problems." European Journal of Operational Research 154, 323-329.
{"url":"http://rsat.ulb.ac.be/help.pathfinder.html","timestamp":"2014-04-18T00:17:50Z","content_type":null,"content_length":"18786","record_id":"<urn:uuid:62785426-3bf6-4f96-8f1b-44ad532adc61>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Westford, MA Accounting Tutor
Find a Westford, MA Accounting Tutor

...However, my dream is to teach more broad math subjects by introducing the Chinese math learning methods, such as Multiplication Tables. I would like to teach the math concepts and skills rather than formulas. Remembering formulas might help you pass all tests, but it's not the best path to a high grade.
11 Subjects: including accounting, geometry, Chinese, algebra 1

...Looking forward to hearing from you! Cheers, Susie
I have played violin since I was 5 years old. I was trained with the Suzuki Method and completed all levels of Suzuki by age 10.
11 Subjects: including accounting, Spanish, ESL/ESOL, algebra 1

...Finance and accounting courses were taken at the undergrad (Providence College) and graduate school levels (Northeastern). I have more than 20 years of experience in the securities industry across trading, operations, advisory services, capital markets and private investments. I have a degree i...
5 Subjects: including accounting, finance, business, Series 63

...That is why I am one of the busiest tutors in Massachusetts and the United States (top 1% across the country). I provide 1-on-1 instruction in all levels of math and English, including test preparation (SAT, GMAT, LSAT, GRE, ACT, SAT II, SSAT, PSAT, TOEFL; English reading and writing, Algebra I...
67 Subjects: including accounting, English, calculus, reading

...In addition, it is important that non-native speakers be able to speak or write down the appropriate information about themselves when required (e.g., names, addresses, dates, etc., when filling out forms). And too, ESL/ESOL students must become familiar with the often confusing idioms and colloquialisms ...
30 Subjects: including accounting, English, reading, GRE
{"url":"http://www.purplemath.com/Westford_MA_Accounting_tutors.php","timestamp":"2014-04-21T12:47:05Z","content_type":null,"content_length":"24018","record_id":"<urn:uuid:d27d657f-cbb2-45c6-9c5e-1b3277ccbe42>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Code readability vs conciseness

What are the merits and demerits of the following two code snippets:

return n==0 ? 0 : n==1 ? 1 : fib(n-1) + fib(n-2);

if (n == 0)
    return 0;
if (n == 1)
    return 1;
return fib(n-1) + fib(n-2);

for calculating the nth letter in the Fibonacci sequence? Which one would you favour and why?

c coding-style

community wiki? – falstro Feb 16 '10 at 17:39
I prefer return round(pow(GOLDEN_RATIO,n) / sqrt(5)); – KennyTM Feb 16 '10 at 17:39
The nth letter? Is this the Roman version of the Fibonacci sequence? – Mark Byers Feb 16 '10 at 17:40
The second one, first of all because the first one is wrong (fib(n-1) + fib(n+1)?) and second because there's no point in doing that, it just makes the code unreadable. – IVlad Feb 16 '10 at 17:41
stackoverflow.com/questions/160218/to-ternary-or-not-to-ternary – Jørn Schou-Rode Feb 16 '10 at 17:50

closed as not constructive by falstro, Jørn Schou-Rode, Earlz, 0xA3, YOU Feb 17 '10 at 1:55

14 Answers

The first one is the devil and must be purged with fire.

And the second is little better. – Clifford Feb 16 '10 at 17:41
What is wrong with first one ? – Hannoun Yassir Feb 16 '10 at 17:49
@Yassir - it doesn't calculate a fibonacci number – mocj Feb 16 '10 at 17:56
@mocj - Edited, thanks. Sorry for the silly mistake – Tom Feb 16 '10 at 18:10
@tom - that's exactly the point, isn't it? It won't be your first mistake when you write your code like that. – Hans Passant Feb 16 '10 at 18:26

I would favour:
One of my favorite quotes is from Kent Beck: "I'm not a great programmer, I'm a pretty good programmer with great habits." I have seen programmers to fail to implement working code/fixing bugs because of this just too many times. – Lauri Feb 17 '10 at 8:09 add comment Everything in life is a matter of equilibrium. Finding the right compromise between two opposite ends of the spectrum. Optimality is a scoring function that is highly dependent on the evaluator, and the situation, but you should strive for the sweet spot in everything. Programming is not different. you should evaluate • simplicity • terseness • efficiency up vote 2 down • practicality vote • artistic freedom of expression • time constraints and find the sweet spot. Your first construct is clearly powerful and geeky, but definitely not easy to understand. add comment I prefer the second over the first, mostly for readability. The second "reads" well - it has the code broken up, so it reads most like English. This will make it easier to understand for many developers. up vote 1 Personally, I find multiple, chained ternary operations difficult to follow at times. down vote Also, I personally find "conciseness" to be a poor goal, in most cases. Modern IDEs make "longer" code much more manageable than it used to be. I'm not saying you want to be overly verbose, but trying to be concise often causes an increase in the maintenance effort, in my experience. add comment If you're asking for readability, I'd prefer the second option because it doesn't contain the (double!) ternary operator. Usually you're writing code that other people also have to read, and from the second snippet, it's clear at first sight what the function does. In the first snippet though, one has to resolve both ternary operators "in your head" and additionally think about associativity (I'd think about that automatically because parentheses are missing). up vote 1 But anyway, you could reduce the two if statements to one: down vote if(n <= 1) return n; return fib(n-1) + fib(n-2); add comment I'd prefer neither because they are both too slow. Readability should not come at the cost of an exponential explosion in runtime, especially when there exists a simple way that runs in linear time. I'd do this something like this (pseudo-code): up vote 1 down a = 0; vote b = 1; n.times { a, b = b, a + b; } In C you'd have to use a temporary variable unfortunately, but the principle is the same. 1 Wow, how delightfully on topic, yet completely off topic. – Earlz Feb 16 '10 at 17:43 Earlz: Hypocrite? stackoverflow.com/questions/2274937/… – Mark Byers Feb 16 '10 at 17:59 umm? I don't get it. – Earlz Feb 16 '10 at 18:12 add comment Of the two, the second is easier to understand at a glance. However, I'd consolidate it as if (n <= 1) return n; return fib(n-1) + fib(n-2); up vote 1 down vote Or, if you're not into multiple returns: if (n > 1) n = fib(n-1) + fib(n-2); return n; add comment I often find that indentation can make the multiple-ternary operators a lot more readable: return n == 0 ? 0 : up vote 1 down vote n == 1 ? 1 : fib(n-1) + fib(n-2); add comment I prefer the second one in most situations, but there are times where it seems a bit of a waste to not do it in one line. For instance, I'd prefer text="my stuff_"+id==null ? "default" : id; text="my stuff_"; up vote 0 down vote text+="default"; Note, this also helps with DRY because now if you need to change the name of text then you only change it in one place, compared to 3. 
add comment if you use the ?: operator two or three times you will get used to it so i would go with the first up vote 0 down vote add comment I'd say even more than @Lauri: if (n == 0) { tmp = 0; else if (n == 1) { tmp = 1; up vote 0 down vote else { tmp = fib(n-1) + fib(n-2); return tmp; It's good to have just one exit point. So you'd prefer to use a temporary variable than having 3 points of return all at the end of a function and all in one place? – Earlz Feb 16 '10 at 17:58 What's the problem with that? The compiler should be able to optimise that without problems and it helps to write maintainable and debuggable code. – fortran Feb 16 '10 at 18:37 Introducing a temporary variable strictly for the purpose of restricting yourself to having only one exit point is the opposite of "maintainable" coding. – mocj Feb 16 '10 at yeah, I bet you're a lot smarter than Dijkstra was... – fortran Feb 16 '10 at 20:23 When did I make such a claim? You can't seriously claim this version is somehow more maintainable than Lauris can you? – mocj Feb 16 '10 at 21:37 show 9 more comments I'd bet there's no difference in the compiled code. I'd at least try to make it a little more readable: up vote 0 down vote return n==0 ? 0 : ( n==1 ? 1 : fib(n-1) + fib(n+1) ); 1 I'd try to make your answer more readable as well ;) – Earlz Feb 16 '10 at 17:47 add comment As a generic rule, you should write readable code, which means code which is most readable by the people who will actually read it. Most of the time, this means "yourself, three weeks later". When you write code, the good question is then "will I be able to read and understand it again next month ?". up vote 0 Apart from that, the first expression is buggy (it uses fib(n+1) instead of fib(n-2)) and both exhibit the exponential explosion which makes Fibonacci sequence a classical tool for down vote teaching some important practical aspects of algorithmics. add comment Not the answer you're looking for? Browse other questions tagged c coding-style or ask your own question.
{"url":"http://stackoverflow.com/questions/2274937/code-readability-vs-conciseness","timestamp":"2014-04-23T21:53:34Z","content_type":null,"content_length":"129287","record_id":"<urn:uuid:ece362d3-f3b1-4141-a362-fcd2eb96b034>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Lorentz invariance and you

Where were we? Ah yes, spontaneous symmetry breaking. When some field takes on a nonzero value even in empty space, and that field is affected by some symmetry transformation, the resulting symmetry is said to be “spontaneously broken,” and becomes hard for us to see directly. The classic example is the electroweak symmetry of the Standard Model, which is purportedly broken by a Higgs field that we have yet to directly detect. The fields that get expectation values and spontaneously break symmetries are generally taken to be “scalar” fields — that is, they are single functions of spacetime, not something more complicated like a vector field. If a vector field did get a nonzero expectation value, it would have to point somewhere, thereby picking out a preferred direction in spacetime. That means that Lorentz invariance — the physical symmetry corresponding to rotations and changes of velocity — would be broken. Lorentz invariance is a cornerstone of relativity (and thus of all of modern physics), so breaking it is often thought to be bad. But really, how bad is it? When Einstein put together special relativity on the basis of Lorentz invariance, he was arguing that there was no absolute space nor absolute time in the sense of Sir Isaac Newton. If two physicists traveling freely through empty space passed by each other at a high relative velocity, we couldn't tell in any universal sense which one was stationary and which was moving — it's all relative, if you like. If we violated Lorentz invariance by having a vector field get a nonzero value in the vacuum, we could tell who was stationary and who was moving — the vector would define a preferred rest frame. But that's not quite the same as going all the way back to Newtonian spacetime. The underlying theory is still Lorentz invariant — if we can't easily detect this vector field (and we obviously haven't thus far), Lorentz invariance could be spontaneously violated while remaining in complete accord with all experimental tests. I was in on the ground floor for this idea — it was the first project I worked on in graduate school (with George Field and Roman Jackiw), and was sufficiently non-mainstream that I worried for my career prospects. Alas, those were more freewheeling times, and you could get a good postdoc without necessarily jumping on a major bandwagon. Subsequently, I was surprised to see Lorentz violation actually become its own (relatively tiny) bandwagon! A group of researchers, led by Alan Kostelecky at Indiana, have really pushed the idea of writing down ways to spontaneously violate Lorentz invariance, and have spawned an active experimental program to test these ideas using precision data from astrophysics, particle physics, and atomic physics. (Alan has a FAQ on the whole idea of violating Lorentz symmetries.) So I occasionally return to the idea, as in work with my former graduate student Eugene Lim on the gravitational effects of Lorentz-violating vectors. And now I've returned to it again, this time with current student Jing Shu, as we try to understand a fundamental question in physics: why is there more matter than antimatter? This issue goes under the name of “baryogenesis,” as it is baryons (protons, neutrons, and other heavy particles made of three quarks) that have an actual verifiable excess over antibaryons in our observable universe.
We could also contemplate an asymmetry in leptons — electrons, muons, tau particles, and the various neutrinos — but it is hard to measure, since we have no direct handle on the total number of neutrinos vs. anti-neutrinos. But there are certainly a lot more baryons than antibaryons, as Mark (who is one of the world's experts) will tell you. I am not one of the world's experts, but Jing and I have contemplated an interesting idea: that baryogenesis becomes easier in the presence of Lorentz violation. Ordinarily, successful baryogenesis requires three ingredients, as first elucidated by Andrei Sakharov: violation of baryon number (i.e., processes which produce different numbers of baryons than antibaryons), violation of charge and charge-parity symmetries (i.e., processes which behave differently for particles and antiparticles), and a departure from thermal equilibrium (i.e., things don't have a chance to settle down into a quiescent state, in which baryons and antibaryons would presumably be equally abundant). Sakharov's argument, sensibly enough, assumes that everything is nice and Lorentz invariant. If you violate that assumption, an interesting thing happens — you can get different numbers of baryons and anti-baryons even in thermal equilibrium! This is an old idea, actually — suggested by Cohen, Kaplan and Nelson under the name “spontaneous baryogenesis,” and explored more recently in the context of evolving dark-energy (quintessence) fields by Mark and his students Antonio De Felice and Salah Nasri, as well as in the context of simple Lorentz-violating vector fields by Bertolami et al.
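(To attach a rough number to "spontaneous baryogenesis," using nothing beyond standard equilibrium statistical mechanics, and offered here only as an order-of-magnitude sketch rather than a result from any particular paper: a background field coupled to the baryon current acts like an effective chemical potential $\mu$ for baryon number, and for a single relativistic species at temperature $T$ the induced asymmetry is

$$n_B - n_{\bar{B}} \approx \frac{g\,\mu\,T^2}{6}, \qquad \mu \ll T,$$

where $g$ counts internal degrees of freedom. The asymmetry gets frozen in if the baryon-number-violating interactions shut off before $\mu$ relaxes back to zero.)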
(end layman’s wildly speculative conjecture) The edge of the rapidly expanding early universe. Maybe horizon??

• http://eskesthai.blogspot.com/2005/09/cft-and-tomato-soup-can.html I think one had to understand what increasing complexity means in our universe? What relation to “time” would have been of value here, while pointing to the quantum levels?? We knew there existed “a time” for such things to emerge?

hi sean… i noticed in your post that there is no mention of spacetime ‘non-commutativity’ as a potential motivation for studying the effects of violations of lorentz invariance. Is what you are proposing completely independent of this line of thought? If so, what might be other (theoretical, and not necessarily phenomenological) motivations for violating lorentz invariance?

• http://blogs.discovermagazine.com/cosmicvariance/sean/ Elliot, as far as we know there is no such thing as the edge of the universe, or the center. And by “as far as we know” we mean more than “that’s our best guess”; we mean that all the data we have indicate that the visible part of the universe is smooth and uniform, and it doesn’t have any special points either inside or somewhere outside. Of course we are allowed to speculate about what happens outside our observable patch of spacetime, but it doesn’t affect what happens inside.

• http://blogs.discovermagazine.com/cosmicvariance/sean/ Subodh, you’re right that I didn’t really go into motivations. Non-commutative geometry would be one, but not what we were considering in our recent paper. There are some hints that effects in string theory or loop quantum gravity might give rise to Lorentz violation, but nothing very firm. But you don’t need anything very esoteric to get a nonzero vector field, at least temporarily; if you have a scalar field rolling down its potential, its gradient defines a nonvanishing vector field. So we were open-minded, not worrying about the origin of our fields at this point.

Sean, thanks for the clarification. Intuition (particularly layman’s intuition) is easily subject to error.

There is some recent work that indicates (independently?) that large* violations of Lorentz symmetry are a feature of generic renormalizable field theories that change the structure of spacetime at the Planck scale, posing another kind of fine-tuning problem. This paper reports a result that suggests that supersymmetry may have a role to play in suppressing such violations (thus avoiding the need for fine-tuning). Its citation links are worth looking at as well. (* inconsistent with current experimental limits)
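As a schematic illustration of the equilibrium estimate described in the post (a back-of-envelope sketch, not taken from the paper): if the background vector shifts baryon and antibaryon energies to E ∓ μ, the CPT-violating parameter μ acts like an effective chemical potential for baryon number, and in thermal equilibrium at temperature T ≫ μ the asymmetry scales as

$$ n_B - n_{\bar B} \;\propto\; \mu\, T^2 , $$

which then freezes in once the baryon-number-violating interactions drop out of equilibrium.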
{"url":"http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/","timestamp":"2014-04-19T11:25:53Z","content_type":null,"content_length":"112230","record_id":"<urn:uuid:4e22a2ff-1b70-4f95-8147-1711ce14ec15>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 6 Strategy Analysis And Choice

Slide 1: Chapter 6: Strategy Analysis And Choice. Strategic Management: Concepts and Cases, 9th edition, Fred R. David. PowerPoint Slides by Anthony F. Chelte, Western New England College.
Slide 2: Chapter Outline • The Nature of Strategy Analysis and Choice • A Comprehensive Strategy-Formulation Framework • The Input Stage
Slide 3: Chapter Outline • The Matching Stage • The Decision Stage • Cultural Aspects of Strategy Choice
Slide 4: Chapter Outline • The Politics of Strategy Choice • The Role of a Board of Directors
Slide 5: Strategy Analysis & Choice. “Whether it’s broke or not, fix it—make it better. Not just products, but the whole company if necessary.” -- Bill Saporito
Slide 6: Strategy Analysis & Choice. Strategic analysis and choice largely involves making subjective decisions based on objective information.
Slide 7: Strategy Analysis & Choice. The Nature of Strategy Analysis and Choice: • Establishing long-term objectives • Generating alternative strategies • Selecting strategies to pursue • Best alternative to achieve mission and objectives
Slide 8: Strategy Analysis & Choice. Alternative strategies derive from: • Vision • Mission • Objectives • External audit • Internal audit • Past successful strategies
Slide 9: Strategy Analysis & Choice. Participation in generating alternative strategies should be broad.
Slide 10: Strategy-Formulation Analytical Framework. Stage 1: The Input Stage. Stage 2: The Matching Stage. Stage 3: The Decision Stage.
Slide 11: Formulation Framework. Stage 1: The Input Stage — Internal Factor Evaluation Matrix (IFE), External Factor Evaluation Matrix (EFE), Competitive Profile Matrix.
Slide 12: Input Stage • Provides basic input information for the matching and decision stage matrices • Requires strategists to quantify subjectivity early in the process • Good intuitive judgment always needed
Slide 13: Formulation Framework. Stage 2: The Matching Stage — TOWS Matrix, SPACE Matrix, BCG Matrix, IE Matrix, Grand Strategy Matrix.
Slide 14: Matching Stage • Match between organization’s internal resources and skills and the opportunities and risks created by its external factors.
Slide 15: Matching Key Factors to Formulate Alternative Strategies (Key Internal Factor + Key External Factor → Resultant Strategy):
• Excess working capacity (strength) + 20% annual growth in the cell phone industry (opportunity) → Acquire Cellfone, Inc.
• Insufficient capacity (weakness) + Exit of two major foreign competitors from the industry (opportunity) → Pursue horizontal integration by buying competitor's facilities
• Strong R&D (strength) + Decreasing numbers of young adults (threat) → Develop new products for older adults
• Poor employee morale (weakness) + Strong union activity (threat) → Develop a new employee benefits package
Slide 16: Formulation Framework. Stage 2: The Matching Stage — TOWS Matrix, SPACE Matrix, BCG Matrix, IE Matrix, Grand Strategy Matrix.
Slide 17: Matching Stage. TOWS Matrix: • Threats • Opportunities • Strengths • Weaknesses
Slide 18: TOWS Matrix. Develop four types of strategies: • Strengths-Opportunities (SO) • Weaknesses-Opportunities (WO) • Strengths-Threats (ST) • Weaknesses-Threats (WT)
Slide 19: SO Strategies: Use a firm’s internal strengths to take advantage of external opportunities.
Slide 20: WO Strategies: Improve internal weaknesses by taking advantage of external opportunities.
Slide 21: ST Strategies: Use a firm’s strengths to avoid or reduce the impact of external threats.
Slide 22: WT Strategies: Defensive tactics aimed at reducing internal weaknesses and avoiding environmental threats.
Slide 23: TOWS Matrix. Steps in developing the TOWS Matrix: • List the firm’s key external opportunities • List the firm’s key external threats • List the firm’s key internal strengths • List the firm’s key internal weaknesses
Slide 24: TOWS Matrix. Developing the TOWS Matrix: • Match internal strengths with external opportunities and record the resultant SO Strategies • Match internal weaknesses with external opportunities and record the resultant WO Strategies • Match internal strengths with external threats and record the resultant ST Strategies • Match internal weaknesses with external threats and record the resultant WT Strategies
Slide 25: TOWS Matrix layout (top-left cell left blank; columns: List Strengths, List Weaknesses; rows: List Opportunities, List Threats):
• Opportunities x Strengths: SO Strategies (use strengths to take advantage of opportunities)
• Opportunities x Weaknesses: WO Strategies (overcome weaknesses by taking advantage of opportunities)
• Threats x Strengths: ST Strategies (use strengths to avoid threats)
• Threats x Weaknesses: WT Strategies (minimize weaknesses and avoid threats)
Slide 26: Formulation Framework. Stage 2: The Matching Stage — TOWS Matrix, SPACE Matrix, BCG Matrix, IE Matrix, Grand Strategy Matrix.
Slide 27: SPACE Matrix. Strategic Position and Action Evaluation Matrix: • Four-quadrant framework • Determines appropriate strategies □ Aggressive □ Conservative □ Defensive □ Competitive
Slide 28: SPACE Matrix. Two Internal Dimensions: • Financial Strength [FS] • Competitive Advantage [CA]. Two External Dimensions: • Environmental Stability [ES] • Industry Strength [IS]
Slide 29: SPACE Matrix. Overall strategic position determined by: • Financial Strength [FS] • Competitive Advantage [CA] • Environmental Stability [ES] • Industry Strength [IS]
Slide 30: SPACE Matrix. Developing the SPACE Matrix: • EFE Matrix • IFE Matrix • Financial Strength • Competitive Advantage • Environmental Stability • Industry Strength
Slide 31: SPACE Matrix • Select variables to define FS, CA, ES, & IS • Assign numerical ranking from +1 (worst) to +6 (best) for FS and IS; assign numerical ranking from –1 (best) to –6 (worst) for ES and CA • Compute average score for FS, CA, ES, & IS
Slide 32: SPACE Matrix • Plot the average scores on the Matrix • Add the two scores on the x-axis and plot the point on X; add the scores on the y-axis and plot Y; plot the intersection of the new xy point • Draw a directional vector from the origin through the new intersection point
Slide 33: SPACE Factors (Internal vs. External Strategic Position). Financial Strength (FS): Return on investment, Working capital, Cash flow, Ease of exit from market, Risk involved in business. Environmental Stability (ES): Technological changes, Rate of inflation, Demand variability, Price range of competing products, Barriers to entry, Competitive pressure, Price elasticity of demand.
Slide 34: SPACE Factors (Internal vs. External Strategic Position). Competitive Advantage (CA): Market share, Product quality, Product life cycle, Customer loyalty, Competition’s capacity utilization, Technological know-how, Control over suppliers & distributors. Industry Strength (IS): Growth potential, Profit potential, Financial stability, Technological know-how, Resource utilization, Capital intensity, Ease of entry into market, Productivity, capacity utilization.
Slide 35: SPACE Matrix (figure).
Slide 36: Formulation Framework. Stage 2: The Matching Stage — TOWS Matrix, SPACE Matrix, BCG Matrix, IE Matrix, Grand Strategy Matrix.
Slide 37: BCG Matrix. Boston Consulting Group Matrix: • Enhances multidivisional firms’ efforts to formulate strategies • Autonomous divisions (or profit centers) constitute the business portfolio • Firm’s divisions may compete in different industries requiring separate strategies
Slide 38: BCG Matrix. Boston Consulting Group Matrix: • Graphically portrays differences among divisions • Focuses on market share position and industry growth rate • Manage the business portfolio through relative market share position and industry growth rate
Slide 39: BCG Matrix. Relative market share position defined: • Ratio of a division’s own market share in a particular industry to the market share held by the largest rival firm in that industry.
Slide 40: BCG Matrix (figure; axes: Relative Market Share Position vs. Industry Sales Growth Rate; quadrants include Question Marks and Cash Cows).
Slide 41: BCG Matrix • Question Marks • Stars • Cash Cows • Dogs
Slide 42: BCG Matrix. Question Marks: • Low relative market share position yet compete in a high-growth industry □ Cash needs are high □ Cash generation is low • Decision to strengthen (intensive strategies) or divest
Slide 43: BCG Matrix. Stars: • High relative market share and high industry growth rate □ Best long-run opportunities for growth and profitability • Substantial investment to maintain or strengthen dominant position □ Integration strategies, intensive strategies, joint ventures
Slide 44: BCG Matrix. Cash Cows: • High relative market share position, but compete in a low-growth industry □ Generate cash in excess of their needs □ Milked for other purposes • Maintain strong position as long as possible □ Product development, concentric diversification □ If it becomes weak—retrenchment or divestiture
Slide 45: BCG Matrix. Dogs: • Low relative market share position and compete in slow or no market growth □ Weak internal and external position • Decision to liquidate, divest, retrench
Slide 46: Formulation Framework. Stage 2: The Matching Stage — TOWS Matrix, SPACE Matrix, BCG Matrix, IE Matrix, Grand Strategy Matrix.
Slide 47: Grand Strategy Matrix • Popular tool for formulating alternative strategies • All organizations (or divisions) can be positioned in one of four quadrants • Based on two evaluative dimensions: □ Competitive position □ Market growth
Slide 48: Grand Strategy Matrix quadrants:
• Quadrant II: Market development, Market penetration, Product development, Horizontal integration, Divestiture, Liquidation
• Quadrant I: Market development, Market penetration, Product development, Forward integration, Backward integration, Horizontal integration, Concentric diversification
• Quadrant III: Retrenchment, Concentric diversification, Horizontal diversification, Conglomerate diversification, Liquidation
• Quadrant IV: Concentric diversification, Horizontal diversification, Conglomerate diversification, Joint ventures
Slide 49: Grand Strategy Matrix. Quadrant I: • Excellent strategic position • Concentration on current markets and products • Take risks aggressively when necessary
Slide 50: Grand Strategy Matrix. Quadrant II: • Evaluate present approach seriously • How to change to improve competitiveness • Rapid market growth requires an intensive strategy
Slide 51: Grand Strategy Matrix. Quadrant III: • Compete in slow-growth industries • Weak competitive position • Drastic changes quickly • Cost and asset reduction indicated (retrenchment)
Slide 52: Grand Strategy Matrix. Quadrant IV: • Strong competitive position • Slow-growth industry • Diversification indicated to more promising growth areas
Slide 53: Formulation Framework. Stage 3: The Decision Stage — Quantitative Strategic Planning Matrix.
Slide 54: Quantitative Strategic Planning Matrix • Only technique designed to determine the relative attractiveness of feasible alternative actions
Slide 55: Quantitative Strategic Planning Matrix • Tool for objective evaluation of alternative strategies • Based on identified external and internal critical success factors • Requires good intuitive judgment
Slide 56: Quantitative Strategic Planning Matrix • List the firm’s key external opportunities & threats; list the firm’s key internal strengths and weaknesses • Assign weights to each external and internal critical success factor
Slide 57: Quantitative Strategic Planning Matrix • Examine the Stage 2 (matching) matrices and identify alternative strategies that the organization should consider implementing • Determine the Attractiveness Scores (AS)
Slide 58: Quantitative Strategic Planning Matrix • Compute the Total Attractiveness Scores • Compute the Sum Total Attractiveness Score
Slide 59: QSPM layout — rows: Key External Factors and Key Internal Factors (e.g., Research and Development, Computer Information Systems); columns: Strategic Alternatives (Strategy 1, Strategy 2, Strategy 3). (A small numerical sketch of this tally appears after the Key Terms below.)
Slide 60: • Requires intuitive judgments and educated assumptions • Only as good as the prerequisite inputs
Slide 61: • Sets of strategies examined simultaneously or sequentially • Requires the integration of pertinent external and internal factors in the decision-making process
Slide 62: Cultural Aspects of Strategy Choice • The set of shared values, beliefs, attitudes, customs, norms, personalities, heroes, and heroines that describe a firm
Slide 63: Cultural Aspects of Strategy Choice • Successful strategies depend on the degree of support from a firm’s culture
Slide 64: Politics of Strategy Choice. Politics in organizations: • Management hierarchy • Career aspirations • Allocation of scarce resources
Slide 65: Politics of Strategy Choice. Political tactics for strategists: • Equifinality • Satisficing • Generalization • Focus on Higher-Order Issues • Provide Political Access on Important Issues
Slide 66: Role of a Board of Directors. Duties and Responsibilities: • Control and oversight over management • Adherence to legal prescriptions • Consideration of stakeholder interests • Advancement of stockholders’ rights
Slide 67: Key Terms • Aggressive quadrant • Attractiveness Scores (AS) • Board of Directors • Boston Consulting Group (BCG) Matrix • Business portfolio • Cash cows • Champions • Competitive Advantage (CA)
Slide 68: Key Terms • Competitive quadrant • Conservative quadrant • Culture • Decision stage • Defensive quadrant • Directional vector • Dogs • Environmental Stability (ES) • Financial Strength (FS)
Slide 69: Key Terms • Grand Strategy Matrix • Halo error • Industry Strength (IS) • Input stage • Internal-External (IE) Matrix • Long-term objectives • Matching • Matching stage • Quantitative Strategic Planning Matrix (QSPM)
Slide 70: Key Terms • Question marks • Relative market share position • SO strategies • ST strategies • Stars • Strategic Position and Action Evaluation (SPACE) Matrix • Strategy-formulation framework
Slide 71: Key Terms • Sum total attractiveness scores • Threats-Opportunities-Weaknesses-Strengths (TOWS) Matrix • Total Attractiveness Scores (TAS) • WO strategies • WT strategies
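The QSPM tally described in Slides 53-59 is simple arithmetic: each strategy's Sum Total Attractiveness Score is the sum of weight times Attractiveness Score over the key factors. A minimal sketch in Python (all factor names, weights, and AS values below are made-up illustrative numbers, not from the slides):

```python
# Minimal sketch of the QSPM tally (illustrative numbers only).
# Each key factor carries a weight; each strategy gets an Attractiveness
# Score (AS). TAS = weight * AS per factor, and the Sum Total
# Attractiveness Score identifies the preferred strategy.

factors = {  # factor: (weight, AS for strategy 1, AS for strategy 2)
    "Market growth":      (0.20, 4, 2),
    "Competitor exit":    (0.15, 3, 3),
    "Weak cash position": (0.25, 1, 4),
    "Strong R&D":         (0.40, 4, 2),
}

stas = [0.0, 0.0]
for weight, as1, as2 in factors.values():
    stas[0] += weight * as1   # Total Attractiveness Scores, strategy 1
    stas[1] += weight * as2   # Total Attractiveness Scores, strategy 2

print(f"Strategy 1 STAS = {stas[0]:.2f}")   # 3.10
print(f"Strategy 2 STAS = {stas[1]:.2f}")   # 2.65 -> strategy 1 preferred
```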
{"url":"http://www.slideserve.com/mahala/chapter-6-strategy-analysis-and-choice","timestamp":"2014-04-20T10:46:35Z","content_type":null,"content_length":"104253","record_id":"<urn:uuid:5ffa94c8-f002-4c87-b0f3-eae1945a46b7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Gradient of a Tangent to a curve - Mathematics

Gradient of a Tangent to a curve

Calculating the gradient of a tangent to a curve can be very useful in mathematical calculations. Some mechanical structures such as ramps and rides are usually modelled around mathematical curves, and in some cases there is a need to know the gradient of a tangent to such a curve. Calculating the gradient at each point is called differentiation; differentiation in maths terms is the procedure of taking the derivative of a function. The derivative gives the slope of the tangent line to a curve on a graph.

Working with gradients of a tangent to a curve is actually very easy. The most important thing is understanding how graph coordinates such as (x, y) work; once you have that, everything else goes smoothly. For this topic, however, I won't cover the basics of graphs and how their coordinates work. I assume you have a very good understanding. You must also understand what a tangent is: a tangent to a curve at a certain point is a straight line which touches the curve at that point, as shown in the illustration.

Here is the graph y = x^2. Suppose we wanted to find the gradient of the curve at the point (2, 4). If we drew the graph by hand, it would be practically impossible to read off the gradient of the tangent, because it is unlikely that your graph is accurate. To find the gradient of a tangent to a curve, we instead estimate it by using a nearby point on the graph in our calculation.

Here is a close-up of the graph above showing the point (2, 4); we're trying to find the gradient of the tangent to the graph y = x^2 at the point (2, 4). We look at a nearby point, just a tiny, tiny distance from our point in question (2, 4). Let's call our point m and the point a tiny distance away n. We can see this from the graph close-up. If the x coordinate of point m is 2, this must mean that the x coordinate of n is 2 + h, where h is that tiny horizontal distance. The y coordinate of n comes from y = x^2, so the y coordinate of n is (2 + h)^2. Do you follow? You must spend some time trying to understand this part as it is vital!

The "up" distance is the difference between the y coordinate of n and the y coordinate of m, and the "across" distance is h. We know the m coordinates are (2, 4) and the n coordinates are (2 + h, (2 + h)^2). That means:

gradient of mn = ((2 + h)^2 - 4) / h = (4h + h^2) / h = 4 + h

But remember, n is just a small distance away from m; because of this, h must be really, really tiny, pretty much a zero value. That means the gradient of mn is 4 + 0 = 4, since h is very tiny.

The gradient of the tangent to y = x^2 at (2, 4) is 4.

Example 2

Here is another example. What is the gradient of the tangent to y = x^2 at x = 4?

The y value is 4^2 = 16, so the coordinates of our first point are (4, 16). To find the gradient we need two points, and the other point must be (4 + h, (4 + h)^2). So:

gradient = ((4 + h)^2 - 16) / h = (8h + h^2) / h = 8 + h, which tends to 8 as h shrinks.

The gradient of y = x^2 at x = 4 is 8.

Example 3

The same approach applies to other graphs such as y = x^3.

General formula for any point

There must be a general formula that you could use to find the gradient at any other point of a given graph, for example y = x^2 (and the same idea applies to any graph such as y = x^3). Let's try to find a general formula for y = x^2:

gradient = ((x + h)^2 - x^2) / h = (2xh + h^2) / h = 2x + h

As we've seen from above, since h gets really small its term becomes unnecessary. So that leaves 2x as our formula. (A quick numerical check of this limit appears at the end of this post.)

2 Responses

1. ahmed says: not good

2. james says: very easy to get lost as you didnt explain what h is
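The numerical check promised above: a minimal Python sketch of the limit at x = 2 (the function and the step sizes h are just illustrative choices):

```python
# Numerical check of the limit above: the gradient of y = x^2 at x = 2 is
# the limit of ((x + h)^2 - x^2) / h as h shrinks. Step sizes are arbitrary.

def f(x):
    return x ** 2

x = 2
for h in [1.0, 0.1, 0.01, 0.001]:
    gradient_mn = (f(x + h) - f(x)) / h   # equals 4 + h exactly for this curve
    print(f"h = {h:<6}  gradient of mn = {gradient_mn}")

# The printed values approach 4, matching the formula 2x = 2 * 2 = 4.
```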
{"url":"http://mathematicsi.com/gradient-of-a-tangent-to-a-curve/","timestamp":"2014-04-21T12:22:14Z","content_type":null,"content_length":"69458","record_id":"<urn:uuid:e09b5adf-dbd3-4a82-9054-cc2ceffa6099>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
QM/MM tutorial
QM/MM calculations on a Diels-Alder antibody catalyst.

V. Conclusions, Discussion and Outlook

We have performed a number of QM/MM calculations on a Diels-Alder cycloaddition. We have seen that the reaction barrier decreases when going from vacuum to water and from water to the catalytic antibody. The conclusion so far is that the protein catalyzes the cycloaddition by stabilizing the transition state with respect to the reactant state. See table 5 and figure 9 for details. However, there are a few critical points that have not been addressed.

Table 5. Relative energies of the reactant, transition state and product geometries in vacuo, in water and in the protein at the PM3/GROMOS96 QM/MM level. The energies are reported in kJ/mol and are taken relative to the potential energy of the reactant.

               vacuo     water    protein
  Reactant       0.0       0.0        0.0
  Trans. St.   192.902   139.5      113.5
  Product      -45.45    -69.3     -144.4

Figure 9. Energy profiles for the Diels-Alder cycloaddition in the vacuum, in water and in the active site of the catalytic antibody, calculated at the PM3/GROMOS QM/MM level.

First of all, we have used a Steepest Descent algorithm to do the energy minimizations. This algorithm, however, is known not to converge very well near a minimum. This was one of the reasons for increasing the convergence criteria. A better algorithm for minimization would be the BFGS algorithm, which employs higher-order derivatives, but is much more expensive in terms of computational cost. It would give more accurate results though, which is the primary reason for using it.

Second, the results happen to be very sensitive with respect to the initial conditions. Slightly different starting configurations would result in different potential energy curves. This is another consequence of using the simple Steepest Descent method for minimization.

Third, real reaction barriers are Free Energy barriers. With the current setup for the Linear Transit it is possible to do a free energy calculation. We need to define a State A and a State B, where A and B represent the reactant and product configuration respectively. The constraint between the Dummy atoms in state A would be 0.14 and in state B 0.3. To actually perform a QM/MM Free Energy calculation, one needs to specify both the A and B state parameters in the constraint section of the topology file:

  [ constraints ]
  ; atom1   atom2   type   stateA   stateB
    dummy1  dummy2  2      0.14     0.3

Furthermore, one needs to 'tell' gromacs it is supposed to do a free energy perturbation calculation by adding the lines (see gmx manual):

  free_energy  = yes
  init_lambda  = 0
  delta_lambda = 0.01

to the mdp file. The calculations will be more time consuming, but the result is the free energy curve of the reaction, which is easier to relate to experimental data than the potential energy curves we computed thus far. (A small numerical sketch of extracting a free-energy difference from such runs is given after this section.) But the tutorial was aimed at giving a more qualitative picture of the reaction rather than a quantitative one.

We hope you enjoyed doing this tutorial and that you found it useful for your own work now or in the future. Of course, the system used in the tutorial was a rather easy starting point, as the transition state analogue was known, but the techniques you learned should work without such knowledge as well. If, while performing QM/MM calculations, you run into trouble, or would like to discuss something, you can always contact me.

Previous: IV. Optimization of product, reactant and transition state geometries in the fully solvated protein, using Linear Transit in gromacs
updated 07/09/04
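For completeness, here is one way the free-energy difference could be recovered from such lambda runs by thermodynamic integration. This is a sketch only, not part of the original tutorial: the dhdl values below are placeholders, and a real calculation would take the ensemble averages of dH/dlambda from the gromacs output of each window.

```python
# Sketch: Delta G from the lambda windows via thermodynamic integration,
# dG/dlambda = <dH/dlambda>. The dhdl numbers are placeholders; real values
# would come from the gromacs dhdl output files.

import numpy as np

lam  = np.linspace(0.0, 1.0, 11)              # lambda = 0.0, 0.1, ..., 1.0
dhdl = np.array([12.0, 10.5, 8.9, 7.2, 5.1,   # <dH/dlambda> per window (kJ/mol)
                 3.0, 0.8, -1.9, -4.5, -7.2, -9.8])

dG = np.trapz(dhdl, lam)                      # trapezoidal integration over lambda
print(f"Delta G (state A -> state B) ~ {dG:.1f} kJ/mol")
```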
{"url":"http://wwwuser.gwdg.de/~ggroenh/EMBO2004/html/conclusions.html","timestamp":"2014-04-21T14:40:58Z","content_type":null,"content_length":"5286","record_id":"<urn:uuid:3a8b4ef0-74ab-4a65-a8e5-c307e3bc61de>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Reserving based on log-incremental payments in R, part III
January 22, 2013 By Markus Gesmann

This is the third post about Christofides' paper on Regression models based on log-incremental payments [1]. The first post covered the fundamentals of Christofides' reserving model in sections A - F, the second focused on a more realistic example and model reduction of sections G - K. Today's post will wrap up the paper with sections L - M and discuss data normalisation and claims inflation.

I will use the same triangle of incremental claims data as introduced in my previous post. The final model had three parameters for origin periods and two parameters for development periods. It is possible to reduce the model further, as Christofides illustrates in section L onwards, by using an inflation index to bring all claims payments to current value and a claims volume adjustment or weight for each origin period to normalise the triangle.

In his example Christofides uses claims volume adjustments for the origin years and an earning or inflation index for the different payment calendar years. The claims volume adjustment aims to normalise the triangle for similar exposures across origin periods, while the earnings index, which largely measures wages and other forms of compensation, is used as a first proxy for claims inflation. Note that the earnings index shows significant year-on-year changes, from 5% to 9%. Barnett and Zehnwirth would probably recommend adding further parameters for the calendar year effects to the model.

  # Page D5.36
  ClaimsVolume <- data.frame(origin=0:6,
                             volume.index=c(1.43, 1.45, 1.52, 1.35, 1.29, 1.47, 1.91))
  # Page D5.36
  EarningIndex <- data.frame(cal=0:6,
                             earning.index=c(1.55, 1.41, 1.3, 1.23, 1.13, 1.05, 1))
  # Year on year changes
  # [1] 0.09 0.08 0.05 0.08 0.07 0.05
  dat <- merge(merge(dat, ClaimsVolume), EarningIndex)
  # Normalise data for volume and earnings
  dat$logvalue.ind.inf <- with(dat, log(value/volume.index*earning.index))
  with(dat, interaction.plot(dev, origin, logvalue.ind.inf))
  points(1+dat$dev, dat$logvalue.ind.inf, pch=16, cex=0.8)

Indeed, the interaction plot shows the various origin years now to be much more closely grouped. Only the single point of the last origin period stands out now.

Christofides tests several models with different numbers of origin levels, but I am happy with the minimal model, using only one parameter for the origin period, namely the intercept. Read more at the author's blog: mages' blog.
{"url":"http://www.r-bloggers.com/reserving-based-on-log-incremental-payments-in-r-part-iii/","timestamp":"2014-04-21T04:43:36Z","content_type":null,"content_length":"40421","record_id":"<urn:uuid:26a51dd0-f4d8-4bea-ae96-6e0bfd4aa463>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Find the Normal Approximation to the Binomial with a Large Sample

If you are working from a large statistical sample, then solving problems using the binomial distribution might seem daunting. However, there's actually a very easy way to approximate the binomial distribution, as shown in this article. Here's an example: suppose you flip a fair coin 100 times and you let X equal the number of heads. What's the probability that X is greater than 60?

In a situation like this where n is large, the calculations can get unwieldy and the binomial table runs out of numbers. So if there's no technology available (like when taking an exam), what can you do to find a binomial probability? Turns out, if n is large enough, you can use the normal distribution to find a very close approximate answer with a lot less work.

But what do we mean by n being "large enough"? To determine whether n is large enough to use what statisticians call the normal approximation to the binomial, both of the following conditions must hold: n ∗ p ≥ 10 and n ∗ (1 – p) ≥ 10.

To find the normal approximation to the binomial distribution when n is large, use the following steps:

1. Verify whether n is large enough to use the normal approximation by checking the two appropriate conditions. For the above coin-flipping question, the conditions are met because n ∗ p = 100 ∗ 0.50 = 50, and n ∗ (1 – p) = 100 ∗ (1 – 0.50) = 50, both of which are at least 10. So go ahead with the normal approximation.

2. Translate the problem into a probability statement about X. In this example, you need to find p(X > 60).

3. Standardize the x-value to a z-value, using the z-formula z = (x – μ)/σ. For the mean of the normal distribution, use μ = n ∗ p (the mean of the binomial), and for the standard deviation use σ = √(n ∗ p ∗ (1 – p)) (the standard deviation of the binomial). So, in the coin-flipping example, you have μ = 100 ∗ 0.50 = 50 and σ = √(100 ∗ 0.50 ∗ 0.50) = 5. Then put these values into the z-formula to get z = (60 – 50)/5 = 2.00. To solve the problem, you need to find p(Z > 2).

On an exam, you won't see μ and σ given in the problem when you have a binomial distribution. However, you know the formulas that allow you to calculate both of them using n and p (both of which will be given in the problem). Just remember you have to do that extra step to calculate the σ needed for the z-formula. You can now proceed as you usually would for any normal distribution.

4. Look up the z-score on the Z-table and find its corresponding probability.
   a. Find the row of the table corresponding to the leading digit (one digit) and first digit after the decimal point (the tenths digit).
   b. Find the column corresponding to the second digit after the decimal point (the hundredths digit).
   c. Intersect the row and column from Steps (a) and (b).
   Continuing the example, from the z-value of 2.00, you get a corresponding probability of 0.9772 from the Z-table.

5. Select one of the following.
   a. If you need a "less-than" probability — that is, p(X < a) — you're done.
   b. If you want a "greater-than" probability — that is, p(X > b) — take one minus the result from Step 4. Remember, this example is looking for a greater-than probability ("What's the probability that X — the number of flips — is greater than 60?"). Plugging in the result from Step 4, you find p(Z > 2.00) = 1 – 0.9772 = 0.0228. So the probability of getting more than 60 heads in 100 flips of a coin is only about 2.28 percent. (In other words, don't bet on it.)
   c. If you need a "between-two-values" probability — that is, p(a < X < b) — do Steps 1–4 for b (the larger of the two values) and again for a (the smaller of the two values), and subtract the results.

When using the normal approximation to find a binomial probability, your answer is an approximation (not exact) — be sure to state that. Also show that you checked both necessary conditions for using the normal approximation.
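The worked example is easy to verify when software is available. A minimal Python sketch using scipy (standard scipy.stats calls; no continuity correction, to match the steps above). Note how the approximation differs slightly from the exact answer, which is exactly why the text says to state that it is an approximation:

```python
# Verifying the coin-flip example: P(X > 60) for X ~ Binomial(n=100, p=0.5),
# exactly and via the normal approximation used in the steps above.

from scipy.stats import binom, norm

n, p = 100, 0.5
mu = n * p                        # 50
sigma = (n * p * (1 - p)) ** 0.5  # 5

exact = 1 - binom.cdf(60, n, p)   # exact binomial tail, P(X >= 61)
z = (60 - mu) / sigma             # 2.00
approx = 1 - norm.cdf(z)          # P(Z > 2.00) = 0.0228

print(f"z = {z:.2f}, approx = {approx:.4f}, exact = {exact:.4f}")
```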
{"url":"http://www.dummies.com/how-to/content/how-to-find-the-normal-approximation-to-the-binomi.navId-811045.html","timestamp":"2014-04-17T13:54:02Z","content_type":null,"content_length":"58397","record_id":"<urn:uuid:85df76d6-38e8-4795-b1da-c9093c859805>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
23-05-2012, 12:14 PM Post: #1 by seminar ideas (Super Moderator)

IMAGE SEGMENTATION ALGORITHMS USING MATLAB
IMAGE SEGMENTATION ALGORITHMS USING MATLAB.docx (Size: 1.51 MB / Downloads: 139)

“One picture is worth more than ten thousand words” -ANONYMOUS

In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.

The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).

The simplest method of image segmentation is called the thresholding method. This method is based on a clip-level (or a threshold value) to turn a gray-scale image into a binary image. The key of this method is to select the threshold value (or values when multiple levels are selected). Several popular methods are used in industry, including the maximum entropy method and Otsu's method (maximum variance). k-means clustering can also be used.

Clustering methods

The K-means algorithm is an iterative technique that is used to partition an image into K clusters. The basic algorithm is:
1. Pick K cluster centres, either randomly or based on some heuristic
2. Assign each pixel in the image to the cluster that minimizes the distance between the pixel and the cluster centre
3. Re-compute the cluster centres by averaging all of the pixels in the cluster
4. Repeat steps 2 and 3 until convergence is attained (e.g. no pixels change clusters)

(A minimal code sketch of these steps appears at the end of this post.)

In this case, distance is the squared or absolute difference between a pixel and a cluster centre. The difference is typically based on pixel colour, intensity, texture, and location, or a weighted combination of these factors. K can be selected manually, randomly, or by a heuristic. This algorithm is guaranteed to converge, but it may not return the optimal solution. The quality of the solution depends on the initial set of clusters and the value of K.

In statistics and machine learning, the k-means algorithm is a clustering algorithm to partition n objects into k clusters, where k < n. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that they both attempt to find the centres of natural clusters in the data. The model requires that the object attributes correspond to elements of a vector space. The objective it tries to achieve is to minimize the total intra-cluster variance, or the squared error function. k-means clustering was invented in 1956.
The most common form of the algorithm uses an iterative refinement heuristic known as Lloyd's algorithm. Lloyd's algorithm starts by partitioning the input points into k initial sets, either at random or using some heuristic data. It then calculates the mean point, or centroid, of each set. It constructs a new partition by associating each point with the closest centroid. Then the centroids are recalculated for the new clusters, and the algorithm is repeated by alternate application of these two steps until convergence, which is obtained when the points no longer switch clusters (or alternatively the centroids are no longer changed).

Lloyd's algorithm and k-means are often used synonymously, but in reality Lloyd's algorithm is a heuristic for solving the k-means problem; with certain combinations of starting points and centroids, Lloyd's algorithm can in fact converge to the wrong answer. Other variations exist, but Lloyd's algorithm has remained popular because it converges extremely quickly in practice. In terms of performance the algorithm is not guaranteed to return a global optimum. The quality of the final solution depends largely on the initial set of clusters, and may, in practice, be much poorer than the global optimum. Since the algorithm is extremely fast, a common method is to run the algorithm several times and return the best clustering found. A drawback of the k-means algorithm is that the number of clusters k is an input parameter. An inappropriate choice of k may yield poor results. The algorithm also assumes that the variance is an appropriate measure of cluster scatter.

Two types of algorithms are used in our project. They are described below.

Segmentation is pivotal work in character recognition, especially in case hand-written characters are connected. During the past 50 years, many methods have been set forth for segmenting connected characters. The drop fall algorithm is a classical segmentation algorithm often used in character segmentation because of its simplicity and effectiveness in application. First advanced by G. Congedo in 1995, the Drop Fall algorithm mimics the motion of a falling raindrop that falls from above the characters, rolls along the contour of the characters and cuts through the contour when it cannot fall further. The raindrop follows a set of movement rules to determine the segmentation trace. Concretely, the Drop Fall algorithm selects one pixel out of the neighbours of the current pixel as the new pixel of the segmentation trace.

Although the Extended Drop Fall algorithm has been advanced to improve the performance of the drop fall algorithm, when the raindrop falls into the concave pixel between the small convexities on the contour of characters, these algorithms will treat it as connected strokes and therefore start splitting it. Obviously this could split a single character and result in invalid segmentation. In this case, we introduce the Inertial Drop Fall algorithm, which follows the previous direction in the segmentation. Furthermore, the Big Inertial Drop Fall algorithm is advanced to increase the size of the raindrop. When there is no big enough free space for the big raindrop to fall down, it will search for another direction and thus can avoid falling into the concave region.

Traditional Drop Fall algorithm: As mentioned above, the basic idea of Traditional Drop Fall (TDF) algorithms is to simulate a “drop-falling” process. The cut tracing is defined with both the information of neighbour pixels and perhaps of more pixels.
The algorithm considers only five adjacent pixels: the three pixels below the current pixel and the pixels to the left and right. Upward moves are not considered, because the rules are meant to mimic a falling motion.
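The k-means code sketch promised above. Since the thread's MATLAB attachment isn't reproduced here, the following is an illustrative NumPy version of Lloyd's algorithm on grayscale pixel intensities (the toy image and K are placeholders, not from the seminar material):

```python
# Lloyd's algorithm on grayscale pixel intensities, following steps 1-4
# quoted in the post above. Illustrative Python/NumPy only.

import numpy as np

def kmeans_pixels(img, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    x = img.reshape(-1, 1).astype(float)               # one intensity per pixel
    centers = rng.choice(x.ravel(), size=k, replace=False).reshape(-1, 1)
    for _ in range(iters):
        dist = np.abs(x - centers.T)                   # pixel-to-centre distances
        labels = dist.argmin(axis=1)                   # step 2: nearest centre
        new = np.array([x[labels == j].mean() if np.any(labels == j)
                        else centers[j, 0] for j in range(k)]).reshape(-1, 1)
        if np.allclose(new, centers):                  # step 4: converged
            break
        centers = new                                  # step 3: recompute centres
    return labels.reshape(img.shape), centers.ravel()

toy = np.array([[10, 12, 200],
                [205, 11, 198],
                [90, 95, 100]])
segmented, centres = kmeans_pixels(toy, k=3)
print(segmented)
print(centres)
```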
{"url":"http://seminarprojects.com/Thread-image-segmentation-algorithms-using-matlab","timestamp":"2014-04-18T18:14:21Z","content_type":null,"content_length":"49070","record_id":"<urn:uuid:4f967177-4665-4016-bb6d-aedb60219634>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Thrown Ball problem

October 30th 2007, 12:43 PM #1
Oct 2007

If a ball is thrown vertically upward at 30 m/s, then its approximate height in metres t seconds later is given by h(t) = 30t - 5t^2.
1) What is the domain of h?
2) For what time interval is the ball more than 45 metres above the ground?
3) What is the maximum height of the ball?
4) After how many seconds does the ball reach the maximum height?
please please help.

October 30th 2007, 12:48 PM #2
MHF Contributor, Aug 2007

Notice the word "help". You must show your work in order for help to be effective. You don't learn anything if someone else does your homework. I'll get you started.
1) Do you have a definition of "Domain"? How long does the ball stay in the air?
2) h(t) = 45 should have two solutions. What are they? You should be good at solving quadratic equations.
3 & 4) You should know a thing or two about parabolas. Find the vertex and maybe the axis of symmetry.

October 30th 2007, 01:01 PM #3
Oct 2007

ok so far what I have on that is:
1) the ball stays in the air for 6 seconds, right?
2) the ball is at 45 at 3 seconds, and before and after that it's below it. and the maximum height is 45

October 31st 2007, 12:24 AM #4

Quote: If a ball is thrown vertically upward at 30 m/s, then its approximate height in metres t seconds later is given by h(t) = 30t - 5t^2.
1) What is the domain of h?
2) For what time interval is the ball more than 45 metres above the ground?
3) What is the maximum height of the ball?
4) After how many seconds does the ball reach the maximum height?
please please help.

1) You are looking for the domain, which is all the possible values for t, not just the amount of time the ball is in the air.
2) This is best answered by considering for what times h = 45 m. So solve $30t - 5t^2 = 45$. Since your height function is a parabola that opens downward, we know that the ball is higher than 45 m for the time interval between these two times.
3 and 4) Your height function is a parabola opening downward. So how do you find the vertex form for $h(t)=30t-5t^2$? The vertex will be the point where you have the maximum height.
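A small Python check of the answers worked out in the thread (h(t) = 30t - 5t^2, all in metres and seconds):

```python
# Checking the thread's answers for h(t) = 30t - 5t^2.

a, b = -5.0, 30.0                   # h(t) = a*t^2 + b*t

t_land = -b / a                     # h(t) = 0 at t = 0 and t = 6: domain [0, 6]
t_max = -b / (2 * a)                # vertex at t = 3 s
h_max = a * t_max**2 + b * t_max    # maximum height 45 m

# h(t) = 45  <=>  -5t^2 + 30t - 45 = 0; its discriminant is zero, so the
# ball only touches 45 m at t = 3 s and is never strictly above it.
disc = b**2 - 4 * a * (-45.0)       # 900 - 900 = 0

print(t_land, t_max, h_max, disc)   # 6.0 3.0 45.0 0.0
```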
{"url":"http://mathhelpforum.com/pre-calculus/21648-thrown-ball-problem.html","timestamp":"2014-04-16T16:12:21Z","content_type":null,"content_length":"41417","record_id":"<urn:uuid:a1721ffd-8488-44d0-b4f3-1e0c6453a710>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: DSP based High Speed Data

• Subject: Re: [amsat-bb] DSP based High Speed Data
• From: "John Stephensen, KD6OZH" <kd6ozh@xxxxxxx>
• Date: Fri, 15 Sep 2000 19:45:53 -0000

My definition of high speed is too low. I was thinking of minimizing power and antenna requirements and transmitting a 3 KB JPEG-encoded image in 10 seconds at 300 baud or 2400 bits/sec. This would require 39 kHz out of the 400 kHz available on P3D. 3 KB is the size of a JPEG file with an SSTV-quality (120x160 pixel) color picture.

For very high data rates, a multi-tone system takes a lot of spectrum. FSK requires about 1.7 times the bit rate in bandwidth, so 200 kbps would occupy 350 kHz, which is larger than the P3D analog or digital passbands. A multitone system spreads the signal over a wider bandwidth to lower the SNR required for a given error rate. With 128 tones, 8 bits are transmitted in each symbol after error correction, so the baud rate would need to be 25,000 and the tone spacing would have to be 25 kHz at a minimum. The system requires 3.2 MHz or 9 times the FSK bandwidth. It would also require 10 dB less power at the receiver. However, a very fast DSP chip requires about 100 us to do the FFT calculation, so it can't keep up with the baud rate. As the number of tones increases, it gets worse. At 16 bits per symbol, the baud rate halves and the number of tones goes up by 256, so the bandwidth occupied increases by a factor of 128 to 409.6 MHz. The FFT calculation also gets much slower.

If we keep the baud rate below 2000 symbols/second, a cheap DSP chip can do the FFT calculations. The data rate is limited to 16,000 bits/second and the bandwidth required is only 128 kHz. Higher speeds should be done with QPSK modulation. This requires 1 Hz of bandwidth per bit/sec before adding error correction overhead, requires minimal processing on a DSP, and is more efficient than FSK. FSK is actually the worst case for a DSP chip -- it has to do about 20 arctangents per baud. This requires 1260 instructions on an Analog Devices DSP per bit.

>Date: Tue, 12 Sep 2000 18:00:05 -0400
>From: bronson@eece.maine.edu
>Subject: [amsat-bb] DSP based High Speed Data
>
>Being curious about this High Speed Data stuff, I thought I'd try to put
>some numbers to the idea presented by John Stephensen, KD6OZH.
>I'm new to DSP, but here goes anyway;
>Given the following:
>No info in the amplitude.
>20KHz audio upper limit.
>20Msps 10 bit A/D converter.
>DSP chip that can do the calculations fast enough.
> a) ignore the DSP processing overhead
>A minimum of 2 data points on the highest Tone (20 kHz).
> a) and the system can resolve the difference between tones reliably.
>How do I get some serious bit rates (200Kbps or better) out of this?
>An 8bit (128 tone) system has a speed limit of <160Kbps. If the signal
>BW is <=1KHz (from 20KHz down) you will have to resolve tones that
>differ by 7.8Hz. You will have to wait at least 52.6us (19KHz, lowest
>frequency) for the entire waveform to arrive before you can decompose
>the signal. This gives you a bit rate of about 152Kbps. Decreasing the
>signal BW doesn't help all that much since you can't break 160Kbps with
>an 8bit tone structure anyway.
>So what about a 16bit word?
>Now there are 32768 tones. For the same signal BW (1KHz) you will have
>to resolve tones to 31mHz. Given the above constraints your A/D
>converter can give you 2mHz, double it (Nyquist times 2) and you are
>still only at 4mHz.
>The same latency period of 52.6us holds since 19KHz
>is still the lowest frequency. Now the data rate is 304Kbps.
>If we relax the signal BW to 5KHz the difference between tones jumps to
>153mHz, the latency period goes up to 66.67us (15KHz) and the bit rate
>drops to 240Kbps.
>Have I overlooked anything?
>Absolutely... Noise (in the RF channel and Audio channel), Throughput
>of the DSP system, Additional tuning tones, Separation of each
>composite (all 32768 tones) audio signal in the data stream, My
>ignorance and lack of experience in the DSP field.
>How bout someone else taking a crack at it?
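For anyone wanting to replay the back-of-envelope numbers in this email, here is a small Python sketch. It only reproduces the arithmetic quoted above (1.7 Hz per bit/s for FSK, 128 tones at 25 kHz spacing for the multitone case); it is not a modem design:

```python
# Replaying the bandwidth arithmetic from the email above.

target_bps = 200_000

fsk_bw = 1.7 * target_bps                # ~340 kHz, wider than P3D's passbands

bits_per_symbol = 8                      # per the email, after error correction
baud = target_bps / bits_per_symbol      # 25,000 symbols/s
tones, spacing_hz = 128, 25_000
multitone_bw = tones * spacing_hz        # 3.2 MHz

print(f"FSK bandwidth:       {fsk_bw / 1e3:.0f} kHz")
print(f"Multitone bandwidth: {multitone_bw / 1e6:.1f} MHz "
      f"({multitone_bw / fsk_bw:.0f}x the FSK figure)")
```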
{"url":"http://www.amsat.org/amsat/archive/amsat-bb/200009/msg00433.html","timestamp":"2014-04-21T12:31:05Z","content_type":null,"content_length":"7103","record_id":"<urn:uuid:5f195f8c-6fd2-4ca1-8cd2-3d5dd29bd4db>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Decomposition of semisimple Lie group into almost simple factors

Can anyone suggest a reference that defines or explains that a semisimple real Lie group can be decomposed into a product of almost simple factors? In some papers that I read recently people keep talking about "semisimple Lie groups without compact factors" without explanation, and it appears to be some standard notion. But the only reference I can find is Margulis' book "Discrete subgroups of semisimple Lie groups", where this decomposition is claimed only for algebraic groups instead of for Lie groups.

gr.group-theory lie-groups reference-request rt.representation-theory

A semisimple Lie group is a covering of a semisimple algebraic group .... Alternatively, use that its Lie algebra is a product of simple Lie algebras. – anon Mar 7 '12 at 5:45

1 Answer

Concerning references, there exist many books which treat the structure and classification of semisimple Lie groups, usually with a wider agenda involving for example symmetric spaces, harmonic analysis, infinite dimensional representations. Older and newer authors include Chevalley, Helgason, Knapp, Wallach, Onishchik-Vinberg, Bump, etc. (Other books concentrate more heavily on compact groups.) Though the coverage in such books varies a lot, the basic outline is usually similar: start with the notion of (real) Lie group and the associated Lie algebra (originally called the "infinitesimal group"), study the solvable radical, pass to semisimple groups and their Lie algebras, then complexify the situation in order to use more algebraic methods. Ultimately the structure of a complex semisimple group lifts from the structure of a complex semisimple Lie algebra. Here relatively elementary methods, based on nondegeneracy of the Killing form, decompose the Lie algebra into a direct sum of simple ideals (which can be readily classified). Then the nice correspondence between the groups and their Lie algebras allows most of this structure to be found in the group as well, though the "simple" groups may in fact just be "almost simple". Decomposing the group directly into simple factors is not an attractive project, though it might be done indirectly using Chevalley's approach via linear algebraic groups over arbitrary algebraically closed fields. Only after such results are in hand for the complex Lie algebras and groups can one adapt the structure and classification to the real case. I don't think it's practical to get a direct factorization of a semisimple (real) Lie group into its simple factors, but on the other hand the existence of unique compact real forms makes the less direct comparison of real and complex cases doable. None of the standard Lie groups books can be viewed as easy reading, since by its nature Lie group theory merges ideas from analysis, topology, algebra in a sophisticated way. In any case, algebraic group ideas have their limits in the study of real Lie groups, since covering groups arise which are not algebraic.
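For reference, here is the Lie-algebra-level statement the answer leans on, written out (a standard fact, added for convenience): a real or complex semisimple Lie algebra decomposes uniquely as a direct sum of simple ideals, $\mathfrak{g} = \mathfrak{g}_1 \oplus \cdots \oplus \mathfrak{g}_k$. On the group side, a connected semisimple Lie group $G$ is in general only a quotient $G \cong (G_1 \times \cdots \times G_k)/Z$ of a product of simply connected almost simple groups $G_i$ by a discrete central subgroup $Z$, which is why one can only speak of "almost simple" factors.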
{"url":"http://mathoverflow.net/questions/90416/decomposition-of-semisimple-lie-group-into-almost-simple-factors","timestamp":"2014-04-18T23:21:36Z","content_type":null,"content_length":"54758","record_id":"<urn:uuid:9f13ea94-8f5e-42a7-ab71-18ffbfbf857b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic Systems Representation of DNA Sequence-Structure Relationships
Mary E. Karpen, Howard Jay Chizeck, Stephen D. Hawley
Keywords: sequence-dependent structure, DNA, algebraic systems theory

The Calladine-Dickerson rules predict variations in the three-dimensional structure of a DNA helix from its base-pair sequence. A new approach to modeling these DNA sequence/structure relationships is to represent them as an algebraic system. This requires the extension of algebraic systems theory to a new class of sequential systems, called group homatons. The base-pair sequence serves as the input to a group homaton, which sequentially processes the base-pairs according to an abstracted version of the Calladine-Dickerson rules. The output is a set of four structure parameters, at each base-pair location along the helix. This representation also provides a means of inverting the Calladine-Dickerson rules, determining the base-pair sequence from a specified sequence of structure variations. Both the inverse operation and the forward system are easily implementable on a microcomputer. The inverse system has potential use as a tool in designing sequences of DNA with desired structures. This technique is sufficiently general to allow future expansion to more complex models.
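To make the "sequential processing" concrete, here is a toy sketch; the actual Calladine-Dickerson rule set and the group-homaton algebra live in the paper, and the rule table below is invented purely for illustration:

# Toy illustration only: a sequential system mapping a base-pair string to a
# structure parameter vector at each step.  The (twist, roll) values are made
# up, not the Calladine-Dickerson values.

HYPOTHETICAL_STEP_RULES = {
    ("A", "A"): (35.6, 0.0),
    ("A", "T"): (32.1, 1.2),
    ("G", "C"): (33.7, -0.5),
}

def structure_params(sequence, default=(34.3, 0.0)):
    """Emit one (twist, roll) pair per base-pair step, left to right."""
    return [HYPOTHETICAL_STEP_RULES.get(step, default)
            for step in zip(sequence, sequence[1:])]

print(structure_params("AATGC"))
# [(35.6, 0.0), (32.1, 1.2), (34.3, 0.0), (33.7, -0.5)]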
{"url":"https://www.ee.washington.edu/techsite/papers/refer/UWEETR-2003-0022.html","timestamp":"2014-04-16T10:25:26Z","content_type":null,"content_length":"3518","record_id":"<urn:uuid:71da5452-a469-4684-bc28-de77a3149297>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
M. Mönnigmann, W. Marquardt, C. H. Bischof, T. Beelitz, B. Lang, and P. Willems. A hybrid approach for efficient robust design of dynamic systems. SIAM Review, 49(2):236-254, 2007.

We propose a novel approach for the parametrically robust design of dynamic systems. The approach can be applied to system models with parameters that are uncertain in the sense that values for these parameters are not known precisely, but only within certain bounds. The novel approach is guaranteed to find an optimal steady state that is stable for each parameter combination within these bounds. Our approach combines the use of a standard solver for constrained optimization problems with the rigorous solution of nonlinear systems. The constraints for the optimization problems are based on the concept of parameter space normal vectors that measure the distance of a tentative optimum to the nearest known critical point, i.e., a point where stability may be lost. Such normal vectors are derived using methods from Nonlinear Dynamics. After the optimization, the rigorous solver is used to provide a guarantee that no critical points exist in the vicinity of the optimum, or to detect such points. In the latter case, the optimization is resumed, taking the newly found critical points into account. This optimize-and-verify procedure is repeated until the rigorous nonlinear solver can guarantee that the vicinity of the optimum is free from critical points and therefore the optimum is parametrically robust. In contrast to existing design methodologies, our approach can be automated and does not rely on the experience of the designing engineer. A simple model of a fermenter is used to illustrate the concepts and the order of activities arising in a typical design process.
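The optimize-and-verify loop described in the abstract has a simple control-flow skeleton; the sketch below is only that skeleton (the real work, i.e. the normal-vector constraints and the rigorous interval-based solver, is abstracted into the two callables):

# Control-flow sketch of the optimize-and-verify procedure.  `optimize` stands
# in for the constrained NLP solver (with normal-vector distance constraints
# built from the critical points found so far); `verify_region` stands in for
# the rigorous nonlinear solver, which either certifies the vicinity of the
# candidate optimum free of critical points or returns the ones it found.

def robust_design(optimize, verify_region):
    known_critical = []
    while True:
        candidate = optimize(known_critical)
        new_points = verify_region(candidate)
        if not new_points:
            return candidate                 # certified parametrically robust
        known_critical.extend(new_points)    # resume with added constraints

# Degenerate demo: the first verification finds one critical point, the rerun
# with that constraint passes.
result = robust_design(
    optimize=lambda constraints: ("design", len(constraints)),
    verify_region=lambda cand: [] if cand[1] > 0 else ["fold bifurcation"],
)
print(result)   # ('design', 1)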
{"url":"http://www.sc.rwth-aachen.de/Publications/Pubs/04-10.html","timestamp":"2014-04-18T18:45:44Z","content_type":null,"content_length":"6973","record_id":"<urn:uuid:49b0e0fc-da9b-4dbf-991c-3e35a6966e10>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph Embedding with Topological Cycle-Constraints
Dornheim, Christoph (1999) Graph Embedding with Topological Cycle-Constraints. In: Graph Drawing 7th International Symposium, GD’99, September 15-19, 1999, Štirín Castle, Czech Republic, pp. 155-164 (Official URL: http://dx.doi.org/10.1007/3-540-46648-7_16).
Full text not available from this repository.
This paper concerns graph embedding under topological constraints. We address the problem of finding a planar embedding of a graph satisfying a set of constraints between its vertices and cycles that require embedding a given vertex inside its corresponding cycle. This problem turns out to be NP-complete. However, towards an analysis of its tractable subproblems, we develop an efficient algorithm for the special case where graphs are 2-connected and any two distinct cycles in the constraints have at most one vertex in common.
{"url":"http://gdea.informatik.uni-koeln.de/324/","timestamp":"2014-04-16T19:45:47Z","content_type":null,"content_length":"20692","record_id":"<urn:uuid:48eaa904-3c38-4e16-94ff-db03558701f7>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Pages with category Boot
Find the page/content you are looking for with our index.

• embedded
Embedded software is a reference to software written to work in a piece of hardware equipment. At first, this was code written directly for specialized hardware such as an FPGA or a DSP. Today, regular computers are used for applications such as a medical device or a kiosk, and the result is also called embedded software, even though these are just desktop applications...

• float
float is a type in most software languages referencing an IEEE floating-point number. These numbers are generally defined on 32 or 64 bits with three parts: a sign, an exponent and a mantissa. There is also a bias, which is not saved in the number. The sign is 0 (positive) or 1 (negative). This means you have a representation of both +0.0 and -0.0. The exponent takes roughly a quarter of the total size in bits for single precision (8 of 32 bits) and about a sixth for double precision (11 of 64 bits). The bias is subtracted from the stored exponent to recover the actual signed power-of-2 exponent (i.e., it shifts the mantissa). The mantissa holds the significant digits of the number. (A decoding sketch follows this list.)

• nop
nop is the usual abbreviation for the No Operation command often used in assembly language (processor code.) Sometimes, it is written as noop instead.
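A quick way to see the three fields of the float entry above is to pull them apart with Python's standard library (standard 32-bit layout: 1 sign bit, 8 exponent bits with bias 127, 23 mantissa bits):

# Decode the sign / exponent / mantissa fields of an IEEE-754 single.

import struct

def float32_fields(x):
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = ((bits >> 23) & 0xFF) - 127   # remove the bias
    mantissa = bits & 0x7FFFFF               # implicit leading 1 for normals
    return sign, exponent, mantissa

print(float32_fields(-0.0))   # (1, -127, 0)      -- distinct from +0.0
print(float32_fields(6.5))    # (0, 2, 5242880)   -- mantissa 0x500000, i.e. 1.625 * 2**2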
{"url":"http://linux.m2osw.com/category_list/Boot","timestamp":"2014-04-19T11:56:55Z","content_type":null,"content_length":"42880","record_id":"<urn:uuid:a24405d1-653c-4f13-905e-a13e24e7762f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Effect of Natural Convection on Dendritic Growth

Geoffrey B. McFadden, ACMD
Sam R. Coriell, Metallurgy Division
Robert F. Sekerka, Carnegie Mellon University

An outstanding problem in solidification theory is to predict the length scales and time scales that occur during crystal growth. These scales are crucial in determining the physical properties of the solidified material; for example, when multicomponent alloys are produced by directional solidification, instabilities of the solid-liquid interface can lead to inhomogeneous solute patterns, or microsegregation, in the solid phase which are generally undesirable. Predicting the conditions under which microsegregation occurs, and the associated length scales, is therefore a problem that receives much attention both experimentally and theoretically. Recently M. E. Glicksman, of the Rensselaer Polytechnic Institute, and colleagues have performed fundamental studies of growth of a single-component material from a supercooled melt. When the liquid phase is maintained at temperatures below the equilibrium melting point of the material, the solid that forms spontaneously has a dendritic, or branch-like structure, with primary stems growing at constant velocity into the melt. The tip velocities, the radius of curvature of the tip, and other features of the dendrites are fundamental properties that Glicksman et al. are able to measure experimentally, and are used to critically assess various theoretical predictions for these quantities. In the figure below, a multiple-exposure photograph taken at equal time intervals illustrates the parabolic tip and constant tip velocity [S.-C. Huang and M. E. Glicksman, Acta Met. 29 (1981) 717-734]. An experimental complication is the occurrence of natural convection in the liquid phase, which is driven by buoyancy forces produced by the density variations associated with the temperature gradients in the system. The effect of the convection is to alter the transport of heat away from the solidifying dendrite, so that the resulting tip velocity and radius of curvature of the tip are modified. In order to reduce the effects of this buoyancy-driven convection, the experiments have been performed in a reduced gravity environment on board the NASA space shuttle. Examination of typical data for both terrestrial growth conditions and the microgravity conditions of space shows that in both cases there are significant effects that may be attributed to natural convection; an example is shown in the figure below. Here the tip velocity, V, is plotted as a function of the amount of thermal undercooling, delta T, of the liquid below the bulk melting point of the material. Data is shown for both terrestrial conditions (circles) and for microgravity conditions (squares). If convection effects were absent, the experimental data would be expected to fall on the bottom-most curve shown in the figure, which represents a theoretical prediction based on a model that includes no convective effects. Both sets of data show good agreement with the model at large velocities and large undercoolings, but show systematic deviation from the predicted behavior at smaller velocities. The agreement is better for the microgravity data (squares), which can be attributed to the decreased importance of buoyancy because of the reduced gravity. Both sets of data eventually deviate from the bottom curve at low enough undercoolings, implying that even under microgravity conditions natural convection can play a significant role.
To help understand the observed behavior, we have developed a simple model that takes into account effects of buoyancy-driven convection. The model assumes that because of natural convection the fluid is well-mixed and isothermal outside of a boundary layer or stagnant film near the surface of the dendrite, and assumes that the heat transport within the stagnant film takes place by diffusion alone. A closed-form solution to the thermal problem can be found that depends on the assumed thickness of the stagnant film. The macroscopic flow outside the stagnant film is given by a large Rayleigh number approximate solution in which the region consisting of the network of growing dendrites is approximated by an isothermal sphere of radius R. The stagnant film thickness is then determined self-consistently by a balance of convective and diffusive heat transfer at the edge of the stagnant film. The predictions of the resulting theory are shown as the top curves in the above figure. The theory depends on a single adjustable parameter, representing the ratio of the gravitational acceleration, g, and the radius, R. The top three curves in the figure correspond to the terrestrial value g_e for g and radii of 0.5, 1.0, and 2.0 cm, which are typical of the geometries in the experiments. The bottom three curves correspond to R = 1 cm and g = 0.0001 g_e, g = 0.00001 g_e, and g = 0. The theory does a good job of predicting the values of undercooling for which the effects of convection become important. This work has been described in a paper entitled ``Stagnant Film Model of the Effect of Natural Convection on the Dendrite Operating State,'' by R. F. Sekerka, S. R. Coriell and G. B. McFadden, that has been submitted for publication in the Journal of Crystal Growth.
{"url":"http://math.nist.gov/~GMcFadden/dendrite.html","timestamp":"2014-04-18T08:04:12Z","content_type":null,"content_length":"5877","record_id":"<urn:uuid:8d4c3b53-ea88-414d-bd71-65a3e3953e05>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Programming/Hebrew calendar
From Wikibooks, open books for an open world

Programming a Hebrew calendar application: The intended audience for this summary of the mechanics of the Hebrew calendar is computer programmers who wish to design software that accurately computes dates in the Hebrew calendar. The following details may prove useful for validating such software. Note, however, that published Hebrew calendar algorithms are much simpler than the details listed below, and there is no need to employ tables in computer implementation of Hebrew calendar arithmetic. As usual, tables are useful shortcuts for humans carrying out the calculations manually.

1. The Hebrew calendar is computed by lunations. One mean lunation is reckoned at 29 days, 12 hours, 44 minutes, 3⅓ seconds, or equivalently 765433 parts = 29 days, 13753 parts, where 1 minute = 18 parts and hence 1 hour = 1080 parts (halakim plural, helek singular).
2. A common year must be either 353, 354, or 355 days; a leap year must be 383, 384, or 385 days. A 353 or 383 day year is called haserah. A 354 or 384 day year is kesidrah. A 355 or 385 day year is shlemah.
3. Leap years follow a 19 year schedule in which years 3, 6, 8, 11, 14, 17, and 19 are leap years. The Hebrew year 5758 (which starts in Gregorian year 1997) is the first year of a cycle.
4. 19 years is the same as 235 lunations.
5. The months are Tishrei, Cheshvan, Kislev, Tevet, Shevat, Adar, Nisan, Iyar, Sivan, Tammuz, Av, and Elul. In a leap year, Adar is replaced by Adar II (also called Adar Sheni or Veadar) and an extra month, Adar I (also called Adar Rishon), is inserted before Adar II.
6. Each month has either 29 or 30 days. A 30 day month is full (מלא, pronounced maleh, maley, or malei), whereas a 29 day month is defective (חסר, pronounced ħaser or khaser).
□ Nisan, Sivan, Av, Tishrei, and Shevat are always full.
□ Iyar, Tammuz, Elul, Tevet, and Adar (Adar II in leap years) are always defective.
□ Adar I, added in leap years before Adar II, is full.
□ Cheshvan and Kislev vary. There are three possible combinations: both defective, both full, Cheshvan defective and Kislev full.
7. Tishrei 1 (Rosh Hashana) is the day during which a molad (instant of the mean lunar conjunction) occurs unless that conflicts with certain postponements (dehiyyot plural; dehiyyah singular). Note that for calendar computations, the Jewish date begins at 6 pm or six fixed hours before midnight when the date changes in the Gregorian calendar, not at nightfall or sunset when the observed Hebrew date begins.
□ Postponement A is required whenever Tishrei 10 (Yom Kippur) would fall on a Friday or a Sunday, or if Tishrei 21 (7th day of Sukkot) would fall on a Saturday. This is equivalent to the molad being on Sunday, Wednesday, or Friday. Whenever this happens, Tishrei 1 is delayed by one day.
□ Postponement B is required whenever the molad occurs at or after noon. When this postponement exists, Tishrei 1 is delayed by one day. If this conflicts with postponement A then Tishrei 1 is delayed an additional day.
□ Postponement C: If the year is to be a common year and the molad falls on a Tuesday at or after 3:11:20 am (3 hours 204 parts), Tishrei 1 is delayed by two days—if it weren't delayed, the resulting year would be 356 days long.
□ Postponement D: If the new year follows a leap year and the molad is on a Monday at or after 9:32:43⅓ am (9 hours 589 parts), Tishrei 1 is delayed one day—if it weren't, the preceding year would have only 382 days.
8. 
Postponements are implemented by adding a day to Kislev of the preceding year, making it full. If Kislev is already full, the day is added to Cheshvan of the preceding year, making it full also. If a delay of two days is called for, both Cheshvan and Kislev of the preceding year become full.
9. A reference epoch in modern times is molad Tishrei for Hebrew year 5758, which is at 22:07:10 on Wednesday, 1 October 1997 (Gregorian), or equivalently midnight-referenced Julian day number 2450723 plus 23889 parts. This epoch also marks the beginning of a cycle. Note: Although the Julian day number begins at noon, it can be reckoned twelve hours earlier for programming purposes, which is what is meant here by the phrase, "midnight-referenced."

Calculation by use of partial weeks

There are a number of approaches that can be taken in calculating Hebrew dates. One that is widely documented uses partial weeks and a table of limits. This method relies on all postponements being defined in terms of a seven-day week. That means that whole weeks between the epoch and the molad of the current year can be eliminated, leaving only a partial week with a few days, hours and parts.

A nineteen-year cycle has 235 months of 29d 12h 793p each or 6939d 16h 595p. Eliminating 991 weeks leaves a partial week of 2d 16h 595p or 69715p. A common year has 12 months of 29d 12h 793p each or 354d 8h 876p. Eliminating 50 weeks leaves a partial week of 4d 8h 876p or 113196p. A leap year has 13 months of 29d 12h 793p or 383d 21h 589p. Eliminating 54 weeks leaves a partial week of 5d 21h 589p or 152869p.

Postponement B requiring a delay until the next day (beginning at 6 pm) if a molad occurs at or after noon effectively means that the week begins at noon Saturday for computational purposes. Calculate the partial week between the molad of the desired Hebrew year and the preceding noon Saturday, considering the partial week before molad Tishrei of AM 1 (or the first year of a more recent nineteen-year cycle) and the partial weeks from the intervening cycles and years within the current cycle, eliminating whole weeks via mod 181440, the number of parts in one week.

Thus molad Tishrei AM 1, which is 1d 5h 204p after 6 pm Saturday, is increased by 6 hours to 1d 11h 204p or 38004p. This is 5h 204p after the beginning (6 pm) of the second day of the week. In Western terms, this is 23:11:20 on Sunday (because it is before midnight), 6 October 3761 BCE in the proleptic Julian calendar. This date is midnight-referenced Julian day number 347997. Consulting the Table of Limits below, 1 Tishrei is the second day of the week, equivalent to the tabular Western day of Monday (same daylight period as the Hebrew day), which is 7 October 3761 BCE. This means no postponement was needed (both the molad Tishrei and 1 Tishrei were on the second day of the week).

Alternatively, the molad of a more recent Hebrew year may be selected as the epoch if it is the first year of a nineteen-year cycle, such as 5758 (used in rule 9), which is 303 nineteen-year cycles after molad Tishrei AM 1. Thus molad Tishrei 5758 is (38004 + 303×69715) mod 181440 = 114609 parts after noon Saturday, or 4d 10h 129p, which is 4h 129p after the beginning (6 pm) of the fifth day of the week. In Western terms, this is before midnight, which yields the date and time indicated in rule 9. Consulting the Table of Limits, 1 Tishrei is the fifth day of the week, or tabular Thursday 2 October 1997 (Gregorian); again no postponement was needed.
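The partial-week bookkeeping above fits in a few lines; this sketch uses the AM 1 epoch of 38004 parts and the per-cycle and per-year increments just derived:

# Molad Tishrei as parts after noon Saturday (one week = 181440 parts).

WEEK = 181440            # 7 days * 24 hours * 1080 parts
CYCLE = 69715            # 235 lunations mod WEEK (one 19-year cycle)
COMMON = 113196          # 12 lunations mod WEEK
LEAP = 152869            # 13 lunations mod WEEK
LEAP_YEARS = {3, 6, 8, 11, 14, 17, 19}

def molad_tishrei(year):
    cycles, year_in_cycle = divmod(year - 1, 19)
    parts = 38004 + cycles * CYCLE           # epoch: molad Tishrei AM 1
    for y in range(1, year_in_cycle + 1):    # years elapsed in current cycle
        parts += LEAP if y in LEAP_YEARS else COMMON
    return parts % WEEK

print(molad_tishrei(1))      # 38004  = 1d 11h 204p
print(molad_tishrei(5758))   # 114609 = 4d 10h 129p, as computed above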
By applying the postponements to the moladot Tishrei at the beginning and end of any Hebrew year, a table of four gates (Hebrew: arba'ah sha'arim), which is also a table of limits, can be developed which uniquely identifies which of the fourteen types the year is (the day of the week of 1 Tishrei, the number of days in Cheshvan and Kislev, and whether common or leap (embolismic)) [1][2][3][4]. "Four gates" refers to the four allowable days of the week with which the year can begin. The first table of four gates was developed by Saadiah Gaon (892–942) [1][2]. In the following table, the years of a nineteen-year cycle are listed in the top row, organized into four groups: a common year after a leap year but before a common year (LCC, 1 4 9 12 15), a common year between two leap years (LCL, 7 18), a common year after a common year but before a leap year (CCL, 2 5 10 13 16), or a leap year between two common years (CLC, 3 6 8 11 14 17 19). The week since noon Saturday on the left is partitioned by a set of limits between which the molad Tishrei of the Hebrew year can be found. The resulting type of year in the body of the table indicates the day of the Hebrew week of 1 Tishrei (2, 3, 5, or 7), the four gates, and whether the year is deficient (−1), regular (0), or abundant (+1).

Table of four gates

  molad (parts after noon Sat) |  LCC  |  LCL  |  CCL  |  CLC
       0 ≤ molad <  16404      | 2, −1 | 2, −1 | 2, −1 | 2, −1
   16404 ≤ molad <  28571      | 2, +1 | 2, +1 | 2, +1 | 2, −1
   28571 ≤ molad <  49189      | 2, +1 | 2, +1 | 2, +1 | 2, +1
   49189 ≤ molad <  51840      | 3, 0  | 3, 0  | 2, +1 | 2, +1
   51840 ≤ molad <  68244      | 3, 0  | 3, 0  | 3, 0  | 3, 0
   68244 ≤ molad <  77760      | 5, 0  | 5, 0  | 5, 0  | 3, 0
   77760 ≤ molad <  96815      | 5, 0  | 5, 0  | 5, 0  | 5, −1
   96815 ≤ molad < 120084      | 5, 0  | 5, 0  | 5, 0  | 5, +1
  120084 ≤ molad < 129600      | 5, +1 | 5, +1 | 5, +1 | 5, +1
  129600 ≤ molad < 136488      | 7, −1 | 7, −1 | 7, −1 | 7, −1
  136488 ≤ molad < 146004      | 7, +1 | 7, −1 | 7, −1 | 7, −1
  146004 ≤ molad < 158171      | 7, +1 | 7, +1 | 7, +1 | 7, −1
  158171 ≤ molad < 181440      | 7, +1 | 7, +1 | 7, +1 | 7, +1

1. Bushwick, pp. 95-97, Hebrew and English. Bushwick ignored 5, −1 for leap years.
2. Poznanski, p. 121, Hebrew and English. Poznanski ignored 5, −1 for leap years in his table although he lists it in his text.
3. Resnikoff, p. 276, English. Resnikoff is correct.
4. The four gates can be presented in many ways. Resnikoff only used parts (up to 181440) whereas Bushwick and Poznanski used days, hours, and parts. Bushwick began the week at noon Saturday whereas Resnikoff and Poznanski began their week at 6 pm Saturday. Bushwick and Poznanski had cyclic years on the left and types of years on top. Resnikoff rotated his table 90° to the right, so cyclic years were on top and types of years on the right, similar to the table given here.
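Read as data, the four-gates table above gives a direct lookup from (molad, cyclic group) to the year type; a sketch:

# Four-gates lookup: lists of (upper bound, (weekday of 1 Tishrei, length)),
# where length is -1 deficient, 0 regular, +1 abundant, with the ranges of
# each column merged.

GATES = {
    "LCC": [(16404, (2, -1)), (49189, (2, +1)), (68244, (3, 0)),
            (120084, (5, 0)), (129600, (5, +1)), (136488, (7, -1)),
            (181440, (7, +1))],
    "LCL": [(16404, (2, -1)), (49189, (2, +1)), (68244, (3, 0)),
            (120084, (5, 0)), (129600, (5, +1)), (146004, (7, -1)),
            (181440, (7, +1))],
    "CCL": [(16404, (2, -1)), (51840, (2, +1)), (68244, (3, 0)),
            (120084, (5, 0)), (129600, (5, +1)), (146004, (7, -1)),
            (181440, (7, +1))],
    "CLC": [(28571, (2, -1)), (51840, (2, +1)), (77760, (3, 0)),
            (96815, (5, -1)), (129600, (5, +1)), (158171, (7, -1)),
            (181440, (7, +1))],
}

def year_type(molad_parts, group):
    for bound, entry in GATES[group]:
        if molad_parts < bound:
            return entry
    raise ValueError("molad must be below 181440")

# 5758 is year 1 of its cycle (group LCC); its molad, 114609, was computed above:
print(year_type(114609, "LCC"))   # (5, 0): Thursday, regular -- as in the text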
{"url":"http://en.wikibooks.org/wiki/Computer_Programming/Hebrew_calendar","timestamp":"2014-04-20T19:43:34Z","content_type":null,"content_length":"40943","record_id":"<urn:uuid:48040dc7-0080-46fb-b9bf-5c99aeedd6a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Mazur’s knotty dictionary
Posted by lieven on Saturday, 27 December 2008

Sometime in the roaring 60-ties, Barry Mazur launched the crazy idea of viewing the affine spectrum of the integers, $\mathbf{spec}(\mathbb{Z})$, as a 3-dimensional manifold and prime numbers themselves as knots in this 3-manifold... After a long silence, this idea was taken up recently by Mikhail Kapranov and Alexander Reznikov (1960-2003) in a talk at the MPI-Bonn in August 1996. Pieter Moree tells the story in his recollections about Alexander (Sacha) Reznikov in Sipping Tea with Sacha :

"Sasha's paper is closely related to his paper where the analogy of covers of three-manifolds and class field theory plays a big role (an analogy that was apparently first noticed by B. Mazur). Sasha and Mikhail Kapranov (at the time also at the institute) were both very interested in this analogy. Eventually, in August 1996, Kapranov and Reznikov both lectured on this (and I explained in about 10 minutes my contribution to Reznikov's proof). I was pleased to learn some time ago that this lecture series even made it into the literature, see Morishita's 'On certain analogies between knots and primes' J. reine angew. Math 550 (2002) 141-167."

Here's a part of what is now called the Kapranov-Reznikov-Mazur dictionary :

[the dictionary table, an image in the original, is missing; its entries are spelled out in the text below and recapped at the end of the post]

What is the rationale behind this dictionary? Well, it all has to do with trying to make sense of the (algebraic) fundamental group $\pi_1^{alg}(X)$ of a general scheme $X$. Recall that for a manifold $M$ there are two different ways to define its fundamental group $\pi_1(M)$ : either as the closed loops through a given basepoint up to homotopy or as the automorphism group of the universal cover $\tilde{M}$ of $M$. For an arbitrary scheme the first definition doesn't make sense but we can use the second one as we have a good notion of a (finite) cover : an etale morphism $Y \rightarrow X$ of the scheme $X$. As they form an inverse system, we can take their finite automorphism groups $Aut_X(Y)$, take their projective limit along the system and call this the algebraic fundamental group $\pi^{alg}_1(X)$.

Hendrik Lenstra has written beautiful course notes on 'Galois theory for schemes' on all of this starting from scratch. Besides, there are also two video-lectures available on this at the MSRI-website : Etale fundamental groups 1 by H.W. Lenstra and Etale fundamental groups 2 by F. Pop.

But, what is the connection with the 'usual' fundamental group in case both of them can be defined? Well, by construction the algebraic fundamental group is always a profinite group and in the case of manifolds it coincides with the profinite completion of the standard fundamental group, that is, $\pi^{alg}_1(M) \simeq \widehat{\pi_1(M)}$ (recall that the profinite completion is the projective limit of all finite group quotients).

Right, so all we have to do to find a topological equivalent of an algebraic scheme is to compute its algebraic fundamental group and find an existing topological space of which the profinite completion of its standard fundamental group coincides with our algebraic fundamental group. An example : a prime number $p$ (as a 'point' in $\mathbf{spec}(\mathbb{Z})$) is the closed subscheme $\mathbf{spec}(\mathbb{F}_p)$ corresponding to the finite field $\mathbb{F}_p = \mathbb{Z}/p\mathbb{Z}$.
For any affine scheme of a field $K$, the algebraic fundamental group coincides with the absolute Galois group $Gal(\overline{K}/K)$. In the case of $\mathbb{F}_p$ we all know that this absolute Galois group is isomorphic with the profinite integers $\hat{\mathbb{Z}}$. Now, what is the first topological space coming to mind having the integers as its fundamental group? Right, the circle $S^1$. Hence, in arithmetic topology we view prime numbers as topological circles, that is, as knots in some bigger space. But then, what is this bigger space? That is, what is the topological equivalent of $\mathbf{spec}(\mathbb{Z})$? For this we have to go back to Mazur's original paper Notes on etale cohomology of number fields in which he gives an Artin-Verdier type duality theorem for the affine spectrum $X=\mathbf{spec}(D)$ of the ring of integers $D$ in a number field. More precisely, there is a non-degenerate pairing $H^r_{et}(X,F) \times Ext^{3-r}_X(F, \mathbb{G}_m) \rightarrow H^3_{et}(X,F) \simeq \mathbb{Q}/\mathbb{Z}$ for any constructible abelian sheaf $F$. This may not tell you much, but it is a 'sort of' Poincare-duality result one would have for a compact three dimensional manifold. Ok, so in particular $\mathbf{spec}(\mathbb{Z})$ should be thought of as a 3-dimensional compact manifold, but which one? For this we have to compute the algebraic fundamental group. Fortunately, this group is trivial as there are no (non-split) etale covers of $\mathbf{spec}(\mathbb{Z})$, so the corresponding 3-manifold should be simply connected... but we now know that this has to imply that the manifold must be $S^3$, the 3-sphere! Summarizing : in arithmetic topology, prime numbers are knots in the 3-sphere!

More generally (by the same arguments) the affine spectrum $\mathbf{spec}(D)$ of a ring of integers can be thought of as corresponding to a closed oriented 3-dimensional manifold $M$ (which is a cover of $S^3$) and a prime ideal $\mathfrak{p} \triangleleft D$ corresponds to a knot in $M$. But then, what is an ideal $\mathfrak{a} \triangleleft D$? Well, we have unique factorization of ideals in $D$, that is, $\mathfrak{a} = \mathfrak{p}_1^{n_1} \ldots \mathfrak{p}_k^{n_k}$ and therefore $\mathfrak{a}$ corresponds to a link in $M$ of which the constituent knots are the ones corresponding to the prime ideals $\mathfrak{p}_i$. And we can go on like this. What should be an element $w \in D$? Well, it will be an embedded surface $S \rightarrow M$, possibly with a boundary, the boundary being the link corresponding to the ideal $\mathfrak{a} = Dw$, and Seifert's algorithm tells us how we can produce surfaces having any prescribed link as its boundary. But then, in particular, a unit $w \in D^*$ should correspond to a closed surface in $M$. And all these analogies carry much further : for example the class group of the ring of integers $Cl(D)$ then corresponds to the torsion part $H_1(M,\mathbb{Z})_{tor}$ because principal ideals $Dw$ are trivial in the class group, just as boundaries of surfaces $\partial S$ vanish in $H_1(M,\mathbb{Z})$. Similarly, one may identify the unit group $D^*$ with $H_2(M,\mathbb{Z})$... and so on, and on, and on... More links to papers on arithmetic topology can be found in John Baez' week 257 or via here.
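For quick reference, here are the correspondences assembled in the post, recapping the lost dictionary table:

$\mathbf{spec}(\mathbb{Z})$, resp. $\mathbf{spec}(D)$ $\leftrightarrow$ the 3-sphere $S^3$, resp. a closed oriented 3-manifold $M$
a prime ideal $\mathfrak{p} \triangleleft D$ $\leftrightarrow$ a knot in $M$
an ideal $\mathfrak{a} = \mathfrak{p}_1^{n_1} \ldots \mathfrak{p}_k^{n_k}$ $\leftrightarrow$ a link in $M$ with components the knots of the $\mathfrak{p}_i$
an element $w \in D$ $\leftrightarrow$ an embedded surface in $M$ with boundary the link of $Dw$
a unit $w \in D^*$ $\leftrightarrow$ a closed surface in $M$, i.e. a class in $H_2(M,\mathbb{Z})$
the class group $Cl(D)$ $\leftrightarrow$ the torsion part $H_1(M,\mathbb{Z})_{tor}$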
{"url":"http://www.neverendingbooks.org/index.php/mazurs-dictionary.html","timestamp":"2014-04-16T04:10:59Z","content_type":null,"content_length":"19695","record_id":"<urn:uuid:fad8857a-0a4d-4233-85a4-96c0ceb648a2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Activities for Preschoolers

Preschool Math Activities

It’s time to teach preschoolers some math! Check out our collection of fun, free preschool math activities to get the learning started.

Does your child love to color? ‘Color by Number’ is a great kindergarten math activity to combine coloring with number recognition – a basic math skill.

Sorting and classifying toolbox items is a great way to sneak in some basic math lessons. Here’s a fun kindergarten math activity that shows you how.

This fun preschool math activity teaches your little one to tell time.

Grab the chance to introduce your child to basic geometry with this simple and interesting math activity for preschool.

Help your preschooler make sense of math with “Egg Carton Math,” a free fun math activity for kids.

Here's a fun way to teach your toddler to count - link the lesson with one of his favorite activities – eating.

Teach kids all about counting, comparing and classifying in this cun - errr... fun math activity for kids, “Collections”.

Teach preschoolers numbers from 1 – 20 with our fun math puzzle, ‘Who’s This?’! Free and printable, this math activity can be used by teachers as well as homeschooling parents.

Fun Math Activities for Preschoolers

Kids begin to use math at a very early age. What may surprise many is the number of different math concepts that preschoolers learn to use. Preschool math activities can be both fun and educational. These activities focus on concepts like basic counting, learning different shapes, identifying patterns, differentiating between sizes and being able to compare and identify which is bigger, etc. These basics form the foundation on which the little ones will grow up to learn more advanced and complex math topics. An important advantage of preschool math activities is that they are able to highlight the practical uses of math in everyday life. Simple activities such as getting your preschooler to count the number of plates on the dinner table, or measuring out one cup of water, will help the tiny tots establish the relevance of the subject. So go ahead and get your preschoolers started on our fun math activities and watch them grow to love the subject!
{"url":"http://www.mathblaster.com/parents/math-activities/preschool-math-activities","timestamp":"2014-04-20T03:10:32Z","content_type":null,"content_length":"88845","record_id":"<urn:uuid:664f51df-7c04-449b-8d68-3678849cbb85>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
f(x^2+y^2)=g(x)g(y) ...

May 26th 2011, 04:18 AM - Also sprach Zarathustra
Hello everybody! A problem: Find functions $\varphi , \psi$ which are fulfilling: $\varphi(x^2+y^2)=\psi(x) \psi(y)$ for all $x,y$. Prove that if $\varphi , \psi$ are fulfilling the above equation then $\psi$ is determined by $\varphi$. How? Thank you.

May 26th 2011, 04:26 AM
For example $\phi=\psi =0$. A more interesting example is given by $\phi :x\mapsto e^x$ and $\psi:x\mapsto e^{x^2}$.

May 26th 2011, 04:59 AM - Also sprach Zarathustra

May 27th 2011, 07:45 AM
This is how I might attempt to do this problem. I will assume that both $\psi$ and $\phi$ are smooth. Taking the natural log of both sides gives $\ln \phi(x^2+y^2) = \ln \psi(x) + \ln \psi(y)$. Write $F(x^2+y^2)$ for this common value. Taking the mixed partial $\partial^2/\partial x\,\partial y$ of the identity $F(x^2+y^2) = \ln \psi(x) + \ln \psi(y)$ kills the right-hand side and gives $4xy\,F''(x^2+y^2) = 0$, so $F''(x^2+y^2) = 0$. Thus, $F(x^2+y^2) = a(x^2+y^2) + \ln(b)$ where $a$ and $b$ are constant. So $\ln \phi(x^2+y^2) = a(x^2+y^2) + \ln(b)$, so $\phi(x^2+y^2) = b e^{a(x^2+y^2)} = \psi(x)\psi(y)$. Now set $y = 0$ and this gets you the form for $\psi(x) = k e^{ax^2}.$ Then substitute into the original functional equation to determine $k$.

May 29th 2011, 02:05 PM
Maybe there is something missing from the problem statement. Otherwise $\phi(x) = e^x$ and $\psi(x) = - e^{x^2}$, combined with girdav's solution, provides a counterexample to the second part (uniqueness).
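Carrying the last step through (it is left implicit in the thread): substituting $\psi(x) = k e^{ax^2}$ back gives $\psi(x)\psi(y) = k^2 e^{a(x^2+y^2)}$, so matching with $\phi(x^2+y^2) = b e^{a(x^2+y^2)}$ forces $k^2 = b$, i.e. $k = \pm\sqrt{b}$. Both signs satisfy the equation, which is exactly the failure of uniqueness exhibited by the counterexample $\phi(x) = e^{x}$, $\psi(x) = -e^{x^2}$ in the final post.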
{"url":"http://mathhelpforum.com/differential-geometry/181695-f-x-2-y-2-g-x-g-y-print.html","timestamp":"2014-04-17T19:08:02Z","content_type":null,"content_length":"12986","record_id":"<urn:uuid:293cf5f0-6c36-46d8-aabf-ec123ad4bb6d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
The number of orbits of a permutation action

Let $G$ be a finite group acting on a finite set $\Omega$. A general question is to determine the sequence $o_k(\Omega)$, where $o_k(\Omega)$ is the number of orbits of $G$ for the natural action of $G$ on the set of $k$-subsets of $\Omega$. It's well-known that if $G=S_n$ and the action on $\Omega =[n] := \{1, \ldots, n \}$ is the standard permutation action, then $o_k(\Omega) = 1$ (i.e. since $S_n$ is $n$-fold transitive the induced action on $k$-sets is transitive). I'm interested in being able to figure out the sequence $o_k(\Omega_r)$ where $G=S_n$, but where $\Omega_r$ is the set of all $r$-subsets of $[n]$, say for $r=2$ or $3$. I had hesitated asking this question since I thought that the answer must be well-known, but after a little while looking around I haven't been able to find it. What I have been able to figure out is the following: if $A$ is a set of $r$-subsets of $[n]$ I'll define its signature: Let $U$ denote the multiset which is the multiset union of the elements of $A$ -- i.e. the multiplicity of an element $x \in U$ is the number of elements of $A$ which contain $x$. The signature of $A$ is the multiset of multiplicities in $U$. Then the action of $S_n$ is transitive on sets of a fixed signature. So the answer to my question is to count the number of possible signatures.

[Addition: if $s$ is the signature of a set $A$ of $k$-subsets of the set of $r$-subsets of $[n]$, then the sum of the elements (with multiplicity) of $s$ is $r k$. Thus a signature is a partition of $rk$ into $\le n$ parts. However, it's not clear to me that all such partitions actually occur as signatures]

gr.group-theory permutation-groups

Are you sure about the transitivity of $S_n$ on the sets of fixed signature? For $r=2$ - using the shortcut $ab$ for the set $\{a, b\}$ - the two sets $A = \{12, 23, 13, 45, 56, 46\}$ and $B = \{12, 23, 34, 45, 56, 16\}$ have the same signature, but I doubt that they are both in the same orbit of $S_n$. I'd expect $\omega_k(\Omega_2)$ to be the number of isomorphism classes of (unordered) graphs with $k$ edges on $n$ vertices. – Someone Jul 20 '11 at 14:38

I'll have to think about this. I had thought that I could use the $n$-transitivity of $S_n$ to move all of the elements with the same multiplicity to each other. – Victor Miller Jul 20 '11 at

1 Answer

For $r=2$ a $k$-subset can be thought of as a graph with vertices $[n]$ and $k$ edges. Hence the number of orbits is equal to the number of isomorphism classes of graphs on $n$ vertices and $k$ edges. Counting them seems like a fairly intractable problem.

Torsten, Thanks. You are right. I found this link mathworld.wolfram.com/SimpleGraph.html – Victor Miller Jul 20 '11 at 16:19

Counting them exactly is quite hard, but asymptotics are easy, since a general graph has trivial automorphism group. – Igor Rivin Jul 20 '11 at 17:21
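For the $r=2$ case one can brute-force small $n$ and watch the graph-isomorphism counts appear; a sketch:

# Orbits of S_n on k-subsets of the 2-subsets of [n] = isomorphism classes of
# graphs on n vertices with k edges (brute force, only feasible for tiny n).

from itertools import combinations, permutations

def orbit_count(n, k):
    edges = list(combinations(range(n), 2))
    seen, orbits = set(), 0
    for graph in combinations(edges, k):
        if graph in seen:
            continue
        orbits += 1
        for p in permutations(range(n)):   # act by relabelling vertices
            image = sorted(tuple(sorted((p[a], p[b]))) for a, b in graph)
            seen.add(tuple(image))
    return orbits

print([orbit_count(4, k) for k in range(7)])   # [1, 1, 2, 3, 2, 1, 1]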
{"url":"http://mathoverflow.net/questions/70813/the-number-of-orbits-of-a-permutation-action?sort=oldest","timestamp":"2014-04-18T20:45:24Z","content_type":null,"content_length":"57883","record_id":"<urn:uuid:d351aa6c-cef0-4632-8f15-d54e5ef1b47b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
6. COSMOGRAPHY

Since the angular sizes of gravitational arc(let)s depend on the cosmological distances of the lens and sources, in principle arc(let)s can be used to probe the geometry of the universe itself. With well-known arc configurations it should be possible to determine the cosmological parameters H_0, and Ω_0 + λ_0. In fact, we are unable to separate the relative contribution of Ω_0 and λ_0. For the present time, we have some hope of measuring the mass density of the universe, Ω_0, from independent observations, mainly from the large scale structure velocity field. Therefore we should be able to obtain a value for λ_0 = 1 - Ω_0 if we adopt the theoretical hypothesis of an inflation period for the primeval universe. Additional observations of gravitational arcs may offer an opportunity to get around this hypothesis. In practice, a value of [...] (Carroll, Press & Turner 1992). Also, the positions of arcs are far more sensitive to the lens modeling. In conclusion, the determination of [...]

In the following we must indeed assume that the general theoretical framework given in part 2 is valid. Note that Nottale (1988) emphasized that giant arcs with known redshift can test gravitation theories on cosmological scales, provided dynamical masses and gravitational masses are identical. The present-day observations do not contradict the predictions of General Relativity, and actually confirm the equality between the two masses (Nottale 1988, Dar 1992).

The time delay which appears between the successive observations of an intrinsic event in the source in each multiple image of an arc system can be used to infer H_0. The original idea was introduced for multiple quasars by Refsdal (1964) and applied to the double QSO 0957+561. It has not succeeded so far in providing a better value of H_0 than other methods (see Kochanek 1991): observational discrepancies were reported between the radio and optical measurements of the time delay, and the modeling of the lens appears still more complex than previously thought with the discovery of a fold arc in the field of the associated cluster (Bernstein et al. 1993). At first glance, the observational situation would seem better for arcs. A priori we expect a large number of supernovae events in distant blue galaxies which have a large star formation rate. The magnified event that appears at different times on each image can be well recognized. Giraud (1992b) claimed to detect a local surface brightness variation in the giant arc observed in MS0302+17 which could be interpreted as such a supernova explosion. But his detection is marginal and is not confirmed by similar observations done by other observers during almost the same period. However, even if it is possible to search for and monitor supernovae, the time delay can reach hundreds of years for separate multiple images! Very often, only for the region near the critical line of two merging arcs is the time delay a few days or weeks (Kovner & Paczynski 1988). But even if by chance a supernova were observed with such an ideal geometrical configuration, there will still remain an uncertainty in the determination of H_0 associated with the modelling of the cluster lens, even if it could be better determined than for small galaxy lenses. It is likely that a reliable value of the Hubble constant will need multiple observations of supernovae in a large number of clusters with giant arcs; an observational challenge which will be out of reach for a long time.
Paczynski & Gorki (1981) first suggested using the multiple images of a lensed quasar to constrain the cosmological constant provided the core radius and the velocity dispersion of the lens are known (and obviously the redshifts of the lens and the source). The technique assumes a model for the mass profile which relates the angular separation of split images to the velocity dispersion and the angular diameter distances. In that case, it is straightforward to find the best [...]

Breimer and Sanders (1992) used basically the same approach as Paczynski & Gorki but on clusters with giant arcs. They discussed simultaneously the gravitational lensing analysis and dynamical mass distributions inferred from both the galaxy behavior and the distribution of hot X-ray gas. They concluded that if light traces mass the observations of A370 are compatible with [...]; if M/L varies with distance, a wide range of cosmologies are also possible.

In a similar way, the cosmological parameters could be inferred by measuring the position and magnification of two arcs with different redshifts and observed in the same cluster. From equation (7) of section 2, the ratio of deviation angles generated by a singular isothermal sphere for two different sources with different redshifts is [equation lost in extraction; for a singular isothermal sphere the bend angle scales as D_LS/D_OS, so the ratio reads (D_LS(z_L,z_1)/D_OS(z_1)) / (D_LS(z_L,z_2)/D_OS(z_2))] and is independent of H_0. Therefore, the ratio only depends on the curvature k (and the deceleration parameter q_0) and the cosmological constant λ_0.

In principle this technique should work. However, the ratio of the angular distances between the two arcs is strongly dependent on the lens modeling and the assumptions made for the sources. It is likely that this approach will need the discovery of at least two relatively bright arcs with different redshifts in a cluster lens with a simple geometry and better spectroscopic capabilities coming from future large telescopes.
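The H_0-independence of the two-arc ratio is easy to check numerically; the sketch below uses astropy's cosmology module (the lens redshift of 0.37 and the source redshifts are illustrative values, not data from the text):

# For an SIS lens the bend angle scales as D_LS / D_S, so the ratio for two
# sources behind one lens probes (Omega_0, lambda_0) while H_0 cancels out.

from astropy.cosmology import LambdaCDM

def arc_ratio(z_lens, z1, z2, Om0, Ode0, H0=70.0):
    cosmo = LambdaCDM(H0=H0, Om0=Om0, Ode0=Ode0)
    def f(zs):
        return (cosmo.angular_diameter_distance_z1z2(z_lens, zs) /
                cosmo.angular_diameter_distance(zs))
    return float(f(z1) / f(z2))

print(arc_ratio(0.37, 0.8, 2.0, Om0=1.0, Ode0=0.0))   # Einstein-de Sitter
print(arc_ratio(0.37, 0.8, 2.0, Om0=0.3, Ode0=0.7))   # flat, Lambda-dominated
# Changing H0 leaves both numbers unchanged; changing (Om0, Ode0) does not.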
{"url":"http://ned.ipac.caltech.edu/level5/Mellier/Mellier6.html","timestamp":"2014-04-19T10:31:17Z","content_type":null,"content_length":"9595","record_id":"<urn:uuid:cab01312-187b-4b62-878b-2cfefd163991>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Silver Lake, NJ Math Tutor
Find a Silver Lake, NJ Math Tutor

...I have a BS in Chemistry and an MS in Education. Since 2006, I have tutored many students in a variety of subjects. Beginning in May of each year, I work more with students who want to prepare for the Regents in June.
50 Subjects: including trigonometry, ACT Math, SAT math, English

...I intend on working for a major consulting firm in Manhattan upon graduation. As a result of my career related experiences thus far, I would be very interested in tutoring an upcoming college student (or even high school student) in the ins and outs of looking for an internship, externship, su...
15 Subjects: including algebra 1, elementary (k-6th), precalculus, writing

...I have written and spoken professionally, which has required a good knowledge of grammar and vocabulary. I have not yet failed to help a student learn what they need to know in order to achieve their goals, as long as they were willing to do the necessary work. I have worked for an SAT tutoring service and tutored this subject privately.
29 Subjects: including ACT Math, SAT math, trigonometry, precalculus

...For students who need help with formal school programs, I first learn what methods they are being taught to use and then help them see how to use those methods better. On Bellcore's (now Telcordia) award-winning science magazine, EXCHANGE, my boss referred to me as the "nitpicker in chief." I wa...
28 Subjects: including SAT math, reading, English, Russian

...As an undergraduate I took many mathematics and science courses outside the requirements for my BA. Beyond academic and industrial research, my primary employment was in IT (including compiler maintenance, applications programming and prototype development of security devices, and research databa...
23 Subjects: including algebra 1, algebra 2, ACT Math, ASVAB
{"url":"http://www.purplemath.com/silver_lake_nj_math_tutors.php","timestamp":"2014-04-18T23:46:05Z","content_type":null,"content_length":"24070","record_id":"<urn:uuid:e4b81469-698d-4071-a791-f9ac93e5a8d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Sampling

After examining this page, your knowledge of probability sampling will be enhanced.

Before proceeding, we have to define some general terms. A probability sampling method is any method of sampling that utilizes some form of random selection. What is random selection? It is a selection made so that each person or item has an equal chance of being chosen. With those general terms defined, you may proceed to examine the history of probability and the different probability sampling methods.

The History of Probability

Probability started from the study of games of chance. Tossing a die, playing poker and spinning a roulette wheel are just some examples of such games. Games of chance were not studied by mathematicians until the sixteenth and seventeenth centuries. Probability theory as a branch of mathematics arose in the seventeenth century when French gamblers asked Blaise Pascal and Pierre de Fermat, both well-known pioneers in mathematics, for help in their gambling. In the eighteenth and nineteenth centuries, careful measurements in astronomy and surveying led to further advances in probability. In the twentieth century probability is used to control the flow of traffic through a highway system, a telephone interchange, or a computer processor. In addition, it is used to find the genetic makeup of individuals or populations, figure out the energy states of subatomic particles, estimate the spread of rumors, and predict the rate of return in risky investments.

Probability and Chance

Chance is a part of our everyday lives. Every day we make judgements based on probability:
There is a 90% chance the Detroit Red Wings will win the game tomorrow.
There is a 60% chance of thunderstorm this afternoon.
We have a 50-50 chance of winning the game.
There is a 20% chance of showers today.
Although we assign certain probabilities to certain events, others might assign different probabilities to those same events due to their difference of opinion. For example, not everyone agrees with the high chance of the Detroit Red Wings winning the game. They might say that there is a 20% chance the Detroit Red Wings will win the game tomorrow. It all depends on what the person believes. Chance may result from human design such as casino games and the lottery, or it may result from nature such as determining a person's sex and other human characteristics.

Probability is defined as the branch of mathematics that describes the pattern of chance outcomes.

Probability Theory

Probability Theory is the mathematical study of randomness. This theory deals with the possible outcomes of an event. It must be possible to list every outcome that can occur, and we must be able to state the expected relative frequencies of these outcomes. It is the method of assigning relative frequencies to each of the possible outcomes. If the outcomes of an experiment are equally likely, then the probability of an event is the ratio of the number of outcomes favourable to the event to the total number of outcomes.

Personal Probability

We can have a personal opinion about the next outcome of an event such as a coin toss. I can say that my personal probability of a head in the next toss is 1/2. Your personal probability may be different from mine. Personal probability sets us free from figuring out the outcome from many repetitions. Therefore, personal probability allows us to assign a probability to one-time events such as a golf tournament.

Simple Random Sampling

Simple random sampling is the simplest form of random sampling.
It is the basic sampling technique where you select a group of subjects, a sample, for study from a larger group, a population. Each individual is chosen entirely by chance and each member of the population has an equal chance of being included in the sample. Every possible sample of a given size has the same chance of selection. As a result, each member of the population is equally likely to be chosen at any stage in the sampling process. For example, the thingamajig at the top is an ideal model of simple random sampling. Press the "Start" button to start the random selection. You will notice that at every second the thingamabob will pick up one of the three numbers 1, 2, or 3. You can terminate the process anytime by pressing the "Stop" button. Randomly picking clients from a list of clients is another example of simple random sampling. Simple random sampling is simple to accomplish and is easy to explain to others; because it is a fair way to select a sample, it is reasonable to generalize the results from the sample back to the population. However, it is not the most statistically efficient method of sampling. It does not get a good representation of subgroups in a population because of the luck of the draw. To deal with these issues, we have to turn to other sampling methods.

Stratified Random Sampling

A stratified random sample, also called a proportional or quota random sample, is obtained by taking samples from each stratum or sub-group of a population. It involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup. Stratified sampling techniques are generally used when the population is heterogeneous, or dissimilar, where certain homogeneous, or similar, sub-populations can be isolated. Simple random sampling is most appropriate when the entire population from which the sample is taken is homogeneous. There are several reasons why you would prefer stratified sampling over simple random sampling. Firstly, it assures that you will be able to represent not only the overall population, but also key subgroups of the population, especially small minority groups. Secondly, the cost per observation in the survey may be reduced and lastly, it provides each sub-population estimates of the population parameters. Splitting clients into three different groups and picking from them is another example of stratified random sampling. Take a farmer for example. Suppose he wishes to work out the average milk yield of each cow type in his herd which consists of Ayrshire, Friesian, Galloway and Jersey cows. He could divide up his herd into the four sub-groups and take samples from these.

Cluster Random Sampling

Cluster sampling is a sampling technique where the entire population is divided into groups, or clusters, and a random sample of these clusters is selected. All observations in the selected clusters are included in the sample. It is typically used when the researcher cannot get a complete list of the members of a population they wish to study but can get a complete list of groups or clusters of the population. It is also used when a random sample would produce a list of subjects so widely scattered that surveying them would prove to be far too expensive, for example, people who live in different postal districts in the UK. This sampling technique is more practical and economical than simple random sampling or stratified sampling.
The problem with random sampling methods when we have to sample a population that's dispersed across a wide geographic region is that you will have to cover a lot of ground geographically in order to get to each of the units you sampled. Imagine taking a simple random sample of all the residents of New York State in order to conduct personal interviews. By the luck of the draw you will wind up with respondents who come from all over the state. Your interviewers are going to have a lot of traveling to do. For instance, in the figure we see a map of the counties in New York State. Let's say that we have to do a survey of town governments that will require us going to the towns personally. If we do a simple random sample state-wide we'll have to cover the entire state geographically. Instead, we decide to do a cluster sampling of five counties, marked in red in the figure. Once these are selected, we go to every town government in the five areas. Clearly this strategy will help us to economize on our mileage. Cluster or area sampling, then, is useful in situations like this, and is done primarily for efficiency of administration. Take this as another example: suppose that the Department of Agriculture wishes to investigate the use of pesticides by farmers in England. A cluster sample could be taken by identifying the different counties in England as clusters. A sample of these counties, clusters, would then be chosen at random, so all farmers in those counties selected would be included in the sample. It can be seen here then that it is easier to visit several farmers in the same county than it is to travel to each farm in a random sample to observe the use of pesticides.
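The three sampling schemes above are easy to mirror with Python's standard library (the population, strata and cluster sizes below are made up for illustration):

import random

random.seed(1)
population = list(range(300))                 # e.g. a list of client IDs

# Simple random sampling: every 30-subset of the population is equally likely.
simple = random.sample(population, 30)

# Stratified sampling: split into homogeneous strata, sample within each.
strata = {"ayrshire": population[:120],
          "friesian": population[120:200],
          "jersey":   population[200:]}
stratified = [x for group in strata.values()
              for x in random.sample(group, len(group) // 10)]

# Cluster sampling: partition into clusters, then keep whole chosen clusters.
clusters = [population[i:i + 30] for i in range(0, 300, 30)]
cluster_sample = [x for c in random.sample(clusters, 2) for x in c]

print(len(simple), len(stratified), len(cluster_sample))   # 30 30 60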
{"url":"http://www.angelfire.com/empire/richardt/","timestamp":"2014-04-20T01:02:48Z","content_type":null,"content_length":"29467","record_id":"<urn:uuid:6f5fbcbb-44c8-40a8-ae95-4b033d77b97b>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Brookhaven, NY
Lindenhurst, NY 11757

MATH! - Middle/High school/SAT/ACT. Patient/experienced!

I am a high school teacher. I chose teaching as a second career because I love and believe in children. I am a teacher, tutor, and mom (of teenagers). I am a full-time high school teacher of algebra, geometry, and advanced algebra/trigonometry...

Offering 8 subjects including algebra 1, geometry and prealgebra
{"url":"http://www.wyzant.com/Brookhaven_NY_Math_tutors.aspx","timestamp":"2014-04-20T09:45:25Z","content_type":null,"content_length":"58533","record_id":"<urn:uuid:449b9109-7e92-4c2e-a28f-af3d72f8986c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodacre Geometry Tutor

...In addition, he has a lifelong passion for mathematics and, in addition to tutoring all grade levels in math, has volunteered for 6 years in the local public schools in San Rafael (including mathematics instruction and Odyssey of the Mind coaching). Dr. G. has a daughter who is currently in high school. He enjoys music, hiking and geocaching.
13 Subjects: including geometry, calculus, physics, algebra 2

...I took biostatistics in college and through UC Berkeley Extension. I received an A in both classes. It was a very interesting subject and I'll be more than happy to help students with any difficulty they have.
22 Subjects: including geometry, calculus, statistics, biology

...I am well regarded as an excellent instructor and am able to deal with students with a wide range of abilities in math, finance and economics. I worked a number of years as a data analyst and computer programmer and am well versed in communicating with people who have a variety of mathematical a...
49 Subjects: including geometry, calculus, physics, statistics

...This subject is a specialty of mine. I have strong vocabulary skills from a lifetime of reading and writing, started at a very young age and continued to date. I have taught professional courses in technical and business writing for several years.
25 Subjects: including geometry, reading, English, writing

...I also teach Tai Chi at Cal. I had a chess rating of 1600 in the United States Chess Federation. I taught an after school program through the Berkeley Chess School.
12 Subjects: including geometry, chemistry, physics, calculus
{"url":"http://www.purplemath.com/woodacre_geometry_tutors.php","timestamp":"2014-04-16T22:09:01Z","content_type":null,"content_length":"23677","record_id":"<urn:uuid:2fb9ac73-2174-4a56-aa6c-7ed3b15dcc63>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Stockbridge, GA Math Tutor

Find a Stockbridge, GA Math Tutor

...While enjoying all math, I specialize in Algebra, Trigonometry, Precalculus, Calculus, and Statistics. I look forward to being able to help you in your Math Classes. I was the math and science lab supervisor at Georgia Perimeter College for five years, and tutored there for 10 years before that.
15 Subjects: including algebra 1, algebra 2, biology, calculus

...I was an Information Technology teacher for 3 months, a programming tutor for two semesters and a Computer Science teacher for a year. As part of my requirements for my major in Computer Science, I opted to study Linear Algebra. Linear algebra involves systems of equations, matrices and vectors, permutations and determinants.
21 Subjects: including calculus, linear algebra, discrete math, Java

...I enjoy working with students, and I find great pleasure upon seeing a smile on a child's face who has conquered a skill that he/she has had trouble with in the past. I seek to first assess the student, and then plan a course of action based upon that student's strengths and/or weaknesses in the...
19 Subjects: including calculus, chemistry, English, geometry

...I currently hold a valid GA teaching certification in elementary education (P-5) and middle grades mathematics (4-8). I have over 7 years of experience in the classroom and over 20 years in working with children from birth to ages 14. It is my belief that every child can learn and will learn wh...
3 Subjects: including prealgebra, elementary (k-6th), elementary math

...This makes it much easier to understand, and maybe even more important, it makes it a lot more interesting and fun for the student. I love standardized tests and have scored within the 99th percentile for all tests I tutor. I have been able to consistently help my students increase their SAT sc...
19 Subjects: including calculus, algebra 1, algebra 2, geometry
{"url":"http://www.purplemath.com/Stockbridge_GA_Math_tutors.php","timestamp":"2014-04-18T21:19:04Z","content_type":null,"content_length":"23984","record_id":"<urn:uuid:ab2a2531-91b2-45b1-b023-f62989cc5e69>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Amanda on Monday, October 6, 2008 at 4:50pm.

One hundred draws will be made at random with replacement from the box 1, 1, 2, 2, 2, 4... The chance that the sum will be bigger than 250 is what percent?

• statistics - David Q, Monday, October 6, 2008 at 5:49pm

The mean of those numbers is 2. The variance is 1.2. The sum of 100 of them should have a mean of 200 and a variance of 120, i.e. a standard deviation of sqrt(120) = 10.95. Now, 250 is (250-200)/10.95 = 4.56 standard deviations above the mean. Assuming the distribution is approximately Normal (and I think it would be after you've added 100 such draws together), you can look that up in a set of Normal tables to find the area under the curve to the right of that point. I make that 1-0.999997 = 0.000003, or 0.0003%. (I've also run some simulations in a spreadsheet which seem to bear that figure out: not a single run out of several hundred has reached 250.) I'm not actually sure about the above reasoning (particularly the bit about the variance of 100 draws being 100 times the variance of {1,1,2,2,2,4}), so if anybody reckons I've made a mistake, just shout.

• statistics - Amanda, Monday, October 6, 2008 at 6:13pm

Shouldn't the standard deviation be 1? I have all the info... I just don't know how to find the z score from the info that I have: avg = 2, SD = 1, sum = 200, SE = 10.

• statistics - David Q, Tuesday, October 7, 2008 at 8:33am

You could be right about the standard deviation being 1. I did wonder at the time whether you ought to be using the standard deviation for the entire population, as opposed to the usual one that's applied to a sample, which would be calculated using n as the divisor instead of (n-1). If so, then 250 is 5.0 standard deviations above 200, as opposed to the 4.56 I calculated earlier. Either way, that's the Z value which you need to look up in a set of Normal probability tables, to find the area under the Normal probability curve to the left of that figure. For Z=5 the answer will be almost one (if you do it in Excel using the NORMSDIST function you will get 0.9999997, whereas for Z=4.56 you'll get 0.999997). You then subtract that from 1 to get the area to the right. Either way, the answer is extremely small (2.9E-7 for Z=5, or 2.6E-6 for Z=4.56).
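The disagreement in the thread (SD = 1 versus a variance of 1.2) comes down to dividing by n rather than n-1 when computing the spread of the box: since the box {1,1,2,2,2,4} is the entire population, the population formula applies, the SD is 1, and z = (250-200)/10 = 5. A short Python sketch (mine, not from the thread) checks both the normal approximation and a brute-force simulation:

import math
import random

# The box model from the thread: 100 draws with replacement from {1,1,2,2,2,4}.
box = [1, 1, 2, 2, 2, 4]
n = 100

mu = sum(box) / len(box)                                    # 2.0
sd = math.sqrt(sum((x - mu) ** 2 for x in box) / len(box))  # 1.0 (divide by n: the box IS the population)

ev = n * mu               # expected value of the sum: 200
se = math.sqrt(n) * sd    # standard error of the sum: 10

z = (250 - ev) / se                        # 5.0 standard units
tail = 0.5 * math.erfc(z / math.sqrt(2))   # upper-tail Normal probability
print(f"z = {z}, P(sum > 250) ~ {tail:.1e}")   # ~ 2.9e-07, about 0.00003%

# Brute-force check: the event is so rare that even a large simulation
# will almost certainly record zero sums above 250.
trials = 100_000
hits = sum(1 for _ in range(trials) if sum(random.choices(box, k=n)) > 250)
print(f"{hits} of {trials:,} simulated sums exceeded 250")

This matches David Q's second reply: with z = 5 the chance is about 2.9E-7, roughly 0.00003 percent, which also explains why his spreadsheet runs never produced a sum above 250.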
{"url":"http://www.jiskha.com/display.cgi?id=1223326216","timestamp":"2014-04-17T02:45:48Z","content_type":null,"content_length":"10499","record_id":"<urn:uuid:e078f7b0-e0d2-4084-9081-e16e719e8ecf>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching Math

Below are some resources I found and bookmarked.

AAA Study: Pick a grade and a math topic and begin learning.

BBC GCSE Maths: Part of the British tutoring Bitesize lessons, at a very high level; pick a topic and a lesson and go!

Hooda Math: Fun math games for all ages, and tutorials. Note: They took out many of the games that were there before.

Pick a grade and topic and take the test! Note: Need a paid membership for some features.

Pearson Success Net: Login, choose your book, and read EnVision Math online. Note: Need to own at least one EnVision Math textbook at school, and have your teacher add your account.
{"url":"http://mathisfunforum.com/viewtopic.php?pid=252231","timestamp":"2014-04-18T13:47:02Z","content_type":null,"content_length":"18976","record_id":"<urn:uuid:b78ea6a4-2234-454d-9d06-df0be359f3a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00365-ip-10-147-4-33.ec2.internal.warc.gz"}