content
stringlengths
86
994k
meta
stringlengths
288
619
Convergence of series May 22nd 2011, 04:09 PM #1 May 2011 Convergence of series I have to prove that Sn \to 0 as n \to \infty, given that the series \sum_{n = 1}^\infty 2{}^{ }n Sn converges. I don't see how this converges to 0 as n tends to infinity. Can anyone help at all. May 22nd 2011, 04:24 PM #2
{"url":"http://mathhelpforum.com/differential-geometry/181354-convergence-series.html","timestamp":"2014-04-16T20:13:25Z","content_type":null,"content_length":"33231","record_id":"<urn:uuid:2456f881-7b42-4525-8ffc-d71d8a8362bb>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Using C to Blend Mathematics and Art (When Math goes Beautiful) Many people think that mathematics is a difficult thing where unsuspected turns out to be a beautiful thing. You may know about Cycloids, Epicycloids, Epitrochoids, Lissajous, Hypocycloids, and Hypotrochoids that are some cases in mathematics where a circle rolls around within another circle by an equation that traces of points of the two circles that are (X1,Y1) and (X2,Y2). All cases can be made as basic methods to make an art in mathematics. You can see more explanation about all cases in Wikipedia. Intuitively, I make some mathematical arts in C programming and I use a method that is similar with the all cases where the algorithm of the program is so simple. This time, the program has been improved but anyway there are many bugs that have not yet been fixed. Initially, I make a mathematical program about circle where most of circles rolling around within other circles with 2 up to no limit total of the circles (Circle on Circle on Circle on ........). Every circle has its unique rotation and radius that would be determined by its point for X(a),Y(a), where "a" is a sensitive angle. Every point of a circle will be the center for point of the next circle, and so on. To make it easy for you to imagine it, see the picture below where there are two circles that are red point with 4 radius rolling around black point with 2 radius where the black point is as center for the red point and "0,0" is as center point for the black point: After I make the mathematical program, then I have an idea of how if every point is linked to another point with line. For example, 1^st point will be linked to the 2^nd point, the 2^nd point will be linked to the 3^rd point, and so on. With this method will be produced the beautiful pictures. In the program, I used 2 or 3 circles. To make it easy for you to understand, you can see in the demo file in slow drawing (slide show and full screen) by DOWN control on keyboard (Note: Up control to increase of the drawing speed). Some of mathematical functions that used are sin() and cos(), where "a" as the sensitive angle that will be determined the smoothness of pictures. Note that this is not a fractal, so there is no iteration. Using the Code The basics of equations for making a circle for "X,Y" are: X = CenterPointX + (Radius x sin(Angle)) Y = CenterPointY + (Radius x cos(Angle)) Here I simplify the "X,Y" in the C code into: CX = (Px/2) + (Py * sin(_AGL)); CY = (Py/2) + (Py * cos(_AGL)); Where CX = X and CY = Y, Px is Pixel for sb.x and Py is Pixel for sb.y, that is used to define the center point and radius with constant value, and then _AGL = Angle that in the equations are CX (_AGL), CY(_AGL)where per step is 0.5 degree, for example: X(0.5),Y(0.5) -> X(1),Y(1) -> X(1.5),Y(1.5) -> X(2),Y(2). The step angle will take effect to smoothness of the pictures that will produce a gradation of the color. To make more variant of pictures, I modify the equations into: CX = (Px/2) + ((Py/4) * sin(_AGL*_COEF)); CY = (Py/2) + ((Py/4) * cos(_AGL*_COEF)); CX = (Px/2) + ((Py/4) * sin(_AGL*_COEF) * cos(_AGL*_COEF)); CY = (Py/2) + ((Py/4) * cos(_AGL*_COEF) * sin(_AGL*_COEF)); _COEF is the coefficient where the value is by random method per picture to produce many more variants of pictures that consist of curves. There are 3 type coefficients, that are _COEF for CURVE I, _COEF2 for CURVE II, and _COEF3 for CURVE III. The _COEF is how many times the CURVE I rolls around the center point per period, the _COEF2 is how many times the CURVE II rolls around the CURVE I per period, and the _COEF3 is how many times the CURVE III rolls around the CURVE II per period. Remember, circle on circle on circle where this is the basic principle to produce the mathematical art. The program is divided into 3 types of art pictures, they are carving A, carving B, and graffiti where technically, they make no odds, but I am just exchanging between the equations. You can see the algorithm below that is actually so simple and there is no complex matter, and you can easily learn it. FIRST, this is piece of a function (procedure) to make graffiti art (you can see the complete source code/MathArtAnimation()): int Px,Py; static int _COEF = -7+rand()%6, _COEF2 = -7+rand()%14, _COEF3 = -10+rand()%16, RandPnt1 = 2, RandPnt2 = 2, RandPnt3 = 2, RandPnt4 = 2; static double _AGL = 0; double CX,CY,Cx,Cy; SetTextColor(hdc, RGB(200,255,100)); _COEF = -7+rand()%6; _COEF2 = -7+rand()%14; _COEF3 = -10+rand()%16; _AGL = 0; RandPnt1 = rand()%25; RandPnt2 = rand()%25; RandPnt3 = rand()%25; RandPnt4 = rand()%25; _AGL += StepAngle(); CX = (Px/2)+(Py/4)*sin(_AGL); CY = (Py/2)+(Py/4)*cos(_AGL); if(RandPnt1%2 == 0) CX = (Px/2)+(Py/4)*sin(_AGL*_COEF); if(RandPnt1%3 == 0) CY = (Py/2)+(Py/4)*cos(_AGL*_COEF); ///////////////////////////////////////////////////////////////CURVE I Substitute the CX to Cx and CY to Cy: Cx = CX+(Py/7)*sin(_AGL*_COEF2); Cy = CY+(Py/7)*cos(_AGL*_COEF2); if(RandPnt2%2 == 0) Cx = CX+(Py/7)*sin(_AGL*_COEF)*cos(_AGL*_COEF2); if(RandPnt2%3 == 0) Cy = CY+(Py/7)*cos(_AGL*_COEF)*sin(_AGL*_COEF2); if(RandPnt2%4 == 0) Cx = CX+(Py/7)*cos(_AGL*_COEF2); if(RandPnt2%5 == 0) Cy = CY+(Py/7)*sin(_AGL*_COEF2); ///////////////////////////////////////////////////////////////CURVE II Substitute the Cx to Cx and Cy to Cy: if(RandPnt3%4 == 0) Cx = Cx+(Py/15)*sin(_AGL*_COEF3); if(RandPnt3%3 == 0) Cx = Cx+(Py/20)*cos(_AGL*_COEF3); if(RandPnt3%2 == 0) Cx = Cx+(Py/15)*sin(_AGL*_COEF2)*cos(_AGL*_COEF3); if(RandPnt4%4 == 0) Cy = Cy+(Py/15)*cos(_AGL*_COEF3); if(RandPnt4%3 == 0) Cy = Cy+(Py/20)*sin(_AGL*_COEF3); if(RandPnt4%2 == 0) Cy = Cy+(Py/15)*cos(_AGL*_COEF2)*sin(_AGL*_COEF3); ///////////////////////////////////////////////////////////////CURVE III pen = CreatePen(PS_SOLID,1,RGB(245, 255, 200)); for(n=0; n<=Speed()-10; n++) The picture below is one of the sample pictures that is generated with the algorithm above which the polynomial equations are selected by random method where the selected equations are: CX = (Px/2)+(Py/4)*sin(_AGL); CY = (Py/2)+(Py/4)*cos(_AGL*_COEF); Substitute to > Cx = CX+(Py/7)*sin(_AGL*_COEF2); Cy = CY+(Py/7)*sin(_AGL*_COEF2); Substitute to > Cx = Cx+(Py/15)*sin(_AGL*_COEF3); Cy = Cy+(Py/15)*cos(_AGL*_COEF3); Picture Sample Other Picture Samples SECOND, this is piece of a function (procedure) to make carving A (you can see the complete source code/MathArtAnimation_2()): int Px,Py; static int COEF = -13+rand()%10, RAND = rand()%12, COEF2 = -13+rand()%10, RAND2 = rand()%12; static double AGL = 0; double CX,CY,Cx,Cy; SetTextColor(hdc, RGB(200,255,100)); if(AGL==720) { COEF = -17+rand()%15; RAND = rand()%12; COEF2 = -17+rand()%15; RAND2 = rand()%12; AGL = 0; AGL += 0.5; if(RAND%6 == 0) { CX = (Px/2)+(Py/4)*sin(AGL*COEF); CY = (Py/2)+(Py/4)*cos(AGL*COEF); if(RAND%5 == 0) { CX = (Px/2)+(Px/4)*sin(AGL*COEF)*cos(AGL*COEF); CY = (Py/2)+(Py/4)*cos(AGL*COEF); if(RAND%4 == 0) { CX = (Px/2)+(Px/4)*cos(AGL*COEF); CY = (Py/2)+(Py/4)*sin(AGL*COEF); if(RAND%3 == 0) { CX = (Px/2)+(Px/4)*sin(AGL*COEF); CY = (Py/2)+(Py/4)*cos(AGL*COEF)*sin(AGL*COEF); if(RAND%2 == 0) { CX = (Px/2)+(Py/4)*cos(AGL*COEF); CY = (Py/2)+(Py/4)*sin(AGL*COEF); else { CX = (Px/2)+(Px/4)*sin(AGL*COEF); CY = (Py/2)+(Py/4)*cos(AGL*COEF); ///////////////////////////////////////////////////////////CURVE I Substitute the CX to Cx and CY to Cy: if(RAND2%6 == 0) { Cx = CX+(Py/5)*sin(AGL*COEF2); Cy = CY+(Py/5)*cos(AGL*COEF2); if(RAND2%5 == 0) { Cx = CX+(Px/5)*sin(AGL*COEF2)*cos(AGL*COEF2); Cy = CY+(Py/5)*cos(AGL*COEF2); if(RAND2%4 == 0) { Cx = CX+(Px/5)*cos(AGL*COEF2); Cy = CY+(Py/5)*sin(AGL*COEF2); if(RAND2%3 == 0) { Cx = CX+(Px/5)*sin(AGL*COEF2); Cy = CY+(Py/5)*cos(AGL*COEF2)*sin(AGL*COEF2); if(RAND2%2 == 0) { Cx = CX+(Py/5)*cos(AGL*COEF2); Cy = CY+(Py/5)*sin(AGL*COEF2); else { Cx = CX+(Px/5)*sin(AGL*COEF2); Cy = CY+(Py/5)*cos(AGL*COEF2); ///////////////////////////////////////////////////////////CURVE II pen = CreatePen(PS_SOLID,1,RGB(245, 255, 200)); RoundRect(hdc, Cx,Cy, Cx+5,Cy+4, Cx+5,Cy+4); The picture below is one of the sample pictures that is generated with the algorithm above which the polynomial equations are selected by random method where the selected equations are: CX = (Px/2)+(Py/4)*cos(AGL*COEF); CY = (Py/2)+(Py/4)*sin(AGL*COEF); Substitute to > Cx = CX+(Px/5)*sin(AGL*COEF2); Cy = CY+(Py/5)*cos(AGL*COEF2); Picture sample Other Picture Samples THIRD, carving B (you can see the function(procedure) in the source code/MathArtAnimation_3()): Picture Samples About the Author • Mark Daniel • A Freelancer of C Programming and Architecture. Flight Simulator is my hobby. • Indonesia
{"url":"http://www.codeproject.com/Articles/300388/Using-C-to-blend-Mathematics-and-Art-When-Math-goe?msg=4143372","timestamp":"2014-04-21T03:26:35Z","content_type":null,"content_length":"130527","record_id":"<urn:uuid:a97fd403-18b7-4ff6-86cc-043583dd0433>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Mapping Torus of a Manifold is a Manifold. Mapping Torus of a Manifold is a Manifold. I'm not saying you are not sharp. I am suggesting you have not played around enough with these ideas. But ask yourself, did you try the exercise I just gave, or are you just in the habit of asking your friends, or looking in your book? get your pencil working. Here are some suggestions for practice: In all mathematics involving mappings, especially therefore geometry and algebra, it is important to practice mappings along with spaces. In particular every construction of a new space from old ones, also involves constructing new mappings from old ones. Whenever one learns a new construction for spaces, and hence also mappings, one should learn the relationship between the new mappings and the old. E.g. a product of two spaces X,Y is a new space XxY, but also it comes with a pair of maps XxY-->X and XxY-->Y such that a function Z-->XxY is admissible (e.g. continuous) as a map if and only if both composed functions Z-->XxY-->X and Z-->XxY-->Y are admissible mappings. Moreover every pair of continuous mappings Z-->X and Z-->Y induces a unique continuous map Z-->XxY, with compositions Z--> XxY-->X and Z-->XxY-->Y equal to the original pair of maps. This is the characteristic property of the product topology. As for a quotient space of a space X by a subspace A, the topology on the space X/A is defined as all sets whose preimages under the natural projection X-->X/A are open in X. This is the largest topology on X/A making that projection continuous. Then the key to understanding maps out of the quotient space X/A is this: a function X/A-->Z is continuous if and only if the composition X-->X/A-->Z is continuous. As a consequence we learn to form continuous maps out of X/A as follows: Any function from X-->Z that is continuous and which sends all points of A to a single point of Z, induces a continuous map X/A-->Z, (and conversely). check this, and all other claims made here. (i am not working out the proofs as i go except in my head, so i could make a mistake here, but there is a pretty fair probability i am getting it mostly correct. but the only way to be sure is to do the proof yourself.) As a consequence any continuous map X-->Y such that the subspace A of X maps into the subspace B of Y, induces a continuous map X/A-->Y/B. So what is a homeomorphism? It is a continuous map with a continuous inverse. Suppose X-->Y is a homeomorphism mapping the subspace A homeomorphically to the subspace B. Then also the inverse g of f maps Y-->X homeomorphically to X and maps the subspace B to A. hence we get two induced continuous mappings X/A-->Y/B which you should check are also inverse to each other. In particular this answers lavinia’s conjecture in the affirmative, i.e. under these conditions, X/A is homeomorphic to Y/B. Since the result follows immediately from the basic definitions and properties of quotients, it might be called elementary, but i will not call it trivial, because to become so first we have to master the concepts, which takes some work. also one has to be taught properly. the word trivial is used mostly as an insult in my experience. Here are some more exercises on quotient spaces. Check that in R/Q, the quotient of the reals by the rationals, that the point [0] is dense. I.e. the only closed subset of R/Q containing [0] is the whole space. or prove otherwise. in the same spirit, if ≈ is an equivalence relation on X, how are maps out of X/≈ related to maps out of X? Is a function X/≈-->Z continuous whenever the composition X-->X/≈-->Z continuous? Does a map out of X-->Z induce a continuous map out of X/≈ whenever it send equivalent points of X to the same point of Z? Can you use this to describe a map out of a nbhd of a point on the problematic set of a mapping torus, that defines a chart?
{"url":"http://www.physicsforums.com/showthread.php?p=4168764","timestamp":"2014-04-17T15:37:23Z","content_type":null,"content_length":"45336","record_id":"<urn:uuid:e4bd4a44-297c-4c87-911a-3405ef379c42>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Mickaël Launay Mickaël Launay PhD student in probability Laboratoire d'Analyse, Topologie et Probabilités Centre de Mathématiques et d'Informatique 39, rue Frédéric Joliot-Curie 13453 Marseille, cedex 13 E-mail : mlaunay Office : R115 Welcome to my homepage ! I'm currently a Ph.D. student under the supervision of Vlada Limic in Marseille, France. I'm writing my thesis on interacting urn models. Submitted in december 2011. Abstract.In classical urn models, one usually draws one ball with replacement at each time unit and then adds one ball of the same colour. Given a weight sequence w[k], the probability of drawing a ball of a certain colour is proportional to w[k] where k is the number of balls of this colour. A classical result states that an urn fixates on one colour after a finite time if an only if the sum of all 1/w[k] is finite. In this paper we shall study the case when at each time unit we draw with replacement a number d>2 of balls and then add d new balls of matching colours. The main goal is to prove that the result in the case of unique drawing generalises assuming in addition that w[k] is non-decreasing. My first article ! Submitted. Abstract. The aim of this paper is to study the asymptotic behavior of strongly reinforced interacting urns with partial memory sharing. The reinforcement mechanism considered is as follows: draw at each step and for each urn a white or black ball from either all the urns combined (with probability p) or the urn alone (with probability 1-p) and add a new ball of the same color to this urn. The probability of drawing a ball of a certain color is proportional to w[k] where k is the number of balls of this color. The higher the p, the more memory is shared between the urns. The main results can be informally stated as follows: in the exponential case w[k]=ρ^k, if p≥1/2 then all the urns draw the same color after a finite time, and if p<1/2 then some urns fixate on a unique color and others keep drawing both black and white balls. Reinforced Random Walks My master dissertation supervised by Vlada Limic. In french. Abstract. The aim of this thesis is to present differents results related to Sellke conjecture, which state that a strongly reinforced random walk on a graph G is almost surely attracted by a single edge. This work is mainly adapted form articles of Vlada Limic and Pierre Tarrès. Limit theorems for Galton-Watson processes Magister disertation with Adrien Joseph and supervised by Jean-François Le Gall. In french Abstract. If (Z[n])[n] is a Galton-Watson process with natality L satisfying E[L]=m>1 (supercritical case), we study the limit of Z[n]/m^n that we denote by W. Surprisingly, the condition W=0 is not equivalent to the extinction of the process (Z[n]=0 after a finite time). The Kesten-Stigum theorem states that for natality laws such that E[Llog^+(L)] is infinite, we have W=0 almost surely even on non extinction of the process. This thesis is based on chapter 11 of the book Probability on trees and networks by Russell Lyons and Yuval Peres. Last update : March 8, 2011
{"url":"http://www.cmi.univ-mrs.fr/~mlaunay/index.en.html","timestamp":"2014-04-19T19:34:14Z","content_type":null,"content_length":"8360","record_id":"<urn:uuid:20d3e258-09fa-4c93-a5b3-571d2d74d725>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Vauxhall Science Tutor Find a Vauxhall Science Tutor ...I am an experienced mathematics and science teacher, with a wide range of interests and an extensive understanding of physics and mathematics. I love to talk with students of all ages about these subjects, and I would like to help you to appreciate their fundamental simplicity and beauty while g... 25 Subjects: including astronomy, statistics, discrete math, logic ...I graduated cum laude from Boston College in 2011 with a degree in political science. I began tutoring students in high school, and my students have generally shown a significant improvement in their grades. I excel in one-on-one tutoring sessions, and am a very patient and understanding tutor. 16 Subjects: including philosophy, reading, writing, algebra 1 ...Unsure what to do next on your dissertation? Learning SPSS or SAS but stuck? I find research fun and make stats simple. 4 Subjects: including biostatistics, statistics, SPSS, SAS ...My goal is to not only get you to speak, read and understand Italian, but also to have fun and learn about the culture. Don't wait! Start your language learning journey with me today!I have a passion for cooking. 10 Subjects: including sociology, psychology, ESL/ESOL, GED ...I am currently working in the recovery room at a large hospital northern New Jersey and maintain my critical care certification. Please feel free to request copies of my license and certification. I have helped my niece pass the NCLEX exam and am willing to help others as well. 9 Subjects: including anatomy, grammar, reading, ESL/ESOL Nearby Cities With Science Tutor Chatham Twp, NJ Science Tutors Chatham, NJ Science Tutors Garwood, NJ Science Tutors Hillside, NJ Science Tutors Irvington, NJ Science Tutors Kenilworth, NJ Science Tutors Maplecrest, NJ Science Tutors Maplewood, NJ Science Tutors Millburn Science Tutors Mountainside Science Tutors Short Hills Science Tutors South Orange Science Tutors Springfield, NJ Science Tutors Union Center, NJ Science Tutors Union, NJ Science Tutors
{"url":"http://www.purplemath.com/Vauxhall_Science_tutors.php","timestamp":"2014-04-20T09:14:10Z","content_type":null,"content_length":"23563","record_id":"<urn:uuid:13446cc8-24b2-4211-85cb-f618fd5285d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) Applying methods of Calculus of Variations, we introduce and justify a variant of Kirchhoff theory for thin 3d shells, valid in presence of residual stresses. The effective 2d energy is the relative bending, appropriately modified while in the varying thickness and/or incompressible materials situation We rigorously derive the von Karman shell theory (and the resulting von Karman equations) for incompressible materials. Our approach is variational and starts from the general nonlinear 3-dimensional elastic energy functional. Our only assumption is that the midsurface of the shell enjoys the following approximation property: C3 first order infinitesimal isometries are dense in the space of all W ^{2, 2} infinitesimal isometries. The class of surfaces with this property includes: flat surfaces, convex surfaces, developable surfaces and rotationally invariant surfaces. The most accessible problems for the mechanics of deformable solid bodies are those for thin bodies, namely, rods and shells, because their equations respectively have but one and two independent spatial variables. There is a voluminous literature devoted to the derivations of various models for such bodies undergoing small deformations. On the other hand, geometrically exact theories are derived directly from fundamental principles. They readily accommodate general nonlinear material response. This lecture describes solutions of a variety of steady-state and dynamical geometrically exact problems, emphasizing the appearance of thresholds in constitutive response that separate qualitatively different behaviors. We study the wrinkling of a thin elastic sheet caused by a prescribed non-Euclidean metric. This is a model problem for the folding patterns seen, e.g., in torn plastic membranes and the leaves of plants. Following the lead of other authors we adopt a variational viewpoint, according to which the wrinkling is driven by minimization of an elastic energy subject to appropriate constraints and boundary conditions. Our main goal is to identify the scaling law of the minimum energy as the thickness of the sheet tends to zero. This requires proving an upper bound and a lower bound that scale the same way. The upper bound is relatively easy, since nature gives us a hint. The lower bound is more subtle, since it must be ansatz-free. It is well known that elastic sheets loaded in tension will wrinkle, with the length scale of wrinkles tending to zero with vanishing thickness of the sheet [Cerda and Mahadevan, Phys. Rev. Lett. 90, 074302 (2003)]. We give the first mathematically rigorous analysis of such a problem. Since our methods require an explicit understanding of the underlying (convex) relaxed problem, we focus on the wrinkling of an annular sheet loaded in the radial direction [Davidovitch et al, arxiv 2010]. While our analysis is for that particular problem, our variational viewpoint should be useful more generally. Our main achievement is identification of the scaling law of the minimum energy as the thickness of the sheet tends to zero. This requires proving an upper bound and a lower bound that scale the same way. We prove both bounds first in a simplified Kirchhoff-Love setting and then in the nonlinear three-dimensional setting. To obtain the optimal upper bound, we need to adjust a naive construction (one family of wrinkles superimposed on the planar deformation) by introducing cascades of wrinkles. The lower bound is more subtle, since it must be ansatz-free. Differential growth processes play a prominent role in shaping leaves and biological tissues. Using both analytical and numerical calculations, we consider the shapes of closed, elastic strips which have been subjected to an inhomogeneous pattern of swelling. The stretching and bending energies of a closed strip are frustrated by compatibility constraints between the curvatures and metric of the strip. To analyze this frustration, we study the class of “conical” closed strips with a prescribed metric tensor on their center line. The resulting strip shapes can be classified according to their number of wrinkles and the prescribed pattern of swelling. We use this class of strips as a variational ansatz to obtain the minimal energy shapes of closed strips and find excellent agreement with the results of a numerical bead-spring model. Wrinkling is a fundamental mechanism for the relief of compressive stress in thin elastic sheets. It is natural to consider wrinkling as a (supercritical) instability of an appropriate flat, highly-symmetric state of the sheet. This talk will address the subtlety of this approach by considering wrinkling in the Lame` geometry: an annular sheet under radial tension. This axi-symmetric system seems to be the most elementary, yet nontrivial extension of Euler buckling (that emerges under uniaxial compression). Nevertheless, despite its apparent simplicity, the Lame` geometry exhibits a dramatic change of the wrinkling pattern beyond the instability threshold. I will address the distinct features of wrinkling patterns in the near-threshold (NT) and far-from-threshold (FFT) regimes, and will show how they emanate from different asymptotic expansions of Foppl-van-Karman (FvK) equations in these two limits. Our systematic theory of the FFT regime unifies the old “membrane limit” approach for the asymptotic stress field (Wagner, Stein&Hedgepeth, Pipkin) with more recent scaling ideas for the wavelength of wrinkles (Cerda&Mahadevan). Combining the analysis of these asymptotic regimes allows us to construct a complete “phase diagram” for wrinkling patterns in the Lame` geometry that sheds new light on experiments in this field. I will discuss general lessons that can be extracted from this analysis, and will conclude with some conjectures on possible universal aspects of this study. Despite an almost two thousand year history, origami, the art of folding paper, remains a challenge both artistically and scientifically. Traditionally, origami is practiced by folding along straight creases. A whole new set of shapes can be explored, however, if, instead of straight creases, one folds along arbitrary curves. We present a mechanical model for curved fold origami in which the energy of a plastically-deformed crease is balanced by the bending energy of developable regions on either side of the crease. The language of Riemannian geometry arises naturally in the elastic description of amorphous solids, yet in the long history of elasticity it was put to very little practical use as a computational tool. In recent years the usage of Riemannian terminology has been revived, mostly in the context of incompatible irreversible deformations. In this talk I will compare different approaches to the description of growth and irreversible deformations focusing on the metric description of incompatible growth. I will also discuss the appropriate reduced theories for slender bodies. Particularly, I will present a specific problem inspired by strictureplasty in which the metric approach elucidates the path to solution. In this poster we illustrate that isometric immersions of the hyperbolic plane into three dimensional Euclidean space with a periodic profile exist. These surfaces are piecewise smooth but have vastly lower bending energy then their smooth counterparts and could explain why periodic hyperbolic surfaces are proffered in nature. We present a theoretical study of free non-Euclidean plates with a disc geometry and a prescribed metric that corresponds to a constant negative Gaussian curvature. We take the equilibrium configuration taken by the these sheets to be a minimum of a Foppl Von-Kàrmàn type functional in which configurations free of in plane stretching correspond to isometric immersions of the metric. We show for all radii there exists low bending energy configurations free of any in plane stretching that obtain a periodic profile. The number of periods in these configurations is set by the condition that the principle curvatures of the surface remain finite and grows approximately exponentially with the radius of the disc. Among the many typical biological structures, cylindrical and tubular structures such as hyphae, stems, roots, blood vessels, airways, oesophagus, and tree trunks abound in nature. Tubes are typically used for transport, mechanical support or both. Their morphogenesis usually involves complex genetic and biochemical processes mediated by mechanical forces. In many cases, tubes have (at least) two layers glued together. Each layer has different mechanical and geometric properties. Moreover, due to growth taking place in the layers, each tube may also develops residual stresses. In this talk, I will be discussing the range of mechanical properties and functions that can be obtained by tuning these different properties within the framework of nonlinear morphoelasticity. In particular, I will discuss how differential axial growth can be used to improved structural stiffness (with examples from plants and arteries), how relative radial growth of the tube can either induce hollowing (as found in plant aerenchyma), or generate mucosal folding (as found in oesophagus and airways), and how anisotropy can induce handedness reversal (as found in phycomyces). Given time, I will also discuss the inverse problem of designing a tube with desired mechanical properties through growth and remodelling. I propose Jell-O as a building material. The concept stems from a question in blobby architecture on transformable walls. Phase transition gels are known to expand and contract up to a thousand fold. Tiny wireless stimulators mixed inside these gels could direct local shape changes. A sum of small volume changes would, in theory, yield the overall shape desired. The immediate goal of creating a set of prototype gel models was to provide visual aids as a basis for discussion with other disciplines. Starting by experimenting with rigid and flexible molds, a series of 10-centimeter jiggly gel objects was formed and photographed. Next, as a proof of concept, a 1-meter pneumatic robot was designed and constructed to demonstrate motion via selective volume displacement. Following the successes of the gel mold objects and robot control experiments, the two components will now be mixed for preliminary tests of a “slosh-bot.” The mechanics of a thin elastic sheet can be explored variationally, by minimizing the sum of "membrane" and "bending" energy. For some loading conditions, the minimizer develops increasingly fine-scale wrinkles as the sheet thickness tends to 0. While the optimal wrinkle pattern is probably available only numerically, the qualitative features of the pattern can be explored by examining how the minimum energy scales with the sheet thickness. I will introduce this viewpoint by discussing past work on simpler but related problems. Then I'll discuss recent work with Hoai-Minh Nguyen, concerning the cascade of wrinkles observed by J. Huang et al at the edge of a floating elastic film (Phys Rev Lett 105, 2010, 038302). I will discuss some simple mathematical problems associated with the shaping of sheets inspired by the buckling of graphene, the rippling of leaf edges, the blooming of flowers, and the coiling of guts. One particular focus is the role of boundary conditions at a free edge, and a second is the question of inverse problems inspired by optimal design for tissue engineering. Many of the challenges of finding the shapes of elastic surfaces have first cousins in the world of pattern formation. I will try to sketch out the connections and explain where there are similarities and where there are profound differences even though the equations and the free energies look much the same. If time permits, and with the indulgence of the audience, I shall also tell you how a three dimensional version of the ideas give rise to objects, "quarks and leptons," with spin and charge symmetries which arise because of symmetry breaking and which do not require to be put in by hand. Non-Euclidean thin plates arise in different circumstances: differential growth, swelling, shrinking or plastic deformations can set the geometry of an elastic body to a preferred "target metric". In our model, the latter plays the main role in determining the shape of the plate. We use analytical techniques in the context of calculus of variations to predict the behavior of these structures for their very thin limits. We will moreover discuss a disparity between the theoretical analysis and experimental data, in which a sharp qualitative contrast between the negative and positive constant curvature cases has been observed. What three-dimensional shapes can be made with an elastic film of finite thickness upon which an isotropic, but inhomogeneous, pattern of growth has been prescribed? I will describe both theoretical progress in addressing this question, and an experimental realization in a swelling polymer film in which a metric is prescribed by modulating the local polymer cross-link density. By imposing a pattern of swelling dots, similar to half-toning in an inkjet printer, we can prescribe arbitrary swelling patterns. This system allows us to directly put mathematics to the experimental test. I will finally present a simple swelling geometry from which more complex shapes can be built, and rationalize some of the potentially counterintuitive behavior observed experimentally. We derive effective theories for heterogeneous multilayers from three-dimensional nonlinear elasticity by Gamma-convergence. Such materials have been used recently for a self-induced fabrication of nanotubes. The energy minimizers of the limiting functional turn out to be cylinders (scrolls) whose winding direction and radius depends on the equilibrium misfit of the specimen's layers. Taking a non-interpenetration condition into account we find spirals and double spirals as energetically optimal shapes. Several features, such as d-cones, minimal ridges, developable patches, and collapsed compressive stress, occur regularly in the the configuration of elastic sheets. We dub such features "building blocks." By understanding the shape of an elastic sheet as an amalgamation of these building blocks, we can understand its behavior without fully solving the governing equations. Here, we consider the building blocks that make up a wrinkle cascade. Such a cascade occurs when an elastic sheet is subject to confinement, so that it buckles at some optimal wavelength, but is required to have another wavelength at one end. The transition between imposed and optimal wavelength occurs in a cascade going through several intermediate wavelengths. We simulate a single generation of this cascade and demonstrate that it is composed of two different building blocks: a focused-stress feature reminiscent of a d-cone and a "diffuse-stress" feature. The former is characterized by a geometrical constraint (inextensibility), while the latter is governed by a mechanical constraint: the dominance of a single component of the stress tensor. We will discuss how boundary conditions affect which building blocks are chosen. I will present our theoretical framework and experimental techniques, developed for constructing thin elastic sheets that undergo a known, nonuniform active deformation (or "growth") and calculating their equilibrium configurations. The poster includes two limit examples: 1) Non-Euclidean plates, in which the lateral growth is uniform along the thickness of the sheet, but varies across its surface. 2) An incompatible shell, in which the lateral growth is uniform across the surface, but varies along the sheet thickness, leading to double spontaneous curvature. "interesting" configurations and transitions, relevant to biological and chemical systems will be presented Many natural structures are made of soft tissue that undergoes complicated shape transformations as a result of the distribution of local active deformation of its "elements". Currently, the ability of mimicking this shaping mode in man-made structures is poor. I will present some results of our study of actively deforming thin sheets. We formulated a covariant elastic theory from which we derive an approximate 2D plate/shell theory for sheets with intrinsic incompatible metric and curvature tensors. With this theory we study selected cases of special interest. Experimentally, we use environmentally responsive gel sheets that adopt prescribed metrics upon induction by environmental conditions. With this system we study the shaping mechanism in different cases of imposed metrics and curvature. The generated sheets can be viewed as primitive soft machines. Finally, we study different cases of plant mechanics, connecting between the local growth tensor of the tissue and the evolution of the global shape of an organ. I will propose techniques of persistence homology for use in analyzing growing cells. A biochemomechanical oscillator has been developed in which a clamped, pH-sensitive hydrogel membrane containing N-isopropylacrylamide (NIPAAm) and methacrylic acid (MAA) separates a chamber containing glucose oxidase from a pH controlled external medium containing a constant concentration of glucose. This system undergoes oscillations in intrachamber pH and concomitant on/off switching of glucose permeation through the membrane due to a nonlinear feedback instability between the enzyme-mediated reaction, which converts glucose to hydrogen ion, and the swelling/glucose-permeability characteristic of the membrane. Oscillation period increases with time due to buildup in the chamber of a buffering product, gluconate ion, and eventually oscillations cease. During operation there is a fluctuating pH gradient between the chamber and the external medium. We have gathered experimental evidence that a sustained pH gradient leads to stress induced pattern formations in the hydrogel due to phase separation, which we believe may be responsible for cessation of oscillations. We have also shown pattern development in clamped thermally sensitive hydrogel membranes based on NIPAAm without MAA), with a temperature gradient applied across the membrane. A mathematical description of the observed phenomena is desirable. We consider multi-phase equilibria of elastic solids under anti-plane shear.We use global bifurcation methods to determine paths of equilibria in the presence of small interfacial energy. In an earlier paper the rigorous existence of global bifurcating branches was established. The stability of the solutions along these branches are difficult to determine. By an appropriate numerical representation of the second variation we show, that phase-tip splitting at the boundary (which is typically observed in experiments with shape memory alloys) appears for stable solutions of our Shape memory alloys and other active materials undergo displacive phase transitions which change the symmetry of the crystal lattice through a diffusionless coordinated motion of atoms. A signature feature of these materials is the hysteresis they exhibit in response to cyclic loading. The dissipation is due to propagating phase boundaries that can be represented at the continuum level as surfaces of discontinuity. Classical elastodynamics admits nonzero dissipation on moving discontinuities but provides no information about its origin and kinetics. In the presence of subsonic discontinuities, this results in an ill-posed initial-value problem. One can extract the missing information about phase boundary kinetics and regularize the continuum model by considering its natural discrete prototype. This leads to an incredibly rich lattice model that also describes other interesting phenomena such as phase nucleation and evolution of microstructures. In this talk I will describe some recent work in this direction. Thin elastic sheets are usually modeled by variational problems for an energy with two scales, a (strong) stretching energy and a (weak) bending energy. A useful paradigm in understanding the behavior of theses sheets under various loadings/boundary conditions is the following -- Singularites/microstructure in the observed configurations of thin sheets reflect "geometric incompatibility", that is the non-existence of admissible, sufficiently smooth isometric immersions (zero stretching energy test functions). Using specific examples, and a combination of rigorous results and conjectures motivated by numerical computation, I will try to argue that the relation between the observed geometry of thin elastic sheets and the existence/nonexistence of isometric immersions is much more subtle. There are different sources of residual stresses in solids. Nonuniform temperature distributions and defects (e.g. dislocations) have been of interest in mechanics in the last few decades (and of course bulk growth more recently). Distributed dislocations were geometrically studied by Kondo and Bilby (among others) in the 1950s. Dislocation density tensor has been identified with torsion of an affine connection (of a flat material manifold) in the literature. However, the successful phenomenological models in finite plasticity have been non-geometric and overwhelmingly based on a multiplicative decomposition of deformation gradient into elastic and plastic parts. This idea has been the main kinematic assumption in the early theories of bulk growth as well. In this talk I will outline a theory of anelasticity in which material manifold has an evolving geometry. In this framework elasticity is a very special case for which geometry of the material manifold is time independent. A connection will be made between Cartan's moving frames and decomposition of deformation gradient. The Riemannian manifold corresponding to a flat Riemann-Cartan material manifold will be defined. As an example of its application, I will show how to construct the material manifold of a single screw dislocation. Then the residual stress field of a single screw dislocation in an incompressible nonlinear elastic solid will be obtained. For certain martensitic phase transformations, one observes a close relation between the width of the thermal hysteresis and the compatibility of two phases. The latter is in the context of geometrically non-linear elasticity measured by the deviation of the middle eigenvalue of the transformation stretch matrix from one. This observation forms the basis of a theory of hysteresis that assigns an important role to the energy of the transition layer (Zhang, James, Müller, Acta mat. 57(15):4332–4352, 2009). Following this ansatz, we study the energy barriers leading to hysteresis, and analyze the shapes of energetically optimal transition layers for low hysteresis alloys.
{"url":"http://ima.umn.edu/2010-2011/SW5.16-20.11/abstracts.html","timestamp":"2014-04-16T22:30:26Z","content_type":null,"content_length":"60560","record_id":"<urn:uuid:0aeca3c2-4ba3-4bf9-8a85-71f6f842f03b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Report on the functional programming language Haskell: A non-strict, purely functional language: Version 1.2 Results 11 - 20 of 90 - In Haskell Workshop , 1997 "... When type classes were first introduced in Haskell they were regarded as a fairly experimental language feature, and therefore warranted a fairly conservative design. Since that time, practical experience has convinced many programmers of the benefits and convenience of type classes. However, on occ ..." Cited by 89 (8 self) Add to MetaCart When type classes were first introduced in Haskell they were regarded as a fairly experimental language feature, and therefore warranted a fairly conservative design. Since that time, practical experience has convinced many programmers of the benefits and convenience of type classes. However, on occasion, these same programmers have discovered examples where seemingly natural applications for type class overloading are prevented by the restrictions imposed by the Haskell design. It is possible to extend the type class mechanism of Haskell in various ways to overcome these limitations, but such proposals must be designed with great care. For example, several different extensions have been implemented in Gofer. Some of these, particularly the support for multi-parameter classes, have proved to be very useful, but interactions between other aspects of the design have resulted in a type system that is both unsound and undecidable. Another illustration is the introduction of constructor cla... , 1998 "... Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation proces ..." Cited by 84 (11 self) Add to MetaCart Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation process in the form of program transformations. This paper reports on our practical experience of the transformational approach to compilation, in the context of a substantial compiler. , 1993 "... We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k >= 3 o ..." Cited by 78 (14 self) Add to MetaCart We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k >= 3 of this stratification. While it was already known that typability is decidable at rank 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show howto use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is the publication of a simple solution procedure for acyclic semi-unification. - In Eighteenth Annual ACM Symposium on Principles of Programming Languages , 1991 "... We present a type inference system for FL based on an operational, rather than a denotational, formulation of types. The essential elements of the system are a type language based on regular trees and a type inference logic that implements an abstract interpretation of the operational semantics of F ..." Cited by 65 (7 self) Add to MetaCart We present a type inference system for FL based on an operational, rather than a denotational, formulation of types. The essential elements of the system are a type language based on regular trees and a type inference logic that implements an abstract interpretation of the operational semantics of FL. We use a non-standard approach to type inference because our requirements---using type information in the optimization of functional programs---differ substantially from those of other type systems. 1 Introduction Compilers derive at least two benefits from static type inference: the ability to detect and report potential run-time errors at compile-time, and the use of type information in program optimization. Traditionally, type systems have emphasized the detection of type errors. Statically typed functional languages such as Haskell [HWA*88] and ML [HMT89] include type constraints as part of the language definition, making some type inference necessary to ensure that type constraints ... - Annals of Pure and Applied Logic , 1998 "... Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design, respectively. Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions ..." Cited by 58 (4 self) Add to MetaCart Girard and Reynolds independently invented System F (a.k.a. the second-order polymorphically typed lambda calculus) to handle problems in logic and computer programming language design, respectively. Viewing F in the Curry style, which associates types with untyped lambda terms, raises the questions of typability and type checking . Typability asks for a term whether there exists some type it can be given. Type checking asks, for a particular term and type, whether the term can be given that type. The decidability of these problems has been settled for restrictions and extensions of F and related systems and complexity lower-bounds have been determined for typability in F, but this report is the rst to resolve whether these problems are decidable for System F. This report proves that type checking in F is undecidable, by a reduction from semiuni cation, and that typability in F is undecidable, by a reduction from type checking. Because there is an easy reduction from typability to typ... - In Proc. European Symp. on Programming , 1996 "... Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation proces ..." Cited by 55 (4 self) Add to MetaCart Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation process in the form of program transformations. This paper reports on our practical experience of the transformational approach to compilation, in the context of a substantial compiler. The paper appears in the Proceedings of the European Symposium on Programming, Linkoping, April 1996. 1 Introduction Using correctness-preserving transformations as a compiler optimisation is a well-established technique (Aho, Sethi & Ullman [1986]; Bacon, Graham & Sharp [1994]). In the functional programming area especially, the idea of compilation by transformation has received quite a bit of attention (Appel [1992]; Fradet & Metayer [1991]; Kelsey [1989]; Kelsey & Hudak [1989]; Kranz [1988]; Steele [1978]). A ... - PROCEEDINGS OF THE 1996 ACM SIGPLAN INTERNATIONAL CONFERENCE ON FUNCTIONAL PROGRAMMING , 1997 "... Virtually every compiler performs transformations on the program it is compiling in an attempt to improve efficiency. Despite their importance, however, there have been few systematic attempts to categorise such transformations and measure their impact. In this paper we describe a particular group o ..." Cited by 54 (11 self) Add to MetaCart Virtually every compiler performs transformations on the program it is compiling in an attempt to improve efficiency. Despite their importance, however, there have been few systematic attempts to categorise such transformations and measure their impact. In this paper we describe a particular group of transformations --- the "let-floating" transformations --- and give detailed measurements of their effect in an optimising compiler for the non-strict functional language Haskell. Let-floating has not received much explicit attention in the past, but our measurements show that it is an important group of transformations (at least for lazy languages), offering a reduction of more than 30% in heap allocation and 15% in execution time. - PROCEEDINGS OF THE IEEE , 1991 "... Several programming languages arising from widely diverse practical and theoretical considerations share a common high-level feature: their basic data type is an aggregate of other more primitive data types and their primitive functions operate on these aggregates. Examples of such languages (and th ..." Cited by 51 (5 self) Add to MetaCart Several programming languages arising from widely diverse practical and theoretical considerations share a common high-level feature: their basic data type is an aggregate of other more primitive data types and their primitive functions operate on these aggregates. Examples of such languages (and the collections they support) are FORTRAN 90 (arrays), APL (arrays), Connection Machine LISP (xectors), PARALATION LISP (paralations), and SETL (sets). Acting on large collections of data with a single operation is the hallmark of data-parallel programming and massively parallel computers. These languages --- which we call collection-oriented --- are thus ideal for use with massively parallel machines, even though many of them were developed before parallelism and associated considerations became important. This paper examines collections and the operations that can be performed on them in a language-independent manner. It also critically reviews and compares a variety of collection-oriented languages... , 1991 "... I am greatly indebted to Simon Peyton Jones, my supervisor, for his encouragement and technical assistance. His overwhelming enthusiasm was of great support to me. I particularly want to thank Simon and Geoff Burn for commenting on earlier drafts of this thesis. Through his excellent lecturing Cohn ..." Cited by 48 (3 self) Add to MetaCart I am greatly indebted to Simon Peyton Jones, my supervisor, for his encouragement and technical assistance. His overwhelming enthusiasm was of great support to me. I particularly want to thank Simon and Geoff Burn for commenting on earlier drafts of this thesis. Through his excellent lecturing Cohn Runciman initiated my interest in functional programming. I am grateful to Phil Trinder for his simulator, on which mine is based, and Will Partain for his help with LaTex and graphs. I would like to thank the Science and Engineering Research Council of Great Britain for their financial support. Finally, I would like to thank Michelle, whose culinary skills supported me whilst I was writing-up.The Imagination the only nation worth defending a nation without alienation a nation whose flag is invisible and whose borders are forever beyond the horizon a nation whose motto is why have one or the other when you can have one the other and both , 1997 "... Context-sensitive rewriting is a simple restriction of rewriting which is formalized by imposing fixed restrictions on replacements. Such a restriction is given on a purely syntactic basis: it is (explicitly or automatically) specified on the arguments of symbols of the signature and inductively ..." Cited by 43 (30 self) Add to MetaCart Context-sensitive rewriting is a simple restriction of rewriting which is formalized by imposing fixed restrictions on replacements. Such a restriction is given on a purely syntactic basis: it is (explicitly or automatically) specified on the arguments of symbols of the signature and inductively extended to arbitrary positions of terms built from those symbols. Termination is not only preserved but usually improved and several methods have been developed to formally prove it. In this paper, we investigate the definition, properties, and use of context-sensitive rewriting strategies, i.e., particular, fixed sequences of context-sensitive rewriting steps. We study how to define them in order to obtain efficient computations and to ensure that context-sensitive computations terminate whenever possible. We give conditions enabling the use of these strategies for root-normalization, normalization, and infinitary normalization. We show that this theory is suitable for formalizing ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=180475&sort=cite&start=10","timestamp":"2014-04-21T01:43:20Z","content_type":null,"content_length":"39483","record_id":"<urn:uuid:c5daf892-af62-4cdd-b5a4-e5977b37c83b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Archim 1.1 Archim is a program for drawing the graphs of all kinds of functions. You can define a graph explicitly and parametrically, in polar and spherical coordinates, on a plane and in space (surface). Archim will be useful for teachers and students. License Freeware (Free) Date Added 06/05/2006 Price USD $0.00 Category Education / Mathematics Filesize 623.8 KB Author Stochastic Lab Archim is a program for drawing the graphs of all kinds of functions. You can define a graph explicitly and parametrically, in polar and spherical coordinates, on a plane and in space (surface). Archim will be useful for teachers and students, as well as for everyone who is interested min geometry. With Archim, you will draw the graph of any function and form, just use your imagination. Archim has a wizard making it easier to draw graphs and more than 30 various sample functions. Sends us the formula of a beautiful and original graph and you will get Archim for free! Navigation: The working area of the program can be conventionally divided into two parts: you enter your formula and its parameters in the "Function" area, while the graph of this function is drawn in the "Graph" section. You can use your mouse to rotate the graph in any direction (with the left mouse button, you can rotate it left/right and up/down, while with the right mouse button, you can rotate it clockwise and counterclockwise). You can change the color of the graph and the method of filling (for a surface), background and grid colors, the scale and perspective level of the displayed Platform:Windows 95, Windows 98, Windows Me, Windows 2000, Windows XP System Requirements: There is no specific requirements Archim Related Freeware Terms Cosine Function Graph Of Function Graphs Polar Coordiantes Sine Spherical Coordinates Archim Related Software CalcCoord - transforms cartesian, spherical and cylindrical coordinatesCartesian coordinates, spherical coordinates und cylindrical coordinates can be transformed into each other. (5 languages, 2 and 3 dimensions) Complex Grapher - Complex Grapher is a graphing calculator to create a graph of complex function. 3D function graphs and 2D color maps can be created with this grapher.Complex Grapher is a graphing calculator to create a graph of complex function. 3D function... Function Grapher - Function Grapher is graph maker to create 2D, 2.Function Grapher is graph maker to create 2D, 2.5D and 3D function graphs, animations and table graphs. 2D Features: explicit function, implicit function, parametric function, and inequality... Scientific Calculator Precision 90 - A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations.A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 90 is programmed in C#.... APIS IQ-FMEA - FMEA (all standards) & DRBFM editors: Structure Tree, Function Nets, Function Nets with Operating Conditions, FMEA Form Sheet, Statistics, Action Tracking, Fault Tree Analysis, IQ-Explorer, Team Communication, Terminology and Translation,... Function Analyzer - Function Analyzer is a program that draws the graph of a function with one variable declared by the user.Function Analyzer is a program that draws the graph of a function with one variable declared by the user. The operators used to build the... Visual Complex For Academic - Visual Complex is a graph software to create graph of complex function. 3D function graphs and 2D color maps can be created with this grapher.Visual Complex is a graph software to create graph of complex function. 3D function graphs and 2D color... Archim Match at Super Shareware CalcCoord - transforms cartesian, spherical and cylindrical coordinatesCartesian coordinates, spherical coordinates und cylindrical coordinates can be transformed into each other. (5 languages, 2 and 3 dimensions) Complex Grapher - Complex Grapher is a graphing calculator to create a graph of complex function. 3D function graphs and 2D color maps can be created with this grapher.Complex Grapher is a graphing calculator to create a graph of complex function. 3D function... Function Grapher - Function Grapher is graph maker to create 2D, 2.Function Grapher is graph maker to create 2D, 2.5D and 3D function graphs, animations and table graphs. 2D Features: explicit function, implicit function, parametric function, and inequality... Scientific Calculator Precision 90 - A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations.A handy, fast, reliable, precise tool if you need to perform complex mathematical calculations. Scientific Calculator Precision 90 is programmed in C#.... APIS IQ-FMEA - FMEA (all standards) & DRBFM editors: Structure Tree, Function Nets, Function Nets with Operating Conditions, FMEA Form Sheet, Statistics, Action Tracking, Fault Tree Analysis, IQ-Explorer, Team Communication, Terminology and Translation,... Function Analyzer - Function Analyzer is a program that draws the graph of a function with one variable declared by the user.Function Analyzer is a program that draws the graph of a function with one variable declared by the user. The operators used to build the... Visual Complex For Academic - Visual Complex is a graph software to create graph of complex function. 3D function graphs and 2D color maps can be created with this grapher.Visual Complex is a graph software to create graph of complex function. 3D function graphs and 2D color... Education / Mathematics Popular Software Student Grade Calculator - Student Grade Calculator for Excel makes easy work of grading students. It quickly weights the students grades on a per-assignment basis.Student Grade Calculator for Excel makes easy work of grading students. It quickly weights the students grades... Simple Solver - Boolean minimization and truth tables, Automatic logic design and simulation of digital logic circuits from truth table or waveform inputs, Permutations, Random numbers.Simple Solver (SSolver) provides a suite of five design tools: Boolean,... Equation graph plotter - EqPlot - Graph plotter program plots 2D graphs from complex equations. The application comprises algebraic, trigonometric, hyperbolic and transcendental functions. EqPlot can be used to verify the results of nonlinear regression analysis program.Graph... Math Homework Maker - Math Homework Maker,is a FREE software which can solve all your math homeworkThe Math Homework Maker, is a FREE software which can solve all your Math homework. If you are a pupil or a student, needing help with your math homework or just want... Graphing Calculator 3D - Easy-to-use 3D grapher that plots high quality graphs for 2D and 3D functions and coordinates tables. Graphing equations is as easy as typing them down. Graphs are beautifully rendered with gradual colors and lighting and reflection... More Mathematics downloads at 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Other software of "STOCHASTIC LAB23" MIDI to WAV Renderer - A new program, effective and easy in use for anyone! The best in MIDI to WAV Renderer is an ability to render files avoiding any foreign sounds, as it converts files without playing the original sound and no foreign sounds can get in the output... SLGallery - Computers, the restless machines with unprecedented precision are helpers to human beings at many tasks, these days.Computers, the restless machines with unprecedented precision are helpers to human beings at many tasks, these days. Almost any... SLInvest - SLInvest is used to evaluate the financial profitability of investment projectsSLInvest is used to evaluate the financial profitability and efficiency of investment projects. A lot of financial operations imply several payments in the process of...
{"url":"http://www.supershareware.com/info/archim.html","timestamp":"2014-04-19T01:49:52Z","content_type":null,"content_length":"53339","record_id":"<urn:uuid:38403dda-7921-4ef6-865d-8139b185bb00>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Adventure Games, Permutations Groups, and Spreadsheets Adventure Games, Permutations, and Spreadsheets Paul Vodola (PVodola@CAIS.com). Paul Vodola is a Program Scientist and Assistant Program Manager at Systems Planning and Analysis, Inc. in Alexandria, Virginia. He received a Ph.D. in Mathematics at the University of Virginia in 1976. Since then he has worked in the areas of operations research, systems analysis, and modeling but has maintained an interest in group theory and algebra. Puzzles have long been a source of motivation for the exploration of mathematical concepts, theory, and computation. Multimedia adventure games with creative storylines, movies, sophisticated graphics, and sound, are the modern context for both new and classic puzzles. We will show how some of these puzzles can be modeled with directed graphs and the resulting mathematical work can be carried out with nothing more than a computer spreadsheet. Puzzle description and graph model. The adventure game Shivers, by Sierra, takes place in a haunted museum. You, the player, have recklessly accepted a bet to spend the night alone inside . You must wander through rooms collecting objects, manipulating devices, and solving puzzles in order to avoid harm and gain access to areas that might provide escape. One of the puzzles is sketched in Figure 1. We will call it the pinball puzzle because of its operating characteristics. Each of nine colored tiles contains a depression that holds a ball except that the center tile’s depression is empty. Each tile except the white center tile is to be matched with a ball of the corresponding color. The eight balls are labeled with the first letter of their color. Below each tile are two flippers capable of batting the ball one or more positions to the right or left. If a ball is flipped into an empty tile, it stays there. Otherwise the ball bounces back to its original tile. The flippers are labeled to show how far, right or left, they can send a ball. In Figure 1, for example, on the orange tile the right flipper (labeled 1L) would shoot the red ball one position left into the red tile. But since the red tile is already occupied, the ball would return to the orange tile. The left flipper (labeled 3R) would move the red ball three places to the right into the white tile where it would occupy the empty depression. Play could then continue by moving some ball into the empty orange tile. The puzzle is solved when the ball and tile colors match and the white tile is empty again. This is clearly a permutation problem, but it is too complex for us to assimilate patterns and develop an orderly approach. Although Figure 1 indicates the destinations of struck balls, since only the empty tile is an admissible destination it might be more useful to visualize which locations flip balls into a given empty position. For example, the only possible opening moves are orange/left, green/right, and blue/left, where color/flipper refers to the color of the tile and the flipper used. Indeed, any time the white tile is empty, one of those three moves must be chosen. We could draw from those three tiles pointing to the white tile to indicate that such transfers are possible. Continuing that line of thought, and using numbers instead of colors, results in the directed graph shown in Figure 2. The numbers at the nodes represent the colors of the tiles as indicated in Figure 1. Only two of the 18 directed edges are labeled with the tile and flipper action that affects the movement. Node 5 is special because it is empty at the beginning and end of the puzzle. From Figure 2, we see quickly that the only opening moves are from nodes 2, 4, and 6. Suppose we move the ball from tile 4. Then the possible moves are from 1 to 4 and from 7 to 4. If we pick 1 to 4 that leaves no choice but to move the ball from 2 to 1. With 2 now empty, we can fill it from 3 or from 5. Choosing the latter leaves 5 empty once again and we have formed a permutation of some of the balls while leaving the others fixed. The ball in tile 4 ended up in tile 2 (this ball moved twice, but our only concern is how the final configuration differs from the original one). The ball in tile 1 ended up in tile 4 and 2 was moved to 1. In short, the balls in tiles 1, 4, and 2 were permuted and the remaining balls were fixed. As is customary, let’s write this cyclic permutation by listing within parenthesis the sequence of tiles visited by the balls that move, with the understanding that the ball in the last numbered tile moves to the tile indicated by the first number in the sequence. The cycle created above is ( 1 4 2 ). Another feasible set of moves is 4 to 5, 7 to 4, 8 to 7, and 5 to 8 which creates the cycle ( 4 8 7 ). The goal (see Figure 1) is to obtain the permutation ( 1 4 6 2 ) ( 3 8 7 9 ). Tile 5 is empty at the beginning and end, hence it is a fixed point of the ultimate rearrangement. When operating with Figure 2, it helps to visualize these actions as "pushing the empty space" through a path: rather than moving a ball into the space, think of moving the space into the tile containing that ball. Each move "pushes" to a new position swapping places with the ball it displaces. Each flipper action thus transposes the node occupied by the space and that of the flipper, to which the space is "moving". The admissible movements of the space are those against the direction of arrows in the directed graph of admissible moves of the ball. So let’s make a change. Figure 3 is the complementary graph created by reversing all edges in Figure 2. Now the admissible moves of the space are in the direction of the arrows; the remaining notation and conventions do not change. Permutation group model. With practice, we can write down the permutations associated with simple paths taken by the space as it is pushed. This can be formalized by defining a space-path to be a sequence of admissible moves and denoting the path by naming the sequence of nodes occupied by the space. The first space-path used in the examples earlier would be denoted 5-4-1-2-5. The resulting permutation is the product of transpositions ( 5 4 ) ( 4 1 ) ( 1 2 ) ( 2 5 ) formed by moving through the list and creating a transposition from each adjacent pair. In canonical form, as a product of disjoint cycles, that product of transpositions is the single cycle ( 1 4 2 ). Similarly, the second space-path noted earlier is designated 5-4-7-8-5 and it corresponds to the transpositions ( 5 4 ) ( 4 7 ) ( 7 8 ) ( 8 5 ), or as a product of disjoint cycles, ( 4 8 7 ). The concatenation of two space-paths that begin and end at node 5 is another path with the same property. The mechanism for associating transposition products with space-paths shows that the permutation associated with the concatenation is the product of the associated permutations. The composition of the preceding paths above is 5-4-1-2-5-5-4-7-8-5 = 5-4-1-2-5-4-7-8-5 and the associated product ( 1 4 2 ) ( 4 8 7 ) is ( 1 8 7 4 2 ). Note that in interpreting space-paths, strings of repeated 5’s are replaced by a single occurrence. (This could be interpreted as adding a directed edge from each node to itself. It essentially acts as a "do nothing" operation within a space-path.) The desired solution to the entire puzzle can thus be expressed as a certain space-path, because space-paths are an abstract model of the flipper operations controlling the puzzle. The solution path might be quite long, but it can be decomposed into smaller elementary space-paths. Visualize the solution space-path as a string of digits that starts and ends with 5. Proceed through that path from left to right creating a subpath from each interval of digits between consecutive occurrences of the digit 5. Let adjacent sub-paths share the 5 that separates them, so that they also form space-paths that begin and end at node 5. These elementary subpaths then have the property that the space does not pass a second time through node 5 until termination of the sub-path. The permutations associated with such subpaths therefore all have the property that they fix the letter 5. The collection G of permutations associated with space-paths that start and end at node 5 forms a group. (Note from Figure 3 that any move of the space can be inverted, although it may take several moves to do so.) The desired solution is to express the permutation ( 1 4 6 2 ) ( 3 8 7 9 ), which transforms the initial configuration (Figure 1) into the desired state, with each colored ball resting on its matching tile, as a product of members of G. (We have no guarantee that this is possible, but we trust the designer of the pinball puzzle did not create an impossible task!) While transforming the puzzle into a permutation problem seems promising, it really just replaces a trial-and-error sequence of flipper actions with a trial-and-error permutation multiplication problem. Consider a comparable problem in arithmetic: is the number 25470 expressible as a product from the list of numbers 3, 6, and 35, allowing repeated factors? Someone proficient with spreadsheets, but unaware of number theory, would have no problem implementing the following brute force approach. Form a table of all products of pairs of numbers chosen from the list { 1, 3, 6, 35 } including repeated factors such as 6 ×6. This produces all products using two or fewer numbers from { 1, 3, 6, 35 }. Sort these products in order of increasing size, discarding any duplicates, and look for 25470. (Spreadsheets have built-in commands for these operations.) If 25470 does not appear, repeat the process choosing the two factors from this new, larger list. The result, after sorting and culling duplicates, is a list of all products of four or fewer factors. After n iterations the list will consist of all products of 2^n or fewer factors. At some step either the desired number 25470 will appear or all new products will exceed 25470. In the latter case the answer is no, because further products would be still larger. For moderate sized numbers, this process would take only a few seconds. Excel algorithms. In the pinball puzzle problem, potential factors are permutations associated with the elementary space-paths that include node 5 only at the endpoints. We have already identified two by inspecting Figure 3 and it is easy to spot a few more. Can a collection of such factors be found from which ( 1 4 6 2 ) ( 3 8 7 9 ) can be expressed as a product? We could use the same brute force approach that we applied to the arithmetic problem if we could manipulate permutations on a spreadsheet as easily as numbers and text. A collection of permutation functions and related utilities that will give us these capabilities, written for the Microsoft Excel spreadsheet, is described below SimplifyCycles is an Excel function that accepts a character string representation of potentially overlapping cycles and returns the product permutation in canonical form as a product of disjoint cycles. The steps are the same as those generally used with pencil and paper. The first column of Table 1 consists of input strings of cycles that were manually typed. The second column shows the results of SimplifyCycles. The MultiplyCycles function takes two string representations of permutations and multiplies them by concatenating the strings and calling SimplifyCycles. The PathToPerm function creates the permutation associated with a space-path. It inputs the letters that define the path, separated by spaces. The transposition expression is created internally and passed through SimplifyCycles for output. The space-paths used in earlier examples have been typed into the second column of Table 2 and concatenated at the bottom using "&" (Excel’s concatenation operator). The third column uses PathToPerm to associate the paths with permutations. The last column shows the running product of the permutations, from MultiplyCycles. Since there are only two entries, the first entry repeats the first permutation and the second entry is the product ( 1 4 2 ) ( 4 8 7 ). Note that the permutation associated with the concatenation of space-paths is the product of the permutations associated with each path in the same order as the concatenation. Solution of the Pinball Puzzle. Table 3 is a list of space-paths and associated permutations derived from a rudimentary inspection of Figure 3. Each has been labeled with a letter symbol to make tracking easier when products are formed. At this point, one might experiment with products of these generators, perhaps looking back at Figure 3 to see if anything new can be gleaned from the diagram. The power of spreadsheets is that one can survey an enormous number of approaches and calculations while waiting for inspiration to arrive. In case inspiration is on a later train, here is an Excel macro that implements the brute-force approach outlined earlier for the arithmetic problem. GenPermProd generates the products of all pairs of permutations appearing in a list. If the user assigns a label to each permutation, the macro will concatenate the labels each time it forms a pairwise product so that the labels for the products are maintained along with the permutation products. The user selects a list of permutations and associated labels on any worksheet by highlighting them in adjacent columns using the mouse. The macro separates duplicate expressions and outputs the list of products in a symbol/permutation format suitable for selection and repeated application of the macro. The results of the first invocation of the macro, using the data outlined in Table 3, are shown in Table 4. The four original elements together with their 16 products form a list of 19 distinct permutations; there is one duplicate: AC = CA. The list is small enough to let us see that the desired permutation, ( 1 4 6 2 ) ( 3 8 7 9 ), does not appear. But the two columns are in the right format to be selected for a second pass of the macro. A small portion of the results of the second macro invocation are shown in Table 5. There are actually 257 different Table 5 contains the desired solution as the expression BADD (shown in boldface). Excel’s Edit|Find command lets us find it in the list by searching for the solution permutation string. Taking the solution expressed as a product of the four generators, we can convert the sequence BADD back into a space-path, which in turn is translated into flipper selections. Considering how much effort went into finding the solution, using techniques and tools unfamiliar to the untrained person, surely any player with the insight or persistence required to complete the puzzle deserves bonus points! The puzzle designer has applied to a given configuration a sequence of permutation operations from some group of permutations. The solver’s task tries to discover how to express the inverse permutation as a product of a given set of generators of this group. Related Problems and Puzzles. The graph for the pinball puzzle is a more useful tool for understanding the puzzle’s properties and behavior than the puzzle itself. A similar but even more striking example of the utility of the model is provided by the knight puzzle found in the game The 11th Hour by Trilobyte. Two knights of each color sit on an irregular fragment of a chessboard as shown in Figure 4. They move as in chess (without the requirement that alternate moves be by alternate colors) and must remain on the fragment. Can they be moved so that the white and black knights exchange The reader may wish to explore the puzzle before reading further. Number the squares as shown in the figure and build a graph by connecting two nodes if they are associated with squares that a knight can traverse in a single move. The graph, annotated to show the initial positions of the knights, can be seen in Figure 5. The graph alone may make you cry "voil à". Node 3 provides convenient "off-street parking" for any knight the player wishes to place in the next available upper position; that is, the knights can be shuttled into the lower nodes while dropping off any desired knight at node 3. That knight can then be moved upward and the process repeated. The puzzle is so flexible that more complicated variations can also be solved. If the knights were individually distinguished, say by the letters A through D, you could create any of the 4!=24 possible permutations of their initial positions -- even if squares 4 and 5 were removed from the board fragment! Rearrangement puzzles. The knight and pinball puzzles are examples from a family of rearrangement puzzles that can be designed and modeled using a directed graph or an undirected graph. Design options include the addiing and removing nodes and edges; changing the directions of edges; altering the relative number of filled and empty nodes; and either distinguishing between all objects, as in the pinball puzzle, or consider various subsets to be equivalent, as in the knight puzzle. Each designer must establish initial positions and objectives that are compatible with the constraints of the graph. Provided that each move is invertible, (perhaps in several moves) the set of permitted rearrangements will always form a group. The size of the group of permissible rearrangements generally determines the level of challenge to the player, who seeks to solve the inverse problem posed by the designer. The group of permissible rearrangements of the knights is S[4], the symmetric group on four letters. What about the group of possible permutations of the eight balls in the pinball puzzle (assuming, as always, the center white tile remains empty)? The upper bound would be all of S[8] but the size of S[8] (8! = 40320) makes the brute force computation of all possible products a challenge. Note that if the group were not all of S[8], then for some initial configurations the balls could not be moved to matching tiles by any sequence of flipper actions. It takes some knowledge of permutation group theory to find the group G of rearrangements of the pinball puzzle. A fairly complete understanding exists for the archetypal rearrangement puzzle, often called the 15-puzzle, whose frame contains 15 numbered tiles and a space (Figure 6). The tiles are scrambled and must be restored to the order shown. Adjacent tiles can slide into the space but cannot be removed or otherwise manipulated. Interestingly, the graph is so similar to the mechanical puzzle that it offers little, if any, additional insight. On the other hand, group theory -- which might be regarded as the most abstract model for the puzzle, leads to a complete understanding of the puzzle and the recognition that A[15], the alternating group on 15 letters, is the group of rearrangements associated with the puzzle [3]. Stein [4] provides some insight into the 15-puzzle and other rearrangement problems. Wilson [5] discusses the general structure of graphs and their groups including the determination of the groups associated with all undirected graphs. The spreadsheet functionality offered here, together with the theory and application of permutation groups found in references [1] and [2], may be sufficient for you to explore and determine of the groups associated with the pinball puzzle and other puzzles modeled by graphs. Computer adventure games contain a variety of mathematically oriented puzzles involving mazes, modular arithmetic, variations on rearrangements, and linear algebra. Other games, such as The 7th Guest by Trilobyte and Jewels of the Oracle by Discis incorporate puzzles in their design. There are also many Internet sites devoted to games including the discussion and solution of puzzles. The algorithms used in this article can be obtained over the Internet at the Mathematics Archives (http://archives.math.utk.edu). The functions and subroutines are contained in an Excel workbook along with additional algorithms and materials related to permutation groups and puzzles. G. Birkhoff and S. MacLane, A Survey of Modern Algebra, revised edition, The MacMillan Company, New York, 1953. I. N. Herstein, Topics in Algebra, 2^nd edition, John Wiley & Sons, Inc., New York, 1975. E. Spitznagel, Jr., A New Look at the Fifteen Puzzle, Mathematics Magazine, Vol. 40 (1967) 171-174. S.K. Stein, Mathematics: The Man-Made Universe, W.H. Freeman and Company, New York, 1976, Third Edition. R.M. Wilson, Graph Puzzles, Homotopy, and the Alternating Group, Journal of Combinatorial Theory (B) 16 (1974) 86-96. Acknowledgment. I would like to thank my good friend George Mackiw for his encouragement and support in writing this article.
{"url":"http://archives.math.utk.edu/combinatorics/Combinatorics/AdvGame.html","timestamp":"2014-04-17T04:11:48Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:2a62502b-7d5a-4078-9c87-f7f3fb9ad0ba>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
An example Next: Experimental Results Up: Embedded zerotree wavelet (EZW) Previous: Decoding We use a two scale wavelet image as a simple to show the algorithm. The original image is shown in (a) of Figure . The procedure of encoding together decoding for the image is listed as follows. Dominant pass 1. The threshold T=16, and the quantization value is 1.5T=24. The output symbols are POS(18), ZTR(3), ZTR(6) and ZTR(5). The reconstruction values is 24, corresponding to the only positive significant coefficient 18. The subordinate list contains just one number 18. The coefficient maintained in the subordinate list is replaced by zero in the image, and the image is shown in (b) Figure . Subordinate pass 1. One symbol is generated corresponding to the only coefficient (18) in the subordinate list, the symbol is 1, because 18 is larger than the threshold 16. Dominant pass 2. The threshold is halved, i.e., T=8, and the quantization value is 1.5T=12. The output symbols are ZTR(3), IZ(6),ZTR(5), POS(8), POS(13), ZTR(-7) and ZTR(1). The reconstruction values is 24, 12, 12, corresponding to the numbers 18, 8, 13 in the subordinate list, and then the coefficients maintained in the subordinate list are replaced by zero in the image, and the image is shown in (c) Figure . Subordinate pass 2. Three symbols are generated corresponding to the coefficients 18, 8, 13 in the subordinate list. The output symbols are 0,1 and 1. Dominant pass 3. The threshold is halved again, i.e., T=4, and the quantization value is 1.5T=6. The output symbols are ZTR(3), POS(6),NEG(-5), NEG(-7), ZTR(1), NEG(-6), POS(4),ZTR(3) and and ZTR (-2). The reconstruction values is 24, 12, 12,6, 6, -6, -6, 6, corresponding to the numbers 18, 8, 13, 6, -5,-7,-6, 4 in the subordinate list, and then the coefficients maintained in the subordinate list are replaced by zero in the image, and the image is shown in (c) Figure . Subordinate pass 3. Eight symbols are generated corresponding to the coefficient 18, 8, 13, 6, -5,-7,-6, 4 in the subordinate list. The output symbols are -1, 1, 1, 1, 1, 1, 1 and 1. Next: Experimental Results Up: Embedded zerotree wavelet (EZW) Previous: Decoding Andrew Doran Cherry Wang Huipin Zhang
{"url":"http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj1/report/node13.html","timestamp":"2014-04-17T21:26:16Z","content_type":null,"content_length":"5844","record_id":"<urn:uuid:77c3f3cf-09dd-4deb-8fef-fffc888c9736>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from January 6, 2009 on The Unapologetic Mathematician Well, I made some good non-academic contacts already. I’d rather not go into details, since being too talky might be a problem when prospective employers look to Google and find me as the top hit for my name and subject. But I’m feeling good about my prospects, even without looking at the wilds of federal government and contracting jobs. Anyhow, as for mathematics there were a number of good talks, but most of them were either what was expected from the speaker, or felt pretty technical. One, though, really grabbed me. Kerry Luse, formerly a student of Yongwu Rong’s at George Washington, spoke about “A transition polynomial for signed Feynman diagrams”. She started with the chord diagram of a knot and added a sign to each chord. If you read the signs as orientations, you get Feynman diagrams for a single species of noninteracting, non-self-dual particles. Alternately, you can interpret the diagrams as arising from RNA secondary structures, as she did. Either way, she was looking for a polynomial invariant to be calculated from such a diagram, and she came up with some really interesting results from her choice. One property in particular was the fact that the resulting polynomial (as applied to chord diagrams arising from knots) is multiplicative under connected sums of links. This makes me think it’s got something to do with the Alexander polynomial. She also mentioned chord diagrams for links, with more than one loop, which I don’t think I’ve ever considered as such. Immediately this made me think of extending to tangles (naturally), and then that these chord diagrams may themselves form a category of their own. Is there some sort of duality here? If so it might turn connected sum on one side into disjoint union on the other side, which could provide a fascinating connection between classical and quantum topology… See, I’m not going to be stopping research, and definitely not stopping this project here (thanks, btw, for the comments), but I just need to get out of the academic game I’ve been playing the last few years. Anyhow, since it’s been weeks since I’ve been at home cooking for myself (thanks to Dad insisting on doing it all), I figured I’d have a bonus “I (Didn’t) Made It!”: At the Afghan Grill, just around the corner from the Marriott, I’m having the mantoo, and across the table from me is the lamb qabili palao. So who ordered the Afghan equivalent of biryani? This guy! I also ran into Jesse Johnson this morning. Oh, and if Sarah from John Hopkins is reading this and is at the meetings, she really needs to drop me an email so she can join the fun. • Recent Posts • Blogroll • Art • Astronomy • Computer Science • Education • Mathematics • Me • Philosophy • Physics • Politics • Science • RSS Feeds • Feedback Got something to say? Anonymous questions, comments, and suggestions at • Subjects • Archives
{"url":"https://unapologetic.wordpress.com/2009/01/06/","timestamp":"2014-04-19T09:27:07Z","content_type":null,"content_length":"38323","record_id":"<urn:uuid:9e3f5b34-efc9-47bb-8268-b3fa1901dbac>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
why do shots drop more when shooting at a small incline at extreme range? Re: why do shots drop more when shooting at a small incline at extreme range? When I enter incline values greater than 7 degrees the drop decreases as expected, but when the bullet has slowed so much the downhill shot needs less holdover than uphill. I assume it is because gravity is helping the trajectory downhill and slowing the bullet shot uphill. The effect of gravity is almost invisible when comparing incline angles at supersonic speed, but at slower speeds, gravity changes things. look at it this way, the rifleman's rule is explained by the effect of gravity is diminished by the angle because gravity pulls down, not 90 degrees away from the line of sight. Well, when shooting uphill, gravity is not only pulling away from the line of sight but also pulling away from the direction of travel (slowing it down). When shooting downhill, gravity still pulls the same away from the line of sight ( assuming a directly inverted angle, like 6 and -6 degrees) but instead of pulling away from the direction of travel it is now benefitting the bullets speed (not slowing down as much). Another perspective is to look at shot at 45 degrees. Only half the force of gravity pulls away from the line of sight. How much effect can the other have, and at what point is the effect large enough to notice? It is so small that it is imperceptible when shooting a 375 cheytac at 2000. noted. Let me remind, this is only noticeable at much slower speeds, but here of guys doing amazing things with .308 winchester's and 175 smk. Maybe those guys have witnessed this?
{"url":"http://www.longrangehunting.com/forums/f116/why-do-shots-drop-more-when-shooting-small-incline-extreme-range-96138/index3.html","timestamp":"2014-04-24T04:03:41Z","content_type":null,"content_length":"79625","record_id":"<urn:uuid:45c3eabf-0ba5-4ee8-8775-373dc51c7b41>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Hooke's Law More Information About This Video Hooke's Law Hooke's law is named after British physicist Robert Hooke who published it in 1660. Hooke's Law states that the extension of a spring is in direct proportion with the load or force applied to it. In other words: equal weight increments stretch the spring equally (linear Hooke's law formula states that: x is the length displacement of the spring's end from its start position (units: meters) F is the force exerted on the spring's end (units: N or kilogram-force) k is the spring constant (units: N/m) The meaning of the constant k, in N/m, is how many newtons we need in order to stretch the spring 1m. A higher value for k means that more force or newtons are needed in order to stretch the spring 1m - in other words, a higher value means a less flexible or stiffer spring and vice versa. Hooke's Law Experiment A coil spring is hanged with a weight hanger and pointer attached to its end. Different known weights with equal increments (in this case 500g, 1000g, 1500g, 2000g – according to the coil's flexibility) are hanged and the readings of the pointer with the respective weights are marked on a sheet of paper. The results show that equal weight increments stretch the spring equally (linear relationship) what we call Hooke's law. Take in account that this linear relationship between weight and length displacement of the spring is limited to the working range of the spring (too small weights will not stretch the spring at all whereas too big weights will compromise its flexibility). It is clear that Hooke's law is theoreticall correct for an ideal spring that has no weight, mass, or damping losses and is perfectly elastic within its working range. But for practical springs Hooke's law is only a good approximation within their working range
{"url":"http://www.physicsdemos.juliantrubin.com/physics_videos/hookes_law.html","timestamp":"2014-04-17T15:28:22Z","content_type":null,"content_length":"12670","record_id":"<urn:uuid:bee24192-5b87-4256-8ff8-808dedfd0e1c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Roundtable, Teaching Mathematics as a Science Discussion: Roundtable Topic: Teaching Mathematics as a Science << see all messages in this topic < previous message | next message > Subject: RE: Teaching Mathematics as a Science Author: Alice Date: Aug 8 2004 I must reply to the last post! I use the discovery method of teaching math. In the beginning most of my students rebel. They feel that by not giving them answers and teaching rote, I am teaching wrong. Some even tell me so. I persist. (I even left a tenured job because they insisted I teach rote.) By the end of the first semester of making hypotheses and testing them, after having discovered truths in math and then seeing that their textbook says virtually the same thing,(And a lot of "doesn't that feel good" from me)... most students are very proud of their discoveries. We call conjectures by student names, and they own them. By the end of the year, when I ask them to tell me what they thought I should keep in my lessons, and what should be changed, I get notes about how they fought me in the beginning, but they really learned to like math this year...Their test results usually soar too, even though I don't seem to be teaching for the tests. I've used this method with 7th grade average students, through Calculus in the last 8 years. My principal agrees that students like math after being in my Try it, you'll like it. Reply to this message Quote this message when replying? yes no Post a new topic to the Roundtable Discussion discussion Discussion Help
{"url":"http://mathforum.org/mathtools/discuss.html?context=dtype&do=r&msg=12456","timestamp":"2014-04-21T02:31:43Z","content_type":null,"content_length":"16479","record_id":"<urn:uuid:c67f487d-e575-4707-b96d-0a8eceb6ad3d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Capitulation in cyclotomic extensions up vote 12 down vote favorite Let $p$ be an irregular prime, which means that $p$ divides some Bernoulli number: $p \mid B_k$ (for some even $k\in[2,p-3]$). This implies that the class number of the field $K$ of $p$-th roots of unity is divisible by $p$. Let $L$ be the field of $p^2$-th roots of unity. What, if anything, is known about the capitulation of ideal classes in $L/K$ ( we say that an ideal class from $K$ capitulates in $L$ if an ideal generating this class becomes principal there)? It is possible to write down criteria in terms of units that are or are not norms from $L$, but this does not seem to help a lot. I am mainly interested in the question whether there is a connection between the index $k$ and the capitulation of the subgroup of order $p$ corresponding to $k$ via Herbrand-Ribet. I am pretty sure that classical algebraic number theorists did not do an awful lot in this direction but I am not familiar with any advances in Iwasawa theory: whether an ideal class capitulates in $L/K$ is encoded in the Hilbert class field, so the structure of the maximal abelian unramified $p$-extension of the cyclotomic Iwasawa extension of $K$ might contain relevant information. Does it? I love the usage of «capitulation» :) – Mariano Suárez-Alvarez♦ Sep 28 '10 at 14:39 1 @Mariano: the word capitulation was coined by Arnold Scholz; the German word for principal (as in principal ideal) is Haupt, which is caput in Latin; but caputilation would sound silly in both German and English -) – Franz Lemmermeyer Sep 28 '10 at 16:21 I prefer that they give up, as in capitulation, rather than if they would be beheaded, as in enthaupted (=decapitation). – Chris Wuthrich Sep 28 '10 at 17:20 2 Yeah: it is a nice image: those classes doing the best to survive, extension after extension until finally, well, they just have to give in and submit to principalization :P – Mariano Suárez-Alvarez♦ Sep 28 '10 at 21:26 add comment 1 Answer active oldest votes Assume $p$ is an irregular prime for which Vandiver's conjecture holds, e.g. $p<12'000'000$. This conjecture asserts that $p$ does not divide the $+$-part of the class group. Then there is no capitulation in the class group from the first layer of the cyclotomic $\mathbb{Z}_p^{\times}$-tower to any other in this tower. See Proposition 1.2.14 in Greenberg's up vote 7 book, which says that the capitulation kernel lies in the $+$-part. See also the discussion on page 102 where it is discussed what happens when Vandiver's conjecture does not hold. down vote accepted Generally capitulations in Iwasawa theory are well studied. The capitulation is linked to the question of whether there are non-trivial finite sub-$\Lambda$-modules in the Iwasawa module $X$, here the projective limit of the $p$-primary parts of the class groups in the tower, or equivalently the Galois group mentioned in the question. But I guess the main conjecture of Iwasawa theory does not say anything about this because we can ignore finite submodules. The advances in Iwasawa theory have only been on 1 formulating main conjectures in more general situations and not very much on getting finer information about the Iwasawa modules (Kurihara's work on computing all Fitting ideals in some situations is an exception to this that I know but not enough for these kinds of question I think). Does anyone if ETNC says anything for such conjectures? – Mahesh Kakde Sep 30 '10 at 13:10 Slightly more is true (as I should have known): the whole minus part of the class group cannot capitulate. This leaves us with the question which eigenspaces of the plus class group 1 may or may not capitulate. Since this is entirely a problem involving the structure of the unit group of the real subfield, I guess it is not wise to expect an answer one way or another. – Franz Lemmermeyer Sep 30 '10 at 14:22 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory algebraic-number-theory iwasawa-theory class-field-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/40324/capitulation-in-cyclotomic-extensions/40605","timestamp":"2014-04-17T07:22:40Z","content_type":null,"content_length":"61565","record_id":"<urn:uuid:49a2537e-b81c-4d4e-98fa-c309e28bb9d8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic Models of the Social Security Trust Funds by Clark Burdick and Joyce Manchester Research and Statistics Note No. 2003-01 (released March 2003) This note was prepared by Clark Burdick and Joyce Manchester, Division of Economic Research, Office of Research, Evaluation, and Statistics, Office of Policy. Research assistance provided by Eugene The findings and conclusions presented in this paper are those of the authors and do not necessarily represent the views of the Social Security Administration. Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. This Research and Statistics (R&S) Note demonstrates that several stochastic models deliver broadly consistent results even though they use significantly different approaches and assumptions. However, the results also demonstrate that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future. The stochastic model used in the Trustees' Report was recently developed by the Office of the Chief Actuary (OCACT) of the Social Security Administration to illustrate the uncertainty surrounding projections of the financial future of the Social Security system over the next 75 years. The stochastic results are intended to augment the traditional demonstrations of uncertainty used in past Trustees' Reports. The standard method of demonstrating uncertainty is to present three alternative sets of deterministic projections. The intermediate (Alternative II) projections are intended to reflect the best estimates of future experience. The low-cost (Alternative I) and high-cost (Alternative III) projections are based on more optimistic and more pessimistic assumptions about the future, respectively. The three alternatives indicate a possible range for future experience. The stochastic model also relies on the assumptions underlying the intermediate (Alternative II) projections. Time constraints dictated that the stochastic results in the 2003 Trustees' Report be based on the assumptions from the 2002 report. The purpose of stochastic modeling is to further illustrate the degree of uncertainty inherent in projecting future financial outcomes for the combined OASDI trust funds. Those outcomes depend on the future values of a large number of demographic, economic, and program-specific variables that cannot be known with certainty and must be forecast. Stochastic modeling is an attempt to forecast the future values of variables in a manner that is consistent with contemporary economic and demographic theory and with empirical evidence. The statistical techniques used in stochastic modeling help to ensure that the variables that determine the future financial condition of the combined OASDI trust funds evolve over time in a fashion that is consistent with their past behavior and is intended to be consistent with their actual future behavior. Simulation techniques are used to construct a large number of future financial outcomes for the trust funds. Each simulation makes random draws for the stochastic variables from their assumed probability distributions and uses those randomly drawn variables to produce future trust fund outcomes. The distribution of those simulated outcomes is then assumed to be representative of the range and distribution of actual outcomes that might be realized in the future. To help interpret the new stochastic results and to place them in context, the Social Security Administration's Office of Policy arranged for three external modeling groups to produce alternative stochastic results that also used the assumptions from the 2002 Trustees' Report.^1 This R&S Note describes results produced by: • The Congressional Budget Office Long Term (CBOLT) model; • A model developed by Shripad Tuljapurkar and Ron Lee (TL), formerly of Mountain View Research; and • The SSASIM model developed by the Policy Simulation Group. The results from the three external models are analyzed and compared, with particular attention paid to the alternative assumptions and approaches adopted by each. The results are also compared with the stochastic results produced by OCACT and with the intermediate, high-cost, and low-cost projections traditionally used by the Trustees. The external models produce results that are quite similar to each other, but they exhibit some significant differences when compared with both the stochastic and nonstochastic results produced by OCACT. Highlights of the Models' Results The major results from the models are summarized below and in Table 1. CBOLT Model The median simulation result from the CBOLT model projects that the combined OASDI trust funds will be exhausted in 2037—4 years earlier than the date of 2041 projected under the Alternative II assumptions in the 2002 Trustees' Report. The CBOLT model projects a 10 percent chance that the trust funds will be exhausted by 2028, which is one year earlier than the Trustees' standard high-cost projection. The model also projects a 10 percent chance that exhaustion will not occur before 2063. In contrast, according to the Trustees' standard low-cost projection, the trust funds are not exhausted by 2076, the end of the 75-year projection period. TL Model The median simulation result from the TL model also projects exhaustion of the combined OASDI trust funds in 2037—again, 4 years earlier than projected by the standard model under the Alternative II assumptions in the 2002 Trustees' Report. The TL model projects a 10 percent chance that the trust funds will be exhausted by 2029 and a 10 percent chance that exhaustion might not occur before 2056. These dates compare with the high-cost projection of 2029 and the low-cost projection from the 2002 Trustees' Report that the trust funds will not be exhausted at the end of the 75-year projection SSASIM Model Although SSASIM is fully as capable as the other models, for this note it models only two variables stochastically—productivity and fertility. Therefore, the results from the SSASIM model shown here are not directly comparable with those of the other external models or with those produced by OCACT. Rather, the results are used to explore the implications of different modeling choices. The results show that stochastic modeling outcomes can exhibit significantly more variation when structural time-series models are used than when the more typical reduced-form ARIMA models are used. The limited SSASIM model produces median simulation results that project exhaustion of the trust funds in 2037 and 2038 under the two different model specifications, respectively. Again, these dates are 3 to 4 years earlier than the date of 2041 projected under the intermediate assumptions in the 2002 Trustees' Report. OCACT Model Unlike the results of the three external models, the OCACT stochastic projections produce a median simulation result of trust fund exhaustion in 2041, the same as the standard Alternative II projection. The OCACT results indicate a 10 percent chance that the combined OASDI trust funds will be exhausted by 2034 and a 10 percent chance that they will not be exhausted before 2057. Table 1. Projected trust fund exhaustion dates under assumptions of the 2002 Trustees' Report Model Low range Intermediate range High range Stochastic models CBOLT 2028 2037 2063 TL 2029 2037 2056 SSASIM a 2037/2038 a OCACT 2034 2041 2057 Standard model (Trustees' Report) 2029 2041 b NOTE: For the stochastic models, the low-, intermediate-, and high-range results are for the 10th, 50th, and 90th percentile, respectively. For the Trustees' Report, the three ranges are for the low-cost, intermediate, and high-cost assumptions in the 2002 Trustees' Report. a. For this note, SSASIM modeled only two variables—productivity and fertility—stochastically, so the range of outcomes should not be compared with those of the other stochastic models. b. According to the low-cost projections in the 2002 Trustees' Report, the trust funds are not exhausted at the end of the 75-year projection period. All three external models produce median simulation results that project that the combined OASDI trust funds will be exhausted 3 to 4 years earlier than projected under Alternative II in the 2002 Trustees' Report. By contrast, the OCACT median simulation results show trust fund exhaustion in the same year as Alternative II. Differences in the short-run behavior of the input variables and in the methodology used in modeling the long-run behavior of some variables may account for the divergent results, as explained below. The trust fund ratio (TFR), or the ratio of trust fund assets at the beginning of the year to the year's expenditures, is a common measure of financial adequacy for the OASDI trust funds. The TFR projections from the CBOLT, TL, and OCACT stochastic models are detailed in the fan charts shown below (see Chart 1). The projections from the SSASIM model are shown later since, for this note, the SSASIM model was restricted to modeling only two variables—productivity and fertility—stochastically, producing results not directly comparable with those of the other models.^2 Chart 1. Trust fund ratio projections from the CBOLT, TL, and OCACT stochastic models CBOLT model TL model OCACT stochastic model SOURCES: Data for the CBOLT model are based on December 2002 CBOLT results. Data for the TL model are based on the January 2003 TL results. Data for the OCACT stochastic model are based on data for the 2003 Trustees' Report. NOTE: The vertical lines depict the high-cost and intermediate projections of trust fund exhaustion from the 2002 Trustees' Report. The exhaustion date under the Trustees' low-cost assumptions is not shown because OCACT does not project exhaustion prior to 2076 in that case. The fan charts depict the 10th through the 90th percentiles of the simulation results. The percentiles do not represent individual simulation outcomes or paths. Rather, they describe the probability distribution of all of the simulation outcomes collectively. In each year, 10 percent of the simulated TFRs fall below the 10th percentile, another 10 percent fall between the 10th and 20th percentiles, and so on. Finally, 10 percent of the simulated TFRs for each year lie above the 90th percentile. The fan charts are assumed to describe the probability distribution of future TFR outcomes that might be realized. When the trust fund reaches zero, the combined OASDI trust funds are exhausted. Exhaustion dates shown in the fan charts match those in Table 1. In addition, the fan charts illustrate differences in how the various models depict the evolution of the trust funds over time. The standard low-cost, intermediate, and high-cost projections described in the Trustees' Report are each based on a set of assumptions for the variables that determine the financial future of the combined OASDI trust funds. Each set of assumptions consists of a short-range path and a long-range ultimate value for each variable. Those assumptions are agreed upon each year by the Trustees of the OASDI trust funds. The intermediate assumptions reflect the Trustees' best estimate for the future behavior of each variable. The new stochastic model developed by OCACT uses the intermediate assumptions as the mean, or expected value, of each of the stochastic variables. The External Models The external models also use the ultimate assumptions of Alternative II in the 2002 Trustees' Report as the long-run expected value of the stochastic variables, with one exception. The TL model uses a method known as Lee-Carter to simulate future mortality. The Lee-Carter approach generates faster mortality improvement, resulting in longer life expectancies, than do the Trustees' 2002 intermediate assumptions about mortality. The CBOLT and SSASIM methods for simulating future economic variables are similar to each other.^3 Both models use a three-variable system of equations to simulate inflation, unemployment, and interest rates. The two models differ in that the CBOLT model simulates the real (inflation-adjusted) interest rate whereas the SSASIM model simulates nominal interest rates. In addition, the CBOLT model adds an equation describing real wage growth, while the SSASIM model adds an equation for productivity growth. In contrast, the TL model adopts a different, reduced-form approach, modeling the real interest rate and the rate of real wage growth in separate equations and focusing on the effective real per capita tax rate without modeling unemployment and inflation directly.^4 Despite their different approaches, the CBOLT and TL models produce stochastic projections that are broadly similar. The median results of both models project the same date of 2037 for trust fund exhaustion, 4 years sooner than projected under Alternative II in the 2002 Trustees' Report. The SSASIM model produces median trust fund exhaustion dates of 2037 and 2038 for the two different model specifications considered. Again, these dates are 3 to 4 years earlier than projected under the Alternative II assumptions from the 2002 Trustees' Report. The OCACT Model The assumptions regarding the behavior of stochastic variables in the OCACT model differ somewhat from those of the external models. The expected values of OCACT's input variables are calibrated to the Alternative II short-range assumptions for up to 25 years before settling into the Alternative II ultimate values (the CBOLT model also follows this approach). In addition, whereas the OCACT model projects the level of the variables, the external models in many cases project a nonlinear function of a variable (for example, the logarithm of the variable). In the external models, those projected nonlinear functions lead to asymmetric responses to shocks, which cause the median projections of the trust fund ratio to lie below the Alternative II projection. Using levels rather than nonlinear functions may contribute to the OCACT model's tracking the Alternative II projections of the trust fund ratio more closely. Modeling Approaches Examined in the SSASIM Model As mentioned above, the results from the SSASIM model are not directly comparable with those of the CBOLT and TL models. The SSASIM results do, however, highlight the importance of basic approaches when modeling the OASDI trust funds stochastically. The model generates two sets of results, using two different specifications for the behavior of the stochastic variables. The results indicate greater dispersion of outcomes when a structural time-series specification is assumed for the stochastic variables. The structural time-series approach allows the behavior of stochastic variables to change over time. The more common reduced-form ARIMA specification for the stochastic variables produces less dispersion of outcomes. Such a result is expected because the ARIMA modeling approach assumes that the behavior of stochastic variables does not change over time. The structural time-series approach may produce a better fit to the historical data for some variables. The trust fund ratio projections from the SSASIM model under both approaches are shown in Chart 2. Chart 2. Comparison of trust fund ratio projections from the SSASIM model Structural time-series model ARIMA model SOURCE: Based on January 2003 SSASIM results provided by the Policy Simulation Group. NOTES: The vertical lines depict the high-cost and intermediate projections of trust fund exhaustion from the 2002 Trustees' Report. SSASIM results should not be compared directly with those of the other stochastic models because only two variables—productivity and fertility—were modeled stochastically. Additional Modeling Issues The results from the three external models differ because of the varying approaches they adopt and the varying assumptions each model makes. However, all three models have been carefully calibrated so that in the long run the expected values of stochastic variables, with the exception of mortality in the TL model, are in accord with the Alternative II assumptions from the 2002 Trustees' Report. Hence, the results are conditional on the Alternative II assumptions. To eliminate this strict reliance on those assumptions, the ultimate rates to which the stochastic variables are calibrated can themselves be treated stochastically. Incorporating stochastic ultimate rates will add an additional dimension of uncertainty to the models, resulting in greater dispersion of the trust fund outcomes (Holmer 2003). Yet another dimension of uncertainty concerns the parameter values used in the simulation of future values of the stochastic variables. Each of the models estimates its time-series equations using historical data. Once estimated, the coefficients are treated as if they are known with certainty. However, an alternative statistical approach would treat these estimated coefficients themselves as random, adding additional uncertainty to the stochastic models and resulting in still greater dispersion of the trust fund outcomes (Lee and Carter 1992). 1. An earlier version of the CBOLT model was documented in Congressional Budget Office (2001). Results from the TL model are documented in Lee, Anderson, and Tuljapurkar (2003). Those from the SSASIM model are described in Holmer (2003). 2. Previous results from the SSASIM model (Holmer 2002) that exploit its full stochastic capabilities have produced projections that are broadly similar to those of the CBOLT and TL models in all 3. For this note, the SSASIM model was restricted to modeling only two variables---productivity and fertility---stochastically. Nevertheless, the SSASIM model has the ability to model all of the major trust fund determinants stochastically and is fully as capable as the other models described in this note. It is these more general capabilities that are referred to here. 4. The TL modeling team chose this approach after finding statistical evidence that the real interest rate and real wage growth are independent. Board of Trustees of the Federal Old-Age and Survivors Insurance and Disability Insurance Trust Funds. 2002. 2002 Annual Report. Washington, D.C.: U.S. Government Printing Office. March. Available at __________. 2003 Annual Report. Washington, D.C.: U.S. Government Printing Office. March. Available at http://www.ssa.gov/OACT/TR/TR03/index.html. Congressional Budget Office. 2001. Uncertainty in Social Security's Long-Term Finances: A Stochastic Analysis. Technical Report. December. Available at http://www.cbo.gov. Holmer, Martin R. 2002. Presentation to Office of Policy, Social Security Administration, Washington, D.C. June. __________. 2003. Methods for Stochastic Trust Fund Projection. Report prepared for the Social Security Administration. January. Available at http://www.polsim.com/stochsim.pdf. Lee, Ronald D., and Lawrence Carter. 1992. "Modeling and Forecasting the Time Series of U.S. Mortality." Journal of the American Statistical Association 87(419): 659-671. Lee, Ronald D., Michael W. Anderson, and Shripad Tuljapurkar. 2003. Stochastic Forecasts of the Social Security Trust Fund. Report prepared for the Social Security Administration. January.
{"url":"http://ssa.gov/policy/docs/rsnotes/rsn2003-01.html","timestamp":"2014-04-19T04:23:37Z","content_type":null,"content_length":"27464","record_id":"<urn:uuid:70e48c2c-d7a2-4eab-a59d-7bd5fe5fc500>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- April 2003, week 2 (#116)LISTSERV at the University of Georgia Date: Wed, 9 Apr 2003 11:19:19 -0700 Reply-To: cassell.david@EPAMAIL.EPA.GOV Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU> From: "David L. Cassell" <cassell.david@EPAMAIL.EPA.GOV> Subject: Re: Sampling Question Content-type: text/plain; charset=us-ascii Action Man <wollo_desse@HOTMAIL.COM> wrote: > I have 7,000 records in my SAS file. Out of these records I want to pick 500 > of them randomly, How do I do that using SAS. Hamani Elmaache and John Ladds both gave the traditional, inefficient answer of "create a random variable, sort on it, take the first n". For small data sets, this doesn't use up much CPU or wallclock time, but it simply is not necessary. Paul Dorfman pointed out: > You will no doubt get (or have already gotten) plenty of advice how to do it > using the "standard" K/N method, where K is the sample size and N is > population size. It is based on reading all N records from the > file. It is plenty sufficient and fast in your case, where N=7000 only > K=500 is not a tiny fraction of N. Dale McLerran has written a macro to address this for a small n and large N, using POINT= to select out only the needed records. He has published this on SAS-L, so it is in the archives. Paul then presented a very nice bit of code to pull out a sample, but ended up with simple random sampling *WITH* replacement, which is what is called URS sampling in the PROC SURVEYSELECT docs. Dale McLerran replied: > The algorithm which you are proposing performs sampling with > replacement: a record may be picked more than one time. Now, > there are occasions when we do want to do sampling with > replacement, but my guess is that this is not one of them. In > order to guarantee sampling without replacement (records can > only be selected once), one could use a hash, right? As a matter of fact, this is what PROC SURVEYSELECT does under the hood. For simple random sampling problems, it can use Floyd's ordered hash-table algorithm, which is considered a very good choice for large data sets. The standard references on this algorithm are: Bentley, J.L. and Floyd, R. (1987), "A Sample of Brilliance," Communications of the Association for Computing Machinery, 30, 754-757. Bentley, J.L. and Knuth, D. (1986), "Literate Programming," Communications of the Association for Computing Machinery, 29, 364-369. So use PROC SURVEYSELECT. It is: [1] fast [2] efficient [3] safe from programming errors (sorry, Paul! :-) [4] simple [5] easier to validate or unit-test Just say: proc surveyselect data=MyInData out=MySample sampsize=500; id <list of vars to drag along>; Note that I did not even specify METHOD=SRS, since that is the default when no SIZE variable is given. I also did not specify a SEED= option. PROC SURVEYSELECT will generate a random seed and print it out in the output, so you can always re-create the sample if need be. Actually I recommend you select your own random seed, but I did it this way solely for instructional purposes. Now what could be simpler than this? You don't need to compute or indicate the size of the population. You don't need to worry about generating your own random seed. You don't need to worry about the accuracy of algorithm or the chance of errors in your code. "We do more samples before 9 am than most people do all day." David Cassell, CSC Senior computing specialist mathematical statistician
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0304b&L=sas-l&D=1&P=12534","timestamp":"2014-04-20T13:23:54Z","content_type":null,"content_length":"12473","record_id":"<urn:uuid:42ddc51c-7f1d-4729-8bd3-ae8169148134>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Highland Park, NJ SAT Math Tutor Find a Highland Park, NJ SAT Math Tutor ...For the combinatorial aspects of a typical probability course, I have taken an advanced course in combinatorics - culminating in generating functions that reduce problems about finding the number of ways to distribute indistinguishable objects into finding coefficients of a polynomial. I have ta... 21 Subjects: including SAT math, chemistry, calculus, French ...I can work with you using your school curriculum, or we can work from Calculus (2nd ed.) by Hughes-Hallett, Gleason, et al. (John Wiley & Sons, Inc: 1998). I have experience tutoring the following courses: * AP Calculus AB * AP Calculus BC * Rutgers Calculus I (Math 135) * Rutgers Multivariabl... 13 Subjects: including SAT math, reading, writing, physics ...My students have had a successful rate of passing the NYS Regents Examination. I have a degree in mathematics. Algebra 2 is just an extension of Algebra 1 but in more extensive details. 7 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...I'm currently an MFA candidate at the Graduate Film Program at NYU Tisch, and my short films have screened at Sundance, the Brooklyn Academy of Music (BAM), IFC and Rooftop Films, among many others. Much of my work has been in prepping students to take the SAT, ACT, ISEE and SSAT. I tutor high ... 36 Subjects: including SAT math, reading, chemistry, English ...I have been tutoring since the age of 14. I began as a private tutor at Kumon Math and Learning Center and continued to be a private tutor throughout my 4 years of high school. I have tutored all ages ranging from K-12. 26 Subjects: including SAT math, English, reading, writing Related Highland Park, NJ Tutors Highland Park, NJ Accounting Tutors Highland Park, NJ ACT Tutors Highland Park, NJ Algebra Tutors Highland Park, NJ Algebra 2 Tutors Highland Park, NJ Calculus Tutors Highland Park, NJ Geometry Tutors Highland Park, NJ Math Tutors Highland Park, NJ Prealgebra Tutors Highland Park, NJ Precalculus Tutors Highland Park, NJ SAT Tutors Highland Park, NJ SAT Math Tutors Highland Park, NJ Science Tutors Highland Park, NJ Statistics Tutors Highland Park, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Highland_Park_NJ_SAT_math_tutors.php","timestamp":"2014-04-20T19:18:50Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:538d377a-b9fb-42d0-a8e3-ee5607c15ce4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic solitons for negative effective second-harmonic diffraction as nonlocal solitons with periodic nonlocal response function Publication: Research - peer-review › Journal article – Annual report year: 2012 Esbensen, BK , Bache, M , Krolikowski, W & Bang, O 2012, ' Quadratic solitons for negative effective second-harmonic diffraction as nonlocal solitons with periodic nonlocal response function Physical Review A (Atomic, Molecular and Optical Physics) , vol 86, no. 2, pp. 023849., Esbensen, B. K. , Bache, M. , Krolikowski, W. , & Bang, O. Quadratic solitons for negative effective second-harmonic diffraction as nonlocal solitons with periodic nonlocal response function Physical Review A (Atomic, Molecular and Optical Physics) (2), 023849. title = "Quadratic solitons for negative effective second-harmonic diffraction as nonlocal solitons with periodic nonlocal response function", publisher = "American Physical Society", author = "B.K. Esbensen and Morten Bache and W. Krolikowski and Ole Bang", year = "2012", doi = "10.1103/PhysRevA.86.023849", volume = "86", number = "2", pages = "023849", journal = "Physical Review A (Atomic, Molecular and Optical Physics)", issn = "1050-2947", TY - JOUR T1 - Quadratic solitons for negative effective second-harmonic diffraction as nonlocal solitons with periodic nonlocal response function A1 - Esbensen,B.K. A1 - Bache,Morten A1 - Krolikowski,W. A1 - Bang,Ole AU - Esbensen,B.K. AU - Bache,Morten AU - Krolikowski,W. AU - Bang,Ole PB - American Physical Society PY - 2012 Y1 - 2012 N2 - We employ the formal analogy between quadratic and nonlocal solitons to investigate analytically the properties of solitons and soliton bound states in second-harmonic generation in the regime of negative diffraction or dispersion of the second harmonic. We show that in the nonlocal description this regime corresponds to a periodic nonlocal response function. We then use the strongly nonlocal approximation to find analytical solutions of the families of single bright solitons and their bound states in terms of Mathieu functions. AB - We employ the formal analogy between quadratic and nonlocal solitons to investigate analytically the properties of solitons and soliton bound states in second-harmonic generation in the regime of negative diffraction or dispersion of the second harmonic. We show that in the nonlocal description this regime corresponds to a periodic nonlocal response function. We then use the strongly nonlocal approximation to find analytical solutions of the families of single bright solitons and their bound states in terms of Mathieu functions. KW - Diffraction KW - Solitons U2 - 10.1103/PhysRevA.86.023849 DO - 10.1103/PhysRevA.86.023849 JO - Physical Review A (Atomic, Molecular and Optical Physics) JF - Physical Review A (Atomic, Molecular and Optical Physics) SN - 1050-2947 IS - 2 VL - 86 SP - 023849 ER -
{"url":"http://orbit.dtu.dk/en/publications/quadratic-solitons-for-negative-effective-secondharmonic-diffraction-as-nonlocal-solitons-with-periodic-nonlocal-response-function(85e6570c-2962-452a-9638-503cad79daf9)/export.html","timestamp":"2014-04-16T12:02:32Z","content_type":null,"content_length":"22275","record_id":"<urn:uuid:432fd55e-5c6d-48e2-9ee9-1d4b22ccafce>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
A divisibility problem Show that if $(a,b)=1$ and $p$ is an odd prime, then $( a+b, (a^p+b^p)/a+b ) = 1$ or $p$ $\frac{a^p+b^p}{a+b}=\sum_{j=1}^p (-1)^{j-1} a^{p-j}b^{j-1}= \sum_{j=1}^p (-1)^{j-1} (a+b \ - \ b)^{p-j}b^{j-1} \equiv pb^{p-1} \mod a+b.$ similarly $\frac{a^p+b^p}{a+b} \equiv pa^{p-1} \mod a+b.$ so if $d \mid a+b$ and $d \mid \frac{a^p + b^p}{a+b},$ then $d \mid pa^{p-1}$ and $d \mid pb^{p-1}.$ thus $d \mid p \gcd(a^{p-1},b^{p-1})=p. \ \Box$
{"url":"http://mathhelpforum.com/number-theory/131072-divisibility-problem.html","timestamp":"2014-04-17T04:19:28Z","content_type":null,"content_length":"35225","record_id":"<urn:uuid:bb97ac39-017e-4686-b5c6-6bdd81912ac7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
[Fractal Spirograph] Fractal Roulette Hi anonimnystefy, Thanks for the link. Hi bobbym, You're welcome. Hi John, John E. Franklin wrote: What are the ratios between the circles? The ratios used in the eight animated GIFs on my blog post are 3, 3, 2, 2, 2, 2, 2.5, and 2.75 respectively.
{"url":"http://mathisfunforum.com/viewtopic.php?pid=264410","timestamp":"2014-04-17T00:50:09Z","content_type":null,"content_length":"20974","record_id":"<urn:uuid:1b5b1853-28b4-461e-86fe-ce9b852a763b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
math (ruler) Number of results: 214,121 math (ruler) I need help finding the mesurment in a ruler the little bitty mark could you pull up a ruler for me with all the marks . Wednesday, April 22, 2009 at 8:40pm by kim first part: Suppose you are standing a ruler on the floor, the ruler is RS, and it is perpendicular to the floor. If T is not along this ruler, could RT be parallel to ruler RS ??? (NO) 2nd: think of a table with 4 legs. All the legs would be perpendicular to the table and ... Tuesday, May 17, 2011 at 8:49pm by Reiny A ruler is accurate when the temperature is 25°C. When the temperature drops to -16°C, the ruler shrinks and no longer measures distances accurately. However, the ruler can be made to read correctly if a force of magnitude 1.2 103 N is applied to each end so as to stretch it ... Saturday, April 5, 2008 at 3:28am by Katie A ruler is accurate when the temperature is 25°C. When the temperature drops to -16°C, the ruler shrinks and no longer measures distances accurately. However, the ruler can be made to read correctly if a force of magnitude 1.2 103 N is applied to each end so as to stretch it ... Sunday, April 6, 2008 at 12:21pm by jack A ruler is accurate when the temperature is 25°C. When the temperature drops to -16°C, the ruler shrinks and no longer measures distances accurately. However, the ruler can be made to read correctly if a force of magnitude 1.2 103 N is applied to each end so as to stretch it ... Friday, April 11, 2008 at 4:50am by Johnson A ruler stands vertically against a wall. It is given a tiny impulse at such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that . The ruler has mass 250 g and length 30 cm. Use m/s for the ... Friday, December 6, 2013 at 10:35am by juanpro Is the sun shining? If so, you could take a one-meter ruler, place it vertically on the ground, mark both the point on the ground where you placed the ruler and the point where the end of its shadow is, then take the ruler away. Next, use the ruler to measure both the length ... Friday, August 31, 2012 at 2:32pm by David Q/R Math - Measurement Is there any picture of a ruler (cm and inches) saying the parts of it???? Like which one in the ruler is a haft or 3 quarters. Tuesday, December 20, 2011 at 6:09pm by Lauren Two rulers are the same length. Ruler A is divided into 357 units while ruler B is divided into 255 units The rulers are placed side by side with the 0 marks lined up and the 375th mark of ruler A aligned with the 255th mark of ruler B. How many total places will the marks on ... Tuesday, April 30, 2013 at 8:05pm by Marshall math (ruler) On some rulers, the smallest mark is 1/16 of an inch. On others it's 1/8 of an inch. Count the number of little marks in an inch to find out what your ruler shows. Or -- if your ruler shows millimeters and centimeters, the smallest mark is a millimeter. Wednesday, April 22, 2009 at 8:40pm by Ms. Sue The precision of a metric ruler is so variable. It just depends upon the ruller, how it is marked off, the size of the ruler etc. The easiest way to determine the precision of YOUR metric ruler is to look at the marking, choose the smallest division, then estimate how close ... Wednesday, October 8, 2008 at 9:50pm by DrBob222 1- writing to explain- why might you need to measure a perimeter with a measuring tape instead of a ruler? 2- how are a ruler, a yardstick, and a measuring tape the same? how are they different? i don't 1 answer and i try 2 answer please check is it right or not 2) a ruler a ... Monday, March 5, 2012 at 9:15pm by jack Derek wants to determine the height of the top of the backboard on the basketball goal at the playground. He places a standard 12-inch ruler next to the goal post and measures the shadow of the ruler and the backboard. If the ruler has a shadow of 10 inches and the backboard ... Friday, March 30, 2012 at 4:31pm by Jae A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= 100 g and length l= 35... Monday, December 9, 2013 at 1:57am by keitanako Do you have a ruler? Draw a line from the zero on the ruler to 6 cm on the ruler. Same for 13 cm. For the 56 mm, you can draw from zero to 5.6 cm (or to the 5 cm mark PLUS six of the smaller Monday, September 8, 2008 at 4:50pm by DrBob222 Suppose the ruler in procedure 3 is asymmetrical, balancing at the 60.2 cm mark. The ruler is now supported at the 41.3 cm mark, and a mass of 364 g is placed at the 26.2 cm mark. Find the mass of the ruler. T=rF Monday, November 26, 2012 at 8:06pm by Anwar Physics Classical Mechanics A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 7:07am by Teresa A spring with 50 N/m hangs vertically next to a ruler. The end of the spring is next to the 21-cm mark on the ruler.If a 2.5-kg mass is now attached to the end of the spring, where will the end of the spring line up with the ruler marks? Monday, October 7, 2013 at 7:28pm by Anonymous Salisbury University A spring with 53 hangs vertically next to a ruler. The end of the spring is next to the 21- mark on the ruler. If a 3.0- mass is now attached to the end of the spring, where will the end of the spring line up with the ruler marks? Wednesday, April 6, 2011 at 4:09pm by Cara 1-writing to explain- why might you need to measure a perimeter with a measuring tape instead of a ruler? 2-how are a ruler, a yardstick , and a measuring tape the same? how are they different? Monday, March 5, 2012 at 5:56pm by jack A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 11:03am by Anonymous PHYSICS!!! HELP A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 2:24pm by Anonymous A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 11:56am by Anonymous A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Monday, December 9, 2013 at 10:00am by Anonymous A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 4:29pm by Anonymous Classical Mechanics Physics A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Friday, December 6, 2013 at 3:41pm by Anonymous A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Saturday, December 7, 2013 at 8:20am by Geoff World Hisory Which of the following reflects the influence of middle american civalization on North American culture ? A. Iroquois League B. Potlatch C. Cahokia temple mound D. Dog sleds I got C. Nezahualcoyotl was considered an outstanding leader for all the following EXCEPT: A. He was ... Monday, January 9, 2012 at 6:00pm by sarah A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 7:17am by Anonimus A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 3:02pm by Andy physics I'M STUCK PZL HELP A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 12:30pm by Andy I'm doing the classic experiment on "reaction time: male vs. female. " The test involve dropping a ruler and see where the subject catches it. The problem is that If i was the one to drop the ruler, depend on the subject being male or female, it would influence the way i drop ... Sunday, April 14, 2013 at 1:50am by Mina Math130 HELP Ashley's answer is not correct Her compass does not cost 70 cents more than her ruler. cost of ruler --- x cost of compass --- x=70 x + x+70 = 500 2x = 430 x = 215 Ruler costs $2.15, compass costs Thursday, May 16, 2013 at 8:58pm by Reiny Physics Classical Mechanics @ Greco u done for this q8??? need help A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(... Sunday, December 8, 2013 at 4:48am by kumar If you had a more accurate ruler with divisions of 1mm (a standard ruler you would find) what would the absolute error be? (In cm) Thursday, April 28, 2011 at 8:27am by H.T Classical Mechanics Physics - Urgent help please A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Saturday, December 7, 2013 at 5:46pm by Briggs Thanks #Greco for nizo ans got it too!!! A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(... Sunday, December 8, 2013 at 7:15am by kumar Thanks #Greco for nizo ans got it too!!! A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(... Sunday, December 8, 2013 at 7:15am by kumar Classical Mechanics Physics - Urgent help please Hey Damon, Part b and c: (b) What is the force exerted by the wall on the ruler when it is at an angle θ=30∘? Express your answer as the x component Fx and the y component Fy (in Newton) Fx= Fy= (c) At what angle θ0 will the falling ruler lose contact with the ... Saturday, December 7, 2013 at 5:46pm by Jeff A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Sunday, December 8, 2013 at 1:31pm by Josh Physics - Please Help A ruler stands vertically against a wall. It is given a tiny impulse at θ=0∘ such that it starts falling down under the influence of gravity. You can consider that the initial angular velocity is very small so that ω(θ=0∘)=0. The ruler has mass m= ... Saturday, December 7, 2013 at 7:49pm by Briggs A math teacher is randomly distributing 15 rulers with centimeter labels and 10 rulers without centimeter labels. What is the probability that the first ruler she hands out will have centimeter labels and the second ruler will not have labels. Monday, April 16, 2012 at 7:45pm by John How to read a metric ruler. Question is what lengths are indicated on this ruler. It has 5 for centimeters and 5 for mm. But some of the mm are in between the centimeters. How would I figure out what the answer is? Thanks Sunday, September 14, 2008 at 3:52pm by tyler Take a ruler and measure the height of the textbook three times. Record the three measurements, and discuss the uncertainty in your measurements. Remember to report the smallest markings on your Monday, June 20, 2011 at 10:24pm by tina Algebra 2 a math teacher is randomly distributing 15 rulers with centimeters labels and 10 rulers without centimeter labels. What si the probability that the first ruler she hands out will have centimeter labels and the second ruler will not have labels? Tuesday, April 3, 2012 at 1:06pm by Juliet When typing, when should you use the Enter key? What is word wrap? How do you set indents using the ruler? How do you set margins using the ruler? What are the four main types of tabs? Describe each. Tuesday, October 20, 2009 at 11:36am by please help(: 8th grade science the length of a piece of string is known to be exactly 9.84 cm. two students measured the string. student A used a ruler marked in centimeters and got a measurement of 10 cm. student B used a ruler marked in millimeters and centimeters and got a measurement of 9.8 cm. how ... Friday, October 2, 2009 at 1:51pm by Michaela Hello 801x-ers! I would like to give you a little clue to solve the problem. When we say "the ruler loses contact with the wall when the force exerted by the wall on the ruler vanishes" means that Fx equals zero. Good luck. Friday, December 6, 2013 at 10:35am by WLewin A spring with k = 53 N/m hangs vertically next to the 15 cm mark on the ruler. If a 2.5 kg mass is now attached to the end of the spring, where will the end of the spring line up with the ruler marks Thursday, August 19, 2010 at 5:39am by kathy Social Studies 5. Stone roads were an important development of the Inca or Aztec Inca 6.Who was Atahualpa? Aztec Ruler or Inca ruler Inca ruler 7. The Aztec chinampas were used for Farming or writing farming 8.The Columbian Exchange refers to the exchange of goods between Canada and the US ... Thursday, February 7, 2013 at 7:25pm by Jerald x is 30 cm from the fulcrum, and the center of mass of the ruler is 10 cm on the other side of it (in the middle of the ruler). The two moments balance. Therefore x*30 = 100*10 x = 33.3 g Sunday, February 19, 2012 at 3:14am by drwls Thursday, November 18, 2010 at 11:30pm by Ms. Sue An article in the local paper states that 30% of the students at Oak Grove Middle School earned a place on the Silver Honor Roll. If there are 920 students at Oak Grove, how many are on the Silver Honor Roll? USE A PERCENT RULER TO HELP YOU DECIDE. SHOW ALL YOUR WORK... I DONT... Wednesday, October 30, 2013 at 4:58pm by Robert Math130 HELP A compass and a ruler cost $5. The compass cost .70 cents more the the ruler. Jow much does the compass cost? Thursday, May 16, 2013 at 8:58pm by Enis The error can be expressed as units, or as percentage. Say you have a somewhat accurate ruler, and you measure a board. You find a length of 3m, but the lines on the ruler are rather blurred, so there's a chance of a 2mm error either way. So, the actual length could be ... Tuesday, September 20, 2011 at 7:39pm by Steve You could scan a ruler on a computer, crop the image to a one-inch segment, label the appropriate values and enlarge that to the size you want. Or you could enlarge it before labeling it. Another alternative is to draw a one inch segment of a ruler to the scale of 1" = 10" and... Sunday, September 7, 2008 at 10:30pm by PsyDAG college - visual arts C-Thru Graphic Arts Ruler A useful tool for graphic designers, this opaque plastic ruler featuring E-scale, 16ths, picas, points, and agate scales is printed on both sides and laminated for durability. It also includes percentages, halftone screens, and proofreaders’ marks. ... Tuesday, October 7, 2008 at 7:28pm by gin Ms wanton recycles colored pencils in a box for use in her art class. She has accumulated 4 red 5 blue 3 yellow and 3 green colored pencils. If one of her students reaches into the box and selects one pencil without looking what is the probabiltiy that the student will not get... Tuesday, April 16, 2013 at 6:25pm by Jerald How to tell 4/7 on a ruler Monday, May 13, 2013 at 5:28pm by ethan How to tell 4/7 on a ruler Monday, May 13, 2013 at 5:41pm by ethan on a centemeter ruler where is .13 m? Thursday, January 21, 2010 at 6:06pm by lauren you count using a ruler Tuesday, June 1, 2010 at 12:59am by Anonymous I need help with scales and converting them to real-life measurements. So, there's this line on the page that measures 1 metre in real life. I measured it with my ruler and it is 9.3 cm. So the scale is 9.3 cm = 1 m. But how would I convert measurements with my ruler into real... Thursday, March 27, 2008 at 4:25pm by Lucy math mesurment i need a printable ruler Monday, November 26, 2007 at 8:45pm by katie Use your ruler and measure. Tuesday, April 3, 2012 at 7:57pm by Ms. Sue Whoops! My computer is sending off answers before they are done. The ruler weight W' acts at the 50 cm mark. Let W be the weight added at the 30 cm mark to produce balance. Set the total moment about the fulcrum equal to zero. W*10 - W'*10 = 0 W = W' You need to know the force... Friday, October 12, 2012 at 12:53am by drwls You're going to need to use a ruler. We can't help you online with this.. Thursday, January 6, 2011 at 11:22pm by Jen explain how a ruler is different from a straight edge Wednesday, September 12, 2012 at 5:20pm by antonio what is the line segement for each length 2 1/4 and 7/8 when measured with a ruler Monday, February 4, 2013 at 10:22pm by princess Mr. Oakley, my wood shop teachger is kind of strange. He has a 12 inch strip of wood with only four marks in it that he uses as a ruler. Yesterday, I asked him about it. "What good is a ruler you use uf ut only has 4 marks on it? There are some lengths you can't measure!" "... Monday, November 15, 2010 at 6:39pm by Issa I tried to get this, but really I am unable to can someone please help me a little with this, then maybe I will be able to do the rest. make an organizer to compare government under a republic(the commonwealth), an absolute monarchy, and a constitutional monarchy. Use the ... Tuesday, September 22, 2009 at 8:25pm by sara On a ruler , where does 7/16 lie? Between..... 1/2 and 5/8 ****3/8 and 1/2 (is this the answer) 1/4 and 3/8 none of these Friday, February 8, 2008 at 3:10pm by Mike math (ruler) I still need to know what the little mark stand for. Wednesday, April 22, 2009 at 8:40pm by Anonymous 5th grade -math okay show i need to use a ruler Thursday, August 27, 2009 at 11:01pm by lexi Can you please show me how to measure 1.19 cm on a ruler Friday, July 6, 2012 at 12:40am by Dannie Maths P5 Cindy went to the store with just enough money to buy 11 files.However, she bought only 8 files and spent the rest of her money on 24 pens and rulers. The ratio of the number of pens to rulers was 3:5. Each ruler costs $0.80. Each pen costs $0.40 more than a ruler.How much did... Sunday, August 21, 2011 at 7:14am by jo How do we find the perimeter and area of an object using an inch ruler? Thursday, May 15, 2008 at 8:18pm by Ivan J do you know a where I can find a web link that would tell you how to read a ruler? thanks. Thursday, November 18, 2010 at 11:30pm by jill It is virtually impossible to measure that accurately with a ruler. Call it 1.2 cm Friday, July 6, 2012 at 12:40am by Damon Dr. bob please help-chemistry I'm wondering if the fact that the prof is supplying TWO measuring units means the accuracy can be increased over that suggested by amir. How about this? Place the marble on the floor and against the wall. Place the ruler (or the meter stick) against the marble to hold it in ... Monday, October 27, 2008 at 11:01pm by DrBob222 1 inch = 2.54 cm 1 inche = 25.4 mm 1/8 inch = 3.175 mm Since it takes appr 3 mm to cover 1/8 inch the mm would be the more accurate measurement, e.g to be off by 1/8 of an inch is the same as being off by 3 mm another way: 1 inch measured in 1/8 units takes 8 divisions of a ... Sunday, October 21, 2012 at 8:30am by Reiny math mesurment Check this site for a ruler. (Broken Link Removed) Monday, November 26, 2007 at 8:45pm by Ms. Sue If a retailer buy a dozen of ruler for #3.50 each,what is its make-up percentage? Thursday, October 18, 2012 at 11:11pm by peres Writing to Explain Describe how to use a ruler to measure to the nearest inch. Tuesday, May 21, 2013 at 10:33pm by Quan accelerated math is there a ruler thhat is just made for fractions like 1/8ths and lower Monday, November 26, 2007 at 8:45pm by marty accelerated math is there a ruler thhat is just made for fractions like 1/8ths and lower Monday, November 26, 2007 at 8:45pm by marty use the ruler and the equation to make a function table. rule: multiply by 7. p times 7 = r Thursday, September 1, 2011 at 8:04pm by stephanie A ruler is marked with inches or centimeters in order to measure flat objects. Wednesday, September 12, 2012 at 5:20pm by Ms. Sue Math (?) Measure them with ruler or measuring tape? Insufficient data to answer second question. Saturday, April 5, 2014 at 7:37pm by PsyDAG For the samples, i was thinking about recruiting 10 females age range from 19-25. Yes, i am pretty much will have a survey about general backgrounds, and number of hours of sleep on average per night. I will give the survey to the participants before testing the reaction time... Monday, March 25, 2013 at 1:29pm by Mina 4th grade math I suggest you use a ruler with centimeters and measure those objects. Wednesday, January 28, 2009 at 10:52pm by Ms. Sue A yard stick/ruler would probably be a good choice. Measuring tape perhaps. Sunday, March 27, 2011 at 9:43pm by Ben 8th grade - science the length of a string is known to be exactly 9.84cm. two students measured the string. student A used a ruler marked in centimeters and got a measurement of 10 cm. student B used a ruler marked in millimeters and centimeters and got a measurement of 9.8 cm. which student's ... Friday, October 2, 2009 at 10:20am by Michaela Use a linear model (for example, a number line, a ruler) to compare the pairs of rational numbers 1/4 and 1/6 Friday, October 9, 2009 at 6:58pm by Anonymous The tail of neil's dog is 5 1/4 inches long. This length is between which two inch marks on a ruler? Monday, March 17, 2014 at 9:32pm by Anonymous rulers --- x folders --- x+16 x + x+16 = 36 2x = 20 x = 10 , then y = 26 so she bought 10 rulers and 16 folders now to the cost: cost of ruler --- y cost of folder --- y + .5 10y + 16(y+ .5) = 20,2 10y + 16y + 8 = 20.2 26y = 12.2 y = .4692 A ruler costs $0.47 and a folder ... Thursday, February 14, 2013 at 12:33am by Reiny Leadership Priorities & Practice Choices are = baton : conductor symphony : composer stop sign : driver ruler : math Saturday, December 18, 2010 at 12:37am by Anonymous 3 grade math ms sue You have to. Find. The. Length by using a ruler. Then draw a shape with 12 units Thursday, March 8, 2012 at 7:49am by Narine Which of these tables correctly matches the list of tools with the task that can be performed with the tools? Answer A.pencil, string, ruler = duplicate a rectangle B.pencil, compass, straightedge = duplicate a triangle C.pencil, straightedge, ruler = make an exact replica of ... Tuesday, September 20, 2011 at 10:24pm by Anonymous I don't know how to use a percent ruler. However, you'll find the answer if you multiply: 920 * 0.3 = _________ students on the Silver Honor Roll Wednesday, October 30, 2013 at 4:58pm by Ms. Sue Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=math+(ruler)","timestamp":"2014-04-17T19:45:17Z","content_type":null,"content_length":"39291","record_id":"<urn:uuid:3d52ab83-03d9-40db-9bfc-01f0aa185a49>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
MITlaos: understanding Large Amplitude Oscillatory Shear (LAOS) R. H. Ewoldt, A. E. Hosoi, G. H. McKinley We have developed a framework for physically interpretating Large Amplitude Oscillatory Shear (LAOS), to make a rheological fingerprint of a complex material. For many systems the common practice of reporting only "viscoelastic moduli" as calculated by commercial rheometers (typically the first harmonic Fourier coefficients G1' , G1") is insufficient and/ or misleading in describing the nonlinear phenomena. Although the higher Fourier harmonics of the material response capture the mathematical structure, they lack a clear physical interpretation. Part of our framework gives a physical interpretation to the third-order Fourier coefficients. We build on the earlier geometrical interpretation of Cho et al. (2005) which decomposes a nonlinear stress response into elastic and viscous stress contributions using symmetry arguments. We then use Chebyshev polynomials (closely related to the Fourier decomposition) as orthonormal basis functions to further decompose these stresses into harmonic components having physical interpretations. We also introduce new measures for reporting the first-order (linear) viscoelastic moduli. These measures give deeper physical insight than reporting only the first harmonic Fourier coefficients G1', G1", and reduce to the linear viscoelastic framework of G', G" at small strains. Software is available for analyzing raw data with this framework. To request the free software, please contact MITlaos [at] mit [dot] edu Also see Randy Ewoldt's personnal webpage.
{"url":"http://web.mit.edu/nnf/research/phenomena/mit_laos.html","timestamp":"2014-04-17T04:25:24Z","content_type":null,"content_length":"3015","record_id":"<urn:uuid:2a6ea816-df44-448c-88a4-0025aa9f1e38>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
math_example package ==================== The following functions are available in the math_example package. - **addition** Adds two numbers and returns the result. - **sbutraction** Subtracts the second number from the first and returns the result. - **multiplication** Multiplies two numbers and returns the result. - **division** Divides the first number by the second number and returns the result. - **fibonacci** Applies the fibonacci sequence count times and returns the result. npm install math_example2000 Want to see pretty graphs? Log in now! 2 downloads in the last week 4 downloads in the last month math_example package The following functions are available in the math_example package. • addition Adds two numbers and returns the result. • sbutraction Subtracts the second number from the first and returns the result. • multiplication Multiplies two numbers and returns the result. • division Divides the first number by the second number and returns the result. • fibonacci Applies the fibonacci sequence count times and returns the result.
{"url":"https://www.npmjs.org/package/math_example2000","timestamp":"2014-04-16T20:10:25Z","content_type":null,"content_length":"7420","record_id":"<urn:uuid:2cdf5e21-58a3-46bd-8e76-50469d8150de>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
7. UNUSUAL FEATURES OF THE PROGRAM a. Optical model particle transmission coefficients are calculated according to a spherical optical model. Optical model parameter sets can be given in input or selected by means of acronyms out of a library available internally to the code. b. The correction for the width fluctuations has been introduced into channels leading both to discrete and to continuum level excitation. c. Composite level density like Gilbert-Cameron is used but with constant in spin distribution K = .146, when not given in input. For low lying levels between Ecut and U(x), sigma(E)**2 is auto- matically interpolated between sigma(LEVELS)**2 and sigma(U(x))**2 = K.SQRT(aU(x)).exp(A,2/3). The value sigma(LEVELS)**2 is obtained by maximum likelihood method to fit known discrete levels distribution. Sigma(LEVELS)**2 is calculated by the code on the basis of adop- ted level for each nucleus involved. Alternatively sigma(LEVELS)**2 can also be given in input, in those particu- lar nuclei where additional information is known above Ecut, as far as spin attribution is concerned. d. Optionally a parity distribution p(pi)=exp(AE+B) according to ref. 1 can also be assumed provided A and B are given in input. e. Gamma-ray transmission coefficients are calculated according to one or two Lorentzian curves for the E1 photoabsorption cross sections. Peak energy, half maximum width, peak cross section must be given in input for the E1 giant resonance. The resulting total radiative width is spin and parity depen- dent. In principle it should not be normalized because the model proved to work satisfactorily. For the purpose of evalua- tion a normalization constant N (J and pi independent) can be given in input. f. Q values are calculated from recent mass excess tables (internal to the code) provided by Wapstra in 1978 as a private communica- tion. g. i) The output are average resonance parameters like strength functions (from adopted optical model), radiative width and mean observed level spacing. ii) Angular distributions are given for compound, shape and total elastic. Total cross section and primary spectra are given for all involved particle and gamma-ray emis- sions. Compound nucleus and total cross section from optical model are given at the end together with the percentual differ- ence between compound nucleus cross section and the sum of the contribution of all channels via compound nucleus re- action mechanism.
{"url":"http://www.oecd-nea.org/tools/abstract/detail/nea-0648/","timestamp":"2014-04-18T03:43:48Z","content_type":null,"content_length":"25347","record_id":"<urn:uuid:8da1739e-8419-4e48-beab-79216994b19d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] numpy-izing a loop Stéfan van der Walt stefan@sun.ac... Tue Feb 10 15:27:31 CST 2009 2009/2/10 Robert Kern <robert.kern@gmail.com>: > x = np.arange(dim)[:,np.newaxis,np.newaxis] > y = np.arange(dim)[np.newaxis,:,np.newaxis] > z = np.arange(dim)[np.newaxis,np.newaxis,:] Yes, sorry, I should have copied from my terminal. I think I had x = np.arange(dim) y = np.arange(dim)[:, None] z = np.arange(dim)[:, None, None] which broadcasts the same way. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-February/040238.html","timestamp":"2014-04-16T19:12:31Z","content_type":null,"content_length":"3026","record_id":"<urn:uuid:ce70b86a-8838-48c9-9ae3-5527d65845d7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Turing machines - In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS , 2006 "... We show that 2-tag systems efficiently simulate Turing machines. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improve ..." Cited by 16 (7 self) Add to MetaCart We show that 2-tag systems efficiently simulate Turing machines. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improves a forty year old result in the area of small universal Turing machines. 1 - Theoretical Computer Science , 2005 "... We present a small time-efficient universal Turing machine with 5 states and 6 symbols. This Turing machine simulates our new variant of tag system. It is the smallest known universal Turing machine that simulates Turing machine computations in polynomial time. ..." Cited by 15 (8 self) Add to MetaCart We present a small time-efficient universal Turing machine with 5 states and 6 symbols. This Turing machine simulates our new variant of tag system. It is the smallest known universal Turing machine that simulates Turing machine computations in polynomial time. , 2009 "... We present universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (15, 2). These machines simulate our new variant of tag system, the bi-tag system and are the smallest known single-tape universal Turing machines with 5, 4, 3 and 2-symbols, respectively. Our 5-symbol machin ..." Cited by 13 (4 self) Add to MetaCart We present universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (15, 2). These machines simulate our new variant of tag system, the bi-tag system and are the smallest known single-tape universal Turing machines with 5, 4, 3 and 2-symbols, respectively. Our 5-symbol machine uses the same number of instructions (22) as the smallest known universal Turing machine by Rogozhin. Also, all of the universal machines we present here simulate Turing machines in polynomial time. - Machines, Computations and Universality (MCU), volume 4664 of LNCS , 2007 "... Abstract. We present three small universal Turing machines that have 3 states and 7 symbols, 4 states and 5 symbols, and 2 states and 13 symbols, respectively. These machines are semi-weakly universal which means that on one side of the input they have an infinitely repeated word, and on the other s ..." Cited by 10 (4 self) Add to MetaCart Abstract. We present three small universal Turing machines that have 3 states and 7 symbols, 4 states and 5 symbols, and 2 states and 13 symbols, respectively. These machines are semi-weakly universal which means that on one side of the input they have an infinitely repeated word, and on the other side there is the usual infinitely repeated blank symbol. This work can be regarded as a continuation of early work by Watanabe on semi-weak machines. One of our machines has only 17 transition rules, making it the smallest known semi-weakly universal Turing machine. Interestingly, two of our machines are symmetric with Watanabe’s 7-state and 3-symbol, and 5-state and 4-symbol machines, even though we use a different simulation technique. 1. - Computability in Europe 2007, volume 4497 of LNCS , 2007 "... Abstract. We present small polynomial time universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (18, 2). These machines simulate our new variant of tag system, the bi-tag system and are the smallest known universal Turing machines with 5, 4, 3 and 2-symbols respectively. O ..." Cited by 7 (3 self) Add to MetaCart Abstract. We present small polynomial time universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (18, 2). These machines simulate our new variant of tag system, the bi-tag system and are the smallest known universal Turing machines with 5, 4, 3 and 2-symbols respectively. Our 5-symbol machine uses the same number of instructions (22) as the smallest known universal Turing machine by Rogozhin. 1 "... Abstract. We give small universal Turing machines with state-symbol pairs of (6, 2), (3,3) and (2,4). These machines are weakly universal, which means that they have an infinitely repeated word to the left of their input and another to the right. They simulate Rule 110 and are currently the smallest ..." Cited by 7 (4 self) Add to MetaCart Abstract. We give small universal Turing machines with state-symbol pairs of (6, 2), (3,3) and (2,4). These machines are weakly universal, which means that they have an infinitely repeated word to the left of their input and another to the right. They simulate Rule 110 and are currently the smallest known weakly universal Turing machines. Despite their small size these machines are efficient polynomial time simulators of Turing machines. 1 , 2007 "... We survey some work concerned with small universal Turing machines, cellular automata, tag systems, and other simple models of computation. For example it has been an open question for some time as to whether the smallest known universal Turing machines of Minsky, Rogozhin, Baiocchi and Kudlek are e ..." Cited by 4 (2 self) Add to MetaCart We survey some work concerned with small universal Turing machines, cellular automata, tag systems, and other simple models of computation. For example it has been an open question for some time as to whether the smallest known universal Turing machines of Minsky, Rogozhin, Baiocchi and Kudlek are efficient (polynomial time) simulators of Turing machines. These are some of the most intuitively simple computational devices and previously the best known simulations were exponentially slow. We discuss recent work that shows that these machines are indeed efficient simulators. As a related result we also find that Rule 110, a well-known elementary cellular automaton, is also efficiently universal. We also mention some old and new universal program-size results, including new small universal Turing machines and new weakly, and semi-weakly, universal Turing machines. We then discuss some ideas for future work arising out of these, and other, results. "... My supervisor Damien Woods deserves a special thank you. His help and guidance went far beyond the role of supervisor. He was always enthusiastic, and generous with his time. This work would not have happened without him. I would also like to thank my supervisor Paul Gibson for his advice and suppor ..." Add to MetaCart My supervisor Damien Woods deserves a special thank you. His help and guidance went far beyond the role of supervisor. He was always enthusiastic, and generous with his time. This work would not have happened without him. I would also like to thank my supervisor Paul Gibson for his advice and support. Thanks to the staff and postgraduates in the computer science department at NUI Maynooth for their support and friendship over the last few years. In particular, I would like to mention Niall Murphy he has always been ready to help whenever he could and would often lighten the mood in dark times with some rousing Gilbert and Sullivan. I thank the following people for their interesting discussions and/or advice:
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3674447","timestamp":"2014-04-21T08:42:15Z","content_type":null,"content_length":"30545","record_id":"<urn:uuid:cb4fdfa4-5c12-42fb-b497-6acd62470447>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Let The Electric Field Of An Optical Signal At A... | Chegg.com Let the electric field of an optical signal at a carrier frequency vi be where Pi is the signal power at the carrier frequency vi and ϕi is the carrier phase. If N optical signals each at a different frequency viare traveling along a fiber, show that the signal power is where Ωjk = 2π(vj - vk) represents the beat frequency at which the carrier population oscillates.
{"url":"http://www.chegg.com/homework-help/let-electric-field-optical-signal-carrier-frequency-vi-bewhe-chapter-11-problem-13-solution-9780073380711-exc","timestamp":"2014-04-20T19:44:37Z","content_type":null,"content_length":"23030","record_id":"<urn:uuid:fb818cf8-b682-440c-8ff8-824b3bae4e4d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about None Terry Reedy tjreedy at udel.edu Mon Jun 15 01:36:11 CEST 2009 Paul LaFollette wrote: > Thank you all for your thoughtful and useful comments. Since this has > largely morphed into a discussion of my 3rd question, perhaps it would > interest you to hear my reason for asking it. > John is just about spot on. Part of my research involves the > enumeration and generation of various combinatorial objects using what > are called "loopless" or "constant time" algorithms. (This means that > the time required to move from one object to the next is bounded by a > constant that is independent of the size of the object. It is related > to following idea: If I generate all of the possible patterns > expressible with N bits by simply counting in binary, as many as N > bits may change from one pattern to the next. On the other hand if I > use Gray code only one bit changes from each pattern to the next. But the number of bits you must examine to determine which to change is bounded by N and increases with N. More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2009-June/540404.html","timestamp":"2014-04-20T07:11:08Z","content_type":null,"content_length":"3456","record_id":"<urn:uuid:cd5e9e1a-adf4-4bee-92f2-97f3698b40c5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Octonions, like quaternions, are a relative of complex numbers. Octonions see some use in theoretical physics. In practical terms, an octonion is simply an octuple of real numbers (α,β,γ,δ,ε,ζ,η,θ), which we can write in the form o = α + βi + γj + δk + εe' + ζi' + ηj' + θk', where i, j and k are the same objects as for quaternions, and e', i', j' and k' are distinct objects which play essentially the same kind of role as i (or j or k). Addition and a multiplication is defined on the set of octonions, which generalize their quaternionic counterparts. The main novelty this time is that the multiplication is not only not commutative, is now not even associative (i.e. there are octonions x, y and z such that x(yz) ≠ (xy)z). A way of remembering things is by using the following multiplication table: Octonions (and their kin) are described in far more details in this other document (with errata and addenda). Some traditional constructs, such as the exponential, carry over without too much change into the realms of octonions, but other, such as taking a square root, do not (the fact that the exponential has a closed form is a result of the author, but the fact that the exponential exists at all for octonions is known since quite a long time ago).
{"url":"http://www.boost.org/doc/libs/1_53_0/libs/math/doc/octonion/html/boost_octonions/octonions/overview.html","timestamp":"2014-04-19T17:43:28Z","content_type":null,"content_length":"7330","record_id":"<urn:uuid:65441114-9c04-46bd-90a3-f6da92f6386c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Not Lambda Encode Data? Datatypes are a conundrum for designers of type theories and advanced programming languages. For doing formal reasoning, we rely heavily on datatype induction, and so datatypes seem to be essential to the language design. And they are a terrible nuisance to describe formally, adding a hefty suitcase to the existing baggage we are already coping with. For example, in the “Sep3″ Trellys design we have been developing for the past 12 months now, we are contending with a programming language with Type:Type and dependent types, plus a distinct predicative higher-order logic, with syntax for proofs, formulas, and predicates. This language is getting a bit big! And then one has to describe datatypes, and you need an Ott magician to make it all work (fortunately we do have one, but still). What a hassle! Can’t we just get rid of those darn datatypes? The Pi-Sigma language of Altenkirch et al. and recent proposals of McBride et al. are out to do this, or at least minimize what one needs to accept as a primitive datatype. In this post, I want to talk about the problems you encounter if you lambda-encode your datatypes, and why I think we might be able to get around these in Sep3 Trellys. There are two terrible snarls that we face with lambda-encoding datatypes, and they both have to do with eliminations (that is, with the ways we are allowed to use data that we have or assume have already constructed). The first snarl is multi-level elimination. Suppose you want to allow the construction of data by recursion on other data (standard), together with construction of types by recursion on data (exotic: “large eliminations”). Then you have a big problem right away with using lambda-encoded data, because lambda-encoding with either the Scott or Church encoding requires fixing a level for elimination of your data. Let me say more about this, looking for the moment just at the well-known Church encoding. In this encoding, a numeral like 2 is encoded as $\lambda s. \lambda z. s (s z)$. I am writing this Curry-style (without typing annotations) in the hopes of making it more readable. The type you want to assign to this term is then $\forall A:\textit{Type}. (A \rightarrow A) \rightarrow A \rightarrow A$. The numeral 2 has been encoded as its own iterator: given a function $s$ of type $A \rightarrow A$ for some type $A$, and a starting point $z$ of type $A$, the numeral 2 will apply $s$ twice starting with $z$. The problem with this encoding is that it is forced to pick a level of the language for $A$. Here, we said that $A$ is a type, so values computed by 2 must be at the term level of the language. So we cannot use this encoding of 2 if we want to compute at the type level. For that, we would need the type $\forall A:\textit {Kind}. (A \rightarrow A) \rightarrow A \rightarrow A$, and that is quite bad since if we are going to have impredicative quantification over kinds as well as types, we will lose normalization (I don’t have a precise reference for this, but heard it through the type theory grape vine, attributed to Thorsten Altenkirch, that impredicativity is a trick we can only play once.) Anyhow, even if we had such impredicativity, we would need to assign multiple types to this encoded 2, and that sounds very worrisome: users are carefully crafting annotations to show a single typing, and it would become quite wild, a priori, to be crafting annotations to achieve multiple typings. The second snarl is dependent elimination. If we want to use Church-encoded 2 for writing dependently typed functions, or (as is the same under the Curry-Howard isomorphism) proving theorems, the simple polymorphic type we just considered is not good enough. From an encoded natural number $n$, we want to produce data of type $C\ n$, for some type constructor $C$. So instead of $\forall A. (A \rightarrow A) \rightarrow A \rightarrow A$, we really want something like $\forall C. (\forall n. (C n) \rightarrow (C (n+1)) \rightarrow (C\ 0) \rightarrow \forall n. (C\ n)$. But this looks problematic, as we are trying to define the $\textit{nat}$ type itself! What types can we possibly give to $C$ and $n$ in that definition? Their types need to use $\textit{nat}$ itself, which we are trying to define. Now we could use recursive types to try to get around that problem, but the situation is actually even worse: the pseudo-type I wrote is supposed to be something like the definition for $\textit{nat}$. That is, we would expect our encoding of 2 to have that type. But then we see that even as a pseudo-type, it does not make sense. We actually want 2 to have the type $\forall C. (\forall n. (C n) \rightarrow (C (n+1)) \rightarrow (C\ 0) \rightarrow (C\ 2)$. That is, its type should mention 2 itself! In general, we want a natural number $N$ to have type $\forall C. (\forall n. (C n) \rightarrow (C (n+1)) \rightarrow (C\ 0) \rightarrow (C\ N)$. How can something’s type mention the thing itself? This reminds me of very dependent function types (see Section 3 of Formal Objects in Type Theory Using Very Dependent Types by Jason Hickey, for example). It seems pretty exotic. Now in the context of our Sep3 Trellys design, there is actually a little light here through all these complex difficulties for lambda encodings. First of all, we have Type : Type in the programming part of Sep3. So we could hope that the problem with multi-level elimination would go away. It actually doesn’t with our current design, because we have adopted a predicative higher-order logic on the logic side, which has actually forced us to add a universe hierarchy on the programming side. So we have Type 0 : Type 0, and then Type 0 : Type 1 : Type 2 … Now it occurs to me that all this trouble would just go away, and the language design would get much simpler, if we just bit the bullet and adopted an impredicative higher-order logic. I immediately lose a lot of confidence that our logical language would be normalizing — but that’s what we do careful metatheoretic study for. There’s no reason to assume that impredicativity will cost us consistency, so let’s not be pessimistic. If we do not stratify the logical side of the language, we need not stratify the programming side, and so we won’t have multiple levels at which to eliminate an encoded piece of data. This would be great, as it would (almost) knock down the first objection to lambda-encoding datatypes. There’s another issue one would have to worry about, though, and that is eliminating data in proofs (since Sep3 distinguishes proofs and terms). I am going on a bit too long, and my thinking isn’t settled yet, but there’s one very nice thing we could hope for about eliminating data in proofs. Instead of using the Church encoding, we could use the Scott encoding. Here, 2 is encoded as $\lambda s. \lambda z. s\ 1$. Data are encoded as their own case-statements, rather than as their own iterators. Why is this relevant? Well, I have to note first that now we really will need recursive types of some kind for this, because intuitively, we want the (simple) type for Scott encoded 2 to be $\ forall A.(\textit{nat} \rightarrow A) \rightarrow A \rightarrow A$. Note the reference to $\textit{nat}$ in the type assigned for $s$. We need to use a recursive type to allow $\textit{nat}$ to be defined in terms of itself like this. But here’s the cool observation: Garrin Kimmell and Nathan Collins proposed for Trellys (both Sep3 and U. Penn. Trellys) that we base induction on an explicit structural subterm ordering in the language. We have an atomic formula $a < b$ for expressing that the value of terminating $a$ is structurally smaller than the value of terminating $b$. The obvious but beautiful observation is that with the Scott encoding, we really do have $1 < 2$. Note that this is not true with the Church encoding, since $\lambda s. \lambda z. s\ z$ is not a subterm of $\lambda s. \lambda z. s\ (s\ z)$. Ok, I’ve gone on too long and haven’t gotten to address the problem with dependent elimination yet. But suffice it to say that lambda-encoding datatypes, which was always eminently attractive, is starting to look like a possibility for Sep3. 8 Comments 1. Hello Aaron, For the fact that impredicativity does not “stack”, this fact is a consequence of the inconsistency of Girards system U- which does exactly that: integrate imprediative polymorphism at the kinding level. A very nice proof of this fact, due to Coquand, involves encoding the proof that impredicativity is not set-theoretical within the type theory itself, see: T. Coquand, A New Paradox in Type Theory http://www.cse.chalmers.se/~coquand/nyparadox.ps Of interest might be: Geuvers, Induction is not derivable in second-order dependent type theory, which is a negative result about church encodings of natural numbers similar to the one you mention, but it is preceded by an interesting discussion about how to potentially encode inductive types with an interesting dependent inductive principle in an impredicative type theory. I don’t know what happens with his “clever encoding” in the case with universes (his proof obviously breaks down in that case). 2. Thanks a lot for the references, Cody! I am particularly interested in taking a look at the article by Geuvers you mentioned. 3. I could also point you to this discussion on Oleg’s blog: About representations of datatypes in lambda-calculus. This might be less interesting for your needs, but i think it bears reminding that the church natural numbers are not adequate computationally: there is no way to compute the predecessor of a church natural number in constant time. I would also like to expand on the comment above: H. Geuvers explains that if we define Nat to be: Nat = forall X. X -> (X -> X) -> X And 0 and successor defined in the usual way Then the type forall P: Nat -> Prop, P 0 -> (forall m. P m -> P (S m)) -> forall n, P n is not inhabited in the CoC. If we define Ind n = forall P: Nat -> Prop, P 0 -> (forall m. P m -> P (S m)) -> P n And then define Nat_1 = exists x:Nat, Ind x Then Geuvers says: forall P: Nat_1 -> Prop, P 0 -> (forall m. P m -> P (S m)) -> forall n, P n Is uninhabited as well. In Type : Type, this type is of course inhabited as are all types, but the question becomes, can it be inhabited by something that has the correct computational behaviour? If not is there some variation of the trick which makes this work? □ I’m not sure how well known this is, but it is fairly easy to show for various elementary datatypes that parametricity (and possibly extensionality) implies strong induction principles for Church encodings. For instance: forall R. R -> R is provably uniquely inhabited by appeal to parametricity, and one can also define strong elimination of dependent pairs. I don’t think I’ve ever managed to work through the proof for Church numerals, but I expect it’s possible for those, as well (I have done binary sums). One problem is finding a computational interpretation of parametricity, though. Jean Philippe Bernardy has done some work on this with microAgda, and wrote a paper I believe. But I remember being a little perplexed by the results; parametricity was given some sort of proof irrelevant interpretation, which I didn’t expect since it appears to enable us to do computation of some sort. But I won’t claim my intuition for the area is particularly well developed. The other issue is that this is still no solution to the problem of how to do large elimination, which you really want to prove things like disjointness of constructors. ☆ As an addendum: I suppose the answer to the parametricity thing may be: it doesn’t allow us to do any additional computation, just prove that the computation we can already do has a stronger type. But I don’t really know. ☆ I must admit that I am incapable of finding the proof of the “strong elimination” principle for the church numerals, given the parametricity theorem (and extensionality). I also am incapable of solving the problem (of proving the strong elimination principle with the right computational properties) when placing myself in the Type:Type theory, even with increasingly complex definitions of Nat. I would be very grateful if you could send any solutions my way if you find them :)… The only thing i remember when i perused Bernardy’s papers and blog posts is how much it reminded me of Harper and Licata’s “Canonicity for 2-dimentional Type Theory”. I would also like to know if this is just a coincidence… ☆ I don’t think I’ve seen the Church numerals case, and I’ve never worked through it personally because it was further along the complexity curve than I felt like going back when I was fooling with this. My rather messy Agda file is here: It has top, bottom, binary products, sigma, booleans and binary sums. I also got part of the way along the naturals, proving that either n z s = z or n z s = s (m z s) for some m. From there it’s usually not hard to prove that n = zero or n = suc m for some m. However, the recursion for naturals is, obviously I suppose, novel compared to all the other encodings. In the n = suc m case, we want to recurse on m, but you can’t just do that, it has to be performed by using the recursion built into n, and I’m not sure how or if that’s going to work out. So, I suppose it’s still an open question. 4. Yes, I prefer working with Scott encodings over Church encodings, for reasons like the one you mentioned about computing predecessor. For our Sep3 design, dependent elimination in the programmatic part of the language is not the same as induction in the logical part of the language. In my next post (maybe next week), I hope to explain some new ideas for how to support both induction (logical fragment) and dependent elimination, using parametricity for the former and self types (which I think are new) for the latter.
{"url":"https://queuea9.wordpress.com/2012/03/28/why-not-lambda-encode-data/","timestamp":"2014-04-17T20:12:22Z","content_type":null,"content_length":"76281","record_id":"<urn:uuid:52a3c24f-22a7-4548-b3fb-69eebd311007>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
On This Day in Math - February 25 People must understand that science is inherently neither a potential for good nor for evil. It is a potential to be harnessed by man to do his bidding. ~Glenn T. Seaborg The 56th day of the year; There are 56 normalized 5x5 Latin Squares ( First row and column have 1,2,3,4,5; and no number appears twice in a row or column. There are a much smaller number of 4x4 squares, try them first 1598 John Dee demonstrates the solar eclipse by viewing an image through a pinhole. Two versions from Ashmole and Aubrey give different details of who was present. Dee's Diary only contains the notation, "the eclips. A clowdy day, but great darkness about 9 1/2 maine " *Benjamin Wooley, The Queen's Conjuror 1606 Henry Briggs sends a Letter to Mr. Clarke, of Gravesend, dated from Gresham College, with which he sends him the description of a ruler, called Bedwell's ruler, with directions how to use it. (it seems from the letter to be a ruler for measuring the volume of timber. If you have information on where I could see a picture or other image of the device, please advise) *Augustus De Morgan, Correspondence of scientific men of the seventeenth century 1870 Hermann Amandus Schwarz sent his friend Georg Cantor a letter containing the first rigorous proof of the theorem that if the derivative of a function vanishes then the function is constant. See H. Meschkowski, Ways of Thought of Great Mathematicians, pp. 87–89 for an English translation of the letter. *VFR 1959 The APT Language is Demonstrated: The Automatically Programmed Tools language is demonstrated. APT is an English-like language that tells tools how to work and is mainly used in computer-assisted manufacturing. NEW YORKER: Cambridge, Mass. - Feb. 25: The Air Force announced today that it has a machine that can receive instructions in English - figure out how to make whatever is wanted- and teach other machines how to make it. An Air Force general said it will enable the United States to build a war machine that nobody would want to tackle. Today it made an ashtray. *CHM 1976 Romania issued a stamp picturing the mathematician Anton Davidoglu (1876–1958). [Scott #2613] *VFR 1670 Maria Winckelmann (Maria Margarethe Winckelmann Kirch (25 Feb 1670 in Panitzsch, near Leipzig, Germany - 29 Dec 1720 in Berlin, Germany) was a German astronomer who helped her husband with his observations. She was the first woman to discover a comet.*SAU 1827 Henry William Watson (25 Feb 1827 in Marylebone, London, England - 11 Jan 1903 in Berkswell (near Coventry), England) was an English mathematician who wrote some influential text-books on electricity and magnetism. *SAU 1902 Kenjiro Shoda (February 25, 1902 – March 3, 1977 *SAU gives March 20 for death) was a Japanese mathematician. He was interested in group theory, and went to Berlin to work with Issai Schur. After one year in Berlin, Shoda went to Göttingen to study with Emmy Noether. Noether's school brought a mathematical growth to him. In 1929 he returned to Japan. Soon afterwards, he began to write Abstract Algebra, his mathematical textbook in Japanese for advanced learners. It was published in 1932 and soon recognised as a significant work for mathematics in Japan. It became a standard textbook and was reprinted many times.*Wik 1922 Ernst Gabor Straus (February 25, 1922 – July 12, 1983) was a German-American mathematician who helped found the theories of Euclidean Ramsey theory and of the arithmetic properties of analytic functions. His extensive list of co-authors includes Albert Einstein and Paul Erdős as well as other notable researchers including Richard Bellman, Béla Bollobás, Sarvadaman Chowla, Ronald Graham, László Lovász, Carl Pomerance, and George Szekeres. It is due to his collaboration with Straus that Einstein has Erdős number 2. *Wik 1926 Masatoşi Gündüz İkeda (25 February 1926, Tokyo. - 9 February 2003, Ankara), was a Turkish mathematician of Japanese ancestry, known for his contributions to the field of algebraic number theory. 1723 Sir Christopher Wren (20 Oct 1632; 25 Feb 1723) Architect, astronomer, and geometrician who was the greatest English architect of his time ( Some may suggest Hooke as an equal ) whose famous masterpiece is St. Paul's Cathedral, among many other buildings after London's Great Fire of 1666. Wren learned scientific skills as an assistant to an eminent anatomist. Through astronomy, he developed skills in working models, diagrams and charting that proved useful when he entered architecture. He inventing a "weather clock" similar to a modern barometer, new engraving methods, and helped develop a blood transfusion technique. He was president of the Royal Society 1680-82. His scientific work was highly regarded by Sir Isaac Newton as stated in the Principia. *TIS (I love the message on his tomb in the Crypt of St. Pauls: Si monumentum requiris circumspice ...."Reader, if you seek his monument, look about you." Lisa Jardine's book is excellent 1786 Thomas Wright (22 September 1711 – 25 February 1786) was an English astronomer, mathematician, instrument maker, architect and garden designer. He was the first to describe the shape of the Milky Way and speculate that faint nebulae were distant galaxies.*Wik 1947 Louis Carl Heinrich Friedrich Paschen (22 Jan 1865; 25 Feb 1947) was a German physicist who was an outstanding experimental spectroscopist. In 1895, in a detailed study of the spectral series of helium, an element then newly discovered on earth, he showed the identical match with the spectral lines of helium as originally found in the solar spectrum by Janssen and Lockyer nearly 40 years earlier. He is remembered for the Paschen Series of spectral lines of hydrogen which he elucidated in 1908. *TIS 1950 Nikolai Nikolaevich Luzin, (also spelled Lusin) (9 December 1883, Irkutsk – 28 January 1950, Moscow), was a Soviet/Russian mathematician known for his work in descriptive set theory and aspects of mathematical analysis with strong connections to point-set topology. He was the eponym of Luzitania, a loose group of young Moscow mathematicians of the first half of the 1920s. They adopted his set-theoretic orientation, and went on to apply it in other areas of mathematics.*Wik 1972 Władysław Hugo Dionizy Steinhaus (January 14, 1887 – February 25, 1972) was a Polish mathematician and educator. Steinhaus obtained his PhD under David Hilbert at Göttingen University in 1911 and later became a professor at the University of Lwów, where he helped establish what later became known as the Lwów School of Mathematics. He is credited with "discovering" mathematician Stefan Banach, with whom he gave a notable contribution to functional analysis through the Banach-Steinhaus theorem. After World War II Steinhaus played an important part in the establishment of the mathematics department at Wrocław University and in the revival of Polish mathematics from the destruction of the war. Author of around 170 scientific articles and books, Steinhaus has left its legacy and contribution on many branches of mathematics, such as functional analysis, geometry, mathematical logic, and trigonometry. Notably he is regarded as one of the early founders of the game theory and the probability theory preceding in his studies, later, more comprehensive approaches, by other scholars. *Wik His Mathematical Snapshots is a delight to read, but get the first English edition if you can—there are lots of surprises there. *VFR "When Steinhaus failed to attend an important meeting of the Committee of the Polish Academy of Sciences in 1960, he received a letter chiding him for "not having justified his absence." He immediately wired the President of the Academy that "as long as there are members who have not yet justified their presence, I do not need to justify my absence." [ Told by Mark Kac in "Hugo Steinhaus -- A Remembrance and a Tribute," Amer. Math. Monthly 81 (June-July 1974) 578. ] * http://komplexify.com 1988 Kurt Mahler (26 July 1903, Krefeld, Germany – 25 February 1988, Canberra, Australia) was a mathematician and Fellow of the Royal Society. Mahler proved that the Prouhet–Thue–Morse constant and the Champernowne constant 0.1234567891011121314151617181920... are transcendental numbers. He was a student at the universities in Frankfurt and Göttingen, graduating with a Ph.D. from Johann Wolfgang Goethe University of Frankfurt am Main in 1927. He left Germany with the rise of Hitler and accepted an invitation by Louis Mordell to go to Manchester. He became a British citizen in 1946. He was elected a member of the Royal Society in 1948 and a member of the Australian Academy of Science in 1965. He was awarded the London Mathematical Society's Senior Berwick Prize in 1950, the De Morgan Medal, 1971, and the Thomas Ranken Lyle Medal, 1977. *Wik 1999 Glenn Theodore Seaborg (April 19, 1912(Ishpeming, Michigan) – February 25, 1999) was an American scientist who won the 1951 Nobel Prize in Chemistry for "discoveries in the chemistry of the transuranium elements", contributed to the discovery and isolation of ten elements, and developed the actinide concept, which led to the current arrangement of the actinoid series in the periodic table of the elements. He spent most of his career as an educator and research scientist at the University of California, Berkeley where he became the second Chancellor in its history and served as a University Professor. Seaborg advised ten presidents from Harry S. Truman to Bill Clinton on nuclear policy and was the chairman of the United States Atomic Energy Commission from 1961 to 1971 where he pushed for commercial nuclear energy and peaceful applications of nuclear science. The element seaborgium was named after Seaborg by Albert Ghiorso, E. Kenneth Hulet, and others, who also credited Seaborg as a co-discoverer. It was so named while Seaborg was still alive, which proved controversial. He influenced the naming of so many elements that with the announcement of seaborgium, it was noted in Discover magazine's review of the year in science that he could receive a letter addressed in chemical elements: seaborgium, lawrencium (for the Lawrence Berkeley Laboratory where he worked), berkelium, californium, americium (Once when being aggressively cross-examined during testimony on nuclear energy for a senate committee, the Senator asked, “How much do you really know about Plutonium.” Seaborg quietly answered, “Sir, I discovered it.” , Which he did as part of the team at the Manhattan Project. *Wik Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
{"url":"http://pballew.blogspot.com/2013/02/on-this-day-in-math-february-25.html","timestamp":"2014-04-17T07:04:36Z","content_type":null,"content_length":"112123","record_id":"<urn:uuid:7e06afcb-2348-4b12-b008-e4ea91e3e3ca>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Breaking Down GMATPrep Weighted Average Problems Before we begin, I want to mention that every weighted average problem I’ve seen on GMATPrep is a Data Sufficiency question. This doesn’t mean that they’ll never give us a Problem Solving weighted average problem, but it does seem to be the case that the test-writers are more concerned with whether we understand how weighted averages work than with whether we can actually do the calculations. So we’re going to work on that conceptual understanding today and then we’ll discuss a neat calculation shortcut next week (built on the same principles!), just in case we do need to solve. Let’s start with a sample problem. Set your timer for 2 minutes. and GO! * At a certain company, the average (arithmetic mean) number of years of experience is 9.8 years for the male employees and 9.1 years for the female employees. What is the ratio of the number of the company’s male employees to the number of the company’s female employees? (1) There are 52 male employees at the company. (2) The average number of years of experience for the company’s male and female employees combined is 9.3 years. Given a certain company, we’re asked to determine the ratio (not a real number, just a ratio “ key point!) of two subgroups that together make up all employees: males to females. So, what do we know? We know that male employees have an average of 9.8 years of experience. We also know that female employees have an average of 9.1 years of experience. What would be useful to solve? It would be useful to know about the actual number of male and female employees. Alternatively, it would be useful to know about the relationship between the number of male employees and the number of female employees. (For example, if they told me 60% of the employees were female, then I would know the ratio of males to females was 40:60, or 2:3, even though I wouldn’t know the actual number of employees.) Most of what we’re going to do next is just to explain how weighted averages work. Once you understand how this works, you will not actually have to do these calculations on DS questions (this will take way longer than 2 minutes!); you’ll be able to determine conceptually whether enough info was provided to solve. In the given problem, could there be equal numbers of male and female employees? Go take a look at the problem again and see what you think. Let’s say that there are, in fact, 50 male employees and 50 female employees. If the male employees’ average experience is 9.8 years and the female employees’ average experience is 9.1 years, then what is the average experience for the whole group? That would just be the average of 9.8 and 9.1. Is the average of those two numbers 9.3 (the total group average given in statement 2)? No. So now we know we’ve got a weighted average problem; in other words, the number of male employees is not equal to the number of female employees. (Bonus question: can you tell, just based on what we’ve discussed so far, whether there are more male or female employees?) In order to understand how weighted averages work, let’s calculate a few things and let’s start by using the weighted average formula to see what happens in a case where we have equal numbers of employees (which, again, is not true for this problem “ we’re just examining the concept). We know the two sub-group averages, 9.8 and 9.1, and we’re also assuming an equal weighting of the two averages, 50:50, which simplifies to 1:1. Put that ratio, 1:1, in a form where the two parts add up to 1: ½:½. Each average gets paired with its adds up to 1 weighting: [(9.8)(1/2) + (9.1)(1/2)] = 9.45 Because the weightings are equal, this can be simplified to the standard average formula (below); this is why we don’t bother calculating the adds up to 1 weighting when the weightings are equal. (9.8 + 9.1) / 2 = 9.45 What if the weightings are not equal, though? Let’s say that there were 40 male employees and 60 female employees. Then, the ratio would be 40:60, or 2:3, and the adds up to 1 weighting would be 2/ 5: 3/5. (The easiest way to determine the adds up to 1 weighting is to first add the two parts of the given ratio, 2 and 3, to get 5. 5 becomes the denominator of both fractions and the original numbers, 2 and 3, become the numerators of each respective fraction: 2/5 and 3/5.) The weighted average formula would become: [(9.8)(2/5) + (9.1)(3/5)] = 9.38 Here’s the abstract version of this formula: [(average #1)(a) + (average #2)(b)] = weighted average, where: a + b = 1, and a and b represent the relative weightings of the two sub-groups In the given problem, we don’t know a and b, but we do know the two sub-group averages, 9.8 and 9.1, so we can write these two formulas: 9.8a + 9.1b = c a+b = 1 We need to see whether we have enough information in the statements such that a and b could be calculated. Statement 1 says There are 52 male employees at the company. That gives us an actual number for the male employees; that might be good. We want the male to female ratio, though; does this statement tell us anything about the other group, female employees? No. Not sufficient. Eliminate answers A and D. Statement 2 says The average number of years of experience for the company’s male and female employees combined is 9.3 years. That’s something we can add to one of our formulas: c, the weighted average, is 9.3: 9.8a + 9.1b = 9.3 a+b = 1 What do we have? We have two distinct, linear equations with two variables, a and b. Can we solve for a and b? Yes! Sufficient! The correct answer is B. We can simplify this further (for future data sufficiency questions) by saying: if we know the two sub-group averages and we know the overall weighted average, then we know we can solve for a and b, the relative weightings of the two sub-groups. (Don’t bother to write the equations, of course “ it’s data sufficiency!) In this case, a:b represents the requested ratio (male:female). Key Takeaways for Data Sufficiency Weighted Average Problems: (1) Determine that you have a weighted average problem: this occurs when an average is discussed or could be calculated, but that average is not a standard 1:1 or equally weighted average. (2) Carefully write down what you were asked to solve, then determine what you know, what you don’t know, and what you would need to know in order to solve (before you look at the statements). Remember that, if you have two sub-group averages and the overall average, then you can determine the relative weightings of the sub-groups. (3) Check the given statements to see whether you can find a match (that is, a statement tells you what you had already decided you would need to know in order to solve). Answer to bonus question: there are more female employees at the company because the weighted average, 9.3, is closer to 9.1 (the female employee figure) than to 9.8 (the male employee figure). Click here to read the second article in this series, where we’ll elaborate on this concept. * GMATPrep questions courtesy of the Graduate Management Admissions Council. Usage of this question does not imply endorsement by GMAC. 14 responses to Breaking Down GMATPrep Weighted Average Problems Hi Stacey, This really helped me refresh the weighted average concept. Thank you. By the way, I think you meant to say “9.8a + 9.1b = 9.3″ instead of “9.8x + 9.1y = 9.3″. You’re welcome. And thanks! You’re right – I’ll alert the editing team! Great news I recently find your blog and have been reading along. I thought I would leave a comment. Hello!purchase cialis , purchase ha ha ha , ha ha ha side effects , nexium.cheap nexium , generic ha ha ha , [...] dating experiences can be different, some found their partners here and some not. More information: click here This entry was posted in Uncategorized. Bookmark the permalink. ← Does He Want a [...] Boy that looks good! Lider Chat sitesine girmek icin mirc indir Paylasimlarda bulunmak icin forum sitemize sohbet etmeye buyurun Sick and tired of your website getting useless low traffic? Well i want to let you know about a new and amzing profit system that lets you get high quality targeted traffic and LOTS of it! You can use this traffic to sell products such as ClickBank products and start earning money within 2 hours. With this new system i personally made $4,100 in the first week and last month made just over $219,000. Got your attention? I dont blame you, this is selling out quick. For a video tutorial on everything you need to know follow this link and start earning big TODAY! Auto Traffic As for me it is a really good point of view. I meet people who rather say what they suppose others want to hear. Good and well written! I will come back to your site for sure! Many of the feedback website is ridiculous. TOMI at golf clubs. PowerBilt at golf equipment. Pin Balls at golf shoes. The Swing Stabilizer at golf balls and Wilson Golf. rehearsals rwanda abcs recreation finance performance ashram fruitful awesome contd pmaustralian Russian ha ha ha Doll Gosh, Ive been looking about this specific topic for about an hour, glad i found it in your website! You must be logged in to post a comment.
{"url":"http://www.manhattangmat.com/blog/index.php/2012/10/01/breaking-down-gmatprep-weighted-average-problems/","timestamp":"2014-04-19T06:52:10Z","content_type":null,"content_length":"84009","record_id":"<urn:uuid:c800e099-a32b-472b-90d9-d9abc048a42b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/mariab2014/asked","timestamp":"2014-04-17T06:58:14Z","content_type":null,"content_length":"102373","record_id":"<urn:uuid:3c4ebc61-c4ff-4e18-ab41-73226f4740e7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply hi Skylor, From the look of this question, you are expected to do this without a calculator. Set adjacent = 40 and hypotenuse = 41 . Then work out the 'opposite' using Pythagoras. (As the angle is not acute you may wonder if this is valid but, at this stage you may treat it as if it is acute) That means you can write down a value for sin θ. But now you must adjust your answer (+ or -) according to which quadrant this angle lies in. After that you can get the angles you want with sin2θ = 2sinθcosθ cos2θ= (cosθ)^2 - (sinθ)^2 tan 2θ = 2tanθ/( 1 - [tanθ]^2 ) or sin2θ/cos2θ Of course you can also check your answers using a calculator.
{"url":"http://www.mathisfunforum.com/post.php?tid=18309&qid=236976","timestamp":"2014-04-19T12:51:16Z","content_type":null,"content_length":"17368","record_id":"<urn:uuid:64e444bb-f5ec-4c5b-8e42-8caab41f0b0f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Video Library Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all available On-Demand from this Video Library and on Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org. I will survey recent feasibility results on building multi-party cryptographic protocols which manipulate quantum data or are secure against quantum adversaries. The focus will be protocols for secure evaluation of quantum circuits. Along the way, I'll discuss how quantum machines can (and can't) prove knowledge of a secret to a distrustful partner. The talk is based on recent unpublished results, as well as older joint work with subsets of Michael Ben-Or, Claude Crepeau, Daniel Gottesman, and Avinatan Hasidim (STOC '02, FOCS '02, Eurocrypt '05, FOCS '06). Among the possible explanations for the observed acceleration of the universe, perhaps the boldest is the idea that new gravitational physics might be the culprit. In this colloquium I will discuss some of the challenges of constructing a sensible phenomenological extension of General Relativity, give examples of some candidate models of modified gravity and survey existing observational constraints on this approach. Dark matter and dark energy can be explained without resorting to exotic fields if one accepts that the geometry of spacetime is governed by suitable generalized gravitational theories based on Lagrangians that are non-linear in the curvature of a metric and/or a torsionless linear connection, i.e. in second order and first order formalisms. We show that the current accelerated expansion of the Universe can be explained without resorting to dark energy. Models of generalized modified gravity, with inverse powers of the curvature can have late time accelerating attractors without conflicting with solar system experiments. We have solved the Friedman equations for the full dynamical range of the evolution of the Universe. This allows us to perform a detailed analysis of Supernovae data in the context of such models that results in an excellent fit.
{"url":"http://perimeterinstitute.ca/video-library?title=&page=618","timestamp":"2014-04-21T02:27:26Z","content_type":null,"content_length":"66756","record_id":"<urn:uuid:715086cf-8569-4788-8abd-7da65dffa5a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Two-Way ANOVA (Analysis of Variance) in Microsoft Excel - ASP Free Two-Way ANOVA (Analysis of Variance) in Microsoft Excel For those times when you need to analyze two factors simultaneously, a one-way analysis of variance (ANOVA) is not enough. That’s when you need to perform a two-way ANOVA. Doing the analysis manually can take quite some time, however. This article shows you how to do it with the help of Microsoft Excel. This is a tutorial extension of Using MS Excel for One-way Analysis of Variance. In this tutorial, you will learn how to perform two-way analysis of variance in MS Excel. This is a more complicated analysis as compared to one-way ANOVA because it will enable you to investigate two factors simultaneously in a single experiment. In the article linked to above (for the one way ANOVA), only one factor is investigated to determine whether it has an effect on the response (e.g. electrical resistance). A two-way factor analysis of variance is of vital importance in engineering, information technology and the academic/research field. Instead of doing two independent one-way ANOVAs, more information and efficiency is gained from doing a single experiment that investigates the effects of the two factors. A two-way ANOVA is the simplest form of experimental design covered in more advanced topics of inferential statistics called Design of Experiments (DOE). {mospagebreak title=Best practices for two-way analysis of variance} There are at least six things that you need to know and keep in mind before you can conduct a two-way ANOVA. First, has the data that was gathered been replicated? This is not necessary, but highly recommended. Say for example that you are studying the effects of two factors (material type and treating laboratory) on the thickness of a coating. Then you might gather samples three times in each laboratory for a specific material, because replication increases the accuracy and precision of the experimental results. Second, the data must be quantitative. A two-way ANOVA is the statistical/quantitative analysis of means, so make sure that the data gathered are purely numerical. Using decimals can add precision and accuracy. Up to three places is good for a tight comparison — for example: 1.323, 1.456, 2.343. A “YES” or “NO” answer for gathered data cannot be used in a two-way ANOVA. However, there are devices that precisely provide this level of accuracy (example: analog- based devices). Third, bear in mind that it is not good enough to have measurable and quantifiable responses; you also need to ensure that the data gathered is accurate and precise. Confirming the accuracy, consistency or precision of a measuring device is covered in a separate statistical topic called MSA (Measurement Systems Analysis), which is not discussed in this article. Fourth, you should use a two-way ANOVA when there are two factors involved, and you are interested in finding out what factor is significantly affecting the response. Fifth, by using a two-way ANOVA, it is possible to confirm whether the interaction of two factors affects the response. This cannot be done using a one-way analysis of variance. Finally, to get the most out of this article, you should have MS Excel installed in your computer with the Data Analysis Toolpak enabled. {mospagebreak title=Case Study Example: Evaluation of New Coating} To illustrate this analysis using MS Excel, we will present a real example which is solved by a traditional method (using manual computation). We’ll discuss this two-way ANOVA example of the evaluation of a new coating. Problem Statement: An evaluation of a new coating applied to three different materials was conducted at two different laboratories. Each laboratory tested three samples from each of the treated materials. The results are given in the next table: It would be interesting to know which factors influence the coating’s thickness. However, the entire solution presented on that page is manually computed. You are going to solve the above problem by using MS Excel and adopting the scientific method: Step 1: Formulate the null hypothesis of the study. In a two-way ANOVA there are three null hypotheses which you need to formulate. First null hypothesis: There is no significant effect from the “material” type factor on the response. This is like doing a one-way analysis of variance on the Materials (column) factor. Rejecting this null hypothesis means that the “material” type factor is significant. Second null hypothesis: There is no significant effect from the “Laboratory” factor on the response. Rejecting the null hypothesis means that this factor is indeed significant. Third null hypothesis: The interaction between Materials and Laboratory do not affect the level of the response; if this is rejected, it would come out as significant. Note: For the three hypotheses, a 95% confidence level is used. This means you can be sure that 95% of the time the result is actually correct. And it has a 5% risk of arriving at a false conclusion. In a p-value analysis, if the p value is less than 0.05, you reject the null hypothesis and conclude that the factor is significantly affecting the response. Otherwise, if it is above 0.05, you accept the null hypothesis and the factor has no effect after all. Step 2: Encode the following data into an Excel worksheet, which should look like the screen shot below: Excel is very strict with how the data must be arranged on the worksheet. If it is not entered in the way Excel suggests, it won’t produce analysis results; or, if it gives results, they may not be entirely correct or accurate. Step 3: Launch MS Excel and go to Tools -> Data Analysis -> ANOVA: Two-factor with Replication, and then click “OK.” If you cannot see this in your MS Excel, then the analysis toolpak add-in is not Step 4: Select the area in the input range (see screen shot below). Since there are three replications/repetitions involved, input “3” in the rows per sample. Set the “alpha” to industry standard, which is 0.05 (for a 95% confidence level, “alpha” is 1-confidence level = 1- 0.95 = 0.05). You can customize the output options. In the screen shot below, the completed analysis will be shown in the same worksheet. However, you can choose to place the results in a new worksheet or even in a new workbook. Once you press OK, the analysis will automatically be generated by Excel. There is no need to compute those values manually. {mospagebreak title=Interpreting the results} Successful experiments require correct interpretation of results. By default, Excel will provide the two sets of results, the descriptive stats summary and the ANOVA table. The ANOVA table is the most important result. This is where the results can be a little confusing. You cannot see the name of the factors in the table; instead, what you see under “source of variation” is as follows: a. Sample b. Columns c. Interaction The “Sample” stands for the row factor, which is the “Laboratory;” the “Columns” represents the “Material” type factor; and "Interaction" is the combination of two factors, “Laboratory x Material.” Since ANOVA is a comparison of means using analysis of variances, the most important column in the ANOVA table is the P-value. Based on the above ANOVA table screen shot, it says that the two factors (Laboratory and Material) are indeed a significant factor affecting the response (coating thickness) because the P-value is below 0.05. The null hypothesis of each of those two factors will be rejected because of this result. The interaction (Laboratory x Material) is above the 0.05, and the null hypothesis will be accepted; it means that their interaction is not a significant factor contributing to the response “coating By using the example stated in this tutorial, any analyst, engineer, researcher or scientist can analyze more complicated experiments by following the fundamentals of analysis and interpretation of a two-way ANOVA using MS Excel. You can download the sample workbook used in this tutorial. /?php comment_form(); ?> [gp-comments width="770" linklove="off" ] /?php get_sidebar(); ?>
{"url":"http://www.aspfree.com/c/a/microsoft-access/two-way-anova-analysis-of-variance-in-microsoft-excel/","timestamp":"2014-04-21T12:07:29Z","content_type":null,"content_length":"29034","record_id":"<urn:uuid:93c367ad-5c5e-416e-a2af-c98d6ff9c9c7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
26.5 Distributions Octave has functions for computing the Probability Density Function (PDF), the Cumulative Distribution function (CDF), and the quantile (the inverse of the CDF) for a large number of distributions. The following table summarizes the supported distributions (in alphabetical order). Distribution PDF CDF Quantile Beta Distribution betapdf betacdf betainv Binomial Distribution binopdf binocdf binoinv Cauchy Distribution cauchy_pdf cauchy_cdf cauchy_inv Chi-Square Distribution chi2pdf chi2cdf chi2inv Univariate Discrete Distribution discrete_pdf discrete_cdf discrete_inv Empirical Distribution empirical_pdf empirical_cdf empirical_inv Exponential Distribution exppdf expcdf expinv F Distribution fpdf fcdf finv Gamma Distribution gampdf gamcdf gaminv Geometric Distribution geopdf geocdf geoinv Hypergeometric Distribution hygepdf hygecdf hygeinv Kolmogorov Smirnov Distribution Not Available kolmogorov_smirnov_cdf Not Available Laplace Distribution laplace_pdf laplace_cdf laplace_inv Logistic Distribution logistic_pdf logistic_cdf logistic_inv Log-Normal Distribution lognpdf logncdf logninv Univariate Normal Distribution normpdf normcdf norminv Pascal Distribution nbinpdf nbincdf nbininv Poisson Distribution poisspdf poisscdf poissinv Standard Normal Distribution stdnormal_pdf stdnormal_cdf stdnormal_inv t (Student) Distribution tpdf tcdf tinv Univariate Discrete Distribution unidpdf unidcdf unidinv Uniform Distribution unifpdf unifcdf unifinv Weibull Distribution wblpdf wblcdf wblinv
{"url":"http://www.gnu.org/software/octave/doc/interpreter/Distributions.html","timestamp":"2014-04-17T04:24:20Z","content_type":null,"content_length":"40296","record_id":"<urn:uuid:febb08cb-9fe4-4519-84c2-fe52e16ef23e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Mastering Physics Solutions: Energy of Harmonic Oscillators Energy of Harmonic Oscillators Part A = A Part B = A Part C = moving toward equilibrium. Part D = C Part E = C Part F = D Part G = 3/8kA^2 Solutions Below: Consider a harmonic oscillator at four different moments, labeled A, B, C, and D, as shown in the figure . Assume that the force constant k, the mass of the block, m, and the amplitude of vibrations, A, are given. Answer the following questions: Part A Which moment corresponds to the maximum potential energy of the system? The maximum PE is when the spring is fully compressed. D might look like the right answer, but actually A is when the spring is at amplitude. Even though it’s stretched, that still counts as Part B Which moment corresponds to the minimum kinetic energy of the system? When PE is maximized, KE is minimized, so the correct answer is the same as from Part A: Part C Consider the block in the process of oscillating. If the kinetic energy of the block is increasing, the block must be: A. at the equilibrium position. B. at the amplitude displacement. C. moving to the right. D. moving to the left. E. moving away from equilibrium. F. moving toward equilibrium. KE is maximized when PE is minimized. PE is greatest at amplitude and KE is greatest at equilibrium. So choice F is correct: F. moving toward equilibrium. Part D Which moment corresponds to the maximum kinetic energy of the system? KE is greatest at equilibrium: Part E Which moment corresponds to the minimum potential energy of the system? Minimum PE is when KE is greatest, i.e. at equilibrium: Part F At which moment is K = U? When U = KE, U = 1/2U[max] (this is just a fact, which we won’t bother solving for here). So: U = 1/2(U) 1/2kx^2 = 1/2(1/2kA^2) x^2 = 1/2A^2 x = sqrt(1/2A^2) x = Asqrt(2)/2 Note- if it isn’t obvious where the sqrt(2)/2 came from, use 2/4 for the fraction above instead of 1/2: x = sqrt((2/4)A^2) x = Asqrt(2)/2 The diagram doesn’t give this as an answer, but it does give -Asqrt(2)/2, which is equivalent: Part G Find the kinetic energy K of the block at the moment labeled B. Express your answer in terms of k and A. Since total energy = KE + PE and we only have enough information to find PE, we can work backwards by first finding the maximum PE (the PE when the spring is fully compressed) and then subtracting the PE at point B. Maximum PE: PE[max] = 1/2kA^2 PE at Point B: PE[B] = 1/2k(A/2)^2 PE[B] = 1/2k(A^2/4) PE[B] = 1/8k(A^2) Now find the difference between the PEs, and that difference must be KE since there is no friction: KE[B] = PE[max] – PE[B] KE[B] = 1/2kA^2 – 1/8k(A^2) KE[B] = 3/8kA^2 6 Responses to Mastering Physics Solutions: Energy of Harmonic Oscillators 1. AMAZING!!! BEST THING YETTT 2. This was extremely helpful!
{"url":"http://www.masteringphysicssolutions.net/chapter-13-2-energy-of-harmonic-oscillators/","timestamp":"2014-04-18T18:10:29Z","content_type":null,"content_length":"50270","record_id":"<urn:uuid:97fb393e-b46f-4076-aa98-5e824178fe60>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Math Forum » Discussions » Policy and News » geometry.announcements Topic: The Solomon Temple's Mathematical Codes - Part 8: The Lost World of Espionages and Encodings of the Library of Alexandria Replies: 0 The Solomon Temple's Mathematical Codes - Part 8: The Lost World of Espionages and Encodings of the Library of Alexandria Posted: Nov 7, 2010 6:24 AM WELCOME to the 2010- 2011 edition of the miniserie of articles: "The Solomon Temple's Mathematical Codes - Part 8. Please, be advised that this article contains on only the Research, but also my Opinions of philosopher, both fully protected by the Federal Laws of the United States, and by the International Copyrights Laws. If you intend to use partial or total this Research, you can use it by the Fair Uses Rights of the Copyrights Laws, or by the Grant I have indicatted in the Copyrights Notice of that can be found at the end of the article: "The Condemnation by the Paris Livre of the French Academy of Sciences" Ion Vulcanescu's previous years research articles of were entrusted by me as author to USA Drexel University's Mathforum, and can be found at: As here-and there may be typing errors, or other errors, this article shall be continuu corrected and reeditted until it shall remain fidel to the intended form. Due to this Effect the researchers who shall find this article on Internet are asked that allways to return to Mathforum and consider only the last corrected edition. Please, be aware that the understanding of this article's Mathematical ands Historical time-messages encoded by the Old Hebrews in the sizes of Solomon Temple, requires prior familiarization with the published research data of the Part 1 through Part 7 that can be found at: In order that you to understand this article's Research Data I have firstly to make you aware of few facts. Please, be patientand step-by-step come with me into this article's Research Data ! Step 1 As I lett it be understood in the prior published articles, the Mathematical and Historical time-messages hidden in the sizes of the Solomon Temple seems to indicate a so High Spiritual Level of Knowledge that now it is lost not only by the Civilization, but by the Jews themselves. The Origin of this now lost Body of Knowledge came to my attention not from the research at the Torah, but by joining the obtained research with the known Sumerian, Egyptian, Roman, Greek, Early European Sources, and Other Sources, and puting face-to-face the presente face of the Science of Mathematics, with the Calendar, and with the Conduct of the Geometrical Proportions. And as I already mentioned, after realizing that this High Level of Knowledge appeared with such Mathematical Exactitude, and also Strange Appearings ( as it is the case with my name ION VULCANESCU), I had no other option than to take as its Origin a possible now lost Earth Civilization, or it was the Input of an Unknown Supreme Power, that I could not define. In any case, as this Level of Knowledge appears in full relatted to - THE LIBRARY OF ALEXANDRIA, - THE SOLOMON TEMPLE"S Base Perimeter, Volume and Total Area, and through the Mathematical Model involved in my research through which the Master Frequency was obtained it creates MATHEMATICAL COORDONATIONS with: - THE YEAR 2012 AD - PTOLEMY, - ARCHIMEDES, - EUCLID, - POINT, - ARTABA, - SWASTIKA ( as the number 10000), and - ION VULCANESCU. Continuing the research I have observed that I WAS IN FRONT OF THE MATHEMATICAL, AND HISTORICAL EVIDENCES which indicates that THE JEWS ( as the Jews of Judea, and also as the Old Hebrews) were fully involved in Acts of Intelligence, and Counterintelligence in the Greek World of the 4th and 3rd century BC when it is historically known that around this time THE TORAH TOOK THE FORM WE HAVE IT TODAY. Step 2 As can be checkeds and proved by my past years articles published here on Mathforum at I have advanced to this Level of Research through steps as such: - THE FREQUENCY OF THE PI VALUE IN POINT - THE HUMAN VOICE MATHEMATICAL EFFECT, and I ended in when I realized that the Old Hebrews appears to have realized that (although the prior to them civilizations had not only the Sexagessimal System, but also the Decimal System) they did not have the need of NUMBERS, but used the LETTERS as to indicate the Numbers. Although in the now academic world this Effect it is used as to ridiculize the Old Hebrews, myself I have realized that in fact this Effect I have came to this Conclusion due to the following Historical Fact: 1- The First Known ALPHABET it is in the todays academic world accepted as having appeared in PHOENICIA, around 1700 BC then it was taken by the GREEKS who appears to have put in the Vowels. Then by cca. 7th century BC the ETRUSCANS appears to have taken the Greek Alphabet ( with the Vowels already attached) and then step by step we see this Alphabet as: Old Latin, Classical Latin ad then we end up in the Roman Alphabet, that by now was already having all Sounds of the todays English Alphabet some in double or two letters from which later in the Middle Ages the Double or Two Letters were "cleared up" and ended in Single letters as it was the case with the letters J,U, and W. 2- But there appears now a PROBELM, and to see where this "problem" originates we have to go back to the Greeks where we realize that the Greeks could not have put in the Vowels without to have had a prior research, and "academic" decision taken that the Vowels were in need. 3- As I "accepted " that somehow there was in the then Greek World a time of research, and an" academic decision" was taken that the Vowels to be put in the Alphabet, joining this "acceptance" with the fact that in the Solomon Temple (interior) Base Perimeter , (interior) Volume, and also (interior ) Total Area, appeared that you shall see few of them bellow, I immediate realized that FROM THE PHOENICIANS, BUT IT WAS "GIVEN TO THEM" BY THE OLD HEBREWS through AN ACT OF INTELLIEGENCE. 4- To understand this "ACT OF INTELLIGENCE' one must go back in the history of this time when the Jews were not Jews, when there was not Judaism, nor Christianity, or Islam, when there was no Palestine or Israel, but through the convulsions of the time a then Group of people that was residing in that area separated - As JEWS OF JUDEA, and - AS JEWS OF SAMARIA, and through THE SPIRITUAL GREED they begun to manipulate Each-Other, hate Each-Other, and politically incriminate Each-Other until all this nonsense ended in the Jews of Samaria being taken captive in Babylon, and the remaining Jews of Judea considering themsels as THE ONLY REAL JEWS realized that they need a Religion which to rise them up over the Samaritan Jews. Following this Effect, we enter now in We this Effect in the configuration of the Torah where it appears that: 1- Either from among the Jews of the Judea there were High Priests who ghaving a High Conscience did not want to be part of that appears was expected from them, or 2- Sellected Sages of Old Hebrews ( The Jews of Samaria) chnaged the sides, and thropugh undercover they entered in the World of the Jews of Judea and from these positions they as to please the then Political Intereses of THE POLITICAL LEADERS OF THE JEWS OF JUDEA. 5- As the time passed we then see that continued until the era in time when we reach the time of when we can reach the mathematical and historical proves that can be read through the Mathematical Model of the Master Frequency. As by now all the civilization knows, during 1991 a group of NASA Senior Engineers sustaining the Earlier Phase of this research were sillenced by The White House a sellected High Intelligence Analysts of USA Government have asked me to continu the research, and also have Warned me that I shall be undermined by USA Universities for that: "IT WAS NOT AN ACCEPTABLE TO THEM THE SCIENCE OF MATHEMATICS PI VALUE ". The Reason , as told to me, it was that I was not " american educatted", but a selfeducatted immigrant, and my acceptance would bring Shame and Dishonour to the prestige of the USA Universities, and of the USA National Academy of Sciences, and also to then USA PRESIDENT GEORGE H. W. BUSH Political Decison that NASA Space Shuttle Chalenger exploded by "Cold Cause", and not by the fact that the PI Value of the science of mathematics "3.141592654... used in calculations of the containment of the Force of Explosion of Solid Fuel Engines, was wrong, as the NASA Senior Engineers were "implying" based on my research. I know that I do not speak english correctly, I know that I do not write english correctly, but I know that I was, and I am and that all I need to know at this age of 61 years old. Any USA Federal Agent, as also any researcher can see the Loss of Geometrical ( Mathematical ) Exactitude of the PI Value of the science of mathematics by clicking on the bellow link: where the Sumerian Natural Geometrical PI Value "3.146264369.." appears in FULL CONTROLL OF THE NUMERICAL EXACTUITUDE. I understood then that it is a Legal Enterprise well hidden under all kid of covers, and I accepted to live a life of Discriminatted, but I never considered that the USA Agents to go as low as to try to entrapp me by attempting to lure me into Israel, where to possible "better entrapp me" far away of the United States, or using the Town of South Fallsburg Police Officers. And when I realized a local Judge appeared giving "Judicial Cover" to this entrapment, I considered to lett the future generations know I have then decided to openly make known this Effect in the 4th edition of "THE AMERICAN GEOMETRY" Copyrights 1993 by Ion Vulcanescu Publisher: Ion Vulcanescu US Library of Copyrights # TX 3 682 159/Nov 5 1993, now recorded worldwide in the libraries of the academies of sciences, far away of the hands of It was during these convulsions of my life that unknown to me Intelligence Analysts designed for me through which I was advised to hidde "Sellected Parts"of the research data, in such way that "interested parties" to remain blocked, confused, or just simplu to consider me " an idiot", and later to come with such "sellected part", and explain it. ...and now I shall take you to one of such "sellected part"hidden in my articles published here on Mathforum. This "sellected part" it is hidden in the article: "The Human Voice Mathematical Effect - Parts 16: Ion Vulcanescu's Philosophical Burning of...the LIbrary of Alexandria" It can be found by clicking on bellow link: Look in SECTION A to the word "LIBRARY" Note that THE ALPHABETICAL FREQUENCY of the LIBRARY letters suppose to to be "85", and realize that I marked it only "75" taking away THE DECIMAL SYSTEM. Now you shall see it WHY I have taken out the Decimal System ! I took out the Decimal System for that in "THE DECIMAL SYSTEM" seen as "85 - DECIMAL SYSTEM = 75"" of the word "LIBRARY"was hidden the Mathematical Ratio between: THE CHEOPS PYRAMID , and HER PYRAMIDON. ...and herenow you can see it: ENJOY IT CIVILIZATION ! ...and you future generations remember that this Mathematical and Historical Beauty it was given back to the Civilization by Just look to it future generations, just look to it: "85 + 75 = (2592000 : 230.4 : 146.484375) : ( .5555..: 1.44 : .803755144)" I have explained the Cheops Pyramid and Cheops Pyramid's Pyramidon in Part 6 (SECTION B), that can be found at: Let's now see THE COUNTERINTELLIGENCE EFFECT that indicates the fact that the Old Hebrews knew this effect when they coined the world "LIBRARY" and were expecting that ahead in time SOMEONE shall find We see this Old Hebrews" THIS COUNTERINTELLIGENCE EFFECT when we related the "85" to the Cheops Pyramid, and "75" to the Cheops Pyramid's Pyramidon in the following mathematical time-message: "(29.16 x 1.11..) : Sumerian Grain + 5772 = ((85 x (2592000 : 230.4 : : 146.484375)) - (( 75 x ( .5555... : 1.44 : .803755144))" Look now to "5772". This is our year "2012 AD" in the Old Hebrews calendar. Question now: WHY is the Old Hebrews year "5772" appears realtted to the Cheops Pyramid and Her Pyramidon, is the writers of the Torah have said that the Jews left Egypt in an Exodus? WHY WERE THE WRITERS OF THE TORAH if indeed an Exodus took place and the Jews were hating the Egyptians? What indicates this Mathematical Coordonation? It indicates a COUNTERINTELLIGENCE EFFECT that surely ot was not done by the JEWS OF JUDEA, but by...THE SAGES OF OLD HEBREWS ! ...and now let's see few Mathematical Coordonations through which you can see for yourself the appearition of the Master Frequencies of my name "ION VULCANESCU" as being implicatted in these "Mathematical Coordonations", and also you can realize that the appearition of the Vowels in the Greek Alphabet it was not a Greek achievement,but it was either AN INTELLIGENCE ACTION of the Old Hebrews, or A SUPERNATURAL POWER has then controlled not only the Old Hebrews and the Greeks, but also it is now SPIRITUALLY appearing ! 1st Reading The Reading IN NUMBERS: "(46656 x 10) : 480 - (85-1) - (75-1)) : (10:2) = 162 The Reading IN WORDS" "(INTERIOR x DECIMAL SYSTEM) : 480 (years after "Exodus" when the Temple of Solomon...) - (Real Alphabetical Frequency of the word LIBRARY-1) - (The Decimal System subtracted from the word LIBRARY - 1)) : DECIMAL SYSTEM : 2) = = THE SUMERIAN NATURAL GEOMETRIC PI VALUE " 3.1462643699.." in Sumerian Fingers, or Grains. Note: Observe here the Master Frequency of the word "INTERIOR" mentioned by the Designers of Solomon Temple through the fact that the dimensions of the Solomon Temple's Ulam, Hekal and Debir were given in INTERIOR sizes. 2nd Reading The Reading IN NUMBERS: "(10000 : 10 ) + 11664 + 21904 + 28900 = 22500 + 1764 + 39204" The Reading in WORDS: "SWASTIKA : DECIMAL SYSTEM) + EUCLID + POINT + + ARCHIMEDES = LIBRARY+ OF+ ALEXANDRIA" 3rd Reading The Reading IN NUMBERS: (5928 + 58564) - (11664 + 20904) - 28900 - 12 = 2012" The Reading IN WORDS: "(ION + VULCANESCU) - (EUCLID + POINT) - ARCHIMEDES - - 12 (Tribes of Israel) = 2012 AD" 4th Reading The Reading IN NUMBERS: "11664 + 28900 + 18000 = 58564" The Reading IN WORDS: " EUCLID + ARCHIMEDES + PREDYNASTIC EGYPT'S LENGTH ( in Geographic Cubits) = VULCANESCU" 5th Reading The Reading IN NUMBERS: "(29160 + 122) + ( 29160 + 122) = 58564" The Reading IN WORDS: "(The ARTABA + A SIDE of the Library of Alxandria) + ( The ARTABA + A SIDE of the Library of Alexandria) = VULCANESCU" Note: I have explained this SIDE of "122" units of the Library of Alexandria in the above mntioned article, In SECTION B, Step 3. See it by clicking on bellow link: 6th Reading The Reading IN NUMBERS: (( 4624 + (169 = 135)) : 2 + ( 180 + 56000 + 15200) = 44944 + 28900" The Reading IN WORDS: (( CANAAN + (The Pyramidal Frequency of the Decimal System - - The Pyramidal Frequency of the English alphabet's 26 letters)) : 2 + + The Solomon Temple ( interior) Base Perimeter + The Solomon Temple (interior) Volume + The Solomon Temple (interior) Total Area = PTOLEMY + ARCHIMEDES" a -I have explained THE PYRAMIDAL FREQUENCIES of the Decimal System and of the English Alphabet's 26 letters in Part 7 at: b - Did ARCHIMEDES EVER EXISTED ? WHAT HE WAS DOING. He appears to have been...AN OLD HEBREW SAGE ! 7th Reading The Reading IN NUMBERS: "(5928 + 58564 - 10000) + ( 28900 - 10000) + ( 11664 - 10000) + + (122 - 10) = 180 + 56000 + 15200"" The Reading IN WORDS: " (ION + VULCANESCU - SWASTIKA) + ( ARCHIMEDES - - SWASTIKA) + ( EUCLID - SWASTIKA) + ( One SIDE ( of the Library of Alexandria) - DECIMAL SYSTEM ) = The Solomon Temple's(Interior) Base Perimeter + The Solomon Temple's (interior) Volume + The Solomon Temple's (interior) Total Area" 8th Reading The Reading IN NUMBERS: "(21904 - 11664) : 10 = 1024" The Reading IN WORDS: "( POINT - EUCLID) : DECIMAL SYSTEM = BAAL" 9th Reading The Reading IN NUMBERS: "(180 + 56000 + 15200) --( 22500 + 1764 + 39204) - 7744 x x 10000 : 10 = 70 x 20 x 160" The Reading IN WORDS: '( The Solomon Temple ( interior) Base Perimeter + The Solomon Temple (interior) Volume + The Solomon Temple (interior) Total Area - (LIBRARY + OF + ALEXANDRIA) - ABRAHAM x x SWASTIKA : DECIMAL SYSTEM = The Solomon TEmple (interior ) Length of 70 cubits x The Solomon Temple (inyterior) Width of 20 cubits x The Solomon Temple Ulam's height of 120 cubits" 10th Reading The Reading IN NUMBERS: "(22500 + 1764 + 39204) - 11664 - 2904 - 29160 = 740" The Reading IN WORDS: " (LIBRARY + OF + ALEXANDRIA) - EUCLID - POINT - - The ARTABA weight = THE BLAK STONE OF MECCA" 11th Reading The Reading IN NUMBERS: "((2.5 x 1.5 x 1.5) x 8 x 8.888...)) + 16384)) = (1664 + 21904) x 2" The Reading IN WORDS: "((THE ARC OF THE COVENANT"S Volume x 8 x 8.888..) + + YHWH)) = (EUCLID + POINT ) x 2 Did "EUCLID" ever exited ? WHO SIGNED HIS WRITINGS as "EUCLID" KNEW WHAT HE WAS DOING ? He appears to have been...AN OLD HEBREW SAGE ! 12th Reading The Reading IN NUMBERS: "((5125 + ( 22500 + 1764 + 39204) - 11664 - 21904 - 2012 + + (2 + 3 + 4 + 5 + 6 + 7) x 10 = 330440" The Reading IN WORDS: "((The cyclical Earth lifecycle of 5125 years + ( LIBRARY OF ALEXANDRIA) - EUCLID - POINT + 2012 AD + (The Six Steps of Saqqara Step Pyramid ) x DECIMAL SYSTEM = The VOLUME of the Saqqara Step Pyramid". Notes: The SIX ( 6 ) STEPS of Saqqara Step Pyramid are the steps seen today. This Pyramid had in ITS ORIGINAL DESIGN SEVEN (7) STEPS. As the Government of Egypt is witholding from the public the now sizes of the Saqara Step Pyramid, as researcher I REFUSE to further public the full obtained research data until I can verify if there were not additional Mathematical Concepts attached to the Original Design. Publishing the obtained research data "as I have it now " I would facilitate to any unscrupulous individual to take over this research data, and as I am a Selfeducated I would not have any chance to claim any credit. ...and so be it: I am waiting that the Government of Egypt to public release THE SAQARA STEP PYRAMID'S STEP SIZES ! The publication of the Research Data of this miniserie of articles continues here on Mathforum, and the interested researchers can reach it by checking the posting of future articles on Mathforum, at: Ion Vulcanescu - Philosopher Independent Researcher in Geometry Author, Editor, and Publisher of November 4 2010 Sullivan County State of New York
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2171790","timestamp":"2014-04-17T16:05:55Z","content_type":null,"content_length":"36859","record_id":"<urn:uuid:d191962f-4480-4796-a6fd-e38aee441bfd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Mind the Gap: Part III We recently introduced a method for solving problems in which people or objects are moving in the same direction. We recommend that you simplify such problems by focusing only on the relative speed of the objects, rather than their individual rates. Here, again, is our example problem (explanation follows): Hillary and Eddy are climbing to the summit of Mt. Everest from a base camp 4,800 ft from the summit. When they depart for the summit at 06:00, Hillary climbs at a rate of 800 ft/hr with Eddy lagging behind at a slower rate of 500 ft/hr. If Hillary stops 800 ft short of the summit and then descends at a rate of 1,000 ft/hr, at what time do Hillary and Eddy pass each other on her return trip? A) 11:00 B) 12:00 C) 12:30 D) 15:00 E) 16:00 B) 12:00 Begin by finding the duration of Hillary’s climb. If she stopped 800 ft short of the summit, she climbed a total of 4,000 ft (4,800 to the summit minus 800). At a rate of 800 ft/hr, she would have climbed for five hours. Thus, Hillary stops climbing and begins her descent at 11:00. Next, we can find the distance between Hillary and Eddy at the end of her climb by multiplying the difference in their climbing speeds, 300 ft/hr, by five hours. So, we know that the two are 1,500 ft apart when Hillary begins her descent. Finally, we calculate the time it would take the two to cover the 1,500 ft distance at a combined rate of 1,500 ft/hr (1,000 ft/hr descent plus 500 ft/hr climb). So, Hillary and Eddy meet up at 12:00, one hour after she begins her descent.
{"url":"https://gmat.economist.com/blog/quantitative/mind-gap-part-iii?gsrc=GMATCLUB_TEGblogfeed","timestamp":"2014-04-17T01:04:37Z","content_type":null,"content_length":"34919","record_id":"<urn:uuid:c2df14ac-31da-4fdc-9e08-204a2a9983ca>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] [ANN] la 0.4, the labeled array Keith Goodman kwgoodman@gmail.... Tue Jul 6 10:40:56 CDT 2010 The main class of the la package is a labeled array, larry. A larry consists of data and labels. The data is stored as a NumPy array and the labels as a list of lists (one list per dimension). Alignment by label is automatic when you add (or subtract, multiply, divide) two larrys. The focus of this release was binary operations between unaligned larrys with user control of the join method (five available) and the fill method. A general binary function, la.binaryop(), was added as were the convenience functions add, subtract, multiply, divide. Supporting functions such as la.align(), which aligns two larrys, were also added. download http://pypi.python.org/pypi/la doc http://larry.sourceforge.net code http://github.com/kwgoodman/la list1 http://groups.google.ca/group/pystatsmodels list2 http://groups.google.com/group/labeled-array New larry methods - ismissing: A bool larry with element-wise marking of missing values - take: A copy of the specified elements of a larry along an axis New functions - rand: Random samples from a uniform distribution - randn: Random samples from a Gaussian distribution - missing_marker: Return missing value marker for the given larry - ismissing: A bool Numpy array with element-wise marking of missing values - correlation: Correlation of two Numpy arrays along the specified axis - split: Split into train and test data along given axis - listmap_fill: Index map a list onto another and index of unmappable elements - listmap_fill: Cython version of listmap_fill - align: Align two larrys using one of five join methods - info: la package information such as version number and HDF5 availability - binaryop: Binary operation on two larrys with given function and join method - add: Sum of two larrys using given join and fill methods - subtract: Difference of two larrys using given join and fill methods - multiply: Multiply two larrys element-wise using given join and fill methods - divide: Divide two larrys element-wise using given join and fill methods - listmap now has option to ignore unmappable elements instead of KeyError - listmap.pyx now has option to ignore unmappable elements instead of KeyError - larry.morph() is much faster as are methods, such as merge, that use it Breakage from la 0.3 - Development moved from launchpad to github - func.py and afunc.py renamed flarry.py and farray.py to match new flabel.py. Broke: "from la.func import stack"; Did not break: "from la import stack" - Default binary operators (+, -, ...) no longer raise an error when no labels Bug fixes - #590270 Index with 1d array bug: lar[1darray,:] worked; lar[1darray] crashed More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-July/025971.html","timestamp":"2014-04-19T02:55:15Z","content_type":null,"content_length":"5409","record_id":"<urn:uuid:b6ea5e3d-4d08-4248-ba84-46e3ac746343>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
How Can Nature Help Us Compute "... Abstract. Looking at very recent developments in spacetime theory, we can wonder whether these results exhibit features of hypercomputation that traditionally seemed impossible or absurd. Namely, we describe a physical device in relativistic spacetime which can compute a non-Turing computable task, ..." Cited by 4 (0 self) Add to MetaCart Abstract. Looking at very recent developments in spacetime theory, we can wonder whether these results exhibit features of hypercomputation that traditionally seemed impossible or absurd. Namely, we describe a physical device in relativistic spacetime which can compute a non-Turing computable task, e.g. which can decide the halting problem of Turing machines or decide whether ZF set theory is consistent (more precisely, can decide the theorems of ZF). Starting from this, we will discuss the impact of recent breakthrough results of relativity theory, black hole physics and cosmology to well established foundational issues of computability theory as well as to logic. We find that the unexpected, revolutionary results in the mentioned branches of science force us to reconsider the status of the physical Church Thesis and to consider it as being seriously challenged. We will outline the consequences of all this for the foundation of mathematics (e.g. to Hilbert’s programme). Observational, empirical evidence will be quoted to show that the statements above do not require any assumption of some physical universe outside of our own one: in our specific physical universe there seem to exist regions of spacetime supporting potential non-Turing computations. Additionally, new “engineering ” ideas will be outlined for solving the so-called blue-shift problem of GR-computing. Connections with related talks at the Physics and Computation meeting, e.g. those of Jerome Durand-Lose, Mark Hogarth and Martin Ziegler, will be indicated. 1 "... Abstract.- Can general relativistic computers break the Turing barrier?- Are there final limits to human knowledge?- Limitative results versus human creativity (paradigm shifts).- Gödel’s logical results in comparison/combination with Gödel’s relativistic results.- Can Hilbert’s programme be carried ..." Cited by 3 (2 self) Add to MetaCart Abstract.- Can general relativistic computers break the Turing barrier?- Are there final limits to human knowledge?- Limitative results versus human creativity (paradigm shifts).- Gödel’s logical results in comparison/combination with Gödel’s relativistic results.- Can Hilbert’s programme be carried through after all? 1 Aims, perspective The Physical Church-Turing Thesis, PhCT, is the conjecture that whatever physical computing device (in the broader sense) or physical thought experiment will be designed by any future civilization, it will always be simulatable by a Turing machine. The PhCT was formulated and generally accepted in the 1930’s. At that time a general consensus was reached declaring PhCT valid, and indeed in the succeeding decades the PhCT was an extremely useful and valuable maxim in elaborating the foundations of theoretical computer science, logic, foundation of mathematics and related areas. But since PhCT is partly a physical conjecture, we emphasize that this consensus of the 1930’s was based on the physical worldview of the 1930’s. Moreover, many thinkers considered PhCT as being based on , 2008 "... In dealing with emergent phenomena, a common task is to identify useful descriptions of them in terms of the underlying atomic processes, and to extract enough computational content from these descriptions to enable predictions to be made. Generally, the underlying atomic processes are quite well un ..." Cited by 3 (1 self) Add to MetaCart In dealing with emergent phenomena, a common task is to identify useful descriptions of them in terms of the underlying atomic processes, and to extract enough computational content from these descriptions to enable predictions to be made. Generally, the underlying atomic processes are quite well understood, and (with important exceptions) captured by mathematics from which it is relatively easy to extract algorithmic content. A widespread view is that the difficulty in describing transitions from algorithmic activity to the emergence associated with chaotic situations is a simple case of complexity outstripping computational resources and human ingenuity. Or, on the other hand, that phenomena transcending the standard Turing model of computation, if they exist, must necessarily lie outside the domain of classical computability theory. In this talk we suggest that much of the current confusion arises from conceptual gaps and the lack of a suitably fundamental model within which to situate emergence. We examine the potential for placing emergent relations in a familiar context based on Turing’s 1939 model for interactive computation over structures described in terms of reals. The explanatory power of this model is explored, formalising informal descriptions in terms of mathematical definability and invariance, and relating a range of basic scientific puzzles to results and intractable problems in computability theory. In this talk "... Abstract. Computability concerns information with a causal – typically algorithmic – structure. As such, it provides a schematic analysis of many naturally occurring situations. We look at ways in which computabilitytheoretic structure emerges in natural contexts. We will look at how algorithmic str ..." Add to MetaCart Abstract. Computability concerns information with a causal – typically algorithmic – structure. As such, it provides a schematic analysis of many naturally occurring situations. We look at ways in which computabilitytheoretic structure emerges in natural contexts. We will look at how algorithmic structure does not just emerge mathematically from information, but how that emergent structure can model the emergence of very basic aspects of the real world. The adequacy of the classical Turing model of computation — as first presented in [18] — is in question in many contexts. There is widespread doubt concerning the reducibility to this model of a broad spectrum of real-world processes and natural phenomena, from basic quantum mechanics to aspects of evolutionary development, or human mental activity. In 1939 Turing [19] described an extended model providing mathematical form to the algorithmic content of structures which are presented in terms of real numbers. Most scientific laws with a computational content can be framed "... Abstract. We discuss the impact of very recent developments of spacetime theory, black hole physics, and cosmology to well established foundational issues of computability theory and logic. Namely, we describe a physical device in relativistic spacetime which can compute a non-Turing computable task ..." Add to MetaCart Abstract. We discuss the impact of very recent developments of spacetime theory, black hole physics, and cosmology to well established foundational issues of computability theory and logic. Namely, we describe a physical device in relativistic spacetime which can compute a non-Turing computable task, e.g. which can decide the halting problem of Turing machines or whether ZF set theory is consistent or not. Connections with foundation of mathematics and foundation of spacetime theory will be discussed. 1 "... The articles in the volume Computing Nature present a selection of works from the ..." "... Abstract. We need much better understanding of information processing and computation as its primary form. Future progress of new computational devices capable of dealing with problems of big data, internet of things, semantic web, cognitive robotics and neuroinformatics depends on the adequate mode ..." Add to MetaCart Abstract. We need much better understanding of information processing and computation as its primary form. Future progress of new computational devices capable of dealing with problems of big data, internet of things, semantic web, cognitive robotics and neuroinformatics depends on the adequate models of computation. In this article we first present the current state of the art through systematisation of existing models and mechanisms, and outline basic structural framework of computation. We argue that defining computation as information processing, and given that there is no information without (physical) representation, the dynamics of information on the fundamental level is physical / intrinsic / natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs on variety of levels of physical processes, containing the levels of computation of living organisms (including highly intelligent animals) as well as designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field; both by the development of new computational models and by learning from nature how to better compute using different mechanisms of intrinsic computation. 1 "... This is a draft of the article to be published in Springer book series SAPERE. The final publication will be available at ..." Add to MetaCart This is a draft of the article to be published in Springer book series SAPERE. The final publication will be available at
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=775139","timestamp":"2014-04-24T13:42:48Z","content_type":null,"content_length":"31037","record_id":"<urn:uuid:7554d92e-4fe5-4ad4-99c5-8d256932aaed>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
This paper considers the logic FOcard, i.e., first-order logic with cardinality predicates that can specify the size of a structure modulo some number. We study the expressive power of FOcard on the class of languages of ranked, finite, labelled trees with successor relations. Our first main result characterises the class of FOcard-definable tree languages in terms of algebraic closure properties of the tree languages. As it can be effectively checked whether the language of a given tree automaton satisfies these closure properties, we obtain a decidable characterisation of the class of regular tree languages definable in FOcard. Our second main result considers first-order logic with unary relations, successor relations, and two additional designated symbols < and + that must be interpreted as a linear order and its associated addition. Such a formula is called addition-invariant if, for each fixed interpretation of the unary relations and successor relations, its result is independent of the particular interpretation of < and +. We show that the FOcard-definable tree languages are exactly the regular tree languages definable in addition-invariant first-order logic. Our proof techniques involve tools from algebraic automata theory, reasoning with locality arguments, and the use of logical interpretations. We combine and extend methods developed by Benedikt and Segoufin (ACM ToCL, 2009) and Schweikardt and Segoufin (LICS, 2010). Contextual equivalence in lambda-calculi extended with letrec and with a parametric polymorphic type system (2009) This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed variant, where our specific calculus has letrec, recursive types and is nondeterministic. An addition of a type label to every subexpression is all that is needed, together with some natural constraints for the consistency of the type labels and well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules have to be accompanied with a type modification by generalizing or instantiating types.
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Frederik+Harwath%22","timestamp":"2014-04-18T03:47:03Z","content_type":null,"content_length":"24284","record_id":"<urn:uuid:2c21250a-dd24-49b2-9256-d38b563b4c6d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
When A Heat Engine Works, Entropy Flows From The ... | Chegg.com When a heat engine works, entropy flows from the hot source to thecold sink and heat at higher entropy is exhausted into thesurroundings. In the following cycle, the universe undergoes atotal positive entropy change. In the following cycle, the processCA is an adiabatic process. Its a diatomic gas with n=1. Thetemperatures of the surrounding environment where entropyproduction takes place is not given. Nevertheless, the entropychange of the surroundings can be estimated with the given data.Estimate it and comment if you expect the entropy change of thesurroundings to be higher or lower than your estimate had you runthis cycle in a lab. Explain your reasoning. (Assume process AB is isobaric and process BC is isochoric) P[c] = 1.6 kPa V[c] = 0.8m^3 P[b] = 0.8 kPa V[b] = 0.8m^3 P[a] = 0.8 kPa V[a] =???
{"url":"http://www.chegg.com/homework-help/questions-and-answers/heat-engine-works-entropy-flows-hot-source-thecold-sink-heat-higher-entropy-exhausted-thes-q653542","timestamp":"2014-04-21T16:50:12Z","content_type":null,"content_length":"19914","record_id":"<urn:uuid:d39294fd-5a24-4b24-bfa7-9a2a4bb33a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Modulational instability of two pairs of counter-propagating waves and energy exchange in two-component media (Venue: MR3 CMS) Meeting Room 3, CMS The dynamics of two pairs of counter-propagating waves in two-component media is considered within the framework of two generally nonintegrable coupled Sine-Gordon equations. We consider the dynamics of weakly nonlinear wave packets, and using an asymptotic multiple-scales expansion we obtain a suite of evolution equations to describe energy exchange between the two components of the system. Depending on the wave packet length-scale vis-a-vis the wave amplitude scale, these evolution equations are either four non-dispersive and nonlinearly coupled envelope equations, or four non-locally coupled nonlinear Schr\"odinger equations. We also consider a set of fully coupled nonlinear Schr\"odinger equations, even though this system contains small dispersive terms which are strictly beyond the leading order of the asymptotic multiple-scales expansion method. Using both the theoretical predictions following from these asymptotic models and numerical simulations of the original unapproximated equations, we investigate the stability of plane-wave solutions, and show that they may be modulationally unstable. These instabilities can then lead to the formation of localized structures, and to a modification of the energy exchange between the components. When the system is close to being integrable, the time-evolution is distinguished by a remarkable almost periodic sequence of energy exchange scenarios, with spatial patterns alternating between approximately uniform wavetrains and localized structures. Related Links
{"url":"http://www.newton.ac.uk/programmes/PFD/seminars/2005121416006.html","timestamp":"2014-04-17T04:25:29Z","content_type":null,"content_length":"5512","record_id":"<urn:uuid:3eb02c21-ba15-4617-a711-25b338fe8e8c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Waltham, MA SAT Math Tutor Find a Waltham, MA SAT Math Tutor ...By sharpening their ability to explain, my students not only refine their problem-solving skills and learn to ace exams, but also develop deep and lasting understanding.I hold a degree in theoretical math from MIT, and I've taught every level of calculus -- from elementary to AP (both AB and BC),... 47 Subjects: including SAT math, chemistry, English, reading ...Classes I have tutored: MIT 18.06 - Linear Algebra University of Phoenix MTH360 - Linear Algebra I aced a course in electromagnetism at MIT and passed the Fundamentals of Engineering exam, which tests basic understanding of RLC circuits. I also have lots of hands-on experience with circuits; I h... 8 Subjects: including SAT math, physics, calculus, differential equations ...As a tutor, I work with students on reading, writing, math, and study skills. I can work with students to help them improve their reading levels and become better writers. I have helped students improve their math skills in middle and high school, and I can help them become better organized in the classroom, working on note-taking, test-taking, and time management skills as 29 Subjects: including SAT math, reading, English, writing ...My two middle children, a senior and sophomore in high school, are currently attending MassBay Community college full-time. I believe that my approach to home-schooling, and thus to tutoring is unique. I am very good at teaching so that the student “sees” the solution. 25 Subjects: including SAT math, reading, ESL/ESOL, English ...Statistics offers many new concepts which, depending how it's taught, can be overwhelming at times. I have experience taking topics in statistics which students find challenging or intimidating and placing them in an easier to understand context. I have taught math for an SAT prep company. 24 Subjects: including SAT math, chemistry, calculus, physics Related Waltham, MA Tutors Waltham, MA Accounting Tutors Waltham, MA ACT Tutors Waltham, MA Algebra Tutors Waltham, MA Algebra 2 Tutors Waltham, MA Calculus Tutors Waltham, MA Geometry Tutors Waltham, MA Math Tutors Waltham, MA Prealgebra Tutors Waltham, MA Precalculus Tutors Waltham, MA SAT Tutors Waltham, MA SAT Math Tutors Waltham, MA Science Tutors Waltham, MA Statistics Tutors Waltham, MA Trigonometry Tutors Nearby Cities With SAT math Tutor Arlington, MA SAT math Tutors Auburndale, MA SAT math Tutors Belmont, MA SAT math Tutors Brighton, MA SAT math Tutors Brookline, MA SAT math Tutors Lexington, MA SAT math Tutors Medford, MA SAT math Tutors Newton Center SAT math Tutors Newton Centre, MA SAT math Tutors Newton, MA SAT math Tutors Newtonville, MA SAT math Tutors North Waltham SAT math Tutors South Waltham, MA SAT math Tutors Watertown, MA SAT math Tutors West Newton, MA SAT math Tutors
{"url":"http://www.purplemath.com/Waltham_MA_SAT_Math_tutors.php","timestamp":"2014-04-17T21:37:11Z","content_type":null,"content_length":"24151","record_id":"<urn:uuid:d4fffd0c-698e-4111-bf79-e2ebc3ec7b9a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Socio Demographic Profile Of The Survey Sample Marketing Essay Generally, frequency is used for looking at detailed information on nominal (category) data and describing the results. Table 1 show the study sample comprises of 220 respondents, which vary on characteristics such as gender, age, educational level, marital status, ethnicity, employment status, income level, airline website that have ever visited and number of online purchases in airasia.com in last one year. The following table 4.1 summarizes the socio demographic profiles of the respondents of this study. Table 4.1 Demographic profiles of respondents Percentage (%) Below 20 Above 50 Educational Level High School Certificate Bachelor Degree Master Degree PhD (Doctorate) Marital Status Employment status Income Level Below RM1500 Above RM10500 Airline website that you have ever visited. Number of online purchases in airasia.com in last one year. 1-3 times 4-6 times 7-9 times 10-12 times 13-15 times Above 16 times In term of gender, the sample indicates that female is more than male, 149(67.7%) is made up of females and the rest 71(32.3%) are males. The imbalance number of respondents between male and female could be due to the sampling process and females are more willing to answer surveys compared to males. All of the respondents are above the age of 18 years old and they are predominantly young people with age of 20 to 29 years old (82.3%). The second biggest number of respondents comprised of individuals in below 20 (8.6%) age group. With respect to of educational level, more than 60 % of the respondents hold at least bachelor degree which has 62.7%, followed by high school certificate 23.2%, 8.2% are diploma and 5.9% are master degree. Concerning the marital status of the respondents the majority 192(87.3%) of the respondents are single and only 28(12.7%) of them are married. This can be due to fact that the majority of survey respondents are young people who are aged between 20 to 29(70.2%) years old. With respect to the ethnic groups of the survey sample, the majority 117(53.2%) of respondents is made up of Chinese, followed by 81(36.8%) Malay, 13(5.9%) Indians and 9(4.1%) is others which consist of foreigners. The high Chinese respondents reported are slightly imbalanced, however it can be considered as a representative of Malaysian population composition. Meanwhile, with regard to the employment status of the respondents more than 60% of them are employed. Working respondents consists of 63.6% followed with the student 36.4%. It is notable that household income which is below RM1500 is 40%, 35% are between RM1501 to RM3000, 14.1% are between RM3001-RM4500, 3.2% are RM 4501 to RM6000, 2.7% are RM6001 to RM7500 and only 0.5% are RM7501 to RM9000 and RM9001 to RM10500 while the remaining 4.1% of respondents earned more than RM10501 a month. This is could be due to the fact that most of the respondents are young adults such as student. While for the airline website that respondents have ever visited, all of the respondents had experience of visited airasia.com. This is due to the questionnaires only distributed to respondents who had online shopping experiences at airasia.com. 49 respondents had experiences of visit website of Malaysia Airline, 10 respondents had visit fireflyz.com and only 9 respondents visit other airline website. This is because there are 4.1% of respondents who are non Malaysian and they had experience of online shopping with other airline website in their origin country. Lastly, for the number of online purchases in airasia.com in last one year, large majority of 78.2% of respondents purchase 1 to 3 times Air Asia airline ticket in last one year. 13.2% of them purchase 4 to 6 times in airasia.com, 4.1% of them purchase 7 to 9 times while the remaining 4.6% of respondents purchase more than 10 times airline tickets. Factor analysis is used for data reduction and examine how underlying that the questions in the questionnaires asked is relating to the construct that need to study. There are two purposes in doing exploratory factor analysis; one is to identify the representative variables to create new variables the other purpose is to determine factors that would account maximum variance in the study that can be use for multivariate analysis (Quester & Lim 2003). Factor analysis was conducted for all variables. In this study, two variable had been tested which were internet apprehensiveness and website quality. Variables of internet apprehensiveness contain 14 items while for website quality has 29 items. All the variables included in this study were tested by previous studies. Two separate sets of factor analyses were conducted so that factors were correlated that can produce the conceptual similarities within the two set of scale items (Norusis, 1993). The eigenvalues setting at 1.0 which mean factor with a variance greater than 1.0 are retained. Kaiser (1974) recommends that the KMO accepting values greater than 0.5 as acceptable. Furthermore, values between 0.5 and 0.7 are mediocre, values between 0.7 and 0.8 are good, values between 0.8 and 0.9 are great and values above 0.9 are superb. The KMO value must be 0.50 and above to retained to obtain 80% of significant level. Items with KMO below than 0.5 were deleted (Hair et al 1998). Factor and item retention in this study were based on: Items not display when the cross-loadings greater than 0.40 with other factors. Items exhibiting principal factor loadings approximating 0.5 or above A reliability coefficient for the aggregated scale of 0.70 or greater. 4.2.1 Factor Analysis on Internet Apprehensiveness Table 4.2 shows the factor analysis performed on 14 items resulted in two factors explaining 66.6% of the overall variance. In this study, the KMO is 0.913, which fall into the range of superb indicating that all the variables for internet apprehensiveness were interrelated. So the factor analysis is appropriate for these data. For these data, Bartlett’s test is highly significant which is p=0.000 and therefore factor analysis is appropriate. Two factors were structured in Internet Apprehensiveness which was Transactional Internet Apprehensiveness and General Internet Apprehensiveness. Table 4.2 Factor analysis on internet apprehensiveness Factor 1: Transactional Internet Apprehensiveness I have fear of using the internet to make on-line purchases I am afraid to make on-line purchases at times I have fear of making on-line purchases. I am worried when using the internet to purchase products or services. I dislike using the internet to make online purchases. Ordinarily, I am not calm when making on-line purchases. I feel uncomfortable using the internet to make on-line purchases under RM1,000 I feel uncomfortable using the internet to make on-line purchases over RM1,000 The security of my credit card for use with on-line purchases concerns me. Factor 2: General Internet Apprehensiveness I dislike using the internet for a variety of reasons. I am usually not calm while using the internet. Communicating with the internet usually makes me uncomfortable. Generally, I am uncomfortable using the internet to gather information. I would not use the internet to purchase airline tickets, book hotel rooms, or other travel-related service. Percentage of Variance Cronbach Alpha(Reliability) Bartlett's Test of Sphericity-sig.0.000 Percentage of Cumulative Variance: 66.597 4.2.2 Factor Analysis on Website Quality Table 4.3 Factor analysis on Website Quality The information provided at airasia.com is clear to me. The interactions with airasia.com are clear and understandable. airasia.com is user friendly. airasia.com is easy to use. The information provided at airasia.com is reliable. The information on airasia.com is complete for my purchase decisions. I feel happy when I use airasia.com. I believe the airasia.com provides accurate information to potential customers like me. airasia.com is flexible to interact with. The information in airasia.com is relevant. The information provided on airasia.com is easily understandable. I find it easy to obtain information from airasia.com. I can easily find what I need on airasia.com. airasia.com uses good color combinations. I like the color combination of airasia.com airasia.com is creative in design. I like the layout of airasia.com. I found it easy to move around in airasia.com The start page leads me easily to the information I need. The website and all of its linked pages work well. Search engine provides accurate results Is easily accessed via search engines Can be accessed from a variety of other related websites I can find all the detailed information I need. The start page tell me immediately where I can find the information I am looking for. airasia.com loads quickly When I use airasia.com there is very little waiting time between my actions and the website's response. All my business can be completed via airasia.com Most business processes can be completed via airasia.com Percentage of Variance Cronbach Alpha(Reliability) Bartlett's Test of Sphericity-sig.0.000 Percentage of Cumulative Variance: 68.412 Table 4.3 shows the factor analysis results on 29 items that produced only four distinct factors that explaine 56.71% of the overall variance. One item has been discarded from further analyses because of factor loadings of that item are below 0.50 which were not acceptable according to Lee and Crompton (1992). Items that are cross loadings were deleted, so in this study three items were deleted and not included in further analysis. The Kaiser-Meyer-Olkin(KMO) was 0.938 and the Bartlett’s Test of Sphericity were significant which is 0.000 and shows that the items were appropriate in factor analysis (Hair et al.1998). For the remaining items, the factor analysis was between 0.503 and 0.829. The eigenvalues for component 1, component 2, component 3 and components 4 were 14.729, 2.705, 1.271 and 1.134 respectively. Table 4.4 Factor analysis on website quality re-run with the remaining 24 items Factor 1: Website Usability The information provided at airasia.com is clear to me. airasia.com is user friendly. The interactions with airasia.com are clear and understandable. airasia.com is easy to use. The information provided at airasia.com is reliable. The information on airasia.com is complete for my purchase decisions. I believe the airasia.com provides accurate information to potential customers like me. I feel happy when I use airasia.com. airasia.com is flexible to interact with. The information in airasia.com is relevant. The information provided on airasia.com is easily understandable. Factor 2: Website Design airasia.com uses good color combinations. I like the color combination of airasia.com airasia.com is creative in design. I like the layout of airasia.com. I found it easy to move around in airasia.com Factor 3: Accessibility Search engine provides accurate results Is easily accessed via search engines Can be accessed from a variety of other related websites I can find all the detailed information I need. Factor 4: Transactional Capabilities airasia.com loads quickly When I use airasia.com there is very little waiting time between my actions and the website's response. All my business can be completed via airasia.com Most business processes can be completed via airasia.com Percentage of Variance Cronbach Alpha(Reliability) Bartlett's Test of Sphericity-sig.0.000 Percentage of Cumulative Variance: 69.177 The first run of factor analysis did not produce a clean factor structure and there are three items were cross loadings, factor analysis had to re-run again in order to get better result. Table 4.4 shows the factor analysis results on website quality construct resulted in four factor explaining 69.17% of the total variance explained. The second run of factor loadings on the remaining 24 items were between 0.515 and 0.840. The Kaiser-Meyer-Olkin (KMO) was at a superb value of 0.930 and the Bartlett’s Test of Sphericity was significant that is 0.000 at the 5% level of significance. Therefore, the use of factor analysis was suitable in this study. The factor loading for each factor was high and stands alone in one factor. Therefore, the four dimensions in website quality were independently structured. For the first factor which is Website Usability contained 11 items with an eigenvalue of 13.055 and has 50.213 percent of the variance in the data. While for the second factor that is Website Design contained 5 items which has an eigenvalue of 2.604 and explained 10.015 percent of variance. The third factor Accessibility that has 4 items explaining 4.652 percent of variance with an eigenvalue of 1.209. Lastly, for the factor of Transactional Capabilities also has 4 items with an eigenvalue of 1.117 with 4.298 percent of variance. 4.2.3 Factor Analysis on Satisfaction Table 4.5 Factor analysis on satisfaction Factor Loadings My choice to visit airasia.com was a wise one. I am satisfied with my recent decision to purchase from airasia.com Overall, I was satisfied with airasia.com I think I did the right thing by visiting airasia.com. My choice to purchase from airasia.com was a wise one. I recommend airline.com to my colleagues. I recommend airasia.com to my friends. I have truly enjoyed purchasing from airasia.com I was satisfied with online buying when compared to offline buying. Percentage of Variance Bartlett's Test of Sphericity-sig.0.000 Percentage of Cumulative Variance: 74.497 Table 4.5 above summarized the result of factor analysis on satisfaction that produced only one factor. The result also shows that the ten statements explained 74.50% of total variance and the value of KMO is 0.929 which means this data is appropriate for factor analysis. All of the factor loadings in satisfaction were greater than 0.80 that are between 0.816 and 0.909. The Bartlett's Test of Sphericity is significant which 0.000 at the 5% level which means the items reject the null hypothesis. 4.3 RELIABILITY ANALYSIS Table 4.6 Result of reliability test Number of items Cronbach's Alpha General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities The result of reliability analysis is presented in table 4.6 above. A Conbach’s alpha reliability coefficient was used to check the reliability of the questionnaires (Cronbach, 1951). The Cronbach alpha values were conducted as 0.827, 0.934, 0956, 0.904, 0.771 0.831 and 0.961 for Transactional Internet Apprehensiveness (IA1), General internet apprehensiveness (IA2), Website Usability (WQ1) , Website Design (WQ2), Accessibility (WQ3, Transactional Capabilities (WQ4) and Satisfaction (SAT) respectively. All of the Cronbach's Alpha values are greater than 0.7 which are exceed the acceptable value of 0.7 and above suggested by (Hair et al.1998). None of the above variable will be deleted according to Nunally (1978) which suggested that the Cronbach's Alpha less than 0.50 will be deleted. The result of reliability test in this study revealed that all the seven factors fulfilled the requirements. 4.4 REGRESSION Linear Regression was used in this study to measure the significance of the relationship between scales of Transactional Internet Apprehensiveness, General internet apprehensiveness, Website Usability, Website Design, Accessibility and Transactional Capabilities, which were identified in literature review. The results from the regression analysis are presented in the table 4.7 below. Table 4.7 Regression analysis Sig.(P value) General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities R=0.753 R Square=0.567 From the result obtained from analysis of linear regression in table 4.7 above shows that three attributes which was Website Usability (0.000), Website Design (0.019) and Accessibility (0.001) are all significance and positive relationship with satisfaction at the significant level of p< 0.05. Therefore it can be concluded that three attributes are significantly associated with the satisfaction of air passengers towards airasia.com. Thus, null hypothesis is rejected, and it is concluded that there is significant relationship between independents variables and dependent In the other hand, three attributes which was General Internet Apprehensiveness, Transactional internet apprehensiveness and Transactional Capabilities have no effect on customer satisfaction. The R-Square is 0.567 explains 56.7% of the dependent variable is explained by the independent variable and another 43.3% is not explained. The output from the regression analysis shows that Website Usability has the largest absolute value of standardized beta coefficient (β=0.393) emerges as the most important factor that effect air passengers satisfaction towards Air Asia’s website. Website Usability has the strongest effect and become the important role of air travelers’ satisfaction which accounted 39.3%. The data also indicated that Accessibility is the second most important element driving customers’ satisfaction (β=0.249). Accessibility is followed by Website Design (β=0.157), Transactional Capabilities (β=0.110) and General Internet Apprehensiveness (β=0.002).The results also indicate that there is a significant negative relationship between Transactional internet apprehensiveness (β=-0.059) and the satisfaction towards airline website. Independent T-test and One way ANOVA analysis were conducted to assess whether air passengers’ satisfaction towards airasia.com will differ according to their socio-demographic characteristics. 4.5.1 T –Test T-Test is normally used when there are only two values in a variable. In this study, Independent T-Test was used to compare means among gender, marital status and employment status to know is there any difference between demographic profile of respondents and factors. In T-test, items that have a probability that is less than 0.05, null hypothesis of equal variance will be reject and the value of T of items are based on equal variance not assumed will be utilized. Therefore, if the items have a probability that is more than 0.05 confidence level and for the value of T, those items are based on the equal variance assumed will be utilized and indicates that the variance of the two samples is approximately equal. Table 4.8 T-Test result between gender and factors that influence customers’ satisfaction. I tem General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities Table 4.8 above shows the result between gender and six dimensions that effect customers’ satisfaction towards airasia.com. Table 4.8 indicates that none of the values give a probability that is less than the significant level of 0.05. Thus, the null hypothesis of equal mean is not rejected. In other word, differences between gender and factors were not found. Table 4.9 T-Test result by marital status and factors that influence customers’ satisfaction. I tem General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities The result of t-test between marital status and factors influencing consumer satisfaction towards airline website is presented in table 4.9 above indicates that the value of T for Website Usability is -0.776 and gives a probability of 0.035 which shows that website usability was significantly difference at 0.05 confidence level. Thus, the null hypothesis of equal mean is rejected. The remaining five dimensions are not significant difference meaning that null hypothesis for those factors is not rejected. Table 4.10 T-Test by Test result by employment status and factors that influence customers’ satisfaction. I tem General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities In general, the results in table 4.10 indicated only General Internet apprehensiveness was significant differences that gives the probability of 0.008 and the value of T is -0.2150. The other five dimensions were found that no significantly difference between the working people and students. 4.5.2 One Way ANOVA One Way ANOVA is used when there are three or more values in a variable and used to compares mean of one group or more groups based on independent variable. In this study, One way ANOVA is used to analyze between the factors that affect air travelers’ satisfaction towards an airline website with respondents’ demographic profile such as age, education level, ethnicity, household income and Numbers of online Shopping Experience at airasia.com. Table 4.11 One Way ANOVA between age group Age (Mean) < 20 > 50 General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities Table 4.11 displayed above shows that there is no significant level of 0.05 for the different age groups. Thus, the null hypothesis of equal mean is accepted Table 4.12 One Way ANOVA between educational levels. Educational level (Mean) High school Bachelor Degree Master Degree General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities Table 4.12 above shows the F value for Transactional internet apprehensiveness and Accessibility are 4.259 and 4.304 respectively. Both variables result a probability of 0.006 which is less than a significant level of 0.05. Besides that, that, General Internet Apprehensiveness also shows a significant difference that has a probability of 0.009 and the value of F was 3.981. Therefore, the null hypothesis is rejected and this proposes that there are significant differences among different education levels and Transactional internet apprehensiveness, General Internet Apprehensiveness and Accessibility. The remaining three variable shows that there is no significance differences among different education levels and Website Usability, Website Design and Transactional Capabilities. Table 4.13 One Way ANOVA between Ethnicity Ethnicity (Mean) General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities Table 4.13 indicates that there was no significance difference in how the ethnicity effect in Transactional internet apprehensiveness (F=0.592, p=0.621). The findings from the table above also indicate that there is no significance difference between ethnicity of respondents and accessibility of an airline website (F=0.261, p=0.853).While for the remaining four variables shows a significant differences which their p value is less than 0.05 in how the ethnicity affect the customer satisfaction. Table 4.13 One Way ANOVA between household incomes Household Income (RM) General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities Instrestingly, table 4.13 shows that the two dimensions of internet apprehensiveness shows a significant difference between household incomes(p<0.05). However, all four dimensions of website quality indicates that no significance difference between household incomes. Table 4.14 One Way ANOVA between Numbers of online Shopping Experience at airasia.com Numbers of online Shopping Experience at airasia.com (Mean) 1-3 times 4-6 times 7-9 times 10-12 times 13-15 times >16 times General Internet Apprehensiveness Transactional internet apprehensiveness Website Usability Website Design Transactional Capabilities Table 4.14 shows that Transactional internet apprehensiveness and Website Usability does has a significant differences and the F value were 2.926 and 3.188 respectively. The remaining four dimensions do not show a significance difference with Numbers of online Shopping Experience at airasia.com because the p value is more than 0.05. Thus, these four dimensions accept the null hypothesis. Share This Essay Did you find this essay useful? Share this essay with your friends and you could win £20 worth of Amazon vouchers. One winner chosen at random each month. Request Removal If you are the original writer of this essay and no longer wish to have the essay published on the UK Essays website then please click on the link below to request removal: Request the removal of this essay. More from UK Essays Need help with your essay? We offer a bespoke essay writing service and can produce an essay to your exact requirements, written by one of our expert academic writing team. Simply click on the button below to order your essay, you will see an instant price based on your specific needs before the order is processed:
{"url":"http://www.ukessays.com/essays/marketing/socio-demographic-profile-of-the-survey-sample-marketing-essay.php","timestamp":"2014-04-17T15:44:46Z","content_type":null,"content_length":"57352","record_id":"<urn:uuid:5a3fd599-c2ce-48a4-9dbb-00784f46b461>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Roemer's Hypothesis Soon after the invention of the telescope, Galileo discovered the four largest moons of Jupiter. Subsequently many astronomers made careful observations of those moons, and already by the 1660's detailed tables of their movements had been developed by Borelli (1665) and Cassini (1668). Naturally these tables were based mainly on observations taken around the time when Jupiter is nearly "in opposition", which is to say, when the Earth passes directly between Jupiter and the Sun, because this is when Jupiter appears high in the night sky. The orbital periods of Jupiter's four largest moons were found to be 1.769 days, 3.551 days, 7.155 days, and 16.689 days, and these are very constant and predictable, like a giant clockwork. Based on these figures it was possible to predict within minutes the times of eclipses and passages (i.e., the passings behind and in front of Jupiter) that would occur during the viewing opportunities in future "oppositions". However, by the 1670's people began to make observations of Jupiter's moons from the opposite side of the Earth's orbit, i.e., when the Earth was on the opposite side of the Sun from Jupiter. Obviously it's more difficulty to make measurements at these times, because the Jovian system is nearly in conjunction with the Sun, but at dawn and dusk it is possible to observe Jupiter even when it is fairly close to conjunction. These observations, taken about 6 months away from the optimum viewing times, revealed a puzzling phenomenon. The eclipses and passages of Jupiter's moons, which could be predicted so precisely when Jupiter is in opposition, are found to be consistently late by about 17 minutes relative to their predicted times of occurrence. (The early astronomers actually measured up to 22 minutes late, but modern instruments have shown that the "lateness" is slightly less than 17 minutes.) This is not to say that the time intervals between successive eclipses is increased by 17 minutes, but that the absolute time of occurrence is 17 minutes later than was predicted six months earlier based on the observed orbital period at that time. For example, the moon Io has a period of 1.769 days, so it completes about 103 orbits in six months, and apparently it lost a total of 17 minutes during those 103 orbits, which is an average of about 9.9 seconds per orbit. All the other moons seemed to be late by the same amount when observed with Jupiter near conjunction. Nevertheless, at the subsequent "opposition" viewing six months later, all the moons are found to be back on schedule! It's as if a clock runs slow in the mornings and fast in the afternoons, so that on average it never loses any time from day to day. While mulling over this data in 1675 on a visit to Paris, the Danish astronomer Ole Roemer thought of a beautiful explanation based on a remarkable hypothesis: "sight" is not instantaneous. Light travels at a finite speed, which implies that when we see things we are really seeing how they were at some time in the past. The further away we are from an object, the greater the time delay in our view of that object. Applying this hypothesis to the observations of Jupiter's moons, Roemer considered the case when Jupiter was in opposition on, say, January 1, so the light from the Jovian eclipses was traveling from the orbit of Jupiter to the orbit of the Earth, as shown in the figure below. The intervals between successive eclipses around this time will be very uniform near the opposition point, because the eclipses themselves are uniform and the distance from Jupiter to the Earth is fairly constant during this time. However, after about six and a half months (denoted by July 18 in the figure), Jupiter is in conjunction, which means the Earth is on the opposite side of it's orbit from Jupiter. The light from the "July 18" eclipse will still cross the Earth's orbit (on the near side) at the expected time, but it must then travel an additional distance, equal to the diameter of the Earth's orbit, in order to reach the Earth. Hence we should expect it to be "late" by the amount of time required for light to travel the Earth's orbital diameter. In the late 1600's there were already some rough estimates of the mean Earth-Sun distance, so this enabled Roemer to estimate the speed of light. Using modern estimates, the Earth's orbital diameter is about 2.98 x 10^11 meters, and the observed time delay in the eclipses and passages of Jupiter's moons when viewed from the Earth with Jupiter in conjunction is about 16.55 minutes = 993 seconds, so we can deduce that the speed of light is about 2.98 x 10^11 / 993 » 3 x 10^8 meters/sec. (For more discussion of Roemer's actual historical estimate and its basis, see Extraordinarily Fast.) Of course, Roemer's hypothesis implies a specific time delay for each point of the orbit, so it can be corroborated by making observations throughout the year. We find that most of the discrepancy occurs during the times when the distance between Jupiter and the Earth is changing most rapidly, which is when the Earth-Sun axis is nearly perpendicular to the Jupiter-Sun axis. At one of these positions the Earth is moving almost directly toward Jupiter, and at the other it is moving almost directly away from Jupiter, as shown in the figure below. The Earth's speed relative to Jupiter at these points is essentially just its orbital speed, which is the circumference of its orbit divided by one year. Thus we have v = p (2.98 x 10^11) / 365 meters per day which is equivalent to about 3 x 10^4 meters/sec. If we choose units so that c = 1, then we have v = 0.0001. From this point of view the situation can be seen as a simple application of the Doppler effect, and the frequency of the eclipses as viewed on Earth can be related to the actual frequency (which is what we observe at conjunction and opposition) according to the formulas The frequencies are inversely proportional to the time intervals between eclipses. These formulas imply that, for the moon Io, whose orbital period is 1.769 days = 2547.3600 minutes, the time interval between consecutive observed eclipses when the Earth is moving directly toward Jupiter (indicated as "Jan" in the above figure) is 2547.1052 minutes, and the time intervals between successive observed eclipses six months later is 2547.6147 minutes. Thus the interval between observed eclipses is 15.2 seconds shorter than nominal in the former case, and it is 15.3 seconds longer than nominal in the latter case, making a total difference of 30.5 seconds between the inter-arrival times at the two extremes, separated by six months. It would have been difficult to keep time this accurately in Roemer's day, but differences of this size are easily measured with modern clocks. Incidentally, Maxwell once suggested that Roemer's method could be used to test for the isotropy of light speed, i.e., to test whether the speed of light is the same in all directions. Roemer's method can be regarded as a means of measuring the speed of light in the direction from Jupiter to the Earth. Jupiter has an orbital period of about 12 years, so if we use Roemer's method to evaluate the speed of light several times over a 12 year period, we will be evaluating the speed in all possible directions (in the plane of the ecliptic). The entire solar system is known to be moving with a speed of about 3.7 x 105 meters per second with respect to the cosmic microwave background radiation (i.e., the frame in which the radiation is roughly isotropic), so if we assumed a pre-relativistic model in which light propagates at a fixed speed with respect to the background radiation, and in which frames are related by Galilean transformations, this would in principle provide a means of assessing the "absolute speed" of the Earth. The magnitude of the effect is given by computing how much difference would be expected in the time for light to traverse one orbital diameter of the Earth at an effective speed of c+V and c-V, where V is the presumed absolute speed of the Earth. This gives a maximum difference of about 2.45 seconds between two measurements taken six years apart. (These two measurements each occur over a 6 month time span as explained above.) Of course, in practice it would be necessary to account for many other uncontrolled variables, such as the variations in the orbits of the Earth and Jupiter over the six year interval. These would need to be known to much better than 1 part in 400 to give adequate resolution. As far as I know, this experiment was never performed, because by the time sufficiently accurate clocks were available the issue of light's invariance with respect to inertial coordinate systems had already been established by more accurate terrestial measurements, together with an improved understanding of the meaning of inertial coordinates. Today we are more likely to establish a system of coordinates optically, and then test to verify the isotropy of mechanical inertia with respect to those coordinates.
{"url":"http://mathpages.com/home/kmath203/kmath203.htm","timestamp":"2014-04-17T05:52:07Z","content_type":null,"content_length":"10876","record_id":"<urn:uuid:d81933b4-2c0b-4121-8b43-92d115e10a2d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
converging analysis proof 1. The problem statement, all variables and given/known data Assume that (an) is a bounded (but not necessarily convergent) sequence, and that the sequence (bn) converges to 0. Prove that the sequence (anbn) converges to zero. 2. Relevant equations 3. The attempt at a solution Assume that an is a bounded sequence and bn converges to 0. That means for all n in N, there exists a M >0 so that Since bn converges, that means that it must be bounded as well. Which means for all n in N there exists a P>0 so that bounded look alright As b is convergent, I would say for any P>0, there exists N such that for all n>N then -0|< |b since |an|<=M and |bn|<=P that means for all n in N: |an||bn|<= MP which is equivalent to |anbn|<=MP where MP>0 since M>0 and P>0. Hence (anbn) is bounded Since bn converges to 0 that means for e>0 there exists an N in N so that for n>=N which is equivalent to -e<bn<e This is where I get stuck. Do I just multiply the inequality by an? cause then I'd have which would be equivalent to |anbn|<e2 if I let e2=e(an) which would mean that anbn converges to zero as well. But I don't know if I can multiply the sequence by it though.... Any help would be great! i think you were almost there... now what you need to show to prove a converges to zero, is that for any e>0 you can choose N, such that for all n>N you have as you know a <=M for all n, then so now you just need to show you can choose N such that for all n>N and i think you're there
{"url":"http://www.physicsforums.com/showthread.php?t=340677","timestamp":"2014-04-18T03:13:09Z","content_type":null,"content_length":"31588","record_id":"<urn:uuid:0216eac2-d0d7-4b60-b657-a9150818cebd>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: polredabs(,16) Karim BELABAS on Thu, 12 Sep 2002 15:17:26 +0200 (MEST) [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] On Wed, 11 Sep 2002, Igor Schein wrote: > On Wed, Sep 11, 2002 at 04:39:28AM +0200, Karim BELABAS wrote: > > I have modified the recovery code to enforce this (thereby discovering new > > factors, and reducing the number of failures). Does any of your examples > > break it ? > There're actually a lot of degree-3 polynomials that don't get reduced > with flag=16. Here's one of the smaller ones > x^3 - 5*x - 86089842541292486305497862178148061265660715093760132420 This is expected: using flag = 16, we may reduce a suborder of the maximal order. And it will actually occur whenever two relatively large (> primelimit) primes divide the discriminant, one of them to an odd power [ hence also dividing the field discriminant ]. flag = 16 is mostly useful when only "small" primes are ramified [ otherwise, it would be the default ! ] ? factor(poldisc(x^3 - 5*x - 86089842541292486305497862178148061265660715093760132420)); [1000183 1] <---- this one is responsible [480860048849029 2] [781678926150510511345069276448469059 2] What can still be done is use a less naïve approach to "factor out small primes" (currently trial division only), e.g use (trial division + rho) [ ECM is probably too much already ], spending much less time rho than in the default factorint(). Say, number of rounds increases linearly with the discriminant size [ factorint() has a cubic increase once input gets large ]. In fact, this should be faster than pure trial division up to 'primelimit'. The best solution is probably to add a flag to factorint() for We need a good routine for "partial factorization of discriminants". Many functions need this (and currently do trial division only). Currently factorint() is not suited to the task since there's no way to override pollardbrent()'s tuning parameters [ which are geared towards complete factorization, _not_ "quick check just in case" ]. And modifying pollardbrent() is not enough, we still need (a large part of) the ifac_crack() machinery. Of course you still have the possibility to launch a full scale factorization in a different process and help out polredabs(,16) with the private primetable [ addprimes() ] if you're not happy with the results. Karim Belabas Tel: (+33) (0)1 69 15 57 48 Dép. de Mathematiques, Bat. 425 Fax: (+33) (0)1 69 15 60 19 Université Paris-Sud Email: Karim.Belabas@math.u-psud.fr F-91405 Orsay (France) http://www.math.u-psud.fr/~belabas/ PARI/GP Home Page: http://www.parigp-home.de/
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0209/msg00102.html","timestamp":"2014-04-16T16:31:27Z","content_type":null,"content_length":"6422","record_id":"<urn:uuid:8fbb4c07-712c-40ae-b4be-c6ae75cc2d39>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Carrollton, GA SAT Math Tutor Find a Carrollton, GA SAT Math Tutor ...Tutored on Precalculus topics during high school and college. Completed coursework through Multivariable Calculus. I love helping students understand Precalculus! 28 Subjects: including SAT math, physics, calculus, statistics ...I have tutored many students in algebra who are high school students and students in college. I enjoy teaching math and helping others learn math. I will work with students and try to find the easiest and most comfortable way for them to learn. 9 Subjects: including SAT math, calculus, algebra 1, precalculus ...I solved my own problem by going to study hall and help others in chemistry, which I did for at least a year. The school gave me an award for my efforts I did that year. When I tutored someone in chemistry, I devoted my time to one person at a time to focus on their problems, to see what they were struggling on. 17 Subjects: including SAT math, chemistry, calculus, geometry ...I also ran cross country in high school and participated or led several service organizations in college and high school, so I can easily relate to a wide variety of interests and backgrounds. I bring a professional, optimistic, and energetic attitude to every session and I think that everyone c... 17 Subjects: including SAT math, chemistry, physics, writing As an interdisciplinary student at KSU, my goals as a tutor revolve around uncovering teaching methods that work best for every individual. In keeping with that goal, I monitor new academic research in the cognitive & education fields on a weekly basis for different methodologies. I have been a pr... 26 Subjects: including SAT math, reading, English, geometry Related Carrollton, GA Tutors Carrollton, GA Accounting Tutors Carrollton, GA ACT Tutors Carrollton, GA Algebra Tutors Carrollton, GA Algebra 2 Tutors Carrollton, GA Calculus Tutors Carrollton, GA Geometry Tutors Carrollton, GA Math Tutors Carrollton, GA Prealgebra Tutors Carrollton, GA Precalculus Tutors Carrollton, GA SAT Tutors Carrollton, GA SAT Math Tutors Carrollton, GA Science Tutors Carrollton, GA Statistics Tutors Carrollton, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/Carrollton_GA_SAT_math_tutors.php","timestamp":"2014-04-17T15:33:11Z","content_type":null,"content_length":"24024","record_id":"<urn:uuid:f5eff334-ff0d-4453-9515-7a6d15aac37e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Escondido Algebra 2 Tutor ...I used Algebra in many aspects of my math and physics course in college. I tutored Algebra 2 in high school and college. I also used Algebra 2 in many aspects of my math and physics courses in 8 Subjects: including algebra 2, calculus, algebra 1, physics ...In addition I have previous experience in being a teaching assistant for the graduate anatomy class at Rutgers University. Through my college coursework and experience as being a teaching assistant at a major university I have gained the necessary skills to help your student excel in the subject... 10 Subjects: including algebra 2, chemistry, reading, physics I am a San Diego native that chose to stay in this beautiful city and go to University of California, San Diego (UCSD).I graduated from high school as a lifetime member of the California Scholars Federation, as an AP Scholar with Distinction, as a National AP Scholar, and as an IB Diploma recipient.... 42 Subjects: including algebra 2, reading, English, Spanish Does your child need help with math and science class? If so, I can help. My name is Adrian and I hold both a B.A. in chemistry from Dartmouth College and a Masters in physical organic chemistry from The Scripps Research Institute (TSRI). My years of instruction and practical experience in chemistry labs have equipped me well to explain essential chemistry concepts to a young audience. 22 Subjects: including algebra 2, chemistry, organic chemistry, algebra 1 ...My relationships with the students go beyond the formality of the classroom because I take the time to meet students' families and attend students' events (e.g., sports games and music recitals). My efforts also pay off in the classroom, however, because students feel more comfortable with me and... 54 Subjects: including algebra 2, reading, English, chemistry
{"url":"http://www.purplemath.com/escondido_algebra_2_tutors.php","timestamp":"2014-04-21T11:01:55Z","content_type":null,"content_length":"24002","record_id":"<urn:uuid:714ab6de-0ee9-4e0a-8cc5-37aff99cbd13>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Which postulate can be used to prove that the triangles are congruent? A. SSS B. AAA C. AAS D. ASA • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. What do you think is the answer ? Any guesses ? Best Response You've already chosen the best response. C) AAS Best Response You've already chosen the best response. @katlin95 ok got it!!!lol Best Response You've already chosen the best response. Best Response You've already chosen the best response. so its B right? Best Response You've already chosen the best response. @mayankdevnani Stop providing direct answers -_- Best Response You've already chosen the best response. i ment c ;p Best Response You've already chosen the best response. @hba she did'nt ask how to solve??? Best Response You've already chosen the best response. @mayankdevnani As a rule of thumb, we would prefer the askers to participate in the solution process instead of having the answerers doing all the work. thats not really our concern; we are aimed at being a study site, not a free answering service. We would rather promote learning the material as opposed to just handing over answers. This social contract is aimed at curbing people from using the site to cheat with ... Best Response You've already chosen the best response. where did u copy that from ?? Best Response You've already chosen the best response. @hartnn whom you are you asking?? Best Response You've already chosen the best response. there is no use asking you anything , mayank...i have told you many times not to give direct answers, even though the asker asks for it...but u never stop, do you ? Best Response You've already chosen the best response. @mayankdevnani Can you tell me why the answer is AAS ? Best Response You've already chosen the best response. I want to learn. Best Response You've already chosen the best response. @hartnn Actually i have saved this as a note so I can use it to explain anyone who provides direct answers. Best Response You've already chosen the best response. @mayankdevnani Please answer. Best Response You've already chosen the best response. @hba because there are two angles simultaneously and one side is given..so two angles represented as AA and side as S.so it becomes AAS.. ok @hba and @katlin95 Best Response You've already chosen the best response. I don't understand,please explain using the diagram. Best Response You've already chosen the best response. Best Response You've already chosen the best response. ok @hba Best Response You've already chosen the best response. sorry..@hartnn from today i teach people..don't give direct answers....if they want direct answers then Best Response You've already chosen the best response. Best Response You've already chosen the best response. Is ASA=SAS ? Best Response You've already chosen the best response. ASA because side is between two angles. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Ok good.Hope you explain next time :) Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50b0c3fae4b09749ccac3d59","timestamp":"2014-04-20T18:51:54Z","content_type":null,"content_length":"113344","record_id":"<urn:uuid:2224ccee-a40b-4703-bb04-4b069c91c7d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Binomial Probability Calculating a Binomial Probability In mathematics, to calculate the binomial probability, P(X = x), we use the formula binompdf (n, p, x). To calculate P(X=3) given n = 4, p = 0.41, and q = 0.59, press 2nd VARS [DISTR], ARROW DOWN to select 0:binompdf(, and then press ENTER. After binompdf( type 4,.41,3), and then press ENTER. See the results below. To 4 decimal places, the P(X = 3) = 0.1627. To calculate the probablity P(X by hand, we would have to calculate each individual probability, P(0), P(1), P(2), ..., P(x), and then add those together. The TI-83/84 calculator has a built in function that does this calculation in one step. The function is binomcdf (n, p, x). To calculate P(X given n = 4, p = 0.41, and q = 0.59, press 2nd VARS [DISTR], ARROW DOWN to select A:binomcdf(, and then press ENTER. After binomcdf(, type 4,.41,3), and then press ENTER. See the results below. To 4 decimal places, the P (X = 0.9717. To find use the fact that = 1 - P (X Thus to find P (X 3) given n = 4, p = 0.41, and q = 0.59, compute 1 - P(X using binomcdf(4,.41,2) as is done directly above. © 2001-2007, Macon State College. All rights reserved.
{"url":"http://calculator.maconstate.edu/binomial_probability/index.html","timestamp":"2014-04-18T18:10:45Z","content_type":null,"content_length":"8831","record_id":"<urn:uuid:8a5ff69e-db8f-484d-b9a8-36e09ed664ae>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Measures of Central Tendency, Position and Dispersion Measures of Central Tendency The measures of central tendency are different ways of determining or indicating which value from the information is the central value. The different measures of central tendency are: The mean is the average value of the distribution. The median is the score of the scale that separates the upper half of the distribution from the lower, that is to say, it divides the series of data in two equal parts. The mode is the most repeated value in a distribution. Measures of Position Measures of position are different techniques that divide a set of data into equal groups. To determine the measurement of position, the data must be sorted from lowest to highest. The different measures of position are: The quartiles divide the data set into four equal parts. The deciles divide the data set into ten equal parts. Percentiles divide the data set into one hundred equal parts. Measures of Dispersion The measures of dispersion report on how far the values of the distribution are from the center. The measures of dispersion are: The range is the difference between the highest and lowest data of a statistical distribution. The average deviation is the arithmetic mean of the absolute values of the deviations from the mean. The variance is the arithmetic mean of the squared deviations from the mean. The standard deviation is the square root of the variance.
{"url":"http://www.vitutor.com/statistics/descriptive/measures.html","timestamp":"2014-04-16T21:51:53Z","content_type":null,"content_length":"16707","record_id":"<urn:uuid:79b2290d-1c9c-4a8e-a767-5e142bfbba15>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
directed graph Newbie Poster 6 posts since Feb 2012 Reputation Points: 0 [?] Q&As Helped to Solve: 0 [?] Skill Endorsements: 0 [?] So I need help asap. I can't figure out how to modify code from my book to work with my problem. I'm givin an input file, the first value is the dimension of an n x n matrix. every value after is a weight for each vertice for the matrix. I'm unable to insert edges into an adjacency list input is as follows: 1 -2 3 2 -3 3 2 0 Initially moves are vertical and horizontal. when you land on negative moves turn to diagonal. goal is to reach 0, while starting at the top left 1. These are the structs from the book: struct edgenode{ :y(0),weight(0), next(NULL) edgenode(int newWeight) :y(0), weight(newWeight), next(NULL) edgenode(int newAdj, int newWeight) :y(newAdj),weight(newWeight), next(NULL) int y; int weight; edgenode* next; struct graph{ edgenode* edges[MAXV+1]; //adjacency list int degree[MAXV+1]; //outdegree of each vertx int nvertices; //number of vertices in graph int nedges; //number of edges in graph int directed; //is graph directed this is the insertion from the book: void insert_edge(graph* g, int x, int y, bool directed, int w){ edgenode *p; p = new edgenode; p->weight = w; p->y = y; p->next = g->edges[x]; g->edges[x] = p; if(directed == false) insert_edge(g,y,x, true, w); This is the function that uses the insert: void read_graph(graph* g, bool directed, ifstream* input){ int m; int x,y; initialize_graph(g, directed); //source is an input.txt file *input >> m; g->nvertices = m * m; for(int j=1; j <= g->nvertices; j++) *input >> weightArray[j]; for(int i = 1; i <=g->nvertices; i++){ //this needs to be changed******orignally looked like this // cin >> x >> y*********** but i need to use input file values //also might need a second for loop to add every edge to each row******* x = i; y = i; insert_edge(g,x,y,directed, weightArray[i]);
{"url":"http://www.daniweb.com/software-development/cpp/threads/440746/directed-graph","timestamp":"2014-04-20T18:25:55Z","content_type":null,"content_length":"31327","record_id":"<urn:uuid:31fb5c9f-9b0a-481f-b002-9af2ce69c49a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: DFA complement/intersection problem "Dennis Mickunas" <mickunas@cs.uiuc.edu> 6 Apr 2002 23:13:26 -0500 From comp.compilers | List of all articles for this month | From: "Dennis Mickunas" <mickunas@cs.uiuc.edu> Newsgroups: comp.compilers Date: 6 Apr 2002 23:13:26 -0500 Organization: University of Illinois at Urbana-Champaign References: 02-03-189 Keywords: DFA, lex Posted-Date: 06 Apr 2002 23:13:26 EST "Paul J. Lucas" <pjl@mac.com> wrote in message > If I have two languages: > L = a* > M = (a|b)* > it's obvious that L <= M (L is a subset of M) because all > sequences of A* are also sequences of (a|b)*. However, when I > write code for this, I get L <= M and M <= L which is wrong. > From: > L <= M === intersect(L,~M) = 0 > I first have two minimal DFA for both languages: > L: (s0) --{a}-> (s0) > M: (t0) --{a|b}-> (t0) > However, if you do this for M <= L, you get the same result > because ~L has no accepting states so N' = intersect(M,~L) > doesn't either; therefore N' is empty and M <= L. But this > isn't right! > What am I missing? Where is the mistake. The "flipping accepting/nonaccepting states" method works only if L is specified as a *complete* DFA (over the alphabet {a,b}), namely where (s1) is non-final. Now it's obvious that ~L=a*b(a|b)* (i.e. strings that contain at least one b). Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/02-04-014","timestamp":"2014-04-20T20:56:45Z","content_type":null,"content_length":"5691","record_id":"<urn:uuid:d0d34f50-38c4-416a-bcee-7c13573d899d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
I have a problem with an "ATM" machine programming... 07-21-2011, 06:55 PM I have a problem with an "ATM" machine programming... we want to model an ATM. the ATM give note of 10, 20 and 50 euro. taking into account following logic...: the withdrawed amount is always composed of the maximum number of highest notes. en the maximum number notes of the next value. Example: the requested amount = 90 this gives 1 note of 50 , 2 notes of 20 (NOT 4 of 20 and 1 of 10) the returned amount is at least equal to the requested value. so the ATM rounds up. example if requested is 75 then you get 80 back (1x50, 1x20 and 1x10) 1 van 10€), if U ask 96 u get 100 (2 x50 ) each type of note should be a different class the returned amount should also be a sepperate class. Can someone help me with this, because, I'm getting desperate about how to solve this! 07-21-2011, 07:00 PM The same things I told you in your other post apply here as well. 07-21-2011, 07:04 PM Thread locked as a homework dump (no work shown) and a duplication of a previous question.
{"url":"http://www.java-forums.org/new-java/46728-i-have-problem-atm-machine-programming-print.html","timestamp":"2014-04-17T17:38:27Z","content_type":null,"content_length":"5047","record_id":"<urn:uuid:6397277c-56cb-4ad9-9e79-325442c79040>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
suspended category A suspended category is an additive category $C$ equipped with an additive functor $S:C\to C$ called suspension and a class of $S$-sequences called triangles satisfying axioms below. Here one calls an $S$-sequence a sequence of morphisms of the form $X\stackrel{f}\to Y\stackrel{g}\to Z\stackrel{h}\to S X,$ and morphisms are ladders of the type $\array{ X&\stackrel{f}\to &Y&\stackrel{g}\to &Z&\stackrel{h}\to& S X\\ a\downarrow&&b\downarrow&&c\downarrow&&\downarrow S a\\ X'&\stackrel{f'}\to &Y'&\stackrel{g'}\to &Z'&\stackrel{h'}\to& S X',\\ where all the squares commute. Axioms: (SP0) Each sequence isomorphic to a triangle is a triangle. (SP1) Each sequence of the form $0\to X\stackrel{id}\to X\to S0$ is a triangle. (SP2) If $X\stackrel{f}\to Y\stackrel{g}\to Z\stackrel{h}\to S X$ is a triangle, then $Y\stackrel{g}\to Z\stackrel{h}\to S X\stackrel{-S f}\to S Y$ is also a triangle. (SP3) Every diagram of the form $\array{ X&\stackrel{f}\to &Y&\stackrel{g}\to &Z&\stackrel{h}\to& S X\\ a\downarrow&&b\downarrow&&&&\downarrow S a\\ X'&\stackrel{f'}\to &Y'&\stackrel{g'}\to &Z'&\stackrel{h'}\to& S X'\\ }$ can be completed to a morphism of $S$-sequences. (SP4) For any two morphisms $X\stackrel{f}\to Y$ and $Y\stackrel{g}\to Z$ there is a commuting diagram $\array{ X&\stackrel{f}\to &Y&\stackrel{g}\to &Z'&\to& S X\\ =\downarrow&&f\downarrow&&\downarrow&&=\downarrow \\ X&\to &Z&\to &Y'&\to& S X\\ &&\downarrow&&\downarrow&&\downarrow S f\\ & &X'&\ stackrel{id}\to &X'&\stackrel{j}\to& S Y\\ &&j\downarrow&&\downarrow&&\\ &&S Y&\stackrel{S i}\to &S Z'&&\\ }$ where the first two rows and the middle two columns are triangles. Suspended categories were introduced in • Bernhard Keller, Dieter Vossieck, Sous les catégories dérivées. [Beneath the derived categories] C. R. Acad. Sci. Paris Sér. I Math. 305 (1987), no. 6, 225–228. See also • B. Keller, Chain complexes and stable categories, Manus. Math. 67 (1990), 379-417.
{"url":"http://ncatlab.org/nlab/show/suspended+category","timestamp":"2014-04-18T05:33:32Z","content_type":null,"content_length":"22974","record_id":"<urn:uuid:4f2cdf66-db38-45c3-b65d-74dc68b9931e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
How to figure time complexity of algorithm August 22nd, 2013, 10:54 AM How to figure time complexity of algorithm Lets say you have to find a pair of matching socks. You could: 1) grab sock from basket. 2) grab another sock from basket. If it doesn't match throw it to the side. If it matches goto line 4. 3) repeat line 2 4) your done. How would you go about figuring time complexity of this algorithm or the following one. 1) grab sock from basket lay it on the bed. 2) grab another sock from basket. If it doesn't match any previous socks already on bed lay it on the bed. If it matches any previous socks goto line 4. 3) repeat line 2. 4) your done. How would you find it from these two cases. Thanks in advance. September 13th, 2013, 04:37 AM Re: How to figure time complexity of algorithm The first algorithm looks for a specific pair of socks in a sequence. It picks a first sock and then searches sequentially for a match. The second algorithm looks for any first pair of socks in a sequence. It pick socks sequentially and compares them with all socks encountered so far which are kept on a "bed". To make this efficient the simplest way is to use an array (or hash table). Checking whether a sock matches one on the bed then becomes a constant operation that is O(1). The complexity of the first algorithm is straightforward. It's O(N). Since stocks are picked at random with equal probability the time it takes to find a match will be proportional to the number of socks. The second algorithm is harder and I'm not quite sure. It cannot be worse than O(N), rather the opposite. The question is whether finding the first pair of socks is linearly dependent on the number of socks (like it was finding a specific pair of socks), or if maybe there is a logarithmic dependence? In that case this algorithm will be O(log N). I'll be back. September 13th, 2013, 07:18 AM Re: How to figure time complexity of algorithm depends wht you're after. if you only want 1 match, then the 1st solution is faster. the 2Nd solution is slower since it will need more compares on average, however it'll be faster if you need to find multiple matches. the 2nd solution isn't providing for the fast that there may be more socks than can fit on the bed and still keep it distinguishable enough. Somethig people often forget in sorting algo's since they tend to deal with integers... most algo's tend to revolve around fewest swaps. the time for a compare however may be the bottleneck rather than the time for a swap, so an algorith that does fewer compares may be preferable over one that does fewer swaps. There are cases where insertion sort will be better than a quicksort because of this. September 13th, 2013, 08:51 AM Re: How to figure time complexity of algorithm September 13th, 2013, 05:56 PM Re: How to figure time complexity of algorithm Yeah for one sock the 1st is way faster. I implemented it in software to see. When the number of pairs of socks increase the time increases quite a bit (maybe even exponentially). But I still wish I understood more on how to figure out time complexity. the 2Nd solution is slower since it will need more compares on average, however it'll be faster if you need to find multiple matches. the 2nd solution isn't providing for the fast that there may be more socks than can fit on the bed and still keep it distinguishable enough. That is interesting, I never thought about that matching all socks would be quicker this way. I may have to implement that next to see. Thanks for the reply man. September 16th, 2013, 07:52 AM Re: How to figure time complexity of algorithm This is a continuation of my post #2. The second algorithm finds the first pair of matching socks drawn from a basket containing N socks (N/2 pairs). Socks are picked one by one from the basket and compared for a match with all socks already drawn. To make the comparision efficient all previously drawn socks are kept in a set which allows for insertion and membership testing in constant time, that is O(1). Such a set could be implemented using a simple array. One can convince oneself that if n socks have been drawn already the probability p that the next sock will be a match is p(n) = n / (N-n). The complement 1 - p(n) is the probability of a no-match. As a plausibility check one may evaluate the extreme values p(0) and p(N/2) which turn out 0 and 1 respectively as they should. The first means that the very first sock drawn cannot be a match. The second means that if one sock has been drawn from each and every pair the next sock will be a match for certain. The next step is to have a look at the probability distribution function. What's the probability that exactly the x'th sock drawn is a match? First a series of non-matching socks must be drawn followed by the matching sock. This basically is a "first success" geometric distribution with the twist in this case that the probability of success varies. Expressed in p(n) it becomes, pdf(x) = [(1-p(0)) * (1-p(1)) * (1-p(2)) * ... * (1-p(x-2))] * p(x-1) This can now be used to calculate the expectation, that is the number of socks that have to be drawn on average to get a match. Using the definition of expected value it becomes (the sum of the products of all possible outcomes times their probabilities of occuring), E(N) = 1*pdf(1) + 2*pdf(2) + 3*pdf(3) + ... + (N/2+1)*pdf(N/2+1) Here's the code, double p(int n, int N) { // chance to draw a match after n socks (of N) have been drawn. return (n<=0) ? 0.0 : (n>=N/2) ? 1.0 : double(n)/double(N-n); double pdf(int x, int N) { // prob. distr. for sock number x (of N) to be the match double r=1.0; for (int i=0; i<x-1; ++i) r *= (1.0 - p(i,N)); return r*p(x-1,N); double E(int N) { // expected number of socks (of N) to draw to find a match double s=0.0; for (int i=0; i<=N/2; i++) s += pdf(i+1,N)*double(i+1); return s; void socks1() { // actual test program int N=10; for (int i=0; i<=N/2; i++) std::cout << p(i,N) << " "; std::cout << std::endl; for (int i=0; i<=N/2; i++) std::cout << pdf(i+1,N) << " "; std::cout << std::endl; std::cout << E(N) << std::endl; std::cout << "---" << std::endl; for (int i=1; i<=25; ++i) { int n = i*1000; double En= E(n); std::cout << n << ": " << En << " / " << En/std::sqrt(n) << std::endl; The test program first prints p, pdf and E for N=10 to check if it looks okay, which I think it does. The p probabilities go from 0 to 1. The pdf values sum up to 1. And E is about 4 which means the match is most likely to come when 3 non-matching socks have been drawn. Since a match must come at the latest after 5 this seems reasonable. Then the actual test is performed. I print 25 E values from N=1.000 to 25.000. I also print E(N)/sqrt(N). This quotient turns out to be constant for all N and that, ladies and gentlemen, indicates an O(sqrt N) algorithm. O(sqrt N) is not quite as good as the O(log N) I anticipated but it still beats the first algorithm which is O(N). For N=25.000 the first algorithm will require 12.500 socks on average to be drawn before a match. This second algorithm only 198. What a difference an array makes! :)
{"url":"http://forums.codeguru.com/printthread.php?t=539317&pp=15&page=1","timestamp":"2014-04-19T10:53:45Z","content_type":null,"content_length":"17111","record_id":"<urn:uuid:c292e4b6-29ee-485d-8e8c-4aa4d0270b98>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
A proof of the Riemann Hypothesis using the convergence of an integral Thursday morning update: After many hours, I decided that there is a critical error in the otherwise cleverly constructed proof. On page 138 (discussing Lemma 3), second part, he says "whence the function converges absolutely" essentially for any \(z\) with a real positive part. But it seems he hasn't really established that (except for circular reasoning) because if RH is false, and it may be false, the numerator \(|\psi(e^t)-e^t|\) goes like \(e^{at}\) for some positive \(a\) and the region of convergence is shifted by \(a\). So the "absolute" part of the convergence isn't correctly proven, it seems to me. Maybe it's enough to prove the "ordinary" convergence but I suspect that there could be a similar error in the \(g_1\) part of Lemma 3, too. Apologies if I am making a mistake. Some people talk about the proof of "almost twin" prime integers separated by at most 70 million or something like that. I am not terribly excited by this result even if it is true. It's always more interesting to talk about somewhat promising proofs to the Riemann Hypothesis, not only because of the $1 million that will be given to the first person who solves the old puzzle. Many people have thought that they had a proof but the candidate proofs have always failed so far. So you must understand it is extremely likely that we have another example of a failure here. But I am going to tell you, anyway. It would be great if some readers spend a sufficient time and energy by reading the paper. Please don't be repelled by the idiosyncratic Chinese English. Even I can recognize that it's not how a native speaker would formulate the ideas. ;-) That's his real name. Today, Hao-cong [first name] Wu [surname] of China sent me his new paper with a somewhat strange title (linguistically) Showing How to Imply Proving The Riemann Hypothesis (PDF full) published in the European Journal of Mathematical Sciences. How does the proof work? It's likely that I won't quite reproduce everything that is needed for the proof in this blog entry even though I may try. Teaching things is the best way to learn them. ;-) Wu elaborates upon some ideas initiated by Serge Lang, a famous mathematician. But that's the last comment about the sociological context. Now, let us look at the ideas which don't seem to require any esoteric new branches of mathematics. The proof reduces the Riemann Hypothesis to a claim about the absolute convergence of an integral that is related to the Riemann \(\zeta\)-function in a simple way. Let's roll. The function that Wu finds more convenient is called \(\psi(x)\), pronounce "psi of ex". It is related to the Riemann \(\zeta\)-function by the following identities\[ \phi(s) &= -\frac{\zeta'(s)}{\zeta(s)} = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} =\sum_p \frac{\log p}{p^s-1}=\\ &= s \int_1^\infty \frac{\psi(x)}{x^{s+1}}\dd x = \frac{s}{s-1}+s\int_1^\infty \frac{\psi(x)-x}{x^{s+1}}\dd x \] where the sum over \(p\) goes over the primes \(2,3,5,\dots\). The first step you should be able to verify if you want to validate Wu's proof is that the identities above are satisfied if \(\psi (x)\) is defined as the manifestly convergent sum\[ \psi(x) = \sum_{p^m\leq x} \log p = \sum_{n\leq x} \Lambda(n) \] where \(\Lambda(n)=\log p\) if \(\exists m\geq 1: \,n=p^m\) for a prime \(p\) and otherwise it is set to zero. Note that this \(\psi(x)\) is defined in such a way that for a large \(x\), it's expected to be very close to \(x\) because the "probability to be prime" \(1/\log x\) is cancelled by the factor \(\log p\) from the definition of \(\psi(x)\) – it's close enough already when we allow \(m=1\) only. The second step is to realize that the presence of a zero or zeroes of \(\zeta(s)\) also implies (or would imply) a pole of \(\phi(s)\), the [minus] "logarithmic derivative of the \(\zeta\) -function", at the same location of the complex plane. To prove the Riemann hypothesis, it is sufficient to prove that \(\phi(s)\) has no poles for \[ \frac 12 \lt {\rm Re}(s) \lt 1 \] (in the "right half-strip", as I will call it) because the hypothetical "RH-violating" zeroes (and singularities) come in pairs symmetrically distributed relatively to the critical axis \(s=1/2+it \) for \(t\in\RR\). Note that \(\phi(s)\) has a pole (or would have a pole) even for a higher-order zero of \(\zeta(s)\). The third step, and it's the only hard one, is to actually prove that one of the integrals involving \(\psi(x)\) used to calculate \(\psi(s)\) above\[ \int_1^\infty \frac{\psi(x)-x}{x^{s+1}}\dd x \] is analytic in the right half-strip so it has no poles over there. Consequently, the \(\zeta\)-function has no zeroes in the right half-strip and, by the left-right symmetry, no zeroes in the left half-strip, either. Wu reduces the claimed analyticity of the integral above to the absolute convergence (convergence even if the integrand is replaced by its absolute value) and uniform convergence (the speed of convergence may be taken to be \(\varepsilon\)-independent), \(\forall\varepsilon\gt 0\), of the integral\[ \int_1^\infty \frac{\psi(x)-x}{x^{3/2+\varepsilon}}\dd x. \] It shouldn't be hard to see that the absolute and uniform convergence of the integral above (here) is enough for the analyticity of the previous integral, and therefore for the absence of the non-trivial zeroes. Note that the exponents \(s+1\) for \(s\) in the right half-strip and \(3/2+\varepsilon\) for a positive \(\varepsilon\) are the same objects. So aside from the claims that should be straightforward, the beef of the proof should be the demonstration of the absolute and uniform convergence of the integral in the last displayed equation. Note that Wu's approach is linked both to the "complex analytic" interpretation of the Riemann Hypothesis as well as the prime-integer-counting, "number-theoretical" interpretation. It's because sufficient experts know that the Riemann Hypothesis is equivalent to the statement\[ \forall \varepsilon\gt 0: \, \psi(x) = x+ O(x^{1/2+\varepsilon}) \] which says that if we accept that the probability for a "rough number \(x\)" to be a prime is \(1/\log(x)\), then the estimated number of primes up to \(n\) deviates from the actual one at most by a power law (that is producing the \(O(\dots)\) term above. Proving the convergence OK, so how does Wu want to prove the uniform and absolute convergence? He offers some introduction to the theory of functions of real and complex variables together with some lemmas that are not quite well-known and that may even be new. Finally, the proof boils down to the existence (for any \(s\) with a real positive part) of the Laplace transforms \(g_{1,2}(s)\) of a function called \(f_ {1,2}(t)\) related to \(\psi(e^t)-e^t\) for the subscript \(1\) or its absolute value for the subscript \(2\). If you quickly want to focus on claims related to the \(\zeta\)-function and ignore various theorems and lemmas about completely general functions and their convergence etc. (assuming that these things are harmless and perhaps known to you, explicitly or intuitively), you may find it helpful for me to say that only Theorem 5 (among 7 theorems) and Lemma 3 (among 3 lemmas) is what you want to read. If there is some circularity in Wu's argument (secretly assuming RH), it's probably somewhere in Theorem 5 or Lemma 3. In particular, I believe that Theorem 5 contains the main trick that allows us to show the convergence in the right half-strip. This theorem claims the absence of poles (except for the \(s=1\) pole) of the function\[ \Phi(s) &= \sum_p \frac{\log p}{p^s} = \phi(s)-\sum_p h_p(s),\\ |h_p(s)|&\leq B\frac{\log p}{|p^{2s}|} \] On one hand, this capital \(\Phi(s)\) is shown to be rather close to the lowercase \(\phi(s)\), using an argument based on geometric series. On the other hand, the \(2s\)-th power of something appears in the difference between \(\Phi\) and \(\phi\) which makes \(\sum\log n/n^{2s}\) converge for \({\rm Re}(s)\geq 1/2+\delta\). So the coefficient \(2\) in \(2s\) here is the ultimate reason why the meromorphic character of \(\Phi(s)\) starts at \({\rm Re}(s)\gt 1/2\), how we get the one-half somewhere, and why the critical axis becomes a decisive boundary for the well-definedness of \(\ phi(s)\), too. I don't see any mistake so far but I haven't really devoured all the beef of the proof yet, either, so no complete confirmation from your humble correspondent yet. But it is apparently making more sense every minute! See the previous TRF blog entries mentioning the Riemann Hypothesis snail feedback (32) : It will be a big feather in China's cap if proven. According to Wikipedia there have been two Chinese Fields medalists but neither was born or trained in China. The IQ figure is invented. For Fields medals, I assume you mean Terence Tao and Shing Tung Yau, but Tao was born in Australia and never lived in China. S.T. Yau lived in British controlled Hong Kong from the age of 4 months until he moved to the U.S. Or perhaps you're not a trained mathematician in this area and therefore don't know what you're talking about? ;) Hey, Lubos You and Tommaso Dorigo sound so alike sometimes. Tommaso has the recent cold fusion paper, you have this Riemann hypothesis paper. I wonder if Tommaso will do a tongue in cheek blog of you becoming a math crackpot ;) The proof is not correct and said integral converges if the Riemann hypothesis is true. Frankly I agree with Littlewood that RH is probably not true, in there is no reason for it to be true. I think counter example will eventually be shown (for some number that will never appear in digit form). Anyway I think Generalized RH is true, however. Doesn't the Zeta function in this form only make sense for Re(s)>1. Doesn't it require an analytic continuation to the rest of the complex plane to even discuss the strip 1/2<Re(s)<1? Where is the analytic continuation in this proof? A difference is that no one has revealed a reason why this proof is wrong so far. I don't understand what problem with the proof you have found. The RH is just a special case of the GRH, so how can you think the RH is probably not true but the GRH is true?? Definitely true. Thank you, I haven't looked at it very closely yet, but it does not appear that to me convergence of that integral is guaranteed unless the Riemann hypothesis is true More later, thanks! Thanks, I meant to say, I think GRH is true in some other cases (as Piltz wrote it), but not the case for which RH is true. I THINK THAT DOESN'T EXIST ONE ONLY ONE RESOLUTION TO THE RIEMANN'S HYPOTHESYS. THERE ARE SEVERAL "HOLES" THAT ARE TO THE SAME THE THEIRS ARE REAL AND COMPLEX VARIABLES.O OPERATOR IMPLIES THE EVOLUTIONS OF THE SYSTEMS.THEN THE "HOLES IS 1 OR -! SIMULTANEOUSLYmTHAT COULD TO BE MEASURED BY TWO OPPOSED ORIENTATIONS THAT SEND INFORMATIONS TO THE POSITIVE AND THE OTHER DOES IN OPOSED ORIENTATIONS The paper's author has a blog (in Chinese): http://whc8778.blog.163.com/ This is a hilariously terrible paper and European Journal of Mathematical Sciences is an execrable journal. A travesty. No, it's a travesty of a mockery of a sham. Lubos, I think you are trolling us just as Tommaso Dorigo is trolling by pretending to take cold fusion seriously. It's not reasonable to expect mathematicians to debunk every crackpot "proof" of RH that comes along. They have better things to do. Dear Jason, I haven't found any clear error in the paper yet - the strategy looks very sensible. I agree that the journal is not even worth the name but your superficial way of deciding whether something is right or not is a much worse travesty than everything you criticize. I think the 4 color theorem is not a very good example, exactly because of the use of computers, which could not have been done much earlier. Anyway, it's fun to try to proof something hard even if it's extremely likely to fail. Dear Mark, so let me give you another, computer-free example. Several days ago, the proof about "almost twin primes" separated by 70,000,000 or less was announced. In principle, Yitang Zhang's proof is elementary maths, and expert mathematicians say that he nevertheless succeeded where experts have failed. The idea that all important proofs have to be done by well-known experts and the most esoteric and new techniques is just crap. Sometimes it's so, sometimes it's not so. His english is terrible. I don't think a proof so simple exist. I has just browse the paper and the last sentence in page 137 is wrong. Both substitutions are wrong, an e^t is missing in both integrals on the right. Regarding Perelman's work. There are deep math or what you called advanced tools of course. eg. Ricci flow with surgery, W-entropy, collapsing theory, etc. I don't think these are not advanced tools at least these tools are not accessible to most of the graduate students. elementary maths? are you kidding me? you look just like a crackpot According to one of the world leading experts on analytic number theory, Henryk Iwaniec,``Yitang Zhang was not well-known to specialists in number theory before his fantastic paper on prime numbers was recognized by the Annals of Mathematics three weeks ago. But he possessed the knowledge of the most sophisticated areas of analytic number theory, and he could use it all with ease. Also, he was able to make a breakthrough where many investigators were stuck, not because something little was overlooked, but because of new, clever arrangements which he introduced and brilliantly executed. You could sense immediately that the work had a great chance to be fine by looking at the clear and logical architecture of the arguments. It does not mean the paper is simple or elementary. To the contrary, the work of Zhang constitutes the state-of-the art of analytic number theory. It also borrows gracefully from other areas, for example, it makes use indirectly of the Riemann hypothesis for varieties over finite field. Zhang's work will trigger a lasting avalanche of refinements and improvements with many innovations. Overnight Zhang redirected the focus of analytic number theory. How long do we need to wait to see what comes next?" Dear Petr, be sure that I have read everything here and please don't lie. But just because I have read something doesn't imply that I am going to uncritically parrot it or misinterpret it. I have explained what I meant by the adjective "elementary", I insist it is the right description of the proof, and I also insist that the bulk of the Wu's proof uses equally elementary or non-elementary tools as the bulk of Zhang's proof. I think whether or not OVERSPECIALIZED, the world-class mathematicians have a better taste in math than you do. Also since you said that "One only has one life to live", I think you wasted too much time in Wu Haosong's paper. Wu is a Chinese salesman and do not have any academic training. He is a crackpot and EJMS is a junk journal. Instead Annals of Mathematics is the most prestigious mathematics journals and Zhang earned PhD from Purdue Univ. I don't think these two things have any similarities. You put them together only reflect your shallowness. Zhang's research is groundbreaking. Please publish some papers on String Theory before you make such naive judgements on other people's work. Tell me bro, what have you contributed to the science world lately? are you working on RH like Perelman? I think you're right Lubos, and from what I see, he has already assumed absolute and uniform convergence of his integral to claim the existence of the Laplace transform of it at all. So it looks like we have a demonstration that the Riemann hypothesis is true if the Riemann hypothesis is true (I believe John Horton Conway used to collect these as a hobby). Finding the cases where RH is false numerically might be impossible if the computation involves functions like loglog(n); what are we up to now? Something to the order E+12 if I am not mistaken, that is a nothing to the function loglog(n). Finding better bounds between the zeros, and bounds for the height of the "spikes" between them might be a way to demonstrate the existence of a contradiction People have claimed "havoc" if RH is not true - nonsense - people know what the distribution of primes looks like in the event RH is not true, it just means that life isn't as simple as one would like it. Hardy and others proved that some 40% of the nontrivial zeros are real; so maybe about half of them are real, then the other half are imaginary, that's life with a transcendental By the way the best part of his paper is his references. Serge Lang presents analysis from the knowledge of an algebraist, everybody ought to run out and buy and read Serge Lang's books right Sorry, this is exactly the same link that is used in the blog entry. Hello, there's a claimed proof of this Riemann Hypothesis at http://arxiv.org/abs/0809.5120 by Arne Bergstrom. I haven't found anywhere online which correctly claims an error in the proof. Do you mind taking a look? Several months later we have a much clearer picture what Zhang really needed to obtain his result. For one thing, one can get without the Deligne's results on Weil conjectures. Nevertheless, this work is built upon the results of Goldston, Pintz & Yildirim. While these results may fit Luboš's definition of elementary (After all it's just some inequalities and sieves, right?) for most mathematicians are these results rather far from (their definition of) elementary. For interested readers I suggest checking out Polymath8 paper which should be published shortly.
{"url":"http://motls.blogspot.in/2013/05/a-proof-of-riemann-hypothesis-using.html","timestamp":"2014-04-16T04:10:53Z","content_type":null,"content_length":"246203","record_id":"<urn:uuid:316d89d0-d1b6-4754-a3ee-a42598be56a2>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
B-spline toolbox Basic toolbox for polynomial B-splines on a regular grid B-splines is a natural signal representation for continous signals, where many continous-domain operations can be carried out exactly once the B-spline approximation has been done. The B-spline estimation procedure in this toolbox using allpole filters is based on the classic papers by M. Unser and others [1,2,3], it allows very fast estimation of B-spline coefficients when the sampling grid is uniform. Evaluation/interpolation is also a linear filter operation. The toolbox has two layers; a set of functions for the fundamental operations on polynomial B-splines, and an object-oriented wrapper which keeps track of the properties of a spline signal and overload common operators. The representation is dimensionality-independent, and much of the code is vectorized. Units tests are included, these require the MATLAB xunit toolbox. [1] M. Unser, A. Aldroubi, M. Eden, "B-Spline Signal Processing: Part I-Theory", IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 821-833, February 1993 [2] M. Unser, A. Aldroubi, M. Eden, "B-Spline Signal Processing: Part II-Efficient Design and Applications", IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 834-848, February 1993 [3] M.Unser, "Splines: A Perfect Fit for Signal and Image Processing", IEEE Signal Processing Magazine, vol. 16, no. 6, pp. 22-38, 1999 To install the toolbox Unzip the archive to a location on your harddrive. Add the root directory of the installation to your matlab path. Optionally run the unit tests to check the installation (see next section.) You can also try to run this example cell file to regenerate the output. Running the unit tests To run the unit tests you need the MATLAB xunit toolbox by Steve Eddings which can downloaded from http://www.mathworks.com/matlabcentral/fileexchange/22846 Simply enter the test directory and run xunit on the MATLAB command line by typing runtest. For details, see the xunit documentation. Introductory example We can create an instance of the Bspline class to approximate a signal as a sum of B-spline functions. The resulting Bspline object keeps track a few properties of the approximated signal and overloads many common operators in addition to giving the opportunity to evaluate the spline on the subscript space of the original signal. The signal we want to approximate, defined on a uniform integer-valued grid x = 1:20; s = atan(x-10); Create a third order B-spline object: s_spl = Bspline(s,3); We can get back a close approximation of the original signal vector by evaluating the B-spline on integer points using familiar subscripting syntax: s_rec = s_spl(1:20); plot(1:20, s_rec); But it is also possible to evaluate the B-spline on a finer grid using the same syntax, effectively interpolating between the integer knots. x_fine = 1:0.2:20; s_fine_rec = s_spl(x_fine); plot(x_fine, s_fine_rec); Approximaton accuracy How well does this approximate the original arctangent function? s_fine = atan(x_fine-10); plot(x_fine, s_fine-s_fine_rec); grid on; grid minor; This is an exactly interpolating spline, so the error is zero at the integer knots, but higher between. Maybe a higher spline order can improve the approximation? Or in this case maybe we are better of by using denser sampling. Let us try a fifth order spline. s_spl5 = Bspline(s, 5); s_fine_rec = s_spl5(x_fine); plot(x_fine, s_fine-s_fine_rec); grid on; grid minor; That did not help much, actually it looks a little worse. Let us instead try to increase the sample density by a factor of 10. x10 = 1:200; x10_fine = 1:0.2:200; s10 = atan((x10/10)-10); s10_fine = atan((x10_fine/10)-10); s10_spl = Bspline(s10, 3); s10_fine_rec = s10_spl(x10_fine); plot(x10_fine, s10_fine-s10_fine_rec); The approximation is better, but still there is some approximation error at the edges of the signal. one likely reason for this; the mirrored signal extension implicit in the direct transform estimation gives a discontinuity in higher order derivatives (unless the signal under approximation has moments which die off at the edge). Since high order B-splines model continous higher order derivatives and the signal extension in this case gives discontinous higher order derivatives on the edge, there is not such a good fit. Coefficient characteristics Let us look at the B-spline coefficients themselves: stem(s_spl.c); title('samplerate 1, 3rd order') stem(s10_spl.c); title('samplerate 10, 3rd order') stem(s_spl5.c); title('samplerate 1, 5th order') We can see that the coefficients are in same range as the original vector regardless of order and sampling density in this example. Derivatives and integration Once the approximation has been done during construction, the B-spline can be exactly derived or integrated. For derivatives use the overloaded diff function: s10_spl_diff = diff(s10_spl); s10_diff_rec = s10_spl_diff(x10_fine); plot(x10_fine, s10_diff_rec); The corresponding integral function: s10_spl_int = integral(s10_spl); s10_int_rec = s10_spl_int(x10_fine); plot(x10_fine, s10_int_rec); How well is the approximation for the differentiation here? The arctanget has an analytic differential which we can compare against. s10_diff = (1/10)./(1+(x10_fine/10-10).^2); plot(x10_fine, s10_diff_rec-s10_diff); The edge errors from the first approximation is propagated, otherwise it looks quite good. Compare this with simply using the difference operator on the input vector: s10_diff2 = diff(s10); s10_diff = s10_spl_diff(x10); 2D splines The toolbox also lets you work on multi-dimensional splines. We can create a Bspline object for a 2d array: fun2d = peaks(32); sfun2d = Bspline(fun2d,3); For such splines we can also calculate the gradient: grad_fun2d = gradient(sfun2d);
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/27047-b-spline-tools/content/bspline_tools_1_2/doc/html/BsplineDoc.html","timestamp":"2014-04-25T03:39:49Z","content_type":null,"content_length":"43823","record_id":"<urn:uuid:7a550a48-3687-4638-98c0-f23b671702da>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Seminar on Jan. 18, 2011 - Microsoft Research Seminar on Jan. 18, 2011 Date: Jan. 18, 2011 Venue: Microsoft Shanghai Zizhu Campus, Building 01, Room 5019 Schedule of talks: 11:00 am Speaker: Guohua Wu (NTU) Title: Automatic Sequences and Transcendence of Real Numbers In this talk, I will first review basics of automatic sequences, and then discuss transendence properties of the corresponding real numbers. Cobham Theorem and Mahler's will also be introduced. 2:00 pm Speaker: David Xiao (Université Paris-Sud) Title: Is privacy compatible with truthfulness? We investigate the mainstream interpretation of differential privacy, which says that given a differentially private mechanism, people are likely to share their data truthfully because they are at little risk of revealing their own information. We argue that this interpretation is incomplete because people attach a concrete value to their privacy, and so they should be explicitly motivated to reveal their information. Using the notion of truthfulness from game theory, we show that in certain settings, existing differentially private mechanisms do not encourage participants to report their information truthfully. On the positive side, we exhibit a transformation that, for games where the type space is small and the goal is to optimize social welfare, takes truthful and efficient mechanisms and transform them into differentially private, truthful, and efficient mechanisms. We also study what happens when an explicit numerical cost is assigned to the information leaked by a mechanism. We show that in this case, even differential privacy may not be strong enough of a notion to motivate people to participate truthfully. 3:00 pm Speaker: Yuan Zhou (CMU) Title: Optimal lower bounds for locality sensitive hashing (except when q is tiny) Lower bounds for Locality Sensitive Hashing (LSH) in the strongest setting: point sets in {0,1}^d under the Hamming distance. Recall that here H is said to be an (r, cr, p, q)-sensitive hash family if all pairs x,y in {0,1}^d with dist(x,y) \leq r have probability at least p of collision under a randomly chosen h in H, whereas all pairs x,y in {0,1}^d with dist(x,y) \geq cr have probability at most q of collision. Typically, one considers d \to infty, with c > 1 fixed and q bounded away from 0. For its applications to approximate nearest neighbor search in high dimensions, the quality of an LSH family H is governed by how small its "rho parameter" rho = ln(1/p)/ln(1/q) is as a function of the parameter c. The seminal paper of Indyk and Motwani showed that for each c \geq 1, the extremely simple family H = {x \mapsto x_i : i in d} achieves rho \leq 1/c. The only known lower bound, due to Motwani, Naor, and Panigrahy, is that rho must be at least (e^{1/c}-1)/(e^{1/c}+1)\geq .46/c (minus o_d(1)). In this paper we show an optimal lower bound: rho must be at least 1/c (minus o_d(1)). This lower bound for Hamming space yields a lower bound of 1/c^2 for Euclidean space (or the unit sphere) and 1/ c for the Jaccard distance on sets; both of these match known upper bounds. Our proof is simple; the essence is that the noise stability of a boolean function at e^{-t} is a log-convex function of t. Like the Motwani--Naor--Panigrahy lower bound, our proof relies on the assumption that q is not `tiny', meaning of the form 2^{-Theta(d)}. Some lower bound on q is always necessary, as otherwise it is trivial to achieve rho = 0. The range of q for which our lower bound holds is the same as the range of q for which rho accurately reflects an LSH family's quality. Still, we conclude by discussing why it would be more satisfying to find LSH lower bounds that hold for tiny q. This is joint work with Ryan O'Donnell of Carnegie Mellon and Yi Wu of IBM Research.
{"url":"http://research.microsoft.com/en-us/groups/msratheory/2011jan18.aspx","timestamp":"2014-04-16T16:08:06Z","content_type":null,"content_length":"14129","record_id":"<urn:uuid:f63bfa01-57e4-4791-bc46-5b4216c0c5f5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Brevet US7944859 - Network design method This invention relates generally to the field of network design and in particular to a design method resulting in networks that are both resilient and fault-tolerant. Spanning Tree Protocol(s) (STPs) and spanning trees are conventionally employed by network designers to determine routing and/or forwarding in contemporary Ethernet-type networks. When a node or link within the network fails, a new spanning tree is determined and traffic is rerouted such that it avoids the failed node or link. An advance is made in the art according to the principles of the invention directed to a network design method which produces Ethernet-type networks exhibiting sufficient capacity during periods of normal operation as well as periods when a single node or link has failed. According to an aspect of the present invention, an input to the method will comprise a network design and a collection of new demands and an output of the method will be a set of parameters that will define a new network design. More particularly, and as used herein, a network is characterized by a graph wherein the nodes and links exhibit various attributes. In particular, a network N comprises a unidirected graph G=(V,E) and a number of other parameters. For each node v ε V, the cost[v]( ) is defined as a function which—for example—cost[v](b) is the cost of provisioning v to have capacity sufficient to handle a node load of b Similarly, for each link e ε V, cost[e]( ) is a function where cost[e](b) is the cost of provisioning e to have a capacity sufficient to handle a link load of b. Finally, for each link e ε V, there is a link delay given by δ[e]. According to an aspect of the present invention, we let V={v[1],v[2], . . . ,v[n]} and E={e[1],e[2], . . . ,e[m]}, wherein t is the number of MSTIs in the desired design. As such, a network design will comprise: a network N; capacities C(v) for each v ε V; capacities C(e) for each e ε E; and for each i, 1≦i≦t; (that is for each MSTI)—a set of demands D , a priority function p (x) defined for x ε E ∪ V, and a length function L (e) defined for e ε E, which we write notationally as: p [1] ( ), p [2] ( ), . . . , p [t] ( )); L [1] ( ), L [2] ( ), . . . , L [t] ( )); v [1] ), . . . , v [n] e [1] ), . . . , e [m] )); and D [1] ,D [2] , . . . ,D [t] ); such that According to an aspect of the present invention, an input to the method will comprise a network design =(N,D,P,L,C) and a collection of new demands {circumflex over (D)}=({circumflex over (D)} , {circumflex over (D)} , . . . ,{circumflex over (D)} The output of the method will be a set of parameters that will define a new network design ′=(N,D′,P′,L′,C′), where D [1] ′=D [1] ∪{circumflex over (D)} [1] ,D [2] ∪{circumflex over (D)} [2] , . . . ,D [t] ∪{circumflex over (D)} [t] p [1] ′( ), p [2] ′( ), . . . p [t] ′( )), L [1] ′( ), L [2] ′( ), . . . L [t] ′( )), and v [1] ), . . . v [n] e [1] ), . . . e [m] which advantageously survives a single node or link failure. These and other aspects of the invention will be apparent to those skilled in the art in view of the following detailed description along with the drawing figures in which: FIG. 1 is a flow diagram depicting a network design according to an aspect of the present invention; FIG. 2 is a flow diagram depicting an aspect of network design according to the present invention. The following merely illustrates the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, all examples and conditional language recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. Thus, for example, it will be appreciated by those skilled in the art that the diagrams herein represent conceptual views of illustrative structures embodying the principles of the invention. By way of some additional background, we begin by noting that when performing a network design method according to an aspect of the present invention, a network operator may provide as input a set of locations (nodes) as well as identification of which one(s) of the nodes are interconnected. For our purposes herein, the term network generally means a representation of those nodes along with their connectivity (i.e., links). As noted earlier, our inventive method takes as input a representation of a network and its demands and returns a set of node and link parameters that determine how the Spanning Tree Protocol will perform. More particularly, for each Multiple Spanning Tree Instance (MSTI) the node and link parameters determine a sub-tree of the network that the Spanning Tree Protocols will construct as well as any additional sub-trees that the Spanning Tree Protocols will construct upon the failure of any given node or link. Finally, our method produces indications of the required capacities for the nodes and links such that it is possible to forward the demands according to any of these sub-trees. With this background in place, and as can now be readily understood by those skilled in the art, when given an undirected graph G=(V,E), a bridge priority p(v)∀v ε V, a link priority p(e), and a link cost L(e) for each link e ε E, the Spanning Tree Protocol computes a shortest path spanning tree denoted as STP(G,p( ),L( )). As used herein—and in order to avoid confusion—we refer to the link cost L(e) for each link as the length of e to distinguish it from the cost of provisioning capacity in the network. Importantly, each link e={u,v} has two oppositely directed links e1=(u,v) and e2=(v,u) associated with it. As a result, specifying the priorities and lengths completely determines the tree that the STP will compute as well as the tree that the STP will compute in the event of a given link or node failure. Each demand d=(s[d],t[d],b[d],p[d]) is a directed demand that represents a requirement of forward traffic of bandwidth b[d ]from source node s[d ]to terminal nodet[d]. A demand d originates at s[d ] and terminates at t[d ]and p[d ]is to indicate whether or not a demand is a protected demand. For a given forwarding F of demands, the node load of node v is denoted by b[F](v) and it represents the total bandwidth of all demands originating, terminating or transiting v in F. The link load of a link e is denoted by b[F](e) and it represents the maximum of the total bandwidth of all demands forwarded over e1 (i.e., along e in the direction defined by e1) and the total bandwidth of all demands forwarded over e2 in F. Initially, we may think of the present method as taking as input some network and demands and returning priorities, lengths and capacities such that the capacities returned are sufficient to handle traffic on links and nodes as determined by the priorities and the lengths. Operationally however, we may prefer a more general view as in the case where the network may already have some capacities, priorities and lengths specified for a set of demands and now new demands are to be incorporated into the design such that the initial capacities, priorities and lengths may be adjusted. Similarly, an “empty” or “Greenfield” network design which has no pre-defined capacities, priorities or lengths as well as a network design that has some capacity at some nodes and/or links but may not necessarily be sufficient capacity may advantageously serve as a valid network inputs to the method of the present invention. As noted, a network is characterized by a graph wherein the nodes and links exhibit various attributes. In particular, a network N comprises a unidirected graph G=(V,E) and a number of other parameters. More particularly, for each node v ε V, the cost[v]( ) is defined as a function which—for example—cost[v](b) is the cost of provisioning v to have capacity sufficient to handle a node load of b Similarly, for each link e ε V, cost[e]( ) is a function where cost[e](b) is the cost of provisioning e to have a capacity sufficient to handle a link load of b. Finally, for each link e ε V, there is a link delay given by δ[e]. With these definitions in place, if we let V={v[1],v[2], . . . ,v[n]} and E={e[1],e[2], . . ,e[m]}, and wherein t is the number of MSTIs in the desired design, a network design will comprise: a network N; capacities C(v) for each v ε V; capacities C(e) for each e ε E; and for each i, 1≦i≦t; (that is for each MSTI)—a set of demands D , a priority function p (x) defined for x ε E ∪ V, and a length function L (e) defined for e ε E. Notationally we write p [1] ( ), p [2] ( ), . . . , p [t] ( )); L [1] ( ), L [2] ( ), . . . , L [t] ( )); v [1] ), . . . , v [n] e [1] ), . . . , e [m] )); and D [1] ,D [2] , . . . ,D [t] ); such that Accordingly an input to our inventive method will comprise a network design =(N,D,P,L,C) and a collection of new demands {circumflex over (D)}=({circumflex over (D)} ,{circumflex over (D)} , . . . ,{circumflex over (D)} ). Note that the “Greenfield Design” described above is simply a special case of a general “growth design” described above where p ( ) and L ( ) are undefined, D =0, and C(x)=0, for 1≦i≦t and x ε E ∪ V. The output of our inventive method will be a new network design ′=(N,D′,P′,L′,C′) where D [1] ′=D [1] ∪{circumflex over (D)} [1] ,D [2] ∪{circumflex over (D)} [2] , . . . ,D [t] ∪{circumflex over (D)} [t] p [1] ′( ), p [2] ′( ), . . . p [t] ′( )), L [1] ′( ), L [2] ′( ), . . . L [t] ′( )), and v [1] ), . . . v [n] e [1] ), . . . e [m] Notice that the network N remains unchanged as it is just a description of a network's connectivity, link delays and cost functions. Preferably—and as already noted—the output of our inventive method namely the new network design, should satisfy a number of constraints. First, there must be sufficient capacity assigned to nodes and links such that the output network design will handle traffic in normal situations (all nodes/links operating) and in those situations in which a single node or link has failed. We may now define this constraint. To begin, for x ε E ∪ V, we let G−{x} be a graph obtained by removing a single node x from G. Of course, and as can be quickly appreciated by those skilled in the art, once a node is removed, then so too are all of the links adjacent to that node. Consider a network design =(N,D,P,L,C). Let P [i] ⊂ [i ] be a set of protected demands in D , where 1≦i≦t. For v ε V, and any P , we let P (v) be those protected demands in D [i ] that do not have v as an endpoint. Let F represent the forwarding of demands where all of the demands in D[i ]are forwarded in STP(G,p[i]( ),L[i]( )), where 1≦i≦t. For v ε V, we let F(v) be the forwarding of demands where all demands in P[i](v), i.e., the protected demands not originating or terminating at v, are forwarded in STP(G−{v},p[i]( ),L[i]( )), where 1≦i≦t. Similarly, for e ε E, we let F(e) be the forwarding of demands where all demands in P[i ](the protected demands) are forwarded in STP(G−{e},p[i]( ),L[i]( )), where 1≦i≦t. If, for all x ε E ∪ V, v ε V and e ε E, we have that b[F](v)≦C(v), b[F(e)]≦C(e), b[F(x)](v)≦C(v), and b[F(x)](e)≦C(e), then we may say that is a valid network design. As can now be appreciated, and according to an aspect of the present invention, we require that all output network designs be valid network designs. As can be readily appreciated, there are a number of situations in which it is desirable to impose Quality-Of-Service (QOS) constraints on the output network design. Advantageously, such impositions are accommodated with the network design method according to the present invention. It may be desirable that the network design satisfy a constraint—for example—whereby the demands in D[i]′ are forwarded according to STP(G,p[i]′( ),L[i]′( )), then no demand is forwarded over more than h[i ]links where 1≦i≦t. A network design that satisfies this exemplary constraint is said to be hop count compliant. Similarly, it may be desirable that the network design satisfies the constraint that if the demands in D[i]′ are forwarded according to STP(G,p[i]′( ),L[i]′( )), then the total delay along any path over which a demand is forwarded is not greater than Δ[i], where 1≦i≦t. A network design that satisfies this exemplary constraint is said to be delay compliant. Of course those skilled in the art will readily appreciate that any of a number of additional constraints may be imposed upon a network design ′. For example, a min-cost design (minimum cost design) imposes the constraint that the total cost of provisioning new capacity is minimized. Such minimum cost designs may have additional constraints as well. More particularly, if the set of endpoints of new demands (the demands in {circumflex over (D)}) is contained in the set of endpoints of the original demands (the demands in D) then it may be required that the only difference between ′ are the capacities. That is to say, ′=(P,L,C′) where C(x)≦C′(x), for x ε E ∪ V. Stated alternatively, all forwarding remains the same but additional capacity may be required in order to produce a valid network design. Such designs are called fixed tree designs and sometimes described as “link capacitation without adding new nodes.” Another type of design which imposes particular constraints is that which a bound c≧0 is given and the output network design will minimize the maximum link utilization where the total cost of additional capacity cannot be more than c. Such a design is called a min-max design. Generally, a min-max design is one sometimes described as a “minimize maximum link utilization with a given cost budget for limited capacity change”. In the specific instance where c=0, this is the design scenario also described as “basic load balancing.” Additionally, one could also be given a bound u and required to minimize the cost of adding capacities so as to achieve a network design having a maximum link utilization that is no more than u. Such a network is known in the art as a bounded utilization design. Finally, it is worth noting that any of the min-max, bounded utilization, augmented tree, fixed tree, or min-cost designs described above may be further constrained by hop-count constraints or delay constraints as well. As may now be appreciated, from the network design so produced i.e., ′=(P′,L′,C′), a number of other derivable quantities are realizable. For example, given P′ and L′, each primary tree T ′( ),L ′( )) may be computed by running the Spanning Tree Protocol algorithm with the given priorities and lengths (costs). Also, for each node v failure, each resulting backup tree namely B ′( ),L ′( )) may be determined once again by running STP with the node v removed from G. Similarly, for any link e, when e fails the resulting backup trees B ′( ),L ′( )) may likewise be determined by running STP. We may now understand the operational aspects of our inventive method. In particular, and as shown in a flow chart of FIG. 1, when given a network of nodes and links along with their costs and traffic demands between the nodes 110, our inventive method produces the set of parameters (i.e., Ethernet parameters) and any required provisioning of the nodes and links such that any resulting network exhibits sufficient capacity during periods of normal operation as well as any periods when a single node or link fails 120. As those skilled in the art will readily appreciate, the set of parameters (Ethernet parameters) are a set of node priorities, port priorities and link weights that the Spanning Tree Protocol uses to determine forwarding paths for demands in a network. In a representative operation, the STP will set the node having the best (highest) node priority as the root r of a tree and then uses link weights to determine a shortest path tree T rooted at r. In the event of a tie, port priorities may be used to resolve same. Each demand d is forwarded according to a unique path in T between the endpoints of d. When a node or link fails, STP runs again with the remaining network (non-failed portion) and computes a backup tree on which to forward the demands. Consequently, and as can be appreciated by those skilled in the art, the set of parameters produced by our inventive method fully determines the routing of the demands in all network conditions and hence the maximum load on any link or node over normal and single node/link failure. More specifically, the parameters determine the size (and therefore cost) of the provisioning necessary for each node and link to ensure sufficient capacity in both the normal and single-failure conditions. Turning now to FIG. 2, there is shown a flowchart depicting a central routine of our inventive method. In particular, a candidate primary tree T, for example the tree on which demands will be routed in normal operating situations) is input 210. For each node and link failure in T a backup tree is selected by, for example, a heuristic methodology based upon estimated loads 220. For the set of trees so selected a set of parameters is determined which produce each one of the trees in the set for each of a given set of network conditions 230. As can be appreciated, it may not always be possible to determine the exact set of parameters which produces a given tree however, tree selections may be judiciously made to influence the possible set of parameters. For example, the link weights may be set such that those in the primary tree T are very small, while those in the backup tree are larger and none in any of the selected trees are set so large that STP would never choose them in a shortest path tree. After the parameters have been determined, the load on each link and each node is determined for each network condition and then provisioned to the maximum of each determined load. As can be appreciated, this ensures that if the determined network parameters are used to design the network, then the network so designed will be sufficiently provisioned for both normal conditions and those conditions in which a single node or link has failed. Such methodology may be conveniently thought of as determining the cost of provisioning the network taking T as a candidate primary tree. Advantageously, if a goal of a network design is to produce a favorable cost, then our inventive method will produce a number of candidate primary trees and then perform the above-described procedure to determine their costs and select the one which has such desirable cost. In many cases, that may be the minimum cost. As before, the output is a set of Ethernet (network) parameters and any required provisioning of the links and nodes. In a preferred embodiment, an initial candidate primary tree is selected based on—for example—heuristics for estimating the minimum cost spanning tree. The backup trees are then evaluated as potential primary trees and the link backup trees for these backup trees in turn may be used as potential primary trees—and so on. Of course, any constraints such as hop count may be imposed by rejecting any primary tree candidates that fail a particular constraint. In such cases, a depth first search trees may also be used as potential primary trees. In those cases where link delays are given and a bound on total delay on demand routes is known and imposed, shortest path trees may advantageously be used as candidates where the link weights used are the link delays. Alternatively, for those designs which are constrained such that no link is utilized more than some given upper bound, then demand sizes may be appropriately increased. The above described procedure may then be used to find a minimum cost design. Finally, for a case where link utilization is necessarily minimized, a minimum cost design is first determined as above and then a maximum link utilization is determined for that design. Then—using a binary search-like procedure, for various bounds a network design is determined without any additional provisioning while maintaining the fixed utilization requirement (by for example, expanding the size of demands). At this point, while we have discussed and described the invention using some specific examples, those skilled in the art will recognize that our teachings are not so limited. For example, once the primary and backup trees have been determined as described, further details such as determining all demands forwarded over a given link or through a given node in any of the trees may be determined since a unique path for each demand within a tree may be found. Thus it is possible to determine the resulting node and link loads in each such tree. Of additional interest, given the capacities C′ and link loads as described, the link utilities can be determined. Additionally, given the input capacities C and the output capacities C′ it is possible to determine any additional capacity added at each node and link. As a result, the present method permits the determination of the total cost of any additional capacity. Accordingly, the invention should be only limited by the scope of the claims attached hereto.
{"url":"http://www.google.fr/patents/US7944859","timestamp":"2014-04-20T03:17:24Z","content_type":null,"content_length":"77481","record_id":"<urn:uuid:527da8f7-9104-4180-98ed-f1e366754c13>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
SpikesBook < Teaching < TWiki Rieke, Warland, de Ruyter van Steveninck and Bialek. Spikes: Exploring the Neural Code. ISBN:0262681080 Link to the publishers' page. Summary: Neural computation in terms of information theory and statistical decision theory. Our perception of the world is driven by input from the sensory nerves. This input arrives encoded as sequences of identical spikes. Much of neural computation involves processing these spike trains. What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons. The authors invite the reader to play the role of a hypothetical observer inside the brain who makes decisions based on the incoming spike trains. Rather than asking how a neuron responds to a given stimulus, the authors ask how the brain could make inferences about an unknown stimulus from a given neural response. The flavor of some problems faced by the organism is captured by analyzing the way in which the observer can make a running reconstruction of the sensory stimulus as it evolves in time. These ideas are illustrated by examples from experiments on several biological systems. Intended for neurobiologists with an interest in mathematical analysis of neural data as well as the growing number of physicists and mathematicians interested in information processing by "real" nervous systems, Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory. A quantitative framework is used to pose precise questions about the structure of the neural code. These questions in turn influence both the design and analysis of experiments on sensory neurons. Here's another one: Chris Eliasmith and Charles H. Anderson Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems ISBN:0262050714 Link to publishers' page. For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems. Such insights are pertinent to experimental and computational neuroscientists and to engineers, physicists, and computer scientists interested in how their quantitative tools relate to the brain. The authors present three principles of neural engineering based on the representation of signals by neural ensembles, transformations of these representations through neuronal coupling weights, and the integration of control theory and neural dynamics. Through detailed examples and in-depth discussion, they make the case that these guiding principles constitute a useful theory for generating large-scale models of neurobiological function. A software package written in MatLab for use with their methodology, as well as examples, course notes, exercises, documentation, and other material, are available on the Web. Copyright © 2008-2014 by the contributing authors. All material on this collaboration platform is the property of the contributing authors. Ideas, requests, problems regarding TWiki? Send feedback
{"url":"http://biowiki.org/bin/view/Teaching/SpikesBook","timestamp":"2014-04-16T13:03:42Z","content_type":null,"content_length":"12104","record_id":"<urn:uuid:271d6427-b249-41d7-b881-59129b83c7c5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Items tagged with complex i am investigating above numeric when ln(-.733448502640020+0.*I); it has Pi imaginary part i try complex(1, exp(Pi)^3); it return Complex(...) but not 1 + i*exp(Pi)^3 3 means 3 times come from recursively using pattern ln(Re(ln(x) - 3.141592654*I)) 3.141592654 in imaginary part appear 3 times i use Round(Im(x), 8) during above operation actually i want to extract Pi imaginary part from -.733448502640020+0.*I however, after minus exp(Pi) from it first time, it is near the original number -.733448502640020 is this elimination of imaginary part is just a illusion from log function?
{"url":"http://www.mapleprimes.com/tags/complex","timestamp":"2014-04-16T16:10:36Z","content_type":null,"content_length":"119081","record_id":"<urn:uuid:9f78c814-236f-4fd3-a6b5-88b4577cd026>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Show that all of these numbers are zero December 17th 2009, 08:35 PM Bruno J. Show that all of these numbers are zero Suppose you have $2n$ numbers $(n > 1)$ which have the following property : whenever you remove one of them (any one), you can split the remaining ones into two sets having equal sum. Show that all of these numbers are zero. December 17th 2009, 08:42 PM by "zero" did you mean "equal"? also, each of those two sets with equal sum must have exactly $n-1$ elements. otherwise, the claim would be false. December 17th 2009, 09:33 PM Bruno J. Oh, yeah. I'm sorry! I made a mistake in the statement of the problem! I have fixed my post. (Giggle) December 17th 2009, 11:23 PM a weaker property: whenever you remove one of them (any one), either the sum of the remaining ones is zero or you can split the remaining ones into two sets having equal sum. this problem is equivalen to this claim that the $2n \times 2n$ matrix $A=[a_{ij}]$ with $a_{ii}=0, \ a_{ij}=\pm 1, \ \forall i eq j,$ is invertible. to prove this claim we'll show that $\det A eq 0$: $\det A = \sum_{\sigma \in S_{2n}} \text{sign}(\sigma) \prod_{i=1}^{2n} a_{i \sigma(i)}=\sum_{\sigma \in D} \text{sign}(\sigma) \prod_{i=1}^{2n} a_{i \sigma(i)},$ where $D = \{\sigma \in S_{2n}: \ \ \sigma(i) eq i, \ \forall i \},$ because we're given that $a_{ii}=0.$ so $D$ is the set of derangements of $\{1,2, \cdots , 2n \}.$ but we know that the number of derangements of a set with even number of elements is odd. so $\det A$ is a sum of odd number of terms where each term is $\pm 1.$ clearly this sum can never be zero and hence $A$ is invertible. Q.E.D.
{"url":"http://mathhelpforum.com/math-challenge-problems/121040-show-all-these-numbers-zero-print.html","timestamp":"2014-04-20T06:15:10Z","content_type":null,"content_length":"10197","record_id":"<urn:uuid:8d1db2e9-6ec1-4f4c-9343-3d437fa63181>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Mama Writes...Are you telling me you have no access to a way to ...What do you think, Nelson? (And if you&#39;re not ...Nice article highlighting the reasons for math pho...Thanks for your information and thoughts... I am e...Hi Sue, you should check out Michael Spivak&#39;s...http://projecteuclid.org/download/pdf_1/euclid.bam...I&#39;m honored to find out I am one of your favor...Thanks for the idea! I will find out whether our c...You can create solids of revolution or other known...Now I do. Thank you!Do you know about explainxkcd? http://www.explainx...I&#39;ve just finished optimization, but I&#39;ll ...All the girls I know use and like computers. Not n...Things have changed! http://www.slate.com/blogs/fu...Thanks for writing. I think your comments will hel...Hi. I bought the book from Amazon.co.uk and I&#39;...I did try it in a few classes last semester. And t...This is a great idea! Thanks for sharing. I will...Thanks. I saw that all the links opened wrong, so ...hee hee. Yeah, I didn&#39;t slow down enough. Once...We are on Twitter: @Five_TrianglesSue, the Numberplay link actually goes back to Fiv...Whenever a problem says angle A = angle B, that se... tag:blogger.com,1999:blog-5303307482158922565.comments2014-04-14T06:26:06.809-07:00Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comBlogger1928125tag:blogger.com,1999:blog-5303307482158922565.post-16670802829882856682014-03-30T20:44:38.721-07:002014-03-30T20:44:38.721-07:00Are you telling me you have no access to a way to heat water up? You can at least test the time, can&#39;t you?Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-60372524840354731102014-03-30T20:35:41.208-07:002014-03-30T20:35:41.208-07:00This comment has been removed by the author.Nelsonhttp://www.blogger.com/profile/ 16299664747490212402noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-74065061995454969132014-03-30T20:30:31.455-07:002014-03-30T20:30:31.455-07:00What do you think, Nelson? (And if you&#39;re not old enough to drink coffee, ask someone who is. Or google it. Or google the time McDonald&#39;s got sued because someone got burned by their coffee. Lots of options. No need to ask here.) You could buy a cup of coffee to see how long it takes to get cold. Or you could heat up some water, and check that.<br /><br />Sorry if I sound snippy. I am concerned because I&#39;ve also gotten questions in email that seem like people don&#39;t want to think. I&#39;m not sure yet what I want to do about it...<br />Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-8907617729020678062014-03-30T20:24:00.530-07:002014-03-30T20:24:00.530-07:00This comment has been removed by the author.Nelsonhttp://www.blogger.com/profile/ 16299664747490212402noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-70143366094154463302014-03-29T09:57:50.743-07:002014-03-29T09:57:50.743-07:00Nice article highlighting the reasons for math phobia and the solutions to end it. I totally agree with you on this. People (both children and grown-ups) need to understand that math is not boring, it is recreational. This will make learning math a happy and positive experience. (http://myblogxpedition.blogspot.com/2014/03/math-can-be-recreational.html)sbloggerhttp://www.blogger.com/profile/ 00943520777853669818noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-72253635339768856902014-03-18T11:26:46.473-07:002014-03-18T11:26:46.473-07:00Thanks for your information and thoughts... I am educating my children, not according to any set curriculum but to what they can handle. My daughter and son are very right brained individuals, but, they are also developing their left brain, because we must use them both in conjunction with one another. Using just one or the other is why our society is far from balanced... perhaps moreso than ever on this planet.<br /><br /> Many people don&#39;t know, and are not told, intentionally, that all the greatest mathematicians also studied the &#39;occult&#39; or mysticism, or whatever one wants to call it. Math is meaningless without creativity (or creative wonder/thought) and will never manifest into something meaningful.<br /><br />I&#39;d like to see in schooling, various ages, POVs, and the like; whether it be parents coming in, or alternate theorists being invited at an early age and not only at collegiate level academics. Also, challenges to the prevailing thoughts (theories that are being taught as truths) should be encouraged, not shut down as they often are. <br /><br />My son is reading Bernay&#39;s &quot;Propaganda&quot; (which everyone should read if they want an insider&#39;s POV regarding the real workings of life among the masses) over Shakespeare because one is truly valuable to life, and the other, while interesting and is a form of writing that one might enjoy, is absolutely meaningless in terms of understanding of the world around us and how it is absolutely run. He has already read &quot;Sacred Geometry&quot; and now sees the world differently.<br /><br />I didn&#39;t really like Shakespeare&#39;s works and I don&#39;t believe it should be forced upon the masses as if is part of life... as an elective, sure, great, but it is not essential to the growth of the mind.<br /><br />Sorry for the long note.. I could make it much longer. :)<br /><br />Thanks, again.Dave J.http://www.blogger.com/profile/ 01403208198066500332noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-48354817812200237262014-03-17T17:03:11.119-07:002014-03-17T17:03:11.119-07:00Hi Sue,<br /><br />you should check out Michael Spivak&#39;s Calculus. The entire 16th chapter is devoted to proving the irrationality of Pi. He&#39;s very thorough and doesn&#39;t skip &quot;obvious&quot; steps like Niven&#39;s classic half-page proof linked above (although Niven&#39;s proof is not aimed at the same audience, e.g. professional mathematicians). Spivak&#39;s (treatment of the) proof was the first proof that I was able to grasp.<br /><br />Also, you should also see this (other classic) integral @ http://www.thedudeminds.net/?p=4768 (it&#39;s in french but you&#39;ll get the math parts for sure) which links nicely 22/7 and Pi.<br /><br />Cheers !The Dudehttp://www.blogger.com/profile/ download/pdf_1/euclid.bams/1183510788Tim Cieplowskihttp://www.blogger.com/profile/ 09721346163506388968noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-33836247640899132122014-03-14T14:29:49.669-07:002014-03-14T14:29:49.669-07:00I&#39;m honored to find out I am one of your favorite bloggers! =) You can find the proof that pi is irrational here: http://mathlesstraveled.com/post-series/ Scroll down to &quot;Irrationality of Pi&quot;. The proof that e is irrational is actually much simpler than the proof for pi --- I hope to write about it at some point, once I get back to blogging regularly again (at the moment I&#39;m trying to finish my PhD dissertation). It seems there are proofs of the transcendence of e and pi which are about the same level of difficulty as the proof of pi&#39;s irrationality, but I haven&#39;t looked carefully at them myself.Brenthttp://www.blogger.com/profile/ 14440861005012132386noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-18776619080453743742014-03-05T16:37:33.722-08:002014-03-05T16:37:33.722-08:00Thanks for the idea! I will find out whether our campus has a 3D printer, and see what it takes to do this.Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-67103817623909270932014-03-05T16:17:48.758-08:002014-03-05T16:17:48.758-08:00You can create solids of revolution or other known cross sections on Google Sketchup, and then use a 3d printer to create the solids. We did this in our Calculus classes this year and it was pretty sweet. The kids can be very creative and some used really complex cross-sections.G Millshttp://www.blogger.com/profile/ 03001091338170265775noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-20206986109627125542014-03-04T06:25:08.674-08:002014-03-04T06:25:08.674-08:00Now I do. Thank you!Sue 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-54019876667344312072014-03-04T02:34:12.229-08:002014-03-04T02:34:12.229-08:00Do you know about explainxkcd? 1337Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-7235712509466591812014-03-02T05:49:56.919-08:002014-03-02T05:49:56.919-08:00I&#39;ve just finished optimization, but I&#39;ll keep this one for next year. Thank you! Jasminehttp://www.blogger.com/profile/ 14163491309269691356noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-60580353512506332272014-02-24T16:14:42.137-08:002014-02-24T16:14:42.137-08:00All the girls I know use and like computers. Not necessarily for programming, though. Some computer-based groups are heavily dominated by women, for example, fanfic communities, or Ravelry knit and crochet community. Adult females is the fastest-growing gamer demographic.MariaDhttp://www.blogger.com/profile/ 00769513929584082597noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-35688071350368131042014-02-23T18:55:27.933-08:002014-02-23T18:55:27.933-08:00Things have changed!<br />http:/ Thanks for writing. I think your comments will help others. You may also like <a href="http://www.moebiusnoodles.com/TheBook" rel="nofollow">Moebius Noodles</a>, a very different math joiurney for parents and young kids.Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-84599671142935272002014-02-20T09:26:25.992-08:002014-02-20T09:26:25.992-08:00Hi. I bought the book from Amazon.co.uk and I&#39;m enjoying reading it. It&#39;s written in a modest way and it seems very real because it describes real kids reactions when confronted with problem or questions. It also gives the reader a sense of evolution of children &quot;mathematical thinking&quot; and the way this educator copes and overcomes the difficulties found in teaching some concepts. I was looking for a book that help me to prepare some learning/playful activities for my 3 years old (in Maths field) and this one was perfect, because it gives me some real information on how he may react and how I can continue my journey if he doesn&#39;t get automatically to the wanted point. I&#39;m also re-learning Maths in a more enthusiastic way than it was introduced to me first.Boshihttp://www.blogger.com/ profile/10870743300775695062noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-21538719970734473632014-02-19T12:13:41.472-08:002014-02-19T12:13:41.472-08:00I did try it in a few classes last semester. And then we went over it as a class. I prefer not to post my answers. Anything you&#39;re not sure of, I&#39;m sure you can figure out with google&#39;s help.Sue VanHattumhttp: is a great idea! Thanks for sharing. I will likely try extending the activity by having students come to an agreement on the correct order and then estimate the order of magnitude of each object. Have you tried this activity already? Could you post your idea of the correct order from smallest to greatest as well. ThanksAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-36643172184905295712014-02-18T16:14:08.927-08:002014-02-18T16:14:08.927-08:00Thanks. I saw that all the links opened wrong, so I fixed that too. (I like to have them open in a new window.)Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-10967109840958114872014-02-18T16:13:36.152-08:002014-02-18T16:13:36.152-08:00hee hee. Yeah, I didn&#39;t slow down enough. Once I drew it out carefully, it became super simple. Trust, that&#39;s what it took. Thanks.Sue VanHattumhttp://www.blogger.com/profile/ 10237941346154683902noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-71747625191186202712014-02-18T04:34:34.913-08:002014-02-18T04:34:34.913-08:00We are on Twitter: @Five_TrianglesFive Triangleshttp://www.blogger.com/profile/ 12846752710456413605noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-15103629081597520812014-02-18T04:29:43.835-08:002014-02-18T04:29:43.835-08:00Sue, the Numberplay link actually goes back to Five Triangles; need correction... as far as contacting Five Triangles perhaps tweet them @Five_Triangles and see if they respond"Shecky Riemann"http://www.blogger.com/profile/ 07065658607024191185noreply@blogger.comtag:blogger.com,1999:blog-5303307482158922565.post-8323813074913531792014-02-17T23:47:15.983-08:002014-02-17T23:47:15.983-08:00Whenever a problem says angle A = angle B, that sets off my &quot;similar triangles&quot; flag.<br /><br />One might even call it &quot;AAAdar&quot;. ;)Haohttp://www.blogger.com/profile/02348974241652264510noreply@blogger.com
{"url":"http://mathmamawrites.blogspot.com/feeds/comments/default","timestamp":"2014-04-21T14:42:07Z","content_type":null,"content_length":"49737","record_id":"<urn:uuid:c8140593-f5db-4cfc-88db-5f5578931653>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Monsieur, x^p = x (mod p), donc Dieu existe. Repondez ! Daniel Mehkeri dmehkeri at gmail.com Mon Mar 21 23:25:08 EDT 2011 A FOM editor points out that systems without exponentiation, and lengths of proofs in these systems, are well-known, and asks that I relate my remarks to these. Simultaneously, less well-known systems have been brought to my attention, along with the suggestion that they might provide an ultrafinitary proof of Fermat's little theorem. (Some relevant public discussion is on dialinf.wordpress.com) There are, I'm sure, lots of ways to go about ultrafinitism. I will write briefly about some examples, after which I will state some general considerations. The line between ultrafinitism and other forms of constructivism is succinctly characterised in the passage of Troelstra and van Dalen I originally quoted: can we, 'in principle', view a large number as a sequence of units? EXAMPLE 1. Q + IDelta_0. In the language of Robinson's Q, numbers on the order of 2^2^10 can be given as terms in a few thousand symbols. IDelta_0 is induction for formulas with any number of bounded quantifiers. So if P is a bounded formula, and I can prove P(0), and also that P(n) implies P(n+1), then I can conclude P(n). This particularises to give a deduction of P(2^2^10) of feasible size. By allowing induction in this way, we are treating 2^2^10 as a sequence of EXAMPLE 2. Buss' Bounded Arithmetic (S_2^1). This is the system I referred to in my original message on this thread, and I will give some more detail here. It is also based on Q, but here we have "binary" induction instead (the inductive step goes from n to 2n+1 and 2n+2, rather than to n+1 in the "unary" version; see e.g. http://www.math.ucsd.edu/~sbuss/ResearchWeb/BAthesis/). BA allows us to express large numbers without obviously treating it as a sequence of units, instead treating it as a binary representation. Now consider: FOR ANY x,p [[ such that x^p =/= x (mod p) ]], THERE EXISTS d [[ such that d is a non-trivial factor of p ]]. I put it in this form because the parts in [[ ]] are computable and so definable in Q at most Sigma_1, meaning this is Pi_2. BA has a metamathematical property that its Pi_2 sentences are witnessed by computable functions having running times which are polynomial in the lengths of their arguments. So from a proof of Fermat's little theorem we would extract a PTIME method to factor p whenever x^p =/= x (mod p). No such algorithm is known. EXAMPLE 3. Sazonov's arithmetic over a finite row Sazonov describes some related systems not based on Q. I attempt to summarise some of them here, see e.g. "A logical approach to the problem 'P=NP?'" http://www.csc.liv.ac.uk/~sazonov/papers.html Here, the successor operation eventually stops at the last number, designated by []. We have the finite row of numbers from 0 to [] to which unary induction applies, and we have some alternate ways of representing numbers up to 2^[] as binary strings. The provably total functions of this system have running time polynomial in []. Fermat's little theorem may well be provable up to [], but if it were provable up to 2^[] then as above we would extract a PTIME factoring algorithm. EXAMPLE 4. Boucher's second-order arithmetic This is in the same spirit as Sazonov's arithmetic with box, but using second-order variables to represent the binary strings. For details see "Arithmetic without the Successor Axiom", This does in fact prove Fermat's little theorem for the first-order variables. But these are explicitly being treated as sequences of units. It may also be provable for the second-order variables. But without suitable restrictions on induction and comprehension, some unary induction seems to become derivable for the second-order variables as well, so they are also being treated as sequences of Now to the general considerations here. I repeat the sentence from the discussion of BA: FOR ANY x,p [[ such that x^p =/= x (mod p) ]], THERE EXISTS d [[ such that d is a non-trivial factor of p ]]. the parts in [[ ]] are efficiently testable by modern computers, for binary representations on the order of 2^2^10, and in fact somewhat larger. In particular, for the RSA-2048 challenge number that I quoted originally, 2^p =/= 2 (mod p). To the constructivist or intuitionist there is no problem: there are indeed algorithms to factor p. They are not practical, but given enough time and space, they would work. "Given enough time and space" is of course not an excuse available to the ultrafinitist. An infeasible computation does not count as a proof of anything: "really" it doesn't terminate at all, since the running time is a large number, and cannot be viewed as a sequence of units. I do realise that PTIME is only asymptotic feasibility. I realise we can't PROVE there are no good algorithms for factoring. I realise that there are many other variations on ultrafinitary systems besides these that I haven't explicitly covered. But, modern computers can easily do modular exponentiation with quite large numbers, while factoring numbers of this size is so far not feasible in ANY sense, not even probabilistically. So we do seem to have a quite general obstacle here. At least, I hope the FOM editors and readers are now satisfied that I do not need to pre-emptively address further proposals. Maybe instead this will provoke the ultrafinitists, or those wishing to argue on their behalf, to prove Fermat's little theorem, up to at least the relevantly sized numbers, and argue that their assumptions do not treat numbers of this size as sequences of units, despite the above considerations. Or maybe it will prove them to argue against this characterisation of ultrafinitism. Or maybe to provide an ultrafinitary alternative to Fermat's little theorem that suffices for modern cryptography. Daniel Mehkeri More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2011-March/015337.html","timestamp":"2014-04-18T06:19:01Z","content_type":null,"content_length":"8876","record_id":"<urn:uuid:1b8ebc0d-cd3f-44fc-b4ef-f922a212ba00>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
On 2D Inverse Problems/On random processes In this section the endpoints of edges of a considered graph can have an order, making it a directed graph. A graph with positive function/vector defined on its edges is called weighted graph. Random walk of a particle on a graph G with discrete time is the following process: • At moment t = 0 the particle occupies a vertex v of G. • At moment t = n+1 the particle moves to a neighbor of its position at the moment t = n w/probability proportional to the weight of the edge connecting/pointing to the neighbor. Choosing a subset of vertices of a graph as boundary, the harmonic measure of a subset S of the boundary is the function/vector on vertices of G that equals the probability that a particle, starting its random walk at a vertex p, occupies a boundary vertex in the set S before a boundary vertex not in S. It follows from the definition that the harmonic measure at p of a single boundary node b equals to the sum over the finite paths through interior from p to b: $u_b(p) = \sum_{p\xrightarrow[]{path} b}(\prod_{e\in path}w(e)/\prod_{q\in (path-\{b\})}\sum_{q\rightarrow r}w(qr)),$ $u_b(p) = \sum_{p\xrightarrow[]{path} b}\prod_{e=(qr)\in path}\frac{w(e)}{\sum_{q\rightarrow r}w(qr)}.$ Note, that an edge or a vertex may appear multiple times in a path. The Brownian motion is a continuous/limiting analog of the random walk. It follows from the averaging property of the operator that the hitting probabilities of a particle under Brownian motion are described by harmonic functions, defined in the previous section. The harmonic functions are conformaly invariant. Last modified on 9 September 2013, at 05:18
{"url":"http://en.m.wikibooks.org/wiki/On_2D_Inverse_Problems/On_random_processes","timestamp":"2014-04-18T08:07:01Z","content_type":null,"content_length":"16081","record_id":"<urn:uuid:a0f3e1ca-2b90-4b20-9b50-923370ce4d52>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Degree of Polynomial Degree of Polynomial The degree is the value of the greatest of any expression (except the constant ) in the . To find the degree all that you have to do is find the largest exponent in the polynomial . Note: Ignore have nothing to do with the degree of a polynomial Practice finding the degree of a polynomial Problem 1) What is the degree of the following + x + 3? Problem 2) Determine the degree of the + x + 33? Just use the 'formula' for finding the degree of a polynomial . ie--look for the value of the largest exponent the answer is since the first term is squared . Remember have nothing at all do to with the degree. Problem 5) What is the degree of the following polynomial 11x + 10x + 11? Problem 6) What is the degree of the polynomial + 2x the answer is 8. Be careful sometimes polynomials are not ordered from greatest exponent to least. Even though 7x^3 is the first expression, its exponent does not have the greatest value. Problem 7) What is the degree of the following polynomial 5x + 2x ^9 ^ + 3x +2x ? Problem 8) What is the degree of the polynomial x ^2 ^ + x + 2 the answer is 2. Do NOT count any constants("constant" is just a fancy math word for 'number'). IE you do not count the '2^3' which is just another way of writing 8.
{"url":"http://www.mathwarehouse.com/algebra/polynomial/degree-of-polynomial.php","timestamp":"2014-04-19T02:43:58Z","content_type":null,"content_length":"24645","record_id":"<urn:uuid:35b2c026-a34a-42b5-a310-baae60c314b1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Stockertown Township, PA Algebra 2 Tutor Find a Stockertown Township, PA Algebra 2 Tutor ...I am currently tutoring several students in GED preparation. Experienced college and high school chemistry tutor with excellent understanding of the physical sciences. Have taught general chemistry and organic chemistry at both community colleges and 4-year universities. 36 Subjects: including algebra 2, chemistry, reading, calculus ...I specialize in college level English papers, or any paper you need written. You give me the facts, I will help you outline, build and create an "A" paper! After all, if I have yet to receive a grade lower than an "A", why shouldn't you? 18 Subjects: including algebra 2, Spanish, English, writing ...After a successful career as a Ph.D. Chemist in the Pharmaceutical Industry, I mentored several chemists, and my experience is a great asset in my tutoring sessions. I have experience with AP, Honors and Academic Chemistry. 7 Subjects: including algebra 2, chemistry, French, geometry ...I have a PhD in chemistry, and was required to fulfill advanced Algebra requirements. I have taught freshman classes Algebra and have tutored since then. I have Ph.D in Medicinal Chemistry (Biology)/Organic Chemistry. 16 Subjects: including algebra 2, chemistry, geometry, biology ...PowerPoint is one of Microsoft's best programs. You will be amazed at how easy it will be to familiarize yourself with the various aspects of this program. An understanding of algebra is a foundational skill to virtually all topics in higher-level mathematics, and it is useful in science, statistics, accounting, and numerous other professional and academic areas. 1. 27 Subjects: including algebra 2, calculus, geometry, trigonometry
{"url":"http://www.purplemath.com/stockertown_township_pa_algebra_2_tutors.php","timestamp":"2014-04-16T07:56:19Z","content_type":null,"content_length":"24583","record_id":"<urn:uuid:20d54a0b-8d20-4814-b4e0-79ef53ed5cb7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic Modelling for Systems Biology, Second Edition • Provides an accessible introduction to stochastic modeling for systems biology • Focuses on computer simulation, with R and SBML code • Includes exercises and many biologically motivated examples • Presents enhanced material on statistical inference Since the first edition of Stochastic Modelling for Systems Biology, there have been many interesting developments in the use of "likelihood-free" methods of Bayesian inference for complex stochastic models. Re-written to reflect this modern perspective, this second edition covers everything necessary for a good appreciation of stochastic kinetic modelling of biological networks in the systems biology context. Keeping with the spirit of the first edition, all of the new theory is presented in a very informal and intuitive manner, keeping the text as accessible as possible to the widest possible readership. New in the Second Edition • All examples have been updated to Systems Biology Markup Language Level 3 • All code relating to simulation, analysis, and inference for stochastic kinetic models has been re-written and re-structured in a more modular way • An ancillary website provides links, resources, errata, and up-to-date information on installation and use of the associated R package • More background material on the theory of Markov processes and stochastic differential equations, providing more substance for mathematically inclined readers • Discussion of some of the more advanced concepts relating to stochastic kinetic models, such as random time change representations, Kolmogorov equations, Fokker-Planck equations and the linear noise approximation • Simple modelling of "extrinsic" and "intrinsic" noise An effective introduction to the area of stochastic modelling in computational systems biology, this new edition adds additional mathematical detail and computational methods that will provide a stronger foundation for the development of more advanced courses in stochastic biological modelling. Table of Contents Modelling and Networks Introduction to Biological Modelling What is modelling? Aims of modelling Why is stochastic modelling necessary? Chemical reactions Modelling genetic and biochemical networks Modelling higher-level systems Representation of Biochemical Networks Coupled chemical reactions Graphical representations Petri nets Stochastic process algebras Systems Biology Markup Language (SBML) Stochastic Processes and Simulation Probability Models Discrete probability models The discrete uniform distribution The binomial distribution The geometric distribution The Poisson distribution Continuous probability models The uniform distribution The exponential distribution The normal/Gaussian distribution The gamma distribution Quantifying "noise" Stochastic Simulation Monte Carlo integration Uniform random number generation Transformation methods Lookup methods Rejection samplers Importance resampling The Poisson process Using the statistical programming language, R Analysis of simulation output Markov Processes Finite discrete time Markov chains Markov chains with continuous state-space Markov chains in continuous time Diffusion processes Stochastic Chemical Kinetics Chemical and Biochemical Kinetics Classical continuous deterministic chemical kinetics Molecular approach to kinetics Mass-action stochastic kinetics The Gillespie algorithm Stochastic Petri nets (SPNs) Structuring stochastic simulation codes Rate constant conversion Kolmogorov’s equations and other analytic representations Software for simulating stochastic kinetic networks Case Studies Dimerisation kinetics Michaelis–Menten enzyme kinetics An auto-regulatory genetic network The lac operon Beyond the Gillespie Algorithm Exact simulation methods Approximate simulation strategies Hybrid simulation strategies Bayesian Inference Bayesian Inference and MCMC Likelihood and Bayesian inference The Gibbs sampler The Metropolis–Hastings algorithm Hybrid MCMC schemes Metropolis–Hastings algorithms for Bayesian inference Bayesian inference for latent variable models Alternatives to MCMC Inference for Stochastic Kinetic Models Inference given complete data Discrete-time observations of the system state Diffusion approximations for inference Likelihood-free methods Network inference and model comparison SBML Models Auto-regulatory network Lotka–Volterra reaction system Dimerisation-kinetics model All chapters include exercises and further reading. Author Bio(s) Darren Wilkinson is Professor of Stochastic Modelling at Newcastle University in the UK. He was educated at the nearby University of Durham, where he took his first degree in Mathematics, followed by a Ph.D. in Bayesian statistics which he completed in 1995. He moved to a lectureship in statistics at the Newcastle University in 1996, where he has remained since, being promoted to his current post in 2007. Professor Wilkinson is interested in computational statistics and Bayesian inference and in the application of modern statistical technology to problems in statistical bioinformatics and systems biology. He is involved in a variety of systems biology projects at Newcastle, including the Centre for Integrated Systems Biology of Ageing and Nutrition (CISBAN). He recently held a BBSRC Research Development Fellowship on Integrative modelling of stochasticity, noise, heterogeneity and measurement error in the study of model biological systems. Editorial Reviews "Each chapter is completed by some training exercises. … In order to satisfy more curious or more advanced readers, the author also proposes further readings in a dedicated section for each chapter, which is in my opinion a really good idea: highlighting a selection of interesting readings is much less disheartening than referring to a bibliography at the end of the book. Note that the book is supplemented by a quite complete website. … the book has been enhanced by an introduction to approximate Bayesian computation, the codes have been updated to SBML Level 3, and the chapters on Markov chains and stochastic differential equations have been reinforced. … a really comprehensible and easy-to-read course." —Sophie Donnet, Université Paris-Dauphine, CHANCE, 25.4 Praise for the First Edition: "…designed and well suited as an in-depth introduction into stochastic chemical simulation, both for self-study or as a course text…" —Biomedical Engineering Online, December 2006
{"url":"http://www.crcpress.com/product/isbn/9781439837726","timestamp":"2014-04-19T04:43:19Z","content_type":null,"content_length":"111623","record_id":"<urn:uuid:e3581319-cfac-4e3c-9624-608926b13048>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Commerce, GA SAT Math Tutor Find a Commerce, GA SAT Math Tutor ...I am a mentor at my high school and involved in many honor societies as well as volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual. 14 Subjects: including SAT math, chemistry, geometry, biology I love teaching mathematics! I have degrees in Mathematics and Mathematics education and several-year math teaching experiences at colleges. I can help high school and college students who need help with algebra, geometry, pre-calculus and calculus. 8 Subjects: including SAT math, calculus, geometry, algebra 1 ...I have several years experience doing this as well as experience teaching self contained SPED with students with mild to moderate intellectual disabilities. I also spent three years working for Babies Can't Wait working with preschool age children with severe disabilities. I am certified in Special Education (Adapted Curriculum.) I am certified to teach ADD and ADHD students. 34 Subjects: including SAT math, reading, English, writing ...I graduated from the University of Georgia with a B.S. in Mathematics and Physics. While at UGA I took MATH3000 which is titled "Linear Algebra" and made an A in this course. I also received the Hollingsworth Award which was given to me from the Department of Mathematics for my outstanding academic achievement in Linear Algebra. 15 Subjects: including SAT math, calculus, physics, geometry Hello! My name is Mary, and I've been tutoring math since I was in the 9th grade. I've tutored a variety of math courses from kindergarten to college level. 20 Subjects: including SAT math, reading, physics, calculus Related Commerce, GA Tutors Commerce, GA Accounting Tutors Commerce, GA ACT Tutors Commerce, GA Algebra Tutors Commerce, GA Algebra 2 Tutors Commerce, GA Calculus Tutors Commerce, GA Geometry Tutors Commerce, GA Math Tutors Commerce, GA Prealgebra Tutors Commerce, GA Precalculus Tutors Commerce, GA SAT Tutors Commerce, GA SAT Math Tutors Commerce, GA Science Tutors Commerce, GA Statistics Tutors Commerce, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/commerce_ga_sat_math_tutors.php","timestamp":"2014-04-18T18:49:15Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:f80f8ab3-d359-4e93-b7e3-b27e57757e88>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Education/Pendidikan Matematik PPS 2873 - Current Issues in Mathematics Education Synopsis : This course aims at exploring critically issues and trends related to four main aspects namely curriculum, teaching and learning, assessment and research in mathematics education from both local and international perspectives. It focuses on the concepts and philosophies underlying the implementation of curriculum, teaching and learning and assessment in mathematics education. Students are expected to do a lot of independent and critical reading from local and international mathematics education research journals. Assessment methods : 1. Review of selected research articles and presentation (individual) - 30% 2. Project - analysis of secondary school mathematics textbook (group of 2/3) - 30 % 3. Final exam (comprehensive) - 40 % Assumption (epistemologically) - mathematics as result of human problem posing and solving - mathematics as a mental construction (creation, invention or In term of learning, social constructivism identifies all learners of mathematics as creators of mathematics involving problem posing and solving. Therefore as a consequences of problem posing and solving pedagogy : 1. School maths for all should be centrally concerned with mathematical problem posing and solving (reduce content- oriented mathematics curriculum) 2. Inquiry, investigation, problem posing or formulation should occupy a central place in the school maths curriculum and precedes problem solving 3. The pedagogy (teaching, learning and assessment) should be process and inquiry(or investigation) focused (vs product) 4. learner-centered view of investigation as a learner directed activities (new questions posed, new situations are generated and explored- promotes active learning) 5. Increase learner autonomy and self- regulation ( develop reflective and meta-cognitive skills) Mathematical problem posing (formulation, investigation etc) is divergent (creative thinking and higher order thinking) as the process of mathematical problem solving (critical thinking and higher order thinking - eg by using Polya method)) is convergent. Paradigm shift from traditional teaching model (content-based, teacher directed and student as knowledge recipient) to problem- based learning model( problem motivated, teacher as facilitator and student as problem solver). Main characteristic of PBL approach is that, the problem (real-world problem : unstructured and authentic (vs simulated) is the starting point of learning. (discuss its implications to the current maths curriculum). eg (solve the following problems) 1. The costs for two different kinds of heating systems for a three- bedroom home are given below solar system - cost to install rm 29700 and operating cost/year is rm 150 electric system - cost to install rm 5000 and operating cost/year is rm 1100 After how many years will total costs for solar heating and electric heating be the same? What will be the total costs for both systems at that time? 2. Two ordinary six-sided dice are rolled, what is the probability of getting a sum of 8? 3. Working together, Ahmad and Ali can complete a job in 4 hours. Working alone, Ahmad requires 6 hrs more than Ali to do the job. How many hrs does it take Ali to do the job if he works alone? Benefits of PBL: 1. creating meaningful learning(content/topic/concept) through inquiry (emphasis on critical, logical creative thinking, deep reasoning and metacognition)) 2. encourage the development broad -based mathematical problem solving strategies (heuristics) rather than content learning in a limited sense. 3. development of self-directed/regulated/independent learners - students assume major responsibility for the acquisition of knowledge. ...in USA (cont..) NCTM (National Council of Teachers of Mathematics) - focus on concept development and problem solving. By learning and acquiring a variety of broad-based ( general ) mathematical problem-solving strategies (heuristics) students are equip to be a better problem solvers across the topics in mathematics or transferring those skills to a variety of problems. Some of the general problem solving strategies are : 1. Characterize the problem : What is given? What is needed?What is missing? etc 2. Have you seen this before? : or different form ? 3. Look for pattern : eg Gauss recognized a pattern 1+2...+100 = ? 1+100=2+99=...101 (50 pairs) 50 @ 101 = 5050 4. Simplification/reduction : can the problem be broken up into smaller or manageable 5. Work backwards : when trying to prove a theorem, it may begin from the conclusion and back track logically 6. Modeling/simulation : a mathematical model may be developed that simplify some complicated process/phenomena in the real word (representing/translating into a mathematical forms eg table, diagram, chart, graph, equation, relationship, function, inequalities, matrices, etc) 7. Logical reasoning/arguments - inductive and deductive reasoning, 8. guess and check/improve - develop a sense of estimation 9. make and test conjectures 10. formulate/pose problems from situations within and outside mathematics In USA : 1. Philosophy of mathematics education for the 21 st century : The goal of teaching mathematics is to help all students develop mathematical power ie to produce effective problem solvers and powerful mathematical thinkers. 2. Mathematics must be seen as an integrated whole ( not as a separate and unrelated topics), as a part of human experience, emerging from everyday experience, interaction with science and technology and other fields. 3. Spend more time on developing broad- based mathematical problem solving skills ( general problem solving techniques ie heuristics) and less time on perfecting routine 4. Teaching mathematics as problem solving - problem solving as a means as well as a goal of instruction - apply problem solving skills to solve problems in new contexts with emphasis on multi-steps and non-routine problems. - recognize and formulate (posing) problems from real word situations/phenomena - mathematics is problem-centered and application- based - subject to be investigated, discovered, explored and created 5. Problem solving is seen as the most important means to develop powerful mathematical thinkers. Behaviorist approach : 1. drill -practice (practice makes perfect) 2. mastery of skills (lower order thinking skills- knowledge, comprehension and application) 3. performance- based (how to do) - suitable for routine/familiar problems 4. focus on algorithm (procedures/steps of calculation) 5. mistakes and errors should be avoided/minimized 6. teacher- centered (focus on teaching) Cognitive approach : 1. construction of meaning (searching for meaning) 2. conceptual understanding (higher order thinking skills - analysis, synthesis and evaluation) 3. thinking- based (emphasis on why) - suitable for non- routine/ unfamiliar problems 4. focus on heuristic (general methods of solving problems) - Polya's Model 5. mistakes and errors is good indicators of misconceptions and difficulties 6. student- centered (focus on learning) How to teach problem solving in mathematics classroom? As an example in KBSM : Teaching problem solving skills as an independent topic (eg in (Additional Maths KBSM form 4) - try and error/improve, guess and check, drawing diagrams, construct a table/chart, simplify the problem, using simulation/experiment, identifying patterns, working backwards, develop mathematical models, - skills that can be used in solving mathematical problems in any topics/across the topics. Nature of mathematics : A problem solving view of mathematics as a dynamic (vs fixed or static - accumulation of facts, concepts, rules, theorems etc)) and continually expanding field of human creation, involving the process of inquiry, reasoning (with its intellectual rigour and beauty), discovery and invention as well as a cultural product (is not cultural free !). Mathematics is useful, practical and problem- driven knowledge (ie mainly arise from practical or real- life situations). Assignment 2 : Discuss in what ways do this view of the nature of mathematics influence teaching and learning mathematics ?
{"url":"http://mdnorbakar-mdnor.blogspot.com/","timestamp":"2014-04-21T02:02:05Z","content_type":null,"content_length":"101966","record_id":"<urn:uuid:43d1b3ea-6a7f-49a1-9bca-0124f362cda2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Commerce, GA SAT Math Tutor Find a Commerce, GA SAT Math Tutor ...I am a mentor at my high school and involved in many honor societies as well as volunteered to teach at a homework club in an elementary school. If my students do not understand the way I am teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual. 14 Subjects: including SAT math, chemistry, geometry, biology I love teaching mathematics! I have degrees in Mathematics and Mathematics education and several-year math teaching experiences at colleges. I can help high school and college students who need help with algebra, geometry, pre-calculus and calculus. 8 Subjects: including SAT math, calculus, geometry, algebra 1 ...I have several years experience doing this as well as experience teaching self contained SPED with students with mild to moderate intellectual disabilities. I also spent three years working for Babies Can't Wait working with preschool age children with severe disabilities. I am certified in Special Education (Adapted Curriculum.) I am certified to teach ADD and ADHD students. 34 Subjects: including SAT math, reading, English, writing ...I graduated from the University of Georgia with a B.S. in Mathematics and Physics. While at UGA I took MATH3000 which is titled "Linear Algebra" and made an A in this course. I also received the Hollingsworth Award which was given to me from the Department of Mathematics for my outstanding academic achievement in Linear Algebra. 15 Subjects: including SAT math, calculus, physics, geometry Hello! My name is Mary, and I've been tutoring math since I was in the 9th grade. I've tutored a variety of math courses from kindergarten to college level. 20 Subjects: including SAT math, reading, physics, calculus Related Commerce, GA Tutors Commerce, GA Accounting Tutors Commerce, GA ACT Tutors Commerce, GA Algebra Tutors Commerce, GA Algebra 2 Tutors Commerce, GA Calculus Tutors Commerce, GA Geometry Tutors Commerce, GA Math Tutors Commerce, GA Prealgebra Tutors Commerce, GA Precalculus Tutors Commerce, GA SAT Tutors Commerce, GA SAT Math Tutors Commerce, GA Science Tutors Commerce, GA Statistics Tutors Commerce, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/commerce_ga_sat_math_tutors.php","timestamp":"2014-04-18T18:49:15Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:f80f8ab3-d359-4e93-b7e3-b27e57757e88>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
4.4.4 XY Model Next: 4.4.5 O(3) Model Up: 4.4 Spin Models Previous: 4.4.3 Potts Model The XY (or O(2)) model consists of a set of continuous valued spins regularly arranged on a two-dimensional square lattice. Fifteen years ago, Kosterlitz and Thouless (KT) predicted that this system would undergo a phase transition as one changed from a low-temperature spin wave phase to a high-temperature phase with unbound vortices. KT predicted an approximate transition temperature, Our simulation [Gupta:88a] was done on the 128-node FPS (Floating Point Systems) T-Series hypercube at Los Alamos. FPS software allowed the use of C with a software model similar (communication implemented by subroutine call) to that used on the hypercubes at Caltech. Each FPS node is built around Weitek floating-point units, and we achieved for this application. We use a 1-D torus topology for communications, with each node processing a fraction of the rows. Each row is divided into red/black alternating sites of spins and the vector loop is over a given color. This gives a natural data structure of ( Figure 4.19: Autocorrelation Times for the XY Model Previous numerical work was unable to confirm the KT theory, due to limited statistics and small lattices. Our high-statistics simulations are done on algorithms which decorrelates as This implementation [Creutz:87a], [Brown:87a] of the over-relaxed algorithm is microcanonical, and it reduces critical slowing down even though it is a local algorithm. The ``hit'' elements for the Metropolis algorithm are generated as in the interval 4.19, we show the autocorrelation time Table 4.7: Results of the XY Model Fits: (a) We ran at 14 temperatures near the phase transition and made unconstrained fits to all 14 data points (four parameter fits according to Equation 4.24), for both the correlation length (Figure 4.20) and susceptibility (Figure 4.21). The key to the interpretation of the data is the fits. We find that fitting programs (e.g., MINUIT, SLAC) move incredibly slowly towards the true minimum from certain points (which we label spurious minima), which, unfortunately, are the attractors for most starting points. We found three such spurious minima (KT1-3) and the true minimum KT4, as listed in Table 4.7. Figure 4.20: Correlation Length for the XY Model Figure 4.21: Susceptibility for the XY Model Thus, our data was found to be in excellent agreement with the KT theory and, in fact, this study provides the first direct measurement of Next: 4.4.5 O(3) Model Up: 4.4 Spin Models Previous: 4.4.3 Potts Model Guy Robinson Wed Mar 1 10:19:35 EST 1995
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node46.html","timestamp":"2014-04-20T15:56:31Z","content_type":null,"content_length":"8994","record_id":"<urn:uuid:30c3583a-51de-4083-9f67-41eaed73b83a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help February 8th 2010, 04:54 PM #1 Sep 2009 If I can describe the subspaces of R^3 as being lines, planes, all of R^3, and the zero vector then how can I describe D the space of all 2x2 diagonal matrices? Would it be correct to say: a "line" of 2x2 diagonal matrices, a "plane" of 2x2 diagonal matrices, all of D, and the zero matrix? Is there a term for a "line"/"plane" of matrices? Is there a term for a "line"/"plane" of functions? Not really. We'll just say "a two-dimensional subspace" instead of a "plane". February 8th 2010, 04:59 PM #2
{"url":"http://mathhelpforum.com/advanced-algebra/127868-subspaces.html","timestamp":"2014-04-16T19:47:16Z","content_type":null,"content_length":"31560","record_id":"<urn:uuid:8baabf48-cb6b-4e83-8f73-cf67060c664d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
A probability question August 22nd 2012, 02:25 PM #1 Aug 2012 New Zealand A probability question I am trying to revise for a test, and am working through some prep questions. I am struggling on this one, could anyone help? Suppose there is a medical diagnostic test for a disease. The sensitivity of the test is .95. This means that if a person has the disease, the probability that the test gives a positive response is .95. The specificity of the test is .90. This means that if a person does not have the disease, the probability that the test gives a negative response is .90, or that the false positive rate of the test is .10. In the population, 1% of the people have the disease. What is the probability that a person tested has the disease, given the results of the test is positive? Let D be the event "the person has the disease" and let T be the event "the test gives a positive result." Any help would be much appreciated. n = normal s = sick + = test positive - = test negative Pr(n+) = .1 * .99 = .099 Pr(n-) = .90 * .99 = .891 Pr(s+) = .95 * .01 = .0095 Pr(s-) = .05 * .01 = .0005 Pr(s|+) = .0095/(.099+.0095) = .08756 (=8.756%) Re: A probability question Suppose there is a medical diagnostic test for a disease. The sensitivity of the test is .95. This means that if a person has the disease, the probability that the test gives a positive response is .95. The specificity of the test is .90. This means that if a person does not have the disease, the probability that the test gives a negative response is .90, or that the false positive rate of the test is .10. In the population, 1% of the people have the disease. What is the probability that a person tested has the disease, given the results of the test is positive? Let D be the event "the person has the disease" and let T be the event "the test gives a positive result." I think that I answered this elsewhere. Let $D$ means a person has the disease and $D^c$ means a person does not have the disease. You are given: $P(+|D)=0.95,~P(-|D^c)=0.90,~\&~P(D)=0.01$. From that we conclude $P(-|D)=0.05,~P(+|D^c)=0.10,~\&~P(D^c)=0.99~.$ Now $P(+)=P(+|D)P(D)+P(+|D^c)P(D^c)$ The question is $P(D|+)=\frac{P(D\cap +)}{P(+)}~.$ Re: A probability question Thanks for those answers, both were really helpful August 22nd 2012, 02:54 PM #2 August 22nd 2012, 03:11 PM #3 August 22nd 2012, 05:54 PM #4 Aug 2012 New Zealand
{"url":"http://mathhelpforum.com/statistics/202447-probability-question.html","timestamp":"2014-04-18T23:35:27Z","content_type":null,"content_length":"41099","record_id":"<urn:uuid:3e0573a7-aee7-48d8-88ff-25bf7ba10e57>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Solids of revolution are not fruits thrown at tyrannical rulers in protest. A solids of revolution is a volume built by rotating an area around a predetermined center line called the axis of rotation We mentioned before that way to think of this is like bundt cake. If we aren't happy with a thin slice, we can choose a thicker piece by making an initial slice and rotating the knife around the center to cut out a better sized portion. If we were to rotate all the way around once without cutting, we would form the entire cake, which is a full revolution of the solid cake. Understanding how to make solids of revolution can be difficult to picture, so drawing them is often very useful. Just Like football players practice plays before a big game, we are going to get plenty of practice drawing solids of revolution before we build integrals with them. Sample Problem Draw the solid obtained by rotating the region bounded by y = x, y = 1, and the y-axis around the y-axis. As always, we will start by drawing the area we are going to make the volume with: We can label the axis of rotation and draw a mirror copy of the region on the other side of the axis of rotation. Drawing the mirror copy helps us picture the axis of rotation more clearly. We know the y-axis is the axis of rotation because the problem said to rotate around the y-axis. Next, we draw curved lines, solid in front and dotted in back, to make it look like the region went all the way around the y-axis: This will usually be good enough to give you an idea of what the solid looks like. Draw the solid obtained by rotating the region bounded by y = x and y = x^2 around the line y = 2. Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the x-axis Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the y-axis Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the line x = 1 Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the line x = 2 Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the line x = -1 Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the line y = 1 Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the line y = 4 Let R be the region bounded by the graphs of x = 1, and the x-axis. Draw the solid obtained by rotating R around the line y = -1
{"url":"http://www.shmoop.com/area-volume-arc-length/solid-revolution-help.html","timestamp":"2014-04-19T12:08:34Z","content_type":null,"content_length":"48191","record_id":"<urn:uuid:5209edd5-46a5-42d6-9cb4-25960839f6d1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest Park, IL Algebra 1 Tutor Find a Forest Park, IL Algebra 1 Tutor ...I have a teaching certificate in mathematics issued by the South Carolina Department of Education. During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2. I have a teaching certificate in high school mathematics issued by the South Carolina State Department of Education. 12 Subjects: including algebra 1, calculus, algebra 2, geometry Greetings, My name is Evan. I was a student at the University of Illinois at Chicago studying Biomedical Engineering. I graduated from Whitney M. 16 Subjects: including algebra 1, chemistry, English, writing ...My undergraduate was a major in chemistry with a minor in math after which I completed a masters in chemistry. I was the top student in all my chemistry classes, so I have a clear understanding of all the concepts to do with chemistry. I will be able to help you or your child to understand thes... 20 Subjects: including algebra 1, chemistry, calculus, physics ...My family is originally from Belgium and I have been privileged to live in four different countries including Belgium, the UK, and Australia. Although difficult at times, I have loved this experience because it has both broadened my interests and allowed me to create a global network. Although ... 20 Subjects: including algebra 1, English, chemistry, GED Physics and mathematics instructor with wealth of experience teaching college and GED. Patience and multiple approaches to learning, I believe, are important to help struggling students. I believe that all students can learn even if some of their experiences have not been positive. 7 Subjects: including algebra 1, geometry, algebra 2, trigonometry Related Forest Park, IL Tutors Forest Park, IL Accounting Tutors Forest Park, IL ACT Tutors Forest Park, IL Algebra Tutors Forest Park, IL Algebra 2 Tutors Forest Park, IL Calculus Tutors Forest Park, IL Geometry Tutors Forest Park, IL Math Tutors Forest Park, IL Prealgebra Tutors Forest Park, IL Precalculus Tutors Forest Park, IL SAT Tutors Forest Park, IL SAT Math Tutors Forest Park, IL Science Tutors Forest Park, IL Statistics Tutors Forest Park, IL Trigonometry Tutors Nearby Cities With algebra 1 Tutor Bellwood, IL algebra 1 Tutors Berkeley, IL algebra 1 Tutors Berwyn, IL algebra 1 Tutors Broadview, IL algebra 1 Tutors Hines, IL algebra 1 Tutors La Grange, IL algebra 1 Tutors Lyons, IL algebra 1 Tutors Maywood, IL algebra 1 Tutors North Riverside, IL algebra 1 Tutors Oak Park, IL algebra 1 Tutors River Forest algebra 1 Tutors River Grove algebra 1 Tutors Riverside, IL algebra 1 Tutors Stone Park algebra 1 Tutors Westchester algebra 1 Tutors
{"url":"http://www.purplemath.com/forest_park_il_algebra_1_tutors.php","timestamp":"2014-04-16T07:48:40Z","content_type":null,"content_length":"24134","record_id":"<urn:uuid:e8da269d-34a5-454b-bcb5-ccd492c712f2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Cotati SAT Math Tutors ...To bring this subject to life during tutoring, I tie in real world applications, some of which can be quite surprising. I have gained a real-world understanding for this subject by taking graduate seminars, and having worked in an organic chemistry research lab. After 3 quarters (plus lab) of undergraduate ochem, I have developed a solid foundation in the fundamentals as well. 50 Subjects: including SAT math, reading, English, physics ...I am primarily interested in working during the summer but I am willing to work during the school year.I have 5 years of experience teaching ESL in Asia. From August of 2001 to October 2004 I worked for TCD Tutorial Services in Bangkok Thailand. I primarily taught private lessons to students of all ages. 10 Subjects: including SAT math, geometry, statistics, algebra 1 I'm a retired engineer and math teacher with a love for teaching anyone who wants to learn. As an engineer, I regularly used all levels of math (from arithmetic through calculus), statistics, and physics. I hold a California Single Subject Teaching Credential in math and physics (I taught high school math from pre-algebra to geometry). 26 Subjects: including SAT math, reading, calculus, physics ...I am well regarded as an excellent instructor and am able to deal with students with a wide range of abilities in math, finance and economics. I worked a number of years as a data analyst and computer programmer and am well versed in communicating with people who have a variety of mathematical a... 49 Subjects: including SAT math, physics, geometry, calculus ...Many students tell me that I can explain concepts much better than their teachers. I graduated with honors at UCLA with a BS in Chemical Engineering. I can definitely tailor the tutoring session to your specific needs. 59 Subjects: including SAT math, chemistry, reading, calculus
{"url":"http://www.algebrahelp.com/Cotati_sat_math_tutors.jsp","timestamp":"2014-04-16T14:13:00Z","content_type":null,"content_length":"24775","record_id":"<urn:uuid:2597efb1-d72a-41fb-b89c-a744171f622c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Repeated moral hazard and recursive Lagrangeans Mele, Antonio (2010): Repeated moral hazard and recursive Lagrangeans. Download (903Kb) | Preview This paper shows how to solve dynamic agency models by extending recursive Lagrangean techniques a la Marcet and Marimon (2009) to problems with hidden actions. The method has many advantages with respect to promised utilities approach (Abreu, Pearce and Stacchetti (1990)): it is a significant improvement in terms of simplicity, tractability and computational speed. Solutions can be easily computed for hidden actions models with several endogenous state variables and several agents, while the promised utilities approach becomes extremely difficult and computationally intensive even with just one state variable or two agents. Several numerical examples illustrate how this methodology outperforms the standard approach. Item Type: MPRA Paper Original Repeated moral hazard and recursive Lagrangeans Language: English Keywords: repeated moral hazard; recursive Lagrangean; collocation method D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D86 - Economics of Contract: Theory C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C63 - Computational Techniques; Simulation Modeling Subjects: D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D82 - Asymmetric and Private Information; Mechanism Design C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C61 - Optimization Techniques; Programming Models; Dynamic Analysis Item ID: 21741 Depositing Antonio Mele Date 31. Mar 2010 05:52 Last Modified: 11. Jan 2014 10:05 Abraham, A. and N. Pavoni (2008), "Principal-Agent Relationships with Hidden Borrowing and Lending: The First-Order Approach in Two Periods", mimeo, UCL Abraham, A. and N. Pavoni (2009), " Efficient allocations with moral hazard and hidden borrowing and lending: A recursive formulation, Review of Economic Dynamics, Volume 11, Issue 4, October 2008, Pages 781-803 Abreu, D., Pearce, D. and E. Stacchetti (1990) \textquotedblleft Toward a Theory of Discounted Repeated Games With Imperfect Monitoring,\textquotedblright\ Econometrica, vol. 58(5), pp. 1041-1063. Atkeson, A. and H. Cole (2008), \textquotedblleft A Dynamic Theory of Optimal Capital Structure and Executive Compensation,\textquotedblright\ mimeo, UCLA Chien, Y. and H. Lustig (Forthcoming) "The Market Price of Aggregate Risk and the Wealth Distribution" Review of Financial Studies Clementi, G. L., Cooley,T. and C. Wang (2006), "Stock Grants as a Commitment Device", Journal of Economic Dynamics and Control 30(11): 2191-2216 Clementi, G. L., Cooley, T. and S. Di Giannatale (2008a), "Total executive compensation", mimeo Clementi, G. L., Cooley, T. and S. Di Giannatale (2008b), "A theory of firm decline", mimeo Fernandes, A. and C. Phelan (2000), "A Recursive Formulation for Repeated Agency with History Dependence", Journal of Economic Theory 91(2): 223-247 Friedman, E. (1998), "Risk sharing and the dynamics of inequality", mimeo, Northwestern University Hopenhayn, H. A. and Nicolini, J. P. (1997), "Optimal Unemployment Insurance", Journal of Political Economy 105(2): 412-438 Jewitt, I. (1988) "Justifying the First-Order Approach to Principal-Agent Problems," Econometrica, Econometric Society, vol. 56(5), pages 1177-90, September. Judd K. (1998), "Numerical Methods in Economics", MIT\ Press, Cambridge (MA) Judd K., J. Conklin and Sevin Yeltekin (2003) "Computing Supergame Equilibria", Econometrica, 2003 71(4): 1239- 1255. Ke, R. (2010), "A Fixed-Point Method for Validating the First-Order Approach: Necessary and Sufficient Condition and its Implications", mimeo Koehne, S. (2009), "The First-Order Approach to Moral Hazard Problems with Hidden Saving", mimeo, University of Mannheim Lehnert A., Ligon E. and R. M. Townsend (1999), "Liquidity Constraints and Incentive Contracts", Macroeconomic Dynamics 3: 1-47. Luenberger D. G. (1969), "Optimization by vector space methods", Wiley and Sons, New York Marcet, A. and R. Marimon (2009), "Recursive contracts", mimeo, IAE References: Mele, A. (2009), "Dynamic risk sharing and moral hazard", work in progress Messner, M. and N. Pavoni (2004). "On the Recursive Saddle Point Method: A Note", IGIER\ Working Paper n. 255 Mirrlees, J. A. (1975), "The Theory of Moral Hazard and Unobservable Behaviour: Part I", published in: The Review of Economic Studies, Vol. 66, No. 1, Special Issue: Contracts (Jan., 1999), pp. 3-21 Paulson, A. L., Karaivanov, A. and Townsend R. M. (2006), "Distinguishing Limited Liability from Moral Hazard in a Model of Entrepreneurship", Journal of Political Economy 144(1): Pavoni, N. (2007),\ "On optimal unemployment compensation", Journal of Monetary Economics 54(6): 1612-1630 Pavoni, N. (forthcoming),\ "Optimal Unemployment Insurance with Human Capital Depreciation and Duration Dependence", International Economic Review Phelan, C. and R. M. Townsend (1991), "Computing Multi-Period, Information Constrained Equilibria", Review of Economic Studies 58(5): 853-881 Quadrini,V. (2004), "Investment and liquidation in renegotiation-proof contracts with moral hazard", Journal of Monetary Economics, 51(4): 713-751 Rogerson, W. (1985a), "Repeated Moral Hazard", Econometrica, 53: 69-76 Rogerson, W. (1985b), "The First-Order Approach to Principal-Agent Problems", Econometrica, 53 (6): 1357-1368 Sleet C. and S. Yeltekin (2003), \textquotedblleft On the Approximation of Value Correspondences\textquotedblright , mimeo, Carnegie Mellon University Sleet C. and S. Yeltekin (2006), \textquotedblleft Credibility and Endogenous Societal Discounting\textquotedblright , Review of Economic Dynamics 9, 2006; 410-437. Sleet C. and S. Yeltekin (2008a), \textquotedblleft Solving private information models\textquotedblright , mimeo, Carnegie Mellon University Sleet C. and S. Yeltekin (2008b), \textquotedblleft Politically Credible Social Insurance\textquotedblright , Journal of Monetary Economics 55, 2008; 129-151 Shimer, R. and I. Werning (forthcoming), "Liquidity and insurance for the unemployed", American Economic Review Spear, S. and S. Srivastava (1987), "On Repeated Moral Hazard with Discounting", Review of Economic Studies 54(4): 599-617 Thomas, J. and T. Worrall (1990) "Income fluctuations and asymmetric information: An example of a repeated principal-agent problem", Journal of Economic Theory 51: 367 390 Werning, I. (2001), "Repeated Moral-Hazard with Unmonitored Wealth: A Recursive First-Order Approach", mimeo, MIT Werning, I. (2002), "Optimal Unemployment Insurance with Unobservable Savings", mimeo, MIT Zhao, R. (2007), \textquotedblleft Dynamic risk-sharing with two-sided moral hazard\textquotedblright , Journal of Economic Theory 136: 601-640. URI: http://mpra.ub.uni-muenchen.de/id/eprint/21741
{"url":"http://mpra.ub.uni-muenchen.de/21741/","timestamp":"2014-04-17T19:04:34Z","content_type":null,"content_length":"28878","record_id":"<urn:uuid:a34dc282-906e-4f22-a3c3-85d531ccf4be>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with trig algebra. I can't figure this out October 11th 2009, 01:35 PM #1 Oct 2009 Help with trig algebra. I can't figure this out Ok, so I am looking at the min and max for the angle x: -12cos(x) + (2)12sin(x) = 5 -12cos(x) + (2)12sin(x) = -5 which I reduced to: (12)(2sin(x) - cos(x)) = ±5 2sin(x) - cos(x) = ±5/12 I'm not sure how to solve for the angle (x) from here. I'm not sure about what is going on here, but: Squaring both sides $4-4\cos^2{x}=\frac{25}{144}+\frac{5}{6}\cos{x}+\cos^ 2{x}$ Now you have a quadratic in $cos{x}$. Can you solve quadratics? Working with what you gave me I was able to successfully get both angles using the quadratic formula, thanks. Just to understand exactly how you manipulated the sin function: sin(x) = sqrt(1-cos(x)^2) ??? Is there a listing of these conversions somewhere? This was derived from the pythagorean identity IE Oh yeah. Pythagoras, unit circle...it's all coming back now. Thanks October 11th 2009, 02:36 PM #2 October 11th 2009, 03:02 PM #3 Oct 2009 October 11th 2009, 03:06 PM #4 October 11th 2009, 03:08 PM #5 Oct 2009
{"url":"http://mathhelpforum.com/trigonometry/107404-help-trig-algebra-i-can-t-figure-out.html","timestamp":"2014-04-19T09:49:03Z","content_type":null,"content_length":"43443","record_id":"<urn:uuid:76ffd91c-1e72-42f9-a3b4-e67a6d5e78c8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Designing for Power Amplifier Efficiency: Statistical Description of Signals Multichannel amplifiers have become ubiquitous in modern communications, including cellular, satellite and other wireless systems. While power efficiency is well understood for CW signals, the amplitude of multicarrier signals varies in time. Today's component designers must be able to simulate and analyze this kind of varying-envelope signal in order to develop successful devices cost effectively - a task entirely within the realm of today's advanced simulation tools. This article explores the properties of baseband multicarrier signals. The concept of power-added efficiency (PAE) for these signals is defined and the statistical approach to signal description is introduced. This approach estimates signal levels and enables the application of classical computation concepts to signals with randomly varying power. Ultimately, this yields closed-form formulas that can be used to describe the PAE of class A and B amplifiers. Fig. 1 Class B amplifier model. To facilitate the discussion, amplifier models are kept simple to allow concentration on signal descriptions and input-output characterization results. Thus, the primary goal will be the analysis of power levels in the baseband signal. Finally, the definition of average PAE will be introduced, as well as an overview of how it enables designers to make intelligent tradeoffs about average efficiency versus linearity. The simple class B amplifier shown in Figure 1 is modeled by a symbolically defined device (SDD), which enables its input/output relationships to be to specified. Input current (i1) is determined by the admittance Gi. The output voltage (Vout) is modeled by a voltage source V2 in series with a resistance Ro. For a class A amplifier: V2 = if | Vin | < Vsat/A then A x Vin else Vsat x sign(Vin) end if For a class B amplifier : V2 = if Vin < 0 then 0 else if Vin < Vsat/ A then A x Vin else Vsat end if Fig. 2 SDD input/output characteristics for class A and B amplifiers. Fig. 3 Two Monte Carlo simulation samples. The plot shown in Figure 2 illustrates the input/output characteristics the SDD amplifier models from the previous example over a Vin range of -1.5 to +1.5 V. The class A model provides linear gain and clipping at ?Vin? = 1 V. The class B model provides linear gain only for positive signals and clipping at Vin = 1 V. If a varying power waveform is applied to the class B amplifier, the device only amplifies through half of the cycle. This limits DC power dissipation to half of the cycle and significantly improves power efficiency. Power efficiency for class A and B amplifiers is well known. The theoretical upper limits of this efficiency can be expressed using the expressions: Class A: h = Prf/Pdc = 0.5 z Class B: h = Prf/Pdc = (p /4)vz Where z represents normalized RF power so that z = 1 corresponds to the maximum power for which the signal is not clipped. For class A amplifiers, the DC supply is constant and the RF amplitudes cannot exceed the biasing voltage and current. The result is 50 percent efficiency at maximum amplitudes (that is, when z = 1). For class B amplifiers, where the DC power supply depends on the RF output, the results are better than class A amplifiers, reaching a theoretical maximum at p /4, or 78 percent. Real amplifier efficiency is typically 30 percent to 50 percent lower than these limits due to losses in matching circuits and in DC supplies and the nonideal behavior of real devices. Multicarrier signals consist of a number of digitally-modulated carriers with a spacing of several megahertz. Since these types of signals generally use carrier frequencies in the gigahertz range, the baseband "comb" can be thought of as a slowly-varying envelope: Consequently, the multicarrier signal power is If the signals are phase or frequency modulated, then all amplitudes (Ak) will have the same value. Figure 3 shows a multicarrier signal envelope simulation. Two out of 250 samples from a Monte Carlo simulation are displayed. The envelope contains 10 signals with equal amplitudes and randomly varying phases (with uniform distribution), that are spaced at 1 MHz intervals. Note that the envelope (and consequently the signal power) strongly varies in time and also from sample to sample. To analyze the probability distribution of the power of multicarrier signals, a signal that contains two carriers only will be used. Figure 4 reveals the simulation results from 250 Monte Carlo samples of the power of a two-carrier signal. The data are presented as a normalized histogram spread over 50 bins. The horizontal axis is envelope power, while the vertical axis shows the number of instances (from the sample of 250) that correspond to each power level. Fig. 4 Simulated two-carrier power distribution. Fig. 5 PDF for sinusoidal envelopes. It is apparent from this plot that the signal power remains close to zero and one for much of the time. A similar histogram for a CW signal would have a peak at one, and zeroes elsewhere. The shape of this plot can also be found geometrically. Indeed, for two carriers with amplitudes equal to one, the power (envelope squared) equals 1 + 1 + 2cosq = 4cos(q/2)2. When transformed through the power of the sinusoidal envelope, z = (cos(x))2 (upper-right plot shown in Figure 5), the uniform distribution of the phase shift x (bottom plot) results in a cup-shaped distribution (upper-left Consider a 10-carrier signal with equal amplitudes and random (uniformly distributed) phases. The results of Monte Carlo simulation of 250 samples are shown in Figure 6 together with the chi-square distribution. Note the close agreement between the two plots. It is a well established fact that the chi-square is a limit distribution for a large amount of carriers. This fact follows from the central limit theorem, which asserts that many independent random signals added together have a Gaussian distribution. Applying the theorem to in-phase and quadrature carriers two Gaussian processes x and y are obtained. Therefore the signal power of many carriers is the sum of two squared Gaussian processes which means that it has a chi-square distribution (with two degrees of freedom) such Fig. 6 Theoretical exponential distribution for power vs. a Monte Carlo simulation. Fig. 7 PDF for Pout. Consequently, the expected power of the multicarrier signal equals the sum of the carrier's power Two important conclusions follow: First, the simulation shows that 10 carriers approximate very well the theoretical many-carrier distribution. Also, there is a significant spread in the value for envelope power, as well as non-negligible probability that a given signal will exceed the expected value (up to 10 times higher). The probability distribution of a multicarrier signal after it passed through an ideal clipping amplifier can now be calculated. If a multicarrier signal is passed through it, in the linear region, the probability distribution remains exponential since the signal is multiplied by a constant. However, all points above saturation are related to the saturation point. In terms of the probability distribution, this lumping of probabilities into one point is represented by Dirac's delta, shown in Figure 7. Clipped signals are indicated by the arrow in the upper left plot. The signals are represented by Dirac's delta distribution, for which the weight is set by the shaded part of the bottom plot. The analytical expression for this distribution is Once the probability distribution is obtained, the average output power can be found as and the probability of the clipped signal is e-a/s2 It is convenient to use h(z) to find mean Pdc: Consequently, the average efficiency can be calculated using: h = P_out/P_dc. Thus, for a class A amplifier Fig. 8 Average efficiency vs. linearity. with clipping probability e-a/s2 It is already known how much clipping is obtained for a given signal level and the amplifier's saturation level. Thus, a tool now exists that enables us to precisely measure the amplifier's efficiency and linearity, and consequently make tradeoffs of one versus the other. The plot shown in Figure 8 compares the loss of linearity (probability of clipping) and the average efficiency. It is assumed that amplifier saturation is fixed (normalized to one), and that sigma-square is being changed. This condition corresponds to the situation when the amplifier is fixed and the signal level is controlled. What is found is that as the mean power reaches 20 percent of saturation, the percentage of the clipped signal is approximately one percent, while the average efficiency reaches 10 percent. Of course, the actual signal level will vary depending on the application. The tools described enable amplifiers to be designed in the presence of many signals with random phases. In particular, amplifier linearity versus power efficiency can easily be balanced. While the examples shown are very simple, the methods are applicable to many kinds of amplifiers with a range of saturation characteristics. The analysis methods presented can easily be generalized for class B amplifiers, where mean Pdc is expressed by erf(a), as well as for class AB and C amplifiers with the use of slightly more complex expressions. Finally, as shown, distributions can be calculated numerically via Monte Carlo analysis. The author wishes to thank Wolfgang Bosch at Filtronics plc., for permission to use his notes.
{"url":"http://www.microwavejournal.com/articles/3015-designing-for-power-amplifier-efficiency-statistical-description-of-signals","timestamp":"2014-04-16T13:42:09Z","content_type":null,"content_length":"63060","record_id":"<urn:uuid:90ec771c-0678-4ee3-97f1-08bfa49eb56d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Shadow on Wednesday, February 10, 2010 at 10:10pm. Two cards are chosen at random from a standard deck of cards with replacement. What is the probability of getting 2 aces? Is is an independent events? If it is then would it be 2/52(chances of an ace) x 2/52? • Math - Reiny, Wednesday, February 10, 2010 at 10:19pm since you are replacing it, the second event is not affected by the result of the first event, so the events are independent. Prob = 4/52 x 4/52 = 16/2704 = 1/169 (why do you have 2/52 ? Aren't there 4 aces?) • Math - Shadow, Wednesday, February 10, 2010 at 10:34pm I confused the PROBABILITY OF GETTING 2 ACES. Thank you very much. Could you review this one too please. A jar hold 15 red pencils and 10 blues pencils. What is the probability of drawing two red pencil form the jar? This would be an dependent and it would be 15/25( first time) x 14/24(second time)? • Math - Reiny, Wednesday, February 10, 2010 at 10:37pm correct! reduce it to 7/20 • Math - Shadow, Wednesday, February 10, 2010 at 10:40pm Thanks for the help, Reiny. • Math - Ariel, Wednesday, April 4, 2012 at 11:15pm That is very correct 7/20 I've been a math teacher for over 30 years. And I had my students do 9-7 Independent And Dependent Events And we did 1-9 togather. So then I said 10 is a hard one i'm gone let them sweat. I know why you get 15/25 because 15 red pencils and add 15+10=25 so then thats 15/25 * 14/24 = 7/20 . Have a nice day. But it's not cool to get answers off the internet your math teacher will think you really really tried. But okay. Related Questions math - two card are chosen at random from a standard deck of cards with ... math - Two cards are chosen from a standard deck of 52 playing cards without ... statistics - Three cards are randomly chosen, without replacement, from a ... general - From a standard deck of 52 cards, all of the clubs and the jack of ... finite math-random variables - Assume that a deck of cards contains 4 aces, 5 ... Grade 12 Data Management - Two cards are drawn without replacement from a deck ... math-independent and dependent events - A card is chosen at a ramdom from a deck... College Math - A card is selected from a standard deck of 52 playing cards. A ... algebra2 - Two cards are drawn at random froma standard deck of 52 cards without... MATH Prob. - You are dealt two cards successively without replacement from a ...
{"url":"http://www.jiskha.com/display.cgi?id=1265857808","timestamp":"2014-04-21T13:48:13Z","content_type":null,"content_length":"9967","record_id":"<urn:uuid:1ca1cdf6-c77f-4542-b337-9fd224fd6bb1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
dy/dx = 14x^2 + 3x, what is y in terms of x. - Homework Help - eNotes.com dy/dx = 14x^2 + 3x, what is y in terms of x. The derivative of y with respect to x, `dy/dx = 14x^2 + 3x` `dy/dx = 14x^2 + 3x` => `dy = (14x^2 + 3x) dx` take the integral of both the sides => `int dy = int (14x^2 + 3x) dx` => `y = 14*x^3/3 + 3x^2/2 + C` Here, C is a constant; this is included as the derivative of a constant is always 0. The expression for `y = 14*x^3/3 + 3x^2/2 + C` Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/dy-dx-14x-2-3x-what-y-terms-x-426981","timestamp":"2014-04-18T20:10:43Z","content_type":null,"content_length":"24631","record_id":"<urn:uuid:ee5db806-cbf5-4f4d-b8f0-a5272b62e300>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Help me, Friends and Gurus ???? 12-02-2001 #1 Registered User Join Date Nov 2001 Help me, Friends and Gurus ???? Hi; Friends; I solved most of the work. But it's now having problem to insert in the middle. Any can suggest me where to correct, how to correct, and where I did mistake. Here is my that insertnode function: ....Code .......... struct node *head = (struct node *) NULL; struct node *end = struct node *) NULL; void insertnode(struct node *insert) struct node *curr, *prev; if(head == NULL) head = insert; else if(head==end) if(strcmp(head->name,insert->name) < 0) else { /* inserting in the middle part is here....*/ /*...........Problem here ............*/ /* .........segmentation error....... */ curr = head; while(strcmp(curr->name,insert->name) < 0) { curr = curr->next; ---A COMPUTER ANIMATED BLOCK DESIGN--- Okay, so you want to insert a node into the linked list? You will have to make it so that FIRST_POINTER's next * would be the insertion pointer. And the insertion pointer's next * would have to be NEXT_POINTER. So, it would end up like: Now, put that into code and then post that if it is wrong. 1978 Silver Anniversary Corvette Okay, I need a little more "error info". Did you try to compile this code, or is this just "thinking code"? If you already tried to compile it, what were the errors? Were there any Runtime Errors? Because I think I may see your problem. Also, you also know that strcmp returns 0 if the strings are equal and nonzero if the strings aren't equal, right? The reason I ask this is because this line doesn't look great: while(strcmp(curr->name,insert->name) < 0) are you trying to continue the while loop if curr->name and insert->name are not equal? Because, if you wanted to see if they are equal, you would have to do this: while(strcmp(curr->name,insert->name) == 0) strcmp is just screwy like that. Try changing this stuff and tell me if this works. Worst comes to worst and I'll just write up some code, compare it to yours, and tell you the error. Nothing to worry about. 1978 Silver Anniversary Corvette I think you have problems when you are inserting to the end of the list. The lines while(strcmp(curr->name,insert->name) < 0) { curr = curr->next; if(curr=head){ // /What is this? should be if (curr==head) seems to be buggy. What if you go to the end of the list? This seems to be a sorted circular queue so at the end you go to the beginning. What if you add "Bob" and then "Charlie". Bob will be Ok but Charlie will go to the beginning of list. try this while(strcmp(curr->name,insert->name) < 0) { curr = curr->next; Last edited by ozgulker; 12-03-2001 at 03:45 AM. 12-02-2001 #2 the Corvetter Join Date Sep 2001 12-02-2001 #3 the Corvetter Join Date Sep 2001 12-03-2001 #4 Registered User Join Date Nov 2001
{"url":"http://cboard.cprogramming.com/c-programming/6389-help-me-friends-gurus.html","timestamp":"2014-04-18T22:14:03Z","content_type":null,"content_length":"49449","record_id":"<urn:uuid:f5ab4967-2483-4dc6-87d2-2b2aa126db28>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Lisle, IL Statistics Tutor Find a Lisle, IL Statistics Tutor ...I taught three of my four daughters algebra at home so they could enter high school math a year ahead. They all performed in the >90% range in the New York State algebra regents exam. I have also tutored neighbors, both high school age and middle age adults, in math and physics. 17 Subjects: including statistics, chemistry, geometry, reading I love teaching math and science, and I know how to make it fun and interesting. As a physicist I work everyday with math and science, and I have a long experience in teaching and tutoring at all levels (university, high school, middle and elementary school). My son (a 5th grader) scores above 99 p... 23 Subjects: including statistics, calculus, physics, geometry ...Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring. I believe that students are able to learn anything with the right instruction and I know my passion for science and math will contribute greatly to this process. ... 25 Subjects: including statistics, chemistry, calculus, physics ...I have my master's degree in mechanical engineering. For my thesis project and my current professional career, I have become proficient in control systems. Control systems theory consists of many differential equations. 20 Subjects: including statistics, calculus, physics, geometry ...I tutored Algebra in high school, and went on to further my knowledge in math, making me a great candidate for helping others understand and learn everything from Algebra to ANOVA, Regression, Multivariate statistics, and many types of predictive equation modeling. I am also well trained in usin... 11 Subjects: including statistics, writing, GRE, grammar Related Lisle, IL Tutors Lisle, IL Accounting Tutors Lisle, IL ACT Tutors Lisle, IL Algebra Tutors Lisle, IL Algebra 2 Tutors Lisle, IL Calculus Tutors Lisle, IL Geometry Tutors Lisle, IL Math Tutors Lisle, IL Prealgebra Tutors Lisle, IL Precalculus Tutors Lisle, IL SAT Tutors Lisle, IL SAT Math Tutors Lisle, IL Science Tutors Lisle, IL Statistics Tutors Lisle, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Lisle_IL_statistics_tutors.php","timestamp":"2014-04-18T19:03:47Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:7169509c-dab8-4157-9bb1-76db185fc13a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Applied Linear Equations: Distance Problem - Problem 2 Now we're going to look at another distance, rate, time problem, but, this one, we're actually dealing with things, going towards each other. So, a train leaves New York city, headed towards LA, travelling 120 miles per hour. At the same time, another train leaves LA, heads towards New York at 180. If the two cities are 2400 miles apart, how long until they meet. So, let's look at this. We're first going to do this from a logical standpoint. We have basically two cities, pretty far away. We have, a train that leaves LA at 180 miles per hour, a train that leaves New York city, at 120 miles per hour. After one hour, this train has gone 180, after one hour, this train has gone 120 miles, so after one hour, they're actually 300 miles closer, after two hours, double that, 600, after three hours, triple, 900. Every hour, they are getting 300 miles closer together. So, what we actually end up to solve, is just, distance is equals to rate times time. The distance we're concerned about is, the distance they're apart 2400. The rate that they're going together, is just the sum of their two individual rates, which is 300, and then the time. Solve this out, divide by 300. 2400 divide by 300 is 8, unit of time, hours. So this is more of a logical approach, if you also do more of the mathematical approach which we know that the distance of one train plus the distance of the other is equal to the distance in total. The distance of one train is just rate times time, distance of the second train is just rate times time and the distance in total we know to be 2400. We know that these times are equivalent because they leave at the same time and they meet at the same time. So our rate of our first train is 120 and our time is just t, then the rate of our second train is 180 and our total distance is still 2400. If we then combine like terms 120 plus 180 is actually 300t which actually takes us back to where we were over here divide by 300, once again we would end up with t equals 8. Two approaches, both are completely valid, either one works. distance rate time train word problem
{"url":"https://www.brightstorm.com/math/algebra-2/linear-equations/applied-linear-equations-distance-problems-problem-2/","timestamp":"2014-04-20T18:50:38Z","content_type":null,"content_length":"57968","record_id":"<urn:uuid:88b7a87b-0e41-4c7c-875c-9cb9c494a0d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial Fraction Decomposition February 8th 2007, 11:15 AM #1 Junior Member Aug 2006 Partial Fraction Decomposition I need some help understanding an apparent discrepancy. In this problem: 4 / (2x^2 - 5x -3) (2x^2 stands for 2x squared) I find that if I resolve that rational expression to these partial fractions: (A / 2x + 1) + (B / x - 3) I get A = -8/7 and B = 4/7 However, if I resolve the rational expression to these partial fractions: (A / x -3) + (B / 2x + 1) I get A = 4/7 and B = -8/7 Then, if I let x = 0 and sum the partial fractions I get -4/3 for the first case, and 20/21 for the second case. I think the first case is correct because the original rational expression (4 / (2x^2 -5x -3)) does resolve to -4/3 when x is set to 0. So my question is, how does one determine what order to use when resolving rational expressions into partial fractions composed of linear non-repeating factors? It seems to make a big difference in this example, yet my book does not provide guidance in this area. I need some help understanding an apparent discrepancy. In this problem: 4 / (2x^2 - 5x -3) (2x^2 stands for 2x squared) I find that if I resolve that rational expression to these partial fractions: (A / 2x + 1) + (B / x - 3) I get A = -8/7 and B = 4/7 However, if I resolve the rational expression to these partial fractions: (A / x -3) + (B / 2x + 1) I get A = 4/7 and B = -8/7 These are the same partial fraction resolutions of the original term. Then, if I let x = 0 and sum the partial fractions I get -4/3 for the first case, and 20/21 for the second case. I think the first case is correct because the original rational expression (4 / (2x^2 -5x -3)) does resolve to -4/3 when x is set to 0. Just check your arithmetic, since the two expressions are identical they must give the same result. So my question is, how does one determine what order to use when resolving rational expressions into partial fractions composed of linear non-repeating factors? It seems to make a big difference in this example, yet my book does not provide guidance in this area. Either will do, as you see here you get the same result either way. Yes, I realize now that I had made an arithmetic error. Thanks. February 8th 2007, 11:30 AM #2 Grand Panjandrum Nov 2005 February 8th 2007, 11:50 AM #3 Junior Member Aug 2006
{"url":"http://mathhelpforum.com/pre-calculus/11362-partial-fraction-decomposition.html","timestamp":"2014-04-20T10:27:25Z","content_type":null,"content_length":"36878","record_id":"<urn:uuid:a627daf7-33d1-4db5-8061-98ea0c0e062c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding limit Can someone please help me in evaluating the following limit : Lim (n-> infinity) nsq.root(n)/(n+1)sq.root(n+1). Thanks in advance. It's pretty obvious, isn't it, that both numerator and denominator go to infinity- but the denominator goes to infinity faster. If you want to be more "technical", let treat n as a continuous variable and apply L'Hopital's rule.
{"url":"http://mathhelpforum.com/calculus/218366-finding-limit.html","timestamp":"2014-04-20T02:23:45Z","content_type":null,"content_length":"32227","record_id":"<urn:uuid:70f8c935-5133-4630-84ed-dbe2dba85d82>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
July 18th 2008, 12:05 AM #1 Nov 2006 There are thirty people in a room and all of them write down the day and month of their birth. a)What is the probability that two or more of these dates will be the same (remember, just day and month, not year)? b)What is the probability that three or more of these dates will all coincide? Hello, perash! This is a classic . . . The Birthday Paradox. . . I'll do the easy one. Thirty people write down the day and month of their birth. a) What is the probability that 2 or more of these dates will be the same ? The opposite of "at least one match" is "no matches." Person #1 can have any birthday: . $\frac{365}{365}$ #2 can have any of the remaining 364 days: . $\frac{364}{365}$ #3 can have any of the remaining 363 days: . $\frac{363}{365}$ . . . and so on . . . #30 can have any of the remaining 336 days: . $\frac{336}{365}$ $P(\text{no match}) \:=\:\frac{365}{365}\cdot\frac{364}{365}\cdot\frac {363}{365}\cdots \frac{336}{365}$ Hence: . $P_0 \;=\;\frac{365!}{335!365^{30}} \;=\;\frac{{365\choose335}}{365^{30}}$ Therefore: . $P(\text{2 or more}) \;=\;1 - P_0$ . . (I'll let you crank it out.) July 18th 2008, 05:04 AM #2 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/algebra/43958-probability.html","timestamp":"2014-04-16T20:28:26Z","content_type":null,"content_length":"34919","record_id":"<urn:uuid:07b6af10-0ab3-4c44-8c98-2c1f02f099f5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about limits August 19th 2011, 01:14 PM Question about limits the question involves lim x->0 (x cot x) Sorry if the notation isnt so good but its the limit of x as it approaches 0 the book goes on to prove that the answer is 1 by doing the following steps that confuse me: lim(x cot x) = lim(x (cosx/sin x)) = lim ((x/sin x)cos x) =(lim(x/sin x))(lim cos x) =((lim cos x)/(lim (sin x/x))) = 1/1 = 1 the bold part highlights my confusion, I dont understand what happened there, why the denominator went to x, and was removed by cos x its very possible its just a simple mistake on my part, but for some reason im just lost. thanks, and sorry about the notation, I dont know how to use the forum that well August 19th 2011, 01:22 PM Re: Question about limits the question involves lim x->0 (x cot x) Sorry if the notation isnt so good but its the limit of x as it approaches 0 the book goes on to prove that the answer is 1 by doing the following steps that confuse me: lim(x cot x) = lim(x (cosx/sin x)) = lim ((x/sin x)cos x) =(lim(x/sin x))(lim cos x) =((lim cos x)/(lim (sin x/x))) = 1/1 = 1 the bold part highlights my confusion, I dont understand what happened there, why the denominator went to x, and was removed by cos x its very possible its just a simple mistake on my part, but for some reason im just lost. thanks, and sorry about the notation, I dont know how to use the forum that well $x\cot{x} = x\cdot \frac{\cos{x}}{\sin{x}} = \frac{x}{\sin{x}} \cdot \cos{x} = \cos{x} \cdot \frac{x}{\sin{x}} = \cos{x} \div \frac{\sin{x}}{x}$ August 19th 2011, 01:23 PM Re: Question about limits $x\frac{\cos(x)}{\sin(x)}=\frac{x\cos(x)}{\sin(x)}= \frac{x}{\sin(x)}\cos(x)$
{"url":"http://mathhelpforum.com/calculus/186415-question-about-limits-print.html","timestamp":"2014-04-20T08:45:33Z","content_type":null,"content_length":"6272","record_id":"<urn:uuid:0bfa1116-39b1-4aa3-a369-fc8b29f6241c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove quotient group cyclic Let $G=C_8\times C_8$ and $H\leq G$. $H$ is cyclic of order 8. Show that $G/H$ is cyclic. I'm not sure my solution is correct, but here it goes. I assume $C_8$ is a cyclic group of order 8. So H is isomorphic $Z_8$, or to $\{0\} \times Z_8$. So $(C_8 \times C_8)/H$ is isomorphic to $(Z_8 \times Z_8)/(\{0\} \times Z_8)$, which is isomorphic to $(Z_8/ \{0\}) \times (Z_8/Z_8)$, which is isomorphic to $Z_8$, which is a cyclic group. Edited to, hopefuly, correct the mistakes. Last edited by ModusPonens; September 10th 2011 at 03:37 AM. Thanks, You are correct. We have atheorem that states $(H\times K)/H \cong K$.
{"url":"http://mathhelpforum.com/advanced-algebra/187648-prove-quotient-group-cyclic.html","timestamp":"2014-04-17T02:08:47Z","content_type":null,"content_length":"36025","record_id":"<urn:uuid:a4931f46-65e7-474f-b27b-628c2bd718e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Maintainer Roman Cheplyaka <roma@ro-che.info> Constructing tests The simplest kind of test is a function (possibly of many arguments) returning Bool. In addition, you can use the combinators shown below. For more advanced combinators, see Test.SmallCheck.Property. class Testable a Source Anything of a Testable type can be regarded as a "test" Testable Bool Testable Property (Serial a, Show a, Testable b) => Testable (a -> b) Existential quantification Suppose we have defined a function isPrefix :: Eq a => [a] -> [a] -> Bool and wish to specify it by some suitable property. We might define prop_isPrefix1 :: String -> String -> Bool prop_isPrefix1 xs ys = isPrefix xs (xs++ys) where xs and ys are universally quantified. This property is necessary but not sufficient for a correct isPrefix. For example, it is satisfied by the function that always returns True! We can also test the following property, which involves an existentially quantified variable: prop_isPrefix2 :: String -> String -> Property prop_isPrefix2 xs ys = isPrefix xs ys ==> exists $ \xs' -> ys == xs++xs' exists :: (Show a, Serial a, Testable b) => (a -> b) -> PropertySource exists p holds iff it is possible to find an argument a (within the depth constraints!) satisfying the predicate p existsDeeperBy :: (Show a, Serial a, Testable b) => (Depth -> Depth) -> (a -> b) -> PropertySource The default testing of existentials is bounded by the same depth as their context. This rule has important consequences. Just as a universal property may be satisfied when the depth bound is shallow but fail when it is deeper, so the reverse may be true for an existential property. So when testing properties involving existentials it may be appropriate to try deeper testing after a shallow failure. However, sometimes the default same-depth-bound interpretation of existential properties can make testing of a valid property fail at all depths. Here is a contrived but illustrative prop_append1 :: [Bool] -> [Bool] -> Property prop_append1 xs ys = exists $ \zs -> zs == xs++ys existsDeeperBy transforms the depth bound by a given Depth -> Depth function: prop_append2 :: [Bool] -> [Bool] -> Property prop_append2 xs ys = existsDeeperBy (*2) $ \zs -> zs == xs++ys (==>) :: Testable a => Bool -> a -> PropertySource The ==> operator can be used to express a restricting condition under which a property should hold. For example, testing a propositional-logic module (see examples/logical), we might define: prop_tautEval :: Proposition -> Environment -> Property prop_tautEval p e = tautology p ==> eval p e But here is an alternative definition: prop_tautEval :: Proposition -> Property prop_taut p = tautology p ==> \e -> eval p e The first definition generates p and e for each test, whereas the second only generates e if the tautology p holds. The second definition is far better as the test-space is reduced from PE to T'+TE where P, T, T' and E are the numbers of propositions, tautologies, non-tautologies and environments. Running tests The functions below can be used to run SmallCheck tests. As an alternative, consider using test-framework package. It allows to organize SmallCheck properties into a test suite (possibly together with HUnit or QuickCheck tests), apply timeouts, get nice statistics etc. To use SmallCheck properties with test-framework, install test-framework-smallcheck package. smallCheck :: Testable a => Depth -> a -> IO ()Source Run series of tests using depth bounds 0..d, stopping if any test fails, and print a summary report or a counter-example. smallCheckI :: Testable a => a -> IO ()Source Interactive variant, asking the user whether testing should continue/go deeper after a failure/completed iteration. Example session: haskell> smallCheckI prop_append1 Depth 0: Completed 1 test(s) without failure. Deeper? y Depth 1: Failed test no. 5. Test values follow. Continue? n Deeper? n type Depth = IntSource Maximum depth of generated test values For data values, it is the depth of nested constructor applications. For functional values, it is both the depth of nested case analysis and the depth of results.
{"url":"http://hackage.haskell.org/package/smallcheck-0.6/docs/Test-SmallCheck.html","timestamp":"2014-04-20T00:45:08Z","content_type":null,"content_length":"17601","record_id":"<urn:uuid:f7ce72c1-050d-41e5-b008-08d2d270067c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
The two-phase model for calculating thermodynamic properties of liquids from molecular dynamics: Validation for the phase diagram of Lennard-Jones fluids Lin, Shiang-Tai and Blanco, Mario and Goddard, William A.,III (2003) The two-phase model for calculating thermodynamic properties of liquids from molecular dynamics: Validation for the phase diagram of Lennard-Jones fluids. Journal of Chemical Physics, 119 (22). pp. 11792-11805. ISSN 0021-9606. http://resolver.caltech.edu/CaltechAUTHORS: See Usage Policy. Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS: We propose a general approach for determining the entropy and free energy of complex systems as a function of temperature and pressure. In this method the Fourier transform of the velocity autocorrelation function, obtained from a short (20 ps) molecular dynamics trajectory is used to obtain the vibrational density of states (DoS) which is then used to calculate the thermodynamic properties by applying quantum statistics assuming each mode is a harmonic oscillator. This approach is quite accurate for solids, but leads to significant errors for liquids where the DoS at zero frequency, S(0), remains finite. We show that this problem can be resolved for liquids by using a two phase model consisting of a solid phase for which the DoS goes to zero smoothly at zero frequency, as in a Debye solid; and a gas phase (highly fluidic), described as a gas of hard spheres. The gas phase component has a DoS that decreases monotonically from S(0) and can be characterized with two parameters: S(0) and 3Ng, the total number of gas phase modes [3Ng0 for a solid and 3Ng3(N–1) for temperatures and pressures for which the system is a gas]. To validate this two phase model for the thermodynamics of liquids, we applied it to pure Lennard-Jones systems for a range of reduced temperatures from 0.9 to 1.8 and reduced densities from 0.05 to 1.10. These conditions cover the gas, liquid, crystal, metastable, and unstable states in the phase diagram. Our results compare quite well with accurate Monte Carlo calculations of the phase diagram for classical Lennard-Jones particles throughout the entire phase diagram. Thus the two-phase thermodynamics approach provides an efficient means for extracting thermodynamic properties of liquids (and gases and solids). Item Type: Article Additional ©2003 American Institute of Physics. Received 6 February 2003; accepted 12 September 2003. The authors would like to thank Dr. Tahir Cagin, Dr.Seung Soon Jang, Dr. Prabal Maiti, Dr. Information: Valeria Molinero, and Peng Xu for many useful discussions. This research was partially supported by the NSF (CHE 99-85774, CTS-0132002) and NIH (1R01-GM62523-01). The facilities of the MSC used in this research are also supported by grants from DOE (ASCI and FETL), ARO (MURI and DURIP), ONR (MURI and DURIP), IBM-SUR, General Motors, ChevronTexaco, Seiko-Epson, Asahi Kasai, Beckman Institute, and Toray. Subject density; Lennard-Jones potential; thermodynamic properties; molecular dynamics method; harmonic oscillators; phase diagrams Record CaltechAUTHORS:LINjcp03 Persistent http://resolver.caltech.edu/CaltechAUTHORS: Alternative http://dx.doi.org/10.1063/1.1624057 Usage No commercial reproduction, distribution, display or performance rights in this work are provided. ID Code: 4030 Collection: CaltechAUTHORS Deposited Lindsay Cleary Deposited 25 Jul 2006 Last 26 Dec 2012 08:57 Repository Staff Only: item control page
{"url":"http://authors.library.caltech.edu/4030/","timestamp":"2014-04-20T18:26:33Z","content_type":null,"content_length":"24115","record_id":"<urn:uuid:0f5261d2-93bb-45f0-b85d-77d367415ab1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractional Expontent of a Binomial - Exponent Rules Question August 7th 2012, 05:44 AM #1 Aug 2012 Fractional Expontent of a Binomial - Exponent Rules Question To evaluate the expression (x-2)^1/2 the next steps listed are x^(-2)(1/2) then x^-1 and finally 1/x. I am fine with the final two steps. Please help me see how or what rules are used to go from the first to second step. Is this just a basic rules I am forgetting? So in summary how does (x-2)^1/2 become x^(-2)(1/2) Thank you. Re: Fractional Expontent of a Binomial - Exponent Rules Question To evaluate the expression (x-2)^1/2 the next steps listed are x^(-2)(1/2) then x^-1 and finally 1/x. I am fine with the final two steps. Please help me see how or what rules are used to go from the first to second step. Is this just a basic rules I am forgetting? So in summary how does (x-2)^1/2 become x^(-2)(1/2) Thank you. 1. An example: $\left(b^3\right)^4 = \left(b^3\right) \cdot \left(b^3\right) \cdot \left(b^3\right) \cdot \left(b^3\right) = \left(b\right)^{3+3+3+3} = b^{3 \cdot 4}$ So if you take a power to a power you have to multiply the exponents. 2. In general: $\left(b^a\right)^c = b^{a\cdot c} = \left(b^c\right)^a$ 3. Apply this rule to your question. Re: Fractional Expontent of a Binomial - Exponent Rules Question tHANK YOU for the reply. I am aware of the rule for multiplying exponents. My question is how does (x-2)^1/2 become x^-2(1/2). Thanks. Re: Fractional Expontent of a Binomial - Exponent Rules Question Re: Fractional Expontent of a Binomial - Exponent Rules Question ... by typo! To evaluate the expression (x-2)^1/2 the next steps listed are x^(-2)(1/2) then x^-1 and finally 1/x. I am fine with the final two steps. <--- who has listed these steps? Please help me see how or what rules are used to go from the first to second step. Is this just a basic rules I am forgetting? So in summary how does (x-2)^1/2 become x^(-2)(1/2) Thank you. 1. I assumed that there was a typo in the original term and that actually was meant $\left(x^{-2} \right)^{\frac12}$ 2. If you re-arrange the given term $(x-2)^{\frac12} = \sqrt{x-2}$ you can see that $\sqrt{x-2} = \frac1x~\implies~x^3-2x^2-1=0$ has only one real solution at $x = \frac{\sqrt[3]{172-12 \cdot \sqrt{177}}}{6}+\frac{\sqrt[3]{172+12 \cdot \sqrt{177}}}{6} +\frac23$ To answer your question So in summary how does (x-2)^1/2 become x^(-2)(1/2): Only once - and in all other cases this transformation is wrong. August 7th 2012, 10:28 AM #2 August 7th 2012, 12:04 PM #3 Aug 2012 August 7th 2012, 12:07 PM #4 August 7th 2012, 10:00 PM #5
{"url":"http://mathhelpforum.com/algebra/201857-fractional-expontent-binomial-exponent-rules-question.html","timestamp":"2014-04-18T10:27:24Z","content_type":null,"content_length":"48924","record_id":"<urn:uuid:5469de7a-9d7a-4663-8a11-0f0463e2f6df>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Westchester, FL Trigonometry Tutor Find a Westchester, FL Trigonometry Tutor ...I am very patient with students and enjoy teaching, seeing as I aspire to become a mathematics professor. I have always gotten A's in all my math courses and am a Dual-Enrolled student currently in Senior year of High School and finishing my AA in college, where I took Algebra. I am required to maintain a 3.0 average and have a 3.7 unweighted and a 6.06 weighted GPA. 12 Subjects: including trigonometry, calculus, algebra 2, algebra 1 ...I am not a living book so if there is a question that can't be solved at the time of the tutoring, I will resolve it in a time of 24 hours or less, and the answer will be sent by e mail if it is urgent. Studied my Bachelor of Science at Notre Dame University and obtained Master and PhD candidacy at Florida State University. Also I am certified by the State of Florida as teacher. 13 Subjects: including trigonometry, Spanish, calculus, physics ...Let's get together and map out your plan for success. The basics of an education can be achieved by anyone eager to succeed in life. Arithmetic and grammar are the first building blocks. 30 Subjects: including trigonometry, calculus, geometry, GRE ...Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receive... 14 Subjects: including trigonometry, chemistry, calculus, geometry ...Memorizing the trig identities makes this subject easier. I have taken 3 semesters of Japanese at Florida International University. My professors can say my Japanese is great. 11 Subjects: including trigonometry, chemistry, physics, calculus Related Westchester, FL Tutors Westchester, FL Accounting Tutors Westchester, FL ACT Tutors Westchester, FL Algebra Tutors Westchester, FL Algebra 2 Tutors Westchester, FL Calculus Tutors Westchester, FL Geometry Tutors Westchester, FL Math Tutors Westchester, FL Prealgebra Tutors Westchester, FL Precalculus Tutors Westchester, FL SAT Tutors Westchester, FL SAT Math Tutors Westchester, FL Science Tutors Westchester, FL Statistics Tutors Westchester, FL Trigonometry Tutors Nearby Cities With trigonometry Tutor Carol City, FL trigonometry Tutors Coconut Grove, FL trigonometry Tutors Crossings, FL trigonometry Tutors Gables By The Sea, FL trigonometry Tutors Inverrary, FL trigonometry Tutors Kendall, FL trigonometry Tutors Ludlam, FL trigonometry Tutors Olympia Heights, FL trigonometry Tutors Perrine, FL trigonometry Tutors Quail Heights, FL trigonometry Tutors Richmond Heights, FL trigonometry Tutors Snapper Creek, FL trigonometry Tutors Sweetwater, FL trigonometry Tutors Village Of Palmetto Bay, FL trigonometry Tutors West Miami, FL trigonometry Tutors
{"url":"http://www.purplemath.com/Westchester_FL_Trigonometry_tutors.php","timestamp":"2014-04-17T21:35:23Z","content_type":null,"content_length":"24529","record_id":"<urn:uuid:7945f013-8298-4ac7-b3ba-67be9dbbbe30>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00142-ip-10-147-4-33.ec2.internal.warc.gz"}