Math Forum: Encouraging Mathematical Thinking: Middle School Lesson Plan

Students will build a family of cylinders and discover the relation between the dimensions of the generating rectangle and the resulting pair of cylinders. They will then order the cylinders by their volumes and draw a conclusion about the relation between a cylinder's dimensions and its volume. Finally, they will calculate the volumes of the family of cylinders with constant lateral surface area.

Materials: 8 1/2" by 11" sheets of paper for the class (transparencies work well for the initial experiment), tape, ruler, graph paper, fill material (birdseed, Rice Krispies, Cheerios, packing "peanuts").

Vocabulary: cylinder, dimension, area, circumference, height, lateral surface area, volume.

Initial Experiment: Take a sheet of paper and join the top and bottom edges to form a "base-less" cylinder. The edges should meet exactly, with no gaps or overlap. With another sheet of paper the same size and aligned the same way, join the left and right edges to make another cylinder. Stand both cylinders on a table. One of the cylinders will be tall and narrow; the other will be short and stout. We will refer to the tall cylinder as cylinder A and the short one as cylinder B. Mark each cylinder now to avoid confusion later.

Now pose the following question to the class: "Do you think the two cylinders will hold the same amount? Or will one hold more than the other? If you think that one will hold more, which one will that be?" Have them record their predictions, with an explanation. Place cylinder B in a large flat box with cylinder A inside it. Fill cylinder A. Ask for someone to restate his or her predictions and explanation. With flair, slowly lift cylinder A so that the filler material falls into cylinder B. (You might want to pause partway through, to allow them to think about their answers.) Since the filler material does not fill cylinder B, we can conclude that cylinder B holds more than cylinder A. Ask the class: "Was your prediction correct? Do the two cylinders hold the same amount? Why or why not? Can we explain why they don't?" (Note to the teacher: because the volume of the cylinder equals pi*r^2*h, r has more effect than h [because r is squared], and therefore the cylinder with the greater radius will have the greater volume.)

Second Experiment: "Let's go back and look at our original sheet of paper. We made two different cylinders from it. What geometric shape is the sheet of paper?" (rectangle) "What are its dimensions?" (8.5" by 11".) "What are the dimensions of the resulting cylinders? That is, what is the height and what is the circumference?" (The height of the cylinder is the length of the side of the paper rectangle that you taped, and the circumference is the length of the other side.) "Are there any other cylinders that we can make from this same sheet of paper?" (Yes. There are many cylinders that can be made.) "Let's try to make some other cylinders. If we fold a new sheet of paper lengthwise and cut it in half, we will get two pieces - each measuring 4.25" by 11" - which we can tape together to form a rectangle 4.25" by 22". We can repeat the process to create a second rectangle the same size. Now we can roll these rectangles into two different cylinders, one 4.25" high and another 22" high. We will label them cylinder C (4.25" high) and cylinder D (22" high)." "Now we have four cylinders. Which of them would hold the most? Write down your predictions." Test by filling. Have a student report the results.
Now have the students arrange the cylinders in order, by volume, from the cylinder that holds the least to the cylinder that holds the most. "Do you see any pattern that relates the size of the cylinders and the amounts they hold?" (As they get taller and narrower the cylinders hold less, and as they get shorter and stouter, they hold more.) "How many other cylinders could we make from a rectangle with these same dimensions?" (Theoretically, infinitely many. Cylinders could get taller and narrower and taller and narrower until they were infinitely tall and infinitely narrow, or they could get shorter and stouter and shorter and stouter until they were infinitely short and infinitely stout.)

Calculation investigation: "We think that the taller the cylinder, the smaller the volume, and the shorter the cylinder, the greater the volume. Can we write this in mathematical language that will help us confirm our observations? What formulas relate to this problem?"

C = 2pi*r or pi*d [circumference of a circle]
A = b*h [area of a rectangle]
V = pi*r^2*h [volume of a cylinder]

So if our ultimate goal is to calculate the volume, then the formula we will need to use is V = pi*r^2*h. How are we going to find r and h? That is the challenge.

Find h: "Let's go back to our original sheet of paper. What were its dimensions?" (8.5" by 11".) "Which of these two dimensions represents the height of the cylinder?" (11". The height of the taped edge of the paper is the height of the cylinder.) "Halfway there. We have found h. Now on to r."

Find r: "How does the circumference of the cylinder relate to the dimensions of the rectangle?" (The untaped edge of the rectangle is the circumference of the cylinder.) "So, since the circumference is equal to 2pi*r and the circumference equals the untaped edge of the rectangle, then C = 2pi*r = 8.5". Now we can solve for r. How do we do that?" (Divide both sides of the equation by 2*pi.) "What do we get?" (r = 8.5/(2*pi) = 1.35282")

"Now we have r and h and we are ready to find the volume. Let's put them both into the volume formula, V = pi*r^2*h. Using substitution,

V = pi*(1.35282)^2*(11) in^3 = 63.2442 in^3

"Now you do the other cylinder and see what you get. Compare the volumes of the two cylinders. Do your results confirm what we discovered with our physical models?" (Note to teacher: you may need to lead students through the reasoning here as well.)

Organizing Material: Complete a Table. Remember our conclusion relating the dimensions of the cylinder to its volume? (As cylinders get taller and narrower they hold less, and as they get shorter and stouter they hold more.) Fill out the following table, and confirm that calculation. You can download the completed table as an Excel spreadsheet here.

Extension Question: Multiply the number in the first column of the above table by the number in the second column. What do you notice? (The products are all equal.) Why is this true? (These products represent the base times the height of the rectangle - in other words, the area. Since the cylinders were all made from sheets of paper having the same dimensions, they all have the same area. The rectangle's area is the lateral surface area of the cylinder.)

Assessment Question: Find a cylinder that has a volume of over 300 in^3 with a lateral surface area of 93.5 in^2. In the above table, fill in all four columns for that cylinder.
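To check the table by computation, here is a short Python sketch (not part of the original lesson) that applies the lesson's two formulas to a family of rectangles obtained by repeatedly halving and doubling the 8.5" by 11" sheet; the particular list of rectangles is just an illustration:

    import math

    # Every rectangle below comes from an 8.5" x 11" sheet, so each cylinder
    # has lateral surface area C * h = 8.5 * 11 = 93.5 in^2.
    rectangles = [(44, 2.125), (22, 4.25), (11, 8.5), (8.5, 11), (4.25, 22), (2.125, 44)]

    for height, circumference in rectangles:
        radius = circumference / (2 * math.pi)   # from C = 2*pi*r
        volume = math.pi * radius**2 * height    # V = pi*r^2*h
        print(f"h = {height:6.3f} in, C = {circumference:6.3f} in, "
              f"C*h = {circumference * height:4.1f} in^2, V = {volume:7.3f} in^3")

The products C*h all come out to 93.5 in^2, confirming the extension question, and the shortest cylinder in this list (h = 2.125", C = 44") has a volume of about 327 in^3, which answers the assessment question.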
Light field

The light field is a function that describes the amount of light traveling in every direction through every point in space. Michael Faraday was the first to propose (in an 1846 lecture entitled "Thoughts on Ray Vibrations") that light should be interpreted as a field, much like the magnetic fields on which he had been working for several years. The phrase light field was coined by Alexander Gershun in a classic paper on the radiometric properties of light in three-dimensional space (1936). The phrase has been redefined by researchers in computer graphics to mean something slightly different. To understand this difference, we'll need a bit of terminology.

The 5D plenoptic function

If we restrict ourselves to geometric optics, i.e. to incoherent light and to objects larger than the wavelength of light, then the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted by L and measured in watts (W) per steradian (sr) per meter squared (m^2). The steradian is a measure of solid angle, and meters squared are used here as a measure of cross-sectional area, as shown at right. The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function (Adelson 1991). The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position at any viewing angle at any point in time. It is never actually used in practice, but it is useful in understanding other concepts in vision and graphics. Since rays in space can be parameterized by three coordinates, x, y, and z, and two angles $\theta$ and $\phi$, as shown at left, it is a five-dimensional function. (One can consider time, wavelength, and polarization angle as additional variables, yielding higher-dimensional functions.)

Like Adelson, Gershun defined the light field at each point in space as a 5D function. However, he treated it as an infinite collection of vectors, one per direction impinging on the point, with lengths proportional to their radiances. Equivalently, one can imagine an infinite collection of infinitesimal surfaces placed at that point, one per direction, with different values of irradiance assigned to each surface. Integrating these vectors over any collection of lights, or over the entire sphere of directions, produces a single scalar value - the total irradiance at that point - and a resultant direction. The figure at right, reproduced from Gershun's paper, shows this calculation for the case of two light sources. In computer graphics, this vector-valued function of 3D space is called the vector irradiance field (Arvo 1994). The vector direction at each point in the field can be interpreted as the orientation in which one would face a flat surface placed at that point to most brightly illuminate it.

The 4D light field

In a plenoptic function, if the region of interest contains a concave object (think of a cupped hand), then light leaving one point on the object may travel only a short distance before being blocked by another point on the object. No practical device could measure the function in such a region. However, if we restrict ourselves to locations outside the convex hull (think shrink-wrap) of the object, then we can measure the plenoptic function easily using a digital camera.
Moreover, in this case the function contains redundant information, because the radiance along a ray remains constant from point to point along its length, as shown at left. In fact, the redundant information is exactly one dimension, leaving us with a four-dimensional function. Parry Moon dubbed this function the photic field (1981), while researchers in computer graphics call it the 4D light field (Levoy 1996) or Lumigraph (Gortler 1996). Formally, the 4D light field is defined as radiance along rays in empty space. The set of rays in a light field can be parameterized in a variety of ways, a few of which are shown below. Of these, the most common is the two-plane parameterization shown at right (below). While this parameterization cannot represent all rays - for example, rays parallel to the two planes if the planes are parallel to each other - it has the advantage of relating closely to the analytic geometry of perspective imaging. Indeed, a simple way to think about a two-plane light field is as a collection of perspective images of the st plane (and any objects that may lie astride or beyond it), each taken from an observer position on the uv plane. A light field parameterized this way is sometimes called a light slab.

Ways to create light fields

Light fields are a fundamental representation for light. As such, there are as many ways of creating light fields as there are computer programs capable of creating images or instruments capable of capturing them. In computer graphics, light fields are typically produced either by rendering a 3D model or by photographing a real scene. In either case, to produce a light field, views must be obtained for a large collection of viewpoints. Depending on the parameterization employed, this collection will typically span some portion of a line, circle, plane, sphere, or other shape, although unstructured collections of viewpoints are also possible (Buehler 2001). Devices for capturing light fields photographically may include a moving handheld camera, a robotically controlled camera (Levoy 2002), an arc of cameras (as in the bullet time effect used in The Matrix), a dense array of cameras (Kanade 1998; Yang 2002; Wilburn 2005), or a handheld camera (Ng 2005; Georgiev 2006), microscope (Levoy 2006), or other optical system in which an array of microlenses has been inserted in the optical path: see plenoptic camera. Some public domain archives of light field datasets are listed below.

How many images should be in a light field? The largest known light field (of Michelangelo's statue of Night) contains 24,000 1.3-megapixel images. At a deeper level, the answer depends on the application. For light field rendering (see the Applications section below), if you want to walk completely around an opaque object, then of course you need to photograph its back side. Less obviously, if you want to walk close to the object, and the object lies astride the st plane, then you need images taken at finely spaced positions on the uv plane (in the two-plane parameterization shown above), which is now behind you, and these images need to have high spatial resolution. The number and arrangement of images in a light field, and the resolution of each image, are together called the "sampling" of the 4D light field. Analyses of light field sampling have been undertaken by many researchers; a good starting point is Chai (2000).
Also of interest are Durand (2005) for the effects of occlusion, Ramamoorthi (2006) for the effects of lighting and reflection, and Ng (2005) and Zwicker (2006) for applications to plenoptic cameras and 3D displays, respectively.

Applications of light fields

Computational imaging refers to any image formation method that involves a digital computer. Many of these methods operate at visible wavelengths, and many of those produce light fields. As a result, listing all applications of light fields would require surveying all uses of computational imaging - in art, science, engineering, and medicine. In computer graphics, some selected applications are:

Illumination engineering. Gershun's reason for studying the light field was to derive (in closed form if possible) the illumination patterns that would be observed on surfaces due to light sources of various shapes positioned above these surfaces. An example is shown at right. A more modern study is Ashdown (1993).

Light field rendering. By extracting appropriate 2D slices from the 4D light field of a scene, one can produce novel views of the scene (Levoy 1996; Gortler 1996). Depending on the parameterization of the light field and slices, these views might be perspective, orthographic, crossed-slit (Zomet 2003), multi-perspective (Rademacher 1998), or another type of projection. Light field rendering is one form of image-based rendering.

Synthetic aperture photography. By integrating an appropriate 4D subset of the samples in a light field, one can approximate the view that would be captured by a camera having a finite (i.e. non-pinhole) aperture. Such a view has a finite depth of field. By shearing or warping the light field before performing this integration, one can focus on different fronto-parallel (Isaksen 2000) or oblique (Vaish 2005) planes in the scene. If the light field is captured using a handheld camera (Ng 2005), this essentially constitutes a digital camera whose photographs can be refocused after they are taken.

3D display. By presenting a light field using technology that maps each sample to the appropriate ray in physical space, one obtains an autostereoscopic visual effect akin to viewing the original scene. Non-digital technologies for doing this include integral photography, parallax panoramagrams, and holography; digital technologies include placing an array of lenslets over a high-resolution display screen, or projecting the imagery onto an array of lenslets using an array of video projectors. If the latter is combined with an array of video cameras, one can capture and display a time-varying light field. This essentially constitutes a 3D television system (Javidi 2002; Matusik 2004). Image generation and predistortion of synthetic imagery for holographic stereograms is one of the earliest examples of computed light fields, anticipating and later motivating the geometry used in Levoy and Hanrahan's work (Halle 1991, 1994).

Glare reduction. Glare arises due to multiple scattering of light inside the camera's body and lens optics and reduces image contrast. While glare has been analyzed in 2D image space (Talvala 2007), it is useful to identify it as a 4D ray-space phenomenon (Raskar 2008). By statistically analyzing the ray-space inside a camera, one can classify and remove glare artifacts. In ray-space, glare behaves as high-frequency noise and can be reduced by outlier rejection. Such analysis can be performed by capturing the light field inside the camera, but it results in the loss of spatial resolution. Uniform and non-uniform ray sampling can be used to reduce glare without significantly compromising image resolution (Raskar 2008).
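As an illustration of the shear-and-integrate idea behind synthetic aperture photography, here is a toy NumPy sketch (my own, not from the cited papers): each (u, v) sub-aperture image is shifted in proportion to its offset from the aperture center and the results are averaged. Integer np.roll shifts stand in for proper resampling, and the (U, V, S, T) array layout is an assumption.

    import numpy as np

    def refocus(lightfield, alpha):
        """Shift-and-add refocusing of a two-plane light field.

        lightfield: array of shape (U, V, S, T), one (S, T) image per
        (u, v) viewpoint. alpha sets the depth of the synthetic focal
        plane; alpha = 0 simply averages the sub-aperture images.
        """
        U, V, S, T = lightfield.shape
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))  # shear proportional to
                dv = int(round(alpha * (v - V // 2)))  # the aperture offset
                out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)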
References

Adelson, E.H., Bergen, J.R. (1991). "The plenoptic function and the elements of early vision", in Computational Models of Visual Processing, M. Landy and J.A. Movshon, eds., MIT Press, Cambridge, 1991, pp. 3-20.
Arvo, J. (1994). "The Irradiance Jacobian for Partially Occluded Polyhedral Sources", Proc. ACM SIGGRAPH, ACM Press, pp. 335-342.
Ashdown, I. (1993). "Near-Field Photometry: A New Approach", Journal of the Illuminating Engineering Society, Vol. 22, No. 1, Winter 1993, pp. 163-180.
Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M. (2001). "Unstructured Lumigraph Rendering", Proc. ACM SIGGRAPH, ACM Press.
Chai, J.-X., Tong, X., Chan, S.-C., Shum, H. (2000). "Plenoptic Sampling", Proc. ACM SIGGRAPH, ACM Press, pp. 307-318.
Durand, F., Holzschuch, N., Soler, C., Chan, E., Sillion, F.X. (2005). "A Frequency Analysis of Light Transport", Proc. ACM SIGGRAPH, ACM Press, pp. 1115-1126.
Faraday, M. (1846). "Thoughts on Ray Vibrations", Philosophical Magazine, S.3, Vol. XXVIII, N. 188, May 1846.
Georgiev, T., Zheng, C., Nayar, S., Curless, B., Salesin, D., Intwala, C. (2006). "Spatio-angular Resolution Trade-offs in Integral Photography", Proc. EGSR 2006.
Gershun, A. (1936). "The Light Field", Moscow, 1936. Translated by P. Moon and G. Timoshenko in Journal of Mathematics and Physics, Vol. XVIII, MIT, 1939, pp. 51-151.
Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M. (1996). "The Lumigraph", Proc. ACM SIGGRAPH, ACM Press, pp. 43-54.
Halle, M., Benton, S., Klug, M., Underkoffler, J. (1991). "The UltraGram: A Generalized Holographic Stereogram", SPIE Vol. 1461, Practical Holography V, S.A. Benton, ed., pp. 142-155.
Halle, M. (1994). "Holographic Stereograms as Discrete Imaging Systems", SPIE Proc. Vol. 2176: Practical Holography VIII, S.A. Benton, ed., pp. 73-84.
Isaksen, A., McMillan, L., Gortler, S.J. (2000). "Dynamically Reparameterized Light Fields", Proc. ACM SIGGRAPH, ACM Press, pp. 297-306.
Javidi, B., Okano, F., eds. (2002). Three-Dimensional Television, Video and Display Technologies, Springer-Verlag.
Kanade, T., Saito, H., Vedula, S. (1998). "The 3D Room: Digitizing Time-Varying 3D Events by Synchronized Multiple Video Streams", Tech Report CMU-RI-TR-98-34, December 1998.
Levoy, M., Hanrahan, P. (1996). "Light Field Rendering", Proc. ACM SIGGRAPH, ACM Press, pp. 31-42.
Levoy, M. (2002). Stanford Spherical Gantry.
Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. (2006). "Light Field Microscopy", ACM Transactions on Graphics (Proc. SIGGRAPH), Vol. 25, No. 3.
Liang, C.K., Lin, T.H., Wong, B.Y., Liu, C., Chen, H.H. (2008). "Programmable Aperture Photography: Multiplexed Light Field Acquisition", Proc. ACM SIGGRAPH.
Matusik, W., Pfister, H. (2004). "3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes", Proc. ACM SIGGRAPH, ACM Press.
Moon, P., Spencer, D.E. (1981). The Photic Field, MIT Press.
Ng, R. (2005). "Fourier Slice Photography", Proc. ACM SIGGRAPH, ACM Press, pp. 735-744.
Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P. (2005). "Light Field Photography with a Hand-Held Plenoptic Camera", Stanford Tech Report CTSR 2005-02, April 2005.
Rademacher, P., Bishop, G. (1998). "Multiple-Center-of-Projection Images", Proc. ACM SIGGRAPH, ACM Press.
Ramamoorthi, R., Mahajan, D., Belhumeur, P. (2006). "A First Order Analysis of Lighting, Shading, and Shadows", ACM TOG.
Raskar, R., Agrawal, A., Wilson, C., Veeraraghavan, A. (2008). "Glare Aware Photography: 4D Ray Sampling for Reducing Glare Effects of Camera Lenses", Proc. ACM SIGGRAPH.
Talvala, E.-V., Adams, A., Horowitz, M., Levoy, M. (2007). "Veiling Glare in High Dynamic Range Imaging", Proc. ACM SIGGRAPH.
Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. (2005). "Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform", Proc. Workshop on Advanced 3D Imaging for Safety and Security, in conjunction with CVPR 2005.
Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J. (2007). "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing", Proc. ACM SIGGRAPH.
Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M. (2005). "High Performance Imaging Using Large Camera Arrays", ACM Transactions on Graphics (Proc. SIGGRAPH), Vol. 24, No. 3, pp. 765-776.
Yang, J.C., Everett, M., Buehler, C., McMillan, L. (2002). "A Real-Time Distributed Light Field Camera", Proc. Eurographics Rendering Workshop 2002.
Zomet, A., Feldman, D., Peleg, S., Weinshall, D. (2003). "Mosaicing New Views: The Crossed-Slits Projection", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 25, No. 6, June 2003, pp. 741-754.
Zwicker, M., Matusik, W., Durand, F., Pfister, H. (2006). "Antialiasing for Automultiscopic 3D Displays", Eurographics Symposium on Rendering, 2006.

Archives of light fields

"UCSD/MERL Light Field Repository"
Which course did you enjoy most at University? Why did you like it better than other courses? In my case, I really liked stochastic geometry. It was quite theoretical - e.g., estimating the value of pi by throwing a needle onto a rectangular grid and counting how many times the needle hit one of the lines in the grid. But it had some nice applications too. One of the problems (known as the third Grenander problem, or the exterior/interior problem) consisted of estimating the shape of an oil field: about 20 oil wells had been dug, and we knew which ones had oil and which ones didn't. The only assumption was that the shape of the oil field was convex. We eventually developed a fast algorithm to compute a convex domain in any dimension (when the dimension is 2 or 3, the problem is easy). As you can imagine, the issue was trying to estimate the shape and extent of the oil field with as few wells as possible, since drilling wells outside the field is expensive.
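The interior estimate described here - the convex hull of the productive wells - is easy to reproduce; below is a minimal Python sketch with simulated well locations and outcomes (all values are made up):

    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    wells = rng.uniform(0, 10, size=(20, 2))   # hypothetical well coordinates (km)
    has_oil = rng.random(20) < 0.5             # hypothetical drilling outcomes

    # Under the convexity assumption, the hull of the wells that struck oil
    # is an interior (lower-bound) estimate of the field's extent.
    hull = ConvexHull(wells[has_oil])
    print("interior area estimate:", hull.volume)  # in 2-D, .volume is the area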
Fox Valley ACT Tutors

...Currently, I am in my thirteenth year as a professional educator, and I have served a couple of roles, including teacher, tutor, and counselor. I have worked with a range of students, including students with ADD/ADHD. In addition, for the last year I have worked one-on-one a couple of hours a week as a sort of big brother with a ten-year-old with ADHD. 20 Subjects: including ACT Math, Spanish, English, writing

...As an undergraduate, I spent a semester studying Archeology and History in Greece. Thus I bring first-hand knowledge to your history studies. I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational Biology for my Ph.D. dissertation. 41 Subjects: including ACT Math, chemistry, physics, English

...I have served as an ACT Preparation Instructor for the past two years. I have been given the chance to work with high school and college students to prepare for upcoming ACT examinations. I am a certified Illinois School Counselor. 16 Subjects: including ACT Math, calculus, geometry, algebra 1

...I believe that with hard work, practice, and perseverance all students can be proficient in mathematics. I enjoy math and find helping students to succeed to be very rewarding. Your success is very important to me, so contact me and let's get started! 8 Subjects: including ACT Math, geometry, algebra 1, statistics

...Qualification: Masters in Computer Applications. My approach: I assess the child's learning ability in the first class and then prepare an individual lesson plan. I break down math problems for the child, to make him/her understand in an easy way. I work with the child to develop his/her analytical skills. 8 Subjects: including ACT Math, geometry, algebra 1, algebra 2
Determine the shear-force and bending-moment equations for the beam?

Determine the shear-force and bending-moment equations for the beam. Then sketch the diagrams, using the aforementioned equations if necessary to ascertain key points in the diagrams, such as the position between the supports where V = 0. What is the bending moment there? The following is the figure and how I drew the free body diagram. Remember, I only neglected the couple forces, not the moment they cause. I found:

W = 180 N
Ay = 60 N
By = 1120 N

No need for the sketching part, I can do that. Just need help with the equations. Any help is greatly appreciated.
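Without the attached figure the actual loading can't be recovered, but the general recipe is the same for any beam: cut at a position x, sum vertical forces for V(x), then integrate (or sum moments) for M(x). Here is a hedged NumPy sketch for a generic simply supported beam under a uniform load; the span and load intensity below are placeholders, not values from this problem:

    import numpy as np

    L, w = 6.0, 30.0   # assumed span (m) and uniform load (N/m) -- placeholders
    Ay = 60.0          # left support reaction from the statics above

    x = np.linspace(0.0, L, 601)
    V = Ay - w * x                  # shear-force equation: V(x) = Ay - w*x
    M = Ay * x - w * x**2 / 2.0     # bending-moment equation: M(x) = Ay*x - w*x^2/2

    x0 = Ay / w                     # position between the supports where V = 0
    print("V = 0 at x =", x0, "m; M there =", Ay * x0 - w * x0**2 / 2.0, "N*m")

The bending moment is largest where V = 0, which is exactly the point the problem asks about.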
Matches for: American Mathematical Society Translations - Series 2: Advances in the Mathematical Sciences

2005; 277 pp; hardcover
Volume: 216
ISBN-10: 0-8218-4040-1
ISBN-13: 978-0-8218-4040-5
List Price: US$120
Member Price: US$96
Order Code: TRANS2/216

This collection presents new and interesting applications of Poisson geometry to some fundamental, well-known problems in mathematical physics. In addition to advanced Poisson geometry, the methods used by the authors include unexpected algebras with non-Lie commutation relations, nontrivial (quantum) Kählerian structures of hypergeometric type, dynamical systems theory, semiclassical asymptotics, and more. The volume is suitable for graduate students and researchers interested in mathematical physics. Other AMS publications by M. Karasev include Nonlinear Poisson Brackets: Geometry and Quantization; Coherent Transform, Quantization, and Poisson Geometry; and Asymptotic Methods for Wave and Quantum Problems.

Readership: Graduate students and research mathematicians interested in mathematical physics.

Contents:
• M. Karasev - Noncommutative algebras, nanostructures, and quantum dynamics generated by resonances
• M. Karasev and E. Novikova - Algebras with polynomial commutation relations for a quantum particle in electric and magnetic fields
• Y. Vorobjev - Poisson structures and linear Euler systems over symplectic manifolds
• Y. Vorobjev - Poisson equivalence over a symplectic leaf
Unit conversions for mere mortals

So, you are taking a college science course. Maybe it is physics, maybe it is chemistry, maybe it's a lab. Either way, you always end up with these problems that involve unit conversions. You think you have the hang of it, but sometimes you make some mistakes. Here is my explanation for converting units.

Convert units? Me? Why? I have google.

Yes, that is true, google (for the most part) does an excellent job at unit conversions. But... I doubt your instructor will let you use google on your test. Don't you think you should have a good idea of how to do it? Don't worry. Unit conversion only involves 1 thing.

Unit conversion is multiplication by 1

Yes, really. The key thing to realize is that units are an important part of a number and we don't want to change what that number represents, just its units. Take for instance the number 5. Suppose I do the following:

5 * (2/2)

What would I get? Let me just work it out.

5 * (2/2) = 10/2 = 5

It is the same thing as what I started with. You might say "duh - you multiplied by 2/2 which is one". And you would be correct. What if I do the following:

5 * (6/(3*2))

Well, that will still be equal to 5 because 6/3 is still 2. In this case, I am still multiplying 5 by 1. If you are ok with this, then you are ready for a real unit conversion. First, suppose I measure the length of my desk and I get:

L = 55 cm

Note that the units are important. If I measured the desk and got a value of just 55, that is meaningless. 55 what? 55 chickens? 55 gobstoppers? 55 golden gate bridges? Nonetheless, suppose I want the length of the desk in inches instead of cm? I only need to multiply by "1" - in this case, my "1" will be created from a fraction with 0.394 inches and 1 cm, since 1 cm = 0.394 inches. That will give:

L = 55 cm * (0.394 in / 1 cm) = 21.67 in

Notice that the cm canceled since that unit was on the top and the bottom. Really, you can consider the unit to be like a variable (in essence it is). But HEY! you may say - I have been doing this anyway. Why can't I just say "to convert cm to inches, multiply by 0.394"? Yes, this is what happens, but not how it happens. With this way, it is easier to do all sorts of conversions. All you need to know is one relationship (like 1 cm = 0.394 inches - OR - 2.54 cm = 1 inch). Let me do a couple of useful examples:

Convert 3 feet per minute to meters per second:

3 ft/min * (0.305 m / 1 ft) * (1 min / 60 s) = 0.015 m/s

You see, it is possible to do a series of conversions in a row. The key thing to remember is to multiply by fractions that have equivalent things in the top and bottom and make the units cancel (notice I multiplied by 0.305 m over 1 ft so the ft cancel). Next example - (this one is tricky for many students):

Convert 345 cm^2 into m^2:

345 cm^2 * (1 m^2 / 100 cm^2) = 3.45 m^2

I did this one wrong (sort of wrong) on purpose. This is similar to the mistake many students make. They say there is 1 m^2 in 100 cm^2. This just is not the case. Here is a simulation of that: This is a square that is 10 smaller squares by 10 smaller squares. Although there are 10 small squares in one side, there are 100 total small squares in this big square. So, for the above problem, it really should be:

345 cm^2 * (1 m / 100 cm) * (1 m / 100 cm) = 345 cm^2 * (1 m^2 / 10,000 cm^2) = 0.0345 m^2

Last example: How fast is 20 furlongs per fortnight in meters per second? Solution: what in the world is a furlong? I know a fortnight is two weeks. Type the following in google:

20 furlongs per fortnight in m/s

Yes, google calculator is pretty awesome. Here are some other fun google conversions to try:
• the mass of the sun in slugs
• 2 kg in solar masses
• 3 cm^3 in ft^2*m

Does that last one even make sense?
Sure, I will do this one:

3 cm^3 * (1 m / 100 cm)^3 * (1 ft / 0.305 m)^2 = 3 x 10^-6 m^3 * (10.75 ft^2 / 1 m^2) = 3.22 x 10^-5 ft^2*m

The number from google is slightly different (check it out), perhaps due to rounding.

1. #1 Uncle Al, January 20, 2009

If all information resides in area rather than volume, local conversions may not be diagnostic of global big stuff.

4 - 10 = 9 - 15
Add 25/4 to both sides: 4 - 10 + 25/4 = 9 - 15 + 25/4
Write both sides as complete squares: (2 - 5/2)^2 = (3 - 5/2)^2
Take the square root of both sides: 2 - 5/2 = 3 - 5/2
Add 5/2 to both sides: 2 = 3
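For readers who want to script these multiply-by-one chains rather than type them into google, here is a small Python sketch (my own illustration, not from the post) that treats a conversion as a product of fractions:

    # Each conversion factor is a (numerator, denominator) pair equal to 1,
    # e.g. (0.394 in) per (1 cm). Units are tracked only in the comments here;
    # the arithmetic is just repeated multiplication by 1.
    def convert(value, *factors):
        for top, bottom in factors:
            value *= top / bottom
        return value

    # 3 ft/min -> m/s: multiply by (0.305 m / 1 ft) and (1 min / 60 s)
    print(convert(3, (0.305, 1), (1, 60)))    # ~0.01525 m/s

    # 345 cm^2 -> m^2: the length factor (1 m / 100 cm) is applied twice
    print(convert(345, (1, 100), (1, 100)))   # 0.0345 m^2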
A SizeSequence object efficiently maintains an ordered list of sizes and corresponding positions. One situation for which a SizeSequence might be appropriate is in a component that displays multiple rows of unequal size. In this case, a single SizeSequence object could be used to track the heights and Y positions of all rows. Another example would be a multi-column component, such as a JTable, in which the column sizes are not all equal. The JTable might use a single SizeSequence object to store the widths and X positions of all the columns. The JTable could then use the SizeSequence object to find the column corresponding to a certain position. The JTable could update the SizeSequence object whenever one or more column sizes changed. The following figure shows the relationship between size and position data for a multi-column component. In the figure, the first index (0) corresponds to the first column, the second index (1) to the second column, and so on. The first column's position starts at 0, and the column occupies size[0] pixels, where size[0] is the value returned by getSize(0). Thus, the first column ends at size[0] - 1. The second column then begins at the position size[0] and occupies size[1] (getSize(1)) pixels. Note that a SizeSequence object simply represents intervals along an axis. In our examples, the intervals represent height or width in pixels. However, any other unit of measure (for example, time in days) could be just as valid.

Implementation Notes

Normally when storing the size and position of entries, one would choose between storing the sizes or storing their positions instead. The two common operations that are needed during rendering are: getIndex(position) and setSize(index, size). Whichever choice of internal format is made, one of these operations is costly when the number of entries becomes large. If sizes are stored, finding the index of the entry that encloses a particular position is linear in the number of entries. If positions are stored instead, setting the size of an entry at a particular index requires updating the positions of the affected entries, which is also a linear calculation. Like the above techniques, this class holds an array of N integers internally but uses a hybrid encoding, which is halfway between the size-based and positional-based approaches. The result is a data structure that takes the same space to store the information but can perform most operations in Log(N) time instead of O(N), where N is the number of entries in the list. Two operations that remain O(N) in the number of entries are the insertEntries and removeEntries methods, both of which are implemented by converting the internal array to a set of integer sizes, copying it into the new array, and then reforming the hybrid representation in place.
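A short usage sketch of the API described above (the class and methods are as documented; the column widths are invented for illustration):

    import javax.swing.SizeSequence;

    public class ColumnDemo {
        public static void main(String[] args) {
            // Five table columns, each initially 80 pixels wide.
            SizeSequence columns = new SizeSequence(new int[] {80, 80, 80, 80, 80});

            columns.setSize(2, 120); // widen the third column

            // X position where column 3 starts: 80 + 80 + 120 = 280.
            System.out.println(columns.getPosition(3));

            // Which column encloses x = 150? Column 1 (it spans pixels 80-159).
            System.out.println(columns.getIndex(150));
        }
    }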
For the circuit below, find the gain V0/Vs in terms of the resistors. Assume the op-amps are ideal. For R1 = R2 = 1 kOhm, R3 = 3 kOhm, R4 = 6 kOhm, R5 = 5 kOhm, find the maximum amplitude of Vs before the output distorts, if Vs(t) = As*sin(2*pi*10^4*t) and the slew rate of the op-amp is 0.5 V/us.

Electrical Engineering
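Without the figure the resistor gain expression can't be pinned down, but the slew-rate half of the question follows a standard formula; here is a Python sketch with the given numbers (the final step assumes whatever gain the figure implies):

    import math

    SR = 0.5e6     # slew rate: 0.5 V/us expressed in V/s
    f = 1e4        # frequency of Vs(t) = As*sin(2*pi*1e4*t)

    # A sinusoid Ao*sin(2*pi*f*t) has maximum slope 2*pi*f*Ao, so the largest
    # undistorted output amplitude is:
    Ao_max = SR / (2 * math.pi * f)
    print(Ao_max)  # about 7.96 V

    # The maximum input amplitude is then As_max = Ao_max / |V0/Vs|, using the
    # gain determined from the resistor network in the figure.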
1. Introduction

In my paper Local Knowledge in Financial Markets (2014), I study a problem where assets have with each asset's exposure to a given attribute given by In this post, I show that the signal opacity and recovery bounds become arbitrarily close in a large market. The analysis in this post primarily builds on work done in Donoho and Tanner (2009) and Wainwright (2009).

2. Motivating Example

This sort of inference problem pops up all the time in financial settings. Suppose you moved away from Chicago a year ago, and now you're moving back and looking for a house. When studying a list of recent sales prices, you find yourself a bit surprised. People seem to have changed their preferences for By contrast, if Here's the key point. The problem changes character at Yet, the dimensionality in this toy example can be confusing. There is obviously something different about the problem at

3. Non-Random Analysis

I start by exploring how the required number of observations, where each column of the data matrix corresponds to a number With the Here's the key observation. Only the absolute difference between The probability that traders select the correct attribute after seeing only I plot this statistic for

4. Introducing Randomness

Previously, the matrix of attributes was strategically chosen so that the set of recovers the true First, I study the case where there is no noise (i.e., where 10)?" I remove the noise to make the inference problem as easy as possible for traders. Thus, the proposition below which characterizes this minimum number of observations gives a lower bound. I refer to this number of observations as the signal opacity bound and write it as

Proposition (Donoho and Tanner, 2009): Suppose 10) will recover

Next, I turn to the case where there is noise (i.e., where 10)?" Define traders' error rate after seeing 10) chooses the wrong subset of attributes (i.e., makes an error) given the true support Thus, the proposition below which characterizes this number of observations gives an upper bound of sorts. I refer to this number of observations as the signal recovery bound and write it as

Proposition (Wainwright, 2009): Suppose The only cognitive constraint that traders face is that their selection rule must be computationally tractable. Under minimal assumptions a convex optimization program is computationally tractable in the sense that the computational effort required to solve the problem to a given accuracy grows moderately with the dimensions of the problem. Natarajan (1995) explicitly shows that

5. Discussion

What is really interesting is that the signal opacity bound, i.e., it plots how big the gap is relative to the size of the signal recovery bound
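As a self-contained illustration of the computationally tractable, convex selection rule discussed above (simulated data, not the paper's calibration), here is a sparse-support recovery in Python with scikit-learn's Lasso:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    N, K, S = 200, 50, 3                    # observations, attributes, support size

    X = rng.standard_normal((N, K))         # asset exposures to each attribute
    beta = np.zeros(K)
    support = rng.choice(K, size=S, replace=False)
    beta[support] = 3.0                     # a few large, nonzero loadings
    y = X @ beta + 0.1 * rng.standard_normal(N)

    fit = Lasso(alpha=0.1).fit(X, y)        # l1-penalized (convex) selection
    print("true support:     ", np.sort(support))
    print("recovered support:", np.flatnonzero(fit.coef_))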
Haskell Code by HsColour

{- |
We use a hierarchy of signal wrappers in order to capture all features of a signal. At the bottom there is the signal storage as described in "Synthesizer.Storage". With the element type of the storage you decide whether you want a mono or stereo signal, and you decide on the precision, fixed point vs. floating point, and so on. However, due to Haskell's flexible type system you don't need to decide finally on a type for your signal processing routines. E.g. mono and stereo signals can be handled together using the "Algebra.Module" class from the numeric-prelude package. (An algebraic module is a vector space without requiring availability of a division of scalars.)

You can use the storage types directly using the functions from "Synthesizer.Plain.Signal" and "Synthesizer.Generic.Signal" and its cousins. This is in a way simple, since you do not have to bother with units and complicated types, but you miss type safety and re-usability. E.g. you have to give frequencies as ratios of the sampling rate. If you later decide to change the sampling rate, you must rewrite all time and frequency values. If you anticipate changes in sampling rate, you may write those values as ratios of a global sampling rate right from the start. But you might want different sample rates for some parts of the computation, or you may want sample rates that have no time dimension, but, say, length dimension. The advanced system described below handles all these cases for you.

Ok, we said that at the bottom there is the signal storage. The next level is the decision whether the raw data is interpreted as a straight or as a cyclic signal. Most of the signals you are using will be "Synthesizer.Dimensional.Straight.Signal". Currently, "Synthesizer.Dimensional.Cyclic.Signal" is only needed for Fourier transform and as input to oscillators. To get a straight signal out of storablevector data, you will write

> import qualified Synthesizer.Storable.Signal as Store
> import qualified Synthesizer.Dimensional.Straight.Signal as Straight
> type MySignal y = Straight.T Store.T y

Note that @Straight.T@ has the type constructor @Store.T@ as first argument, not the entire storage type. This way compositions of such wrappers are automatically Functors. However, I'm not completely certain that this is good, since 'fmap' allows to do unintended things (e.g. switch from a numeric to a non-numeric element type).

The next level copes with amplitudes and their units. An amplitude and its unit are provided per signal, not per sample. We think that it is the most natural way, and it is also an efficient one. Since the signal might be a stereo signal, the numeric type of the amplitude can differ from the storage element type. Usually, the first and the latter one are related by an "Algebra.Module" constraint. You get a signal with amplitude by

> import qualified Synthesizer.Dimensional.Amplitude.Signal as Amp
> type MySignal v y yv = Amp.T v y (Straight.T Store.T) yv

where @v@ is the dimension of the amplitude of type @y@. The storage element type, a vector with respect to @y@, is of type @yv@.

In some cases, an amplitude with a physical dimension just makes no sense. Imagine a control signal consisting of @Bool@ elements like a gate signal, or a signal containing elements of an enumeration for switching between signals depending on the time. For some control signals the amplitude unit is one. We call these signals flat.
In this case you can choose whether you use an explicit amplitude with @Scalar@ dimension or no amplitude wrapper at all. Most signal processors handle both kinds of flat signals via the corresponding type class in "Synthesizer.Dimensional.Abstraction.Flat". There is a special signal type for Dirac impulses that does not fit this scheme, that is, it cannot be equipped with an amplitude. See "Synthesizer.Dimensional.Rate.Dirac".

Last but not least we want to look at how to handle sample rates. Our goal is to write signal processes that do not depend on the sample rate. E.g. we want to have an exponential decay with a half-life of one second. A second means 44100 samples at 44100 Hz sample rate, or 22050 samples at 22050 Hz sample rate. We want to abstract from the particular number of samples in order to be able to generate a signal at any sample rate (i.e. quality) we like. The ideal representation of a signal would be a real function, and we try to come close to it. (Not quite, because a Dirac impulse is not a real function, but we need it as identity element of the convolution.) To this end we can equip a discrete signal with a sample rate, see "Synthesizer.Dimensional.RateWrapper". This alone, however, leads to several problems:

* When combining some signals, it is not clear how to cope with different sample rates. Say you want to mix signals @a@ and @b@. Shall @mix a b@ have the sample rate of @a@ or that of @b@ or a new one? How shall the signals convert to a new rate? Since an automatically chosen method can always be inappropriate (either too slow or too low quality), the caller has to give it explicitly to 'mix'. This is not only inconvenient for the caller, but also requires a lot of boilerplate code in functions like 'mix'.

* An alternative solution to the problem above is to check before mixing whether the sample rates are equal, and abort with an error if they differ. This way no decisions on the sample rate and subsequent conversions are necessary. This still needs boilerplate in signal processors. It also does not tell the user by types whether a processor can handle differing sample rates. Generally, dynamic checks are both inefficient and an inferior way to indicate programming errors, since they are only caught at run-time, if at all.

* Both solutions suffer from the inconvenience of specifying the sample rate in all leaves, e.g. @mix (oscillator 44100 10) (oscillator 44100 11)@. Naturally, when you want to get a signal with a 44100 Hz sample rate, you perform all signal processes at this rate. Even if you want to use oversampling, you will perform all signal processes at the higher rate and downsample only once at the end. Thus we introduce a way to run a set of signal processes in the context of a sample rate. It is still sensible and possible to escape from this context or enter it again.

* You need to be able to enter a sample rate context with a signal read from disk, with a sample rate that is not known at compile time. You might also intentionally compute a control signal at a low sample rate and convert it to a sample rate context for filtering. Generally the scheme of functions that allow different sample rates is: Use the sample rate of the output signal as context. Take all signals with independent sample rates as inputs outside the context. The according function is 'Synthesizer.Dimensional.Rate.Filter.frequencyModulationDecoupled'.

* When you want to play a sound or write it to disk, you must choose a sample rate and fix the computation to that rate.
This conversion, however, means running the whole computation within one sample rate context, since everything in that context depends on the sample rate. The according function is 'Synthesizer.Dimensional.RateWrapper.runProcess'.

The sample rate context is provided by "Synthesizer.Dimensional.Process". It is a Reader monad, but we only need applicative functor methods for signal processing. This context is equipped with the type parameter @s@, just as we know it from the 'Control.Monad.ST.ST' monad. It also serves the same purpose: We tag both signals and the sample rate context with the type parameter @s@. The @forall s@ constraint for @runProcess@ ensures that a signal with such a tag remains in the context. You can only escape the sample rate context by rendering the signal and attaching the sample rate to the rendered signal. Haskell's type system does not allow restricting the types that the sample rate tag can wrap, so in principle you can abuse it to wrap many things. However, we do not provide such functions, and this way the wrappable types are restricted anyway.
-}
module Synthesizer.Dimensional.Overview where
Maxwell bridge

A Maxwell bridge (in long form, a Maxwell-Wien bridge) is a type of Wheatstone bridge used to measure an unknown inductance in terms of calibrated resistance and capacitance.

With reference to the picture, in a typical application $R_1$ and $R_4$ are known fixed entities, and $R_2$ and $C_2$ are known variable entities. $R_2$ and $C_2$ are adjusted until the bridge is balanced. $R_3$ and $L_3$ can then be calculated based on the values of the other components:

$R_3 = \frac{R_1 \cdot R_4}{R_2}$

$L_3 = R_1 \cdot R_4 \cdot C_2$

To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor will be installed and more than one resistor will be made variable. The additional complexity of using a Maxwell bridge over simpler bridge types is warranted in circumstances where either the mutual inductance between the load and the known bridge entities, or stray electromagnetic interference, distorts the measurement results. The capacitive reactance in the bridge will exactly oppose the inductive reactance of the load when the bridge is balanced, allowing the load's resistance and reactance to be reliably determined.
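A quick numeric check of the two balance equations in Python (the component values are hypothetical):

    # Hypothetical balanced-bridge values: resistances in ohms, capacitance in farads.
    R1, R2, R4 = 1_000.0, 2_500.0, 10_000.0
    C2 = 100e-9

    R3 = R1 * R4 / R2    # resistance of the unknown inductor
    L3 = R1 * R4 * C2    # inductance in henries
    print(f"R3 = {R3:.0f} ohm, L3 = {L3 * 1e3:.0f} mH")  # R3 = 4000 ohm, L3 = 1000 mH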
Math Forum Discussions

Topic: MATH CONTENT SPECIALTY TEST

Posted by pat jr, May 11, 2003:

Thanks for recommending that KAPLAN website from the teacher forum. Did you take the CST on May 10? I was also there taking the LAST and the ATS-W. How was the test if you took it? What type of questions were on there? Did you find the KAPLAN site you recommended worthwhile? Any suggestions for me when I take it? I graduated from college in 1988 and have not yet taught Calculus or Pre-Calc in high school. The highest level course I teach is Sequential III (Algebra-Trig, etc.). I have been out of the calculus loop for a while now and I'm worried that I will not remember enough to pass the CST in Math. Thanks again.
Solving for 3 variables questions :o help!

1. Solve the system by elimination: x + 5y - 4z = -10, 2x - y + 5z = -9, 2x - 10y - 5z = 0.

2. Solve the system by substitution: 2x - y + z = -4, z = 5, -2x + 3y - z = -10.

3. A food store makes an 11-pound mixture of peanuts, almonds, and raisins. The cost of peanuts is $1.50 per pound, almonds cost $3.00 per pound, and raisins cost $1.50 per pound. The mixture calls for twice as many peanuts as almonds. The total cost of the mixture is $21.00. How much of each ingredient did the store use?

Help with any one of them would be great (:
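All three reduce to 3x3 linear systems; here is a NumPy check of the answers (the elimination and substitution steps themselves still need to be shown by hand):

    import numpy as np

    # 1. Elimination: x + 5y - 4z = -10, 2x - y + 5z = -9, 2x - 10y - 5z = 0
    A1 = np.array([[1.0, 5, -4], [2, -1, 5], [2, -10, -5]])
    print(np.linalg.solve(A1, [-10.0, -9, 0]))   # x = -5, y = -1, z = 0

    # 2. Substitution: plug z = 5 into the other two equations
    A2 = np.array([[2.0, -1, 1], [0, 0, 1], [-2, 3, -1]])
    print(np.linalg.solve(A2, [-4.0, 5, -10]))   # x = -8, y = -7, z = 5

    # 3. Mixture: p + a + r = 11, 1.5p + 3a + 1.5r = 21, p = 2a
    A3 = np.array([[1.0, 1, 1], [1.5, 3, 1.5], [1, -2, 0]])
    print(np.linalg.solve(A3, [11.0, 21, 0]))    # 6 lb peanuts, 3 lb almonds, 2 lb raisins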
[Numpy-discussion] Interact with matplotlib in Sage
Gökhan Sever gokhansever@gmail....
Sun Jan 24 11:54:37 CST 2010

I have thought this might be interesting to share. Register at www.sagenb.org or try on your local Sage notebook using the following:

# Simple example demonstrating how to interact with matplotlib directly.
# Comment plt.clf() to get the plots overlay in each update.
# Gokhan Sever & Harald Schilly (2010-01-24)

from scipy import stats
import numpy as np
import matplotlib.pyplot as plt

@interact
def plot_norm(loc=(0,(0,10)), scale=(1,(1,10))):
    rv = stats.norm(loc, scale)
    x = np.linspace(-10, 10, 1000)
    plt.clf()
    plt.plot(x, rv.pdf(x))
    plt.savefig('plot.png')  # the Sage notebook picks up the saved figure

A very easy to use example, also well-suited for learning and demonstration purposes.

Posted at: http://wiki.sagemath.org/interact/graphics#Interactwithmatplotlib

Have fun ;)
Match the correct equation with the graph.
y = -3x + 4
y = 5/2x + 5
y = -2/5x + 2
y = -3/2x + 2
x = 4

Helper: Do you know of the y-intercept concept?
Asker: no i do not.........
Helper: It means where the graph touches the y-axis. Look at the graph you have provided and tell me where your graph touches (intercepts) the y-axis.
Asker: it touches at 5
Helper: Good. So which of the options in the question says that the equation touches at 5?
Asker: id say this one right? y = -2/5x + 2
Helper: Ok, you're close, but listen. Whenever you see an equation of the form y = mx + b, here with m = (-2/5) and b = 2, you know that b is always the y-intercept. Since b = 2 here, it means that the graph touches at 2. Now check them again and tell me: what do you think?
Asker: so it would be this right? y = 5/2x + 5
Helper: Yes it would. Good job!
Asker: thanks ;D
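(A quick worked check, added alongside the thread rather than taken from it: setting x = 0 in y = (5/2)x + 5 gives y = (5/2)·0 + 5 = 5, so among the five candidates only y = 5/2x + 5 crosses the y-axis at 5. That is the same b-term reasoning the helper describes with y = mx + b.)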
{"url":"http://openstudy.com/updates/5064b00ce4b0da5168be4f16","timestamp":"2014-04-20T11:17:41Z","content_type":null,"content_length":"52613","record_id":"<urn:uuid:edb7f4a9-8351-4727-a343-a941d4373428>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Spatiotemporal Hotspots Analysis for Exploring the Evolution of Diseases: An Application to Oto-Laryngopharyngeal Diseases
Advances in Fuzzy Systems, Volume 2013 (2013), Article ID 385974, 7 pages. Research Article.

^1Università degli Studi di Napoli Federico II, Dipartimento di Architettura, Via Toledo 402, 80134 Napoli, Italy
^2Seconda Università degli Studi di Napoli, Dipartimento di Psichiatria, Neuropsichiatria Infantile, Audiofoniatria e Dermatovenereologia, L.go Madonna delle Grazie, 80138 Napoli, Italy
^3Centre of Excellence IT4 Innovations, Institute for Research and Applications of Fuzzy Modelling, University of Ostrava, 30. dubna 22, 70103 Ostrava, Czech Republic
^4Università degli Studi di Salerno, Dipartimento di Informatica, Via Ponte don Melillo, 80084 Fisciano, Salerno, Italy

Received 1 May 2013; Revised 29 July 2013; Accepted 29 July 2013. Academic Editor: Salvatore Sessa.

Copyright © 2013 Ferdinando Di Martino et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a spatiotemporal analysis of hotspot areas based on the Extended Fuzzy C-Means (EFCM) method implemented in a geographic information system. The method has been adapted for detecting spatial areas with high concentrations of events and tested to study their temporal evolution. The data consist of georeferenced patterns corresponding to the residences of patients in the district of Naples (Italy) on whom a surgical intervention to the oto-laryngopharyngeal apparatus was carried out between the years 2008 and 2012.

1. Introduction

In a GIS, the impact of a phenomenon on a specific area due to proximity to an event (e.g., the study of the impact area of an earthquake, or the constraint area around a river basin) is analyzed using buffer-area geoprocessing functions. Given a geospatial event topologically represented as a georeferenced point, line, or area element, an atomic buffer area is constituted by circular areas centered on the element. For example, if the event is the epicenter of an earthquake, georeferenced by a point, a set of buffer areas is formed by concentric circular areas around that point; the radius of each circular buffer area is defined a priori.

When it is not possible to define an impact area statically and we need to determine which area is affected by the presence of a consistent set of events, we face the problem of detecting this area as a cluster on which the georeferenced events are concentrated. These clusters are georeferenced, represented as polygons on the map, and called hotspot areas. The study of hotspot areas is vital in many disciplines, such as crime analysis [1-3], which studies the spread of criminal events across a territory; fire analysis [4], which analyzes the spread of fires over forested areas; and disease analysis [5-7], which studies the localization of disease foci and their temporal evolution. The clustering methods mainly used for detecting hotspot areas are density-based algorithms (see [8, 9]); they can detect the exact geometry of the hotspots, but they are highly expensive in terms of computational complexity, and in the great majority of cases it is not necessary to determine the shape of the clusters exactly.
The clustering algorithm most commonly used, thanks to its linear computational complexity, is the Fuzzy C-Means algorithm (FCM) [10], a partitive fuzzy clustering method that uses the Euclidean distance and determines cluster prototypes as points. Let $X = \{x_1, \dots, x_N\} \subset \mathbb{R}^n$ be a dataset composed of patterns $x_j = (x_{j1}, \dots, x_{jn})$, where $x_{jk}$ is the $k$th component (feature) of the pattern $x_j$. The FCM algorithm minimizes the following objective function:

$$J_m(U, V) = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^m \, d_{ij}^2, \qquad (1)$$

where $C$ is the number of clusters, fixed a priori, $u_{ij}$ is the membership degree of the pattern $x_j$ to the $i$th cluster, $V = \{v_1, \dots, v_C\}$ is the set of points given by the centers of the clusters (prototypes), $m > 1$ is the fuzzifier parameter, and $d_{ij}$ is the distance between the center $v_i$ of the $i$th cluster and the $j$th vector $x_j$, calculated as the Euclidean norm:

$$d_{ij} = \|x_j - v_i\|. \qquad (2)$$

Using the Lagrange multipliers method for minimizing the objective function (1), we obtain the following solution for the center of each cluster prototype:

$$v_i = \frac{\sum_{j=1}^{N} u_{ij}^m \, x_j}{\sum_{j=1}^{N} u_{ij}^m} \qquad (3)$$

and for the membership degrees

$$u_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{d_{ij}}{d_{kj}} \right)^{2/(m-1)} \right]^{-1}, \qquad (4)$$

subjected to the constraints

$$u_{ij} \in [0, 1], \qquad \sum_{i=1}^{C} u_{ij} = 1 \quad \text{for every } j. \qquad (5)$$

Initially, the $u_{ij}$'s and the $v_i$'s are assigned randomly and updated in each iteration. If $U^{(t)} = (u_{ij}^{(t)})$ is the membership matrix calculated at the $t$th step, the iterative process stops when

$$\max_{i,j} \left| u_{ij}^{(t)} - u_{ij}^{(t-1)} \right| < \varepsilon, \qquad (6)$$

where $\varepsilon$ is a prefixed parameter.

This algorithm has a linear computational complexity; however, it is sensitive to the presence of noise and outliers; furthermore, the number of clusters $C$ is fixed a priori, and a validity index is needed for determining an optimal value for this parameter. In order to overcome these shortcomings, the EFCM algorithm was proposed in [11, 12], where the cluster prototypes are hyperspheres in the case of the Euclidean metric. Like FCM, the EFCM algorithm is characterized by a linear computational complexity; furthermore, it is robust with respect to the presence of noise and outliers, and the final number of clusters is determined during the iterative process. In [13, 14], the authors propose the use of the EFCM algorithm for detecting hotspot areas. The final hotspots are identified as the detected cluster prototypes and shown on the map as circular areas. In [4], the authors analyze the spatio-temporal evolution of hotspots in fire analysis. The pattern event dataset is partitioned according to the time of the event's detection, so that each subset corresponds to a specific time interval. The authors compare the hotspots obtained in two consecutive years by studying their intersections on the map. In this way, it is possible to follow the evolution of a particular phenomenon.

The cluster prototypes detected by the EFCM method are circular areas on the map that can approximate a hotspot area. Figure 1 shows an example of two circular hotspots, $H_1$ and $H_2$, obtained as clusters at two consecutive times $t_1$ and $t_2$; three different regions can be distinguished:
(i) the area in which the hotspot $H_1$ is not intersected by the hotspot $H_2$ (corresponding to $H_1 \setminus H_2$): this region can be considered as a geographical area in which an event detected early disappears later;
(ii) the region of intersection of the two hotspots ($H_1 \cap H_2$): this region can be considered a geographical area in which the event continues to persist;
(iii) the area in which the hotspot $H_2$ is not intersected by the hotspot $H_1$ (corresponding to $H_2 \setminus H_1$): this region can be considered as a geographical area into which an event not detected early propagates later.
We can study the spatio-temporal evolution of the hotspots by analyzing the interactions between the corresponding circular cluster prototypes obtained for consecutive periods, detecting the presence of new hotspots in regions previously not covered by hotspots and the absence of hotspots in regions previously included in hotspot areas.
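As a concrete illustration of the updates (1)-(6) above, here is a compact numpy sketch of the plain FCM loop (an illustration only, not the authors' implementation; the cluster count, fuzzifier, and threshold defaults are example values):

    import numpy as np

    def fcm(X, C=3, m=2.0, eps=1e-2, max_iter=100, seed=0):
        # X: (N, n) array of patterns; returns memberships U (C, N) and centers V (C, n)
        rng = np.random.default_rng(seed)
        N = X.shape[0]
        U = rng.random((C, N))
        U /= U.sum(axis=0)                                # constraint (5): sum_i u_ij = 1
        for _ in range(max_iter):
            W = U ** m
            V = (W @ X) / W.sum(axis=1, keepdims=True)    # centers, eq. (3)
            D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # distances, eq. (2)
            D = np.fmax(D, 1e-12)                         # guard against zero distances
            U_new = 1.0 / ((D[:, None, :] / D[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
            if np.abs(U_new - U).max() < eps:             # stop criterion, eq. (6)
                return U_new, V
            U = U_new
        return U, V

    # Example on random 2D "event" coordinates:
    pts = np.random.default_rng(1).random((200, 2))
    U, V = fcm(pts, C=4)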
In this research, we present a method for studying the spatio-temporal evolution of hotspot areas in disease analysis; we apply the EFCM algorithm for comparing, over consecutive years, event datasets corresponding to oto-laryngo-pharyngeal disease diagnoses recorded in the district of Naples (Italy). Each event corresponds to the residence of the patient who contracted the disease. We study the spatio-temporal evolution of the hotspots by analyzing the intersections of hotspots corresponding to two consecutive years, the displacement of the centroids, the increase or reduction of the hotspot areas, and the emergence of new hotspots. In Section 2, we give an overview of the EFCM algorithm. In Section 3, we present our method for studying the spatio-temporal evolution of hotspots in disease analysis. In Section 4, we present the results of the spatio-temporal evolution of hotspots for the oto-laryngo-pharyngeal disease diagnosis events detected in the district of Naples (Italy). Our conclusions are in Section 5.

2. The EFCM Algorithm

In the EFCM algorithm, we consider cluster prototypes given by hyperspheres in the $n$-dimensional feature space. The $i$th hypersphere is characterized by a centroid $v_i$ and a radius $r_i$: if $r_i$ is the radius of the $i$th cluster, we say that $x_j$ belongs to it if $\|x_j - v_i\| \le r_i$. The radius is obtained by considering the fuzzy covariance matrix associated with the $i$th cluster, defined as

$$F_i = \frac{\sum_{j=1}^{N} u_{ij}^m (x_j - v_i)(x_j - v_i)^T}{\sum_{j=1}^{N} u_{ij}^m}, \qquad (7)$$

whose determinant gives the volume of the $i$th cluster. Since $F_i$ is symmetric and positive, it can be decomposed in the following form:

$$F_i = P_i \, \Lambda_i \, P_i^T, \qquad (8)$$

where $P_i$ is an orthonormal matrix and $\Lambda_i$ is a diagonal matrix. The radius $r_i$ is then derived from the eigenvalues on the diagonal of $\Lambda_i$ (formula (9); see [12] for the details). The objective function to be minimized has the same form as (1),

$$J_m = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^m \, d_{ij}^2, \qquad (10)$$

where $d_{ij}$ now measures the distance of $x_j$ from the hypersphere prototype, $d_{ij} = \max(0, \|x_j - v_i\| - r_i)$; the membership degrees are updated with the same reciprocal-distance formula as in (4) (11), and, for patterns falling inside a prototype ($d_{ij} = 0$), the membership to that cluster is set to 1 and the memberships to the other clusters to 0 (12). However, the usage of (12) produces the negative effect of diminishing the objective function (10) when a meaningful number of patterns is placed in a cluster, and this fact can prevent the separation of the clusters. A solution to this problem consists in assuming a small starting value of the prototype volumes and then increasing it gradually, with a factor that depends on the number of clusters $C^{(t)}$ at the $t$th iteration. Cluster merging is governed by a symmetric similarity matrix $S = (s_{kl})$, whose entries measure the pairwise similarity of the clusters and are computed from the membership matrix $U$ (13). If $S$ is the matrix at the $t$th iteration and a threshold $\lambda$ is introduced as a limit, two indexes $k$ and $l$ are determined such that $s_{kl}$ is maximal; if $s_{kl} > \lambda$, the $k$th and $l$th clusters are merged by setting

$$u_{kj} := u_{kj} + u_{lj} \quad \text{for every } j, \qquad (14)$$

and the $l$th row can be removed from the matrix $U$. In other words, the EFCM algorithm can be summarized in the following steps:
(1) The user assigns the initial (overestimated) number of clusters $C$, the fuzzifier $m$, the initial value of the merging threshold $\lambda$, and the stop parameter $\varepsilon$.
(2) The membership degrees are assigned randomly.
(3) The centers of the clusters are calculated by using (3).
(4) The radii of the clusters are calculated by using (9).
(5) The membership matrix $U$ is updated by using (11)-(12).
(6) The indexes $k$ and $l$ are determined in such a way that $s_{kl}$ assumes the greatest possible value at the $t$th iteration.
(7) If $s_{kl} > \lambda$, then the $k$th and $l$th clusters are merged via (14) and the $l$th row is deleted from $U$.
(8) If the stop criterion (6) is satisfied, then the process stops; otherwise, go to step (3) for the $(t+1)$th iteration.

3. Hotspots Detection and Evolution in Disease Analysis

Each pattern is given by the event corresponding to the residence of a patient in whom a specific disease has been detected. The two features of the pattern are the geographic coordinates of the event. The first step of our process is a geocoding activity, necessary for obtaining the event dataset starting from the street addresses of the patients.
To ensure an accurate matching for the geopositioning of the event, we need the topologically correct road network and the corresponding complete toponymic data. The starting data include the name of the street and the house number of the patient's residence. After the matching process, each record is converted into an event point georeferenced on the map. In Figure 2, the road network of the district of Naples is shown; the name of the street is labeled on the map, and the events are georeferenced as points. Figure 3 shows the data corresponding to an event selected on the map.

After geo-referencing each event, the event dataset can be split by partitioning it by time interval. For example, the event in Figure 3 can be split by the field "Year." For each subset of events, we apply the EFCM algorithm to detect the final cluster prototypes. In this research, we focus on the analysis of the temporal evolution and spread of oto-laryngo-pharyngeal diseases detected within the district of Naples. The datasets, divided by time sequences corresponding to periods of one year, are made up of georeferenced event patterns corresponding to ailments encountered in patients for whom a surgical intervention and the subsequent histological examination were carried out. The event refers to the geoposition of the patient's residence. The data have been further divided by type of disease, in order to analyze the distribution and evolution of each specific disease over the study area.

The EFCM algorithm has been encapsulated in the GIS platform ESRI ArcGIS. Figure 4 shows the input mask created for setting the parameters and running the EFCM algorithm. We can set other numerical fields to add further features to the geographical coordinates. Initially, we set the initial number of clusters, the fuzzifier m, and the error threshold for stopping the iterations. After running EFCM, the number of iterations, the final number of clusters, and the error calculated at the last iteration are reported. The resultant clusters are shown as circular areas on the map and can be saved in a new geographic layer.

The final process concerns the comparative analysis of the hotspots obtained from the clusters corresponding to each subset of events. Figure 5 shows an example of the display on the map of hotspots obtained as final clusters for two consecutive subsets of events. In order to assess the expansion and the displacement of a hotspot, we measure the radius of the hotspot and the distance between the centroids of two intersecting hotspots.

In the next section, we present the results obtained by applying this method to the data corresponding to surgical interventions to the oto-laryngo-pharyngeal apparatus in patients resident in the district of Naples between the years 2008 and 2012. We divide the dataset per year and analyze various types of diseases. Among the most frequent types of disease, the following were analyzed: (i) carcinoma, (ii) edema of bilateral Reinke, (iii) hypertrophy of the inferior turbinate, (iv) nasal polyposis, and (v) bilateral vocal fold prolapse. In the next section, we show the most significant results obtained by applying this method to each partitioned dataset of events.
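For two circular prototypes, the comparison step just described reduces to a couple of elementary measures. The following small Python sketch (an illustration with made-up values, not the ArcGIS tool itself) computes the quantities used in the next section:

    import math

    def compare_hotspots(c1, r1, c2, r2):
        # c1, c2: (x, y) centroids; r1, r2: radii (same units, e.g. km)
        d = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
        return {
            "centroid_shift": d,               # displacement of the hotspot centre
            "radius_change": r2 - r1,          # expansion (>0) or reduction (<0)
            "intersecting": d < r1 + r2,       # the two circles overlap
            "contained": d + min(r1, r2) <= max(r1, r2),  # one circle inside the other
        }

    # Illustrative values only (roughly the situation of a shifting, expanding hotspot):
    print(compare_hotspots((0.0, 0.0), 3.0, (1.0, 0.0), 6.5))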
4. Test Results

We present the results obtained on the event dataset described above for the period between the years 2008 and 2012. We consider first the subset of data corresponding to the edema of bilateral Reinke disease. We fix the fuzzifier parameter to 0.1, the initial number of clusters to 15, and the final iteration error to 1 × 10^−2. Table 1 shows the results obtained for each year. We present the details relating to the comparison of the hotspots obtained by considering the event data for the years 2011 and 2012. Figures 6 and 7 show, respectively, the hotspots obtained by using the pattern subsets of events that occurred in the years 2011 and 2012. Figure 8 shows the overlap of the hotspots obtained for the two years: in red, the hotspots corresponding to the year 2011; in blue, the ones corresponding to the year 2012. Table 2 shows, in the first two columns, the labels of the hotspots in 2011 and 2012; in the third (resp., fourth) column, the radius obtained in 2011 (resp., 2012); the distance between the centroids is given in the fifth column.

The results show that only hotspot 3 obtained for the year 2011 remains almost unchanged in the year 2012. Instead, hotspots 1 and 2 seem to merge into a single larger hotspot (hotspot 1 obtained for the year 2012), and hotspot 4, which shifts by about 1 km, expands: the radius of this hotspot in 2012 is about 6.5 km (hotspot 3 obtained for the year 2012 in Figure 8).

Now we show the results obtained for the nasal polyposis disease. Figure 9 shows the overlap of the hotspots obtained for the two years, 2011 and 2012. In Table 3, the comparison results are reported. The results in Figure 9 show that in 2011 and 2012 there are two hotspots: one covering an area of the city of Naples and the other covering many Vesuvian towns. The two hotspots, which in 2011 covered circular areas with radii of about 3 and 5 km, respectively, in 2012 cover circular areas with radii of about 5 and 7 km, respectively. The histogram in Figure 10 shows the trend of the radii of the two hotspots over time. The spread in recent years of the hotspot surrounding the Vesuvian towns is notable: its radius grew from about 2 km in 2008 to about 7 km in 2012.

Another significant trend concerns the hotspots obtained for the carcinoma disease. Also in this case, the two main hotspots cover the city of Naples and many Vesuvian towns, but here we observe a very marked spread of the hotspot covering the city of Naples (cf. Figure 11); in recent years, the radius of this hotspot has increased up to 9.5 km.

5. Conclusions

The hyperspheres obtained as clusters (circles in the two-dimensional case) by using EFCM can represent hotspots in hotspot analysis; the method has a linear computational complexity and is robust to noise and outliers. In hotspot analysis, the patterns are two-dimensional and the features are given by the geographic coordinates; the cluster prototypes are circles that can provide a good approximation of hotspot areas and can be displayed as circular areas on the map. In this paper, we presented a new method that uses the EFCM algorithm for studying the spatio-temporal evolution of hotspots in disease analysis. We considered the residence information of patients in the district of Naples (Italy) on whom a surgical intervention to the oto-laryngo-pharyngeal apparatus was carried out between the years 2008 and 2012.
A geocoding process is used for geo-referencing the data; then the georeferenced dataset is partitioned per year and per type of disease. We compare the hotspots obtained for each pair of consecutive years and analyze the trend of each hotspot over time, measuring the variation of the radius and the distance between intersecting cluster centroids for two consecutive years. The results show a consistent spread in the last years of the nasal polyposis disease hotspot covering some Vesuvian towns and of the carcinoma disease hotspot covering the city of Naples.

References
1. S. P. Chainey, S. Reid, and N. Stuart, "When is a hotspot a hotspot? A procedure for creating statistically robust hotspot geographic maps of crime," in Innovations in GIS 9: Socioeconomic Applications of Geographic Information Science, D. Kidner, G. Higgs, and S. White, Eds., Taylor and Francis, London, UK, 2002.
2. K. Harries, Geographic Mapping Crime: Principle and Practice, National Institute of Justice, Washington, DC, USA, 1999.
3. A. T. Murray, I. McGuffog, J. S. Western, and P. Mullins, "Exploratory spatial data analysis techniques for examining urban crime," British Journal of Criminology, vol. 41, no. 2, pp. 309–329, 2001.
4. F. Di Martino and S. Sessa, "The extended fuzzy c-means algorithm for hotspots in spatio-temporal GIS," Expert Systems with Applications, vol. 38, no. 9, pp. 11829–11836, 2011.
5. R. M. Mullner, K. Chung, K. G. Croke, and E. K. Mensah, "Introduction: geographic information systems in public health and medicine," Journal of Medical Systems, vol. 28, no. 3, pp. 215–221, 2004.
6. K. Polat, "Application of attribute weighting method based on clustering centers to discrimination of linearly non-separable medical datasets," Journal of Medical Systems, vol. 36, no. 4, pp. 2657–2673, 2012.
7. C. K. Wei, S. Su, and M. C. Yang, "Application of data mining on the development of a disease distribution map of screened community residents of Taipei County in Taiwan," Journal of Medical Systems, vol. 36, no. 3, pp. 2021–2027, 2012.
8. I. Gath and A. B. Geva, "Unsupervised optimal fuzzy clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 773–780, 1989.
9. R. Krishnapuram and J. Kim, "Clustering algorithms based on volume criteria," IEEE Transactions on Fuzzy Systems, vol. 8, no. 2, pp. 228–236, 2000.
10. J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, NY, USA, 1981.
11. U. Kaymak, R. Babuska, M. Setnes, H. B. Verbruggen, and H. M. van Nauta Lemke, "Methods for simplification of fuzzy models," in Intelligent Hybrid Systems, D. Ruan, Ed., pp. 91–108, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
12. U. Kaymak and M. Setnes, "Fuzzy clustering with volume prototypes and adaptive cluster merging," IEEE Transactions on Fuzzy Systems, vol. 10, no. 6, pp. 705–712, 2002.
13. F. Di Martino, V. Loia, and S. Sessa, "Extended fuzzy c-means clustering algorithm for hotspot events in spatial analysis," International Journal of Hybrid Intelligent Systems, vol. 4, pp. 1–14.
14. F. Di Martino and S. Sessa, "Implementation of the extended fuzzy c-means algorithm in geographic information systems," Journal of Uncertain Systems, vol. 3, no. 4, pp. 298–306, 2009.
{"url":"http://www.hindawi.com/journals/afs/2013/385974/","timestamp":"2014-04-20T04:46:52Z","content_type":null,"content_length":"208277","record_id":"<urn:uuid:8e361be7-3346-4d99-8b15-89067da25183>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum computing may actually be useful
NanoTechWire.com, 10/10/2009

In recent years, quantum computers have lost some of their luster. In the early 1990s, it seemed that they might be able to solve a class of difficult but common problems — the so-called NP-complete problems — exponentially faster than classical computers. Now, it seems that they probably can't. In fact, until this week, the only common calculation where quantum computation promised exponential gains was the factoring of large numbers, which isn't that useful outside cryptography. In a paper appearing today in Physical Review Letters, however, MIT researchers present a new algorithm that could bring the same type of efficiency to systems of linear equations — whose solution is crucial to image processing, video processing, signal processing, robot control, weather modeling, genetic analysis and population analysis, to name just a few applications.

Quantum computers are computers that exploit the weird properties of matter at extremely small scales. Where a bit in a classical computer can represent either a "1" or a "0," a quantum bit, or qubit, can represent "1" and "0" at the same time. Two qubits can represent four values simultaneously, three qubits eight, and so on. Under the right circumstances, computations performed with qubits are thus the equivalent of multiple classical computations performed in parallel. But those circumstances are much rarer than was first anticipated. Quantum computers with maybe 12 or 16 qubits have been built in the lab, but quantum computation is such a young field, and the physics of it are so counterintuitive, that researchers are still developing the theoretical tools for thinking about it.

Systems of linear equations, on the contrary, are familiar to almost everyone. We all had to solve them in algebra class: given three distinct equations featuring the same three variables, find values for the variables that make all three equations true. Computer models of weather systems or of complex chemical reactions, however, might have to solve millions of equations with millions of variables. Under the right circumstances, a classical computer can solve such equations relatively efficiently: the solution time is proportional to the number of variables. But under the same circumstances, the time required by the new quantum algorithm would be proportional to the logarithm of the number of variables. That means that for a calculation involving a trillion variables, "a supercomputer's going to take trillions of steps, and this algorithm will take a few hundred," says mechanical engineering professor Seth Lloyd, who along with Avinatan Hassidim, a postdoc in the Research Lab of Electronics, and the University of Bristol's Aram Harrow '01, PhD '05, came up with the new algorithm.

Because the result of the calculation would be stored on qubits, however, "you're not going to have the full functionality of an algorithm that just solves everything and writes it all out," Lloyd says. To see why, consider the way in which each added qubit doubles the capacity of quantum memory. Eight qubits can represent 256 values simultaneously, nine qubits 512 values, and so on. This doubling rapidly yields astronomical numbers. The trillion solutions to a trillion-variable problem would be stored on only about 40 qubits.
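(A quick check of those numbers, not from the article: k qubits can index 2^k amplitudes, so holding N = 10^12 solution entries needs ceil(log2(10^12)) = 40 qubits, matching the figure above; a runtime proportional to log N is likewise on the order of a few tens of steps, landing in the "few hundred" range once constant factors are included.

    import math
    print(math.ceil(math.log2(10**12)))   # -> 40
)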
But extracting all trillion solutions from the qubits would take a trillion steps, eating up all the time that the quantum algorithm saved. With qubits, however, "you can make any measurement you like," Lloyd says. "You can figure out, for instance, their average value. You can say, okay, what fraction of them is bigger than 433?" Such measurements take little time but may still provide useful information. They could, Lloyd says, answer questions like, "In this very complicated ecosystem with, like, 10 to the 12th different species, one of which is humans, in the steady state for this particular model, do humans exist? That's the kind of question where a classical algorithm can't even provide anything."

Greg Kuperberg, a mathematician at the University of California, Davis, who works on quantum algebra, says that the MIT algorithm "could be important," but that he's "not sure yet how important it will be or when." Kuperberg cautions that in applications that process empirical data, loading the data into quantum memory could be just as time consuming as extracting it would be. "If you have to spend a year loading in the data," he says, "it doesn't matter that you can then do this linear-algebra step in 10 seconds."

But Hassidim argues that there could be applications that allow time for data gathering but still require rapid calculation. For instance, to yield accurate results, a weather prediction model might require data from millions of sensors transmitted continuously over high-speed optical fibers for hours. Such quantities of data would have to be loaded into quantum memory, since they would overwhelm all the conventional storage in the world. Once all the data are in, however, the resulting forecast needs to be calculated immediately to be of any use. Still, Hassidim concedes that no one has yet come up with a "killer app" for the algorithm. But he adds that "this is a tool, which hopefully other people are going to use. Other people are going to have to continue this work and understand how to use this in different problems. You do have to think some more."

Indeed, researchers at the University of London have already expanded on the MIT researchers' approach to develop a new quantum algorithm for solving differential equations. Early in their paper, they describe the MIT algorithm, then say, "This promises to allow the solution of, e.g., vast engineering problems. This result is inspirational in many ways and suggests that quantum computers may be good at solving more than linear equations."
{"url":"http://www.nanotechwire.com/news.asp?nid=8736","timestamp":"2014-04-20T01:57:00Z","content_type":null,"content_length":"32869","record_id":"<urn:uuid:42ae798e-453c-4d29-a0d9-5721a7f2fc3a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
Tough Differential Equation Problem

May 2nd 2012, 07:00 PM
I need to find a function F such that it is continuous everywhere, $y'(t)=F(y(t))$ and $y(0)=0$. The only thing I could think of is $y(t)=e^t$, but that obviously doesn't satisfy the initial value. Any help or hints is greatly appreciated.

May 2nd 2012, 09:31 PM
Re: Tough Differential Equation Problem
So you're supposed to find both F and y(t) for which the given conditions are true, or ...? How about $y(t)=\sin t$. Then $y(0)=\sin 0=0$. And also $y'(t)=(\sin t)'=\cos t =\sqrt{1-\sin^2t}=\sqrt{1-(y(t))^2}=F(y(t)).$

May 3rd 2012, 03:21 PM
Re: Tough Differential Equation Problem
I forgot to mention that I need to find a function F such that the initial value problem has infinitely many solutions. I don't know if that changes anything....

May 3rd 2012, 03:26 PM
Re: Tough Differential Equation Problem
So to clear things up: I don't need to find a function y, since this function F should take any function y and spit out its derivative, I think.

May 7th 2012, 08:19 AM
Re: Tough Differential Equation Problem
The problem with this question is that the complexity of any derivative is arbitrary. For example, if we know that the function y is of some fixed general form with constants a and b, it is easy to create a function F that will reliably produce y's derivative. However, even a slight change in the general form of y will upset everything and F will no longer function: while, as stated earlier, the above function F produces the derivative of y, it will not produce the derivative of a differently shaped function x (with constants a, b and c). A bit of manipulation will create a function G that works for both x and y, since they are quite similar, but this won't work in all instances; consider two functions p and q of very different forms. Creating a function that will produce the derivatives of both these functions is an enormous, if not impossible, task. So, my conclusion is: it is possible to create a function F that will produce the derivative of a function y, but the general form of y must first be known. Otherwise we'd have to consider an infinite number of combinations of roots, fractions, logarithms, trigonometric functions and lord knows what else - which, from my humble high-school perspective, seems impossible. Sorry. =( I hope someone else has a more positive answer.
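A standard resolution of the infinitely-many-solutions version (added for completeness; it was not posted in the thread): take $F(y)=\sqrt{|y|}$, which is continuous everywhere. The IVP $y'=\sqrt{|y|}$, $y(0)=0$ has the solution $y\equiv 0$, but also, for every $c\ge 0$, the solution $y_c(t)=0$ for $t\le c$ and $y_c(t)=(t-c)^2/4$ for $t>c$, so there are infinitely many solutions. A quick check for $t>c$: $y_c'(t)=(t-c)/2=\sqrt{y_c(t)}$. Uniqueness fails exactly because $F$ is not Lipschitz at $y=0$, the hypothesis required by the Picard-Lindelöf theorem.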
{"url":"http://mathhelpforum.com/differential-equations/198274-tough-differential-equation-problem-print.html","timestamp":"2014-04-17T01:47:32Z","content_type":null,"content_length":"8752","record_id":"<urn:uuid:e6c1da0a-e307-4d13-bad5-7c2a46f197c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Marlow Heights, MD SAT Math Tutor

...I can teach science, math, spelling, writing, reading, social studies, and other subjects. I believe patience is key to working with elementary students who are struggling with their studies, and my experience has taught me how to be patient and meet each student's individual needs. I have tutored many students in linear algebra at the high school and college levels.
46 Subjects: including SAT math, reading, Spanish, English

...As a Mathematics/Computer Science major in undergraduate school, I graduated with a 3.87 cumulative GPA. Math subjects included higher-level math courses such as Linear Algebra, Discrete Mathematics, Calculus I, II, & III, and Differential Equations. I earned A's in all these subjects.
33 Subjects: including SAT math, English, reading, geometry

...We will also go over test-taking strategies for mastering the CAT. The GMAT is a tricky test, and adult students about to enter business school are called upon to remember mathematics they learned in pre-algebra, algebra and geometry. I can help you achieve your business school dream by reviewing the math you need to know and only the math you need to know.
12 Subjects: including SAT math, geometry, ASVAB, GRE

...I played 4 years of varsity soccer. I was an all-star honorable mention Junior and Senior year, County Exceptional Senior in 2008. Currently I am a travel soccer coach for U9/U10 academies, U15 boys travel, U16 boys travel, and U19 house programs for Vienna Youth Soccer.
16 Subjects: including SAT math, geometry, statistics, algebra 2

...I played Wide Receiver, Defensive Back, and Quarterback. I have coached Youth Football for 5 years as both an assistant and Head Coach, winning 2 championships. I have coached High School football for 3 years, currently serving as Offensive Coordinator, QB Coach, LB Coach for Washington-Lee HS in Arlington, VA.
30 Subjects: including SAT math, reading, chemistry, physics
{"url":"http://www.purplemath.com/Marlow_Heights_MD_SAT_Math_tutors.php","timestamp":"2014-04-18T08:42:17Z","content_type":null,"content_length":"24619","record_id":"<urn:uuid:897343de-72f6-4e5a-ad5c-3fc991421f08>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Propagation of error
November 11, 2011, By Marc in the box

At the outset, this was strictly an exercise of my own curiosity and I didn't imagine writing this down in any form at all. As someone who has done some modelling work in the past, I'm embarrassed to say that I had never fully grasped how one can gauge the error of a model output without having to do some sort of Monte Carlo simulation whereby the model parameters are repeatedly randomized within a given confidence interval. It's relatively easy to imagine that a model containing many parameters, each with an associated error, will tend to propagate these errors throughout. Without getting too far over my head here, I will just say that there are defined methods for calculating the error of a variable if one knows the underlying error of the functions that define it (and I have tried out only a very simple one here!).

In the example below, I have three main variables (x, y, and z) and two functions that define the relationships y~x and z~y. The question is: given these functions, what would be the error of a predicted z value given an initial x value? The most general rule seems to be:

error(z~x)^2 = error(y~x)^2 + error(z~y)^2

However, correlated errors require additional terms (see Wikipedia: Propagation of uncertainty). The following example does just that by simulating correlated error terms using the MASS package's function mvrnorm().
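The post's own R code (using mvrnorm()) sits behind the "Read more" link; as a rough stand-in for the simplest, uncorrelated case, here is a small numpy simulation (an illustration only, not the author's code; the slopes and error scales are invented) that verifies the additive rule above:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000

    x = rng.normal(size=n)
    e_yx = rng.normal(scale=0.5, size=n)    # error of the relation y ~ x
    e_zy = rng.normal(scale=0.3, size=n)    # error of the relation z ~ y (independent here)
    y = 2.0 * x + e_yx                      # y defined through x, plus noise
    z = y + 1.0 + e_zy                      # z defined through y, plus noise

    # Predict z from x through the chained relations and measure the residual error:
    resid = z - (2.0 * x + 1.0)
    print(np.std(resid))                    # ~0.583
    print(np.sqrt(0.5**2 + 0.3**2))         # 0.583..., the quadrature sum of the two errors

With correlated error terms an extra covariance term appears in the sum, which is the case the post goes on to simulate with mvrnorm().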
{"url":"http://www.r-bloggers.com/propagation-of-error/","timestamp":"2014-04-17T18:56:26Z","content_type":null,"content_length":"36763","record_id":"<urn:uuid:a6ceb226-e119-4aea-820f-cec2de5fbf30>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - given height, and velocity thrown upward, find air-time

Quote by:
awww, got you, it's coming together now, the new formula i got is:
y = 30 + 12·sin(90°)·t + (1/2)(−9.8)t^2
solve for t?

Yes, that will give you the correct solution. Apply the quadratic formula, and remember to take the positive root.
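Carrying the hint through (a worked check, not part of the original post): with sin(90°) = 1, setting y = 0 gives 0 = 30 + 12t − 4.9t², i.e. 4.9t² − 12t − 30 = 0. The quadratic formula gives t = (12 ± √(12² + 4·4.9·30)) / (2·4.9) = (12 ± √732) / 9.8, and taking the positive root, t ≈ (12 + 27.06) / 9.8 ≈ 3.99 s of air time.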
{"url":"http://www.physicsforums.com/showpost.php?p=601482&postcount=6","timestamp":"2014-04-20T05:52:10Z","content_type":null,"content_length":"7418","record_id":"<urn:uuid:8ec00c23-8bfc-42b4-8e1e-2e43350a6d49>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Data structures for range searching, 1993

"... We consider the computational problem of finding nearest neighbors in general metric spaces. Of particular interest are spaces that may not be conveniently embedded or approximated in Euclidian space, or where the dimensionality of a Euclidian representation is very high. Also relevant are high-dimensional Euclidian settings in which the distribution of data is in some sense of lower dimension and embedded in the space. The vp-tree (vantage point tree) is introduced in several forms, together with associated algorithms, as an improved method for these difficult search problems. Tree construction executes in O(n log(n)) time, and search is under certain circumstances and in the limit, O(log(n)) expected time. The theoretical basis for this approach is developed and the results of several experiments are reported. In Euclidian cases, kd-tree performance is compared."

Cited by 273 (4 self)
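To make the structure concrete, here is a minimal vantage-point tree sketch in Python (an illustration of the idea, not the paper's implementation; the node layout and the choice of the first point as vantage point are simplifying assumptions):

    def build_vp_tree(points, dist):
        # Recursive vantage-point tree construction; expected O(n log n) build.
        if not points:
            return None
        vp, rest = points[0], points[1:]   # vantage point (could also be picked at random)
        if not rest:
            return {"vp": vp, "mu": 0.0, "inside": None, "outside": None}
        ds = sorted(dist(vp, p) for p in rest)
        mu = ds[len(ds) // 2]              # median distance to the vantage point
        return {"vp": vp, "mu": mu,
                "inside":  build_vp_tree([p for p in rest if dist(vp, p) <  mu], dist),
                "outside": build_vp_tree([p for p in rest if dist(vp, p) >= mu], dist)}

    def nearest(node, q, dist, best=None):
        # Branch-and-bound search: the triangle inequality prunes whole subtrees.
        if node is None:
            return best
        d = dist(q, node["vp"])
        if best is None or d < best[0]:
            best = (d, node["vp"])
        if d < node["mu"]:                 # q falls in the inside ball: search it first
            best = nearest(node["inside"], q, dist, best)
            if d + best[0] >= node["mu"]:  # outside shell may still hold something closer
                best = nearest(node["outside"], q, dist, best)
        else:
            best = nearest(node["outside"], q, dist, best)
            if d - best[0] <= node["mu"]:
                best = nearest(node["inside"], q, dist, best)
        return best

    # Works with any metric, e.g. the Euclidean distance on 2D tuples:
    euclid = lambda a, b: ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
    tree = build_vp_tree([(0, 0), (3, 4), (1, 1), (5, 2)], euclid)
    print(nearest(tree, (2, 2), euclid))   # -> (distance, nearest point)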
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3012405","timestamp":"2014-04-25T01:56:37Z","content_type":null,"content_length":"12343","record_id":"<urn:uuid:83046995-5774-4ea9-99ce-2592427623d0>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
thematic mapping blog

Since I'm making my own Thematic Mapping Engine, I need to understand the math behind proportional symbol calculations. Originally, I thought I would need different equations for different geometric shapes, and my book in cartography gave me the same impression. But after reading this article, I realised that life was not that complicated. This tutorial is a summary of a discussion on the CartoTalk forum. I especially want to thank Dominik Mikiewicz (mika) for his valuable comments and figures. The CartoTalk forum is highly recommended!

Equations for 1D, 2D and 3D proportional symbols:

1-dimensional symbols (height)
This is how the height of bars or prisms is calculated in TME.
Equation: symbolSize = (value / maxValue) * maxSize
PHP: $symbolSize = ($value / $maxValue) * $maxSize
JavaScript: symbolSize = (value / maxValue) * maxSize
Bars or prisms show "real" values scaled down to fit on a map, and you can easily see the relations and which is higher than the other. (I'm not considering the problems caused by perspective and the curvature of the earth.)

2-dimensional symbols (area)
This is how proportional images and regular polygons (e.g. circle, square) are scaled in TME.
Equation: symbolSize = power(value/maxValue; 1/2) * maxSize
PHP: $symbolSize = pow($value/$maxValue, 1/2) * $maxSize
JavaScript: symbolSize = Math.pow(value/maxValue, 1/2) * maxSize
2D symbols use area as their means of expression, and therefore you're dealing with the square root of the shown value. This makes it relatively difficult to assess a value.

3-dimensional symbols (volume)
This is how 3D Collada objects (e.g. cube, sphere) are scaled in TME.
Equation: symbolSize = power(value/maxValue; 1/3) * maxSize
PHP: $symbolSize = pow($value/$maxValue, 1/3) * $maxSize
JavaScript: symbolSize = Math.pow(value/maxValue, 1/3) * maxSize
3D objects use volume as their means of expression, so you're showing the cube root of the value. This makes it difficult to assess a value.

It's important to know that it's one degree harder for the viewer to assess the relative size of 3-dimensional symbols compared to 2-dimensional ones, which again are harder to compare than 1-dimensional ones. As a worked example: a value half the maximum gives a symbol of 0.5 · maxSize in 1D, but √0.5 ≈ 0.71 · maxSize in 2D and ∛0.5 ≈ 0.79 · maxSize in 3D, so 2D and 3D symbols shrink less than the value does. This is clearly visualised in this figure (credit: Dominik Mikiewicz): The figure compares the circle and sphere radius for the same values.

These three images show GDP per capita (2006) using bars (1D), circles (2D) and spheres (3D). A colour scale and legend help the user in assessing and comparing symbols. The visual appearance of 2D and 3D symbols can also be improved by using a logarithmic scale.

3D thematic mapping can be challenging, and I've been considering some of the problematic issues in a series of blog posts (1, 2, 3, 4). There has also been a discussion on the CartoTalk forum, and Rich Treves posted a response on his blog. I released a new version of the Thematic Mapping Engine today as a response to this discussion. The most noticeable new feature is the enhanced colour scale. It's difficult to make a symbol legend in KML, as symbol size varies with scale (zoom). Without a legend, it's very hard to assess the exact values. This is why I duplicate symbology by supporting a colour legend. You can now easily define a colour scale for all thematic mapping techniques. The colour legend informs the user about the range of values (min and max), and where the different symbols are positioned on this range. The colour scale can be unclassed or classed:

• Equal intervals: Each colour class occupies an equal interval along the value range.
• Quantiles: The countries are rank-ordered and an equal number of countries is placed in each colour class. Quantiles are not available for time series.

Here are a few examples (all statistics from UNdata):
Life expectancy at birth, 2005 (equal intervals).
Life expectancy at birth, 2005 (quantiles).
Aids estimated deaths, aged 0-49, 2005 (equal intervals).
Eye candy: Mobile phone subscribers, 2004 (unclassed).
Try it yourself!

Ok, this is probably in the "how to sacrifice accuracy for eye candy" category, but it's still fun! Here is Mobile phone subscribers (statistics from UNdata) visualised using a proportional 3D mobile phone. Mobile phone subscribers in 2003 (download KMZ). 3D phone from Mikeyjm/3D Warehouse. I'm using volume as the scaled parameter, which I think is more accurate than using area or height. You can even change the colour of the cover... :-o Animated version (1980-2004): Download KMZ (NB! You need a quick computer!) I've added a 3D mobile phone and a 3D person (both from 3D Warehouse) to the Thematic Mapping Engine, so you can make this visualisation yourself.

The last issue I want to address in my 3D series is the problems of perspective. I find this issue particularly challenging.

"Same with estimating sizes of oblique-viewed 3D domes for proportional symbols. The problem is further magnified when the data is re-projected to an Earth globe view making the task of estimating heights/sizes of the polygons even harder (since the user has to mentally compensate for the curvature of the earth). In short their concern is we are sacrificing accuracy for eye candy." (Sean Gorman)

Yes, the use of proportional symbols on a 3D globe raises some serious questions. Here are my 3D Collada domes of world population: Download KMZ. At least, the dome shape makes it possible to calculate the volume of each object, as the volume should represent the statistical value. I'm not sure how to scale irregular objects properly - like a 3D person. The main issue, as stated by Sean above, is how the user is going to estimate the volume of the domes when seen in perspective. The size of the domes is determined by two factors: the size of the population and the "distance" from the point of view. This makes it hard to compare 3D objects. One solution is to use a non-perspective projection (orthogonal projection), which makes it easier to make cross-scene comparisons (Shepherd, 2008). Using proportional images with the KML Icon element might be an option. Download KMZ. These symbols keep their relative size when you spin the globe. But what if the user expects the symbols to be scaled as the domes? If I overlay the two symbols it looks like this:

Shepherd, I. D. H., 2008, "Travails in the Third Dimension: A Critical Evaluation of Three-dimensional Geographical Visualization". Book chapter in "Geographic Visualization: Concepts, Tools and Applications", Wiley.

Since I'm on a 3D choropleth jag, I want to address two common problems with prism maps: blocking and lack of north orientation.

"Its possible that a high oil consumption of one country would obscure (or at least complicate the view if the 3D models are translucent) the consumption of a country located behind it, meaning I have to 'fly' around to see the value related to that country." (Rich Treves)

I agree with this statement, even though it's possible to guide the user in various ways: In my first and second blog posts in this 3D series, I explained why I'm using two visual variables (colour and height) to represent the same statistical indicator.
This is another good reason; colour might help the user in identifying hidden prisms. This prism map is created with the Thematic Mapping Engine, and shows life expectancy at birth (both sexes) in 2005. I'm not using a colour scale to represent the statistical value. The prism representing the low life expectancy in Afghanistan is hidden behind Pakistan. If you look at the prisms from above, the "shadows" help you to identify the lower life expectancy in Afghanistan, but it's hard to measure the relative height. By adding a colour scale, it's easier to see that Afghanistan is experiencing lower life expectancy than the neighbouring countries. This is a solution suggested by Slocum (2005/70).

"You are being redundant though by using color and prism height to show the same variable. It might be more interesting (and complicated!) to use the height to show something else to compare to internet usage." (Cory Eicher)

Not everyone likes this approach. Two different visual variables (colour and height) might also mean two different statistical indicators. I think it would be more confusing to combine two indicators on a prism map like this. I would appreciate more feedback about how people perceive prism maps using both height and colour for the same statistical indicator. I agree with Cory that combining two indicators might be more interesting (depending on what you are looking for), but I think another approach could be used with greater success. I'll return to bivariate mapping later.

A reason why KML is good for thematic mapping is the FlyToView, Camera and LookAt elements. By using these elements we can automatically fly the user to the best "prism viewing location" (not so easy with whole-world visualisations...), or to the location where we want to draw the attention. This might create another problem, as rotation and perspective might produce a view that is unfamiliar to users who normally see maps with north at the top (Slocum et al., 2005). This is a bigger challenge for paper-based prism maps than digital ones, since the user can switch easily between different scales to restore geographical orientation. Still, we're here at a key point: 3D visualisations tend to impose severe interaction demands on users (Shepherd, 2008/201). This can also be beneficial, as the user might learn and see more in interactive and explorative environments.

Slocum, T. A., 2005, "Thematic Cartography and Geographic Visualization", Second Edition, Pearson Education, Inc.
Shepherd, I. D. H., 2008, "Travails in the Third Dimension: A Critical Evaluation of Three-dimensional Geographic Visualization." Book chapter in Dodge, M., McDerby, M. and Turner, M., 2008, "Geographic Visualization: Concepts, Tools and Applications", Wiley.
As I see, the classic way to present a quantitative variable using the amount of brightness and just one value (color) is more efficient independently of using a 2D or a 3D environment.” (XuRxO) “If I wanted to find out the exact value of oil consumption for a particular country on the thematic map I can easily glance at the colors in the key to pinpoint the value. With consumption converted to height in Google Earth its difficult to read off the actual value.” (Rich Treves) “I was talking to some cartographer friends about the recent proliferation of 3D thematic maps and they had some concerns about their utility and accuracy. Specifically they pointed to testing that has shown people stink at estimating heights of the countries and have the hardest time telling the most basic differences in height.” (Sean Gorman) I’ll stick to the “Infant Mortality Rate” visualisation introduced in my last blog post: Prism map (download KMZ). Choropleth map (download KMZ). The best solution is to select a visual variable that match the level of measurements (nominal, ordinal or numerical) of the data (Slocum et al., 2005). Hue (colour) is the best way to represent nominal data, ordered hues (e.g. yellow, orange, red) or lightness are the best ways to represent ordinal data, while perspective height (prism) is the best way to represent numerical data (page 72). Clearly, brightness/hue is not the only "classic way" of representing numerical data. The reason why perspective height gets the highest rank by Slocum (2005/70) is that an unclassed prism map will portray ratios correctly, since a value twice as large as another will be represented by a prism twice as high. Slocum also identifies two problems of prism maps; that tall prisms sometimes block smaller prisms and that rotation might produce a view that is unfamilar to users. This is the topic of my next blog post in this series. If you look at the two images above (or in Google Earth) I will argue that the prism map is able to convey more detail than the choropleth map. I find it easier to see how the infant mortality rate varies across the continent, and to compare countries adjacent to each other. I agree with Rich that a colour legend helps the user to pinpoint a value, even though it's impossible to identify the exact value by this method. A height legend is diffult to make since the scale is changing. Still, I think legends are more important for paper maps than digital maps. On a digital map you should always be able to obtain the value by clicking a feature. In Google Earth, a Point Placemark is the only object you can click or roll over. You can click on a polygon by holding down the shift-key, but this is not very user friendly. Collada objects are not clickable at all. Google, I wont sleep well until this is fixed ;-) In conclusion, - try a prism map if you deal with numerical data and have 3D at your disposal! Thematic Cartography and Geographic Visualization, Second Edition 2005, Terry A. Slocum et al., Pearson Education, Inc I appreciate all the feedback I’ve got in emails, on this blog and on other blogs. Especially, I’m thankful for critical feedback as this is helping me in addressing important issues. Most of the critical comments are questioning the effectiveness of 3d globe visualisations, which are widely used on this blog. We definitely need to think critically about the pros and cons of 3D visualisations. I want to give my response in a series of blog posts. 
Rich Treves made a comment where he linked to his blog post "3D Rears its Ugly Head Again". This is the first of his three arguments against 3D KML (Rich, excuse me for taking your arguments out of context):

"I can't compare the Oil consumption of UK and Australia at the same time because they are on different sides of the globe."

True, the ability to compare all countries is lost when thematic maps are rendered on a globe. Still, there are various ways to address this issue:

1. I'm using two visual variables (colour and height) to represent the same statistical indicator. This makes country comparison easier when spinning the globe. I could not find oil consumption statistics at UNdata, so I'm using "Infant Mortality Rate" instead (per 1,000 live births). You can make the map in the Thematic Mapping Engine, or download the KMZ file here. I think this is a good and effective 3D visualisation.

2. Values can be displayed on the globe, which makes country comparisons much more accurate. I added this feature to the Thematic Mapping Engine yesterday. More info will come in a separate blog post.

3. Another option, which is now possible with the new Google Earth plugin, is to have two spinning globes in the same window. Click here for a live example (based on the China Syndrome example from Google Code). If you rotate the left globe, the right globe will show you the view on the other side. I think it would be better to enable the user to rotate the two globes independently.

4. A different approach is to leave the globe, but hold on to 3D KML using a tool like UUorld, where you have 3D prisms on a flat world map. Currently, there are no 3D KML renderers that are able to make such visualisations, but I'm sure there will be in the near future. Maybe a job for the UUorld guys? UUorld seems to use the Plate Carrée (equidistant cylindrical) projection. This is clearly not the best choice for thematic world maps, but it's the same projection currently supported by KML (EPSG:4326).

In conclusion, country comparisons are problematic on a 3D globe, but it shouldn't stop us from doing it!
Unfortunately, KML has no built-in support for regular polygons, so these polygons have to be created by complex calculations. Luckily, this is hidden behind the scenes of the Thematic Mapping Engine. The result is not nice looking in Google Earth. The satellite imagery distracts the view, and the borders are placed on top of the polygons. Other KML renderers might do a better job. It's also impossible to make perfect geometric shapes with this technique.

3. Collada objects. The last option is to use 3D Collada objects. These visualisations might be eye candy, but I'm not sure how effective they are in conveying geographical patterns. There are also some scaling issues which I'll write about later. A 3D variation of mobile telephone subscribers in 2003.

I've also added choropleths as an alternative to prism maps. The country polygons are shaded according to the number of mobile phone subscribers per 100 inhabitants. You can try the Thematic Mapping Engine here. Please provide your feedback! All statistics from UNdata.

Thanks for all the feedback after my initial release of Thematic Mapping Engine. Two features have been requested: data upload/import and support for the new Google Earth plugin. I've now added a Preview button, which enables you to see thematic maps in the browser instead of switching between two applications. The Google Earth plugin seems to render big and complex KML files very well. The image above shows GDP per capita in 2005. A missing feature is support for KML time primitives. The TimeSpan and TimeStamp elements are not supported, and the plugin renders all the polygon features regardless of the time specified (above image). This will hopefully be fixed in a future release. With the new Google Earth plugin it's possible to create a highly interactive and explorative user interface for geovisualisation. The plugin can be combined with other visualisation gadgets in a way that is not possible in Google Earth. As applications move from the desktop to the web, there will be a great demand for JavaScript experts who know how various APIs and libraries can be combined. By the way, Thematic Mapping Engine still works without the Google Earth plugin - just click the Download button.

It's time to introduce the Thematic Mapping Engine (TME). In my previous blog posts, I've shown various techniques of how geobrowsers can be used for thematic mapping. The goal has been to explore the possibilities and to make these techniques available to a wider audience. The Thematic Mapping Engine provides an easy-to-use web interface where you can create visually appealing maps on the fly. Prism maps are supported, but other thematic mapping techniques will be added in the upcoming weeks. The engine returns a KMZ file that you can open in Google Earth or download to your computer. My primary data source is UNdata. The map above (download KMZ) shows child mortality in the world (UNdata). The Thematic Mapping Engine is also an example of what you can achieve with open source tools and datasets in the public domain. Try the Thematic Mapping Engine, and please give your feedback by posting a comment below!

When I'm creating thematic maps with KML, I often end up with a series of files: legend images, icon images, 3D Collada objects and the KML file itself. The KML file is often large because of complex geometries repeated for several time steps. Fortunately, KML files and linked images and 3D objects can be zipped into one KMZ archive, which makes file transfer easier and more efficient.
I'm using the PHP ZIP extension to create KMZ files on-the-fly. Basically, a KMZ file has the same properties as a ZIP file. With the PHP extension you can open a new KMZ archive and add your KML code (named "doc.kml") and images/3D objects that are linked with the href element. For my last blog post, I created a KMZ archive (download) containing one KML file, one folder and three images. The original files are 1,418 kB in total while the KMZ file is only 153 kB - a lot of bandwidth saved!
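As a minimal sketch of the same idea (my illustration, using Python's standard zipfile module rather than the PHP ZIP extension the post actually uses; the file names here are hypothetical):

    import zipfile

    # A KMZ is an ordinary ZIP archive whose main document is named "doc.kml".
    # Files referenced by href elements in the KML are stored alongside it
    # under their relative paths.
    def write_kmz(kmz_path, kml_string, assets):
        with zipfile.ZipFile(kmz_path, "w", zipfile.ZIP_DEFLATED) as kmz:
            kmz.writestr("doc.kml", kml_string)  # compressed main document
            for archive_name, local_path in assets.items():
                kmz.write(local_path, arcname=archive_name)

    kml = '<kml xmlns="http://www.opengis.net/kml/2.2"><Document/></kml>'
    write_kmz("map.kmz", kml, {"files/legend.png": "legend.png"})

The deflate compression (ZIP_DEFLATED) is what buys the bandwidth saving described above; it is also the default compression applied by PHP's ZIP extension.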
{"url":"http://blog.thematicmapping.org/2008_06_01_archive.html","timestamp":"2014-04-21T04:43:51Z","content_type":null,"content_length":"164463","record_id":"<urn:uuid:71bf9137-3a38-40a7-8fca-e1ca78fd007a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Commutator of algebraic subgroups is connected

Let $G$ be an algebraic group over an algebraically closed field. If $H$ and $K$ are closed subgroups and one of them is connected, then their commutator $[H,K]$ is also connected. Is there an easy way to see this fact? The proof that I see in Springer's book on Linear Algebraic Groups is very long-winded. (Of course, he obtains many results on the way.) My question is: is the given statement true for any topological group? If yes, does that proof apply to the algebraic groups' case? (Remark: the definition of a topological group includes the Hausdorff property, whereas algebraic groups need not be Hausdorff.)

Tags: gr.group-theory, algebraic-groups, topological-groups

5 Answers

Accepted answer (Todd Trimble): The case for $G$ a topological group doesn't look hard; it just comes down to two facts:

• A union of connected sets which have a point in common is also connected. For example, suppose $K$ is a connected subgroup and $H$ is any set. For each fixed $h \in H$, the set of commutators $[h, K]$ is the image of $K$ under the continuous map $G \to G$ that takes $g$ to $[h, g]$; this image is connected and contains the identity (since $g = e$ belongs to $K$). But then the set of all commutators $[h, k]$ is a union of connected sets $[h, K]$ each containing the identity, so it too is connected.

• If $S \subset G$ is a connected set containing the identity, then so is the group it generates. This is on the same principles as before, that the image of a connected set under a continuous map is connected, and the union of connected sets with an element in common is connected. For example, the set $T = S \cup S^{-1}$ is connected, and then each $T \cdot T \cdot \ldots \cdot T$ is connected, being the image of the connected set $T \times T \times \ldots \times T$ under the multiplication map $G^n \to G$, and finally the union of all these sets gives the group generated by $S$.

Now apply the second fact to the connected set of commutators coming from the first fact, to show that the subgroup $[H, K]$ is connected.

Comments: "That answers my question, thanks!" – Abhishek Parab, Sep 2 '12. "It is somewhat subtle, but an algebraic group is not a topological group for the Zariski topology. This is because for a topological group one requires that the multiplication map $\mu: G \times G \to G$ be continuous for the product topology, while for algebraic groups one puts on $G \times G$ the Zariski topology, which does not coincide with the product topology in general. So it is not clear to me why the argument for topological groups should imply the statement for algebraic groups." – Guntram, Sep 2 '12. "Right, I was only addressing one of the questions of the OP. There is no reason I can think of why the statement for topological groups should imply the statement for algebraic groups." – Todd Trimble, Sep 2 '12. "I missed the subtle point that Guntram mentioned. Thanks for pointing it out." – Abhishek Parab, Sep 2 '12.

Answer (Jim Humphreys): There are actually three questions here. The first is: Is there an easy way to see this fact?, to which the answer is (almost certainly) no. Keep in mind that both Chevalley and Borel came to algebraic groups from a background in Lie groups and were therefore well aware of what worked for a topological group. The third question has the same answer: whether the topological group result applies to the algebraic group case. "Long-winded" answers may be a necessary evil here.
Keep in mind too that the applications of the foundational results behind your questions deal with whether certain subgroups of algebraic groups are closed and/or connected. There is some essential interaction between those properties. Todd has worked out the answer to the second question, concerning topological groups. Here the basic results are fairly old, so probably a similar proof is written down somewhere in the literature. But I'll elaborate on some of the other answers and comments about the essential difference in the algebraic group setting.

1) Algebraic groups (here assumed affine) are given the Zariski topology; in particular, irreducible sets are the more natural refinement of the topological notion of connected. In fact, Chevalley's original arguments about commutator groups and such just used the term "irreducible". It's true that for an algebraic group, which has only finitely many irreducible components (all disjoint), the notions "irreducible" and "connected" coincide. But this can cause confusion at times. In any case, the Zariski product topology isn't the usual one, so all topological arguments involving products and continuous maps have to be rethought in algebraic geometry.

2) In his 1951 second volume in French on Théorie des groupes de Lie, Chevalley tried out a framework for algebraic groups which proved later to be inadequate in prime characteristic especially (so he changed gears). But he did rethink all the foundational material related to connectedness, which led classically to connectedness of familiar linear groups. His II.7 contains the prototype, hard to read now, of the basic argument.

3) The notes by Bass of Borel's 1968 Columbia lectures Linear Algebraic Groups adopted more modern algebraic geometry but avoided most scheme language due to time constraints. Chevalley's "long-winded" argument is recast here in I.2.2 and I.2.3 (where part (a) is used to get part (b)). These ideas are crucial for several applications. Coming this early in the theory and used for example to treat solvability, the proofs necessarily rely on first principles. (Borel's expanded second edition, Springer GTM 126, leaves this material unchanged.)

4) No substantive changes are made in my 1975 book GTM 21 and in Springer's 1981 Birkhäuser text. In my book, see 7.5 and 17.2, while in Springer's book see 2.2.6-2.2.8.

Comments: "Thank you very much for your enlightening views. Point (1) was especially instructive. But in (4) - references, your book has no 7.5 and 17.2 is about universal enveloping algebras. Are you sure you wanted to point to that?" – Abhishek Parab, Sep 3 '12. "@Abhishek: Sorry for the wrong GTM number, which I've edited. (I meant my second book in 1975, with the same title Linear Algebraic Groups as the books by Borel and Springer.)" – Jim Humphreys, Sep 3 '12.

Answer (grp): Using classical varieties (and classical points only), since $G^n$ does not have the product topology (in the algebraic group setting) it isn't clear what can be useful for $G$ concerning "topological" statements (as in Todd's answer) concerning the product topology on the subset $T \times T$ inside $G \times G$ when $T$ is just some random subset of $G$ (not yet known to be constructible). So although topological groups provide valuable intuition that can sometimes be transported to the case of algebraic groups (which are of course not themselves topological groups in general), in this case the central issue is not addressed by thinking about topological groups.
The purpose of the longer delicate arguments one finds in the basic textbooks on algebraic groups is that the commutator subgroup is reached in "finitely many steps" (even without connectedness hypotheses on $H$ or $K$, which is very important for applications) and so is constructible. It is for constructible $T$ that $T \times T$ with the "right" topology (inherited from $G \times G$) is connected when $T$ is connected, etc. The hard part therefore involves a problem which doesn't arise in the topological group setting (unless one poses a finer topological question, such as closedness of $(H,K)$ under some reasonable hypotheses, which is a deeper problem than mere connectedness). For an arbitrary (not necessarily constructible) connected subset $T$ of $G$, is the subset $T \times T$ inside $G \times G$ (the latter given the Zariski topology) connected?

Comments: "What do you mean by 'constructible'?" – Abhishek Parab, Sep 2 '12. "Chevalley's theorem on preservation of constructibility under images pervades the beginning of the theory of algebraic groups. The constructible subsets of a noetherian topological space are the finite unions of locally closed sets. Check Wikipedia under "constructible set" (though the example there is weak, since it is even locally closed; a better example is the subset of the plane given by the union of the origin and the complement of the $x$-axis). By the way, the end of your posted question made me think that you were well aware that algebraic groups aren't topological groups." – grp, Sep 2 '12.

Answer (Tom Goodwillie): It seems to me that in a topological group $G$, if $H$ is a connected subset containing the identity and $K$ is any subset, then the subgroup generated by all commutators $hkh^{-1}k^{-1}$ is connected. $G$ does not need to be Hausdorff, and $H$ and $K$ do not have to be closed or to be subgroups. The ingredients in the proof are: (1) the image of a connected set under a continuous map is connected; (2) the union of a set of connected sets is connected if they all have some point in common; (3) the subgroup generated by a connected set is connected.

Comments: "Sorry, Tom, I didn't see your answer while I was typing." – Todd Trimble, Sep 2 '12. "What are the connected sets whose union you are considering in (2)?" – Abhishek Parab, Sep 2 '12. "Actually, is what you wrote true if the connected subset does not contain the identity? If for example $H$ and $K$ were both 1-element sets, then it seems $[H, K]$ could be a discrete set, unless I'm missing something." – Todd Trimble, Sep 2 '12. "Todd, yes, I was unconsciously assuming $H$ contained the identity. Fixed now." – Tom Goodwillie, Sep 2 '12. "The proof that I had in mind was exactly the one that Todd gives in detail in his answer." – Tom Goodwillie, Sep 2 '12.

Answer: This fact is Corollary 16.4.1 on page 40 of the course notes from (the first semester of) Brian Conrad's course on linear algebraic groups. In the notes there is a proof of this fact in modern language, over an arbitrary field. The notes can be found here: http://math.stanford.edu/~conrad/252Page/handouts/alggroups.pdf
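As a compact restatement of the accepted answer's argument (my paraphrase, not part of the thread): writing $\varphi_h(g) = hgh^{-1}g^{-1}$, each $\varphi_h$ is continuous with $\varphi_h(e) = e$, so with $S = \bigcup_{h \in H} \varphi_h(K)$ and $T = S \cup S^{-1}$,

$$[H,K] \;=\; \langle S \rangle \;=\; \bigcup_{n \ge 1} \underbrace{T \cdot T \cdots T}_{n\ \text{factors}},$$

where every set appearing is connected and contains $e$, hence so are the unions.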
{"url":"http://mathoverflow.net/questions/106150/commutator-of-algebraic-subgroups-is-connected/106152","timestamp":"2014-04-24T13:42:28Z","content_type":null,"content_length":"92302","record_id":"<urn:uuid:9732e081-efee-4432-8212-973999c80578>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics out loud

We are used to reading mathematics, and we are also used to hearing it spoken in lectures. I can think of few examples of a natural way to combine these. Why do we never read mathematics out loud? There are some good reasons for this: much of the symbology of mathematics was developed as visual and so has only a bad translation into speech. More importantly, mathematical reading is very rarely linear; we step back from a theorem to a definition only partly remembered, jump forward to look at the corollaries before diving into the proof. Yet speaking words has a power that simply observing them in your head cannot. I tried for years to enjoy Paradise Lost yet got nothing from it, until I heard a comment from Philip Pullman that it needed to be read out loud, and its beauty opened up to me. Why can't mathematics benefit from this? We have our share of great writers: Donald Coxeter, Tim Gowers, Donald Knuth, Douglas Hofstadter, Rudy Rucker; indeed John Conway's papers are often created by writing down his spoken words. I am of course obliged to mention Lewis Carroll and Martin Gardner. Many people, myself included, are interested in how art can be used as a path into mathematics. The visual arts and music are well represented; even mathematical symbols have been considered for their aesthetic qualities. Writing feels relatively neglected, yet is intrinsic to the actual practice of mathematics.

So I have a question: What mathematics would you choose to read out loud? I am especially interested in passages that work when read, even though they are written for a mathematical audience. The more esoteric the better. If we can find some great ones, then perhaps we could even persuade someone with performance skills, who is also interested in mathematics, to read them out. Yes Vi Hart I am looking at you!

11 thoughts on "Mathematics out loud"

1. I don't know about this, but I wonder… book reviews? I think that book reviews try to make the dense accessible, but also distill and inform. One of my favorites (which is snarky, and really mean, but pretty spot on) is http://www.ams.org/journals/bull/2003-40-01/S0273-0979-02-00970-9/S0273-0979-02-00970-9.pdf… I think hearing that aloud might be a powerful experience. Although I know this is not quite what you were looking for. You wanted mathematics, and I don't know if a book review would count as mathematics…

□ That is a classic review. It certainly fits into the broader mathematical culture. I think this quest has two aspects. One is more general: thinking about spoken maths. A second, more precise: to find tracts of mathematics books doing nothing other than communicating mathematics that can still be read out loud and even have beauty.

2. I agree this needs to be done. I'm sure there must be some good translations of the ancient Indian math texts — I know how careful they were about composing their Sanskrit verses, so I'm betting the translations would be beautiful to listen to. Also a great speaker of math: Socrates. I think it would be very cool to get a group of people together to read aloud book VII (http://www.constitution.org/pla/repub_07.htm).

3. This is a great idea. My students and I were talking about having a math soliloquy contest this year; they were thinking of writing their own monologues, but a recitation category could be fun,

4. I've always preferred spoken math for my research, both actively and passively. Actively, it allowed me to find better understanding of the problems I was stuck with — I had to articulate them!
— passively as I learned from my colleagues. It was one of the reasons to start recording all our seminar talks on video.

□ I think a lot of the processes of creating mathematics get lost in how we write papers. The other classic example is the quick sketch. So many papers have none, but as soon as you start talking to the author some image is quickly drawn.

5. This is a bit more computer science than math, but Geoffrey Pullum's ode to the Halting Problem written in Dr. Seuss form: There is some other similarly done, for-fun math poetry out there, but this was the first one that came to my mind. Also, the lyrics to Jonathan Coulton's "Mandelbrot Set" song make for great entertaining reading.

6. All the great equations are great poems. Each great constant has all of mathematics packed into it, like the superbly compact seed, which contains billions of years of evolution. I could go on…

7. Pingback: Travels in a Mathematical World

8. There are the books by Lillian Lieber. I love The Education of T.C. Mits: What modern mathematics means to you. I wonder if my poem explaining a bit about complex numbers works for you as spoken
{"url":"http://maxwelldemon.com/2013/02/01/mathematics-out-loud/","timestamp":"2014-04-19T17:01:36Z","content_type":null,"content_length":"96180","record_id":"<urn:uuid:164843f6-9b2d-44ae-b915-c3b837196bc3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
matlabShell is an engine that can be used with matlab.el. It's so naive that I don't understand why it isn't in common use, and suspect I've missed something elementary.

This directory has:

MatlabShell is a simple engine that seems to work with matlab.el Version 2.1.1. Testing has been brief, and only under NT4.0. There are no windows-isms in it and it should actually work anywhere it can be recompiled with the engine library. It's a very naive approach; I am new to Matlab and welcome improvements to make it useful, especially if I am missing the point altogether. Probably if it doesn't work with gud it is not that interesting, though at least it avoids the nuisance of switching between emacs and the standard Matlab command processor.

Using the executable: Install it where convenient. You'll set some emacs lisp to tell matlab.el where it is. See usage notes below.

Help from me: Probably not much. If it's worth anything, maybe it will be discussed in matlab-emacs@mathworks.com

Help to me: Very welcome. :-)

I set a MATLAB environment variable and build the executable with a batch file containing the line

    mex -f %MATLAB%\bin\msvc50engmatopts.bat matlabShell.c

Usage notes

These are the usage notes for matlabShell.exe, version 1.0, a matlab shell engine suitable for use with Windows NT and probably for Windows 95. Install this executable in your execution path, and use it with matlab.el version 2.2, which can be obtained from ftp.mathworks.com.

* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
* If you did not receive a copy of the GPL you can get it at
* http://www.fsf.org

matlabShell.exe can be used standalone in a command window, and many installation mysteries can be cleared up by doing so, thereby helping to determine whether any problems are with the shell installation or with the matlab.el installation. I don't promise to answer questions, which should therefore be mailed to the matlab-emacs interest list matlab-emacs@mathworks.com or posted to comp.soft-sys.matlab.

/* usage notes 1.0 02dec98 ram@cs.umb.edu

Your emacs initialization should have something like this in it:

    (autoload 'matlab-shell "matlab" "Interactive Matlab mode." t)
    (setq matlab-shell-command "D:/users/ram/matlab/engine/matlabShell.exe"
          matlab-shell-command-switches "500 10000"
          matlab-shell-echoes nil)

matlab-shell-command should evaluate to a string with the full path name of the executable of this shell. With matlab.el 2.2 you may need to have (load "font-lock") in your emacs initialization if nothing else loads it. The symptom of needing this will be complaints about some emacs font stuff not found while loading matlab.el.

matlab-shell-command-switches is optional. It should evaluate to a string with either one or two integers. The first is the size in bytes of the buffer the program uses for input, and the second, if present, is the size of the buffer to which Matlab returns its output. If only the first is present, the second is set to 8 times the first. If the string is not set it defaults to "", and the shell treats it as "1024 8192". Previous releases had "500 5000" as the suggested value, but 5000 characters may be rather small for output. Beginning with release 0.93, the lisp variable matlab-shell-echoes must have value nil.
matlab.el sets this to t, but on NT the symptom of having this t is that the command you give to matlab will not be echoed in the matlabShell buffer, because matlab.el thinks that matlabShell will echo it. matlabShell doesn't echo, in order that it can also work reasonably from a DOS command window.

If you want to recompile and link this code, do it with the file linkit.bat, which should look something like this:

    mex -f %MATLAB%\bin\msvc50engmatopts.bat matlabShell.c

You can only do this with MS VC++ 5.0.

Known issues using matlabShell.exe 1.0 with matlab.el version 2.2:

1. The function matlab-shell-run-region (C-c C-r) does not leave the cursor positioned in the right place.

2. The function matlab-shell-save-and-go (C-c C-s) passes only the file name to the shell, not the full pathname. Consequently, if the file you are editing is not the one found on your Matlab path, it is not the one executed. If the current directory is not on your path at all and there is no file of the same name, Matlab will complain that there is no such file. If there is one on your path with the same name, you will come to believe that your edits are having no effect.

Contrary to the API documentation, in Matlab 5.2.0, engEvalString() seems to return a positive integer no matter whether the engine has exited or not. The integer is different for each invocation, but constant throughout the execution of the engine shell, so I suppose it is derived from the process id or something. Any ideas?

Bob Morris
{"url":"http://www.cs.umb.edu/~ram/matlabShell/index.html","timestamp":"2014-04-17T09:35:07Z","content_type":null,"content_length":"6992","record_id":"<urn:uuid:eac2743d-8d3e-42fe-8002-e78d4a555a15>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical simulation of the SPM penalty in a 10-Gb/s RZ-DPSK system

ABSTRACT: The impact of self-phase modulation-induced nonlinear phase noise in a 10-Gb/s return-to-zero differential phase-shift keying system is studied by numerical simulation. We show that the simple differential phase Q method based on the Gaussian approximation for the phase noise provides a relatively good estimate of the nonlinear penalty.

Related abstracts:

ABSTRACT: Using an alternative approach for evaluating the Bit-Error Rate (BER), we present a numerical and experimental investigation of the performance of phase-modulated optical communication systems in the presence of nonlinear phase noise and dispersion. The numerical method is based on the well-known Karhunen-Loève expansion combined with a linearization technique of the Nonlinear Schrödinger Equation (NLSE) to account for the nonlinear interaction between signal and noise. Our numerical results show a good agreement with experiments. (Optics Express 04/2009; 17(5):3226-41.)

ABSTRACT: We describe a new method for quantitative evaluation of the quality of multilevel differential phase-shift-keyed (DxPSK) signals by using a differential phasor monitor. This method measures the phase deviation of the symbols in the differential phasor diagram and estimates the bit-error rate (BER) by using a simple relation based on the Q-factor defined for multilevel DxPSK signals. We demonstrate that the proposed method can accurately estimate the BER of differential quadrature PSK and 8-ary PSK signals. (IEEE Photonics Technology Letters 10/2009.)

ABSTRACT: We develop a novel phasor monitor to obtain the constellation diagram from asynchronously sampled data measured by using the delay-detection technique. This phasor monitor consists of three parts: a phase-adjustment-free delay-interferometer; an optical front-end made of three photodetectors and analog-to-digital (A/D) convertors; and a digital signal processor. We operate the A/D convertor at a sampling rate much slower than the symbol rate and acquire the data asynchronously. However, despite the use of such a slow and asynchronous sampling rate, we obtain clear eye and constellation diagrams by utilizing a software-based synchronization technique based on a novel phased-reference detection algorithm. Thus, the proposed phasor monitor can be implemented without using high-speed A/D convertors and buffer memories, which have been the major obstacles for the cost-effective realization of the phasor monitor. For a demonstration, we realize the proposed phasor monitor by using an A/D converter operating at 9.77 MS/s and used it for the constellation monitoring and bit-error-rate (BER) estimation of 10.7-Gsymbol/s differential quadrature phase-shift keying (DQPSK) and differential 8-ary phase-shift keying (D8PSK) signals. (Optics Express 10/2010; 18(21):21511-8.)
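For context (my addition, not part of the page): Gaussian-approximation Q-factor methods of the kind these abstracts mention conventionally convert a Q value to a bit-error rate via BER = (1/2) erfc(Q / sqrt(2)); whether these particular papers use exactly that form is not stated in the abstracts.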
{"url":"http://www.researchgate.net/publication/3291960_Numerical_simulation_of_the_SPM_penalty_in_a_10-Gbs_RZ-DPSK_system","timestamp":"2014-04-20T05:07:40Z","content_type":null,"content_length":"186007","record_id":"<urn:uuid:65a37e8a-11ab-4219-a71e-049ef373d78e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Potomac, MD SAT Math Tutor

Find a Potomac, MD SAT Math Tutor

...My tutoring style can adapt to individual students and will teach along with class material so that students can keep their knowledge grounded. I have a Master's degree in Chemistry and I am extremely proficient in mathematics. I have taken many math classes and received a perfect score on my SAT Math test.
11 Subjects: including SAT math, chemistry, geometry, algebra 2

...I can tutor all high school math including calculus, pre-calculus, trigonometry, geometry and Algebra I & II, plus SAT, GRE, ACT and other standardized test preparation. I can also tutor Physics and first-year Chemistry. I have tutored this subject successfully in the past. In the past I was both a faculty high school mathematics instructor and a junior college instructor.
28 Subjects: including SAT math, chemistry, calculus, physics

...Since then, I have built on my algebra knowledge with a wide array of advanced mathematics. Therefore, I am very comfortable with the basics of algebra. I took three semesters of calculus at The University of Maryland, and did well in all of them.
27 Subjects: including SAT math, calculus, physics, geometry

...I am eager to work with any student, and am confident that through hard work and dedication anyone can succeed in any subject or topic. I look forward to working with you, be it in Mathematics, German or History!! During my four years at Temple University, I tutored mathematics for over two years...
11 Subjects: including SAT math, geometry, German, algebra 2

With a background in architecture, I have been teaching science, Physics and Chemistry for the last 16 years. I am currently in my eleventh year of teaching Physics to high school students, having previously taught Chemistry for seven years. I am certified to teach both subjects in New Jersey.
7 Subjects: including SAT math, chemistry, physics, algebra 1
{"url":"http://www.purplemath.com/Potomac_MD_SAT_math_tutors.php","timestamp":"2014-04-18T13:54:22Z","content_type":null,"content_length":"24041","record_id":"<urn:uuid:a5179ce1-03a6-4574-9dfd-f25d4b02a5b5>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Examples 5.2.7(a): Consider the collection of sets (0, 1/j) for all j > 0. What is the intersection of all of these sets?

The intersection of all intervals (0, 1/j) is empty. To see this, take any real number x. If x ≤ 0, then x is not in any of the intervals (0, 1/j), and hence not in their intersection. If x > 0, then there exists an integer N such that 0 < 1/N < x. But then x is not in the set (0, 1/N) and therefore is not in the intersection. Therefore, the intersection is empty.

Note that this is an intersection of 'nested' sets, that is, sets that are decreasing: every 'next' set is a subset of its predecessor.
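A one-line summary of the example (my restatement, using the same Archimedean step as above):

$$\bigcap_{j=1}^{\infty} \left(0, \tfrac{1}{j}\right) = \varnothing, \qquad \text{since for every } x > 0 \text{ there is an integer } N \text{ with } 0 < \tfrac{1}{N} < x.$$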
{"url":"http://www.mathcs.org/analysis/reals/topo/answers/nesting1.html","timestamp":"2014-04-21T03:15:18Z","content_type":null,"content_length":"5195","record_id":"<urn:uuid:a55258b2-65d3-4a6b-8d11-af93f26d844d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Math humor on Let's Play Math!

Oh, my! Ben Orlin over at Math with Bad Drawings just published my new favorite math proof ever:

I had a fight with Euclid on
the nature of the primes.
It got a little heated –
you know how the tension climbs.

It started out most civil,
with a honeyed cup of tea;
we traded tales of scholars,
like Descartes and Ptolemy.

But as the tea began to cool,
our chatter did as well.
We'd had our fill of gossip.
We sat silent for a spell.

That's when Euclid turned to me,
and said, "Hear this, my friend:
did you know the primes go on
forever, with no end?" …

Click here to read the whole post at Math with Bad Drawings.

Math Teachers at Play #58

[Feature photo (above) by Alex Kehr. Photo (right) by kirstyhall via flickr.]

Welcome to the Math Teachers At Play blog carnival — a smorgasbord of ideas for learning, teaching, and playing around with math from preschool to pre-college. If you like to learn new things and play around with ideas, you are sure to find something of interest. Let the mathematical fun begin…

PUZZLE 1: By tradition, we start the carnival with a pair of puzzles in honor of our 58th edition. Click to download the pdf.

PUZZLE 2: A Smith number is an integer the sum of whose digits is equal to the sum of the digits in its prime factorization. Got that? Well, 58 will help us to get a better grasp on that definition. Observe:

58 = 2 × 29
5 + 8 = 13
2 + 2 + 9 = 13

And that's all there is to it! I suppose we might say that 58's last name is Smith. [Nah! Better not.]

□ What is the only Smith number that's less than 10?
□ There are four more two-digit Smith numbers. Can you find them?

And now, on to the main attraction: the blog posts. Many articles were submitted by their authors; others were drawn from the immense backlog in my Google Reader. Enjoy!

Have a Mathy Thanksgiving Dinner

Professional Mathemusician Vi Hart is back with more mathematical holiday fun. Enjoy! Optimal Potatoes. Green Bean Matherole. Borromean Onion Rings. Thanksgiving Turduckenen-duckenen.

A Bit of Arithmetic Fun

Singing Banana (James Grime) recorded this video at the Mathematical Association annual conference dinner, 2011. I've shared it before, but that was over a holiday weekend, so many of you may have missed it. It relates, in a way, to our PUFM lesson this week.

Why Every Proof that .999… = 1 is Wrong

Vi Hart repents with an update to her last video: "Take that, mathematics!"

Super Bowl XLVI Math Worksheet and Football Comic

Lance Friedman of MathPlane.com has posted two bits of fun in honor of Super Bowl XLVI. (Click the images to go to Lance's site.) And if you're a homeschooler, Currclick is offering a Super Bowl Mini-Helper free this week. NFL Math Quiz.

Square One TV: The Mathematics of Love

The Engineer was away on a business trip, and Kitten and I were in the mood to veg out on YouTube, so I hunted for some golden oldies. We used to watch Square One TV faithfully, back when my eldest was in first grade. I can't believe they haven't released this show on DVD! We found recordings of my two favorite songs ("Nine, Nine, Nine" and "8% of My Love"), but the picture quality was horrible.
This video was the runner-up:

Elementary Arithmetic

My car makes a loud, scary, grinding noise, and of course the repair shop is closed until Tuesday — so instead of visiting relatives for the holiday weekend, I get a quiet "writer's retreat" at home. If you're stuck at home, too, perhaps you'll enjoy this bit of fun…

Happy Tau Day

6/28 is τ Day. Tau = τ = one turn around the circle = $\frac{C}{r}$ = 2π = 6.28318… How do mathematicians celebrate τ Day? Protest! Share anti-π propaganda. And eat two pies…

Math Teachers at Play #39

Welcome to the Math Teachers At Play blog carnival — which is not just for math teachers! If you like to learn new things and play around with ideas, you are sure to find something of interest. Several of these articles were submitted by the bloggers; others were drawn from my overflowing blog reader. Don't try to skim everything all at once, but take the time to enjoy browsing. Savor a few posts today, and then come back for another helping tomorrow or next week. Most of the photos below are from the 2010 MAA Found Math Gallery; click each image for more details. Quotations are from Mike Cook's Canonical List of Math Jokes. Let the mathematical fun begin…

Can You Find These AWOL Math Websites?

In the course of my bloggy spring cleaning, I've made some terrible discoveries. Some of my favorite resources have disappeared off the internet. Or perhaps they've moved, and I just haven't found their new homes. Do you know where these websites went?

A Very Short History of Mathematics

This irreverent romp through the history of mathematics by W. W. O. Schlesinger and A. R. Curtis was read to the Adams Society (St. John's College Mathematical Society) at their 25th anniversary dinner, Michaelmas Term, 1948. Internet Archive's Wayback Machine found a copy, but I'd love to replace this link with the article's new location. [Warning: Do not attempt to read this article while drinking coffee or other spittable beverage!] Update: James Clare found the article's new home here. Thank you!

New Math Joke

A topologist walks into a cafe: "Can I have a doughnut of coffee, please?" — Tanya Khovanova (Tanya Khovanova's Math Blog » November Jokes)

Best Math Humor, and a Few Teaching Tips

[Photo by T. Ruette.] The Best of Blog project has become the monster that ate my life, but I am determined to finish the thing. [It's done! :D] Meanwhile, I'm enjoying the chance to explore long-forgotten blog posts. If you'd like a laugh, try some of these…

For Niner: A Bit of Calculus Fun

Students headed into finals week need to blow off some steam, so let's have a little fun with calculus. Hey, Niner, does this look familiar?… [10 Steps to Solving a Calculus Problem by hydriapotts.]

Valentine's Day: Say It with Music

If you have trouble seeing the video, it's here on YouTube. For more information about the singers (and lyrics to this and other songs), check out the Klein Four webpage. P.S.: You may also enjoy the Valentine's Day Fail over at Abstruse Goose.

Math Teachers at Play #8

[Photo by jaaron.] Welcome to the Math Teachers At Play blog carnival — which is not just for math teachers!
We accept entries from anyone who enjoys playing around with math, as long as the topic is relevant to students or teachers of preK-12th grade mathematics. Some articles were submitted by their authors, others were drawn from the backlog in my blog reader, and I've spiced it all up with a few math jokes courtesy of the Mathematical humor collection of Andrej and Elena Cherkaev. Let the mathematical fun begin…

Real-Life Story Problem

Get a Laugh

Two blog posts that brightened a stormy afternoon: New Element at The Common Room. Severe Weather Testing Protocols at Fractions speak louder than nerds…

April Fool's Day: Fun with Math Fallacies

Photo by RBerteig. Take a break from "serious" math and have a little fun today with some classics of recreational mathematics. Do you have a favorite math or logic fallacy? Please share it in the Comments below.

In Between Sneezes… Multiplication Videos

Sitting at home with a cold, tired of watching TV and playing video games, stumbled upon… A great theorem from math history

Happy Pi Day II

[Feature photo above by pauladamsmith.]

Now there is an ancient Greek letter,
And I think no other is better.
It isn't too tall,
It might look very small,
But its digits, they go on forever.
— Scott (Mrs. Mitchell's Virtual School)

Time to Celebrate

Are your students doing anything special for $\pi$ Day? After two months with no significant break, we are going stir crazy. We need a day off — and what better way could we spend it than to play math all afternoon? If you need ideas, here are some great $\pi$ pages:

500 (?) and Counting

Photo by rileyroxx. Could this be my 500th post? That doesn't seem possible, even counting all those half-finished-and-then-deleted drafts. Well, at least it is my 500th something, according to the WordPress.com dashboard. And surely a 500th anything is worth a small celebration, right?

Maybe my students aren't so bad, after all…

It has been awhile since I posted a link to Rudbeckia Hirta's Learning Curves blog. Here are a few of her students' recent bloopers:

Quotations XIX: How Do We Learn Math?

"He doesn't learn algebra in the algebra course; he learns it in calculus." I have been catching up on my Bloglines reading [procrastinating blogger at work --- I should be going over the MathCounts lesson for Friday's homeschool co-op class], and found the following quotation at Mathematics under the Microscope [old blog posts are no longer archived].

That's Mathematics

Things are still hectic, but at least the phone company guy found the problem and got our "extended DSL" service working. "Extended DSL" is what you get when you live out in the boonies. No guarantees that it will be faster than the ancient modem, but at least it doesn't tie up the phone line anymore. And it is a bit faster, so I finally get to enjoy YouTube. If the video doesn't display properly, you can find it at this link:

Rewriting the History of Math

Here are a couple of quick links to math in the news:
• MathTrek: A Prayer for Archimedes. It turns out Archimedes was even closer to discovering calculus than we had thought.
• Tales of the golem: With many cheerful facts about the square of the hypotenuse. While Pythagoras, on the other hand, sees his place in math history threatened by an experimental disproof of the Pythagorean Theorem. [Hat tip: jd2718.]
A Very Short History of Mathematics

This paper was read to the Adams Society (St. John's College Mathematical Society) at their 25th anniversary dinner, Michaelmas Term, 1948. [Warning: Do not attempt to read this while drinking coffee or other spittable beverage!] Hat tip: I found this through the math carnival at a mispelt bog. Update: After my plea for help, James Clare pointed me to the article's new home.

Quotations XIII: Mathematics Education Is Much More Complicated than You Expected

Registrations have been rolling in for our homeschool co-op, and the most popular classes are full already. Math doesn't seem to be a "most popular" class. I can't imagine why! Still, many of my students from last year are coming back for another go, and I am getting spill-over from the science class waiting list. Anyway, I have started planning in earnest for our fall session. As usual, I look to those wiser than myself for inspiration…

"Many teachers are concerned about the amount of material they must cover in a course. One cynic suggested a formula: since, he said, students on the average remember only about 40% of what you tell them, the thing to do is to cram into each course 250% of what you hope will stick."

Math Jokes

Blame it on MathNotations and his Corny Math Jokes (which actually included one I hadn't heard before) — or maybe I have been reading too many of Chickenfoot's strange tales — but anyway, I'm in a mood for humor. So here are a couple of old favorites:

□ The Frivolous Theorem of Arithmetic: Almost all natural numbers are very, very, very large.
□ The First Strong Law of Small Numbers: There are not enough small numbers to meet the many demands made of them.

Hat tip: These had gotten lost in the dustbunnies of my memory until I saw the Frivolous Theorem mentioned recently at Art of Problem Solving. Edited to add: Scott at Grey Matters recently updated his Mathematical Humor post, which may be where I had originally read these. He links to several more great MathWorld jokes, including the ever-tasty Pizza Theorem.

Spring Cleaning My Blog Links

Our whole family is coming down with something again. What a nuisance! Since I don't feel up to real cleaning, I guess it's time to spruce up my sidebar. If you haven't posted since November or December of last year, you're outta there. And for those of you who use Blogger — well, I'm sorry, but if I get a persistent "Blogger: 404 – Page Not Found" then you're gone, too. If you are still actively blogging, please send me an email.

For Those Really Long Family Trips…

Discovered on a "mathematics" tag search:
{"url":"http://letsplaymath.net/tag/math-humor/","timestamp":"2014-04-16T04:10:38Z","content_type":null,"content_length":"132857","record_id":"<urn:uuid:85ab454e-832e-438e-bf42-4f2e942c04b6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the supremum of a set?

Best response: It is usually called the least upper bound: an element which is not less than any other element of the set.

Follow-up note: it might not be an element of the set.
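For reference (a standard definition, my addition rather than part of the thread): for a nonempty set $A \subseteq \mathbb{R}$ that is bounded above, $s = \sup A$ means that $a \le s$ for every $a \in A$, and for every $\varepsilon > 0$ there is some $a \in A$ with $a > s - \varepsilon$. This makes the follow-up note precise: $s$ need not belong to $A$; for example, $\sup\,(0,1) = 1$.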
{"url":"http://openstudy.com/updates/4e40104e0b8bfb83f3cbd1fe","timestamp":"2014-04-17T07:14:25Z","content_type":null,"content_length":"30056","record_id":"<urn:uuid:94dd96a4-b618-4df6-92eb-bba47e5eb6e4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Elementary statistics

Contents (partial): Chapter 3, Organizing Data, p. 18; Chapter 4, p. 140; 12 other sections not shown.
{"url":"http://books.google.com/books?id=ro4XAQAAMAAJ","timestamp":"2014-04-19T13:36:27Z","content_type":null,"content_length":"98702","record_id":"<urn:uuid:caedf6c9-a9cf-49c7-ba1a-1afa9fa966b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
History of the Theory of Numbers, Vol. I

CHAP. XVIII. ASYMPTOTIC DISTRIBUTION OF PRIMES.

A. de Polignac [305] crossed out the multiples of 2 and 3 from the series of natural numbers and obtained the "table a_2": (0) 1 (2) (3) (4) 5 (6) 7 (8) (9) (10) 11 .... The numbers of terms in the successive sets of consecutive deleted numbers are 1, 3, 1, 3, 1, ..., which form the "diatomic series of 3." Similarly, after deleting the multiples of the first n primes, we get a table a_n and the diatomic series of the n-th prime P_n. That series is periodic, and the terms after 1 of the period are symmetrically distributed (two terms equidistant from the ends are equal), while the middle term is 3. Let π_n denote the product of the primes 2, 3, ..., P_n. Then the number of terms in the period is φ(π_n). The sum of the terms in the period is π_n − φ(π_n) and hence is the number of integers < π_n which are divisible by one or more primes ≤ P_n. As applications he stated that there exists a prime between P_n and P_n^2, also between a_n and a_{n+1}. He [306] stated that the middle terms other than 3 of a diatomic series tend as n increases to become 1, 3, 7, 15, ..., 2^m − 1, ....

J. Deschamps [307] noted that, after suppressing from the series of natural numbers the multiples of the successive primes 2, 3, ..., p, the numbers left form a periodic series of period 2·3·...·p; and similar theorems. Like remarks had been made previously by H. J. S. Smith [308].

P. L. Tchebychef's [261] investigation shows that for x sufficiently large the number π(x) of primes ≤ x is between 0.921Q and 1.106Q, where Q = x/log x. He [314] proved that the limit, if existent, of π(x)/Q for x = ∞ is unity. J. J. Sylvester [267] obtained by the same methods the limits 0.95Q and 1.05Q. By use of the function ζ(s) = Σ n^(−s) of Riemann, J. Hadamard [315] and Ch. de la Vallée Poussin [316] independently proved that the sum of the natural logarithms of all primes ≤ x equals x asymptotically. Hence follows the fundamental theorem that π(x) is asymptotic to Q, i.e., π(x)·(log x)/x → 1 as x → ∞.

[305] Recherches nouvelles sur les nombres premiers, Paris, 1851, 28 pp. Abstract in Comptes Rendus Paris, 29, 1849, 397-401, 738-9; same in Nouv. Ann. Math., 8, 1849, 423-9. Jour. de Math., 19, 1854, 305-333.
[306] Nouv. Ann. Math., 10, 1851, 308-12.
[307] Bull. Soc. Philomathique de Paris, (9), 9, 1907, 102-112.
[308] Proc. Ashmolean Soc., 3, 1857, 128-131; Coll. Math. Papers, 1, 36.
[314] Mém. Ac. Sc. St. Pétersbourg, 6, 1851, 146; Jour. de Math., 17, 1852, 348; Oeuvres, 1, 34.
[315] Bull. Soc. Math. de France, 24, 1896, 199-220.
[316] Annales de la Soc. Sc. de Bruxelles, 20, II, 1896, 183-256.
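De Polignac's construction is easy to experiment with. Here is a minimal sketch (my illustration in Python, not from Dickson's text); note the run lengths come out as a rotation of the series as printed above, since the table there starts at 0:

    from math import prod

    # Delete multiples of the given primes from 1, 2, 3, ... and record the
    # lengths of the runs of consecutive deleted numbers. The pattern repeats
    # with period equal to the primorial 2 * 3 * ... * P_n.
    def diatomic_series(primes):
        period = prod(primes)
        deleted = [any(k % p == 0 for p in primes) for k in range(1, period + 1)]
        runs, run = [], 0
        for d in deleted:
            if d:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)  # run that wraps across the period boundary
        survivors = deleted.count(False)
        return runs, survivors, period

    runs, phi, period = diatomic_series([2, 3, 5])
    print(runs)               # [5, 3, 1, 3, 1, 3, 5, 1], symmetric with middle 3
    print(phi, period - phi)  # phi(30) = 8 survivors; 22 deleted numbers
    assert sum(runs) == period - phi   # the two counting facts quoted above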
{"url":"http://www.archive.org/stream/HistoryOfTheTheoryOfNumbersI/TXT/00000445.txt","timestamp":"2014-04-18T14:26:02Z","content_type":null,"content_length":"13231","record_id":"<urn:uuid:abca3777-adf6-4bb2-8305-d73034f6ee0b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 122 Appendix I Review of BTRA Modeling Alan R. Washburn, Ph.D. Distinguished Professor Emeritus of Operations Research Naval Postgraduate School, Monterey, California July 10, 2007 consequence of the incident, a random variable that I will call Y. I think of consequences as being “lives lost,” but MEMORANDUM FOR THE NATIONAL ACADEMY OF any other scalar measure would do. Each node of the tree SCIENCES (NAS) has a set of successor arcs, and there is a given probability distribution over these arcs. One can imagine starting at the Review of the Department of Homeland Security (2006) root and randomly selecting an arc at each node encountered work on bioterrorism. until finally the consequence is determined. In addition to Y, the event tree involved in the 2006 work is such that every Background. The Department of Homeland Security (DHS) path from root to consequence also defines two other random has produced a 2006 bioterrorism study, and is working on variables: subsequent versions. DHS has asked NAS to assess the 2006 work, which I will refer to hereafter as “the 2006 work.” I • A, the biological agent, one of 28 possibilities, and have become acquainted with the work through contacts • S, the scenario. with the NAS committee, and have been invited to provide a review. This is the review. It is intended for a scientific audi- The scenario might be null in the sense that Y is 0 because ence, so I will not hesitate to use the language of probability the incident is terminated prematurely, but is nonetheless in describing what I think was done in 2006, or in how things always defined. might be handled differently in the future. Random variables DHS determines the consequence distributions through are uppercase symbols, P() and E() are the probability and Monte Carlo simulation based on expert input. The results expected value functions, respectively. are collected into decade-width histograms. I will not com- ment further on the methodology for producing the conse- My Qualifications. After working five years for the Boeing quence distributions, since I have not examined it in detail. Company, I joined the Operations Research faculty at the DHS has modified the above definition of an event tree Naval Postgraduate School in 1970, where I did the usual aca- in three senses. One is that the initial branches from the root demic things until retiring in 2006. My teaching includes prob- are rates, rather than probabilities. Call the rate on branch i λi, and let the sum of all of these rates be λ. If one interprets ability and decision theory, which are relevant here. See my resume at http://www.nps.navy.mil/orfacpag/resumePages these rates as independent Poisson rates of the various kinds /washbu.htm for details. I have no biological or medical quali- of incident, then it is equivalent to think of incidents as oc- curring in a Poisson process with rate λ, with each incident fications. 
My acquaintance with the work is mainly through being of type i with probability λi/λ. These ratios can be the the references listed at the end of this review. first set of branch probabilities, so this is all equivalent to the Event Trees. The fundamental idea behind the 2006 work is standard event tree definition, except that we must remember that incidents occur at the given rate λ. This first modification an event tree. As I will use the term in this review, an event tree is a branching structure whose root corresponds to the is thus of little import. assertion that some event has occurred, the event in this The second modification is that an incident might involve case being what I will call an “incident.” The tree branches multiple attacks, each with separate consequences. This is repeatedly until a “scenario” is encountered, at which point a more significant modification, and will be discussed sepa- one will find a probability distribution that determines the rately below. 22 OCR for page 122 2 APPENDIX I The third and most significant modification is that the However, summing to 1 is not sufficient for the SME marginals to be meaningful. This is most obvious when N = branching probabilities (DHS on occasion also calls them “branch fractions”) are not fixed, but are instead themselves 2. If the first branch has probability A, then the second must have probability 1 - A, and therefore the second probability determined by sampling from beta distributions provided indirectly by Subject Matter Experts (SMEs). Let θ be the distribution has no choice but to be the mirror image of the first. If the experts feel that the first marginal has α = 1 and collection of branching probabilities. In each incident we therefore observe (θ, A, S, Y), with θ determining the event b = 1, while the second has α = 2 and b = 2, then we must tree for the other three random variables. This modification explain to the experts that what they are saying is meaning- will also be discussed separately below. less, even though both marginals have a mean of 0.5. The second marginal has no choice but to be the mirror image The Second Modification: Repeated Attacks per Incident. of the first, and must therefore be the first, by symmetry. The vision is that a cell or group of terrorists will not plan Any other possibility is literally meaningless, since there is a single attack, but will plan to continue to attack until no pair of random variables (A1, A2) such that Ai has the ith marginal distribution and also A1 + A2 is always exactly 1. interrupted, with the entire group of attacks constituting I think DHS recognizes the difficulty when N = 2, and has an incident. The effect of this is to change the distribution of consequences of an incident, since a successful attack basically fixed it in that case by asking the SMEs for only one marginal, but the same difficulty is present for N > 2, will be accompanied by afterattacks, the number of which I will call X. I believe that the formula used for calculating and has not been fixed. The sampling procedure offered on E(X) is incorrect. 
The Third Modification: "Random Probabilities." DHS has accommodated SME uncertainty by allowing the branch probabilities themselves to be random quantities, with the SMEs merely agreeing to a distribution for each probability, rather than a specific number. I will refer to each of these probability distributions as a "marginal" for its branch. If a node has N branches, the experts contribute N marginals, one for each branch. Except at the root, these marginals are all beta distributions on the interval [0, 1], and each therefore has two parameters, alpha (α) and beta (β). Each of these distributions has a mean, and since the probabilities themselves must sum over the branches to 1, the same thing must logically be true of the means. The same need not be true of the SME inputs, but DHS seems to have disciplined the elicitation process so that the SME marginal means actually do sum to 1. That is true in all of the data that I have seen.

However, summing to 1 is not sufficient for the SME marginals to be meaningful. This is most obvious when N = 2. If the first branch has probability A, then the second must have probability 1 − A, and therefore the second probability distribution has no choice but to be the mirror image of the first. If the experts feel that the first marginal has α = 1 and β = 1, while the second has α = 2 and β = 2, then we must explain to the experts that what they are saying is meaningless, even though both marginals have a mean of 0.5. The second marginal has no choice but to be the mirror image of the first, and must therefore be the first, by symmetry. Any other possibility is literally meaningless, since there is no pair of random variables (A1, A2) such that Ai has the ith marginal distribution and also A1 + A2 is always exactly 1. I think DHS recognizes the difficulty when N = 2, and has basically fixed it in that case by asking the SMEs for only one marginal, but the same difficulty is present for N > 2, and has not been fixed. The sampling procedure offered on page C-81 of Department of Homeland Security (2006) will reliably produce probabilities A1, …, AN that sum to 1, and which are correct on the average, but they do not have the marginal beta distributions given by the SMEs. This is most obvious in the case of the last branch, since the Nth marginal is never used in the sampling process, but I believe that the marginal distribution is correct only for the first branch. There is a multivariable distribution (the Dirichlet distribution) whose marginals are all beta distributions, but the Dirichlet distribution has only N + 1 parameters. The SME marginals require 2N in total, so the Dirichlet distribution is not a satisfactory joint distribution for A1, …, AN.
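To see why the review reaches for the Dirichlet distribution, and why it falls short: Dirichlet samples do sum to 1 and do have beta marginals, but the i-th marginal is forced to be Beta(αi, α0 − αi), so the N branches share far fewer free parameters than the 2N an SME elicitation supplies. A small check (my own sketch, hypothetical parameters):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha = np.array([2.0, 3.0, 5.0])    # hypothetical Dirichlet parameters
a0 = alpha.sum()
samples = rng.dirichlet(alpha, size=200_000)   # rows sum to 1 exactly

for i, a in enumerate(alpha):
    # the i-th marginal is Beta(a, a0 - a); the KS statistic is tiny
    d, _ = stats.kstest(samples[:, i], stats.beta(a, a0 - a).cdf)
    print(i, round(d, 4))

# An SME asking for, say, Beta(2, 2) on all three branches cannot be honored:
# each marginal Beta(2, 2) would force a0 = 4 while sum(alpha) = 6, a contradiction.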
Estimation of the Spread in Agent-Damage Charts. I have defined Y to be the consequence and A to be the agent. Define Ya to be the consequence if A = a, or otherwise 0, so that the 28 random variables Ya sum to Y. Most of the DHS output deals with the random variable E(Ya | θ), the expected consequence contribution from agent a, given the sampled branch probabilities θ. This quantity is random only because of its dependence on θ, the natural variability of Ya having been averaged out. A sample E(Ya | θj), j = 1, …, 500, is produced by Latin Hypercube Sampling (LHS) of the branch probabilities, each sample including the standard average risk computations for the event tree. A sample mean estimate Ŷa of E(Ya) is then made by Ŷa = (1/500) Σ_{j=1}^{500} E(Ya | θj). The agents are then sorted in order of decreasing sample mean, and displayed in what I will call "agent-damage" charts showing the expected values and spreads as a function of agent. The sample means are normalized before being displayed, probably by forcing them to sum to 1. The normalization destroys information that is relevant to the decisions being made. I do not know the motivation for doing so.

The spreads display the epistemic variability due to SME uncertainty about θ, but suppress all of the aleatoric variability implied by the event tree. If there were no uncertainty about θ, all of the spreads would collapse to a single point (the mean) for each agent. I am not sure how the variability displayed in agent-damage charts is supposed to relate to decision making, but I guess that the graphs are intended to support conclusions such as the following: "I know that the mean damage for agent 1 is larger than the mean damage for agent 2, but I still think that we ought to spend our money defending against agent 2 because of its high associated variability. Even a small prospect of the high damages associated with agent 2 is not acceptable." If that is the kind of logic that the agent-damage charts are intended to support, then they should include aleatoric variability. Without it, the spreads associated with each agent are too small. This issue affects infectious agents more than the other kind, since infectious diseases will have especially high damage variances.

The agent-damage charts are intended for a high-level decision-making audience, and devote considerable space (one of the two available dimensions) to showing the spread associated with each agent. Without the need to show spread, they could be replaced by bar charts or simple tables. If spread is important enough to be displayed, then it ought to be displayed in a manner that facilitates good decisions. I doubt that that is currently the case.
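The epistemic-versus-aleatoric point is easy to see on a one-branch toy tree (my own construction, with invented numbers): let the branch probability θ be Beta(2, 2) (epistemic uncertainty), and let the consequence Y be exponential with mean 100 if the branch fires and 0 otherwise (aleatoric). A chart of E(Y | θ) = 100θ shows far less spread than Y itself:

import numpy as np

rng = np.random.default_rng(3)
n = 500_000
theta = rng.beta(2.0, 2.0, size=n)      # epistemic: uncertain branch probability
fires = rng.random(n) < theta           # aleatoric: does the branch fire?
y = np.where(fires, rng.exponential(100.0, size=n), 0.0)

epistemic_only = 100.0 * theta          # E[Y | theta] for this toy tree
print(epistemic_only.std(), y.std())    # roughly 22 vs. 87: most spread is aleatoric
for q in (0.05, 0.95):
    print(q, np.quantile(epistemic_only, q), np.quantile(y, q))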
Even without the aleatoric issue, I still have concerns about the spread that is displayed. The object ought to be to display the mean and fractiles (the spread) of the random variable E(Ya | θ) for each value of a. The mean of E(Ya | θ) is simply E(Ya) by the conditional expectation theorem, and is estimated by Ŷa. DHS claims graphically that the LHS sample fractiles are also the fractiles of the random variable E(Ya | θ). I suspect that this claim is false. LHS is basically a variance reduction technique that makes the variance of Ŷa smaller than it would be with ordinary sampling. While this effect is welcome, LHS also has an unpredictable effect on variability. The spread that is shown for each agent may not be a good estimate of the spread of the random variable E(Ya | θ).

One final point on estimation. As long as there is no dependence between the branch probabilities at different nodes, as there is not in the 2006 work, it is characteristic of an event tree that P(Ya ≤ y) = E(P(Ya ≤ y | θ)) = P(Ya ≤ y | E(θ)). The first equality is due to the conditional expectation theorem, and the second is because no event tree probability enters more than once into calculating the probability of any scenario. In other words, all information pertinent to the distribution of Ya could be obtained without sampling error by simply replacing the marginal branch distributions by their means. This information includes E(Ya), which is currently being estimated (with sampling error) by Ŷa. (Note added in June 2007. Let me expand the notation to clarify this final point, since it has caused some confusion. Let θ = (Q1, …, Qn), where n is the number of nodes and Qi is the collection of branch probabilities at node i. Also let Qij be the jth branch probability at node i. In the sampling procedure used by DHS to obtain θ, Qij and Qkl are independent random variables as long as i and k are not the same, which is all that is required for my conclusion to be true. While it is certainly true that the branches chosen at nodes i and k are in general dependent, the branch probabilities are not.)

Use of SMEs. It is inevitable in a project like this that probabilities will have to be obtained from Subject Matter Experts, rather than experimentation. The important thing is that the SMEs at least know what they are estimating, and that estimates be used correctly once they are obtained. I have already mentioned that SME estimates of the marginal branch distributions are not reproduced by the sampling procedure. Another concern is at the third stage of the event tree, where SMEs are asked to deal with agent selection. At that stage there are 4 × 8 = 32 nodes in the event tree where an agent might be selected, each of which has 28 branches. I can certainly understand DHS's reluctance to conduct 896 interviews with SMEs, each to determine one of the needed beta distributions. Some kind of a shortcut is needed, but I wonder whether the one adopted is a good one. The SMEs are first asked to determine an "input regarding known preferences of terrorists" for each agent. If I were an SME and somebody asked me to determine the quoted expression for agent a, I would announce my estimate of P(A = a), the probability that agent a is actually selected in an incident. Given all of these SME inputs, DHS then goes over the 896 branches, some of which have a logical 0 for the agent, and assigns probabilities using the rule that the probability is either 0 or else proportional to the SME's agent input, the proportionality constant being selected in each of the 32 cases so that the probabilities sum to 1. My objections are that

• The quoted expression above does not make it clear that the SME input is supposed to be P(A = a). There is a danger of every SME making a different interpretation of what is being asked for.
• If the SME does input the probabilities P(A = a), and if DHS applies the shortcut procedure to fill out the third stage of the event tree, and if the probabilities of the 28 agents are then computed from the tree, they will not necessarily agree with the SME's inputs. This would be true even without my next objection.
• The SME's inputs are subsequently modified by various formulas involving agent lethality, etc. What is an SME who is already acquainted with agent lethality to think of this? Should he adjust his input so that the net result of all this computation is the number that he wanted in the first place? If one is going to elicit SME inputs on probabilities, then it seems to me that one ought to use them as they are intended.

Given that the agent probabilities strongly influence the agent-damage charts, the procedure for eliciting and using them should be an object of concern in future work.
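A toy version of the second bullet (my own construction, with invented numbers): with two agent-selection nodes, one of which rules out agent 3, renormalizing the SME inputs within each node means the agent probabilities computed from the tree no longer equal those inputs.

import numpy as np

w = np.array([0.5, 0.3, 0.2])      # hypothetical SME inputs, read as P(A = a)
masks = np.array([[1, 1, 0],       # node 1: agent 3 is a logical 0 here
                  [1, 1, 1]])      # node 2: all agents feasible
node_prob = np.array([0.5, 0.5])   # chance of reaching each node

branch = masks * w                                   # 0 or proportional to input
branch = branch / branch.sum(axis=1, keepdims=True)  # renormalize per node
print(node_prob @ branch)   # [0.5625 0.3375 0.1], not the inputs [0.5 0.3 0.2]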
Tree Flipping? The process described earlier for generating agent-damage charts may not be a correct statement of what DHS actually did in 2006. The DHS documentation in several places, after describing a single event tree with 17 ranks, states that a separate analysis was actually done for each agent (paragraph C.3.4.2 of Department of Homeland Security [2006], for example). Now, it is possible to end up with the single-tree analysis described earlier by doing that. The essential step is to first calculate P(A = a) for each agent, and then make a new tree where the agent is selected at the root, with the agent selection probabilities on the 28 branches from the root. The second and third ranks of the tree would then be what were originally the first and second, with new probabilities as computed by Bayes' theorem, and the rest of the tree would be unchanged. Since the agent is at the root of the resulting "flipped" tree, using the flipped tree is in effect doing a separate analysis for each agent. The flipped tree would lead to the same earlier described agent-damage charts—the two trees are stochastically equivalent. But I don't see the motivation for doing all this extra work in flipping the tree, and I have some concerns about whether the flipping operation was actually done correctly, or done at all.

One concern is that the thing being manipulated is not an ordinary event tree, and there is no reason to expect that beta distributions will remain beta distributions in the flipping process. Of course, the flipping could occur after the tree is instantiated in each of the 500 replications, but that would get to be a lot of work. I doubt if that has been the case.

The documentation is mute about the tree flipping process. I can only hope that the method actually used for producing agent-damage charts is equivalent to analyzing the single event tree as described above.
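The flipping step itself is just Bayes' theorem; a two-rank toy tree (my sketch, invented probabilities) shows the stochastic equivalence the review describes:

import numpy as np

P_mode = np.array([0.3, 0.7])                # rank 1: attack-mode probabilities
P_agent_given_mode = np.array([[0.6, 0.4],   # rank 2: agent given mode
                               [0.2, 0.8]])

joint = P_mode[:, None] * P_agent_given_mode   # P(mode, agent) in the original tree
P_agent = joint.sum(axis=0)                    # agent probabilities for the new root
P_mode_given_agent = joint / P_agent           # Bayes' theorem, column by column

flipped = P_agent * P_mode_given_agent         # flipped tree's joint distribution
print(P_agent)                                 # [0.32 0.68]
print(np.allclose(joint, flipped))             # True: the two trees agree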
Suggestions. My main suggestion for future work is that distributions for branch probabilities be abandoned in favor of direct branch probabilities, as in a standard event tree. In other words, keep it simple. SMEs will not be comfortable expressing definite values for the probabilities, but then they are probably not comfortable with expressing definite values for α and β, either. Most people are simply not comfortable quantifying uncertainty. There is very little to be gained by including epistemic uncertainty about the branch probabilities in an analysis like this, and much to be lost in terms of complication. Epistemic uncertainty is not even discussed in most decision theory textbooks. Standard software for handling decision trees would become applicable (event trees are just a special case where there are no decisions) if epistemic uncertainty were not present. There is also standard software for handling influence diagrams, which ought to be considered as an alternative to decision trees. Influence diagram software is sometimes used diagnostically, which might be of use in bioterrorism. One might observe that the agent is known to be anthrax, for example, and instantly recompute the target probabilities based on that known condition.

Another suggestion is to examine the potential for optimization. Given that the basic problem is how to spend money to reduce risk, it is too bad that a problem that simple in structure cannot be posed formally. It is possible that some actions that we might take would be effective for all contagious diseases. This should make them attractive, but the low rank of most contagious diseases individually in the agent-damage charts tends to suppress their attractiveness.

My last suggestion is to report future results in a scientific fashion that can be reviewed by scientists. English is a notoriously imprecise language for describing operations involving chance, so I have repeatedly struggled to understand what was actually done in making my way through the references. As a result, I may well have misinterpreted something above that I hope DHS will correct. If I were reviewing the 2006 work for a journal, my first act would be to send the material back to the authors with a request that it be written up using mathematics embedded in English, instead of just English. I know that DHS has to communicate complicated ideas about risk to laypeople. That task should be in addition to reporting the results scientifically, not a replacement for it.

In summary, my opinion is that the 2006 DHS methodology is not yet the "rigorous and technically sound methodology" demanded by the 2004 Homeland Security Presidential Directive 10: Biodefense for the 21st Century. Let me also add that I consider the report as a whole to be a remarkable accomplishment, given the magnitude of the task and the time available to do it.

References. Materials that I have examined before writing this review include the following:

Department of Homeland Security. 2006. Bioterrorism Risk Assessment. Biological Threat Characterization Center of the National Biodefense Analysis and Countermeasures Center. Fort Detrick, Md.

I have also examined various drafts of the following:

Department of Homeland Security. 2007. "A Lexicon of Risk Terminology and Methodological Description of the DHS Bioterrorism Risk Assessment." April 16.

Of all the documents, this last one comes closest to the technical appendix that I recommend. It has been of considerable use to me, but even it does not address tree flipping.
{"url":"http://www.nap.edu/openbook.php?record_id=12206&page=122","timestamp":"2014-04-20T16:01:18Z","content_type":null,"content_length":"69725","record_id":"<urn:uuid:2698aeda-3134-41a8-8400-8458bbf78224>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
define a `static' theory (to enable or disable a set of rules)
Major Section: EVENTS

This macro provides a variant of deftheory, such that the resulting theory is the same at include-book time as it was at certify-book time. We assume that the reader is familiar with theories; see deftheory. We begin here by illustrating how deftheory-static differs from deftheory. Suppose for example that the following events are the first two events in a book, where that book is certified in the initial ACL2 world (see ground-zero).

(deftheory my-theory (current-theory :here))
(deftheory-static my-static-theory (current-theory :here))

Now suppose we include that book after executing the following event.

(in-theory (disable car-cons))

Suppose that later we execute (in-theory (theory 'my-theory)). Then the rule car-cons will be disabled, because it was disabled at the time the expression (current-theory :here) was evaluated when processing the deftheory of my-theory while including the book. However, if we execute (in-theory (theory 'my-static-theory)), then the rule car-cons will be enabled, because the value of the theory my-static-theory was saved at the time the book was certified.

General Form:
(deftheory-static name term :doc doc-string)

The arguments are handled the same as for deftheory. Thus, name is a new symbolic name (see name), term is a term that when evaluated will produce a theory (see theories), and doc-string is an optional documentation string (see doc-string). Except for the variable world, term must contain no free variables. Term is evaluated with world bound to the current world (see world) and the resulting theory is then converted to a runic theory (see theories) and associated with name. Henceforth, this runic theory is returned as the value of the theory expression (theory name). As for deftheory, the value returned is the length of the resulting theory.

We conclude with an optional discussion about the implementation of deftheory-static, for those familiar with make-event. The following macroexpansion of the deftheory-static form above shows how this works (see trans1).

ACL2 !>:trans1 (deftheory-static my-static-theory (current-theory :here))
 (MAKE-EVENT (LET ((WORLD (W STATE)))
               (LIST 'DEFTHEORY 'MY-STATIC-THEORY
                     (LIST 'QUOTE (CURRENT-THEORY :HERE)))))
ACL2 !>

The idea is that upon evaluation of this make-event form, the first step is to evaluate the indicated LET expression to obtain a form (deftheory my-static-theory '(...)), where ``(...)'' is a list of all runes in the current theory. If this form is in a book being certified, then the resulting deftheory form is stored in the book's certificate, and is used when the book is included later.
{"url":"http://www.cs.utexas.edu/users/moore/acl2/v5-0/DEFTHEORY-STATIC.html","timestamp":"2014-04-23T07:57:36Z","content_type":null,"content_length":"4580","record_id":"<urn:uuid:2b2a8931-6939-4465-9576-256434565cb8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
area of a rectangle (thread, November 8th 2008)

OP: suppose the expression (x^2+7x+12)/(x+2)^2 represents the area of a rectangle. find the expression for the perimeter of the rectangle

Reply: You're given an expression for A. Divide that expression by x to get the other side of the rectangle: y = A/x. Then perimeter = x + x + y + y = 2x + 2y and the problem is answered.

OP: The question says suppose the expression represents the area of a rectangle, find the expression for the perimeter of the rectangle. And it is the second equation, where 7x and x^2 are included in the numerator.

Reply: Unless it's something really really stupid like you're expected to say $A = \frac{(x+3)(x+4)}{(x+2)(x+2)} = \left(\frac{x+3}{x+2}\right) \cdot \left(\frac{x+4}{x+2}\right)$ and then you're meant to take $\frac{x+3}{x+2}$ as the length and $\frac{x+4}{x+2}$ as the width. Which is why I asked what x was. In which case all I can say is

OP: haha! this is what i was thinking, but did not think it was that simple, although now I think it is, thank you for your help!

OP: Then I would simply multiply (x+3)/(x+2) and (x+4)/(x+2) by two and then add them together, correct? Final being (2x^2+14)/(x^2+4)

OP: i multiplied the expression to get (x^2+6)/(x^2+4) + (x^2+8)/(x^2+4); add the numerators and the denominator stays the same? what did i do wrong?
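For reference, here is the computation the thread never quite finishes. This is not from the thread, and it assumes the reading suggested above, where the sides are $\frac{x+3}{x+2}$ and $\frac{x+4}{x+2}$:

$P = 2\cdot\frac{x+3}{x+2} + 2\cdot\frac{x+4}{x+2} = \frac{2(x+3) + 2(x+4)}{x+2} = \frac{4x+14}{x+2}$

Only the numerators are added over the common denominator $x+2$; multiplying the denominators as well is where the attempts above go wrong.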
{"url":"http://mathhelpforum.com/math-topics/58430-area-rectangle.html","timestamp":"2014-04-17T22:50:30Z","content_type":null,"content_length":"78357","record_id":"<urn:uuid:c0d8cf90-6974-4c9d-aec4-150ddbf440c4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics

Convex expected residual models for stochastic affine variational inequality problems and its application to the traffic equilibrium problem. (English) Zbl 1193.65107

The authors consider the expected residual (ER) method, which makes use of a residual function for the affine variational inequality problem (AVIP): find $x \in S$ such that $\langle Mx + q, y - x \rangle \ge 0$ for all $y \in S$, where $S = \{ y \in \mathbb{R}^n \mid Ay = b,\; y \ge 0 \}$ with $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, and $M \in \mathbb{R}^{n \times n}$, $q \in \mathbb{R}^n$. The AVIP is a wide class of problems which includes the quadratic programming problem and the linear complementarity problem. The ER method solves the optimization problem: minimize $E[r(x,\omega)]$ subject to $x \in X$, where $r(\cdot,\omega) : \mathbb{R}^n \to \mathbb{R}_+$ is a residual function for the variational inequality problem and $E$ denotes the expectation. The ER model for the stochastic affine variational inequality problem (SAVIP), based on the regularized gap function and the D-gap function for the AVIP, is considered.

Main result: The authors establish convexity of both the regularized gap function and the D-gap function and show that the resulting ER models with the proposed residual functions are convex. One of the ER models proposed here, the ER-D model, is then applied to the traffic equilibrium problem under uncertainty. In the numerical experiment, the ER-D model is compared with the MCP-formulation-based ER model with the Fischer-Burmeister function (the ER-FB model). Here $\Omega$ denotes the sample space of factors contributing to the uncertainty in the traffic network, such as weather and accidents, and $D(\omega)$ is the vector with components $D_w(\omega)$, the travel demand under uncertainty for each origin-destination pair $w \in W$ of the network $\mathcal{G}$. The numerical results show that when the demand $D(\omega)$ is fixed (200) for all $\omega \in \Omega$, the proposed ER-D model with large $\alpha$ (the parameter of the regularized gap function $f_\alpha$) can obtain more reasonable solutions, since the obtained route flows tend to satisfy the demand condition. Moreover, the demand condition is not greatly affected by an increase in the variance of $\omega$, that is, by the change in $\delta$, which defines the interval $\Omega = [\frac{1}{2}-\delta, \frac{1}{2}+\delta]$ and so governs the feasibility of the solutions obtained by the two ER methods, as compared to the ER-FB model.

65K15 Numerical methods for variational inequalities and related problems
90C15 Stochastic programming
90B20 Traffic problems
49J40 Variational methods including variational inequalities
49J55 Optimal stochastic control (existence)
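For context, the regularized gap function and D-gap function mentioned above have standard forms in the literature (due to Fukushima, and to Peng and Yamashita-Fukushima respectively); the paper's exact scaling may differ, so take these as the usual textbook definitions rather than the authors' own:

$f_\alpha(x) = \max_{y \in S} \left\{ \langle Mx + q,\, x - y \rangle - \frac{\alpha}{2} \| x - y \|^2 \right\}, \quad \alpha > 0,$
$g_{\alpha\beta}(x) = f_\alpha(x) - f_\beta(x), \quad 0 < \alpha < \beta.$

In the standard theory, $f_\alpha$ is nonnegative on $S$ and vanishes exactly at solutions of the AVIP, while $g_{\alpha\beta}$ extends this property to all of $\mathbb{R}^n$; both properties are what make them usable as residual functions $r(x,\omega)$ in the ER objective.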
{"url":"http://zbmath.org/?q=an:1193.65107","timestamp":"2014-04-19T01:47:18Z","content_type":null,"content_length":"26821","record_id":"<urn:uuid:42ad4392-f48f-4273-97e5-37027f6c3afd>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
infinite barbell, circle, 37777...777773

So I guess a circle with an infinite radius can't exist, according to Ricky's notion that {1,1,1...2} equals {1,1,1...}, since you never get to the 2. I am not so adamant in my beliefs, since I haven't probably put so much time reading or thinking about the subject. What about a barbell with weights on both ends and an infinite length bar in the middle?? Can these concepts exist in our mathematical minds?? Ricky seems to think they cannot exist?? Maybe I don't understand him correctly. Please correct me, and I am sorry for the accusations. I am just trying to learn from you all.

igloo myrtilles fourmis

Re: infinite barbell, circle, 37777...777773

So I guess a circle with an infinite radius can't exist,

Sure it can. A circle with an infinite radius is a line.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: infinite barbell, circle, 37777...777773

So you're saying it's a line, but not a circle, or are you saying it's a line and a circle at the same time, or what are you saying?

igloo myrtilles fourmis

Re: infinite barbell, circle, 37777...777773

So you're saying it's a line, but not a circle, or are you saying it's a line and a circle at the same time, or what are you saying?

I believe Ricky must mean that at any point we looked at the circle, it would look like a line. I believe (and it's a belief, not a proof) that an infinite circle is impossible, because by definition a circle is a finite thing. If it were infinite we would not be able to define it.

Re: infinite barbell, circle, 37777...777773

John E. Franklin wrote:
So I guess a circle with an infinite radius can't exist

Definition of a circle of radius r with centre c in the plane: the set of all points with distance r from c. So, can you have an infinite circle? (or, can you have another non-equivalent definition of a circle?)

I don't think you can have an infinite circle. Say there is an infinite circle (with r = "infinity") in the plane. Then take the set of all points that make up the circle and ask where the centre is... apparently, it's "infinitely far away" from all of them. But there is no point on the plane that is infinitely far away from any other point on the plane, simply because any two points have a finite distance between them. So you can define the centre on the plane, in which case none of the points of the circle are on the plane (not because they are "infinitely far away" and such... because they are, as previously stated, not on the plane, so there are no points to make a circle with - also, the uniqueness of the centre is then void, and that throws another potential issue). So I put it to you that you cannot have an infinitely large circle.

...and please, before you go throwing around phrases like "infinitely large" and such, take a second to think about what you mean, and see if you can come up with a clear, precise definition. Otherwise all you're saying really is just meaningless babble. And if you're not clear on anything above, I'll be happy to further discuss it.

Bad speling makes me [sic]

Re: infinite barbell, circle, 37777...777773

Sure it can. A circle with an infinite radius is a line.

I just realised you are talking about non-Euclidean Geometry, aren't you?
I don't know much about that, but I believe people who do know would assert that the infinite circle could be imagined, that you would be able to imagine walking along its perimeter, and that a circle of such magnitude, or approaching infinity, would appear as a line. In the opposite sense to the way two parallel lines appear to meet in the distance, this circle would never appear to have rounded edges.

Last edited by cray (2006-10-10 10:39:06)

Re: infinite barbell, circle, 37777...777773

cray wrote:
Sure it can. A circle with an infinite radius is a line.
I just realised you are talking about non-Euclidean Geometry, aren't you?

Ah, but there are many types of non-Euclidean geometry, so you should really specify what sort you're using. For example, you would be hard-pressed to imagine an infinite circle if you're using spherical geometry - since all your points are on a sphere, there is most certainly a limit to how large a circle could get!

Bad speling makes me [sic]

Re: infinite barbell, circle, 37777...777773

Not exactly. Take a piece of paper, and draw a fair sized circle. Now double or triple the size. Continue doing this, drawing as much of the circle as you can. You should see the curve of the circle start to become straighter and straighter. It's with this observation that you can reach a conclusion that if we take the limit of a circle as its radius approaches infinity, it becomes a line. Not rigorous mathematics, just something to think about.

And you most certainly need an infinite circle when dealing with improper integrals in polar coordinates; the most famous example is the Gaussian integral.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: infinite barbell, circle, 37777...777773

Ricky wrote:
It's with this observation that you can reach a conclusion that if we take the limit of a circle as its radius approaches infinity, it becomes a line.

Indeed - but this does not mean that an infinite circle exists. Just because $\lim_{x \to c} f(x)$ exists, that does not mean that f(c) exists - f(x) could well be undefined at c. So although the limit of larger and larger circles may indeed be a line, this does not mean that there exists an infinitely large circle.

Bad speling makes me [sic]

Re: infinite barbell, circle, 37777...777773

For example, you would be hard-pressed to imagine an infinite circle if you're using spherical geometry - since all your points are on a sphere, there is most certainly a limit to how large a circle could get!

I wish I'd listened to my teacher more when I was at school, because I would be taking a rough ride to nowhere to try and prove you wrong with my knowledge, but it sure sounds like an obfuscation to bring in circular geometric planes! I am not disputing your word, just making clear I haven't a clue what you're talking about! HA!

Anyway, my simple points were that

1) I don't think that an infinite circle can exist in practice, simply because by definition every aspect of it would be infinitely spaced apart - just the fact that logic alone would tell you it is impossible, as a circle would have a defined radius that cannot be infinitely long. A circle is a defined article, so to speak.
2) If one were to begin the mathematical exercise of defining an infinitely large circle, the definition would necessarily prove that the circle was not infinite, since the implication that something is circular implies that it is also curved, and defining a curve must be done in a finite space.

Re: infinite barbell, circle, 37777...777773

I really like your logic, Cray. What about an infinite spiral?? Can you have that??

igloo myrtilles fourmis

Re: infinite barbell, circle, 37777...777773

Dross wrote:
Ricky wrote:
It's with this observation that you can reach a conclusion that if we take the limit of a circle as its radius approaches infinity, it becomes a line.
Indeed - but this does not mean that an infinite circle exists. Just because $\lim_{x \to c} f(x)$ exists, that does not mean that f(c) exists - f(x) could well be undefined at c. So although the limit of larger and larger circles may indeed be a line, this does not mean that there exists an infinitely large circle.

Does x^2 exist at infinity? Or is that a meaningless question?

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Re: infinite barbell, circle, 37777...777773

I don't know if x^2 exists at infinity. What about x - 5 at infinity?? Or ln(x) at infinity? Are they all the same thing? Infinity? If everything about infinity is infinity, then you can't have an object like a circle or a drawing of a house the size of infinity. But if infinity is a world unto itself, but separate from the real world, then maybe all these things could exist there. From the real world, it all looks like infinity, one undefined number. But from the infinite world, then things are as diverse as they are here in the real world.

igloo myrtilles fourmis
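A side note, not from the thread itself: Ricky's limit observation can be made precise with curvature. A circle of radius $r$ has constant curvature $\kappa = 1/r$, and $\lim_{r \to \infty} \kappa = 0$, while a straight line is exactly the curve with $\kappa = 0$ everywhere. So ever-larger circles do approach a line, but, as Dross points out, no actual circle attains the limit: every circle has some finite radius $r$, hence $\kappa = 1/r > 0$.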
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=46112","timestamp":"2014-04-18T19:00:35Z","content_type":null,"content_length":"28122","record_id":"<urn:uuid:f38f1d8d-3d36-4e9f-933d-c4331717031d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Physiology and Mathematics Education

As a math lifer, I'm concerned about what more and more people are calling a crisis in STEM (Science, Technology, Engineering and Mathematics) education in the United States. It seems as if every few days we read about the U.S. lagging behind other countries in math or science test scores. We continue to lead the world in innovation for the moment, but it's hard to picture how we can sustain that lead without producing (or importing) more scientists and engineers (and, dare I say, mathematicians). Equally importantly, in the age of the "information economy", if average U.S. citizens (not the ones in the right tail on math scores) come up short in STEM ability, where are they going to find jobs that pay respectable salaries? (This is very important to me: I'm on the cusp of retirement, and I'm counting on them both to pay down the burgeoning national debt and to sustain my lifestyle once my paychecks stop.)

The press (and, thus, the populace) are gradually waking up to the looming crisis, as evidenced by government panels, academic workshops and periodic episodes of public hand-wringing. Naturally, this spawns a wave of finger-pointing as we seek (a) one single explanation for what is undoubtedly a phenomenon with multiple roots and (b) someone (other than ourselves) whom we can blame. A backgrounder from the Heritage Foundation aptly points out that if the K-12 pipeline does not produce students well grounded in science and math, colleges and universities will be hard-pressed to find students willing and able to major in those subjects (and, by extension, graduate program enrollments will also suffer).

Now please indulge me in an apparent digression that will actually tie back (I hope). I recently found myself engaged in a conversation about math education, math ability and its connection (if any) to gender. In this conversation, I recalled a colleague (a professor of economics) complaining to me that his daughter had essentially been told outright by a high school teacher that, being female, she should have diminished expectations for learning upper level mathematics (I suspect this meant calculus, but the conversation was long ago and the details elude me). My colleague was justifiably incensed.

I also recalled some misadventures from my graduate student days, when I taught a math class for elementary education majors. On paper, the course dealt with how to teach math to K-6 students. In practice, it meant (gulp) teaching K-6 math to college students majoring in elementary education. Lest you think I exaggerate, let me share an anecdote. My then girlfriend also taught the course, in the summer, when the students were elementary school teachers returning to pick up additional credits. One student asked if she could bring her eight year old son to class to avoid daycare hassles, which request my girlfriend was happy to accommodate. On the day of the first exam, she saw him sitting with nothing to occupy him, so she gave him a spare copy of the exam, figuring he could color on the back. Instead, he flipped it over, did the exam -- and received the highest score in the class!

So, on the one hand, we have people telling children that they are doomed to be weak at math (and science?) because they lack a Y chromosome. I suspect similar comments are made based on race or other factors. On the other hand, we have teachers (at least in elementary school) who themselves are weak in math (and science?) and are inclined to pass their fear of the subject on to their students.
(If the teacher finds something difficult, he or she is likely to communicate to the pupil that the pupil should not worry about finding it challenging.) On the gripping hand (and this is purely my conjecture), I suspect we also discourage students of all levels from taking STEM courses by rewarding them for weak work in non-STEM courses. I think it's harder to give inflated grades in STEM subjects because they tend to have definitive correct and incorrect answers. (At least it's harder for me to give them.) At the same time, it's easy for a student to be discouraged at having to work hard for medium grades in STEM subjects when they can get better grades with less effort in other classes.

What ties this to the STEM crisis, for me, is a recent article in Newsweek about physiological triggers for improvement (or degradation) in the human brain. Specifically, according to author Sharon Begley,

Finally, being told that you belong to a group that does very well on a test tends to let you do better than if you're told you belong to a group that does poorly; the latter floods you with cortisol, while the former gives you the wherewithal and dopamine surge to keep plugging away.

So the effect of communicating diminished expectations to students may be more than psychological; it may trigger physiological changes that create a self-fulfilling prophecy ... and, in doing so, deepen the STEM crisis.

4 comments:

1. This is a fantastic post, Paul. I don't have anything to add, since you covered the issues very well. I am surprised (and saddened) that anyone would tell a woman in this day and age that her math abilities may be limited. I suppose gender stereotypes die hard.

2. @Laura: Thanks (and thanks for the link on your blog). I believe the HS teacher in question was male, but I've also heard of female teachers saying the same thing, which is in some respects more dispiriting (but not surprising -- it gives them an excuse for their own limitations).

Scientific fact (and political correctness) aside, this is not a good time to be discouraging anybody from taking a whack at math. One of the issues may be ensuring that people who teach math are actually interested in math themselves (and confident in their own abilities). When I taught the math ed course (and granted this was a generation plus ago), I had students saying that they didn't understand why they needed to know math when they were going to be teaching English/music/history... Unfortunately, the combination of budget cuts and union "bumping" rules meant that some of them ultimately were going to be "teaching" math.

3. A comment from the UK. In order to teach primary school here, it is essential to have a high school pass in math. My wife (a secondary math teacher) also coached one-to-one. A prospective primary teacher approached her for coaching one-to-one, because she could not pass the math exam she needed. My wife refused to teach the young lady, because she had announced that the reason she had failed the exam in the past was "I hate math". Why? "Because if you say that you hate math, you will infect all the children that you teach in primary school." A few days later, the young lady returned, penitent, apologised, and has gone on to be an enthusiastic teacher of primary children _including_ encouraging them in their math.

4. Please convey to your wife my sincere respect for taking that position. I'm happy to hear the young lady found her way back to the Light Side.
I have not kept up with our licensing requirements for primary school teachers here, but I'm not optimistic that we require any measurable mathematical competence, and I'm sure we do not police attitudes.
{"url":"http://orinanobworld.blogspot.com/2011/01/physiology-and-mathematics-education.html","timestamp":"2014-04-19T14:31:04Z","content_type":null,"content_length":"150038","record_id":"<urn:uuid:884b2614-f1d5-4658-aff6-bcaf8f876bb2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
hexgame – Provide an environment to draw a hexgame-board

Hex is a mathematical game invented by the Danish mathematician Piet Hein and independently by the mathematician John Nash. This package defines an environment that enables the user to draw such a game in a trivial way.

Sources: /macros/latex/contrib/hexgame
Documentation: Readme
Version: 1.0
License: The LaTeX Project Public License
Copyright: 2006 Meron Brouwer
Maintainer: Meron Brouwer
Contained in: TeXLive as hexgame; MiKTeX as hexgame
Topics: process game diagrams, gamebooks, and other amusements

Download the contents of this package in one zip archive (30.5k).
{"url":"http://ctan.org/pkg/hexgame","timestamp":"2014-04-17T10:05:06Z","content_type":null,"content_length":"5150","record_id":"<urn:uuid:5f9d8625-9324-4b29-bdf9-e101286067e1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the best Special walls in LC? Name at least 30. I know I'm asking a lot, but I need this! Whoever answers properly gets the BA, plus 3 upvotes.

Follow this link and the answer shall be yours. They all have wicked Sp. Def. For some reason that was not explained, this answer was hidden. It may be because it's not the answer you are looking for. If that is so, just comment and tell me so.

Best answer

Let's see...
Chinchou (This is 10 so far)
Gothita (This is the 20 mark.)
Magby (the mark of 30)
MANTYKE (120 Special Defense!!!)
MIME JR.
Ponyta (40 mark, 10 more to go)
The ones in caps are great. 50 so far... I'm gonna put all that are above 60 base Special Defense, and bold it if it is its highest stat.
{"url":"http://pokemondb.net/pokebase/110539/what-are-the-best-special-walls-in-lc?show=110546","timestamp":"2014-04-25T04:38:23Z","content_type":null,"content_length":"41227","record_id":"<urn:uuid:e33cee12-c4eb-4208-a9b2-8bba11133005>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 27

, 1997
"... We give algorithms for finding the k shortest paths (not required to be simple) connecting a pair of vertices in a digraph. Our algorithms output an implicit representation of these paths in a digraph with n vertices and m edges, in time O(m + n log n + k). We can also find the k shortest paths from a given source s to each vertex in the graph, in total time O(m + n log n + kn). We describe applications to dynamic programming problems including the knapsack problem, sequence alignment, maximum inscribed polygons, and genealogical relationship discovery. 1 Introduction We consider a long-studied generalization of the shortest path problem, in which not one but several short paths must be produced. The k shortest paths problem is to list the k paths connecting a given source-destination pair in the digraph with minimum total length. Our techniques also apply to the problem of listing all paths shorter than some given threshold length. In the version of these problems studi..."
Cited by 290 (1 self)

- IEEE/ACM Transactions on Networking, 1998
"... The bounded shortest multicast algorithm (BSMA) is presented for constructing minimum-cost multicast trees with delay constraints. BSMA can handle asymmetric link characteristics and variable delay bounds on destinations, specified as real values, and minimizes the total cost of a multicast routing tree. Instead of the single-pass tree construction approach used in most previous heuristics, the new algorithm is based on a feasible-search optimization strategy that starts with the minimum-delay multicast tree and monotonically decreases the cost by iterative improvement of the delay-bounded multicast tree. BSMA's expected time complexity is analyzed, and simulation results are provided showing that BSMA can achieve near-optimal cost reduction with fast execution."
Cited by 45 (1 self)

, 2003
"... We prove super-linear lower bounds for some shortest path problems in directed graphs, where no such bounds were previously known. The central problem in our study is the replacement paths problem: Given a directed graph G with non-negative edge weights, and a shortest path P = {e_1, e_2, ..., e_p} between two nodes s and t, compute the shortest path distances from s to t in each of the p graphs obtained from G by deleting one of the edges e_i. We show that the replacement paths problem requires Ω(m√n) time in the worst case whenever m = O(n√n). This also establishes a similar..."
Cited by 26 (8 self)
- In Proc. of 11th UK Performance Engineering Workshop, 1995
"... Efficient management of networks requires that the shortest route from one point (node) to another is known; this is termed the shortest path. It is often necessary to be able to determine alternative routes through the network, in case any part of the shortest path is damaged or busy. The k-shortest paths represent an ordered list of the alternative routes available. Four algorithms were selected for more detailed study from over seventy papers written on this subject since the 1950's. These four were implemented in the `C' programming language and, on the basis of the results, an assessment was made of their relative performance. 1 The Background The shortest path through a network is the least cost route from a given node to another given node, and this path will usually be the preferred route between those two nodes. When the shortest path between two nodes is not available for some reason, it is necessary to determine the second shortest path. If this too is not available, a thir..."
Cited by 17 (0 self)

, 1999
"... The shortest path problem is a classical network problem that has been extensively studied. The problem of determining not only the shortest path, but also listing the K shortest paths (for a given integer K > 1) is also a classical one but has not been studied so intensively, despite its obvious practical interest. Two different types of problems are usually considered: the unconstrained and the constrained K shortest paths problem. While in the former no restriction is considered in the definition of a path, in the constrained K shortest paths problem all the paths have to satisfy some condition - for example, to be loopless. In this paper new algorithms are proposed for the unconstrained problem, which compute a superset of the K shortest paths. It is also shown that ranking loopless paths does not hold in general the Optimality Principle and how the proposed algorithms for the unconstrained problem can be adapted for ranking loopless paths. Keywords: Network, tree, path, path d..."
Cited by 17 (5 self)

- In Proc. 19th annual ACM-SIAM symposium on Discrete algorithms, 2008
"... Let G = (V(G), E(G)) be a weighted directed graph and let P be a shortest path from s to t in G. In the replacement paths problem we are required to compute for every edge e in P, the length of a shortest path from s to t that avoids e. The fastest known algorithm for solving the problem in weighted directed graphs is the trivial one: each edge in P is removed from the graph in its turn and the distance from s to t in the modified graph is computed. The running time of this algorithm is O(mn + n^2 log n), where n = |V(G)| and m = |E(G)|. The replacement paths problem is strongly motivated by two different applications. First, the fastest algorithm to compute the k simple shortest paths from s to t in directed graphs [21, 13] repeatedly computes the replacement paths from s to t. Its running time is O(kn(m + n log n)). Second, the computation of Vickrey pricing of edges in distributed networks can be reduced to the replacement paths problem. An open question raised by Nisan and Ronen [16] asks whether it is possible to compute the Vickrey pricing faster than the trivial algorithm described in the previous paragraph. In this paper we present a near-linear time algorithm for computing replacement paths in ..."
Cited by 12 (1 self)
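To make the "trivial algorithm" from the last abstract concrete, here is a minimal Python sketch (my own illustration with a made-up toy graph, not code from any of the cited papers): compute one shortest s-t path with Dijkstra, then rerun Dijkstra once per path edge with that edge banned. This is exactly the slow baseline the cited papers improve on.

import heapq

def dijkstra(adj, s, t, banned=frozenset()):
    """Shortest s->t distance in a digraph, skipping edges in `banned`.

    adj maps each node to a list of (successor, weight) pairs; weights >= 0.
    Returns (distance, parent) so callers can rebuild the path.
    """
    dist = {s: 0.0}
    parent = {}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        if u == t:
            break                         # t's distance is final once popped
        for v, w in adj.get(u, ()):
            if (u, v) in banned:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist.get(t, float("inf")), parent

def replacement_paths(adj, s, t):
    """For each edge on one shortest s->t path, the s->t distance avoiding it."""
    _, parent = dijkstra(adj, s, t)
    edges, node = [], t
    while node != s:                      # walk parents back from t to s
        edges.append((parent[node], node))
        node = parent[node]
    edges.reverse()
    return {e: dijkstra(adj, s, t, banned={e})[0] for e in edges}

# toy digraph: the shortest path is s -> a -> b -> t with length 3
adj = {"s": [("a", 1), ("b", 4)], "a": [("t", 3), ("b", 1)], "b": [("t", 1)]}
print(replacement_paths(adj, "s", "t"))
# {('s', 'a'): 5.0, ('a', 'b'): 4.0, ('b', 't'): 4.0}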
"... We say an algorithm on n x n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly subcubic if it runs in O(n^(3-δ) · poly(log M)) time for some δ > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do:
• The all-pairs shortest paths problem on weighted digraphs (APSP).
• Detecting if a weighted graph has a triangle of negative total edge weight.
• Listing up to n^2.99 negative triangles in an edge-weighted graph.
• Finding a minimum weight cycle in a graph of non-negative edge weights.
• The replacement paths problem on weighted digraphs.
• Finding the second shortest simple path between two nodes in a weighted digraph.
• Checking whether a given matrix defines a metric.
• Verifying the correctness of a matrix product over the (min,+)-semiring.
Therefore, if APSP cannot be solved in n^(3-ε) time for any ε > 0, then many other problems also ..."
Cited by 10 (5 self)

- in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
"... Measuring distance or some other form of proximity between objects is a standard data mining tool. Connection subgraphs were recently proposed as a way to demonstrate proximity between nodes in networks. We propose a new way of measuring and extracting proximity in networks called "cycle free effective conductance" (CFEC). Importantly, the measured proximity is accompanied with a proximity subgraph, which allows assessing and understanding measured values. Our proximity calculation can handle more than two endpoints, directed edges, is statistically well-behaved, and produces an effectiveness score for the computed subgraphs.
We provide an efficient algorithm to measure and extract proximity. Also, we report experimental results and show examples for four large network data sets: a telecommunications calling graph, the IMDB actors graph, an academic co-authorship network, and a movie recommendation system."
Cited by 9 (0 self)

, 1997
"... In the navigation system, it is very important not only to find the shortest path but also a detour, in case of a traffic jam for example. This paper surveys algorithms for the shortest path problem and the k shortest path problem at first, extends the latter algorithm for the 2-terminal k shortest paths problem, using AI search techniques such as the bidirectional A* algorithm, then defines `detour' precisely, and proposes algorithms for finding a realistic detour based on these algorithms. The efficiency and property of the algorithms are examined through experiments on an actual road network. 1. INTRODUCTION The shortest path problem is very important in various fields. For example, route navigation systems must show the shortest route to the destination as fast as possible. Thus, the shortest path problem has been studied very well for a long time. For example, the Dijkstra method is the most famous and traditional algorithm for this problem. To make this algorithm more efficient, m..."
Cited by 7 (2 self)

, 1999
"... In this paper an algorithm for the ranking of loopless paths problem is proposed which is valid for directed and undirected networks. Although its theoretical computational complexity is still an open problem, the algorithm appears to perform well in practice, as the reported comparative computational experiments allow us to conclude. This conclusion is reinforced with some results obtained with larger networks; more than 500,000 loopless paths were ranked in 10,000-node and 100,000-arc euclidian networks in about 0.35 seconds of CPU execution time when all the arcs are undirected, and in about 0.15 seconds for directed euclidian networks, using a server with 128 Mbytes of RAM and a 275 MHz processor running DEC Unix 3.2. Keywords: path, loopless path, path distance, paths ranking, network. 1 Introduction Let (N, A) denote a given network, where N = {v_1, ..., v_n} is a finite set whose elements are called nodes and A = {a_1, ..., a_m} is a proper subset of N ..."
Cited by 6 (2 self)
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=105660","timestamp":"2014-04-19T23:34:32Z","content_type":null,"content_length":"40509","record_id":"<urn:uuid:c27429f7-4961-4b39-a7f0-206049fea1a0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://quizlet.com/1766216/print","timestamp":"2014-04-17T01:13:22Z","content_type":null,"content_length":"221783","record_id":"<urn:uuid:227c7757-47b0-4dc2-a4cb-e693b5a756b8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Things Don't Add Up

Last time we learned about continuous distributions and how they are described by the frequency function. I also promised to save us from the infinitesimal probabilities represented in the frequency function. The answer is the Cumulative Distribution Function (CDF), which is merely the area under the frequency function curve to the left of a given point. (You may remember with horror from your schooldays that the way to get this area is to integrate the curve, but we need not worry about the mechanics of this here.) What matters is that the area to the left of any value x represents not the probability of the variable being exactly x (which is infinitesimal) but the probability that it is less than x. And this is a nice finite number, varying of course from 0 to 1 as x increases from some suitably low value (minus infinity if you like) to a suitably high one (plus infinity). Furthermore, this cumulative probability is what we are generally interested in. (We rarely care whether the project will finish on a particular date, but rather whether it will finish by a particular date.) So now we are cooking with gas! The cumulative curve is generally in the shape of an S (since its gradient is at its highest at a point coinciding with the peak of the frequency function) and is sometimes called an S-curve. Here is the CDF for the near-normal distribution we got last time by adding together 10 throws of a die:

The values read from this curve are often called percentiles. We say that "the 80th percentile is $40", meaning that there is an 80% chance that the project cost will not exceed $40. This is also sometimes called the "P80" point.

Now, suppose we are interested in the P80 point, and we have a project comprising two subprojects with the same P80 cost, $40. What is the P80 point for the total project? From what we have learned so far, it should be clear that it is NOT $80. Why not? Because we have learned that means and variances are additive. Since it is the variances, not the standard deviations, that add, any value characterized by a non-zero offset from the mean cannot itself be additive.

The fact is that we do not have enough information to answer the question, but we can say that the answer will be less than $80. This should not surprise us. If there is a 20% chance of each project costing more than $40, the chance of them BOTH doing so is only 4%, so we might expect that $80 is more like the P96 point. (This is not rigorous, because there are an infinite number of other ways in which the total cost could exceed $80, but it should give you a feel for why you cannot add percentiles. And for the same reason in reverse, you cannot "pro-rate" percentiles either.)

To answer the question we have to know something about the two individual distributions, so let's suppose that they are both the same normal distribution we illustrated above. We know that the distribution for the total project will be normal with a mean of $70 and a standard deviation of about $7.64 (5.4 times the square root of 2, since the variances, not the standard deviations, add). Now, for the normal distribution each percentile lies a given number of standard deviations away from the mean. The relationship is not easy to calculate but can be looked up in tables. The P80 point is about 0.84 standard deviations above the mean, so we can deduce that the P80 point is actually $70 + 0.84 × $7.64, or about $76.40. The naïve $80 answer is actually about 1.31 standard deviations above the mean. Looking up the tables backwards, we can also deduce that this is actually roughly the P90 point.
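For readers who want to check these numbers, here is a minimal Python sketch using SciPy's normal distribution. The per-subproject mean of $35 and standard deviation of $5.40 follow from the ten-dice distribution used above.

```python
from scipy.stats import norm
import math

mu, sigma = 35.0, 5.4                               # one subproject
p80_single = norm.ppf(0.80, loc=mu, scale=sigma)    # ~ $39.5, i.e. the ~$40 P80

# Means and variances add; percentiles do not.
mu_total = 2 * mu                                   # $70
sigma_total = sigma * math.sqrt(2)                  # ~ $7.64
p80_total = norm.ppf(0.80, loc=mu_total, scale=sigma_total)  # ~ $76.4

# Probability the total exceeds the naive $80 figure:
print(norm.sf(80, loc=mu_total, scale=sigma_total))  # ~ 0.095, so $80 is ~P90
```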
We have nearly finished our three-part introduction to probability, and I want to finish by talking about measures of central tendency. As their name implies, they are measures of what we might imprecisely call the centre of a distribution. The normal distribution we have dealt with so far is symmetrical, so there is not much doubt about where its centre is, but if the distribution is not symmetrical we have a more nuanced situation. Take this skewed beta distribution:

The mean value is $4889. We can also identify the peak of the frequency function, the most likely value, also called the mode, which is about $4800. Finally there is the median, which is the P50 point on the S-curve, at $4867. Which of these measures is best depends upon what one is using it for. They are measuring different things. Suppose we have a group of 30 people in a room. 29 are regular guys but the 30th is Warren Buffett, with a net worth of, say, $30 billion. The mean net worth will be pretty close to $1 billion, but this does not tell us much about the typical guy in the group. The mode or median would give us a better idea. If Buffett leaves and George Soros replaces him, the mean net worth will change but the median and the mode will not. So in cases where we have very skewed distributions, the mode or median will be more representative. And the median is more reliable than the mode, because there is not really a mode in this example. (No two people are likely to have exactly the same net worth, so we have to aggregate them into ranges to even get the concept of a most likely value.) This is why the median is the most widely quoted value when describing social phenomena like income and wealth, which tend to be very unevenly distributed.

But medians are just percentiles, which means we cannot add them together. Remember that the rule about adding up means applies regardless of the distribution, so although the mean is not always the most representative value, it is very convenient for computation. Let's see what happens when we add two subprojects with this skewed cost distribution:

You will see that the mean is indeed twice the old mean, to within a rounding error of $1, at $9779. But the mode is about $9700, and the median $9760. You may also notice that the three values have become closer together in proportional terms. The reason is that the distribution has become less skewed. And the reason for that is our old friend the Central Limit Theorem. Even highly skewed distributions obey the theorem and tend towards the (symmetrical) normal distribution.

One final point about the CLT and about the fact that variances are additive, which quantifies the "swings and roundabouts" effect we all experience. Both are real. They are not aberrations. I have heard lecturers, and others who should know better, talk about the CLT (when they actually mean the additive nature of variances) as if it were some mischievous gremlin which needs to be overcome or counteracted in some way. (This is generally because they do not get the answers they want. In this they make two mistakes: firstly, misdiagnosing the problem altogether, and secondly, blaming it on the CLT rather than on the additive nature of variances.) But it reflects real life; if one neutralizes it in one's model of real life, then one's model will be wrong.
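A quick simulation makes both points visible: means add, medians do not, and the sum is less skewed than its parts. The beta parameters and dollar scale below are illustrative guesses chosen to give a mean near $4889; they are not the author's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
scale = 12222          # illustrative scaling to dollars
a, b = 20, 30          # a mildly skewed beta; mean = a/(a+b) = 0.4

cost1 = scale * rng.beta(a, b, size=1_000_000)   # mean ~ $4889
cost2 = scale * rng.beta(a, b, size=1_000_000)
total = cost1 + cost2

for name, x in [("one project", cost1), ("total", total)]:
    print(name, round(np.mean(x)), round(np.median(x)))
# Means add exactly in expectation (~ $9778); medians do not:
# median(total) > 2 * median(cost1), because the summed distribution
# is more symmetric, as the Central Limit Theorem predicts.
```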
Finally, a book recommendation: The Flaw of Averages by Sam Savage warns of the danger of basing decisions on averages (or indeed any single-point estimate of a random variable). It is written for the general reader, is a lot of fun, and refers to project schedules to exemplify a particularly pernicious strain of the "flaw".
{"url":"http://www.arraspeople.co.uk/camel-blog/projectmanagement/why-things-dont-add-up/comment-page-1/","timestamp":"2014-04-21T15:17:53Z","content_type":null,"content_length":"96484","record_id":"<urn:uuid:1d63eb26-41f4-4703-88b1-8e787d731a4e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
1.OA.5 Worksheets

Relate counting to addition and subtraction (e.g., by counting on 2 to add 2).
• Students learned to count to 100 by ones and by tens (K.CC.1).
• Students learned to count up starting at any number between 1 and 100 (K.CC.2).
• This will be the first formal experience students have counting by anything other than one or ten.
{"url":"http://commoncoresheets.com/SortedByGrade.php?Sorted=1oa5","timestamp":"2014-04-17T04:29:24Z","content_type":null,"content_length":"99746","record_id":"<urn:uuid:a7879e11-e678-4fcd-b613-fc052273236c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Simulation of PAHs Formation and Effect of Operating Conditions in DI-Diesel Engines Based on a Comprehensive Chemical Mechanism

Advances in Mechanical Engineering, Volume 2013 (2013), Article ID 567159, 19 pages. Research Article.

School of Aerospace, Tsinghua University, Beijing 100084, China

Received 22 November 2012; Revised 25 April 2013; Accepted 13 May 2013. Academic Editor: Moran Wang.

Copyright © 2013 Bei-Jing Zhong and Jun Xi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Three-dimensional numerical simulations of polycyclic aromatic hydrocarbon (PAH) formation in a Chaochai 6102bzl direct injection diesel engine are performed. n-Heptane is chosen as the fuel. A detailed mechanism, which includes 108 species and 572 elementary reactions that describe n-heptane oxidation and PAH formation, is proposed. A reduced kinetic mechanism, with only 86 reactions and 57 species, is developed and incorporated into computational fluid dynamics (CFD) software for the numerical simulations. Results show that PAHs, which were mostly deposited at the bottom of the combustion chamber wall, first increased and then decreased with increasing crank angle. Furthermore, the engine operating conditions (intake vortex intensity, intake air pressure, fuel injection advance angle, load, and engine speed) had a significant effect on PAH formation.

1. Introduction

The diesel engine is widely used in many kinds of power devices and has gradually become one of the main power sources in different types of vehicles because of its low fuel consumption, high thermal efficiency, good compatibility, and energy savings, as well as low HC and CO emissions [1]. However, a diesel engine can emit a large number of soot particles during operation [2-5]. Research indicates that diesel engines may convert 0.2-0.5% of their fuel into extra fine soot particles (~0.1 μm diameter) [6] emitted from the exhaust pipe. These particles consist of hydrocarbons (including aromatic hydrocarbons) adsorbed on carbon black. Therefore, controlling soot emission is a key issue in the development of diesel engines. To improve engine soot emission and engine performance, it is necessary to understand the soot structure and the soot formation mechanism in diesel engines.

In general, nucleation and growth are the two processes considered in modelling and simulating soot formation. Nucleation covers the dehydrogenation of hydrocarbons and the recombination of small molecules, which result in the formation of polycyclic aromatic hydrocarbons (PAHs) as soot cores. Growth covers a series of processes, including the transition of PAHs to small particles and the change of soot particles by collision, condensation, absorption, and chemical reactions of gaseous species on the particle surface [7].

The numerical simulation of turbulent diffusion combustion in direct injection (DI) diesel engines is highly complicated because of the strong nonlinear interaction between turbulent flow and complex chemical reactions. Some researchers [8, 9] directly used the EBU model in three-dimensional simulations to describe diesel engine combustion. Cordiner et al.
[10] used a mixed 1D-3D numerical procedure, together with the Shell and characteristic-time combustion models, to analyse combustion and exhaust emissions in a dual-fuel diesel/natural gas engine. In the procedure, the 1D code provides the pressure boundary conditions as input for the 3D simulations. Chen et al. [11] and Lim et al. [12] performed three-dimensional simulations of diesel engine combustion and emissions using the eddy-dissipation-concept (EDC) model with simplified reaction mechanisms. Hu et al. [13] developed a mixed-mode combustion model, which takes advantage of the mixing details provided by large-eddy simulation, to cover the major regimes pertaining to diesel engine combustion. The model uses kinetically controlled, quasisteady homogeneous, quasisteady flamelet, and partially premixed combustion modes. The combustion models used for the simulation of turbulent diffusion flames also include the probability-density-function (PDF) transport equation model and the laminar flamelet model [14, 15]. Given detailed chemical kinetics, the first two models require substantial computation because the transport equations for all species have to be solved. Unlike them, the laminar flamelet model, derived separately by Peters [16] and Kuznetsov [17], treats the mixture fraction as an independent variable. This model uses the scalar dissipation rate to represent the mixing process and views the turbulent flame as an ensemble of thin, laminar, locally one-dimensional flamelet structures embedded within the turbulent flow field [18, 19]. The model can resolve the local non-equilibrium chemistry caused by aerodynamic strain, and it decouples the chemistry from the turbulent flow while still accounting for detailed reaction mechanisms and molecular transport. This decoupling considerably reduces the computational effort.

This study aims to perform numerical simulations of the actual operating process of a DI diesel engine (the Chaochai 6102bzl) and of the formation of PAHs, including benzene (A[1]), naphthalene (A[2]), phenanthrene (A[3]), and pyrene (A[4]), using the FLUENT CFD software. To this end, a simplified mechanism based on a detailed mechanism is proposed. The effect of the engine operating conditions on PAH formation is also analysed.

Commercial diesel has no single chemical formula and its composition is complex, which makes direct calculation difficult; a single-component surrogate fuel is therefore needed. The cetane number (CN) is a measure of fuel ignition quality. It determines the length of the ignition delay period and has a significant effect on soot formation and the combustion process [20]; thus, the CN of a surrogate fuel must first be considered. The CN of commercial diesel is about 50, and that of n-heptane, an ideal and widely used surrogate fuel [21-23], is about 56. In the calculations, n-heptane is therefore used as the simulated fuel, and the unsteady laminar flamelet concept based on an n-heptane combustion reaction mechanism is adopted as the turbulent flame model for the diesel engine.

2. Governing Equations

The process occurring in an engine cylinder is a turbulent reacting flow governed by the physical conservation laws of mass, momentum, energy, and species. Therefore, the governing equations describing combustion in the cylinder are the mass continuity, momentum conservation, energy conservation, and species conservation equations, plus the ideal gas equation of state.
2.1. Mass Continuity

The species conservation equation is written as

$$\frac{\partial \rho_m}{\partial t} + \nabla \cdot (\rho_m \mathbf{u}) = \nabla \cdot \left[ \rho D \, \nabla \left( \frac{\rho_m}{\rho} \right) \right] + \dot{\rho}_m^{c} + \dot{\rho}^{s}\,\delta_{m1},$$

where ρ_m is the mass density of species m, kg/m^3; ρ is the density of the mixture, kg/m^3; **u** is the velocity of the fluid, m/s; D is the diffusion coefficient, m^2/s; δ is the Dirac delta symbol; ρ̇_m^c is the source term caused by reaction; and ρ̇^s is the source term caused by the spray. Summing over all species, the equation of continuity for the mixture is obtained:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = \dot{\rho}^{s}.$$

2.2. Momentum Conservation

The momentum conservation equation is written as

$$\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u}\mathbf{u}) = -\nabla p - \nabla\!\left(\tfrac{2}{3}\rho k\right) + \nabla \cdot \boldsymbol{\sigma} + \mathbf{F}^{s} + \rho \mathbf{g},$$

where p is the static pressure, Pa; k is the turbulent kinetic energy, kJ/m^3; **g** is the gravitational body force; **σ** is the viscous stress tensor; and **F**^s is the momentum increment per unit volume caused by the spray. For a Cartesian coordinate system, the x-, y-, and z-direction momentum conservation equations are the three components of the vector equation above.

2.3. Energy Conservation

The energy conservation equation is written as

$$\frac{\partial (\rho e)}{\partial t} + \nabla \cdot (\rho e \mathbf{u}) = -p\,\nabla \cdot \mathbf{u} - \nabla \cdot \mathbf{J} + \rho \varepsilon + \dot{Q}^{c} + \dot{Q}^{s},$$

where e is the specific internal energy, kJ/kg, and **J** is the heat flux vector, equal to the sum of heat conduction and enthalpy diffusion, given by

$$\mathbf{J} = -\lambda \nabla T - \rho D \sum_m h_m \nabla\left(\frac{\rho_m}{\rho}\right),$$

where λ is the heat conductivity; T is the gas temperature, K; h_m is the species enthalpy; Q̇^c is the heat of combustion; and Q̇^s is the thermal source term caused by the spray.

2.4. The Ideal Gas Equation of State

One has

$$p = R_u T \sum_m \frac{\rho_m}{W_m}, \qquad e(T) = \sum_m \frac{\rho_m}{\rho}\, e_m(T),$$

where R_u is the universal gas constant; W_m is the molecular weight of species m; e_m is the specific internal energy of species m; and c_{p,m} is the constant-pressure specific heat of species m, J/(kg·K).

3. Models and Boundary Conditions

3.1. Turbulence Model

Numerous models can describe turbulent flow; these include the single-equation model, the two-equation models (the standard k-ε, RNG k-ε, and realisable k-ε models), the Reynolds stress model, and large eddy simulation. Given the geometric curvature of the combustor configuration and the high-speed piston movement, the in-cylinder fluid flow is complex and involves eddies and secondary flow; hence, the realisable k-ε turbulence model proposed by Shih et al. [24] is selected for the present study. The realisable k-ε model is a relatively recent development and differs from the standard k-ε model: its kinetic-energy transport equation has the same form as in the standard and renormalisation group models, but the dissipation rate equation is different, and its production term does not contain the production of turbulent kinetic energy. The realisable model is suited to flows including boundary eddies, strong adverse pressure gradients, separated flow, and secondary flow [25, 26]. The modelled transport equations for k and ε in the realisable model are as follows:

$$\frac{\partial}{\partial t}(\rho k) + \frac{\partial}{\partial x_j}(\rho k u_j) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k + G_b - \rho \varepsilon - Y_M + S_k,$$

$$\frac{\partial}{\partial t}(\rho \varepsilon) + \frac{\partial}{\partial x_j}(\rho \varepsilon u_j) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + \rho C_1 S \varepsilon - \rho C_2 \frac{\varepsilon^2}{k + \sqrt{\nu \varepsilon}} + C_{1\varepsilon}\frac{\varepsilon}{k} C_{3\varepsilon} G_b + S_\varepsilon,$$

where

$$C_1 = \max\left[0.43,\; \frac{\eta}{\eta + 5}\right], \qquad \eta = S\frac{k}{\varepsilon}, \qquad S = \sqrt{2 S_{ij} S_{ij}}.$$

In these equations, G_k represents the generation of turbulent kinetic energy by the mean velocity gradients; G_b is the generation of turbulent kinetic energy by buoyancy; Y_M denotes the contribution of the fluctuating dilatation in compressible turbulence to the overall dissipation rate; C_2, C_{1ε}, and C_{3ε} are constants; σ_k and σ_ε represent the turbulent Prandtl numbers for k and ε, respectively; and S_k and S_ε are user-defined source terms.

3.2. Turbulence-Chemistry Interaction Model

In diesel engines, fuel is sprayed into the cylinder. The fuel evaporates, mixes with the surrounding gases, and then autoignites as compression raises the temperature and pressure. The diesel engine unsteady laminar flamelet model, based on the works of Pitsch et al. [27] and Barths et al. [28], can describe the chemistry in a single one-dimensional laminar flamelet and model ignition as well as the formation of product, intermediate, and pollutant species. It is chosen in this study to predict the combustion process in the diesel engine with compression ignition [25]. Computationally expensive chemical kinetics is reduced to a one-dimensional model, which is significantly faster than the laminar-finite-rate, EDC, or PDF transport models, which calculate the kinetics in two or three dimensions. The simulation results confirm good accuracy.

In a laminar diffusion flame, the species mass fractions and temperature along the axis can be mapped from physical space to mixture fraction space. Thus, they can be uniquely described by two parameters: the mixture fraction Z, defined in (7), and the strain rate (or, equivalently, the scalar dissipation χ, defined in (8)). The chemistry is therefore reduced and completely described by the quantities Z and χ. Finally, a set of simplified laminar flamelet equations can be obtained in mixture fraction space, including the species mass fraction equations

$$\rho \frac{\partial Y_i}{\partial t} = \frac{1}{2}\rho \chi \frac{\partial^2 Y_i}{\partial Z^2} + S_i \tag{13}$$

and a temperature equation

$$\rho \frac{\partial T}{\partial t} = \frac{1}{2}\rho \chi \frac{\partial^2 T}{\partial Z^2} - \frac{1}{c_p}\sum_i H_i S_i + \frac{\rho \chi}{2 c_p}\left[\frac{\partial c_p}{\partial Z} + \sum_i c_{p,i}\frac{\partial Y_i}{\partial Z}\right]\frac{\partial T}{\partial Z}. \tag{14}$$

In (13) and (14), Y_i, T, ρ, and Z are the ith species mass fraction, temperature, density, and mixture fraction, respectively; c_{p,i} and c_p are the ith species specific heat and the mixture-averaged specific heat, respectively; S_i denotes the ith species reaction rate; and H_i represents the specific enthalpy of the ith species.

The mixture fraction can be written in terms of the atomic mass fraction as [29]

$$Z = \frac{Z_j - Z_{j,\mathrm{ox}}}{Z_{j,\mathrm{fuel}} - Z_{j,\mathrm{ox}}}, \tag{15}$$

where Z_j is the elemental mass fraction for element j, the subscript ox denotes the value at the oxidiser stream inlet, and the subscript fuel denotes the value at the fuel stream inlet. If the diffusion coefficients of all the species are equal, then (15) is identical for all the elements and the mixture fraction definition is unique. The mixture fraction is therefore the elemental mass fraction that originates from the fuel stream.

The scalar dissipation varies along the axis of a flamelet and must be modelled across the flamelet. An expression accounting for variable density is used [30]:

$$\chi(Z) = \frac{a_s}{4\pi}\,\frac{3\left(\sqrt{\rho_\infty/\rho} + 1\right)^2}{2\sqrt{\rho_\infty/\rho} + 1}\,\exp\!\left(-2\left[\operatorname{erfc}^{-1}(2Z)\right]^2\right), \tag{16}$$

where ρ_∞ is the density of the oxidizer stream and a_s is the characteristic strain rate. For a counterflow diffusion flamelet, the flamelet strain rate can be related to the scalar dissipation at Z = Z_st:

$$\chi_{st} = \frac{a_s}{\pi}\exp\!\left(-2\left[\operatorname{erfc}^{-1}(2Z_{st})\right]^2\right), \tag{17}$$

where χ_st is the scalar dissipation at the stoichiometric mixture fraction Z_st and erfc^{-1} is the inverse complementary error function.

In an adiabatic turbulent diffusion flame system, the species mass fractions and temperature in the laminar flamelets are completely parameterized by Z and χ_st, and the time-averaged characteristic scalars can be determined from the PDF of Z and χ_st as

$$\bar{\phi} = \int_0^1 \int_0^\infty \phi(Z, \chi_{st})\, p(Z, \chi_{st})\, d\chi_{st}\, dZ, \tag{18}$$

where φ represents the species mass fractions and temperature and p(Z, χ_st) denotes a joint probability density function. For nonadiabatic laminar flamelets, considering the computational cost, heat transfer to the system is assumed to have a negligible effect on the flamelet species mass fractions, so the flamelet profiles are convoluted with the assumed-shape PDFs as in (18).

The flamelet species and energy equations ((13) and (14)) are solved simultaneously with the flow. To account for the temperature rise during compression, the flamelet energy equation (14) contains an additional term on the right-hand side,

$$\frac{1}{c_p}\frac{\partial p}{\partial t}, \tag{19}$$

where c_p is the specific heat and p is the volume-averaged pressure in the cylinder. This rise in flamelet temperature caused by compression eventually ignites the flamelet. The flamelet equations are advanced for a fractional time step using properties from the flow, which is then advanced for the same fractional time step using properties from the flamelet.
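As an illustration of the modelled profile (16), the following short Python sketch evaluates χ(Z) across a flamelet; the strain rate and density ratio used here are hypothetical values, not taken from the paper.

```python
import numpy as np
from scipy.special import erfcinv

def chi_profile(Z, a_s=100.0, rho_ratio=5.0):
    """Scalar dissipation chi(Z) across a counterflow flamelet, per (16).
    a_s: characteristic strain rate [1/s] (hypothetical value);
    rho_ratio: oxidizer-to-local density ratio rho_inf/rho (hypothetical)."""
    s = np.sqrt(rho_ratio)
    density_factor = 3.0 * (s + 1.0) ** 2 / (2.0 * s + 1.0)
    return a_s / (4.0 * np.pi) * density_factor * np.exp(-2.0 * erfcinv(2.0 * Z) ** 2)

Z = np.linspace(0.01, 0.99, 99)
chi = chi_profile(Z)   # peaks at Z = 0.5 and vanishes toward Z = 0 and Z = 1
```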
The initial flamelet condition at the beginning of the diesel simulation is a mixed-but-unburned distribution. For each flamelet fractional time step, the volume-averaged scalar dissipation and pressure, as well as the fuel and oxidiser temperatures, are passed from the flow solver to the flamelet solver. After the flamelet equations are advanced for the fractional time step, a PDF table is constructed, as for a nonadiabatic steady flamelet. Using the properties in this table, the CFD flow field is then advanced for the same fractional time step.

3.3. Kinetic Model

Turbulent combustion in an engine cylinder is highly complex. Hence, three-dimensional turbulent combustion simulation with a detailed reaction mechanism is difficult to complete with current computer resources, and detailed reaction mechanisms must be reduced. For this purpose, we first propose a detailed chemical mechanism describing the oxidation of n-heptane as a surrogate fuel and the formation of PAHs. It is then simplified to obtain a reduced mechanism that can be coupled with CFD. The detailed n-heptane reaction mechanism that describes PAH formation and oxidation consists of 108 species and 572 elementary reactions [31]. The mechanism was taken from Wang and Frenklach [32] (R31-R572; refer to [31]) and Curran et al. [33] (R1-R30; refer to [31]). Once the first aromatic ring (benzene) is formed, the growth of larger aromatic species essentially follows the H-abstraction-C[2]H[2]-addition (HACA) mechanism. The HACA mechanism and the aromatic combination reactions, which describe the details of the growth of aromatic hydrocarbons, can be found in [15].

The next stage is to integrate the reaction kinetics into the multidimensional CFD solver. The detailed mechanism is simplified prior to multidimensional CFD modelling; net reaction rate analysis and sensitivity analysis are used for the simplification. The simplified mechanism includes 57 species and 86 elementary reactions (see Table 1) and can be coupled with the multidimensional CFD model. A comparison of the simplified and detailed mechanisms shows very close results, with a relative error within 15% for the intermediate-species profiles. These results confirm that the prediction of flame structure and PAH (A[1], A[2], A[3], and A[4]) formation by the simplified mechanism is practically indistinguishable from that of the detailed mechanism over the specified range of conditions. Details of the simplification of this mechanism and of its testing can be found in [34].

3.4. Spray Model

FLUENT provides different spray models for droplet collision and breakup, as well as a dynamically varying drag coefficient that accounts for variations in droplet shape. These are the droplet collision model, applicable to low-Weber-number collisions, and the droplet breakup models; the latter include the Taylor analogy breakup (TAB) model, applicable to low-Weber-number injections and low-speed sprays into a standard atmosphere, and the wave model, which is applicable to Weber numbers greater than 100 and popular in high-speed fuel injection applications [14]. The dynamic drag model provided by FLUENT is adopted for the accurate determination of droplet drag coefficients, which are crucial for accurate spray modelling in this work. It is compatible with the TAB and wave models for droplet breakup. When the collision model is switched on, collisions reset the distortion and distortion velocities of the colliding droplets.
When sprayed into the combustion chamber through the nozzle, the droplets may merge or break up.

3.5. Initial and Boundary Conditions

Even though a precise geometric representation of the physical model is an important precondition for accurate simulation, creating a computational model strictly in accordance with actual conditions is very difficult because the overall structure of the diesel engine is highly complex. The geometry therefore has to be simplified. In this study, the simplifications are as follows. The intake and exhaust strokes are disregarded, with the initial and boundary conditions obtained from experiments. The calculation covers only one-sixth of the full geometry because the adopted injector has six uniformly spaced holes. Figure 1 shows the geometric model and computational grid of the DI diesel engine at a 10° crank angle (CA) before top dead centre. The parameters of the geometric model and the initial and boundary conditions are shown in Table 2. The calculations, including the compression, combustion, and expansion strokes, are completed using the FLUENT CFD solver.

4. Results and Discussion

4.1. Reliability Analysis of the Models

Comprehensively verifying the models is very difficult because only limited test results are available for comparison. Usually, pressure and temperature serve as the validation standards. Comparisons validating the detailed mechanism were previously completed for laminar flames [31, 35]; the results show that the mechanism can describe the flame characteristics. As shown in [34], the simplified mechanism based on the detailed mechanism can also reproduce the major features of the detailed mechanism.

Figure 2 shows the variation of the average in-cylinder pressure with CA, as determined by numerical simulation and by testing. The figure shows that the computed pressure rise and maximum pressure, as well as their corresponding crank positions, are essentially consistent with the experimental data [36]. The maximum pressure in the cylinder (about 8.5 MPa) occurred at 4°CA after top dead centre; this maximum showed an error of only 1.16% in comparison with the experimental value of 8.6 MPa. The comparison of the pressure curves shows that the numerical simulation of the DI diesel engine reflects actual conditions.

Figure 3 shows the computed volume-averaged in-cylinder temperature against CA. The figure shows three special points, which correspond to the ignition of the fuel at 360°CA, the maximum pressure at 364°CA, and the maximum temperature at 379°CA. The average temperature in the engine cylinder was about 1400 K at the ignition point (i.e., 360°CA); below this temperature the engine flamed out, which is generally consistent with the conclusion of Liu et al. [37]. Kinetic analysis identified the important reactions occurring at the ignition point. It is found that H, O, and OH had important effects on ignition, and that H[2] and C[2]H[4] were produced from the decomposition of C[7]H[16]. On the one hand, the small hydrocarbon gases are conducive to burning; on the other hand, they increase CH[3] and C[3]H[3], after which C[3]H[3] can be converted to A[1]. This conversion may, in turn, generate A[2], A[3], and A[4].
As the fuel burned, the considerable heat released caused the gas pressure and temperature in the cylinder to rise sharply until the average temperature reached a maximum value of 1935 K at 379°CA, where fuel injection was terminated and combustion was completed. Similarly, the kinetic analysis shows that, in addition to the reactions identified at ignition (see Table 1), other important reactions occurred at the point of maximum temperature (i.e., 379°CA). At this point, adequate H, OH, and O were produced but less C[3]H[3] was generated. Figure 4 shows the corresponding spatial temperature distributions at 360°, 364°, and 379°CA.

4.2. Spatial Distributions of Temperature and Species Concentrations

Figure 5 shows the spatial distributions of temperature and of the concentrations of the major species (C[7]H[16], O[2], CO[2], and H[2]O) and PAHs (A[1], A[2], A[3], and A[4]) at 360°, 364°, and 379°CA. The figure illustrates that the eddy formed in the combustion chamber caused the high-temperature region to propagate along the bottom of the chamber. When the crank moved to 364°CA, a local high-temperature region at 2643 K had formed at the bottom centre of the chamber; when the crank moved forward to 379°CA, the high-temperature flame had fully propagated along the bottom of the chamber.

At the starting point of combustion (360°CA), the fuel (C[7]H[16]) was concentrated at the jet-flow centre, producing a severe oxygen deficiency inside the fuel jet and excess oxygen outside it. As the C[7]H[16] was consumed, only a little fuel remained near the wall at the bottom of the chamber at 364°CA because of the low combustion rate near this wall; this low combustion rate resulted in a lower wall temperature and a lower O[2] concentration. In the subsequent combustion (up to 379°CA), the remaining fuel continued to burn until it was completely consumed, the O[2] concentration at the bottom of the chamber continued to decrease, and the CO[2] and H[2]O concentrations gradually rose, forming a high-temperature region with maximum temperature and severe oxygen deficiency. Nevertheless, the oxygen concentration remained high in the unburned region.

As shown in Figure 5, large quantities of A[1], A[2], A[3], and A[4] started forming near the wall of the combustion chamber bottom at the ignition point (1750 K), which corresponds to 360°CA, and they continued to form up to 379°CA (the point of maximum average temperature). At 360°CA, the temperature rose further after ignition, but the fuel remained in the stage of diffusion combustion; it was poorly mixed with air, resulting in pyrolysis and incomplete fuel combustion. At 364°CA (the point of maximum average pressure), the mixing of fuel with air remained poor because the diffusion flame had only begun to propagate along the bottom of the chamber. Thus, the fuel jet readily pyrolysed near the wall at the bottom of the combustion chamber, forming A[1], A[2], A[3], and A[4]. As the crank moved to 379°CA, large quantities of PAHs continued to form near the wall at the bottom of the chamber until they were nearly completely oxidised in the high-temperature region, where the O[2] concentration was adequate. Figure 6 shows the spatial distributions of the concentrations of some intermediate species (H, C[3]H[3], A[3-4], C[4]H[4], and C[2]H[2]) at 360°, 364°, and 379°CA.
The figure indicates that H, C[2]H[2], and C[3]H[3] formed primarily in the high-temperature region through the combustion and decomposition of the fuel, which resulted in more A[1] production through the recombination of C[3]H[3].

4.3. Species Concentration Profiles

Figures 7 and 8 show the concentrations of the major species (C[7]H[16], O[2], CO[2], and H[2]O) and PAHs (A[1], A[2], A[3], and A[4]) as functions of CA, corresponding to the conditions shown in Table 2. Figure 7 illustrates that the C[7]H[16] concentration first increased with fuel injection during the delay period, reaching its maximum at 360°CA (the ignition point), after which it was quickly consumed. The C[7]H[16] injected during the 360°-364°CA period and the slow-burning period (364°-379°CA) continued to burn during the spray process and was quickly and almost fully consumed when the injection process was terminated (379°CA). O[2] gradually decreased as combustion progressed, whereas CO[2] and H[2]O gradually increased until they reached fixed values at the end of the fuel combustion process.

As shown in Figure 8, the PAH concentrations first increased and then decreased with CA. During the delay period (up to 360°CA), A[1], A[2], A[3], and A[4] started to form because of C[7]H[16] pyrolysis before fuel ignition. These species gradually increased in the range 360°-364°CA, reaching their maxima at 364°CA. This behaviour implies that the pollutants formed predominantly during the diffusion-flame stages of the engine cycle. The fuel injected into the combustion chamber underwent pyrolysis under severe oxygen deficiency, both before ignition and during the diffusion combustion of the rich mixture formed at the jet centre in the initial stage of combustion after ignition; this pyrolysis generated a quantity of A[1], A[2], A[3], and A[4]. The kinetic analysis identifies the dominant PAH-formation reactions among those in Table 1. After the crank passed 364°CA, strong turbulent flow and fuel vapour motion in the cylinder accelerated the mixing of fuel with air and considerably improved combustion, generating less A[1], A[2], A[3], and A[4]. At the same time, large quantities of previously formed A[1], A[2], A[3], and A[4] were oxidised at high temperature. Thus, the A[1], A[2], A[3], and A[4] concentrations decreased quickly beyond 364°CA because oxidation became dominant. Beyond 379°CA, fuel injection was terminated and combustion gradually ceased; the pressure and temperature in the cylinder therefore decreased quickly, and the PAH concentrations gradually approached constant values.

4.4. Effect of Operating Conditions on PAHs

4.4.1. Effect of Intake Vortex Intensity

Vortex intensity is one of the important performance parameters affecting fuel combustion and emissions, especially particle emissions, in diesel engines. Eddy motion promotes the mixing of fuel and air, improves the formation of combustible mixtures, and enhances combustion efficiency. To study the effect of the intake vortex intensity (defined as the ratio of the intake swirl speed to the engine speed) on combustion and emissions, we simulated the combustion process and PAH formation under different vortex intensities (0.5, 2.5, and 4.5), with the other parameters held constant.
Figure 9 shows the mean mass fractions of A[1], A[2], A[3], and A[4] as functions of CA under vortex intensities of 0.5, 2.5, and 4.5. We can see from the figure that A[1] gradually decreased, whereas A[2], A[3], and A[4] changed only slightly, with increasing intake vortex intensity. The peak mass fractions of A[2], A[3], and A[4] were about 5 × 10^-10, 6 × 10^-13, and 1 × 10^-13, respectively. Increasing the intake vortex intensity increases the airflow velocity and turbulence intensity in the combustion chamber, accelerates mixture formation, expands the high-temperature region, and helps control combustion in the cylinder. The end results are improved combustion conditions, reduced combustion time, increased combustion efficiency, and reduced PAH formation, as confirmed by the simulation results.

4.4.2. Effect of Intake Air Pressure

Figure 10 shows the mean mass fractions of A[1], A[2], A[3], and A[4] as functions of CA under intake air pressures of 3.5, 4, and 4.5 MPa. The figure illustrates that A[1] gradually decreased, whereas A[2], A[3], and A[4] changed only minimally, with increasing intake air pressure; the variations in the PAHs show a trend similar to that in Figure 9. The higher the intake air pressure, the greater the gas density and oxygen concentration in the engine cylinder. These conditions enable complete combustion at greater excess-air coefficients, resulting in more complete fuel combustion and less PAH formation. Increasing the amount of air in the cylinder reduces PAH production and simultaneously promotes the oxidation of previously formed PAHs; hence, the PAH concentrations decreased with increasing intake air pressure.

4.4.3. Effect of the Fuel Injection Advance Angle

The fuel injection advance angle is the crank angle interval between the start of fuel injection into the cylinder and the arrival of the piston at top dead centre. It is an important parameter affecting diesel fuel combustion. Previous studies have shown that the fuel injection advance angle in DI diesel engines has a more significant effect on fuel economy, power, and emission performance than many other parameters. Advance angles that are too large or too small directly affect engine power output and fuel consumption, resulting in rough and unstable running conditions.

Figure 11 shows the mean mass fractions of A[1], A[2], A[3], and A[4] as functions of CA under fuel injection advance angles of 6°, 8°, and 10°CA, with the other parameters kept constant. The figure illustrates that A[1] gradually decreased, whereas A[2], A[3], and A[4] changed little, with increasing fuel injection advance angle. Increasing the advance angle decreases the pressure and temperature at the moment the fuel is injected into the cylinder, so the ignition delay period is extended. The fuel can then mix well with air before top dead centre (TDC), and the mixture promptly ignites and burns near the TDC. Therefore, an appropriate increase in the injection advance angle can improve the quality of the fuel-air mixture and consequently enhance combustion efficiency. However, an excessive increase in the advance angle increases the amount of fuel injected into the cylinder before ignition; when this fuel is ignited, the cylinder pressure rises rapidly, resulting in rough (knocking) combustion.
Conversely, at an excessively small fuel injection advance angle, the fuel mixture begins to form and burn well after the TDC, where the cylinder pressure and temperature are lower, resulting in reduced fuel efficiency.

4.4.4. Effect of Diesel Load

Diesel load is regulated by the quantity of fuel injected: the fuel injected into the cylinder increases with rising load, so the heat released by combustion is greater and the engine produces more torque. We simulated different fuel deliveries per cycle per cylinder, keeping the other parameters constant, to study the effect of load on PAH formation and oxidation in the DI diesel engine.

Figure 12 presents the mean mass fractions of A[1], A[2], A[3], and A[4] as functions of CA under fuel deliveries per cycle per cylinder of 50, 70, and 90 mg. The figure shows that A[1] gradually increased, whereas A[2], A[3], and A[4] changed only slightly, with increasing fuel delivery. Different engine loads correspond to different fuel-to-air equivalence ratios; hence, the effect of engine load on the harmful species in the exhaust is essentially an equivalence-ratio effect. The fuel-air mixture deteriorated with increasing equivalence ratio, leading to increased PAH formation. Under heavy loads, the equivalence ratio and temperature are high, and the excessively rich fuel/air mixture produced in the combustion chamber causes fuel pyrolysis and incomplete combustion. By contrast, under light loads the diesel engine operates with substantial excess air, so combustion is more complete and fewer PAHs are formed.

4.4.5. Effect of Engine Speed

The effect of engine speed on combustion and PAH formation was also investigated with the other parameters fixed. Figure 13 shows the mean mass fractions of A[1], A[2], A[3], and A[4] as functions of CA under engine speeds of 1800, 2300, and 2800 rpm. As shown in the figure, A[1] gradually increased, whereas A[2], A[3], and A[4] changed only minimally, with increasing engine speed. When the engine speed increased, more fuel was injected into the chamber over a shorter period, leaving insufficient time for fuel evaporation and fuel-air mixing; this resulted in incomplete combustion and the formation of large quantities of PAHs.

5. Conclusion

Three-dimensional numerical simulations of an n-heptane-fuelled Chaochai 6102bzl DI diesel engine were performed under actual working conditions, including the formation of the PAHs A[1], A[2], A[3], and A[4]. A combustion model comprising unsteady laminar flamelet, turbulent flow, and spray submodels was used in the numerical simulation. The comparison of the simulated and experimental results for the delay period and the variation of the cylinder pressure indicates that the models established in this study can accurately describe the actual in-cylinder working conditions. The numerical results show that during the delay period, large quantities of PAHs formed because of the high-temperature pyrolysis of fuel at the front of the fuel jet. PAHs formed near the wall at the bottom of the combustion chamber during the ignition stage and were then completely oxidised at temperatures higher than 2000 K.
The simulated results for the engine operating conditions indicate that increasing the intake vortex intensity and the intake air pressure promotes complete fuel combustion and reduces PAH formation. Appropriate increases in the fuel injection advance angle can also reduce PAH generation. The effect of engine load on the combustion process is essentially attributed to the equivalence ratio: a load increase raises the equivalence ratio, creating a rich fuel mixture and elevating PAH production. Finally, increasing the engine speed shortened the time available for fuel-air mixing, resulting in combustion deterioration and the formation of large quantities of PAHs.

Acknowledgments

The authors gratefully acknowledge the financial support of the China National Science Fund (Grant no. 51036004) and the National High Technology Research and Development Program of China.

References

1. S. Daido, Y. Kodama, T. Inohara, N. Ohyama, and T. Sugiyama, "Analysis of soot accumulation inside diesel engines," JSAE Review, vol. 21, no. 3, pp. 303-308, 2000.
2. A. D. H. Clague, J. B. Donnet, T. K. Wang, and J. C. M. Peng, "A comparison of diesel engine soot with carbon black," Carbon, vol. 37, no. 10, pp. 1553-1565, 1999.
3. L. M. Pickett and D. L. Siebers, "Soot in diesel fuel jets: effects of ambient temperature, ambient density, and injection pressure," Combustion and Flame, vol. 138, no. 1-2, pp. 114-135, 2004.
4. Z. John, M. J. Wen, S. H. Thomson, S. N. Park, and M. F. Rogak, "Study of soot growth in a plug flow reactor using a moving sectional model," Proceedings of the Combustion Institute, vol. 30, no. 1, pp. 1477-1484, 2005.
5. Y. Xin and P. G. Jay, "Two-dimensional soot distributions in buoyant turbulent fires," Proceedings of the Combustion Institute, vol. 30, no. 1, pp. 719-726, 2005.
6. J. B. Heywood, Internal Combustion Engine Fundamentals, McGraw-Hill International, 1988.
7. J. Xi and B. J. Zhong, "Soot in diesel combustion systems," Chemical Engineering and Technology, vol. 29, no. 6, pp. 665-673, 2006.
8. Y. Han, W. Park, and K. Min, "Soot and temperature distribution in a diesel diffusion flame: 3-D CFD simulation and measurement with laser diagnostics," International Journal of Automotive Technology, vol. 12, no. 1, pp. 21-28, 2011.
9. Z. Zheng and M. Yao, "Mechanism of oxygen concentration effects on combustion process and emissions of diesel engine," Energy and Fuels, vol. 23, no. 12, pp. 5835-5845, 2009.
10. S. Cordiner, M. Gambino, S. Iannaccone, V. Rocco, and R. Scarcelli, "Numerical and experimental analysis of combustion and exhaust emissions in a dual-fuel diesel/natural gas engine," Energy and Fuels, vol. 22, no. 3, pp. 1418-1424, 2008.
11. W. Chen, S. Shuai, and J. Wang, "Effect of the cetane number on the combustion and emissions of diesel engines by chemical kinetics modeling," Energy and Fuels, vol. 24, no. 2, pp. 856-862, 2010.
12. J. Lim, Y. Kim, S. Lee, J. Chung, W. Kang, and K. Min, "3-D simulation of the combustion process for di-methyl ether-fueled diesel engine," Journal of Mechanical Science and Technology, vol. 24, no. 12, pp. 2597-2604, 2010.
13. B. Hu, C. J. Rutland, and T. A. Shethaji, "A mixed-mode combustion model for large-eddy simulation of diesel engines," Combustion Science and Technology, vol. 182, no. 9, pp. 1279-1320, 2010.
14. I. Dhuchakallaya and A. P. Watkins, "Application of spray combustion simulation in DI diesel engine," Applied Energy, vol. 87, no. 4, pp. 1427-1432, 2010.
15. M. Frenklach and H. Wang, "Detailed mechanism and modeling of soot particle formation," Springer Series in Chemical Physics, no. 59, pp. 165-192, 1994.
16. N. Peters, "Local quenching due to flame stretch and non-premixed turbulent combustion," in Western States Section of the Combustion Institute, Spring Meeting, Irvine, CA, 1980, Paper WSS 80-4.
17. V. R. Kuznetsov, "Influence of turbulence on the formation of high nonequilibrium concentrations of atoms and free radicals in diffusion flames," Fluid Dynamics, vol. 17, no. 6, pp. 815-820, 1982.
18. N. Peters, "Laminar diffusion flamelet models in non-premixed turbulent combustion," Progress in Energy and Combustion Science, vol. 10, no. 3, pp. 319-339, 1984.
19. N. Peters, "Laminar flamelet concepts in turbulent combustion," in Proceedings of the 21st Symposium (International) on Combustion, pp. 1231-1250, The Combustion Institute, 1986.
20. H. Pitsch, H. Barths, and N. Peters, "Three-dimensional modeling of NO[x] and soot formation in DI-diesel engines using detailed chemistry based on the interactive flamelet approach," SAE Transactions, vol. 105, no. 4, pp. 2010-2024, 1996.
21. H. Pitsch, Y. P. Wan, and N. Peters, "Numerical investigation of soot formation and oxidation under diesel engine conditions," SAE Transactions, vol. 104, no. 3, pp. 938-949, 1995.
22. F. Tao and J. Chomiak, "Numerical investigation of reaction zone structure and flame liftoff of DI diesel sprays with complex chemistry," SAE Transactions, vol. 111, no. 3, pp. 1836-1854, 2002.
23. Z. J. Peng, H. Zhao, and N. Ladommatos, Effects of Air/Fuel Ratios and EGR Rates on HCCI Combustion of n-Heptane, a Diesel Type Fuel, SAE Paper, 2003.
24. T. H. Shih, W. W. Liou, A. Shabbir, Z. Yang, and J. Zhu, "A new k-ε eddy-viscosity model for high Reynolds number turbulent flows: model development and validation," Computers and Fluids, vol. 24, no. 3, pp. 227-238, 1995.
25. FLUENT User's Guide, FLUENT Inc., 2006.
26. K. Van Maele and B. Merci, "Application of two buoyancy-modified k-ε turbulence models to different types of buoyant plumes," Fire Safety Journal, vol. 41, no. 2, pp. 122-138, 2006.
27. H. Pitsch, H. Barths, and N. Peters, "Three-dimensional modeling of NO[x] and soot formation in DI-diesel engines using detailed chemistry based on the interactive flamelet approach," SAE Paper 962057, 1996.
28. H. Barths, C. Antoni, and N. Peters, "Three-dimensional simulation of pollutant formation in a DI-diesel engine using multiple interactive flamelets," SAE Paper, 1998.
29. Y. R. Sivathanu and G. M. Faeth, "Generalized state relationships for scalar properties in nonpremixed hydrocarbon/air flames," Combustion and Flame, vol. 82, no. 2, pp. 211-230, 1990.
30. J. S. Kim and F. A. Williams, "Extinction of diffusion flames with non-unity Lewis number," Journal of Engineering Mathematics, vol. 31, no. 2-3, pp. 101-118, 1997.
31. B. J. Zhong, D. S. Dang, and Y. N. Song, "3-D simulation of soot formation in a direct-injection diesel engine based on a comprehensive chemical mechanism and method of moments," Combustion Theory and Modelling, vol. 16, no. 1, pp. 143-171, 2012.
32. H. Wang and M. Frenklach, "A detailed kinetic modeling study of aromatics formation in laminar premixed acetylene and ethylene flames," Combustion and Flame, vol. 110, no. 1-2, pp. 173-221, 1997.
33. H. J. Curran, P. Gaffuri, W. J. Pitz, and C. K. Westbrook, "A comprehensive modeling study of n-heptane oxidation," Combustion and Flame, vol. 114, no. 1-2, pp. 149-177, 1998.
34. J. Xi and B. J. Zhong, "Reduced kinetic mechanism of n-heptane oxidation in modeling polycyclic aromatic hydrocarbon formation in diesel combustion," Chemical Engineering and Technology, vol. 29, no. 12, pp. 1461-1468, 2006.
35. Y. Zhang and B. Zhong, "Benzene formation mechanism in n-heptane/air partially premixed counterflow flame," Journal of Tsinghua University, vol. 48, no. 5, pp. 904-908, 2008.
36. T. C. Zhang, C. L. Song, Y. Q. Pei, S. R. Dong, G. D. Wei, and T. Z. Yang, "Experimental and CFD study on exhaust particulate from diesel engine with common rail injection system," Journal of Combustion Science and Technology, vol. 13, no. 2, pp. 136-140, 2007 (in Chinese).
37. Y. Liu, Y. Zhang, J. Qin, and N. Peters, "Simulation and experiment for three-dimensional combustion temperature field in direct-injection diesel engine," Chinese Journal of Mechanical Engineering, vol. 43, no. 2, pp. 196-201, 2007 (in Chinese).
{"url":"http://www.hindawi.com/journals/ame/2013/567159/","timestamp":"2014-04-16T13:30:51Z","content_type":null,"content_length":"332813","record_id":"<urn:uuid:4c3c2c6c-7bb1-477a-8a1f-91d4e534467f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] matrix problem: float to matrix power
Alan G Isaac aisaac@american....
Thu Nov 1 14:20:11 CDT 2007

On Wed, 31 Oct 2007, Timothy Hochberg apparently wrote:
> because M**n results in the matrix power of n. It would be
> confusing if n**M did a broadcast element wise power.

In an attempt to summarize: scalar to a matrix power
1. may have been overlooked, or may have been omitted as
2. if overlooked, there are two possible answers as to what it could reasonably be:
   a. a**m = exp(ln(a)*m)    (natural definition)
   b. a**m = a**m.A          (broadcasting)
In languages that provide an answer, I have the impression that the answer is usually (b). (E.g., GAUSS.) However, perhaps in NumPy this is the least interesting way to go? But it is what I was expecting.

Alan Isaac
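To make the two candidate semantics concrete, here is a small NumPy sketch; option (a) uses the matrix exponential from SciPy, and the particular matrix is just an illustration.

```python
import numpy as np
from scipy.linalg import expm

M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
a = 2.0

# (a) natural definition: a**M = exp(ln(a) * M), a genuine matrix function
natural = expm(np.log(a) * M)

# (b) broadcasting definition: element-wise power, as GAUSS reportedly does
broadcast = a ** M

print(natural)    # [[2.  1.386...], [0.  2.]]  (1.386 = 2 ln 2)
print(broadcast)  # [[2.  2.], [1.  2.]]
```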
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-November/029764.html","timestamp":"2014-04-19T00:26:31Z","content_type":null,"content_length":"3419","record_id":"<urn:uuid:633c3919-3008-4bc3-ba90-3be25ce111ad>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Ellipse Specification Using Vectors

Robert is a software engineer and award-winning independent animator. He can be reached at mathart63@gmail.com.

Over the years, I've made heavy use of graphical editing programs and sometimes needed to use ellipse drawing functions. What I've found is that ellipse drawing functions in graphical editing programs typically use bounding boxes for specifying ellipses. This method serves well if the task requires fitting the ellipse within the confines of a box, or if it's not necessary to align the edge of the ellipse with precision. On occasion, however, I've had a need to use an ellipse (or a portion thereof) to approximate a freeform curve. In such cases, bounding boxes serve poorly. Why? Because the outer corner of the bounding box that specifies the ellipse is far enough away from the curve of interest that "eyeballing" the placement of the nearest corner of the bounding box, to specify an ellipse that approximates the freeform curve, is error prone and usually produces erroneous results, as in Figure 1.

The problem is that ellipse curves specified by bounding boxes are physically disconnected from the opposite corners of the box that are actually under user control. This physical disconnect robs users of the ability to place the edge of the ellipse precisely against the freeform curve of interest. To give users the control needed to produce ellipses that approximate the curvature of a freeform curve, it is more expedient to specify an ellipse using points that interpolate (lie on the actual ellipse) rather than exterpolate (in which the ellipse is tied to the points but detached from them). This lets users click directly on the curve and anchor the ellipse to the freeform curve of interest from the outset.

One approach is to specify an ellipse from a vector the user supplies. In Figure 2, the user clicks on some point lying upon the freeform curve and drags the end of the vector, along with the corresponding ellipse, until it closely approximates the freeform curve.

You can implement this scheme using a uniform interpolating trigonometric spline (TSpline) whose control points lie at the corners of a rectangle. Alternatively, you can use the corners of a rhombus (see "Implementing Uniform Trigonometric Spline Curves"; www.ddj.com/architect/184410198). Both approaches inscribe (rather than circumscribe) the ellipse. In either case, a single vector is sufficient to define the four control points that specify an ellipse formed of a TSpline. For this algorithm, the most practical choice is to implement the TSpline ellipse that is specified by an inscribed rectangle.
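A minimal sketch of the interpolating idea (not the article's TSpline implementation): anchor the ellipse at a clicked point p0, let the dragged vector define a diameter, and trace a parametric ellipse that passes through both endpoints. The function name and the axis-ratio parameter are illustrative assumptions.

```python
import numpy as np

def ellipse_from_vector(p0, p1, ratio=0.5, n=200):
    """Trace an ellipse whose diameter is the dragged vector p0->p1;
    'ratio' sets the conjugate half-axis as a fraction of the vector
    length. The curve interpolates p0 and p1 (at t = pi and t = 0)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    center = (p0 + p1) / 2.0
    u = (p1 - p0) / 2.0                     # half of the dragged vector
    v = ratio * np.array([-u[1], u[0]])     # perpendicular conjugate axis
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return center + np.outer(np.cos(t), u) + np.outer(np.sin(t), v)

pts = ellipse_from_vector((0, 0), (4, 2))   # passes through both endpoints
```

Dragging p1 along the freeform curve regenerates the point set, so the ellipse edge stays pinned to the clicked points, which is exactly the control that bounding-box specification denies.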
{"url":"http://www.drdobbs.com/architecture-and-design/ellipse-specification-using-vectors/209601027","timestamp":"2014-04-20T21:30:48Z","content_type":null,"content_length":"97082","record_id":"<urn:uuid:8ffdef45-e88e-4b24-b103-4cf9f25b885a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate gravitational force?

I know the formula for calculating the gravitational force between two masses. In the formula, "G" means the gravitational constant. The gravitational constant is 6.67E-11 m^3 s^-2 kg^-1. But what does "6.67E-11 m^3 s^-2 kg^-1" mean? I would really appreciate it if you could give me an example of calculating the gravitational force between two objects.

Reply:

Hey, mdmaaz. The units of the universal gravitational constant G are N m^2 kg^-2, where N is shorthand for the newton, the unit of force, m is for metre, and kg is for kilogram. The units can sort of be thought of as being the way they are due to the units of the other variables in the equation

F(gravity) = G * m1 * m2 / r^2

F(gravity) is the force due to the gravitational attraction between the two objects. G is the universal gravitational constant. m1 and m2 are the masses of the two objects between which the gravity is acting (for example, between the Earth and your body). r is the distance separating the two objects (notice that the Earth pulls on you much more than Saturn does, despite Saturn's much larger mass!).

Here is a quick example of finding the gravitational force between two objects. Let's say that object 1 has mass m1 = 10 kg and that object 2 has mass m2 = 25 kg. If they are 10 m apart, then the force is

F = (6.67E-11 * 10 * 25) / 10^2, which is about 1.7E-10 N.
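The same computation in a few lines of Python (an illustrative helper, not from the thread):

G = 6.674e-11  # universal gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2 (newtons)."""
    return G * m1 * m2 / r ** 2

# the example from the post: 10 kg and 25 kg masses, 10 m apart
print(gravitational_force(10, 25, 10))   # ~1.7e-10 N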
{"url":"http://www.physicsforums.com/showthread.php?t=497543","timestamp":"2014-04-18T00:25:14Z","content_type":null,"content_length":"36613","record_id":"<urn:uuid:f9dd95ef-72e1-425c-9a05-7a01284e8ecb>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
ragged matrix (data structure)

Definition: A matrix having irregular numbers of items in each row. See also uniform matrix, sparse matrix.

Note: An upper triangular matrix or a lower triangular matrix is usually not thought of as being ragged, since the number of items in each row is regular (changing by one from row to row). For example, rows holding 5, 2, and 4 items form a ragged matrix.

Author: PEB

Paul E. Black, "ragged matrix", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology, 17 December 2004. Available from: http://www.nist.gov/dads/HTML/raggedmatrix.html
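In code, a ragged matrix is naturally represented as a list of rows of unequal lengths; a tiny illustration (not part of the dictionary entry):

# a ragged matrix: rows with irregular numbers of items
ragged = [
    [3, 1, 4, 1, 5],
    [9, 2],
    [6, 5, 3, 5],
]

# by contrast, a lower triangular matrix grows regularly (one more item
# per row), so it is usually not considered ragged
lower_triangular = [
    [1],
    [2, 3],
    [4, 5, 6],
]

print([len(row) for row in ragged])            # [5, 2, 4]
print([len(row) for row in lower_triangular])  # [1, 2, 3]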
{"url":"http://www.darkridge.com/~jpr5/mirror/dads/HTML/raggedmatrix.html","timestamp":"2014-04-20T16:19:42Z","content_type":null,"content_length":"2629","record_id":"<urn:uuid:0d8b152f-35ed-45ee-87f8-bb35a837addd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithms worksheets

I have made 10 worksheets with some algorithms (in Python). Sharing just in case anyone wants to use one (they could be given as homework, I guess).

1: A. M. Legendre, exponentiation by successive squaring
2: Heron of Alexandria, square roots
3: Peano arithmetic
4: Zeller's congruence
5: Factorial
6: Fibonacci numbers
7: Euclid's GCD & prime factors
8: Eratosthenes' sieve and the Miller-Rabin algorithm
9: Pascal's triangle
10: Tau using Gregory and Euler's equations

The .tar.gz and .zip files include .pdfs and the code examples.

Reply: Thanks for this. I think I'll be able to work them into a GCSE computer science scheme of work quite nicely. Just had a slightly more detailed look; the worksheets really are rather good! Out of curiosity, what led to you making them?

Reply: I was mainly inspired to put this together by reading The Structure and Interpretation of Computer Programs, by Abelson and Sussman. I do teach some CS, but using SmallBasic, not Python, so this was just "for fun"!

I've made a few more examples in this series of maths / algorithmic worksheets:

Algorithms - Series Two
1. Newton's method for nth roots
2. Primes revisited
   a) faster Miller-Rabin
   b) Lucas-Lehmer number
3. A Little Number Theory
   a) Ramanujan's Highly Composite Numbers
   b) Ramanujan and Hardy - roundness
   c) Pythagoras - perfect numbers
   d) Pythagoras - friendly numbers
4. Pi
   a) Euler's method for calculating Pi
   b) Cesaro / Monte Carlo method
5. The RSA algorithm

You can get the pdfs and the code examples here.
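The worksheets themselves aren't reproduced in the thread; as a flavour of the first topic, here is a minimal version of exponentiation by successive squaring (my own sketch, not the worksheet code):

def power(a, n):
    """Exponentiation by successive squaring: O(log n) multiplications."""
    result = 1
    while n > 0:
        if n & 1:        # odd exponent: fold the current base into the result
            result *= a
        a *= a           # square the base
        n >>= 1          # halve the exponent
    return result

assert power(3, 13) == 3 ** 13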
{"url":"http://www.raspberrypi.org/forums/viewtopic.php?t=26275&p=238722","timestamp":"2014-04-21T04:36:37Z","content_type":null,"content_length":"23094","record_id":"<urn:uuid:15bfa792-e1b0-4578-8f76-6ae400444385>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
limit help

Let f be a continuous function defined on [a,b]. Find the limit as n approaches infinity of the integral of (nf) from a to (a+1)/n.

Reply:

$(a+1)/n$ is not always in $[a,b]$; don't you mean $a+(1/n)$ (for sufficiently large $n$)?

Edit: Assuming you mean $a+(1/n)$, take $F(x)= \int_{a}^{x} f(t)dt$. Then $F$ is differentiable in $(a,b)$ and its derivative is continuous on $[a,b]$, i.e. $F$ has a left derivative at $b$ and a right derivative at $a$. Then

$\lim_{n \rightarrow \infty } n\int_{a}^{a+(1/n)} f = \lim_{n \rightarrow \infty } nF(a+(1/n)) = \lim_{n \rightarrow \infty } \frac{F(a+(1/n))-F(a)}{1/n} = F'(a) = f(a)$

(the second equality uses $F(a) = 0$).
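A quick numerical check of the result (a throwaway sketch, not from the thread): for a continuous f, n times the integral of f over [a, a+1/n] should tend to f(a).

import math

def scaled_integral(f, a, n, m=1000):
    """Approximate n * integral of f over [a, a + 1/n] by the midpoint rule."""
    h = (1.0 / n) / m
    s = sum(f(a + (i + 0.5) * h) for i in range(m)) * h
    return n * s

f, a = math.sin, 0.7
for n in (10, 100, 1000):
    print(n, scaled_integral(f, a, n))   # approaches f(a) = sin(0.7) ~ 0.6442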
{"url":"http://mathhelpforum.com/differential-geometry/113559-limit-help.html","timestamp":"2014-04-20T07:30:58Z","content_type":null,"content_length":"33797","record_id":"<urn:uuid:6ac709a8-84b8-43fd-92af-f60ac6dc275d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
David M. Lane

Exercises: Sampling Distributions

All material presented in the Sampling Distributions chapter. Selected answers are given at the end. You may want to use the "r to z' calculator" and the "Calculate Area for a given X" applet for some of these exercises.

1. A population has a mean of 50 and a standard deviation of 6. (a) What are the mean and standard deviation of the sampling distribution of the mean for N = 16? (b) What are the mean and standard deviation of the sampling distribution of the mean for N = 20? (relevant section)

2. Given a test that is normally distributed with a mean of 100 and a standard deviation of 12, find: (a) the probability that a single score drawn at random will be greater than 110 (relevant section); (b) the probability that a sample of 25 scores will have a mean greater than 105 (relevant section); (c) the probability that a sample of 64 scores will have a mean greater than 105 (relevant section); (d) the probability that the mean of a sample of 16 scores will be either less than 95 or greater than 105 (relevant section).

3. What term refers to the standard deviation of the sampling distribution? (relevant section)

4. (a) If the standard error of the mean is 10 for N = 12, what is the standard error of the mean for N = 22? (b) If the standard error of the mean is 50 for N = 25, what is it for N = 64? (relevant section)

5. A questionnaire is developed to assess women's and men's attitudes toward using animals in research. One question asks whether animal research is wrong and is answered on a 7-point scale. Assume that in the population, the mean for women is 5, the mean for men is 4, and the standard deviation for both groups is 1.5. Assume the scores are normally distributed. If 12 women and 12 men are selected randomly, what is the probability that the mean of the women will be more than 1.5 points higher than the mean of the men? (relevant section)

6. If the correlation between reading achievement and math achievement in the population of fifth graders were 0.60, what would be the probability that in a sample of 28 students, the sample correlation coefficient would be greater than 0.65? (relevant section)

7. If numerous samples of N = 15 are taken from a uniform distribution and a relative frequency distribution of the means is drawn, what would be the shape of the frequency distribution? (relevant section & relevant section)

8. A normal distribution has a mean of 20 and a standard deviation of 10. Two scores are sampled randomly from the distribution and the second score is subtracted from the first. What is the probability that the difference score will be greater than 5? Hint: Read the Variance Sum Law section of Chapter 3. (relevant section & relevant section)

9. What is the shape of the sampling distribution of r? In what way does the shape depend on the size of the population correlation? (relevant section)

10. If you sample one number from a standard normal distribution, what is the probability it will be 0.5? (relevant section & relevant section)

11. A variable is normally distributed with a mean of 120 and a standard deviation of 5. Four scores are randomly sampled. What is the probability that the mean of the four scores is above 127? (relevant section)

12. The correlation between self esteem and extraversion is .30. A sample of 84 is taken. (a) What is the probability that the correlation will be less than 0.10? (b) What is the probability that the correlation will be greater than 0.25? (relevant section)

13. The mean GPA for students in School A is 3.0; the mean GPA for students in School B is 2.8.
The standard deviation in both schools is 0.25. The GPAs of both schools are normally distributed. If 9 students are randomly sampled from each school, what is the probability that: (a) the sample mean for School A will exceed that of School B by 0.5 or more? (relevant section) (b) the sample mean for School B will be greater than the sample mean for School A? (relevant section)

14. In a city, 70% of the people prefer Candidate A. Suppose 30 people from this city were sampled. (a) What is the mean of the sampling distribution of p? (b) What is the standard error of p? (c) What is the probability that 80% or more of this sample will prefer Candidate A? (d) What is the probability that 45% or more of this sample will prefer some other candidate? (relevant section)

15. When solving problems where you need the sampling distribution of r, what is the reason for converting from r to z'? (relevant section)

16. In the population, the mean SAT score is 1000. Would you be more likely (or equally likely) to get a sample mean of 1200 if you randomly sampled 10 students or if you randomly sampled 30 students? Explain. (relevant section & relevant section)

17. True/false: The standard error of the mean is smaller when N = 20 than when N = 10. (relevant section)

18. True/false: The sampling distribution of r = .8 becomes normal as N increases. (relevant section)

19. True/false: You choose 20 students from the population and calculate the mean of their test scores. You repeat this process 100 times and plot the distribution of the means. In this case, the sample size is 100. (relevant section & relevant section)

20. True/false: In your school, 40% of students watch TV at night. You randomly ask 5 students every day if they watch TV at night. Every day, you would find that 2 of the 5 do watch TV at night. (relevant section & relevant section)

21. True/false: The median has a sampling distribution. (relevant section)

22. True/false: Refer to the figure below. The population distribution is shown in black, and its corresponding sampling distribution of the mean for N = 10 is labeled "A." (relevant section & relevant section)

Questions from Case Studies:

The following questions use data from the Angry Moods (AM) case study.

23. (a) How many men were sampled? (b) How many women were sampled?

24. What is the mean difference between men and women on the Anger-Out scores?

25. Suppose in the population, the Anger-Out score for men is two points higher than it is for women. The population variances for men and women are both 20. Assume the Anger-Out scores for both genders are normally distributed. Given this information about the population parameters: (a) What is the mean of the sampling distribution of the difference between means? (relevant section) (b) What is the standard error of the difference between means? (relevant section) (c) What is the probability that you would have gotten this mean difference (see #24) or less in your sample? (relevant section)

The following questions use data from the Animal Research (AR) case study.

26. How many people were sampled to give their opinions on animal research?

27. (AR#11) What is the correlation in this sample between the belief that animal research is wrong and the belief that animal research is necessary? (Ch. 4.E)

28. Suppose the correlation between the belief that animal research is wrong and the belief that animal research is necessary is -.68 in the population. (a) Convert -.68 to z'. (relevant section) (b) Find the standard error of this sampling distribution.
(relevant section) (c) Assuming the data used in this study were randomly sampled, what is the probability that you would get this correlation or stronger (closer to -1)? (relevant section)

Selected Answers

1.  (a) Mean = 50, SD = 1.5
2.  (b) .019
4.  (a) 7.39
11. .0026
12. (b) .690
13. (a) .0055
14. (c) .116
23. (a) 30
25. (a) 2
28. (c) .603
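Answers of this kind are easy to verify numerically. For instance, exercise 2 with Python and scipy (assumed available; not part of the original exercise set):

from scipy.stats import norm

mu, sigma = 100, 12

# 2(a): P(single score > 110)
print(1 - norm.cdf(110, mu, sigma))              # ~0.202

# 2(b): P(mean of 25 scores > 105); the standard error is sigma / sqrt(N)
print(1 - norm.cdf(105, mu, sigma / 25 ** 0.5))  # ~0.019, matching the answer key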
{"url":"http://onlinestatbook.com/2/sampling_distributions/ch7_exercises.html","timestamp":"2014-04-16T18:56:07Z","content_type":null,"content_length":"15343","record_id":"<urn:uuid:7d0dda96-148f-4cb7-a55f-15dd37842316>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Q: Which measurement is a possible side length for a triangle with two sides measuring 9 cm and 4 cm? A) 5 cm  B) 6 cm  C) 3 cm  D) 4 cm. Can someone explain how exactly to do this?

Reply 1: Use the Pythagorean equation: A^2 + B^2 = C^2.

Reply 2: There is a rule for triangles: the sum of any two sides must be greater than the third side.
A) 5: suppose I take 5 and 4 as two sides; their sum must be greater than 9, but it's not, therefore it's not 5.
Let's look at D) 4: 4 + 4 is not greater than 9 either, so that's not it.
Continue with this logic... do you know the answer?

Asker: So it would be 6, because 6 + 4 = 10 and 10 is greater than 9.

Reply 2: You got it! 6 + 9 is also greater than 4, which further supports your answer.

Asker: Awesome, thanks :)
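The rule used above generalizes to a one-line check (illustrative sketch, not from the thread):

def can_be_triangle(a, b, c):
    """Triangle inequality: every side must be shorter than the sum of the others."""
    return a + b > c and a + c > b and b + c > a

for x in (5, 6, 3, 4):
    print(x, can_be_triangle(9, 4, x))   # only 6 works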
{"url":"http://openstudy.com/updates/4fd82f71e4b091dda53fcdf2","timestamp":"2014-04-16T23:02:39Z","content_type":null,"content_length":"51435","record_id":"<urn:uuid:11aade69-9551-443c-a3c2-8def6d7c636e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
.:: Phrack Magazine ::.

                        Phrack #64 file 10

                    Cryptanalysis of DPA-128

                            By SysK

                      syskall@phreaker.net

--[ Contents

 1 - Introduction
 2 - A short word about block ciphers
 3 - Overview of block cipher cryptanalysis
 4 - Veins' DPA-128
   4.1 - Bugs in the implementation
   4.2 - Weaknesses in the design
 5 - Breaking the linearized version
 6 - On the non linearity of addition modulo n in GF(2)
 7 - Exploiting weak keys
   7.1 - Playing with a toy cipher
   7.2 - Generalization and expected complexity
   7.3 - Cardinality of |W
 8 - Breaking DPA based unkeyed hash function
   8.1 - Introduction to hash functions
   8.2 - DPAsum() algorithm
   8.3 - Weaknesses in the design/implementation
   8.4 - A (2nd) preimage attack
 9 - Conclusion
 10 - Greetings
 11 - Bibliography

--[ 1 - Introduction

While the cracking scene has grown with cryptology thanks to the evolution of binary protection schemes, the hacking scene mostly hasn't. This is largely explained by the fact that there was globally no real need. Indeed, it's well known that if a hacker needs to decrypt some files, he will hack into the box of their owner, backdoor the system and then use it to steal the key. A cracker who needs to break a protection scheme will not have the same approach: he will usually try to understand it fully in order to find and exploit design and/or implementation flaws.

Although the growth of the security industry in recent years has changed the situation a little, there are still too many people in the hacking community with weak knowledge of this science. What is disturbing is the spread of urban legends and other hoaxes by some paranoids among them. For example, haven't you ever heard people claiming that government agencies were able to break RSA or AES? A much more clever question would have been: what does "break" mean?

A good example of a paranoid reaction can be found in M1lt0n's article [FakeP63]. The author, who is probably skilled in hacking, promotes the use of "home made cryptographic algorithms" instead of standardized ones such as 3DES. The corresponding argument is that since most so-called security experts lack coding skills, they aren't able to develop appropriate tools for exotic ciphers. While I agree at least partially with him regarding the coding abilities, I can't possibly agree with the main thesis. Indeed, if some public tools are sufficient to break a 3DES based protection, then it means that a design and/or implementation mistake was made since, according to the state of the art, 3DES is still unbroken.
The cryptosystem was weak from the beginning, and using "home made cryptography" would only weaken it further. It is therefore extremely important to understand cryptography and to trust the standards.

In a previous Phrack issue (Phrack 62), Veins exposed to the hacking community a "home made" block cipher called DPA (Dynamic Polyalphabetic Algorithms) [DPA128]. In the following paper, we are going to analyze this cipher and demonstrate that it is not flawless - at least from a cryptanalytic perspective - thus fitting perfectly with our point.

--[ 2 - A short word about block ciphers

Let's quote a little bit the excellent HAC [MenVan]:

"A block cipher is a function which maps n-bit plaintext blocks to n-bit ciphertext blocks; n is called the blocklength. It may be viewed as a simple substitution cipher with large character size. The function is parametrized by a k-bit key K, taking values from a subset |K (the key space) of the set of all k-bit vectors Vk. It is generally assumed that the key is chosen at random. Use of plaintext and ciphertext blocks of equal size avoids data expansion."

Pretty clear, isn't it? :> So what's the purpose of such a cryptosystem? Obviously, since we are dealing with encryption, this class of algorithms provides confidentiality. Its construction makes it particularly suitable for applications such as the encryption of large volumes (files or hard disks, for example). Used in special modes such as CBC (as in OpenSSL), it can also provide stream encryption; for example, AES-CBC is used in the WPA2, SSL and SSH protocols.

Remark: When used in conjunction with other mechanisms, block ciphers can also provide services such as authentication or integrity (cf. part 8 of this paper).

An important point is understanding what cryptology is for. While cryptography aims at designing the best algorithms, that is to say secure and fast ones, cryptanalysis allows the evaluation of the security of those algorithms. The more weaknesses an algorithm is shown to have, the less we should trust it.

--[ 3 - Overview of block cipher cryptanalysis

The cryptanalysis of block ciphers evolved significantly in the 90s with the appearance of fundamental methods such as differential [BiSha90] and linear [Matsui92] cryptanalysis. Together with more recent ones like the boomerang attack of Wagner or the chi-square cryptanalysis of Vaudenay [Vaud], they constitute the set of so-called statistical attacks on block ciphers, in opposition to the very recent and still controversial algebraic ones (see [CourtAlg] for more information). Today the evolution of block cipher cryptanalysis tends to stabilize; however, a cryptographer still has to acquire quite a deep knowledge of those attacks in order to design a cipher.

Reading the Phrack paper, we think - actually we may be wrong - that the author mostly based his design on statistical tests. Although such tests are obviously necessary, they can't possibly be enough: every component has to be carefully chosen. We identified several weaknesses and think that some more may still be left.

--[ 4 - Veins' DPA-128 description

DPA-128 is a 16-round block cipher providing 128-bit block encryption using an n-bit key. Each round encryption is composed of 3 functions, rbytechain(), rbitshift() and S_E(). Thus for each input block, we apply the E() function 16 times (one per round):
Thus for each input block, we apply the E() function 16 times (one per round) : void E (unsigned char *key, unsigned char *block, unsigned int shift) { rbytechain (block); rbitshift (block, shift); S_E (key, block, shift); } where: - block is the 128b input - shift is a 32b parameter dependent of the round subkey - key is the 128b round subkey Consequently, the mathematical description of this cipher is: f: |P x |K ----> |C where: - |P is the set of all plaintexts - |K is the set of all keys - |C is the set of all ciphertexts For p element of |P, k of |K and c of |C, we have c = f(p,k) with f = EE...EE = E^16 and meaning the composition of functions. We are now going to describe each function. Since we sometimes may need mathematics to do so, we will assume that the reader is familiar with basic algebra ;> rbytechain() is described by the following C function: void rbytechain(unsigned char *block) { int i; for (i = 0; i < DPA_BLOCK_SIZE; ++i) block[i] ^= block[(i + 1) % DPA_BLOCK_SIZE]; return; } where: - block is the 128b input - DPA_BLOCK_SIZE equals 16 Such an operation on bytes is called linear mixing and its goal is to provide the diffusion of information (according to the well known Shannon theory). Mathematically, it's no more than a linear map between two GF(2) vector spaces of dimension 128. Indeed, if U and V are vectors over GF(2) representing respectively the input and the output of rbytechain() then V = M.U where M is a 128x128 matrix over GF(2) of the linear map where coefficients of the matrix are trivial to find. Now let's see rbitshift(). Its C version is: void rbitshift(unsigned char *block, unsigned int shift) { unsigned int i; unsigned int div; unsigned int mod; unsigned int rel; unsigned char mask; unsigned char remainder; unsigned char sblock[DPA_BLOCK_SIZE]; if (shift) { mask = 0; shift %= 128; div = shift / 8; mod = shift % 8; rel = DPA_BLOCK_SIZE - div; for (i = 0; i < mod; ++i) mask |= (1 << i); for (i = 0; i < DPA_BLOCK_SIZE; ++i) { remainder = ((block[(rel + i - 1) % DPA_BLOCK_SIZE]) & mask) << (8 - mod); sblock[i] = ((block[(rel + i) % DPA_BLOCK_SIZE]) >> mod) | remainder; } } memcpy(block, sblock, DPA_BLOCK_SIZE); } where: - block is the 128b input - DPA_BLOCK_SIZE equals 16 - shift is derived from the round subkey Veins describes it in his paper as a key-related shifting (in fact it has to be a key-related 'rotation' since we intend to be able to decrypt the ciphertext ;)). A careful read of the code and several tests confirmed that it was not erroneous (up to a bug detailed later in this paper), so we can describe it as a linear map between two GF(2) vector spaces of dimension 128. Indeed, if V and W are vectors over GF(2) representing respectively the input and the output of rbitshift() then: W = M'.V where M' is the 128x128 matrix over GF(2) of the linear map where, unlike the previous function, coefficients of the matrix are unknown up to a probability of 1/128 per round. Such a function also provides diffusion of information. Finally, the last operation S_E() is described by the C code: void S_E (unsigned char *key, unsigned char *block, unsigned int s) { int i; for (i = 0; i < DPA_BLOCK_SIZE; ++i) block[i] = (key[i] + block[i] + s) % 256; return; } where: - block is the 128b input - DPA_BLOCK_SIZE equals 16 - s is the shift parameter described in the previous function - key is the round subkey The main idea of veins' paper is the so-called "polyalphabetic substitution" concept, whose implementation is supposed to be the S_E() C function. 
Reading the code, it appears to be no more than a key mixing function over GF(2^8). Remark: We shall see later the importance of the mathematical operation know as 'addition' over GF(2^8). Regarding the key scheduling, each cipher round makes use of a 128b subkey as well as of a 32b one deriving from it called "shift". The following pseudo code describes this operation: skey(0) = checksum128(master_key) for i = 0, nbr_round-2: skey(i+1) = checksum128(skey(i)) skey(0) = skey(15) for i = 0, nbr_round-1: shift(nbr_round-1 - i) = hash32(skey(i)) where skey(i) is the i'th subkey. It is not necessary to explicit the checksum128() and hash32(), the reader just has to remind this thing: whatever the weakness there may be in those functions, we will now consider them being true oneway hash functions providing perfect entropy. As a conclusion, the studied cipher is closed to being a SPN (Substitution - Permutation Network) which is a very generic and well known construction (AES is one for example). --[ 4.1 - Bugs in the implementation Although veins himself honestly recognizes that the cipher may be weak and "strongly discourages its use" to quote him [DPA128], some people could nevertheless decide to use it as a primitive for encryption of personal and/or sensitive data as an alternative to 'already-cracked-by-NSA' ciphers [NSA2007]. Unfortunately for those theoretical people, we were able to identify a bug leading to a potentially incorrect functioning of the cryptosystem (with a non negligible probability). We saw earlier that the bitshift code skeleton was the following: /* bitshift.c */ void {r,l}bitshift(unsigned char *block, unsigned int shift) { [...] // SysK : local vars declaration unsigned char sblock[DPA_BLOCK_SIZE]; if (shift) { [...] // SysK : sblock initialization } memcpy(block, sblock, DPA_BLOCK_SIZE); } Clearly, if 'shift' is 0 then 'block' is fed with stack content! Obviously in such a case the cryptosystem can't possibly work. Since shift is an integer, such an event occurs with at least a theoretical probability of 1/2^32 per round. Now let's study the shift generation function: /* hash32.c */ /* * This function computes a 32 bits output out a variable length input. It is * not important to have a nice distribution and low collisions as it is used * on the output of checksum128() (see checksum128.c). There is a requirement * though, the function should not consider \0 as a key terminator. */ unsigned long hash32(unsigned char *k, unsigned int length) { unsigned long h; for (h = 0; *k && length; ++k, --length) h = 13 * h + *k; return (h); } As stated in the C code commentary, hash32() is the function which produces the shift. Although the author is careful and admits that the output distribution may not be completely uniform (not exactly equal probability for each byte value to appear) it is obvious that a strong bias is not desirable (Cf 7.3). However what happens if the first byte pointed by k is 0 ? Since the loop ends for k equal to 0, then h will be equal to 13 * 0 + 0 = 0. Assuming that the underlying subkey is truly random, such an event should occur with a probability of 1/256 (instead of 1/2^32). Since the output of hash32() is an integer as stated in the comment, this is clearly a bug. 
We could be tempted to think that this implementation failure leads to a weakness but a short look at the code tells us that: struct s_dpa_sub_key { unsigned char key[DPA_KEY_SIZE]; unsigned char shift; }; typedef struct s_dpa_sub_key DPA_SUB_KEY; Therefore since shift is a char object, the presence of "*k &&" in the code doesn't change the fact that the cryptosystem will fail with a probability of 1/ 256 per round. Since the bug may appear independently in each round, the probability of failure is even greater: p("fail") = 1 - p("ok") = 1 - Mul( p("ok in round i") ) = 1 - (255/256)^16 = 0.0607... where i is element of [0, (nbr_rounds - 1)] It's not too far from 1/16 :-) Remark: We shall see later that the special case where shift is equal to 0 is part of a general class of weak keys potentially allowing an attacker to break the cryptosystem. Hunting weaknesses and bugs in the implementation of cryptographic primitives is the common job of some reverse engineers since it sometimes allows to break implementations of algorithms which are believed to be theoretically secure. While those flaws mostly concern asymmetric primitives of digital signature or key negotiation/ generation, it can also apply in some very specific cases to the block cipher world. From now, we will consider the annoying bug in bitshift() fixed. --[ 4.2 - Weaknesses in the design When designing a block cipher, a cryptographer has to be very careful about every details of the algorithm. In the following section, we describe several design mistakes and explain why in some cases, it can reduce the security of the cipher. a) We saw earlier that the E() function was applied to each round. However such a construction is not perfect regarding the first round. Since rbytechain() is a linear mixing operating not involving key material, it shouldn't be used as the first operation on the input buffer since its effect on it can be completely canceled. Therefore, if a cryptanalyst wants to attack the bitshift() component of the first round, he just have to apply lbytechain() (the rbytechain() inverse function) to the input vector. It would thus have been a good idea to put a key mixing as the first operation. b) The rbitshift() operation only need the 7 first bits of the shift character whereas the S_E() uses all of them. It is also generally considered a bad idea to use the same key material for several operations. c) If for some reason, the attacker is able to leak the second (not the first) subkey then it implies the compromising of all the key material. Of course the master key will remain unknown because of the onewayness of checksum128() however we do not need to recover it in order to encrypt and/or decrypt datas. d) In the bitshift() function, a loop is particularly interesting: for (i = 0; i < mod; ++i) mask |= (1 << i); What is interesting is that the time execution of the loop is dependent of "mod" which is derived from the shift. Therefore we conclude that this loop probably allows a side channel attack against the cipher. Thanks to X for having pointed this out ;> In the computer security area, it's well known that a single tiny mistake can lead to the total compromising of an information system. In cryptography, the same rules apply. --[ 5 - Breaking the linearized version Even if we regret the non justification of addition operation employment, it is not the worst choice in itself. 
What would have happened if the key mixing had been done with a xor operation over GF(2^8) instead, as is the case in DES or AES for example? To measure the importance of algebraic considerations in the security of a block cipher, let's play a little bit with a linearized version of the cipher, that is to say one where we replace the S_E() function with the following S_E2():

void S_E2 (unsigned char *key, unsigned char *block, unsigned int s)
{
  int i;

  for (i = 0; i < DPA_BLOCK_SIZE; ++i)
    block[i] = (key[i] ^ block[i] ^ s) % 256;   [1] // + is replaced by xor
  return;
}

If X, Y and K are vectors over GF(2^8) representing respectively the input, the output of S_E2() and the round key material, then Y = X xor K.

Remark: K = sK xor shift. We use K for simplification purposes.

Now considering the full round we have:

V = M.U      [a] (rbytechain)
W = M'.V     [b] (rbitshift)
Y = W xor K  [c] (S_E2)

Linear algebra allows the composition of the maps rbytechain() and rbitshift() since the dimensions of M and M' match, but W in [b] is a vector over GF(2) whereas W in [c] is clearly over GF(2^8). However, due to the use of XOR in [c], Y, W and K can also be seen as vectors over GF(2). Therefore, S_E2() is a GF(2) affine map between two vector spaces of dimension 128. We then have:

Y = M'.M.U xor K

The use of differential cryptanalysis will help us to get rid of the key. Let's consider the couples (U0, Y0 = E(U0)) and (U1, Y1 = E(U1)); then:

DELTA(Y) = Y0 xor Y1
         = (M'.M.U0 xor K) xor (M'.M.U1 xor K)
         = (M'.M.U0 xor M'.M.U1) xor K xor K   (commutativity & associativity of xor)
         = (M'.M).(U0 xor U1)                  (distributivity)
         = (M'.M).DELTA(U)

Such a result shows us that whatever sK and shift are, there is always a linear map linking an input differential to the corresponding output differential. The generalization to the 16 rounds using matrix multiplication is obvious. Therefore we have proved that there exists a 128x128 matrix Mf over GF(2) such that DELTA(Y) = Mf.DELTA(X) for the linearized version of the cipher. Then, assuming we know one couple (U0,Y0) and Mf, we can encrypt any input U. Indeed, Y xor Y0 = Mf.(U xor U0), therefore Y = (Mf.(U xor U0)) xor Y0.

Remark 1: The attack doesn't give us the knowledge of subkeys and shifts, but such a thing is useless. The goal of an attacker is not the key in itself but rather the ability to encrypt and/or decrypt a set of plaintexts/ciphertexts. Furthermore, considering the key scheduling operation, if we really needed to recover the master key it would be quite a pain in the ass, considering the fact that checksum128() is a one way function ;-)

Remark 2: Obviously, in order to decrypt any output Y we need to calculate Mf^-1, which is the inverse matrix of Mf. This is somewhat more interesting, isn't it? :-)

Because of rbitshift(), we are unable to determine the coefficients of Mf using matrix multiplications. An exhaustive search is of course impossible because of the huge complexity (2^16384); however, finding them is equivalent to solving 128 systems (1 system per row of Mf) of 128 variables (1 variable per column) over GF(2). To build such a system, we need 128 couples of (cleartext, ciphertext). The described attack was implemented using the nice NTL library [SHOUP] and can be found in annexe A of this paper.

$ g++ break_linear.cpp bitshift.o bytechain.o key.c hash32.o checksum128.o -o break_linear -lntl -lcrypto -I include
$ ./break_linear
[+] Generating the plaintexts / ciphertexts
[+] NTL stuff !
[+] Calculation of Mf
[+] Let's make a test !
[+] Well done boy :>

Remark: Sometimes NTL detects a linear relation between the chosen inputs (DELTA_X) and will then refuse to work. Indeed, in order to solve the 128 systems, we need a situation where all equations are independent. If it's not the case, then obviously det(M) is equal to 0 (with probability 1/2). Since inputs are randomly generated, just try again until it works :-)

$ ./break_linear
[+] Generating the plaintexts / ciphertexts
[+] NTL stuff !
det(M) = 0

As a conclusion, we saw that the linearity over GF(2) of the xor operation allowed us to write an affine relation between two elements of GF(2)^128 in the S_E2() function, and then to easily break the linearized version using a 128 known plaintext attack. The use of non linearity is crucial in the design. Fortunately for DPA-128, Veins chose the addition modulo 256 as the key mixer, which is naturally non linear over GF(2).
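As a quick illustration of that claim (and of the carry recursion formalized in the next section), here is a throwaway Python check, not part of the original tooling: it verifies bit by bit that w + k = w xor k xor c, where the carry c is exactly what breaks GF(2)-linearity.

import random

def bits(x, n=8):
    return [(x >> i) & 1 for i in range(n)]

# addition mod 256 = xor plus a carry:
# c(0) = 0 and c(i+1) = w(i)k(i) xor w(i)c(i) xor k(i)c(i)
for _ in range(1000):
    w, k = random.randrange(256), random.randrange(256)
    wb, kb = bits(w), bits(k)
    out, c = 0, 0
    for i in range(8):
        out |= (wb[i] ^ kb[i] ^ c) << i
        c = (wb[i] & kb[i]) ^ (wb[i] & c) ^ (kb[i] & c)
    assert out == (w + k) % 256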
[+] Well done boy :> Remark: Sometimes NTL detects a linear relation between chosen inputs (DELTA_X) and will then refuse to work. Indeed, in order to solve the 128 systems, we need a situation where every equations are independent. If it's not the case, then obviously det(M) is equal to 0 (with probability 1/2). Since inputs are randomly generated, just try again until it works :-) $ ./break_linear [+] Generating the plaintexts / ciphertexts [+] NTL stuff ! det(M) = 0 As a conclusion we saw that the linearity over GF(2) of the xor operation allowed us to write an affine relation between two elements of GF(2)^128 in the S_E2() function and then to easily break the linearized version using a 128 known plaintext attack. The use of non linearity is crucial in the design. Fortunately for DPA-128, Veins chose the addition modulo 256 as the key mixer which is naturally non linear over GF(2). --[ 6 - On the non linearity of addition modulo n over GF(2) The bitshift() and bytechain() functions can be described using matrix over GF(2) therefore it is interesting to use this field for algebraic calculations. The difference between addition and xor laws in GF(2^n) lies in the carry propagation: w(i) + k(i) = w(i) xor k(i) xor carry(i) where w(i), k(i) and carry(i) are elements of GF(2). We note w(i) as the i'th bit of w and will keep this notation until the end. carry(i), written c(i) for simplification purpose, is defined recursively: c(i+1) = w(i).k(i) xor w(i).c(i) xor k(i).c(i) with c(0) = 0 Using this notation, it would thus be possible to determine a set of relations over GF(2) between input/output bits which the attacker controls using a known plaintext attack and the subkey bits (which the attacker tries to guess). However, recovering the subkey bits won't be that easy. Indeed, to determine them, we need to get rid of the carries replacing them by multivariate polynomials were unknowns are monomials of huge order. Remark 1: Because of the recursivity of the carry, the order of monomials grows up as the number of input bits per round as well as the number of rounds increases. Remark 2: Obviously we can not use intermediary input/output bits in our equations. This is because unlike the subkey bits, they are dependent of the input. We are thus able to express the cryptosystem as a multivariate polynomial system over GF(2). Solving such a system is NP-hard. There exists methods for system of reasonable order like groebner basis and relinearization techniques but the order of this system seems to be far too huge. However for a particular set of keys, the so-called weak keys, it is possible to determine the subkeys quite easily getting rid of the complexity introduced by the carry. -- [ 7 - Exploiting weak keys Let's first define a weak key. According to wikipedia: "In cryptography, a weak key is a key which when used with a specific cipher, makes the cipher behave in some undesirable way. Weak keys usually represent a very small fraction of the overall keyspace, which usually means that if one generates a random key to encrypt a message weak keys are very unlikely to give rise to a security problem. Nevertheless, it is considered desirable for a cipher to have no weak keys." Actually we identified a particular subset |W of |K allowing us to deal quite easily with the carry problem. A key "k" is part of |W if and only if for each round the shift parameter is a multiple of 8. The reader should understand why later. 
We will first present the attack on a reduced version of DPA for simplicity purposes, and generalize it later to the full version.

--[ 7.1 - Playing with a toy cipher

Our toy cipher is a 2-round DPA. Moreover, the cipher takes as input 4*8 bits instead of 16*8 = 128 bits, which means that DPA_BLOCK_SIZE = 4. We also make a little modification in the bytechain() operation. Let's remember the bytechain() function:

void rbytechain(unsigned char *block)
{
  int i;

  for (i = 0; i < DPA_BLOCK_SIZE; ++i)
    block[i] ^= block[(i + 1) % DPA_BLOCK_SIZE];
  return;
}

Since block is both input AND output of the function, we have for DPA_BLOCK_SIZE = 4:

V(0) = U(0) xor U(1)
V(1) = U(1) xor U(2)
V(2) = U(2) xor U(3)
V(3) = U(3) xor V(0) = U(0) xor U(1) xor U(3)

where V(x) is the x'th byte element. Thus with our modification:

V(0) = U(0) xor U(1)
V(1) = U(1) xor U(2)
V(2) = U(2) xor U(3)
V(3) = U(3) xor U(0)

Regarding the mathematical notation (pay your ascii !@#):

- the U,V,W,Y vector notation of section 5 remains
- Xj(i) is the i'th bit of vector Xj, where j is the round number
- the U0 vector is equivalent to P, where P is a plaintext
- m is the shift of round 0
- n is the shift of round 1
- xor will be written '+' since calculation is done in GF(2)
- all subscript calculations are done in the ring ZZ_32

How did we choose |W? Using algebra in GF(2) implies dealing with the carry. However, if k is a weak key (part of |W), then we can manage the calculation so that it's not painful anymore. Let i be the lowest bit of any input byte. Therefore, for each i part of the set {0,8,16,24} we have:

u0(i) = p(i)
v0(i) = p(i) + p(i+8)
w0(i+m) = v0(i)
y0(i) = w0(i) + k0(i) + C0(i)
y0(i+m) = w0(i+m) + k0(i+m) + C0(i+m)
y0(i+m) = p(i) + p(i+8) + k0(i+m) + C0(i+m)   /* carry(0) = 0 */
y0(i+m) = p(i) + p(i+8) + k0(i+m)

u1(i) = y0(i)
v1(i) = y0(i) + y0(i+8)
w1(i+n) = v1(i)
y1(i) = w1(i) + k1(i) + C1(i)
y1(i+n) = w1(i+n) + k1(i+n) + C1(i+n)
y1(i+n) = y0(i) + y0(i+8) + k1(i+n) + C1(i+n)
y1(i+n+m) = y0(i+m) + y0(i+m+8) + k1(i+n+m) + C1(i+n+m)   /* carry(0) = 0 */
y1(i+n+m) = p(i) + p(i+8) + k0(i+m) + p(i+8) + p(i+16) + k0(i+m+8) + k1(i+n+m)
y1(i+n+m) = p(i) + k0(i+m) + p(i+16) + k0(i+m+8) + k1(i+n+m)

As stated before, i is part of the set {0,8,16,24}, so we can write:

y1(n+m)    = p(0)  + k0(m)    + p(16) + k0(m+8)  + k1(n+m)
y1(8+n+m)  = p(8)  + k0(8+m)  + p(24) + k0(m+16) + k1(8+n+m)
y1(16+n+m) = p(16) + k0(16+m) + p(0)  + k0(m+24) + k1(16+n+m)
y1(24+n+m) = p(24) + k0(24+m) + p(8)  + k0(m)    + k1(24+n+m)

In the case of a known plaintext attack, the attacker has the knowledge of a set of couples (P,Y1). Therefore, considering the previous system, the lowest bits of the K0 and K1 vectors are the unknowns. Here we have a system which is clearly underdefined, since it is composed of 4 equations in 4*2 unknowns. It will give us the relations between each lowest bit of Y and the lowest bits of K0 and K1.

Remark 1: n and m are unknown. A trivial approach is to determine them, which costs a complexity of (2^4)^2 = 2^8. Although it may seem a good idea, let's recall that we are considering a round-reduced cipher! Indeed, applying the same idea to the full 16 rounds would cost (2^4)^16 = 2^64! Such a complexity is a pain in the ass even nowadays :-) A much better approach is to guess (n+m), as it costs 2^4 whatever the number of rounds. It gives us the opportunity to write relations between some input and output bits. We do not need to know exactly m and n: the knowledge of the intermediate variables k0(x+m) and k1(y+n+m) is sufficient.
Remark 2: An underdefined system admits several solutions. We are thus able to choose 4 variables arbitrarily, fixing them to values of our choice. Of course, we have to choose them so that we are able to solve the system with the remaining variables. For example, taking k0(m), k0(m+8) and k1(n+m) together is not fine because of the first equation. However, fixing all the k0(x+m) may be a good idea, as it automatically gives the corresponding k1(y+n+m).

Now let's go further. Let i be part of the set {1,9,17,25}. We can write:

u0(i) = p(i)
v0(i) = p(i) + p(i+8)
w0(i+m) = v0(i)
y0(i) = w0(i) + k0(i) + w0(i-1)*k0(i-1)
y0(i+m) = w0(i+m) + k0(i+m) + w0(i+m-1)*k0(i+m-1)
y0(i+m) = p(i) + p(i+8) + k0(i+m) + w0(i+m-1)*k0(i+m-1)
y0(i+m) = p(i) + p(i+8) + k0(i+m) + (p(i-1) + p(i-1+8))*k0(i+m-1)

u1(i) = y0(i)
v1(i) = y0(i) + y0(i+8)
w1(i+n) = v1(i)
y1(i) = w1(i) + k1(i) + C1(i)
y1(i) = w1(i) + k1(i) + w1(i-1)*k1(i-1)
y1(i+n) = w1(i+n) + k1(i+n) + w1(i-1+n)*k1(i-1+n)
y1(i+n) = y0(i) + y0(i+8) + k1(i+n) + (y0(i-1) + y0(i+8-1)) * k1(i-1+n)
y1(i+n+m) = y0(i+m) + y0(i+m+8) + k1(i+m+n)
            + (y0(i+m-1) + y0(i+m+8-1)) * k1(i+m+n-1)
y1(i+n+m) = p(i) + p(i+8) + k0(i+m) + (p(i-1) + p(i-1+8)) * k0(i+m-1)
            + p(i+8) + p(i+16) + k0(i+m+8) + (p(i+8-1) + p(i-1+16)) * k0(i+m-1+8)
            + k1(i+n+m)
            + k1(i+m+n-1) * [p(i-1) + p(i+8-1) + k0(i+m-1)]
            + k1(i+m+n-1) * [p(i-1+8) + p(i+16-1) + k0(i+m-1+8)]
y1(i+n+m) = p(i) + k0(i+m) + (p(i-1) + p(i-1+8)) * k0(i+m-1)
            + p(i+16) + k0(i+m+8) + (p(i+8-1) + p(i-1+16)) * k0(i+m-1+8)
            + k1(i+n+m)
            + k1(i+m+n-1)*[p(i-1) + k0(i+m-1)]
            + k1(i+m+n-1)*[p(i-1+16) + k0(i+m-1+8)]

Thanks to the previous system's resolution, we have the knowledge of the k0(i+m-1+x) and k1(i+m+n-1+y) variables. Therefore, we can reduce the previous equation to:

A(i) = k0(i+m) + k0(i+m+8) + k1(i+n+m)   (alpha)

where A(i) is a value known to the attacker.

Remark 1: This equation represents the same system as the one found in the case of i being the lowest bit! Therefore all previous remarks remain valid.

Remark 2: If we hadn't had the knowledge of the k0(i+m-1+x) and k1(i+m+n-1+y) bits, then the number of variables would have grown seriously. Moreover, we would have had to deal with some degree 2 monomials :-/

We can thus conjecture that equation (alpha) will remain true for each i part of {a,a+8,a+16,a+24} where 0 <= a < 8.

--[ 7.2 - Generalization and expected complexity

Let's deal with the real bytechain() function now. As stated before, and for DPA_BLOCK_SIZE = 4, we have:

V(0) = U(0) xor U(1)
V(1) = U(1) xor U(2)
V(2) = U(2) xor U(3)
V(3) = U(0) xor U(1) xor U(3)

This is clearly troublesome, as the last byte V(3) is NOT calculated like V(0), V(1) and V(2). Because of the rotations involved, we won't be able to know when the manipulated bit is part of V(3) or not.
Therefore, we have to use a general formula:

V(i) = U(i) + U(i+1) + a(i).U(i+2)   where a(i) = 1 for i = 24 to 31

For i part of {0,8,16,24} we have:

u0(i) = p(i)
v0(i) = p(i) + p(i+8) + a0(i).p(i+16)
w0(i+m) = v0(i)
y0(i) = w0(i) + k0(i) + C0(i)
y0(i+m) = w0(i+m) + k0(i+m) + C0(i+m)
y0(i+m) = p(i) + p(i+8) + a0(i).p(i+16) + k0(i+m) + C0(i+m)   /* carry(0) = 0 */
y0(i+m) = p(i) + p(i+8) + a0(i).p(i+16) + k0(i+m)

So in the second round:

u1(i) = y0(i)
v1(i) = y0(i) + y0(i+8) + a1(i).y0(i+16)
w1(i+n) = v1(i)
y1(i) = w1(i) + k1(i) + C1(i)
y1(i+n) = w1(i+n) + k1(i+n) + C1(i+n)
y1(i+n) = y0(i) + y0(i+8) + a1(i).y0(i+16) + k1(i+n) + C1(i+n)
y1(i+n+m) = y0(i+m) + y0(i+m+8) + a1(i+m).y0(i+m+16) + k1(i+n+m)
y1(i+n+m) = p(i) + p(i+8) + a0(i).p(i+16) + k0(i+m)
            + p(i+8) + p(i+16) + a0(i).p(i+24) + k0(i+m+8)
            + a1(i+m).[p(i+16) + p(i+24) + a0(i).p(i) + k0(i+m+16)]
            + k1(i+n+m)
y1(i+n+m) = p(i) + a0(i).p(i+16) + k0(i+m)
            + p(i+16) + a0(i).p(i+24) + k0(i+m+8)
            + a1(i+m).[p(i+16) + p(i+24) + a0(i).p(i) + k0(i+m+16)]
            + k1(i+n+m)

a0(i) is not a problem since we know it. This is coherent with the fact that the first operation of the cipher is rbytechain(), which is invertible for the attacker. However, the problem lies in the a1(i+m) variables. Guessing a1(i+m) is out of the question, as it would cost us a complexity of (2^4)^15 = 2^60 for the 16 rounds! The solution is to consider the a1(i+m) as another set of 4 variables. We can also add the following equation to our system:

a1(m) + a1(m+8) + a1(m+16) + a1(m+24) = 1

This equation will remain true for the other bits.

So what is the global complexity? Obviously, with DPA_BLOCK_SIZE = 16, each system is composed of 16+1 equations in 16+1 variables (we fixed the others). Therefore, the complexity of one resolution is about 17^3 ~ 2^13 operations. We will solve 8 such systems, since there are 8 bits per byte; thus the global complexity is around (2^13)*8 = 2^16.

Remark: We didn't take into account the cost of deriving the equations, as this is assumed to be done with a formal calculation program such as pari-gp or magma.

--[ 7.3 - Cardinality of |W

What is the probability of choosing a weak key? We have seen that our weak key criterion is that, for each round, the rotation parameter needs to be a multiple of 8. Obviously, this happens with a theoretical probability of 16/128 = 1/8 per round. Since we consider subkeys to be random, the generations of the rotation parameters are independent, which means that the overall probability is (1/8)^16 = 1/2^48. Although a probability of 1/2^48 still corresponds to a huge set of weak keys, in real life there are very few chances to choose one of them. In fact, you probably have much better odds of winning the lottery ;)

However, two facts must be noticed:

- We presented one set of weak keys, but there may be some more!
- We illustrated another weakness in the conception of DPA-128.

Remark: A probability of 1/8 per round is completely theoretical, as it supposes a uniform distribution of the hash32() output. Considering the extreme simplicity of the hash32() function, it wouldn't be too surprising for the practical distribution to be different. Therefore we made a short test to compute the real probability (Annexe B).

$ gcc test.hash32.c checksum128.o hash32.o -o test.hash32 -O3 -fomit-frame-pointer
$ time ./test.hash32
[+] Probability is 0.125204

real	0m14.654s
user	0m14.649s
sys	0m0.000s

$ gp -q
? (1/0.125204) ^ 16
274226068900783.2739747241633
? log(274226068900783.2739747241633) / log(2)
47.96235905375676878381741198
?
This result tells us clearly that the probability of the shift being a multiple of 8 is around 1/2^2.99 ~ 1/8 per round, which matches the theoretical value, the difference being too small to be significant. In order to improve the measure, we used checksum128() as an input of hash32(). Furthermore, we also tried to test hash32() without the "*k &&" bug mentioned earlier. Both tests gave similar results, which means that the bug is not important in practice and that checksum128() doesn't seem to be particularly skewed. This is a good point for DPA! :-D

--[ 8 - Breaking DPA-based unkeyed hash function

In his paper, Veins also explains how a hash function can be built out of DPA. We will analyze the proposed scheme and show how to completely break it.

--[ 8.1 - Introduction to hash functions

Quoting once again the excellent HAC [MenVan]:

"A hash function is a function h which has, as a minimum, the following two properties:

1. compression - h maps an input x of arbitrary finite bitlength, to an output h(x) of fixed bitlength n.
2. ease of computation - given h and an input x, h(x) is easy to compute."

In cryptography there are essentially two families of hash functions:

1. The MACs (Message Authentication Codes). They are keyed and provide both authentication (of source) and integrity of messages.
2. The MDCs (Modification Detection Codes), sometimes referred to as MICs. They are unkeyed and only provide integrity. We will focus on this kind of function.

When designing a hash function, the cryptographer generally wants it to satisfy the three following properties:

- preimage resistance: for any y, it should not be possible (that is to say, it should be computationally infeasible) to find an x such that h(x) = y. Such a property implies that the function has to be non invertible.
- 2nd preimage resistance: for any x, it should not be possible to find an x' such that h(x) = h(x') with x different from x'.
- collision resistance: it should not be possible to find an x and an x' (with x different from x') such that h(x) = h(x').

Remark 1: Properties 1 and 2 are essential when dealing with binary integrity.

Remark 2: The published attacks on MD5 and SHA-0/SHA-1 deal with the third property. While it is true that finding collisions in a hash function is enough for the crypto community to consider it insecure (and sometimes leads to a new standard [NIST2007]), for most usages such a function still remains sufficient.

There are many ways to design an MDC. Some functions are based on MD4, such as MD5 or the SHA* functions, which rely heavily on boolean algebra and operations in GF(2^32); some are based on NP problems, such as RSA-based ones; and finally some others are block cipher based. The third category is particularly interesting since the security of the hash function can be reduced to that of the underlying block cipher. This is of course only true with a good design.

--[ 8.2 - DPAsum() algorithm

The DPA-based hash function lies in the functions DPA_sum() and DPA_sum_write_to_file(), which can be found respectively in the files sum.c and data.c. Let's detail them a little bit using pseudo code.

Let M be the message to hash and M(i) the i'th 128b block of the message. Let N = DPA_BLOCK_SIZE * i + j be the size in bytes of the message, where i and j are integers such that i = N / DPA_BLOCK_SIZE and 0 <= j < 16. Let C be an array of 128-bit elements where intermediary results of the hash calculation are stored. The last element of this array is the hash of the message.
func DPA_sum(K0, M, C):
    K0 = key("deadbeef")
    IV = "0123456789abcdef"
    C(0) = E( IV , K0)
    C(1) = E( IV xor M(0) , K0)
    FOR a = 1 to i-1:
        C(a+1) = E( C(a) xor M(a) , K0)
    if j == 0:
        C(i+1) = E( C(i) xor 000...000 , K0)
    else:
        C(i+1) = E( C(i) xor PAD( M(i) ), K0)
        C(i+2) = E( C(i+1) xor 000...00S , K0)   /* S = 16-j */
    return

func DPA_sum_write_to_file(C, file):
    write(file, C(last_element))
    return

--[ 8.3 - Weaknesses in the design/implementation

We noticed several implementation mistakes in the code:

a) Following the hash calculation algorithm, every element of the array C is defined recursively; however, C(0) is never used in the calculation. This doesn't impact security in itself, but it is somewhat strange and could let us think that the function was not designed before being programmed.

b) When the size of M is not a multiple of DPA_BLOCK_SIZE (j is not equal to 0), the algorithm calculates the last element using a xor mask whose last byte gives information on the size of the original message. However, what is included in the padding is not the size of the message itself but rather the size of the padding. If we take the example of the well known Merkle-Damgard construction, on which the MD{4,5} and SHA-{0,1} functions are based, the length of the message is appended precisely in order to prevent collision attacks between messages of different sizes. Therefore, in the DPAsum() case, appending j to the message is not sufficient, as it would still be possible to find collisions between messages of sizes (DPA_BLOCK_SIZE*a + j) and (DPA_BLOCK_SIZE*b + j), where obviously a and b are different.

Remark: The fact that the IV and the master key are initially fixed is not a problem in itself, since we are dealing with an MDC here.

--[ 8.4 - A (2nd) preimage attack

Because of the hash function's construction properties, given a message X it is trivial to create a message X' such that h(X) = h(X'). This is called building a 2nd preimage attack. We built a quick & dirty program to illustrate it (Annexe C). It takes a 32-byte message as input and produces another 32-byte one with the same hash:

$ cat to.hack | hexdump -C
00000000  58 41 4c 4b 58 43 4c 4b  53 44 4c 46 4b 53 44 46  |XALKXCLKSDLFKSDF|
00000010  58 4c 4b 58 43 4c 4b 53  44 4c 46 4b 53 44 46 0a  |XLKXCLKSDLFKSDF.|
00000020
$ ./dpa -s to.hack
6327b5becaab3e5c61a00430e375b734
$ gcc break_hash.c *.o -o break_hash -I ./include
$ ./break_hash to.hack > hacked
$ ./dpa -s hacked
6327b5becaab3e5c61a00430e375b734
$ cat hacked | hexdump -C
00000000  43 4f 4d 50 4c 45 54 45  4c 59 42 52 4f 4b 45 4e  |COMPLETELYBROKEN|
00000010  3e bf de 93 d7 17 7e 1d  2a c7 c6 70 66 bb eb a3  |>.....~.*..pf...|
00000020

Nice, isn't it? :-) We were able to write arbitrary data in the first 16 bytes and then to calculate the next 16 bytes so that the 'hacked' file has the exact same hash. But how did we do such an evil thing? Assuming the size of both messages is 32 bytes, then:

h(Mi) = E(E(Mi(0) xor IV, K0) xor Mi(1), K0)

Therefore, h(M1) = h(M2) is equivalent to:

E(E(M1(0) xor IV, K0) xor M1(1), K0) = E(E(M2(0) xor IV, K0) xor M2(1), K0)

which can be reduced to:

E(M1(0) xor IV, K0) xor M1(1) = E(M2(0) xor IV, K0) xor M2(1)

and therefore gives us:

M2(1) = E(M2(0) xor IV, K0) xor E(M1(0) xor IV, K0) xor M1(1)   [A]

Since M1, IV and K0 are known parameters, for any chosen M2(0), [A] gives us the M2(1) such that h(M1) = h(M2).

Remark 1: Actually such a result can easily be generalized to n-byte messages.
In particular, the attacker can put anything in his message and "correct it" using the last blocks (if n >= 32).

Remark 2: Of course, building a preimage attack is also very easy. We mentioned previously that for a 32-byte message we had:

h(Mi) = E(E(Mi(0) xor IV, K0) xor Mi(1), K0)

Therefore:

Mi(1) = E^-1(h(Mi), K0) xor E(Mi(0) xor IV, K0)   [B]

Equation [B] tells us how to generate Mi(1) so that we get h(Mi) in output. It doesn't really seem to be a one way hash function, does it? ;-)

Building a hash function out of a block cipher is a well known problem in cryptography, which doesn't only involve the security of the underlying block cipher. One should rely on one of the many well known and heavily analyzed constructions for this purpose instead of trying to design one.

--[ 9 - Conclusion

We put into evidence some weaknesses of the cipher and were also able to totally break the proposed hash function built out of DPA. In his paper, Veins implicitly set the basis of a discussion to which we wish to deliver our opinion. We claim that it is necessary to understand cryptology properly. The goal of this paper wasn't to illustrate anything else but that fact. Hacker or not, paranoid or simply careful, the rule is the same for everybody in this domain: nothing should be done without reflection.

--[ 10 - Greetings

#TF crypto dudes for friendly and smart discussions, and specially X for giving me a lot of hints. I learned a lot from you guys :-)
#K40r1 friends for years of fun ;-) Hi all :)
Finally but not least, my GF and her kindness, which is her prime characteristic :> (However, if she finds out the joke in the last sentence I may die :|)

--[ 11 - Bibliography

[DPA128]   A Polyalphabetic Substitution Cipher, Veins, Phrack 62.
[FakeP63]  Keeping 0day Safe, m1lt0n, Phrack(.nl) 63.
[MenVan]   Handbook of Applied Cryptography, Menezes, van Oorschot & Vanstone.
[Knud99]   Correlation in RC6, L. Knudsen & W. Meier.
[CrypTo]   Two balls ownz one, http://fr.wikipedia.org/wiki/Cryptorchidie
[Vaud]     An Experiment on DES - Statistical Cryptanalysis, S. Vaudenay.
[Ryabko]   Adaptive chi-square test and its application to some cryptographic problems, B. Ryabko.
[CourtAlg] How Fast can be Algebraic Attacks on Block Ciphers?, Courtois.
[BiSha90]  Differential Cryptanalysis of DES-like Cryptosystems, E. Biham & A. Shamir, Advances in Cryptology - CRYPTO 1990.
[Matsui92] A new method for known plaintext attack of FEAL cipher, M. Matsui & A. Yamagishi, EUROCRYPT 1992.
[NSA2007]  Just kidding ;-)
[SHOUP]    NTL library, V. Shoup, http://www.shoup.net/ntl/
[NIST2007] NIST, http://www.csrc.nist.gov/pki/HashWorkshop/index.html, 2007.

--[ Annexe A - Breaking the linearised version

8<- - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - -

/* Crappy C/C++ source. I'm in a hurry for the paper redaction so don't
 * blame me toooooo much please !
 * :> */
#include <iostream>
#include <fstream>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <strings.h>          /* bzero() */
#include <openssl/rc4.h>
#include <NTL/ZZ.h>
#include <NTL/ZZ_p.h>
#include <NTL/mat_GF2.h>
#include <NTL/vec_GF2.h>
#include <NTL/GF2E.h>
#include <NTL/GF2XFactoring.h>
#include "dpa.h"

using namespace NTL;

void S_E2 (unsigned char *key, unsigned char *block, unsigned int s)
{
  int i;
  for (i = 0; i < DPA_BLOCK_SIZE; ++i)
    block[i] ^= (key[i] ^ s) % 256;
  return;
}

void E2 (unsigned char *key, unsigned char *block, unsigned int shift)
{
  rbytechain (block);
  rbitshift (block, shift);
  S_E2 (key, block, shift);
}

void DPA_ecb_encrypt (DPA_KEY * key, unsigned char * src, unsigned char * dst)
{
  int j;
  memcpy (dst, src, DPA_BLOCK_SIZE);
  for (j = 0; j < 16; j++)
    E2 (key->subkey[j].key, dst, key->subkey[j].shift);
  return;
}

void affichage(unsigned char *chaine)   /* prints a 16-byte block in hex */
{
  int i;
  for(i=0; i<16; i++)
    printf("%.2x",(unsigned char)chaine[i]);
  printf("\n");
}

unsigned char test_p[] = "ABCD_ABCD_12____";
unsigned char test_c1[16];
unsigned char test_c2[16];
DPA_KEY key;
RC4_KEY rc4_key;

struct vect {
  unsigned char plaintxt[16];
  unsigned char ciphertxt[16];
};

struct vect toto[128];
unsigned char src1[16], src2[16];
unsigned char block1[16], block2[16];

int main()
{
  /* Key */
  unsigned char str_key[] = " _323DFF?FF4cxsdé&";
  DPA_set_key (&key, str_key, DPA_KEY_SIZE);

  /* Init our RANDOM generator */
  char time_key[16];
  snprintf(time_key, 16, "%d%d", (int)time(NULL), (int)time(NULL));
  RC4_set_key(&rc4_key, strlen(time_key), (unsigned char *)time_key);

  /* Let's crypt 128 pairs of plaintexts */
  printf("[+] Generating the plaintexts / ciphertexts\n");
  int i=0;
  int a=0;
  for(; i<128; i++)
  {
    RC4(&rc4_key, 16, src1, src1);   // Input is nearly random :)
    DPA_ecb_encrypt (&key, src1, block1);
    RC4(&rc4_key, 16, src2, src2);   // Input is nearly random :)
    DPA_ecb_encrypt (&key, src2, block2);
    for(a=0; a<16; a++)
    {
      toto[i].plaintxt[a]  = src1[a] ^ src2[a];
      toto[i].ciphertxt[a] = block1[a] ^ block2[a];
    }
  }

  /* Now the NTL stuff */
  printf("[+] NTL stuff !\n");
  vec_GF2 m2(INIT_SIZE,128);
  vec_GF2 B(INIT_SIZE,128);
  mat_GF2 M(INIT_SIZE,128,128);
  mat_GF2 Mf(INIT_SIZE,128,128);   // The final matrix !
  clear(Mf); clear(M); clear(m2); clear(B);

  /* Lets fill M correctly */
  int k=0;
  int j=0;
  for(k=0; k<128; k++)   // each row !
    for(i=0; i<16; i++)
      for(j=0; j<8; j++)
        M.put(i*8+j, k, (toto[k].plaintxt[i] >> j)&0x1);

  GF2 d;
  determinant(d,M);
  /* if !det then it means the vectors were linearly linked :'( */
  if(IsZero(d))
  {
    std::cout << "det(M) = 0\n";
    exit(1);
  }

  /* Let's solve the 128 systems :) */
  printf("[+] Calculation of Mf\n");
  for(k=0; k<16; k++)
    for(j=0; j<8; j++)
    {
      for(i=0; i<128; i++)
        B.put(i, (toto[i].ciphertxt[k] >> j)&0x1);
      solve(d, m2, M, B);
#ifdef __debug__
      std::cout << "m2 is " << m2 << "\n";
#endif
      int b=0;
      for(; b<128; b++)
        Mf.put(k*8+j, b, m2.get(b));
    }

#ifdef __debug__
  std::cout << "Mf = " << Mf << "\n";
#endif

  /* Now that we have Mf, let's make a test ;) */
  printf("[+] Let's make a test !\n");
  bzero(test_c1, 16);
  bzero(test_c2, 16);
  char DELTA_X[16];
  char DELTA_Y[16];
  bzero(DELTA_X, 16);
  bzero(DELTA_Y, 16);
  DPA_ecb_encrypt (&key, test_p, test_c1);

  // DELTA_X !
  unsigned char U0[] = "ABCDEFGHABCDEFG1";
  unsigned char Y0[16];
  DPA_ecb_encrypt (&key, U0, Y0);
  for(i=0; i<16; i++)
    DELTA_X[i] = test_p[i] ^ U0[i];

  // DELTA_Y !
  vec_GF2 X(INIT_SIZE,128);
  vec_GF2 Y(INIT_SIZE,128);
  clear(X); clear(Y);

  for(k=0; k<16; k++)
    for(j=0; j<8; j++)
      X.put(k*8+j, (DELTA_X[k] >> j)&0x1);

  Y = Mf * X;

#ifdef __debug__
  std::cout << "X = " << X << "\n";
  std::cout << "Y = " << Y << "\n";
#endif

  GF2 z;
  for(k=0; k<16; k++)
    for(j=0; j<8; j++)
    {
      z = Y.get(k*8+j);
      if(IsOne(z))
        DELTA_Y[k] |= (1 << j);
    }

  // test_c2 !
  for(i=0; i<16; i++)
    test_c2[i] = DELTA_Y[i] ^ Y0[i];

  /* Compare the two vectors */
  if(!memcmp(test_c1, test_c2, 16))
    printf("\t=> Well done boy :>\n");
  else
    printf("\t=> Hell !@#\n");

#ifdef __debug__
  affichage(test_c1);
  affichage(test_c2);
#endif

  return 0;
}
8<- - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - -

--[ Annexe B - Probability evaluation of (hash32()%8 == 0)

8<- - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - -
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "dpa.h"          /* checksum128() and hash32() prototypes */

#define NBR_TESTS 0xFFFFF

int main()
{
  int i = 0, j = 0;
  char buffer[16];
  int cmpt = 0;
  int rand = (time_t)time(NULL);
  float proba = 0;

  srandom(rand);
  for(; i<NBR_TESTS; i++)
  {
    for(j=0; j<4; j++)
    {
      rand = random();
      memcpy(buffer+4*j, &rand, 4);
    }
    checksum128 (buffer, buffer, 16);
    if(!(hash32(buffer,16)%8))
      cmpt++;
  }
  proba = (float)cmpt/(float)NBR_TESTS;
  printf("[+] Probability is around %f\n", proba);
  return 0;
}
8<- - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - -

--[ Annexe C - 2nd preimage attack on 32 bytes messages

8<- - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - -
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>       /* read(), close() */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include "dpa.h"

void E2 (unsigned char *key, unsigned char *block, unsigned int shift)
{
  rbytechain (block);
  rbitshift (block, shift);
  S_E (key, block, shift);
}

void DPA_ecb_encrypt (DPA_KEY * key, unsigned char * src, unsigned char * dst)
{
  int j;
  memcpy (dst, src, DPA_BLOCK_SIZE);
  for (j = 0; j < 16; j++)
    E2 (key->subkey[j].key, dst, key->subkey[j].shift);
  return;
}

void affichage(unsigned char *chaine)   /* prints a 16-byte block in hex */
{
  int i;
  for(i=0; i<16; i++)
    printf("%.2x",(unsigned char)chaine[i]);
  printf("\n");
}

int main(int argc, char **argv)
{
  DPA_KEY key;
  unsigned char str_key[] = "deadbeef";
  unsigned char IV[] = "0123456789abcdef";
  unsigned char evil_payload[] = "COMPLETELYBROKEN";
  unsigned char D0[16], D1[16];
  unsigned char final_message[32];
  int fd_r = 0;
  int i = 0;

  if(argc < 2)
  {
    printf("Usage : %s <file>\n", argv[0]);
    exit(EXIT_FAILURE);
  }

  DPA_set_key (&key, str_key, 8);

  if((fd_r = open(argv[1], O_RDONLY)) < 0)
  {
    printf("[+] Fuck !@#\n");
    exit(EXIT_FAILURE);
  }
  if(read(fd_r, D0, 16) != DPA_BLOCK_SIZE)
  {
    printf("Too short !@#\n");
    exit(EXIT_FAILURE);
  }
  if(read(fd_r, D1, 16) != DPA_BLOCK_SIZE)
  {
    printf("Too short 2 !@#\n");
    exit(EXIT_FAILURE);
  }
  close(fd_r);

  /* first 16 bytes of the forged message: the chosen payload M2(0) */
  memcpy(final_message, evil_payload, DPA_BLOCK_SIZE);

  /* E(M2(0) xor IV, K0) */
  blockchain(evil_payload, IV);
  DPA_ecb_encrypt (&key, evil_payload, evil_payload);

  /* E(M1(0) xor IV, K0) xor M1(1), from the original file */
  blockchain(D0, IV);
  DPA_ecb_encrypt (&key, D0, D0);
  blockchain(D0, D1);

  /* M2(1) as given by equation [A] */
  blockchain(evil_payload, D0);
  memcpy(final_message+DPA_BLOCK_SIZE, evil_payload, DPA_BLOCK_SIZE);

  for(i=0; i<DPA_BLOCK_SIZE*2; i++)
    printf ("%c", final_message[i]);
  return 0;
}
8<- - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - - -
{"url":"http://phrack.org/issues/64/10.html","timestamp":"2014-04-18T00:14:20Z","content_type":null,"content_length":"63502","record_id":"<urn:uuid:b13f4b62-6597-487a-a528-58423591f732>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
The triangle map: a model for quantum chaos (in collaboration with Brian Winn)

An Isaac Newton Institute Workshop: Random Matrix Theory and Arithmetic Aspects of Quantum Chaos

Author: Degli Esposti Mirko (Department of Mathematics, University of Bologna)

We intend to discuss some recent results concerning classical and semiclassical properties of a particular weakly chaotic discrete dynamical system.
{"url":"http://www.newton.ac.uk/programmes/RMA/Abstract4/degli.html","timestamp":"2014-04-19T02:01:18Z","content_type":null,"content_length":"2331","record_id":"<urn:uuid:2d01d6f7-21b7-4200-9e95-d69f91c1ab72>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
numerical analysis...

December 16th 2008, 11:46 PM  #1 (joined Dec 2008)

Derive the Adams-Bashforth (AB) method for the following problem using the interpolating polynomial of degree 2 (explicit multi-step method):

Y'(x) = f(x, Y(x)) on x > 0,  Y(0) = Y_0

Derive the Adams-Moulton (AM) method for the same problem using the interpolating polynomial of degree 2 (implicit multi-step method).

Can anyone help me with this problem?

January 14th 2011, 09:51 AM  #2 (Senior Member, joined Mar 2009)

I know this is late, but what you want to do is set up the following:

$y' = f(t,y) \Rightarrow \int_{t_k}^{t_{k+1}}\! y' \, dt = \int_{t_k}^{t_{k+1}}\! f(t,y) \, dt \Rightarrow y_{k+1} = y_k + \int_{t_k}^{t_{k+1}}\! f(t,y) \, dt$

For Adams-Bashforth, interpolate $f(t,y)$ at the points $t_{k-2}, t_{k-1}, t_{k}$ using the Lagrange interpolating polynomial and then integrate.

For Adams-Moulton, interpolate $f(t,y)$ at the points $t_{k-1}, t_k, t_{k+1}$ using the Lagrange interpolating polynomial and then integrate.
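For reference (these closed forms are standard textbook results, added here rather than taken from the original thread), carrying out the integration over $[t_k, t_{k+1}]$ with uniform step $h$ and $f_n = f(t_n, y_n)$ gives:

$y_{k+1} = y_k + \frac{h}{12}\left(23 f_k - 16 f_{k-1} + 5 f_{k-2}\right)$ (Adams-Bashforth, degree-2 interpolant)

$y_{k+1} = y_k + \frac{h}{12}\left(5 f_{k+1} + 8 f_k - f_{k-1}\right)$ (Adams-Moulton, degree-2 interpolant)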
{"url":"http://mathhelpforum.com/advanced-math-topics/65324-numerical-analysis.html","timestamp":"2014-04-17T20:52:05Z","content_type":null,"content_length":"32231","record_id":"<urn:uuid:0ed3ba8a-e97e-4b08-bc95-347e1f830d33>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Ratio of Apples to Pears to Oranges

Date: 09/13/2002 at 11:22:13
From: Monique
Subject: Ratios

If the ratio of apples to pears to oranges is 7:8:10, how many pears are there if the total number of fruits is 500? I don't even know where to begin. These ratio questions puzzle me.

Date: 09/13/2002 at 12:07:08
From: Doctor White
Subject: Re: Ratios

Monique -

If you had 7 apples, 8 pears, and 10 oranges, then you would have 25 fruits. Hence the ratio of apples to pears to oranges would be 7:8:10. You could create a chart that would eventually find your answer.

   apples   pears   oranges   fruits
      7        8       10        25
     14       16       20        50
     21       24       30        75
    ...      ...      ...       ...

Continuing the pattern, you can find your answer.

Another method is to set up an equation to find your answer. It could look like this:

   7x + 8x + 10x = 500

Now you can solve to find x, which is the constant. Then multiply by 7 to find the number of apples, by 8 to find the number of pears, and by 10 to find the number of oranges.

I hope both of these techniques help you to work with the type of problem that you are having trouble with. If you need further help with this then let me know. Come back to see us soon.

- Doctor White, The Math Forum

Date: 09/13/2002 at 14:41:30
From: Monique
Subject: Thank you (Ratios)

Thank you Dr. Math. That was extremely helpful!
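For completeness (this last step is implied but not worked out in the exchange above), the equation method gives:

   7x + 8x + 10x = 500
             25x = 500
               x = 20

so there are 8 x 20 = 160 pears (and 7 x 20 = 140 apples, 10 x 20 = 200 oranges; check: 140 + 160 + 200 = 500).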
{"url":"http://mathforum.org/library/drmath/view/61211.html","timestamp":"2014-04-20T01:50:01Z","content_type":null,"content_length":"6431","record_id":"<urn:uuid:4176b03b-5a96-41b1-8640-294aa81690e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
La Honda Calculus Tutor

Find a La Honda Calculus Tutor

...I look forward to tutoring for you! I took two semesters of organic chemistry and one semester of physical organic chemistry with an overall average of an A- grade. During my undergraduate junior and senior years, I was a designated tutor for my chemistry department in organic chemistry. The students who came in regularly and from the beginning saw the greatest gain.
24 Subjects: including calculus, chemistry, reading, statistics

...I tutored my classmates as well as middle school, high school and college students outside UCLA, in math (prealgebra to calculus at all levels, including multivariable, and more), and also physics at all levels including college and university, supporting myself financially that way. It was lots...
9 Subjects: including calculus, physics, geometry, ASVAB

...As the beneficiary, you will learn first how and then why the necessary basic knowledge and skills to ace any class in physics and math within three months. This is a guarantee, but as a must, you need to do your part, that is, to follow instructions, practice and retain what you have been taught...
15 Subjects: including calculus, physics, statistics, geometry

...I have an unmatched Math (with technical) background and know many issues of learning Math in general and in any particular class. I was admitted to the UCB Math PhD program (but chose Purdue's PhD for some reason). Some of my students eventually go to top colleges (UCB (>2), Davis, UOP). Others did l...
15 Subjects: including calculus, geometry, algebra 1, GRE

...My favorite subjects to tutor are high school math, physics, and Spanish. I worked as the Children's Programming Coordinator at the Center for the Homeless by Notre Dame for three years, where, among my many duties, I tutored students twice a week. I have also run an after school program for gr...
27 Subjects: including calculus, English, chemistry, reading
{"url":"http://www.purplemath.com/la_honda_calculus_tutors.php","timestamp":"2014-04-17T15:50:43Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:275da65a-6b31-49c6-b237-41aa18c9f6f6>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: [SI-LIST] : Macromodel Creation

From: Mellitz, Richard (richard.mellitz@intel.com)
Date: Mon Sep 18 2000 - 13:50:08 PDT

All models are behavioral and ultimately based on measurements. One behavioral model can be more accurate than another behavioral model if you put enough work into it. If you put more work in, it can even be relatively fast. Only time and money ... or is that momentum and position. :-)

Accuracy of a single simulation may be amusing and interesting. The application of accuracy is a statistical problem with many parameters, only one of which is the accuracy of a single simulation.

Richard Mellitz

-----Original Message-----
From: abe riazi [mailto:ariazi@serverworks.com]
Sent: Monday, September 18, 2000 3:10 PM
To: 'si-list@silab.eng.sun.com'
Cc: 'arpad.muranyi@intel.com'
Subject: RE: [SI-LIST] : Macromodel Creation

Thanks for your response.

When I wrote that SPICE transistor level models are most accurate but also most time consuming to simulate, I did not mean that it is "always" true; there can be exceptions, but it will hold true in many cases. Ron Kielkowski (Reference 1, pp. 5-7) presents good definitions and examples in support of this point:

1. TRANSISTOR LEVEL MODEL: "represents devices at the most basic simulation level possible. In many cases, the transistor-level model is the most accurate model possible for simulation. On the downside though, the transistor-level model also takes the most time to simulate."

2. MACROMODEL: "A macromodel is a collection of electrical components which form a simplified representation of the modeled circuit. Many macromodels contain dependent controlled sources to help simplify the structure of the model. Being simplified means that the macromodel is often easier to construct than the transistor level model, and the macromodel often simulates much faster than the transistor level model. But these two elements come at the expense of a small loss in accuracy."

3. BEHAVIORAL MACROMODEL: "The highest level in the modeling hierarchy is the behavioral macromodel. Behavioral macromodels contain a collection of ideal electrical or mathematical components which are used to describe a function of the circuit. Being at the top of the hierarchy means the behavioral model usually simulates faster than any other type of model, but often this increased speed comes from a loss in accuracy."

As an example, the transistor level model of an Op Amp can have about 19 transistors (plus some passive components), the macromodel of the Op Amp consists of only two transistors and four diodes (plus some passive components and dependent controlled sources), and the Op Amp behavioral model contains much simpler input and output blocks.

Based on the above definitions and examples, in many cases the transistor level models are the most complex (and the most accurate representation of the device), but at the price of being the most time consuming to simulate.

Best Regards,

-----Original Message-----
From: Muranyi, Arpad [mailto:arpad.muranyi@intel.com]
Sent: Monday, September 18, 2000 10:36 AM
To: 'abe riazi'; 'si-list@silab.eng.sun.com'
Subject: RE: [SI-LIST] : Macromodel Creation

I would like to comment on the three bullets you listed which put accuracy and speed into an inverse relationship regarding transistor level and behavioral models. Simply said, this general relationship is NOT TRUE.
You CAN model devices to an even higher level of accuracy behaviorally than on a transistor (SPICE) level if you like. It all depends on what parameters you use and what goes into the behavioral model. And this increased accuracy does not mean that your model will automatically get slower.

Take a transistor, for example. You can describe it with its geometry and the properties of the materials that it is made from. A SPICE tool then converts all that information to electrical characteristics. This takes a lot of equations and calculations. On the other hand, you can describe the same transistor's characteristics by providing its node voltage and current relationships directly (with tables, equations, transfer functions, etc...) which CAN reduce the number of calculations SPICE has to do, making it faster.

Now think about the underlying model equations SPICE uses when you do it the conventional SPICE way. You can have a LEVEL=3 or BSIM4 set of equations. Which one is more accurate? Most likely the BSIM4, since it is more recent. However, if your behavioral transistor model DOES describe something that even BSIM4 cannot, your behavioral model will be even more accurate. Yet this does not mean that it has to become automatically slower.

What I wanted to illustrate here is that the accuracy of the model depends on what goes into it. Its speed, however, depends on how the device is described. These two are not as strongly related as your three points suggest.

Arpad Muranyi
Intel Corporation

-----Original Message-----
From: abe riazi [mailto:ariazi@serverworks.com]
Sent: Friday, September 15, 2000 7:17 PM
To: 'si-list@silab.eng.sun.com'
Subject: [SI-LIST] : Macromodel Creation

Dear Scholars:

While visiting a Barnes & Noble bookstore in San Jose, I purchased a copy of "SPICE Practical Device Modeling" by Ron Kielkowski. What especially appealed to me about this publication was its high emphasis on model creation. In this book, SPICE models are classified according to a hierarchy which includes:

1. Transistor-level models (provide the highest accuracy, though most time consuming to simulate).
2. Macromodels.
3. Behavioral macromodels (fastest to simulate, but least accurate).

Most attention is devoted to macromodels, because they offer a practical level of accuracy (less than 5% rms error over the operating range) and can be created in a reasonable amount of time (less than eight hours). The procedure recommended by Ron Kielkowski for the construction of macromodels consists of the following steps:

i. Review the datasheet to obtain as much information related to model creation as possible (although, frequently, the majority of the information given in the datasheet has little value towards model generation).

ii. Utilize bench-top measurement equipment to produce I-V, C-V and Z-F curves.

iii. From the above data, extract the desired model parameters.

For a resistor, the macromodel elements consist of a nominal resistance Rnom and a parallel capacitance Cp; for an inductor, Lnom (nominal inductance), Rs (coil resistance) and Cp (winding capacitance); and for a capacitor, Cnom (an ideal capacitor), RL (leakage resistor), Ls (series inductor) and ESR (electrical series resistance). These macromodels are illustrated by Figure 1 (attached gif picture).

In this publication (Reference 1), the significance of impedance vs. frequency plots is emphasized, because:

a. Regarding the macromodel of a resistor, the |Z| vs. F graphs aid to ascertain Cp.
b. For inductor macromodels, they allow determination of the series resistance frequency (Frs) and the self resonating frequency (Fsrf), from which the values of Lnom and Cp can be calculated via simple formulas.

c. Considering the capacitor macromodel, several parameters can be extracted from the impedance vs. frequency curves, such as ESR (Rs), the lead inductance Ls (calculating Ls involves Fsrf, which can be obtained from the graph) and Cnom (the nominal capacitance can also be measured by means of a low frequency capacitance meter).

ESR and |Z| vs. F plots have been explained previously in this forum in relation to PCB power distribution systems, decoupling and bypass capacitors. They are also included here due to their significance towards macromodel generation. Figure 2 presents two examples of impedance vs. frequency graphs. Such plots can be created in a number of different ways; here, Microsoft Excel was employed. In each case the raw data consisted of three columns: current (I), voltage drop (V) and frequency (F). The Excel program calculated another data column (impedance Z = V/I) and produced the logarithmic impedance plots. Clearly, ESR strongly influences the shape of the |Z| vs. F curves.

Macromodels can be incorporated into SPICE simulation files as subcircuits, demonstrated by the example below:

Example 1. Encapsulation of a capacitor macromodel CMACRO, having parameters Cnom, RL, Ls and Rs (ESR).

In the circuit input file example.cir:

   X_MACRO 2 0 CMACRO

In the model file example.mod:

   .SUBCKT CMACRO 10 20
   Cnom 10 30 1000uF
   Rs   30 40 0.15ohms
   Ls   40 20 5nH
   RL   10 30 10meg
   .ENDS CMACRO

Use of macromodels instead of SPICE primitive models can significantly enhance the accuracy of a high frequency simulation and yield results in excellent agreement with physical measurements. Simulation of certain cases (such as high power circuits) requires taking into consideration effects due to temperature variations. Temperature dependent macromodels can be readily constructed (Reference 1).

To summarize, macromodels assume an intermediate position in the hierarchy of SPICE models, in the sense that they are below the transistor-level models in accuracy and rank second to behavioral models in simulation speed. They are popular because they are practical; i.e., they can be created in a reasonable amount of time with an error margin tolerable in many applications. Impedance vs. frequency plots play a critical role in the creation of macromodels of passive components. These models can be inserted into SPICE input files as subcircuits. Simulations utilizing macromodels yield superior results compared to ideal SPICE primitives, particularly in the high frequency domain.

Reference 1. R. M. Kielkowski, "SPICE Practical Device Modeling", McGraw-Hill, Inc., 1995.

Thanks for your comments and with best regards,

Abe Riazi
2251 Lawson Lane
Santa Clara, CA 95054
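A minimal numerical sketch of the capacitor extraction in items c and Example 1 above (added for illustration and not from the original post; it assumes Cnom is known from a low frequency measurement, takes the ESR as |Z| at the resonance minimum, and uses the series-resonance relation Fsrf = 1/(2*pi*sqrt(Ls*Cnom))):

```c
/* Sketch only: capacitor macromodel parameters from |Z| vs. F features.
 * Assumed inputs: Cnom (low frequency capacitance meter), Fsrf (frequency
 * of the |Z| minimum) and Zmin (|Z| at Fsrf, taken as the ESR). */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    double Cnom = 1000e-6;  /* F  : nominal capacitance (measured)        */
    double Fsrf = 71.2e3;   /* Hz : self resonant frequency (from graph)  */
    double Zmin = 0.15;     /* ohm: |Z| at Fsrf, i.e., the ESR            */

    double w  = 2.0 * M_PI * Fsrf;
    double Ls = 1.0 / (w * w * Cnom);   /* series (lead) inductance */

    printf("ESR = %g ohm, Ls = %g H\n", Zmin, Ls);
    return 0;
}
```

With these example inputs, the sketch recovers values close to those used in Example 1 (Rs = 0.15 ohms and Ls = 5 nH for Cnom = 1000 uF).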
{"url":"http://www.qsl.net/wb6tpu/si-list6/0244.html","timestamp":"2014-04-17T07:29:29Z","content_type":null,"content_length":"16702","record_id":"<urn:uuid:0ab70ca8-60a5-44ea-a737-244c507e29ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Representation theory and totally symmetric ground state?

Ok thanks. I think I have understood how to find the selection rules. Could you explain why the vibrational modes only describe the excited modes and not the ground state? I guess it is like that because A1g is not contained in the vibrational modes of graphite (2 * E1u + 2 * E2g + 2 * A2u + 2 * B1g). I think it is logical, as you say, that the ground state is fully symmetrical (I guess you could think of it as all the atoms being at their equilibrium sites, maybe with some caution), but as I said, I'm not quite sure why the ground state is not in the problem. Let me explain:

If I count the dimensions of the representations 2 * E1u + 2 * E2g + 2 * A2u + 2 * B1g, I get 12, which I expected because there are 4 atoms in the unit cell, each with three degrees of freedom. The representations tell me that I can divide these eigenfunctions into sets that transform among each other, but the fully symmetric representation is not present. That is, none of the eigenfunctions of the problem has the full symmetry of the problem, and hence none is the ground state. This is what I mean when I say that the ground state seems to have to be treated separately. I guess somehow the representation theory approach only describes excited modes, but I fail to see why that is.

Hope that explains my problem a bit more clearly.
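As a quick check of the dimension count quoted above (using only the standard Mulliken conventions, in which E irreps are two-dimensional while A and B irreps are one-dimensional):

$2 \cdot 2 + 2 \cdot 2 + 2 \cdot 1 + 2 \cdot 1 = 4 + 4 + 2 + 2 = 12$

which matches the $4 \times 3 = 12$ degrees of freedom of the unit cell.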
{"url":"http://www.physicsforums.com/showpost.php?p=3804951&postcount=3","timestamp":"2014-04-20T00:51:25Z","content_type":null,"content_length":"8445","record_id":"<urn:uuid:7f8071b8-f87e-4c12-a5a8-9f7e47ba724b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Sensors 2009, 9(9), 7250-7265; doi:10.3390/s90907250

Sensors (ISSN 1424-8220), Molecular Diversity Preservation International (MDPI)

Article

Bistatic Radar Configuration for Soil Moisture Retrieval: Analysis of the Spatial Coverage

Nazzareno Pierdicca 1, Ludovico De Titta 1, Luca Pulvirenti 1,* and Giuliano della Pietra 2

1 Department of Electronic Engineering, Sapienza University of Rome, via Eudossiana 18, 00184 Rome, Italy; E-Mail: pierdicca@die.uniroma1.it (N.P.)
2 Space Engineering S.p.A., 91, Via dei Berio, Rome 00155, Italy

* Author to whom correspondence should be addressed. E-Mail: pulvirenti@die.uniroma1.it.

Received: 21 August 2009 / Revised: 8 September 2009 / Accepted: 9 September 2009 / Published: 10 September 2009

© 2009 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: Some outcomes of a feasibility analysis of a spaceborne bistatic radar mission for soil moisture retrieval are presented in this paper. The study starts from the orbital design of the configuration suitable for soil moisture estimation identified in a previous study. This configuration is refined according to the results of an analysis of the spatial resolution. The paper focuses on the assessment of the spatial coverage, i.e., on the verification that an adequate overlap between the footprints of the antennas is ensured, and on the duty cycle, that is, the fraction of the orbital period during which the bistatic data are acquired. A non-cooperating system is considered, in which the transmitter is the C-band Advanced Synthetic Aperture Radar aboard Envisat. The best performances in terms of duty cycle are achieved if the transmitter operates in Wide Swath Mode. The higher resolution Image Swath Modes that comply with the selected configuration have a duty cycle that is never less than 12% and can exceed 21%. When Envisat operates in Wide Swath Mode, the bistatic system covers a wide latitude range across the equator, while in some of the Image Swath Modes, the bistatic measurements, collected from the same orbit, cover mid-latitude areas. In the latter case, it might be possible to achieve full coverage in an Envisat orbit repeat cycle, while, for a very large latitude range such as that covered in Wide Swath Mode, bistatic acquisitions could be obtained over about 65% of the area.

Keywords: bistatic radar; sensors configuration; soil moisture; mission analysis

1. Introduction

A spaceborne bistatic radar system is defined when antennas for reception and transmission are physically separated and located aboard two spacecraft. The system could be either cooperating (transmitter and receiver designed for the specific bistatic application) or non-cooperating (receiver designed independently of the transmitter) [1]. Non-cooperating systems can use already-existing satellite instruments, such as Synthetic Aperture Radars (SARs), as transmitters (or illuminators) [2]. Bistatic applications were investigated in the fields of target detection [3,4] and planetology [5]. Concerning Earth Observation, the specular reflection from navigation satellites was exploited for scatterometric applications over the oceans [6]. The possibility to envisage bistatic configurations for SAR interferometry was studied in [7,8].
Recently, experiments involving the X-band have been carried out (e.g., [9-11]) to assess the additional information contained in the bistatic reflectivity of targets. As for land applications, Ceraldi et al. [12] demonstrated the potential to detect the signal scattered in the specular direction for retrieving bare soil parameters. However, they outlined that measuring the coherent component of the scattered signal by means of bistatic radars is problematic, because the spatial resolution becomes very poor around the specular direction, and ambiguities between surface points before and after the specular point occur [13]. Zavorotny and Voronovich stated, in a brief report [14], that, considering a GPS signal impinging on a rough land surface, soil moisture is related to the ratio between horizontally and vertically polarized scattered waves.

In a previous work [2], we reported on a theoretical investigation aiming at identifying the best bistatic measurement configuration, in terms of incidence angle, observation direction, polarization, and frequency band, for soil moisture content (SMC) retrieval. In addition, we evaluated the improvement of the estimation accuracy with respect to the conventional backscattering measurements. While in [2] we did not tackle the problem of verifying the identified radar configurations from a technical point of view, this work deals with spatial resolution, spatial coverage and duty cycle (i.e., the fraction of the orbit in which suitable bistatic data collection is ensured). It aims at evaluating whether an adequate overlap between the footprints of the transmitting and receiving antennas is ensured for the bistatic observation geometry that guarantees both an improvement of the quality of the SMC retrieval and a fairly good spatial resolution.

We point out that investigations on the design of a spaceborne mission implementing a configuration of bistatic radars devoted to specific environmental applications are generally lacking in the literature. Moreover, the literature generally deals with fixed bistatic configurations (i.e., the static design) without considering the orbital dynamics, whereas the feasibility of maintaining the selected configuration of sensors along the orbit of the spacecraft on which the receiver is installed, while ensuring good performances in terms of operational parameters such as spatial coverage, is a crucial aspect. In the present study, the nominal bistatic mission is considered, so that problems such as orbit and attitude control are not tackled, thus implicitly assuming that the passive system is periodically controlled in order to maintain the designed bistatic formation.

In Section 2, the findings regarding the bistatic configurations most suitable for SMC retrieval are summarized and the selection of one configuration, based on the evaluation of the spatial resolution, is described. In Section 3, the orbit design approach is depicted, while, in Section 4, the results of the analysis of the spatial coverage are presented and discussed. Section 5 draws the main conclusions.

2. Selection of a Bistatic Configuration

Bistatic configurations are defined in terms of frequency, polarization, and transmitter-target-receiver relative geometry.
This geometry is shown in the left panel of Figure 1, in which the incident plane is assumed as a reference coordinate plane, and the configuration is identified through the incidence angle (θ_i) with respect to the vertical, the zenith (θ_s) and azimuth (φ_s) scattering angles, and their acceptable ranges to be compatible with the considered application.

2.1. Configurations Suitable for Soil Moisture Retrieval

In [2], we adopted a well-established electromagnetic surface scattering model, the Advanced Integral Equation Model (AIEM) [15], to numerically simulate the bistatic measurements, for the purpose of identifying the configurations suitable for soil moisture estimation. We underline that the AIEM model has a large range of validity, so that the study described in [2] was carried out under different conditions: L-, C- and X-bands were considered, both Gaussian and exponential autocorrelation functions were adopted, and the parameters characterizing the surface roughness, i.e., the standard deviation of heights s and the correlation length l, varied within fairly standard intervals encompassing most of the typical agricultural fields in bare soil conditions. The results we obtained can therefore be considered fairly general.

The findings in [2] indicate that, especially at C-band, the quality of soil moisture retrievals improves by complementing bistatic measurements with monostatic ones provided by already-existing SARs. This could be expected because, for bistatic systems, the detected field is determined by a different behavior of the scattering with respect to the monostatic case, and in the specular configuration also by a coherent scattering mechanism as opposed to a diffuse one, thus providing independent information on the target parameters. The need to take advantage of various sources of information for tackling the complex and usually ill-posed problem of SMC retrieval was indeed pointed out in numerous literature studies (e.g., [16], where polarimetric data were used).

The range of valuable scattering directions for a C-band radar singled out in [2] is sketched in the right panel of Figure 1. Observations around nadir, at θ_s below 4°-10°, form a cone (i.e., φ_s can assume any value) whose vertical axis corresponds to the z-axis (in a Cartesian reference system), while the vertex is at the origin. Furthermore, for φ_s less than 40°-50°, the zenith scattering angle can be increased indefinitely, thus including the specular direction and some grazing observations. The illuminator should observe the Earth with a fairly small incidence angle, approximately between 15° and 35°. In [2] it was demonstrated that, at C-band, the standard deviation of the SMC retrieval error can be reduced by up to a factor of 3 with respect to that achievable with monostatic observations, by integrating backscattering and bistatic measurements.

2.2. Spatial Resolution Analysis

The ground range and azimuth resolutions could be very poor in some bistatic configurations, thus implying a bad radar image quality. For instance, the ground range resolution is critical in the specular configuration [1,12]. Based on these considerations, we have performed an evaluation of the bistatic spatial resolutions in order to put an additional constraint on the sensor configurations selected in [2]. It is worth noting that no adequate model has been found in the literature for such an exercise, because only a few cases addressed the spaceborne receiver configuration. In [1], this matter was analyzed, but only for a two-dimensional (2-D) configuration restricted to the bistatic plane.
General considerations and formulas valid for the 3-D case were provided in [13]. Starting from the relationships found in [13], we have derived the formulas for evaluating the ground range (ρ_gr) and azimuth (ρ_a) bistatic resolutions, under the following assumptions: i) flat and fixed Earth; ii) satellites orbiting at the same height (and therefore flying with the same velocity); iii) equal coherent integration times. Then, we have normalized the bistatic resolutions to the monostatic ones, obtaining equations depending on the relative geometry of the satellites carrying the sensors and independent of orbital and antenna parameters. In this section, we do not report all the mathematical manipulations, for the sake of conciseness, but only the result. More details can be found in the Appendix. The final equations are the following:

$$\frac{\rho_{gr}}{\rho_{gr}^{back}} = \frac{2\sin\theta_i}{\sqrt{\sin^2\theta_i + \sin^2\theta_s - 2\sin\theta_i \sin\theta_s \cos\varphi_s}} \quad (1)$$

$$\frac{\rho_a}{\rho_a^{back}} = \frac{2\cos\theta_i}{F(\theta_i, \theta_s, \varphi_s, \delta_T, \delta_R)} \quad (2a)$$

where:

$$F(\theta_i, \theta_s, \varphi_s, \delta_T, \delta_R) = \Big\{ \cos^2\theta_i \left( \cos^4\theta_i \cos^2\delta_T + \sin^2\delta_T \right) + \cos^2\theta_s \left[ 1 - \sin^2\theta_s \left( 1 + \cos^2\theta_s \right) \cos^2(\varphi_s - \delta_R) \right] + 2\cos\theta_i \cos\theta_s \left[ \cos^2\theta_i \cos\delta_T \cos\delta_R + \sin\delta_T \sin\delta_R - \sin^2\theta_s \left( \cos^2\theta_i \cos\delta_T \cos\varphi_s + \sin\delta_T \sin\varphi_s \right) \cos(\varphi_s - \delta_R) \right] \Big\}^{1/2} \quad (2b)$$

In Equations (1) and (2a), the back superscript indicates the spatial resolutions computed for the conventional monostatic case (i.e., the backscattering measurement), and δ_T and δ_R are the angles formed by the satellite velocity vectors with the incidence plane, i.e., the complements of the angles they form with the unit vector normal to that plane (see Appendix).

Starting from the above formulas, and supposing that the satellites' velocity vectors are directed normally to the incidence plane (i.e., δ_T = δ_R = 90°), maps of ρ_gr/ρ_gr^back and ρ_a/ρ_a^back have been generated as functions of θ_s (in the range [0°-60°]) and φ_s (in the range [0°-180°]), for a fixed value of θ_i. Figure 2 shows the maps for the minimum and maximum values of θ_i recommended for estimating soil moisture (i.e., 15° and 35°). By observing Figure 2, it can be noted that the ground range resolution is more critical than the azimuth one. The latter does not exceed 2ρ_a^back, while ρ_gr can be several times larger than ρ_gr^back, especially near the specular direction (as expected); a small numerical check of these ratio formulas is sketched at the end of this section. As an additional constraint on the bistatic sensor configurations selected in [2], we have chosen ρ_gr < ρ_gr^back, thus finding that φ_s must be limited to the interval [90°-270°], that is, to the backward quadrant (see Figure 1, right panel, and the Appendix). We have therefore considered the following ranges for the zenith and azimuth scattering angles: θ_s ∈ [0°-8°] and φ_s ∈ [90°-270°].

According to the chosen frequency (i.e., C-band), we have made reference to the ASAR instrument onboard Envisat. The real operative Image Swath Modes (ISMs) and Wide Swath Mode (WSM) have been taken into account for the analysis. ISMs guarantee a fairly high resolution, but have a smaller near and far range capability, which implies worse monostatic (and thus bistatic) spatial coverage. WSM presents opposite characteristics. Table 1 reports some parameters of the ASAR illuminator in Image and Wide Swath modes. Note that, among the seven Envisat/ASAR Image Swath Modes, only four were selected, because the other ones observe the Earth with incidence angles larger than 35°, which are not compatible with our requirements for SMC retrieval.
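The following minimal C sketch (added here for illustration; it is not code from the original study) evaluates the ratios of Equations (1) and (2a)-(2b) numerically, assuming δ_T = δ_R = 90° as in the Figure 2 maps and sampling the backward quadrant (φ_s = 180°) selected above:

```c
/* Sketch: normalized bistatic resolutions of Equations (1) and (2a)-(2b),
 * evaluated for delta_T = delta_R = 90 deg (velocity vectors normal to the
 * incidence plane). Angles are passed in radians. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define DEG2RAD (M_PI / 180.0)

/* Equation (1): rho_gr / rho_gr^back */
static double gr_ratio(double ti, double ts, double ps)
{
    double d = sin(ti)*sin(ti) + sin(ts)*sin(ts)
             - 2.0*sin(ti)*sin(ts)*cos(ps);
    return 2.0*sin(ti) / sqrt(d);
}

/* Equations (2a)-(2b): rho_a / rho_a^back */
static double az_ratio(double ti, double ts, double ps, double dT, double dR)
{
    double t1 = cos(ti)*cos(ti)
              * (pow(cos(ti), 4.0)*cos(dT)*cos(dT) + sin(dT)*sin(dT));
    double t2 = cos(ts)*cos(ts)
              * (1.0 - sin(ts)*sin(ts)*(1.0 + cos(ts)*cos(ts))
                     * pow(cos(ps - dR), 2.0));
    double t3 = 2.0*cos(ti)*cos(ts)
              * (cos(ti)*cos(ti)*cos(dT)*cos(dR) + sin(dT)*sin(dR)
                 - sin(ts)*sin(ts)
                   * (cos(ti)*cos(ti)*cos(dT)*cos(ps) + sin(dT)*sin(ps))
                   * cos(ps - dR));
    return 2.0*cos(ti) / sqrt(t1 + t2 + t3);
}

int main(void)
{
    double ti = 35.0*DEG2RAD;     /* maximum incidence angle for SMC */
    double dT = 90.0*DEG2RAD, dR = 90.0*DEG2RAD;
    for (double ts = 1.0; ts <= 8.0; ts += 1.0)   /* selected theta_s range */
        printf("theta_s = %2.0f deg: gr ratio = %.3f, az ratio = %.3f\n",
               ts, gr_ratio(ti, ts*DEG2RAD, 180.0*DEG2RAD),
                   az_ratio(ti, ts*DEG2RAD, 180.0*DEG2RAD, dT, dR));
    return 0;
}
```

In this backward-quadrant sample, both ratios stay below 2, consistent with the ρ_gr < ρ_gr^back constraint adopted above.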
3. Orbit Design

We have firstly designed the orbit of the receiver by choosing its observation geometry at the equator based on the requirements for SMC estimation (i.e., the static design), and then the orbits of both platforms have been propagated to evaluate the antenna footprint superposition, i.e., the spatial coverage of the bistatic acquisitions. Note that the time when the passive satellite is over the equator has been assumed as the initial one [17].

We have made a number of assumptions. Firstly, the bistatic system is non-cooperative, so that the orbit, attitude (including yaw-steering) and antenna pointing of the active system are not subordinated to the bistatic acquisition requirements. This is of course a worst case assumption, which aims at using an existing radar as a source of opportunity. Secondly, the transmitting antenna is right-looking. Moreover, the satellite carrying the receiver (hereafter also denoted as the passive satellite) flies in formation with that carrying the illuminator (the active one), on a parallel orbit, thus establishing a "parallel orbit pendulum" configuration, in which the platforms move along orbits with the same inclination, but different ascending nodes. In principle, the "leader-follower" configuration, in which the spacecraft fly on the same orbit but with different crossing times of the ascending node, would also be possible. However, it does not allow any of the bistatic observation geometries previously selected, if a non-cooperative bistatic system and a side-looking illuminator are assumed. Another hypothesis we have made for the static design of the orbits is that the receiver performs bistatic acquisitions only along the ascending pass, since the crossing of the orbits near the poles makes the transmitter illuminate out of the footprint of the receiver. Note that a possible left/right looking capability of the receiver would dramatically change the bistatic angles with respect to the required ones, whereas only a cooperative transmitter with left/right looking capability would enable valuable data acquisition in the descending pass.

The orbit of a spacecraft is described by means of the well-known six Keplerian parameters: semi-major axis, eccentricity, inclination, right ascension of the ascending node (Ω), perigee argument, and mean anomaly (M). We have considered Envisat as the active satellite, so that its orbital parameters are fixed. The passive satellite semi-major axis, eccentricity, and inclination have been chosen coincident with those of the active one, in order to reduce orbit maintenance operations. In addition, the same perigee argument has been considered, to minimize the instantaneous satellite velocity differences and, in turn, the along-track relative displacements [17]. The remaining design parameters (Ω and M) define the difference between the orbits of the two satellites. The differences between the ascending node right ascensions (ΔΩ) and the mean anomalies (ΔM), i.e., the relative position of the satellites (shown in Figure 3), have been computed by means of the formulas provided in [17]. ΔM has been chosen in order to enable the passive satellite to be located in the incidence plane of the illuminator at the initial time, thus fixing the observation geometry when the receiver is over the equator. Such a condition corresponds to φ_s = 180° in this case.
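A minimal sketch of this element-level relationship (illustrative only, with placeholder numbers; the actual ΔΩ and ΔM values come from the formulas of [17], which are not reproduced here):

```c
/* Sketch: the receiver copies the transmitter Keplerian elements except
 * for offsets in ascending node and mean anomaly (parallel orbit pendulum). */
#include <stdio.h>

typedef struct {
    double a;      /* semi-major axis [km]                     */
    double e;      /* eccentricity                             */
    double i;      /* inclination [deg]                        */
    double Omega;  /* right ascension of ascending node [deg]  */
    double omega;  /* perigee argument [deg]                   */
    double M;      /* mean anomaly [deg]                       */
} Kepler;

static Kepler receiver_orbit(Kepler tx, double dOmega, double dM)
{
    Kepler rx = tx;       /* same a, e, i and perigee argument        */
    rx.Omega += dOmega;   /* parallel orbit: shifted ascending node   */
    rx.M     += dM;       /* phased so the receiver sits in the
                             illuminator incidence plane at t0        */
    return rx;
}

int main(void)
{
    /* illustrative sun-synchronous elements, NOT the actual Envisat ephemeris */
    Kepler envisat = { 7160.0, 0.001, 98.55, 0.0, 90.0, 0.0 };
    Kepler rx = receiver_orbit(envisat, -4.0, 3.0);  /* hypothetical offsets */
    printf("dOmega = %g deg, dM = %g deg\n",
           rx.Omega - envisat.Omega, rx.M - envisat.M);
    return 0;
}
```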
This direction has to be comprised within the azimuth aperture of both the transmitting and receiving antennas. To calculate ΔΩ and ΔM, we have assumed θ[i] equal to the maximum among those suggested in [2] (i.e., 35°), in order to increase the range of latitudes for bistatic acquisitions [17]. As for θ[s] at the initial epoch, we have firstly considered that the receiver should be in the backward quadrant because of the constraint we have imposed for spatial resolution. Then we have made the hypothesis that, for a soil moisture application, the minimum bistatic swath should be 10 km. By choosing θ[s] = 0° at the initial epoch, the time interval during which θ[s] is in a useful range (i.e., [0°–8°]) when the orbits are propagated is maximized (note that at the equator the baseline is at its maximum, as will be discussed in Section 4). However, looking at Figure 4, that shows the observation geometry at the initial time, it can be noted that such a choice would imply a very small bistatic swath at the equator (theoretically, only one point), because for θ[s] = 0° the receiver is just above the boundary of the area illuminated by Envisat, with the maximum allowable zenith angle (i.e., 35°), and cannot accomplish a forward observation. For the same reason, the ISMs whose far range looking angle is less than 35° (see Table 1) do not perform bistatic acquisitions over the equator, as will be shown in Section 4. The receiving near range point (NR[RX]) has to be located within the area illuminated by the transmitting antenna (see Figure 4) and its distance to the transmitter far range point (FR[TX]) cannot be less than 10 km. Hence, the best choice for the initial θ[s] turned out to be 1°, which is (approximately) the minimum zenith scattering angle for which the overlapping between the footprints of the antennas occurs with the selected minimum width of the bistatic swath. Table 2 reports the selected configuration and the orbital parameters at the initial epoch, that is the static design. The Satellite Tool Kit (available at: http://www.agi.com) has been successively used to propagate the orbits, taking into account only the orbital perturbations due to the J[2] geo-potential harmonic, to maintain a heliosynchronous orbit and assuming negligible the orbital decay (i.e., the effect of the differential aerodynamic drag), as done in [18]. To produce coverage data, an appropriate software tool, which accounts for spacecraft propagated orbits, yaw-steering maneuvers of both spacecraft, sensors pointing geometry, Earth rotation and estimates the targeted area, has been developed. At each time point of the simulation, whose overall period is 35 days, i.e., the orbit repeat cycle of Envisat, the software evaluates the area of the Earth surface observed simultaneously by the two antennas on the basis of their aperture angles and of the satellites’ positions. The portion of this area which is actually observed according to the observation geometry selected in section 2, i.e., θ[I] < 35°, θ[s] ∈ [0°−8°] and φ[s] ∈ [90°−270°], is the area target. To perform the simulation, we have considered the transmitter able to illuminate according to the real access capabilities of the reference mission (i.e., Envisat, see Table 1). As for the receiving antenna, we have assumed a main lobe azimuth aperture of 6° (see [17]) which is capable to point toward the area illuminated by the transmitter by means of attitude maneuvers or electronic steering. 
Electronic steering does not ensure the zero Doppler condition of bistatic measurements, but its purpose here is the superposition of the antenna footprints.

4. Discussion of the Outcomes

Table 3 reports the results. It can be noted that WSM presents the best results in terms of duty cycle (~40%), thanks to its greater access capability. Higher resolution modes achieve duty cycles of about 12% at minimum (11.9%, precisely), which can exceed 21%; this is still an acceptable performance, considering that we are in the hypothesis of a fully non-cooperative system, i.e., the maximum duty cycle is 50% for a fixed looking side of the transmitting antenna. It is worth noting that we have decided to discard the swaths smaller than 10 km in the computation of the fraction of the orbital period during which the bistatic data are acquired, and this choice implies an overall decrease of the resulting duty cycle. ISM1 and WSM reach the highest latitudes (almost 72° in both geographic hemispheres). In addition, the far range capability of the considered transmitter operating mode limits the latitude range of bistatic acquisitions. As stated when discussing Figure 4, since the passive satellite orbit is defined according to the maximum incidence angle identified for the application (i.e., 35°) during the static design, the operational modes ISM1, ISM2 and ISM3 do not perform bistatic acquisitions at low latitudes, but focus on mid-latitude belts.

Figure 5 shows the Earth maps with the acquired targeted areas (red dots) for WSM (upper panel), ISM4 (central panel) and ISM2 (lower panel). The ground tracks of the active and passive satellites are plotted in green and blue, respectively. Although our simulation considers the entire Envisat orbit repeat cycle (35 days), Figure 5 regards a time window of one week, for the sake of clarity. The different latitude bands covered in WSM, ISM4 and ISM2 are clearly visible.

To quantify the spatial coverage of the bistatic measurements, we have evaluated the ratio between the sum of the areas of the bistatic targets and the area of the Earth surface within the covered latitude range. Such a computation sums up all the areas imaged during a complete Envisat cycle of 35 days, including also zones already observed in previous orbits. This result is therefore an estimate of the fractional coverage that does not ensure that every point within a specific latitude range is actually covered by a bistatic acquisition, but that can indicate whether full coverage could be potentially obtained in such a range. The ratio is minimum for ISM4 and is on the order of 65% for WSM, while for ISM1 and ISM2 our evaluation indicates that full coverage might be achieved within 35 days.

Finally, Figure 6 shows the trends of the baseline absolute value |B| (the distance between the satellites) and of its components (B_x, B_y, B_z) throughout one orbit, which we obtained as a consequence of our design. The baseline is evaluated in the orbital reference frame centered at the center of mass of the active satellite, with the x-axis toward its instantaneous velocity vector, the y-axis normal to the orbital plane, and the z-axis in the direction of the Earth center. The distance between the satellites achieves its maximum value at the equator and its minimum near the poles. B_y represents the main component of B and changes its sign at the orbit intersections near the poles, when the satellites change their relative orientation, while it achieves its maximum value at the equator. B_z also goes to zero at the orbit intersections, and the minimum baseline at the orbit crossing, i.e., the safety distance between the satellites, has turned out to be equal to 35.5 km, a result similar to that found in [18], in which Envisat was also used as the active satellite. Observing Figure 6, it can be noted that the maximum value of the baseline (at the equator) is slightly larger than 500 km.

5. Conclusions

Some outcomes of a coverage analysis of a spaceborne bistatic mission for SMC estimation have been presented. The study has started from the identification of the bistatic configurations suitable for SMC retrieval, in terms of frequency and ideal transmitter-target-receiver relative geometry, accomplished in a previous study. An evaluation of the spatial resolution of the bistatic system has been carried out first. It has led us to restrict our analysis to observations around nadir and in the backward quadrant. Then, the study has assessed the feasibility of the nominal mission in terms of spatial coverage and duty cycle, assuming a non-cooperative system with Envisat/ASAR as the illuminator. We have shown that the Wide Swath Mode has the best duty cycle (~40%). Higher resolution image modes yield acceptable results in terms of this parameter (never less than 12%), considering that, under our hypothesis of a fully non-cooperative system, the maximum achievable value is 50%. It has been found that, for high resolution modes, the minimum and maximum latitudes do not necessarily identify one latitude belt, so that their setting may allow focusing on a particular geographic region, such as mid-latitude areas. In a selected latitude band, it might be possible to achieve full coverage within the Envisat orbit repeat cycle (35 days), while for a larger latitude range, almost covering the entire planet, this is unfeasible.

The improvement of soil moisture retrieval that we have demonstrated, in a previous work, to be achievable through a bistatic mission may actually be obtained by designing a simple and cheap (even in terms of power requirements) receiver that flies in formation with a standard C-band SAR. Combining the active monostatic with the bistatic measurements can strengthen the contribution of single frequency radar observations in many disciplines needing an evaluation of soil moisture, such as hydrology, agriculture and climate change monitoring. The methodology, based on a study of the sensitivity of bistatic radars to a geophysical target parameter followed by a system performance analysis, can be useful for other applications (e.g., vegetation biomass retrieval) and for future missions. For instance, this approach can provide support to a possible employment of a passive receiver flying in formation with the forthcoming European Space Agency (ESA) Sentinel satellites. In particular, the wide swath capability of the Sentinel-1 C-band SAR is expected to improve the coverage performances of the bistatic mission with respect to those computed for Envisat.

Appendix: Spatial Resolution

In this section, Equations (1) and (2) are derived starting from the relationships found in [13]. The hypotheses of flat Earth and same satellite heights hold here. We make reference to the geometry shown in Figure 7.

Figure 7. Geometry for evaluating the bistatic spatial resolution. T, R and P represent the positions of transmitter, receiver and target, respectively.
By assuming the incidence plane as a reference, let us consider a three-dimensional Cartesian coordinate system in which the (x,z)-plane is coincident with the incidence plane and the (x,y)-plane is considered to be the (flat) Earth surface. Let us fix: i) the transmitter position T(x_T, y_T, H) in such a reference frame; ii) the incidence angle θ_i; iii) the elevation θ_s and azimuth φ_s scattering angles. Then, we can determine, as functions of these parameters: iv) the ground target position P(x_T + H tanθ_i, 0, 0); v) the receiver position R(x_T + H tanθ_i + H tanθ_s cosφ_s, y_T + H tanθ_s sinφ_s, H). Denoting by x_0, y_0 and z_0 the versors (unit vectors) of the considered Cartesian system, the vectors directed from the transmitting (u_T) and receiving (u_R) antennas to the target are (see Figure 7):

$$\mathbf{u}_T = H\tan\theta_i\,\mathbf{x}_0 - H\,\mathbf{z}_0 \quad (3)$$

$$\mathbf{u}_R = -H\tan\theta_s\cos\varphi_s\,\mathbf{x}_0 - H\tan\theta_s\sin\varphi_s\,\mathbf{y}_0 - H\,\mathbf{z}_0 \quad (4)$$

The transmitter-to-target and receiver-to-target ranges are the moduli of u_T and u_R, respectively:

$$|\mathbf{u}_T| = H/\cos\theta_i \quad (5)$$

$$|\mathbf{u}_R| = H/\cos\theta_s \quad (6)$$

Now we can consider the relationship for the ground range resolution proposed in [13]:

$$\rho_{gr} = \frac{c}{2 B_W |\Gamma_{xy}\mathbf{u}_{eff}|} \quad (7)$$

where c denotes the velocity of light, B_W is the pulse bandwidth and Γ_xy is the projector onto the horizontal plane. 2u_eff is defined as [13]:

$$2\mathbf{u}_{eff} = -\left(\frac{\mathbf{u}_T}{|\mathbf{u}_T|} + \frac{\mathbf{u}_R}{|\mathbf{u}_R|}\right) \quad (8)$$

Accounting for Equations (3)-(6), it turns out that:

$$2\mathbf{u}_{eff} = (\sin\theta_s\cos\varphi_s - \sin\theta_i)\,\mathbf{x}_0 + \sin\theta_s\sin\varphi_s\,\mathbf{y}_0 + (\cos\theta_i + \cos\theta_s)\,\mathbf{z}_0 \quad (9)$$

The application of the Γ_xy operator leads to the following relationship:

$$2\Gamma_{xy}\mathbf{u}_{eff} = (\sin\theta_s\cos\varphi_s - \sin\theta_i)\,\mathbf{x}_0 + \sin\theta_s\sin\varphi_s\,\mathbf{y}_0 \quad (10)$$

By substituting the norm of 2Γ_xy u_eff in (7), we can derive the final expression of ρ_gr:

$$\rho_{gr} = \frac{c/B_W}{\sqrt{\sin^2\theta_i + \sin^2\theta_s - 2\sin\theta_i\sin\theta_s\cos\varphi_s}} \quad (11)$$

Considering that the ground range resolution of a conventional monostatic radar is c/(2 B_W sinθ_i), the ρ_gr/ρ_gr^back ratio is given by (1). It is worth noting that, if θ_s is equal to 0°, the ρ_gr/ρ_gr^back ratio does not depend on φ_s and is equal to 2 for every θ_i. Moreover, if we consider φ_s equal to 0° (or around it), when θ_s increases starting from 0°, the ρ_gr/ρ_gr^back ratio grows rapidly and is always larger than 2. On the other hand, if we assume φ_s equal to 180° (or around it), when θ_s increases starting from 0°, the ρ_gr/ρ_gr^back ratio decreases and is always smaller than 2. These considerations can be useful to choose the sensors' observation geometry in order to achieve a satisfactory ground range resolution.

To derive the azimuth resolution, we have to consider the transmitting (v_T) and receiving (v_R) satellite velocity vectors:

$$\mathbf{v}_T = v_T(\cos\delta_T\,\mathbf{x}_0 + \sin\delta_T\,\mathbf{y}_0) \quad (12)$$

$$\mathbf{v}_R = v_R(\cos\delta_R\,\mathbf{x}_0 + \sin\delta_R\,\mathbf{y}_0) \quad (13)$$

where v_T and v_R are the amplitudes of v_T and v_R, respectively, and δ_T and δ_R indicate the angles between these vectors and the incidence plane (see Figure 7). The expression for the azimuth resolution proposed in [13] is:

$$\rho_a = \frac{\lambda}{2\,\xi_{int}\,|\Gamma_{xy}\boldsymbol{\omega}_{eff}|} \quad (14)$$

where ξ_int is the time interval during which the target is observed, λ is the electromagnetic wavelength, and the vector ω_eff is given by [13]:

$$2\boldsymbol{\omega}_{eff} = \frac{\mathbf{v}_{Tn}}{|\mathbf{u}_T|} + \frac{\mathbf{v}_{Rn}}{|\mathbf{u}_R|} \quad (15)$$

In (15), v_Tn represents the component of v_T normal to u_T, and v_Rn denotes the component of v_R normal to u_R.
To derive the azimuth resolution, we have to consider the transmitting (v[T]) and receiving (v[R]) satellite velocity vectors:

v[T] = v[T] (cosδ[T] x[0] + sinδ[T] y[0])   (12)
v[R] = v[R] (cosδ[R] x[0] + sinδ[R] y[0])   (13)

where v[T] and v[R] are the amplitudes of v[T] and v[R], respectively, and δ[T] and δ[R] indicate the angles between these vectors and the incidence plane (see Figure 7). The expression for the azimuth resolution proposed in [13] is:

ρ[a] = λ / (2 ξ[int] |Γ[xy] ω[eff]|)   (14)

where ξ[int] is the time interval during which the target is observed, λ is the electromagnetic wavelength and the vector ω[eff] is given by [13]:

2ω[eff] = v[Tn]/|u[T]| + v[Rn]/|u[R]|   (15)

In (15), v[Tn] represents the component of v[T] normal to u[T] and v[Rn] denotes the component of v[R] normal to u[R]. v[Tn] can be expressed as:

v[Tn] = v[T] − (v[T]·u[T]/|u[T]|) u[T]/|u[T]| = v[T] − v[T] sin²θ[i] cosδ[T] x[0] + v[T] cosθ[i] sinθ[i] cosδ[T] z[0]   (16)

By replacing (12) in (16) it turns out:

v[Tn] = v[T] cos²θ[i] cosδ[T] x[0] + v[T] sinδ[T] y[0] + v[T] cosθ[i] sinθ[i] cosδ[T] z[0]   (17)

The projection of v[Tn] on the (x,y)-plane is:

Γ[xy]v[Tn] = v[T] cos²θ[i] cosδ[T] x[0] + v[T] sinδ[T] y[0]   (18)

The calculation of v[Rn] is similar to that accomplished to derive v[Tn]:

v[Rn] = v[R] − (v[R]·u[R]/|u[R]|) u[R]/|u[R]| = v[R] − v[R] sin²θ[s] cosφ[s] cos(φ[s]−δ[R]) x[0] − v[R] sin²θ[s] sinφ[s] cos(φ[s]−δ[R]) y[0] − v[R] sinθ[s] cosθ[s] cos(φ[s]−δ[R]) z[0]   (19)

By replacing (13) in (19) and by applying the operator that projects onto the horizontal plane, it turns out:

Γ[xy]v[Rn] = v[R] [cosδ[R] − sin²θ[s] cosφ[s] cos(φ[s]−δ[R])] x[0] + v[R] [sinδ[R] − sin²θ[s] sinφ[s] cos(φ[s]−δ[R])] y[0]   (20)

By taking into account the relationships (5), (6), (15), (18) and (20), we can determine the projection of the vector 2ω[eff] on the horizontal plane:

2Γ[xy]ω[eff] = (1/H) { v[T] cos³θ[i] cosδ[T] + v[R] cosθ[s] [cosδ[R] − sin²θ[s] cosφ[s] cos(φ[s]−δ[R])] } x[0] + (1/H) { v[T] cosθ[i] sinδ[T] + v[R] cosθ[s] [sinδ[R] − sin²θ[s] sinφ[s] cos(φ[s]−δ[R])] } y[0]   (21)

Having assumed that the two spacecraft fly at the same height, we can make the simplifying hypothesis that the velocity vectors have the same amplitude (i.e., v[T] = v[R]), thus obtaining:

2|Γ[xy]ω[eff]| = (v[T]/H) F(θ[i], θ[s], φ[s], δ[T], δ[R])   (22)

where:

F(θ[i], θ[s], φ[s], δ[T], δ[R]) = { cos²θ[i] (cos⁴θ[i] cos²δ[T] + sin²δ[T]) + cos²θ[s] [1 − sin²θ[s] (1 + cos²θ[s]) cos²(φ[s]−δ[R])] + 2 cosθ[i] cosθ[s] [cos²θ[i] cosδ[T] cosδ[R] + sinδ[T] sinδ[R] − sin²θ[s] (cos²θ[i] cosδ[T] cosφ[s] + sinδ[T] sinφ[s]) cos(φ[s]−δ[R])] }^(1/2)   (23)

The final expression for the azimuth resolution of a bistatic system is derived by replacing (22) in (14):

ρ[a] = (H λ / ξ[int]) / ( v[T] F(θ[i], θ[s], φ[s], δ[T], δ[R]) )   (24)

The azimuth resolution of a conventional monostatic SAR is given by:

ρ[a]^back = D/2 ≈ (H λ / ξ[int]) / (2 v[T] cosθ[i])   (25)

where D is the dimension of the antenna. Therefore, the ρ[a]/ρ[a]^back ratio turns out to be given by (2a).

This work was realized under ESA/ESTEC contract 19173/05/NL/GLC, "Use of Bistatic Microwave Measurements for Earth Observation". We thank Dr. N. Floury from ESA for his useful advice.
References

[1] N. J. Willis, Bistatic Radar, Artech House, New York, NY, USA, 1991.
[2] N. Pierdicca, L. Pulvirenti, F. Ticconi, and M. Brogioni, "Radar bistatic configurations for soil moisture retrieval: a simulation study," 2008, 46, 3252-3264, doi:10.1109/TGRS.2008.921495.
[3] R. J. Burkholder, J. Gupta, and J. T. Johnson, "Comparison of monostatic and bistatic radar images," 2003, 45, 41-50.
[4] M. Soumekh, "Moving target detection in foliage using along track monopulse synthetic aperture radar imaging," 1997, 6, 1148-1163, doi:10.1109/83.605412.
[5] R. A. Simpson, "Spacecraft studies of planetary surfaces using bistatic radar," 1993, 31, 465-482, doi:10.1109/36.214923.
[6] J. L. Garrison, A. Komjathy, V. U. Zavorotny, and S. J. Katzberg, "Wind speed measurement using forward scattered GPS signals," 2002, 40, 50-65, doi:10.1109/36.981349.
[7] A. Moccia and G. Fasano, "Analysis of spaceborne tandem configurations for complementing COSMO with SAR interferometry," 2005, 20, 3304-3315.
[8] J. Sanz-Marcos, P. Lopez-Dekker, J. J. Mallorqui, A. Aguasca, and P. Prats, "SABRINA: a SAR bistatic receiver for interferometric applications," 2007, 4, 307-311, doi:10.1109/LGRS.2007.894144.
[9] I. Walterscheid, J. H. G. Ender, A. R. Brenner, and O. Loffeld, "Bistatic SAR processing and experiments," 2006, 44, 2710-2717, doi:10.1109/TGRS.2006.881848.
[10] J. Klare, I. Walterscheid, A. R. Brenner, and J. H. G. Ender, "Evaluation and optimisation of configurations of a hybrid bistatic SAR experiment between TerraSAR-X and PAMIR," 2006, 1208-1211.
[11] M. Rodriguez-Cassola, S. V. Baumgartner, G. Krieger, A. Nottensteiner, R. Horn, U. Steinbrecher, R. Metzig, M. Limbach, P. Prats, J. Fischer, M. Schwerdt, and A. Moreira, "Bistatic spaceborne-airborne experiment TerraSAR-X/F-SAR: data processing and results," 2008, 3, III-451-III-454.
[12] E. Ceraldi, G. Franceschetti, A. Iodice, and D. Riccio, "Estimating the soil dielectric constant via scattering measurements along the specular direction," 2005, 43, 295-305, doi:10.1109/TGRS.2004.841357.
[13] I. Walterscheid, A. R. Brenner, and J. H. G. Ender, "Results on bistatic synthetic aperture radar," 2004, 40, 12, doi:10.1049/el:20040044.
[14] V. U. Zavorotny and A. G. Voronovich, "Bistatic GPS signal reflections at various polarizations from rough land surface with moisture content," 2000, 7, 2852-2854.
[15] K. S. Chen, T. D. Wu, L. Tsang, Q. Li, J. Shi, and A. K. Fung, "Emission of rough surfaces calculated by the integral equation method with comparison to three-dimensional moment method simulations," 2003, 41, 90-101, doi:10.1109/TGRS.2002.807587.
[16] N. Pierdicca, P. Castracane, and L. Pulvirenti, "Inversion of electromagnetic models for bare soil parameter estimation from multifrequency polarimetric SAR data," 2008, 8, 8181-8200, doi:10.3390/s8128181.
[17] M. D'Errico and A. Moccia, "The BISSAT mission: a bistatic SAR operating in formation with COSMO/SkyMed X-band radar," Proc. IEEE Aerospace Conference, Big Sky, Montana, USA, March 9-16, 2002, 2, 809-818.
[18] A. Moccia, S. Vetrella, and R. Bertoni, "Mission analysis and design of a bistatic synthetic aperture radar on board a small satellite," 2000, 47, 819-829.

Figure 1. Left panel: geometric elements that identify the transmitter-target-receiver (Tx-TG-Rx) bistatic configuration. Right panel: sketch of observing configurations suitable for SMC retrieval.

Figure 2. Maps of ρ[gr]/ρ[gr]^back (left panels) and ρ[a]/ρ[a]^back (right panels), for θ[i] = 15° (upper panels) and θ[i] = 35° (lower panels) in the (θ[s],φ[s])-plane. Ground range resolution has been upper limited to 4 times the backscattering value for the sake of figure clarity.

Figure 3. Relative position of the satellites at the initial epoch (γ is the yaw-steering angle of the active satellite).

Figure 4. Bistatic observation geometry at the initial time in terms of zenith incidence (θ[i]) and scattering (θ[s]) angles, receiver (Rx) and transmitter (Tx) near range (NR) and far range (FR) points, and heights (H) of the satellites.
Figure 5. Data acquisition (red dots) for a one-week scenario considering a passive system acquiring the ASAR signal in WSM (upper panel), ISM4 (central panel) and ISM2 (lower panel). Green/blue lines are the Envisat/passive satellite ground tracks.

Figure 6. Baseline length (absolute value) and components in the active satellite orbital reference frame versus the orbital period percentage. One orbit is considered for the sake of figure clarity.

Table 1. Some parameters of the ASAR illuminator in Image Swath and Wide Swath Modes. Note that both the swath and the ground range resolution refer to the backscattering acquisition; regarding the latter, its value is related to a nominal incidence angle (e.g., 23° for ISM2).

Mode   Swath [km]   Near Range [deg]   Far Range [deg]   Range Resolution [m]
ISM1   105          15.0               22.9               30
ISM2   105          19.2               26.7               30
ISM3   82           26.0               31.4               30
ISM4   88           31.0               36.3               30
WSM    405          15                 37                 100

Table 2. Selected bistatic configuration (in terms of zenith incidence angle, and zenith and azimuth scattering angles), and orbital parameters at the initial epoch (static design).

Zenith incidence angle θ[i] [deg]         35
Zenith scattering angle θ[s] [deg]        1
Azimuth scattering angle φ[s] [deg]       180
Semi-major axis [km]                      7,159.48
Eccentricity                              0.00115
Inclination [deg]                         98.5
Perigee argument [deg]                    90
Ascending nodes difference (ΔΩ) [deg]     4.20
Mean anomaly difference (ΔM) [deg]        0.91

Table 3. Coverage statistics and duty cycle, assuming Envisat in different Swath Modes as illuminator. Minimum and maximum acquisition latitudes, and mean width of the bistatic swath (minimum is 10 km) are reported. Note that for ISM1, ISM2 and ISM3, two latitude belts are identified.

Mode                ISM1    ISM2    ISM3    ISM4    WSM
Duty Cycle [%]      13.0    12.9    11.9    21.5    40.1
Lat. Min. 1 [deg]   −71.7   −66.2   −53.2   −43.5   −71.7
Lat. Max. 1 [deg]   −48.5   −39.6   −24.2   42.9    71.6
Lat. Min. 2 [deg]   48.9    40.5    25.6    —       —
Lat. Max. 2 [deg]   71.7    66.2    51.3    —       —
Swath [km]          39.9    37.2    29.2    24.1    39.1
{"url":"http://www.mdpi.com/1424-8220/9/9/7250/xml","timestamp":"2014-04-19T22:24:41Z","content_type":null,"content_length":"112924","record_id":"<urn:uuid:2da75a46-5829-4ce3-ae4f-231ec918ed25>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Circle

The Eugene Math Circle at the University of Oregon is opening weekly sessions for elementary, middle and high school students in the fall of 2013. We invite students who enjoy math, like solving challenging problems and want to learn exciting topics that are normally outside the school curriculum. The Math Circle leaders are members of the Mathematics Department. There will be sessions for four groups distinguished by age (or level):
• Elementary I (recommended for 2-3 graders) at 5:00-5:50 pm on Thursdays.
• Elementary II (recommended for 4-5 graders) at 6:00-6:50 pm on Thursdays.
• Intermediate (recommended for 6-8 graders) at 5:45-7:00 pm on Wednesdays.
• Advanced (recommended for 9-12 graders) at 5:00-6:30 pm on Wednesdays.
The Spring session will begin on April 2 or 3, 2014, at the UO campus (see the "Schedule") and run through the end of May. The application forms can be found here. The application deadline is March 25, 2014.
The Eugene Math Circle is sponsored by the University of Oregon Department of Mathematics, by the Mathematical Sciences Research Institute (MSRI), and by the National Security Agency (NSA).
{"url":"http://pages.uoregon.edu/nemirovm/emc.html","timestamp":"2014-04-21T12:56:30Z","content_type":null,"content_length":"3163","record_id":"<urn:uuid:5243d1b8-fde7-49b6-976d-ec2f7a64130e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with my starter--I think.

Hi Everyone, To begin with, I got SF Sourdough culture from Ed Wood. Also built a proofing box. I started the process on the morning of Dec 2nd. Now it's Dec 5th, so I'm 72 hours into it. I used the culture and 3/4 cup flour and water just like he says. After 24 hours proofing at 88 degrees it had grown 2 1/2 times and smelled great. Then I started the 12-hour routine of 1 cup flour and enough water to maintain consistency. The second 24 hours went well. Nice smell and bubbles. So last night and this morning I fed as normal, but the smell isn't nearly as strong and growth is almost nonexistent. Smells more like just flour and water. Since the first 24 hours it's been proofing at room temp, around 70 degrees. Taste is good, not too strong and not too mild. I'm following his instructions to the letter. He states that contamination might occur in the first 24 hours and that it would have an "unpleasant odor". Mine smelled pretty good to me. So am I jumping to conclusions or is it just too early in the process? Should I wash it or give it a few more days? Also saw in one post the suggestion of adding some rye flour, not sure if I should. Thanks for any light you might shed on this.

Mike C. Groveland, CA
{"url":"http://www.pizzamaking.com/forum/index.php?topic=16649.msg162298","timestamp":"2014-04-17T07:14:56Z","content_type":null,"content_length":"46392","record_id":"<urn:uuid:51446f17-cbb1-465b-af5e-5cc80e5a57c2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 21

Magnetic Field:

Magnetism is the result of electric charge motion. As soon as an electric charge moves, it creates a magnetic effect perpendicular to its direction of motion. This can be easily verified with a magnetized nail. If a steel nail is placed inside a coil (a coil is a piece of copper wire wrapped around a cylinder) that is connected to a battery, the nail becomes a magnet. Since the nail itself is perpendicular to the surface of each loop of the coil, we can say that the current, or the motion of electric charges, is perpendicular to the nail (the magnetic field). See the figure.

The reason for naming the poles of the nail N and S is that if a magnet is hung at its middle by a string, it turns and aligns itself approximately along the Earth's North-South direction. The end that points to the North is called its North pole and the other end that points to the South is called the South pole of the magnet. The geographic North and the magnetic North are off by a few degrees, however, and the position of the magnetic north changes slightly over the years. The reason for the magnetic effect of the Earth is also the motion of charged particles. The molten metal at the core of the Earth is ionized and has a rotational motion parallel to the Equator, creating a huge magnet whose field must be perpendicular to the Equator plane and therefore passes through the North pole and the South pole of the Earth.

Magnetic Field Lines:

Magnetic field lines are generally accepted to emerge from the North pole of a magnet and enter its South pole, as shown. This can be easily verified by placing a tiny compass around a bar magnet at different positions. It is impossible to separate the North and South poles of a magnet; they coexist. If a bar magnet is cut at its middle (its neutral zone), each piece becomes an independent magnet possessing a South pole and a North pole. This is not the case with electric charges: it is possible to have a separate negative charge and a separate positive charge.

The Theory Behind Magnetism

As we know, atoms are made of negative electrons, positive protons, and neutral neutrons. Protons and neutrons have almost the same mass but are much heavier than electrons. Protons and neutrons form the nucleus. Electrons orbit the nucleus. Electrons are considered the moving charges in atoms. Electrons generate magnetic fields perpendicular to their planes of rotation. The following argument gives you an idea of how some materials exhibit magnetic properties, although such a discussion does not reflect the exact picture.

Visualize a single electron orbiting the nucleus of its atom. For simplicity, visualize a sphere within whose space such an electron spins at a rate of, say, 10^15 turns per second. This means one thousand trillion turns every second. Therefore, at every one-thousand-trillionth of a second it possesses a particular plane of rotation in space. In other words, the orientation of its plane of rotation changes 10^15 times every second. That is why we say it creates an electronic cloud. Three such orientations are sketched below. For each plane of rotation, the magnetic field vector is shown to have its maximum effect at the center of the circle of rotation and perpendicular to that circle.
A pure substance of mass 1 lb, for example, contains a very large number of atoms, and each atom, depending on the type of element, contains several electrons, each of which at any given instant has its own orientation of rotation and its own orientation of magnetic field vector. We are talking about hundreds of trillion trillions of different magnetic field vectors in a piece (1 lb) of material. There is no guarantee that all such electrons have their magnetic field vectors oriented in a single direction so that their magnetic effects add up.

An orbital is a space around a nucleus where the possibility of finding electrons is high. An orbital can be spherical, dumbbell-shaped, or of a few other particular shapes. Here, we assume a spherical orbital for simplicity. The theory of atoms shows that each orbital fills up with 2 electrons. The two electrons in each orbital must have opposite spin directions. This makes the two magnetic field vectors point in opposite directions as well. The result is a zero net magnetic effect. This way, each atom that contains an even number of electrons has all of its orbitals filled with pairs of electrons. Such atoms are magnetically neutral. However, atoms that contain odd numbers of electrons will have an orbital that is left with a single electron. Such atoms are not magnetically neutral by themselves. They become magnetically neutral when they form molecules with the same or other atoms.

There are a few elements, such as iron, cobalt, and nickel, that have a particular atomic structure. This particular structure allows a certain orbital in each atom to have a single electron. Under normal circumstances, there is no guarantee that all such particular orbitals of the atoms of a piece of iron, for example, have their magnetic field vectors lined up parallel to each other. But if a piece of pure iron is placed in an external magnetic field, the planes of rotation of those single electrons line up such that their magnetic fields line up with the direction of the external field, and after the external field is removed, they tend to keep their new orientation; therefore, the piece of pure iron becomes a magnet itself. The conclusion is that magnetism is the result of electric charge motion and that the magnetic field vector is perpendicular to the plane of rotation of the electron.

Like Poles and Unlike Poles:

Like poles repel and unlike poles attract. This behavior is similar to that of electric charges. Recall that like charges repel and unlike charges attract. One difference is that separate positive and negative charges are possible while separate North and South poles are not. See the figure shown.

Uniform Magnetic Fields:

The magnetic field strength around a bar magnet is not uniform and varies with distance from its poles. The reason is that the field lines around a bar magnet are not parallel. The density of field lines is a function of distance from the poles. In order to make parallel magnetic field lines, the bar magnet must be bent into the shape of a horseshoe. The field lines that emerge from the N-pole of the magnet then have to directly enter its S-pole, and they necessarily become parallel. This is true only for the space in between the poles. See the figure.

The Force of a Magnetic Field on a Moving Charge

When a moving charge enters a magnetic field such that field lines are crossed, the charge finds itself under a force perpendicular to its direction of motion that gives it a circular motion.
If a charge enters a field such that its motion direction is parallel to the field lines and no field line is crossed, then the charge will not be affected by the field and keeps going straight.

Picture yourself sitting in a classroom facing the board, and visualize a downward uniform magnetic field (ceiling being the North pole and floor the South pole of the magnet). Visualize a positive charge entering from the left side of the classroom going toward the right side. This positive charge will initially be forced toward the board, as shown.

If the downward magnetic field vector is (B), and the charge's rightward velocity vector is (v), the magnitude of the force (F) initially pushing the moving charge toward the board is given by F = q v B (see Fig. 2). If charge (q) is making an angle θ with the magnetic field lines, then (F) will have a smaller value given by F = q v B sin θ (see Fig. 1).

If (v) points to the left, then (F) will point toward the class. If (B) points upward, then (F) will point toward the class. If the charge is negative, then (F) will point toward the class. Therefore, there are three variables that can affect the direction of (F). If an odd number of these variables change simultaneously, then the direction of (F) reverses. If an even number of these variables change simultaneously (any two), the direction of (F) remains the same.

The unit for B is the Tesla (T). If (F) is in Newtons, (v) in m/s, and (q) in Coulombs, then (B) will be in Tesla. One Tesla of magnetic field strength is the strength that can exert a force of 1 N on 1 Coul. of electric charge that is moving at a speed of 1 m/s perpendicular to the magnetic field lines.

Example 1: A 14-μC charge enters from the left perpendicular to a downward magnetic field of strength 0.030 Tesla at a speed of 1.8x10^5 m/s. Find the magnitude and direction of the initial force on it as soon as it crosses a field line. Refer to Fig. 2.

Solution: Referring to Fig. 2, it is clear that the charge will initially be pushed toward the board. The magnitude of this initial push is F = q v B; F = (14 μC)(1.8x10^5 m/s)(0.030 T) = 0.076 N.

Example 2: A 14-μC charge enters from the left through a 65° angle with respect to a downward magnetic field of strength 0.030 Tesla at a speed of 1.8x10^5 m/s. Find the magnitude and direction of the initial force on it as soon as it crosses a field line. Refer to Fig. 1.

Solution: Referring to Fig. 1, it is clear that the charge will initially be pushed toward the board. The magnitude of this initial push is F = q v B sin θ; F = (14 μC)(1.8x10^5 m/s)(0.030 T) sin(65°) = 0.069 N.

Example 3: An electron enters a 0.013-T magnetic field normal to its field lines and experiences a 3.8x10^-15 N force. Determine its speed.

Solution: F = q v B; v = F/(qB) = (3.8x10^-15 N) / [(1.6x10^-19 C)(0.013 T)] = 1.8x10^6 m/s
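As a quick numerical check of Examples 1-3, here is a short Python sketch (the function and variable names are ours, not from the text) that evaluates F = q v B sin θ and also solves it for v:

```python
from math import sin, radians

def magnetic_force(q, v, B, theta_deg=90.0):
    """Force (N) on a charge q (C) moving at speed v (m/s) in a field B (T),
    at an angle theta (degrees) to the field lines: F = q*v*B*sin(theta)."""
    return q * v * B * sin(radians(theta_deg))

print(magnetic_force(14e-6, 1.8e5, 0.030))        # Example 1: ~0.076 N
print(magnetic_force(14e-6, 1.8e5, 0.030, 65.0))  # Example 2: ~0.069 N
print(3.8e-15 / (1.6e-19 * 0.013))                # Example 3: v = F/(qB) ~ 1.8e6 m/s
```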
Motion of a Charged Particle in a Magnetic Field:

So far, we have learned that when a moving charge crosses magnetic field lines, it is forced to change direction. This change of direction does not stop as long as there are field lines to be crossed in the pathway of the charged particle. Magnetic field lines keep changing the direction of motion of the charged particle, and if the field is constant in magnitude and direction, they give a circular motion to the charged particle. The reason is that F is perpendicular to V at any instant and position, and that exactly defines the concept of centripetal force.

Centripetal force, F[c], is always directed toward the center of rotation. Such a force makes an object of mass M traveling at speed v go around a circle of radius R. In fact, it is the force of the magnetic field, F[m], that supplies the necessary centripetal force, F[c]. We may equate the two after comparing the two figures below:

F[m] = F[c], that is, qvB = Mv^2/R, which gives R = Mv/(qB).

This formula is useful in finding the radius of curvature of the circular path of a charged particle when caught in a magnetic field.

Example 4: A proton (q = 1.6x10^-19 C, M = 1.67x10^-27 kg) is captured in a 0.107-T magnetic field and spins along a circle of radius 4.5 cm. Find its speed knowing that it moves perpendicular to the field lines.

Solution: R = Mv/(qB); solving for v: v = RqB/M; v = (0.045 m)(1.6x10^-19 C)(0.107 T) / (1.67x10^-27 kg) = 4.6x10^5 m/s

Example 5: In a certain device, alpha-particles enter a 0.88-T magnetic field perpendicular to its field lines. Find the radius of rotation they attain if each carries an average kinetic energy of 520 keV. An alpha-particle is a helium nucleus; it contains 2 protons and 2 neutrons. M[p] = 1.672x10^-27 kg and M[n] = 1.674x10^-27 kg.

Solution: Since the K.E. of each alpha-particle is given, knowing its mass (2M[p] + 2M[n]), its speed can be calculated from K.E. = (1/2)Mv^2. Note that 1 eV = 1.6x10^-19 J; therefore, 1 keV = 1.6x10^-16 J.

K.E. = (1/2)Mv^2; 520(1.6x10^-16 J) = (1/2)[2(1.672x10^-27 kg) + 2(1.674x10^-27 kg)] v^2; v = 5.0x10^6 m/s

R = Mv/(qB); R = [2(1.672x10^-27 kg) + 2(1.674x10^-27 kg)](5.0x10^6 m/s) / [(2 x 1.6x10^-19 C)(0.88 T)]

Each alpha-particle has 2 protons and carries 2 x 1.6x10^-19 C of electric charge. R = 0.12 m = 12 cm.

Chapter 21 Test Yourself 1:

1) In magnetizing a nail that is wrapped around with a coil of wire, the direction of the electric current in the loops of the wire is (a) parallel to the nail. (b) perpendicular to the nail if the loops are closely packed. (c) almost perpendicular to the nail if the loops are not closely packed. (d) b & c.

2) The direction of the magnetic field in a magnetized nail is (a) along the nail. (b) perpendicular to the nail. (c) neither a nor b.

3) If the four bent fingers of the right hand point in the direction of current in the loops of a magnetized coil, then the thumb points to (a) the South pole of the magnet coil. (b) the North pole of the magnet coil. (c) the direction normal to the magnet coil.

4) The magnetized nail experiment shows that (a) magnetic field occurs anywhere that there is an iron core. (b) anywhere a charged particle moves, magnetic effect develops in all directions. (c) anywhere a charged particle moves, there appears a magnetic effect that is normal to the direction of the charged particle's motion.

5) An electron orbiting the nucleus of an atom (a) does not develop a magnetic field because its radius of rotation is extremely small. (b) generates a magnetic effect that is of course normal to its plane of rotation at any instant. (c) cannot generate any magnetic effect because of its extremely small charge.

6) In a hydrogen molecule, H[2], the net magnetic effect caused by the rotation of its two electrons is zero because (a) at any instant, the two electrons spin in opposite directions creating opposite magnetic effects. (b) the instant its two electrons pass by each other, they repel and change planes of rotation that are opposite to each other causing opposite magnetic effects. (c) both a & b.
7) The reason that atoms, in general, are magnetically neutral is that (a) electrons of atoms must exist in pairs spinning in opposite directions, thereby neutralizing each other's magnetic effect. (b) not all atoms are iron atoms and therefore do not have any magnetic effect in them.

8) The reason iron and a few other elements can maintain magnetism in them is that (a) these elements can have orbitals in them that contain unpaired electrons. (b) under an external magnetic field, the orbitals in these elements with a single electron in them can orient themselves to the direction of the external field and stay that way. (c) both a & b.

9) For a bar magnet, the magnetic field lines (a) emerge from its South pole and enter its North pole. (b) emerge from its North pole and enter its South pole. (c) emerge from its poles and enter its middle, the neutral zone.

10) If a bar magnet is cut at its middle, the neutral zone, (a) one piece becomes a pure North pole and the other piece a pure South pole. (b) both pieces will have their own South and North poles because magnetic poles coexist. (c) neither a nor b.

11) The magnetic field strength around a bar magnet is (a) uniform. (b) nonuniform, meaning it varies with distance from its poles. (c) uniform at points far from the poles.

12) The magnetic field in between the poles of a horseshoe magnet is (a) uniform. (b) nonuniform. (c) zero.

13) The magnetic field in between the poles of a horseshoe magnet (a) varies with distance from either pole. (b) is directed from N to S. (c) has a constant magnitude and direction and is therefore uniform. (d) b & c.

Problem: Visualize yourself sitting in a class facing the board. Suppose that the ceiling is the North pole of a huge horseshoe magnet and the floor is its South pole; therefore, you are sitting inside a uniform downward magnetic field. Also visualize that a fast-moving positive charge emerges from the left wall and is heading for the right wall; in other words, the velocity vector of the positive charge acts to the right. Answer the following questions:

14) The charge will initially be pushed (a) toward you. (b) downward. (c) toward the board.

15) The charge will take a path that is (a) straight toward the board. (b) circular at a certain radius of rotation. (c) curved upward.

16) If the radius of curvature is small such that the charge does not leave the space between the poles of the magnet, it will have a circular motion that, looking from the top, will be (a) counterclockwise. (b) clockwise. (c) oscillatory.

17) If instead a negative charge entered from the left side, it would spin (a) counterclockwise. (b) clockwise.

18) If a positive charge entered from the right side heading for the left, looking from the top again, it would spin (a) clockwise. (b) counterclockwise.

19) If the polarity of a magnetic field is reversed, the spin direction of a charged particle caught in it will (a) remain the same. (b) reverse as well.

20) The force, F, of a magnetic field, B, on a moving charge, q, is proportional to (a) the field strength, B. (b) the particle's velocity, V. (c) the amount of the charge, q. (d) sin θ of the angle V makes with B. (e) a, b, c, & d.

21) The force, F, of a magnetic field, B, on a moving charge, q, is given by (a) F = qB. (b) F = qV. (c) F = qvB sin θ.

22) In the formula F = qvB sin θ, if q is in Coulombs, v in m/s, and F in Newtons, then B is in (a) N/(Coul. m/s). (b) Tesla. (c) a & b.
23) The magnitude of the force that a 0.0025-T magnetic field exerts on a proton that enters it normal to its field lines at a speed of 3.7x10^6 m/s is (a) 1.5x10^15 N!! (b) 0 (c) 1.5x10^-15 N

24) An electron moves at a speed of 7.4x10^7 m/s parallel to a uniform magnetic field. The force that the magnetic field exerts on it is (a) 3.2x10^-19 N. (b) 0 (c) 4.8x10^-19 N.

25) The force that keeps a particle in circular motion is (a) circular force. (b) centripetal force. (c) tangential force.

26) When a charged particle is caught in a magnetic field and it keeps spinning at a certain radius of rotation, the necessary centripetal force is (a) the force of the magnetic field on it that keeps it spinning. (b) the electric force of the charged particle itself. (c) both a & b.

27) Equating the force of the magnetic field, F[m], and the centripetal force, F[c], looks like (a) Mv/R = qvB. (b) v^2/R = qvB. (c) Mv^2/R = qvB.

28) In the previous question, solving for R yields: (a) R = qvB/(Mv^2). (b) R = Mv/(qB). (c) both a & b.

29) The radius of rotation that a 4.0-μCoul. charge carried by a 3.4-μg mass moving at 360 m/s normal to a 0.78-T magnetic field attains is (a) 39 cm. (b) 3.9 m (c) 7.8 m

30) One electron-volt of energy (1 eV) is the energy of 1 electron in an electric field where the potential is 1 Volt. This follows the formula (a) P.E. = qV, where q is replaced by the charge of 1 electron and V is 1 volt. (b) P.E. = Mgh. (c) neither a nor b.

31) Knowing that 1 eV = 1.6x10^-19 J, if a moving proton has an energy of 25000 eV, its energy calculated in Joules is (a) 4.0x10^-15 J (b) 1.56x10^23 J (c) 4.0x10^-19 J

32) A 25-keV proton enters a 0.014-T magnetic field normal to its field lines. Each proton has a mass of 1.67x10^-27 kg. The radius of rotation it finds is (a) 1.63 m (b) 2.63 m (c) 3.63 m.

Velocity Selector:

It is possible to run a charged particle through a magnetic field perpendicular to the field lines without any deviation from a straight path. All one has to do is to place an electric field in a way that neutralizes the effect of the magnetic field. Let's visualize sitting in a classroom (facing the board, of course) in which the ceiling is the N-pole, the floor the S-pole, and a positive charge is to travel from left to right normal to the downward magnetic field lines. As was discussed before, the magnetic field does initially push the positive charge toward the board. Now, if the board is positively charged and the back wall negatively charged, the charged particle will be pushed toward the back wall by this electric field. It is possible to adjust the strengths of the magnetic and electric fields such that the forces they exert on the charge are equal in magnitude but opposite in direction. This makes the charge travel straight to the right without deviation. The resulting apparatus is called a "velocity selector."

For a velocity selector we may set the magnetic force on the charge equal to the electric force on the charge: F[m] = F[e]. This results in qvB = qE; vB = E, or v = E/B.

If there is a large number of charged particles traveling at different speeds but in the same direction, and we want to separate the ones with a certain speed from the rest, this device proves useful.

Example 6: In a left-to-right flow of alpha-rays (helium nuclei) coming out of a radioactive substance, a 0.0225-T magnetic field is placed in the downward direction.
What magnitude electric field should be placed around the flow such that only 0.050-MeV alpha-particles survive both fields?

Solution: K.E. = 0.050 MeV means 0.050 mega-electron-volts, that is, 5.0x10^4 electron-volts. Since each eV is equal to 1.6x10^-19 Joules, K.E. = 0.050 x 10^6 x 1.6x10^-19 J; K.E. = 8.0x10^-15 J.

To find v from K.E., use K.E. = (1/2)Mv^2. Using M = 6.692x10^-27 kg (verify) for the mass of an alpha-particle, the speed v is v = 1.5x10^6 m/s. Using the velocity selector formula: v = E/B; E = vB; E = (1.5x10^6 m/s)(0.0225 T) = 34000 N/C.

An application of the foregoing discussion is the cyclotron. A cyclotron is a device that accelerates charged particles for nuclear experiments. It works on the basis of the motion of charged particles in magnetic fields. When a particle of mass M and charge q moving at velocity v is caught in a magnetic field B, as we know, it takes a circular path of radius R given by R = Mv/(qB). The space in which the particles spin is cylindrical. To accelerate the spinning particles to higher and higher velocities, the cylinder is divided into two semicylinders called the "dees." The dees are connected to an alternating voltage. This makes the polarity of the dees alternate at a certain frequency. It is arranged such that when positive particles are in one of the dees, that dee becomes positive to repel the positive particles and the other dee is negative to attract them, and as soon as the particles enter the negative dee, the polarity changes, and the negative dee becomes positive to repel them again. This continual process keeps accelerating the particles to a desired speed. Of course, as the speed changes, the particles acquire greater and greater radii until they are ready to leave the cylindrical space, at which point they bombard the target nuclei under experiment. A sketch is shown below.

If the speed of the particles becomes comparable to the speed of light (3.00x10^8 m/s), their masses increase according to Einstein's theory of relativity. The mass increase must then be taken into account when calculating the period of rotation and energy of the particles. These types of calculations are called "relativistic" calculations.

Period of Rotation:

The period (T), the time it takes for a charged particle to travel one circle or 2πR, can be calculated. From the definition of speed, V = 2πR/T; solving for T yields T = 2πR/v (*). v may be found from the formula for the radius of rotation, R = Mv/(qB), which yields v = qBR/M. Substituting for v in (*) yields:

T = 2πM/(qB)

Example 7: In a cyclotron, protons are to be accelerated. The strength of the magnetic field is 0.024 Tesla. (a) Find the period of rotation of the protons, (b) their frequency, (c) their final speed if the final radius before hitting the target is 2.0 m, and (d) their K.E. in Joules and eVs.

Solution:
(a) T = 2πM/(qB); T = (2π x 1.672x10^-27 kg) / [(1.6x10^-19 C)(0.024 T)] = 2.7x10^-6 s
(b) f = 1/T; f = 3.7x10^5 s^-1 or f = 3.7x10^5 Hz.
(c) v = Rω; v = R(2πf); v = (2.0 m)(2π)(3.7x10^5 s^-1) = 4.6x10^6 m/s. This speed (although very high) is still small enough compared to the light speed (3.00x10^8 m/s) that the relativistic effects can still be neglected.
(d) K.E. = (1/2)Mv^2; K.E. = (1/2)(1.672x10^-27 kg)(4.6x10^6 m/s)^2 = 1.8x10^-14 J
K.E. = 1.8x10^-14 J (1 eV / 1.6x10^-19 J) = 110,000 eV = 110 keV = 0.11 MeV
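The chain T → f → v → K.E. in Example 7 is easy to verify numerically; the short Python sketch below (our own check, with variable names that are not from the text) reproduces the four answers:

```python
from math import pi

# Numerical check of Example 7 (protons in a cyclotron); constants as in the text.
q, M, B, R = 1.6e-19, 1.672e-27, 0.024, 2.0

T = 2 * pi * M / (q * B)   # (a) period of rotation, ~2.7e-6 s
f = 1 / T                  # (b) frequency, ~3.7e5 Hz
v = 2 * pi * f * R         # (c) speed at the final radius, ~4.6e6 m/s
KE = 0.5 * M * v**2        # (d) kinetic energy, ~1.8e-14 J
print(T, f, v, KE, KE / 1.6e-19)  # last value is the K.E. in eV (~1.1e5 eV)
```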
An Easy Relativistic Calculation:

According to Einstein's theory of relativity, when a mass M travels close to the speed of light, it becomes more massive and more difficult to accelerate further. The mass increase effect is given by the following formula:

M = M[o]γ, where γ = 1 / SQRT(1 − v^2/c^2),

M[o] being the rest mass and c the speed of light.

Example 8: In a cyclotron, electrons are accelerated to a speed of 2.95x10^8 m/s. (a) By what factor does the electron mass increase? Knowing that the rest mass of the electron is M[o] = 9.108x10^-31 kg, determine (b) its mass at that speed.

Solution: (a) Let's find γ step by step. Let's first find (v/c), then (v/c)^2, which is the same as v^2/c^2, then 1 − v^2/c^2, then the square root of it, and finally 1 over that square root. The sequence is as follows:
(v/c) = 2.95 / 3.00 = 0.98333... (note that the 10^8 powers cancel)
v^2/c^2 = (v/c)^2 = (0.98333...)^2 = 0.96694...
1 − v^2/c^2 = 1 − 0.96694... = 0.0330555...
SQRT(1 − v^2/c^2) = 0.1818119
γ = 1 / SQRT(1 − v^2/c^2) = 1 / 0.1818119 = 5.50 (the number of times the mass of the electron increases)
(b) M = M[o]γ; M = (9.108x10^-31 kg)(5.50) = 5.01x10^-30 kg

Example 9: Find the value of γ for the protons in Example 7. Solution: To be done by students.

Sources of Magnetic Field:

Aside from permanent magnets, magnetic fields are mostly generated by coils of wire. A coil is a wire wrapped around a cylinder. Most coils are cylindrical. Long coils produce a fairly uniform magnetic field inside them, especially toward their middle and along their axis of symmetry. To understand the magnetic field inside a coil, we need to know the magnetic field around a long straight wire as well as that of a single circular loop.

Magnetic Field Around a Straight and Long Wire:

For a very long and straight wire carrying a current I, we expect the magnetic field B to be perpendicular to the direction of the current. Since anywhere around the wire this property must equally exist, the magnetic field lines are necessarily concentric circles with the current (the wire) perpendicular to the planes of the circles at their common center. See the figure. The field strength at distance r from the wire is

B = μ[o]I / (2πr).

As r increases, B of course decreases, as is apparent from the equation above. The direction of B is determined by the right-hand rule again: if the thumb shows the direction of I, the four bent fingers point in the direction of B. In the above formula, μ[o] = 4π x 10^-7 T·m/A is called the permeability of free space (vacuum) for the passage of magnetic field lines. For any material or substance, a permeability μ may be measured. For every material a constant may then be defined that relates μ to μ[o].

Example 10: If in the above figure I = 8.50 Amps, determine the magnitude of B at r = 10.0 cm, 20.0 cm, and 30.0 cm.

Solution: Using the formula B = μ[o]I / (2πr), we get: B[1] = 1.7x10^-5 Tesla; B[2] = 8.5x10^-6 Tesla; B[3] = 5.7x10^-6 Tesla.

Magnetic Field of a Current-Carrying Circular Loop:

The magnetic field produced by a current-carrying circular loop is necessarily perpendicular to the plane of the loop. The reason is that B must be perpendicular to the current I that the loop carries. The direction is determined by the right-hand rule, as was discussed in the nail example at the beginning of this chapter. The magnitude at the center of a loop of radius r is given by

B = μ[o]I / (2r).

Pay attention to the figure as well.

Example 11: If in the above figure I = 6.80 Amps, determine the magnitude of B for loop radii r = 10.0 cm, 20.0 cm, and 30.0 cm.
Solution: Using the formula B = μ[o]I / (2r), we get: B[1] = 4.3x10^-5 Tesla; B[2] = 2.1x10^-5 Tesla; B[3] = 1.4x10^-5 Tesla.

Magnetic Field Inside a Solenoid:

A solenoid is a long coil of wire for which the length-to-radius ratio is not under about 10. The magnetic field of a single loop of wire is weak. A solenoid has many loops, and the field lines inside it, especially in the vicinity of its middle, are fairly parallel and provide a uniform and stronger field. Placing an iron core inside the solenoid makes the field even stronger, some 400 times stronger: μ[iron] = 400 μ[o]. (See figure below.) The formula for the magnetic field strength of a solenoid is

B = μ[o]nI,

where n is the number of turns per unit length. In SI units, n is the number of turns per meter of the solenoid.

Example 12: A solenoid is 8.0 cm long and has 2400 turns. A 1.2-A current flows through it. Find (a) the strength of the magnetic field inside it toward the middle. (b) If an iron core is inserted in the solenoid, what will the field strength be?

Solution: (a) B = μ[o]nI; B = (4π x 10^-7 T·m/A)(2400 turns / 0.080 m)(1.2 A) = 0.045 Tesla. (b) Iron increases μ[o] by a factor of 400; therefore, (400)(0.045 T) ≈ 20 T.

One Application of the Solenoid:

Any time you start your car, a solenoid similar to the one in the above example gets magnetized and pulls in an iron rod. The strong magnetic field of the solenoid exerts a strong force on the iron rod (core) and gives it a great acceleration and high speed within a short distance. The rod is partially in the solenoid to begin with and gets fully pulled in after the solenoid is connected to the battery by you when you try to crank the engine. The current that feeds the solenoid may not be even one amp, but the connection it causes between battery and starter pulls several amps from the battery. The forcefully moving rod collides with a copper connector that connects the starter to the battery. This connection allows a current of 30 Amps to 80 Amps to flow through the starter motor and crank your car. The variation of the amperage depends on how cold the engine is. The colder the engine, the more viscous the oil, and the more power is needed to turn the crankshaft.

Example 13: The magnetic field inside a 16.3-cm long solenoid is 0.027 T when a current of 368 mA flows through it. How many turns does it have?

Solution: B = μ[o]nI; n = B/(μ[o]I); n = 0.027 T / [(4π x 10^-7 T·m/A)(0.368 A)] = 58400 turns/m. This is the number of turns per meter. If the solenoid were 1.00 m long, it would have 58400 turns. It is only 0.163 m long, and therefore it has fewer turns. If N is the number of turns, we may write N = nL; N = (58400 turns/m)(0.163 m) = 9520 turns.
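The three field formulas above differ only in geometry; the following small Python sketch (ours, with hypothetical helper names) evaluates each one against Examples 10, 11, and 12:

```python
from math import pi

mu0 = 4 * pi * 1e-7  # permeability of free space, T*m/A

def B_wire(I, r):      # long straight wire, at distance r from the wire
    return mu0 * I / (2 * pi * r)

def B_loop(I, r):      # center of a circular loop of radius r
    return mu0 * I / (2 * r)

def B_solenoid(n, I):  # inside a long solenoid, n = turns per meter
    return mu0 * n * I

print(B_wire(8.50, 0.10))            # Example 10: ~1.7e-5 T
print(B_loop(6.80, 0.10))            # Example 11: ~4.3e-5 T
print(B_solenoid(2400 / 0.080, 1.2)) # Example 12: ~0.045 T
```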
Definition of the Ampere:

We defined 1 A as the flow of 1 C of electric charge in 1 s. A preferred definition for the unit of electric current, the Ampere, is made by using the force per unit length that two infinitely long parallel wires exert on each other. Recall that when an electric current flows through an infinitely long and straight wire, it generates a magnetic field around it that can be sensed along concentric circles perpendicular to the wire. If two such wires are parallel to each other and the currents in them flow in the same direction, they attract each other. If the currents flow in them in opposite directions, they repel each other. The magnitude of the force they exert on each other depends on the distance between the wires and the currents that flow through them. If two parallel wires that are 1 m apart have equal currents flowing in them in the same direction, and the two wires attract each other with a force of 10^-7 N/m in vacuum, then the current through each wire is 1 Amp.

Chapter 21 Test Yourself 2:

1) A velocity selector takes advantage of (a) two perpendicular electric fields. (b) a set of perpendicular electric and magnetic fields. (c) two perpendicular magnetic fields.

2) The forces (F[m] and F[e]) that the magnetic and electric fields of a velocity selector exert on a charge q must be (a) equal in magnitude. (b) opposite in direction. (c) both a & b.

3) F[m] and F[e] in a velocity selector are given by (a) F[m] = qvB and F[e] = qE. (b) F[m] = qB and F[e] = qE. (c) F[m] = qvB and F[e] = qE such that F[m] = F[e].

4) Setting F[m] = F[e] and solving for v results in (a) V = B/E. (b) V = E/B. (c) V = EB.

5) The formula V = E/B (a) depends on the amount of charge. (b) does not depend on the amount of the charge. (c) does not depend on the sign of the charge. (d) b & c.

6) What strength uniform electric field must be placed normal to a 0.0033-T uniform magnetic field such that only charged particles at a speed of 2.4x10^6 m/s get passed through along straight lines? (a) 7900 N/Coul. (b) 9700 N/Coul. (c) 200 N/Coul.

7) A cyclotron is a device that is used to (a) accelerate charged particles to high speeds and energies. (b) accelerate charged particles to speeds close to that of light. (c) perform experiments with the nuclei of atoms. (d) a, b, & c.

8) The speed of light is (a) 3.00x10^8 m/s. (b) 3.00x10^-8 m/s. (c) 3.00x10^5 km/s. (d) a & c.

9) A speed of 3.00x10^-8 m/s is (a) faster than the speed of light. (b) slower than the motion of an ant. (c) so slow that even germs do not move that slowly. (d) extremely slow, almost motionless. (e) b, c, and d.

10) In a cyclotron, a charged particle released near the center (a) finds itself in a perpendicular magnetic field and starts spinning. (b) spins at a certain period of rotation given by T = 2πM/(qB). (c) is also under an accelerating electric field that alternates based on the period of rotation of the charged particle. (d) a, b, and c.

11) As the particles in a cyclotron accelerate to high speeds comparable to that of light, (a) a mass increase must be taken into account. (b) the mass increase affects the period of rotation. (c) a & b.

12) The magnetic field around a current-carrying long wire (a) is perpendicular to the wire and at equal distances from the wire has the same magnitude. (b) is parallel to the wire. (c) both a and b.

13) The magnetic field around a current-carrying long wire (a) may be pictured as concentric circles at which the field vectors act radially outward. (b) may be pictured as concentric circles at which the field vectors act tangent to the circles. (c) has a constant magnitude that does not vary with distance from the wire. (d) b & c.

14) The formula for the field strength around a current-carrying long wire is (a) B = μ[o]I / (2πR). (b) B = μ[o]I / (2R). (c) B = I / (2R).

15) μ[o] = 4π x 10^-7 T·m/Amp is called (a) the permittivity of free space for the passage of the electric field effect. (b) the permeability of free space for the passage of the magnetic field effect. (c) neither a nor b.

16) The farther from a wire that carries a current, the (a) stronger the magnetic effect. (b) more constant the magnetic effect. (c) weaker the magnetic effect.
Problem: Draw two concentric circles in a plane perpendicular to a wire that passes through the center of the circles. Suppose that the wire carries a constant electric current, I, upward. Also suppose that the radius of the greater circle is exactly twice that of the smaller circle. You also know that if you were to show magnetic field vectors, you would draw them tangent to those circles. Draw a vector of length, say, 1/2 inch tangent to the greater circle as the magnitude of B at that radius. Then draw another vector tangent to the smaller circle to represent the field strength at the other radius. Answer the following questions:

17) The magnitude of the field strength at the smaller radius is (a) a vector of length 1 inch. (b) a vector of length 1/4 inch. (c) a vector of length 1/16 inch.

18) Based on the upward current in the wire, and looking from the top, the direction of the vectors you draw must be (a) clockwise. (b) counterclockwise.

19) What should be the length of the tangent vector you may draw at another circle whose radius is 5 times that of the smaller circle? (a) 1/25 inch. (b) 1/125 inch. (c) 1/5 inch.

20) The magnetic field that a current-carrying (circular) loop of wire generates (a) is perpendicular to the plane of the loop. (b) has its maximum effect at the center of the loop and normal to it. (c) has an upward direction if the current flows in the circular loop horizontally in the counterclockwise direction. (d) a, b, & c.

21) A solenoid is a coil whose length is (a) at most 5 times its radius. (b) at least 10 times its radius.

22) The magnetic field inside a solenoid and in the vicinity of its middle (a) is fairly uniform. (b) is non-uniform. (c) has a magnitude of B = μ[o]nI where n is its number of turns. (d) has a magnitude of B = μ[o]nI where n is its number of turns per unit length. (e) a & d.

23) A solenoid is 14.0 cm long and has 2800. turns. A current of 5.00 A flows through it. The magnetic field strength inside and near its middle is (a) 0.0176 T. (b) 0.126 T. (c) 0.00126 T.

24) The magnetic field strength inside and at the middle of an 8.0-cm long solenoid is 0.377 T and it carries a 5.00-Amp current. The number of turns of the solenoid is (a) 4,800 turns. (b) 12,000 turns. (c) 6,000 turns.

25) The formula for the magnetic field strength, B, at the center of a coil (not a solenoid) that has N turns and carries a current, I, is (a) B = Nμ[o]I / (2R). (b) B = Nμ[o]I / (2πR).

Problems:

1) A proton moving at a speed of 3.6x10^6 m/s enters a downward 0.0800-T uniform magnetic field normal to its field lines. Find (a) its radius of rotation in the field, (b) the magnitude of the force on it, (c) its centripetal acceleration, (d) its angular speed, and (e) its rpm. M[p] = 1.67x10^-27 kg and |e^+| = 1.6x10^-19 Coul.

2) An electron enters a magnetic field normal to its field lines and is forced to spin at 4.65x10^6 rpm. Find (a) its angular speed in rd/s, (b) the strength of the field, and (c) its speed if the radius of rotation is 8.00 cm. M[e] = 9.108x10^-31 kg and |e^-| = 1.6x10^-19 Coul.

3) 280-keV alpha-particles enter a 0.125-T magnetic field perpendicular to its field lines. Find (a) the energy of each particle in Joules, (b) the speed of the particles, (c) the radius of rotation they attain, and (d) their rpm if the field is big enough to keep the particles inside. Each alpha-particle is a helium nucleus; it contains 2 protons and 2 neutrons. M[p] = 1.672x10^-27 kg and M[n] = 1.674x10^-27 kg.
4) In a left-to-right flow of alpha-rays (helium nuclei) coming out of a radioactive substance, a 0.0325-T magnetic field is placed in the downward direction. What magnitude electric field should be placed around the flow such that only 0.0360-MeV alpha-particles survive both fields?

5) In a cyclotron, electrons are to be accelerated. The strength of the magnetic field is 0.00025 Tesla. (a) Find the period of rotation of the electrons, (b) their frequency, (c) their final speed if the final radius before hitting the target is 2.0 m (neglect the relativistic effect), and (d) their K.E. in Joules and keVs.

6) In a cyclotron, protons are accelerated to a speed of 2.75x10^8 m/s. (a) By what factor does the proton's mass increase? Knowing that the rest mass of the proton is M[o] = 1.672x10^-27 kg, determine (b) its mass at that speed.

7) A solenoid is 10.0 cm long and has 4500 turns. A 0.500-A current flows through it. Find (a) the strength of the magnetic field inside it toward the middle. (b) If an iron core is inserted in the solenoid, what will the new field strength be?

8) The magnetic field inside a 12.0-cm long solenoid is 0.0150 T when a current of 466 mA flows through it. How many turns does it have?

Answers:
1) 47.0 cm, 4.60x10^-14 N, 2.76x10^13 m/s^2, 7.66x10^6 rd/s, 73.2x10^6 rpm
2) 4.87x10^5 rd/s, 2.77x10^-6 Tesla, 3.89x10^4 m/s
3) 4.48x10^-14 J, 3.66x10^6 m/s, 0.612 m, 57.1x10^6 rpm
4) 42,600 N/Coul.
5) 1.4x10^-7 s, 7.0x10^6 Hz, 8.8x10^7 m/s, (3.5x10^-15 J, or 22 keV)
6) 2.50, 4.18x10^-27 kg
7) 0.0283 T, about 11 T
8) 3070 turns
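For readers who want to verify the answer key, here is one way to check Problem 1 in Python (a sketch of ours, not part of the chapter):

```python
from math import pi

# Check of homework Problem 1: a proton at 3.6e6 m/s in a 0.0800-T field.
q, M, v, B = 1.6e-19, 1.67e-27, 3.6e6, 0.0800

R = M * v / (q * B)      # (a) radius of rotation, ~0.47 m
F = q * v * B            # (b) force, ~4.6e-14 N
a = F / M                # (c) centripetal acceleration, ~2.8e13 m/s^2
w = v / R                # (d) angular speed, ~7.7e6 rd/s
rpm = w * 60 / (2 * pi)  # (e) ~73.2e6 rpm
print(R, F, a, w, rpm)
```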
{"url":"http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter21.htm","timestamp":"2014-04-16T16:23:25Z","content_type":null,"content_length":"68982","record_id":"<urn:uuid:3fbd0b18-7ee0-4f6c-b8a8-64917b0ba1ad>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial vanishing viscosity limit for the 2D Boussinesq system with a slip boundary condition

This article studies the partial vanishing viscosity limit of the 2D Boussinesq system in a bounded domain with a slip boundary condition. The result is proved globally in time by a logarithmic Sobolev inequality.

2010 MSC: 35Q30; 76D03; 76D05; 76D07.

Keywords: Boussinesq system; inviscid limit; slip boundary condition

1 Introduction

Let Ω ⊂ ℝ^2 be a bounded, simply connected domain with smooth boundary ∂Ω, and let n be the unit outward normal vector to ∂Ω. We consider the Boussinesq system in Ω × (0, ∞):

∂_t u + (u·∇)u + ∇π − Δu = θe_2,   (1.1)
div u = 0,   (1.2)
∂_t θ + u·∇θ = ϵΔθ,   (1.3)
u·n = 0, ω = 0, ∂θ/∂n = 0 on ∂Ω × (0, ∞),   (1.4)
(u, θ)(x, 0) = (u_0, θ_0)(x), x ∈ Ω,   (1.5)

where u, π, and θ denote the unknown velocity vector field, pressure scalar and temperature of the fluid. ϵ > 0 is the heat conductivity coefficient and e_2 := (0, 1)^t. ω := curl u := ∂_1 u_2 − ∂_2 u_1 is the vorticity.

The aim of this article is to study the partial vanishing viscosity limit ϵ → 0. When Ω := ℝ^2, the problem has been solved by Chae [1]. When θ = 0, the Boussinesq system reduces to the well-known Navier-Stokes equations. The investigation of the inviscid limit of solutions of the Navier-Stokes equations is a classical issue. We refer to the articles [2-7] when Ω is a bounded domain. However, the methods in [1-6] cannot be used here directly. We will use a well-known logarithmic Sobolev inequality in [8,9] to complete our proof. We will prove:

Theorem 1.1. Let u_0 ∈ H^3(Ω) with div u_0 = 0 in Ω, u_0·n = 0 and curl u_0 = 0 on ∂Ω, and let θ_0 ∈ H^3(Ω). Then there exists a positive constant C independent of ϵ such that for any T > 0,

||u||_{L^∞(0,T;H^3)} + ||θ||_{L^∞(0,T;H^2)} ≤ C,   (1.6)

which implies

(u_ϵ, θ_ϵ) → (u, θ) in C([0,T]; H^2) × C([0,T]; H^1) as ϵ → 0.   (1.7)

Here (u, θ) is the unique solution of the problem (1.1)-(1.5) with ϵ = 0.

2 Proof of Theorem 1.1

Since (1.7) follows easily from (1.6) by the Aubin-Lions compactness principle, we only need to prove the a priori estimates (1.6). From now on we will drop the subscript ϵ, and throughout this section C will be a constant independent of ϵ > 0.

First, we recall the following two lemmas in [8-10].

Lemma 2.1 ([8,9]). There holds

||∇u||_{L^∞(Ω)} ≤ C(1 + ||ω||_{L^∞(Ω)} log(e + ||u||_{H^3(Ω)}))

for any u ∈ H^3(Ω) with div u = 0 in Ω and u·n = 0 on ∂Ω.

Lemma 2.2 ([10]). For any u ∈ W^{s,p} with div u = 0 in Ω and u·n = 0 on ∂Ω, there holds

||u||_{W^{s,p}} ≤ C(||u||_{L^p} + ||ω||_{W^{s-1,p}})

for any s > 1 and p ∈ (1, ∞).

By the maximum principle, it follows from (1.2), (1.3), and (1.4) that

||θ||_{L^∞(0,T;L^∞)} ≤ ||θ_0||_{L^∞} ≤ C.   (2.1)

Testing (1.3) by θ, using (1.2), (1.3), and (1.4), we see that

(1/2) d/dt ∫_Ω θ^2 dx + ϵ ∫_Ω |∇θ|^2 dx = 0,

which gives

||θ||^2_{L^∞(0,T;L^2)} + 2ϵ ∫_0^T ||∇θ||^2_{L^2} dt ≤ ||θ_0||^2_{L^2} ≤ C.   (2.2)

Testing (1.1) by u, using (1.2), (1.4), and (2.1), we find that

(1/2) d/dt ∫_Ω |u|^2 dx + ∫_Ω ω^2 dx = ∫_Ω θu_2 dx ≤ C||u||_{L^2},

which gives

||u||_{L^∞(0,T;L^2)} + ||u||_{L^2(0,T;H^1)} ≤ C.   (2.3)

Here we used the well-known inequality ||∇u||_{L^2} ≤ C||ω||_{L^2}, valid for div u = 0 in Ω and u·n = 0 on ∂Ω.

Applying curl to (1.1), using (1.2), we get

∂_t ω + u·∇ω − Δω = ∂_1 θ.   (2.4)

Testing (2.4) by |ω|^{p-2}ω (p > 2), using (1.2), (1.4), and (2.1), we obtain a Gronwall-type differential inequality for ||ω||_{L^p}, which gives

||ω||_{L^∞(0,T;L^p)} ≤ C.   (2.5)

(2.4) can be rewritten as

∂_t ω − Δω = ∂_1 f_1 + ∂_2 f_2

with f_1 := θ − u_1 ω, f_2 := −u_2 ω. Using (2.1), (2.5) and the L^∞-estimate of the heat equation, we reach the key estimate

||ω||_{L^∞(0,T;L^∞)} ≤ C.   (2.6)

Let τ be any unit tangential vector of ∂Ω; using (1.4), we infer that

u·∇θ = (u·τ) ∂θ/∂τ on ∂Ω × (0, ∞).   (2.7)

It follows from (1.3), (1.4), and (2.7) that

ϵΔθ = ∂_t θ + (u·τ) ∂θ/∂τ on ∂Ω × (0, ∞).   (2.8)

Applying Δ to (1.3), testing by Δθ, and using (1.2), (1.4), and (2.8) to handle the boundary term, we derive

(1/2) d/dt ||Δθ||^2_{L^2} + ϵ||∇Δθ||^2_{L^2} ≤ C||∇u||_{L^∞}||Δθ||^2_{L^2} + C||Δu||_{L^4}||∇θ||_{L^4}||Δθ||_{L^2}.   (2.9)

Now using the Gagliardo-Nirenberg inequalities we have

||∇θ||_{L^4} ≤ C||θ||^{1/2}_{L^∞}||θ||^{1/2}_{H^2}, ||Δu||_{L^4} ≤ C||ω||^{1/2}_{L^∞}||ω||^{1/2}_{H^2},   (2.10)

so that (2.9), together with (2.1) and (2.6), yields

d/dt ||Δθ||^2_{L^2} ≤ C(1 + ||∇u||_{L^∞})||Δθ||^2_{L^2} + C(1 + ||Δω||^2_{L^2}).   (2.11)

Similarly to (2.7) and (2.8), it follows from (2.4) and (1.4) that

∂_t ω = 0 and ∂ω/∂τ = 0 on ∂Ω × (0, ∞),   (2.12)

and hence

Δω = −∂_1 θ on ∂Ω × (0, ∞).   (2.13)

Applying Δ to (2.4), testing by Δω, using (1.2), (1.4), (2.13), (2.10), and Lemma 2.2, we reach

(1/2) d/dt ||Δω||^2_{L^2} + ||∇Δω||^2_{L^2} ≤ C(1 + ||∇u||_{L^∞})||Δω||^2_{L^2} + C||Δθ||^2_{L^2} + C,

which yields

d/dt ||Δω||^2_{L^2} ≤ C(1 + ||∇u||_{L^∞})(1 + ||Δω||^2_{L^2}) + C||Δθ||^2_{L^2}.   (2.14)

Combining (2.11) and (2.14), bounding ||∇u||_{L^∞} by Lemma 2.1 together with (2.6), and using the (logarithmic) Gronwall inequality, we conclude that

||θ||_{L^∞(0,T;H^2)} ≤ C,   (2.15)

||ω||_{L^∞(0,T;H^2)} ≤ C, and thus, by Lemma 2.2, ||u||_{L^∞(0,T;H^3)} ≤ C.   (2.16)

It follows from (1.1), (1.3), (2.15), and (2.16) that ∂_t u and ∂_t θ are bounded in L^∞(0,T;L^2) uniformly in ϵ. This completes the proof.

This study was partially supported by the Zhejiang Innovation Project (Grant No. T200905), the ZJNSF (Grant No. R6090109), and the NSFC (Grant No. 11171154).
{"url":"http://www.boundaryvalueproblems.com/content/2012/1/20/","timestamp":"2014-04-19T07:00:52Z","content_type":null,"content_length":"84774","record_id":"<urn:uuid:e25ed7a4-d600-45ef-b316-65007d05d0a6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
A Comparative Study of Dark Matter in the MSSM and Its Singlet Extensions: A Mini Review

Advances in High Energy Physics, Volume 2012 (2012), Article ID 216941, 22 pages

Review Article

Institute of Theoretical Physics, College of Applied Science, Beijing University of Technology, Beijing 100124, China

Received 23 May 2012; Revised 28 July 2012; Accepted 12 September 2012

Academic Editor: Ulrich Ellwanger

Copyright © 2012 Wenyu Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this note we briefly review the recent studies of dark matter in the MSSM and its singlet extensions: the NMSSM, the nMSSM, and the general singlet extension. Under the new detection results of CDMS II, XENON, CoGeNT, and PAMELA, we find that (i) the latest detection results can exclude a large part of the parameter space which is allowed by current collider constraints in these models; the future SuperCDMS and XENON can cover most of the allowed parameter space; (ii) the singlet sector will decouple from the MSSM-like sector in the NMSSM; however, the singlet sector makes the nMSSM quite different from the MSSM; (iii) the NMSSM can allow light dark matter at several GeV to exist; a light CP-even or CP-odd Higgs boson must be present so as to satisfy the measured dark matter relic density, and in case of the presence of a light CP-even Higgs boson, the light neutralino dark matter can explain the CoGeNT and DAMA/LIBRA results; (iv) the general singlet extension of the MSSM gives a perfect explanation for both the relic density and the PAMELA result through the Sommerfeld-enhanced annihilation. Higgs decays in the different scenarios are also studied.

1. Introduction

Although there are many theoretical or aesthetic arguments for the necessity of TeV-scale new physics, the most convincing evidence is from the Wilkinson Microwave Anisotropy Probe (WMAP) observation of the cosmic cold dark matter, which naturally indicates the existence of weakly interacting massive particles (WIMPs) beyond the prediction of the standard model (SM). By contrast, the neutrino oscillations may rather imply trivial new physics (plainly adding right-handed neutrinos to the SM) or new physics at some very high see-saw scale inaccessible to any foreseeable colliders. Therefore, the TeV-scale new physics to be unraveled at the Large Hadron Collider (LHC) is most likely related to the WIMP dark matter. If WIMP dark matter is chosen by nature, it gives a strong argument for low-energy supersymmetry (SUSY) with R-parity, which provides a good dark matter candidate. Nevertheless, SUSY is also motivated by its elegant solution of the hierarchy problem. It can also solve other puzzles of the SM, such as the deviation of the muon anomalous magnetic moment from the SM prediction.

In the framework of SUSY, the most intensively studied model is the minimal supersymmetric standard model (MSSM) [1], which is the most economical realization of SUSY. However, this model suffers from the μ-problem. The μ-parameter is the only dimensional parameter in the SUSY-conserving sector. From a top-down view, one would expect μ to be either zero or at the Planck scale.
But in the MSSM, the minimization of the Higgs potential relates μ to the electroweak (EW) scale soft parameters [2], so μ must be at the EW scale, while LEP constraints on the chargino mass require μ to be nonzero [3]. A simple solution is to promote μ to a dynamical field in extensions of the MSSM that contain an additional singlet superfield which does not interact with the MSSM fields other than the two Higgs doublets. An effective μ of EW size, μ_eff = λ⟨S⟩, is then obtained naturally, where ⟨S⟩ denotes the vacuum expectation value (VEV) of the singlet field. Among these extended models, the next-to-minimal supersymmetric standard model (NMSSM) [4–27] and the nearly minimal supersymmetric standard model (nMSSM) [28–37] have attracted much attention recently. Note that the little hierarchy problem, which also troubles the MSSM, is greatly relaxed in the NMSSM.

If we introduce a singlet superfield to the MSSM, the Higgs sector acquires one more CP-even component and one more CP-odd component, and the neutralino sector acquires one more singlino component. These singlet multiplets compose a "singlet sector" of the MSSM, which can make the phenomenology of SUSY dark matter and Higgs bosons different from that of the MSSM. More and more precise results of dark matter detection give us an opportunity to test whether this singlet sector really exists. For example, experiments for the underground direct detection of cold dark matter have recently made significant progress: the null observation of dark matter in the CDMS and XENON100 experiments has set rather tight upper limits on the spin-independent (SI) cross-section of WIMP-nucleon scattering [38–40]. The CoGeNT experiment [41] reported an excess which cannot be explained by any known background sources but seems to be consistent with the signal of a light WIMP with mass around 10 GeV and a sizable SI scattering rate. Intriguingly, this range of mass and scattering rate is compatible with a dark matter explanation for both the DAMA/LIBRA data and the preliminary CRESST data [42]. Though the CoGeNT result is not consistent with the CDMS or XENON results, it implies that the mass of dark matter can span a very wide range around the EW scale, that is, from a few GeV to several TeV. The indirect-detection experiment PAMELA also observed an excess of cosmic-ray positrons in the energy range 10–100 GeV [43, 44], which may be explained by dark matter.

In this paper, we will give a short review of the differences between the MSSM and the MSSM with a singlet sector under the constraints of the new dark matter detection results. As Higgs hunting at colliders is delicately related to dark matter detection, the implications for Higgs searches are also reviewed. The content is based on our previous work [45–47]. The paper is organized as follows: in Section 2, we give a short review of the structures of the MSSM, the NMSSM, and the nMSSM. In Section 3 we compare the models under the constraints of CDMS, XENON, and CoGeNT. In Section 4, a general singlet extension of the MSSM is discussed, and a summary is given in Section 5.

2. The MSSM and Its Singlet Extensions

As an economical realization of supersymmetry, the MSSM has the minimal content of particles, while the NMSSM and the nMSSM extend the MSSM by only adding one singlet Higgs superfield S. The difference between these models is reflected in their superpotentials:

W_MSSM = W_F + μ H_u · H_d,
W_NMSSM = W_F + λ S H_u · H_d + (κ/3) S³,
W_nMSSM = W_F + λ S H_u · H_d + ξ_F M_n² S,

where W_F = Y_u Q · H_u U − Y_d Q · H_d D − Y_e L · H_d E, with Q, U, and D being the squark superfields, and L and E being the slepton superfields. H_u and H_d are the Higgs doublet superfields, Y_u, Y_d, Y_e, λ, and κ are dimensionless coefficients, and μ and the tadpole coefficient ξ_F M_n² are parameters with mass dimension.
Note that there is no explicit μ-term in the NMSSM or the nMSSM, and an effective μ-parameter (denoted as μ = λs) can be generated when the scalar component S of the singlet superfield develops a VEV s. Also note that the nMSSM differs from the NMSSM in the last term, with the trilinear singlet term κS³/3 of the NMSSM replaced by the tadpole term ξ_F M_n² S. As pointed out in [28–36], such a tadpole term can be generated at a high loop level and can naturally be of the SUSY-breaking scale. The advantage of this replacement is that the nMSSM has no discrete symmetry and is thus free of the domain wall problem from which the NMSSM suffers. Corresponding to the superpotential, the Higgs soft terms in the scalar potentials also differ between the three models (the soft terms for gauginos and sfermions are the same and thus are not listed here):

V_soft^MSSM = m_u² |H_u|² + m_d² |H_d|² + (B μ H_u · H_d + h.c.),
V_soft^NMSSM = m_u² |H_u|² + m_d² |H_d|² + m_S² |S|² + (λ A_λ S H_u · H_d + (κ/3) A_κ S³ + h.c.),
V_soft^nMSSM = m_u² |H_u|² + m_d² |H_d|² + m_S² |S|² + (λ A_λ S H_u · H_d + ξ_S M_n³ S + h.c.).

After the scalar fields H_u, H_d, and S develop their VEVs v_u, v_d, and s, respectively, they can be expanded around these VEVs in terms of their CP-even, CP-odd, and charged components. The mass eigenstates can then be obtained by unitary rotations, where h_1, h_2, h_3 and a_1, a_2 are, respectively, the CP-even and CP-odd neutral Higgs bosons, G⁰ and G± are Goldstone bosons, and H± is the charged Higgs boson. Including the scalar part of the singlet sector, the NMSSM and the nMSSM contain a pair of charged Higgs bosons, three CP-even, and two CP-odd neutral Higgs bosons; in the MSSM, we only have two CP-even and one CP-odd neutral Higgs bosons in addition to a pair of charged Higgs bosons.

The MSSM predicts four neutralinos χ̃⁰_i (i = 1–4), that is, mixtures of the neutral gauginos (bino B̃ and neutral wino W̃³) and the neutral higgsinos (H̃_d⁰, H̃_u⁰), while the NMSSM and the nMSSM predict one more neutralino corresponding to the singlino S̃ from the fermionic part of the singlet sector. In the basis (B̃, W̃³, H̃_d⁰, H̃_u⁰, S̃) (for the MSSM S̃ is absent), the neutralino mass matrix (2.7) is built from the U(1)_Y and SU(2) soft gaugino mass parameters M₁ and M₂, the (effective) μ-term, and, in the singlet extensions, the λ and κ couplings, with tan β = v_u/v_d; its explicit form is spelled out in the numerical sketch below. The lightest neutralino χ̃⁰₁ is assumed to be the lightest supersymmetric particle (LSP), serving as the SUSY dark matter particle. It is composed as χ̃⁰₁ = N₁₁ B̃ + N₁₂ W̃³ + N₁₃ H̃_d⁰ + N₁₄ H̃_u⁰ + N₁₅ S̃ (2.8), where N is the unitary matrix (N₁₅ is zero for the MSSM) that diagonalizes the mass matrix in (2.7).

For the mass matrices above we should note the following two points. (1) For a moderate value of κ, the neutralino sector of the NMSSM goes back to that of the MSSM when λ approaches zero (with μ = λs fixed). This is because in such a case the singlino component becomes superheavy and decouples from the EW scale; the singlet scalar then no longer mixes with the two Higgs doublets, and the NMSSM is almost the same as the MSSM at the EW scale. (2) Since the corresponding diagonal element of (2.7) vanishes in the nMSSM (there is no κ term), the singlino does not decouple when λ approaches zero. In fact, in the nMSSM the mass of the LSP can be written in closed form, approximately m_χ ≈ λ²v² sin 2β / μ for λv ≪ μ. This shows that, to get a heavy χ̃⁰₁, we need a large λ and a small μ, as well as a moderate tan β.

The chargino sector of these three models is the same except that in the NMSSM/nMSSM the parameter μ is replaced by μ = λs. The charginos χ̃±_{1,2} are mixtures of the charged higgsinos H̃± and winos W̃±, whose mass matrix in the basis (W̃±, H̃±) is

M_C = ( M₂, √2 m_W sin β; √2 m_W cos β, μ ).

So the chargino can be wino-dominant (when M₂ is much smaller than μ) or higgsino-dominant (when μ is much smaller than M₂). Since the composition (wino-like, bino-like, higgsino-like, or singlino-like) of the LSP and the chargino is very important in SUSY phenomenology, we will show such properties in our following study.

3. Comparison of the MSSM and the MSSM with a Singlet Sector

3.1. In Light of CDMS II and XENON

First let us examine the MSSM, the NMSSM, and the nMSSM under the constraints of the CDMS II and XENON100 results.
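To make the decoupling in point (1) concrete, the following R sketch (illustrative only; it uses one common sign convention for the tree-level matrix and hypothetical benchmark inputs, not values from this paper) builds the 5×5 NMSSM neutralino mass matrix and shows that, with μ = λs held fixed, the singlino eigenvalue grows like 2κμ/λ as λ decreases:

# NMSSM neutralino mass matrix in the basis (bino, wino, H_d, H_u, singlino);
# mz in GeV, sw2 = sin^2(theta_W), v ~ 174 GeV convention, mu = lambda*s fixed.
neutralino_masses <- function(M1, M2, mu, lambda, kappa, tanb,
                              mz = 91.19, sw2 = 0.231, v = 174) {
  sb <- tanb / sqrt(1 + tanb^2); cb <- 1 / sqrt(1 + tanb^2)
  sw <- sqrt(sw2); cw <- sqrt(1 - sw2)
  s  <- mu / lambda                      # singlet VEV
  M <- matrix(0, 5, 5)
  M[1, ] <- c(M1, 0, -mz*sw*cb,  mz*sw*sb, 0)
  M[2, ] <- c(0, M2,  mz*cw*cb, -mz*cw*sb, 0)
  M[3, ] <- c(M[1, 3], M[2, 3], 0, -mu, -lambda*v*sb)
  M[4, ] <- c(M[1, 4], M[2, 4], -mu, 0, -lambda*v*cb)
  M[5, ] <- c(0, 0, -lambda*v*sb, -lambda*v*cb, 2*kappa*s)
  # physical masses: absolute values of the eigenvalues of the symmetric matrix
  sort(abs(eigen(M, symmetric = TRUE)$values))
}

# As lambda -> 0 with mu fixed, the heaviest (singlino) state runs away:
for (lam in c(0.5, 0.1, 0.02))
  print(neutralino_masses(M1 = 100, M2 = 200, mu = 200,
                          lambda = lam, kappa = 0.3, tanb = 5))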
As both the current and projected WIMP-nucleon limits of CDMS and XENON are similar to each other, we will show only one of them. As a good substitute for the SM, a SUSY model must satisfy all current collider and detector measurements. In our study we consider the following experimental constraints [48]: (1) we require the thermal relic density of χ̃⁰₁ to lie in the WMAP-measured range for the dark matter relic density; (2) we require the SUSY contribution to explain the deviation of the muon anomalous magnetic moment from its SM value at the 2σ level; (3) the LEP-I bound on the invisible Z-decay width, the LEP-II upper bounds on neutralino pair production cross-sections, as well as the lower mass bounds on sparticles from direct searches at LEP and the Tevatron; (4) the constraints from the direct search for Higgs bosons at LEP-II, including the exotic decay modes, which limit all possible channels for the production of the Higgs bosons; (5) the constraints from B-physics observables such as rare B-meson decays, B-meson mixing, and the mass differences ΔM_d and ΔM_s; (6) the constraints from precision EW observables such as the W-boson mass, the effective weak mixing angle, and R_b; (7) the constraints from radiative Υ decay into a light Higgs boson, and the Tevatron search for a light Higgs boson [49]. The constraints (1)–(5) have been encoded in the package NMSSMTools [50]. We use this package in our calculation and extend it by adding the constraints (6) and (7). As pointed out in [49], the constraints in (7) are important for a light Higgs boson. In addition to the above experimental limits, we also consider the constraint from the stability of the Higgs potential, which requires that the physical vacuum of the Higgs potential with nonvanishing VEVs of the Higgs scalars should be lower than any local minima.

For the calculation of the cross-section of χ̃⁰₁-nucleon scattering, we use the formulas in [51, 52] for the MSSM and extend them to the NMSSM/nMSSM. It is sufficient to consider only the SI interactions between χ̃⁰₁ and the nucleon (with effective couplings f_p for the proton and f_n for the neutron [52]) in the calculation. The leading order of these interactions is induced by exchanging the SM-like Higgs boson at tree level. For moderately light Higgs bosons, f_p is approximated by [52] (and similarly for f_n)

f_p ≈ Σ_{q=u,d,s} f_Tq a_q m_p/m_q + (2/27) f_TG Σ_{q=c,b,t} a_q m_p/m_q,

where f_Tq denotes the fraction of m_p (the proton mass) contributed by the light quark q, f_TG = 1 − Σ_{q=u,d,s} f_Tq is the heavy-quark contribution through gluon exchange, and a_q is the coefficient of the effective scalar quark operator. The χ̃⁰₁-nucleus scattering rate is then given by [52]

σ^SI = (4 μ_r² / π) [ Z f_p + (A − Z) f_n ]²,

where μ_r is the reduced mass of χ̃⁰₁ and the target nucleus of mass number A, and Z (A − Z) is the number of protons (neutrons) in the target nucleus. In our numerical calculations we take commonly adopted values for the hadronic inputs f_Tq. Note that the scattering rate is very sensitive to the value of f_Ts [53, 54]. Recent lattice simulations [55–57] gave a much smaller value of f_Ts (0.020), which reduces the scattering rate significantly, as can be seen in [58, 59].

Considering all the constraints listed above, we scan over the model parameters in the wide ranges of (3.3) (not reproduced here), where M_Q and M_L are the universal soft mass parameters of the first two generations of squarks and the three generations of sleptons, respectively. To reduce the number of relevant soft parameters, we worked in a simplified scenario with a common choice of the soft masses and trilinear couplings for the third-generation squarks. The advantage of such a choice is that other SUSY parameters more easily survive the constraints (so that the bounds we obtain are conservative). Moreover, we assume the grand unification relation for the gaugino masses, M₁ : M₂ = α₁ : α₂ (so that M₁ ≈ 0.5 M₂ at the weak scale) (3.4). This relation is often assumed in studies of SUSY at the TeV scale, for it can be easily generated in the mSUGRA model [60].
Note that relaxing this relation has a large effect on the light neutralino scenario [61–64]. The surviving points for the three models are displayed in Figure 1 for the spin-independent elastic cross-section of χ̃⁰₁-nucleon scattering. We see that for each model the CDMS II limits can exclude a large part of the parameter space allowed by current collider constraints, and the future SuperCDMS (25 kg) limits can cover most of the allowed parameter space. For the MSSM and the NMSSM the dark matter mass is roughly in the range of 50–400 GeV, while for the nMSSM the dark matter mass is constrained below 40 GeV by current experiments and would be further constrained below 20 GeV by SuperCDMS in case of nonobservation.

From Figure 1, we can see that the χ̃⁰₁-nucleon scattering plots of the MSSM and the NMSSM are very similar to each other, but very different from that of the nMSSM. This implies that, under the experimental constraints, the singlet sector decouples from the MSSM-like sector in the NMSSM, and then the NMSSM performs almost the same as the MSSM. However, the singlet components change the EW-scale phenomenology greatly in the nMSSM. This can also be seen in Figures 2 and 3. We can see that for both the MSSM and the NMSSM the LSP is bino-dominant, while for the nMSSM it is singlino-dominant, and the region allowed by the CDMS limits (and the SuperCDMS limits in case of nonobservation) favors a more bino-like LSP for the MSSM/NMSSM and a more singlino-like LSP for the nMSSM. For the MSSM/NMSSM the LSP lower bound around 50 GeV comes from the chargino lower bound of 103.5 GeV plus the assumed GUT relation M₁ ≈ 0.5 M₂, while the upper bound around 400 GeV comes from the bino nature of the LSP (M₁ cannot be too large and must be much smaller than the other relevant mass parameters) plus the experimental constraints like the muon g-2 and B physics. If we do not assume the GUT relation, then the LSP can be as light as 40 GeV, and the LSP lower bound in the MSSM/NMSSM will not be sharply at 50 GeV. (We will return to this in the following section.) For both the MSSM and the NMSSM, the CDMS limits tend to favor a heavier chargino, and ultimately the SuperCDMS limits tend to favor a wino-dominant chargino. Note that there still can be a singlino-dominant LSP in some parameter space of the NMSSM [66], but, in the scan ranges (3.3) listed above, getting such a singlino-dominant LSP needs some fine-tuning, so we do not focus on it.

In Figure 4 we show the value of λ versus the charged Higgs boson mass in the NMSSM and the nMSSM. This figure indicates that λ larger than 0.4 is disfavored in the NMSSM. The underlying reason is that the relevant Higgs couplings depend on λ explicitly, and a large λ can enhance the χ̃⁰₁-nucleon scattering rate. By contrast, although CDMS has excluded some points with large λ in the nMSSM, there are still many surviving points with λ as large as 0.7. We have discussed the reason above: to get a heavy χ̃⁰₁ in the nMSSM, one needs a large λ and a small μ, as well as a moderate tan β.

From the surviving parameter space of all the models above, we can anticipate that the Higgs decays will be similar for the MSSM and the NMSSM, but quite different for the nMSSM. This can be seen in Figure 5, which shows the branching ratio of the invisible decay into a pair of LSPs versus the mass of the SM-like Higgs boson (which is h₁ here and is dominantly Higgs-doublet). Such a decay is strongly correlated with the χ̃⁰₁-nucleon scattering because the Higgs-LSP coupling is involved in both processes. We see that in the MSSM and the NMSSM this decay mode can open only in a very narrow parameter space, since χ̃⁰₁ cannot be very light, and in the allowed region this decay has a very small branching ratio.
However, in the nMSSM this decay can open in a large part of the parameter space, since the LSP can be very light, and its branching ratio can be quite large.

3.2. Light Dark Matter in the NMSSM

As discussed in the introduction, the data of the CoGeNT experiment favor a light dark matter particle around 10 GeV. However, scanning the parameter space of the MSSM, we find that it is very difficult to obtain a neutralino lighter than about 28 GeV, unless it is associated with a light stau as the next-to-lightest supersymmetric particle (NLSP); but such a scenario always needs fine-tuning of the parameter space [67–70]. The main reason for the absence of a lighter χ̃⁰₁ is that the dominant annihilation channel for χ̃⁰₁ in the early universe is through s-channel exchange of the pseudoscalar Higgs boson (A), and the measured dark matter relic density then requires a rather light A together with a large tan β, which is in conflict with the constraints from the LEP experiments and B physics [71–74]. The LHC data give an even stronger constraint on the light-pseudoscalar scenario [75, 76], such that light dark matter seems impossible in the MSSM. Though in the nMSSM the neutralino can be as light as 10 GeV (shown in Figure 1), the scattering rate there lies well below the CoGeNT-favored region.

In the NMSSM, however, with the participation of the singlet sector one can get very light Higgs bosons [4–27]. This feature is particularly useful for the light-χ̃⁰₁ scenario, since it opens up new important annihilation channels for χ̃⁰₁, that is, either into a pair of light Higgs bosons h₁ (or a₁) or into a pair of fermions via s-channel exchange of h₁ (or a₁) [74, 77–79]. For the former case, χ̃⁰₁ must be heavier than h₁ (a₁), while for the latter case, due to the very weak couplings of h₁ (a₁) with χ̃⁰₁ and with the SM fermions, a resonance enhancement (i.e., m_h₁ or m_a₁ must be close to twice the LSP mass) is needed to accelerate the annihilation. So a light χ̃⁰₁ is necessarily accompanied by a light h₁ or a₁ to provide the required dark matter relic density. From the discussion in the section above, a light χ̃⁰₁ can be obtained by relaxing the GUT relation (3.4); thus the LSP in the NMSSM may explain the CoGeNT detection. Note that, as the LSP in the nMSSM is singlino-dominant, relaxing the GUT relation will not change the phenomenology of dark matter and Higgs bosons there very much.

Now we discuss how to get a light h₁ or a₁ in the NMSSM. A light a₁ can be easily obtained when the theory is close to the R-symmetry or Peccei-Quinn symmetry limit, which can be realized by making the relevant soft trilinear couplings suitably small [4–27]. In contrast, a light h₁ cannot be obtained so easily. However, as shown below, it can still be achieved by a somewhat subtle cancelation via tuning the value of A_κ. We note that, for any theory with multiple Higgs fields, the existence of a massless Higgs boson implies the vanishing of the determinant of its squared mass matrix, and vice versa. For the NMSSM, at tree level the parameter A_κ only enters the mass terms of the singlet Higgs bosons, so the determinant of the mass matrix of the CP-even Higgs bosons depends on A_κ linearly [4–27]. When the other relevant parameters are fixed, one can then obtain a light h₁ by varying A_κ around the value A_κ⁰ which is the solution to the equation det M² = 0. In practice, one must include the important radiative corrections to the Higgs mass matrix, which complicate the dependence of the determinant on A_κ. However, we checked that the linear dependence is approximately maintained when the other relevant parameters are chosen at the SUSY scale, and one can solve the equation iteratively to get the solution A_κ⁰.
In Figure 6 we display the surviving parameter samples, showing the χ̃⁰₁-nucleon scattering cross-section versus the neutralino dark matter mass (left frame) and versus the mass of h₁ or a₁ (right frame). It shows that the scattering rate of the light dark matter can reach the sensitivity of CDMS, and, consequently, a sizable part of the parameter space is excluded by the CDMS data [65]. The future CDMS experiment can further explore (but cannot completely cover) the remaining parameter space. Note that in the light-h₁ case the scattering rate can be large enough to reach the sensitivity of CoGeNT and to cover the CoGeNT-favored region. The underlying reason is that the χ̃⁰₁-nucleon scattering can proceed through the t-channel exchange of the CP-even Higgs bosons, which is strongly enhanced for a light h₁ [77, 78], while a light a₁ cannot give such an enhancement because the CP-odd Higgs bosons do not contribute to the SI scattering in this way. We noticed that the studies in [73, 80] claimed that the NMSSM is unable to explain the CoGeNT data, but this is because they did not consider the light-h₁ case.

In the light-χ̃⁰₁ scenario, the SM-like Higgs boson h may decay exotically into a pair of LSPs or a pair of light Higgs bosons, and consequently the conventional decays are reduced. This feature is illustrated in Figure 7, which shows that the sum of the exotic decay branching ratios may become dominant, so that the traditional decays can be severely suppressed. Numerically, we find that the branching ratio of the conventional decay into b quarks is strongly suppressed for all the surviving samples in the light-h₁ case and for most of the surviving samples in the light-a₁ case (for the remaining surviving samples in the light-a₁ case, the exotic decay is usually kinematically forbidden, so that the conventional branching ratio may stay large). Another interesting feature shown in Figure 7 is that, due to the opening of the exotic decays, h may be significantly lighter than the LEP bound. This situation is favored by the fit of the precision electroweak data and is of great theoretical interest [81]. Since the conventional decay modes of h may be greatly suppressed, especially in the light-h₁ case which can give a rather large χ̃⁰₁-nucleon scattering rate, the LHC search for h via the traditional channels may become difficult.

Now the LHC has observed a new particle in the mass region around 125–126 GeV, which is most probably the long-sought Higgs boson [82]. In this mass range, the most important discovery channel for h at the LHC is the di-photon signal. In Figure 8 we give the ratio of the di-photon production rate to the SM prediction at the LHC. In calculating the rate, we used the narrow-width approximation and only considered the leading loop contributions from the top quark, the bottom quark, and the squarks. Figure 8 indicates that, compared with the SM prediction, the di-photon ratio in the NMSSM in the light-χ̃⁰₁ scenario is suppressed to less than 0.4 in the light-h₁ case; in the light-a₁ case, most samples predict the same conclusion. Since in the light-h₁ case the χ̃⁰₁-nucleon scattering rate can reach the CoGeNT sensitivity, this means that in the framework of the NMSSM the CoGeNT search for light dark matter is correlated with the LHC search for the Higgs boson via the di-photon channel. We checked that, should the future XENON experiment fail to observe dark matter, only a small fraction of the surviving samples in the light-h₁ case would predict a di-photon ratio larger than 0.4.

4. General Singlet Extension for the Explanation of PAMELA

To explain the PAMELA excess by dark matter annihilation, there are some challenges.
First, dark matter must annihilate dominantly into leptons, since PAMELA has observed no excess of antiprotons [43, 44] (however, as pointed out in [83], this statement may not be so solid due to the significant astrophysical uncertainties associated with their propagation). Second, the explanation of the PAMELA excess requires an annihilation rate which is too large to be consistent with the relic abundance if dark matter is produced thermally in the early universe. To tackle these difficulties, a new theory of dark matter was proposed in [84]. In this new theory the Sommerfeld effect of a new force in the dark sector can greatly enhance the annihilation rate when the velocity of dark matter is much smaller than the velocity at freeze-out in the early universe, and dark matter annihilates into light particles which are kinematically allowed to decay only to muons or electrons.

The above idea is hard to realize in the MSSM, because there is no new force in the neutralino dark matter sector to induce the Sommerfeld enhancement, and neutralino dark matter annihilates largely to final states consisting of heavy quarks or gauge and/or Higgs bosons [52, 85–88]. However, as discussed in [89], in a general extension of the MSSM by a singlet Higgs superfield, the idea of [84] can be realized by singlino-like neutralino dark matter: (i) the singlino dark matter annihilates to the light singlet Higgs bosons, and the relic density can be naturally obtained from the interaction between the singlino and the singlet Higgs bosons; (ii) the singlet Higgs bosons, not related to electroweak symmetry breaking, can be light enough to be kinematically allowed to decay dominantly into muons or electrons through their tiny mixing with the Higgs doublets; (iii) the Sommerfeld enhancement needed in dark matter annihilation for the explanation of the PAMELA result can be induced by the light singlet Higgs boson (a small numerical illustration is given below). In the following, we show how this happens; the Higgs decays are also investigated.

4.1. Higgs and Neutralino Spectrum

If we introduce a singlet Higgs superfield S to the MSSM in full generality, the renormalizable holomorphic superpotential of the Higgs sector is given by [89]

W_Higgs = (μ + λ S) H_u · H_d + ξ S + (μ′/2) S² + (κ/3) S³,

which includes the linear, quadratic, and cubic terms of the singlet superfield (like the Wess-Zumino model [90]). Note that, in this case, we do not require the singlet to solve the μ-problem. The soft SUSY-breaking terms take the corresponding general form, including a soft trilinear coupling A_λ for the λ S H_u · H_d term. After the Higgs fields develop the VEVs v_u, v_d, and s, we obtain a Higgs spectrum similar to that of the NMSSM and the nMSSM, as follows (the explicit mass matrices can be found in [89]). (1) The CP-even Higgs mass matrix is a 3×3 matrix whose doublet entries involve the SU(2) and U(1)_Y gauge couplings of the SM. (2) The CP-odd Higgs mass matrix, after dropping the Goldstone mode, is a 2×2 matrix; it can be diagonalized by an orthogonal matrix, and the physical CP-odd states a₁ and a₂ (ordered in mass) are mixtures of the doublet and singlet components. (3) The charged Higgs mass matrix takes the MSSM form with the μ-parameter replaced by its effective value. (4) The neutralino mass matrix is the five-state generalization of (2.7).

4.2. Explanation of PAMELA and Implication for Higgs Decays

To explain the observation of PAMELA, the lightest CP-even and CP-odd Higgs bosons h₁ and a₁ are singlet-dominant, while the next-to-lightest CP-even Higgs boson h₂ is doublet-dominant (the SM-like Higgs boson); we keep the notation of Section 2. As discussed in [89], when the lightest neutralino in (2.8) is singlino-dominant, it can be a perfect candidate for dark matter.
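To get a rough feel for point (iii), the following R sketch evaluates the Sommerfeld factor in the Coulomb approximation (a textbook limit valid when the mediator is very light compared to the product of the coupling and the dark matter mass; the coupling value below is a hypothetical illustration, not a fit from this paper):

# Coulomb-limit Sommerfeld factor for an attractive force with effective
# coupling alpha, as a function of the dark matter relative velocity v (in c):
# S(v) = x / (1 - exp(-x)), with x = pi * alpha / v.
sommerfeld <- function(v, alpha) {
  x <- pi * alpha / v
  x / (1 - exp(-x))
}

alpha <- 0.01                  # hypothetical effective coupling
sommerfeld(0.3,  alpha)        # at freeze-out velocities: ~1, no enhancement
sommerfeld(1e-3, alpha)        # in the galactic halo today: O(30) enhancement

The point is simply that the same interaction gives essentially no boost at freeze-out but a large boost at today's halo velocities, which is exactly what the simultaneous fit of the relic density and the PAMELA excess requires.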
As shown in Figure 9, such singlino dark matter annihilates to a pair of light singlet Higgs bosons, followed by their decays into muons (the singlet Higgs bosons have very small mixings with the Higgs doublets and thus very small couplings to the SM fermions). In order to decay dominantly into muons, the light singlet Higgs boson must be light enough (below the tau-pair threshold). Further, in order to induce the Sommerfeld enhancement, the exchanged singlet Higgs boson must also be light enough. From the cubic superpotential term we know that the couplings of the singlino to the singlet Higgs bosons are proportional to κ; to obtain the correct relic density of dark matter, κ should be sizable. Since h₁ and a₁ are singlet-dominant and χ̃⁰₁ is singlino-dominant, the mixing between the singlet and doublet Higgs fields must be small; from the superpotential in (4.1) we see that this means the mixing parameter λ must be small enough. On the other hand, from (4.5) and (4.10), the lightness of h₁ and a₁ also requires certain combinations of the singlet-sector parameters to approach zero. Therefore, in our scan we impose the corresponding relation among these parameters to realize light h₁ and a₁.

The numerical results of this model are displayed in different planes in Figures 10–12. We see from Figure 10 that, in the relevant mass range, the light singlet Higgs boson decays dominantly into muons. It is clear that it can be as light as a few GeV, which is light enough to induce the necessary Sommerfeld enhancement, as shown in Figure 11. In the left plot of Figure 12, we show the branching ratios of the SM-like Higgs boson decays. We see that in the allowed parameter space it tends to decay into h₁h₁ or a₁a₁ instead of b quark pairs. This can be understood as follows: the MSSM parameter space is stringently constrained by the LEP experiments if the SM-like Higgs boson is relatively light and decays dominantly to b quarks, and to escape such stringent constraints it tends to have exotic decays into h₁h₁ or a₁a₁. As a result, the allowed parameter space tends to favor a large A_λ, as shown in the right plot of Figure 12, which greatly enhances the couplings of the SM-like Higgs boson to h₁h₁ and a₁a₁ through the soft term λ A_λ S H_u · H_d, even though the singlet has a small mixing with the doublet Higgs bosons. Such an enhancement can be easily seen: taking the coupling to a₁a₁ as an example, the soft term λ A_λ S H_u · H_d contains, after electroweak symmetry breaking and the small singlet-doublet mixing, a piece proportional to λ A_λ that gives the interaction of the SM-like Higgs boson with a pair of light CP-odd states (see (2.6) and (4.15)). Although the mixing is small for a small λ, a large A_λ can enhance the coupling.

Note that the mass of the Higgs boson observed at the LHC is around 125 GeV; in the MSSM the dominant decay mode of such a Higgs boson is into b quarks, while in this general singlet extension of the MSSM its dominant decay mode may be changed to h₁h₁ or a₁a₁, as shown in our results above.

Finally, we note that, for the specified singlet extensions like the nMSSM and the NMSSM, the explanation of PAMELA and the relic density through the Sommerfeld enhancement is not possible. The reason is that the parameter space of such models is stringently constrained by various experiments and by the dark matter relic density, as shown in the section above, and, as a result, the neutralino dark matter may explain either the relic density or PAMELA, but not both via the Sommerfeld enhancement. For example, in the nMSSM the various experiments and the dark matter relic density constrain the neutralino dark matter particle to a narrow mass range [91], which is too light to explain PAMELA.

5. Summary

Finally, we summarize how SUSY dark matter and Higgs physics are changed if we introduce a singlet to the MSSM. Under the latest results of dark matter detection, we have the following. (1) In the MSSM, the NMSSM, and the nMSSM, the latest detection results can exclude a large part of the parameter space allowed by current collider constraints, and the future SuperCDMS and XENON can cover most of the allowed parameter space.
(2) Under the new dark matter constraints, the singlet sector decouples from the MSSM-like sector in the NMSSM; thus, the phenomenologies of dark matter and the Higgs bosons are similar to those of the MSSM. The singlet sector makes the nMSSM quite different from the MSSM: the LSP in the nMSSM is singlino-dominant, and the SM-like Higgs boson mainly decays into the singlet sector. Future precision measurements will give us an opportunity to determine whether the new scalar is from the standard model or from SUSY. Perhaps the nMSSM will be the first model excluded, owing to its much larger branching ratio for invisible Higgs decay. (3) The NMSSM can allow light dark matter at several GeV to exist. A light CP-even or CP-odd Higgs boson must be present so as to satisfy the measured dark matter relic density. In the presence of a light CP-even Higgs boson, the light neutralino dark matter can explain the CoGeNT and DAMA/LIBRA results. Further, we find that in such a scenario the SM-like Higgs boson will decay predominantly into a pair of light Higgs bosons or a pair of neutralinos, and the conventional decay modes will be greatly suppressed. (4) The general singlet extension of the MSSM gives a perfect explanation for both the relic density and the PAMELA result through Sommerfeld-enhanced annihilation into singlet Higgs bosons, which are light enough to decay dominantly to muons or electrons. Although the light singlet Higgs bosons have small mixings with the Higgs doublets in the allowed parameter space, their couplings with the SM-like Higgs boson can be enhanced by the soft parameter A_λ. In order to meet the stringent LEP constraints, the SM-like Higgs boson tends to decay into the singlet Higgs pairs instead of b quark pairs.

This work was supported in part by the NSFC under grant nos. 11005006 and 11172008, and by the Doctor Foundation of BJUT under grant no. X0006015201102.

References

1. H. E. Haber and G. L. Kane, "The search for supersymmetry: probing physics beyond the standard model," Physics Reports, vol. 117, no. 2–4, pp. 75–263, 1985.
2. S. Abel, E. Accomando, G. Anderson, et al., "Report of the SUGRA Working Group for Run II of the Tevatron," http://arxiv.org/abs/hep-ph/0003154.
3. LEPSUSYWG, ALEPH, DELPHI, L3 and OPAL Collaborations, note LEPSUSYWG/01-03.1, http://lepsusy.web.cern.ch/lepsusy/.
4. J. Ellis, J. F. Gunion, H. E. Haber, L. Roszkowski, and F. Zwirner, "Higgs bosons in a nonminimal supersymmetric model," Physical Review D, vol. 39, no. 3, pp. 844–869, 1989.
5. M. Drees, "Supersymmetric models with extended Higgs sector," International Journal of Modern Physics A, vol. 4, no. 14, p. 3635, 1989.
6. P. N. Pandita, "One-loop radiative corrections to the lightest Higgs scalar mass in the non-minimal supersymmetric standard model," Physics Letters B, vol. 318, no. 2, pp. 338–346, 1993.
7. P. N. Pandita, "Approximate formulas for the neutralino masses in the nonminimal supersymmetric standard model," Physical Review D, vol. 50, pp. 571–577, 1994.
8. S. F. King and P. L. White, "Resolving the constrained minimal and next-to-minimal supersymmetric standard models," Physical Review D, vol. 52, pp. 4183–4216, 1995.
9. B. Ananthanarayan and P. N. Pandita, "The non-minimal supersymmetric standard model with tan β ≃ m_t/m_b," Physics Letters B, vol. 353, no. 1, pp. 70–78, 1995.
10. B. Ananthanarayan and P. N. Pandita, "Particle spectrum in the non-minimal supersymmetric standard model with tan β ≃ m_t/m_b," Physics Letters B, vol. 371, no. 3-4, pp. 245–251, 1996.
11. B. Ananthanarayan and P. N. Pandita, "The nonminimal supersymmetric standard model at large tan β," International Journal of Modern Physics A, vol. 12, no. 13, p. 2321, 1997.
12. B. A. Dobrescu and K. T. Matchev, "Light axion within the next-to-minimal supersymmetric standard model," Journal of High Energy Physics, vol. 2000, no. 09, p. 031, 2000.
13. V. Barger, P. Langacker, H. S. Lee, and G. Shaughnessy, "Higgs sector in extensions of the minimal supersymmetric standard model," Physical Review D, vol. 73, no. 11, Article ID 115010, 31 pages, 2006.
14. R. Dermisek and J. F. Gunion, "Escaping the large fine-tuning and little hierarchy problems in the next to minimal supersymmetric model and h → aa decays," Physical Review Letters, vol. 95, no. 4, Article ID 041801, 4 pages, 2005.
15. G. Hiller, "b-physics signals of the lightest CP-odd Higgs boson in the next-to-minimal supersymmetric standard model at large tan β," Physical Review D, vol. 70, no. 3, Article ID 034018, 4 pages, 2004.
16. F. Domingo and U. Ellwanger, "Updated constraints from B physics on the MSSM and the NMSSM," Journal of High Energy Physics, vol. 2007, no. 12, p. 090, 2007.
17. Z. Heng, et al., "B meson dileptonic decays in the next-to-minimal supersymmetric model with a light CP-odd Higgs boson," Physical Review D, vol. 77, no. 9, Article ID 095012, 12 pages, 2008.
18. R. N. Hodgkinson and A. Pilaftsis, "Radiative Yukawa couplings for supersymmetric Higgs singlets at large tan β," Physical Review D, vol. 76, no. 1, Article ID 015007, 12 pages, 2007.
19. R. N. Hodgkinson and A. Pilaftsis, "Supersymmetric Higgs singlet effects on B-meson flavor-changing neutral current observables at large tan β," Physical Review D, vol. 78, no. 7, Article ID 075004, 2008.
20. W. Wang, Z. Xiong, and J. M. Yang, "Residual effects of heavy sparticles in the bottom quark Yukawa coupling: a comparative study for the MSSM and NMSSM," Physics Letters B, vol. 680, no. 2, pp. 167–171, 2009.
21. J. Cao and J. M. Yang, "Anomaly of Zbb̄ coupling revisited in MSSM and NMSSM," Journal of High Energy Physics, vol. 2008, no. 12, p. 006, 2008.
22. J. Cao and J. M. Yang, "Current experimental constraints on the next-to-minimal supersymmetric standard model with large λ," Physical Review D, vol. 78, no. 11, Article ID 115001, 8 pages, 2008.
23. U. Ellwanger, C. Hugonie, and A. M. Teixeira, "The next-to-minimal supersymmetric standard model," Physics Reports, vol. 496, no. 1-2, pp. 1–77, 2010.
24. J. Cao, Z. Heng, and J. M. Yang, "Rare Z-decay into light CP-odd Higgs bosons: a comparative study in different new physics models," Journal of High Energy Physics, vol. 2010, no. 11, p. 110, 2010.
25. M. Maniatis, "The next-to-minimal supersymmetric extension of the standard model reviewed," International Journal of Modern Physics A, vol. 25, no. 18-19, p. 3505, 2010.
26. U. Ellwanger, "Higgs bosons in the next-to-minimal supersymmetric standard model at the LHC," The European Physical Journal C, vol. 71, no. 10, p. 1782, 2011.
27. J. Cao, Z. Heng, J. M. Yang, and J. Zhu, "Higgs decay to dark matter in low energy SUSY: is it detectable at the LHC?" Journal of High Energy Physics, vol. 2012, no. 06, p. 145, 2012.
28. P. Fayet, Nuclear Physics B, vol. 90, pp. 104–124, 1975.
29. C. Panagiotakopoulos and K. Tamvakis, "Stabilized NMSSM without domain walls," Physics Letters B, vol. 446, no. 3-4, pp. 224–227, 1999.
30. C. Panagiotakopoulos and K. Tamvakis, "New minimal extension of MSSM," Physics Letters B, vol. 469, no. 1–4, pp. 145–148, 1999.
31. C. Panagiotakopoulos and A. Pilaftsis, "Higgs scalars in the minimal nonminimal supersymmetric standard model," Physical Review D, vol. 63, Article ID 055003, 33 pages, 2001.
32. A. Dedes, et al., "Phenomenology of a new minimal supersymmetric extension of the standard model," Physical Review D, vol. 63, no. 5, Article ID 055009, 9 pages, 2001.
33. A. Menon, D. E. Morrissey, and C. E. M. Wagner, "Electroweak baryogenesis and dark matter in a minimal extension of the MSSM," Physical Review D, vol. 70, no. 3, Article ID 035005, 20 pages, 2004.
34. V. Barger, P. Langacker, and H.-S. Lee, "Lightest neutralino in extensions of the MSSM," Physics Letters B, vol. 630, no. 3-4, pp. 85–99, 2005.
35. C. Balazs, M. Carena, A. Freitas, et al., "Phenomenology of the nMSSM from colliders to cosmology," Journal of High Energy Physics, vol. 2007, no. 06, p. 066, 2007.
36. J. Cao, Z. Heng, and J. M. Yang, "Rare Z-decay into light CP-odd Higgs bosons: a comparative study in different new physics models," Journal of High Energy Physics, vol. 2010, no. 11, p. 110, 2010.
37. J. Cao, H. E. Logan, and J. M. Yang, "Experimental constraints on the nearly minimal supersymmetric standard model and implications for its phenomenology," Physical Review D, vol. 79, no. 9, Article ID 091701, 5 pages, 2009.
38. Z. Ahmed, D. S. Akerib, S. Arrenberg, et al., "Dark matter search results from the CDMS II experiment," Science, vol. 327, no. 5973, pp. 1619–1621, 2010.
39. E. Aprile, K. Arisaka, F. Arneodo, et al., "First dark matter results from the XENON100 experiment," Physical Review Letters, vol. 105, no. 13, Article ID 131302, 5 pages, 2010.
40. E. Aprile, et al., "Dark matter results from 100 live days of XENON100 data," Physical Review Letters, vol. 107, no. 13, Article ID 131302, 6 pages, 2011.
41. C. E. Aalseth, P. S. Barbeau, N. S. Bowden, et al., "Results from a search for light-mass dark matter with a p-type point contact germanium detector," Physical Review Letters, vol. 106, no. 13, Article ID 131301, 4 pages, 2011.
42. D. Hooper, J. I. Collar, J. Hall, et al., "Consistent dark matter interpretation for CoGeNT and DAMA/LIBRA," Physical Review D, vol. 82, no. 12, Article ID 123509, 8 pages, 2010.
43. O. Adriani, G. C. Barbarino, G. A. Bazilevskaya, et al., "An anomalous positron abundance in cosmic rays with energies 1.5–100 GeV," Nature, vol. 458, pp. 607–609, 2009.
44. O. Adriani, G. A. Bazilevskaya, et al., "New measurement of the antiproton-to-proton flux ratio up to 100 GeV in the cosmic radiation," Physical Review Letters, vol. 102, no. 5, Article ID 051101, 5 pages, 2009.
45. W. Wang, Z. Xiong, J. M. Yang, and L.-X. Yu, "Dark matter in the singlet extension of MSSM: explanation of Pamela and implication on Higgs phenomenology," Journal of High Energy Physics, vol. 2009, no. 11, p. 053, 2009.
46. J. Cao, K.-I. Hikasa, W. Wang, J. M. Yang, and L.-X. Yu, "SUSY dark matter in light of CDMS II results: a comparative study for different models," Journal of High Energy Physics, vol. 2010, no. 7, p. 44, 2010.
47. J.-J. Cao, K.-I. Hikasa, W. Wang, and J. M. Yang, "Light dark matter in NMSSM and implication on Higgs phenomenology," Physics Letters B, vol. 703, no. 3, pp. 292–297, 2011.
48. K. Nakamura and Particle Data Group, "Review of particle physics," Journal of Physics G, vol. 37, no. 7A, Article ID 075021, 2010.
49. P. Draper, T. Liu, C. E. M. Wagner, et al., "Dark light-Higgs bosons," Physical Review Letters, vol. 106, Article ID 121805, 4 pages, 2011.
50. U. Ellwanger, J. F. Gunion, and C. Hugonie, "NMHDECAY: a Fortran code for the Higgs masses, couplings and decay widths in the NMSSM," Journal of High Energy Physics, vol. 2005, no. 02, p. 066, 2005.
51. M. Drees and M. M. Nojiri, "Neutralino-nucleon scattering reexamined," Physical Review D, vol. 48, no. 8, pp. 3483–3501, 1993.
52. G. Jungman, M. Kamionkowski, and K. Griest, "Supersymmetric dark matter," Physics Reports, vol. 267, no. 5-6, 1996.
53. J. R. Ellis, K. A. Olive, and C. Savage, "Hadronic uncertainties in the elastic scattering of supersymmetric dark matter," Physical Review D, vol. 77, no. 6, Article ID 065026, 15 pages, 2008.
54. A. Bottino, F. Donato, N. Fornengo, and S. Scopel, "Size of the neutralino-nucleon cross-section in the light of a new determination of the pion-nucleon sigma term," Astroparticle Physics, vol. 18, no. 2, pp. 205–211, 2002.
55. H. Ohki, H. Fukaya, S. Hashimoto, et al., "Nucleon sigma term and strange quark content from lattice QCD with exact chiral symmetry," Physical Review D, vol. 78, no. 5, Article ID 054502, 12 pages, 2008.
56. D. Toussaint and W. Freeman, "Strange quark condensate in the nucleon in 2+1 flavor QCD," Physical Review Letters, vol. 103, no. 12, Article ID 122002, 4 pages, 2009.
57. J. Giedt, A. W. Thomas, and R. D. Young, "Dark matter, the constrained minimal supersymmetric standard model, and lattice QCD," Physical Review Letters, vol. 103, no. 20, Article ID 201802, 4 pages, 2009.
58. J. Cao, K.-I. Hikasa, W. Wang, J. M. Yang, and L.-X. Yu, "Dark matter direct detection constraints on the minimal supersymmetric standard model and implications for LHC Higgs boson searches," Physical Review D, vol. 82, no. 5, Article ID 051701(R), 5 pages, 2010.
59. J. Cao, W. Wang, and J. M. Yang, "Split-SUSY dark matter in light of direct detection limits," Physics Letters B, vol. 706, no. 1, pp. 72–76, 2011.
60. H. P. Nilles, "Supersymmetry, supergravity and particle physics," Physics Reports, vol. 110, pp. 1–162, 1984.
61. D. Feldman, Z. Liu, and P. Nath, "Low mass neutralino dark matter in the minimal supersymmetric standard model with constraints from B_s → μ+μ− and Higgs boson search limits," Physical Review D, vol. 81, no. 11, Article ID 117701, 4 pages, 2010.
62. A. V. Belikov, J. F. Gunion, D. Hooper, and T. M. P. Tait, "CoGeNT, DAMA, and light neutralino dark matter," Physics Letters B, vol. 705, pp. 82–86, 2011.
63. N. Fornengo, S. Scopel, and A. Bottino, "Discussing direct search of dark matter particles in the minimal supersymmetric extension of the standard model with light neutralinos," Physical Review D, vol. 83, no. 1, Article ID 015001, 22 pages, 2011.
64. S. Scopel, S. Choi, N. Fornengo, and A. Bottino, "Impact of the recent results by the CMS and ATLAS collaborations at the CERN Large Hadron Collider on an effective minimal supersymmetric extension of the standard model," Physical Review D, vol. 83, no. 9, Article ID 095016, 6 pages, 2011.
65. R. Gaitskell, V. Mandic, and J. Filippini, http://dmtools.berkeley.edu/limitplots and http://dmtools.brown.edu:8080.
66. G. Belanger, F. Boudjema, C. Hugonie, A. Pukhov, and A. Semenov, "Relic density of dark matter in the next-to-minimal supersymmetric standard model," Journal of Cosmology and Astroparticle Physics, vol. 2005, no. 09, p. 001, 2005.
67. H. K. Dreiner, S. Heinemeyer, O. Kittel, U. Langenfeld, A. M. Weber, and G. Weiglein, "Mass bounds on a very light neutralino," European Physical Journal C, vol. 62, no. 3, pp. 547–572, 2009.
68. L. Calibbi, T. Ota, and Y. Takanishi, "Light neutralino in the MSSM: a playground for dark matter, flavor physics and collider experiments," Journal of High Energy Physics, vol. 2011, no. 07, p. 013, 2011.
69. D. T. Cumberbatch, D. E. Lopez-Fogliani, L. Roszkowski, R. R. de Austri, and Y.-L. S. Tsai, "Is light neutralino as dark matter still viable?" http://arxiv.org/abs/1107.1604.
70. A. Choudhury and A. Datta, "Many faces of low mass neutralino dark matter in the unconstrained MSSM, LHC data and new signals," Journal of High Energy Physics, vol. 2012, no. 06, p. 006, 2012.
71. D. Feldman, Z. Liu, and P. Nath, "Low mass neutralino dark matter in the minimal supersymmetric standard model with constraints from B_s → μ+μ− and Higgs boson search limits," Physical Review D, vol. 81, no. 11, Article ID 117701, 4 pages, 2010.
72. E. Kuflik, A. Pierce, and K. M. Zurek, "Light neutralinos with large scattering cross sections in the minimal supersymmetric standard model," Physical Review D, vol. 81, no. 11, Article ID 111701, 5 pages, 2010.
73. J. F. Gunion, A. V. Belikov, and D. Hooper, "CoGeNT, DAMA, and neutralino dark matter in the next-to-minimal supersymmetric standard model," http://arxiv.org/abs/1009.2555.
74. D. A. Vasquez, G. Bélanger, C. Bœhm, et al., "Can neutralinos in the MSSM and NMSSM scenarios still be light?" Physical Review D, vol. 82, no. 11, Article ID 115027, 11 pages, 2010.
75. R. Dermisek and J. F. Gunion, "Direct production of a light CP-odd Higgs boson at the Tevatron and LHC," Physical Review D, vol. 81, no. 5, Article ID 055001, 13 pages, 2010.
76. R. Dermisek and J. F. Gunion, "New constraints on a light CP-odd Higgs boson and related NMSSM ideal Higgs scenarios," Physical Review D, vol. 81, no. 7, Article ID 075003, 16 pages, 2010.
77. A. V. Belikov, J. F. Gunion, D. Hooper, and T. M. P. Tait, "CoGeNT, DAMA, and light neutralino dark matter," Physics Letters B, vol. 705, no. 1-2, pp. 82–86, 2011.
78. R. Kappl, M. Ratz, and M. W. Winkler, "Light dark matter in the singlet-extended MSSM," Physics Letters B, vol. 695, no. 1–4, pp. 169–173, 2011.
79. J. Cao, H. E. Logan, and J. M. Yang, "Experimental constraints on the nearly minimal supersymmetric standard model and implications for its phenomenology," Physical Review D, vol. 79, no. 9, Article ID 091701, 5 pages, 2009.
80. D. Das and U. Ellwanger, "Light dark matter in the NMSSM: upper bounds on direct detection cross sections," Journal of High Energy Physics, vol. 2010, no. 09, p. 085, 2010.
81. R. Dermisek and J. F. Gunion, "Escaping the large fine-tuning and little hierarchy problems in the next to minimal supersymmetric model and h → aa decays," Physical Review Letters, vol. 95, no. 4, Article ID 041801, 4 pages, 2005.
82. P. Grajek, G. L. Kane, D. J. Phalen, A. Pierce, and S. Watson, "Is the PAMELA positron excess winos?" Physical Review D, vol. 79, no. 4, Article ID 043506, 2009.
83. N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer, and N. Weiner, "A theory of dark matter," Physical Review D, vol. 79, no. 1, Article ID 015014, 16 pages, 2009.
84. E. A. Baltz, J. Edsjo, K. Freese, and P. Gondolo, "Cosmic ray positron excess and neutralino dark matter," Physical Review D, vol. 65, no. 6, Article ID 063511, 10 pages, 2002.
85. G. L. Kane, L. T. Wang, and T. T. Wang, "Supersymmetry and the cosmic ray positron excess," Physics Letters B, vol. 536, no. 3-4, pp. 263–269, 2002.
86. G. L. Kane, L. T. Wang, and J. D. Wells, "Supersymmetry and the positron excess in cosmic rays," Physical Review D, vol. 65, no. 5, Article ID 057701, 4 pages, 2002.
87. K. Ishiwata, S. Matsumoto, and T. Moroi, "Cosmic-ray positron from superparticle dark matter and the PAMELA anomaly," Physics Letters B, vol. 675, no. 5, pp. 446–449, 2009.
88. D. Hooper and T. M. P. Tait, "Neutralinos in an extension of the minimal supersymmetric standard model as the source of the PAMELA positron excess," Physical Review D, vol. 80, no. 5, Article ID 055028, 5 pages, 2009.
89. J. Wess and J. Bagger, Supersymmetry and Supergravity, Princeton Series in Physics, Princeton University Press, Princeton, NJ, USA, 2nd edition, 1992.
90. Y. Bai, M. Carena, and J. Lykken, "PAMELA excess from neutralino annihilation in the NMSSM," Physical Review D, vol. 80, no. 5, Article ID 055004, 17 pages, 2009.
{"url":"http://www.hindawi.com/journals/ahep/2012/216941/","timestamp":"2014-04-18T03:06:27Z","content_type":null,"content_length":"540219","record_id":"<urn:uuid:67ff7e19-52c5-42d7-b513-946557967d4f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Köhler theory for a polydisperse droplet population in the presence of a soluble trace gas, and an application to stratospheric STS droplet growth

H. Kokkola, S. Romakkaniemi, and A. Laaksonen, Department of Applied Physics, University of Kuopio, Finland

Atmospheric Chemistry and Physics (Atmos. Chem. Phys., ISSN 1680-7324, Copernicus GmbH, Göttingen, Germany), vol. 3, no. 6, pp. 2139–2146, 3 December 2003. doi:10.5194/acp-3-2139-2003

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This article is available from http://www.atmos-chem-phys.net/3/2139/2003/acp-3-2139-2003.html and the full text is available as a PDF file from http://www.atmos-chem-phys.net/3/2139/2003/acp-3-2139-2003.pdf

Abstract. We consider the equilibrium behavior of a polydisperse aqueous droplet population as a function of relative humidity (RH) when a soluble trace gas, such as nitric acid, is present in the system. The droplet population experiences a splitting when the RH is increased sufficiently. This splitting is not related to the traditional Köhler activation of cloud droplets, as it may occur at relative humidities below 100%. Remarkably, the splitting always takes place in such a way that the largest size class of the (discretized) droplet population starts taking up the soluble trace gas efficiently, growing steeply as a function of RH, and forcing the smaller droplets to shrink. We consider this behavior in terms of open and closed system Köhler curves (open system referring to one in which the trace gas concentration remains constant and closed system to one in which the gas concentration decreases as a result of uptake of the trace gas). We show how the open and closed system Köhler curves are related, and that the splitting of the population can be explained in terms of closed system curves crossing the Köhler maxima of the open system curves. We then go on to consider time-dependent situations, and show that due to gas-phase mass transfer limitations, the splitting of the size distributions moves toward smaller sizes as the rate of RH increase becomes more rapid. Finally, we consider stratospheric supercooled ternary solution droplet populations, and show that the splitting described using the new theory may lead to formation of bimodal size distributions in the stratosphere.
{"url":"http://www.atmos-chem-phys.net/3/2139/2003/acp-3-2139-2003.xml","timestamp":"2014-04-19T19:58:59Z","content_type":null,"content_length":"4915","record_id":"<urn:uuid:28011226-d809-4cbd-8e17-47d8f5635c63>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian Reliability Demonstration Test Design

Reliability HotWire, Issue 136, June 2012. Reliability Basics.

For life tests, especially for systems like missiles, sample size is always limited. It is a challenge to get accurate reliability estimates with limited samples, and Bayesian methods have been used to solve this problem. Weibull++ 8 now offers non-parametric Bayesian reliability demonstration test (RDT) design. Unlike traditional non-parametric test design methods, this new feature allows engineers to integrate engineering knowledge or subsystem testing results into system reliability test design. In this article, we will explain the theory of Bayesian RDT and illustrate how to use Weibull++ to design an efficient reliability demonstration test.

Theory of Bayesian RDT

The following binomial equation is often used in reliability demonstration test design:

1 − CL = Σ_{i=0}^{r} C(n, i) (1 − R)^i R^(n−i)    (1)

where CL is the required confidence level, r is the number of failures, n is the total number of units on test, and R is the reliability that needs to be demonstrated. Eqn. (1) has four unknowns: CL, r, n and R. Given any three of these, the remaining one can be solved. For example, given CL, r and R, the required sample size can be determined.

Eqn. (1) shows that the reliability can be treated as a random variable with a beta distribution. It can be rewritten as:

CL = 1 − Beta(R; n − r, r + 1)    (2)

where Beta(x; α, β) denotes the cumulative distribution function of the beta distribution. As discussed in [1], Eqn. (2) also can be obtained using Bayesian theory when the non-informative prior distribution 1/R is used. In general, when a beta distribution is used as the prior distribution for the reliability R, the posterior distribution obtained from Eqn. (1) is also a beta distribution. For example, assuming the prior distribution is Beta(R; α₀, β₀), the posterior distribution for R is:

Beta(R; α₀ + n − r, β₀ + r)    (3)

Therefore, Eqn. (3) can be used for Bayesian RDT. For a random variable x with beta distribution Beta(x; α, β), its mean and variance are:

E[x] = α / (α + β),  Var(x) = αβ / [(α + β)² (α + β + 1)]    (4)

If the expected value and the variance are known, the parameters α and β in the beta distribution can be solved by:

α = E[x] (E[x](1 − E[x]) / Var(x) − 1),  β = (1 − E[x]) (E[x](1 − E[x]) / Var(x) − 1)    (5)

Example 1: Bayesian Test Design with Prior Information from Expert Opinion

Suppose you wanted to know the reliability of a system and the following prior knowledge of the system is available:
• Lowest possible reliability: a = 0.80
• Most likely reliability: b = 0.85
• Highest possible reliability: c = 0.97

Based on this information, the mean and variance of the prior system reliability can be estimated from the three-point formulas E[R] = (a + 4b + c)/6 and Var(R) = ((c − a)/6)², giving E[R] ≈ 0.8617 and Var(R) ≈ 0.000803. Using these two values and Eqn. (5), the prior distribution for R is a beta distribution Beta(R; α₀, β₀) with α₀ ≈ 127.1 and β₀ ≈ 20.4.

Given the above prior information, if there is 1 failure out of 20 test samples, what is the demonstrated reliability at a confidence level of CL = 0.8? The result is given in Figure 1.

Figure 1 - Bayesian RDT with expert opinion as prior

Figure 1 shows that the demonstrated reliability is 85.103991%. Without the prior information for the system reliability, the demonstrated reliability is 85.75745%, as given in Figure 2:

Figure 2 - RDT without expert opinion as prior

The prior expert opinion can have a significant effect on the RDT result. From this example, we can see that the Bayesian method and the regular non-parametric binomial method produce similar results.
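The Example 1 numbers can be reproduced outside Weibull++ with a few lines of R (a sketch of Eqns. (2), (3) and (5); qbeta is R's built-in beta quantile function):

# Prior moments from the three expert estimates (a = low, b = likely, c = high)
a <- 0.80; b <- 0.85; c <- 0.97
m <- (a + 4*b + c) / 6              # prior mean of R, ~0.8617
v <- ((c - a) / 6)^2                # prior variance of R, ~0.000803

# Eqn. (5): moment-matched beta prior parameters
k      <- m * (1 - m) / v - 1
alpha0 <- m * k                     # ~127.1
beta0  <- (1 - m) * k               # ~20.4

# Eqn. (3): posterior after observing r failures in n units; the demonstrated
# reliability at confidence CL is the (1 - CL) quantile of the posterior.
n <- 20; r <- 1; CL <- 0.8
R.bayes <- qbeta(1 - CL, alpha0 + n - r, beta0 + r)   # ~0.8510

# For comparison, the non-informative result of Eqn. (2):
R.plain <- qbeta(1 - CL, n - r, r + 1)                # ~0.8576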
However, if the reliability is very high based on prior expert opinion, the Bayesian RDT will give very different results than the regular method. Therefore, you must be very careful when you apply any Bayesian method. For example, if we change the expert opinion in this example to:
• Lowest possible reliability: a = 0.90
• Most likely reliability: b = 0.95
• Highest possible reliability: c = 0.97
and keep the rest of the settings unchanged, the Bayesian RDT result is:

Figure 3 - RDT with modified expert opinion as prior

Example 2: Bayesian Test Design with Prior Information from Subsystem Tests

For each subsystem i in a system, the reliability can also be modeled using a beta distribution. If there are r_i failures out of n_i test samples, R_i follows a beta distribution with the cumulative distribution function of Eqn. (2):

CL = 1 − Beta(R_i; s_i, r_i + 1)    (6)

where s_i = n_i − r_i is the number of successes. Therefore, the expected value and the variance of R_i are:

E[R_i] = s_i / (n_i + 1),  Var(R_i) = s_i (r_i + 1) / [(n_i + 1)² (n_i + 2)]    (7)

Assuming that all the subsystems are connected reliability-wise in a series configuration, the expected value and the variance of the system's reliability R can then be calculated as:

E[R] = Π_i E[R_i]    (8)
Var(R) = Π_i (Var(R_i) + E[R_i]²) − Π_i E[R_i]²    (9)

From Eqns. (8) and (9), we can get the α and β parameters for the prior distribution of R via Eqn. (5). Assume a system of interest is composed of three subsystems: A, B and C. Prior information from tests of these subsystems is given in the table below.

Subsystem   Number of Units (n)   Number of Failures (r)
A           20                    0
B           30                    1
C           100                   4

Given the above information, in order to demonstrate a system reliability of 0.9 at a confidence level of 0.8, how many samples are needed in the test? Assume the allowed number of failures is 1. The result is given in the figure shown next, which shows that at least 49 test units are needed.

Figure 4 - Bayesian RDT with subsystem tests as prior

In this article, we discussed the theory of Bayesian reliability demonstration test design. Weibull++ was used to solve two examples. Considering prior information about the system reliability allows us to design a better test and estimate the system reliability more accurately. This article showed how both expert opinion and subsystem test information can be used to construct the prior distribution of the system reliability.

[1] H. Guo, T. Jin and A. Mettas, "Designing reliability demonstration tests for one-shot systems under zero component failure," IEEE Transactions on Reliability, vol. 60, no. 1, pp. 286-294, March 2011.
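Assuming the subsystem posteriors follow Eqn. (7) and the series-system moments follow Eqns. (8) and (9), the sample-size search of Example 2 can be sketched in R as follows (the target values are those of the example; the variable names are our own):

# Subsystem test data: units and failures for A, B, C
n.sub <- c(20, 30, 100)
r.sub <- c(0, 1, 4)
s.sub <- n.sub - r.sub

# Eqn. (7): beta moments of each subsystem reliability
E.sub <- s.sub / (n.sub + 1)
V.sub <- s.sub * (r.sub + 1) / ((n.sub + 1)^2 * (n.sub + 2))

# Eqns. (8)-(9): moments of the series-system reliability
E.sys <- prod(E.sub)
V.sys <- prod(V.sub + E.sub^2) - prod(E.sub^2)

# Eqn. (5): moment-matched beta prior for the system reliability
k      <- E.sys * (1 - E.sys) / V.sys - 1
alpha0 <- E.sys * k
beta0  <- (1 - E.sys) * k

# Smallest n demonstrating R = 0.9 at CL = 0.8 with at most r = 1 failure
R.target <- 0.9; CL <- 0.8; r <- 1
n <- r + 1
while (qbeta(1 - CL, alpha0 + n - r, beta0 + r) < R.target) n <- n + 1
n   # ~49, in line with the Weibull++ result above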
{"url":"http://www.weibull.com/hotwire/issue136/relbasics136.htm","timestamp":"2014-04-19T20:24:27Z","content_type":null,"content_length":"22279","record_id":"<urn:uuid:9cde3582-19cf-421d-856b-dd30b485e8cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Violin and boxplots with lattice and R
April 2, 2011 By Oscar Perpiñán Lamigueiro

A violin plot is a combination of a boxplot and a kernel density plot. Lattice includes the panel.violin function for this graphical tool. This example draws a violin plot and a boxplot together.

First, let's download some solar radiation data from the NASA webpage:

    nasafile <- 'http://eosweb.larc.nasa.gov/sse/global/text/global_radiation'
    nasa <- read.table(file=nasafile, skip=13, header=TRUE)

Now, I plot a violin plot and a boxplot of the yearly average of daily solar radiation for latitudes between -60º and 60º. I have to convert this numeric vector to a factor with the combination of cut and pretty. It is possible to plot the violin plot and the boxplot together (an example is included in the help of panel.violin). I choose pch='|' in order to get a horizontal line at the median. Lastly, the plot.symbol component in par.settings defines the symbol of the outliers of the boxplot, and the box.rectangle component configures the box of the boxplot:

    bwplot(Ann~cut(Lat, pretty(Lat, 40)), data=nasa, subset=(abs(Lat)<60),
           xlab='Latitude', ylab='G(0) (kWh/m²)',
           panel = function(..., box.ratio) {
               panel.violin(..., col = "lightblue",
                            varwidth = FALSE, box.ratio = box.ratio)
               panel.bwplot(..., col='black', cex=0.8, pch='|',
                            fill='gray', box.ratio = .1)
           },
           par.settings = list(box.rectangle=list(col='black'),
                               plot.symbol = list(pch='.', cex = 0.1)),
           scales=list(x=list(rot=45, cex=0.5)))

Now, I plot a violin plot (without a boxplot) of the monthly means of daily solar radiation. First, I have to build the formula:

    x <- paste(names(nasa)[3:14], collapse='+')
    formula <- as.formula(paste(x, '~cut(Lat, pretty(Lat, 20))', sep=''))

And then I can print the plot. I have to choose outer=TRUE in order to get individual panels for each month, and as.table=TRUE if I want January to be at the upper left corner:

    bwplot(formula, data=nasa, subset=(abs(Lat)<60),
           xlab='Latitude', ylab='G(0) (kWh/m²)',
           outer=TRUE, as.table=TRUE, horizontal=FALSE,
           panel=panel.violin,
           scales=list(x=list(rot=70, cex=0.5)))
{"url":"http://www.r-bloggers.com/violin-and-boxplots-with-lattice-and-r/","timestamp":"2014-04-16T16:36:41Z","content_type":null,"content_length":"39976","record_id":"<urn:uuid:592685b3-a971-43ce-9b88-047fd40c5ad9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Submitted by Anonymous on February 9, 2012.

A mathematics fan myself (and a self-proclaimed philosopher :), though everyone is a philosopher in their own right), I do believe in the universality of mathematics but feel that mathematics is just an aspect of reality and not an all-pervading foundation. Mathematics can be seen as a language for describing certain kinds of correlations between objects, events, etc. So whether mathematics is all-pervading depends on whether: i) everything in this world is correlated, and ii) mathematics can describe all correlations in this world.

Irrespective of whether i) is true or not, I think it's not too difficult to construct examples to show ii) is not true. As many would already have thought of, feelings like love, etc. cannot be explained by mathematics. (Reminds me of this Valentine's Day quote by H.L. Mencken: Love is the triumph of imagination over intelligence.)

On a different note, some of you may enjoy this tangentially related blog post of mine: http://janakspen.blogspot.in/2011/05/infinite-soul-and-bit-of-discrete-m...

I can say this to extend Holger's comment: Most of the things or phenomena around us can be measured (if not objectively, then perhaps subjectively), and measurement is very much mathematical. So one may feel that everything is mathematical. But in reality, measurement or correlation between entities is just one aspect of their existence. So it's wrong to say mathematics can describe or is the basis of everything.

I did enjoy aspects of this article... this has become one of my favorite sites on the internet :)
{"url":"http://plus.maths.org/content/comment/reply/5497/3119","timestamp":"2014-04-19T17:09:21Z","content_type":null,"content_length":"21729","record_id":"<urn:uuid:cf9b16ea-5e2d-44dc-97a8-1eb817c4f7ce>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Binary Decision Making (and Why it's Good)

This is the story of how solving a very simple mathematical problem taught me a lot about decision making and my perspective on life. This can even help you make decisions quicker and be stronger in your relationships. If you know nothing about math or even hate it, don't worry, this is really easy. First I'll explain the problem and how it can be solved. Just follow me there, as it is important in order to understand the rest. Then I'll describe what I have learned out of it about decision making.

Apples in a Bag and How to Solve it

Here is the problem: Given a set X of n elements, how many different subsets of X are there?

Let's say your set of elements is a shopping bag containing apples. Let's call the shopping bag X, and the number of apples in the bag n. We assume that each apple is uniquely identifiable. How many different ways are there to distribute these apples in or outside the bag?

- There is only one way to have zero apples in the bag (the bag is empty) and also only one way to have all apples in the bag (they are all in the bag and none is out).
- If we look at all possible ways to have one apple in the bag, then there are n possibilities for that, since we have n apples that could be alone in the bag.
- The number of ways to have two apples in the bag depends on how many different pairs of apples we can form and put into the bag.
- The number of ways to have three apples in the bag depends on how many different groups of three apples we can form and put into the bag.
- And so on.

If we add up all the possibilities for zero apples, one apple, two apples, etc., up to n apples, then we have the total number of different subsets of X.

How do we know how many pairs of apples we can form? Well, for the first apple in the pair, we have n possibilities, since we can pick any of the n apples. For the second apple in the pair, we can pick all apples but the one we already picked, so that is (n - 1) choices. The total number of possible pairs therefore is n * (n - 1), since for each of the n ways to start a pair we have (n - 1) ways to complete it. But then, we would count some pairs twice. Sets are not ordered, which means that the apples are in no particular order in the bag. Therefore, the subset {apple1, apple3} is the same as the subset {apple3, apple1}. So here we need to take only half of the pairs into account.

With three apples it is the same: we have n possible choices for the first apple, (n - 1) for the second one, and (n - 2) for the third one, which is a total of n * (n - 1) * (n - 2) possibilities. There are, however, six possible ways to order three apples. Therefore, here we need to take only one sixth of the results.

If you keep doing this for a few more group sizes, the pattern that emerges is that the number of subsets with k elements is 1/k! * n!/(n-k)!, and after applying the binomial theorem, the total number of subsets of X turns out to be 2^n (2 to the power of n). (You don't need to understand this. What counts is that the result is 2^n. I apologize for not having figured out how to get mathematical notation in wordpress.)

I hope you are confused. I definitely was. I think that is all pretty complicated. Moreover, I was able to prove the result with the binomial theorem, or simply with induction (you don't need to know what that is), but I did not truly understand it.
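If you would rather check that count numerically than take it on faith, a tiny sketch does the job (Python, standard library only; this is my addition, not part of the original post):

    # Check that the subset counts over all sizes k add up to 2**n.
    from math import comb  # comb(n, k) = n! / (k! * (n - k)!)

    for n in range(1, 11):
        total = sum(comb(n, k) for k in range(n + 1))
        print(n, total, 2 ** n)  # the last two columns always agree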
Demonstrating something is one thing, but understanding why it is that way is another, and I definitely did not understand where this 2^n came from!

A Better Way to Solve the Problem

Fortunately, after a while of being grumpy and grumping around, I suddenly got it. It was a matter of focus. My perspective had been ineffective all this time. Instead of considering all possible subsets, I needed to focus my attention only on one thing: the apples. Or even better, only one apple.

No matter which combination we are examining and in which subset it is, each apple can only be either in the bag or not, right? That is a simple yes/no choice. Two options. Let's call them 0 (no) and 1 (yes). Now there are n apples. So each subset, which means each possible way to place these n apples in or outside the bag, can be expressed as a chain of n zeros or ones. The empty bag would be n times a zero. All apples in the bag would be n times the one. Only apple1 in the bag would be one one and (n - 1) times zero. And so on.

That is a really simple and elegant way of putting it. Since we have two available choices for each apple, and n apples, the total number of all possible combinations is 2 * 2 * ... * 2, n times. That is 2^n, 2 to the power of n. That is where 2^n came from!

Duh... how simple. And so much faster. And cleaner. It involves fewer calculations. I love math, as long as it doesn't involve calculating. I find calculating with numbers somewhat dirty.

Binary Decision Making

I was unable to see this explanation at first because I was focusing so much on the big picture, all these subsets and possibilities and pairs and numbers of sets of size k... I was also fascinated by the symmetry of the problem. The number of subsets with k elements is the same as the number of subsets with (n - k) elements, since forming groups of three apples is the same as forming groups of all apples but three of them, the only difference being whether the three apples are in the bag or not - the number of possibilities is the same. This was a big insight for me.

I don't make this mistake only in math. I also have a general tendency in life to be overwhelmed by the bigger picture, to have all possible choices and possibilities dancing around in my head, to be fascinated by symmetries and other patterns, to get hung up on details, to try and construct the most elegant and perfect orders, and to get lost in vast theoretical worlds.

Often we have to make ourselves dumber in order to be more effective. This means narrowing our focus down. When I think about it, no matter how many choices we can think of, the truth is that we only ever have two of them: yes, or no.

Our life is made of choices. It is a big chain of decisions. Each decision we make changes our reality, our life, our future, just like a zero or a one for one apple makes the difference between one subset or the other. It is nice to see the bigger picture, consider all possibilities, plan ahead, or think about how to make it all fit together, just like I did when I was trying to calculate how many subsets of each apple number there were. But in the end, what really counts are the apples, and the point is to know which ones are in the bag and which ones aren't.

The analogy is not perfect. Unlike a bag of apples, in life each decision you make changes the number, size, color and shape of the apples you have not yet placed in or out of the bag. Every time you make a decision, you become someone else. Your reality shifts, influenced by the energy of the choice you made.
New choices become available to you that you didn't know about or would not even have thought of before. That is exactly one more reason to focus on each apple instead of on the bigger picture. The bigger picture changes all the time and you cannot predict what the following apples will look like, after the one you are holding in your hand. So you cannot (and don't need to) think about them yet. All you need to do is to focus on this one apple in your hand and decide whether or not you want it in your bag.

Binary Decision Making Makes You Stronger in Your Relationships

We often tend to think that our choices are dependent on other choices, our own or other people's. But that is not true. Just like each apple can be in the bag or not regardless of where the others are, you can make your own choices in life independently from any other circumstances. Just face your choice, go deep down in your gut and ask "Yes or no? Do I really want this? Is this me?". And that's it. No matter what the answer is, once you made your decision (AND implemented it by taking action on it) new apples (I mean, opportunities) will appear, your intuition will guide you, and you'll figure it out one way or another.

Why is this kind of decision making good?

• This kind of binary decision making is very simple to use. "Yes" or "No" - not "Maybe, if x happens and y agrees, and z is not available".
• It allows you to remain focused on the present moment instead of floating off into the theoretical worlds of possibility.
• It will save you tons of time and energy.
• It can help you be authentic and make your own choices free from fear-based considerations.
• It prevents half-assed decisions and will force you to face your limitations. No compromise.
• It will bring much focus to your actions. If the answer is "no", you just forget about it, period. If the answer is "yes", you can put all of your energy, focus and attention on this one thing and use all of your intellectual, emotional and physical resources to make it happen, since you won't be distracted by any other apples.

So, what is the most immediate decision that you are facing in your relationships? What kind of apple are you holding in your hand right now? Do you want it in your bag or not? :-)

Thank You for Reading

I appreciate your time and attention. :) If you enjoy my articles, I invite you to subscribe to my newsletter so you don't miss any!

4 Responses to Binary Decision Making (and Why it's Good)

• I like it :) I already try to live this way. I rationalize my decision making process a little differently, but still using a mathematical principle: induction. When I focus on the small sphere of influence around me and make small decisions that fill it with peace and love, those who are in the sphere experience that peace and love. It is my belief that they will be more likely to fill their sphere with similar feelings, and a wave of peace and love propagates outward from me by this process of induction. Even if I can't solve the problems in far away places, I can positively affect them by induction!
• I like it, Ken! :) Absolutely. And additionally I believe that you *do* directly influence problems in far away places when you practice being peace and love. In these matters distance is not relevant, and we are all connected with each other anyway. Thanks for creating lovely waves. <3
• And you call THAT easy? I didn't even read more than naming apples n and a bag X and you've lost me :)
• Ach Sandra, you just have some limiting beliefs about maths. :)
{"url":"http://www.rosinecaplot.com/2010/08/binary-decision-making-and-why-its-good/","timestamp":"2014-04-20T08:53:47Z","content_type":null,"content_length":"26865","record_id":"<urn:uuid:c8d0a678-d2c9-451c-842c-1aef301697e9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi in Real Life Date: 7/1/96 at 12:34:6 From: Ian Ralph Subject: help My name is Ian and I'm a teacher from High Wycombe Primary School W.A. We have just covered circumference of circles and pi. The children asked how this concept is used in real life, i.e. engineering, etc., and I had a devil of a time thinking of one. Also what / who is pi named after? Any help would be appreciated. Date: 01/04/97 at 14:59:02 From: Doctor Keith Subject: Re: help I love "how is this used" questions, since I am an engineer. Pi is an often-used number, but here are some general categories with Geometry problems: drawing, machining, etc. For instance, I used to work on fighter jets before I went back to get my Ph.D., and we would frequently need to calculate areas of the skin of the aircraft or arc lengths, for everything from fitting equipment in, to line-of-sight calculations. Additionally, pi comes up in machining parts for aircraft; for instance, you might need a circular slot for mounting a camera that has a certain radius and a certain arc length. Signals: radio, TV, radar, telephones, etc. You have probably heard of sine waves. Well, sine waves have a fundamental period of 2*pi, so pi becomes vital in signal processing, spectrum analysis (finding out what frequencies are in a wave you receive or send), etc. A neat example of this is listening to peoples' voices and rating them from high to low (bass). Then if you have access to a computer with a microphone and a sound player with a graphic display of the sound, you can take a sample of the voices and play them back in a sound player and watch the graph. Check to see if your guesses were right. The frequency plot that they are showing when the sound plays is actually used by engineers to do such things as decide sampling rates, ideal processing of the sound, etc., and the graph is usually in multiples of 2*pi. Probability: estimation, testing, simulation. Everyone's favorite distribution (normal or Gaussian) has pi in the formula, and it is used in all areas of engineering to simulate unknown factors and loading conditions. One example is what is called "white noise," which is a normally distributed random variable used in estimation to predict such things as wind gusts on a plane or the worst case vibrational loading on a beam (this is a really big use of pi). White noise is also used to give a certain amount of apparent "bumpiness" in many software simulations such as games. Navigation: global paths, global positioning. When planes fly great distances they are actually flying on a arc of a circle. The path must be calculated as such in order to accurately gauge fuel use, etc. Additionally, when locating yourself on a globe, pi comes into the calculation in most methods. Plenty of other areas exist, but I thought these would probably be the most easily understood by students. If you need additional info let us know--we are here to help. Pi is named after the Greek letter pi, which is the symbol we use for it. I have checked my math history book, and the original discoverer of pi is not known, but it was used by the ancient Egyptians, Greeks, Hebrews, and Babylonians. Many mathematicians have played with it and have improved the approximation that we have. 
Some examples are:

Babylonian (1800-1600 BC): pi ~ 3
Hebrew (1 Kings 7:23): pi ~ 3
Egypt (Rhind Papyrus): pi ~ 3 1/7

All of these are approximations, as they are measures of real near-circular objects (a large metal bowl in the Hebrew case, the volume of a cylindrical grain silo for the Egyptian reference) which would never be perfectly circular due to manufacturing considerations, measurement techniques, etc., but they served as useful approximations to do the necessary work. Thus such things as the volume of a can for packing things, or the area a water sprinkler can water, involve pi. It's everywhere!

Good luck.

-Doctor Keith, The Math Forum
Check out our web site! http://mathforum.org/dr.math/

Date: 02/04/97 at 18:53:54
From: Doctor Ken
Subject: Re: help

Hi - I was just going through some old questions and answers, and found this one. Here's another tidbit for your cranium: you might ask why we chose Pi as the letter to represent the number 3.141592..., rather than some other Greek letter like Alpha or Omega. Well, it's Pi as in Perimeter - the letter Pi in Greek is like our letter P.

-Doctor Ken, The Math Forum
Check out our web site! http://mathforum.org/dr.math/

Date: Sun, 7 Sep 1997 10:11:43 -0500
From: Anonymous

I used your site to help me confirm a simple formula that my failing memory simply lost a pointer to: the definition for circle circumference! In return, here's some more about how pi is used in the real world:

I bought a ski boat this year, and my front-wheel drive minivan has a tough time pulling it out of some of the sandier, wetter, and steeper boat launches; there simply isn't enough weight on the front tires in some cases to get good traction. It won't entirely fix my problem (a 4WD is in my future), but it will help (perhaps only marginally) to get slightly larger tires. The problem with getting larger tires is that you change the outer diameter (circumference) of the tire, changing both the speedometer and odometer readings.

One annoyance of passenger tires is that the sizes mix American and metric measurements. Most passenger tires, including my minivan's, have a tire size like P195 75R15, where the 'P' is for passenger (as opposed to cargo), the 195 is the width of the tire (the face that meets the pavement) in millimeters, the '75' is the aspect ratio (the sidewall or profile height of the tire, as a percentage of the width), the 'R' denotes radial construction, and the '15' is the inner (rim) diameter of the tire, in inches. So: inner diameter of the tire in inches, width in millimeters, sidewall height a percentage of the two.

The tire shop suggests that a P205 70R15 will give me that extra centimeter of tread (not much, huh?), and come closest to not affecting the speedometer and odometer readings of my P195 75R15. That is, a 205mm width tire with a 70% profile is 'close enough' to a 195mm width tire with a 75% profile. Is he right?

I spend a few minutes on the net getting some metric/American conversion formulas, heat up the left lobe a bit, and get an answer. A couple of constraints are that: 1) the only practical aspect ratios available for my vehicle are 60, 70, and 75; and 2) tire widths are available only in 10mm increments - 195, 205, 215, 225, etc.

Turns out, he's right. What I really want is a 208.9mm tire with a 70% aspect ratio, OR a 205mm tire with a 71.3% aspect ratio, neither of which is available, so the 205mm 70 is closest. The exact formula I come up with is a*w = 146.25, where a = aspect ratio (as a fraction) and w = tire width in millimeters.
How close is it? I run a few more numbers and find that it's not bad: at 65MPH, my speedometer will read 65.3MPH, and at 70MPH, it will read 70.3MPH - off by far less than what I can actually read. And, over the next 50,000 miles, the odometer will actually show 50,225 - an extra 225 miles. Again, somewhat negligible. So there you have it! Without pi, I would have had to trust the garage mechanic, something I always try to avoid!

- raz, Eagan, MN
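raz's tire arithmetic is easy to replay. The sketch below is my own (the function name and printed quantities are illustrative; it uses the sizing convention raz describes, with the rim diameter in inches and the section width in millimeters):

    # Outer tire diameter, in inches, for a width(mm)/aspect(%)/rim(in) size.
    MM_PER_INCH = 25.4

    def tire_diameter_in(width_mm, aspect_pct, rim_in):
        sidewall_in = width_mm * (aspect_pct / 100.0) / MM_PER_INCH
        return rim_in + 2 * sidewall_in   # rim plus a sidewall top and bottom

    old = tire_diameter_in(195, 75, 15)   # P195 75R15, about 26.5 inches
    new = tire_diameter_in(205, 70, 15)   # P205 70R15, about 26.3 inches

    # Holding the outer diameter constant means holding width * aspect
    # constant, which is raz's a*w = 146.25 rule (with a as a fraction):
    print(195 * 0.75, 205 * 0.70)         # 146.25 vs. 143.5

    # The speedometer was calibrated for the old tire, so the slightly
    # smaller new tire makes it read high by the ratio of the diameters:
    print(65 * old / new)

The exact figures depend on rounding conventions, but the conclusion is raz's: the error is small enough to ignore.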
{"url":"http://mathforum.org/library/drmath/view/57045.html","timestamp":"2014-04-21T05:58:04Z","content_type":null,"content_length":"12624","record_id":"<urn:uuid:d1367527-0059-481f-890f-660280d57c8e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
A Concise Introduction to the Theory of Integration
by Daniel W. Stroock. Posted in Mathematics, shared by Guest.

Type: eBook
Released: 1994
Publisher: Birkhauser
Page Count: 191
Format: djvu
Language: English
ISBN-10: 0817637591
ISBN-13: 9780817637590

From Scientific American: "This book, unusual in many respects, fully achieves its goal, a one-semester, concise but logically complete treatment of abstract integration. There are only 148 pages, making the book psychologically attractive and a price of $18..." --This text refers to an alternate edition.

Review: "This is a very attractive textbook... Unusual in many respects, [it] fully achieves its goal, a one-semester, concise but logically complete treatment of abstract integration. It is remarkable that [the author] has accomplished so much in so short a compass." - Mathematical Reviews (review of the first edition)

"A number of valuable applications and a good collection of problems... A very interesting, well-informed book which draws on recent approaches not found in several commonly used texts." - Zentralblatt Math (review of the second edition)

"The author succeeded in choosing the right level of generality and showed how a good combination of a measure and integration course and advanced calculus can be done. Strongly recommended to students as well as to teachers." - EMS Newsletter (review of the second edition)

"...the author is a distinguished probabilist/analyst who has made seminal contributions to the interface of probability theory with PDEs/harmonic analysis/functional analysis; the flavor of all these subjects is brought out in the book, especially in chapters V-VII... [the] book can be highly rewarding, serving as a launching pad for an intensive study of any branch of analysis including probability theory." - Current Science (review of the third edition) --This text refers to an alternate edition.
{"url":"http://bookmoving.com/book/a-concise-introduction-theory-integration_81411.html","timestamp":"2014-04-21T07:13:44Z","content_type":null,"content_length":"14795","record_id":"<urn:uuid:de5fbaaa-d1c4-4189-be11-fd733db56a54>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Working out peak of a curve
December 13th, 2004, 01:28 AM #1 - Flash M0nkey (Join Date: Sep 2001)

ok am making a flash game and need some help in writing a formula for it. basically user will select power and angle, character will then jump depending upon the values user entered. i need however to figure out, depending upon power, angle and a fixed value for gravity, where the peak of a curve will be so that character then starts to fall again - so that the character's x/y position increases until they reach peak of curve and then start to decrease again. what equation could i use to calculate this?

to give you some idea what i mean look at attached image:
red line: user enters high power setting & low angle setting
blue line: mid power setting & mid angle setting
green line: high power setting & high angle setting

you can see that depending upon the values entered by user the shape of the curve changes - now i will know the values for Power (P), Angle (A) and Gravity (G) but what equation could I use to figure out the peak at which the upward acceleration stops and the user then begins to fall down again?

Wow. Long time since I took calculus. I'll try to answer, though. At the peak of a curve, an object is neither rising nor falling: its vertical velocity is zero. To find the vertical velocity, you need to find the first derivative of the function which calculates the object's height. This derivative, rather than plotting height as a function of time, will plot vertical velocity as a function of time. The peak will be where the first derivative crosses the x-axis, or where the vertical velocity is equal to zero. The value of x at this point indicates the time at which this occurs. Now, as for how to incorporate that into a flash animation, I have no idea - you may need some higher math libraries. I don't even know if this will help, but wanted to see how good my calculus skills still are. I think I'm right. Somebody correct me if I'm wrong.

What attached image?

Government is like fire - a handy servant, but a dangerous master - George Washington
Government is not reason, it is not eloquence - it is force. - George Washington.
Join the UnError community!

heh image is now attached - lol soz for that >.<

hmm wonder if I could use something like:

Power (P) = 50
Angle (A) = 45
Gravity (G) = 5
B = P-G
C = A-G

then make B & C equal to P & A and then add B & C onto character's current x + y positions. they will start out as positive values but as am constantly taking G away from them will eventually become negative values and character will start a downward arc...... would just be a matter of getting a seemingly correct value for G so that the arc falls at aprox the right time.

Unless you have wind or something similar (I'm thinking Scorched Earth-type game here?), the horizontal velocity will remain constant. So, I think you will need to calculate a horizontal velocity and a vertical velocity from the initial power and angle variables, like so:

[vH = P cos A]; should give the horizontal velocity. This will remain constant.
[vV = P sin A]; should give the initial vertical velocity, which will be reduced by G for every loop iteration.

And the loop would look something like this:

    do {
        X = X + vH;
        Y = Y + vV;
        vV = vV - G;
    } while (Y >= 0);

The top of the curve should then be where vV equals 0 (or shortly before it becomes less than zero). Maybe calculus isn't needed after all, but I don't think you'll get away without trigonometry.
Government is like fire - a handy servant, but a dangerous master - George Washington
Government is not reason, it is not eloquence - it is force. - George Washington.
Join the UnError community!

ok code i came up with:

    //set inital variables
    onClipEvent (load) {
        alt = 30;
        speed = 30;
        gravity = 2;
    }
    //next part runs every time frame loads (12fps)
    onClipEvent (enterFrame) {
        Ymov = alt - gravity;
        Xmov = speed - gravity;
        this._y -= Ymov;
        //added next part in to stop pause looking as bad when equals 0
        if (Xmov == 0) {
            this._x += 3;
        }
        if (Xmov > 0) {
            this._x += Xmov;
        }
        if (Xmov < 0) {
            this._x -= Xmov;
        }
        alt = Ymov;
        speed = Xmov;
    }

works kinda but gives a very unnatural looking curve - i knew i should have listened more in maths.....to think i took bleedin advanced maths as well, seems like 100 yrs ago now

btw thought would add as am doing this in flash it adds its own lil quirk to it: the top left hand corner of screen is 0,0 so as you move -> x increases and as you move ^ y decreases - just to make things extra simple >.<

I think causing X movement to be reduced by gravity would cause some problems. I imagine the downward side is steeper than the upward side, perhaps doubling back on itself? Discounting wind and air resistance (which I doubt you're adding in yet at this point), the horizontal velocity should remain constant. Something like Xmov = Xmov + speed might work better.

    //set inital variables
    onClipEvent (load) {
        alt = (whatever);
        Xpos = (whatever);
        //place the clip at its starting position
        this._y = alt;
        this._x = Xpos;
        speed = new Object();
        speed.horizontal = power * Math.cos(angle);  //angle in radians
        speed.vertical = power * Math.sin(angle);
        gravity = 2;
        TopReached = false;
    }
    //next part runs every time frame loads (12fps)
    onClipEvent (enterFrame) {
        //determine how far to move
        Ymov = speed.vertical;
        Xmov = speed.horizontal;
        //move the object
        this._y -= Ymov;
        this._x += Xmov;
        //adjust vertical speed as a function of gravity
        speed.vertical -= gravity;
        //has the top been reached?
        if (speed.vertical <= 0 && !TopReached) {
            TopReached = true;
        }
    }

Then checking the TopReached flag will tell you when the top of the curve has been reached. Does that make any sense?
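For anyone following along outside Flash, here is the same frame-by-frame idea transliterated into Python (my sketch, not code from this thread; the starting values are placeholders):

    # Frame-by-frame projectile update, mirroring the ActionScript above.
    # Screen coordinates: y grows downward, so moving up means subtracting.
    import math

    power, angle_deg, gravity = 30.0, 45.0, 2.0
    x, y = 50.0, 180.0                   # starting position, pixels
    vx = power * math.cos(math.radians(angle_deg))
    vy = power * math.sin(math.radians(angle_deg))

    top_reached = False
    while y <= 180.0:                    # stop once we fall back to floor level
        x += vx
        y -= vy                          # the y axis points down
        vy -= gravity
        if vy <= 0 and not top_reached:
            top_reached = True
            print("apex near x =", round(x, 1), "y =", round(y, 1))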
I tried to adjust the coordinates to your y-reversed coordinate system Have fun! /edit: In case you want to incorporate the airflow, I could elaborate ( In short: the additional force (acceleration) is proportional to the area of the object (person, ie. feetsize /edit2: a little error in the main formula corrected. Reasonable initial conditions (example) If the only tool you have is a hammer, you tend to see every problem as a nail. (Abraham Maslow, Psychologist, 1908-70) Just so you know, the coordinate system with 0,0 top left isn't really weird. It's the coordinate system used in all (AFAIK) computer graphics. Any graphics coding I've done has used that coordinate system anyhow. December 13th, 2004, 01:36 AM #2 Flash M0nkey Join Date Sep 2001 December 13th, 2004, 01:37 AM #3 Senior Member Join Date Oct 2002 December 13th, 2004, 01:43 AM #4 Flash M0nkey Join Date Sep 2001 December 13th, 2004, 02:06 AM #5 Senior Member Join Date Oct 2002 December 13th, 2004, 02:09 AM #6 Flash M0nkey Join Date Sep 2001 December 13th, 2004, 02:11 AM #7 Flash M0nkey Join Date Sep 2001 December 13th, 2004, 02:31 AM #8 Senior Member Join Date Oct 2002 December 13th, 2004, 12:17 PM #9 Senior Member Join Date Mar 2004 December 14th, 2004, 01:31 AM #10 Custom User Join Date Oct 2001
{"url":"http://www.antionline.com/showthread.php?261993-Working-out-peak-of-a-curve","timestamp":"2014-04-18T09:26:06Z","content_type":null,"content_length":"119626","record_id":"<urn:uuid:48cdc1e9-640a-4614-8e6a-ff951751d26c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
A Text-Dependent Approach to Speaker Identification | EE Times
Design How-To

A. Sankaranarayanan received a Bachelor's degree in Electronics and Telecommunication Engineering at the University of Mumbai and plans to pursue graduate studies in Electrical Engineering. His area of interest is speech signal processing.

The past two decades have witnessed the emergence of security systems based on biometric technology. Biometric techniques rely on one or more physical features uniquely attributed to an individual. The most popular biometric methods include digital fingerprint identification, iris scanning, face recognition, and voice (speaker) identification. The advent of efficient DSP algorithms and fast digital computers to implement these algorithms has seen a tremendous rise in the growth of such systems.

Although digital fingerprint identification and iris scanning are extremely accurate indicators of an individual's identity, speaker identification is an upcoming technique. Speaker identification systems are popular in spite of their poorer accuracy vis-à-vis the other techniques previously mentioned because they are the least expensive to build (they can be implemented on any general-purpose computer) and are also non-invasive in nature.

Speaker identification systems may be classified in two categories based on their principle of operation.
• Text-dependent systems, which make use of a fixed utterance for test and training, and rely on specific features of the test utterance in order to effect a match.
• Text-independent systems, which make use of different utterances for test and training, and rely on long-term statistical characteristics of speech for making a successful identification.

Text-dependent systems require less training than text-independent systems and are capable of producing good results with a fraction of the test speech sample required by a text-independent system.

Speech-Production Model

The development of a text-dependent speaker identification system requires a thorough understanding of the nature of speech and the model of speech production. At a relatively high level, speech may be thought of as being composed of a string of phonemes (basic sound units). The English language consists of approximately 42 phonemes. Speech is produced by the flow of air through the various articulators such as the vocal tract, lips, tongue, and nose. Air is forced out of the lungs through the trachea and the glottis, where it passes through the vocal cords. The vocal cords, if tense, vibrate like an oscillator, but if relaxed, do not vibrate and simply let the air pass through. The air stream then passes through the pharynx cavity and, depending on the position of a movable flap called the velum, exits either through the oral cavity (mouth), or the nasal cavity (nostrils). In the former case, the tongue and the teeth may modify the flow of the air stream as well. Different positions of these articulators give rise to different types of sounds. All sounds can be divided into the following broad categories.

• Voiced sounds are produced whenever the vocal cords are tensed and vibrate. Vowels ('a', 'e', 'i', 'o', and 'u') and diphthongs fall in this category of sounds. The frequency of vibration of the vocal cords is called the pitch. Moreover, the vocal-tract configuration for these sounds results in a resonant structure; the vocal-tract resonance frequencies are known as formants.
• Unvoiced sounds are produced when the vocal cords are relaxed and, therefore, do not vibrate. Fricatives (sounds such as 'shh' and 'f') and aspirated sounds (whispered speech) are examples of unvoiced sounds. Turbulent airflow occurs either at the mouth (fricatives) or at the glottis (aspirated sounds) to produce speech that exhibits a distinct lack of periodicity. The spectrum of unvoiced sounds usually lacks resonant peaks and has a broadband structure.

• Plosive sounds are produced when there is a build-up of pressure due to constriction at some point in the vocal tract followed by a sudden release, which leads to transient excitation. This may occur with or without vocal cord excitation. Examples of plosive sounds include the 'p' in 'pin' (an unvoiced plosive) and 'b' in 'bin' (a voiced plosive).

A powerful tool for analysis of speech is the source-filter model (Figure 1 shows a simplified version) of human speech production. This model is an approximate representation of the excitation source and the vocal tract. Although not very accurate for some types of sounds (especially unvoiced sounds), it provides a useful way to quantify several parameters that you can use for speaker identification.

Figure 1: The source-filter model of speech production.

The model in Figure 1 assumes two sources; the switch alternates between the glottal pulse generator (for voiced sounds) and the random noise generator (for unvoiced sounds). These sources are filtered by the vocal tract (represented by the time-varying filter). The figure omits some details (such as the mouth radiation model) for simplicity.

• The glottal pulse generator represents the vibration of the vocal cords and is the active source for production of voiced sounds such as vowels. It is also known as the buzz source. The period of the impulse train generated by this source is known as the pitch period or fundamental frequency of the utterance. The output frequency spectrum is rich in harmonics of the fundamental frequency.

• The random noise generator is responsible for generating the random turbulence and pressure build-up waveform for unvoiced sounds such as the fricatives. It is sometimes called a hiss source. The frequency spectrum of this source is relatively flat; this explains the broadband nature of unvoiced sounds.

• You can represent the dynamic nature of the speech articulators constituting the human vocal tract by a time-varying digital filter, labeled in Figure 1 as the vocal-tract filter model. The parameters (coefficients) associated with this filter vary over a period of about 5 to 20 milliseconds, depending on the nature of the utterance, in step with the changing configuration of the vocal tract. Since you can model the vocal tract as a tube whose shape changes with time, it exhibits resonance at specific frequencies (formants). Peaks in the frequency response of the vocal-tract filter represent these formants.

The source-filter model assumes that it is possible to separate the excitation source from the vocal-tract filter, and also assumes an all-pole (autoregressive) vocal-tract filter. These assumptions are not entirely accurate for many speech sounds. Nevertheless, this model forms a very useful basis for understanding the nature of speech production and for quantifying several parameters that characterize speech.
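To make Figure 1's block diagram concrete, here is a toy rendering (my own sketch in Python with NumPy and SciPy; the all-pole coefficients are arbitrary placeholders, not measured vocal-tract values):

    # Toy source-filter synthesis: a buzz or hiss source drives an all-pole filter.
    import numpy as np
    from scipy.signal import lfilter

    fs = 8000                            # sampling rate, Hz
    n = fs // 2                          # half a second of samples

    def buzz(pitch_hz):
        # Glottal pulse generator: an impulse train at the pitch period.
        src = np.zeros(n)
        src[::fs // pitch_hz] = 1.0
        return src

    def hiss():
        # Random noise generator: flat-spectrum turbulence.
        return np.random.randn(n)

    a = [1.0, -1.3, 0.9]                 # placeholder all-pole coefficients
    voiced = lfilter([1.0], a, buzz(pitch_hz=100))   # vowel-like output
    unvoiced = lfilter([1.0], a, hiss())             # fricative-like output

Switching between buzz() and hiss() plays the role of the switch in Figure 1; a real synthesizer would also update the filter coefficients every 5 to 20 milliseconds, as described above.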
Speaker-Identification Features

The source-filter model discussed in the previous section provides useful parameters for identifying a speaker. One such quantity is the pitch period or fundamental frequency of speech. Pitch varies from one individual to another; pitch frequency is high for female voices and low for male voices. This suggests that pitch might be a suitable parameter to distinguish one speaker from another, or at least to narrow down the set of probable matches.

Analysis of the frequency spectrum of the test utterance also provides valuable information about speaker identity. The spectrum contains both pitch harmonics and vocal-tract resonant peaks, making it possible to identify the speaker with a high probability of being correct. You can also use the vocal-tract filter parameters (filter coefficients) to good effect for speaker identification. This is due to the fact that different speakers have different vocal-tract configurations for the same utterance.

In any text-dependent speaker identification system, an important decision is the choice of test utterance. As discussed in the previous section, the source-filter model is most accurate at representing voiced sounds, such as the vowels. Vowels have a definite, consistent pitch period. The vocal-tract configuration for vowel utterances exhibits a clear formant (resonant) structure. The frequency spectrum corresponding to vowel utterances therefore contains a wealth of information that can be used for speaker identification. The prototype speaker identification system built by the author (to be described later in this paper) makes use of the vowels ('a', 'e', 'i', 'o', and 'u') for the test utterance.

Pitch-Period Estimation

A number of algorithms exist for pitch-period estimation. The two broad categories of pitch-estimation algorithms are time-domain algorithms and frequency-domain algorithms. Time-domain algorithms attempt to determine pitch period directly from the speech waveform (examples include the Gold-Rabiner algorithm and the autocorrelation algorithm). Frequency-domain algorithms use some form of spectral analysis to determine the pitch period (an example is the method of cepstral truncation). Although frequency-domain algorithms may yield higher accuracy, time-domain algorithms have the advantage that they can be implemented with minimal difficulty on a general-purpose digital computer. A computationally efficient algorithm due to Gold and Rabiner makes use of parallel processors to produce pitch period estimates that are quite reliable. A brief description of the algorithm follows.

The algorithm begins by passing the speech signal through a low-pass filter with a cutoff frequency of 600-800 Hz, which removes the higher harmonics of pitch frequency that might interfere with accurate pitch estimation. This is acceptable, since the pitch frequency rarely increases above 500 Hz, even for a high-pitched female voice.

The filtered speech signal is processed to generate six impulse trains. These impulse trains come from the local maxima and minima of the speech waveform; their function is to retain the periodicity of the speech signal while discarding features irrelevant to the process of pitch detection. The reason for using six impulse trains is that the algorithm must function with few errors even under extreme conditions (in the presence of harmonics). In many cases, only two or three of the six impulse trains will indicate the correct pitch period; the rest will be incorrect. However, the redundancy built into the algorithm ensures that it is able to determine the fundamental frequency with a low probability of error even in these cases. The six impulse trains are fed to six identical pitch extractors.
Each pitch extractor latches on to an impulse and holds it for a blanking interval, during which subsequent impulses are ignored. After the blanking interval, the latched value begins to decay exponentially. The decay period ends when the pitch extractor encounters an impulse that is greater in amplitude than the instantaneous amplitude of the decaying value. The time period between the initial impulse latch and the end of the decay phase is the new pitch-period estimate. The current average pitch estimate is calculated as the mean of the previous average pitch estimate and the new pitch-period estimate. New values for the blanking interval and exponential-decay constant are empirically determined from the current average pitch estimate.

The final pitch-period estimate is determined from the current and previous pitch estimates (and the sums of the current and previous pitch estimates) of each of the six pitch extractors through a process of consensus. This ensures accuracy of the algorithm.

The algorithm occasionally picks the wrong pitch-period estimate; this problem manifests itself in the form of impulsive noise that occurs randomly in the pitch-estimate array and can cause serious errors during comparison. A low-pass filter will remove these impulses, but will 'spread' or 'blur' the noise over the pitch contour. A median filter, however, produces the desired result of removing most of the impulsive noise while retaining the original pitch contour (Figure 2). For most purposes, a three- or five-point median filter is suitable for eliminating noise in the pitch estimates.

Figure 2: The low-pass (moving average) filter 'blurs' the pitch contour by spreading the impulsive noise, while the median filter removes impulsive noise without affecting the pitch contour. The filters used were both five-point.
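A minimal sketch of this smoothing step (my own illustration, assuming NumPy; the 120 Hz contour and the glitch positions are made up):

    # Median vs. moving-average smoothing of a pitch contour with impulsive errors.
    import numpy as np

    def median_filter(x, k=5):
        # k-point running median; the window is clamped at the edges.
        h = k // 2
        return np.array([np.median(x[max(0, i - h):i + h + 1])
                         for i in range(len(x))])

    pitch = np.full(100, 120.0)          # flat 120 Hz contour
    pitch[[20, 55, 56, 80]] = 240.0      # pitch-doubling glitches

    clean = median_filter(pitch, k=5)    # back to 120 Hz everywhere
    blurred = np.convolve(pitch, np.ones(5) / 5.0, mode="same")
    # 'blurred' keeps smeared bumps around each glitch, as in Figure 2.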
Spectral Analysis: Wavelets

Spectral analysis of speech is complicated by the fact that the speech signal is non-stationary; in other words, it has a time-varying frequency spectrum depending on the utterance. However, the speech articulators vary relatively slowly, and it is not incorrect to assume that short segments (about 10-20 milliseconds) of speech are stationary. This leads to the idea of short-time techniques, in which analysis is carried out with such spectrally invariant segments (windows) of speech. The short-time Fourier transform is one of the most popular techniques in this category. The short-time Fourier transform results in a spectrogram or time-frequency plot, which illustrates the temporal variation of the spectral components of speech.

Although popular, the short-time Fourier transform is limited by the uncertainty principle of spectral analysis, which states that the product of uncertainty in time and in frequency has a finite lower bound. In other words, resolution in time and frequency cannot be increased independently of one another: an increase in time resolution (a smaller window) results in a decrease in frequency resolution (spectral leakage) and vice-versa. The short-time Fourier transform uses nominally fixed window widths, with the consequence that it can only provide fixed resolution in time and frequency.

Recently, we've seen the emergence of a new technique known as the wavelet transform for spectral analysis of non-stationary signals. It makes use of special time functions known as wavelets, and provides the flexibility in time-frequency resolution unobtainable with the classical short-time Fourier transform. With wavelets, it is possible to analyze a signal at several levels of resolution, making it possible to capture transient, high-frequency bursts with poor frequency resolution and also slowly varying characteristics with high frequency resolution. Therefore, it is possible to trade off frequency resolution for better time resolution (for analyzing transients) and time resolution for better frequency resolution (for analyzing slow variations), a facility not afforded by the short-time Fourier transform. The CWT (Continuous Wavelet Transform) is given by the following equation:

$$W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt \quad (1)$$

f(t) is the non-stationary time signal to be analyzed. The function ψ(t) is called the mother wavelet. The mother wavelet is an oscillatory function having zero mean; most of its energy is confined in a small region near the origin. The parameter a is referred to as the scale or dilation. The scale specifies the time duration or 'stretch' of the wavelet; a large value of scale indicates poor time resolution and increased frequency resolution and vice-versa. The parameter b is known as the translation. The translation specifies the position of the wavelet on the time axis. Both parameters are continuous.

You can use a continuous-time convolution operation to interpret the CWT given by Equation 1. The scale parameter specifies an infinite number of impulse responses with which to convolve the signal f(t). This interpretation is equivalent to passing the signal f(t) through a bank of (infinite) analog filters, each having an impulse response specified by one value of scale (Figure 3). The filters are of the band-pass variety (this is expected, since the mother wavelet has zero mean) and have the special property that their Q-factors (center frequency to bandwidth ratio) are equal.

Figure 3: Fourier transform of a wavelet for three values of scale. Note the band-pass nature of the filters. As the center frequency increases, the bandwidth of the filter also increases in proportion, keeping their ratio (the Q-factor) constant. The filters have been normalized so that the energy of their impulse responses is equal.

The CWT is of little computational value. For implementation on a digital computer, you must discretize the scale and translation parameters. The discretization is usually dyadic, meaning scale and translation parameters are integral powers of two. This leads to a representation of the continuous-time function as a linear combination of dyadically scaled and translated wavelets known as the DWT (Discrete Wavelet Transform).

There is a further complication. Although the DWT discretizes the scale and translation parameters, it still applies to a continuous-time function. Digital computers, on the other hand, work with a discrete version of the time signal itself (obtained by sampling the continuous-time signal at the Nyquist rate).

The above considerations lead to a modified form of the DWT that digital filters can implement. Samples of the discrete-time signal are considered to be the approximation coefficients of the signal at the highest (finest) possible level of resolution (labeled the 0th level of resolution). These represent the entire digital frequency range from 0 to π radians. A process of high-pass filtering using a half-band filter and down sampling produces the detail coefficients at the next (coarser) level of resolution (the first level). The detail coefficients represent the frequency range between π/2 and π radians.
Similarly, the approximation coefficients at the first level of resolution are obtained by passing the signal through a low-pass filter and down sampling the result. These coefficients contain spectral information in the range 0 to π/2 radians. Continuing in this fashion, you can use the approximation coefficients at this coarser level to generate approximation and detail coefficients at further coarser levels (levels 2, 3, ...). At each level, the spectrum of the approximation coefficients is divided in two by the low-pass and high-pass filtering operations; thus the DWT is reduced to a form of dyadic sub-band filtering (Figure 4 illustrates a three-level decomposition).

Figure 4: Dyadic sub-band configuration for a discrete-time three-level decomposition, illustrating the sub-bands occupied by the detail coefficients at the first, second, and third levels of resolution. The spectrum (extending from the end of the third dyadic sub-band up to DC) occupied by the approximation coefficients at the coarsest (third) level is not shown.

This process is carried out recursively with a bank of digital filters until the required level of frequency resolution is achieved (for a speech signal band limited to ~6 kHz, a seven-level analysis is usually sufficient). The process of generating the approximation and detail coefficients at the kth level of resolution given the approximation coefficients at the (k-1)st level is summarized by the schematic of Figure 5.

Figure 5: Generation of approximation and detail coefficients at a coarser level using approximation coefficients at the next finer level of resolution.

In Figure 5, a[k](n) and b[k](n) are the approximation and detail coefficients respectively at resolution level k. a[k-1](n) are the approximation coefficients at the (k-1)st level of resolution. h(n) is the low-pass (approximation) filter and g(n) is the high-pass (detail) filter. The exact nature (impulse response) of these filters depends on the wavelet chosen.

Linear Predictive Analysis

LPA (Linear Predictive Analysis) is a powerful and popular technique for estimating the vocal-tract filter coefficients (predictor coefficients) which, as already mentioned, are useful for speaker identification since different speakers have different vocal-tract configurations for the same utterance. The basic premise of LPA is that you can approximate the current sample of the speech signal (within reasonable accuracy limits) as a linear combination of past samples of speech. The difference between the predicted sample and the actual sample is known as the prediction error. You can determine a set of predictor coefficients by minimizing the mean-squared error. Thus, the theory of LPA is intimately tied to the source-filter model of speech production.

The number of coefficients used to characterize the time-varying vocal-tract filter is known as the order of the predictor. As already mentioned, the filter is treated as an all-pole system, also known as an autoregressive model. This imposes certain limitations on the filter in that it is able to accurately model only voiced sounds, and introduces significant prediction error for unvoiced sounds. Moreover, the transfer function of the filter requires zeros for accurately modeling nasals, a facility the autoregressive model does not afford. In spite of these limitations, autoregressive LPA provides a sufficiently accurate model for speaker identification, especially if the test utterance comprises vowels.

The vocal-tract filter is a time-varying system.
A new set of predictor coefficients must, therefore, be evaluated once every 10-20 milliseconds. The LPA algorithm typically sections the speech signal into windows 10-20 milliseconds long, with an overlap of about 5-10 milliseconds. Minimizing the mean-squared error between the predicted and actual samples within the window yields a set of p linear equations, where p is the predictor order. You can solve this set of equations using one of two techniques: the autocorrelation method or the covariance method. Although the latter results in faster convergence, the former guarantees a stable predictor and is more often used. The matrix form of these equations for the autocorrelation method is given by Equation 2:

R a = r    (Equation 2)

where R is the p-by-p matrix whose (i, j) entry is R(|i - j|), a = (a[1], a[2], ..., a[p]) is the vector of predictor coefficients, and r = (R(1), R(2), ..., R(p)). Here R(k) represents the short-time autocorrelation function of the speech signal. The solution of this set of linear equations can be found by ordinary matrix inversion, but a computationally efficient iterative solution due to Levinson and Durbin is often employed. This algorithm exploits the special structure of the autocorrelation matrix in Equation 2: the matrix is symmetric and has equal elements along each diagonal, i.e., it is Toeplitz. (A minimal sketch of this recursion appears a little further below.) You can obtain a reasonably accurate estimate of the vocal-tract filter using a tenth- or twelfth-order predictor.

The transfer function and frequency response of the vocal-tract filter can be easily determined once the predictor coefficients have been evaluated. Figure 6 shows the vocal-tract response for a 20-millisecond frame of the voiced utterance 'a' for two speakers. The spectrum is smooth and shows no harmonic ripple due to pitch. A clear formant structure is visible, and both the location and the amplitude of the formants differ between the speakers, vindicating the effectiveness of LPA for speaker identification.

Figure 6: Vocal-tract filter responses of two speakers uttering the voiced sound 'a'. A twelfth-order predictor was used to capture the vocal-tract resonant peaks during a 20-millisecond stationary period. Note the complete absence of pitch harmonics in the spectra and the clear formant structure (three formants). Also note the difference in amplitude and location of the formants for the two speakers.

Distance Metrics

During the training phase, the features described in the previous sections must be extracted from the training utterance and stored in a database (the collection of extracted features will henceforth be referred to as a profile). The test phase involves creating a profile from the test utterance (which is the same as the training utterance in a text-dependent speaker-identification system) and comparing this profile with those stored in the database. The profile in the database that is 'closest' to the test profile (subject to some independent threshold) is then declared a match. The measure of 'closeness' between two profiles is provided by suitable distance metrics; different features within the profile may use different distance metrics.

The squared-Euclidean distance is well suited to comparing the pitch estimates of two profiles. The squared-Euclidean distance between two N-dimensional pitch vectors {a[1], a[2], ..., a[N]} and {b[1], b[2], ..., b[N]} is given by Equation 3:

d = (a[1] - b[1])^2 + (a[2] - b[2])^2 + ... + (a[N] - b[N])^2    (Equation 3)
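Returning to the Levinson-Durbin recursion mentioned above, here is a minimal illustrative implementation of my own; the autocorrelation values fed to it are synthetic placeholders, and the sign convention matches Equation 2:

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve the Toeplitz normal equations of Equation 2 for a[1..p].

    R: autocorrelation values R(0), R(1), ..., R(p).
    Returns the predictor coefficients and the final prediction-error power.
    """
    a = []                      # a[j] holds coefficient a[j+1]
    E = R[0]                    # zeroth-order prediction-error power
    for i in range(1, p + 1):
        acc = sum(a[j] * R[i - 1 - j] for j in range(i - 1))
        k = (R[i] - acc) / E    # reflection coefficient
        a = [a[j] - k * a[i - 2 - j] for j in range(i - 1)] + [k]
        E *= 1.0 - k * k        # error power shrinks at every order
    return np.array(a), E

# Synthetic autocorrelation values (placeholders, not from real speech)
R = np.array([1.0, 0.8, 0.5, 0.25, 0.1])
a, E = levinson_durbin(R, p=4)

# Check against direct solution of the Toeplitz system R a = r
T = np.array([[R[abs(i - j)] for j in range(4)] for i in range(4)])
print(np.allclose(T @ a, R[1:5]))  # True
```

The recursion needs only O(p^2) operations instead of the O(p^3) of general matrix inversion, which is why it is preferred for frame-by-frame analysis.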
Pitch vectors extracted from speech will almost certainly be of different lengths, so the longer vector must be truncated to the length of the shorter one before Equation 3 is applied. The distance is also usually normalized to remove the effect of pitch-vector length.

The DWT coefficients contain spectral information in dyadic sub-bands whose location and extent depend on the level of resolution. One possible method for comparing two sets of DWT coefficients follows. For both DWTs, the fraction of normalized (per-sample) energy in each scale is evaluated, and the ratio of the corresponding fractional energies is taken (for similar DWTs, this ratio should be close to unity; it is inverted if less than unity). These ratios are weighted by a non-linear (decreasing) function of the type a^n, where 0.92 <= a <= 0.96 and n is the scale index. This is because the ratios of fractional energies at higher scales are in greater error, owing to the smaller number of samples there; assigning lower weights to those scales reduces the error in the final distance measure. The logarithm of each weighted ratio is then accumulated. For DWTs of two utterances by the same speaker, this distance is close to zero.

LPA provides only an approximate estimate of the vocal-tract frequency response. Because of noise as well as the inexactness of the linear prediction model, the predictor coefficients obtained from two speech samples of the same utterance by the same individual will vary. The Itakura distance provides an estimate of the distance between two sets of linear predictor coefficients. The mathematical expression for this distance metric is given by Equation 4, commonly written as

d(a, â) = log[(â' R â) / (a' R a)]    (Equation 4)

In Equation 4, a and â are the two predictor-coefficient vectors being compared, and R is the autocorrelation matrix corresponding to the profile stored in the database (see Equation 2). This distance metric is accumulated for each frame of speech (after an initial adjustment to make the number of LPA frames equal). The final distance may be normalized to account for speech-rate variability.

The final distance between two profiles is a weighted sum of the three distance metrics previously discussed. Weighting is necessary because not all features are equally effective at identifying a speaker. The pitch estimates of two individuals may be similar, in which case the squared-Euclidean distance would be small. By contrast, DWT and LPA coefficients are much better at identifying a speaker, yielding relatively small distances for a match and large distances for a mismatch.

Performance Criteria

The performance of a speaker-identification system is described in terms of three parameters:

• A false acceptance occurs when the system incorrectly identifies an unregistered individual as an enrolled one, or when one registered individual is mistaken for another. The FAR (False Acceptance Ratio) is the ratio of the number of false acceptances to the total number of trials. You can reduce the FAR by setting a strict (low) threshold.

• A false rejection occurs when the system incorrectly refuses to identify an individual who is registered with the system. The FRR (False Rejection Ratio) is the ratio of the number of false rejections to the total number of trials. You can minimize the FRR by setting the threshold to a liberal (high) value.

• The equal error rate is the error rate of the system when the FAR and FRR are made equal to each other. You can obtain it by plotting the FAR and FRR curves against the threshold value and locating their intersection.
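To illustrate the last point, the sketch below computes FAR and FRR across a sweep of thresholds from hypothetical match and mismatch distance scores (the score arrays are made up for illustration) and locates the threshold where the two curves cross:

```python
import numpy as np

# Hypothetical distance scores (smaller = closer match)
genuine = np.array([0.10, 0.15, 0.22, 0.30, 0.12, 0.25])   # same-speaker trials
impostor = np.array([0.55, 0.48, 0.70, 0.40, 0.62, 0.35])  # different-speaker trials

thresholds = np.linspace(0.0, 1.0, 1001)
# Accept a trial when its distance is at or below the threshold
far = np.array([(impostor <= t).mean() for t in thresholds])  # false acceptances
frr = np.array([(genuine > t).mean() for t in thresholds])    # false rejections

i = np.argmin(np.abs(far - frr))   # point where the curves cross
print("EER ~", (far[i] + frr[i]) / 2, "at threshold", thresholds[i])
```

Lowering the threshold drives FAR toward zero and FRR toward one, and vice versa, which is exactly the conflict discussed next.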
The requirements for low FAR and low FRR are thus conflicting; both parameters cannot be lowered simultaneously. However, a low FAR is vital for a good speaker-identification system (otherwise the security of the system would be jeopardized), and most systems are biased toward good FAR performance at the expense of FRR.

Prototype System

The author has developed a small-scale prototype speaker-identification system based on the principles described in the previous sections of this paper. The entire system has been developed using object-oriented concepts in the C++ language. An important design objective was to ensure a modular and highly portable system. The prototype system uses a fixed training and test utterance comprising the English vowels ('a', 'e', 'i', 'o', and 'u'), for the reasons discussed earlier. A sampling rate of 11,025 Hz is used, limiting the maximum analog frequency to ~5.5 kHz, which is sufficient to preserve all required information.

In the training phase, feature-extraction algorithms are used to create a profile from the speech sample. The Gold-Rabiner algorithm is used to estimate pitch; pitch post-processing makes use of a five-point median filter. Extraction of spectral information is accomplished using a seven-level DWT, yielding a peak frequency resolution of ~40 Hz at the lower end of the spectrum. The DWT makes use of a filter bank corresponding to the Daubechies (D2) wavelet. LPA is performed on the speech signal after first-order pre-emphasis (high-pass filtering) to compensate for the 6 dB/octave roll-off characteristic of the vocal tract. A twelfth-order predictor is used. Profiles thus created are stored in a local disk database.

In the test phase, the same features are used to create a profile from the test utterance. The test profile is then compared with the profiles in the database. The profiles in the database are indexed on overall average pitch, and a modified binary-search algorithm is used to retrieve the profiles more efficiently than a sequential search would. The profile in the database that yields the smallest distance to the test profile is chosen (subject to an independent threshold) as the match. The system is adaptive; in other words, it is capable of tracking slight changes in speech patterns over multiple test utterances. A successful match causes the profile in the database to be updated upon request.

The system was tested with a group of fifteen speakers: nine males and six females. Ten of the fifteen speakers were enrolled in the database. Three values of threshold (STRICT, NORMAL, and LIBERAL) were used to evaluate the performance of the system, with three trials conducted for every individual at each threshold. The system performance characteristics, FAR and FRR, were determined for each threshold, and the point of intersection of the FAR and FRR curves yielded the equal error rate. The system was found to yield a very low error rate (both FAR and FRR) for registered individuals. The FAR was, however, quite considerable for individuals not registered with the system. Tests also indicated that the system was resistant to minor changes in utterance rate and intonation.

The prototype speaker-identification system was developed as a senior undergraduate project at the University of Mumbai. The author gratefully acknowledges the invaluable contribution of his collaborators Mr. Akshay Bhat and Mr. Mihir Gosrani that made possible the successful implementation of this system. The author also wishes to thank Mr.
Milind Shah, whose advice on fundamental topics in speech signal processing laid the foundation for much of the investigation.
{"url":"http://www.eetimes.com/document.asp?doc_id=1275846","timestamp":"2014-04-19T22:30:36Z","content_type":null,"content_length":"161935","record_id":"<urn:uuid:2a4c83c3-31d8-4001-b769-d5bed3720aa6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of mathematical ideas

Here are a few files on topics vaguely applying mathematics. (At least the reader is sure to say, "Only a mathematician would do it like this!") Applications slightly closer to traditional mathematics are part of the general collection of math essays (link below). Other pages at my own web site you may find of interest:
• My home page is just a short introduction to the pages I maintain
• I keep a large set of pages which provide a thematic introduction to the subject areas of modern mathematics, complete with answers to thousands of more or less common questions.
This page was last modified 2005/01/05 by Dave Rusin, rusin@math.niu.edu
{"url":"http://www.math.niu.edu/~rusin/uses-math/index.html","timestamp":"2014-04-20T18:25:31Z","content_type":null,"content_length":"2055","record_id":"<urn:uuid:022418d3-30f4-47c7-8d79-c7aecca877f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of Wave Solutions of an Adhenovirus-Tumor Cell System

Abstract and Applied Analysis, Volume 2012 (2012), Article ID 590326, 13 pages. Research Article.

^1 Laboratoire des Interactions Ecotoxicologie, Biodiversité, Ecosystèmes, Université de Lorraine, CNRS UMR 7146, 8 rue du Général Delestraint, 57070 Metz, France
^2 Laboratoire de Mathématiques Raphaël Salem, Université de Rouen, UMR 6085 CNRS, Avenue de l'Université, P.O. Box 12, 76801 Saint Etienne du Rouvray, France

Received 2 December 2011; Accepted 13 February 2012
Academic Editor: Muhammad Aslam Noor

Copyright © 2012 Baba Issa Camara and Houda Mokrani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We discuss the biological background and mathematical analysis of glioma gene therapy for contributing to cancer treatment. Using a reaction-diffusion system, we model the interactions between glioma cells and viruses. We establish sufficient conditions on the model parameters which guarantee the permanence of the system and the existence of periodic solutions. Our study has experimental and theoretical implications for prospective therapy management strategies.

1. Introduction

Diffuse infiltrative gliomas are the most frequent primary central nervous system (CNS) tumors in adults. Their deserved reputation as devastating diseases is due in large part to their widespread invasiveness in the neuropil, that is, the dense network of interwoven neuronal and glial cell processes. Partially because of their growth pattern, curative treatment for diffuse gliomas is generally impossible. Although patients with low-grade diffuse gliomas may survive for multiple years, these tumors lead to the death of the patient sooner or later, often after progression to high-grade malignancy. Whereas surgery for most other tumors aims at complete resection, the diffuse growth of gliomas in the brain parenchyma precludes complete tumor removal. To increase the survival time of patients with malignant brain tumors, novel therapeutic alternatives are therefore being explored. Some of these experimental treatment strategies are based on advances in immunotherapy, stem cell therapy, local chemotherapy, and radiotherapy. However, in areas where the original tissue architecture is relatively preserved, the blood-brain barrier may form an obstacle to the optimal delivery of chemotherapeutics to diffuse infiltrative tumor cells, and eradicating diffuse infiltrative glioma cells by radiotherapy without significantly damaging the infiltrated brain parenchyma has been difficult to achieve [1].

In addition, gene therapy is becoming a promising alternative. In essence, gene therapy consists of the delivery of a gene of interest to tumor cell populations to control and, when possible, kill the growing tumor. Viruses are prominent vehicles for gene therapy, and some adenoviral vectors exhibit oncolytic properties. To this end, a variety of viral vectors have been developed, with oncolytic viruses emerging as an innovative therapeutic tool for these tumors. To be effective, a virus used for oncolytic therapy must have several features.
The desired properties of these vectors include selectivity for the tumor target, minimal brain and systemic toxicities, and the capacity to penetrate and diffuse throughout the brain to reach all neoplastic foci residing beyond the resection border of the tumor. In addition, the viral vector needs to remain active despite evoking an immune response. The goal of developing an ideal vehicle for the treatment of malignant brain tumors remains to be achieved. A wide variety of viral vectors have been developed and tested in the setting of gene therapy for malignant gliomas. These are based on different kinds of viruses, such as herpes simplex virus, retrovirus, measles virus, reovirus, and adenovirus. Some have shown promising results when tested in animal models of intracranial gliomas, but, to date, clinical trials performed in humans have not shown a significant increase in survival.

Various vectors have been targeted toward cancer cells by deleting the genes responsible for bypassing those cells' antiviral proteins. Without these genes, the designed vectors are able to replicate only within cancer cells with disrupted antiviral mechanisms. The deletion of viral genes to enhance specificity for the killing of neoplastic cells is a principle well exemplified by the actions of two oncolytic adenoviruses, ONYX 015 and Ad5-Delta24. ONYX 015 has a deletion in the viral genomic region coding for E1B 55kd. This deletion effectively limits the replication of the virus to neoplastic cells that have a defective p53 pathway [2, 3]. A Phase I clinical trial examining the effects of injecting ONYX 015 into peritumoral regions of recurrent malignant gliomas was recently completed and published [4]. In that study, ONYX 015 was injected into the walls of tumor resection cavities. This trial proved that the injection of up to 10^10 plaque-forming units of ONYX 015 into brain tissue surrounding a resected malignant glioma is safe in humans.

Extensive efforts have been dedicated over many years to the mathematical modelling of cancer development [5–7]. These mathematical models serve as valuable tools to predict possible outcomes of virus infection and to propose optimal strategies of anti-virus therapy. Wodarz [8, 9] presented a mathematical model that describes the interaction between two types of tumor cells (the cells that are infected by the virus, and the cells that are not infected but are susceptible to the virus insofar as they have the cancer phenotype) and the immune system. Our system is more general than the one considered in [8, 9] even when there is no diffusion. Because the free virus particles are very small, they disperse in the fluid tissue like Brownian particles; we have therefore incorporated into our model a diffusion term for the free virus. We also assume that the tumor has logistic growth, which can be slowed down by the inhibitor. Thus, the tumor admits a maximum size and density defined by the carrying capacity. When the virus is administered, the dynamic interactions between the virus and tumor cell populations are described by a diffusive ratio-dependent predator-prey model of reaction-diffusion equations (1.1)–(1.4) in the tumor region, where the three unknowns are the number density of susceptible, uninfected tumor cells; the number density of infected tumor cells; and the number density of free virus, that is, virus in the extracellular tissue.
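The displayed equations (1.1)–(1.4) did not survive extraction, so the following is only an illustrative sketch of a Wodarz-type ODE model with logistic tumor growth; the functional forms, variable names (x, y, v), parameter names, and values are my assumptions, not the authors' exact system, and the spatial diffusion term is omitted for brevity:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, r=0.2, K=1.0, beta=1.5, a=0.4, k=2.0, u=0.8):
    """Illustrative Wodarz-type oncolytic-virus model (assumed forms).

    x: uninfected tumor cells, y: infected tumor cells, v: free virus.
    Logistic tumor growth with carrying capacity K; mass-action infection.
    """
    x, y, v = s
    dx = r * x * (1 - (x + y) / K) - beta * x * v   # growth minus infection
    dy = beta * x * v - a * y                       # infected cells die at rate a
    dv = k * y - u * v - beta * x * v               # virus release minus clearance
    return [dx, dy, dv]

# Start with a half-grown tumor and a small dose of free virus
sol = solve_ivp(rhs, (0, 200), [0.5, 0.0, 0.1], dense_output=True, rtol=1e-8)
t = np.linspace(0, 200, 5)
print(sol.sol(t).T)   # columns: x, y, v sampled over time
```

Depending on the parameters, models of this type settle to a tumor-free state, a coexistence equilibrium, or sustained oscillations, which is the kind of long-term behavior (permanence, periodic solutions) the paper analyzes.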
When the parameters of system (1.1)–(1.4) are constant, we determined in [10] the conditions for optimal therapy and estimated, by numerical simulations, the patient survival time when the tumor cannot be cured.

This paper is organized as follows. Section 2 is devoted to some preliminaries, which are needed in the next sections, including some lemmas due to Walter and Smith. In Section 3, some conditions for the ultimate boundedness of solutions and the permanence of this system are established.

2. Preliminaries

We need the following lemmas, due to Walter [11] and Smith [12], respectively.

Lemma 2.1. Suppose that vector functions and , , satisfy the following conditions: (i) they are of class in , , and of class in , where is a bounded domain with smooth boundary; (ii) , where , , the vector function is continuously differentiable and quasi-monotonically increasing with respect to and , , ; (iii) , . Then , for all .

Lemma 2.2. Assume that and are positive real numbers, a function is continuous on , continuously differentiable in , with continuous derivatives and on , and satisfies the following inequalities: where is bounded on . Then on . Moreover, is strictly positive on if is not identically zero.

Consider the following logistic differential equation: where ; and are positive constants.

Lemma 2.3. Every solution , of (2.2) satisfies

3. Permanence

Throughout the paper we always assume that : , , , , and are bounded positive-valued functions on , continuously differentiable in , and and are periodic in with a period .

Definition 3.1. Solutions of system (1.1)–(1.4) are said to be ultimately bounded if there exist positive constants , , such that for every solution there exists a moment of time such that , , , for all and .

Definition 3.2. System (1.1)–(1.4) is said to be permanent if there exist positive constants and such that for every solution with nonnegative initial functions , , and , , , , there exists a moment of time such that , , , for all and .

For simplicity, for a bounded function , we denote and .

Now we have the following positive-invariance principle for system (1.1)–(1.4).

Theorem 3.3. Assume that conditions hold; then the nonnegative and positive quadrants of are positively invariant for system (1.1)–(1.4).

Proof. Suppose is a solution of system (1.1)–(1.4) with initial condition (), (), (). Let be a solution of It holds that which implies that is a lower solution of (1.1). By Lemma 2.2, we have , for all and . In addition, since (), then , for all and . Thus, by Lemma 2.1, is bounded from below by the positive function , and so . Let be a solution of It holds that which implies that is a lower solution of (1.2). Let be a solution of It holds that which implies that is a lower solution of (1.3). A similar argument to leads to and being bounded from below, respectively, by positive functions and .

Theorem 3.4. Assume that conditions hold; then all solutions of system (1.1)–(1.4) with nonnegative initial functions are ultimately bounded.

Proof. Let be a solution of where is such that . It holds that So, we get Therefore, Lemma 2.1 gives . Note that, according to the uniqueness theorem, the solution of (3.7) does not depend on for , and so satisfies the ordinary differential equation: By Lemma 2.3, we have So, there exist and such that . For the infected tumor cells, we have the following inequality: So, we get Therefore, Lemma 2.1 gives , where satisfies the ordinary differential equation: Since , there exist and such that .
For the virus in the extracellular tissue, we have the following inequality: So, we get Therefore, Lemma 2.1 gives , where satisfies the ordinary differential equation: Since , there exist and such that .

Theorem 3.5. Assume that conditions hold; in addition, if , then system (1.1)–(1.4) is permanent.

Proof. Theorem 3.4 implies that there exists such that , , and starting from a certain moment of time. Note that, by the comparison principle, if (), (), and (), then , , and for all and . Considering the solution on the interval with some , we get separated from zero. Therefore, we can assume that , and . Let be a solution of So, using the inequality one has, by (3.20), Therefore, Lemma 2.1 gives . By the condition , we have Therefore, there exists such that for large enough. Let be a solution of Using the inequality we have, by (3.24), Therefore, , as . Therefore, there exists such that for large enough. Now, let be a solution of By the following inequality, we have Therefore, , as . Therefore, there exists such that for large enough.

4. Periodic Solutions

Theorem 4.1. Assume that conditions hold and system (1.1)–(1.4) is permanent. Moreover, if one assumes the following conditions, then the system has a unique, globally asymptotically stable, strictly positive -periodic solution.

Proof. For convenience, we denote , with similar meaning for , , , , and . Let and be two solutions of the system bounded by constants and from below and above. Consider the function So, It follows from the boundary condition (1.4) that The terms Thus, we have where is the maximal eigenvalue of the diagonal matrix with So, we deduce that Thus, , , and , as . By Theorem 3.3, solutions of system (1.1)–(1.4) are bounded in the space , where and . Therefore, Consider the sequence Then is compact in the space . Let be a limit point of this sequence. It follows from that . Next, let and be two limit points of the sequence . Using (4.9) and , we have Thus, , and so is the unique periodic solution of system (1.1)–(1.4). By (4.9), it is asymptotically stable.

References

1. A. Claes, A. J. Idema, and P. Wesseling, "Diffuse glioma growth: a guerilla war," Acta Neuropathologica, vol. 114, no. 5, pp. 443–458, 2007.
2. J. R. Bischoff, D. H. Kirn, A. Williams et al., "An adenovirus mutant that replicates selectively in p53-deficient human tumor cells," Science, vol. 274, no. 5286, pp. 373–376, 1996.
3. C. Heise, A. Sampson-Johannes, A. Williams, F. McCormick, D. D. Von Hoff, and D. H. Kirn, "ONYX-015, an E1b gene-attenuated adenovirus, causes tumor-specific cytolysis and antitumoral efficacy that can be augmented by standard chemotherapeutic agents," Nature Medicine, vol. 3, no. 6, pp. 639–645, 1997.
4. E. A. Chiocca, K. M. Abbed, S. Tatter et al., "A phase I open-label, dose-escalation, multi-institutional trial of injection with an E1B-attenuated adenovirus, ONYX-015, into the peritumoral region of recurrent malignant gliomas, in the adjuvant setting," Molecular Therapy, vol. 10, no. 5, pp. 958–966, 2004.
5. N. L. Komarova, "Mathematical modeling of tumorigenesis: mission possible," Current Opinion in Oncology, vol. 17, no. 1, pp. 39–43, 2005.
6. A. S. Novozhilov, F. S. Berezovskaya, E. V. Koonin, and G. P.
Karev, "Mathematical modeling of tumor therapy with oncolytic viruses: regimes with complete tumor elimination within the framework of deterministic models," Biology Direct, vol. 1, article no. 6, 2006.
7. J. T. Oden, A. Hawkins, and S. Prudhomme, "General diffuse-interface theories and an approach to predictive tumor growth modeling," Mathematical Models & Methods in Applied Sciences, vol. 20, no. 3, pp. 477–517, 2010.
8. D. Wodarz, "Viruses as antitumor weapons: defining conditions for tumor remission," Cancer Research, vol. 61, no. 8, pp. 3501–3507, 2001.
9. D. Wodarz and N. Komarova, Computational Biology of Cancer: Lecture Notes and Mathematical Modeling, World Scientific, Singapore, 2005.
10. B. I. Camara, H. Mokrani, and E. Afenya, "Mathematical modelling of gliomas therapy using oncolytic viruses," to appear.
11. W. Walter, "Differential inequalities and maximum principles: theory, new methods and applications," Nonlinear Analysis: Theory, Methods & Applications, vol. 30, no. 8, pp. 4695–4711, 1997.
12. H. L. Smith, "Dynamics of competition," in Mathematics Inspired by Biology, vol. 1714 of Lecture Notes in Mathematics, pp. 191–240, Springer, Berlin, Germany, 1999.
{"url":"http://www.hindawi.com/journals/aaa/2012/590326/","timestamp":"2014-04-19T08:01:04Z","content_type":null,"content_length":"708748","record_id":"<urn:uuid:24ab95e3-ec9f-438b-9539-3805fecae5d4>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 743 Months (they are not capitalized.) janvier février mars avril mai juin juillet août septembre octobre novembre décembre Seasons le printemps (spring) l'été (summer) l'automne (fall, autumn) l'hiver (winter) I don't understand w... 1. à/Jean/allons/un/Nous/donner/pantaloon 2. Chapeaux/porte de/ne/pas/Lucie 3. professeur/Le/donne/examens/ne/d /pas/faciles **I thought this one was=Le professeur ne donne pas faciles d examens. I need to know if either un, une, des, or de (d ) go in the following blanks. 1. Le professeur ne donne pas____ devoirs aujourd hui. 2. Le père de Luc a _____ nouvelle voiture américaine. 3. Nous avons ______ amis à Berlin. 4.Le banquier porte __... I need help unscrambling these words to form sentences in French. 1. Je/des/cherche/bibliothèque/à/livres/la 2. à/Jean/allons/un/Nous/donner/pantaloon 3. Chapeaux/porte de/ne/pas/Lucie 4. professeur/ Le/donne/examens/ne/d /pas/faciles **I thought this ... Intergarted Science I had a couple of questions on the topic of Waves-Sound and Light. 1. What is the source of all waves? *Vibration* 2. In one word, what is it that moves from source to receiver in a wave motion? *Energy* 3. Does the medium in which a wave travels move with the wave? *No* 4. Wh... physics =( A person on a small, 7.00 meter high, hill aims a water-balloon slingshot straight at a friend hanging from a tree branch 4.00 meters above and 6.00 meters away. At the instant the water balloon is released at 7.00 m/s, the friend lets go and falls from the tree, hoping to avo... Kyle wishes to fly to a point 450 km due south in 3 hours. A wind is blowing from the west at 50 km/hr. Compute the proper heading and speed that Kyle must choose in order to reach his destination on A football is kicked at 45 degrees and travels 82 meters before hitting the ground. A) What was its initial velocity? B) How long was it in the air? C) How high did it go? for this question A bullet is fired from a rifle that is held 1.8 m above the ground in a horizontal position. The initial speed of the bullet is 1120 m/s. Find why is the equation just (.5)(9.8)(t^ 2)=1.8 why dont we have to take in account for initial velocity: (.5)(a)(T^2)+(... please help me im stuck. A 400 g air-track glider attached to a spring with spring constant 9.00 N/m is sitting at rest on a frictionless air track. A 600 g glider is pushed toward it from the far end of the track at a speed of 94.0 cm/s. It collides with and sticks to the 400... the difference between two numbers is 3. the sum of the numbers is 45. what are the two numbers the profits of Mr/ Lucky's company can be represented by the equation y=-3x^2+18x-4, where y is the amount of profit in hundreds of thousands of dollars and x is the number of years of operation. He realizes his company is on the downturn and wishes to sell it before he en... A spaceship passes Earth at a speed such that γ = 112. A creature aboard the vessel does the Mongoian greeting dance that takes it 21 s (on its clock -- bought on a previous visit -- which is why it reads in seconds) to perform. How long will the welcoming committee on Ea... two examples of everyday devices of relativity? besides a gps? United States History i think its e right? United States History What do the "sick chicken" case (schechter poultry) and Jefferson's objection to hamilton's bank of the United states have in common? A. both- neessary and proper clause B. oppostiion of direct taxation of fed govt C. attempts to control inflation thro bankng... 
the common name for grain is called? name the seven principle grains? what does more liquid do for a recipe ? question why are many eggs used in some recipes? WHAT DOES SUGAR DO IN A RECIPE? what does more liquid do to a receipe? why are many eggs used in receipes a function is defined as every "x" value giving a unique "y" value, if you have a graph then you can tell if it's a function by 'the vertical line test' simply a vertical line placed anywhere on the graph will only cross through the graph once a... saying that the tree is stationary would give it a velocity of 0 therefor meaning it has no momentum as mass x 0 = 0 that sort of disproves your whole explanation think about the question too, what does the Law of Conservation of Momentum say? think about what a law is, can yo... apologies M(MgCO3*5H20) = 154 ratio = 70/154 = 0.4545 so 45.5% is water, same process as above just use this value instead no i don't think the weight will be increasing after getting rid of the water. work out the molar mass of hydrated magnesium carbonate, then find the ratio of water to the whole thing in regard to molar mass eg M(MgCO3*5H20)= 144 M(5H2O) = 70 ratio = 70/144 = 0.486 this me... sorry i'm a bit confused, what are we trying to find? physics 20 acceleration due to gravity must be factored in as the rocket is traveling upwards. since you have chosen up to be positive acceleration down must be negative, making gravity -9.8m/s the equation is still force=mass x acceleration accept the formula used to calculate accelerat... the second answer would be -70 technically all square roots have two answers, when you multiply two negative numbers you get a positive number 9th grade 3 over the square root of 6 y=3x+4; -1, 1 2, 10 3, 12 7, 1 find the solution set for each equation , given the replacement set AP 9th grade-Philosophy In order to prove this thesis by the Greek Thales, "All substances can be reduced to water." what would i have to do?! im clueless... How are elements represented on a periodic chart? 6th grade Show me a plot diagram example please. density is given by mass/volume so if the density is 1.53 g/L then we rearrange the formula to find the mass which just happens to be 1.53grams due to there only being 1 litre so depending on whether the question is asking for how many moles of NAOH or how many grams of pure N... I5p+12I = 7 there will most likely be 2 solutions if we treat it as 5p+12= +or-7 then it may be easier to see what is happening (if we get an answer of -7 from the equation, it will be changed back to +7 by the absolute value) so we have 2 equations now (1)5p+12=7 (2)5p+12=-7 ... corrrected? The new heart clinic is located at 555 NW First Street, Seattle, Washington 98104. He arrived at the hospital with a heavily bleeding gunshot wound. Differential diagnoses include: Diabetes, adult-onset Cholelithiasis, acute Endocarditis, subacuteTerry Brazelton, M... Are these correctly punctuated? Infection, for example, can be a complication of surgery. He arrived at the hospital with a heavily, bleeding gunshot wound. The new heart clinic is located at 555 NW First Street, Seattle, Washington, 98104. Additionally, anesthesia was given b... Are these sentences correctly punctuated? Mrs. Smith came to the hospital accompanied by her husband, Mr. Smith, and their youngest daughter. Naturally, if the need arises, I would be happy to see him again. Dr. Jones is an excellent surgeon, but does not have a good bedside m... Are these correctly punctuated? 
This pleasant patient has been seen for therapy from February 2, 2005 to March 2, 2005. This patient comes in today with many, many complaints. This year she'll have a breast augmentation and next year she will have an abdominoplasty. Depres... 1.Three fatty acids form a triglyceride with one glycerol. What similarities would the fatty acids have with polysaccharides? What differences? 2. When you consume more food than you require, the mitochondria in the liver are involved n forming triglycerides from the excess. W... Math cont. The neighborhood skateboard club is starting a fundraiser in order to buy more skateboards. They have decided to sell logo mugs. The Mugs on Mugs Company offers to supply mugs to the club for $3.75 each plus a $55 process and handling fee. The Punny-Cups Company offers to supp... help please(: The neighborhood skateboard club is starting a fundraiser in order to buy more skateboards. They have decided to sell logo mugs. The Mugs on Mugs Company offers to supply mugs to the club for $3.75 each plus a $55 process and handling fee. The Punny-Cups Company offers to supp... Well, does it mean that n has to be the same for both? The neighborhood skateboard club is starting a fundraiser in order to buy more skateboards. They have decided to sell logo mugs. The Mugs on Mugs Company offers to supply mugs to the club for $3.75 each plus a $55 process and handling fee. The Punny-Cups Company offers to supp... 11th grade But, if i did get 3.5 that would mess up getting 6.25 or 10.25. 2(3.05) + .5(.3)= 6.25--correct. 3(3.05) + 1(.3)= 9.45... this is the one where i went wrong. i went wrong. 11th grade wow haha im lost... 6w + 1.5j= 18.75 -6w - 2j= -20.50 ----------------- -.5j=-1.75 j=.3 ? why would i multiply? 11th grade -.5/1.75= -.2857...or .3 11th grade Yesterday Lucy walked 2 hours and 1/2 jogged hour and covered 6.25 miles. Today she walked for 3 hours and jogged for 1 hour and covered 10.25 miles. Assuming a constant walking rate and a constant jogging rate, how fast did she walk and how fast did she jog? Define two variab... Yesterday Lucy walked 2 hours and jogged 1/2 hour and covered 6.25 miles. Today she walked for 3 hours and jogged for 1 hour and covered 10.25 miles. Assuming a constant walking rate and a constant jogging rate, how fast did she walk and how fast did she jog? Define two variab... messed up,,,redo. w- walking j- jogging 2w + .5j= 6.25 --> 3(2w+.5j)= 3(6.25) 3w + 1j= 10.25 --> -2(3w+1j)= -2(10.25 6w + 1.5j= 18.75 -6w - 2j= -20.50 ----------------- -.5j=-1.75 j=.3 2w + .5(.3)= 6.25 2w + 15= 6.25 2w= 6.10 w= 3.05 check: 2(3.05) + .5(.3)= 6.25 (correct) 3(3.05) + 1(.3)... Yesterday Lucy walked 2 hours and jogged 1/2 hour and covered 6.25 miles. Today she walked for 3 hours and jogged for 1 hour and covered 10.25 miles. Assuming a constant walking rate and a constant jogging rate, how fast did she walk and how fast did she jog? Define two variab... At the annual turtle races, Poindexter, the record holder, is trying to retain his title. As the race begins, Poindexter starts off racing North with a speed of 0.025 m/s. He retains this speed for 0.5 meters at which time he speeds up to 0.10 m/s. He continues at this rate fo... judicial, executive(: At the annual turtle races, Poindexter, the record holder, is trying to retain his title. As the race begins, Poindexter starts off racing North with a speed of 0.025 m/s. He retains this speed for 0.5 meters at which time he speeds up to 0.10 m/s. He continues at this rate fo... 
12th grade what's the key factors which led to our constitution algebra 3 Find a polynomial of lowest degree with only real coefficients and having the given zeros. -2+i, -2-i, 3, -3 A car starts from rest on a curve with a radius of 120m and accelerates at 1.0m/s^2. Through what angle will the car have traveled when the magnitude of its total acceleration is 2.0 m/s^2? why are crash test dummies so important the gas tank of a car is 2/3 full. a trip uses up 1/4 of the gas in the tank. how full is the tank at the end of the trip? please include how you got the answer, thanks how do the wavelengths of radiant energy vary with the temperature of the radiating source? How does this affect solar and terrestrial radiation? What was the bull moose party? What is the difference between division and multipication? A shot-putter throws the shot (mass = 7.3kg) with an initial speed of 15.0 m/s at a 33.0 degree angle to the horizontal. Calculate the horizontal distance traveled by the shot if it leaves the athlete's hand at a height of 2.00m above the ground. How do i do the difference os quares if y and 9x are different. 6x(squared) - 2xy ________________________ (divided by) y(squared) - 9x(squared) x(2nd power)+ 2x - 3 ________________________ (Divided by) x (2nd power) + 3x math am i correct do you mean the absolute value of -14? if so you are correct 4th grade Language Arts what is the complete predicate in the sentence He sometimes stays at our home i'm guessing Liz meant questions not equations substitute in what you can from the info you are given in this case change 'A' to 'D+E' this gives us D/D+E =0.35 & D+E/E see if you can take it from here if this is your code Private Function F(x as Int, y As Int, z as Int) If (y = 1) then Return 2 Else Return x * Z - y + F(x-1,y-1, z) End If End Function what is your test data. if you test this code with y=1 then you will get an answer of 2, else Return x * Z - y + F(x-1,y-1, z) Math PIZZAZZZ whats the expression? math 5th grade sorry the answer to the original question is 21,100,000. the example i used would be 22,000,000 because this has been rounded to the nearest 100,000 but in doing so we have reached another million, make sense? math 5th grade sure would :) math 5th grade if we have a number that is 21,079,000 and we need to round to the nearest 100,000 then we are looking at this part of the number 21,(079),000. the 21 million has no effect on the rounding, so treat it as rounding 79,000 to the nearest 100,000. if the question was to round 21,... lets start by multiplying out the brackets 8g+416-96=504+6g-84 then combine the like terms 8g+320=420+6g we then group the g's on one side and our numbers on the other 8g-6g=420-320 which leaves us with 2g=100 meaning g=50 hope this helped post back if u got any questions :) I'll answer 2. to get rid of the y's we make this a simultaneous equation (1)4x-y=-9 (2)3Y-2X=7 the number of y's must be equal so that they will cancel eachother out when we add them together so 3* (1)=12x-3y=-27 then we do (1)+(2) 12x-3y+3y-2x=-27+7 hopefully you ... inverse is 1/(8x^2-9) then make this true for x>= 0 tell me what you get 12*(-8) is -96 the absolute value just makes this a positive value so the I12(-8)I is 96 check geo i hate to tell you but communism has nothing to do with monarchs, lords and nobles. If unsure with a question like this, probably the best thing to do is a quick search on all your possible answers, this will probably give you the answer you are looking for quicker than waitin... 
DrBob222 is perfectly correct although it does say in the question that the same amount of salt is added at both temperatures. when the snow is at -2 degrees the addition of the NaCl lowers the temperature that the snow melts at to somewhere between -2 and -30 degrees. However... Need Help!! Algebra linear is a straight line such as y=ax+b while y=ax^2+bx+c is non linear. a function is defined as each x value giving a unique y value. I'll leave you to take it from here, hope this helped. physical science explorations well if F=ma and a=v-u/t (final velocity-initial velocity/time) so combining these two equations we get F=m(v-u/t) which gives us F=mv-mu/t if the car is to stop then obviously the final velocity is zero (now this will give us a negative answer but that is OK because it is a r... if something dissociates then that means that it completely dissolves in a solution. so DrBob222 is completely correct as NaCl, MgCl2 and CaCl2 all dissolve. unlike table sugar. check a solubility table if you are ever unsure. physical science explorations ok now Force Force = mass*acceleration use the acceleration you calculated in the first question to find the force exerted on the puck physical science explorations a=(v-u)/t where v=final velocity, u=initial velocity and t= time in seconds initial is 0 m/s go from here yourself :) physical science explorations see the post above if something works 100% of the time then there would be no errors, human or equipment related always draw a diagram as a visual aid if you are having problems with a question i think Gina may mean the factors of a number, like the factors of 12 are 1,2,3,4,6,12 one way is to start at the lowest factor and write down its pair, ie: 1*12, then 2*6 then 3*4, when you get to 4*3 you have started to overlap meaning that you have reached the maximum numbe... we know that the force the table is exerting on the box must be equal to the force that the box is exerting on the table. in reality the 85N of the box is not the force that is being exerted on the table because the 30N on the other end of the rope is acting against the 85N th... 5 is fine except for the last line "That means your are happy?". 'That means you're happy' is a much better sentence. you can of course just do it in one step and take out 6x^4 if you are comfortable enough to do that divide through by 6 first 6(x^6+5x^5+6x^4)=0 take out the lowest common factor x^4 6x^4(x^2+5x+6) then factorise the bracket 6x^4.(x+2)(x+3) i'm not entirely sure what you mean by expanded form for a single number. when going from seconds to hours, first times by 60 to get to minutes then 60 again for hours, or just multiply by 60*60=360. the most common conversions you will have to do in sciences are from m/s to m/hr and back again. (however km/hr is the more commonly used units) so rem...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jordan&page=5","timestamp":"2014-04-20T12:17:17Z","content_type":null,"content_length":"30016","record_id":"<urn:uuid:d7fc9e3d-30b8-4ac6-b8e3-d0fa0fc1f6cc>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Stokes' Theorem

April 11th 2008, 06:20 AM

Stokes' Theorem

F = y^3 i - x^3 j + z^3 k; C is the trace of the cylinder x^2 + y^2 = 1 in the plane x + y + z = 1. I arrive at $-3\int_{0}^{\pi/2}\int_{0}^{1} r^4 \, dr \, d\theta$ and obtain the answer $-3\pi/8$, but the answer given is $-3\pi/2$. This implies the integration runs from 0 to $2\pi$. But why is that, since I think the region is generated from 0 to $\pi/2$ only? Thanks.

April 11th 2008, 10:30 PM

Quote: F = y^3 i - x^3 j + z^3 k; C is the trace of the cylinder x^2 + y^2 = 1 in the plane x + y + z = 1. I arrive at $-3\int_{0}^{\pi/2}\int_{0}^{1} r^4 \, dr \, d\theta$ and obtain the answer $-3\pi/8$, but the answer given is $-3\pi/2$. This implies the integration runs from 0 to $2\pi$. But why is that, since I think the region is generated from 0 to $\pi/2$ only? Thanks.

Using Stokes' theorem, we need to find a parametric representation of the space curve. We know that , and using the equation of the plane we get $x+y+z=1 \iff z=1-x-y \iff z=1-\cos(t) -\sin(t)$

so finally $\vec r (t)=<\cos(t),\sin(t),1-\cos(t)-\sin(t)>$

$d\vec r (t) =<-\sin(t),\cos(t),\sin(t)-\cos(t)>dt$

$\vec F(x,y,z)=<y^3,-x^3,z^3>$

$\vec F(\vec r(t))=<\sin^{3}(t),-\cos^{3}(t),(1-\cos(t)-\sin(t))^3>$

$\int_{0}^{2\pi}\vec F \cdot d\vec r =\int_{0}^{2\pi}\left(-\sin^4(t)-\cos^4(t)+(\sin(t)-\cos(t))(1-\cos(t)-\sin(t))^3\right)dt$

$=-\int_{0}^{2\pi}(\sin^4(t)+\cos^4(t))dt+\underbrace{\int_{0}^{2\pi}(\sin(t)-\cos(t))(1-\cos(t)-\sin(t))^3dt}_{=0\ (WHY?)}$

$=-\int_{0}^{2\pi}\left(\frac{3}{4}+\frac{1}{4}\cos(4t)\right)dt =\left.-\frac{3}{4}t-\frac{1}{16}\sin(4t)\right|_{0}^{2\pi}=-\frac{3}{2}\pi$

April 11th 2008, 11:49 PM

Quote: $\int_{0}^{2\pi}\vec F \cdot d\vec r =\int_{0}^{2\pi}\left(-\sin^4(t)-\cos^4(t)+(\sin(t)-\cos(t))(1-\cos(t)-\sin(t))^3\right)dt$

Why do you integrate from 0 to $2\pi$ and not from 0 to $\pi/2$, since C traces the plane x+y+z=1 from 0 to $\pi/2$ only? By the way, can you show me the way to evaluate it as a surface integral, $\iint \operatorname{curl}\vec F \cdot \vec n \, dS$?

April 12th 2008, 12:37 AM

Quote: Why do you integrate from 0 to $2\pi$ and not from 0 to $\pi/2$, since C traces the plane x+y+z=1 from 0 to $\pi/2$ only? By the way, can you show me the way to evaluate it as a surface integral, $\iint \operatorname{curl}\vec F \cdot \vec n \, dS$?

This should answer your question. Did you sketch the surface? The integral should go from 0 to $2\pi$. Here is a graph of the surface and its projection into the xy plane (Attachment 5813). Your integral should be taken from 0 to $2\pi$ as well.
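For anyone who wants to check the thread's answer, here is a small SymPy sketch of my own (using the parametrization from the reply above) confirming that both sides of Stokes' theorem give $-\frac{3}{2}\pi$ when the full circle from $0$ to $2\pi$ is used:

```python
import sympy as sp

t = sp.symbols('t', real=True)
x, y = sp.cos(t), sp.sin(t)
z = 1 - x - y                        # the curve lies in the plane x + y + z = 1
F = sp.Matrix([y**3, -x**3, z**3])
r = sp.Matrix([x, y, z])

line_integral = sp.integrate(F.dot(r.diff(t)), (t, 0, 2*sp.pi))
print(sp.simplify(line_integral))    # -3*pi/2

# Surface side: curl F has z-component -3(x^2 + y^2), and the projection
# of the surface onto the xy plane is the full unit disk, so in polar
# coordinates the flux is -3 * Int_0^{2pi} Int_0^1 r^2 * r dr dtheta.
rr, th = sp.symbols('r theta', positive=True)
flux = sp.integrate(-3 * rr**2 * rr, (rr, 0, 1), (th, 0, 2*sp.pi))
print(flux)                          # -3*pi/2, confirming the 0-to-2*pi limits
```

Note the radial integrand is r^3 (from r^2 times the polar area element r dr dtheta); with the quarter-circle limits of the first post this would give -3*pi/8, which is exactly the discrepancy the thread resolves.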
{"url":"http://mathhelpforum.com/calculus/34071-stokes-theorem-print.html","timestamp":"2014-04-21T02:12:26Z","content_type":null,"content_length":"10546","record_id":"<urn:uuid:b11b28e1-eb27-4b42-a232-4680eaf9a0e5>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Prince Of Persia (2008) - Platform: XBox 360 FAQ

Prince Of Persia (2008) - Platform: XBox 360

Play as Classic Prince and Princess:
At the main menu, select the "Extras" option. Choose the "Skin Manager" selection, then enter "525858542" as a code to unlock the Classic Prince and Princess from Prince Of Persia: The Sands Of Time.

Play as Prince Altair:
At the main menu, select the "Extras" option, and register the game to link your online profile to your existing ubi.com account to unlock Altair from Assassin's Creed at the "Skin Manager" menu.

Play as Princess Jade:
Successfully complete Story mode to unlock Jade from Beyond Good And Evil at the "Skin Manager" menu.

Play as Prototype Prince and Elika:
Collect all 1001 Light Seeds to unlock the Prototype Prince and Elika at the "Skin Manager" menu. Note: The final Light Seed is only collectible after completing the game.

Easy "Assassin View" achievement:
Go halfway up the stairs that wrap around the Martyr's Tower and eventually lead to its fertile grounds. Put the prince and the Persian princess on the strange looking beam there to get the "Assassin View" achievement.

Easy "Combo Specialist" achievement:
Perform the following combos during Boss fights during the early part of the game. Note: The combo will not count if you kill the Boss with it, if the Boss blocks part of it, or if the Boss is hit into a wall or off a ledge.

Normal combos:
X, Y
X(2), Y
X(3), Y, B
X, B
X(2), B
X(3), B
X, A
X(2), A
X(3), A

Magic combos:
Y, X(2)
Y(2), X
Y, X, Y, X
Y, X, Y(2)
Y, B
Y(2), B
Y, X, B
Y, X, Y, B
Y, A
Y(2), A
Y, X, A
Y, X, Y, A

Gauntlet combos:
B, X
B, Y(2), X
B, Y, X(2)
B, Y, X, Y, X
B, Y(3)
B, Y, X, Y(2)
B, Y, B
B, Y(2), B
B, Y, X, B
B, Y, X, Y, B
B, A
B, Y, A
B, Y(2), A
B, Y, X, A
B, Y, X, Y, A

Acrobatic combos:
A, X(3)
A, Y(2), X
A, Y, X(2)
A, Y, X, Y, X
A, Y(3)
A, Y, X, Y(2)
A, B
A, Y, B
A, Y(2), B
A, Y, X, B
A, Y, X, Y, B

AB combos:
A, B, A, X
A, B, A, Y, X
A, B, A, Y(2)
A, B, A, B
A, B, A, Y, B

BA combos:
B, A, X
B, A, Y, X
B, A, Y(2)
B, A, B
B, A, Y, B

Easy "Climbing To New Heights" achievement:
Defeat the Alchemist in his Observatory to get the "Climbing To New Heights" achievement.

Easy "Precious Time" achievement:
Defeat Ahriman, then allow the game to idle for one minute before cutting any trees and proceeding to the ending credits.

Easy "Sinking to New Depths!" achievement:
At the beginning of the game, after the last combat tutorial, the princess will tell you to follow her to the temple. As you follow her, instead of going up the steps, continue past them. Look at the perimeter of the tree shrine in the desert. There is a small room under the shrine's roots near the edge of the cliff. Stand inside it to get the "Sinking to New Depths!" achievement.

Easy "Sword Master" achievement:
To get an easy fourteen hits in a single combo, execute the following sequence during a Boss fight during the early part of the game. Note: You do not have to kill the Boss.
Y, X, Y, A
Y, X, Y, B
A, Y, X

Easy "Titanic View" achievement:
The fertile grounds for Machinery Grounds is on a giant airship. Put the prince and the Persian princess on the beam at the ship's prow to get the "Titanic View" achievement.
{"url":"http://www.cheatbook.de/cfiles/princeofpersia2008xbox360.htm","timestamp":"2014-04-19T23:27:14Z","content_type":null,"content_length":"22325","record_id":"<urn:uuid:863caf3e-447c-4396-a434-39c17115de03>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
Outline Of A Proof That P (1): Saul Kripke Some philosophers have argued that not-p. But none of them seems to me to have made a convincing argument against the intuitive view that this is not the case. Therefore, p. (1) This outline was prepared hastily -- at the editor's insistence -- from a taped manuscript of a lecture. Since I was not even given the opportunity to revise the first draft before publication, I cannot be held responsible for any lacunae in the (published version of the) argument, or for any fallacious or garbled inferences resulting from faulty preparation of the typescript. Also, the argument now seems to me to have problems which I did not know when I wrote it, but which I can't discuss here, and which are completely unrelated to any criticisms that have appeared in the literature (or that I have seen in manuscript); all such criticisms misconstrue my argument. It will be noted that the present version of the argument seems to presuppose the (intuitionistically unacceptable) law of double negation. But the argument can easily be reformulated in a way that avoids employing such an inference rule. I hope to expand on these matters further in a separate monograph. Routley and Meyer:
{"url":"http://consc.net/misc/proofs.html","timestamp":"2014-04-20T05:42:32Z","content_type":null,"content_length":"5397","record_id":"<urn:uuid:e35707ea-0a4d-4a1c-ae42-dd784b464d53>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectral decomposition of parabolic induced for GL2(Zp)

Let $F$ be a number field and let $o$ be its ring of integers. Let $o_p$ resp. $F_p$ be the completion at a prime ideal $p$ in $o$. Let $B$ be the group of upper triangular matrices in $GL_2$. Let $\pi$ be a character of $B(F_p)$. How can we describe the irreducibles which occur in the restriction of the induced representation $$Res_{GL_2(o_p)} Ind_{B(F_p)}^{GL_2(F_p)} \pi = Ind_{B(o_p)}^{GL_2(o_p)} \pi = \lim_r Ind_{B(o/p^r)}^{GL_2(o/p^r)} \pi?$$ Is there a nice description of the irreducibles?

1 Answer

This is done in:

Casselman, William. The restriction of a representation of ${\rm GL}_2(k)$ to ${\rm GL}_{2}({\mathfrak o})$. Math. Ann. 206 (1973), 311–318.

Also see:

Silberger, A.: $PGL_2$ over the p-adics. Lecture Notes in Mathematics 166, Berlin-Heidelberg-New York: Springer 1970.

Very roughly speaking, the idea is the following. When you restrict an irreducible smooth representation of ${\rm GL}(2)$ to ${\rm GL}(2,{\mathfrak o})$ you get two types of constituents. The first ones (infinitely many) are uninteresting: they appear in the restriction of many other representations. These representations are described by Casselman. The second ones are more interesting: if an irreducible representation contains such constituents, then it must belong to a single component of the Bernstein decomposition of the category. They are called typical. Actually the situation is a bit more complicated; I've made it simpler. For more detail, you may read the appendix by Henniart in:

Breuil, Christophe; Mézard, Ariane. Multiplicités modulaires et représentations de ${\rm GL}_2({\bf Z}_p)$ et de ${\rm Gal}(\overline{\bf Q}_p/{\bf Q}_p)$ en $l=p$. [Modular multiplicities and representations of ${\rm GL}_2({\bf Z}_p)$ and ${\rm Gal}(\overline{\bf Q}_p/{\bf Q}_p)$ at $l=p$] With an appendix by Guy Henniart. Duke Math. J. 115 (2002), no. 2.

The article of Casselman exactly provides what I need. Thanks for this nice reference. – plusepsilon.de Apr 2 '11 at 11:28
{"url":"http://mathoverflow.net/questions/60272/spectral-decomposition-of-parabolic-induced-for-gl2zp","timestamp":"2014-04-19T07:31:17Z","content_type":null,"content_length":"53866","record_id":"<urn:uuid:ade44de6-ed59-47dd-b318-ded7b65773b1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Opportunities Are you interested in becoming more involved in mathematics? Want to complete research and present at a conference? Want to complete a summer internship or research experience? Tons of opportunities are available for you. Research Experiences for Undergraduates - Every summer, the National Science Foundation funds Research Experiences for Undergraduates (REUs) at many universities around the U.S. There are many in mathematics and other fields. Summer Internship Opportunities - Corporations and agencies offer many internships for undergraduates. Academic Year Opportunities - If you are looking for a semester of intensive mathematical studies, you should consider one of the following opportunities. Overview of Opportunities for Undergraduates - The AMS is a great resource for everything of interest to a young math student. Their Undergraduate Mathematics Majors page contains information about graduate school, employment, and careers. Mathematics Conference for Undergraduates - There are many conferences that feature presentations by undergraduate mathematics majors. Ithaca College can help offset the cost of travel and
{"url":"http://www.ithaca.edu/hs/depts/math/studentinfo/opportunities/","timestamp":"2014-04-19T00:25:31Z","content_type":null,"content_length":"19460","record_id":"<urn:uuid:6ab517a8-21ab-4bda-bd27-b58bdfd1d077>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Projectile motion: initial velocity and launch angle

vy = v0y - gt, and for t I'm using 1/2 s, which is the time needed for the ball to rise to the max height. So, vy = 5.9 m/s - (9.8 m/s² × 1/2 s) = 1 m/s. And, if I plug this into the Pythagoras formula, the initial velocity should be 13.02 m/s? As for the angle, is this correct: α = tan⁻¹(v0y / v0x) = 24.49°?
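A quick way to sanity-check these numbers (a sketch of my own; the horizontal component v0x comes from earlier in the thread and is not shown in this excerpt, so the value below is a placeholder consistent with v0 = 13.02 m/s):

```python
import math

v0y = 5.9            # vertical component from the post, m/s
g = 9.8
vy = v0y - g * 0.5   # = 1.0 m/s at t = 0.5 s, as computed in the post

v0x = 11.6           # placeholder: the actual value comes from earlier in the thread
v0 = math.hypot(v0x, v0y)                   # initial speed via Pythagoras
alpha = math.degrees(math.atan2(v0y, v0x))  # launch angle = tan^-1(v0y / v0x)
print(round(v0, 2), round(alpha, 2))        # 13.01, 26.96
```

With these components the angle comes out near 27° rather than 24.49°, so the two quoted results cannot both be right; a check like this catches such mismatches.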
{"url":"http://www.physicsforums.com/showpost.php?p=2393247&postcount=10","timestamp":"2014-04-19T09:46:36Z","content_type":null,"content_length":"7105","record_id":"<urn:uuid:393cc879-c702-4ab8-928e-358338e737ed>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Why More Data and Simple Algorithms Beat Complex Analytics Models

With all the buzz surrounding big data, the data management practitioner is constantly inundated with information regarding big data technologies. After identifying which big data problems an organization must solve, the next step is understanding the advantages and disadvantages of different approaches to address these challenges. Most importantly, the practitioner must make a case: why collect more data or develop more sophisticated algorithms? There is a debate going on, and many experienced statisticians argue that the secret to taming your big data problems is by embracing the size of your detail data, rather than the complexity of your models. (Detail data are the attributes and interactions of entities—usually users or customers. Preferences, impressions, clicks, ratings and transactions are all examples of detail data.) Dozens of articles have been written detailing how more data beats better algorithms. But very few address why this approach yields the greatest return.

In a nutshell, having more data allows the "data to speak for itself," instead of relying on unproven assumptions and weak correlations. While you can invest vast amounts of resources in algorithm development, often the smarter option is to invest in data collection and accessibility, which is more economically viable and provides greater prediction accuracy. This article explains why having more training data can improve the accuracy of a model and allow organizations to better serve their users. Training data can be defined as the subset of relevant data you use when doing analysis and building predictive models. Because you often don't know what data is going to be relevant in predicting user behavior, best practices dictate that we collect everything we can. The amount of training data you have available can never be more than the total data you collect, which is why new data infrastructure like Hadoop is valuable: you can afford to collect everything.

The New Bottleneck

When data management technologies first came to market years ago, hardware was the primary bottleneck with thin network pipes, small hard disks and slow processors. As hardware improved, it became possible to create software that distributed storage and processing across large numbers of independent servers. This new type of software takes care of reliability and scalability issues across hardware devices, resulting in a platform that can scale with the data being collected. Hadoop is the software platform that enables large-scale collection and storage of detail data at low cost so you can afford to "collect everything." Software frameworks like Kiji and Impala make Hadoop data accessible to analysts for predictive modeling development and deployment. The current bottleneck most analysts face is finding software that allows them to make sense of all the detail data. Instead of spending vast amounts of time sifting through an avalanche of data, data scientists need tools to determine what subset of data to analyze and how to make sense of it.
The next generation of business intelligence software is tackling this challenge by using the full amount of data available to create more accurate algorithms and garner better results.

Why More Data Beats Better Algorithms

The logic behind the concept that more data beats better algorithms is rather subtle. Say, for example, we believe the relationship between two variables – such as number of pages viewed on a website and percent likelihood of a website visitor to make a purchase – is linear. Having more data points would improve our estimate of the underlying linear relationship. The two graphs in Figure 1 show that more data will give us a more accurate and confident estimation of the linear relationship. Simple correlations between two variables are common in retail, finance, and mobile applications. In retail, we estimate the probability that a user will check out given the contents of their shopping cart. In finance, we compute the probability that a transaction is fraudulent given a ZIP code. In mobile, we compute the probability a user will redeem an offer given a GPS location. However, when comparing the two graphs in Figure 1, notice that more data points do not affect the linear estimate—the result is virtually the same once you have "enough data." The "trick" in effectively using more data is to make fewer initial assumptions about the underlying model and let the data guide which model is most appropriate. In the above example, we assumed the linear model before collecting data to parameterize the relationship. Next, we'll examine how the data itself can provide actionable insight beyond a linear model.

More Data Can Reveal a Non-Linear Relationship Within a Dataset

Many organizations build complicated models that use a smaller subset of data to determine what content should be offered to the user next. Let's say the graph above in Figure 1 represents a predictive model for a recommendations engine for a sports website that delivers national sporting news to subscribers. The linear model suggests that there is a strong correlation between "reading about football" and "reading about soccer." The X-axis represents how often users read football news and the Y-axis represents the likelihood these users will also read soccer news. However, the graphs above show that the true relationship is not quite linear. The U-shaped dip between 10 and 30 on the X-axis cannot be captured using our linear model, yet it provides tremendous insight into individual user behavior. Detail data allows us to pick a nonparametric model—such as estimating a distribution with a histogram—and provides more confidence that we are building an accurate model. If we have significantly more data we can accurately represent our model as a histogram, as depicted in Figure 2, better capturing the relationship between the variables. We can essentially forgo the linear parametric model for a simple density estimation technique. With more data, the simpler solution (estimating a distribution with a histogram) actually becomes more accurate than the sophisticated solution (estimating the parameters of a model using a linear regression).

This insight allows the sports website to better serve its subscribers by building recommendations engines that adapt to user preferences. Other examples in which this technique is relevant include item similarity matrices for millions of products, and association rules derived using collaborative filtering techniques.
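To make the histogram-versus-regression argument concrete, here is a minimal Python sketch. The data-generating curve, noise level, sample sizes, and bin count are all invented for illustration (they are not from the article), but the qualitative result matches it: the linear fit's error stays roughly flat no matter how much data you add, while the simple binned ("histogram") estimate keeps improving.

import numpy as np

rng = np.random.default_rng(0)

def true_curve(x):
    # hypothetical non-linear relationship with a dip around x = 20 (cf. Figure 2)
    return 0.4 + 0.01 * x - 0.3 * np.exp(-((x - 20.0) ** 2) / 50.0)

def binned_means(x, y, edges):
    # the "histogram" estimate: average y within each x bin, no model assumed
    idx = np.digitize(x, edges) - 1
    return np.array([y[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(len(edges) - 1)])

edges = np.linspace(0.0, 50.0, 11)            # 10 bins
centers = 0.5 * (edges[:-1] + edges[1:])
truth = true_curve(centers)

for n in (100, 100_000):
    x = rng.uniform(0.0, 50.0, n)
    y = true_curve(x) + rng.normal(0.0, 0.05, n)
    slope, intercept = np.polyfit(x, y, 1)    # parametric: assumes linearity
    lin_err = np.sqrt(np.nanmean((intercept + slope * centers - truth) ** 2))
    hist_err = np.sqrt(np.nanmean((binned_means(x, y, edges) - truth) ** 2))
    print(f"n={n:>6}: linear RMSE={lin_err:.3f}, binned RMSE={hist_err:.3f}")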
Nonparametric Models Win

Overall, a weak assumption coupled with complex algorithms is far less efficient than using more data with simpler algorithms. If this were a much larger parameter space, you could imagine that the model itself could be very large (the data representing just the red histogram). Nonparametric models are becoming more commonplace in big data analysis, especially when the model is too large in memory to fit on a single machine. Next generation open source frameworks, such as Kiji and Cloudera's Impala, have been designed to support distributed training sets and distributed model representations, taking full advantage of nonparametric model techniques. By simplifying your models and increasing the data available, enterprises can better automate the sales and marketing funnel, create more effective calls to action and increase customer lifetime value.

Garrett Wu is vice president of engineering at WibiData. A former technical lead at Google's personalized recommendations team, he now focuses on natural language processing, machine learning and data mining.

10 Comments

1. I am not sure about the following statement – "With more data, the simpler solution (estimating a distribution with a histogram) actually becomes more accurate than the sophisticated solution (estimating the parameters of a model using a linear regression)." Here you are suggesting that a histogram is a simpler model/solution than linear regression? First of all, IMHO, both of these solutions are much too simple to make the point you are making here. Secondly, if we consider these models – what sophistication are we talking about in estimating the parameters of a linear regression model? I am assuming you are talking about the coefficients, or is there something I am missing here?

Reply: Yes, I am suggesting that bucketing values into a histogram is "simpler" than a linear regression. By simple here, I mean that it makes fewer assumptions about the data, not that it is less work to train. My apologies for the confusion. It was challenging for me to draw from examples that would communicate my meaning while also being understandable to a nontechnical audience. Perhaps I erred too far to the side of simplicity and failed to communicate the point. I'll try to clarify. In this example, the "sophistication" is the fact that the model author would have had to assume that these two variables are linearly correlated to begin with, an assumption that may not actually be true. You could imagine continuing to invest in adding "sophistication" to the model by engineering new features or assuming different underlying relationships between variables, but I argue that it often makes sense to instead invest in more data for the histogram.

2. Thanks, I appreciate your response, and agree on the difficulty in communicating the ideas you tried to convey here. "Simpler" and "sophisticated" are very subjective words to convey the point in a short writeup. Irrespective, I understand your point that a model author may start with an assumption of linear correlation, and that it may not be true. There may exist linear correlation between a different set of features, and not between the set of features one started with. If I am not wrong, isn't this case best suited for iterative querying platforms/frameworks, as one has to start with some feature set assumption and validate models with different feature sets by iterative querying of the data set?
I agree that in the case of histograms, the non-parametric model is "simpler", with fewer assumptions about the data than the "linear regression" parametric model. In this example, the histograms are in effect providing distributions of data, but there are limitations of non-parametric models and the scenarios where they are useful. Do you agree? In general, I agree with the premise of the article, but I think calling a "histogram" a "simpler" model and pitting it against "linear regression" as a "complex" model is a little far-fetched.

Reply: Yes, all good points. I agree that the implication of this is that minimizing the overhead in querying and iteration is a priority. In fact, my hope is to use this article to set up the argument that data-driven organizations should invest in the infrastructure and tooling to give data scientists access to more data, efficient processing of that data, and short iteration times between experiments and production. By doing so, model authors will be able to quickly validate hypotheses instead of making assumptions. I also agree that non-parametric models have their limitations and scenarios where they are useful. And this example of histograms vs. linear regression, though somewhat contrived, was easy to depict visually.

3. Hi Garrett, Thanks for this very interesting post. Great intro for a guy (like me) with a stats background but limited knowledge of the software side of big data. I was wondering, did you actually obtain such a bizarre relationship between reading about football and reading about soccer? I'm trying to see what the underlying qualitatives could be, but struggling. (Unless 20-30 corresponds to watching the Super Bowl, which many people do without having a particular interest in "team & ball" sports.) I was also wondering whether the histogram approach was as strong with multiple linear regression, because it gets pretty hard to plot 4+ dimensional histograms. The visual aspect of a 2D histogram is brilliant, as it captures so much more info than a single number (i.e. the regression coefficient), so how do you maintain that with higher dimensions? And finally, as a student I'm very curious how much of what I'm learning is useful for the business world out there! It would be awesome to learn more about your own career path... anywhere on the web I could find that?

4. I like the spirit of the article as it points out some very important aspects of modelling. Some issues though deserve clarification. - The data is incapable of speaking for itself. We (analysts), the choices we make, the assumptions we make, the methods we choose ultimately speak for the data. - The logic behind "more data beats better algorithms" is suspect. I'm afraid it is much more complex than simply saying more is better. - Linearity (or lack thereof) has little to do with the size of the data.

5. Nice article.. I have worked with a Japanese company whose top priority was user experience with the UI on their website. Needless to say there is no clear answer to what users really like.. Steve Jobs (along with Forstall) showed that UI is really hard to predict but it is a big winning factor. We went with collecting all behavior. We keep finding things that surprise us. E.g., users really love auto fill-up but really hate when it fills wrong. We used that knowledge to design auto fill-up with some intelligence in guessing the fills. E.g., the off-the-shelf market solution is to do text matching... but we had to tweak it to the product the company was making and thus return results even if the match is not exact.
I suppose Google has that. And I hate Bing because frequently it guesses wrong. I think this example would have been more appropriate rather than games. I never watch games. I feel they are addicts :).. Another good example would have been retail.. probably Amazon can shed some light on it.. I do find their suggestions quite accurate even if I shop for some item that is a break from my history.

6. What you have called "Best Practice" would be considered blasphemy in statistics and science in general. The strategy "collect everything" could not be less scientific. You write: "Because you often don't know what data is going to be relevant" as a justification for "collect everything." First, you are using one word to represent two very different things. Every type of statistical test makes "assumptions" about the type of data that is appropriate for that analysis. An example would be the Homogeneity of Variance assumption associated with ANOVA models. When comparing independent groups of people, the ANOVA procedure only produces reliable information on group differences when the groups have approximately the same variance, and there are a number of tests that will let the analyst know if there are significant differences between groups' variances. This is an assumption. But you also use the same word when discussing model development – what dimensions are in the model. That is not an assumption – that is called a "hypothesis," which should be based on knowledge gained in previous analyses by you and others. In science, the data you collect is based on what you would need to have to check the validity of your hypothesis. If you do not start with a hypothesis – that is called a "fishing expedition" – NOT Best Practices. More like "totally unscientific terrible practices." In science, one has a theory, and one makes hypotheses and collects relevant data as potential evidence for the correctness of the theory. There is no reason under the sun for any Big Data analyst to say that they do not know what data will be relevant. If you are working in context, and have been performing analyses in context, then you had better goddamn know – or at the very least, have an educated guess regarding what will be relevant. Collecting everything and then searching everything for something relevant to your enterprise would be viewed as "post-hoc" analysis, which is never considered correct unless it has been replicated, or confirmed in another analysis with a hypothesis emerging from your post-hoc fishing expedition. Also, when you collect everything, you practically exponentially increase your chances of seeing an effect where there really isn't one. The more tests you perform, naturally, the higher and higher your chances for "false positives" become. In statistical terms, your collect-everything strategy inflates your Type I error rate to totally unacceptable levels. Your conclusions become highly suspect, and this is not good in any way for the enterprise. A histogram is a certain type of graph available to visualize your data. It is not comparable in any way to a linear (or non-linear) regression, which is a statistical test that elucidates how much of the variation in what you are trying to predict can be accounted for by the dimensions you are using to make your prediction, and it tells you if your model is statistically significant. The procedure also estimates the "Beta" or weight that each dimension is assigned in the regression equation that maximizes the amount of variance your model explains.
"Collect Everything" is not justified by "because we can." It's not justified at all. In 5 years, you are going to be able to collect a great deal more on your fishing expedition, which will worsen your predictions more and more. If you continue with collect everything, I would estimate that as capacity increases you will become more likely to find effects that are not real. At some point in your collect-everything strategy, the real effects that your enterprise needs will likely become extremely rare, because finding false effects will become the most likely outcome of your non-scientific, non-statistical, nonsense approach to informing your enterprise. And then? After the majority of your predictions fail to materialize – your enterprise will be better off without you.

7. "totally unscientific terrible practices." – Love it. Thanks for laying down the truth on "Big Data"

8. A better title would be something like, 'How More Data Can Sometimes Outperform A Complex Model.' Of course, you have not shown that and naturally, professionals already know this. However, this would be a more realistic title.
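Comment 6's warning about Type I error inflation under "collect everything" is easy to demonstrate numerically. Here is a minimal Python sketch on purely synthetic noise (all sizes invented for illustration): none of the features carries any signal, yet roughly 5% of them pass a naive 5% significance screen.

import numpy as np

rng = np.random.default_rng(1)
n_rows, n_features = 200, 1000

X = rng.standard_normal((n_rows, n_features))   # pure-noise "detail data"
y = rng.standard_normal(n_rows)                 # outcome, independent by construction

# sample correlation of each feature with the outcome
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
r = Xc.T @ yc / n_rows

# |r| above roughly 2/sqrt(n) is "significant at the 5% level"
hits = int(np.sum(np.abs(r) > 2.0 / np.sqrt(n_rows)))
print(f"{hits} of {n_features} noise features look 'significant'")   # ~50 expected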
{"url":"http://data-informed.com/why-more-data-and-simple-algorithms-beat-complex-analytics-models/","timestamp":"2014-04-18T23:15:59Z","content_type":null,"content_length":"81589","record_id":"<urn:uuid:544073fb-a7cd-41b8-8bd1-5816c29270e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Re: Polya (discussion)

Posted: Oct 3, 1994 8:48 AM

>In case no one noticed, I see no women in on this discussion. Baseball >sure won't interest me in mathematics, nor will Barbie Dolls. On the >other hand, a classroom visit by a real person who uses mathematics in >their profession can and will excite students. The book problems, real >or contrived, will never replace the excitement of good teachers and >others who help show students see both the "application" and the "for its >own sake--beauty & power" sides of mathematics. >Doris Schattschneider

The issue of interest is so very complex. Do we talk about barbie dolls or baseball scores? Unfortunately, I think that might be the wrong question. In the past, I've found it very important and reassuring to attempt to know the students that I work with on an individual level, to the extent that I know what might interest them. It is important to work those general types of interest at times. However, I think the real fascination (especially in math) is with what "real" mathematicians do. I wish I knew. The idea that Schattschneider presented about inviting a mathematician from the outside into the classroom to talk about what they do is a great one. Or even an engineer...someone who works with numbers a great deal. The problem with doing baseball or barbie questions is that I think students really do just want to "strip away the fluff". Kids aren't oblivious; if you present a math problem to them using baseball just to catch their interest, they will catch on quickly, and merely strip off the context. If, on the other hand, you give them some sort of real life task to complete, that is interesting to them, and that involves the math that you wish to teach, the interest being addressed cannot be seen as fluff. If you actually let them go out and measure a baseball field, or give them the task of creating a baseball field within a certain amount of space, you might unleash some greater creative energy that can be directed at math. In this way, math can be creative just like literature and science. -Katie Laird
{"url":"http://mathforum.org/kb/message.jspa?messageID=1079278&tstart=0","timestamp":"2014-04-20T22:31:35Z","content_type":null,"content_length":"19145","record_id":"<urn:uuid:78e21c5b-6d81-4866-b950-e227e467fe52>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Castle Pines, CO Algebra Tutor

Find a Castle Pines, CO Algebra Tutor

...I have more than two years Software tester work experience. It was required to use Microsoft Outlook to receive all email from work. It is very easy to use and very helpful. 27 Subjects: including algebra 1, algebra 2, calculus, physics

...I have taught Oceanography, and Introduction to Geology at the college and university level. I have grown up in the Christian church. I taught a small group in high school. 24 Subjects: including algebra 2, algebra 1, calculus, geometry

...When helping prepare students for taking the SAT, I like to spend time discussing test taking strategies in addition to brushing up on math skills. I usually have students complete practice exams first to determine which topics need different explanations or more practice. In the past, I have f... 10 Subjects: including algebra 1, algebra 2, geometry, ASVAB

...I have accrued hundreds of hours of tutoring experience covering nearly all subjects through the high school level, including two years of regularly scheduled GED tutoring and the successful tutoring of four grown children, one of whom was clinically diagnosed with ADHD. All have since made succ... 24 Subjects: including algebra 2, algebra 1, reading, writing

...Attention is limited in its capacity and can be exhausted in any person. Cognitive techniques can empower learners to anticipate this deficit, alternate between task types, and resist the depletion of attentive energy. No single solution exists for every learner. 31 Subjects: including algebra 1, algebra 2, English, writing
{"url":"http://www.purplemath.com/Castle_Pines_CO_Algebra_tutors.php","timestamp":"2014-04-17T16:02:42Z","content_type":null,"content_length":"23904","record_id":"<urn:uuid:bb78eeb0-3d3f-4bbc-9385-36128623c18e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Heap Sort

November 6, 2009

A priority queue is a data structure that permits insertion of a new element and retrieval of its smallest member; we have seen priority queues in two previous exercises. Priority queues permit sorting by inserting elements in random order and retrieving them in sorted order. Heapsort uses the heap data structure to maintain a priority queue. The heap is a tree embedded in an array, with the property that the item at each index i of the array is less than the children at indices 2i and 2i+1. The key to understanding heapsort is a function we call heapify that gives the sub-array A[i .. n] the heap property if the sub-array A[i+1 .. n] already has the property. Heapify starts at the i'th element of the array and swaps each element with its smallest child, repeating the operation at that child, stopping at the end of the array or when the current element is smaller than either of its children. Then heapsort works in two phases; the first phase forms an initial heap by calling heapify on each element of the array from n/2 down to 1, then a second phase extracts the elements in order by repeatedly swapping the first element with the last, re-heaping the sub-array that excludes the last element, and recurring with the smaller sub-array that excludes the last element.

Your task is to write a function that sorts an array using the heapsort algorithm, using the conventions of the prior exercise. When you are finished, you are welcome to read or run a suggested solution, or to post your solution or discuss the exercise in the comments below.

One Response to "Heap Sort"

1. April 29, 2011 at 8:07 AM

here is my implementation in c

static void heapify(int arr[], int left, int right);
static void sort(int arr[], int left, int right);
static void siftup(int arr[], int left, int right, int key);

void heapsort(int arr[], int left, int right)
{
    /* transform input file into a heap */
    heapify(arr, (right+1)/2, right+1);
    /* sort the heap */
    sort(arr, left+1, right+1);
}

static void heapify(int arr[], int left, int right)
{
    int i;
    /* convert to heap */
    for (i = left; i > 0; i--)
        siftup(arr, i, right, arr[i-1]);
}

static void sort(int arr[], int left, int right)
{
    int i, key;
    for (i = right; i > 1; i--) {
        key = arr[i-1];
        /* move the biggest element to the rightmost position in array */
        arr[i-1] = arr[left-1];
        /* move up the next biggest element to root */
        siftup(arr, left, i-1, key);
    }
}

static void siftup(int arr[], int left, int right, int key)
{
    int i, j;
    /* move down the tree */
    for (i = left, j = 2*i; j <= right; i = j, j = 2*i) {
        /* j should point to the largest child */
        if (j < right && arr[j-1] < arr[j])
            j++;
        /* if child is smaller or equal to parent, then break out of loop */
        if (key >= arr[j-1])
            break;
        /* move up the child element */
        arr[i-1] = arr[j-1];
    }
    arr[i-1] = key;
}
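The exercise text describes the min-heap variant; as a quick cross-check against the C above, here is a sketch of the same two-phase algorithm in Python, using a max-heap so the array comes out in ascending order. The names and test values are mine, not part of the exercise.

def _sift_down(a, i, n):
    # restore the heap property for the subtree rooted at i (1-based indices),
    # assuming both children already root valid heaps
    while 2 * i <= n:
        j = 2 * i
        if j < n and a[j - 1] < a[j]:      # pick the larger child
            j += 1
        if a[i - 1] >= a[j - 1]:
            break
        a[i - 1], a[j - 1] = a[j - 1], a[i - 1]
        i = j

def heapsort(a):
    n = len(a)
    for i in range(n // 2, 0, -1):         # phase 1: build the heap
        _sift_down(a, i, n)
    for end in range(n, 1, -1):            # phase 2: extract the max repeatedly
        a[0], a[end - 1] = a[end - 1], a[0]
        _sift_down(a, 1, end - 1)

xs = [9, 1, 8, 2, 7, 3]
heapsort(xs)
print(xs)   # [1, 2, 3, 7, 8, 9]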
{"url":"http://programmingpraxis.com/2009/11/06/heap-sort/?like=1&_wpnonce=ef2a6945bd","timestamp":"2014-04-19T09:39:49Z","content_type":null,"content_length":"63305","record_id":"<urn:uuid:6d1ad819-72ff-496c-b160-4ec259bd5bbe>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
November 23rd, 2012, 04:33 PM Finsik Gojani

1) Write an application which performs encryption and decryption with the Caesar cipher. The application first asks the user whether he wants encryption or decryption, then reads the relevant text and displays the result. It should allow users to enter capital letters, small letters and numbers. (3 points)

2) Write an application that implements the extended Euclid algorithm (recursive method). (1 point)

3) Write an application that implements the permutation cipher. First the application asks the user whether he wants encryption or decryption, then is given the length of the permutation, the permutation itself, and the relevant text, and displays the result. (3 points)

4) Write an application that attacks text encrypted with the permutation cipher, knowing that the length of the permutation does not exceed 6. (3 points)

Who can help me to write these applications? Can anyone do it for me?

November 23rd, 2012, 04:45 PM

Re: Seminar

Post closed (moderated) as he is asking for help to cheat. Private message sent.
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/19585-seminar-printingthethread.html","timestamp":"2014-04-20T14:16:39Z","content_type":null,"content_length":"4517","record_id":"<urn:uuid:8ed47856-71b3-4b81-913a-02a68b96f3d8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Rockwall Math Tutor

Find a Rockwall Math Tutor

...I am a life-long learner who loves to share and inspire children to do the same. I am certified in Texas EC-4, 4th-8th, ESL, and in bilingual education. I have taught Pre-K 4 years, 4th grade 2 years, 7th and 8th grade ESL 3 years. In addition, I have co-taught in several secondary core content ... 8 Subjects: including algebra 1, prealgebra, Spanish, reading

...He also lived in Cape Town, South Africa for approximately one year, until returning to the States to continue with his studies. Drew specializes in tutoring math and science. He is, however, equally as helpful in language studies. 37 Subjects: including calculus, SAT math, chemistry, algebra 2

...Thanks for your time. On top of my BS in Mathematics from UT Dallas, I was fortunate enough to substitute teach Geometry and Pre-AP Geometry for two months at Wylie High School. I really enjoyed teaching the subject and I enjoyed helping students before and after school. Please consider me for your student's Geometry needs. 4 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...From 2006 to 2009 I took additional courses in child development and special education. In the school year of 2007/2008 I served for Cedar Hill ISD as a level III educational aide in an elementary school, teaching a class of emotionally disabled children grades K through 5. The subjects I taught were Math, Biology, History, Art, PE, and English. 33 Subjects: including algebra 2, English, Spanish, geometry

...Nomenclature by IUPAC rules, interpretation of spectral data (NMR, IR, mass spectrometry), all types of isomerism, chirality, functional group chemistry, reaction mechanisms, cycloaddition reactions, organometallic chemistry, carbanion reactions, and substitution/elimination reactions are particu... 17 Subjects: including algebra 1, algebra 2, chemistry, geometry
{"url":"http://www.purplemath.com/rockwall_tx_math_tutors.php","timestamp":"2014-04-18T11:12:54Z","content_type":null,"content_length":"23626","record_id":"<urn:uuid:3066c7ef-a909-4760-9fd0-4f93e51299f6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
How much money (U.S.) does it cost to wash and dry one load of laundry?

I was given an assignment from my boss to evaluate how much money (U.S.) it costs to wash and dry one load of laundry. I fully understand that there are a lot of variables: the size of the load, the heat, the economics of the washer and dryer, but I would just like a rough estimate. I would greatly appreciate it if you could give me a rough estimate. Thanks very much! -The intern

Here's how I would go about it.. you may need to do a little research into prices in your area. Total up the following:

Wash:
- Water: Gallons of water (washer capacity) x cost/gallon (look on your water bill)
- Electric: Energy consumption per load (in kWh) x cost per kWh (look on your electric bill)
- Detergent: Divide the price you paid by the number of loads the bottle can give you
- Fabric Softener: Divide the price you paid by the number of loads the bottle can give you

Dry:
- Electric: Energy consumption per load (in kWh) x cost per kWh (look on your electric bill)
- Dryer Sheet: Rough cost of a dryer sheet (price paid / # sheets in a box)

If you wanted to get really fancy, you could add in the cost of depreciation of the washer and dryer (I would only do this if you worked in a field where total cost of ownership of the appliance was important or your boss owned laundromats and the washer/dryer were capital expenses... or for a finance class). Also for extra credit, once you have your answer, compare it to the cost of a load at the nearest laundromat. Don't forget to factor in the cost of gas to get there. You have me curious now, so I am going to see if I can figure out how much it costs me and will come back and edit my answer with a rough cost. My numbers may be off because I have the high efficiency/front load type of washer/dryer.. so my energy consumption for the washer and dryer is much less than on a standard washer/dryer, and it uses less water. These numbers could easily be 2x on a standard washer/dryer.

Wash (one load):
- Water: $0.00 (I pay $0.0006 per gallon, so the cost was not material)
- Electric: $0.21 (0.4 kWh/load at $0.5227/kWh)
- Detergent: $0.14 (Tide bought in bulk)
- Fabric Softener: $0.05 (Downy bought in bulk)
Total: $0.40

Dry (one load):
- Electric: $0.16 (0.3 kWh/load at $0.5227/kWh)
- Dryer Sheet: $0.04
Total: $0.20

The depreciation worked out to be $0.20-$0.30 per load per appliance, but it depends on your assumptions and how much you paid. Hope this helps! Some sites that have facts and figures to help you out: http://www.solarwindcanada.ca/misc/energ…

When I do my laundry at a Laundromat it costs me $2.00 to wash 1 large load and $1.00 to dry that load.
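The answer's arithmetic is simple enough to wrap in a small calculator. Here is a minimal Python sketch using the answer's own example prices (the per-kWh rate quoted there is unusually high; substitute values from your own bills):

ELEC_PER_KWH = 0.5227     # $/kWh, example rate from the answer above

wash = {
    "water":     0.00,                 # treated as negligible in the answer
    "electric":  0.4 * ELEC_PER_KWH,   # 0.4 kWh per wash load
    "detergent": 0.14,
    "softener":  0.05,
}
dry = {
    "electric":    0.3 * ELEC_PER_KWH, # 0.3 kWh per dryer load
    "dryer_sheet": 0.04,
}

print(f"wash:  ${sum(wash.values()):.2f}")                       # ~$0.40
print(f"dry:   ${sum(dry.values()):.2f}")                        # ~$0.20
print(f"total: ${sum(wash.values()) + sum(dry.values()):.2f}")   # ~$0.60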
{"url":"http://laundry-wash.blogspot.com/2010/01/how-much-money-us-does-it-cost-to-wash.html","timestamp":"2014-04-16T18:56:35Z","content_type":null,"content_length":"40951","record_id":"<urn:uuid:d4e91696-b610-4332-8345-b150d6c8a08c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
binary search tree (data structure)

Definition: A binary tree where every node's left subtree has keys less than the node's key, and every right subtree has keys greater than the node's key.

Generalization (I am a kind of ...): binary tree, search tree.

Specialization (... is a kind of me.): AVL tree, splay tree, threaded tree, randomized binary search tree, discrete interval encoding tree.

Aggregate parent (I am a part of or used in ...): treesort (1).

See also relaxed balance, ternary search tree, move-to-root heuristic, jump list.

Note: A binary search tree is almost always implemented with pointers, but may have a variety of constraints on how it is composed.

Author: PEB

Implementations: Ben Pfaff's insert, delete, search, copy, etc. (literate C); Maksim Goleta's Collections (C#) implementing stacks, queues, linked lists, binary search trees, AVL trees, and dictionaries. insert (C), insert (C), search (C). Algorithms and Data Structures' explanation with links to add, delete, search, and output values in order (Java and C++). Insert, search, delete, and various traversals (Modula-2) (use must be acknowledged).

Entry modified 12 December 2011.

Cite this as: Paul E. Black, "binary search tree", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 12 December 2011. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/binarySearchTree.html
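Since this is a reference entry, a minimal Python sketch of the definition may be useful: every key in a node's left subtree is smaller, every key in its right subtree is larger (duplicates are simply ignored here; that choice is mine, not part of the definition).

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # smaller keys go left, larger keys go right
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # at each node the BST property tells us which subtree to follow
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [5, 2, 8, 1, 3]:
    root = insert(root, k)
print(search(root, 3), search(root, 7))   # True False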
{"url":"http://www.darkridge.com/~jpr5/mirror/dads/HTML/binarySearchTree.html","timestamp":"2014-04-19T09:24:10Z","content_type":null,"content_length":"4806","record_id":"<urn:uuid:23647eb2-3777-4484-b43b-76d650a39cb5>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Broad range of warming by 2050?

This CPDN paper doesn't seem to have attracted much comment, perhaps because the results aren't actually very far off what the IPCC already said (just a touch higher). But Chris (and Carrick) commented on it down here so I think it is worth a post. It's the results from a large ensemble of transient simulations of the mid 20-21st centuries, analysed to produce a "likely" range of warming by 2050. Here is the main result (figure not reproduced here), where the vertical dotted lines demarcate their "likely" range, and the horizontal line is the threshold for goodness of fit (such that only the results below this line actually contribute to the final answer). The grey triangles represent models that are thrown out due to large radiative imbalance.

I am puzzled by a few aspects of this research. Firstly, on a somewhat philosophical point, I don't have much of a feel for what "likelihood profiling" is or how/why/if it works, and that's even after having obtained the book that they cite on the method. The authors are quite emphatic about not adopting a Bayesian interpretation of probability as a degree of belief, so the results are presented as a confidence interval (remember, this is not the same thing as a credible interval). Therefore, I don't really think the comparison with the IPCC "likely" range is meaningful, since the latter is surely intended as a Bayesian credible interval. Whatever this method does, it certainly does not generate an interval that anyone can credibly believe in!

Secondly, on a more practical point, it seems a bit fishy to use the range of results achieved by 2050, relative to 1961-90, without accounting for the fact that almost all of their models have already over-estimated the warming by 2010, many by quite a large margin (albeit an acceptable level according to their statistical test of model performance). The point is, given that we currently enjoy 0.5C of warming relative to the baseline, then reaching 3C by 2050 implies an additional warming of 2.5C over the next 40 years. However, as far as I can see none of the models in their sample warms by this much. Certainly the two highest values in their sample - which are the only ones that lie outside the IPCC range, and which can be clearly identified in both the panels of the figure above - were already far too warm by 2010, by about 0.3-0.4C. So although they present a warming of 3C by 2050 as the upper bound of their "likely" range, none of their models actually warmed over the next 40 years by as much as the real world would have to do to reach this level.

Finally, on a fundamental point about the viability of the method, the authors clearly state (in the SI) that they "assume that our sample of ensemble members is sufficient to represent a continuum (i.e. infinite number)". They also use a Gaussian statistical model of natural variability in their statistical method (which is entirely standard and non-controversial, I should point out - if anything, optimistic in its lack of long tails). Their "likely" range is defined as the extremal values from their ensemble of acceptable models. This seems to imply that as the ensemble size grows, the range will also grow without limit. (Most people would of course use a quantile range, and not have this problem.) So I don't understand how this method can work at all in this sort of application where there is a formally unbounded (albeit probabilistically small) component of internal variability.
In a mere 10^23 samples or so, their bounds would have been as wide as ±10 sigma of the natural variability alone - which, based on the left-hand panel of the fig, would have been rather wider than what they actually found.

18 comments:

D'oh, it would help if I read what the grey triangles were. 10^23 is a rather large increase in the sample size. But, improve the quality control from ~1.15 to ~0.9 and the range changes to about 76% of the range they decided upon. (But of course they wouldn't decide on the result they wanted then set the quality control to achieve that result.)

On the contrary, 10^23 is actually very very small indeed compared to an infinite ensemble :-) Even ignoring internal variability, I don't know how they can make any statements about having (nearly) found the extrema of the forced response in their region of parameter space. With a Monte Carlo approach, this is obvious and easy (even 100 samples probably covers the 5-95% range of the underlying continuum distribution). But with no sampling distribution, only a multivariate space, there could be a very small region with very high response and their method simply won't find it.

Interesting to see you criticising them for their sample sizes being too small. :o) Not quite sure what you mean by "no sampling distribution". Which models they get back is rather random AFAICS. Compared to telling each model to generate random parameters in the ranges set, the distribution they get back will definitely be more widely spaced. Is that undesirable compared to the clumpier distribution you would get from generating random parameters in the ranges set? How would you go about getting a good distribution of parameter combinations?
I think that is likely to show curves with lower uncertainty for higher quality control. Though whether that really points to lower uncertainty in the way I am tending to think may well be rather dubious. FWIW is there an open version of the paper. From the description their figure of merit was not the global temperature anomaly but depended on the geographical distribution of the warming which would be a step forward Eli, I think as a co-author I have the right to digitally share it, so here it is: https://docs.google.com/open?id=0B9HdfZpD8H7vX2VIQ2hUdTZSN0NhSm5BWWVZRktfUQ (If I'm violating some policy please let me know and I'll remove the link.) I'm not going to comment on the science since it's been years since I was involved on CPDN, my role was that of computer geek, and I forget everything by now anyway! ;-) But I think it would be great if other groups explore these huge datasets from CPDN, I know there's a nice portal one of my colleagues at Oxford set up (Milo Thurston) to this end. There's much more data than Oxford boffins & postdocs to handle it all anyway IMHO. Then perhaps some of the other things James & crandles bring up could probably be explored? There probably wasn't enough room in Nature to explore things more, considering the author list took up half the space! ;-) I don't have a Nature subscription any more unfortunately - did the Isaac Held "forward" or "commentary" or whatever say anything useful for you? Carl, given your name and address on the paper, I was wondering if you had been tempted back... Isaac Held's blurb was fairly anodyne...nothing negative, but "it is nevertheless important to think of the results as a work in progress". I suppose it's the sort of freebie you get in return for writing a positive review :-) Chris, what they have actually generated *is* a 66% confidence interval, which is basically the frequentist analogue of a 17-83% probability range. Given their method, it does not make sense (AIUI) to consider percentiles of their "acceptable" models, the theory is based on using the extreme solutions. My questions are not so much whether there would be a better way of performing their particular calculation, but firstly whether their method gives an accurate solution to their problem as stated, and secondly whether their interpretation of the problem is actually a useful Sorry, I could do with an expanation of that reply (particularly the 'it does not make sense' part). I do accept that questioning whether their method gives an accurate answer and whether their interpretation is a useful one should come first. But to a certain extent, whether their method gives an accurate answer surely might involve exploring whether it is better than alternatives? I am thinking, for simplicity, that for an example problem you can create 1000 good models which have answers in the range 1.5 to 4.5. To generate those 1000 models you can hand out 2000 models and the bad models have answers in the range 5 to 10. Alternately you could look around more by allowing more extreme parameters. Suppose this generates 4000 models, of which 1000 are good as before and 3000 models with answers in the 5 to 20 range. (Consequently it is now clear that we didn't need to look for more extreme parameters. If only real problems were this simple.) Question then is whether 1.5-4.5 range is a 50% confidence interval, a 25% confidence interval or (possibly?)a 100% confidence interval? 
On this simple problem, you might be able to justify all three with the difference depending on how exactly you define your method. This simply shows that a confidence interval is highly dependant on method and is therefore not a reliable number nor the credible interval that we want. I said I think they should have calculated a 5-95% range. I should accept that this range isn't a confidence interval per *their* method. Neither is it credible interval but perhaps just a confidence interval for what I see as a potentially better method. Obviously it would need much more consideration before it could be considered to be a better method. There is likely to be some reason why it isn't a better method or it would have been used. However, without discussing it, I will probably never know why it isn't a better method. With this information, would you end up with a credible interval of more than 75% for 1.5 to 4.5 even if the answer and prior was for number of years before arctic sea ice practically disappears for 1 day a year? (assume you believe these models are the best around.) (BTY did you think these numbers were for climate sensitivity?) Hopefully, with that example the comments on the distribution being widely spaced being sensible for setting up an emulator seem more relevant. Though good for setting up an emulator, the distribution is almost certainly too widely spaced for pracical use. The way I am thinking, when this is corrected for using an emulator, I think a sensible 66% likely range is likely to turn out to be a much narrower range than they have shown in this paper. (They may well be aware of this but in order to have more impact they want to show a wide range now (shock, horror, worse than we thought) so a later paper has different dramatic effect (wow, startling improvement from these techniques). But of course, in reality, it is just an unfortunate situation that space constraints stop them from including a discussion of likely differences between their confidence interval and a credible interval. This sort of bias couldn't be why some people turn septic could it?) Chris, using their method, the way to get a range different to the 66% interval, would be to change the severity of their constraint (horizontal line on fig 2a), and see how this different set of acceptable models propagates. I'm sure they tested this and it's obvious from the diagram that the result would be that the the 95% CI is not so different from the 66% CI (the upper bound would only rise to to 3.2C), which is probably why they published the latter :-) >"Chris, using their method, the way to get a range different to the 66% interval, would be to change the severity of their constraint" Yes, I can see that is what you do according to their method. But what I am wondering, through my example above, whether this is reaching a 'through the looking glass' level of a confidence level means what the authors intend it to mean. If it is so crazily easy to manipulate yet irrefutable as the answer is true by definition of the method in that way, is it time to begin to question whether this method of theirs is scientific or not? Chris, it's a general weakness of many statistical methods that the results are only valid if the method was defined prior to the data being collected. However, it might be harsh to pick on this particular occasion to complain. That sounds reasonable, thanks James. 
received by email from Nic Lewis who was having trouble getting commenting to work - reply to follow shortly: "James, I'm not sure why their "likely" range would grow with the number of ensemble members. As the range would include only the 66% of ensemble members that passed goodness-of-fit test, I would expect it to remain largely unchanged with ensemble size, assuming a close link between goodness-of-fit and forecast warming. Isn't this a similar position to randomly sampling a (say) Gaussian distribution of fixed variance, where the more samples are taken the more extreme values will be reached, but the 66% central CI will be little affected? Even if the link between goodness-of-fit and forecast warming is weak, I would expect only random fluctuations in the "likely" range. However, I don't think that the study's "likely" range of 2050 warming relates closely to how likely warming in that range actually is. The study didn't explore anything like full ranges of key climate parameters: equilibrium climate sensitivities below 2 K were not included in the ensemble, only a limited range of ocean heat uptake levels appears to have been considered, and it is unclear to me to what extent the possibility of aerosol forcing being small was represented. So it looks to me as if the lower bound of the study's "likely" range is probably significantly biased Also, can you clarify why you don't expect Bayesian credible intervals (of the one sided variety) to be the same as the corresponding frequentist confidence interval? Is that because you think Bayesian credible intervals are not valid as objective measures of probability? Most statisticians involved with these issues seem to view checking the matching of one sided credible intervals against frequentist CIs as an important way of testing the validity of Bayesian inference, and in particular of the ability of candidate noninformative priors to generate objective posterior probability densities, a position that I agree with." considering the first part of your comment, let's write the response of a model over the hindcast and forecast periods as something like (A+e,B+d) where A and B are the forced response over the two intervals (which depends on the parameter choices) and e and d are gaussian deviates due to internal variability (which depends on random initial conditions). Now, it doesn't matter how you select/constrain over the hindcast interval, the range of forecast warming still has no supremum because even if B is bounded, d is unbounded due to being a gaussian. Gaussian here is a very natural (perhaps optimistic) choice and I'm certainly not aware of any strict limit on the magnitude of internal variability. Averaging over a finite size initial condition ensemble (as they did) only makes the distribution narower in variance, it doesn't bound it strictly. [Strictly speaking, there is no absolute guarantee even that B is bounded, and it certainly doesn't seem from the diagram that they have sampled densely around the high end of their range - I only see two samples in their acceptable ensemble that are above 2.8C or so.] I agree with you about the lower bound, it seems particularly unreasonable for them to criticise the lower end of the IPCC range, especially as the rather small IPCC ensemble includes models with a lower forecast than their full range, which also satisfy their statistical criterion. 
There's a heap of evidence that single model ensembles simply don't generate as diverse a range of behaviour as structurally different models can do. I don't expect Bayesian intervals to match frequentist ones because they address a different problem and require different inputs. Of course there are some quite common situations where they do coincide numerically, but I don't see why this should be one of them. Furthermore, there seem to be a bunch of ways of approaching this frequentist likelihood profiling thing, which do not agree with each other, so they can't possibly all agree with a specific Bayesian analysis.
{"url":"http://julesandjames.blogspot.com/2012/04/broad-range-of-warming-by-2050.html?showComment=1333545416448","timestamp":"2014-04-19T23:12:48Z","content_type":null,"content_length":"164266","record_id":"<urn:uuid:5c0e914a-21dc-433a-977e-f747bd46e763>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Every time a Base stat increases by one, how much does the max stat increase by?

I realize that a Pokemon's base stats won't just magically get higher, but I want to know what the ratio of base stats to max stats is. For example, base 100 Attack means the Pokemon will have a max Attack stat of 328, and a base Attack of 120 will have a max Attack stat of 372. So the base stat increased by 20, and the max stat increased by 44. So every time the base stat goes up by 20, the max stat goes up by 44. What I want to know is how much the max stat increases when the base stat goes up by 1 (100 to 101, for example).

You should be able to work it out by getting the base stat and the maximum stat, then dividing the maximum stat by the base stat to tell you how much the stat increase is. Well, I think this should do it... You take the maximum HP, in this instance 344, and then divide it by 70, which is... 4.9 (rounded). So if you take this as an average, Scizor's maximum HP goes up about 5 for every point of base HP. You can do this for any Pokemon you need to work it out for. I think this is right... If not, I am sorry.

Scizor's HP stat: Base HP = 70, Maximum HP = 344 (level-100 HP range: 250-344).
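For an exact answer, the standard stat formulas (Gen III onward) settle it. The sketch below assumes level 100, 31 IVs, 252 EVs, and a beneficial (1.1x) nature, which are the same assumptions behind the 328 and 372 figures in the question. On those assumptions a non-HP stat gains 2 or 3 points (2.2 on average) per extra base point, and HP gains exactly 2 per base point.

import math

def max_stat(base, hp=False, level=100, iv=31, ev=252, nature=1.1):
    # core term shared by the HP and non-HP formulas
    core = (2 * base + iv + ev // 4) * level // 100
    if hp:
        return core + level + 10
    return math.floor((core + 5) * nature)

print(max_stat(100), max_stat(120))    # 328 372, matching the question
print(max_stat(101) - max_stat(100))   # 3 (steps alternate 2/3, averaging 2.2)
print(max_stat(70, hp=True))           # 344, Scizor's max HP from the answer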
{"url":"http://pokemondb.net/pokebase/97742/every-time-base-stat-increases-much-does-the-max-stat-increase?show=97779","timestamp":"2014-04-17T09:17:11Z","content_type":null,"content_length":"38045","record_id":"<urn:uuid:9e7d0d52-7af5-4f89-99e4-df3f0743e6c5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
I Am Interested In Finding Out The Velocity Obtained ... | Chegg.com I am interested in finding out the velocity obtained from a 50 gram Aluminum projectile approximately 3/4"x3/4"x 2" long with a rounded nose. This projectile would be discharged from a similar square tube approminately 15" long. Source creating the power would be compressed air between 10 psi and 150 psi. The limiting factor on this would be the smallest round tube approximately coming off the air compressor with an i.d. of 1/2" approximately 3' long. The air compressor holding tank is approximately 127 cubic inches and it fully discharges upon release of the ball valve. The holding tank does not start refilling until the ball valve is the returned to closed. I would prefer this formula written in single variable calculus vs. multi variable calculus if possible. Another thing of interest would be if the square tube was changed from 15" to 48" If one was able to create a formula for this it would be much appreciated.
{"url":"http://www.chegg.com/homework-help/questions-and-answers/interested-finding-velocity-obtained-50-gram-aluminum-projectile-approximately-3-4-x3-4-x--q812324","timestamp":"2014-04-18T02:14:40Z","content_type":null,"content_length":"19699","record_id":"<urn:uuid:5e0b8bb9-5de5-4bcf-a049-b8b9026c13cd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00506-ip-10-147-4-33.ec2.internal.warc.gz"}